The Energy on the Grid

Two posts ago, I introduced you to the national energy grid. Last post I described how The Grid is organized. This post will describe some of the major energy flows along The Grid.

Figure 1 shows the major sources of electricity generated in the USA in 2017. Almost 1/3 was generated by burning natural gas, about 1/3 by burning coal, and about 1/3 came from other energy sources. Nuclear was the largest of the other sources, accounting for about 20% of total demand. Nuclear, hydro, wind, solar, and geothermal are the sources that do not emit carbon dioxide by burning fuel, so they are desirable from climate change and pollution perspectives. Together, they accounted for about 36% of total demand. Wind and solar are intermittent sources, which has important implications for grid reliability. Together they account for about 7% of total demand.

Figure 1. Data source: U.S. Energy Information Administration 2018.






Figure 2. Historical and Projected Changes in Electricity Demand. Source: North American Electric Reliability Corporation 2017.

I described NERC, the North American Electric Reliability Corporation, in the last post. To ensure grid reliability, NERC attempts to estimate changes in demand The Grid will have to meet. Figure 2 combines historical and projected changes in demand over rolling 10-year periods from 1990 through 2027. It shows the change in demand during summer (light blue) and winter (dark blue). (In the South, demand for electricity is highest in summer, but in the North, it is highest during winter.) The columns show the change in GW (gigawatts, billions of watts) and the blue lines show the percentage compound annual growth rate.

Well, interpreting this complex graph is a little challenging, so let's unpack it. First of all, it is a graph of change, not of overall demand. Because every bar is positive, demand for electricity has grown over every 10-year period, and it is expected to continue to do so. Second, the chart shows average annual change during each 10-year period, not cumulative change over the whole period. Third, the last period for which the data are entirely historical is 2007-2016. Starting with 2008-2017, some of the data are historical and some are projection; by 2017-2026, all of the data are projection. Fourth, the rate of demand growth accelerated in the 10-year periods starting around 2004. But fifth, the increase in demand is projected to slow in the future, both in the raw number of gigawatts and in the compound annual growth rate.
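Since the chart reports both a change in gigawatts and a compound annual growth rate, it helps to see how the two relate. The sketch below uses illustrative numbers of my own, not NERC's actual figures:

```python
# Compound annual growth rate (CAGR) over a multi-year window:
#   cagr = (end / start) ** (1 / years) - 1
def cagr(start_gw: float, end_gw: float, years: int = 10) -> float:
    """Average annual growth rate that turns start_gw into end_gw over `years`."""
    return (end_gw / start_gw) ** (1 / years) - 1

# Hypothetical example: peak demand grows from 700 GW to 760 GW over a decade.
rate = cagr(700, 760)
change_gw = 760 - 700      # the kind of quantity the columns in Figure 2 show
print(f"{change_gw} GW added; CAGR {rate:.2%} per year")
```

Note that the CAGR is an average over the window, so it smooths out year-to-year swings within the decade.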

Bottom line here: The Grid is projected to have to satisfy increased demand for electricity, although the rate of growth is projected to slow.

Figure 3 shows historical additions and retirements in generating capacity supplied to The Grid, by fuel. You can see that for each type of generation, in most years, some was added and some was retired. Over the span of the chart, the net result has been a decrease in coal and nuclear generation, with an increase in natural gas, wind, and solar. Figure 4 shows similar data projected into the future. The projection shows a continuation of the trend: net retirement of coal and nuclear generating capacity, net addition of natural gas, wind, and solar. NERC projects that more natural gas generating capacity will be added than any other kind.

Figure 3. Historical Changes in Generating Capacity. Source: North American Electric Reliability Corporation 2017.

Figure 4. Projected Future Changes in Generating Capacity. Source: North American Electric Reliability Corporation 2017.

Figure 5. Source: Department of Energy 2017.

Figure 5 shows some of this data in a form that is a bit less wonkish. It shows the location, size, and type of generating station retirements on The Grid from 2002-2016. Triangles represent power plants owned by independent power generators, while circles represent power plants owned by vertically integrated electric utilities. Gray icons represent coal-burning plants, blue icons represent natural gas-burning plants, and green icons represent nuclear plants. The size of the icon represents the plant's generating capacity. Look at the concentration of gray icons in the eastern part of the country! The retirements all occurred in a 14-year period.

Many of these plants were old and inefficient, and many of them were large spewers of GHGs and other pollutants. So, from an environmental perspective, their retirement may be good news. It doesn’t take a rocket scientist, however, to see that the retirement of so many plants represents a significant transition on The Grid.

Why does this matter? Because we are looking at reliability here, not climate change. We haven’t quite developed enough information to understand the implications yet, but we will by the end of the series of posts. At this point, we can simply say that coal-based and nuclear generating stations have proven very reliable, and they fit into The Grid nicely. NERC has concerns about the reliability of natural gas, wind, and solar generating stations for the supply of bulk electricity.

Now, what about geography?

Most electricity is generated within the NERC region where it is consumed. The flow between NERC regions is comparatively small, but because The Grid has to be so finely balanced, it is important. It flows in sometimes surprising directions. The direction is determined by many factors, including the availability of transmission lines with unused capacity, historical patterns of energy consumption, and the cost of the electricity. Inexpensive electricity generated at a distance is sometimes substituted for more expensive electricity generated locally.

The flow of energy over The Grid is shown in the map at right. The map is from 2010. The regions shown in it differ slightly from current NERC regions, and they use different names. However, it was the best representation I could find. On this map, "Midwest" = the MISO Region, "Central" = the SPP Region, "TVA" = the SERC-N Region, and "Mid-Atlantic" is roughly the PJM Interconnection Region. Let's look a bit more closely at the map.

A region in Northern Illinois served by Commonwealth Edison belongs to the Mid-Atlantic Region, but is physically separated from it. The largest power flow in the nation occurs from this region to the rest of the Mid-Atlantic Region. This represents power that is generated by highly efficient coal and nuclear generating stations operated by Commonwealth Edison. They can’t be cycled on and off easily, so during periods of slack demand (at night) they export large amounts of power at low prices.

The second largest flow occurs from the Southwest into California. As a single state, California imports more electricity than any other.

The Midwest Region is a net exporter of power. It receives power from Manitoba and Commonwealth Edison, but it distributes even more to the TVA and Central Regions. In doing this, it participates in a counterclockwise flow from Manitoba, through the Midwest and the South, and eventually to the Mid-Atlantic Region.

The Central Region is a net importer of electricity. It receives inflows from the Midwest, keeps some, and passes the rest on to Texas and the Gulf.

The amount of energy available to any region, therefore, depends mostly on the generating capacity within the region, but also on the amount it receives from other regions. The transmission of energy between regions depends not only on the need for it, but also on the availability of transmission capacity.


Department of Energy. 2017. Staff Report to the Secretary on Electricity Markets and Reliability. Downloaded 5/19/2018 from

North American Electric Reliability Corporation. 2017. 2017 Long-Term Reliability Assessment. Downloaded 4/27/2018 from

U.S. Energy Information Administration. "Electricity tends to flow south in North America." Today in Energy.

U.S. Energy Information Administration. "U.S. Electricity Generation by Source, Amount, and Share of Total in 2017." Frequently Asked Questions. Downloaded 4/28/2018 from

NERC, The North American Electric Reliability Corporation

Northeast Blackout, 2003. Photo by Brendan Loy. Source: Flickr Creative Commons.

In the last post, I gave a general description of the national electric grid. In this post, I will describe how The Grid is organized.

As we have seen in dramatic fashion several times, problems on The Grid can bring down wide areas of the whole network, plunging them into darkness and bringing life as we know it to an immediate halt.


The first of these blackouts was the famous 1965 blackout in New York. The areas affected included New Jersey, New York, Connecticut, Rhode Island, Massachusetts, New Hampshire, Vermont, and the Province of Ontario. Similar blackouts occurred in 1977 and 2003 (the largest of all). In response, the electric power industry formed an organization to study the problem and develop methods to prevent future occurrences. Over time, it developed into NERC, the North American Electric Reliability Corporation.

NERC does not operate The Grid. Rather, it is a nonprofit membership organization tasked with ensuring the long-term reliability of The Grid. The companies that do operate The Grid are its members, and NERC sets the standards and operating practices they must follow in order to ensure the reliability of The Grid. NERC covers the contiguous 48 states, part of Alaska, Canada except for the far north, and a small portion of Baja California. It divides its territory into 8 regional entities.

Until recently, membership in NERC or in one of the regional corporations was not mandatory. However, after the passage of the Energy Policy Act of 2005, NERC was designated as the only Electric Reliability Organization for the United States, and all power suppliers and distributors who participate in the bulk power network were required to join.

NERC develops national standards, the Federal Energy Regulatory Commission adopts them, and they are then handed back to NERC for enforcement. They are enforceable with fines up to $1 million per day.

Figure 2. Source: North American Electric Reliability Corporation 2017.

I noted above that NERC is divided into 8 regional entities. They administer The Grid in their territory. NERC also divides itself into 21 assessment areas for reporting purposes. This series of posts is headed toward reporting NERC's 2017 Long-Term Reliability Assessment, so it is the assessment areas we are most interested in. In Figure 2, I superimposed two NERC maps to show how the boundaries of the assessment areas align with state boundaries. In some cases they align well; in others, like Missouri, they don't. In addition, some of the assessment areas have boundaries contiguous with NERC regional entities, others represent subdivisions of regional entities, and still others follow the boundaries of Regional Transmission Organizations (discussed in the previous post).

This is all very confusing, and one wonders what is going on. Don't think of The Grid as something that always covered all of the country. It didn't. The idea of transmitting electricity to consumers from remote generating stations was part of The Grid from very early on. However, electrification came to cities first. Rural areas were generally thought to be difficult to electrify because the large distances required high infrastructure costs that would be borne by relatively few people. This was even more the case in difficult terrain such as the Ozarks and the Appalachians. These regions were among the last to electrify.

Thus, The Grid is irregular. New electrical service areas expanded from existing service as a patchwork that followed routes of easiest access, not according to some overall plan of simplicity and symmetry. The flow of electricity through The Grid seems a bit chaotic and the boundaries of the NERC regions seem bizarre and arbitrary, but they have to do with how energy flowed into new regions from pre-existing service areas before The Grid was interconnected.

Thus, the boundaries used to make assessments are not always those used to operate The Grid, and over time, both have changed. It’s enough to drive a person crazy! And yet The Grid is amazingly reliable, and NERC is tasked with keeping it that way.

The next post will focus on major energy flows along The Grid.


Anderson, Pamela, and Donald Kari. 2010. Is your organization prepared for compliance with NERC reliability standards? Perkins Coie. Viewed online 4/27/2018 at

Loy, Brendan. 2003. The Empire State Building in the Dark During the Great Northeast Blackout of 2003, IMG 6514. Source: Flickr Creative Commons.

North American Electric Reliability Corporation. About NERC.

North American Electric Reliability Corporation. 2017. 2017 Long-Term Reliability Assessment. Downloaded 4/27/2018 from

Nersesian, Roy. 2007. Energy for the 21st Century: A Comprehensive Guide to Conventional and Alternative Sources. Armonk, NY: M.E. Sharpe.

Wikipedia. Adams Power Plant Transformer House.

Wikipedia. Tennessee Valley Authority.

The Grid – Update 2018

Electricity Grid Schematic by MBizon. Downloaded from

There is something that we use all day every day. We couldn’t live as we do without it, yet, for the most part, we never look at it. We just assume it will always be there, until it isn’t, and then we are most unhappy! What is it?

The electrical grid, of course. In 2014 I did a series of posts describing The Grid, how it worked, and the reliability issues it was facing. This post begins a series updating that work. The most recent NERC Long-Term Reliability Assessment was published in 2017. By the time this series of posts is finished, readers will understand what it means.

The Grid is the interconnected network that delivers electricity from suppliers to consumers in the Continental United States, most of Canada, and a small portion of Baja California.

Figure 1 shows a schematic drawing of The Grid. Electricity is generated in thousands of generating stations across the country. It is gathered together over high voltage lines and stepped up to ultra-high voltage, which can be transmitted over long distances more efficiently. It is then carried over ultra-high voltage transmission lines until it nears its destination. Then, through a series of steps, it is reduced to low voltage and distributed to millions of end users.
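Why does stepping up the voltage help? For a fixed amount of power delivered, the current falls as the voltage rises, and the resistive loss in the line scales with the square of the current. A rough sketch, with an assumed line resistance rather than real line data:

```python
# For fixed delivered power P = V * I, the current is I = P / V and the
# resistive line loss is I**2 * R: raising the voltage 10x cuts the loss 100x.
def line_loss_watts(power_w: float, volts: float, resistance_ohms: float) -> float:
    current = power_w / volts            # amps needed to deliver power_w at volts
    return current ** 2 * resistance_ohms

P = 500e6    # 500 MW delivered
R = 1.0      # assumed total line resistance in ohms (illustrative only)

for kv in (69, 345, 765):                # common transmission voltage classes
    loss = line_loss_watts(P, kv * 1000, R)
    print(f"{kv:>4} kV: {loss / P:.2%} of delivered power lost")
```

This is a simplification (it ignores reactance and treats the loss as small relative to the power delivered), but it captures why the bulk power system runs at hundreds of kilovolts.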


Figure 2. Map of High Voltage Transmission Lines. Source: U.S. Energy Information Administration 2012.

All of this, from the door of the generating station to the door of the customer, is properly part of The Grid. However, this series of posts is going to focus on the high voltage and ultra-high voltage transmission system, aka the bulk power system. Figure 2 shows the network of ultra-high voltage transmission lines in the United States, color coded by voltage (kV = kilovolt, DC = direct current). The Grid is densest in the eastern part of the country.

There is also an area of the country where there are very few transmission lines, especially running east-west. The Grid is organized into 3 large interconnections. Within each interconnection, all of the power operates at the same frequency and is precisely synchronized. Power crossing the boundary between two interconnections has to be converted. The Eastern Interconnection includes everything east of the Rocky Mountains, while the Western Interconnection includes the Rocky Mountains and everything to their west. The Texas Interconnection includes most of the State of Texas. There are surprisingly few connections between the Eastern and Western Interconnections.

Missouri is part of the Eastern Interconnection. Thus, Missouri is part of a big electrical network that includes everything from the Rocky Mountains to the East Coast, from Texas and Florida to the northern edge of Manitoba and Saskatchewan, all of which is coordinated and precisely synchronized.

Now, in the description above, I noted that The Grid includes generating stations and transmission systems. In some locations, electric utilities own both the generating stations and transmission systems. That is the case with Ameren and Kansas City Power & Light, Missouri's 2 largest electric utilities. Sometimes, however, they are separate. In these cases, a generating station may be owned by one company, while the transmission network is owned by a separate company. If the transmission company operates only in one state, it tends to be called an Independent System Operator (ISO). If it operates across multiple states, it tends to be called a Regional Transmission Organization (RTO). For our purposes here, we may view ISOs and RTOs as roughly similar.

We’ll investigate this a little more when we look a little deeper into how The Grid is organized in the next post.


MBizon. Electricity Grid Schematic English. Downloaded Nov. 2014 from

North American Electric Reliability Corporation. 2017. 2017 Long-Term Reliability Assessment. Downloaded 4/27/2018 from

U.S. Energy Information Administration. 2012. “Electric Transmission Crosses North American Borders.” In Canada Week: Integrated Electric Grid Improves Reliability for United States, Canada. Downloaded 4/27/2018 from

Energy-Related Emissions Grew in 2017

Figure 1. Source: International Energy Agency, 2018.

Global energy-related carbon dioxide emissions grew by 1.4% in 2017, reaching a historic high of 32.5 billion metric tons, according to a recent report by the International Energy Agency. The increase occurred because of a 2.1% increase in the global amount of energy consumed. Figure 1 shows the trend in energy-related carbon dioxide emissions.




Figure 2. Source: International Energy Agency, 2018.

More than 40% of the increase in energy consumption was driven by China and India. (See Figure 2.) The result was an almost 150 million metric ton increase in China's carbon dioxide emissions from energy. India's emissions are not broken out separately, but carbon dioxide emissions from developing Asia excluding China were approximately 125 million metric tons higher than in 2016 (amounts are not precise because they are read from a graph).

Some countries had lower carbon dioxide emissions. The biggest decline came from the USA, where emissions declined 25 million metric tons, or 0.5%. In Mexico, emissions dropped 4%, and in the United Kingdom they dropped 3.8%. Way to go Mexico and United Kingdom! Because those countries consume far less energy than does the USA, the raw number of metric tons reduced was less than in the USA, despite the percentage being higher.

Last December I published a post reporting that worldwide carbon dioxide emissions from energy had held constant for the three years ending in 2016. What happened?

Figure 3. Source: International Energy Agency, 2018.

Figure 3 shows the drivers of the change in carbon dioxide emissions. Energy intensity (in yellow) has decreased every year since 2011, meaning that it required less energy to produce a unit of economic output. The rate at which energy intensity improved seemed to grow until 2015, but the rate of improvement seems to have slowed since then. Carbon dioxide intensity also seems to have improved in many of the years (meaning that less carbon dioxide is released per unit of energy produced, most likely from cleaner fuel). On the other hand, economic growth has occurred in every year. It accelerated in 2017, and its effect overwhelmed the effects of the other two drivers.
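The three drivers in Figure 3 combine multiplicatively, in the style of the Kaya identity: emissions = GDP × (energy per unit of GDP) × (CO2 per unit of energy). The sketch below uses illustrative growth rates of my own choosing, not the IEA's actual figures:

```python
# Kaya-style identity behind Figure 3:
#   emissions = GDP * (energy / GDP) * (CO2 / energy)
# so the yearly percentage changes of the three drivers combine multiplicatively.
# The growth rates below are illustrative, not the IEA's actual figures.
gdp_growth = 0.037    # economic output up 3.7%
ei_change = -0.019    # energy intensity down 1.9% (less energy per unit of GDP)
ci_change = -0.003    # carbon intensity down 0.3% (less CO2 per unit of energy)

emissions_change = (1 + gdp_growth) * (1 + ei_change) * (1 + ci_change) - 1
print(f"emissions change: {emissions_change:+.2%}")

# For small rates, the simple sum of the three is a close approximation:
print(f"additive approximation: {gdp_growth + ei_change + ci_change:+.2%}")
```

With numbers like these, the intensity improvements offset only part of the economic growth, so emissions still rise, which is the pattern the figure shows for 2017.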

Figure 4. Source: International Energy Agency, 2018.

Figure 4 shows the annual growth in energy consumption by fuel. The chart shows that from 2006-2015, there was an average increase in consumption of all types of energy except nuclear. In 2016, however, there was a significant reduction in demand for energy from burning coal. Readers of this blog know that represents an important achievement, as coal emits more carbon dioxide per unit of energy than do the other fuels. However, in 2017, that achievement reversed itself, and demand for energy from burning coal rose again.

In 2017, the largest increase in energy demand was met by burning natural gas. The second largest increase in energy demand was met by renewable energy.

Overall, the report is not good news. As readers of this blog know, to prevent the worst effects of climate change, greenhouse gas emissions need to peak, and then be significantly reduced. There is no sign that is occurring. To quote the report:

The IEA’s Sustainable Development Scenario charts a path towards meeting long-term climate goals. Under this scenario, global emissions need to peak soon and decline steeply to 2020; this decline will now need to be even greater given the increase in emissions in 2017. The share of low-carbon energy sources must increase by 1.1 percentage points every year, more than five-times the growth registered in 2017. In the power sector, specifically, generation from renewable sources must increase by an average 700 TWh annually in that scenario, an 80% increase compared to the 380 TWh increase registered in 2017. (International Energy Agency, 2018, p. 4)

International Energy Agency. 2018. Global Energy & CO2 Status Report, 2017. Downloaded 4/18/2018 from

More Developed Land Nationally and in Missouri

Figure 1. Source: U.S. Department of Agriculture, 2015.

Developed land is on the increase, while cropland, pastureland, and rangeland are on the decrease, according to the 2012 Natural Resources Inventory. The U.S. Department of Agriculture has conducted the inventory every 5 years since 1982, but it takes several years to put the report together, so the inventory for 2017 is not yet available.

Figure 1 graphs the surface area of the contiguous 48 states by land cover/land use in 2012. The top 3 uses were forest land, rangeland, and federal land, each of which accounted for 21% of the total. When the USA was first settled, forest land and rangeland were much more extensive, but they have been converted into cropland and developed land. In addition, we think of our country as having huge freshwater lakes, but only about 3% of the surface area is water. Freshwater is very precious and special.

Of course, federal land could also be categorized into forest land, rangeland, cropland, and the other categories, but the Natural Resources Inventory does not do so.

Figure 2. Source: U.S. Department of Agriculture, 2015.

Figure 2 shows the change in land cover/land use since 1982. Over that time, cropland decreased and developed land increased by more acres than did any other category. "CRP Land" is land placed in the Conservation Reserve Program.

The Natural Resources Inventory grew out of the National Erosion Reconnaissance Survey, conducted in 1934 because of severe dust storms and erosion during the Dust Bowl. Thus, since its inception, the report has been concerned with erosion. Figure 3 shows the estimated erosion rate on cropland in 1982, and Figure 4 shows the same data for 2012. You can see that in 1982, erosion was most severe in a region centered on Iowa’s borders with Illinois, Missouri, and Nebraska, but also extending along the Mississippi River into western Tennessee. In 2012, that region remained the one with the most severe erosion, but the rate had been significantly reduced. Across northern Missouri in 1982, more than 10 tons of soil eroded from each acre of cropland each year! In 2012 that had been reduced by 50% or so.

Figure 4. Source: U.S. Department of Agriculture, 2015.

Figure 3. Source: U.S. Department of Agriculture, 2015.

Figure 5. Data source: U.S. Department of Agriculture, 2015.

Figure 5 shows land use in Missouri from 1982 – 2012 in a few broad categories. The green areas of the columns represent federal land, which is not broken out according to use. The red areas represent water. The two blue areas represent non-federal land, and they are broken into two categories: developed (light blue) and rural (dark blue). You can see that rural land represents by far the largest use of land in Missouri. In 2012, it represented 86.8% of Missouri's surface area, while federal land, water areas, and developed land represented 4.5%, 2.0%, and 6.7%, respectively. Over the 30-year period, federal land increased slightly, water areas increased slightly, and developed areas increased by a whopping 38%, virtually all of it converted from rural land.


Figure 6. Data source: U.S. Department of Agriculture, 2015.

Figure 6 looks at Missouri's non-federal rural land more closely. In 2012, more land was used for crops than for any other purpose (36% of rural land), followed by forest land (32%) and pastureland (27%). Over the 30-year period, the amount used for cropland decreased slightly, pastureland decreased 17%, and rangeland, which was already such a small portion of the land that you can barely see it on the chart, declined 62%. Forest land and other rural land increased. The Conservation Reserve Program (CRP Land) began after 1982, peaked in 1997, and has declined since then.

This report is compiled and published by the U.S. Department of Agriculture, and from an environmental perspective it may be a bit misleading. Figure 5 shows that developed land represents only 6.7% of all Missouri land. However, Figure 6 shows that more than 1/3 of rural land is cropland, and another 27% of it is pastureland. It is not as if these lands are undeveloped. While they may not be covered in asphalt or highly populated, they are intensively used. They may be subject to high levels of erosion, as shown in Figure 3, or they may be disturbed by tilling and the application of agricultural chemicals. Pig farms and feed lots, for instance, are located in rural areas, but they are highly developed operations, in many cases resembling factories.

In sum, the Natural Resources Inventory probably provides the most comprehensive look at land cover/land use in the USA. It does not, however, provide an in-depth review of the ecological status of the land.


Missouri Department of Natural Resources. 2018. Soil and Water Conservation Program. Viewed online 4/18/2018 at

U.S. Department of Agriculture. 2015. Summary Report: 2012 National Resources Inventory, Natural Resources Conservation Service, Washington, DC, and Center for Survey Statistics and Methodology, Iowa State University, Ames, Iowa.

Second Lowest Arctic Sea Ice on Record

Figure 1. Source: National Snow & Ice Data Center.

Arctic sea ice apparently reached its annual maximum extent on March 17, 2018, and it was the second lowest in the record, according to a report from the National Snow and Ice Data Center.

Each summer the arctic warms, and as it does, the sea ice covering the Arctic Ocean melts, reaching an annual low-point in late summer. Then, each winter the arctic cools, the surface of the ocean freezes, and the area covered by sea ice expands. The sea ice reaches its maximum extent in late winter, this year on March 17.

The National Snow and Ice Data Center tracks the extent of the sea ice using satellite images, as shown in Figure 1. The map is a polar view, with the North Pole in the center, the sea ice in white, and the ocean in blue. The land forms are in gray, with North America at lower left, and Eurasia running from Spain at lower right to the Russian Far East at the top. The magenta line shows the 1981-2010 average extent of the ice for the month of March. It doesn’t look like much on the map, but the anomaly in 2018 amounts to 436,300 square miles less than average.


Figure 2. Source: National Snow & Ice Data Center.

Figure 2 shows the trend in Arctic sea ice from 1979-2018. The declining trend is easy to see. (The y-axis does not extend to zero to better show the change.) The National Snow and Ice Data Center applied a linear regression trend line to the data (blue line), and the trend shows an average loss of 16,400 square miles per year.

What about the annual minimum? That has been shrinking, too. Figure 3 shows the Arctic sea ice minimum in 1980, and Figure 4 shows it in 2012. The prevailing winds tend to blow the ice up against Greenland and the far northern islands of Canada, but you can see that in 1980 most of the sea, from the Canadian islands, to Greenland, to the Svalbard Islands, to Severnaya Zemlya (anybody remember the Bond movie "GoldenEye?"), to the north of Far Eastern Russia, was covered by ice. In 2012, however, more than half of the Arctic Sea was ice-free, from north of the Svalbard Islands right around to the Canadian islands. Even the famed Northwest Passage, a channel through the Canadian islands, was open.

Figure 3. Minimum Extent of Arctic Sea Ice, 1980. Source: NASA Scientific Visualization Studio.

Figure 4. Minimum Extent of Arctic Sea Ice, 2012. Source: NASA Scientific Visualization Studio.

Figure 5. Minimum Extent of Arctic Sea Ice, 1979-2017. Source: NASA Global Climate Change.

Figure 5 charts the trend in the annual minimum. At its low in 2012, it was less than half of what it was in 1980.

The volume of the polar ice cap also depends on how thick the ice is. Satellites can photograph the entire ice cap, but data on thickness come to us from on-site measurements at a limited number of points. I don’t have a chart to share with you, but the data seem to indicate that compared to the years 1958-1976, in 2003-2007 the thickness had declined about 50% to 64%, depending on where the measurement was taken. (This change is approximate, being read off of a graph by Kwok and Rothrock, 2009.)

Thus, the decline in the arctic ice cap is actually much larger than suggested by the change in its extent.

Why does Arctic sea ice matter? First, Arctic sea ice does not form primarily from snowfall, as does the snowpack in the western United States. Arctic sea ice forms because the temperature is low enough to cause the surface of the water to freeze, just as your local pond or lake freezes if it gets cold enough. Thus, declining Arctic sea ice is a sign that the Arctic is warming. The Arctic seems to be the part of the planet that is warming the most from climate change, and this is a clear and graphic sign of that change.

Oddly, the warming Arctic is one reason for the bizarre weather we have had in Missouri this winter. As noted in a post on 1/22/2015, the warming Arctic weakens the polar vortex, which allows arctic cold to escape and travel south, impacting us in Missouri. Figure 6 shows the anomaly in Arctic temperatures from December 2017 through February 2018, in °C. While it was warm over the entire Arctic, as much as 7°C above average (12.6°F), it was 2-3°C cooler than average over North America (3.6-5.4°F).

Second, it matters because ice is white, but the ocean is blue. That means that sunlight hitting ice reflects back towards space, and is not absorbed. Being blue, however, the ocean absorbs the light, and converts the energy to heat. This reflective capacity is called “albedo,” and the albedo of ocean is less than that of ice. Thus, the ice is melting because of global warming, but then, the melting contributes to even more global warming through the change in albedo. People are fond of saying that the earth has buffering mechanisms that tend to inhibit large climate changes, and such mechanisms do exist, but not everywhere in all things. This is one example where the earth shows positive feedback that destabilizes the climate even further.
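The albedo effect can be made concrete with a rough calculation. The albedo values and insolation below are typical textbook-style figures I have assumed for illustration, not numbers from this post's sources:

```python
# Albedo = the fraction of incoming sunlight a surface reflects.
# Values below are assumed typical figures, for illustration only.
ALBEDO_ICE = 0.6     # bare sea ice; snow-covered ice can be higher still
ALBEDO_OCEAN = 0.06  # open ocean absorbs almost everything

def absorbed_w_per_m2(insolation: float, albedo: float) -> float:
    """Solar power absorbed per square meter of surface."""
    return insolation * (1 - albedo)

sun = 200.0  # assumed average insolation, W/m^2
ice_abs = absorbed_w_per_m2(sun, ALBEDO_ICE)
sea_abs = absorbed_w_per_m2(sun, ALBEDO_OCEAN)
print(f"ice absorbs {ice_abs:.0f} W/m^2, open water {sea_abs:.0f} W/m^2")
```

With assumptions like these, each square meter of ice that becomes open water more than doubles the solar energy absorbed there, which is the positive feedback described above.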

Melting Arctic ice is not a major factor in the rising sea level. The reason is that the ice is already in the water. When the ice in your glass of iced tea melts, it doesn’t make the glass overflow. In the same way, as this ice melts, it has only a small effect on sea level. On the other hand, the Greenland Ice Cap and the Antarctic Ice Cap are not already in the water, and as they melt, they do affect sea level.

One final word: the data above are not computer models of future events. They are the best available data on what has already happened, and what is happening now. To deny the reality of climate change is like denying that a river will flood, even as its water already swirls around your knees.


Kwok, R., and D. A. Rothrock. 2009. “Decline in Arctic Sea Ice Thickness from Submarine and ICESat Records: 1958-2008.” Geophysical Research Letters 36:L15501. Cited in National Snow & Ice Data Center. State of the Cryosphere. Viewed online 4/12/2018 at

NASA Global Climate Change. Arctic Sea Ice Minimum. Downloaded 4/12/18 from

NASA Scientific Visualization Studio. Annual Arctic Sea Ice Minimum 1979-2015 with Area Graph. Downloaded 4/12/18 from


National Snow & Ice Data Center. “2018 Winter Arctic Sea Ice: Bering Down.” Arctic Sea Ice News & Analysis, 4/4/2018. Downloaded 4/12/2018 from

Birth Rates Have Declined, the Number of Births Has Not

The National Center for Health Statistics keeps data for each year going back to 1909 on the number of live births in the United States and on the fertility rate, defined as the number of births per 1,000 women aged 15-44. These data are an important environmental concern because they greatly influence future population. The more people there are in the world, and the higher their standard of living, the more environmental stress is created. The United States has a high standard of living, so an increasing population here increases environmental strain.

Figure 1. Data source: National Center for Health Statistics Data Visualization Gallery; Martin et al., 2018.

Figure 1 shows the trend in births and fertility rate from 1909 to 2016. Live births are shown in blue, and should be read against the left vertical axis. The fertility rate is shown in red, and should be read against the right vertical axis. In 1909 there were 2,718,000 live births, rising to a peak of 4,316,233 in 2007, and easing since then to 3,945,875 in 2016. In 1909 the fertility rate was 126.8. It fell sharply to 75.8 in 1936 (the depths of the Great Depression), then increased sharply to 122.9 in 1957 (the baby boom). It then decreased sharply until the 1970s, and has trended slowly down since then. In 2016, the fertility rate was 62.0.

(Click on chart for larger view.)

Birth and fertility rates are also important from several other policy perspectives. The NCHS report shows that the fertility rate is declining among all age groups under 30, and the rate of teen births has been cut by more than half since 2007. This is a very important change for public health and welfare. The fertility rate for women over 30, however, has increased over time. In fact, the fertility rate for women in their 30s was 102.7 in 2016, compared to 73.8 for women aged 20-24 and 102.1 for women aged 25-29. Thus, more older women are giving birth.

Figure 2. Data source: Missouri Department of Health and Senior Services.

In Missouri, data on the number of births goes back to 1990, when there were 79,135. Births then decreased to a low in 1995 of 72,804, after which they increased to 81,833 in 2007. Since 2007, they have declined to 74,664 in 2016. The fertility rate statistic is only available from 1996, when it was 61.4. It increased to 68.8 in 2007, and has declined since then, to 63.7 in 2016. The data is shown in Figure 2. The blue line is for the number of births, and should be read against the left vertical axis. The red line is for the fertility rate, and should be read against the right vertical axis.



Figure 3. Data source: Martin et al., 2018.

Figure 3 shows the 2016 fertility rate for the 50 states plus the District of Columbia. South Dakota had the highest fertility rate, at 77.7, and Vermont had the lowest, at 50.3. Missouri was 19th highest. I don’t think that anybody believes that state boundaries control fertility rate, but these data give a small snapshot of what is happening in our state compared to others.



National Center for Health Statistics Data Visualization Gallery (data portal). Data downloaded 3/28/2018 from

Martin JA, Hamilton BE, Osterman MJK, Driscoll AK, Drake P. Births: Final data for 2016. National Vital Statistics Reports; vol 67 no 1. Hyattsville, MD: National Center for Health Statistics. 2018.

Missouri Department of Health and Senior Services. Missouri Information for Community Assessment Data Portal. Data downloaded 3/28/2018 from

Below Average Snowpack in the American West

The western snowpack was seriously below average this year, and it was way below average in the Lower Colorado Region.

It is early April, and that means it is time to check in on snowpack data in California and the American West. On average, the snowpack reaches its maximum by April 1, after which it begins to shrink as it melts away. California and much of the West have a Mediterranean precipitation pattern: the bulk of the yearly precipitation falls during the winter. Because the summer and fall are so dry, many regions depend on melting snow, which they collect in reservoirs. The snowpack serves as a kind of natural reservoir, collecting precipitation during the winter and releasing it gradually as the snow melts.

Snowpack is measured in inches of snow water equivalent. Depending on how slushy or powdery the snow is, it takes between 7 and 20 inches of snow to equal 1 inch of liquid water. To quantify the snowpack, scientists measure how many inches of snow are on the ground and calculate how much water it would represent if it were instantaneously melted. The result is called the snow water equivalent: 1 inch of snow water equivalent means that, no matter how deep the snow lying on the ground is, melting it would yield 1 inch of water.
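As a minimal sketch of that conversion (the function name and the 10:1 ratio in the example are my own illustrative choices, not values from the measurement agencies):

```python
def snow_water_equivalent(snow_depth_in, snow_to_liquid_ratio):
    """Inches of water the snowpack would yield if melted instantly.

    snow_to_liquid_ratio: inches of snow per inch of liquid water,
    roughly 7 for heavy, slushy snow up to 20 for dry powder.
    """
    return snow_depth_in / snow_to_liquid_ratio

# 100 inches of snow on the ground at a typical 10:1 ratio:
print(snow_water_equivalent(100, 10))  # 10.0 inches of snow water equivalent
```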

Figure 1. Source: California Department of Water Resources, California Data Exchange Center.

Figure 1 shows the snowpack in California for the three major snow regions: North, Central, and South, with the snow water equivalent given along the vertical axis on the left. The dark blue line represents the 2017-2018 winter, and it ends on March 29. The blue number at the end of each line gives this year's snow water equivalent as a percentage of the historical average for that date. At the lower right, the three regions are combined into a single number representing the snow water content of the entire state's snowpack on 3/29/18. At the lower left, the chart shows the statewide percentage of the historical average for April 1.

Through the end of February, this winter was the second driest on record, and the snowpack stood at roughly 20% of average. March was a wet month, however, tripling the snowpack. Even so, that only brought the statewide snowpack up to 57% of average.

Figure 2. Source: Natural Resources Conservation Service.

California also depends on water from outside the state, especially water from the Colorado River. Figure 2 shows readings for the entire region upon which California draws, which encompasses much of the southwestern United States. The data for this map come from a different data set than the previous chart, so the figures for California differ slightly. (Most of the difference probably arises from using somewhat different reference periods to represent “average.”)

As you can see, the entire region has had a smaller than average snowpack. However, the snowpack in the Lower Colorado Region is particularly worrisome, as it is only 21% of average.



Figure 3. Data source: Mammoth Mountain Ski Resort.

The Mammoth Mountain Ski Resort publishes a detailed history of the snowfall at the resort, and I use it as an example of the snowfall in a given California location. Figure 3 shows the data. The total amount of snow at Mammoth Mountain through March 31 was 248 inches this year, compared to an average of 308 over the period from 1969-2018. The length of the colored bars for 2018 illustrates that more than half of the snow for the whole season fell during March. The chart also shows just how wet a winter it was last year, the second wettest in the record. Bear in mind that Mammoth Mountain is measuring snowfall, not snowpack.

So, measurements of the snowpack indicate that it is seriously below average. What, then, is the status of California’s water supply? The quick answer is that for this year they should be fine.

California’s water supply is impacted this year by an extraordinary circumstance: in February, 2017, the Oroville Dam suffered a failure of the main and emergency spillways, leading to the evacuation of 188,000 people lest the dam fail entirely (see here). It didn’t fail, but since then the reservoir has been partially emptied to facilitate repairs and improvements.

Figure 4. Source: California Department of Water Resources, California Data Exchange Center (A).

Figure 4 shows the data for the largest California reservoirs. On the chart, the blue bars represent the level of each reservoir on March 30, while the yellow bars represent the maximum capacity. The red line represents the historical average level of each reservoir on March 30. The blue number below the bars represents the amount of water in each reservoir compared to its capacity, while the red number represents the amount of water compared to the historical average for March 30.

As you can see, most of the reservoirs are at or above their average for March 30, and only Lake Oroville is considerably below average. The region around Santa Barbara, however, remains in a serious drought. The two largest reservoirs in Santa Barbara County, the Cachuma and Twitchell Reservoirs, are at 40% and 2% of capacity, respectively (not shown on the chart).





Figure 5. Source:

In addition to the California reservoir system, southern California relies heavily on water from the Colorado River. Lake Mead, the largest reservoir on the Colorado River, has been overused for years, and was even forecast to have a strong chance of going dry (see here). Figure 5 plots the water level at Lake Mead. Each year it fills with the spring snowmelt, and then is drawn down throughout the rest of the year. Beginning just after 2000, Lake Mead suffered a steady and rather alarming drop. Last year, for the first time in many years, Lake Mead showed a year-to-year increase in its water storage. This year, as of April 1, the water level of Lake Mead is basically unchanged from last year.

Lake Powell, a large reservoir upstream from Lake Mead, is up 16 feet from last year on this date. That is a significant increase, and it comes entirely from the large snowpack last year.

So, what does all this mean? The snowpack this year was seriously below average, and it was way below average in the Lower Colorado drainage region. California's reservoirs, however, appear to be in good shape, except in the region around Santa Barbara. Lake Mead has not lost additional water, and the fact that Lake Powell has gained water means that officials may be able to move water from there to Lake Mead if needed. Thus, the water supply for this year may be sufficient for California and for the regions that draw on the Colorado River below Lake Mead.

It is worrisome, however, that after having experienced a severe multi-year drought, and then only 1 year of high precipitation, California and the Southwest have returned to below average snowpacks. I have reported previously that climate predictions include a permanent reduction of the snowpack throughout the West (see here) and in California (see here). We will have to keep watching over many years to see how this plays out.


California Department of Water Resources, California Data Exchange Center. Reservoir Conditions, 4/1/2018. Downloaded 4/2/2018 from

California Department of Water Resources, California Data Exchange Center. California Statewide Water Conditions, Current Year Regional Snow Sensor Water Content Chart (PDF). Downloaded 4/1/2018 from

Lake Mead Daily Water Levels. Downloaded 4/1/2018 from

Mammoth Mountain Ski Resort. Snow Conditions and Weather. Viewed online 4/1/2018 at

Natural Resources Conservation Service. Open the Interactive Map. Select “Basins Only.” On the map, select “Percent of NRCS 1981-2010 Average,” “Region,” “Watershed Labels,” and “Parameter.” Downloaded 4/2/2018 from

Santa Barbara County Flood Control District. Rainfall and Reservoir Summary, 4/1/2018. Viewed online 4/2/2018 at

Breeding Bird Survey, 2015

How are the birds doing? Ever since Rachel Carson revealed in the 1960s that pesticides were decimating bird populations, how the birds are doing has been an important question. DDT was the worst-offending pesticide, and it was soon banned, but other chemicals and other factors affect the ability of birds to survive. These days, the most important may be habitat destruction, competition from invasive species, and the effects of other chemicals, such as lead.

Many, many bird species migrate. Those that do require habitats along the way where they can rest and refuel. Break the chain of habitats in even one place, and you seriously harm the ability of the birds to survive.

Figure 1. Breeding Bird Survey Routes. Source: Sauer et al, 2017.

The largest and most important survey of bird populations is the Breeding Bird Survey, which has been conducted every year since 1966. Here's how the survey works: during peak breeding season, starting a half-hour before sunrise, a volunteer follows a route with 50 stops, each stop at least 1/2 mile from the next. The route stays the same from year to year. At each stop, the volunteer counts all birds seen or heard within a quarter mile. Figure 1 shows a map of the routes; they look like blue dots because of the scale of the map. You can see that coverage of the USA is quite good.

From the multiple routes in each geographical area, for each species a yearly index is constructed. These indexes represent “the mean count of birds on a typical route in the region for a year.” (USGS, Patuxent Wildlife Research Center)

The results are mixed, differing from species to species and from region to region. As you might expect, even though the routes have 50 stops and the method used is quite rigorous, it is not the same as physically counting every bird. Some birds may not be calling when the volunteer is there, or they may be hidden in brush, and so on. The survey method does not permit a calculation of the absolute number of birds in a region, and the annual index is only reliable if a sufficient number of birds are observed. Thus, the Breeding Bird Survey provides crucial data, but it may be only part of the picture.

Table 1. Breeding Bird Survey Trend Estimates for Bird Species Observed in Missouri. Data source: Sauer, et al. 2017.

Trend data on how the annual indices for each species have changed are available for every species and for every state and region. I shall focus only on observations in Missouri. Table 1 shows the data. The trends are reported for 1966-2015 and for 2005-2015, and they represent the annual rate of change over the period of interest.

(Click on table for larger view.)

The table is a bit complex, so let's unpack it. It shows all species observed in Missouri, listed in order of the change between 1966 and 2015, with species that declined on the left side and species that increased on the right. Each side of the chart begins with 4 columns intended to comment on the quality of the data for a given species. They are coded “G” for green (good), “Y” for yellow (caution), and “R” for red (extreme caution). The first column comments on the credibility of the measurement, the second on the size of the data sample, the third on how precise the measurements are, and the fourth on the relative abundance of the species.

The trend statistics follow the names of the species, and they are color-coded with green and red bars, representing the size of the change. Readers of this blog know that time series are vulnerable to year-to-year variation, but the fact that these are trends computed over the entire period of measurement should minimize that effect.

Between 1966 and 2015, annual indices for 58 bird species decreased, while 79 increased. If one counts only species for which the Regional Credibility Measure was “G,” then the situation is reversed: 40 species decreased and 31 increased.

Those with annual declines of more than 5% were the blue-winged teal, the loggerhead shrike, the house sparrow, and the American bittern. The blue-winged teal declined at a rate of 18.1% per year; however, the Regional Credibility Measure for that species is red, indicating that use and interpretation of its data warrant extreme caution. The same is true for the American bittern. The Regional Credibility Measures for the loggerhead shrike and house sparrow, however, are good.

Because 1966-2015 spans 49 years, even small annual changes can compound into rather large changes across the entire period. A decline of 1.4% per year sustained over 49 years results in a 50% decline over the whole period. The loggerhead shrike, for which the Regional Credibility Measure is “G,” declined at an annual rate of 6.68%. Over 49 years, that compounds to a decline of 97%!
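The compounding behind these figures can be checked in a few lines of Python (the function name is my own):

```python
def cumulative_change_pct(annual_rate_pct, years):
    """Total percent change from compounding an annual rate over many years."""
    return ((1 + annual_rate_pct / 100) ** years - 1) * 100

# A 1.4% annual decline over 49 years loses about half the population:
print(round(cumulative_change_pct(-1.4, 49), 1))   # -49.9
# The loggerhead shrike's 6.68% annual decline compounds to about -97%:
print(round(cumulative_change_pct(-6.68, 49), 1))  # -96.6
```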

Among the success stories are some birds that are everybody's favorites: bald eagle observations increased almost 40% per year, great egret observations increased almost 11%, and cedar waxwing observations increased almost 9%. For the bald eagle and great egret, however, the Regional Credibility Measures are red, again indicating extreme caution in using and interpreting the data, and for the cedar waxwing it is yellow.

These findings reinforce what was stated above: the Breeding Bird Survey provides crucial data, but it may not be a complete picture.

Missouri is home to 9 federal wildlife refuges and hundreds of state conservation areas. All are devoted to providing animals and plants the habitat they need to survive. If you visit them on the wrong day, they often look empty, and you can come away wondering what the big deal is. If you visit them on the right day, however, they can be teeming. Figure 2, for instance, shows the afternoon lift-off of a flock of snow geese at Loess Bluffs NWR in northwestern Missouri. The snow geese are only there to rest and refuel for a few days each spring and fall.

Figure 2. Snow Geese Lift Off at Loess Bluffs NWR. Source: Keyserill, 2017.


Keyserill, Robert. 2017. “Afternoon Lift Off.” Source: U.S. Fish and Wildlife Service. “Loess Bluffs National Wildlife Refuge.” Downloaded 3/18/2018 from

Sauer, J. R., D. K. Niven, J. E. Hines, D. J. Ziolkowski, Jr, K. L. Pardieck, J. E. Fallon, and W. A. Link. 2017. The North American Breeding Bird Survey, Results and Analysis 1966 – 2015. Version 2.07.2017 USGS Patuxent Wildlife Research Center, Laurel, MD. Downloaded 3/14/2018 from

Ziolkowski, Dave, Jr., Keith Pardieck, and John Sauer. 2010. “On the Road Again for a Bird Survey that Counts.” Birding 42 (4): 32-40. Downloaded 3/18/2018 from

United States Geological Survey, Patuxent Wildlife Research Center. Trend and Annual Index Information. Downloaded 3/19/2018 from

Social Cost of Carbon Update

We know that emitting carbon dioxide into the atmosphere causes climate change. We also know that climate change is causing damage, and that it will cause even greater damage in the future. But how much damage? Can anybody put a dollar sign on the cost?

That is just what a group called the Interagency Working Group on Social Cost of Greenhouse Gases (IWGSCGG) tries to do. The task is especially difficult because the damage caused by carbon dioxide does not occur when it is first emitted. Carbon dioxide remains in the atmosphere for 80-100 years, and it continues to cause global warming the whole time it is there. The damages from carbon dioxide emitted today will continue to accrue over the entire 80-100 years. As the concentration of carbon dioxide in the atmosphere continues to rise, climate change will accelerate, and the damage it causes will increase. Thus, a metric ton of carbon dioxide emitted in 2050 is expected to cause more damage than a ton emitted in 2010.

First the numbers, then some background on what they mean. The IWGSCGG uses several different methods to estimate the future costs of carbon emissions. They then average the estimates, discount them to present value, and express them in 2007 dollars. In calculations of this sort, the assumed discount rate often has a large effect on the outcome.

Table 1. Data source: IWGSCGG 2016

In Table 1, the left column gives the year in which a ton of CO2 might be emitted. The next three columns each assume a different discount rate. The column on the far right presents similar information to the 3.0% Discount Average column, except that instead of the average damage estimate, it gives the 95th percentile. The idea is that, if the discount rate is 3.0%, the odds are 95% that the cost of the damage will be no higher than the values in this column.
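To see why the discount rate matters so much, here are the bare mechanics of discounting (a sketch, not the IWGSCGG's actual models; the function name and dollar figures are my own illustration):

```python
def present_value(future_cost, discount_rate, years):
    """Value today of a cost incurred `years` from now."""
    return future_cost / (1 + discount_rate) ** years

# $100 of climate damage 50 years from now, at two discount rates:
print(round(present_value(100, 0.025, 50), 2))  # 29.09
print(round(present_value(100, 0.05, 50), 2))   # 8.72
```

A higher discount rate shrinks the present value of future damage, which is why choosing the rate has such a large effect on the cost estimates.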

The 3% discount rate is the one the authors adopt as their most likely scenario. So, to put these data in plain English:

The most plausible estimate of the damage caused by each metric ton of carbon dioxide emitted into the atmosphere in 2010 is $31. The damage caused by each metric ton emitted in 2015 is $36, and for each metric ton emitted in 2020 it will be $42, and for each metric ton emitted 2050 it will be $69.

Compared to estimates made in 2013, the damages are estimated to be 1-2 dollars less per metric ton.

In 2010, the United States emitted an estimated 5,736.4 million metric tons of CO2. At $31 per metric ton, that equates to $177.8 billion. The GDP of the United States in 2010 was $14,958 billion, so the damage is roughly equal to 1.2% of our total economic output.

Why is this estimate important? Policy makers need to analyze the costs and benefits of the programs they mandate. Avoided future damage is a significant benefit, so they need to estimate how much future cost is avoided. The report suggests that the United States could spend up to $177.8 billion per year to reduce CO2 emissions and be paid back by the damage prevented.
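The back-of-envelope arithmetic is straightforward (variable names are mine; the $31 figure is the 2010 central estimate at a 3% discount rate from Table 1):

```python
emissions_mmt = 5736.4  # 2010 U.S. CO2 emissions, million metric tons
cost_per_ton = 31       # 2010 damage estimate at a 3% discount rate, $/metric ton
gdp_billion = 14958     # 2010 U.S. GDP, billions of dollars

damage_billion = emissions_mmt * cost_per_ton / 1000  # $ millions -> $ billions
print(round(damage_billion, 1))                       # 177.8
print(round(damage_billion / gdp_billion * 100, 1))   # 1.2 (% of GDP)
```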

This report is an update of the second IWGSCGG report, issued in 2013. The cost estimates changed between reports because of increased knowledge about climate change and improvements in the computer models used to make the estimates. There is still considerable uncertainty here, but the IWGSCGG estimate may be the best estimate available.


Interagency Working Group on Social Cost of Greenhouse Gases. 2016. Technical Support Document: – Technical Update of the Social Cost of Carbon for Regulatory Impact Analysis – Under Executive Order 12866. Downloaded 3/20/2018 from

For U.S. greenhouse gas emissions: EPA > Climate Change > Emissions > National Data,

For U.S. GDP: Bureau of Economic Analysis > National Economic Accounts > Current Dollar and “Real” GDP (Excel Spreadsheet).