Human Exposure to Environmental Chemicals

From time to time I report on toxic chemicals in the environment, whether in the fish we eat (here), polluted streams (here), or toxic waste sites (here). People come into contact with these chemicals by eating contaminated food, drinking contaminated water, and breathing contaminated air. I thought it might be interesting to see whether people are carrying a dangerous load of toxic chemicals in their bodies.

It turns out that the Centers for Disease Control and Prevention (CDC) was also interested, and they have systematically tested samples of the population of the USA to see which environmental chemicals people are carrying in their blood and urine, and at what levels. They have published their findings in a series of reports, most recently in 2009, and they regularly update the data associated with the report, most recently in 2017.

There is some basic information you need to know in looking at this data. First, the data covers only 308 environmental chemicals. There are over 80,000 chemicals registered for use in the USA, however, and the American Chemical Society database contains over 50 million unique chemical substances that have been discovered or created. Very little is known about the toxicity of most of them.

Second, once a toxic chemical enters your body, it seldom remains in the bloodstream for very long. Some chemicals are cleared from the body relatively quickly (often in urine), while others migrate into the body’s tissues, where they sometimes persist for decades. Thus, toxic chemicals have two kinds of effects on the body: acute symptoms (those related to high levels in the blood) and chronic effects (which can be caused by even small amounts of some chemicals remaining in the tissues of the body). The CDC surveyed blood and urine levels. Thus, their data would bear most directly on acute symptoms, and would have less to say about long-term chronic exposure at low levels.

Third, most of the chemicals tested by the CDC exist at some level in the environment. You can find them in just about everyone. In fact, many are essential for health. For instance, too much iron in the blood is toxic, but too little can cause iron deficiency anemia.

Our chemical tests have become so sophisticated that they can detect traces of chemicals at concentrations far too small to see, even under a microscope. Thus, the fact that people carry some level of a chemical in their body is not evidence that it is harming them. Many additional factors need to be taken into account. The CDC reports are intended to provide baseline data.

Rather than get into the hundreds of tables provided in the report, I’ll just report a few headline findings from the Executive Summary:

  • Exposure to some chemicals is widespread.
    • Polybrominated diphenyl ethers (flame retardants used in a wide variety of products) were found in almost all of the subjects tested. These accumulate in the environment and in fatty tissue.
    • Bisphenol A, a component of epoxy resins and polycarbonates, was found in the urine samples of more than 90% of tested subjects.
    • Perfluorooctanoic acid, used in the manufacture of non-stick coatings for cookware, was found in “most” subjects tested.
    Figure 1. Source: Centers for Disease Control and Prevention, 2009a.

    Because exposure to environmental chemicals is so widespread, many (most?) people are carrying more than one in their bodies. Very little is known about whether, and how, these chemicals interact. Do they potentiate each other, making even low-level exposure dangerous? We just don’t know.

  • Serum levels of lead in children have declined. For many years the CDC used 10 micrograms per deciliter (µg/dL) as its “level of concern” for lead in children’s blood. (No blood lead level in children is known to be safe, but the figure provides a marker against which change can be measured.) The percentage of children with a blood level greater than 10 µg/dL has declined dramatically since 1970 (Figure 1): where once it was 88.2%, now it is 1.4%. This suggests that lead mitigation efforts have been tremendously successful. Some special populations remain at risk, however, especially children living in homes containing lead-based paint.

    Figure 2. Source: Centers for Disease Control and Prevention, 2009a.

  • For the first time, the report included data on exposure to mercury. The report found that mercury levels increase with age for all demographic groups, then begin to decline after age 50 (Figure 2). The levels were well below those associated with mercury poisoning. Non-Hispanic blacks had the highest blood levels, followed by Mexican Americans, then non-Hispanic whites. I reported on the bioaccumulation of mercury here; given that mercury bioaccumulates, it makes sense that blood levels increase with age. What accounts for the decrease after age 50 and the racial differences? I would put my money on lifestyle differences (where you live, what you do for a living, what you eat), but I don’t really know.
  • Perchlorate was found in the urine of all subjects. Perchlorate is a chemical used to manufacture fireworks, explosives, flares, and rocket propellant. It’s hard to imagine that those uses would make it ubiquitous in the environment, but on the other hand, 14 billion pounds of bombs were dropped by the United States during the Vietnam War alone. This chemical is known to affect thyroid function, but the maximum safe blood level has not yet been determined. This data will help scientists develop standards for safe and unsafe exposure.

I wish I could report on the chemical burden of people living in Missouri, but the CDC data is not broken out by state, and I have not been able to find a report that addresses the issue.

In summary, the report seems to find that environmental chemicals can be widely detected in the blood or urine of Americans. Safe blood levels have been established for some chemicals, and for those the data seem to show that the mean blood level across all demographic groups is within the safe level. There may be special populations in which blood or urine levels are higher. Moreover, the number of chemicals tested is a small fraction of those that have been discovered or created, and about most of them we know very little.


American Chemical Society. 50 Million Unique Chemical Substances Recorded in CAS Registry. Viewed online 4/24/2017 at

Centers for Disease Control and Prevention. Executive Summary, Fourth National Report on Human Exposure to Environmental Chemicals: 2009. National Center for Environmental Health, Division of Laboratory Sciences, Mail Stop F-20, 4770 Buford Highway, NW, Atlanta, GA 30341-3724. Downloaded 4/22/2017 from

Centers for Disease Control and Prevention. Fourth National Report on Human Exposure to Environmental Chemicals: Updated Tables, January 2017, Volumes One and Two. National Center for Environmental Health, Division of Laboratory Sciences, Mail Stop F-20, 4770 Buford Highway, NW, Atlanta, GA 30341-3724. Downloaded 4/22/2017 from

Clodfelter, Micheal. 1995. Vietnam in Military Statistics: A History of the Indochina Wars, 1792–1991. Jefferson, NC: McFarland & Company. Cited in Wikipedia. List of Bombs Used in the Vietnam War. Viewed online 5/4/2017 at

National Toxicology Program. About NTP. Viewed online 4/24/17 at

New York State Department of Health. Understanding Mercury Exposure Levels. Viewed online 4/24/2017 at

Wikipedia. Iron Poisoning. Viewed online 2/24/2017 at

California Continues to Face Future Water Supply Challenges

Despite the wet winter in 2017, climate change will pose severe challenges to California’s future water supply.

In the last post I reported that Gov. Brown has declared California’s drought emergency officially over. The state has plenty of water for the next year. This post explores the implications of this wet winter for California’s long-term water status.

I first looked at this topic in a 13-post series that ran during the summer of 2015. The series starts here. It contains a lot of information about California’s water supply and consumption. I concluded that at some point in the not-too-distant future California would experience a significant permanent water deficit. The #1 cause of the deficit would be climate change, which is projected to result in a significant reduction in the size of California’s snowpack. The #2 cause would be population increase. I performed the analysis myself because I could find no sources that did anything similar. I’m not going to repeat that analysis in this post. Rather, I’m going to discuss a couple of new reports that confirm the concerns I raised in 2015.

Figure 1. Source: California Dept. of Water Resources.

Figure 1 illustrates the problem California faces. Almost all of California’s precipitation falls during the winter, and some of it gets temporarily “locked up” as snowpack in the Sierra Nevada mountains. Demand for water, however, peaks during the summer. California has many man-made reservoirs that release water during the summer and fall, and the state depends on the melting snowpack to recharge those reservoirs as water is drawn from them. In Figure 1, the blue line represents runoff and the red line represents water demand. Moving the date of maximum runoff earlier in the year has two effects. First, it increases the amount of water that cannot be captured into storage (yellow area); that water has to be dumped (see the post on Oroville Dam for what happens when the volume of water being dumped gets too high). Second, it increases the amount of water that must be released from storage in the summer and fall. The amount released is then larger than the inflow the reservoir receives, resulting in an increased water deficit (the blue area represents water received, the green area represents water discharged equal in size to the blue area, and the red area represents the deficit). There is a water deficit even in average years, but it is small, and a winter with slightly above-average precipitation can make it up. Moving maximum runoff earlier in the year increases the size of the deficit; then only a much wetter year can recharge the reservoirs.
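For readers who like to see the mechanism in numbers, the logic of Figure 1 can be sketched as a toy monthly water balance. All of the numbers below are hypothetical, chosen only to illustrate the effect: with the same total runoff and the same demand, shifting the runoff peak earlier in the year spills more water past a full reservoir and leaves less in storage at year’s end.

```python
# Toy monthly reservoir balance illustrating the logic of Figure 1.
# All quantities are hypothetical, chosen only to show the mechanism.
CAPACITY = 100

def simulate(runoff):
    """Return (water spilled, end-of-year storage) for a year of monthly runoff."""
    demand = [2, 2, 3, 5, 8, 12, 15, 15, 12, 8, 4, 2]  # low winter, high summer
    storage, spilled = 60, 0
    for inflow, need in zip(runoff, demand):
        storage += inflow
        if storage > CAPACITY:          # reservoir full: the excess must be dumped
            spilled += storage - CAPACITY
            storage = CAPACITY
        storage -= min(need, storage)   # meet demand from storage as far as possible
    return spilled, storage

# Same total runoff in both cases, but the peak arrives two months earlier.
late_melt  = [5, 5, 8, 15, 25, 15, 5, 2, 1, 1, 3, 3]
early_melt = [8, 15, 25, 15, 5, 5, 2, 1, 1, 1, 3, 7]

print(simulate(late_melt))   # less spill, more water left in storage
print(simulate(early_melt))  # more spill, less water left in storage
```

The point of the sketch is only that earlier runoff wastes more water and deepens the deficit even when total precipitation is unchanged, which is exactly the shift Figure 1 depicts.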

Figure 2. Source: California Dept. of Water Resources.

Figure 2 includes two charts. The first shows the percentage of California’s precipitation that fell as rain from 1948-2012. Precipitation that falls as rain cannot add to the snowpack. On the chart, the black horizontal line is the mean percentage across all years; red columns represent years with an above-average percentage of rain, blue columns years below average. There is year-to-year variation, but you can see that the red columns cluster to the right while the blue columns cluster to the left. That means that, on average, an increasing percentage of precipitation is falling as rain. Thus, unless annual precipitation undergoes a sustained increase (which hasn’t happened and is not projected), California’s snowpack will shrink, because what once was snow is now rain.

The second chart in Figure 2 shows runoff measured on the Sacramento River. The red line represents the 50-year period from 1906-1955, while the blue line represents the 52-year period from 1956-2007. This is the specific problem that was discussed conceptually in Figure 1. You can see that runoff has moved earlier in the year by about a month.


Figure 3. Source: National Centers for Environmental Information.

Why is more precipitation falling as rain rather than snow, and why is melt occurring earlier? Because of increased temperature. Winter is when snow falls in California, and it is when the state receives the bulk of its precipitation. Figure 3 shows that the average winter temperature (December – March) has increased by more than 2°F. In addition, if you look at Figure 3 carefully, you can see that the rate of temperature increase accelerated somewhere around 1980. The runoff chart in Figure 2 chunks the data into only 2 groups, each about 50 years long. Because of the acceleration in the temperature increase, I believe that if the data had been chunked into 3 groups, each about 33 years long, the shift toward earlier snowmelt in the most recent group would have been even greater than the one shown.

How dire is the threat to California’s snowpack? It depends on which climate projection is used. The projected effects of climate change depend very much on how humankind responds to the threat. If we greatly reduce our GHG emissions immediately, the climate will warm less; if we don’t, it will warm more.

Figure 4. Source: California Dept. of Water Resources.

Figure 4 shows the historical size of the California snowpack plus 2 projections. The middle map shows the projected size of the snowpack if warming is less; the map on the right shows its size if warming is more. You can see that, even under the low warming scenario, a loss of 48% of the snowpack is projected. Under the high warming scenario, a 65% loss is projected. These projections are for the end of the century. In my original series, I estimated the loss of snowpack at 40% by mid-century. That is not too far off from the high warming scenario. And I have to say, the evidence suggests that so far the world is operating under the high warming scenario, possibly even worse.

Surface water is not the only source on which California depends. California withdraws significant amounts of water from underground aquifers, especially in (but not limited to) the agricultural areas of the Central Valley. Aquifers can be compared to underground lakes, but don’t think of them as being like a big, hollow cave in which there is a concentrated, pure body of water. Rather, think of them as regions of porous ground, such as gravel or sand. In between the pieces of gravel or sand is space, and that space can hold water. Below and on the sides are rocks or clay that are impervious to water, which allow the water to be held in the aquifer.

So long as the aquifer is charged with water, this is a situation that can last for thousands of years. If, however, water is pumped out without being replaced, then nothing occupies the spaces between the pieces of gravel or sand. If that occurs, the weight of the ground over the aquifer can compress the aquifer, reducing the amount of space available between the pieces of sand and gravel, reducing the capacity of the aquifer to hold water. When this occurs, it sometimes shows up as subsidence on the surface. In California, it is primarily the snowpack that feeds the aquifers. If a significant amount of the snowpack is lost, it will be less able to recharge the aquifers, and they will undergo increased compaction.

Figure 5. Map of Permanent Subsidence. Source: Smith et al, 2016.

As noted in my original series, significant subsidence has already occurred over California’s aquifers, and more seems to occur every year. A recent study attempted to quantify the amount of water storage capacity being lost to compaction. The study covered the years 2007-2010, so it didn’t even include the recent severe drought (2007, 2008, and 2009 were dry years, but 2010 was the 9th wettest in the record). The study covered only a small portion of the south end of the Central Valley Aquifer, yet it found that during those 4 years significant permanent subsidence had occurred (see Figure 5), resulting in a total loss of 748 million cubic meters of water storage, an amount roughly equal to 9% of one year’s groundwater pumping in the study area. Spread over the study’s four years, that is about 2.25% per year; if this rate held going forward, it would mean that for every 44.4 gallons of water pumped out each year, about 1 gallon of aquifer storage would be lost.
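The 44.4-to-1 figure follows from a little arithmetic, assuming (as the numbers imply) that the study’s 9% compares four years of storage loss to a single year’s pumping. The units cancel, so it makes no difference whether you reckon in cubic meters or gallons:

```python
# Back-of-the-envelope check of the storage-loss ratio.
# Assumption (my reading of the study's numbers): the 9% compares the
# 4-year compaction loss to one year of pumping in the study area.
storage_lost = 748e6      # cubic meters lost over the 4-year study
years = 4
loss_fraction_of_annual_pumping = 0.09

annual_pumping = storage_lost / loss_fraction_of_annual_pumping
annual_loss = storage_lost / years

ratio = annual_pumping / annual_loss
print(f"Water pumped per unit of storage lost, each year: {ratio:.1f}")  # ~44.4
```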

During the recent drought many newspaper articles reported that there had been a sharp increase in the number of wells being drilled in the Central Valley, and that the depth of the wells had also significantly increased. This suggests an increase in the rate at which the water table is being lowered, which would lead to an increased rate of compaction. As the study notes, this is a loss that cannot be replenished; aquifer storage lost to compaction is gone forever.

Dry periods become more devastating when they occur during hot periods. One reason the recent drought in California was so devastating was because it was a hot drought. A recent study found that climate change has already raised the temperature in the state (as in Figure 3 above), and will continue to raise it further, to the point that every dry year is likely to be a hot drought. The report concludes that anthropogenic warming has substantially increased the risk of severe impacts on human and natural systems, such as reduced snowpack, increased wildfire risk, acute water shortages, critical groundwater overdraft, and species extinction.

The bottom line here is that we are talking about the effects of climate change. Climate means average patterns over long periods of time – 30 years at minimum. The current wet period represents only 1 winter. Just as one swallow doesn’t make a summer, so one wet winter doesn’t make a climate trend. For that matter, neither do 5 dry years. However, California’s increase in temperature is a long-term change that does make a climate trend, and every indication suggests it will only increase more.

My conclusion is that, this wet winter notwithstanding, the concerns I voiced in 2015 over California’s water supply remain valid. As time passes, California will face increasing challenges meeting the demand for water (see here). The state will be unable to secure large new sources of surface water or ground water (see here), and will have to construct large, expensive desalination plants (see here). There will be sufficient water to supply human consumption if it is properly allocated (see here), but water available to agriculture will be reduced, resulting in a decline in California’s agricultural economy (see here). That loss, plus the cost of the desalination plants, will impact California’s economy (see here), as well as the food supply for the entire country.

[In the above paragraph I have referenced several of the posts in my 2015 series Drought in California. If you are interested in the topic, you should read the series sequentially, beginning with Drought in California Part 1: Introduction.]


California Department of Water Resources. 2015. California Climate Science and Data for Water Resources Management. Downloaded 4/6/2017 from

Diffenbaugh, Noah, Daniel Swain, and Danielle Touma. 2015. “Anthropogenic Warming Has Increased Drought Risk in California.” Proceedings of the National Academy of Sciences. Downloaded 3/30/2017 from

National Centers for Environmental Information. “California, Average Temperature, December-March, 1896-2016” Graph generated and downloaded 4/13/2017 at

Smith, R.G., R. Knight, J. Chen, J.A. Reeves, H.A. Zebker, T. Farr, and Z. Liu. 2016. “Estimating the Permanent Loss of Groundwater Storage in the Southern San Joaquin Valley, California.” Water Resources Research, American Geophysical Union. 10.1002/2016WR019861. Downloaded 3/30/2017 from

California Drought Emergency Officially Over

Gov. Jerry Brown officially declared California’s drought emergency over on Friday, April 7. It was a fitting ending to one of the worst episodes in California’s drought-laden history.

Or was it? The next two posts update California’s water situation. This one focuses on the current short-term situation. The next one focuses on the future, with an eye toward the future impact of climate change. I have personal reasons for following California’s water situation – I have family living there. But in addition, California is the most populous state in the Union, it has the largest economy of any state, and the state grows a ridiculously large fraction of our food. What happens in California affects us here in Missouri.

Figure 1. California Snowpack, 3/31/2017. Source: California Department of Water Resources.

Is the short-term drought truly over? Yes, I think so. The vast majority of California’s precipitation falls during the winter, and the snowpack that builds up in the Sierra Nevada Mountains serves as California’s largest “reservoir.” As it melts, it not only releases water that represents about 30% of the state’s water supply, but it also feeds water into the underground aquifers that provide groundwater to much of the state. Thus, the size of the snowpack is the most important factor in determining California’s water status. California measures the water content of the snowpack electronically and manually. The measurements around April 1 are considered the most important, as that is when the snowpack is typically at its largest. Figure 1 shows the report for this year. Statewide, the water content of the snowpack was 164% of average for the date, almost 2/3 larger than average. The water content was significantly above average in all three regions of the snowpack, North, Central, and South.

I follow the snow report at Mammoth Mountain Ski Resort for a specific example of snow conditions. Figure 2 shows that through March, Mammoth received over 500 inches of snow, one of the highest totals in a record going back to 1969-70. The column for 2016-17 has very large blue and orange sections, indicating that the majority of the snow fell in January and February. Figure 3 confirms that impression. It charts the amount of snowfall at Mammoth during each month of the 2016-17 snow season and compares it to the average for that month across all years. You can see that both January and February were monster snow months, especially January. By March, snowfall had already fallen below average. I wouldn’t make too much of this fact; one month doesn’t make a trend.

Figure 2. Source: Mammoth Mountain Ski Resort.

Figure 3. Data source: Mammoth Mountain Ski Resort.

Figure 4. Source: California Data Exchange Center.

California also stores water in man-made reservoirs. Figure 4 shows the condition of 12 especially important ones on March 31. Most were above their historical average for that date, and many were approaching their maximum capacity. Those who follow this blog know that the Oroville Reservoir actually received so much water that it damaged both the main and emergency spillways, threatening collapse of the dam and requiring evacuation of thousands of people downstream. (See here.)

Figure 5. Elevation of the Surface of Lake Mead. Source:

In addition, Southern California receives the lion’s share of the water drawn from the Colorado River, so the status of Lake Mead, the largest reservoir on the Colorado, is important to the state. A study in 2008 found that there was a 50% chance the reservoir would go dry by 2021. On March 31, Lake Mead’s surface stood at 1,088.26 feet above sea level. (This doesn’t mean there were that many feet of water in the reservoir; Hoover Dam isn’t that tall. Rather, it is the elevation of the water’s surface above sea level. Lake Mead’s maximum depth is 532 feet.) The current level represents 41.38% of capacity. Figure 5 shows the level of the lake over time. You can see that the line tends to go up with the spring snowmelt and down during the rest of the year. This year it is up very slightly year-over-year, but the trend has been relentlessly downward since 2000.

The conclusion seems inescapable: for this year at least, California has plenty of water. The short-term drought is over. One year doesn’t make a climate trend, however. In the next post I will consider the implications of this wet winter for California’s water situation going into the future.


Barnett, Tim, and David Pierce. 2008. “When Will Lake Mead Go Dry?” Water Resources Research, 44, W03201. Retrieved online at

CA.GOV. Governor Brown Lifts Drought Emergency, Retains Prohibition on Wasteful Practices. Viewed online 4/10/2017 at

California Data Exchange Center. Conditions for Major Reservoirs: 31-Mar-2017. Viewed online at

California Department of Water Resources. Snow Water Equivalents (inches) for 3/30/2017. Viewed online 3/31/2017 at

Mammoth Mountain Ski Resort. Snow Conditions and Weather, Extended Snow History. Data downloaded 4/2/2017 from

“Lake Mead Daily Lake Levels.” Downloaded 4/5/2017 from

Ozone Was the Most Important Air Pollutant in Missouri in 2016

Ozone was the most important air pollutant in Missouri on more days than any other. It increased its “lead” over PM2.5, which was second.

The Air Quality Index is a measure that combines the level of pollution from six criteria pollutants: ozone (O3), sulfur dioxide (SO2), nitrogen dioxide (NO2), carbon monoxide (CO), particulate matter smaller than 2.5 micrometers (PM2.5), and particulate matter between 2.5 and 10 micrometers (PM10). For a brief discussion of these pollutants, see Air Quality Update 2016.
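For readers curious how the index works under the hood: each pollutant’s measured concentration is converted to a 0-500 index value by linear interpolation between EPA breakpoints, and a day’s AQI is the highest of the per-pollutant values; that highest pollutant is the “most important” one counted in this post. Here is a minimal sketch; the PM2.5 breakpoints shown are illustrative of the EPA table in effect at the time of these posts, and the full table extends above the Unhealthy range:

```python
# EPA-style AQI: linear interpolation within a concentration breakpoint band.
# Breakpoints below are illustrative (24-hour PM2.5 in ug/m3, truncated table).
PM25_BREAKPOINTS = [
    # (C_lo,  C_hi,  I_lo, I_hi)
    (0.0,    12.0,    0,   50),   # Good
    (12.1,   35.4,   51,  100),   # Moderate
    (35.5,   55.4,  101,  150),   # Unhealthy for Sensitive Groups
    (55.5,  150.4,  151,  200),   # Unhealthy
]

def aqi(concentration, breakpoints):
    """Map a concentration to an AQI value via the EPA interpolation formula."""
    for c_lo, c_hi, i_lo, i_hi in breakpoints:
        if c_lo <= concentration <= c_hi:
            return round((i_hi - i_lo) / (c_hi - c_lo) * (concentration - c_lo) + i_lo)
    raise ValueError("concentration outside this (truncated) table")

print(aqi(9.0, PM25_BREAKPOINTS))   # falls in the Good range (index <= 50)
print(aqi(40.0, PM25_BREAKPOINTS))  # Unhealthy for Sensitive Groups (101-150)
```

The same formula is applied to each of the six pollutants with its own breakpoint table, which is why one pollutant can dominate the index even on days when overall pollution is low.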

Figure 1. Data source: Environmental Protection Agency.

Figure 1 shows the percentage of days for which each of the criteria pollutants was the most important one. The chart combines all 20 counties. Since 2009, ozone has been the most important pollutant on more days than any of the other pollutants, and it extended its “lead” in 2016. PM2.5 was the most important pollutant on the second highest number of days. Since 2007, however, the percentage of days on which it was the most important pollutant has been trending lower. One or the other of these two pollutants was the most important on 85% of all days statewide.

Thirty years ago, ozone was a much less important pollutant than it is now. In 1983, it was the most important pollutant on fewer than 30% of the days statewide, but in 2016 it was the most important pollutant on 54% of the days. While we need ozone in the upper atmosphere to shield us from ultraviolet radiation, at ground level it is a strongly corrosive gas that is harmful to plants and animals (including humans). We don’t emit it directly into the air. Rather, it is created when nitrogen oxides and volatile organic compounds (vapors from gasoline and similar liquids) react in the presence of sunlight. These pollutants are emitted into the atmosphere by industrial facilities, electric power plants, and motor vehicles.

The second most important pollutant was PM2.5 (31% of days in 2016). These tiny particles were not recognized as dangerous until relatively recently, though now they are thought to be the most deadly form of air pollution. I can’t find anything that says so specifically, but I believe the zero readings in 1983 and 1993 mean that PM2.5 wasn’t being measured in Missouri, not that it wasn’t a significant pollutant back then. The EPA significantly tightened its regulations for PM2.5 in 2012. In 2015, no Missouri county was determined to be noncompliant with the new standards; however, data gaps from sensors just across the Mississippi River prevented a determination of whether pollution from Missouri was causing a violation of standards on the Illinois side of the metro area. Thus, the counties of Franklin, Jefferson, St. Charles, St. Louis, and St. Louis City were all called “unclassified.” Road vehicles, industrial emissions, power plants, and fires are important sources of PM2.5.

Sulfur dioxide used to be by far the most important pollutant. It has not been eliminated, and it was still the most important pollutant on some days (9% of days in 2016), but good progress has been made on reducing SO2 emissions. For the role of SO2 in background air pollution, see this post.

Don’t forget that Figure 1 does not show the levels of the six pollutants; it only shows the number of days on which each was the most important. As previous posts have clearly shown, air quality has improved. As we have reduced some types of air pollution, apparently, other types have become more important.

Missouri has come a long way in improving its air quality. To a large extent, it did so in two ways: by kicking some of its coal habit (replacing coal with natural gas and oil as sources of energy), and by requiring large industrial emitters to install pollution control equipment. We have more work to do, especially with regard to ozone and PM2.5, but it has been a significant environmental success story.


Environmental Protection Agency. Air Quality Index Report. This is a data portal operated by the EPA. Data for 2014, Missouri, and grouped by County downloaded on 11/6/2015 from

Missouri Department of Natural Resources. Missouri State Implementation Plan: Infrastructure Elements for the 2012 Annual PM2.5 Standard. Viewed online 3/30/2017 at

Few Unhealthy Air Days in Missouri Counties

Figure 1. Data source: Environmental Protection Agency.

In the previous post, I reported on the percentage of days during which air quality was in the good range in 20 Missouri counties. It is one thing to ask whether a county’s air quality is good, and another to ask if it is so bad that it is unhealthy. This post focuses on the percentage of days with unhealthy air quality.

I looked at data from the EPA’s Air Quality System Data Mart for 20 Missouri counties. The data covered the years 2003-2016, plus the years 1983 and 1993 for a longer term perspective. For a fuller discussion of air quality and the data used for this post, and a map of the 20 counties, see my post Air Quality Update, 2016.

The EPA data distinguishes 4 levels of unhealthy air: Unhealthy for Sensitive Groups, Unhealthy, Very Unhealthy, and Hazardous. No Missouri county was reported to have Very Unhealthy or Hazardous air quality for any of the years I studied. Figure 1 shows the percentage of monitored days for which air quality was either Unhealthy for Sensitive Groups or Unhealthy. The top chart shows a group of counties along the Mississippi River north or south of St. Louis. The middle chart shows a group of counties in the Kansas City-St. Joseph region. The bottom chart shows a group of widely dispersed counties outside of the other two areas.

(Click on chart for larger view).

The percentage of unhealthy air days was 3% or below for all Missouri counties. There were no unhealthy air days at all in 12 of the 20 counties. Compared to 2014, two counties in the Mississippi region and one in the Kansas City region showed very small increases (St. Charles, Jefferson, and Stoddard Counties). Jackson County showed a significant decrease, and the City of St. Louis showed a very small decrease. The other counties all stayed the same.

It is heartening, and good for the lungs too, that no county in Missouri had a significant fraction of days on which the air quality was unhealthy. The state clearly has improved its air quality.


Environmental Protection Agency. Air Quality Index Report. This is a data portal operated by the EPA. Data downloaded on 3/23/2017 from

Missouri Air Quality Improved in 2016

Figure 1. Data source: Environmental Protection Agency.

Air quality in 13 out of 20 counties in Missouri improved in 2016 compared to 2014, while air quality in 7 declined. The data come from the Air Quality System Data Mart maintained by the EPA, which contains data on the air quality of a number of Missouri counties going back to the early 1980s. For a fuller discussion of air quality and the data maintained by the EPA, or for a map of the counties, see my previous post.

Figure 1 shows the percentage of monitored days on which the Air Quality Index was in the Good range. The top chart is for a group of counties along the Mississippi River, the middle chart is for a group of counties in the Kansas City-St. Joseph region, and the bottom chart is for a widely scattered group of counties in neither of the other two groups.

(Click on chart for larger view.)

First, compared to 2014, the percentage of good air days increased in 13 out of the 20 counties. Most of the increases were small, but the percentage of good AQI days jumped by 31% in Cass County, by 28% in Jackson County, by 19% in Buchanan County, and by 16% in the City of St. Louis. The increase in Jackson County is especially notable, as its trend had not been toward significant improvement for several years. It is hard to achieve a very high percentage of good AQI days in large cities, and both Jackson County and the City of St. Louis have made significant progress over the years.

The percentage of good AQI days fell in 7 counties. In three of them, the decline was greater than 10%: Stoddard County (-13%), Clinton County (-12%), and Perry County (-12%).
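The percentages discussed here are simply counts of Good-range days divided by the number of monitored days, and the year-to-year changes are differences between those percentages. A minimal sketch in Python, using hypothetical day counts rather than the actual EPA figures:

```python
def pct_good(good_days, monitored_days):
    """Percent of monitored days with an AQI in the Good range."""
    return 100.0 * good_days / monitored_days

# Hypothetical day counts for one county in two years:
p2014 = pct_good(183, 365)
p2016 = pct_good(296, 366)

# Change expressed in percentage points, as in the discussion above.
print(f"{p2016 - p2014:+.0f} points")  # +31 points
```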

While the overall trend in 2016 was favorable compared to 2014, local factors seem to have controlled the variation between counties. The overall improvement may not be attributable to weather, as 2016 was almost 3°F warmer than 2014.

Second, in almost all Missouri counties the percentage of good air quality days was high in 2016. In no county was it below 60%, and it was 80% or above in 16 out of the 20 counties. As in previous years, the outstate group led in the percentage of good AQI days, which is expected because they don’t experience the concentration of pollution sources that large cities do.

In 2016, Stoddard County and the City of St. Louis tied for the lowest percentage of good air days of any county in Missouri: 64%. For Stoddard County, this represents a significant decline: they had 91% good AQI days in 2015. For the City of St. Louis, however, it represents a continuing trend of improvement from very poor AQI. St. Louis still has plenty of air quality challenges, but we’ve come a long way, baby!

Over a longer term, the chart for the Mississippi counties is encouraging. The lines start pretty low for some of the counties, but have a clear upward trend. The chart for the Other counties is also encouraging. The lines start pretty high, most had an upward trend for a number of years, and in recent years most seem to be staying high. The chart for the Kansas City-St. Joseph counties is more variable, showing yearly ups and downs. When I looked at the 2014 data, the air quality in most of the Kansas City-St. Joseph counties had declined since 1983. By 2016, that trend had largely been reversed.


Environmental Protection Agency. Air Quality Index Report. This is a data portal operated by the EPA. Data downloaded 3/23/2017 from

Air Quality Update, 2016

I last looked at Missouri air quality data through the year 2014. This post begins a series to update the information through 2016. First will come an introduction to the Air Quality Index (AQI) criteria pollutants, then a post on AQI trends over the years, and then a post on the most important pollutants.

Figure 1. The St. Louis Cathedral viewed from the Park Plaza on Black Tuesday (11/28/1939). Source: St. Louis Post-Dispatch.

Missouri has a notorious role in the annals of air quality. On November 28, 1939, a temperature inversion trapped pollutants in St. Louis; a thick cloud of dark smoke blanketed the city, blotting out the sun. The day came to be known as “Black Tuesday,” and it was one of the worst air quality events in recorded history. Figure 1 at right shows a view that day of the St. Louis Cathedral from (I think) the Park Plaza. More photos are available by searching on Google Images for “Black Tuesday St. Louis.”

Since then, many steps have been taken to reduce air pollution, and air quality has improved dramatically. Has the trend continued, or has the trend begun to backslide?

Since the 1980s the EPA has gathered air quality data from cities and counties in Missouri and maintained it in a national database. The following posts look at yearly data from 2003-2016. In addition, to give a longer term perspective, they include data for 1983 and 1993.

Figure 2. Missouri counties with AQI data. Data source: Environmental Protection Agency.

I have been following data for 20 counties in Missouri. Though the EPA data now include 2 more counties, measuring in them began only recently, so meaningful trends over time cannot be inferred. Figure 2 is a map showing the locations of the 20 counties. They can be gathered into three groups: a group along the Mississippi River, a group in the Kansas City-St. Joseph area, and a widely dispersed group that does not fall into either of the other two groups.

The EPA constructs an air quality index based on measurements of 6 criteria pollutants: particulates smaller than 2.5 micrometers, particulates between 2.5 and 10 micrometers, ozone, carbon monoxide, nitrogen dioxide, and sulfur dioxide.


Figure 3. Size difference between human hair and PM2.5 particle.

Particulates are tiny particles of matter that float in the atmosphere. When we breathe, we inhale them, and if there are too many of them, they cause lung damage. There are two size classes: inhalable coarse particles have diameters between 2.5 and 10.0 micrometers, while fine particles have diameters less than 2.5 micrometers. How small is that? The diameter of a human hair is about 70 micrometers, so fine particles are roughly 1/30 the width of a human hair. Figure 3 illustrates the size difference; these are really tiny particles. Recent evidence suggests that fine particles cause serious health problems: they get deep into the lungs, sometimes even entering the bloodstream. (EPA 2015)
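The "roughly 1/30" figure comes directly from the two diameters. A quick check, assuming the approximate 70-micrometer hair width given above:

```python
hair_um = 70.0   # approximate diameter of a human hair, in micrometers
fine_um = 2.5    # upper bound for a fine (PM2.5) particle, in micrometers

ratio = hair_um / fine_um
print(f"a PM2.5 particle is about 1/{ratio:.0f} the width of a hair")
```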

Ozone is a highly corrosive form of oxygen. High in the atmosphere, ozone performs an essential service, absorbing ultraviolet radiation. But at ground level, it is corrosive to plants and animals, and too much of it can cause lung damage.

Sulfur dioxide has a sharp, pungent odor, like a just-struck match. Too much of it causes lung damage, and it also reacts with water vapor in the atmosphere to form sulfuric acid, one of the main ingredients of acid rain. A series of posts I wrote on background air pollution shows that background levels of sulfur dioxide have decreased over the last 30 years. However, concentrations of it can still build up and affect public health near emission sources.

Nitrogen dioxide is corrosive and reacts with other pollutants in sunlight to form smog. It is also one of the main causes of acid rain. Background levels in the atmosphere have decreased, but it, too, can build up locally near emission sources.

Carbon dioxide, the main cause of climate change, is not included in the list of pollutants monitored by the AQI.

The biggest sources of air pollution are power plants, industrial facilities, and cars. These tend to concentrate in urban areas, but air quality can be a concern anywhere; some of Missouri’s air quality monitoring stations are located near rural lead smelters, for instance. Indeed, in my countdown of the largest GHG emitting facilities in Missouri (here), I discovered that 7 out of 10 were located in rural areas. In addition, weather plays an important role in air quality. On some days, weather patterns allow pollution to disperse, but on others they trap it, causing air quality to worsen. Hot, sunny summer days are of particular concern, although unhealthy air quality can happen any time. Black Tuesday was in November, after all.

The EPA has established maximum levels for each pollutant, and it reports the number of days on which there are violations. The EPA also combines the pollutants into an overall Air Quality Index, or AQI, to represent the overall healthfulness of the air. The AQI is a number, but it does not have an obvious meaning. Suppose the median AQI is 75 – what does that mean? So the EPA has created six broad AQI ranges: Good, Moderate, Unhealthy for Sensitive Individuals, Unhealthy, Very Unhealthy, and Hazardous. The EPA reports a yearly AQI number and the number of days on which the AQI falls in each range.
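The six ranges correspond to the EPA's standard AQI breakpoints: Good runs from 0 to 50, Moderate from 51 to 100, and so on up to Hazardous at 301-500. A minimal sketch of classifying a daily AQI value, using the range names above:

```python
# Standard EPA AQI breakpoints: (upper bound of range, range name).
AQI_RANGES = [
    (50, "Good"),
    (100, "Moderate"),
    (150, "Unhealthy for Sensitive Individuals"),
    (200, "Unhealthy"),
    (300, "Very Unhealthy"),
    (500, "Hazardous"),
]

def aqi_category(aqi):
    """Return the AQI range name for a daily AQI value."""
    for upper, name in AQI_RANGES:
        if aqi <= upper:
            return name
    return "Beyond the AQI scale"

print(aqi_category(75))  # Moderate
```

So a median AQI of 75, as in the example above, means a typical day falls in the Moderate range.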

In the following posts, I will update Missouri’s AQI, then the specific pollutants that seem to cause repeated problems.


Environmental Protection Agency. Air Quality Index Report. This is a data portal operated by the EPA. Data downloaded on 3/23/2017 from

Environmental Protection Agency. 2015. Particulate Matter: Basic Information. Viewed online 3/23/2017 at

St. Louis Post-Dispatch. Look Back: Smoky St. Louis. This is a gallery of photos concerning the 1930s smog problem in St. Louis. Photo purchased online from

Wikipedia. 1939 St. Louis Smog. Viewed 11/6/15 at

Missouri Fish Advisory for 2017

Table 1. Source: Missouri Department of Health and Senior Services, 2017.

Eating fish is thought to have health benefits, including cognitive benefits for the young and a reduced risk of cardiovascular disease in adults. However, environmental toxins limit the amount of fish you should eat. The principle of bioaccumulation, which explains why, was reviewed in the previous post.

Whether fish are safe to eat depends on the water where they were caught. The Missouri Department of Health and Senior Services publishes a report identifying lakes, rivers, and streams where environmental contamination requires a fish advisory. This post looks at the report for 2017.

Table 1 lists the bodies of water for which fish advisories have been issued, the species of fish affected, the sizes of fish affected, the contaminants, and the limit that should be observed (serving advice).

Looking at the table, the toxins include chlordane, lead, mercury, and PCBs. In some cases the fish are safe to eat once weekly, in other cases only once monthly, and in some cases they should not be eaten at all. Several species from rivers near Missouri’s old lead belt tend not to be safe at all. I’ve posted on the Big River previously (here).

The advisories are separated into those that apply to all consumers, and those that apply to “sensitive populations.” Please look at who is included under “sensitive populations”: children younger than 13 and women who are either pregnant, nursing, or of childbearing age. Wow, that is a huge portion of the population! For them, there is no fish caught in any body of water in the United States that is safe to consume more than once weekly. And for them, several important species of game fish caught in Missouri waters should only be consumed once monthly.

Figure 1. Source: Missouri Department of Health and Senior Services.

Figure 1 shows a map of the affected bodies of water. Looking at the bodies of water affected, you can see that they cover a lot of territory: the entire lengths of the Mississippi and Missouri Rivers in the state, the major portion of the Big River, the Blue River, Clearwater Lake (ironic, no?), and Montrose Lake.

The contaminants of concern listed by the Missouri report include chlordane, PCBs (polychlorinated biphenyls), lead, and methylmercury. Chlordane was an insecticide, widely used for termite control in residences and on crops. Starting in 1988, sales of chlordane were banned in the United States. However, chlordane persists in the environment. It adheres to soil particles in the ground and very slowly dissolves into groundwater, which carries it to rivers and lakes. Once in the water, it bioaccumulates. That is why, even though banned in 1988, it is still a contaminant of concern in Missouri fish. Elevated levels of chlordane in the blood are associated with an increased risk of cognitive decline, prostate cancer, type-2 diabetes, and obesity. According to the Missouri report, levels of chlordane are gradually decreasing, but remain a concern in some bodies of water.

PCBs are a family of chemicals that were once widely used as insulating and cooling liquids in electrical equipment. PCBs were banned in the United States in 1979; however, they are extremely long-lived compounds, and an estimated 40% of all the PCBs ever manufactured remain in use. Toxicity varies among the specific chemicals in the family. Exposure to PCBs can cause a variety of health effects, including rashes, reduced immune function, poor cognitive development in children, liver damage, and an increased risk of cancer. PCBs in the environment generally enter bodies of water, where they enter the bodies of aquatic species and bioaccumulate up the food chain. According to the Missouri report, levels of PCBs are gradually decreasing, but remain a concern in some bodies of water.

Lead is a heavy metal that was once heavily mined in Missouri. Lead mining continues, and as recently as 2014, more lead was released into the environment in Missouri than any other toxic chemical. (See here.) Lead used to be released into the environment through the tetraethyl lead added to gasoline and through lead paint; both of those uses have been banned in the United States. Today, lead enters the environment through mine tailings, so it is of greatest concern in locations that have or had significant lead mining activity (portions of southeastern Missouri, for instance). Tailings containing lead were (and are) dumped on the ground, and from the tailings lead washes into nearby bodies of water, where it is ingested by aquatic species and then bioaccumulates. Lead is readily absorbed by living tissue and affects almost every organ and system in the body. At high levels it can be immediately dangerous to life and health. At lower levels, symptoms include abdominal pain, weakness in the fingers, wrists, and ankles, increased blood pressure, miscarriage, delayed puberty, and cognitive impairment.

Mercury enters the environment from many sources. One important source is coal, which contains trace amounts of mercury; when the coal is burned, the mercury is emitted up the flue. Though the amount in any lump of coal is tiny, so much coal is burned to produce energy that tons of mercury are emitted every year. The mercury falls out of the atmosphere and gets washed into bodies of water. There, microbes convert it into methylmercury, which is ingested by aquatic species and bioaccumulates. In children, a high level of methylmercury has been associated with language and memory deficits, reduced IQ, and learning disabilities. In adults, it has been associated with an increased risk of cardiovascular disease and autoimmune conditions.

It seems to me that for all of these contaminants, the situation may be slowly improving, though it is still problematic. The persistence of these contaminants in the environment, in many cases decades after their manufacture was banned, demonstrates an important environmental principle: the environmental problems you create may not go away quickly. They are likely to remain with you for a long, long time.


Missouri Department of Health and Senior Services. 2017. 2017 Missouri Fish Advisory: A Guide to Eating Missouri Fish. Downloaded 3/9/17 from

Wikipedia. Chlordane. Viewed online 3/15/2017 at

Wikipedia. Lead. Viewed online 3/15/2017 at

Wikipedia. Methylmercury. Viewed online 3/15/2017 at

Wikipedia. Polychlorinated biphenyl. Viewed online 3/15/2017 at

Environmental Toxins Limit Fish Consumption

Eating fish may be good for you, or it may poison you. (Pick one)

In the 1970s, researchers reported that native people living in Greenland (the Inuit) had very low rates of heart disease compared with counterparts living in Denmark. Scientists attributed these health benefits to the consumption of fish and sea mammals containing high levels of long-chain polyunsaturated fatty acids. Recently, however, research has questioned the accuracy of those early studies: more recent work shows that rates of heart disease and heart attack among the Inuit are similar to those in non-Inuit populations. Thus, there has been some question about how strong the association between fish consumption and reduced cardiovascular risk really is. The situation reminds me of one of my favorite sayings: It ain’t what we don’t know that’s gonna hurt us, it’s what we do know that just ain’t so.

Over the years, thousands of research studies have been conducted, with the result that the consumption of fish is included in most dietary guidelines. The benefits are primarily considered to be the previously mentioned reduction in the risk of coronary heart disease in adults, but also an improvement in cognitive development in infants and young children.

The current dietary guidelines in the USA have moved away from the concept of the minimum daily requirement. Instead they describe recommended patterns of healthy eating. The recommendation for seafood has not changed, however: 8 oz. of seafood per week. (Dietary Guidelines, p. 18)

It is generally recognized, however, that some fish species contain significant levels of contaminants. These contaminants include a number of really nasty poisons, including chlordane, polychlorinated biphenyls (PCBs), lead, and methylmercury. These compounds can be toxic even in very small amounts, and they are bioaccumulative.

Bioaccumulation is an important concept in understanding environmental toxins. The basic idea is that even tiny amounts of toxin can build up in the body. Here’s how: at any given feeding, a toxin may be eaten in such tiny amounts that there is no immediate effect on the animal that consumes it. However, it is absorbed by the body, and it is not readily eliminated by natural processes. Thus, over time, the amount in the body builds up each time the animal eats a little more.

Imagine a lake. Mercury emitted by coal-burning power plants falls into the lake, where microbes convert it to methylmercury. Algae living in the lake take in some of that methylmercury. Along comes a tiny fish fry, and it eats some of that algae. Now with each mouthful of algae, the fish fry ingests a dose of methylmercury. And it starts to build up. How many mouthfuls of algae does a fish fry eat? I don’t know, but it is quite a lot. Now, along comes a medium-sized fish, and it eats the fish fry. With one bite, it has ingested not just a tiny amount of methylmercury, but all the methylmercury that built up in the body of the fish fry during its lifetime. How many fish fry does a medium-sized fish eat? I don’t know, but it is quite a few, and the medium-sized fish ingests all of the methylmercury built up in the bodies of each fish it eats. Now, along comes a large fish, and it eats the medium-sized fish. With one bite, it has ingested not just a tiny amount of mercury, but all the methylmercury built up in the body of the medium-sized fish. How many medium-sized fish does a large fish eat? I don’t know, but it is quite a few, and the large fish ingests all of the methylmercury built up in the bodies of each fish it eats.

Now, let’s imagine that our fish are living in a Missouri lake. Along comes a fisherman, and he catches one fish per week and eats it. That will be 52 fish per year. Now, I don’t know what the actual numbers are, but let us assume that a fish fry eats 1,000 individual alga, while a medium-sized fish eats 100 fry, and a large fish eats 100 medium-sized fish. These estimates may be wildly wrong, but the point is to illustrate the principle of bioaccumulation, and they will allow us to do so.

Using the estimates above, each fish fry will ingest the methylmercury contained in 1,000 algae; each medium-sized fish will ingest the amount contained in 100,000 algae; each large fish will ingest the amount contained in 10 million algae; and in a year, our fisherman will ingest the amount contained in 520 million algae. If he continues for 10 years, he will consume the amount contained in 5.2 billion algae.
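The arithmetic in this progression is just a chain of multiplications, and it can be sketched in a few lines of Python using the post's admittedly arbitrary numbers:

```python
# The post's illustrative (and admittedly arbitrary) consumption numbers.
ALGAE_PER_FRY = 1_000       # algae eaten by one fish fry
FRY_PER_MEDIUM = 100        # fry eaten by one medium-sized fish
MEDIUM_PER_LARGE = 100      # medium-sized fish eaten by one large fish
FISH_PER_YEAR = 52          # large fish eaten by the fisherman each year

# Each step up the food chain multiplies the accumulated dose.
dose_fry = ALGAE_PER_FRY                      # 1,000 algae-doses
dose_medium = dose_fry * FRY_PER_MEDIUM       # 100,000
dose_large = dose_medium * MEDIUM_PER_LARGE   # 10,000,000
dose_year = dose_large * FISH_PER_YEAR        # 520,000,000
dose_decade = dose_year * 10                  # 5,200,000,000

print(f"{dose_decade:,} algae-doses over ten years")
```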

Over time, the amount of methylmercury in our fisherman’s body will build up, perhaps eventually reaching the point where it starts to poison him.

Now, my presentation is over-simplified; in real life bioaccumulation is much more complex. Further, the numbers I chose for my progression were totally arbitrary. Nonetheless, they illustrate the basic idea of bioaccumulation. And the principle applies not only to methylmercury, but also to lead, PCBs, and dioxins.

The result is that, however good for you eating fish may be in theory, there are limits due to environmental contaminants. The next post will look at what those limits are in Missouri.


Committee on a Framework for Assessing the Health, Environmental, and Social Effects of the Food System; Food and Nutrition Board; Board on Agriculture and Natural Resources; Institute of Medicine; National Research Council; Nesheim MC, Oria M, Yih PT, editors. A Framework for Assessing Effects of the Food System. Washington (DC): National Academies Press (US); 2015 Jun 17. ANNEX 1, DIETARY RECOMMENDATIONS FOR FISH CONSUMPTION. Available from:

Missouri Department of Health and Senior Services. 2017. 2017 Missouri Fish Advisory: A Guide to Eating Missouri Fish. Downloaded 3/9/17 from

U.S. Department of Health and Human Services and U.S. Department of Agriculture.
2015–2020 Dietary Guidelines for Americans. 8th Edition. December 2015. Available at

The Challenge of Urban Sustainability

Figure 1. The triple bottom line.


There is no generally accepted definition of urban sustainability. A recent report issued by the National Academies of Sciences, Engineering, and Medicine defines it as “the process by which the measurable improvement of near- and long-term human well-being can be achieved” in three areas: environmental, economic, and social. These three areas constitute the “triple bottom line” we hear so much about these days. The report conceptualizes them as combining to represent urban sustainability as illustrated in Figure 1 at right. By mentioning “near- and long-term” welfare, the report points to a popular conceptualization of sustainability: not compromising future welfare in the pursuit of short-term goals.




Figure 2. Top National Priorities. Source: Pew Research Center.


This blog typically focuses on the environmental part of sustainability. Research consistently indicates that, while a large majority of Americans favor protecting the environment, they consistently rank its importance below other national priorities. For instance, Figure 2 shows the results of Pew Research Center polls asking Americans which issues should be top policy priorities. The chart shows that out of 20 issues, protecting the environment ranks 14th, and dealing with global warming ranks 19th. Polls in 2009 and 2013 had similar results. I feel that the capacity of the planet to support life should not be a low priority; I focus on it because it is neglected.

The specific urban processes that might underlie urban sustainability are still under conceptual development. The real purpose of the report is to review that work. It looks at 4 sustainability rating systems that have been developed: the American Green City Index (EIU, 2011), the Urban Sustainability Indicators (Mega and Pedersen, 1998), the Sustainable Cities Index (Arcadis, 2015), and the Sustainable Urban Development Indicators (Lynch et al., 2011). In addition, the report develops its own rating metrics by looking at 9 North American urban centers, plus the United States itself, to see which systems are being monitored, and which specific indicators are used to monitor them. The 9 cities are Cedar Rapids, Chattanooga, Flint, Grand Rapids, Los Angeles, New York, Philadelphia, Pittsburgh, and Vancouver.

Table 1. Environmental Indicators. Source: National Academies.


Table 1 at right shows the results of the review. I have adapted the table to focus only on the environmental indicators, and to eliminate the scholarly references.

If you are interested in this conceptual work, the report would make important reading. I suspect that many readers of this blog, however, want to know the results: which cities rate as sustainable, and which don’t. As I said, the metrics are still under conceptual development, and I could find only one rating system that has actually been applied to cities in the United States: the US and Canada Green City Index. It rates 27 North American cities, providing separate ratings on policies related to CO2 emissions, energy mix and consumption, land use, green buildings, green transportation, water consumption and purity, waste management, air quality, and environmental governance. These are also combined into an overall index.

Figure 3. Source: Economist Intelligence Unit.


Figure 3 shows the results for the overall index. St. Louis is the only urban area in Missouri represented, and it comes in 26th out of 27; only Detroit ranks lower.

The index values have no specific meaning other than as scores on this particular index. Thus, the absolute values probably have no interpretable meaning, though they do have relative meaning in comparison to each other. What disturbs me is not that St. Louis is low on the scale (anybody familiar with the city would suspect as much), but how far behind the city is.

In considering this chart, please be aware that the index was not constructed by an academic or governmental body. It was developed by the Economist Intelligence Unit (part of The Economist Group) in cooperation with Siemens AG (a German corporation). This does not mean its conclusions are invalid, but it may mean that the work hasn’t undergone the review processes that academic and governmental publications do.


Arcadis. 2015. Sustainable Cities Index 2015: Balancing the Economic, Social and Environmental Needs of the World’s Leading Cities. Available at
Economist Intelligence Unit. 2010. US and Canada Green City Index. Munich, Germany: Siemens AG. Downloaded 2/26/17 from

Lynch, A. J., S. Andreason, T. Eisenman, J. Robinson, K. Steif, and E. L. Birch. 2011. Sustainable Urban Development Indicators for the United States. Report to the Office of International and Philanthropic Innovation, Office of Policy Development and Research, U.S. Department of Housing and Urban Development. Philadelphia: Penn Institute for Urban Research. Online. Available at uploads/media/sustainable-urban-development-indicators-for-the-united-states.pdf.
Mega, V., and J. Pedersen. 1998. Urban sustainability indicators. Dublin, Ireland: European Foundation for the Improvement of Living and Working Conditions. Online. Available at pdf.
National Academies of Sciences, Engineering, and Medicine. 2016. Pathways to Urban Sustainability: Challenges and Opportunities for the United States. Washington, DC: The National Academies Press. doi: 10.17226/23551. Downloaded 1/12/2017 from