In e-mail, Dr. Mangano has asked some important questions about the methods used to generate the data for this story. We are in the process of reviewing existing data and obtaining additional data. We will update this piece within the next week. Until then, this article in Scientific American offers a similar critique to ours.
The researchers used data from the Centers for Disease Control and Prevention’s Morbidity and Mortality Weekly Report (MMWR). The MMWR includes data from 122 U.S. cities reporting deaths for the week. The number of deaths for each city is broken down by age group, and deaths related to pneumonia and influenza are also broken out. All data are preliminary, and it takes nearly two years for the CDC to finalize the numbers. The report does not specify causes of death.
The authors used the data from the MMWR reports for weeks 12 to 25, March 20 to June 25, 2011. That data was compared to the same period in 2010, as well as the fourteen weeks prior.
We looked at infant deaths, meaning children under 12 months of age. The study reports that infant deaths in the 122 cities rose by 1.8 percent year over year. Year over year for the prior fourteen-week period, they declined 8.37 percent. The authors calculate that 822 infant deaths during the fourteen-week study period were “excess.”
Recall that we used the term “preliminary” to describe this data. We used the CDC’s database to search for the same data for the entire country for the study period. Our fourteen-week total of infant deaths for 2011 agrees with the study: 2,743 deaths. However, the total for 2010 differs, and that difference alters their analysis. They report 2,722 infant deaths for the period in 2010, while the current CDC count is 2,754. By the current count, infant deaths went down year over year, not up by 1.8 percent.
Looking at the prior fourteen weeks’ data, MMWR weeks 50-11, the difference is even more striking. In the 2009-2010 period, infant deaths from our CDC data set equal 2,859. For the fourteen weeks prior to the study period, our 2010-2011 infant death total is 2,608. Infant deaths fell far more than the study states.
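To make the two comparisons concrete, the year-over-year percent changes implied by the totals above can be computed directly. This is a minimal sketch using the figures cited in this article; the helper function is our own illustration, not part of any official CDC tool.

```python
# Year-over-year percent change in infant deaths, using the CDC totals
# cited in this article (illustrative only; not an official CDC series).

def pct_change(current, prior):
    """Percent change from the prior-period total to the current-period total."""
    return round((current - prior) / prior * 100, 2)

# Study period, MMWR weeks 12-25: 2011 total vs. current CDC 2010 total.
# A decline, not the +1.8 percent increase the study reports.
study_period = pct_change(2743, 2754)    # -0.4

# Prior fourteen weeks, MMWR weeks 50-11: 2010-2011 total vs. 2009-2010 total.
prior_period = pct_change(2608, 2859)    # -8.78

print(study_period, prior_period)
```

By these figures, infant deaths declined slightly during the study period and fell by roughly 8.8 percent in the prior fourteen weeks, somewhat more than the 8.37 percent decline the study reports.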
Among the weaknesses in this study is its failure to correct for pneumonia and influenza deaths. We included 2009 in our data. Weekly infant mortality totals for 2009 during the study period are much higher than for either 2010 or 2011, because the novel H1N1 pandemic was then affecting mortality in the United States. Influenza is a key factor in mortality during the time periods the study examined.
The cities represent about 25 percent of the national population. Other questionable assumptions are that any effects from radioactive fallout would be evenly distributed across the 122 cities in the data and that the effects in those cities would equal those in the rest of the nation.
The population rank tables provided in the study, tables four and five, show mortality variances that do not correlate with geography. One would expect West Coast cities to have a higher exposure to any Fukushima fallout, and thus to show higher mortality rates. Yet it is Houston, not Los Angeles or San Diego, that leads both tables in mortality increase.
We believe that the study’s authors have not proven their thesis. The current data refutes part of their analysis with respect to infant mortality. The authors’ use of just one prior year fails the common-sense test that more than two data points are needed to establish a trend. The authors have not corrected the data for the actual geographic distribution of fallout, nor did they make any attempt to correct for the effects of seasonal influenza or other variables such as violent deaths. Without causes of death, any “excess deaths” that may be found cannot be attributed to Fukushima radiation and fallout.
At this point, if excess deaths exist at all, their overlap with the period in which measurable amounts of radioactive byproducts from the Fukushima nuclear accident were found in the U.S. is just that: a coincidence.