Roy Spencer Congressional Testimony

I am a bit late on this, but Roy Spencer raises a number of good issues here in his testimony to Congress.  In particular, he focuses on just how much climate alarmists’ assumption of strong positive feedback drives catastrophic forecasts.  Put in more realistic, better justified feedback assumptions, and the catastrophe goes away.

Testimony of Roy W. Spencer before the
Senate Environment and Public Works Committee on 22 July 2008

A printable PDF of this testimony can be found here

I would like to thank Senator Boxer and members of the Committee for allowing me to discuss my experiences as a NASA employee engaged in global warming research, as well as to provide my current views on the state of the science of global warming and climate change.

I have a PhD in Meteorology from the University of Wisconsin-Madison, and have been involved in global warming research for close to twenty years. I have numerous peer reviewed scientific articles dealing with the measurement and interpretation of climate variability and climate change. I am also the U.S. Science Team Leader for the AMSR-E instrument flying on NASA’s Aqua satellite.

1. White House Involvement in the Reporting of Agency Employees’ Work

On the subject of the Administration’s involvement in policy-relevant scientific work performed by government employees in the EPA, NASA, and other agencies, I can provide some perspective based upon my previous experiences as a NASA employee. For example, during the Clinton-Gore Administration I was told what I could and could not say during congressional testimony. Since it was well known that I am skeptical of the view that mankind’s greenhouse gas emissions are mostly responsible for global warming, I assumed that this advice was to help protect Vice President Gore’s agenda on the subject.

This did not particularly bother me, though, since I knew that as an employee of an Executive Branch agency my ultimate boss resided in the White House. To the extent that my work had policy relevance, it seemed entirely appropriate to me that the privilege of working for NASA included a responsibility to abide by direction given by my superiors.

But I eventually tired of the restrictions I had to abide by as a government employee, and in the fall of 2001 I resigned from NASA and accepted my current position as a Principal Research Scientist at the University of Alabama in Huntsville. Despite my resignation from NASA, I continue to serve as Team Leader on the AMSR-E instrument flying on the NASA Aqua satellite, and maintain a good working relationship with other government researchers.

2. Global Warming Science: The Latest Research
Regarding the currently popular theory that mankind is responsible for global warming, I am very pleased to deliver good news from the front lines of climate change research. Our latest research results, which I am about to describe, could have an enormous impact on policy decisions regarding greenhouse gas emissions.
Despite decades of persistent uncertainty over how sensitive the climate system is to increasing concentrations of carbon dioxide from the burning of fossil fuels, we now have new satellite evidence which strongly suggests that the climate system is much less sensitive than is claimed by the U.N.’s Intergovernmental Panel on Climate Change (IPCC).

Another way of saying this is that the real climate system appears to be dominated by “negative feedbacks” — instead of the “positive feedbacks” which are displayed by all twenty computerized climate models utilized by the IPCC. (Feedback parameters larger than 3.3 Watts per square meter per degree Kelvin (W m⁻² K⁻¹) indicate negative feedback, while feedback parameters smaller than 3.3 indicate positive feedback.)
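To put rough numbers on this convention: with the standard estimate of about 3.7 W m⁻² of radiative forcing for a doubling of CO2, and the simple linear relation ΔT = F/λ, the warming implied by a given feedback parameter is easy to compute.  The sketch below is a back-of-envelope illustration using those conventional values, not a figure taken from the testimony itself.

```python
# Back-of-envelope: equilibrium warming for doubled CO2 implied by a given
# total feedback parameter, via the simple linear relation dT = F / lambda.
# 3.7 W/m^2 is the conventional forcing estimate for doubled CO2.
F_2XCO2 = 3.7  # W/m^2

for lam in [1.0, 2.0, 3.3, 4.0, 6.0]:  # feedback parameter, W/m^2/K
    print(f"lambda = {lam:.1f} W/m^2/K  ->  dT2x = {F_2XCO2 / lam:.2f} K")

# lambda > 3.3 (net negative feedback in the testimony's convention) implies
# roughly 1 K of warming or less for doubled CO2; smaller lambda (net
# positive feedback) implies progressively more.
```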

If true, an insensitive climate system would mean that we have little to worry about in the way of manmade global warming and associated climate change. And, as we will see, it would also mean that the warming we have experienced in the last 100 years is mostly natural. Of course, if climate change is mostly natural then it is largely out of our control, and is likely to end — if it has not ended already, since satellite-measured global temperatures have not warmed for at least seven years now.

2.1 Theoretical evidence that climate sensitivity has been overestimated
The support for my claim of low climate sensitivity (net negative feedback) for our climate system is two-fold. First, we have a new research article [1], in press at the Journal of Climate, which uses a simple climate model to show that previous estimates of the sensitivity of the climate system from satellite data were biased toward the high side by the neglect of natural cloud variability. It turns out that the failure to account for natural, chaotic cloud variability generated internal to the climate system will always lead to the illusion of a climate system which appears more sensitive than it really is.

Significantly, prior to its acceptance for publication, this paper was reviewed by two leading IPCC climate model experts – Piers Forster and Isaac Held – both of whom agreed that we have raised a legitimate issue. Piers Forster, an IPCC report lead author and a leading expert on the estimation of climate sensitivity, even admitted in his review of our paper that other climate modelers need to be made aware of this important issue.

To be fair, in a follow-up communication Piers Forster stated to me his belief that the net effect of the new understanding on climate sensitivity estimates would likely be small. But as we shall see, the latest evidence now suggests otherwise.

2.2 Observational evidence that climate sensitivity has been overestimated
The second line of evidence in support of an insensitive climate system comes from the satellite data themselves. While our work in press established the existence of an observational bias in estimates of climate sensitivity, it did not address just how large that bias might be.

But in the last several weeks, we have stumbled upon clear and convincing observational evidence of particularly strong negative feedback (low climate sensitivity) from our latest and best satellite instruments. That evidence includes our development of two new methods for extracting the feedback signal from either observational or climate model data, a goal which has been called the “holy grail” of climate research.
The first method separates the true signature of feedback, wherein radiative flux variations are highly correlated to the temperature changes which cause them, from internally-generated radiative forcings, which are uncorrelated to the temperature variations which result from them. It is the latter signal which has been ignored in all previous studies, the neglect of which biases feedback diagnoses in the direction of positive feedback (high climate sensitivity).
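The size of this bias can be demonstrated with a toy simulation; the numbers below are invented for illustration and are not the actual satellite analysis.  Drive a simple energy-balance model with random internally generated radiative forcing, then diagnose feedback the naive way, by regressing flux on temperature:

```python
import numpy as np

rng = np.random.default_rng(0)

lam_true = 6.0      # true feedback parameter, W/m^2/K (strong negative feedback)
C = 1.0e8           # assumed ocean mixed-layer heat capacity, J/m^2/K
dt = 86400.0 * 30   # monthly time step, s
n = 600             # 50 years of monthly data

# Internally generated radiative forcing (e.g., chaotic cloud variations),
# smoothed so it persists for a few months like real cloud regimes.
F = rng.normal(0.0, 1.0, n)               # W/m^2
F = np.convolve(F, np.ones(6) / 6, mode="same")

# Simple energy balance: C dT/dt = F - lam * T
T = np.zeros(n)
for i in range(1, n):
    T[i] = T[i - 1] + dt * (F[i - 1] - lam_true * T[i - 1]) / C

# What a satellite sees: net outgoing TOA flux anomaly
R = lam_true * T - F

# Naive feedback diagnosis: regression slope of flux on temperature
slope = np.polyfit(T, R, 1)[0]
print(f"true lambda = {lam_true:.1f}, diagnosed lambda = {slope:.2f}")
# The diagnosed value comes out well below the true one -- i.e., biased
# toward positive feedback -- because part of T is caused by F itself.
```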
Based upon global oceanic climate variations measured by a variety of NASA and NOAA satellites during the period 2000 through 2005, we have found a signature of climate sensitivity so low that it would reduce future global warming projections to below 1 deg. C by the year 2100. As can be seen in Fig. 1, that estimate from satellite data is much less sensitive (a larger diagnosed feedback) than even the least sensitive of the 20 climate models which the IPCC summarizes in its report. It is also consistent with our previously published analysis of feedbacks associated with tropical intraseasonal oscillations [3].

Fig. 1. Frequency distributions of feedback parameters (regression slopes) computed from three-month low-pass filtered time series of temperature (from channel 5 of the AMSU instrument flying on the NOAA-15 satellite) and top-of-atmosphere radiative flux variations for 6 years of global oceanic satellite data measured by the CERES instrument flying on NASA’s Terra satellite; and from a 60 year integration of the NCAR-CCSM3.0 climate model forced by 1% per year CO2 increase. Peaks in the frequency distributions indicate the dominant feedback operating. This NCAR model is the least sensitive (greatest feedback parameter value) of all 20 IPCC models.
A second method for extracting the true feedback signal takes advantage of the fact that during natural climate variability, there are varying levels of internally-generated radiative forcings (which are uncorrelated to temperature), versus non-radiative forcings (which are highly correlated to temperature). If the feedbacks estimated for different periods of time involve different levels of correlation, then the “true” feedback can be estimated by extrapolating those results to 100% correlation. This can be seen in Fig. 2, which shows that even previously published [4] estimates of positive feedback are, in reality, supportive of negative feedback (feedback parameters greater than 3.3 W m⁻² K⁻¹).

Fig. 2. Re-analysis of the satellite-based feedback parameter estimates of Forster and Gregory (2006) showing that they are consistent with negative feedback rather than positive feedback (low climate sensitivity rather than high climate sensitivity).
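The extrapolation step itself is simple to sketch.  The feedback estimates and correlations below are invented purely to show the mechanics, not values from Forster and Gregory (2006):

```python
import numpy as np

# Diagnosed feedback parameters from several hypothetical periods, each with
# a different flux-temperature correlation (all numbers invented).
corr = np.array([0.4, 0.55, 0.7, 0.8, 0.9])     # |correlation| in each period
lam_est = np.array([1.1, 1.8, 2.4, 2.9, 3.4])   # diagnosed feedback, W/m^2/K

# Fit a line and extrapolate to perfect correlation (r = 1), where the
# contaminating internal radiative forcing would vanish.
slope, intercept = np.polyfit(corr, lam_est, 1)
print(f"extrapolated feedback at r = 1: {slope + intercept:.2f} W/m^2/K")
# With these made-up numbers the extrapolated value is about 3.8 W/m^2/K --
# past the 3.3 threshold, i.e. net negative feedback -- even though every
# raw estimate taken at face value suggests positive feedback.
```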

2.3 Why do climate models produce so much global warming?
The results just presented raise the following question: If the satellite data indicate an insensitive climate system, why do the climate models suggest just the opposite? I believe the answer is due to a misinterpretation of cloud behavior by climate modelers.

The cloud behaviors programmed into climate models (cloud “parameterizations”) are based upon researchers’ interpretation of cause and effect in the real climate system [5]. When cloud variations in the real climate system have been measured, it has been assumed that the cloud changes were the result of certain processes, which are ultimately tied to surface temperature changes. But since other, chaotic, internally generated mechanisms can also be the cause of cloud changes, the neglect of those processes leads to cloud parameterizations which are inherently biased toward high climate sensitivity.

The reason why the bias occurs only in the direction of high climate sensitivity is this: While surface warming could conceivably cause cloud changes which lead to either positive or negative cloud feedback, causation in the opposite direction (cloud changes causing surface warming) can only work in one direction, which then “looks like” positive feedback. For example, decreasing low cloud cover can only produce warming, not cooling, and when that process is observed in the real climate system and assumed to be a feedback, it will always suggest a positive feedback.
2.4 So, what has caused global warming over the last century?
One necessary result of low climate sensitivity is that the radiative forcing from greenhouse gas emissions in the last century is not nearly enough to explain the upward trend of 0.7 deg. C in the last 100 years. This raises the question of whether there are natural processes at work which have caused most of that warming.
On this issue, it can be shown with a simple climate model that small cloud fluctuations assumed to occur with two modes of natural climate variability — the El Nino/La Nina phenomenon (Southern Oscillation), and the Pacific Decadal Oscillation — can explain 70% of the warming trend since 1900, as well as the nature of that trend: warming until the 1940s, no warming until the 1970s, and resumed warming since then. These results are shown in Fig. 3.

Fig. 3. A simple climate model forced with cloud cover variations assumed to be proportional to a linear combination of the Southern Oscillation Index (SOI) and Pacific Decadal Oscillation (PDO) index. The heat flux anomalies in (a), which then result in the modeled temperature response in (b), are assumed to be distributed over the top 27% of the global ocean (1,000 meters), and weak negative feedback has been assumed (4 W m⁻² K⁻¹).
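A one-box model of this kind is only a few lines of code.  The sketch below uses a made-up red-noise forcing as a stand-in for the actual SOI/PDO-scaled cloud term, with the mixing depth and feedback value taken from the caption above:

```python
import numpy as np

rho, cp = 1025.0, 3990.0   # seawater density (kg/m^3), heat capacity (J/kg/K)
depth = 1000.0             # mixing depth, m (per the figure caption)
C = rho * cp * depth       # effective heat capacity, J/m^2/K
lam = 4.0                  # assumed weak negative feedback, W/m^2/K
dt = 86400.0 * 30          # monthly step, s

months = 12 * 108          # roughly 1900-2008
rng = np.random.default_rng(1)
# Stand-in for a*SOI(t) + b*PDO(t): slowly wandering cloud-induced forcing.
F = np.cumsum(rng.normal(0.0, 0.02, months))  # W/m^2

# One-box energy balance: C dT/dt = F - lam * T
T = np.zeros(months)
for i in range(1, months):
    T[i] = T[i - 1] + dt * (F[i - 1] - lam * T[i - 1]) / C

print(f"simulated anomaly range: {T.min():+.2f} to {T.max():+.2f} K")
```

The point of the exercise is that with a deep ocean and modest negative feedback, even small sustained cloud forcing produces slow temperature drifts that play out over decades.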

While this is not necessarily being presented as the only explanation for most of the warming in the last century, it does illustrate that there are potential explanations for recent warming other than just manmade greenhouse gas emissions. Significantly, this is an issue on which the IPCC has remained almost entirely silent. There has been virtually no published work on the possible role of internal climate variations in the warming of the last century.

3. Policy Implications
Obviously, what I am claiming today is of great importance to the global warming debate and related policy decisions, and it will surely be controversial. These results are not totally unprecedented, though, as other recently published research [6] has also led to the conclusion that the real climate system does not exhibit net positive feedback.

While it will take some time for the research community to digest this new information, it must be mentioned that new research contradicting the latest IPCC report is entirely consistent with the normal course of scientific progress. I predict that in the coming years, there will be a growing realization among the global warming research community that most of the climate change we have observed is natural, and that mankind’s role is relatively minor.

While other researchers need to further explore and validate my claims, I am heartened by the fact that my recent presentation of these results to an audience of approximately 40 weather and climate researchers at the University of Colorado in Boulder last week (on July 17, 2008) led to no substantial objections to either the data I presented or to my interpretation of those data.

And, curiously, despite its importance to climate modeling activities, no one from Dr. Kevin Trenberth’s facility, the National Center for Atmospheric Research (NCAR), bothered to drive four miles down the road to attend my seminar, even though it was advertised at NCAR.

I hope that the Committee realizes that, if true, these new results mean that humanity will be largely spared the negative consequences of human-induced climate change. This would be good news that should be celebrated — not attacked and maligned.

And given that virtually no research into possible natural explanations for global warming has been performed, it is time for scientific objectivity and integrity to be restored to the field of global warming research. This Committee could, at a minimum, make a statement that encourages that goal.

REFERENCES
1. Spencer, R.W., and W.D. Braswell, 2008: Potential biases in cloud feedback diagnosis: A simple model demonstration. J. Climate, in press.
2. Allen, M.R., and D.J. Frame, 2007: Call off the quest. Science, 318, 582.
3. Spencer, R.W., W.D. Braswell, J.R. Christy, and J. Hnilo, 2007: Cloud and radiation budget changes associated with tropical intraseasonal oscillations. Geophys. Res. Lett., 34, L15707, doi:10.1029/2007GL029698.
4. Forster, P.M., and J.M. Gregory, 2006: The climate sensitivity and its components diagnosed from Earth Radiation Budget data. J. Climate, 19, 39-52.
5. Stephens, G.L., 2005: Cloud feedbacks in the climate system: A critical review. J. Climate, 18, 237-273.
6. Schwartz, S.E., 2007: Heat capacity, time constant, and sensitivity of the Earth’s climate system. J. Geophys. Res., 112, D24S05, doi:10.1029/2007JD008746.

Global Warming Is Caused by Everything Our Interest Group Opposed Before It Came Along As An Issue

Many leftish groups have for years had a curious opposition to advertising.  Ralph Nader and his PIRG groups always made it a particular issue.  This always struck me as inherently insulting, as the "logic" behind their opposition to advertising is that people are all dumb, unthinking, programmable robots who launch off and buy whatever they see advertised on TV.

The global warming hysteria kind of sucks all the oxygen out of every other goofy leftish issue out there, so now it’s necessary to link your leftish cause to global warming.  So it is no surprise to find out that advertising apparently causes global warming:

AUSTRALIAN television advertising is producing as much as 57 tonnes of carbon dioxide per hour, and thirty second ad breaks are among the worst offenders, according to audit figures from pitch consultants TrinityP3.

Carbon emissions are particularly strong during high-rating programs such as the final episodes of the Ten Network’s Biggest Loser, which produced 2135kgs per 30 second ad, So You Think You Can Dance at 2061kg for every 30 seconds, closely followed by the Seven News 6pm news at 1689kg and Border Security at 1802kg.

TrinityP3 managing director Darren Woolley said emissions are calculated by measuring a broadcasters’ power consumption and that of a consumer watching an ad on television in their home, B&T Magazine reports.

“We look at the number of households and the number of TVs, and then the proportion of TVs that are plasma, LCD or traditional, and calculate energy consumption based on those factors,” Woolley said.

TrinityP3 is formalising a standard carbon footprint measurement of advertising, which it claims will be the first of its kind.

“Most companies have been obliged to think through their strategies on reducing carbon emissions and they need to remember that their marketing strategies do have an environmental impact that needs to be included. This is not something that is easily able to be measured,” Mr Woolley said.

“Reality television is interesting as the more viewers and voters that tune in, the higher the carbon footprint. The more people vote, the more it adds to the CO2 in the atmosphere.

Note that, oddly, the 54 minutes an hour of regular programming is OK, it’s only the 6 minutes of advertising that has a carbon footprint.  That’s OK, though, because I am going to start turning off the TV during advertisements and go out and sit in my idling SUV and listen to my commercial-free satellite radio instead.


Some Day Climate May Be A Big-Boy Science

In big-boy science, people who run an experiment and arrive at meaningful findings will publish not only those findings but the data and methodology they used to reach those findings.  They do that because in most sciences, a conclusion is not really considered robust until multiple independent parties have replicated the finding, and they can’t replicate the finding until they know exactly how it was reached.  Physicists don’t run around talking about peer review as the be-all and end-all of scientific validation.  Instead of relying on peers to read over an article to look for mistakes, they go out and see if they can replicate the results.  It is expected that others in the profession will try to replicate, or even tear down, a controversial new finding.  Such a process is why we aren’t all running around talking about the cold fusion "consensus" based on "peer-reviewed science."  It would simply be bizarre for someone in physics, say, to argue that their findings were beyond question simply because they had been peer reviewed by a cherry-picked review group, and to refuse to publish their data or detailed methodology.

Some day climate science may be all grown up, but right now it’s far from it.

1990: A Year Selected Very Carefully

Most of you will know that the Kyoto Treaty adopted CO2 reduction goals referenced to a base year of 1990.  But what you might not know is exactly how that year was selected.  Why would a treaty negotiated and signed in the latter half of the 1990s adopt 1990 as a base year, rather than, say, 1995 or 2000?  Or even 1980?

Closely linked to this question of base year selection for the treaty is a sort of cognitive dissonance that is occurring in reports about compliance of the signatories with the treaty.  Some seem to report substantial progress by European countries in reducing emissions, while others report that nearly everyone is going to miss the goals by a lot and that lately, the US has been doing better than signatory countries in terms of CO2 emissions.

To answer this, let’s put ourselves back in about 1997, as the Kyoto Treaty was being hammered out.  Here is what the negotiators knew at that time:

  • Both Japan and Europe had been mired in a recession since about 1990, cutting economic growth and reducing emissions growth.  The US economy had been booming.  From 1990-1995, US average real GDP growth was 2.5%, while Japan and Europe were both around 1.4% per year (source xls). 
  • The Berlin Wall fell in 1989, and Germany began unifying with East Germany in 1990.  In 1990, all that old, polluting, inefficient Soviet/Communist era industry was still running, pumping out incredible amounts of CO2 per unit produced.  By 1995, much of that industry had been shut down, though even to this day Germany continues to reap year over year efficiency improvements as it restructures old Soviet-era industry, transportation infrastructure, etc.
  • The UK in the late 1980s had embarked on a huge campaign to replace Midlands coal with natural gas from the North Sea.  From 1990-1995, for reasons having nothing to do with CO2, Britain substituted a lot of lower-CO2 gas combustion for higher-CO2 coal generation.

Remember, negotiators knew all this stuff in 1997.  All of the above experience netted out to the following CO2 data, which the negotiators had in their pockets at Kyoto (from here):

CO2 Emissions Changes, 1990-1995

EU: -2.2%
Former Communist countries: -26.1%
Germany: -10.7%
UK: -6.9%
Japan: +7.2%
US: +6.4%

In the above, the categories are not mutually exclusive.  Germany and UK are also in the EU numbers, and Germany is included in the former communist number as well.  Note that all numbers exclude offsets and credits.

As you can see, led by the collapse of the former communist economies and the shuttering of inefficient Soviet industries, in addition to the substitution of British gas for coal, the European negotiators knew they had tremendous CO2 reductions already in their pocket, IF 1990 was chosen as a base year.  They could begin Kyoto already looking like heroes, despite the fact that the reductions from 1990-1997 were almost all due to economic and political happenings unrelated to CO2 abatement programs.

Even signatory Japan was ticked off about the 1990 date, arguing that it benefitted the European countries but was pegged years after Japan had made most of its improvements in energy efficiency:

Jun Arima, lead negotiator for Japan’s energy ministry, said the 1990 baseline for CO2 cuts agreed at Kyoto was arranged for the convenience of the UK and Germany. …

Mr Arima said: "The base year of 1990 was very advantageous to European countries. In the UK, you had already experienced the ‘dash for gas’ from coal – then in Germany they merged Eastern Germany where tremendous restructuring occurred.

"The bulk of CO2 reductions in the EU is attributable to reductions in UK and Germany."

His other complaint was that the 1990 baseline ruled inadmissible the huge gains in energy efficiency Japan had made in the 1980s in response to the 1970s oil shocks.

"Japan achieved very high level of energy efficiency in the 1980s so that means the additional reduction from 1990 will mean tremendous extra cost for Japan compared with other countries that can easily achieve more energy efficiency."

So 1990 was chosen by the European negotiators as the best possible date for their countries to look good and, as an added bonus, as a very good date to try to make the US look bad.  That is why, whenever you see a press release from the EU about carbon dioxide abatement, you will see them trumpet their results since 1990.  Any other baseline year would make them look worse.

One might arguably say that anything that occurred before the signing of the treaty in 1997 is accidental or unrelated, and that it is more interesting to see what has happened once governments had explicit programs in place to reduce CO2.  This is what you will see:

Just let me remind you of some salutary statistics. Between 1997 and 2004, carbon dioxide emissions rose as follows:

Emissions worldwide increased 18.0%;

Emissions from countries that ratified the protocol increased 21.1%;

Emissions from non-ratifiers of the protocol increased 10.0%;

Emissions from the US (a non-ratifier) increased 6.6%;

A lot more CO2 data here.

Postscript:  One would expect that absent changes in government regulations, the US has probably continued to do better than Europe on this metric the last several years.  The reason is that increases in wholesale gas prices increase US retail gas prices by a higher percentage than they do European retail prices.  This is because fixed-amount taxes make up a much higher portion of European gas prices than American.  While it does not necessarily follow from this, it is not illogical to assume that recent increases in oil and gas prices have had a greater effect on US than European demand, particularly since, with historically lower energy prices, the US has not made many of the lower-hanging efficiency investments that have already been made in Europe.
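A quick worked example makes the tax-share point concrete.  The prices below are round numbers assumed for illustration, not actual 2008 pump prices:

```python
# The same $1.50/gal wholesale increase, as a percentage of the pump price.
# The US-Europe price gap is mostly fixed per-gallon taxes, so the identical
# wholesale shock is a much larger relative change where taxes are low.
wholesale_rise = 1.50
pump_price = {"US": 4.00, "Europe": 8.00}  # assumed retail prices, $/gal

for region, price in pump_price.items():
    print(f"{region}: +{100 * wholesale_rise / price:.0f}% at the pump")
# US: +38%, Europe: +19% -- which is why one would expect US demand to
# respond more strongly to the same oil price move.
```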

Climate Re-Education Program

A reader sent me a heads-up to an article in the Bulletin of the American Meteorological Society ($, abstract here) titled "Climate Change Education and the Ecological Footprint".  The authors express concern that non-science students don’t sufficiently understand global warming and its causes, and want to initiate a re-education program in schools to get people thinking the "right" way.

So, do climate scientists want to focus on better educating kids in details of the carbon cycle?  In the complexities in sorting out causes of warming between natural and man-made effects?  In difficulties with climate modeling?  In the huge role that feedback plays in climate forecasts?

Actually, no.  Interestingly, the curriculum advocated in the Bulletin of the American Meteorological Society has very little to do with meteorology or climate science.  What they are advocating is a social engineering course structured around the concept of "ecological footprint."  The course, as far as I can tell, has more in common with this online kids game where kids find out what age they should be allowed to live to based on their ecological footprint.

Like the Planet Slayer game above, the approach seems to be built around a quiz (kind of slow and tedious to get through).  Like Planet Slayer, most of the questions are lifestyle questions – do you eat meat, do you buy food from more than 200 miles away, how big is your house, do you fly a lot, etc.  If you answer that yes, you have a good diet and a nice house and travel a bit and own a car, then you are indeed destroying the planet.

I could go nuts on a rant about propaganda in government monopoly schools, but I want to make a different point [feel free to insert rant of choice here].  The amazing thing to me is that none of this has the first thing to do with meteorology or climate science.  If there were any science at all in this ecological footprint stuff, it would have to be economics.  What does meteorology have to say about the carrying capacity of the earth?  Zero.  What does climate science have to say about the balance between the benefits of air travel and the cost of the incremental warming that might result from that air travel?  Zero.

Take one example – food miles.  I live in Phoenix.  The cost to grow crops around here (since most of the agricultural water has to be brought in from hundreds of miles away) is high.  The cost is also high because even irrigated, the soil is not as productive for many crops as it is in, say, Iowa, so crops require more labor, more fertilizer, and more land for the same amount of yield.  I could make a really good argument that an ear of corn trucked in from Iowa probably uses fewer resources than an ear of corn grown within 200 miles of where I live.  Agree or disagree, this is a tricky economics question that requires fairly sophisticated analysis to answer.  How is teaching kids that "food grown within 200 miles helps save the planet" advancing the cause of climate science?  What does meteorology have to say about this question?

I am sorry I don’t have more excerpts, but I am lazy and I have to retype them by hand.  But this is too priceless to miss:

Responding to the statement "Buying bottled water instead of drinking water from a faucet contributes to global warming" only 21% of all [San Jose State University] Meteorology 112 students answered correctly.  In the EF student group, this improved to a 53% correct response….  For the statement, "Eating a vegetarian diet can reduce global warming," the initial correct response by all Meteorology 112 students was 14%, while the EF group improved to 80%.

Oh my god, every time you drink bottled water you are adding 0.0000000000000000000000000001C to the world temperature.  How much global warming do I prevent if I paint flowers on my VW van?  We are teaching college meteorology students this kind of stuff?  The gulf between this and my freshman physics class is so wide, I can’t even get my head around it.  This is a college science class?

In fact, the authors admit that their curriculum is an explicit rejection of science education, bringing the odd plea in a scientific journal that science students should be taught less science:

Critics of conventional environmental education propose that curriculum focused solely on science without personal and social connections may not be the most effective educational model for moving toward social change.

I think it is a pretty good sign that a particular branch of science has a problem when it is focused more on "social change" than on getting the science right, and when its leading journal focuses on education studies rather than science.

If I were a global warming believer, this program would piss me off.  Think about it.  Teaching kids this kind of stuff and then sending them out to argue with knowledgeable skeptics is like teaching a bunch of soldiers only karate and judo and then sending them into a modern firefight.  They are going to get slaughtered.

Hockey Stick: RIP

I have posted many times on the numerous problems with the historic temperature reconstructions that were used in Mann’s now-famous "hockey stick."   I don’t have any problems with scientists trying to recreate history from fragmentary evidence, but I do have a problem when they overestimate the certainty of their findings or enter the analysis trying to reach a particular outcome.   Just as an archaeologist must admit there is only so much that can be inferred from a single Roman coin found in the dirt, we must accept the limit to how good trees are as thermometers.  The problem with tree rings (the primary source for Mann’s hockey stick) is that they vary in width for any number of reasons, only one of which is temperature.

One of the issues scientists are facing with tree ring analyses is called "divergence."  Basically, when tree rings are measured, they have "data" in the form of rings and ring widths going back as much as 1000 years (if you pick the right tree!).  This data must be scaled — a ring width variation of .02mm must be scaled in some way so that it translates to a temperature variation.  What scientists do is take the last few decades of tree rings, for which we have simultaneous surface temperature recordings, and scale the two data sets against each other.  Then they can use this scale when going backwards to convert ring widths to temperatures.
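In code, the calibration step is just a least-squares fit over the instrumental overlap, then applied backwards to the whole series.  The data below are synthetic and the numbers invented; this is only a sketch of the mechanics:

```python
import numpy as np

rng = np.random.default_rng(2)

# A 1000-year synthetic ring-width series and a 50-year instrumental
# temperature record that overlaps its final decades.
years = np.arange(1000, 2000)
ring_width = 1.0 + 0.05 * rng.standard_normal(years.size)   # mm

calib = years >= 1950                                       # calibration window
temp_obs = (14.0 + 8.0 * (ring_width[calib] - 1.0)
            + 0.2 * rng.standard_normal(calib.sum()))       # deg C

# Scale: fit temperature = a * width + b over the overlap...
a, b = np.polyfit(ring_width[calib], temp_obs, 1)
# ...then apply that same scaling to all 1000 years of ring widths.
reconstruction = a * ring_width + b

print(f"scale: {a:.1f} C per mm of ring width, offset {b:.1f} C")
# "Divergence" is what happens when rings sampled a decade later no longer
# track observed temperatures under this same (a, b) scaling.
```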

But a funny thing happened on the way to the Nobel Prize ceremony.  It turns out that if you go back to the same trees 10 years later and gather updated samples, the ring widths, based on the scaling factors derived previously, do not match well with what we know current temperatures to be. 

The initial reaction from Mann and his peers was to try to save their analysis by arguing that there was some other modern anthropogenic effect that was throwing off the scaling for current temperatures (though no one could name what such an effect might be).  Upon further reflection, though, scientists are starting to wonder whether tree rings have much predictive power at all.  Even Keith Briffa, the man brought into the fourth IPCC to try to save the hockey stick after Mann was discredited, has recently expressed concerns:

There exists very large potential for over-calibration in multiple regressions and in spatial reconstructions, due to numerous chronology predictors (lag variables or networks of chronologies – even when using PC regression techniques). Frequently, the much vaunted ‘verification’ of tree-ring regression equations is of limited rigour, and tells us virtually nothing about the validity of long-timescale climate estimates or those that represent extrapolations beyond the range of calibrated variability.

Using smoothed data from multiple source regions, it is all too easy to calibrate large scale (NH) temperature trends, perhaps by chance alone.

But this is what really got me the other day.  Steve McIntyre (who else) has a post that analyzes each of the tree ring series in the latest Mann hockey stick.  Apparently, each series has a calibration period, where the scaling is set, and a verification period, an additional period for which we have measured temperature data to verify the scaling.  A couple of points were obvious as he stepped through each series:

  1. Each series individually has terrible predictive ability.  Each could be scaled, but each has so much noise in it that in many cases, standard T-tests can’t even be run and when they are, confidence intervals are huge.  For example, the series NOAMER PC1 (the series McIntyre showed years ago dominates the hockey stick) predicts that the mean temperature value in the verification period should be between -1C and -16C.  For a mean temperature, this is an unbelievably wide range.  To give one a sense of scale, that is a 27F range, which is roughly equivalent to the difference in average annual temperatures between Phoenix and Minneapolis!  A temperature forecast with error bars that could encompass both Phoenix and Minneapolis is not very useful.
  2. Even with the huge confidence intervals above, the series does not verify!  (The verification value is -0.19.)  In fact, only one out of numerous data series individually verifies, and even this one was manually fudged to make it work.

Steve McIntyre is a very careful and fair person, so he allows that even if none of the series individually verify or have much predictive power, they might when combined.  I am not a statistician, so I will leave that to him to think about, but I know my response — if all of the series are of low value individually, their value is not going to increase when combined.  They may accidentally, en masse, hit some verification value, but we should accept that as an accident, not as some sort of true signal emerging from the data.

Why Does NASA Oppose Satellites? A Modest Proposal For A Better Data Set

One of the ironies of climate science is that perhaps the most prominent opponent of satellite measurement of global temperature is James Hansen, head of … wait for it … the Goddard Institute for Space Studies at NASA!  As odd as it may seem, while we have updated our technology for measuring atmospheric components like CO2, and have switched from surface measurement to satellites to monitor sea ice, Hansen and his crew at the space agency are fighting a rearguard action to defend surface temperature measurement against the intrusion of space technology.

For those new to the topic, the ability to measure global temperatures by satellite has only existed since about 1979, and is admittedly still being refined and made more accurate.  However, it has a number of substantial advantages over surface temperature measurement:

  • It is immune to biases related to the positioning of surface temperature stations, particularly the temperature creep over time for stations in growing urban areas.
  • It is relatively immune to the problems of discontinuities as surface temperature locations are moved.
  • It has much better geographic coverage, lacking the immense holes that exist in the surface temperature network.

Anthony Watts has done a fabulous job of documenting the issues with the surface temperature measurement network in the US, which one must remember is the best in the world.  Here is an example of the problems in the network.  Another problem that Mr. Hansen and his crew are particularly guilty of is making a number of adjustments in the laboratory to historical temperature data that are poorly documented and have the result of increasing apparent warming.  These adjustments, which imply that surface temperature measurements are net biased on the low side, make zero sense given the surfacestations.org surveys and our intuition about urban heat biases.

What really got me thinking about this topic was this post by John Goetz the other day taking us step by step through the GISS methodology for "adjusting" historical temperature records  (By the way, this third party verification of Mr. Hansen’s methodology is only possible because pressure from folks like Steve McIntyre forced NASA to finally release their methodology for others to critique).  There is no good way to excerpt the post, except to say that when its done, one is left with a strong sense that the net result is not really meaningful in any way.  Sure, each step in the process might have some sort of logic behind it, but the end result is such a mess that its impossible to believe the resulting data have any relevance to any physical reality.  I argued the same thing here with this Tucson example.

Satellites do have disadvantages, though I think these are minor compared to their advantages  (Most skeptics believe Mr. Hansen prefers the surface temperature record because of, not in spite of, its biases, as it is believed Mr. Hansen wants to use a data set that shows the maximum possible warming signal.  This is also consistent with the fact that Mr. Hansen’s historical adjustments tend to be opposite what most would intuit, adding to rather than offsetting urban biases).  Satellite disadvantages include:

  • They take readings of individual locations fewer times in a day than a surface temperature station might, but since most surface temperature records only use two temperatures a day (the high and low, which are averaged), this is mitigated somewhat.
  • They are less robust — a single failure in a satellite can prevent measuring the entire globe, where a single point failure in the surface temperature network is nearly meaningless.
  • We have less history in using these records, so there may be problems we don’t know about yet.
  • We only have history back to 1979, so it’s not useful for very long term trend analysis.

This last point I want to address.  As I mentioned above, almost every climate variable we measure has a technological discontinuity in it.  Even temperature measurement has one between thermometers and more modern electronic sensors.  As an example, below is a NOAA chart on CO2 that shows such a data source splice:

[Figure: NOAA chart of atmospheric CO2 concentration, showing a splice between data sources]

I have zero influence in the climate field, but I would nevertheless propose that we begin to make the same data source splice with temperature.  It is as pointless to continue to rely on surface temperature measurements as our primary metric of global warming as it is to rely on ship observations for sea ice extent.

Here is the data set I have begun to use (Download crut3_uah_splice.xls).  It is a splice of the Hadley CRUT3 historic database with the UAH satellite database for historic temperature anomalies.  Because the two use different base periods to zero out their anomalies, I had to reset the UAH anomaly to match CRUT3.  I used the first 60 months of UAH data and set the UAH average anomaly for this period equal to the CRUT3 average for the same period.  This added exactly 0.1C to each UAH anomaly.  The result is shown below (click for larger view).

[Figure: spliced CRUT3 + UAH global temperature anomaly history]

Below is the detail of the 60-month period where the two data sets were normalized and the splice occurs.  The normalization turned out to be a simple addition of 0.1C to the entire UAH anomaly data set.  By visual inspection, the splice looks pretty good.

[Figure: detail of the 60-month normalization period where the splice occurs]

One always needs to be careful when splicing two data sets together.  In fact, in the climate field I have warned of the problem of finding an inflection point in the data right at a data source splice.  But in this case, I think the splice is clean and reasonable, and consistent in philosophy to, say, the splice in historic CO2 data sources.
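For anyone who wants to reproduce the splice, the logic is only a few lines.  The column names below are hypothetical; check the actual layout of the spreadsheet before running:

```python
import pandas as pd

# Load both monthly anomaly series (column names assumed, not verified).
df = pd.read_excel("crut3_uah_splice.xls", index_col=0)
crut3 = df["crut3_anomaly"]
uah = df["uah_anomaly"].dropna()

# Re-base UAH so its first 60 months average to the CRUT3 mean over the
# same window (for these data the shift works out to about +0.1C).
window = uah.index[:60]
offset = crut3.loc[window].mean() - uah.loc[window].mean()
uah_rebased = uah + offset

# Use CRUT3 before the satellite era and re-based UAH from 1979 on.
spliced = pd.concat([crut3.loc[:uah.index[0]].iloc[:-1], uah_rebased])
print(f"offset applied to UAH: {offset:+.3f} C")
```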

A Reminder

As we know, alarmists have adopted the term "climate change" over "global warming," in large part because the climate is always changing for all manner of reasons, so one can always find, well, climate change.  This allows alarmists in the media to point to any bit of weather in the tails of the normal distribution and blame these events on man-made climate change.

But here is a reminder for those who may be uncomfortable with their own grasp of climate science (don’t feel bad, the media goes out of its way not to explain things very well).  There is no mechanism that has been proven, or even credibly identified, for increasing levels of CO2 in the atmosphere to "change the climate" or cause extreme weather without first causing warming.  In other words, the only possible causality is CO2 –> warming –> changing weather patterns.  If we don’t see the warming, we don’t see the changing weather patterns. 

I feel the need to say this, because alarmists (including Gore) have adopted the tactic of saying that climate change is accelerating, or that they see the signs of accelerating climate change everywhere.  But for the last 10 years, we have not seen any warming.

[Figure: UAH satellite global temperature anomalies over the last 10 years]

So if climate change is in fact somehow "accelerating," then it cannot possibly be due to CO2.  I believe that they are trying to create the impression that somehow CO2 is directly causing extreme weather, which it does not, under any mechanism anyone has ever suggested.   

Antarctic Sea Ice

I have written a number of times that alarmists like Al Gore focus their cameras and attention on small portions of the Antarctic Peninsula where sea ice has been shrinking  (actually, it turns out Al Gore did not focus actual cameras but used special effects footage from the disaster movie The Day After Tomorrow).  I have argued that this is disingenuous, because the Antarctic Peninsula is not representative of climate trends in the rest of Antarctica, much less a good representative of climate trends across the whole globe.  This map reinforces my point, showing in red where sea ice has increased, and in blue where it has decreased  (this is a little counter-intuitive, since we expect anomaly maps to show red as hotter and blue as colder).

[Figure: map of Antarctic sea ice trends: red where sea ice has increased, blue where it has decreased]

The Cost of the Insurance Policy Matters

Supporters of the precautionary principle argue that even if it is uncertain that we will face a global warming catastrophe from producing CO2, we should insure against it by abating CO2 just in case.  "You buy insurance on your house, don’t you," they often ask.  Sure, I answer, except when the cost of the insurance is more than the cost of the house.

In a speech yesterday here in Washington, Al Gore challenged the United States to "produce every kilowatt of electricity through wind, sun, and other Earth-friendly energy sources within 10 years. This goal is achievable, affordable, and transformative." (Well, the goal is at least one of those things.) Gore compared the zero-carbon effort to the Apollo program. And the comparison would be economically apt if, rather than putting a man on the moon—which costs about $100 billion in today’s dollars—President Kennedy’s goal had been to build a massive lunar colony, complete with a casino where the Rat Pack could perform.

Gore’s fantastic—in the truest sense of the word—proposal is almost unfathomably pricey and makes sense only if you think that not doing so almost immediately would result in an uninhabitable planet. …

This isn’t the first time Gore has made a proposal with jaw-dropping economic consequences. Environmental economist William Nordhaus ran the numbers on Gore’s idea to reduce carbon emissions by 90 percent by 2050. Nordhaus found that while such a plan would indeed reduce the maximum increase in global temperatures to between 1.3 and 1.6 degrees Celsius, it did so "at very high cost" of between $17 trillion and $22 trillion over the long term, as opposed to doing nothing. (Again, just for comparative purposes, the entire global economy is about $50 trillion.)

I think everyone’s numbers are low, because they don’t include the cost of storage (technology unknown) or alternative capacity when it is a) dark and/or b) not windy.

A while back I took on Gore’s suggestion that all of America’s electricity needs could be met with current solar technology on a 90 mile x 90 mile tract of solar.  Forgetting the fact that Al’s environmental friends would never allow us to cover 8,100 square miles of the desert in silicon, I got a total installation cost of $21 trillion.  And that did not include the electrical distribution systems necessary for the whole country to take power from this one spot, nor any kind of storage technology for using electricity at night  (it was hard to cost one out when no technology exists for storing America’s total energy needs for 12 hours).  Suffice it to say that a full solution with storage and distribution would easily cost north of $30 trillion.
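The order of magnitude is easy to check.  Every input below is an assumption chosen for illustration (rough 2008-era installed costs and panel densities), not the original post's exact figures:

```python
# Back-of-envelope cost of paneling a 90 x 90 mile tract with solar.
area_mi2 = 90 * 90                  # 8,100 square miles
m2_per_mi2 = 2.59e6                 # square meters per square mile
panel_w_per_m2 = 150.0              # assumed installed panel density, W/m^2
cost_per_w = 6.0                    # assumed installed cost, $/W (2008-era)

capacity_w = area_mi2 * m2_per_mi2 * panel_w_per_m2
cost = capacity_w * cost_per_w
print(f"capacity: {capacity_w/1e12:.1f} TW, cost: ${cost/1e12:.0f} trillion")
# About 3.1 TW and roughly $19 trillion with these inputs -- the same order
# of magnitude as the $21 trillion above, before storage or transmission.
```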

This Too Shall Pass (By Popular Demand)

In perhaps the largest batch of email I have ever gotten on one subject, readers are demanding more coverage of the effect of trace atmospheric gasses on kidney function.  So here you go:

In early July, when a former government employee accused Dick Cheney’s office of deleting from congressional testimony key statements about the impact of climate change on public health, White House staff countered that the science just wasn’t strong enough to include. Not two weeks later, however, things already look different. University of Texas researchers have laid out some of the most compelling science to date linking climate change with adverse public-health effects: scientists predict a steady rise in the U.S. incidence of kidney stones — a medical condition largely brought on by dehydration — as the planet continues to warm.

I am certainly ready to believe that this is "the most compelling science to date" vis-à-vis the negative effects of global warming, though I thought perhaps the study about global warming increasing acne was right up there as well.

Here are 48,900 other things that "global warming will cause."  More from Lubos Motl.  And here is the big list of global warming catastrophe claims.

Update:  I am not sure I would have even bothered, but Ryan M actually dives into the "science" of the kidney stone finding.

Working on New Videos

Sorry posting has been light, but I am working on starting a new series of videos.  At some point I want to update the old ones, but right now I want to experiment with some new approaches — the old ones are pretty good, but are basically just powerpoint slides with some narration.  If you have not seen the previous videos, you may find them as follows:

  • The 6-part, one hour version is here
  • The 10-minute version, which is probably the best balance of time vs. material covered, is here.
  • The short 3-minute version I created for a contest (I won 2nd place) is here.

Combined, they have over 40,000 views.

Another Dim Bulb Leading Global Warming Efforts

Rep. Edward Markey (D-Mass.) is chairman of the House (Select) Energy Independence and Global Warming Committee.  He sure seems to know his stuff, huh:

A top Democrat told high school students gathered at the U.S. Capitol Thursday that climate change caused Hurricane Katrina and the conflict in Darfur, which led to the “black hawk down” battle between U.S. troops and Somali rebels….

“In Somalia back in 1993, climate change, according to 11 three- and four-star generals, resulted in a drought which led to famine,” said Markey.

“That famine translated to international aid we sent in to Somalia, which then led to the U.S. having to send in forces to separate all the groups that were fighting over the aid, which led to Black Hawk Down. There was this scene where we have all of our American troops under fire because they have been put into the middle of this terrible situation,” he added.

Ugh.

Yes, It’s Another Antarctic Ice Post

From a reader, comes yet another article claiming micro-climate variations on the Antarctic Peninsula are indicative of global warming.

New evidence has emerged that a large plate of floating ice shelf attached to Antarctica is breaking up, in a troubling sign of global warming, the European Space Agency (ESA) said on Thursday.

Images taken by its Envisat remote-sensing satellite show that Wilkins Ice Shelf is "hanging by its last thread" to Charcot Island, one of the plate’s key anchors to the Antarctic peninsula, ESA said in a press release.

"Since the connection to the island… helps stabilise the ice shelf, it is likely the breakup of the bridge will put the remainder of the ice shelf at risk," it said.

Wilkins Ice Shelf had been stable for most of the last century, covering around 16,000 square kilometres (6,000 square miles), or about the size of Northern Ireland, before it began to retreat in the 1990s.

No, No, No.  The Antarctic Peninsula’s climate is not indicative of the rest of Antarctica or the rest of the Southern Hemisphere, much less of the globe.  Here, one more time, is the missing context:

    1. The Antarctic Peninsula is a very small area that has very clearly been warming substantially over the last decades, but it represents only 2% of Antarctica 

    2. The rest of Antarctica has seen flat and even declining temperatures, as has the entire southern hemisphere.  In fact, the Antarctic Peninsula is a very small area that is anomalous within the entire Southern Hemisphere, which makes it incredible that it so often is used as indicative of a global climate trend.

      [Figure: UAH satellite temperature anomalies for the south polar region]

      [Figure: UAH satellite temperature anomalies for the Southern Hemisphere]

    3. Antarctic sea ice extent is actually at the highest levels observed since we started watching it via satellite around 1979.  Ice may be shrinking around the Peninsula, but it is growing on net over the whole continent.
      [Figure: current Antarctic sea ice extent anomaly]
    4. We have no clue how ice shelves behave over time spans longer than the 100 years we have watched them.  It may well be they go through long-term natural growth and collapse cycles.

Much more here.

In Search of Honesty

Both major presidential candidates have endorsed CO2 abatement targets for the US, with Obama advocating for the most stringent — the "20 by 50" target by which the US would reduce CO2 emissions by 80% in the next 40 years.

Given that they support such targets, the candidates’ public positions on gasoline prices should be something like this:

Yeah, I know that $4 gas is painful.  But do you know what?  Gas prices are going to have to go a LOT higher for us to achieve the CO2 abatement targets I am proposing, so suck it up.  Just to give you a sense of scale, the Europeans pay nearly twice as much as we do for gas, and even at those levels, they are orders of magnitude short of the CO2 abatement I have committed us to achieve.  Since late 2006, gas prices in this country have doubled, and demand has fallen by perhaps 5%.  That will probably improve over time as people buy new cars and change behaviors, but it may well require gasoline prices north of $20 a gallon before we meet the CO2 goal I have adopted.  So get ready.

You have heard Obama and McCain say this?  Yeah, neither have I.  At least Obama was consistent enough not to adopt McCain’s gas tax holiday idea.  But it’s time for some honesty here, not that I really expect it. 

We need to start being a lot clearer about the real costs of CO2 abatement and stop this mindless "precautionary principle" jargon that presupposes that there are no costs to CO2 abatement.  When proponents of the precautionary principle say "Well, CO2 abatement is like insurance — you buy insurance on your house, don’t you," I answer, "Not if the insurance costs more than the cost to replace the house, I don’t."

Climate Blogs That Don’t Necessarily Accept “The Consensus”

Via Tom Nelson and Climate Debate Daily

William M. Briggs
Climate Audit
Climate Change Facts
Climate Change Fraud
Climate Police
Climate Resistance
Climate Scam
Climate Science
CO2 Science
Friends of Science
Global Climate Scam
Global Warming Heretic
Global Warming Hoax
Global Warming Skeptic
GlobalWarming.org
Greenie Watch
Bruce Hall
Warwick Hughes
Lucia Liljegren
Jennifer Marohasy
Warren Meyer
Maurizio Morabito
Luboš Motl
Tom Nelson
Newsbusters climate
Planet Gore
Roger Pielke Sr.
Fred Singer
David Stockwell
Philip Stott
Anthony Watts
World Climate Report

Map of Pain Created by CO2 Abatement Efforts

Government treaties and legislation will of necessity increase the cost of energy substantially.  It will also indirectly increase the cost of food and other staples, as fertilizer, equipment, and transportation costs rise.  This is not to mention the substantial rise in food costs that will continue as long as governments continue their misguided efforts to promote and subsidize food-based ethanol as a global warming solution. 

I found the map below in another context at economist Mark Perry’s site.  It shows the percentage of the average person’s income that is spent on food, fuel, and drink, with low percentages in green and high percentages in red.  However, this could easily be a map of the pain created by CO2 abatement efforts, with the most pain felt in red and the least in green.  In fact, this map actually underestimates the pain in yellow-red areas, as it does not factor in the lost development potential, and thus lost future income, from CO2 abatement efforts.

[Figure: world map of the share of personal income spent on food, fuel, and drink]

Update on food prices:

Biofuels have caused a 75 per cent increase in world food prices, a new report suggests.

The rise is far greater than previous estimates including a US Government claim that plant-derived fuels contribute less than three per cent to food price hikes.

According to reports last night, a confidential World Bank document indicates the true extent of the effect of biofuels on prices at a crucial time in the world’s negotiations on biofuel policy.

Rising food prices have been blamed for pushing 100 million people beneath the poverty line. The confidential report, based on a detailed economic analysis of the effect of biofuels, will put pressure on the American and European governments, which have turned to biofuels in attempts to reduce the greenhouse gases associated with fossil fuels and to reduce their reliance on oil imports.

The report says: "Without the increase in biofuels, global wheat and maize stocks would not have declined appreciably and price increases due to other factors would have been moderate."

Extrapolating From One Data Point

Years ago, when I was studying engineering in college, I had a professor who used to "joke"  (remember, these are engineers, so the bar for the word "joke" is really low) that when he wanted to prove something, it was a real benefit to have only one data point.  That way, he said, you could plot a trend in any direction with any slope you wanted through the point.  Once you had two or three or more data points, your flexibility was ruined.

I am reminded of this in many global warming articles in the press today.  Here is one that caught my eye today on Tom Nelson’s blog.  There is nothing unusual about it, it just is the last one I saw:

Byers said he has decided to run because he wants to be able to look at his children in 20 or 30 years and be able to say that he took action to try to address important challenges facing humanity. He cited climate change as a “huge” concern, noting that this was driven home during a trip he took to the Arctic three weeks ago.

“The thing that was most striking was how the speed of climate change is accelerating—how it’s much worse than anyone really wants to believe,” Byers said. “To give you a sense of this, we flew over Cumberland Sound, which is a very large bay on the east coast of Baffin Island. This was three weeks ago; there was no ice.”

Do you see the single data point:  Cumberland Sound three weeks ago had no ice.  Incredibly, from this single data point, he not only comes up with a first derivative (the world is warming) but he actually gets the second derivative from this single data point (change is accelerating).  Wow!

We see this in other forms all the time:

  • We had a lot of flooding in the Midwest this year
  • There were a lot of tornadoes this year
  • Hurricane Katrina was really bad
  • The Northwest Passage was navigable last year
  • An ice shelf collapsed in Antarctica
  • We set a record high today in such-and-such city

I often criticize such claims for their lack of any proof of causality  (for example, linking this year’s floods and tornadoes to global warming when it is a cooler year than most of the last 20 seems a real stretch). 

But all of these stories share another common problem – they typically are used by the writer to make a statement about the pace and direction of change (and even the acceleration of this change), something that is absolutely scientifically impossible to do from a single data point.  As it turns out, we often have flooding in the Midwest.  Neither tornadoes nor hurricanes have shown any increasing trend over the past decades.  The Northwest Passage has been navigable a number of years in the last century.  During the time of the ice shelf collapse panic, Antarctica was actually setting 30-year record highs for sea ice extent.  And, by simple math, every city on average should set a new 100-year high temperature record every 100 days, and this is even before considering the urban heat island effect’s upward bias on city temperature measurement.
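That last bit of arithmetic is easy to verify with a simulation: no warming trend at all, just 100 years of random daily temperatures for a single hypothetical city.

```python
import numpy as np

rng = np.random.default_rng(3)

# 100 years of daily highs for one city: stationary climate, no trend.
years, days = 100, 365
temps = rng.normal(25.0, 5.0, size=(years, days))  # deg C, arbitrary

# How many daily record highs does the final year set, purely by chance?
records_in_final_year = (temps[-1] == temps.max(axis=0)).sum()
print(f"new daily record highs in year 100: {records_in_final_year}")
# Expectation is 365/100 = 3.65 records per year -- about one every 100
# days -- with zero warming, matching the 'simple math' above.
```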

Postscript:  Gee, I really hate to add a second data point to the discussion, but from Cryosphere Today, here is a comparison of the Arctic sea ice extent today and exactly 20 years ago (click for a larger view):

[Figure: side-by-side Arctic sea ice extent maps, today vs. exactly 20 years ago, with an arrow marking Cumberland Sound]

The arrow points to Cumberland Sound.  I will not dispute Mr. Byers’ personal observations, except to say that whatever condition it is in today, there seems to have been even less ice there 20 years ago.

To be fair, sea ice extent in the Arctic is down about a million square kilometers today vs. where it was decades ago (though I struggle to see it in these maps), while the Antarctic is up about a million, so the net world anomaly is about zero right now.