
The First Rule of Regression Analysis

Here is the first thing I was ever taught about regression analysis — never, ever use multi-variable regression analysis to go on a fishing expedition.  In other words, never throw in a bunch of random variables and see what turns out to have the strongest historical relationship.  If you don’t understand the relationship between the variables or why you got the answer you did, the result is very likely spurious.

The purpose of a regression analysis is to confirm and quantify a relationship that you have a theoretical basis for believing to exist.  For example, I might think that home ownership rates drop as interest rates rise, and vice versa, because interest rate increases effectively increase the cost of a house and therefore should reduce demand.  This is a perfectly valid proposition to test.  What would not be valid is to throw interest rates, population growth, regulatory levels, skirt lengths, Super Bowl winners, and yogurt prices together into a regression with home ownership rates and see what pops up as having a correlation.  Another red flag: had we run our original regression between home ownership and interest rates and found the opposite result from the one we expected, with home ownership rising with interest rates, we would need to be very suspicious of the correlation.  If we don’t have a good theory to explain it, we should treat the result as spurious, likely the result of mutual correlation of the two variables to a third variable, or of time lags we have not handled correctly.
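To make the danger concrete, here is a minimal sketch in Python of a fishing expedition.  Everything in it is made-up random noise; the point is that with enough candidate variables, something will always look correlated:

```python
import numpy as np

rng = np.random.default_rng(42)
n_years, n_predictors = 30, 20

# 30 "years" of pure noise for the target and 20 unrelated predictors
y = rng.standard_normal(n_years)                  # stand-in for home ownership
X = rng.standard_normal((n_years, n_predictors))  # skirt lengths, yogurt prices...

# Correlation of each predictor with the target
r = np.array([np.corrcoef(X[:, i], y)[0, 1] for i in range(n_predictors)])
print("strongest |r|:", np.abs(r).max().round(2))
# Typically around 0.4-0.5 -- an apparently "strong" relationship from pure noise.
```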

Make sense?  Well, then, what do we make of this:  Michael Mann builds temperature reconstructions from proxies.  An example is tree rings.  The theory is that warmer temperatures lead to wider tree rings, so one can correlate tree ring growth to temperature.  The same is true for a number of other proxies, such as sediment deposits.

In the particular case of the Tiljander sediments, Steve McIntyre observed that Mann had included the data upside down – meaning he had essentially reversed the sign of the proxy data.  This would be roughly equivalent to running our interest rate – home ownership regression but plugging in the changes in home ownership with the wrong sign (i.e., decreases shown as increases and vice versa).

You can see that the data was used upside down by comparing Mann’s own graph with the orientation of the original article, as we did last year. In the case of the Tiljander proxies, Tiljander asserted that “a definite sign could be a priori reasoned on physical grounds” – the only problem is that their sign was opposite to the one used by Mann. Mann says that multivariate regression methods don’t care about the orientation of the proxy.

The world is full of statements that are strictly true and totally wrong at the same time.  Mann’s claim in the last sentence above is such a case.  It is strictly true – the regression does not care if you get the sign right; it will still find a correlation.  But it is totally insane, because it implies that the correlation the regression is finding is exactly the opposite of what your physics told you to expect.  It’s like getting a positive correlation between interest rates and home ownership.  Or finding that tree rings got larger when temperatures dropped.
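Here is a minimal sketch (synthetic data, not Mann’s) of why the claim is strictly true: flip the sign of a proxy and the regression fits just as well; only the slope changes sign, which is precisely the physical absurdity at issue:

```python
import numpy as np

rng = np.random.default_rng(0)
temp = rng.standard_normal(100)
proxy = 0.8 * temp + 0.3 * rng.standard_normal(100)  # proxy tracks temperature

for p in (proxy, -proxy):  # right side up, then upside down
    slope, _ = np.polyfit(p, temp, 1)
    r = np.corrcoef(p, temp)[0, 1]
    print(f"slope={slope:+.2f}  r={r:+.2f}  r^2={r*r:.2f}")
# Identical r^2 either way; only the sign flips.  The regression is happy,
# but the physics (wider rings = warmer) has been turned on its head.
```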

This is a mistake that Mann seems to make a lot — he gets buried so far down in the numbers that he forgets they have physical meaning.  They are describing physical systems, and what they are saying in this case makes no sense.  He is using a proxy that is behaving exactly opposite to what his physics tell him it should – in fact, exactly opposite to the whole theory of why it should be a proxy for temperature in the first place.  And this does not seem to bother him enough to toss it out.

PS-  These flawed Tiljander sediments matter.  It has been shown that the Tiljander series have an inordinate influence on Mann’s latest proxy results.  Remove them, and a couple of other flawed proxies (and by flawed, I mean ones with manually made-up data), and much of the hockey stick shape he loves so much goes away.

The Dividing Line Between Nuisance and Catastrophe: Feedback

I have written for quite a while that the most important issue in evaluating catastrophic global warming forecasts is feedback.  Specifically, is the climate dominated by positive feedbacks, such that small CO2-induced changes in temperatures are multiplied many times, or even hit a tipping point where temperatures run away?  Or is the long-term stable system of climate more likely dominated by flat to negative feedback, as are most natural physical systems?  My view has always been that the earth will warm at most a degree for a doubling of CO2 over the next century, and may warm less if feedbacks turn out to be negative.

I am optimistic that this feedback issue may finally be seeing the light of day.  Here is Professor William Happer of Princeton in US Senate testimony:

There is little argument in the scientific community that a direct effect of doubling the CO2 concentration will be a small increase of the earth’s temperature — on the order of one degree. Additional increments of CO2 will cause relatively less direct warming because we already have so much CO2 in the atmosphere that it has blocked most of the infrared radiation that it can. It is like putting an additional ski hat on your head when you already have a nice warm one below it, but you are only wearing a windbreaker. To really get warmer, you need to add a warmer jacket. The IPCC thinks that this extra jacket is water vapor and clouds.

Since most of the greenhouse effect for the earth is due to water vapor and clouds, added CO2 must substantially increase water’s contribution to lead to the frightening scenarios that are bandied about. The buzz word here is that there is “positive feedback.” With each passing year, experimental observations further undermine the claim of a large positive feedback from water. In fact, observations suggest that the feedback is close to zero and may even be negative. That is, water vapor and clouds may actually diminish the already small global warming expected from CO2, not amplify it. The evidence here comes from satellite measurements of infrared radiation escaping from the earth into outer space, from measurements of sunlight reflected from clouds and from measurements of the temperature of the earth’s surface or of the troposphere, the roughly 10 km thick layer of the atmosphere above the earth’s surface that is filled with churning air and clouds, heated from below at the earth’s surface, and cooled at the top by radiation into space.

When the IPCC gets to a forecast of 3-5C warming over the next century (in which CO2 concentrations are expected to roughly double), it is in two parts.  As Professor Happer relates, only about 1C of this is directly from the first-order effects of more CO2.  This assumption of 1C warming for a doubling of CO2 is relatively stable across both scientists and time, except that the IPCC actually reduced this number a bit between their 3rd and 4th reports.

They get from 1C to 3C-5C with feedback.  Here is how feedback works.

Let’s say the world warms 1 degree.  Let’s also assume that the only feedback is melting ice and albedo, and that for every degree of warming, the lower albedo from melted ice reflecting less sunlight back into space adds another 0.1 degree of warming.  But this 0.1 degree of extra warming would in turn melt a bit more ice, which would result in 0.01 degree of third-order warming.  So the warming from an initial 1 degree with such 10% feedback would be 1+0.1+0.01+0.001 … etc.  This infinite series can be calculated as dT * (1/(1-g)), where dT is the initial first-order temperature change (in this case 1C) and g is the percentage that is fed back (in this case 10%).  So a 10% feedback results in a gain or multiplier of the initial temperature effect of 1.11 (more here).

So how do we get a multiplier of 3-5 in order to back into the IPCC forecasts?  Well, using our feedback formula backwards and solving for g, we get feedback percents of 67% for a 3 multiplier and 80% for a 5 multiplier.  These are VERY high feedbacks for any natural physical system short of nuclear fission, and this issue is the main (but by no means only) reason many of us are skeptical of catastrophic forecasts.
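For concreteness, here is that arithmetic in a few lines of Python, using only the numbers already given in the text:

```python
def gain(g):
    # Total multiplier on the first-order warming: 1 + g + g^2 + ... = 1/(1-g)
    return 1.0 / (1.0 - g)

def feedback_for_multiplier(m):
    # Invert m = 1/(1-g)  ->  g = 1 - 1/m
    return 1.0 - 1.0 / m

print(gain(0.10))                  # 1.11  (the 10% ice-albedo example)
print(feedback_for_multiplier(3))  # 0.67  -> 67% feedback for a 3x multiplier
print(feedback_for_multiplier(5))  # 0.80  -> 80% feedback for a 5x multiplier
```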

[By the way, to answer past criticisms: I know that the models do not use this simplistic feedback methodology in their algorithms.  But no matter how complex the modeled details, the bottom line is that somewhere in the assumptions underlying these models, a feedback percent of 67-80% is implicit.]

For those paying attention, there is no reason that feedback should apply in the future but not in the past.  Since pre-industrial times, it is thought we have increased atmospheric CO2 by 43%.  So we should already have seen 43% of the temperature rise from a doubling, or 43% of 3-5C, which is 1.3-2.2C.  In fact, this underestimates what we should have seen historically, since it is just a linear interpolation.  CO2-to-temperature is a logarithmic, diminishing-return relationship, meaning we should see faster warming with earlier increases than with later increases.  Nevertheless, despite heroic attempts to posit some offsetting cooling effect that is masking this warming, few people believe we have seen any such historic warming, and the measured warming is more like 0.6C.  And some of that is likely due to solar activity, which peaked in the late 20th century, rather than to CO2 alone.
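A sketch of that calculation, using the 43% CO2 increase and the 3-5C-per-doubling range from above; the logarithmic version shows the linear interpolation is, if anything, conservative:

```python
import math

co2_increase = 0.43  # assumed rise in atmospheric CO2 since pre-industrial times
log_fraction = math.log(1 + co2_increase) / math.log(2)  # fraction of a doubling
print(round(log_fraction, 2))  # ~0.52, vs. 0.43 for the linear interpolation

for warming_per_doubling in (3.0, 5.0):  # IPCC-style totals, with feedback
    linear = 0.43 * warming_per_doubling
    logarithmic = log_fraction * warming_per_doubling
    print(round(linear, 1), round(logarithmic, 1))
# Linear: 1.3-2.2C; logarithmic: ~1.5-2.6C of warming we "should" already
# have seen, versus the ~0.6C actually measured.
```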

I have a video discussing these topics in more depth:

This is the bait and switch of climate alarmism.  When pushed into the corner, they quickly yell “this is all settled science,”  when in fact the only part that is fairly well agreed upon is the 1C of first order warming from a doubling.  The majority of the warming, the amount that converts the forecast from nuisance to catastrophe, comes from feedback which is very poorly understood and not at all subject to any sort of consensus.

A Cautionary Tale About Models Of Complex Systems

I have often written warnings about the difficulty of modeling complex systems.  My mechanical engineering degree was focused on the behavior and modeling of dynamic systems.  Since then, I have spent years doing financial, business, and economic modeling.  And all that experience has taught me humility, as well as given me a good knowledge of where modelers tend to cheat.

Al Gore has argued that we should trust long-term models, because Wall Street has used such models successfully for years  (I am not sure he has been using this argument lately, lol).  I was immediately skeptical of this statement.  First, Wall Street almost never makes 100-year bets based on models (they may be investing in 30-year securities, but the bets they are making are much shorter term).  Second, my understanding of Wall Street history is that lower Manhattan is littered with the carcasses of traders who bankrupted themselves following the hot model of the moment.  It is ever so easy to create a correlation model that seems to back-cast well.  But no one has ever created one that holds up well going forward.

A reader sent me this article about the Gaussian copula, apparently the algorithm that underlay the correlation models Wall Streeters used to assess mortgage security and derivative risk.

Wall Streeters have the exact same problem that climate modelers have.  There is a single output variable they both care about (security price for traders, global temperature for modelers).  This variable’s value changes in a staggeringly complex system full of millions of variables with various levels of cross-correlation.  The modeler’s challenge is to look at the historical data and try to tease out correlation factors between the output variable and all the other input variables in an environment where they are all changing.

The problem is compounded because some of the input variables move on really long cycles, and some move on short cycles.  Some of these move in such long cycles that we may not even recognize the cycle at all.  In the end, this tripped up the financial modelers — all of their models derived correlation factors from a long and relatively unbroken period of home price appreciation.  Thus, when this cycle started to change, all the models fell apart.

Li’s copula function was used to price hundreds of billions of dollars’ worth of CDOs filled with mortgages. And because the copula function used CDS prices to calculate correlation, it was forced to confine itself to looking at the period of time when those credit default swaps had been in existence: less than a decade, a period when house prices soared. Naturally, default correlations were very low in those years. But when the mortgage boom ended abruptly and home values started falling across the country, correlations soared.
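The article does not reproduce Li’s formula, but a minimal sketch of the one-factor Gaussian copula idea (all parameters hypothetical) shows why the correlation estimate mattered so much: average losses barely move, while tail losses, which is what a CDO tranche lives or dies on, explode:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n_loans, n_sims, p_default = 100, 50_000, 0.02
threshold = norm.ppf(p_default)  # a loan defaults when its latent variable is below this

for rho in (0.05, 0.30):  # correlation calibrated in boom years vs. a bust
    # One-factor Gaussian copula: X_i = sqrt(rho)*M + sqrt(1-rho)*Z_i
    M = rng.standard_normal((n_sims, 1))        # common housing-market factor
    Z = rng.standard_normal((n_sims, n_loans))  # loan-specific noise
    defaults = (np.sqrt(rho) * M + np.sqrt(1 - rho) * Z < threshold).sum(axis=1)
    print(f"rho={rho}: mean defaults={defaults.mean():.1f}, "
          f"99th percentile={np.percentile(defaults, 99):.0f} of {n_loans}")
# The mean is ~2 loans either way; the 99th-percentile loss roughly triples with rho.
```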

I never criticize people for trying to do an analysis with the data they have.  If they have only 10 years of data, that’s as far as they can run the analysis.  However, it is then important that they recognize that their analysis is based on data that may be way too short to measure longer term trends.

As is typical when models go wrong, early problems in the model did not cause users to revisit their assumptions:

His method was adopted by everybody from bond investors and Wall Street banks to ratings agencies and regulators. And it became so deeply entrenched—and was making people so much money—that warnings about its limitations were largely ignored.

Then the model fell apart. Cracks started appearing early on, when financial markets began behaving in ways that users of Li’s formula hadn’t expected. The cracks became full-fledged canyons in 2008—when ruptures in the financial system’s foundation swallowed up trillions of dollars and put the survival of the global banking system in serious peril.

A couple of lessons I draw out for climate models:

  1. Limited data availability can limit measurement of long-term cycles.  This is particularly true in climate, where cycles can last hundreds and even thousands of years, but good reliable data on world temperatures is only available for about 30 years, and any data at all for only about 150 years.  Interestingly, there is good evidence that many of the symptoms we attribute to man-made global warming are actually part of climate cycles that go back long before man burned fossil fuels in earnest.  For example, sea levels have been rising since the last ice age, and glaciers have been retreating since the late 18th century.
  2. The fact that models hindcast well has absolutely no predictive power as to whether they will forecast well (see the sketch after this list).
  3. Trying to paper over deviations between model forecasts and actuals, as climate scientists have been doing for the last 10 years, without revisiting the basic assumptions of the model can be fatal.
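A toy demonstration of lesson 2, with entirely made-up data: calibrate a flexible correlation model on a window too short to contain a full cycle, and the hindcast looks superb while the forecast falls apart:

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(60, dtype=float)
# A slow 50-"year" cycle plus noise -- the long cycle the modeler never sees
series = np.sin(2 * np.pi * t / 50) + 0.2 * rng.standard_normal(60)

train_t, train_y = t[:30], series[:30]        # calibrate on the first half only
coeffs = np.polyfit(train_t, train_y, deg=5)  # flexible "correlation model"

def rmse(pred, actual):
    return float(np.sqrt(np.mean((pred - actual) ** 2)))

print("hindcast RMSE:", round(rmse(np.polyval(coeffs, train_t), train_y), 2))
print("forecast RMSE:", round(rmse(np.polyval(coeffs, t[30:]), series[30:]), 2))
# The hindcast error is tiny; the forecast error is enormous as the unseen
# half of the cycle arrives.
```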

A Final Irony

Do you like irony?  In the last couple of months, I have been discovering I like it less than I thought.  But here is a bit of irony for you anyway.  The first paragraph of Obama’s new budget read like this:

This crisis is neither the result of a normal turn of the business cycle nor an accident of history.  We arrived at this point as a result of an era of profound irresponsibility that engulfed both private and public institutions from some of our largest companies’ executive suites to the seats of power in Washington, D.C.

As people start to deconstruct last year’s financial crisis, most of them are coming to the conclusion that the #1 bit of “irresponsibility” was the blind investment of trillions of dollars based solely on the output of correlation-based computer models, and continuing to invest even after cracks appeared in the models.

The irony?  Obama’s budget includes nearly $700 billion in new taxes (via a cap-and-trade system) based solely on … correlation-based computer climate models that predict rapidly rising temperatures from CO2.  Climate models in which a number of cracks have appeared, but which are being ignored.

Postscript: When I used this comparison the other day, a friend of mine fired back that the Wall Street guys were just MBAs, but the climate guys were “scientists” and thus presumably less likely to err.  I responded that I didn’t know if one group or the other was more capable (though I do know that Wall Street employs a hell of a lot of top-notch PhDs).  But I did know that the financial consequences for Wall Street traders of having the wrong model were severe, while the impact on climate modelers of being wrong was about zero.  So, from an incentives standpoint, I know who I would more likely bet on to try to get it right.

The Plug

I have always been suspicious of climate models, in part because I spent some time in college trying to model chaotic dynamic systems, and in part because I have a substantial amount of experience with financial modeling.   There are a number of common traps one can fall into when modeling any system, and it appears to me that climate modelers are falling into most of them.

So a while back (before I even created this site) I was suspicious of this chart from the IPCC.  In this chart, the red is the “backcasting” of temperature history using climate models, the black line is the highly smoothed actuals, while the blue is a guess from the models as to what temperatures would have looked like without manmade forcings, particularly CO2.

[Chart: IPCC model backcast (red) vs. smoothed actual temperatures (black), with modeled temperatures absent manmade forcings (blue)]

As I wrote at the time:

I cannot prove this, but I am willing to make a bet based on my long, long history of modeling (computers, not fashion).  My guess is that the blue band, representing climate without man-made effects, was not based on any real science but was instead a plug.  In other words, they took their models and actual temperatures and then said “what would the climate without man have to look like for our models to be correct.”  There are at least four reasons I strongly suspect this to be true:

  1. Every computer modeler in history has tried this trick to make their models of the future seem more credible.  I don’t think the climate guys are immune.
  2. There is no way their models, with our current state of knowledge about the climate, match reality that well.
  3. The first time they ran their models vs. history, they did not match at all.  This current close match is the result of a bunch of tweaking that has little impact on the model’s predictive ability but forces it to match history better.  For example, early runs had the forecast run right up from the 1940 peak to temperatures way above what we see today.
  4. The blue line totally ignores any of our other understandings about the changing climate, including the changing intensity of the sun.  It is conveniently exactly what is necessary to make the pink line match history.  In fact, against all evidence, note the blue band falls over the century.  This is because the models were pushing the temperature up faster than we have seen it rise historically, so the modelers needed a negative plug to make the numbers look nice.

As you can see, the blue band, supposedly sans mankind, shows a steadily declining temperature.  This never made much sense to me, given that, almost however you measure it, solar activity over the last half of the century was stronger than over the first half.  Yet they show the natural forcings moving exactly opposite to what we might expect from this chart of solar activity as measured by sunspots (red is smoothed sunspot numbers, green is Hadley CRUT3 temperature).

[Chart: smoothed sunspot numbers (red) vs. Hadley CRUT3 temperature (green), with PDO bands]

By the way, there is a bit of a story behind this chart.  It was actually submitted by a commenter to this site of the more alarmist persuasion (without the PDO bands), to try to debunk the link between temperature and the sun (silly rabbit – the earth’s temperature is not driven by the sun, but by parts-per-million changes in atmospheric gas concentrations!).  While the sun is still not the only factor driving the mercilessly complex climate, clearly solar activity in red was higher in the latter half of the century, when temperatures in green were rising.  Which is at least as tight as the relation between CO2 and the same warming.

Anyway, why does any of this matter?  Skeptics have argued for quite some time that climate models assume too high a sensitivity of temperature to CO2 — in other words, while most of us agree that CO2 increases can affect temperatures somewhat, the models assume temperature to be very sensitive to CO2, in large part because the models assume that the world’s climate is dominated by positive feedback.

One way to demonstrate that these models may be exaggerated is to plot their predictions backwards.  A relationship between CO2 and temperature that exists in the future should hold in the past, adjusting for time delays (in fact, the relationship should be more sensitive in the past, since sensitivity is a logarithmic diminishing-return curve).  But projecting the modeled sensitivities backwards (with a 15-year lag) results in ridiculously high predicted historic temperature increases that we simply have never seen.  I discuss this in some depth in my 10-minute video here, but the key chart is this one:

[Chart: modeled climate sensitivities projected backwards vs. measured historic temperatures]

You can see the video to get a full explanation, but in short, models that include high net positive climate feedbacks have to produce historical warming numbers that far exceed measured results.  Even if we assign every bit of 20th century warming to man-made causes, this still only implies 1C of warming over the next century.
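Putting rough numbers on that last sentence, a sketch assuming the same 43% CO2 increase used earlier and 0.5-0.6C of measured 20th-century warming:

```python
import math

co2_fraction = math.log(1.43) / math.log(2)  # ~0.52 of a doubling so far

# Implied warming per doubling if ALL measured warming is blamed on CO2
for observed_warming in (0.5, 0.6):
    print(round(observed_warming / co2_fraction, 1))  # ~1.0C and ~1.2C per doubling
```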

So the only way to fix this is with what modelers call a plug.  Create some new variable, in this case “the hypothetical temperature changes without manmade CO2,” and plug it in.  By making this number very negative in the past, but flat to positive in the future, one can have a forecast that rises slowly in the past but rapidly in the future.
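In sketch form, with hypothetical series standing in for the real model outputs, a plug is nothing more than a forced residual:

```python
import numpy as np

years = np.arange(1900, 2001)
modeled_anthro = 0.012 * (years - 1900)  # hypothetical model: warms 1.2C per century
observed = 0.006 * (years - 1900)        # hypothetical actuals: 0.6C per century

# The "plug": define natural forcing as whatever residual makes the sum match
natural_plug = observed - modeled_anthro
print(natural_plug[-1])  # -0.6: a steadily declining "non-man" line, by construction
```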

Now, I can’t prove that this is what was done.  In fact, I am perfectly willing to believe that modelers can spin a plausible story, with enough jargon to put off most laymen, as to how they created this “non-man” line and why it has been decreasing over the last half of the century.  I have a number of reasons to disbelieve any such posturing:

  1. The last IPCC report spent about a thousand pages developing the “with CO2” forecasts.  They spent about half a page discussing the “without CO2” case.  There is about zero scientific discussion of how this forecast is created, or what the key elements are that drive it.
  2. The IPCC report freely admits their understanding of cooling factors is “low.”
  3. The resulting forecast is WAY too good.  We will see this again in a moment.  But with such a chaotic system, your first reaction to anyone who shows you a back-cast that nicely overlays history almost exactly should be “bullshit.”  It’s not possible, except with tuning and plugs.
  4. The sun was almost undeniably stronger in the second half of the 20th century than the first half.  So what is the countervailing factor that overcomes both the sun and CO2?

The IPCC does not really say what is making the blue line go down; it just goes down (because, as we can see now, it has to in order to make their hypothesis work).  Today, the main answer to the question of what might be offsetting warming is “aerosols,” particularly sulfur and carbon compounds that are man-made pollutants (true pollutants) from burning fossil fuels.  The hypothesis is that these aerosols reflect sunlight back to space and cool the earth (by the way, the blue line above in the IPCC report is explicitly only non-anthropogenic effects, so at the time it went down due to natural effects – the manmade aerosol thing is a newer straw to grasp).

But black carbon and aerosols have some properties that create problems for this argument, once you dig into it.  First, there are situations where they are as likely to warm as to cool.  For example, one reason the Arctic has been melting faster in the summer of late is likely black carbon from Chinese coal plants that lands on the ice and warms it faster.

The other issue with aerosols is that they disperse quickly.  CO2 mixes fairly evenly worldwide and remains in the atmosphere for years.  Many combustion aerosols remain in the air for only days, so they tend to be concentrated regionally.  Perhaps 10-20% of the earth’s surface might at any one time have a decent concentration of man-made aerosols.  But for that to drive, say, a half-degree cooling effect that offsets CO2 warming, the cooling in these aerosol-affected areas would have to be 2.5-5.0C in magnitude.  If this were the case, we would see those colored global warming maps with cooling in industrial aerosol-rich areas and warming in the rest of the world, but we just don’t see that.  In fact, the vast, vast majority of man-made aerosols are found in the northern hemisphere, yet it is the northern hemisphere that is warming much faster than the southern.  If aerosols were really offsetting half or more of the warming, we should see the opposite, with a toasty south and a cool north.
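The arithmetic behind those numbers, spelled out (the half-degree offset and 10-20% coverage are the assumed figures from the paragraph above):

```python
masked_warming = 0.5  # deg C of global CO2 warming assumed offset by aerosols
for area_fraction in (0.20, 0.10):  # share of the surface with heavy aerosols
    print(masked_warming / area_fraction)  # 2.5 and 5.0 deg C of local cooling
```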

All of this is a long, long intro to a guest post on WUWT by Bill Illis.  He digs into one of the major climate models, GISS model E, and looks at the back-casts from this model.  What he finds mirrors a lot of what we discussed above:

[Chart: GISS measured temperature (blue) vs. GISS Model E hindcast (red), decomposed into a GHG component (orange) and an “everything else” component (brown)]

Blue is the GISS actual temperature measurement.  Red is the model’s hindcast of temperatures.  You can see that they are remarkably, amazingly, staggeringly close.  There are chaotic systems we have been modeling for hundreds of years (e.g., the economy) where we have never approached the accuracy this relative infant of a science seems to achieve.

That red forecast in the middle is made up of a GHG component, shown in orange, plus a negative “everything else” component, shown in brown.  Is this starting to seem familiar?  Does the brown line smell suspiciously to anyone else like a “plug”?  Here are some random thoughts inspired by this chart:

  1. As with any surface temperature measurement system, the GISS system is full of errors, biases, and gaps.  Some of these its proprietors would acknowledge; others have been pointed out by outsiders.  Nevertheless, the GISS metric is likely to have an error of at least a couple tenths of a degree.  Which means the climate model here is perfectly fitting itself to data that isn’t even likely correct.  It fits the GISS temperature number more closely than the GISS temperature number likely fits the actual world temperature anomaly, if such a thing could be measured directly.  Since the Hadley Center and the satellite guys at UAH and RSS get different temperature histories for the last 30-100 years, it is interesting that the GISS model exactly matches the GISS measurement but not these others.  Does that make anyone suspicious?  When GISS makes yet another correction to its historical data, will the model move with it?
  2. As mentioned before, the sum total of time spent over the last 10 years trying to carefully assess the forcings from other natural and man-made effects, and how they vary year to year, is minuscule compared to the time spent looking at CO2.  I don’t think we have enough knowledge to draw the CO2 line on this chart, but we CERTAINLY don’t have the knowledge to draw the “all other” line (with monthly resolution, no less!).
  3. Looking back over history, it appears the model is never off by more than 0.4C in any month, and never goes more than about 10 months before re-intersecting the “actual” line.  Does it bother anyone else that this level of precision is several times better than the model achieves when run forward?  Almost immediately running forward, the model is more than 0.4C off, and goes years without intersecting reality.

Relax — A Statement About Comment Policy

Anthony Watts is worried about the time it takes to moderate comments:

Lately I’ve found that I spend a lot of time moderating posts that are simply back and forth arguments between just a few people whom have inflexible points of view. Often the discussion turns a bit testy. I’ve had to give some folks (on both sides of the debate) a time out the last couple of days. While the visitors of this blog (on both sides of the debate) are often more courteous than on some other blogs I’ve seen, it still gets tiresome moderating the same arguments between the same people again and again.

This does not surprise me, as I have emailed back and forth with Anthony during a time he was stressed about a particular comment thread.  I told him then what I say now:  Relax.

It might have been that 10 years ago, or even 5, visitors would be surprised and shocked by the actions of certain trolls on a site.  But I would expect that by now anyone who spends time in blog comment sections knows the drill — blog comments can be a free-for-all, and some folks just haven’t learned how to operate maturely in an anonymous environment.

I have never tried to moderate my comments (except for spam, which is why you might have  a comment with embedded links held for moderation — I am looking to filter people selling male enhancement products, not people who disagree with me.)  In fact, I relish buffoons who disagree with me when they make an ass of themselves – after all, as Napoleon said, never interrupt an enemy when he is making a mistake.  And besides, I think it makes a nice contrast with a number of leading climate alarmist sites that do not accept comments or are Stalinist in purging dissent from them.

In fact, I find that the only danger in my wide-open policy is the media.  For you see, the only exception to my statement above, the only group on the whole planet that seems not to have gotten the message that comment threads don’t necessarily reflect the opinions of the domain operator, is the mainstream media.  I don’t know if this is incompetence or willfulness, but they still write stories predicated on some blog comment being reflective of the blog’s host.

By the way, for Christmas last year I bought myself an autographed copy of this XKCD comic to go over my desk:

[Image: autographed print of the XKCD comic “Duty Calls”]

Global Warming “Accelerating”

I have written a number of times about the “global warming accelerating” meme.  The evidence is nearly irrefutable that over the last 10 years, for whatever reason, the pace of global warming has decelerated:

[Chart: Hansen’s 1988 forecast scenarios vs. measured global temperatures]

This is simply a fact, though of course it does not necessarily “prove” that the theory of catastrophic anthropogenic global warming is incorrect.  Current results continue to be fairly consistent with my personal theory: that man-made CO2 may add 0.5-1C to global temperatures over the next century (below alarmist estimates), but that this warming may be swamped at times by natural climatic fluctuations that alarmists tend to underestimate.

Anyway, in this context, I keep seeing stuff like this headline in the WaPo:

Scientists:  Pace of Climate Change Exceeds Estimates

This headline seems to clearly imply that the measured pace of actual climate change is exceeding previous predictions and forecasts.   This seems odd since we know that temperatures have flattened recently.  Well, here is the actual text:

The pace of global warming is likely to be much faster than recent predictions, because industrial greenhouse gas emissions have increased more quickly than expected and higher temperatures are triggering self-reinforcing feedback mechanisms in global ecosystems, scientists said Saturday.

“We are basically looking now at a future climate that’s beyond anything we’ve considered seriously in climate model simulations,” Christopher Field, founding director of the Carnegie Institution’s Department of Global Ecology at Stanford University, said at the annual meeting of the American Association for the Advancement of Science.

So in fact, based on the first two paragraphs, in true major media tradition, the headline is a total lie.  In fact, the correct headline is:

“Scientists Have Raised Their Forecasts for Future Warming”

Right?  I mean, all the story is saying is that, based on increased CO2 production, climate scientists think their forecasts of warming should be raised.  This is not surprising, because their models assume a direct positive relationship between CO2 and temperature.

The other half of the statement, that “higher temperatures are triggering self-reinforcing feedback mechanisms in global ecosystems,” is a gross exaggeration of the state of scientific knowledge.  In fact, there is very little good understanding of climate feedback as a whole.  While we may understand individual pieces – i.e., this particular piece is a positive feedback – we have no clue as to how the whole thing adds up.  (See my video here for more discussion of feedback.)

In fact, I have always argued that the climate models’ assumption of strong positive feedback (they assume really, really high levels) is totally unrealistic for a long-term stable system.  If we are really seeing runaway feedbacks triggered by the less than one degree of warming we have had over the last century, it boggles the mind how the Earth has staggered through the last 4.5 billion years without a climate runaway.

All this article is saying is “we are raising our feedback assumptions higher than even the ridiculously high assumptions we were already using.”  There is absolutely no new confirmatory evidence here.

But this creates a problem for alarmists

For you see, their forecasts have consistently demonstrated themselves to be too high.  You can see above how Hansen’s forecast to Congress 20 years ago has played out (and the Hansen A case was actually based on a CO2 growth forecast that has turned out to be too low).  Lucia, who tends to be scrupulously fair about such things, shows the more recent IPCC models just dancing on the edge of being more than 2 standard deviations higher than actual measured results.

But here is the problem:  The creators of these models are now saying that actual CO2 production, which is the key input to their models, is far exceeding their predictions.  So, presumably, if they re-ran their predictions using actual CO2 data, they would get even higher temperature forecasts.  Further, they are saying that the feedback multiplier in their models should be higher as well.  But the forecasts of their models are already high vs. observations — this will cause them to diverge even further from actual measurements.

So here is the real disconnect of the model:  If you tell me that modelers underestimated the key input (CO2) in their models and have so far overestimated the key output (temperature), I would have said the conclusion is that climate sensitivity must be lower than what was embedded in the models.  But they are saying exactly the opposite.  How is this possible?

Postscript: I hope readers understand this, but it is worth saying because clearly reporters do not:  There is no way that climate change from CO2 can be accelerating if global warming is not accelerating.  There is no mechanism I have ever heard of by which CO2 can change the climate without the intermediate step of raising temperatures.  CO2 –> temperature increase –> changes in the climate.

Update: Chart originally said 1998 forecast.  Has been corrected to 1988.

Update#2: I am really tired of having to re-explain the choice of using Hansen’s “A” forecast, but I will do it again.  Hansen had forecasts A, B, C, with A being based on more CO2 than B, and B with more CO2 than C.  At the time, Hansen said he thought the A case was extreme.  This is then used by his apologists to say that I am somehow corrupting Hansen’s intent or taking him out of context by using the A case, because Hansen himself at the time said the A case was probably high.

But A, B, and C did not differ in their model assumptions of climate sensitivity or any other variable — they differed only in the amount of CO2 growth and the number of volcanic eruptions (which have a cooling effect via aerosols).  We can go back and decide for ourselves which case turned out to be the most or least conservative.  As it turns out, all three cases UNDERESTIMATED the amount of CO2 man produced in the last 20 years.  So we should not really use any of these lines as representative, but Scenario A is by far the closest.  The other two are way, way below our actual CO2 history.

The people arguing to use, say, the C scenario for comparison are being disingenuous.  The C scenario, while closer to reality in its temperature forecast, was based on an assumption of a freeze in CO2 production levels, something that obviously did not occur.

Most Useless Phrase in the Political Lexicon: “Peer Reviewed”

Last week, while I was waiting for my sandwich at the deli downstairs, I was applying about 10% of my consciousness to CNN running on the TV behind the counter.  I saw some woman, presumably on the Obama team, defending some action of the administration as being based on “peer-reviewed” science.

This may be a legacy of the climate debate.  One of the rhetorical tools climate alarmists have latched onto is to inflate the meaning of peer review.  Often, folks like the person I saw on TV use “peer review” as a synonym for “proven correct and generally accepted in its findings by all right-thinking people who are not anti-scientific wackos.”  Sort of the scientific equivalent of “USDA certified.”

Here is a great example of that, from the DailyKos via Tom Nelson:

Contact NBC4 and urge them to send weatherman Jym Ganahl to some climate change conferences with peer-reviewed climatologists. Let NBC4 know that they have a responsibility to have expert climatologists on-air to debunk Ganahl’s misinformation and the climate change deniers don’t deserve an opportunity to spread their propaganda:

NBC 4 phone # 614-263-4444

NBC 4 VP/GM Rick Rogala email: rrogala(ATSIGN)wcmh.com

By the way, is this an over-the-top attack on heresy or what?  Let’s all deluge a TV station with complaints because their weatherman has the temerity to hold a different scientific opinion than ours?  Seriously, guys, it’s a freaking local TV weatherman in central Ohio, and the fate of mankind depends on burning this guy at the stake?  I sometimes get confused about what leftists really think about free speech, but this sure sounds more like a bunch of good Oklahoma Baptists reacting to finding out their TV minister is pro-abortion.  But it is we skeptics who are anti-science?

Anyway, back to peer review.  You can see in this example again the use of “peer review” as some kind of imprimatur of correctness and shield against criticism.  The author treats it as if it were a sacrament, like baptism or ordination.  This certification seems to be so strong in their mind that just having been published in a peer-reviewed journal is sufficient to complete the sacrament — the peer review does not even seem to have to be on the particular topic being discussed.

But in fact peer review has a much narrower function, and certainly is not, either in intent or in practice, any real check or confirmation of the study in question.  The main goals of peer review are:

  • Establish that the article is worthy of publication and consistent with the scope of the publication in question.  Reviewers are looking to see if the results are non-trivial, new (i.e., not duplicative of findings already well understood), and in some way important.  If you think of peer reviewers as an ad hoc editorial board for the publication, you get closest to the intent.
  • Reviewers will check, to the extent they can, whether the methodology and its presentation are logical and clear — not necessarily right, but logical and clear.  Their most frequent comments are requests for clarification of certain areas of the work, or questions they don’t think the authors answered.  They do not check all the sources, but if they are familiar with one of the sources referenced, they may point out that it is not referenced correctly, or that some other source with which they are familiar should be referenced as well.  History has proven time and again that gross and seemingly obvious math and statistical errors can easily clear peer review.
  • Peer review is not in any way, shape, or form a proof that a study is correct, or even likely to be correct.  Enormous numbers of incorrect conclusions have been published in peer-reviewed journals over time.  This is demonstrably true.  For example, at any one time in medicine, for every peer-reviewed study I can usually find another peer-reviewed study with opposite or wildly different findings.  The fraud on MMR vaccines and autism by Andrew Wakefield, published in the peer-reviewed Lancet, is a good example.
  • Studies are only accepted as likely correct over time, after the community has tried as hard as it can to poke holes in the findings.  Future studies will try to replicate the findings, or disprove them.  As a result of criticism of the methodology, groups will test the findings in new ways that respond to methodological criticisms.  It is the accretion of this work over time that solidifies confidence.  (Ironically, this is exactly the process that climate alarmists want to short-circuit, and even more ironically, they call climate skeptics “anti-scientific” for wanting to follow this typical scientific dispute-and-replication process.)
So, typical peer review comments might be:
  • I think Smith, 1992 covered most of this same ground.  I am not sure what is new here.
  • Jones, 1996 is fairly well accepted and came up with opposite conclusions.  The authors need to explain why they think they got different results from Jones.
A typical peer review comment would not be:
  • The results here looked suspicious, so I organized a major effort at my university and we spent 6 months trying to replicate their work, and could not duplicate their findings.

That latter is a follow-up article, not a peer review comment.

Further, the quality and sharpness of peer review depends a lot on the reviewers chosen.  For example, a peer review of Rush Limbaugh by the folks at LGF, Free Republic, and Powerline might not be as compelling as a peer review by Kos or Kevin Drum.

But instead of this, peer review is used by folks, particularly in political settings, as a shield against criticism, usually for something they don’t understand and probably haven’t even read themselves.  Here is an example dialog:

Politician or Activist:  “Mann’s hockey stick proves humans are warming the planet”

Critic:  “But what about Mann’s cherry-picking of proxy groups; or the divergence problem in the data; or the fact that he routinely uses a proxy with a positive correlation in one period and a different, even negative, correlation in another; or the fact that the results are mostly driven by proxies that have been manually altered; or the fact that trees really make bad proxies, as they seldom actually display the assumed linear positive relationship between growth and temperature?”

Politician or Activist, who 99% of the time has not even read the study in question and understands nothing of what critic is saying:  “This is peer-reviewed science!  You can’t question that.”

Postscript: I am not trying to offend anyone or make a point about religion per se in the comparisons above.  I am not religious, but I don’t have a problem with those that are.  However, alarmists on the left often portray skepticism as part-and-parcel of what they see as anti-scientific ideas tied to the religious right.  I get this criticism all the time, which is funny since I am not religious and not a political conservative.  But I find parallels between climate alarmist and religion to be interesting, and a particularly effective criticism given some of the left’s foaming-at-the-mouth disdain for religion.

What Other Discipline Does This Sound Like?

Arnold Kling via Cafe Hayek on macro-economic modelling:

We badly want macroeconometrics to work.  If it did, we could resolve bitter theoretical disputes with evidence.  We could achieve better forecasting and control of the economy.  Unfortunately, the world is not set up to enable macroeconometrics to work.  Instead, all macroeconometric models are basically simulation models that use data for calibration purposes.  People judge these models based on their priors for how the economy works.  Imposing priors related to rational expectations does not change the fact that macroeconometrics provides no empirical information to anyone except those who happen to share all of the priors of the model-builder.

Skipping A Step

Here is a little glimpse of how climate alarmism works.  Check out this article in the NewScientist (I don’t know anything about this particular publication, but my general assumption is that most periodicals use “New” in the context of such a title as a synonym for “socialist.”):

Rather than spreading out evenly across all the oceans, water from melted Antarctic ice sheets will gather around North America and the Indian Ocean. That’s bad news for the US East Coast, which could bear the brunt of one of these oceanic bulges.

It goes on and on with more detail, which sounds really scary:

First, Jerry Mitrovica and colleagues from the University of Toronto in Canada considered the gravitational attraction of the Antarctic ice sheets on the surrounding water, which pulls it towards the South Pole. As the ice sheet melts, this bulge of water dissipates into surrounding oceans along with the meltwater. So while the sea level near Antarctica will fall, sea levels away from the South Pole will rise.

Once the ice melts, the release of pressure could also cause the Antarctic continent to rise by 100 metres. And as the weight of the ice pressing down on the continental shelf is released, the rock will spring back, displacing seawater that will also spread across the oceans.

Redistributing this mass of water could even change the axis of the Earth’s spin. The team estimates that the South Pole will shift by 500 metres towards the west of Antarctica, and the North Pole will shift in the opposite direction. Since the spin of the Earth creates bulges of oceanic water in the regions between the equator and the poles, these bulges will also shift slightly with the changing axis….

The upshot is that the North American continent and the Indian Ocean will experience the greatest changes in sea level – adding 1 or 2 metres to the current estimates. Washington DC sits squarely in this area, meaning it could face a 6.3-metre sea level rise in total. California will also be in the target zone.

Spotting the skipped logic step does not require one to be a climate skeptic.  Anyone familiar with the most recent IPCC report should see it too.  Specifically, the authors simply posit — without even bothering to mention it as an assumption! — that tons of land-based ice (remember, sea ice melting has no effect on sea levels) is going to melt in Antarctica.  But just about everyone, even the alarmists at the IPCC, predicts just the opposite, even in 3C-per-century global warming scenarios.

Why?  Well, for a couple of reasons.  The first is that Antarctica is so cold that several degrees of warming will not bring most of the continent above freezing, even in the summer.  The exception is probably the Antarctic Peninsula, which sticks out north of the rest of the continent and accounts for 2% of the land mass and a much smaller percentage of the total ice pack.

The other reason is that if the world warms, the seas around Antarctica will warm, and the models show these warming seas increasing precipitation on the continent and actually increasing the snow pack.  In fact, forecast increases in the Antarctic ice pack actually exceed forecast decreases in ice packs around the rest of the world.  The entirety of the IPCC ocean-rise scenario is driven by the thermal expansion of water, not net ice melting.

By the way, I presume these guys have their math right, but it seems astonishing to me that the ice mass (or lack of it) could really exert enough gravitational pull to change sea levels in the northern hemisphere by a meter or two.  Gravity is an astonishingly weak force — does this reality check?  I had always thought differences in ocean levels (say for example the fact that the Atlantic and Pacific are not the same height on either side of the Panama Canal)  had more to do with differentials in evaporation rates.

PS- Is telling me global warming will flood Washington DC supposed to make me be against global warming?  Because that sounds pretty good to me. ;=)

Why I Don’t Post This Stuff

I get emails asking why I haven’t reported on this or that nose-count survey of climate scientists, or such-and-such declaring himself a skeptic, or whatever.  Someone in Senator Inhofe’s office constantly spams me with that stuff.

The answer is that I don’t think headcounts are a particularly relevant or interesting way to conduct science.  Interestingly, Russell Roberts answered a similar but different question in much the same way I would have:

A number of people have asked why my name was missing from the petition against the spending package [which appeared as an ad in the NY Times].  The simple answer is that I didn’t know about it. But I probably wouldn’t have signed it anyway. I decided a while back not to sign these kind of petitions. First, there’s usually something I don’t agree with in the text, and second, the whole thing is a little weird–the idea that people should care that there are a bunch of economists who feel this way, especially given that there are a bunch of economists on the other side of the political spectrum who feel the exact opposite. So is the idea that we have more Nobel Laureates than they do? But what if it’s fewer?

More Thoughts on Tree Mortality Study

I got a copy of the Science article by Van Mantgem et al. on tree mortality, referred to in my previous post here.  I have not done a comprehensive review, but I have now read it and its supplements and have a few immediate reactions.

This article struck me as an absolutely classic academic study, for the following reason:   The study can be broken up into two parts – measurement of a natural phenomenon and possible explanations for the measurements. The meat of the effort and real work is in the first part, the measurement of tree mortality, with very weak work on the second part, on the links to global warming.  Many academic studies are guilty of this to some extent.  I once had a professor tell me every study was a year of intensive data gathering and analysis followed by 2 hours of a group of grad students trying to brainstorm causes and implications, the more exciting the better.  Unfortunately, the press releases and media attention in climate tend to focus on this hypothesizing as if it had as much credibility as the actual data analysis.  Let me be specific.

The first part of the study, the measurement of tree die-off rates, appears to be where the bulk of the work is, and their findings seem fairly reasonable — that tree die-off rates seem to have gone up over the last several decades in the western US, and that this die-off seems to be consistent across geography, tree size, and tree type.   My only complaint is that their data shows a pretty clear relationship between study plot size and measured mortality.   Most of the measured tree mortality is in plots 1 hectare or less (about 2.5 acres, or the size of a large suburban home lot).  There is not nearly as much mortality in the larger study plots — I would have liked to see the authors address this issue as a possible methodological problem.

Anyway, the finding of large and spatially diverse increases in mortality of trees is an important finding, and one for which the authors should feel proud to identify.  The second part of the study, the hypothesized causes of this mortality, is far far weaker than the first, though this is not atypical of academic studies.  Remember, in the press summaries, the authors claimed that global warming had to be the cause because they had eliminated everything else it could possibly be.  So here is what their Science article mentions that they considered and rejected as possible causes:

  • Changes in forest density and fire exclusion policies
  • Old trees falling and crushing new trees
  • Ozone levels (they claim they look at “pollution” but ozone is the only chemical discussed)
  • Activity of fungal pathogen, Cronartium ribicola, in certain pines
  • Forest fragmentation

Wow, well that certainly seems comprehensive.  Can’t think of a single other thing that could be causing it.  By the way, the last one is interesting to me, because I suggested forest fragmentation and micro-climate issues in my first post.  So, just to give you an idea of the kind of scholarship that passes peer review, let’s see how they tested for forest fragmentation:  they compared mortality of trees inside national parks vs. mortality of trees outside of national parks.  The logic is that National Park trees would see less fragmentation over time since they are protected from logging, but that of course is a supposition.

This is really weak.  I guess it’s not a bad test if you had to come up with one in an afternoon without the time to do any extra work, but it is a very coarse macro test of a very micro problem.  For example, the top of Kilimanjaro is protected as a National Park, but evidence is pretty strong that snow on the mountain is being reduced by land-use-related changes in precipitation and local climate due to logging outside the national park.

A lot of folks in the comments of the last post mentioned, reasonably, the massive infestations of western pine bark beetles.  The only mention of the  bark beetle infestations was, interestingly, in their last paragraph, where they said:

First, increasing mortality rates could presage substantial changes in forest structure, composition, and function (7, 25), and in some cases could be symptomatic of forests that are stressed and vulnerable to abrupt dieback (5). Indeed, since their most recent censuses, several of our plots in the interior region experienced greatly accelerated mortality due to bark beetle outbreaks, and in some cases nearly complete mortality of large trees

I guess that is a handy way to deal with an exogenous factor you don’t want to admit drove some of your observations – just reverse the causality.  So now mortality is not caused in part by beetles, beetles are caused by mortality!

By the way, before I head into temperature, I had a question for those of you who may know trees better than I.  Do trees have demographics and generations, like human populations?  For example, we expect a rise in mortality among humans over the next 30 years because there was a spike in birth rates 50 years ago.  Do forests have similar effects?  It struck me that humans cleared a lot of western forests from 1860-1920, and since then the total forested area in the US has expanded.  Is there some sort of baby boomer generation of trees born around 1900 that are now dying off?

Anyway, on to temperature.   Here is the key statement from the Science article:

We suggest that regional warming may be the dominant contributor to the increases in tree mortality rates. From the 1970s to 2006 (the period including the bulk of our data; table S1), the mean annual temperature of the western United States increased at a rate of 0.3° to 0.4°C decade⁻¹, even approaching 0.5°C decade⁻¹ at the higher elevations typically occupied by forests (18). This regional warming has contributed to widespread hydrologic changes, such as declining fraction of precipitation falling as snow (19), declining snowpack water content (20), earlier spring snowmelt and runoff (21), and a consequent lengthening of the summer drought (22). Specific to our study sites, mean annual precipitation showed no directional trend over the study period (P = 0.62, LMM), whereas both mean annual temperature and climatic water deficit (annual evaporative demand that exceeds available water) increased significantly (P < 0.0001, LMM) (10). Furthermore, temperature and water deficit were positively correlated with tree mortality rates (P ≤ 0.0066, GNMM; table S4).

The footnotes reference that the temperature and water correlations are in the supplementary online material, but I have access to that material and there is nothing there.  I may be unfair here, but it really looks to me like some guys did some nice work on tree mortality, couldn’t get it published, and then tacked on some stuff about global warming to increase the interest in it.   Note that Science recognizes what the study is about, when it titles the article “Widespread Increase of Tree Mortality Rates in the Western United States,” without mention of global warming.  But when it moves to the MSM, it is about global warming, despite the fact that none of the warming and drought data and regressions are considered important enough or persuasive enough to make the article or even the supplementary material.

OK, if this paragraph is all we have, what can we learn from it?  Well, the real eye-catcher for me is this:

From the 1970s to 2006…the mean annual temperature of the western United States increased at a rate of 0.3° to 0.4°C decade−1, even approaching 0.5°C decade−1 at the higher elevations typically occupied by forests

They are saying that for the period 1971-2006 the temperature of the Western US increased 1.1°C to 1.4°C, or about 2-2.5°F.  And at the higher elevations it increased as much as 1.75°C, or about 3.2°F.  This seems really high to me, so I wondered at the source.  Apparently, it comes from something called the PRISM database.  These guys seem to have some sort of spatial extrapolation program that takes discrete station data and infills values for the areas between the stations, mainly using a linear regression of temperature vs. altitude.  I have zero idea if their methodology makes any sense, but knowing the quality of some of the station data they are using, it may be GIGO.  (By the way, someone at Oregon State, which apparently runs this site, needs to hire a better business manager.  Their web site reports that, in an academic environment awash with money for climate research, their climate database work has been suspended for lack of funding.)
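As I read it, the infilling step is conceptually just a regression of station temperature on elevation, used to predict values for unsampled grid cells.  Here is a minimal sketch in Python of that idea, with invented station values (my illustration, not PRISM's actual data or method):

```python
import numpy as np

# Hypothetical station data: (elevation in meters, mean annual temp in C).
# These numbers are invented for illustration; they are not PRISM inputs.
stations = np.array([
    [ 200.0, 16.1],
    [ 650.0, 13.8],
    [1100.0, 11.2],
    [1800.0,  7.4],
    [2500.0,  3.9],
])
elev, temp = stations[:, 0], stations[:, 1]

# Fit a linear "lapse rate" model: temp = intercept + slope * elevation.
slope, intercept = np.polyfit(elev, temp, 1)
print(f"fitted lapse rate: {slope * 1000:.2f} C per km")

# Infill an unsampled grid cell at a known elevation from the regression.
grid_cell_elev = 1500.0  # meters; no station here
estimate = intercept + slope * grid_cell_elev
print(f"estimated mean annual temp at {grid_cell_elev:.0f} m: {estimate:.1f} C")
```

The GIGO worry is visible right in the structure: if the station readings feeding the fit are biased, every infilled cell inherits the bias.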

As a back check on this number, LaDochy in 2007 looked at California temporal and spatial temperature trends in some depth.  He found that when one pulls out the urban stations, California rural areas experienced a 0.034°C per decade temperature increase from 1950-2000, an order of magnitude lower than the numbers this study is using (slide below):

[Slide: LaDochy (2007), California rural vs. urban temperature trends]

Satellite data from UAH, which does not have the same urban bias problems, shows near-surface temperatures rising 0.1-0.3°C per decade from 1978-2006 in the study areas, higher than the LaDochy numbers but still well below the study numbers.

This is a real problem with the study.  If you really want to measure the impact of temperature and precipitation on a 2.5-acre plot, you have to actually measure the temperature and precipitation, and even better things like soil moisture content, somewhere near that plot, not rely on western averages or computer-interpolated values.

The study authors conclude:

Warming could contribute to increasing mortality rates by (i) increasing water deficits and thus drought stress on trees, with possible direct and indirect contributions to tree mortality (13, 23); (ii) enhancing the growth and reproduction of insects and pathogens that attack trees (6); or (iii) both. A contribution from warming is consistent with both the apparent role of warming in episodes of recent forest dieback in western North America (5, 6) and the positive correlation between short-term fluctuations in background mortality rates and climatic water deficits observed in California and Colorado (13, 24).

I guess I won't argue with the word "consistent," and I suppose it is unfair to hammer these guys too hard for the way the MSM over-interprets the conclusions and latches on to the global warming hypothesis.  But really, isn't that why the warming material is included in the paper, to get attention for the authors?  This paragraph would make a nice summary in a proposal for a new study, and the hypothesis is a reasonable one to test, but it certainly isn't proven by this study.

Postscript: From the map, some of the test plots are almost right on top of the California bristlecone pines used for climate reconstruction.  Remember, Mann and company begin with the assumption that tree growth is positively correlated with temperature.  This article argues that warming is stunting tree growth and causing trees to die.  While these are not impossible to reconcile (though it's hard, considering the authors of this study said their findings were consistent across tree age, size, and variety), I would love to see how the RealClimate folks do so.

Update: Note that I still have not read the complete study itself, so I am sure there are climate regressions and such that did not make the publication or the online exhibits in Science.  So this quick reading may still be missing something.

Update #2: The best reconciliation I have received on this study vs. dendro-climatology work is the following, and is suggested on this page.  Certain trees seem to be growth-limited by temperature, and certain trees are growth-limited by water (I presume there are other modes as well).  Trees that are temperature-limited will have their growth gated by temperature.  Trees that are water-limited will have growth controlled primarily by precipitation levels.  Grissino-Mayer states:

…sites useful to dendrochronology can be identified and selected based on criteria that will produce tree-ring series sensitive to the environmental variable being examined. For example, trees that are especially responsive to drought conditions can usually be found where rainfall is limiting, such as rocky outcrops, or on ridgecrests of mountains. Therefore, a dendrochronologist interested in past drought conditions would purposely sample trees growing in locations known to be water-limited. Sampling trees growing in low-elevation, mesic (wet) sites would not produce tree-ring series especially sensitive to rainfall deficits. The dendrochronologist must select sites that will maximize the environmental signal being investigated. In the figure below, the tree on the left is growing in an environment that produced a complacent series of tree rings.

So I suppose that while most trees are suffering from higher temperatures via the moisture mechanism, some may be benefiting, and the key is to pick the right trees.

Of course, given that bristlecones were selected as much for the fact that they are old as for the fact that their growth is driven by one thing or another, the problem is how one knows whether a particular tree's growth is temperature- or moisture-driven, and how one can have confidence that this "mode" has not changed for a thousand or more years.

Are bristlecones driven by temperature (as they grow at fairly high altitude) or by precipitation (as they grow in a very arid region of the southwest)?  One might expect that, given the divergence issues in the bristlecone proxies, the Mannian answer of "temperature" might be wrong.  The NASA site offers this answer on the bristlecones:

Douglas’ [bristlecone] rings [from the White Mountains of CA, the same ones Mann uses] tell about rainfall in the southwestern United States, but trees also respond to changes in sunlight, temperature, and wind, as well as non-climate factors like the amount of nutrients in the soil and disease. By observing how these factors combine to affect tree rings in a region today, scientists can guess how they worked in the past. For example, rainfall in the southwestern United States is the factor that affects tree growth most, but in places where water is plentiful, like the Pacific Northwest, the key factor affecting tree ring growth may be temperature. Once scientists know how these factors affect tree ring formation, scientists can drill a small core from several trees in an area (a process that does not harm the tree) and determine what the climate was in previous years. The trees may also record things like forest fires by bearing a scar in a ring.

We Eliminated Everything We Could Think Of, So It Has To Be Warming

I am still trying to get a copy of the article in Science on which this is based, but the AZ Republic writes:

Western forests that withstood wildfire, insect attacks and drought are now withering under an even greater menace.

Heat.

Rising temperatures are wiping out trees faster than the forests can replace them, killing pines, firs, hemlocks and almost every other kind of tree at almost every elevation from northern Arizona to southwestern Canada.

Writing today in the journal Science, a team of 11 researchers says global warming is almost certainly the culprit behind a sharp spike in tree deaths over the past several decades. The higher death rates, which doubled in as few as 17 years in some areas, coincide with a regional increase in temperature and appear unrelated to other outside factors.

Perhaps this question is answered somewhere in the unreported details, but my first reaction was to want to ask: "Dendroclimatologists like Michael Mann reconstruct history from tree rings based on the assumption that increasing temperature correlates linearly and positively with tree growth and therefore tree ring width.  Your study seems to indicate the correlation between tree growth and temperature is negative and probably non-linear.  Can you reconcile these claims?"  Seriously, there may be an explanation (different kinds of trees?) but after plastering the hockey stick all over the media for 10 years, no one even thinks to ask?

Normally, I just ignore the flood of such academic work (every study nowadays has global warming in it — if these guys had just wanted to study the forest, they would have struggled for grant money, but make it about the forest and global warming and boom, here's your money).  The reason I picked it out is that I just love the quote below — I can't tell you how often I see this in climate science-related work:

Scientists combed more than 50 years of data that included tree counts and conditions. The sharp rise in tree mortality was apparent quickly. Researchers then eliminated possible causes for the tree deaths, such as air pollution, fire suppression or overgrowth. They concluded the most likely culprit was heat.

Again, I need to see the actual study, but this would not be the first time a climate study said "well, we investigated every cause we could think of, and none of them seemed to fit, so it must be global warming."  It's a weird way to conduct science, assuming CO2 and warming are the default cause of every complex natural process.  No direct causal relationship with warming is needed; all that is required is to eliminate any other possible causes.  This means that the less well we understand any complex system, the more likely we are to determine that changes in the system are somehow anthropogenic.

Speaking of anthropogenic, I am fairly certain that the authors have not even considered the most likely anthropogenic cause, if the source of the forest loss is man-made at all.  From my reading of the literature, nearby land use changes (clearing forests for agriculture, urbanization, etc.) have a much greater effect on local climates, and particularly moisture patterns, than does a general global warming trend.  If you clear all the surrounding forest, it is likely that the piece that is left is not as healthy as it would have been in the midst of other forested land.

The article, probably because it is making an Arizona connection, makes a big deal about the role of a study forest near Flagstaff.  But if the globe is warming, the area around northern Arizona has not really been participating.  The nearest station to the forest is the USHCN station at the Grand Canyon, a pretty decent analog because it is nearby, rural, and forested as well.  Here is the plot of temperatures from that station:

[Figure: Grand Canyon USHCN station temperature history]

It's hard to tell from the article, but my guess is that there is actually a hidden logic leap embedded.  Likely, their finding is that drought has stressed trees and reduced growth.  They then rely on other studies to say that this drought is due to global warming, so they can get to the grant-tastic finding that global warming is hurting the forest.  But "western drought caused mainly by anthropogenic warming" is not a well-proven connection.  Warming likely makes some contribution to it, but the west has been through wet-dry cycles for tens of thousands of years, and went through much worse and longer droughts long before the Clampetts started pumping black gold from the ground.

The Magic Correlation

This discussion, including the comments, over at Climate Audit, really is amazing.  Just when you think all the procedural errors that could be mined from the Mann hockey stick have been pulled to the surface, another gem emerges.

Here is how I understand it (please correct me if I am wrong in the comments):  Michael Mann uses a variety of proxies to reconstruct history  (he actually pre-screens them to only use the ones that will give him the answer he wants, but that is another problem that has been detailed in other posts).  To be able to tell temperature with these proxies (since their original measurements are things like mm of tree ring width, not degrees) they must be scaled based on periods in which the thermometer-measured surface temperature record overlaps the proxy record.

Apparently, when making these calibrations, he used the surface temperature record from 1850-1995, but also did other runs with sub-periods of this, such as 1850-1949 and 1896-1995.  OK so far.  Well, McIntyre believes he has found that, when running these correlations, the sign of the correlation factor for a single proxy actually changes depending on the calibration period.

What does this mean?  Well, let's assume proxy 1 is tree ring width from a particular tree, and a calibration based on 1850-1995 gives that proxy a coefficient of X ring-width units per degree.  This means that an increase in ring width of X implies a temperature increase of one degree.  But, when calibrating on one of the other periods, the exact same proxy has a calibration of -Y.  This means that an increase in ring width of Y yields a temperature DECREASE of one degree.
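To see how a sign can flip between calibration windows when a proxy is weak, here is a minimal sketch with invented numbers (my toy illustration, not Mann's data or actual method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a "proxy" that is mostly noise with only a faint temperature
# signal, against a noisy temperature series. All values are synthetic.
years = np.arange(1850, 1996)
temp = 0.004 * (years - 1850) + rng.normal(0, 0.2, years.size)
proxy = 0.05 * temp + rng.normal(0, 0.5, years.size)  # signal nearly drowned

def calibration_slope(y0, y1):
    """OLS slope of proxy on temperature over the window [y0, y1]."""
    m = (years >= y0) & (years <= y1)
    return np.polyfit(temp[m], proxy[m], 1)[0]

for period in [(1850, 1995), (1850, 1949), (1896, 1995)]:
    print(period, f"slope = {calibration_slope(*period):+.3f}")
```

With a signal this faint relative to the noise, the fitted slope over a 100-year sub-period can easily come out with the opposite sign from the full-period fit, and from the other sub-period.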

I had a professor of physics back in undergrad who used to just drive me crazy with his insistence on good error estimations in the lab  (which he was right to emphasize, just proving I was not meant for the lab).  He used to say that if your error range crossed zero, in other words, if your range of possible answers included both positive and negative numbers, then you really did not understand a process.  You don’t understand a relationship, he would say, if you don’t even know the sign.  Well, Mann has gotten over this little problem, I guess, because he is perfectly able to have the same physical process have exactly opposite relationships with temperature depending on what 50 year period he is working with.

OK, so Steve caught him with one bad proxy.  Heck, he has over a thousand others.  But now McIntyre is reporting in the comments he has found 308 such cases, where Mann has correlations that change signs like this.  Wow.

Postscript: By the way, one of the most fundamental rules of regression analysis is that when you throw a variable into the regression, you should have some theoretical reason for doing so.  This is because every single variable you add, no matter how spurious, will improve, or at least never worsen, the in-sample fit of a regression (trust me on this, it's in the math).
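You can verify the math claim with a few lines of Python.  In this sketch (entirely synthetic, nothing to do with any proxy data), the target series is pure noise, and the in-sample R-squared still creeps upward with every junk regressor added:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
y = rng.normal(size=n)  # the target is pure noise: nothing real to explain

def r_squared(X, y):
    """In-sample R^2 of an OLS fit of y on X (with an intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    yc = y - y.mean()
    return 1 - (resid @ resid) / (yc @ yc)

X = np.empty((n, 0))
for k in range(1, 21):
    X = np.column_stack([X, rng.normal(size=n)])  # add one more junk variable
    print(f"{k:2d} regressors: R^2 = {r_squared(X, y):.3f}")
```

In-sample R-squared is mathematically non-decreasing as you add columns, which is why goodness of fit alone says nothing about whether a variable belongs in the model.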

In the case of proxy regressions, it is simply unacceptable to rely on the regression for the sign.  You rely on physics for the sign, not the regression.   If you don’t even know the sign of the relationship between your proxy and temperature, then you don’t understand the proxy well enough physically to justify even calling it a proxy.

This is a big, big deal in financial modelling.  I can't tell you how often it is emphasized in financial modelling to make sure you have a working theory as to how and why a variable should affect a regression, and then, when you get the result, to test it against your original theory.  And if they are too far apart, you need to doubt the computer result.  Because in financial modelling, if you get too much confidence in regressions against spurious data, you can go bankrupt (in climate, it instead seems to lead to fame, large grants, and hanging out with vice-presidents).

Update: Oops, I missed the first post on this at Climate Audit, which discusses the issues in my postscript in more depth.  This is a good example, and it is not surprising they revert to a financial example as I did, as financial modelers have the greatest immediate incentives not to fool themselves.

We (the authors of this paper) have identified a weather station whose temperature readings predict daily changes in the value of a specific set of stocks with a correlation of r=-0.87. For $50.00, we will provide the list of stocks to any interested reader. That way, you can buy the stocks every morning when the weather station posts a drop in temperature, and sell when the temperature goes up. Obviously, your potential profits here are enormous. But you may wonder: how did we find this correlation? The figure of -.87 was arrived at by separately computing the correlation between the readings of the weather station in Adak Island, Alaska, with each of the 3315 financial instruments available for the New York Stock Exchange (through the Mathematica function FinancialData) over the 10 days that the market was open between November 18th and December 3rd, 2008. We then averaged the correlation values of the stocks whose correlation exceeded a high threshold of our choosing, thus yielding the figure of -.87. Should you pay us for this investment strategy? Probably not: Of the 3,315 stocks assessed, some were sure to be correlated with the Adak Island temperature measurements simply by chance – and if we select just those (as our selection process would do), there was no doubt we would find a high average correlation. Thus, the final measure (the average correlation of a subset of stocks) was not independent of the selection criteria (how stocks were chosen): this, in essence, is the non-independence error. The fact that random noise in previous stock fluctuations aligned with the temperature readings is no reason to suspect that future fluctuations can be predicted by the same measure, and one would be wise to keep one’s money far away from us, or any other such investment advisor
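That selection trap is easy to reproduce.  A rough simulation of the quoted setup, with random numbers standing in for both the weather station and the stocks (the 10-day and 3,315-instrument figures come from the quote; everything else is my invention):

```python
import numpy as np

rng = np.random.default_rng(2)

days, n_stocks = 10, 3315   # figures taken from the quoted passage

station = rng.normal(size=days)             # stand-in temperature readings
stocks = rng.normal(size=(n_stocks, days))  # stand-in daily stock changes

# Correlation of each (random) stock series with the (random) station series.
r = np.array([np.corrcoef(station, s)[0, 1] for s in stocks])

# Select only the stocks clearing a strong negative threshold, then average.
# The selection criterion and the reported measure are the same quantity,
# so the reported average is inflated by construction.
selected = r[r < -0.8]
print(f"{selected.size} 'predictive' stocks, mean r = {selected.mean():.2f}")
```

Nothing in the simulation predicts anything; the impressive average correlation is manufactured entirely by selecting on the very statistic being reported.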

Update #2: I guess I have to issue a correction.  I have argued that climate scientists tend to be unique in trying to avoid criticism by labeling critics as “un-scientific”.  In retrospect, it does not appear climate scientists are unique:

The iconoclastic tone have attracted coverage on many blogs, including that of Newsweek. Those attacked say they have not had the chance to argue their case in the normal academic channels. “I first heard about this when I got a call from a journalist,” comments neuroscientist Tania Singer of the University of Zurich, Switzerland, whose papers on empathy are listed as examples of bad analytical practice. “I was shocked — this is not the way that scientific discourse should take place.”

The Wrong Tree

I don’t really understand how this discussion at the Reference Frame is relevant to anything.  A study says that the clustering of high temperatures at the end of the last 100 years cannot be just random statistical chance, while Lubos argues that the chance of it happening is low but not nearly as low as the authors state.

I guess this may be an interesting exercise in probability theory for autocorrelated functions, but that is about it.  I mean, does anyone really doubt that there has been some sort of upward trend in world temperatures?

More relevant are the questions

Can you have a consensus if no one agrees what the consensus is?

Over at the Blackboard, Lucia has a post with a growing set of comments about anthropogenic warming and the tropical, mid-tropospheric hotspot.  Unlike many who are commenting on the topic, I have actually read most of the IPCC AR4 (painful as that was), and came to the same conclusion as Lucia:  that the IPCC said the climate models predicted a hot spot in the mid-troposphere, and that this hot spot was a unique fingerprint of global warming (“fingerprint” being a particularly popular word among climate scientists).  Quoting Lucia:

I have circled the plates illustrating the results for well mixed GHG’s and those for all sources of warming combined. As you see, according to the AR4– a consensus document written for the UN’s IPCC and published in 2007 — models predict the effect of GHG’s as distinctly different from that of solar or volcanic forcings. In particular: The tropical tropospheric hotspots appears in the plate discussing heating by GHG’s and does not appear when the warming results from other causes.

[Figure: IPCC AR4 plates of modeled warming patterns by forcing type, with the GHG and combined-forcings panels circled]

OK, pretty straightforward.  The problem is that this hot spot has not really appeared.  In fact, the pattern of warming by altitude and latitude over the last thirty years looks nothing like the circled prediction graphs.  Steve McIntyre does some processing of RSS satellite data and produces this chart of actual temperature anomalies for the last 30 years by latitude and altitude.  (Altitude is measured in these graphs by atmospheric pressure, where 1000 millibars is the surface and 100 millibars is about 10 miles up.)

[Figure: RSS satellite temperature trends for the last 30 years, by latitude and altitude]
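For what it is worth, the processing behind a map like this is conceptually simple: fit a linear trend in time at every latitude-altitude grid cell and plot the slopes.  A minimal sketch with random numbers standing in for the satellite anomalies (my illustration, not McIntyre's actual code):

```python
import numpy as np

rng = np.random.default_rng(4)

# 30 years of months x pressure levels x latitude bands; stand-in anomalies.
months, levels, lats = 360, 12, 36
anoms = rng.normal(0, 0.5, (months, levels, lats))

# Per-cell OLS trend: slope = sum(t_c * y_c) / sum(t_c^2), vectorized over
# the whole grid at once.
t = np.arange(months) / 120.0        # time in decades
t_c = t - t.mean()
y_c = anoms - anoms.mean(axis=0)
trend = np.tensordot(t_c, y_c, axes=(0, 0)) / (t_c @ t_c)

print(trend.shape)  # (levels, lats): one trend, in degrees per decade, per cell
```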

The scientists at RealClimate (lead defenders of the climate orthodoxy) are not unaware that the hot spot is not appearing.  They responded about a year ago that 1) the hot spot is not an anthropogenic-specific fingerprint at all, but will result from all new forcings:

the pattern really has nothing to do with greenhouse gas changes, but is a more fundamental response to warming (however caused). Indeed, there is a clear physical reason why this is the case – the increase in water vapour as surface air temperature rises causes a change in the moist-adiabatic lapse rate (the decrease of temperature with height) such that the surface to mid-tropospheric gradient decreases with increasing temperature (i.e. it warms faster aloft). This is something seen in many observations and over many timescales, and is not something unique to climate models.

and 2) that we have not had enough time for the hot spot to appear, and 3) that the satellite data has a lot of error in it anyway.

Are the Real Climate guys right on this?  I don’t know.  That’s what they suck up all my tax money for, to figure this stuff out.

But here is what makes me crazy: it is quite normal in science for scientists to have a theory, make a prediction based on that theory, and then go back and tweak the theory when data from real physical processes does not match the predictions.  There is certainly no shame in being wrong.  The whole history of science is about lurching from one failed hypothesis to the next, hopefully improving understanding with each iteration.

But the weird thing about climate science is the sort of Soviet-era need to rewrite history.  Commenters on both Lucia’s site and at Climate Audit argue that the IPCC never said the hot spot was a unique fingerprint.  The fingerprint has become an un-person.

Why would folks want to do this?  After all, science is all about hypothesis – experimentation – new hypothesis.  Well, most science.  The problem is that climate science has been declared to be 1) a Consensus and 2) Settled.  But a settled consensus can't, by definition, have disagreements and falsified forecasts.  So history has to be rewritten to protect the infallibility of the Pope, er, the Presidium, er, the climate consensus.  It's a weird way to conduct science, but a logical outcome when phrases like "the science is settled" and "consensus" are used as clubs to silence criticism.

“Anti-Scientific”

From Joe Romm, via Tom Nelson

The finalist list is out for the 2008 Weblog awards “Best Science Blog,” and two of the ten finalists are anti-scientific websites primarily devoted to spreading disinformation (and noninformation) on global warming– just like 2007.

The 2007 “competition” ended up being yet another classic exercise in the right wing perverting an otherwise reasonable web idea — online voting for the best science blog. As Desmogblog explained in a post titled, The “Vast Right Wing Conspiracy” beating “Vast Left Wing” Voting for Best Science Weblog, the right wing voted en masse for Climate Audit and the rational people all voted for Discover magazine’s excellent Bad Astronomy Blog. In the end, the process was so controverisal that the Awards folk simply called it a tie — saying each blog ended up with exactly 20,000 votes.

The Weblog Awards should not be legitimizing anti-scientific denialism.

As a student of history, I try really hard to never use the word “unprecedented.”  For example, those who think the partisan bickering we have today is somehow at a peak should go back to any American paper in 1855 and take a gander at the vitriol that flew back and forth.

But I must say I do find it difficult to find a good historical analog for this whole “anti-scientific” knock on climate skeptics.  I can understand accusing others of being wrong on a topic in science.   For example, it took decades for plate tectonics theory to catch on outside of small fringes of the geologic community, but I don’t remember folks accusing others of being anti-scientific.

This is particularly true in the case of the two blogs Mr. Romm mentions.   Here are a couple of quick thoughts:

  • Steve McIntyre, at Climate Audit, spends most of his time trying, in great statistical depth, to replicate work by scientists such as Michael Mann and James Hansen, and critiques their work when he thinks he finds flaws.  Mann and Hansen spend much of their time trying to stonewall Mr. McIntyre and prevent him from having access to their data (most of which was collected and analyzed at taxpayer expense, either directly or through government grants).  Which of these parties seems closer to the spirit of science?
  • Anthony Watts argued for years with the government operators of the surface temperature measurement network that their system had location biases that were not being taken into account, and that were much larger than acknowledged.  When the operators of these systems were uninterested in pursuing the matter, Watts started a volunteer effort to survey and photograph these stations, so that the location biases, where they exist, would be visible and available for anyone who wished to see them.
  • Only one side in this debate ever argues that the other should be banned from even speaking or being heard.  I think you know which one that is.  So which side is the one that is “anti-science” — the one that is happy to mix it up in open debate or the one that is trying to get its opposition silenced?

Again, Watts and McIntyre could be wrong, but their sites are often scientific.  I could easily name 10 climate skeptic sites that, while I wouldn't call them anti-science, are certainly a-scientific, focusing more on polemic than data.  But I could do the exact same on the alarmist side.  Certainly Watts' and McIntyre's sites are not in this category.

Here is the best analogy I can come up with (one which, not being religious myself, I hopefully can portray with a bit of detachment).  During the Reformation, the Catholic Church accused critics of the Church of being anti-Christian.  But the religious skeptics were not anti-Christian per se; they merely contested the Church's (and the Pope's) ability to speak with absolute authority on religious matters.  The priests of the Church were upset that their monopoly on Church doctrine was being challenged, so they labeled their opposition anti-religious, when what the skeptics actually opposed was the established Church, its doctrine, and its priesthood.

And by the way, is any actual adult human being with more than a year's experience blogging really surprised that voting on the Internet can be gamed?

Who is Being Facile?

I very seldom go wallowing about responding to comments in my comment threads.  As I have posted before, I try to learn from criticisms in the comments and improve or modify my positions next time I post on a similar subject.  Besides, I would lose my life to playing the troll game on climate issues.

However, I am sitting at home with some time on my hands and thought I would address a representative critical comment, from this post on the sun.  Take this post as evidence, I guess, that I am perfectly capable of responding in depth to criticisms in the comments, had I this much time to spend with every one.

Staggering. You obviously haven’t read a single scholarly paper on this, or even looked at the data. This graph should dispel all your wrong-headed thinking. It’s the temperature, and monthly sunspot numbers, both plotted as 11 year running means (and scaled so that they roughly align). How, exactly, did rising temperatures in 1920 trigger increased solar activity 10 years later? Why did a peak in solar activity in the 1950s not correspond to a rise in temperatures then? Why do the green line and the red line diverge so wildly after 1985? Why is there basically hardly any correlation between solar activity and temperatures, over the last 150 years?

You betray a great immaturity on this web page, regurgitating the same nonsense time and again, calling people ‘morons’ and never even having the courtesy to respond when people tell you you’re wrong. When a fellow denialist tells you you’ve misrepresented him, and you don’t even bother to reply, let alone correct your error, what becomes crystal clear is that you’re simply dishonest.

A couple of responses:

  1. One of the great things about WordPress is that I have a nifty and powerful site search plugin (called Search Regex).  I just searched every post on this site.  The word “moron” has never, ever appeared on this site in text I have written.   She puts moron in quotes, but I am fairly certain she is not quoting me.  According to a Google search of this site, “moron” does appear 20-30 times in the comment section, usually wielded by my critics.
  2. When folks email me that I have made a mistake in quoting them, I post an update 100% of the time.  However, I occasionally miss such notifications if they are posted in the comments and not emailed to me.  Believe it or not, I can go days without even glancing at this site or thinking about climate when the real job intervenes.  But that is what comments are for.  Unlike other climate sites that will remain nameless, *cough* realclimate *cough*, I don't moderate any comments, good, bad, or indifferent, except to eliminate outright spam.  If you disagree or I screwed up, that's what the comments are there for.
  3. The commenter argues that I am simplistic and immature in this post.  I find this odd, I guess, for the following reason:  One group out there tends to argue that the sun is largely irrelevant to the past century’s temperature increases.  Another argues that the sun is the main or only driver.  I argue that the evidence seems to point to it being a mix, with the sun explaining some but not all of the 20th century increase, and I am the one who is simplistic?  Just for the record, I actually began the post with this:

    “I wouldn’t say that I am a total sun hawk, meaning that I believe the sun and natural trends are 100% to blame for global warming. I don’t think it unreasonable to posit that once all the natural effects are unwound, man-made CO2 may be contributing a 0.5-1.0C a century trend”

  4. The commenter links to this graph, which I will include.  It is a comparison of the Hadley CRUT3 global temperature index (green) and sunspot numbers (red):

    [Figure: Hadley CRUT3 temperature index (green) vs. sunspot numbers (red)]

    Since I am so ridiculously immature, I guess I don't trust myself to interpret this chart, but I would have happily used this chart myself had I had access to it originally.  (The chart uses a trailing 12 average of temperature as well as sunspots, which is why the line does not flatten and fall at the end.  I have to think a bit about whether I accept this metric as the correct comparison.)

    It is wildly dangerous to try to visually interpret data and data correlations, but I don't think it is unreasonable to say that there might be a relationship between these two data sets.  Certainly not 100%, but then again the same could easily be said of the relationship of temperature to CO2.  The same type of inconsistencies the commenter points out in this correlation could easily be found for CO2 (e.g., why, if CO2 was increasing, and in fact accelerating, were temps in 1980 lower than in 1940?).  The answer, of course, is that climate is complicated.  But I see nothing in this chart that is inconsistent with the hypothesis that the sun might have been responsible for perhaps half of the 20th century warming.  And if CO2 is left with credit for just 0.3-0.4°C warming over the last century, it is a very tough road to get from this past warming to sensitivities as high as 3°C or greater.  I have all along contended that CO2 will likely drive 0.5-1.0°C warming over the next century, and see nothing in this chart that makes me want to change that prediction.

  5. I was playing around a bit more, and found adding in PDO cycles fairly interesting (really this should be some combined AMO/PDO metric, and the exact dates of PDO reversals can be argued about, but I was going for quick and dirty).  Here is what I got:

    [Figure: temperature and sunspots with PDO warm/cool phases overlaid]

    If I wanted to make the same kind of observations as the commenter, I could say that temperature outpaced the sun during PDO warm phases and lagged the sun during cool phases, about what one would expect.  Again, I firmly believe there is still a positive warming trend when you take out cycles like the PDO and effects of the sun and other such influences — but that trend, even if all due to CO2, appears to be far below catastrophic sensitivity levels.

  6. It is kind of ironic that the post was actually not an in-depth analysis of solar cycles, but merely a statement of a hypothesis: that the level of solar activity, rather than the trend in solar activity, should be regressed against temperature changes.  This seems like a fair hypothesis (one only has to think of a burner on a stove to understand it) but the commenter ignored it.  In fact, the divergence she points to in the late 1990s is really exactly to the point.  Should a decreased solar output yield decreased temperatures?  Or, if that output is still higher than the historical average, will it still drive temperatures higher?  The answer likely boils down to how fast equilibrium is reached, and I don't know the answer, nor do I think anyone else does either.  (A toy model of this level-vs-trend point appears after this list.)
  7. I ask that people use their terms carefully.  I am not a “denialist” if that is meant to mean that I deny any anthropogenic effects on temperature or climate.  I am a denialist if that is meant to mean that I deny that warming from anthropogenic Co2 will cause catastrophic impacts that will outweigh the cost of Co2 abatement.
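To make the level-vs-trend point from item 6 concrete, here is a toy first-order model of my own (all numbers invented, nothing calibrated to real forcings).  Temperature relaxes toward an equilibrium set by the level of the forcing, so it can keep rising even while the forcing itself is flat or slightly declining:

```python
import numpy as np

# Toy "stove burner" model: dT/dt = (forcing_level - T) / tau.
# All values are invented for illustration only.
years = np.arange(1900, 2001)
forcing = np.where(years < 1950, 0.0, 1.0)  # level steps up in 1950
forcing[years > 1985] = 0.9                 # then declines slightly after 1985

tau = 30.0                                  # assumed relaxation time, in years
temp = np.zeros(years.size)
for i in range(1, years.size):
    temp[i] = temp[i - 1] + (forcing[i] - temp[i - 1]) / tau

# Even though the forcing fell after 1985, temperature is still rising at the
# end, because the level remains above the temperature reached so far.
slope = np.polyfit(years[-15:], temp[-15:], 1)[0]
print(f"1986-2000 temperature trend: {slope:+.4f} per year")
```

Whether the real climate system equilibrates fast enough for the trend argument to work is exactly the open question; the sketch only shows that a flat or falling forcing does not by itself rule out continued warming.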

Update: OK, I see that in the post in question, one of the quotes from another source used the word "morons."  For those not experienced with reading blogs, indented text is generally quoted material from another source.  I guess I now understand the confusion — I stand by my statement, though, that I never use such terms in my own writing.  It is not the style I try to adopt on this blog.  Writers I quote have their own styles, and my quoting them does not necessarily mean I totally agree with them, merely that the point they are making is somehow thought-provoking or one I want to comment on, extend, or rebut.

More on the Sun

I wouldn’t say that I am a total sun hawk, meaning that I believe the sun and natural trends are 100% to blame for global warming. I don’t think it unreasonable to posit that once all the natural effects are unwound, man-made CO2 may be contributing a 0.5-1.0C a century trend (note this is far below alarmist forecasts).

But the sun almost had to be an important factor in late 20th century warming.  Previously, I have shown this chart of sunspot activity over the last century, demonstrating a much higher level of solar activity in the second half than in the first (the 10.8-year moving average was selected as the average length of a 20th century sunspot cycle).
[Figure: 20th century monthly sunspot numbers with 10.8-year moving average]
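For anyone who wants to reproduce this kind of smoothing, the calculation is just a centered running mean over roughly one solar cycle (10.8 years is about 130 months).  A minimal sketch, with random placeholders standing in for the monthly sunspot counts:

```python
import numpy as np

def running_mean(x, window):
    """Centered moving average; returns only the fully-covered points."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

# Stand-in for a monthly sunspot series (real data is freely available from
# SIDC/NOAA; these values are random placeholders, not actual counts).
rng = np.random.default_rng(3)
monthly_ssn = np.abs(rng.normal(80, 40, size=1200))  # 100 years of months

smoothed = running_mean(monthly_ssn, window=130)     # ~10.8-year window
print(monthly_ssn.size, "->", smoothed.size)         # 1200 -> 1071 points
```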

Alec Rawls has an interesting point to make about how folks are considering the sun’s effect on climate:

Over and over again the alarmists claim that late 20th century warming can’t be caused by the solar-magnetic effects because there was no upward trend in solar activity between 1975 and 2000, when temperatures were rising. As Lockwood and Fröhlich put it last year:

Since about 1985,… the cosmic ray count [inversely related to solar activity] had been increasing, which should have led to a temperature fall if the theory is correct – instead, the Earth has been warming. … This should settle the debate.

Morons. It is the levels of solar activity and galactic cosmic radiation that matter, not whether they are going up or down. Solar activity jumped up to “grand maximum” levels in the 1940’s and stayed there (averaged across the 11 year solar cycles) until 2000. Solar activity doesn’t have to keep going up for warming to occur. Turn the gas burner under a pot of stew to high and the stew will heat. You don’t have to keep turning the flame up further and further to keep getting heating!

Update: A commenter argues that I am simplistic and immature in this post.  I find this odd, I guess, for the following reason.  One group tends to argue that the sun is largely irrelevant to the past century’s temperature increases.  Another argues that the sun is the main or only driver.  I argue that the evidence seems to point to it being a mix, with the sun explaining some but not all of the 20th century increase, and I am the one who is simplistic?

The commenter links to this graph, which I will include.  It is a comparison of the Hadley CRUT3 global temperature index (green) and sunspot numbers (red):

[Figure: Hadley CRUT3 temperature index (green) vs. sunspot numbers (red)]

Since I am so ridiculously immature, I guess I don't trust myself to interpret this chart, but I would have happily used this chart myself had I had access to it originally.  It's wildly dangerous to try to visually interpret data and data correlations, but I don't think it is unreasonable to say that there might be a relationship between these two data sets.  Certainly not 100%, but then again the same could easily be said of the relationship of temperature to CO2.  The same type of inconsistencies the commenter points out in this correlation could easily be found for CO2 (e.g., why, if CO2 was increasing, and in fact accelerating, were temps in 1980 lower than in 1940?).

The answer, of course, is that climate is complicated.  But I see nothing in this chart that is inconsistent with the hypothesis that the sun might have been responsible for half of the 20th century warming.  And if CO2 is left with just 0.3-0.4°C warming over the last century, it is a very tough road to get from past warming to sensitivities as high as 3°C or greater.  I have all along contended that CO2 will likely drive 0.5-1.0°C warming over the next century, and see nothing in this chart that makes me want to change that prediction.

Update #2: I guess I must be bored tonight, because commenter Jennifer has inspired me to go beyond my usual policy of not mixing it up much in the comments section.  A lengthy response to her criticism is here.

Whew! Done.

OK, I think I have finally, successfully migrated both my blogs from the Typepad ASP service to self-hosted WordPress.  Many of you on feeds may have gotten a one-time slug of about 10 old posts in your feed (sorry).  This was an artifact of the change of feed sources to Feedburner and should not happen again.  Overall, I am very pleased with the results.  The sites look better, they are easier to modify, they run faster, and the back-end interface is MUCH better.  Most of you don't care, but I will post on the process I followed to migrate, in repayment to others whose similar posts helped me through the process.

If you are getting this post, you should not have to change any of your settings. Enjoy.

Site Migration

If you are reading this, it means that you have found my new WordPress site for Climate-Skeptic.com.  You may find that permalinks or some images don’t function quite right — I learned from migrating my other blog that it takes about 24-48 hours for all these problems to settle out.  If you are using the feed at feeds.feedburner.com/ClimateSkeptic, you should be fine and that feed should still work.  If you are using a different feed, I will soon post instructions on how to switch.