When catastrophist climate models were first run against history, they did not come close to matching it. Over the last several years, after a lot of time under the hood, climate models have been tweaked and forced to match historic warming observations pretty closely. A prominent catastrophist and climate modeller finally asks the logical question:
One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. The cited range in climate sensitivity from a wide collection of models is usually 1.5 to 4.5 deg C for a doubling of CO2, where most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity.
The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy? Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change–too rosy a picture?, available at www.nature.com/reports/climatechange, 2007) recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work and the current paper provides the ‘‘widely circulated analysis’’ referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.
One wonders how it could take so long for supposedly trained climate scientists right in the middle of the modelling action to ask an obvious question that skeptics have been asking for years (though this particular guy will probably have his climate decoder ring confiscated for bringing this up). The answer seems to be that rather than deriving man-made forcing from observational data, modellers simply treat it as a plug figure: they set the historic man-made forcing to whatever number it takes to make the output match history.
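To see how the plug works, here is a minimal sketch of the arithmetic, with purely illustrative numbers (this is not any actual GCM's code). The point is that models whose sensitivities differ by a factor of two can all reproduce the same observed warming, so long as the poorly constrained aerosol forcing is back-solved to compensate:

```python
# Minimal sketch of the "plug" arithmetic -- illustrative numbers only.
# Roughly, simulated warming is
#     dT = lambda_ * F_total,    F_total = F_ghg + F_aerosol
# so the same observed dT is reproducible with very different sensitivities
# as long as the aerosol forcing is chosen to compensate.

OBSERVED_DT = 0.7   # degC of 20th-century warming (approximate)
F_GHG = 2.4         # W/m^2 of greenhouse forcing (illustrative)

for lambda_ in (0.4, 0.6, 0.8):             # K per (W/m^2): a factor-of-two spread
    f_total_needed = OBSERVED_DT / lambda_  # total forcing that reproduces history
    f_aerosol_plug = f_total_needed - F_GHG # the "plug": back-solved aerosol offset
    print(f"sensitivity={lambda_:.1f}  total forcing={f_total_needed:.2f}"
          f"  aerosol plug={f_aerosol_plug:+.2f} W/m^2")
```

The more sensitive the model, the bigger the negative aerosol offset it needs, consistent with the inverse relationship the quoted paper describes across published model runs.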
Gee, who would have guessed? Well, actually, I did, though I guessed the wrong plug figure. I did, however, guess that one of the key numbers was a plug for all the models to match history so well:
I am willing to make a bet based on my long, long history of modeling (computers, not fashion). My guess is that the blue band, representing climate without man-made effects, was not based on any real science but was instead a plug. In other words, they took their models and actual temperatures and then said "what would the climate without man have to look like for our models to be correct." There are at least four reasons I strongly suspect this to be true:
- Every computer modeler in history has tried this trick to make their models of the future seem more credible. I don’t think the climate guys are immune.
- There is no way their models, with our current state of knowledge about the climate, match reality that well.
- The first time they ran their models vs. history, they did not match at all. This current close match is the result of a bunch of tweaking that has little impact on the model’s predictive ability but forces it to match history better. For example, early runs had the forecast run right up from the 1940 peak to temperatures way above what we see today.
- The blue line totally ignores everything else we know about the changing climate, including the changing intensity of the sun. It is conveniently exactly what is necessary to make the pink line match history. In fact, against all evidence, note that the blue band falls over the century. This is because the models were pushing the temperature up faster than we have seen it rise historically, so the modelers needed a negative plug to make the numbers look nice.
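If the blue band really were a plug, reproducing it would take nothing more than subtraction. Here is a hedged sketch with made-up anomaly numbers: the "natural" component is just the residual between observed temperatures and the model's man-made-forcings run:

```python
# Sketch of the back-solve described above -- toy numbers, not real data.
# If "natural" climate is a plug, it is simply observed minus modeled-anthropogenic.

import numpy as np

years = np.arange(1900, 2001, 20)
observed = np.array([0.00, 0.10, 0.30, 0.25, 0.45, 0.70])        # illustrative anomalies, degC
modeled_anthro = np.array([0.00, 0.05, 0.15, 0.35, 0.60, 0.90])  # man-made-forcings-only run

natural_plug = observed - modeled_anthro  # what nature "must have done" for the model to be right

for y, n in zip(years, natural_plug):
    print(y, f"{n:+.2f}")
# The plug drifts negative late in the century -- the same convenient downward
# "natural" trend the last bullet above complains about.
```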
Here is one other reason I know the models to be wrong: The climate sensitivities quoted above of 1.5 to 4.5 degrees C are unsupportable by history. In fact, this analysis shows pretty clearly that 1.2 is about the most one can derive for sensitivity from our past 120 years of experience, and even that makes the unreasonable assumption that all warming for the past century was due to CO2.
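For anyone who wants to check the order of magnitude, here is the back-of-the-envelope version of that derivation. It uses the standard logarithmic CO2 forcing relation and round, illustrative inputs, and it generously assumes every bit of the observed warming was CO2-driven:

```python
# Back-of-the-envelope sensitivity from the historical record.
# Assumes ALL observed warming was CO2-driven (an upper bound, as noted above).

import math

dT_observed = 0.55      # degC of warming over the period (illustrative round number)
c0, c1 = 280.0, 385.0   # ppm CO2 at the start and end of the period (approximate)

# From dT = S * ln(c1/c0) / ln(2), solve for the per-doubling sensitivity S:
sensitivity = dT_observed * math.log(2) / math.log(c1 / c0)
print(f"implied sensitivity per CO2 doubling: {sensitivity:.2f} degC")  # ~1.2
```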
Having fudged a model or two myself, I see this as the confirmation of stupidity on the part of the IPCC I was looking for.
Well, that, and this:
“My [Dr. Vincent Gray, IPCC member] main complaint with the IPCC is in the methods used to ‘evaluate’ computer models. … No computer climate model has ever been tested in this way, so none should be used for prediction. They sort of accept this by never permitting the use of the term ‘prediction’, only ‘projection’. But they then go ahead predicting anyway.”
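To make concrete the kind of test Gray is describing, here is a toy out-of-sample check, with synthetic data and a deliberately dumb trend model (purely illustrative): fit only on the early record, "predict" the held-out recent decades, and score the result. Gray's complaint is that no climate model has been graded this way before being used for forecasts:

```python
# Toy out-of-sample validation -- synthetic record and a naive trend model,
# purely to illustrate the testing procedure Gray is asking for.

import numpy as np

years = np.arange(1900, 2008)
temps = 0.005 * (years - 1900) + 0.1 * np.sin((years - 1900) / 8.0)  # synthetic anomalies

train = years < 1980                                 # fit on 1900-1979 only...
coeffs = np.polyfit(years[train], temps[train], 1)   # naive linear-trend "model"
predicted = np.polyval(coeffs, years[~train])        # ...then predict 1980-2007 unseen

rmse = np.sqrt(np.mean((predicted - temps[~train]) ** 2))
print(f"out-of-sample RMSE: {rmse:.3f} degC")
```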
Since climate scientists have this annoying holier-than-thou attitude about their failed models and statistical analyses, I find it more than amusing that they employ these bullshit amateur tactics to silence debate when, as both you and I said months ago, Coyote, the debate is only getting started.
Your sites remain a vital source of information, while the AZ Republic prints crap like this:
Warming planet spawning fierce storms, studies say
“Extreme storms can produce good or bad results, Redmond said. Studies, including the one released by Environment Arizona, warn that big storms could disrupt natural cycles, reducing the amount of water produced by snowmelt.”
Yeah, and the natural cycle could result in me publicly ridiculing “Kelly Redmond, regional climatologist and deputy director of the Western Regional Climate Center in Reno.” But I would never do such a thing.
Awful. And scientifically despicable.