
My Best Skeptic’s Argument

Crossposted from Coyote Blog

I began with an 85-page book.  I shortened that to a 50-minute film, and then a 9-minute film.  With that experience, I think I can now pull out and summarize in just a few paragraphs why we should not fear catastrophic global warming.  Here goes:

Climate catastrophists often argue that global warming theory is "settled science."  And they are right in one respect:  We have a pretty good understanding of how CO2 can act as a greenhouse gas and cause the earth to warm.  What is well agreed upon, but is not well communicated in the media, is that a doubling of CO2, without other effects that we will discuss in a moment, will heat the earth about 1 degree Celsius (plus or minus a few tenths).  This is not some skeptic’s hallucination — this is straight out of the IPCC third and fourth assessments.  CO2, acting alone, warms the Earth only slowly, and at this rate we would see less than a degree of warming over the next century, more of a nuisance than a catastrophe.

But some scientists do come up with catastrophic warming forecasts.  They do so by assuming that our Earth’s climate is dominated by positive feedbacks that multiply the initial warming from CO2 by a factor of three, four, five or more.  This is a key point — the catastrophe does not come from the science of greenhouse gases, but from separate hypotheses that the earth’s climate is dominated by positive feedback.  This is why saying that greenhouse gas theory is "settled" is irrelevant to the argument about catastrophic forecasts.  Because these positive feedbacks are NOT settled science.  In fact, the IPCC admits it does not even know the sign of the most important effect (water vapor), much less its magnitude.  They assume that the net effect is positive, but they are on very shaky ground doing so, particularly since having long-term stable systems like climate dominated by positive feedback is highly improbable.

And, in fact, with the 100 or so years of measurements we have for temperature and CO2, empirical evidence does not support these high positive feedbacks.  Even if we assign all the 20th century warming to CO2, which is unlikely, our current warming rates imply close to zero feedback.  If there are other causes for measured 20th century warming other than CO2, thereby reducing the warming we blame on CO2, then the last century’s experience implies negative rather than positive feedback in the system.  As a result, it should not be surprising that high feedback-driven forecasts from the 1990 IPCC reports have proven to be way too high vs. actual experience (something the IPCC has since admitted).

However, climate scientists are unwilling to back down from the thin branch they have crawled out on.  Rather than reduce their feedback assumptions to non-catastrophic levels, they currently hypothesize a second man-made cooling effect that is masking all this feedback-driven warming.  They now claim that man-made sulfate aerosols and black carbon are cooling the earth, and when some day these pollutants are reduced, we will see huge catch-up warming.  If anything, this cooling effect is even less understood than feedback.  What we do know is that, unlike CO2, the effects of these aerosols are short-lived and therefore localized, making it unlikely they are providing sufficient masking to make catastrophic forecasts viable.  I go into several reality checks in my videos, but here is a quick one:  Nearly all the man-made cooling aerosols are in the northern hemisphere, meaning that almost all of the cooling effect should be there — but the northern hemisphere has actually exhibited most of the world’s warming over the past 30 years, while the south has hardly warmed at all.

In sum, to believe catastrophic warming forecasts, one has to believe both of the following:

  1. The climate is dominated by strong positive feedback, despite our experience with other stable systems that says this is unlikely and despite our measurements over the last 100 years that have seen no such feedback levels.
  2. Substantial warming, of 1C or more, is being masked by aerosols, despite the fact that aerosols really only have strong presence over 5-10% of the globe and despite the fact that the cooler part of the world has been the one without the aerosols.

Here’s what this means:  Man will cause, at most, about a degree of warming over the next century.  Most of this warming will be concentrated in raising minimum temperatures at night rather than maximum daytime temperatures  (this is why, despite some measured average warming, the US has not seen an increase of late in maximum temperature records set).  There are many reasons to believe that man’s actual effect will be less than 1 degree, and that whatever effect we do have will be lost in the natural cyclical variations the climate experiences but that we are only just now starting to understand.

To keep this relatively short, I have left out all the numbers and such.  To see the graphs and numbers and sources, check out my new climate video, or my longer original video, or download my book for free.

Update:  Commenters are correct that positive feedback dominated systems can be stable as long as the feedback percentage is less than 100%.  By trying to get too compact in my arguments, I combined a couple of things.  First, there are many catastrophists who argue that climate IS in fact dominated by feedback over 100% — anyone who talks of "tipping points" is effectively saying this.  The argument about instability making stable processes impossible certainly applies to these folks’ logic.  Further, even positive feedback <100% makes a system highly subject to dramatic variations.  But Mann et al. are already on the record saying that without man, global temperatures are unbelievably stable and move in extremely narrow ranges.   It is hard to imagine this being true in a climate system dominated by positive feedback, particularly when it is beset all the time with dramatic perturbations, from volcanoes to the Maunder Minimum.

To some extent, climate catastrophists are in a bind.  If historic temperatures show a lot of variance, then a strong argument can be made that a large portion of 20th century warming is natural oscillation.  If historic temperatures move only in narrow ranges, they have a very difficult time justifying that the climate is dominated by positive feedbacks of 60-80%.

The point to remember, though, is that regardless of likelihood, the historical temperature record simply does not support assumptions of feedback much larger than zero.  Yes, time delays and lags make a small difference, but all one has to do is compare current temperatures to CO2 levels 12-15 years ago to account for this lag, and one still gets absolutely no empirical support for large positive feedbacks.

Remember this when someone says that greenhouse gas theory is "Settled."  It may or may not be, but the catastrophe does not come directly from greenhouse gases.  Alone, they cause at most nuisance warming.  The catastrophe comes from substantial positive feedback (it takes 60-80% levels to get climate sensitivities of 3-5C), which is far from settled science.
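The 60-80% figure can be checked with the standard feedback-gain relation: equilibrium warming is the no-feedback warming divided by (1 − f).  A minimal sketch, assuming the roughly 1.2°C no-feedback doubling response the IPCC cites:

```python
def sensitivity(no_feedback_dt, f):
    """Equilibrium warming for a CO2 doubling, given a net feedback
    fraction f (f = 0 means no feedback; f -> 1 means runaway)."""
    return no_feedback_dt / (1.0 - f)

BASE = 1.2  # deg C per doubling with no feedbacks (IPCC figure)

# 60% feedback -> 3.0 C, 76% feedback -> 5.0 C
for f in (0.0, 0.60, 0.76):
    print(f"feedback {f:.0%} -> sensitivity {sensitivity(BASE, f):.1f} C")
```

So sensitivities of 3-5°C do indeed require net feedback fractions in the 60-80% range, which is the whole point of the argument.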

Where “Consensus” Comes From

Via Tom Nelson:

I hate to burst the bubble here, but the American Geophysical Union’s (AGU) climate ‘consensus’ statement does not hold up to even the lightest scrutiny.

It appears that the AGU Board issued a statement on climate change without putting it to a vote of the group’s more than 50,000 members. Its sweeping claims were drafted by what appears to be only nine AGU committee members. The statement relies heavily on long term computer model projections, cherry-picking of data and a very one-sided view of recent research. As with the recent statements by the American Meteorological Society (AMS) and the National Academy of Sciences (NAS), the AGU statement is the product of a small circle of scientists (again apparently a 9 member panel according to AGU) who all share the same point of view, and who failed to put their statement to a vote of the AGU members on whose behalf they now claim to speak. As such it amounts to nothing more than a restatement of the opinion of this small group, not a ‘consensus’ document.

New Climate Short: Don’t Panic — Flaws in Catastrophic Global Warming Forecasts

After releasing my first climate video, which ran over 50 minutes, I had a lot of feedback that I should aim for shorter, more focused videos.  This is my first such effort, setting for myself the artificial limit of 10 minutes, which is the YouTube limit on video length.

While the science of how CO2 and other greenhouse gases cause warming is fairly well understood, this core process only results in limited, nuisance levels of global warming. Catastrophic warming forecasts depend on added elements, particularly the assumption that the climate is dominated by strong positive feedbacks, where the science is MUCH weaker. This video explores these issues and explains why most catastrophic warming forecasts are probably greatly exaggerated.


You can also access the YouTube video here, or the same version on Google Video here.

If you have the bandwidth, you can download a much higher quality version by right-clicking either of the links below:

I am not sure why the QuickTime version is so porky.  In addition, the sound is not great in the QuickTime version, so use the Windows Media (.wmv) files if you can.  I will try to reprocess it tonight.  All of these files for download are much more readable than the YouTube version (memo to self:  use a larger font next time!)

This is a companion video to the longer and more comprehensive climate skeptic video "What is Normal — a Critique of Catastrophic Man-Made Global Warming Theory."

Antarctica

On Sunday, CBS claimed that Antarctica is melting.  In fact, one small portion of the Antarctic peninsula is warming and may be losing snow, while the rest of Antarctica has not been warming and in fact has been gaining ice cover.  The show visits an island off the Antarctic Peninsula which has about as much weather relevance and predictive power for the rest of Antarctica as Key West has for the rest of the United States.  Absolutely absurd.

Unfortunately, I have a real job and I don’t have time to restate all the rebuttals to the CBS show.  However, I took on the Antarctic issue in depth here, and this post at NC Media Watch has more.

A Metric of Climate Science Health

If one wanted to measure the "health" of climate science, a good approach would be to compare the zealous pursuit of even the tiniest errors in analyses that show the world to be warming slowly or naturally with the total apathy toward correcting, or even uncovering, massive errors in analyses showing substantial man-made warming (example).

Anyone who made this check would have to come to the conclusion that there are not enough skeptical analyses coming into print, rather than the opposite point of view from folks like James Hansen and Al Gore that skeptics are harming the process and need to shut up.

Six Degrees of Global Warming

No, not six degrees of actual temperature increase, but six degrees of separation between every activist’s issue and global warming.  As pointed out by Tom Nelson, it should be increasingly obvious to everyone that, as some of us have been saying for years, the global warming scare is not driven by science, but is a vehicle for pushing a broad range of socialist / progressive issues.  Today’s example:  Dams on the Klamath River cause global warming.  The mechanism?  Well, it’s a little hard to grasp because the article is so poorly written (don’t they employ editors at this paper?) but apparently the dams increase algae, which in turn off-gasses methane, a greenhouse gas.  Of course, common sense says this effect is trivial, and the article ignores other effects in the entire system, but the author treats it like he is playing a trump card.

This is Six Inches

There is an old joke that goes "why do women have poor depth perception?  A:  Because men tell them [holding two fingers very close together] that this is six inches."

I am kind of reminded of that joke here, where the graphic on this eco-catastrophist page shows a 3-foot sea level rise engulfing the bottom half of San Francisco’s Transamerica building (hit refresh if you don’t see anything happening in the top banner).  Even the moderately catastrophist IPCC projects less than half a meter of sea level rise over the next 100 years, not three feet.

By the way, memo to environmental activists:  If you want to sway middle America to your cause, complaining that global warming will flood San Francisco may not get the results you want.  HT:  Tom Nelson

Mother Nature Will Win

In some crazy violation of Boston Globe editorial policy, this article (via TJIC) discusses weather without mentioning global warming.  But it does demonstrate how crazy it is to declare that the condition of the Earth circa 1950 was "normal,"  and any change is somehow abnormal.  The article highlights that change itself is the norm.

The sea, some in Chatham say, gives to the town as much as it takes away. The Atlantic constantly erodes the coastline, but also replenishes it with sediment washed from elsewhere. The only problem, says Leo Concannon, Chatham assistant harbormaster, is that the giving and taking "is often not where people want or need it." The region so incessantly reshapes itself with new shoals, sandbars, and breaks that Concannon’s office updates navigational charts with pencils and erasers.

In 2006, the ocean deposited enough sand to reconnect mainland Chatham to South Monomoy Island for the first time in 50 years. In 1987, the change was more abrupt: The Atlantic breached North Beach about 2 miles south of the current break. That gap, now more than a mile wide, eventually destroyed 10 houses on the mainland.

Somehow, the Globe seems to have found the last beachfront homeowner in the country who does not think someone else owes him when nature punishes him for building a house on a sandbar.

This is pretty funny

Link emailed by a reader:

A new menace to the planet has been discovered and validated by a consensus of politically reliable scientists: Anthropogenic Continental Drift (ACD) will result in catastrophic damage and untold suffering, unless immediate indemnity payments from the United States, Europe, and Australia be made to the governments of non-industrial nations, to counteract this man-made threat to the world’s habitats….

The continents rest on massive tectonic plates. Until the beginning of the Industrial Revolution in the mid 18th century, these plates were fixed in place and immobile. However, drilling for oil and mining for minerals has cut these plates loose from their primordial moorings and left them to drift aimlessly.

Irony

A few days ago, I wrote about satellite temperature measurement:

Satellite temperature measurement makes immensely more sense – it has full coverage (except for the poles) and is not subject to local biases.  Can anyone name one single reason why the scientific community does not use the satellite temps as the standard EXCEPT that the "answer" (ie lower temperature increases) is not the one they want?  Consider the parallel example of measurement of arctic ice area.  My sense is that before satellites, we got some measurements of arctic ice extent from fixed observation stations and ship reports, but these were spotty and unreliable.  Now satellites make this measurement consistent and complete.  Would anyone argue to ignore the satellite data for spotty surface observations?  No, but this is exactly what the entire climate community seems to do for temperature.

Today in the Washington Post, Gavin Schmidt of NASA is pushing his GISS numbers showing that 2007 was really hot — a finding only his numbers support, since every other land- and space-based temperature rollup for the earth shows lower numbers than his.  As Tom Nelson points out, the Washington Post goes along with Schmidt in only using numbers from this one flawed surface temperature rollup and never mentions the much lower numbers coming from satellites.

But here is the real irony — does anyone else find it hilarious that the #1 person trying to defend flawed surface measurement against satellite measurement is the head of the Goddard Institute for Space Studies at NASA?

Isn’t Gavin Schmidt Out on Strikes By Now?

From the Washington Post today:

According to the NASA analysis, the global average land-ocean temperature last year was 58.2 degrees Fahrenheit, slightly more than 1 degree above the average temperature between 1951 and 1980, which scientists use as a baseline. While a 1-degree rise may not seem like much, it represents a major shift in a world where average temperatures over broad regions rarely vary more than a couple hundredths of a degree.

This is not written as a quote from NASA’s Gavin Schmidt, but it is clear in context that the statement must have come from him.  If so, the last part of the statement is demonstrably false, and for a man in Schmidt’s position is tantamount to scientific malpractice.  There are piles of evidence from multiple disciplines, from climate and geophysics to history, literature, and archeology, showing that regional climates vary a hell of a lot more than a few hundredths of a degree.  This is just absurd.

By the way, do you really want to get your science from an organization that says stuff like this:

Taking into account the new data, they said, seven of the eight warmest years on record have occurred since 2001

What new data?  That another YEAR had been discovered?  Because when I count on my own fingers, I can only come up with 6 years since 2001.

Grading the IPCC Forecasts

Roger Pielke Jr has gone back to the first IPCC assessment to see how the IPCC is doing on its long-range temperature forecasting.  He had to dig back into his own records, because the IPCC seems to be taking its past reports offline, perhaps in part to avoid just this kind of scrutiny.  Here is what he finds:

[Chart: verification of the 1990 IPCC temperature forecast against observed temperature records]

The colored lines are various measures of world temperature.  Only the GISS, which maintains a surface temperature rollup that is by far the highest of any source, manages to eke into the forecast band at the end of the period.  The two satellite measures (RSS and UAH) seldom even touch the forecast band except in the exceptional El Nino year of 1998.  Pielke comments:

On the graph you will also see the now familiar temperature records from two satellite and two surface analyses. It seems pretty clear that the IPCC in 1990 over-forecast temperature increases, and this is confirmed by the most recent IPCC report (Figure TS.26), so it is not surprising.

Which is fascinating, for this reason:  In essence, the IPCC is saying that we know that past forecasts based on a 1.5, much less a 2.5, climate sensitivity have proven to be too high, so in our most recent report we are going to base our forecast on … a 3.0+!!

The First Argument, Not the Last

The favorite argument of catastrophists in taking on skeptics is "all skeptics are funded by Exxon."  Such ad hominem rebuttals to skeptics are common, such as…

…comments like those of James Wang of Environmental Defense, who says that scientists who publish results against the consensus are “mostly in the pocket of oil companies”; and those of the, yes, United Kingdom’s Royal Society that say that there “are some individuals and organisations, some of which are funded by the US oil industry, that seek to undermine the science of climate change and the work of the IPCC”

and even from the editor of Science magazine:

As data accumulate, denialists retreat to the safety of the Wall Street Journal op-ed page or seek social relaxation with old pals from the tobacco lobby from whom they first learned to "teach the controversy."

Here is my thought on this subject.  There is nothing wrong with mentioning potential biases in your opponent as part of your argument.  For example, it is OK to argue "My opponent has X and Y biases, which should make us suspicious of his study.  Let’s remember these as we look into the details of his argument to see his errors…"  In this case, pointing to potential biases is an acceptable first argument before taking on issues with the opponent’s arguments.  Unfortunately, climate catastrophists use such charges as their last and only argument.  They believe they can stick the "QED" in right after the mention of Exxon funding, and then not bother to actually deal with the details.

Postscript:  William Briggs makes a nice point on the skeptic funding issue that I have made before:

The editors at Climate Resistance have written an interesting article about the “Well funded ‘Well-funded-Denial-Machine’ Denial Machine”, which details Greenpeace’s chagrin on finding that other organizations are lobbying as vigorously as they are, and that these counter-lobbyists actually have funding! For example, the Competitive Enterprise Institute, a think tank “advancing the principles of free enterprise and limited government”, got, Greenpeace claims, about 2 million dollars from Exxon Mobil from 1998 to 2005. The CEI has used some of this money to argue that punitive greenhouse laws aren’t needed. Greenpeace sees this oil money as ill-gotten and say that it taints all that touch it. But Greenpeace fails to point out that, over the same period, they got about 2 billion dollars! (Was any of that from Exxon, Greenpeace?)

So even though Greenpeace got 1000 times more than the CEI got, it helped CEI to effectively stop enlightenment and “was enough to stall worldwide action on climate change.” These “goats” have power!

Most skeptics are well aware that climate catastrophists themselves have strong financial incentives to continue to declare the sky is falling, but we don’t rely on this fact as 100% or even 10% of our "scientific" argument.

Thoughts on Satellite Measurement

From my comments to this post on comparing IPCC forecasts to reality, I had a couple of thoughts on satellite temperature measurement that I wanted to share:

  1. Any convergence of surface temperature measurements with satellite measurements should be a source of skepticism, not confidence.  We know that the surface temperature measurement system is immensely flawed:  there are still many station quality issues in the US, like urban biases that go uncorrected, and the rest of the world is even worse.  There are also huge coverage gaps (read:  oceans).  The fact that this system correlates with satellite measurement feels like the situation where climate models, many of which take different approaches, some of them demonstrably wrong or contradictory, all correlate well with history.  It makes us suspicious that the correlation is a managed artifact, not a real outcome.
  2. Satellite temperature measurement makes immensely more sense – it has full coverage (except for the poles) and is not subject to local biases.  Can anyone name one single reason why the scientific community does not use the satellite temps as the standard EXCEPT that the "answer" (ie lower temperature increases) is not the one they want?  Consider the parallel example of measurement of arctic ice area.  My sense is that before satellites, we got some measurements of arctic ice extent from fixed observation stations and ship reports, but these were spotty and unreliable.  Now satellites make this measurement consistent and complete.  Would anyone argue to ignore the satellite data for spotty surface observations?  No, but this is exactly what the entire climate community seems to do for temperature.

How Much Are Sea Levels Rising?

This is a surprisingly tricky question.  It turns out sea level is much less of a static benchmark than we might imagine.  Past efforts to measure long-term trends in sea level have been frustrating.  For example, even if sea level is not changing, land level often is, via subsidence or uplift.  The IPCC famously drew some of its most catastrophic sea level predictions from tide gauges in Hong Kong that are on land that is sinking (thus imparting an artificial sea level rise in the data).

A new study tries to sort this out:

The article is published in Geophysical Research Letters, the authors are from Tulane University and the State University of New York at Stony Brook, and the work was not funded by any horrible industry group. Kolker and Hameed begin their article stating “Determining the rate of global sea level rise (GSLR) during the past century is critical to understanding recent changes to the global climate system. However, this is complicated by non-tidal, short-term, local sea-level variability that is orders of magnitude greater than the trend.”

Once again, we face the dual problems in climate measurement of 1. Sorting through long-term cyclical changes and 2. Very low signal to noise ratio in climate change data.

The authors further note that “Estimates of recent rates of global sea level rise (GSLR) vary considerably” noting that many scientists have calculated rates of 1.5 to 2.0 mm per year over the 20th century. They also show that other very credible approaches have led to a 1.1 mm per year result, and they note that “the IPCC [2007] calls for higher rates for the period 1993–2003: 3.1 ± 0.7.”…

Kolker and Hameed gathered long-term data regarding the Icelandic Low and the Azores High to capture variation and trend in atmospheric “Centers of Action” associated with the North Atlantic Oscillation which is regarded as “One potential driver of Atlantic Ocean sea level.” As seen in Figure 1, these large-scale features of atmospheric circulation vary considerably from year-to-year and appear to change through time in terms of latitude and longitude.

Kolker and Hameed used these relationships to statistically control for variations and trends in atmospheric circulation. They find that the “residual” sea level rise (that not explained by COA variability) in the North Atlantic lies somewhere between 0.49±0.25mm/yr and 0.93±0.39mm/yr depending on the assumptions they employ, which is substantially less than the 1.40 to 2.15 mm per year rise found in the data corrected for the glacial isostatic adjustment. This “residual” sea level rise includes both local processes such as sedimentation changes, as well as larger-scale processes such as rising global temperatures.

By the way, this forecast translates to roughly 2 to 4 inches per century.  This falls slightly short of the 20+ feet Al Gore promised in his movie.
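For anyone checking the unit conversion behind the inches-per-century figure, it is a one-liner.  This sketch uses the residual rates quoted above; the last line applies the paper's stated uncertainty to the upper estimate:

```python
MM_PER_INCH = 25.4

def inches_per_century(mm_per_year):
    """Convert a sea level trend in mm/yr to inches per century."""
    return mm_per_year * 100.0 / MM_PER_INCH

# Central residual estimates from Kolker and Hameed (0.49-0.93 mm/yr)
print(f"{inches_per_century(0.49):.1f} to {inches_per_century(0.93):.1f} in/century")  # 1.9 to 3.7
# Upper bound once the stated uncertainty is included (0.93 + 0.39 mm/yr)
print(f"upper bound: {inches_per_century(1.32):.1f} in/century")  # 5.2
```

So the central estimates come to roughly 2 to 4 inches per century, with the uncertainties stretching the upper bound toward 5 — either way, orders of magnitude below the catastrophic scenarios.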

All the “Catastrophe” Comes from Feedback

I had an epiphany the other day:  While skeptics and catastrophists debate the impact of CO2 on future temperatures, to a large extent we are arguing about the wrong thing.  Nearly everyone on both sides of the debate agrees that, absent feedback, the effect of a doubling of CO2 from pre-industrial concentrations (e.g. 280 ppm to 560 ppm, where we are at about 385 ppm today) is to warm the Earth by about 1°C ± 0.2°C.  What we really should be arguing about is feedback.

In the IPCC Third Assessment, which is as good a proxy as any for the consensus catastrophist position, it is stated:

If the amount of carbon dioxide were doubled instantaneously, with everything else remaining the same, the outgoing infrared radiation would be reduced by about 4 W/m². In other words, the radiative forcing corresponding to a doubling of the CO2 concentration would be 4 W/m². To counteract this imbalance, the temperature of the surface-troposphere system would have to increase by 1.2°C (with an accuracy of ±10%), in the absence of other changes.

Skeptics would argue that the 1.2°C is (predictably) at the high end of the band, but it is in the ballpark nonetheless.  The IPCC also points out that there is a diminishing-return relationship between CO2 and temperature, such that each increment of CO2 has less effect on temperature than the last.  Skeptics agree with this as well.  What this means in practice is that though the world, currently at 385 ppm CO2, is only about 38% of the way to a doubling of CO2 from pre-industrial times, we should already have seen about half of the temperature rise for a doubling, or if the IPCC is correct, about 0.6°C (again absent feedback).  This means that as CO2 concentrations rise from today’s 385 ppm to 560 ppm toward the end of this century, we might expect another 0.6°C of warming.
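The diminishing-return arithmetic can be reproduced in a few lines using the standard logarithmic forcing relationship (a sketch, not a radiative transfer calculation): "about half" corresponds to ln(385/280)/ln(2), roughly 46%.

```python
import math

def doubling_fraction(c_now, c_pre=280.0):
    """Fraction of one full CO2 doubling's warming already realized,
    under the logarithmic forcing relationship."""
    return math.log(c_now / c_pre) / math.log(2.0)

NO_FEEDBACK_DT = 1.2  # deg C for a doubling, no feedbacks (IPCC figure)

realized = doubling_fraction(385.0) * NO_FEEDBACK_DT          # ~0.55 C
remaining = (1.0 - doubling_fraction(385.0)) * NO_FEEDBACK_DT  # ~0.65 C
print(f"warming so far: {realized:.2f} C, still to come by 560 ppm: {remaining:.2f} C")
```

Both halves come out near 0.6°C, matching the figures in the paragraph above.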

This is nothing!  We probably would not have noticed the last 0.6°C if we had not been told it happened, and another 0.6°C would be trivial to manage.  So, without feedback, even by the catastrophist estimates at the IPCC, warming from CO2 over the next century will not rise above nuisance level.  Only massive amounts of positive feedback, as assumed by the IPCC, can turn this 0.6°C into a scary number.  In the IPCC’s words:

To counteract this imbalance, the temperature of the surface-troposphere system would have to increase by 1.2°C (with an accuracy of ±10%), in the absence of other changes. In reality, due to feedbacks, the response of the climate system is much more complex. It is believed that the overall effect of the feedbacks amplifies the temperature increase to 1.5 to 4.5°C. A significant part of this uncertainty range arises from our limited knowledge of clouds and their interactions with radiation. …

So, this means that debate about whether CO2 is a greenhouse gas is close to meaningless.  The real debate should be:  how much feedback can we expect, and is it even positive?  (By the way, have you ever heard the MSM mention the word "feedback" even once?)   And it is here that the scientific "consensus" really breaks down.  There is no good evidence that feedback numbers are as high as those plugged into climate models, or even that they are positive.  This quick analysis demonstrates pretty conclusively that net feedback is probably pretty close to zero.  I won’t go much more into feedback here, but suffice it to say that climate scientists are way out on a thin branch in assuming that a long-term stable process like climate is dominated by massive amounts of positive feedback.  I discuss and explain feedback in much more detail here and here.
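The kind of back-of-envelope inversion described here is easy to sketch: if observed warming equals the no-feedback warming amplified by 1/(1 − f), you can solve for the implied f.  The input numbers below are illustrative assumptions, not measurements — roughly 0.55°C expected from CO2 alone to date under logarithmic forcing, versus roughly 0.6°C of measured 20th-century warming attributed entirely to CO2:

```python
def implied_feedback(observed_dt, no_feedback_dt):
    """Net feedback fraction implied if observed warming equals the
    no-feedback warming amplified by 1 / (1 - f)."""
    return 1.0 - no_feedback_dt / observed_dt

# Illustrative assumptions: ~0.55 C expected from CO2 alone so far,
# ~0.6 C of measured warming attributed entirely to CO2.
f = implied_feedback(0.6, 0.55)
print(f"implied net feedback: {f:.0%}")  # ~8%, far below the 60-80% the models need
```

And note that attributing all measured warming to CO2 is the most generous case; any natural contribution pushes the implied feedback toward zero or negative.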

Update:  Thanks to Steve McIntyre for digging the above quotes out of the Third Assessment Report.  I have read the Fourth report cover to cover and could not find a single simple statement making this breakdown of warming between CO2 in isolation and CO2 with feedbacks.  The numbers and the science have not changed, but they seem to want to bury this distinction, probably because the science behind the feedback analysis is so weak.

Sea Ice Rorschach Test

The chart below is from the Cryosphere Today and shows the sea ice anomaly for the short period of time (since 1979) we have been able to observe it by satellite.  The chart is large so you need to click the thumbnail below to really see it:

[Chart: sea ice area anomaly since 1979, from Cryosphere Today]

OK, now looking at the anomaly in red, what do you see:

  1. A trend in sea ice consistent with a 100+ year warming trend?
  2. A sea ice extent that is remarkably stable except for an anomaly over the last three years which now appears to be returning to normal?

The media can only see #1.  I may be crazy, but it sure looks like #2 to me.

They are Not Fudge Factors, They are “Flux Adjustments”

Previously, I have argued that climate models can duplicate history only because they are fudged.  I understand this phenomenon all too well, because I have been guilty of it many times.  I have built economic and market models for consulting clients that seemed to make sense, yet did not backcast history very well, at least until I had inserted a few "factors" into them.

Climate modelers have sworn for years that they are not doing this.  But Steve McIntyre finds this in the IPCC 4th Assessment:

The strong emphasis placed on the realism of the simulated base state provided a rationale for introducing ‘flux adjustments’ or ‘flux corrections’ (Manabe and Stouffer, 1988; Sausen et al., 1988) in early simulations. These were essentially empirical corrections that could not be justified on physical principles, and that consisted of arbitrary additions of surface fluxes of heat and salinity in order to prevent the drift of the simulated climate away from a realistic state.

Boy, that is some real semantic goodness there.  We are not putting in fudge factors, we are putting in "empirical corrections that could not be justified on physical principles" that were "arbitrary additions" to the numbers.  LOL.

But the IPCC only finally admits this because they claim to have corrected it, at least in some of the models:

By the time of the TAR, however, the situation had evolved, and about half the coupled GCMs assessed in the TAR did not employ flux adjustments. That report noted that ‘some non-flux adjusted models are now able to maintain stable climatologies of comparable quality to flux-adjusted models’

Let’s just walk on past the obvious question of how they define "comparable quality," or why scientists are comfortable when multiple models using different methodologies, several of which are known to be wrong, come up with nearly the same exact answer.  Let’s instead be suspicious that the problem of fudging has not gone away but has merely had its name changed again, with climate scientists still tuning the models, just with tools other than flux values.  Climate models have hundreds of other variables that can be fudged, and, remembering this priceless quote:

"I remember my friend Johnny von Neumann used to say, ‘with four parameters I can fit an elephant and with five I can make him wiggle his trunk.’" A meeting with Enrico Fermi, Nature 427, 297; 2004.

We should be suspicious.  But we don’t just have to rely on our suspicions, because the IPCC TAR goes on to essentially confirm my fears:

(1.5.3) The design of the coupled model simulations is also strongly linked with the methods chosen for model initialisation. In flux adjusted models, the initial ocean state is necessarily the result of preliminary and typically thousand-year-long simulations to bring the ocean model into equilibrium. Non-flux-adjusted models often employ a simpler procedure based on ocean observations, such as those compiled by Levitus et al. (1994), although some spin-up phase is even then necessary. One argument brought forward is that non-adjusted models made use of ad hoc tuning of radiative parameters (i.e., an implicit flux adjustment).

Update:  In another post, McIntyre points to just one of the millions of variables in these models and shows how small changes in assumptions make huge differences in the model outcomes.  The following is taken directly from the IPCC 4th assessment:

The strong effect of cloud processes on climate model sensitivities to greenhouse gases was emphasized further through a now-classic set of General Circulation Model (GCM) experiments, carried out by Senior and Mitchell (1993). They produced global average surface temperature changes (due to doubled atmospheric CO2 concentration) ranging from 1.9°C to 5.4°C, simply by altering the way that cloud radiative properties were treated in the model. It is somewhat unsettling that the results of a complex climate model can be so drastically altered by substituting one reasonable cloud parameterization for another, thereby approximately replicating the overall intermodel range of sensitivities.
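Von Neumann's elephant is easy to demonstrate with a toy example.  This is not a climate model — just a five-parameter polynomial fit to five invented "historical" data points — but it shows how a model can backcast a record perfectly while its long-range extrapolation swings wildly with one tiny change to an input:

```python
def lagrange(points, x):
    """Evaluate the unique polynomial through `points` at x
    (Lagrange interpolation)."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Five decades of invented "history": (year, temperature anomaly in C)
history = [(1960, 0.00), (1970, 0.05), (1980, 0.15), (1990, 0.30), (2000, 0.40)]
# The same record with one value nudged by a twentieth of a degree
tweaked = [(1960, 0.00), (1970, 0.05), (1980, 0.15), (1990, 0.30), (2000, 0.45)]

# Both parameter sets "backcast" the record exactly, yet their
# year-2100 extrapolations differ by about 50 degrees.
gap = lagrange(tweaked, 2100) - lagrange(history, 2100)
print(f"2100 extrapolation shifts by {gap:.1f} C")
```

The analogy to swapping one "reasonable cloud parameterization for another" is loose, of course, but the mechanism — in-sample fit saying almost nothing about out-of-sample behavior in a heavily parameterized model — is the same.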