Charlie Martin is looking through some of James Hansen’s emails and found this:
[For] example, we extrapolate station measurements as much as 1200 km. This allows us to include results for the full Arctic. In 2005 this turned out to be important, as the Arctic had a large positive temperature anomaly. We thus found 2005 to be the warmest year in the record, while the British did not and initially NOAA also did not. …
So he is trumpeting this approach as an innovation? Does he really think he has a better answer because he has extrapolated station measurements by 1200 km (746 miles)? This is roughly equivalent, in distance, to extrapolating the temperature in Fargo to Oklahoma City. For me this represents the kind of false precision, the over-estimation of knowledge about a process, that so characterizes climate research. If we don’t have a thermometer near Oklahoma City, then we don’t know the temperature in Oklahoma City, and let’s not fool ourselves that we do.
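As a sanity check on the distance claim, here is a minimal sketch. The city coordinates are approximate, and the linear taper to zero at 1200 km is the weighting scheme commonly attributed to GISTEMP; treat it as an assumption here, not something from Hansen’s email:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in km."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Approximate coordinates, assumed for illustration.
fargo = (46.88, -96.79)
okc = (35.47, -97.52)

d = haversine_km(*fargo, *okc)
print(f"Fargo -> Oklahoma City: {d:.0f} km")  # ~1270 km, i.e. roughly the 1200 km radius

# Assumed linear taper: a station's weight falls from 1 at zero
# separation to 0 at 1200 km.
w = max(0.0, 1 - d / 1200.0)
print(f"weight a Fargo station would get at OKC's distance: {w:.2f}")  # 0.00, just past the cutoff
```

So the analogy in the post is about right: Fargo to Oklahoma City comes out a shade over 1200 km, right at the edge of the extrapolation radius.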
I had a call from a WaPo reporter today about modeling and modeling errors. We talked about a lot of things, but my main point was that whether in finance or in climate, computer models typically perform what I call knowledge laundering. These models, whether forecasting tools or global temperature models like Hansen’s, take poorly understood descriptors of a complex system in the front end and wash them through a computer model to create apparent certainty and precision. In the financial world, people who fool themselves with their models are called bankrupt (or bailed out, I guess). In the climate world, they are Oscar and Nobel Prize winners.
Update: On the 1200 km issue, this is somewhat related.
Stevo,
My god, you really don’t understand that when you measure something, you have to report the error of what you measured, not the error of something sort-kinda the same, do you?
“They quote values, and uncertainties.”
Sure, they quoted uncertainties, but not the uncertainty in their model, just the uncertainty in global mean temps. The two are different.
“You are embarrassing yourself to claim that uncertainty in the analysis and uncertainty in the value resulting from the analysis are somehow intrinsically different.”
Boy, that’s a gross misunderstanding and/or misrepresentation of what I said and what they did. They didn’t actually report the error “in the value resulting from the analysis”; they pulled that error from the global mean as measured by individual stations, not from their extrapolation.
They did this because either they didn’t know how to measure the error in their analysis, or they knew the error in their analysis would be unacceptably large, so they went with reporting the error of something that sounded similar to what they wanted, was easier to calculate, and was even already known. Which should tip you off: how do you know the error of something before you have done the analysis?
“They did. You’ve been told what they said. ”
They certainly never did. What was the exact level of confidence they had that those trends were different from zero?
“Apparently, though, you’ve poked your eyes out and sewn your ears shut in a fit of anti-scientific reactionary pique.”
Nice that you insult people; now if you could only understand statistics…
BTW, “stevo”
You read and act exactly like “hunter” and conveniently took up this conversation when he left… now there’s a correlation… wonder what it means…
“not the uncertainty in their model, just global mean temps. The two are different”
They are exactly the same thing. Your inability to understand this is abject.
“They are exactly the same thing. Your inability to understand this is abject.”
They most certainly are not. One set of errors comes from direct measurements; the other should come from their measurements plus their regressions and the construction of their model. If they already knew the global average and the global standard deviation, why did they even bother to do this study?
Now, I’ve explained why as best I can. I’m sorry you do not have the capability to understand that, or that I don’t have the ability to explain it to you. Though I’m somewhat unsure it is possible for you to understand this.
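To put a number on the distinction Wally is drawing, here is a toy Monte Carlo. Everything in it — the one-dimensional geometry, the correlation length, the variances, the taper — is an invented assumption, not a value from the paper. It propagates both the station measurement noise and the extrapolation step, and compares the resulting error against the measurement error alone:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: one unobserved target point and two stations 800 and 1100 km away.
# All numbers below are illustrative assumptions, not values from the paper.
d = np.array([800.0, 1100.0])   # station distances from the target, km
L = 1000.0                      # assumed spatial decorrelation length, km
sigma_true = 0.5                # assumed SD of true local annual anomalies, deg C
sigma_meas = 0.1                # assumed station measurement error, deg C

# Covariance of the true anomalies at [target, station 1, station 2],
# with correlation decaying as exp(-distance / L).
pts = np.array([0.0, d[0], d[1]])            # 1-D stand-in for locations
dist = np.abs(pts[:, None] - pts[None, :])
cov = sigma_true**2 * np.exp(-dist / L)

n = 200_000
truth = rng.multivariate_normal(np.zeros(3), cov, size=n)
obs = truth[:, 1:] + rng.normal(0.0, sigma_meas, size=(n, 2))

# Assumed linear distance weights tapering to zero at 1200 km.
w = np.clip(1.0 - d / 1200.0, 0.0, None)
est = obs @ (w / w.sum())                    # extrapolated target anomaly

print("station measurement SD:          ", sigma_meas)
print("SD of the extrapolation's error: ", round(float(np.std(est - truth[:, 0])), 2))
# Prints an error SD of roughly 0.5 deg C -- dominated by the extrapolation
# itself, not by the 0.1 deg C measurement noise.
```

Under these made-up numbers, quoting only the 0.1 °C station error would understate the uncertainty of the extrapolated value by roughly a factor of five; whether the paper’s real numbers behave this way is exactly what the two of them are fighting about.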
“If they already knew the global average and the global standard dev., why did they even bother to do this study?”
So, you really have completely misunderstood their analysis. Or you’re just trolling. Enough, anyway. If you can’t get this, you’ve got no hope.
They did not already know the global average temperature. The global average temperature does not have an intrinsic standard deviation, any more than the height of a mountain does. The study sought to find out what the global average temperature was, based on direct measurements at discrete locations, and estimated the uncertainty in the values found.
Would someone else have an idea of why Wally is having such problems with the ultra-basics?
Stevo,
This is rather pathetic of you. Even the authors admit that this error estimation based on the global mean temps isn’t all the error in their analysis: “Since the GCM and real-world temperature variabilities are not identical and since THERE ARE OTHER SOURCES OF ERROR, the bars only represent a nominal measure of the EXPECTED error in the temperature change.” (caps are my emphasis)
So they are trying to get around measuring their model’s error directly. My argument is that what they call an error estimate grossly underestimates their actual error, or at the very least is a poor “estimate” of it, and that they could easily enough have determined their error directly.
“The study sought to find out what the global average temperature was, based on direct measurements at discrete locations, and estimated the uncertainty in the values found.”
No, the uncertainty only came from those discrete locations, not from the extrapolated temps their analysis produces using a regression on top of a regression and a constructed weighting factor. They never gave you the error in that part of the analysis.
This is about the closest they come: “Plate 4 suggests that the model’s variability of annual mean temperature is realistic to first order. The standard deviation of annual mean temperature is typically 0.25°–0.5°C at low latitudes, increasing to about 1°C in polar latitudes. At mid-latitudes the variability is greater in midcontinents than in coastal regions.”
But that’s just the year-to-year variation, not a measure of their whole model (which, remember, includes more than just year-to-year measurements: regressions and the creation of that weighting factor), nor a measure of the confidence in their result. But look at that for a second: 0.25–0.5 °C yearly, in just a single standard deviation, while their trends were around 0.2–0.3 °C, but over a decade. This is why the “error bars” in figure 6 are the GCM’s and not their own error. Their own error would have made that trend line look flat over the entire period even just using the yearly SD; imagine if they then added in the error that comes from the regressions of this data and the extrapolation using the weighting factor. This makes their whole paper fall into the “statistically insignificant” category.
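Neither side actually shows the significance arithmetic, so here is a minimal sketch of it, assuming white year-to-year noise with the quoted SDs applied directly to the series being fit (no autocorrelation, and the record lengths are chosen for illustration):

```python
import numpy as np

def trend_se(n_years, sigma):
    """Standard error of an OLS slope fitted to n_years of annual values
    carrying iid year-to-year noise of SD sigma (deg C)."""
    t = np.arange(n_years)
    return sigma / np.sqrt(np.sum((t - t.mean()) ** 2))

slope = 0.025                    # 0.25 deg C per decade, the scale argued over
for sigma in (0.25, 0.5):        # interannual SDs quoted from Plate 4
    for n in (10, 30, 90):       # record lengths, assumed for illustration
        se = trend_se(n, sigma)
        print(f"sigma={sigma} degC, n={n} yr: "
              f"slope SE={se:.4f} degC/yr, t-ratio={slope / se:.1f}")
```

With these assumptions, a 0.25 °C/decade trend is indeed lost in the noise over a single decade (t ≈ 0.5–0.9), but stands well clear of zero over longer records (t ≈ 2.4–4.7 at 30 years), so the dispute turns on which variability and which record length actually apply to the fitted series.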
Oh dear oh dear oh dear. You do not understand the paper, at all. You don’t even know how to discuss it. Plate 4 shows the interannual variability in global temperatures, not the uncertainty in their analysis. You are either too blinkered or too stupid, or both, to know that these things are different.
The uncertainty on the annual global mean temperature anomalies is far smaller than the decadal changes described in the paper. You’re just going to have to deal with that, and don’t come back bleating any more, because all of your tiresome non-questions have been dealt with.
Funny that the only way in which you can deal with my “tiresome” questions is to insult me. I’ll leave anyone left reading this to determine who’s right, wrong or stupid.