GIStemp – A Human View

GIStemp: the Goddard Institute for Space Studies temperature series.

If we want to study global temperature change over time, we need a temperature record over time, and over the globe. GIStemp attempts to create a temperature history with full coverage over time and over space. Unfortunately, the GHCN (Global Historical Climate Network) data start with a single thermometer in Germany: Berlin Tempel, in 1701.

Over time, thermometers are added, and they slowly migrate south and to both the New and Old Worlds. Eventually, by about 1900 A.D., there are sufficient thermometers on the globe to get a partial idea of what is happening. But climate is subject to cyclical changes. Some, like the Pacific Decadal Oscillation, have about a 40 to 60 year full cycle length. Others, like solar cycles that run 178 years, and Bond Events – a 1500 year cycle – are a bit longer. A 100 year record is inadequate to capture these events.

At its core, GIStemp tries to bridge this gap, both in time and in space, between the one thermometer and the globe, and between the 100 years and the 1500. This is a noble goal, but it is just “A Thermometer Too Far” to bridge.

How does it do this?

First, it glues together some added data from Hohenpeissenberg and from the Antarctic research stations. It squashes together the U.S. data from USHCN with the same U.S. data from GHCN. And to deal with the poor spatial coverage in the 1700’s, it deletes everything older than 1880. (While this gives smoother spatial coverage, it does not handle the past quite as well; it now “starts history” at the cold end of the Little Ice Age.)

By Bits

In many cases, our thermometer record is made of fragments. A thermometer may appear in the record for a decade (sometimes less), then disappear just as quickly. W.W.II caused a great ‘drop out’ of Pacific Island thermometers, for example. The Jet Age added thermometers at tropical vacation destinations around the globe, but not all of them “stuck”. And folks move to new homes. So we have a thermometer here, and it moves there. Two records from different places. One over grass near the woods, the next over tarmac at the jet airport. GIStemp tries to stitch these patches together into a smooth quilt of coverage. Some thermometers get stretched this way or that (over time and over space). Some get their temperatures adjusted higher or lower (via a thing called “The Reference Station Method”) to better join with their neighbours. Where needed, missing data are often fabricated to glue the bits together. If a piece, even after such a stretch, is shorter than 20 years, it gets thrown away.
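The splicing idea can be sketched in a few lines of code. This is only a toy illustration of offset-matching two fragments over their overlapping years; the function name `splice` and all the temperatures are invented, and the actual Reference Station Method is considerably more elaborate:

```python
# Toy sketch of splicing two station fragments by offsetting one to match
# the other over their overlap. An illustration of the general idea only,
# not the actual GIStemp code.

def splice(frag_a, frag_b):
    """Merge two {year: temp} fragments; frag_b is shifted by the mean
    difference over the overlapping years, then fills frag_a's gaps."""
    overlap = sorted(set(frag_a) & set(frag_b))
    if overlap:
        offset = sum(frag_a[y] - frag_b[y] for y in overlap) / len(overlap)
    else:
        offset = 0.0  # no overlap: glue with no adjustment at all
    merged = dict(frag_a)
    for year, temp in frag_b.items():
        merged.setdefault(year, temp + offset)
    return merged

grass = {1950: 14.0, 1951: 14.1, 1952: 14.0}    # old site, over grass
tarmac = {1952: 15.0, 1953: 15.2, 1954: 15.1}   # new site, at the airport
combined = splice(grass, tarmac)
# the tarmac years are pulled down by the 1.0 C offset from the one
# overlap year, and the two records become one "station"
```

Notice that the single overlap year decides what the merged record “remembers”; with real, noisy data, the choice of overlap and offset drives the shape of the spliced history.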

In the end, we are still left with gaps. (The entire southern hemisphere ocean band has less than 1% of the thermometers, and those are, for the most part, at the airports of a few specific islands.) So we have a patchwork quilt, but with some rather large holes, and some pieces stretched out of all recognition. (A thermometer may be stretched to cover places 1000 km away. Rather like saying that London is a good proxy for the beach in the south of France.)

Adjusting for Urban Growth

Some places have changed over time. Cities grow, and get hotter, as they fill with cars, tarmac, heaters and A/C vents, airports and jet engines, and coal or nuclear power plants. To adjust for this, GIStemp looks at “nearby” stations up to 1000 km away, guesses who is rural and who is urban, and “adjusts” for it. Unfortunately, like all guesses, this sometimes does not work well. Large airports are often marked as “rural” since they have few residents living there. The largest US Marine Air Station, Quantico, Virginia, is classed as rural, for example. Pisa, Italy, takes a look at Hohenpeissenberg on the German approach to the Alps as a ‘nearby’ rural station, and Pisa promptly has its past made colder (an odd way to adjust for the present being too warm… making it look even warmer in comparison).
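As a rough sketch of the kind of adjustment being described, here is a toy “urban correction” that pins the present value and re-slopes a station to match the mean trend of its rural neighbours. The names `trend` and `adjust_urban` and every number are invented; the real GIStemp procedure (two-leg fits, distance-weighted station selection, and so on) differs in detail:

```python
# Toy "urban adjustment": keep the most recent value and re-slope the
# urban record so its trend matches the mean trend of its "rural"
# neighbours. A sketch of the idea only, not the GIStemp algorithm.

def trend(series):
    """Ordinary least-squares slope, in degrees per year, of {year: temp}."""
    years = sorted(series)
    n = len(years)
    mean_y = sum(years) / n
    mean_t = sum(series[y] for y in years) / n
    num = sum((y - mean_y) * (series[y] - mean_t) for y in years)
    den = sum((y - mean_y) ** 2 for y in years)
    return num / den

def adjust_urban(urban, rural_neighbours):
    """Shift the urban past so the urban trend equals the mean rural trend."""
    rural_slope = sum(trend(r) for r in rural_neighbours) / len(rural_neighbours)
    excess = trend(urban) - rural_slope   # extra warming blamed on the city
    last = max(urban)
    # the present value is kept; past values are shifted (raised, for a
    # fast-warming city) so the excess trend disappears
    return {y: t + excess * (last - y) for y, t in urban.items()}

urban = {2000: 10.0, 2001: 10.5, 2002: 11.0}    # warming at 0.5 C/yr
rural = [{2000: 10.0, 2001: 10.1, 2002: 10.2}]  # warming at 0.1 C/yr
adjusted = adjust_urban(urban, rural)
```

The guess about who counts as “rural” is doing all the work here: swap in a mis-classified airport as the rural neighbour, and the “correction” corrects toward the tarmac.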

So we’ve ironed out our quilt, even if some bits stuck to the iron and got scorched a bit and others were melted and smeared.

But still we have “holes” in time and in space.

At this point, the globe is divided into a “grid” of “boxes”. The data that we do have (after the stitching and stretching and ironing and…) are now assumed to be pristine and pure, and suitable for telling us about even more places where we have no data. A station in the record may now fill in a set of boxes on the grid up to 1200 km away. This means, for example, that the airport on Diego Garcia can “fill in” the ocean covering an area roughly the size of Western Europe.
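A minimal sketch of this “fill in from afar” step might look like the following. The linear taper of station weight to zero at the cutoff follows the spirit of the Hansen and Lebedeff (1987) method that the GISS documentation cites; everything else (names, coordinates, the anomaly value) is invented for illustration:

```python
# Toy grid-box fill: an empty box borrows from stations within 1200 km,
# with weight tapering linearly to zero at the cutoff. Station data and
# coordinates are invented for illustration.

from math import radians, sin, cos, asin, sqrt

def km_between(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def fill_box(box_lat, box_lon, stations, cutoff_km=1200):
    """stations: (lat, lon, anomaly) triples. Weighted anomaly, or None."""
    num = den = 0.0
    for lat, lon, anom in stations:
        d = km_between(box_lat, box_lon, lat, lon)
        if d < cutoff_km:
            w = 1.0 - d / cutoff_km   # full weight at 0 km, none at 1200 km
            num += w * anom
            den += w
    return num / den if den else None  # None: a genuine hole in the quilt

# One airport can "cover" a lot of empty ocean:
diego_garcia = (-7.3, 72.4, 0.8)                  # invented anomaly, C
ocean_box = fill_box(-7.3, 62.0, [diego_garcia])  # ~1150 km west: covered
far_box = fill_box(40.0, 62.0, [diego_garcia])    # far away: stays a hole
```

With only one station in reach, the weighted average is just that station’s value, however small its weight: the ocean box simply inherits the airport.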

In the final steps, the grid of boxes is compared to the past for those same boxes (said past having been dutifully made up if need be), and an “anomaly map” is made. It might then show that the ocean 1200 km out to sea (but reflecting the tarmac at the new military jet airport on Diego Garcia today) is now warmer than when a passing ship dunked a bucket in it during a passage in the 1950’s. (Or a Ship of the Line passing in the late 1800’s. Hadley CRU provides historical sea surface temperature anomalies that are merged at the very end, as an option.)

Is an Anomaly an Odd Thing?

If you compare a temperature now with what it has typically been, the difference is the “anomaly”. If the average is 15 C, and today is 16 C, you have a 1 C “anomaly”.
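In code, the arithmetic is as trivial as it sounds. A made-up example, with the baseline taken as the mean of a short reference period:

```python
# A minimal anomaly calculation with invented numbers: the baseline is
# the mean over a reference period, and each year's anomaly is its
# departure from that baseline.
temps = {1951: 14.8, 1952: 15.1, 1953: 15.1, 1954: 15.0}  # annual means, C
baseline = sum(temps.values()) / len(temps)               # 15.0 C
anomalies = {year: t - baseline for year, t in temps.items()}
# e.g. 1951 sits 0.2 C below "typical", 1952 sits 0.1 C above
```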

It is important here to note that GIStemp creates an anomaly map. I have frequently run into folks who assert that “Since GIStemp uses anomalies and not temperatures, changes of thermometer locations will have no effect.” But in reality, GIStemp uses averages of thermometer readings, sometimes dozens of them, to create an anomaly map. You cannot use the nature of the product to protect you from the process…

The Thermometer Great Dying

One final note: There has been A Great Dying lately for thermometers. Since about 1990, there has been a reduction in thermometer counts globally. In the USA, the number has dropped from a peak of 1850 (in the year 1968) to 136 now (in the year 2009). As you might guess, this has presented some “issues” for our thermal quilt. But do not fear: GIStemp will fill in what it needs, guessing as needed, stretching and fabricating until it has a result.

In Japan, no thermometers now record above 300 meters. Japan has no mountains now. For California, where we once had thermometers in the mountain snow and in the far north near Oregon, there are now 4 surviving thermometers, near the beach and in the warm south. But GIStemp is sure we can use them as a fine proxy for Mount Shasta with its glaciers, and for the snows and ice of Yosemite winters.

In Conclusion

In the end, it will produce its quilt. Scorched in some spots? Sure. A few holes, some patched over with tropical airport tarmac? Well, yes. But a fine quilt all the same! Bright thermal reds sometimes reaching far out to sea and way up north. And even reaching from an island near the Falklands (Base Orcadas) over to Antarctica for those years before we had thermometers on the continent.

A patchwork quilt I’m sure we can all trust to keep us comfortable.

After all, we only have this history, so we must make do with what we’ve got. Even if it isn’t enough and even if riddled with holes. And even if, in re-imagining it, some parts get melted, scorched and smeared. Otherwise we’d have to admit we just don’t have the data to describe the globe in such detail in the past; and that would not be very comforting at all.

For a more “terse” summary of the issues with AGW, you can see:

About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present “hot buttons” are the mythology of Climate Change and ancient metrology; but things change…

35 Responses to GIStemp – A Human View

  1. vjones says:


    GIStemp uses averages of thermometer readings, sometimes dozens of them, to create an anomaly map.

    Ah! Now this is something I have been wondering about. Obviously the rest of us cannot ‘see into’ GIStemp like you can. I have never been sure if the anomaly calculation was done on individual thermometers or on groups, as you assert, and I have puzzled over this for a long time.

    It seemed to me that if it was done on individual records, the argument about thermometer absolute temperature, and therefore placement, was (relatively) sound; but if grouped in some way, then location and coverage is everything. Even looking at the bits of code you have printed, it wasn’t easy to decide…

    I’ve had another quick look at the code tonight. So are you saying – is the code saying – “average by 1000 km groups, then calculate regression”?

    The key is the order in which the subroutines are executed. There are modification (UHI adjustment) subroutines in there by the look of it and they would have to be done first.

    This really needs to be made clear to the world.

  2. marchesarosa says:

    Bravo, Chiefio! It is careful work like this which will dig away the foundations from beneath the edifice of AGW.

    At least it should, if there were any justice.

  3. E.M.Smith says:

    vjones: Obviously the rest of us cannot ‘see into’ GIStemp like you can.

    Well, you could, it just takes a lot of Pepto Bismol, Aspirin, and a willingness to shove your mind into places where the sun rarely shines… Other than that, no problem 8-} Oh, and coffee or tea… or lots of both… Both is good… And gin doesn’t hurt either… but Whiskey is better… the mind wanders… ;-}

    I have never been sure if the anomaly calculation was done on individual thermometers or on groups as you assert and I have puzzled over this for a long time.

    Well, that’s because it’s “some of each”…

    It seemed to me that if it was done on individual records the argument about thermometer absolute temperature and therefore placement was (relatively) sound, but if grouped in someway then location and coverage is everything. Even looking at the bits of code you have printed, it wasn’t easy to decide….

    I’m working my way up to an ‘end to end benchmark’. Then you don’t need to figure out what order everything is done. You shove in the front, and measure out the back. It either does, or does not, suppress the impact of thermometer change. Expect it in about a week. The “For TonyB and STEP2 Bias” posting:

    is 90% of the way there and it still shows thermometer change bias. The only bit left is the STEP3 ‘shove it in a box’ step, and at first read, I see no way for the code to undo what has already been done…

    I’ve had another quick look at the code tonhight. So are you – is the code saying – “average by 1000km groups then calculate regression”?

    It depends on ‘for what function’. For UHI, it is a variable group in a variable radius outward up to 1000 km. (It starts close and keeps reaching further until it ‘gets enough’. So thermometer deletion, over time, will spread the UHI ‘reach’ further and further and…). For most other purposes, it is not a group. See below…

    This really needs to be made clear to the world.

    Well, I’m trying… but it’s a rather dense subject to make clear…

    OK, per averaging singly or by groups: it’s some of each.

    I intended this posting as a ‘for normal folks’ posting, so would rather keep the ‘in the tech weeds’ discussions on the more ‘tech folks’ pages; but the planned “Confessions of A Serial Averager” posting that discusses this topic has not been written yet … and ‘that might be an issue’ 8-)

    The Crib Notes version:

    GIStemp starts with a heavily averaged set of data: The daily MIN/MAX are averaged for a station. This daily average is then averaged over the days of the month. That is the input data from NOAA (that has also had some “fill in” and “homogenizing” that can also involve averages and averaging…).
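    That chain of averaging can be shown in miniature. A toy sketch with invented readings, just to make the “average of averages” explicit:

```python
# The "average of averages" chain in miniature, with invented numbers:
# each day's MIN/MAX become a daily mean, and the daily means become the
# single monthly mean per station that GIStemp actually receives.

days = [(8.0, 18.0), (9.0, 17.0), (7.5, 18.5)]      # (min, max) per day, C
daily_means = [(lo + hi) / 2 for lo, hi in days]    # first layer of averaging
monthly_mean = sum(daily_means) / len(daily_means)  # second layer: 13.0 C here
```

    Only that last single number per station per month goes forward; the daily MIN/MAX structure is already gone before GIStemp does anything at all.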

    This “Average of Averages of Averaged Adjustments” data then, yes, has the USHCN and GHCN data for the USA averaged (though in a partial, and partly broken, way in STEP0).

    That then goes to STEP1, where for each station, disjoint bits can be glued together (and sometimes filled in with … wait for it… averages); that can be made up from looking at nearby stations… averages… OK, I’m starting to lose count, but take a breath…

    This goes to STEP2, which does the UHI adjustment. It does this in a very complicated way (partly the PApars.f code you looked at) that involves looking at a set of “nearby reference stations up to 1000 km away” that it creates by finding a bunch of stations that get averaged together (but with an, IMHO, buggy ‘offset removal’ process), and this average (of what are themselves by now ‘serial averages’) is used to “adjust” the individual station data (that are now the above described average of averages of averages of…)

    Breathe, I said BREATHE !

    Ok, notice something… Not Once so far have I used the word “Anomaly” in this description…

    We are about to leave the steps that have already smeared data 1000 km (STEP0), another 1000 km (STEP1), done the splice and homogenize, done a look aside for “reference stations” for UHI another 1000 km to station data that might have been “made up” in prior steps from even further away… and with more averaging of station data than I can keep count of. And not a single “anomaly” in sight…

    So we “send in the zones, there must be zones…”

    STEP2 creates 6 zone bands and creates the zonal anomaly map that gets handed to STEP3 along with the UHI adjusted homogenized station data (that has NOT yet been anomalized). That zonal anomaly map is a product that is fed into STEP3, but not the one you see when you look at zones. Why 2 sets of books, er, zones? No idea… (Dig Here!)

    It is only, finally, in STEP3 (which makes its OWN, different, more numerous zone bands) that we take the station data and make the “Anomaly Map” that everyone gets worked up about. This is where the Station Data (after all that “serial averaging”) get mapped onto Boxes in Grids and turned into Anomalies. We are now done with land data.

    So we created an anomaly map as the last step of the process. We did not USE the anomaly map to reach this point. It is the output not the input.

    At the end of it all, we reach this point.

    But what about STEP4 and STEP5 ??? I can hear the true believers winding up to say: “We’re only 1/2 way through!!!”

    But that is a false assumption based on the numbers 4 and 5 being bigger than 3. Not on what the code does.

    Now there is an entirely optional blending in of the Hadley CRUT Sea Surface Anomaly Map in the optional, final, and oddly numbered step STEP4_5; but all this does is take the GIStemp anomaly map (which includes sea surface anomalies in grid boxes up to 1200 km out to sea from any land or island) and, yes, average those boxes in with the Hadley Sea Surface boxes.
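    A toy version of that final blend, with invented box labels and anomaly values (the real weighting between the two sources may well differ):

```python
# Toy blend of two anomaly-box maps: where boxes overlap, average them;
# otherwise keep whichever source has data. Labels and values invented.

def blend(gistemp_boxes, hadley_boxes):
    out = {}
    for box in set(gistemp_boxes) | set(hadley_boxes):
        vals = [m[box] for m in (gistemp_boxes, hadley_boxes) if box in m]
        out[box] = sum(vals) / len(vals)
    return out

merged = blend({"A": 1.0, "B": 0.5},    # GIStemp land / near-coast boxes
               {"B": 0.1, "C": -0.2})   # Hadley sea surface boxes
```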

    Now I don’t know about you, but for me, the notion that a box of water over 1000 km from Diego Garcia might, just maybe, have a little bit of actual sea surface temperature data averaged into the anomaly product calculated from the land data (as an entirely optional, per the code and documents provided by GISS, step) does not make me feel really comfortable about the “anomaly” in Chicago being ‘based on anomalies’… Especially when the code tells me it is based on a whole lot of average of averages of averages of averages of averages (repeat until) station data…

    Oh, and when you see ANY individual station data, such as that from the GISS web site reached from the link in the top title: Those are from before the “anomaly” steps. That is, they are from STEP0, STEP1, or STEP2 (depending on what you pick from the “dropdown menu”). So it’s pretty darned clear to me that it is quite a stretch to assert they are ‘based on anomalies’ rather than ‘based on averages’ (of averages of averages of averages of averages of…)

    One hopes this did not cause too many folks to glaze over…

    (Though, having written it, I need another cup of tea…)

  4. Michael Lenaghan says:

    “In the USA, the number has dropped from 1850 at peak to 136 in 2009.”

    Do you mean that there were 1,850 thermometers at the peak, or that the peak count happened in the year 1850?

    REPLY: [ Try it now. The 1850 is the thermometer count. -ems]

  5. j ferguson says:

    The acrobatics which you report are not the sorts of things one would do to discover signal. They seem more likely intended to produce a signal – one arising, somehow, mystically, out of some very sparse data sets.

    But then isn’t that what a lot of this is about? Sparse datasets?

    My take on first seeing Visicalc was “My God, it’s painting with numbers.” But I was thinking Paul Klee or Mondrian, not finger painting.

    Can you possibly imagine that the guys who put the STEP code together didn’t understand how little the product had to do with the likely reality?

    The Czarists had their Odessa Steps and we have the GIStemp steps. (Sorry – no baby carriage.)

    It really is sophistry, but not the GIGO kind. They didn’t quite start with garbage, they made their own. Synthetic garbage.

    Now I can understand why I got such dodgy responses after asking them to point me to an explanation for how, and why, the thermometer population had been so decimated. They said budget problems.

    It’s unfortunate that your findings contend so strongly with the orthodox view and orthodox political agenda.

    The history of the creation and development of these fraudulent analyses needs discovery.

    thanks for your astonishing efforts.

  6. E.M.Smith says:

    @j ferguson

    You are most welcome.

    I must admit, it has turned out to be more (and worse) than I expected when I started this journey. But a job once started must be carried to a conclusion…

    As to motive and goal of the GIStemp “designers”. On the one hand, I cherish Hanlon’s Razor and try desperately to find a way to attribute this mess to stupidity rather than malice. For a long time I could make it fit. It just took a lot of stupidity.

    Lately, and most especially with the worldwide thermometer deletions – seemingly timed in just such a way as to continue a warming trend in the product even in the face of a cooling reality, peaking just as Copenhagen looms – I find the quantity of stupidity needed to “do the deed”, and the exquisite timing of the arrival of that stupidity (first in the code, then in the thermometer counts after the code was released and could no longer be ‘tuned’), begins to approach infinity…

    And frankly, given the extreme views publicly expressed by Hansen as a public person and NASA ranking manager; and his blatant advocacy for breaking the law, trespass, vandalism, and civil disobedience to promote that agenda: I find it increasingly hard to presume he would be above putting bias into his work product in the furtherance of his stated political goals when he has stated vandalism is a viable moral behaviour.

    And finally, the places in the code that are parameterized and show clear tuning (places where a knob can be turned and the result observed, that now have the knob pegged at a position that shows more warming…) those places are ‘footprints in the snow’ showing what the designers were thinking and doing.

    So, reluctantly, and against my will, I find myself forced to inspect ALL of Hanlon’s Razor: Never attribute to malice that which is adequately explained by stupidity.

    And I find stupidity inadequate. And more so every day and every page of code examined and every change of thermometer locations discovered.

    “The truth just is. -emsmith”

    And there is only so much you can ignore before you must decide that the truth just is, and accept it. However reluctantly.

    BTW, I think you have come to have a decent understanding of GIStemp, given your comment. And that is why I call GIStemp “A Data Fabrication Program”. It must create data where there are none. It has no other choice. The data are not there at the start, and are there in the product.

    And the way it fabricates those data is tuned. The code has clear parameters chosen to do that tuning. (In the code listings, look for the FORTRAN keyword PARAMETER. Also look at the values passed in at run time from the scripts to the programs – like variously 1000 km or 1200 km; or sometimes 6 zones, sometimes more…)

    Hansen wrote most of it (and manages all of it); and he has a stated public agenda promoted with vigor and has advocated for folks doing public malicious acts in promotion of that agenda.

    Motive. Capacity. Opportunity. Behaviour. Effects.

    It’s all there but the email logs and meeting notes. And there is no moral compass to prevent “the deed”.

    So while I’d want to subpoena some records and do a formal investigation before bringing charges, I think there is plenty of probable cause to support doing that investigation.

  7. Tonyb says:

    E M Smith

    I had already drawn up a short profile of the Berlin station before I saw this post. I use it as part of the station background to my Little Ice Age Thermometers.

    Berlin 1701-2009
    Location: 100 yards south of the edge of the indicated small airport, in a very built-up part of Berlin near the city centre. Closed in 2008
    3.4 million population in 2008

    Capital of Prussia from 1701 with Frederik 1st Coronation in that year
    During the 1920’s third largest municipality in the world
    Population grew from 201,000 in 1819 to 914,000 in 1871; by 1900 it was 2,712,000.
    Extract: The site of the airport was originally Knights Templar land in medieval Berlin, and from this beginning came the name Tempelhof. Later, the site was used as a parade field by Prussian forces, and by unified German forces from 1720 to the start of World War I. In 1909, Frenchman Armand Zipfel made the first flight demonstration in Tempelhof, followed by Orville Wright later that same year.[6] Tempelhof was first officially designated as an airport on 8 October 1923. Lufthansa was founded in Tempelhof on 6 January 1926.


  8. j ferguson says:

    I had not picked up that this was Hansen’s work. How do we know that?

    Knowing that would make the non-responsive email answers from the GIS shop easier to understand. It could be that the troops know this thing is a data fabricator, and are uncomfortable with it.

    It makes me wonder why Gavin Schmidt, a Hansen associate, is comfortable requesting, publicly, a few degrees of separation from it.

  9. vjones says:


    Your reply has left me… what is the typing equivalent of speechless?

    REPLY: [ Numb? Hopefully in a good way… -ems]

  10. E.M.Smith says:

    j ferguson
    I had not picked up that this was Hansen’s work. How do we know that?

    My first statement was a bit imprecise (in keeping with the goal of this being a non-technobabble ‘for everybody’ thread / description), but in computer programming terms “wrote” can have several detailed meanings. At the lowest level is the ‘coder’. That is the person whose fingers create the FORTRAN from a detailed specification (often a ‘flow chart’ and statement of function of each module). The person who makes that specification is the “analyst” (or sometimes systems analyst). The top level may be the analyst, or there may be a designer (sometimes a team) doing the design work. Any, or all, of these may be said to have “written” the code, in that they decided what it was to do and somebody typed it in. Now, in the code, you will not find Hansen’s name. But in all the published (peer reviewed, I might add…) works that describe the methods used, you find Hansen as author, and you find phrases claiming ownership of at least the design aspects. Further, we know that he started life coding “venus gas modules” for some other researcher.

    We also know that Hansen is the manager of the group that runs the code and has managed the GIStemp project and product for decades (since inception).

    So we can easily assign “ownership” and easily assign responsibility for the “designer” role and the “analyst” role to Hansen (perhaps assisted by some of his co-authors). That just leaves the question of “coder” (and frankly, it’s not a very interesting question…). There is one section that is clearly ‘not his’. It is the Python in STEP1. And it’s a well written piece. The rest shows evolution over a long period of time (FORTRAN f77 is different from f90) so we have ‘residency’ from the ’70s to the ’90s. As has Hansen. The “style” is not that of a professional programmer. It is that of a “hand tool” for a researcher. (I know, I’m writing similar hand tools now for my analysis; and I’ve written code that had to go through formal QA, bench check, code review, … The two products have a very different character… Rather like the difference between a phone conversation and a peer reviewed formal paper.) So all the code other than STEP1 was “coded by a researcher as a hand tool”.

    Finally, we have chunks reused. There is a commonality in most of the code where you can see ‘the same hand’ (rather like handwriting). That hand dates from the ’70s. Some bits were repurposed and some bits have had maintenance. And in some of those bits of maintenance you can see a new hand – the times when a contractor or underling was assigned a minor change.

    So we put this together and the pattern is pretty clear: As part of his early research, where he published his findings in peer reviewed papers, Hansen described his work product, the methods coded into GIStemp. He was a programmer at the time, experienced in the language, and the ‘style’ of the code fits his psych / work profile. He is also fond of “control”, and such control-oriented folks usually code their own stuff.

    Oh, the heck with the forensic view. Here’s the GISS web links where Hansen’s name is all over it:


    The basic GISS temperature analysis scheme was defined in the late 1970s by James Hansen when a method of estimating global temperature change was needed for comparison with one-dimensional global climate models. Prior temperature analyses, most notably those of Murray Mitchell, covered only 20-90°N latitudes. Our rationale was that the number of Southern Hemisphere stations was sufficient for a meaningful estimate of global temperature change, because temperature anomalies and trends are highly correlated over substantial geographical distances. Our first published results (Hansen et al. 1981) showed that, contrary to impressions from northern latitudes, global cooling after 1940 was small, and there was net global warming of about 0.4°C between the 1880s and 1970s.

    The analysis method was documented in Hansen and Lebedeff (1987), showing that the correlation of temperature change was reasonably strong for stations separated by up to 1200 km, especially at middle and high latitudes.

    j ferguson
    Knowing that would make the non-responsive email answers from the GIS shop easier to understand. It could be that the troops know this thing is a data fabricator, and are uncomfortable with it.

    He’s the boss. You never toss rocks at the boss’s baby…


    Down in the footer:

    GISS Website Curator: Robert B. Schmunk
    Responsible NASA Official: James E. Hansen
    Page updated: 2009-10-30

    j ferguson
    It makes me wonder why Gavin Schmidt, a Hansen associate, is comfortable requesting, publicly, a few degrees of separation from it.

    Well, there comes a time….

    “See how they distance themselves, Kohai?” from Rising Sun

  11. Iridium says:

    Very nice post! I can take a break from the technical aspects, which are testing my limits (I’d like to think it is just because of lack of time to study… I am still a few posts behind).
    Continue the good work!

    REPLY: [ Thanks! I like to “plough through the detailed code and junk” level, and make a series of “tech talk” posts while doing it; then, when I’ve got it sorted out (or sometimes when I just need to come up for air) I make one of these “So what does it all really mean?” postings. With links to the stuff down in “belly of the snake” land, so you can see I’ve done my homework. But you don’t need to go do it all yourself ;-) Bonne chance! -ems ]

  12. Vincent Gray says:

    I have always had a soft spot for Jim Hansen as he seems to have bouts of regret for what he has done. I give him bags of credit for actually publishing the temperature data he has distorted. GHCN have only just started to come to the party, but Hadley keep their original data close to their chest and even claim they have lost them.

    Hansen has always refused to incorporate the highly unreliable sea surface measurements to give “global” figures, and he lost regard from the IPCC because of it. He also confessed openly that the “homogenization” corrections can only be applied properly in the United States, so the claims that they have been applied globally are false.

    But he has published his “true confession” on his website. Here it is. What more can you say?

    “GISS Surface Temperature Analysis

    The Elusive Absolute Surface Air Temperature (SAT)

    Q. What exactly do you mean by SAT?

    A. I doubt that there is a general agreement how to answer this question. Even at the same location, the temperature near the ground may be very different from the temperature 5 ft above the ground and different again from 10ft or 50ft above the ground. Particularly in the presence of vegetation (say in a rain forest) the temperature above the vegetation may be very different from the temperature below the top of the vegetation. A reasonable suggestion might be to use the average temperature of the first 50ft of air either above ground or on top of the vegetation. To measure SAT we have to agree on what it is and, as far as I know, no such standard has been adopted. I cannot imagine that a weather station would build a 50ft stack of thermometers to be able to find the true SAT at its location.

    Q. What do we mean by daily SAT?

    A. Again, there is no universally accepted correct answer. Should we note the temperature every 6 hours and report the mean, should we do it every two hours, hourly, have a machine record it every second, or simply take the average of the highest and lowest temperature of the day? On some days the various methods may lead to drastically different results.

    Q. What SAT do the local media report?

    A. The media report the reading of one particular thermometer of a nearby weather station. This temperature may be very different from the true SAT even at that location and has certainly nothing to do with the true regional SAT. To measure the true regional SAT we would have to use many 50ft stacks of thermometers distributed evenly over the whole region, an obvious practical impossibility.”

  13. e.m.smith says:

    @Vincent Gray:

    Yes, he does seem to understand the actual problem. It does leave me wondering a bit how he can then turn around and spout the alarmist AGW party line. One can only surmise…

  14. j ferguson says:

    Is there a chance that doing this was not Hansen’s idea – that someone above him requested that these metrics be derived?

    If so, it puts an entirely different light on it, and makes this thing look much more like one of the productivity time series we used to generate for a guy at the top of one of the companies I worked at – who knew nothing about the business, didn’t understand that sparse data made our reports more episodic than statistical, and that the inferences which could be drawn from our reports were almost completely non-predictive.

    But, Nah! He acts like he believes this thing.

  15. E.M.Smith says:

    @j ferguson

    Ah yes, I remember the day that upper management spent some $$$ tens of thousands for some consultant who told them they needed numbers to know anything… and for mere $$$ hundreds of thousands they would teach us to use numbers in status reports. But not just any numbers: “Metrics”. And I got to spend the better part of a day being told that “Metrics” were different from the metric system and that “Metrics” were different from measurements (though related) … and the rest of the week was worse…

    So we were all required to count, measure, and “metric”ize all sorts of meaningless things. One of my favorites was “Megabytes of Backups / month”. Entirely discretionary as we controlled the exact settings of the “Towers of Hanoi” levels and schedules. A wonderfully reliable indicator of “continued productivity improvement” ;-) And nobody ever asked about how long any given tape was retained.

    Then there was that time I was silly enough to try and explain that with virtual memory it wasn’t very useful to measure real memory usage percentages… swaps being more important; and that time I tried to explain that 100% CPU utilization 24 x 7 was not a very good goal since you would have many expensive engineers sitting on their thumbs waiting for a chunk of metal to have time for them… Not pretty… But never fear, they didn’t seem to understand that “idle daemon” was a bad thing ;-) and that “A find is a terrible thing to waste!”… The engineers rapidly learned that some very low priority processes that were CPU hogs got bigger CPUs on cue. (In Unix you can “nice” a process, so it only runs if everything else is done, at your personal discretion. Engineers know this…)
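    For readers who haven’t met the trick mentioned above, here is a minimal sketch using the standard Unix `nice` command (niceness 19 is the lowest scheduling priority; the command names are standard POSIX utilities):

    ```shell
    # Print the shell's current niceness (usually 0):
    nice

    # Run a child command at the lowest priority (niceness 19); it only gets
    # CPU time when nothing more important wants it. Here the child is just
    # `nice` again, so it reports the priority it inherited:
    nice -n 19 nice
    ```

    A CPU-hogging job started this way looks busy on a utilization chart while staying out of everyone else’s way, which is precisely why the engineers liked it.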

    At any rate, per Hansen:

    What I see in the code is a researcher trying to climb the impossible mountain who keeps trying one trick after another, and gets published and lauded (and perhaps even rewarded by his employer). The code evolves in step with his publishing career. So after a while he, and everyone else, believes that he has managed to “do it”, to bridge “A Thermometer Too Far” in time and space. From that point on, we have An Edifice!

    I would then speculate that nobody goes back into the code base for maintenance work after it is part of The Articles Of Faith. (Barring externally discovered embarrassing bugs.) To do so would be to insult The Boss Director and God Himself. No mere programmer, after all, could be better or smarter than the guy who originally wrote it, and nobody will get published saying it’s “got issues”… So it sits.

    (This, BTW, violates one of the fundamental rules of programming: The Law Of Mutual Superiority. Anything you code, I can improve; AND, anything I code, you can improve. Each person brings a new perspective and skill set and each polishes a different spot…)

    And now we have Hansen off playing Media Star instead of doing the things that ought to have been done ( code review, QA check, benchmark, acceptance test, maintenance programming, run audits,… ) and “bit rot” sets in.

    USHCN changes format; and it’s easier to just decide “it will change nothing” because one of the Articles Of Faith is that the anomaly code fixes all thermometer ills. Besides, anyone thinking of changing it would take one look and find another project to tackle… Then GHCN decides to drop 90% of the world’s thermometers; and Hansen has a trespass and vandalism conference to attend (who needs maintenance programmers when The Anomaly Will Save Us?).

    And we get these works of total fiction, anomaly maps, published as Received Wisdom.

    Do I think someone else told him to do it? Only in the sense that he was told “continue your research”. If any professionally trained programmer had looked at his code, they would not have said “make more like this”… And if his management could read / write code and looked at it, they would not have asked for more to be made. They would have demanded a “clean up”.

    So management would either glaze over (it being particularly obtuse in style) and he could convince them it was just a very hard problem beyond their ken; or more likely they didn’t bother to look. (I could almost never get management to actually look at code. They just don’t care. “Does it run? Fine.” is about all you get. Though as a manager, I looked at all my guys’ stuff.)

    So until there is evidence to the contrary, I think this is more a case of “Sucking your own exhaust” until you are heady with your own delusions of grandeur… and too little adult supervision to detect and derail that onanistic process.

  16. Kevin McGrane says:

    Data collection and processing is a mammoth task, and when you are as close to it as you are, you have the benefit of being able to synthesize a bigger picture. Looking at temperature records for the British Isles (going back hundreds of years) it’s clear that much of the warming (and there isn’t much anyway) is due to somewhat less cold nights rather than hotter days – and the result is increased average temperature. Likewise, if one looks at temperature records stretching back over 100 years from many parts of the globe, the majority at those fixed sites don’t show a warming trend at all. My hunch has been that much of the hyped ‘global warming’ must therefore be due to the way data is collected and assembled into a ‘global’ temperature.
    What we need from what you’ve done are some papers with charts as well as statistically robust comments. It seems to me that with the great dying of the thermometers, the march south (or north in the southern hemisphere) and the march from mountain to low plains there is potentially a strong case to be made for much of the supposed warming being in large part a statistical artifact. Of course, this can only go on so long – when all of the thermometers are on the beach the trend won’t be able to keep climbing, and maybe the lull in global temperature increases (stabilizing at a high level) over the last decade is partly a result of this thermometer manipulation running out of steam. But someone needs to analyse this from a deeply scientific perspective. If you can’t do this, can you work with someone who can take all this interesting material and produce a paper for the learned journals? What you are finding is not hypothetical conjecture but the manipulation of real data, so some hard and watertight assessment should be possible.

    REPLY: [ I’m able to “do it” but being only one individual who has to make a living between keystrokes, I’m “rate limited”. My major goal has been to “plow the field” so others could see where the good dirt was and lay out a homestead. If that doesn’t happen, then I’ll have 10000 acres and no mule, just me pulling the plough. But I’ll get it all done eventually. (Once I start something, I can not easily quit. No, really. It’s a compulsive behaviour issue. I find it very hard to leave something 1/2 done… it’s an Aspie trait.)

    I would be quite happy to work with a “front man” (of any gender) who wanted to do the “make a paper out of it” part. I’ve made the software available and I’m willing to help anyone who wants it to have their own running copy of GIStemp (though the GHCN studies take nothing more robust than Excel …) And frankly, that has been part of why I’ve left the graphs for last with some frequency. I’ve done graphs (though it was about a decade ago and I don’t presently have that software package…) but felt the best use of my time was to get the “stuff” out there that others could not find on their own. As that draws to a close, well, then I’ll go back for the “pretty print and publish” cycle if nobody else has joined me on this front.

    Also, FWIW, the Warmer Defense against all this data bias has been “The Anomaly Will Save US!”. But it won’t. I decided to knock out these “altitude bias” postings while contemplating how best to present the “bias benchmark” results. I’ve run a benchmark that demonstrates clearly that GIStemp is not a perfect filter. About 1/3 of a bias movement ‘leaks through’. So these bias analysis postings became much more valuable. But before the end of the week I need to get the “The Anomaly Is Dead Jim!” posting done… And take a break… I could use a break… -ems ]

  17. DonS says:

    Lots of hunters here in Montana and I used to be one of them. Got to an age and attitude where the thrill of the chase was not equal to the sweat and sprains. Really miss it.
    Now you just handed me a loaded weapon with the finest optics and deadliest ammunition. I’m gonna go track down a few alarmists and drill them through the heart. These are the answers I knew had to exist, but couldn’t find in 20,000 pages elsewhere.
    Think I’ll start with the local newspaper editor. 8-)

  18. Bob Highland says:

    E.M.S, I want to add my thanks to all the others who have marvelled at your tireless energy in working through this labyrinthine morass of data and the elaborate torturing processes applied to it, and exposing its seemingly wilful manipulation into a “we’re all doomed” scenario.

    With the emergence of the leaked emails and data from CRU and its revelation of the shenanigans among the leading lights of the AGW movement seeking to falsely represent their case, we can only hope that the evidence will finally be examined by ‘real’ scientists with no axe to grind.

    I am sure there are thousands of qualified people of good will who will be prepared to do this, rather than continuing to blindly accept “the science is settled” argument that has led them to fall in with the orthodoxy on the basis that they thought the cabal of climate scientists at the centre of all the hype were doing good science according to “the method”. (Actually, I think in hindsight those four little words – the science is settled – will go down as the most powerful phrase of the early 21st century, ironic for the fact that it is in principle fundamentally untrue and should never again be uttered by anyone.)

    As always, when in doubt one should look at the data. But few seem to have done this in such a straightforward and logical manner as you have.

    Mankind may or may not turn out to be responsible for affecting the climate in some way: as a minimum, we have indulged in a risky experiment by pumping out billions of tons of gases and smoke for decades. But whatever situation we find ourselves in, we must never let politics stand in the way of the truth.

    Congratulations on your efforts thus far, and please continue to fight the good fight!

  19. Pingback: Gavin Schmidt – Code and data for GIStemp is available « CO2 Realist

  20. vince says:

    A well written and easy to understand article; thank you. As a layman to this topic I am starting to realize the complexity and scale of this monster. “The science is over” indeed; I think the science has yet to begin. Thank you again, all you people who have been shouting from the rooftops that something is wrong. We, the uneducated, are finally listening.

  21. David says:

    I am a rookie, but you have shone a bright light into dark places, so that even I can start to come to terms with the total conceit of pretending to have “the data”. We had a phrase in the UK army – “bullshit baffles brains”. You have cleared away the bullshit and those impressive temp graphs will never have the same authority again.
    Really well done!

  22. DennisA says:

    Disappearing stations:

    Phil Jones comments on 1961-1990 base.

    He comments again here:

    I will be around tomorrow (so Dec 21) until Dec 23 inclusive. Then again from Jan 3.

    I will be checking email during the break from Dec 28 onwards. Are you in control of the glossary additions and modifications?

    As to change of base period – this seems like a decision for the whole of WGI.

    To redo the global temperature average, I can just move the series up/down, but this isn’t the correct way to do it.

    I should work out a new base period from all the individual stations and recalculate anomalies for the oceans. For the oceans this isn’t a problem, but for the land it is a serious problem.


    Many stations have good (i.e. near complete) base periods for 1961-90, but I’ll lose hundreds, maybe over a thousand, stations if I went to 1981-2000.

    For both surface temperature and precipitation we don’t have spatially complete datasets (like models) so it will be quite difficult.

    For the circulation indices (like SOI and NAO) based on station pairs there is a variance term (SD). Some of the character of the series will change. We could easily adjust all these series by simple offsetting but it isn’t doing it properly.

    I’m in the throes of a project with the HC checking all the 61-90 normals we have for series that are incomplete, to ensure we don’t have any biases. This has taken quite a time and I don’t want to waste the effort.

    The arguments of Albert and Dave make a lot of sense – continuity with the TAR etc. These sort of things can be explained, but then the FOD will not be compatible with all the papers we are referring to. This will lead to lots of confusion. I would like to stick with 1961-90. I don’t want to change this until 1981-2010 is complete, for 3 reasons: 1) We need 30 years and 81-10 will get all the MSU in nicely,

    and 2)
    I will be near retirement !!

    3) is one of perception. As climatologists we are often changing base periods and have done for years. I remember getting a number of comments when I changed from 1951-80 to 1961-90. If we go to a more recent one
    the anomalies will seem less warm – I know this makes no sense scientifically, but it gives the skeptics something to go on about ! If we do the simple way, they will say we aren’t doing it properly.

    Best idea might be to show some maps of 1981-2000 minus 1961-90 to show spatially where it makes a difference for temp and precip. Showing it is quite small and likely within the intermodel differences for years which are only nominally 1981-2000. This might keep both sides happy.

    We also probably need to consider WGII. Also the paleo chapter will find 1981-2000 impossible. 1961-90 is difficult for them but not insurmountable.
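    Jones’s worry about losing stations on a base-period change can be sketched with toy data (the station values, trend, and 90% coverage threshold here are all invented for illustration): a station that drops out early has no climatology under the new base period, so it falls out of the network entirely rather than merely shifting.

    ```python
    # Two hypothetical stations: A reports 1961-2000; B drops out after 1990,
    # so B covers only half of a 1981-2000 base period.
    station_a = {y: 14.0 + 0.02 * (y - 1961) for y in range(1961, 2001)}
    station_b = {y: 10.0 + 0.02 * (y - 1961) for y in range(1961, 1991)}

    def base_mean(data, start, end, min_frac=0.9):
        """Mean over the base period, or None if coverage is below min_frac
        (stations need a near-complete base period to compute anomalies)."""
        years = range(start, end + 1)
        vals = [data[y] for y in years if y in data]
        if len(vals) < min_frac * len(years):
            return None
        return sum(vals) / len(vals)

    stations = [station_a, station_b]

    # 1961-90 base: both stations have a climatology and contribute anomalies.
    with_6190 = [s for s in stations if base_mean(s, 1961, 1990) is not None]

    # 1981-2000 base: B's coverage is too thin and it must be dropped --
    # the "I'll lose hundreds, maybe over a thousand, stations" problem.
    with_8100 = [s for s in stations if base_mean(s, 1981, 2000) is not None]

    print(len(with_6190), len(with_8100))   # 2 1
    ```

    This is also why a simple constant offset (“just move the series up/down”) isn’t equivalent to a proper rebase: the offset keeps every station, while recomputing climatologies changes which stations exist in the network at all.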



  23. Pingback: FM newswire for 3 December, hot articles for your morning reading « Fabius Maximus

  24. KevinM says:

    Thanks EMS

  25. Thomas Johnson says:


    Well done. Will be back for more. Thanks much.



  26. Edgar says:

    Astonished, numb (as you rightly point out), and dumbfounded that any sort of credibility is given to the data being churned out. And to think that governments will drive industries based on models built with this data.

    Had any one of us tried to pull a stunt like this in our studies, the Professors would still be laughing at us…hysterically!

    Thank you for keeping things out of the ‘tech weeds’, which made understanding do-able.

    Best regards.

    REPLY: [ You are most welcome. Nice to know that my effort “works”. -E.M.Smith ]

  27. Michael Larkin says:

    I don’t know how to thank you for this. You’ve pitched it perfectly for me, and maybe now I’ll be able to understand more technical stuff better.

    Bless you. Truth is priceless.

    REPLY: [ Simply saying that you gained from it is thanks enough. I spend a few weeks digging through the technical detail and, frankly, dregs. Then every so often I come up for air and make a summary for others. Why? The Coast Guard motto comes to mind: “So that others may live.” Though in my case it’s more of “So that others need not drown in this dreck”. I know I can swim in it so I do, and that means others need not worry about surfing with the sharks… Simply to hear someone say “I got it, and without submersion” means I succeeded. And that is satisfaction enough. Besides, it bothers me when folks try to baffle other folks with technobabble. In some small way this lets me demonstrate that “anyone can get it” and it is the folks who are doing the “baffling” that “have issues” not the regular folks. -E. M. Smith ]

  28. Pingback: Through the Climategate | Pittsburgh Alpha to Omega

  29. mrpkw says:

    FANTASTIC post about this!!!
    There are sooooooooooooooooooo many AGW believers that I speak to that have absolutely no idea that this is how temperatures are collected and presented.

    Thanks !!

  30. Pingback: ClimateGate: 30 years in the making « The Ninth Law

  31. karmstronguk says:

    Great blog and good posts, Sir!

    Is there any way of dropping you an email? I can’t see a link anywhere to do that and have only just signed up with wordpress. I have a couple of ideas I would like to look at/try out with the temp data and could do with some help in setting up (so to speak). Appreciate you are busy (I am too, hence the request)!

    REPLY: [ The email address is encoded in the “About” tab in words so that SPAM bots can’t harvest it. (But, it seems, people don’t read it there either… oh well.) It is “pub” numeral four then “all” with an AT sign “aol” and DOT com. -E. M. Smith ]

  32. L Ralph Park says:

    I’ve gone over as much of the data as is publicly available and I no longer think stupidity can account for their published projections. There is the possibility that most of the climate “scientists” are simply drones who haven’t exhibited an inclination for independent thought.

    I came to some unexpected conclusions as a result of analyzing the USHCN (TMAX) data. If you are interested I could send you some summaries that you might find interesting – especially concerning “greenhouse gas” effects and such. Furthermore, I found that I didn’t need any exotic methods / mathematics with the exception of using a database for aggregating and “slicing & dicing”.


    REPLY: [ Yeah, the data says very strange and very “un-physical” things… that argues for “something is very wrong”… I’d love to see whatever you’ve figured out. If it’s interesting you can do a ‘guest posting’ if you like. Exotic? IMHO it’s exotic that got us into this mess. How about starting with meat and potatoes then working your way up to au gratin and flambé… I’m with you on the straight forward database approach. The email address is in the “About” tab up top as words. -E. M. Smith ]

  33. Peter Czerna says:

    IMO a really convincing summary of the GIStemp process.

    Thanks for all your labours on this website: the reward for your efforts will unfortunately not come on Earth, but then you know that yourself.

    Why am I not shocked or even surprised by the GIStemp process? Because it is just what you get in any top-down organization when the guy at the top says ‘I want a historical global temperature anomaly map (or whatever…) and I want it now.’ In such an organization you will not last long if your response is ‘We can do 10% of the globe to an overall accuracy of ±3°C, the rest without any meaningful accuracy at all.’

    We skeptics spend an awful lot of time thrashing around with scientific issues and treating them as if they mattered (to anyone but us, that is). In practice they don’t, since the root problems are sociological not scientific. In order to change GIStemp’s nonsense you have to change the organization, which will continue with business as usual whatever skeptics say. These organizations are staffed by employees, not scientists.

    Same applies to other organizations and their defenders: Oxburgh, Muir Russell, etc. QED

  34. DennisA says:

    James Hansen of NASA thinks 2005 globally was the warmest year on record: (quoted by Mercosur News Agency, 27/01/06).

    “A surprising Arctic warm spell is responsible for a 2005 that was likely the warmest year since instrument recordings began in the late 1800s”, added Hansen, who nevertheless admitted that the analysis had to estimate temperatures in the Arctic from nearby weather stations because no direct data were available.

    As a result, he said, “we couldn’t say with 100 percent certainty that it’s the warmest year, but I’m reasonably confident that it was”.

    That’s science for you…

  35. E.M.Smith says:

    @Peter Czerna:

    Thanks, I tried to make it clear, yet accurate. The links go to more depth for folks who want to poke around in the details.


    Sometimes I wonder if he is evil, or just believes his own BS. It’s hard to tell.

Comments are closed.