Temperatures now compared to maintained GHCN

Update 28 Jan 2010: An Arctic Blink Graph? Why not?

From “Almostcertainly” in comments we have this interesting ‘blink comparison’ of the poles with a 250 km ‘spread’ compared to a 1200 km ‘spread’. Just click on the image to see the difference between the default 1200 km GISS chart and one that shows a bit more closely how little we really know. (And even that one overstates it. Each of those little 250 km wedges is really just a point source thermometer in most cases.) See the link:


for more details on the content of this graphic. The original pointer to the 2 static graphs at GISS was from “boballab” also in comments. (I don’t know what to call this process of collaborative blog page construction and joint exploration, but I think it’s a bit “cool” ;-) Maybe “Science Barn Raising?”… )

A rather ‘graphic’ demonstration of just how much we don’t know about real temperatures in the Arctic. It also says that a closer look at those Alaska and Eastern Russian stations might be interesting. I wonder if we know any pattern of wind or sea that would be sending warm water up that way?

I also find it fascinating to watch the Greenland and nearby stations have their little red squares suddenly balloon into large red blobs, but they ALSO move poleward. A very clear example of how GIStemp moves real temperatures when making out-of-place anomalies.

You also get to see how the Southern Hemisphere is dominated by a few Islands, Australia and the coastal stations of Antarctica. A big empty with some Island Airports and water moderated shores…

To make the blinker “GO”, click on the planet in the image:

Dec 2009 anomaly map Arctic View 250 km 1200 km blinker


Original Posting.

GHCN with a 250 km ‘spread’ and 1991-2006 baseline.

Dec 2009 anomaly map vs 1991-2006 baseline with cold N. Hemisphere

Dec 2009 anomaly map compared to 1991-2006 baseline

This image shows the world as of last month when compared to the “maintained” part of GHCN as a baseline.

Why This Map?

Well, the argument is being put forward that all this talk about thermometer deletions is silly because GHCN was just an old historical archival exercise anyway and never was intended to be regularly updated. That 1990 Great Dying of Thermometers is just an artifact of the creation date. Only a limited number of stations continued to report on an ongoing basis, so this is just an accidental artifact.


Then the necessary corollary is that the only valid baseline to use is the one for which we have ongoing maintained data. After all, if the thermometer data can’t make it from Bolivia to the USA in 20 years, maybe using Bolivia as part of the baseline is not a good idea…

So I made that “anomaly map” at the GISS web site. It looks at last December data and compares them to the 1991-2006 baseline. I chose to cut off the baseline in 2006 just so that there would be some distance between the present date and the baseline and so that the baseline is 15 years (1/2 the GISS default, but better than the 10 years that I’d have rather used were I going for emphasis…)
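
For concreteness, the comparison behind such a map is just “this December minus the average of the baseline Decembers” at each point. A minimal sketch of that idea (the function and data layout are my assumptions for illustration, not GISS code):

```python
# Hypothetical sketch: anomaly for one month against a chosen baseline span.
# `series` maps (year, month) -> mean temperature in C at one location.
def monthly_anomaly(series, year, month, base_start, base_end):
    base = [t for (y, m), t in series.items()
            if base_start <= y <= base_end and m == month]
    if not base:
        return None  # no baseline data for that calendar month
    return series[(year, month)] - sum(base) / len(base)
```

The Dec 2009 value against the 1991-2006 baseline is then `monthly_anomaly(series, 2009, 12, 1991, 2006)`.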

I think it is pretty clear that the world is getting colder…

Remember that GISS, in GIStemp STEP4_5, ‘makes up’ the Arctic temperatures by interpolating temperature estimates derived from satellite estimates of sea ice (and some of the ice monitoring satellites have had sensor issues, though I don’t know which ones GISS uses), so those Arctic Reds seem to be a permanent fixture no matter how cold the actual temperatures have been Up North. This graph is from an unknown step of GIStemp, but from the lack of ocean coverage (that comes from HadCRUT with the SST anomaly map that includes the Ice Estimate based Arctic temps) I suspect it might be from STEP3 (before the estimated Arctic is blended in).

It is also interesting that all around Madagascar is slightly warm, even though they stopped reporting about 2005 / 2006. We also know that Morocco had the thermometers moved from near the ocean to closer to the Sahara. It would be interesting to look more closely at what has gone on in Turkey… (These maps are GREAT for telling you where the thermometers have been changed. Where there is red, there is usually something interesting about the thermometer history…)

This map also uses a 250 km smoothing (i.e. making up data out to 250 km from the stations). All that grey is where we have no clue what the temperature is in GHCN… Those spots out to sea are largely Island Airports.

So if the folks at NOAA / NCDC want to make the argument that all that Great Dying of Thermometers is just an artifact of the creation being an old musty archival thing, hey, fine with me. We’ll just move the baseline up to where we have maintained data more or less in sync with the actual thermometers used in the world today and be done with it….

And we’ll also be done with the whole “getting warmer” topic while we’re at it…


One inspiration came from a question here under the “USHCN vs USHCN.v2” thread:

John in L du B
Thanks for all your work Chiefio.

As reported yesterday at ICECAP, an explanation for station dropout:

…as Thomas Peterson and Russell Vose, the researchers who assembled much of GHCN, have explained:

“The reasons why the number of stations in GHCN drop off in recent years are because some of GHCN’s source datasets are retroactive data compilations (e.g., World Weather Records) and other data sources were created or exchanged years ago. Only three data sources are available in near-real time.

It’s common to think of temperature stations as modern Internet-linked operations that instantly report temperature readings to readily accessible databases, but that is not particularly accurate for stations outside of the United States and Western Europe. For many of the world’s stations, observations are still taken and recorded by hand, and assembling and digitizing records from thousands of stations worldwide is burdensome.

During that spike in station counts in the 1970s, those stations were not actively reporting to some central repository. Rather, those records were collected years and decades later through painstaking work by researchers. It is quite likely that, a decade or two from now, the number of stations available for the 1990s and 2000s will exceed the 6,000-station peak reached in the 1970s.”

Is this at all plausible?

REPLY: [ I would find it “plausible” that that is what they would like to believe. There ‘are issues’ with the thesis, though. The simplest is the USHCN problem. How can it be a “not actively reporting” problem when NOAA / NCDC make both the USHCN and the GHCN, yet selectively filter the data so only a subset now make it from their right pocket into their left pocket?

Then we have Canada saying that they DO report all the stations in Canada and it isn’t THEM dropping the data on the floor. And Russia has accused NCDC of selective listening skills too… There are also the ongoing changes in composition of the data. If it was all a “done in 1990 one time” you would expect the curve to show a one step change. There is a large step change, but it is bounded by curves on each side. Ongoing changes are happening.

BTW, the notion that the data set was created in 1990 therefore any changes in composition are only related to that date (such as the peak in 1970 being before it was created) completely ignores that there are temperature data in it from the 1800’s that NOAA / NCDC don’t mind changing as their methods change. They have with some frequency ‘diddled the past’ as their ‘data set changes over time’ demonstrate.

Finally, I find it ludicrous on the face of it that they talk about it as ‘not real time’ or not “instantly report temperature readings”. Folks, it’s been Twenty Years since 1990. Even a mule train from Bolivia or a row boat from Madagascar could have gotten the data here by now… And speaking of Madagascar, their thermometers survive until about 2005, then start dying off. SOMETHING is happening after 1990, it’s just not Temperature Truth that’s happening.

And oh, BTW, http://www.wunderground.com/ has no problem finding Bolivia … so the data are, in fact, being reported “real time” just in some cases it is falling on deaf ears.

But hey, if they want to use that excuse, I’m “good with that”. The necessary consequence of that line of reasoning is this:

“The GHCN data set is obsolete by 20 years. The ongoing maintenance of the data have been botched. The result is a structural deficit that makes it wholly unsuited to use in climate analysis and that makes any statements about the ‘present’ vs. the ‘baseline’ useless due to lack of recent data comparable to that baseline. All that research based on the GHCN must be discarded as tainted by a broken unmaintained data set.”

And the corollary is that we ought to fire NCDC and just contract the data set out to Wunderground. They have fast and complete access to the data…

So if they want “to go there”, then I’m “good with that”… but I don’t think they will like the result… All their excuse does is change the “issue” from sin of commission to sin of omission. Not exactly a big advantage. They still “own” the data set and they still “own” the brokenness… -E.M.Smith”

A second, though lesser, inspiration for this map and posting came from a comment over on WUWT where it was asserted that the whole business of looking at thermometer change over time was silly. So I’m going to paste an edited version of my response to that comment here. Folks who want to see the whole thread can find it here:


“A big compendium of nonsense here. I’ll try to make a start.”

Thought you ‘tried to make a start’ back on the other posting where we already hashed this over.

“More than 6000 stations were active in the mid- 1970s. 1500 or less are in use today.”

“This just propagates a misunderstanding of what GHCN is. 6000 stations were not active (for GHCN) in the 1970’s. GHCN was a historical climatology project of the 1990’s. V1 came out in 1992, V2 in 1997. As part of that, they collected a large number of archives, sifted through them, and over time put historic data from a very large number in their archives.”

Those stations were, in fact, active in 1970. 5997 of them in that year. The exact date the data get into GHCN is not particularly important. (Just as the 1880 data do not imply that GHCN was compiled in 1880… yet thermometers were still active then.)

And, BTW, data neatly archived but unavailable is functionally useless. (Like that warehouse scene in Raiders of the Lost Ark…) I’d hope you are not asserting that GHCN is only usable as an archival location…

“After 1997, it was decided to continue to update the archive. But it wasn’t possible to continue to regularly update monthly all the sites that had provided batches of historic data to the original collection. That’s a different kind of operation. They could only, on a regular basis, maintain a smaller number. This notion of a vast swag of sites being discontinued about 1992 is very misleading. 1992 is about when regular reporting started.”

So you are saying that the data set is 1/2 obsolete archive and 1/4 usable data (and 1/4 misc who knows what… like Madagascar that gets sort of updated sometimes… maybe… until 2005 or so). OK, fine with me. Means that ALL the work based on it is based on a horridly botched data set design. Sure you want to “go there”? Broken by design? Obsolete archive?

[One could also ask if it would not, therefore, be prudent to make a subset of GHCN that only includes the active sites from today to make historical comparisons valid…]

“It is only when data from the more southerly, warmer locations is used in the interpolation to the vacant grid boxes that an artificial warming is introduced”

“A constantly repeated, way-off meme.”

Nope. An accurate statement of what the data say.

“Firstly, there’s little quantification of such a drift.”

Try looking at the data. I did. It’s easy to see and well characterized:


This first link looks, in particular, at the impact of leaving out of GHCN the USHCN stations. GIStemp provided a convenient vehicle to do this since it uses both, but neatly dropped the USHCN stations on the ground from May 2007 to November 2009 (when they finally put in the USHCN.v2 data). So we can MEASURE the impact. And it is 0.6 C for those stations. That is the warming bias in the base data from those locations being left out of GHCN.
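
The benchmark idea is simple to state: average the same period over the full station set and over the thinned set, then difference the two. A toy illustration of the idea (my sketch, not GIStemp code; the station layout is invented):

```python
# Illustrative only: the bias from dropping a subset of stations, measured
# as the thinned-set mean minus the full-set mean over the same period.
def mean_temp(stations):
    vals = [t for s in stations for t in s["temps"]]
    return sum(vals) / len(vals)

def dropout_bias(all_stations, kept_ids):
    kept = [s for s in all_stations if s["id"] in kept_ids]
    return mean_temp(kept) - mean_temp(all_stations)
```

If the dropped stations sit in cooler spots, the result is positive: a warming bias from the dropout alone.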

[If it is an accidental bias, then we’ve had one heck of a ‘temperature accident’.]

BTW, this also illustrates another silly thing you keep asserting. Those stations that are in the USHCN and were dropped from the GHCN were not due to some archival unavailability of the data or similar lack of reporting. NOAA / NCDC produce both data sets. They would have to move the data all the way from their right pocket into their left… It was a decision not an unfortunate accident of reporting circumstances. So asserting otherwise is, at best, disingenuous.


(I’m especially fond of the “Bonus Round” top 10% table at the very bottom. The more stabilized the thermometer set, the less drift of the average temperature. In that set of ‘over 100 years in the same place’, “Global Warming” is effectively non-existent. I’d love to know how the globe can be warming when the best longest lived thermometers are not, but only the new ones at tropical airports are…)

This second link lets you see directly how much the different groups of records carry a warming signal. All the warming is in short lived records. I have a whole series of “by latitude” reports as well. They clearly show the migration of the average thermometer location toward the equator.

Though I must grant you that the “southerly” reference is a bit broad. Yes, most thermometers drift south, but in Australia in the Southern Hemisphere we found them drifting north… An early look here:


just shows the southern drift. [due to everything starting out up north] It was later in the detailed ‘by country’ and ‘by continent’ looks that I saw the more subtle patterns:

“Most” of them can be reached through this link:


though the full list in chronological order is usually here:


And here is that Australia trend:


Now, for all the folks who look at these (the results of lots of hours of computer time, full of charts of numbers) please remember that he thinks these are “little quantification”…

“But the main thing is, all the GMST calcs are done with anomaly data. Station temps measured with respect to their own mean over a period, or at most, their own mean supplemented with some nearby station data. It doesn’t matter if stations are replaced with other stations of higher mean.”

And this, frankly, is bull pucky. Station temps are run through a meat grinder of processes long before the “anomaly map” is calculated in STEP3. We have UHI “corrections” that go the wrong way in about 1/4 of the cases. We have lots and lots of “in-fill” and “homogenizing” and who knows what, then, at the very end, the station data is compared to an average of a bunch of other stations to compute an anomaly, NOT just to itself. I posted the code comments on the other thread (I’ll not put all of them here, too, folks who care can go see what the code says it does here down in the comments):


down near the very bottom (at least, right now).

[ or just look at the “GIStemp Technical and Source Code tab on the right margin]

“What could matter is if stations are replaced by others with a higher warming trend.”

Say, like Airports?

[There are a couple of other ‘by airport’ analysis reports in the AGW and GIStemp topics category too.]

Where we find a persistent increase in the percentage of thermometers that are at what are now airports over time. Like, oh, 92% in the USA. Good luck finding a ‘rural reference station’ in that lot…

“And that’s where this argument gets really silly. The stations with higher warming trend are at higher latitudes. Shifting stations away from the poles (to whatever extent it may have happened) would have a cooling trend, not warming.”

Bald faced assertion with NOTHING in the way of data to back it up. All hypothesis, no cattle.

[ Look at that map up top. See all that grey toward the south pole? No Data. And I think we can agree that some scattered Antarctic stations don’t stand in well for the whole southern ocean. (WUWT has ‘found issues’ with many of the Antarctic stations, and my analysis of the Antarctic data showed a great number of very short and messy records. So lots of ‘in fill’ and reaching too far for ‘reference stations’.) Now remember that the Arctic is an optimal interpolation of temperature estimates from ice estimates from satellites that have sometimes ‘had issues’ measuring the ice. No data. Lots of extrapolated hypothesis.]

So: No, that’s just where you are ‘sucking your own exhaust’ a bit too much. If you look at the actual DATA from Canada, you find it cooling. It’s only when you compare it to thermometers from different places over time that the “north” is warming. Same thing in New Zealand. No warming if you use the stable set. The warming only comes in because one very southerly island is in the baseline (AND used to fill in grid boxes… I’ve run the code…) but taken out recently (so grid boxes must look elsewhere for ‘in fill’ and elsewhere is airports closer to the equator…) IIRC, Campbell Island, about 52 S. Oh, and in Canada they use ONE thermometer in “The Garden Spot of the Arctic” to get that warming trend north of 65 N.

“Interestingly, the very same stations that have been deleted from the world climate network were retained for computing the average-temperature base periods”

“Misunderstanding of how anomalies are actually calculated underlies a lot of the argument about station shifts.”

Yes, they do. And almost universally from the “warmers” side where folks assert anomalies are calculated in some nice neat “self to self” same station way when they are not. The code averages baskets of thermometers together (and different baskets at different time intervals) and compares a station to the baskets. Read The Code. An excerpt from comments in the other thread:


C**** The spatial averaging is done as follows:
C**** Stations within RCRIT km of the grid point P contribute
C**** to the mean at P with weight 1.- d/1200, (d = distance
C**** between station and grid point in km). To remove the station
C**** bias, station data are shifted before combining them with the
C**** current mean. The shift is such that the means over the time
C**** period they have in common remains unchanged (individually
C**** for each month). If that common period is less than 20(NCRIT)
C**** years, the station is disregarded. To decrease that chance,
C**** stations are combined successively in order of the length of
C**** their time record. A final shift then reverses the mean shift
C**** OR (to get anomalies) causes the 1951-1980 mean to become
C**** zero for each month.
C**** Regional means are computed similarly except that the weight
C**** of a grid box with valid data is set to its area.
C**** Separate programs were written to combine regional data in
C**** the same way, but using the weights saved on unit 11.

So not exactly like you’ve been asserting. LOTS of weighting going on. [And lots of comparisons to buckets of averages of averages of infilled homogenized averages…]
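
Rendered as a rough Python sketch (my simplified, one-value-per-year reading of those Fortran comments; the real code works per month and also applies the final re-shift to zero the 1951-1980 mean, which I omit here):

```python
# Simplified sketch of the grid-point combination described in the comments
# above. Each station is a (distance_km, series) pair, series = {year: temp}.
# Names and structure are mine, not GIStemp's.
def combine_stations(stations, rcrit_km=1200.0, ncrit=20):
    stations = sorted(stations, key=lambda s: -len(s[1]))  # longest record first
    mean, total_w = {}, {}
    for dist, series in stations:
        w = 1.0 - dist / rcrit_km          # distance weight, 1 - d/1200
        if w <= 0:
            continue
        common = set(series) & set(mean)
        if mean and len(common) < ncrit:
            continue                        # overlap too short: disregarded
        if common:
            # Shift the station so its mean over the common period matches
            # the running combined mean ("station bias" removal).
            shift = (sum(mean[y] for y in common) / len(common)
                     - sum(series[y] for y in common) / len(common))
        else:
            shift = 0.0
        for y, t in series.items():
            tw = total_w.get(y, 0.0)
            mean[y] = (mean.get(y, 0.0) * tw + (t + shift) * w) / (tw + w)
            total_w[y] = tw + w
    return mean
```

The key point survives even this simplification: each incoming station is shifted to match the running basket mean before being blended in with a distance weight, so a station is compared against a weighted bucket, not simply against itself.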

“They do not calculate a global average and then subtract it. The basic method is the Climate Anomaly Method, which NOAA uses. Each station has an anomaly calculated with respect to its own average.”

Flat out WRONG. The data from NOAA arrive as temperatures at GIStemp, not anomalies. An error you made in the other thread too.

In GIStemp, station data is carried AS station data through STEP2 (they do produce a couple of “zonal averages” along the way, but the temp data are carried forward), THEN that process noted above is applied. Notice that a basket of stations is averaged based on a scaling factor and then compared. But only after adjusting their mean and some other changes.

“Gistemp uses the same method, but applied to grid points (Sec 4.2), rather than individual stations. Again, this is very little affected by any general drift in stations – the grid points don’t move.”

BTW, many of those “grid boxes” have exactly NO stations in them and many have exactly ONE. Good luck with that whole “it’s a grid [of stations] so individual station bias won’t matter” thing… ( 8000 boxes, 1500 stations… do the math…)

[ By Definition, at least 6500 boxes will need to have some ‘data’ made up to fill them.]
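
The arithmetic behind that aside, written out (grid-box and station counts as quoted above):

```python
# 8000 equal-area grid boxes vs ~1500 reporting stations: even if every
# station sat alone in its own box, this many boxes would hold no station
# at all and would need 'data' filled in from stations up to 1200 km away.
boxes, stations = 8000, 1500
print(boxes - stations)  # 6500 boxes, minimum, with no station
```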

The anomalies are calculated in STEP3 (STEP4_5 just blends in a pre-fab sea anomaly map from HadCRUT). So GIStemp carries temperature data to the end, then makes an anomaly map out of it after most of the damage was already done to the temperature data. And does NOT do it by comparing that thermometer data to an earlier self.

Frankly, it is blatantly obvious that it can’t.

The “record” is largely made up of disjoint segments of too few years to be usable if they did. Only 10% of it is over 100 years and a huge chunk of thermometers are less than 25 years. And with all of 1500 stations surviving, and many of THEM short lived, they would be hard pressed to find anything against which to compare. From an analysis of the “best” thermometers representing the top quartile (a bit over 3000 thermometers and about 1/2 the total data in the data set) we have a report that shows not many survive into the present DECADE (and we know more of them die off during that decade…):

This is a set of monthly averages of the temperature data, then the annual average, and finally the thermometer count. I’ve deleted most decades so you can focus on what matters:

DecadeAV: 1879
 1.8  2.7  5.8 10.4 14.7 18.8 20.8 20.1 16.7 12.1  6.6  2.6 11.1  575
DecadeAV: 1889
 0.4  2.0  5.3 10.7 15.4 19.0 21.1 20.3 17.2 12.0  6.5  2.7 11.1 1137
DecadeAV: 1959
 0.2  1.7  4.8 10.4 15.0 18.7 20.9 20.3 17.0 12.0  5.7  2.0 10.7 3179
DecadeAV: 1969
-0.6  1.0  4.9 10.4 14.8 18.6 20.8 20.1 16.7 12.0  6.1  1.0 10.5 3207
DecadeAV: 1979
DecadeAV: 2009
 1.8  2.8  7.0 12.0 16.3 20.2 22.6 22.1 18.3 12.7  7.0  2.1 12.1  304

That middle chunk with about 3000 is the “baseline”. Our present decade has 304 survivors.
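
For anyone who wants to reproduce a report of that shape, here is an illustrative sketch (the input layout and names are my assumptions, not the script that made the table above):

```python
# Illustrative reconstruction of the decade report above: for each decade,
# average each calendar month, then average the twelve monthly means, and
# count distinct thermometers. Input rows are (station_id, year, month, temp_C).
from collections import defaultdict

def decade_report(rows):
    temps = defaultdict(lambda: defaultdict(list))   # decade -> month -> temps
    ids = defaultdict(set)                           # decade -> station ids
    for sid, year, month, t in rows:
        decade = year - year % 10 + 9                # e.g. 1870..1879 -> 1879
        temps[decade][month].append(t)
        ids[decade].add(sid)
    report = {}
    for dec, months in temps.items():
        monthly = [sum(v) / len(v) for _, v in sorted(months.items())]
        annual = sum(monthly) / len(monthly)
        report[dec] = (monthly, annual, len(ids[dec]))
    return report
```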

That’s right. 304 for the whole world. The rest (~1200) are all fairly short lived records and mostly at warm low latitude and low altitude locations.

So unless you want to say that you are somehow comparing those other 1200 to an average bucket, you have to accept that they are not being compared to much at all. They just are not long enough lived.

So, you pick it: Compared to a composite bucket (as the code claims) or not compared to anything at all and we’re just wasting our time talking about ‘anomalies’…

[And now, up top, you can see what happens if you actually compare ‘the more or less constant set of thermometers’ against itself. Only putting into the ‘baseline’ the data that are not part of The Great Dying of Thermometers.]

I apologize for the length and detail of this reply, but I have gone through all the code and all the data and when folks just want to hand wave that away with “the anomaly will save us!”, well, let’s just say they really need to look at what is really DONE and not what they would like to imagine is done. We’ve had enough imagination applied to the data already…

Oh, and BTW, I did a benchmark on the anomaly. It DOES change when you leave out the thermometers GHCN left out. This is a crude benchmark in that the anomaly report is for the whole Northern Hemisphere while the data are only changed in the USA. In theory, this means a 25 X uplift is needed to adjust for the area dilution ( 50% / 2% = 25 ). The anomalies change by 1/100, 2/100. Heck even some 4/100 C. Scaled for the small number of total grid boxes of the hemisphere that are shifted, that implies about a 1/4 C to 1 C shift in the anomaly in those specific boxes that are changed…
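
The scaling in that benchmark works out as follows (the ~50% and ~2% area fractions are the rough assumptions stated above):

```python
# Rough area-dilution scaling: the hemisphere is ~50% of the globe, the USA
# ~2%, so a change confined to the USA is diluted by about 50 / 2 = 25x in
# a hemisphere-wide anomaly. The local shift is ~25x the observed shift.
hemisphere_frac, usa_frac = 0.50, 0.02
uplift = hemisphere_frac / usa_frac              # ~25x
for observed_c in (0.01, 0.02, 0.04):            # hemisphere-wide changes, C
    print(f"{observed_c:.2f} C hemisphere-wide -> "
          f"~{observed_c * uplift:.2f} C in the changed boxes")
```

So hemisphere-wide changes of 0.01 to 0.04 C translate to roughly 0.25 to 1 C in the boxes actually affected.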


So you can take your theoreticals and smoke ’em. I’ve run a benchmark with the actual GIStemp code on real data and the anomaly map changes. By a very significant amount. Now we’re just haggling over the price [of this particular service provider… ]


About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present “hot buttons” are the mythology of Climate Change and ancient metrology; but things change...
This entry was posted in AGW GIStemp Specific, NCDC - GHCN Issues. Bookmark the permalink.

28 Responses to Temperatures now compared to maintained GHCN

  1. boballab@hotmail.com says:

    REPLY: [ This comment was originally put on another thread in response to the comment I used as “inspiration”. I’ve chosen to duplicate it here due to the relevance to this thread. -E.M.Smith ]

    Amazing we spent all that money on “climate change” and the NCDC’s excuse is scientists in Canada either:

    1. Never heard of Laptop Computers
    2. Never knew you can buy a cheap Laptop for $500
    3. Never learned how to operate a laptop.
    4. Never heard of a Scanner before.

    I think I made my point, even in remote places in Africa they know what a laptop is. So that whole line about the time it takes to scan in a paper copy and make a message in the computer age is something you see come out the south end of a north bound steer.

    The UN and the WMO have a special network for reporting this type of information, called the Global Telecommunications System, and reporting stations send monthly CLIMAT reports over it. Here are some excerpts from the WMO:

    2.6.2 Logging and reporting of observations
    Immediately after taking an observation at a manual station, the observer must enter the data into a logbook, journal, or register that is kept at the station for this purpose. Alternatively, the observation may be entered or transcribed immediately into a computer or transmission terminal and a database. Legislation or legal entities (such as courts of law) in some countries may require a paper record or a printout of the original entry to be retained for use as evidence in legal cases, or may have difficulty accepting database generated information. The observer must ensure that a complete and accurate record has been made of the observation. At a specified frequency (ranging from immediately to once a month), depending on the requirements of the NMHS, data must be transferred from the station record (including a computer database) to a specific report form for transmittal, either by mail or electronically, to a central office.

    Does the NCDC seriously expect us to believe that Canada doesn’t use a computer network? I mean, I think they know what the Internet is up there.

    Some national climate centers will require the station personnel to calculate and insert monthly totals and means of precipitation and temperature so that the data may be more easily checked at the section or central office. In addition, either the climate center or observer should encode data for the CLIMAT messages (WMO/TD‐No.1188), if appropriate. WMO has software to encode the data. The observer should note in the station logbook and on the report forms the nature and times of occurrence of any damage to or failure of instruments, maintenance activities, and any change in equipment or exposure of the station, since such events might significantly affect the observed data and thus the climatological record. Where appropriate, instructions should be provided for transmitting observations electronically. If mail is the method of transmission, instructions for mailing should be provided to the station as well as preaddressed, stamped envelopes for sending the report forms to the central climate office.

    Now this next quote is illuminating

    A major step forward in climate database management occurred with the World Climate Data and Monitoring Programme (WCDMP) Climate Computing (CLICOM) project in 1985. This project led to the installation of climate database software on personal computers, thus providing NMHS in even the smallest of countries with the capability of efficiently managing their climate records. The project also provided the foundation for demonstrable improvements in climate services, applications, and research. In the late 1990s, the WCDMP initiated a CDMS project to take advantage of the latest technologies to meet the varied and growing data management needs of WMO Members. Aside from advances in database technologies such as relational databases, query languages, and links with Geographical Information Systems, more efficient data capture was made possible with the increase in AWS, electronic field books, the Internet, and other advances in technology.

    Click to access Guide2.pdf

    Looks like that doesn’t square with what NOAA is peddling.

  2. Dave N says:

    I’ve read just about the whole paper, so thanks for the update!

    I’d be interested to see what the graph is like that uses only the currently available stations for the timeline that they have all been available

  3. RuhRoh says:

    This reminds me of the time that some doofus critic suggested that Oscar Peterson didn’t know how to play Boogie and Blues.

    He toyed with the guy for a few choruses, allowing him to think he had won the point, before unleashing that left hand in a deafening display of powerhouse piano.

    I gotta find that one again.

  4. pyromancer76 says:

    I like that sentence, “We’ve had enough imagination applied to the data already.” For some reason “Joe, The Plumber” comes to mind. When the Nobel Peace Prize from the IPCC and the Goracle is rescinded, it must be given to the Persevering Plumbers of Truth in Temperature Data and Reporting — Watts, McIntyre, Smith, and the other major science auditors who are saving our representative democracies.

    Especially thanks for your efforts. What is there left to plumb after you get through with “all the code and all the data”? All they should be able to do is to hang their heads in shame — and lose their jobs.

    REPLY: [ I’ve done a lot of “shutdown” contracts… I’d be happy to manage the outsourcing project to make a decent, clean, and complete data set. I’d estimate I can make a full and complete data set in less than 6 months and with 1/10th of the GHCN / USHCN temperature series department’s budget. Oh, and it would be fully documented, with an open decision making process, and variations from pure raw through EACH and EVERY modification step.

    SIDEBAR: @Pyromancer76: I promise a WSW posting Real Soon Now. Just waiting for the TOTUS to get through the Fate of the Union Speech… For now though, that “sit in mostly cash and high dividend payers like oil trusts” choice has not done too badly ;-) since last December even if I do say so myself. -E.M.Smith ]

  5. Pingback: Update « TWAWKI

  6. Bruce of Newcastle says:

    Interesting contrast between Arctic and Antarctic in your anomaly map.

    Is it reasonable to say that the Antarctic data points are all from various manned bases (maybe some automated?)?

    Would that be the case for any of the Arctic locations? I see your comment about a GISS calculated ice coverage proxy. There must be some station data to compare with though. Could be an interesting contrast – Arctic station data vs the proxy calculated data points.

    REPLY: [ The Antarctic is all from land based systems. Some staffed, some automated. After your question I went back to inspect the GISS web site for more clues about exactly where in the process this graph originates. I’ve clarified the posting to reflect that. I realized that the lack of sea surface temperatures implies this is after STEP3 but before STEP4_5 and it is that STEP4_5 that blends in a HadCRUT SST ‘anomaly map’ that includes the Arctic ‘optimal interpolations of temperature estimates based on satellite estimates of sea ice’. If the lack of Sea Surface Temperatures is a valid indicator of “step” then the stations on this map in the Arctic are real land stations. Lending support to that notion is the red bar at the top of Canada. The “blend” or “reach” that GISS calls “smoothing” is a 250 km box. You can see that at the equator these are square, but at the pole they become wide bands. That implies a land point, and it looks to be about where we know the only surviving high latitude Canadian station is located: Eureka, Canada.

    But yes, it would be fascinating to compare the ‘stations only’ to the ‘satellite ice estimate estimates’. Just don’t know exactly how to do it right now. For the Canadian station, we know it’s a Cherry Pick of a warmer than typical location. It would be interesting to pick other deep red spots and find what that station history looks like. -E.M.Smith ]

  7. Ian Beale says:

    E.M. This might be a bit off thread but it seems to fit

    As a rancher from experience my first daily look at what might happen weather-wise is http://www.wxmaps.com/pix/prec7.html rather than the local BOM.

    In the face of the above post how does wxmaps get to update twice a day with what I presume is world wide data?

    And BSCH seem to update 4 times a day according to their site info???

    REPLY: [ Well, the data are clearly available if you actually want them ;-) but for questions of motivation, well, you would have to ask them… But I think it’s pretty clear that it’s a choice, not an accident of history. (Or put another way: Not choosing to do something IS a choice.) -E.M.Smith ]

  8. Klipkap says:

    I am interested in the concept that Volcanic activity does not contribute anywhere nearly as much CO2 as does burning fossil fuels and manufacturing cement. Does anyone know where the data for total volcanic CO2 is published?

    I would be very interested to know whether there is a bias towards the (irregular) major active volcanos – the ones that ‘make the news’. As a geologist who worked for 10 years in the Andes, I can attest to the presence of vast numbers of gas vents and fumaroles, some smaller than a household drain pipe, that are to be found over large areas. Is the cumulative effect of these ‘insignificant’ but continuous sources of volcanic vapour included, and if so, how?

    Finally, how was the CO2 input derived for the tens of thousands of kilometres of submarine plate boundaries and the even greater length of conjugate fractures and faults below the sea?

    REPLY: [ Well, a quick google of “volcanic CO2 production” turned up this USGS page:


    that places volcanoes at about 1% of human CO2. As for how they figure it, well, I’d suggest hitting their web site and / or doing some googling… -E.M.Smith ]

  9. windansea says:

    what a joke

    “we can’t get the data cuz it’s hard and has to be hand counted”

    every year we have elections where millions of ballots are scanned and counted within hours of polls closing

    REPLY: [ Or, looked at in specifics: They had 6000 thermometer locations. That’s a half dozen numbers per location (High, Low, TOBS, Site ID, …) or about what you get on a single credit card purchase. OK, take a typical shopping mall with about 100 stores (a modest mall). They tend to ring up about 1 transaction per minute per store (more on holidays, less on Wednesday at 10 am… fewer in Tiffany’s, more in Macy’s) so call it about an hour’s worth of sales at a typical shopping mall. Even if that estimate were off by a factor of 10, it would still be way less than a typical daily transaction volume for one mall. ALL of them hand processed face to face and with a signature too. And there are how many hundreds of thousands of shopping malls?… Frankly, the “workload” would not even be big enough to solicit a bid from the large credit card processing centers… And we won’t even talk about the number of paper checks processed each day with each one “hand inspected” and “hand entered”…

    A decent key punch operator can do about 100 WPM, with ‘average’ closer to 50 WPM. If it takes 20 seconds to punch the basic temp data on a sheet, I’d be surprised. So about 2000 minutes of keypunch time. Call it 40 hours (breaks, lunch…). 5 operators. All hand typed. Now not all of it would need to be hand typed and a lot of it would come from operators in other countries electronically. But the absolute ‘worst case’ for data entry would fit at 5 workstations in a small space. (2 operators with a 24 hour three shift operation…) My GUESS would be that, given what is already electronic, a single workstation would be able to handle all the “paper only” data volume even if it was 1/2 of the world’s volume (which it isn’t… look at a map of thermometer density. Almost all of them are in North America, Europe, Japan. The bulk of all data volume is already electronic. )
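The back-of-the-envelope arithmetic above can be checked in a few lines; every input below is one of the guesses quoted in the text, not a measured figure:

```python
# Sanity check of the keypunch workload estimate above.
# Both inputs are the guesses from the text, not measured figures.
stations = 6000          # thermometer locations in the estimate
seconds_per_record = 20  # generous guess to key one station's numbers

total_minutes = stations * seconds_per_record / 60
total_hours = total_minutes / 60

print(f"{total_minutes:.0f} minutes of keypunch time "
      f"(about {total_hours:.0f} hours, roughly one work week)")
```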

    A few simple examples like that and you start to realize that it’s not a size issue.

    So, IMHO, the “workload” argument is really just a “priority” argument. They didn’t want to put their 1/2 an intern on it, they wanted something else instead. In any organization, such resource allocation is ALWAYS a decision. It may be a decision by default or a decision by not taking action or… but in all cases, the manager is responsible for what THEY choose to have their organization do (or not do).

    So they want the fate of $Trillions of the world economy to hang “in the balance”; but it wasn’t worth the effort put into “Midnight Madness” at your local mall one Friday Night…

    And frankly, the “Real Time” excuse is a hoot too. 20 YEARS is not “real time”. Heck, it isn’t even “slow boat from China” time. Maybe “glacial” time? To do NOTHING for 20 years is even worse, IMHO, than to say “we chose a representative subset, use it”. -E.M.Smith ]

  10. boballab says:

    Bruce of Newcastle
    Interesting contrast between Arctic and Antarctic in your anomaly map.

    Is it reasonable to say that the Antarctic data points are all from various manned bases (maybe some automated?)?

    Would that be the case for any of the Arctic locations? I see your comment about a GISS calculated ice coverage proxy. There must be some station data to compare with though. Could be an interesting contrast – Arctic station data vs the proxy calculated data points.

    Actually GISS has a Polar view map that lets you do what EM did at the top of this post.

    The first link will be to a 1200km infill polar map:

    This second link to a 250km :

  11. turkeylurkey says:

    Hey, Just tried again, and I CANNOT get the stinkin’ tip jar to operate.
    Please send a suitable snailmail address so I can just do it the old fashioned way.
    Anonymize it as you see fit.

    We have some viable G3 Macs here, if that is still on the list of needed hardware for the bigendian_whatever issue you had mentioned. Would love to donate it if it would help the overall effort.

    I’m feeling that old razzle-dazzle magic after the SOTU. Nice to hear the acknowledgement of ‘Washington’ being the problem. I didn’t hear a lot of “I” or ‘Me’ or ‘We’ at that point.

    REPLY:[ What part of the world are you in? If anywhere near California I’d drive over and pick up a G3… So, the ‘tip jar’ was sitting there half broken and nobody ever using it for almost a year. I was about to just delete it, and then Anthony points folks at it… Guess figuring out a ‘better way’ is on the top of my list now ;-) -E.M.Smith ]

  12. turkeylurkey says:


    Hey, wow, those two pictures really tell the tale of the absurd extrapolation.

    On the 250Km Arctic plot there are a scattering of circumpolar red boxes of ‘data’ amidst much gray ‘no data’,
    but then on the ‘extended outreach’ plots with 1200Km extrapolation, the entire north pole is a throbbing RED Blob.

    That seems like a great candidate for the blinking gif routine if anyone knows how to do it.


  13. pyromancer76 says:

    Please check this out: my email receipts for tips sent to you indicate that paypal gave them to Merchant Jim Kukral.

  14. Hi Guy,

    We haven’t talked much since dinner in SF.

    Anyways here is how I would show the problem.

    As you note, there has been a falloff of stations from 1990.

    Your contention: this makes a difference.
    Gavin et al: makes no difference, the stations we have
    post 1990 are good, and good enough.

    Ok. Easy.

    You have a list of stations used today:
    You have GISSTEMP running.

    Modify GISSTEMP: throw out the “excess” stations that were used before 1990.

    Run GISSTEMP with only those stations used after 1990.
    Run GISSTEMP with all the stations.

    Difference the two.
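The four steps above reduce to a one-line subtraction once both runs exist. A minimal sketch of the “difference the two” step — the anomaly numbers here are invented placeholders, not real GIStemp output:

```python
# Sketch of the "run twice and difference" benchmark described above.
# The anomaly series are made-up placeholders standing in for the
# annual output of two GIStemp runs (full set vs. post-1990 subset).

def difference_runs(full_run, subset_run):
    """Subtract the subset run from the full run, year by year."""
    return {year: full_run[year] - subset_run.get(year, 0.0)
            for year in full_run}

# Hypothetical annual anomalies (deg C) from the two runs:
full_run   = {1995: 0.45, 1996: 0.32, 1997: 0.51}
subset_run = {1995: 0.52, 1996: 0.40, 1997: 0.58}

delta = difference_runs(full_run, subset_run)
# A consistently nonzero delta would mean station dropout changes the answer.
for year in sorted(delta):
    print(year, round(delta[year], 2))
```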

  15. E.M.Smith says:


    Great idea! I had it early on ;-)

    The problem is that GIStemp is ‘brittle’ to station removal. It just crashes in a couple of steps. So you can’t just chop and run.

    I got around this with the benchmark I did run via the use of “missing data flags” in a small part of the record and clearly defined by a single statement (date > May 2007 ). That approach is a bit too cumbersome for doing selected records at selected times (and perhaps with different choices based on modification flag…)

    What I’m presently contemplating is trying it again, but with the “dropout” happening after STEP1. I *think* it is STEP0 and STEP1 that are the most sensitive to station dropout (as they ‘make assumptions’ about files matching each other line for line) and that I could simply step in after the “merge data sets toss out before 1880” and “homogenize” steps… but then we still have homogenizing happening on the whole set with ‘in fill’ coming from ‘dropped’ stations….

    The alternatives are:

    Find all places that an exact record for record file match is expected and teach the code new tricks.

    Find all the places that an exact match is expected and assure symmetrical ‘drops’ are done.

    Write some code to replace about 75% of all data items with missing data flags and hope that does not cause a crash.

    Invent a new benchmark method…

    So, yeah, I’ve thought about it. Theoretically, it’s easy… ;-)
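For what it’s worth, the ‘missing data flag’ option above is only a few lines of code; this sketch assumes the GHCN convention of -9999 for a missing monthly value, and the record layout is simplified for illustration, not the exact v2.mean format:

```python
import random

MISSING = "-9999"  # GHCN-style missing-value flag (assumed convention)

def knock_out(values, fraction, seed=42):
    """Replace roughly `fraction` of data items with the missing flag,
    leaving record counts (and hence file structure) untouched."""
    rng = random.Random(seed)  # seeded, so the benchmark is repeatable
    return [MISSING if rng.random() < fraction else v for v in values]

# Twelve monthly means (tenths of a degree) for one hypothetical station-year:
months = ["123", "145", "167", "180", "210", "230",
          "250", "240", "220", "190", "160", "130"]
thinned = knock_out(months, 0.75)
print(thinned.count(MISSING), "of", len(months), "months flagged missing")
```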

  16. Try a subset? canada north of 65N or USA

    REPLY: [ The issue with that is that the ‘anomaly reports’ from STEP3 are coarse grained (hemisphere) text files. I’d end up needing to fish through the temperature dataset (that is a binary format by this step) to find Canada for comparison. AFAIK, the nice graphic software that makes the map is not available for personal download and use… IFF that is wrong and I can just run the ‘make map’ on the dataset, then it would be a very fast (and effective) step to use a subset…

    I’m open to ideas on this. I’m not ‘gain saying’ just to be a ‘poor little me’; I’ve just not found a good solution, yet… If you want to take a whack at it, I’ll happily provide a copy of GIStemp with intermediate files at all points. None of this is technically all that hard, it’s just a time sink and I’m fresh out of free time… -E.M.Smith ]

  17. windansea says:

    EM this is OT but since you are an active trader you might be interested. Maybe a post about this?


    A politically divided Securities and Exchange Commission voted on Wednesday to make clear when companies must provide information to investors about the business risks associated with climate change.

    The commission, in a 3 to 2 vote, decided to require that companies disclose in their public filings the impact of climate change on their businesses — from new regulations or legislation they may face domestically or abroad to potential changes in economic trends or physical risks to a company.

    REPLY: [ Interesting article. I agree with the commissioner who said it looks like an attempt to get a political endorsement for the concept, yet it is reasonable to do what they said, that is, to disclose risks like “The US Government might tax coal to death and kill our market”… I’m not sure it really matters all that much, though. Some clerks have to type some boilerplate that some bureaucrats will read and check a check box. OK. This matters to me when I’m buying Peabody Coal how again?… Since 90%+ of all the “climate change risk” is the government doing stupid things and mandating dumb things, they really ought to just get a lot of text saying “You gov’t weenies might kill us with taxes and regulations”; but we already know that… -E.M.Smith ]

  18. vjones says:

    E.M., Steven,

    I think we’re onto something re dropping stations in 1990 with the database approach.

    Alternatively, theoretically we could prepare a subset to send to you. One of the outputs of the database is CSV format – any use as an input? Would have to have a matching station inventory file, but I don’t see that as a problem.

    Reciprocally, there is an idea we’ve had that we would need your help with to run through GIStemp. Detail to follow by email, I believe.

    REPLY: [ I’ll check email. This, btw, is exactly the kind of place where the database approach has major advantages. You are not carrying somebody else’s legacy on your back. So you could take a subset, do a ‘selfing’ anomaly, then compare it to the GISS map and have an answer on divergence fairly quickly. -E.M.Smith ]

  19. David Shipley says:

    Do you have enough data through FOI, other sources etc to do the reconstruction you outlined, or would it have to be an outsourced contract working with them? If you could do it without them, maybe we could raise some money to cover the time and resources needed.

    REPLY: [ The data is freely available. It’s more a matter of dealing with the cranky code. I’d be happy to be ‘funded’, but just as happy to have ‘many hands’. -E.M.Smith ]

  20. OK folks, Here’s YAATCSoNV by me;

    Dancing GIF’s of 2 polar views created above by Boballab.
    You’ll need to click the attachment at this link.

    Yet Another Attempt To Contribute Something of Nontrivial Value.

  21. E.M.Smith says:


    Consider yourself VCoSWC – Valuable Contributor of Something Way Cool. ;-) You too, boballab!

    A fascinating visualization of just what GIStemp does as it takes a very few point sources that may be next to a heated building or jet exhaust and balloons them out to cover the world…

    (And in case you just jumped down here to comments, look at the very top of the page…)

  22. Pingback: Traitors to the public good! « TWAWKI

  23. Bruce of Newcastle says:

    “Actually GISS has a Polar view map that lets you do what EM did at the top of this post.”

    Thanks for the beautiful blinker!

    I will confess to being mystified why the Arctic should have such a large anomaly when the Antarctic is fairly flat. The zonal mean graph is really weird – it looks like the top of the planet is being selectively blowtorched. I’m sorry I don’t understand your weather systems; Oz is easy by comparison.

  24. Murray says:

    Hi Chiefio, – Please advise how to use the tip jar. I am in arrears for several months worth of beers, but have no url. I did run a blog back in 2004 (to try to make a small contribution to defeat Bush), but have not maintained it. I think your political views are astonishingly “over the top” (would you believe – wacko?) but I love the work you are doing on the surface temperature manipulation, and would like to support you. Murray

  25. Rod Smith says:

    Good heavens — automated world-wide weather observation collection and distribution is NOT a new concept.

    In the early 1960’s I ran a “weather editing” shop for the USAF in the Philippines. We (in the PI) gathered data roughly as far west as Afghanistan, south to the pole, north to mid-China (WMO blocks 50 & 51 as I remember) and east to about Guam. Other USAF sites worked the rest of the world.

    Data was gathered by CW Intercept, TTY, and RTTY. All data was relayed to the USAF Stateside, and then for one, to the Weather Bureau in Maryland. This data was also mailed daily to Asheville in the form of punched paper tape.

    Generally speaking we tried to relay data back to the stateside centers within half an hour of observation time — except for some places, mostly in South Asia, where communications were extremely crude.

    By the mid 60’s I was running the same sort of operation out of Tinker AFB, and by 1966 we (and the Weather Bureau) were up and doing the job and trading data via computers. Now we sent data to Asheville on mag-tape!

    The point I am trying to make is that the job of key-punching data at Asheville was virtually eliminated half a century ago. If the transcription of paper records at Asheville is still a problem, then it is surely due to long standing mis-management.

    As an afterthought: in those days I considered myself some sort of expert on sources of weather data, but I never even heard of the climate history network. I suspect that if I had, I would have considered observations limited to high and low temperatures of little value and not worth even the effort to collect.

  26. boballab says:


    You hit the nail on the head, but some people will keep running around acting like temp records are still being kept the way they were in 1900. I quoted and linked the relevant section of the WMO operational guide, and it clearly shows that the WMO was giving electronic field books and PCs, with the appropriate software, to third world countries. Trying to say that Canada is hand recording everything and then mailing in paper copies is ridiculous. The other end is that NCDC is one of three world archives specifically set up for keeping the world climate records:

    World Data Center(WDC) for Meteorology, Asheville is one component of a global network of discipline subcenters that facilitate international exchange of scientific data. Originally established during the International Geophysical Year (IGY) of 1957, the World Data Center System now functions under the guidance of the International Council of Scientific Unions ( ICSU).

    The WDC for Meteorology, Asheville is maintained by the U.S. Department of Commerce, National Oceanic and Atmospheric Administration (NOAA) and is collocated and operated by the National Climatic Data Center (NCDC).

    In accordance with the principles set forth by ICSU, WDC for Meteorology, Asheville acquires, catalogues, and archives data and makes them available to requesters in the international scientific community. Data are exchanged with counterparts, WDC for Meteorology, Obninsk and WDC for Meteorology, Beijing as necessary to improve access. Special research data sets prepared under international programs such as the IGY, World Climate Program (WCP), Global Atmospheric Research Program (GARP), etc., are archived and made available to the research community. All data and special data sets contributed to the WDC are available to scientific investigators without restriction.


    What is one of the responsibilities of WDC Asheville?

    Various data sets and data products from international programs and/or experiments, including meteorological and nuclear radiation data for International Geophysical Year (IGY)(see IGY Annuals, Vol.26); meteorological data and data products from Global Atmospheric Research Program, World Climate Research Program, World Climate Data and Monitoring Program; and data (including data publications) exchanged with the WDC by participating countries. Quality control is performed and documentation prepared by designated experiment centers or contributors before submission to WDC.

    Global Historical Climate Network (GHCN) dataset. GHCN is a comprehensive global baseline climate data set comprised of land surface station observations of temperature, precipitation, and pressure. All GHCN data are on a monthly basis with the earliest record dating from 1697


    Some people keep trying to say that GHCN was only a one-shot deal with a 1997 update. As shown above, that is not true: between what I linked to from the WMO and from NCDC, it clearly shows that NCDC is to maintain a global dataset of temperatures. Once you got the old paper copies into electronic format, the hardest part was done; after that it is a snap. The WMO maintains the Global Telecommunication System that transmits CLIMAT reports (which are in a preset format). NCDC then receives these CLIMAT reports electronically, and once that is done it’s just a matter of a subroutine or two to update the GHCN database. If the data gets into a CLIMAT report, then NCDC has access to it, period.

  27. Ian Beale says:


    The description of what is supposed to have been done in GHCN etc. vs what is actually done in the code brings to mind a Harry Truman comment on Nixon around Watergate time:

    “If a person does wrong and knows it that’s one thing.

    If a person does wrong and doesn’t know the difference that’s entirely something else”.

  28. E.M.Smith says:

    Over on The Air Vent in this thread:


    there was a discussion of how GIStemp does ‘anomalies’ and a poster put up a quote from Hansen’s published paper on his method.

    Steve McIntyre looked at the method and ‘found issues’


    I think he is right, but needs to extend that analysis to the case where the two baskets of thermometer averages have different contents.

    This comment is my ‘take’ on Hansen’s method (that is, my interpretation of the written words and what I remember the code doing – though the code would bear checking as I’ve not read it in a month or so…). Hansen’s description of his method is in italics, though I added a bit of bold. My interpretation is interleaved:

    4.2. Regional and Global Temperature
    After the records for the same location are combined into a single time series, the resulting data set is used to estimate regional temperature change on a grid with 2°x2° resolution. Stations located within 1200 km of the grid point are employed with a weight that decreases linearly to zero at the distance 1200 km (HL87).

    In other words: We make a basket of records averaged together.

    We employ all stations for which the length of the combined records is at least 20 years; there is no requirement that an individual contributing station have any data within our 1951-1980 reference period.

    In other words, the baseline grid cell average can be a different basket of records. And each month we do a run, the contents of that basket can change as 19.9 year records grow to 20 years+ (so short lived records can have odd impacts).

    As a final step, after all station records within 1200 km of a given grid point have been averaged, we subtract the 1951-1980 mean temperature for the grid point to obtain the estimated temperature anomaly time series of that grid point.

    In other words: THEN we make an anomaly by comparing these two baskets of different things.

    Although an anomaly is defined only for grid points with a defined 1951-1980 mean, because of the smoothing over 1200 km, most places with data have a defined 1951-1980 mean.

    In other words: We can take 1500 records and stretch them into 8000 grid boxes by reusing a lot of them and with many cells having only one record, from somewhere else, and not always the same one that was in the baseline for that cell.

    Which is the whole point I’ve been trying to get across to folks: the anomaly will not save you, because:

    1) It is done AFTER all the temperatures are used for ‘fill in’, homogenizing, and UHI ‘correction’ that often goes the wrong way.

    2) It is not done via thermometer ‘selfing’ but via “apple basket” to “orange basket”.
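The 1200 km “weight that decreases linearly to zero” from the section 4.2 text quoted above is easy to write down explicitly. A hedged sketch: the weight function is just Hansen’s published rule, but the station distances and anomalies below are invented for illustration, and the real code works on gridded time series rather than single numbers:

```python
def weight(distance_km, limit_km=1200.0):
    """HL87-style linear distance weight: 1 at the grid point, 0 at the limit."""
    return max(0.0, 1.0 - distance_km / limit_km)

def grid_point_anomaly(stations):
    """Distance-weighted mean of station anomalies for one grid point.
    `stations` is a list of (distance_km, anomaly) pairs -- hypothetical here."""
    num = sum(weight(d) * a for d, a in stations)
    den = sum(weight(d) for d, _ in stations)
    return num / den if den else None

# One nearby and one far-away (hypothetical) station:
print(weight(0))       # 1.0 at the grid point
print(weight(1200))    # 0.0 at the cutoff
print(grid_point_anomaly([(100, 0.5), (1100, 2.0)]))
```

Note how a single station 1100 km away still pulls the grid point value; a cell with no station inside it at all can still get a “defined” anomaly this way, which is the “apple basket to orange basket” issue above.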

Comments are closed.