Update 28 Jan 2010: An Arctic Blink Graph? Why not?
From “Almostcertainly” in comments we have this interesting ‘blink comparison’ of the poles with a 250 km ‘spread’ compared to a 1200 km ‘spread’. Just click on the image to see the difference between the default 1200 km GISS chart and one that a bit more closely shows how little we really know. (And even that one overstates it. Each of those little 250 km wedges is really just a point source thermometer in most cases.) See the link:
for more details on the content of this graphic. The original pointer to the 2 static graphs at GISS was from “boballab” also in comments. (I don’t know what to call this process of collaborative blog page construction and joint exploration, but I think it’s a bit “cool” ;-) Maybe “Science Barn Raising?”… )
A rather ‘graphic’ demonstration of just how much we don’t know about real temperatures in the Arctic. It also says that a closer look at those Alaska and Eastern Russian stations might be interesting. I wonder if we know any pattern of wind or sea that would be sending warm water up that way?
I also find it fascinating to watch the Greenland and nearby stations have their little red squares suddenly balloon into large red blobs, but they ALSO move poleward. A very clear example of how GIStemp moves real temperatures around when making out-of-place anomalies.
You also get to see how the Southern Hemisphere is dominated by a few Islands, Australia and the coastal stations of Antarctica. A big empty with some Island Airports and water-moderated shores…
To make the blinker “GO”, click on the planet in the image:
GHCN with a 250 km ‘spread’ and 1991-2006 baseline.
This image shows the world as of last month when compared to the “maintained” part of GHCN as a baseline.
Why This Map?
Well, the argument is being put forward that all this talk about thermometer deletions is silly because GHCN was just an old historical archival exercise anyway and never was intended to be regularly updated. That 1990 Great Dying of Thermometers is just an artifact of the creation date. Only a limited number of stations continued to report on an ongoing basis, so this is just an accidental artifact.
Then the necessary corollary is that the only valid baseline to use is the one for which we have ongoing maintained data. After all, if the thermometer data can’t make it from Bolivia to the USA in 20 years, maybe using Bolivia as part of the baseline is not a good idea…
So I made that “anomaly map” at the GISS web site. It looks at last December data and compares them to the 1991-2006 baseline. I chose to cut off the baseline in 2006 just so that there would be some distance between the present date and the baseline and so that the baseline is 15 years ( 1/2 the GISS default, but better than the 10 years that I’d have rather used were I going for emphasis…)
I think it is pretty clear that the world is getting colder…
Remember that GISS, in GIStemp STEP4_5, ‘makes up’ the Arctic temperatures via an interpolation of temperature estimates derived from a satellite ice estimate (and some of the ice monitoring satellites have had sensor issues, though I don’t know which ones GISS uses), so those Arctic Reds seem to be a permanent fixture no matter how cold the actual temperatures have been Up North. This graph is from an unknown step of GIStemp, but from the lack of ocean coverage (that comes from HadCRUT with the SST anomaly map that includes the Ice Estimate based Arctic temps) I suspect it might be from STEP3 (before the estimated Arctic is blended in).
It is also interesting that all around Madagascar is slightly warm, even though they stopped reporting about 2005 / 2006. We also know that Morocco had the thermometers moved from near the ocean to closer to the Sahara. It would be interesting to look more closely at what has gone on in Turkey… (These maps are GREAT for telling you where the thermometers have been changed. Where there is red, there is usually something interesting about the thermometer history…)
This map also uses a 250 km smoothing (i.e. making up data out to 250 km from the stations). All that grey is where we have no clue what the temperature is in GHCN… Those spots out to sea are largely Island Airports.
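(For folks who like code better than prose, here is a rough sketch, with made-up Arctic-ish station locations, of what a distance ‘spread’ does: a grid cell only gets painted if some station sits within the cutoff radius, so a 1200 km radius colors in vastly more map than a 250 km one. This is just an illustration, not the GISS plotting code.)

```python
# Minimal sketch (NOT the GISS code) of how a distance "spread" fills a map:
# a grid cell gets a value only if some station lies within the cutoff radius,
# so a 1200 km radius paints far more of the map than a 250 km one.
from math import radians, sin, cos, asin, sqrt

def km_between(lat1, lon1, lat2, lon2):
    """Great-circle distance in km (haversine)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Hypothetical Arctic-ish stations: (lat, lon, anomaly in C) -- invented values
stations = [(71.3, -156.8, 1.4), (64.8, -147.7, 0.9), (78.2, 15.6, 2.1)]

def coverage(cutoff_km, grid_step=5.0):
    """Fraction of grid cells north of 60N with at least one station in range."""
    cells = hits = 0
    lat = 60.0
    while lat <= 90.0:
        lon = -180.0
        while lon < 180.0:
            cells += 1
            if any(km_between(lat, lon, s[0], s[1]) <= cutoff_km for s in stations):
                hits += 1
            lon += grid_step
        lat += grid_step
    return hits / cells

print("covered at  250 km:", round(coverage(250), 3))
print("covered at 1200 km:", round(coverage(1200), 3))
```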
So if the folks at NOAA / NCDC want to make the argument that all that Great Dying of Thermometers is just an artifact of the creation being an old musty archival thing, hey, fine with me. We’ll just move the baseline up to where we have maintained data more or less in sync with the actual thermometers used in the world today and be done with it….
And we’ll also be done with the whole “getting warmer” topic while we’re at it…
One inspiration came from a question here under the “USHCN vs USHCN.v2″ thread:
John in L du B
Thanks for all your work Chiefio.
As reported yesterday at ICECAP, an explanation for station dropout:
…as Thomas Peterson and Russell Vose, the researchers who assembled much of GHCN, have explained:
“The reasons why the number of stations in GHCN drop off in recent years are because some of GHCN’s source datasets are retroactive data compilations (e.g., World Weather Records) and other data sources were created or exchanged years ago. Only three data sources are available in near-real time.
It’s common to think of temperature stations as modern Internet-linked operations that instantly report temperature readings to readily accessible databases, but that is not particularly accurate for stations outside of the United States and Western Europe. For many of the world’s stations, observations are still taken and recorded by hand, and assembling and digitizing records from thousands of stations worldwide is burdensome.
During that spike in station counts in the 1970s, those stations were not actively reporting to some central repository. Rather, those records were collected years and decades later through painstaking work by researchers. It is quite likely that, a decade or two from now, the number of stations available for the 1990s and 2000s will exceed the 6,000-station peak reached in the 1970s.”
Is this at all plausible?
REPLY: [ I would find it “plausible” that that is what they would like to believe. There ‘are issues’ with the thesis, though. The simplest is the USHCN problem. How can it be a “not actively reporting” problem when NOAA / NCDC make both the USHCN and the GHCN, yet selectively filter the data so only a subset now makes it from their right pocket into their left pocket?
Then we have Canada saying that they DO report all the stations in Canada and it isn’t THEM dropping the data on the floor. And Russia has accused NCDC of selective listening skills too… There are also the ongoing changes in composition of the data. If it were all a one-time “done in 1990” event, you would expect the curve to show a single step change. There is a large step change, but it is bounded by curves on each side. Ongoing changes are happening.
BTW, the notion that the data set was created in 1990, and therefore any changes in composition relate only to that date (such as the peak in 1970 being before it was created), completely ignores that there are temperature data in it from the 1800’s that NOAA / NCDC don’t mind changing as their methods change. They have with some frequency ‘diddled the past’, as their ‘data set changes over time’ demonstrate.
Finally, I find it ludicrous on the face of it that they talk about it as ‘not real time’ or not “instantly report temperature readings”. Folks, it’s been twenty years since 1990. Even a mule train from Bolivia or a row boat from Madagascar could have gotten the data here by now… And speaking of Madagascar, their thermometers survive until about 2005, then start dying off. SOMETHING is happening after 1990, it’s just not Temperature Truth that’s happening.
But hey, if they want to use that excuse, I’m “good with that”. The necessary consequence of that line of reasoning is this:
“The GHCN data set is obsolete by 20 years. The ongoing maintenance of the data has been botched. The result is a structural deficit that makes it wholly unsuited to use in climate analysis and that makes any statements about the ‘present’ vs. the ‘baseline’ useless due to lack of recent data comparable to that baseline. All research based on the GHCN must be discarded as tainted by a broken, unmaintained data set.”
And the corollary is that we ought to fire NCDC and just contract the data set out to Wunderground. They have fast and complete access to the data…
So if they want “to go there”, then I’m “good with that”… but I don’t think they will like the result… All their excuse does is change the “issue” from sin of commission to sin of omission. Not exactly a big advantage. They still “own” the data set and they still “own” the brokenness… -E.M.Smith”
A second, though lesser, inspiration for this map and posting came from a comment over on WUWT where it was asserted that the whole business of looking at thermometer change over time was silly. So I’m going to paste an edited version of my response to that comment here. Folks who want to see the whole thread can find it here:
A big compendium of nonsense here. I’ll try to make a start.
Thought you “tried to make a start” back on the other posting where we already hashed this over.
“More than 6000 stations were active in the mid- 1970s. 1500 or less are in use today.”
This just propagates a misunderstanding of what GHCN is. 6000 stations were not active (for GHCN) in the 1970’s. GHCN was a historical climatology project of the 1990’s. V1 came out in 1992, V2 in 1997. As part of that, they collected a large number of archives, sifted through them, and over time put historic data from a very large number in their archives.
Those stations were, in fact, active in 1970. 5997 of them in that year. The exact date the data get into GHCN is not particularly important. (Just as the 1880 data do not imply that GHCN was compiled in 1880… yet thermometers were still active then.)
And, BTW, data neatly archived but unavailable is functionally useless. (Like that warehouse scene in Raiders of the Lost Ark…) I’d hope you are not asserting that GHCN is only usable as an archival location…
After 1997, it was decided to continue to update the archive. But it wasn’t possible to continue to regularly update monthly all the sites that had provided batches of historic data to the original collection. That’s a different kind of operation. They could only, on a regular basis, maintain a smaller number. This notion of a vast swag of sites being discontinued about 1992 is very misleading. 1992 is about when regular reporting started.
So you are saying that the data set is 1/2 obsolete archive and 1/4 usable data (and 1/4 misc who knows what… like Madagascar that gets sort of updated sometimes… maybe… until 2005 or so). OK, fine with me. Means that ALL the work based on it is based on a horridly botched data set design. Sure you want to “go there”? Broken by design? Obsolete archive?
[One could also ask if it would not, therefore, be prudent to make a subset of GHCN that only includes the active sites from today to make historical comparisons valid…]
“It is only when data from the more southerly, warmer locations is used in the interpolation to the vacant grid boxes that an artificial warming is introduced”
A constantly repeated, way-off meme.
Nope. An accurate statement of what the data say.
Firstly, there’s little quantification of such a drift.
Try looking at the data. I did. It’s easy to see and well characterized:
This first link looks, in particular, at the impact of leaving the USHCN stations out of GHCN. GIStemp provided a convenient vehicle to do this since it uses both, but neatly dropped the USHCN stations on the ground from May 2007 to November 2009 (when they finally put in the USHCN.v2 data). So we can MEASURE the impact. And it is 0.6 C for those stations. That is the warming bias in the base data from those locations being left out of GHCN.
[If it is an accidental bias, then we’ve had one heck of a ‘temperature accident’.]
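(If you want the shape of that ‘measure the impact’ comparison in code, here is a minimal sketch with invented station names and values: average the full set, average the thinned set, and difference them. The 0.6 C figure above comes from running the actual GIStemp benchmark, not from this toy.)

```python
# Sketch of the "leave the dropped stations out and see what changes" comparison.
# Station names and values here are invented for illustration only; the point is
# the method: average the full set, average the thinned set, difference them.

full_set = {          # station id -> mean annual temperature (C), hypothetical
    "rural_long_lived_1": 9.8,
    "rural_long_lived_2": 10.1,
    "airport_recent_1":   12.3,
    "airport_recent_2":   12.0,
}
dropped = {"rural_long_lived_1", "rural_long_lived_2"}  # stations missing after the cutoff

def mean(vals):
    return sum(vals) / len(vals)

kept_only = {k: v for k, v in full_set.items() if k not in dropped}

bias = mean(kept_only.values()) - mean(full_set.values())
print(f"apparent warming from changing the station mix: {bias:+.2f} C")
```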
BTW, this also illustrates another silly thing you keep asserting. Those stations that are in the USHCN and were dropped from the GHCN were not due to some archival unavailability of the data or similar lack of reporting. NOAA / NCDC produce both data sets. They would have to move the data all the way from their right pocket into their left… It was a decision not an unfortunate accident of reporting circumstances. So asserting otherwise is, at best, disingenuous.
(I’m especially fond of the “Bonus Round” top 10% table at the very bottom. The more stabilized the thermometer set, the less drift of the average temperature. In that set of ‘over 100 years in the same place’, “Global Warming” is effectively non-existent. I’d love to know how the globe can be warming when the best longest lived thermometers are not, but only the new ones at tropical airports are…)
This second link lets you see directly how much the different groups of records carry a warming signal. All the warming is in short lived records. I have a whole series of “by latitude” reports as well. They clearly show the migration of the average thermometer location toward the equator.
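(The ‘by latitude’ reports boil down to something like this sketch: for each year, average the latitude of the stations that actually reported that year. The records below are invented; the real reports run over the GHCN station inventory plus the v2.mean temperature file.)

```python
# Sketch of a "by latitude" report: for each year, average the latitude of the
# stations that actually reported that year. Input records are invented here.

records = [  # (year, station_id, latitude) -- hypothetical reporting records
    (1975, "stn_a", 62.0), (1975, "stn_b", 48.0), (1975, "stn_c", 35.0),
    (2005, "stn_b", 48.0), (2005, "stn_d", 21.0),
]

by_year = {}
for year, stn, lat in records:
    by_year.setdefault(year, []).append(lat)

for year in sorted(by_year):
    lats = by_year[year]
    print(year, "stations:", len(lats), "mean latitude:", round(sum(lats) / len(lats), 1))
```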
Though I must grant you that the “southerly” reference is a bit broad. Yes, most thermometers drift south, but in Australia in the Southern Hemisphere we found them drifting north… An early look here:
just shows the southern drift. [due to everything starting out up north] It was later in the detailed ‘by country’ and ‘by continent’ looks that I saw the more subtle patterns:
“Most” of them can be reached through this link:
though the full list in chronological order is usually here:
And here is that Australia trend:
Now, for all the folks who look at these (the results of lots of hours of computer time, full of charts of numbers) please remember that he thinks these are “little quantification”…
But the main thing is, all the GMST calcs are done with anomaly data. Station temps measured with respect to their own mean over a period, or at most, at their own supplemented with some nearby station data. It doesn’t matter if stations are replaced with other stations of higher mean.
And this, frankly, is bull pucky. Station temps are run through a meat grinder of processes long before the “anomaly map” is calculated in STEP3. We have UHI “corrections” that go the wrong way in about 1/4 of the cases. We have lots and lots of “in-fill” and “homogenizing” and who knows what, then, at the very end, the station data is compared to an average of a bunch of other stations to compute an anomaly, NOT just to itself. I posted the code comments on the other thread (I’ll not put all of them here, too, folks who care can go see what the code says it does here down in the comments):
down near the very bottom (at least, right now).
[ or just look at the “GIStemp Technical and Source Code tab on the right margin]
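(To make the distinction concrete, here is a toy comparison, with made-up numbers, of an anomaly taken against a station’s OWN baseline mean versus one taken against a combined ‘basket’ of nearby stations. The two only agree when the basket happens to have the same mean as the station.)

```python
# Two different notions of "anomaly", shown side by side with made-up numbers.
# (1) "Self" style: station minus its OWN baseline mean.
# (2) Basket style: station minus the mean of a combined set of nearby stations.
# They only agree when the basket happens to have the same mean as the station.

station_baseline = [10.0, 10.2, 9.8, 10.1]        # this station, baseline years (C)
basket_baseline  = [12.0, 11.8, 12.3, 12.1]       # combined nearby stations, same years
current_reading  = 10.6

own_mean    = sum(station_baseline) / len(station_baseline)
basket_mean = sum(basket_baseline) / len(basket_baseline)

print("anomaly vs own mean:   ", round(current_reading - own_mean, 2))
print("anomaly vs basket mean:", round(current_reading - basket_mean, 2))
```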
What could matter is if stations are replaced by others with a higher warming trend.
Say, like Airports?
[There are a couple of other ‘by airport’ analysis reports in the AGW and GIStemp topics category too.]
Where we find a persistent increase in the percentage of thermometers that are at what are now airports over time. Like, oh, 92% in the USA. Good luck finding a ‘rural reference station’ in that lot…
And that’s where this argument gets really silly. The stations with higher warming trend are at higher latitudes. Shifting stations away from the poles (to whatever extent it may have happened) would have a cooling trend, not warming.
Bald faced assertion with NOTHING in the way of data to back it up. All hypothesis, no cattle.
[ Look at that map up top. See all that grey toward the south pole? No Data. And I think we can agree that some scattered Antarctic stations don’t stand in well for the whole southern ocean. (WUWT has ‘found issues’ with many of the Antarctic stations, and my analysis of the Antarctic data showed a great number of very short and messy records. So lots of ‘in fill’ and reaching too far for ‘reference stations’.) Now remember that the Arctic is an optimal interpolation of temperature estimates from ice estimates from satellites that have sometimes ‘had issues’ measuring the ice. No data. Lots of extrapolated hypothesis.]
So: No, that’s just where you are ‘sucking your own exhaust’ a bit too much. If you look at the actual DATA from Canada, you find it cooling. It’s only when you compare it to thermometers from different places over time that the “north” is warming. Same thing in New Zealand. No warming if you use the stable set. The warming only comes in because one very southerly island is in the baseline (AND used to fill in grid boxes… I’ve run the code…) but taken out recently (so grid boxes must look elsewhere for ‘in fill’ and elsewhere is airports closer to the equator…) IIRC, Campbell Island about 68 S. Oh, and in Canada they use ONE thermometer in “The Garden Spot of the Arctic” to get that warming trend north of 65 N.
“Interestingly, the very same stations that have been deleted from the world climate network were retained for computing the average-temperature base periods”
Misunderstanding of how anomalies are actually calculated underlie a lot of the argument about station shifts.
Yes, they do. And almost universally from the “warmers” side where folks assert anomalies are calculated in some nice neat “self to self” same station way when they are not. The code averages baskets of thermometers together (and different baskets at different time intervals) and compares a station to the baskets. Read The Code. An excerpt from comments in the other thread:
C**** The spatial averaging is done as follows:
C**** Stations within RCRIT km of the grid point P contribute
C**** to the mean at P with weight 1.- d/1200, (d = distance
C**** between station and grid point in km). To remove the station
C**** bias, station data are shifted before combining them with the
C**** current mean. The shift is such that the means over the time
C**** period they have in common remains unchanged (individually
C**** for each month). If that common period is less than 20(NCRIT)
C**** years, the station is disregarded. To decrease that chance,
C**** stations are combined successively in order of the length of
C**** their time record. A final shift then reverses the mean shift
C**** OR (to get anomalies) causes the 1951-1980 mean to become
C**** zero for each month.
C****
C**** Regional means are computed similarly except that the weight
C**** of a grid box with valid data is set to its area.
C**** Separate programs were written to combine regional data in
C**** the same way, but using the weights saved on unit 11.
So not exactly like you’ve been asserting. LOTS of weighting going on. [And lots of comparisons to buckets of averages of averages of infilled homogenized averages…]
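(Here is a much-simplified sketch, with invented data, of the combining rule those comments describe: longest record first, weight 1 – d/1200, and a shift so the mean over the years a station shares with the running combination is preserved. It is NOT the GIStemp Fortran, just the shape of the algorithm.)

```python
# A much-simplified sketch of the combining rule described in the comments above:
# stations are taken longest-record first, each is weighted by 1 - d/1200 (d = km
# to the grid point), and each is shifted so its mean over the years it shares
# with the running combination matches. NOT the GIStemp Fortran; data invented.

stations = [  # (distance_km, {year: temperature C}) -- hypothetical
    (100,  {1951: 10.0, 1952: 10.2, 1953: 9.9, 1954: 10.1}),
    (600,  {1952: 12.1, 1953: 12.0, 1954: 12.3}),
    (1100, {1953: 8.5,  1954: 8.7}),
]

combined = {}   # year -> (weighted sum, weight sum)

def mean_over(years, series):
    return sum(series[y] for y in years) / len(years)

# Longest record first, as the comments describe.
for dist, series in sorted(stations, key=lambda s: -len(s[1])):
    weight = 1.0 - dist / 1200.0
    current = {y: s / w for y, (s, w) in combined.items()}
    common = sorted(set(series) & set(current))
    shift = (mean_over(common, current) - mean_over(common, series)) if common else 0.0
    for y, t in series.items():
        s, w = combined.get(y, (0.0, 0.0))
        combined[y] = (s + weight * (t + shift), w + weight)

for y in sorted(combined):
    s, w = combined[y]
    print(y, round(s / w, 2))
```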
They do not calculate a global average and then subtract it. The basic method is the Climate Anomaly Method, which NOAA uses. Each station has an anomaly calculated with respect to its own average.
Flat out WRONG. The data from NOAA arrive as temperatures at GIStemp, not anomalies. An error you made in the other thread too.
In GIStemp, station data are carried AS station data through STEP2 (they do produce a couple of “zonal averages” along the way, but the temp data are carried forward), THEN that process noted above is applied. Notice that a basket of stations is averaged based on a scaling factor and then compared. But only after adjusting their mean and some other changes.
Gistemp uses the same method, but applied to grid points (Sec 4,2), rather than individual stations. Again, this is very little affect by any general drift in stations – the grid points don’t move.
BTW, many of those “grid boxes” have exactly NO stations in them and many have exactly ONE. Good luck with that whole “it’s a grid [of stations] so individual station bias won’t matter” thing… ( 8000 boxes, 1500 stations… do the math…)
[ By Definition, at least 6500 boxes will need to have some ‘data’ made up to fill them.]
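(The arithmetic, spelled out:)

```python
# The "do the math" from above: even in the best case where every one of the
# ~1500 stations lands in its own box, at least 8000 - 1500 boxes hold no station.
boxes, stations = 8000, 1500
print("boxes with no station (best case):", boxes - stations)                  # 6500
print("fraction of grid needing made-up data, at least:", (boxes - stations) / boxes)  # ~0.81
```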
The anomalies are calculated in STEP3 (STEP4_5 just blends in a pre-fab sea anomaly map from HadCRUT). So GIStemp carries temperature data to the end, then makes an anomaly map out of it after most of the damage was already done to the temperature data. And does NOT do it by comparing that thermometer data to an earlier self.
Frankly, it is blatantly obvious that it can’t.
The “record” is largely made up of disjoint segments of too few years to be usable if they did. Only 10% of it is over 100 years and a huge chunk of thermometers are less than 25 years. And with all of 1500 stations surviving, and many of THEM short lived, they would be hard pressed to find anything against which to compare. From an analysis of the “best” thermometers representing the top quartile (a bit over 3000 thermometers and about 1/2 the total data in the data set) we have a report that shows not many survive into the present DECADE (and we know more of them die off during that decade…):
This is a set of monthly averages of the temperature data, then the annual average, and finally the thermometer count. I’ve deleted most decades so you can focus on what matters:
DecadeAV: 1879  1.8  2.7  5.8 10.4 14.7 18.8 20.8 20.1 16.7 12.1  6.6  2.6  11.1   575
DecadeAV: 1889  0.4  2.0  5.3 10.7 15.4 19.0 21.1 20.3 17.2 12.0  6.5  2.7  11.1  1137
...
DecadeAV: 1959  0.2  1.7  4.8 10.4 15.0 18.7 20.9 20.3 17.0 12.0  5.7  2.0  10.7  3179
DecadeAV: 1969 -0.6  1.0  4.9 10.4 14.8 18.6 20.8 20.1 16.7 12.0  6.1  1.0  10.5  3207
DecadeAV: 1979  ...
DecadeAV: 2009  1.8  2.8  7.0 12.0 16.3 20.2 22.6 22.1 18.3 12.7  7.0  2.1  12.1   304
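(A minimal sketch of how a “DecadeAV” line like those gets built, with invented readings: average each month across the decade’s reports, take the annual mean, and count the distinct thermometers. The real report runs over the whole GHCN selection.)

```python
# Sketch of how a "DecadeAV" line of the kind shown above can be built:
# per-month averages across all readings in the decade, then the annual mean,
# then the count of distinct thermometers reporting. The records are invented.

readings = [  # (year, station_id, [12 monthly temperatures in C]) -- hypothetical
    (2001, "stn_a", [1.5, 2.4, 6.8, 11.9, 16.1, 20.0, 22.4, 21.9, 18.1, 12.5, 6.8, 2.0]),
    (2002, "stn_a", [2.1, 3.2, 7.2, 12.1, 16.5, 20.4, 22.8, 22.3, 18.5, 12.9, 7.2, 2.2]),
    (2002, "stn_b", [1.8, 2.8, 7.0, 12.0, 16.3, 20.2, 22.6, 22.1, 18.3, 12.7, 7.0, 2.1]),
]

monthly_sums = [0.0] * 12
rows = 0
thermometers = set()
for year, stn, months in readings:
    rows += 1
    thermometers.add(stn)
    for m, t in enumerate(months):
        monthly_sums[m] += t

monthly_avgs = [round(s / rows, 1) for s in monthly_sums]
annual_avg = round(sum(monthly_avgs) / 12, 1)
print("DecadeAV:", *monthly_avgs, annual_avg, len(thermometers))
```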
That middle chunk of the table, with about 3000 thermometers, is the “baseline”. Our present decade has 304 survivors.
That’s right. 304 for the whole world. The rest (~1200) are all fairly short lived records and mostly at warm low latitude and low altitude locations.
So unless you want to say that you are somehow comparing those other 1200 to an average bucket, you have to accept that they are not being compared to much at all. They just are not long enough lived.
So, you pick it: Compared to a composite bucket (as the code claims) or not compared to anything at all and we’re just wasting our time talking about ‘anomalies’…
[And now, up top, you can see what happens if you actually compare ‘the more or less constant set of thermometers’ against itself. Only putting into the ‘baseline’ the data that are not part of The Great Dying of Thermometers.]
I apologize for the length and detail of this reply, but I have gone through all the code and all the data and when folks just want to hand wave that away with “the anomaly will save us!”, well, let’s just say they really need to look at what is really DONE and not what they would like to imagine is done. We’ve had enough imagination applied to the data already…
Oh, and BTW, I did a benchmark on the anomaly. It DOES change when you leave out the thermometers GHCN left out. This is a crude benchmark in that the anomaly report is for the whole Northern Hemisphere while the data are only changed in the USA. In theory, this means a 25 X uplift is needed to adjust for the area dilution ( 50% / 2% = 25 ). The anomalies change by 1/100, 2/100. Heck even some 4/100 C. Scaled for the small number of total grid boxes of the hemisphere that are shifted, that implies about a 1/4 C to 1 C shift in the anomaly in those specific boxes that are changed…
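(That back-of-the-envelope scaling, spelled out:)

```python
# The back-of-the-envelope scaling from the paragraph above:
# the benchmark changed data only over the USA (~2% of the globe), but the anomaly
# report averages the whole Northern Hemisphere (50% of the globe), so a
# hemisphere-wide shift understates the local shift by roughly 50% / 2% = 25x.
usa_share_of_globe = 0.02
hemisphere_share   = 0.50
uplift = hemisphere_share / usa_share_of_globe          # ~25x

for observed_shift_c in (0.01, 0.02, 0.04):             # the 1/100 .. 4/100 C seen in the benchmark
    print(f"hemisphere shift {observed_shift_c:.2f} C  ->  local shift ~ {observed_shift_c * uplift:.2f} C")
```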
So you can take your theoreticals and smoke ’em. I’ve run a benchmark with the actual GIStemp code on real data and the anomaly map changes. By a very significant amount. Now we’re just haggling over the price [of this particular service provider… ]