## Assume A Spherical Cow – therefore all steaks are round

Assume A Spherical Cow

Original image and story (with the risk, of course, that the AGW Wiki Langoliers will go change history… again.)

Fair warning, this posting is a bit long. It covers a lot of history of what I’ve been doing and why, plus for the long suffering, it has graphs in it ;-)

### Summary

I am not trying to calculate a “Global Average Temperature”, I am looking for information about the structure of the temperature data that we have for the planet. Basically, my major goal is to see how big the DATA problem is as a first step. In a later process will come analysis of how well GIStemp handles this data bias. And along the way I am in the process of making my own analysis program that shows what the actual warming of the planet (if any) is likely to be. The DATA bias, and my analysis of the data showing what I think is the correct result, will both be used to assess the quality of the processing in GIStemp when the time comes. The first two steps are substantially done, but the last step is still a work in progress (though with some early results).

The basic “problem” with other folks’ assertion that a “thermometer compared to itself” anomaly process shows there is no bias in the data (so I must be wrong) is that it looks at a solution to the problem and asserts this shows there is no problem. It also looks at my measurements of the problem and asserts I am claiming that is the temperature of the planet. I’m not. So we have a solution to a problem offered as proof there is no problem, and an assertion about what I’m doing that is wrong.

So what AM I doing? “Measuring the data” or “characterizing the data”. That is the very first thing one ought to do in a problem like this. Get a feel for the data. Does it rise in steps? As a smooth 200 year ramp? Are there large dropouts or spikes and drops? Big gaps and holes?

Why do you do this? Because it tells you how to fix the issues that the data has. And all data ‘has issues’. It also gives you a first rough benchmark for the data. It shows you where you are starting and lets you see the size of the problem. And there are lots of problems in the data. About 1/2 of the data are missing. Huge gaps and holes. Short-lived instruments catching a biased glimpse in time. Changed technology introducing step-function jumps upward.

Yes, I think it matters to find that out and put a size on it. That way you have something to measure your performance against as you test your solution. While some folks are willing to just assume their solution is perfect, I prefer to measure them against the starting point.

In the case of GIStemp I’ve run a benchmark that shows that the particular kind of anomaly process it does (an imperfect comparison of one group of thermometers in a baseline period to a different group later) does let instrument-change effects leak through. So we know it ‘has issues’; now we need to measure ‘how much’. In that context, knowing what bias is in the base data does matter. Furthermore, even the folks at NASA admit this problem. From the NASA FOIA emails as discussed here:

https://chiefio.wordpress.com/2010/02/18/nasa-giss-speaks-foia-emails/

From: Gavin Schmidt gschmidt@giss.nasa.gov
Date: 19 Feb 2008 14:38:47 -0500
To: rruedy@giss.nasa.gov

I had a look at the data, and this whole business seems to be related to the infilling of seasonal and annual means. There is no evidence for any step change in any of the individual months.

The only anomalous point (which matches nearby deltas) is for Set 2005. Given the large amount of missing data in lampasas this gets propagated to the annual (D-N) mean – I think – with a little more weight then in the nearby stations. The other factor might be that lampasas is overall cooling, if we use climatology to infill in recent years, that might give a warm bias. But I’m not sure on how the filling-in happens.

Gavin

So it isn’t just me saying that data issues can get through the GIStemp “climatology” process, it’s Gavin in an email to the guy doing the maintenance of GIStemp.

For these kinds of reasons, it is valuable to look at the base data before doing any anomaly processes on them (but not as a replacement for same) and see just how much bias is in these base data that could be leaking through such a ‘climatology’ code.

The bulk of the GHCN analysis I have done is exactly this kind of “characterize the data” process. And it finds bias “by altitude”, “by latitude”, “by airport percentage”, and a few others. Those biases do not go away from the data just because someone has a way of fixing it that is not used in GIStemp. Those biases could very easily end up in the work product of GIStemp that is used for policy decisions.

Further, I’ve made my own variation on an ‘anomaly code’ that I’ve used to look at the data ( to find the most likely actual warming or cooling trend). What I’ve found is that the “trends” are quite different from region to region. Whatever is happening has strong local components that change when instruments change, and little “global” component. To average all the regions together “over averages” and hides the most interesting information. Most critics of actually looking at the data make this error. Averages are used to hide information that is in the way. But you don’t want to average too much or you will miss the truth hiding in the data.

Here is a sample of that product, the USA. We’ll come back to it near the bottom. For now just note that it’s not a warming hockey stick (hurrah!) and it shows things we know to be true like a warm 1930’s. (And notice that the present is on the left and the past on the right.) You ought to be able to click on the image to get a larger view.

USA All Data Delta T

And the truth that I see in the data is that there is plenty of room in the data for a warming signal based on instrument change (10’s of degrees C) and some room for a cyclical signal based on natural processes like ocean changes ( single digits of C ) but by the time you have them out of the way, the remainder is just barely able (and sometimes unable) to cover the Urban Heat Island, Airport Heat Island, and other known siting issues. After allowance for them, I see little room for CO2 as an agent. But that is an early assessment based on only a few regions. It will take a bit longer to demonstrate it fully (though a couple of examples are offered below).

At the end of it all, a Hypothetical Anomaly Process showing it can’t find bias left in the data after squeezing it all out is of little interest, and less use. I’m interested in measuring the size of the bias first, then taking it out. And in any case, it is what the real world code used for policy decisions does that matters, not the Hypothetical Code.

### The Trouble with Tamino – Spherical Cows

OK, against my better judgment, I’m going to give my evaluation of where someone else has “got it wrong”. I generally have a policy of avoiding such “prove a negative” food fights, so do not expect me to indulge in this behaviour again.
But Just This Once…

The Trouble with Tamino (what I’ll be calling that article – apologies to A. Hitchcock) begins with an error of assumption. He/She/It (that’s the trouble with pseudonymous posters, you don’t know what appellation to give them…. maybe I’ll just go with the grammatically acceptable “singular they”…) They make a false assumption about what was asserted, then attempt to prove their false assumption means something.

In this case, the first thing assumed wrong is that I have asserted that the anomaly trends for the cold stations are different from the warm stations. I have never asserted such a thing. They then run off to prove that something (that I’ve never asserted) is wrong, so I must be wrong. Well, when I was a kid we called that “A swing and a miss”. Now it is just another “error of assumption”. But right out of the gate, the basic assumption of the whole “proof” is broken. I’m not talking about anomalies, I’m talking about the DATA.

Further, I’m not overlooking the anomaly process. I’m deliberately avoiding using it. I want to see the patterns of the data. I do not want to make them disappear at this stage of the examination. To assert that I’ve got it wrong because I’m not doing what someone else wants (because I’m looking for different properties; looking for different answers) is to simply not understand what I’m doing or why.

What I have found is a persistent bias in the spatial and temporal distribution of temperature recording stations used in the GHCN. That bias is real, exists, and is demonstrable (as many of my postings have illustrated “by latitude”, “by altitude” and even “by airport” flag). That does not mean it is impossible to correct for that bias, but first you must admit the bias is there.

To demonstrate a method of dealing with the bias is not proof that the bias is missing.

The Trouble With Tamino all hinges on the notion that the only way that a bias in data can exist or can have an impact is if the anomalies over time in the average of all of them is divergent between the two sets (the kept and the tossed) and that assumption is simply wrong. Not all processes use “self to self” anomalies, so not all processes will respond to data biases in the same way.

Further, “The Trouble With Tamino” over averages. (That is, it averages too many things together and loses too much information). There is the implicit assumption that if you average a global bunch of anomalies together it means something. Well, if I have Canada getting warmer and Africa getting colder and average them together, all I may find is an accidental zero. So the world is divided into two groups (kept and tossed) and they are found to be about the same anomalies (after a basket of ill defined area adjustments). It can simply mean that the average of everything hides the interesting bits in both cases. If you would really see what is happening, look at the detail. We’ll do that in one case at the bottom of this article.
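A tiny sketch of that “accidental zero” point, with made-up numbers (this is my illustration, not anyone’s actual anomaly code): two regions with equal and opposite trends, averaged together, look like nothing happened.

```python
# Hypothetical regional anomaly trends (degrees C), five decades each.
canada = [0.0, 0.2, 0.4, 0.6, 0.8]     # a warming region
africa = [0.0, -0.2, -0.4, -0.6, -0.8]  # a cooling region

# Over-averaging: combine the two regions into one "global" series.
global_avg = [(c + a) / 2 for c, a in zip(canada, africa)]
print(global_avg)  # every entry is 0.0 -- both regional signals hidden
```

Both regions have strong, real, opposite trends; the over-averaged series shows none of it.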

Then comes the leap to the conclusion that this means something about what the actual programs used, like GIStemp, do with the data. The anomaly process in GIStemp is different from the theoretical “Self to Self”, so The Trouble With Tamino can say nothing about it. Yet it claims that the bias in the data can have no impact because it does not show up in some other process. The process in GIStemp is not Their process, so they can say nothing about how the bias will be handled by the actual code used in the real world. (That is the thing I care about, and WHY I want to know what the actual bias is in the data, not what is left over after someone does some other process.)

So we have an ill conceived process that addresses a question orthogonal to what I’m investigating and finds that highly processed data has had the interesting information squeezed out of it. Then leaps to the conclusion that this has meaning about how other processes and programs will handle bias in the data. To quote someone or other “It’s not even wrong”.

### Trot Out The Hypothetical Cow

Sidebar: There is a fairly consistent Hypothetical Cow that gets brought on stage each time the anomaly process is discussed. Folks leap to the conclusion that the data products, like GIStemp, do anomalies “Self to Self”. Comparing a thermometer today with itself in the past. GIStemp does not. It does “Random box of thermometers in time A” vs “Different random box of thermometers with adjustments and some fill in and some made up numbers in time B”. While it’s nice to have a Hypothetical Cow that makes perfect soycowburgers, I’m investigating what the actual code that is run does on the actual data it is fed.
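The difference between the Hypothetical Cow’s “Self to Self” anomaly and a basket-to-basket comparison can be sketched with made-up numbers (a toy illustration of the two styles, not GIStemp code):

```python
# Two stations, each individually flat over time.
baseline = {"warm": 25.0, "cold": 5.0}  # period A: both report
present  = {"warm": 25.0}               # period B: the cold station dropped out

# "Self to Self": each surviving station compared with its own baseline.
self_to_self = [present[s] - baseline[s] for s in present]
print(sum(self_to_self) / len(self_to_self))  # 0.0 -- no spurious trend

# "Basket to basket": average of whoever reports now, minus the average
# of whoever reported in the baseline.
basket = sum(present.values()) / len(present) - sum(baseline.values()) / len(baseline)
print(basket)  # +10.0 -- "warming" created purely by the station change
```

Neither station warmed at all, yet the basket-to-basket comparison reports ten degrees of warming. That is the leak a pure self-to-self demonstration cannot speak to.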

At this point the Trouble with Tamino heads off to hypothetical soycowburger land by computing anomalies in some (ill defined but different from GIStemp) way and asserting that this means what I’ve said about GIStemp processes on actual data must be false. Nope. Just comparing Hypothetical Cow soyburgers to real world beef.

To quote from the Trouble with Tamino article:

“Unlike some who claim to have analyzed these data, I combined station records in a proper way, I computed temperature anomalies properly, and I’ve computed area-weighted averages. All these steps are essential for a correct result.”

So here we have a statement that what is being looked at are not the DATA, but “combined” and “computed” things in ways using the “anomalies” in “area-weighted averages”.

And it might even give correct results (though without the method and code it is not possible to say). It is just as possible to be giving bad or changeable results as GIStemp does.

But in all cases it is just another Hypothetical Cow talking about something They have done which is not relevant at all to what GIStemp does with the actual DATA bias.

So if They would like to go work for NASA and fix GIStemp, I’m sure we would all be happy, especially if Their method really does work and correctly deals with all the issues of station drops and holey data and instrument bias.

But for now it is just another Hypothetical Cow making round steaks.

And it speaks not at all to the issue of DATA bias and instrument errors. Maybe it can fix the bias, but it says nothing about the presence or absence of the bias. Basically, suppressing a fever with aspirin is not proof that you are well.

### Sidebar On GIStemp Issues

Sidebar on GIStemp: I’ve run a benchmark on GIStemp that shows how it responds to station changes. First the AGW Advocates asserted that station changes would do nothing since “The Anomaly Will Fix It”. But this benchmark shows otherwise. Realize that this was NOT run on some fictional data, nor on “my data”. It uses NASA GISS code run on NOAA / NCDC data. The result was that the anomaly map changed. In 5/2007 the USHCN data set became static. Until 11/2009 GISS did not change to the USHCN.v2 set, so the USA was limited to the 136 or so stations in GHCN. I was able to “put back in” the USA stations for that interval and see that the bias leaked through to the “basket to basket” anomaly map. So in the real world of real beef, the thermometer changes DO bias the DATA and the changes in those DATA do make it into changes in the Anomaly Map from GIStemp.

https://chiefio.wordpress.com/2009/11/12/gistemp-witness-this-fully-armed-and-operational-anomaly-station/

Oh, when USHCN.v2 was “put back in” it was made warmer in the process:

https://chiefio.wordpress.com/2010/01/15/ushcn-vs-ushcn-version-2-more-induced-warmth/

So in doing such a benchmark with the intent to use the actual values in the anomaly map, you have to allow for that: compare USHCN.v2 up to 5/2007 vs USHCN.v2 up to 11/2009, not USHCN to USHCN.v2 (even if that is the way the GIStemp history will now look, comparing older results with results after 11/2009). So the present test mostly just shows that “things change”; exactly ‘how much’ needs a bit more work to unscramble the eggs from NCDC.

How much change of thermometers is there between the two sets of time that GIStemp compares? Quite a bit. This site:

http://www.ncdc.noaa.gov/oa/climate/ghcn-monthly/source-table1.html

Makes it sound like there are thousands and thousands of thermometers used in GHCN. And there are. Just not for very long for most of them. (The total is about 7280 – though you would not know that from looking at their chart. It’s “in the fine print” elsewhere in the data set description.) The chart lists things like:

```NCAR's World Monthly Surface Station Climatology  3,563
NCDC’s Maximum/Minimum Temperature Data Set  3,179
Deutscher Wetterdienst's Global Monthly Surface Summaries Data Set  2,559
Monthly Climatic Data for the World  2,176
World Weather Records (1971-80)  1,912
World Weather Records (1961-70)  1,858
U.S. Summary of the Day Data Set  1,463
U.S. Historical Climatology Network  1,221```

We’re somewhere north of 17,000 already and there is some chart left to go.

So what is the truth like? Well, starting with that 7280 total that actually make it into the GHCN data set, we can look at how many are in a sample baseline year and how many are in the present of 2010 (as of the January data). I used the built-in commands of Linux to make two files with the unique count of thermometer records in each year. The following command counts how many lines are in each of those two files:

```[chiefio@Hummer data]$ wc -l EMS.u.1960 EMS.u.2010
 5253 EMS.u.1960
 1113 EMS.u.2010```

5253 thermometer records are in 1960
1113 thermometer records are in 2010

Hmmm…. Only 5k in the start of the baseline period. And dropping to 1113 today.

Gee, we’re under 1/10th of the “small part of the chart” we looked at above; nowhere near 17,000. And we have roughly a 5 to 1 ratio between the two years. So at least 75% of the stations in the baseline will not be in the present. Those are two very different boxes of thermometers we’re going to be comparing… But just how many of them are in both time periods?

The Linux “diff” command tells you what goes in, and what is taken out, to turn one file into another. Here I look in the output of the command for the number of “pull it out” records vs the number of “put it in” records to tell me how many are changed.

```[chiefio@Hummer data]$ grep "< " DIFF.1960.2010.uniq | wc -l
4319
[chiefio@Hummer data]$ grep "> " DIFF.1960.2010.uniq | wc -l
179```

So 4319 are no longer around in 2010 and 179 new ones are put in. That makes the stable set 1113-179 = 934 (Or done the other way: 5253 – 4319 = 934 )

Our common base between the middle of the baseline and now is: 934 thermometers. Not “thousands and thousands”. (It is 715 in 1950, just before the baseline starts. Things change during the baseline period too…) I make that about 17.8% of the starting set surviving. The rest of our “box of thermometers” from that baseline year don’t make it. Not exactly a lot of stability in those “Baseline Box” to “Present Box” calculations.
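The same survivor arithmetic done above with wc and diff can be reproduced with simple set operations (the station IDs here are made up for illustration; the counts are the ones from the actual files):

```python
# Stand-in IDs: 5253 stations in 1960; by 2010, 4319 have left and 179 are new.
stations_1960 = {f"st{i:04d}" for i in range(5253)}
dropped       = {f"st{i:04d}" for i in range(4319)}
added         = {f"new{i:03d}" for i in range(179)}
stations_2010 = (stations_1960 - dropped) | added

common = stations_1960 & stations_2010      # stations present in both years
print(len(stations_2010))                   # 1113
print(len(common))                          # 934
print(round(100 * len(common) / len(stations_1960), 1))  # 17.8 (% surviving)
```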

And when you consider that there are 8000 grid/boxes of surface on the globe to be filled, most of those boxes are comparing two very different things to each other. (Those 5253 thermometers get smeared into 8000 grid/boxes in 1960, then the 1113 thermometers in the present get smeared into 8000 grid/boxes to cover the globe. That is about one stable thermometer for every ten grid/boxes. Then those two sets of smears get compared to make the “Global Anomaly Map”. Not a lot in common between those sets of boxes by that point.)

(It is actually worse than that as GIStemp tosses out a bunch of these records in processing. Anything shorter than a 20 year life span, for example. So the actual number used to make the grid boxes will be lower than these gross numbers. Others may be dropped for other reasons.)

To give you an idea how sparse the coverage can be, notice that most of the dots on this picture are the ‘short lifetime’ dots:

GHCN Stations by Geography and Age

Full size image at: Wikipedia

What we have here is a whole lot more empty space than thermometers, and especially so in the early years (that are not that long ago.)

### Back to The Hypothetical Anomaly Wars

Now I can’t say if The Trouble with Tamino has actually “gotten it right” in how They work around the DATA bias with calculated anomalies. And frankly, I don’t really care. Yet Another Hypothetical Cow is completely unimportant to the world.

What matters is that the GIStemp code that folks are using to set policy DOES let the bias through, and the bias comes from imperfect anomaly mapping via “Basket A” to “Basket B” and from using the temperature data for a variety of adjustments and in-fill PRIOR to the anomaly process. ( Homogenizing, averaging USHCN and GHCN with an odd offset, UHI “correction” that often goes the wrong way, etc.)

The AGW Advocates have constantly asserted that “The Anomaly Fixes It” and I’ve constantly asserted that I care about what the data say, not the hypothetical anomaly they would like to imagine is used (but is not). So the first error is to assume that the hypothetical anomaly means anything to what I’ve asserted. It doesn’t. They then run off to show how the anomaly trends are the same for two sets of thermometers and then leap to more conclusions. OK, they have assumed a Hypothetical Spherical Cow and found that all steaks are round. There is supposed to be a surprise here?

What I have asserted is that the DATA (not any anomalies from it nor any other hypothetical cows) have a warming bias introduced INTO THE DATA SERIES (and nothing said about their anomalies) via station changes. I’ve demonstrated this bias in the gross averages of the data over time as compared to individual station data.

(Typically at this point, the AGW Advocates will assume yet another Hypothetical Cow and assert that the averaging of the data is a bad technique for measuring global warming trends and you really ought to do anomalies … so the steaks are round and I’m wrong. That completely misses the point that what I’m measuring is the DATA bias, not some theoretical global temperature construct. To measure what bias is in the DATA, you must look AT the DATA. That is how you find out what problems it has so you can know how to fix them.)

And I’ve even made a “first cut” at my own particular way to “fix the problem” using an anomaly technique similar to “First Differences” that I’ve taken to calling “dT/dt”. The difference is that with FD, each data gap results in a ‘reset’ of the difference to zero. I carry that difference forward until some data do show up. (Mine reacts less well to things like station changes, but preserves the trend across gaps better – at least up to about a 5 year gap.)

https://chiefio.wordpress.com/2010/02/09/dtdt-agw-ddt/
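The distinction between classic First Differences and the carry-forward variant can be sketched like this (my reading of the description above, with made-up numbers; not the actual dT/dt code):

```python
def first_differences(series):
    """Classic FD: None marks missing data; a gap resets the difference to zero."""
    diffs, prev = [], None
    for t in series:
        if t is None:
            diffs.append(0.0)
            prev = None                 # reset: the trend across the gap is lost
        else:
            diffs.append(0.0 if prev is None else t - prev)
            prev = t
    return diffs

def carry_forward_differences(series):
    """Carry-forward variant: difference against the last value seen, even across a gap."""
    diffs, prev = [], None
    for t in series:
        if t is None:
            diffs.append(0.0)           # no report, no change recorded
        else:
            diffs.append(0.0 if prev is None else t - prev)
            prev = t                    # prev survives the gap
    return diffs

# A steadily warming station with a two-year dropout in the middle:
temps = [10.0, 10.5, None, None, 11.5, 12.0]
print(sum(first_differences(temps)))           # 1.0 -- gap trend lost
print(sum(carry_forward_differences(temps)))   # 2.0 -- full 2 C trend kept
```

The station actually warmed 2 C end to end; resetting at the gap throws half of that away, while carrying the last value forward preserves it.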

So contrary to the assertion that I don’t know how to ‘do it right’, I’m in fact following a very similar path of doing “Self to Self” anomaly processing to see how well that cleans up the data.

But the notion that showing “you can clean up the bias in a data set via a given process” somehow means that there is no bias IN THE DATA and that some OTHER processes will not have issues with that bias is just leaping off a cliff of conclusion to crash at the bottom of Hypothetical Cow Cliff.

### So What Are The Biases in the DATA?

What I found when I looked was a large number of what look like systematic biases introduced into the temperature series in the DATA (with nothing at all said about their anomalies). So we get thermometers leaving the mountains and headed for the beach, and we get thermometers leaving the Canadian Rockies and headed for the Mexican Megathermal Zone. (Please note: not one hypothetical cow here. Actual counts of thermometers by altitude, by latitude, etc. I’m not fond of hypothetical cowburgers, so I avoid them whenever possible). And those changes do bias the base DATA set. They load up the present with hotter places and with places that have flatter seasonal profiles.

You can find the bulk of those investigations and the measurements of how much bias is introduced into the base DATA here:

https://chiefio.wordpress.com/2009/11/03/ghcn-the-global-analysis/

though some of the articles end up in this category listing before I get them added to that “consolidator posting” link above:

https://chiefio.wordpress.com/category/ncdc-ghcn-issues/

Now realize what this is NOT saying. It is not saying that the earth is getting hotter or colder. It is not saying WHY these thermometer changes happen (malice, stupidity, or?…) It is not saying that the TREND of these averages means anything about the TREND of the actual temperature of the planet or of the anomalies for a location. It is only saying that the DATA contains a bias. It is stating the problem, not the conclusion.

(At this point many AGW Advocates commence to tossing rocks about how I’m asserting that the TEMPERATURE of the planet is headed in some direction or other and that I’m doing it wrong because I’m not doing anomalies and grid boxes. That is the error of assumption They made above. They assumed the wrong premise, then attacked it wrongly. I’m “characterizing the DATA”, not talking about the Global Average Temperature. As I’ve said many times, the average of an intensive variable is meaningless anyway. It’s like counting the number of coins in your pocket and not looking at the denominations.)

So when we look at those DATA series, we find them fairly strongly biased by thermometer changes. That still leaves open the question of what impact that bias has on the various temperature series programs ( such as GIStemp and CRUt).

For what it’s worth, while many of the “Averages of the DATA” charts are fairly obviously biased by thermometer dropouts, you can still see some valid things about average temperatures in the graphs, charts, and tables. For long periods of time the thermometer counts can be stable. During those times, we typically see little that could be called “global warming”. In some places you do see warming in some times (often with the advent of airports and cities) but the pattern of the changes is not agreeable to the CO2 Thesis. But mostly you see massive swings from instrument change. There are 10’s of degree swings from instrument change in this global calorimetry experiment and we’re trying to find 1/10’s of degrees of meaning. That is the real lesson of inspecting these charts. The signal is very very well swamped by instrument changes. (There are also visible, in the data tables, single digit degree changes from periodic ‘ripple’ as what looks to be cyclical ocean currents change along with semi-random whole digit jumps from year to year variation. We’re trying to hear a whisper at a KISS Concert.)

One of my favorite examples where you can see things is the Pacific Region.
(Region 5 in the numbering scheme).

When we look at the Pacific, there is not a lot of room to change things. Islands are often small, with only one Airport. So you often get one change from a grass field to a jet tarmac tourist stop and that’s it. Islands, though they have Airport Heat Island bias, are in many ways “Islands of Stability” too. Looking into the pattern of the DATA BIAS, I noticed that for the Pacific Region it substantially showed up in Australia and New Zealand. So I did a set of reports on what the DATA say about the Pacific Basin, and found that with just the Islands being looked at, things are mighty darned flat:

https://chiefio.wordpress.com/2009/10/29/ghcn-pacific-basin-lies-statistics-and-australia/

Now the virtue of a graph like this one:

Pacific Basin - Ex- Australia; Thermometer Count and Temps

The virtue does not come from a belief that the ‘average temperature’ of the region actually started at 18 C then rose to 25 C and held darned near flat until today but with a mysterious plunge to 21 C smack in the middle of WWII. The value comes from realizing that this is the profile of the DATA (and NOT the temperature). It is telling you that the ‘start of time’ is very cold biased. It is telling you that the middle of WWII had a major dropout of warm stations as Japan overran the Pacific Theatre, but not New Zealand and Southern Australia. And if you look at it carefully, you can also see that as long as you have instrument stability, you get about a 25 C average. It is showing you that: stability of instruments matters and does so far more than actual temperature changes.

And what is the one thing we don’t have? Stability of the instruments.

### Kiwis To The Rescue

As an interesting example, I used New Zealand. It is particularly interesting because there is a dropout of exactly ONE cold thermometer. It is in the baseline period, but it is missing in the recent past. It is Campbell Island, and it is a cold place. So when you look at the DATA profile for New Zealand with all the stations in, you find a nice warming bias introduced by dropping this one station and keeping all the others. I explored that in this article:

https://chiefio.wordpress.com/2009/11/01/new-zealand-polynesian-polarphobia/

where I showed that if you “stabilize the instruments” by leaving Campbell Island out of the whole time series, the base DATA in New Zealand have no significant warming signal present. If it isn’t in the basic DATA, it can only be introduced via the processes done to that data.

This chart shows the comparison of the average of New Zealand thermometer data with Campbell Island left in the baseline vs removing it from all of the history of New Zealand. I’ve “zoomed in” on the period from the start of the GIStemp baseline (1950) to date. Just as a reminder, the purpose of this graph is NOT to say this is the temperature of New Zealand. It isn’t. It is to say that this is the way the data look in aggregate for New Zealand. While we would expect a warming trend to cause values to rise over time, the absolute value is meaningless. (The average of an intensive variable is meaningless, which is why a “Global Average Temperature” is meaningless, but that’s a topic for another day. Just think of it this way: the average number of coins in your pocket is meaningless if you don’t look at the denominations. For net heat gain on the planet, we need more than just temperature. Mass, specific heat, and the heats of vaporization and fusion of water are all missing.)

With that said, for a place of limited range, like New Zealand, we can depend on the “odds” being that most of the “coins in our pocket” will be dimes over much of the period in question… (Put less poetically, there are a limited number of stations in New Zealand and they are substantially airports at sea level in a limited latitudinal range. The major “ringer” is Campbell Island, a very cold southern island that is in the baseline interval but not in the recent history.) So we are looking at the biasing factors in the base data and we are looking at the trend over time with and without that bias.

New Zealand data with and without Campbell Island

As you can see, the major source of a rising trend in the New Zealand data comes from that one lone Island and it being in for a while, then left out.

This does not mean it is impossible to remove that bias in the data, but it does mean that the various ‘climate codes’ run on the data must be able to deal with that fundamental bias in the data. That has not been demonstrated. Hypothetical examples using pure “self to self” anomaly calculations do not answer this issue either. GIStemp does anomaly calculations by comparing “Basket with Campbell Island” in the baseline against “Basket without Campbell Island” now. It is simply assumed that the machinations of GIStemp will be sufficient to correct these two baskets to remove this basic bias. For the other climate codes, we do not have the source code to know how they calculate anomalies (but we do know that there are not a lot of choices as the base data are so sparse in many regions as to make “self to self” anomalies very limited in scope).
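A toy version of the Campbell Island effect (made-up numbers, not the actual New Zealand station values): several stable stations plus one cold station that reports in the baseline but not afterwards. Every station is flat, yet the aggregate “warms”.

```python
stable  = [14.0] * 8   # eight flat, sea-level airport-style stations
coldest = 7.0          # the one cold southern island

baseline_avg = (sum(stable) + coldest) / 9   # cold island still reporting
present_avg  = sum(stable) / 8               # cold island dropped

print(round(baseline_avg, 2))                # 13.22
print(round(present_avg, 2))                 # 14.0
print(round(present_avg - baseline_avg, 2))  # 0.78 C of "warming" from the drop
```

Nearly a degree of apparent rise, with not one station changing temperature. That is the kind of bias the processing codes are being trusted to remove.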

Taking an excerpt of the table in the New Zealand link, we can look closely at what happens in 2002 when Campbell Island is dropped:

```Thermometer Records, Average of Monthly Data and Yearly Average
by Year Across Month, with a count of thermometer records in that year
--------------------------------------------------------------------------
YEAR  JAN  FEB  MAR  APR  MAY  JUN JULY  AUG SEPT  OCT  NOV  DEC  YR COUNT
--------------------------------------------------------------------------
1880 16.6 17.3 15.7 14.0 11.4  8.4  8.2  8.5 11.0 11.6 14.4 14.8 12.7   6
1881 15.4 17.0 15.6 14.1 11.8  9.8  9.1  9.1 11.3 11.7 13.1 14.9 12.7   4
1882 16.1 15.2 16.0 13.9 11.1  9.9  8.6  8.4 11.0 11.0 12.9 15.9 12.5   4
1883 17.2 17.6 15.8 12.5 11.3  9.4  8.2  8.9  9.5 11.0 12.0 14.2 12.3   4
1884 14.0 14.7 14.2 12.2  9.8  9.0  8.2  9.1 10.2 11.0 12.2 14.1 11.6   4
1885 15.1 16.2 15.4 12.7 10.5  9.9  8.3  8.8  9.8 11.2 13.0 14.1 12.1   4
1886 16.0 17.1 15.4 13.9 11.5  8.2  7.6  8.0  9.6 11.4 13.5 14.5 12.2   4
....
2000 16.2 16.5 15.3 13.8 12.1 10.2 10.0  8.5 10.9 12.3 12.5 16.2 12.9   9
2001 14.8 16.2 15.3 13.6 12.0  9.6  8.2  9.9 11.9 13.1 14.9 17.5 13.1   9
2002 17.8 16.9 16.9 14.3 12.3 11.2  9.6  9.5 11.1 11.0 12.8 15.2 13.2   9
2003 16.9 17.2 17.2 14.2 12.6 11.3  8.6  9.8 11.6 12.3 13.5 16.4 13.5   8
2004 18.2 17.2 15.4 12.6 12.5 10.8  8.9  8.8 10.6 12.4 14.5 14.0 13.0   8
2005 17.5 18.9 16.6 13.3 12.9  9.1  9.8 10.2 11.6 12.7 14.1 17.7 13.7   8
2006 17.9 17.8 15.1 15.4 12.2  8.5  9.4  9.5 11.8 12.6 14.2 14.4 13.2   8
2007 17.2 17.4 17.0 12.2 13.4  9.3  8.9  9.8 11.0 11.7 12.6 15.8 13.0   9```

Most of the “low excursions” into the 14 C and 15 C range in January get ‘clipped out’ in 2002, and the annual 11.x and 12.x values in the right hand “YR” column become 13.x.

After the change we are consistently in the 17 C to 18 C range for January and 13.x for the year. Until 2002, you have a consistently lower series.

Looking at a graph of all the data, we see a generally flat trend running about 12 C, then a bit of a hockey stick at the end in 2002 when Campbell Island leaves the series (though the recent cooling has clipped the end of the stick).

New Zealand, All Stations, 1950 to Date

So I just took Campbell Island out of the whole thing, and there is no more warming bias in the recent DATA. This graph shows the much flatter result from 1950 to date. (Prior to 1950 we still have the 12 C base due to a limited number of stations in the set, but once we have decent coverage, we sit at about 13.5 C and stay there.)

New Zealand minus Campbell Island

So what does all this mean? It means that the bulk of the rise in the numbers comes from instrument change, not from an increase at the individual stations. Further, it means that any temperature series code (such as GIStemp) run on this data has a built-in bias to deal with. We are depending on it to do a very good job…

But even more interesting, it says that with a reasonably stable coverage area, the DATA do not show increases over time consistent with steady increases in CO2. We have one steady number (with a low thermometer count) followed by another stable number (with higher thermometer counts). The “recent rise” profile only shows up with the dropping of Campbell Island in 2002, though there is still enough instrument change to be masking some trend, one way or the other.

And you find this pattern repeated around the world, though with minor and interesting variations. “AGW” is not about CO2, it’s about instrument changes.

Here is a table of the data from 1939 to date for folks who like to have the numbers. Particularly look at that right hand “YR” column:

```Thermometer Records, Average of Monthly Data and Yearly Average
by Year Across Month, with a count of thermometer records in that year
--------------------------------------------------------------------------
YEAR  JAN  FEB  MAR  APR  MAY  JUN JULY  AUG SEPT  OCT  NOV  DEC  YR COUNT
--------------------------------------------------------------------------
1939 13.8 14.6 14.5 12.7 10.2  9.1  6.0  7.6  9.4 10.8 12.6 15.2 11.4   6
1940 16.5 14.6 15.8 13.2 11.8  9.9  9.3 10.6 11.5 12.5 13.6 16.2 13.0   7
1941 18.0 17.6 17.3 13.7 12.6  9.6  9.7  9.6 11.3 11.9 14.0 15.5 13.4   7
1942 16.1 17.1 16.1 15.2 13.2 10.9 10.4 10.5 11.9 13.3 14.5 15.7 13.7   7
1943 17.1 17.6 16.1 15.1 11.9  8.6  9.7  9.5 11.6 12.7 15.0 16.9 13.5   7
1944 17.5 18.2 17.0 15.5 12.2 10.3 10.0 10.0 10.9 12.9 14.1 15.3 13.7   7
1945 18.0 18.3 16.9 14.7 12.0  9.5  9.1 11.1 11.4 11.6 14.3 14.7 13.5   7
1946 16.4 17.5 16.4 14.9 13.3 11.1 10.5 10.5 11.7 12.4 12.4 15.0 13.5   7
1947 16.0 16.9 16.3 14.7 12.6 10.6 10.2 10.6 11.9 13.2 14.3 16.4 13.6   7
1948 17.9 16.0 15.8 13.9 11.8  9.6  9.5  9.4 10.6 11.5 13.0 14.6 12.8   8
1949 15.1 16.9 15.0 12.8 11.2  9.8  9.5  9.4 10.3 12.4 13.4 14.9 12.6   8
1950 16.3 16.1 14.6 12.7 12.2  9.1  8.4  8.6 10.1 12.0 13.5 14.7 12.4   9
1951 16.2 16.5 16.1 14.1 10.7  8.3  8.8  8.7 10.3 11.9 13.6 14.5 12.5  14
1952 15.8 17.4 15.4 13.9 11.6  9.9  8.6 10.2 11.1 12.6 13.9 15.8 13.0  14
1953 16.0 16.1 15.6 13.8 12.2 10.2  9.0  9.7 10.8 11.8 14.8 16.1 13.0  15
1954 16.6 17.5 16.9 13.7 12.3 10.4  8.9  9.3 10.3 11.9 14.7 15.6 13.2  15
1955 17.5 18.8 17.3 15.5 13.3  9.8  8.9 10.7 11.8 13.6 14.7 16.5 14.0  14
1956 19.0 17.8 15.9 16.8 12.7 11.0  9.6  9.9 11.5 13.5 15.1 16.6 14.1  14
1957 17.8 18.6 17.4 15.1 12.6  9.9  8.7 10.2 11.3 12.2 14.2 15.3 13.6  14
1958 16.5 18.5 17.3 13.3 12.0  9.9  8.6 10.0 10.7 14.0 15.0 17.1 13.6  14
1959 18.1 17.6 16.6 14.5 10.5  9.2  9.2  9.7 11.6 12.1 14.4 16.4 13.3  14
1960 17.3 17.5 15.8 14.0 12.3 10.5  9.5  9.6 11.3 13.4 14.4 15.3 13.4  14
1961 16.9 17.5 15.8 14.2 11.7 10.0  9.3  9.4 10.7 13.6 14.3 16.7 13.3  16
1962 18.1 17.4 16.7 14.6 13.7 11.1 10.0 10.3 11.2 14.1 14.5 15.9 14.0  18
1963 17.2 18.0 15.7 13.4 11.7  9.6  9.2  8.8 11.0 12.4 13.2 15.0 12.9  18
1964 16.2 16.9 16.1 13.7 11.4  9.9 10.0  9.7 10.9 12.4 13.5 16.4 13.1  18
1965 17.9 16.3 15.9 13.6 11.1  9.6  8.3  9.2 10.8 11.4 13.6 15.4 12.8  18
1966 16.8 18.3 16.6 14.3 11.2  9.5  8.8  9.0 10.7 12.1 13.5 15.4 13.0  18
1967 16.7 17.1 16.8 14.4 11.8  9.3  8.8 11.1 10.5 13.0 13.9 16.1 13.3  18
1968 16.8 17.0 17.8 14.5 12.6 10.5  8.8 10.3 10.5 12.1 13.5 14.9 13.3  18
1969 16.8 16.5 16.0 13.4 11.5  9.2  8.3  9.7 12.2 11.6 14.3 17.2 13.1  18
1970 18.2 17.3 17.2 14.9 11.5 10.5 10.0 10.6 11.8 13.0 14.4 16.1 13.8  18
1971 17.6 18.1 16.4 14.5 12.7 11.4  8.9 10.7 11.4 12.7 14.2 16.1 13.7  16
1972 16.3 16.2 17.0 14.3 11.6  8.5  9.3  8.7 11.6 12.7 15.2 14.8 13.0  16
1973 16.8 17.3 16.3 14.0 12.0 10.4  8.7 10.0 11.9 12.9 14.8 16.3 13.4  16
1974 16.6 18.9 15.4 14.8 12.1  9.9 10.0  9.6 11.8 12.7 14.9 17.4 13.7  16
1975 18.4 18.0 17.4 15.0 12.6  9.4  9.1 10.2 11.2 12.8 13.5 15.1 13.6  16
1976 16.9 15.5 16.3 14.3 11.4  9.1  9.2 10.2 10.5 11.8 13.0 15.8 12.8  16
1977 16.3 17.2 16.4 14.4 10.8  9.9  9.4  9.8  9.7 12.0 13.5 15.4 12.9  16
1978 17.5 17.9 16.7 15.6 12.3  9.7  9.7 10.3 10.9 11.7 14.1 16.1 13.5  16
1979 17.4 17.2 17.2 14.3 11.6 10.6  9.6  9.6 11.3 12.5 14.7 16.2 13.5  16
1980 17.4 17.5 15.6 13.9 11.9  9.9  9.0  9.7 11.7 13.1 13.2 15.7 13.2  16
1981 17.8 18.2 17.4 15.3 11.9 11.1  9.7  9.2 10.9 12.6 14.4 17.3 13.8  12
1982 17.6 18.3 16.4 13.2 12.3  9.6  8.8  9.4 10.8 11.4 14.9 15.3 13.2  11
1983 16.2 15.6 16.2 14.0 11.7 10.2  9.0 10.1 11.2 12.7 14.3 15.5 13.1  11
1984 16.1 17.7 17.3 14.2 11.4 10.6 10.0 10.4 10.9 12.2 15.0 17.4 13.6  11
1985 18.7 18.0 15.1 14.4 11.9 10.2 10.3  9.2 10.7 11.6 14.1 16.1 13.4  11
1986 18.3 18.5 16.5 14.9 12.5 10.3  8.4  8.7 10.0 12.5 13.6 15.9 13.3  10
1987 18.2 17.1 15.5 14.1 12.2 10.2  9.3 10.5 10.9 12.9 14.7 16.3 13.5  11
1988 17.2 18.2 16.1 13.4 11.8 10.5 10.3 10.1 12.0 13.3 15.1 16.9 13.7  11
1989 18.3 17.6 16.8 14.2 12.1 10.2  8.9 10.3 11.8 12.9 14.3 14.9 13.5  11
1990 16.5 17.7 16.2 13.6 11.5  8.8  8.4  9.2  9.7 12.2 13.9 15.7 12.8   8
1991 16.5 16.6 15.5 12.9 10.8  8.2  7.9 11.3 10.8 11.7 12.1 14.5 12.4   9
1992 16.8 16.4 14.1 11.4  8.6  8.0  8.7  9.2  8.5 11.6 14.3 14.9 11.9  10
1993 16.3 16.4 15.3 12.5 11.3  9.9  8.8  8.5  9.3 12.7 13.3 14.8 12.4  10
1994 17.7 17.6 15.2 13.0 11.3  7.6  8.1  8.2  8.2 11.1-99.0 15.7 12.2   9
1995 15.1 16.5 14.8 13.9 10.8  7.9  3.0  8.1  9.7 10.9 12.2 15.6 11.5   7
1996 16.5 16.3 15.0 13.3 10.1  7.6  7.4  7.2 10.5 11.2 12.3 14.7 11.8   9
1997 15.2 16.4 14.7 12.4 11.3  9.3  8.4  9.1  9.9 11.8 13.2 15.0 12.2   9
1998 16.9 19.5 16.6 14.1 11.8  9.0 10.1  9.3 11.2 12.9 13.8 15.7 13.4   9
1999 17.8 17.4 16.9 13.8 12.5 10.5  9.8  9.7 11.6 13.6 15.0 15.4 13.7   9
2000 17.1 17.4 16.1 14.7 12.7 10.9 10.5  9.0 11.5 13.0 13.4 17.3 13.6   8
2001 15.6 17.3 16.1 14.3 12.9 10.1  8.6 10.4 11.9 14.0 14.9 17.5 13.6   8
2002 17.8 16.9 16.9 14.3 12.3 11.2  9.6 10.1 11.8 11.5 13.5 15.9 13.5   8
2003 16.9 17.2 17.2 14.2 12.6 11.3  8.6  9.8 11.6 12.3 13.5 16.4 13.5   8
2004 18.2 17.2 15.4 12.6 12.5 10.8  8.9  8.8 10.6 12.4 14.5 14.0 13.0   8
2005 17.5 18.9 16.6 13.3 12.9  9.1  9.8 10.2 11.6 12.7 14.1 17.7 13.7   8
2006 17.9 17.8 15.1 15.4 12.2  8.5  9.4  9.5 11.8 12.6 14.2 14.4 13.2   8
2007 17.2 17.4 17.0 12.2 13.4  9.3  8.9  9.8 11.0 11.7 12.6 15.8 13.0   9
2008 15.9  9.6 12.8 12.8  9.7  9.2  9.2  9.2 11.8 12.5 14.4 16.7 12.0   9
16.9 17.1 16.0 14.0 11.6  9.6  8.9  9.5 10.8 12.3 13.9 15.6 13.0
16.6 16.7 15.6 13.6 11.3  9.2  8.5  9.1 10.6 12.1 13.6 15.4 12.7
[chiefio@tubularbells Temps]$```

Such is the impact of One Little Island…

### Pacific without Australia and New Zealand

Why this matters can be seen in this table, which looks at the DATA of the Pacific Basin with the changing instrumentation of Australia and New Zealand left out. This is a fairly stable set of places in a fairly stable climate zone, with all the UHI, pimples, and warts left in. If there is no warming signal in these data, then any warming in the final product must come from the processing done to the data. And that is the true lesson you learn from asking the DATA what they have to say, then listening politely to the answer. There is a strong instrument change signal, a very modest UHI / Airport Warming signal, but very little room left for a CO2 warming signal. I was going to make a graph of this, but a flat line just doesn’t look very impressive.

It doesn’t get much more dead flat than this. Just look at that right hand “YR” column. It wanders around 26.x from top to bottom.
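For folks who want to put a number on “dead flat”, a least-squares slope over the yearly averages is one simple check. Here is my own sketch (the sample values are made up to resemble the wandering-around-26.x “YR” column, not copied from the listing):

```python
# Ordinary least-squares slope of temperature vs. year, in C per year.
def slope(years, temps):
    n = len(years)
    ybar = sum(years) / n
    tbar = sum(temps) / n
    num = sum((y - ybar) * (t - tbar) for y, t in zip(years, temps))
    den = sum((y - ybar) ** 2 for y in years)
    return num / den

# Invented flat-ish series in the spirit of the YR column above.
years = list(range(1950, 1960))
temps = [26.1, 26.3, 26.3, 26.2, 26.1, 26.0, 26.0, 26.2, 26.3, 26.2]

print(slope(years, temps))  # a tiny fraction of a degree per year
```

A series with a real CO2-sized trend would show a slope on the order of 0.01 C/year or more; a basket like this fit comes out an order of magnitude smaller.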

```[chiefio@tubularbells analysis]$ more Temps/Temps.LIST.yrs.GAT

Thermometer Records, Average of Monthly Data and Yearly Average
by Year Across Month, with a count of thermometer records in that year
--------------------------------------------------------------------------
YEAR  JAN  FEB  MAR  APR  MAY  JUN JULY  AUG SEPT  OCT  NOV  DEC  YR COUNT
--------------------------------------------------------------------------
1880 25.7 26.6 26.7 26.8 27.2 26.3 26.3 26.8 26.9 27.0 26.5 26.0 26.6   3
1881 26.0 26.2 26.4 26.9 27.4 26.9 26.9 26.9 26.9 27.4 27.1 26.7 26.8   3
1882 26.9 26.7 26.7 27.2 26.9 26.7 26.6 26.6 26.9 26.7 26.5 26.5 26.7   3
1883 26.2 26.0 27.1 27.2 27.4 27.1 26.9 27.0 26.8 26.6 26.4 26.0 26.7   4
1884 25.5 25.8 26.4 27.0 27.1 26.6 26.1 26.7 26.6 26.8 26.6 25.6 26.4   4
1885 25.8 25.7 26.4 27.4 27.5 27.0 26.7 26.9 27.1 27.3 26.8 26.2 26.7   4
1886 25.5 25.2 26.2 27.2 27.6 27.2 26.9 27.2 27.3 27.1 26.3 26.2 26.7   5
1887 25.3 25.2 25.9 26.7 26.9 26.8 26.6 26.7 26.4 26.4 26.1 25.3 26.2   4
1888 24.7 25.2 26.6 27.7 28.0 27.5 26.7 27.1 27.2 27.2 27.0 26.2 26.8   4
1889 26.2 26.2 26.9 28.4 28.6 27.8 27.1 27.3 27.3 27.2 26.4 25.7 27.1   4
1890 25.7 25.7 26.3 26.8 26.9 26.6 26.2 26.5 26.4 26.1 25.5 25.7 26.2   5
1891 25.9 26.0 26.7 27.4 28.1 27.3 27.0 26.6 27.0 27.3 26.8 26.2 26.9   7
1892 25.8 26.4 27.0 27.0 27.7 27.0 26.8 26.6 26.9 26.8 26.3 25.8 26.7   7
1893 25.5 25.7 26.3 27.4 26.8 26.6 26.5 26.7 26.9 26.7 26.3 25.8 26.4   7
1894 25.4 25.5 26.3 27.0 26.9 26.9 27.0 26.6 26.6 26.8 25.9 25.6 26.4   7
1895 25.8 25.8 26.4 27.2 27.3 27.1 26.6 26.8 27.3 27.5 26.8 26.1 26.7   7
1896 26.4 26.6 27.0 27.2 27.3 26.8 27.0 26.7 27.1 27.6 27.5 27.0 27.0   5
1897 26.2 26.6 27.0 27.4 27.6 27.8 26.8 27.0 27.2 27.4 27.3 26.8 27.1   5
1898 26.5 26.6 26.8 27.4 27.6 26.6 26.8 26.9 27.1 27.0 26.8 26.6 26.9   5
1899 26.3 26.2 26.7 27.1 27.3 26.9 26.2 26.1 26.6 26.8 26.5 26.0 26.6   4
1900 26.1 26.4 26.8 27.2 27.2 26.7 26.2 26.5 26.9 27.1 27.4 27.0 26.8   4
1901 26.9 26.5 27.0 27.9 27.9 27.2 27.0 26.7 27.4 27.4 27.4 26.7 27.2   4
1902 26.7 26.2 27.0 27.5 27.7 27.2 27.1 27.0 27.2 27.6 27.4 27.5 27.2   4
1903 26.3 26.0 27.1 27.6 27.7 27.7 27.0 26.9 27.0 26.9 26.4 25.6 26.8   8
1904 25.3 25.1 26.4 26.8 27.1 26.7 26.4 26.4 26.6 26.8 26.5 26.0 26.3   8
1905 26.1 26.1 26.8 27.9 27.6 27.3 26.8 26.6 26.9 27.4 26.7 26.9 26.9   9
1906 26.8 27.1 27.2 28.0 27.7 27.4 27.2 27.1 26.5 26.2 26.3 26.1 27.0  10
1907 25.8 25.9 26.2 26.6 26.5 26.0 25.7 25.3 25.7 26.2 26.0 25.7 26.0  10
1908 25.9 26.1 26.3 27.0 26.3 25.8 25.5 26.2 25.8 26.1 25.8 26.3 26.1  10
1909 26.2 26.4 26.6 26.8 26.5 26.2 25.5 26.1 25.8 26.2 26.1 25.6 26.2  10
1910 25.9 26.1 26.3 26.6 26.2 25.8 26.0 25.7 26.0 25.9 25.6 26.2 26.0  10
1911 26.1 26.0 26.5 26.6 26.5 26.2 25.6 25.7 25.7 26.0 26.2 26.6 26.1  11
1912 26.3 26.5 26.9 27.0 27.0 26.5 25.7 25.8 26.2 26.1 26.1 26.0 26.3  11
1913 26.1 26.2 26.8 26.7 26.4 26.1 25.7 25.3 25.5 25.5 26.0 26.1 26.0  11
1914 25.7 26.3 26.8 27.2 27.0 26.2 26.4 25.4 25.7 26.0 26.5 26.7 26.3  11
1915 26.2 26.7 26.7 26.4 26.3 26.1 25.2 25.3 25.5 25.8 25.8 25.9 26.0  12
1916 25.5 26.2 26.1 26.3 25.8 25.6 25.3 25.6 25.2 25.4 25.5 25.5 25.7  12
1917 25.4 25.4 25.7 26.2 25.8 25.5 25.4 25.1 25.0 25.1 25.6 25.3 25.5  12
1918 24.7 24.7 25.5 25.8 25.7 25.6 24.8 25.0 25.1 25.4 25.6 25.6 25.3  12
1919 25.8 26.3 26.6 26.9 26.2 25.3 24.7 25.0 26.2 26.4 25.2 25.2 25.8  12
1920 25.5 25.8 25.9 26.3 26.1 25.3 25.1 24.9 25.1 25.1 25.4 25.7 25.5  11
1921 25.9 26.2 26.5 26.0 25.6 25.2 25.4 25.2 25.5 25.8 25.8 25.8 25.7  17
1922 26.1 26.2 26.5 26.7 26.2 25.9 25.2 25.4 25.6 25.9 25.9 26.1 26.0  18
1923 26.2 26.6 26.4 26.7 26.4 25.9 25.5 25.2 25.4 25.9 25.9 26.0 26.0  17
1924 26.2 26.5 26.7 26.7 26.9 26.2 25.9 26.1 26.2 26.1 26.2 26.0 26.3  18
1925 26.4 26.1 26.6 26.9 26.6 25.9 25.5 25.9 26.1 25.9 26.1 25.9 26.2  18
1926 25.8 26.4 26.5 26.8 27.0 26.2 25.8 25.8 26.1 26.3 26.4 26.2 26.3  17
1927 26.1 26.3 26.8 26.9 26.6 26.0 25.4 25.4 25.6 25.7 26.1 26.2 26.1  18
1928 26.5 26.4 26.9 27.0 26.8 25.8 25.4 26.0 26.1 26.1 26.3 26.5 26.3  17
1929 25.6 26.2 26.2 26.6 26.4 25.8 25.1 25.1 25.4 25.7 25.9 25.9 25.8  19
1930 26.3 26.2 26.8 26.8 26.6 25.9 25.7 25.6 25.7 26.1 26.3 26.2 26.2  24
1931 26.4 26.6 26.9 27.1 27.0 26.4 25.8 25.9 26.4 25.9 26.2 25.9 26.4  26
1932 25.9 26.3 26.5 26.8 26.7 26.1 26.0 25.9 25.9 26.2 26.2 26.3 26.2  29
1933 26.4 26.5 26.7 26.9 27.0 26.3 25.7 25.9 26.0 26.1 26.1 26.0 26.3  30
1934 26.1 26.1 26.4 26.9 26.8 26.7 26.0 25.8 25.9 26.2 26.3 25.9 26.3  31
1935 26.2 26.6 26.9 26.8 26.7 26.4 26.0 25.8 26.1 26.5 26.5 26.2 26.4  32
1936 26.3 26.8 26.9 27.0 26.7 26.1 25.8 25.6 25.9 26.0 26.2 26.5 26.3  32
1937 26.6 26.8 27.0 27.0 26.6 26.4 26.0 26.0 26.1 26.3 26.6 26.5 26.5  34
1938 26.6 26.7 26.9 27.1 26.7 26.4 26.3 26.1 26.0 26.2 26.1 26.2 26.4  36
1939 26.2 26.4 26.9 26.7 26.5 25.9 25.7 25.7 25.4 25.7 26.4 26.0 26.1  37
1940 26.0 26.3 26.7 26.9 26.7 25.9 25.7 25.2 25.5 25.9 26.2 26.1 26.1  40
1941 26.4 26.8 26.9 26.7 26.4 25.6 25.2 25.2 25.4 25.7 26.3 26.4 26.1  36
1942 26.8 26.9 27.0 26.4 25.7 25.1 24.2 24.4 24.7 25.5 25.7 26.1 25.7  22
1943 26.4 26.6 26.4 26.0 25.3 24.4 24.0 24.5 24.6 25.4 26.0 26.1 25.5  22
1944 26.3 26.3 26.5 26.0 25.4 24.5 23.7 23.7 24.4 24.7 25.3 26.0 25.2  20
1945 26.5 26.6 26.6 26.1 25.0 24.4 24.3 24.4 24.6 25.0 25.6 26.2 25.4  22
1946 26.8 26.7 26.8 26.6 25.6 25.2 24.9 25.1 24.8 25.5 26.0 26.5 25.9  31
1947 27.0 26.8 27.0 27.1 26.6 26.2 25.6 25.6 25.7 25.9 25.9 26.3 26.3  37
1948 26.3 26.4 26.8 26.8 26.4 25.9 25.5 25.3 25.5 25.9 25.9 26.1 26.1  44
1949 26.0 26.4 26.6 26.7 26.3 26.1 25.7 25.4 25.8 26.1 25.9 26.1 26.1  62
1950 26.2 26.3 26.5 26.6 26.3 26.2 25.5 25.6 25.6 26.0 26.2 26.1 26.1  70
1951 26.0 26.2 26.4 26.9 26.4 26.2 25.8 26.0 26.1 26.4 26.6 26.4 26.3 127
1952 26.4 26.5 26.7 26.9 26.9 26.5 25.9 25.7 25.9 26.3 26.4 26.1 26.3 131
1953 26.0 26.3 26.7 26.9 26.6 26.1 25.7 25.7 25.9 26.4 26.5 26.2 26.2 134
1954 26.3 26.3 26.5 26.8 26.6 26.2 25.8 25.7 25.8 25.9 25.8 25.8 26.1 136
1955 25.8 26.1 26.3 26.5 26.6 25.9 25.5 25.6 25.9 26.0 25.8 25.7 26.0 141
1956 25.7 26.1 26.4 26.5 26.4 26.0 25.6 25.6 25.7 26.1 26.1 25.9 26.0 149
1957 26.0 26.1 26.4 26.8 26.7 26.1 25.7 25.8 25.9 26.1 26.2 26.3 26.2 155
1958 26.3 26.4 26.7 26.9 26.8 26.4 25.8 25.8 26.0 26.2 26.0 26.0 26.3 157
1959 26.0 26.3 26.5 26.6 26.5 26.4 25.8 25.5 25.8 26.1 26.2 26.3 26.2 159
1960 26.1 26.1 26.5 26.8 26.7 26.2 25.8 26.0 26.1 26.2 26.3 26.2 26.2 191
1961 25.9 26.5 26.7 26.9 26.7 26.0 25.7 25.6 25.9 26.1 26.4 26.3 26.2 195
1962 26.1 26.0 26.5 26.7 26.8 26.3 25.9 25.7 26.0 26.5 26.4 26.1 26.2 202
1963 25.6 25.9 26.3 26.7 26.8 26.3 25.8 25.8 26.1 26.2 26.6 26.4 26.2 204
1964 26.7 26.5 26.6 26.9 26.7 26.3 25.9 25.9 26.1 26.2 26.1 25.9 26.3 206
1965 25.7 26.1 26.2 26.5 26.5 26.0 25.5 25.6 26.0 26.3 26.5 26.4 26.1 205
1966 26.2 26.4 26.7 27.0 26.6 26.1 25.9 26.0 26.2 26.3 26.4 26.2 26.3 210
1967 26.0 26.1 26.4 26.6 26.7 26.2 25.8 25.8 26.1 26.3 26.3 26.0 26.2 209
1968 26.0 26.0 26.6 26.6 26.7 26.4 26.0 25.9 26.2 26.3 26.3 26.3 26.3 202
1969 26.4 26.4 27.0 27.1 27.1 26.6 25.9 25.9 26.1 26.4 26.5 26.5 26.5 207
1970 26.5 26.6 27.0 27.1 27.0 26.5 25.9 25.9 26.2 26.4 26.4 26.3 26.5 203
1971 25.8 26.0 26.1 26.5 26.4 25.9 25.6 25.7 26.0 26.1 26.0 26.0 26.0 197
1972 25.8 26.3 26.3 26.6 26.6 26.1 25.8 25.7 25.9 26.3 26.5 26.5 26.2 201
1973 26.5 26.7 26.9 27.1 26.9 26.6 26.1 26.0 26.1 26.2 26.4 26.1 26.5 197
1974 25.8 26.0 26.3 26.6 26.5 26.1 25.8 25.8 26.1 26.2 26.3 26.0 26.1 197
1975 26.1 26.2 26.5 26.8 26.6 26.1 25.7 25.8 26.0 26.1 26.1 25.9 26.2 196
1976 25.9 26.0 26.4 26.3 26.2 25.6 25.2 25.2 25.3 26.1 26.3 26.2 25.9 141
1977 26.3 26.4 26.6 26.7 26.4 25.9 25.5 25.3 25.5 26.0 26.3 26.3 26.1 144
1978 26.3 26.4 26.8 26.7 26.7 25.9 25.6 25.4 25.6 25.9 26.1 26.2 26.1 147
1979 26.3 26.6 26.8 26.7 26.5 26.2 25.6 25.5 26.0 26.2 26.3 26.2 26.2 147
1980 26.5 26.7 26.8 26.8 26.7 26.1 25.7 25.6 25.8 26.2 26.3 26.3 26.3 146
1981 26.3 26.5 26.8 26.8 26.4 26.0 25.6 25.4 25.9 26.2 26.3 26.4 26.2 141
1982 26.5 26.5 26.7 26.6 26.3 26.0 25.3 25.3 25.4 25.9 26.3 26.4 26.1 110
1983 26.5 26.8 27.1 27.1 26.7 26.1 25.7 25.7 26.0 26.3 26.2 26.3 26.4 112
1984 26.2 26.5 26.7 26.9 26.6 26.0 25.6 25.6 25.8 26.1 26.4 26.3 26.2 113
1985 26.3 26.9 27.0 26.9 26.6 25.9 25.3 25.6 25.8 26.2 26.4 26.6 26.3 102
1986 26.5 26.5 26.7 26.8 26.6 26.0 25.7 25.6 25.8 26.2 26.5 26.3 26.3  78
1987 26.1 26.6 26.8 27.0 26.6 26.1 25.3 25.3 25.8 26.3 26.6 26.3 26.2  94
1988 26.8 26.9 27.2 27.1 26.9 26.5 25.7 25.8 26.0 26.2 26.1 25.9 26.4  92
1989 26.3 26.2 26.4 26.8 26.5 25.9 25.7 25.8 25.9 26.2 26.1 26.0 26.1  96
1990 26.4 26.8 26.9 27.0 26.7 26.1 25.8 25.6 25.5 25.9 26.3 26.4 26.3  95
1991 26.5 26.6 26.7 26.9 26.5 25.7 25.8 25.3 25.5 25.9 26.0 26.1 26.1 102
1992 26.2 26.4 26.8 26.8 26.7 26.2 25.8 25.9 26.0 25.9 26.0 26.2 26.2  74
1993 26.3 26.1 26.6 26.8 26.7 25.9 26.0 25.6 25.7 25.9 26.4 26.5 26.2  71
1994 26.5 26.7 27.1 26.9 26.6 25.9 25.5 25.6 25.5 25.7 25.9 26.1 26.2  71
1995 26.3 26.3 26.7 26.9 26.8 26.6 26.2 26.2 25.8 26.1 26.2 26.0 26.3  76
1996 26.1 26.2 27.0 26.8 26.3 25.8 26.0 25.7 25.8 26.0 26.3 26.2 26.2  81
1997 25.8 26.4 26.7 27.1 26.9 26.3 25.8 25.8 26.0 26.2 26.5 26.8 26.4  81
1998 26.9 27.2 27.5 27.6 27.6 26.8 26.2 26.3 26.1 26.9 26.6 26.6 26.9  71
1999 26.5 26.5 26.9 26.9 26.8 26.2 26.0 25.6 25.7 26.0 26.1 26.1 26.3  71
2000 26.3 26.8 26.8 27.0 26.6 26.2 25.3 25.5 26.3 26.6 26.4 26.3 26.3  75
2001 26.7 26.6 27.1 27.2 26.7 26.3 26.0 26.1 26.4 26.3 26.2 26.2 26.5  72
2002 26.5 26.6 27.2 27.1 26.9 26.6 26.5 26.2 26.1 26.4 26.5 26.7 26.6  71
2003 26.4 26.9 27.0 27.4 27.2 26.1 26.1 25.9 26.0 26.2 26.4 26.6 26.5  72
2004 27.0 26.9 27.0 27.4 27.0 26.5 25.9 26.1 25.9 26.2 26.5 26.6 26.6  78
2005 27.0 27.0 27.3 27.1 26.9 26.8 26.1 26.0 26.2 26.5 26.8 26.4 26.7  73
2006 26.5 26.7 27.2 27.2 26.8 26.5 26.2 25.9 25.9 26.1 26.7 26.8 26.5  75
2007 26.6 26.8 27.0 27.1 27.2 26.7 26.3 26.1 26.2 26.7 26.4 26.4 26.6  78
2008 26.3 26.2 26.5 26.8 26.5 26.4 25.7 25.8 26.3 26.5 26.8 26.6 26.4  86
26.2 26.4 26.6 26.8 26.7 26.2 25.7 25.7 25.9 26.2 26.3 26.2 26.2
26.2 26.3 26.7 26.9 26.7 26.2 25.9 25.9 26.0 26.2 26.3 26.2

For Country Codes 502 503 504 505 506 508 509 51 52 53 54
[chiefio@tubularbells analysis]$```

If the Pacific Basin is not significantly warming from AGW, there is nothing “global” about whatever change is happening. And if the warming isn’t in this base data, it can only be an artifact of the processing done to the data.

### Is There A Way To See The Trend Buried In The Instrument Change?

Marshall Islands Airport - Where is the Thermometer?

So here we have the Marshall Islands Airport. Think there might be a bit of warming from all that “not a palm tree” construction? For what it’s worth, here is a view of the buildings at Majuro, including the Stevenson Screen (on the far left; click on the link below the image to enlarge it, then zoom in to that building on the left). It’s the white box close to the dark exterior wall. So we have the thermometer next to the building, but with a large expanse of runway nearby. Depending on which way the wind blows, you get exactly what?

Majuro site with Stevenson Screen on the left near a building / wall

Original full sized image.

There are several ways to take some of the instrument bias out of the data, but not all of it. Each comes with different “issues”. One of which I am particularly fond is called First Difference. It takes a thermometer’s first value as a starting point, then measures each year-to-year change as an offset, a “delta”, from the prior value. When data are missing, it resets to a new “starting point” when the data return. In this way it automatically compensates for things like instrument changes. But it does not do well on data with lots of “holes” in it (taking a lot of gratuitous ‘resets’ and losing some trend information). It is also rather sensitive to the starting value.
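For concreteness, here is a minimal First Difference sketch as I have just described it (my own Python rendition for illustration, not anyone’s official code):

```python
# First Difference: accumulate year-to-year deltas; a gap (None)
# resets the accumulation, so the next valid value is a new start.
def first_difference(series):
    out = []
    total = 0.0
    prev = None
    for value in series:
        if value is None:
            prev = None          # gap: reset; next value is a new start
            out.append(None)
            continue
        if prev is not None:
            total += value - prev  # accumulate the year-to-year delta
        prev = value
        out.append(total)
    return out

# A 2 C instrument step hides in the gap between 11.9 and 14.0 ...
raw = [12.0, 12.1, 11.9, None, 14.0, 14.1, 13.9]
print(first_difference(raw))
```

Note how the 2 C step across the gap never enters the accumulated deltas: the reset discards it. That is exactly the compensation for instrument change, and exactly the trend loss, described above.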

I’ve made a variation on FD that I’ve taken to calling dT/dt, and then a variation on dT/dt that starts in the present (so that ‘start of time’ bias is removed and we are using our most recent records to set the starting point) and runs backwards in time. By definition, the present will be ‘near zero’ and the past will be seen as a rise or fall compared to us, now. Further, I don’t take a ‘reset’ on a data gap. Since the actual records are just chock full of dropouts, this makes dT/dt less sensitive to such dropouts (it preserves trend information better), but at the cost of also preserving such things as errors from instrument change bias and UHI effect. But at this stage, I’d rather see those things in the product and ask if they account for what is seen.
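A sketch of how I understand that backwards-running dT/dt variant would look (again my own hedged Python illustration, not the actual analysis program):

```python
# dT/dt variant: walk from the present backwards, anchor "now" near
# zero, and bridge gaps rather than resetting on them.
def dt_dt(series):
    out = [None] * len(series)
    total = 0.0
    prev = None
    for i in range(len(series) - 1, -1, -1):  # newest to oldest
        value = series[i]
        if value is None:
            continue                           # no reset on a gap
        if prev is not None:
            total += value - prev              # delta vs. the more recent year
        prev = value
        out[i] = total
    return out

raw = [12.0, None, 12.5, 13.0, 13.5]           # oldest ... newest
print(dt_dt(raw))  # [-1.5, None, -1.0, -0.5, 0.0]
```

The present anchors at zero, the past shows as an offset relative to now, and the delta across the gap (12.0 vs. 12.5) is kept rather than thrown away, so trend survives the dropout; so would any instrument-change step, which is the trade-off noted above.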

So using this tool, we can look at the New Zealand data and see if we can tease out some more valid information about what is actually happening. This is, substantially, the same as the Hypothetical Cow approach that others are taking. It is using an anomaly process (though most likely different from theirs) to see if we can find a hidden trend in this apparently flat basic data. And we do. (This is a good thing. There ought to be some UHI in the cities of New Zealand and since substantially all the thermometers are at airports, we ought to be seeing some impact from all the tarmac and lack of vegetation – even if there are not a lot of takeoffs and landings, the land change of tarmac taxi ways, runways, and parking area along with the removal of the vegetation ought to show some warming). So in this graph, we see about a 1.5 C max rise over the aviation age with a large bolus in 1990 or so when the ASOS type equipment is rolled out. ( I’ve run it on both the biased and the unbiased sets and the results are very similar. Not quite identical, so some of the station change bias can still leak through an anomaly process, but close enough). I don’t know exactly what equipment New Zealand uses, but the mod flags show a change. Notice also that the ‘take off’ point is at about the -1 C line near 1950. Flat before that, then warming 1 C as thermometers move from grass fields to airport tarmac. In sync with airport growth, not in sync with the industrial revolution nor all the fuel burned in WWII.

New Zealand, all data, dT

Also notice that 1934 is a cold time in New Zealand, while it was a warm one in the USA. This speaks to a probable oscillation of some sort between the hemispheres. Since we have really bad spatial coverage of the southern hemisphere (in the early years nearly none), such an oscillation could easily show up as a bias in the data we do have, especially if the temperature is treated as a single global thing instead of looking for patterns by continent, by country, or even by smaller divisions.

OK, so this tool can take basic data, with or without bias, even apparently flat data, and find an underlying trend (one of about the magnitude we would expect for this location, biased as it is by Airport Heat Islands and the conversion of instruments to the electronic type, where the ASOS has been shown to have a bias).

At this point, I’m fairly comfortable with using this tool on a larger set. What does it say about the whole Pacific basin, even WITH Australia and New Zealand included? What can it do to handle that WWII dropout, for example? Note that time runs from recent on the left to long ago on the right.

Pacific Basin Delta T, All Data

Not bad, if I do say so myself. We are able to find the AHI effect we would expect from all those Airports In The Sun as the Jet Age started planting loads of runways all over the Pacific, and we see relative stability during the prior ‘grass field’ era (while the Industrial Revolution was pumping out loads of CO2). Also interesting is that “rise” at the start of time. Looks like it was warm just before GIStemp starts its baseline… Also of interest is that 1915 spike to about where we are now. Nothing new under the sun, it would seem, other than some tarmac.

### What About A Bigger Place? The USA

Using the same tool on the USA gives a graph that matches what we know of history:

USA All Data Delta T

Here we see the warm 1930s followed by the “swoon” into the GIStemp baseline of 1951-1980. More interesting in many ways is further back in time. We have a fairly cold mid 1800s, but a much warmer period early on in the 1700s (though always cycling about the zero line). And we can see how ‘cutting off time’ in 1880, as GIStemp does, imparts its own “sample bias”. Toward the far past we are getting down to very few thermometers and the volatility starts to rise (less dampened by averaging).

Now we could argue about the “sparsity” of thermometers in the 1700s, but then we would have to address the sparsity today… And we could talk about the instrument error in the 1700s, but then we would need to address all the siting issues found by SurfaceStations.org along with the ASOS bias found in those instruments used at those modern airport locations. And I’d be just fine with that; because, IMHO, it’s all about “Instrument Change” in our little global calorimetry experiment, and not about CO2.

And we could even talk about Hypothetical Cows and my own little experiment in an anomaly process based program. But then we would also need to ask why GIStemp and the other temperature programs fail to find that ‘flat rolling’ pattern but instead find a rising hot pattern for the USA. And I’d be OK with that too… since it was, and is, my original goal.

### In Conclusion

So what’s the point of all this?

I said the data had biases introduced into it from thermometer drops, and it does.

I’ve said that bias is toward warming, and it is (with lots of links provided to reports demonstrating that).

I’ve said that when you look at samples with that bias removed, we find the ‘CO2 warming signal’ leaves with it, and demonstrated that with the entire Pacific Region where warming matches the transition from sea planes and grass fields to tarmac and jet ports, but is flat during the industrial revolution.

I’ve said that Hypothetical Cows tell you nothing about real world processes in other codes, and shown that that is true.

Further, I have my own Hypothetical Cow that shows that They have “over averaged” and hidden the interesting bits in the regions; and I’ve shown that CO2 is not needed to explain where we are today.

And at the end of it all, They have a nice Hypothetical Cow, but it has nothing to do with me, my work, nor even very much to do with the real world processing done by the actual temperature series used for policy decisions.

Hypothetical Cows are just not very important. And that is why I generally avoid wasting time admiring them. Besides, I like a Porterhouse steak, not round processed patties of mystery meat …

This entry was posted in AGW Science and Background, NCDC - GHCN Issues.

### 44 Responses to Assume A Spherical Cow – therefore all steaks are round

1. Baa Humbug says:

Excellent work. well worth saving for future reference.

p.s. I also like my steak and mince my own meat. You never know what others do when they mince meat (data)

2. Margaret says:

Hi

You may not be interested but in case you are and it helps you the raw Campbell Island temperatures are available at NIWAs website — you have to register to get it but it is free.

There are two stations – with a four year overlap.
Station 1

Agentno NetworkNo startdate enddate %complete name lat long
6172 K94400 01-Jul-1941 31-Aug-1995 100 Campbell Is -52.55 169.15

Station Year Stats
Code Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec Annual
6172 1941 02 – – – – – – 4.6 4.6 5.6 5.8 6.5 8.1 –
6172 1942 02 9.0 9.4 8.1 – 5.6 6.6 4.6 4.5 6.3 6.4 6.6 7.7 –
6172 1943 02 9.3 – 8.1 8.2 6.1 5.1 5.5 4.6 5.7 6.6 7.4 8.4 –
6172 1944 02 9.3 9.3 9.8 8.0 6.4 3.6 5.1 4.3 4.9 5.6 6.8 9.0 6.8
6172 1945 02 10.5 11.5 8.9 6.6 5.6 4.4 4.4 4.9 5.2 5.2 7.1 8.0 6.9
6172 1946 02 8.9 9.3 9.1 6.7 6.4 4.8 4.4 4.6 4.9 5.8 5.9 7.3 6.5
6172 1947 02 8.7 9.4 9.5 6.9 6.0 4.0 4.4 4.9 4.7 5.6 7.6 8.5 6.7
6172 1948 02 9.2 8.6 8.5 6.8 6.5 4.8 4.6 5.1 5.8 6.6 7.2 8.5 6.9
6172 1949 02 8.8 9.1 7.6 6.9 6.9 4.0 4.0 4.2 5.0 6.3 7.3 8.9 6.6
6172 1950 02 10.3 8.2 7.1 6.2 6.2 3.0 4.6 3.9 5.5 6.6 7.0 7.5 6.3
6172 1951 02 9.2 9.0 9.0 8.3 6.6 4.9 5.3 5.2 4.9 5.6 8.0 8.4 7.0
6172 1952 02 8.6 9.3 7.3 7.2 4.6 4.0 4.9 5.6 6.6 5.9 6.9 9.5 6.7
6172 1953 02 9.6 8.6 8.0 6.1 5.5 4.7 4.8 5.3 4.8 5.8 6.8 8.6 6.5
6172 1954 02 8.4 9.4 9.9 7.1 6.9 4.4 2.6 4.8 5.1 5.8 7.3 8.2 6.6
6172 1955 02 9.4 8.7 7.9 6.6 6.5 3.4 4.8 4.7 5.3 7.3 7.8 8.5 6.8
6172 1956 02 10.6 8.2 7.6 7.7 5.9 5.5 3.9 5.0 5.6 6.6 7.5 10.0 7.0
6172 1957 02 10.7 10.1 9.5 8.5 6.0 4.4 4.7 5.5 6.0 6.1 7.2 8.3 7.2
6172 1958 02 9.0 9.5 8.4 6.2 5.8 4.9 4.1 5.3 6.0 7.2 7.7 8.7 6.9
6172 1959 02 9.0 9.0 8.3 7.8 5.0 5.1 4.1 5.5 5.6 5.8 6.7 8.4 6.7
6172 1960 02 8.6 9.6 8.6 7.9 6.7 5.4 5.0 5.1 5.7 7.2 7.9 7.9 7.1
6172 1961 02 8.8 8.0 7.5 6.9 5.6 4.8 3.9 5.5 4.9 7.0 7.1 9.3 6.6
6172 1962 02 9.1 9.5 9.3 7.1 6.9 6.0 5.3 5.2 6.0 6.7 8.0 9.5 7.4
6172 1963 02 9.5 10.0 8.3 7.0 5.3 4.6 4.4 3.9 5.4 5.9 5.6 7.0 6.4
6172 1964 02 8.8 9.1 7.9 6.6 5.6 4.7 4.9 3.9 5.6 6.4 6.6 9.4 6.6
6172 1965 02 10.3 8.4 9.5 6.8 5.7 5.0 4.5 4.7 5.7 4.9 6.6 7.3 6.6
6172 1966 02 8.2 9.1 8.4 7.4 5.9 4.7 4.4 5.5 5.7 5.7 7.2 7.8 6.7
6172 1967 02 9.6 8.3 8.9 6.2 4.8 5.3 4.6 5.5 5.0 6.8 6.0 8.5 6.6
6172 1968 02 8.8 9.2 8.7 8.3 7.4 5.4 5.1 6.3 5.1 6.0 7.5 9.2 7.2
6172 1969 02 10.3 9.2 7.5 6.6 5.7 4.1 5.6 5.6 6.4 4.9 7.8 9.5 6.9
6172 1970 02 8.4 9.2 8.6 6.8 6.1 5.7 5.3 5.7 4.3 6.0 7.7 7.7 6.8
6172 1971 02 9.5 9.7 9.1 8.0 7.1 4.9 5.9 5.8 4.8 6.5 7.5 9.1 7.3
6172 1972 02 9.4 9.2 8.6 6.3 4.8 3.0 4.1 4.2 5.0 5.3 6.4 7.8 6.2
6172 1973 02 9.3 9.0 9.2 6.4 4.5 5.2 5.6 5.0 5.9 5.6 6.4 8.8 6.7
6172 1974 02 9.2 9.7 8.4 7.8 5.8 4.3 3.7 4.9 6.5 5.9 7.9 10.2 7.0
6172 1975 02 11.2 10.2 9.4 7.8 5.6 4.7 4.7 4.2 5.5 5.7 6.8 7.0 6.9
6172 1976 02 8.5 8.1 8.5 7.5 5.9 3.9 4.1 4.4 5.8 5.7 6.9 9.8 6.6
6172 1977 02 9.3 9.5 8.7 7.1 5.2 4.2 4.3 6.1 5.6 5.8 7.6 9.4 6.9
6172 1978 02 9.9 10.1 8.2 8.2 6.8 4.4 5.7 6.0 6.2 6.6 7.4 8.9 7.4
6172 1979 02 8.8 8.7 8.6 8.2 5.9 6.2 6.0 5.2 5.3 6.6 7.9 9.4 7.2
6172 1980 02 9.9 10.5 9.6 8.2 6.2 5.4 4.8 5.5 6.0 6.7 6.1 8.5 7.3
6172 1981 02 8.9 8.8 9.3 8.4 6.8 6.1 5.7 5.0 5.2 6.7 7.4 9.5 7.3
6172 1982 02 9.0 9.4 9.5 8.1 7.2 5.9 5.1 5.3 6.0 5.9 5.9 7.6 7.1
6172 1983 02 9.5 9.0 7.2 6.8 4.8 4.8 5.2 5.7 4.8 6.8 8.0 8.3 6.7
6172 1984 02 9.5 10.4 9.9 7.4 5.9 5.9 5.0 5.5 6.6 6.7 7.4 9.7 7.5
6172 1985 02 10.7 10.6 8.9 7.7 6.3 7.1 5.4 5.8 6.3 6.8 7.4 8.7 7.7
6172 1986 02 10.3 10.7 9.6 8.5 5.9 5.5 4.3 4.6 5.1 7.1 7.9 9.6 7.4
6172 1987 02 11.2 9.8 9.3 7.3 6.7 4.5 4.7 5.3 4.9 6.3 7.9 8.7 7.2
6172 1988 02 9.0 9.4 7.4 6.9 4.4 4.8 4.7 4.9 5.7 6.5 6.9 9.5 6.7
6172 1989 02 10.6 9.5 8.5 7.6 6.9 5.5 5.0 6.7 7.1 7.4 7.4 8.5 7.6
6172 1990 02 9.5 9.9 8.7 6.5 5.4 5.5 5.4 5.8 5.6 6.1 8.2 8.0 7.0
6172 1991 02 9.5 8.8 9.6 7.3 5.8 4.1 4.9 5.5 5.7 6.1 7.5 8.8 7.0
6172 1992 02 10.0 8.7 7.3 7.2 4.9 4.6 4.7 3.1 5.0 6.6 8.3 8.5 6.6
6172 1993 02 9.3 10.0 8.7 7.4 7.0 4.9 6.0 5.3 5.2 6.6 7.6 9.0 7.3
6172 1994 02 10.0 9.9 8.6 7.3 6.4 4.2 4.7 5.2 4.1 5.6 6.7 8.0 6.7
6172 1995 02 9.0 9.1 8.2 8.6 7.1 2.9 2.6 4.8 – – – – –

and Station 2
6174 K94402 01-Dec-1991 28-Feb-2010 100 Campbell Island Aws -52.55 169.15

Station Year Stats
Code Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec Annual
6174 1991 02 – – – – – – – – – – – 8.8 –
6174 1992 02 10.0 8.8 7.4 7.3 5.0 4.6 4.7 3.2 5.1 6.6 8.3 8.6 6.6
6174 1993 02 9.4 10.0 8.8 7.6 7.0 5.2 6.0 5.3 5.4 6.6 7.6 8.7 7.3
6174 1994 02 9.9 9.6 8.7 7.2 6.2 4.2 4.7 5.5 4.3 5.7 6.6 8.0 6.7
6174 1995 02 8.9 9.0 8.0 8.6 7.1 3.0 2.6 4.9 5.7 5.5 6.7 8.5 6.6
6174 1996 02 9.5 9.5 8.9 8.1 6.2 4.5 4.3 4.9 6.9 6.5 6.9 9.3 7.1
6174 1997 02 9.7 9.7 8.4 6.9 6.4 5.2 5.9 5.3 4.6 5.6 5.9 7.6 6.8
6174 1998 02 8.8 9.6 8.1 7.2 6.0 5.5 5.0 4.3 5.9 5.8 7.2 9.1 6.9
6174 1999 02 9.7 11.1 8.9 7.6 7.1 5.4 4.9 5.3 6.6 6.7 7.4 8.0 7.4
6174 2000 02 9.5 8.8 8.5 7.1 7.2 5.6 6.4 5.0 6.1 7.3 6.5 9.2 7.3
6174 2001 02 9.0 8.7 9.0 8.1 5.2 5.7 5.2 5.8 6.3 7.1 7.6 9.6 7.3
6174 2002 02 10.1 10.3 7.9 7.6 5.1 4.7 4.8 5.2 6.1 6.7 7.6 9.2 7.1
6174 2003 02 9.6 8.8 9.2 7.6 5.9 5.2 4.7 5.9 5.6 6.3 6.8 8.0 7.0
6174 2004 02 10.0 8.8 8.5 7.3 6.6 5.7 5.8 4.5 5.7 5.8 7.5 7.6 7.0
6174 2005 02 9.3 10.3 8.6 7.4 5.8 5.1 5.8 6.5 5.7 6.8 7.5 10.9 7.5
6174 2006 02 8.9 9.6 7.8 8.7 6.8 5.2 4.8 5.0 5.6 5.5 6.1 7.8 6.8
6174 2007 02 9.1 8.9 8.7 6.4 6.5 3.7 3.5 5.3 6.2 5.9 6.9 9.2 6.7
6174 2008 02 9.9 9.9 8.8 7.6 6.3 5.4 4.9 4.5 6.3 5.6 6.4 8.4 7.0
6174 2009 02 9.9 7.8 8.9 8.7 5.7 4.6 3.9 6.5 6.4 5.6 5.9 8.1 6.8
6174 2010 02 9.6 9.4 – – – – – – – – – – –

It would be interesting to know what happens if you put it back in.

As a complete aside years and years ago I was asked if I wanted to go down to Campbell Is for a year to (among other things) look after the weather station. It was manned from the Ministry of Transport (which I worked for). I didn’t go — having just recently got married — but rather regret not doing it!

3. Margaret says:

Sorry I should have given you the web address as well:
http://cliflo.niwa.co.nz/

4. Layne Blanchard says:

5. charlie says:

In an effort to arrive at an estimate of global warming they’ve “over-averaged and hidden the interesting bits”?

The interesting bits, being, say, a snow storm in Minnesota, or a hot day in Sri Lanka?

Consider the following statement: “The NBA statistical department, in their records of Michael Jordan’s career points-per-game average, has over-averaged. They left out interesting bits, like when he scored 63 against the Celtics in 1986.”

It’s nonsense.

REPLY: [ Nope. You’re just using the wrong metaphor. In, for example, stock trading, averages are widely used to make some things more visible while hiding others. So you average the daily values to get a 50 day moving average (that shows trends better but has a time lag and HIDES the daily volatility). You can do this for baskets of stocks (say, by sector) and that lets you see what sectors are rising compared to those that are falling and HIDES the individual stock movements. Now say you averaged it all together. You get things like the Dow Jones Industrial Average or the S&P 500 Average or even the Russell 2000 average. But now you can not see sector movements any more as it HIDES the sector changes.

So novices and newbies will dote over the major averages, and the pros look at the sector averages, the single stock moving averages, and individual stock prices. Because that is where the more valuable information is to be found. The major indexes can “go nowhere” for months or years on end (notice now, for example) and yet you can make a lot of money out of playing sectors against each other (which is what the entire Hedge Fund industry does. The basis for all “pairs trades” is this A vs B divergence, either at the sector or the individual issue level.)

So no, it is not nonsense. It is a correct understanding of the purpose, value, and proper use of averages. To selectively HIDE things that are in the way of seeing the information in the data that is ‘the interesting bit’.
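To make that concrete, here is a toy Python sketch (hypothetical prices of my own invention, not market data and not anything from my trading tools) showing how a 50 day moving average keeps the slow trend visible while it HIDES the daily volatility:

```python
import random

# Made-up "prices": a slow uptrend buried under large daily noise
random.seed(1)
prices = [100 + 0.1 * d + random.uniform(-3, 3) for d in range(200)]

def moving_average(series, window):
    # Trailing moving average: defined from day (window - 1) onward
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

ma50 = moving_average(prices, 50)

# Size of the day-to-day moves in the raw series vs the smoothed one
daily_moves = [abs(prices[i] - prices[i - 1]) for i in range(1, len(prices))]
ma_moves = [abs(ma50[i] - ma50[i - 1]) for i in range(1, len(ma50))]

# The 50 day average still shows the uptrend, but its day-to-day wiggle
# is tiny: the averaging HID the daily volatility.
print(max(daily_moves), max(ma_moves))
```

The point is not the particular numbers; it is that the same averaging step that makes one thing visible makes another thing invisible.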

Now for climate stuff, the really big elephant in the room is that we have darned near no data for the Southern Hemisphere for most of history. That’s why that N.Z. low in the ’30s interests me so much. In the USA we all “know” the ’30s were hot. Not so in N.Z. But if you just average it all together (with or without anomalies) you could easily conclude that “The ’30s were hot”, hiding that there might well be a N / S hemispheric oscillation and you simply were not measuring half of it. Even with Grids / Boxes and ‘area adjustments’ you still have issues. What do you do with all the EMPTY boxes in the Southern Oceans? GIStemp just ignores them and leaves them blank, though it does sometimes try to fill them in from 1200 km away with somewhat fantasy numbers. The Hypothetical Cow folks also leave them blank. But we have known 60 year period weather patterns (and perhaps some 200 and even 500 year patterns). So we could very easily be averaging away the visibility into one of those “sector” movements. And making grand pronouncements about things based on nothing but “what we don’t know” being hidden in the average.

So it’s more like saying your team is going to win the next tournament, because the averages are so good, and not taking account of the fact that your Michael Jordan just retired… -E.M. Smith ]

6. David says:

“I’ve said the data had biases introduced into it from thermometer drops, and it does.

I’ve said that bias is toward warming, and it is (with lots of links provided to reports demonstrating that).

I’ve said that when you look at samples with that bias removed, we find the ‘CO2 warming signal’ leaves with it, and demonstrated that with the entire Pacific Region where warming matches the transition from sea planes and grass fields to tarmac and jet ports, but is flat during the industrial revolution.”

Is it fair to say that, if the GIStemp anomalies method does not pick this up (the bias facts in the summary), then it is possible that how the anomalies are applied by GIStemp to dropped stations and added-in stations over various time periods still presents a warming bias?

You stated …” I care about what the data say, not the hypothetical anomaly they would like to imagine is used (but is not).”

I assume by “they” you mean Tamino, but can you further explain this? I thought GIStemp did use anomalies, within specific grids, and somehow they applied the anomalies to other stations, changing the used stations and the stations the anomaly bled to over time.

I guess I am asking, do you have the part of GIStemp code that shows how they calculate the anomalies, and if so what is your time line for seeing how this affects the various bias you have uncovered within the station changes?

Thank you for all your hard work in this area. Are you funded by big oil? :-)

Best wishes.

REPLY: [ The first question you ask is basically the working premise of my investigation going forward. We have the potential bias in the structure of the data from “basket A” having more cold stations and “basket B” having more warm ones. GIStemp has code that tries to remove that bias. The question (still to be answered) is “How well does it do?”. I’d have answered it already, but GIStemp is very “brittle” to station change. ( I’ve tried deleting some Bolivia stations, it crashes. I’ve tried deleting some USA stations, it crashes. etc. It uses exact file matching at various points and you must find all the odd places it assumes certain records will exist…)

So my effort is, substantially, to ‘build a baseline measure of data biases’ (all the “by altitude”, “by latitude”, etc. reports), then ‘build a tool to look at the data with a simple and direct ‘de-biasing’ (the dT/dt code – I don’t expect it to be perfect, just consistent and close to right), compare the two for hints and insights, then run GIStemp and compare it to the three, so you can see what it does. FWIW, I expect it to do modestly well. As I’ve said, it has code that TRIES to remove the ‘issues’ in the data. But it only has to be “wrong” by 1/2 C for the “warmers” to be crying “Global Warming!” over nothing, and I’ve already measured it as adding about that much before it gets to the Grid / Box anomaly step…. So even if STEP3 is “perfect” but lets that 1/2 C through, the overall product is ‘warming’… STEP3 has to remove MORE than the total bias, and I’ve benchmarked it as removing less… Yeah, lots of work left to do to work out the details. And yeah, it’s a PITA and slow (as I insist on using the Real Code and not a hypothetical, and I’m doing this a few hours a week).

OK, GIStemp uses a few bits of code (it’s up under the “GIStemp technical and source code” category on the right side of the page) in STEP3 to do the Grid / Box anomaly step. That is in STEP3 (and STEP4_5 if you do the optional ‘add in Hadley SST anomaly map’ step). The hypothetical method is roughly “Compute anomaly FIRST and ‘self to self’, then use it” in the typical version, though they have variations. GIStemp, as I’ve mentioned, computes a “Basket A” and a “Basket B” then compares them. Stations are added to the basket by having their mean value computed, then the data adjusted so the means don’t change as they are combined. This OUGHT to take out some / most of the bias. But the question is “How much?”. There are Hypothetical Cow arguments saying it must be perfect. But actual code is not perfect. It never is.
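As a rough sketch of that “adjust so the means don’t change” combining idea (a simplification of mine in Python, NOT the actual GIStemp STEP code, with made-up station values): station B is shifted by the difference of the two means over the overlap years before the series are averaged together, so the splice itself does not move the combined record.

```python
def combine(a, b):
    # a, b: dicts of year -> temperature. Shift b by the difference of
    # the two means over the overlap years, so combining does not jump
    # the mean, then average whatever values are present for each year.
    overlap = sorted(set(a) & set(b))
    offset = (sum(a[y] for y in overlap) / len(overlap)
              - sum(b[y] for y in overlap) / len(overlap))
    out = {}
    for y in sorted(set(a) | set(b)):
        vals = []
        if y in a:
            vals.append(a[y])
        if y in b:
            vals.append(b[y] + offset)
        out[y] = sum(vals) / len(vals)
    return out

# A 'cold' station and a 'warm' station with a 2 year overlap (made-up data)
a = {1990: 10.0, 1991: 10.2, 1992: 10.4}
b = {1991: 16.2, 1992: 16.4, 1993: 16.6}
combined = combine(a, b)
print(combined)  # no 6 C jump where the warm station joins in
```

Done this way, the 6 C difference between the stations never enters the combined series; the open question in the post is how well the real code, with all its prior homogenizing steps, approximates this.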

It may have issues that are small enough that you don’t care, but it is not perfect. And until you have it measured, you are just hoping. And hope is not a strategy… But, IMHO, the bigger chance for an “issue” is not the exact way the Grid / Box anomalies are calculated, it’s the fact that it is done AFTER STEP0, STEP1, and STEP2 have done a pot full of homogenizing, UHI “correcting”, combining and in-filling, truncating, etc.

The whole point behind the anomaly process is to do it up front and first, so as to let you do such things without ‘averaging temperatures’; yet here they are doing those things before the grid / box anomaly step. OK, the code does have parts in it that also try to mitigate that problem too. It computes offsets and adjusts means and it goes through all kinds of gyrations that I won’t go into here. And maybe they even work too. But the question is “How perfectly?”.

You could just assume “perfect”, but… I’ve measured what those steps do to the data and found they add a net warming to the data. So at the end of the day you have a cranky, brittle code full of odd gyrations that has had a couple of bugs already found in it. And it is dealing with 10’s of C of various biases in the structure of the temperatures in the data. And we’re left with 1/2 C (or so) that is attributed to “Global Warming”. (The ASOS error bias is bigger than that.) That means it could just as easily be correct 95% and letting about 5% of the bias through, and still giving a bogus result. So even if it were 95% of perfect, it could still be ‘wrong’. (Yes, those are ‘off the cuff’ example estimate numbers for illustration only. But they show the problem…) But you see the issue? You have to measure to about 99% to show that it’s more than 95%, to show that it is right. Not just assume. (The measuring stick must be more precise and accurate than the thing being looked for in the measurement…)

And that is the point where the Hypothetical Cow folks and I part company. They pronounce “It Uses Anomalies – Don’t Worry, Be Happy!”. And I say “It has to be better than 95% perfect and measured as such. I’m not happy until it is measured.” And what I’ve measured so far is not perfect, and by a bigger margin than that… But hey, maybe STEP3 will work a miracle ;-)

So my timeline? Well, I’d guess I’m about 3/4 of the way to “have an answer”. I’ve got an annoying bit of code to write to decode the binary data structure (NCAR format – and who said NCAR {NCDC…} was independent from GISS ;-) used in STEP3. (Data formats keep mutating from STEP to STEP… just to make it fun… /sarcoff>) Once that’s done I can actually compare the STEP3 output on a ‘box by box’ level with the input. I’d guess about 5 months, given my present rate of progress?

Then again, things like this come along where some folks get their panties in a bunch and DEMAND I address their silly 5th grade rants and tantrums (“Calling people out”. Really? After 5th grade? Yes, we have the classical ‘calling out’ over at Their Place.) Almost spilled my wine laughing at it 8-} and it can suck down a week dealing with it. (Which is why I typically just ignore them when they are being childish. And why He Who Shall Not Be Named is now a key word in my SPAM filter ;-) I can’t afford to spill any more wine ;-)

And working with code, especially other people’s code, can be very non-deterministic. I had thought, with the Zombie Thermometers discovery, that maybe the GIStemp brittleness to station deletion was only in the USA handling in STEP0 (where I’d seen it first) and was going to just remove Bolivia from the baseline and do my A/B compare. It crashed. (No, I’ve not done the digging to figure out where the problem was… guess why…) So what started out looking like a 5 minute run for a baseline / compare turned into an indeterminate ‘rewrite some unidentified part’. Due to this, all ‘schedule dates’ are at best Wild Ass Guesses… It could be 5 years for all I know at this point.

But I’m making progress. (Though I must admit that with dT/dt written and in QA I’m sorely tempted to just play with it for a while. Much more fun than GIStemp and it lets me see what ALL the data say, unadorned by other folks machinations… I’m especially intrigued by that USA graph and how N.Z. is in opposition to the USA in the ’30s)

Oh, and so far I’m only funded by the tip jar (that somewhat broken one on the right hand of the page… someday I need to get a better tip jar… but that one was free…). I’m still waiting for that first check from Big Oil. I actually find the Big Oil meme humorous. Ignoring for the moment that ANY big company funds both sides – they want access whoever wins – the notion that Oil is against CO2 sequestration is just a hoot. THE biggest thing standing between them and $BILLIONS from the latest enhanced oil recovery from their old “depleted” fields is a cheap supply of CO2. Liquid CO2 injection is the new hot technique. Roughly 1/2 the oil in an ‘empty’ field is still in it; so we’re talking BIG Bucks. And coal is their major competitor. So here they are faced with a proposed CO2 sequestration mandate that will whack their major competitor hard and force that competitor to PAY to have the CO2 taken away? And they will be PAID to put it in the ground and recover billions of bbl of oil? “Please Mr. Congressman, don’t you go throwing me in that there CO2 sequestration Briar Patch!”… But hey, if they want to send me a check, I’d be happy to take it. Still waiting though… Maybe I’ll go check my mail box and see if one just happened to show up today ;-)

-E.M.Smith ]

7. anna v says:

Thanks Chief. Quite clear.

I have not been used to thinking in terms of intensive and extensive variables, and had to look them up :).

I have been like the proverbial dog with a bone about the meaning of temperature as far as heat content goes for some time now, and also find anomalies meaningless when more than one physical mechanism can produce them.

For a star an average temperature makes sense, since we only see that, and the black body formula can turn it from a proxy into energy values. I suppose that is how the energy balance for the planet came about.

A black body radiation approximation has meaning for the solid/liquid surface of the planet if one knows the gray body constants and the gray body radiation spectrum. The point is that the temperatures measured and studied in such great detail are atmospheric, at 2 meters. The atmosphere does not follow a black body temperature dependence; I think it is T^6. So in such nonlinear surroundings (gray body and different power dependence) the anomaly can have little meaning to project back to the ground. Add to that all the mechanisms of convection, evaporation, etc.

When one goes into tropospheric anomalies the basic assumption is that there is one mechanism for raising or lowering the temperature and thus the anomaly reflects what is happening on the ground. I think the physics says differently and that is why we get large positive anomalies when the ground is freezing.

REPLY:[ Thank you. And from someone of your calibre, that is most comforting. BTW, as I understand it, the latest solar change has resulted in the depth of the atmosphere being reduced. I have no idea if anyone has factored that into the tropospheric numbers. But I just have to wonder: “If you compress a layer of gas, does it not warm up somewhere?”… and “Are we measuring the same percentage height now that the pressure heights have changed?”… and so much more. Life is just too short to even ask all the interesting questions, never mind answering even a tenth of them… -E.M.Smith ]

8. David says:

BTW, have you checked out Mike McMillan (02:02:16) on WUWT: the USHCN original raw data vs USHCN version 2 revised raw data blink charts? They appear very telling.

REPLY: [ In the article, there is a link to a USHCN vs USHCN.v2 comparison article I did. That article mostly just says “Go look at his stuff”, with a link to it. Yes, it’s ‘good stuff’. One of the things I had to let slide to do this article, still on my plate, is an A/B of USHCN vs USHCN.v2 in aggregate. A week? Maybe? We’ll see. -E.M.Smith ]

9. stephen richards says:

Chiefio

I think your focus on THE DATA may be way beyond the understanding of the average AGW scientist. They will still be thinking “temp, temp, temp, its temps”. Good analysis and I grasped all of it AND for the first time I really understood why you argue against ‘global ave temp’.

REPLY: [ Thanks. Spent too much time on it. But if it helps one person, it was worth it. For a while I used a ‘two pots of water, one boiling and one ice. Mix them, what’s the temp? You can’t know if you don’t have the masses of the two buckets of water.’ metaphor. But it just doesn’t click for most folks. I think they glaze as soon as it starts to sound like their high school physics class ;-) Even uses “mass” and all… But a ‘pocket of pennies I must be rich’ connects. It’s the same issue in either case, just finding the picture frame to put on it. So for ‘climate’ we have warm air over the ocean, and 20 feet of frozen water over Canada. But we count the temperature of a few pounds of air as just as important as a few tons of frozen water? Just silly…

Oh, and getting folks to think about the DATA as a distinct thing from the information it carries is very hard. I owe it all to a very good FORTRAN teacher at University who forced us to think about the data before we wrote code. The entire class was structured around giving us simple problems to code, then handing us data that were designed to break the typical naive designs. Broke my code too, the first two weekly assignments (got an A grade for everything else, but B overall for not catching the data issues). Got an A on the program the third week… and an A in the class some months later. NEVER fell for the “data issues” trap again. And now you know why I do what I do ;-) But yeah, it’s hard to get folks to think about the DATA as distinct from the information in it… Yet that is always the case for crypto work. You have a picture of a horse, but the cryptext is carried hidden in the bit pattern somewhere. The horse is the DATA, the decrypted text is the information. They are not the same thing…
-E.M.Smith ]

10. stephen richards says:

David

11. P.G. Sharrow says:

(It is actually worse than that as GIStemp tosses out a bunch of these records in processing. Anything shorter than a 20 year life span, for example. So the actual number used to make the grid boxes will be lower than these gross numbers. Others may be droppedd for other reasons.)

To give you an idea how sparse the coverage can be, notice that most of the dots on this picture are the ’short lifetime’ dots:

Chiefio: “droppedd”. One typo, see above world temperature sites map.
Excellent post, reads very well. PG

REPLY: [ Thanks! And… One Typo. One lousy little typo. I must have read the darned thing 100 times making sure it was perfect, and I added that bit about thermometer change at the end, and now I’ve got a typo. Dang it. 8-} But I’ve fixed it, thanks. ;-) And folks wonder why I’m not willing to accept that computer code is perfect either 9-) -E.M.Smith ]

12. RobT says:

Truly Excellent Work!

Many thanks for explaining so well what you are doing and why.

It would be nice if others could explain their way of looking at “Data” as clearly, concisely, and comprehensively as you have.

Perhaps Mann, Jones, et al could try?

Just a thought and most likely to remain so.

13. Steve Keohane says:

Simply brilliant Chiefio! If one doesn’t understand the raw data, any further extrapolations are meaningless. Thank you for all your hard work, this has got to be mind-numbing.

14. mikef2 says:

Well…it makes sense to me, but it appears I am a moron according to posters at Lucia’s!
To be honest, I may well be; I do not begin to understand the math involved, other than simple common sense.
Chiefio’s post makes simple common sense to me.

I’ve asked the question at Lucia’s and WUWT to take just the Campbell Island specific case, and tell me how an anomaly math method can turn a flat raw data list into a warming trend unless it involves some specific data change that throws out the average, i.e. creates a bias.

15. RickA says:

As always – very interesting!

I am glad you are working towards publication.

Keep up the good work.

16. hunter says:

Good work, Chiefio.
Thanks,

17. David says:

stephen richards

WUWT, On the “march of the thermometers”

18. David says:

Chiefio, I posted an earlier comment in this thread where I had the name “Tamino” in it. It was in context of your post, and not a request to further respond to Tamino. Did it go to the spam filters?

Thanks

REPLY:[ Yes. As will all things that name He Who Shall Not Be Named ;-) But if he wants to use his real name, no problem 8-) …. but don’t worry, I fish them back out after deleting the trolls and cranks. -E.M.Smith ]

19. jorgekafkazar says:

Hmm. Fascinating. Has anyone tested GIStemp by feeding in strings of constants or nulls / red noise / or more complex reference test data to see what comes out the other end?

REPLY: [ My attempts to feed it test sets have generally resulted in a crash. You can rewrite values to missing data flags and that works, but deleting older records causes crashes. After a few of those, I stopped… On my “someday” list is to make a test set using EXACTLY the records in GHCN, but with the values re-written.

Unfortunately, that’s a ‘chunk of work’ to make valid test sets in that way. So my guess is “no”. But frankly, I’d be thrilled if it were done. I would much rather be advancing my stock prediction code than working on a QA suite for GIStemp. But given the way things are, any QA suite produced by the AGW True Believers would most likely be about as good as the climate code itself – i.e. ‘not very’ – and so deliver only suspect results. But that depressive note aside: What is really needed is in fact just that. A decent and unbiased QA suite and benchmark set. If it passes, it passes, and I can pursue other things that I care about more.

But since they never have published such a thing, I took it on. I didn’t expect the code to be so brittle or the task this complicated, but such is life in the software world. (Who would have thought that the data would go through a half dozen DIFFERENT formats / structures as it passes through the code. Just silly. USHCN ascii. GHCN ascii. Combined hybrid. Python ascii. Python binary database. BACK to ascii, but a different structure. NCAR. And I’ve left out a few other minor formats for other files and intermediate steps…)
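For what it’s worth, the “same records, re-written values” test-set idea is simple to sketch. This toy Python of mine assumes a whitespace-delimited layout like the tables quoted above; the real GHCN files are fixed-width and would need a stricter parser:

```python
# Keep EXACTLY the same records (same stations, years, and missing-data
# gaps) but overwrite every present value with a constant, so a bias-free
# processor should output a perfectly flat series.
def rewrite_record(line, constant=10.0, missing="-"):
    fields = line.split()
    head, values = fields[:3], fields[3:]   # station id, year, flag columns
    new_values = [v if v == missing else f"{constant:.1f}" for v in values]
    return " ".join(head + new_values)

# A line shaped like the station tables above (12 months plus annual mean)
line = "6174 1992 02 10.0 8.8 7.4 7.3 5.0 4.6 4.7 3.2 5.1 6.6 8.3 8.6 6.6"
print(rewrite_record(line))
```

Missing-data markers pass through untouched, so the record structure, and whatever brittleness depends on it, is preserved; only the values change.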

Oh Well. I’ve picked this row, so I’m going to hoe it to the end. -E.M.Smith ]

I’m a confirmed cynic about the ‘cleanliness’ of data, and one given to writing code to search out and highlight the dirt actually sprinkled liberally through most DATA.

I congratulate and salute you for the effort in both working through the coding needed, and documenting it as you go.

Good as-built docs are so rare as to be worthy of exhibition in their own right.

And what we have here, folks, is an excellent demonstration of a better way to do science: harness the hitherto untapped brain cells of clever folk all across the planet.

An important, and brilliantly clear, piece.

Thank you.

21. E.M.Smith says:

@all: Thanks for your time and for ploughing through it all. Yes, it can be “mind numbing”. I try to dig through all the detailed technical dreck, and find the nugget of understanding, then polish it off a bit and say “Look, here’s what you’re looking for”. I just don’t see a lot of value in having everyone need to dig through the “overburden” and sort out the dross… To the extent I take a bunch of mind numbing mumbo-jumbo and turn it into something understandable, I count it a success. No matter what “climate scientists” and trolls may say.

I’ve actually gotten flak for things like the “pocket of pennies I’m rich”, “The March Of The Thermometers” and the “spherical cow” metaphors (and many others), for not being rigorous or for being too silly. Yet I think they are the most important part. The touchstone to the nuggets of truth…

mikef2: Well…it makes sense to me, but it appears I am a moron according to posters at Lucia’s!

Well, I’ve abandoned visiting Lucia’s place precisely because of the “Food Fight” atmosphere. (Yet some folks have wanted to toss rocks at me for ‘over moderating’ here because I don’t give those folks free run… go figure.) If forced into it, I’ve sporadically put a comment there, but typically never go back to review what the hyena pack have said. I don’t like watching scavengers much. And I don’t think it’s too much to ask that folks be polite and personable in public.

To be honest, I may well be; I do not begin to understand the math involved, other than simple common sense.
Chiefio’s post makes simple common sense to me.

Thanks! That means I’ve had a success. I try to understand the math and physics and work out the technical junk in the code too, then translate it to a basic core understanding. (Then get rocks tossed at me for leaving out all that detail when I post the ‘nugget’; or get rocks when I do put the detail in for being too long and not making it simple. Such is the life of a blogmeister ;-)

So to the extent I’ve taken the Tech Talk and translated well, I’ve done what I set out to do.

I’ve asked the question at Lucia’s and WUWT to take just the Campbell Island specific case, and tell me how an anomaly math method can turn a flat raw data list into a warming trend unless it involves some specific data change that throws out the average, i.e. creates a bias.

Well, it’s really possible to do that. As I did with the dT/dt code. It is all based on looking at what the relative slopes are of the different series. So you could have a cold Campbell Island that pulls the average down to 12.x, yet it is changing from (as hypotheticals) 9, 9.5, 10, 10.5, 11 over those years. If you now replaced it with a ‘warmer’ place that made the overall average 15.x, you would have a warmer aggregate in the data; but if the actual CHANGE at the station was in fact something like 16, 16.1, 16.2, 16.3, 16, you can see that the particular station is just not warming up. The warming TREND is not present. The ‘cold’ station warmed from 9 to 11 while the ‘warm’ station went from 16 to 16 … You can ‘fix that’ by calculating an ‘anomaly’. The two data series would become:

0 0.5 0.5 0.5 0.5
0 0.1 0.1 0.1 -0.3

using the First Differences method. It’s now pretty clear that the second one is “not warming”. The other nice property is that an average of these anomalies actually DOES mean something (they are extensive variables). So you could average them together and get a much more valid idea what the average TREND was. Where simply averaging all those 16.x into the DATA gives an apparent warmer total, the TREND is not to the upward direction.
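For the curious, that First Differences step is trivial to code. A toy Python sketch of mine using those same hypothetical station series (not real Campbell Island data):

```python
# First Differences: anomaly as year-over-year change, anchored at 0
# for the first year of each station's record.
def first_differences(series):
    return [0.0] + [round(series[i] - series[i - 1], 1)
                    for i in range(1, len(series))]

cold_station = [9.0, 9.5, 10.0, 10.5, 11.0]    # 'cold' but warming
warm_station = [16.0, 16.1, 16.2, 16.3, 16.0]  # 'warm' but flat

print(first_differences(cold_station))  # [0.0, 0.5, 0.5, 0.5, 0.5]
print(first_differences(warm_station))  # [0.0, 0.1, 0.1, 0.1, -0.3]

# Averaging the anomalies (not the temperatures) gives a meaningful
# combined trend: the 16.x values never inflate the result.
combined = [(a + b) / 2 for a, b in zip(first_differences(cold_station),
                                        first_differences(warm_station))]
```

The absolute 7 C difference between the stations simply never enters the combined series; only the changes do.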

The problem is that you could have a cold early data segment from a cold station, and a warm later data segment from a warm station, and end up comparing Cold to Warm and finding a bogus trend to warming that was not in either station. And that is why I think DATA bias matters. We are just HOPING that GIStemp and related codes can undo all that thermometer change.

That is the point about which all the rock throwing from the AGW folks turns. They think I mean that averaging in 16.x means the TREND is warming, when I’m really just saying it puts a bias risk into the DATA.

And in a perfect world, a perfect anomaly done ‘up front’ would suppress a very large part of the DATA BIAS. Probably even enough to make it useful. My gripe is not with that notion. (As seen in dT/dt where I expect that notion to be valid). My gripe is ALL about assuming that GIStemp does that, when it does not, and about assuming that simplified hypothetical models can tell you what real world code does, when they can not. They can tell you what to expect, what to look for, but not what happens. (Otherwise no one would ever need a software QA department … )

OK, hopefully that was clear enough. If not, let me know and I’ll do a longer A/B demonstration of how a set of data could be flat while the anomalies (or trend) of the individual items in the set are warming. It can happen (as we saw with New Zealand, where the base data are far flatter than the dT/dt chart made from them) and it all has to do with instrument change. With a set of numbers that average to about the same thing as thermometers come and go, but with each individual thermometer set showing rise over time.
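A minimal toy sketch of that instrument-change effect (made-up numbers of mine, in Python): two perfectly flat stations, the cold one drops out as the warm one comes in, and a naive average of whatever data exists “warms” anyway.

```python
# Two hypothetical stations, each perfectly flat. None marks missing data.
cold = [10.0] * 6 + [None] * 6   # reports years 0-5, then drops out
warm = [None] * 6 + [16.0] * 6   # comes online in year 6

# Naive average over whatever values exist in each year
naive_average = []
for a, b in zip(cold, warm):
    values = [v for v in (a, b) if v is not None]
    naive_average.append(sum(values) / len(values))

print(naive_average)  # jumps from 10.0 to 16.0 at the instrument change
```

Neither station warmed a single degree; the 6 C “trend” is entirely an artifact of which thermometers were in the average when.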

Yeah, Bizarre is a good word for it. I’ve not bothered to figure out if such folks are pathological, are driven by an agenda, or just were never socially potty trained. But I don’t have to put up with them.

FWIW, He Who Shall Not Be Named said my postings were like “the ravings of a lunatic”. (Someone pushed me into seeing the drivel.) Somehow I find that a bit ‘bizarre’. ;-)

Never thought actually looking at the data, doing measurements of it, publishing the code to do the same, and mostly looking at large blocks of numbers and asking if there is a pattern to be found; could be interpreted as “ravings” and never thought requiring software to be QA tested and benchmarked was “lunatic” but hey, call me crazy, I think those things matter… 8-)

Personally, I think the notion of a Global Average Temperature is a lunatic idea and that averaging intensive variables is nutty, but the AGW folks set that part of the debate. I’d rather be looking at net heat flows and total energy balance. But you can’t get that from temperature alone… so ‘they work with what they have’… and they don’t have much. (The sat. experiment that looked at ‘heat balance’ from orbit did not find a warming earth, so at least someone else had the same thought…)

So I’m going to just keep on ‘raving’ in my own little understated way. And I’m going to just keep on finding the best possible path to the most clearly demonstrable truths that can be found.

And so far those truths are that “Instrument Change” matters, more than anything else. The DATA has lots of structural bias in it. (and I haven’t even started looking at monthly dropout bias, but ‘digging in the clay’ has.) The climate codes are NOT proven to remove those bits of bias (though they do look like they reduce it). And clearly looking at the DATA to extract reasonably clean trends shows patterns more in line with Airport Growth, AHI / UHI, and climate cycles than with any CO2 idea.

Ah, here is that “Digging” link:

http://diggingintheclay.blogspot.com/2010/03/of-missing-temperatures-and-filled-in.html

-E.M.Smith ]

22. vjones says:

Hey E.M., thanks for the endorsement.

That is quite a tome in the end – 10,000 words? Quantity and quality! I must say it reads very well.

Check your email – I’m still reeling from the latest find.

23. Pingback: Top Posts — WordPress.com

24. Rod Smith says:

This is a spectacularly innovative piece of work that embodies a lot of hard work and common sense. Thank you, thank you, thank you. (And while I’m at it I’ll mention I’m a Mac user and neither Safari nor Firefox detects any sign of the Tip Me link anywhere, although both can see the “Buy Me a Beer?” thing.)

At any rate you have hit a home run, although I don’t expect the high-powered, peer-reviewed academics (I can’t help myself!) will roll over and accept any of your methods or conclusions.

So, hang in there and don’t let ’em grind you down!

25. Dan in California says:

Very well considered and presented, in my opinion. Now a question:

Pressure readings are adjusted for barometers not at sea level. Everybody does it and it’s well understood and accepted. Are there altitude adjustments for temperature reports? If not, then preferentially deleting high altitude sensors will definitely skew the data toward indicated warming.
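A back-of-envelope sketch of the commenter's point (invented elevations and temperatures, with the standard ~6.5 °C/km environmental lapse rate as the assumed adjustment, analogous to the sea-level reduction done for barometers): if high-altitude stations are dropped from a raw average, the network mean jumps upward, while a lapse-adjusted mean is unaffected.

```python
# Hypothetical network: (elevation in meters, observed annual mean in C).
# Each site is constructed so its sea-level equivalent is 15.0 C.
LAPSE = 6.5 / 1000.0  # assumed standard lapse rate, C per meter

stations = [(0, 15.0), (500, 11.75), (2000, 2.0)]

def raw_mean(stns):
    """Plain average of the observed temperatures."""
    return sum(t for _, t in stns) / len(stns)

def adjusted_mean(stns):
    """Reduce every reading to sea-level equivalent before averaging."""
    return sum(t + LAPSE * elev for elev, t in stns) / len(stns)

# The high-altitude station "dies"...
survivors = [s for s in stations if s[0] < 1000]

print("raw mean, all stations: %.2f" % raw_mean(stations))      # 9.58
print("raw mean, survivors:    %.2f" % raw_mean(survivors))     # 13.38
print("adjusted, all stations: %.2f" % adjusted_mean(stations)) # 15.00
print("adjusted, survivors:    %.2f" % adjusted_mean(survivors))# 15.00
```

The raw mean shows nearly 4 °C of spurious "warming" from the station drop alone; the adjusted mean doesn’t move. Whether (and where) such an adjustment is actually applied in the climate codes is exactly the kind of question this posting is probing.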

26. Keith Hill says:

Off thread chiefio and I’m sorry I missed your 23/10/2009 article on the march North of Aussie surface stations, but as a Tasmanian, something I found that wasn’t noted at the time may spark some interest.

For Tasmania it appears that up until 1993 there were 25 stations being used. At the end of 1992 most of those stations were dropped for data gathering purposes, leaving only the ones at Launceston and Hobart Airports for the next six years. This wiped out many rural areas, all our high stations and also those on the colder, more exposed West Coast.

Two coastal stations were resurrected around 2008 – Eddystone Point on the warmer north-east tip of Tasmania and Cape Bruny on Bruny Island south of Hobart in the D’Entrecasteaux Channel. They are probably now automated.

I have no idea why so many stations were dropped all at once, but a good clue arises from examination of the charts. I found that virtually all recorded a sharp drop of between 1.2 and 1.4 degrees Celsius in the four years from 1988 to 1992, which of course would have been a rather uncomfortable fact for those pushing the AGW theory.

Without the colder areas and combined with the known UHI effect at airports, Tasmania would presumably have then been contributing warmer mean temperatures to the global calculations after 1992.

However, at the risk of being accused of “cherry-picking”, Launceston Airport may still be an inconvenient truth for the AGW lobby as the trend line has been remarkably stable and refusing to record any “global warming” there.

The first recorded annual mean temperature was 12.1 degrees in 1939 and 70 years later in 2009, 11.8 degrees. The 1939 mean temperature has only been exceeded five times in that 70 years and only twice with any significance – by 0.4 of a degree in 1962 and 0.6 in 1988.

Have any other Aussies checked the charts of the 1992 dropped stations in their areas? It would be very interesting to see if they recorded the same four year drop in temperatures.

Keep up the great work!

REPLY: [ Wonderful information! Every time I’ve gone down one of the Rabbit Holes, at the end I’ve found some kind of “Instrument Change”. Your example is a stellar one. Most of them involve Airports (having more of them, or, like the Marshall Islands, turning from a grass shack in the 1960’s into a mile long tarmac heat collector now). Sometimes it’s just changing the instruments, like from Liquid In Glass to the ASOS with a known heating problem – it sucks its own exhaust, and the warmth that the electronic heater makes gets pulled back in… Some involve location changes over time (a cold mountain in the baseline during a cold spell puts very cold excursions into the baseline, later replaced – either in the same grid/box or via homogenizing – with data from a flat, water moderated beach that just can’t have those cold excursions from the adjusted mean). In my opinion, it’s the worst calorimetry job ever done. -E.M.Smith ]

27. boballab says:

EM, I finally found what I was looking for in the Canadian data that I got from Environment Canada (EC). Seems that for Alert, Canada, NCDC cut off the data in 1991, but EC has data out to 2005. I graphed that data against GISS adjusted and it’s almost a perfect match, so I then went and grabbed the gridded trends and anomalies for the grid box that Alert is in. To say that GISS infilling is off in its calculations is vastly understating it.

When you just look at the Gridded product for the years GISS has actual data for, it matches the station data for Alert. When they had to infill between 1990 and 2005 they got a warming trend of 1.1616°C for 1200km infill and a cooling trend of -.3246°C for 250km infill. When I looked at the trend of the EC data from 1951-2005 it was .4°C, so both infill products of GISS are wrong: 250km by almost 1°C and 1200km by over .6°C.

The best thing is the Alert Station is the only station that is in that grid box and while GISS used data from it they got it right, when they don’t they are way off.

http://boballab.wordpress.com/2010/03/09/giss-infilling-the-true-hypothetical-cow/

REPLY: [ Wonderful job. Very well done. I’m going to send A.Watts a pointer to it. FWIW, the “temp” files at GISS tend to evaporate under you, so you need to keep your own copy of anything you find there… -E.M.Smith ]

28. Waffle says:

Nice summary. I can help you out with your donations script. It’s looking for a hook to insert the widget code onto so you need to provide an element with an id ScratchBackWidget.

I’d just insert a hook wrapper into your sidebar template with a div like this:

*Rest of template code here, if you get my drift*

This message was funded by big oil and cost ExxonMobil \$120 000 and three full-time lobbyists to produce. :)

REPLY: [ If Only… more like one old 1990’s era ‘white box PC’ with a free Linux on it and a few tips in the tip jar paying for the coffee caffeine fix ;-) Though lately I’ve gotten a bit more help. I’ve now been given a box to do the ‘bigendian’ part on, and a pointer to how to do it in the compiler too. A FORTRAN guy has volunteered to help (and I need to send him a “please turn NCAR into v2.mean like file formats” chunk of code to modify…). Then there are the ‘spinoffs’ where some folks started reproducing what I was doing and are now pulling ahead of me. One is now doing a full blown database and making ‘trends for each station’ and finding very interesting things. All the stuff I ought to be helping along instead of admiring Hypothetical Cows. Oh Well, every so often you have to mow the weeds… -E.M.Smith ]

29. Waffle says:

Yuck, no code transformation. :(

Let’s do it manually!

<div id="sidebar">
<div id="ScratchBackWidget">

</div>
</div><!-- end id:sidebar -->

REPLY: [ I’ll give it a try when I get time. There are some limitations on what I can do as I’m running on the free version of wordpress. So to do most of the really interesting things, I would need to start paying them. IIRC that was why I ended up with this particular widget in the first place (it was free) and at install I ran into {something I don’t remember right now} that I could fix if only I paid wordpress… It may have been the ability to do custom code? At any rate, I decided to just let ‘the junky one’ run a while and let it pay for an upgrade… Nothing happened… Then about 3 months? ago folks started tipping. So now I have the means to fix it. But not the time. Isn’t life fun? 8-} With luck your code will patch it and I’ll be on my way.
-E.M.Smith ]

30. Blouis79 says:

Fabulous work. My mind boggles when contemplating why climate scientists haven’t bothered to show us raw data and analysis and why we can’t get all the data they have to verify the raw data is what the national weather services have collected over the years.

I still can’t understand why we bother trying to figure global temperature when individual locations are so variable. I think Gerlich and Tscheuschner argued the concept of a global mean temperature is a nonsense.

The more sensible way to measure “global temperature” changes involves computations of energy, mass, and specific heat. My understanding is that’s what underpins the work of Bo Nordell on global warming and thermal pollution – energy and derived computations to convert from energy to temperature, including heat diffusion effects in solids. Climate scientists don’t like his work.

Anyone trying to record the temperature in the fridge should appreciate that air temperature has not so much to do with the temperature of the goods in the fridge because the thermal masses are far more stable than the air temperature.

I’d like to know if climate scientists can get their heads around smaller patches of earth – single sites – temperature; humidity; clouds; CO2; sunlight; etc. Individual sites have data on larger temperature and other parameter excursions than the global mean. If you lined them all up (in a computer), they should add something to predicting behaviour of the global mean based on observations rather than fantasy. Should be at least statistically as useful as a GISSTEMP + ModelE.

31. CO2 Realist says:

EM, keep up the good work. I’ve been following the discussion here and elsewhere. I agree with your statement in one of your comments above:

Personally, I think the notion of a Global Average Temperature is a lunatic idea and that averaging intensive variables is nutty, but the AGW folks set that part of the debate. I’d rather be looking at net heat flows and total energy balance

Global average temp is kind of like average house prices – meaningless. Where is the house? Kansas? Malibu? Drives me nuts.

While some of the techniques may address some issues, it seems to me that applying techniques to garbage data gets you unreliable results. Maybe I’m just too much of a common sense type of person.

32. KevinM says:

Steve McIntyre sounded the alarm early, but as we come down the stretch I find you are carrying more of the load. Maybe it’s a handoff from statistical analysis to database programming.

Thank god somebody is doing the work most of us are too lazy for. Some would say unqualified, but at least 1 million Americans are capable, though neither interested nor willing.

I have a guess about where AGW will stand in 20 years, and I wonder what will become of the blog records. The battle between the 5% nation of true believers and the 5% nation of true skeptics is happening on the net.

If these issues are only metaphorical snail darters thwarting our hydro project, continued revelation will not stop what’s coming. At least those who care to look will have access to truth.

33. harrywr2 says:

I found some howlers in the data also.

The population data at the whiteboard lists Riyadh, Jeddah, Dhahran, Basrah, Najaf, Kirkuk and Mosul as all ‘rural sites’. They all have more than 1 million population.
They got Rutbah right as being ‘rural’, but it’s an airbase with a nice long runway.

Ramstein AB, Germany is also listed as rural. Just 53,000 people living in close proximity to the base. Not to mention the runways, concrete ammo bunkers, concrete hardened aircraft shelters…

http://rhinohide.cx/co2/gistemp-alt-lat-pop-2000/data/v2.inv.pop100-

Some folks don’t get it.

Garbage In – Garbage Out. Doesn’t matter how many people verify the ‘calculations are correct’ if the underlying data is in fact provably false.

34. R Dunn says:

I never saw a spherical cow;
I never hope to see one:
but I can tell you anyhow;
I’d rather see than be one.

35. docmartyn says:

Chief, have you by any chance looked at the (max+min) of the stations that have been eliminated and those that have been kept?

My guess is that sites with low (max+min) have been eliminated.

REPLY: [ It’s on the “todo” list – that is way too long ;-) So I think you are looking in the right neighborhood, but I suspect it’s the other way round. When you look at the data (or a graph… watch this space, I’m working on a batch of ‘interesting’ ugly graphs) you will see that the ‘volatility’ dampens recently. Both the highs and the lows get compressed, though not quite equally. This implies that the high highs and the low lows are not being as extreme. Yeah, I’m working on it… But if you toss the high-highs and the low-lows, the (MIN + MAX) that’s getting tossed ought to be the high ones. It’s possible that the (MIN+MAX) is small and it’s just the absolute (MIN+MAX)/2 at an extreme that’s being dropped, but ‘we will see’. -E.M.Smith ]

36. Ken McMurtrie says:

Congratulations on an impressive work.
It all makes sense to a humble electrical engineer.
With information like yours being available, surely the truth will eventually rise out of the mire.
Great effort regarding both your work and your issuing it to us.

REPLY: [ Thanks! Just doing what I can. I’m from the “Nuts, Bolts & Volts” school as well ;-) and really just think the “all theory no cattle” folks have too much imagination and not enough perspiration in what they are doing… So I’m plugging my way through the data at the lowest possible level to get the maximum truth out of it. And what I’m finding out is that the “theory” is a very long way from the reality. Take a look at the Talk Turkey posting. I think you’ll like it! -E.M.Smith ]

37. Visceral Rebellion says:

Incidentally, I just learned that Voldemort is actually Grant Foster of FOIA2009.zip fame. That explained a lot for me.

38. juanslayton says:

In the hope that it might be useful to someone, I repeat below a comment I made to WUWT Tips and Notes. My next step will be to put up a list of the V2 stations that MMS reports as closed. There are well over 100; many have been closed since the ’90s.
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
Previous discussions (the ‘lost’ stations in Honolulu and Dutch Harbour) have already called attention to the limited intelligence of available data searches. Trivial errors lead to blind alleys. Type in ‘MCMILLIN’, instead of ‘MC MILLIN’, and MMS will simply report that it couldn’t find a match. The list of version 2 stations available at:

http://cdiac.ornl.gov/ftp/ushcn_v2_monthly/ushcn-stations.txt

contains over 250 station names that do not match the MMS data. For whatever use it may have, I have used the COOP numbers to look up and ‘correct’ these names. The resulting file is posted at:

http://members.dslextreme.com/users/juanslayton/v2_stations.txt

Of course, I don’t really know which names are ‘correct,’ but MMS has more information, so I go with their names.
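A minimal sketch of a workaround for the matching problem described above (hypothetical code, not the actual MMS search) is to normalize both names before comparing, so spacing and punctuation differences like ‘MCMILLIN’ vs ‘MC MILLIN’ can’t cause a miss:

```python
import re

def normalize(name):
    """Collapse spacing and punctuation: 'MC MILLIN' -> 'MCMILLIN'."""
    return re.sub(r"[^A-Z0-9]", "", name.upper())

def find_station(query, station_names):
    """Return station names matching the query after normalization."""
    q = normalize(query)
    return [n for n in station_names if normalize(n) == q]

# Hypothetical station list for illustration
mms_names = ["MC MILLIN", "DUTCH HARBOR", "HONOLULU OBSERVATORY"]

print(find_station("MCMILLIN", mms_names))       # ['MC MILLIN']
print(find_station("Dutch Harbour", mms_names))  # [] - spelling still differs
```

Note this only papers over spacing and case; genuine spelling variants (Harbour vs Harbor) still miss, which is why a key-based lookup on COOP numbers, as done above, is the more reliable route.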

39. juanslayton says:

…and here’s the list of closed v2 stations. Many have been closed since the 90’s. What could possibly motivate creating a new network incorporating stations that have been closed for many years?

40. juanslayton says:
41. vjones says:

@juanslayton
What could possibly motivate creating a new network incorporating stations that have been closed for many years?

They are potential “Lazarus Thermometers”. Steve McIntyre found some that stopped reporting into the GHCN file in 1990, but have recently started again. I’ve just found a few in Turkey.

REPLY: [ There are also the Zombie thermometers that are apparently dead, but sometimes keep coming back to ‘life’ each few months… just to die again… and return to life…
https://chiefio.wordpress.com/2010/02/15/thermometer-zombie-walk/
different from the Lazarus thermometers mostly in the duration of the ‘dead’ phase… -E.M.Smith ]

42. Keith Hill says:

E.M. I was browsing through the late John Daly’s excellent 10th May 2000 article titled “What’s Wrong With The Surface Record”
(http://www.john-daly.com/ges/surftemp/surftemp.htm). On Page 13, headed “Station Records and Climate Models”, he had noted the value of looking at, quote: “individual station records, particularly those which are known to be rural, have continuous and consistent data and are known to be properly supervised. The ‘ideal’ stations are those which have everything – a long-term record, no breaks, scientifically supervised, completely rural (i.e., ‘greenfields’) and set in a climatically strategic location.

An example is Valentia in Ireland, which is located on an island in the extreme southwest of Ireland, right on the coast of County Kerry facing the North Atlantic. It is the first point of interception for the Gulf Stream entering northern Europe and is directly exposed to the prevailing south-westerly winds which blow in from the ocean. It is the perfect location to monitor climate change and as we can readily see (in the accompanying graph) there hasn’t been any. There has been variation year-to-year over a 2 degree range, but no overall trend since 1869 ****** (a year which was itself warmer than 1999). The pre-war warming is present – just, but quickly followed by a similar cooling.”

****** Interestingly, the graph record shown by John started in 1869, but the graph now shown on the gistemp site starts from a colder 1880, which, as you mention above, introduces its own “sample bias”.

I decided to check what had happened in the ensuing 10 years, and the 2 degree range rise and fall has been maintained at Valentia. What really caught my attention, though, is the 1.5 degree fall recorded over the last four years. On checking the nine stations in Ireland still being used, as per (http://data.giss.nasa.gov/cgi-bin/gistemp/findstation.py?datatype=gistemp&data_set), they have all consistently recorded similar falls.
I know how you love islands, and I was struck by the similarity between this 4 or 5 year fall in temperatures and that which occurred in Tasmania, as pointed out in my old March 9th post above, four years after James Hansen’s alarmist predictions to the U.S. Senate in 1988. You’ll recall 1992-93 was one of “The Great Dying of Thermometers.” I wonder if another “Great Dying” is imminent.

Because the IPCC climate models all indicated the projected warming should show up in polar regions, the Russian Vostok Base high on the Antarctic ice plateau was another favorite of John Daly. At present, that is showing a fall of around three degrees over the last three years.
In my home State, Launceston Airport is a relatively ‘greenfields’ location and is still stubbornly refusing to show any sign of “global warming”.

It may be an interesting exercise for some of your other posters to check for falls at such stations in their area.

Ireland Temperatures Fall, Last 4-5 Years

Casement Aero 1.7 degrees
Dublin Airport 1.8 ”
Shannon Airport 1.6 ”
Belmullet 1.6 ”
Belfast/Alder 1.4 ”
Cork Airport 1.5 ”