AGW is a thermometer count artifact

Update, Links

I’ve started putting up the source code and more detailed analysis in a few links. I’m just going to list them here and give them a “one line” description. The original posting continues just a bit below.

When the GHCN data set is reduced to the 3000 thermometers with the longest records (a cutoff at about 64 years' worth of data per station), the “global warming” signal is not present.

When we look at the data on a seasonal basis, we find that the “warming signal” is present in the winter months, but not in August. This cannot be a function of decade-long solar changes nor of long-term accumulation of CO2. Changes in the sun may be responsible for other things, and CO2 may well cause some other effects (like improved plant growth), but the global warming signal is too seasonal to be either of them:

UPDATE: Yes, an “update to the update” ;-) I’ve found the cause for the thermometer deletions. There was a push from another UN committee to make a network of thermometers focused on Climate Research (CRN, RCS, and various other names in different countries) that, as near as I can tell, has led to the thermometer deletions. NASA / NOAA deleted the thermometers from the GHCN. I look at the global pattern of deletions by continent and by major country here:

https://chiefio.wordpress.com/2009/11/03/ghcn-the-global-analysis/

A recent posting on The March of the Thermometers, which looks at GIStemp’s broad zones vs. a better set of zones and at what happens to thermometers over time and space, is here:

https://chiefio.wordpress.com/2009/08/17/thermometer-years-by-latitude-warm-globe/

That CO2 has the ability to only work in winter is, er, odd? Or maybe it isn’t CO2 that’s the issue…

https://chiefio.wordpress.com/2009/08/09/co2-takes-summers-off/

What about those short-lived stations? When we select the 10,000 stations (representing less than 1/2 the data) with the shortest lives, we get a very strong warming signal in the data. For a future bit of work, I’ll be looking in more detail at exactly which stations contribute the most, and when. But it is pretty clear that the warming signal comes with the addition of thermometers… The spatial, in addition to the temporal, distribution of “warming” does not allow for the cause to be a diffuse, broadly acting agent like CO2.

https://chiefio.wordpress.com/2009/08/10/well-theres-your-global-warming-problem/

A finer-grained look, by quartiles of age, with a “bonus look” at the 10% best stations is here:

https://chiefio.wordpress.com/2009/08/13/gistemp-quartiles-of-age-bolus-of-heat/

And a look into how much of this signal gets through the “temperature” steps of GIStemp (everything up to zones) is here:

https://chiefio.wordpress.com/2009/08/12/gistemp-step1-data-change-profile/

The crib note is that a lot of it makes it to the zonal stage, with a general overall warming of the data set of about 1/2 C, but the “tilt” seems to come from GIStemp being a poor filter for the data profile, not from a coding failure. Realize that this “warming of the data” is in addition to the warming signal in the “raw” data. GIStemp is acting as an amplifier up to this point, not as a filter.

Want to “try this at home”? Here’s a listing of the code I wrote to do some of this:

https://chiefio.wordpress.com/2009/08/09/will-the-good-ghcn-stations-please-stand-up/

Original Posting

This is a copy of a letter that I posted on WUWT under the “tips” tab.
http://wattsupwiththat.com/tips-notes-to-wuwt/
If there is no follow-up, I’ll start posting the details of the code, methods, and conclusions here.

Anthony:

I have been “characterizing” the GIStemp process and how the data are transformed as they go through it. Along the way, I have discovered a couple of Very Interesting Things. I would like someone to verify these findings (since they may be suitable for publication). A couple are fairly simple (but have significant implications). One is, IMHO, a bit of a “doozy”…

I used the “raw GHCN” data from STEP0 of GIStemp as the seed for my “benchmark”, then put together a couple of FORTRAN “hand tools” to see what happened to the trends. What I found was:

1) There is a pronounced seasonal variation in the “Global” average temperature. This means that the GAT is decidedly biased to the Northern Hemisphere. Hemispheric changes can easily bias the “GAT” (i.e. effects of a change of axial tilt, of precession (which pole is closest to the sun at perihelion), etc.). For example:

GAT year: 1900: (by month, starting in January)

0.6  0.4  4.6 10.8 15.8 19.7 21.5 21.7 17.8 13.6  6.2  2.4 

2) In calculating these “Global Average Temperatures” I found that the exact method of calculation strongly changes the result. Do you sum all the separate valid records and then divide by the count of valid records? Or do you calculate a yearly GAT, then average those to get a decade or whole-series GAT? This implies that the number of thermometers active in any particular period of time has a strong impact on the GAT in that time. For example:

0.2  1.6  4.7  9.5 14.0 17.7 19.6 18.9 15.6 10.7  5.5  1.8 10.0
2.6  3.9  7.3 11.8 15.8 18.9 20.7 20.3 17.4 13.1  7.9  3.9 11.97

These two series are the average of all station records, by month, with an average of all temperatures in the 13th field. The difference is that the first series comes from summing the individual temperatures and dividing by the total record count, while the second series comes from averaging the individual years first, then averaging those averages (much as GIStemp does). It’s clear that a degree C (or 2!) in the GAT is an artifact of the number of stations used in any given average and the order of averaging.
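
To make the order-of-averaging effect concrete, here is a minimal sketch (made-up numbers, not my actual FORTRAN hand tool) of the two methods. With four readings in one year and only one in the next, the two “averages” land about 2.5 C apart:

program averaging_order
  implicit none
  ! Hypothetical data: year 1 has 4 station readings, year 2 has only 1.
  real, parameter :: year1(4) = (/ 10.0, 11.0, 12.0, 13.0 /)
  real, parameter :: year2(1) = (/ 20.0 /)
  real :: pooled, mean1, mean2, mean_of_means

  ! Method 1: pool every valid record, divide by the total record count.
  pooled = ( sum(year1) + sum(year2) ) / real( size(year1) + size(year2) )

  ! Method 2: average each year first, then average the yearly means.
  mean1 = sum(year1) / real(size(year1))
  mean2 = sum(year2) / real(size(year2))
  mean_of_means = (mean1 + mean2) / 2.0

  print *, 'Pooled average          :', pooled          ! ~13.2
  print *, 'Average of yearly means :', mean_of_means   ! ~15.75
end program averaging_order

The two methods are guaranteed to agree only when every year has the same count of valid records; as the station count swings from a handful up past 9,000 and back down, the gap between them is an artifact of counting and averaging order, not of climate.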

3) Finally, this led me to the idea of selecting only those stations with a long history. (I now have a FORTRAN program that takes the GHCN format files, counts the records for each station ID, sorts them in rank order by total years of data, and lets you select a “cutoff” value.) I chose to use a 3000 station cutoff (which gives about 64 years of data for the shortest-lived stations that are kept), but similar results happen with 1k, 2k, and even 4k stations. The result is that almost all of the AGW “signal” goes away. The conclusion is that the AGW “signal” is an artifact of the arrival and departure of thermometers from the scribal record: the addition of more thermometers in the Southern Hemisphere, followed by the loss of Siberian thermometers with the collapse of the Soviet Union. The thermometer count rises from 1 in 1701 to over 9000, then drops back to under 3k today. That has an impact… I calculated “decade Global Average Temperatures” from a data set reduced to the 3000 longest-lived thermometers. A sample of the records is below. The next-to-last field is an average of all data for the decade, while the last field is the number of thermometers (station IDs) active in this group from that 3000 in the data set. You can see that the GATs don’t change much from decade to decade anymore.

DecadeAV: 1890   
 0.6  2.2  5.8 11.9 17.0 20.9 23.2 22.4 19.0 13.2  7.2  2.9 12.2 1174
DecadeAV: 1940   
 0.3  1.3  5.4 10.4 15.3 19.1 21.6 20.9 17.5 12.3  6.2  2.0 11.0 2851
DecadeAV: 1990  
-0.1  1.4  5.7 10.8 15.4 19.3 21.6 20.9 17.2 11.9  6.1  1.0 10.9 2186
DecadeAV: 2009   
 1.7  2.2  6.6 11.7 15.9 19.9 22.3 21.7 18.0 12.3  6.9  2.1 11.8  209

I have no explanation for why the count of long-lived thermometers drops to 209 in the most recent data.
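
For anyone who wants to sanity-check the selection idea before the full code posting, here is a minimal sketch of the “count years per station, keep the long-lived ones” step. It is not my GHCN reader: it assumes a pre-digested input file named station_years.txt with a 12-character station ID, a space, and a 4-digit year on each line (the real GHCN records need proper parsing first), and it applies a 64-year cutoff directly rather than ranking the stations and keeping the top 3000:

program long_lived_stations
  implicit none
  ! Sketch only: assumes one "station_id year" pair per line (no duplicate
  ! years per station), NOT the real GHCN record layout.
  integer, parameter :: maxsta = 20000
  integer, parameter :: cutoff = 64          ! keep stations with >= 64 years
  character(len=12)  :: id, ids(maxsta)
  integer            :: years(maxsta), nsta, year, i, ios

  nsta = 0
  open(unit=10, file='station_years.txt', status='old')
  do
     read(10, '(a12,1x,i4)', iostat=ios) id, year
     if (ios /= 0) exit
     do i = 1, nsta                          ! linear search for this station
        if (ids(i) == id) then
           years(i) = years(i) + 1
           exit
        end if
     end do
     if (i > nsta) then                      ! not seen before: add it
        if (nsta == maxsta) stop 'increase maxsta'
        nsta = nsta + 1
        ids(nsta)   = id
        years(nsta) = 1
     end if
  end do
  close(10)

  do i = 1, nsta                             ! list the "long lived" stations
     if (years(i) >= cutoff) print *, ids(i), years(i)
  end do
end program long_lived_stations

Flip the final test to years(i) < cutoff and the same sketch lists the short-lived stations instead.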

To say that I think this is of some importance is a bit of an understatement.

So my question to you is simple: Would you or someone you work with like to duplicate these results and potentially collaborate in producing a paper suitable for publication? I don’t have the PhD / credentials to publish, and would benefit from someone with skills at making tables into graphics anyway ;-)

I will be putting a copy of this letter up on my blog, and I will be putting up the code and a bit more detailed set of data / results over time (unless someone wants to “vet” the code and results first for publication). Please let me know what, if anything, you would like to do.

Thanks,
E.M. Smith


20 Responses to AGW is a thermometer count artifact

  1. solluvr says:

    The stations cannot simply be summed & divided by number of stations. Because there are more stations in the Northern Hemisphere, those temperatures will overwhelm the South. The “trick” is to weight the temperatures so they are representative of some portion of the planet’s surface. Alternatively, you could split up the stations by latitude, for example, 10 degree strips. Within those strips the temperatures should, at least, be representing the same season :-)

    In combining the strips, would it be best to control for the actual amount of surface area? After all, 80-90 degrees North is far less surface area than 0-10 degrees North.
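
    For reference, the surface area between two latitudes is proportional to the difference of the sines of those latitudes, so the strips are far from equal. A minimal sketch of the relative areas (geometry only, no station data, and not code from the post):

    program strip_area
      implicit none
      ! Fraction of the Earth's surface in each 10-degree latitude strip.
      ! Area between latitudes a and b is proportional to sin(b) - sin(a);
      ! the whole sphere corresponds to sin(90) - sin(-90) = 2.
      real, parameter :: pi = 3.14159265
      real    :: lat_lo, lat_hi, frac
      integer :: k

      do k = 0, 17
         lat_lo = -90.0 + 10.0 * real(k)
         lat_hi = lat_lo + 10.0
         frac = ( sin(lat_hi*pi/180.0) - sin(lat_lo*pi/180.0) ) / 2.0
         print '(f6.1,a,f6.1,a,f6.3)', lat_lo, ' to ', lat_hi, ' deg: fraction ', frac
      end do
    end program strip_area

    The 0-10 N strip covers roughly eleven times the area of the 80-90 N strip, so giving every strip (or every station) equal weight in a plain “sum and divide” overweights the poles relative to the tropics.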

  2. E.M.Smith says:

    @solluvr:

    Well, tell that to GIStemp! (And Hansen!)

    They eventually end up at “anomalies” via a method somewhat like what you describe, but before one ends up at “anomalies” via zones and boxes, one is left with temperatures and the Global Average Temperature…. There IS a promoted “Global Average Temperature” number, and it comes from GIStemp data after STEP1 and before STEP2. (GIStemp “zonalizes” into 6 global zones and “homogenizes” in STEP2, does the box/grid thing in STEP3, and adds in SST, via yet another anomaly map, in STEP4_5. Anomaly maps are not temperatures, so any Global Average Temperature MUST come from before the anomaly steps…) So your idea, while probably of merit, is not in conformance with what the rest of the world does and is antithetical to what GIStemp does in STEP0 and STEP1.

    So I’m in the odd situation of agreeing with you on a technical “what ought to happen” basis, but needing to point out that the AGW thesis is, in fact, built on a “sum the temps and divide” basis(!). So I have to say “Yes, you are right” but also “No, it doesn’t work that way in the political / promotional world today”, because the AGW thesis is based on a GAT number made by a “sum / divide” process (with which, btw, I strongly disagree!).

    At any rate, at the end of the day the major point is that in a set of consistent thermometers (the 3k with a lifetime of 64 or so years) you have no AGW “signal” while there is a warming signal in the total data set (though concentrated in the N.H. winter months). So I’m still left with the same basic observation that the AGW thesis (that there is an increase in the “global average temperature” – NOT zonal anomaly boxes…) is an artifact of how a GAT is calculated and is not present in the set of consistent recorded thermometer data.

    I must emphasize again: I absolutely do not agree with the idea nor the method of calculating a “Global Average Temperature” as used by the AGW advocates, and I think that the whole idea is a bogus one. Yet it is done. It is promoted. And it is the cornerstone of the AGW thesis. So I must address it on its own terms.

    And what I find when I “go there” is that it is an artifact of the comings and goings of thermometers. Nothing more.

    It really is a case of “Where in the world are Carmen Sandiego’s Thermometers?”

    So Many Thermometers, So Little Time

    IMHO, the whole AGW thesis is based on False Precision, with folks dancing in the error bands of calculations from whole-degree F records into 1/10 and 1/100 degree C mathematical paranoid fantasies; and on a broken concept of a “Global Average Temperature”. It clearly is NOT “global”, given the seasonal variation in the raw data (which is all we really have…); nor does the “Average” of a bunch of thermometers mean anything at all (what is the meaning of the global average phone number? The global average car color?); nor is the thing you come up with by averaging a non-Nyquist set of data a “temperature”… But:

    The point is that I can contest the validity of all those things until I’m blue in the face (and many folks have done so and continue to do so). The world yawns and says “But the GAT is going up!!!!” So I’m forced to address this paranoid delusion of a “global” temperature based on an “average” of temperatures. Oh Well!

    And what I’ve found is damning of the whole idea.

    1) It can NOT be “global” given the VERY strong seasonal signal in the data. (And don’t kid yourself. This is the only real data we have. If it isn’t in this data, it isn’t in existence in any data. You must resort to meta-data and proxies. Not a very good choice…)

    2) The “averaging” process is full of pitfalls. Dramatic problems of 2 whole degrees C magnitude (when the AGW thesis would have us paranoid over 1/10 degree C changes…) arise based on exactly how you make the “average”. These are program detail choices that a programmer must make to write the data processing code. (I absolutely must choose a ‘data type’ for a variable, what type conversions to do, and what precision to carry in the calculations. The language demands it. FORTRAN will assign a data type by default based on the variable name if you decline to explicitly state one; but you choose the variable name, so you still must choose the data type. It is unavoidable.) A small sketch of this follows the list below.

    3) It isn’t temperature. An average of a semi-random and changing bunch of thermometers is not a temperature. I don’t know what it is, but it is not very meaningful. You average a bunch of locations without regard for the placement nor number and you get exactly what again? (Your point!). And when you hold the number of thermometers constant (so the average might start to mean something) the AGW “signal” evaporates…
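
    As a tiny illustration of the point 2 data-type trap (a sketch only, nothing to do with the actual GIStemp code): with FORTRAN’s default implicit typing, any variable whose name starts with I through N is an INTEGER, so an average accumulated in such a variable silently throws away the tenths:

    program type_choice
      ! Sketch only. With Fortran's default implicit typing (no IMPLICIT NONE),
      ! names starting with I-N are INTEGER, everything else is REAL.
      ! "msum" is therefore an INTEGER; the fractional part is dropped every
      ! time the running sum is stored back into it.
      real    :: temps(4) = (/ 10.4, 10.4, 10.4, 10.4 /)
      real    :: rsum
      integer :: k

      rsum = 0.0
      msum = 0                       ! implicitly INTEGER (starts with "m")
      do k = 1, 4
         rsum = rsum + temps(k)
         msum = msum + temps(k)      ! the .4 is dropped at each store
      end do
      print *, 'REAL accumulator    :', rsum / 4.0   ! ~10.4
      print *, 'INTEGER accumulator :', msum / 4     ! 10 (integer division, too)
    end program type_choice

    A trivial example, but it is exactly the kind of per-variable choice the language forces on the programmer, and it moves the answer before anyone even argues about the climate.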

    Yet we have the news saying the GAT will go up by FOO! in the next decade and the world will end Real SOON now! based on whatever an average of thermometers might be…

    The bottom line is that even if I think the average of a bunch of thermometers is bogus (and I do), a bunch of folks don’t, and it is a cornerstone of the AGW thesis. So I address it on its own terms and find a simple way to show that there is no AGW even using the average of a bunch of thermometers (when you hold the number relatively constant).

    And that observation will not go away in “zonalizing”, homogenizing, pasteurizing, or any other *izing you come up with. Spread the thermometers far and wide. Make them zones, boxes, cells, “whatever”. Focus them onto a single continent (AGW would still have that continent warming…). There simply is no “warming” signal in those temperature records. (If there were warming, the average would go up to some degree over time for a fixed set of thermometers – even though I think the method is fraught with “issues”…) And there is NO WARMING SIGNAL in a relatively fixed set of thermometers.

    And if there is no warming in the basic data, any “warming” in a zonalized, boxed, homogenized world must be created by the process of creating the zones, boxes, or homogenized cells. Not by the actual temperature data.

    So pick one station and you find no “warming”. (I’ve done it. See the “picking cherries in Sweden” posting…).

    Picking Cherries in Sweden

    Pick a small set of stations averaged together. No warming.

    Pick a set of 3,000 stations worldwide for 130 years with every station present for at least 1/2 that interval. No warming. Try 1,000. 2,000. 4,000. No warming.

    The only time there is “warming” in the data is with the whole series that starts with one (cold) thermometer, ramps up to over 9,000 and then drops back to under 3,000 with the loss of the Siberian thermometers with the collapse of the Soviet Union.

    While I can’t say what an average of that random set of thermometers might mean, I can very much say that the AGW signal is not present in a set of thermometers that is fairly stable; and that the AGW thesis is an artifact of how that “Global Warming” temperature, the “Global Average Temperature” (that is supposed to rise by 2C in the coming years…), is calculated from the variable number of thermometers.

    (And while I think that the GAT is a not-very-meaningful number, and that looking at MAX or MIN temps would be better: It is the case that, were the world really warming, one would expect to see the data from a combined set of relatively stable thermometers also showing a tendency to rise over time. It doesn’t.)

    Basically: I don’t need to show how to calculate whatever fantasy number the AGW folks believe in and I don’t need to show that it has meaning nor validity. I only need to show that there is no warming signal in a relatively stable set of temperature data from a long lived and known set of thermometers. After that, it’s pretty clear that the fantasy world of AGW is based on bogus data, broken methods, and computer fantasies.

    At that point it really is “Not My Problem!”

  3. solluvr says:

    If you get their software to produce their numbers, then you’ll have an excellent base to start from. I agree GISS is broken when their GAT goes up while UAH and RSS are going down. The challenge is to find an “easy to explain” correction that is comprehensible. If the 5×5 gridded data is NOT adjusted for the amount of Earth’s surface area represented by each grid, AND adjusting appropriately removes the warming trend, you have a rock-solid paper.

    If the warming trend is an accumulation of minor programming snafus, the story will never have enough zip to get beyond this small audience.

  4. E.M.Smith says:

    The reality will likely be the sum of all possibles.

    There will be some minor programming issues (like int vs float choices) that most likely will only have a small impact. Presently this looks to be about 0.1C in the first couple of steps.

    There will be some “data artifact” driven behaviours. Presently this looks to be the biggest chunk of it. If I run the straight GIStemp on the straight GHCN data there is a very significant warming signal. Oddly, this is concentrated in winters, something that is hidden by the time you reach the annual anomaly map stage. That “warming signal” evaporates when short-lived stations are removed (i.e. when only stations with over about a 50-year life span are used). This could be tickling a bug, tickling an odd algorithm (like the ‘fill in the gaps’ bits that would be reduced in impact with more complete records), or just changing WHERE the temperature is measured WHEN (and taking out a type / place bias in the raw data as thermometers come & go…)

    And there may or may not be some portion driven by the algorithms themselves: the whole ‘reference station method’ and the “change the past” rewrite of older records.

    And that’s why I’m making a benchmark for characterizing what GIStemp does. Then you can change the data, and change the code, and see just exactly what is causing the warming signal, and what makes it stronger or makes it go away entirely. But you must start with a clear benchmark for the measuring…

    While I would hope that, as you pointed out, the signal could be directly tied to a failure to properly do some specific thing, like grid-area-proportional weighting, what I’ve seen so far is more pernicious.

    In constructing the benchmark, I found a strong warming signal in the raw GHCN data, but only in N.H. winter. In stabilizing the data with a smaller (though still quite large! About 1/2+ of the total data records!) set of more reliable thermometers, that warming signal “goes away”. This is independent of GIStemp!

    The thing I don’t want to discover, but that actually looks most likely at this point: GIStemp may be a perfectly valid set of processes doing exactly what it claims to do and introducing at most trivial change / bias to the data and it may be the raw data itself that is biased via the arrival and departure of thermometers from the historical record. With GIStemp only responding to that artifact… Perhaps in a stronger way than the other temperature programs (due to the “fill in the blanks” nature of the program). That is what this exercise seems to have surfaced (at least through STEP3 of the processing).

    At any rate, I need to select my “stable” set of data and get GIStemp to run on it “end to end”. Then compare that output to the “base case” of GIStemp on the full data. Then I can make some pretty strong statements about what causes which… Oh, and I’m going to “turn off” various GIStemp behaviours to characterize what they contribute to the “warming”. I’ve already done this with STEP0. I’ve looked at the intermediate steps, and there just isn’t much “there there”. You saw some of that with the USHCN2v2 code analysis. It works out to about 1/1000 C (which is pretty much irrelevant). The 2 C in the raw data that evaporate when the short-lived stations are removed is the “big deal” as of right now.

    So I have more work to do.

    But I’m still hopeful that the raw data analysis, with the seasonal-only warming and the dependence on the arrival and departure of thermometers to generate a ‘warming signal’, is of merit in its own right and could generate a minor paper.

  5. solluvr says:

    Exceptional work! Pretty easy to envision a mechanism that would primarily affect winter temps. Whether the warming is full Northern hemisphere, North America, Russia, or Europe would help choose the correct mechanism.

    For example, the use of salt in de-icing roads, which became widespread in the 60s & 70s. I’ll invest a bit of time to find some decent data for road salt consumption over time. This would affect North America and Western Europe, first. Eastern Europe and Northern Asia would warm a bit later, if at all.

    Another, which would have a different “signature”, decreased GCR incidence => reduced clouds => snow melts between precipitation events. This would be more widespread, and evenly distributed.
    Although.. I guess soot could have a similar effect as the population in the northern zones increased.

    Neither of these would have any significant effect in the Southern Hemisphere. I can think of more, but will wait to see how the data comes out.

  6. E.M.Smith says:

    Thanks for the compliment! My speculation right now centers on the arrival of new stations in the global record in the last 50 years in places that are “newly rich” like the middle east and parts of the tropics, coupled with the loss of Siberian thermometers. Both would add temps to the N.H. Winter average. One, via added warmer stations. The other, via removal of cooler stations from the winter record.

    On the “Will the Good GHCN Thermometers Please Stand Up” thread I post the code I used to filter the stations for the “good ones”, and also explain how to change it to get a list of “the bad ones”. It would be very straightforward to take that list of “short lived stations” and look up their station info. That tells you exactly what is causing the “warming signal”.

    I’m “down in the weeds” of GIStemp right now or I’d do it. If nobody else looks up what stations they are, I’ll probably hack together something in a few days.

  7. E.M.Smith says:

    Couldn’t resist. I “hacked together something” and the result is the new posting about the short lived stations: Well There’s Your Global Warming Problem.

    I’m presently pondering making a program to match the 10,000 station ids to names and locations… but it’s late… maybe tomorrow.

  8. Pingback: Jennifer Marohasy » AGW is a Thermometer Count Artifact: A note from E.M. Smith

  9. Warwick Hughes and others have been claiming as much for some time. But nice to see a new approach to the issue and discussed with much candour.

    And I’ve posted a snippet from the above at my blog. http://www.jennifermarohasy.com/blog

  10. Warwick Hughes and others have been claiming as much for some time. But nice to see a new approach to the issue and discussed with much candour.

    And I’ve posted a snippet from the above at my blog.

  11. E.M.Smith says:

    Jennifer,

    Thanks for the pointer!

    For some reason I cannot explain, your first posting ended up in the WordPress SPAM queue. I fished it back out. I don’t have much set up (other than a fairly large limit on links – 7 or 8?), so “it wasn’t me”… You might want to find out what WordPress does with your site name, and why…

  12. Dave E says:

    Many of the Siberia stations dropped from GISTemp are still reporting; it may be interesting to check them out if you ever get the chance.

    Also, am I correct in thinking that you cannot get truly “raw” data for the stations from NOAA?

    DaveE.

  13. E.M.Smith says:

    As near as I can tell, you can select the degree of “adjustment” only in a small range. There are some adjustments you can not get un-adjusted. In theory, you could start with the historical station reporting sheets, but that would be one heck of a lot of work.

    I think it is “reasonable” to assume that a little bit of what looks like rational adjustment (for things like equipment changes) ought to be acceptable (trust, but verify…). From what I’ve seen so far, the “big lumps” are in the number and placement of the thermometers.

    I just did a posting on “GIStemp STEP1 data change profile” that shows a fairly strong impact of the number of thermometers on what GIStemp produces. It is still only a “correlation” analysis, not a causality through the code analysis, but it’s a pretty strong statement all the same…

    Since the next step after STEP1 starts the zonal mapping, this is the last stop for the thermometer data before they get turned into grids and zones. They now say all they can say, and what they say is that the warming is largely in winter, largely in sync with thermometer count, and not filtered out by GIStemp, as GIStemp seems to have a sensitivity to the number of thermometers in its product. (Not really all that surprising…)

    Oh, and yeah, it would be interesting to do the Siberia data as a specific case. So much to do, so little time (and no money… that this is all being done by a semi-retired volunteer with all the $Billions slopping around for “climate research” is a whole damning thread of its own…)

  14. Christopher Game says:

    Dear E.M. Smith,

    Your work is very good and valuable, perhaps one of the keys to saving the nation. Keep it up and make it grow.

    Yours sincerely,

    Christopher Game

  15. E.M.Smith says:

    Thank you for the compliment.

    This work does raise some points that cannot, in any way, be covered by the CO2 thesis.

    It also is fully in line with what I know of human behaviour.

    Where do people come into this? Simple. We love to believe we can achieve perfection.

    GIStemp is, at its core, just a giant filter. A filter designed to remove the impact of sparse thermometers in some spaces and some times. A filter designed to suppress the impact of overloading of multiple thermometers in other spaces and other times. Its proponents choose to believe that it does a perfect job. Yet nothing we build can be absolutely perfect.

    But the folks who built it and use it and depend on it need to trust it, to believe that it is “perfect enough”. And it isn’t.

    Frankly, one of the hardest parts of what I’m doing is yet to come. It is to test the “Q” of GIStemp as a filter. Every filter can be overloaded or have some signal leakage; the question is the figure of merit of that filter: how much does it select for only the desired signal and reject the out-of-band signal?

    I first approached this problem from the point of view that GIStemp was horridly broken in some way, or had a fundamental pernicious fault. Having dragged my brain through the thing from end to end I’ve mostly found minor issues of the sort all programmers put in their code (it’s extraordinarily hard to get every detail exactly right the first time…). A bit of warming of the data, but nothing spectacular. (See the STEP1 Data change profile posting). About 1/2 C maybe, but fairly evenly distributed in the data and with a profile that matches the arrival / departure profile of the thermometer count. I’m now of the opinion that it tries hard to filter out the bolus of short-lived thermometers, but is simply overwhelmed by the size of the signal.

    There are about 10% of the thermometers (1300) that give a really clean history of the planet over a 100+ year life span. They show no warming. There are about 10,000 thermometers that come into existence for a short time, and then evaporate. Looking at those records shows a fairly strong “winter warming” signal in the data during a few decades of time (they are likely S.Hemisphere or tropical thermometers, given the seasonal profile). Trying to filter out 80% to 90% of your signal takes one “High Q” filter. GIStemp is just not up to that task.

    So we swamp the filter with an extraordinarily strong signal, then are amazed to see a “warming signal” in the output.

    All because we like to believe in perfection.

    (And a little tiny bit because characterizing GIStemp is very very hard to do… Seems I’m the only one willing to take it on, even though the original builders of the code ought to have done it. Part of my goal here is to get enough of the “heavy lifting” out of the way so that others can join in the process. I have a clean port of GIStemp now that anyone can run, without the work of porting / debugging and offered to give it to folks. I’ve published the raw data characterization code I wrote. I’ve pointed to the places that need investigation with “dig here” postings. I’m still hoping that the “Ya’ll Come!” will be heard and we can have a communal “Barn Raising” building an analysis of the behaviours of GIStemp. But if it comes to it, I can build a barn by myself. It just takes a while longer …)

    My only real worry at this point is that December is fast approaching. Doing it myself, I won’t be done enough to have any impact at all on Copenhagen. To be done and visible enough in time to mean anything, I need to throw / kick a long pass to someone with better field position or I need to pick up one heck of a lot of big guys clearing the field in front of me. Oh well. You don’t complain about your field position, you figure out how to use it to your advantage. Got Lemons? Time to make lemonade!

  16. Very interesting! I can confirm a similar trend in Australian records. I picked out some stations with a long history. Some going back to 1880. I picked the locations where I did not expect heat island contamination.
    In the isolated country locations there was no upward trend, and in several cases a statistically significant downward trend (e.g. Bathurst Prison, Deniliquin). You can access the raw data from http://www.bom.gov.au/climate/cdo/about/sitedata.shtml
    There was a clear upward temperature trend in the big cities – about 0.6 degrees centigrade per century.

    I will put a link to my results (plots) on my web site. If you need them urgently then please email me.

  17. E.M.Smith says:

    Thanks!

    No urgency. I’ve got a load on the plate already. Your results are very interesting. It fits the global pattern. Nothing really happening. Some of the best stations with very slight cooling. A little UHI in the places where you would expect it.

    It will probably be next Tuesday or so before I can take a look in depth at your site. (I’ve already issued too many promises for this weekend…)

    Thanks!

    E.M.Smith

  18. Pingback: El calentamiento global parece circunscribirse en las ciudades del mundo (“Global warming seems to be confined to the world’s cities”) « Escuadron de la verdad

  19. Stas Peterson says:

    Dear Mr. EM Smith,

    You are doing a splendid job. Analysis such as this should have been a routine set of tests for data quality. But I fear the ‘climatologists’ are not trained in data handling, or in what happens to data as it is processed or varied in volume.

    From the CRU exposure, it appears that academic institution has no idea of what data it has, or what it did with it.

    Hopefully the data processing at GISS or UAH or RSS has somewhat more competent people. Either that, or we must admit that the baby was thrown out with the bathwater long ago.

    Sincerely,
    SP

    REPLY: [ Thanks! Yes, routine in the commercial world. But I’ve had folks from the AGW Climate “Science” side tell me it’s a pointless waste of time! Hard to believe, but that is their belief. I was always taught that in science the data was everything; and in programming you needed to “Characterize the data” and evaluate data quality and screen bad data as the first and most important step. Guess that’s a lost art these days… I can speak directly to GISS as I have GIStemp running and have read all the code. It is just as bad as UEA CRUT (and in some places maybe worse). BOTH CRUT and GIStemp use the same GHCN input data and both have the same issues with thermometer change because of that. Heck, in the commercial world I’ve even seen systems where each data item was key-entered by two different operators. If they did not match, that item got screened out for re-keying. I can only imagine the flak I’d catch for suggesting they ought to key-enter every data item 2 times… Yet that was ‘normal’ for a lot of years for little things like water bills (prior to Optical Character Recognition). And there is a complete lack of “end to end benchmarks”. Or any benchmark as far as I can tell. They publish a paper saying: “Foo has been peer reviewed in context A”, then proceed to write code to apply Foo in all sorts of contexts, and never bother to do a QA / Benchmark on it. Amazingly sloppy from a programmer’s point of view. -E.M.Smith ]

  20. Bill S says:

    With regard to solluvr’s post back in August mentioning the use of salt on roads in the ’60s and ’70s, does the rather large increase in the number of roads built after WWII have a similar effect? Both the interstate highway system and the urbanization of the cities created a lot of road area, and I think at the time most of it was blacktop instead of concrete. Or maybe that is covered under land-use changes.
