Find the Stations, Find the Warming
I was chasing down the fact that the temperature stabilizes when you use only long lived thermometers and had “put off” the question of which ones were the “short lived” thermometers that showed the warming. But it kept nagging at me. These were THE problem. And it would be fairly easy to identify them.
My first cut was just to invert the logic of my “pick long lived” stations code and make it pick short lived. The result is rather dramatic.
But before we get there, what are the relative sizes of these two data series?
[chiefio@tubularbells vetted]$ ls -l *.Special *.Special.Bot
-rw-rw-r-- 1 chiefio 23286186 Aug 10 09:55 v2.Special
-rw-rw-r-- 1 chiefio 20412931 Aug 10 09:49 v2.Special.Bot
So we can see that I’ve got roughly a 50 / 50 split by data volume. While the “good stations” are only 3,000 out of 13,472, they account for a bit over half of the record.
In the inverse of this, I picked out only 10,000 “bad” stations. Earlier I assessed the sensitivity to the exact count, and it isn’t much: you can choose 1000, 2000, or even 4000 or 5000 “good” stations and the results are very similar. It would be interesting to make this an exact split of good vs. bad, but right now I’m rather excited about this and want to just get the results out. So yes, there is a “swing group” of 472 stations that I didn’t count as either good or bad. It just isn’t relevant.
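The selection itself (and its inversion) is simple. Here is a rough sketch of the idea in Python; this is not my actual code, and the record layout and the 50-year threshold are just stand-ins:

```python
from collections import defaultdict

def split_by_longevity(records, min_years=50):
    """Split station records into 'long lived' and 'short lived' sets.

    records: iterable of (station_id, year) pairs, one per data row.
    A station is 'long lived' if it reports in at least min_years
    distinct years; the short-lived set is simply the inverse pick.
    """
    years = defaultdict(set)
    for sid, yr in records:
        years[sid].add(yr)
    long_lived = {s for s, ys in years.items() if len(ys) >= min_years}
    short_lived = set(years) - long_lived
    return long_lived, short_lived

# Toy data: station "A" spans 60 years, station "B" only 10.
recs = [("A", y) for y in range(1900, 1960)] + [("B", y) for y in range(1951, 1961)]
good, bad = split_by_longevity(recs, min_years=50)
```

Inverting the “pick long lived” logic is just taking the other set.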
So What Did I Find? There’s Your AGW Problem!
So taking my benchmark code for Global Average of Thermometers by Decade, we get a very interesting set of data.
(Columns: decade, Jan-Dec monthly means, annual mean, station count.)

DecadeAV: 1759 -4.4 -4.3 -1.4 5.1 10.3 15.9 18.5 16.2 10.9 5.6 1.4 -3.7 5.8 2
DecadeAV: 1769 -4.4 -2.6 -0.6 4.8 10.7 15.6 17.6 16.3 12.6 6.5 2.8 -1.2 6.5 2
DecadeAV: 1779 -8.0 -7.5 -6.1 0.2 8.0 13.7 16.2 14.3 9.2 4.3 -2.0 -5.1 3.1 1
DecadeAV: 1789 -5.5 -4.1 -0.7 6.7 12.5 17.5 20.7 19.5 13.9 8.1 1.5 -3.5 7.2 1
DecadeAV: 1791 2.5 3.7 7.4 11.2 16.8 20.8 22.3 23.3 9.2 13.8 7.6 3.8 11.9 1
DecadeAV: 1809 -5.6 -4.2 -1.1 5.4 12.2 16.2 19.1 18.4 12.6 7.1 0.6 -4.1 6.4 5
DecadeAV: 1819 -6.1 -3.7 -0.7 5.0 11.0 15.6 18.1 16.5 12.4 6.1 0.2 -4.3 5.8 19
DecadeAV: 1829 -3.9 -1.7 2.5 7.8 12.6 16.0 18.0 17.2 13.6 8.5 2.7 -0.4 7.7 37
DecadeAV: 1839 -4.4 -1.9 1.4 6.3 12.1 16.1 17.8 16.5 12.8 8.0 1.9 -1.6 7.1 44
DecadeAV: 1849 -5.0 -3.0 0.2 6.4 12.0 16.1 17.9 17.2 12.9 7.5 1.8 -3.2 6.7 44
DecadeAV: 1859 -1.7 -1.0 1.9 7.4 12.6 16.9 18.8 18.3 14.1 9.5 3.2 0.4 8.4 56
DecadeAV: 1869 -1.4 -0.2 2.5 7.7 12.6 16.6 18.6 17.9 14.6 9.2 3.6 -0.1 8.5 36
DecadeAV: 1879 -1.6 -1.4 2.3 7.7 12.4 17.1 19.6 19.1 14.9 9.8 4.2 -0.9 8.6 58
DecadeAV: 1889 -0.1 1.2 4.0 8.5 13.5 17.5 20.2 19.6 16.4 11.1 6.6 2.1 10.1 81
DecadeAV: 1899 1.5 2.4 5.6 10.2 14.2 18.0 20.2 19.7 16.9 12.4 7.0 3.1 10.9 120
DecadeAV: 1909 3.3 3.8 7.1 11.4 14.8 17.6 19.5 19.3 16.9 13.5 8.9 5.0 11.8 176
DecadeAV: 1919 4.7 5.9 9.0 13.0 15.3 17.5 19.1 18.8 16.8 13.8 9.9 6.0 12.5 226
DecadeAV: 1929 3.0 4.5 7.8 11.7 15.2 17.9 19.4 19.0 16.9 13.7 9.0 4.6 11.9 309
DecadeAV: 1939 3.8 5.1 8.5 13.1 16.9 19.5 21.0 20.6 18.1 14.3 9.4 5.4 13.0 829
DecadeAV: 1949 2.3 3.5 7.2 11.9 15.6 18.5 20.1 19.6 17.1 13.2 7.9 3.6 11.7 1984
DecadeAV: 1959 5.9 7.0 9.9 13.9 17.2 19.7 21.0 20.6 18.4 14.9 10.4 7.3 13.8 4297
DecadeAV: 1969 6.3 7.6 10.4 13.9 17.0 19.3 20.5 20.2 18.1 15.0 11.0 7.6 13.9 6090
DecadeAV: 1979 6.0 7.2 10.0 13.5 16.5 18.8 20.1 19.7 17.5 14.2 10.3 7.1 13.4 5871
DecadeAV: 1989 4.7 5.8 8.8 12.7 15.9 18.3 19.8 19.5 17.1 13.6 9.1 5.7 12.6 5212
DecadeAV: 1999 8.4 9.4 11.8 14.7 17.6 19.7 21.1 20.8 18.8 15.8 11.6 9.0 14.9 1338
DecadeAV: 2009 8.1 9.2 12.1 15.1 17.8 20.1 21.2 21.1 19.1 16.1 12.3 9.4 15.1 1258
The far right number is the station count, showing the ramp up of thermometers over time. What is clearly happening is that a large number of thermometers came on line for a brief time between the 1950s and 1980s, then largely went away again. We added a whole bunch of thermometers in a bunch of places that didn’t have them before; probably warm places and the Southern Hemisphere, I’d guess. But inspection of the station IDs would make it clear.
Now look at those temperature averages rise!
1879 January: -1.6
1999 January: 8.4
Delta: 10.0 C
Now THAT’s Global Warming!
1879 August: 19.1
1999 August: 20.8
Delta: 1.7 C
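For the arithmetic-minded, those deltas come straight off the decade table above:

```python
# Monthly decade averages quoted from the DecadeAV table (deg C).
jan_1879, jan_1999 = -1.6, 8.4
aug_1879, aug_1999 = 19.1, 20.8

jan_delta = round(jan_1999 - jan_1879, 1)   # winter warming: 10.0 C
aug_delta = round(aug_1999 - aug_1879, 1)   # late summer warming: 1.7 C
```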
And while we do get a bit of August warming, it’s clearly still the case that it’s the N.H. Winter that’s warming in the record. I’ll bet a free beer that it is because stations were added in places that are warm during N.H. Winter. This graph:
To my eyes looks like a LOT of added light blue, green, and bright yellow all over the tropics / equatorial zones. Only Canada looks like an offsetting batch of cool thermometers added at the same time.
So to me it shows the short lived stations as being in “warm places” by my count of little blue dots, but it would be better to look over the actual station IDs and their actual locations / histories. The list is easy enough to make.
UPDATE: So I made the list(s). I also further divided the record history into quartiles and even looked at the top 10% of thermometer records. There are 4 quartile lists, with thermometers labeled with name, latitude, and longitude, in the links in this listing of the quartiles of records:
Some of the “bad” lists look to have a LOT of airports on them. It would truly be ironic if the AGW thesis was based on discovering that we planted a lot of thermometers at airports as the “jet age” developed…
So what we seem to have discovered in the AGW “process” is that if you add a pot load of thermometers in warm places, you get more warmth in your average of thermometers. Who knew…
Want a Nobel Prize? All you have to do is demonstrate that fact.
(Oh, and figure out how to get through the PC committee that hands out the Nobel PeeCe Prize…)
I’m going to continue my “characterization” of GIStemp, but I’m pretty sure that errors in GIStemp or odd methods are not the “Big Fish” in the AGW fantasy. It looks to me like it is a simple failure to THINK about what your data actually mean.
Actually, I do believe that you do have the basis for a science paper here that could be published in a peer reviewed journal. To do so you will need to make contact with a couple of “name” scientists who have previous peer-reviewed papers in the climate science area.
See, this evidence appears to be something that can be subject to quantifiable analysis. It will require deeper specification as to the identification of each station, the time it came on line, the latitude and longitude, etc.
ALSO you will have to verify that these stations are in fact being processed by GISTEMP — and be able to show using that code the impact.
If all of this can be done and the impact is significant, I would say you have a paper. (Of course you will do most of the work and probably be listed last on the list of authors.) But it should, if it holds up, be a significant paper. IMO.
Yeah, a lot of work ahead, still. I’m up for it.
I’m not very worried about whose name comes first (heck, I think it would just be fun to be published at all) and I’m going to be doing all the work anyway. (I’m already writing code to sort the stations by quartile of duration and characterize their latitude / longitude and country code.)
Per getting it through GIStemp: That’s a bit more thorny. GIStemp requires that all stations exist in the data or it stops. You can change it, but then it isn’t exactly GIStemp any more. So not only does it need to be “loosened up” a bit on station count, but that needs to be vetted. Not hard, but a complexity.
That, too, is on my ToDo list. I’ve made some headway on it.
So, thank you for your evaluation. I don’t know how long it will take me to get everything done, but I’m not the kind of person who stops chewing on a bone once he’s sunk his teeth into it, even if it does get a bit hard and dry ;-)
Global temperature measurements are NOT just taking the average of a whole bunch of thermometers.
The spatial distribution is taken into account.
It appears that your hypothesis is that adding a bunch of thermometers in warm spots will cause the global average temperatures reported by GISS and CRU to go up.
The reason your hypothesis is false is that the method for determining global temperature is (simplified version)
1. Gather up all the records.
2. Figure out the best estimates for grid boxes throughout the earth — some calculations use grid areas of roughly equal size; some use grids that are a fixed number of degrees longitude and latitude (usually 1 x 1 or 5 x 5).
3. Determine a global average temperature by doing an area-weighted average of the grid estimates of step 2.
Obviously each of these steps is more complicated, but that’s the basic sequence. As you can see, adding 10,000 thermometers spaced close together in a warm spot would not change the calculation of global average temp.
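Those three steps can be sketched in miniature. The toy Python below assumes simple fixed-degree boxes and cosine-of-latitude area weights; real products differ in many details, but the point about crowded cells survives the simplification:

```python
import math
from collections import defaultdict

def gridded_global_mean(obs, box_deg=5.0):
    """Area-weighted mean from (lat, lon, temp) observations.

    Stations are binned into box_deg x box_deg cells; each cell
    contributes its mean temperature weighted by cos(latitude),
    a stand-in for the cell's surface area. Piling more stations
    into one warm cell changes that cell's mean, not its weight.
    """
    cells = defaultdict(list)
    for lat, lon, t in obs:
        key = (math.floor(lat / box_deg), math.floor(lon / box_deg))
        cells[key].append((lat, t))
    wsum = tsum = 0.0
    for vals in cells.values():
        mid_lat = sum(la for la, _ in vals) / len(vals)
        w = math.cos(math.radians(mid_lat))
        tsum += w * sum(t for _, t in vals) / len(vals)
        wsum += w
    return tsum / wsum

# One cool cell plus one warm cell: adding three more thermometers
# to the already-sampled warm cell leaves the gridded mean unchanged.
base = [(60.0, 10.0, 0.0), (2.0, 10.0, 30.0)]
crowded = base + [(2.0, 10.2, 30.0), (2.0, 10.4, 30.0), (2.0, 10.6, 30.0)]
m_base = gridded_global_mean(base)
m_crowded = gridded_global_mean(crowded)
```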
I’m not talking about the “global temperature”; I’m talking about the “warming signal”. The two are related, but different. If there is to be a rise in the “global temperature” over time, there must be some evidence of increasing temperatures in the individual records (and in averages of them) over time as well. THAT is the “warming signal”. It is NOT a temperature, per se (then again, I’m not convinced that a gridded box of averages of thermometers is a temperature either…).
What I’ve shown is that there is a “warming signal” in the GHCN data. Further, that warming signal is NOT present in those thermometers that have a long life span. Yet it is present in those thermometers with a very short lifespan.
Now you can take those long lived thermometers and run them on through the process and you will not be getting any warming out of the process that is from the data; for the simple reason that there is no “warming signal” there to average, grid, zonalize, interpolate, or otherwise display. All the signal is in the short lived stations.
Further, the signal is only present in the winter months.
I don’t care how you do your grid boxes, your zones, or your other activities in the spatial domain; it will NOT change the pattern in the time domain.
So it can not be from things that have a relatively constant impact over time: the roughly 10 year solar cycles, CO2 concentrations. Things like that do not have an annual pattern to them…
Further, I’ve read every single line of GIStemp. I’ve ported it to Linux and have it running in my living room. I’m painfully aware of what it does and exactly when and how it does the grids and zones. An analysis of those bits is “in the queue” for a couple of weeks from now.
For now, I’ve run the temperature data all the way through STEP3. What I see is pretty simple. The zonalizing and gridding behaviours are, IMHO, likely to reduce the impact of added thermometers, but can not eliminate it entirely. It manages to take a 10 C rise in January and reduce it a lot, but still lets some signal through. Enough to cause folks to get worked up over a fraction-of-a-degree to 1 or 2 C “rise”.
Your reply amounts to a bald assertion that GIStemp acts as a perfect filter. It does not. It acts as a partial filter.
I’ll be spending the next months demonstrating that fact.
But until then, rest assured: There must be a “warming signal” in the raw data to find “Global Warming” and a rise of the Global Average Temperature (whatever that is, or means).
That signal is not present in summer.
That signal is not present in the long lived thermometers.
That signal is strongly present only in the short lived thermometers added, by inspection, largely in warm places during NH winter.
And no amount of gridding, zoning, or other manipulation can change those facts.
BTW, I don’t have a “thesis” (yet). I have a data analysis with a characterization of what the data look like.
You go, E.M.
I’m in your cheering section.
As happens in so many areas of modern “crises,” all one hears in the media is the high-level executive summary – but if one digs into the details, the opposite story is found.
How much does anybody want to bet that the global warmists never counted on anyone digging into the data and finding their little secret.
Thanks for the moral support! I fully expect what I’ve uncovered to create a bit of a firestorm, and the AGW “side” has shown a tendency to attack the messenger, so I’m expecting some degree of “flack”.
But I am who I am. I was raised with the English Bulldog as a model of determination (“Mum” was from England…). A mix of English, Irish, German, Viking. All of them with a tendency to stick with things; and not folks to toss rocks at…
Some time ago I’d said I was going to hang onto this bone and chew it down to dust (and some turkey tried to make a joke out of it). Nope. Just the truth.
Since then, I’ve ported GIStemp, learned the code to a remarkable degree, developed a reasonable benchmark for characterization of the code behaviours, along the way developed a profile of the data (you must know the data to know what it will do…), discovered that the data do not support the CO2 driven AGW thesis, and here shown that it more closely relates to thermometer changes over time.
Not bad for my first year (or less?).
From this point forward, I’m mostly looking to show what happens to the benchmark as it goes through GIStemp. I’d finished the “runs” for that through STEP0 and was going to post that; when the idea for changing the code to look at the “bad” thermometers grabbed me: thus this posting instead.
So sometime tonight I hope to complete the posting I’d planned for today. We’ll see. After that will come a brief look at STEP1 impacts (already covered over on WUWT, so I’ll do a lighter treatment), then on to STEP2. Hopefully all by Friday.
Slightly longer on the “development schedule” is to figure out how to best get GIStemp to run on a reduced set of GHCN records. It has built in dependencies on matching records between different data sets, so you can’t just yank the bad thermometers from the record and have it run. I can “fix” that, but it’s likely to take a week (plus some testing…).
Once that bit is out of the way, then I can run the “good cop / bad cop” comparison test of the long lived vs short lived thermometers (through STEP3 – everything but SST from Hadley). At that point it will no longer be possible to say that there is any “magic” impact from gridding…
Oh, and on the “as time permits” schedule is to either convert the “bigendian” data files to “littleendian” or get a bigendian box to run on. I have an old SPARC 2 in the garage, but lord knows what the root password is (and it’s been a few years since I hacked into one and stole root. I think I can still do it 8-) Luckily that “issue” only impacts the addition of the Hadley SST data into the process; and at this point I’m pretty sure that just does not matter any more.
I hope that counts as “go hard” and “go fast”. It sometimes seems painfully slow, but in retrospect it does look decently fast. It always feels hard ;-) (Rather like digging in someone else’s sewer…) But then there are the “Ah Ha!” moments that make it all worth it:
I first expected to see either no “warming signal” over time or to see a small one diffused through the whole body of the raw data. I expected to see a diffused small (1/2 C?) rise over time amplified by GIStemp through the STEPs. It was startling to run the GHCN set all the way through STEP3 and STILL have a large seasonal rise / fall. Then the August “do nothing” jumped out at me, followed by the mid-winter rapid rise. That was worth more than a pound of gold to me.
I’m going to put up another posting on this (as a reply to those folks who keep trumpeting the gridding / boxing as more important than the actual data…) but I’ll just mention it here:
The approach to characterizing the data is rather like the work one does in computer forensics and signals intelligence. You may not be able to decode the message, or even say for sure how big it is; but you can very often show the existence of the message or the signal. Do the last bit positions of a flat color field of a picture have consistent slow change? No issue. Do they seem nearly random? It is highly likely there is an embedded steganographic message. An encrypted data stream is even easier to see.
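To make the analogy concrete, here is the crudest possible version of that test; a toy sketch, not a real forensics tool:

```python
def lsb_ones_fraction(pixels):
    """Fraction of least-significant bits that are set.

    In an untouched flat color field the LSBs tend to sit far from
    0.5 (long runs of the same value); an embedded encrypted payload
    pushes the ones-fraction toward 0.5 - "nearly random".
    """
    bits = [p & 1 for p in pixels]
    return sum(bits) / len(bits)

flat = [128] * 1000                           # untouched field: every LSB is 0
stego = [128 + (i & 1) for i in range(1000)]  # payload-like LSBs: half ones

flat_frac = lsb_ones_fraction(flat)    # far from 0.5: no hidden signal
stego_frac = lsb_ones_fraction(stego)  # near 0.5: signal likely present
```

You never decode the message; you just show that a signal is being carried.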
So for “global warming” one ought to see some kind of general diffuse rise of the thermometer record over time. Not all, and not consistently, but there must be some rise, or there is no warming. If the rise is highly concentrated in one part of the year, and not others, you can see that in an “average of thermometers” as long as the number of records is even modestly stable. If the rise is highly present in some thermometers, and missing in others, that too ought to show up in the average of those thermometers.
This does not tell you the temperatures, nor what the “Global Average Temperature” might be. But it DOES tell you if the signal is in the data.
So I expected to average the thermometers by year, and by decade, and see some general tendency to rise over time more or less evenly distributed over the months and years. Sure, with some degree of stochastic behaviour. 2 steps forward, one back. Some years with a weak month or 3, other years with extra strong months. But the signal ought to show up in a general tendency to rise over time.
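The kind of averaging I mean can be sketched like this (toy data and a hypothetical layout, not my production code):

```python
from collections import defaultdict

def decade_month_means(readings):
    """Average raw readings by (decade, month).

    readings: iterable of (year, month, temp) tuples.
    Returns {(decade_start, month): mean temp}, i.e. the simple
    "average of thermometers" tracked over time.
    """
    bins = defaultdict(list)
    for year, month, t in readings:
        bins[(year // 10 * 10, month)].append(t)
    return {k: sum(v) / len(v) for k, v in bins.items()}

# A stable thermometer reading 10.0 C, plus a warm 20.0 C one that
# only reports in the second decade: the January average jumps
# with no actual warming at either site.
data = [(y, 1, 10.0) for y in range(1950, 1970)]
data += [(y, 1, 20.0) for y in range(1960, 1970)]
means = decade_month_means(data)
```

The jump in the average is purely an artifact of the changing station mix, which is exactly the effect being hunted here.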
I didn’t get that. I got a strong seasonal pattern reflective of the Northern Hemisphere (that told me the data were poorly distributed spatially, and so subject to single hemisphere specific effects).
Further inspection showed that the trend over time was different for summer vs winter months. Now that woke me up! The warming signal was unevenly distributed by season.
It was at that moment that I decided to look at a cut down more stable set of thermometers (trying to remove the “noise” to see what was signal and what was not). The stable thermometers showed no warming signal… Worth two pots of gold to me…
If there is “global warming” a stable set of thermometers ought to show some warming, even if unevenly distributed in space. They didn’t. That just can’t be resolved with the AGW by CO2 thesis.
The last link in the chain of events was this posting. I realized I could very easily invert the logic of my sort / selection and see the “negative space” of the good thermometers. The seasonal and trend effects are dramatic in the data selected this way.
Substantially ALL the “global warming” signal is carried by the records from these stations (or a subset of them … more to do…). So the warming isn’t “Global” at all. And the station arrival dates have a strong correlation with the arrival of more warming. That strongly suggests that the warming signal is an artifact of station addition to the record. (Though that needs a bit more proof to make it a rock solid case. But that the warming signal in the whole body of data makes it through STEP3 and that the signal only exists in this subset of the data strongly imply that the whole of AGW comes down to these selected stations.)
Now if you want to turn that signal into an “Annual Global Average Temperature”, then you would benefit from grids, boxes, etc. But since we’ve already shown that the signal isn’t global and that it has a strong seasonal component, I’m left to question the wisdom of making a “global” thing that isn’t and an “annual” thing that can’t be. So the “Annual Global” part of “Annual Global Average Temperature” seems to me to be hiding truths that ought not to be hidden…
I’m also still unconvinced that an average of gridded boxed interpolated thermometers is in any way a “temperature”.
So that just leaves us with “average”… And the “reference station method” puts the lie to an average. There is a great deal of non-averaging going into the calculation of a cell value.
Now some folks would like to skip all that stuff and just baldly assert that gridding and boxing make up for all ills. I’m not willing to.
And that’s why I dug out the “sig int” skills and went hunting for where the signal was being carried. And it isn’t carried globally. And it isn’t carried in the summer. And that makes an Annual Global Average Temperature rather too much fiction for my tastes.
OK, I’m off to the next steps:
We know where the signal is, and isn’t. It’s time to watch it flow through GIStemp and see what effect it has.
We have a valid benchmark with well characterized behaviour. Now we can see what happens to it through GIStemp.
(And, at the risk of being overbearing about it: A benchmark does not need to have any valid meaning to the data. It just needs to be a repeatable and well characterized set of data and processes that produce a usable and reasonably well understood result.)
Oh, and I guess I need to advance to the “making a thesis” step. The “warmers” seem to want me to have one…
I think I’ll start with “Warming ought to be visible in the data” and consider “An average of things getting larger ought to get larger”… Maybe I need to punch it up a bit 8-)
I think you are on to something very important here. I write that because I have some experience with trends in large data sets from my engineering days. Data sets that span many years and from many locations. Trends that were properly analyzed by assigning weights to data points to account for size differences (analogous to clusters of thermometers).
In my case, the data were taken from hundreds of refineries around the world, each refinery supplying thousands of data points, and the studies spanning many years. Then, we did the study again with chemical plants of various sorts, which included far more locations and much more data per plant, because there are so many more of them and they are much more complex than a refinery.
I also visualize what you found in the global warming record as having hundreds of small teakettles around the world, each being gently warmed over a long period. The numbers recorded for temperature over time should show a gradual warming trend. That trend would of course be changed if more teakettles were added to the group, and each added teakettle was warmer than those in the earlier group.
I like this outcome, especially because it is consistent with what I know to be true, and that I blogged about, and Dr. Pierre Latour wrote about in his January 2009 letters to editor of Hydrocarbon Processing: CO2 does not, because it cannot, have any cause/effect relationship with global temperature. To do so, CO2 must violate the fundamental laws of process control theory. The laws of process control are well-established and inviolable. Literally hundreds of thousands of control loops operate every second of every day around the world, and have done so for decades and centuries, even in primitive environs. As an example, to heat water, one places a kettle of water over a fire. To heat water faster, one places the kettle closer to the fire. One does not heat the water faster (ever) by placing the kettle further from the fire. That is an example of a fundamental law of process control.
I am not sure how an attorney and engineer can assist you in your efforts, but let me know if something occurs to you.
Thanks for the offer. I’ll let you know. As of right now my biggest “issues” are the lack of a “bigendian” box to run GIStemp and the difficulty of modifying GIStemp to take a variable number of stations. (Though I’m thinking that maybe I can just pre-process all the input files to have matching station records and in that way not have to change GIStemp at all…)
The modify to take variable stations probably must rest with me, since I think at this point I know more about GIStemp than anyone else on the planet other than the authors of it.
The “bigendian” issue is not too relevant (since it is only at the SST add in point that it breaks things) and easily fixed if I wanted to port to a Mac PowerPC or dig out an old Sun SPARC box (or even an IBM RS6000 or the HP that can run in the endianness of your choice). Basically, there are a lot of hardware choices. I just need to get the hardware, or: the g95 web site says the compiler has a swap endian flag… so it might be as simple as getting the “beta” release rather than the “last stable” that I used (Stable.91). In my release, the flag does not work.
Maybe that’s it… Find out exactly what release of the g95 compiler has a working “-convert=bigendian” flag in file open statements. And find out if that compiler is stable and the flag really works. I’d still have to do the “upgrade”, though the install was not that hard…
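Failing a working compiler flag, the data files themselves could be byte-swapped in a preprocessing pass. A sketch of the idea, assuming the usual case of 4-byte words in a Fortran unformatted record (the function name is mine):

```python
import struct

def swap_fortran_record(payload_be):
    """Byte-swap one Fortran unformatted record payload, big- to little-endian.

    Assumes the record is a run of 4-byte values (INTEGER*4 / REAL*4);
    the 4-byte record-length markers bracketing each record would get
    the same treatment.
    """
    n = len(payload_be) // 4
    words = struct.unpack(">%di" % n, payload_be)   # read as big-endian
    return struct.pack("<%di" % n, *words)          # re-emit little-endian

big = struct.pack(">3i", 1, 2, 3)   # three big-endian 4-byte integers
little = swap_fortran_record(big)
```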
Beyond that, well, if you like to play with FORTRAN and computers, I can put GIStemp on a box for you and then you, too, can ask “what if” questions of it 8-)
What if I shut off the reference station method in STEP2? What if I set the radius for reference stations to 100 km instead of 1000? (Or 1500 km in one step!). What happens to the Pisa temperature history if I raise the adjacent station temp by 5 degrees? Or 1? Basically testing the sensitivity of the “product” to changes in the “parameters” and data.
That would be more of a long term, as time permits, one little thing at a time process. It would be fun, but both low demand and low probability of a “big deal” being surfaced.
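For flavor, the distance weighting at the heart of that radius question can be sketched. The linear taper to zero at a cutoff radius follows the published description of the reference station method; this is an illustration of the knob being turned, not GIStemp’s FORTRAN:

```python
import math

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = p2 - p1
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def reference_weight(d_km, radius_km=1000.0):
    """Linear taper: weight 1 at the station, falling to 0 at the cutoff."""
    return max(0.0, 1.0 - d_km / radius_km)

# A station 500 km away counts half-weight at a 1000 km radius,
# and not at all at a 100 km radius.
w_wide = reference_weight(500.0, 1000.0)
w_narrow = reference_weight(500.0, 100.0)
```

Shrinking the radius simply drops distant stations out of each cell’s estimate, which is the sensitivity being probed.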
Other than that my issues are all just shortage of time. Not much can be done about that unless more folks want to get their hands dirty running GIStemp and stress testing it. (Though now that I have it ported, the actual work to make it go is pretty small).
Finally, your points about CO2 and control theory are spot on.
There is absolutely no way that near monotonic rises of CO2 can produce a rise of temperatures in winters and nearly nothing in summers. The control mechanism is just not there.
Excellent work E.M. You say you are prepared for the backlash – Tamino et al will be furiously trying to debunk this already.
I’m sure that you are enjoying yourself immensely! Will you be able to get any results out before wonderful wonderful Copenhagen in December?
I’m already getting results out! (IMHO). Published reviewed? No way. I don’t have the connections for that (though I’d happily work behind the scenes for anyone who did…). But will I have more to say? You betcha!
All the “heavy lifting” is done at this point. From here on out it’s just (relatively) simple direct steps. I have model code to read and process the GHCN format files, so any “bright idea” now takes me minutes to small hours to code something up instead of weeks (as during the porting process).
FWIW, I’ve noticed an interesting statistic. I’ve had about double the total “hits” as have come in from referencing sites. And about 10 times prior daily volume… That means that a lot of folks are coming by “word of mouth” or “word of email”… I suspect that a fair number of those are the folks assigned to watch for “issues to attack” who are passing it around saying “what do we do about THIS?!”
So far the best that has been tried is the “Well you need to Grid and Box to get the Real Temperature” (see above); which is really kind of a lame dodge of the basic point I raise. I’m not trying to find the Real Temperature. I’m looking for how temperatures change over time and space in bulk.
The rise of temperatures in the records is not remotely evenly spread in time and space. While I show decade averages here, the pattern holds at the annual level (yes, I looked at all 308 years of records!) and at finer grain as well; see the source code posted here for some of the places I took a peek. The Global Warming thesis requires that there be significant spread over time and over space if CO2 is causal.
The spread over time is not there in the long lived records.
The spread over space is not there in the long lived records.
The only rise is in the cohort of stations added in a bolus of time and space.
Spread that around all you want. All you can claim to be doing is taking a bolus and blending it into a non-changing stable record to create what isn’t there.
The data are what they are. Respect the data.
Your work complements the surfacestations project by Anthony Watts. I first noticed you discussing things on WUWT with Leif Svalgaard the other day and came by to have a look. Now I just have to figure out what you’ve done!
It sounds like the English Bulldog of the Magna Carta and Bill of Rights all the way to the Declaration of Independence. Much better than marxist mutterings of social justice — their way or the highway via CO2 Boulevard. I like “with liberty and justice for all” as well as “equality of opportunity”. Blogging the Chiefio way shows respect for these precious traditions. With gratitude, E.M. Smith.
The devil is in the detail. Keep up the excellent work, the devil is scared.
@Son Of Mulder:
I love it! It gave me this “flash” of a mental picture of one of those little devil caricatures (sort of like an evil version of the daemon characters sometimes used with Linux) and me, Elmer Fudd-like, hunting Little Dewils…
Here Liddle Dewils, come out come out where ewer you are!
I’m off to hunt the Dewil, the Dewil, the Dewil,….
Let’s just hope it works out a bit better for me than it did for Elmer ;-)
Some time ago I looked at the changing character of months as evidenced by CET. My graph shows the temperature data (the yellow dots show CO2 spikes as recorded by Beck and are immaterial to the points I am making below).
My comments refer right back to the start of the graph in the 1660s
Generally past years are cooler than the 1990’s which was just 0.10C warmer than 1730’s and 1920’s
Overall the monthly figures are dragged right down by the very cold little ice age which covers most of the period from the 1660’s to around 1880
As above, with 1730 cooler by .10, 1860 by .2, 1870 by .3, and 1920 by .2
As above, but 1730 cooler by .6, 1920 by .8, and 1930 by .9, i.e. one of the greatest changes in any month (other than winter, Dec-February inclusive)
1990s cooler than 1940 by 0.7, 1860 by .3, and 1730 by .2; otherwise broadly similar
May: 1990s cooler than 1660 by 0.3; same as 1720 and 1730; cooler than 1800 by 0.3; same as 1820 and 1830; cooler than 1830 by .10 and 1910 by .3; otherwise broadly the same
1990 same as 1980, 1970, and 1960
June: cooler than 1960 by .4, 1950 by .2, 1940 by .3, 1930 by .4, 1890 by .4, 1870 by .1, 1860 by .1, 1850 by .3, 1840 by .3, 1830 by .6, 1820 by .4, 1800 by .2, 1790 by .2, 1780 by .8, 1770 by .7, 1760 by .1, 1750 by .4; same as 1740; cooler than 1730 by .7, 1720 by .9, 1710 by .3; same as 1700 and 1680; cooler than 1670 by .3 and 1660 by .3
Overall June has become a much cooler month
July: 1990 cooler than 1730 by .4, 1750 by .5, 1760 by .4, 1770 by .4, 1780 by .4, 1790 by .4, 1800 by .4, 1870 by .5, and 1930 by .4
Overall July has become a rather cooler month
August: 1990 was cooler than 1930 by .3, 1770 by .5, and 1700 by .3
Overall August has become a little warmer.
September: 1990s cooler than 1720 and 1730 by .2, and 1740 by .1. It was the same as 1930 and cooler than 1940 by .2
Overall there was little difference
October: 1990 cooler than 1960 by .4, and .4 warmer than 1900, 1850, 1830, 1820, 1730, and 1660
Overall October has become a little warmer
November: 1990s cooler than 1970 by .2
Overall this month has become distinctly milder
December: 1990 cooler than 1980 by .5, 1970 by .6, 1950 by .2, 1940 by .1, 1860 by .1, 1820 by .3, and 1730 by .3
The month has become a little milder
Temperatures have fluctuated considerably throughout the period with months often changing their ‘traditional’ characteristics.
Generally modern winter months have become milder than the winters of the little ice age period (not surprising!) which brought the overall averages for the year sharply down. November has also become distinctly milder and March much milder. July has become rather cooler whilst June is distinctly cooler, other months show limited difference either way.
The early 1700’s were remarkably similar to the current period but the warmth was over a more extended period and came from a lower base. In this respect average temperatures have barely changed in nearly 300 years from pre industrial times. Many other periods have been fairly close in warmth to the modern era but again the little ice age winters knocked the annual averages down somewhat. The 1820’s 1900’s 1920’s and 1930’s were also notably warm.
I was of course not surprised that LIA winters were much colder than now. I was a little more surprised to realise that it is the very cold winters dragging down the average, and that many warm records occurred in the LIA. I was very surprised to see that June and July have actually cooled.
Hope you find this to be of interest as it seems to support your contention.
TonyB: Very interesting. FWIW, I have the GHCN decade averages going back to 1709. I didn’t post them largely just to keep the total “number load” lower on folks, but also because there was a very low number of stations in the early years, so the averages are certainly biased. The early years are cold, in conformance with what you would expect from the LIA and from a NH bias to the set.
While I used to think that the 1880 cut-off was malice, I’m now more inclined to think it was just a result of eyeballing the station count and doing a cut-off where there were too few stations in the past for a “global average”.
At the end of the day, it still comes down to a highly variable thermometer count biasing the data set for the GHCN. At this point, I’d expect individual long lived stations to be more informative than any averages of random station collections…
I posted this on another thread at WUWT that I haven’t seen you participating in. It is relevant as it shows the effect of a pre-Hadley data set (CET) versus a post-Hadley one (Zurich, from 1864). The ups and downs of the climate are reflected very well in CET, but the Zurich record misses the largest changes. It also factors in the UHI effect, which is affecting the disproportionately large number of stations in urban areas in global data sets. I believe global temperatures can hide too many sins (and smoothing), and I am currently collecting those from individual countries to examine.
“Your talk of UHI reminded me of some graphs I produced a while ago to look at this very effect, when I factored a 0.1 °C per decade UHI effect into two long temperature data sets.
The first graph is of CET (in red) back to 1660, unadjusted for UHI, over which I have overlaid in green the Zurich figures (also unadjusted for UHI) back to 1864. The amount of mirroring is remarkable through the decades until the rapid growth of Zurich since the war.
Consequently I factored in 0.1 °C of UHI per decade from the 1960s into the second graph below. This has dramatically reduced the observed warming and puts it closer again to the CET figures, which aren’t perfect but don’t suffer quite as much from UHI.
As regards UHI, a new study illustrates that our personal observations that it is often much hotter in urban than rural areas, particularly at night, appear more correct than the previous scientific studies that minimised the apparent observed UHI effect.
The amount of adjustment made to take this UHI factor into account has been limited, and the apparent impact on temperatures will consequently be larger than had previously been factored in, if this new study is accepted.
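For anyone who wants to play with TonyB’s style of correction, here is a minimal sketch of “factor in 0.1 °C of UHI per decade from the 1960s”. The decade means below are invented placeholders (not the real Zurich figures), and `apply_uhi_correction` is just an illustrative name, not code from either of us:

```python
# Sketch of a cumulative UHI correction: subtract an assumed 0.1 deg C per
# decade of urban warming from a station's decade means, starting in 1960.
# The input values are made-up placeholders, not real station data.

UHI_PER_DECADE = 0.1  # assumed urban heat island warming, deg C per decade

def apply_uhi_correction(decade_means, start_decade=1960):
    """Return decade means with cumulative UHI removed from start_decade on."""
    corrected = {}
    for decade, temp in decade_means.items():
        if decade >= start_decade:
            # first affected decade gets one decade's worth, the next gets two, ...
            decades_elapsed = (decade - start_decade) // 10 + 1
            corrected[decade] = temp - UHI_PER_DECADE * decades_elapsed
        else:
            corrected[decade] = temp
    return corrected

# Hypothetical decade means (deg C) for a growing city:
raw = {1940: 8.9, 1950: 9.0, 1960: 9.1, 1970: 9.3, 1980: 9.5, 1990: 9.8}
print(apply_uhi_correction(raw))
```

By the 1990s the correction has removed 0.4 °C, which is the scale of effect TonyB describes as closing most of the gap to CET.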
There will be occasions when you will be frustrated and discouraged. Take heart: there is real power in the truth. A determined man with truth in his hand is an unstoppable force. Just cross all your T’s and dot your I’s, and DON’T GIVE UP!
I wish I could help, but that ship has sailed.
@TonyB: The WUWT articles move so fast that I can’t keep up with all of them. I’ve chosen to “lurk” on some of them just to keep the amount of time it takes reduced. So I’m reading (almost) all of them (great stuff over there!); but not commenting as much as I used to (probably a good thing ;-)
In thinking about why the 100+ year stations might not show much UHI, it occurred to me that in 1908 there were not a whole lot of airports… By definition, the really old stations will not include airports. They may well include more astronomical observatories out in the hills, and more big old university campuses with thousands of acres between them and the city. It would be a very interesting area of investigation to look at exactly where those 1000 Best thermometers were located. I can make the list of station IDs if anyone wants it.
@G. Karst: Thanks for the encouragement. BTW, one can always help. Even the bagpipe player helps the army move ;-) How many songs are there about the drummer boys of the old army?
Heck, I’m a semi-retired guy doing this on a 20+ year old PC that started life as a 486 and was “upgraded” to a 400 MHz AMD chip about a decade ago. Talk about your “sailed ship”!
You clearly have a brain, a keyboard, and can type. That’s all it really takes to make a difference…
Did you adjust for time of day of the measurements?
Part of the beauty of this approach is the simplicity. The data are not changed in any way, simply averaged. This does not give an actual temperature, but it does show where the temperature records are rising, the “warming signal”.
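For the curious, the whole “just average” step really is about this simple. A minimal sketch (the station readings below are made-up placeholders, not GHCN values, and the function name is mine for illustration):

```python
# Minimal sketch of the "simply averaged" approach: take raw monthly
# readings, no adjustments and no gridding, and average whatever the
# thermometer records report in each decade. The sample tuples are
# invented; the real input is the GHCN v2 mean temperature file.

from collections import defaultdict

def decade_averages(readings):
    """readings: list of (year, month, temp_C) tuples from any stations.
    Returns {decade: mean of raw values} with the data left untouched."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for year, month, temp in readings:
        decade = (year // 10) * 10
        sums[decade] += temp
        counts[decade] += 1
    return {d: sums[d] / counts[d] for d in sums}

sample = [(1905, 1, -2.0), (1905, 7, 18.0), (1987, 1, 1.0), (1987, 7, 21.0)]
print(decade_averages(sample))  # {1900: 8.0, 1980: 11.0}
```

Note what this does and doesn’t give you: not an actual global temperature, but a direct view of how the averaged record moves as thermometers come and go.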
Excellent stuff – looking forward to the final conclusions. No criticism of this site but if this really hits the spot, would not an additional posting at WUWT or CA reach a wider audience?
I periodically put links to here from WUWT. I’ve also given Anthony blanket approval to put anything from here that he finds of interest on WUWT (and I’d help re-write anything if needed). Basically, it’s not up to me any more. I’ve also put a couple of links from CA to here in a couple of comments there. I’ve seen Steve looking at some of this stuff too; so he’s aware. I think he’s just waiting for a more finished analysis (that I’m working on too…)
I guess the almost-last point would just be that the number of sites (with which I was not familiar before) that have put up links to me has risen significantly in the last week. My traffic is up by a factor of 10 over prior months. This analytical approach is so clear, so easy to “get”, that it slaps you upside the head. Folks see that, and put a link to it. I hope that exponential trend continues for just a couple of more months 8-)
So yes, you are correct, but in the fullness of time… and until then, folks are sharing what they found.
Frankly, I don’t have much of an “ego thing”. I’m not looking for anything to build my reputation or career, just for truth. So I’m deliberately structuring things so that anyone can pick things up here and run with them. There are, IMHO, about a half dozen very decent “science papers” that can come from this approach. I’m putting source code here, observations here, and “dig here” signs on things of interest. If I end up as 3rd author on someone else’s paper, that’s fine. If I just end up as a footnote on a paper, well, that’s OK too. And if all that happens is that a foundational paper derailing AGW paranoia gets published without so much as a footnote, well, I’ll sleep well at night…
And that’s why some of this is a bit “hurried” or could use more polish to get some graphs on it. Because I *want* folks to say “Wow, I could add some graphs, explore how the thermometer count changes by decade by latitude, and get my publishing quota for the quarter done…” When that happens, I’m having a fine single malt Scotch after a nice steak dinner… And if it doesn’t happen, then I’ll just keep on keepin’ on, one brick at a time, until the thing is built.
So you want this better seen? Then “Y’all Come!”, it’s up to you all…
Wow – this is priceless stuff. Thank you for your commitment and hard work.
The average Skeptic like myself loves the simplicity of it all.
So easy to explain to my friends. Another nail in the coffin…
Thank you! One of the basic tenets I follow is that a tidy mind is best based on simple understandings. If something is too complex, then either it can use simplification, it is wrong, or your understanding has not matured to the clarity it ought to have.
This takes chunks of my time to “clarify” things, but once I’ve got it nicely figured out, I can keep it straight and let it take residence in my mind without cluttering the place up… (Small spaces need all the “neatening” they can get ;-)
It also stands out in stark contrast to the puff and obfuscation of the folks who want to “baffle ’em with Bull Shit”… I like that contrast…
From the look of the globally averaged month by month pattern for the “103 yr and longer” thermometer record the station list must be heavily dominated by Northern Hemisphere stations. Does it contain Sthn Hem stations?
What would this long-lived site record look like if it were split into two independent parts, i.e. Part 1 = Northern Hem, Part 2 = Southern Hem? Would that lower the trends further?
The global temp for Jan is not 1 deg C and the global temp for July is not 23 deg C. I suspect the “real” global mean temp doesn’t alter by more than about 2 or 3 deg C through the year.
If you change the proportion of Sthn Hem records in the “quartile”, this would almost inevitably impact the overall trend for the summer and the overall trend for the winter. So it’s probably not just a matter of changing station abundance in Siberia, China, etc.
rob r: “From the look of the globally averaged month by month pattern for the ‘103 yr and longer’ thermometer record the station list must be heavily dominated by Northern Hemisphere stations. Does it contain Sthn Hem stations?”
Substantially the same thought that came to me when I looked at the data. I answered it here:
That breaks it out by 20 degree bands of latitude.
@EMS: First off: great stuff!
However, the risk with only looking at “good” thermometers (using a subset of the data) is that you might be introducing a bias yourself. The “warmers” will discard your data as data mining unless you can show that your particular subset is statistically neutral.
In that light, I’d like to tell you of one of my own experiences. After my friend Mike pointed me to some data in the Dutch KNMI temperature records (http://www.knmi.nl/klimatologie/metadata/debilt.html)
I plotted the data in an excel graph and found that in two occasions, when a new station was added (station 2 in 1950 and station 4 in 1987), in the first year the older stations were off by about 1 degree Celsius from the new station, and the year after they were suddenly in sync again. The thing is, that the temperature pattern changed by a degree Celsius in exactly those sync periods (first down in 1950, then up again in 1987). I wrote an e-mail to KNMI asking them if the older stations were being calibrated to the new ones after the first year, but never got a reply.
Now this is just an anecdote and might not be relevant on a global scale, but the message here is that long-term thermometers don’t necessarily give the “best” or most stable temperature data. I would advise spending some time showing that your subset is a fair sample, e.g. by showing that its stations are, on average, in sync with the other thermometers in their grid/area *in between* the time points when other (“hotter”) thermometers are added that influence that area’s average.
Good suggestion. A “consistency check” is always in order!
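As a sketch of what such a consistency check might look like (all names and values here are hypothetical placeholders, not part of my actual code; a real check would use matched months of GHCN station series over a window with no station additions):

```python
# Hedged sketch of the suggested "fair sample" check: measure how the offset
# between a candidate record and the mean of its neighbors drifts from the
# first half of a quiet window (no station additions) to the second half.
# Near-zero drift suggests the record tracks its area; a large value flags
# a possible bias in the subset. All inputs below are invented.

def mean(xs):
    return sum(xs) / len(xs)

def subset_drift(candidate, neighbors):
    """candidate: list of temps; neighbors: list of same-length temp lists.
    Returns the change in (candidate - neighbor mean) between window halves."""
    neighbor_mean = [mean(vals) for vals in zip(*neighbors)]
    diffs = [c - n for c, n in zip(candidate, neighbor_mean)]
    half = len(diffs) // 2
    return mean(diffs[half:]) - mean(diffs[:half])

# A record that tracks its neighbors shows zero drift:
steady = [10.0, 10.5, 10.25, 10.75]
print(subset_drift(steady, [[9.0, 9.5, 9.25, 9.75],
                            [11.0, 11.5, 11.25, 11.75]]))  # 0.0
```

Run that over each long-lived record against its nearby stations and you have a first-cut answer to “is this subset statistically neutral?”.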
BTW, these are stable “Thermometer Records”, not thermometers per se. So a change of “modification” history (a new adjustment) would make a new modification flag and thus a new record. (I *think* this is why so many drop off in the last decade. The station may still be there, but some change in how the record is handled gave it a new record ID; but that’s just a guess and needs checking.)
My initial goal was only to show where the “warming signal” was carried in the data (and that’s in the shorter and to some extent newer records). Having shown that, the next step would be to validate the long lived records as representative and “fair”.
But even without that step, when 27%+ of the data carry no warming signal, it isn’t ‘global’…
I just finished reading the top article and all the comments, very interesting!
I am so saddened by science being so full of people who are more concerned with achieving their next grant than achieving new, accountable, scientific research. I have the utmost respect for anyone and everyone who does any form of research with an open mind and open methods! Keep up the great work, and I hope you help to clarify some of the rather gray areas floating around these days.
Alas, what E.M. Smith completely misunderstands is that global warming OUGHT TO BE (according to standard theory) primarily a winter warming, just as his data shows. There is NO conflict between the CO2 theory and observation; rising CO2 concentrations SHOULD affect winter temperatures much more than summer. And dry places (high plains, mountains, the polar regions, some glaciers) should be affected far more than wet.
The reason is that the absorption bands of CO2 and water vapor overlap to some extent. When there is a lot of water vapor in the air, rising CO2 has little effect. Thus, we see the effects of CO2 primarily in places and times where the climate is cold or dry or both. To understand the details one has to study the calculations of a program such as MODTRAN, but the broad outlines of the relative effects of CO2 and H2O are well-understood, if rarely discussed.
REPLY: [ No. What YOU completely misunderstand is that when there is NO warming in summers: there is no warming. Further, what you misunderstand is that when you look at a stable set of long lived thermometers there is NO winter warming either. When you do look at the “winter warming” in the data set you find it happens in a set of short lived thermometers added in places with hot winters. THAT is in the data. Not theory. Not some fanciful “ought to” or “models predict”. The real data looked at up close and personal. And that set of facts is completely incompatible with the “CO2 as causal” theory. Get over it. Oh, and it’s not “his data” it is the dataset direct from NOAA / NCDC. It is their data. -E.M.Smith ]
“There is absolutely no way that near monotonic rises of CO2 can produce a rise of temperatures in winters and nearly nothing in summers. The control mechanism is just not there.”
That’s what you wrote back on 11 August 2009. I was merely trying to point out that this statement is wrong.
As for “your data” vs “their data”, I meant that you have chosen data subsets and methods of data analysis that are surely YOURS. I’ll believe your results when I see them in a peer review journal, properly judged and commented. It’s perfectly possible (although, in my view, unlikely) that you are correct. But given that CO2 really is a greenhouse gas, and given that its concentration has increased markedly, it’s hard to believe that no warming whatsoever has taken place.
REPLY: [ I don’t care what you believe. That is between you and your gods. I do care what the data say. They say “fudge in the data” loudly, and they do not say “CO2 profile of action”. You can assert “that is wrong” all you want. On “your side” is a bald assertion. On “my side” are 100+ graphs of temperature profiles (based on an anomaly process) showing what the data actually say. In many cases, of two adjacent countries, one has a warming “hockey stick” pivoting precisely upward in 1990 while the other is dead flat. CO2 can’t do that. The (anomaly based) change of temperatures has step functions with thermometer change, and pivots with thermometer change and processing change (“duplicate number” change, as NCDC calls it). That is not a CO2 profile of action. And the major impact is via ‘peak clipping’ of excursions to the cold side in winter, as one would expect from applying a “QA process” that peak clips (as NCDC have said they do, tossing out ‘out of range’ values); so I’d further assert that the most likely “cause” is a QA process that “sounds good and does bad”. But you can believe whatever you like. And once you figure out how to turn CO2 on in winter and off in summer, you can also figure out how use of the NCDC produced data makes it “my data”. It isn’t. At most, I subset it and look at subsets in detail. Yeah, that’s my process. No, that’s not “my data”.
And per peer review: as we saw in the Climategate emails, the CRU crew were busy suborning the peer review process and influence peddling with the editors. So no; peer review is no longer a suitable metric for veracity. And talk to the UEA CRU crew if you don’t like that; it’s their fault they buggered it. So I’m doing “public review”. And no, I still don’t care what you believe about it. “The truth needs no gatekeeper. -E.M.Smith” So you can take a look at all the (anomaly based) dT/dt graphs, and you will not find a steady increase in warmth over the industrial revolution. You will find neither a logarithmically decreasing increase (as CO2 saturates) nor an exponential lift-off (as IPCC fantasies would predict). But you will find lots of dead flat countries, some that “dip” in the baseline as the thermometers are changed but end up no higher today than before the baseline, and some that are dead flat up to a “pivot” in 1990. Magical stuff, this CO2, with instant action precisely on the 1990 change of “Duplicate Number” from changed NCDC processing of the data… -E.M.Smith]