A Comparison of the Global Historical Climatology Network Data, Version 1 and Version 3
by E.M. Smith
June 12, 2012
The Global Warming Issue, and Consequences
The major thesis of Global Warming is that there is a change of temperature, on the order of about 1/2 C to 1 C, caused by human activity in the burning of fossil fuels. The hypothesis then moves on to consequences, where just about anything that could be bad is caused by Carbon Dioxide Warming, at least as long as research funding dollars are available to study it, and then the hypothesis moves on to remediation. Remediation largely focuses on The Green Agenda, as in the UN Agenda 21 effort, and de-industrialization.
It is that final step that is the most concerning. There are great swaths of the economy to be cut down and replaced with untested, or in some cases, tested and failed, technologies and alternative economies.
But what if they are wrong?
There is a very long chain of events, of unproven and ill-conceived conjecture, and even of fundamentally broken theories, all of which must be true for the Anthropogenic Global Warming theory to be valid and for the proposed “cures” to be correct. The consequences of being wrong are horrific. We are, in essence, about to play economic Russian Roulette based on a theory which comes from ONE set of data: the GHCN, the Global Historical Climatology Network.
There are folks who will assert that there are several sets of data, each independent and each showing the same thing: warming on the order of 1/2 C to 1 C. The Hadley CRUTEM, NASA GIStemp, and NCDC products. Yet each of these is, in reality, a ‘variation on a theme’ in the processing done to the single global data set, the GHCN. If that data set has an inherent bias in it, by accident or by design, that bias will be reflected in each of the products that do variations on how to adjust the data for various things like population growth (UHI, or the Urban Heat Island effect) or for the frequent loss of data in some areas (or loss of whole masses of thermometer records, sometimes the majority all at once).
A great deal of energy (on both sides) has gone into arguing various points of minutiae. Are city lights a decent proxy for heat islands? Are airports inherently warm? (They are. Then the minutiae moved to “how much?” and “Is it enough to bias results?”)
In the end, those arguments have often generated more heat than light.
On the one hand, the narrative is that “Only Green Sustainable Solutions can save the planet!” You, too, can be the heroic one, using Schumpeterian “Creative Destruction” to remake the world into a new garden paradise of Clean Green Sustainable Energy and lead the world out of the darkness of coal and oil power.
But what if the “Creative Destruction” is in fact long on destruction and short on creation? What if we really need coal and oil to produce things like steel, glass, aluminum, and food?
Spain has already taken the lead down that path in the European Union. They have been “leaders” in solar and related “Green” power. Germany has pushed wind as a power solution as well. The result has been an economy in Spain where unemployment is over 18% and, among the youth, over 50%. They have even invented a new word, NINI: Neither in Employment Nor Education ( http://www.theglobeandmail.com/commentary/nini-and-the-european-dream/article1389123/ ). It describes the lost generation of young folks who have graduated school, or just given up on school as they see no job at the end of it, and are now unemployed and with little prospect. In Germany, increasing numbers of people are in “Fuel Poverty”, with utility disconnections rising fast.
The counter example is China, which is building roughly one coal-fired power plant per month (per the New York Times: https://www.nytimes.com/2009/05/11/world/asia/11coal.html ). They have recently experienced something of an “economic slowdown”, but from 12% growth per year to 8% (the exact number changes from month to month, with some reports as low as 5%). Those are growth numbers most economies would love to have. Growth provided by cheap and effective power.
So which is the real “Heroic Narrative” that will be written? The Green Dream? Or the one about rational folks looking at real world facts on the ground and saving the world’s greatest economy from Green destruction? Will it be the person who has an ocean view in Massachusetts despoiled with thousands of subsidized windmills (which tend not to run when most needed on cold still winter nights)? Or the person who keeps America at work making cars, computers, canning food, and having BBQ tailgate parties at the football game? Each of those things takes economical energy to produce.
Drive up the price of electricity, and aluminum smelting and metal can fabrication move to China. Steel arc furnaces shut down and move to where China can provide cheap coal-powered electricity. Cement kilns run on coal heat, and coal-based coke is used to reduce iron ore to iron and steel. The basic building blocks of industry depend directly on electricity and coal.
In short, every significant economic function of a manufacturing economy depends on affordable fuel. (As do most of the significant economic functions of an “information” or “service” economy – computer rooms run on massive electric consumption; and ever tried to grill a steak without gas or charcoal?) Each and every one of those manufacturing industries depends, fundamentally, on low cost power. Raise the price beyond the competition, and those industries will be destroyed.
Would it not be just as much a heroic act to prevent that destruction?
What if “the story” of Global Warming were, in fact, just that? A story? Based on a set of data that are not “fit for purpose” and simply, despite the best efforts possible, can not be “cleaned up enough” to remove shifts of trend and “warming” that come from data set changes; changes of a size sufficient to account for all of “Global Warming”, yet known not to be caused by Carbon Dioxide but rather by the way in which the data are gathered and tabulated?
In that case, the person who stands up and says, in essence, “The Warming Story Has No Clothes” is in fact the true hero. Saving millions of jobs and untold $Billions of economic activity from unjustified destruction. Destruction that will not “save the world”, as it is addressing a problem that does not exist. It is just an erroneous response to a mistake of computer data processing. All too common, but on a vastly grander scale. (Rather like the financial derivatives market meltdown where they, too, had computer models showing that everything was fully understood and risk management was settled.)
Examining The Data
Suppose there were a simple way to view a historical change of the data that is of the same scale as the reputed “Global Warming” but was clearly caused simply by changes of processing of that data.
Suppose this were demonstrable for the GHCN data on which all of NCDC, GISS with GIStemp, and Hadley CRU with HadCRUT depend? Suppose the nature of the change were such that it is highly likely to escape complete removal in the kinds of processing done by those temperature series programs? Would it be too much to ask that folks take just a bit longer to think about what they plan to do to the economy, given that kind of foundation of sand under the Global Average Temperature?
It is my opinion that the situation is exactly that way. And relatively easily demonstrated. The response from Hadley, Goddard, and NCDC will undoubtedly be that they have it all perfected. That they have peer reviewed each other’s papers and that they all agree that they can’t be wrong. Yet anyone can be wrong. The history of science is littered with discarded theories. Often theories that held dominance and were “consensus” for decades (or even centuries) prior to being overturned. That is just the nature of science. Newtonian mechanics were superseded by Relativity, just as Copernicus replaced the Ptolemaic celestial spheres. More recently the entire arrangement of which species are most closely related to which others has been overturned, the Linnaean names of plants and animals replaced with “clades” based on our new genetic tools.
But surely temperatures are more “fixed” than that? Fine folks took readings by looking at a thermometer and writing them down. For most of history that is how it was done. What could possibly change that written record?
In short, modern folks finding reasons to re-write the past. Perhaps good reasons. Perhaps not. We’ll see that point being argued for decades to come (perhaps longer). Yes, all these good folks believe they are right, and that they can not have made an error. Yet the changes are of a size and scope sufficient to account for all the “Warming” seen in the historical record. Surely when the warming we find in the temperature record is largely attributable to changes in the method of adjusting that data, and of processing it into a data series, there is sufficient cause for alarm to counsel against rash actions based on such a malleable history. Sufficient cause to ask that a true accounting be done, with proper independent Quality Control Audits all the way through. (Yes, there is no audit trail, such as one would see in an accounting report for a prospectus. Nor are the computer codes being used vetted and tested as are the codes for FDA drug approval. We are, quite literally, betting the nation’s economy on code that would not pass FDA requirements for a new form of aspirin.)
One Example Problem
This is but one example problem among many for the GHCN data set. The problem is “Revision History”.
There are three major revisions of the GHCN data set: Version 1, Version 2, and, most recently, Version 3. Over time, the exact temperature recording stations in the data set have changed. Sometimes many are added; often many exit.
From this flux of ever changing instruments, the assertion is made that one can calculate a Global Warming Trend. While there are many technical and philosophical issues with that assertion, the simple fact that the instrumental change can account for the “Warming” is distressing. (As an example of the technical issues, a Global Average Temperature confounds heat and temperature, which is commonly done by laymen but strictly avoided by engineers and scientists; except, it would seem, by climate scientists.)
For this particular example we will look at how the data change between Version 1 and Version 3 by using the same method on both sets of data. As the Version 1 data end in 1990, the Version 3 data will also be truncated at that point in time. In this way we will be looking at the same period of time, for the same GHCN data set. Just two different versions, with somewhat different thermometer records being in and out of each. Basically, these are supposedly the same places and the same history, so any changes are a result of the thermometer selection done on the set and the differences in how the data were processed or adjusted. The expectation would be that they ought to show fairly similar trends of warming or cooling for any given place. To the extent the two sets diverge, it argues for data processing being the factor we are measuring, not real changes in the global climate.
The dP or Delta Past method
The method used is a variation on a peer reviewed method called “First Differences”. It is one of the simplest methods to use. When doing data audits, the simplest methods are much less likely to have hidden problems that are not easily spotted. Computer codes thousands of lines long, in dozens of distinct programs, can be hideously hard to debug and can have subtle errors hidden in them for decades. The method used here is short, simple, and easy to check: dozens of lines of code in a single-digit number of modules (some of them as short as 2 or 3 lines). While not an ideal theoretical method to calculate temperatures in any one place, it is well suited to finding biases in the data sets.
The computer programs used to create the data graphed here are available at this link:
https://chiefio.wordpress.com/2012/06/08/ghcn-v1-vs-v3-some-code/
The various graphs and comparisons in this report can be found in an index at this link:
https://chiefio.wordpress.com/v1vsv3/
Unlike the codes that try to do homogenization (which has as many definitions as there are practitioners, it would seem) and that do a variety of “filling in” and data fabrication to create missing data, this method simply compares a single thermometer now to its own readings in the past. This is what is called an “anomaly process” in climate science.
The other codes from places like NCDC, GISS and Hadley do anomaly processes too, but often they are comparing a synthetic “Grid / Box” value made from one thermometer set today to a completely different set of thermometers in the past. That is prone to a variety of errors, including one called a ‘splice artifact’.
For example, GIStemp computes 16,000 “grid cells”. Yet there were only 1,280 currently active thermometers in GHCN v2, so the “present values” of 14,000+ ‘grid boxes’ were a polite fiction. A creation of the GIStemp computer program based on other cells up to 1200 km away. Hardly a ‘clean’ anomaly process; comparing one fiction in the present to another fiction in the past. Fictional values created by “homogenizing” the data and splicing together many thermometer records that often themselves contain values created by comparison and adjustments in the “homogenizing” process.
Any time data from different sources are glued together to make a continuous series, there is the risk that the “join” will be artificially displaced. This is a common and well recognized kind of error. In the various climate codes, the homogenizing process and the joining of different thermometer records into one synthetic record take steps to try to reduce the splice artifacts. It is not possible to perfectly remove them. So in large part the question becomes “was the join good enough?”
The code I used to make these audit graphs avoids making splice artifacts in the creation of the “anomaly records” for each thermometer history. Any given thermometer is compared only to itself, so there is little opportunity for a splice artifact in making the anomalies. It then averages those anomalies together for variable sized regions. (This can be any sized region that can be described with a thermometer World Meteorological Organization identification number series, or WMO number, and the Country Code. The highest order digit of the Country Code is the ‘region’; fundamentally, each continent. The first three digits taken together are the “country code” and they code for a single political entity, most of the time – Russia is divided into 2 parts, one in Europe, the other in Asia.) In the process of making these graphs, the different thermometers are averaged together as anomalies inside these selected regions.
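To make those mechanics concrete, here is a minimal sketch of the “difference each thermometer only against itself, then average within a region” idea. It is written in Python purely for illustration; it is not my actual code (that is at the link above and differs in detail), and the station identifiers and readings below are invented:

    from collections import defaultdict

    def first_differences(record):
        # Year-to-year change for ONE station, compared only to itself.
        diffs = {}
        years = sorted(record)
        for prev, curr in zip(years, years[1:]):
            if curr - prev == 1:                      # only adjacent years are differenced
                diffs[curr] = record[curr] - record[prev]
        return diffs

    def regional_dP(stations, prefix, last_year=1990):
        # Average the per-station differences for stations whose ID starts with
        # `prefix` (the region / country code digits), then accumulate backwards
        # from the most recent year so the present sits at zero.
        by_year = defaultdict(list)
        for station_id, record in stations.items():
            if station_id.startswith(prefix):
                for year, diff in first_differences(record).items():
                    if year <= last_year:             # e.g. truncate v3 at 1990 to match v1
                        by_year[year].append(diff)
        dP, running = {}, 0.0
        for year in sorted(by_year, reverse=True):    # newest year is the zero point
            dP[year] = running
            running -= sum(by_year[year]) / len(by_year[year])
        dP[min(by_year) - 1] = running                # the earliest year in the region
        return dP

    # Invented example: two made-up stations in GHCN region 6 (Europe).
    stations = {
        "60306240000": {1987: 10.1, 1988: 10.4, 1989: 10.2, 1990: 10.6},
        "61710384000": {1987: 8.0, 1988: 8.3, 1989: 8.1, 1990: 8.2},
    }
    print(regional_dP(stations, prefix="6"))

The point of the sketch is simply that no station is ever compared to a different station when the anomalies are made; the mixing of stations only happens in the later averaging step.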
While that is an accepted process (averaging different instruments via anomalies) and is done inside the various data set creation codes (such as GIStemp and HadCRUT) it inevitably creates a splice artifact. The only question is “How big?” Are the efforts taken to remove that splice artifact sufficient to separate it from the desired “signal” being sought? No one knows, as there have not been any benchmarks run on codes such as GIStemp to assess how good, or poor, they are at such splice artifact removal. We are, in essence, “betting it all” on the opinions of a few researchers and their peer reviewers (who are often from the same small group of agencies) that they have done everything correctly.
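Here is a toy illustration of how such an artifact can arise purely from station composition in that averaging step. All trends, dates, and station behaviors below are invented for the example:

    import numpy as np

    years = np.arange(1900, 1991)

    # Two made-up stations in one region with different local year-to-year changes.
    dT_a = np.full(len(years), 0.02)   # station A: about +2 C / century locally
    dT_b = np.full(len(years), 0.00)   # station B: flat locally

    # Station B "exits" the data set after 1950, as stations do between versions.
    regional_dT = np.where(years <= 1950, (dT_a + dT_b) / 2, dT_a)

    # Accumulate the averaged year-to-year changes into a regional anomaly series.
    regional = np.cumsum(regional_dT)

    early = np.polyfit(years[years <= 1950], regional[years <= 1950], 1)[0] * 100
    late = np.polyfit(years[years > 1950], regional[years > 1950], 1)[0] * 100
    print(f"trend while both stations report: {early:+.1f} C / century")
    print(f"trend after station B exits:      {late:+.1f} C / century")

Neither invented station changed its behavior; only the mix of stations in the average changed, yet the accumulated regional series shows a steeper “warming” after the exit.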
So this code is a bit different, in that it minimizes splice artifacts in the anomaly creation step, but it does not attempt to remove those artifacts in the “average the anomalies” step. The purpose here is to see how much variation there is in the data themselves, not to add yet another unknown quality of “adjustment”. In essence, we want to see, given a very clean and direct anomaly process uncluttered by masking processes such as ‘homogenizing’, whether there were significant shifts in the basic character of the data being fed into all those other data creation programs (such as NASA GIStemp, Hadley HadCRUT / CRUTEM, and NOAA NCDC products).
What Is Found
What is found is a degree of “shift” of the input data of roughly the same order of scale as the reputed Global Warming.
The inevitable conclusion of this is that we are depending on the various climate codes to be nearly 100% perfect in removing this warming shift, or in being insensitive to it, for the assertions about global warming to be real.
Simple changes in the composition of the GHCN data set between Version 1 and Version 3 can account for the observed “Global Warming”; and the assertions that those biases in the adjustments are valid, or are adequately removed by the various codes, are just that: assertions.
Are Computer Programmers Perfect?
It all comes down to trusting the opinions of the folks who wrote the programs that there are no errors.
I’ve spent decades writing, testing, and running various kinds of computer programs. The larger the size of the code, the more commonly it has errors in it. Typically these are found and removed by a debugging process that includes various test suites run with specific test data. (With catchy names like “Red Data” and “White Noise” and even the middle ground of “Pink Data”.) I have seen no published test data, test runs, test suites, nor audit reports from independent code auditors for the Climate Codes. Financial systems and FDA drug approval codes must typically have some kind of audit process done. For the FDA, even the process by which the computer is unboxed, set up and turned on must be documented in what is called a “Qualified Installation”, where each step is signed off by the technician doing the work and a manager observing. (I’ve done “Qualified Installations”, so I am familiar with the process.)
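For readers unfamiliar with those terms, here is a rough sketch, assuming Python and numpy, of the sort of synthetic “white”, “red”, and “pink” series such a test suite would feed through a temperature code to see whether it reports trends that were never put in (the amplitudes and lengths are arbitrary):

    import numpy as np

    rng = np.random.default_rng(42)
    n_years = 150

    # "White" data: independent year-to-year fluctuations, no trend.
    white = rng.normal(0.0, 0.5, n_years)

    # "Red" data: a random walk (integrated white noise), strong persistence.
    red = np.cumsum(rng.normal(0.0, 0.1, n_years))

    # "Pink" data: 1/f noise, persistence between white and red,
    # built by shaping white noise in the frequency domain.
    spectrum = np.fft.rfft(rng.normal(0.0, 1.0, n_years))
    freqs = np.fft.rfftfreq(n_years)
    freqs[0] = freqs[1]                      # avoid divide-by-zero at the DC term
    pink = np.fft.irfft(spectrum / np.sqrt(freqs), n=n_years)
    pink *= 0.5 / pink.std()                 # rescale to a plausible amplitude

    # A benchmark would feed series like these (with a known imposed trend, or
    # none at all) through a temperature-series code and check what it reports.
    for name, series in [("white", white), ("red", red), ("pink", pink)]:
        trend = np.polyfit(np.arange(n_years), series, 1)[0] * 100
        print(f"{name:5s} series: fitted trend {trend:+.2f} C / century")

A benchmark of GIStemp or similar codes would run many such series, with and without a known imposed trend, and compare the reported warming to the warming actually present.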
Looking at the state of the Hadley software (especially the laments in the “Harry README” file) makes it very clear they could not do a QA test run. They don’t even have their input data any longer, per the emails made public. I have ported and run the Goddard GIStemp code. It is coded to expect particular stations in the input. It is not possible to run it on synthetic test data. It breaks and hangs. (Exactly how much the data can be changed before the program hangs has not yet been found. So far, every significant change of station composition has caused a crash / hang in my testing.) It looks as though it simply is not possible to do a proper QA / test / validation suite run on the various climate codes. (Though I have not seen the NCDC code, blocks of the GIStemp data descriptions include the NCDC data structures, and it is clear that the two groups share code and practices, so I don’t expect much will be different.)
This means that we have no idea if they can, or can not, remove the kinds of data shifts seen in the following graphs. It is simply an article of faith in the programmers at GISS and Hadley (and poor Harry README). Usually, when about to remake the global economy, a bit more than an article of faith would be required. These codes would not be acceptable for use in bringing a new aspirin to market, as they do not meet FDA requirements. Clearly there is a disconnect between the potential for damage and the degree of vetting required in the two fields.
The Results
These are presented as graphs. On each graph there are lines for the Version 1 data set (v1) and the Version 3 data set (v3). The scales are not always consistent from graph to graph (partly because some graphs needed to be expanded to show the differences), but the differences in scale are generally not great. Each graph has my commentary next to it, but they can easily be examined by anyone for alternative opinions.
These graphs are very large (so look very compressed on the screen). Click on the graph and open it in a new window to get a larger readable version.
There are several salient features seen that are common to the set of graphs. There are some other features that are a bit more abstract, or only seen in some of the graphs.
In particular, the changes are generally such that a warming change is introduced into the data series. Remember that each graph compares the same area, for the same years, with what ought to be the same instruments for many of them. Yet not all data series are warmed. That, alone, is curious.
Looking around the continents we see some warming, some not. Whatever “Global Warming” is, it is not “Global”. Looking at individual parts of the data (such as by region or continent) we find large differences. A “well mixed gas” causing widely disseminated “Global Warming” ought to produce changes that are more consistent from region to region. We might expect to see some cycles that are complementary (such as warming in Europe while North America cools) due to weather cycles, but over a hundred-plus years we would expect both places to rise proportionately. That is not seen.
There is a clear cyclical component. Many times we can see that the data cool in the early 1800s, then warm dramatically into the mid 1930s-40s, then cool again into the 1960s-70s, only to warm again as we exit that cool period. One of the things frequently seen is that the period of time from about 1930 to 1970 is “cooled” in v3 when compared to v1. This creates a warming trend increase from then to the present. Often, too, the distant past is ‘warmed’. In gross averages, these tend to offset each other, showing “little net bias”; but those statistical measures hide the way that the ‘belly of the temperature history’ gets a sag. The very early data are often not used in the later temperature series programs and are thrown away, leaving just that increased warming trend. (GIStemp, for example, ‘starts time’ in 1880 and disposes of earlier data.) Similarly, the 1950 to 1980 period tends to be the ‘baseline’ from which ‘warming’ is measured. Cooling the baseline biases the trend to warmer.
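To see the arithmetic of that baseline effect, consider a deliberately simplified example (all numbers are invented for illustration):

    # A present-day reading compared against a 1950-1980 style baseline,
    # before and after the baseline period is "cooled" in a newer version.
    present_reading = 14.6
    baseline_v1 = 14.0            # mean of the baseline years in the older version
    baseline_v3 = 13.8            # the same years after being adjusted 0.2 C cooler

    print(round(present_reading - baseline_v1, 2))   # 0.6 C of apparent "warming"
    print(round(present_reading - baseline_v3, 2))   # 0.8 C of apparent "warming"
    # The present reading never changed; cooling the baseline added 0.2 C of trend.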
Each graph will have some commentary attached to it. The graphs are very large and it would be best to open them in a dedicated window to see them clearly, or print them on large paper.
The Global Comparison v1 vs v3

GHCN v1 vs v3 Global
In this graph, the dark red line is the difference between the two versions. V3 is the thin yellow line while v1 is the thin blue line. Recently, v3 is above v1. When we move into the past, v3 goes below v1. That is, the present has been warmed while the past has been cooled.
The recent warming is about 1/4 C while the more distant cooling is up to a full 1 C, but generally about 1/2 C. Overall, about 0.75 C of “Warming Trend” is in the v3 data that was not in the v1 data.
It bears repeating that these are the same GHCN data set and cover the same time periods (in that v3 is ended in 1990 to match v1), and in this case it is ‘all data’, so it covers the entire world. This increase in “warming trend” is entirely the result of changes as to which thermometers are in the data set and which are out, along with the changes in processing done to the temperature data now, as opposed to 1990. These are “man made warming trends”, but they do not involve the planet, only the data set and how it is constructed.
If we look just at the segment from 1880 to date, it is a bit easier to see some of the smaller details:

GHCN v1 vs v3 1800 start time Global
Notice how the lines are much more volatile in the past? In the recent couple of decades things ‘go flat’. Partly this is from there being very few thermometers in the distant past, but another part seems to be related to where thermometers are located and how land use has changed. Some of it is the Quality Control process applied today. For example, today most thermometers are at Airports and use a system called ASOS. (Automated Surface Observing System). Airports are characterized by large expanses of concrete and asphalt, giant aircraft arriving day and night burning tons of kerosene per hour, with bands of surface vehicles circling and with snow removal equipment hard at work. Contrast that with a snow covered field a few miles away and it is pretty clear which one can have a sudden hard cold excursion.
Any record that is “too extreme” is compared to a set of nearby ASOS stations and, if the computer program deems it “too extreme”, the temperature reported is dropped and the average of “nearby” ASOS stations is substituted. Not only is an average much more resistant to having a cold excursion (as they must ALL have a large cold excursion for the average to have one) but the ASOS stations are located at Airports, which are generally in low flat areas and often near bodies of water. All places with less excursion to the temperatures.
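As a toy version of why that substitution clips cold excursions (this illustrates the averaging effect described above, not the actual NCDC algorithm; all readings and the threshold are invented):

    field_reading = -12.0                        # a rural station's hard overnight cold excursion
    nearby_airports = [-3.5, -2.0, -4.0]         # ASOS-style neighbors: low, flat, paved, near water

    neighbor_average = sum(nearby_airports) / len(nearby_airports)

    # If a QC rule flags the reading as "too extreme" relative to the neighbors,
    # the neighbor average is substituted and the cold spike vanishes from the record.
    threshold = 5.0                              # hypothetical "too extreme" margin
    if abs(field_reading - neighbor_average) > threshold:
        recorded = neighbor_average
    else:
        recorded = field_reading
    print(recorded)                              # about -3.2 C goes into the record, not -12.0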
There have been many other issues raised with the change of thermometer location over time (such as more are at lower elevations, fewer at altitude) and more such issues can be found detailed here:
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1653928
What this graph shows is the cumulative impact of those types of issues, which has been to ‘clip off’ the low going cold spikes and generally “flatten” the data. Even with that, though, we can see that the present high excursions are about the same as during the 1930s and the early 1820s. We have not experienced warming significantly different from then, despite the low going cold spikes being much more “clipped” in recent data.
By Hemisphere
Does anything different show up when we look at the data aggregates by hemisphere? North vs South? If “Global Warming” is truly global, we would expect to see a consistent trend over long periods of time. There might well be some kind of oscillation where a cold N.H. happens at the same time as a hot S.H., but on average, the two curves ought to have the same trajectory and about the same slope if warming really is happening and really is global.
What is most striking is just how much the Southern Hemisphere is not participating in “Global” warming:

GHCN v1 vs v3 Southern Hemisphere
Pretty much dead flat over most of history. The very early years are more chaotic, as we end up with just a half dozen, and then eventually just one thermometer. But once coverage is representative, it just kind of “lays there”. The dark blue and dark yellow lines are the two temperature series. I’ve added the year-to-year changes (those thin dT v1 and v3 dT lines) that are the actual yearly values (not the running total that makes the thick lines). Even they do not stray far from the zero line.
From roughly 1870 to date, we’ve had three cold dips ( 1950 – 1975; around 1916 – 1925, and about 1879-1910) each followed by a warm period rather like now. Looking very much like the PDO (Pacific Decadal Oscillation) cycle of natural hot / cold alternating periods of about 30 years each.
Again, though, we do see that the last few years have had the “cold excursions” clipped out. Those annual dT values that had been regularly ranging over 1/2 C and occasionally 1 C, now barely move 1/4 C. There may not be any warming in the Southern Hemisphere, but the thermometers are clearly in places that just don’t change much. The last half dozen years of data points are nearly dead flat.
The Northern Hemisphere has much more “action” in comparison.

GHCN v1 vs v3 Northern Hemisphere
The first thing to notice here is just how different this graph is from the Southern Hemisphere graph.
How can a global phenomenon from a ‘well distributed gas’ have almost all of the “effect” in only one hemisphere?
An artifact of thermometer data processing and / or instrument change would be expected to show up far more strongly in the Northern Hemisphere where far more instrument change has happened and the processes have changed more dramatically. Where industrial growth and paving / airport growth has been largest.
Start by looking at the two top thin lines. Those “dT” lines. They were far more volatile in the past, now nearly flat. There just isn’t nearly as much range to the data now as there was before. Partly that is because averaging together more thermometers gives a narrower range of possible outcomes. (But then again, it is just that kind of artifact that can find spurious “warming” comparing present stable values to past volatile ones).
The next thing to notice is that the thick dP/dt lines range between -1 C and -1/2 C back to the 1700s. Only recently does the range move up to -1/2 C to 0 C. That is, that 1/2 C of “Global Warming” all arrives between about 1986 and 1990. Even then, we do not have higher readings than in the early 1930s. There is a ‘step function’ in the processing, not a slowly accumulating effect from a slowly accumulating gas.
Once again we see that recent data have been warmed in v3 compared with v1 where yellow is on top of blue; but in the distant past, yellow is now below blue. The past has been “cooled” and the more recent data “warmed”; increasing the “warming trend” in v3 vs v1. (Not in the real world, only in one data set when compared to the other – the world did not retroactively change.) The “pivot point” looks to be about 1880, right at the point where GIStemp tosses out the older data, leaving only that warming trend from 1880 to ‘warmer’ recently.
Having the distant past get colder is also not something one would expect from a well distributed gas, most of which was emitted after 1930.
Also of note, we can see the same kind of “ripple” of natural cycles. Cold in the 1960s and 1880s and warmer in the 1930s, 1860s and even back in the early 1730s and about 1825.
The S.H. chart ends in the 1830s while this one extends to 1702, so they do not line up exactly. In 1830 the N.H. is having a wild warming (that bubble up just to the right of the main heading) rather like the S.H. does at the far right edge of its graph.
Frankly, given that the North had more airports faster, and more urban growth faster, than the South, I suspect that what little “warming” there is can be entirely explained with the Urban Heat Island effect, the Airport Heat Island effect, and a bit of overzealous data ‘adjusting’ by folks at certain Northern Hemisphere Met Offices and government agencies.
To my eye, the bulk of the “lift” looks to come between 1890 and about 1935. Then we have the typical “ripple” both before (at a lower level) and after. Given that 2 world wars happened in there, along with several changes of thermometer scales and instruments (Japan, alone, took a significant rise with the arrival of American Occupation and different instruments and methods), I’m surprised the offset is only 1/2 C. I’d expect more than that just from airports turning from grass balloon fields to tarmac-coated international jetports.
That kind of “disconnect” between the overall look of the Southern Hemisphere graph and that for the Northern Hemisphere is in keeping with what would be expected from issues in the instrumental record, and not in keeping with a generalized “Global Warming” caused by a well distributed gas having the same physics in both hemispheres. Yes, being much more water, the Southern Hemisphere would be somewhat more ‘water moderated’, but as the trend there is essentially zero from 1855 to date, the implication is that whatever is causing warming in the Northern Hemisphere record is unable to change the temperature of 1/2 of the planet. Global Warming isn’t global. At best we have Northern Hemisphere warming, and the data “have issues” with thermometer changes.
By Region / Continent
Looking at the data grouped by “Region Code” (the first digit of the Country Code portion of the station number) is also enlightening. Looking at major regions compared to the whole data set can show the bias of most of the data as being from the USA, Canada and Europe. Even looking at hemispheres can show that bias in the data (which one ought to be able to mitigate a little bit with some kind of ‘grid / box’ assignment and averaging). Looking at the data by continent, or even by country, eliminates most of those concerns. We will still have the USA dominating the North American data, but Africa, South America, Asia, and the Pacific Islands of Oceania will all get clearer representation. Europe will only dominate Europe.
So if there is a tendency for European changes to skew the data, they will not show up in Asia, or Africa, for example.
What is quite surprising here is, in fact, Africa. As it straddles the equator and has a load of hot places, one would expect added heat to show up here. What we get is not warming.
Africa
First off, just notice that V3 is above V1 clear back to the 1800s.
Where v1 had 1 C of warming, v3 has nil.
So which was it? 1 C of catastrophic warming in Africa, or “no worries”?
We also can see some ‘ripple’ from natural oscillations and, once again, the dramatic compression of “range” of the data over time. The 1990 end is incredibly compressed compared to prior years. (Though I note that 1886 and the 1930s were both low volatility times as well). In general, v1 ranged from -1 C to 0 C over most of the history, with the present being a zero time. In v3, we make that range more like +/- 1/2 C from natural variations.
At a minimum this is saying that the equatorial band is not having any “global warming” as Africa sits astride it. That just the changes to the data set can move an entire continent by 1 C does give some pause as to just what any particular “trend” really means.
South America
Here we have a ‘warming profile’. Cold in 1888 and even in the 1920s, then we gradually warm into the present. But notice that in 1932 we touch the zero line while in 1944 we exceed it. From that point on, we are essentially flat. The low going excursions get trimmed a little, but we just do not get “warmer”, just a bit of ‘less cold’ on exceptional times. All in all it looks like a ‘climb out of the Little Ice Age’, though a bit later than Europe. The “dip” in 1850 in Europe does not show up here, instead it gets a bit of cold about 1886, then starts a nice recovery.
One small problem. The CO2 theory says warming is caused by the CO2, most of which got into the atmosphere after about 1945. This graph shows the warming happening when there is little added CO2, and the growth of temperature halting as CO2 is released; it is essentially flat from 1932 to the left (recent) margin. We do see that the last half dozen years again have about 1/2 the ‘low going range’ pruned out of the data; or it could just be like the 1940s flat period again.
We also have to note that about 1940 the V3 line is below the v1 line. The past data get “cooled” making the present warmer in comparison as a result of those changes in thermometers and processing.
Oceania / Pacific Ocean
In this case we again see ‘warming’ that comes as a step function about 1978-80. The temperature curves wander between -1/2 and 0 C from about 1866 to about 1978, then they go to the zero line for most of the rest. The new v3 data generally cool the past.
In general though, not a lot of displacement between v1 and v3. About 1/4 degree overall. Might want to find which countries exactly are having “warming” in their data and which are not; but even with the added warming trend, just not a lot of overall “warming” in the Pacific.
One must ask, though, if the Pacific Ocean isn’t warming, what is so “Global” about Global Warming?
It is also the case that the way the temperatures change, as a ‘step function’ in the recent past, is not in keeping with the notion of gradually increasing infrared radiation induced heating. It is much more in keeping with data artifacts from changes of instruments or processes.
Asia
A very strange graph. Almost no change in essentially the whole record from 1822 to 1986.
In the very early years, the record is volatile, as very few instruments are being used (in very limited geographies). At the very end a hot year or two show up. There is some ‘warming trend’, but for most of the record not much is happening. One is left to ponder to what extent the “warming” in these data is from the explosive growth of Asian cities in the ’80s, and to what extent it might be a carefully selected ‘ending year’ that biases the relative position of the rest of the series. Or is that recent end point shift just from the giant move to thermometers at growing Asian airports?
Still, with temperatures regularly ranging from -1.5 C to -1/2 C from about 1864 to the late 1980s, it just does not look like a “well mixed gas” causing slow IR warming; it looks more like a bit of cold in the Little Ice Age, and a step function at the end from a change of processing / instruments.
Not seeing much reason to shut down the economy when looking at these data…
Not much to say, really. The two series are almost on top of each other the whole time. V3 is a bit more volatile in the past as we’ve seen in other series. Generally we do still have the loss of volatility at the recent end of the graph; but only in the last half dozen years and not out of keeping with prior episodes of other low volatility times.
My biggest “take away” from this graph, though, is just that ‘dead flat for 100 years then a 1/2 C bump in a couple of years’ is NOT the signature of CO2. It is the signature of equipment and process changes… There is also an interesting “cold time” between 1870 and 1910, but prior to that is another warm time. Very early on, the thermometers were either not being closely watched, or there was a significant cold spike about 1800 to 1820. “Eighteen Hundred And Froze To Death” was in 1816, so that fits.
In the end, I see nothing that says “CO2 caused global warming” in the Asia data and I see little changed between v1 and v3. That increased slope in the Northern Hemisphere data can now only be carried by either North America or Europe or both. Asia didn’t change.
Also of interest is just how little change there is from v1 to v3. In Europe and North America that isn’t the case. So one is left to wonder: Which is correct? NOT changing Asia, or changing Europe and North America? Is v1 “right” in Asia, but “wrong” elsewhere? Or is v3 “right” in Asia but “wrong” elsewhere?
North America
First off, it is very easy to see that the red v3 line is pulled down below the blue v1 line starting in about 1888. The drop is about 1/2 C. There is a similar, though smaller, displacement in the 1960s. We also see a dramatic warming of the data in the far distant past, about 1760. A full 1 C higher then. As those early records are from very few instruments (and start with a single instrument) we are, in essence, asserting that we can reach back in time some 250 years and say “No, sorry, you didn’t read that 74.0 F degrees correctly, it was really 75.5 F.” I find that hard to believe. Far more likely is that the method of adjustment “has issues”.
Harder to see is how the thin blue and yellow “dT” lines change. The v3 dT yellow triangles are regularly “outside” the blue diamonds of the v1 dT annual changes. The v3 data are showing more volatility than the v1 data. The apparent warming or cooling of particular segments correlates with a difference in extreme warm range vs extreme cold range, not with an overall increase of the warming trend. This is most easily seen about 1800 to 1820. Similarly, though in the opposite direction, the data from 1955 to about 1970 show remarkably low volatility.
Were the 1960s particularly unchanging? Not deviating by even 1/4 C from the norm? Those were the years when it snowed in the Central Valley of California. A very unusual event. Some years were normally warm, some were significantly colder than typical. Yet that does not reflect in the data. A decade later, was the weather dramatically more volatile? Or are there data artifacts in both the v1 and v3 data sets that cause changes of 1/2 C to 1 C from decade to decade (and even from year to year)?
Are we to bet the fate of our economy and all the disruption of “creative destruction” on what may well be simply data collection artifacts that can not be ‘fixed’ by the climate computer codes?
Notice, too, that the “warming trend” from about 1880 (when GIStemp cuts off the data used) to date runs about 1.5 C. We saw earlier that the Pacific, Africa, and indeed the whole Southern Hemisphere were showing much less “warming trend”, often near zero. If all the “warming trend” comes from averaging in data from places like North America, with dramatic increases in urbanization and size of airports, with many “data artifacts”, with clear discontinuities in the data, averaged with other places that have no such evidence of warming: is there really anything “Global” about “Global Warming”? Can it reasonably be attributed to a well mixed gas causing radiative changes that must, by definition, happen over the entire globe?
Or do the variations in trend evidenced in the data from different geographies instead indicate an issue with data collection methods, data processing errors, and local changes? Are we willing to bet lives, incomes, and careers on what looks like simple data errors?
It is interesting to note that the 1768 to 1794 era stays about the same as now. No warming over about 200 years. “Eighteen hundred and froze to death” shows up in 1816, but also quite a dip in 1836. The mid 1930s also stay about the same ‘warmth’ as now.
We again see the roughly 1/2 C “offset” in the 1987-1990 transition of equipment and processes that happened then.
It is pretty easy to pick out where more ‘warming slope’ is added in v3 vs v1. It’s the place where the dark blue curve is clearly above the dark red line, and where the more volatile light yellow dT for v3 puts a data point on each side of the light blue v1 dT line.
It looks as though the “warming” in North America is entirely an artifact of: moving thermometers to airports; swapping to electronic thermometers that have different thermal issues and different adjustments (for many years one model was found to suck in heated exhaust from the humidity measuring device – the data from those instruments are still in the record); Urban Heat Islands, as we developed more than did places like Africa (which has cooled); and perhaps some ‘odd’ data adjustments, such as the ones seen here, where changes between v1 and v3 put more change into the data, for the same time and place, than the actual ‘warming signal’ we are seeking.
We are seeing 1/2 C to 1 C of movement of the average anomaly for a continent based entirely on thermometer selection and processing changes. With that much variation based on how GHCN Version 1 is created vs how GHCN Version 3 was crafted: How can someone possibly claim that a 1/2 C variation in the anomaly over time within one of those sets is “warming” of anything? It can simply be an artifact of creation of the data set, just as v1 vs v3 shows artifacts of that size.
Europe
The European record is very long, as it contains the very first thermometer records. At the far right, the data become significantly suspect, as they come from a very narrow geography and from the earliest thermometers, using a variety of scales that were newly created then.
The most interesting feature of this graph is just that the v3 red line is above the v1 blue line for substantially all of the historical data. There are times, like 1887 (about 1/3 of the way in from the left side) where they match up again. In about 1746 (that “dip” on the far right) v1 is briefly “warmer” than v3. But in general, the changes made to the data set going from v1 to v3 “cooled” the Europe trend (warmed the past) by about 1/2 C. Due to the way that First Differences works, it can make that kind of ‘offset’ if the first data are dramatically different. However, that implies that THE best data, from the most recent measurements, can be subject to a 1/2 C change on average over all of Europe. So if we can change the degree of warming in Europe with the stroke of a data massaging pen, how do we know that the “Global Warming” 1/2 C isn’t an artifact of some similar change or bias in the present data set version?
Also of note is just the degree of “warming” in the European data. We’ll start in the 1850s, as that is when Hadley CRUTEM / HadCRUT “cuts off the data” and chooses to “start time”. In the 1850s we see that v1 is up to 2 C below the zero line and frequently is about 1.5 C below that line. This implies that Europe has had a rise of about 1.5 C in 150 years. 1 C per century. But has it?
Look at the range of the v1 dark blue line and the v3 dark red line. For most of their history, they run just below the -0.5 line (for the red one) and between the -0.5 C and -1.5 C lines (for the blue one). Even as far back as 1776, the ranges are substantially the same. (The v3 data even touch the zero line about 1778. No net warming). We also see that the 1930s are “about the same as now” in v3, but are showing about 1/2 C of “warming” from then to now in v1. So which is it? Have we had 1/2 C of warming from 1932, or are things basically unchanged?
If we look more carefully into the data, we find that the same “change of process” in the 1986 to 1990 date range accounts for the shift. The 1988 data for dP/dt are substantially the same at -0.64 vs -0.62 C, while in 1987 they become -1.69 for v1 and -1.26 for v3 – an offset of -0.46 C between the two data sets. It is that “join” or “splice” in 1987 that causes the “warming” in the v1 data, and that gets a partial correction in v3. In the intermediate data set, Version 2 (not analyzed here), there is a change of “Duplicate Number” (sometimes called the “modification flag” in GIStemp FORTRAN code) that happens at that point in time.
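The reason a single year matters so much is that First Differences accumulates: change how one year-to-year step is handled, and every value on the older side of that step shifts by the same amount. A small sketch with invented numbers (these are not the actual European values):

    import numpy as np

    # Invented year-to-year differences for one region, oldest year to newest.
    diffs_v1 = np.array([0.1, -0.2, 0.3, -0.4, 0.2, 0.1])
    diffs_v3 = diffs_v1.copy()
    diffs_v3[3] += 0.4      # ONE year's step handled differently, e.g. at a
                            # duplicate-number / instrument change ("splice") year

    # Accumulate backwards from the present (newest year = 0), as the dP method does.
    dP_v1 = np.concatenate(([0.0], -np.cumsum(diffs_v1[::-1])))[::-1]
    dP_v3 = np.concatenate(([0.0], -np.cumsum(diffs_v3[::-1])))[::-1]

    print(np.round(dP_v3 - dP_v1, 2))   # every year on the older side of the changed step shifts by -0.4

A one-time difference in how a “join” year is handled thus shows up as a block displacement of the entire earlier history, which is the sort of offset described here.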
In this European data, we see that changes in how that point in time gets handled, how the splice is treated, can move the conclusion by an amount as large as the asserted “Global Warming”.
We are, in essence, being asked to simply “trust” that such changes and artifacts are all perfectly removed by the various “Climate Codes” and only a pristine “Global Warming” signal remains. That just happens to be of about the same size and scale as the errors and artifacts in the data.
Now compare these European data to the ones from Africa and from all of the Pacific Basin above. Europe has, per this, warmed by a full 3 C from that dip about 1828, and a steady 2 C from the main body of the data during that interval, using the v1 data (but “only” 2.5 C since 1828 and 1.5 C from the main body of the data if using v3 data); while Africa has cooled and the Pacific has done, basically, nothing other than a recent splice artifact offset.
Is it credible to say that “Global Warming” is concentrated in the thermometers of Europe and North America? That CO2 doesn’t act at the equator nor over the Pacific Ocean? Or is it more credible to say that Europe and North America have had the most growth of urban centers, and the greatest development of airports with vast areas of concrete and black asphalt baking in the sun?
One Example of Two Countries
Looking at the data for individual countries presents similar issues. Some change dramatically from v1 to v3, others do not. Some show a ‘warming’ pattern, others do not. To present the data for all the hundred+ countries in the record would be tedious and not as productive as presenting one example.
Here is an example comparing Australia and New Zealand. These are two island nations. They are both located in geographies dominated by ocean, and in particular by the Pacific Ocean (though western Australia has more Indian Ocean influence, and South Island New Zealand has more Southern Ocean / Antarctic influence). Still, in general, things like changes in the Pacific Decadal Oscillation ought to reflect on both similarly. Both were part of the British Empire, so they collected much of their historic data with similar instruments and methods.
One would expect both to show similar behaviors of the data over time, and if adjustments were needed one would expect to see both having similar changes.
In effect, if they are different, something unexpected and perhaps odd is going on.
New Zealand lines are the orange and blue ones. Orange for v1, blue for v3.
Australian lines are the dark red and green/black ones. Green/black for v1 and red for v3.
The New Zealand lines are almost on top of each other. For most of the record the blue line is not even visible. In some minor periods, the v3 data are slightly warmer and we see a bit of blue dots show up. But for Australia it is quite another story. The red and green/black lines diverge to about a 1/2 C separation and hold it clear back to 1866, then they swap by about 1 C. The divergence sets in about 1970. It happens all in one step, for the most part.
New Zealand does not have many temperature records. There are only about a dozen major stations. While there are some changes over time (Campbell Island as a cold thermometer enters, then exits, the record, for example) it is predominately a stable set. Australia has had massive changes of which instruments were in use, where, and when. They had been run by the postal service until that was discontinued.
For most of the record (until that near term sudden rise) the record is remarkably flat. It doesn’t matter if you use the v1 or v3 record. The Australian v1 record bounces between about -1/2 C and +1/4 C from about 1855 to 1975 or so. Similarly the v3 record bounces between about -1 C and -1/4 C in the same range of dates. There was no “Global Warming” in Australia for about 120 years. Then we get a sudden ‘offset’ right as the “modification flag” or “duplicate number” changes on the data. Just as a change of process and instruments is implemented.
New Zealand similarly has no warming through most of the data. Having fewer thermometers and being closer to cold polar storms, the range is a bit wider: from about the zero line to -1 C from 1866 to 1976, more or less. Then we again get that compression of range and “shift” in the mid to late ’80s on changes of processing and equipment. It also bears emphasis that the yellow and the blue lines are both just lying on the zero line. They have not gone up; we have just clipped off the dips into an occasional -1 C cold spike.
All in all, this looks much more like artifacts of changes to processes and instruments than any actual change of the temperatures in those two countries.
A couple of interesting things to note:
The 1930s were a bit cold “down under”. In contrast to North America where they were quite warm. Coverage of the Southern Ocean historically was very poor. It is quite possible that the hot/cold ripple seen in the Northern Hemisphere is just one half of a polar shift where we simply did not detect the cooling at the other side of the planet. Here we see some evidence for that in the New Zealand data.
Look at 1866 vs 1880. It was just as warm then as at the left (recent) side of the graph, per these data (both v1 and v3 for both countries; though v3 Australia is biased a bit cooler than the others). Recent data have the low excursions clipped off, but the highs are no higher. Those early years are left out of the record when programs like GIStemp create their “Global Average Temperature”, and so those programs find we have warmed over time. Can we really ignore that it was just as warm then as now? Might the fact that we dump gigawatts of heating into our urban areas and burn tons of kerosene at airports, then put the thermometers in just those places, reasonably account for 1/2 C of “missing cold”?
In short, are we placing the thermometer in a heated living room then marveling that it just doesn’t seem as cold as when we sit on the patio in winter?
In Conclusion
The patterns of the data do not match those one would expect to see from radiative driven warming via a “Greenhouse Gas” well distributed over the planet.
The patterns of the data do match those one would expect to see from data collection and processing artifacts.
Observed variations from one version of the data to the next are larger than the ‘global warming’ signal being sought.
The computer programs that are asserted to remove those biases and changes have never had an audit, never had a benchmark test, never had a validation suite run; in short, they are untested in the ways that all other commercial software is tested, and they have not been subject to the kinds of validation required for computer programs used by banks and drug companies. We are, in essence, told “trust me, I know what I’m doing”. Peer review is largely “Trust me, my friends think I know what I’m doing.”
We are being asked to play “Economic Chicken” with our economy based on computerized speculation using data that are “unfit for purpose”. Those countries that have followed the suggested path (such as Spain) are on the brink of ruin. Those that have continued to exploit traditional energy sources (China) are thriving.
The story is being presented that the heroic thing to do is to “Save the World” via embracing what are at best speculative ideas about how to run the economy and to do so based on energy sources that are incredibly more expensive and less reliable. All based on one underlying data set that mutates rapidly and “has issues”.
Is it not the more heroic and responsible thing to stand up and simply say: “I choose to save the American Economy for the American People”? Then take the time to test the various theories and to see if there is any way to repair the broken data that underpin the warming case.
There has been no detectable warming for the last dozen years. The natural weather cycles have turned. We have at least a couple of more decades of this half of the cycle. Perhaps the wisest thing to do is use that time to do a more carefully audited and controlled study of the data, and with truly independent researchers whose careers are not already wedded to “not being shown wrong”.
Looking at the GHCN data set as it stands today, I’d hold it “not fit for purpose” even just for forecasting crop planting weather. I certainly would not play “Bet The Economy” on it. I also would not bet my reputation and my career on the infallibility of a handful of Global Warming researchers whose income depends on finding global warming, and on a similar handful of computer programmers whose code has not been benchmarked nor subjected to a validation suite. If we can do it for a new aspirin, can’t we do it for the U.S. Economy writ large?
In short, is it not the heroic thing to do to stand up and say: the climate researchers have no clothes; their data are not fit for purpose?
Damn! That was an hour read, and I am a fast reader. I did catch a small hint that the warm/cold cycle may be a north to south hemisphere oscillation rather then a world warming/ cooling feature. It appears that man caused warming is a man made artifact and not caused by a tiny change in atmospheric gasses. pg
Reblogged this on The GOLDEN RULE and commented:
A very impressive analysis by Chiefio.
A heap of interesting and relevant, meaningful content. A joy for the AGW disbelievers to behold and promote.
Important comments follow the article.
EM-
Only have about 20% read but wanted to suggest that, if you haven’t already considered it, passing this on to AW at WUWT would reach a lot more people. This seems to dot all the i’s and cross all the t’s. Best!
The Global Warming scam it is just one of the many “tricks” the madmen who seek to control the world, in a power/money grab frenzy only explainable if they were immortal people who could eternally enjoy the product of their abnormal desires.
Who will stop them?
EM-
Outstanding! A great summation of a number of your recent papers! You’ve clearly shown major faults and failures in today’s “New Age Data Base Climatology” and the foolishness of using it’s “findings” to make life changing political, economic, and ‘environmental’ decisions. Of course, those most interested in making those political and economic and environmental decisions ARE the same ones who think they stand to gain from the creation of the New World Order; and it doesn’t help any of us that scientific funding is tied very securely to this mumbo-jumbo too. It’s all so NOT about the science but the money (and the New World Order); sounds more like a religious crusade or jihad doesn’t it? Thanks again! You’re amazing! ;-)
One of the “. . . great swaths of the economy to be cut down and replaced with untested, or in some cases, tested and failed, technologies and alternative economies” is
Hydrogen-fusion reactors that would operate like Fred Hoyle’s imaginary model of stars [1,2].
A model that dominated astronomy, astrophysics, climatology, cosmology, and solar physics for the next sixty-six years (2012-1946 = 66 years), augmented by Nobel-prize winning studies by the best-funded scientists at the most-prestigious institutions [3,4], . . .
Although Fred Hoyle himself expressed
a.) Surprise in 1994 that the imaginary model of H-rich stellar cores was adopted unanimously, without discussion or debate, immediately after the Second World War [5].
b.) Contempt for the idea of an early universe filled with hydrogen from an imaginary “Big Bang” that created everything [6].
The public is only now beginning to realize that this imaginary technology of nuclear fusion reactors will not meet society’s future needs [7].
Here’s the rest of the story of the Orwellian curtain of deceit that fell across government science and isolated mankind from reality in 1945-46: http://omanuel.wordpress.com/about/#comment-132
– Oliver K. Manuel
References:
1. Fred Hoyle, “The synthesis of the elements from hydrogen,” Monthly Notices Royal Astronomical Society 106, 343-83 (1946) http://tinyurl.com/8aal4oy
2. Fred Hoyle, “The chemical composition of the stars,” Monthly Notices Royal Astronomical Society 106, 255-59 (1946) http://tinyurl.com/6uhm4xv
3. E. Margaret Burbidge, G. R. Burbidge, William A. Fowler, and F. Hoyle (B2FH), “Synthesis of elements in stars,” Rev. Mod. Phys. 29, 547–650 (1957)
http://rmp.aps.org/pdf/RMP/v29/i4/p547_1
4. O. Gingerich and C. De Jager, “The Bilderberg solar model of the Sun,” Solar Physics 3, 5-25 (1968) http://tinyurl.com/6pnwgos
5. Fred Hoyle, Home Is Where the Wind Blows: Chapters from a Cosmologist’s Life (University Science Books, Mill Valley, CA, USA, 1994, 443 pages) pp. 153-154
6. Sir Fred Hoyle; Coined ‘Big Bang,’ Los Angeles Times (23 August 2001)
http://articles.latimes.com/2001/aug/23/local/me-37483
7. James Dacey, “When will we see the first nuclear fusion reactor?” Physics World (1 Dec 2011) http://physicsworld.com/blog/2011/12/when_will_we_see_the_first_nuc.html
I second Pascvaks suggestion. WUWT would mean a wider audience. I would think Anthony would be delighted.
Awesome work again, E.M.!
It’s been a very good few days for us realists: McKitrick has just severely dented the credibility of the IPCC modellers with his paper and associated op-eds, while E.M.’s summary raises some un-ignorable questions for the data-guardians – and all users of their products. I’d like to see this work put through the wringer of real peer-review at Judith Curry’s, but WUWT is clearly the largest audience, and would be a very good start.
I think Myles Allen’s remark as quoted in McKitrick’s second op-ed bears looking at closely under the circumstances:
“we all use the instrumental temperature record all the time…If there had been anything wrong with the instrumental record, I would have to retract or redo a huge number of papers. It turned out there wasn’t.”
I wonder if Myles Allen would still be so sure if he read the above summary. Almost incredibly, it appears he is basing his conclusion that there isn’t “anything wrong with the instrumental record” on the laughable Russell report. I’m not sure what, if any, data QC analysis was done by that motley collection of whitewashers, but I think we can have high confidence that if any was done, it was done at nothing like the forensic level of E.M.’s analysis.
http://opinion.financialpost.com/2012/06/13/junk-science-week-climate-models-fail-reality-test/
http://opinion.financialpost.com/2012/06/20/climate-reality-check/
Highlights for me are the tests you were able to actually perform with the computer programs, and the finding that it cannot be tested in the usual sense using synthetic data. Also interesting are the cooling of certain periods in the past, offset by warmings in the very very distant past. And of course, the regional temps do not conform to the “global temperature averages.” Vincent Courtillot is also reconstructing regional temps and finding that they differ quite a bit from one another. Thanks for the great article.
Pingback: Chiefio Smith examines GHCN and finds it “not fit for purpose” | Watts Up With That?
EM: Great stuff. Understood, almost, some of it. But the fact that this is the “95 Theses” nailed to the church of (C)AGW will be understood by many adherents of that faith. Perhaps that is why all these adjustments are continuing, to make sure no one can recreate the “correct” record.
Again, EM, well done.
EMS
This is a tour de force about the most basic level of climate science, the thermometer records. This alone should be sufficient to demolish AGW. Even though we know there are higher and higher levels of BS science. Currently I’m investigating one of the high levels where even most climate skeptics are, I now think, subscribing to BS science. But your work is of foundational importance.
I hope you (or maybe a.n.other, like Bishop Hill for Steve McIntyre) can edit your work to make it presentable and graspable in (a) one fell swoop (b) say 20-50 bite-size statements that build up. I think you should do your graphs to different scales so that the y-axis range is not less than say a sixth of the x-axis length, preferably a quarter or so.
Thank you.
@LucySkywalker, Pascvaks, Pyromancer76:
Thank you for the kind words. FWIW I have a different version of the graphs made that is more effective at ‘natural scale’ (bar charts per data point, so much shorter), but I’m still working on colors and such. They will be added shortly.
While I can’t say much about it, this posting was sent to someone who will be using it in producing a different more ‘approachable’ document for presentation where it will have more effect. News once the process is complete.
@Joe Prins:
Thanks for the note of appreciation. One can only present views of reality and help others to see them.
@Ferretonthespree:
Thanks for the links.
I can only hope that something I do comes up to even 1/4 of what McKitrick has done.
IMHO there are very significant issues in the GHCN data set. Not particularly just in the individual data points for any one thermometer for any one day, but in how the particular thermometers change over time, how the segments from each have ‘end effects’ (like the one that makes the offset in the Asia record), and in some change of processes and location about 1987-1990 that causes a dramatic change in the character of the data (that suppression of volatility and range coupled with about 1/2 C of warming in about 3 years.) There are likely also some effects on individual data points on individual days from changes in the “QA process” and from “adjustments” too.
My working thesis is that those effects “bleed through” codes like HadCRUT and GIStemp. That they take out a little of it, but enough remains to be seen as “Global Warming” where there is none. That is why a proper QA and Benchmark suite test of those codes using “Red Data” and “Pink Data” and “White Noise” is so important.
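For anyone wondering what such a benchmark input might look like, here is a minimal sketch (my illustration only, not part of the GHCN analysis above): build “White” and “Red” series with zero trend by construction, run them through the code under test, and see whether a trend comes out the other end. “Pink” (1/f) noise would be built the same way with spectral shaping; the function names and numbers below are assumptions for the example.

```python
# Hypothetical sketch (illustration only): generate zero-trend "white" and
# "red" synthetic monthly series to feed a temperature-anomaly code as a
# benchmark.  If the code reports a trend from data known to have none, the
# trend is an artifact of the processing, not of the climate.
import numpy as np

def white_series(n_months, sigma=2.0, seed=0):
    """Independent Gaussian noise around a fixed mean -- no trend by construction."""
    rng = np.random.default_rng(seed)
    return 15.0 + rng.normal(0.0, sigma, n_months)

def red_series(n_months, sigma=2.0, rho=0.8, seed=0):
    """AR(1) 'red' noise: autocorrelated, so it wanders, but still has zero true trend."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_months)
    x[0] = 0.0
    for t in range(1, n_months):
        x[t] = rho * x[t - 1] + rng.normal(0.0, sigma * np.sqrt(1 - rho**2))
    return 15.0 + x

def fitted_trend_per_century(series):
    """Ordinary least-squares slope, expressed as degrees per 100 years."""
    months = np.arange(series.size)
    slope_per_month = np.polyfit(months, series, 1)[0]
    return slope_per_month * 12 * 100

if __name__ == "__main__":
    n = 130 * 12  # roughly the GHCN period of record, in months
    for name, s in [("white", white_series(n)), ("red", red_series(n))]:
        print(f"{name:5s} noise: apparent trend {fitted_trend_per_century(s):+.2f} C/century")
```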
The code I use is made such as to not try to suppress those “splice artifacts” and inherent bias in the data from instrument changes, but to pass them through unchanged. To show just how much “instrument change” pollutes the data. Then the question becomes “Do codes like GIStemp and HadCRUT / CRUTEMP do a 100% job of removing that error?”. I think the answer is clearly “No, they don’t.” but that needs proving / testing.
A secondary question is just “Why is the change of that bias in the data toward exactly the direction and degree of the reputed “Global Warming” and never the other way?”. The nature of the change is suspicious and deserves a distinct kind of investigation. IMHO.
Thanks Chiefio for your extensive evaluations and prudent conclusions.
You may wish to incorporate Hurst-Kolmogorov statistics to better identify the full natural variability. e.g.,
Markonis, Y., and D. Koutsoyiannis, Hurst-Kolmogorov dynamics in paleoclimate reconstructions, European Geosciences Union General Assembly 2010, Geophysical Research Abstracts, Vol. 12, Vienna, EGU2010-14816, European Geosciences Union, 2010. http://www.itia.ntua.gr
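For readers unfamiliar with the term, here is a minimal sketch of the rescaled-range (R/S) idea behind the Hurst exponent that Hurst-Kolmogorov statistics builds on. It is a toy estimator for illustration, not the method of the cited paper: H near 0.5 suggests uncorrelated noise, while H well above 0.5 suggests long-term persistence, i.e. large natural excursions that a short record can mistake for a trend.

```python
# Toy rescaled-range (R/S) estimate of the Hurst exponent -- illustration only,
# not the estimator used in the cited Markonis & Koutsoyiannis work.
import numpy as np

def rescaled_range(x):
    """R/S statistic for one window: range of cumulative deviations over the std dev."""
    y = np.cumsum(x - x.mean())
    r = y.max() - y.min()
    s = x.std(ddof=1)
    return r / s if s > 0 else np.nan

def hurst_exponent(series, min_window=8):
    """Fit log(R/S) against log(window length) over dyadic window sizes."""
    series = np.asarray(series, dtype=float)
    sizes, rs_values = [], []
    n = series.size
    w = min_window
    while w <= n // 2:
        chunks = [series[i:i + w] for i in range(0, n - w + 1, w)]
        rs_values.append(np.nanmean([rescaled_range(c) for c in chunks]))
        sizes.append(w)
        w *= 2
    return np.polyfit(np.log(sizes), np.log(rs_values), 1)[0]  # slope ~ H

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    print("white noise H ~", round(hurst_exponent(rng.normal(size=4096)), 2))              # expect ~0.5
    print("random walk H ~", round(hurst_exponent(np.cumsum(rng.normal(size=4096))), 2))   # expect ~1.0
```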
Pingback: Global Historical Climate Network (GHCN) data corrupted | The Drinking Water Advisor
Chiefio, how do they say, “you rock, Man!”
I have sent this to the Leader of the Opposition in the Australian Federal Parliament for his edification. Maybe it will help convince him that support for a 20% renewable energy target is retrogressive and bad policy.
Gimme A Graph with Hair…
EXCELLENT! As usual.
So I suppose Dr. Muller’s BEST project, the last word in temperature reconstructions, is based on the same quirky and questionable GHCN data?
Reblogged this on contrary2belief and commented:
It’s complicated … but if you’re going to be spending my money to “save the planet”, you’d better do your homework. Correctly.
Great stuff! Reading fast through your article (need some more time to really digest it), it suddenly struck me that the problem with the temperature record is essentially the same as with the temperature proxies: The data has gone through an obscure selection and adjustment process which makes it completely useless for any real statistical analysis. Anyone using GHCN will end up with hockey-stick like result because of this!
Thank you for that absorbing work Mr. Smith. The need for an independent analysis of how the data was arrived at in v3 and then what the data means is essential.
I think that a few thousand, a few hundred thousand?, a few million?, (more? including WUWT) readers and I are grateful that you “retired” mid-career-life. Your months of work (is it now years?) on this material has had a grand pay-off. Glad you have a partner for publication.
No wonder Mosher had a fit at WUWT; the data and analysis are clear, open, and indisputable, except by someone showing their own data and analysis. He seemed to be left sputtering something like “but you are not using the right v1 or v3”! Furthermore, there is so much new knowledge, or new perspective on the original, in your answers to commenters, including those we usually think of as trolls, that everyone from researchers to interested parties gets one marvelous experience of Enlightenment. There have been a number of “significant moments” in the publication of bad or dishonest CAGW science, and your research presented here is one of those.
I hope you aren’t the only one with “copies” of GHCN v1,2,3 (as it is now). I’ve read you sent files to others, but yours is one of the most important research documents with clear analysis and arguments to put this pseudo-scientific fraud to bed. Yes, you are taking lots of care with security, but please take even more.
I have written before that I believe we are on the cusp of a political tsunami (end of era, discovery of abundant natural energy resources, great awakening). Americans do not like to be taken for fools especially when it affects their own jobs and their children’s futures. (I wish I could say the same for Californians — except maybe those in San Diego and San Jose.) Well, if the magic show is revealed as smoke-and-mirrors, no pea at all, or, heck, not even a shell, you can bask in the knowledge that your entrepreneurial effort (you just thought you were retired) can be given due credit. Now if you can just work further on that intrinsic-extrinsic matter…..(smiles).
A thought with regard to my comment on an earlier thread about my “optimism” for the future. As a historian and someone who followed American politics and foreign affairs (especially in relation to global natural energy resources), I was utterly shocked, gobsmacked even, (like the feel of that Brit slang) at the collapse of the U.S.S.R. I had not read one iota of information that the tyranny had truly crumbled from within. (I knew that they had sucked the economic life out of every satellite country.) Then, wham! It is with this experience in mind that I am looking more carefully, searching for the numbers of citizens who see the signs of detritus, whiff the decay, realize the narcissistic bravado that betrays the confidence of the current poseurs, whether Presidents, bankers, lawyers (like the Attny Gen), scientists, professors, teachers, crony corporatists, etc.
Suspicions confirmed.
Take a look at station data in the Arctic and Antarctic. They needed to produce polar warming and remove the AMO cycle.
http://notrickszone.com/2012/03/01/data-tamperin-giss-caught-red-handed-manipulaing-data-to-produce-arctic-climate-history-revision/
@David L. Hagen:
Looks very useful. Perhaps in “round two”… This one is to inspect the nature of the data. A future effort will be to “fix it”. I think that fits better in the “fix it” and perhaps “QA that result” step.
Thanks.
@Streetcred:
Thanks! Though now I’m thinking I ought to have said “Global Economy” instead of “American Economy” ;-)
@Ruhroh:
Nice to see you again! Yeah, I like Hairy Graphs too ;-) It’s a details thing…
@G.Combs:
Well, the results are out there for the rabid rock tossers to sniff at. We’ll know soon enough if it is “Excellent” or not… (Though given the lack of frothing of any duration in the comments on the WUWT link I think it is passing muster so far… Unless they are just having a hard time finding a copy of v1 to test ;-)
http://wattsupwiththat.com/2012/06/21/chiefio-smith-exqamines-ghcn-and-finds-it/
@Robert Austin:
I’ve not looked very closely at BEST. Frankly, when they started announcing their results as “no difference” before they have finished the exercise I kind of lost interest in them. Folks who are working to pre-planned results are not doing Science, but manufacturing. So I saw “Yet Another Self Confirmation” issue developing and didn’t particularly want to put my energy into showing just how they fell into the goo…. I suppose I ought to, though.
FWIW the basic problem in ALL the “global data” consists of two Very Important Points.
1) All over the globe, instrumentation was placed at airports during the growth of The Jet Age. As that data are widely available and distributed via METARS to aviation users globally, it gets gathered and used. (Not as many “issues” with trying to pry “proprietary” local data from local Met Offices who want to sell it and limit the uses to which it can be put – so as to maintain their ability to sell more copies…) Airports are fundamentally exactly wrong as places to put thermometers for climate measurement. Their primary purpose, and their primary users’ desire, is to show the temperature over the runway (due to ‘density altitude’, planes can crash if the air over the runway is hot and you don’t know it). So any “bias” or “error” MUST be toward reading high, not low, or people may die in crashes. Putting the thermometer in the nice cool grass or woods ‘near by’ is NOT acceptable. This, IMHO, biases the data toward the high side in direct proportion to the growth of airports since 1914 or so. https://chiefio.wordpress.com/2012/06/15/an-example-airport-issue/
2) In about 1987-1990 (it varies by location) there was a change of “Duplicate Number”. That indicates some change of instrumentation / processing. I believe that is when the electronic instruments on the ends of RS-232 cables were rolled out. Not only did this frequently put the instruments closer to buildings and other heat sources (who wants to dig a long cable trench? Or who CAN dig a long trench when the cable has a max allowed length?) but at least one early version of the instruments “sucked its own exhaust” and the humidity measuring gizmo made heat that the thermometer sucked back in. If nothing else, putting a power feed into an enclosure is just asking for added energy inside the enclosure… I find it highly suspicious that the “onset” of the “jump” in the data is roughly exactly on top of that change of equipment. If nothing else, there are natural “sanity checks” that are not done by the instruments but that would be done by people.
For example, I found several data items in the “unadjusted” GHCN that had temperatures over 100 C. The average person sent out to read a LIG thermometer would have a general idea about how hot it was, and if the instrument said it was more than double, they would know it was bogus. The automated gear doesn’t do that… and has regularly had “fail high” problems (like the one at the airport in Hawaii that reported record high values for many days prior to being replaced…)
So pretty much any data set collected is going to have the two issues incorporated into it that, IMHO, are most like THE cause of “global warming”…
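As an aside, the over-100 C values mentioned above are exactly the kind of thing a trivial range check would flag before any averaging. A minimal sketch follows; the file name, layout, and column names are assumptions for illustration, not the real GHCN record format.

```python
# Hypothetical range-check sketch (file layout and field names are assumed, not
# the real GHCN format): flag physically implausible monthly means before they
# reach any averaging step.
import csv

# Bounds well outside anything a surface station should report as a monthly mean.
PLAUSIBLE_C = (-90.0, 60.0)

def flag_implausible(path, low=PLAUSIBLE_C[0], high=PLAUSIBLE_C[1]):
    """Yield (station_id, year, month, value) for rows whose value falls outside [low, high]."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):          # assumed columns: station_id, year, month, temp_c
            try:
                t = float(row["temp_c"])
            except (KeyError, ValueError):
                continue                        # missing or non-numeric entries are skipped here
            if not (low <= t <= high):
                yield row["station_id"], row["year"], row["month"], t

if __name__ == "__main__":
    for station, year, month, value in flag_implausible("ghcn_monthly.csv"):
        print(f"suspect value: station {station} {year}-{month}: {value} C")
```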
@Bernd Felsche :
Unfortunately, some things simply are complicated. I try to make it as understandable as possible, but some things are intractable. Like getting folks to understand that temperature is not heat. The more I try to explain it, the less folks want to hear… Heck, I’ve tried for a couple of years now to get folks, even a few, to understand that it is simply void of meaning to average an intrinsic property like temperature.
https://chiefio.wordpress.com/2011/09/27/gives-us-this-day-our-daily-enthalpy/
https://chiefio.wordpress.com/2011/07/01/intrinsic-extrinsic-intensive-extensive/
So I try to alternate between very un-complicated things (like lakes rising) and more subtle but important things (like enthalpy and intrinsic properties and data set variation…)
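To make the “intrinsic property” point concrete, here is a toy calculation (approximate constants and a simplified formula, my illustration rather than anything from the linked posts): two air parcels reading the same temperature can hold very different amounts of heat once humidity is counted, so averaging their temperatures tells you little about energy.

```python
# Toy illustration with approximate constants and a simplified moist-enthalpy
# formula (sensible + latent terms, per kg of air, relative to 0 C).
CP_DRY_AIR = 1005.0      # J/(kg*K), specific heat of dry air, approx.
LATENT_VAP = 2.5e6       # J/kg, latent heat of vaporisation, approx.

def moist_enthalpy(temp_c, specific_humidity):
    """Very rough moist enthalpy per kg of air: sensible term + latent term."""
    return CP_DRY_AIR * temp_c + LATENT_VAP * specific_humidity

# Same 25 C reading in a desert (dry) and in the tropics (humid):
desert  = moist_enthalpy(25.0, 0.002)   # ~2 g/kg water vapour
tropics = moist_enthalpy(25.0, 0.018)   # ~18 g/kg water vapour

print("desert parcel :", round(desert / 1000, 1), "kJ/kg")
print("tropics parcel:", round(tropics / 1000, 1), "kJ/kg")
print("same temperature, heat content differs by",
      round((tropics - desert) / 1000, 1), "kJ/kg")
```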
@Espen:
Most likely. Early on I actually did rather well in chemistry and really liked it. I learned calorimetry and actually liked it, too. What “climate science” is trying to do is a large calorimetry experiment on the whole planet. Yet the cardinal rule of calorimetry is to never screw around with the instrumentation. Do Not Change The Thermometer. Not the location. Not the instrument. Don’t even put a finger on it (as that causes a heat flow).
So what does the instrumental record do? CONSTANTLY screw around with the instrumentation. Changing instruments, locations, scales, methods, you name it.
Any chemist that is worth their salt at calorimetry ought to run screaming from the room if they take a look at what the “climate science” codes try to do.
@Keith Battye:
You are most welcome.
IMHO, given that we have very few real records of the original data left, that a lot of it is locked behind the doors at national Met Offices globally, that the “assembly” process of the GHCN at NOAA/NCDC is done by folks who look very “partisan” (and cheered on by folks like Hansen who are openly rabidly partisan, even getting arrested and testifying that it is OK to violate the law if your cause is, in your opinion, ‘just enough’…), and given that the ClimateGate ‘investigation’ was basically a Top Cover whitewash (IMHO): the odds of getting any real audit of the data and creation process are nearly nil. Even then, the above mentioned structural issues with where we stick thermometers remain.
IMHO it would be more effective to just pick a couple of well sited well tended long life non-Airport thermometers and look at them.
@pyromancer76:
I’ve sent copies of the data to a trusted source. v3 is still available for download. v2 was widely copied by others (so, for example, Verity has a copy or two in a database shared with TonyB). If it comes to it, I’ll even put the copy I’ve got up on a public server (though it is about 10 MB in a spreadsheet).
I’d hope, instead, that the original version just gets put back on line…
FWIW, it’s now “years”. Found some links have a 2009 in them, so at least 3 years. Doesn’t seem like it though ;-)
My “middle retirement” will likely need to end soon. I’ve run down the retirement fund to the point where I’ve got about 2 years more of “play time” and then it would be prudent to not “push it”. It would also be nice to have a larger “toys” fund ;-) OTOH, one of my personal goals was to see if I could get a book written. Made a couple of modest starts, but I’m feeling that “Get ‘r done, damn it!” self motivational thing starting… So I’m figuring a few months of intensive on that and “We’ll see”…
Mosher is an interesting fellow. He is honest and generally tries to be fair. But he is a bit too trusting, IMHO, of the pronouncements of others. So he accepts things like BEST and various hypothetical ‘thought experiment’ tests of the data quality as “proof”. So things like “on average all the changes are nil” don’t get into the weeds of just when in the time series the “ups” vs “downs” happen. Things like “Europe gets a reduced trend (but started with near 2-3 C of trend!) and Australia gets more trend, so it all averages out” get accepted instead of raising a Red Flag… Probably a good programmer and researcher, but not “paranoid enough” for the forensics side of things (IMHO).
His complaint is actually a valid one, IF the goal is to produce a Global Average Temperature that reduces the data artifacts as much as possible. But my goal is to show how much the artifacts change from one set to the other. Then the question becomes “Do the other codes remove that PERFECTLY?” If it isn’t perfect, some of the “warming trend” is data artifact driven. My goal is to show how much the data bias changes. “We cannot agree for we argue from different premises.” Since we don’t know how “perfect” the other codes are, IMHO, it is better to use a simple and clear method to measure the “issue” than to try using something so complicated that you have no real idea what is being measured in the end.
From what I’ve seen of GIStemp, it is at most 50% capable of removing some of the “issue”. That would result in a finding of “global warming” of just the size it finds, but only from the changes in the data set… IMHO, that matters.
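As a back-of-the-envelope illustration of why that matters (the numbers below are assumptions for the example, not findings from the article):

```python
# Purely illustrative arithmetic (assumed numbers): if instrument and siting
# changes inject a spurious warm step into the data, any homogenisation code
# that removes only part of it leaves the remainder to be reported as "warming".
artifact_c = 0.6          # assumed spurious warming baked into the data, in C
removal_fraction = 0.5    # assumed share of the artifact the processing removes
real_warming_c = 0.0      # assume no actual climate signal at all

reported = real_warming_c + artifact_c * (1.0 - removal_fraction)
print(f"reported 'warming': {reported:.1f} C from a purely artificial signal")
```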
FWIW I’m not really very worried about “security” for me or the stuff I’ve done. The results are all “up” here at WP, so also in the wayback machine. The data are saved by others. My laptop and workstation have nothing of real interest on them ( I was a Unix Sys Admin for way too long to trust security on a laptop with a commercial OS on it…) and, frankly, the Climate Science Machine doesn’t really care about folks like me. I’m an unpublished nobody without presence in the academic circles. “Just an amateur”…
But I do like the Security Gig (as I did it for a living for many years / decades) so I do have some security habits that just won’t go away… like offsite data archival and encryption. But it’s more to keep skilz up than ‘for effect’.
One of the things that interests me a lot (and where I’d put time were it not for the whole Climate Alarmism problem) is the issue of Emergent Behaviour in economic systems. I’m fascinated by things like how fracking has stood energy economics on its head. All over the place the Mantra Of Running Out has been “bitch slapped” by natural gas. So much fun to watch ;-) So I’m also of the opinion that there is a revolution in progress. But IMHO it is going to be a very slow one. A generational thing.
One of the most interesting indications is how Ron Paul has gotten large support from young folks. That “echo boomer” generation is peeved at the Green Machine trying to steal their future and sell them a bill of goods of depression, fear, and scarcity. They are saying “NO!” to it. Having had a face full of the enforced PC Doctrine in school for way too long, they’ve had enough. It will take a decade or two for them to displace “my generation”, but it is happening. “This too shall change”. The more the Agenda 21 folks “push it”, the more they alienate the future…
So from fracking to generational culture shifts, a slow revolution is happening. I can only hope that some of the “Pelosi Generation” figure out that the winds have shifted fast enough to abandon the Socialism Shiny Thing and Watermelon World sooner rather than later. A couple of country collapses would help that along (think Spain, Greece, California…)
As to “credit”: I’d be happy as a footnote in somebody else’s papers and / or with a modest job in my “later years” doing something like data archive management or software auditing. Either that or a blockbuster novel that makes me $Millions ;-)
The USSR is a stellar example of “Brittle Failure”. Everything looks just fine, right up until the glass shatters into kibble. (BTW, I love the sound of Brit Slang too ;-) I think California is close to that point. The problem with it, though, is that the USSR took a decade+ of absolute “on the floor” misery during the collapse. California is headed the same way.
There are a couple of industries that are still “stuck here”. Movies. Agriculture. Some of the “tech new ideas”. (Google, Ebay, Apple). Yet even there we see more movies being made “on location” outside California. Some “stars” still live here, so some paychecks come in, but it’s just a lot cheaper to ship the work to Montreal or Czech Republic and get cool visuals too. Ag has been dealt a body blow with the water cut off and is about to be brutally whacked by the regulatory fist on “offroad Diesel” and “particulates” along with electricity costs headed for 50 cents / kW-hr. Irrigation depends on cheap pumping of water. A whole lot of fields will be changing to things that take less cultivation, less pumped water, less labor, less economic inputs… Think “pasture” with the processing done in Nevada and no California wages paid…
That leaves “tech”. But even there, manufacturing is all outsourced to China and Asia now. IT Support work and Customer Service are from India. Apple is building a large Cloud Data Center… in the Carolinas… (who would buy megawatts of power for computers at 50 cents per kW-hr when they can get it for 8 cents per kW-hr?) Ever more folks who “invent things” are living in other places. (I would not move here now, if doing it over.) Lots of us from the “old guard” are planning to “Get out of Dodge”. As I’ve mentioned, my daughter has her last semester and then the last anchor here is done. Of my three siblings, all retired on Government Pensions BTW, 2 are in Nevada after making their money here. The other has a very low cost basis and doesn’t need to spend much… I’m eyeing Florida and Texas and would be thrilled to move, as soon as family and a job offer let me. So what of “tech” is going to keep California afloat? Heck, even Larry Ellison of Oracle bought Lanai in Hawaii, not a chunk of Malibu…
Every so often I drive past the Heart Of Silicon Valley. Where I found jobs for decades. Long rows of buildings where the New Ideas were hatched. Now largely empty, and have been for a decade. Only a few large old companies “hang on”. The new wave isn’t here. IMHO we’re about a decade away from deadsville. Like Detroit, we’re killing the goose for dinner.
The only good thing, IMHO, is that we’re so far in deficit spending and debt and have had a couple of cities and counties file bankruptcy already, with the State not far behind, that the brittle failure could come very soon. There’s a vote in a coming election. Gov. Jerry “Moonbeam” Brown wants a load more Tax Burden to keep funding the State Spending. It can go one of two ways:
1) He gets it. Tax burden is so high the exodus accelerates, the collapse comes soon enough to be a useful ‘bad example’…
2) He doesn’t get it. The population shows a bit of sanity, but the State Government has a major financial crisis. Again we serve as a useful ‘bad example’, but with the private sector recovering more quickly.
There is a minor 3rd possible: The Federal Government does a California Bailout to keep the game going. I don’t think that the other States would let that happen, but it’s a possible. In that case we’re “Greece and the EU” with German Style bailouts. Then we suffer on for however long it takes for that to fail. As I’m not seeing a lot of sympathy from Texas and Chicago toward California, I don’t see this one developing.
In any case, it will be “fun to watch”. If it gets too bad it will be “Fun to watch from Florida” ;-)
IMHO, the thing to watch is the “debt to income” level and spending level of the consumer class. Right now debt is higher than annual income. Unsustainable. Can’t take more in taxes and have them survive. Can’t have them spend more (as they don’t have it). Can’t have electricity costs rise as they MUST just turn off and quit. So the economy can’t grow from consumer demand. The “Green Agenda” can’t be funded (who can buy an e-car? Or charge it?) Government can’t have revenues grow (as they are at or past the limit on ‘take’ possible already). It’s a SHTF situation, just waiting for the plate to launch toward the rotating part….
The only bad part is that the folks with the least incomes get hurt first, and it must travel up through the income levels until it hits the Large Players, before anything will change.
At any rate, back at the GHCN:
I’m doing my part, but I think that the GHCN / Global Warming “gig” runs out in the next half decade to decade. It’s already on the downslope and things are cooling. That will be the death knell for that mantra. Today in San Jose it is cool and overcast. I’m looking out my window at a very nice April day… in June… The cool north air is headed south on this side of N. America. The hot tropical air headed North on the other. Heat leaving the poles fast. It’s just a matter of time, and we run out the clock. Even if stupid “Green laws” are passed, at most they can be in the starting phases of implementation when there’s a cold crisis and the monetary crisis will mean no net money flow to speak of anyway. So while I’m going to examine GHCN some more in coming years, I think the center of action has moved. A decade from now the “echo boomers” will be undoing the stupidity so they can have a normal life…
So brittle failure or just a decade of cooling ennui as the guard changes. Either one is much more likely than any Watermelon World… and at the end of any of them, the whole “warming” and “GHCN” issues thing peters out.
EM
An excellent article. It shows how each version of GHCN has been adjusted to show a greater warming trend. The BOM in Australia has been doing the same thing from the raw data, then their High Quality site data and now ACORN.
By the way, you supplied me with an answer to my question on JoNova’s blog re the discrepancy between GISS Arctic temp range and the DMI data for the month of May – the former showing above-average temps and the latter showing below average. Thank you for your response. Do you regard DMI as a reliable source of Arctic temps? Is DMI data incorporated into GHCN data?
EM – I also took an hour to read this last night, and didn’t have the time to congratulate you on the work. I did pass on the link to family and friends though. Tonight it took another half hour reading all the comments.
If California is going to have a brittle collapse it would seem useful to have sold the house before others realise it’s going to happen – that’s assuming you own the house and are not renting it. I’m not sure on the moral stance here – would you inform a buyer that it’s a long-term investment only?
I’m looking forward to the results of your work, and hope it gets noticed in government. Your “retirement” has been probably more valuable than your paid time. I hope the book earns enough so you don’t need to get a paying job again.
@Simon:
I’m most likely to just “gift” the house to the kid(s). We “own” it, but a sell / buy cycle would have a lot of costs that I just don’t need. Basically the tax issues make changing ownership a bad idea. Add to that all the “crap” that now must be done to make a house “current” and it is even worse. Easier just to take my name off the deed and leave the kid on.
Real Estate is always a long term investment. I’m of the opinion that “After The Fall” California will once again be a great place to live, so might as well keep the dirt. BTW we have “Prop 13” limits on property tax, so as long as the property title does not leave the family, taxes stay at about 25 year ago levels (plus 1%/yr or some such). A “sell / buy” would most likely get me higher property taxes (whatever State I was in) than the present house payment (that runs out in a few years…)
In short: The “transaction costs” now outweigh the benefits of changing ownership. This will only get worse, not better, as things like “mandatory solar panels” and such are being added to new homes and will ‘trickle down’ over time to resales.
@Ian George:
On my “someday” list is to compare the “New Australia” to various old ones…
Pingback: Comparing GHCN V1 and V3 | Watts Up With That?
Chiefio,
I appreciate the need to keep things simple sometimes. The complexity comes from the number of simple things and how they couple to produce the real world.
The worst thing that one can do to test that there is “global warming” is to look only at a statistical artifact, label it “temperature” (derived as an average of a varying number of diurnal extremes), and then entice others to believe that it is real. One could only do those things in earnest if one has no functional grasp of rudimentary thermodynamics.
http://contrary2belief.wordpress.com/2011/10/22/global-warming/
So, “The fiddles did it” is the Null Hypothesis, not rejected by the meta-analysis.
Of course the really intriguing question is, “Who paid the fiddlers to play that tune?”
_________
Edit: “independent researchers who’s careers are not already wedded” Unless you mean “who is careers”, that’s “whose careers”.
@Bernd Felsche:
Pretty much sums it up! Well done.
@Brian H.:
Eventually the formal rules of English will catch up with the people… ;-)
E.M. Smith;
Ignorance and slop may well triumph in the end. You will rue the day, however.
Pingback: GIStemp – No new source code? | Musings from the Chiefio