A Comparison of the Global Historical Climatology Network Data, Version 1 and Version 3
June 12, 2012
The Global Warming Issue, and Consequences
The major thesis of Global Warming is that there is a change of temperature, on the order of about 1/2 C to 1 C, caused by human activity in the burning of fossil fuels. The hypothesis then moves on to consequences, where just about anything that could be bad is attributed to Carbon Dioxide warming, at least as long as research funding dollars are available to study it; and then the hypothesis moves on to remediation. Remediation largely focuses on The Green Agenda, as in the UN Agenda 21 effort, and de-industrialization.
It is that final step that is the most concerning. There are great swaths of the economy to be cut down and replaced with untested, or in some cases, tested and failed, technologies and alternative economies.
There is a very long chain of events, of unproven and ill-conceived conjecture, and even of fundamentally broken theories, all of which must be true for the Anthropogenic Global Warming theory to be valid and for the proposed “cures” to be correct. The consequences of being wrong are horrific. We are, in essence, about to play economic Russian Roulette based on a theory which comes from ONE set of data: the GHCN, the Global Historical Climatology Network.
There are folks who will assert that there are several sets of data, each independent and each showing the same thing: warming on the order of 1/2 C to 1 C. The Hadley CRUTEM, NASA GIStemp, and NCDC. Yet each of these is, in reality, a ‘variation on a theme’ in the processing done to the single global data set, the GHCN. If that data has an inherent bias in it, by accident or by design, that bias will be reflected in each of the products that do variations on how to adjust that data for various things like population growth (UHI, or Urban Heat Island, effect) or for the frequent loss of data in some areas (or loss of whole masses of thermometer records, sometimes the majority all at once).
A great deal of energy (on both sides) has gone into arguing various points of minutiae. Are city lights a decent proxy for heat islands? Are airports inherently warm? (They are. Then the minutiae moved on to “how much?” and “is it enough to bias results?”)
In the end, the arguments have often generated more heat than light.
On the one hand, the narrative is that “Only Green Sustainable Solutions can save the planet!” You, too, can be the heroic one, using Schumpeterian “Creative Destruction” to remake the world into a new garden paradise of Clean Green Sustainable Energy and lead the world out of the darkness of coal and oil power.
But what if the “Creative Destruction” is in fact long on destruction and short on creation? What if we really need coal and oil to produce things like steel, glass, aluminum, and food?
Spain has already taken the lead down that path in the European Union. They have been “leaders” in solar and related “Green” power. Germany has pushed wind as a power solution as well. The result has been an economy in Spain where unemployment is over 18% and, among the youth, over 50%. They have even invented a new word: NINI, for Neither in Employment Nor Education ( http://www.theglobeandmail.com/commentary/nini-and-the-european-dream/article1389123/ ). It describes the lost generation of young folks who have graduated school, or just given up on school as they see no job at the end of it, and are now unemployed and with little prospect. In Germany, increasing numbers of people are in “Fuel Poverty”, with utility disconnections rising fast.
The counter example is China, building roughly one coal fired power plant per month (per the New York Times: https://www.nytimes.com/2009/05/11/world/asia/11coal.html ). They have recently experienced something of an “economic slowdown”, but from 12% growth per year to 8% (the exact number changes from month to month, with some reports as low as 5%). Those are growth numbers most economies would love to have. Growth provided by cheap and effective power.
So which is the real “Heroic Narrative” that will be written? The Green Dream? Or the one about rational folks looking at real world facts on the ground and saving the world’s greatest economy from Green destruction? Will it be the person who has an ocean view in Massachusetts despoiled with thousands of subsidized windmills (which tend not to run when most needed on cold still winter nights)? Or the person who keeps America at work making cars, computers, canning food, and having BBQ tailgate parties at the football game? Each of those things takes economical energy to produce.
Drive up the price of electricity, and aluminum smelting and metal can fabrication move to China. Steel arc furnaces shut down and move to where China can provide cheap coal powered electricity. Cement kilns run on coal heat, and coal based coke is used to reduce iron ore to iron and steel. The basic building blocks of industry depend directly on electricity and coal.
In short, every significant economic function of a manufacturing economy depends on affordable fuel. (As do most of the significant economic functions of an “information” or “service” economy: computer rooms run on massive electric consumption; and have you ever tried to grill a steak without gas or charcoal?) Each and every one of those manufacturing industries depends, fundamentally, on low cost power. Raise the price beyond the competition’s, and those industries will be destroyed.
Would it not be just as much a heroic act to prevent that destruction?
What if “the story” of Global Warming were in fact just that? A story? Based on a set of data that are not “fit for purpose” and simply, despite the best efforts possible, cannot be “cleaned up” enough to remove shifts of trend and “warming” arising from data set changes; shifts of a size sufficient to account for all “Global Warming”, yet known not to be caused by Carbon Dioxide, but rather by the way in which the data are gathered and tabulated?
In that case, the person who stands up and says, in essence, “The Warming Story Has No Clothes” is in fact the true hero. Saving millions of jobs and untold $Billions of economic activity from unjustified destruction. Destruction that will not “save the world” as it is addressing a problem that does not exist. It is just an erroneous response to a mistake of computer data processing. All too common, but on a vastly more grand scale. (Rather like the financial derivatives market meltdown where they, too, had computer models showing that everything was fully understood and risk management was settled.)
Examining The Data
Suppose there were a simple way to view a historical change of the data that is of the same scale as the reputed “Global Warming” but was clearly caused simply by changes of processing of that data.
Suppose this were demonstrable for the GHCN data on which all of NCDC, GISS with GIStemp, and Hadley CRU with HadCRUT depend? Suppose the nature of the change were such that it is highly likely to escape complete removal in the kinds of processing done by those temperature series processing programs? Would it be too much to ask that folks take just a bit longer to think about what they plan to do to the economy in the face of that kind of foundation of sand to the Global Average Temperature?
It is my opinion that the situation is exactly that, and relatively easily demonstrated. The response from Hadley, Goddard, and NCDC will undoubtedly be that they have it all perfected; that they have peer reviewed each other’s papers and that they all agree that they can’t be wrong. Yet anyone can be wrong. The history of science is littered with discarded theories, often theories that held dominance and were “consensus” for decades (or even centuries) prior to being overturned. That is just the nature of science. Newtonian mechanics was superseded by Relativity, just as Copernicus replaced the Celestial Spheres. More recently, the entire arrangement of which species are most closely related to which others has been overturned, the Linnaean names of plants and animals replaced with “clades” based on our new genetic tools.
But surely temperatures are more “fixed” than that? Fine folks took readings by looking at a thermometer and writing them down. For most of history that is how it was done. What could possibly change that written record?
In short, modern folks finding reasons to re-write the past. Perhaps good reasons; perhaps not. We’ll see that point argued for decades to come (perhaps longer). Yes, all these good folks believe they are right, and that they cannot have made an error. Yet the changes are of a size and scope sufficient to account for all the “Warming” seen in the historical record. Surely when the warming we find in the temperature record is largely attributable to changes in the method of adjusting that data, and of processing it into a data series, there is sufficient cause for alarm to counsel against rash actions based on such a malleable history. Sufficient cause to ask that a true accounting be done, with proper independent Quality Control audits all the way through. (As it stands, there is no audit trail, such as one would see in an accounting report for a prospectus. Nor are the computer codes being used vetted and tested as are the codes for FDA drug approval. We are, quite literally, betting the nation’s economy on code that would not pass FDA requirements for a new form of aspirin.)
One Example Problem
This is but one example problem among many for the GHCN data set. The problem is “Revision History”.
There are three major revisions of the GHCN data set: Version 1, Version 2, and, most recently, Version 3. Over time, the exact temperature recording stations in the data set have changed. Sometimes many are added; often many exit.
From this flux of ever changing instruments, the assertion is made that one can calculate a Global Warming trend. While there are many technical and philosophical issues with that assertion, the simple fact that the instrumental change can account for the “Warming” is distressing. (As an example of the technical issues, a Global Average Temperature confounds heat and temperature, which is commonly done by laymen but strictly avoided by engineers and scientists; except for climate scientists, it would seem.)
For this particular example we will look at how the data change between Version 1 and Version 3 by using the same method on both sets of data. As the Version 1 data end in 1990, the Version 3 data will also be truncated at that point in time. In this way we will be looking at the same period of time, for the same GHCN data set; just two different versions, with somewhat different thermometer records being in and out of each. Basically, these are supposedly the same places and the same history, so any changes are a result of the thermometer selection done on the set and the differences in how the data were processed or adjusted. The expectation would be that they ought to show fairly similar trends of warming or cooling for any given place. To the extent the two sets diverge, it argues for data processing being the factor we are measuring, not real changes in the global climate.
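The version-matching step can be sketched in a few lines. This is not the actual code used for the graphs; it is a minimal illustration, and the station ID and the dict-of-dicts data layout are invented for the example (the real GHCN files are fixed-width text).

```python
# Sketch of matching the two versions over a common period: drop any
# v3 readings after 1990 so both versions cover the same years.
# The station ID and the {station: {year: temp}} layout are invented
# for illustration only.

def truncate(records, last_year=1990):
    """Keep only readings up to and including last_year."""
    return {
        stn: {yr: t for yr, t in series.items() if yr <= last_year}
        for stn, series in records.items()
    }

v3 = {"42572530000": {1988: 9.1, 1990: 9.4, 1995: 10.0}}
v3_matched = truncate(v3)
print(v3_matched)  # the 1995 reading is dropped; 1988 and 1990 remain
```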
The dP or Delta Past method
The method used is a variation on a peer reviewed method called “First Differences”. It is one of the simplest methods to use. When doing data audits, the simplest methods are much less likely to have hidden problems that are not easily spotted. Computer codes thousands of lines long, in dozens of distinct programs, can be hideously hard to debug and can carry subtle errors for decades. The method used here is short, simple, and easy to check: dozens of lines of code in a single digit number of modules (some of them as short as 2 or 3 lines). While not an ideal theoretical method to calculate temperatures in any one place, it is well suited to finding biases in the data sets.
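The core First Differences idea can be sketched briefly. This is not the exact dP variant used for these graphs, only a minimal illustration of the underlying technique: each station contributes only its own year-over-year changes, those changes are averaged across stations for each year, and a running total of the averages gives the trend trace. The station data here are invented.

```python
# First Differences, minimally: a station is only ever compared to its
# own prior year, so no absolute temperature from one station is ever
# compared against a different station's. Data values are invented.

def first_differences(stations):
    """stations: list of {year: mean_temp_C} dicts (one per station).
    Returns {year: mean year-over-year change across stations}."""
    diffs = {}
    for series in stations:
        years = sorted(series)
        for prev, cur in zip(years, years[1:]):
            if cur == prev + 1:  # only consecutive years form a difference
                diffs.setdefault(cur, []).append(series[cur] - series[prev])
    return {yr: sum(d) / len(d) for yr, d in sorted(diffs.items())}

def accumulate(mean_diffs):
    """Running total of the mean differences: the trend trace."""
    total, trace = 0.0, {}
    for yr in sorted(mean_diffs):
        total += mean_diffs[yr]
        trace[yr] = total
    return trace

stations = [{1900: 10.0, 1901: 10.5, 1902: 10.2},
            {1901: 5.0, 1902: 5.1}]
md = first_differences(stations)  # 1901 has one reporting station, 1902 has two
trace = accumulate(md)
```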
The computer programs used to create the data graphed here are available at this link:
The various graphs and comparisons in this report can be found in an index at this link:
Unlike the codes that try to do homogenization (which has as many definitions as there are practitioners, it would seem) and do a variety of “filling in” and data fabrication to create missing data, this method simply compares a single thermometer now to the readings for it in the past; what is called an “anomaly process” in climate science.
The other codes from places like NCDC, GISS and Hadley do anomaly processes too, but often they are comparing a synthetic “Grid / Box” value made from one thermometer set today to a completely different set of thermometers in the past. That is prone to a variety of errors, including one called a ‘splice artifact’.
For example, GIStemp computes 16,000 “grid cells”. Yet there were only 1,280 currently active thermometers in GHCN v2, so the “present values” of 14,000+ ‘grid boxes’ were a polite fiction: a creation of the GIStemp computer program based on other cells up to 1200 km away. Hardly a ‘clean’ anomaly process; comparing one fiction in the present to another fiction in the past. Fictional values created by “homogenizing” the data and splicing together many thermometer records that often themselves contain values created by comparison and adjustment in the “homogenizing” process.
Any time data from different sources are glued together to make a continuous series, there is the risk that the “join” will be artificially displaced. This is a common and well recognized kind of error. In the various climate codes, the homogenizing process and the joining of different thermometer records into one synthetic record take steps to try and reduce the splice artifacts. It is not possible to perfectly remove them. So in large part the question becomes “was the join good enough”?
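A toy example makes the splice artifact concrete. Take two perfectly flat records that merely read at different absolute levels (a station move, say, or a new instrument), glue them together naively, and a trend appears that neither record contains. All values below are invented.

```python
# Splice artifact in miniature: two trendless records at different
# absolute levels, joined with no offset adjustment.

station_a = [10.0] * 10          # say 1900-1909: perfectly flat
station_b = [11.0] * 10          # say 1910-1919: also flat, but 1 C higher

spliced = station_a + station_b  # the naive "continuous" record

# A trend taken end-to-end now shows warming that neither piece has.
trend_per_year = (spliced[-1] - spliced[0]) / (len(spliced) - 1)
print(round(trend_per_year, 3))  # 0.053 C/yr, purely from the join
```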
The code I used to make these audit graphs avoids making splice artifacts in the creation of the “anomaly records” for each thermometer history. Any given thermometer is compared only to itself, so there is little opportunity for a splice artifact in making the anomalies. It then averages those anomalies together for variable sized regions. (This can be any sized region that can be described with a thermometer World Meteorological Organization identification number series, or WMO number, and the Country Code. The highest order digit of the Country Code is the ‘region’; fundamentally, each continent. The first three digits taken together are the “country code” and code for a single political entity, most of the time; Russia is divided into two parts, one in Europe, the other in Asia.) In the process of making these graphs, the different thermometers are averaged together as anomalies inside these selected regions.
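The region selection described above amounts to a prefix match on the station identifier, followed by averaging the per-station anomalies for each year. A sketch, with invented station IDs and anomaly values:

```python
# Select stations by ID prefix (first digit = region/continent, first
# three digits = country), then average their anomalies year by year.
# IDs and anomaly values are invented for illustration.

def select_region(anomalies, prefix):
    """anomalies: {station_id: {year: anomaly_C}}."""
    return {s: a for s, a in anomalies.items() if s.startswith(prefix)}

def average_by_year(selected):
    by_year = {}
    for series in selected.values():
        for yr, a in series.items():
            by_year.setdefault(yr, []).append(a)
    return {yr: sum(v) / len(v) for yr, v in sorted(by_year.items())}

anoms = {"12345678": {1950: 0.1, 1951: -0.1},   # region 1 station
         "42501234": {1950: 0.3, 1951: 0.5}}    # region 4, country 425
region_1 = average_by_year(select_region(anoms, "1"))
print(region_1)  # only the region-1 station contributes
```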
While that is an accepted process (averaging different instruments via anomalies) and is done inside the various data set creation codes (such as GIStemp and HadCRUT) it inevitably creates a splice artifact. The only question is “How big?” Are the efforts taken to remove that splice artifact sufficient to separate it from the desired “signal” being sought? No one knows, as there have not been any benchmarks run on codes such as GIStemp to assess how good, or poor, they are at such splice artifact removal. We are, in essence, “betting it all” on the opinions of a few researchers and their peer reviewers (who are often from the same small group of agencies) that they have done everything correctly.
So this code is a bit different, in that it minimizes splice artifacts in the anomaly creation step, but it does not attempt to remove those artifacts in the “average the anomalies” step. The purpose here is to see how much variation there is in the data themselves, not to add yet another unknown quality of “adjustment”. In essence, we want to see, given a very clean and direct anomaly process uncluttered by masking processes such as ‘homogenizing’, whether there were significant shifts in the basic character of the data being fed into all those other data creation programs (such as NASA GIStemp, Hadley HadCRUT / CRUTEM, and NOAA NCDC products).
What Is Found
What is found is a degree of “shift” of the input data of roughly the same order of scale as the reputed Global Warming.
The inevitable conclusion is that, for the assertions about global warming to be real, we are depending on the various climate codes to be nearly 100% perfect in removing this warming shift, or to be insensitive to it.
Simple changes in the composition of the GHCN data set between Version 1 and Version 3 can account for the observed “Global Warming”; and the assertion that those biases in the adjustments are valid, or are adequately removed by the various codes, is just that: an assertion.
Are Computer Programmers Perfect?
It all comes down to trusting the opinions of the folks who wrote the programs that there are no errors.
I’ve spent decades writing, testing, and running various kinds of computer programs. The larger the code, the more commonly it has errors in it. Typically these are found and removed by a debugging process that includes various test suites run with specific test data (with catchy names like “Red Data” and “White Noise” and even the middle ground of “Pink Data”). I have seen no published test data, test runs, test suites, nor audit reports from independent code auditors for the climate codes. Financial systems and FDA drug approval codes must typically have some kind of audit process done. For the FDA, even the process by which the computer is unboxed, set up, and turned on must be documented in what is called a “Qualified Installation”, where each step is signed off by the technician doing the work and a manager observing. (I’ve done Qualified Installations, so I am familiar with the process.)
Looking at the state of the Hadley software (especially the laments in the “Harry README” file) makes it very clear they could not do a QA test run. They don’t even have their input data any longer, per the emails made public. I have ported and run the Goddard GIStemp code. It is coded to expect particular stations in the input. It is not possible to run it on synthetic test data; it breaks and hangs. (Exactly how much the data can be changed before the program hangs has not yet been found. So far, every significant change of station composition has caused a crash / hang in my testing.) It looks as though it simply is not possible to do a proper QA / test / validation suite run on the various climate codes. (Though I have not seen the NCDC code, blocks of the GIStemp data descriptions include the NCDC data structures, and it is clear that the two groups share code and practices, so I don’t expect much will be different.)
This means that we have no idea if they can, or can not, remove the kinds of data shifts seen in the following graphs. It is simply an article of faith in the programmers at GISS and Hadley (and poor Harry README ). Usually, when about to remake the global economy, a bit more than an article of faith would be required. These codes would not be acceptable for use in bringing a new aspirin to market, as they do not meet FDA requirements. Clearly there is a disconnect between potential for damage and degree of vetting required in the two fields.
These are presented as graphs. On each graph there are lines for the Version 1 data set (v1) and the Version 3 data set (v3). The scales are not always consistent from graph to graph (partly as some graphs needed to be expanded to show the differences), but the variations in scale are generally not great. Each graph has my commentary next to it, but the graphs can easily be examined by anyone for alternative opinions.
These graphs are very large (so look very compressed on the screen). Click on the graph and open it in a new window to get a larger readable version.
There are several salient features seen that are common to the set of graphs. There are some other features that are a bit more abstract, or only seen in some of the graphs.
In particular, the changes are generally such that a warming change is introduced into the data series. Remember that each graph compares the same area, for the same years, with what ought to be the same instruments for many of them. Yet not all data series are warmed. That, alone, is curious.
Looking around the continents we see some warming, some not. Whatever “Global Warming” is, it is not “Global”. Looking at individual parts of the data (such as by region or continent) we find large differences. A “well mixed gas” causing widely disseminated “Global Warming” ought to produce changes that are more consistent from region to region. We might expect to see some cycles that are complementary (such as warming in Europe while North America cools) due to weather cycles, but over a hundred-plus years we would expect both places to rise proportionately. That is not seen.
There is a clear cyclical component. Many times we can see that the data cool in the early 1800s, then warm dramatically into the mid 1930s-40s, then cool again into the 1960s-70s, only to warm again as we exit that cool period. One of the things frequently seen is that the period from about 1930 to 1970 is “cooled” in v3 when compared to v1. This creates a warming trend increase from then to the present. Often, too, the distant past is ‘warmed’. In gross averages, these tend to offset each other, showing “little net bias”; but those statistical measures hide the way the ‘belly of the temperature history’ gets a sag. The very early data are often not used in the later temperature series programs and are thrown away, leaving just that increased warming trend. (GIStemp, for example, ‘starts time’ in 1880 and disposes of earlier data.) Similarly, the 1950 to 1980 period tends to be the ‘baseline’ from which ‘warming’ is measured. Cooling the baseline biases the trend toward warming.
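The arithmetic of that baseline effect is worth making explicit. An anomaly is measured relative to the baseline period mean, so cooling the baseline years by some amount lifts every anomaly (and hence the apparent recent warming) by exactly that amount. The numbers below are invented purely to show the mechanism:

```python
# Cooling the baseline inflates the trend: anomaly = temp - baseline
# mean, so shaving 0.2 C off the 1950-1980 years adds 0.2 C to every
# anomaly computed against them. All values are invented.

temps = {1955: 14.0, 1965: 14.0, 1975: 14.0, 2005: 14.3}
baseline_years = [1955, 1965, 1975]

def anomaly(series, year):
    base = sum(series[y] for y in baseline_years) / len(baseline_years)
    return series[year] - base

before = anomaly(temps, 2005)        # 0.3 C above baseline

cooled = dict(temps)
for y in baseline_years:
    cooled[y] -= 0.2                 # "adjust" the baseline downward

after = anomaly(cooled, 2005)        # now 0.5 C above baseline
print(round(before, 2), round(after, 2))
```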
The Global Comparison v1 vs v3
In this graph, the dark red line is the difference between the two versions. V3 is the thin yellow line while v1 is the thin blue line. Recently, v3 is above v1. When we move into the past, v3 goes below v1. That is, the present has been warmed while the past has been cooled.
The recent warming is about 1/4 C while the more distant cooling is up to a full 1 C, but generally about 1/2 C. Overall, about 0.75 C of “Warming Trend” is in the v3 data that was not in the v1 data.
It bears repeating that these are the same GHCN data set and covering the same time periods (in that v3 is ended in 1990 to match v1), and in this case it is ‘all data’ so covers the entire world. This increase in “warming trend” is entirely the result of changes as to which thermometers are in the data set and which are out, along with the changes in processing done to the temperature data now, as opposed to 1990. These are “man made warming trends”, but they do not involve the planet, only the data set and how it is constructed.
If we look just at the segment from 1880 to date, it is a bit easier to see some of the smaller details:
Notice how the lines are much more volatile in the past? In the recent couple of decades things ‘go flat’. Partly this is from there being very few thermometers in the distant past, but another part seems to be related to where thermometers are located and how land use has changed. Some of it is the Quality Control process applied today. For example, today most thermometers are at Airports and use a system called ASOS. (Automated Surface Observing System). Airports are characterized by large expanses of concrete and asphalt, giant aircraft arriving day and night burning tons of kerosene per hour, with bands of surface vehicles circling and with snow removal equipment hard at work. Contrast that with a snow covered field a few miles away and it is pretty clear which one can have a sudden hard cold excursion.
Any record that is “too extreme” is compared to a set of nearby ASOS stations and, if the computer program deems it “too extreme”, the temperature reported is dropped and the average of “nearby” ASOS stations is substituted. Not only is an average much more resistant to having a cold excursion (as they must ALL have a large cold excursion for the average to have one) but the ASOS stations are located at Airports, which are generally in low flat areas and often near bodies of water. All places with less excursion to the temperatures.
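The clipping mechanism can be shown with a toy version of that QC rule. The threshold and temperature values below are invented; the point is only that substituting a neighbor average is far more resistant to a cold excursion than any single station is.

```python
# Toy QC substitution: a reading that deviates "too far" from nearby
# stations is discarded and replaced with their average. Threshold and
# temperatures are invented for illustration.

def qc_substitute(reading, neighbors, threshold=5.0):
    mean_nearby = sum(neighbors) / len(neighbors)
    if abs(reading - mean_nearby) > threshold:
        return mean_nearby    # excursion clipped, neighbor average used
    return reading

# Hard cold snap at a rural field; nearby airport stations stay mild.
print(qc_substitute(-12.0, [-2.0, -1.0, -3.0]))  # -2.0: the cold spike is gone
print(qc_substitute(-4.0, [-2.0, -1.0, -3.0]))   # -4.0: kept, within threshold
```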
There have been many other issues raised with the change of thermometer location over time (such as more are at lower elevations, fewer at altitude) and more such issues can be found detailed here:
What this graph shows is the cumulative impact of those types of issues, which has been to ‘clip off’ the low going cold spikes and generally “flatten” the data. Even with that, though, we can see that the present high excursions are about the same as during the 1930s and the early 1820s. We have not experienced warming significantly different from then, despite the low going cold spikes being much more “clipped” in recent data.
Does anything different show up when we look at the data aggregates by hemisphere? North vs South? If “Global Warming” is truly global, we would expect to see a consistent trend over long periods of time. There might well be some kind of oscillation where a cold N.H. happens at the same time as a hot S.H., but on average, the two curves ought to have the same trajectory and about the same slope if warming really is happening and really is global.
What is most striking is just how much the Southern Hemisphere is not participating in “Global” warming:
Pretty much dead flat over most of history. The very early years are more chaotic, as we end up with just a half dozen, and then eventually just one thermometer. But once coverage is representative, it just kind of “lays there”. The dark blue and dark yellow lines are the two temperature series. I’ve added the year-to-year changes (those thin dT v1 and v3 dT lines) that are the actual yearly values (not the running total that makes the thick lines). Even they do not stray far from the zero line.
From roughly 1870 to date, we’ve had three cold dips (1950–1975, around 1916–1925, and about 1879–1910), each followed by a warm period rather like now. Looking very much like the PDO (Pacific Decadal Oscillation) cycle of natural hot / cold alternating periods of about 30 years each.
Again, though, we do see that the last few years have had the “cold excursions” clipped out. Those annual dT values that had been regularly ranging over 1/2 C and occasionally 1 C, now barely move 1/4 C. There may not be any warming in the Southern Hemisphere, but the thermometers are clearly in places that just don’t change much. The last half dozen years of data points are nearly dead flat.
The Northern Hemisphere has much more “action” in comparison.
The first thing to notice here is just how different this graph is from the Southern Hemisphere graph.
How can a global phenomenon from a ‘well distributed gas’ have almost all of the “effect” in only one hemisphere?
An artifact of thermometer data processing and / or instrument change would be expected to show up far more strongly in the Northern Hemisphere where far more instrument change has happened and the processes have changed more dramatically. Where industrial growth and paving / airport growth has been largest.
Start by looking at the two top thin lines. Those “dT” lines. They were far more volatile in the past, now nearly flat. There just isn’t nearly as much range to the data now as there was before. Partly that is because averaging together more thermometers gives a narrower range of possible outcomes. (But then again, it is just that kind of artifact that can find spurious “warming” comparing present stable values to past volatile ones).
The next thing to notice is that the thick dP/dt lines range between -1 C and -1/2 C back to the 1700s. Only recently does the range move up to -1/2 C to 0 C. That is, that 1/2 C of “Global Warming” all arrives between about 1986 and 1990. Even then, we do not have higher readings than in the early 1930s. There is a ‘step function’ in the processing, not a slowly accumulating effect from a slowly accumulating gas.
Once again we see that recent data have been warmed in v3 compared with v1 where yellow is on top of blue; but in the distant past, yellow is now below blue. The past has been “cooled” and the more recent data “warmed”; increasing the “warming trend” in v3 vs v1. (Not in the real world, only in one data set when compared to the other – the world did not retroactively change.) The “pivot point” looks to be about 1880, right at the point where GIStemp tosses out the older data, leaving only that warming trend from 1880 to ‘warmer’ recently.
Having the distant past get colder is also not something one would expect from a well distributed gas most of which had onset after 1930.
Also of note, we can see the same kind of “ripple” of natural cycles. Cold in the 1960s and 1880s and warmer in the 1930s, 1860s and even back in the early 1730s and about 1825.
The S.H. chart ends in the 1830s while this one extends to 1702, so they do not line up exactly. In 1830 the N.H. is having a wild warming (that bubble up just to the right of the main heading) rather like the S.H. does at the far right edge of its graph.
Frankly, given that the North had more airports faster, and more urban growth faster, than the South, I suspect that what little “warming” there is can be entirely explained with Urban Heat Island effect, Airport Heat Island effect, and a bit of over zealous data ‘adjusting’ by folks at certain Northern Hemisphere Met Offices and government agencies.
To my eye, the bulk of the “lift” looks to come between 1890 and about 1935. Then we have the typical “ripple” both before (at a lower level) and after. Given that 2 world wars happened in there, along with several changes of thermometer scales and instruments ( Japan, alone, took a significant rise with the arrival of American Occupation and different instruments and methods) I’m surprised the offset is only 1/2 C. I’d expect more than that just from airports turning from grass balloon fields to tarmac coated International Jetports.
That kind of “disconnect” between the overall look of the Southern Hemisphere graph and that for the Northern Hemisphere is in keeping with what would be expected from issues in the instrumental record and not in keeping with a generalized “Global Warming” caused by a well distributed gas having the same physics in both hemispheres. Yes, being much more water, the Southern Hemisphere would be somewhat more ‘water moderated’, but as the trend is essentially zero from 1855 to date, the implication is that the cause of warming in the Northern Hemisphere record is unable to change the temperature of 1/2 of the planet. Global Warming isn’t global. At best we have Northern Hemisphere warming and the data “has issues” with thermometer changes.
By Region / Continent
Looking at the data grouped by “Region Code” (the first digit of the Country Code portion of the station number) is also enlightening. Looking at major regions compared to the whole data set can show the bias of most of the data as being from the USA, Canada and Europe. Even looking at hemispheres can show that bias in the data (which one ought to be able to mitigate a little bit with some kind of ‘grid / box’ assignment and averaging). Looking at the data by continent or even by country eliminates most of those concerns. We will still have the USA dominating the North American data, but Africa, South America, Asia, and the Pacific Islands of Oceania will all get clearer representation. Europe will only dominate Europe.
So if there is a tendency for European changes to skew the data, they will not show up in Asia, or Africa, for example.
What is quite surprising here is, in fact, Africa. As it straddles the equator and has a load of hot places, one would expect added heat to show up here. What we get is not warming.
First off, just notice that V3 is above V1 clear back to the 1800s.
Where v1 had 1 C of warming, v3 has nil.
So which was it? 1 C of catastrophic warming in Africa, or “no worries”?
We also can see some ‘ripple’ from natural oscillations and, once again, the dramatic compression of “range” of the data over time. The 1990 end is incredibly compressed compared to prior years. (Though I note that 1886 and the 1930s were both low volatility times as well). In general, v1 ranged from -1 C to 0 C over most of the history, with the present being a zero time. In v3, we make that range more like +/- 1/2 C from natural variations.
At a minimum this is saying that the equatorial band is not having any “global warming” as Africa sits astride it. That just the changes to the data set can move an entire continent by 1 C does give some pause as to just what any particular “trend” really means.
Here we have a ‘warming profile’. Cold in 1888 and even in the 1920s, then we gradually warm into the present. But notice that in 1932 we touch the zero line while in 1944 we exceed it. From that point on, we are essentially flat. The low going excursions get trimmed a little, but we just do not get “warmer”, just a bit of ‘less cold’ on exceptional times. All in all it looks like a ‘climb out of the Little Ice Age’, though a bit later than Europe. The “dip” in 1850 in Europe does not show up here, instead it gets a bit of cold about 1886, then starts a nice recovery.
One small problem. The CO2 theory says warming is caused by the CO2, most of which got into the atmosphere since about 1945. This graph shows the warming happening when there is little added CO2, and the growth of temperature halting as CO2 is released, being essentially flat from 1932 to the left margin. We do see that the last half dozen years again have about 1/2 the ‘low going range’ pruned out of the data; or it could just be like the 1940s flat period again.
We also have to note that about 1940 the V3 line is below the v1 line. The past data get “cooled” making the present warmer in comparison as a result of those changes in thermometers and processing.
Oceania / Pacific Ocean
In this case we again see ‘warming’ that comes as a step function about 1978-80. The temperature curve wanders between -1/2 and 0 C from about 1866 to about 1978, then it goes to the zero line for most of the rest. The new V3 data generally cool the past.
In general though, not a lot of displacement between v1 and v3. About 1/4 degree overall. Might want to find which countries exactly are having “warming” in their data and which are not; but even with the added warming trend, just not a lot of overall “warming” in the Pacific.
One must ask, though, if the Pacific Ocean isn’t warming, what is so “Global” about Global Warming?
It is also the case that the way the temperatures change, as a ‘step function’ in the recent past, is not in keeping with the notion of gradually increasing infrared radiation induced heating. It is much more in keeping with data artifacts from changes of instruments or processes.
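That “step vs gradual” distinction can even be checked numerically: fit a simple two-level “step” model at each candidate year and see where it fits best, then compare that fit to a straight line. The sketch below is a toy version with made-up anomaly numbers; the function names and the fabricated series are my own illustration, not anything from GHCN.

```python
# Sketch: locate a candidate step change in an anomaly series by finding
# the breakpoint year that minimizes the error of a two-level (step)
# model. Purely illustrative, fabricated numbers.

def sse_step(years, temps, t_break):
    """Sum of squared errors for a two-level model split at t_break."""
    left = [v for y, v in zip(years, temps) if y < t_break]
    right = [v for y, v in zip(years, temps) if y >= t_break]
    sse = 0.0
    for segment in (left, right):
        if segment:
            mean = sum(segment) / len(segment)
            sse += sum((v - mean) ** 2 for v in segment)
    return sse

def best_break(years, temps):
    """Breakpoint year minimizing the step-model error."""
    return min(years[1:], key=lambda t: sse_step(years, temps, t))

# Made-up series: flat at -0.5 C, then a 0.5 C step up in 1979.
years = list(range(1960, 2000))
temps = [-0.5 if y < 1979 else 0.0 for y in years]
print(best_break(years, temps))  # -> 1979
```

A series whose error collapses to near zero at one breakpoint, as here, is better described by a step (instrument or process change) than by a slowly rising line.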
A very strange graph. Almost no change in essentially the whole record from 1822 to 1986.
In the very early years, the record is volatile as very few instruments are being used (in very limited geographies). At the very end a hot year or two show up. There is some ‘warming trend’, but most of the time nothing much is happening. One is left to ponder to what extent the “warming” in these data is the explosive growth of Asian cities in the ’80s, and to what extent it might be a carefully selected ‘ending year’ that biases the relative position of the rest of the series. Or is that recent end point shift just from the giant move to thermometers at growing Asian airports?
Still, with temperatures regularly ranging from -1.5 C to -1/2 C from about 1864 to the late 1980s, it just does not look like a “well mixed gas” causing slow IR warming; it looks more like a bit of cold in the LIA, and a step function at the end from a change of processing / instruments.
Not seeing much reason to shut down the economy when looking at these data…
Not much to say, really. The two series are almost on top of each other the whole time. V3 is a bit more volatile in the past as we’ve seen in other series. Generally we do still have the loss of volatility at the recent end of the graph; but only in the last half dozen years and not out of keeping with prior episodes of other low volatility times.
My biggest “take away” from this graph, though, is just that ‘dead flat for 100 years then a 1/2 C bump in a couple of years” is NOT the signature of CO2. It is the signature of equipment and process changes… There is also an interesting “cold time” between 1870 and 1910, but prior to that is another warm time. Very early the thermometers were either not being closely watched or there was a significant cold spike about 1800 to 1820. “Eighteen Hundred And Froze To Death” was in 1816, so that fits.
In the end, I see nothing that says “CO2 caused global warming” in the Asia data and I see little changed between v1 and v3. That increased slope in the Northern Hemisphere data can now only be carried by either North America or Europe or both. Asia didn’t change.
Also of interest is just how little change there is from v1 to v3. In Europe and North America that isn’t the case. So one is left to wonder: Which is correct? NOT changing Asia, or changing Europe and North America? Is v1 “right” in Asia, but “wrong” elsewhere? Or is v3 “right” in Asia but “wrong” elsewhere?
First off, it is very easy to see that the red v3 line is pulled down below the blue v1 line starting in about 1888. The drop is about 1/2 C. There is a similar, though smaller, displacement in the 1960s. We also see a dramatic warming of the data in the far distant past, about 1760. A full 1 C higher then. As those early records are from very few instruments (and start with a single instrument), we are, in essence, asserting that we can reach back in time some 250 years and say “No, sorry, you didn’t read that 74.0 F correctly, it was really 75.5 F.” I find that hard to believe. Far more likely is that the method of adjustment “has issues”.
Harder to see is how the thin blue and yellow “dT” lines change. The v3 dT yellow triangles are regularly “outside” the blue diamonds of v1 dT annual changes. The v3 data are showing more volatility than the v1 data. The apparent warming or cooling of particular segments correlates with a difference in extreme warm range vs extreme cold range, not with an overall increase of the warming trend. This is most easily seen about 1800 to 1820. Similarly, though in the opposite direction, the data from 1955 to about 1970 are incredibly low volatility.
Were the 1960s particularly unchanging? Not deviating by even 1/4 C from the norm? Those were the years when it snowed in the Central Valley of California. A very unusual event. Some years were normally warm, some were significantly colder than typical. Yet that does not reflect in the data. A decade later, was the weather dramatically more volatile? Or are there data artifacts in both the v1 and v3 data sets that cause changes of 1/2 C to 1 C from decade to decade (and even from year to year)?
Are we to bet the fate of our economy and all the disruption of “creative destruction” on what may well be simply data collection artifacts that can not be ‘fixed’ by the climate computer codes?
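The decade-to-decade “volatility” being discussed here is easy to make precise: it is just the spread of the year-over-year first differences (the dT values plotted on these graphs). A minimal sketch, using made-up numbers rather than actual GHCN values; the function names are my own.

```python
# Sketch: quantify the "volatility" of an anomaly series as the standard
# deviation of its year-over-year first differences (the dT values the
# graphs plot). Fabricated numbers, for illustration only.
import statistics

def dT(series):
    """Year-over-year first differences."""
    return [b - a for a, b in zip(series, series[1:])]

def volatility(series):
    """Population standard deviation of the first differences."""
    return statistics.pstdev(dT(series))

quiet = [0.0, 0.1, 0.0, -0.1, 0.0, 0.1]    # a "flat" stretch of years
wild = [0.0, 0.6, -0.5, 0.4, -0.6, 0.5]    # a volatile stretch of years
print(round(volatility(quiet), 2), round(volatility(wild), 2))
```

Computed this way, a 1950s-style “dead calm” stretch and a swings-of-a-degree stretch differ by nearly an order of magnitude, which is exactly the kind of shift between v1 and v3 dT lines the graphs show.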
Notice, too, that the “warming trend” from about 1880 (when GIStemp cuts off data used) to date runs about 1.5 C. We saw earlier that the Pacific, Africa, and indeed, the whole Southern Hemisphere, was showing much less “warming trend”, often near zero. If all the “warming trend” comes from averaging in data from places like North America with dramatic increases in urbanization and size of airports, with many “data artifacts”, with clear discontinuities in the data; averaging those places with others that have no such evidence of warming: Is there really anything “Global” about “Global Warming”? Can it reasonably be attributed to a well mixed gas causing radiative changes that must, by definition, happen over the entire globe?
Or do the variations in trend evidenced in the data from different geographies instead indicate an issue with data collection methods, data processing errors, and local changes? Are we willing to bet lives, incomes, and careers on what looks like simple data errors?
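For reference, “warming trend” figures of the kind quoted above come from an ordinary least-squares straight-line fit, with the total trend being slope times the span of years. A self-contained sketch with a fabricated series (not GHCN data); the helper name is mine.

```python
# Sketch: total trend over a period = least-squares slope * span of years.
# Pure-Python ordinary least squares, fabricated numbers for illustration.

def ols_slope(xs, ys):
    """Least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

years = list(range(1880, 2001))
temps = [0.0125 * (y - 1880) for y in years]  # exactly 1.5 C over 120 years
slope = ols_slope(years, temps)
print(round(slope * (years[-1] - years[0]), 2))  # -> 1.5
```

The point of the sketch: a single fitted line reports “1.5 C of trend” whether that rise is a smooth ramp or a few step-function splices, so the trend number alone cannot distinguish the two.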
It is interesting to note that the 1768 to 1794 era stays about the same as now. No warming over about 200 years. “Eighteen hundred and froze to death” shows up in 1816, but also quite a dip in 1836. The mid 1930s also stay about the same ‘warmth’ as now.
We again see the roughly 1/2 C “offset” in the 1987-1990 transition of equipment and processes that happened then.
It is pretty easy to pick out where more ‘warming slope’ is added in v3 vs v1. It’s the place where the dark blue curve clearly is above the dark red line, and where a more volatile light yellow dT for v3 puts a data point on each side of the light blue v1 dT line.
It looks as though the “warming” in North America is entirely an artifact of: moving thermometers to airports; swapping to electronic thermometers that have different thermal issues and different adjustments (for many years one model was found to suck in heated exhaust from the humidity measuring device – the data from those instruments are still in the record); Urban Heat Islands, as we developed more than did places like Africa (which has cooled); and perhaps some ‘odd’ data adjustments, such as the ones seen here, where changes between v1 and v3 put more change into the data, for the same time and place, than the actual ‘warming signal’ we are seeking.
We are seeing 1/2 C to 1 C of movement of the average anomaly for a continent based entirely on thermometer selection and processing changes. With that much variation based on how GHCN Version 1 is created vs how GHCN Version 3 was crafted: How can someone possibly claim that a 1/2 C variation in the anomaly over time within one of those sets is “warming” of anything? It can simply be an artifact of creation of the data set, just as v1 vs v3 shows artifacts of that size.
The European record is very long as it contains the very first thermometer records. At the far right, the data become significantly suspect as they are confined to a very narrow geography and come from the earliest thermometers, using a variety of scales that were newly created then.
The most interesting feature of this graph is just that the v3 red line is above the v1 blue line for substantially all of the historical data. There are times, like 1887 (about 1/3 of the way in from the left side) where they match up again. In about 1746 (that “dip” on the far right) v1 is briefly “warmer” than v3. But in general, the changes made to the data set going from v1 to v3 “cooled” the Europe trend (warmed the past) by about 1/2 C. Due to the way that First Differences works, it can make that kind of ‘offset’ if the first data are dramatically different. However, that implies that THE best data, from the most recent measurements, can be subject to a 1/2 C change on average over all of Europe. So if we can change the degree of warming in Europe with the stroke of a data massaging pen, how do we know that the “Global Warming” 1/2 C isn’t an artifact of some similar change or bias in the present data set version?
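The offset behaviour described here is inherent in any cumulative-sum reconstruction: since First Differences rebuilds a series by summing year-to-year changes outward from a starting segment, a change in that starting data shifts every reconstructed point by a constant. This toy sketch (my own simplification, not the full GHCN station-combining procedure) shows the propagation:

```python
# Sketch of the First Differences idea: rebuild an anomaly series by
# cumulatively summing year-over-year differences from a chosen starting
# value. A change in the starting value offsets the entire reconstruction
# by a constant. Fabricated numbers, for illustration only.

def reconstruct(first_value, diffs):
    """Cumulatively sum first differences from a starting value."""
    out = [first_value]
    for d in diffs:
        out.append(out[-1] + d)
    return out

diffs = [0.1, -0.2, 0.05, 0.1]   # identical year-to-year changes
v1 = reconstruct(0.0, diffs)     # start at 0.0 C
v3 = reconstruct(0.5, diffs)     # same diffs, start 0.5 C warmer
offsets = [round(b - a, 2) for a, b in zip(v1, v3)]
print(offsets)  # every point shifted by the same 0.5 C
```

So two versions with identical year-to-year changes, but different handling of the starting data, plot as parallel lines 0.5 C apart over their whole length – precisely the v1 vs v3 picture described above.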
Also of note is just the degree of “warming” in the European data. We’ll start at the 1850s as that is when Hadley CRUtemp / HadCRUT “cuts off the data” and chooses to “start time”. In the 1850s we see that v1 is up to 2 C below the zero line and frequently is about 1.5 C below that line. This implies that Europe has had a rise of about 1.5 C in 150 years. 1 C per century. But has it?
Look at the range of the v1 dark blue line and the v3 dark red line. For most of their history, they run just below the -0.5 line (for the red one) and between the -0.5 C and -1.5 C lines (for the blue one). Even as far back as 1776, the ranges are substantially the same. (The v3 data even touch the zero line about 1778. No net warming). We also see that the 1930s are “about the same as now” in v3, but are showing about 1/2 C of “warming” from then to now in v1. So which is it? Have we had 1/2 C of warming from 1932, or are things basically unchanged?
If we look more carefully into the data, we find that the same “change of process” in the 1986 to 1990 date range accounts for the shift. The 1988 data for dP/dt are substantially the same at -0.64 vs -0.62 C, while in 1987 they become -1.69 for v1 and -1.26 for v3 – an offset of -0.43 C between the two data sets. It is that “join” or “splice” in 1987 that causes the “warming” in the v1 data, and that gets a partial correction in v3. In the intermediate data set, Version 2 (not analyzed here), there is a change of “Duplicate Number” (sometimes called “modification flag” in GIStemp FORTRAN code) that happens at that point in time.
In this European data, we see that changes in how that point in time gets handled, how the splice is treated, can move the conclusion by an amount as large as the asserted “Global Warming”.
We are, in essence, being asked to simply “trust” that such changes and artifacts are all perfectly removed by the various “Climate Codes” and only a pristine “Global Warming” signal remains. That just happens to be of about the same size and scale as the errors and artifacts in the data.
Now compare these European data to the ones from Africa and from all of the Pacific Basin above. Europe has, per this, warmed by a full 3 C from that dip about 1828, and a steady 2 C from the main body of the data during that interval, using the v1 data (but “only” 2.5 C since 1828 and 1.5 C from the main body of the data if using v3 data); while Africa has cooled and the Pacific has done, basically, nothing other than a recent splice artifact offset.
Is it credible to say that “Global Warming” is concentrated in the thermometers of Europe and North America? That CO2 doesn’t act at the equator nor over the Pacific Ocean? Or is it more credible to say that Europe and North America have had the most growth of urban centers, and the greatest development of airports with vast areas of concrete and black asphalt baking in the sun?
One Example of Two Countries
Looking at the data for individual countries presents similar issues. Some change dramatically from v1 to v3, others do not. Some show a ‘warming’ pattern, others do not. To present the data for all the hundred+ countries in the record would be tedious and not as productive as presenting one example.
Here is an example comparing Australia and New Zealand. These are two island nations. They are both located in geographies dominated by ocean, and in particular by the Pacific Ocean (though western Australia has more Indian Ocean influence, and South Island New Zealand has more Southern Ocean Antarctic influence). Still, in general, things like changes in the Pacific Decadal Oscillation ought to reflect on both similarly. Both were part of the British Empire, so collected much of their historic data with similar instruments and methods.
One would expect both to show similar behaviors of the data over time, and if adjustments were needed one would expect to see both having similar changes.
In effect, if they are different, something unexpected and perhaps odd is going on.
New Zealand lines are the orange and blue ones. Orange for v1, blue for v3.
Australian lines are the dark red and green/black ones. Green/black for v1 and red for v3.
The New Zealand lines are almost on top of each other. For most of the record the blue line is not even visible. In some minor periods, the v3 data are slightly warmer and we see a bit of blue dots show up. But for Australia it is quite another story. The red and green/black lines diverge to about a 1/2 C separation and hold it clear back to 1866, then they swap by about 1 C. The divergence sets in about 1970. It happens all in one step, for the most part.
New Zealand does not have many temperature records. There are only about a dozen major stations. While there are some changes over time (Campbell Island as a cold thermometer enters, then exits, the record, for example) it is predominately a stable set. Australia has had massive changes of which instruments were in use, where, and when. They had been run by the postal service until that was discontinued.
For most of the record (until that near term sudden rise) the record is remarkably flat. It doesn’t matter if you use the v1 or v3 record. The Australian v1 record bounces between about -1/2 C and +1/4 C from about 1855 to 1975 or so. Similarly the v3 record bounces between about -1 C and -1/4 C in the same range of dates. There was no “Global Warming” in Australia for about 120 years. Then we get a sudden ‘offset’ right as the “modification flag” or “duplicate number” changes on the data. Just as a change of process and instruments is implemented.
New Zealand similarly has no warming through most of the data. Having fewer thermometers and being closer to cold polar storms, the range is a bit wider: from about the zero line to -1 C from 1866 to 1976, more or less. Then we again get that compression of range and “shift” in the mid to late ’80s on changes of processing and equipment. It also bears emphasis that the orange and the blue lines are both just laying on the zero line. They have not gone up; we have just clipped off the dips into an occasional -1 C cold spike.
All in all, this looks much more like artifacts of changes to processes and instruments than any actual change of the temperatures in those two countries.
A couple of interesting things to note:
The 1930s were a bit cold “down under”. In contrast to North America where they were quite warm. Coverage of the Southern Ocean historically was very poor. It is quite possible that the hot/cold ripple seen in the Northern Hemisphere is just one half of a polar shift where we simply did not detect the cooling at the other side of the planet. Here we see some evidence for that in the New Zealand data.
Look at 1866 vs 1880. It was just as warm then as at the left (recent) side of the graph, per these data (both v1 and v3, for both countries; though v3 Australia is biased a bit cooler than the others). Recent data have the low excursions clipped off, but the highs are no higher. Those early years are left out of the record when programs like GIStemp create their “Global Average Temperature”, and so those programs find we have warmed over time. Can we really ignore that it was just as warm then as now? Might the fact that we dump gigawatts of heating into our urban areas and burn tons of kerosene at airports, then put the thermometers in just those places, reasonably account for 1/2 C of “missing cold”?
In short, are we placing the thermometer in a heated living room then marveling that it just doesn’t seem as cold as when we sit on the patio in winter?
The patterns of the data do not match those one would expect to see from radiative driven warming via a “Greenhouse Gas” well distributed over the planet.
The patterns of the data do match those one would expect to see from data collection and processing artifacts.
The observed variation from one version of the data to the next is larger than the ‘global warming’ signal being sought.
The computer programs that are asserted to remove those biases and changes have never had an audit, never had a benchmark test, never had a validation suite run; in short, they are untested in the ways that all other commercial software is tested, and they have not been subject to the kinds of validation required for computer programs used by banks and drug companies. We are, in essence, told “trust me, I know what I’m doing”. Peer review is largely “Trust me, my friends think I know what I’m doing.”
We are being asked to play “Economic Chicken” with our economy based on computerized speculation using data that are “unfit for purpose”. Those countries that have followed the suggested path (such as Spain) are on the brink of ruin. Those that have continued to exploit traditional energy sources (China) are thriving.
The story is being presented that the heroic thing to do is to “Save the World” via embracing what are at best speculative ideas about how to run the economy and to do so based on energy sources that are incredibly more expensive and less reliable. All based on one underlying data set that mutates rapidly and “has issues”.
Is it not the more heroic and responsible thing to do to stand up and simply say: “I choose to save the American Economy for the American People.” Then take the time to test the various theories and to see if there is any way to repair the broken data that underpins the warming case.
There has been no detectable warming for the last dozen years. The natural weather cycles have turned. We have at least a couple more decades of this half of the cycle. Perhaps the wisest thing to do is use that time to do a more carefully audited and controlled study of the data, with truly independent researchers whose careers are not already wedded to “not being shown wrong”.
Looking at the GHCN data set as it stands today, I’d hold it “not fit for purpose” even just for forecasting crop planting weather. I certainly would not play “Bet The Economy” on it. I also would not bet my reputation and my career on the infallibility of a handful of Global Warming researchers whose income depends on finding global warming; nor on a similar handful of computer programmers whose code has not been benchmarked nor subjected to a validation suite. If we can do it for a new aspirin, can’t we do it for the U.S. Economy writ large?
In short, is not the heroic thing to do to stand up and say: The climate researchers have no clothes, their data are not fit for purpose.