It would seem that NCDC have made a nice set of graphs that show the adjustments done on each and every station in the GHCN (Global Historical Climatology Network) temperature history.
A brief look seems to indicate more cooling of the past and warming of the present, via adjustment, than from any asserted “Global Warming” in the actual data. It would take a lot more work, though, to demonstrate that via looking at every station and the net impact on the final ‘warming’.
But for now, there’s quite a set of useful images here:
For each station. So you can just wander around and find things of interest.
I’m going to upload a couple here, for purposes of illustration. But really, this is one giant “Dig Here!” that would benefit from many hands (and eyes) looking at many graphs.
The graphs are in folders with a single digit number. That number is the first digit of the station ID (so also the continent / cluster).
Here’s some info from other locations at that site:
The “ReadMe” file:
Last Updated: 09/29/2010
The following directory:
is comprised of sub-directories (that are named by the first digit of a station
ID) that contain individual station plot files (in “gif” format).
The plot files contain 9 individual graphs, arranged in a 3×3 matrix. The first
column of graphs, contain 2-D colored symbol graphs of the actual monthly data
for the entire period of record for A) the (Q)uality (C)ontrolled (U)nadjusted
(QCU) data, B) the (Q)uality (C)ontrolled (A)djusted (QCA) data, and C) the
differences between QCA and QCU monthly data. The second column of graphs
contain histograms of the monthly data for QCU, QCA, and (QCA-QCU) respectively.
Finally, the third column of graphs depict annual anomalies and their associated
trend line for QCU and QCA, and the differences in the annual anomalies for QCA
and QCU. Detailed axis titles and units are displayed in the title of each plot.
So you can see that there’s lots of good info here on unadjusted vs adjusted. I find the trend line and the difference graphs the most interesting.
Here’s an example from Tatlayoko Lake, BC:
On the right, notice that the original dropping trend line has been turned into a generally flat one. The graph at the bottom right shows that the past was cooled, and the present warmed. Clearly and obviously.
Now, to me, it isn’t so much the warming present and cooling past, as that pretty much every graph has more change from adjustments than it does from actual trend. Those that are not changed are generally so short of data that there isn’t much point. (Though there are graphs that are unchanged.)
What’s the net-net of it? Hard to say, but I’d say mostly a “Global Warming” signal that comes out of the adjustments, not out of the data.
They have a paper describing their latest changes here:
It has some interesting bits buried in it, like their new method finding more step change points to prune out and inducing even more change than the prior version. The “homogenizing” looks to be the magic sauce. It looks similar to the B.E.S.T. splice-and-dice method of keeping slow changes (like aging paint) in as warming, while taking out the step function when the shelter is repainted to the proper white. Version 3.2.0 finds 1.07 C / Century while Version 3.1.0 found 0.94 C / Century, so this one code update adds 0.13 C / Century of warming. Now, do that 5 times, you have all of Global Warming. How many updates have there been? Well, since this was from 3.1 to 3.2, I’d wonder about 1.x to 2.x to 3.x… Looks like about a dozen or three to me…
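The salami-slice arithmetic can be made explicit in a few lines of Python. This is a sketch of the argument only: the two per-version trend figures are the ones quoted above, and the "five updates" multiplication is purely hypothetical.

```python
# The two global land trends quoted above, in C/century, as reported
# for successive versions of the GHCN-M processing code.
trend_by_version = {
    "3.1.0": 0.94,
    "3.2.0": 1.07,
}

# Change introduced by this single code update:
delta = trend_by_version["3.2.0"] - trend_by_version["3.1.0"]
print(f"Added by v3.1.0 -> v3.2.0: {delta:.2f} C/century")   # 0.13

# If (hypothetically) five updates each added a similar increment:
print(f"Five similar updates: {5 * delta:.2f} C/century")    # 0.65
```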
Yes, just a first approximation. But I’d like to know just how many salami slices of warming have been added just this way.
Here is an example station that gets no change:
So why is it left alone, while others are changed? Who knows…
Again, if it is so important to change the data, dramatically, for other stations, then why is it not just as important for THIS station? Which is the error: changing the other one, or not changing this one? They BOTH cannot be error-free decisions…
While Faraday gets the rather high trend there cooled down:
Mawson station in the same major number cluster gets a bit of warming:
In a general ‘look over’ it looks to me like the added warming makes up all of the “AGW” signal. It needs a full on analysis / proof to show that. But what gets me more is that there is no rhyme nor reason. Some stations up, some down, some flat. Is the whole thing just an artifact of an algorithmic adjustment gone mad? The average warming signal being the leftovers in the error band of all those seemingly senseless adjustments?
Looking at the raw data for many locations does not show much “warming” at all. This one for example:
So why do they end up getting a warming trend? And why is the trend from those adjustments so much more than any trend in the actual data?
IMHO, the folks doing the adjusting are in love with their intellectual creations and have not bothered to actually look at what it does to the data. (The alternative requiring malice… and “never attribute to malice that which is adequately explained by stupidity”…)
This takes a whole lot more eyes looking at a whole lot more of these graphs. Sorting them by type of adjustment. Assessing each one for sanity. Calling “BS” on the ones that are just not justified by the known facts. Calling “BS” on the ones with no known facts to justify them. Calling “BS” on the ones where natural cycles and processes have been ironed out in the name of “homogeneity”.
But at least the graphs are now produced, and sitting there for everyone to have a look.
Station data and other info is available too. They have a FAQ “Frequently Asked Questions” file here:
and it claims to link to other documents that:
global temperature trends?
NCDC Technical Report No. GHCNM‐12‐02 provides a detailed summary of each software modification
and the resulting impacts to global temperatures. This report is available at
ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/techreports/Technical Report NCDC No12‐02‐
With software available for inspection:
Is it possible to obtain the computer software code that NCDC uses for making homogeneity adjustments?
Yes. The Pairwise Homogeneity Adjustment algorithm software is available online at
So plenty to keep a lot of folks busy, if they have the time to dig in and help.
What is very clear is that there is an awful lot of room for fudge in those adjustments, and a lot of room for error that does not show up as error bars, and ought to.
If it is at all like what they do to the USHCN – about 1/2 F – it accounts for roughly all of the “Global Warming” signal with nothing left over for nature:
The cumulative effect of all adjustments is approximately a one-half degree Fahrenheit warming in the annual time series over a 50-year period from the 1940’s until the last decade of the century.
Thanks, E.M. Smith, and to all those who have encouraged government agencies to be more transparent about data modifications.
If Galen Winsor is correct, the errors identified to date in stellar, nuclear and climate science are dwarfed by the exaggerations in nuclear radiation safety:
Radiation hormesis is not accounted for.
It is unwise to attribute to malice alone that which can be attributed to malice and stupidity.
Here are some that I found a while back with an automated process looking for large differences in adjustments (I’m sure I have a bug in it b/c there are some not so extreme examples.) Some beggar belief. Hopefully the links come through OK.
Thanks for the Post EM.
top avg adjustments for us ghcn
What I have noticed is that long run stations with data pre-1850/80 that show no trend are often adjusted by homogenizing them with local short run stations from post-1955 that show a large warming trend. For example, the two long run stations from Berlin (Dahlem and Tempelhof) that show a low rate of warming (0.24/0.41 C – the latter includes a UHI effect as Tempelhof was an airport) are corrected from short run stations at the two airports that had large military and commercial traffic and buildings, so a large UHI impact, with post-1955 trends of 2.83/5.49. The same applies to rural stations in the sub-arctic.
In a response to Zeke at ClimateEtc I asked why we could not evaluate the global trend from the 60+ long run stations that give global coverage better than CET, which has been shown to be a good global proxy when compared to the GHCN values. The Scottish Sceptic has suggested that the GHCN signal is mainly 1/f error, and I have considerable sympathy with that view when I see how similar the various series are when they use (select) a different set of stations. In this respect it is likely that BEST, with their slicing and dicing to increase the number of segments, plus use of erratic segments from short run stations, increases this error and hence the trend they display.
I have also in previous responses pointed out that temperatures depend on surface heat capacity, which varies with soil water holding capacity and precipitation, with cooling during the day linked to windspeed and warming at night due to the retained heat – as has been shown in a number of studies. Changes in these factors occur with the frequency of El Nino and La Nina for the SE USA, S America and India, and with the PDO/AMO for NW USA and NW Europe, such that to my mind pairwise homogenisation is not a valid methodology, as such changes can occur within kilometres, as has been found in the evaluation of the UHI effect at Armagh. As Pielke Sr has pointed out many times, the problems with microclimates and land use changes make the surface station data unsuitable for the evaluation of trends in global climate; they were never intended for that purpose. I have also pointed out that the Class A Pan Evaporation data does not support the global warming meme, with China finding opposite trends at sites in Tibet related to windspeed and precipitation.
So a good starting point would be an evaluation of the trends in the long run stations presented as a spaghetti graph; then we can look for rational explanations for differences in trend linked to ocean cycles, precipitation, changed wind patterns (as Hubert Lamb proposed) and other natural factors. A lack of consistency of trend between these stations would indicate that CO2, as a well mixed gas, is not a major player in the global climate arena – as we have already seen from the differences in the trends between the NH, SH and tropics. This is also shown in the raw data from long run Arctic stations, which according to Arrhenius should show the greatest impact of increased atmospheric CO2 but do not until homogenized and kriged with corrections equal to – or, in the case of those with a clear cooling trend, greater than – the claimed warming.
I shall see if I can make the time to look at the RH and see if these ‘homogenizers’ have actually thought through what they are doing. It would be instructive to start using the correct metric: a daily integral of kJ/kg would be correct, as the AGW hypothesis is based on heat being trapped – so heat content should be measured, not temperature.
Steven Goddard and Paul Homewood are both looking into this, but the task is huge given the number of stations.
And as observed here, the trend appears, in the majority of cases, to be cooling the past and warming the present.
What the simple “Graphs” do not show is the number of stations with Estimated data; it can vary from 2 or 3 years of current data to 30-40 years around 1900.
You get the impression that there are so many stations with Estimated readings that they quite often compare Estimated station values with Estimated station values, or base the estimate on local estimated stations.
They are also replacing actual values with estimated values, sometimes for a couple of years.
USHCN have also dropped a lot of “First Order” stations, which may be in response to A Watts paper on station quality, or it may be because they show cooling, who knows?
It is a perfect recipe for “Added Value Warming”.
Following on from my earlier post, there is a better example of the limitations of pairwise homogenization than Berlin. If one refers to the BEST climate map (http://berkeleyearth.lbl.gov) and looks up Bourke, Australia, one finds two long records from 1871 to 2013 that are on very close sites – one originally based at the Post Office and one apparently based at the airport – even though the Wright Brothers did not make the first manned flight until December 1903! The first shows a mean rate of change per century of -0.49 C, corrected to 0.24 C after breakpoint analysis, whereas the corresponding values for the airport are 0.9 C and 0.55 C. This suggests that there is a UHI effect of at least 0.3 C, or 1.4 C if one takes the raw data. The BEST graphics show different breakpoints, even though one suspects that these are in fact the same initial Bourke record with later airport values included. There are similar examples in the Arctic area.
A Graph to Debunk AGW: Solar Geomagnetic Activity is highly correlated to Global Temperature changes between 1856-2000
Analysis: Solar activity & ocean cycles are the 2 primary drivers of climate, not CO2
The solar idea seems to be gaining ground
Thanks, M. Simon. The need to constantly remind scientists of the Sun’s dominant influence on Earth’s climate is a telling indictment of our academic institutions.
Your conclusions support what Steve Goddard has been saying (over and over and over again), yet he is getting some criticism over at Lucia’s blog:
Zeke strikes me as a combination of the two monkeys who see no evil and hear no evil.
In regard to radiation hormesis, many folks seem to be unaware of the DOE low dose Radiation Research program. One really interesting recent piece is that they have identified a means by which exposure to low doses of ionizing radiation stimulates DNA repair. My own observation is that this same mechanism also marks a second means by which some exposure to full-spectrum sunlight is good for you:
Thank you, Duster, for that link.
1. The above video by Galen Winsor &
2. The rapid and complete recovery of Hiroshima and Nagasaki after WWII,
Together with false scientific models of nuclei, stars and the Earth’s climate after WWII, confirm that
Stalin emerged victorious from the unreported CHAOS and FEAR of 1945 to mold world government into the form of government used in the old USSR:
To expand further on my previous comments the real questions to ask are:
1. Do different methods of calculating anomalies give different answers?
2. Do temperature anomalies have meaning when we are interested in energy flux and this depends on the heat capacity of the surface and the transfer of heat by convective evaporation, both linked to topography, precipitation and wind speed etc? These all vary over short distances.
I have argued in a number of previous posts that there is no global climate only regional and in particular zonal as per the Köppen-Geiger classification.
There are a number of good papers on these subjects from Clive Best that answer these points:
How reliable are global temperature “anomalies” ?
Guest post by Clive Best
Comparison of standard CRUTEM3 anomalies (BLACK) and anomalies calculated using monthly normals averaged per grid point rather than averaged per station (BLUE).
The anomalies are significantly warmer for early years (before about 1920), changing the apparent trend. Therefore systematic errors due to the normalisation method for temperature anomalies are of the order of 0.4 degrees in the 19th century. The origin of these errors is due to the poor geographic coverage in early station data and the method used to normalise the monthly dependences. Using monthly normals averaged per lat,lon grid point instead of per station causes the resultant temperature anomalies to be warmer before 1920. Early stations are concentrated in Europe and North America, with poor coverage in Africa and the tropics. After about 1920 these systematic effects disappear. My conclusion is that anomaly measurements before 1920 are unreliable, while those after 1920 are reliable and independent of normalisation method. This reduces evidence of AGW since 1850 from a quoted 0.8 +- 0.1 degrees to about 0.4 +- 0.2 degrees
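The normalisation effect described in this excerpt can be illustrated with a toy example. This is my sketch, not Clive Best's code, and every station value and date below is invented; it only shows the direction of the bias when early years are covered by a non-representative subset of stations.

```python
import numpy as np

# Two hypothetical stations share one grid cell: a warm one reporting
# from 1900 and a cool one that only begins reporting in 1905.
years = np.arange(1900, 1910)
warm = 15.0 + 0.02 * (years - 1900)            # full record
cool = np.full(len(years), np.nan)
cool[5:] = 5.0 + 0.02 * (years[5:] - 1900)     # short, late record

# Method 1: per-station normals -- each station is anomalised against
# its own mean, then the anomalies are averaged.
anom_station = np.nanmean(
    np.vstack([warm - np.nanmean(warm), cool - np.nanmean(cool)]), axis=0)

# Method 2: per-grid normal -- one shared normal for the whole cell.
grid_normal = np.nanmean(np.concatenate([warm, cool[~np.isnan(cool)]]))
anom_grid = np.nanmean(np.vstack([warm, cool]), axis=0) - grid_normal

# Before 1905 only the warm station exists, so the per-grid method
# reports spuriously warm early anomalies -- the early-coverage bias
# described in the excerpt.
print(anom_grid[:5] - anom_station[:5])
```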
Tracking down climate feedbacks
There is a clear trend in the data that ARID stations cool faster and warm faster than WET stations. They seemingly react stronger to external forcing. The WET humid stations respond less than both the ARID stations and the global average. The location of the stations are shown in Figure 2 which is taken from reference . The ARID stations are the yellow desert areas and the light blue polar areas. The WET stations are located in the red zones – Amazon, central Africa and SE. Asia.
Evidence for Negative Water Feedback
Abstract: Positive linear climate feedback for combined water effects is shown to be incompatible with the Faint Sun Paradox. In particular, feedback values of ~2.0 W/m2K-1 favored by current GCM models lead to non physical results at solar radiation levels present one billion years ago. A simple model is described whereby Earth like planets with large liquid water surfaces can self-regulate temperature for small changes in incident solar radiation. The model assumes that reflective cloud cover increases while normalized greenhouse effects decrease as the sun brightens. Net water feedback of the model is strongly negative. Direct evidence for negative water feedback is found in CRUTEM4 station data by comparing temperature anomalies for arid regions (deserts and polar regions) with those for humid regions (mainly saturated tropics). All 5600 weather stations were classified according to the Köppen-Geiger climatology . Two separate temperature anomaly series from 1900 to 2011 were calculated for each region. A clear difference in temperature response is observed. Assuming the difference is due to atmospheric water content, a water feedback value of -1.5 +/- 0.8 W/m2K-1 can be derived.
If there is a global climate this explains it (link also posted up thread):
Analysis: Solar activity & ocean cycles are the 2 primary drivers of climate, not CO2
Dan Pangburn has updated his analysis identifying the two primary drivers of global temperature:
1) the integral of solar activity
2) ocean oscillations [which are in-turn driven by solar activity and perhaps lunar-tidal forcing].
The correlation of the integral of solar activity and ocean cycles to global temperature is 90.49%, and with the addition of CO2 the correlation only improves very slightly to 90.61%, demonstrating CO2 change has no significant effect on climate.
May 21, 2008
E.M. Smith, Peter Azlac
There is another approach to this.
I have been working on a method for analyzing the station records. It’s a way of doing temperature trend analysis that neatly avoids all the data manipulations so much discussed recently. I have built an Excel workbook for this method, and have done a study of Kansas US in order to prove the concept and the tool.
I have a post up at WUWT including a link to download my Excel workbook:
I have started building a Canada workbook with this template, but there are many stations, and it will take time. I have found access to the EC history of monthly averages for Canadian stations. The record has gone through 2 homogenizations, though it is claimed that the process is only to remove errors and improve data quality prior to gridding, anomalizing and averaging.
Now, I believe that NWSs are sincerely and competently trying to get the record right. But given the global warming cause, unfortunately some of the people working with these data do have an agenda. A skeptic has an additional uncertainty: are they altering the record to suit their cause?
So I want to explore your NOAA link before building a dataset.
OK. Thanks to this post, I have been able to download the qcu (quality controlled unadjusted) monthly tavg for GHCN stations. It is current to June 2014. I have the Kansas subset to be able to compare to the findings I obtained from HADCRUT3.
The same problem with adjustments has thrown up some discrepancies in the BoM (Aust) records.
As you are aware, the BoM moved to the ACORN data set in 2012, a so-called ‘Rolls Royce’ system. However, I am beginning to discover many problems with the adjustments it has made.
I compared Bourke with neighbouring stations for Jan 1939 (a particularly hot month).
The first column shows the long-term average for each station, the second the raw monthly max mean for each and the last column the adjusted ACORN mean for each (all in C).
Jan 39       L-T Mean   Raw     Adj
Bourke       36.3       40.4    40.04
Cobar        35.0       40.1    40.19
Walgett      35.4       39.15   40.16
Tibooburra   36.2       40.1    40.08
Bourke has the highest long-term mean for Jan and the highest raw mean for Jan 1939 – but after adjustments comes out last of the four stations. Cobar, which has the lowest L-T mean, comes out the highest after adjustments.
The whole adjustment procedure appears to have little credibility at all.
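The rank reversal described here can be checked directly from the table's figures. This is just a sorting exercise on the numbers quoted above (raw vs ACORN-adjusted monthly mean max, in C):

```python
# Jan 1939 figures from the table above.
stations = {
    "Bourke":     {"raw": 40.40, "adj": 40.04},
    "Cobar":      {"raw": 40.10, "adj": 40.19},
    "Walgett":    {"raw": 39.15, "adj": 40.16},
    "Tibooburra": {"raw": 40.10, "adj": 40.08},
}

def ranking(key):
    """Station names ordered hottest-first by the chosen column."""
    return sorted(stations, key=lambda s: stations[s][key], reverse=True)

print("raw:", ranking("raw"))  # Bourke is hottest in the raw data
print("adj:", ranking("adj"))  # ...and coolest after adjustment
```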
As noted on C J Orach’s blog, mankind shot himself in the foot by promoting false information about ENERGY, FUEL and FOOD.
M Simon says:
15 July 2014 at 9:54 am
“If there is a global climate this explains it (link also posted up thread):”
I would agree with Dan Pangburn’s analysis.
Solar activity acts in four main ways: a) via TSI effects at ground level – shown to have increased in Europe by about 10% during the period of increased warming, due to decreased cloud cover associated with changes in the solar wind and cosmic ray flux, as shown at Oulu; b) changes in UV wavelength and flux that affect the sites of formation and destruction of ozone in the stratosphere and, in consequence, surface pressure, which drives wind systems and has a direct effect on Arctic heating and blocking patterns; c) changes in geomagnetic activity that have an effect on Rossby waves; d) impact on the Lunar Saros cycles that affects heat transfer from the SH to the NH via ocean and atmospheric waves.
Ocean cycles act in transferring heat and precipitation to land with the directions controlled by the surface pressure wind system – as discussed by Hubert Lamb.
The work of Frank Lansner and Clive Best shows that these effects are manifest in the climate zones outlined by Köppen. Clive has shown differing temperature trends between arid and humid areas, whilst Frank has shown that these areas can be defined by the penetration of ocean air to land, with areas shielded from ocean air currents being more arid and having a different temperature trend. Today Tim Ball has an article at WUWT (http://wattsupwiththat.com/2014/07/16/macro-meso-and-micro-climates-the-importance-of-trees-in-urban-climates/#more-113099) on the importance of trees in the urban landscape in determining UHI effects (this also applies to other vegetation, especially irrigated crops and those with a high transpiration rate). What this shows is that the macro arid/humid climate trends found by Clive Best are also displayed on a meso and micro scale, making the concept of a rural–urban climate distinction à la BEST and the rest invalid – the difference should be the varying degrees of aridity dependent on soil water capacity, ground cover, precipitation and wind, especially wind originating from the oceans. There was another recent paper showing how forests, through evaporation and release of cloud forming nuclei, set up an increased air flow off the ocean, bringing more precipitation and with it more daytime cooling and night time warming. In his article Tim Ball gives a good diagram showing a cross section of a coastal area with such microclimates, making a mockery of claims by BEST and the rest that gridding and kriging can be used to correct claimed station temperature breaks that are more than likely real – for example, caused by short term El Nino and La Nina periods that bring high and low precipitation and so have direct effects on minimum and maximum temperatures.
M Simon says:
“The correlation of the integral of solar activity and ocean cycles to global temperature is 90.49%, and with the addition of CO2 the correlation only improves very slightly to 90.61%, demonstrating CO2 change has no significant effect on climate.”
I do not argue with this conclusion, but would say that CO2 certainly does have an impact on climate – though not as a radiative molecule. If the radiative effects were true, then we would expect to see a large response in the northern NH and over the Antarctic, where the cooling effects of water vapour found by Clive Best are minimal and where Arrhenius, who started this hare running, expected it to be. Instead, if we look at the raw temperature data for the Arctic and Antarctic regions, we do not see this, with warming in the Arctic linked to stratospheric effects of ozone, solar wind and geomagnetic flux:
http://cosmicrays.oulu.fi/webform/query.cgi?startday=13&startmonth=03&startyear=1968&starttime=00%3A00&endday=04&endmonth=04&endyear=2014&endtime=23%3A30&resolution=Automatic+choice&picture=on. Some evidence for a radiative effect comes from the desert areas where the influence of water vapour is minimal, but these areas make a small contribution to the Earth’s land area and so have little overall impact. Rather, with the northern movement of the Hadley cell bringing precipitation, the increased CO2 has a greening effect, as in the Sahel.
But where increased atmospheric CO2 can have a large climatic effect is in increasing the growth rate of biomass on land and ocean, which in the former case increases the cooling effect of water vapour and enhances albedo from greater cloud formation or changes in cloud type. Thus during the period of increased atmospheric CO2 we have seen an 11% or so increase in global biomass, much of it due to enhanced growth of trees, which will have enhanced evaporative cooling that shows up in Dan Pangburn’s analysis as solar and ocean impacts. Though what the main source of the increased atmospheric CO2 is – release from oceans, increased soil microbial activity, or fossil fuels – remains an open question.
Since humans change the Earth’s surface through urbanization, and the vegetative cover through forestation and agriculture, especially irrigated crops, there is a decided anthropogenic effect on climate – but one of net cooling.
There is also good evidence that the heat emitted by burning fossil fuels – or heating with renewables – creates warm ‘bubbles’ that affect the direction of airflow and cause regional changes in climate, another reason why the division of areas into rural and urban is not valid. A good analysis of this situation is offered by an analysis of temperature and precipitation data for North Carolina that does not support the concept of homogenization:
Finally one should also note that increased atmospheric CO2 can increase the rate of energy loss above the tropopause.
Sorry to be off topic, but I have an open question. As I remember it, the acronym CAGW was commonly used by both proponents and skeptics. I know there were, and currently are, countless proclamations of catastrophe by the media and by scientists.
However, currently the warmists say that CAGW is a term used by the skeptics. They point to the IPCC using the term CC, for Climate Change, since its inception. I know that most scholarly publications most commonly used the terms AGW or GW. Yet I remember many uses of the term CAGW by proponents.
Am I wrong?
Did skeptics create that term?
If you have any linked evidence I would appreciate it.
Clearly the term CAGW is more accurate and pertinent, but I still need the history of the acronym.
Thanks in advance.
Several good comments were stuck in ‘moderation’ as I was busy at work, had a friend invite me to a wedding, spent time at the pool, and generally was being a lazy bum. My apologies.
I think I like your law ;-)
Is it malice, or stupidity? Yes…
Nice list. Very nice. If you have code to post, feel free. Maybe we can find any bug hiding in it. As it is, the selections you found are ‘interesting’…
A VERY perceptive comment. (Anyone who has not read it as it was ‘stuck’, please scroll back up and read it.)
Yes, it looks to me like the good data get corrupted by the worst. But how to fix?…
@A C Osborn
Part of homogenizing is an infilling of missing data. IMHO there is so much missing for many stations that the whole is not ‘fit for purpose’ in showing trends. It would be interesting to do some kind of statistic per station that shows ‘useful quality’. Perhaps take the interval from 1800 to 2015 and simply count a tally for each day with real data, then divide by the 1800–2015 day count. Yes, new stations get a lot of historical zeros before they existed, yet that is their real coverage of the ‘trend’ in question…
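That ‘useful quality’ statistic can be sketched in a few lines. The record format here (a set of dates carrying real, non-infilled observations) is invented for illustration:

```python
from datetime import date, timedelta

# Fixed window over which coverage is scored.
WINDOW_START = date(1800, 1, 1)
WINDOW_END = date(2015, 12, 31)
TOTAL_DAYS = (WINDOW_END - WINDOW_START).days + 1   # 78892

def coverage(real_data_days):
    """Fraction of the 1800-2015 window covered by real observations."""
    in_window = sum(1 for d in real_data_days
                    if WINDOW_START <= d <= WINDOW_END)
    return in_window / TOTAL_DAYS

# A station that only existed 2000-2009 scores low by design, because
# the historical gap before it existed counts against it:
decade = {date(2000, 1, 1) + timedelta(days=i) for i in range(3653)}
print(f"{coverage(decade):.3f}")   # -> 0.046
```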
Yes. The number of dung flingers can be quite high…
@radiation hormesis topic:
I’ve got a posting coming up about creosote… turns out it can be good for you. Sometimes low doses of something bad can be good, and clearly a ‘no lower bound’ to bad is a broken idea.
You are exploring the issue of temperature being an intrinsic property. I periodically raise the point that an average of temperatures has no meaning, but it largely gets ignored (though it means the whole activity of ‘climate science’ as practiced is just daft, as it starts from monthly averages of temperatures to find heat fluxes…)
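A toy mixing example (my illustration, not from the post) makes the intrinsic-property point concrete: a plain average of two temperatures differs from the mass-weighted heat-content calculation that physics actually requires.

```python
# Mix two unequal water masses with the same specific heat.
m1, t1 = 1.0, 10.0   # kg, C -- small cool mass
m2, t2 = 9.0, 30.0   # kg, C -- large warm mass

naive_avg = (t1 + t2) / 2                  # ignores mass entirely
mixed = (m1 * t1 + m2 * t2) / (m1 + m2)    # heat-content (mass-weighted)

print(naive_avg)   # -> 20.0
print(mixed)       # -> 28.0  (the actual equilibrium temperature)
```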
Nice summary link of what’s really going on.
I’ve put up a new posting that lists some other data sources. IMHO the best looks to be the GHCN Daily. But it will take some work to make sure it really is unmolested…
Making it available to the spreadsheet competent is a big plus. (not everyone likes old FORTRAN and tar files ;-)
No idea which came first or when. Just ask them “Oh, so it isn’t going to be Catastrophic?” and wait… ;-)
Thanks to a few brave souls – like E. M. Smith, J. C. Orach, Steven Goddard and Jeff Condren – sixty-nine years (2014 − 1945 = 69 yrs) of government deception are about to “blow!”
Bill Streifer gave me permission to say the new book he will coauthor with a Stanford physicist, “Dr. Fritz J. Hansgirg: Heavy Water and the Secret History of the Atomic Bomb,” provides independent evidence of the nuclear energy cover-up Galen Winsor discussed in the above video:
Click to access WHY.pdf
Without any comment: http://stevengoddard.wordpress.com/2014/07/25/link-to-my-2014-iccc-talk-posted-on-the-heartland-web-site/