This will be a bit of a pot-luck: some semi-random country graphs chosen by whim. There are a LOT of countries, so doing all of them would be a bit much for one posting. I tried to spread them around the world and do a mix of large and small.
There are two graphs for each country. One is an A/B graph of the v3.3 anomaly data vs the v4 anomaly data. The other is a graph of v4 minus v3.3, for a "delta" or difference number.
It was all a ‘rush job’ with several surprise turns, so hopefully I didn’t miss some subtle point. Please help by scrutinizing some graph or other and making sure they look right.
Yes, there’s a LOT of graphs. Just remember all you have to do is look at them and that’s a lot less than it took to create them and upload each one and put them in a posting… I’ve scattered a few snide comments along the way, but I’m not “fresh” enough after this effort to do detailed analysis. If you see something interesting, say something! ;-)
One country in particular, Lesotho, was chosen for the very small number of data points as that ought to let me “QA Check” that the two graphs match where they both have data and that the Delta graph does not include data for years where only one of the two versions has data. With that, here’s Lesotho:
Lesotho – Introduction and Validity Check
This may be a lousy nation for thermometer data, but it's a good test case. Notice that the v3 data show up in only a few years on the first graph. Eyeballing the second graph, it correctly has spots only for those years where both data sets have data. The size of each value in the second graph looks like a match to the difference read off the first graph.
Around 1975 there is a hotter v4 value and the difference graph has that as a positive while the colder years are negative values, so “trend” is in the right direction.
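For anyone wanting to reproduce that QA check on their own machine, the "delta only where both versions have data" logic can be sketched in a few lines of pandas. The numbers below are made up for illustration, not the actual Lesotho values:

```python
import pandas as pd

# Hypothetical yearly mean anomalies for each GHCN version,
# with deliberate gaps (years one version lacks).
v3 = pd.DataFrame({"year": [1970, 1972, 1975], "mean3": [-0.4, -0.2, 0.3]})
v4 = pd.DataFrame({"year": [1970, 1971, 1972, 1975, 1976],
                   "mean4": [-0.6, -0.1, -0.3, 0.5, 0.2]})

# An inner join keeps only years present in BOTH versions,
# so 1971 and 1976 (v4 only) drop out of the delta.
both = v3.merge(v4, on="year", how="inner")
both["delta"] = both["mean4"] - both["mean3"]
print(both[["year", "delta"]])
```

The surviving years should be 1970, 1972, and 1975 only, with the delta positive in the year where v4 runs hotter than v3.3.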
With that out of the way, on to the rest of the graphs. These will be in roughly alphabetical order. At the end there will be a Tech Talk section with the table schema and Python code along with the SQL used.
Other Countries Graphs
Not much going on in Argentina. Some of the "baseline" period from 1950-1990 gets nudged up, some down, and the recent data become more changeable.
Interesting that the general trend is a roll off of heat. But a couple of years get a hot bump at the end.
Not a lot of trend, but certainly some odd changes. Deep past gets cooled a lot, the baseline gets a divot, then a couple of hot fliers recently. Wonder what justified those changes?
Not much happening in Burundi. A bit of cooling the baseline period. They drop the oldest v3 data (see the plot below).
What’s the deal with recent Canada data? That’s just a crazy change: 2 C of change in the anomaly range.
A little dip into the start of the baseline window, and recent larger changes, but overall China is fairly flat. The general cooling of the past is a bother. Just wrong, even if only a 1/10 C or two.
Looks like the French are not fooling around with their data much. Substantially flat. I wonder if the dip around 2000 was removing some fudge ’cause someone got caught? Wonder what was in their news then…
Again with the cooling of the baseline window… I think we’re getting a trend here… but not in the climate.
Nice “tuck” taken in 1950. Then recently a couple of years with a hot pop.
Looks like India isn’t with the “adjusting” program ;-)
Either the historic Indonesia data are crap and need a lot of fixes, or they can’t decide what their temperature was in the past. Nice warming jump added at the recent end.
WOW. Deep past changes by at least 2 C (and may have gone off my graph range), then they cool most of the past by about another 1/2 C, then run up the recent data by 1/2 C. Looks like Italy is with the program of Data Diddle.
Well. Japan can only manage 1/4 C of cooling of the early baseline period. The Japanese must feel guilty about changing historical data…
Not much going on with Johnston Atoll. Then again they already have 2 C range in the anomaly (see next graph) so maybe nothing more was needed…
About 1/2 C cooling of the deep past, but not much else.
1.5 C range of variation? I wonder if they play with their thermometers much?…
WARMING in the baseline? Cooling recent data? They have a bit over 2 C of anomaly delta in the data (see below graph) so maybe it was standing out too much? Looking too much like jet exhaust and not enough like CO2?
Ah, the Classics. Just a smooth gentle 1/2 C colder past… Korean tech is neat and careful like that.
OMG what a mess. 1.2 C colder deep past, baseline only 1/4 C colder. Recent data about 1/4 C of lift.
Turkey had complained that GHCN was only using the few thermometers that showed warming and ignoring the ones that showed cooling. Wonder if that “sensitized” folks to not fool with Turkey? Looks like some W.W.I data got adjusted.
United States Of America
Looks like somebody fooled around a lot with the 1800s. Then recently a couple of years get a big bump up.
What Does It Mean?
I’m not sure. I’d guess that it shows some countries have “activist” BOMs who are adjusting their data, with emphasis on the English speaking countries and maybe those expecting to pick up a bit of cash. It’s possible this was done in the creation of the GHCN, but given how different the countries look, that would require a deliberate decision to obfuscate the changes. I suppose it is possible some “new idea” was applied but only to some countries (based on some attribute like the thermometer type or their standards).
To me, it looks like for at least some of the larger and more important countries there is a bias toward cooling the past and warming the “present”, even in data that were considered “just fine” in v3.3 all of a couple of years ago. That, IMHO, is not Science. That is “action for effect”.
Any of this say anything to you?
I added a new table that holds the average anomaly for each year in each country. This makes the V4 minus V3.3 math easier to do. The straight graph of the anomalies average was done first and by a different path, and I’d already made those graphs prior to making the new table for the Delta or Difference graphs, so I’ve left that code as-is for now (so it can also serve as a QA check on the rest of the process).
The Table Schema:
chiefio@PiM3Devuan2:~/SQL/tables$ cat yrcastats
CREATE TABLE yrcastats (
    year   CHAR(4) NOT NULL,
    abrev  CHAR(2) NOT NULL,
    mean3  FLOAT,
    mean4  FLOAT,
    big3   DECIMAL(7,2) NOT NULL,
    big4   DECIMAL(7,2) NOT NULL,
    small3 DECIMAL(7,2) NOT NULL,
    small4 DECIMAL(7,2) NOT NULL,
    num3   INTEGER NOT NULL,
    num4   INTEGER NOT NULL,
    trang3 FLOAT NOT NULL,
    trang4 FLOAT NOT NULL,
    stdev3 FLOAT NOT NULL,
    stdev4 FLOAT NOT NULL,
    PRIMARY KEY (year, abrev)
) ;
The SQL code to create the average anomalies and load it:
chiefio@PiM3Devuan2:~/SQL/bin$ cat Lyrcastats
INSERT INTO yrcastats (year,abrev,mean4,big4,small4,num4,trang4,stdev4)
SELECT year, abrev,
       AVG(deg_C), MAX(deg_C), MIN(deg_C), COUNT(deg_C),
       MAX(deg_C)-MIN(deg_C), STDDEV(deg_C)
FROM anom4
GROUP BY year, abrev ;
show warnings;

UPDATE yrcastats AS Y
SET mean3  = (SELECT AVG(A.deg_C)   FROM anom3 AS A WHERE Y.year=A.year AND Y.abrev=A.abrev),
    big3   = (SELECT MAX(A.deg_C)   FROM anom3 AS A WHERE Y.year=A.year AND Y.abrev=A.abrev),
    small3 = (SELECT MIN(A.deg_C)   FROM anom3 AS A WHERE Y.year=A.year AND Y.abrev=A.abrev),
    num3   = (SELECT COUNT(A.deg_C) FROM anom3 AS A WHERE Y.year=A.year AND Y.abrev=A.abrev),
    trang3 = (SELECT MAX(A.deg_C)-MIN(A.deg_C) FROM anom3 AS A WHERE Y.year=A.year AND Y.abrev=A.abrev),
    stdev3 = (SELECT STDDEV(A.deg_C) FROM anom3 AS A WHERE Y.year=A.year AND Y.abrev=A.abrev) ;
show warnings;
The UPDATE section is rather slow. There may well be a faster way… The initial INSERT of the v4 data, which is much larger in volume than the v3.3 data, takes only about 4 minutes on the Raspberry Pi M3. Then the UPDATE to add the v3.3 values takes over 4 hours… Most likely because each of the six correlated subqueries rescans anom3 for every target row.
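One plausible speed-up is to aggregate anom3 ONCE with a GROUP BY and then apply the results in a single keyed pass, instead of six correlated subqueries per row. Here is a standalone sketch of that idea using Python's built-in sqlite3 (since I can't assume everyone has the MariaDB setup); table and column names follow my schema above, the data is made up, and STDDEV is omitted because SQLite lacks it. In MariaDB itself, the equivalent would be an UPDATE joined to a derived table.

```python
import sqlite3

# Miniature stand-ins for the real tables (names from the posting's schema).
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE anom3 (year CHAR(4), abrev CHAR(2), deg_C FLOAT);
CREATE TABLE yrcastats (year CHAR(4), abrev CHAR(2),
                        mean3 FLOAT, big3 FLOAT, small3 FLOAT,
                        num3 INTEGER, trang3 FLOAT,
                        PRIMARY KEY (year, abrev));
INSERT INTO anom3 VALUES ('1970','LT',-0.5),('1970','LT',0.5),('1971','LT',1.0);
INSERT INTO yrcastats (year, abrev) VALUES ('1970','LT'),('1971','LT');
""")

# ONE scan of anom3 computes every v3 statistic per (year, abrev) group.
stats = db.execute("""
    SELECT year, abrev, AVG(deg_C), MAX(deg_C), MIN(deg_C),
           COUNT(deg_C), MAX(deg_C)-MIN(deg_C)
    FROM anom3 GROUP BY year, abrev
""").fetchall()

# ONE pass of keyed updates instead of six subqueries per target row.
db.executemany("""
    UPDATE yrcastats SET mean3=?, big3=?, small3=?, num3=?, trang3=?
    WHERE year=? AND abrev=?
""", [(m, b, s, n, t, y, a) for (y, a, m, b, s, n, t) in stats])

print(db.execute("SELECT * FROM yrcastats ORDER BY year").fetchall())
```

The 1970 row ends up with mean 0.0, max 0.5, min -0.5, count 2, range 1.0, exactly what the correlated-subquery version would produce, but from one aggregation pass.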
MariaDB [temps]> source bin/Lyrcastats
Query OK, 28423 rows affected, 5 warnings (4 min 7.05 sec)
Records: 28423  Duplicates: 0  Warnings: 5

Query OK, 23987 rows affected, 17744 warnings (4 hours 20 min 58.52 sec)
Rows matched: 28423  Changed: 23987  Warnings: 17744
The Python3 code that makes the graphs for the two graph types:
Here’s the one that calculates and plots the two anomaly values on one graph, directly from the anomaly by month tables.
chiefio@PiM3Devuan2:~/Py3/v4$ cat cnASdelta.py
# -*- coding: utf-8 -*-
import datetime
import pandas as pd
import numpy as np
import matplotlib.pylab as plt
import math
import mysql.connector as MySQLdb

plt.title("GHCN v3 black vs v4 red Anomaly by Years")
plt.ylabel(" Country AS Australia Anomaly C")
plt.xlabel("Year")
plt.ylim(-3,2)
#plt.ylim(1850,2020)

db=None
try:
    db=MySQLdb.connect(user="chiefio",password="LetMeIn!",database='temps')
    cursor=db.cursor()

    # v4 anomalies averaged by year for country abrev 'AS'
    sql="SELECT year,AVG(deg_c) FROM anom4 WHERE abrev='AS' GROUP BY year;"
    print("stuffed SQL statement")
    cursor.execute(sql)
    print("Executed SQL")
    stn=cursor.fetchall()
#    print(stn)
    data = np.array(list(stn))
    print("Got data")
    xs = data.transpose()[0]   # years; or xs = data.T[0] or data[:,0]
    ys = data.transpose()[1]   # anomalies
    print("after the transpose")
    plt.scatter(xs,ys,s=1,color='red',alpha=1)
#    plt.show()

    # v3.3 anomalies: anom3 keys by country number, so join to get abrev
    sql="SELECT year,AVG(deg_c) FROM anom3 AS A INNER JOIN country AS C ON A.country=C.cnum WHERE C.abrev='AS' GROUP BY year;"
    print("stuffed v3.3 SQL statement")
    cursor.execute(sql)
    print("Executed SQL")
    stn=cursor.fetchall()
    data = np.array(list(stn))
    print("Got data")
    xs = data.transpose()[0]
    ys = data.transpose()[1]
    print("after the transpose")
    plt.scatter(xs,ys,s=1,color='black',alpha=1)
    plt.show()
except:
    print ("This is the exception branch")
finally:
    print ("All Done")
    if db:
        db.close()
This is the one that uses the new table of yearly statistics and does the difference or change of the anomaly values between GHCN version 3.3 and version 4:
chiefio@PiM3Devuan2:~/Py3/v4$ cat a3v4deltaAS.py
# -*- coding: utf-8 -*-
import datetime
import pandas as pd
import numpy as np
import matplotlib.pylab as plt
import math
import mysql.connector as MySQLdb

plt.title("GHCN v3 vs v4 Anomaly Difference by Years")
plt.ylabel(" Country AS Australia Anomaly C")
plt.xlabel("Year")
plt.ylim(-2,2)
#plt.ylim(1850,2020)

db=None
try:
    db=MySQLdb.connect(user="chiefio",password="LetMeIn!",database='temps')
    cursor=db.cursor()

    # v4 minus v3.3 yearly mean anomaly, only where BOTH versions have data
    sql="SELECT year,(mean4-mean3) FROM yrcastats WHERE abrev='AS' AND num3>0 AND num4>0 GROUP BY year;"
    print("stuffed SQL statement")
    cursor.execute(sql)
    print("Executed SQL")
    stn=cursor.fetchall()
#    print(stn)
    data = np.array(list(stn))
    print("Got data")
    xs = data.transpose()[0]   # years; or xs = data.T[0] or data[:,0]
    ys = data.transpose()[1]   # delta anomaly
    print("after the transpose")
    plt.scatter(xs,ys,s=4,color='purple',alpha=1)
    plt.show()
except:
    print ("This is the exception branch")
finally:
    print ("All Done")
    if db:
        db.close()
E.M., I looked at the charts quickly. I will spend more time looking at them and thinking.
But already I am perplexed.
Standard ‘consensus’ global warming theory states that CO2 is rising in the atmosphere ‘globally’. So there ‘should’ be some kind of uniform CO2 effect showing up.
But there is no uniform effect in what I see in these charts. The charts are all over the place.
From which I conclude that either the CO2 global warming hypothesis is wrong, OR other factors are drowning out the global CO2 effect = “It’s complicated”.
@Bill in Oz:
Yeah, that was one of the things that tipped me into Skeptic very early on (decade+ ago). I took a look at the data “by country” and each one was acting differently. It is ONLY when you make one big averaged ball of homogenized Data Food Product out of it that it “fits the theory”. Prior to that it’s all nips and tucks.
I think it was with GHCN v2 and Africa … I did a posting of each country in Africa and there was a “knee” in the temps in about the 1980-90 range. Except it showed up progressively as you moved around the continent country by country year over year. As though there were a roll-out of electronic thermometers at airports but the installation team could only work “so fast” and in one place at a time.
For these graphs it is important to remember they talk about two different things. The Delta Anomaly v3 vs v4 shows Data Diddle: the result of changes made to the “raw” data years, decades, or centuries later, OR the effect of changing thermometer counts on the anomalies (which puts the lie to the notion that, as anomalies, it doesn’t matter which thermometers you select).
The chart with both v3 and v4 anomaly data on it has more information, but in a harder to read form. The difference between individual black / red dots gives the same information as the Delta graphs, but you also get to see the shape of the data. How much dispersion or volatility or range is there in the anomaly year over year? Here we see high volatility early on, a narrow “waist” in the baseline period, and then what looks like a loss of low-going anomalies in the present, for many graphs of regions and nations. That doesn’t look at all like what CO2 is supposed to do; but it sure looks like what you would get as an artifact of thermometer changes and numbers.
The fact that “the more instruments you average, the less anomaly range you ought to get” enters into this. In the “baseline window” the GHCN folks loaded in a lot more different thermometers. Well, it is far more likely that ONE thermometer will have a very hot or cold day than that 100 will. The average of those 100 will tend to suppress excursions. That is why all the early data have a wide range (very few thermometers) and the baseline window then gets a narrow range in the aggregates.
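That averaging effect is easy to demonstrate with simulated numbers. Nothing GHCN-specific here, just random draws from the same distribution, so any range difference is purely an instrument-count artifact:

```python
import numpy as np

# 50 "years" of anomalies: one lone thermometer vs the mean of 100
# thermometers, all drawn from the identical N(0, 1) distribution.
rng = np.random.default_rng(42)
years = 50

one_station = rng.normal(0.0, 1.0, size=years)                    # 1 instrument
many_avg = rng.normal(0.0, 1.0, size=(years, 100)).mean(axis=1)   # mean of 100

# Peak-to-peak range: the single station swings far wider than the
# 100-station average, despite identical underlying "weather".
print(np.ptp(one_station), np.ptp(many_avg))
```

With a standard deviation shrinking roughly as 1/sqrt(N), the 100-station average should show something like a tenth of the lone station's year-over-year range, which is exactly the narrow "waist" the baseline window shows.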
But what happens after 1990 is most odd. You get a “hockey blade” but only in some places. Some are flat. Some lose the cold going excursions. Some just narrow dramatically with an upward tilt. Now CO2 has supposedly been busy since the 1940s… strongly since the 1970s… yet those upturns tend to lie on top of the late ’90s, just about the time electronic thermometers get rolled out (and as airports with lots of jet engines burning tons of kerosene become a major percent of the data). It just shouts “Instrument issues” to me (being a bit charitable here, as the alternative is “manufactured data for effect”).
The one that really frosts my shorts, though, is how the past is changed. These are supposed to be the “unadjusted data”. Historical records from the instruments on the ground in the 1800s ought not to change, yet it moves (“Eppur si muove!”) and that’s just wrong.
EM, comparing anomaly baselines is a great path to take. Even better are the preliminary findings re: some countries followed the consensus norm and others DID NOT!
Your preliminary suspicions only confirm what many of us have felt about the data and its manipulators.
IMHO the biggest point this set demonstrates is that “using anomaly” does NOT prevent thermometer change artifacts from changing the results by the 1/2 C of notional “Climate Change”.
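A toy example of that point, with entirely made-up numbers: two stations with no warming at all, where simply dropping one of them partway through the record manufactures a "warming" step in the averaged anomaly:

```python
import numpy as np

# Two stations, one reading a steady 10 C and one a steady 12 C. Zero
# actual temperature change anywhere. Anomalies are taken against the
# mean of the two-station baseline mix (11 C).
years = np.arange(1950, 1990)
cool, warm = 10.0, 12.0
baseline = (cool + warm) / 2.0          # 11 C

# First half: both stations report; second half: only the warm one survives.
avg_temp = np.where(years < 1970, (cool + warm) / 2.0, warm)
anomaly = avg_temp - baseline

print(anomaly[:3], anomaly[-3:])   # 0.0 before the change, +1.0 after
```

Using anomalies did nothing to stop the +1 C step: it is a pure instrument-selection artifact, the kind of thing changing thermometer counts can do to a national average.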
Not a surprise to me (the idea of doing Global Calorimetry while screwing around with all the thermometers is just daft. Fiddling with the thermometers assures failure in Calorimetry, per all my Chemistry teachers).
E. M. Excellent work. Thank you.
Precisely what I hoped you could produce.
Your conclusion is spot on: “The Delta Anomaly v3 vs v4 shows Data Diddle.”
It perplexes me how and why historical data is adjusted. If they (including NASA Goddard) make an adjustment in say 2002 and then again a few years later, and then again…. What was wrong with their first adjustment…and the second…and so on? Makes any product unsound. Why believe this product…you’ll likely change it in a few years???
Pingback: GHCN v3.3 vs v4 – Selected Country Anomaly Differences – Climate Collections
What was wrong with their first adjustment…and the second…and so on?
Even worse, the adjustments are not well documented, so we don’t know what they were trying to fix. In many cases they don’t bother to tell everyone they “fiddled” with the numbers. Plus there is that minor detail that the data may have been “fiddled” with by someone farther upstream, so you have corrections made to corrections made to corrections, to the point that the only thing you can say for certain is that the data is tainted and untrustworthy, because you have zero audit trail on what has been done to it.
I have contended for a long time that what they need to do is pick long historical stations and validate the daily temperature data against incorruptible records, like the temperatures reported for that day and location in the local newspaper. Then you would at least have a clue what the original value was, to see if we have tweaked the data so many times that it is completely unrecognizable.
@E.M.: well done, sir! And, your conclusions drawn from the drawn spots are spot-on!
At some point, please save a v3/v4 set somewhere, as I surmise that v4 is ‘in motion’, and that the diddling is continuing.
I agree very much that changing measured values after the fact is undesirable, as it does not remain data… it yields a product, or a result. Put another way, processing data is like a stomach… you may get something nutritious out of it, but what is left over is no longer as it was.
I have archived copies of v1, v2, v3 and v4, often in a couple of accessions and with some variations of Min Max Mean along with adjusted and unadjusted. It isn’t a full set of all combinations, just what I grabbed when I grabbed it. It’s on several different sets of media so unlikely to become “lost”.
Bigger problem is just finding time to do something with all of it.
I hope to eventually do a 4 way compare, like above but with v1, v2, v3, and v4 all included, and perhaps in adjusted vs unadjusted variations. But since this posting has about a week of total work in it, those 8 variations are a few months of work, especially when you get into 2 by 2 combinatorics. But I’m whacking on it ;-)
Thanks E M for the replies to my comment and to the others. And thanks to everyone else who has chimed in here.
Clearly a whole group of national & international weather agencies have been, & are, doing a diddle of the data. That is what the evidence shows.
But I wonder, are there whistle blowers who have been working within BOM or NASA etc during this period, who can spill the beans on this diddling? Folks who have retired now and so no longer have to protect their jobs & salaries?
I wonder what a search would turn up ?
Pingback: Gardening In Construction Tubs & Pots | Musings from the Chiefio
@EM: Have you considered writing a paper, perhaps in collaboration with retired BOM resources who have in-country access to public temp records? With its representation on this list, I would think that Australia would be a dandy starting point. Titled something like, ‘World regional Surface temperature audit project’, collecting as much as possible on thermometer location and technology changes, technology, supporting documentation, etc. I think it would be enormously valuable as a resource once created.
@EM: And while I am thinking about it…. In AUS there is an issue making site records disjunct when a digital sensor replaces a min/max thermometer in a location, with no calibration overlap.
I think it would be way fun to take a Calibrated pair of the two technologies, place them together, and record their results for a year. For extra fun, add a ‘continuous recording’ device, (say, 1 second samples) that does not throw any data away. Capture it all, and then analyze for tendencies to under/over report averages or min/max based on the various averaging methods.
@EM: (Merovingian voice) Who has time? Who has time? But then if we do not ever take time, how can we ever have time?
All wonderful ideas (Bond Movie Voice: Had we world enough, and time…) but…
There’s the small issue of zero income but Social Security, and the need to do everything myself involving house et al… Frankly, I’ve thought of going back to work and just paying someone else to do a lot of the stuff here that needs doing. Taxes make that less attractive than otherwise, but it’s still a net win. OTOH, it’s nice being retired and having no boss…
At one point I bought a couple of thermometers to do A/B compares of locations just around the house. It can be remarkably different. But not relevant to the Climate Cartel. They will only care about things that are of the kind they use and sited per their guidelines…
A.Watts sells nice recording thermometers fairly cheap and it would likely be good as a starting point to compare one of them to LIG in similar locations; but what’s really needed is a real Stevenson Screen and a real LIG thermometer next to a real MMTS unit. Unfortunately, in many places now, you can not buy mercury thermometers… (the best kind, BTW). Plus, a LIG thermometer is not automatically read. Takes a human being there reading it.
So someone wants to put up enough money to buy the hardware and provide to me a gardener (to do all the yard work lined up for this summer) and pay for a new roof (the other summer job in queue) and I’ll be happy to do a 6 month study on the thermometers…
Otherwise I’ve got “things to do” to put food on the table and prep the house for sale in a year-ish and sort the crap out for moving in a bit longer than that and… and… and… (Nobody ever told me there would be this much work involved in being retired ;-)
So I’m doing the ‘desk work’ archived data compares instead. That I can wedge in as a couple of hours late at night or the occasional weekend marathon.
An interesting little report:
I was thinking of taking one continent at a time and doing the above graphs for them. Figured I ought to know how many countries are in each region (continent). Ships (region 9) has 1, as does Antarctica (region 7). But we already know there’s no longer “Ship” data in v4, so it doesn’t count anymore. An Antarctica graph would be nice.
Then you get South America (3) at 16 countries. North America (4) at 31, Asia (2) at 36, and Australia Pacific Islands at 37 are almost the same level of work, but double South America. But Europe (6) at 57 and Africa (1) at 61 … man that’s a lot of countries (and graphs)…
I think I may need to have a re-think on the approach ;-)
Yes, a difficult one… What to do?
A suggestion for Europe : Do all the English speaking countries first. The UK, Ireland, Gibraltar & Malta if they still use English for their meteorological reports. Why English ? Because you can double down and check out any ‘issues’ in the supporting country info if needed. See what emerges from those… And after that think about other European countries…
I am also OK in French and Spanish (and can read Portuguese … it is the sound of it that is hard…) so a lot of Europe is accessible to me. The problem is lack of alternative data sources…
But it is an interesting idea. Group countries by language affiliation… on the assumption “they all talk” and so might share biases…
An alternative might be Aa / Bb comparisons where v3 vs v4 comparisons could be done in two countries in the same geography (where they ought to be the same). Like Austria Hungary or Denmark Holland or Norway Sweden…
Yes EM That might yield some interesting discrepancies !
Another to think about: Republic of Ireland vs Northern Ireland in the UK. There is an element of “We’ll do it our way in the Irish Republic… and bugger off, you English”…
Pingback: GHCN v3.3 vs v4 Anomaly South America | Musings from the Chiefio
Pingback: GHCN v3.3 vs v4 Anomaly South America – Climate Collections
Pingback: GHCN v3.3 vs v4 Anomaly North America | Musings from the Chiefio
Pingback: GHCN v3.3 vs v4 Anomaly North America – Climate Collections
Pingback: GHCN v3.3 vs. v4 Anomaly Australia / Pacific Islands – Climate Collections
Pingback: GHCN v3.3 vs v4 Asia Anomaly | Musings from the Chiefio
Pingback: GHCN v3.3 vs v4 – Top Level Entry Point | Musings from the Chiefio