Australia Pacific Change of Anomaly GHCN v4 vs v3.3

Not what I expected to see. Yet in a way, completely what ought to have been expected.

I plotted the anomaly “by region” using Australia Pacific (region 5) as my test case. Folks may remember I put the time in to match up the old continent / region method of GHCN v3.3 with the nation ID method used in GHCN v4. I did that just so that I could compare “like to like” graphs.

So the Region 5 (Australia, New Zealand, Philippines, other Pacific Islands) graph at first look didn’t seem all that different. It DOES have a spike up at the tail (the new years of data after 2015) that looks very un-physical to me, but the general shape is the same. Then I plotted the two on top of each other.

In many cases in the early years the temperature dots are the same dots, but shifted lower. Wait, what? This is an “anomaly” of ONLY a given thermometer in a given month compared to other months of that thermometer. I doubt strongly they went back into the 1800s and “found” more thermometers in the Pacific. So HOW? At present, I can’t say. More “Dig Here!” required. Yet some of the dots move upward around 1900. In the 1940s to about 1975 range, the dots all cluster near the middle with little difference. Then they spread again after that, with some of the v4 dots being cooler than the v3.3 dots.

What I’d expected to see was the black dots of the GHCN v3.3 obscuring the red dots of v4 in the early years, then the red v4 pulling up in the present. That’s not what I got.

So here are the graphs; you tell me what they say:

Australia / Pacific Average Anomaly GHCN v3.3


GHCN v4 Australia Pacific Anomaly by Years


Australia Pacific Anomaly v4 (Red) v3.3 (Black)


I really hate it when people re-write historical data.

One thing that I think this DOES illustrate is that using “anomalies” is not a magic bullet that protects the results against changes in the selection of instruments. We’ve got a couple of 1/10 C of variation in THE most narrow repeatable anomaly I could make (only comparing a single instrument to itself, in each month). An anomaly made from different instruments averaged over areas and decades will be more variable still. Remember that, in theory, those historical data (pre-W.W.II at least) are FIXED. They cannot change in either their actual recorded data or their number. Yet they have.
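For anyone wanting to see what that “single instrument to itself, in each month” anomaly looks like in code, here’s a minimal sketch in Python. The station readings are made up purely for illustration; this is not a GHCN file parser, just the core arithmetic.

```python
from statistics import mean

# Made-up readings for ONE station: (year, month, temp_C).
readings = [
    (1900, 1, 25.1), (1901, 1, 25.4), (1902, 1, 24.9),
    (1900, 7, 14.0), (1901, 7, 14.6), (1902, 7, 14.3),
]

# Long-term mean for each calendar month, from THIS station only.
by_month = {}
for year, month, temp in readings:
    by_month.setdefault(month, []).append(temp)
month_mean = {m: mean(v) for m, v in by_month.items()}

# Anomaly: each reading minus that same month's mean for that same station.
anomalies = [(y, m, round(t - month_mean[m], 3)) for y, m, t in readings]
```

Nothing from any other thermometer enters the calculation, which is the point: any shift between versions for the same station has to come from the recorded values themselves, not from the station mix.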


About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present “hot buttons” are the mythology of Climate Change and ancient metrology; but things change...
This entry was posted in AGW Science and Background, NCDC - GHCN Issues. Bookmark the permalink.

26 Responses to Australia Pacific Change of Anomaly GHCN v4 vs v3.3

  1. Hifast says:

What are the anomaly deltas (changes from v3.3 to v4)?

    This is right down Tony Heller’s alley.

  2. Larry Ledwick says:

Although they sell the idea of using anomalies as a way to solve problems, I personally believe it is an intentional effort to obfuscate just these sorts of unexplained shifts.

The historical data should be absolutely frozen; it must not be adjusted!

    What some bloke read on a thermometer in 1850 is not going to change and it is simply wishful thinking that you can magically correct for what you “believe” to be errors in his methods or equipment.

It is quite literally impossible to know what errors, if any, were made back then (it is just as likely that he was a very careful and precise fellow as that he was not) – the only honest thing to do is take the data at face value and add an * note that explains the likely uncertainties.

    The proper way to deal with likely errors in the past is to widen the error bands for that data to account for what sort of errors you believe were made.

Correcting the data while claiming to retain accuracy is a lie; it is like adding false precision to a measurement by tacking on significant digits which were not there in the original measurement.

    We know from historical accounts that some of those folks who did citizen science a century or more ago were absolutely obsessed with accuracy and precision.

Read the story of John Harrison and his famous No. 4 marine chronometer (1762). He was the inventor of the first practical marine chronometer; tell me he was not obsessive-compulsive about the fine details and accuracy.

  3. E.M.Smith says:


Are you asking how they are made, or what their size is? The size is visible on the graph above, at 1/10 to a few 1/10 C, but I can likely compute the actual numbers. I was planning to work on how to add trend lines next, but the actual delta would be interesting too.

  4. Bill in Oz says:

E M, I agree with Larry on this. This result is weird and unexpected, and must be a consequence of fiddling with the bits of the jigsaw set… But by whom? When? Why?

  5. Bill in Oz says:

PS: Is it possible to separate out the data from the different national meteorological organisations? BOM here in Oz, New Zealand, the Philippines etc.? Maybe one of these is ‘mistreating’ the historical data?

  6. Graeme No.3 says:

The good news is that they aren’t fiddling the old figures much. In the HadCRUT set, the whole Southern Hemisphere temperature record before 1858 comes from ONE thermometer in what is now Indonesia. As they exchange delusions, scary slogans etc., it is likely that the same applies to these figures.
Alternately, why not invent fictional sites (with the favoured temperature)? There was that African site which supplied figures for the first time in 41 years, just when high temperatures were needed for the ‘warmest year’ in whatever. And I’ve commented before on the 2 sites in Colombia with 2 months over 80℃.
What is obvious is that IF they have destroyed the original data then they would face criminal charges: destruction of government property, obtaining money by false pretences, and no doubt others.

  7. E.M.Smith says:

    @Bill in Oz:

    Yes, I can do “by country”. It’s a lot of work and a lot of graphs, though. I’ll likely do an Oz and Kiwi set but not every nation. Sometime after I’m done with regions and version compares.

    @Graeme No3:

    Would that that were true… but wasn’t it Hadley who said they had “lost” the original data and only had the “quality enhanced” product? Last I looked they all still had their jobs, pensions, grants, speaking tours, etc. etc.

    IMHO not one of them will see the jail they ought to see. Too many “Rich and Powerful” paying off their lawyers and owning the cops.

  8. Steven Fraser says:

E.M.: try this: pick 1 Aussie thermometer that is represented in both datasets, and plot all the values for v3.3 and v4 in actual temps for it. That should give a sense of the comparative values, and potential offsets, between the 2 series. For additional fun, plot the two anomaly lines, one for each version, with the time series. That will allow the Mark 1 eyeball relationship comparison to be made.
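FWIW, the core of that comparison can be sketched in a few lines of Python. The station values below are invented just to show the mechanics; matching real GHCN v3.3 and v4 records would first mean mapping the two versions’ differing station IDs onto each other.

```python
# Invented monthly temps (deg C) for ONE station, keyed by (year, month).
v33 = {(1900, 1): 25.1, (1900, 2): 25.8, (1900, 3): 24.2}
v4  = {(1900, 1): 24.9, (1900, 2): 25.5, (1900, 3): 24.2}

# Only compare months present in BOTH versions.
common = sorted(set(v33) & set(v4))
deltas = {k: round(v4[k] - v33[k], 2) for k in common}

# One number for "how far apart are the two series, on average".
mean_offset = sum(deltas.values()) / len(deltas)
```

Plotting `deltas` against time would then show directly whether the offset is constant or drifts by era, which is exactly the Mark 1 eyeball question.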

  9. Bill in Oz says:

    Whatever you think will work to show up why this problem is happening is good with me E M.

    But another curious anomaly is showing up with the BOM in recent days.
A strong cyclone named Trevor passed over the coast of the NT on Saturday, bringing with it a huge dump of rain as it travelled south from the Gulf of Carpentaria… It degraded into a tropical low as it moved south, bringing rain to bone dry desert country along the NT–Queensland border. But BOM’s maps of this rainfall event are not accurate, ignoring falls of 85 mm at a place like Jervois in the NT… And elsewhere…

Apparently the BOM is ignoring some of its automatic weather stations in an area where they are few & far between… As in hundreds of km apart…

    Very curious !

    But perhaps a hint of the reason for the problems you are having as well ?

  10. Bill in Oz says:

PS: Link to info for my comment:

  11. A C Osborn says:

E M, Best Final Temperatures are even worse than you describe: they are what their Computer Program thinks they should be.
I kid you not.

  12. cdquarles says:

What I’m thinking of doing is a box-whisker plot of this, when I get a roundtuit. That shows the range; the box is where most of the values lie, with the center line of the box being whatever central value you like or get. This will give more information.
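That summary can be roughed out with the Python statistics module. The deltas below are synthetic, purely to show the five numbers a box-whisker plot would draw; note that `statistics.quantiles` defaults to the “exclusive” method, so a given plotting package’s box hinges may differ slightly.

```python
from statistics import quantiles

# Synthetic per-year deltas (v4 minus v3.3), purely for illustration.
deltas = [-0.3, -0.2, -0.1, 0.0, 0.0, 0.1, 0.1, 0.2, 0.4]

q1, q2, q3 = quantiles(deltas, n=4)   # box edges and the median line
low, high = min(deltas), max(deltas)  # whisker ends (no outlier trimming here)
```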

  13. Bill in Oz says:

Huge storm in New Zealand. A bridge over a river on the South Island has been destroyed as a result:

This same weather system passed through here in South Australia on Sunday night. Looks like the Antarctic Circumpolar Vortex is expanding northwards as we in the Southern Hemisphere move towards winter.

    Given your remarks E M about both polar vortexes being connected, that means the Northern Polar Vortex will contract Northwards…
    So maybe we’ll get some needed rain with the storms. And you folks will get some warmer weather up there !
    A win/win !


  14. E.M.Smith says:

    There are two aspects of the polar relationship (coupling).

1) Polar See-Saw… As the climate shifts, it is opposite in effect on each pole. This is seen in the annual cycle, and there are some longer cycles to do with Lunar / Tidal effects (the moon ends up more north or south of the equator, pulling more ocean one way or the other over a few years; seen in the variation in tide charts over the years).

    2) Solar driven global effects. The tendency for both poles to have meridional (loopy) or zonal (flat) jet stream flow. When meridional, more storms take a “loop” ride to more equatorward lands and we get alternating horrid-cold-wet and nice-warm weather. I believe that both poles react the same way (together) to that.

    So right now both N and S are having some dramatic storms and meridional jet streams. That ought to remain for several years as a dominant tendency.

    BUT, as we are heading into spring / summer we ought to catch a break while you folks are headed into increasingly worse for a few months…

    Don’t know the state of the Saros Cycle and Tides…
so there’s that too… There’s a roughly 1500 to 1800 year super-cycle (with harmonics, so sometimes 1800, sometimes off a bit), with the moon doing 1800 years and the Bond Event cycle averaging about 1470 (but with a couple hundred years of variance, and some look more like 1800…).

    With all those cycles, and some reinforcing at both poles and others counterpoint each pole, well let’s just say prediction is hard; “especially about the future” ;-) But I hope one of us catches a break…

  15. Bill in Oz says:

    E M I’m not up to speed on those cycles. Will have to spend some time checking them out.

    But as we have been in drought the past year & a bit, some stormy wet weather would really be welcome here.

And as you in North America (USA & Canada) have been in the deep freeze since November, you will all welcome some warmer weather, I’m sure.

BOM occasionally produces some interesting stuff. This image, based on satellite imagery of the Southern Hemisphere from a viewpoint above the South Pole, is interesting:

    All those deep lows reflect the circum polar current around the Southern Ocean…

  16. Kneel says:

    “But perhaps a hint of the reason for the problems you are having as well ?”
Or perhaps the change from ACORN1 to ACORN2 at BoM (Oz) is the issue? According to Jen Marohasy, this makes a significant difference – the warming trend is about double what it was in the old version. If ACORN 1 went into GHCN v3 and ACORN 2 into GHCN v4, then …

  17. A C Osborn says:

EM, Tony Heller has found a station, Nuuk, Greenland, where the RAW data has been adjusted.
This is one that you can double check.

It is almost as if they have messed up the whole station’s data: lost early records, a change of GPS coordinates.

  18. Bill in Oz says:

    Here is an interesting discussion of the BOM’s ACORN Sat, versions One & Two.

    Once again raw data is taken and homogenised to produce totally inaccurate information about climate.

  19. Steven Fraser says:

E.M. (and A C Osborn): Tony also profiled Reykjavik, which is an excellent location for correlating temps with the NAO. Caught cooling the past, again, to create a trend where there is none.

This time, there is pushback. The Met folks in Iceland don’t buy the adjustments that were made; they disagree with them.

Mmmm, the politics of temperature data.

    Keep up the good work.

    BTW: A.C: My grandmother was an Ontario Osborne.

  20. A C Osborn says:

    Steve, I am in the UK.

  21. A C Osborn says:

    EM, an article by Kirye and Pierre Gosselin at NoTricksZone has added other stations to those identified by Tony Heller as having adjusted Raw Data.

  22. Ian W says:

It would appear that there are a lot of ‘adjustments’ between V3 unadjusted and V4 unadjusted. The governance of this data is appalling, considering the level of trust given to it by politicians.
    see for example:

  23. E.M.Smith says:

    Yeah, there’s a small cottage industry to be had just doing A/B compares of station by station by versions.

    I’ve taken a bit of a tech break on the data analysis to get Tax Stuff done, but that’s all filed now, so I guess it’s time to get back to it ;-)

    I’ve got the Odroid XU4 back up and running, (Using it for this comment) and with just about everything on a Real Disk partition:

    root@odroidxu4:/# df
    Filesystem      1K-blocks    Used  Available Use% Mounted on
    udev                10240       0      10240   0% /dev
    tmpfs              204488     532     203956   1% /run
    /dev/mmcblk1p1   30335916 3007916   26990212  11% /
    tmpfs                5120       4       5116   1% /run/lock
    tmpfs              828400       0     828400   0% /run/shm
    /dev/sda5         4184064   37300    4146764   1% /tmp
    /dev/sda3        20961280 1399476   19561804   7% /var
    /dev/sda6         4184064  136212    4047852   4% /lib
    /dev/sda7        12572672 1881036   10691636  15% /usr
    /dev/sda8         1038336   34728    1003608   4% /root
    /dev/sda9         1038336   40656     997680   4% /bin
    /dev/sda10        1038336   43184     995152   5% /sbin
    /dev/sda11        1038336   39148     999188   4% /etc
    /dev/sda12     1900148220 2344536 1897803684   1% /SG2/xfs
    cgroup                 12       0         12   0% /sys/fs/cgroup
    tmpfs              204488       0     204488   0% /run/user/1616

Yes, that’s a 12 on the last one ;-) A dozen partitions on one 2 TB disk… with a 4 GB /tmp, so no more worries about big index builds running out of /tmp space…

I’m using a “one release back” Armbian (based on Debian 8 “Jessie”) with a Devuan “uplift” that I’d built some long time ago (and that still worked fine), with the various file systems copied onto the hard disk partitions, then the partitions mounted over the originals. That way, doing things like an “update” only changes the hard disk version, and if it breaks, you can just unmount the disk copy and drop back to the chip copy… It’s a fun, if trivial, “trick” I’ve used often, even on hard disk based systems. It lets you have nearly instant “roll back” on a failed upgrade, for example.

    So, as one example, I would mount, say, /dev/sda9 and copy to it, then remount, via something like:

    mkdir /SG2/bin
    mount /dev/sda9 /SG2/bin
    cd /SG2/bin
(cd /bin; tar cf - .) | tar xvf -
    cd /
    umount /SG2/bin
    mount /dev/sda9 /bin

    In reality that last mount would be after putting an entry into /etc/fstab so it would happen automatically at boot time. The only “tricky bit” is that you need that fstab entry in both the native /etc and the overlay /etc so that it happens in both cases (unless you don’t want that…)
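By way of illustration only, the fstab line for that /bin example might look something like this (the filesystem type and options are my assumptions here, not copied from the actual system):

    # /etc/fstab: mount the hard disk copy of /bin over the SD card copy at boot
    /dev/sda9   /bin   ext4   defaults   0   2

The same line goes in both the native /etc/fstab and the overlay copy, per the “tricky bit” above.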

    So now I’ve done that for the various file systems. Then I did the usual apt-get update, apt-get upgrade and “Bob’s yer Uncle” up and up to date.

    Now I’m going to go through that whole “install the database software and load the database” process again just to assure it’s all fresh and correct, and then pump out some more graphs…

I’ll be doing that last kind of graph (v3 vs v4) for each region, and then for selected large countries inside various regions. Yeah, that’s going to take a good while… But it will show the extent of the Data Diddle (select regions or global?) and perhaps differences by country (are the local BOMs up to something, or is the aggregation the Diddle Point?).

Depending on how long it takes to get this system all “set up and running right” I may boot the Pi M3 for a few of the graphs, just to make some consistent progress… (That’s also a work habit of mine: work on building better, faster infrastructure while being willing to use whatever you’ve got to keep progress toward the goal moving.) Sometimes you push hard on one leg or the other depending on needs of the moment, but both matter, so some of each in any case. Usually the ratio depends on needs, but sometimes I’ll swap based on being tired of futzing with one or the other ;-)

    So I’m not likely to be doing the individual stations soon… but I’ll be pointing at the best spots to dig for such stations; or point out that they are distributed through the lot…

  24. Pingback: GHCN v3.3 vs v4 – Top Level Entry Point | Musings from the Chiefio

Comments are closed.