What Difference Does Instrument Selection Make To Anomaly Processed Temperature Data?

I’ve wanted to do an A/B Comparison of “Anomaly Processed Temperatures” with changed instruments since the very first time I ran GIStemp, about a decade ago. Here, I finally get to “scratch that itch”.

They, “Climate Scientists” and supporters of Global Warming, claim that since their dire claim of warming is all “done with anomalies”, the actual instruments used do not matter. Complaints about instrument change causing problems are summarily dismissed, often with sneers and jeers, as ignorance. Here I show otherwise: instrument selection DOES change the anomalies produced.

To me, this was obvious simply because some places are more volatile than others, and some move in opposition. “High Cold Places” have a wider range, over both the year and the day. Places near water never get as hot as dry places. Similarly, when the hot inland Central Valley of California hits over 100 F in Summer, the rising hot air pulls an ocean fog blanket over San Francisco that can cool it down to the 50-60 F range on the same day.

On one occasion I left San Jose in August at 105 F and drove to San Francisco to find it a cold, dank 55 F, then returned home to the heat later in the day.

This simply MUST show up in the “anomalies”. But I could not change GIStemp to demonstrate that and have it still run. (It was way too sensitive to changing the data inputs and would hang if the “wrong” stations were deleted, so I had a long ‘trial and error’ phase to slog through if I wanted to do that).

Now I can show it using these anomaly processed temperatures.

In version 3.3 and prior of GHCN, both Russia and Kazakhstan were divided into European and Asian parts, roughly at the Urals. This was denoted in the Station ID by the leading digit of the first three digits (the country code): a 6 for Europe, or a 2 for Asia. In version 4, the data are combined under one country abbreviation: RS for Russia and KZ for Kazakhstan. Since comparing a “country” requires combining those two “country codes” in GHCN 3.3, I had to work out a way to do that. That process is detailed in this documentation. I’ll be including the Python code used to make the combined graphs in the Tech Talk section down below.

But first, let’s look at some temperature anomaly graphs and the differences in the temperature anomalies between v3.3 and v4 of GHCN, with Europe included in Russia vs. not for the GHCN 3.3 data (Europe is always included in the v4 data).

RS Russia

The Difference In The Anomalies

These are the differences in the anomalies of v4 vs v3.3 data. The first graph includes Europe in both sets; the second graph does not have Europe in the 3.3 data. They are dramatically different. WHAT instruments are used DOES change the anomalies. Which raises the question of how much of “Global Warming” is actually just what instruments are used in any given year. How much of the difference in the first graph is due simply to the instruments in use changing between versions?

GHCN v3.3 vs v4 Russia Anomaly Difference Europe Included


GHCN v3.3 vs v4 RS Russia Difference


Note that the “difference” graph is less scattered with more common instruments in the set. That makes sense, as there is then less difference in the total instrument data. It does hint that a dispersed or wide ranging Difference Graph can be an indicator of a lot of instrument change; implying you are measuring instrument change more than climate change.

The Anomalies Plotted

Here we have plotted the actual anomalies in v3.3 and v4. The graphs above just subtract one set of these dots from the other to get the difference in the dots. The most obvious difference is that all the early years’ data, the first thermometers, are all in Europe. So we only get black dots in the early years (GHCN v3.3) if Europe is included. But notice that the other black dots do shift position, even if only by a few 1/10ths C. Then again, all of “Global Warming” is just a few 1/10ths C. So is it warming, or are the instruments included in the data changing over the decades? Was adding a HUGE number of data points to the “baseline interval” of GIStemp and Hadley (roughly 1950 to 1990) really helping, or assuring that years since would be different?

First graph is with Europe included, 2nd graph is Asia only for GHCN 3.3 data.

GHCN v3.3 vs v4 Russia Anomalies Europe Included


GHCN v3.3 vs v4 RS Russia Anomaly


KZ Kazakhstan

The changes here are harder to spot. The reason is simple. Only one station is counted as in Europe in the v3.3 data:

MariaDB [temps]> SELECT COUNT(deg_C), stnID, abrev FROM anom3
    -> WHERE abrev="K!" GROUP BY stnID;
+--------------+-------------+-------+
| COUNT(deg_C) | stnID       | abrev |
+--------------+-------------+-------+
|          995 | 62534398000 | K!    |
+--------------+-------------+-------+
1 row in set (1.28 sec)

So that’s about 82 years of monthly samples for one thermometer. All the rest are from “Asia”, which swamps that one station in the rest of the data:

MariaDB [temps]> SELECT COUNT(deg_C), stnID, abrev FROM anom3 WHERE abrev="KZ" GROUP BY stnID;
+--------------+-------------+-------+
| COUNT(deg_C) | stnID       | abrev |
+--------------+-------------+-------+
|         1250 | 21128679000 | KZ    |
|         1077 | 21128879000 | KZ    |
|         1332 | 21128952000 | KZ    |
|         1266 | 21135078000 | KZ    |
|         1586 | 21135108000 | KZ    |
|         1195 | 21135188001 | KZ    |
|         1226 | 21135229000 | KZ    |
|         1000 | 21135358000 | KZ    |
|          949 | 21135394000 | KZ    |
|          714 | 21135406001 | KZ    |
|         1030 | 21135416000 | KZ    |
|         1108 | 21135542000 | KZ    |
|          816 | 21135576000 | KZ    |
|          733 | 21135663000 | KZ    |
|         1511 | 21135700000 | KZ    |
|         1231 | 21135746000 | KZ    |
|          988 | 21135796000 | KZ    |
|         1607 | 21135849000 | KZ    |
|          808 | 21135925000 | KZ    |
|         1401 | 21136177000 | KZ    |
|          953 | 21136208000 | KZ    |
|         1316 | 21136535000 | KZ    |
|          897 | 21136665000 | KZ    |
|          777 | 21136729000 | KZ    |
|         1146 | 21136859000 | KZ    |
|         1593 | 21136870000 | KZ    |
|         1878 | 21138001000 | KZ    |
|         1165 | 21138082000 | KZ    |
|         1477 | 21138198000 | KZ    |
+--------------+-------------+-------+
29 rows in set (1.12 sec)

29 stations to one. Still, there are visible effects even from one station. Look at the 2000 era “anomaly difference” graphs and note the 2 dots that are above the pack with Europe included, but in the pack without it.

The Difference In The Anomalies

GHCN v3.3 vs v4 Kazakhstan Difference Europe & Asia combined


GHCN v3.3 vs v4 KZ Kazakhstan Difference


The Anomalies Plotted

GHCN v3.3 vs v4 Kazakhstan Anomaly Europe & Asia combined


GHCN v3.3 vs v4 KZ Kazakhstan Anomaly


My Conclusions

My observation on this is simply that even using “anomalies”, instrument change MATTERS. “Climate Scientists” often claim that since they use anomalies, the exact instruments in use do not matter. Compare these graphs. They are different. All were processed with anomalies, and the Asian thermometers are the same data, so the difference is all down to “instrument selection”.

I think this matters, and certainly matters a lot more than the Global Climate Cabal claims, as they claim it does not matter at all.

Tech Talk

I’ve worked out a method of combining the 2 “countries” of European Russia and Asian Russia used in GHCN v3.3 and prior for an “apples to apples” comparison with the v4 data (where there is only one ‘RS’ Russia). The same process is used to unify the Kazakhstan data (KZ and K!).

The produced “Yearly Country Anomaly Statistics” table, yrcastats, is used to produce the “Difference” graphs above.

This is done via a Python program that uses an SQL interface to get the data from the database table, then uses a plotting library to produce the plots above. Here’s that program for Russia. All the “magic sauce” is in the creation of a combined yrcastats table, so this program is unchanged from the prior run:

I’ve line wrapped some text to make it more readable without scrolling off the right side of the panel. As Python is “space as syntax” dependent, check the indentation carefully if you cut and paste this.

# -*- coding: utf-8 -*-
import numpy as np
import matplotlib.pyplot as plt
import mysql.connector as MySQLdb

plt.title("GHCN v3 vs v4 Anomaly Difference by Years")
plt.ylabel("Country RS Russia Anomaly C")
plt.xlabel("Year")
plt.ylim(-2, 2)

db = None
try:
    db = MySQLdb.connect(user="chiefio", password="LetMeIn!",
                         database='temps')
    cursor = db.cursor()

    # Triple-quoted so the line-wrapped SQL stays one valid string.
    sql = """SELECT year, (mean4-mean3) FROM yrcastats
             WHERE abrev='RS' AND num3>0 AND num4>0
             GROUP BY year;"""

    print("stuffed SQL statement")
    cursor.execute(sql)
    print("Executed SQL")
    stn = cursor.fetchall()
    data = np.array(stn)
    print("Got data")
    xs = data.transpose()[0]   # or xs = data.T[0] or xs = data[:,0]
    ys = data.transpose()[1]
    print("after the transpose")

    plt.scatter(xs, ys, s=4, color='purple', alpha=1)
    plt.show()

except Exception as e:
    print("This is the exception branch:", e)

finally:
    print("All Done")
    if db:
        db.close()

The program that produces the plot of all the anomalies uses different tables, the anom3 and anom4 tables. To produce them, all I needed was to add an OR clause. I’ve coded the “Country Abbreviation” for Russian Europe as “R!” in the anom3 table, so simply selecting for “C.abrev=’RS’ OR C.abrev=’R!'” gets both the European and Asian anomaly data for each station, which is then averaged within each year. So why didn’t I do that before? Because I wanted the anomalies shown to match the difference chart they were used to make. Only once I could make a combined difference chart does this combined anomaly plot match it.

Do note that this “method” generalizes. I could use “WHERE abrev=’RS’ OR” and any other countries in combination to make all sorts of ‘apples to oranges’ comparisons. So “how would the anomalies look if we include Switzerland in Europe, vs leave it out?” can be shown. Does someone at NOAA do something like this to decide which thermometers to drop for the series used? Which ones to include in the “baseline” but not in the present? It can be done…
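That generalization can be sketched quickly. This is a minimal, hypothetical example using Python’s stdlib sqlite3 as a stand-in for the MariaDB server (the table and column names follow the anom3 schema used in this post, but the sample rows and the `yearly_means` helper are invented purely for illustration):

```python
# Sketch of the generalizable "combine countries" selection, using the
# stdlib sqlite3 module as a stand-in for the MariaDB server in the post.
# Table and column names (anom3, year, deg_C, abrev) follow the post;
# the tiny sample data here is made up purely for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE anom3 (year INTEGER, deg_C REAL, abrev TEXT)")
db.executemany("INSERT INTO anom3 VALUES (?,?,?)", [
    (1950, -0.5, 'RS'), (1950, 0.1, 'R!'),   # Asian + European Russia
    (1951,  0.3, 'RS'), (1951, -0.1, 'CH'),  # CH rows are excluded below
])

def yearly_means(db, abrevs):
    """Average anomalies per year over any combination of country codes."""
    marks = ",".join("?" * len(abrevs))
    sql = ("SELECT year, AVG(deg_C) FROM anom3 "
           f"WHERE abrev IN ({marks}) GROUP BY year")
    return dict(db.execute(sql, abrevs).fetchall())

print(yearly_means(db, ['RS', 'R!']))   # combined Russia
print(yearly_means(db, ['RS']))         # Asia-only Russia
```

The `IN (…)` list does the same job as chaining OR clauses, so any “what if these stations were in, or out?” comparison is one list away.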

The bit that changed from prior versions is the second SQL query, with its OR clause. I’ve line wrapped some text to make it more readable without scrolling off the right side of the panel.

# -*- coding: utf-8 -*-
import numpy as np
import matplotlib.pyplot as plt
import mysql.connector as MySQLdb

plt.title("GHCN v3 black vs v4 red Anomaly by Years")
plt.ylabel("Country RS Russia Anomaly C")
plt.xlabel("Year")
plt.ylim(-3, 2)

db = None
try:
    db = MySQLdb.connect(user="chiefio", password="LetMeIn!",
                         database='temps')
    cursor = db.cursor()

    # v4 anomalies: one country code covers all of Russia.
    sql = """SELECT year, AVG(deg_C) FROM anom4
             WHERE abrev='RS' GROUP BY year;"""

    print("stuffed SQL statement")
    cursor.execute(sql)
    print("Executed SQL")
    stn = cursor.fetchall()
    data = np.array(stn)
    print("Got data")
    xs = data.transpose()[0]   # or xs = data.T[0] or xs = data[:,0]
    ys = data.transpose()[1]
    print("after the transpose")

    plt.scatter(xs, ys, s=1, color='red', alpha=1)

    # v3.3 anomalies: the OR clause picks up both Asian ('RS') and
    # European ('R!') Russia. This is the bit that changed.
    sql = """SELECT year, AVG(deg_C) FROM anom3 AS A
             INNER JOIN country AS C ON A.cnum=C.cnum
             WHERE C.abrev='RS' OR C.abrev='R!'
             GROUP BY year;"""

    print("stuffed v3.3 SQL statement")
    cursor.execute(sql)
    print("Executed SQL")
    stn = cursor.fetchall()
    data = np.array(stn)
    print("Got data")
    xs = data.transpose()[0]
    ys = data.transpose()[1]
    print("after the transpose")

    plt.scatter(xs, ys, s=1, color='black', alpha=1)
    plt.show()

except Exception as e:
    print("This is the exception branch:", e)

finally:
    print("All Done")
    if db:
        db.close()

The Kazakhstan programs are the same other than using “KZ” and “K!” and the name of the country.


About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present “hot buttons” are the mythology of Climate Change and ancient metrology; but things change...
This entry was posted in AGW Science and Background, NCDC - GHCN Issues. Bookmark the permalink.

18 Responses to What Difference Does Instrument Selection Make To Anomaly Processed Temperature Data?

  1. erl happ says:

    At: https://reality348.wordpress.com/2019/05/14/there-is-no-carbon-pollution-effect-the-proof/ I provide a proof that there is no carbon pollution effect: No enhanced greenhouse effect due to increasing levels of carbon dioxide in the atmosphere. This means no greenhouse effect to account for a supposedly favorable surface temperature. This favorable surface temperature must be due to the flywheel effect of the Oceans conserving and re-distributing energy. Together with the movement of energy and moisture from low to high latitudes by the atmosphere itself.

    If people disagree with my reasoning I would be pleased to hear about it.

  2. E.M.Smith says:

    Earl, I think there’s something wrong with the link…

    Looks like this works:

    https://reality348.wordpress.com/

  3. E.M.Smith says:

    BTW, nicely done analysis ;-)

  4. erl happ says:

    A mystery: Try this one: https://reality348.wordpress.com/2019/05/14/there-is-no-carbon-pollution-effect-the-proof/

    [REPLY: That one works, so I’ve put it in your original comment to fix that one. -E.M.Smith]

  5. Bill in Oz says:

    EM I understand the reasons why you are doing this work.
    And it seems that you have conclusively shown that the type of temperature gauge used makes a difference.

    When I think about this I come at it from a different angle : SURELY ANY CLIMATE SCIENTIST USING THIS DATA NEEDS TO PROVE THAT THE TYPE OF THERMOMETER USED MAKES NO DIFFERENCE.
    That’s the way science works. But here you are spending time & effort disproving what has never been proved.

    Another thought: all this, for me, feels like a computer ‘swamp’ designed by climate scientists to confuse and marginalise other folks coming at this from their experience with a different opinion. It’s the “WE are the EXPERTS” crap.

    OK You can navigate that swamp. Good EM. But 99.9999% of ordinary folk never want to go close to this swamp..

  6. gallopingcamel says:

    I am more than impressed by your ability to compare v3 with v4 given my failure to compare v2 to v3.

    Sadly, in spite of all your erudition your graphs seem completely useless. What am I missing?

  7. E.M.Smith says:

    @Bill in Oz:

    That is exactly why I’m doing it. Because I got tired of being dismissed with a sneer that “We Climate SCIENTISTS use anomalies so {obviously} instrument change does not matter”.

    I decided to do a simple “change the thermometers” test using GIStemp and ran straight into that “swamp” where if you removed too many, or the wrong ones, or “something” it would not work. Then discovered that GHCN is a mutating zombie of thermometers. That was basically when I decided that “This behavior is by design” (to quote a Microsoft web site about a bug in their software…) and that the purpose was not to find truth, but to meet an agenda with obfuscation.

    Then for a while I spent time saying “Someone needs to deal with this.” Eventually realized “I am someone” – having just the skills and understanding needed.

    The rest is volumes of code ;-)

    But, the good news is that I think, FINALLY, I’ve proven it. That “instrument choice matters” even using anomalies. And we KNOW that the “baseline period” has a very different set of thermometers to the rest of the inventory, and that a huge change of instruments happened between about 1980 and 2010 (depending on place / money for new electronics). I think this shows up in where different countries have the “change of life” from fat and flat anomaly plots to a thin rising tail. Each has had a political attitude change (UN waves $200 Billion / year under their noses) at different times and applies budget to new thermometers at a different time.

    I’d guess that you can follow just when each nation got a “grant” or applied domestic money to “The Climate Crisis” and updated their instruments / processing just from looking at the graphs I’ve made. Another “Dig Here!”.

    @Galloping Camel:

    Thanks for your support. I took on v2 vs v3 first, then gave up. When I returned to it, they had moved on to v4, and that change to entirely new Station IDs forced me to let go of the former approach of trying to match Station IDs. Once I did that, and decided to just match on country, I made progress. Now doing v2 vs v3 ought to be much easier. (To be started after I do the graphs for Europe and Africa…)

    What are you missing? I don’t know, but I’d guess it is not seeing the purpose of each graph, so not knowing what to look for.

    I’m a very “visual thinker”. Shape, density, pattern, all talk to me. So some basic points:

    The “difference” graphs are VERY different. The claim is made that by using anomalies, there is no difference and you can use any instruments. Yet there IS difference. LOTS of difference. This shows that using anomalies is NOT sufficient to prevent “instrument change” from changing your results. We know there are lots of changes to instruments, therefore all the results (“Global Warming” in the temperature series) are in strong doubt.

    The particular shape of a difference graph tells you how the change of instruments changed the conclusions. When the difference graph has a jump up, or a tuck down, or turns up at the end, that means that the temperature series has done the same between v3.3 and v4 NOT based on any actual temperature change (as that is a simple historical unchanging fact) but due to the choice of instruments and processing applied between the two series. This, too, proves that “Climate Change” is more about the instruments chosen and processing applied than it is about actual temperatures. When your “factual temperatures in the historical record” can have a difference of 1 C (or even up to 3 C for some) due only to instrument changes and processing, then it is impossible to claim you found 1/2 C of CO2-caused “warming” in that noise.

    Moving on to the “anomaly” plots. These are harder to read, but still useful. THE biggest thing that stands out to me is how they often have a tapering “Duck Tail” in the present. Narrowing to a rising point on an arc. Often starting about 2000.

    So look at the Russia graph. It has about a 3 C range of hot years vs cold years up until about 1880 where it narrows to about 1.5 C. That is largely due to adding instruments to the record. This shows that the more instruments you have, the lower the volatility range of the answers. Then in the “baseline period” there are lower high anomalies and lower low anomalies with a couple of dots ‘way low’. That shows that baseline data is biased to cooler. Now is that from it actually being colder then, or from the large number of instruments added ONLY during the baseline period? And does it matter? Either one will bias the results.

    Then notice how from about 1985 forward there are almost NO excursions below the zero line. Did Russia REALLY have NO “cold years” in the last 34 years? NO years “below average”? Um, no. That says the data are “tailored”. Then notice how the tail “flips up” and the range narrows to a point. Compare that to all the prior data. It’s just wrong. Russia has not had its volatility year to year evaporate and its temperatures rise relentlessly by 2.5 C anomaly during the “Pause” period.

    This all says that there’s something seriously wrong with the data. I can’t disambiguate the various potential causes. Putting electronics in, and closer to buildings. More at airports with tarmac, concrete and tons of jet fuel burned. Changed processing. Changed number and location of instruments. Etc. But if the data clearly do not match history, then the data “have issues”.

    Finally, when you see that same “Duck Tail” on many sites, but not on other near them, or with “onset” a few years offset from a near neighbor, it just shouts that it isn’t because 2 islands 20 miles apart in the same sea had dramatically different temperatures; but that more likely they had an instrument change at a different date.

    What ought to exist is a rectangle or trapezoid with a gentle 1/2 C “tilt” of temperature rise IF the data were correctly reflecting the Global Warming Thesis. It does not. Between versions it ought to be nearly impossible to see much in the “Difference” graphs, but we see big differences and those can vary widely between close neighbor countries. The difference graphs ought to be a nearly flat straight line, perhaps with a couple of tiny “pips” where an error was found (you can see that in the very early years of some graphs where there was no choice as to instruments); instead they are a mess of broad scatters, step functions, arcs, dips and flip up tails. That alone says that ONLY instrument changes and processing changes can change the results more than the 1/2 C of proposed “Global Warming” (as that is ALL that has changed between v3.3 and v4 – the reality of the actual temperatures did NOT change).

    So that’s what I see when I look at those graphs. Does that help you see it too?

  8. erl happ says:

    Unfortunately, all the skill and dedication comes to nought when you are dealing with religious beliefs. The sad and dismaying thing is that the mainstream media is happy to back up the high priests of ‘climate change’. Then you have venal, or perhaps just plain stupid, politicians in the Democratic party like Al Gore who use the thing for their own ends. All politicians are beholden to the media. So very few are prepared to lead from the front.

    As for academia and ‘science’, you can’t expect much independent thinking from those who are rewarded all the way up for toeing the party line.

    The ‘environment’ is the new religion. A version of ‘heaven on Earth’. Unfortunately, the doctors prescription is likely to give the patient, ‘humanity’, a nasty tummy ache.

  9. erl happ says:

    https://www.vitisphere.com/news-89538-Cahors-loses-60-to-70-of-its-crop-to-frost.html

    And is very cold here (south west of Australia) too. 2019 vintage saw many producers with low leaf area to fruit weight vines (set up for mechanical harvesting) pick nothing due to lack of maturity in the fruit. This 2019 growing season was much cooler than the previous five years which was in turn cooler than the previous five years.

  10. Bill in Oz says:

    Thanks Em & erl for the comments..

    I appreciate what you are doing,, I’m glad you can navigate that bloody swamp.
    But I think there is also the view that this swamp is just a swamp, and not a source of useful data about ‘past climate’

    but then that is the point of this whole exercise !

    Thanks
    Bill

  11. Simon Derricutt says:

    EM – this has been both an amazing amount of work and also an eye-opener as regards just how much the original data has been buggered with (technical term…). If my logic is correct, the V3 and V4 data when plotted on the same plot (normally the second graph of the set) ought to give a collection of black-over-red dots, with no significant trend but maybe a few rounding errors in the 1/100 degree range (since they use a false precision). The differences of the anomalies should all be zero, so the first graph of the pair should be a line of dots on the zero Y axis.

    Not many even came close to that (flat line on first plot, superposition on the second) and so the historical recorded temperatures have been totally messed around with. I thought these were meant to be the “uncorrected data” temperature sets?

    Though it is also true that the choice of what records you include and exclude will change the trend, maybe the buggery of the original data might have something to do with it as well.

    It’s thus looking like that claimed 0.5°C rise may be a result of the statistical methods and data selection, and not a real on-the-ground effect. In fact, it’s even looking like we can’t be that certain of the veracity of the historical data. Predictions based on bad data are also most likely to be wrong, too.

    Since even the relatively-simple checks on what grew where at what time can be confounded by varietal changes of the actual crop, and if farmers save their own seeds then they’ll select for sub-varieties that are better-suited to their local climate in each successive year, then we can only get an approximate idea of global temperatures (whatever that’s worth) by having a rough idea of northern and southern limits of a particular crop. Better than nothing, and the movements of those borders over time may be useful.

    It seems the best we can estimate from the data is “the climate hasn’t changed that much in our lifetimes”. Alternatively, the sky isn’t going to fall….

  12. E.M.Smith says:

    @Simon:

    Basically, yes.

    There ought to be some “jitter” in the “difference” graph, but not a lot IFF the “Climate Scientist”s assertion that “The Anomaly Fixes All” were correct. Let’s walk through an example.

    Say you looked at NYC Central Park. There’s ONE thermometer there in the past. The difference between v3.3 and v4 ought to be nearly zero or exactly zero for the actual temperatures. It is ONE historical record. I could see maybe a tiny little “jitter” for different rounding on different computers, or maybe someone found (for example) a July 4th temperature that was reported as 6 F higher than any other thermometer around it at that time and did some research that showed they were launching fireworks next to it, so removed that data item in v4. It isn’t “adjusted” so much as it failed a QA test. But removing one data point in one month doesn’t change anything else… so a tiny difference in that monthly dot.

    Now we add in a thermometer at the Port. It is regularly running 2 F cooler than Central Park, so in absolute terms it reads -2 F compared to Central Park; but in any year where Central Park is a -3 anomaly (x-averageX), the Port ought also to be a -3 anomaly (y-averageY, where y=x-2 and averageY=averageX-2, so the constant offset cancels) IF their thesis about anomalies were true. Even having the Port thermometer “go away” and adding one at JFK (say, running +6 compared to Central Park) ought to still give a -3 anomaly in that year, so the anomaly ought to always be -3 in that year whichever thermometers are in or out.

    So the DIFFERENCE in the anomalies between v3.3 and v4 ought to be a solid line of dots at ZERO all the way across, with only minor deviations of fractional degrees from small QA data changes. IF their thesis were correct.

    Now my “Volatility Thesis” says that different places have a different volatility. In particular, “High cold places” can get much colder in a downturn of temps, and concrete jungles and airports don’t get very cold at all. Include High Cold Places in the early years, take them out in the present, you have lots of “cold excursions” in the early years and very few in the present. So let’s say we add an Upstate New York cold place and it goes -7 compared to Central Park in cold months/ years but only -3 compared to Central Park in warm years. We put it in up until, oh, 1980 in v4 and it isn’t in v3.3 at all. We now get some “cold spikes” in the early record EVEN WITH ANOMALIES where it will have an excess of -4 in cold years compared to the expected -3 from warm years. But since it isn’t in the present, everything from 1980 onward doesn’t have those cold spikes.

    Now the “Difference” graph will be below zero in the early years, and above zero in the present (or perhaps at zero depending on the other details). Also, when you look at the actual plotted anomalies, the v3.3 dots will be above the v4 dots in the early years, as it doesn’t have that High Cold Station in the data. The “cold spikes” will show up in those early years, but then as we get to the years after 1980, there will be no excess -4 anomaly cold spikes…
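    The arithmetic in that walk-through can be checked with a toy example. Everything here is invented (the station names and numbers are hypothetical stand-ins for Central Park, the Port, JFK, and an Upstate “high cold” site): constant-offset stations leave the combined anomaly unchanged, while a station whose cold years are exaggerated does change it.

```python
# Toy check of the anomaly argument above. All numbers are invented.

def anomalies(series):
    """Per-station anomaly: each reading minus that station's own mean."""
    mean = sum(series) / len(series)
    return [t - mean for t in series]

def combined(*stations):
    """Average the per-station anomalies year by year."""
    per_station = [anomalies(s) for s in stations]
    return [sum(vals) / len(vals) for vals in zip(*per_station)]

park    = [10.0, 12.0, 8.0, 14.0]     # four "years" of readings
port    = [t - 2.0 for t in park]     # constant -2 offset
airport = [t + 6.0 for t in park]     # constant +6 offset
# High cold place: tracks the park in warm years, drops further in cold ones
cold    = [6.0, 9.0, 1.0, 11.0]

print(combined(park, port))      # identical to park's own anomalies
print(combined(park, airport))   # still identical: constant offsets cancel
print(combined(park, cold))      # cold years pulled further down
```

    Swapping the port for the airport changes nothing; adding or dropping the “high cold” station changes the cold-year anomalies, which is exactly the volatility effect described above.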

    That is just what we see in the anomaly graphs. Wide ranges and lots of cold spikes in the early years up to a point in time (often between about 1985-2000) when cold years essentially disappear and we get the “Duck Tail” of a flip up and only ever warmer than the zero anomaly line. Because concrete does not cool off as much, as far, or as fast as snow in Upstate NY.

    A particular example of this would be California, where in v3 in the “present” we had 4 stations. ALL in the low volatility, nearly ideal temperatures coastal band, NONE in the snow covered mountains. SFO airport, then three down near LA (where folks don’t even need central heating…). Up at Truckee you can go from a 100 F summer day down to 10 F (or lower) snow. It simply is not possible to be that volatile in coastal LA. In summer, onshore winds keep the coast cool. Inland may heat up, but they don’t have any inland… In winter, the ocean prevents freezing. (Though go to the Grapevine in the hills and you get snow…) So all those added stations in the early years and “baseline interval” give wide ranging volatility and anomaly plots while the present is dead flat coastal Los Angeles.

    Then what do we see in the “shape” of all these anomaly plots? Lots of volatility (spread of dots up / down) in the years up until a “knee” in the last 20+ years where volatility dies and it narrows to a point. That, to me, shouts “Location, location, location”! Concrete and coastal airports, not high cold places with snow and Canadian Express intrusions.

    In short The Reference Station Method (Hansen…) FAILS. That paper needs to be challenged in just this way.

    The Difference graphs show consistent “cooling of the past”, even for places like Pacific Islands with only one thermometer in the past. Now that just isn’t possible unless either you are diddling the data, or your upstream is diddling the data. Adjusting the “unadjusted”. (They, NOAA, do put in a caveat that “unadjusted” does not mean the national suppliers are not adjusting it, nor that QA and instrument changes might not be in it). For other places, it can be done by just adding warmer thermometers in the present and some “cold volatile” ones in the past, and then “homogenizing”… I’ll be testing for specifics as to data vs stations of v1, v2, v3, and v4 in a future step of this process, particularly looking at islands with few thermometers. In a prior “look” with v2 vs v3 I found “instrument change” on one island where “warming” happened and no instrument change at a nearby island with no warming….

    Maybe I need to make a synthetic “country” and plot what things ought to look like in comparison. Make a 4 thermometer country. Have random temperatures in a range in each, overlay a few years of cycle, and then plot with warming trend and without. If there were no cyclicality or trend, it ought to be a rectangle of dots for the anomaly plots (and a flat line “difference”). IF there were trend, it would become a trapezoid with the ends straight up and down and both the top and bottom line sloping roughly at trend (modulo impacts of random distribution aka weather). Add a cyclical component and it bows one way or the other. BUT, none of those look at all like what we have seen in the plots and graphs so far. They will look like that rectangle “up to a point” (usually about 1985-2000) then it turns into an upward arcing duck tail narrowing the volatility range to a point. Just very unphysical. We DO still have weather and both warm and cold years. Siberia has NOT lost its volatility.
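    That proposed synthetic “country” can be roughed out in a few lines. Everything here is invented test data (four made-up stations, arbitrary noise and trend figures): with no trend the yearly anomaly cloud stays a flat band (the “rectangle”), and with a trend the band tilts (the “trapezoid”), with no duck tail in either case.

```python
# Rough sketch of a synthetic 4-thermometer "country". All data invented.
import random

random.seed(42)

def synthetic_anomalies(years, n_stations=4, trend_per_year=0.0):
    """Per-year lists of station anomalies for an invented country."""
    out = {}
    for _ in range(n_stations):
        base = random.uniform(-5.0, 25.0)   # each station's own climate
        temps = [base + trend_per_year * y + random.gauss(0.0, 1.0)
                 for y in range(years)]
        mean = sum(temps) / years
        for y, t in enumerate(temps):
            out.setdefault(y, []).append(t - mean)
    return out

flat = synthetic_anomalies(100, trend_per_year=0.0)
warm = synthetic_anomalies(100, trend_per_year=0.01)   # ~1 C per century

def era_mean(d, yrs):
    """Mean anomaly over a range of years, all stations pooled."""
    vals = [a for y in yrs for a in d[y]]
    return sum(vals) / len(vals)

# Early era vs late era: flat stays near zero, trended tilts upward.
print(era_mean(flat, range(20)), era_mean(flat, range(80, 100)))
print(era_mean(warm, range(20)), era_mean(warm, range(80, 100)))
```

    A scatter of these dots (year on x, anomaly on y) gives the rectangle/trapezoid shapes described above, for comparison against the real GHCN plots.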

  13. E.M.Smith says:

    Oh, and remember I create these anomalies not by using a baseline, but by comparing a given thermometer only to the average of that thermometer in that month. The only way the average anomaly for a given thermometer can change is via changed data. For a country, it is changed data or changed instruments.
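    That method can be sketched in a few lines of Python. The station ID and toy readings are hypothetical; the point is that each reading is compared only to that same thermometer’s own mean for that calendar month, with no shared baseline period across stations.

    ```python
    # Per-thermometer, per-month anomalies: each reading minus that same
    # thermometer's own average for that month. Toy data for illustration.
    from collections import defaultdict
    from statistics import mean

    # (station_id, year, month, temperature C) -- hypothetical values
    readings = [
        ("ST001", 1995, 1, -4.0),
        ("ST001", 1996, 1, -6.0),
        ("ST001", 1997, 1, -5.0),
        ("ST001", 1995, 7, 18.0),
        ("ST001", 1996, 7, 20.0),
    ]

    # Average each thermometer's own record, month by month
    by_month = defaultdict(list)
    for sid, year, month, temp in readings:
        by_month[(sid, month)].append(temp)
    monthly_mean = {key: mean(vals) for key, vals in by_month.items()}

    # Anomaly = reading minus that thermometer's own mean for that month
    anomalies = [
        (sid, year, month, temp - monthly_mean[(sid, month)])
        for sid, year, month, temp in readings
    ]

    for row in anomalies:
        print(row)
    ```

    With this scheme every station’s anomalies average to zero over its own record, so a shift in a country’s combined anomaly can only come from changed data or a changed mix of instruments, as the comment says.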

    Newer data in v4 can change the average a little bit for a given instrument that is still in use today. But as we saw from The Great Dying Of Thermometers, most of the instruments do not escape the baseline interval alive….

    There really ought not to be any significant change to the data anomaly averages from 1990 and before, just because “dead” thermometers dominate the total data then.

    So, IMHO, it all comes down to one of two choices:

    1) The historical data are being changed, version to version.
    or
    2) Instrument change has significant, perhaps even dramatic, effects, even through anomalies.

    I can’t see many other choices (though if anyone else does, speak up!). Either of those two choices, if proved true, is lethal to the Global Warming narrative and the use of GHCN surface data records.

  14. Power Grab says:

    Two questions come to mind:

    1. When did the instruments change from analog to digital, and when did we start obsessing about tiny fractions of a degree, just because we could start recording tiny fractions of a degree?

    2. When did the various localities start being affected by significant sources of EMF energy? I’m specifically wondering about when big TV and radio antennas began broadcasting…when the big NWS radars (such as the WSR-88D) were built…when cell service transmitters were added…when the cell phone service went to higher levels (e.g., from 3G to 4G, from 4G to 5G). If I understand correctly, the EMF can have a drying effect since it influences water molecules, which are dipolar. I’m thinking that water tends to equalize energy across different locations if it is present; its absence could tend to make a dry locale get “stuck” in dry mode, or make a wet locale get stuck in wet mode, with the ensuing effects on temperature that different humidity can cause.

  15. E.M.Smith says:

    @Power Grab:

    It varies by country and when they had budget.
    https://ams.confex.com/ams/pdfpapers/91613.pdf

    In the mid 1980s the National Weather Service (NWS) began a mass replacement of their traditional liquid-in-glass (LIG) thermometers and Cotton Region Shelters at thousands of Cooperative Observer sites across the country. The wooden shelters had become increasingly expensive and difficult to maintain. Furthermore, NWS was also having trouble obtaining high quality self-registering thermometers at an acceptable price and an aging corps of volunteer observers found these thermometers difficult to read.

    So mid 1980s for the USA (and likely Europe) but later for places like Africa and Pacific Islands (non-US). That, IMHO, is why we see the “Jump / Knee” at different years in different countries.

    On the “someday” list is to match each graph to the “roll-out date” for each country…

    As to when RF in strength was around, essentially “day one”. Most of these are at airports. Airports have RADARs. kW radars… Weather radars…. ON the pavement next to them sometimes. In the air approaching them (especially in bad weather – even some “Biz Jets” i.e. small ones have weather radar in the nose…)

    Also, by the ’80s we had the old analog cell towers giving nationwide coverage…

    It is an interesting question, and one I’d not considered. Does the “resistance wire” in these things pick up any particular RF band? IIRC they are on the order of a cm long, so high-end cell / wifi / radars in the GHz have a shot at it. It will all come down to shielding… which can’t be 100% or air can’t circulate… Hmmm…..

    A quick search turned up several other thermometers (mouth, ear, food) that have errors if exposed to RF, but after a page or two I did not see MMTS showing up, so I need better search terms (or the manual…)

    My “quick take” on it is that RF can and does cause bad readings in digital thermometers and until the MMTS are shown to be insensitive to it, they are suspect.

    I think you may well have discovered another “Ooops!” mode! ;-) “DIG HERE!”

  16. Power Grab says:

    Thanks for that! I printed the paper so I can study it better. :-)

    Here is an interesting site that discusses such things:

    http://broadcast.homestead.com/

    That’s one reason I’m curious about how changes in the RF in the environment affect weather.

    Another reason I’m curious about this angle is that the day they finally turned off the analog TV transmitters, within 10 minutes we had a thunderstorm with golf-ball-sized hail. I keep an eye on the weather when I’m getting ready to start a long trip. I don’t remember being concerned about thunderstorms before that trip. We had just started the 400-mile trip when I realized I forgot something essential. I went back home, and after I entered my home, the hail started. My kid was still in the car. I waited to go back out until it stopped. My kid was untraumatized, for which I was relieved!

    We hardly ever have hail here. My personal opinion is that the strong RF bubble that surrounds my town fends off storms when the population is highest and the RF energy is highest. Many times I have watched a storm system approach my town, split when it gets here, and re-form when it gets past. Others have noticed it, too.

    But wait! There’s more!

    The car I recently bought must have led a sheltered life. It had no hail damage (the insurance people specifically asked about that). But Monday while I was sitting outside a fast food place in a long line…hail started falling. The first piece hit my sunroof. WTH?? One hit. I looked to see if I was parked under a tree with pine cones. More hits came. It appeared to be marble-sized and quarter-sized. Now there are a bunch of little hail dents. :-(

    This was at a time when a BIG BUNCH of visitors had packed up and gone home. Of course, it could be a coincidence. But it makes me go “Hmmmmm…”

    I want to find someone to talk to who can answer some questions I have about how much (or whether) the cell phone companies increase the power they use when they know crowds will be here, and then back it off when the crowds are absent for weeks/months at a time. If I were a cell phone company, I expect I would want to lower my expenses by using less power when there aren’t extraordinary crowds here. Some years ago, I read a news story about how the cell phone companies had to increase the power of their systems when big crowds were here.

    Oh, one more thing. The hail storm this week (May 13) was exactly 44 years after a similar hail storm that hit when I was out of town for my job. When I returned after dark, the streets were covered with “stuff”. It turned out to be chopped up leaves that hail had stripped from the trees.

    Well, one month later (June 13) was the biggest tornado I have dealt with. Since this year is exactly 2 full solar cycles since that event, I’m going to be watching the weather very closely as June 13 approaches.

    Whenever I watch reports about notable severe weather events, more often than not they include a statement like, “This is the biggest flooding/storming/drought event since 11/22/33 years ago.”

    Oh…one more thing…again… When I read “Isaac’s Storm” about the Galveston hurricane that pretty much wiped the town off the map, there was some discussion of Isaac’s duties in logging the weather at his location. He was an official observer; might even have been on the payroll of the government. I keep wondering if people who analyze weather patterns nowadays ever go back as far as 1899-1900, or close to when that storm happened. The manual tools and methods they used would be interesting to compare with those of later time periods.

  17. E.M.Smith says:

    FWIW, I’ve finally finished making all the Africa graphs. I’ve got about 2/3 of them uploaded too.

    So sometime tonight I ought to get them all uploaded and into the draft article. Then I’ll look at each pair of graphs and see what they say to me. Likely to hit post in about 24 hours then ;-)

  18. Pingback: GHCN v3.3 vs v4 – Top Level Entry Point | Musings from the Chiefio
