In the prior posting I was looking at all months of the data, by continent (region), and that’s 84 graphs (12 months x 7 regions). That’s a bit much to post. I’d posted a sample and intended to post a few more months, but then had the idea to see if a straight “season” graph of the 3 months in each season worked well. It does.
There is still a bit more information in the individual month graphs (as you can watch the change of volatility spread month by month), but these also show that it is a seasonal effect, variable by continent, and NOT a general effect from a well mixed gas year round.
Using the seasons also squashes the overall change of volatility from the worst of winter to the flattest of summer, so I can use the same scale for all the graphs. Comparison of one season across the continents can be very interesting in how much they vary in range.
As it is 28 graphs for “the whole world”, I’m going to put them all in here. Yeah, a lot of graphs, but it is less than 84 ;-) I’m going to do the continents backwards as I find the Antarctic (7) graph interesting and the Africa (1) not so much as it straddles the equator so has not much seasonality and what is there goes both ways. It really needs to be done as N 1/2 and then S 1/2. It would also be interesting to do graphs like this for latitude bands, but I doubt it would show anything novel, just perhaps a bit sharper.
Assuming I didn’t screw-up, the seasons ought to be color coded as well as marked on each graph. Spring is green. Summer red. Fall is orange. Winter is blue. (Yes, different months in the Southern Hemisphere…)
N.H. (Asia, North America, Europe):
Winter: 12, 1, 2; Spring: 3, 4, 5; Summer: 6, 7, 8; Fall: 9, 10, 11
S.H. (Africa, South America, Australia – Pacific Islands, Antarctica):
Summer: 12, 1, 2; Fall: 3, 4, 5; Winter: 6, 7, 8; Spring: 9, 10, 11
As you look at these graphs, you can scroll them up to the top of your monitor to “fit a line” across the tops of the data to see the “top trend”, if any.
Antarctica Region 7
Winter is crazy volatile, but NOT warmer than in the past. There is a loss of low going excursions after 1960, but I would expect that is due to station closures. The Antarctic data has big changes over time with short and non-overlapping stations. Scanning across the tops of the dots, your eye must move down at the end. Cooling? Or just lost volatility?
Spring is also volatile, but not warmer. Tops stop at the 2 C line. Again after 1960 low excursions get pruned, and recently (2000+) volatility goes very low. That could create a statistical “warming trend” but the place is not getting any warmer.
Summer is pretty much dead flat. All that ice just prevents getting much past 0 C.
Then this very interesting one. Fall is getting COLDER. Overall we see the same compression of volatility, but in this case the tops are going lower as the bottoms are coming up. Even the post 2000 temps are dropping. Since there is a known “Polar See-Saw”, might the warm uptick about 2000 in the N. Hemisphere just be reflecting a cold Antarctica?
The Southern Hemisphere is mostly water, so the South America, Africa, and Australia land temperatures can’t answer that question. Only the Southern Sea temperature can, and Antarctica can drop hints.
Europe Region 6
Just OMG how volatile Winter is in Europe. Looks like it was a “bit of cold” in the ’50s to ’70s, then warmed up since (or maybe it’s just those electronic thermometers at the airports being closer to buildings…) Overall the top excursions never seem to exceed 2 C, much like in the past (1800 to 1850). As Europe has THE longest records and some of the best, that 1800 to 1850 data ought to be worth trusting.
Mostly when I look at this, I see a loss of very low excursions. Might that be snow removal and Jet-A by the ton burned at airports? Kilo-tons of heating oil in the cities? Just saying… This isn’t 1800 horses and buggies and no central heating any more. We’ve strongly industrialized.
Springtime is flat across the tops, with a bit of low clipping along the bottom. Then the post 1990 PC To Be Warmista loss of cold excursions. Just when electronic thermometers were put on a short wire to buildings.
Summers are very interesting. Looks to me like a clear cyclical period with peaks about 1860, a minor one in 1910, then about 1940 and 2000. Sort of 60 years, with an occasional 30-year half cycle. The peaks do not get higher until post 2000, when “cooking the books” was decided to be “OK Climate Science”.
Generally Fall looks like nothing at all happening until after 2000. Warm in the 1930-40 period, and warm in about 2000, but no warmer than 1810. Cold in about 1890 though. Even the low excursions after about 1850 are not much different. Not much use of heaters in fall in cities? The data after about 1990 have a bit of a “manicured” look to me. (We see that in Asia down below too). Just not enough volatility in it to look real.
Australia – Pacific Islands Region 5
The Poster Child for not much happening, IMHO. Winters show the loss of low going volatility, but the other seasons are just flat to slightly cooling at the tops. Spring and Summer have a slight loss of low excursions that I would attribute to most of these stations having been “grass shacks” until post W.W.II, when the Jet Age turned them into Tourist Destinations with acres of asphalt growing around the former seaplane-only locations. I did a posting on Hawaii that showed that change. Grass shack photo and all. Kauai IIRC. I need to search for that…
Given that this shows the Pacific is just not with the warming program, the necessary conclusion is that “the Globe” is not warming. Cities in Asia? Sure…
North America Region 4
In many ways THE best thermometers, and certainly the most of them, for almost the longest period of time (only Europe is longer). This ought to be the best there is. What do I see in it?
Winter has lost volatility to the low side, but not warmed up. Scanning across the tops it looks about flat. IMHO, that is entirely a combination of losing the high cold stations from the record (cold is much more volatile downward as are high altitude stations with thinner air) along with Urban Heat Island effects as rural stations urbanized. It does look like post about 1990, the lows are clipping with the advent of the change of thermometers and QA method that tosses low excursions more than highs.
(It uses the same absolute number, but lows are more volatile down than highs are up. Note the need to use an asymmetrical range on these graphs).
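For anyone reproducing these plots, that asymmetric range is just a fixed y-limit; a minimal matplotlib sketch, with illustrative limits rather than the exact ones used on the graphs above:

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# give the downside more room than the upside, since cold excursions run much further
ax.set_ylim(-15, 5)          # example limits only; pick them to fit the season plotted
ax.axhline(0, linewidth=0.5)
plt.show()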
Spring is getting colder. Other than one “flyer”, it is a clear “lower highs”. There is an overall loss of volatility, and with the loss of lows I suspect a statistical up trend line could be fit, but not because it is warmer at the tops.
That “flyer” likely has to do with some bad data in the set. I’d not filtered for crazy highs in making these graphs, but looking at them, I did this report. I look for temperatures over 60 C / 140 F. I doubt they are real… I’ve bolded the North America ones. Odd that they are ALL in the recent data… The first digit of the Station ID is the region or continent number.
MariaDB [temps]> SELECT year, stnID, deg_c FROM temps3 WHERE deg_C > 60;
+------+-------------+--------+
| year | stnID       | deg_c  |
+------+-------------+--------+
| 1999 | 10160535000 |  64.00 |
| 2008 | 11264405000 |  60.40 |
| 1997 | 11264459000 |  71.00 |
| 2001 | 11365536000 |  82.00 |
| 2000 | 12462008000 |  76.00 |
| 2013 | 12567009000 |  67.30 |
| 1999 | 12761226000 |  66.40 |
| 2001 | 12761296000 |  73.00 |
| 1999 | 12761297000 |  66.70 |
| 2003 | 12861499000 |  86.90 |
| 2004 | 13167297000 |  87.20 |
| 2003 | 13361052000 |  90.00 |
| 2000 | 13361099000 |  61.00 |
| 2000 | 22232411000 |  74.70 |
| 2010 | 30285201000 |  86.00 |
| 2011 | 30285201000 |  83.00 |
| 2011 | 30285242000 |  90.00 |
| 2000 | 30485629000 |  99.90 |
| 1981 | 40778486000 |  86.20 |
| 1996 | 42572259002 | 154.40 |
| 1996 | 42572352002 | 138.10 |
| 1996 | 42572417010 | 102.30 |
| 1996 | 42572471001 | 121.10 |
| 1996 | 42574750001 | 144.20 |
| 1999 | 42591178002 |  87.80 |
| 2000 | 50291652000 |  70.00 |
| 1992 | 50396237000 |  64.90 |
| 2000 | 64740030000 |  70.00 |
| 2003 | 64917285000 |  89.90 |
+------+-------------+--------+
29 rows in set (36.23 sec)
But the “flyer” is after 1996, so not from these numbers. Perhaps I need to look at > 40C…
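For anyone poking at their own copy of the data, here’s a sketch of running that same sort of report from Python and bucketing the hits by the leading region digit of the Station ID (mysql.connector, the login details, and the 40 C threshold are assumptions; temps3 and its columns are as in the query above):

import mysql.connector   # assumes the MySQL/MariaDB Python connector is installed

THRESHOLD = 40.0   # per the note above: try 40 C instead of 60 C

conn = mysql.connector.connect(user="user", password="password",
                               host="localhost", database="temps")
cur = conn.cursor()
cur.execute("SELECT year, stnID, deg_c FROM temps3 WHERE deg_C > %s", (THRESHOLD,))

by_region = {}
for year, stn_id, deg_c in cur:
    region = str(stn_id)[0]          # first digit of the Station ID is the region number
    by_region.setdefault(region, []).append((year, stn_id, float(deg_c)))

for region in sorted(by_region):
    rows = sorted(by_region[region])
    print("Region %s: %d readings over %.0f C" % (region, len(rows), THRESHOLD))
    for year, stn_id, deg_c in rows:
        print("   %s  %s  %.2f" % (year, stn_id, deg_c))

cur.close()
conn.close()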
Summers, to me, look just about dead flat. There’s some big volatility at the start of the record (with just one or a few thermometers all in the same place). After about 1850, there is what looks like a cyclical warming / cooling with peaks in about 1870, 1930, and 1990 (or 60 years apart). The overall trend from 1850 looks to my eye like flat peaks and lower low excursions up to about 2000, when it looks like maybe the low going stations are trimmed (or maybe it’s those electronic thermometers being rolled out … or their “new” QA process that tosses “outliers”…)
In any case, it most certainly does NOT look like hotter summers.
Fall looks like it was a bit cold from about 1850 to 1890, then warmed a touch and stayed flat after that. Nothing in this data looks “warming” to me. Strange how CO2 takes the Fall off in North America where we have the best records… In fact, it looks like it doesn’t really show up in any season here. Must be Trump’s fault… or The RUSSIANS!!!! /sarc;
South America Region 3
Fall and Winter are nearly dead flat. Spring & Summer have some rise to them, but mostly after the start of the Jet Age with big airport expansions and the advent of the electronic thermometers in the ’90s. Mostly I see volatility suppression and a bit of UHI / Asphalt in the sun. Winters especially are just not getting warmer.
Asia Region 2
Spring, Summer, and Fall more or less flat until the ’90s when Global Warming became politically popular and the electronic instruments were rolled out. Spring has something of a trend to it. For much of this time China was industrializing. Might increased urbanization and industrialization have warmed spring a bit more than the max of summer? Winter is very volatile but showing the same slight trend of Spring. Again I suspect UHI in industrializing China (and east Asia in general).
Africa Region 1
All I see in these is the same ‘just about the ’90s’ change of thermometers to electrical devices when “Global Warming” became trendy. Nothing prior to that. Strange how all the “warming” shows up just when it was politically advocated… I used Southern Hemisphere months for the seasons.

In Conclusion
So if you want to bet the future of Western Industrial Life on the quality of thermometers in Africa and Asia (and the honesty of their governments who have been promised $200 BILLION / year of “climate reparations” payments from The West), then go for it.
I’d rather trust the USA and Pacific records showing “not much happening” but some instrument changes and growing asphalt jungles at airports.
That’s what I see in the data. What do you see?
What I see, since you asked, is an absence of numbers. How about mean (presumably zero), trend, and standard deviation? You’ve done an outstanding job, but I’d prefer numbers to commentary.
Mean of what? For the anomaly, it ought to be zero +/- the precision of the computer math.
“Trend” is meaningless in a narrowing volatility time series. Much more useful are the top bounds and the bottom bounds. Unfortunately, I’m not good enough (yet) with Python to get those. Heck, I’ve only been able to graph at all using Python for a week or two.
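For whoever wants to have a go, the kind of thing I have in mind is to fit one line through the yearly tops of the dots and another through the yearly bottoms; a minimal sketch with made-up numbers just to show the shape of it (numpy.polyfit does the line fit):

import numpy as np

# years[] and anoms[] would come from the same query that feeds the scatter plots;
# the values below are placeholders just so the sketch runs
years = np.array([1900, 1900, 1901, 1901, 1902, 1902, 1903, 1903])
anoms = np.array([ 1.2, -3.0,  0.9, -2.5,  1.1, -2.0,  0.8, -1.5])

yr = np.unique(years)
tops    = np.array([anoms[years == y].max() for y in yr])
bottoms = np.array([anoms[years == y].min() for y in yr])

top_slope, top_intercept = np.polyfit(yr, tops, 1)
bot_slope, bot_intercept = np.polyfit(yr, bottoms, 1)

print("top bound:    %+.4f C / year" % top_slope)
print("bottom bound: %+.4f C / year" % bot_slope)

A flat top slope with a rising bottom slope is exactly the “lost low excursions, but no warmer tops” pattern described above.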
Again, standard deviation of what? Also a modestly meaningless number for a volatility narrowing time series. Std. Dev. at the start, or the end? Of each thermometer (all 7280 of them)? Or the absolutely meaningless average of them all? You can see by inspection of the graphs that the deviation is dramatic at the start, narrow at the end. The Std. Dev. of that will be pointless.
The problem with such “numbers” is that they are statistical games. Far more important is to just look at the data and see what is going on. Then you can have enough clue to apply some statistical manipulation to highlight (or hide) some particular thing.
Blindly throwing a trend line at time series data with cycles in it is a known path to failure.
But, if you want it, I’ve published all the “how to” to load up the data and make whatever you want.
FWIW, here’s an example of the code that made the above graphs (I had intended to put it in the article but it was already a bit long).
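Roughly, the shape of each one is: pull one region’s anomalies for the three months of a season, scatter them by year in the season’s color, pin the y-range, and save. A minimal sketch along those lines (NOT the original script; the anomaly table and column names, anoms3 / region / month / anom_c, are stand-ins, and the connector login details and y-limits are placeholders):

import mysql.connector
import matplotlib.pyplot as plt

REGION = 7                                    # 7 = Antarctica
SEASONS = {"Winter": ([6, 7, 8],   "blue"),   # Southern Hemisphere months
           "Spring": ([9, 10, 11], "green"),
           "Summer": ([12, 1, 2],  "red"),
           "Fall":   ([3, 4, 5],   "orange")}

conn = mysql.connector.connect(user="user", password="password",
                               host="localhost", database="temps")
cur = conn.cursor()

for season, (months, color) in SEASONS.items():
    fmt = ",".join(["%s"] * len(months))
    cur.execute("SELECT year, anom_c FROM anoms3 "
                "WHERE region = %s AND month IN (" + fmt + ")",
                tuple([REGION] + months))
    rows = cur.fetchall()
    years = [r[0] for r in rows]
    anoms = [float(r[1]) for r in rows]
    plt.figure()
    plt.scatter(years, anoms, s=4, color=color)
    plt.ylim(-15, 10)                         # same asymmetric scale on every graph (example limits)
    plt.title("Region %d %s anomalies" % (REGION, season))
    plt.savefig("region%d_%s.png" % (REGION, season))

cur.close()
conn.close()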
If you want other stats or lines on the graphs, just code up the Python and put it in a comment. I’ll run it.
Yeah, the mean is zero within the limits of computer math with repetitive addition / subtraction:
Here’s some more. Not doing the whole 28, but did some whole regions (all season) and the total for all the anomalies at the top:
Antarctic is mildly interesting in that 2/10,000 is bigger than I’d expect. OTOH, with so little data there’s less for the law of large numbers to narrow the deviation in the averages / differences.
I’m not going to compute the rest of them as it must just be similarly close to zero, given these and the total is nearly zero.
So there’s your means…
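For reference, that kind of report is just an AVG() grouped by region; a sketch using the same stand-in anomaly table names as above:

# Sketch: mean anomaly per region (should be ~0 by construction).
import mysql.connector

conn = mysql.connector.connect(user="user", password="password",
                               host="localhost", database="temps")
cur = conn.cursor()
cur.execute("SELECT region, AVG(anom_c), COUNT(*) FROM anoms3 GROUP BY region")
for region, mean_anom, n in cur:
    print("Region %s: mean %+.6f C over %d anomalies" % (region, mean_anom, n))
cur.close()
conn.close()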
Post up the “how to” on adding std dev in either the Python program above or as an SQL query and I’ll run it. Ditto adding trend lines to the graphs at top and bottom (or even in the meaningless middle where it just shows what start and end points you choose…)
May I reproduce these Australia graphs?
At Graeme No.3:
Every bit of code and all the graphs I hereby release as copyleft with attribution. You can reproduce them all you want, but a little attribution would be nice now and then ;-)
The code is free for anyone to use as they see fit, so you can even “roll your own” on any Linux system with MariaDB and Python3.
Well, Serioso, something you don’t hear from me that often:
Thank you for poking me about Standard Deviation.
It looks like it is actually very easy in MariaDB. While MS uses STDEV, MariaDB uses STDDEV, but it is simple. Glad I looked at it and realized I was worried about nothing. Here’s the STDDEV for the regions anomalies:
Region 8 is ships at sea and has no current data, so I’ve not made graphs for it.
The Std Dev by year is interesting:
Don’t know what to make of it though. Lots of 2+ values early on, only two over 2 since 1990.
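For anyone who wants to reproduce that, a sketch of both cuts, by region and by year, using STDDEV() and the same stand-in anomaly table names as above:

import mysql.connector

conn = mysql.connector.connect(user="user", password="password",
                               host="localhost", database="temps")
cur = conn.cursor()

# standard deviation of the anomalies, grouped by region
cur.execute("SELECT region, STDDEV(anom_c) FROM anoms3 GROUP BY region")
for region, sd in cur.fetchall():
    print("Region %s: std dev %.3f C" % (region, sd))

# and the same thing, grouped by year
cur.execute("SELECT year, STDDEV(anom_c) FROM anoms3 GROUP BY year ORDER BY year")
for year, sd in cur.fetchall():
    print("%d  %.3f" % (year, sd))

cur.close()
conn.close()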
EM, I saw this with interest: “some bad data in the set. I’d not filtered for crazy highs in making these graphs, but looking at them, I did this report. I look for temperatures over 60 C / 140 F.”
What happens in the charts when you eliminate the bad data – temperature readings above 60 degrees C ?
I noticed that all of the ‘bad data’ recordings were ‘modern’ ones. So probably from the electronic temperature gauges. I’m curious to know if there are any bad data temps for the Australia, Pacific Ocean region. And what happens to the chart when they are eliminated.
EM I realise you need to have a consistent period for your seasons and that the months you have chosen are accepted officially around the world. However, I have lived in various parts of Australia and feel that the actual seasons are one month over, e.g. summer is Jan, Feb, Mar. Winter tends to be Jul, Aug, Sep. In the southern Alps that tends to be when there is reliable snow. Where I live, summer from January is the rainy season. The average rain in Jan is 237mm, Feb 262mm, Mar 265mm. The average for both Dec and Apr is 171mm. The driest months are in Winter, with both Aug and Sep having 59mm average. (NB my rainfall data goes back to 1893). I spent a little time in Canada, England and Switzerland and again my impression was that the season was one month over. In Canada I saw the Niagara Falls flowing in Dec 1969 and then frozen solid in Mar 1970, and there was still snow on the ground in May when I left.
@cementafriend, But in South Australia, Spring starts to get moving in early September with Summer kicking in late November… Supposedly it’s Autumn now here in SA. But the temperatures are still warm each day. Just cold nights because of the lack of cloud – which means also no rain ! A very dry time with no sub soil moisture ! Bugger !
@Bill in Oz:
Looks like there are two that are clearly insane. There may be more that are “reasonable but wrong”, like a 35 C in winter or 40 C when it was really 35 C somewhere. No way to catch those with a quick report.
The impact will only be on two dots in a single season of two years, and diluted by the number of other stations in that month / year.
505 K total temperatures recorded. So 1 : 252k crazy isn't too bad, I suppose.
So April of 2000 and Aug of 1992 will be a little high. How high?
So there are 163 likely ‘valid’ temperatures in region 5 in the year 2000 in April. One bogus.
That bogus one will have both raised the anomaly calculation a little, and then been reduced by that average to an anomaly. Then it will be averaged in with about (164 x 3) - 1 (the three months in a season), or 491 values, to make the one “dot”.
So there are:
53 April values for that thermometer but only 47 that are not missing data flags:
Then the average of them is about 42 C, so right out the gate we have 70 - 42 = 28 bogus degrees spread over 47 readings, so it’s a bias high of roughly 0.59 C in that monthly average. Not going to change it too much. Then the anomaly for that month will be 70 - 41.5 = 28.5 C.
That will get averaged in with the roughly 163 other readings that are about 1 C of anomaly from above:
or about 28.5 / 163 = 0.175 C of increased average anomaly in that one month… but there are three months in the season, so it will be about 0.05 C higher for that one spot on that one graph.
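The same back-of-the-envelope in a few lines of Python, so anyone can fiddle the numbers (all inputs are the rounded values used above):

# Dilution of one bogus 70 C reading, using the rounded figures from the text above.
bogus      = 70.0    # the bad reading, deg C
true_mean  = 42.0    # ~average of the 47 valid Aprils for that thermometer (41.5 used below)
n_readings = 47      # non-missing April values for that station
n_stations = 163     # other region-5 stations reporting that April
months     = 3       # months averaged into one seasonal "dot"

bias_in_monthly_mean = (bogus - true_mean) / n_readings   # ~0.6 C
anomaly_of_bad_month = bogus - 41.5                       # ~28.5 C, per the text
bias_in_region_month = anomaly_of_bad_month / n_stations  # ~0.175 C
bias_in_season_dot   = bias_in_region_month / months      # ~0.06 C, the "about 0.05" above

print("bias in that station's monthly mean: %.2f C" % bias_in_monthly_mean)
print("bias in the region/month anomaly:    %.3f C" % bias_in_region_month)
print("bias in the seasonal dot:            %.3f C" % bias_in_season_dot)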
The other dot ought to be similar but I’m not going to do all the calculations for it ;-)
Basically it will be impossible to see it in the graph.
Which is why I’m not too excited about a couple of clearly bogus data points.
I’m MUCH more concerned that we KNOW the MMTS thermometers tend to “fail high”, as it has happened a few times now (one WELL documented in Hawaii that was reading well over 40 C for weeks…). The question is what happens in, say, the year before the fail? Is there a year, or maybe several, where a region that had thermometers all installed about the same time has them all creeping higher and nobody notices, as they have not gone clearly crazy yet?
That’s the thing to worry about. Not one data point in a quarterly average.
Per Seasons:
Partly I used these as it aligns with the accepted values.
I thought about moving them “one over” to be closer to solstice / equinox… (i.e. Jan 1 is closer to Dec 23 than is Dec 1). Not really interested in redoing 28 graphs… But I might re-do a couple just to see if it changes anything.
Right now I’m on “Honey Do” watch… so maybe not soon… ;-)
I suppose it might make more sense to “pick a continent” and do 12 full months plus the “whatever” seasons has the most similar months graphs in it…
But first I want to re-run my anomaly creator program after adding “WHERE deg_C < 50” to it to screen out the clearly crazies…
Interesting… The only stations over 50 C and below 60 C are NOT Death Valley and Libya (where there were such temperatures) but two places that can’t have them:
So it looks like I can use a 50 C “cut off” for bogus high temperatures…
Just FYI, here’s the data points that will be “missing data” in the database and graphs “going forward”, even though there isn’t much impact on the averages or the graphs:
Found with this SQL program:
I wonder if the greater range between Highs and Lows in the 1700s vs 2000s is due to the errors in thermometers in the 1700s (calibration and reading and recording) or were there really bigger ranges in those days?
EM, I’ve been googling for the Hottest place in Australia. Apparently it was Oodnadatta in outback desert SA with 50.7 degrees C. But the BOM says it was Marble Bar in WA with 49 !
( There is money to be made from being the Hottest place in Oz from Tourist visits. So there is competition for that bit of glory . )
So I wonder what would happen to the Australia & Pacific region charts if you dropped the bar down to say 54 degrees from 60 C….
No 55 C has ever been acknowledged as a valid temperature measurement by BOM.
So it might show up more of those insane temp readings. Or might not if the gear has been maintained properly.
Just curious !
I decided to search on low temps too. This one seems odd:
What does Wiki Say?
Yeah, the tropical Philippines at -70 C…
On such data quality rests the fate of the global economy…
Now I’m pretty darned sure no human would EVER record either a 70 C or a -70 C for anywhere on the planet outside Antarctica or Siberia. This, IMHO, brings into question ALL the data from ALL automated stations.
If you can’t even avoid clearly insane values, what hope is there for avoiding “mildly daft” data?
My program found a bit over 500 rows, but most all of them had names like Siberia or were in Antarctica, so “plausible”. Things below -50 C, but not ‘missing data’ flags.
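A sketch of the screen I have in mind for the reload: drop the clearly bogus highs at 50 C, and list the low-side oddities for a human look rather than auto-tossing them (mysql.connector and the login details are placeholders, and the missing-data flag value is an assumption):

import mysql.connector

HIGH_CUTOFF = 50.0     # nothing over this survives the reload
LOW_AUDIT   = -50.0    # below this: plausible in Antarctica / Siberia, insane in the tropics
MISSING     = -99.9    # stand-in for whatever missing-data flag value the load used (assumption)

conn = mysql.connector.connect(user="user", password="password",
                               host="localhost", database="temps")
cur = conn.cursor()

# rows the "WHERE deg_C < 50" screen will drop going forward
cur.execute("SELECT year, stnID, deg_c FROM temps3 WHERE deg_C > %s", (HIGH_CUTOFF,))
print("Dropped as bogus highs:")
for row in cur.fetchall():
    print("  ", row)

# low side: don't auto-toss, just list for a human look at the station location
cur.execute("SELECT year, stnID, deg_c FROM temps3 "
            "WHERE deg_C < %s AND deg_C <> %s", (LOW_AUDIT, MISSING))
print("Low-side values to audit:")
for row in cur.fetchall():
    print("  ", row)

cur.close()
conn.close()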
@Bill in Oz:
Look at the comment of mine about station data dropped “going forward”. It uses a 50 C cut off and no new stations show up for Australia / Pacific.
Another way to do it, is to cut off at 40 or 45 deg C and pull those between that temp and 50 C and flag those for audit. Another data integrity check is to check for too great of a temperature spread between stations that are close to each other. For example in two nearby cities their high temp for the day should be reasonably close to each other but if they differ by 30 deg C maybe a check is in order. (perhaps use the average high for that station / day of the year as a check as well)
It would be interesting to put together a table of max high and max low temperatures recorded by latitude, and elevation as a table to compare readings against to flag possible bogus readings.
This of course, is the sort of data hygiene checking a responsible scientific organization would create right from the beginning to flag instruments that have problems.
I have a cheap digital thermometer with the inside outside sensor. The sensor wire to the outside sensor got cut a couple years ago when I closed the patio door too hard, so I repaired the break and put a small stop in the door channel so the door hits something else other than the wire when it gets closed hard.
I did learn however that when the sensor wire on that type of cheap indoor outdoor thermometer has an open, the display pins to a -58 deg C reading. That might be another specific temperature to scan for to pick up cable failures on the digital thermometers as that appears to be the circuit’s max read out for infinite resistance. (of course other models using different chips might have different open circuit values.)
Not that I expect you to jump through a bunch of hoops just brain storming on how a well setup system might do data integrity checks.
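A sketch of the two easy ones from that list, the -58 sentinel scan and a nearby-station spread check (the stations table with latitude / longitude is hypothetical, it would come from the GHCN station inventory, and it assumes temps3 carries a month column):

import mysql.connector

conn = mysql.connector.connect(user="user", password="password",
                               host="localhost", database="temps")
cur = conn.cursor()

# 1) Sentinel scan: a cheap indoor/outdoor unit with an open sensor wire pins at -58,
#    so an exact -58.0 value is worth a second look.
cur.execute("SELECT year, stnID, deg_c FROM temps3 WHERE deg_C = -58.0")
for row in cur.fetchall():
    print("sentinel suspect:", row)

# 2) Neighbor spread: same year / month, stations within ~0.5 degree of each other,
#    monthly values more than 30 C apart. Slow as a self-join, but fine as an audit pass.
cur.execute("""
    SELECT a.year, a.month, a.stnID, a.deg_c, b.stnID, b.deg_c
      FROM temps3 a
      JOIN temps3 b    ON a.year = b.year AND a.month = b.month AND a.stnID < b.stnID
      JOIN stations sa ON sa.stnID = a.stnID
      JOIN stations sb ON sb.stnID = b.stnID
     WHERE ABS(sa.latitude  - sb.latitude)  < 0.5
       AND ABS(sa.longitude - sb.longitude) < 0.5
       AND ABS(a.deg_c - b.deg_c) > 30
""")
for row in cur.fetchall():
    print("neighbor spread suspect:", row)

cur.close()
conn.close()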
@Alexander McClintock:
Part of it, especially in the 1700s, will just be that it is one or single digit numbers of thermometers, mostly in the same places. So a “cold snap” in southern Germany CAN in fact show up as Damned Cold on that thermometer. Today it will be averaged over Spain and Greece and the South of France. A very large number of places just can’t have as far an “excursion” as one place.
Another part of it will be things like the Berlin Air Lift station. In the 1700s it was a grass marching field that, in winter, could be covered in snow and stay that way in a hard freeze. In the 1960s it was a bustling airport. In the 1990s a very big Jet Port with tons of Jet-A being burned per hour for takeoffs. And snow removal all the time.
Then there’s just the fact that we had a Little Ice Age and it WAS damn cold in that late 1700s to mid 1800s period. (Oddly, the temperatures recorded in Sweden in about 1710 are not too much different from now, but there are very few of them and they don’t seem to have made it into the GHCN… I wonder why…) And not just thermometers said so. Linnaeus, the guy who gave us plant taxonomy: https://en.wikipedia.org/wiki/Linnaean_taxonomy reported data on growing all sorts of plants that later folks claimed he could not have grown as “Sweden is too cold”… Yet he did. It was just warmer then, then frightfully cold (England having Frost Fairs on the Thames, Napoleon losing The Grand Armee in Russia in a bit of OMG Cold, etc.)
Or 1800 And Froze to Death (1816 and the volcano…) in the USA…
So yeah, it really was quite cold with extreme cold excursions at times.
Then there’s the way that “data now” goes through a QA filter that tosses out values deemed too low but uses the same range down as up, despite down excursions being much further than up excursions as evaporation / thunderstorms limit the upside and not much limits the downside.
Finally, there’s that documented removal of “High Cold Places” from the modern record. San Diego on the beach is 72 +/- single digits just about every day of the year. Truckee / Lake Tahoe ranges from 110 in the summers to -OMG in the winters. There is just no way the Reference Station Method can ‘recreate’ those high cold place excursions, as it depends on AVERAGES to make the relationship and Averages can never range as far as individual data items.
A specific example of the general case that “The data are shit not fit for purpose”.
Frankly, I’d not trust the GHCN to decide when to plant a field of corn; never mind plan the entire global economy and $TRILLIONS of transfer payments to 3rd World Dictators.
@Larry L:
Now that I’m comfortable doing Standard Deviation (h/t Serioso for the nudge!) I’ve added it to some of the table schema. For the next day or two I’m going to be “reloading from scratch” and tossing the clearly bogus insane values, plus pre-computing some STDDEV values for the tables.
THEN I’ll be able to do things like look for anomalies more than 2 STDDEV away from the mean…
I think that will find all of the insane, and many of the just daft, temperature values…
(Then I’ll get to go through this all over again using preened data… It will also be interesting to see if the ‘clearly insane’ values survive in GHCN v4 …)
Discovery, it’s a process….
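Once the STDDEV columns are in place, the 2 STDDEV hunt is a simple join against per-region stats; a sketch with the same stand-in anomaly table names as earlier (it assumes the anomaly rows also carry the station ID):

import mysql.connector

conn = mysql.connector.connect(user="user", password="password",
                               host="localhost", database="temps")
cur = conn.cursor()
cur.execute("""
    SELECT a.region, a.year, a.stnID, a.anom_c
      FROM anoms3 a
      JOIN (SELECT region, AVG(anom_c) AS m, STDDEV(anom_c) AS sd
              FROM anoms3 GROUP BY region) s
        ON s.region = a.region
     WHERE ABS(a.anom_c - s.m) > 2 * s.sd
     ORDER BY a.region, a.year
""")
for row in cur.fetchall():
    print(row)
cur.close()
conn.close()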
Of possible interest:
http://joannenova.com.au/2018/10/first-audit-of-global-temperature-data-finds-freezing-tropical-islands-boiling-towns-boats-on-land/
It quotes from McLean_2018-Audit_of_HadCRUT4_dataset.pdf which was being sold at that time.
A few examples – stations in Colombia with temperatures over 80℃ for 2 months. Offset by an island in the West Indies with a month at 0℃. Sea temperatures taken by ships 80–100 km inland according to their position.
McLean was a student of Prof. Peter Ridd (Physics), who was sacked for criticism of the standard of research on the Great Barrier Reef at James Cook Uni (as uncollegial behaviour), and his case against the Uni comes up next month. He won’t be hindered by a recent PhD graduate from the marine science Department having one paper withdrawn as fraudulent and 2 others under investigation.
The climate science data does not seem to me to be accurate.
I head off to a tango milonga and 5 hrs later there is a swag of comments and a new post !
Bugger ! When do you have own time EM ?
I will process all this when I have eaten and rested the tired feet and brain.
It all looks interesting !
:-)
Larry NO! Not when you take thunderstorms into account. It was shown on the TV that a place 10 km away had hail covering the ground like a snowstorm while we had sunshine. The temperatures were something like 15 C different. The temperature at our place is about 2 C lower than the local airport (we have a lot of trees, so our place including the pool can not be seen on Google maps), but in winter the airport, which is beside the sea, can be cooler. I think EM and others have said one can not average temperatures or smear temperatures across locations, which is something BOM does for their supposed quality ACORN database and maps.
The 2 things that stand out for me are the 1751 mad std devn of 9.6 and the very high temp of 154.4 at Dallas.
Hopefully the quality control checks that they are supposed to run prior to using the data would correct the very bad temperature readings.
@Bill In Oz:
Well… I type fast and I overlap tasks… (Like right now having France24 running to catch up the news while I’m checking comments getting morning coffee on board).
Then I’ve been programming a lot of years and you get good at figuring the fastest way through a problem set. In the case of these graphs, it’s mostly changing just a couple of bits from one to the next. So while I ought to sink the time into figuring out how to do parameter passing and make this just one slick program and a wrapper with parameters for each region, it was faster to just change the digit and the title and hit “save as”… and make 7 almost identical programs. (Faster than describing it…)
Then it is just “run one” for a minute or two and while it is running read or make a comment, save the graph, run the next one, rinse and repeat for 8 to 15 minutes and done.
Besides, making this DB & Programs IS “Me time” ;-)
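For the record, the parameter passing really is only a couple of lines; a sketch of a wrapper, with the script name and arguments made up:

import sys

# Usage: python3 season_graphs.py 7 "Antarctica"
region = int(sys.argv[1])     # region / continent digit
title  = sys.argv[2]          # label for the graph titles and file names
print("Graphing region", region, "labelled", title)
# ... the rest of the plotting code, with the region digit and title substituted in ...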
@A.C.Osborn:
Yeah, that got me twigged too…
I’m generally mulling over how to do some kind of quality assessment. Catching the Crazy Temps isn’t hard, but catching the “just wrong” is harder, and the “not quite right” impossible. So, say the electronic gear drifts slowly higher, 1/10 C by 1/10 C and then suddenly “fails high” to +20 C. You catch that end point, but what about the “few tenths” before that? Then you have folks who want to make sure the record is “continuous” with the replacement so they adjust that “splice” to match instead of seeing it as an error – effectively cooling ALL the past by making the splice smooth.
We see that with Stevenson Screens where the whitewash stayed white but the latex would slowly age and darken. The answer was to smooth the splice. IMHO that alone accounts for a significant chunk of the “global warming” of 1/2 C. The splice ought to just accept the discontinuity of instruments as an actual fact, not change the past when the instrument was actually more accurate. But nobody asked me… ;-)
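To make that splice point concrete, a toy example with made-up numbers: the old instrument drifts high before replacement, and matching the new instrument to the drifted end of the old record pushes every earlier reading down.

# Toy numbers only: how splice-matching "cools" the past.
old_true     = [10.0, 10.0, 10.0, 10.0, 10.0]             # what the old, accurate instrument saw
drift        = [ 0.0,  0.1,  0.2,  0.3,  0.4]             # aging screen / electronics creeping high
old_recorded = [t + d for t, d in zip(old_true, drift)]    # last value reads 10.4
new_first    = 10.0                                        # fresh instrument reads the true 10.0

# "Make the record continuous": shift the whole old series down so its end meets the new one
offset   = old_recorded[-1] - new_first                    # 0.4
adjusted = [t - offset for t in old_recorded]              # [9.6, 9.7, 9.8, 9.9, 10.0]

print("recorded:", old_recorded)
print("adjusted:", adjusted)   # the early, accurate years are now 0.4 C "cooler" than they were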
cementafriend says (17 March 2019 at 12:48 pm):
Agree – that is what the audit process is for – is there another explanation for an odd data item.
True enough, very rapid, large temperature excursions are possible (common) in certain areas of the country. In the area downwind of the Rockies, chinook wind effects have on multiple occasions shifted temperatures 30–50 deg F in hours, sometimes both up and down that much several times as the winds left and returned. It is not at all uncommon for the western metro area of Denver to be 20 deg F warmer than out on the plains just 30–50 miles away.
Same with temperature inversions. You can have a pocket of cold air settled into a river valley or low ground and have air temperatures 20 deg cooler than just a few hundred feet higher.
When I was a teen we had a car paper route where my dad drove the car and my brother and I tossed the newspapers. The elevation change across the paper route area was 126 ft from the high point to the creek south of my home. On one spring day I recall it was comfortable shirt-sleeve weather on the hill (low 50’s) and bloody cold in the low ground near the creek (temps in the high 30’s). Then there are special events like hail, which I have also seen: temperature drops from the mid-to-high 80’s to the low 60’s, and ground fog with hail drifts 2 feet deep, in just a matter of 30 minutes.
All of which demonstrate the idea of infilling temperatures across large distances is absolutely insane.
For example, looking at temperatures on windy.com right now, the temperature listed for Denver is 14 deg F, 13 miles north in Thornton the temperature is listed as 30 deg F, and 54 miles north east at Ft. Morgan the temperature is 50 deg F. (By the way, that 14 in Denver is almost certainly a bad temp, as all nearby temps are in the high 20’s and low 30’s.)
Pingback: Australia – Oz Choice Seasons & By Month | Musings from the Chiefio
Thanks EM !
It’s all good.
My earlier comment was just an attempt at humour.
But I notice that there are typos.
Brain fugue last night !
Now on to your new post re Oz etc !
:-)
Barring equipment malfunction, a temperature reported at a site *is* the temperature at that site. Temperature is a measure of the mean internal kinetic energy of a sample of matter, and only its internal kinetic energy. That, by definition, makes temperatures conditional. Change conditions, change results. One should not be so quick to dismiss any recorded value without checking the site and its equipment first.
So, that 14F at Denver may very well be a “good” value, while being a regional outlier. So, could a -70C value be seen in the Philippines? Sure can. Could it be good? Sure could, depending on where it was measured and how it was measured and for what sample of matter. Recall that saying about assumptions ….
Pingback: GHCN v3.3 vs v4 – Top Level Entry Point | Musings from the Chiefio