Paris Rounds Too

Just as a quick cross-check, I took a look at the Paris airport. The question I was wondering about was simple: the USA ASOS stations ‘round up’, but do major airports in other countries also round to whole degrees C? Or do they report in fractional degrees C?

This matters rather a lot. If it’s just a USA thing, the damage is limited. If airports in other countries also “round up”, then we have a very significant issue.

So what did I find in Paris, France? Rounding.

From the wunderground page:

http://www.wunderground.com/history/airport/LFPG/2010/10/18/DailyHistory.html

We have this chart:

Paris, France, Airport Temperatures from Wunderground.com


Again we see the typical ‘stair steps with ramps’ effect of rounding up: the temperature shows a small ramp between the whole-degree jumps. Sites that do not round show a smoother, more linear change, without the steps-and-ramps pattern.
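The step shape is easy to reproduce: feed a smooth ramp through a ‘round up to whole degrees’ step. A quick sketch (illustrative numbers of my own, not the actual Paris readings):

```python
import math

# A smoothly ramping 'true' temperature, 6.0 C to 7.4 C in 0.1 C increments
# (illustrative numbers, not the actual Paris data):
true = [6.0 + i / 10 for i in range(15)]

# Reported with 'round up to whole degrees C':
reported = [math.ceil(t) for t in true]
print(reported)   # [6, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 8, 8, 8, 8]
```

Every reading within a whole-degree band reports the same value, then the report jumps a full degree at once: flat steps, with the real ramp hidden inside each step.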

Further, down in the temperature history section, we see that all the recorded temperatures are in whole degrees C.

Time (CEST): Temp.: Dew Point: Humidity: Sea Level Pressure: Visibility: Wind Dir: Wind Speed: Gust Speed: Precip: Events: Conditions:
12:00 AM 44.6 °F / 7.0 °C 39.2 °F / 4.0 °C 81% 30.24 in / 1024 hPa 6.2 miles / 10.0 kilometers NNW 4.6 mph / 7.4 km/h / 2.1 m/s – N/A Mostly Cloudy
METAR LFPG 172200Z 34004KT 9999 BKN024 07/04 Q1024 NOSIG
12:30 AM 44.6 °F / 7.0 °C 39.2 °F / 4.0 °C 81% 30.24 in / 1024 hPa 6.2 miles / 10.0 kilometers NNW 4.6 mph / 7.4 km/h / 2.1 m/s – N/A Mostly Cloudy
METAR LFPG 172230Z 33004KT 9999 BKN024 07/04 Q1024 NOSIG
1:00 AM 42.8 °F / 6.0 °C 39.2 °F / 4.0 °C 87% 30.24 in / 1024 hPa 6.2 miles / 10.0 kilometers North 4.6 mph / 7.4 km/h / 2.1 m/s – N/A Mostly Cloudy
METAR LFPG 172300Z 35004KT 9999 BKN025 06/04 Q1024 NOSIG
1:30 AM 42.8 °F / 6.0 °C 39.2 °F / 4.0 °C 87% 30.24 in / 1024 hPa 6.2 miles / 10.0 kilometers NNW 6.9 mph / 11.1 km/h / 3.1 m/s – N/A Mostly Cloudy
METAR LFPG 172330Z 34006KT 9999 BKN024 06/04 Q1024 NOSIG
2:00 AM 42.8 °F / 6.0 °C 37.4 °F / 3.0 °C 81% 30.24 in / 1024 hPa 6.2 miles / 10.0 kilometers North 5.8 mph / 9.3 km/h / 2.6 m/s – N/A Mostly Cloudy
METAR LFPG 180000Z 36005KT 9999 BKN025 06/03 Q1024 NOSIG
2:30 AM 42.8 °F / 6.0 °C 37.4 °F / 3.0 °C 81% 30.24 in / 1024 hPa 6.2 miles / 10.0 kilometers North 3.5 mph / 5.6 km/h / 1.5 m/s – N/A Mostly Cloudy
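The whole-degree reporting is visible in the METAR lines themselves: the `07/04` group is temperature / dew point in whole degrees C. A sketch of pulling it out (a simplified regex of my own, not a full METAR decoder):

```python
import re

def metar_temp_dewpoint(metar):
    """Pull temperature and dew point (whole degrees C) out of a METAR string.

    The group looks like '07/04', or 'M02/M05' for below zero (M = minus).
    A simplified sketch, not a full METAR decoder.
    """
    m = re.search(r'\b(M?\d{2})/(M?\d{2})\b', metar)
    if m is None:
        return None
    to_c = lambda s: -int(s[1:]) if s.startswith('M') else int(s)
    return to_c(m.group(1)), to_c(m.group(2))

print(metar_temp_dewpoint(
    "METAR LFPG 172200Z 34004KT 9999 BKN024 07/04 Q1024 NOSIG"))  # (7, 4)
```

Note there is nowhere in that group to put a fractional degree: anyone consuming the basic METAR feed gets whole degrees C, full stop.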

Looking at the airport current record:

http://www.wunderground.com/cgi-bin/findweather/getForecast?query=Paris%20france&wuSelect=WEATHER

And scrolling down to the bottom, you can see that nearby stations report in fractional 1/10 degrees C.

Paris – 20ème arrondissement 52.3 °F / 11.3 °C 48 °F / 9 °C 84%

West at 6.6 mph / 10.6 km/h / 3.0 m/s
29.92 in / 1013.1 hPa 0.00 in / 0 mm / hr – 397 ft 7:11 AM CEST Normal Website

Meteo 77, Saint-Pathus 50.2 °F / 10.1 °C 47 °F / 8 °C 88%

West at 11.5 mph / 18.5 km/h / 5.1 m/s
29.86 in / 1011.1 hPa 0.00 in / 0 mm / hr – 315 ft 7:13 AM CEST Rapid Fire Website

Euro Disney, Lagny-sur-Marne 51.6 °F / 10.9 °C 48 °F / 9 °C 87%

West at 4.0 mph / 6.4 km/h / 1.8 m/s
29.90 in / 1012.4 hPa 0.00 in / 0 mm / hr – 260 ft 7:10 AM CEST Normal

Paris 52.2 °F / 11.2 °C 46 °F / 8 °C 81%

West at 4.8 mph / 7.7 km/h / 2.1 m/s
29.98 in / 1015.1 hPa

Other Countries?

Sydney Airport, Australia, reports a round 17 C. Stations ‘nearby’ report fractional 1/10 C.

Mexico City is interesting. The list:

Col. Oriente, Mexico DF 63.1 °F / 17.3 °C 44 °F / 6 °C 49%

NNW at 1.0 mph / 1.6 km/h / 0.4 m/s
23.09 in / 781.8 hPa 0.00 in / 0 mm / hr 7342 ft 12:06 AM CDT Normal Website

APRSWXNET Coyoacan , Mexico City 63 °F / 17.2 °C 54 °F / 12 °C 72%

WNW at 0 mph / 0.0 km/h / 0.0 m/s
29.84 in / 1010.4 hPa 0.00 in / 0 mm / hr 7401 ft 11:51 PM CDT MADIS Website

Tlalpuente, Tlalpan 52.8 °F / 11.6 °C 42 °F / 6 °C 66%

ENE at 4.0 mph / 6.4 km/h / 1.8 m/s
29.90 in / 1012.4 hPa 0.00 in / 0 mm / hr 8613 ft 12:08 AM CDT Normal Website

is all in 1/10 C.

Kingston, Jamaica returns to the pattern of rounding.

http://www.wunderground.com/history/airport/MKJP/2010/10/18/DailyHistory.html?req_city=NA&req_state=NA&req_statename=NA

with the chart having the ‘stair steps’ behaviour:

Kingston, Jamaica, weather chart


Sao Paulo, Brazil: the airport reading is rounded, while nearby stations are in 1/10 C.

http://www.wunderground.com/cgi-bin/findweather/getForecast?query=Sao%20Paulo,%20Brazil&wuSelect=WEATHER

And finally, Tokyo Haneda Airport is in whole degrees C while nearby stations report in 1/10 C.

Again with the odd ‘ramps and steps’ look to the chart:

Tokyo Airport weather history page


In Conclusion

OK, this “Airports Round Up” problem looks to be global. With the constantly increasing percentage of “major airports” in the recent part of the record, that pretty much guarantees a consistent “rise” of grid box temperatures by about 1/2 C just from the rounding, between about 1990 and now (along with the grid / box anomaly system that compares one station now to different stations in the past). As the non-airports were dropped from the record and the airports assumed an ever larger part of it, and as the “round up to whole degrees C” method came to dominate aviation stations at major METAR-reporting airports, we “find” an additional 1/2 C of “Global Warming”, even without the Airport Heat Island, Urban Heat Island, and all the rest.

In my opinion, this one error pretty much invalidates any claims of AGW based on any temperature series that uses the GHCN (and thus, through it, the METAR-reported temperatures that have been rounded up). Since that covers all the temperature series in common use, we pretty much don’t know what’s been happening to the temperature of the planet.

I think we still need to work through this at a very low level and follow particular non-ASOS temperature records in parallel with the ASOS-type reports, to show that the actual impact makes it into GHCN. But at this point I can see no way these stations could avoid imparting a warming bias of 1/2 C from the rounding up.

About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present "hot buttons" are the mythology of Climate Change and ancient metrology; but things change...
This entry was posted in AGW Science and Background.

24 Responses to Paris Rounds Too

  1. dearieme says:

    When I first took an interest in “Global Warming” I didn’t immediately realise how crooked many of them were. But I did realise pretty quickly how dud most of them were. Failing to account for rounding – that’s ultra dud, even by their standards. Can you be quite sure?

  2. pyromancer76 says:

    Temperatures taken at airports. Hmmm. Don’t airports need absolutely accurate temperatures as part of the information to pilots? Does this fiddle-faddle with the temperatures occur AFTER they have been used for the reality (safety) of air traffic, or is “rounding up” too insignificant for flying planes/jets?

  3. E.M.Smith says:

    It is possible to be “quite sure”. On

    https://chiefio.wordpress.com/2010/10/18/phx-asos-sol-sob/

    at the bottom are links to the way ASOS equipment is set up that specifically say it rounds up to whole degrees C. Then there are links showing that the METARs are in whole degrees C and are made from those reports.

    Short of tracking a data item in a full forensic audit from end to end in the equipment itself, that’s about as much as you can do (and is usually more than enough).

    FWIW, I just put an update on that page where I added some data from today. I set Wunderground to show “both” for temperature type (F and C). To my eye, it looks like some equipment in the USA is reporting in whole degrees C (like the ASOS) but then converting to F, and when converted back that gives an error term. Other equipment looks to be clearly reporting in fractional F (as the C value matches a straight conversion). Basically, it looks like there is “a whole lot of rounding going on”: a bit of a free hand as to whether one measures in C or F (converting as needed), another free hand in how (someone … USHCN?) stores the data (with more conversion of type), and finally Wunderground reporting C and F values that do not always agree (off by various accumulated conversion, rounding, etc. errors).
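That “convert, round, convert back” error term is easy to demonstrate with the standard conversion formulas (my own illustrative numbers):

```python
def c_to_f(c):
    return c * 9 / 5 + 32

def f_to_c(f):
    return (f - 32) * 5 / 9

# A station reporting whole degrees C, displayed in F to one decimal,
# matches the Paris table exactly:
print(round(c_to_f(7), 1))          # 44.6 (the 7.0 C rows above)

# But round the F value to a WHOLE degree first and convert back, and the
# original C value is no longer recovered -- an error term appears:
f_whole = round(c_to_f(7))          # 45
print(round(f_to_c(f_whole), 1))    # 7.2, not 7.0
```

Chain a few of those type conversions together and the accumulated error is exactly the kind of mismatch described above.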

    For Paris at the airport, this ought not to apply, as it is C all the way. So you can look at that list of temperatures above (or go to Wunderground and see the whole list) and it’s always in whole degrees C. Never a fractional part. But the nearby neighborhoods have fractional parts.

    That pattern continues to attest to a “round up at airports”.

    (And that’s a GOOD THING. Those stations are primarily for aviation. In aviation, you calculate “density altitude” to see if you can get off the runway or not. If you have a ‘false low’ temperature, you crash. If you have a ‘false high’, you have margin of safety. So “round up” helps prevent accidents. It’s the use of AVIATION temperatures for “climate research” that’s a BAD THING…)

    To “nail it” would require finding the French equivalent of the docs I link to under that prior article for the USA. I’m not that familiar with the French Government (and they are having a spot of trouble right now…) but it ought not to be hard for someone familiar with France to find the docs saying “we round up at airports”. It ought to be a well advertised fact to French pilots and aviation weathermen.

    FWIW, I too took that “What The..” journey. If I’d known you could be that sloppy, lazy, and incredibly uninspired and be a successful NASA scientist, I’d have pursued it when I had their job application in my hand… With that class of competition it would have been a walkover…

    For example: Remember the Mars spacecraft we lost because they could not keep Metric and Traditional units straight?…. They have a history of that kind of stuff now.

  4. E.M.Smith says:

    @pyromancer76: I put it in the prior comment, but it bears emphasis.

    Rounding UP is a GOOD THING for aviation. It is what every pilot is told to do. (In ground school, we were even encouraged to add a degree or two for safety if we were unsure).

    So the decision to ’round up to whole degrees C’ is exactly right. For aviation purposes.

    Density Altitude is a fancy word for “hot air is thinner”. Cold thick air will carry more weight than hot thin air. Since all the aircraft performance numbers are for a ‘standard atmosphere’ but you fly in real air, the real air is converted to an imaginary standard atmosphere altitude in the calculations. You say, basically, “I’m at 5000 feet, but it’s not the standard temperature, it’s darned hot in this desert, so that’s more like 7000 feet of standard atmosphere. My aircraft is placarded to lift 2000 lbs at 7000 ft, and I’ve got a load of 2100 lbs. Either Uncle Joe stays on the ground or we wait until it cools off a bit.”

    In that kind of calculation, the last thing you want is to find out that the 21 C was really 23 C. You might have calculated that “Joe just barely fits”, then run into the tree at the end of the runway because you could not quite get off the ground in time… BUT having the 21 C really be 20 C means you get off the ground just a bit easier, and if Joe fibbed about his weight by 5 lbs, no bad thing happens…
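The density altitude arithmetic can be sketched with the usual pilot rule of thumb: ISA temperature is about 15 C at sea level, lapsing roughly 2 C per 1000 ft, and density altitude rises about 120 ft per degree C above ISA. These are approximate rule-of-thumb numbers of mine for illustration, not a flight-planning tool:

```python
def isa_temp_c(pressure_alt_ft):
    """ISA standard temperature at a pressure altitude: ~15 C at sea level,
    lapsing roughly 2 C per 1000 ft (rule of thumb)."""
    return 15.0 - 2.0 * pressure_alt_ft / 1000.0

def density_altitude_ft(pressure_alt_ft, oat_c):
    """Rule-of-thumb density altitude: add ~120 ft per degree C above ISA."""
    return pressure_alt_ft + 120.0 * (oat_c - isa_temp_c(pressure_alt_ft))

# A 5000 ft field on a hot desert day (35 C vs an ISA temperature of 5 C):
print(density_altitude_ft(5000, 35))   # 8600.0 ft of 'standard' air

# Rounding the temperature UP by one degree RAISES the computed density
# altitude, i.e. errs toward assuming worse performance -- the safe side:
print(density_altitude_ft(5000, 36) - density_altitude_ft(5000, 35))  # 120.0
```

Which is exactly why “round up” is the right call for aviation, and the wrong input for climatology.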

    So it’s a very good thing that they “round up” at airports.

    It’s just very bad “climate science” to use it for something other than aviation.

  5. RuhRoh says:

    So where are all those genii that were claiming you didn’t understand the merits of averaging noisy numbers to get a sqrt N reduction in the noise???

    Maybe we can go back and find some of their fine commentary and repost it in comments here.

    IIRC, you were suggesting that the data might be polluted by this kind of rounding and that it was insanity to claim the levels of accuracy being routinely bandied about…

    On second thought, it is no fun to hear ‘I Told You So’, and thus this is another ‘great idea’ which is not a very good one.

    Cheers,
    RR

  6. E.M.Smith says:

    @Ruhroh:

    Well, I was thinking along those lines, but thought I’d post it as a post that emphasized the “error” aspect rather than the “just because you can make an average of something, that doesn’t mean it has meaning” aspect… ;-)

    Simply because there are a large number of those folks who are going to simply sing from their “One Note Hymnal” and pronounce that of course you can make an average to a gazillion digits of precision out of whole-digit fodder (and you can). Still missing the point that if those single digits are bogus, and vary widely as to what THEY mean, the average of them is, very precisely, meaningless. Basically, you can’t upgrade the trash by making soup out of it.

    I expect the same is true of the folks who were wailing that because the pre-drop station averages for “kept stations” vs “dropped stations” matched, the “post drop” had to match as well. THAT ignores WHY they were dropped. As we’re now seeing, that drop is coincident in time with the ASOS rollout. Gee, and the “Duplicate Number” flag changes at the same time in the “kept” stations, flagging a change of process.

    So IMHO, what we will eventually find is that the “kept” stations were the same PLACE, but different INSTRUMENTS, as the ASOS / AWOS et al. were added to airports, AND different processes (the change to C and rounding up). The airport percentage rises mightily about then (as non-airports are dropped). So they end up comparing LIG to LIG or MMTS to MMTS in the past; and don’t realize that “post drop” they have ASOS to MMTS, and get a “hockey blade” from a 1 C round-up in the present vs a 1 F ‘round nearest’ in the past.

    But working that into a posting with links and evidence and easy to follow examples is going to take some time….

    At any rate, it was very nice to see that at least one person noticed that I’m pretty careful about what an average actually means and that maybe I know something about how averages ought to be used… Thanks 8-}

    FWIW, the question that needs answering (and will take a fair number of ‘staff hours’ that I don’t have right now, and some detailed data too) is: What percentage of the data in GHCN in recent years comes from ASOS (or similar ‘aviation round up to whole degree C’ processed sources), and how does that compare to the data (and process) pre-1990? When this “round up to whole degree C” error is removed from the present “warming trend”, what (if any) “warming” is left?

    On first blush examination with airport percents approaching 90% overall (and the ASSUMPTION that a lot of them will have the “round up problem”) it looks to me like the answer will be “nearly all global warming is a rounding up error term”.

    And averaging together 1500 stations, where 1350 of them have a 1/2 C upward rounding error term, will NOT give an answer with usable precision out to 1/100 C, as averaging simply cannot fix that error. (Though subtracting 1/2 C can allow for it…)

    Then comparing “grid box averages” of those stations with that error term to “grid box averages” from 1950-1980 done with degree F values that were “round nearest” so as to compute a ‘grid box anomaly’ will NOT remove that error term either. No, Virginia, the anomaly will NOT save you. It will simply give you false hope to match the false precision taken into the case in the first place.
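Both points can be checked with a quick simulation. Synthetic uniform data are my own assumption here; the point is the direction and size of the bias, not any particular station:

```python
import math
import random

# Synthetic illustration; assumption: true temperatures with roughly
# uniformly distributed fractional parts.
random.seed(1)
true_temps = [random.uniform(0.0, 30.0) for _ in range(100_000)]

nearest = [round(t) for t in true_temps]      # the historical 'round nearest'
ceiling = [math.ceil(t) for t in true_temps]  # 'round up', METAR style

mean = lambda xs: sum(xs) / len(xs)

print(mean(nearest) - mean(true_temps))   # ~0.0 : round-nearest is unbiased
print(mean(ceiling) - mean(true_temps))   # ~+0.5: round-up shifts every average

# An anomaly against a baseline built under the OLD method does not remove
# a bias introduced by a LATER change of method:
anomaly = mean(ceiling) - mean(nearest)
print(anomaly)                            # still carries the ~+0.5 offset
```

Averaging a hundred thousand values buys you precision against random error, but the +0.5 is not random error; it survives any amount of averaging, and any anomaly taken against a round-nearest baseline.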

    My “job” for the next few months at least, is to try and figure out:

    1) How to prove that.
    2) How to package a readable presentation of it.
    3) How to get that understanding communicated widely.

    For now I’m in step zero:

    0) Admire the problem.

    So I’m gathering interesting bits of example and going “Gee, that’s interesting!” ;-)

    Like Santa Barbara where in a single very similar geographic bowl the ASOS is “hot hot hot” compared to the average of all the stations around it…

  7. Rob R says:

    I look forward to the gradual unfolding of this line of inquiry. I suspect the prevalence of the issue will vary from Country to Country.

  8. Robert L says:

    Wow! this could be huge.

    0.5 degrees of warming for what proportion of the world’s temperature records? (Given the great thermometer die-off of the 90s and the increasing reliance on airport locations.)

    Would I be right in thinking that this will be more significant in high latitude locations with more sparse stations and greater area infilling influence?

    Could this also help to explain why we are seeing relative stasis in global temps now? Most changes in instrumentation were in the 90s and produced the warming signal, but are producing no more relative change now.

    I’d be screaming about this to the world if I were you.

  9. TomM says:

    All American ASOS report temperatures to the tenth of a degree in the remarks section of the observation. Just check Wunderground and click on Raw METAR for any ASOS station listed; look past the RMK section for the sequence starting with a T. The temperature and then the dewpoint will be listed to the tenth of a degree C. The remarks section also records the highs and lows for the day, based on temperatures that are measured to the tenth of a degree F and then rounded to the whole degree.

    Now AWOS stations are a different matter, some have a remarks section with temperature and dew point to the tenth of a degree and others don’t.

    To decode an ASOS metar check…
    http://en.wikipedia.org/wiki/METAR

  10. E.M.Smith says:

    @Rob R: Well, that’s the $64 billion question…

    IF there is some international aviation standard that says “all METAR stations round up to whole degrees C”, then it’s pretty much going to be global and 100% of GHCN. IF it’s just ASOS and AWOS, it’s a smaller problem. IF it’s only airports, the prevalence will vary by country and over time (but most countries are already at roughly 90% airports; France, the USA, and New Zealand are all very high, and IIRC N.Z. is 100%).

    So there’s likely to be variance, but mostly over time and mostly toward the 100% mark… though that’s the bit that has to be proven.

    I’ve already found the docs and examples that confirm the docs for the USA / ASOS. This posting is a supporting example for France in particular and Global in general (though I need to find the EU / French ASOS or similar docs). Then on to the rest of the world… (oh, for a grad student or two to do all the real work ;-)

  11. E.M.Smith says:

    @TomM: Yes, the ASOS manual (linked in one of the prior articles, I think the one on San Jose SJC) made that clear. Oh, looking around, I see it’s the one about Phoenix Sky Harbor. Has the link to the METAR decoder too:

    https://chiefio.wordpress.com/2010/10/18/phx-asos-sol-sob/

    The problem, as I see it, is this:

    The folks compiling the climate data see that 1/10 spec and say “no problem”. Then they accept the automated METARs that are in whole C, rounded up (without thinking about it…).

    All the evidence I’ve found so far points that way, including prior looks into the GHCN / METAR process. The “missing link” is only the question of who turns METARs into CLIMAT reports, and whether they “fix” this issue then, or not. It is possible that the different countries choose to use a source other than the METARs for the daily data and ‘get it right’.

    I would hope they do, at least for some countries, but “hope is not a strategy”…

    @Robert L: As noted in this comment just above, it all depends on a specific detail of how the temperatures are turned from METARs to CLIMATs (which then feed into GHCN and on to the rest of the world).

    It will be ‘more significant’ wherever the “bulk of the temperature data now” comes from automated airport stations. That is increasingly “all of the world”…

    And I thought I was screaming it, here, now… ;-)

  12. E.M.Smith says:

    Somehow I sense things getting more confused on the question of how the monthly averages are computed from ASOS.

    One reference cited earlier said the data are created in whole degrees C. But this link:

    http://www.ncdc.noaa.gov/oa/climate/ghcn-daily/index.php?name=integration

    which claims to detail how the NCDC data are created, lists a particular data set.

    A google of that term leads to this page that claims to detail how the data set is created FROM the ASOS.

    Click to access td3211.pdf

    It says that the data are in whole degrees F for min / max, and that the mean is calculated from those two as (Min+Max)/2:

    MNTP
    Average Temperature. The value is the (Max Temp. + Min Temp.)/2, expressed in
    whole degrees Fahrenheit.

    and further down:


    TMAX
    Daily Maximum Temperature. DATA-VALUE = -00199 to b00199, expressed in whole
    degrees Fahrenheit.

    TMIN
    Daily Minimum Temperature. DATA-VALUE = -00199 to b00199, expressed in whole
    degrees Fahrenheit.

    So the ASOS data are at best in whole degrees F before the mean is calculated. But which ASOS data are used?

    10. Quality Statement:

    This data set is produced from ingested ASOS Summary of the Day data. The data are examined by routines that perform gross limits checks, internal consistency checks and climatological limits checks. Discrepant data are flagged. This data set should be used with the knowledge that these are “raw” data as received from the station. Flagged data may be erroneous or merely the data failed a check.

    So I guess it’s back to the ASOS manual to find out what form the “Summary Of The Day” data might be in.

    And there is no statement about how the fractional readings get made into “whole degrees F”. Round up, down, nearest, or truncate?

    But at least this starts to explain why some ASOS show ‘round degrees C’ and others show ‘round degrees F’. It will depend on whether you got the data from the ASOS or from NCDC (among other things…)
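One side note on that (Max + Min)/2 definition: whole-degree inputs cap the precision of the mean before any later conversion, since the result can only ever land on a half-degree grid. A quick sketch:

```python
def mntp(tmax_f, tmin_f):
    """Daily mean as TD-3211 defines it: (Max + Min) / 2, whole-degree F in."""
    return (tmax_f + tmin_f) / 2

print(mntp(71, 48))   # 59.5

# Whole-degree inputs can only ever land on a half-degree grid:
fracs = {mntp(a, b) % 1 for a in range(40, 45) for b in range(60, 65)}
print(sorted(fracs))  # [0.0, 0.5] -- no finer precision is possible
```

So whatever later steps report in 1/10 C, that tenth-of-a-degree is cosmetic: the information content was fixed at half-degree F granularity here.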

  13. RuhRoh says:

    Inexorably, the noose tightens…
    RR

  14. dearieme says:

    Well, what can I say but “congratulations”? What a catch. Bloody hell. They didn’t account for rounding. Dimwits. I wonder whether they can eat without dribbling.

  15. E.M.Smith says:

    Well, the ASOS manual is a bit dense at times. But near as I can tell, the “Daily Summary” and “Monthly Summary” data are calculated as rounded whole degrees F (the usual way with midpoint deciding up / down) then converted to 1/10 C (why? don’t ask why… down that path lies insanity and ruin…)

    There is a data flow diagram at the bottom of the manual that has an ‘as originally done’ that looks like it would work, but then a newer form that’s just a touch vague (the “MS/DS” labels are near 2 flow lines… wonder which one it flows down?). At least one interpretation of that diagram would have the Daily Summary / Monthly Summary data from FAA ASOS not making it to the NWS Network ASOS site. Then again, it might… That there are two different ASOS systems (FAA and NWS) is also a bit interesting as they will likely have differences of operational behaviour and preferred data formats.

    To the extent that the Summary Data are used, as the NCDC doc claims, we’re only talking a couple of gratuitous type changes (F in 1/10, to whole F, to C in 1/10, to USHCN in F or GHCN in C); while to the extent that the data flow had some “issue” at some point and the METARs were available, so used, you might get the “round up” problem.

    And who knows what they do in France….

    At any rate, it looks like the ASOS device does keep a fractional degree value (until it does the rounding to whole degrees F) and then does have a fractional degree C available. And supposedly the summary data are used for that specific data set, that then feeds into the GHCN. But frankly, given the amount of Rube Goldberg in that data flow / type conversion / precision conversion merry go round; I’m still a bit worried.

    It also remains the case that anyone using the METAR data feed will be getting the “Round Up Whole C” version. This might well explain some of the discrepancy between recreated monthly averages based on METARs, based on CLIMATs, and based on values in various data sets. Basically, you can’t effectively do a ‘cross foot audit’ by comparing those three and expecting them to match.

    I think I was happier when they had Old Joe read the mercury thermometer and report the temp in F and that was the end of it… 8-)

  16. Steven Mosher says:

    1. Do they also round down?

    2. Rounding up and rounding down will not bias the estimate.
    We know this mathematically.

    3. The historical record is done by rounding up/down on a daily basis and averaging per month. That doesn’t distort the average. You can prove this to yourself by doing a suitable Monte Carlo simulation or writing out the equations.

    4. If rounding biases things up by 1/2 C for the land record, you have these problems to explain:

    A. You would not have the agreement you in fact have with RSS.
    B. Homogeneity algorithms would identify the difference between ASOS and non-ASOS records. (They don’t.)

    C. The ocean would be warming faster than the land record, which would make no physical sense.

    D. Air temps over the sea (different from SST) would not track the land temps, as they do.

    E. 7 years of CRN data, collected every 5 minutes to 1/10 degree, would have divergent trends from ASOS-type sites. They don’t.

    In short, the rounding up/down on a daily basis in F is the same way the Stevenson screens were recorded historically.

    This mathematical process introduces no bias. You can play with Lucia’s spreadsheet on the issue, write your own Monte Carlo simulation, or write out the equation and see why it doesn’t.

    If they only rounded up, that is, reported 13.2 as 13.2 but rounded 13.5, 13.6, 13.7, 13.8, 13.9 up to 14, then you would have a bias.

  17. Steven Mosher says:

    How a min/max is read in a CRS:

    “4.6 How to Read and Record Temperatures. Thermometers are read and recorded to the
    nearest whole degree Fahrenheit. Readings are usually recorded on WS Form B-82, and WS
    Form B-91, or WS Form B-92. Temperatures below zero are recorded with a minus (−) sign to
    the left of the digits; i.e., −15°F for 15°F below zero. The thermometers should be reset after
    they are read, as described in Sections 4.6.1, 4.6.2 and Figure B-7. ”

    Now for the US you can do the following simple test.

    1. test the trend in the US data prior to conversion to C
    2. test the trend after a conversion to C.

    Answer, no difference.

    You can also pick any station that does round up/down on a daily basis and compare it to synoptic data without rounding. Answer: no difference. If you want to do this, the easiest way in California is to go to the agricultural temperature measuring system, which always has some station near a station in GHCN. It’s tedious work.

    You can also take any station data reported to the hour or minute and perform rounding / truncation / conversion tests on real data, and see that the transforms don’t bias trends.

    You can also monte carlo the process.
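Both halves of this dispute can in fact be Monte Carlo’d in a few lines (synthetic flat data, my own construction): a rounding method applied consistently, even a pure round-up, leaves the trend alone; a change of method mid-record does not.

```python
import math
import random

def slope(ys):
    """OLS slope of ys against its index 0..n-1."""
    n = len(ys)
    xbar = (n - 1) / 2
    ybar = sum(ys) / n
    num = sum((i - xbar) * (y - ybar) for i, y in enumerate(ys))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

random.seed(2)
# A flat (no-trend) synthetic daily record as a stand-in for real data:
true = [10 + random.gauss(0, 1) for _ in range(10_000)]

s_true  = slope(true)
s_round = slope([round(t) for t in true])        # round-nearest throughout
s_ceil  = slope([math.ceil(t) for t in true])    # round-UP throughout

# A consistent method -- even ceiling -- shifts the LEVEL, not the slope:
print(s_true, s_round, s_ceil)

# But CHANGING the method mid-record (round-nearest early, round-up late,
# as with an instrument / process change) manufactures a spurious trend:
mixed = [round(t) for t in true[:5000]] + [math.ceil(t) for t in true[5000:]]
print(slope(mixed))
```

So the two positions are not actually in conflict: rounding per se is trend-neutral, and the whole question is whether the method changed over the length of the record.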

  18. E.M.Smith says:

    Dear Steven:

    They round by “midpoint” for the DS records. (The usual 0.5+ goes up, 0.4999 goes down). But round up for the METARs (and I’ve not found what they do for the CLIMATs).

    Rounding changes the precision you have available to work with. “We know this mathematically”… it’s also a common place where error terms are introduced and bugs in code raise their head. (For example, we THINK we’re rounding based on 0.5, but that’s a decimal number, we’re actually doing something else inside the computer as it works in binary. This can be exploited in some kinds of ‘salami technique’ theft as the wee small fractional error terms are accumulated into a Swiss Bank Account…).
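That binary-vs-decimal rounding trap is concrete and easy to show in any language using binary floating point:

```python
from decimal import Decimal

# 2.675 has no exact binary representation; the double actually stored is a
# hair BELOW 2.675, so the 'round half up' intuition fails:
print(round(2.675, 2))    # 2.67, not the 2.68 a decimal reader expects
print(Decimal(2.675))     # shows the value the computer really works with

# Python's round() also uses banker's rounding at true midpoints:
print(round(0.5), round(1.5), round(2.5))   # 0 2 2
```

Two different “surprises” in four lines: what we wrote is not what is stored, and the midpoint rule in the library may not be the midpoint rule in the spec.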

    There is a world of difference between how a forensic thought process works and how a theoretical one works. In particular, you never say “It Can’t”. You ask “How could a very clever person or a pernicious error exploit?”.

    With that said: Yes, if they in fact managed to carry the DS / MS form of the data properly through to the end result of the GHCN, the odds of an error term of significance drop dramatically. (As I noted above) But there are still loose ends. In particular, all that type conversion. Remember that we found a 1/10 C warming of records in Step0 of GIStemp from just such a type conversion done in the obvious but wrong way. Not nearly the same as a 1/2 C, but if you take that conversion a couple of times…

    I’ve never argued with the historical method nor asserted it was invalid. Only that it sets a precision limit. Please don’t be so patronizing as to think I don’t know how to round.

    Oh, I have pointed out that the procedures allowed observers to simply make up a missing value and put it on the forms. (Though after linking to the NOAA site, they mysteriously took the page down…)

    The major “issue” I have with the rounding done involves the way it’s done in computers and with software. It IS prone to error there and no amount of “Monte Carlo” will fix nor demonstrate the human tendency to write bad code or make bad choices of data source and type conversion.

    Computers are subtle beasts when it comes to math, a thing folks often forget, that causes no end of grief. We tend to think in base 10, and they work in base 2. We tend to think in unlimited precision, they have underflow and overflow and accumulated error terms. We tend to think in “did it once and it worked ok”, they do something a million times in a second and can accumulate that small nearly nothing error to very large sizes. The upshot of all that is simple:

    It is simply wrong to be dismissive about what happens in any computer mathematical operation, be it averaging, rounding, or anything else. EVERY operation done is an opportunity for a cockup of significant proportions. Even if the formula looks fine to a human being and it works great done by hand on a modest number of records.

    I’ve spent more years of my life than I care to remember finding just such “It can not happen” but it did “issues” in code. Both my own and that written by other folks. And the quality of the workmanship in the NASA / NOAA code I’ve seen does not increase my faith in it.

    Oh, and “writing the equations” does nothing to tell you how the code really works. It only tells you what you expect it to do (which typically is NOT what it actually does unless you are using the more esoteric symbolic math codes). NEVER let your ASSUMPTIONS and EXPECTATIONS tell you to skip actually looking at what happens to the data. Never ever. Be it looking at a single subroutine or doing an entire systems analysis or even a simple audit.

    The only thing that matters is what actually happens to the data. In a forensic audit, you start from the assumption that something is wrong, but in a way such that you will miss it if you look for what is expected. The things you attack the most are the assumptions and expectations. If you don’t, you will typically miss the most interesting bits. And something you ought try to remember: The more someone says “It can’t be there, look over here.” the more you simply MUST keep digging right where they said it can’t be. “Accept nothing your opponent offers you” is not just part of The Art Of War… (While under cover I’ve found folks doing illegal things via just such an approach. They were walked off site. About a week later we finished finding all the more subtle bits of buggery.)

    That same approach is just as critical for finding bugs in code. I once found a compiler bug by writing code of the form:

    If A, print A and exit.
    If not A, print NOT A and exit.
    Print “You can’t get here.”

    And had “You can’t get here.” print out. The other programmer on the team had been laughing at me for having put in the “stupid” line of code that could never execute. Just look at the “formula”…

    So please keep in mind that the 2 least effective things you can do when dealing with me are to say in any form:

    1) It can’t be there. (Especially if followed by “look at the formula”).

    2) You ought to look at this instead.

    Until you realize the degree to which this forensic mindset is 180 degrees divergent (and rightly so) from the inventor or fabricator (or code writer) mind set, you will continue to get exactly backwards responses to what you expect from your suggestions.

    Or, more simply, never tell the Cop there can be no drugs hidden in the car. You will end up with a large pile of car parts. (And even then some parts will be sent to the lab for trace analysis. There is an interesting plastic you can make from cocaine. Makes usable turn signal lenses… so expect missing bits from all plastics in the car.)

    On your point 4: I have to explain NOTHING. Zero. Zilch. Nada. Nary a thing. The folks who wish to assert that something outrageous is happening to our climate have to PROVE every single step of their process is correct from beginning to end (and several times to different audit teams). That’s what we require in financial audits (required yearly for most corporations – but in more depth if any tax or other agencies are in the mood…) That is what we require for various ISO certifications. Hell, you can’t even sell dog food without a more thorough ingredient inspection program than that which is applied to the temperature ingredients in the AGW hypothesis.

    It is simply and utterly BACKWARDS to assert that the audience must show how the trick is done on stage. All they need do is say “That ‘saw the woman in half’ looks fishy to me” and start looking for a “how”. (It helps NOT to accept the invitation to sit front and center… instead “accept nothing” and head to the wings instead … while your assistant looks in the basement…) The correct behavior is to require the magician to open the cabinet while running the saw through.

    Oh, and any number of hypotheses about how the trick is done may each be wrong. That does not prove the trick is real. It only proves that the trick is a good one…

    OK, with that said:

    A. It’s a nice thing to have a comparison data set, but not very useful as proof that the former is right. It may just as well mean that the latter is wrong or that the ‘calibration’ between them is off. (Not the calibration of the RSS to reality, only the alignment of the comparison).

    B. The QA process (linked in another article here) uses the ASOS as the standard. If it finds data too far away from the vote of the ASOS, it silently substitutes an average of nearby ASOS for that daily value. Not very useful to then say that the following homogeneity checks in other codes find nothing. No, I’ve not “proven” that fully. But it is sufficient to show that ASOS is used without challenge early in the process, so can not be tested via the result later in the process.
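    As a rough sketch of that silent-substitution step (assumed behavior pieced together from the QA description above; the function name and the 5 °C threshold are hypothetical illustrations, not the actual NOAA code):

```python
def qa_substitute(value, neighbor_values, max_dev=5.0):
    """If a daily value strays too far from the mean of nearby ASOS
    stations, silently replace it with that mean. No flag is recorded,
    so the original reading simply vanishes from the downstream record.
    Names and threshold are hypothetical, for illustration only."""
    neighbor_mean = sum(neighbor_values) / len(neighbor_values)
    if abs(value - neighbor_mean) > max_dev:
        return neighbor_mean  # silent replacement with the ASOS average
    return value
```

    The point is structural: once a step like this runs, later homogeneity checks compare the record against itself, so they cannot catch an error in the standard being substituted in.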

    C. It’s not possible to say much at all about what does or does not “make sense” for the ocean. We measure a thin skin on top. It has depths we can barely reach for brief periods of time. Water moves all over the planet between these temperature zones AND takes a load of phase changes. Until you can fully measure all of that: any discrepancy is not a proof of anything, it only “informs our ignorance”.

    D. See item C. Look, I’m not going to get sucked into a bunch of additional hypotheticals. There are all sorts of times folks have made exactly the error you are making in asserting “it can’t be, as these agree”. For almost all decent computer fraud, the major external audit figures are MADE to agree. For many other cases, folks end up calibrating to each other (deliberately or via subconscious acts … i.e. bugs). Long before “going there”, one needs to simply follow the data and see what is done to it.

    And no amount of hypotheticals nor of ‘look it matches’ changes that. EVER.

    Bad code is written by assuming that a right answer once, or an answer that matches expectations sometime or other, is VALID. It isn’t. It’s VALIDATED when it’s had a thorough end to end audit, been run on a load of benchmark data, been vetted in production for a few years without error, and had code reviews by a few dozen hands. And even THEN you don’t TRUST it. You stamp it “validated” and keep an eye on it… Someone might just update the compiler or libraries on you and start giving errors.

    E. Got a pointer to a data source / analysis? And WHICH ASOS data was audited? The feed from the high precision sensor, or the data after type conversions et al. and run into GHCN? Look, I’m willing to accept that the hardware is probably pretty good. I’m even willing to state that the specs and manual say you CAN get from it a decent record.

    What I’m not willing to say, until proven by process inspection and audit results, is that those data make it to the final GHCN product unscathed.

    It’s quite clear that the METARs are rounded UP to whole C. One can hope those are not used for anything but aviation, though we’ve seen folks using them as a kind of audit / cross check. At a minimum those folks need to be informed that they have a “Round up whole C” issue to deal with. I could easily see folks in the “climate science” arena using METARs inappropriately in some process and not realizing it. (No, I don’t know they are doing it. I just see a potential big issue to be checked.)
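    The size of the potential issue is easy to sketch. If readings are always rounded UP to whole degrees C, a systematic warm bias of roughly +0.5 °C accumulates, while round-to-nearest averages out. (Simulated data only; this illustrates the arithmetic, not any actual METAR processing code.)

```python
import math
import random

# Simulate "true" temperatures with uniformly distributed fractions.
random.seed(42)
true_temps = [random.uniform(0.0, 30.0) for _ in range(100_000)]
true_mean = sum(true_temps) / len(true_temps)

# Always round UP to the next whole degree (the suspected METAR behavior):
ceil_mean = sum(math.ceil(t) for t in true_temps) / len(true_temps)
# Round to nearest whole degree, for comparison:
near_mean = sum(round(t) for t in true_temps) / len(true_temps)

bias_up = ceil_mean - true_mean    # roughly +0.5 C of systematic warming
bias_near = near_mean - true_mean  # roughly 0 C; errors cancel out
```

    Half a degree C of built-in bias is enormous next to the few tenths of a degree per century being argued about, which is why it matters who is consuming the rounded-up product.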

    It’s also clear that many ASOS as reported on Wunderground read very high in comparison to surrounding areas. That could be siting. It could be bogus hardware design. It could be software. It could simply be choosing the “wrong” data feed and getting a “round up error”. In any case, that is what the DATA shows. The ASOS read high.

    No amount of song and dance, nor attempted deflection off to RSS or drowning in the sea will change that fact about the data.

    So that “inconvenient data” needs exploration.

    (And yes, I’m quite willing to postulate that at the end of the day it may simply be that Wunderground is using METARs rather than the DS feed. That, too, would be worth finding. Though it would raise the point that folks are using a “Round up whole C” product without knowing it; and beg the question who else is doing the same.)

    So maybe at the end of the day we find that NOAA have a nice process (Byzantine as the data flow chart in the ASOS manual may be) that actually does do ’round middle’ properly. Part of a forensic audit is often to simply find that the process is so convoluted that the error RISK is large, even if mitigated. And I’d be happy if that is the case too.

    Our basic difference (which will not go away unless you change) is pretty simple:

    You are willing to leap off a cliff of conclusion that everything is just fine because it looks nice from the outside or in comparisons with other nice looking things.

    I’m willing to hunt for potential cockups in things that look nice from the outside; and will not let go of a line of investigation until I’ve followed the data (and the code) from beginning to end. And even then I’ll be willing to have a second, third, or fourth go at it. Especially so when it’s not so nice looking from the outside (like those ASOS records in Wunderground that are consistently hotter than surroundings).

    One final note: Please drop the patronizing on simple math. I’ve had college level statistics, a couple of years of calculus through partials, exposure to ‘non-standard math’ (and love the fact that it handles division by zero and multiple infinities in a rational way) and I’ve got a Math Award in my history (along with a string of A grades). I have tutored folks in math, and been taunted with “little professor” during parts of my life. To have you be so foolish as to think I need instruction in rounding really makes you look the idiot. I understand the math Just Fine. I’m simply unwilling to accept on faith that other folks wrote perfect code and developed perfect systems when they have so many moving parts with different roundings and type conversions going on (especially in the context of hot ASOS in the public display). So please afford me at least the courtesy of a presumption that I’ve mastered basic arithmetic….

    (For a good time, read “Beat the Dealer”. I did just out of high school. Managed to make money in Reno a few times, then got bored. For a better time read “Beat the Market” by the same author. A bit more difficult, but not much. The foundation of the modern hedge fund industry. Then, for a nice bit of recreation, read “Is God A Mathematician?”. I love it. That’s the kind of thing I do for recreation…

    https://chiefio.wordpress.com/2010/07/10/newton-and-global-warming/

    Then if you are up to it, take a look at some of the fundamental math (not arithmetic) issues with this whole temperature as warming indicator issue:

    https://chiefio.wordpress.com/2010/07/17/derivative-of-integral-chaos-is-agw/

    Do you really think I need you to tell me how ’round middle’ works? )

  19. P.G. Sharrow says:

    Nice reply E.M. :-) pg

  20. Tony Hansen says:

    The two posts by Steven Mosher don’t read like moshpit’s writings normally do.
    Is there another mosh ?

  21. David says:

    When one looks at the extremely chaotic (ever changing for different reasons) system of measuring the planet’s “mean” temperature, which is in and of itself a chaotic system, the measure of which is affected by land use and UHI effects in addition to changes in local surroundings and/or changes in location, as well as potential coding problems, any reasonable person would indeed question the possibility of achieving an answer to the global energy budget of the planet, let alone the mean temperature.

    That being accepted, what do the ocean buoys ONLY, which measure the air temperature at a fixed location, tell us?

    This, in combination with the satellite record, should over a 60 year period give us the beginning of an answer.

    At any rate, regarding “…the ocean would be warming faster than land record which would make no physical sense”: as far as we can tell the oceans are not warming at all, and therefore, if true, it is likely that any atmospheric changes are a reflection of ocean surface cycles, nothing more.

  22. E.M.Smith says:

    @Tony Hansen: I suspect that Mosh is “channeling” some other folks with whom he is speaking, and who have discovered that postings / links of the form “You are an idiot because” don’t see the light of day (as I like folks to be polite). Mosh, being polite, has no issues. So I suspect some ‘suggestions’ have been sent his way.

    All purely speculative and with no hope at all of clarification. But such is the way of life.

    It’s also just as possible that he’s simply a true believer in the “lukewarmer” position and is espousing / defending their (not too well thought out) rationale.

    In either case, my answers stay the same. Follow the data. Inspect every single thing done to it at every single step. Accept nothing that is offered to you by your opponents. Suspect everything until well vetted and well proven (and even then, re-check it from time to time). Expect that there will be errors, cockups, and bugs. Never attribute to malice that which is adequately explained by stupidity… but remember that there can be malice… And above all else never be so full of arrogance as to think that anyone, including yourself, really knows the answers.

    I’d also add a minor point: ANY oddity is cause for intense investigation and suspicion. The complete lack of oddity is even more cause… ( The Paranoid Cop mantra ;-)

    Why that behaviour set? Experience.

    Everything from bugs that can’t be but are (in code) to people who seemed completely honest and moral (who were not as I caught them in the act) to systems that ought to be stable (but were not) to… After a while you figure out there is a pattern here. Perfection is substantially never achieved and low expectations are often met.

    So when you have an absolutely perfect match between two things, well, the odds of that are nearly zero… It does not confirm they are right, it suggests an issue. (Many a crook and bug have been caught by the too-perfect cover.) And when things are way out of whack, it’s pretty clear there is an issue. There is a very small zone of ‘just reasonable’ error that is usually an indicator of things done properly. If you can vet everything AND get that result, you’ve probably got a decent audit done.

    Or simply: If a $Million corporate account balances to the penny, something is probably wrong. If it balances to the $10,000, you’ve got a mouse to trap. If it balances to within $1-$100, it’s probably right (provided the systems behind it are reasonably clean and appropriate). Between $100 and $10,000 I’d suspect a cockup somewhere.

    FWIW, I was a ‘night auditor’ at a hotel for a while. Had to close the books and posting machine each day. I spent endless hours finding where folks had posted a room, then ditched the card (back when things were done on cards…) but didn’t put an error entry in the error book. After a while you get good at knowing what the cockup was even with no real evidence. IIRC, short $52.50 was a Double Queen room, posted, but not zeroed nor paid (and probably in the trash bin…) while half that was a change of occupancy not noted in the errors and corrections log. It was a long night getting the machine to match “to the penny”… but I left an audit trail so folks could confirm it was in fact a correct to the penny answer… (showing where the ‘reasonable’ errors had been found).

    You do that kind of thing day in and day out for a few years, add in some police training and security work (and catching a couple of teams of bad-guys), spend a while managing software QA and doing debugging and code reviews (both forensic and otherwise) and after a while you start to understand why it’s a very bad idea to leap to the conclusion that everything is just fine as things look sort of like they ought to look on the surface…

    Yeah, for every 10 ideas of how it’s wrong, you may find 9 of them were bogus. BUT that one where you catch an “issue” is more important than all the blind alley investigations.

    Getting that mind set across to “climate scientist” types who don’t even have a regression test nor a QA suite at all, and who are happy to leap to conclusions on “something matched once so it must be right forever” is, er, difficult…

    And I suspect it frustrates Mosh that I won’t “learn” from him to “accept” that some of these things “are right”. Thus the repeated dumbing down of the pitch and the emotional tenor. Now if he would just “learn” from me that “this behaviour is by design”… 8-)

    Oh, and one correction to my earlier reply. I’d said it would be considered “validated”. That bothered me in a subtle way (enough to have it wake me up..) The reality is that I would say in written documentation “This code has passed the validation suite”. For the simple reason that I *know* there will usually be some bug or other that has not yet been found. Most of the time an irrelevant one, and often one that is only rarely encountered. But it would stick in my craw to pronounce any code “Validated” as someone might think that meant it was correct and bug free… but I’m happy to say it is ‘fit for purpose’ and ‘passed the validation suite’… Yeah, I’m that careful about what I assert is “known”.

  23. Tony Hansen says:

    EM
    Thanks for the reply.
    Very much agree on the investigative ideas and accepting nothing.

    Just a comment on this part….. “Yeah, for every 10 ideas of how it’s wrong, you may find 9 of them were bogus. BUT that one where you catch an “issue” is more important than all the blind alley investigations”.

    The blind alley, or what I would call a dry gully (follow it as far as you like but there ain’t no water), has mostly been an excellent learning experience. Not just because it fine tunes my understanding of the system but it also provides little ‘dig here’ flags – not for anything directly connected to the job at hand – just bits that need to be checked… which inevitably lead to another bunch of blind alleys…. and every so often something that is very interesting.

  24. E.M.Smith says:

    Frankly, I suspect that’s the reason I do this… It’s not for any particular outcome; it’s just for the ‘thrill of the chase’ and the ‘joy of discovery’… Figuring out where all the parts of this clockwork fit together.

Comments are closed.