GHCN v3 A Question Of Quality

Sometimes when writing code you make assumptions. Quite valid assumptions. That turn out to be quite wrong.

Sometimes it is not your fault.

Sometimes the data sucks.

I’ve ported my dT/dt code to run on both the v1 and v3 versions of GHCN.

It does a ‘first differences’ anomaly processing on EACH thermometer record before doing ANY combining, so it is about as pure and clean an ‘anomaly’ process as you can get. The only real ‘twist’ to it is that I do the ‘anomaly’ creation process starting with the most recent data point and going backwards in time. The most recent data ought to be the best, so this puts any extreme excursion from very old and questionable instruments or processes ‘at the start of time’ for that particular thermometer. In this way, a thermometer at a place that was first being read in 1720, perhaps even in some entirely different scale, like Reaumur, does not cause all future readings to be offset by whatever oddity it might have had in the first reading.

First Differences simply takes the most recent reading you have and subtracts it from the next (older) one. So if you have 20.1 today and the previous reading was 19.1, the difference would be -1 C. No problem.
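
The walk-backwards step can be sketched like this (a hypothetical Python sketch for illustration, not the author’s FORTRAN):

```python
# Hypothetical sketch of "first differences" taken newest-first, so any
# oddity in the very oldest reading lands at the start of time for that
# thermometer instead of offsetting everything after it.
def first_differences_newest_first(temps):
    """temps: readings ordered oldest -> newest.
    Returns the (older - newer) step at each point, walking back in time."""
    diffs = []
    for i in range(len(temps) - 1, 0, -1):
        diffs.append(round(temps[i - 1] - temps[i], 2))  # older minus newer
    return diffs

# 20.1 today, 19.1 the reading before: the difference is -1.0 C
assert first_differences_newest_first([19.1, 20.1]) == [-1.0]
```

Because each step is a difference between adjacent readings, a one-time offset in the oldest reading affects only the very first (oldest) difference, not the whole series.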

I create the report in several steps. This is often an easier way to program, and it is frequently very useful to have intermediate data files for things like debugging, or for feeding the intermediate form into another ‘great idea’ that comes along (without needing to reprocess all those first steps). In particular, I combined the Inventory File with the Mean Temperatures data to make a combined file where each record is identified. (This is how it really ought to be, since the data in the inventory file change over time in the real world, but only the most recent point in time is captured in the INV file. A land use, for example, may be AIRSTATION today, but I assure you it was not so in 1800…) Then I create a version of that file where all the temperatures have been put through the dT “first differences” processing. At that point the data can be used to make all sorts of interesting reports (and more easily, since the meta data are attached to each record).
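
That ‘attach the metadata’ step amounts to a keyed join on the 11-character station ID. A minimal sketch (hypothetical field positions, not the author’s actual code):

```python
# Sketch: append each station's inventory (metadata) line to every one of
# its mean-temperature records, keyed on the 11-character GHCN station ID.
def combine(mean_lines, inv_lines):
    inv_by_id = {line[:11]: line[11:].rstrip("\n") for line in inv_lines}
    for rec in mean_lines:
        station_id = rec[:11]                 # first 11 chars identify the station
        meta = inv_by_id.get(station_id, "")  # empty if no inventory entry
        yield rec.rstrip("\n") + meta
```

Each output record then carries its own location and land-use description, so later report steps need not re-read the inventory file.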

Why That Matters

Here I was making dT/dt reports by country and by region, comparing v1 with v3, and looking for patterns. I went to do one for “North America” (using a “country code” of “4” – yes, just “4”. I search on the string, and the first digit of the country code is the continent area). My program “blows up” with a run time “data type error” on reading input records. It is trying to read character data into an integer variable, and that is forbidden in FORTRAN. ( “C” will let you do it, though ;-) In this case the “not letting you do it” is a feature of FORTRAN.)

Now I’m worried. Have I “blown the port”? Is my “FORMAT” statement “off by one”? A common error is to have numbers and letters near each other and to get your ‘framing’ off in matching the “READ” format to the actual data. So you might have “1995LUFKIN 20.1” and be trying to read it as “995L” and “UFKIN” by being ‘off by one’ and looking just one character too far to the right. In some cases that kind of bug will run FINE on 99%+ of the input data, yet fail when one extreme valid value comes along. So, for example, “20.1” being turned into “0.1” might never be noticed. Make a temperature of, say, “22 C” into one of “122 C” and a human being might notice, but a computer program only notices if you told it to check – to do “QC” or “QA” for out of range data.
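
The framing bug is easy to demonstrate with fixed-width slices (a hypothetical record layout, for illustration only):

```python
# An 'off by one' in fixed-format framing: shift every field one character
# to the right and letters land in the "numeric" field.
line = "1995LUFKIN 20.1"
year, name, temp = line[0:4], line[4:11], line[11:15]  # correct framing
assert (year, temp) == ("1995", "20.1")
assert name.strip() == "LUFKIN"

bad_year = line[1:5]       # shifted framing gives "995L"
assert bad_year == "995L"  # FORTRAN raises a data type error on READing this
                           # into an integer; int(bad_year) in Python would
                           # likewise raise ValueError.
```

The lucky part is that the letter makes the failure loud; a purely numeric misframe like “0.1” sails through silently.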

In my program, I had done no ‘range sanity checking’. It is a common choice for programmers: “bounds check” the input data (to catch obviously broken cases of “insane” data), or not? Is the party or program handing you data one that can be “trusted”? Has it already done the “sanity checking” so the data are guaranteed to be “in bounds”? That trade-off was one of the very first things I learned in my FORTRAN IV class many decades back. (To this day I marvel at how much ‘that really matters’ about programming I learned in that one class. Problem sets cleverly designed to force you to run into things like out of bounds data and ‘the typical problems’ with the typical bugs.) The question became: “Was it something I did?”

Was my program “broken” in some subtle way?

So off I went debugging.

It wasn’t about me…

I did discover, in one of my intermediate files, an “anomaly” that was all asterisks. FORTRAN does that for you when you tell it to print a number into a field that is too small for it. ( “C” will just let you do it. Sometimes “just do it” is a feature, and then “C” is the better language. For engineering work, the behaviour of FORTRAN, where it “barfs” on things that are probably an error, helps to discover “boo boos” better. There are times I like FORTRAN better. This is one of them.)

So WHY did that field have asterisks? Were MY numbers off? Did I have an ‘off by one’ on the size of the numbers and overflowed the size of the field? All the other numbers looked about right.

Swimming further upstream, I found that the record in question from the v3 mean temperature file was in error.

Inside GHCN v3

Cutting to the chase… There were 3 records for North America that had “crazy hot” temperatures in them. They fit in the 5 space long field in the v3.mean type file (that is in 1/100 C without the decimal). The field might say “-5932” for a -59.32 C reading in Antarctica, for example. One would expect values below about “5000” (50.00 C) for most of the world. (This is where range checking can be fun… just what IS the highest temperature ever, and how much ‘head room’ do you leave above it for that new record to show up? Knowing that it may let SOME errors come through undetected…)

In doing my “create the anomaly values” step, a Very Large Positive can become a Very Large Negative after the subtraction. Then there may be no room for the minus sign in the 5 digit space. And you get asterisks. And your report “blows up”…
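
The overflow is easy to imitate. FORTRAN fills the field with asterisks when the number won’t fit; Python silently widens the field, so this sketch fakes the FORTRAN behaviour:

```python
# Format an integer into a fixed 5-character slot, FORTRAN-style: if it
# does not fit, you get asterisks instead of a number.
def fortran_i5(n):
    s = "%5d" % n
    return s if len(s) <= 5 else "*" * 5

assert fortran_i5(13810) == "13810"         # the crazy value itself fits
assert fortran_i5(2700 - 13810) == "*****"  # its first difference (-11110) does not:
                                            # the minus sign needs a sixth character
```

So a “crazy hot” value can survive the data file unscathed, then detonate one step later when the anomaly goes large and negative.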

That is exactly what happened.

Looking at the v3.mean data, there are 3 records for North America with “insane” values. Simply not possible.

Yet they made it through whatever passes for Quality Control at NCDC on the “unadjusted” v3 data set. They each have a leading “1” in the bad data field – five digits, so over 100 C. Yes, each of them says that it was more than boiling hot. In one case, about 144 C.

You would think they might have noticed.

Here are the records, as extracted from the ghcnm.tavg.v3.1.0.20150511.qcu.dat file. Yes, that is the “unadjusted” file. But one might have thought that “insane” values would not be included. I’ve yet to check the .qca.dat “adjusted” one to see if they are removed from it. It would be a heck of a “Hobson’s Choice” to be stuck with either accepting ALL of their “adjustments” or having “insane values”; but that looks like it may be the case. (Welcome to ‘raw’ data…) So it looks like I’ll be needing to add a step of “compare qca to qcu” to see how much changes.

This record is from CHILDRESS. Notice that the seventh temperature field (near the middle of the record so scroll just a touch to the right) is 13810. That’s 138.10 C. The data then go to “missing data flags” of -9999 from that point forward. I suspect it was a ‘keying error’ and the value was supposed to be 13.81 but got shifted ‘off by one’, but it could just as easily be that the sensor simply went nuts. BTW, this is part of the QA process that was lost when we went to automated thermometers instead of having people read them. A person would say “120 F in winter? No way” and go get a new thermometer. Automated systems typically don’t know winter from summer or that it FEELS like it’s about 60 F today so I ought to suspect an error in that 80 F reading… If a sensor goes “a little bit bad”, the data will be blindly accepted. Heck, even a whole lot of “crazy bad” looks to be let through.

425723520021996TAVG-9999     890  G  890  G 1700  G 2530  G 2700  G13810 OG-9999   -9999   -9999   -9999   -9999  
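
Pulling the fields out of a record like the one above is straightforward fixed-width slicing. The layout assumed here (11-character station ID, 4-character year, 4-character element, then twelve groups of a 5-character value in 1/100 C plus three 1-character flags) matches these records, but check the GHCN v3 README before relying on it:

```python
# Parse one GHCN v3 .mean record into (station_id, year, element, months),
# where each month is (value_in_hundredths_C, dm_flag, qc_flag, ds_flag).
def parse_v3_mean(rec):
    station_id, year, element = rec[0:11], rec[11:15], rec[15:19]
    months = []
    for m in range(12):
        base = 19 + 8 * m
        value = int(rec[base:base + 5])      # 1/100 C; -9999 means missing
        dm, qc, ds = rec[base + 5:base + 8]  # data-measurement / QC / source flags
        months.append((value, dm, qc, ds))
    return station_id, int(year), element, months

# The CHILDRESS record from the post:
rec = ("425723520021996TAVG"
       "-9999   " "  890  G" "  890  G" " 1700  G"
       " 2530  G" " 2700  G" "13810 OG" + "-9999   " * 5)
sid, year, elem, months = parse_v3_mean(rec)
assert (sid, year, elem) == ("42572352002", 1996, "TAVG")
assert months[6][0] == 13810 and months[6][2] == "O"  # July: 138.10 C, QC flag "O"
```

Note the July value parses cleanly: 138.10 C is a perfectly well-formed number, which is exactly why only a sanity check, not the parser, can catch it.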

This is the DALLAS/FAA record:

425722590021996TAVG  750  0 1250  0 1280  0 1870  0 2700  0 2860  015440 O0-9999   -9999   -9999   -9999   -9999   

Again we notice that it’s a 1996 record (the first 3 digits are the Country Code – 425 for the USA, then 8 digits of WMO and instrument identifier, then 4 digits of YEAR). Again it is the 7th temperature field (or July) that is in error. 154.40 C in Dallas. Who knew? And again we go to “missing data flags” the rest of the year.

So just how trustworthy are the May and June values? Did the sensor cleanly and suddenly die just in July? Or has it been on a long slow drift to incorrect high readings for a year (or since the last calibration)? How many bogus high values are accepted as “close enough”? Are these sensors prone to “fail high” readings? Or is there a random distribution of “fail high” and “fail low”? I suspect that is a question for folks like Anthony Watts, who knows more about temperature stations than anyone else on the planet, near as I can tell.

At any rate, it might be interesting to compare these stations to “nearby” stations for the several months or years prior to these “data farts”, to see if the offset stays constant into a catastrophic failure, or if there is a long slow drift that is accepted into the record until the “blow up” happens. Clearly if 154 C makes it in, 48 C would too… and even 28 C when 27 C was the actual temperature.

This one is from LUFKIN:

425747500011996TAVG  950  G 1310  G 1310  G 1920  G 2640  G 2670  G14420 OG-9999   -9999   -9999   -9999   -9999   

Again 1996 and July, followed by missing data flags. 144.2 C.

In a comment on another posting, DocMartyn found that the temperature “ramp up” matches the onset of electronic thermometers and the use of short RS-232 connector cables. The typical assumption has been that it was pulling the stations closer to buildings and power sources, but might there also be a ‘cumulative failure mode’ impact over time as well?

BEST vs GHCN cumulative “anomaly”

In earlier work, I’d found that the bulk of the “warming” came from the most recent “Mod Flag” and speculated that something about the processing of the newer data was suspect. There was also the early failure mode of some of the instruments where they would “suck their own exhaust” and pull in hot air from their humidity testing heaters. To that we can now add some suspect data from a non-graceful failure mode.

Basically, one simply must ask: “Just how good, or bad, is the quality checking on these electronic gizmos?”

The same records, extracted from my “combined with Inventory information” file, so you can get more information about them (though you will need to scroll a LOT to the right… it’s a long record ;-)

425722590021996TAVG  750  0 1250  0 1280  0 1870  0 2700  0 2860  015440 O0-9999   -9999   -9999   -9999   -9999      32.8500  -96.8500  134.0 DALLAS/FAA AP                   148U 4037FLxxno-9A 1WARM CROPS      C

425723520021996TAVG-9999     890  G  890  G 1700  G 2530  G 2700  G13810 OG-9999   -9999   -9999   -9999   -9999      34.4300 -100.2800  594.0 CHILDRESS/FCWOS AP              565R   -9FLDEno-9A-9WARM CROPS      B

425747500011996TAVG  950  G 1310  G 1310  G 1920  G 2640  G 2670  G14420 OG-9999   -9999   -9999   -9999   -9999      31.2300  -94.7500   85.0 LUFKIN/FAA AIRPORT               70S   30HIxxno-9A10WARM DECIDUOUS  B

In Conclusion

So that’s where I got to in last night’s “Coding Frenzy”.

I was planning to post up some v1 vs v3 comparison reports (which have been run), then ran into this “bug” and spent until 3 am chasing phantoms only to find that “It isn’t about me” and it was crappy input data.

Yet that discovery points to a very interesting potential issue. IFF 154 C can just flow through until whatever “Magic Sauce” is applied at NCDC to remove it in “QA” processing, then we are 100% dependent on their “QA” process to catch such errors, and to catch the more subtle errors that do not fall into a “sanity check” bucket.

How many thermometers might read 1 C high for a year? Or 0.4 C high for two years? And never be ‘outed’ by the “QA” code?

We just don’t know.

But we do have a very suspicious “onset” of the ramp in warming right at the time the electronic systems are rolled out (found by two folks using entirely different methods) and long after CO2 had been increasing for decades.

IMHO this is more than enough of an “issue” to put some Liquid In Glass thermometers in selected locations to “Guard the Guardians”… ( from Quis custodiet ipsos custodes? )

We are, in essence, fully dependent on some rough sieve computer programs checking some occasionally insane automated data entries to determine if there is Global Warming, or not. I find that inadequate.


About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present “hot buttons” are the mythology of Climate Change and ancient metrology; but things change...
This entry was posted in AGW and GIStemp Issues, NCDC – GHCN Issues.

15 Responses to GHCN v3 A Question Of Quality

  1. Peter O’Neill says:

    Note the codes “OG” following the crazy values. There is some QC/QA.

    O = monthly value that is >= 5 bi-weight standard deviations
    from the bi-weight mean. Bi-weight statistics are
    calculated from a series of all non-missing values in
    the station’s record for that particular month.
    G = GHCNM v2 station, that was not a v2 station that had multiple
    time series (for the same element).
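
Those biweight statistics can be sketched as follows (Tukey’s usual tuning constants, c = 6 for location and c = 9 for scale, are assumed here; NCDC’s exact constants and implementation may differ):

```python
import statistics

def biweight_location(xs, c=6.0):
    """Robust 'mean': points beyond c MADs from the median get zero weight."""
    m = statistics.median(xs)
    mad = statistics.median(abs(x - m) for x in xs) or 1e-9
    num = den = 0.0
    for x in xs:
        u = (x - m) / (c * mad)
        if abs(u) < 1.0:
            w = (1 - u * u) ** 2
            num += (x - m) * w
            den += w
    return m + num / den if den else m

def biweight_scale(xs, c=9.0):
    """Robust 'standard deviation' on the same down-weighting principle."""
    m = statistics.median(xs)
    mad = statistics.median(abs(x - m) for x in xs) or 1e-9
    num = den = 0.0
    for x in xs:
        u = (x - m) / (c * mad)
        if abs(u) < 1.0:
            num += (x - m) ** 2 * (1 - u * u) ** 4
            den += (1 - u * u) * (1 - 5 * u * u)
    return (len(xs) * num) ** 0.5 / abs(den)

# Made-up July means (1/100 C) for one station, including the insane value:
# it sits far more than 5 biweight SDs out, so it earns the "O" flag.
julys = [2700, 2650, 2800, 2750, 2700, 2680, 2720, 13810]
loc, sd = biweight_location(julys), biweight_scale(julys)
assert abs(13810 - loc) >= 5 * sd   # flagged
assert abs(2700 - loc) < 5 * sd     # a normal July is not
```

Because both statistics are anchored on the median, a single wild value cannot drag the mean or inflate the SD enough to hide itself, which is why this test catches it where a plain mean-and-SD test might not.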

    I checked for Childress, and that crazy value does not make it into the QCA file – it becomes -9999. I presume the same would apply for the others. That crazy value was -9999 in the most recent v2.mean file from GISS, so the problem seems to have arisen when reformatting old v2 data for v3.

    I’ve identified another possible data acquisition problem, which I’m taking up with my national MET office, and notifying to GHCN. CLIMAT reports which depart from the WMO specification may not be understood by GHCN, although OGIMET, for example, does extract the information correctly even when it encounters the particular non-standard encoding which Met Éireann has used for the most recent Valentia Observatory data (the station changed to AWS from April 2nd 2012).

    Different QCU versions in May have various incorrect April mean temperatures for Valentia (I would guess garbage values taken from uninitialised memory locations – that’s FORTRAN for you). As the values I’ve seen have been more than 5 SD below the previous April mean, these got flagged and removed in the corresponding QCA file, although if these are garbage values another version could well have a plausible value which would not be excluded.

    It is probably a slightly more complex coding problem, as a later CLIMAT report for another Irish station, with similar non-standard encoding, is processed correctly. Five stations with CLIMAT reports which do follow WMO encoding regulations separate the two with non-standard encoding, so this may explain the difference in processing of the two non-standard encoded CLIMAT reports. My heavy duty FORTRAN programming ended more than twenty years ago, however, so I have some difficulty now “designing” code to understand how it behaves this way.

  2. E.M.Smith says:

    @Peter O’Neill:

    Good Stuff!

    Thanks for saving me the bit of leg work to dig that out of the qca version.

    Frankly, the amount of “garbage” that gets into this kludgey system to begin with, and then just gets carried along, is still a bother.

    There is a giant “vetting” and “validation suite” operation that ought to have been done, but never has been done. It is a system that “just growed” and has all sorts of discontinuities and opportunities for error all through it. (Not least of which being the whole M vs “-” issue…)

    Sometimes I wonder if the data can ever be made non-garbage enough to be useful.

    I’m ever more of the opinion that a dozen or two very long lived very well tended stations with data from original records would do more to inform us about global changes than this pot of ‘Mulligan Stew’, no matter how much we stir it…

  3. Kevinm says:

    C sharp makes it really easy to do these calculations, especially if you can get the data into a database and use SQL commands. I’m surprised Fortran still exists. I used to resist Microsoft’s crappy IDE efforts when the earlier versions of Visual Studio came out, but it’s gotten so easy to work with now. Like a thousand monkeys banging on typewriters, they’ve finally produced something not really Shakespearian but at least occasionally rhyming.

  4. E.M.Smith says:


    FORTRAN isn’t very hard to use. There are some programming metaphors that I wish were in it, but frankly I’ve had a lot more difficulty doing simple things like this in languages like C.

    For some reason beyond my ken, many languages have no easy way to do fixed format data entry / output. Yes, I can do it in “C”, but the fact that it is not built in, and instead lives in a variety of glued-on library functions, is just dumb.

    So I wouldn’t denigrate FORTRAN for this kind of stuff. Frankly, the “default” behaviour of nagging on some errors, as seen here, is more often a feature.

    Yes, I’ve used a variety of DB products. (For a decade or so I was a DB consultant.) And I’d be happy to have the data in a database and use such tools. (In many ways the “non-procedural languages” used in relational DBMSs are dirt simple for this kind of thing, and it would be easier to do some of these things.) But the trouble it takes to get one installed and get the data loaded has been greater than the minor annoyance of using FORTRAN. Maybe it’s just that I’m quite comfortable with the FORTRAN metaphors.

    At any rate, I’ve used so many different languages on so many different systems that I’m just not all that much bothered by using whatever tool is to hand.

    Swapping between them usually takes me about a day (sometimes more) and it’s easier to just keep going with the present one for most projects. Maybe it’s just the decades spent as a consultant where you walked into the shop and had to use whatever they were using…

    FWIW, in some perverse way, I like PL/1 more than others. One could write just about any language style inside of it. When I was a FOCUS DBMS consultant, I was helping folks use PL/1 to access data on an IBM style mainframe. You could see a program and it would just shout “I learned FORTRAN” or “I speak Pascal” or “COBOL is my first language”… yet all were written in PL/1. I hate to think what the compiler looked like ;-)

    My “second language” was ALGOL 68 and I really wish it had gotten more life. Found an online compiler for it, but have not had time to port / install it. I once used its facilities to take down the campus mainframe… It has a facility where you could spawn a new program by saying “Task ‘programname'”. The site operators had peeved me… so I wrote a simple set of two programs.

    Program A: “WHILE TRUE DO Task B DOEND;”
    Program B: “WHILE TRUE DO Task A DOEND;”

    Not the exact syntax, but you get the idea.

    An operator could kill a program by typing “DS {task ID}” but had to find each task number and type each one by hand. Launching “Task A” caused an exponential growth of tasks at computer speed…

    Less than 2 minutes and things were slow. Shortly after that they had to reboot the whole thing.

    It’s a bad idea to annoy the hacker…

    At any rate, playing “Which language is best?” is a slightly interesting game; but the reality is that “whatever language you know well” is “best”.

  5. John F. Hultquist says:

    “. . . things I learned in my FORTRAN IV class many decades back. (To this day I marvel at how much ‘that really matters’ about programming I learned in that one class. …”

    Daniel D. McCracken died less than a year ago at age 81.

    We learned with FORTRAN II D – the D was for double-precision, apparently – but moved quickly to IV. A long ago classmate saw McCracken’s obituary and sent a note about our mid-60’s debugging of IF THEN and DO loop statements, and all that fun stuff. Assuming you also used a McCracken text – pause and remember.

    Then you write: “IMHO this is more than enough of an “issue” to put some Liquid In Glass . . . ” and, of course, I jumped to the wrong conclusion, being thirsty and all.

  6. Dave says:

    This site and WUWT inspired me to purchase my own Davis Vantage Vue station. This January I added a WeatherLink IP so the data is now uploaded. I attempted to be very precise about how I set up my station, noting the position and elevation using calibrated GPS readings. Once I started logging data I noticed there were two other stations very close to me. One is owned by the State of WA or maybe USGOV, not sure which.

    It is almost always +5 F when compared to my station and my neighbors’. I have not been able to locate it physically, but one idea I had was that it is located on the Ferry Dock right near my house and they have the LAT/LONG wrong, placing it up the hill. But take a look at the dewpoint reading:


    I think a lot of these instruments, or the systems that record their data, will default to 9s when that particular sensor is not available. The one thing I have found interesting is the amazing variability in temperature just for an area the size of Seattle. With all of this “unbiased” raw data flowing into Wunderground, it would be interesting to gather their entire data set and see if there is any evidence of warming. My guess is that there isn’t. Not sure how one would go about collecting that and normalizing it accurately.


  7. E.M.Smith says:

    @John F. Hultquist:

    Who says it was the wrong conclusion!? ;-)

    (Thinking back on it, just after that posting came a bottle of Merlot… so perhaps there was a subconscious suggestion at work … )

    Yes, had the McCracken text. Big red IV on the front. It may be in my boxes of old books…

    OT SIDEBAR: CNBC is announcing that S&P is cutting ratings on 5 Spanish Banks. Talk of more Greek Exit from Euro and potential for a melt down of the whole Southern Europe zone.

    Back on FORTRAN and McCracken: A former roommate from college (a software engineer) told me when he passed. He had fond memories of the same FORTRAN class at the same school and the same book…


    On my “infinitely expanding ToDo list” is to get 3 stations and set one up over pavement on the driveway, one under a tree over grass and one near the buildings (perhaps under the patio overhang). That, then, lets me “calibrate” the degree of corruption that comes to readings even in a 100 foot area from pavement and buildings. Also would let me cross check against the local “official” thermometer at the airport…

    Perhaps just getting a “local consortium” would be better…

    The basic idea is to deliberately set up some stations “badly” and measure how much it biases the results. As most people at least try to do it right, most stations just have “suspicions” and not “clearly wrong”. Those that ARE clearly wrong (like the one in the middle of the tarmac at some Univ. in Ariz. IIRC) have little “right” near them with which to compare. (One ends up with a station a few miles away in somebody’s back yard – so it gets aspersions cast as the other one is “official”…)

    At any rate, I’m thinking of just ordering three of WUWT’s smaller recording thermometers and putting them in my own personal screens (as long as they are the same they can be compared) and doing a faster, if less perfect, comparison.

    IMHO the combination of bad siting, airport bias, electronics that drift to high readings, and dodgy “adjustments” makes the instrumental record rather useless after about 1980.

    As per gathering the “other” stations’ data: There’s a large archive of some of it, but I don’t think Wunderground keeps all they get. Hope they do… It would be good for establishing the bias factors, but has the problem of geographic limitations. AFAIK there are not similar networks of volunteer stations in places like Africa, Asia, South America. So the “global” coverage would likely be limited. (Then again, I’m speculating, and it might turn out there are weather hounds all over the globe ;-)

    In looking at the stations just in my area (and I’ve done a couple of postings on them) the variation can be in 5-10 F ranges. Some of it justified (folks near the bay cooling as wind starts, only arriving ‘downwind’ several miles later); some of it not (the airport often a couple of degrees F warmer than Santa Clara just a mile or two away, but with trees…)

    IMHO a proper study of “Airports vs surrounding” done retrospectively with established records would be enlightening… (What I could get done with 1/10th the money wasted on Mann and his bogus trees…)

    Oh Well. Time to go make some more “Stone Soup Science” using what I can find laying around… ;-)

  8. Jason Calley says:

    “IMHO the combination of bad siting, airport bias, electronics that drift to high readings, and dodgy “adjustments” makes the instrumental record rather useless after about 1980.”

    Yes, exactly. Perhaps the first thing that really got me suspicious of the CAGW story was looking at Watts’ investigations into the actual quality of the US weather stations. My first response on seeing photos of some of the locations was “WHAT?! They put a station THERE?! But, but, but…” My second response was, “Well, thank goodness that Watts has pointed out these errors. Now the powers that be can fix the stations and we will be able to do decent science!”

    Ah, well. From the big self-described climate gurus the response was zero. Actually, even worse than zero. When I read that the big guys like Hansen et al were actually claiming that they could get good numbers by various adjustment massaging of the data, I was convinced that they were either incompetent or frauds. I have since narrowed that down to “both.”

    You cannot do good science with bad data — and at this juncture, the temperature data has been so corrupted that we do not even know how bad it is. Bah. I guess that the good news is that it is only corrupted “temperature data” and not enthalpy data. Even if it were perfect to four decimals, I guess that it would still be problematic to make good use of it. Double bah.

  9. adolfogiurfa says:

    @Jason Calley …. But then came our friend Misha Vukcevic, letting us know about one of his famous graphs:

    which means that our dear @E.M. and all the weather crew should stop worrying about thermometers and buy a magnetometer ASAP!
    (Apart from the fact that talking about, or dealing with, a NON EXISTING PROBLEM is the same as raising it from the grave of “Climate-gate”)

  10. E.M.Smith says:


    Similar story for me, though I started at places like (and after just ASKING about some ‘disconnects’ in the reasoning and having my comments get tossed in the bit bucket started to ‘get clue’; and as they were bitching about WUWT, went there…)

    It was the “offputting” nature of having questions vilified and having a simple “This and that don’t mix. How do you answer that?” met with “Read these several thousand pages of published literature, it’s in there somewhere.” along with a few dozens of links to things that mostly said “Our computer model says so.” THAT was what pushed me into the Skeptic camp from a “Joe Public thinking I need to learn more about this important problem of Global Warming.”

    The fresh air at WUWT was a great relief. Then ran into the project (with it about 80% complete at that time) and had the parallel “They do WHAT??” experience.

    Spent about 6 months “complaining” that “somebody” ought to go through the GIStemp code, and then finally decided “I guess I am somebody”… This site came about just as a place for me to store the work I was doing and put “the usual stuff” after I’d typed the same Troll Response a few dozen times. (At the time the “Running Out!” panic pitch was being made frequently, so the “No shortage of stuff” posting was a fleshed out summary of what I knew.)


    Fascinating graph. So we’ve got solar changes reflected in solar current changes reflected in magnetic changes AND UV changes AND wind / temperature changes.

    We really do live inside the solar envelope.

  11. Dave says:

    This link shows a world map of all of the Davis stations using WeatherLink:

    8936 stations actively reporting. Every continent except for Antarctica. Surprisingly not present in Greenland, but quite a few in Iceland. At the very least it would be interesting to compare the last 3 to 5 years of what these stations report compared to the model predictions and official readings. Having such a massive sample size for North America and Europe would at least demonstrate parity or indicate that official records are off in these regions.

  12. DocMartyn says:

    What I really want is someone to have a pair of high quality sensors and a troop of scouts.
    At the peak temperature of the day, about 1 in the afternoon, I want the scouts to move a black plastic sheet on poles above the sensor, something like 20 x 20 feet of sheeting. Then I want to know the line shape of the temperature drop.
    At steady state the efflux of heat = influx of heat; the rate at which temperature drops will give us the relationship between light flux and temperature.
    The actual light flux can be measured (using a pyranometer) or calculated.
    If you did this once a month for a year we would have the rate of temperature change/light flux.

  13. E.M.Smith says:


    Missing a couple of bits to know what you are proposing.

    Is this “black plastic sheet” IR radiative (downward)? Or is it insulating or …

    Having been inside a few “way too hot” tents as the fabric was radiating in the strong IR from the sun hitting the other side, I’m a bit concerned about black plastic and IR radiation issues.


    Maybe I’ll have a look and see if the historical data are convenient to download…

  14. E.M.Smith says:


    Well, didn’t see any way to get historical data. They may not have it. For comparison “Now” it’s a useful map. BUT, the range of temperatures is just amazing.

    Comparing the Wunderground readings for KSJC (that is the Airport) it shows 63 F ( we’ve had a relatively cool day today)

    While looking around the area (at the double digit large number of stations) on the Davis map, there are temperatures from 47 F – 59 F or so in the hills toward the coast, a bunch of 60, 61, 62 and 63 around the urban area a bit away from the airport. Up to 67 F further north and inland at Milpitas.

    Overall, the impression I get is that the Airport is a degree or so warmer than the urban area ‘up wind’ but cooler than the site in Milpitas (perhaps on a sun facing hillside?).

    Probably worth repeating the observations during the night or just after sundown to see how the residual heat changes.

    Still doesn’t quite give me the “bad vs good” site info (that takes a known site description, with the instruments near each other) but it does let me do an “area A/B” for urban airports (where most GHCN stations are located) and the surrounding areas.

  15. oldtimer says:

    If I recall correctly, you used GHCN v2 in your earlier magnum opus of charts for every country in the world. These, you may recall, were entered as evidence of dodgy records and an absence of adequate quality control to the Muir Russell enquiry into Climategate. Like every other submission it was ignored. The one thing it did do was get the CRU to admit to the reduction in the number of stations used over time in their temperature record. It seems unlikely this would have occurred without your analysis of GHCN v2. If, as I understand it, you are going to repeat the exercise for GHCN v3, I will follow the results and differences from its predecessor with great interest.

Comments are closed.