Hadley Hack and CRU Crud

OK, I was “offline for a while” and THAT was of course the moment all the fun broke loose! So I’ve been looking up the stuff about what was “released” from CRU…


This link: http://strata-sphere.com/blog/index.php/archives/11420

has an interesting analysis of the precision and lack of regional warming that we’ve seen in various GIStemp and GHCN postings here.

In comments, FrancisT has a link to his in-depth look at just a couple of the comments in “HARRY_READ_ME”. Well worth a read:


Oh, and I’ve added a bit of time stamp forensics in a comment near the bottom too.

Original Article



We have [begin quote]:

The bit that made me laugh was this bit. Anyone into programming will burst out laughing before the table of numbers

17. Inserted debug statements into anomdtb.f90, discovered that
a sum-of-squared variable is becoming very, very negative! Key
output from the debug statements:
OpEn= 16.00, OpTotSq= 4142182.00, OpTot= 7126.00
DataA val = 93, OpTotSq= 8649.00
DataA val = 172, OpTotSq= 38233.00
DataA val = 950, OpTotSq= 940733.00
DataA val = 797, OpTotSq= 1575942.00
DataA val = 293, OpTotSq= 1661791.00
DataA val = 83, OpTotSq= 1668680.00
DataA val = 860, OpTotSq= 2408280.00
DataA val = 222, OpTotSq= 2457564.00
DataA val = 452, OpTotSq= 2661868.00
DataA val = 561, OpTotSq= 2976589.00
DataA val = 49920, OpTotSq=-1799984256.00
DataA val = 547, OpTotSq=-1799684992.00
DataA val = 672, OpTotSq=-1799233408.00
DataA val = 710, OpTotSq=-1798729344.00
DataA val = 211, OpTotSq=-1798684800.00
DataA val = 403, OpTotSq=-1798522368.00
OpEn= 16.00, OpTotSq=-1798522368.00, OpTot=56946.00
forrtl: error (75): floating point exception
IOT trap (core dumped)

..so the data value is unbfeasibly large, but why does the
sum-of-squares parameter OpTotSq go negative?!!

[end of quote and quoted quote ;-) ]

For those unfamiliar with this problem, computers use a single “bit” to indicate sign. If that is set to a “1” you get one sign (often negative, but machine and language dependent to some extent) and if it is “0” you get another (typically positive).

OK, take a zero and start adding ones onto it. We will use a very short number (only 4 digits long; each digit can be a zero or a one, and the first digit is the “sign bit”). I’ll translate each binary number into the decimal equivalent next to it.

0000  zero
0001  one
0010  two
0011  three
0100  four
0101  five
0110  six
0111  seven
1000  negative (may be defined as = zero, but oftentimes 
          defined as being as large a negative number as you can 
          have via something called a 'complement').  So in this 
          case NEGATIVE seven
1001  NEGATIVE six
1010  NEGATIVE five (notice the 'bit pattern' is exactly the 
          opposite of the "five" pattern... it is 'the complement').
1011  NEGATIVE four
1100  NEGATIVE three
1101  NEGATIVE two
1110  NEGATIVE one
1111  NEGATIVE zero (useful to let you have zero without 
      needing to have a 'sign change' operation done)
0000  zero

Sometimes the 1111 pattern will be “special” in some way. And there are other ways of doing the math down at the hardware level, but this is a useful example.

You can see how repeatedly adding one grows to a large value (the limit) and then “overflows” into a negative value. This is a common error in computer math and something I was taught in the first couple of weeks of my very first programming class ever. Yes, in FORTRAN.
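That count-up-and-wrap behaviour can be sketched in a few lines. (A Python sketch using the same 4-digit, ones’ complement reading as the table above; real hardware varies, as noted.)

```python
def ones_complement(bits, width=4):
    """Interpret a width-bit pattern as a ones' complement number."""
    mask = (1 << width) - 1
    if bits >> (width - 1):          # sign bit set: value is minus the complement
        return -((~bits) & mask)
    return bits

x = 0b0110                           # start at six
for _ in range(4):
    x = (x + 1) & 0b1111             # add one, but keep only 4 bits
    print(f"{x:04b}  {ones_complement(x)}")
```

Which prints 0111 (seven), then 1000 (NEGATIVE seven): the sum “overflowed” into the sign bit.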

We have a stellar real-life example of it above, where a “squared” value (that theoretically can never become negative) goes negative due to poor programming practice.
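We don’t have the actual anomdtb.f90 arithmetic in front of us here, but one plausible reconstruction (assuming each square is formed in a 32-bit INTEGER and then accumulated into a 32-bit REAL; the real Fortran may differ) reproduces the logged values above, wrap-around and all:

```python
import struct

def int32(x):
    """Wrap to a signed 32-bit integer, as a Fortran INTEGER*4 product would."""
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x >= 0x80000000 else x

def real32(x):
    """Round to the nearest 32-bit REAL (single precision)."""
    return struct.unpack('f', struct.pack('f', x))[0]

vals = [93, 172, 950, 797, 293, 83, 860, 222, 452, 561, 49920, 547]
op_tot_sq = 0.0
for v in vals:
    op_tot_sq = real32(op_tot_sq + int32(v * v))   # 49920**2 wraps negative
    print(f"DataA val = {v}, OpTotSq= {op_tot_sq:.2f}")
```

Under those assumptions, 49920 squared (2,492,006,400) is bigger than a 32-bit integer can hold (2,147,483,647), wraps to -1,802,960,896, and drags the running sum to the logged -1799984256.00.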

There are ways around this. If a simple “REAL” (often called a FLOAT) variable is too small, you can make it a “DOUBLE”, and some compilers support a “DOUBLE DOUBLE” to get lots more bits. But even those can overflow (or underflow the other way!) if the “normal” value can be very, very large. So ideally, you ought to ‘instrument’ the code with “bounds checks” that catch this sort of thing and holler if you have that problem. There are sometimes compiler flags you can set to get “run time” checking for overflow, and an abort if it happens. (There are also times that overflow is used as a ‘feature’, so you can’t just turn it off all the time. It is often used to get “random” numbers, for example.)
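The precision half of that warning is just as easy to trip over: near two billion, a 32-bit REAL cannot even see a +1 any more, while a DOUBLE still can. (A Python sketch; Python’s own 64-bit floats stand in for the “DOUBLE” here.)

```python
import struct

def real32(x):
    """Round to the nearest 32-bit REAL (single precision)."""
    return struct.unpack('f', struct.pack('f', x))[0]

big = 2_000_000_000.0
print(real32(big + 1.0) == real32(big))   # True: the +1 vanishes in REAL
print(big + 1.0 == big)                   # False: a DOUBLE still resolves it
```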

But yes, from a programmer’s point of view, watching someone frantic over this “newbie” issue is quite a “howler”…

And that is why I’ve repeatedly said that every single calculation needs to be vetted for rounding, overflow, underflow, precision range, …

Because otherwise you are just hoping that someone did not do something rather like they clearly have done before…

Also, from :


we have in comments:

Paul W (15:05:29) :
Phil Jones writes that the missing raw CRU data could be reconstructed:

(from file 1255298593.txt)

From: P.Jones@uea.ac.uk
To: “Rick Piltz” <piltz@xxxx.net>
Subject: Re: Your comments on the latest CEI/Michaels gambit
Date: Sun, 11 Oct 2009 18:03:13 +0100 (BST)
Cc: "Phil Jones" <p.jones@uea.ac.uk>, "Ben Santer" <santer1@llnl.gov>
Rick, What you've put together seems fine from a quick read. I'm in Lecce in the heel of Italy till Tuesday. I should be back in the UK by Wednesday. The original raw data are not lost either. I could reconstruct what we had from some DoE reports we published in the mid-1980s. I would start with the GHCN data. I know that the effort would be a complete waste of time though. I may get around to it some time.

So we have a tacit confirmation that they start with GHCN data. That means that ALL the issues with the GHCN data (migration to the equator, migration from the mountains to the beaches…) apply to Hadley / CRU just as they do to GIStemp.

Both are broken in the same way, so that is why they agree. They use biased input data and see the same result.

Heck, I’ve even stumbled onto another programmer type doing stock trading stuff…

The discussion is very interesting, even if a bit ‘rough language’ at times:


In comments there we get a picture of “Mr. Harry Readme”:


Somehow, I can feel his pain at the code he must deal with. Best of Luck to you Harry.




This is in reverse time order, but as presented in the link.

From: Michael Mann
To: Phil Jones

Subject: Re: Skeptics
Date: Thu, 25 Jun 2009 11:19:45 -0400
Cc: Gavin Schmidt

Hi Phil,

well put, it is a parallel universe. irony is as you note, often the contrarian arguments
are such a scientific straw man, that an effort to address them isn’t even worthy of the
peer-reviewed literature!


So we are “contrarian” are we? And not even worthy of a peer-reviewed address? I would think that someone has been inflating his own ego a bit overmuch…

“Facts just are. -emsmith”. It isn’t about the people, and it isn’t about the peers. It is all about the truth and the facts. And the facts are that there are loose ends to the AGW fantasy that have been pointed out by us “contrarians” that very much need addressing.

On Jun 25, 2009, at 10:58 AM, Phil Jones wrote:

Just spent 5 minutes looking at Watts up. Couldn’t bear it any longer – had to
stop!. Is there really such a parallel universe out there? I could understand all of
the words some commenters wrote – but not in the context they used them.

It is a mixed blessing. I encouraged Tom Peterson to do the analysis with the
limited number of USHCN stations. Still hoping they will write it up for a full journal
Problem might be though – they get a decent reviewer who will say there is nothing
new in the paper, and they’d be right!


Bolded a couple of bits here… Well, nice to know they poked a nose in at WUWT, even if they could not come to grips with it. And nice that he “could understand all of the words”… even if they were too much for him to understand in context. Strange that I’ve never had any problem understanding the postings or comments on WUWT.

I also find it interesting that they are saying it’s fine to do an analysis on a reduced set of USHCN stations. Also, the raw cynicism of the encouragement to publish an empty paper, in light of the belief it would be ‘nothing new’, is particularly galling given their efforts to suppress what is really new, but against their agenda.

At 15:53 24/06/2009, Michael Mann wrote:

Phil–thanks for the update on this. I think your read on this is absolutely correct. By
the way, “Watts up” has mostly put “ClimateAudit” out of business. a mixed blessing I
talk to you later,

Unclear on the concept of “Synergy”, it would seem… WUWT led me to Climate Audit, and CA has pointed to WUWT with some fair frequency. One is more technical than the other, but both are good and both are well attended.

On Jun 24, 2009, at 8:32 AM, Phil Jones wrote:


Good to see you, if briefly, at NCAR on Friday. The day went well, as did the
dinner in the evening.
It must be my week on Climate Audit! Been looking a bit and Mc said he
has no interest in developing an alternative global T series. He’d also said earlier
it would be easy to do.
I’m 100% confident he knows how robust the land component
I also came across this on another thread. He obviously likes doing these
sorts of things, as opposed to real science. They are going to have a real go
at procedures when it comes to the AR5. They have lost on the science, now they
are going for the process.
Prof. Phil Jones
Climatic Research Unit Telephone +44 (0) 1603 592090
School of Environmental Sciences Fax +44 (0) 1603 507784
University of East Anglia
Norwich Email [1]p.jones@xxxxxxxxx.xxx

So here we have evidence for ‘inbreeding’ between NCAR and CRU. That GIStemp uses “NCAR” format data files at about STEP2 – STEP3, then merges with Hadley CRU SST in STEP4_5, continues to argue for excessive “group think” and shared design / code between the temperature series. So when folks point out that Hadley CRUt and GIStemp agree, maybe it’s because they have extensive overlap in design goals, frequent exchange of “ideas”, common input data, internal work files matching in format (and content? for easy comparison and convergence? at confabs such as in the email?), and processes…

BTW, IMHO it would be easy to make an alternative Global Temperature Series. “Mc” is quite right that it is easy. I could have one in about a day (less if I didn’t want to think about the details too much) and it would be more accurate than GIStemp. How? Simply by “un-cherry picking” some of the GIStemp parameters then running the code.

I find the dig at “real science” vs “procedures” interesting. How can you have reliable science if your procedures are broken? I learned about “lab procedures”, and the importance of them, very early in chem lab. Anyone who disses the merit of sound procedures is an accident waiting to happen… IMHO. And will produce errors from unsound procedures.

But the overall thing that I pick up from this is just the tone of True Believers. These folks really do think they have it all worked out. And that is a very dangerous thing. It leads to very closed minds and it leads to very strong “selection bias”. Often with no ability to self detect that broken behaviour.

You know, I think there will be a great deal of insight come from this “leak”…

Oh, and here is a more complete copy of the snippet quoted above:

Comment by Prof. Phil Jones
http://www.cru.uea.ac.uk/cru/people/pjones/ , Director, Climatic
Research Unit (CRU), and Professor, School of Environmental Sciences,
University of East Anglia, Norwich, UK:

No one, it seems, cares to read what we put up
http://www.cru.uea.ac.uk/cru/data/temperature/ on the CRU web
page. These people just make up motives for what we might or might
not have done.
Almost all the data we have in the CRU archive is exactly the same
as in the Global Historical Climatology Network (GHCN) archive used
by the NOAA National Climatic Data Center [see here
http://www.ncdc.noaa.gov/oa/climate/ghcn-monthly/index.php and here http://www.ncdc.noaa.gov/oa/climate/research/ghcn/ghcngrid.html ].

The original raw data are not “lost.” I could reconstruct what we
had from U.S. Department of Energy reports we published in the
mid-1980s. I would start with the GHCN data. I know that the effort
would be a complete waste of time, though. I may get around to it
some time. The documentation of what we’ve done is all in the
If we have “lost” any data it is the following:
1. Station series for sites that in the 1980s we deemed then to be
affected by either urban biases or by numerous site moves, that were
either not correctable or not worth doing as there were other series
in the region.
2. The original data for sites for which we made appropriate
adjustments in the temperature data in the 1980s. We still have our
adjusted data, of course, and these along with all other sites that
didn’t need adjusting.
3. Since the 1980s as colleagues and National Meteorological
Services http://www.wmo.int/pages/members/index_en.html (NMSs)
have produced adjusted series for regions and or countries, then we
replaced the data we had with the better series.
In the papers, I’ve always said that homogeneity adjustments are
best produced by NMSs. A good example of this is the work by Lucie
Vincent in Canada. Here we just replaced what data we had for the
200+ sites she sorted out.
The CRUTEM3 data for land look much like the GHCN and NASA Goddard
Institute for Space Studies data
http://data.giss.nasa.gov/gistemp/ for the same domains.
Apart from a figure in the IPCC Fourth Assessment Report (AR4)
showing this, there is also this paper from Geophysical Research
Letters in 2005 by Russ Vose et al.


Figure 2 is similar to the AR4 plot.

So again we have confirmation that the Hadley input is substantially the same GHCN input data as for GIStemp. And as we’ve seen, there are strong biases built into the GHCN data set and its changes over time.

For any future assertion that Hadley and GIStemp agree, so they must be ‘right’: I think it’s pretty clear they agree because they accept the same highly biased input data.

I also find it amazing that the response to “you lost the raw data” is “It isn’t lost because we have different data that has been modified and is better”. Clearly unclear on the concept…


About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present "hot buttons" are the mythology of Climate Change and ancient metrology; but things change...
This entry was posted in CRUt. Bookmark the permalink.

65 Responses to Hadley Hack and CRU Crud

  1. Ripper says:

    Good spot up E.M.

  2. vjones says:

    Ah! I see normal service has been resumed. That’s better!

here’s one for you:
    From: Phil Jones
    To: “Michael E. Mann”
    Date: Thu Jul 8 16:30:16 2004

    The basic message is clear – you have to put enough surface and sonde obs into a model to produce Reanalyses. The jumps when the data input change stand out so clearly.

  3. BarryW says:

True for one’s complement machines. I’m used to two’s complement, where 1111 would be a minus 1, 1110 a minus two, and 1101 a minus 3. Add 0001 to 1111 and, with the carry discarded, you get 0. No negative zero. The oddity is that the negative range is one more than the positive. In this example 0111 = 7, 1000 = -8.

REPLY: [ Yes. I was trying to keep the example approachable for the non-programmers. For a fun time, look up some of the more bizarre hardware math done on some of the early machines! IBM had one with DECIMAL hardware and IIRC Burroughs had a 56 bit machine but only 48 were used for most things. I've got a link to the peculiarities of various hardware somewhere...

    Ah, yes, here it is:


    A couple of the more exotic entries:

    In the IBM 7030 or STRETCH computer, an exponent flag bit was followed by a ten bit exponent, the exponent sign, a 48-bit mantissa, the mantissa sign, and three flag bits. The exponent flag was used to indicate that an overflow or underflow had occurred; the other flag bits could simply be set by the programmer to label numbers. The AN/FSQ-31 and 32, with a 48-bit word, used 11 bits for the exponent and 37 bits for the mantissa.

    The Burroughs 5500, 6700, and related computers used an exponent which was a power of eight. The internal format of a single-precision floating-point number consisted of one unused bit, followed by the sign of the number, then the sign of the exponent, then a six-bit exponent, then 39-bit mantissa. The bias of the exponent was such that it could be considered to be in excess-32 notation as long as the mantissa was considered to be a binary integer instead of a binary fraction. This allowed integers to also be interpreted as unnormalized floating-point numbers.

    A double-precision floating-point number had a somewhat complicated format. The first word had the same format as a single-precision floating-point number; the second word consisted of nine additional exponent bits, followed by 39 additional mantissa bits; in both cases, these were appended to the bits in the first word as being the most significant bits of the number.

The BRLESC computer, with a 68-bit word length, used a base-16 exponent; it remained within the bounds of convention, as the word included a three-bit tag field, followed by a one-bit sign; then, for a floating-point number, 56 bits of mantissa followed by 8 bits of exponent. Thus, the 68-bit word contained 65 data bits and three tag bits, while the whole 68-bit word was used for an instruction. (In addition, four parity bits accompanying each word were usually mentioned.)

    Ah, for the good old days before Pentium New Math... or, as quoted from someone else: "We give these people computers and expect them to know how to use them..."

    -ems ]

  4. vjones says:

    BTW – Congratulations – Check your flag counter – you hit 50K views.

    REPLY: [ Golly! Who knew so many folks could be interested in crappy old FORTRAN... just because it holds the fate of the planet 'in the balance'... Especially now that Hadley and CRU are going down in flames. Ah well, at least we have the answer to why Hadley, CRUT and GIStemp "agree" so much. Same input, tuned outputs. What? A surprise? -ems ]

  5. kuhnkat says:

    I started on mainframes in 1973. Thanks for making my day with this post!!!

  6. E.M.Smith says:

    @kuhnkat: My “first machine” was one of those B6700 machines in the comment above. With FORTRAN and ALGOL and 56 bit words. Only later did I move to IBM Mainframes.

    There is so much that the climate model and GIStemp code seems to assume about things, like precision and underflow / overflow, that I really cringe at people trusting it for anything. There are so many opportunities for data dependent failures; and I think the folks who wrote the code don’t even know it…

    Someone gave them a computer, and they think they understand what it is doing… Until their squared value turns out to be a negative number…


  7. BarryW says:

    Good list! I seem to remember CDC had a machine that used BCD.

    Another piece of trivia is that the increment command (i++) in C and Java came from the DEC machines which had a command to increment or decrement a register.

    Makes you wonder what were they thinking of when they designed these machines.

  8. j ferguson says:

    I hope you all will forgive this stupid question, but is/are the baseline global temperature series which is/are the basis of all this panic GIStemp and its daughter HADcrut? No others?

    -and all of this foofooraw derives from “perceived upward excursions” in these (about to be confirmed to be) products of misapplication of coded assembly and corrections?

    It is very clear from reading Harry’s libretto that these guys with maybe the one exception really believe their stuff.

    I don’t know about you E.M. but I always felt more secure in an environment where I could discover via dialog that something I was convinced was a really great idea wasn’t before we tried to do it.

  9. E.M.Smith says:

    It is not a stupid question. In fact, it cuts right to the heart of the issue. And yes, if these folks would just listen to someone outside the anointed for a few days they just might avoid going down in history as the new Piltdown Crew…

    Hadley and GIStemp are not the ONLY temperature series, but they are by far the most important ones for the “baseline global temperature” and for the hockey stick expectations.

    The two major satellite series are of just too short a duration to identify any trends longer than about 15 years.

    A 40 year cycle will have a 20 year up and a 20 year down half cycle – and 20 years of average flat in the “1/2 of each” range. So a 20 year series will be “fooled” by a 40 year cycle into showing, variously, a strong up trend, a strong down trend, or a curving acceleration / deceleration.

    For this reason, to detect long term trends, like climate, you need long term series. And if you ‘splice’ short ones together to get it, you risk gluing together series that respond differently to that 40 year cycle and getting a fake series….

    So, for example, you glue together tree rings with a thermometer record; but your tree rings were really responding to rainfall changes (or even directly to CO2), not temperature. Now you get two “uplifts” glued together instead of two disjoint objects reporting different things…

    Suddenly you say: “Look 40 years of rising temperatures!”, when really you had 20 years of more rain (maybe even cool rain) and 20 years of warming temperatures (from 1/2 of a regularly repeating 40 year cycle with a cool wet phase and a warm dry phase).

    This is why when trading stock (or doing other things with cycles) you use a “moving average” of 2 x the length of the minimum cycle of interest. If you want to see “everything up to” annual cycles, you must have 2 years data. (There are faster filters, but you get ever less reliability – that is, they become more “twitchy” responding to recent changes more than may be warranted by reality.)
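That “fooled by half a cycle” effect is easy to show with made-up numbers. (A Python sketch: a hypothetical 40 year cycle of amplitude half a degree; the figures are illustrative only.)

```python
import math

def trend(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

years = list(range(40))
# A pure 40-year cycle, amplitude 0.5, starting at its cold trough:
temps = [-0.5 * math.cos(2 * math.pi * y / 40) for y in years]

print(trend(years[:20], temps[:20]))   # rising half-cycle: a strong "warming" slope
print(trend(years, temps))             # full cycle: the trend all but vanishes
```

The 20 year window sees a steep “trend” that the full 40 year record shows to be nothing but the cycle itself.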

    So what we have with satellites is a great series for checking the weather this year, or even this decade. Not very useful for climate stuff. OR you can try to splice it on to the land data and hope you are not mixing disjoint series… OR you can try splicing on tree rings and do even worse (what the UEA folks seem to like to do…)


    These folks have an example of doing it in digital gear:



    There is the small problem of “From where do the series get THEIR data?”

    So each weather service in each country has some thermometers. These get bundled up and sent off to a “central place” that glues them together into a “series”.

    2 small problems…

    The originating agency may induce bias: Australia is “re-jiggering” their thermometer series. As they “re-imagine” their data, everyone will be given whatever errors they induce.

    The compiling agency may induce bias: GHCN is controlled by a NASA employee (I guess ‘on loan’ to NOAA).. So ANYONE who uses GHCN will get all the biases introduced by NOAA under the guidance of this Data Set Manager. So GHCN drops all but 136 US thermometers and induces an extreme altitude bias (4 thermometers in California, all on the beach and 3 of them near L.A. …)

So at this point it will not matter if you are GIStemp or Hadley or WHOEVER: if you use the most widely available “source” of GHCN, you will have the exact same input data bias to deal with. Even if you ‘reach around’ it to the Aussy BOM, you may still end up with a common bias for that part of the data due to the BOM having it wrong as they “adjust the data”. And what I’ve seen is folks using the GHCN, not rolling their own from first sources.

    This is then compounded by the arrogance issue:

    It is an Article Of Faith for these folks that their software does a perfect job of removing selection bias in the thermometer record. But filters are not perfect and bias leaks through.

    They then believe that this leakage is The True Climate, when it is more the product of poor software (and lousy quality control to find “selection bias” and remove it) and they then “believe their own bull shit” as we would say in the computer business…

    The result is the AOF belief that 3 thermometers in LA and one at the airport in SFO can tell you what is happening to the Glaciers of Mount Shasta, the snows of Heavenly Valley, the cold / hot Mojave desert, the….

    So to solve this? Somehow you must get to the “raw” data (that as near as I can tell is either unavailable from the individual BOMs or, as in the case of Hadley, has been lost.)

Then you need to somehow “stabilize the instrument” by either selecting a stable thermometer set or finding a provably valid adjustment method (going to be pretty hard with, for example, the 93% drop of thermometers in the USA in 2007 that led to the 4 on the Beach problem…)

    Finally you can BEGIN to do a trend analysis…

    So in my opinion, due to poor control of ‘selection bias’ and a broken Article Of Faith about their filter quality: we have not yet even begun to figure out what is happening to climate and have not got a usable temperature series or analysis method and software with which to do it.

Oh, and all the group think and “collaboration” has led to a mass hysteria based around bad code and poor thermometer placement… At the end of the day, the behaviour looks to be indistinguishable from folks who “See Jesus” in a water stain on the wall. The more true believers talk to each other, the more places they see Him. In the clouds, in the fog, even in the mud of the lake bottoms and in tree rings… And the less tolerant they become of the person who says: “Hang on, isn’t that just a smudge of soot?”

  10. dearieme says:

    Christ on a bicycle. I’ve known intellectually that it was all tosh, but it’s quite another thing to see the trail of tears as poor Harry tries to pump life into the corpse.

    The leaders are rogues; jail them.

  11. Iridium says:

    “So we have a tacit confirmation that they start with GHCN data. That means that ALL the issues with the GHCN data (migration to the equator, migration from the mountains to the beaches…) apply to Hadley / CRU just as they do to GIStemp.”

    “group thinking” wasn’t that wrong to say the least… ;)

    Further, it seems there is some (The?) code from CRU in the package released (file named “cru-code”). More work for you ?

    REPLY: [ Yes, the world seems to be making sure I'm busy ;-) And you ought to be just tickled pink with the degree of "group think" in those email records! You called it, and did so quite early. As we saw in:


    Bon chance. -ems ]

  12. Joseph in Florida says:

    Great post, and great comments. Thanks for the info.

  13. Hi there :)
    I’m glad you found my post interesting.
    I got about 1/4 of the way down that HUGE text file before skipping to the end.
    Poor Harry had more perseverance than I did LOL.
    But then he had 3 years and I was doing it late into the night!

    Thanks for the update on the “lost data”. Very interesting.

    Hopefully we’ll get some real science out of this.

  14. Pingback: Sky falls on the chicken littles « TWAWKI

  15. Ric Werme says:

    I started branching off into the non-Email climategate files and quickly found my way into Harry’s diary. I scanned the whole thing, though I didn’t look at much of the last 50% (7506 lines). I quickly realized you’d enjoy it, though it makes me rather skeptical of any data from CRU.

One of Harry’s problems was going from little endian to big endian systems (Sun, apparently) and the problems one runs into with binary data doing that. For the non-computer folks here, endianness deals with the order of the bytes that make up a larger value. For example, a 32-bit integer is 4 bytes. On a little endian machine (e.g. Intel/AMD PCs) the low order byte is in the lowest address; on big endian machines, the high order byte is stored first.

The best real life analogy I can come up with, with little thought, is buying numbers for a street address. If you live at 1123 Fib Rd, you might take them off a rack and go to the cashier with a stack of digits. If 1 is on top, that’s big endian; if 3 is on top, it’s little endian.
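For the curious, the difference is easy to see with Python’s struct module, where ‘<’ packs little endian (as on Intel/AMD) and ‘>’ packs big endian (as on the old Suns):

```python
import struct

value = 0x01020304                       # a 32-bit integer: 4 bytes
print(struct.pack('<I', value).hex())    # little endian: 04030201
print(struct.pack('>I', value).hex())    # big endian:    01020304
```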

    At any rate, if you have a chance, read more of Harry’s diary. I’ve done similar things, but never for a 3 year project!

    He starts off with:

    Nearly 11,000 files! And about a dozen assorted ‘read me’ files addressing
    individual issues, the most useful being:


    (yes, they all have different name formats, and yes, one does begin ‘_’!)

    2. After considerable searching, identified the latest database files for


    (yes.. that is a directory beginning with ‘+’!)

    Skipping quite a ways in, here’s one of many comments about working on weekends:

    The first file is the 1991-2006 update file. The second is the original
    temperature database – note that the station ends in 1980.

    It has *inherited* data from the previous station, where it had -9999
    before! I thought I’d fixed that?!!!

    /goes off muttering to fix mergedb.for for the five hundredth time

    Miraculously, despite being dog-tired at nearly midnight on a Sunday, I
    did find the problem. I was clearing the data array but not close enough
    to the action – when stations were being passed through (ie no data to
    add to them) they were not being cleaned off the array afterwards. Meh.

    Wrote a specific routine to clear halves of the data array, and back to
    square one. Re-ran the ACT file to merge the x-1990 and 1991-2006 files.
    Created an output file exactly the same size as the last time (phew!)
    but with..

    crua6[/cru/cruts/version_3_0/db/testmergedb] comm -12 tmp.0704292355.dtb tmp.0704251819.dtb |wc -l
    crua6[/cru/cruts/version_3_0/db/testmergedb] wc -l tmp.0704292355.dtb
    285829 tmp.0704292355.dtb

    .. 313 lines different. Typically:

    For a moment I thought he was talking about Revlon cosmetics:

    Without going any further, it’s obvious that LoadCTS is going to have
    to auto- sense the lat and lon ranges. Missing value codes can then be
    derived – if it always returns actual (unscaled) degrees (to one or two
    decimal places) then any value lower than -998 will suffice for both
    parameters. However, this does make me wonder why it wasn’t done like
    that. Is there a likelihood of the programs being used on a spatial
    subset of stations? Say, English? Then lon would never get into double
    figures, though lat would.. well let’s just hope not! *laughs hollowly*

    Okay.. so I wrote extra code into LoadCTS to detect Lat & Lon ranges. It
    excludes any values for which the modulus of 100 is -99, so hopefully
    missing value codes do not conribute. The factors are set accordingly
    (to 10 or 100). I had to default to 1 which is a pity. Once you’ve got
    the factors, detection of missing values can be a simple out-of-range

    However *sigh* this led me to examine the detection of ‘non-standard
    longitudes’ – a small section of code that converts PJ-style reversed
    longitudes, or 0-360 ones, to regular -180 (W) to +180 (E). This code is
    switched on by the presence of the ‘LongType’ flag in the LoadCTS call -
    the trouble is, THAT FLAG IS NEVER SET BY ANOMDTB. There is a
    declaration ‘integer :: QLongType’ but that is never referred to
    again. Just another thing I cannot understand, and another reason why
    this should all have been rewritten from scratch a year ago!

    So, I wrote ‘revlons.for’ – a proglet to reverse all longitude values in
    a database file. Ran it on the temperature database (final):

    crua6[/cru/cruts/version_3_0/db/testmergedb] ./revlons
    REVLONS – Reverse All Longitudes!

    This nifty little proglet will fix all of your
    longitudes so that they point the right way, ie,
    positive = East of Greenwich, negative = West.

    ..of course, if they are already fixed, this will
    UNfix them. I am not that smart! So be careful!!
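    The longitude clean-ups Harry describes (reversed signs, and 0-360 ranges brought back to -180 (W) .. +180 (E)) can be sketched in a few lines. This is a hypothetical Python illustration of the convention, not CRU's LoadCTS or revlons code; the below-minus-998 missing-value test is the one quoted above:

```python
MISSING = -999.0  # missing-value code, per the convention quoted above

def fix_longitude(lon, reversed_sign=False):
    """Normalize a longitude to -180 (W) .. +180 (E).

    reversed_sign mimics the 'PJ-style reversed longitudes' case;
    anything at or below -998 is treated as a missing-value code.
    """
    if lon <= -998.0:
        return MISSING
    if reversed_sign:
        lon = -lon
    # fold 0..360 (or any other offset) into -180..+180
    return ((lon + 180.0) % 360.0) - 180.0
```

    So fix_longitude(350.0) gives -10.0 and a reversed -95.5 comes back as 95.5. Harry's complaint still stands, of course: none of this matters if the LongType flag that switches the conversion on is never set.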

    How about this faith in the underlying data:

    I am very sorry to report that the rest of the databases seem to be
    in nearly as poor a state as Australia was. There are hundreds if not
    thousands of pairs of dummy stations, one with no WMO and one with,
    usually overlapping and with the same station name and very similar
    coordinates. I know it could be old and new stations, but why such large
    overlaps if that’s the case? Aarrggghhh! There truly is no end in
    sight. Look at this:

    User Match Decision(s) Please!
    TMin stations: 4
    1. 0 153 12492 80 MENADO/DR. SA INDONESIA 1960 1975 -999 0
    2. 0 153 12492 80 MENADO/ SAM RATULANG INDONESIA 1986 2004 -999 0
    4. 9701400 153 12492 80 MENADO/DR. SAM RATUL INDONESIA 1995 2006 -999 0
    5. 9997418 153 12492 81 SAMRATULANGI MENADO INDONESIA 1973 1989 -999 0
    TMax stations: 4
    6. 0 153 12492 80 MAPANGET/MANADO INDONESIA 1960 1975 -999 0
    7. 0 153 12492 80 MENADO/ SAM RATULANG ID ID 1957 2004 -999 0
    9. 9701400 153 12492 80 MENADO/DR. SAM RATUL INDONESIA 1995 2006 -999 0
    10. 9997418 153 12492 81 SAMRATULANGI MENADO INDONESIA 1972 1989 -999 0

    One thing that’s unsettling is that many of the assigned WMo codes for
    Canadian stations do not return any hits with a web search. Usually the
    country’s met office, or at least the Weather Underground, show up – but
    for these stations, nothing at all. Makes me wonder if these are
    long-discontinued, or were even invented somewhere other than Canada!

    7162040 brockville
    7163231 brockville
    7163229 brockville
    7187742 forestburg
    7100165 forestburg

    Near the end, large amounts of code are doing things and the discussion starts dealing with larger issues, but there’s still a need for tools to cope with the haystacks of data being tossed about:

    Got all that fixed. Then onto the excessions Tim found – quite a lot
    that really should have triggered the 3/4 sd cutoff in
    anomauto.for. Wrote ‘retrace.for’, a proglet I’ve been looking for an
    excuse to write. It takes a country or individual cell, along with dates
    and a run ID, and preforms a reverse trace from final output files to
    database. It’s not complete yet but it already gives extremely helpful
    information – I was able to look at the first problem (Guatemala in
    Autumn 1995 has a massive spike) and find that a station in Mexico has a
    temperature of 78 degrees in November 1995! This gave a local anomaly of
    53.23 (which would have been ‘lost’ amongst the rest of Mexico as Tim
    just did country averages) and an anomaly in Guatemala of 24.08 (which
    gave us the spike):

    7674100 1808 -9425 22 COATZACOALCOS, VER. MEXICO 1951 2009 101951 -999.00

    1994 188-9999 244-9999-9999 286 281 275 274 274 262-9999
    1995 237-9999-9999-9999 300-9999 281 283 272-9999 780 239
    1996 219 232 235 256 285 276 280 226 285 260 247 235

    Now, this is a clear indication that the standard deviation limits are
    not being applied. Which is extremely bad news. So I had a drains-up on
    anomauto.for.. and.. yup, my awful programming strikes again. Because I
    copied the anomdtb.f90 process, I failed to notice an extra section
    where the limit was applied to the whole station – I was only applying
    it to the normals period (1961-90)! So I fixed that and re-ran. Here are
    the before and after outputs from trace.for:

    REPLY: [ At least a half dozen times I've re-read this comment and tried to decide what to say. Which piece to quote and respond to... And I just end up sitting here numb at it all... "Revlon" is right. Slather enough Tammy Faye over it and maybe no one will notice! The number of "crowbar" this and "hammer" that... (Mouth flapping silently as words fail.. ) -ems ]
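    For anyone wondering what that “3/4 sd cutoff” would look like, here is a minimal Python sketch (hypothetical; anomauto.for surely differs in detail) of a whole-record standard deviation screen, fed the three Coatzacoalcos rows quoted above (monthly means in tenths of a degree C):

```python
import statistics

MISSING = -9999

def sd_outliers(values, k=3.0):
    """Return values lying more than k standard deviations from the
    mean, computed over the WHOLE record (not just a normals period)."""
    good = [v for v in values if v != MISSING]
    mean = statistics.mean(good)
    sd = statistics.pstdev(good)
    return [v for v in good if abs(v - mean) > k * sd]

# The 1994-1996 rows from the database entry quoted above.
coatzacoalcos = [
    188, MISSING, 244, MISSING, MISSING, 286, 281, 275, 274, 274, 262, MISSING,
    237, MISSING, MISSING, MISSING, 300, MISSING, 281, 283, 272, MISSING, 780, 239,
    219, 232, 235, 256, 285, 276, 280, 226, 285, 260, 247, 235,
]
print(sd_outliers(coatzacoalcos))  # [780]  <- the 78.0 C November 1995 value
```

    Even this crude version flags only the 780. Apply the screen to the 1961-90 normals period alone, as the quote describes, and a 1995 spike sails straight through.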

  16. Ric Werme says:


    Good list! I seem to remember CDC had a machine that used BCD.

    Another piece of trivia is that the increment command (i++) in C and Java came from the DEC machines which had a command to increment or decrement a register.

    Makes you wonder what they were thinking of when they designed these machines.

    My father designed a process control computer in 1962 that had a 25 bit word size for a sign bit and 6 BCD digits. He designed it for people more interested in monitoring power stations than in learning binary. Some of those systems ran for 30 years and were installed all over the world. He said it was so easy to program that a 12 year old could do it. Since I was 12 at the time….

    C was developed for PDP-11s, which had “post-increment” (i++) and “pre-decrement” (–i) addressing modes. They made it easy to scan through tables, implement a stack (mov r0,-(sp) to push, mov (sp)+,r0 to pop), and fetch “immediate data” (mov #1234.,r0 emitted mov (pc)+,r0 followed by 1234.). The PDP-11 architecture had a lot of thought put into it, and influenced many later CPUs. It also influenced C a lot, and the result was something where a dumb compiler could produce some decent code, especially at the hands of someone who knew the architecture.

    The PDP-10, a 36 bit machine, ranks as the most assembler programmer friendly system I’ve ever used. There’s no call for that in this era of cached/pipelined/multi-math-unit CPUs with highly optimizing compilers, of course, but in its day, the -10 was a wonderful machine to use.

  17. Cold Lynx says:

    Probably the same Harry who wrote the file mentioned above is on the CRU staff website.
    There he claims to be working on:
    Dendroclimatology, climate scenario development, data manipulation and visualisation, programming.

    Yes. Data manipulation. On the official website.


  18. Adam Gallon says:

    If they’re Catholics and went to Confession,
    I wonder how many Hail Marys they’d have to recite?
    It’s garbage, an Augean Stables of immense proportions.
    This crap is being used to make decisions on a global scale.

  19. Tonyb says:

    Good stuff.

    I wonder how many politicians, journalists or indeed the general public, realise that the global temperature relies on the equivalent of casting the runes, then if the answer is inconvenient trying it again. Even worse that it becomes retrospective according to the theory of the week.

    I am sure those involved in this complicated game will just think of us as little people who don’t understand the machinations of the great and they have nothing to answer for. Arrogance and complete belief in their highly experimental science were the overwhelming impressions I received when reading the emails.

    Must go now and read the temperature in my garden. I threw away that old fashioned thermometer thingy and now get just the temperature I would like by dancing three times round the compost heap and throwing sawdust in the air whilst chanting an invocation.


    REPLY: [ Tony, I've found that tossing old radishes works just as well, and I don't need to cut any wood to get sawdust. BTW, if the day is a particularly cold one, I've found that tossing 3 Scotches down the old hatch warms the day nicely. One particularly cold day I tried 6, or maybe 9, but I don't remember the results very well... Experimental Science can have it's risks. ;-) -ems ]

  20. dearieme says:

    “that really should have triggered the 3/4 sd cutoff” – really, a cut-off, rather than flagging a problem for human attention?

    REPLY: [ Yes. These climate guys seem really fond of just chopping off things they don’t like and splicing on things they do like. And programs tend to “dead halt” if they get something they don’t like and can’t chop or splice… though sometimes they just do ‘the wrong thing’ with it and keep on running… There is substantially no “robustness programming” in GIStemp (and evidence for ‘not much’ in CRUt) and there is ample evidence in both for a default style / method of:

    PICK ONE: { chop and drop | splice | dead halt | silently mangle and continue }

    The idea of “Log run exceptions and flag” is sadly lacking.
    The idea of “Select and log” is sometimes used, though usually with a chop.
    The idea of “Sanity check and Crossfoot” is not in existence.
    The idea of “Normal Run Counts Crosscheck” is not in existence.
    (That is, “IF ( stddeviationcount .LE. 30 ) write (*,*) 'LOW SD Errors, check run'”
    or similar code typically is not used…)

    Basically, the “flagging for human attention” is typically “the run failed to complete, I wonder why?” (at least, using GIStemp as an example. I’m still working through the CRU code. It may yet have some good bits in it. At least one of the programmers, Tim Mitchell, seems to do a good job. Signs his work, too. Nice clean layouts. Logical design. I like Tim’s work. I’d hire him. Don’t know how much he did, though. I started with some of the newer stuff and I’m working backwards to the ‘old crud’ …)

  21. Pingback: The Harry_Read_Me File « SOYLENT GREEN

  22. E.M.Smith says:


    Oh, I left one off the list. It ought to also include “Or just make something up and stick it in” …

    From the README_GRIDDING.txt file inside “pro” we have:

    ReadMe file for gridding a la Mark New 2000
    Tim Mitchell, 30.3.04

    The main program for gridding is quick_interp_tdm2.pro
    This program is based on one of Mark New’s called quick_interp.pro

    This program takes as inputs the outputs from anomdtb.f90 option 3 -
    see ~/code/linux/cruts and the readme file there. The location of
    these files is identifed as pts_prefix. There may be an additional input
    required of synthetic data, to augment sparse grids for secondary
    variables – see the published literature for the reasoning here. The
    synthetic files are in binary format and are identified to the program
    through synth_prefix. Both identifiers are only prefixes, not full names,
    because the program itself supplies the year-specific file endings. Use
    anomfac and synthfac as appropriate.

    Use year1 and year2 to specify the range of years to process.

    Use out_prefix to specify the location of the output files.

    Use dist to specify the correlation decay distance for the climate
    variable being interpolated – necessary information to determine where
    to add dummy or synthetic data.

    Use gs to specify the grid size – 0.5 for half-degree

    Use dumpbin to dump, Mark New style, to unreadable IDL binary files
    Use dumpglo to dump, Tim Mitchell style, to .glo files (suite of processing
    software for .glo files under ~/code/linux/goglo)
    Don’t bother with dumpmon
    Use binfac and actfac as appropriate

    If creating primary variables, don’t bother with synthetics.
    If creating secondary variables, create (or find) primary variables, grid
    at 2.5deg resolution, and store as IDL binary files. Then use
    to create synthetic grids for the correct variables.
    Then use quick_interp_tdm2.pro
    on the secondary variable, with synth_prefix supplied, to create the new grids.
    Bear in mind that there is no working synthetic method for cloud, because Mark New
    lost the coefficients file and never found it again (despite searching on tape
    archives at UEA) and never recreated it. This hasn’t mattered too much, because
    the synthetic cloud grids had not been discarded for 1901-95, and after 1995
    sunshine data is used instead of cloud data anyway.

    To convert the output .glo files into the grim formatted files supplied to users:
    1. convert these land+ocean files to land-only files using globulk.f90 option 1
    (globulk.f90 is under ~/code/linux/goglo)
    2. convert the land-only files to grim using rawtogrim.f90 (~/code/linux/grim)

    While I like Tim’s work and he has a good careful style, it is clear he is working inside constraints set by others “in the literature” and these include some degree of “synthetic” data… (But don’t worry if you lost some of the method, it’s only used some of the time anyway… in other years you can use other synthetic data…)

  23. E.M.Smith says:

    And we have an interesting README file from the Linux directory….

    First, though, a comment or two:

    It warms my heart to see that they are not only using LINUX but the distributed processing Beowulf Cluster as well. This to some extent validates my choice of a Linux box as my test bed for GIStemp. I’m using a similar OS base for a similar product.

    I once built a 6 node Beowulf just for my own fun and made another Beowulf Cluster as a “Software Build System” at a company making an internet appliance. It can work very well, but it “can have oddities”. In this README we see some reference to such oddities.

    Please note that while I’m going to poke a bit of fun at the “might work” language, this is actually a fairly normal way of saying “We have this working in production HERE, but on this new Beowulf thing, well, it’s experimental…”

    The top of the README:

    Linux f90 code written by Tim Mitchell
    readme file, 24.2.04

    I have written a fairly substantial volume of f90 code in the past few years, mostly
    to cope with all the data-sets I have been handling. I have done almost all my data
    manipulation in f90, and almost all my data plotting in idl.

    The code in this directory was (mostly) originally written in Compaq f90 on crua6,
    and was (mostly) subsequently ported to work under the Portland Group f90 compiler
    on the UEA Beowulf cluster (beo1.uea.ac.uk).
    This code has been ftp’d back to here to be within reach of crua6 users. Where I have
    had the opportunity, I have ported the code back to crua6. This equivalent crua6 code
    may be found in ./../alpha

    The compilation statements in the headers of the main programs will use the pgf90 command
    if the program has been ported across to the Beowulf cluster. If I never got around to
    porting, the statements will use the f90 command. If you want to use an unported f90 program
    on the Beowulf cluster, find a pgf90 program and copy the syntax. It should mostly work.

    I would “howl” over this, but for the fact he is saying “IF”. This is really just a “Here There Be Dragons!” notice to anyone wanting to do this bit of experimental work.

    But one is left wondering how many times a “port was done” to the Beowulf, and by whom, and whether they were as careful about “it should mostly work” as Tim?

    Exceptions may include:
    a. Hard addressing of dump files in the main program (search for ‘f709762′)
    b. Reading the number of lines in a file (via a shell call wc -l) as i10 instead of i8
    c. Passing a pointer array to a subroutine, subsequently allocated in the subroutine and
    returned to the main program; for some unknown reason this works fine on crua6 but
    not (or erratically?) on beo1. If you get a segmentation fault, suspect this!
    No easy solution, because I use this construct extensively. You may need to rewrite
    the code to allow two calls. Call the subroutine once to find the dimensions of the
    arrays, then allocate the arrays in the main program, then call the subroutine again
    to fill the arrays. Good luck!
    d. Calling a subroutine that has been subsequently modified since I last compiled the main
    program. Check that the variables passed to/from the subroutine are the same in the
    main program and in the subroutine. Changes are usually noted in the subroutine headers.

    One can only hope that the use of the Beowulf Cluster was ‘experimental only’. At least until they worked out all the bugs and bug sources noted above.

    Normally I would not be worried about this kind of note in a README about a port to a new platform, especially one that is quasi experimental, as long as everyone knew to keep using the production box until the new box or cluster was “vetted” and passed QA.

    BUT …

    The formal QA process seems to not exist. There is an R&D atmosphere of “Run it and if it looks OK, maybe it is”. Notice the statement “if you get a segmentation fault” and ‘erratically’. So some code might run to completion (but produce wrong results) while other code might die. Or it might only die on some input data, but not others.

    There is little to show that there is any procedure in place to assure that only QA checked, vetted, code is run on QA checked, vetted, hardware.

    I would not care at all. I would not even make a mumble about it: IFF this was just being used for R&D to publish some papers about someone’s favorite pet theory.


    At the point where it is used for ‘Policy Decisions’ or ‘Policy Guidance’ it really does need a “Qualified Install” AND full QA cycle on it – End to End.

    (A Qualified Install is what is done for FDA-regulated drug companies. Every Single Step is exactly specified. And I do mean every single step and I do mean exactly – even “plug power cord into 220 VAC 60 Hz 30 A NEMA6-30 outlet”. You can guess that if you need that level of detail for “plug it in” there might be “issues” with the level of detail for the code build / run environment and the above README …)

    Or this snippet from the Linux/Cruts README:

    The program opcruts.f90 is the home for all the little useful routines for
    manipulating the .cts and .dtb (and .src and .dts) files. Option 1 can be used
    to convert from one of these formats to another. The other options can be
    explored to find out what they do.

    That sure sounds like someone cooking up a batch of chili-con-carne ‘by taste’. Add some beans and meat (don’t ask what kind, sausage is good, though, we think…), some comino, salt and black pepper; then Add Chili To Taste, we like Fire Breathing Hot…

    Now I don’t know if they actually change any of the temperature data, but that kind of “recipe” would never pass a “Qualified Install” or drug trial requirement.

  24. E.M.Smith says:

    OMG, they are using EXCEL to drive this thing:

    ! fromexcel.f90
    ! program to convert raw data files from Excel spreadsheet to CRU ts format
    ! written by Dr. Tim Mitchell (Tyndall Centre) on 03.06.03
    ! assumes one row of information per variable/station/year combination
    ! assumes that all relevant info is on each line (including meta data)
    ! f90 -error_limit 5 -o ./../cruts/fromexcel time.f90 filenames.f90
    ! crutsfiles.f90 grimfiles.f90 ./../cruts/fromexcel.f90

    program FromExcel

    I’m going to take a break or I’m going to bust a gusset…

    So SOMEWHERE there is SOMEONE who uses EXCEL to do SOMETHING and then it gets fed into this system via a FORTRAN conversion program.

    Oh My God.

  25. Harold Vance says:

    If I take Tim’s language at face value, someone is using an Excel spreadsheet to store raw data, which then gets fed through Tim’s meatgrinder fromexcel.f90.

    Excel (an ever-changing proprietary format that tried to overthrow a centuries old practice of handling leap years) is going to be subject to bit rot and conversion issues as the software ages and goes from one version to the next.

    OMG is right.

  26. Ellie in Belfast says:

    @Tonyb/EMS reply
    my grandfather always used to say that you’d never be cold when you had the price of a bag of coal inside you. I guess good Scotch wasn’t cheap even in his day ;-)

    REPLY: [ I need to find a very very expensive seller of coal so my 'cost benefit' spreadsheet recommends better Scotch! ;-) (speaking of goal driven behaviour...) -ems ]

  27. BobF says:

    It is simply not possible to get a sign wraparound on a floating point number: if you overflow it, you will get +Inf or -Inf. IEEE floating point handles infinity, “not a number” (0/0), +0, -0, etc. correctly.

    However since the input parameter is an integer, I guess the programmer did parts of the arithmetic on integers, which will do sign wraparounds on overflow.

    This is a real newbie error, honestly it’s a f***ing shame.

    These guys never heard of asserts ?

    REPLY: [ I fished this out of the "review" queue and moderated one word in it a bit. Your faith in compiler writers is admirable, but I've had the case where "you can't get here" diagnostic writes have written... But yes, if using "proper" floats and no compiler error they ought not get there. That's why I used an INT example. I presume they did the squaring "long hand" at some point then cast to a float or double.

    FWIW for 'not quite programmers' here is an example:

          i = 7
 100  do k=1, 10000
         i = i*i
         write(*,*) i
      end do
 666  stop
      end
    [chiefio@tubularbells analysis]$ f77 int.f 
    [chiefio@tubularbells analysis]$ a.out

    You can see how this can produce nearly random values. This trick, btw, is often used to get a 'nearly random' seed value for stochastic code. You do something like this, pick a number in the series based on something like the time of day, toss out the high order half of the bits and "presto" a "random" set of values. Don't see that it belongs in a temperature series, though...

    The "Data Value" in their case up top prints as an INT, the total of squares with a ".00" on all values. My guess was they did something like:

    square = idata * idata

    and forgot that the multiply will be done as INT then the cast to FLOAT for 'square' will happen... after the rollover. Fix is easy: square = float(idata) * float(idata)

    They still have the risk of hitting infinity when they didn't mean to, but the negatives ought to go away from their "sum of squares"...

    So that's what I figure they did... And yes, I agree with the sentiment... -ems ]

  28. E.M.Smith says:

    Also, just for fun, here is the same “test code” as a floating point example:

    [chiefio@tubularbells analysis]$ cat float.f
      f = 1.5
 100  do k=1, 10000
         f = f*f
         write(*,*) f
      end do
 666  stop
      end
    [chiefio@tubularbells analysis]$ f77 float.f
    [chiefio@tubularbells analysis]$ a.out | more

    By default in FORTRAN, names starting with I through N are INTEGER while names starting with A-H or O-Z are floating point (REAL). The use of I in the first example makes it an INT while here, using F makes it a floating point… details, details,…
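    And for those without an f77 compiler handy, the same INTEGER*4 wraparound can be mimicked in Python. (The arithmetic below is my guess at what anomdtb did, per the reply above; the rounding remark assumes the total was then printed through a single precision REAL.)

```python
def wrap32(n):
    """Reduce an integer to signed 32-bit two's complement,
    the way a FORTRAN INTEGER*4 stores it."""
    return (n + 2**31) % 2**32 - 2**31

optotsq = 2976589                                  # OpTotSq after DataA val = 561
optotsq = wrap32(optotsq + wrap32(49920 * 49920))  # then the bogus 49920 arrives
print(optotsq)                                     # -1799984307
```

    49920 squared is 2,492,006,400, which does not fit in 31 bits, so the running total goes hugely negative. Rounded to single precision for printing, -1799984307 becomes -1799984256.00, which matches the value in the HARRY_READ_ME log.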

  29. In my view Jones ref to using GHCN is a classic obfuscation. The GHCN is a 1990′s construction, the key Jones et al work was done by 1986. There is only one way to understand what Jones et al did and that is to use the station data they used and track the changes they made.
    No other way – but of course PDJ does not want you doing that.

  30. E.M.Smith says:

    A fascinating thread on where “The Deed” was done:


    Involving open proxy servers, Turkey, Russia, …

    Also, on WUWT, there is a very interesting theory that it was an FOIA Request Gone Bad and accidentally leaked, then leveraged by someone outside who spotted the leaked data…


    I think it all fits.

  31. FrancisT says:

    E.M Smith,

    if you’d like to examine the code I have it up on my website

    There’s the (infamous) readme file – http://di2.nu/foia/HARRY_READ_ME-0.html – split into useable chunks, the raw CRU code – http://di2.nu/foia/cru-code/

    and the results of my first scan through the code


    I’d welcome a fortran programmer’s comments because I isn’t one

    REPLY: [ I'll take a look at your scan. I've got the whole download and I'm wandering through it; but it's 4 AM and I ought to sleep sometime ;-) -ems ]

    REPLY2: [ I love it! Only nits to harvest are the phrase "There are a bunchaton" that is maybe a typo or maybe a style thing; it's just a bit odd. (but I like odd sometimes...). There is also the interesting speculation elsewhere that "HARRY" might have been the intended reader and the "HARRY_README" might have been written by some contractor / intern / Ph.D. candidate and aimed at Harry as the guy who might explain these things... Supposedly there is some evidence for this in some of the code comments, but the poster did not provide links. It would 'fit' the personality profiles better, though. As of now, I dunno... but worth noting as a "possible". -ems ]

  32. Harold Vance says:


    In regard to use of Excel, reference this file:


    Procedure for updating the databases underlying the
    CRU high-resolution grids.
    Tim Mitchell, 25.06.03, revised 30.3.04

    1. Transform additional station data into CRU time-series

    (c) Jian’s Chinese data from Excel
    i.e. a single ASCII table per variable, with one
    line per station/year, use (or modify) fromexcel.f90.

    It looks like the Chinese are storing data in Excel.

    Here is a link to a relevant Climate Audit post dated April 2007 named “Some China Comparisons” that also happens to discuss one of your favorite topics, namely the proliferation of Airportmometers in CRU and GISS, as well as “fanatical obstruction” from CRU:


    Here’s a great quote from Steve:

    Steve McIntyre:
    May 23rd, 2007 at 12:51 pm

    One of the things that has come out of my first inspection of the GISS stations is the tremendous increase in the proportion of stations in the post-1990 period that come from urban airports. This is because the majority of the updated data in CRU and GISS other than the U.S. is from the automated airport weather system. I suspect that this may underpin the fanatical obstruction from CRU to even identifying their stations.

    So it’s not just a UHI effect but an urban airport effect. I’ll do a post on this some time. In Toronto, the landscape around the airport has been urbanizing very rapidly. It was on the outskirts of the city when I was a boy and now it’s urban. This must be happening all over the world.


    AGW = Airports Globally Warming


  33. Harold Vance says:

    The “in a nutshell” comment was obviously not Steve’s. I’m not sure how to offset quotes in wordpress other than using underscores or hyphens.

    REPLY: [ The "old fashioned" way, from before html, was to put "BEGIN QUOTE:" and "END QUOTE." blocks of text around quotes.


    The virtue of this is that it survives some other person quoting your quote, where the HTML tags will not, and they either have to know how to do them, or "your quote" and "their quote of you" will get munged back together as they leave out the HTML.


    This gave way in the "unix" world to "levels of greater than signs" (that in the modern era get stolen by WordPress in an attempt to make them HTML tags... so I won't use them here, but one would have put a ">" in front of each line of the quote).

    } sometimes I substitute a brace for the > in
    } a Unix style quote block, though it does
    } look a little odd, but it's easy. You could use
    } other symbols as well. A "+" is common. So
    + in some systems that don't like > as a lead
    + character, you will see the plus sign used.
    + and I've seen others.

    Now, in the HTML Times, we use HTML "tags". These can include "b" for bold, "i" for italics, "blockquote":

    that usually indents with white space before and after

    and a few others. and they are all ended with a "/[tag]” construct, all inside matching opening and closing ” less than” signs and “greater than” signs . So: < b > This is bold </b> if I’ve done it right.

    You can, of course, use plain old quotes around short text. If you want to get fancy, you may use “strike” with the matching “/strike” to get “strikeout quotes”.

    And if you are wondering how I was able to insert the HTML metacharacters without having them stolen, it is via another more complex method. But I’ll stop here, because if I explain how to escape the metacharacters, then I’ll have to explain how I escaped the escape of the metacharacters, and then that will lead to showing how to wrap the escaped escaped characters in a quoted escape… and … “Recursion: See ‘Recursion’.”

    -ems ]

  34. Pingback: Hot Air » Blog Archive » CBS: East Anglia CRU covered up bad data, computer modeling

  35. E.M.Smith says:

    I love it:


    All UEA has to do is buy enough “Bad Code Offsets” and everything can be made all better…

  36. E.M.Smith says:

    A tiny bit of “forensics”…

    In my unpacked copy of FOIA we have many files and directories with very old dates, but some with clearly impossible dates. The date of “creation” of the file is from before the date of the contents. This often indicates a system was used that had a wrongly set clock. For the HARRY_READ_ME.txt file, the date is the first of the year 2009:

    /FOIA/documents chiefio$ ls -l HARRY_READ_ME.txt 
    -rw-r--r--   1 chiefio  chiefio  716953 Jan  1  2009 HARRY_READ_ME.txt

    and we see the same date on all the files holding emails, even though the email often includes newer dates.

    It is not clear to me if this is an artifact of the original machine where these were created, or some intermediate site where they were unpacked / repacked. (This, btw, is what I would have done to cover the date stamp on any download; be it hacked or whistleblown. I suspect that is exactly what it is, a ‘fogging of the trail’.)

    Consequently, it is fairly hard to use the ‘datestamp’ to see when these files were created. BUT …

    Looking inside HARRY_READ_ME.txt, down at the bottom, he includes an “ls -l” of a couple of files:

    The problem with re-running all updates, of course, is that I also fixed WMO codes. And,
    (though my memory is extremely flaky), probably corrected some extreme values detected
    by Tim. Oh bugger.
    Well, WMO code fixing is identifiable because you get a log file, ie, here's the tmp dir:
    -r--------   1 f098     cru      25936385 Aug 18 10:48 tmp.0705101334.dtb
    -r--r--r--   1 f098     cru       7278548 Feb 17  2009 tmp.0705101334.dtb.gz
    -rw-r--r--   1 f098     cru      25936385 Mar  8  2009 tmp.0903081416.dtb
    -rw-r--r--   1 f098     cru           408 Mar  8  2009 tmp.0903081416.log
    -rw-r--r--   1 f098     cru      27131064 Apr  2 11:15 tmp.0904021106.dtb
    -rw-r--r--   1 f098     cru      27131064 Apr  2 12:48 tmp.0904021239.dtb
    -rw-r--r--   1 f098     cru      27151999 Apr 15 14:11 tmp.0904151410.dtb
    tmp.0705101334.dtb had its WMO codes fixed and became tmp.0903081416.dtb, which has the
    accompanying log file (tmp.0903081416.log) to prove it:

    Here we can see dates like Apr 2 and Mar 8 2009. After the Jan 2009 supposed ‘creation date’…

    But the file is newer than that… See that April 2 has no year, while March 8 does? The Unix ls command only puts the year on files for ‘old files’ and assumes that you can figure out that the ‘new ones’ are in the last few months. So we know that between Mar and Apr is when that watershed falls…

    And what is that watershed?

    6 months.

    So we can place the creation of this ‘near the end’ entry at about end of September / early October of 2009. Allowing a week or two for the following entries and we can see that this is a fairly recent file, but probably not “now”.
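    The arithmetic behind that, for anyone checking my work (182 days is an approximation of the ls six-month cutoff, which varies a little between implementations):

```python
from datetime import date, timedelta

SIX_MONTHS = timedelta(days=182)  # approximate 'ls -l' year-display cutoff

# Mar  8 2009 was listed WITH a year  -> over ~6 months old at run time.
# Apr  2      was listed WITHOUT one  -> under ~6 months old at run time.
earliest_run = date(2009, 3, 8) + SIX_MONTHS
latest_run = date(2009, 4, 2) + SIX_MONTHS
print(earliest_run, latest_run)  # 2009-09-06 2009-10-01
```

    So the ls snapshot was taken between roughly September 6 and October 1 of 2009, which is how I get “end of September / early October”.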

    This, IMHO, matches the thesis that this set of data was being collated for a FOIA request and that the inside process got botched letting it leak to an outside server.

    Further, looking in the “mail” directory:

    $ pwd
    $ grep Nov * | grep 2009
    1257532857.txt:Date: Fri, 06 Nov 2009 13:40:57 -0700
    1257546975.txt:Date: Fri, 06 Nov 2009 17:36:15 -0700
    1257847147.txt:Date: Tue, 10 Nov 2009 04:59:07 +0100 (CET)
    1257874826.txt:Date: Tue Nov 10 12:40:26 2009
    1257881012.txt:Date: Tue, 10 Nov 2009 14:23:32 -0500
    1257888920.txt:Date: Tue Nov 10 16:35:20 2009
    1257888920.txt:     Date: Tue, 10 Nov 2009 15:35:37 +0000
    1257888920.txt:     Sent: 10 November 2009 2:43 PM
    1258039134.txt:Date: Thu Nov 12 10:18:54 2009
    1258053464.txt:Date: Thu, 12 Nov 2009 14:17:44 -0000

    It looks like the last email was Nov 12, 2009. There is also evidence that the number in the number.txt file name is sequential with date (though more work would need to be done to validate that presumption for all text files).
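    In fact that hunch can be tightened: the numeric names look like Unix epoch timestamps (seconds since 1 Jan 1970). That is an assumption on my part, but the gap between the two Nov 6 file names above matches the gap between their Date: headers to the second:

```python
from datetime import datetime

# File names and Date: headers quoted above (both headers are -0700):
a = 1257532857   # 1257532857.txt  Fri, 06 Nov 2009 13:40:57 -0700
b = 1257546975   # 1257546975.txt  Fri, 06 Nov 2009 17:36:15 -0700

header_gap = (datetime(2009, 11, 6, 17, 36, 15)
              - datetime(2009, 11, 6, 13, 40, 57)).total_seconds()
print(b - a, header_gap)  # 14118 14118.0
```

    Same 14118 second gap. So the names are almost certainly epoch timestamps, which would make the ordering exact, not just “sequential with date”.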

    Once again, it looks like someone working through an email log, pulling out bits that match a FOIA request, and putting them in numbered text files.

    While it would be “nice” to have the date stamps on the files, it looks to me like a pretty easy conclusion that the last data “saved” was about November 12 to November 14 (basically, that week ending, modulo the whole “time zones where people are” vs “time zone where the computer’s date is set” thing). We then have the “leak” at about November 17 / 18 (same “where in the world is your time zone” issue… all this really ought to be normalized to GMT, but I’m feeling lazy ;-)

    That gives us a “window” of about 3 to 6 days, spanning a weekend, for the data to “get out”.

    IMHO, it is not possible for a hacker to download all the raw sources from which this selected set would be compiled (i.e. the whole email archive, the whole file server with source code, etc.) and do this extract in the time available. The extraction and collation was done, most likely as an ongoing process, over a fairly prolonged period of time as an internal process at UEA and probably under the pressure of a pending FOIA request.

    Only at the last step did this internal work product end up on the “outside”. That most likely happened as an “atomic” event. A single 60 MB download.

    I would look first at the potential for a “flubbed” permissions issue on the external FTP server. Basically, the data was put “at the ready” and a change was made to “public” in error.

    I could easily see someone saying “That FOIA2009 file, we can let it go now.” Meaning delete it; but the sysadmin takes it to mean “let it be public now”… Yes, I’ve seen that kind of thing happen…

    I’ve also seen a “chmod 640″ mistyped as “chmod 644″ and similar “permissions issues”. There are lots of ways a “ready for release” FOIA file could become “released by accident”.

    Right after that, I’d look for an insider who was peeved that the FOIA was squashed, perhaps because they had SEEN the effort to thwart the FOIA inside the emails, who decided to “leak” the file instead of delete it.

    Finally, I think it is highly unlikely that an intruder broke into the UEA private network, wandered around on the email servers and file servers, and this was the only archive they felt was interesting enough to pull out. It just doesn’t pass the smell test… The filtering is too fine, the structure of the saved email file names is too orderly, and the coherency of the focus is too sharp. A Hacker would be “taking Coup” or “counting coup” and there is none of that. No embarrassing “meet me for lunch, don’t tell my spouse”, no “God was i drunk at that party”, etc. Basically, the “negative space” – what ought to be but is not, of the coverage is all wrong for a hacker / break-in.

    So, in my opinion, it is highly unlikely that a general “break in” was done and it is far more likely that a FOIA preparatory document directory was accidentally leaked by an administrative fumble OR that an insider leaked it on purpose.

  37. sierra says:

    I’d suggest correcting the typo in “HARRY_READ_ME” since that’s bound to be a popular search term.

  38. Tonyb says:

    I’ve got your Christmas present sorted out, ems. Just let me know the correct size.



    REPLY: [ Extra Large... but I could never wear it... Most folks would think I was talking about my hair line.. or worse... ;-) -ems ]

  39. StrongContinental says:

    Hi E.M. CBS have a nice article up on the utter b@lls-up…


    Including a link to your floating point maths howler. Amongst many others.

    Shame it doesn’t link to your demolishment of the entire GHCN dataset, which is the basis of what Phil Jones now describes as… well in fact I’ll just quote the relevant part of his statement:

    [WARNING keyboard endangering content follows - please swallow all liquids and put down coffee cups]

    “Our global temperature series tallies with those of other, completely independent, groups of scientists working for NASA and the National Climate Data Center in the United States, among others. Even if you were to ignore our findings, theirs show the same results. The facts speak for themselves; there is no need for anyone to manipulate them.” Phil Jones


    Quite how he can claim “completely independent” is beyond me, the evidence of complicity is now in plain view – in his own emails!!

    “Their data shows the same results.” Really? Now I wonder why could that be…

    [hope everyone's keyboards made it through that. The only convincing hockey stick these jokers have successfully produced is the hit uptick on WUWT and similar sites, including Chiefio (I hope), plus - I strongly suspect - a surge in keyboard replacements and screen wipe products]

  40. Harold Vance says:

    ems, are you planning to review the code? It would be nice if someone could set up a web site where people could review the code at their leisure and post comments. Kind of like a Groklaw for CRU.

    Although I don’t know any Fortran, I’m generally quick to catch on to other languages. I’ve become somewhat curious as to the programming practices that CRU has been using. I want to learn how it compares to what I am doing now and have done in the past. Any suggestions?

    REPLY: [ I'm thinking about it... But with GIStemp already on the plate, it's daunting. I'll certainly put up at least a few samples. Harry had 3 years at it... If you just want a "look see", I downloaded my copy from the Wall Street Journal article link. If you want commentary included, well, that will dribble out over time. FWIW, FORTRAN is not a very difficult language. Far easier than C or Pascal in many ways. (Though it has better, more intricate I/O built in... including the odd 'feature' that you can make an infinite loop inside a FORMAT statement... Honest! ) My "first blush" is that most of it was written by "Tim" and the coding style is fairly good. The only really bad style he has is a tendency to very complex intricate interweaving of programs. Well, and the general sort of "slap dash hand tool don't validate input don't error handle well" style that is often seen in R&D code. Basically: He codes well, but his systems design is brittle and overly complex and confusing.

    So I'd figured with everyone and his brother having the file downloaded, individual program postings would not "be a feature"... but if you think it's worth it, well, I could probably get a few posted over the weekend.

    FWIW, crib notes version of FORTRAN:


    REPLY2: See the FrancisT comment up thread a bit too, he says that the code is up at the link he provides. I think it's all of it, but I've not gotten through it all ;-)
    -ems ]

  41. Steve says:

    How about quoting the http://www.di2.nu/foia/HARRY_READ_ME.txt
    file with the next line *included*:
    ..so the data value is unbfeasibly large, but why does the sum-of-squares parameter OpTotSq go negative?!!

    Probable answer: the high value is pushing beyond the single-precision default for Fortran reals?

    Thus, if you look at the file, you see that the “newbie” you are all laughing at thought the same as you about why the sum was going negative. His solution was to correct an erroneous input value. I agree with his decision. Making the code more robust in this case would be an utter waste of time.

    He’s doing climate research, not writing code to fly an airplane. There is no need to write bulletproof code with bounds checking, error handling, etc. If for example he’s calculating the mean and variance of a bunch of surface temperature measurements, do you really expect him to put checks in that piece of code to test if the data is in a realistic range? Is he supposed to do that for all functions?

    REPLY: [ Well, I was being charitable and leaving out a particularly embarrassing bit for Harry (and avoiding a particularly technical discussion that will cause some folks to glaze) but since it's already been brought up: Notice the word "reals". He is asserting that it is a roll over in a "real" or "float" type variable causing the sign reversal. Except that can't happen. (See the code example posted above). He not only is having an INTEGER roll over, he "doesn't get it" that it can not be in the "reals"... A point you seem to have missed as well.
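    To make that INTEGER rollover concrete, here is a minimal sketch (Python used for illustration; it assumes Fortran's default 32-bit INTEGER and single-precision REAL, which is what the logged values indicate) that reproduces the HARRY_READ_ME debug numbers exactly:

```python
import struct

def f32(x):
    """Round a Python float to IEEE-754 single precision (Fortran default REAL)."""
    return struct.unpack('f', struct.pack('f', x))[0]

def i32(x):
    """Wrap a Python int to 32-bit two's complement (Fortran default INTEGER)."""
    return ((x + 2**31) % 2**32) - 2**31

# Values taken from the HARRY_READ_ME debug log
op_tot_sq = 2976589.0          # running REAL total just before the bad value
val = 49920                    # the "unbfeasibly large" integer datum

square = i32(val * val)        # the INTEGER multiply overflows 32 bits...
op_tot_sq = f32(op_tot_sq + square)   # ...and the garbage lands in the REAL sum

print(square)        # -1802960896
print(op_tot_sq)     # -1799984256.0, matching the logged OpTotSq to the digit
```

    The wrap happens in the INTEGER multiply; single-precision rounding of the running total does the rest.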

    Also, there is no evidence that he fixed the program when he "corrected an input value", only that he did notice the onset was correlated with a high value. And the comment does not say the high value was in error, only that it triggered the problem. Finally, "robustness" in code is never wasted. But more to the point, yes, it is not to drive a single airplane into the ground, it's being used to drive the entire world economy into the ground... I think some more "robustness" is justified by that... and emphatically: YES all input ought to be range tested for bogus values. I learned that about week 3 of the first programming class I ever took. In FORTRAN.

    Your advocacy of bad programming practices deserves comment, but I will moderate myself on that point; only pointing out that there is never an excuse for shoddy workmanship. -ems ]

  42. Steve says:

    I see. So he understood that a large value was causing some sort of overflow problem in a running sum in someone else’s code and made a note of that to himself. But he wrote “real” instead of “integer.” Ha ha, what an idiot.

    You are wrong about there being “no evidence” that he fixed the problem with the data. The file clearly shows that he tracks down the value to a line in a particular file and corrects it. This fixes the problem. You expect him then to go in and debug code that is now working?

    Your assertion that robustness in code is “never wasted” is obviously false. At some point there is a decreasing marginal benefit for each preventive measure you take. At some point the cost in time and effort outweighs that benefit. Here we are talking about code that may only need to be run one time, against one data set, to produce a result for publication. How much time are you going to spend on error handling, etc.?

    REPLY: [ Yes, it is a "Novice stupid error" to assert that it was a "REAL" that was the problem when it was an INTEGER. I didn't want to point that out (Hey, I feel for the guy. I'm doing a similar programming task with GIStemp and it is not pleasant work) but OK, you want to keep flogging him for it so we flog.

    It is absolutely critical that you keep straight how REAL is handled vs how INTEGER is handled. At all times. To not know this. To not have it at the very top of your mind as a FORTRAN programmer is a "newbie" mistake and is a stupid error. It was presented in the very first week of the very first FORTRAN class I ever had. It is that basic. Wrap your head around it. Embrace it. It is truth.

    Take a look at the two example short programs in the comments up thread. Notice they produce dramatically different results. All from naming the variable "i" vs "f". It matters that much to be constantly aware of INTEGER vs FLOAT (or REAL).

    Further, to your assertion that changing a bad data item "fixed it" and "the code now works": It does not. It is just as broken as it ever was. From the code:

    integer, pointer, dimension (:,:,:)             :: Data,DataA,DataB,DataC
    real :: OpVal,OpTot,OpEn,OpTotSq,OpStdev,OpMean,OpDiff,

    The "square an INT" and stuff it into a REAL running total is still there.

    The very next time a large value ends up in a data field, the same "issue" of a negative sum of squares will return. If nothing crashes from it, or if it is small enough that subsequent values added to it become slightly positive again and so goes unnoticed, the work product of the program may be quite wrong AND UNNOTICED AS SUCH. And every single wrong conclusion and every single bogus answer will lie at the feet of the programmer who DID NOT change the code to catch this (demonstrated as happening) data error. Realize that such a "bad but not bad enough to crash" data item may still be in the input data for this particular run.
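    A hedged illustration of that silent-failure mode (again simulating 32-bit INTEGER wraparound in Python): some bogus values corrupt the sum of squares without ever driving it negative, while casting to float before squaring preserves the true magnitude for a range check to catch:

```python
def i32(x):
    """Wrap to 32-bit two's complement, as Fortran default INTEGER arithmetic does."""
    return ((x + 2**31) % 2**32) - 2**31

# A bogus datum of 65536 contributes *zero* to the sum of squares: no crash,
# no negative total, just a silently wrong answer.
print(i32(65536 * 65536))   # 0

# A datum of 50000 wraps negative, but later positive squares can drag the
# running total back above zero, unnoticed.
print(i32(50000 * 50000))   # -1794967296

# Cast to float BEFORE squaring and the true magnitude survives.
print(float(65536) ** 2)    # 4294967296.0
```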

    This is the same as finding a roach in a bucket of stew and just fishing out the roach. The correct answer is "roach abatement" (and a new bucket of stew, with ingredients being checked for contamination before being put in the stew...). "Look, I fished the roach out and the stew is fine" is just not the right answer.

    So look, I can accept that you clearly are not a programmer and "don't get it" that this stuff matters. I can accept that you are unclear on why the program is still just as broken as ever until: 1) Proper bounds checking is put in it. 2) The "square the INTs" gets changed to "cast INTs to float, then square them". 3) Data screening code is written to "preen" the data file for such clearly bogus values 'up front'. I "get it" that those things are beyond your ken. Can you at least not shout your ignorance of them to the world? I really don't like watching folks embarrass themselves.

    It is this kind of big error case that lets you find the "square the ints" type of intermittent failure in code and identifies the bug in the code (when smaller values, still in error, can sneak by). If the next time it is run the bogus value fails to cause a crash, but makes the product completely wrong, then the programmer has failed hideously in their debugging skills. On a bug that was handed to them on a silver platter. If they can not handle that, there is no hope they can handle the pernicious bug that is not so blatant.

    And finally, you assert that: because an error is only done perhaps once, and only "for publication" it does not matter. Just silly. It would be an even worse failure if they went to publication with wrong results and ended up needing to retract their work later. And once again: You have no way of knowing how many "roaches" are in the stew as it stands. You only know that you saw one and pulled it out... exactly the wrong behaviour. The correct answer to "How much time are you going to spend on error handling?" is: Enough to assure you catch them all.

    Realize that we're talking about, maybe, 5 hours work to instrument this code up the wazoo with data and bounds checks (and frankly I think I could have it done in about 30 minutes... it isn't that intense... 784 lines all told as it stands. Only 15 of them "READ" statements.) Maybe you think it isn't worth skipping a coffee break to produce quality vs broken code, but my values differ...

    But wait, there is more. From the header of the program in question:

    ! anomdtb.f90
    ! f90 program written by Tim Mitchell
    ! originally written as part of opcruts.f90 (originated 11.02.02) as option 25
    ! program to convert CRU ts .dtb files to anomaly .txt files for use in gridding
    ! pgf90 -Mstandard -Minfo -fast -Mscalarsse -Mvect=sse -Mflushz

    Notice that it says "for use in gridding" as part of the anomaly step in CRU. This is done each time the HadCRUt product is run (and that is much more than "once"...). It has been in existence in one form or another for 7 years. And only now are we finding out how many roaches are in the stew... And your desire is to leave lots of them in for another... how long?...

    The argument about "cost benefit" might actually have some merit, if this code was to produce some students final exam paper. But when folks are planning to spend, or not spend, a few $trillion based on the anomaly maps it produces: It certainly is worth the hour or three it might take to put in a little bit of bounds checking on data. And it is clearly worth the 20 seconds it would take to type:

    FLOAT(DataA(x,y,z))**2

    Though even for an exam paper I have reservations about ignoring the bounds checking... In that old FORTRAN class I took, it was so important that if you didn't do it, you got at most 1/2 credit. On every single problem in the whole class after the third week when bounds checking was introduced. (The "problem" was to produce a 'square root' program. The instructor deliberately seeded the DATA DECK with a negative number. I remember it clearly as I got an "A" because I had checked for negative numbers, while other folks did not... )

    But I guess at the end of the day it really just comes down to "What is your work product?" If you are to produce a worthless pile of untrustworthy numbers that are unsuited to anything but scaring children, then by all means, skip the bounds checking and square your INTs. If you are to produce valid and trustworthy information usable for policy decisions: Never skip the data preening, bounds checking, type checking, and QA testing of your product. -ems]

  43. E.M.Smith says:

    Hi E.M. CBS have a nice article up on the utter b@lls-up…


    Including a link to your floating point maths howler. Amongst many others.

    Golly! Maybe this issue will finally get addressed!

    Shame it doesn’t link to your demolishment of the entire GHCN dataset, which is the basis of what Phil Jones now describes as…

    Well, anyone who follows the link to here can also find:


    and all the other stuff under the AGW and GIStemp issues category on the right hand margin.

    well in fact I’ll just quote the relevant part of his statement:

    [WARNING keyboard endangering content follows - please swallow all liquids and put down coffee cups]

    “Our global temperature series tallies with those of other, completely independent, groups of scientists working for NASA and the National Climate Data Center in the United States, among others. Even if you were to ignore our findings, theirs show the same results. The facts speak for themselves; there is no need for anyone to manipulate them.” Phil Jones


    Quite how he can claim “completely independent” is beyond me, the evidence of complicity is now in plain view – in his own emails!!

    OMG! So “Hadley CRUt” must be right because it agrees with GIStemp; but GIStemp claims it must be right because it agrees with “Hadley CRUt”. When both are just GHCN “warmed over” and GHCN is busted! Astonishing!

    And my keyboard survived. (Though the Cat is looking very nervous… )

    “Their data shows the same results.” Really? Now I wonder why could that be…

    I don’t know. It’s so hard to decide if it was the “Goal seeking behaviour”, the “Output tuning and fudge factor matching”, the common input dataset of GHCN, or just the collusion…

    [hope everyone's keyboards made it through that. The only convincing hockey stick these jokers have successfully produced is the hit uptick on WUWT and similar sites, including Chiefio (I hope), plus - I strongly suspect - a surge in keyboard replacements and screen wipe products]

    Highest single day ever. And 2 of them ‘back to back’ … so far… Never thought there were so many folks interested in computer code…

  44. NathanB says:

    “Highest single day ever. And 2 of them ‘back to back’ … so far… Never thought there were so many folks interested in computer code…”

    This fubar’d code is the most important code that has ever been written. Literally the freedom of every man, woman and child rests on it. I use no idle hyperbole here. The monolithic tyranny that is being attempted by this utter deception is something that even Orwell only glimpsed in his worst nightmares. Expect more hits in the next week. I thank you for your analysis. Keep up the good work, we all depend on it.

  45. ET says:

    I still can’t believe this patchwork of code and scripts is what is driving trillions of dollars in policy decisions.

    I wouldn’t be surprised to find that the code eventually just reads in the data it wishes to present and then spits it back out. Wouldn’t be the first time a desperate programmer tried that trick.

    After all, anyone who can’t even figure out how to read in a text file without spawning the unix wc command to count the lines in the file to limit the loop count is not a “real” programmer. I wouldn’t trust this to calculate the temperature of my bath water.
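    The `wc` jab above is about loop bounds: reading to end-of-file needs no external line count in any mainstream language. A minimal illustration in Python (a Fortran READ loop exiting on the IOSTAT end-of-file flag does the same job; the sample data here is made up):

```python
from io import StringIO

# Stand-in for a data file on disk; station IDs and values are invented.
fh = StringIO("10100 -23  5\n10200  14  9\n10300   7 11\n")

records = [line.split() for line in fh]   # iterate to EOF: no `wc -l` required
print(len(records))                       # 3: the loop bound falls out for free
```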

  46. Pingback: Alaskas climate is safe after all !!! YES - (AK) - Page 5 - City-Data Forum

  47. Pingback: "Climategate" -- Forget the Emails: What Will the Hacked Documents Tell Us? - Hit & Run : Reason Magazine

  48. Pingback: Trevor Hicks - Disturbing revelations about climate science

  49. Pingback: Congress May Probe Leaked Global Warming E-Mails « Peelotics

  50. Steve says:

    I’ll spare you the personal attacks you’ve leveled at me, though I am confident my expertise and experience in scientific computing is at least equal to your own.

    You are right about the importance of checking the data up front. Indeed, this is a much more valid criticism. That bad value should never have made it to the point that it was sent to that code. But I disagree with your claim that data should be range checked within each piece of code. The code’s author may simply have known that for integer values in the possible range for climate data, squaring does not cause problems.

    In fact, if you are going to check the range of the data, shouldn’t that checking be done not only on data type but on the basis of physics and climatology as well? Are you going to work that into each piece of code?

    If you think that the code he is using should work for real data types over a wider range, then he probably shouldn’t be using the formula he is using at all. It is well known that the formula for variance in anomdtb.f90 is prone to roundoff error. See eq 14.1.8.
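    The roundoff point is demonstrable: the one-pass "sum of squares minus squared sum" variance formula cancels catastrophically when the mean dwarfs the spread. A minimal sketch (Python for illustration; contrived data, standard double precision):

```python
data = [1e9, 1e9 + 1, 1e9 + 2]   # true sample variance is exactly 1.0
n = len(data)

# One-pass textbook formula: Var = (sum(x^2) - sum(x)^2 / n) / (n - 1)
s = sum(data)
sq = sum(x * x for x in data)
naive = (sq - s * s / n) / (n - 1)

# Two-pass: compute the mean first, then square the (small) deviations
mean = s / n
twopass = sum((x - mean) ** 2 for x in data) / (n - 1)

print(naive)    # 0.0: the significant digits cancel away completely
print(twopass)  # 1.0
```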


    REPLY: [ There are no personal attacks against you. I may have leapt to the conclusion that you are not a programmer based on your desire to "defend the indefensible". If so, that was not an attack, it was an attempt to allow for the limitations of someone not acquainted with programming and coding. You now assert you have expertise. If that is the case, then your arguing for broken code is much more troubling.

    Frankly, you are acting as though this was YOUR code and as though you were not experienced at having critical code reviews done on your work. If so, get over it. Remember the law of mutual superiority: "Anything you code, I can improve, and anything I code, you can improve." (And the related: "Argue for your limitations and you shall keep them.") Accept criticism, learn from it, and move on. If you can't do that, you can't work in most professional programming shops. Code reviews are a standard feature...

    Also realize that computers run a strict meritocracy. They care not one whit what credentials you have or how many boxes of code you have written (one of the most gifted programmers I ever hired was a university drop out). If you write bad code, it is bad code. Period. Why? Because good code works and bad code crashes and makes bogus output and does not care what credentials you wave at it. And one of the more important metrics for "good code" is how well it handles bad data and how robust it is in the face of operator error. On both of those metrics, the code "Harry Readme" was dealing with was very bad code. But it can be fixed. Easily.

    I have stated 3 corrective actions are needed for this code. 1) Doing a cast to FLOAT. 2) Range checking in the program. 3) "Preening the data". It is not strictly needed to do all three. Personally, I'd put the minimum cut off at doing 2 of them. You either need to do range checking in your code OR you must have some process that "preens the data" for those ranges ahead of time (and is always run. This can be some prior step that only runs to completion if the data are clean). It's a bit overkill to have both, though I've had code with "belt and suspenders" that ended up using both. Folks don't always run the "preening" step as directed... To the extent that process is perfect, go ahead and square the INTs. To the extent it isn't perfect and / or you like more robust code, cast to FLOAT, then square. Personally, I'd do all three (since it takes less time than typing this one reply).

    So in this case we had bad data. That proves we have a bad data problem, which means the current lack of range checking is not adequate. Do it in a preening step or do it in the code; but it must be done.

    We also had a crash from the "bad data" and we had dramatically wrong values from integer overflow. To the extent your up front preening and range checking are perfect you can skip the cast to float. To the extent you would like to be 'more careful', putting in a cast to float is a reasonable way to: A) Prevent the outrageous values from an overflow. B) keep the computed running total of a bunch of values closer to reasonable (via preventing those few outrageous values impacting the sum nearly as much). C) most likely prevent the crash that happened after the overflow (that then lets you have more code run for things like diagnostics and test logging.)

    Now I don't know why you are so worked up and want to have: A) Severely broken data values. B) Running totals that are completely out of touch with reality. C) Programs that crash rather than letting you get diagnostics out of them. D) Those bad data which fail to cause a crash happily processed into bad / useless / junk output silently. E) Data "preening" by hand, and only when it crashes your system, otherwise, just let it go through and give you junk output. (I'll stop here, lest I be accused of "leveling attacks" by stating simple truths...) But those are what you will get and what you have gotten from the choices in the code that you are defending.
    Think about it.

    Per range checking based on "climatology" and "physics". All you need to know is 'what is a reasonable value for the datum?' and put in a check for it. Exactly how precise depends on your goals. It is having NONE, as we have here, that is wrong.

    As a first step, I would simply have something along the lines of:

    IF (DataA(x,y,z) .LE. -90 .OR. DataA(x,y,z) .GE. 58) THEN
        write(*,*) "New World Record?  I don't think so, Tim! ", DataA(x,y,z), year, month
        ! (your choice of: log, error and halt, or go to next value)
    END IF
    ! (normal processing loop)

    You can argue all you want about should that "-90" be variable by continent, by year, by ice pack level. I really don't care. IFF the user wants that degree of detail, you do it. If they want the "running total of squares" to be more or less sane but expect the law of averages to smooth out a small "clinker" like a -20 when it ought to be 20 in every 1000 values, well fine, that's what you do.

    But what you DO NOT DO, is have an insane value cause a program crash, then pluck it out, but leave all the other "mildly demented" values to run amok, without so much as a bare minimum sanity check. (As this code, and "Harry Readme" does).

    This is going astray from the HARRY_READ_ME.txt issue, but it looks like that is where you want to go, so:

    The way I would do the design is rather simple. I would immediately have put in the half dozen lines of code above ( IF {foo} then "I don't think so Tim!" else... ) and I'd have wrapped the "Squaring" in a cast to FLOAT. (It gets stuffed into a float anyway, so you WILL do a cast to float. The only question is before or after the potential for overflow is past. If you "have issues" with rounding error, then cast to DOUBLE prior to the square and round to FLOAT). So at that point the code is:

    1) Not making insane values, only mildly wrong ones on 'bad data'.
    2) Not accepting insane data, only accepting "reasonable but maybe damaged" values.
    3) Not crashing on bad data.

    This is an improvement. This is easy.

    It ought to be quite usable at that point. Adding a few extra diagnostics and sanity checks in the program is something I would do if I had the time or thought the rest of the code warranted it OR if it had not been fully QA tested yet (since you will need them for the QA runs anyway.)

    Total time invested: About 10 minutes. Less time than to find the prior bogus value by hand. (Which this modified program will print out for you; so net, I'd expect saved time).

    Then I would go to the "User Base" (or if I were the researcher using it, I'd ask myself) if I wanted more bounds checking than done by the minimum "I Don't Think So Tim!" tests. IFF the answer was yes (such as your 'by physics' et al.) I would build those tests into a preprocess "preening" program. In that way you can more easily QA test both parts and a bug in one does not propagate to the other.

    Further, the "preening" step can be more parameterized if desired by the researcher to find what values work better or for R&D "experiments". And finally, the "preening" step can be run on similar structured data sets (inputs, outputs, intermediary files) as desired. Run it once upon assembly of the data, or run it between each step (during debug or for final production runs). A very flexible ability to choose speed vs paranoia AND a nice debugging tool for evaluating intermediate data. I would also include a reporting feature in it that gave me quality metrics about the data. That would let me establish quality thresholds for accepting a data set or for simply judging the error bounds of my final product.
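    As a minimal sketch of what such a preening pass might look like (Python for illustration; the -90/58 bounds are the illustrative world-record limits from above, and every name here is hypothetical):

```python
TMIN, TMAX = -90, 58   # illustrative world-record temperature bounds from the text

def preen(values):
    """Split raw data into plausible values and flagged outliers."""
    good, bad = [], []
    for v in values:
        (good if TMIN <= v <= TMAX else bad).append(v)
    return good, bad

good, bad = preen([12, -5, 49920, 33, -999])
print(good)   # [12, -5, 33]
print(bad)    # [49920, -999]

# A simple quality metric, as suggested above: fraction of data rejected.
print(len(bad) / (len(good) + len(bad)))   # 0.4
```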

    Fairly fast and easy. Very robust. Much better design. Much lower error potentials.

    You may be happy with the "status quo", I'm not. I'd rather have the design I've described. And under no condition would I accept that the bugs in the program ought to be left in it and the data ought to be run with no checking. -ems ]

  51. Jeff Alberts says:

    I’ll spare you the personal attacks you’ve leveled at me,

    What personal attacks??

  52. Pingback: Congress May Probe Leaked Global Warming E-Mails « Aftermath News

  53. Pingback: Has anyone seen my spork? » Blog Archive » WSJ Weighs in

  54. Pingback: Alaskas climate is safe after all !!! YES - (AK) - Page 6 - City-Data Forum

  55. CO2 Realist says:

    My program compiled! That means it’s bug free, right?

    Just kidding.

    I’m really amazed that they didn’t/wouldn’t use an off-the-shelf DBMS at the time as there were plenty of options: network, hierarchic, relational, or heck, even ISAM/VSAM files. Would have made sense to bounds check the data prior to loading as well. Data integrity, there’s an idea. And as EMS points out, the analysis would simply be different variations of reports.

    I also don’t buy the excuse that data was deleted due to lack of storage space. Most likely incompetence.

    It really is “worse than we thought”.

    REPLY: [ I was a DBA specialist at the time they claim to have been doing this. There were plenty of great database products that would have worked fine. Oracle, Ingres, Focus, and a few dozen others. IIRC, Ingres was part of the BSD offerings or was available from Berkeley (prior to spin out as a company). The "lost the data due to shortage of storage" is just bogus. 6250 bpi round tape was in use then. One or two tapes would hold it all, with programs and more. Less space than one book on the bookshelf. We stored GBs of data in data centers back then. It was not hard nor expensive. The mantra was "Tape is Cheap." Just looks like an excuse to me. -ems ]

  56. Paco Wové says:

    “There is also evidence that the number in the number.txt file name is sequential with date”

    The names of the mail files (0826209667.txt … 1258053464.txt) are most likely the date of the mail, in UNIX epoch seconds.

    >echo 1256747199.txt | sed -e 's/\.txt//' -e 's/^0//' | perl -e 'print scalar localtime(<STDIN>),"\n";'
    Wed Oct 28 11:26:39 2009

    >head 1256747199.txt
    From: Phil Jones <p.jones@uea.ac.uk>
    To: “Mitchell, John FB (Director of Climate Science)” <john.f.mitchell@metoffice.gov.uk>
    Subject: Yamal response from Keith
    Date: Wed Oct 28 12:26:39 2009
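    The same conversion, pinned to UTC to sidestep the local-timezone wobble (Python for illustration; the filename and expected date are from the example above):

```python
from datetime import datetime, timezone

fname = "1256747199.txt"
ts = int(fname.split(".")[0])   # the filename is seconds since the Unix epoch

stamp = datetime.fromtimestamp(ts, tz=timezone.utc)
print(stamp.strftime("%a %b %d %H:%M:%S %Y"))
# Wed Oct 28 16:26:39 2009 (UTC; the mail header's 12:26:39 is UTC-4)
```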

    REPLY: [ Great Catch! -ems ]

  57. Yes, as a SQL, VBA and BI dabbler meself, I have wondered why they did not adapt their technology. I’ve just read a good chunk of the Harry ReadMe file (an excellent, painfully honest document, BTW) and find myself just gobsmacked. Relying on thousands of text files is really primitive stuff, whereas a single load script to suck it into a decent RDBMS would have ensured integrity, backup, transactional change logs and many other mod cons. Then the transforms would have been controllable, counts made easy, change logs kept as the transforms worked their magic, and rollback thereby made possible.

    Heck, I sound like a DB salesperson.

    But, as they say, hindsight is 20/20. And at least Harry has kept his own very good logfile. He deserves a medal. Or at least a beer.

    REPLY: [ "Harry Readme" will never pay for a beer if I'm in the bar. And one of these "Temperature Series products" would have been a 1 or 2 week project back when I was a database guy. It is ideally suited for a non-procedural language RDBMS / report writer. Did that gig for a decade or so. High end consultant for a vendor on their product on mainframes. That level of person could redo this (what, about 20 years?) of work in a decent product in a few weeks, max. One heck of a lot easier than porting this "stuff" and getting it to work as is. The hardest part would be getting a 'spec' for it, though. The code is the spec as near as I can tell, so that's the hurdle in front of a DBMS guy. Hopefully, some of what I'm doing here 'ploughs that field' enough for folks to get an external spec put together should they want one. I figure I'll get to it in about 1 - 2 years.... sigh... -ems ]
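    As a sketch of that "single load script" idea (the three-column station-file layout here is invented for illustration; CRU's real file formats differ):

    ```python
    import sqlite3

    def load_stations(db_path, files):
        """Load whitespace-delimited 'station_id year value' text files into SQLite."""
        con = sqlite3.connect(db_path)
        con.execute("""CREATE TABLE IF NOT EXISTS readings (
                           station_id TEXT, year INTEGER, value REAL)""")
        with con:  # one transaction: either every file loads, or none do
            for path in files:
                with open(path) as f:
                    rows = (line.split() for line in f if line.strip())
                    con.executemany("INSERT INTO readings VALUES (?, ?, ?)",
                                    ((sid, int(y), float(v)) for sid, y, v in rows))
        # integrity checks become one-liners instead of hand-kept counts
        (count,) = con.execute("SELECT COUNT(*) FROM readings").fetchone()
        con.close()
        return count
    ```

    With the data in one table, the controlled transforms, easy counts, and rollback described above come along for free.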

  58. Pingback: 2*2 = -1 « Ilmastovaivoja

  59. CO2 Realist says:

    EMS – I was just getting started in my second career in IT in the early 80s. In school, no less, I was using CODASYL, relational, and hierarchic DBMS on a Control Data mainframe (the school had an engineering emphasis). This was not hard stuff for anyone to get their head around. I then went on to the corporate world with IBM MVS on System 360/370, along with CICS and IMS DBMS. Later I moved to the PC environment with SQL and relational DBMS.

    In the corporate world, we never let users touch any data with intent to update. I wrote more than one flat file extract from IMS that the users could play with using FOCUS to their hearts’ content. That way there was always data integrity, but the users could model all they wanted. And we had to go through both a design review and an operations review just to put a report program into production. Needless to say, stuff worked correctly most of the time.

    And you’re right about tape as a cheap storage mechanism. I actually did a month as a “tape ape” in the data center as an intern.

    In my experience, the universities were ahead of the corporate world in trying out some new technologies. I wonder why this never crossed into the climate research world. Painful work keeping track of all those text files – the lazy person’s approach would have been to load it all into a DBMS once and for all and be done with it.

    My guess is the geniuses in all this (i.e. all the top scientists) didn’t spend much time worrying about the details like coding standards and data integrity. I can imagine some scientist yelling at a status meeting “I don’t care how you make it work, just make it work!”. And then off poor “Harry” goes to make sense of the pile of garbage he was left to work with.

    The other big problem is that without documentation, you are basically trying to glean intended logic from the source code. What if the code is wrong? What was it really supposed to do? If you have code and comments and they’re out of sync, you might at least be able to make an educated guess.

    I’ve seen code this bad before and worse (lost source code!), and the owner of the company had no clue. I imagine some of the “scientists” are just like him.

    At the end of the day, I think this is a classic case of a project that should have been rewritten long ago and never was. Now it is just truly a mess.

    REPLY: [ Interesting parallels. I was a DBMS specialist on RAMIS II and FOCUS on IBM mainframes in the 1980s. The whole MVS CICS TSO etc. basket. In FOCUS this whole thing could be done in a week or two without much effort, IMHO. And yes, the effort that a company puts into having a Data Base Administrator stand between the data and any data corruption is orders of magnitude more than the "climate researchers" seem to understand. (Forget "do", they don't even seem to understand the need...) A programmer could not even look at a database without 2 approvals from management.

    Your point about comments and code divergence is very well made. The need for documentation beyond the source code is another thing that the researchers seem to 'not get'. In the end, I think it is just a case of folks who do academic R&D and had a FORTRAN class once; they just do not have the experience of a commercial data center and have no idea what DBA operations are like. So they build something that looks like a homemade desk made from packing crates; and us furniture makers are just shaking our heads, suggesting that a plane, some stain, maybe even some polish (and better wood...), and clamping and gluing the joints instead of nailing them... maybe that would be better... -E.M.Smith ]

  60. Jack Poynter says:

    Reading the article and these comments was such a pleasure.

    I started programming in 1966 on an IBM 360, though not as a scientific programmer, and am now retired. My principal job was as a systems programmer, and general fixer of everyone else’s otherwise insoluble problems. For many years, almost 20, I worked for a service bureau, which was contractually obligated to deliver error free code and timely production runs. Therefore, we had very strict quality control standards, and were audited frequently by the companies for which we did business. Our policy was that if we made an error in a computer run, the customer got his money back; and being a relatively small company in terms of personnel, that would have been a major disaster for us, if such had ever occurred.

    I sympathize with the problems commented on in the code; but if a person working for me had committed some of those errors, he would have been transferred to a position more in keeping with his level of professionalism; say, janitor.

    Perhaps the most troubling part of all of this is that they seem to have committed the error of “programming to the desired result:” If you know the result you want, and the program appears to be delivering it, you quit debugging. But the code must not only appear to work, it actually must do what it is designed to do; and the way that is done is to code in small components, testing each separately over a large number of stressing inputs and checking the actual outputs. The interfaces between each component can then be tested to see if they are passing the data correctly.
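    The negative sum-of-squares in the quoted log is exactly the kind of bug such small component tests catch early. A Python sketch that emulates 32-bit signed wraparound (Python integers don't overflow, so the Fortran INTEGER*4 behavior is modeled explicitly, using a few values from the quoted debug output):

    ```python
    def add_int32(a, b):
        """Add with 32-bit signed wraparound, as a Fortran INTEGER*4 sum would."""
        s = (a + b) & 0xFFFFFFFF
        return s - 0x100000000 if s >= 0x80000000 else s

    total = 0
    for val in (93, 172, 950, 797, 49920):   # values from the quoted debug log
        total = add_int32(total, val * val)
        print(total)
    # The first four running totals match the log (8649, 38233, 940733, 1575942);
    # then 49920**2 is about 2.49e9, past the 2**31 - 1 limit, and the sum wraps
    # negative. A component test asserting total >= 0 would have flagged this
    # long before the core dump.
    ```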

    This is a very large amount of work, but to not do it illustrates a contempt for the process and a contempt for the people who will be using the output.

    An additional way to quality control systems is to submit the product to a team which is tasked with finding errors in it; to an organization of that type, not finding an error constitutes a failure. In the scientific world, it seems that is the function of ‘peer review.’ And drawing the parallel, ‘peer review’ should be done by people who are definitely committed to finding errors in the work. Passing papers to people who are ‘friendly’ guarantees errors of omission; people who would do ‘antagonistic’ peer review are what should be sought.

    And that brings us to the over-arching problem with this whole situation, which is not at all technical, but is political. Anthropogenic Global Warming is an idea which has been sold, rather than studied. I have seen this attitude before: I was up to my ears in Y2K fixes in the runup to the year 2000. There were indeed Y2K errors in code, but they were very amenable to being fixed, and indeed were; the hype surrounding Y2K errors was a constant source of hilarity to those of us in the field. But, create a crisis, create a market; and when I see a process which is being sold in that manner, my antennae quiver. Looking at the circus in Copenhagen that finished today, I see that the only honest people there were 1) the people opposed to a global treaty as an affront to liberty and 2) the warriors of Greenpeace, whom I believe to be dangerously wrongheaded, but who at least are true to their understanding of what AGW really means, if it should turn out to be true.

    Again, thanks for the article and all the comments, it was a real pleasure to bury myself in programming problems once again.

    REPLY: [ Thanks. I had the same reaction. Oh, and I had great fun asking folks questions like: Where is the clock in the gas pump? And pointing out that those with no date had no date problem. Or "Does the wrong date stop your microwave oven from working?"... It gave a quick sort into "thinkers" vs "sheep". But yes, the contracts were flying fast in those days, with buckets of money... -E.M.Smith ]

  61. I agree with what you wrote here. Have you read Cem Kaner’s books? He is such a great author, I have read all of his books and learned so much from them. I was lucky enough to see him give a speech a few years ago on his methodology. He is as good a speaker as he is an author. Do you know of any authors of Kaner’s reputation?

  62. Pingback: The Climategate code « The Invisible Opportunity: Hidden Truths Revealed

Comments are closed.