## GIStemp F to C convert issues

### GIStemp, Precision, and FORTRAN

This posting is a bit esoteric, but this stuff really matters. Computer security and computer forensics depend on this kind of detailed code review, as does proving a system is reliable and not making really bad errors. Almost as bad, you can have a tiny error, like 1/10% in a line of code, that may not matter on its own; but if you have 1000 such lines in a system, you now have a 100% error (assuming the errors add). So “little things can add up”, and computer professionals know that you must use a very, very fine comb in a code review to prove a bit of code is ‘valid’.

In looking at GIStemp, one bit of code has regularly been a bit “problematic”:

That is USHCN2v2.f

(Where there was at one time a “bug” that caused it to misread the input data and drop the 1/100th degree position.) This code turns US data in F into “GHCN” format data in C.

Now this data comes from NOAA already a bit “buggered” in that it is in 1/100 F precision. Since the raw data are in whole degrees F, this can only be the result of adding one month worth of those integers, then dividing by integer days, and assuming that the 1/100 F place means something. It doesn’t. Proper behaviour is to either truncate or round to the nearest integer (NINT in FORTRAN). But GIStemp runs with it in 1/100 F of False Precision anyway. OK, I’ve covered that in depth here:

Mr. McGuire Would Not Approve

So we’re just going to assume that issue exists and “move on” to the next issue. The next issue is: “what happens with those temperatures with 1/100 F precision when processed?” What do they do to the Global Average Temperature?

### Into The Code

In looking at that bit of code (and doing some testing of it) the bug that dropped the 1/100 F place seems to have been patched… but… There was something about the way temperatures were converted from F to C that bothered me.

The code in question is very short:

if(temp.gt.-99.00) itemp(m)=nint( 50.*(temp-32.)/9 )

This basically says: if the “temp” variable is bigger than the “missing data” flag, set the “Integer temperature variable – itemp” for a given month to the “nearest integer – nint” of the result of the standard F to C conversion formula: (temp in F - 32) x 5/9.

The only odd bit is that there is a “factor of ten” multiplied in so that the 1/10 C place can be put in an integer with an implied decimal point. (One wonders why not just make it a high precision floating point number, but asking “why some programmer does something” too much is not good for your sanity…)

This ought to be evaluated as (temp-32.) first, then multiply that by 50. and then divide the result by 9 (which, being an integer, would have to be “promoted” to a floating point number to do that division).

OK, what’s the problem?

Well, FORTRAN handles numbers a bit “differently” from what people do. And computer math has some “odd things” to keep track of.

First up is “digits of precision” in formulas. When we multiply by 50, we slide the “decimal point” over by one position for that factor of ten. You could multiply by 5, then move the decimal point over one space. So if temp were, oh, 43.1F, we would turn that into 11.1 (by subtracting 32.), then multiply by 5 for 55.5, then slide the decimal over one spot to get 555. ( Something similar happens in the computer, but in base 2 or binary numbers.) Now we divide by 9 and get 61.67 which we then turn into the “nearest int” or 62. This represents 6.2C and would be fine, but for the fact that we had to add some .03 C in the rounding process.

Or we could divide 50. by 9. first and get 5.56 (to two decimals). Then we could take 11.1 times that and get 61.72. We still round up, but this time only adding about .028 C. With a large bunch of numbers, the difference in these two methods can change the result; especially when they are averaged together. And the first way is more sensitive to the data (since we multiply the data by 50, then divide by nearly 10, making a shift back and forth dependent on data size) while the second way calculates a constant that is kept intact and does not shift bits one way, then the other, losing a few in the process. When temperatures range over a couple of digits of value (or more, from 1 F to 100 F), this can change the precision in the results quite a bit.
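To make the two orderings concrete, here is a minimal Python sketch (Python rather than FORTRAN purely for convenience; the `nint` helper mimics FORTRAN’s round-half-away-from-zero, since Python’s built-in `round()` banker-rounds):

```python
import math

def nint(x):
    """Fortran-style NINT: round half away from zero.
    (Python's built-in round() uses banker's rounding, so we roll our own.)"""
    return math.floor(x + 0.5) if x >= 0 else math.ceil(x - 0.5)

temp_f = 43.1  # the sample value from the text

# Order 1: multiply by 50. first, divide by 9 last (as in the GIStemp line)
tenths_1 = nint(50.0 * (temp_f - 32.0) / 9.0)

# Order 2: fold 50./9. into one constant, then do a single multiply
ftoc = 50.0 / 9.0
tenths_2 = nint(ftoc * (temp_f - 32.0))

print(tenths_1, tenths_2)  # 62 62 -> 6.2 C either way for this value;
                           # the two orders only disagree near half-integers
```

For this one value the orders agree; the disagreements show up in the records that land near the rounding boundary, which is exactly what the instrumented runs below count.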

Another one is that “9” with no decimal after it. That means “Use an INTEGER 9, not a floating point 9.00000”. When dividing an integer by an integer, FORTRAN does not round, it truncates. Your low order bits can be very sensitive to this kind of effect. Everything to the right of the decimal point can be “messed up” if you don’t keep track of exactly what happens. Do we raise that 9 to a float before, or after the division?
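A quick sketch of that truncation effect, again in Python for convenience. FORTRAN integer/integer division truncates toward zero; for positive values, `int()` on the float quotient does the same thing:

```python
# The worked example from above: 555 (after the multiply by 50) divided by 9.
print(int(555 / 9))   # 61 : the .666... is simply thrown away, as integer 555/9 would do
print(555 / 9.0)      # 61.666... : a float divide keeps those low-order bits
```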

But there are other rules, called precedence rules, that decide in exactly what order things get done. Do we subtract the “32.” first? Then is it divide by 9 (INT) or multiply by 50. next? (The decimal point makes it a floating point number, 50.000000, and “temp” is a floating point, in this case a “real*4” with lots of precision.) So at some point that 9 will need to be turned into a “real” to do the math with the “temp-32.” part. Then we “cast” it with nint back into an integer. So what happens to all those bits to the right of the decimal point? EXACTLY? And does this change the value in the 1/10 C place when it is finally repackaged back into an INTEGER?

Well, the fact is we don’t know. You can try to figure this out (the FORTRAN definition tells you what ought to happen), but it’s better to just test it. Sometimes the syntax of a language is a bit vague about what part gets done first, and sometimes one compiler (the code that turns FORTRAN into something the computer can run) does things one way while another compiler writer did it another way. And sometimes compilers, too, have bugs in them. So you test.

At any rate, I decided to compile this code with both f77 and g95 (two different FORTRAN compilers) and make sure nothing changed. But something DID change. The code is sensitive to exactly which compiler you use.

### The Test

How sensitive?

I instrumented the code and did a few runs. I also added a slightly better way of doing this same code that is far less sensitive to such minutiae.

What I did was to take that “50./9” and make it into “50./9.” (all “real” data types) and put it into a constant called “ftoc”. This not only makes it VERY CLEAR to the compiler that it does this bit first, but it also only does it one time (rather than once for each record processed) and thus saves about 1.5 Million divide operations each time the program is run, so it’s a lot more efficient. My code now says:

if(temp.gt.-99.00) ttemp(m)=nint(ftoc*(temp-32.) )

Then I ran the program and compared the two methods. In this first case, I use the f77 compiler:

/usr/bin/f77 -o ../bin/USHCN2v2.exe ./USHCN2v2.f

then ran it:

```
replacing USHCN station data in v2.mean by USHCN_noFIL data (Tobs+maxmin adj+SHAPadj+noFIL)
reformat USHCN to v2.mean format
extracting FILIN data
getting inventory data for v2-IDs
USHCN data end in  2007
Error rate:   0.0104666771
Old Warmer:   14312.
Old Cooler:   2120.
Count:        1570127.
```

I added some “instrumentation” code that also looks at how many records change, which way, and what the effect might be.

So we have about 1% of the records “different” between the two methods. Of those, in 14312 cases the result is a warmer record by 1/10 C while in 2120 cases we get colder by 1/10 C. The net result will be about a 1/1000C warming of the total of all records (but individual months can be warmed more, and the effect after “reference stations” spread things around is, er, unknown.)
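As a sanity check on that “about 1/1000 C” figure, here is the back-of-envelope arithmetic from the counts printed above (this is just my own check, not part of the GIStemp code):

```python
# Counts reported by the instrumented f77 run above
warmer, cooler, total = 14312, 2120, 1570127

net_tenths = warmer - cooler          # 12192 records shifted by +0.1 C, net
net_c = net_tenths * 0.1 / total      # spread over every record in the set
print(round(net_c, 6))                # ~0.000776 C net warming, i.e. about 1/1000 C
```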

But wait, there is more…

When compiled with the g95 compiler we get similar, yet different, results. The original code is “compiler sensitive” to some small degree: the f77 run, the g95 run, and “my way” all give different counts, though “my way” is not sensitive to which compiler you use. Not surprising, given that “their way” is more data dependent, with bits at the far right of the number being lost if you have a 100 F day and kept if it’s only 3 F.

/usr/local/bin/g95 -o ../bin/USHCN2v2.exe ./USHCN2v2.f
#echo “USHCN2v2.f is compiler sensitive at run time. 1 digit rounding variation”

```
reformat USHCN to v2.mean format
extracting FILIN data
getting inventory data for v2-IDs
USHCN data end in  2007
Error rate:  0.009524695
Old Warmer:  12801.
Old Cooler:  2154.
Count:       1570129.
```

At least the number of records that vary has gone down by a smidgeon.

### The Meaning of This Is? About 1/2C Possible from Math

So what does this all mean?

It means that the “false precision” of the 1/10 F place and 1/100 place in the raw data from NOAA influence the conversion to C with an impact that will represent about 1/1000C of warming of the data in total from the process alone. Not a lot, but every little bit adds up. This is just One Line of the code…

And further, because the evaluation order isn’t fully determined by the source code, you have a compiler sensitivity issue as well. Luckily, in this case the new compiler makes the planet cooler. Who knew? Maybe we could just buy NASA a new compiler and cool the planet that way…

Just to put a finer point on this, I used 5 variations on simple math to calculate the Global Average Temperature from this dataset. (Not the particular GIStemp code above, just different reasonable choices about calculating GAT from the raw GHCN dataset.)

```
 JAN   FEB   MAR    APR    MAY    JUN    JUL    AUG    SEP    OCT   NOV   DEC  ALLDATA
  26    38    73    117    156    187    205    200    172    130    79    39   118
2.60  3.80  7.30  11.70  15.60  18.70  20.50  20.00  17.20  13.00  7.90  3.90  11.850
2.00  3.00  7.00  11.00  15.00  18.00  20.00  20.00  17.00  13.00  7.00  3.00  11.333
2.59  3.81  7.14  11.41  15.22  18.31  20.13  19.65  16.83  12.64  7.71  3.91  11.612
2.63  3.90  7.34  11.78  15.64  18.75  20.58  20.09  17.27  13.03  7.97  4.00  11.913
2.63  3.90  7.34  11.78  15.65  18.75  20.57  20.09  17.27  13.03  7.97  4.00  11.913
```

The “bottom line” is that I can move the global average temperature by about 1/2 a degree C by my choice of when to use integers, floats, reals, and what order to do the calculation in. The first line uses integers all the way, with the numbers representing 1/10 C increments. It is the most accurate. The second line uses “floats”, but in a way that is “done carefully” and does a good job of reflecting the first line values.

The third line has a “divide by 10” (notice, no decimal point…) so it ends up truncating the temperature series in the fractional part. This is clearly losing the added precision of the data (in the full degree F actual records) that could be partially preserved with 1/10 C records.

The fourth line has a bit less impact from a “divide by 10” rather than “divide by 10.” in a different part of the processing. (Adjust each temperature as read, or adjust them at the end in the average. WHEN do you slide the bits over and lose a few?..)

Finally we have the 5th and 6th lines. These create more warming by the simple expedient of doing more floating point math and less integer math (thus preserving more of the false precision through the calculations and letting them have more impact.)

### How I’ll Measure GIStemp

For my future tests of the way GIStemp changes the temperature, I’ll be using the method from line 2 as my benchmark. It is, IMHO, the most honest. What does GIStemp itself do? Well, that’s what this whole thing is all about, now isn’t it?

But the bottom line is very clear:

Before we get all excited about changes of 1/10 C (and before we even think about anything in the 1/100 C place) we need to know how much of that change is real, and how much comes from decisions made in the programming…

To give you an idea what the actual difference in the code looks like here are some quoted snippets of my “testing” code:

```
icount(m)=icount(m)+1
count(m)=count(m)+1.

itmptot(m)=itmptot(m)+itmp(m)
ttmp(m)=ttmp(m)+(itmp(m)/10)
wtmp(m)=wtmp(m)+itmp(m)
rtmp(m)=rtmp(m)+(itmp(m)/10.)
```

So here I can keep a counter as an integer (icount) or as a float (count).

Then when calculating a running total of accumulated degrees, I can do it as integers in 10ths of a degree (itmptot), as floating point (wtmp), I can turn that itmp from 1/10ths C into whole degrees with a divide (using 10 or 10. causing different results – ttmp and rtmp).
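A toy Python illustration of those accumulator choices, using a handful of made-up values in integer tenths of a degree C (hypothetical numbers, not actual GHCN data):

```python
# Hypothetical readings in integer tenths of a degree C (NOT real GHCN data)
itmps = [26, 38, 73, 117, 156]

itmptot = sum(itmps)                    # integer tenths all the way: exact, like itmptot
ttmp = sum(t // 10 for t in itmps)      # divide by 10 (integer): truncates each value, like ttmp
rtmp = sum(t / 10.0 for t in itmps)     # divide by 10. (float): keeps the fractions, like rtmp

print(itmptot, ttmp, rtmp)  # 410, 38, ~41.0 -- the truncating path has lost 3 whole degrees
```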

Later in the code, where I create the final monthly averages, I can do it in a couple of ways too:

```
itmpavg(m)=(itmptot(m)/icount(m))
tmpavg(m)=(itmptot(m)/icount(m))/10.
trnkavg(m)=(itmptot(m)/icount(m))/10
ttmp(m)=ttmp(m)/count(m)
wtmp(m)=(wtmp(m)/count(m))/10.
rtmp(m)=rtmp(m)/count(m)

igav=igav+itmpavg(m)
gavg=gavg+tmpavg(m)
trnkgavg=trnkgavg+trnkavg(m)
tgav=tgav+ttmp(m)
wgav=wgav+wtmp(m)
rgav=rgav+rtmp(m)
```

These are just 6 of the ways of figuring the Monthly Global Average Temperature and then 6 ways of figuring the TOTAL of all time Global Average Temperature. If we did not divide the individual temperatures by 10 (or 10.) we do it here with the total. And the resultant averages are different. Notice that we could also have chosen to divide the total by 10 THEN divide by the count. That, too, would change things in some cases.
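For example, a small sketch of “convert early or late” with hypothetical data gives three different answers from the same six readings:

```python
# Six hypothetical readings in integer tenths of a degree C (made-up sample data)
itmps = [26, 38, 73, 117, 156, 187]
n = len(itmps)

early_trunc = sum(t // 10 for t in itmps) / n   # truncate each value first, then average
late_trunc = (sum(itmps) // n) / 10.0           # integer average first, divide by 10. last
late_float = sum(itmps) / n / 10.0              # keep everything float, divide at the end

print(early_trunc, late_trunc, late_float)  # 9.333..., 9.9, 9.95 -- three answers, one dataset
```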

So which do you choose? Convert early or late? In floating point math or integer? Why? Which is “right”?

If you think these things all look maddeningly similar, you are right.

Can you really spot the impact of

```
wtmp(m)=(wtmp(m)/icount(m))/10
vs
wtmp(m)=(wtmp(m)/count(m))/10.
```

And yet that is the stuff that a 0.64C warmer January is made of… These detailed coding decisions by some random programmer making what they think are reasonable choices can (and do, and in fact must: you can not avoid making SOME decision in how to write the code…) change the results in the 1/10 C place.

And that is why I refuse to get excited about a computer model showing any changes in the temperature data to the right of the decimal point…

### In Conclusion

So here we have a specific line of the GIStemp code that warms about 1% of the data records by 1/10C in the conversion from F to C.

We also have a NOAA supplied data set with fractional degrees of F when we know that the fractional part is a lie (or, to be charitable, an error… of False Precision). NOAA calculate a monthly average by a method that may, or may not, be valid and give us data that are not properly vetted for precision. It has a 1/100F precision when no such precision is possible.

And I’ve done a test on the GHCN data (already in degrees C, so we’re not going through the F to C conversion “issue”) to see just how much sensitivity there is in the “Global Average Temperature” to the kinds of decisions that programmers had to make at NOAA and GISS in writing their code. What I found was that about 0.5 C is “easy” to change, and that about 0.1 C to 0.2 C is very hard to avoid. So it is almost certain that the 1/10 C place owes its values as much to programmers and compilers as it does to any actual physical changes in the world.

Any statement about “Global Average Temperature” changing in the tenths or hundredths place is “dancing in the error band” of the calculations. There is no “truth” in any digits other than the full degrees. The whole body of code must be “vetted” and tested in a manner similar to what I did above before there can be any trust given to the “bits to the right of the decimal point”.

Oh, and I’m changing my copy of GIStemp to use the “ftoc” constant method of calculating F to C conversions. It is, IMHO, much better. In doing tests of trends in the data, I will be using the #2 method for my baseline (“careful floating point – true to the integers”) where possible. It, too, is better.

### 16 Responses to GIStemp F to C convert issues

1. steven mosher says:

Excellent work.

Hehe. Why not keep it all in F, then at the end present the chart in F, do all the trends in F.

That would be the clearest way of showing the difference.

Basically, take Gisstemp as is. Calculate the global average.
Calculate trends for the last 10, 30, 100, entire data set.

Then do the calculation in F.

That might be a post that Anthony or steve would pick up.

and for the laymen it would be dead easy to understand.. no precedence rules etc.

2. E.M.Smith says:

An interesting idea (and one I would support, since I’m fond of degrees in F and things like feet and rods…)

However,

The international data are in C.

So as soon as you run GIStemp past STEP0 you have the problem of who converts to what.

Now if you wanted to just subset for the USA, then you can use the F all the way through. Or for particular stations you could do that too.

Now a minor geeky little point that I’ve thought about (being a bit Geeky…) is that F has much finer grain than C; that is, it has more inherent precision in any particular decimal place. (One degree of C takes 9/5 of a degree F, so F is a little less than twice as precise per digit.) Part of why I like F over C …

So if you take your degrees C (presented in 1/10 C in the record) and convert them to F (as 1/10 F ) you can do a very very nice conversion with little to no loss of accuracy. Taking F to C you must at some point accept a bit of “jitter” in the final decimal place ( i.e. your 1/10 C place might mean 1.1F or 1.2F and it can not tell the difference)
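A quick Python check of that grain difference (illustrative only; each result is rounded to one decimal to mimic tenth-degree records):

```python
def f_to_c(f):
    return round((f - 32) * 5 / 9, 1)   # F -> C, kept to tenths

def c_to_f(c):
    return round(c * 9 / 5 + 32, 1)     # C -> F, kept to tenths

print(f_to_c(32.1), f_to_c(32.2))  # 0.1 0.1 : two distinct F readings collapse to one C value
print(c_to_f(0.1), c_to_f(0.2))    # 32.2 32.4 : going the other way, C values stay distinct
```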

So, in a puckish moment (which I think is what you were alluding to…) I thought that from a mathematical point of view it makes more sense to convert the C to all be F and leave the existing F alone!

But it sure would peeve the “SI or Die!” units Nazis. 8-}

You know, I might do it just for that ‘feature’ alone ;-)

3. Martin Baxter says:

“Now we divide by 9 and get 61.67 which we then turn into the “nearest int” or 62. This represents 6.2C and would be fine, but for the fact that we had to add some .03 C in the rounding process.”

Being picky don’t you mean that .33 C is added?

And as a software engineer of long standing, I must say that I would never use floating point unless absolutely forced. I abhor the loss of precision involved (not to mention the additional processing).

4. Martin Baxter says:

Okay, no. My mistake. You are actually adding 0.033 (recurring) C.

Whatever, your point stands. The only good thing about the code is that it can be used to demonstrate how many bugs can be packed into a single line!

REPLY: [ Dealing with the shifting decimal point can be maddening. The 61.67 is actually 6.167 C (or 6.17 C if you hold 2 decimals of precision max, as in the original F measurement) so you add 0.033 C (or, in fact, 0.03 C would do the same thing in the 2 decimals limit case, as would 0.33333 repeating). A little ‘implicit demonstration’ that I’m glad someone noticed ;-) And just think, they claim meaning in the 1/100 C place of the product of this code… I do get the distinct impression that the authors of it had a FORTRAN class and got the syntax, but missed the subtleties involved in computer math. Which, especially for FORTRAN with implicit typing and casting and the odd truncation of integer math, can be dramatic. -E.M.Smith ]

5. Nick Barnes says:

So here we have a specific line of the GIStemp code that warms about 1% of the data records by 1/10C in the conversion from F to C.
(b) this is temperature, not anomaly. We care about anomaly. What does the choice of conversion do to global anomaly? I suspect that the effect is either non-existent or neutral.
(c) Your audience will misunderstand your use of the word “warming”, to be saying that this contributes to the global warming signal, but in fact this “warming” will be equally distributed across time.

6. Nick Barnes says:

On (b): GISTEMP is all about anomaly, about “trend”, and always has been, right from the very beginning. See Hansen et al 1981, or Hansen & Lebedeff 1987. If you don’t care about anomaly, you don’t care about GISTEMP. Full stop.

REPLY: [ Not so. There are products produced prior to the anomaly map phase as in the GISS web site that shows station data from intermediate steps. Also you will find that some of us want to show what quality is in the parts prior to the last anomaly step, since it kind of matters… -E.M.Smith ]

7. Nick Barnes says:

Now this data comes from NOAA already a bit “buggered” in that it is in 1/100 F precision. Since the raw data are in whole degrees F, this can only be the result of adding one month worth of those integers, then dividing by integer days, and assuming that the 1/100 F place means something. It doesn’t.

This is wrong. A mean of 30 numbers intrinsically has more information than any one of those 30 numbers on its own, and thus can legitimately be given more precision. Ask a statistician.

Can the average number of children per family be 2.4? Would it be better if demographers reported it as 2?

Also: is it true that NOAA numbers are only in whole F? That seems a bit poor.

REPLY: [ https://chiefio.wordpress.com/2009/03/05/mr-mcguire-would-not-approve/

ALL discussion of the validity of the small precision bits will be directed there to avoid every thread ending up with the same arguments a hundred times… There is a link to the NOAA page that tells how to record data, as whole degrees F or make up a number if you have none.

-E.M.Smith ]

8. Nick Barnes says:

One wonders why not just make it a high precision floating point number

When this code was written, integer arithmetic and processing was very, very much faster than even single-precision floating-point. I don’t know whether that’s the reason.

9. Nick Barnes says:

Or we could divide 50. by 9. and get 5.56. Then we could just take 11.1 times that and get 61.76

This section seems confused to me. If you divide 50. by 9. in Fortran, you don’t get 5.56. You get a single-precision floating point number which is much closer to 50/9 than that. In fact if you are using IEEE 754 floating point – which you are – it is 5.5555553436279296875 (i.e. 2912711/524288). The 11.1 isn’t going to be exactly 11.1, but it’ll be pretty close. And so on. So any effects of the nint rounding will be very slight.
For what it’s worth, the corresponding line in our step0.py is this:

temp = round_to_nearest((temp_fahrenheit - 32) * 50/9.0)

and our step0.py output is exactly, byte-for-byte, identical to the GISTEMP STEP0 output.

More information: our code uses Python’s default double-precision floats, and round_to_nearest is a function I wrote which produces the nearest integer to a float argument (as it happens that rounds draws away from zero, to match the nint() function of the Fortran implementation I was using).

10. drj11 says:
11. DirkH says:

drj11:

temp = round_to_nearest((temp_fahrenheit - 32) * 50/9.0)

Is that a newer version? Again, no brackets that define the precedence. Are brackets expensive? I see this all the time with C++ youngsters, it freaks me out; they mix all kinds of operators and most of the time it works. I want to see brackets everywhere. Sometimes they get the precedence rules wrong and their programs just refuse to work correctly and need to be debugged into existence. Each time you have to do this it indicates that you haven’t thought it through to the end or were too lazy to place some friggin brackets. I don’t care about a loss of readability; in fact, for me it becomes more readable when you exactly specify the order of operations. This is lousy code again. Lousy.

12. DirkH says:

Oh i see, the Python rewrite. It’s sloppy to omit the brackets nevertheless.

13. DirkH says:

To be more specific: even when in Python-land, a byte code compiler is at work and rearranges the formula into a sequence of byte codes. The exact sequence of operations might be undefined if two operators have the same precedence. I don’t know the precedences of / vs * in Python… anyway, it makes it difficult for a non-Python-expert to find out what is happening even if the author knew it out of his head.

Might even be that IronPython vs. Python vs. whatever implementation produces different sequences.

So, when mixing int’s and float’s it might even in non-optimizing Python land be useful to define the sequence precisely. That’s what i do when mixing int’s and float’s.

Make of that what thou whilst.

14. Nick Barnes says:

DirkH: Oddly enough both DRJ and myself are experienced compiler-writers, and have worked professionally on fine details of floating-point arithmetic. And neither of us could be counted as a youngster.

Firstly, Python does unambiguously define the order of operations. Multiplication and division have the same precedence, and associate left-to-right, so in this case the multiplication takes place before the division.

Secondly, the line of code you identify:

temp = round_to_nearest((temp_fahrenheit - 32) * 50/9.0)

will obviously produce the same integer result “temp” for any value of “temp_fahrenheit” which can reasonably be produced by the earlier read_float() call (i.e. from a one-decimal-place string), regardless of the order of operations. Proof is trivial and left as an exercise.

The same is true – and even easier to demonstrate – for the line which replaced it in our release 0.2.0 (which reads USHCNv2 data, in integral tenths of degrees Fahrenheit).

In our current code base, I have nearly finished removing all explicit rounding (e.g. the rounding to an integer value here at this stage), except in producing some of the final output files. All the data will flow through the entire algorithm as double-precision floating-point. The remaining implicit rounding – of the arithmetic hardware – is approximately bounded in the result datasets by a relative precision of N.k.e Kelvin, where e is the relative precision of double-precision IEEE floating-point values, i.e. 2^-53, N is the number of operations in the algorithm as a whole, which may be as many as a few hundred, and k is a small integer. Of course, this value is totally swamped by the measurement precision of the input datasets, and also by the reporting precision of the plain-text output files (which is 0.01K).

15. Gareth Rees says:

Dirk: Python’s rules for arithmetic operator precedence are here. You can see that multiplication and division have the same precedence, and that expressions are evaluated from left to right. So when evaluating a*b/c Python will carry out the multiplication first and then the division.

In the case of the Fahrenheit to tenths-of-degrees-Celsius conversion, we can do a bit of numerical analysis to see how much difference order of evaluation can make.

With the multiplication first, i.e. ((f-32)*50)/9.0, the subtraction and multiplication are exact, and the division results in an error of up to ½ulp (ulp = unit in the last [binary] place). This is assuming that there’s no underflow or overflow, but since the range of inputs (temperatures in Fahrenheit) is small, it’s easy to check that can’t happen.

With the division first, i.e. (f-32)*(50/9.0), the subtraction is exact, the division results in an error of up to ½ulp, and the multiplication in another ½ulp, so up to 1ulp in total.

You can explore this by looking at the binary representation of the floating-point numbers in Python. For example, 56 degrees Fahrenheit shows a difference of 1 in the last bit of the mantissa when the order of operations is varied:

>>> import struct
>>> struct.pack('d', ((56 - 32) * 50) / 9.0).encode('hex')
'4060aaaaaaaaaaab'
>>> struct.pack('d', (56 - 32) * (50 / 9.0)).encode('hex')
'4060aaaaaaaaaaaa'

That’s an error of 2^-43 on a number of size about 133, i.e. a relative error of about 2^-51, which is 1ulp since double-precision floating-point numbers have 51 bits of mantissa.

In summary, the order of arithmetic operations on this line of code is not worth worrying about.

16. Nick Barnes says:

I made a mistake in my earlier comment. The USHCNv1 dataset was of course to two decimal places (in Fahrenheit), not one as I wrote. So it is possible for the F-to-C conversion to result in a value close to a half-integer, which might therefore round differently with nint(). Specifically, Fahrenheit values of the form 32+(18N+9)/100.
Out of 1501821 values in USHCNv1, 7985 produce a higher result for (f*50.0)/9.0 than for f * (50.0/9.0), and 792 produce a lower result. This is a net effect of 479 microKelvin per reading, and produces no net anomaly trend.
7289 values produce a higher result for (f * 50.0)/9.0 than for (f/9.0)*50.0, and 5693 have a lower result. This is a net effect of 106 microKelvin per reading, and produces no net anomaly trend.

This is all moot now, since USHCNv2 has integral values of tenths-Fahrenheit, for which the evaluation order cannot possibly have any effect.

REPLY:[ I would be careful with assumptions like “cannot possibly have any effect” until it’s measured and benchmarked. “Probably” or even “almost certainly”, I’d agree with… Now if only they had not warmed the USHCN.v2 data set compared to USHCN in the process of the swap… -E.M.Smith ]