Trend vs Range vs Causality in Temperatures

We’ve seen this kind of graph many times over the last dozen+ years, usually with just one data set, such as RSS. In this case it compares two data sets: GIStemp and RSS. This graph was used on WUWT to show that there is a strong divergence between RSS and GIStemp in recent years, and to ask why. In most cases, such graphs are used to show that for about the last 18+ years the trend in satellite data is nil. No warming. In all these cases, the stress is on the trend.


GisTemp vs RSS from WUWT from WFT via Paul Clark


The whole discussion of these kinds of graphs revolves around tenths of a degree of trend. Even in the end bit, where they look at exaggerated trends from short periods of time, it’s down in the 1/10 C range. (At least, I assume it is in C.)

What I want to point out is that the range is over 5/10 of a degree (actually I eyeball it as about 8/10 at the maximum) and that this range gets traversed in a few bursts about every 2 years.

Now when I look at that, and realize that CO2 levels are nearly constant over a 5 to 10 year period, it is incredibly obvious to me that something else is causing the changes in the values. If something else is swinging this cat by 5/10 to 8/10 of a degree in 2 years, just how in the world can someone assert that the ‘trend’ comes from nearly constant CO2? CO2 must be in the error band of swings from the major driver. IMHO, this matters.

Until the major drivers are clearly identified, their error bands pinned down to greater precision than the CO2 changes, and those major drivers subtracted from the raw data, you can say nothing about the other drivers of those temperature changes, and nothing about the “trends” attributed to those alternative explanations.

A trend much, much smaller than the range is more likely an artifact of the choice of starting and ending points, or of random variation in the range over time, than the signature of some other, much more minor contributor.
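The point can be sketched numerically. This is a toy series I made up (not any agency’s data): a tiny 0.01 C/yr trend imposed on a roughly 0.8 C peak-to-peak swing with an approximately 2-year period, plus a little noise. The fitted “trend” over short windows then depends almost entirely on where the window starts and ends, while the full-length fit comes back near the tiny true value.

```python
# Toy example: short-window trends on a series dominated by a ~2-year,
# ~0.8 C peak-to-peak swing. The imposed "true" trend is only 0.01 C/yr.
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(18 * 12)                      # 18 years of monthly data
years = months / 12.0
swing = 0.4 * np.sin(2 * np.pi * years / 2.0)    # ~0.8 C peak-to-peak, ~2 yr period
true_trend = 0.01                                # C per year
series = true_trend * years + swing + rng.normal(0, 0.05, months.size)

def fitted_trend(start_yr, end_yr):
    """OLS slope (C/yr) over the chosen window of years."""
    mask = (years >= start_yr) & (years < end_yr)
    slope, _ = np.polyfit(years[mask], series[mask], 1)
    return slope

# Two short windows shifted by half a year give large slopes of opposite
# sign; the 18-year window comes back near the tiny true trend.
for window in [(0, 2.5), (0.5, 3.0), (0, 18)]:
    print(window, round(fitted_trend(*window), 3))
```

Nothing here proves anything about the real data sets; it only shows that when the range dwarfs the trend, the start and end points largely decide what “trend” you report.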

IMHO, this would benefit from a formal treatment by a Ph.D. statistician.


About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present “hot buttons” are the mythology of Climate Change and ancient metrology; but things change...
This entry was posted in AGW Science and Background. Bookmark the permalink.

12 Responses to Trend vs Range vs Causality in Temperatures

  1. omanuel says:

    Frankly speaking, the US National Academy of Sciences betrayed the American public after WWII, when frightened world leaders united nations (UN) and national academies of science (NAS) into a giant, worldwide “Orwellian Ministry of Consensus Scientific Truths” on 24 Oct 1945:

    My research mentor, the late Dr. Paul Kazuo Kuroda, probably risked his life by retaining possession of Japan’s atomic bomb design for fifty-seven years, to prevent the growth of the sinister and dishonest side of post-WWII consensus science shown by AGW:

    Click to access Introduction.pdf

    The US National Academy of Science confirms its continuing dishonesty by refusing to address or debate precise experimental evidence,

    “The Sun’s pulsar core made our elements, birthed the solar system, and sustains every atom, life and planet in the solar system today, including planet Earth.”

  2. tom0mason says:

    EM, as usual a good point and well put but …
    Sorry I find all this emphasis on global averages (to an averaged few tenths of a degree) banal. Unless there is a major movement, over a significant period of time, it means just so little.
    IMO one of the things that these kinds of graphs hide is the regionality of the underlying real temperatures around the globe. Yep, we may have an averaged global number, but does it really tell us much? I feel not; what matters to us and to most of nature is what happens in the locality.
    So if some regions of the world are slowly cooling and other areas warming a bit more and the average calculates out to a tenth of a degree of warming over all, so what? If the cooling areas are say Europe, North America, and the oceans around Antarctica, with parts of South America (due to warmer ocean cycles) and the deserts of Africa warming more, then surely efforts should be in looking at the cooling areas not just watching the average warming.
    Climate, and human relations to it, is ultimately about what happens locally, not some abstract global average.

  3. M Simon says:

    Small changes around a system average can be meaningful if the data collection is accurate enough. The health of the power grid is diagnosed that way. I’m not keeping any stats on the above meter because the error band is in the range of one meter tick (0.001 Hz out of 60). I’d like to up my accuracy by a factor of about 10. For that I need to do a redesign (the redesign is done – I need to order boards and parts). Donations (link at the bottom of the page) welcome.
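In round numbers (taken only from the figures in the comment), one meter tick is about 17 parts per million of nominal grid frequency, and the hoped-for factor-of-10 improvement would bring that near 1.7 ppm. A sketch of the arithmetic:

```python
# Back-of-envelope resolution arithmetic for the meter described above.
# Figures come from the comment; the factor-of-10 target is the stated goal.
tick_hz = 0.001              # one meter tick
nominal_hz = 60.0            # US grid nominal frequency
ppm = tick_hz / nominal_hz * 1e6
target_ppm = ppm / 10        # resolution after the hoped-for redesign

print(round(ppm, 2), round(target_ppm, 2))
```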

    BTW if anyone would like a design done – since I’m retired – I’m a very low cost shop. Ask.

  4. tom0mason says:

    M Simon,
    Thank-you and that is my point.
    Manipulated figures of a gross average tell us so little; maybe they highlight the noisy randomness in the natural system, but I can not see that they show much else. Useful for confirming that the average temperature of a noisy chaotic natural system is, er, noisy?
    Without a systematic and thoroughly scientific look at the reasons for these changes, and how they fit into the historical context, how can they hope to move forward? They cannot, which appears to be the whole object of the exercise in ‘climate science’.

    Nice meter project of yours. It highlights all those loads and generators randomly jumping off and on the rigidly defined and (human) controlled grid system. Minuscule changes within a closed system with very tightly limited parameters. You could analyze each variation of this closed, controlled system to give a definite cause for it.
    It also rather beautifully shows the difference between human-designed systems and natural systems. Nature’s noisy system, with its multiplicity of feedback trigger points and its variation in global temperatures, is so little understood, and certainly not controlled by humans.

  5. Kevin B says:

    It’s worth repeating again and again that the figures that GISS, NOAA, NCDC, HadCRUT, UAH, RSS et al give out are NOT the global average temperature for today or this week or this month or this year. They are in fact the output of the various models which these agencies run.

    In other words, THEY ARE NOT DATA!!!

    These model outputs may or may not be able to tell us something about global temperature trends over time, but they are all subject to the vagaries of their limited range of inputs and the instability of their calculations. Perhaps the variability in the model outputs is caused by the instability of the models themselves, (due to lack of complete coverage, invalid assumptions programmed into the models or pure data fiddling), rather than any great variability in the actual global average temperature. (If there is such a thing.)

  6. poitsplace says:

    I haven’t seen what it looks like now, but the adjustment for NOAA/GISS has been increasing in what looked, last time I saw it, like an exponential curve. The earlier warming adjustment was largely linear and was most likely an honest mistake, at least for most parties around the world using it.

    The earlier accumulating adjustment error was likely the result of the incorrect assumption that surface station temperature step changes would be in both directions. But in 2/3 of the cases they are from warmer to cooler, as stations are moved out of UHI-polluted areas into better locations. So while they do, in one step, “adjust” for UHI, when the homogenization routines come through and stitch together that station data so it looks unbroken… they basically just put the UHI right back.
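That mechanism can be shown with a toy sketch (all numbers invented, not from any station): a station with spurious UHI warming is moved to a cleaner site, producing a cooling step, and a naive homogenization that splices out the step restores the full UHI trend.

```python
# Toy sketch of the described homogenization effect (numbers invented).
years = list(range(20))
uhi_growth = 0.05        # C/yr of spurious urban warming (assumed)
move_year = 10
step = -0.5              # the station move removes 0.5 C of UHI

raw = []
for y in years:
    t = 10.0 + uhi_growth * y        # UHI-contaminated warming, no real trend
    if y >= move_year:
        t += step                    # station relocated to a cleaner site
    raw.append(t)

# Naive homogenization: detect the step at the move and shift the earlier
# segment so the series splices together "seamlessly" -- which puts the
# full UHI warming back into the record.
adjusted = [t + step if y < move_year else t for y, t in zip(years, raw)]

raw_change = raw[-1] - raw[0]        # move partially cancels the UHI warming
adj_change = adjusted[-1] - adjusted[0]   # full UHI warming restored
print(round(raw_change, 2), round(adj_change, 2))
```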

    BUT AGAIN, this small but significant warming bias has been replaced by a substantial bias in recent years. I’m pretty sure NOAA/GISS have gone so far off the deep end with recent adjustments that even the scientists mired in groupthink will take notice… with the rather ironic side effect of snapping quite a few of them out of that groupthink.

    BTW, I believe Lord Monckton pointed out that the stated measurement error on Hadley is +/- 0.15 C. The actual measurement error is around 0.1 C. Of course, these are utterly laughable for NOAA, considering methodological changes between 1997 and 2015 have resulted in approximately 2.3 C of change in the stated temperature of 1997.

    Here NOAA claims the global temperature was 62.45F/16.917C

    And here NOAA claims the global temperature is “0.69°C (1.24°F) above the 20th century average of 13.9°C (57.0°F)” or 58.24F/14.59C
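For what it’s worth, the gap between the two quoted figures is simple unit-conversion arithmetic. A sketch, using only the numbers in the comment and making no claim about which figure is right:

```python
# Quick arithmetic check of the two quoted NOAA figures.
def f_to_c(f):
    return (f - 32) * 5 / 9

old_c = f_to_c(62.45)      # 1997 figure quoted as 62.45 F
new_c = 13.9 + 0.69        # 20th-century base plus the stated anomaly, in C
new_f = 57.0 + 1.24        # same, in F

print(round(old_c, 3))             # ~16.917 C
print(round(new_c, 2), round(new_f, 2))
print(round(old_c - new_c, 2))     # the ~2.3 C gap referred to above
```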

  7. gareth says:

    @ Kevin B
    “the figures that GISS” etc, “give out are NOT the global average temperature for today or this week or this month or this year. They are in fact the output of the various models which these agencies run.”

    Please explain – I had assumed that the figures were based on measurements (albeit with “corrections” & “homogenisation”). It would be good if you could describe the process and say what/how much actual data is used, or give links.


  8. Jason Calley says:

    Hey gareth! Like you, I would appreciate a little more info from Kevin B about the “output of various models” statement, but lacking a response from Kevin, here is what I think he means by that.

    Simple averages are, well, simple! You take a series of measurements, add them up, and divide the total by the number of measurements. Simple! Unfortunately, the process for producing a “global average temperature” is not nearly so easily done and requires a LOT of assumptions and judgement calls. In fact, it requires so many assumptions and judgement calls that a rather complex computer model must be built to take into account all the factors the programmer thinks are important. Yes, the process starts with actual data, but the program which massages the data imposes its own shaping on that data.

    For example, suppose that a certain station is missing some of its data. Should you fill it in with an interpolation based on nearby stations? How nearby? How many nearby stations? Interpolate a reading based on historical data for that site? Interpolate based on readings from that site just before and just after the missing data? Once a reading has been created, is it given equal weight with a real reading, or slightly less? Or maybe that station should just be ignored and the adjacent stations given a heavier weighting to compensate for the area of the missing data. Additionally, suppose that one station has what seems to be an outlier reading. How do you decide how big an outlier must be before it is rejected? Is the decision based on the variability of historical records? On other stations? On previous days? How do you handle step changes? Does the program even look for step changes?

    And on, and on, and on, and on…

    We like to think that the data is handled in some straightforward, mathematically defensible, well documented and justified process. It ain’t… The biases (whether conscious or inadvertent) of the people who do the programming will always have an effect on the outcome. Those biases are reflected in the form of the imaginary world, the model they create, which is the framework in which the averaging process takes place. And even with the best and purest of intentions, averaging temperatures tells you approximately nothing about any changes of actual heat energy in the system. Average global temperature not only is not data — it is not even a temperature. It is a statistic about temperatures.

    How would I do it? I would completely ignore any station without data. I would take only data which was raw or had been adjusted for clearly known, described, and documented reasons. I would weight the data by relevant area. Sure, that process has problems — but at the least the problems are more obvious and proper caution could be used in interpreting the answers. That, to me, would be much more reasonable than a process guaranteed to both introduce and conceal biases.
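A toy version of those choices (station names, readings, and areas all invented) shows how much the final “average” depends on them. Three stations report; a fourth is missing its reading; the same raw numbers give three different answers depending on whether you average naively, infill the missing station from neighbors, or ignore it and area-weight:

```python
# Toy illustration: the "average" depends on processing choices, not just
# the readings. All numbers invented.
readings = {"A": 10.0, "B": 11.0, "C": 25.0}   # station C sits in a hot region
areas = {"A": 5.0, "B": 3.0, "C": 1.0}         # area each station represents

# Choice 1: naive mean of whatever reported
naive = sum(readings.values()) / len(readings)

# Choice 2: infill missing station D from its "neighbors" B and C, then
# average all four -- the synthetic reading drags the result toward C
infilled = dict(readings, D=(readings["B"] + readings["C"]) / 2)
with_infill = sum(infilled.values()) / len(infilled)

# Choice 3: ignore the missing station and area-weight, as proposed above
total_area = sum(areas.values())
area_weighted = sum(readings[s] * areas[s] for s in readings) / total_area

print(round(naive, 2), round(with_infill, 2), round(area_weighted, 2))
```

Same three real readings in every case, yet the reported “average” moves by several degrees depending on the processing choices.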

    Anyway, that is my understanding, and I think that is what Kevin B was referring to. Someone please correct me if I have gone off the deep end!

  9. E.M.Smith says:

    @Gareth & Jason:

    I’d thought I’d have an answer up by now but “things came up”… but the answer is rather complex.

    First up, ANY average of temperatures is NOT a temperature.

    It is, at best, a statistic about temperatures. Since many (most? all?) of the climate codes that calculate a Global Average Temperature (GAT) start out from the monthly min/max average, by definition, right out of the gate, they are not talking about temperatures at all. Ever.

    This is fundamental physics, but entirely ignored by everyone. I try to call it out as often as I can without sounding like a crank / thread hijacker / whatever, but it seems to always fall flat, despite being a fundamentally rotted cornerstone of Global Warming Theory.
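A small sketch of the min/max point, using an invented and deliberately skewed daily temperature curve: the midpoint of the day’s min and max is a statistic, and on an asymmetric day it sits well away from the true 24-hour mean.

```python
# Invented daily cycle: a sharp afternoon warm spike with a slow decay,
# flat and cool the rest of the day. Shape chosen only to be asymmetric.
import math

hours = range(24)
temps = [10 + 8 * math.exp(-(((h - 14) % 24) / 5.0) ** 2) for h in hours]

true_mean = sum(temps) / len(temps)          # mean of all 24 hourly readings
minmax_mid = (min(temps) + max(temps)) / 2   # the usual (Tmin + Tmax) / 2

print(round(true_mean, 2), round(minmax_mid, 2))
```

On this made-up day the min/max midpoint overstates the hourly mean by more than 2 C; the point is only that the two statistics are not the same thing, not that either matches any real station.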

    Now those Monthly Min/Max Averages were themselves cooked up from daily min/max readings. Said readings have been processed through a boat load of “adjustments” (TOBS being the most egregious, IMHO, not due to its theoretical basis but due to the way it is applied – ham-handed and without reason to all sorts of data items). There is also a quasi-broken “QA” process that “adjusts” data to match an average of nearby airports.

    Then those adjusted, QA-modified data may be infilled by mysterious means, and then a monthly average for that instrument is produced. At that point, the mass of data-food-monthly-average-product gets sent on for further processing by GIStemp, HadCRUT, NOAA, et al., to produce their “temperature data” that is neither a temperature (see above about averages) nor data.

    Now I have no idea if that is what Kevin B. meant, but I think it is possibly close.

    FWIW, I set out to make “as close as I could get” graphs from what was available, avoiding as much of the spreading / infilling / adjusting / etc. as possible. I decided (for the moment) to accept the use of a “monthly average” even though it is philosophically bankrupt as a temperature, but didn’t have the data, the time, or the processing power then to handle the daily data. (I’ve since gotten the daily data, but still lack the rest…) That was my dT/dt series. It uses an interesting technique and a few weeks’ worth of reading can be found here:

    Then I figured out that NCDC / NOAA were pre-buggering the data anyway and that looking at GIStemp was kind of pointless in that context. Sigh. Essentially, if you want clean data, you must go back to the original recording documents that in many cases have been lost, destroyed, or stored where the sun don’t shine… In short, the only relatively clean data we have is from the satellites.

    Some misc. here:

    and more detail inside GIStemp than anyone would ever want, but it lets you see the kind of processing that goes on AFTER all the crap done by NOAA/ NCDC:
    or a more general human oriented entry point:

    You will notice after slogging through all that stuff that “temperature” is not exactly a data item by the time you reach the end…

  10. Larry Ledwick says:

    Ever since your discussion of this issue, I have raised the question of worldwide average temperature being a meaningless construct, and got the same deer-in-the-headlights response or a tongue lashing for not believing in science.

    I have decided it is not worth the brain damage resulting from an effort to discuss the issue with those who will not hear. I assume that 50-100 years from now it will be discussed along with other fanciful theories that make people shake their heads in wonder, but the current generation just is not in the mood to step back and think, so it is a lost cause for the many who are immersed in the AGW dogma.

  11. Glenn999 says:

    I agree the GAT is meaningless. Something much more meaningful, it would seem to me, would be to look at temp changes in small localities. For example, I live in N. Central FL. I would be interested in analyzing changes over time in this area, along with the changes in temperature equipment siting. Even this small geographic area may need to be broken up into coastal and inland areas, perhaps even more subdivisions. But comparing our increase/decrease with an increase/decrease in Alaska, and then averaging, makes no sense to me, and I’m not sure what that data product would tell anyone anyway. But if we had good data about one station over many years, that would tell us something. At least that is what I’m thinking now….

  12. Mic Hussey says:

    Glenn999 – The Armagh Observatory in Ireland provides max and min daily temperatures going back to 1844. Plus detailed discussion of all the adjustments they’ve made to deal with instrument changes. Even so, you’ll find lots of markers for “no data” (-999 or -888) in the early readings.
