AGW Jumping Sharks Since 1986

OK, I Couldn’t Resist

This graph shows the monthly trend lines from 1986 to date. Notice in particular how drastic some of the trends are; November, for instance. (You can click on the graph to get a much larger, more readable version.)

GHCN V2 All Data Anomalies 1800-2010, 1986 Trends

I can’t help but think that this is something of a “Jumping The Shark” graph. The drastic and extreme change of trends, especially in some months, along with the divergence between months, just looks crazy. Yet that’s what is in the data. There is no adjustment, homogenizing, filling in, UHI adjustment, etc. done by me. This is the straight GHCN “Unadjusted” data set. It is converted to anomalies by having each thermometer compared only to itself, and each month compared only to itself. There is no seasonal averaging nor any kind of blending done. Simple, direct “self to self” comparisons for each thermometer in each month.
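
To make the method concrete, here is a minimal sketch of that “self to self” comparison, plus the per-month trend fits shown in the graphs. It assumes the GHCN v2 file has been parsed into (station_id, year, month, temp) tuples; it illustrates the method described above, not the actual processing code behind these graphs.

```python
# "Self to self" anomalies: each reading is compared only to that same
# thermometer's own long-term mean for that same calendar month.
from collections import defaultdict

def monthly_anomalies(records):
    """records: iterable of (station_id, year, month, temp_C) tuples."""
    totals = defaultdict(lambda: [0.0, 0])        # (station, month) -> [sum, count]
    for station, year, month, temp in records:
        t = totals[(station, month)]
        t[0] += temp
        t[1] += 1
    base = {k: s / n for k, (s, n) in totals.items()}
    return [(station, year, month, temp - base[(station, month)])
            for station, year, month, temp in records]

def trend_per_month(anoms, first_year, last_year):
    """Least-squares slope (C/year) of the mean anomaly vs. year, one per month."""
    by_month = defaultdict(lambda: defaultdict(list))   # month -> year -> [anomalies]
    for _, year, month, a in anoms:
        if first_year <= year <= last_year:
            by_month[month][year].append(a)
    slopes = {}
    for month, years in by_month.items():
        pts = [(y, sum(v) / len(v)) for y, v in years.items()]
        mx = sum(x for x, _ in pts) / len(pts)
        my = sum(y for _, y in pts) / len(pts)
        sxx = sum((x - mx) ** 2 for x, _ in pts)
        if sxx > 0:
            slopes[month] = sum((x - mx) * (y - my) for x, y in pts) / sxx
    return slopes
```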

Compare this with the graph of the data prior to 1986. In that graph there is a minor warming of the winter months as we rise out of The Little Ice Age, but substantially no warming in summer. A very natural state of affairs.

GHCN V2 All Data Anomalies 1800-2010, pre-86 Trends

That is just a crazy change of trend between these two graphs. Notice that I’ve stretched the vertical scale of the second graph so that what little divergence there is within those trend lines would be more visible. This change of trend between the graphs happens well after CO2 has had plenty of time for “effect”, yet it could not warm the summers prior to 1986. It looks very non-physical.

I’ve chosen to make the segment break at 1986 for this graph as that is when a lot of the newer “duplicate number” or “modification history” flags first start to show in the data (the older series overlap, then exit with The Great Dying Of Thermometers, which starts in 1990 in a big way).

The nature of these graphs, and how they are made, is discussed in the posting of yesterday, here:

https://chiefio.wordpress.com/2010/07/31/agdataw-begins-in-1990/

That posting has the entire graph from start to end, along with trend lines for the data as a composite from start to end, so you can compare the two segments with the total by using the graphs in both postings.

I suppose one could try to claim that a 24-year segment is just too short to give a valid trend, but then one would have to explain why a 30-year segment is long enough to define “climate”… and why it’s usable for defining “climate change” but not usable for showing how atypical the present segment of the data looks.

IMHO, these two graphs highlight a significant “brokenness” in the GHCN data series.

Update: This Just In

From Verity Jones I’ve been handed an early peek at a “Pivot Chart” she has started. This is just the first 50 or so lines done, but it shows how the data suddenly change around 1986 with the change of the “modification history number”, AKA “Duplicate Number”. 50 down, only 6800 to go…

Depiction of the "Duplicate Number" series changes over time

Down the left side are the Station IDs; across are the years. A colored box shows where there is data. If I understand this correctly, it is showing “The Splice”: one set of stations leads in, we get a (mostly) different set in the ‘cold period’ from 1951 to about 1980, then a swap is made around 1986-1990 to yet another set.
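
For the curious, here is a rough sketch of how such a presence grid might be built; the records format (with a “duplicate number” field) is an assumption for illustration, not Verity’s actual spreadsheet method.

```python
# Build a (station, duplicate-number) by year presence grid, mirroring the
# colored boxes in the pivot chart. Record format is assumed.
from collections import defaultdict

def presence_grid(records):
    """records: iterable of (station_id, dup_flag, year, month, temp) tuples."""
    grid = defaultdict(set)
    for station, dup, year, month, temp in records:
        grid[(station, dup)].add(year)
    return grid

def render(grid, first=1950, last=1995):
    """Print an X where a series has data in a year, a dot where it does not."""
    for key in sorted(grid):
        row = "".join("X" if y in grid[key] else "." for y in range(first, last + 1))
        print(key, row)
```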

“A Splice is a terrible thing to waste.”…

Exactly what the change is that happens at that moment in time is yet to be determined.


24 Responses to AGW Jumping Sharks Since 1986

  1. Verity Jones says:

    Well, my first reaction was a downbeat ‘yeah I know’ but then I read it properly and went and looked at my data. I have sent you a picture that I think would add considerably to the posting. Although it is a simple depiction, it shows how typically fragmented the data is, and it shows ‘1986’.

    It was a bit of a ‘Wow’ when I looked and of course you are right about the numbers of series starting in 1986. So the question is ‘What happened then?’

    I should be able to quantify this.

  2. Verity Jones says:

    Er, actually, for most series the first full year is 1987.

  3. DG says:

    Have you seen Ross McKitrick’s new paper?

    Click to access surfacetempreview.pdf

  4. DG says:

    Per above:

    “On substantive grounds I therefore conclude that after 1990 the GHCN archive became very problematic as a basis for computing precise global average temperatures over land and comparing them to earlier decades.”

  5. E.M.Smith says:

    @Verity: Yes, there is something of a ‘feathering’ effect. The changes are blended in mid-year so they are smoothed.

    I’m sure it’s all innocent… 8-^)

  6. E.M.Smith says:

    @DG: Had not seen it, but reading it now! Thanks!

    Golly, it cites me as ‘tabulator’ of the data.

  7. oldtimer says:

    I have just finished reading the McKitrick paper. It reinforces and amplifies everything you have discovered. Process changes have contaminated both the land and sea temperature records. There appears to have been very little attempt by CRU to validate the changes as they occurred or to accept criticism.

    It is also worth recalling that the CRU did not disclose, in their original evidence to the Muir Russell enquiry, the actual number of temperature stations used in calculating their temperature record. That was only dragged out of them, and reported in the final Muir Russell report, because of the investigatory work and brilliant charts that you produced on this site, including that telling graph line of the number of stations used year by year.

    In McKitrick, I particularly like the chimney brush chart, Fig 1-10 page 15, illustrating the noise introduced post 1990.

  8. E.M.Smith says:

    @oldtimer: Kind of makes it all worthwhile…

    Had not heard that it was dragged out of CRU using anything I’d done, but nice to think about.

    And I’d never heard of a “chimney brush” chart before, but it is a very interesting technique. I like the way it graphically demonstrates the sudden onset of volatility and the extreme noise level introduced by thermometer drops.

    While it seems like most folks don’t get much from it, the comparison of the AGW analysis to a calorimetry experiment “speaks to me” the most. It really is the case that the goal is to do a calorimetry experiment. To answer “how much is the heat balance of the planet changing?” Once you accept that, it becomes blindingly obvious to anyone who’s done calorimetry just how completely broken their process is.

    They use temperatures, but don’t know the mass or specific heats of the things being heated.

    They don’t know the time lags involved.

    They keep moving the thermometers around, changing the number, changing the calibration, changing the locations, changing the ‘adjustments’ applied, etc.

    You just can’t get a valid result in that sea of change and broken ‘technique’.

    But it seems not too many folks are familiar with calorimetry, so the metaphor does not catch folks’ attention.
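
    As a toy illustration of the calorimetry point (illustrative values only, nothing from the temperature series):

    ```python
    # The same temperature rise implies very different heat inputs depending
    # on the mass and specific heat of what was warmed. Q = m * c * dT is
    # the quantity a calorimetry experiment actually wants.

    def heat_joules(mass_kg, specific_heat_j_per_kg_k, delta_t_c):
        return mass_kg * specific_heat_j_per_kg_k * delta_t_c

    # A 1 C rise in a kilogram of water vs. a kilogram of dry air:
    print(heat_joules(1.0, 4186.0, 1.0))  # ~4186 J
    print(heat_joules(1.0, 1005.0, 1.0))  # ~1005 J -- same dT, a quarter the heat
    ```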

  9. Sinan Unur says:

    Verity’s graph inspired me to generate a similar sheet using the database I had set up back in May and a simple Perl script. You can find the code and the spreadsheet on my blog: Station data continuity in GHCN-v2.

  10. Sinan Unur says:

    E.M. Smith: The link is in my comment (although the color scheme makes it hard to see). For clarity, here it is, without using the anchor text:

    http://blog.qtau.com/2010/08/station-data-continuity-in-ghcn-v2.html

  11. Steve McIntyre says:

    Up to 1987, the provenance of the data is mainly World Weather Records, compiled by the Smithsonian. After 1987, the provenance is MCDW, Monthly Climatic Data of the World – mostly airport data.

    John Goetz and I looked at this in some examples in Aug-Sep 2007 examining how the splice worked in some Russian stations.

    Sometimes the overlap is too short to permit the GISS station collation method to work, and thus GISS doesn’t use stations after 1990 or so that are publicly available (while CRU, with a different collation method, uses them).
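
    A toy sketch of the general overlap-offset idea behind such collation (the general notion only, not GISS’s actual algorithm; the minimum-overlap threshold is invented for illustration):

    ```python
    # Merge two records of the same station by offsetting one to match the
    # other over their common years; refuse the merge if the overlap is
    # too short. MIN_OVERLAP is a hypothetical threshold.
    MIN_OVERLAP = 10  # years

    def merge(a, b):
        """a, b: dicts of year -> annual mean temp. Returns merged dict or None."""
        common = set(a) & set(b)
        if len(common) < MIN_OVERLAP:
            return None                       # overlap too short: series dropped
        offset = sum(a[y] - b[y] for y in common) / len(common)
        merged = dict(a)
        for y, t in b.items():
            merged.setdefault(y, t + offset)  # keep a's values where both exist
        return merged
    ```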

  12. Max Hugoson says:

    Cubic foot of air, sea level.

    85 F, 60% RH. Enthalpy: 38 BTU’s

    Cubic foot of air, sea level.

    105 F, 10% RH. Enthalpy: 33 BTU’s.

    WHICH “atmosphere” has more energy?

    Sorry, these average temperatures are actually MEANINGLESS.

    ENERGY is everything. Have to keep coming back to the concept that you CANNOT AVERAGE INTENSIVE VARIABLES.
    Wait, yes, you can average anything you want… BUT averaging is a mathematical operation, MEANING is an intellectual operation.
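
    A minimal psychrometric sketch that roughly reproduces those figures, using the Magnus approximation for saturation vapor pressure; the numbers are approximate, not a substitute for real psychrometric tables:

    ```python
    # Moist-air enthalpy per lb of dry air: cooler, humid air can carry more
    # energy than hotter, dry air. Approximate formulas, illustrative only.
    import math

    def sat_vapor_pressure_psia(t_f):
        """Saturation vapor pressure of water (Magnus approximation)."""
        t_c = (t_f - 32.0) * 5.0 / 9.0
        e_hpa = 6.1094 * math.exp(17.625 * t_c / (t_c + 243.04))
        return e_hpa * 0.0145038  # hPa -> psia

    def enthalpy_btu_per_lb(t_f, rh, p_psia=14.696):
        """Sensible plus latent enthalpy of moist air, BTU per lb of dry air."""
        p_v = rh * sat_vapor_pressure_psia(t_f)        # vapor partial pressure
        w = 0.622 * p_v / (p_psia - p_v)               # humidity ratio, lb/lb
        return 0.240 * t_f + w * (1061.0 + 0.444 * t_f)

    print(enthalpy_btu_per_lb(85.0, 0.60))   # ~37: close to Max's 38
    print(enthalpy_btu_per_lb(105.0, 0.10))  # ~30: the hotter air holds LESS energy
    ```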

  13. Verity Jones says:

    @Steve McIntyre,
    Thanks that makes a lot of sense. Will chase up.

    Your site is such a resource, but most of the station work predates my interest in AGW, so it does not always occur to me to search your archive, and in this case I would not have known what to look for anyway!

  14. E.M.Smith says:

    @Steve McIntyre: If you have a good ‘entry point’ link it would be nice to post it. There is so much on your site that it can be hard to find things if one is not familiar with where to look.

    @Max: Yes, I periodically push the point that it’s just a nutty thing to do, but for some reason folks don’t hear it. So I go ahead and participate on the ‘ground they have selected’ all the while thinking it’s just a stupidity.

    I have a posting on the math of it as well that basically says that even if temperature were NOT an intensive variable, what they then do with it is equally pointless. But that, too, gets ignored. Makes me wonder how many folks really understand mathematics and physics, and how many took the courses and passed the tests but never bothered to think along the way.

    https://chiefio.wordpress.com/2010/07/17/derivative-of-integral-chaos-is-agw/

    Then there is the fact that as a calorimetry experiment they do just about everything possible to do it “wrong”. Yet it would seem most folks have not done much chemistry either. So that metaphor does not catch much wind.

    So when I first looked at the whole AGW / GIStemp et al. processing of the data, I just cringed. So much wrong, and how to get it across? Had a few months of rants from folks about my complaint that the average of a few thousand different temperatures (of different places and times) was void of MEANING. That you could have an average calculated to 1/100 C but it was empty of meaning. That you might have something in the whole-C place, but even the 1/10 C was dodgy. Somehow I failed to get folks to understand the point and just had a lot of folks assert I was an idiot because of the law of large numbers. I’ve yet to figure out a good way to get most folks to understand that you can have a mathematical operation that is valid in some contexts yet gives trash in others, and that it’s about tracking the MEANING of what you are doing with the process. It’s blindingly clear to me, but somehow I can’t get it into a single simple enough paragraph.

    So if you have a bunch of intensive variables and they are all about 5 C, you can average them and say that it is a cool average, but you already knew that, as they were all near 5 C, and averaging them does NOT give more precision than that, as the average is devoid of meaning. You could just as easily average -5 C and 15 C and try to get meaning out of it. You can’t. You can know that the range was greater (so the distribution is interesting) but the 5 C average doesn’t mean anything.

    I suspect the problem comes from folks just assuming that there is some standard or average mass, specific heat, specific heat of fusion, specific heat of vaporization, etc. for the planet so that you can just assume it is a constant.

    But it is NOT a constant, as you showed. So all those 1/100 C and even the 1/10 C places are just trash. The error band in the (mass, specific heats, phase changes) just swamps them. And I’m not even sure that the whole-degree C place is fully outside the jitter induced by the other (assumed and not measured) terms.
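
    A small sketch of that point, using the -5 C / 15 C example above; the masses and specific heats are made-up illustrative values:

    ```python
    # Two bodies whose simple temperature average is 5 C, but whose combined
    # heat content implies something quite different.
    bodies = [
        # (mass_kg, specific_heat_J_per_kg_K, temp_C) -- illustrative values
        (1000.0, 4186.0, -5.0),  # water-like specific heat
        (1000.0,  800.0, 15.0),  # rock-like specific heat
    ]

    simple_avg = sum(t for _, _, t in bodies) / len(bodies)   # 5.0 C

    # Temperature implied by the total heat content (relative to 0 C):
    total_heat = sum(m * c * t for m, c, t in bodies)         # joules
    total_mc = sum(m * c for m, c, _ in bodies)               # J/K
    implied = total_heat / total_mc                           # about -1.8 C

    print(simple_avg, implied)  # the simple average hides the energy balance
    ```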

    Oh well. I’ll keep working on it from time to time until I find a way to state it clearly that doesn’t cause 1 in 1000 folks to post the stats they learned about averages being more precise than the individual data items. (I had it in my stats class too; but I remember the teacher saying something about it not always having meaning, so be careful using it… wish I’d kept those notes, but they went in one of the purges of moving. When I moved onto a live-aboard sailboat, I think. Small price to pay for the couple of years of pleasure it brought…)

    So, to make an already too long reply even longer ;-)

    I’m pretty certain that the “results” all the temperature series guys (GISS, CRU, NCDC) get are just dancing in the error bands of number manipulations that are devoid of real meaning, and done in a poor way based on a broken mathematical understanding. But to get that across takes an understanding of subtle math (not arithmetic…) and physics / chemistry, and most folks lack one or the other. So the synthesis fails, and they accept doing battle on ground selected by the AGW crowd that is just a muddy quagmire of mushy thoughts and numb ideas.

    Oh well, sure the game is rigged, but it’s the only game in town…

  15. P.G. Sharrow says:

    “I am a street merchant, and every night when I get home I weigh the contents of my pockets and record the weight. After 10 years I average the weight per day: 25 oz. How many dollars a day have I made, and what is the change in per-day income over the 10-year period?” How did I do, Chiefio? pg ;-)

  16. Sinan Unur says:

    @E.M. Smith:

    Here is a link to “Russian Bias”: http://climateaudit.org/2007/09/28/russian-bias/

    There are links in that article to discussions of the method:

    http://www.climateaudit.org/?p=2083

    http://www.climateaudit.org/?p=2033

  17. E.M.Smith says:

    @P.G.: Hmmm…. I like it. It does very clearly illustrate the ‘issue’ of an intensive variable. It’s accessible. We’ll see if it catches on / works over time with others…

    @Sinan: Thanks, I’ll hit the links.

  18. Lucy Skywalker says:

    Chief, you know I suspect the official stuff like you do. Therefore I also want hard questions answered. So suppose that, even if the station dropout problem is a cause of the 1986 shift you show here, there were other factors? What about the 30-year climate shifts, i.e. 1910-1940 steep rise, 1940-1970 gentle fall, 1970-2000 almost exactly the same steepness of rise as 1910-1940, after 2000 looks like gentle fall…

    Now can you compare the monthly data trend changes at each of these significant natural change points with your graphs above?

  19. PhilJourdan says:

    I read the McKitrick paper, and this seems to go hand in hand. Thanks for doing the hard work.

  20. E.M.Smith says:

    @Lucy Skywalker:

    I can easily put “segments” at any places desired. I’ve just awakened, so I need coffee before I can get to it. The “Hockey Blade” at the end shows up clearly in a lot of subsets of the data, like all the ones here:

    https://chiefio.wordpress.com/2010/04/11/the-world-in-dtdt-graphs-of-temperature-anomalies/

    The most telling thing to me is the loss of the “hair”, rather than the actual trend lines by month. But most folks just ignore the “hair” and love trend lines, so I put the trend lines in, even though I think the loss of cold-going anomalies (which clearly does NOT happen in other time spans) is the big deal.

    With that said, it would be interesting to see the ‘divergence’ by month in some of the prior warming or cooling segments.

    IMHO though, most of the ‘issue’ is a thermometer count artifact due to a change of Q.A. process between the two periods (the “Duplicate Number” or “Modification History Flag” change), along with ‘splice artifacts’ from combining records with different volatilities:

    https://chiefio.wordpress.com/2010/08/04/smiths-volatility-surmise/

    which I guess is just another way of saying I think it’s as much about the “dip” in the ’60s and ’70s from the increased number of thermometers as it is about the rise after it.

  21. P.G. Sharrow says:

    Actually, if I remember correctly, all the necessary information is collected and recorded at weather stations for the use of meteorologists in guessing the future weather: barometric pressure, relative humidity, and temperature are all collected. The problem is that the climate records we are looking at from GISS, HadCRU, et al. are for climatologists, and they are either too lazy or too dumb to use all the information. I vote for lazy, as everything they have done for the last 40 years points to poor mental work habits. pg

Comments are closed.