Africa – The Canonical Set of dT/dt Graphs

This posting is an “aggregator posting” so that the entire set of Africa graphs can, in the future, be reached from a single link. The Africa series was divided into 4 major regions, each with its own posting. There are well over 60 total graphs, so even with those divisions it’s a pretty hefty “page weight” for each portion.

Each of the graphs is quite large, so you can click on the small version shown on the page to get a much larger and more easily read version.

So what does Africa look like?

Africa Monthly Anomalies and Running Total by Segments

A dramatic drop to 1975, then an equally dramatic rise of about 1/2 C per decade back out again. With rather interesting symmetry to the thermometer count. Notice also the “bullseye” moment in 1992 when the data processing history changes and how that maps to a rising segment. It is quite striking that in that year all the monthly anomaly lines pass through zero. Also notice that there are very few “negative anomaly” months between 1976 and 1996. Something was pruning out the “cold months” then on an aggregate basis. When we get into the individual country graphs this becomes all the more ‘odd’ as many countries are cooling.

https://chiefio.wordpress.com/2010/04/09/africa-north-africa-graphs/

https://chiefio.wordpress.com/2010/04/10/africa-the-islands/

https://chiefio.wordpress.com/2010/04/10/africa-equatorial/

https://chiefio.wordpress.com/2010/04/10/africa-southern-nations/

(Which includes a gratuitous Antarctica graph for completeness)

Overall Conclusions

The patterns of the various countries of Africa are so divergent, both in shapes and time of onset for changes (or complete lack of changes, with flat or dropping trends), that I can see no way for a “CO2 explanation” to be causal. It would simply have to do too many contradictory things that are very often “unphysical”.

Things like that “Pivot” in 1976. How does CO2 accumulate for 100+ years, then suddenly have an onset in one year? But only on one continent and only in some countries of it?

But those patterns would be very easily explained by instrument change and data processing artifacts (things like “splice artifacts” between data series).
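
A minimal sketch of what I mean by a “splice artifact” (toy numbers only, not the GHCN data or my actual dT/dt code): two perfectly flat instrument records that differ only by a constant offset, spliced end to end, produce a series that a trend fit happily reads as “warming”.

```python
# Toy illustration of a splice artifact: neither record has any trend,
# but the offset at the 1975/1976 splice point shows up as one.
import numpy as np

years_a = np.arange(1950, 1976)           # instrument A, retired after 1975
years_b = np.arange(1976, 2011)           # instrument B, new screen / new site
temps_a = np.full(years_a.size, 20.0)     # flat 20.0 C record
temps_b = np.full(years_b.size, 20.7)     # flat 20.7 C record (offset only)

years = np.concatenate([years_a, years_b])
temps = np.concatenate([temps_a, temps_b])

slope, _ = np.polyfit(years, temps, 1)
print(f"Apparent trend from the splice alone: {slope * 10:.2f} C per decade")
```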

There are also rampant data dropouts throughout Africa. This complicates matters (and it is highly “suspicious” that places near those with the most artificial looking “hockey sticks” often have the most data dropouts, which will of necessity be ‘filled in’ from those hockey stick places…)

Finally, there are hints of a cold southern hemisphere. Something we also saw under the Islands threads. There is the very real possibility that there is a global antipodal oscillation of cold where we don’t capture the southern cold phase as most of the thermometers are in North America, Europe, and Japan. Then we would “find warming” by simply not finding where the cold went during one half of the cycle. And with known 60 year cycles of weather, “this matters” in that we might have to wait 60 years after we achieve full global instrumentation to capture the event. Not only do we not know, but it may be impossible for us to know for several decades more.



3 Responses to Africa – The Canonical Set of dT/dt Graphs

  1. oldtimer says:

    Thank you for this series of charts. Your whole effort is a tour de force. Just as a picture is worth a thousand words, so a chart is worth a thousand numbers.

    You make the point several times that it is difficult to see a connection here between temperature changes and CO2.

    Your analysis of the unadjusted GHCN set of all the records for all years stands in stark contrast to the CRU/GISS analyses which compare results vs the 1961-90 period. I do not see how records based on the post 1990 station count can reliably be compared with that earlier baseline period. Yet the “evidence” of global warming appears to rest on this comparison.

    I have posted here previously that I have submitted evidence to the Independent Climate Change E-mail Review (headed by Sir Muir Russell). Among other things I have asked that CRU clarify the thermometer station counts which underlie the anomaly charts submitted as evidence to the Review. I have also said that, because of the differences in station counts recorded in unadjusted GHCN (said to be a common source for CRU and GISS) in the baseline period and post 1990, there should be a parallel run using a common data set throughout. Without that, it seems to me, the anomaly record published by CRU is unreliable and unproven.

    REPLY: [ You are most welcome. You raise several complex and very interesting questions. I can’t do them justice in a small comment, but I’ll give an overview of my opinions.

    Baseline: I’ve come to wonder how we really know that the ’60s and ’70s were “cold”. Yes, it snowed in my home town (an odd thing), but globally we have very poor information. The thermometer counts rise to a peak during the baseline and one could view the “dropping” to date as “warming” or the “peak then” as “cooling”. It is a symmetrical question. Do we have a limited “warming biased” set today, or was the baseline a cooling cherry pick? Or perhaps some of both. It is clear to me that the “baseline pick” is a significant issue. “The Warmers” assert that it makes no difference. Yet again and again we see that “dip” only in that baseline period. Removal of a “baseline” was one of my key design goals. So my “start point” is ‘now’ and that’s why all the charts have ‘now’ near zero.
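
    To make that design goal concrete, here is a minimal sketch (synthetic data and hypothetical variable names, not my actual dT/dt program) of the difference between anchoring anomalies to a 1961-1990 baseline mean and anchoring them at ‘now’, so the chart ends at zero and the past shows how we got here.

```python
# Hedged sketch: "baseline anomaly" vs "anchored at now". Synthetic series only.
import numpy as np

years = np.arange(1950, 2011)
rng = np.random.default_rng(1)
temps = 20.0 + 0.01 * (years - 1950) + rng.normal(0, 0.3, years.size)  # toy record

# Conventional approach: subtract the 1961-1990 mean.
base = (years >= 1961) & (years <= 1990)
anom_baseline = temps - temps[base].mean()

# "No baseline" approach: measure everything back from the latest value,
# so 'now' sits at zero and the history shows the path taken to get here.
anom_now = temps - temps[-1]

print(round(anom_baseline[-1], 2))   # depends entirely on the baseline pick
print(round(anom_now[-1], 2))        # 0.0 by construction
```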

    Counts: Part of dT/dt is a forensic approach. I want to see the “splice artifacts”. The thesis is that these splice artifacts bleed through other anomaly processes as well. So one could attack the dT/dt process as being sensitive to “splice artifacts” (but that presumes you are trying to find the fictional “global average temperature” rather than “find the issues” and would raise the question of how sensitive the other codes are to “splice artifacts”…)

    In reality, dT/dt is only the halfway point for me. The eventual goal is to find a way to avoid the ‘splice artifacts’ and then show where code like GIStemp falls on the spectrum of “artifact sensitivity”. So I have a ways to go. What’s very clear to me already, though, is that GIStemp is NOT a perfect filter and does let the artifacts through. (Those comparison GISS anomaly maps find about the same ‘warming’ as dT/dt does in some parts of the world, like Marble Bar and Africa.) The big difference is that I show the shape over time of that ‘warming’ and they show the end point. A critical difference, IMHO ;-) So yes, I think “counts matter” and that you cannot divorce “instrument change” from the question of validity of the answer. That’s a big part of why I do the “by segment” graphs. A crude form of dividing the instrument changes into islands of relative stability for comparison. And we regularly find that the thermometer count is symmetrical to the dT change. It ought to be orthogonal if “count does not matter”… so it’s pretty clear to me that “count matters”.
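
    For the curious, a rough sketch of the “by segment” idea (the column names and grouping rule are hypothetical, not the GHCN layout or my production scripts): bucket stations by when their record ends and summarize each bucket separately, so an instrument change shows up as a jump between segments instead of being averaged away.

```python
# Hedged sketch of a "by segment" grouping. Column names are hypothetical.
import pandas as pd

def segment_means(df: pd.DataFrame) -> pd.DataFrame:
    """Label each station by the decade its record ends, then average by label and year."""
    last_year = df.groupby("station_id")["year"].transform("max")
    labeled = df.assign(segment=(last_year // 10) * 10)   # e.g. records ending in the 1970s vs 2000s
    return labeled.groupby(["segment", "year"])["temp"].mean().reset_index()

# Toy input: one long-lived station and one that stops reporting in 1979.
toy = pd.DataFrame({
    "station_id": ["A"] * 4 + ["B"] * 2,
    "year":       [1970, 1980, 1990, 2000, 1970, 1979],
    "temp":       [20.1, 20.2, 20.4, 20.6, 18.0, 18.1],
})
print(segment_means(toy))
```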

    Stable Set: One of my very first efforts was a ‘stable set’ analysis using only long lived thermometers. This was done using simple temperature averages (not anomalies). Some “Warmers” wanted to toss rocks at that for being an irrational way to find the “Global Average Temperature”. That was a stupid line of attack as my goal was NOT to find the GAT, but rather to characterize the degree of error inherent in the data from thermometer change. (My personal belief is that a GAT is by definition wrong. Averaging intensive variables can tell you about the structure of the data, but does not give a valid average answer.) What it showed was that the “warming signal” was in the short lived records and not present in the long lived records. The same thing we have seen repeatedly in the anomaly graphs as well. Large chunks of the planet cool right up to the point in 1990 or so when the “duplicate numbers” change and we get a batch of short lived records. So there are two things to do in making a ‘stable set’. 1) Select long lived thermometers. 2) Find out what was done to bugger the process in 1990 and undo it.
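
    As a sketch of that ‘stable set’ filter (again with made-up column names, not my original scripts): keep only stations whose records cover nearly the whole period, then take a plain average of that long lived set and compare it against the average of everything.

```python
# Hedged sketch of a long-lived "stable set" filter. Column names are hypothetical.
import pandas as pd

def stable_set(df: pd.DataFrame, start: int, end: int, min_years: int) -> pd.DataFrame:
    """Return rows for stations reporting at least min_years distinct years in [start, end]."""
    window = df[(df["year"] >= start) & (df["year"] <= end)]
    years_per_station = window.groupby("station_id")["year"].nunique()
    keepers = years_per_station[years_per_station >= min_years].index
    return window[window["station_id"].isin(keepers)]

# Usage idea: stations with at least 90 reported years between 1900 and 2010,
# then a simple per-year average of temperatures (no anomalies, no baseline).
# long_lived = stable_set(ghcn_df, 1900, 2010, 90)
# print(long_lived.groupby("year")["temp"].mean())
```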

    A vs B compare in GISS and CRU: Yes, CRU have said that most of their data is identical to the GHCN set (over 90%) so given that they use a baseline AND the same data, they too will be comparing two completely different sets of thermometers to each other. The claim is that “anomaly processing” will solve this problem. Yet dT/dt is an anomaly process and it finds that there are plenty of “splice artifacts” in the comparison of one time period (and one set of thermometers) to another. So, IMHO, CRU and GISS are both pretty worthless for any form of policy guidance. Basically, it would be far superior to pick a few dozen extraordinarily long lived thermometer records around the planet and simply use them, unadorned. TonyB has done substantially that, IIRC, and what was found is that “warming” is not present. (TonyB: If I’ve mischaracterized that, please let us know?)

    And with that I’m going to stop (before this reply becomes a 10 page article ;-)
    -E.M.Smith ]

  2. Araucan says:

    Hi,

    It seems to me, but perhaps it’s only an impression, that the monthly variability is a function of the number of weather stations. Did you check that?

    See you

    A.

  3. E.M.Smith says:

    There ought to be an increase in variability as the number of stations drops (averaging a larger number of things dampens the outliers more). But what we get is the opposite. So it’s station selection rather than number that’s dampening the range.
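
    A quick numeric check of that point (synthetic numbers only, no GHCN data involved): the spread of an average over n independent station anomalies shrinks roughly as one over the square root of n, so a falling station count should widen the monthly range, not narrow it.

```python
# Synthetic demonstration: fewer stations ought to mean MORE month-to-month spread.
import numpy as np

rng = np.random.default_rng(42)
station_sigma = 1.0   # assumed spread of individual station monthly anomalies, in C

for n_stations in (1000, 100, 10):
    monthly_means = rng.normal(0.0, station_sigma, size=(5000, n_stations)).mean(axis=1)
    print(f"{n_stations:5d} stations -> spread of monthly mean ~ {monthly_means.std():.3f} C")
# Roughly 0.032, 0.100, 0.316 C: the fewer the stations, the wider the wobble.
```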

Comments are closed.