Original Image (click for a bigger version).
I particularly liked the way this image ties together D.O. events, Bond Events, and our historical context along with the Ice Age context and the Younger Dryas. If anyone knows the author I’d love to give attribution and / or read the original article it came from (if any).
Over on WUWT there is a discussion of Arctic stations and how the temperature “trends” at rural stations track the AMO (Atlantic Multidecadal Oscillation) while the more ‘urban’ stations show urban heat island effects.
A couple of comments there caught my eye, and I’m going to preserve them here (for the simple reason that things move so fast on WUWT that I can’t always find a particularly good bit when I want to go back to it). From:
This comment in particular did a very nice job of setting out some of the physics issues of using temperatures to measure heat flow on the planet.
Dave in Delaware says:
September 23, 2010 at 5:43 am
Thoughts on Anomaly Temperatures
Temperature is a PROXY for Energy.
The Energy content and the Energy transfer are what you really need to track. Calculating an Anomaly of High Energy air averaged with Low Energy air is only a rough approximation, even if the statistics are pristine. It takes more energy to change the temperature of Humid air than of dry air.
Three examples where temperature anomaly is not telling the full story
* radiant energy transfer
You have probably seen the example (my excerpt from Max Hugoson post at WUWT)
Go to any online psychrometric calculator.
*Put in 105 F and 15% R.H. That’s Phoenix on a typical June day.
*Then put in 85 F and 70% RH. That’s MN on many spring/summer days.
What’s the ENERGY CONTENT per cubic foot of air? 33 BTU for the PHX sample and 38 BTU for the MN sample. So the LOWER TEMPERATURE has the higher amount of energy. … Thus, without knowledge of HUMIDITY we have NO CLUE as to atmospheric energy balances.
———————– (end of excerpt)
So might a better anomaly track temperature in similar humidity areas? Tracking Phoenix with itself might be OK, but maybe we shouldn’t track Minneapolis even with itself, since summer vs winter humidity is significantly different. It has been suggested that Dew Point might be a better indicator than Tmin averaged with Tmax.
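The psychrometric arithmetic behind those numbers can be sketched in a few lines. This computes the standard moist-air enthalpy per pound of dry air (the usual psychrometric basis) with a Magnus-style saturation-pressure fit; the constants are textbook values and the fit is an approximation, not taken from the comment itself:

```python
import math

def sat_pressure_psia(t_f):
    """Approximate saturation vapor pressure of water (psia).
    Magnus-style fit; an approximation, not the ASHRAE correlation."""
    t_c = (t_f - 32.0) * 5.0 / 9.0
    p_kpa = 0.6112 * math.exp(17.62 * t_c / (243.12 + t_c))
    return p_kpa * 0.1450377  # kPa -> psia

def moist_air_enthalpy(t_f, rh, p_atm=14.696):
    """Moist air enthalpy, BTU per lb of dry air (standard psychrometric form)."""
    p_w = rh * sat_pressure_psia(t_f)
    w = 0.622 * p_w / (p_atm - p_w)  # humidity ratio, lb water per lb dry air
    return 0.240 * t_f + w * (1061.0 + 0.444 * t_f)

phoenix = moist_air_enthalpy(105.0, 0.15)    # hot and dry
minnesota = moist_air_enthalpy(85.0, 0.70)   # cooler but humid
print(f"Phoenix   105F/15%RH: {phoenix:.1f} BTU/lb dry air")
print(f"Minnesota  85F/70%RH: {minnesota:.1f} BTU/lb dry air")
```

The cooler, more humid sample carries the higher energy, matching the comment’s point; the exact figures depend on which basis and correlations you pick.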
Surface temperatures on land are actually ‘near surface’ air temperatures 1 to 2 meters above ground. The energy flow has already started its trek toward space. Ocean temperatures, especially the ARGO floats, are more truly surface or sub surface measures (before the energy moves to the air). Heat Capacity (used to determine energy content) of liquid water does not change much with temperature, so ‘averaging’ warm and cold water is a smaller error than for dry vs humid air. Which is why OHC, Ocean Heat Content, has been suggested to be a better measure of the Earth’s warming or cooling. And finally, because liquid water has a much higher Heat Capacity than air, when energy moves from the ocean to the air (as in an El Nino) a temperature change in the liquid gives rise to a larger temperature change in the air. So again, an Anomaly that averages land surface with ocean temps is another ‘apples to bananas’ comparison – both fruit, but different texture.
Radiant Energy Transfer
Energy transfer from Earth toward space begins at the true surface, the dirt, grass, pavement, etc. On a clear sunny day, the surface temperature of an asphalt parking lot can be much higher than the air above it (the measured air temperature is then another proxy of the surface). Radiant energy transfer from the surface toward space is proportional to the absolute temperature to the 4th power (T^4). As the average anomaly temperature changes linearly, the energy transfer changes to the 4th power. An anomaly that averages a 5 degC change in the Sahara with a winter time change in Siberia isn’t telling the full energy story.
I have toyed with the idea of an ‘anomaly correction’ for radiant effect, but have not actually worked it past the concept stage. The idea would be to take each location, adjust for Radiant Potential (temp to the 4th power), then compute a Radiant Anomaly on the transformed temperatures. The Radiant Anomaly might then let us compare the Sahara to Siberia in terms of the surface ability to shed heat. Sort of like the ACE energy metric for hurricanes, but applied to surface temperature.
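A rough sketch of what such a ‘Radiant Anomaly’ transform would capture, assuming simple black-body behavior and illustrative (made-up) base temperatures for the two regions:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def emittance(t_k):
    """Black-body radiant emittance (W/m^2) at absolute temperature t_k."""
    return SIGMA * t_k ** 4

# Extra power shed by the same +1 K anomaly at two assumed base temperatures:
sahara = emittance(321.0) - emittance(320.0)    # hot desert surface (assumed 320 K)
siberia = emittance(241.0) - emittance(240.0)   # cold winter surface (assumed 240 K)
print(f"+1 K at 320 K sheds an extra {sahara:.1f} W/m^2")
print(f"+1 K at 240 K sheds an extra {siberia:.1f} W/m^2")
```

The same one-degree anomaly represents more than twice the change in radiated power at the hot site, which is exactly the information a plain temperature anomaly throws away.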
I like the idea of a ‘heat anomaly’… Then George Smith adds some more to the physics:
George E. Smith says:
September 23, 2010 at 9:51 am
“”” Dave in Delaware says:
September 23, 2010 at 5:43 am
Thoughts on Anomaly Temperatures
Temperature is a PROXY for Energy. “””
Dave, I have for some considerable time pointed out that even if it were possible to measure the true average global (surface) Temperature (which it isn’t), we still would know nothing about the energy transfers; and the roughly black body Stefan-Boltzmann like fourth power relationship is one part of that problem.
It is a trivial problem in calculus and trigonometry to prove that if the Temperature goes through any arbitrary, single-valued continuous function (of time) cycle whose average value is Tzero, then the average value of the instantaneous fourth power of that temperature function ALWAYS corresponds to an effective radiating temperature of Tzero + deltaT, with deltaT greater than zero.
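A quick numerical check of that claim (a sketch assuming a sinusoidal cycle; the effect holds for any non-constant cycle by Jensen’s inequality):

```python
import math

# A temperature cycle with average Tzero (sinusoidal swing assumed for the demo):
Tzero, amp, N = 288.0, 10.0, 10000
temps = [Tzero + amp * math.sin(2.0 * math.pi * k / N) for k in range(N)]

mean_T = sum(temps) / N                   # comes out as Tzero
mean_T4 = sum(t ** 4 for t in temps) / N  # average of the instantaneous T^4
T_eff = mean_T4 ** 0.25                   # effective radiating temperature

print(f"mean T = {mean_T:.2f} K, effective radiating T = {T_eff:.2f} K")
```

The effective radiating temperature lands above Tzero (about a quarter degree for a 10 K swing): that excess is the deltaT the comment refers to.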
Now a lot of folks love to point out that the earth surface is NOT a “Black Body”, so they argue that the fourth power thing is not valid.
Well the black body assumption does set a maximum for the amount of radiant cooling that can occur; and many surfaces have a sufficiently constant spectral Radiant emissivity over the range of LWIR wavelengths that can be present in the thermal radiation from that surface at prevailing Temperatures; that simply applying some average emissivity to the BB calculated value from the S-B formula is a respectable value for the actual surface radiant emittance.
Actually the deep oceans behave like a fairly good black body absorber; well a grey body to be pedantic, since the surface Fresnel reflectance is about 2% (normal) over a fairly wide spectral range; and certainly over the solar spectrum range; and perhaps 3% over the full range of incidence angles. So the deep oceans would be fairly well characterized as a Grey body with 0.97 total emissivity.
Actual LWIR reflectances at typical ocean surface temperatures are not quite so easy to figure, but I would expect the BB (with emissivity of 0.97) would be quite close to reality for the oceans; which after all are 70% of the total surface.
Employing (with caution) BB radiation theory to the problem also gives us some other inputs to the Green House Gas absorption of surface emitted LWIR thermal radiation.
If the surface emissions are in fact roughly black body like, then it is known that the spectral radiant emittance at the spectral peak of that emission varies as the FIFTH power of the Temperature, and NOT the FOURTH; and then the Wien Displacement law moves that peak to shorter wavelengths (~3000/T microns), so the higher the surface Temperature, the further down the thermal radiation tail the CO2 absorption band (15 micron) is. The total captured energy still goes up with Temperature; but the fraction of the emission spectrum energy that is captured goes down; and more of it escapes the atmosphere. The spectral peak, which is about 10.1 microns for the global average Temperature of 288K (they claim), will move further into the atmospheric window also.
On the other hand for colder regions the Wien Displacement moves the thermal radiation peak closer to the CO2 15 micron band; but the surface Total radiant emittance goes down severely for the colder regions.
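The Wien displacement numbers above are easy to check with the ~2898/T micron rule (the surface temperatures below are illustrative assumptions):

```python
def wien_peak_um(t_k):
    """Wien displacement law: wavelength of peak spectral emittance, microns."""
    return 2898.0 / t_k

for name, t_k in [("tropical desert surface", 320.0),
                  ("global mean surface", 288.0),
                  ("polar winter surface", 240.0)]:
    peak = wien_peak_um(t_k)
    print(f"{name} ({t_k:.0f} K): peak at {peak:.2f} um; "
          f"the 15 um CO2 band sits at {15.0 / peak:.2f}x the peak")
```

At 288 K the peak is near the 10.1 microns the comment cites, and the colder the surface, the closer the peak creeps toward the 15 micron CO2 band.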
All of which supports my contention that it is the hottest, driest, mid-day tropical desert regions that do most of the real radiant cooling (land). The polar snow and ice regions are quite ineffective in cooling the planet; but if the arctic ocean should become ice free, then the north polar region would become a better cooler for that part of the planet.
And of course although this note is all about radiant cooling; we never lose sight of the fact that the ocean regions do a heck of a lot of cooling via the evaporation/convection mechanism, transporting latent heat into the upper atmosphere.
And he brings up some issues with the sampling density:
George E. Smith says:
September 23, 2010 at 10:27 am
“”” Al Tekhasski says:
September 22, 2010 at 4:45 pm
evanmjones wrote: “So we would need over 120,000 stations? That’s a lot of stations.”
Sure it is. But I am afraid you might need more. “””
Well Al you must be new around here. If you had been visiting here more often, you would know that the general theory of sampled data systems is apparently quite unknown in “Climate Science” Institutions.
So your Nyquist Sampling Theorem is trumped by their Statistical Analysis, and probably the Central Limit Theorem as well. So long as they get the right r^2 value and proper trend line (with a slope error no more than +/-50%; or a 3:1 range) they don’t have to worry about undersampling.
But they are very good at what they call oversampling; which is creating a whole raft of phony values that nobody measured, on their computer. They can make as many grid points as they like; limited only by the size of the supercomputer that the tax payers bought for them. Well they don’t actually measure anything real at all those oversampled grid points. For some reason their computers are not able to go back and predict; excuse me, that’s project, the actual values that would have been read at the handful of real actual global measuring stations. But they can interpolate something fierce.
So in climate science it is legitimate to core bore a single tree; and from those small sectors of that one dimensional sample of the three dimensional tree, in an even bigger forest, you can describe the complete climate history as to Temperature, wind, moisture, sunlight, humidity (maybe I already said that) and anything else you want to know; well, but only for the age of the tree. And you can determine the age of the tree by doing a radiocarbon 14C assay on some of those pieces of the extracted core. There might be other ways to tell the age of the tree and date the climate conditions; but they probably aren’t as reliable as 14C assays.
And due to the coherence of anomalies, it is ok to measure the temperature in downtown San Jose California; and apply that Temperature value to the small town of Loreto about 1/2 way down the Baja, on the Sea of Cortez.
So they used to monitor the weather and climate of the entire arctic (north of +60 degrees) with just 12 total weather stations; now they have some totally huge number like 70-80.
So I doubt that anybody is going to heed your request for 100,000 sampling locations.
And by the way; just in case you haven’t noticed, these climate reporting stations get their daily Temperature from a min-max temperature reading; which gives you two samples during each 24 hour cycle; but since the diurnal temperature variation is not pure sinusoidal, there must be at least a second harmonic 12 hour periodic component present, so they fail the Nyquist criterion for the Time variable by at least a factor of two; which means that the aliasing noise makes even the daily Temperature average value unrecoverable. So the spatial aliasing noise is just superfluous; which is why they don’t care about it.
But it is good to see somebody else with some understanding of sampled data systems.
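The min/max aliasing point can be demonstrated numerically. A sketch with an assumed diurnal shape that includes a second-harmonic component; the true daily mean is 15 C by construction, but (Tmin+Tmax)/2 misses it:

```python
import math

def diurnal(t_hours):
    """An assumed diurnal cycle with a second-harmonic component.
    The true daily mean is 15.0 C by construction."""
    w = 2.0 * math.pi / 24.0
    return 15.0 + 8.0 * math.sin(w * t_hours) + 3.0 * math.sin(2.0 * w * t_hours + 1.0)

samples = [diurnal(k / 10.0) for k in range(240)]   # dense sampling over 24 h
true_mean = sum(samples) / len(samples)
minmax_mean = (min(samples) + max(samples)) / 2.0   # the (Tmin+Tmax)/2 "daily mean"

print(f"true daily mean:    {true_mean:.2f} C")
print(f"(Tmin+Tmax)/2 gives {minmax_mean:.2f} C")
```

Well over a degree of error in a single day’s mean, from nothing but the second harmonic; the exact bias depends on the assumed amplitudes and phase.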
He was responding to the same comment I responded to here (followed by a bit of give and take with Ben D.):
Al Tekhasski says:
evanmjones wrote: “So we would need over 120,000 stations? That’s a lot of stations.”
Sure it is. But I am afraid you might need more. The example of stations 50km apart having opposite long-term trends means that we don’t know what trend is in between, and what is around in the same proximity. […]
More, we still have no idea if the 25x25km is enough to capture complexities of local micro-climates, so be prepared for another half-scale, which would quadruple the number of necessary stations. Without this uniform sampling grid of data it is not serious to discuss any mathematics of subsets or else. This is what physical science says. Sorry.
Al, you may like this article where I look at some of the mathematical issues of sampling surface temperature. As the topography is fractal (mountains, coastlines, etc.) the temperatures from it ought to also be fractal. A black pebble next to a snow melt stream will have quite divergent temperatures… Measuring a fractal gives different answers based on the ‘size ruler’ you use. And we’re measuring ‘climate change’ with a ruler whose size constantly changes over time.
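The ‘size ruler’ point can be illustrated with a toy fractal-like series (a random walk, an assumption standing in for real topography): the measured ‘length’ shrinks as the ruler grows:

```python
import random

random.seed(0)                       # deterministic toy 'terrain'
walk = [0.0]
for _ in range(1024):
    walk.append(walk[-1] + random.choice([-1.0, 1.0]))

def ruler_length(series, stride):
    """Total variation measured by stepping through with a given ruler size."""
    pts = series[::stride]
    return sum(abs(b - a) for a, b in zip(pts, pts[1:]))

for stride in (1, 4, 16):
    print(f"ruler size {stride:2d}: measured 'length' {ruler_length(walk, stride):.0f}")
```

Same series, three different answers, purely from the ruler; a measuring network whose station density changes over the decades has the same property.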
Ben D. says: I will not say that the anomaly approach is incorrect, but there are issues with it as well. As far as I can tell, it’s the best method known right now, but it is not perfect. To argue that it’s the end-all is kind of ignoring the issues that it also brings up.
Very well put. Also, there are many kinds of anomaly method and they have many different modes of failure. One of the simplest to ‘get’ is that of the splice artifact.
It doesn’t much matter if you use anomalies or not, when you take a station that warms 1 C as it grows, then in a later decade add a new station that warms 1 C as it grows, then in the final decade swap to a third station that warms 1 C as it grows. Tack them all together and you get a 3 C “warming trend”. It matters not if this is done via averaging, direct splicing, or “homogenizing” using each to “adjust” neighbors.
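That splice artifact is easy to reproduce with toy data. A sketch with three hypothetical stations, each warming 1 C over its own decade from purely local effects, while the real regional climate stays flat:

```python
# Three hypothetical stations, each observed for one decade, each drifting
# up 1 C over its record from purely local effects (e.g. growth around the
# site).  The real regional climate is flat the whole time.
def station_segment(start_level, n_years=10):
    """One station's record: a 1 C local drift spread over its years."""
    return [start_level + 1.0 * y / (n_years - 1) for y in range(n_years)]

seg1 = station_segment(10.0)        # decade 1, ends 1 C above where it began
seg2 = station_segment(seg1[-1])    # decade 2, spliced on where seg1 ended
seg3 = station_segment(seg2[-1])    # decade 3, spliced on where seg2 ended
spliced = seg1 + seg2 + seg3

trend = spliced[-1] - spliced[0]
print(f"apparent 'warming' in the spliced record: {trend:.1f} C")
```

Three spliced 1 C local drifts read as a 3 C regional trend, though no site’s climate warmed at all.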
The temperature series codes like GIStemp are FULL of that. And anomalies make it easier to have happen rather than harder. (No station has to reach unheard of record highs and call attention to itself…) I saw this effect in the region near Marble Bar Australia, where an all-time record was set back near the ’30s and never exceeded, yet the ‘region’ has a ‘warming trend’.
So that’s why I periodically anchor myself back in ‘real temperatures’ and why I start by looking at the profile of ‘real temperatures’. I’ve taken a great deal of flak from folks asserting that it is sheer stupidity to do that, and they are wrong. It is only stupid to think that they show the temperature trend accurately. Just as it is stupid to think that averaged anomalies show the accurate temperature trend, if for no other reason than ‘splice artifacts’. IMHO, the ‘homogenizing’ done in the temperature series codes makes them a target-rich environment for splice artifacts.
But wait, there is more…
To clarify, anomalies are based on the area-weighted global average,
For one kind of anomaly…
The climate codes use a “grid / box to grid / box” anomaly. They have one set of thermometers in the box at the baseline and a different set now. This is horridly broken. It would be like me saying cars have gotten faster because my home “grid / box” had an average VW Fastback in 1970 and has an average Mercedes SL now, so the “max speed anomaly” has risen by 55 mph.
Yes, they take steps to mitigate the problem. But mitigation is not perfection. We are basically betting the global economy on the perfection of their mitigation and coding (and their coding is fairly sloppy.)
So I started by looking at plain temperatures, and found things that were not in keeping with AGW and CO2 theories. Then moved on to anomalies. But I wanted a more controlled beast. So I do anomalies only “self to self” for a given thermometer.
This brings up the issue of ‘baseline’. But you don’t need a baseline to do anomalies. ONE kind of anomaly is based on a common period of time, the baseline. But as Steve Mosher points out, you need a common time period in your baseline. And many thermometers don’t have it. So a whole load of ‘homogenizing’ and ‘splicing’ (that GISS calls joining) and box to box and infill and… well, junk… gets done to try to make a complete enough record to use a ‘baseline’ method. And IMHO it adds too much error to be usable to 1/10 C. But you can use non-baseline anomalies, such as First Differences. And I use one like that (but fixing an issue in First Differences that makes it fail on data with lots of gaps in it).
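A minimal sketch of a First Differences style combination (not GIStemp’s code, nor the exact gap-fixed variant used here; just the basic idea): each station contributes only its own year-over-year changes, so no common baseline period is needed, and a gap simply contributes no difference:

```python
def first_differences(series):
    """Year-over-year deltas for one station; None marks a missing year."""
    return [(b - a) if a is not None and b is not None else None
            for a, b in zip(series, series[1:])]

def combined_anomaly(stations):
    """Average each year's deltas across stations, then accumulate."""
    diffs = [first_differences(s) for s in stations]
    out, level = [0.0], 0.0
    for year in range(len(stations[0]) - 1):
        vals = [d[year] for d in diffs if d[year] is not None]
        level += sum(vals) / len(vals) if vals else 0.0
        out.append(level)
    return out

stations = [
    [10.0, 10.2, 10.1, None, 10.4],   # a gap simply drops out of the average
    [5.0, 5.1, 5.3, 5.2, 5.3],
]
print(combined_anomaly(stations))
```

Each station is only ever compared “self to self”, so no homogenizing or infill is needed to establish a common baseline window.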
To put it simply, you can use this approach in the data above and it will probably change what you see simply because of the transformation of the data so to speak.
BINGO! And that is what the temperature data codes like GIStemp do. They change the data. So we end up with the past cooling by whole degrees…
But I must also interject here and say one thing: If everyone uses the same method and that is ALL they use, how would we know this method is actually “correct”? I might be playing devil’s advocate there, but at some point I will take a shot at adjusting the data myself, and the first thing I would do is NOT use the anomaly system.
You have it exactly right. The three major labs all use the same basic approach, data, and methods with all the same flaws. Any attempt to look at it from a different angle gets rocks thrown at you (though it does point up their flaws…). And yes, starting from the basic temperature data gives you the context to know when something is straying from reality.
And a whole lot more in:
I know, I ought to have taken the time to clean this up into a new distinct summary work, but I’m too rushed right now.
At any rate, at least now I’ve saved some of this where I can find it again ;-)
FWIW, it is my belief that it is the interaction of these heat issues with the 4th power radiance that puts the hard lid on our Holocene temps in the top graph, and that allows the plummet back into a glacial era on the downside. There is a climate tipping point, and as the graph shows, it’s all downhill from here…