Starting with temperatures
Temperatures are a point property: a single value at one instant, at one location. An instantaneous event in time and space for a single property. Temperature is also an intensive property, so the average of two temperatures is meaningless to heat or energy flow. (You must know the masses and specific heats to have an extensive property with which to work.) Yet the first thing done with the temperatures is to average the min and max to get a "mean", without even allowing for the non-sinusoidal nature of temperature change over the day.
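The min/max problem is easy to show numerically. Here is a toy sketch (the curve shape and numbers are invented for illustration, not real station data): for an asymmetric diurnal curve, the (min + max) / 2 "mean" sits well above the true time-averaged temperature.

```python
import numpy as np

# Hypothetical asymmetric daily temperature curve: a sharp afternoon
# peak at 15:00 with a fast warm-up and a slow overnight decay.
# Deliberately NOT a sine wave, like real diurnal cycles.
hours = np.linspace(0, 24, 24 * 60, endpoint=False)  # one sample per minute
temps = 5 + 15 * np.exp(-np.abs(hours - 15) / np.where(hours < 15, 3.0, 8.0))

minmax_mean = (temps.min() + temps.max()) / 2  # what the temperature record uses
true_mean = temps.mean()                        # the actual time integral / 24 h

print(minmax_mean, true_mean)  # the min/max "mean" runs warm by over 2 degrees
```

The more skewed the daily curve, the larger the gap; and the skew itself changes with season, cloud cover, and site, so the error is not even a constant offset.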
Weather – More than Temperatures
Weather happens at a point in time, and a place in space, but weather events are spread over many properties.
It may be raining one minute and not the next, or raining on one side of the street and sunny on the other. A single snowflake can fall… But weather incorporates temperatures. It also heavily incorporates the water cycles of the planet: rain, snow, hail, sleet, dry deserts and saturated dew points. And wind: the movement of the air, fast or slow, vertical or horizontal, in any direction, in straight lines or in circles. Density changes, and even changes in composition (such as particulates). Upslope and downslope winds can drastically change temperatures with no change in the heat content of the air.
Chinook winds have been observed to raise winter temperature, often from below −20°C (−4°F) to as high as 10°C to 20°C (50°F to 68°F) for a few hours or days.
So for each area on our climate surface, we ought to be taking the weather and integrating it.
But in both space and time. Each spot on our map ought to have a sample series under it, so that we can calculate the surface that flows over all the spots on the map, and over all the time of study. The accuracy of our integral depends critically on the number and spacing of those samples. Yet for most of space and time we have no samples, and instead stretch the few we have (making for a very large spot under the curve… and a very, very poor model of the curved surface).
What is “climate”?
To get climate, we take those instances of weather and we integrate them over time and over space.
The “climate scientists” seem to like using the 30 year average of weather as climate, but that is a very broken definition. There are known quasi-periodic functions in weather on roughly 60 year scales, such as the PDO, so any 30 year average will be constantly finding bogus ‘trends’ as that cycle turns. A better definition of climate is the one used by geologists and geographers. See: http://en.wikipedia.org/wiki/Köppen_climate_classification for a review of one of the systems.
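The 30-year-window problem is easy to demonstrate. A minimal sketch, using a pure synthetic 60-year sine wave (not real PDO data) with no long-term trend whatsoever: 30-year windows taken at different phases of the cycle "find" strong trends in both directions.

```python
import numpy as np

# A pure 60-year cycle with zero long-term trend,
# a toy stand-in for PDO-like quasi-periodicity.
years = np.arange(0, 120)
temp = np.sin(2 * np.pi * years / 60)

def trend_per_decade(y0):
    """OLS slope over a 30-year window starting at year y0, per decade."""
    w = slice(y0, y0 + 30)
    return np.polyfit(years[w], temp[w], 1)[0] * 10

# Windows on the falling vs rising halves of the cycle:
print(trend_per_decade(15), trend_per_decade(45))
```

Both windows report a strong "trend" per decade, one cooling and one warming, from a signal whose true long-term trend is exactly zero.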
Such zones do not change over a 30 year period (the Sahara has been a desert for a while now, though many thousands of years ago it was wet). So “climate science” is broken in its core definitions… The Mediterranean has been a “Mediterranean Climate Zone” for thousands of years, even during the Little Ice Age and the Roman Optimum / Warm Period, and will continue to be for thousands of years to come. But that is what we are stuck with as a “climate science” definition for the moment. Just remember that real climate is based on a very, very long time base for the integration of weather. Yet “climate scientists” use 30 years instead of 3,000 years.
A “Desert” may have snow or a drenching rain, but over a significant area it has insufficient rain to exceed evaporation. A “Rain Forest” can have a dead dry day, but integrated over time it rains far more than not, and over its whole area. A desert can have a pond in it and a swamp can have a dry rock.
So to find the climate of an area, we must take the weather and integrate it over time and over space. Preferably a very very long time.
But then, to get “climate change” we want the first derivative of climate over time…
So if we integrate weather (over too short a time base) then take the derivative of that and find changes over time we have found exactly what again? Have we not found simply that “weather changes”? Both on large scales and on small?
But it’s worse than that!
Weather is chaotic. What is the result of doing an integration on a chaotic domain? Is that not itself a chaotic result? And when we look at climate, we do find it to be chaotic. Just on a longer time scale.
In this chart you can see how much things change during a glacial period and in a fairly chaotic way. The present is that small surprising shelf of stability on the far left. It includes all the ‘extreme’ changes of weather during such periods as the Roman Optimum, the Little Ice Age, and the present Modern Optimum. Our “chaos” is astounding stability in comparison to longer term climate states.
There are Ice Epochs that come and go, snowball earth and tropical jungle dinosaur earth. The present Ice Age earth, with our recent Interglacial Optimum anomaly, but not quite the same as other glacials and interglacials. (There is that nagging problem of the shift from a 40,000 year glacial periodicity to a 100,000 year periodicity, for example, and that the periods are not quite predictable…) Or look during the glaciations, and you find jagged rises and falls of temperatures and ice levels. Climate is every bit as chaotic as weather, just on a different time scale.
All in all, it looks to me like a Fractal function. There are hot spots and cold spots, and hot times and cold times, and sometimes they are mixed in odd ways. But with an overall pattern that LOOKS sort of regular, with repeating bits, but never quite the same. Very much like the patterns you see in coastlines and mountain ranges, but with peaks and valleys of temperature and rainfall rather than height or ‘ruggedness’ of the shore.
So what is the integral of a fractal function? And does not the slope of a tangent to a fractal surface (derivative) depend entirely on where you measure it AND the size of the ruler you use?
A coastline is of indeterminate length. It is one length if measured with a yard stick, a different length if measured with a millimeter ruler, and yet another length if measured with a speed boat going from one port to another as a sort of ‘unit ruler’.
Thus, the coastline paradox.
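The coastline paradox can be reproduced directly. A minimal sketch using the Koch curve (a standard mathematical fractal, standing in for a real coastline): walk along the curve with rulers of different lengths and see the measured "coastline" grow as the ruler shrinks.

```python
import numpy as np

def koch(p, q, depth):
    """Recursively build the Koch-curve points from p to q (start points only)."""
    if depth == 0:
        return [np.asarray(p, float)]
    p, q = np.asarray(p, float), np.asarray(q, float)
    d = (q - p) / 3
    a, b = p + d, p + 2 * d
    # apex of the equilateral bump: rotate the middle third by 60 degrees
    rot = np.array([[0.5, -np.sqrt(3) / 2], [np.sqrt(3) / 2, 0.5]])
    peak = a + rot @ d
    pts = []
    for s, e in [(p, a), (a, peak), (peak, b), (b, q)]:
        pts += koch(s, e, depth - 1)
    return pts

curve = np.array(koch([0, 0], [1, 0], 7) + [np.array([1.0, 0.0])])

def ruler_length(pts, r):
    """Walk the curve with a fixed ruler of length r; return the measured length."""
    anchor, total = pts[0], 0.0
    for p in pts[1:]:
        step = np.linalg.norm(p - anchor)
        if step >= r:
            total += step
            anchor = p
    return total

for r in (0.3, 0.1, 0.01):
    print(r, ruler_length(curve, r))  # the shorter the ruler, the longer the "coast"
```

There is no single "true length" to converge to; the answer depends on the ruler, which is exactly the problem with measuring a fractal field at arbitrary resolutions.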
So we take our fractal weather and measure it at selected odd points, then measure it at other selected odd points in different times, then integrate that, then take the derivative of the integral and that slope means exactly what again?
And we keep changing the size of our “ruler”… (Station drops matter. Especially in a fractal domain).
Take a moment to think about that, please. It is a very important point. We keep changing the size of the ruler being used to measure a fractal surface. Then finding that the result changes.
It Gets Worse
But it is even worse than that. Weather is composed of liquid water, water vapor, solid water, wind and mass transfers, air density changes. Yet we choose to use only ONE of those, temperature, as a sort of broken proxy for weather. Vast quantities of heat can move with no change of temperature, yet it is temperature we measure.
So now we are not actually measuring weather and integrating it, we are measuring only one aspect of weather and ignoring things like dew point, humidity changes, tons of snowfall as water freezes and more tons of snow melt at constant temperatures. And we take this one broken proxy for weather and treat it as the foundation of “Climate Change” and look for that proxy to tell us what is going on in climate.
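The "heat moves with no temperature change" point is just textbook latent heat. A quick check with the standard physical constants: melting one kilogram of ice at a constant 0 °C absorbs about as much energy as warming that same kilogram of liquid water by roughly 80 °C.

```python
# Standard textbook constants for water:
L_FUSION = 334_000   # J/kg, latent heat of fusion
C_WATER = 4186       # J/(kg*K), specific heat of liquid water

melt_energy = 1.0 * L_FUSION        # melt 1 kg of ice; temperature stays at 0 C
warm_per_degree = 1.0 * C_WATER     # warm 1 kg of water by 1 C

# Degrees of warming "hidden" inside one phase change, invisible to a thermometer:
print(melt_energy / warm_per_degree)
```

A thermometer sitting over melting snow records nothing while all that energy flows, which is why a temperature-only proxy misses so much of the heat budget.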
Yet even here there is more breakage.
The temperatures are measured on a grid of cells that is far too sparse to capture the true state of the landscape. Look at that map of the Southwest again. California had 4 stations in GHCN for 2009, all on the beach, with 3 in the LA basin and one in San Francisco. Not nearly enough to capture the texture of the state. In 2009, GHCN has about 1200 stations for the whole globe. Yet ‘microclimates’ can be dramatically divergent over a range of miles, or even yards. (Look at those hot spots in Texas cities, for example. Or, as temperatures really are fractal, over millimeters… I’ve had warm black rock in the sun next to cold snow… and frozen snowflakes on a warm tongue…)
We make the assumption that the air a few feet over the ground will somehow integrate this fractal property over a spatial grid measured in hundreds (or sometimes thousands) of miles or kilometers. And we know that it cannot, as it cannot even make the snow and rock temperatures match a few feet or meters apart. Stations located at airports (as most are now) can read 5 °F or more warmer than nearby well sited stations, so the air is not doing a very good job of averaging, nor of integration over space.
Does that look uniform to you?
So we take this broken spatial integration of the point property of temperature, and treat it as a proxy for the point weather. Then, to make the integration over time, we average the data. We take a min / max that may or may not have consistent precision, then use a month’s worth of each to make an average temperature ‘mean’ for the month. In some cases, missing data are ‘made up’ by using ‘nearby’ stations, or averages of them, to create the missing items. This averaging of nearby ASOS stations is called “quality control”. But quality of what? Then programs like GIStemp take those ‘monthly means’ and, via more averaging functions, make ‘homogenizing’ adjustments: filling in more missing data, blending some locations with others, and generally smearing the data around to where there are none.
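To make the infill issue concrete, here is a deliberately simplified sketch (invented station names and values, and NOT GIStemp's actual algorithm, just the neighbor-average idea): a missing reading at a station with a distinct microclimate gets filled with the average of neighbors that do not share that microclimate.

```python
# Hypothetical stations; values and names are invented for illustration.
readings = {
    "coastal_a": 15.0,
    "coastal_b": 16.0,
    "inland_valley": None,  # missing this month
}
true_valley = 8.0  # what the valley station would actually have read

# Naive neighbor-average infill for the missing station:
neighbors = [v for v in readings.values() if v is not None]
filled = sum(neighbors) / len(neighbors)

print(filled, true_valley, filled - true_valley)  # the cool microclimate is erased
```

The filled value is a coastal temperature wearing a valley label; every downstream average now carries that smearing forward as if it were data.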
Does sporadic semi-random ironing flat of a poorly sampled fractal surface make for a good integral?
Does it get better if you constantly change what gets ironed flat and what is left intact?
Averaging is a pretty poor way to get the integral over space and time of a highly variable and chaotic natural process, even if done on a proxy for that process (or especially so?), and even if done on the variance from a baseline instead of on the actual datum. Add in the fact that the particular geographic points being averaged and blended, in any one ‘step’ of our ersatz integral, change constantly (and in their own chaotic way), and I’m beginning to distrust that the ‘answer’ really means anything. Anything at all.
But wait, there’s more….
From Ersatz Weather Proxy to Climate Change
These poorly measured, averaged, homogenized, and blended temperatures are then turned into “Grid Box” values, and these are compared over long periods of time. A 30 year “baseline” value is found (via more averaging…) and the present yearly value is calculated (via more averaging but using different stations in different geographic locations) and these two are compared to create a “Change Over Time” proxy for the derivative of climate: The Change of Climate Over Time: dC/dt.
But is there really any relationship between dC/dt and the average of temperatures, re-averaged, offset, blended, and summed over time, then differenced from a different average of summed, re-averaged, offset, and blended temperatures?
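A minimal sketch of the changing-station-mix problem (invented station names and values): every individual station below holds a perfectly constant temperature over time, yet the grid-box average "warms" simply because cool stations drop out of the mix. (Real anomaly methods offset each station against its own baseline, which reduces this particular effect but does not remove the dependence on which stations are in the mix at each step.)

```python
# Invented stations. Every station's temperature is CONSTANT over time.
baseline_stations = {"mountain": 5.0, "valley": 8.0, "coast": 15.0}
present_stations = {"coast": 15.0, "airport": 17.0}  # mountain & valley dropped out

baseline = sum(baseline_stations.values()) / len(baseline_stations)
present = sum(present_stations.values()) / len(present_stations)

# Apparent "warming" with zero actual change at any location:
print(present - baseline)
```

The "trend" here is pure station-selection artifact; nothing in the physical world changed at all.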
We’re missing all of the water cycle. All of the biological cycle of plant growth and death, evapotranspiration, all of the ocean turnover and heat storage / release processes, all of the ice cap cycles. But we are getting land use changes unrelated to climate. Is a 40 °C desert the same as a 40 °C rain forest? A 40 °C airport in the sun? Is a 0 °C desert the same as a 0 °C ice cap?
They are treated as the same in the way “climate change” is calculated in programs like GIStemp.
No PDO, no AMO, no Gulf Stream. No hurricanes moving megatons of energy to the top of the sky (hurricanes embody as much energy as atom bombs, but we ignore that, and all the mass flow of water and air…)
But we have this proxy for weather, calculated from a bad method applied to incomplete data in a sparse field, using a method far removed from integration; then we use an equally bad way to find a poor derivative of it, and that is supposed to be a proxy for dC/dt?
What purpose was served by first integrating then taking a derivative anyway? All you will do is build up and amplify errors and incompatibilities from the two odd methods (proxy methods) used. Would it not be more honest to admit that averaging averages of offsets of averages, then differencing them, makes for a very poor first derivative of anything? And if you are going to do that, why integrate over both space and time first, then take a derivative over only time? Why not just integrate over space and be done?
And it means exactly what again? And to 1/100 C of meaning?
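On the 1/100 C question, a quick Monte Carlo sketch (invented noise level, for illustration): take a signal whose true change is exactly zero, add modest per-measurement noise, average two 30-sample periods, and difference them. The spread of the resulting "change" dwarfs a hundredth of a degree.

```python
import numpy as np

rng = np.random.default_rng(1)

sigma = 0.5  # assumed per-measurement noise in degrees (illustrative)
diffs = []
for _ in range(10_000):
    baseline = rng.normal(0, sigma, 30).mean()  # 30-sample averaged noise
    present = rng.normal(0, sigma, 30).mean()
    diffs.append(present - baseline)            # true change is exactly zero

diffs = np.array(diffs)
# Variances of the two averages add, so the difference spreads as sigma*sqrt(2/30):
print(diffs.std(), sigma * np.sqrt(2 / 30))
```

With this noise level the difference of averages scatters by roughly 0.13 degrees around zero, more than ten times the 0.01 C precision being claimed, and that is before any of the station-mix and homogenizing issues above.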
Looked at in this light, the whole process of using temperatures to find “climate change” is just so ersatz and lacking in a rational philosophical basis (in the sense of a ‘philosophy of mathematics’). It’s just an arithmetic game, clumsily constructed, with large error bands and no relationship to actual climate.
I wouldn’t even make stew with that level of circumlocution and poor choice of ingredients and methods.
For purposes of illustration, here is a Mandelbrot or two to contemplate:
And at another scale it looks very similar, yet far different at the same time.