A 20% Volatility Effect

In yesterday’s posting, we saw how an excursion of 80% of record ranges would provide more than enough tracking error to cause biased analysis. (That is, tracking error of the ‘excursion’ average from a ‘full average’ of temperatures in a city, large enough that the analysis done by folks using The Reference Station Method, or comparing a “Grid Box” in one time to a box made from different thermometers in another time, could easily have large errors.)

http://chiefio.wordpress.com/2010/08/04/smiths-volatility-surmise/

But what if the excursion is smaller? I mean, really, how likely is it that you could ever get to 80% of a record?

OK, here are the graphs for a 20% excursion regime; just a little bit away from the overall average. I’m going to start with the graphs that show the ‘tracking error’ as yellow and magenta, with the overall average as green and the 20%-of-record excursion averages just to each side of it. To understand what these graphs are showing, you will need to read the prior posting. Here I’m just going to assume you’ve done that.
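For the code-minded, here is a minimal sketch of one way such excursion series and their tracking error can be built. This is not the spreadsheet behind the actual graphs, and every monthly figure in it is a made-up placeholder: take each month’s mean and record extremes, move the ‘excursion’ value 20% of the way from the mean toward the record, and see how far the gap between two stations drifts from their normal offset.

```python
# Minimal sketch, NOT the original spreadsheet: one way to build
# "20% of record range" excursion averages and the tracking error
# between a low-volatility and a high-volatility station.
# All monthly figures below are hypothetical placeholders.

FRACTION = 0.20  # how far toward the record the excursion reaches

def excursions(mean, rec_low, rec_high, frac=FRACTION):
    """Cold-side and warm-side excursion values for one month."""
    cold = mean - frac * (mean - rec_low)
    warm = mean + frac * (rec_high - mean)
    return cold, warm

# (monthly mean, record low, record high) in deg F -- illustrative only
coastal     = [(51, 27, 79), (54, 31, 81), (55, 33, 87)]    # Jan..Mar
high_desert = [(34, -16, 70), (39, -12, 75), (46, -5, 83)]  # Jan..Mar

for month, (a, b) in enumerate(zip(coastal, high_desert), start=1):
    a_cold, a_warm = excursions(*a)
    b_cold, b_warm = excursions(*b)
    normal_offset = a[0] - b[0]                # offset of the plain averages
    cold_error = (a_cold - b_cold) - normal_offset
    warm_error = (a_warm - b_warm) - normal_offset
    print(f"month {month}: cold-phase tracking error {cold_error:+.1f} F, "
          f"warm-phase {warm_error:+.1f} F")
```

The station with the wider record range swings further from its mean for the same fractional excursion, so the gap between the two shifts with the phase; that shift is the tracking error plotted in yellow and magenta.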

San Francisco vs Sacramento

We will start with the comparison that had the weakest power in the prior 80% charts. What happens between these two nearly sea-level cities, separated by just 1/10th of the maximum distance used for data creation with The Reference Station Method?

San Francisco vs Sacramento 20% of Extreme Excursion

Even with a measly 20% excursion, an event that ought to happen with some fair frequency, we have an effect. That “cold phase PDO”, when we have lots of volatile stations, vs the warm phase, when at the peak we move to low-volatility stations, has ‘something to grab on to’ even from these two cities. The yellow and magenta range out to about 10 F each side of 0 at peak, with an average of about +/- 5 F over the whole year. More than enough to create a 1/2 C ‘error’ in the analysis as done by other folks. If they do not allow for the volatility effect, they have no idea what they are measuring with their ‘in-fill’ and “homogenized” fabricated data products.
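As a quick back-of-envelope check on that claim, using only the rough +/- 5 F figure read off the graph:

```python
# Back-of-envelope: how big is a ~5 F average tracking error next to
# a 1/2 C analysis "error"?  Rough figures only.
avg_tracking_error_f = 5.0
avg_tracking_error_c = avg_tracking_error_f * 5 / 9     # ~2.8 C
target_error_c = 0.5
print(f"{avg_tracking_error_f} F ~= {avg_tracking_error_c:.1f} C, "
      f"so a {target_error_c} C error needs only "
      f"{target_error_c / avg_tracking_error_c:.0%} of it to slip through")
```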

Sacramento vs Reno

How about comparing that inland, near-sea-level city with one just a couple of hours’ drive “up slope” in the mountains? I, like millions of others, have driven from Sacramento to Reno for an evening and driven home the same day (or night, or weekend… depending on how fast you gamble your money away 8-)

Sacramento vs Reno 20% of Extreme Excursion

Our annual profile swaps, so that the winters now have the ‘nearly 10 F’ tracking error while the summers are much closer. (They both get darned hot in the summer sun, and heat tends to self-limit at about 105-110 F, while in winter Sacramento can sit under “tule fog” for weeks while Reno can be clear skies in the high desert, radiating away heat down to very, very cold.) But we still have a tracking error that runs to both sides of zero during cold and hot phases.

San Francisco vs Reno

And what will happen when GHCN drops out ‘nearby’ cities like Sacramento so codes like GIStemp need to “reach” further, out to as much as 1000 km, to create missing data via The Reference Station Method? Here we look at San Francisco (where the thermometer survives) vs Reno (well inside the GIStemp data fabrication range).

San Francisco vs Reno 20% of Extreme Excursion

A fairly consistent 10 F of tracking error between either the cold phases or the warm phases and the overall average. So we have a load of high-volatility stations in the GHCN during a cold phase of the PDO that then leave as we move toward the top of a hot phase. And this is substantially ignored. It would be nearly trivial to get 1 F of “warming” out of this tracking error: simply mitigate only 90% of it. Even a slight failure to be perfect is sufficient to “create Global Warming” out of nothing but station volatility changes and natural hot / cold cycles like the PDO, AMO, AO, etc.
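The “mitigate only 90% of it” arithmetic, spelled out with the same rough numbers:

```python
# Rough arithmetic behind "mitigate only 90% of it": a ~10 F tracking
# error corrected 90% still leaves a residual big enough to matter.
tracking_error_f = 10.0            # approximate peak tracking error, deg F
mitigated_fraction = 0.90          # how much of it the adjustments remove
residual_f = tracking_error_f * (1 - mitigated_fraction)   # 1.0 F
residual_c = residual_f * 5 / 9                            # ~0.56 C
print(f"residual: {residual_f:.1f} F (~{residual_c:.2f} C)")
```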

An interesting “dig here” would be to see if the European station arrival / departure dates match the AMO and if other countries match their local oscillations. That kind of ‘perfect accident’ would be a very interesting bit of evidence. An innocent process ought to have calendar dates of thermometer inclusion / drop that are not so ‘tuned’.

But in any case, the use of a simple average for fabricating missing data is clearly subject to significant tracking errors if it does NOT take into account the variations in correlation between the two locations as major meteorological regimes shift. Regime change matters.
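To make that concrete: the real Reference Station Method is more elaborate than this (offsets over overlap periods, distance weighting), but a toy constant-offset fill, with made-up numbers, shows the failure mode being pointed at: an offset learned while one regime holds is silently wrong once the regime shifts.

```python
# Toy illustration (NOT the real RSM): fabricate a missing station from a
# reference station using a fixed offset learned in one regime, then apply
# it in another.  Hypothetical July means, in deg F.

# Offset learned while a "cold phase" regime holds:
ref_cold, target_cold = 65.0, 92.0
learned_offset = target_cold - ref_cold              # +27 F

# Later "warm phase": the volatile inland station moved further from its
# mean than the stable coastal reference did.
ref_warm, target_warm_actual = 67.0, 99.0

target_warm_fabricated = ref_warm + learned_offset   # 94 F
error = target_warm_fabricated - target_warm_actual  # filled-in value too cool
print(f"fabricated {target_warm_fabricated:.0f} F vs actual "
      f"{target_warm_actual:.0f} F -> error {error:+.0f} F")
```

The sign and size of the error depend on which regime the offset was learned in and which station drops out, which is exactly why the volatility of the stations involved cannot be ignored.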



8 Responses to A 20% Volatility Effect

  1. Rob R says:

    This is an issue that the various GHCNv2 global mean temperature INQUISITORS (e.g. Jeff Id, Zeke, Steve Mosher, Chad, David Stockwell) need to be aware of and need to take account of.

    Until they do I will have no more faith in their efforts than in those of NOAA, CRU or GISS. They are all certainly clever and skilled enough. I suspect it will only take one of them to move on this issue and the others will basically have to follow.

  2. E.M.Smith says:

    @Adolfo: I doubt it. They were looking more at natural cycles of the stars. (Not really a disaster prediction. The Codex just shows water pouring from the sky. Most likely just the increased rains and storms from a warm ocean when the air gets cold during a Grand Minimum or Bond Event.)

    One thing to realize about the Derivatives Market: it turns over very quickly. So calling it a $1.5 Quadrillion Market ignores the fact that options expire en masse all the time. Traditionally on the “3rd Friday of each month”, but recently they have come out with options that expire each week. So that market has a load of “bets” that are gone by the end of each year.

    Further, pretty much every bet has someone on the other side of it. One individual may lose his shirt, but the counter party wins a shirt. There is no net destruction to the economy.

    Finally, a huge part of the derivatives market is just letting producers of products REDUCE risk. Hardly a ‘risky thing’. So if I’m a wheat farmer, I can ‘sell wheat futures’ and lock in my wheat price at a point where I know I will make a profit. If Russian wheat burns, I only “lose” the excess gain I could have made had I not ‘hedged’. I still make a ‘normal’ profit. In exchange for that, during years when Russian wheat is in abundance and prices plunge, I still make a ‘normal’ profit thanks to that protective hedge.

    The counter party is some trader who wants to take on that risk. So if some trading desk at a hedge fund wins one quarter, then loses the next, do I really care?

    So most of the metals and minerals mined can be hedged with futures, most all the crops produced each year can be hedged with futures, and folks can buy a “put” under their stock position rather than selling the stock if they are worried. None of that bothers me at all. If the whole derivatives market started to collapse, we would just be pulling the chips off the “hedge” table and returning the bets to the players. It does not cause any wealth to “go away”, it just redistributes it among those folks interested in playing.

    (The use of longer term derivatives based on things like packages of mortgage sausage does concern me. That was a key part of the Mortgage Crisis, and that IS a problem. But mostly because it led folks to believe they had something other than a lottery ticket. Basically, the customer for the products didn’t know what they bought. But that’s a smaller part of the whole derivatives market.)

  3. Adolfo says:

    @E.M.Smith:

    Thanks for your explanation about the “derivatives”. I asked you because I was sure you would give the correct answer.
    However you have just said something I did not know:
    The Codex just shows water pouring from the sky. Most likely just the increased rains and storms from a warm ocean when the air gets cold during a Grand Minimum or Bond Event.
    It reminds me we are in the Aquarius era, the era of that lady pouring down on us a jug full of water…while a Japanese satellite has discovered a GCR source (protons, hydrogen nuclei) in that direction, though this time the solar system won’t cross it exactly… :-)

  4. E.M.Smith says:

    From the wiki:

    The Dresden Codex contains astronomical tables of outstanding accuracy. It is most famous for its Lunar Series and Venus table.[2] The lunar series has intervals correlating with eclipses. The Venus Table correlates with the apparent movements of the planet. Contained in the codex are almanacs, astronomical and astrological tables, and religious references.[2] The specific numen references have to do with a 260 day ritual count divided up in several ways.[5] The Dresden Codex contains predictions for agriculturally-favorable timing[citation needed]. It has information on rainy seasons, floods, illness and medicine. It also seems to show conjunctions of constellations, planets and the Moon.

    Not really a catastrophe prediction, more of a planting planner. We’re supposed to have very wet times on the Yucatan. Not the end of the earth.

    FWIW, at the end of the last 5000 year cycle, the glaciers were about where they are now. There is a geologist digging up PLANT REMAINS preserved under the retreating toe of Peruvian Glaciers that date to exactly that time. One Bond Event ago. Then it became very rainy for a while… and the snows came and the glaciers advanced. Until retreating back to that point just now. Cold and wet is what’s coming.

  5. Steven Mosher says:

    rob,
    none of us use RSM and none of us infill. this is a non issue

  6. E.M.Smith says:

    @Steven Mosher: The data series already have some “infill” done before you get the data. Check out how the “QA” is done on the daily data. There are ‘estimated’ values used… and not flagged by the time it gets to the monthly averages.

    So while YOU may not use RSM or infill, your data may well…

    Further, if you use a ‘grid / box’ anomaly rather than a ‘self to self’ single thermometer anomaly, you still have the volatility effect issue to deal with. You will have one set of (volatile) thermometers in the data up until the temperature peak and a different, less volatile, set coming in since 1990. So your “anomalies” for the present as compared to the past will be based on two different volatilities and thus be non-comparable.

    This IS an issue. Ignoring it will compromise your results.
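    A deliberately simplified contrast of the two kinds of anomaly, with made-up annual means, shows where the non-comparability can creep in when the station mix in a box changes:

    ```python
    # Simplified contrast, hypothetical numbers: a grid-box anomaly formed
    # from a changing station mix vs. a "self to self" anomaly where each
    # thermometer is only compared with its own past.

    # Baseline era: the box holds a volatile high-desert station and a
    # stable coastal one.  Later era: the volatile one has dropped out.
    baseline = {"coastal": 57.0, "high_desert": 50.0}
    later    = {"coastal": 57.5}                  # only the survivor reports

    # Grid-box style: average whoever happens to be in the box each era.
    box_anomaly = (sum(later.values()) / len(later)
                   - sum(baseline.values()) / len(baseline))   # +4.0 F

    # Self-to-self style: compare a thermometer only with itself.
    self_anomaly = later["coastal"] - baseline["coastal"]       # +0.5 F

    print(f"grid-box anomaly: {box_anomaly:+.1f} F")
    print(f"self-to-self anomaly (coastal): {self_anomaly:+.1f} F")
    ```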

Comments are closed.