Does GIStemp use Satellites? Maybe, in a tiny way: they use a computed anomaly map from NOAA in STEP4_5 to adjust their anomalies further. I’d not call that using ‘satellite data’, though it does depend on a computed anomaly product that uses both satellite and surface data.
I was pointed at this “Global Surface Air Temperature in 1995: Return to Pre-Pinatubo Level” paper by Hansen as proof that GIStemp uses satellite data. In fact, it confirms the lack of satellites in GIStemp. On the first page it discusses satellites (six uses of the word), but only in the context of computing anomalies relative to surface data that can be used to “interpolate” some made-up “data” for use elsewhere in the paper. That is, it shows how to use satellites to extend GIStemp data in the anomaly phase; there are no satellite data in GIStemp itself, only a roughly satellite-related anomaly map built from surface and satellite inputs.
So let’s take a minute to look at this paper, since we were deflected into it by a broken claim that GIStemp uses satellite data directly:
This says that surface air temps (implying land) are the primary measure of global climate change and cites some folks who have used them, including Hansen and Lebedeff in 1987 (which they then abbreviate to HL87). They then state that “we update the analysis of HL87.” OK, they are going to patch something in an earlier paper. The next paragraph says: by adding more ocean coverage. And then that they will “discuss the significance of the unusual global warmth of 1995”. OK, some pontificating will follow the science portion.
The paper is largely about the anomalies that showed up in their anomaly map after Pinatubo popped. The assertion is that the anomalies continue to show global warming.
Update of the Meteorological Station Record
This section talks about how they get land data from ground stations and get near-real-time data for recent months from the GTS (Global Telecommunications System) via NOAA. They then state that GTS is shown to be good enough by comparison to other data. The statement is made that GIStemp gives data in both calendar year and meteorological year (Dec – Nov) formats.
They then point to a graph showing “Annual-mean surface air temperature anomalies based on the meteorological stations” in “Fig. 1”. So they explicitly state that the GIStemp results are for Met stations, not satellites. Note is given that the mean used is 1951-1980 (a particularly cold time, as I recall it). They even recognize this, though tepidly, with the statement “a weak cooling trend from 1940 to 1965”. There was snow in my home town twice in that period; only the very old folks remembered snow before. There was also snow a few years later, in the 1970s, that has not been seen again. That snow was so unusual that it made the papers. I have to suspect a ‘cherry pick’ of baseline here.
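To see why the choice of baseline window matters, here is a minimal sketch of how an anomaly against a fixed reference period works. The 1951-1980 window is from the paper; the station temperatures below are made-up numbers purely for illustration:

```python
def annual_anomaly(temps_by_year, base_start=1951, base_end=1980):
    """Return each year's anomaly vs. the mean of the baseline window.

    A colder baseline window makes every later year look warmer by
    construction, which is the 'cherry pick' concern in a nutshell.
    """
    base = [t for y, t in temps_by_year.items() if base_start <= y <= base_end]
    baseline = sum(base) / len(base)
    return {y: round(t - baseline, 2) for y, t in temps_by_year.items()}

# Hypothetical annual means (deg C): a cool mid-century, mild later years.
temps = {1951: 13.8, 1965: 13.6, 1980: 14.0, 1995: 14.4}
print(annual_anomaly(temps))  # 1995 shows +0.6 against the cool baseline
```

Shift the same numbers to a warmer baseline window and the 1995 "anomaly" shrinks, with no change at all in the underlying temperatures.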
Oddly, this interval matches an interval when the solar angular momentum was in the pattern that matches cold times. But back to the paper.
Inclusion of Marine Temperatures
This says that they have had uncertainty in global temperatures due to poor spatial sampling. That is, they don’t cover oceans well. Add in ship and buoy data and it gets better, “but in situ data introduce other errors”. Then they go on to say satellites provide better total surface coverage but limited time coverage, and “The satellite data provide high resolution while the in situ data provide bias correction.” OK, which is it: “introduce other errors” or “provide bias correction”? Please explain how such an error-prone data set can be used to correct a new high-tech satellite series? This just smells like the cover story for a “Data Food Product Homogenizing Process” to come.
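For what it’s worth, the simplest form of “bias correction” they could mean is an offset: shift the satellite series by the mean satellite-minus-ship difference at collocated points. This is a crude sketch under that assumption (the actual method is far more involved, and all the numbers here are invented):

```python
def bias_correct(satellite, collocated_pairs):
    """Shift a satellite SST series by the mean satellite-minus-in-situ
    offset at collocated points.  A toy sketch of the idea only; it
    inherits whatever errors the in situ readings carry, which is
    exactly the objection raised above.
    """
    offsets = [sat - ship for sat, ship in collocated_pairs]
    bias = sum(offsets) / len(offsets)
    return [s - bias for s in satellite]

# Hypothetical collocated readings where the satellite runs ~0.5 C warm.
pairs = [(20.5, 20.0), (18.6, 18.1), (22.4, 21.9)]
print(bias_correct([20.5, 18.6, 22.4], pairs))  # each value shifted down ~0.5
```

Note the catch: if the ship/buoy readings are themselves biased, that bias is transferred straight into the “corrected” satellite series.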
Yup, next paragraph. They talk about “Empirical Orthogonal Functions” used to fill in some South Pacific data… but it uses “Optimal Interpolations”, which sure sounds like they are just cooking each datapoint independently… From here on out, when they use EOF data they are talking about this synthetic data. It also looks like they use 1982-1993 base years to create the offsets that are used to cook the data for 1950-81. Wonder if any major ocean patterns were different in those two time periods, and just what surface (ship / buoy) readings were used to make the Sea Surface Temperature reconstructions? They do say “The SST field reconstructed from these spatial and temporal modes is confined to 59 deg. N – 45 deg S because of limited in situ data at higher latitudes.” OK, got it. You are making up data based on what you hope are decent guesses. But in GIStemp “nearby” can be 1000 km away with no consideration for climate differences, so I’m concerned that the same quality of care is being given here.
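For readers unfamiliar with the jargon: “Empirical Orthogonal Functions” are essentially principal components of the space-time anomaly field. A toy numpy sketch of what a truncated EOF reconstruction looks like, with entirely synthetic data (the real analysis fits modes from the well-observed 1982-1993 window and projects sparse earlier data onto them, per the paper):

```python
import numpy as np

# Synthetic space-time anomaly field: 12 time steps x 8 grid points.
rng = np.random.default_rng(0)
time_steps, grid_points = 12, 8
field = rng.normal(size=(time_steps, grid_points))

# EOFs are the right singular vectors of the (time x space) matrix.
U, s, Vt = np.linalg.svd(field, full_matrices=False)

k = 3                                   # keep only the leading k modes
recon = U[:, :k] * s[:k] @ Vt[:k, :]    # rank-k reconstruction

# The discarded modes are simply gone: everything the field did that
# does not project onto the k retained patterns is lost.
err = np.linalg.norm(field - recon) / np.linalg.norm(field)
print(f"relative error with {k} of {min(time_steps, grid_points)} modes: {err:.2f}")
```

The point of the sketch: a truncated-mode reconstruction can only ever reproduce patterns present in the fitting period. If the 1950-81 oceans behaved in ways the 1982-1993 modes don’t capture, the reconstruction silently can’t express it.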
Yup. Page 2 gives the story. Same 8000-box grid. Same big distances, though here it says 1200 km. By the way, that ‘diminishing influence’ is a bit misleading. Now I’ve only read the code once, and you need to do it a few times to catch all the ‘nuances’, but it looks to me like the code tries to use near stations and works its way out to far away. If the only station is far, far away, it gets used just like a nearby one would have been used. The only ‘diminishing’ is that nearby is tried first. Sparse station density will still leave major influence from far-away stations.
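The arithmetic behind that objection is easy to show. Assume (as a simplification of the published GISS method, not a transcription of the actual code) a linear distance taper out to the 1200 km cutoff, with weights normalized over whatever stations a grid box has:

```python
def station_weight(distance_km, cutoff_km=1200.0):
    """Linear distance taper: weight 1 at the box center, 0 at the cutoff.
    A simplified stand-in for the GIStemp weighting, for illustration."""
    return max(0.0, 1.0 - distance_km / cutoff_km)

def gridcell_anomaly(stations):
    """Weighted mean over (distance_km, anomaly) pairs.  Because the
    weights are normalized, a lone far-away station ends up supplying
    100% of the cell's value, tiny raw weight or not."""
    pairs = [(station_weight(d), a) for d, a in stations]
    total_w = sum(w for w, _ in pairs)
    if total_w == 0:
        return None  # no station within the cutoff
    return sum(w * a for w, a in pairs) / total_w

# Dense cell: the nearby station dominates the distant one.
print(gridcell_anomaly([(50, 0.2), (900, 1.5)]))
# Sparse cell: the ONLY station is 1100 km out.  Its raw weight is a
# tiny ~0.083, but after normalization the cell takes its value whole.
print(gridcell_anomaly([(1100, 1.5)]))
```

So the “diminishing influence” only diminishes anything when there are nearer stations to outweigh; in a sparse region, distance effectively stops mattering.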
I also find the “coastal box” statement worrisome. “A coastal box uses a meteorological station if one is located within 100 km of the box center.” So we can toss out good coastal station data if the station is 101 km up the coast from the center of the box, but use fictional data from up to 1200 km away? This makes sense how?
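The asymmetry reduces to a one-line rule. This is a deliberate caricature of the sentence quoted from the paper, not the real selection logic:

```python
def coastal_box_source(station_distance_km):
    """Caricature of the quoted rule: a coastal box uses a real
    meteorological station only if it sits within 100 km of the box
    center; otherwise the box falls back on interpolated values that
    may draw on stations up to 1200 km away."""
    if station_distance_km <= 100:
        return "measured station data"
    return "interpolated (up to 1200 km)"

print(coastal_box_source(99))   # measured station data
print(coastal_box_source(101))  # interpolated (up to 1200 km)
```

Two kilometers of distance flips a box from a real thermometer to a reconstruction with a twelve-times-larger reach.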
They then present their chart of anomalies based on their interpolations and find, surprise, it’s warmer. Well, yes. It’s warmer now than it was in 1950-1980, when you already said it was at the end of a cooling period. What A Surprise /sarc>.
Urban Warming and Other Measurement Problems
This is a discussion of Urban Heat Island issues, mostly. Hansen repeatedly cites himself (as HL87) to dismiss the importance, finding only a 0.1C effect. Gee, and I thought the data accuracy was only about 1F. How do you increase accuracy by oversampling of disjoint data from different entities? Each day is sampled ONE TIME. You cannot increase its accuracy by sampling another day. Averaging many days is not oversampling. Though I gotta love this quote: “If the correction were substantial compared to the net change, it would call into question the reality of that change.” For someone whose product we have blink charts showing 1-2C fudges of data, to make this statement is “Oh, the hypocrisy of it all!”, to paraphrase the report of the crash of the Hindenburg.
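The statistical point here can be demonstrated in a few lines. Averaging N repeated measurements of the *same* quantity shrinks the error of the mean roughly as 1/sqrt(N); averaging one measurement each of N *different* quantities does nothing for the accuracy of any individual one. A toy simulation (all parameters invented; sigma of 0.5 C stands in for roughly 1 F instrument noise):

```python
import random

random.seed(42)

def measure(true_value, sigma=0.5):
    """One thermometer reading with ~0.5 C (roughly 1 F) of noise."""
    return random.gauss(true_value, sigma)

# Case 1: many readings of the SAME day's temperature.  The mean of
# the readings converges on the true value -- genuine oversampling.
same_day = [measure(15.0) for _ in range(10000)]
mean_error = abs(sum(same_day) / len(same_day) - 15.0)
print(f"error of mean, 10000 readings of one day: {mean_error:.3f} C")

# Case 2: one reading each of many DIFFERENT days.  Each day's
# temperature is still known only to within ~0.5 C; other days'
# readings tell you nothing about this day's instrument error.
one_reading = measure(15.0)
print(f"a single day's reading is still uncertain by ~0.5 C: {one_reading:.2f}")
```

Which is the post’s complaint in miniature: each calendar day was measured once, so the per-day error floor never goes away no matter how many days get averaged together.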
He then does a hand-wave comparison of his “data food product” with Jones et al. 1990 and basically pronounces that UHI doesn’t matter. Right… At this point I don’t know what you are comparing. It isn’t temperatures. It’s some kind of synthetic anomaly product. I also don’t know what Jones et al. did, but it sure was not looking at the actual stations as surfacestations.org has done. “The comparison indicates that urban warming and other local anthropogenic effects do not dominate the observed change.” Yes, I’d agree with that. The data manipulations of GIStemp dominate the observed change, IMHO.
We then get a laundry list of other indications of surface warming (glaciers, boreholes, etc.). Yeah, we got that. The Little Ice Age was cold. OK. Everything is fine on that point. It just doesn’t mean that anything unusual is happening, and it certainly does not mean anything bad is happening now.
Global Distribution of Temperature Change
This has 4 scary maps of the world with lots of red and brown on them.
They show the bulk of the heating over North America (where GISS cooks the data) and Siberia (where the Soviet-era data are suspect, the modern Russian-era data are sparse with a huge drop-out of thermometers, and where they run steam pipes above ground to heat the villages in the winter…). It also shows a big cold area near Australia. Guess the Aussies don’t really have a hot one this summer… And they predict (project? Make up? Fantasize?) that this shows a hotter tropical ocean with “more than just an increase of the frequency or intensity of El Ninos.”
Guess the recent La Ninas are an issue, then.
Here they run off into speculation. Lots of ‘ifs’ and ‘expecteds’. He does drag ozone into the picture, though without much light being shed, and cites himself a few more times. There is a prediction that the troposphere will return to a warming trend “expected in 1-2 years”. Then more ‘scary scary’ talk and a warming-may-cause-cooling backside cover. “It may reduce deep water formation causing cold conditions”… “though not a cooling sufficient to overcome greenhouse warming”. Gotta love it. It will be colder, but warmer.
There is then more speculation, including “then the Northern Hemisphere temperature anomaly pattern may persist or even strengthen, rather than switch to the opposite phase of the North Atlantic Oscillation”. And a cavalier dismissal of any possible impact from being at the bitter end of an interglacial: “an increased likelihood of strong Atlantic storms, but no danger of a nascent ice age.” How nice of you to have changed the planet’s orbit, inclination, precession, nutation, obliquity, et al. Is “thanks” enough?
His references then cite himself 7 times as lead author. Guess we know now how to get bigger citation counts in the “peer reviewed literature”.