CO2 takes summers off

Benchmarks are Critical

I have been working with the GIStemp code that produces one of the major temperature records for the planet. As part of that process, I needed a “benchmark” to test GIStemp against. (If you want to see what something does to the data, you must have a starting point, a benchmark, so that you do not end up measuring the world with a “rubber ruler”.)

I decided to use the raw GHCN data as my benchmark. Why? Because that is the foundation on which GIStemp builds. They take the GHCN data, add some special data from small locations (such as Antarctica) and blend in the USHCN records (that are already in GHCN, but with a different modification history). So it is “reasonable” to use GHCN and see, step by step, what GIStemp does to change that temperature history. It doesn’t really matter all that much exactly WHAT the GHCN data say when used as a benchmark, so much as it matters what CHANGES when GIStemp works on it.

Averages of Average of Averages Mean What Again?

OK, I’m off to build a benchmark… But you don’t really want to look at every single temperature record of a 1/2 million record data set to see what changed. You would also like a “summary” of the impact. Despite my personal belief that the “Global Average Temperature” has no real meaning, it can still serve as a broad indicator of “what changes” in the data. But which average to use?

The NOAA data arrive at GIStemp already averaged. Sixty (ish) records for a station are averaged over the month. Yes, fewer in February and more in December. But by some “magic sauce” NOAA takes one calendar month of highs and lows and averages them together. Is it (sum of (high + low)/2)/count? Or is it (sum of highs)/count averaged with (sum of lows)/count? I would guess that it is the daily mean, (high + low)/2, averaged over the days, but that is just a guess. I may investigate that later, but for now I’m just going to accept that NOAA has a “magic incantation” and they make a Monthly Average Temperature. Which way it is calculated can change the result (the two agree only when every day has both readings), but that question will have to wait … “tomorrow is another day”…
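That ambiguity is easy to see with a toy example. Here is a minimal Python sketch (hypothetical readings; NOAA’s actual procedure is unknown to me): with complete data the two methods agree exactly, but a single missing reading makes them diverge.

```python
# Hypothetical daily highs/lows for part of a month; None = missing reading.
highs = [70, 72, 68, 75, 71]
lows  = [50, 49, None, 52, 48]

# Method A: average the daily midpoints (high + low)/2, complete days only.
mids = [(h + l) / 2 for h, l in zip(highs, lows)
        if h is not None and l is not None]
method_a = sum(mids) / len(mids)                        # 60.875

# Method B: average all available highs and all available lows separately,
# then average those two means.
hs = [h for h in highs if h is not None]
ls = [l for l in lows if l is not None]
method_b = (sum(hs) / len(hs) + sum(ls) / len(ls)) / 2  # 60.475
```

With no missing days the two formulas are algebraically identical; the divergence comes entirely from which records each method drops.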

What is known is that the data arrive at GIStemp as temperatures, one per month, that are stated in 1/100 F precision. Since the temperature records were only reported in whole degrees F, this is “a neat trick”… I would assert that these numbers ought to be at a minimum rounded to whole degrees F, but inspection of the method of calculation would be very helpful in choosing to round, truncate, or do something else entirely. For now, we just accept the number as given and toss rocks at the idea that it is meaningful, especially in any precision beyond whole degrees F.
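A side note on that round-versus-truncate question: the benchmark program later in this post divides integer totals of tenths of a degree by integer record counts, which truncates rather than rounds. A tiny Python sketch (made-up numbers) of the difference:

```python
# Hypothetical monthly running total (tenths of a degree C) and record count.
itmptot, incount = 459, 4
truncated = (itmptot // incount) / 10.0    # integer divide then /10, as in the FORTRAN
rounded = round(itmptot / incount) / 10.0  # the 'nint' alternative
# truncated -> 11.4, rounded -> 11.5
# (Python's // floors; for positive totals that matches FORTRAN's
#  truncation toward zero.)
```

Either choice moves a monthly value by at most 0.1 C at this precision, which is why the program’s comments call truncation the conservative option.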

As you add together thermometer readings, you move ever further away from something that has simple clear meaning. My patio right now is 92F which is more or less normal for this time of year (At Last! it’s been abominably cool this summer.) Average that reading with the low from last night and it means a little less. Average it for the month, and do that with thermometers from 100 miles around and it means even less. (It may, or may not, represent the “typical” temperature in the area. That depends on which thermometers you used, how many of them you used, how good they are and how well placed.) It most certainly no longer tells me that I can fire up the BBQ tonight… It can be 50F in San Francisco when it’s 100F here, and SFO is only 50 miles away. The “local average” tells lies about my patio…

So I’m no fan of “average temperature”. I think of it as being rather similar to “average telephone number” and “average car color”. An interesting meta-statistic; but lacking in any really clear meaning.

OK, with that said: A contrived value like an “average temperature” can tell you things about what your program does to a data set. It can also tell you things about the pattern of the data in that data set. Just don’t confuse it with an actual temperature, ok?

Global Average of Temperatures – GAT

So I decided to calculate a couple of different “Global Averages of Temperatures” as my benchmark. For each year, I would add up the temperature records for each month in the GHCN set and divide by the number of cells with valid data. A simple average. But I would then add those ‘averages by month’ together, and divide by 12, to get an annual average GAT. Then all years in a given decade would be averaged together to get a “Decade Average of Temperatures”.

One can also add all those “monthly averages by year” together and divide by the number of years to get a “Grand Total Global Average Temperature by Month” and could then average those together to get GAT for ALL months of ALL years. Or one could simply take all the temperatures for all valid records, add them together, and divide by the number of records to get “The One True Number” of the grand total GAT for all time for all records.

I did all of those things.

Why? To see how the different ways of calculating things impact that GAT as reflected in the data.
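The sensitivity to the order of averaging is easy to demonstrate. A minimal Python sketch with made-up numbers: when the months (or cells) have unequal record counts, the average-of-monthly-averages and the grand mean of all records disagree.

```python
# Toy GHCN-like cells: per-station monthly values, unequal counts per month.
# (Hypothetical numbers, chosen only to show order-of-averaging matters.)
monthly = {
    "Jan": [0.0, 2.0],           # 2 stations reporting
    "Jul": [20.0, 21.0, 22.0],   # 3 stations reporting
}

# Average of monthly averages: each month gets equal weight.
month_means = [sum(v) / len(v) for v in monthly.values()]
avg_of_avgs = sum(month_means) / len(month_means)   # 11.0

# Grand mean: every individual record gets equal weight.
all_vals = [t for v in monthly.values() for t in v]
grand_mean = sum(all_vals) / len(all_vals)          # 13.0
```

The heavier-reporting month drags the grand mean toward itself, which is exactly the effect a shifting thermometer mix can have on a “Global Average of Temperatures”.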

I do feel compelled to point out, once again, that I believe the Global Average Temperature, no matter which way you calculate it, does not really mean much. This is only a way to “characterize the data” and “assess the sensitivity of the data to ways of calculating”. The fact that everyone is so excited about a “Global Average Temperature” does, to me, raise the issue of exactly how one chooses to calculate that One True Number. That it is used at all is distressing, but I’m stuck with what the world has chosen to do. And the world has chosen to think that averaging several thermometers together (or the same one over long time intervals) might mean something… OK, I must accept that premise. So what happens when we “characterize the data” in this way?

CO2 is French

I was rather startled to discover that CO2 is French. When August comes, it takes a vacation.

I don’t know how it does this, but the temperature data are very clear about where there is, and is not, a “warming” signature. The GAT does not increase in summer months over time. It does increase in winter months. If (and it’s a mighty big IF) CO2 is causal of warming, it somehow knows to take the summer off. The AGW thesis is that we will have “runaway heating” due to a “tipping point” where, once you warm up some, the temperatures are non-stop headed for hotter.

What the data show is that you hit a “lid” at about 20C and can’t get warmer; but cold winters can be moderated.

Here is the table of “GAT by Decade” from all GHCN data (available from the site listed under the GIStemp tab at the top of this site).

ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v2

I’m using the v2.mean.Z file but one could repeat this exercise with the max or min data sets as well.
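For anyone repeating the exercise in another language, the record layout can be inferred from the READ statement in the FORTRAN program below, (i3,i9,i4,12i5): country code, station ID, year, then twelve monthly values in tenths of a degree C, with -9999 as the missing-data flag. A Python sketch of a parser under that assumption (the sample record is made up):

```python
def parse_v2_mean(line):
    """Parse one GHCN v2.mean record, assuming the fixed columns the
    FORTRAN read implies: cols 1-3 country code, 4-12 station ID,
    13-16 year, then twelve 5-char monthly temps in tenths of deg C.
    -9999 flags missing data."""
    cc = int(line[0:3])
    sid = int(line[3:12])
    year = int(line[12:16])
    temps = [int(line[16 + 5 * m : 21 + 5 * m]) for m in range(12)]
    return cc, sid, year, temps

# A made-up record: country 425, a station ID, year 1990, Jan = -12.3 C ...
months = [-123, -45, 12, 89, 150, 190, 210, 205, 170, 120, 60, -10]
rec = "425123456789" + "1990" + "".join("%5d" % t for t in months)
cc, sid, year, temps = parse_v2_mean(rec)
```

Note that the year sits in characters 13 through 16 of the line, which is exactly the key the sort command below uses.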

DecadeAV: 1710  -2.58 -0.38  2.26  5.75  9.70 15.82 16.68 14.73  9.49  6.10  2.67  1.46
DecadeAV: 1720   0.37  2.20  3.84  7.70 11.67 14.97 16.92 16.01 13.72  9.09  4.89  2.20
DecadeAV: 1730   1.44  1.63  3.75  7.87 12.41 15.67 16.02 16.01 14.29  9.53  4.89  2.34
DecadeAV: 1740   0.55  1.60  4.36  8.18 11.96 14.92 16.79 16.09 13.97  8.55  3.47  2.51
DecadeAV: 1750  -1.50 -0.24  0.79  6.29 11.86 16.20 17.67 16.28 13.70  7.68  3.93  0.46
DecadeAV: 1760  -2.89 -1.49  2.08  6.99 11.93 17.04 18.95 17.41 13.27  8.13  3.00 -1.01
DecadeAV: 1770  -1.49  0.83  2.98  7.88 12.59 16.59 18.64 17.95 14.18  8.10  4.37  0.14
DecadeAV: 1780  -2.98 -0.38  3.45  7.45 12.83 16.90 18.88 18.53 14.24  9.35  3.35  0.08
DecadeAV: 1790  -1.51 -0.20  1.92  7.66 13.45 17.27 18.79 18.02 14.62  8.44  3.06 -0.91
DecadeAV: 1800  -0.50  1.14  3.66  9.80 13.72 17.03 19.50 19.04 15.06 10.04  4.55  0.61
DecadeAV: 1810  -0.63  0.58  3.29  7.85 13.92 16.89 19.36 19.35 15.37  9.89  4.48  1.10
DecadeAV: 1820  -2.27  0.16  3.24  8.18 13.29 16.98 18.82 18.03 14.28  9.28  3.78 -0.78
DecadeAV: 1830  -2.80 -1.06  3.51  8.71 13.70 17.40 19.48 18.59 14.87  9.58  4.06  0.43
DecadeAV: 1840  -2.83 -1.08  2.15  7.39 13.00 17.31 19.28 18.17 14.31  9.13  3.12 -1.35
DecadeAV: 1850  -2.77 -1.10  2.29  8.04 13.45 17.59 19.56 18.99 14.77  9.22  3.83 -0.55
DecadeAV: 1860  -0.24  0.07  3.44  8.63 13.84 18.08 20.15 19.40 15.62 10.91  4.45  0.93
DecadeAV: 1870   0.30  1.68  4.15  9.22 13.60 17.49 19.51 18.67 15.60 10.47  5.45  1.46
DecadeAV: 1880   1.72  2.67  5.79 10.31 14.63 18.63 20.73 20.16 16.67 12.02  6.37  2.52
DecadeAV: 1890   0.19  1.78  5.09 10.57 15.18 18.90 21.09 20.29 17.15 11.97  6.61  2.57
DecadeAV: 1900  -0.27  0.69  4.67 10.59 15.35 19.40 21.41 20.93 17.57 12.19  5.93  1.74
DecadeAV: 1910   0.92  1.40  6.36 10.71 15.14 18.77 21.02 20.45 17.30 12.33  6.85  1.99
DecadeAV: 1920   0.80  2.00  5.83 10.82 14.92 18.56 20.69 20.08 16.93 12.25  6.94  2.28
DecadeAV: 1930   0.89  2.60  6.23 10.85 15.03 18.59 20.78 20.18 17.16 12.35  6.99  2.36
DecadeAV: 1940   0.96  1.94  5.82 10.76 15.35 18.94 21.18 20.52 17.35 12.49  6.65  2.56
DecadeAV: 1950   0.91  2.10  5.87 10.98 15.07 18.42 20.46 19.98 16.89 12.53  6.71  2.21
DecadeAV: 1960   3.32  4.61  7.51 12.35 16.14 19.28 20.94 20.51 17.75 13.52  8.36  4.86
DecadeAV: 1970   3.78  5.23  8.46 12.67 16.25 19.03 20.59 20.15 17.59 13.88  9.24  5.23
DecadeAV: 1980   3.47  4.92  8.37 12.33 15.91 18.71 20.29 19.80 17.23 13.27  8.71  5.00
DecadeAV: 1990   2.96  4.23  7.75 12.03 15.73 18.59 20.47 20.06 17.16 12.99  8.04  4.04
DecadeAV: 2000   4.63  6.36  9.26 13.05 17.07 20.15 21.97 21.61 18.68 14.31  8.82  5.42
DecadeAV: 2009   5.26  6.18  9.73 13.91 17.21 20.27 22.00 21.55 18.94 14.85 10.41  6.61

The data from prior to about 1850 are fairly sparse. The first years of data, near 1701-10, have as few as one thermometer in the record, so these “global averages” are clearly bogus due to the low count of thermometers in those early years. The count rises to a few hundred by the mid 1800’s, so you can start to put some trust in the averages starting then. And what do we see? We see that from 1880 (where GIStemp cuts off history) to the decade ending in 2000 (heating was highest in 1998) August rises from 20.16 C to 21.61 C. Not very much. But January rises from 1.72 C to 4.63 C, or almost 3 whole degrees.

Somehow CO2 knows to take August off, but works really hard in winter… “I don’t think so Tim.”

It is even more interesting if you look at the data using only the 3000 stations with the longest temperature records. (That is, delete the records for stations with short reporting history. This stabilizes which thermometers you use and where they are located. In the table below, the 13th value is the annual average and the far right number is the total number of active thermometers used in that decade.) When that is done you get:

DecadeAV: 1890   0.6  2.2  5.8 11.9 17.0 20.9 23.2 22.4 19.0 13.2  7.2  2.9 12.2 1174
DecadeAV: 1900  -0.5  0.5  4.6 10.6 15.4 19.5 21.5 21.0 17.6 12.1  5.8  1.5 10.8 1867
DecadeAV: 1910   0.6  1.0  6.2 10.6 15.1 18.9 21.2 20.6 17.3 12.2  6.6  1.6 11.0 2338
DecadeAV: 1920   0.4  1.5  5.5 10.6 15.0 18.7 21.0 20.3 17.0 12.1  6.6  1.9 10.9 2567
DecadeAV: 1930   0.4  2.2  5.9 10.7 15.0 18.7 21.0 20.4 17.2 12.1  6.6  1.9 11.0 2849
DecadeAV: 1940   0.3  1.3  5.4 10.4 15.3 19.1 21.6 20.9 17.5 12.3  6.2  2.0 11.0 2851
DecadeAV: 1950   0.2  1.4  5.2 10.7 15.0 18.7 20.9 20.5 17.0 12.3  6.2  1.5 10.8 2885
DecadeAV: 1960   0.1  1.6  4.6 10.6 15.1 19.0 21.2 20.6 17.2 12.0  5.8  1.9 10.8 2845
DecadeAV: 1970  -0.8  1.1  5.0 10.4 15.1 18.8 21.0 20.3 16.8 12.0  6.2  1.0 10.6 2860
DecadeAV: 1980  -0.9  0.9  5.5 10.4 15.0 18.9 21.1 20.4 17.0 11.6  5.9  1.3 10.6 2706
DecadeAV: 1990  -0.1  1.4  5.7 10.8 15.4 19.3 21.6 20.9 17.2 11.9  6.1  1.0 10.9 2186
DecadeAV: 2000   0.8  3.4  6.8 11.3 16.5 20.5 22.9 22.4 18.5 13.0  6.2  1.9 12.0 1272
DecadeAV: 2009   1.7  2.2  6.6 11.7 15.9 19.9 22.3 21.7 18.0 12.3  6.9  2.1 11.8  209
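The “longest records” filter itself is simple to sketch. Assuming one v2.mean record per station per year, with the station ID in columns 4 through 12 (as the FORTRAN read implies), a hypothetical Python version would be:

```python
from collections import Counter

def longest_stations(lines, keep=3000):
    """Keep only records belonging to the `keep` stations with the most
    yearly records. Assumes one record per station per year and the
    station ID in character columns 4-12 of each line."""
    years_per_station = Counter(line[3:12] for line in lines)
    keepers = {sid for sid, n in years_per_station.most_common(keep)}
    return [line for line in lines if line[3:12] in keepers]
```

Run over the sorted v2.mean file with keep=3000, this reproduces the kind of “stable thermometer set” cut described above; the 3000 cutoff is the article’s choice, not anything special in the data.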

We now have August in 1890 being 22.4 C while August in 2000 is only 22.4 C. Wait a minute… those two numbers are exactly the same! Luckily for us, July is 23.2 C in 1890 but 22.9 C in 2000, having dropped by 0.3 C, which tells us that we don’t have a pathological failure of calculation and that the two August numbers being the same is a bit of an accidental quirk.

And what happens in January? 1890: 0.6   2000: 0.8

Hmmm…..

Now we have a very interesting thing. One that will be covered in more depth in other postings. But it is very very clear:

There is a “warming signal” in the raw GHCN data.

That “warming signal” is in winter, not in summer. CO2 spends summers on the beach with all the other Europeans.

That “warming signal” evaporates entirely when short lived stations are removed from the raw data.

It’s Not Just About GIStemp Anymore

So in my attempt to build a benchmark to characterize the data for testing GIStemp, I discovered that the data with no processing do contain a warming signal, but one that is not in agreement with CO2 theory: it warms much more in winter than in summer.

Anyone can do this process. The code is not complex, and could even be done in a large spreadsheet. I’ll include the code I used below for folks to pick over. (Realize that it is just a “hand tool” and not production quality. It is also written in FORTRAN and in a style similar to GIStemp. When you soak yourself in someone else’s code, you inevitably start coding in “their accent”…)

Further, when only long lived stations are used, there is no warming signal at all left in the data.

So at this point, I don’t care if you use all the data, part based on years, part based on life of thermometers, or part based on any other selection criteria. The simple conclusion is that “CO2 takes the summers off” so it can’t be CO2. The more complex conclusion is that “It’s in the selection of thermometers”.

The Code

FORTRAN is position sensitive, and WordPress loves to strip out blanks, so I hope that I’ve managed to keep all the blanks in. We’ll see. This program starts with a “ruler” that shows column positions in a comment. I always do this in FORTRAN to prevent pernicious bugs where, for example, a “temp12” variable gets turned into “temp1” or “temp” by running past position 72 and into the “serial number” fields… FORTRAN lets you declare a variable by using it, so even if you had no “temp1” variable declared, you could find yourself using one if you don’t keep track of the column numbers. (Isn’t a “code audit” fun? ;-)

The code for the “reduced” data form is very similar, mostly differing in having 1879 instead of 1700 and 129 instead of 308 as fixed years and year counts. The data are sorted into order by year before this program is run, using the Unix / Linux “sort” command (the key -k1.13,1.16 means characters 13 through 16 of the line, which is the year field):

sort -n -k1.13,1.16 ${1-"../STEP0/input_files/v2.mean"} > ${2-"v2.mean.sorted"}


C2345*7890         2         3         4         5         6         712sssssss8
      integer incount(12), itmptot(12), nncount(12), ntmptot(12)
      integer itmp(12), icc, id, iyr, nyr, iyrmax, m, iyc, kyr 
      real tmpavg(12), gavg, ggavg, ttmpavg(12), count(12), motav(13)
      real tr(13,310), testr(13), ktmpavg(12), kgavg
      character*128 line, oline

      data incount /0,0,0,0,0,0,0,0,0,0,0,0/
      data nncount /0,0,0,0,0,0,0,0,0,0,0,0/
      data itmptot /0,0,0,0,0,0,0,0,0,0,0,0/
      data ntmptot /0,0,0,0,0,0,0,0,0,0,0,0/
      data itmp    /0,0,0,0,0,0,0,0,0,0,0,0/
      data tmpavg  /0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0./
      data ttmpavg /0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0./
      data ktmpavg /0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0./
      data motav   /0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0./
      icc=0 
      id=0 
      iyr=0 
      iyc=0
      nyr=0 
      iyrmax=0
      gavg=0.
      kyr=0
      kgavg=0
      kount=0

C     Believe it or not, the program made bogus very large values, but only
C     in (repeatable) sporadic entries... until I added this initialization.

      do m=1,13
         do nyr=1,310
           tr(m,nyr)=0
         end do 
         testr(m)=0.
         motav(m)=0.
      end do

C     Get the name of the input file, in GHCN format.  The file must be
C     sorted by year (since we sum all data by month within a year.)
C     The name of the output file will be that of the input file.yrs.GAT
C     where GAT stands for Global Average Temperature.

      call getarg(1,line)
      oline=trim(line)//".yrs.GAT"
      open(1,file=line,form='formatted')
      open(10,file=oline,form='formatted')              ! output

C     Read in a line of data (Country Code, ID, year, temperatures)

      read(1,'(i3,i9,i4,12i5)',end=200) icc,id,iyr,itmp
      iyrmax=iyr
      rewind 1

   20 read(1,'(i3,i9,i4,12i5)',end=200) icc,id,iyr,itmp

      if(iyr.gt.iyrmax) then

C      if you have a new year value, you come into this loop, calculate
C      the Monthly Global Average Temperatures, the Yearly GAT for iyrmax
C      print it all out, and move on.

        nyr=(iyrmax-1700)
        do m=1,12

          if (incount(m).ne.0) then

C      We keep a running total of tenths of degree C in itmptot, by month.
C      Then we divide this by the integer count of valid records that went
C      into each month.  This truncates the result (I think this is valid, 
C      since we want to know conservatively how much GIStemp warmed the data
C      not how much my math warms the data ;-)  

C      So we have a "loss" of any precision beyond the "INTEGER" values being
C      divided, but since they are in 1/10C, we are tossing 1/100C of 
C      False Precision, and nothing more.  THEN we divide by 10. (REAL) and
C      yield a temperature average for that month for that year (REAL).
C      I could do a 'nint' instead:  nint(itmptot(m)/incount(m)) and get a
C      rounded result rather than truncated, but I doubt if it's really worth
C      it for a "hand tool" that I'd like to be a conservative one.  If I
C      truncate, then any "warming" of the data is from GIStemp, not this tool. 

            tmpavg(m)=(itmptot(m)/incount(m))/10.
            gavg=gavg+tmpavg(m)
            kgavg=kgavg+tmpavg(m)

            count(m)=incount(m)
            tr(m,nyr)=tmpavg(m)
            motav(m)=motav(m)+tmpavg(m)
            motav(13)=motav(13)+tmpavg(m)
            ktmpavg(m)=(ktmpavg(m)+tmpavg(m))
C       So we put the average temperature for that year into 
C       a bucket by months.  A running total of the monthly 
C       averages.
C            tr(m,nyr)=(tr(m,nyr)/count(m))/10.

          end if
        end do

        gavg=gavg/12.
        tr(13,nyr)=gavg
        kount=kount+1

C2345*7890         2         3         4         5         6         712sssssss8
        write(10,'("GAT year: "i4,12f7.2,f7.2)') iyrmax,tmpavg,gavg

C      I have improved this code by changing to a "modulo" of the
C      year rather than the count itself.  The count can be a bit off
C      on the very first records if you start at a value other than zero
C      so lining up decades that start at other year boundaries can be
C      problematic.  The data in the article above were created with 
C      the "if kount" version.  Not really significant, but the "mod" version
C      is better.  It will give a different decade boundary as coded here,
C      though.  So to exactly reproduce the data above, comment out
C      the "mod" line and un-comment the "kount" line.

        if (mod(iyr,10).eq.0) then
C       if (kount.ge.10) then
             kyr=iyrmax
             do m=1,12
                ktmpavg(m)=ktmpavg(m)/kount
             end do
             kgavg=kgavg/(kount*12)
             write(10,'("DecadeAV: "i4,12f7.2,f7.2)') kyr,ktmpavg,kgavg
             write(10,'(" ")')
             kount=0
             ktmpavg=0
             kgavg=0
        end if

C       write(10,'("GAT year: "i4,12f7.2,f7.2,i6)') iyrmax,tmpavg,gavg,
C    *iyc
C       write(*,*) "iyc: ", iyc
C       write(*,'("GAT/year: "i4,12f7.2,f7.2,i6)') iyrmax,tmpavg,gavg,
C     *iyc

C      probably paranoia, but we re-zero the monthly arrays of data.
C      
        do m=1,12
          incount(m)=0      
          itmptot(m)=0
          tmpavg(m)=0.
        end do

        gavg=0.
        iyrmax=iyr
        iyc=0

      end if

C     So we have a new record for a new year or for the same year.
C     If it is valid data (not a missing data flag) add it to the running 
C     totals and increase the valid data count by one.

      nyr=(iyr-1700)
      iyc=iyc+1
      do m=1,12
        if(itmp(m).gt.-9000) then
          incount(m)=incount(m)+1
          itmptot(m)=itmptot(m)+itmp(m)
C       Increment the running total for that year / month
C       Then add that running total to the running total for all years.
          ntmptot(m)=ntmptot(m)+itmp(m)
          nncount(m)=nncount(m)+1

C        So here we are adding the present station/year data to a 
C        running total of individual records.  

        end if
      end do

C     and go get another record
      goto 20

C     UNTIL we are at the end of the file.
  200 continue

             kyr=iyrmax
             do m=1,12
                ktmpavg(m)=ktmpavg(m)/kount
             end do
             kgavg=kgavg/(kount*12)
             write(10,'("DecadeAV: "i4,12f7.2,f7.2)') kyr,ktmpavg,kgavg

C2345*7890         2         3         4         5         6         712sssssss8

C     so here we take the totals "so far" and divide by the valid record
C     count in each month. 

      do m=1,12
         if(nncount(m).ne.0) then
           ttmpavg(m)=(ntmptot(m)/nncount(m))/10.
           ggavg=ggavg+ttmpavg(m)
c       For ttmpavg, we take the running total of all records, by month.
         endif
         motav(m)=motav(m)/308.

C    where in this case, we divide by years... for each month.
C    so we have a total of each year by month.

      end do

      ggavg=ggavg/12.
      motav(13)=motav(13)/(12.*308.)


C      write(10,'("        : "4x,12f7.2,f7.2)') ttmpavg,ggavg

      do m=1,13
         do nyr=1,308
            testr(m)=testr(m)+tr(m,nyr)
         end do
      end do

      do m=1,13
         testr(m)=testr(m)/308.
      end do


      do nyr=1,308
      iyr=nyr+1700
      write(*,220) iyr,tr(1,nyr),tr(2,nyr),tr(3,nyr),tr(4,nyr),
     *tr(5,nyr),tr(6,nyr),tr(7,nyr),tr(8,nyr),tr(9,nyr),tr(10,nyr),
     *tr(11,nyr),tr(12,nyr),tr(13,nyr)
 220  format (i4,1x,13f7.2)
      end do

 221  format (5x,13f7.2)

      write(*,221) motav
      write(*,221) testr
      write(*,'(5x,12f7.2,f7.2)') ttmpavg,ggavg

      stop
      end


About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present "hot buttons" are the mythology of Climate Change and ancient metrology; but things change...
This entry was posted in AGW Science and Background, Favorites.

40 Responses to CO2 takes summers off

  1. H.R. says:

    It looks to me
    Like the GAT
    Is anything you want it to be.
    (Especially if you’re the IPCC.)

    NICE, E.M.! I’ve been checking in from time to time and you make the GISSmess fairly understandable.

    Thank you, sir.

    Quick P.S. The Fatbigot suddenly quit blogging. I’ll miss his voice in the conversation.

  2. schnoerkelman says:

    I’ve been following along here for a while and think what you’re doing is very useful, keep up the good work.

    I think the seasonal effect is caused, at least in part, by the Earth’s orbit. We’re closer to the Sun in NH winter and thus insolation is higher than in NH summer.

    Something that occurred to me while reading this entry is that min/max readings will be affected by the relative length of day. Outside of the tropics the day is longer than the night during the local summer and reversed in winter. I believe that the average temperature for a given location is supposed to be a proxy for the integrated temperature for that location over a given period where the minimum meaningful period is one day. The effect of variable day/night times would seem to compromise the proxy substantially.

    bob

  3. Bernie McCune says:

I believe you are really on to something. I too am an amateur temperature buff. I first started looking at Co-op Weather Stations in New Mexico because the mean annual temperatures from a few long term temperature stations indicated no clear sustained cooling or warming trend in NM over the past 100 years or so of data. Two interesting things that came out of this first look were the fact that most Co-op stations started after WW II and that, of the few stations with long term data, the trends were clearly the same throughout the state. Over periods of 3 to 5 years the mean temperatures varied as much as 7 degrees F, but all stations (be they in the mountains or the deserts) trended in phase and roughly by the same number of degrees F. Last fall when I was in Japan I got some long term data from about a dozen stations there. I found the quality of the data much better. In NM many of the stations had up to 5 or 6 months of data missing in any one year. I basically obtained about 50 NM stations from the potential 200 stations by simply eliminating stations that had too many data gaps. I was still able to find plenty of stations with a good geographical spread, but not many with more than 60 years of data. Anyway, the mean annual Japanese data seemed to follow the NM patterns, with up to 5 degree C swings over periods of 3 to 5 years and with all Japanese stations except the one in Okinawa following the temperature trends of the others. I compared those cycles with the NM ones and found them 180 degrees out of phase!!

Your question, “what does global average temperature really mean”, is very appropriate. Here I am seeing Northern Hemisphere temps out of phase. Then when you consider that Southern Hemispheric seasons are opposite of those in the NH – not sure how to deal with global temps in any meaningful way. I suspect we will have to start looking for ways to break all these temperature regimes into some sort of regional boundaries that somehow make sense. I have really struggled with trying to get deeper into daily to monthly temperature characterizations, but feel that we will have to find some sort of unusual graphical method to be able to show meaningful information from it. I strongly reject the idea that we can easily clarify even NM temps from this very homogenized mean annual data. The other issue that you bring up is the use of 1000s of almost randomly picked stations. I suspect a few thousand carefully selected stations would be best if we had 100 years of data from them. My guess is that we will not find too many of those that are properly geographically spaced.

I recently read a book by Hoyt that indicated some climatically significant cycles. An 80 year solar cycle and an 18 year lunar cycle were the ones that caught my eye. The NM data from the 6 or so 100 year data stations seem to show slightly more than one full cycle over the whole term of the data set, which seems to jibe with Hoyt’s 80 year cycle.

    One last thing. I noted a very significant 1 deg C increase in temperature for all the Japanese stations in 1989. That in itself is not so significant because several degree swings over a year or two are common in all the annual mean temp data. What was interesting about that jump was that it rose and then stayed high for the next 8 years or so. This jump is not seen as dramatically in the NM data even though the delayed trend into the early 90s was up and stayed up.

    As you well know – temperature is not as easily characterized as some might think. I am happy to see someone else who has started to dig into this question and reveal some of these very complex relationships.

    By the way – my background is Biology BS and Engineering Tech AS with most of my 35 years of experience in Engineering and Physical Science.

    Bernie McCune
    Las Cruces, NM

  4. Gary P says:

    I hate to see data thrown away. Would it make sense to try and average trends from each individual thermometer? Perhaps a weighted average where the weight is the length of the time the station is reporting.

  5. E.M.Smith says:

    Good points all!

Bernie: I took all records for all stations, counted them (one record per year), then sorted them and did a “cut off” at the 3000 stations with the longest records. They had a 64 year lifespan at the cutoff point. It goes downhill from there.

    We simply don’t have several thousand stations with 100+ year history.

I had to cut it back to 1578 stations to get 100 year lifespans at the cutoff. That’s all there is with 100 years of data. At 1000 stations, you get a 110 year lifetime. And not all stations will be active in all years. So any given year or decade will be based on even fewer stations than that. As you drop below 1200 stations, your statistical value tends to drop off quickly.

    Gary P: My goal was to “characterize the data” – that is, to see how the different pieces of it impacted the benchmark. It isn’t so much “throwing it away” as it is “seeing what the different parts look like”. That’s the first step. THEN you try to figure out how to deal with the character of the data…

    So could the “bad” stations be “fixed” some how? Maybe. But if there is an impact from adding a lot of stations in the same place, then: averaging them together, doing a ‘reference station’ blend, or tossing out the redundant data all ought to yield about the same results for that cell. You will mostly end up measuring the differences in the error bands of the different methods rather than any real temperature impact. If it turns out that you are adding whole regions (or removing them) then it is harder to decide how to “fix” their impact on the trend over history.

  6. Al S. says:

    Mr. Smith,
    Since most of the long record thermometer stations have dropped out, or are otherwise unavailable, I would very much like to see some data for just the (long-record) ones that are still available.
    There may well be some skew in the ones that were dropped;
    it seems to me that data from just the 209 long-record stations that were recently available would tell us a lot.
    Thanks for your consideration!

  7. E.M.Smith says:

    I’ll see what I can do with a “very persistent thermometer” filter.

    Realize that these are “thermometer records” rather than “thermometers”. The difference is the “modification history”. So that long lived record may end 2 years ago, but there may well be a new ‘segment’ with a new record ID (which would be counted in the “short lived” batch) if they, for example, swapped out the Stevenson Screen and put in a new automated device.

    GIStemp has a couple of ways of “splicing together” those two records that they assert keep it accurate. That is part of what I’m auditing in this process.

    So, bottom line, the very long records that persist to today tell us one very interesting thing. The other very long records that don’t persist might also tell us something (like how we need to better splice series together…)

  8. MikeN says:

    I think you should drop the point about CO2 not causing warming in winter. That is what one can expect. CO2 is not the only thing affecting temperature. The warmer the planet gets, the more heat it emits into space. So you have a negative feedback right there.
    It makes sense that you would get more warming in winter, at the poles, and at night, and indeed that is what the experts are saying.

    REPLY: I think maybe you read something backwards. CO2 takes summers off. The global warming signal IS PRESENT in WINTER. It is NOT PRESENT in SUMMER. This is exactly NOT what the experts are saying. They are saying we have a positive feedback loop, tipping points, and runaway global warming. Clearly none of these things are consistent with the temperature record that shows consistent tops at 20 something C with no increase, no tipping point, and no positive feedback. It is very consistent with adding more thermometers in warm tropical places and in the Southern Hemisphere (which is also what the data shows happened to thermometer count over time.) -ems.

  9. Paul says:

    OK, I don’t understand your point.
    The global warming signal is present in winter, it is not present in summer. That is not what the experts are saying.

    Is this different from saying that it will get warmer in winter and at the poles?
    I think this is what the experts are saying, that warming would have a primary effect on the coldest areas and times first. The effect on lower latitudes is less, as is the effect on daytime temperatures, and summer temperatures.

If you look at Arctic summer ice melt, the numbers for February and March are fairly consistent, but the September low number is dropping.

  10. E.M.Smith says:

    @Paul

Do not be misled by “appeal to authority” arguments. “Experts” are worth exactly nothing. (I know, I am one.) Only the data carry truth, though to find that truth you must ask it what it has to say and then listen. Do not torture it as GIStemp does or it will tell you the lies you wish to hear…

What I am saying is in some ways incredibly simple (even though folks seem to have trouble with it) and in a way somewhat subtle. First, the subtlety:

    I’m talking about a “Warming SIGNAL” not “warming”.

What’s the difference? If you call your mistress and talk about sex, this is one hot conversation. But if you do it over an encrypted phone line, I can not hear the exact conversation. But what I can do is detect the conversation. (In signals intelligence work this is called “contact tracing”). If I’m very lucky, I might even get some metadata (how long you talk, what time of day, from where, to where) and if I’m very very lucky, I might be able to see that you always end with the same words. Maybe it’s “I Love You”, or maybe it’s “Don’t tell my wife” but the communication always ends with the same set of “stuff”. (This is part of how we broke the German Enigma in WWII – some operators always ended with “Heil Hitler” and we got enough ciphertext with different keys to attack it.) And if I’m staffed for it, I might also be able to log when you visit after your phone calls.

I can know “about” the conversation even if I don’t know exactly what was said. And knowing about it can tell me almost as much as hearing it (and sometimes more…)

    So what I did was make a simple test. IFF the temperature record is warming (from whatever cause) then SOME of the temperature records MUST show warming. Which ones, when can tell you a great deal about why or why not.

    So we look at the temperature data and find that August is just dead flat. There is no “signal” then. Not over 100 years. You never call your girl friend in August. We look in January. OMG does it warm up. Several Whole Degrees over the decades.
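For concreteness, that per-month check can be sketched in a few lines. This is my own illustration: the `(year, month, temp)` tuple layout is an assumption for the sketch, not the real GHCN fixed-width record format.

```python
# Sketch of the per-month test: average each calendar month by decade,
# then compare flat months against rising months. The data layout here
# is assumed for illustration, not the actual GHCN format.
from collections import defaultdict

def monthly_decade_means(records):
    """records: iterable of (year, month, temp_C) tuples.
    Returns {month: {decade: mean temp_C}}."""
    acc = defaultdict(lambda: [0.0, 0])
    for year, month, temp in records:
        key = (month, (year // 10) * 10)
        acc[key][0] += temp
        acc[key][1] += 1
    out = defaultdict(dict)
    for (month, decade), (total, count) in acc.items():
        out[month][decade] = total / count
    return dict(out)

# Toy data shaped like the pattern described: January rises, August is flat.
toy = [(1900, 1, 1.0), (1910, 1, 2.0), (1920, 1, 3.0),
       (1900, 8, 20.1), (1910, 8, 20.1), (1920, 8, 20.1)]
print(monthly_decade_means(toy)[1])  # {1900: 1.0, 1910: 2.0, 1920: 3.0}
print(monthly_decade_means(toy)[8])  # {1900: 20.1, 1910: 20.1, 1920: 20.1}
```

Run against real station records, a table like this makes “which months carry the signal” directly visible.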

    Now what does this tell us?

    It says that it can NOT be CO2.

    Why? CO2 does not go away in August. It rises steadily over the years. If the “CO2 causes warming” thesis is to “hold water” then you must have some way that increasing it from 260 ish parts per million to 380 ish parts per million does absolutely nothing in August and absolutely everything in January! CO2 can not do that. The physics of the stuff do not change with the seasons.

    It is not that the poles or the winters are getting a little bit, or even somewhat, warmer; while the summers are warming, but not so much. It is very much that CO2 does nothing worth noting at all in summer. CO2 literally takes summers off. Gas laws, thermodynamics, they just can’t do that…

Now we also have the problem that the “experts” have said that CO2 is a positive feedback item and that, via water vapor, it is enhanced. But when is water vapor highest? Yup, in those dreadful summers with 98 F and 98% humidity. Yet in those times when we have the highest temps and the highest “water vapor and CO2 feedback” we get NO feedback effect.

The “runaway feedback” thesis is “toast” based on the data. There is simply no positive feedback signal present at all. If anything, there is a negative feedback signal that approaches unity at about 20C (something looks to put a lid on at that point). It is as though someone said “Every time he calls the barber, he sees his mistress”; but when we check the phone log, you go home after calling the barber, but go to the girl friend after calling the florist. We don’t know if the flowers went to the mistress or to the wife, but we know where you went – and it wasn’t to the barber…

    On the other side, during winter, we have a rising “signal”. If there were positive feedback, then we ought not to see so much gain in winter (when water vapor drops to near zero at freezing and CO2 dissolves really well into cold rainwater).

    OK, what could explain this? When we look into the data, do we see something else that clearly could account for this pattern? Yes. Thermometers move south.

    There are other postings here that explored that thread and found it very well supported. I’ll not recount them here (see the AGW “issues” topic on the side bar). The “bottom line” is that we moved ever more of the total of thermometers to places that are warm during the N. Hemisphere winter (i.e. to the Tropics and S. Hemisphere) and it is exactly THOSE RECORDS that “carry the warming signal”. If we look at long lived cold or northern thermometers, there is no warming.

    The Northern Records are not warming.
    The Old Records are not warming.

We are left with the fact of the “warming signal” being almost entirely carried in the records from new thermometers added in the south. (There is also some warming signal from the fact that lots of thermometers counted as “rural” are in fact at airports that had growth over time and are Airport Heat Islands; there are a couple of postings here on that as well.)

    Basically, if you look only at averages of averages of adjusted averages of averages (no that is not an overstatement!) you see one thing, but when you look “inside the box” you see that the averages are simply hiding the truth.

    And that truth is that the warming signal is not present in the places the AGW theory says it ought to be; but it is an artifact of adding Jet Age airports in the tropics and S. Hemisphere.
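The composition effect can be shown in a handful of lines. The numbers below are invented purely for demonstration: no individual station warms at all, yet the unweighted average rises as warm-location stations join the set.

```python
# No individual station warms here, yet the simple (unweighted) average
# rises purely because warm-location stations are added over time.
# All values are invented for illustration.
def yearly_means(stations_by_year):
    return {year: sum(temps) / len(temps)
            for year, temps in stations_by_year.items()}

series = {
    1950: [0.0, 1.0],              # two cold-latitude stations
    1980: [0.0, 1.0, 25.0],        # a tropical station is added
    2000: [0.0, 1.0, 25.0, 26.0],  # and another
}
means = yearly_means(series)
print(means[1950], means[2000])  # 0.5 13.0
```

The point of the toy: an average over a changing station mix can trend without any station trending.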

    BTW, arctic ice is simply following the 60 year PDO cycle (about 30 year half cycles when it “flips”) and we have just had 30 years of the warm half end, and started 30 years of the cold half. The ocean currents and winds have far more to do with arctic ice melt than any “global average temperature” does.

This year ice is quite normal (well inside the averages of the last decade or so and well above the low point) and we have already started a re-freeze. For the next 30 years or so you will see more Arctic Ice on a regular basis. Also, BTW, the Antarctic Ice has been growing for years. The total GLOBAL ice is growing, not shrinking. That the PDO flip changed which pole gets more and which gets less is normal.

    It is essential when dealing with things that have 60 year cycles (and there are even some 176-200 year solar cycles, and Bond Events are on a 1500 year cycle) that you not look at any time interval shorter than the cycles without allowing for them. Otherwise you will just fool yourself into thinking something else is happening when it is completely covered by the normal progression of the cycles. This, IMHO, is the critical broken thing in using “30 years” as the definition of Climate. It is just wrong. It will reliably fail to account for the PDO, the 176 year solar cycle, or Bond Events (to name a few).

    California has had the same climate for several thousand years, and will for several more. Until mountains are moved or latitude changed (or very long term, the Ice Age Glaciation returns), we have the same climate. There are cycles of drought here that run several hundred years in length. You can not say “OMG it’s a Drought! AGW!” unless you have fully accounted for all those other known longer term cycles. 30 years is just a very bad joke.

    So what I’ve found is really quite simple:

    No “positive feedback” signal.
    No “Cold North warming faster” signal.
    Yes “Stable old thermometers show no warming”.
    Yes “New thermometers in S. Hemisphere carry warming signal”.
Yes “Adding thermometers over airport tarmac” shows the warming signal (well Duh! It is black asphalt in the sun!)
    Yes “GIStemp spreads bogus warming signals to places that have not actually warmed”.

    And yes, we are just past the end of a 30 year warm phase to the PDO and have already entered a 30 year cold phase.

Oh, and “maybe” we’re starting one of the 200 year cold cycles or (Heaven Forbid!) the start of a 1500 year cold cycle. The last Bond Event (there is a posting here on them, too) was just about 1500 years ago. It was called “The Dark Ages” and was known for its cold, miserable, and dark weather… And guess what: Cycles of hot, then cold are always hottest just before you start to drop into the cold cycle…

    The sun is taking a nap.
    The PDO has flipped cold.
    The world is at the right time (given the dates of past events) for a “cold flip” into a Bond Event.
    I’m just glad I lived in the warm part. Cold is not healthy for children and other living things…

  11. MikeN says:

    >Now what does this tell us?
    >It says that it can NOT be CO2.

    No it doesn’t.

REPLY: Look, I’m not going to play ‘did so – did not’ with you over this. It’s dirt simple. If you can’t see that, please keep trying till you do. How can CO2 cause warming over 100 years of January, yet do nothing over 100 years of August? It simply can not. Period. And it certainly can not have a heat driven ‘tipping point’ with that kind of negative temperature coefficient.

    “>Why? CO2 does not go away in August. It rises steadily over the years. If the “CO2 causes warming” thesis is to “hold water” then you must have some way that increasing it from 260 ish parts per million to 380 ish parts per million does absolutely nothing in August and absolutely everything in January! CO2 can not do that. The physics of the stuff do not change with the seasons.”

    I think this is an assumption that you are making that is not what is being said, at least outside of press releases and scaremongering stories. Now if there is absolutely no warming signal in summers that is a different thing.

REPLY: Look at the data. I published it in the article. Did you even read the article? Here, I’ll reproduce a bit for you: We see that from 1880 (where GIStemp cuts off history) to the decade ending in 2000 (heating was highest in 1998) August rises from 20.16 C to 21.61 C. Not very much. But January rises from 1.72 C to 4.63 C, or almost 3 whole degrees. There is a fairly trivial change (that is nearly in the noise level) in August and double that in January. If you take the time to read further, you will find articles where I track down that warming signal and find it is not in the older stable thermometers but only in the newer thermometers, then find they preferentially are put at airports in the tropics. The warming of the AVERAGES is a fiction based on blending non-warming long lived thermometers with warmer newer short lived thermometers. Then assuming that somehow GIStemp can ‘fix it up’. It can’t (as shown in a couple of other postings here, ‘Islands in the Sun’ and ‘A slice of Pisa’, for example). Take a look at:

    https://chiefio.wordpress.com/2009/08/13/gistemp-quartiles-of-age-bolus-of-heat/

Where August in the top quartile of thermometers (by length of service) is a dead flat 20.x from the decade ending 1879 to the decade ending 1999. It only pops up in the last two decades as we seriously prune out the thermometers of record. Or look at September: 12.1 to 11.6 over the same period. It pops to 12.7 at the end with the thermometer changes. Then look at the bottom quartile. Temps rise from -13.9 to +4.7 in January. July goes from 20.8 to 20.6, a drop.
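A hypothetical sketch of that quartile split follows. This is my own toy code: the real analysis works on the full GHCN station files, not this simplified structure.

```python
# Rank stations by length of record and split them into quartiles, so
# each quartile's monthly averages can be examined separately.
# Toy structure; the actual work uses the full GHCN station files.
def lifespan_quartiles(station_years):
    """station_years: {station_id: set of years with data}.
    Returns four lists of station ids, shortest records first."""
    ranked = sorted(station_years, key=lambda s: len(station_years[s]))
    n = len(ranked)
    return [ranked[i * n // 4:(i + 1) * n // 4] for i in range(4)]

# Eight invented stations with record lengths of 1 through 8 years.
toy_stations = {f"S{i}": set(range(1900, 1900 + i)) for i in range(1, 9)}
quartiles = lifespan_quartiles(toy_stations)
print(quartiles[0])  # ['S1', 'S2']  shortest-lived records
print(quartiles[3])  # ['S7', 'S8']  longest-lived records
```

Averaging each quartile separately is what lets the long-lived stable records be compared against the short-lived newcomers.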

    You simply can NOT get that behaviour from the data via CO2. Warming only the new thermometers, and only in the winter. You can very easily get it by adding new thermometers in tropical airports. (And looking further into the data that is exactly what I found):

    https://chiefio.wordpress.com/2009/08/17/thermometer-years-by-latitude-warm-globe/

    https://chiefio.wordpress.com/2009/09/08/gistemp-islands-in-the-sun/

    As for Antarctic ice, again here you are being misled by the scaremongers.

REPLY: The only scaremongers are on the AGW side, so no, I’m not being misled by them. Or by anyone else for that matter. I make my own conclusions based on the facts that are provable. Nobody else makes my opinions for me and I am very hard to mislead.

    Actual global warming theory expects the Antarctic ice sheet to expand with global warming. It is too cold in Antarctica for ice to melt. Instead, we get warmer waters evaporating, and causing more snowfall.

    REPLY: Again with the ‘theory’ stuff. How can you have an ‘actual theory’? Theories are just good stories folks make up. (Yes, I make theories all the time, to be tested and often discarded.) I have no use for theory as an argument device. I have no use for cooked books. I have no use for computer models offered up as proof of anything (They can never be a proof since they are human created fictions. I know this because I am a programmer.)

    Again, it is very simple. If the world is in a runaway greenhouse meltdown, the ice has to melt. It isn’t.

Antarctic ice is growing. Arctic ice is absolutely normal. Greenland ice is growing. We had a couple of years where Arctic ice was broken up and blown out of the Arctic ocean where it could melt in southerly warm water (it is too cold to melt in the Arctic Ocean as well; that is why wind and currents matter, not warming) and we went to low ice levels, but no lower than in the past. There is absolutely nothing unusual with the Arctic ice. But you must look with a 100 year eye, not a 30 year one. The PDO flip is a 30 year half cycle. We are headed to 30 years of much colder Arctic temperatures. And there are many old records of folks navigating the Arctic ocean (and THEY did it without ice breakers…). But if you are a 30 something (or even a 40 something) you will only have experienced 1/2 of the PDO, the warming half. I’m older, I’ve seen some of this before… Like the subs surfacing at the pole in open water (there are pictures of the event).

  12. MikeN says:

    >If we look at long lived cold or northern thermometers, there is no warming.

    This is a surprise, and would contradict the theory. What time frame are we talking about?

    REPLY: from:

    https://chiefio.wordpress.com/2009/08/13/gistemp-quartiles-of-age-bolus-of-heat/

over about 100 years we have the best thermometers being more or less stable and the newest thermometers warming dramatically in the N.H. Winter, which is no surprise since we are adding thermometers in the S.H. and tropics.

    You are also reaching the opposite conclusion for Tamino’s analysis.
    http://tamino.wordpress.com/2009/09/11/arctic-analysis/

    REPLY: Theory is kind of pointless when faced with the actual results. The data clearly show that stable thermometers show “warming is not happening in the way it is claimed”. The theory is, simply put, wrong. Re-read the rest of my postings here, I can’t retype them all in comments for you. It does not matter if you take 100 years or 50, the effect is the same.

    Per Tamino: He does a simplistic analysis that is all based on the broken data set of GIStemp. In case you haven’t noticed, there is an entire section here devoted to looking at what the GIStemp code actually does. It is terribly broken. You can not trust that temperature series to tell you anything. This posting is one of the earlier stages of that investigation, the one that pointed me at the fact that the data are processed in just such a way as to hide the truth: There is no warming in the long lived stable thermometers, it is only an artifact of adding thermometers to hot places.

    And surprise or not, it is what is in the data.

  13. MikeN says:

    >warming is not happening in the way it is claimed”.

    I’ve read your postings, but you are not understanding what I am saying.

    When you say warming is not happening in the way it is claimed, you need to first understand what is claimed.
    To say that increasing Antarctic ice is contrary to ‘what is claimed,’ you have to have people that actually claim that. The only ones who claim that are scaremongers, and people misled by scaremongers.
    I have heard it said from scientists that global warming primarily affects winters, nights, and higher latitudes. I think about double the warming at the poles, and half at the equator. So again, your CO2 takes summers off is based on something you are assuming, but not ‘what is claimed.’

    You have provided solid evidence that the amount of global warming is not what is being claimed, and I encourage you to get this published. Has anyone enquired about this? If not, I’ll see if I can forward you to the right people.

    You go a step too far with posts like this one, where you assume the theory to be something it is not.

    As for Arctic warming, I have no problem thinking Tamino is fudging data, but I would like to know how. It looks like he looked at individual station data. Are you saying that this published data is the after-adjustment data? If so, then how did you evaluate the signal?

  14. MikeN says:

    My objection is to this line:

I discovered that the data with no processing do contain a warming signal, but one that is not in agreement with CO2 theory. It warms much more in winter than in summer.

  15. Jeff Alberts says:

    To say that increasing Antarctic ice is contrary to ‘what is claimed,’ you have to have people that actually claim that. The only ones who claim that are scaremongers, and people misled by scaremongers.
    I have heard it said from scientists that global warming primarily affects winters, nights, and higher latitudes. I think about double the warming at the poles, and half at the equator. So again, your CO2 takes summers off is based on something you are assuming, but not ‘what is claimed.’

    Which category does Mark Serreze fall into there? He’s made claims of an ice-free Arctic by x date in the relatively near future. “Ice-free” does not equal increasing Arctic ice.

    Is there a single “global warming” theory that states something testable?

  16. E.M.Smith says:

    MikeN
    >warming is not happening in the way it is claimed”.

    I’ve read your postings, but you are not understanding what I am saying.

That could well be. Your statements tend to be a bit long on generalizations and short on specifics like claimed by whom, and when. That makes them a bit hard to get hold of. That’s why I go back to the data.

    When you say warming is not happening in the way it is claimed, you need to first understand what is claimed.

    Well, since what is claimed seems to constantly mutate, that’s a rather stiff barrier. I limit my “claim of what is claimed” to the notion that constantly increasing CO2 ought to cause constantly increasing temperatures. (And by extension, that these ought to show up in averages of those temperature records). If you wish to assert that CO2 does not cause heat retention and does not cause temperatures to rise over time in aggregate, well, I don’t think that’s what the AGW thesis “claims”… I think that is the skeptic point.

    What I’m referencing is the assertion that CO2 causes thermal energy to be retained. That the more CO2 you have, the more energy will be retained. And that the more heat is retained, the more water will evaporate, and the more water feedback will be a positive feedback item leading to runaway global warming – the “tipping point”.

What I see is that some thermometers show warming of a great deal in their average. These thermometers are the new ones added at more southerly latitudes. Other thermometers show little to no warming. Those thermometers are long lived stable locations. To the extent the long lived average shows warming, it is only in the last 20 years when we drastically “pruned” out thermometers from the set. Again thermometer change completely swamps CO2 change.

For the CO2 thesis to hold, there must be increasing amounts of retained thermal energy. Somehow this does not happen in August. Somehow this does not happen at old stable thermometers. That is not consistent with the CO2 retains heat thesis.

    I don’t know how many times you want me to say this. 100? 10,000?

IF CO2 causes retained thermal energy THEN thermometers must rise in aggregate. They don’t; they rise in tranches by thermometer life span and by latitude. CO2 is NOT variable by latitude and is NOT variable by thermometer life span (it varies with neither the geographic location nor the lifespan of the thermometer). CO2 can not explain the pattern of temperature rise in the detail of the thermometer records, nor in the averages of thermometers by life span or by latitude tranche. Is it really that hard to understand?

    To say that increasing Antarctic ice is contrary to ‘what is claimed,’ you have to have people that actually claim that. The only ones who claim that are scaremongers, and people misled by scaremongers.

    Please define exactly who are these “scaremongers” you keep tossing up. As near as I can tell, the only “scaremongers” are Al Gore and Jim Hansen and a few of their friends. Are they the folks whom you are speaking about? Since they have command of the AGW megaphone, if you wish to assert they are wrong, please do. I tire of the annual Antarctic Ice Shelf collapse story…

    Then state who are these folks who claim that the world is warming but we will end up with more total ice. Are they the ones at NASA publishing the bright red maps of the Antarctic? Oh, no, they say it’s warming and shrinking. Are they the ones saying that the Antarctic temperatures are rising and the peninsula especially is melting?

    It just smells to me like you are trying to have it both ways. It’s warming and the ice will grow, but ignore that increasing ice and snow …

    I have heard it said from scientists that global warming primarily affects winters, nights, and higher latitudes.

Which ones? What nameless faceless folks are these? More importantly, how does this relate to the DATA? When we look at the GHCN data we find old thermometers in northern climates showing nothing much happening over long periods of time. Only when you cherry pick a short interval starting at the bottom of a cold snap do you get “warming”. GIStemp picks 1880 as a starting cold point. Other folks like to use the mid 1970s for nearer term (It snowed then for the first time in about 50 years in my home town).

    Longer term, it’s dead flat.

    https://chiefio.wordpress.com/2009/03/02/picking-cherries-in-sweden/

    Take a look at 1730. Almost identical temps to now. Only starting at the bottom of the LIA in the 1850 or so era gives a warming trend to this data.

    In the average of thermometers, a large number of new thermometers are added into the data series. These are added in warmer more southern locations (with warm months when the N.H. has winter). It is these thermometers that warm the average. It is these thermometers that “carry the warming signal”.

    Not the thermometers in the north, or the thermometers in the N. Hemisphere in the N. Hemisphere winter. They say “no change from the past”.

    It is simply the case that GIStemp can not effectively remove this impact of a flood of southern new thermometers. It tries with UHI adjustment, gridding, boxing, and anomaly processing, but it is simply too little and too late to “fix it up”.

    That, too, is documented here in a walk through the code. Actual runs of the programs. Log files. Direct examination of the data and where it goes.

    So you have a vague “somebody” who has some vague “theory”. I have the data series, the code, what happens to the data, the lines in the code that do it, and how it accounts for all that we see in the result. I don’t need some vague handwave or appeal to authority or fantastic theory. Data, process, result, log files. Simple demonstration is all it takes. That demonstration accounts for all the known behaviours of the data series.

The CO2 thesis can not explain the changes in the data series. It can’t vary by particular thermometers. It can’t vary by age tranche (not particular calendar years, the duration of thermometer life regardless of particular years). It can’t vary by latitude tranche, warming Brazilian January but not Moscow August.

    I think about double the warming at the poles, and half at the equator. So again, your CO2 takes summers off is based on something you are assuming, but not ‘what is claimed.’

    I’m not assuming anything. I am simply looking at what thermometers show warming, and what thermometers do not, and when they show the warming. Then asking is this consistent with the notion of CO2 retaining thermal energy on a global basis? Is it consistent with the CO2 rise globally over time? And it is not.

We lose Siberian thermometers and the global average goes up, but Siberia does not. We gain tropical airport thermometers and the global average goes up, but the northern stable thermometers do not AND the older tropical thermometers do not.

The behaviour of the output of GIStemp is fully explained by the lack of ability to mask the changes in number and location of thermometers. You don’t need to go one step beyond that. The anomaly, grid, box, and UHI code is simply unable to make up for the factor of 10 change in thermometers. The data are consistent with this behaviour and the code by inspection shows it as well.

The behaviour of the averages of tranches of data is not consistent with retained heat on a global basis. The “rising global average temperature” thesis does not allow for this distribution of data.

    You have provided solid evidence that the amount of global warming is not what is being claimed, and I encourage you to get this published. Has anyone enquired about this? If not, I’ll see if I can forward you to the right people.

    I’m open to publishing, but it’s not high on my list of life goals. It would be a ‘nice to have’. Any pointer for co-authors would be appreciated. I would not be willing to co-author with someone widely recognized as an AGW proponent (unless they wanted to do a ‘mea culpa’ change of position and wished to use this as the lever). I don’t need the heartburn of conflict with their agenda. (I am not agenda driven. If the data showed runaway global warming, I’d be shouting that…)

    You go a step too far with posts like this one, where you assume the theory to be something it is not.

There are so many competing, mutating versions of the AGW thesis that figuring out which one “it” is can be quite hard… Would that be the one with, or without, the troposphere warm spot? The one with melting Arctic ice, or the one with a new LIA hitting Europe? Etc. etc.

    My position does not depend on what AGW thesis you pick. Mine is very simple. IF co2 is retaining heat on a global basis, you ought to see it as a decade averages trend (to allow for weather cycles) in all or most latitude bands to some significant degree. You don’t. You ought to see it in lifespan bands of thermometers (all geographies, all years). You don’t.
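The latitude-band version of that test can be sketched like so. The 30 degree band width and the `(latitude, year, temp)` layout are my own assumptions for illustration.

```python
# Average temperatures per latitude band per decade. Under a global
# heat-retention thesis every band should trend upward, so a table
# like this makes the claim checkable band by band. Layout assumed.
from collections import defaultdict

def band_decade_means(records, band_deg=30):
    """records: iterable of (latitude, year, temp_C) tuples.
    Returns {band_start_deg: {decade: mean temp_C}}."""
    acc = defaultdict(lambda: [0.0, 0])
    for lat, year, temp in records:
        key = (int(lat // band_deg) * band_deg, (year // 10) * 10)
        acc[key][0] += temp
        acc[key][1] += 1
    out = defaultdict(dict)
    for (band, decade), (total, count) in acc.items():
        out[band][decade] = total / count
    return dict(out)

# Invented values shaped like the pattern described in the text:
toy = [(55.0, 1900, 5.0), (55.0, 1990, 5.1),      # northern band: flat
       (-10.0, 1900, 25.0), (-10.0, 1990, 26.5)]  # southern band: rising
bands = band_decade_means(toy)
print(bands[30])   # {1900: 5.0, 1990: 5.1}
print(bands[-30])  # {1900: 25.0, 1990: 26.5}
```

If only some bands rise while others stay flat, the pattern points at the station mix rather than a uniform global forcing.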

    As for Arctic warming, I have no problem thinking Tamino is fudging data, but I would like to know how.

    I did not say HE was fudging data. I said he uses GIStemp and that GIStemp has broken behaviours that make the data a bad data set to use. Those have been documented here.

    https://chiefio.wordpress.com/2009/07/30/gistemp-f-to-c-convert-issues/

    https://chiefio.wordpress.com/2009/08/23/gistemp-fixes-uhi-using-airports-as-rural/

    https://chiefio.wordpress.com/2009/09/08/gistemp-islands-in-the-sun/

    https://chiefio.wordpress.com/2009/09/04/most-used-rural-airport-for-uhi-adj/

    https://chiefio.wordpress.com/2009/08/30/gistemp-a-slice-of-pisa/

    https://chiefio.wordpress.com/2009/08/26/agw-gistemp-measure-jet-age-airport-growth/

And in much greater technical detail at:

    https://chiefio.wordpress.com/category/gisstemp-technical-and-source-code/

    And if you would like to do it yourself, you can even see the code here:

    https://chiefio.wordpress.com/2009/08/09/will-the-good-ghcn-stations-please-stand-up/

    It looks like he looked at individual station data. Are you saying that this published data is the after-adjustment data? If so, then how did you evaluate the signal?

    He uses GIStemp data. That means that it has all been “cooked” via the GIStemp process. That is detailed in the links above. Pay particular attention to the “Pisa” posting.

    Yes, this is “data” after the adjustment. STEP3 uses “reference stations” up to 1000 km away to adjust stations, like PISA. PISA as a UHI “adjustment” gets an INCREASE of about 1.5 C of warming over the history of the station. This is just bogus on the face of it. How can removing a UHI effect make an urban area warmer?

    See:

    https://chiefio.wordpress.com/2009/08/12/gistemp-step1-data-change-profile/

    where I measure the roughly 1/2 C of “warming of the GHCN data” that is directly attributable to the GIStemp code in the first 2 steps ALONE. Not AGW. Not CO2. Not UHI. Just how much the data get an “uplift” from the processing done to it in the first 2 steps. (It gets more “uplift” from later steps, but I’ve not posted on them yet. Took a break to make some lunch money…)

    So anyone who uses GIStemp data has a minimum of 1/2 C of “code warming” in STEP0 and STEP1 AND may have individual stations warmed by 1.5C from nothing but the GIStemp CODE changing the data in later STEPs and in addition to any trend in the base data.

    That, IMHO, makes GIStemp useless for serious research.
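The “code warming” measurement described above amounts to differencing a step’s output against its input and averaging. A minimal sketch, with invented numbers rather than actual GIStemp values:

```python
# Difference the output of a processing step against its input over the
# keys both share, then average. A positive result means the step itself
# warmed the data. All values below are invented for illustration.
def mean_step_shift(before, after):
    """before/after: {(station_id, year, month): temp_C}."""
    common = before.keys() & after.keys()
    diffs = [after[k] - before[k] for k in sorted(common)]
    return sum(diffs) / len(diffs)

before = {("PISA", 1950, 1): 5.0, ("PISA", 1950, 2): 6.0}
after  = {("PISA", 1950, 1): 5.6, ("PISA", 1950, 2): 6.4}
print(mean_step_shift(before, after))  # 0.5
```

Restricting the comparison to keys present in both sets keeps added or dropped records from contaminating the measured shift.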

    More stuff at:

    https://chiefio.wordpress.com/category/agw-and-gistemp-issues/

  17. MikeN says:

    Jeff Alberts, I was referring to Antarctic ice. The scientists are predicting less Arctic ice.

  18. MikeN says:

    You don’t need to keep repeating the results of your testing. I read the posts, and understand to an extent what you’ve done. I’ll look at it a little more closely another time.

    >Then asking is this consistent with the notion of CO2 retaining thermal energy on a global basis? Is it consistent with the CO2 rise globally over time? And it is not.

    You say you are asking a question, and then you answer it yourself.

    If the claim was that CO2 increases leads to constantly increasing temperatures, then the claim is falsified just by looking at the regular GISTEMP data, which does not show constantly increasing temperatures.

    >Your statements tend to be a bit long on generalizations and short on things like claimed by whom, when.

    You are doing the exact same thing. You assume that more ice in Antarctica invalidates global warming theory.

    >That is not consistent with the CO2 retains heat thesis.
    How do you know this? Again you are making statements that may or may not be true.

    >Are they the ones at NASA publishing the bright red maps of the Antarctic? Oh, no, they say it’s warming and shrinking.

Exactly. That’s why I don’t like these scientists. They encourage the media into alarmist positions that are not in line with the science. They say it’s warming and shrinking, or every year say this is the first time we’ve seen this ice shelf collapsing. A warming Antarctica does not mean there will be less ice.

    >For the CO2 thesis to hold, their must be increasing amounts of retained thermal energy. Somehow this does not happen in August. Somehow this does not happen at old stable thermometers. That is not consistent with the CO2 retains heat thesis.

    Again, how do you know it is not consistent with the CO2 retains heat thesis? It is quite possible it doesn’t happen in August. You just state this as fact that this is how CO2 theory works, but your sources are basically from the media. Perhaps we should take a look at some model runs at the Climate Explorer.

    Do you have any desire to tackle GISS Model E?

    By the way, what are the annual numbers since 1995 for August?

  19. MikeN says:

    >I limit my “claim of what is claimed” to the notion that constantly increasing CO2 ought to cause constantly increasing temperatures. (And by extension, that these

    That is simply not claimed. Perhaps you mean in decadal averages.

    >If you wish to assert that CO2 does not cause heat retention and does not cause temperatures to rise over time in aggregate, well, I don’t think that’s what the AGW thesis “claims”

    No, the thesis does claim that CO2 will cause temperatures to rise over time. It is not clear that the thesis expects the warming to happen uniformly across all latitudes, times of day, or months. That is the problem I have with your statements. Now zero signal in the summer is not expected, but more warming in winter than summer is expected.

>More importantly, how does this relate to the DATA?

It doesn’t relate to the data. It relates to your statement that ‘CO2 takes summers off is against the theory of global warming.’

>The behaviour of the output of GIStemp is fully explained by the lack of ability to mask the changes in number and location of thermometers.

    I’m not challenging that.

    >The “rising global average temperature” thesis does not allow for this distribution of data.

Well that is true if there is no increase in temperatures, which is what your thermometers say when properly adjusted. But warming in winter and less warming in summer is acceptable (but probably not zero warming).

    >I would not be willing to co-author with someone widely recognized as an AGW proponent

    I don’t think that’s the right attitude, but since you don’t really have getting published as a goal, this makes sense.

  20. MikeN says:

    >I did not say HE was fudging data. I said he uses GIStemp and that GIStemp has broken behaviours that make the data a bad data set to use. Those have been documented here.

    This is the part that I don’t get. What dataset are you using to get temperatures to run your analysis?

    I assumed that the Gistemp temperatures being published for individual stations are unadjusted, and only the global regional means are adjusted data.

  21. MikeN says:

    I have reason to think Tamino is fudging his data. I’ve noticed many times that when he does a temperature trend he starts at the end of a cooling period.

    REPLY: OK, we’re in a semantic quibble. I would call that a “cherry pick” where “fudge” to me means to change the actual data. No big. (Kind of a “dirty angels on heads of pins” thing ;-) -ems.

  22. Jeff Alberts says:

    Jeff Alberts, I was referring to Antarctic ice. The scientists are predicting less Arctic ice.

    And they’re not really getting that either.

  23. E.M.Smith says:

    MikeN
    “Then asking is this consistent with the notion of CO2 retaining thermal energy on a global basis? Is it consistent with the CO2 rise globally over time? And it is not.”

    You say you are asking a question, and then you answer it yourself.

    Yes. That is exactly correct. You ask things like “Does the whole globe warm evenly?” or “Is there a seasonal variation?” Then you look at the data and see what it answers. You ask things like “If CO2 is warming the whole planet, where is the warming in the data records? What places? What times?” Then you look at the data and see what it answers. And then you ask “Is this consistent with a global impact from a global distribution of a greenhouse gas?” And it is not.

I am well aware that this does not fit the pedagogical paradigm of: thesis, antithesis, synthesis, etc. It is a much more direct process.

If the claim was that CO2 increases lead to constantly increasing temperatures, then the claim is falsified just by looking at the regular GISTEMP data, which does not show constantly increasing temperatures.

    Please realize that this started from a simple desire to “characterize the data” for the purpose of “validating GIStemp”. That can not then be falsified or justified by GIStemp itself or you end up in circular reasoning.

    The whole point started with “What do the data LOOK like?” so that I could then see “What does it LOOK like when GIStemp is done changing it?” so that I could then ask “Is that a valid set of changes?” So for my purposes, you can’t just go looking at GIStemp to prove that what GIStemp does was OK or not… or you end up in circular land.

    Along the way I noticed some interesting patterns in the data. Patterns that are not consistent with the notion that GHCN data (pre-GIStemp and nearly “raw”) show a pattern of warming that is consistent with the retention of thermal energy over many decades via a broad effect like greenhouse gasses.

    “Your statements tend to be a bit long on generalizations and short on things like claimed by whom, when.”

    You are doing the exact same thing. You assume that more ice in Antarctica invalidates global warming theory.

    No. I assert that growing ice globally with growing ice in Antarctica and “ice dead in the middle of the last decade plus ranges” (and well within historic recorded ranges) fails to support AGW theory, whatever it is.

Look, you dragged the ice into a CO2 discussion. Not me. You want to go there, fine. My opinion, and that is all that it can be since I’m working on GIStemp code, not ice code, is simple: The Arctic is not at any kind of historic state. There are records of ships sailing it from several past centuries and photographs of subs in open water at the north pole. Nothing to see there, move along. The Antarctic has growing ice. Nothing to see there, either. Every year we get the stupid dog trick of the glaciers calving off an ice shelf and the ice shelves breaking up. Not me touting that as proof, the AGW team touting it. It is a bogus point. More ice, not less, no warming. Nothing to see, move along.

    That is NOT me saying it disproves the whole AGW theory. I don’t need to do that, the AGW advocates need to prove their theory. I’m fine with the status quo (i.e. nothing of importance is happening). It is me saying that any claims that Arctic or Antarctic ice is anything of historical import break their teeth on the history of prior low ice levels. It is simply not proof of anything, IMHO.

    “That is not consistent with the CO2 retains heat thesis.”

    How do you know this? Again you are making statements that may or may not be true.

    Ok, for the dozen and tenth time:

    More heat trapped by CO2 MUST show up as rising thermometer records. You can NOT have the average go up if you don’t have some of the individual records going up. Math does not work that way.

    Which records go up MATTERS. This is supposed to be a global effect which is why it is called Global Warming.

    If CO2 is trapping heat over the past century, it ought to be more or less evenly distributed over time, perhaps with some multi-decade “ripple” on it from cyclical processes such as the PDO, but those ought to be fairly clearly seen as a ripple signal on top of the “tilt” toward higher. You could FFT to see how much ripple if you like, the data are darned flat.
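The FFT check suggested here is straightforward to sketch. This is a hedged, synthetic illustration (the series, ripple amplitude, and noise level are all invented, not GHCN data): even with a flat mean and annual weather noise, an amplitude spectrum picks a multi-decade ripple out cleanly.

```python
# Hedged, synthetic sketch of the FFT "ripple" check described above.
# The series is invented: flat mean, a 65-year ripple, annual noise.
import numpy as np

years = np.arange(1880, 2010)                               # 130 annual values
rng = np.random.default_rng(0)
ripple = 0.5 * np.sin(2 * np.pi * (years - 1880) / 65.0)    # two full cycles
noise = rng.normal(0.0, 0.3, years.size)                    # weather jiggle
series = 14.0 + ripple + noise                              # no warming trend

# Amplitude spectrum of the mean-removed series.
spectrum = np.abs(np.fft.rfft(series - series.mean()))
freqs = np.fft.rfftfreq(years.size, d=1.0)                  # cycles per year

peak = int(np.argmax(spectrum[1:]) + 1)                     # skip the DC bin
print(f"dominant period: {1.0 / freqs[peak]:.0f} years")
```

A real check would run the same transform on the GHCN annual or decade series instead of the synthetic one.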

    You don’t get a global signal in the data.

Long lived records (from whatever years, early years, late years, whatever, just long lived stable thermometers) DO NOT SHOW WARMING.

    Short lived records (from whatever years) DO show warming. But in those records, the warming is only in the averages in N. Hemisphere WINTER, not N.H. Summer.

    When you look more closely at those records, you find that they are disproportionately RECENT in time and SOUTHERN in location and at AIRPORTS.

It does not take a rocket scientist to figure out that adding a factor of 10 (or 1000%) more thermometers, putting them in more southerly locations, and putting them at airports might just add more temperature readings in the N. H. winter months in the averages.

    And that is exactly what the GIStemp code does.
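The arithmetic behind this point can be sketched in a few lines. This is a hedged illustration with invented station values (none of these are real GHCN stations): no individual thermometer warms at all, yet the simple average of “all stations reporting” jumps when warm-winter stations join the pool.

```python
# Hedged sketch of the composition effect described above, with invented
# station values: no individual thermometer warms, yet the simple
# average of "all stations reporting" jumps when warm-location stations
# are added to the pool.

# January means (deg C) for two long-lived cold-winter stations.
old_stations = [-5.0, -3.0]

# Ten new stations added later at warm (lower latitude / airport)
# locations, each individually flat as well.
new_stations = [22.0] * 10

early_avg = sum(old_stations) / len(old_stations)   # pool before additions
late_pool = old_stations + new_stations             # pool after additions
late_avg = sum(late_pool) / len(late_pool)

print(f"early January average: {early_avg:.2f} C")
print(f"late  January average: {late_avg:.2f} C")
```

Anomaly and gridding schemes exist precisely to cancel this kind of shift; the dispute in this thread is whether GIStemp’s versions of them actually do.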

    Now, what would happen in the data if CO2 WERE warming the planet? There ought to be a fairly consistent (not perfectly consistent, but modestly consistent) increase in temperatures over the whole planet. Maybe some tilt to one hemisphere or another. Maybe some ripple from multi decade ocean current flips. Maybe even some geographical variation where, oh, Africa might warm up more or less than Siberia.

A decade average of thermometer records will hide the weather events. Observation of 150 years of decade averages of records will show any periodic ripple from cyclical events. (This is standard band pass filtering technique). Dividing the data into tranches by location or latitude or age or any of a dozen other things will show if the pattern of warming is, or is not, spread with some evenness or if it is very “lumpy”, hitting some places very selectively. That highly selective impact would not be consistent with a broad general rise in global average temperature (the thing we are constantly told to worry about).
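The decade-average point can be illustrated the same way. In this hedged, synthetic sketch the ±1 C “weather” is a deterministic alternation standing in for random annual noise: decade averaging wipes out the year-to-year jiggle entirely while leaving any slower pattern untouched.

```python
# Hedged sketch of decade averaging as a crude low-pass filter. The
# "weather" here is a deterministic +/-1 C alternation standing in for
# random annual noise; the climate is perfectly flat by construction.
import statistics

years = list(range(1880, 2010))
annual = [14.0 + (1.0 if y % 2 else -1.0) for y in years]   # flat + jiggle

# Bucket annual values by decade and average each bucket.
decades = {}
for y, t in zip(years, annual):
    decades.setdefault(y // 10 * 10, []).append(t)
decade_means = {d: statistics.mean(v) for d, v in decades.items()}

print("annual spread :", max(annual) - min(annual))
print("decade spread :", max(decade_means.values()) - min(decade_means.values()))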

So when I look into it, I find that the temperatures do not rise globally from a long lived set of thermometers broadly distributed around the planet. The temperatures DO NOT have a large multi-decade ripple (there is some evidence for a very small one). The long lived thermometers simply do not show any significant warming. They at most show the modest rise from the LIA cherry picked bottom to the present day.

The “tipping point” thesis requires a positive coefficient of amplification with temperature increase. It isn’t seen in the data. Hot summer months just don’t show much rise over 100 years of steadily increasing CO2. That is NOT consistent with the CO2 / Water Vapor positive feedback tipping point thesis. You can not say “more warming makes it warmer even quicker” and then say “more warmth does not make it warmer”. There is NO evidence for ANY positive feedback in the data. There is LOTS of evidence for a strong negative feedback in the data IF there is a CO2 effect. That would invalidate the AGW crisis claims. You get to pick one:

    a) There is a positive feedback coefficient and more CO2 means more warmth with more warmth building up the warmer it is, so summers get much warmer.

    b) There is a negative feedback coefficient and more CO2 means that nothing happens to summer heat. We hit the point where negative feedback dominates and we top out at the limit temperature. Which, BTW, is about 20C in the data.

    The data clearly support ‘b’.

No amount of handwaving, appeals to authority, who published what, or which thesis is being pushed by whom today can change that choice. There is either a positive feedback coefficient OR summers do not warm. Not both.

Now when you look at the warming signal in the N. Hemisphere winter months, you find it is carried in new thermometer records in the southern hemisphere and equatorial zone. This is entirely consistent with the notion that GIStemp fails to correct out the impact of a 1000% change in thermometer locations. It is entirely inconsistent with the notion that adding CO2 GLOBALLY warms the new thermometers in the south, but only in N. Hemisphere winter, and does nothing to the N. Hemisphere thermometers nor to the old S. H. thermometers. It just can not be that selective: only working in one hemisphere, only in the winter, and only on the new thermometers.

    These are not things that “may or may not be true”. They are exactly true, simply tested, and clearly what the data and physics say.

    The only assumption I make is that a broad effect over a centuries time scale ought to be visible broadly over a centuries time scale with decade granularity. That is not much of an assumption compared to all the, frankly, crap that is assumed in the AGW thesis.

    If that is not true, then you must show how CO2 can accumulate for 100 years doing nothing, then suddenly have a dramatic impact in the last decade, but only on new thermometers (with no impact on old thermometers RIGHT NEAR THEM) and with the impact waxing and waning with the seasons, but only in some places.

    Go ahead and try…

“For the CO2 thesis to hold, there must be increasing amounts of retained thermal energy. Somehow this does not happen in August. Somehow this does not happen at old stable thermometers. That is not consistent with the CO2 retains heat thesis.”

    Again, how do you know it is not consistent with the CO2 retains heat thesis?

How many times do you want to flog this same horse?

    It is really very simple. If the planet is going to warm up it must have rising temperatures. That is a tautology. The thermometers SOMEWHERE must be going up for it to be getting warmer all over the planet.

The “CO2 did it” thesis holds that CO2 is causing retention of thermal energy. (If it did not hold in thermal energy, there would be no ‘greenhouse gas effect’.)

    If thermal energy is being retained, on a multi decade time scale it will be redistributed around the planet to some fair extent. Certainly in the 10s of degrees of latitude. We see this in ocean currents and winds. For example, if you warm up the tropical waters near Florida by 5 degrees, the gulf stream will be delivering hotter water to Europe inside a couple of years.

    CO2 is, on a decade scale, fairly evenly distributed globally and will cause whatever infra red retention it can on a globally distributed basis (per the AGW thesis). It can not have one effect in the N. Hemisphere and another in the S. Hemisphere. The laws of physics do not change with hemisphere.

    If IR is being retained and heat gain is raised in both hemispheres and over the entire year due to the presence of this globally dispersed and persistent gas, then there must be some tendency for the thermometers to show this.

It is not a very big leap to the notion that an increase of global thermal energy ought to show up broadly distributed. (If it doesn’t, then for the planet to warm by, oh, 2C: someplace has got to warm by 20C. The ‘alternative’ is clearly wrong. Yes, reductio ad absurdum). Now I would not be saying anything if the lack of “distribution” of the heating signal was, oh, 10 degrees of latitude, or 1 to 2 decades, or one continent vs another. All inside lag times and over distances where there is some reason to expect a bit of inertia.

But what do the data show? They show that long lived thermometers from ALL OVER THE PLANET FOR 150 YEARS do not show warming. But adding new thermometers right next to the old ones in the south does add a “warming signal”. So, pray tell, how does the new thermometer in Rio know to carry more temperature into the average and the old one does not? How does one “warm” the average and the other not? (Note: I am not saying the new thermometer shows a rising temperature trend over time. It has, in most cases, too short a lifetime to have a trend. I am saying that it raises the average by its simple existence.)

    The answer is stunningly obvious. The thermometer count carries more weight than the “grids, boxes, and anomalies” of GIStemp can remove.

    CO2 can not warm some thermometers, but not others right next to them (across large aggregates). CO2 can not warm one hemisphere and not the other (it is evenly distributed globally). And CO2 can not explain why the warming does not show up in long lived thermometers, but is all carried in the shorter lived thermometers.

And CO2 can not cause warming in the winter (when incoming IR to retain is lowest) and yet somehow NOT cause warming in the summer (with maximum IR to retain) if the “greenhouse gas” thesis is to hold true. If you don’t believe this, go stand in a greenhouse in summer with the vents closed and the AC turned off.

    It is quite possible it doesn’t happen in August. You just state this as fact that this is how CO2 theory works, but your sources are basically from the media. Perhaps we should take a look at some model runs at the Climate Explorer.

Pardon? Please explain how CO2 can be holding in all the solar heat and IR in August, yet it does not warm, and then explain how it can be holding in LOTS of solar heat and IR in winter, warming the place up dramatically, when there is very little to work with at all.

    And you again assert things that are bordering on snark with no basis. “your sources are basically from the media”. OK, show me how you have any clue what my sources are. Go ahead. State EXACTLY which media. What author. Which TV show or newspaper.

    You can not.

    My sources are the chemistry and physics I learned in school and the data from GHCN. I do NOT use “media” as sources. IMHO, anything in the popular press is a waste of time.

    Please, if you wish to continue to be welcome here, cease speaking lies about me.

    Do you have any desire to tackle GISS Model E?

    Per model runs, be they Climate Explorer, Model E, or any other. I have no interest in them at all at this point. As long as the data going in are trash, and the GIStemp data are trash once processed, the models can do nothing but give you elegant trash.

    Once I’m done with GIStemp (and I still have STEP4_5 to go) I might move on to Model E code and take it apart, but that will be a year or two.

    Until then all I’m willing to say about models is that I’ve run models professionally at a super computer center. Their major advantage is that they inform your ignorance. You learn where you are most wrong when the models diverge from reality.

    Models run on broken data to project a non-tested future are worse than video games. And models of non-linear chaotic processes like weather and climate are a complete waste of time over time intervals beyond a few months. All you will learn is that they diverge, strongly, and fast.

    By the way, what are the annual numbers since 1995 for August?

Annual numbers of what? I did run GIStemp at an annual granularity for the data in the posting, but only pulled out the decade averages for reasons of brevity. IIRC, the annual numbers were not significantly different from the decade averages. A bit of jiggle back and forth, but nothing very interesting. No real trends, and all you see is the annual weather jiggle. Up a degree one year, down a degree the next. I looked at a lot of data, though, so which series in particular you mean will change what I can report.

    The biggest problem since 1999 or so is the dramatic reduction in global thermometers. We drop from about 9000 at peak to a few hundred. That alone invalidates the utility of the data.

  24. E.M.Smith says:

    MikeN
    “I did not say HE was fudging data. I said he uses GIStemp and that GIStemp has broken behaviours that make the data a bad data set to use. Those have been documented here.”

    This is the part that I don’t get. What dataset are you using to get temperatures to run your analysis?

I am using the GHCN data as downloaded via the instructions in GIStemp from NOAA. The GHCN unadjusted data (that have some minor adjustments in them, go figure…). The exact “raw” data that are fed into my GIStemp runs.

    I assumed that the Gistemp temperatures being published for individual stations are unadjusted, and only the global regional means are adjusted data.

    This is not a correct assumption. Please read some of the links I gave you about the GIStemp process.

    Having me constantly point you at the answers that you do not read will get real boring real fast.

    Having me retype postings as endless comments will get real tiresome even faster.

    The entire GIStemp process is documented here in annoying detail. (i.e. the source code is up with comments about it and as I analyse it, I post articles with my findings about that STEP or program).

The very first step of GIStemp (called STEP0) starts changing the data. It does a slightly broken F to C conversion. It does a merger of USHCN and GHCN data in a broken way. It “unadjusts” the UHI adjustment from some of the USHCN records, but not all, and then it fills in missing pieces by making up the missing data. It takes this, and then does a broken UHI “adjustment” that is sometimes backward (i.e. adding warming instead of removing it: see the “Pisa” posting). And the list goes on.
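The “F to C conversion” class of problem is easy to demonstrate in miniature. To be clear, the real STEP0 conversion lives in GIStemp’s Fortran and is not reproduced here; this is only a hedged sketch showing how storing an F → C → F round trip in tenths of a degree can silently shift a reading by a tenth.

```python
# Hedged sketch of the precision-loss class of problem a unit conversion
# with rounding can introduce. This is NOT GIStemp's actual arithmetic;
# it only shows that F -> C -> F, stored in tenths at each step, can
# shift a reading by a tenth of a degree.

def f_to_c_tenths(temp_f):
    """Convert F to C, stored rounded to tenths of a degree."""
    return round((temp_f - 32.0) * 5.0 / 9.0, 1)

def c_to_f_tenths(temp_c):
    """Convert C back to F, again rounded to tenths."""
    return round(temp_c * 9.0 / 5.0 + 32.0, 1)

# Round-trip every tenth of a degree from 50.0 F to 89.9 F.
losses = []
for tenths in range(500, 900):
    t = tenths / 10.0
    roundtrip = c_to_f_tenths(f_to_c_tenths(t))
    losses.append(abs(roundtrip - t))

print(f"worst round-trip error: {max(losses):.2f} F")
```

For example, 50.1 F stores as 10.1 C, which converts back to 50.2 F. Some readings survive unchanged; others come back a tenth off.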

So any GIStemp temperature you have gotten is very much “adjusted” and, some would say, maladjusted.

The global means and anomaly maps are made from the STEP3 or later data, and that is after the bulk of the ‘adjusting’ has been done. Only Hadley SST anomaly maps are blended in for STEP4_5.

    All this is already in all the postings linked to above or via the topic links at the right side.

    Is there some reason you want me to retype all those postings in comments here?

  25. MikeN says:

    >Now, what would happen in the data if CO2 WERE warming the planet? There ought to be a fairly consistent (not perfectly consistent, but modestly consistent) increase in temperatures over the whole planet

    Apologies if I don’t provide links and you feel I am being rude or snarky, but the facts you are presenting are not valid.
    I am not challenging your findings with regards to the thermometers being added and no signal. I am saying that you can’t represent it as violating AGW theory in the way that you have. If the thermometers show no warming as you say, then that does violate AGW theory. But if you are getting warming in some places and less warming in others, that is different. You keep talking about old and new thermometers, but that is not relevant to my point. If it is the new thermometers creating the warming signal, fine, no argument from me. My only dispute is when you make some generalizations like CO2 theory doesn’t operate in this way.

    It does feel like we keep repeating the same things, and in the past I’ve found that that means we are misunderstanding each other’s statements and may be in agreement.
    1)If you have found no warming signal anywhere, anytime, then that violates AGW theory.
    2)If you find old thermometers are different from new thermometers, then that means GISTEMP is messed up.
    I am not challenging that.
    3) If you find winters warm and summers warm less, then that does NOT violate AGW theory.
4) If you find higher latitudes warm and lower ones warm less, then that does NOT violate AGW theory.
    5) If you find 3 or 4 but with one part gets NO warming, then that probably violates AGW theory as well. Keep in mind that even summer warming comes from more warming at nighttime and less at daytime.
    6) New thermometers vs Old thermometers and different locations for thermometers is a different issue, and not one I’m challenging.

    I haven’t provided any links, so if you wish to ignore these details that is up to you. I just think it makes you look foolish if you say things like

August rises from 20.16 C to 21.61 C. Not very much. But January rises from 1.72 C to 4.63 C, or almost 3 whole degrees.

    Somehow CO2 knows to take August off, but works really hard in winter… “I don’t think so Tim.”

    Someone who does research on global warming would come to your site, be VERY interested in your GISTEMP results, then read that comment or things like it, and figure you have no idea what you are talking about and go away.
    I wouldn’t even point anyone to this work because of comments like that. I’m sorry if it sounds rude, but you are off-base in your knowledge of CO2 theory.
    And I am definitely interested in the work you’ve done and am going to work on replicating this. Yet I couldn’t share this with any experts I know because I know they would not spend time to go thru the details after seeing what you have said on another subject that is not relevant to your work.

    Let’s look at it another way. Suppose someone said I’ve built this model that shows warming from CO2, and you pointed out how GISTEMP operates, and that the warming isn’t there so far. If they said your work didn’t matter to the results, that the warming coming from adding new thermometers is irrelevant to the model, then you would discount this guy as not knowing anything, and ignore what he has to say on his subject. OK, bad example since you might ignore him anyways for different reasons.
    Then again, the modelers would be very skeptical of anything that went counter to their theory, and so they would be predisposed to ignore you. The comment above would give them reason to discount what you say.

    Let’s not confuse different statements.

    The bulk of your work says
    GISTEMP is messed up, and does not really show warming.

    then you add in details like
    Warming in winter and less warming in summer contradicts AGW theory.

    I started this comment series with

    I think you should drop the point about CO2 not causing warming in winter. That is what one can expect. CO2 is not the only thing affecting temperature. The warmer the planet gets, the more heat it emits into space. So you have a negative feedback right there.
    It makes sense that you would get more warming in winter, at the poles, and at night, and indeed that is what the experts are saying.

    >Of course I meant summer not winter

    REPLY: I think maybe you read something backwards. CO2 takes summers off. The global warming signal IS PRESENT in WINTER. It is NOT PRESENT in SUMMER. This is exactly NOT what the experts are saying. They are saying we have a positive feedback loop, tipping points, and runaway global warming. Clearly none of these things are consistent with the temperature record that shows consistent tops at 20 something C with no increase, no tipping point, and no positive feedback. It is very consistent with adding more thermometers in warm tropical places and in the Southern Hemisphere (which is also what the data shows happened to thermometer count over time.) -ems.

    >Again the tipping points, feedback loops and runaway warming are different things, and for the most part do not show up in the model runs. This is why I feel your sources come from the media, in which I am including scientists making alarmist statements on their own. You are right that it presents a Catch22, where if you contradict them, they say you don’t understand the science. The Climate Explorer doesn’t show 5C of warming in a single run.
    If you wish to argue the models are flawed, that is fine by me. But you have to understand them first. That is why I point you to the Climate Explorer, which will give you the data for how warming takes effect, according to the models which you would like to refute.

  26. Ken Roberts says:

    You can’t open the doors if they keep changing the locks.

  27. Hunter says:

    Christ, what a [~snip -E.M.Smith The Moderator.
    Please read the “rules” tab. Come back when you can be polite and socially potty trained.]

  28. MikeN says:

    From climate-skeptic

    It is in fact staggeringly unlikely that I would use claims of increasing ice buildup in Antarctica as “proof” that anthropogenic global warming theory as outlined, say, by the fourth IPCC report, is falsified. This is because the models in the fourth IPCC report actually predict increasing snowmass in Antarctica under global warming.

  29. SoundOff says:

    Response to https://chiefio.wordpress.com/2009/08/09/co2-takes-summers-off/

    I’m no scientist but it seems to me that your finding of significant warming in January and insignificant warming in August is a clear fingerprint that the warming you detected is CO2 forced. CO2 fluctuates widely from day to night, and from summer to winter, and even by region being measured (jungle, ocean, desert, polar, i.e. places with little seasonal vegetation). These variations can be up to 25% of total CO2 in close proximity to trees, and very little in a dry meadow land. Even wind mixing makes quite a difference.

    Earth-wide CO2 levels peak in the northern winter, which is when your study found temperature increases to be significant. If CO2 is lower in the summer, the higher humidity of summer does not act as a CO2 feedback, so humidity would not be relevant. Still, I would expect to see some effect in summer, just less effect. I found some evidence that summers over land areas are spreading out temporally rather than getting warmer in absolute terms. Also land areas are better able to radiate off excess heat than oceans, so we might not see much land warming until ocean warming reaches a certain point. The other thing is the planet experienced a mild spike in temperatures 1880-1890 while 2000 was a low year in recent times so you may be inadvertently cherry-picking by choosing these as your end points.

    REPLY: [ Your comment is focused on what happens in a summer VS a winter, what I’m talking about is Summer vs Summer over time and Winter vs Winter over time. You are talking weather, not climate. (An underpinning thesis of Global Warming is that over long periods of time CO2 is “well mixed”. If you want to set that aside, then we need to set aside the Hawaii CO2 measurements as being too local… and admit we don’t know the CO2 level nor changes on the planet. And thus set aside the whole Global Warming debate as baseless…) Over decades, summers do not get warmer. That is not a “tipping point” and it is not “cumulative CO2 over decades warming the planet”. And frankly, even if it were {which it is not} the world would be a better place with warmer winters and no warming in summers. What happens is very consistent with thermometer changes and it is further demonstrated in the detail studies done after this one. (See GHCN The Global Analysis under the GIStemp tab up top.)

Also, the start and end dates are not chosen by me, they are chosen by NASA GISS in the GIStemp code or, for GHCN studies, by NOAA/NCDC as they produce the GHCN data set. I’m looking at what they do to the data and what the unprocessed data show, not coming up with some new thesis. You will also find that the rise in temps is “as we come out of the Little Ice Age” (yes, GIStemp did ‘cherry pick’ the start time at a local cold point), but if you go back to 1720 or so, temperatures are much as they are today. Finally, I use all the data available and do not cut it off in 2000. THE recent peak in the data set is 1998 and we have been cooling since then (1934 being slightly higher in the USA). Frankly, the last few years are, IMHO, indistinguishable from the 1970’s so far.
    -E.M.Smith]

  30. SoundOff says:

    Some relevant quotes to my prior post …

    “Tropical rainforests during the day absorb atmospheric CO2 and emit oxygen. During the night that same rainforest breathes in oxygen and breathes out carbon dioxide.” (from http://en.allexperts.com/q/Global-warming-Climate-3851/2008/6/Night-Day-CO2-Levels.htm)

    “Most of the earth’s land mass is located in the northern hemisphere, as is most of the earth’s vegetation. During autumn and winter, millions of tons of leaves fall from deciduous trees and as they decompose, they give off carbon dioxide. The trees themselves no longer process as much carbon dioxide as they are in somewhat of a dormant state. As a consequence, the earth’s carbon dioxide levels rise. Throughout the spring and summer days, leaves grow rapidly and a great deal of carbon dioxide is consumed in the growing of the leaves and subsequent normal respiration processes – so the CO2 level drops.” (from http://www.carbonify.com/carbon-dioxide-levels.htm)

    These are other references I consulted to prepare my post:
    http://www.esrl.noaa.gov/gmd/ccgg/trends/
    http://planetforlife.com/gwarm/glob1000.html
    http://www.jstor.org/pss/2423189
    http://adsabs.harvard.edu/abs/2001AGUSM…A42A11S
    http://meteo.lcd.lu/papers/co2_patterns/co2_patterns.html
    http://berkeley.edu/news/media/releases/2009/01/21_seasons.shtml
    http://www.youtube.com/watch?v=WXaruC4vJCU (Dr. Stephen Schneider)

  31. SoundOff says:

    I understand the main point of your study to be that summer-to-summer long-term warming has not occurred and that only winter-to-winter long-term warming has occurred. You contend that result is inconsistent with GW. I’m explaining why that result is consistent with GW. I’m not talking about weather.

    Also, I believe the global temperature record is assembled from reporting station anomalies (degrees above or below a reference year). As such, adding more reporting stations adds more resolution but does not weight the data one way or another. Given that the world’s scientists now say that global warming is “unequivocal”, your attempt to prove otherwise is definitely an uphill battle. I commend you for trying. Interesting study nonetheless.

    REPLY: [ The ‘station anomalies’ are calculated at the very end. Long before that, the changes of station locations have had many opportunities to influence the global average temperature. For example, Urban Heat Island adjustments are done using station data up to 1000 km away. The ‘station’ being corrected may have in-fill values that were ‘made up’ from stations up to 1000 km away. If you reduce the number of truly rural nearby stations, leave in a lot of lower latitude stations closer to the equator, and have an ever higher percentage of them at hot airports still flagged as “rural”, you get artificially warmed results. And that is exactly what we see in GHCN / GIStemp interactions. So long before the “box, grid, anomaly” process, the data have been strongly influenced in ways that are not just “adding more precision”. Further, on closer inspection of the data (as noted in the prior reply) we find that the REASON the summer temps don’t go up and the winter temps do is exactly as postulated in this article. The ‘average thermometer’ has moved to places with warmer winters. This posting is not the “be all and end all”; it was just the “first discovery”. Please see the other postings, too, where this is fleshed out in great detail.

    Also, given that the “world’s scientists” who most vocally shouted that it was “unequivocal” have been shown by their own emails to be indulging in attempts to suppress evidence to the contrary and to intimidate journal editors and to suborn the peer review process and… well, let’s just say that “appeal to authority” is broken and will be for quite a while. We need to look at the cleanest data we can get and see what they say, unadorned by CRU manipulation and unadorned by “interpretation”.

    BTW, good luck with that “It’s global warming but only in winter” line of reasoning. How we can have unstoppable global warming when the temperatures never get higher than they used to be (highs being in summer) is an amusing question… -E.M.Smith]

  32. boballab says:

    To help out a little with this:

    Also, I believe the global temperature record is assembled from reporting station anomalies (degrees above or below a reference year). As such, adding more reporting stations adds more resolution but does not weight the data one way or another. Given that the world’s scientists now say that global warming is “unequivocal”, your attempt to prove otherwise is definitely an uphill battle. I commend you for trying. Interesting study nonetheless.

    Your position is riddled with mistakes in understanding, which I will address point by point:

    First, the stations do not report in “anomalies”. They report temperature averages over a monthly time frame. This average is based on the formula (Tmax+Tmin)/2 = Tmean. Now you can trace this back in various databases and you will find that in the US they record each day’s Max and Min in Fahrenheit. From that they add all the Tmaxes together for the month and do the same for the Tmins. They then divide those numbers by the total number of days with readings to get an average Tmax and Tmin for that month. From there they use the (Tmax+Tmin)/2 equation to get the monthly average temperature, called Tmean. From there they convert to Celsius, and that number is what NCDC puts out in one of their datasets. From there they have a couple of adjusted datasets, USHCN and GHCN, which are both used by GISS. Notice GISS is not compiling the datasets themselves; they are getting the information from NCDC. Here is a link to the paper copies for the US:

    http://www7.ncdc.noaa.gov/IPS/coop/coop.html

    Now so far there is no “Anomaly” created.
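    The monthly averaging just described can be sketched as follows (a minimal illustration only, not NCDC’s actual code; the assumption here is that highs and lows are reported for the same set of days):

```python
# Monthly mean per the description above: average the daily highs and the
# daily lows separately, combine as (Tmax + Tmin) / 2, then convert F -> C.
def monthly_tmean_celsius(daily_tmax_f, daily_tmin_f):
    n = len(daily_tmax_f)  # days with readings (assumed equal for max and min)
    avg_tmax = sum(daily_tmax_f) / n
    avg_tmin = sum(daily_tmin_f) / n
    tmean_f = (avg_tmax + avg_tmin) / 2
    return (tmean_f - 32) * 5 / 9

# A month of 68 F highs and 50 F lows gives a Tmean of 15 C.
```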

    Now from these datasets GISS and CRU both use data to compile their work. EM Smith has stuck to GISS, so I will as well; you can look through his posts on the code for GIStemp. What I will address is how those “anomalies” are created and used.

    There is no one standard year that is used by these agencies. The IPCC uses the same baseline as CRU: the 30 year average of 1961 to 1990. GISS uses a 30 year average of 1951 to 1980, and the UAH satellite dataset uses a 20 year average of 1979 to 1998. So what you do to get the “anomaly” is cherry pick your baseline, take the average temperature of those years, and subtract it from your averaged temperatures that we extrapolated from averages. This gives us the “anomaly” graph that they use. However, the baseline that you choose will determine how many positive vs. negative anomalies you have. In this case the GISS baseline shows the most positive anomalies. However this isn’t the final output of the anomaly data. You see, you have to “grid” the anomalies over a map of the world, and this is where your assertion about more stations is wrong.
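    The baseline effect is easy to see with made-up numbers (a sketch only; the temperatures and year choices below are hypothetical, not real station data):

```python
# The same temperature series looks "warmer" or "cooler" depending on which
# reference period is averaged to make the zero point of the anomaly.
temps = {1955: 14.1, 1965: 14.0, 1975: 14.2, 1985: 14.4, 1995: 14.6}

def anomalies(series, base_years):
    baseline = sum(series[y] for y in base_years) / len(base_years)
    return {y: round(t - baseline, 2) for y, t in series.items()}

earlier_base = anomalies(temps, [1955, 1965, 1975])  # a 1951-1980-style choice
later_base   = anomalies(temps, [1965, 1975, 1985])  # a 1961-1990-style choice
# The earlier (cooler) baseline makes more years show positive anomalies.
```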

    You see, for each grid square they average the temperatures together and then make the anomaly for that grid box. So what happens when you have no data for a grid square? Well, what climate scientists do is average from nearby squares. So the more stations used, the more of those missing grids you can fill in. That is what happened after 1990: over time the number of stations used in the GHCN dataset dropped, and depending on which stations are used to “infill” you can get widely inaccurate readings. For example, would the 4 thermometers used in California (1 at the SF airport, 1 at the beach in San Diego, and the two near the beach in the LA area) averaged together be representative of the temperature at Mt. Shasta? Or up by Lake Tahoe? Or in the Napa Valley?
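    A toy sketch of that infilling idea (an illustration of the concept only, not GIStemp’s or NCDC’s actual algorithm; the grid layout and neighbor rule are assumptions):

```python
# Boxes with no stations borrow the average of adjacent boxes that have data,
# so with sparse coverage a few stations can dominate a large area.
def grid_anomalies(boxes):
    """boxes: dict mapping (row, col) -> list of station anomalies (may be empty)."""
    direct = {k: sum(v) / len(v) for k, v in boxes.items() if v}
    filled = dict(direct)
    for (r, c), stations in boxes.items():
        if stations:
            continue
        neighbors = [direct[n] for n in [(r-1, c), (r+1, c), (r, c-1), (r, c+1)]
                     if n in direct]
        if neighbors:  # infill an empty box from whatever neighbors report
            filled[(r, c)] = sum(neighbors) / len(neighbors)
    return filled

# One warm coastal box spreads into two empty neighbors; the far corner,
# with no reporting neighbor, stays blank.
grid = {(0, 0): [1.2], (0, 1): [], (1, 0): [], (1, 1): []}
result = grid_anomalies(grid)
```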

    Now, that “the world’s scientists” nonsense is just that: nonsense. First, not every scientist in the world agrees with the stance of the IPCC CO2/water vapor feedback model. Some, like Dr. Roger Pielke Sr., point out that the role of land use changes is not taken into account. Others, such as Dr. Roy Spencer, point out that the feedback mechanism of clouds is unknown at this time and that the best empirical evidence so far points to a net negative feedback, which is counter to the IPCC AGW line.

    From Dr. Pielke:

    The obvious response to these claims is that if we cannot predict weather features such as the Arctic oscillation or an El Niño under current climate, how can anyone credibly claim we have predictive skill decades into the future from both natural and human caused climate forcings? The short answer is that they cannot.

    http://pielkeclimatesci.wordpress.com/2010/01/08/global-lower-tropospheric-temperature-report-december-2009-and-for-the-year-2009/

    From Dr. Spencer:

    Any way you look at it, the evidence for internally-forced climate change is pretty clear. Based upon this satellite evidence alone, I do not see how the IPCC can continue to ignore internally-forced variations in the climate system. The evidence for its existence is there for all to see, and in my opinion, the IPCC’s lack of diagnostic skill in this matter verges on scientific malpractice.

    http://www.drroyspencer.com/2010/01/clouds-dominate-co2-as-a-climate-driver-since-2000/

    So your assertion that the world’s scientists all agree that AGW theory is unequivocal is baseless. Matter of fact, if you go and investigate what happened to the 1995 IPCC report that was changed by one man (by the name of Ben Santer, who is a starring member of Climategate), you will find that the IPCC statement is fabricated: it was changed from “we don’t know when we will be able to detect if AGW is occurring” to “we already found AGW”. You can read about that here, where it is discussed by an IPCC reviewer:
    http://www.tech-know.eu/uploads/Spinning_the_Climate.pdf

  33. SoundOff says:

    Thanks for your information. Of course I don’t expect stations to report in anomalies, and I know the reference year is actually a mean of several arbitrary years that can be adjusted for any particular use; I was merely simplifying my point for brevity (I have a background in quantitative methods). As long as anomalies are introduced at the detail level before compiling global numbers, the actual temperatures become irrelevant. If I thought of this, I’m sure NCDC did too. In fact it seems they did. On their web site, NCDC says they calculate the station anomalies and then the grid box average anomalies from the station anomalies within each grid box, at least for USHCN. They also calculate the difference between each grid box’s average anomaly and the anomalies for each station (100,000 values, 1221+ high-quality stations in the lower 48 states) in the grid box, and then the standard deviation of these differences, which is used as a way of identifying when inadequate data is present.

    As I understand it, the grid box approach is used to prevent any one area weighting too highly because of an excess of stations. If the grid size is very small, there is less distortion from nearby stations but the trade-off is many grids can be empty or nearly so (and data in such grids becomes unreliable). There is an art and science to balancing these two concerns and lots of studies exist on this narrow area alone. In the end, NCDC says it settled on a 2.5° x 3.5° grid for USHCN data (about 8 stations per grid box). Certainly SF and San Diego wouldn’t be inside the same grid box.

    I’m sorry if you disagree with me that anomalies taken at a detail level are not distorted by the addition of more data points. We will just have to disagree on this, as this is my area of expertise.

    I have discussed USHCN above but I’m lacking information on GHCN, which has more than 7,000 land surface observing stations. I can’t imagine they would use a different approach for GHCN, but the grid box sizes are likely larger (5° x 5° I think). Is there a (free) site at NCDC where they explain all steps in the same detail as boballab? I searched but couldn’t find one there, just high-level explanations.

    BTW, I didn’t say “all scientists” in my earlier post. I’ve read papers by Lindzen and others and I am well aware that some scientists disagree that global warming has occurred recently, or more likely that we can’t be sure of the cause (natural vs. human). One has to keep an open mind and look at all the papers and weigh their arguments and evidence to arrive at an opinion. That’s why I’m here.

  34. boballab says:

    I didn’t find all the steps at one site, unfortunately. I actually backtracked the data. The first link I gave takes you to a NOAA site that has the paper copies sent in to them, in PDF format. I had to bring them up and look at how they did it. From there you have to run around NOAA for their different products; however, in another thread EM Smith gives a link to their ftp site (free). That site has files with some of what you are looking for, including the GHCN Max, Min, Mean and Precip datasets and some Fortran programs. He gives a link to their instructions as well (I believe; this is from memory) to go with the Readme files.

    For GISS they are actually the easiest to find imo. Go to this page:

    http://data.giss.nasa.gov/gistemp/station_data/

    There is a link to how they work their data. From it you learn that GISS takes the monthly means that GHCN/USHCN use and makes them into seasonal averages, and then averages those averages into an Annual Mean. Also there is a link to the supposed actual stations used by GISS; however, it is not accurate. Just to look for myself, I started looking through NZ data and found that Auckland is listed as used, but there is no final GISS output for Auckland. I emailed GISS and they confirmed my suspicion of a clerical mistake: Auckland should not have been on the station list. The reason had to do with not being able to use their UHI adjustment technique on it. For UHI adjustment you need 3 rural stations with at minimum 20 years of record to fix the data with.

    You found one of the 284 urban or peri-urban stations that were dropped in our homogenization procedure because there was not a sufficiently long overlap of its record with a combination of at least 3 rural neighbors.

    There are 2 rural neighbors within 500 km and a third one within 1000km. The overlap of the combination of those 3 records and the Auckland record was 19 years, just 1 year short of the 20-year limit that our procedure requires. Non-rural stations whose trend cannot be adjusted to match their rural neighbors are dropped. The effect is similar to using only rural stations to find the global temperature trend.

    (Note the meteorological Annual Mean starts in the December of the previous year, for anyone that wants to look; i.e. for the year 2009 the year started in Dec 2008.)
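    The seasonal-then-annual averaging just described can be sketched like this (an illustration, not GISS’s code; the season groupings follow the standard meteorological D-J-F, M-A-M, J-J-A, S-O-N quarters):

```python
# Annual mean as an average of four seasonal averages, where the DJF season
# takes its December from the *previous* calendar year.
SEASONS = [("DJF", [12, 1, 2]), ("MAM", [3, 4, 5]),
           ("JJA", [6, 7, 8]), ("SON", [9, 10, 11])]

def annual_mean(monthly, year):
    """monthly: dict mapping (year, month) -> monthly mean temperature."""
    seasonal = []
    for _name, months in SEASONS:
        vals = [monthly[(year - 1 if m == 12 else year, m)] for m in months]
        seasonal.append(sum(vals) / 3)   # average of averages, step 1
    return sum(seasonal) / 4             # average of averages, step 2
```

Note that a cold December in the previous year pulls the current year’s annual mean down, which is one reason averages-of-averages can surprise people.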

    If you just want to see the anomaly output from GISS, here are 2 other links: one is just for GISS land data, the other is when Had SST data is added in:

    http://data.giss.nasa.gov/gistemp/tabledata/GLB.Ts+dSST.txt

    http://data.giss.nasa.gov/gistemp/tabledata/GLB.Ts.txt

    Another interesting part of the GISS site is that they have a page where you can change the baseline of the anomaly map, which allows you to see how the chosen baseline affects the output.

    http://data.giss.nasa.gov/gistemp/maps/

    My personal little hobby is not so much how GISS works the data; it’s how NOAA does the GHCN and USHCN data, especially the Tmax and Tmin adjustments. Besides working out spreadsheets of the means, I look at Tmax and Tmin, and the longest part is actually comparing those figures to the paper copies on file and seeing the differences. And there are differences.

    As to the station number part, you are partly right: if you had data to use in the established grid boxes it wouldn’t be such a problem. However, for whatever reason NOAA doesn’t use the data from the stations in those grid boxes, so in turn GISS has no data to work with. So GISS has a choice: leave that grid box blank, or average in from grids that have data. They decided on the second option. So there, more stations means more accuracy. There was a recent post over at WUWT that shows that problem (as well as comparing different baselines and timescales). They compared the anomaly map produced by GISS to the Modis satellite anomaly map. What is striking between the two is that the “hot spots” are centered in the same places, but on the satellite map the “hotspots” are much smaller in area. Where some of those big GISS “hotspots” are located is also where GHCN has no data in their datasets.

    http://wattsupwiththat.com/2009/12/31/nasa-vs-nasa-which-temperature-anomaly-map-to-believe/

    Take a good look at the hotspot over northern Canada in the GISS map, then at the same hotspot on the Modis map. Notice the size difference? The reason the GISS one is so big is that this is one of the areas where NOAA has no data in their GHCN dataset, so GISS averages around from sparse data. Modis on the other hand has more data to work with, so it is more accurate about the size of the big hotspot. (Phil Jones talks about this in one of the emails as a known problem.)

    Also the hotspots are an artifact of Baseline bias. Go to the map I told you where you can change the baseline and pick a nice big hotspot that sticks out in the 1951-80 baseline, then switch to a 1961-90 baseline which is what the IPCC uses, note the difference in colors. Now use the satellite baseline of 79-98 and see another difference.

    That is one of the problems with using gridded, color coded anomaly maps: they are very baseline dependent, due to the human proclivity of looking at the pretty colors and going “wow, look at all that red”. When you change the baseline you get a different result, not so much red, and a “what’s all the fuss about?” The same principle applies to anomaly graphs using linear regression trend lines and timescales. Change the timescale you use and you are picking a trend line. Pick the baseline you use and you can either inflate or deflate how much you are above your “zero point”. I did a presentation on this once when someone played that optical trick by comparing two different timescales and two different baselines. (The person that played that optical trick is a climate scientist.)

    The comparison was between UAH satellite and GISS data. I have already shown the different baselines between the two, but he went a step further: he used a timescale of 1975-2000 for GISS and compared it to a 1980-2000 timescale for UAH. He also kept the two on separate graphs while trying to compare the two.

    http://tinypic.com/usermedia.php?uo=f%2BydX%2FAT8aVYhMEWRnSK8Yh4l5k2TGxc

    This first figure shows the two GISS Anomaly products compared to UAH using his timescales and each respective Baseline. Notice the GISS series are both well above the Zero point.

    http://tinypic.com/usermedia.php?uo=f%2BydX%2FAT8aWvfZHVaMFShIh4l5k2TGxc

    This shows what happens when you use the same baseline of 79-98 for both. Now the GISS anomalies are on the same zero point as UAH, but the timescales are different. As shown, different baselines can give you a false impression when doing comparisons using anomalies.

    http://tinypic.com/usermedia.php?uo=f%2BydX%2FAT8aV%2Fcx9HMOcR84h4l5k2TGxc

    Now this one shows what happens when you put both on the same timescale of data. Notice the trend for GISS drops down [hey, I just reduced Global Warming :)]. When you throw in the cooler 75 to 80 starting temps for GISS, it artificially inflates the trend line compared to UAH.
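    The trend-inflation effect from a cooler start period can be shown with made-up numbers (a toy illustration only; the temperatures below are invented, not GISS or UAH data):

```python
# Ordinary least-squares slope: a window that starts in a cool dip shows a
# positive trend even when the later data are completely flat.
def slope(years, temps):
    n = len(years)
    my, mt = sum(years) / n, sum(temps) / n
    num = sum((y - my) * (t - mt) for y, t in zip(years, temps))
    den = sum((y - my) ** 2 for y in years)
    return num / den

years = list(range(1975, 2001))
temps = [13.5] * 5 + [14.0] * 21   # cool 1975-79 start, then flat

# The full 1975-2000 window yields a warming slope; 1980-2000 yields zero.
```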

    http://tinypic.com/usermedia.php?uo=f%2BydX%2FAT8aVMzxpeYciqY4h4l5k2TGxc

    This one shows the original graph, but then I added in the correction for timescale and baseline, so we are comparing apples to apples and not apples to oranges as before.

    http://tinypic.com/usermedia.php?uo=f%2BydX%2FAT8aWqlS1BDxTtX4h4l5k2TGxc

    This is what the person doing the comparison should have used right from the beginning, but he tried to mislead with optical trickery. Most people don’t realize that there is no standard in climate science for baselines or timescales; it is all arbitrary. Matter of fact, in one of the released emails Phil Jones shows that the whole “it has to be 30 years for climate” schtick is crap. It was arbitrarily picked for ease of computation and nothing more.

    There won’t be any move by IPCC to go for 1971-2000, as it won’t help with satellite series or the models. 1981-2000 helps with MSU series and the much better Reanalyses and also globally-complete SST.

    20 years (1981-2000) isn’t 30 years, but the rationale for 30 years isn’t that compelling. The original argument was for 35 years around 1900 because Bruckner found 35 cycles in some west Russian lakes (hence periods like 1881-1915). This went to 30 as it easier to compute.

    Neil
    There is a preference in the atmospheric observations chapter of IPCC AR4 to stay with the 1961-1990 normals. This is partly because a change of normals confuses users, e.g. anomalies will seem less positive than before if we change to newer normals, so the impression of global warming will be muted.

    http://www.eastangliaemails.com/emails.php?eid=462&filename=1105019698.txt

    I would like to stick with 1961-90. I don’t want to change this until 1981-2010 is complete, for 3 reasons : 1) We need 30 years and 81-10 will get all the MSU in nicely, and 2) I will be near retirement !! 3) is one of perception. As climatologists we are often changing base periods and have done for years. I remember getting a number of comments when I changed from 1951-80 to 1961-90. If we go to a more recent one the anomalies will seem less warm – I know this makes no sense scientifically, but it gives the skeptics something to go on about ! If we do the simple way, they will say we aren’t doing it properly.

    http://www.eastangliaemails.com/emails.php?eid=455&filename=1103583356.txt

    Notice that of his three reasons, numbers 2 and 3 stand out because they lack a basis in science, and they are tied together; he states the case quite well afterwards. He is both right and wrong. Right in the sense that papers cited on those baselines then don’t match up: pick one and stick with it unless you have a good reason to change. The reason to change that he cites, matching up to the satellites with a 30 year trend, makes sense to do; switching from 51-80 to 61-90 because the WMO liked those dates better was and is crap and shouldn’t have been done. Also notice that he is concerned with perception. Phil knows that using that other baseline would cause changes in that gridded map and cause people to start wondering, and as he states, they have been moving that baseline all along. Why move it if, as he states, it has no scientific bearing? Perception/packaging to sell a product, nothing more, which is where he is wrong. If your science depends on packaging to make your case, you have serious problems. Also, reading the whole chain is fascinating, and Trenberth makes my case about anomalies and the zero point.

    Another interesting email which answers your grid box question is this:

    Another option is to use the infilled 5 by 5 dataset that Tom Smith has put together at NCDC. All infilling has the problem that when there is little data it tends to revert to the 1961-90 average of zero.

    All infilling techniques do this – alluded to countless times by Kevin Trenberth and this is in Ch 3 of AR4. This infilling is in the current monitoring version of NCDC’s product. The infilling is partly the reason they got 2005 so warm, by extrapolating across the Arctic from the coastal stations. I think NCDC and the HC regard the permanent sea ice as ‘land’, as it effectively is.

    As a side issue , the disappearance of sea ice in the Arctic is going to cause loads of problems monitoring temps there as when SST data have come in from the areas that have been mostly sea ice, it is always warm as the 61-90 means are close to -1.8C. Been talking to Nick Rayner about this. It isn’t serious yet, but it’s getting to be a problem.

    In the AR4 chapter, we had to exclude the SST from the Arctic plot as the Arctic (north of 65N) from 1950 was above the 61-90 average for most of the years that had enough data to estimate a value.

    http://www.eastangliaemails.com/emails.php?eid=795&filename=1178107838.txt

    As shown, infilling is a known cause of temperature inflation, and NCDC uses a 5×5 degree grid box. Also from these emails you notice that climate science is still in its infancy and the “rules” are still being made up in an ad hoc way. There is no set baseline everyone uses: WMO/CRU/IPCC use one, GISS another, the satellites a third, and I have seen other countries use the 1971-2000 baseline. There is no standardized start point for timescales. GISS has a start point where they lop off data prior to that point, which as shown will influence the simple linear regression trend line they use as eye candy for the less knowledgeable. That type of stuff makes me uneasy to base policy decisions on, and when you couple that with what some of the lead authors have done over the years, you really wonder what they are trying to do.

  35. E.M.Smith says:

    SoundOff
    As long as anomalies are introduced at the detail level before compiling global numbers, the actual temperatures become irrelevant.

    And that is EXACTLY the problem. They are NOT. Re-read what I said above. Averages of temperatures are used for all sorts of things long before the anomaly calculation stage. The actual temperatures are very relevant to the GIStemp process, until the last box / grid step.

    I’ve not been investigating the NCDC “adjusted” series, only their “unadjusted” series and the GIStemp process (those being the much more widely used things) so I’m not going to respond to discussions of what the NCDC “adjusted” process is nor how it uses grids.

    For GIStemp / GHCN your following paragraph is very important:

    As I understand it, the grid box approach is used to prevent any one area weighting too highly because of an excess of stations. If the grid size is very small, there is less distortion from nearby stations but the trade-off is many grids can be empty or nearly so (and data in such grids becomes unreliable). There is an art and science to balancing these two concerns and lots of studies exist on this narrow area alone. In the end, NCDC says it settled on a 2.5° x 3.5° grid for USHCN data (about 8 stations per grid box). Certainly SF and San Diego wouldn’t be inside the same grid box.

    Now, for GHCN, we have all of about 1176 stations that survive the selection in the GIStemp processing and get to gridding and boxing, where we have 8000 boxes to fill. See the problem? We have lots of boxes with NO stations and lots of them with very few stations. AND we have lots of boxes that use THE SAME station (especially ocean islands filling in all the ocean grids around them). So by the time you get to the “grid and box” stage, well, you don’t have a lot to work with for the globe, and your statement is “spot on”: “data in such grids becomes unreliable”.

    Since the USHCN only covers the USA, it is not nearly as important as the GHCN in determining “global warming”… And, until about a month ago, GIStemp had only used it up to 2007. While they recently put it in from 2007 to date, they did so by putting in an “adjusted” version with yet more data manipulation changing the past to be colder…

    I’m sorry if you disagree with me that anomalies taken at a detail level are not distorted by the addition of more data points. We will just have to disagree on this, as this is my area of expertise.

    But I don’t disagree with the notion that anomalies done at “the detail level are not distorted”. I am simply pointing out that the anomalies are done way after “the detail level” has done all sorts of irreversible things, and that many of them warm the record (such as making Pisa 1.4 C colder in the past as a UHI “correction” that goes in the wrong direction…). That happens in STEP2, program PApars.f, while the anomaly step comes in at STEP3.

    The anomaly maps are the PRODUCT of GIStemp.

    I have discussed USHCN above but I’m lacking information on GHCN, which has more than 7,000 land surface observing stations.

    This is highly misleading (not of you, of them…). Only 1500 or so of those stations are currently active. So yes, about 1970 there were about 5996 stations active in the GHCN data set. But NOAA / NCDC have deleted loads of them from the present time periods. The deletions have focused on the colder stations (high altitude and high latitude in particular). So your “anomalies” are calculated relative to a “baseline” grid with, for example, the Canadian Rocky Mountains in it, but then the present data are calculated without such mountains… and with fill-in and UHI “corrections” done using stations up to 1000 km away (and with the “high cold places” unavailable to participate, due to that deletion).

    For example, the Vancouver Canada station is kept (they grow palm trees on the B.C. beach …) but the high cold Rockies are left out of the present (but kept in the baseline). This strongly biases the data toward warming.
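    A toy demonstration of that bias mechanism, with purely hypothetical numbers (per-station anomalies would sidestep this; the point above is that actual temperatures are averaged before the anomaly step):

```python
# A cold mountain station contributes to the baseline average but is deleted
# from the present data, so a simple average "warms" with no real warming.
baseline_stations = {"coastal": 10.0, "mountain": -5.0}
present_stations  = {"coastal": 10.0}   # mountain station dropped from present

baseline = sum(baseline_stations.values()) / len(baseline_stations)   # 2.5 C
present  = sum(present_stations.values()) / len(present_stations)     # 10.0 C
apparent_warming = present - baseline   # entirely from the composition change
```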

    GIStemp is a filter that tries to filter that out, and fails. Not surprising, since no filter is perfect. (The assertion that “the anomaly grid will cure all ills” amounts to an assertion that “GIStemp is a perfect filter”. It isn’t.)

    In a very real sense the argument comes down to that. An assertion that “it’s OK because it is anomalies” depends on the assertion that the filter is a perfect one. I look at the extreme input changes (deletion of most cold thermometers) and the “anomalies at the end, not the beginning” processing and say “It’s not that perfect!”.

    If someone wants to convince me that it is perfect, it would not be that hard. Just put all the data back in from cold places (and without the kind of ‘recooking’ that was done turning USHCN into USHCN.v2 ) and show me that the anomaly maps do not change at all.

    Basically, show the QA runs for sensitivity and “Q” of GIStemp as a filter.

  36. boballab says:

    EM:

    I don’t know if you saw the final act from Willis’ look at Darwin in GHCN, but in case you didn’t, here goes:

    Willis emailed NOAA, and Dr. Tom Peterson responded that the methods NCDC uses to adjust the data were reviewed recently and they decided a new approach was needed. So they decided to use the USHCN v2 adjustment procedure in GHCN, and according to Dr. Peterson that dataset will be available this spring:

    Partly in response to this concern, over the course of many years, a team here at NCDC developed a new approach to make homogeneity adjustments that had several advantages over the old approaches. Rather than building reference series it does a complex series of pairwise comparisons. Rather than using an adjustment technique (paper sent) that saw every change as a step function (which as the homogeneity review paper indicates was pretty standard back in the mid-1990s) the new approach can also look at slight trend differences (e.g., those that might be expected to be caused by the growth of a tree to the west of a station increasingly shading the station site in the late afternoon and thereby cooling maximum temperature data). That work was done by Matt Menne, Claude Williams and Russ Vose with papers published this year in the Journal of Climate (homogeneity adjustments) and the Bulletin of the AMS (USHCN version 2 which uses this technique).

    However there is good news in this: They will be releasing the processing software and the intermediate steps with it.

    We currently expect to release the new version of GHCN in February or March along with all the processing software and intermediate files which will dramatically increase the transparency of our process and make the job of people like you who evaluate and try to duplicate surface temperature data processing much easier.

    http://wattsupwiththat.com/2009/12/20/darwin-zero-before-and-after/#more-14358

    Now I know NOAA has taken a beating in the past about some of their actions, but hopefully this is a step in the right direction. One thing that you can’t hold NOAA accountable for is the condition of the data they get from, let’s say, North Korea or Iran or some African country ruled by a tinpot dictator. Call me crazy, but I just don’t see them being all that co-operative with a US government agency, and I wouldn’t put it past Kim Jong Il to tell his people to cook the data they send to NOAA.

  37. E.M.Smith says:

    @boballab

    Thanks! I’d missed that. I’m a bit over extended right now. (Along with the usual, we have a slew of various anniversaries and birthdays in the extended family at the start of the new year, AND somehow I’ve got several vehicles that all have tags expiring now, and…)

    The only thing that worries me about this is that the USHCN change to USHCN.v2 on first inspection “warms the record”.

    It will be nice to have the code to see just what they are doing, so that is a step in the right direction, but one can only hope that they also release the input data to go with it…

    Well, this raises the priority for me to an A/B compare of USHCN and USHCN.v2 to document the “anomaly” between them 9-}

  38. boballab says:

    Actually, what will be even more helpful than the code is that they are supposed to release the intermediate files as well. That should make it easier to spot where/if things go off the tracks. At least to me it seems that way; basically they are going to show what the data is supposed to look like after each step.

    After looking at everything I actually sympathize with the GISS crew in a way. They aren’t getting raw data; they are getting more like the Swanson TV dinner version of data: it already comes pre-cooked, you just need to heat it up.

    One reason I believe this is the case is NZ. Turns out they fired the guy in charge not that long ago (I don’t know what for), but the interesting thing is that before being in charge of the NZ dataset he worked at……….

    wait for it…..

    wait for it….

    The CRU in England, which he went back to after getting the axe in NZ. He was also in the Climategate Emails.

    Dr Salinger was dismissed in April for repeated breaches of NIWA policy.

    “It is always disappointing when there is a breakdown in an employment relationship, particularly with a long standing employee, and I am sure there are lessons to be learned on both sides. We would just like to wish Jim all the best in his future endeavours”, said NIWA Chief Executive John Morgan.

    http://www.niwa.co.nz/news-and-publications/news/all/2009/niwa-welcomes-jim-salinger-employment-decision

    So you have to ask the question was NIWA sending in the raw or the processed data to GHCN?

  39. SoundOff says:

    I only had time to quickly read through everything tonight and no time to explore the links. You (Ed & boballab) have both provided much useful information. It may take me a few weeks to digest it all. In general I don’t disagree with anything said in your latest posts and I sort of get your concerns now. I don’t worry much about the baseline being used in any graph (optics), just the line direction and proper scales. I guess there’s some lab (office) politics going on around baselines at CRU (I’ve read many of their emails too, but it’s hard to judge them without context).

    The only conflict I still see in the latest posts is that NCDC said anomalies are calculated at the station level while Ed says much averaging occurs before the anomalies are calculated. I will let this difference go for now until I reread earlier posts and do more independent research (as well as my earlier point about winter warming being indicative of CO2 forcing since CO2 levels peak in winter).

    Why not do a peer-reviewed paper if there’s substance to the issue raised here? It might get those involved to clean up their record keeping/data interpretation. This may be the only site that deals with the issue of migrating thermometers in detail (at least, the only one I’ve found after repeated searches on likely words).

    Thanks and keep at it.

  40. E.M.Smith says:

    @SoundOff:

    There are a bunch of “little things” that must be kept straight as you read through all the “stuff” out there. In some cases, it looks to me like deliberate attempts are made to confuse and confound issues just to prevent easy insight (but always with just enough validity to have “plausible deniability”…)

    One easy, if not very important, example: GHCN is the data set that gathers together data from around the world. It is then published by NOAA / NCDC. You look at it and think “Hey, here is the temperature data”. Everyone calls it the temperature data. But it is not. It is averages of averages of adjusted temperature data. So NCDC has some “magic sauce” they apply to the temperature data (called by many names, like in-homogeneity adjustment, and in-fill, and “QA adjustments”). Then they average the daily MIN and MAX temperatures that have been so “adjusted”. THEN they take those (at most 31) values for the month and average THEM together. This is the “monthly mean” that is fed into GIStemp. NOAA / NCDC call it “GHCN unadjusted”. Yet it is full of adjustments. What does GISS call it? They call it GHCN “raw” data on the web site that produces charts for stations (from various steps in the GIStemp processing) and they talk about the things plotted as “temperatures”. But by this point the things plotted are averages of averages of adjusted temperatures. Anything BUT “raw temperatures”…
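    If a toy example helps, here is a little sketch of that layering (invented numbers, and a placeholder where NCDC’s actual adjustments would go; it shows where the adjustment sits in the stack, not their real code):

```python
# Toy sketch of the layered averaging described above.
# "adjust" is a placeholder: NCDC's actual homogeneity / QA / in-fill
# steps are NOT reproduced here, only WHERE they sit in the layering.

def adjust(temp):
    """Placeholder for NCDC's adjustments (identity here)."""
    return temp

def daily_mean(tmax, tmin):
    # Mean of the adjusted daily extremes (one plausible convention).
    return (adjust(tmax) + adjust(tmin)) / 2.0

def monthly_mean(days):
    """days: list of (tmax, tmin) pairs for one calendar month."""
    daily = [daily_mean(hi, lo) for hi, lo in days]
    return sum(daily) / len(daily)

# A made-up 3-day "month". By the time GIStemp sees this one number,
# it is already an average of averages of "adjusted" values.
march = [(15.0, 5.0), (16.0, 6.0), (14.0, 4.0)]
print(monthly_mean(march))  # 10.0
```

    Every layer of averaging happens AFTER the adjustment layer, which is why calling the result “unadjusted” or “raw” grates so much.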

    So that’s a pretty easy to understand example. The names keep changing to confuse the observer and the bucket labels don’t reflect what is really in the buckets. So you need to keep your own set of clear descriptions in mind even while reading the ones that mislead. An anoyance, but such is life when trying to keep a tidy mind…

    As these “data” travel through GIStemp, they are kept distinct by StationID. They are changed (via a variety of things, like MORE “homogenizing” and UHI “adjustment”) and missing bits are just made up and more “fill in” happens. But all still kept in discrete buckets by StationID (though they do merge records from the same physical location but with different “modification histories” such as USHCN vs GHCN). There are a couple of places where a set of records is averaged together and used for some adjustment of the “base data” (I have trouble calling it “raw GHCN”, and you can now see why). For example, a set of 10 or so “nearby” (that can be 1000 km away) “rural” (that can include major airports and towns with significant Urban Heat Island) stations are averaged together and an “offset” produced from the target station. This is then used to “UHI adjust” the target station for Urban Heat Island Effect removal (in the PApars.f program). By a bit of a stretch you could call that use of an “offset from an average of peers” a calculation of an anomaly. It’s not quite a lie, but far from what folks think of when talking about an anomaly… Then just after that UHI adjustment (that moves Pisa, Italy, 1.4 C in the wrong direction for UHI…) those averages of peers are thrown away. Only the adjusted individual station data proceeds to the next step.
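    As a rough sketch of that “offset from an average of peers” idea (this is NOT the actual PApars.f algorithm, which works on trends and has many more wrinkles; the stations and values below are invented just to show the shape of it):

```python
# Rough sketch of an "offset from an average of peers" adjustment.
# NOT the PApars.f algorithm; all station values below are invented.

def peer_average(peer_series):
    """Average the 'rural' neighbor series, year by year."""
    return [sum(vals) / len(vals) for vals in zip(*peer_series)]

def uhi_adjust(target, peers):
    """Shift the target series by its mean offset from the peer average."""
    avg = peer_average(peers)
    offset = sum(t - a for t, a in zip(target, avg)) / len(target)
    return [t - offset for t in target]

# Invented annual means (C): one "urban" target, three "rural" peers.
target = [12.0, 12.5, 13.0]
peers = [[11.0, 11.2, 11.4],
         [10.8, 11.0, 11.2],
         [11.2, 11.4, 11.6]]

adjusted = uhi_adjust(target, peers)
# The peer average is then discarded; only 'adjusted' moves on.
```

    Note that if the “rural” peers are themselves airports and warming towns, the offset you remove is not really UHI at all.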

    And so it goes.

    In the very end, these widely adjusted individual station “data” are then used to create an anomaly map. Yet even here you do not get to average together thousands of data points. The stations go through a “Grid and Box” step that takes a bit over 1000 surviving stations (as of my last analysis pre-USHCN.v2) in 2009 and maps them onto 8000 “boxes” on a global grid. You will notice that there are far more boxes than “data”. It makes up the “missing bits” by an odd interpolation process. (There is also an odd “two step” in making two different sets of zonal averages that I’ll gloss over, since it’s a few pages on its own… but a set of “zonal means” is calculated so you can compare boxes to it later).

    OK, many boxes end up empty. A lot have a single station. Some have a couple of stations. Many have data from several surrounding boxes used to “fill in” (and which stations get used varies over the years). THESE are what are then used to make your anomaly maps via comparison to the “baseline” when there was almost “one station on average” per box … So we are quite strongly comparing “apples to oranges” and NOT doing it a few thousand at a time, but in most cases in single digits at a time. And that “monthly anomaly map” is based on a single “monthly mean of daily MIN/MAX means” for each single station; not on 62 daily temperature data items directly. Exactly how a box in the baseline compared to a zonal mean and a box now compared to the baseline all works out is, er, something for another day…

    So what happens to that “law of large numbers” when the “large number of temperatures” being averaged is 2?! What happens to that “anomaly map” process when it’s comparing different stations in the baseline to the present due to thermometer deletions and the need to spread ever fewer thermometers into ever more diffused places (up to 1200 km away)? And how much does that “temperatures don’t matter due to anomalies being used” really work when the anomaly is the product at the end and not applied at the start? (And it is being applied to two very, very different things at that end. Sometimes just two individual very different stations that happen to have ended up being used to “in fill” a box at different points in time.) And that is why it rankles me to see thermometers deleted. Because I know that it will directly influence individual boxes in the Grid/Box step and will cause broken anomaly map results.
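    The arithmetic of that worry fits in a toy box (all numbers invented):

```python
# Toy arithmetic for the "law of large numbers with n = 2" worry.
# All numbers invented; a real box would have monthly means, not these.

def box_mean(stations):
    """A grid box value is just the mean of whatever stations fill it."""
    return sum(stations) / len(stations)

# Baseline period: this box happened to be filled from two stations.
baseline = box_mean([10.0, 14.0])  # cool station + warm station -> 12.0

# Current period: the cooler station was deleted from the record.
current = box_mean([14.0])         # warm station alone -> 14.0

anomaly = current - baseline
print(anomaly)  # 2.0
```

    Two degrees of “warming” in that box, and not one real temperature changed; only the thermometer count did.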

    Now GISS can, in fact, state that “anomalies are calculated at the station level” because they are. But just not always the same stations and not using the “raw” station data. Sometimes in comparison to a “zonal mean”. And at the very end, after all the “magic sauce” has been created. Not a lie; but far far from “up front and true”.

    But you don’t have to take my word for it, the source code is “up” as pages on this site. Just go to the GIStemp tab up top and scroll down to the bottom of that page. You will find links to each step of the GIStemp process (as of a few months back) and can read what I think it does but also look at the code yourself. The anomaly step is STEP3 and you will find the programs named with “ann” or “SBBX” (subbox) in the name.

    https://chiefio.wordpress.com/2009/03/07/gistemp-step3-the-process/

    Per baselines. If you look at a long temperature series, such as the one from Sweden, you find that GISS has set their baseline smack in a cold dip. Hard to have a “negative anomaly” if you cherry-picked the cold dip. It was warmer in 1720. It was warmer in 1930. Etc. What CRU et al. are talking about I’ll leave for another day (other than to point out that they have, IMHO, 2 worries. One is the valid concern about all the extant work that can no longer be directly compared if you swap baselines. The other, I suspect, is a concern about global warming going away with a changed baseline. Some of the emails mention a worry about lower values not looking so hot ;-) )

    Basically, with the baseline set on a cold bottom, ALL anomalies in a normal cyclical series will be “positive anomalies” and support the “warming” view. (While you are correct that the trend in the anomalies would not show warming, what is shown is the bright red anomaly maps. That they don’t get any redder is lost on most folks.)
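    A quick bit of toy arithmetic shows the point (the series is invented):

```python
# Toy arithmetic for the baseline point above (series invented).
# Moving the baseline shifts every anomaly by a constant, so the TREND
# is identical, but the SIGN of the anomalies (the map color) changes.

series = [11.0, 11.5, 12.0, 12.5, 13.0]  # a steady made-up warming

def anomalies(series, base_start, base_end):
    """Anomalies relative to the mean of series[base_start:base_end]."""
    base = sum(series[base_start:base_end]) / (base_end - base_start)
    return [t - base for t in series]

cold_base = anomalies(series, 0, 2)  # baseline sitting in the cold dip
warm_base = anomalies(series, 3, 5)  # baseline in the warmer years

# The two anomaly series differ only by a constant offset...
diffs = [c - w for c, w in zip(cold_base, warm_base)]
# ...so the trend is the same, but cold_base is almost all positive
# (a nice red map) while warm_base is almost all negative.
```

    Same data, same trend; only the color of the map changes with the choice of baseline.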

    And per a “peer reviewed paper”: That may well come. With the rapid pace of events like Copenhagen there simply was not time. I’m in discussion with some folks. We’ll see what the future brings.

    Partly there was also a “sniff test” issue for me. The Team on the AGW side was regularly swinging the “Not Peer Reviewed” bludgeon around and that set off my “doesn’t smell right” alarm. I’d also seen reports of folks being prevented from publishing (some finally getting published in Russia…) A good general rule is “When the game is rigged, change what game is being played. -E.M.Smith”.

    So I decided to do what I’ve taken to calling “Public Review”. Just do what Newton and Galileo and dozens of other folks did in the time of Science I admire most. Do some work and publish it yourself. Let the public decide for themselves. Now, in the Climategate era, we see that The CRU Team was busy with subornation of the peer review process, apparent blackmail of Journal editors, and active sabotage of folks whose views they did not like. Makes that choice to self-publish look all the more wise to me. So maybe, after the peers get cleaned up, I’ll let them take a look at the work too. ;-) But for now, it’s “Joe & Jane Sixpack first; the rest of you form a line over there.”

    -E.M.Smith

Comments are closed.