Gee, siting problems and instrument error in sea level gauges

This is a familiar story. Gauge history, calibration, and movement / change all contribute to unusable sea level data records. Averaging it all together doesn’t fix it, either. Sounds rather like the temperature data…

The paper is well written, an easy read, and helps get a good understanding of the issues involved with sea level measurement. The author does lead off with the obligatory “Kiss The Ring” of Global Warming, then proceeds to do some good science anyway… Looks like we can add surveyors to the list of folks prone to precision and careful work who “find issues” in how the data is handled. (Along with chemists, engineers, aviators and aviation tech, farmers, …) The paper:

http://www.fig.net/pub/monthly_articles/july_2010/july_2010_hannah.html

International Federation of Surveyors

Article of the Month
July 2010

The Difficulties in Using Tide Gauges to Monitor Long-Term Sea Level Change
John HANNAH, New Zealand

They also have a pdf available for download:

http://www.fig.net/pub/monthly_articles/july_2010/july_2010_hannah.pdf

SUMMARY

Climate change has a variety of important impacts, one of which is reflected in sea levels. Indeed, long term rising trends in global sea levels are often used to corroborate the assertion of long term climate change. When tide gauge records are examined in order to determine the long-term trends in sea level it is typical for a single number representing the derived trend, to be quoted. However, the problems in deriving such numbers are rarely, if ever, discussed. Indeed, there appears to be a widespread ignorance as to the fragility of tide gauge records and hence the accuracy of derived long-term sea level trends. This paper uses specific examples from New Zealand to illustrate and explain the problems that exist in deriving an accurate figure for the eustatic changes in sea-level at a tide gauge site. It highlights the importance of assessing accurately the influence of anthropological factors, changes in tide gauge datums, and geophysical effects. These factors, which can compromise or even completely invalidate a record, must be able to be assessed over the entire history of the tide gauge record (often 100+ years). This paper, after exploring these factors and their potential influence, concludes by making recommendations on procedures to be followed if we are to leave future generations better quality sea level data than is often available at present.

So after the catechism practice in the Church Of Global Warming, he moves on to actual “ground truths”. There’s a bit of a “blow by blow” on some individual tide gauges, some example description of how the machines worked in the past vs now, and how things go wrong. Worth reading all of it, and it’s too long to quote the whole thing here. But I’ll pick out some samples for comment. Just remember these quotes are a little bit out of context and selectively chosen. First off, the introduction, in whole. I’ve bolded a couple of lines. Again, it leads off with the “Please let me publish – I’ll Kiss The Ring”…

1. INTRODUCTION

Sea level change is an important climate-related signal, studies of which have featured in all recent International Panel for Climate Change (IPCC) scientific assessments (e.g., IPCC, 2001; IPCC, 2007). In undertaking sea level change analyses, the data is typically drawn from the Permanent Service for Mean Sea Level (PSMSL) database at the Proudman Oceanographic Laboratory. For each tide gauge this data is used to derive a figure for sea level rise. In order to correct the derived figure so that it reflects the eustatic component of sea level rise, a great deal of attention has been given to the task of separating the motion of the land and wharf structures (to which the tide gauge is attached), from the observed sea level rise signal. This has resulted in the increasingly widespread collocation of GPS receivers with tide gauges (c.f., Woppelman, 2007). In addition to these land based studies, satellite altimetry has advanced to the point whereby TOPEX and JASON 1 time series are now being used to assess long term sea level changes over the open oceans. Such studies, while separate from the land based tide gauge studies, are not independent in that the data from certain coastal tide gauges have been used for altimeter calibration purposes (e.g., Chambers et al, 1998; Nerem and Mitchum, 2001).

In nearly all of these studies, the tide gauge data is typically assumed to be high quality and not subject to question. This important assumption is rarely, if ever challenged.
However, if such a high quality record is to be obtained it is essential that issues such as the datum history of the tide gauge and local wharf movements be well documented and verified. Given that many gauges are located in port facilities where wharf removal, development, and/or extensions occur, this is easier said than done. Indeed, New Zealand experience indicates that some primary gauges have been renewed, replaced or changed at least five or six times in their 100 year history. In addition, it is not uncommon for tide gauges to malfunction for significant periods of time thus offering the possibility of a (potentially) biased tidal record. This study, then, attempts to highlight the importance of the above factors, giving specific examples of how the analyses of New Zealand’s long term sea level trends have been influenced by them and illustrating how a record can be invalidated by poor information. While the examples have been drawn from New Zealand experience, they illustrate problems that are generic in nature to much of the available tide gauge data.

Other issues that arise in long-term sea level change analyses include the influence of geophysical effects and the length of the tide gauge record, Douglas (1997) pointing out that a gauge record needs to be at least 60 years in length if incorrect estimates of sea level change are to be avoided.

The paper concludes by making some practical recommendations on procedures to be followed if future generations of investigators are to be left higher quality, long term data sets than are currently available.

Gee… anything shorter than 60 years has issues due to the ocean cycles… Kind of supports the notion that temperature records ought to be over 60 years to be usable too (since air temps are largely just ocean changes at a distance…) The paper then proceeds to a nice description of how a recording tide gauge works, and some of the errors. Remarkably similar issues to those of the temperature history / recording devices.
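
That short-record bias is easy to sketch with a toy calculation (my numbers, not the paper’s): a trend-free 60 year ocean cycle of 50 mm amplitude, fitted with an ordinary least-squares line.

```python
import math

# Toy sea level record: a pure 60-year cycle, 50 mm amplitude, NO real trend.
def fitted_trend_mm_per_yr(record_years, cycle_years=60, amp_mm=50.0):
    """Least-squares slope (mm/yr) fitted to a trend-free sinusoid."""
    t = list(range(record_years))
    y = [amp_mm * math.sin(2 * math.pi * yr / cycle_years) for yr in t]
    n = len(t)
    mt, my = sum(t) / n, sum(y) / n
    cov = sum((ti - mt) * (yi - my) for ti, yi in zip(t, y))
    var = sum((ti - mt) ** 2 for ti in t)
    return cov / var

# A 20-year record "finds" a trend of a couple mm/yr where none exists;
# stretch the record over two full cycles and the spurious slope shrinks.
print(round(fitted_trend_mm_per_yr(20), 2))
print(round(fitted_trend_mm_per_yr(120), 2))
```

No warming, no rise, just a cycle sampled too briefly, and out pops a “trend” bigger than the real-world sea level rise figures quoted later in the paper.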

Sediment collecting in the bottom of the stilling well. This was often evidenced by a flattened low water tidal curve – the float would sit on the sediment at or near low water and fail to delineate the change in the low water tide. Due to the biases likely to be introduced, data demonstrating this behavior needs to be rejected from any subsequent sea level trend analysis.

Silting up of all sorts of things is a common problem in harbors. Alviso (near San Jose) has turned from a marina into a marsh headed for mud flat in the few years I’ve known it, for example. Seems that some of the silt gets into the gauges too. Now if you take “mean” sea level, and the bottom excursions are clipped… instant “rise” out of a non-rise. Rather like the “bottom clipping” of low temperature excursions seen in the recent temperature records. Here the recommendation is to reject records with such clipping. Were that applied to temperature data, whole swathes of thermometer records would be rejected. (See the Hair Graphs posted here in various categories including dT/dt).
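
The clipping effect is easy to sketch with a toy calculation (hypothetical numbers, not gauge data): flatten the bottom of a zero-mean tide and the computed “mean sea level” rises all by itself.

```python
import math

# Toy tide: one sinusoidal cycle with true mean zero. If the float sits on
# sediment, readings below some floor are flattened ("clipped").
def mean_sea_level(clip_floor=None, n=1000):
    """Mean of one tidal cycle; readings below clip_floor are flattened."""
    levels = [math.sin(2 * math.pi * i / n) for i in range(n)]
    if clip_floor is not None:
        levels = [max(x, clip_floor) for x in levels]
    return sum(levels) / n

print(mean_sea_level())                  # true mean sea level: ~0
print(mean_sea_level(clip_floor=-0.5))   # clipped mean: a spurious positive "rise"
```

No water was added anywhere; the apparent rise comes entirely from losing the low-water excursions.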

Skipping over a few:

Gauge setting errors. The traditional mechanism for calibrating a float activated tide gauge was to observe the water level on the tide pole adjacent to the gauge and then to ensure that this reading was reflected on the chart record. Some old gauges (e.g., Foxboro gauges) could be set to little better than 0.2 ft. Setting errors of 0.2 ft – 0.3 ft (0.06 m – 0.09 m) appear to have been reasonably common. Such settings would typically occur when the paper tide graphs (rolls) were changed (i.e., anywhere between every two weeks and two months). Assuming a standard deviation for the gauge setting of 0.25 ft, (0.076 m) and a setting interval of one month, then the contribution of this error to the standard deviation associated with an annual MSL could be expected to be in the order of 0.022 m.

So 1/4 foot errors in setting are “common”… and happen every few weeks to months.
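
For what it’s worth, the paper’s 0.022 m figure checks out as plain error propagation, assuming twelve independent monthly gauge settings averaging down into the annual mean:

```python
import math

# Error propagation sketch, assuming independent monthly settings:
sigma_setting_m = 0.25 * 0.3048      # 0.25 ft setting error, in metres
settings_per_year = 12               # gauge reset roughly monthly
sigma_annual_m = sigma_setting_m / math.sqrt(settings_per_year)
print(round(sigma_annual_m, 3))      # ~0.022 m, the paper's quoted figure
```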

He then goes on to look at modern gauges, and ways they depended on some of the prior art:

However, even with electronic gauges, the calibration problem (equivalent to the gauge setting error) remains. In addition some, such as the quartz crystal pressure gauges, can drift severely with time. Indeed, New Zealand experience with one such pressure gauge at Cape Roberts in the Antarctic has shown that a calibration interval of two to three years is inappropriate – the data being so contaminated with drift errors as to be essentially unusable. While a calibration period of at least six months is preferred, logistical constraints have limited the calibration of the Cape Roberts gauge to 12 monthly intervals.

Looks to me rather like some of the MMTS transition bias and errors in the temperature record. It is also the same class of “splice error” problem, as different instrumental records get spliced together.

New Zealand experience further indicates that the most important issue in obtaining high quality monthly or annual MSL data is the care and maintenance of the gauge. Poor maintenance is often indicated by long periods of gauge outage, frequent breaks in the tidal record, timing errors and poor curve definition at high and/or low water. Where a gauge has been well maintained (such as with the Auckland and Wellington gauges), a posteriori error analysis undertaken on the full sets of digital data collected over 100 years indicate that an annual sea level means should be able to be given a standard deviation of between 0.020 m and 0.025 m (Hannah, 2004).

IFF you have done everything really really well, you MIGHT be able to get a 1 inch standard deviation. (2 cm to 2.5 cm above).

The next major section is about Datum Errors:

2.2 Datum Errors

In attempting to derive a long-term sea level trend, datum errors, generally arising from anthropological factors, are by far the most important to resolve. Unlike gauge errors that are greatly reduced by the quantity of data collected and the resulting meaning process, datum errors can be subtle, tend to be systematic and, if not correctly resolved, will completely invalidate a sea level record. Such errors can arise from the following sources.

IMHO, that is analogous to the “adjustments” made to the temperature data. Trying to figure out where to set the past starting point…

When tide gauges are shifted from one wharf structure to another and the new gauge zero differs by some unknown (or unrecorded) quantity from the previous gauge zero. In recent attempts to reconstruct the tidal record at New Plymouth it has become apparent that the tide gauge had been moved from one wharf to another at least four times since 1918. In the case of the Wellington gauge, written records indicate that the gauge was moved between 1944 and 1945, but there is no record of a datum shift.
[…]
When a tide pole is replaced and the new pole is set at a different level than the previous one. When it is remembered that the tide pole is the means by which tide gauges have historically been calibrated, then it becomes clear that any unrecorded shift in the tide pole will immediately translate into an unrecorded datum change. Tide poles, which are attached to wharf structures, can easily be damaged by vessels and are often obliterated by oil and other port debris. It is likely that even a well built tide pole will require replacement on a 20 year cycle. A recent detailed analysis of the records relating to the well maintained Lyttelton gauge indicates unrecorded variations in the position of the tide pole of 0.08 ft (0.024 m) over a 40-year period. The dates when specific changes occurred are not known. In reality the tide pole is the fragile link that holds a tidal record together. If the position of the tide pole has not been monitored throughout the history of the tidal record then the record must be subject to question as must the accuracy of any subsequent long-term sea level analysis.

So one “well maintained” pole has a 2.4 cm or about an inch variation over a 40 year period. From this we get high precision on mm ocean “sea level rise”?

The simple truth is that tide gauges were intended for ships where the level of concern was feet, not mm, and are very ‘fit for purpose’ for that. Much like thermometers at airports were for the purpose of telling pilots when it was hot over the black tarmac and those at post offices were for telling folks it was a hot, or frozen, day. Precision and accuracy in whole degrees was “good enough”. For tide gauges, when I was sailing, I just wanted them to give me roughly how many feet I had below the keel. Inches was too fine a measure to trust. If I needed 4 feet, and it said 4.2 feet, it was NOT something to trust. 5 feet on the gauge was good…

This, IMHO, is a generic problem for “Climate Science”: the attempt to use instruments intended for a “rough cut / good enough” reading for super-precision, long-duration trend analysis. The instruments, data collection, and data record are just not fit for that purpose. No matter how much statistical manipulation you apply.

When there is no consistent history of leveling from stable benchmarks to the tide pole. Any local subsidence in a wharf structure (and thus in the attached tide pole or tide gauge) will only be detected if there has been a consistent history of leveling to stable local benchmarks. For example, for many years, and in earlier sea level analyses the Wellington gauge was assumed stable (Hannah, 1990). However, by 2003, a sufficiently long time series of local leveling data had been collected so as to indicate an apparent long-term subsidence in the wharf structures of about 0.15 mm/yr (Hannah, 2004). Conversely, at Dunedin, it has only become clear recently that certain local bench marks are subsiding while the wharf structures remain stable. The 2004 analysis of long-term sea level change, which assumed both were subsiding, gave a result of a sea level rise of 0.94 mm/yr (Hannah, 2004). The most recent analysis (with this erroneous assumption corrected), now shows a sea level rise of 1.3 mm/yr – a very significant difference.

So we have about a 38% variation in the “sea level rise” based on a correction for a more stable benchmark. And what is the error band on the rest of the stations in the world?
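
The 38% is just the quoted Dunedin numbers:

```python
# The Dunedin trend figures quoted above (mm/yr):
old_trend, new_trend = 0.94, 1.3     # before and after the benchmark correction
change_pct = 100 * (new_trend - old_trend) / old_trend
print(round(change_pct))             # percentage shift from one datum fix
```

One corrected assumption about one set of benchmarks, and over a third of the “signal” moves.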

Changes in the setting of the gauge datum. It is altogether possible that a gauge may exhibit none of the above three problems but yet still exhibit obvious datum shifts. This typically happens when some new (or different) figure is adopted for a gauge datum and when the tidal recording device is reset accordingly. At New Plymouth, for example, it is clear that changes in the gauge setting of 1.0 ft (0.305 m), 1.5 ft (0.457 m), 2.0 ft (0.610 m) and 3.0 ft (0.914 m) all occurred in the space of 10 years. In two such cases there was no clear record of exactly why or when this had happened. Indeed, it appears that there was some confusion between the Port Authority (the owner and operator of the gauge) and the national surveying and mapping organization (responsible for the tidal predictions), as to what datum offset should have been set.

So exactly which datum ought to be used for what, when? Oh, nobody really knows… Rather like the constantly changing thermometers that get adjusted by guess and by golly and then diced and spliced together. “Hash” is not generally known as a high quality way to make meat, or, IMHO, data food product…
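
The “diced and spliced” problem can be sketched with a toy record (hypothetical numbers): a perfectly flat sea level with one unrecorded 0.1 m datum step in the middle fits to a spurious trend of the same order as the claimed sea level rise.

```python
# Toy splice error: 100 years of flat (no-trend) sea level, with and without
# an unrecorded 0.1 m datum step at year 50, fitted with one straight line.
def lstsq_slope(y):
    """Ordinary least-squares slope of y against 0, 1, 2, ... (per year)."""
    n = len(y)
    t = list(range(n))
    mt, my = sum(t) / n, sum(y) / n
    return sum((ti - mt) * (yi - my) for ti, yi in zip(t, y)) / \
           sum((ti - mt) ** 2 for ti in t)

flat = [0.0] * 100                      # the true record: nothing happening
spliced = [0.0] * 50 + [0.1] * 50       # same record with an unrecorded step
print(lstsq_slope(flat) * 1000)         # 0.0 mm/yr
print(round(lstsq_slope(spliced) * 1000, 2))   # spurious "rise" in mm/yr
```

That one unrecorded step is worth roughly 1.5 mm/yr over the century, i.e. bigger than the Dunedin trends quoted above.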

There’s more, but I’m going to give them short coverage. Do read the paper.

Next category is:

2.3 Analysis Errors

However, there is real danger in seeking to resolve accurately long-term sea level changes from data sets of less than 60 years in length. Douglas (2001), for example, summarises research showing that large variations in the estimates of sea level rise can be explained in nearly all cases by the selection criteria used by a particular investigator – short records being one of the most important. It is vital that the periodic effects from such signals as inter-decadal variability be eliminated
[…]

A second analysis problem that can arise relates to the influence of unmodelled hydrological effects. The Hunter River, for example, has had an influence on the data produced by the Newcastle tide gauge on the East Coast of Australia. Equally, one of New Zealand’s longest tidal records (Westport) was compromised by similar effects. The Westport gauge sits at the mouth of the Buller River. If climate change brings with it changes in rainfall patterns (as is expected to happen), then the prospect exists for apparent sea level change to be masked or exacerbated by changes in river flow.

Gee… precipitation changes masking sea level data… Shades of clouds, precipitation, and temperature data. And short records are not to be trusted, due to the bias they introduce in a system with long cycle variations.

2.4 Geophysical Effects

Early sea-level change analyses showed wide variation in result (e.g., Gornitz, 1995). However, much of this variation was subsequently able to be explained by ensuring that the tide gauge records used met five criteria. These were: (1) that the records be at least 60 years in length, (2) that they not be from sites at collisional tectonic plate boundaries, (3) that they be 80% complete or better, (4) that at low frequencies they be in reasonable agreement with nearby gauges sampling the same water mass, and (5) that they not be from areas deeply covered by ice during the last glacial maximum (Douglas, 1997). Reasons (2) and (5) are the issues addressed in this section.

2.4.1 Tectonic Motion at Plate Boundaries
[…]

2.4.2 Glacial Isostatic Adjustment
The second geophysical effect to be considered in the interpretation of any tide gauge record is that of glacial isostatic adjustment (GIA). Vertical motions from this effect are estimated by using a geophysical model (e.g., Peltier, 2001), the size of the motion varying according to the model adopted. For example, GIA estimates for Auckland range from 0.1 mm/yr (from the ICE4G (VM2) model) to 0.55 mm/yr (from the JM120,1,3 model). Similar levels of variability in estimate are found at other New Zealand tide gauges.

So think Canada and North Europe might have even more rebound than New Zealand? So a model can range from 0.1 to 0.55 mm / year. Or about a 450% range. Oh Dear. They attached GPS devices to some gauges and now the reality looks more like the lower number. Think having .45 mm/year less “adjustment” (to more sea level rise as you add in the rising land) makes much difference to a 0.94 mm/yr (or even the 1.3 mm/yr) sea level rise computed above?
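
Using the quoted Auckland numbers, the model spread arithmetic is simple enough:

```python
# Arithmetic on the GIA figures quoted above for Auckland (mm/yr):
gia_low, gia_high = 0.1, 0.55        # ICE4G(VM2) vs JM120,1,3 model estimates
spread = gia_high - gia_low          # 0.45 mm/yr of pure model choice
print(round(100 * spread / gia_low))     # spread relative to the low estimate (%)
print(round(100 * spread / 0.94))        # spread as a share of a 0.94 mm/yr trend (%)
```

That is the 450% range, and a model-choice uncertainty of nearly half the Dunedin trend discussed earlier.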

The paper goes on from there to praise some of the new methods and paint a positive view of the future data quality.

In Conclusion

It looks to me like any statements about “Sea Level Rise” based on the instrumental record are pretty darned dodgy. Personally, I think we have more useful information from things like the ancient ports all around the world that are now a ways inland.

https://chiefio.wordpress.com/2010/12/06/ostia-antica-and-sea-level/

By that measure, any “rise” now is really just a recovery from a drop during the Little Ice Age.


About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present “hot buttons” are the mythology of Climate Change and ancient metrology; but things change...
This entry was posted in AGW Science and Background. Bookmark the permalink.

6 Responses to Gee, siting problems and instrument error in sea level gauges

  1. R. de Haan says:

    Thanks for the article E.M.,

    Every (older) observant person who visits the beaches, actively sails, or flies will know there are no significant changes in sea level at all, without reading a single report on the subject.

    Old and new pictures taken from the same position on a timescale of a century provide the same insight, and then we have the wonderful picture published on the blog of the late, great John Daly, which makes any report about sea level rise the laughing stock of the year:

    The 1841 sea level benchmark (centre) on the `Isle of the Dead’, Tasmania. According to Antarctic explorer, Capt. Sir James Clark Ross, it marked mean sea level in 1841. Photo taken at low tide 20 Jan 2004.
    Mark is 50 cm across; tidal range is less than a metre. © John L. Daly.
    If the benchmark is difficult to see, try these.
    http://www.john-daly.com/

    If there had been any rise in sea level, and the mark had been set at low tide, it would no longer surface, would it?

    Well, it does surface so we can safely claim there has been no significant rise in sea level since 1841.

    Case closed.

  2. R. de Haan says:

    One other aspect contributing to lots of confusion is the fact that land masses are still recovering from the weight of the ice caps that covered them during the last ice age.
    We have rising land masses in Northern Europe, a process that continues today. We have the North of the UK rising and the South sinking.

    In the Netherlands the entire Northern region is sinking due to natural gas extraction and the center of the area is still rising.
    Similar activity is observed in Canada and North America, Asia and South America.
    Read Post glacial rebound:
    https://en.wikipedia.org/wiki/Post-glacial_rebound

  3. R. de Haan says:

    Here’s another read extending the subject to tectonic uplift, rebound and more:
    https://en.wikipedia.org/wiki/Tectonic_uplift

  4. vigilantfish says:

    I enjoyed your presentation of this article. The astonishing thing about CAGW alarmism is how so many scientists appear to complacently accept the re-purposing of historical data with no thought for how robust the records and instrumental data are if they are being used for a purpose unforeseen by those who originated the studies.

    Tidal poles originated with the Royal Society in London in 1666 with their demand that tidal ranges be measured accurately for navigational and theoretical purposes. The theoretical purposes were largely ignored, however, until the 1830s, since English natural philosophers were content to treat the tides purely theoretically as the gravitationally-caused fluctuations in the elevations of an envelope of water on a theoretical perfect sphere.

    In the 18th and early 19th century tidal gauges and the tidal poles used to calibrate them became important for trying to determine the ‘zero’ level for sea level to improve the accuracy of land surveys and calculating land elevations, increasing the accuracy of measurements of the arc of circumference of the earth, etc. Only in the 1830s did English physicists such as John William Lubbock and William Whewell try to advance dynamic tide theory, and what helped this science advance was the construction of the first self-registering tidal gauge by J. Mitchell at the Sheerness Dockyards.

    The self-registering tidal gauge did not become sensitive enough (i.e. record the entire cycle of the tides with enough accuracy) until somewhat later. George Airy in England and Alexander Dallas Bache of the United States Coast Survey used the improved instrument in the 1850s and 1860s. They advanced the theory of tides and developed some of the information about tides that you provide in your next posting (about why the tides are not quite so simple as Willis presumes and have other periodicities besides their diurnal, monthly and annual cycles).

  5. tom0mason says:

    You may be interested to see a nice piece of work on tide gauges, posted by Hank and building on the data from the Beenstock et al. paper, at suyts space –
    https://suyts.wordpress.com/2013/10/21/sea-levels-a-validation-of-beenstock-et-al/

  6. Reblogged this on The GOLDEN RULE and commented:
    It’s IPCC’s alarmist propaganda time again.
    One of the many parameters touted as “almost certain” or “likely” to mean the planet is undergoing “climate changes” due to mankind’s influence (somehow linked to atmospheric CO2 levels increasing from 0.038% to 0.04%) is the claimed rising level of the ocean waters.
    I offer this article of impressive scientific content and logical thinking, to give reasonable cause for doubt about the significance of the IPCC claims. A similarly sound assessment of the others, pretty well all in fact, leads open-minded scientists to similar conclusions such as this:
    “It looks to me like any statements about “Sea Level Rise” based on the instrumental record are pretty darned dodgy. Personally, I think we have more useful information from things like the ancient ports all around the world that are now a ways inland.

    https://chiefio.wordpress.com/2010/12/06/ostia-antica-and-sea-level/

    By that measure, any “rise” now is really just a recovery from a drop during the Little Ice Age.”
