Saving Good Ideas

Our Climate Context

Original Image (click for a bigger version).

I particularly liked the way this image ties together D.O. events, Bond Events, and our historical context along with the Ice Age context and the Younger Dryas. If anyone knows the author I’d love to give attribution and / or read the original article it came from (if any).

Over on WUWT there is a discussion of Arctic stations and how the temperature “trends” at rural stations track the AMO (Atlantic Multidecadal Oscillation) while the more ‘urban’ stations show urban heat island effects.

A couple of comments there caught my eye, and I’m going to preserve them here (for the simple reason that things move so fast on WUWT that I can’t always find a particularly good bit when I want to go back to it). From:

http://wattsupwiththat.com/2010/09/22/arctic-isolated-versus-urban-stations-show-differing-trends/#comment-490305

This comment in particular did a very nice job of setting out some of the physics issues of using temperatures to measure heat flow on the planet.

Dave in Delaware says:
September 23, 2010 at 5:43 am
Thoughts on Anomaly Temperatures

Temperature is a PROXY for Energy.
The Energy content and the Energy transfer is what you really need to track. Calculating an Anomaly of High Energy air averaged with Low Energy air is only a rough approximation, even if the statistics are pristine. It takes more energy to change the temperature of Humid air.

Three examples where temperature anomaly is not telling the full story
* humidity
* surface
* radiant energy transfer

Humidity
You have probably seen the example (my excerpt from Max Hugoson post at WUWT)
http://wattsupwiththat.com/2010/06/07/some-people-claim-that-theres-a-human-to-blame/#more-20260

Go to any online psychrometric calculator.
*Put in 105 F and 15% R.H. That’s Phoenix on a typical June day.
*Then put in 85 F and 70% RH. That’s MN on many spring/summer days.

What’s the ENERGY CONTENT per cubic foot of air? 33 BTU for the PHX sample and 38 BTU for the MN sample. So the LOWER TEMPERATURE has the higher amount of energy. …..Thus, without knowledge of HUMIDITY we have NO CLUE as to atmospheric energy balances.
———————– (end of excerpt)

So might a better anomaly track temperature in similar humidity areas? Tracking Phoenix with itself might be OK, but maybe we shouldn’t track Minneapolis even with itself, since summer vs winter humidity is significantly different. It has been suggested that Dew Point might be a better indicator than Tmin averaged with Tmax.
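
(E.M. aside, not Dave’s: a quick numeric sketch that roughly reproduces those BTU figures using standard ASHRAE-style moist-air formulas. The figures appear to be per pound of dry air rather than per cubic foot; the saturation-pressure fit and the sea-level pressure are my own assumptions.)

import math

P_ATM = 14.696  # sea-level atmospheric pressure, psia

def sat_press_psia(t_f):
    """Approximate saturation vapor pressure over water, psia (Magnus-type fit)."""
    t_c = (t_f - 32.0) / 1.8
    p_kpa = 0.61094 * math.exp(17.625 * t_c / (t_c + 243.04))
    return p_kpa * 0.1450377  # kPa -> psia

def moist_air_enthalpy(t_f, rel_hum):
    """Enthalpy of moist air, BTU per pound of dry air (ASHRAE-style formula)."""
    p_vapor = rel_hum * sat_press_psia(t_f)
    hum_ratio = 0.622 * p_vapor / (P_ATM - p_vapor)   # lb water per lb dry air
    return 0.240 * t_f + hum_ratio * (1061.0 + 0.444 * t_f)

print("Phoenix, 105 F / 15%% RH    : %.0f BTU per lb dry air" % moist_air_enthalpy(105, 0.15))
print("Minneapolis, 85 F / 70%% RH : %.0f BTU per lb dry air" % moist_air_enthalpy(85, 0.70))
# The cooler but more humid air carries MORE energy per pound of dry air.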

Surface
Surface temperatures on land are actually ‘near surface’ air temperatures 1 to 2 meters above ground. The energy flow has already started its trek toward space. Ocean temperatures, especially the ARGO floats, are more truly surface or sub surface measures (before the energy moves to the air). Heat Capacity (used to determine energy content) of liquid water does not change much with temperature, so ‘averaging’ warm and cold water is a smaller error than for dry vs humid air. Which is why OHC, Ocean Heat Content, has been suggested to be a better measure of the Earth’s warming or cooling. And finally, because liquid water has a much higher Heat Capacity than air, when energy moves from the ocean to the air (as in an El Nino) a temperature change in the liquid, gives rise to a larger temperature change in the air. So again, an Anomaly that averages land surface with ocean temps is another ‘apples to bananas’ comparison – both fruit, but different texture.

Radiant Energy Transfer
Energy transfer from Earth toward space begins at the true surface, the dirt, grass, pavement, etc. On a clear sunny day, the surface temperature of an asphalt parking lot can be much higher than the air above it (the measured air temperature is then another proxy of the surface). Radiant energy transfer from the surface toward space is proportional to the absolute temperature to the 4th power (T^4). As the average anomaly temperature changes linearly, the energy transfer changes to the 4th power. An anomaly that averages a 5 degC change in the Sahara with a winter time change in Siberia isn’t telling the full energy story.

I have toyed with the idea of an ‘anomaly correction’ for radiant effect, but have not actually worked it past the concept stage. The idea would be to take each location, adjust for Radiant Potential (temp to the 4th power), then compute a Radiant Anomaly on the transformed temperatures. The Radiant Anomaly might then let us compare the Sahara to Siberia in terms of the surface’s ability to shed heat. Sort of like the ACE energy metric for hurricanes, but applied to surface temperature.
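
(E.M. aside, not Dave’s: a minimal sketch of what I understand the ‘Radiant Anomaly’ idea to be. The temperatures are made-up illustrative values and the code is a toy, not anything Dave has published.)

# Toy "radiant anomaly": transform each absolute temperature to T^4
# (proportional to black body radiant emittance) before differencing,
# instead of differencing the temperatures themselves.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def radiant_anomaly(t_now_k, t_base_k):
    """Change in potential radiant emittance (W/m^2) for an emissivity-1 surface."""
    return SIGMA * (t_now_k**4 - t_base_k**4)

# The same +5 K anomaly means very different changes in the ability to shed heat:
print("Sahara, 318 K -> 323 K : %+.1f W/m^2" % radiant_anomaly(323.0, 318.0))
print("Siberia, 243 K -> 248 K: %+.1f W/m^2" % radiant_anomaly(248.0, 243.0))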

I like the idea of a ‘heat anomaly’… Then George Smith adds some more to the physics:

George E. Smith says:
September 23, 2010 at 9:51 am
“”” Dave in Delaware says:
September 23, 2010 at 5:43 am
Thoughts on Anomaly Temperatures

Temperature is a PROXY for Energy. “””

Dave, I have for some considerable time pointed out that even if it were possible to measure the true average global (surface) Temperature (which it isn’t), we still would know nothing about the energy transfers; and the roughly black body, Stefan-Boltzmann-like fourth power relationship is one part of that problem.

It is a trivial problem in calculus and trigonometry to prove that if the Temperature goes through any arbitrary, single-valued continuous cycle (as a function of time) whose average value is Tzero, the average value of the instantaneous fourth power of that Temperature always corresponds to an effective radiating temperature higher than Tzero; call it Tzero + deltaT, with deltaT greater than zero.
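
(E.M. aside, not George’s: the mathematical point is just the convexity of T^4; averaging the temperature first and then raising it to the fourth power always understates the radiant loss of a cycling surface. A short numeric check, with made-up temperatures:)

# A surface cycling between 250 K and 330 K (average 290 K) radiates more than
# a steady 290 K surface would, because T^4 is a convex function of T.
temps = [250.0, 330.0]
t_avg = sum(temps) / len(temps)                  # plain average: 290 K
t4_avg = sum(t**4 for t in temps) / len(temps)   # average of T^4
t_eff = t4_avg ** 0.25                           # effective radiating temperature

print("average temperature          : %.1f K" % t_avg)
print("effective radiant temperature: %.1f K" % t_eff)   # comes out higher than 290 K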

Now a lot of folks love to point out that the earth surface is NOT a “Black Body”, so they argue that the fourth power thing is not valid.
Well, the black body assumption does set a maximum for the amount of radiant cooling that can occur; and many surfaces have a sufficiently constant spectral radiant emissivity over the range of LWIR wavelengths present in the thermal radiation from that surface at prevailing Temperatures, that simply applying some average emissivity to the BB value calculated from the S-B formula gives a respectable value for the actual surface radiant emittance.

Actually the deep oceans behave like a fairly good black body absorber; well, a grey body to be pedantic, since the surface Fresnel reflectance is about 2% (normal) over a fairly wide spectral range, and certainly over the solar spectrum range, and perhaps 3% over the full range of incidence angles. So the deep oceans would be fairly well characterized as a grey body with 0.97 total emissivity.
Actual LWIR reflectances at typical ocean surface temperatures are not quite so easy to figure, but I would expect the BB calculation (with an emissivity of 0.97) would be quite close to reality for the oceans; which after all are 70% of the total surface.

Employing (with caution) BB radiation theory to the problem also gives us some other inputs to the Green House Gas absorption of surface-emitted LWIR thermal radiation.

If the surface emissions are in fact roughly black body like, then it is known that the spectral radiant emittance at the spectral peak of that emission varies as the FIFTH power of the Temperature, and NOT the FOURTH; and then the Wien Displacement law moves that peak to shorter wavelengths (~3000/T microns), so the higher the surface Temperature, the further down the thermal radiation tail the CO2 absorption band (15 micron) is. The total captured energy still goes up with Temperature; but the fraction of the emission spectrum energy that is captured goes down; and more of it escapes the atmosphere. The spectral peak, which is about 10.1 microns for the global average Temperature of 288 K (they claim), will move further into the atmospheric window also.
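
(E.M. aside, not George’s: for reference, the Wien displacement numbers he is using, computed from the approximation lambda_max = 2898/T microns:)

# Wien displacement: the black body emission peak moves to shorter wavelengths
# as the surface warms, i.e. further from the 15 micron CO2 absorption band.
WIEN_UM_K = 2898.0  # Wien displacement constant, micron-kelvins (approx.)

for t_kelvin in (288.0, 300.0, 320.0):
    peak_um = WIEN_UM_K / t_kelvin
    print("T = %.0f K  ->  emission peak near %.1f microns" % (t_kelvin, peak_um))
# 288 K peaks near 10.1 microns (the figure quoted above); hotter surfaces peak
# at shorter wavelengths, while the emittance at that peak grows roughly as T^5.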

On the other hand for colder regions the Wien Displacement moves the thermal radiation peak closer to the CO2 15 micron band; but the surface Total radiant emittance goes down severely for the colder regions.

All of which supports my contention that it is the hottest driest mid day tropical desert regions, that do most of the real radiant cooling (land) . The polar snow and ice regions are quite ineffective in cooling the planet; but if the arctic ocean should become ice free, then the north polar region would become a better cooler for that part of the planet.

And of course although this note is all about radiant cooling; we never lose sight of the fact that the ocean regions do a heck of a lot of cooling via the evaporation/convection mechanism, transporting latent heat into the upper atmosphere.

And brings up some issues with the sampling density:

George E. Smith says:
September 23, 2010 at 10:27 am
“”” Al Tekhasski says:
September 22, 2010 at 4:45 pm
evanmjones wrote: “So we would need over 120,000 stations? That’s a lot of stations.”

Sure it is. But I am afraid you might need more. “””

Well Al you must be new around here. If you had been visiting here more often, you would know that the general theory of sampled data systems is apparently quite unknown in “Climate Science” Institutions.

So your Nyquist Sampling Theorem is trumped by their Statistical Analysis, and probably the Central Limit Theorem as well. So long as they get the right r^2 value and proper trend line (with a slope error no more than +/-50%; or a 3:1 range) they don’t have to worry about undersampling.

But they are very good at what they call oversampling; which is creating a whole raft of phony values that nobody measured, on their computer. They can make as many grid points as they like; limited only by the size of the supercomputer that the taxpayers bought for them. Well they don’t actually measure anything real at all those oversampled grid points. For some reason their computers are not able to go back and predict; excuse me, that’s project, the actual values that would have been read at the handful of real actual global measuring stations. But they can interpolate something fierce.

So in climate science it is legitimate to core bore a single tree; and from those small sectors of that one-dimensional sample of the three-dimensional tree, in an even bigger forest, you can describe the complete climate history as to Temperature, wind, moisture, sunlight, humidity (maybe I already said that) and anything else you want to know; well, but only for the age of the tree. And you can determine the age of the tree by doing a radiocarbon (14C) assay on some of those pieces of the extracted core. There might be other ways to tell the age of the tree and date the climate conditions; but they probably aren’t as reliable as 14C assays.

And due to the coherence of anomalies, it is ok to measure the temperature in downtown San Jose California; and apply that Temperature value to the small town of Loreto about 1/2 way down the Baja, on the Sea of Cortez.

So they used to monitor the weather and climate of the entire arctic (north of +60 degrees) with just 12 total weather stations; now they have some totally huge number like 70-80.

So I doubt that anybody is going to heed your request for 100,000 sampling locations.

And by the way; just in case you haven’t noticed, these climate reporting stations get their daily Temperature from a min-max temperature reading; which gives you two samples during each 24 hour cycle; but since the diurnal temperature variation is not a pure sinusoid, there must be at least a second harmonic 12 hour periodic component present, so they fail the Nyquist criterion for the Time variable by at least a factor of two, which means that the aliasing noise makes even the daily Temperature average value unrecoverable. So the spatial aliasing noise is just superfluous; which is why they don’t care about it.
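
(E.M. aside, not George’s: a toy demonstration of the min/max point. The diurnal cycle below, with a made-up second-harmonic component, is purely illustrative:)

import math

def temp(hour):
    """A toy diurnal temperature cycle (deg C): fundamental plus a second harmonic."""
    return (15.0
            + 8.0 * math.sin(2 * math.pi * (hour - 9.0) / 24.0)
            + 3.0 * math.sin(4 * math.pi * hour / 24.0))

hours = [h / 10.0 for h in range(240)]            # sample every 6 minutes over one day
true_mean = sum(temp(h) for h in hours) / len(hours)
tmin = min(temp(h) for h in hours)
tmax = max(temp(h) for h in hours)

print("true daily mean   : %.2f C" % true_mean)
print("(Tmin + Tmax) / 2 : %.2f C" % ((tmin + tmax) / 2.0))
# With a second harmonic present, the two-sample min/max "mean" is biased
# away from the true daily mean -- classic aliasing from undersampling.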

But it is good to see somebody else with some understanding of sampled data systems.

He was responding to the same comment I responded to here (followed by a bit of give and take with Ben D.):

E.M.Smith says:

Al Tekhasski says:
evanmjones wrote: “So we would need over 120,000 stations? That’s a lot of stations.”

Sure it is. But I am afraid you might need more. The example of stations 50km apart having opposite long-term trends means that we don’t know what trend is in between, and what is around in the same proximity. […]
More, we still have no idea if the 25x25km is enough to capture complexities of local micro-climates, so be prepared to another half-scale, which would quadruple the number of necessary stations. Without this uniform sampling grid of data it is not serious to discuss any mathematics of subsets or else. This is what physical science says. Sorry.

Al, you may like this article where I look at some of the mathematical issues of sampling surface temperature. As topography is fractal (mountains, coastlines, etc.) the temperatures over it ought to also be fractal. A black pebble next to a snow melt stream will have quite divergent temperatures… Measuring a fractal gives different answers based on the ‘size ruler’ you use. And we’re measuring ‘climate change’ with a ruler whose size constantly changes over time.

https://chiefio.wordpress.com/2010/07/17/derivative-of-integral-chaos-is-agw/
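
(The ‘size ruler’ point in miniature, using the textbook Koch curve as a stand-in for fractal terrain; the numbers are the exact textbook values, the analogy to temperature fields is mine:)

# The classic ruler problem: the measured length of a fractal curve depends on
# the length of the ruler you measure it with.  For the unit Koch curve, a
# ruler of length (1/3)^k takes exactly 4^k steps to walk the curve.
for k in range(7):
    ruler_length = (1.0 / 3.0) ** k
    measured_length = (4.0 / 3.0) ** k     # 4^k steps times (1/3)^k per step
    print("ruler = %8.5f  ->  measured length = %7.3f" % (ruler_length, measured_length))
# Shrink the ruler and the "answer" keeps growing; a temperature survey over
# fractal terrain has the same character -- the result depends on sampling scale.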


Ben D. says: I will not say that the anomaly approach is incorrect, but there are issues with it as well. As far as I can tell, it’s the best method known right now, but it is not perfect. To argue that it’s the end-all is kind of ignoring the issues that it also brings up.

Very well put. Also, there are many kinds of anomaly method and they have many different modes of failure. One of the simplest to ‘get’ is that of the splice artifact.

It doesn’t much matter if you use anomalies or not, when you take a station that warms 1 C as it grows, then in a later decade add a new station that warms 1 C as it grows, then in the final decade swap to a third station that warms 1 C as it grows. Tack them all together and you get a 3 C “warming trend”. It matters not if this is done via averaging, direct splicing, or “homogenizing” using each to “adjust” neighbors.
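
(A bare-bones sketch of that splice artifact, with made-up numbers. The joining method here, chaining each station’s own year-to-year changes, is just one of the ways the splice can happen:)

# Three hypothetical stations, each in service for one decade.  The regional
# climate is flat, but each station warms exactly 1 C over its own lifetime
# (say, urban growth around it).
def station(years=10):
    """One station record that starts at 10.0 C and warms 1 C over its life."""
    return [10.0 + 1.0 * yr / (years - 1) for yr in range(years)]

a, b, c = station(), station(), station()

# "Join" the records by carrying each station's own changes forward.
chained = [a[0]]
for record in (a, b, c):
    for prev, cur in zip(record, record[1:]):
        chained.append(chained[-1] + (cur - prev))

print("joined record: starts %.1f C, ends %.1f C -> apparent warming %.1f C"
      % (chained[0], chained[-1], chained[-1] - chained[0]))
# Every station started at 10.0 C, yet the joined series "warms" by 3 C.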

The temperature series codes like GIStemp are FULL of that. And anomalies make it easier to have happen rather than harder. (No station has to reach unheard of record highs and call attention to itself…) I saw this effect in the region near Marble Bar Australia where an all time ever record was set back near the ’30s and never exceeded, yet the ‘region’ has a ‘warming trend’.

So that’s why I periodically anchor myself back in ‘real temperatures’ and why I start by looking at the profile of ‘real temperatures’. I’ve taken a great deal of flack from folks asserting that it is sheer stupidity to do that, and they are wrong. It is only stupid to think that they show accurately the temperature trend. Just as it is stupid to think that averaged anomalies show the accurate temperature trend if for no other reason than ‘splice artifacts’. IMHO, the ‘splice artifact rich’ nature of the ‘homogenizing’ done is a target rich environment in the temperature series codes.

But wait, there is more…

To clarify, anomalies are based on the area-weighted global average,

For one kind of anomaly…

The climate codes use a “grid / box to grid / box” anomaly. They have one set of thermometers in the box at the baseline and a different set now. This is horridly broken. It would be like me saying cars have gotten faster as my home “grid / box” had an average VW fastback in 1970 and has an average Mercedes SL now, and the “max speed anomaly” has risen by 55 mph.

Yes, they take steps to mitigate the problem. But mitigation is not perfection. We are basically betting the global economy on the perfection of their mitigation and coding (and their coding is fairly sloppy.)

So I started by looking at plain temperatures, and found things that were not in keeping with AGW and CO2 theories. Then moved on to anomalies. But I wanted a more controlled beast. So I do anomalies only “self to self” for a given thermometer.

This brings up the issue of ‘baseline’. But you don’t need a baseline to do anomalies. ONE kind of anomaly is based on a common period of time, the baseline. But as Steve Mosher points out, you need a common time period in your baseline. And many thermometers don’t have it. So a whole load of ‘homogenizing’ and ‘splicing’ (that GISS calls joining) and box to box and infill and… well, junk… gets done to try to make a complete enough record to use a ‘baseline’ method. And IMHO it adds too much error to be usable to 1/10 C. But you can use non-baseline anomalies, such as First Differences. And I use one like that (but fixing an issue in First Differences that makes it fail on data with lots of gaps in it.)
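
(For the curious, a minimal sketch of a First Differences style anomaly on a gappy record. This toy only differences adjacent years that both reported; handling longer gaps well is exactly the issue mentioned above, and this is not my actual analysis code:)

def first_difference_anomaly(series):
    """Cumulative anomaly built from a station's own year-to-year changes.

    No baseline period is needed: each station is compared only with itself.
    Differences are taken only between adjacent years that both reported;
    missing years (None) simply contribute no change.
    """
    anomaly, out = 0.0, []
    for i, value in enumerate(series):
        if value is not None and i > 0 and series[i - 1] is not None:
            anomaly += value - series[i - 1]
        out.append(round(anomaly, 2) if value is not None else None)   # rounded for display
    return out

record = [12.1, 12.3, None, None, 12.0, 12.4, 12.2]   # a gappy station record, deg C
print(first_difference_anomaly(record))
# -> [0.0, 0.2, None, None, 0.2, 0.6, 0.4]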

To put it simply, you can use this approach in the data above and it will probably change what you see simply because of the transformation of the data so to speak.

BINGO! And that is what the temperature data codes like GIStemp do. They change the data. So we end up with the past cooling by whole degrees…

But I must also interject here and say one thing: If everyone uses the same method and that is ALL they use, how would we know this method is actually “correct”? I might be playing devil’s advocate there, but at some point I will take a shot at adjusting the data myself and the first thing I would do is NOT use the anomaly system.

You have it exactly right. The three major labs all use the same basic approach, data, and methods with all the same flaws. Any attempt to look at it from a different angle gets rocks thrown at you (though it does point up their flaws…). And yes, starting from the basic temperature data gives you the context to know when something is straying from reality.

https://chiefio.wordpress.com/2010/04/03/mysterious-marble-bar/

And a whole lot more in:

https://chiefio.wordpress.com/category/dtdt/

I know, I ought to have taken the time to clean this up into a new distinct summary work, but I’m too rushed right now.

At any rate, at least now I’ve saved some of this where I can find it again ;-)

FWIW, it is my belief that it is the interaction of these heat issues with the 4th power radiance that puts the hard lid on our Holocene temps in the top graph, and that allows the plummet back into a glacial era on the downside. There is a climate tipping point, and as the graph shows, it’s all downhill from here…


10 Responses to Saving Good Ideas

  1. Hi,
    Regarding the heat transfer from GHGs to the ocean, it seems to me that there are at least 3 physical reasons why GHGs would not heat the oceans:

    1. LWIR doesn’t penetrate the ocean skin beyond a few microns, resulting in skin evaporation and all energy used up in the phase change of water, without changing temperature
    2. The heat capacity of the oceans is so immense compared to the atm. that GHGs cannot have any significant influence
    3. The 2nd law of thermodynamics

    http://hockeyschtick.blogspot.com/2010/08/why-greenhouse-gases-wont-heat-oceans.html

    Care to tell me where I’m wrong?

  2. E.M.Smith says:

    Well, I can’t tell you where you are wrong as I think you are right. Having spent a lot of my life swimming, it’s always colder as you go deeper. Any ‘heating’ is only at the surface and rapidly goes away when the wind or weather shifts.

  3. oMan says:

    Chiefio: your collecting of good ideas is a good idea. I can (barely) follow the argument of the folks whose ideas you’re collecting, i.e. the normalization of heat content between Phoenix and Minneapolis so we can get a consistent picture across the planet, the deep understanding of Stefan-Boltzmann t^4 emissivity, the “real story” of how energy moves through the system, and the “oversampling problem.” But what I can readily recognize is that even a random collection of commenters, unsung experts from the global “street,” can tear enormous holes in the orthodoxy of Global Warming. What strikes me most is how little we know of how the system works; which suggests the need for some humility, some pragmatic data-collecting and some rigorous testing of lots of hypotheses. On the issue of “oversampling” it amazes me that computer models can pretend to offer 100-year forecasts of “global temperature” (itself a bogus idea), when their cells are tens or hundreds of kilometers on a side. Completely blind to key atmospheric processes like convection cells. Keep up the good work!

  4. E.M.Smith says:

    I intend to ‘keep it up’; though at the moment I’m taking it a bit slow. (Some medical issues in the family and actually making some money to pay bills…)

    I’m also pondering what direction to investigate next. It’s not as ‘target rich’ an environment as it was in the past and more folks have “pitched in”. Verity Jones and KevenM have run ahead of me on GHCN.V3 already. (Something that’s a bit of relief at the moment, as I get to watch rather than herd bits ;-)

    GHCN V3 Beta: Part 1 – A First Look at Station Inventory data

    Part of why I grabbed these comments is that there are now a couple of other folks realizing that, given the fractal nature of temperatures, the use of temperature as a proxy for heat is just wrong. Won’t work. FUNDAMENTALLY bad at the mathematical theory level.

    I don’t feel alone on that point any more …

  5. RACookPE1978 says:

    A technical question on the often-predicted polar ice melting “doomsday” spiral, but one whose answer is apparently not readily available from the usual on-line sources.

    At what angle must the sun’s rays be for their energy to be absorbed by the ice (or water) rather than reflected?

    I had expected that the answer would be a simple sine/cosine function of the 1.3 index of refraction for ice or water, but nothing I’ve read indicates that assumption is true. To the contrary, even the physics and gemstone “facet design” on-line calculators assume simply that all light arriving at the surface is absorbed at any angle greater than 0.0.

    Robert

  6. E.M.Smith says:

    I think the answer is hard to find because it does not exist.

    There is always some reflection and some absorption.

    Part of the problem is that the water / ice surface is not flat. You don’t have ONE angle.

    IIRC in discussions on WUWT George Smith had calculated it was something like 20 degrees where you started getting significant absorption, but even then, the cross section of sunlight intersected is fairly small.

    The reality is that the whole ice albedo feedback thing is just not that important. The equatorial deserts matter much more with their huge area and 4th (or 5th) power on radiation.

    For a better answer, you could contact George on WUWT. (Or maybe I’ll ask him to give an answer here…)

  7. Larry Geiger says:

    I mostly have no clue what you are talking about.

    However, the graph is very interesting.
    1. If it’s accurate (which I assume) then the current up tick in temps is totally irrelevant in a climate sense.
    2. It appears that back in 15,000BP the earth tried to kick into warm mode and didn’t quite make it. It didn’t happen until around 11,500BP.
    3. We should be happy to live when we do. Some of those up and down ticks at 28,000BP and 29,000BP look pretty drastic. We think that we’ve seen warming but we haven’t seen anything yet.
    4. Despite the teeny, tiny, wee little up tick at the right end of the graph, the future (1,000 – 3,000 years) doesn’t look warmer!

  8. E.M.Smith says:

    I’ve seen roughly the same graph from other sources. You can find some corroboration under the AGW Climate Perspective category:

    https://chiefio.wordpress.com/category/agw-and-gistemp-issues/agw-climate-perspective/

    And every one of your enumerated points is correct.

    FWIW, I think that ‘failure to warm’ plunge of the Younger Dryas is most likely when most of the Clovis People got wiped out in North America, as a large rock from space hit the ice sheet (thus leaving no crater, as the 2 miles of ice took the hit) and caused a load of ‘issues’ (probably including the flash-frozen Mammoths).

    Mammoth Meat

    FWIW, a lot of the above just comes down to “You can’t DO that as the theoretical underpinnings of the math and physics are just wrong.”

    Things like confusing temperature for heat when heat matters. Heat is total energy, while temperature reflects the average speed of individual molecules. So put an ounce of water on the stove in one pan, a gallon in the other. Turn on the burners. The one ounce will evaporate and the pot will get quite hot long before the gallon is even warm, yet both are getting the same ‘heat gain’. Temperature is a lousy way to measure heat.
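
    (The stove-top example in rough numbers; my own figures, ignoring evaporation and the pans themselves:)

# Same heat into two very different masses of water: Q = m * c * dT.
C_WATER = 4.186     # specific heat of water, J per gram per deg C
OUNCE_G = 29.6      # one fluid ounce of water, grams (approx.)
GALLON_G = 3785.0   # one US gallon of water, grams (approx.)
Q_JOULES = 10000.0  # the same 10 kJ of heat into each pan

for name, grams in (("one ounce", OUNCE_G), ("one gallon", GALLON_G)):
    delta_t = Q_JOULES / (grams * C_WATER)
    print("%-10s : temperature rise of %5.1f C" % (name, delta_t))
# Identical heat gain, wildly different temperature change -- which is exactly
# why temperature alone is a poor proxy for heat.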

    Nyquist just says you have to measure enough places (and formulates the math for knowing exactly how many). Think of it as checking how many drunks there are at a football game. Sampling one person is not enough as you get 100% or 0% while at the other end it’s accurate but you are doing way too much work. So you need a couple of folks from each section (as the Baptist Delegation in section 25 is not a representative sample ;-) but if you get one of them and one of the rowdies from the end zone party section, you are getting closer. The thermometer guys are taking 20 samples from the end zone and 2 from elsewhere in the stadium and finding ‘record high’ levels of drunks…

    What George is talking about is a matter of physics. The AGW folks are saying that we’re going to have a tipping point with ever more heat trapped. The S-B Equation says the hotter something gets, the faster it dumps heat, and that accelerates ‘way fast’ as the 4th power (and George adds an edge effect that gives a 5th power function). So if we doubled temperature, heat loss would go up by 2x2x2x2 = 16 times (or maybe 32 times). Hard to have heat build up when every increment of warming speeds up the heat loss that strongly.

    And that is a fundamental well known chunk of physics.

    There is also the matter of the difference between the actual surface that is doing the heat loss and the air in the box up above it with the thermometer. Suppose we had a uniform grey surface under the box giving one unit of radiation, and changed that for a mix of white and black (at the same air temperature), with the black surface twice as hot as the grey and the white half as hot. The white would radiate less (1/16th as much) but the black would radiate 16 times as much. So take that one unit: the black/white mix now radiates (1/16 + 16)/2, about 8 units, and that is a lot larger than 1 (which is all a long way of saying the actual surfaces matter a whole lot more than the air, to CO2 ‘greenhouse gas’ Infrared theory; and we don’t measure the surfaces…)

    In the real world you would not get 1/2 and double (as it’s in absolute degrees…) but you get the same effect (still an increase in loss, just not as much as the example).
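
    (Rough numbers for both points; the doubling is purely illustrative, and the 330 K / 270 K split standing in for the black/white mix is my own made-up example:)

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/m^2/K^4

def emittance(t_kelvin, emissivity=1.0):
    """Black/grey body radiant emittance, W/m^2."""
    return emissivity * SIGMA * t_kelvin**4

# 1) Double the absolute temperature and the radiant loss goes up 2^4 = 16 times.
print("doubling T, loss ratio : %.0f x" % (emittance(600.0) / emittance(300.0)))

# 2) Swap a uniform 300 K surface for a half-and-half mix of hot (330 K) and
#    cold (270 K) patches with the same simple-average temperature.
uniform = emittance(300.0)
mixed = 0.5 * emittance(330.0) + 0.5 * emittance(270.0)
print("uniform 300 K surface  : %.0f W/m^2" % uniform)
print("hot/cold mixed surface : %.0f W/m^2" % mixed)
# The mix sheds more heat than the uniform surface at the same average
# temperature, because T^4 weights the hot patches so heavily.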

    There are some other fine points, but that’s the big lumps.

    Not so complicated once you strip out the jargon…

  9. Curt says:

    RACook:

    Last year I briefly looked into the issue of low-angle-of-incidence reflection off water for precisely the reasons you are asking about. The best on-line sources I found were from optics companies looking at the air/glass interface. I am not able to dig them up right now.

    The percentage of reflection is a (continuous) trigonometric function of the angle, although quite a bit more complicated than a simple sine or cosine function; there are separate functions for vertically and horizontally polarized light (which is why glare is heavily polarized and we wear polarized sunglasses).

    But suffice it to say that when the sun is very low, almost all of the light is reflected off liquid water. Remember though that your simple calculations implicitly assume smooth water surface; I don’t know how waves would affect the average reflection.
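
    [E.M.Smith adds: for anyone who wants to play with the numbers, here is a sketch of the standard Fresnel reflection formulas for a smooth air-to-water interface, with n = 1.33 assumed; waves and any absorption in the water are ignored.]

import math

N_WATER = 1.33   # refractive index of liquid water (ice is similar, ~1.31)

def fresnel_reflectance(incidence_deg, n=N_WATER):
    """Unpolarized Fresnel reflectance off a smooth air/water surface."""
    theta_i = math.radians(incidence_deg)             # measured from the vertical (normal)
    if theta_i < 1e-9:
        return ((n - 1.0) / (n + 1.0)) ** 2           # normal-incidence limit, about 2 %
    theta_t = math.asin(math.sin(theta_i) / n)        # refracted angle (Snell's law)
    r_s = (math.sin(theta_i - theta_t) / math.sin(theta_i + theta_t)) ** 2
    r_p = (math.tan(theta_i - theta_t) / math.tan(theta_i + theta_t)) ** 2
    return 0.5 * (r_s + r_p)                          # average the two polarizations

for sun_elevation in (90, 45, 20, 10, 5, 2):
    incidence = 90 - sun_elevation                    # low sun = grazing incidence
    print("sun %2d deg above horizon: %4.0f %% of the light reflected"
          % (sun_elevation, 100.0 * fresnel_reflectance(incidence)))
# Reflection off smooth water climbs steeply once the sun gets within roughly
# 10-20 degrees of the horizon.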

    Overall, there are four factors that make the ice/albedo “positive feedback” less strong at high latitudes than at low latitudes:

    1. There is less land per degree of latitude at higher latitudes.
    2. The power density (W/m^2 of surface) is lower at high latitudes by simple geometry.
    3. The longer path through the atmosphere at high latitudes scatters more of the sunlight before it reaches the surface.
    4. More of the sunlight is reflected off the surfaces at high latitudes.

    To me it is clear that the ice/albedo feedback must get substantially weaker at higher latitudes. It would not surprise me at all if the reason that glacial-to-interglacial transitions end where they do, with some ice/snowbound regions still existing, is that this feedback has weakened to insignificance.

  10. RACookPE1978 says:

    I am in the airport terminal right now, and so do not have all my notes – nor my globe with the actual latitudes of the Arctic land mass surrounding the ocean.

    From memory, and please check that carefully! – let me summarize where my sunlight reflection/sunlight absorption/loss-of-ice geography leads:

    Averaging things, the arctic itself is not a complete circle below 80 north latitude, but actually only spans 120 degrees of longitude between 80 north and 70 north. At no place does the arctic extend south of 70 north. The tiny swath of Greenland between 80 north and its northern tip of ~82 north is neglected. The general north rim of Canada, Alaska, and Siberia averages 70 north.

    All of the tundra south of the arctic ocean itself is ice-free away from the ocean, so melting arctic ocean ice can’t affect tundra energy absorption. Therefore, we need to look at the effect of sunshine hitting a 360 degree circle from 80 north to 90 north, and a 120 degree slice of that circle from 70 north to 80 north.

    All areas of the globe north of the arctic circle (at 90 - 23.5 = 66.5 degrees north) receive 24 hours of sunshine for at least one day of the year, but that sunshine varies in impact angle during that 24 hour “day”. And as you go earlier or later from June 21/22, the amount of sunshine available each day decreases substantially. It is absolutely wrong to base any calculation for the arctic as a whole on multiplying any amount of theoretical absorbed energy by 24 hours per day.

    Further, the solar angle even on June 21/22 varies from a minimum at 00:00 (midnight, when the sun shines from due north and is lowest), to 06:00 when it shines from due east, to solar noon (12:00, when it is due south and highest), to 18:00 (6:00 pm, when it shines from the west and is as low as it was at 06:00). Since all areas of the arctic are above 66.5 degrees north, we need to look at 24 hours of exposure north of 80 north, but only 8 hours of light for latitudes from 70 north to 80 north. Every other day of the year, available solar radiation will be less than what is available on June 21/22. Again, it will be less regardless of what surface is present over the arctic ocean.

    Further, the lower in the sky the sun is, the more the atmosphere will absorb of its heat, regardless of the surface (ice, water, land, urban concrete, or a black piece of flat paper).

    So we need to define the following angle of the sun:
    When the sun is lower in the sky than its Effective Reflection Angle (EFA), it cannot effectively heat a reflective surface.

    Once the EFA is determined, and it may be different for ice and water though it is not expected to vary much since both ice and water have refractive indexes of 1.3

    The lower the sun is in the sky, the more its rays are spread over a flat surface parallel to the earth, and the less energy per square meter can be deposited, even under the best of conditions. The sun’s angle at any moment is simply a function of the area’s latitude and the time of day.
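
    [E.M.Smith adds: a small sketch of that latitude and time-of-day geometry, using the standard solar-elevation formula; the declination is fixed at +23.44 degrees for the June solstice, the atmosphere is ignored, and 80 N is just a convenient example latitude.]

import math

def sun_elevation_deg(latitude_deg, declination_deg, solar_hour):
    """Solar elevation from sin(h) = sin(lat)*sin(dec) + cos(lat)*cos(dec)*cos(HA)."""
    lat = math.radians(latitude_deg)
    dec = math.radians(declination_deg)
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))   # 15 degrees per hour from noon
    sin_h = (math.sin(lat) * math.sin(dec)
             + math.cos(lat) * math.cos(dec) * math.cos(hour_angle))
    return math.degrees(math.asin(sin_h))

DECL_JUNE_SOLSTICE = 23.44   # solar declination on June 21/22, degrees

for solar_hour in (0, 6, 12, 18):
    elevation = sun_elevation_deg(80.0, DECL_JUNE_SOLSTICE, solar_hour)
    print("80 N, solar hour %02d:00 -> sun %5.1f deg above the horizon" % (solar_hour, elevation))
# Even at local noon on the solstice the sun stays low at 80 N, and the energy
# reaching a level surface scales with the sine of that elevation angle.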

    More later.
