What Happens In A Climate Cell (Model)

In a prior posting I looked at how to choose a cell gridding or placement method.

In this posting, I’m looking at what happens inside a physical cell and ways to properly reflect that inside the model cell.

This likely would work best with a graphic, but my graphics skills are primitive at best and the time it would take is not available to me at the moment. So text it is.

Basic Cell vs Sun

In my view, the cell is first defined by the location (latitude, longitude, average altitude) and then what that means in terms of ’tilt’ of that cell toward the sun line. So, for example:

A cell at sea level at the equator would be 0 altitude, 0 tilt, 0 latitude, and {whatever} longitude.

A cell at the north pole would be 0 altitude, 90 tilt, 90 latitude, and {any / all} longitude.

A cell at 45 north in a mountain containing area could be 2000 m altitude, 45 tilt, 45 latitude, and some longitude.

A cell at 45 south could be 100 m altitude, 45 tilt, -45 latitude, and some longitude.

The Lat and Lon can be used to calculate tilt relative to the normal sun line, then tilt can be used in conjunction with season (and precession and wobble and… if desired) to calculate the % of solar output that is incident on that cell.
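That tilt-and-season calculation is standard flat-plate solar geometry. A minimal sketch in Python (using a simple cosine approximation for declination, and ignoring precession, wobble, refraction, and the like):

```python
import math

def insolation_fraction(lat_deg, day_of_year, hour_utc, lon_deg=0.0):
    """Fraction of TSI incident on a horizontal cell surface.

    Standard solar geometry: cos(zenith) from latitude, solar
    declination, and hour angle. Declination uses a simple cosine
    approximation good to about half a degree.
    """
    # Solar declination (degrees)
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Hour angle: zero at local solar noon, 15 degrees per hour
    solar_time = hour_utc + lon_deg / 15.0
    hour_angle = 15.0 * (solar_time - 12.0)
    lat, dec, ha = map(math.radians, (lat_deg, decl, hour_angle))
    cos_zenith = (math.sin(lat) * math.sin(dec)
                  + math.cos(lat) * math.cos(dec) * math.cos(ha))
    return max(0.0, cos_zenith)  # zero at night / sun below horizon
```

At the equator at equinox noon this returns essentially 1.0; at the pole in late December it returns 0. Altitude and local slope would layer on top of this as extra cell parameters.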

Pretty much all of this can be computed once in advance and stored as parameters for use during a model run. In this way your “Globe” becomes a “disco ball” with tessellations defined all over the surface relative to the sun line. The model run just needs to ‘rotate the ball’ and calculate the change of actual ‘angle to sun’ as the cell/tilt rotates relative to the sun (and nods up and down with the season). FWIW, it would be possible to pre-compute those per-time-step angle values once and store them as well; but it would require some testing to find out if the (very slow) read-in process took more time than just recomputing them with each turn of a time period.

A 10,000 cell “globe” would need 240,000 parameters per day if the time step is one hour, then x 365 to cover the annual changes of axial tilt. That’s 87.6 million parameters that would need to be stored / read in, for each year of run time. Even at several parameters per cell and long words, the number of MB is very manageable. Even at 10x it would be under 1 GB of data. Many larger memory SBCs have 4 to 8 GB of memory, so caching is possible. Even if not cached, a 1 year model time run would require reading all of 1 GB of data. If a year of model time is completed in 1 minute, that’s 1 GB / minute. Very easy on a 640 MB/second (38 GB/min) USB 3.0 channel.
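The storage arithmetic can be checked in a few lines (assuming, as a worst-ish case, one 8-byte value per cell per hourly step):

```python
cells = 10_000
hours_per_day = 24
days = 365
params_per_cell_step = 1   # assumption: one stored angle value per cell-hour
bytes_per_param = 8        # a 64-bit "long word" float

values = cells * hours_per_day * days * params_per_cell_step
size_gb = values * bytes_per_param / 1e9
# values == 87_600_000 and size_gb comes out about 0.7 -- so even the
# 10x case stays under 10 GB / year of model time, and the 1x case is
# cacheable on a 4 to 8 GB SBC.
```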

I’m pretty sure that calculating the sunlight incident on any single cell is easily handled with just these values, and that it is easily handled in a modest SBC (likely with computes and memory left over).

I’d start with a model that used a TSI Solar Constant, but later I’d change that to a variable sun where the spectrum shift (that we saw happen) of up to 10% is incorporated.

Layers Of Layers

But sunlight does not just fall on a cell and be done. Different things happen in different layers of the path.

The path has parts.

Atmosphere, surface {water, dirt, vegetation, snow / ice}, subsurface {ocean, earth}.

Here IMHO we need to layer the air at least into the gross layers of Mesosphere, Stratosphere, Troposphere. I think we can ignore the Thermosphere (where the space station and many satellites orbit) until proven otherwise. The Mesosphere matters as some of the hardest UV is absorbed up there, but perhaps it can be left out of a ‘first cut’. The Stratosphere is where the Ozone Layer absorbs a lot of UV, so solar shifts in the amount of UV will matter there; it is also where actual IR radiance to space can happen. The Stratosphere can actually reach the Earth surface at high polar latitudes in the middle of a dark winter. A complication that really matters, but will be difficult to get right in a model. Then the Troposphere is where IR is ineffective at moving heat, so turbulent convection is the dominant method of heat transport. It is also where almost all of the weather happens.

As there is a Cat 2 hurricane-strength wind going sideways at the Tropopause, I think it matters a very great deal too, but I’m not sure how to handle that. Have a specific layer? Average the mass flow into the stratosphere and not worry about it? Somehow one must get that wind taking huge tonnages of air from the equatorial rise to the polar vortex descent. It ought to show up as an emergent phenomenon, but then you need very thin layers… So that’s an open issue. I suppose as an initial kludge one could just program in that air from the Troposphere top gets moved sideways in a layer toward the poles, with some leakage to the stratosphere. Hack it in as an “if air gets here, it goes there”… Then come back later to make it emergent? Hmmm….

I think that is enough layering to handle the major differences in absorption, weather events, chemistry, and IR path length (at least for a first cut). Eventually it would be helpful (as my best guess) to have layers as thin as the typical tropopause thickness for all layers and then model each one with the known physics of them; but that’s definitely a ‘way later’ enhancement.

Then you have the Surface Layer:

This is a (roughly) 100 m thick layer of air that is strongly influenced by the dirt or water below it. I think this layer must be considered along with that physical floor surface.

This is where the sunshine hits the road (or forest or ocean or…) and turns into heat, either as temperature or evaporated water, with some turned into chemical energy in plants. The physical heat portion can be transferred from the surface into the boundary layer of air. This causes it to warm during the day, destabilize, and rise. As water on the surface evaporates, it also makes the boundary layer air less dense (N2 molecular mass 28, dry air about 29, water vapor 18, so moist air is lighter) and enhances rising.
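That density effect can be sketched with the ideal gas law. A minimal illustration (standard molar masses; the function is a sketch, not the model’s actual code):

```python
def moist_air_density(t_kelvin, pressure_pa=101_325.0, vapor_pressure_pa=0.0):
    """Ideal-gas density of moist air, kg/m^3.

    Water vapor (molar mass ~18 g/mol) displaces dry air (~29 g/mol)
    at the same total pressure, so moist air is lighter and tends
    to rise, just as described above.
    """
    R = 8.314                           # J/(mol K)
    m_dry, m_h2o = 0.028964, 0.018016   # kg/mol
    p_dry = pressure_pa - vapor_pressure_pa
    return (p_dry * m_dry + vapor_pressure_pa * m_h2o) / (R * t_kelvin)
```

At the same temperature and pressure, any nonzero vapor pressure gives a lower density than dry air, which is the buoyancy kick the boundary layer gets from a wet surface.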

That says, to me, that key functions of the surface interaction are absorption vs reflection (albedo) and for ocean, transmission into the depths, then propensity to evaporate water into the air, and any vegetation effects. This is where we must handle the varying wetness of surfaces, changing snow / ice coatings, vegetation transpiration (and the tendency to self regulate to about 86 F leaf temperature via transpiration). IMHO, this is likely to be the hardest bit to get right.

So we’ll need parameters for Albedo by season / weather events, surface type (dirt, water, vegetation where vegetation will change by season as can dirt vs vegetation) and a way to figure out how much water will move from the surface (at whatever the present wetness might be) into the boundary layer.

Initially this can be simplified to just average albedo and surface type, but expanding the detail handling of the surface and boundary layer is where a lot of the work will need to go.

Think we’re done?


A cold surface has a huge heat sink below it. You can think of the top 10 cm or so of dirt as having a significant change of temperature during the day, but with a very slow heat transfer to / from the larger block of dirt and rock below it. This can be modeled as a large capacitor coupled through a resistor to a smaller surface capacitor. Overall, the Earth has net heat flow OUT from the several thousand degrees core temperature to the air, then space. So overall, the net heat flow is out of this sub-surface rock into the boundary layer.

BUT, the heat flow is extraordinarily slow as rock has a high R (resistance to heat flow) value. In general, I think we can ignore this essentially constant net heat flow out. Yes, it matters. But IMHO it matters a whole lot more at the various plate boundaries where boiling hot water and / or magma come flowing out. We have no real data on heat flow at the ocean bottom, but it is vastly more than on the continental land away from volcanoes.

Yet on a yearly basis, the heat flows into the top couple of meters of dirt / rock in summer, and out in winter. It takes a couple of months to make the surface rocks cold enough for snow to stick. It takes a few months for the spring warmth to soak in. I suspect it is a significant part of why our seasons lag the insolation angle by a couple of months. How much, how fast, how big? All need a bit of investigation. Initially, I’d set it to a Fudge Factor. Just something like “soak in surface temp delayed by a month”. Eventually it needs decent modeling on its own. Any depth over about 2 meters can be set to a constant 56 F, which is close to the average air temperature of the planet.
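A minimal sketch of that Fudge Factor as the capacitor-and-resistor idea from above: one deep block pinned at 56 F, one surface slab, and a guessed one-month time constant (both the time constant and the step form are placeholders, not tuned values):

```python
def ground_temp_step(t_surface_f, t_deep_f=56.0, dt_hours=1.0,
                     tau_hours=24.0 * 30):
    """One explicit Euler step of a one-resistor, one-capacitor ground model.

    tau_hours is the R*C time constant; ~30 days is a plug number
    standing in for the 'delayed by about a month' seasonal lag.
    The deep block is held at a constant 56 F.
    """
    return t_surface_f + (t_deep_f - t_surface_f) * (dt_hours / tau_hours)
```

Start a summer-warmed slab at 80 F with no further forcing and it relaxes toward 56 F over a few months of hourly steps, which is the kind of lag behaviour wanted; the real model would add the daily insolation forcing on top.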

So initially it’s a plug number wobbler, but eventually needs to be done better / right.

Ocean is more complex.

For the ocean, we again need a ‘surface layer’ where IR at the surface all causes prompt evaporation, but red and blue and remaining UV soak in. I’d include about a 10 m mixed layer that over the length of the day mixes in any surface delta-T. Then you have the water down to about 100 m where red and blue light are all absorbed. UV can go to the bottom of that, or even below to some amount. Exact depths need to be worked out as those are ‘guesses’ based on what I know from scuba diving.

Initially, I’d model it as a surface that absorbs all IR and turns it into water vapor, then the Red through Blue absorbed as heating into the next 100 feet / 30 m. Yes, at 100 feet in the tropics there’s still a lot of light. Yes, it ought to have a ‘turbidity’ factor and ’tilt matters’ too as depth is directly toward the core of the gravity well while solar light path is directly along line to the sun, so at the poles you get a near infinite path length at near zero depth as the sun sits on the horizon in fall. I’d put that off for later, as I’m pretty sure there’s bigger fish to fry first. At the pole in fall you have near zero light / heat incident anyway, so handling it perfectly isn’t so important. At the equator, having all the heat in 30 m instead of 50 m is likely not that important. That we don’t have enough granularity to handle major ocean currents and gyres is likely a bigger issue.

So I’d have a surface layer, visible absorption layer, then below that any residual UV. At about 1000 M you can assume it is 4 C and constant. Again, exact numbers need a “dig here” to set. Currents need to be added “someday” to get heat flow correct. In a ‘first cut’ one might need to add them as mass flow average plug values between cells.
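The per-band soak-in could be sketched as Beer-Lambert absorption by layer. The e-folding depths and band fractions below are illustrative guesses (matching the ‘IR at the surface, visible in the next tens of meters, residual UV below’ split above), not measured values, and a real version would fold in turbidity and sun angle:

```python
import math

# Illustrative e-folding depths in meters by band -- placeholder guesses.
E_FOLDING_M = {"ir": 0.01, "red": 5.0, "blue": 25.0, "uv": 15.0}
# Rough split of surface sunlight by band -- also placeholder guesses.
BAND_FRACTION = {"ir": 0.5, "red": 0.2, "blue": 0.2, "uv": 0.1}

def absorbed_in_layer(top_m, bottom_m):
    """Fraction of surface shortwave absorbed between two depths.

    Simple Beer-Lambert decay per band, summed over bands.
    """
    total = 0.0
    for band, frac in BAND_FRACTION.items():
        k = 1.0 / E_FOLDING_M[band]
        total += frac * (math.exp(-k * top_m) - math.exp(-k * bottom_m))
    return total
```

With these plug numbers, over half the energy (the IR fraction) lands in the top 10 cm, driving the prompt-evaporation surface layer, while the visible bands heat the mixed layer below, and the whole column sums to 1.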

That, I think, covers the question of layers and layering for the first cut at a cell model “well enough” for now. When (if?) I ever get that much programmed and running, I’ll be thrilled…

Order Of Processing

The model MUST converge, or it is trash. For that reason, I’d not be too hung up on starting values. Most anything reasonable ought to converge toward more reasonable, or you need to fix something in the model. Set starting values to approximate averages from some year and “move on” (to debugging, fixing, and tuning to reality… but note when your “tuning” is really patching over a missing bit of reality, like those ocean currents and their heat flows…)

I’d have an initial TSI Solar Constant, then in Version 2.0, change that to a variable solar spectrum with solar cycle (as has been observed). Why? Because I believe that is a major driver of the system, the spectrum shifts, and I’d like to see how the model runs both ways and compare.

Then that solar power arrives at the “top of cell” proportional to tilt / season. Each atmospheric layer absorbs the portion of that light which it normally captures, or reflects. For the tropospheric layer, that includes the initial %Clouds and their thickness. When it reaches the surface, the albedo and surface type determine absorption. Light gets turned to surface heat, or water evaporation, or goes into ocean depths. This is where a lot of the Real Work gets done on “getting it right”. Then that heat and water vapor gets moved into the boundary layer of air. Reflected light goes back toward the relevant layers (but I suspect this can be ignored… if you already absorbed the UVC incoming does the blue outgoing matter to the stratosphere? Some noodling needed here…)

Now you get to figure out how to do inter-layer mass transport and weather computing. Sigh. Don’t have a good grip on this yet. So {wave hands} a density change needs to result in a velocity upward. This becomes a “mass / unit time” into the next layer. Some of this turns into lateral air flow (wind) proportional to volume change. Then, the really hard bit:

Send in The Clouds

As air rises into the Troposphere, it cools. At some point water vapor condenses. This can be fog in the surface boundary layer (model with RH vs T ?) and clouds at some level. In reality, you can have ‘cloud decks’ at 1000 ft levels from zero to 40,000 feet. Call it 40 to 50 layers are really needed. Then there’s a few dozen kinds of clouds, infinitely variable opacity from 0 to 100%, layers that can be a few feet thick up to a thunderhead from ground level to 50,000 feet.
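The cloud-base part of “at some point water vapor condenses” at least has a well-known rule of thumb: the lifting condensation level rises roughly 125 m per degree C of dewpoint depression at the surface. A sketch (rule-of-thumb only, not a substitute for a real moist-adiabat calculation):

```python
def lcl_height_m(t_surface_c, dewpoint_c):
    """Rough cloud-base (lifting condensation level) estimate.

    Rule of thumb: rising air cools faster than its dewpoint drops,
    closing the gap at about 1 C per 125 m of ascent.
    """
    return max(0.0, 125.0 * (t_surface_c - dewpoint_c))
```

A 30 C surface with a 20 C dewpoint gives a cloud base near 1250 m; when surface temperature equals dewpoint the base is at ground level, i.e. fog in the boundary layer, matching the RH vs T case above.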

Clouds are CRITICALLY important, as in ‘cycle 2’, they determine how much of the sunshine makes it to the surface to drive all the rest. They also happen at far finer granularity than the size of a cell. So “what to do?”

Initially, I’d use a table of “average cloud by month” for each cell. Likely fairly easy to get the data and not that much data entry. For 10k cells, that’s 120k values. This is “good enough” to let you work on getting the rest of the model right. What it BREAKS is the feedback loop from surface heating a wet surface to more cloud to more shade and more rain. It is a big crowbar setting things to AVERAGE. I’d do the same thing with precipitation by month, initially.
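A sketch of that monthly plug table (random placeholder numbers standing in for the real observed monthly averages, which would be loaded from a data file):

```python
import random

N_CELLS = 10_000
random.seed(0)

# Placeholder climatology: average cloud fraction per (cell, month).
# 10k cells x 12 months = the 120k values mentioned above.
cloud_by_month = [[random.uniform(0.2, 0.8) for _ in range(12)]
                  for _ in range(N_CELLS)]

def cloud_fraction(cell_id, month):
    """month is 1..12; returns the plugged-in monthly average cloudiness."""
    return cloud_by_month[cell_id][month - 1]
```

This is exactly the “big crowbar setting things to AVERAGE”: the lookup never responds to what the surface below it is doing, which is why it has to be replaced eventually.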

Longer term, you must find a way to get clouds and precipitation right (and is that precip rain or snow or ice or ???). Shorter term, just saying “on average it snows 8 inches in November” and put 8 inches of snow on the ground along with the appropriate temperature is “good enough” to start debugging the rest of the model (the air and land).

This is where the bulk of the work in “Version 2.0” or maybe even “Version 3.0” will be. IMHO, it is likely why the present “Climate Models” are so concentrated on radiative physics. Clouds and precipitation and convection feedbacks are hard. Radiation is a formula.

Time Step

AFTER all that stuff is done and calculated, you take a Time Step. Mass flows to other cells, along with heat amounts and water and such, are all moved to those cells’ input buffers. Cloud values (eventually…) set and new insolation % figured. Surface changes set (i.e. is your ‘dirt’ now ‘snow’ as the weather model says snow happened?). Mass and water flows into this cell are integrated. Basically, you move all the surface and mass flow stuff around, taking their heat and water changes with them.

Then you do all your calculations again.

So I see the general flow as:

Read in parameters / values by season / date / time. Start.
Calculate Light In (solar values, cell parms, time).
Distribute light energy by layer absorption and reflectance values.
Calculate temperature changes, water vapor / ice melting changes (phase changes), mass changes (air density, pressure, etc.)
Calculate ocean energy / temperature changes (eventually).
Figure cycle mass flows out (air / water to next cell, clouds to next layer, etc.)
Calculate any radiative model leftovers.
Set any changes to initial parameters (such as snow on dirt, etc.)
Next Time Step.
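The step list above maps fairly directly onto a per-cell loop. A minimal runnable skeleton (every helper name here is invented for this sketch; each stub stands in for the real physics of its step):

```python
# Trivial stand-ins so the skeleton runs; each would become real physics.
def calc_light_in(params, state, step):
    return 1361.0  # W/m^2: TSI Solar Constant plug value

def distribute_by_layer(light, params, state):
    return {"surface": 0.5 * light}  # crude: half reaches the surface

def update_heat_and_phase(absorbed, state):
    state["temp_k"] += 0.0  # temperature / phase-change bookkeeping goes here

def mass_flows_out(state):
    return {}  # air / water / cloud mass leaving this cell

def radiate_leftovers(state):
    pass  # any radiative model leftovers

def apply_surface_changes(state):
    pass  # e.g. 'dirt' becomes 'snow' if the weather step said so

def send_to_neighbors(outflows):
    pass  # write into neighboring cells' input buffers

def run_cell(params, state, n_steps):
    """One cell's time-step loop, mirroring the numbered flow above."""
    for step in range(n_steps):
        light = calc_light_in(params, state, step)
        absorbed = distribute_by_layer(light, params, state)
        update_heat_and_phase(absorbed, state)
        outflows = mass_flows_out(state)
        radiate_leftovers(state)
        apply_surface_changes(state)
        send_to_neighbors(outflows)
    return state
```

The point of the skeleton is the ordering and the buffer handoff at the end of each tick, not the stub physics.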

That’s all just a ‘first cut prototyping’ spitball subject to massive changes as desired. Just sayin’… it isn’t considered significantly valid and will change.

In Conclusion

That’s where I’ve got to. A solar centric, water moderated, tropospheric weather modulated, convective first, radiative at the end, model. IR radiation out from the Stratosphere (where it happens and where it is driven by CO2 et al. as cooling gases), and CO2 ignored in the Troposphere where it can’t do anything as the radiative window is closed (which is WHY we have a Troposphere…)

With a whole lot of details to be worked out over time.

I’d start with One Cell with fixed parameters. Once coded, move parameters to a ‘read file’ in. Then add a few more cells around it, test flow between cells and data in from “plug number files”. Once any insane behaviour is stamped out of it, use a globe of about 100 cells and ‘reasonable’ estimates of values running on one computer and ‘see where it goes’. IFF that has any crazy behaviours, polish on that until no longer insane. (Things like snow in the Sahara or a desert at the north pole… or 200 inches of rain in Los Angeles…) Then try to expand the number of cells and move to distributed processing. (That’s a big lump ‘o work just for the data entry. Figure a reasonable 10k cell model will take about 20 data items input, so 200k items to type. At, say, 10 char per item, that’s 2 million keystrokes… or one works out a data loading script… but it’s days+ in any case.)

I figure I can get one cell running, and even expand it to a dozen cells, all by myself. Perhaps even I can get the 100 Cell Globe to run. At the point that I’m trying to get a 10K to 100K planet running with added clouds and ocean currents: It will take several people and a lot of hardware (i.e. money) IMHO. So we’ll see how far I get, and it won’t be happening fast as this happens in my “left over time” that is not much.

So there you have my best guesses on it all.

Start throwing rocks and ideas at it. Polish time welcome.


About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present “hot buttons” are the mythology of Climate Change and ancient metrology; but things change...
This entry was posted in AGW Science and Background, Tech Bits. Bookmark the permalink.

33 Responses to What Happens In A Climate Cell (Model)

  1. rhoda klapp says:

    On a related point, does anybody in the climate community go back and measure what happens to compare with the computed figures? Just one cell would do, or one square metre.

  2. cdquarles says:

A catch with water. It is quite complicated and exists in the atmosphere from single molecules to agglomerations from dimers to aerosols (clouds) as well as precipitation (from 100 micrometers or so to several centimeters as hail). That said, your ideas for handling water vapor and water clouds seem reasonable as crude first order approximations.

    Oh, dust clouds may need to be incorporated, too.

    Oh, that seasonal lag is not a couple of months, at least at my latitude. It is more like 4 to 6 weeks, yet is still conditional depending on more than just soil temperature. Wind fetches matter, especially long and strong ones by direction. I also suspect that lag varies by latitude, too; as in less at/near the equator and greater toward the subpolar latitudes, then may diminish as the daylight/night lengths change with the tilt (nearly 6 months of light then nearly 6 months of very little light) at the poles. Yeah, a dig here sounds useful.

    As a rough first approximation, this seems fine as a slight modification of weather models.

  3. V.P. Elect Smith says:

    They do test by “hind-casting”. That is, run the model and see if it reproduces the past right. The problem with this is that it conforms the model to the extant data. You have no idea if the future will be right.

    So, say the globe really warmed from 1850 to 1990 and so does your model. In your model, you did it via making CO2 sensitivity very high. Is that RIGHT? What happens if CO2 sensitivity doesn’t matter and Solar Spectrum does? Then you get the time from 1999 to 2020 cooling in real life with “the pause” and record snow along with reduced crop yields (as has happened) all while CO2 is rising but the Sun went to cool mode spectrum (low UV higher IR).

At that point all sorts of folks ask for more grant money to work on why The Pause happened and to claim that “the model is right” but “FOO” needs to be addressed (everything from particulates to economic slowdown to bat feathers…).

    This is a very well known and common failure mode of models (of all sorts). Modeling the data always leads to trouble.

    But, in general answer to your question:

    They do, but very badly. So, for example, they are quite happy to dismiss a decade or two of divergence of the model from reality as “normal”…

  4. V.P. Elect Smith says:


    I’m seeing the “couple of months lag” as an emergent phenomenon of ground sink of heat coupled with atmospheric movements and ocean temperature shifts. I only mentioned it as a side bar to the need to have some degree of modeling of a giant heat sink of rock under a grid cell.

    In the modeling, I’d start off with it as a fixed mass at 56 F and an R value for heat in, out. Then see what happens. Eventually it likely needs some “tuning” for permafrost in Alaska / Siberia being different from sand in the Sahara and from coral beaches around islands in the tropics. But I think that’s likely to be in the chump change compared to things like ocean currents and where the UV / blue light vs IR goes.

Oh, and in the Tropics, as the overhead sun just wobbles about 23 degrees north / south of the equator: There isn’t really any lag of seasons as there isn’t really much in the way of seasons ;-) It’s more up in Ice Country with 4 months of dead black and 4 months of lots of light where you get strong seasons, and so, I expect, more seasonal lag (as it takes longer to move all the various masses and heats further).

    But like I said, I think that will be emergent from mass, specific heat, heat of fusion of snow and ice, etc.

  5. IanStaffs says:

    You may find the linked post (and his follow ups) interesting.

  6. Foyle says:

    and then you have hurricanes, mixing heat through massive breaking waves in surface layers and transporting vast quantities of heat high into atmosphere, and highly non-linear tropical convection cells (thunderstorms). And vertical heat (cold) transport in clouds, and huge uncertainty of ice formation, albedo change due to random snow events, thermo haline and oceanic upwelling and millenia long ocean heat/circulation cycles. Not to mention extremely non-linear water phase change physics through the -85 – +60°C temperatures encountered on earth.

    It would be impossible to model with any reliability even with computers thousands of trillions of times faster than any yet built. Climate modelling is, was, and always shall be bunkum.

  7. Simon Derricutt says:

    EM – the vertical height of cells needs to be small enough to cope with cloud-forming at the specific heights that we see clouds form, and also the cell size needs to be able to resolve individual clouds. It seems to me that around 100m or so is about the biggest you can go in each dimension before you can’t resolve what’s actually happening inside that cell and instead need to go to fudging things using a “cloudiness percentage” parameter. I’d expect that the majority of the vertical cells would have bugger-all actually happening inside them except some mass movement in and out, so they would be pretty quick to calculate, but would need some tests for condensation/evaporation as the conditions change.

    On the 100m scale, you only need to deal with linear momentum, but as the cell gets larger you would need to put in a fudge-factor of angular momentum and how it gets passed from one cell to another. I regard angular momentum as an emergent phenomenon – it’s calculable, and it’s generally regarded as being a real number, but it’s a result of geometry on linear momentum and thus not actually a real thing in itself. If you do the calculations based on linear momentum and the forces involved then you get the same answers but from a more-fundamental basis.

    The time-tick you’re using between calculations has to be related to the cell-size so that the air mass-movement doesn’t go more than 1 cell away from where it started in 1 time-tick, since otherwise the Coriolis force effect (another fiction that’s useful but not real) won’t be applied properly as the air moves. Maybe not a major inaccuracy if you don’t move too many cells away in a time-tick.
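    The one-tick, one-cell constraint described here is essentially a CFL-style condition, and it is one line of code (the 50 m/s worst-case wind below is just an assumed number for illustration):

```python
def max_tick_seconds(cell_size_m, max_wind_m_per_s):
    """Largest time step for which no air parcel crosses more than
    one cell per tick (the CFL-style constraint described above)."""
    return cell_size_m / max_wind_m_per_s
```

    At 100 m cells and a 50 m/s worst-case wind, the tick must be 2 seconds or less, which gives a feel for how the calculation count explodes as cells shrink.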

    The results of the cell calculations should end up with a fairly-realistic weather pattern, but not that close to reality until the cell-size gets small-enough and the time-ticks are short-enough. I figure that once the cells are small enough you’ll get the waves at the top of the troposphere and the cyclones/anticyclones appearing in the model, and you’ll also get more realistic cloud formations, too. Such a massive number of calculations, though, that it would need a huge number of GPU cores.

    Despite the Butterfly Effect, I figure that the problem is in fact computable, and the reason it currently isn’t is that we simply haven’t a full set of data (and of course the size of computer is way beyond anything we can figure on making, as Foyle says). Then again, that full data set would include all the living animals (breathing out water-vapour that has to go somewhere) and all the plants (that also transpire) and the locations of all of those. In practice that means we apply an average number for a particular cell. The question is not whether the model will be accurate, therefore, since almost by definition it won’t be, but instead whether it would be good enough for the purpose.

    I’d reckon that the cell model (with enough cells) could extend the validity of a weather forecast beyond the 3 days it’s currently near-enough right. Might be 4 times better, giving us 2 weeks instead (though maybe that’s not actually 4 times better but 16 times better because it’s more likely a square relationship). Still, it seems mostly to depend on the cell size. With a small cell you can work from first principles, but as the cell gets bigger you’ll need to use fudge-factors based on averages that may not give good results. Clouds are individual things, and the fact that they are separate has a different effect on the ground than it would if they were averaged. Getting the clouds right is maybe the most important aspect of the model, and that implies a cell size sufficiently small to resolve them. Once clouds get smaller than the cell size, you need to use average cloudiness, and maybe at the 100m cells size that’s near-enough the truth to work given enough wind-speed to shift the clouds far-enough between time-ticks.

    The pressure-difference across a 100m cell will be pretty small in general, meaning that it will need high precision to get enough accuracy in the forces on the air-mass.

    Initialisation shouldn’t be that critical. Providing the database for ground conditions is there then initialising the air temperature as being the same in all places should settle to a realistic number in all places after enough time-ticks. It would reach that quicker if you start with a temperature related to latitude.

    At ground-level, there’s not only the heat stored with a resistance to change of that heat, but also water stored with a resistance to evaporation based on the vegetation or livestock there, as well as the ground temperature (which will vary with insolation). Ground-level is complex. Sea level needs to include ocean currents, and the way they change depending upon ice-melt and maybe insolation. Though first cut can use the ocean currents as they are, it does seem that a fuller model would need to take into account the basin shapes, salinity, and a lot of other stuff to model the actual ocean currents and water-movements – also means you’ll need to take the Moon into account and the tidal movements over the cycle of its orbit.

    Seems the more we look at this, the bigger the job gets. Maybe the first level, if the cell model produces realistic weather patterns (even if not accurate or predictive) would be better than current models, but still won’t be good enough to predict climate.

  8. cdquarles says:

    Given that the system is mathematically chaotic, I’d say ignoring that set of conditions would throw off an atmospheric centric model dominated by IR, when heat is internal kinetic energy and light isn’t, directly.

    I’d also say he is on to something.

  9. V.P. Elect Smith says:


    Interesting idea / model. My reaction to it is “necessary, not sufficient”. The Earth has a huge thing that a simple ball of rock in space with ice does not have: Rain.

    The “sudden end of the ice age glacials” that puzzles so many is easily explained by rain. The ice must accumulate as a slow mass flow problem. Every scrap of ice must arrive as precipitation, which means evaporation from somewhere else. You are limited by mass flow rate. Once you warm enough for that precipitate to arrive as water instead of snow, it can take with it a MUCH larger mass of ice. Watch the breakup of ice on frozen rivers. Huge chunks of slushy ice flowing out to sea to be transported toward the tropics, melting on the way. Watch a nice ski slope of thin, but workable, snow just be gone with one warmish rain.

    So it is an important part to add to any Ice Age Glacial theory, but it needs more added to “get it right”. Especially focused on wet mass flows.

    @Foyle & Simon:

    You are both quite right.

    I’d only point out that “Even a wrong model can be illustrative”.

    So, for example, say you run the model with “Average Clouds” and get crazy little heating. Looking into it, you find a LOT more water accumulating on the ground than happens in real life. Now you know you either are not evaporating enough of it in your model (too little input energy?) or that your precipitation is wrong (so you check it against real values and adjust). This can let you see where your modeling is a bit crazy / wrong.

    That’s a hypothetical thought toy, so don’t poke it too much ;-)

    Along similar lines, you can (once a ‘cell’ works OK), model a much smaller area of the globe with much finer grain of cells. “Getting more real” without a Trillion Raspberry Pis on the job ;-) So, for example, one could make a 100,000 cell model of the island of Kauai. At 562 sq. mi. (about 1456 km²), each cell would be 1456 / 100,000 = 0.01456 km^2 (or about 14,560 sq. meters / cell, roughly 121 m on a side, call it about 400 feet).
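    Checking that cell-size arithmetic in code (conversion factor 1 sq. mi. ≈ 2.59 km²):

```python
import math

area_km2 = 562 * 2.589988      # Kauai: sq. mi. converted to km^2 (~1456)
cells = 100_000
cell_km2 = area_km2 / cells    # ~0.0146 km^2 per cell
cell_m2 = cell_km2 * 1_000_000 # ~14,560 m^2 per cell
side_m = math.sqrt(cell_m2)    # ~121 m on a side
```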

    Then you can compare your fine grain local sim to the gross sim results and to reality. Now you have a way to examine how much “scale” screwed up your gross grain model of the area.

    Essentially, I’m saying you can use a “wrong from being too low on resolution” model to examine the effects of that low granularity and see just how big your error really is. That is also very useful.

    Or more briefly: A wrong model lets you examine the wrongness.

    Per Scale of Clouds:

    This is a theme I’ve mentioned before, but it is necessary to repeat it as folks do not naturally think in these terms: Nature is FRACTAL.

    It doesn’t matter if it is clouds, or rivers, or coastlines, or mountains, or trees or whatever. Nature tends to the fractal. Things are “self similar” “all the way down”.

    The answer you get depends on the scale you use.

    So the length of the coastline of Britain (as mentioned in the prior posting) depends on the size of ruler you use to do the measuring.

    The same thing is true of rivers and mountains and more. So to properly handle clouds, convective cells, ‘whatever’, we have to {somehow, waves hands…} find the effects of our ‘ruler size’ (cell size) and work around them or remove them {frantic hand waving ensues…}.

    One Small Example:

    You have a certain surface area in your cell. Processes depend on it. How much water evaporates from a film of water will to some degree depend on the area of the film. Yet the surface area of a bounded perimeter area is not known (or, really, knowable, as it is a fractal). This property is used to great effect by “rural land subdividers”. I learned this at about 8 years old as Dad sold rural real estate:

    There is a way of measuring large land blocks called “perimeter area”. You throw a straight line along each edge and calculate the area. There is another way, “survey area”, that has you survey the land, all of it, and calculate the area of the convoluted sheet. Now, with a few $Millions, you buy a hilly area based on “perimeter area”, then survey it into small parcels and sell them “For the same $/acre as we bought it”. You just don’t mention that the much smaller ruler used in the survey gives a lot more area, even if some of it is vertical…

    So now the model question becomes: What area do you use for your surface area? Perimeter area or survey area? IF Survey Area, what size ruler do you use? Do you count the surface area of buildings? Does every leaf and blade of grass area matter? (It does hold water films and they do transpire proportional to surface area…)
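    A minimal worked example of the gap between the two measures, assuming (purely for illustration) a parcel with a uniform 30 degree slope:

```python
import math

footprint_m2 = 100 * 100          # "perimeter area": flat 100 m x 100 m
slope_deg = 30                    # assumed uniform slope of the parcel
# "survey area": the actual sloped sheet is footprint / cos(slope)
survey_m2 = footprint_m2 / math.cos(math.radians(slope_deg))
print(round(survey_m2))           # ~11547 m^2, about 15% "extra" land
```

    Same fenced parcel, two honest-sounding areas, 15% apart. And real terrain isn’t a uniform slope, so the finer the ruler, the bigger the gap.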

    Somehow {very frantic handwaving and jumping about the stage ensues…} you must find a solution to the Fractal Problem of your cell surface area.

    By analogy {stage begins to tremble from violent dancing about, hands turning red from fluid pressure from wave forces…} clouds are also fractal in 3D, so you can see them as small as your eye can resolve, or growing into layers that cover vast areas. From a layer of near dust like haze to a cloud deck from ground to 50,000 feet in a hurricane. At some point you need to pick a ruler size and just deal with it.

    Traditional models do this {cough…} with averages (to hide the details, which is what an average does). Is that good enough? {wonders how much Scotch is at home… decides not to attempt an answer as Scotch inventory is low…}

    Announcement: As our presenter has just fainted, we’re going to end this session for the moment. Please carry on… {anyone got some smelling salts? He looks pale. Why are his hands so red? Is that one bleeding?..}

  10. Simon Derricutt says:

    EM – it’s the amount of thought required to replace that hand-waving (or averaging over a huge area like the main climate models do) that made me think this would be a full-time job for a few years. I agree though that it should produce all the features of reality to a fair extent, even in a fairly large cell size of 100 km or so. There, though, the averaging of the terrain will likely result in lower wind speeds than reality. You should get the cyclones and anticyclones even there, just weak ones. That’s from running rough in-head simulations to gauge what it will lie about… I could of course be wildly off on that. Still, if there’s too much air moved within one time-tick, it will lie. The averaged air direction within the cell will be too far off the reality, which varies in direction across the cell.

    The Hawaii plot using small cells could be useful, but you’ll need a lot of sea covered, since the simulation will break at the edges (where it gets an averaged input).

    At the end of all that work, though, we still can’t really predict what the Sun will do, or how many cosmic rays or how much dust exist in the volume of space the Solar System will travel through. Though I’d expect it to be better at weather forecasting, it would be no use in predicting the climate a century from now. The old models have that failing too, though they claim to be able to predict that far and further (or was that fur and farther?).

    It might however be saleable software…. A couple of weeks of weather forecast rather than 3 days has to have a lot of value.

  11. V.P. Elect Smith says:


    My purpose would be to “compare and contrast” with all the other “climate models”.

    This does a few nice things:

    1) Shows a different story. Challenges the notion of “consensus”.

    2) Highlights where some of them are daft (hopefully).

    3) Gives US a climate model that is not “Given this ‘IR does it’ model we find IR does it!”. It would be nice for “our side” to have our own climate model to wave about too.

    4) Makes a core / rough non-IR Uber Alles Climate Model available to the Open Source community. Let a million (all different all going in cat on fire different directions…) models flourish!

    5) Illustrates some of the other stuff that ought to be considered, but isn’t.

    6) Perhaps even is a little fun to play with and lets me learn some stuff…

    7) Shows how to make a highly modular, parallel friendly, climate / weather model. Of necessity, also would highlight where parameterization is done and how the data flows between “cells”. What’s in the data, what is ignored. Pokes folks to ask “Why are noctilucent clouds not happening?” and “Where are the hurricanes?” and more…

    8) IFF I’m extraordinarily lucky, could attract enough money for me to work on it full time. About $5k / mo. would be nice ;-)

  12. Simon Derricutt says:

    EM – IR radiation will need to be included, since the only way the Earth as a whole loses energy is by radiation, and of course it receives radiation from the Sun with a relatively small amount of energy being from the heat of the core. Though the majority of the energy-transfers in the troposphere will be through conduction and mass-movements, there will be a small correction to that from the radiation emitted and absorbed by the radiative gases. Radiation coming out of those gases will be omnidirectional, and in the troposphere at least will be pretty short-range (of the order of tens of metres), so sideways radiation from cell to cell will be a very small gain/loss and probably well below measurable, while the vertical components will also be pretty small but probably need to be calculated.

    A lot of people hold to the idea that a cooler object can’t radiate heat to a warmer one. However, the cooler object will still be radiating, and the energy it radiates will be absorbed by the warmer one, and you can tell that there is in fact heat going “in the wrong direction” by replacing the cooler object with one that is cooler still (or at absolute zero and thus not radiating) and noting that the warmer object cools down faster. I’ve seen arguments that, because the CO2 absorbance/emitting bands are equivalent to an object at -80°C or so, they can have no effect on anything that’s warmer than -80°C or so either. In fact it does affect the warmer object, but not by a lot, given the power involved from a naturally-radiating object. The equivalent temperature of the microwave radiation in your domestic oven is around 0.42K (-272.74°C), yet it still warms those beans. Energy is energy, and power is power, no matter what the photon wavelength is. If it gets absorbed, then that energy will add to the energy already in the object.

    Thus radiation from water-vapour and CO2, and maybe some others such as NOx and Methane, does need to be accounted for, as well as the time it takes for those radiative processes to convey heat energy across a certain height of the cell by a random-walk. I’d need to spend some time figuring out how to calculate that using minimum computation. Until properly calculated, I can’t even say whether the net effect would be warming or cooling of the surface, and that’s maybe quite important. I’m however pretty certain that above a certain height it will have a net cooling effect, given that the ground-level will be less than half of the solid angle for the (random direction) outgoing radiation.
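    A toy, back-of-envelope estimate for that random-walk (the 1000 m cell height and 10 m mean free path are assumed here purely for illustration, not taken from any measurement):

```python
# 1-D random-walk estimate: the number of steps to diffuse a net
# distance H, with mean free path L per absorb/re-emit hop, is
# roughly (H / L)**2.
H = 1000.0    # assumed cell height, metres (illustrative only)
L = 10.0      # assumed photon mean free path, metres (illustrative only)
steps = (H / L) ** 2
print(int(steps))   # 10000 absorption/re-emission events, roughly
```

    So radiative transport through an absorbing layer is slow compared with a straight shot, which is why the vertical component matters but is small.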

    Whilst writing this, I realised it’s a can of worms. Oh well….

    Having a model that works from first principles and avoids the parametrisation involved in averaging some effects is bound to be a useful advance. It’s however a huge task. If the model also complies rigidly with conservation of energy and momentum (which in this case absolutely apply) then that’s also a major gain. It does seem that quite a few things will end up as being small differences between large numbers, so precision of the calculations will need to be pretty high (and thus takes longer to calculate).

    I figure the reason for these conversations is to chuck ideas up and see which ones stick. Seeing what’s needed and what’s possible. Looks to me that even in the cells where “nothing is happening” there’s still quite a bit going on that will need to be calculated.

    A big job.

  13. V.P. Elect Smith says:


    “EM – IR radiation will need to be included, since the only way the Earth as a whole loses energy is by radiation,”

    Um, I think you may be responding to this:

    “Makes a core / rough non-IR Uber Alles Climate Model”

    Perhaps I needed to punctuate it more clearly: Not “IR Over Everything Else” model, but IR at the end.

    Way up top in the posting description of atmospheric layers we’ve got:

    The Stratosphere is where actual IR radiance can happen. The Stratosphere can actually reach the Earth surface at high polar altitudes when in the middle of a dark winter.

    First cut I’d have all IR radiance happening in the Stratosphere where it can radiate to space. (Why it is so cold up there…) Then I’d add in the “cloud tops” where convection comes to a halt. Generally it gets very cold up there and ice forms (hail, snow) dumping a bucket load of heat. Some of that heat gets physical transport into the tropopause, but I’m pretty sure some of it just IR radiates upward (downward, it hits a wall of cloud, snow, ice, rain, and denser air with pressure broadening which just cycles it back up to the top).

    Essentially, I see “gluing on” IR at the end. You get all the physicality running (and hopefully right…), and then, at the end, you inspect the tropopause / stratosphere and see what radiative physics does to the energy.

    IMHO there’s no need for IR To Space from the surface or the troposphere to be added in to the model until after that (if ever…). It only happens in DRY air in the desert night, or when very frozen (which by definition isn’t Global Warming the place). So IF I can get the descent of the stratosphere to near or at ground level in the polar winter, then the “big frozen polar radiator” is already taken care of. (It will be a hard bit to do, though…) and I doubt that the IR from the global deserts directly to space is all that big an error bar on a planet that’s 70% oceans, plus a whole lot of wet land and vegetation.

    Eventually the “frozen air dry radiator” needs to be in there, I think, to get the N. Hemisphere winters right, especially during extreme cold excursions (i.e. glacials and such along with “snow to Georgia” Canada Express like events when it hits -40 C/F in the frozen north.) BUT, I think it would be better to have that done last. Both for programming process reasons (must get the air flow and RHumidity and frozen state right before calculating it) and because it leaves the model running a bit “warm” at first, so folks can’t accuse me / it of having a cold bias…

    Having the deserts run to hotter nights and / or have more dry air convection and THEN needing to add the cooling of night IR to space would also, IMHO, be a nice touch. Essentially, by then calibrating the IR to get the “right” night surface temps, you can then have a “cross foot check” against theoretical IR and “compare and contrast”…

    Essentially, I want to get all the WATER right first, as, IMHO, it is THE dominant radiative / absorptive species in the air and on the surface. THEN I want to have a properly working troposphere (convective) and stratosphere (as heat radiator) before adding in a ‘wobbler’ on the troposphere IR via water vapor at extreme dry states in a few patches of land. Then, and only then, add in any “GHG” changes once water is already doing everything it does (which, IMHO, is essentially all of it…)

    Per Calculations:

    My “vision” is that each Cell is a unit of data. Each process (thread, core, whatever) can pick up a Cell Data Batch and work it. Each Cell Data has a “done getting new inputs” flag marking it ready to process, and then has outputs to load into the next Cell# Data batch (and when all of those are loaded, that Cell#’s ready flag gets ticked…). A Dispatch process then runs over the data store looking for cells with the Ready Flag set, and assigns a process to go run each one on some core.
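    One possible sketch of that Dispatch / Ready Flag idea (all names here are illustrative inventions, not from any existing model; the “physics” is a stand-in sum):

```python
from dataclasses import dataclass, field

@dataclass
class Cell:
    cell_id: int
    neighbors: list                              # cell_ids expected to send inputs
    inputs: dict = field(default_factory=dict)   # neighbor_id -> posted data
    state: float = 0.0

    @property
    def ready(self):
        # the "done getting" flag: every neighbor input loaded for this tick
        return set(self.inputs) == set(self.neighbors)

def dispatch(cells):
    """Run every cell whose inputs are complete, then clear its inputs."""
    ran = []
    for c in cells.values():
        if c.ready:
            c.state += sum(c.inputs.values())    # stand-in for the real physics
            c.inputs.clear()
            ran.append(c.cell_id)
    return ran

cells = {1: Cell(1, [2]), 2: Cell(2, [1])}
cells[1].inputs[2] = 0.5    # cell 2 has posted its output to cell 1
ran = dispatch(cells)
print(ran)                  # [1] -- only cell 1 had all its inputs loaded
```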

    Now the technical “tricky bit” is working out the efficient I/O process. As it is a few orders of magnitude slower than the calculations (if outside of memory), things need to be memory resident as much as possible. Having 100,000 cores (or threads or…) all trying to suck bits of a data base off disk and into memory would be Disk Thrash Slow… So while I, conceptually, like the idea of just having a big SQL DB of “cells” and any process can just go load up any cell and run the data through… that’s bound to end up in disk thrashing as all those cores fight over head seeks…

    So instead, I’d tend toward a ‘memory resident block’ of cells. Each SBC would hold a GB or so of Cell Data for a given # of cells, and run them all. Then only at the edge of each block would one SBC need to do communications off to another SBC over a network interface. Sending updates to the edge cells of that next block over. Picture a globe with dots all over it. Bands of dots would be assigned to different SBCs (likely from top to bottom). At start up, you can start the time cycle at T1 everywhere, and they calculate the next state data for output. As the data state reaches the next band south, the communication happens and that band starts to process on the next time cycle.

    As you expand the number of cells / dots, SBCs expand too, as does the inter-board communications load. At some point this will hit a communications speed wall. So initially you are CPU / Memory bound, but with finer grain & more SBCs, you become interprocess / interboard communications speed bound. That’s your balance point. After that, you are back at “bigger memory faster cpu” buys to get more speed. (then faster comms backplane then…)

    When first coding, just a couple of SBCs will be enough to work out the processing. That will also start to give a read on Number Of Cells / GB of memory and CPU cycles per time cycle. Only then can the general “how much hardware is needed per granularity step” calculations start in earnest.

    Software care and tuning (and data type choices) will matter a lot too. Doing Double Double math is much more expensive than Int math, for example. Special care is needed on how extremely small values are handled, especially. (That “underflow” problem seems to be universally ignored in the model code I looked at – admittedly old code… as the new stuff is hidden behind barriers.)

    So some noodling time needed there for sure. I’d likely start off with 64 bit precision and then back it off as possible. Basically run some trials “big and slow” and some “short and faster” and compare the results. Shorten data sizes where theoretically safe, then test via comparative runs (in a modest number of cells test case development system)

    Then, since “if” tests are expensive but necessary to assure no underflow state develops, care in coding for the error trap is needed too. It is critically important to avoid getting “too close to zero” for floats or “too big / small” for ints. So that needs detection and correction if you can’t do prevention by design.
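    A minimal sketch of such an error trap (the threshold here is an assumed, problem-specific choice, not a universal constant):

```python
# Hypothetical guards against the "too close to zero" trap: clamp
# magnitudes below a chosen floor rather than letting them drift into
# subnormal territory, and refuse to divide by a near-zero denominator.
TINY = 1e-30   # assumed problem-specific floor, tune per model

def safe(x, tiny=TINY):
    """Flush magnitudes below `tiny` to exactly 0.0."""
    return 0.0 if abs(x) < tiny else x

def safe_div(num, den, tiny=TINY):
    """Raise instead of dividing by an effectively-zero denominator."""
    if abs(den) < tiny:
        raise ZeroDivisionError("denominator below underflow floor")
    return num / den

print(safe(1e-45))         # 0.0
print(safe_div(1.0, 2.0))  # 0.5
```

    The `if` test costs a few cycles per operation, but a caught trap beats a silently insane number propagating through a million time steps.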

    But that’s getting way down in the weeds of coding when I’ve not even got the High Vision done yet ;-)

    I think I need a 2nd round of morning coffee…

    Yes, it is several buckets of worms, and yes, it is a horridly gigantic job. Part of why I’m so slow at progress on it. One guy, part time, only when not required to do other stuff… Heck, just diving into grid choices was several weeks elapsed time. Oh Well. One does what one can.

    Now I’m sure I need that 2nd cup… ;-)

  14. V.P. Elect Smith says:

    While I’m going on about math… and computers… and how math in computers is “often wrong”…

    Just as a ‘dig’ at all my “fraction challenged” metric friends and advocates… it recognizes the superiority of using fractional math (in that old school Newton kind of way…) as opposed to base 10 floats…

    Some simple rational numbers (e.g., 1/3 and 1/10) cannot be represented exactly in binary floating point, no matter what the precision is. Using a different radix allows one to represent some of them (e.g., 1/10 in decimal floating point), but the possibilities remain limited. Software packages that perform rational arithmetic represent numbers as fractions with integral numerator and denominator, and can therefore represent any rational number exactly. Such packages generally need to use “bignum” arithmetic for the individual integers.
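    Python’s standard library happens to make the contrast easy to see: the `fractions` module does exact rational arithmetic, Newton-style, while the float version quietly lies:

```python
from fractions import Fraction

print(0.1 + 0.1 + 0.1 == 0.3)                  # False -- binary floats
print(Fraction(1, 10) * 3 == Fraction(3, 10))  # True  -- exact rationals
print(Fraction(1, 3) + Fraction(1, 6))         # 1/2, exactly
```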

    Math, it’s a thing…

    Also, for the purposes of models, doing things with trig functions and Pi has issues too. Since we’ll be doing trig and using Pi, “this matters”, but is always ignored in Climate Models from what I’ve seen. (After all, it is an Engineer kind of thing, not a Science Of FUD Climate kind of thing…)

    Computer algebra systems such as Mathematica, Maxima, and Maple can often handle irrational numbers like Pi or SQRT(3) in a completely “formal” way, without dealing with a specific encoding of the significand. Such a program can evaluate expressions like “sin(3Pi)” exactly, because it is programmed to process the underlying mathematics directly, instead of using approximate values for each intermediate calculation.

    I was taught this kind of stuff in my Engineering FORTRAN class in the ’70s. I’ve not seen it in other language classes…

    So when you use sqrt(3) or sin(pi*theta) or ‘whatever’, you will get a modestly good guess. “Close enough” maybe for most things. BUT, whenever you get close to 1 your floats will have increasing unit error bands and as you approach 0 your number can go insane from “underflow”.

    Ever see a climate model that checks for values near 1 and states error band? Or that inspects numbers approaching 0 and makes sure they don’t go insane? Or mentions in comments that taking a root of a trig function of a pi calculation has compounding errors? Yeah, me neither…

    Math methods and functions that are exactly right are extraordinarily compute expensive. Those that are tractably do-able are often wrong, but one hopes the errors do not compound into something big enough to be an issue. Models cycle so many times that exceedingly small errors can magnify into gross errors. “Don’t ask don’t tell” is a bad idea for use in computing error in math results.
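    A tiny demonstration of how a per-step error that is invisible on its own becomes visible after many cycles:

```python
# Each individual addition of 0.1 rounds by an amount too small to see,
# but summed over ten thousand "time steps" the drift shows up.
total = 0.0
for _ in range(10_000):
    total += 0.1
print(total)              # close to, but not exactly, 1000.0
print(total == 1000.0)    # False
```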
    “But hope is not a strategy. -E.M.Smith”…

  15. Steve C says:

    @V.P. Elect Smith – I suspect “horridly gigantic” is something of an understatement! When I saw the thread, I reflected on my personal view of just the situation around the UK. I’ve mentioned before that I have a laptop on the attic stairs receiving images from the NOAA polar satellites; the images I get (barring interference from pagers and the like) are up to about 1200 x 940 pixels, with a pixel representing a square about 4km on a side at our latitudes.

    That’s well over a million 4km squares just for our little part of the spheroid, and the satellites’ high-res images are around 1km on a side – you’d need 510,064,472 of those according to DDG. Even in my 4km squares, there are often what look like resonance phenomena – regular ripples on the cloud tops, or parallel fingers on the edges – and other interesting details which make it pretty clear (a) why the Met Office are always spending money on an eyewatering scale on shiny new supercomputers and (b) why they still get it wrong. How do you model chaos?

    I think I’d start off modelling just a relatively tiny column first, until that seemed to be doing roughly what I expected, then use that model as a function that the big spherical gridder can call per pixel. So I wish you the best of British luck, but I still think that with something on this scale it may take forever. Don’t forget the stirring by mountain ranges at the bottom …

    BTW, I recall in your spherical heat pipe post awhile back, you had an excellent graphic image of the IR lancing out into space at CO2’s wavelengths from the top of the much more uniform lower atmosphere. (So it must have been x = wavelength, y = height ASL) That should be a great spec for how the radiation should be behaving in any model.

  16. V.P. Elect Smith says:

    @Steve C:

    Well, a fellow must have some windmill or other to tilt at ;-)

    The simple fact is that I AM going to play with my computers, so I might as well focus it somewhere. Then, if, along the way, the exercise points out just how impossible computer models of climate really are and how hopelessly wrong they must be; well that’s just gravy…

    BTW, from that floating point wiki, we get an interesting example of some of the accuracy issues (lower in the article):

    Accuracy problems
    The fact that floating-point numbers cannot precisely represent all real numbers, and that floating-point operations cannot precisely represent true arithmetic operations, leads to many surprising situations. This is related to the finite precision with which computers generally represent numbers.

    For example, the non-representability of 0.1 and 0.01 (in binary) means that the result of attempting to square 0.1 is neither 0.01 nor the representable number closest to it. In 24-bit (single precision) representation, 0.1 (decimal) was given previously as e = −4; s = 110011001100110011001101, which is

    0.100000001490116119384765625 exactly.
    Squaring this number gives

    0.010000000298023226097399174250313080847263336181640625 exactly.
    Squaring it with single-precision floating-point hardware (with rounding) gives

    0.010000000707805156707763671875 exactly.
    But the representable number closest to 0.01 is

    0.009999999776482582092285156250 exactly.

    Also, the non-representability of π (and π/2) means that an attempted computation of tan(π/2) will not yield a result of infinity, nor will it even overflow. It is simply not possible for standard floating-point hardware to attempt to compute tan(π/2), because π/2 cannot be represented exactly.

    So the closest the computer can show you for 1/10 × 1/10 is 0.009999999776482582092285156250, but it finds that 1/10 × 1/10 = 0.010000000707805156707763671875, so that’s what you get.

    Similar stuff happens for Pi and trig. So good luck with any hope of actual accuracy in that whole calculating stuff on a globe thing…
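    Those quoted single-precision values can be checked directly, by round-tripping through IEEE single precision with `struct` and printing the exact decimal value each float holds:

```python
import struct
from decimal import Decimal

def f32(x):
    """Round a Python float to IEEE single precision and back."""
    return struct.unpack('f', struct.pack('f', x))[0]

a = f32(0.1)
print(Decimal(a))           # 0.100000001490116119384765625 exactly
sq = f32(a * a)
print(Decimal(sq))          # 0.010000000707805156707763671875 exactly
closest = f32(0.01)
print(Decimal(closest))     # 0.00999999977648258209228515625 exactly
```

    The hardware’s square of “0.1” and the best-available “0.01” really are two different numbers.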

  17. V.P. Elect Smith says:

    Well, this is something I need to consider… Underflow has been converted to subnormal numbers as of the IEEE float standards. BUT… this “fill the gap with something of decreasing precision so you don’t just get zero” is not handled the same everywhere, and turning it off is particularly variable.

    Personally, I’d rather have it off, toss an exception code, and know my program was doing dodgy number handling, than have it take ‘way close to zero” numbers and hand me some random-but-very-small-crappy-precision ‘near zero’ result.


    AArch32 NEON (SIMD) FPU always uses a flush-to-zero mode, which is the same as FTZ + DAZ. For the scalar FPU and in the AArch64 SIMD, the flush-to-zero behavior is optional and controlled by the FZ bit of the control register – FPSCR in Arm32 and FPCR in AArch64.

    Some ARM processors have hardware handling of denormals.

    So depending on my use of 32 bit vs 64 bit codes / OS ISA, I’ll get different handling and different results…

    FTZ Flush To Zero was the mode of FORTRAN when I learned it. When you hit the underflow point, and your exponent was as close to “it’s zero Jim” as possible, any further move that way was just set to zero. After IEEE stepped into Floats with “standards”, about 1985 (with Intel doing some bits early in the x87 floating point HW), FTZ was on the outs, and ‘leading zero’ mantissas with a special 0 exponent were in, as a way to fill in the gap of representable numbers ‘near zero’. So you get these “crappy if very small” numbers instead of ‘it is zero and maybe you ought not do that division after all’.

    As that can “cause issues” there are (sometimes) ways to go back to FTZ behaviour. Or not. And what that quoted bit says is that depending on the ARM core used, and your compiler optimization level, you get different choices.

    Oh Great, another thing to watch for / test effect of…
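    On CPython, which keeps IEEE subnormals rather than flushing to zero, the behaviour can be poked at directly:

```python
import sys

tiny = sys.float_info.min     # smallest *normal* double, ~2.225e-308
sub = tiny / 2**10            # a subnormal: nonzero, but reduced precision
print(sub > 0.0)              # True -- the gap below `tiny` got filled in

denorm_min = 5e-324           # the smallest subnormal double
print(denorm_min / 2 == 0.0)  # True -- below this it really is zero

# Precision loss down there: at 1e-310 only ~44 significand bits remain,
# so a relative nudge of 2**-50 vanishes entirely.
print(1e-310 + 1e-310 * 2**-50 == 1e-310)   # True
```

    On an FTZ/DAZ unit (NEON, say) that middle region just goes straight to zero instead, so the same code gives different answers on different hardware.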

  18. cdquarles says:

    This stuff reminds me of that numerical analysis book I bought long ago. I would think they’d hire excellent analysts and pay them handsomely. Yet, somehow, I doubt that happens. (Recalls stuff my youngest son was doing for NASA in Huntsville while at UAH.)

  19. Steve C says:

    I have idly wondered before now about automating Trachtenberg mathematics, which can deal with arbitrarily long numbers. IIRC, though, it would involve having to write routines for anything more complicated than multiplication and division. Plus, they’re probably infinitely long numbers, of course.

    Re windmills, I saw a ‘For Sale’ small ad in the Radio society magazine a few months ago, posted by a fellow I knew at Uni long ago. Two 4CX250s and bases, big mains transformer, high voltage tuning capacitor … His traditional big linear amplifier project had come to the traditional end. Got a fair few ‘lifelong projects’ like it myself, and software ideas, and … ;-)

  20. V.P. Elect Smith says:


    A lot of the “Climate Model” work is done by graduate student types on the Ph.D. holders pet project. Everyone gets to publish and then apply for another grant to do the next increment.

    It’s bloody obvious in the code. Take GIStemp. Oldest bit in OLD ALL CAPS FORTRAN likely written by Hansen himself about the 1980s (early…). Then as bits of enhancement and complication are added on, shifts to more modern Fortran with mixed case and extensions from the 1990s. The 2000 ish bit is in Python and has different names in the documentation (along with a much more complex coding style, either he was a pro-contractor or a decent Grad student). Other bits have very different coding styles.

    NONE of the Fortran has the kind of stuff I was taught to do in my Engineering FORTRAN class. No paranoia about getting near zero. No significant bounds checking anywhere, really. No test of FOO .LE. {very tiny value} instead of FOO .EQ. Zero that I remember.

    Coders have different styles. Often shows up most in commenting style. After working with folks for a bit you can look at a program and know who on staff wrote it. “Updates and fixes” stand out as glaring style shifts. What I saw in GIStemp was several ‘eras of styles’ as IMHO different Grad Students came in, added, say, ‘special Antarctic data module’, and then collected their grade / degree and moved on; Lead Ph.D. published with them on the side. That’s how it looks to me anyway.

    My best guess is that ALL of them were NOT trained in computing nor experts in numerical analysis using computers. It’s all basic physics formulas and standard math handling, but coded up in Fortran. Ignoring the ‘icky details’ of what binary float math does to your text book formula… Except may the Python guy. He seemed more clueful about the need to play nice with Python Math. Unfortunately, my Python is not as rich as my Fortran, so I don’t know how it handles floats internally. IEEE or whatever.

  21. cdquarles says:

    Our most gracious host, I think Python math is the C standard library stuff, though that is implementation dependent and probably, today, IEEE 754. I may be mistaken.
    That bit of sanity checking was also taught to me during my FORTRAN IV class. I do believe that the Engineering department had a say in it, for that was some time before formal computer ‘science’ classes were introduced.

  22. V.P. Elect Smith says:


    I thought Python had some kind of duck typing (if it looks like a {string, int, etc.} it can be used as one), but this page:
    says they have added type hints to the language, so you can kinda-sorta ask, pretty please, that the dynamically assigned type be some particular type.

    As you might guess, I find it a bit disquieting that I don’t get to choose between int, long int, float, double, long double, string, etc. etc. and know what I’ve got. Especially when the specifics of your numerical results depend on it.

    It is also the case that dynamic typing makes your language very slow. Python is known to be, er, slower than what folks like.

    So it most likely IS C library IEEE 754 (or maybe the 2008 update), but which PART of that is used in any given calculation? (Or parts of a calculation?) Maybe…

    FWIW, GIStemp was re-written in “A modern language, Python”… Frankly, I’d rather they had just rewritten it into one era of Fortran.

    Fortran and C are THE 2 fastest languages out there. Regularly, folks do benchmarks for their latest and greatest big idea for a language, and it regularly comes back FORTRAN The King! and C almost the same, but just a hair slower on some minor irrelevant bit.

    That’s a big part of why 99% of all code I’ve ever worked with has been Fortran, C, various shell scripts, or some database product (when I was a DBA Specialist). IF you want fast and relatively deterministic math done, FORTRAN is your choice. For more general purpose and systems programming, C. For rapid set up and integration wrappers, shell scripts. After that, most stuff is some degree of Also Ran. Good for specialized domains or interesting as an idea, etc.

    There were some other decent contenders once, but most of them have joined the deprecated or special use piles. Algol was my favorite, but C pretty much took over the block structured group. Other stuff can be quicker to write, or easier to debug, or faster to learn, or handle a specific class of problems more easily, but none are faster.

    Python became trendy as it was cool as a teaching language. Both interpreter and compiler could be used, and you didn’t get down into the weeds of data types right out the gate. But hiding something does not make it go away. Data type still matters.

    Given the Duck Typing process, every Python program has to wait until it knows what any given expression will be handed as data types before it can begin execution. This puts a load of type assignment / type checking into the run time execution AND prevents having multiple pipelines working on the code prospectively. You can’t start a speculative execution until you have the type and the data known. Again, slows optimization tricks.

    So I’ve never been keen on Python. I can write it (not well, and in a primitive kind of way, really, but workable). But any language that increases the obscurity of what you are doing (and methods do just that) and hides HOW it is doing it (duck type, methods) and goes a lot slower too, just doesn’t get me excited. Well, not in a good way ;-)

  23. Simon Derricutt says:

    EM – given the duck-typing, it’s possible that Python might choose different data-types on different runs, depending on the actual values of the data, and if that happens then the accuracy of the calculations would also vary. I’m not even fond of the way C has “optimisation” stages in the compile that might cut out bits of code that the programmer found necessary but the compiler thinks isn’t needed and thus chops out. Most of my programs were in assembler, where what I wrote translated into machine code and nothing was added or removed. That probably no longer applies in today’s assembler which needs to deal with multiple cores and out-of-order execution.

    It does seem that FORTRAN would be the logical language to choose here. Unfortunately I never used it. That sort of tight control on data types is pretty important. Given that the output of one calculation becomes the input to the next one, any calculation errors will be multiplied by the number of time-ticks required to run the whole simulation, and it seems likely that some basic things that ought to be conserved (such as mass of air in the whole simulation) might gradually drift. I’d suggest therefore that an occasional sanity-check for conserved quantities might be useful, for example adding up all the air masses and water masses in the total simulation and seeing if they remain constant.

    Since each cell could have inputs from several other cells on all sides and above/below, seems the first stage processing is to sum the “comesouta”s for nearby cells to produce a summed “guzinta”, then process that guzinta into the current state of the cell, and then produce a comesouta which next time-tick gets divided into the relevant guzintas.

    Where any cell could be processed on any core or machine, seems each guzinta needs to be on a common database with a flag for the data-storage from each nearby cell to say it’s been updated for that time-tick (the data update may actually be all zeroes if the wind or mass-movement wasn’t in that direction), with the flag being reset once that update has been used. As belt and braces, it probably needs a time-tick count in that database too, so that you only update the guzinta if the time-tick is correct.

    Looks like if you have, for each nearby cell, the air-mass, water-mass and speed+direction (momentum), then it should be possible to sum them to produce a net air mass, water mass, and momentum, and add it to the cell conditions. The thing there, though, is that while that air-mass is moving in, nearly the same amount will be moving out, so the calculations on the cell need to be done as if that air-mass had already moved out. Not all that mass will move out, though, because there will be pressure differences across the cell.

    Will take some more thought on this – at the moment this is just initial ponderings on design and trying to see the problems before they bite. Getting too far in too early risks a rewrite to avoid problems that should have been noticed earlier. I’m of course hoping this discussion ends up being useful, and not just pontification, since in any case you’re going to be trying it.
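    One way to sketch that guzinta/comesouta bookkeeping: a per-cell “inbox” with one flagged slot per neighbour plus a time-tick number, summed into one net inflow and reset once consumed. All names here are placeholders (a Python sketch for brevity, not a spec):

```python
def new_inbox(neighbor_ids):
    """One slot per nearby cell, initially un-flagged."""
    return {n: {"tick": -1, "flag": False,
                "air": 0.0, "water": 0.0, "momentum": (0.0, 0.0, 0.0)}
            for n in neighbor_ids}

def post_comesouta(inbox, neighbor_id, tick, air, water, momentum):
    """A neighbour deposits its outflow for this time-tick (may be all zeroes)."""
    inbox[neighbor_id] = {"tick": tick, "flag": True,
                          "air": air, "water": water, "momentum": momentum}

def ready(inbox, tick):
    """A cell may run only when every neighbour slot is flagged for this tick."""
    return all(s["flag"] and s["tick"] == tick for s in inbox.values())

def sum_guzinta(inbox, tick):
    """Sum all inflows into one net guzinta, then reset the flags."""
    assert ready(inbox, tick), "belt-and-braces time-tick check"
    air = sum(s["air"] for s in inbox.values())
    water = sum(s["water"] for s in inbox.values())
    mom = tuple(sum(s["momentum"][i] for s in inbox.values()) for i in range(3))
    for s in inbox.values():
        s["flag"] = False          # mark the update as consumed
    return air, water, mom
```

    The flag-plus-tick pair is the “belt and braces”: a cell can never consume a stale update or run before all its neighbours have reported, even when cells land on different cores or machines.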

    The really horrendous work here will be setting up the initial database of ground conditions. I’m not certain how you deal with varying ground height, which will affect upward/downward movement of the air-mass between cells depending on wind direction. For each ground area you’ll need to know angle, moisture, what it’s covered with (grass, trees, buildings, tarmac, snow, etc.), temperature, and how those vary over time – that database will be modified as it gets rain, snow, or drought, and the moisture-level will also depend upon drainage such as rivers or streams. Insolation depends upon what happens in the cells above it, too. Makes the fingers sore just thinking about the amount of key-entry needed to build the initial database.

  24. V.P. Elect Smith says:


    The discussion is useful as it takes different angles on things I’ve thought about and can surface things I’ve not…

    Per starting ground state: That’s easy! Just use an average like the pros! ;-)


    Initially it won’t be that hard as the cell size will be so big it erases things like mountains and the Grand Canyon… As cell size / time tick get reduced (and I do think they must go together as pointed out above) the ground state problem multiplies.

    Once you can resolve things like mountains and layers of air less than “the whole troposphere”, then things like down slope winds and up slope rain extraction enter the puzzle.

    The way I’m thinking of handling mass flow is just a vector from the Dot. It has size and direction (in 3 D…) then you look at what cells / dots are within the hemisphere where the vector points, and apportion the mass according to angle and vector size (distance and cell sizes are supposed to be nearly equal from the Dot assignment step…). (Hand waving around layers being different from side-to-side for the moment but ought not be that hard to sort…). The arriving vector has size and direction, you vector sum it with the existing cell vector (left overs after subtracting the ‘wentoutta’ vector) to get the new cell state. Rinse and repeat.
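    The ‘vector from the Dot’ apportioning could look something like this: weight each neighbour lying in the forward hemisphere by the cosine of its angle to the outflow vector and split the mass accordingly. A sketch under those assumptions (the cosine weighting is my guess at “apportion the mass according to angle”; neighbour directions are unit vectors from the Dot):

```python
import math

def apportion_outflow(outflow, neighbor_dirs):
    """Split an outflow vector's mass among the neighbours that lie in the
    hemisphere the vector points into, weighted by how closely each
    neighbour direction lines up (cosine of the angle between them)."""
    mag = math.sqrt(sum(c * c for c in outflow))
    if mag == 0.0:
        return {nid: 0.0 for nid in neighbor_dirs}
    unit = tuple(c / mag for c in outflow)
    # Cosine weight; neighbours behind the vector (negative dot) get nothing.
    weights = {nid: max(0.0, sum(u * d for u, d in zip(unit, dirn)))
               for nid, dirn in neighbor_dirs.items()}
    total = sum(weights.values())
    if total == 0.0:
        return {nid: 0.0 for nid in neighbor_dirs}
    return {nid: mag * w / total for nid, w in weights.items()}
```

    The shares always sum to the vector’s magnitude, so the step conserves the mass being moved.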

    Eventually, when resolution lets elevation matter, if your vector points, say, W90 and dead flat, but W90 has dirt at your elevation, the vector gets a vertical rotation to the next layer up, so would make vertical winds up slope as a natural consequence. For down slope, you move air according to relative pressures. The air goes sideways, but the ‘flat vs down’ gets adjusted by the air in the two layers. {hand waving commences again…} Essentially, I need a ‘boundary air’ function that tries to keep air moving in the boundary layer, but lets it ‘wander off’ if pressure differential allows for that.

    Yes, every cell needs a cell number and every cell data block has a time tick number. You can proceed with a cell when all the Gozinta time ticks match and are one more than your Gozouta data time ticks and they have been “moved on”. FWIW, I’d likely have a marker for “you have zero gozinta from this cell but have the go to run”. Probably can just use zeros and the time tick increment…

    I think this ought to result in things like barometric pressure changes and winds and all, but we’ll see.

    Yeah, I like the idea of a periodic “measure your total mass of air and water”. Nice sanity check on things. Maybe once a Model Month toss in a sanity check cycle… How much did the Earth shrink and all 8-)

    Similarly, a scan for temperatures above and below any records ever. A “holler when insane” test ;-)

    Maybe this generalizes… Produce a ‘weather history’ data output set, then you can run that against a normals set. Discrepancy shows degree of fidelity to reality?

    Frankly, I’ll be happy if I can get a 1024 cell model to just warm in the sun, cool at night, and move air around ;-) After that, everything else is just “enhancements” ;-)

  25. V.P. Elect Smith says:

    Thinking about it… when moving air laterally, your ‘hemisphere in front’ will include a new layer ‘down slope’. It will have a LeftOvers Vector that has less than ‘normal’ mass of air left in the cell (as some left to the Gozouta…). Comparing the LeftOvers vector for that cell and for the next layer up, you get comparative “suckage” on your air mass moving sideways. It ought to then apportion into some going down slope and some just sideways. That has the potential to handle things like an upslope wind reaching the top of a ridge. Some flows straight down wind from the top, and some starts back down slope on the backside. At fine enough granularity, you might even get the “roll” of wind as turbulence makes it spin on the back of the ridge…

    Essentially, you compute your total vector out, then look at the LeftOvers vector of the target cells to adjust the total vector into partials for each.

  26. V.P. Elect Smith says:

    Thinking about this some more… One vector can’t capture all conditions. The “downwelling” cells, for example, have net air flow out in all directions. One needs to start with relative pressures (from PV=nRT), where n is the amount of air, R is a constant, and pressure times volume is proportional to temperature.

    Sun in changes T, that changes P (cell volume fixed). Then you can compare P with each adjacent cell and figure the vector to each. Move mass, adjust n, and figure net P, V and T. Then repeat.
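    As a sketch of that loop: with fixed cell volume, P = nRT/V, and a made-up coupling constant stands in for the real momentum equation when apportioning mass toward lower-pressure neighbours (all names and numbers here are illustrative, not a spec):

```python
R = 8.314  # gas constant, J/(mol K)

def pressure(n_mol, T_kelvin, volume_m3):
    """Ideal-gas pressure for a fixed-volume cell: P = nRT / V."""
    return n_mol * R * T_kelvin / volume_m3

def pressure_step(cell, neighbors, k=1e-4):
    """One tick: compare P with each adjacent cell and move an amount of air
    proportional to the pressure difference. k is a placeholder coupling
    constant standing in for the real momentum calculation."""
    p0 = pressure(cell["n"], cell["T"], cell["V"])
    moves = {}
    for nid, nb in neighbors.items():
        dp = p0 - pressure(nb["n"], nb["T"], nb["V"])
        moves[nid] = k * dp if dp > 0 else 0.0  # only outflow from this cell
    return moves
```

    Sun heats a cell, its P rises, and the step pushes mass only toward cooler (lower-P) neighbours; then n is adjusted everywhere and the cycle repeats.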

    For layers, V is not fixed as the “top of the atmosphere” changes with heating. This will matter when changes of solar spectrum get added in and height changes with ozone heating.

    Oh, and that’s just the base case. Then you add water…

  27. Simon Derricutt says:

    EM – the logic seems to be coming along. Most of my programs were first written as a set of comments that described what each section/subroutine did, so I could check the logic before coding. Provided the logic was right, and the code did what the comments said, I rarely got a bug. Also makes it a lot easier to modify later when out of the programming fugue.

    For me, it always helps if I have to explain what I’m doing to someone else, even if they don’t end up understanding it. In order to explain it, I have to serialise the thinking and cover all that happens, and in the process of doing that I see things I missed and faults in the logic. Sometimes, in explaining things, I realise I don’t understand it well-enough myself, too, so that triggers deeper thinking.

    I’m still not certain that equal-area cells (so the grid is not actually even) will end up being better than a grid system of variable-area cells where the relationship to the surrounding cells is pretty generic. For the equal-area cells, the locations of the surrounding cells will be unique to each cell, and will need to be calculated for each one and put into the database. Once you’re up into the few thousand cells, that’s a lot of work, and even putting in the coordinates of each cell (and writing a program to derive where its neighbours are) is a lot of work. Makes the extra effort needed to work with a variable-size cell on a regular grid seem pretty minor.

    There’s also maybe a kludge available by using a geodesic grid, where the few pentagons can be imagined to hold as much area as the hexagons. Again the actual directions of the nearest neighbours would be unique to each point, though in this case it’s likely that their locations (and the directions of the neighbours) could also be calculated by a program. Maybe there’s no need for the fiction of equal area, if each cell knows its own area. Again, I’m looking forward to when the grid is somewhat smaller, and the effort needed to initialise the database becomes that much larger. If that can be automated, it will save a load of keyboard work.

    As far as I can tell, if you only use conservation of momentum and the differential pressure across the mass of air, and take into account the rotation of the Earth, then the swirls around updraughts and downdraughts will automatically appear. Just depends on the cell size as to how accurate that gets, since there is only one air direction within a cell (and in reality it will curve within a cell, so it’s an approximation). At some point I’ll figure out the gyroscope problem, but since that’s a rigid body angular velocity and we’re talking about air, if you conserve linear momentum then angular momentum will also be conserved, and the model will be closer to reality the smaller the cells get. The wind-flows will be straight within a cell but bent at the edges.

    I can’t remember the name of the circulation, but a hot, flat plane (extending to infinity…) won’t result in hot air rising evenly, but instead settles into alternate updraughts and downdraughts, with a bit of twist on each if we’re doing that on the surface of the (rotating) Earth. It’s just not stable with a plane of rising air, and instead flips to the circulating currents. Possibly the ground-level solar absorbance (thus uneven heating on the ground) will naturally produce these anyway in the model, but a large area of stuff all the same (such as the ocean) does that splitting into ups and downs naturally, so hopefully the model would do the same. Thus starting with an “averaged” model of the Earth as a waterworld should still produce cyclones and anticyclones. May be a good test of the initial cell programming – providing the waterworld is rotating, the initial averaged air conditions should develop a circulation with gyres.

    I’d figure this sort of model should show the same emergent phenomena that the real world does – simple rules applied in a large array show new properties.

  28. V.P. Elect Smith says:

    FWIW, I spend a fair amount of time “imagining the program” before I start to write it. In the writing, I usually go for the rapid prototyping approach and plan on “writing it three times”. (and throw away 2)

    Once as a first splash of crap that likely doesn’t succeed, but finds what you forgot. That “forcing you to serialize your thinking” you pointed out. It is also where, most often, discovering that data type FOO doesn’t play well with step BAR starts to shift things.

    The second time is to “do it the right way”. This only happens IF the first prototype found grievous fault (my most grievous fault…) in the approach. Like finding out that equal-area / variable-direction cells just don’t cut it compared to constant-direction / variable-area cells. Usually this is where I start writing blocks of code, in their final form, that likely will never change (subroutines to do a single task, for example).

    Then the third time is the Polish Pass. Bug Stomping. Fix Up & Pretty Print. The thing works, but QA & efficiency screens turn up bits of not-so-perfect. Very occasionally I’ll find some Really Bad Thing that takes major re-work (like: Oh Dear, I need to go change all those INT to FLOAT, and figure out why it spends 90% of its time in routine FOOBLE that ought to be fast. Or: Why is my atmosphere going away?). (I really liked that idea of an atmosphere mass test every model month or so…)

    These are not always formal passes. (Heck, in the rapid-prototyping Silly Con Valley way, what IS formal anyway?) Sometimes it’s just one long flow of programming, and as the code grows the different phases get done. But it’s fast. Look at Elon Musk using rapid prototyping on Falcon Heavy vs Boeing & the consortium of “professional rocket companies” doing “the usual” plan-forever-every-detail-and-only-then-build. Who got the most cool, most efficient rocket & capsule to the space station first, eh? (I get a real kick out of it when they do things like let the rocket get too cold and it self-crushes from vacuum – then “The Pros” just tisk-tisk about them being too fast and not thinking things through, while Elon just says “OK, what did we learn for the next iteration?”) Valley vs Palace On The Hill… Different ways…

    So when I sit down to write a chunk of this, I’m likely going to just spit out a block of semi-right code and start debugging it (i.e. compile and run cycle) way before it is all worked out in detail. What fails gets changed. What’s learned gets incorporated. What’s right gets polished. Once that chunk stops changing, polish the presentation / pretty print / add any QA & Self Check that was left out, and move on.

    For the spherical bin pack equal area cells, I’m not going to do any data entry for location or where the neighbors are located. It will all be program generated. There will be a “make the world” starting conditions program that creates the bin-pack, stores the Lat Lon of each cell, calculates the ‘angle to neighbor’ for all neighbors, assigns the area to each (as fraction of final pack ratio, so if I pack 500 cells every cell is assigned 1/500 * earth-area). I envision those as parameters assigned to each cell# that are picked up by any SBC when it is assigned that cell to process. Part of the data packet for each cell. (I’ve already laid out a rough cut of data items / cell as the ‘first pass’ on that data)
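    One hypothetical way to program-generate such a world is a Fibonacci-sphere spiral, which places points nearly evenly over a sphere. This is an assumption about the bin-pack method (not necessarily the one intended), but it shows the “make the world” shape: every cell gets its Lat/Lon and an equal 1/N share of Earth’s area, all computed, none keyed in:

```python
import math

EARTH_AREA_M2 = 5.10e14  # approximate surface area of the Earth

def make_world(n_cells):
    """Place n_cells near-evenly on a sphere (Fibonacci spiral) and give
    each its lat/lon plus an equal 1/n share of the surface area."""
    golden = math.pi * (3.0 - math.sqrt(5.0))  # golden-angle increment
    cells = []
    for i in range(n_cells):
        z = 1.0 - 2.0 * (i + 0.5) / n_cells      # evenly spaced in -1..1
        lat = math.degrees(math.asin(z))
        lon = math.degrees((golden * i) % (2.0 * math.pi)) - 180.0
        cells.append({"cell": i, "lat": lat, "lon": lon,
                      "area": EARTH_AREA_M2 / n_cells})
    return cells
```

    Neighbour lists and ‘angle to neighbor’ would be derived from these coordinates in a second pass of the same start-up program, so the whole database is generated rather than hand-entered.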

    So when an SBC (or thread or core or…) is assigned Cell 32, it gets an input record of:
    Cell# 32, TimeTick, Lat FOO, Lon BAR, Nneighbor Lat Lon (or maybe radians… need to think about the math load and processing precision on that decision more…), NEneighbor same, Eneighbor, Sneighbor, Wneighbor, (really I’ll likely just number them around the circle from N on around, as there ought to be 6 of them pretty much always, but we’ll see at the discontinuities), Air-mass, Air-temp, Air-velocity_vector, etc. etc.

    Initially it will be in one SBC for devo, so no real “look up record in DB and load” but more “it’s all in memory”. As this goes parallel-processing multi-core, that I/O thing starts to bite, and a decision on ease of programming (one big database) vs speed to process (memory as much as possible, inter-SBC comms only at the edges of an SBC-assigned area) comes up. But, as in all rapid prototyping, “We’ll see”…

    I just realized you can see the rapid prototype mode of thinking in how I describe things. A branching decision tree, not a “final spec”. A map of potential areas and path decisions rather than a list of steps. Right brain not left.

    That emergent phenomenon with increased fine grain is why I’m not going to be too worried if the first cut small grid count is a bit insane… Just want the stuff to move more or less as expected. Not going to care about the details. I mean, really why would a 100 cell world have anything other than gross mass flow and mostly modest temperature changes (to the AVERAGE air in each cell…) So I’m thinking “code in the physics then let cell size & time tick demonstrate changes” as you “vary your ruler” and the Fractals can play their games…

    FWIW, the present ponder point is Altitude.

    I was pondering the problem that the height of the tropopause changes from Equator (about 30k to 50k feet) to pole (it can be zero in the antarctic night). Then I remembered that NullSchool uses Pressure Altitude. IF each layer is just defined as a millibar range (or the SI equivalent, or whatever), then PV=nRT is directly reflected in the model. T happens, n in a space doesn’t change during the T event (only when you distribute the mass), R is a constant, and your V can now be the change event, as P is constant for that layer when defined via a Pressure Altitude band. It will naturally give you the expansion of your air chunk (then you sort it out to other cells AND any upward distribution based on relative T to nearby cells; basically, if I’m hot and they are not, my air flow to them happens at a higher layer for my P-altitude, as my air expanded).
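    That Pressure Altitude trick falls straight out of PV=nRT: hold P fixed for the layer and the volume carries the change. A tiny sketch (layer choice and numbers illustrative only):

```python
R = 8.314  # gas constant, J/(mol K)

def layer_volume(n_mol, T_kelvin, p_pascal):
    """Constant-pressure layer: V = nRT / P, so warming expands the layer."""
    return n_mol * R * T_kelvin / p_pascal

# Treat a layer as the 850 hPa band: warm the same air and it takes more room.
v_cool = layer_volume(n_mol=1000.0, T_kelvin=280.0, p_pascal=85000.0)
v_warm = layer_volume(n_mol=1000.0, T_kelvin=290.0, p_pascal=85000.0)
expansion = v_warm / v_cool  # = 290/280, independent of n and P
```

    The expansion ratio depends only on the temperature ratio, which is why a pressure-band layer definition makes the “my air flows to you at a higher layer” behaviour fall out naturally.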

    I’m not certain that makes it easier, but I intuit that it is more right. It would likely work best with more air layers than I’d planned on though…

    The “explain things makes it more clear” was something I ran into in High School when I’d tutor some other kids in math. Found that I had things “I just did” but could not explain. Figuring out how to explain them helped me too.

  29. cdquarles says:

    One catch handling gases: the real atmosphere does *not* completely conform to the abstractions in the ideal gas equation. At a minimum, I say, you need to add the van der Waals corrections to the ideal gas law. The ideal gas law posits *no* deviation from elastic collisions, essentially *no* chemical reactions, and no phase changes. As a rough first approximation, and away from the critical points of the gases involved (nitrogen, argon, oxygen, carbon dioxide, etc.), you can get away with just the ideal gas equation. I say that fails spectacularly for water, and not only because of the hydrogen bonding that makes water in the atmosphere have a range of agglomerations from dimers to visible droplets, even without precipitation.
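    For reference, the van der Waals equation solved for P is P = nRT/(V − nb) − an²/V². A quick comparison for water vapour using textbook coefficients (SI units; the scenario numbers are arbitrary, just to show the size of the correction in a dilute case):

```python
R = 8.314  # gas constant, J/(mol K)

def p_ideal(n, T, V):
    """Ideal gas: P = nRT / V."""
    return n * R * T / V

def p_vdw(n, T, V, a, b):
    """Van der Waals: (P + a n^2/V^2)(V - n b) = nRT, solved for P."""
    return n * R * T / (V - n * b) - a * n * n / (V * V)

# Textbook van der Waals coefficients for water vapour, in SI units:
A_H2O = 0.5536    # Pa m^6 / mol^2
B_H2O = 3.049e-5  # m^3 / mol

# One mole of water vapour in 1 m^3 at 300 K: the correction is under a
# pascal here, but it grows fast as the gas is compressed toward condensation.
gap = p_ideal(1.0, 300.0, 1.0) - p_vdw(1.0, 300.0, 1.0, A_H2O, B_H2O)
```

    Even the van der Waals form can’t capture hydrogen bonding and agglomeration, which is one more argument for handling water as its own separate step.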

    See one, do one, teach one. Trying to explain something to someone else nearly always helps you understand that something better.

  30. V.P. Elect Smith says:


    That’s why I plan to handle water as a unique and separate step. Figure the PV=nRT for the air, figure RH from surface wetness and T, look at RH vs T in layers, and then figure out any precipitation et al. (The niggle being that water vapor is part of ‘n’ in PV=nRT at the next step.)

    Essentially, I see the processing as: “Light to T by layers and differential absorption bands, then T per layer into PV=nRT, then find water changes and effects for T and P, blend and adjust, move mass and KE with it, find new P and n post mass-moves and water changes, do an IR outbound value. Iterate.” (Roughly off the top of my head…)

    I think it’s relatively OK to treat water as the only non-ideal gas in the system and (at least at first) ignore things like S vapor and SO2 from volcanoes and particulates (even though they matter a lot for things like dust storms and rain nucleation). Essentially put that stuff in “plug number” parameterization for “later enhancements” when the time comes.

    The unfortunate truth is that there is no way in hell of modeling reality. There’s just way too much of it. Every atom, to some extent, is its own bit of math. Not enough atoms of computers to model all the atoms of air and dirt. So, by definition, you WILL gloss over stuff, you WILL do some things “wrong”, and you WILL have a stupid model that isn’t like reality. The trick is to try to focus the “wrong” in places where it does no harm to the conclusions.

    So handle water as water with ‘rules of thumb’ like when RH turns into dew or fog. A distinct cloud subroutine that does as best it can on cloud formation matched to historicals to tune it. Work it to match the AVERAGES and not the atoms. Otherwise you can never calculate it.
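    One possible ‘rule of thumb’ for when RH turns into dew or fog is a Magnus-type fit for saturation vapour pressure, condensing out whatever exceeds it. The Magnus constants below are a standard published fit; everything else is an illustrative assumption, not the actual cloud subroutine:

```python
import math

def saturation_vp_hpa(T_celsius):
    """Magnus-type approximation to saturation vapour pressure over water (hPa)."""
    return 6.112 * math.exp(17.62 * T_celsius / (243.12 + T_celsius))

def condense(vapour_hpa, T_celsius):
    """Rule of thumb: anything above saturation condenses out as dew/fog.
    Returns (remaining vapour pressure, condensed excess)."""
    es = saturation_vp_hpa(T_celsius)
    excess = max(0.0, vapour_hpa - es)
    return vapour_hpa - excess, excess
```

    A real cloud subroutine would then be tuned against historicals, per the AVERAGES-not-atoms approach, but this shows the threshold behaviour the rule of thumb needs.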

    The old “All models are wrong, some are useful” observation.

    So of necessity, this is a game of “do what you can don’t worry about what you can’t”. Leave that bit for the evaluation at the end. The caveats and sanity checking step. When you get thunderstorms in the Sahara and tropical palms in Antarctica, you know you did it very wrong. When you get 150 F in the Sahara and -100 C in Antarctica, you know you have it a little wrong. When you get what looks like reality, you MIGHT have something useful, but more validation and testing required before using it for policy.

    At that point you can start to ask: “Does this model suggest anything interesting about the real world? Where is it a bit daft and where does it point at something real?”

  31. Compu Gator says:

    E.M.Smith replied to CDQuarles on 21 November 2020 at 6:05 pm GMT:
    That’s why I plan to handle water as a unique and separate step.

    Just entering the fray, here.

    Seems to me that you’re setting your program up for a model clock based on a step pair: a major step and a minor step, where a major step represents the passage of model(ed) time, and a minor step represents processing that the program needs to complete for all the cells together, i.e., after certain processing is done for all cells in the model step, but before certain other processing can be started for all of those same cells in that same model step.

    Perhaps you’re already thinking at least vaguely in such terms, but I take it that processing water would be confined to a single minor step that is performed at some time in each major step. Depending on how arithmetic-&-logic-intensive or intercommunication-intensive each minor step is, you might want to make the minor cycle the basis for distributing computing within your model.
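    That major/minor clock might be sketched as a nested loop in which each minor step is a phase completed for ALL cells before the next phase begins, with the water step confined to one minor step per major step. Phase names below are placeholders, not a spec (Python for brevity):

```python
def run(cells, n_major, minors_per_major):
    """Major step = one tick of model time (e.g. a new sun angle).
    Minor steps = phases that must finish for ALL cells before the
    next phase starts. Each cell logs what it did, for inspection."""
    for major in range(n_major):
        for cell in cells:
            cell["log"].append((major, "sun"))        # insolation update
        for minor in range(minors_per_major):
            for cell in cells:
                cell["log"].append((major, "dynamics"))  # mass/momentum moves
        for cell in cells:
            cell["log"].append((major, "water"))      # single water minor step

cells = [{"log": []} for _ in range(3)]
run(cells, n_major=2, minors_per_major=4)
```

    The inner `for cell in cells` loops are exactly where distribution could happen: each phase is a natural barrier, so cells within a phase can go to any core or SBC.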

    Hope all that’s not “clear as mud”.

  32. Pingback: Cell Model – Top Level Flow Of Control | Musings from the Chiefio

  33. V.P. Elect Smith says:

    The general idea is to step sunshine by a major step (initially an hour), then have the processes it kicks off (like air heating causing evaporation, then convection, and precipitation) run on a minor clock step (so clouds form and precipitation wets the ground before the next sunshine change). Likely I’ll do first testing at 20-minute steps, then see where sensitivity has a knee in it. Maybe 10 minutes, maybe 1?

    Depending on how many computes you have to spend, running 6 big steps a day (4 hours per big tick) with 20-minute inner ticks (12 little ticks each) is a total of 72 inner steps / day and pretty cheap. Clock it up to 96 big ticks in a day (15 min each) with a 10-second little tick (15 x 60 = 900 seconds, so 90 little ticks each) and you have 96 x 90 = 8,640 compute cycles. HUGE difference in compute load. Clearly a lot of performance tuning and matching to hardware happens here.
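    The tick arithmetic is worth automating when exploring these trade-offs; a quick check gives 72 inner steps/day for a 4-hour/20-minute combination and 8,640 for a 15-minute/10-second one (a throwaway Python sketch):

```python
def inner_steps_per_day(big_tick_minutes, little_tick_seconds):
    """Total little-tick compute cycles in one model day, assuming the
    tick sizes divide a day and a big tick evenly."""
    big_per_day = 24 * 60 // big_tick_minutes
    little_per_big = big_tick_minutes * 60 // little_tick_seconds
    return big_per_day * little_per_big

cheap = inner_steps_per_day(big_tick_minutes=240, little_tick_seconds=1200)  # 4 h / 20 min
heavy = inner_steps_per_day(big_tick_minutes=15, little_tick_seconds=10)     # 15 min / 10 s
```

    Multiplying by cell count then gives the per-day compute budget for any hardware-matching exercise.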
