Cell Model – Top Level Flow Of Control

Just noodling around some ideas for the Top Level Flow Of Control part of a Cell Based distributed climate model. I’ve already discussed that there isn’t any really good way to lay out cells, so I’m using a layout generated from equally spaced cell centers. Then a general look at the stuff that happens inside the cell.

Here I’m looking at a sort of pseudo-code view of the order in which steps would likely happen (subject to change at any time as actual coding turns up issues with this starting point).

This will change as I think of things I’ve missed, change my mind on some point, and review prior art for models and turn up “issues” to deal with.

For those not familiar with it, pseudo-code is a kind of programming shorthand. It is kind of like a programming language, but not any real language in particular. It is kind of like English, but with a bit more programming-like structure and meaning to the words. The purpose is to let you write out ideas on how to program something without needing to fuss over which particular language construct to use, or whether you end phrases with a “;”, a “.”, an “END”, or nothing in particular. Basically, all the minutiae are left out.

So I’d see things as being roughly:

Main:

call Build_World (cell_count)
spawn Monitor
call Cells (years, parameters)
write output reports
end

The Monitor is a future program that would track data like what year you are processing, what the load on the systems is, whether anything is stuck, etc. It can only really be worked out after the program itself is done, so at this point it is just a placeholder.

“Parameters” is just a placeholder for whatever exact parameters are needed, once that’s figured out.

This program just manages the ‘big lumps’. It launches the monitor and also creates any summary reports at the end. Here is where you can pass a “size of world in cell numbers” value and a ‘years to run’ parameter.
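
In Python terms, Main might look something like this. It is just a sketch of the shape: the stub functions stand in for the real parts, and multiprocessing is one plausible way to “spawn” the Monitor so it runs alongside the model.

import multiprocessing as mp

def build_world(cell_count):       # stub: makes cellmap and celldata_db
    print(f"building world of {cell_count} cells")

def run_monitor():                 # stub: the future progress/load watcher
    pass

def run_cells(years, parameters):  # stub: the big date/time/cell iteration
    print(f"running {years} years with {parameters}")

def write_reports():               # stub: summary output at the end
    print("writing reports")

def main(cell_count, years, parameters):
    build_world(cell_count)
    monitor = mp.Process(target=run_monitor)   # spawned, not called
    monitor.start()
    run_cells(years, parameters)
    write_reports()
    monitor.terminate()

if __name__ == "__main__":
    main(cell_count=32, years=1, parameters={})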

Build_World:

call cellgen (cell_count)
call celldatagenerate
output cellmap, celldata_db
end

Generate the actual map of the globe. Figure out what each cell has for neighbors, what angle they are to each other, and what Latitude and Longitude each has; apply global surface types to cells, and generate all the particular data for that cell (eventually to include things like altitude average and vegetation and such).

This is where you create the database of all the data the following calculations will need. For the eventual very big models, this might be run once, and the data stored in a database for all subsequent runs. Basically, you may choose whether or not to run celldatagenerate.
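
That run-or-reuse choice, sketched in Python (cellgen and celldatagenerate are just stubs here, and the file names are made up):

import json, os

def cellgen(cell_count):
    # stub: the real one places equally spaced cell centers on the globe and
    # works out neighbors, angles between them, and Lat/Long for each
    return {i: {"lat": 0.0, "lon": 0.0, "neighbors": []} for i in range(cell_count)}

def celldatagenerate(cellmap):
    # stub: surface type, altitude average, vegetation, etc. per cell
    return {i: {"surface": "water"} for i in cellmap}

def build_world(cell_count, regenerate_data=True):
    cellmap = cellgen(cell_count)
    with open("cellmap.json", "w") as f:
        json.dump(cellmap, f)
    if regenerate_data or not os.path.exists("celldata_db.json"):
        with open("celldata_db.json", "w") as f:
            json.dump(celldatagenerate(cellmap), f)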

Cells:

For Each year in years
    For Each day in year
        For Each hour in day
            For Each cellid in cellmap
                While JobQueue at limit: wait
                spawn cell (cellid, parameters)
            Next cellid
        Next hour
    Next day
Next year
end

This is basically just managing the job queue. Don’t want to pump out a million jobs all at once on a 4 core R. Pi now, do we? (Crash!) So it throttles spawning against the Job_Queue limit. In many distributed compute systems there’s automatic queue management, so the details will depend on how that’s implemented.

I see the iteration as being over date (so we know seasonality data to apply) and time (so insolation by time of day is correct).

Then for each Cell#, we spawn a compute process to a compute node to go figure out the values for that particular cell, based on date, time, and the neighbor “gozouta” data that becomes its “gozinta” data.

I’d start with all the “gozinta” data buckets filled with the initialization data from celldatagenerate, then have it iterate over time. Stepping from top of globe to bottom (initially) lets us run with very few cores at the start, knowing that all the cells are ready to run. Eventually, with massive 1 core / cell and many cells, I’d expect to change this to where each core just watches for its “gozinta” buckets to be filled and the date:time:done flag set. That will be a bit more chaotic on which cells run when, but ought to work OK. (In theory, some areas could “run ahead” of others a little… creating rings of different time periods as the “gozouta”s propagate).
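
Where there isn’t automatic queue management to lean on, a Python multiprocessing Pool gives the throttle for free: the pool never runs more than “limit” jobs at once, and waiting for each hour’s batch to finish keeps the gozinta / gozouta ordering honest. A sketch (compute_cell is a stub for the Cell process below):

import multiprocessing as mp

def compute_cell(cellid, year, day, hour, parameters):
    pass   # stands in for the Cell work flow described below

def run_cells(years, cellmap, parameters, limit=4):
    with mp.Pool(processes=limit) as pool:   # never more than `limit` jobs at once
        for year in range(years):
            for day in range(365):
                for hour in range(24):
                    jobs = [(cellid, year, day, hour, parameters)
                            for cellid in cellmap]
                    # blocks until every cell finishes this hour, so all the
                    # gozouta data exists before the next hour starts
                    pool.starmap(compute_cell, jobs)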

Cell:

call insolation (cellid, celldata, year, day, hour)

call surface (cellid, celldata, year, day, hour, insolation, return=temp)

call subsurface (cellid, celldata, year, day, hour, temp, return=temp)
# The subsurface function will look at celldata and season, and
# decide if it does the ocean depths, rocks, sand, permafrost, mud, etc.

call evaporation (cellid, celldata, year, day, hour, temp, wetness, return=temp)

call airlayers (cellid, celldata, year, day, hour, temp, RH, return=temp)

call wind (cellid, celldata, year, day, hour, temp)

call IR (RH, temp, return=temp)

Write Gozouta Flags & Data
end

This is the core work flow of the cell. Just what physics are you going to do, in what order. What are you leaving out, and how much detail do you compute. For example, notice there is no “heat island” around metroplex areas being considered. Eventually, at very high cell numbers, that might matter. For now it will be hidden in the averages of data.

These things have circular dependencies, both laterally to neighbor cells, and vertically into air layers (and ocean layers). But also in the time dimension. I try to capture some of the time aspects inside the individual sub-processes with iterations. (See below).

The general notion is that it all starts with surface heating. So figure how much energy reaches the surface first.

Then, some of that surface layer heat (which I figure includes about a 25 m air layer and buildings) will be subducted down into the ground or water bodies (though net heat must come out of the ground, so in winter the heat goes the other way). This is a minor effect process so will likely be done as a plug number initially. But “someday” it ought to be addressed. Especially bottom of the ocean volcanoes…

The surface heat remaining can cause evaporation (or transpiration from vegetation). This is our first bite at the water apple… So figure out, based on surface type and wetness, how much water goes into the air as vapor.

After that, do air layers (with iterations) to distribute the air in vertical winds and create clouds and precipitation and all the rest. Yeah, the big lump…

Once you have that part of the air and water flow worked out, and know your remaining humidity and temperature, move the air laterally as wind, apportioned to your downwind neighbor cells based on angle (but arriving with their initial momentum vector…)

After all that, work out what IR might have left to do, since now you have clouds and humidity and all that data to work it out sort of properly.

Write out any remaining bits of data to the database and set the “I’m done” flags.

At this point, this cell process terminates and the Cells job manager gets an open queue slot to spawn the next cell process for the next CellID. Eventually, as mentioned before, I’d like this to be one process / cpu, with the process just entering a spin/wait state until the “gozinta” values show up and the ‘good to process’ flag rises.
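
For that eventual one-process-per-cell mode, the spin/wait might look like this sketch. The shared “done” table (cellid to completed-step count) stands in for whatever flag store gets used, database or shared memory:

import time

def compute_one_step(cellid, step):
    pass   # placeholder for the Cell work flow above

def cell_worker(cellid, neighbors, done, n_steps):
    # done[cid] = how many steps cell cid has completed so far
    for step in range(n_steps):
        # spin/wait until every neighbor has written its gozouta
        # for the prior step (step 0 uses the initialization data)
        while any(done.get(n, 0) < step for n in neighbors):
            time.sleep(0.01)
        compute_one_step(cellid, step)
        done[cellid] = step + 1    # raise our own "I'm done" flag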

Here’s a brief description of what I see the parts doing:

Insolation:

Looks at cell LAT, LONG, Date and Time and calculates insolation % of TSI impinging on surface. Basically, it’s taking the panel of the cell and figuring the tilt relative to the sun and how many Watts land on that space at the top of the air column. It might make sense to calculate the values in advance as they ought not change much year to year over a few decades. Database lookups might be cheaper.
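
The core of it is standard solar position geometry, so a sketch is straightforward (the declination formula is the usual quick approximation, good to a fraction of a degree):

import math

def insolation_fraction(lat_deg, lon_deg, day_of_year, hour_utc):
    # solar declination: about +23.44 degrees at the June solstice
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # hour angle: degrees away from local solar noon
    hour_angle = 15.0 * (hour_utc + lon_deg / 15.0 - 12.0)
    lat, dec, ha = map(math.radians, (lat_deg, decl, hour_angle))
    cos_zenith = (math.sin(lat) * math.sin(dec)
                  + math.cos(lat) * math.cos(dec) * math.cos(ha))
    return max(0.0, cos_zenith)   # fraction of TSI on the cell panel; 0 at night

print(insolation_fraction(40.0, 0.0, 172, 12.0))   # ~0.96 at noon, 40N, June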

Surface:

Taking into consideration the transmission of the existing air column above the cell (initialized value, then iterated by the model), figure out how many of the incident Watts get absorbed at the surface. Consider surface type (snow, water, vegetation, dirt, sand) from celldata and any surface water from model precipitation and body of water percent in the celldata. This is where all the messy stuff with tree leaves, seasons, albedo change with snow & etc. (or static with Sahara sand…) gets factored into a theoretical instant surface heating (Watts, eventually temperature).
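
Boiled all the way down, the shape of the calculation is simple even if the inputs are messy. A sketch, with rough illustrative albedo values (not tuned numbers):

def surface_watts(incident_watts, air_transmission, albedo):
    # Watts absorbed = what got through the air column, less what reflects
    return incident_watts * air_transmission * (1.0 - albedo)

ALBEDO = {"snow": 0.8, "water": 0.06, "vegetation": 0.2, "dirt": 0.3, "sand": 0.4}

print(surface_watts(1000.0, 0.75, ALBEDO["sand"]))   # 450.0 W absorbed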

SubSurface:

Initially likely to be a plug number. Eventually, based on the type of subsurface coded for that cell, some amount of surface heat soaks down into the rocks, sand, permafrost, whatever or mixes in for ocean. Think of it as a capacitor with a resistor to the surface. Then, when winter comes, some subsurface heat migrates out to the surface. It’s a ballast function.

Also, precipitation soaks into the dirt or runs off in rivers. Heat goes with it, too, so that’s allowed for (somehow…) in this section.
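
The capacitor-with-a-resistor analogy is about one line of code. A sketch, where tau is a placeholder time constant (a month here, purely illustrative):

def subsurface_step(t_surface, t_sub, dt_s, tau_s=30 * 24 * 3600):
    # relax subsurface temperature toward the surface; the flux reverses
    # sign automatically in winter, when t_surface drops below t_sub
    return t_sub + (t_surface - t_sub) * (dt_s / tau_s)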

Evaporation:

Prompt surface water evaporation into Humidity numbers. Needed to properly handle convection, cloud formation, obscuration by clouds formed, precipitation, etc.

The water that didn’t soak in is available to evaporate, as is surface water of rivers, lakes, seas, and oceans.
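
A bulk formula would do for a first cut. A sketch, using the Magnus approximation for saturation vapor pressure; the transfer coefficient k is a plug number to be tuned:

import math

def saturation_vapor_pressure_hpa(t_c):
    # Magnus approximation, good for ordinary surface temperatures
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def evaporation(t_surface_c, rh, wind_ms, wetness, k=0.01):
    # proportional to wind and to the vapor pressure deficit, scaled by how
    # wet the surface is (0 = bone dry, 1 = open water); k is a plug number
    deficit = saturation_vapor_pressure_hpa(t_surface_c) * (1.0 - rh)
    return k * wind_ms * deficit * wetness   # arbitrary units for now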

Air_Layers:

For Each Gozinta Winddata
    compute new air properties
Next Winddata

For cycle in iterations
    For layer in atmosphere
        compute density and vertical displacement
        compute change in humidity, cloud formation, and precipitation
        compute solar heating by layer (UV in stratosphere, etc.) and upwelling IR
        etc.
    Next layer
Next cycle
End

This is likely the hardest one. It will iterate a few times, trying to take into account the change in air density from surface evaporation, and have the moist / warm air rise, forming clouds at the points where RH exceeds the capacity to hold water, and eventually precipitation (output to surface wetness…). Initially I’d use just a couple of air layers and maybe a 20 minute time step, so 3 iterations inside the hour. But eventually it would need more. Perhaps a lot more.

Wind:

Take the change of air mass in a cell (via incoming cell parameters from neighbor cells) and adjust for humidity and temperature changes (from above routines) and compute how much air mass moves what direction into which cells. Write the Gozouta values to neighbor cells based on direction angle to wind.
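
One plausible apportionment rule is cosine weighting against each neighbor’s bearing, with upwind neighbors getting nothing. A sketch:

import math

def apportion_wind(mass_out, wind_dir_deg, neighbor_bearings):
    weights = {}
    for cellid, bearing in neighbor_bearings.items():
        w = math.cos(math.radians(bearing - wind_dir_deg))
        weights[cellid] = max(0.0, w)    # zero for upwind neighbors
    total = sum(weights.values()) or 1.0
    return {cellid: mass_out * w / total for cellid, w in weights.items()}

# hex cell, wind blowing toward 90 degrees (east):
print(apportion_wind(100.0, 90.0, {1: 30, 2: 90, 3: 150, 4: 210, 5: 270, 6: 330}))
# -> cell 2 (due east) gets 50, cells 1 and 3 get 25 each, the rest get 0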

IR:

Finally, at the end, after convection, wind, evaporation, precipitation, etc. have done their thing, figure out what IR might do. Biggest impact will be from variable humidity in the air layers and from clouds.

Likely this will need to be split over the Troposphere (little IR outbound) and Stratosphere (lots of IR outbound), with cloud tops in the tropopause a special problem. Not sure how many layers / iterations will be needed.

Write Gozouta

At this point we’re ready to write the “Done With CellID at TIME” flags and any remaining “gozouta” data that has not already been written for our neighbor cells to be able to compute their next iteration.

At this point I envision that “gozouta” flag being, basically, a date:time stamp saying “I’m done with this step”, but a simple integer “cycle#” might be more efficient. Details for later…

Then we are done with the cell process for this date:hour and this cell instance terminates. The processor is freed to go get another cell to process, or just wait for an updated “gozinta” flag for this cell.

That depends on how you do job assignments. Initially, with only a dozen or so cores, having each cell end and be reissued by the Cells process makes more sense. Otherwise you might have a few thousand jobs in queue all checking “Can I run yet?” and leaving no time to actually run one. Eventually, I’d like to see “one CPU / cell” where the program just stays resident, watching for a gozinta update to run another cycle.

But at first, it’s just “write out data to database and end”.

Which brings up the other potential bottleneck point. One Big Database will be easiest at the start, but more efficient with 1000 to 10000 cores would be each cell with its data resident in memory, and only the interprocess communications going between CPU / SBC units over fast network connectivity. Again, an implementation detail for later as an enhancement / porting to massively parallel hardware.

In Conclusion

IF I’m right, this ought to allow for a “one core per cell” compute paradigm with little bottleneck potential. You can basically scale processing power almost linearly with cell number. (Some interprocess communications may still limit at high values, depending on how the SBCs communicate.)

I envision writing this over an extended period of time (unless other folks wanted to jump in too…), with initial runs being one function at a time on only a few cells. So, for example, a 32 cell world with only surface written, then adding subsurface (perhaps as a fixed value for all cells, later to be enhanced to vary by celldata subsurface type), and eventually adding an “ocean currents” section into subsurface for ocean cells (with layers like the airlayers, with differential IR vs blue vs UV absorption).

Then continuing to add one function at a time.

Once it’s all working OK on, say, 256 cells, crank it up to 6400 on a big cluster and see what happens ;-)

Like I said, this is the first spaghetti on the wall version. Feel free to make suggestions, toss rocks, etc. etc.

Also, anyone wants to take this, or parts of this, and run with it, feel free. The more someone else does, the less I have to do and the faster we get something. Copy Left Attribution and all that.


30 Responses to Cell Model – Top Level Flow Of Control

  1. V.P. Elect Smith says:

    Pondering all this, and the dependency of one cell on the state of neighbor cells, I wandered off into “Conway’s Game Of Life” last night. (Installed as “apt-get install golly” on Debian based systems).

    If you are not familiar with it, here’s a link:
    https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life

    Each cell lives or dies based on the status of cells next to it. Too many, it starves. Too few, it dies of loneliness (or lack of reproduction). Just right, it spawns more cells.

    The dynamics of this can be quite complex. All sorts of folks have wandered off to explore exotic bits. There are Gliders and Spaceships and Puffers and all sorts of things that arise from just a few rules:

    Any live cell with fewer than two live neighbours dies, as if by underpopulation.
    Any live cell with two or three live neighbours lives on to the next generation.
    Any live cell with more than three live neighbours dies, as if by overpopulation.
    Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.

    Or in condensed form:

    These rules, which compare the behavior of the automaton to real life, can be condensed into the following:

    Any live cell with two or three live neighbours survives.
    Any dead cell with three live neighbours becomes a live cell.
    All other live cells die in the next generation. Similarly, all other dead cells stay dead.

    Something similar to this will happen in the climate cells. IF clouds block the sun heating water, you get fewer clouds; those then let more sun in and make more clouds. Clouds from neighbor cells can change the sunshine-driven evaporation in your cell. At large cell area via wind blowing in clouds; at small cell area – eventually – via cloud in one cell shading adjacent cells. I saw this from a plane while flying to Florida. Rows of clouds aligned N/S, shading the next row space over so it was cloudless as the sun rose. Cloudy and non-cloud bands shifting as the sun traversed the sky. Eventually becoming more E/W lines (sun angle about 40 degrees S in winter-ish at noon).

    “Life” is very sensitive to initial cell contents. Will the climate model have similar issues with things like cloud formation dependent on initial state causing “gliders” and “puffers” and more? Will oscillators develop? Similar function ought to lead to similar results and behaviours…

    The “good thing” is that these complex behaviours arise naturally from just a few rules. To the extent weather and climate are similar, we don’t need to model all the complex behaviours, just get the rules right…

    A very large amount of complexity in coding may be subject to elimination just by having a few rules that interact properly.

    Examples of patterns

    Many different types of patterns occur in the Game of Life, which are classified according to their behaviour. Common pattern types include: still lifes, which do not change from one generation to the next; oscillators, which return to their initial state after a finite number of generations; and spaceships, which translate themselves across the grid.

    The earliest interesting patterns in the Game of Life were discovered without the use of computers. The simplest still lifes and oscillators were discovered while tracking the fates of various small starting configurations using graph paper, blackboards, and physical game boards, such as those used in go. During this early research, Conway discovered that the R-pentomino failed to stabilize in a small number of generations. In fact, it takes 1103 generations to stabilize, by which time it has a population of 116 and has generated six escaping gliders; these were the first spaceships ever discovered.

    Can “Life” give us clue about how the model will work, and perhaps how nature works?

    Undecidability

    Many patterns in the Game of Life eventually become a combination of still lifes, oscillators, and spaceships; other patterns may be called chaotic. A pattern may stay chaotic for a very long time until it eventually settles to such a combination.

    The Game of Life is undecidable, which means that given an initial pattern and a later pattern, no algorithm exists that can tell whether the later pattern is ever going to appear. This is a corollary of the halting problem: the problem of determining whether a given program will finish running or continue to run forever from an initial input.

    Indeed, since the Game of Life includes a pattern that is equivalent to a universal Turing machine (UTM), this deciding algorithm, if it existed, could be used to solve the halting problem by taking the initial pattern as the one corresponding to a UTM plus an input, and the later pattern as the one corresponding to a halting state of the UTM. It also follows that some patterns exist that remain chaotic forever. If this were not the case, one could progress the game sequentially until a non-chaotic pattern emerged, then compute whether a later pattern was going to appear.

    The implication of this is that, as you change cell numbers and their starting states, the result can shift dramatically. Settling into a stable “still life” state, or going into perpetual chaos. Not due to the physics being wrong, but due to the nature of cell codependency and changes of initial or derived state with scale. You may never get the desired “final pattern” (what we have today) from any test runs to verify the model; and any given “final pattern” may have little to do with reality to come and more to do with model (and reality?) chaos giving highly variable outcomes.

    In short, no model might ever be a correct representation of the world, as no model can have the scale of the world (it being a fractal with grain size of the Planck length). Basically the world measures itself, and so its cell interactions, with a Planck sized ruler; we measure this fractal world with a many orders of magnitude larger ruler, so must get different results for initial state and derived results. That’s my “Undecidability Thesis of Climate Models” anyway.

    And yes, it can be done on a hex grid world too.

    So, IMHO, there’s things to learn from the game of “Life” for climate models based on cells that interact. And yes, I’m going down that rabbit hole for a day or two to see what the turf looks like.

    IMHO it has insights to offer. Things like, considering layers as different cells that interact, how a cloud layer modulates sun that then modulates surface evaporation and wind that then modulate cloud layers… Essentially, the Climate Model is 2 games of “Life” with different rules. One vertical, the other horizontal. Then complicated with oceans vs land being different vertical layers, and mountains vs air complicating the air layers. A very complex game of life, indeed.

    Then, given that the simple Conway’s Game Of Life has such complex results, one can expect the Climate Model to be even more prone to “excursions” in many ways. Finally, the Climate Model Life gets perturbations inserted regularly as various cycles shift the inputs. Solar cycles, orbital cycles / seasons, cosmic dust cycles, etc.

    That level of constant perturbation, IMHO, pretty much guarantees a chaotic running state with the large number of cells and complex interactions. It just won’t be able to stabilize with cell parameters being constantly perturbed.

    Probably the biggest insight, for me, from this Life parallel is the realization just how much cell number will impact results. The “scale of the playing board” matters:

    In principle, the Game of Life field is infinite, but computers have finite memory. This leads to problems when the active area encroaches on the border of the array. Programmers have used several strategies to address these problems. The simplest strategy is to assume that every cell outside the array is dead. This is easy to program but leads to inaccurate results when the active area crosses the boundary. A more sophisticated trick is to consider the left and right edges of the field to be stitched together, and the top and bottom edges also, yielding a toroidal array. The result is that active areas that move across a field edge reappear at the opposite edge. Inaccuracy can still result if the pattern grows too large, but there are no pathological edge effects.

    The globe will be more like a toroid in that it is closed and has no edges, so that’s not an issue. BUT, it has a finite and relatively small number of cells. It can not hold extremely large patterns. To what extent this matters compared to Real Life is not known. However… We know that cyclonic storms can be several hundred miles across and that the Polar Vortex can cover 1/3 of the planet. Are these just “large evolved patterns”? If so, at what cell count is your model big enough to “hold them” and let them generate? At what scale of cell count do you get “inaccuracy” from missing large patterns?

    My guess would be that you need on the order of 10,000 cells to even begin to have some decent representation of large scale patterns like those.

    At the other extreme, a “glider” running into a big stable pattern can obliterate it. At what cell count do you have enough cells to support the formation of “gliders” in the climate model that might then change / obliterate some macro pattern you think is right, but in the real world would be obliterated? A glider can be formed of just 5 active cells.

    Do you need to capture dust devils as well as hurricanes, and a rising air column making a 100 m puff ball cloud as well as a cloud deck covering the MidWest?

  2. President Elect H.R. says:

    The Game of Life you posted was very helpful, E.M.

    I think the elements of climate are rules based. There are a few rules for clouds. There’s another set of rules for albedo. Ocean circulation has its set of rules. Then the orbital mechanics of the Earth and solar system has a set.

    So the way I’m seeing it, any one component of the climate system is governed by a fairly smallish set of rules. Each is its own little game of life. Now you combine all those little climate Games of Life together to create the big Climate GAME of LIFE, where all the little Games interact with the others at the boundaries, and I’d predict you’d see just what we are seeing: a seemingly stable state climate for the Big Game Board with all sorts of chaotic happenings within the game. Those then eventually migrate to another seemingly Stable Big Game. I’m talking Greenhouse Earth vs. Snowball Earth as a BIG GAME and, say, glacials and interglacials as recurring climate state patterns.

    But I’ve always maintained that Earth is on a trajectory, from its formation until its demise when swallowed by our Red Giant Sun. Nothing will ever repeat exactly, though often it may rhyme. For the span of most men, the climate is relatively stable and predictable. Now and then, some live in a time when the climate states change, but their observations are lost in a few generations and the descendants perceive the changed climate as stable and predictable.

    So I think the Game of Life model might be useful for some small timeframe of the climate. You might get such a game to work. But the trajectory will no doubt be altered at some point by some nuisance, like a mile-wide asteroid or a super algae that slimes everything or… something, and it would be very hard to inject just the right alteration at just the right time.

    On the other hand, with my view of climate being a billions of years trajectory, I can accept a 3 or 4 million-year Ice Age as being a stable climate and the glacials and interglacials are just so much weather during the period.

    My 2¢.

  3. President Elect H.R. says:

    Wait up… the Game of Life works fine with my POV.

    Thanks for bringing that to our attention, E.M. It’s easy to visualize, but then the visual examples you included were helpful, as they illustrated what I thought the game was doing.

  4. V.P. Elect Smith says:

    @H.R.:

    That’s basically what I’m trying to do in the model overall layout above. Have a ‘surface’ set of rules. Sunshine adds energy, apportion it to soil, air, and evaporation. Pass that off to subsurface rules (a very small set) and air layer rules (a much larger set with subsets for clouds and precipitation), etc.

    Then turn it loose and see what happens.

    Some cells will be much simpler than others. Sahara, for example. Just Sand all the way down (for all practical purposes) and dry air convection above. (Maybe I’ll just model it first ;-)

    Other cells are very complicated. Arctic areas, for example. Some have permafrost, others seasonal frost, yet others ocean; albedo can range from near black to stark white seasonally (or even with one snow storm). The Troposphere can have wet or dry convection, with or without sunlight at all. Sometimes the Stratosphere can touch the ground. That’s going to be the hardest bit, IMHO.

    Part of what I’ll be looking for is statistics on how fast a given cell “computes” to completion. Having one CPU dedicated to each cell is a fine idea, but if 7/8 of them compute damn fast (oceans and deserts) and then sit around for 3 or 5 times that long while the polar cells complete, well, maybe better to redistribute the work… (or at least assign much faster hardware to the polar cells…). It might work out just fine to have 4 cells / CPU and have a few dedicated to polar work and a batch that cycles through all the faster cells. Thus my starting with a straight work flow queue and scheduler.

    Another interesting experiment that this Life Cell insight suggests, is to take a map of, say, the Sahara or a Pole and run it with different granularities. Say you can compute a 6400 cell globe with your hardware. Use all 6400 cells just for the Sahara and see if the Sahara behavior has a big change. Then again for Antarctica. (Use recorded values from the global run to provide inputs to the patch at the edges).

    If, say, the Gulf Of Mexico gives you hurricanes at 6400 cells for the Gulf, but doesn’t at 128, well, you know you have a problem with the smaller grid count.

    In theory (IF I do this right) you can make different World Generators that instead of equal area packing, do large cells over places like the oceans and Sahara, but tiny cells over some area of interest. Then the rest of the model stays the same. Another interesting exploration.

    That’s what I’m thinking now, at any rate. We’ll see what next week brings ;-)

  5. V.P. Elect Smith says:

    @H.R.:

    It was the images that set me on that path to begin with.

    “Spaceships” and the hex cell image especially remind me of things I’ve seen clouds do. Some squall line wandering around dumping rain as it moves. A cloud deck raining here and not there, then changing. Clouds billowing and shifting.

    Got me thinking “Wait a mo… both the game and models have cells… and rules… and emergent behaviours… and…”

  6. cdquarles says:

    My 2 copper is that Earth does not have a climate system. Earth has a weather system. Climate is one or more summary statements about that previously realized weather; mostly about the bounds. In that sense, climate is a smaller problem, I say.

  7. President Elect H.R. says:

    @cd – ‘Like’

    Consider that we think of weather or climate in terms of where we are geographically. Just wait 10 or 20 million years and that patch of the Pacific you were sailing on will then be occupied by the Sierra Nevada range.

    Your point stands, though. What’s the Koppen Classification for the Earth? Hint: there isn’t one. For any given geographic location, the climate will be the prevailing, long-term weather pattern. Move yer patootie to another location and the climate will be different.

    I say that there is a global climate, but it is a huge macro system of say, hothouse Earth, or snowball Earth, or volcanic lake Earth.

  8. V.P. Elect Smith says:

    @CDQuarles & P.E. H.R.:

    Yup. From my POV, I’d say it has fractal behaviour on the time dimension as well as the physical. What you see depends on the size of ruler you use… (i.e. your time scale).

    At the wee end, the time step in the model matters. A bit of cloud can form and evaporate in minutes. On the big end, whole oceans form or leave. What you see depends on how fine a time grain you use and what time step is applied.

  9. Totally off topic, but Chuck Yeager passed away today. They don’t make them like him anymore.
    https://nypost.com/2020/12/07/legendary-airman-chuck-yeager-dead-at-97/

  10. Simon Derricutt says:

    EM – the emergent behaviour will depend on the number of cells used for the world, and the Game of Life examples are pretty good at showing that. That’s why I was talking about the cell-size needing to be around a maximum 100m to cope with those small clouds and also why the height of the cell needs to be comparable to its horizontal dimensions. The cloud-shading of lower cells as the angle of the Sun changes will have a pretty large effect on what happens. The cells above the ground boundary layer won’t have a lot happening in them, since mainly you’ll be looking at air-mass in and out and the temperature relative to the humidity to say whether a cloud forms there, but if they are too tall then the resolution of where the ground is shaded will not be adequate to provide sharp boundaries for the cloud-shadows and thus the detail will disappear and you’d need to parameterise the responses.

    I think the point I’m trying to make here is that the parametrisations needed using large cells are averaged responses (and will be pretty complex to work out, too) whereas what actually happens is a response to what happens right here, right now, together with the sum of what has happened right here over the last x hours. Once you get down to a small-enough cell size, then the calculations become far simpler, but of course there are way more cells so the calculation load increases.

    On a hot Summer day, I’ve pointed an IR thermometer at the sky. Pointing at a large area of blue sky, the temperature recorded is around -35°C, whereas a large-enough cloud (since the collection-angle of the IR receiver is around 24°) measures around -2°C. Though it would be normal to treat the heat transfers at ground-level as being conduction only, there is a radiative emission too where the amount of cloud in the sky will make a difference. This effect is large enough to make a difference of several degrees, where on a clear night in Summer you can get a ground-frost (and can in fact make ice if you put water in a bowl surrounded with straw-bales to insulate and deflect the wind), and of course in Winter the cloudy nights aren’t as cold. It thus seems to me that you’d need to look at the number of cells visible above (to the horizon) with cloud in (and density of cloud?) to calculate radiative heat-loss in each iteration. This won’t be a minor correction, but instead pretty important.

    Spice (simulation of electronic circuits) has a similar problem of solving what happens in a circuit where everything depends upon everything else. The way it handles fast transients is to reduce the time-tick between iterations such that the calculated error goes below a set threshold, and when things are changing more slowly the time-tick is extended to the point where the error stays below the (user-selectable) threshold. Where there’s a high wind happening in the world weather simulation, you may need to reduce the time-tick in order that the air-masses moved don’t travel further than 1 cell away. Though this variable time-tick would likely be a bit of a bear to put in at the start, it’s maybe a good idea to design the program so that it can be inserted later without a total re-write.
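
    As a sketch of that rule in Python (a CFL-style condition; the wind numbers are the ones from my estimates below):

    def pick_time_tick(cell_size_m, max_wind_ms, dt_max_s=3600.0):
        # shrink the tick so the fastest air mass crosses at most one
        # cell per step; stretch it back out when things are calm
        if max_wind_ms <= 0:
            return dt_max_s
        return min(dt_max_s, cell_size_m / max_wind_ms)

    print(pick_time_tick(100.0, 70.0))   # ~1.4 s in hurricane winds
    print(pick_time_tick(100.0, 2.0))    # 50 s on a calm day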

    I don’t see a lot of point in having a model with large cells. The predictions of the model will be too far from reality. On the other hand, when you get down to a cell-size that’s starting to become useful, in the 100m range, the sheer number of cells needed and the short time-tick needed raises the computation load to massive proportions. If you then want to actually predict weather with it, then setting up the geography of the cells at ground level would be a massive data-entry problem, and putting in the measured current data of barometric pressures and humidity levels would be a huge task too. Seems you’d need a program that reads maps and weather-maps into the database to do that set-up rather than hand-key it in.

    Is it possible to cheat for large areas (such as deserts and oceans) where conditions are largely constant? It may be possible by adding square numbers of the tiny base cells together (so use 4 cells, 9 cells, 16 cells, … as if they were a single cell, and then set the base-level cells to be all the same as each other in the group so at the edges it looks like being small cells). Could be that this way you’d reduce the calculation load for such areas without losing too much accuracy. The selection of those areas and cells to be thus treated would however probably be a manual task, and based on data from those areas.

    The structure and strategy you are proposing seems to me to be the only way to achieve a good-enough simulation to predict the weather further into the future than the current GCMs. It does however seem that the sheer quantity of calculations involved (if you get down to the cell-size it needs) will exceed the computer power available. I’m pretty sure that once the cell-size is small enough, then the emergent properties we see in real weather will also emerge in the model. I think also that the weather is not really chaotic, but instead that there’s so much data needed to describe it that we can’t calculate the causes and effects sufficiently well. Also, of course, there are a lot of points where we don’t collect the data, but instead interpolate it from the points we do measure.

    A further point is that we also need to take into account such things as aircraft and shipping, since they will both affect cloud formations. Con-trails from aircraft (when the weather conditions are right for them) will have a larger effect, of course, and the effect of grounding all aircraft was measured after 9/11.

    The area of the Earth (just looked it up) is around 510,000,000 km², and at 100m cells we’d need around 100 times that number of cells – 51 billion cells. That’s just ground-level, too, with maybe at least 500 cells height (giving around 25 trillion cells). With wind-speed in hurricanes up to around 250km/hr, that’s around 70m/s and the minimum time-tick would thus need to be around 1.5 seconds. It seems that in order to do the calculations fast enough to be able to actually predict (rather than hind-cast) it’s going to be necessary to simplify and take short-cuts. Those introduce inaccuracies. Even a 1km cell size gives around 25 billion cells and a minimum time-tick of 15 seconds or so, and at that point the clouds would have to be somewhat parameterised and to introduce an “average cloudiness” number, and you can’t have many cloud-layers within that 1km high cell – just one averaged-out cloudiness parameter. If you want to have a time-tick of an hour or so, then the air-movements will travel across many cells before going into the gozinta of the other cells (in fact up to ~250 cells distant) and a small angular error will get multiplied.

    On the other hand, that also says why the current climate models are carp. No real hope of being accurate.

  11. V.P. Elect Smith says:

    @Simon:

    Thanks for the ideas and input!

    The only advantage of large cells is that you can do software development and testing without a $Millions to $Billions budget… Other than that, they suck.

    But it’s the only game in town. The old “Yes, I know the game is rigged, but it’s the only game in town!” issue. Since we are stuck with being beaten over the head with “Computer Models Say”, the major value is “My model does NOT say that.” So you need a model, even if “All models are wrong”, one can be useful ;-)

    Per cost of computes:

    Part of my emphasis on “one cpu / cell” comes from the fact that in large fab of bulk chips you can get ARM cores down to about 5 ¢ each. Yes, there’s still a lot of other gear involved (network connects & memory for example, and PSU) but the raw computes can be very cheap. I’ve seen quad core compute cards for $5 and one magazine was giving one away free if you bought that copy of the magazine.

    So that has the potential for making a billion core compute engine for about a $Billion with COTS cards or down around $100 Million with custom fab cores on an interconnect coms system on board. Think something like the Nvidia Jetson board but with mesh of Tegra units instead of just one or two:
    https://en.wikipedia.org/wiki/Nvidia_Jetson
    https://en.wikipedia.org/wiki/Tegra#Xavier

    FWIW, I think the CUDA cores would be great for some of the detail math bits, like cloud evolution / development. I’ve not put much thought time on it, but in profiling it will pop up what’s chewing computes and that could likely be tossed to a cuda core specialized loop.

    But if there isn’t much use for CUDA, you swap over to large multi-core chips with low cost per ARM core instead. https://www.parallella.org/ has an 18 core board for $100, so about $5 / core all up. (You can get lower…)

    We’re in a race to very cheap ARM cores in mass. IF you can reasonably run a cell in a single core, the cost becomes attainable. (Note that it is likely layers and steps can be handed to different cores too. So a pipeline of surface, subsurface, air layers, wind, IR could be set up with each process on a separate core on a card, IFF that level of computation were needed for acceptable speed. The output of each part being input to the next part, you could have “surface” running on time tick 2 as soon as it handed off time tick 1 to “subsurface”.)

    I think with that approach, total computes are very much “doable” regardless of cell size (Just a small matter of money…) and for an extremely large cell count, the cost is still inside national budgets of many countries (i.e. $Billions not $Trillions). (And I can do development of software on a home brew Beowulf costing under $200… )

    That was a major driving force behind the approach of having all computes for a single cell happen at one time tick in one core, then hand off neighbor data, instead of the current approach of doing all cells on a big processor for one process (insolation) then doing all cells for the next process, all on one big computer.

    So “maybe someday” if the software ever gets done (and works right…) then someone with Big Bucks can take it, and run it on a gaggle of cheap compute boards to reasonable precision and fine grain.

    Per Clouds & Cell Size:

    Yesterday I was in the lawn chair contemplating clouds and just that shading problem. Trying to decide “What is the largest cell size that can have a hope of working?” Seems to me that it all depends on “At what height do clouds of significance form?”. At very low sun angles (like just at sunrise) cloud shadow isn’t important as it is off in space somewhere. So pick some angle where it’s “Important enough”. Say 30 degrees? Maybe 45 degrees? That says your shadow will be either the same distance away as the cloud height, or about root(3) x as far. (Or 1.73 x).

    Put your minimum cloud height of interest into that, and you get the cell size necessary. Want to resolve shadow discontinuities from a 1000 m cloud height? You need a cell of 500 meters to land it in the next cell over at 45 degrees. Or a little larger at 30 degrees. As you get to 60 degrees elevation of sun, your cell now must be less than 1/1.73 of cloud height where the shadow is landing (ignoring width of cloud effects). So about 1/1.73 of 1000 meters, or roughly 578 meters across.
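
    The geometry in a couple of lines of Python (just the flat-ground trig, ignoring cloud width):

    import math

    def shadow_offset_m(cloud_height_m, sun_elevation_deg):
        # horizontal distance from a cloud to its shadow on flat ground
        return cloud_height_m / math.tan(math.radians(sun_elevation_deg))

    print(shadow_offset_m(1000, 45))   # 1000 m: one cloud height away
    print(shadow_offset_m(1000, 30))   # ~1732 m: about root(3) x as far
    print(shadow_offset_m(1000, 60))   # ~577 m: about 1/1.73 of the height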

    But make your lowest clouds of “interest” higher up, your cell size can grow too.

    For that reason I’m pretty sure that you don’t need 100 m cells “out the gate” or even in the first few years of use. A 500 m cell could resolve a cloud shadow from a 1000 m high cloud at 30 to 45 degree sun angles; then for the middle 1/2 of the day, it would just be seen as “straight down”. Likely OK for “climate” even if weather might be a bit different and actual cloud physical patterns off some. Note that at high latitudes the sun has a low angle anyway. 45-23 = 22 latitude is about where a 45 degree sun angle shows up in winter (and 68 latitude about where it’s the best you get in summer…). So you might not need to model ideal cloud shadows in the tropics, and could just make sure they were “good enough” in high latitude summers for those puffy clouds… maybe…

    The cells will be roughly hexagonal. A side of about 250 meters would make the longest ‘diagonal’ 500 m and area = (3 x root(3) x S^2) / 2. Or 3 x 1.732 x 62500 / 2 = 162,379.7, or roughly 162 x 10^3 sq.m per cell.

    (510 x 10^6 km^2 x 10^6 m^2/km^2) / (162 x 10^3 m^2) = (510/162) x 10^9 = 3.148 x 10^9, or about 3 billion cells. So you are looking at $15 Billion just for the COTS cards.

    So I think it’s unlikely we could get something usable out of a 3200 to 6400 cell model, and even a 32,000 cell model ought to be highly limited. But I think I can do software development on large cells.

    Not ideal, but good enough for proof of concept, I think, maybe…

  12. rhoda klapp says:

    To return to a previous quibble, what happens, well, what would happen, if you were to take one cell of 1 sq metre in a known location with every last bit described accurately? You program that, and I will find someone to instrument that cell to find out the real heat flows. Do you think you would get it right, or would the actual cell provide a surprise? I think it would deliver the surprise. I’d be shocked if anyone could get a decent result by computation. If only because of the thing you are talking about in the AI thread. AI can only win when it knows all the rules and there are no deviations or surprises.

  13. V.P. Elect Smith says:

    @Rhoda Klapp:

    IMHO you are exactly right. I think the 2 biggest error sources divide into:

    exogenous surprises
    Fractal issues

    At large cell sizes, you get few exogenous issues as everything tends to get averaged into that one big fat cell. But then the Game Of Life issues become large. Do you no longer have hurricanes due to too big a cell? Do you no longer get a hurricane moving from one cell into another, and thus changing it dramatically? Causing a foot of rain in an hour? Plus, your “ruler” is so large that the details in the fractal are erased and you get a useless “coastline” result / value.

    But at very small cell sizes with that tiny ruler, you STILL have discontinuities you are ignoring. I walk, bare foot, to the front gate every day. Where sun has hit the cement, it is nice and warm. Just a step away in shade, it is cold as night. What is the “correct” temperature of the surface? The shadow can be as small as a few inches… The grass next to the cement is another temperature, and brown patches warmer than green. Scale about 2 inches. Brown spots do not transpire water, green does.

    This gets back to a point I tried making years ago, regularly ignored. You simply can not average temperatures and think it means anything about heat. Temperature is an Intrinsic Property. So even saying “we’ll just use the average temperature” is a mistake. You really need to know the average HEAT and that depends on specific heat of each item and specific heat of vaporization of any water that evaporates.
    https://chiefio.wordpress.com/2011/07/01/intrinsic-extrinsic-intensive-extensive/

    So you may notice that I propose calculating the Watts delivered at the surface, then to apportion those Watts into subsurface, air and evaporation. Temperature is only a derived value based on (presumed) specific heats and masses of water and solids and air. Useful to display, maybe, but not the basis of calculating effects of heat.

    Now that’s a huge complication for the “instrumented square” as the instrumentation is typically a thermometer, not a Watt meter…

    Then if you DID have a Watt meter, you would need to know the specific heats of all the items in the square… all over the world…

    Oh, and at very small scale cells, exogenous factors become gigantic. Did the neighbor sprinklers just come on and wet you down? Or just humidify the air? Did the car in the driveway start up and send 200 cubic inches / revolution of hot exhaust at your square? Did a Bear do his duty on your square? (How did that change vegetation and specific heats and “moisture” in the coming days, eh?)

    So you have a choice of “too large and you can’t see anything real” and “too small and you can’t possibly get things exactly right for the cell”.

    Either your “ruler” isn’t small enough to resolve reality, or it is, and all sorts of stuff will happen in real life that just isn’t in your model of it. (First big hit is Cities and heat islands as cell size drops below about 100 km )

    You are stuck between impossible wrong averages, or never right exactness.

    But hey, it’s the only game in town, so… (places bet on 42 to win…)

  14. V.P. Elect Smith says:

    FWIW, I don’t expect any model to get reality as a result.

    I do expect it to be useful to “admire the way things are wrong”. So, for example, having a model where you can make runs with the same starting data, but different cell counts / sizes, will let you measure the effect of cell sizes and counts on the results.

    I.e. “You can admire the many ways of being wrong”.

    Can any value in the real world be extracted from that? Depends… do you mean “real climate predictions”? Nope, IMHO no real model can get you there. But do you mean “It shows models are crap”? Then yes, I think that’s a real world value ;-)

  15. rhoda klapp says:

    It’s hard to imagine how anyone could get a one-metre square right even given the choice of surface. Desert sand, snow, whatever. Or sea water. There’s a lot of that. Indeed if you could get water right you could ignore the land, mostly. In my view a working global climate model is not achievable if all you want to do is model the climate. If you want to reinforce a political agenda then ‘working’ has a different set of criteria.

  16. Simon Derricutt says:

    EM – yep, as regards climate, the model won’t have the things that likely do actually produce the slow changes that have happened over the hundred year scale. Might be possible to include things like cosmic rays, galactic dust, changes in the Sun, volcanism from gravitational stresses, and so on, but since we don’t know enough to predict those then they’ll just be a guess.

    On the other hand, it might be good enough to predict the weather somewhat further into the future than the 3 days or so we get at the moment. Your 500m or so grid would probably be good enough to get a better hurricane path prediction, and maybe the big thing is that the approximations are actually stated at the start rather than being hidden.

    As the scale of the cells goes down, the overall circulation simulated will change to something more realistic. Also therefore it means the prediction will not diverge from reality so quickly (providing there’s enough real physics in the cell calculations, anyway).

    Agreed there’s no such beast as an average temperature in reality. Still, people mostly seem to think there is, and it’s easy to do the maths to generate one. When you really get down to it, even temperature is an approximation and the state of thermal equilibrium you need to attain before you measure temperature is also somewhat slippery. The thermometer will of course give you a precise number, but it’s not exactly valid if it’s changing. Heat transfers can be pretty precise, but really it ought to be total energy movements that we measure and work from (as you’ve stated you’re doing). That bit of kinetic energy in the moving air is also something that needs adding in.

    The really tricky bit of calculation in this is going to be cloud formation. I think that’s actually going to be easier with smaller cells, though. I expect you’ve also seen different layers of clouds moving in different directions at different speeds, and accounting for that in a large cell would be difficult.

    Though the actual cloud height is obviously some (dynamic) equilibrium situation, I don’t know how that actually works. Maybe an interesting point if you look at the sum of potential and kinetic energy for a molecule as it rises through the atmosphere, where last time I looked at that I found that the lapse rate (drop-off of temperature with height) ought to be quite a bit larger than it actually is – it’s not actually an equilibrium though it’s regarded as such. The sum of KE and PE in a gravitational column of air ought to actually be constant, and in fact it isn’t.

    I’m a bit late on replying since I’ve been going through the online apps to secure my residence in France after Brexit and to change the driving license to a French one. The sign-on to the driving licence was a pain, since it rejected my various attempts at a password as not secure enough (even the Firefox auto-generated password wasn’t secure enough). Turned out they needed a special character along with caps and numbers, but they weren’t going to tell me that. It also decided on a different username than I’d put in, and didn’t tell me that either, so the subsequent log-on didn’t work until I’d asked it to send me the username (because I’d “forgotten it”). The good news is that after I’d managed to sign on, the rest of the process wasn’t too painful. Just quite a lot of “download this .pdf and read it before you answer yes or no”. Took around 6 hours in all to get the processes started off, but it might become harder if I left it until next year. Still, something I’d been avoiding doing that needed doing. Not as if they’re going to chuck me out of the country, but reduces the hassle later. IIRC there are more French people in the UK than Brits over here. I suspect their paperwork will however be more of a pain than mine – the UK government has a fair history of making life difficult.

  17. President Elect H.R. says:

    (Aside to Simon Derricutt, because I know he’s following this thread.)

    @Simon Derricutt – My wife’s cousin and her hubby are retired in the South of France in a village at the base of the Pyrenees. My wife was born in London, but her lineage on her mom’s side was all Glaswegian since forever. Her mom was working in London – cuz that’s where the jobs were – and there she met my wife’s American serviceman Dad (Korean War period). He was a Scot descendant in West Virginia, which the Scots took to because no one wanted it, it reminded them of the Highlands, and they knew what to do with it. So, Hillbilly and Glasgow city chick. :o)

    The Mrs. and her cousin just had a nice video conversation. No extra cost. Just their monthly fees for internet.

    There are a good few Brits in that village. It seems the UK pension goes much further there. Same is probably true where you are. The natives are friendly; no torches and pitchforks so long as you keep plugging away at learning the language.

    Anyhow, I think of you when we talk about the latest from her cousin and hubby. (He’s fishing crazy like me so we get along famously.) Wish I could tell you the name of the village, but it escapes me for now.

    If this Covid nonsense clears up, it’s our turn to visit them. If you don’t mind, and it’s reasonably in the neighborhood, I wouldn’t mind hoisting one with you and taking a gander at your luthier work.
    .
    .
    .
    We now return to the regular programming.

  18. V.P. Elect Smith says:

    Hey, H.R.:

    IF I’m in Florida by then (I’ll have money out of the house…) I’d be interested in the spouse and me joining up with a France Wine Tipple ;-)

  19. President Elect H.R. says:

    @E.M. – Yeah, a session of Chiefio’s Blog Impromptu Meeting and Dodgy Proceedings is always “a good thing” as Martha Stewart would say.

    So you want to hold a meeting in France or are you talking Florida?

    This year, we’ll be down for Jan & Feb & maybe possibly March. My shoulder is just gonna make it to being able to handle the trailer at the end of the month. I wasn’t sure about that as recently as Thanksgiving.

    We just don’t like the Winter weather any more. We both used to ski, but that’s been out of the question since her stroke. Finally donated all the ski stuff to Goodwill. She still has balance problems. She has had a couple of falls on her bike….. while she was at a dead stop! Just.. plunk, down she went. So skiing is definitely out.

    That leaves us with heading for warmth for the Winter months, and we plan to do that until age makes it impossible. Then we just may move South and stay there. We shall see.

    Anyhow, just like I got connected with OssQss, and you, and Rhoda was able to join in, notices of where we’ll be posted here make for at least the chance for a meet & greet.

    It’s not like the TLAs can’t track us all if they are really interested. I just don’t want loonies crashing the party.

    I have on my bucket list a trip to the Friday Vast Right Wing Conspiracy meeting in NC. I’d like to meet up with Galloping Camel, a shot at saying “Hey” to Gail and her hubby, and some others of like mind.

    We’ve not had time to stay in the Carolinas the last two years, so I haven’t been able to run up to the Bob Evans for a meeting.

    So….. when anyone of us, is out and about and pretty sure someone from the blog might be in the neighborhood, it seems it’s always cool to mention it here and E.M. will pass along info behind the scenes if one doesn’t want it posted in plain sight here. He did it for OssQss and me. The Lakeland meet & greet was announced here.

    Whatever is needed. E.M. runs a full service blog. 😜😁

  20. V.P. Elect Smith says:

    I was thinking France if you were going over.
    I’d be interested in a Florida and then a France too ;-)

    Right now (and for the foreseeable) I’m stuck in California with house arrest rules from Der Governator being promulgated…

  21. rhoda klapp says:

    I’m still not allowed into the US at all so until that changes (or I’m free to spend quarantine in maybe a St Lucia resort before going to Florida) it’ll have to be France. Rural France is wonderful and it is largely kept alive by Brits retiring there, the French kids all go to the city as soon as they can.

  22. Simon Derricutt says:

    H.R. – my mum’s flat is available here now for visitors, and it would be fun to meet and greet. EM, too…. Harder to handle more than 1 set of visitors here since my house has become a workshop, but there are hotels in town. I’m around 3km from Eauze in SW France, and that should be easy enough to find on a map. I figure France won’t be allowing much travel before at least April next year, though, and may have severe restrictions through to July if I’m reading things right.

    I haven’t made guitars for too many years now, so there are only a couple here I made. There are maybe around 10 musical instruments distributed amongst friends and acquaintances, including a bowed psaltery, balalaika, and hammered dulcimer. These days I’m into mainly solving physics problems, and finding ways around the laws. Basically, stuff that’s regarded as crackpot, but logic tells me it isn’t. Hopefully it won’t be too long before I find out whether or not I’m correct.

    France is a lot cheaper than the UK for retirees like me, and of course the weather is generally a lot nicer too. There are always pluses and minuses; the minus is the authoritarianism and the red tape, but the French peasantry tend to ignore the red tape and heavy rules, and at least down in the south the officials are actually helpful where they can be. Around here, it’s country folk even in towns, with a lot of tolerance and friendliness.

    On the software side, which is what this thread is supposed to talk about, my experience ended up mostly in assembler and systems, with microcontrollers (also mainly assembler) nearer the end of my employment. Most of my input here will thus be about the problem and the structure, not the coding. I didn’t have the Python problem of knowing what was in the libraries, since I’d produced my own libraries of assembler and thus could use them when they were needed. Where the problem had been solved before, a cut’n’paste and a call made writing much faster than people might expect. I’d write new routines in a way that enabled such reuse, with a lot of comments so I’d know exactly what they did several years later. Out of practice now, though; it would take a while to regain that speed and facility. The main problem with assembler is that it limits you to one processor family, so I really ought to get up to speed in C. It’s just not needed for what I’m doing at the moment, though. Instead, I need more knowledge of RF and quantum physics….

    I like to tackle problems where “it can’t be done!”. Maybe that’s the reason this cell model gets interesting, since I figure the weather isn’t actually chaotic; there are just too many things happening to easily determine causes and effects. Where there are enough things happening that are in themselves simple and determinate, we get emergent properties that can seem chaotic.

  23. President Elect H.R. says:

    @Simon Derricutt: The cousin says the same things about retiring to France. She and hubby like it a lot. I’ll make a note of your town and try to remember to have my wife ask her cousin about their location.

    Yeah, France is pretty locked down. Probably no trip next year unless countries start opening up quickly.

    She needs written authorization to leave the house! Easy enough to get, though, for things like the Dr., meds, food runs and whatnot. I think they just don’t issue too many permissions at the same time to keep the public spaces sparse, but it’s more than a minor inconvenience to the residents, having to wait for the permission paper.

  24. Kneel says:

    Cells and resolution: would a recursive cell structure be useful and do-able?
    As in: start with 1 cell (the world); then, to calculate, split into north and south and calculate each; for the north (and likewise the south), split into east and west; then… depending on requirements / available CPU / time limit / whatever, you “bail out” with your heuristic “best guess” at the appropriate level.
    Just throwin’ it out there…

  25. V.P. Elect Smith says:

    @Kneel:

    What an interesting idea…. I need to think about it.

    That is an unusual experience for me, so I really appreciate it.

    A “recursive descent” model of the world… Hmm….

    Thanks!
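
    Just to capture the idea while it’s fresh, here’s a first doodle of what that recursive split might look like (purely illustrative Julia; the Region type, the depth budget, and the estimate() heuristic are all placeholders, not anything decided):

    # Doodle of Kneel's recursive-split idea (illustrative only).
    # Split a region in halves until a depth budget runs out, then
    # "bail out" with a placeholder heuristic estimate.
    struct Region
        lat_lo::Float64
        lat_hi::Float64
        lon_lo::Float64
        lon_hi::Float64
    end

    # Placeholder heuristic: a real one would return a best-guess
    # state for the whole sub-region.
    estimate(r::Region) = println("best guess for ", r)

    function refine(r::Region, depth, maxdepth)
        if depth >= maxdepth              # out of budget: bail out
            estimate(r)
            return
        end
        midlat = (r.lat_lo + r.lat_hi) / 2
        midlon = (r.lon_lo + r.lon_hi) / 2
        if iseven(depth)                  # split north/south...
            refine(Region(r.lat_lo, midlat, r.lon_lo, r.lon_hi), depth + 1, maxdepth)
            refine(Region(midlat, r.lat_hi, r.lon_lo, r.lon_hi), depth + 1, maxdepth)
        else                              # ...then east/west, alternating
            refine(Region(r.lat_lo, r.lat_hi, r.lon_lo, midlon), depth + 1, maxdepth)
            refine(Region(r.lat_lo, r.lat_hi, midlon, r.lon_hi), depth + 1, maxdepth)
        end
    end

    refine(Region(-90.0, 90.0, -180.0, 180.0), 0, 3)   # whole world, 3 levels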

  26. Stephen Wilde says:

    Actually, you shouldn’t go into too much detail, because the processes within the cells change in order to retain system stability.
    For Earth one can make do with just the Hadley, Ferrell and Polar cells.
    My basic concepts have been set out in a new analysis which shows that all one needs to create a surface temperature higher than that predicted by radiative equations is a convective overturning cycle.
    GHGs not needed.
    Details of our ongoing project here:

    https://www.researchgate.net/project/Dynamic-Atmosphere-Energy-Transport-Climate-Model

  27. V.P. Elect Smith says:

    @Stephen Wilde:

    Looks good to me!

    It would also be a lot easier to program, at first glance anyway ;-)

    I’ll have to think about writing 2 models now. One based on your analysis / method, and one based on “cells and raw physics”…

    Most of “my approach” would stay the same (but probably use your values as you have already “done the math”) or at least do a sanity check against them (so my W/m^2 might do hourly steps while yours are day / night).

    Biggest difference looks to be convection / air-layers. I’ve just stated it as a “need something” while you have it laid out in observed Hadley / Ferrell / Polar cells. A lot of the question being “Will my approach have them arise as a natural consequence?” (vs just endlessly fiddling the model trying to get them to form…) while using the known air cycles raises the question of “How do you parameterize / program that?” (Which is also a problem for individual cells and getting them as emergent phenomena …)

    Might be nice to “have it both ways”… One with the Hadley, Ferrell, Polar as forced actions and known present, then another with “emergent air” since then it would be possible to compare and contrast to get a better idea what wasn’t quite working right in the emergent approach.

    Hmmm…. I need a ‘cupa’ and a bit of a think…

    Oh, and how do you account for “Mobile Polar Highs”? (Not read it all yet…) Those big blobs of cold polar air that slide down from Canada along the Rockies and run over Texas plunging it into the freezer every so often. As “emergent” I can just let cold dense air displace warmer wetter less dense. Wind as it may. (Then inspect and see if it looks like nature…). With cells programmed in, is it a result of the winter Polar Vortex? Or just the other atmospheric cells delivering air further north and cooling?

    Hmmmm….. again. There’s something to be said for having different “AirLayers” modules that work different ways. You can keep the constant things constant (like solar inputs and ground heat sink and surface type) and then do “model experiments” on different details of the atmosphere. Drive “winds” via a known physical structure of today, parameter based, and fiddle the things like evaporation rates or particulates. Or run it as emergent and see what happens if you remove a lot of crap from the air (as we did between about 1970 to 2000) or put an ice sheet a mile high over Canada…

    Off to make that tea…

  28. V.P. Elect Smith says:

    OK, I’ve done some work on the basic flow of control top level code. It is in Julia.

    In:
    https://chiefio.wordpress.com/2020/12/09/why-i-prefer-c-fortran-even-algol-to-python/

    I looked at a few languages and sort of settled on Julia as the one I’d try first, with Fortran for any “icky bits” that Julia has trouble with (like fixed format data or database access?) and “C” as the ultimate fall back.

    In these bits of code, I left in the example “test call” to Python modules as a reminder to me of how to do it. Yes, I’m also likely to call Python for some bits as it has a load of libraries that might be useful.
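
    For anyone curious, the live version of that preserved pattern looks about like this (a sketch; it assumes the PyCall package is installed and built against Python 3):

    # Sketch: calling a Python module from Julia via PyCall.
    # (Assumes PyCall is installed and wired to Python 3.)
    using PyCall
    pymath = pyimport("math")        # import a Python module
    println(pymath.sqrt(2.0))        # call into it like a Julia object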

    I’ve run this, and the basic “flow” is there. Pretty much all of it “just a stub” saying “got here!”… but I do iterate over years, months, days, cells (with very cut down values) to show the idea of launching a bunch of cell independent calculating processes (eventually to use the Julia parallel processing features and distribute over multiple cores on many SBCs).

    While it looks pretty puny, do remember this is also me learning a whole new language for this processing…

    What are the bits “so far”?

    ems@OdroidN2:~/Cmodel$ ls
    AirLayers.jl   Cells.jl  Monitor.jl     Surface.jl
    BuildWorld.jl  Model.jl  SubSurface.jl  WReports.jl
    

    These are invoked with “julia program.jl” like:

    ems@OdroidN2:~/Cmodel$ julia Model.jl 
     
    This is the Main Program that coordinates / starts the rest. 
    [...]
    

    What’s in “Model.jl”?

    A heading saying I wrote it. The preserved model call to Python as comments. Then a set of “include foo.jl” that pulls the other programs in as though they were written here.

    Then I print out the notice that you got here and it is running, and then invoke function calls to each of the functions in those other programs. (Those presently mostly just say “You got here!” but eventually will do something more useful.)

    ems@OdroidN2:~/Cmodel$ cat Model.jl 
    #
    # Climate Model via Cells
    # 16 Dec 2020 - E.M.SMith
    
    #using PyCall
    #@pyimport Main
    #Main.Funct.Name()
    # Built for Python 3
    
    include("BuildWorld.jl")
    
    # spawn Monitor needed eventually
    include("Monitor.jl")
    
    include("Cells.jl")
    include("WReports.jl")
    
    println(" ")
    println("This is the Main Program that coordinates / starts the rest. ")
    println(" ")
    
    BuildWorld()
    Monitor()
    Cells()
    
    WReports()
    

    Here’s what BuildWorld.jl looks like at the moment:

    ems@OdroidN2:~/Cmodel$ cat BuildWorld.jl 
    #
    # Climate Model via Cells
    # 16 Dec 2020 - E.M.SMith
    
    #using PyCall
    #@pyimport Main
    #Main.Funct.name()
    # Use Python 3
    
    function BuildWorld()
    println(" ")
    println("You have reached BuildWorld. ")
    println("For each Cell, place it on the Globe ")
    println("Calculate the neighbors, angles, inclination,etc ")
    println("Stuff the database with calculated values ")
    println(" ")
    end
    

    Again, a descriptive header (that will expand as the program does more), the preserved nag about how to call Python programs. Then I declare a function that basically just prints a message. Eventually it will actually build the cell_world.

    I just noticed I didn’t “pretty print” by indenting the println statements. OK, I’ll need to do that. Fortunately, unlike Python, that doesn’t change the meaning of this bit of program.

    Monitor.jl is about the same ATM:

    ems@OdroidN2:~/Cmodel$ cat Monitor.jl 
    #
    # Climate Model via Cells
    # 16 Dec 2020 - E.M.SMith
    
    #using PyCall
    #@pyimport Main
    #Main.Funct.name()
    #Fails on  python2 as Julia built for python 3
    
    function Monitor()
    	println(" ")
    	println("This is the proposed Monitoring Program ")
    	println("It will be spawned into the background")
    	println("or may be launched by hand")
    	println(" ")
    end
    

    Though I did pretty print / indent it. It will stay that way until such time as I need a monitoring program, then I’ll write it ;-)

    Cells.jl actually does a loop, testing that I can properly make loops go (given the strange way Julia requires anything in a loop to be local, unless you declare it “global”, or it is an Array, or sometimes in the REPL, but not always, or…).

    ems@OdroidN2:~/Cmodel$ cat Cells.jl 
    #
    # Climate Model via Cells
    # 16 Dec 2020 - E.M.SMith
    
    #using PyCall
    #@pyimport Main
    #Main.Funct.Name()
    # Use Python 3
    
    function Cells()
    	println(" ")
    	println("This is Cells - It launches per cell model runs ")
    	println(" ")
    	global celllist = (1:3)	
    	println(celllist)
    	for year in 1:3
    		for month in 1:4
    			for day in 1:2
    				for cell in celllist
    					println("Spawn Cell ",cell," in year ", year," in ", month," on day ", day)
    				end
    			end
    		end
    	end
    end
    
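    For the record, that scoping gotcha in a nutshell (a minimal script-level example, not from the model):

    # In a script, a for loop gets its own scope, so assigning to a
    # top-level variable inside it needs "global". Inside a function,
    # no such declaration is required.
    total = 0
    for i in 1:3
        global total += i      # errors in a script without "global"
    end
    println(total)             # 6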

    I don’t know that I’ll keep the iteration of spawning cells like this, but maybe. Julia has some nice features for parallel execution that would let me spawn each cell model run to a different core, and it kind of makes sense to do all of them on a given day at once (with air layers and insolation iterating by the hour inside the cells) but I might need to do iterations by hour. I hope not…

    (Coordinating parameter / value passing is the issue here. 24 value buckets and shift / share them? Or iterate over 24 sharing one at a time? It will depend on process set up time vs run time vs storage cost and time… so need some test runs to decide. Process set-up is not cheap, so you want to do a LOT in one spawned process. OTOH, memory needed is not known yet, so don’t want to end up swapping / thrashing. Decision for later I think…)
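
    As a note-to-self on those parallel features, the per-cell spawn might eventually look something like this with Julia’s Distributed standard library (a sketch only; run_cell is a made-up stand-in for the real cell.jl work):

    # Sketch: farm cell runs out to worker processes.
    # (Distributed is a Julia standard library; run_cell is a placeholder.)
    using Distributed
    addprocs(4)                # e.g. one worker per core on a 4 core SBC

    @everywhere function run_cell(cellid, year, month, day)
        # the real per-cell physics would happen here
        return "cell $cellid done for year $year month $month day $day"
    end

    # one day's worth of cells, spread across the workers
    results = pmap(cellid -> run_cell(cellid, 1, 1, 1), 1:3)
    foreach(println, results)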

    Then I write a report at the end (once I know what will be IN the report…)

    ems@OdroidN2:~/Cmodel$ cat WReports.jl 
    #
    # Climate Model via Cells
    # 16 Dec 2020 - E.M.SMith
    
    #using PyCall
    #@pyimport Main
    #Main.Funct.name()
    #Fails on  python2 as Julia built for python 3
    
    function WReports()
    	println(" ")
    	println("This is the proposed Final Reporting  Block ")
    	println(" ")
    end
    

    At present, I’m not having a “cell” do anything, but I did write targets for Surface, SubSurface, and AirLayers. All of them are like the one just above, only announcing that “you got here”.

    Now that Cells is working and jobbing out lots of individual cell executions, I’ll add “cell” (but on this one computer only) and play a bit with multiple jobs (4 cores…) along with the other modules.

    Here’s what you get if run “as is” without the actual cell job doing anything yet:

    ems@OdroidN2:~/Cmodel$ julia Model.jl 
     
    This is the Main Program that coordinates / starts the rest. 
     
     
    You have reached BuildWorld. 
    For each Cell, place it on the Globe 
    Calculate the neighbors, angles, inclination,etc 
    Stuff the database with calculated values 
     
     
    This is the proposed Monitoring Program 
    It will be spawned into the background
    or may be launched by hand
     
     
    This is Cells - It launches per cell model runs 
     
    1:3
    Spawn Cell 1 in year 1 in 1 on day 1
    Spawn Cell 2 in year 1 in 1 on day 1
    Spawn Cell 3 in year 1 in 1 on day 1
    Spawn Cell 1 in year 1 in 1 on day 2
    Spawn Cell 2 in year 1 in 1 on day 2
    Spawn Cell 3 in year 1 in 1 on day 2
    Spawn Cell 1 in year 1 in 2 on day 1
    Spawn Cell 2 in year 1 in 2 on day 1
    Spawn Cell 3 in year 1 in 2 on day 1
    Spawn Cell 1 in year 1 in 2 on day 2
    Spawn Cell 2 in year 1 in 2 on day 2
    Spawn Cell 3 in year 1 in 2 on day 2
    Spawn Cell 1 in year 1 in 3 on day 1
    Spawn Cell 2 in year 1 in 3 on day 1
    Spawn Cell 3 in year 1 in 3 on day 1
    Spawn Cell 1 in year 1 in 3 on day 2
    Spawn Cell 2 in year 1 in 3 on day 2
    Spawn Cell 3 in year 1 in 3 on day 2
    Spawn Cell 1 in year 1 in 4 on day 1
    Spawn Cell 2 in year 1 in 4 on day 1
    Spawn Cell 3 in year 1 in 4 on day 1
    Spawn Cell 1 in year 1 in 4 on day 2
    Spawn Cell 2 in year 1 in 4 on day 2
    Spawn Cell 3 in year 1 in 4 on day 2
    Spawn Cell 1 in year 2 in 1 on day 1
    Spawn Cell 2 in year 2 in 1 on day 1
    Spawn Cell 3 in year 2 in 1 on day 1
    Spawn Cell 1 in year 2 in 1 on day 2
    Spawn Cell 2 in year 2 in 1 on day 2
    Spawn Cell 3 in year 2 in 1 on day 2
    Spawn Cell 1 in year 2 in 2 on day 1
    Spawn Cell 2 in year 2 in 2 on day 1
    Spawn Cell 3 in year 2 in 2 on day 1
    Spawn Cell 1 in year 2 in 2 on day 2
    Spawn Cell 2 in year 2 in 2 on day 2
    Spawn Cell 3 in year 2 in 2 on day 2
    Spawn Cell 1 in year 2 in 3 on day 1
    Spawn Cell 2 in year 2 in 3 on day 1
    Spawn Cell 3 in year 2 in 3 on day 1
    Spawn Cell 1 in year 2 in 3 on day 2
    Spawn Cell 2 in year 2 in 3 on day 2
    Spawn Cell 3 in year 2 in 3 on day 2
    Spawn Cell 1 in year 2 in 4 on day 1
    Spawn Cell 2 in year 2 in 4 on day 1
    Spawn Cell 3 in year 2 in 4 on day 1
    Spawn Cell 1 in year 2 in 4 on day 2
    Spawn Cell 2 in year 2 in 4 on day 2
    Spawn Cell 3 in year 2 in 4 on day 2
    Spawn Cell 1 in year 3 in 1 on day 1
    Spawn Cell 2 in year 3 in 1 on day 1
    Spawn Cell 3 in year 3 in 1 on day 1
    Spawn Cell 1 in year 3 in 1 on day 2
    Spawn Cell 2 in year 3 in 1 on day 2
    Spawn Cell 3 in year 3 in 1 on day 2
    Spawn Cell 1 in year 3 in 2 on day 1
    Spawn Cell 2 in year 3 in 2 on day 1
    Spawn Cell 3 in year 3 in 2 on day 1
    Spawn Cell 1 in year 3 in 2 on day 2
    Spawn Cell 2 in year 3 in 2 on day 2
    Spawn Cell 3 in year 3 in 2 on day 2
    Spawn Cell 1 in year 3 in 3 on day 1
    Spawn Cell 2 in year 3 in 3 on day 1
    Spawn Cell 3 in year 3 in 3 on day 1
    Spawn Cell 1 in year 3 in 3 on day 2
    Spawn Cell 2 in year 3 in 3 on day 2
    Spawn Cell 3 in year 3 in 3 on day 2
    Spawn Cell 1 in year 3 in 4 on day 1
    Spawn Cell 2 in year 3 in 4 on day 1
    Spawn Cell 3 in year 3 in 4 on day 1
    Spawn Cell 1 in year 3 in 4 on day 2
    Spawn Cell 2 in year 3 in 4 on day 2
    Spawn Cell 3 in year 3 in 4 on day 2
     
    This is the proposed Final Reporting  Block 
    

    Just noticed I left out the word “month” in the println, so the output says “in 1” rather than “in month 1”… I’ll need to add that. (How bugs happen and get fixed…)

    So I’m pleased with that, even if it is a bit trivial. It gets me to the point where I’m writing some code (any code…) and where I’m ready to take on the next language features: parallel processing, and how to properly pass parameters and return values / arrays. (It looks like arrays can be modified inside functions with the results visible globally. If so, that’s great, as I can just treat arrays as “Common Blocks”. If not, well, more to dig at…) That’s along with starting on the physics of some of the activities. Oh, and actually turning the prior method of cell stuffing into code:
    https://chiefio.wordpress.com/2020/11/08/there-is-no-good-coordinate-system-for-climate-models/
    https://chiefio.wordpress.com/2020/11/17/what-happens-in-a-climate-cell-model/
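
    On that array question: Julia does pass arrays by sharing, so a function can update the caller’s array in place, no global needed. A minimal sketch (warm! and the values are made up):

    # Arrays are passed by sharing, so in-place mutation inside a
    # function is visible to the caller -- "Common Block" style.
    function warm!(temps)
        temps .+= 1.0          # in-place broadcast update
    end

    cell_temps = [288.0, 290.5, 275.2]
    warm!(cell_temps)
    println(cell_temps)        # [289.0, 291.5, 276.2]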

    So some actual progress. Instead of just fooling around and complaining about cells and language syntax vs semantics…

  29. V.P. Elect Smith says:

    Got the “cell” basic flow of control bits done. And the stubs written. I indent the output of the stubs so that you can see when it’s called by a step above it.

    Here’s the current ‘ls’ of programs. It has grown a bit…

    ems@OdroidN2:~/Cmodel$ ls
    AirAbsorbTemp.jl  Cells.jl    Model.jl    Stratos.jl     WReports.jl
    AirAdjustRHT.jl   Clouds.jl   Monitor.jl  SubSurface.jl  Winds.jl
    AirLayers.jl      Convect.jl  Ocean.jl    Surface.jl     cell.jl
    BuildWorld.jl     Dirt.jl     Precip.jl   Tropos.jl
    

    It’s a bit daunting to think that all those trivial stubs are set to grow into hundreds, perhaps thousands of lines of code. This could take a while… |-{

    “cell.jl” is the basic single cell that is quasi-described in the “what happens in a cell” posting.

    ems@OdroidN2:~/Cmodel$ cat cell.jl 
    #
    # Climate Model via Cells
    # 16 Dec 2020 - E.M.SMith
    
    #using PyCall
    #@pyimport Main
    #Main.Funct.Name()
    # Python 3 supported
    
    println("You have a cell!")
    
    include("Surface.jl")
    include("SubSurface.jl")
    include("AirLayers.jl")
    
    Surface()
    SubSurface()
    AirLayers()
    

    Pretty basic. I could likely start losing some of those “how to call Python” nags; I think I have enough ;-)

    So it just says “Hey, I’m a Cell!” (Next I get to pass a cell # to it so it actually knows what cell it is…). Then it does the “surface stuff” where TSI (filtered) turns into heat / temperature. Then it sorts that to SubSurface (which further divides into water vs dirt subsurfaces) and then AirLayers (which at present is just a Stratospheric stub and a Troposphere that has a lot going on).
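
    Passing that cell # in could be as simple as a command-line argument, something like this (a sketch; whether it ends up as an ARGS read or a function parameter is still open):

    # Sketch: pick up a cell id from the command line, e.g.
    #   julia cell.jl 42
    cellid = isempty(ARGS) ? 1 : parse(Int, ARGS[1])
    println("You have a cell! I am cell number ", cellid)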

    Present output looks like this:

    ems@OdroidN2:~/Cmodel$ julia cell.jl 
    You have a cell!
     
    You have reached Surface Physics. 
     
     
    You have reached SubSurface Heat Flow. 
     
     
        You have reached Dirt, Rocks & Permafrost Heat Flow. 
     
     
        You Are In The  Oceans Now.  Good luck with that. 
     
     
    You have reached AirLayers Process. 
     
     
        You have reached The Stratophere, 
        So do Stratospheric UV, O3, Temp, etc.
     
     
        You are in the Troposphere Convective Zone. 
     
     
            You have reached Troposheric Self Absorption.
            So just deal with it.  All that IR and stuff
     
     
            You are a rising ball of hot air. 
     
     
            Send in the Clouds, there must be Clouds. 
     
     
            You are all wet, or getting there. 
     
     
            You have reached Air RH & T Adjustment. 
            After all we've been through, we need to adjust.
     
     
            You have ridden the Winds... 
     
        After all of that, I'm done with Tropospheric for a while.
    ems@OdroidN2:~/Cmodel$ 
    

    Yes, as the tedium of coding wears on, the printed comments get more “out there”…

    So I need to noodle around a bit more on exactly what order things need to be in. I can already see that I really need to assure Stratos and Tropos have absorbed their bits of TSI (and albedo has done its bit too…) before I do Surface.

    This is a decent example of why coding a top layer flow of control, even just with stubs, can be helpful. It lets you think through “Is this order really right?” before you’ve written 20 pages trying to fix an ordering problem…

    Here’s what Tropos.jl looks like:

    ems@OdroidN2:~/Cmodel$ cat Tropos.jl 
    #
    # Climate Model via Cells
    # 16 Dec 2020 - E.M.SMith
    
    #using PyCall
    #@pyimport Main
    #Main.Funct.name()
    #Use Python 3
    
    include("AirAbsorbTemp.jl")
    include("Convect.jl")
    include("Clouds.jl")
    include("Precip.jl")
    include("AirAdjustRHT.jl")
    include("Winds.jl")
    
    function Tropos()
    	println(" ")
    	println("    You are in the Troposphere Convective Zone. ")
    	println(" ")
    
    	AirAbsorbTemp()
    	Convect()
    	Clouds()
    	Precip()
    	AirRHT()
    	Winds()
    
    	println("    After all of that, I'm done with Tropospheric for a while.")
    end
    

    The “include” files are all just basic “You got me!” stubs, but with ever more “out there” comments printed…

    AirLayers just calls Stratos() and Tropos(); but later I can add more layers as needed. I can easily see a TropoPause() and perhaps a Meso(). Or maybe break it into Ferrell, Polar, Hadley cells (somehow; waves hands…)

    So by breaking flow of control that way, I can code different ideas about how air layers might work, and swap them in as desired, to see what’s best or what’s more “real”; or just what’s interesting but wrong…
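
    Since functions are first class in Julia, swapping variants in could be as simple as this (a sketch; both variant names and bodies are made up):

    # Two interchangeable AirLayers variants with the same call shape.
    # (Names and bodies are placeholders for illustration.)
    AirLayersEmergent() = println("emergent circulation: let the winds arise")
    AirLayersForced()   = println("forced circulation: Hadley / Ferrell / Polar")

    airlayers = AirLayersEmergent   # or AirLayersForced, per experiment
    airlayers()                     # the rest of the model calls it the same way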

    ems@OdroidN2:~/Cmodel$ cat AirLayers.jl 
    #
    # Climate Model via Cells
    # 16 Dec 2020 - E.M.SMith
    
    #using PyCall
    #@pyimport Main
    #Main.Funct.name()
    #Use Python 3
    
    include("Stratos.jl")
    include("Tropos.jl")
    
    function AirLayers()
    	println(" ")
    	println("You have reached AirLayers Process. ")
    	println(" ")
    	Stratos()
    	Tropos()
    end
    

    Similarly, SubSurface just calls Dirt() and Ocean(), but at some point, once I have a surface type, it will test what kind and call only the right one for that cell.

    ems@OdroidN2:~/Cmodel$ cat SubSurface.jl 
    #
    # Climate Model via Cells
    # 16 Dec 2020 - E.M.SMith
    
    #using PyCall
    #@pyimport Main
    #Main.Funct.Name()
    # uses Python 3
    
    include("Dirt.jl")
    include("Ocean.jl")
    
    function SubSurface()
    	println(" ")
    	println("You have reached SubSurface Heat Flow. ")
    	println(" ")
    	
    	Dirt()
    	Ocean()
    end
    

    During development, I can just comment out one or the other and concentrate on getting the Chosen One just right (or good enough…). Then go back out to a larger scope, choose based on cell type, and run multiple cells.
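
    When that day comes, the test might look something like this (a sketch; surface_type is a made-up parameter that would really come from the cell database, and Dirt() / Ocean() are the stubs above):

    # Sketch: call only the right subsurface routine for the cell type.
    function SubSurface(surface_type)
    	println(" ")
    	println("You have reached SubSurface Heat Flow. ")
    	println(" ")
    	if surface_type == :ocean
    		Ocean()
    	else
    		Dirt()
    	end
    end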

    This first cut is “Top Down”, doing global flow of control and stubs.

    Then I’ll do a bit of “Bottoms Up”. Picking some part of the physics of it to code up into a hopefully correct function. (Likely an airless dry world of dirt in a vacuum so ‘Sun & dirt only’)

    Then I shift to “middle out”. Run the top and some of the bottoms together, and look for places where I need more work in the middle. Things like what order which bits of physics need to be in, and putting in missing bits and such.

    Then, at the end, you get to “end to end integration” and testing…

    Probably a couple of years away at my present rate. Oh Well… something to live for ;-)
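
    Just so there’s one concrete “Bottoms Up” crumb on the page already, that Sun & dirt starting point is basically a Stefan-Boltzmann balance, something like (a sketch; the input values are illustrative):

    # Radiative equilibrium of bare airless dirt: absorbed sunlight
    # balances sigma * T^4 emission. (Input values are illustrative.)
    const SIGMA = 5.670374419e-8        # Stefan-Boltzmann, W/m^2/K^4

    function dirt_temp(insolation, albedo)
        absorbed = insolation * (1 - albedo)
        return (absorbed / SIGMA)^0.25  # Kelvin
    end

    println(dirt_temp(1000.0, 0.3))     # ~333 K for noonish Sun, albedo 0.3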

  30. Stephen Wilde says:

    The trouble is that going beyond the basic three cell structure (in each hemisphere) necessarily involves ever increasing complexity the further down the rabbit hole one goes.
    Logically, a complete model would involve a representation of every air movement, from the main three cells down to an individual zephyr above the surface. You are having a go, but getting quite a headache in the process.
    In fact, the usual meteorological forecast models are pretty good now up to five days out, but then it all goes haywire, because the potential variations from the initial parameters are literally infinite in a three dimensional atmosphere around a rotating spherical body, illuminated by a point source of energy, with a whole raft of different materials with differing thermal characteristics on the surface and in the air.
    The question is how deep one really needs to go to get enough utility value to justify the cost and time input.
    Given that my proposition based on atmospheric mass and convective overturning sets the base temperature for a surface beneath an atmosphere, at a given level of insolation, regardless of radiative gases, I think you only need go far enough to obtain an approximate idea of the amount of potential variability over a period of 1000 years and the amount of weather variability arising from climate zone shifting during such a period.
    So, if you can produce a broad brush computation that generates the Roman Warm Period, the Dark Ages, the Mediaeval Warm Period, the Little Ice Age and the current warming spell, then that is as good as one can get.
