There Is NO Good Coordinate System For Climate Models

Math is a cruel mistress…

Just as there is no fixed length for the coastline of Britain (and there can not be; at best you can state what size ruler was used for any given length you get): in my attempts to choose the best grid system for a climate model, I’ve come to the conclusion that there isn’t one. It just isn’t possible.

How bad does this make the models? I don’t know. But I do know that they must be wrong to some degree just based on the coordinates / cells used.

In this posting I’m going to briefly admire some of the “issues” I’ve run into. It’s been months, so this must be a small subset of the pondering done.

The basic issues end up being a choice of “doesn’t represent reality” vs “makes models harder”. I’ll try to sort things so this conclusion is supported, but it is complex.

Shape Matters

Your first “unreality” problem is that the Earth is an oblate spheroid. Attempts to treat this as a flat plane introduce all manner of distortions. See the Mercator Projection as one example. Area per square grid cell is highly variable. Mass flow from one cell to another is distorted.

So typically folks just use a spherical reference instead of an oblate spheroid and let the fact that any given point is a few hundred meters (or more) off from reality be “OK”. Probably fine for gross models with under 100,000 cells, as the cells are so large anyway.

Just consider the case of a “North west wind”. The cells on diagonals meet at a point. How do you flow the mass of air leaving a cell through a point? Send all of it to the cells just off the diagonal? Part north and part west (and thus none actually north west…)? Either you do something very un-physical (flow high mass through a point contact) or you do something incorrect (turn a north west wind into one North wind and another West wind, into two cells where the air would not actually be flowing in reality). This must introduce distortions. Enough to matter? Nobody will know…
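To make the point concrete, here’s a minimal sketch (with made-up numbers) of what a square-grid model is forced to do with that diagonal wind:

```python
import math

# A "north-west" wind of 10 kg/s leaves a square grid cell.
# Square cells exchange mass only across their four edges,
# so the diagonal flow must be split into a north part and
# a west part -- nothing actually moves north-west.
mass_flux = 10.0          # kg/s leaving the cell (made-up number)
angle = math.radians(45)  # 45 degrees between due north and due west

north_part = mass_flux * math.cos(angle)
west_part = mass_flux * math.sin(angle)

print(round(north_part, 3))  # 7.071 kg/s sent due north
print(round(west_part, 3))   # 7.071 kg/s sent due west
# The diagonal neighbour, where the air is really headed,
# receives nothing through its point contact.
```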

So “projection matters”. There’s all manner of map projections and you WILL be projecting reality onto a grid map for your model to work. The models all take a cell and “do math” for that cell. Solar heat gain. Mass gain / loss via wind (advection). Air rise / fall (convection) along with cloud formation / evaporation. Precipitation guesses. These all happen over an area. Depending on your map projection used, you can get equal area, or equal distance, or equal angle, but not all of them. So, OK, which matters more to your globe? That your flat grid map has incorrect distances, areas, or directions? Again you can try to correct for these but with what artifacts? I’m pretty sure that again, nobody knows.

Choosing a projection surface

A surface that can be unfolded or unrolled into a plane or sheet without stretching, tearing or shrinking is called a developable surface. The cylinder, cone and the plane are all developable surfaces. The sphere and ellipsoid do not have developable surfaces, so any projection of them onto a plane will have to distort the image. (To compare, one cannot flatten an orange peel without tearing and warping it.)

One way of describing a projection is first to project from the Earth’s surface to a developable surface such as a cylinder or cone, and then to unroll the surface into a plane. While the first step inevitably distorts some properties of the globe, the developable surface can then be unfolded without further distortion.

Now some of those distortions can be adjusted in the model. So, for example, where a grid cell at the polar end of a map is visually quite large but actually a smaller area, in your computer model you can assign it the correct area. Nothing in the math needs to respect the visual projection. However… you still end up with artifacts such as the polar cells being, effectively, triangles with a very wedge shape while the equatorial cells are nearly rectangular with equal edge lengths. How do you “correct” for that? From what I’ve seen, the models do not try. Having equal degrees of width in rectangular cells is not the same as having equal lengths of edge. (And even “rectangular” is inaccurate; in reality these are, at best, “spherical rectangles” with curving edges.)
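The “assign the correct area” fix is at least easy to do exactly for latitude / longitude cells, since a cell bounded by two parallels and two meridians has a closed-form area on a sphere. A quick sketch (spherical Earth assumed, which is itself one of the approximations discussed above):

```python
import math

R = 6371.0  # mean Earth radius in km (spherical approximation)

def latlon_cell_area(lat1, lat2, dlon_deg):
    """Exact area of a 'spherical rectangle' bounded by two
    parallels (degrees latitude) and dlon_deg degrees of longitude."""
    dlon = math.radians(dlon_deg)
    return R**2 * dlon * (math.sin(math.radians(lat2)) -
                          math.sin(math.radians(lat1)))

equator = latlon_cell_area(0, 5, 5)   # a 5 x 5 degree cell at the equator
polar = latlon_cell_area(85, 90, 5)   # a 5 x 5 degree cell touching the pole

print(round(equator))  # roughly 3.1e5 km^2
print(round(polar))    # roughly 1.3e4 km^2 -- same "degrees", far less area
```

Same number of degrees on a side, yet the polar cell has only a few percent of the equatorial cell’s area; the model can carry the right number, but the wedge shape remains.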

Three planes define a spherical triangle, the principal subject of this article. Four planes define a spherical quadrilateral: such a figure, and higher sided polygons, can always be treated as a number of spherical triangles.

Near as I’ve been able to tell, the difference between a rectangular flat cell and a “spherical quadrilateral” is simply ignored in the model codes I’ve seen. Many are private, paywalled, or restricted access, however, so again “who knows”.

So, right out the gate, we have the problem that “cells” are the core of the models: the objects that handle mass flow and the calculations of physics events, and that have the attributes of altitude, latitude, longitude, inclination, etc. Yet they are wrong. A curved “spherical quadrilateral” does not have a single latitude, longitude, inclination, angle to the sun, or topography. We are again faced with the fractal problem of the coastline paradox. What is the surface area of the spherical quadrilateral? It depends on the size of ruler you use…

Size Matters

So all sorts of averages and estimates and simplifications and such are done to the individual cell. Mountains cease to exist. Lakes and rivers evaporate. In some cases, islands disappear and the cell becomes all water. South facing hill sides become flat plains and north facing hillsides become sunny.

Per the Earth Wiki, the surface area of the Earth is “510,072,000 km² (196,940,000 sq mi)”. So using 10,000 “grid cells”, each one would be 19,694 square miles (51,007 km²) in area. That’s about 140 miles (225 km) on a side. Any object smaller than 225 × 225 km can not even show up. Your coastline will have 225 km (140 mile) excursions in it. The English Channel can not exist. Sicily, if you are lucky, will get one cell.
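A quick back-of-envelope check of those numbers:

```python
import math

earth_area_km2 = 510_072_000   # total surface area of the Earth
earth_area_mi2 = 196_940_000
n_cells = 10_000

cell_km2 = earth_area_km2 / n_cells
cell_mi2 = earth_area_mi2 / n_cells

print(round(cell_km2))             # 51007 km^2 per cell
print(round(math.sqrt(cell_km2)))  # 226 km on a side
print(round(math.sqrt(cell_mi2)))  # 140 miles on a side
```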

To get any useful precision and accuracy, you will require hundreds of thousands of cells. Perhaps millions. GisTemp just recently changed from 6000 cells to a larger-but-still-too-small number. Then note that most “climate data”, i.e. temperatures, are only recorded for 1200 cells or less over most of the history used. The rest are fabrications by various means of interpolation and more. Models tend to be highly sensitive to starting conditions, yet most cells will have no actual data for starting conditions.

The computational load to handle hundreds of thousands to millions of cells will be horrific. The reason models have used far fewer cells is that the computational load is intractable at higher numbers. Moore’s Law has led to far more computes being far cheaper now, so models are gaining more cells, but they are still oversimplifying reality.

But if you don’t have that many cells, you get things like this coastline of England and France:

Artifacts of Hex Map Conversion

Now realize the size of these hex cells is quite small and the detail vastly more than you get in climate models…

Then Add 3D Layers

Then you get to add layers of altitude on top and below the reference cells…

The atmosphere is where all the weather happens and the oceans are where all the climate is happening. Yet only 7 layers of ocean were used in the fancy model I looked at (and even then it could be set to one, or off). Atmosphere gets a similar treatment. IMHO, we need at least one layer per 1000 feet of altitude to properly handle cloud layers (“decks”), fog, thunderstorms, tropical storms, etc. And THEN you need to have a tropopause and stratospheric layers to even begin to have a clue what’s going on in the air. UV and IR have differential properties based on ions in the air, altitude (pressure broadening), lateral flows, and a whole lot more. The resolution of those layers is far too low in existing models, plus, you are back at the issue of geometry of those layers. Higher is bigger (though not by much).

The oceans have a complex bottom (ignored) with massive volcanic inputs (ignored since we don’t know their location and size with any precision at all). Ocean currents are 3 dimensional things and poorly mapped. Then there’s that pesky size issue thing again. Currents under a few hundred km in size can’t even show up. At best you can get an average mass flow. Maybe. But that isn’t the same as the Gulf Stream with eddies and all nor the Arctic currents with salinity gradients nor the outflow of the Mediterranean high salt density water forming a subsurface layer. Will the Strait Of Gibraltar even show up in your grids?

Going to a hex cell map does solve the “mass flow out a point” problem of the North West wind, but introduces other issues. Coastlines become jagged. Winds that would blow in a straight line along the equator now must quasi ‘flutter’ back and forth between cells in a serpentine path. Then there are those pesky pentagons that simply must be in the globe to keep it round. (Oh, and don’t forget the hexagon / pentagon sides are actually not straight, but spherically curved… so again a complexity in calculating the area of a tile…)

Here we can see a couple of those kinds of artifacts:

I’m not even sure what part of the planet this is supposed to be. The hex tiles are just not fine enough, so we get water spread over real land and we get land that just disappears, along with snow spreading in hex shapes:

Covering Earth with Hexagonal Map Tiles [closed]

Well, lots of people have made the point that you can’t tile the sphere with hexagonal tiles – maybe you are wondering why.

Euler stated (and there are lots of interesting and different proofs, and even a whole book) that given a tiling of the sphere with x polygons, y edges total, and z vertices total (for example, a cube has 6 polygons with 12 edges and 8 vertices) the formula

x – y + z = 2

always holds (mind the minus sign).

(BTW: it’s a topological statement so a cube and a sphere – or, to be precise, only their border – is really the same here)

If you want to use only hexagons to tile a sphere, you end up with x hexagons, having 6*x edges. However, each edge is shared by a pair of hexagons, so we only want to count 3*x of them; and 6*x vertices but, again, each of them is shared by 3 hexagons, so you end up with 2*x vertices. Then x – 3*x + 2*x = 0, not 2, so the formula can not be satisfied by hexagons alone.
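That counting argument is easy to check numerically. A small sketch (the function and the face / edge / vertex counts are just illustrations):

```python
def euler_characteristic(faces, edges, vertices):
    """F - E + V, which Euler says must equal 2 for any tiling of a sphere."""
    return faces - edges + vertices

# Known polyhedra satisfy F - E + V = 2:
print(euler_characteristic(6, 12, 8))    # cube -> 2
print(euler_characteristic(20, 30, 12))  # icosahedron -> 2

# A hypothetical all-hexagon sphere with x faces has 6x/2 = 3x edges
# (each shared by 2 faces) and 6x/3 = 2x vertices (each shared by 3):
x = 1000
print(euler_characteristic(x, 3 * x, 2 * x))  # 0, never 2 -> impossible

# Swap in 12 pentagons (soccer ball: 12 pentagons + 20 hexagons,
# 90 edges, 60 vertices) and the balance is restored:
print(euler_characteristic(32, 90, 60))  # 2
```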

And so it goes. Even with a LOT of hexagons (and the constant 12 pentagons) you end up with the “straight line through the hexagons” taking a hard turn at each pentagon and so get lines of discontinuity:

I’m looking to make a geodesic hexagonal global grid based on a icosahedron. Such a grid would need to have 12 pentagons in it as well to be able to fit a sphere.

I know mmqgis can generate grids, but these grids are plain flat grids, not geodesic ones, and they do not map to a sphere without extreme distortion, which I’m trying to avoid.

What I’m trying to make is something like this:

Hex Globe Earth

Notice that there is a line of hex cells running almost horizontally across the Pacific Ocean (from the left to the right, middle) between Asia and Alaska, headed toward Greenland. Somewhere in the middle of the image, it runs into a pentagon (one of 12) and then the straight lines take different directions.

Wind, even if you aligned one row of hexagons with the equator, will run into these discontinuities in the direction of the neighbor cell. You can not draw a straight line wind around the whole equator. It must zig / zag from cell to cell at some point.

So your choice is between quasi-rectangular cells where some cells meet at a point and mass must “magically flow” to get to them; or spherical hex cells where you can’t have straight line winds following the equator as happens in the real world.

One could go for some kind of irregular tiling, but then you have no constant area / tile nor consistent angle of contact between tiles (and your math gets messy quick).

Essentially, there is no clean way to space tiles equally over the surface of a sphere, and the Earth isn’t even a sphere so it’s worse than that.

What’s That About The Point?

The best I’ve found so far is to just approximate equal spacing of points, assume mass flow is assigned to a point, and allow that the actual edges between points will be “soap bubble” boundaries confined to the least-energy surface between them. (But even that ‘has issues’ with indistinct edges and linear winds needing to, at least a little bit, serpentine between points.)

The paper I found has a decent way of quasi-packing points onto a spherical surface. As the number becomes quite large it ought to approximate a decent even spacing. He does use an odd font for theta and phi, so just realize it’s the usual theta and phi of spherical coordinates (up-down angle and right-left angle):

Regular equidistribution can be achieved by choosing circles of latitude at constant intervals dϑ and on these circles points with distance dϕ, such that dϑ ≈ dϕ and that dϑ·dϕ equals the average area per point. This then gives the following algorithm:

Set Ncount = 0.
Set a = 4πr^2/N and d = √a.
Set Mϑ = round[π/d].
Set dϑ = π/Mϑ and dϕ = a/dϑ.
For each m in 0 … Mϑ − 1 do {
    Set ϑ = π(m + 0.5)/Mϑ.
    Set Mϕ = round[2π sin ϑ/dϕ].
    For each n in 0 … Mϕ − 1 do {
        Set ϕ = 2πn/Mϕ.
        Create point using Eqn. (1).
        Ncount += 1.
    }
}

Where Eqn. (1) is the usual conversion of spherical coordinates to Cartesian: x = r sin ϑ cos ϕ, y = r sin ϑ sin ϕ, z = r cos ϑ.
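For the curious, here is my own rough Python rendering of that quoted algorithm (a transcription, not the original author’s code; variable names are mine):

```python
import math

def equidistributed_sphere_points(n, r=1.0):
    """Approximately equidistribute about n points on a sphere of
    radius r, using the latitude-ring scheme quoted above."""
    points = []
    a = 4 * math.pi * r**2 / n        # target area per point
    d = math.sqrt(a)                  # target spacing between points
    m_theta = round(math.pi / d)      # number of latitude rings
    d_theta = math.pi / m_theta
    d_phi = a / d_theta
    for m in range(m_theta):
        theta = math.pi * (m + 0.5) / m_theta
        m_phi = round(2 * math.pi * math.sin(theta) / d_phi)
        for k in range(m_phi):
            phi = 2 * math.pi * k / m_phi
            # Eqn. (1): spherical to Cartesian
            points.append((r * math.sin(theta) * math.cos(phi),
                           r * math.sin(theta) * math.sin(phi),
                           r * math.cos(theta)))
    return points

pts = equidistributed_sphere_points(1000)
print(len(pts))  # close to 1000, not exact -- each ring rounds its count
```

Note the point count comes out near, but not exactly, the N you asked for, since each ring rounds to a whole number of points.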

Do note that phi and theta can swap meaning between math folks and physics folks usage… (Computer guys just avoid Greek ;-)

Several different conventions exist for representing the three coordinates, and for the order in which they should be written.
The use of (r, θ, φ) to denote radial distance, inclination (or elevation), and azimuth, respectively, is common practice in physics, and is specified by ISO standard 80000-2:2019, and earlier in ISO 31-11 (1992).

However, some authors (including mathematicians) use ρ for radial distance, φ for inclination (or elevation) and θ for azimuth, and r for radius from the z-axis, which “provides a logical extension of the usual polar coordinates notation”.[3] Some authors may also list the azimuth before the inclination (or elevation). Some combinations of these choices result in a left-handed coordinate system. The standard convention (r, θ, φ) conflicts with the usual notation for two-dimensional polar coordinates and three-dimensional cylindrical coordinates, where θ is often used for the azimuth.[3]

The angles are typically measured in degrees (°) or radians (rad), where 360° = 2π rad. Degrees are most common in geography, astronomy, and engineering, whereas radians are commonly used in mathematics and theoretical physics. The unit for radial distance is usually determined by the context.

When the system is used for physical three-space, it is customary to use positive sign for azimuth angles that are measured in the counter-clockwise sense from the reference direction on the reference plane, as seen from the zenith side of the plane. This convention is used, in particular, for geographical coordinates, where the “zenith” direction is north and positive azimuth (longitude) angles are measured eastwards from some prime meridian.

Don’t you just love it when you have multiple conflicting “conventions” to choose from along with several measuring “systems” and a few different alphabets and languages… all in a system that can’t map to a flat grid?… Sigh.

So what the author does is just calculate an average area circle / dot, figure out about how far that is from each other, and then pack dots onto the sphere at about that distance apart until full.

It’s an approximation, but it’s the best approximation I’ve found so far for an insoluble math / geometry problem.

I’d get to keep the “irregular direction” problem from dot to dot, but the sizes are “about the same” and for very large number of dots ought to be “close enough” (as I make the same kind of hand-wave assumption ALL the models make…)

It will have a small problem of assigning any actual temperature station data point to the “right dot”, but I figure just calculating actual distance to each and then stuffing it in the closest ought to work “OK”… or I can take the “Climate Scientist” approach and just fabricate temperatures for any cell-center near to it via averaging and extrapolation… (yuck!)… But again, with something of the order of 100,000 dots, the 1200 to 6000 temperature stations we have for most of history ought to end up pretty much in one of them. Maybe…
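The “closest dot” assignment is cheap enough to sketch. This is a brute-force illustration with hypothetical dot and station coordinates, using the spherical law of cosines for distance (spherical Earth assumed):

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, r=6371.0):
    """Great-circle distance between two (lat, lon) points in degrees,
    via the spherical law of cosines."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    c = (math.sin(p1) * math.sin(p2) +
         math.cos(p1) * math.cos(p2) * math.cos(dlon))
    return r * math.acos(max(-1.0, min(1.0, c)))  # clamp rounding noise

def nearest_dot(station, dots):
    """Index of the dot closest to a (lat, lon) station."""
    return min(range(len(dots)),
               key=lambda i: great_circle_km(*station, *dots[i]))

# Hypothetical dots and one station, just to show the shape of it:
dots = [(0.0, 0.0), (45.0, -90.0), (-30.0, 120.0)]
print(nearest_dot((40.0, -100.0), dots))  # 1 -- the (45, -90) dot wins
```

With ~100,000 dots and a few thousand stations, even this brute-force scan is a trivial one-time cost.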

The result would be rather like his result, but with many more dots:

Sphere Dot Packed With Equal Areas

All parameters for that area can be assigned to the point (and ASSUMED constant over the nearby area). Mass flow can be vectored proportionately to the dots along the path of the wind, dividing mass by angle and assuming areas are about equal.

I think this has the two desirable properties I was hoping to add to the models. A large number of properties for each point can be pre-computed / set. Then a large number of ‘math bits’ for neighbors can also be pre-computed and be set to constants (so, for example, angle to nearby cell and percent of a given wind that will flow to it). Essentially a whole lot of trig functions (like cos and sin and all) can be computed once per cell and set as parameters.

Then, each dot can have “gozinta” and “comesouta” buffers. So any given cell can calculate how much mass at what temperature is leaving (given that cell time, temperature start, albedo, etc. etc.) and at what temperature / humidity / specific heat. Those values can then be placed into the “gozinta” buffer for the neighbor dot / cell.

I believe that in this way, each cell can have an assigned processor that just calculates what is to happen in that cell in any one time step. IF the “comesouta” buffer for an adjacent cell at that time step is empty, it just waits. As soon as data shows up, it computes. Time step and “rinse and repeat”.

By focusing the math on a per-dot basis, you can scale to any number of dots just by adding more processors. This would be a huge gain / benefit. It ought to sidestep Amdahl’s Law by segmenting the work appropriately on a per-cell basis.

I’d even go further and assert that with proper use of queue data, one cell could even “run ahead” of the time stamp in the next cell and just load up its work queue with increments of data (each time stamped accordingly). This would let you use very large data stores to get some degree of asynchronous computes going, unblocking some potential sticking points. So, for example, dry air flowing over the middle Sahara is unlikely to have very much complex cloud computes to take care of. It could load up the “ocean cell” queues for those Atlantic cells that need to do complicated cloud simulations at multiple altitude bands (and perhaps even participate in some of those cloud bits as distributed processors).
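To show roughly what I mean by time-stamped “gozinta” / “comesouta” buffers, here is a toy sketch. The Cell class, its (mass, temperature) state, and the mixing rule are all placeholders I made up for illustration, not anything from a real model:

```python
from collections import deque

class Cell:
    """One 'dot' with time-stamped in/out queues. State is a made-up
    (mass, temperature) pair with a trivial mixing rule."""
    def __init__(self, mass, temp):
        self.mass, self.temp = mass, temp
        self.gozinta = deque()   # (timestep, mass, temp) from neighbours
        self.comesouta = {}      # timestep -> (mass, temp) for neighbours

    def step(self, t, outflow_fraction=0.1):
        # Absorb everything queued for this timestep...
        while self.gozinta and self.gozinta[0][0] == t:
            _, m_in, t_in = self.gozinta.popleft()
            # crude mass-weighted temperature mix (placeholder physics)
            self.temp = (self.temp * self.mass + t_in * m_in) / (self.mass + m_in)
            self.mass += m_in
        # ...then post this step's outflow for neighbours to pick up.
        m_out = self.mass * outflow_fraction
        self.mass -= m_out
        self.comesouta[t] = (m_out, self.temp)

sahara, atlantic = Cell(100.0, 35.0), Cell(100.0, 20.0)
sahara.step(0)                                      # dry cell runs ahead
atlantic.gozinta.append((1, *sahara.comesouta[0]))  # queue its outflow
atlantic.step(1)
print(round(atlantic.temp, 2))  # 21.36 -- warmed slightly by Saharan air
```

The point is only the plumbing: a cell blocks (or runs ahead) on its queues, not on a global lock-step.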

In Conclusion

So there you have it. That’s where I’ve gotten to.

I think this approach will work at least as well as the square grid on a round ball approach of most of the models, and has the virtue of being more “distributed computes friendly”.

I’m going to ponder it just a wee bit more, then likely try some very basic stuff with a “few dozen” dot world and a few SBC Pi cores of computes. See what crap floats to the top.

IFF it works out reasonably easy to code, then expanding it to massive numbers of Dots and a whole forest of R. Pi Cores ought to be very straightforward.

But, as usual, “We’ll see.” ;-)

I think I’ll call it “Dotty World” ;-)


About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present "hot buttons" are the mythology of Climate Change and ancient metrology; but things change...
This entry was posted in AGW Science and Background, Earth Sciences, Tech Bits. Bookmark the permalink.

40 Responses to There Is NO Good Coordinate System For Climate Models

  1. billinoz says:

    E M I’ve been waiting for a fresh bout of your ponderings on Climate for a while. And finally here they are. You do hit the spot a lot on the failings of the computer models. A couple of hours of thinking for me in it ! And thanks for that !

  2. Soronel Haetir says:

    Until they get to grid sizes of a few km or less it’s not even close to modelling the ‘real’ climate. I live on an island in Alaska, we have a pretty sharp turn as you go along the coast, north of that bend tends to be significantly cooler than south, and the line where that change occurs is a band perhaps 250′ wide.

    That is, there are lots of days where north of some point in that bend will get snow and south rain. While the entire westward face of the island across the channel from us (a couple miles away) will get snow even though the corresponding east side of our island is getting rain.

  3. E.M.Smith says:

    You are most welcome.

    It is what I do to escape the political crap. It is a more orderly place ;-)

    It was important to do the political push in that moment, but the moment is almost past. So I’m doing more of what I like now.

    The coordinates / mapping thing really was a surprise to me. I thought there ought to be a good way, but there isn’t. Just different kinds of not exactly… I’m hopeful the area packing works OK.

  4. H.R. says:

    The dot system reminds me of my Dynamics classes. In most cases, it was useful to find the centroid of the various things interacting and use vectors to get the sum total of the forces. Depending on the shape and interaction, say a sphere bouncing off a surface, you could then calculate the force on the sphere or surface given the angle of incidence. If it was a squarish or rectangular shape, you could calculate over the area hitting some portion of another shape.

    I see the dot system as useful, being much the same where the sum of the gozinta is placed at the center and then distributed according to some logic as the gozouta into surrounding dots.

    Your observation makes a lot of sense to me, E.M. The calculations would be fairly straightforward and the transfer of energies would be far more natural than the square grids.

    Good explanation, BTW. Ya need some book larnin’ to get it, but not too much. Maybe H.S. to get the concept without being able to do all the math, though there are some bright ones that could probably do the math as well. :-)

  5. Nancy & John Hultquist says:

    You have been very busy. Interesting. Thanks.
    _ _ _ _ _
    Back in 1965 or ’66, a class I was taking had to try to make a 3-D looking map from a rectangular grid of squares. I’ll guess the grid was about 20 high by 30 wide. Cell entries were the elevations. We were using FORTRAN II-D, a drop box for punch cards, and a centrally located computer, with a 24 hour turn-around.
    This is all very fuzzy now.
    Others eventually solved the mapping problems, and no one thinks of these issues now.

  6. H.R. says:

    Ummm… what would a practical dot size be?

    Think of the McDonalds plastic ball pit. It’s really deterministic. It’s static until someone jumps in.

    But… mumble, mumble, program each ball, some sleight of hand, mumble, mumble… and you can calculate the end result of a dive into the pit. It would be a real bear, but quite doable, I’m sure. Money and time is all it takes.

    The dive into the pit of plastic balls is the initialization. You know the particulars of the body, the dive, and the splat.

    I’m not exactly sure how you’d initialize the dots. Not the same as plastic balls because the balls are static at the start. You’d have to pick a ‘now’ – a slice of the dynamic – that was pretty darn good for each dot and simultaneously true for all dots.

So is the right dot size just determined by money and time? The “ponder your posterior off” part would be the initialization of the dots.

  7. E.M.Smith says:

    I’ve spent a while pondering number of dots. I did my initial head scratching with a pad of paper and drawing hexagons on it. One over the pole, then 6 around it, then 12 then “How the heck do I bend these down onto the sphere?” That resulted in “wedding cake world” where instead of constantly doubling the ring in a flat plane, I’d run it 90 degrees (vertical) at equal numbers in a cylinder for a while…then back to radially out… then back to cylindrical then…

    I was trying to figure out how many hexagons it would take to get enough conformity to the sphere and resolve enough surface features… My conclusion was that while interesting, Wedding Cake World was not going to cut it …

So the real number depends on what kind of features you need to resolve. Do you need to resolve the Mississippi? Then cell size needs to be small enough to show it… That’s a whole lot smaller than 140 miles on a side.

    My best guess is what I stated. Hundreds of thousands to millions. For 196 x 10^6 square miles, you need that many cells to resolve a square mile… And I think you likely really need to resolve about 100 yards… So somewhere around 2 x 10^10 cells.

    Yeah, not going to happen. Even the 2 x 10^8 for a square mile resolution is not likely for decades. (AND remember you need another at least x 10 for the vertical layers ;-) so just call it about 2 BILLION cell points…)

    So as a practical matter, you will run the number of cells your available compute budget supports. Likely about 10^4 to 10^6 for government agencies with huge budgets. Hey, it’s only 3 orders of magnitude too small to be useful!

    As to starting initialization:

    Yeah, that’s a problem.

    There are 2 cases.

    1) Model converges to a realistic state over time. Most any initialization will work with enough computes to let it converge. Best is either to initialize all cells with the average data from their nearest real points; or perhaps initialize them with the best guess for what they will be. (So ocean cells high humidity and cool air layer even if the “nearby” temperature station at Hawaii Airport has it hot and dry over the runway…)

    2) Model does not converge (or worse, has dynamic divergence problems). Most any initialization will not work. Your model is unstable and will diverge to all sorts of crazy answers if you run it long enough. Apparent sane answers from short runs are just “crazy talk that’s not gone nuts yet” as you didn’t let it run to complete the divergence path it was on.

    So first job is to just start your model with some “polite fiction” and test if it converges or not. Then if it diverges, get back to work trying to fix that.

    IMHO, given that both of those are Big Shit Issues: No climate model has any hope of reflecting anything at all like reality. For the next many decades at least.

    But they can still be fun to play with and it’s always fun to admire the problems…

  8. E.M.Smith says:

    Oh, and for Very Cheap Computes, you can get Arm cores on a CPU board for about $2 each. That’s “only” $4 Billion for the raw CPUs to do minimal computes / cell. Figure at least double that for a whole system (PSU, cables…) so about $8 Billion to get it all bought. Then “some assembly required”, plus a BIG computer center, plus power, AC and staffing. I could likely get it all built and running for about $20 Billion.

    For a 1 mile resolution.


    FWIW, with some Very Fancy Coding, you MIGHT be able to run some of the compute tasks on Video Card Cores and get that hardware cost down to about 1/2 that. Though software / coding costs would rise…

    And that, boys and girls, is why they run models with 16,000 cells instead of 16 million.

  9. E.M.Smith says:

    FWIW, in addition to “Wedding Cake World” where I started, there are other avenues I explored that didn’t make it to this posting (though contributed to the thought process). One is the notion of a “Spherical Hexagon”. Like a Spherical Triangle, the thought was to just bend the hexagons. Turns out it isn’t that simple… but there are a lot of things you can do with spherical polyhedra. (One is that hex and penta world above).

    I especially liked learning that my beach ball is a hosohedron… Who knew? ;-)

    Along the way I learned that spherical geometry is non-Euclidean. Oh Joy…

    Spherical geometry is the geometry of the two-dimensional surface of a sphere. It is an example of a geometry that is not Euclidean. Two practical applications of the principles of spherical geometry are navigation and astronomy.

    So, oh great, I can toss out all that geometry I learned… (Full year of it!).

    I still hold out a bit of hope for Spherical Triangles. Everything meets at an edge (even if it is a curved one…) but they don’t travel in straight lines… so… wiggly wind again.

    BUT it rapidly devolves into a forest of Trig Functions just for a simple triangle or two. So aside from my not really wanting to learn an entire new geometry and refresh a lot of trig just to figure out if this approach was “worth it”… There was the practical consideration that Trig Functions Are Compute Expensive. If this stuff was going to run on cheap compute cores, I needed to avoid that much trig. For The Cores Sake! Yeah, I was doing it to “Save the Cores!” ;-)

    It was not too long after that I ran into another version of the sphere dotting process and thought it had the most promise.
    (see the long list of various matlab functions for all sorts of sphere and ellipsoid filling and other functions that were not what I wanted on that page… )

    SPHERE_DELAUNAY, a MATLAB program which computes the Delaunay triangulation of points on the surface of a sphere in 3D.
    SPHERE_GRID, a MATLAB library which generates a grid of points over the surface of a sphere in 3D.
    SPHERE_LEBEDEV_RULE, a MATLAB library which computes Lebedev quadrature rules over the surface of the unit sphere in 3D.
    SPHERE_LLQ_GRID, a MATLAB library which computes a grid of quadrilaterals bounded by latitude and longitude lines over the surface of a sphere in 3D.
    SPHERE_LLT_GRID, a MATLAB library which computes a grid of triangles bounded by latitude and longitude lines over the surface of a sphere in 3D.

    Bernardt Duvenhage’s Blog
    Generating Equidistant Points on a Sphere
    Jul 31, 2019

    In this post I’ll revive work I did during my PhD to generate 3D points that are equally spaced on the unit sphere. Such equidistant points are useful for many operations over the sphere as well as to properly tesselate it. The method is based on a spiral walk of the spherical surface in angular increments equal to the golden angle. The golden angle is related to the golden ratio.

    Two quantities are in the golden ratio, φ, if their ratio is the same as the ratio of their sum to the larger of the two quantities. a/b=(a+b)/a:=φ which is approximately 1.6180339887… The golden angle ϑ is the angle subtended by the small arc b which is approximately 2.3999632297… radians or 137.5077640500… degrees.

    The ratios between consecutive Fibonacci numbers approach the golden ratio. Also, an alternative for expressing the Fibonacci sequence is F(n)=(φ^n − (1−φ)^n)/√5. #mindblown. The spiral walk discussed here is therefore often referred to as a spherical Fibonacci lattice or a Fibonacci spiral sphere.

    The method presented here I originally implemented for a paper on Numerical Verification of Bidirectional Reflectance Distribution Functions for Physical Plausibility. A pre-print of the paper is available from ResearchGate and via my Google Scholar page. The paper also discusses an alternative method based on subdivision of a 20-sided regular icosahedron.

    Which may still have legs, but getting all wrapped up in Phi and Fibonacci seemed a lot more complicated than the “area bin packing” approach above that’s got pretty simple math that will compute “way fast”. The tessellated spheres in the linked article also looked a bit ‘irregular’ and the dot pattern didn’t look conducive to linear winds at the equator. But they might do a better job of generating the cyclonic winds that prevail at higher latitudes, so might be a good explore for later…
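    For anyone wanting to play with it, the golden-angle spiral the quoted post describes can be sketched in a few lines (my own illustrative code, not Duvenhage’s; the even spacing in z is what gives each point roughly equal area):

```python
import math

def fibonacci_sphere(n):
    """Place n roughly equidistant points on the unit sphere by
    stepping around a spiral in increments of the golden angle."""
    golden_angle = math.pi * (3.0 - math.sqrt(5.0))  # ~2.39996 rad
    points = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n   # even spacing in z => even area
        r = math.sqrt(1.0 - z * z)      # radius of that latitude circle
        theta = golden_angle * i        # longitude advances by golden angle
        points.append((r * math.cos(theta), r * math.sin(theta), z))
    return points

pts = fibonacci_sphere(1000)
# Every point should sit on the unit sphere.
assert all(abs(x*x + y*y + z*z - 1.0) < 1e-9 for x, y, z in pts)
```

    The dot pattern this produces is the “irregular”-looking tessellation mentioned above; whether its neighbour relationships are workable for flowing air between dots is another question.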


    That’s some of the “stuff along the way” you might want to look at that contributed to the above article but didn’t make it into the (already too long) article ;-O

  10. cdquarles says:

    Let us add another complication. In real life, the atoms and molecules of gases are moving on the order of 1 km/sec (slower at the poles and faster at the equator) at surface conditions. Wind, strictly speaking, is a biased random walk. I wonder how you handle that via vector matrix mathematics, without losing information. Throw in that many of our measurements are not of the actual entities, but are proxies for those entities.

  11. H.R. says:

    As I said, it all comes down to how much time and money you want to spend.

    If they had started with your idea, E.M., we’d be a lot farther down the road.

  12. H.R. says:

    @cd – Are you sure wind is a random walk? It seems to me that it’s a function of pressure differentials, Coriolis effect, convection, and terrain.

    You are right that wind would be poorly modeled as a vector and that information would be lost. It’s a single(?) ‘gozouta’ and the next dot’s ‘gozinta’ function has no way of knowing how to break it up. I suppose the dot’s internal calculation could somehow use the four factors I mentioned. You’d need a very detailed 3-D map of the World.

    The temperature (there’s that proxy you mentioned), however it’s measured and represented, probably does hold up over a large area. When the forecast is for 70(F) for your neck of the woods, of course it means that the air mass over the forecast area is going to average about 70(F), but any specific spot will be +/- several degrees off from that.

    It’s my understanding that the current grid size is way too coarse for a single temperature to be even remotely valid over the entire area. That’s where dots would be helpful.
    What your comment brought to my mind is clouds. Clouds are the Devil’s Playground right now, and I don’t see it being any different for any gridding scheme. Willis’ Thermostat hypothesis explains how clouds greatly affect surface temperatures, but so far, as best I know, no one has a handle on modeling clouds. I have no idea how you’d hand off cloud information cell to cell for any type of grid.

    Oops! And clouds affect wind because their shading creates temperature differentials… *sigh* I missed that on the first go ’round of factors.

  13. A C Osborn says:

    E M, Clive Best has also looked at the problems of current Global Temperature calculations here

    Might be worth a read.

  14. H.R. says:

    @E.M. – Wedding Cake, eh?

    Well, yah, but… it still seems a better way than the current gridding system to go about things.

    Did you consider sub-dots to distribute the ‘gozinta’ information within the dots? Not perfect, but perhaps useful, as the parameters of the sub-dots could be say, elevation, and other factors. Hmmmm… yeah, some things like elevation would be constants in sub-dots. Not enough coffee in me to think of other factors right now which could also be constants.

    Your Wedding Cake is certainly worthy of further contemplation.

  15. H.R. says:

    @A C – Hey, that’s a good link. I skimmed it and it’s well worth a deeper dive when I have some time.

    Thanks, A C!

  16. Simon Derricutt says:

    EM – I also spent a while thinking about the same problem the last time we were talking about new climate model designs. Though I did conclude the dots were the closest thing to a workable method, it still looked practically unworkable, because we’d need a grid of around 100m or so and too many computes.

    Though maybe a lot of the calculations could be sidestepped by using look-up tables, and each point would only need to know the coordinates of the nearest points to it (and those could be set up in small arrays of direction/distance so the air in a particular wind-direction could be partitioned between them), still seems you end up with millions of parallel processes where each needs its own tables and memory, and each point needs to be initialised with an individual set of tables. Since we’d need to know the current direction of the Sun for each point, and that solar input could be blocked by cloud in a higher-up “cell” in the right direction, and there may be partial blocking since adjacent overhead cells may have different cloudiness, we’re starting to get too many computes for a single cell. Is it cheaper to update the tables for Sun position or for each cell to calculate it for each time-tick? Also to be noted is that each time-tick would need to be short-enough that the highest wind-speeds only move the air-mass to the next dot centre, and not beyond it, so if we have an average 100m between dots and a maximum wind speed of 160km/h (around 100mph, 44m/s) then that brings the time-tick down to around 2 seconds. That’s a pretty short time-tick, and I expect the current models use times of the order of hours.
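    That spacing-vs-tick arithmetic is essentially a CFL condition from numerical fluid work: the tick must be short enough that the fastest wind moves air no further than one dot spacing per step. A trivial sketch with the numbers above:

```python
def max_time_tick(dot_spacing_m, max_wind_ms):
    """Largest time step such that the fastest wind carries air no
    further than one dot spacing per tick (a CFL-style constraint)."""
    return dot_spacing_m / max_wind_ms

dt = max_time_tick(100.0, 44.0)  # 100 m dots, ~160 km/h (44 m/s) wind
print(round(dt, 2))              # ~2.27 s, i.e. "around 2 seconds"
```

    Halving the dot spacing halves the allowable tick, so the compute cost of finer grids grows faster than the cell count alone suggests.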

    As regards getting the initial conditions correct, if the model doesn’t converge on something pretty close to today (almost) no matter what starting conditions you use, then it probably isn’t realistic anyway. The main thing about the real climate is actually how stable it is despite the cyclical differences in energy input from the Sun, so there is quite a bit of negative feedback in the real system. There is probably feedback from the biosphere (see Lovelock’s Gaia hypothesis) that tends towards conditions conducive to life, but I’m not sure how you get that into the model – the albedo at ground level will depend on the vegetation on the ground. Certainly the cells at ground level will need the most computing to take account of solar input, soil moisture and water-evaporation (thus also what vegetation there is), angle of the Sun combined with angle of the ground, updraughts on hills, impediments to wind such as trees (or buildings or wind-turbines), and likely several other large effects I haven’t thought of. The cells higher up only need to take in air-masses and apply Coriolis calculations, lapse rate, and condensation (or not) of water – but then that also depends on dust or other nucleation centres such as cosmic rays.

    I’m thus wondering whether this problem is actually suited to digital computing, and whether it might be better to use an analogue array. Maybe also physically arrange those processors in a spherical shell. Since for completeness we’d need to include ocean currents, and the effects of ice-melts on water-density and those currents, it’s very much a cross-coupled system and everything depends on everything else. Missing out one of those interdependencies could result in unexpected oscillations that we don’t see in the real world.

    The dot method looks to be the optimum way of making a model, but the dot-separation and thus the time-tick needed to be able to account for clouds of size down to 100m or so makes the computation requirements several orders of magnitude larger than we can reasonably expect to get hold of. Could be why the current climate models can’t be relied upon anyway to predict the future with any skill beyond 3 days. In order to be computable, they need to make too many simplifications, and thus cannot model what’s actually happening. Getting to Einstein’s “make things as simple as possible, but no simpler” gives us a pretty complex system with a load of computations (maybe 1500 or so cycles of computation over several million cells) just to predict the next hour of weather. I suppose you could at least test the predictions by looking out of the window, since simulation-time may not be that much different from real-time.
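    The scale problem is easy to make concrete with a back-of-envelope count (illustrative assumptions: 100m dot spacing, 100 vertical layers, 2-second ticks):

```python
import math

EARTH_RADIUS_M = 6.371e6
surface_area_m2 = 4.0 * math.pi * EARTH_RADIUS_M ** 2   # ~5.1e14 m^2

dot_spacing_m = 100.0   # horizontal dot separation (assumed)
layers = 100            # assumed vertical layers (~10 km at 100 m each)
tick_s = 2.0            # ~100 m spacing / 44 m/s max wind

columns = surface_area_m2 / dot_spacing_m ** 2   # ~5e10 surface columns
cells = columns * layers                         # ~5e12 cells
ticks_per_sim_hour = 3600.0 / tick_s             # 1800 ticks per sim-hour
updates_per_sim_hour = cells * ticks_per_sim_hour
print(f"{updates_per_sim_hour:.1e} cell-updates per simulated hour")
```

    That comes out around 10^16 cell-updates per simulated hour, before counting the many operations inside each update, which is why the computation load looks several orders of magnitude beyond what’s obtainable.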

    Using larger dot-separation, to reduce the computation load and enable larger time-ticks to work, would involve getting away from physical calculations of what happens and instead substituting averaged estimates of what happens with a partially-cloudy sky. Since getting that estimate of cloudiness a few percent wrong will produce a large error, I suspect such a model wouldn’t be worth trying. Clouds are both really important and really complex, and some days I see lots of tiny fluffy clouds, others they’re bigger, and we can see waves in them at times from an aircraft. If the model ends up showing that spottiness of the cloud layer, with maybe different clouds at different levels going in maybe different directions (which I do see happens at times) then it would seem to be closer to reality. I’ve watched a small cloud on an otherwise pretty cloudless day, and it seems like in some places it is condensing and other places evaporating again, so it changes shape. Larger clouds also change shape over time – they just aren’t static but are constantly changing as the equilibrium points change.

    Though I think the dot model is probably the nearest we can get to a workable design, and you could make things a bit easier by having wider spacing over oceans and deserts, and I think such a model could produce better weather forecasts, I suspect that projecting that model years into the future would still remain a step too far. In order to get an answer in a reasonable time we’d need to reduce the computation load by using bigger cells and timesteps, and thus lose the accuracy.

    Bit of a bugger to crack this nut….

  17. H.R. says:

    Simon Derricutt: ” I suppose you could at least test the predictions by looking out of the window, since simulation-time may not be that much different from real-time.”

    Ok. That gave me a big chuckle 😁

    It’s probably true that it would take longer to calculate the changes than for the actual changes to occur, if you were able to get things fine enough (and with the correct calculations).

    So ‘now’ would keep getting farther and farther ahead of model predictions, and it’s just much easier to record ‘now’ than it is to calculate the past. Oh the ironing!
    I see we agree on the cloud problem, though I don’t think there’s anyone with two connected brain cells that wouldn’t also think it’s an intractable problem for today’s capabilities. Perhaps FTL interstellar space travel would be an easier problem to solve 😜

  18. billinoz says:

    Can I add to the complexity? The Earth is moving. It is rotating (spinning) on its axis; it is moving around the Sun, completing an orbit roughly every 365 days. And the entire solar system is moving as part of one of the arms of the Milky Way galaxy; and the entire galaxy is also moving as a collective entity in a direction at significant speed. (Pardon, I do not remember the details.)

    Thus the Earth is Never in the same point in Space more than once. The Earth is each moment in a unique place in the universe. And the Outer Space of the Universe is something to ponder about. We tend to assume it is empty ‘Space’ with little in it. But this assumption is false. There is a lot of stuff out there in outer space. And so the solar system, or just the Earth, passing through a cloud of interstellar dust for example, or a dustless part of outer space, has impacts on the global climate. But until the past 30-40 years we humans were in no position to know or think about such impacts.

    So how does one ‘model’ such an Earth? Or come to any accurate predictions? I suggest it is impossible.

  19. Simon Derricutt says:

    H.R. – there may be a solution to interstellar travel fairly soon. Mike McCulloch seems to be getting good results on the experiments based on QI theory. Bit OT for this thread, though…. If you want I’ll put some pointers up on WOOD.

    As far as I know, clouds can be reasonably modelled over a volume of a few hundred metres now, but it requires a lot of computing power. Spreading that over the whole Earth, and taking into account the ground conditions for each such area (which isn’t done in the accurate calculations yet, but just looking at temperatures and flows) would require computers many orders of magnitude larger. Intractable at the moment, and likely to remain so for the foreseeable future.

    It may be practical using analogue computers though, where the inaccuracies are smoothed away by adding in a gradual decay to some average state – yep, that’s a kludge but it should be near-enough to the reality to be useful – set the rate of decay per cycle to be slightly larger than the indeterminacy of the calculation. Also possible, if the inaccuracies are genuinely random, to just let it go without correction. By passing analogue values between the analogue processing elements each “calculation” could happen in less than a microsecond, so we’d be running at around 2 million times real time. We’d need millions of such analogue processors, though, and all connected together in a (logical) sphere. Basically a neural net, where each “neuron” (or analogue processing module) has connections to around 20 other “neurons” around it and above/below. Each would need to be individually addressable to set initial conditions, and to recover current data tables. I’d suspect that vertical resolution would need to be around the same as the horizontal resolution, giving an extra order of magnitude over EM’s 1000m vertical resolution and somewhere around 30 million processors (including ocean currents). Might take a few weeks to build and program such an array…. I really doubt we’ll do such a project.

    Cells at the ground level need inputs of insolation, ground humidity, ground inclination, ground height, obstructions, albedo, air pressure, air humidity, wind direction and speed. Once above ground level, they’d only need insolation, temperature, pressure, humidity, and wind speed/direction. Likely I’ve missed some bits there, though.

    Every time I come back to this problem, and try and find a way of getting a good model we could actually build, the resultant rough design looks way too huge to be practical if you want useful results. One we could build would not be good enough to be useful.

  20. Simon Derricutt says:

    Billinoz – good points, and there’s a fair chance that such things have indeed had climatic effects in the past. Currently, though, we don’t know the dust-levels we’ll be passing through, or what level of cosmic rays we’ll be receiving at various points in the future. I’d suggest that changes in the cosmic backyard won’t change that rapidly, though, and over the next century or so we may not see that much difference. Changes in the Sun and Earth magnetic fields may have a larger effect on that timescale, and yet we can’t predict those at the moment, either.

    Worth considering, but I suspect we can’t put it into a model because we have no predictions for the future conditions.

  21. cdquarles says:

    @H. R.
    Yes, I am. Consider that a cubic meter of air contains roughly a kilogram of atoms/molecules all moving at roughly 1 km/sec at surface conditions in 3 dimensions (a bit faster at the equator and a bit slower at the poles). Wind, though, has to be a bulk biased random walk that moves at a small fraction of that, a few km/hr (hour!). Pressure is the result of the impact energy of all of those randomly moving particles, there being too many, and too small, for us to track them individually. Convection is that biased random walk in the vertical direction. Advection is the same in the tangent plane (2 dimensions) to a point on the surface. Surface irregularities do add to the bias. The better-named Coriolis Effect is the result of differential surface rotation biasing that part of the bulk motion. Temperature is the proxy for the internal kinetic energy of a sample of matter, and only its internal kinetic energy. Other than the gravitational field, there is no container in actuality to do that actual sampling. You can, of course, make a box or an imaginary parcel in your mind; but that’s only approximately what reality is.
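    A toy one-dimensional sketch of that picture: individual steps at thermal speed, with a tiny constant bias, average out to a bulk drift of a few m/s (the speeds are illustrative, and this is a caricature, not a physical simulation):

```python
import random

random.seed(42)

THERMAL_SPEED = 500.0   # m/s, the order of molecular speeds in air
DRIFT = 3.0             # m/s bulk bias, a gentle wind
STEPS = 1_000_000
DT = 1e-3               # seconds per step

# Each step the molecule dashes left or right at thermal speed,
# plus a small constant bias; the net displacement is the "wind".
x = 0.0
for _ in range(STEPS):
    direction = random.choice((-1.0, 1.0))
    x += (direction * THERMAL_SPEED + DRIFT) * DT

mean_velocity = x / (STEPS * DT)
print(mean_velocity)    # close to DRIFT, nowhere near the 500 m/s steps
```

    The huge random component cancels in the bulk, which is why models can (mostly) get away with tracking only the bias, i.e. the wind vector.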

  22. E.M.Smith says:


    I still have a bit of hope for Wedding Cake World. Based on using spherical triangles to construct the hexagons and conform them to the surface of a globe. BUT…

    As a disk, the hexagons double with each row outward. As a cylinder they are the same constant number. IF you have conformed your hexagons to the surface by making them from curved spherical triangles, then somehow you must conform / reconcile the 2 x 2 x 2 x 2… growth of number of hexagons in the disk with the actual spherical slower growth. In short, your hexagons have to shrink as you plate them down the globe toward the equator. Not Good.

    So you end up back at the need to prune out some hexagons as you add concentric circles so that they conform to the globe. Either the hexagons change size, or they change number, and both of those are a pain. But potentially workable.


    Yeah, the “size vs time step” is a big issue. One I’d carefully avoided thinking about ;-)

    The model I looked at most used a 1 hour time step IIRC. 24 increments in a day. A 150 MPH hurricane wind can go 150 miles then, but with a grid size far beyond that, it didn’t matter. I did ponder that the grid size was so big even a hurricane would not show up. I think that matters rather a lot…
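    The “too big to show a hurricane” point is really a sampling argument: a feature needs at least a couple of cells across it to exist in the model at all. A trivial check (sizes illustrative):

```python
def resolvable(feature_miles, cell_miles, cells_needed=2):
    """A feature narrower than ~2 grid cells just averages away."""
    return feature_miles >= cells_needed * cell_miles

print(resolvable(10, 100))   # 10-mile hurricane eye, 100-mile grid: False
print(resolvable(200, 100))  # 200-mile upper cloud deck: True
print(resolvable(10, 5))     # 10-mile eye on a 5-mile grid: True
```

    So a grid that can represent the storm’s broad cloud shield can still be blind to the eye and the rain bands that do the real work.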

    Per computes:

    Part of why I like the “Gaggle of Pi” approach is that each compute engine can have a GB or 4 of memory to hold a lot of lookup tables, then another 128 GB of uSD card for even more “pretty quick” reads of data. For about $40 you can add a TB of more scratch space, work queue, and tables on a much slower USB drive. That’s a LOT more lookup space than I think can even be theoretically beneficial… so I’m pretty sure you can have more than enough lookup tables.

    The big one I’ve ignored so far is “between the layers” dependencies. My “mental model” of how this would work is based on just one layer. Start at some point, compute your values, load the next cell work queue, look at your in queue, rinse and repeat. But “that’s just wrong”…

    Somehow you need to also look up and down. Clouds above, sunny dirt below, water surface, etc.

    So far I’ve completely ignored how to handle that.


    On a flight to Florida, looking out the window, I noticed all the clouds below were in nice little rows of puffy bits. Why? As the flight progressed, the spacing changed. Why? Sun angle.

    A row of cloud does not shade the ground beneath it. (Only at high noon at the equator). Every other latitude and time of day, it shades a space off to the side. THAT space has lower evaporation and cooler air. The space directly below the cloud (gravitationally, not geometrically – they differ…) is sunny, so warmer and evaporates more (and thus supplies the cloud with growth…). So as a cloud grows it shades out the neighbor space and stunts cloud formation from it. The end result was long rows of clouds with long shadows just off to the side of them, and then another row of clouds and then…

    That observation has haunted me. Now you must, somehow, know where on the diagonal at any given latitude and time of day and season, the shadow falls, and that conditions the air above it. That’s a PITA.

    At cell size bigger than about 100 miles, you can ignore this as the clouds are well below a 100 mile height, but as you get cell size small enough to be very accurate (say 10 miles, or 50,000 feet) this will start to matter.
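    The shadow-offset geometry itself is just one tangent: a cloud at height h with the Sun at elevation angle e shades a spot h / tan(e) off to the side. A flat-ground sketch with illustrative numbers:

```python
import math

def shadow_offset_m(cloud_height_m, sun_elevation_deg):
    """Horizontal displacement of a cloud's shadow from the point
    directly beneath it (flat-ground approximation)."""
    return cloud_height_m / math.tan(math.radians(sun_elevation_deg))

# A 1500 m cumulus base:
print(round(shadow_offset_m(1500, 90)))  # Sun overhead: 0 m
print(round(shadow_offset_m(1500, 45)))  # 1500 m off to the side
print(round(shadow_offset_m(1500, 20)))  # ~4121 m, several fine-grid cells away
```

    At low Sun angles the shadow lands kilometers away, so with ~1 km cells the shading from a cloud belongs to a different cell than the cloud itself.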

    But even at larger cell sizes, you must somehow get vertical air flow “right” in terms of mass, temperature, humidity, etc. All while there are at least 2 major types in play. The cooler dryer air in the shadow bands, sinking, and the warmer wetter air in the cloud forming bands, rising and making cloud that changes the former. So some “reasonable” assumptions about that have to be fabricated and tested and worked up … somehow…

    For Hurricanes, you need to deal with tons of ice being made, dropped, melting and with “rain bands” that can dump feet of water, often many miles away from where the water was picked up. In a cell of 500 miles that has to be assumed into some sort of average function. In a cell of 10 miles it must be modeled. How do you do that? When a cyclone makes a 10 mile eye, but the upper deck spreads out 200 miles, how do you get that right (including all the vertical flows and shading) with a 100 mile cell? If you can’t get cyclones and hurricanes, how can your model be right? They define our weather patterns. (Watch nullschool, you see circles and swirls and cyclonic action everywhere, changing size with altitude. At 10 mb there’s ONE giant cyclone over the pole. IF you don’t get that cyclonic action then you have not got your layers right and your cells right, as that Polar Vortex defines how winter comes.) So you must have cyclonic action at the scale of a hemisphere, at 10 mb, but at 1000 mb it starts in cells much smaller. Circles in circles… consolidating as they rise.

    It’s that “vertical interaction” that’s sitting on my “How the heck do you?…” plate. How do you do vertical swirling with any hope of ‘time stepping’ all the cells in the layers up and down? I hope it arises as a natural consequence of something simple. I fear it requires accurate modeling of real complicated processes.

    So will it be adequate to have air go “up, then over, then up, then over” or do you really need “air rising at 30 fpm and NW at 50 fpm with a torque of…” IF you can’t get swirls with air doing “up then over then up then over” in layers that are less than 50 and cells wider than 100 miles, then give it up. Not enough computes. If you need actual vectors for each parcel of air, globally, it’s just painful.

    Clouds bother me, but I think I have clue how to make them happen. It’s swirls that give me nightmares ;-) How do you generate a fundamental fractal pattern of nature that is circular in nature from a model that is fundamentally fixed number and size boxes?… ( I hope dots help to solve this as the air mass can just have a vector of direction, both up and over… and maybe that’s enough to generate swirls … IF you have a fine enough dot mesh…)


    Right now we are traveling through a “local cloud” of interstellar dust. That wasn’t the case some several thousands of years ago, and will not be in several more thousands. Similarly, we “bob up and down” through the galactic plane about every 50,000 years.

    All that is now thought to influence long term climate, perhaps even triggering ice age glacials, but nobody really knows in what way and to what extent. So, yeah, it’s an issue…


    Oh, and for vertical cells, you need opacity on the line of sunshine in and on the line of IR outbound… Those diagonal sun rays in the morning and evening are a bitch…

    Per cosmic rays: I think you can model them well enough using a constant for the space source, but then modulating based on solar cycles (that we know reasonably well for human time scales).

    The interstellar dust cloud is on the order of 10s of thousands of years for a change, so IMHO can be ignored for a climate model looking at 10s of years.

  23. E.M.Smith says:


    What you say is true, but I think we can use the bulk properties. (IF I have to model every single atom, it can never work, so I’m going to ignore that problem. Hey, there may be hope for me yet as a “Climate Scientist”! Ignore uncomfortable points that might stand in front of your grant request for a bigger computer? I got this! ;-‘)

  24. cdquarles says:

    Oh, another point about biased random walks: I did some light microscopy years ago. A drop of liquid containing solids spread out by a small square piece of glass on top of a thicker glass rectangle, so a constrained 3 dimensional system. You could see the Brownian motion effect (a random walk). Given the necessary and sufficient conditions (evaporation from the edges, heating from below by the light source, and differentials in the rest of the room environment), you could watch a particle bounce around from one edge of the field of view to its opposite, in an irregular but mostly linear path. Tell me why that wouldn’t happen in an open atmosphere, in three dimensions.

  25. E.M.Smith says:


    Ah, yes. Watching things wiggle and bounce under the microscope. I’ve done it…

    Yes, it happens in the full atmosphere. The necessary assumption is that, given a mean free path that is quite long relative to the atom and quite short relative to the cell size, the typical atom smacks into some other atom long before it can escape.

    No, I don’t know if that assumption is true…
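    It can at least be checked with the standard kinetic-theory estimate λ = kT / (√2 π d² p). With textbook values for air the mean free path at surface conditions is a few tens of nanometers, vastly shorter than any cell size (a sketch; the molecular diameter is a typical effective value, an assumption):

```python
import math

def mean_free_path_m(temp_k=288.0, pressure_pa=101325.0, diameter_m=3.7e-10):
    """Kinetic-theory mean free path; 3.7e-10 m is a typical
    effective diameter for an air molecule."""
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    return k_B * temp_k / (math.sqrt(2.0) * math.pi * diameter_m**2 * pressure_pa)

lam = mean_free_path_m()
print(f"{lam * 1e9:.0f} nm")   # ~65 nm at surface conditions
```

    So for any cell larger than micrometers, the “smacks into a neighbour long before escaping” assumption holds at the surface; only in the very thin upper atmosphere does the mean free path grow to cell-like scales.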

  26. E.M.Smith says:

    The Local Interstellar Cloud (LIC), also known as the Local Fluff, is the interstellar cloud roughly 30 light-years (9.2 pc) across, through which the Solar System is moving. It is unknown if the Sun is embedded in the Local Interstellar Cloud, or in the region where the Local Interstellar Cloud is interacting with the neighboring G-Cloud.
    The Solar System is located within a structure called the Local Bubble, a low-density region of the galactic interstellar medium. Within this region is the Local Interstellar Cloud, an area of slightly higher hydrogen density. The Sun is near the edge of the Local Interstellar Cloud. It is thought to have entered the region at some point between 44,000 and 150,000 years ago and is expected to remain within it for another 10,000 to 20,000 years.

    The cloud has a temperature of about 7,000 K (6,730 °C; 12,140 °F), about the same temperature as the surface of the Sun. However, its specific heat capacity is very low because it is not very dense, with 0.3 atoms per cubic centimetre (4.9/cu in). This is less dense than the average for the interstellar medium in the Milky Way (0.5/cm³ or 8.2/cu in), though six times denser than the gas in the hot, low-density Local Bubble (0.05/cm³ or 0.82/cu in) which surrounds the local cloud. In comparison, Earth’s atmosphere at the edge of space has around 1.2×10^13 molecules per cubic centimeter, dropping to around 50 million (5.0×10^7) at 450 km (280 mi).

    The cloud is flowing outwards from the Scorpius–Centaurus Association, a stellar association that is a star-forming region.

    In 2019, researchers found interstellar iron in Antarctica which they relate to the Local Interstellar Cloud.

    So yeah, it matters, but if we limit our model runs to -40,000 years to +10,000 years we can assume it is a constant ;-)

    “Given these conclusions what assumptions can we draw? -E.M.Smith”

    My conclusion is that I want to play with models therefore I draw the assumption that it is reasonable to assume constant those things that are a pain ;-)

  27. pauligon59 says:

    A hard problem indeed. Any theoretical guidance out there as to how close to reality (modelling every atom) you have to get to be able to project some time into the future with any accuracy? We all know that weather models are only good for so many days into the future before they no longer match reality – which is why I’ve had a hard time putting any faith in what the climate models purport to tell us.

    On the dotty Earth representation: I didn’t understand whether the dots’ positions were static or not. If you created a static model of the terrain (the non-liquid, non-gaseous stuff) and then represented the rest of the atmosphere with the equal-volume dots, and allowed them to move in three dimensions as well as change between liquid and gaseous state based on energy content, you might be able to define a dot as an object.

    Information exchange between dots is also an interesting problem, even if they remain statically positioned, as you need to know which dots they can exchange information with, and how much goes to each dot in the three dimensions around it.

    Plate tectonics also plays a role in climate. Movement is slow, centimeters per year, but over 100 years you have a lot of centimeters with the potential for changing ocean currents. Which, as you noted, changes climate. How much of that do you have to model to make the model trustworthy? I’ve heard that the development of the Antarctic circum-polar ocean current had a dramatic effect on the world wide climate and that its development occurred as a result of plate tectonics.

    Lots of changing variables in the real world and all of them interact to some degree or another. Do we even know enough to understand which of the variables are insignificant over a century or not? I seriously doubt it.

  28. cdquarles says:

    My point, all, is that wind is akin to that small particle embedded in a larger bulk, where the larger bulk is moving faster randomly, yet constrained by things that impose conditions on the larger bulk. You can use these constraints and conditions, but don’t forget the limitations implied and the uncertainty that results from the limited knowledge. We can, still, get good enough models that have skill sufficient to make decent predictions; yet we must never elide the limitations.

  29. Simon Derricutt says:

    EM – I think the swirls will not be a problem, since if the dots are on a small-enough scale then the Coriolis calculations will naturally produce those. A problem may happen from excessive accuracy, in that if you calculate what happens to the air over an infinite hot plate, no particular point will rise; in practice, however, there are perturbations and you’ll end up with the air rising in some spots with the air falling between them. Thus to calculate what actually happens you’d need to add in various types of perturbation and then choose the flow that gets the most “votes” from the calculations. Makes life easier to choose an odd number of perturbation calculations, but of course that step adds to the calculation burden. There may be a way to check on whether a perturbation calculation is needed or not. Still, based on real life, you may get dots side by side and under the same conditions where one is rising air and the other is falling – the calculations for what happens must happen subsequent to all the local conditions for the dots being calculated.

    The slanted sunlight going through bands of clouds was why I suggested that the height of the cells needs to be reduced. The dot area as we go up doesn’t really change that much over the troposphere, so inaccuracies from that would be minor. However, the calculations for what dot at ground level would be shaded depending on the Sun angle locally at that time-tick would involve a fair bit of trig.

    I can see why the current models use large cells and long intervals between ticks, since it simplifies the calculations tremendously. The problem about simplifying it that far is that the results aren’t realistic.

    Even though the dot idea gives effectively equal volumes, it does give a problem in calculating the direction of each dot relative to its neighbours. It might be easier to calculate using a constant increment of longitude and by having a box (patch) size equal to that in latitude at the equator. This would get smaller the further you get from the equator, but it could be changed once it’s halved in width to double it back to almost the original width. With a lot of winds often running along a line of latitude, this might make some of the calculations (especially what is shaded by cloud) a bit easier, and possibly mostly a look-up.

    Though the poles would be a single patch rather than a lot of wedges, the insolation only varies slowly there so the weather-changes are driven from elsewhere, so again I doubt this would produce much inaccuracy. Could be the polar patch could actually be quite large relative to the equatorial patch without introducing much error.

    The advantage of this method is that the area shaded by clouds higher up could largely be done by a look-up, with a change of the table only really needed every 4 simulation days or so to keep a reasonable accuracy. With each patch in a fixed relationship to those around it, the only real hassle is that the volume of air in each patch varies with latitude.

    The good thing, though, is that there is no area that isn’t covered (the patches all meet at lines), whereas with the dots they will not fit exactly into the diameter at that latitude, so there will be locations left uncovered or where you need to use a different volume, and that will make the slanting-light-through-clouds tricky. However, if you treat them as being dots but with a variable size, with the local size marked in the table for that dot, then all the other advantages of the dot method should be available.
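    A sketch of that halving scheme (my own reading of it; the 100 km equatorial patch width is illustrative): keep a constant patch count around each latitude ring, and halve the count whenever the physical width falls below half the equatorial width:

```python
import math

EARTH_RADIUS_KM = 6371.0

def patches_per_ring(lat_deg, equator_patch_km=100.0):
    """Patch count around a latitude circle in a 'reduced' grid:
    start from the equatorial count and halve it each time the
    patch width drops below half the equatorial target."""
    n_equator = round(2 * math.pi * EARTH_RADIUS_KM / equator_patch_km)
    circumference = 2 * math.pi * EARTH_RADIUS_KM * math.cos(math.radians(lat_deg))
    n = n_equator
    while n > 4 and circumference / n < equator_patch_km / 2:
        n //= 2
    return n

for lat in (0, 45, 60, 80):
    n = patches_per_ring(lat)
    width_km = 2 * math.pi * EARTH_RADIUS_KM * math.cos(math.radians(lat)) / n
    print(lat, n, round(width_km, 1))
```

    With a 100 km equatorial target this keeps every patch between roughly 50 and 100 km wide at all latitudes, at the cost of the seam rows where counts halve.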

    Each patch will still have to know where its neighbours are, the same as with dots, and you’ll still have to apportion the air movement out between them. It’s a bit less work, though, since the number of neighbours and the angles between them will be the same for all patches on the same latitude, whereas with dots every one would be different.

    For the Coriolis calculations that produce all those swirls, you really only need to know the momentum of the air and the differential pressure across the patch, and use conservation of momentum on the rotating world along with the force (thus the change of the momentum’s size/direction) on the mass of air. In this case, of course, momentum is absolutely conserved; since linear momentum is preserved, so is angular momentum, and it might be easier to calculate using angular momentum as the air rises or falls. The angular momentum of a column of air as it changes height will however be only a small percentage of the perceived angular momentum of a multitude of dots/patches as they form a cyclone or anticyclone. You’d only really need to consider that with very large cells, and the smaller the cell gets, the smaller the error from ignoring it.

    I figure that if you start with the guzinta (what goes into the cell) and see how that changes the airmass, then apply the momentum-change from the rotation of the Earth in that time tick, and then calculate the comesouta (what comes out), most calculations should be fairly simple and you’ll end up with realistic swirls. Only the ground-based cells have a lot more calculation involved as to what’s coming in that isn’t from other cells. Cells above the ground-level one will need to feed the guzinta of the correct ground-level cells if they produce cloud.
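    The “apply the momentum-change from the rotation of the Earth in that time tick” step can be sketched as a simple rotation of each cell’s wind vector. A minimal illustration only; the constant name, function name, and treating the whole tick as one rigid rotation are my simplifications, not anyone’s actual model code:

```python
import math

OMEGA = 7.2921e-5  # Earth's rotation rate, rad/s

def coriolis_turn(u, v, lat_deg, dt):
    """Rotate a horizontal wind vector (u east, v north, m/s) by the
    Coriolis parameter f = 2*Omega*sin(latitude) over one tick of dt
    seconds.  A pure rotation: speed (so momentum magnitude) is
    conserved; only the direction changes, rightward in the northern
    hemisphere and leftward in the southern."""
    f = 2.0 * OMEGA * math.sin(math.radians(lat_deg))
    a = -f * dt                       # clockwise turn when f > 0
    return (u * math.cos(a) - v * math.sin(a),
            u * math.sin(a) + v * math.cos(a))
```

    Being a pure rotation, it conserves the momentum magnitude exactly, which matches the “momentum is absolutely conserved” point above.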

    I just realised that, if a cell produces rain, then the ground down-wind of it will get colder and wetter, and that might take some calculation as to which (ground-based) cells to change the guzinta of, and probably also all the other cells on the way down. It gets more complex with precipitation because there’s a heat-exchange between the precipitation and the air on the way down, and whether it falls as snow or rain gives the ground a different albedo.

    Where we’ve ended up is that, depending on conditions, the amount of calculation needed per cell may vary by quite a lot. We’d also need to make sure that the precision of calculation is sufficient to give a reliable answer after all the iterative calculations based on the last results. After all, any error will accumulate.

  30. a_scientist says:

    I like Bucky Fuller’s icosahedron globe; it unfurls to nice triangles without too much distortion. But for modeling you still have those point vertices as singularity points, so what to do there?

  31. E.M.Smith says:

    @Soronel Haetir:

    Yes, that fractal problem again.

    I tell the story of being at a campground and having a rock in the sun at about 90 F, a patch of shady dirt at about 45 F, and a creek at 32 F / liquid, while just around the corner in the shade of a bush is snow at 31-32 F, solid. What is the temperature?

    Using a Stevenson Screen a few feet off the ground and ASSUMING that the air does a fair job of averaging the various intrinsic temperatures into a valid value for the whole space is just that: an assumption. It masks the various specific heats, masses, and all, and completely ignores rate of heat transfer along with phase change. Oh Well.

    Mountains, in particular, are a real issue. On any island, there is a wet side and a dry side. Air arriving over the ocean is approaching 100% humidity, but much of the time not making rain. With just a little lift up the central volcano, you get rain. Often lots of it. On the downwind side of the same mountain, it’s a desert: rain was squeezed out on the upwind side and now the air is expanding and drying.

    On Kauai, there’s about 300 inches of rain on the wet-side mountain top, and desert just the other side.

    So with substantially the same sunshine, latitude, longitude, etc., for your tiny little island, one far smaller than the cell sizes, you get dramatically different precipitation and temperatures. Guess where they put the airport with the weather reporting facilities… Yeah, on the side with the “better” weather (i.e. less rain, more sun, warmer) most of the time.

    So you either ignore that, hide it in an ‘average’ for a big cell, or use a cell size that’s way too small to calculate with the money available. None of these is ‘right’. Then continents have the same problems, only on a bigger scale. It is cold and foggy as heck in San Francisco some summer days, while it’s 105 F and sunny 50 miles south. The difference? S.F. has open access to water on three sides; San Jose is behind the coastal mountains that block the fog deck off the ocean.


    The “dots” are fixed in space. Their purpose is to give approximately equal sized volumes of air at fixed locations, with the neighbor cells at known angles / distances and with known volume, BUT all at approximately equal spacing and similar shape on the globe.

    Since there is NO possible constant-spacing, constant-shape, constant-volume grid, you must find something “good enough”, limiting the distortions and the effects on calculation difficulty. That’s the purpose of the dots.
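    As an aside on how such roughly-equal-volume dots can be laid out: the Deserno paper mentioned elsewhere in this thread gives a simple “regular placement” recipe. A minimal sketch of that style of placement; the function name and the lat/lon output format are my own choices:

```python
import math

def sphere_dots(n):
    """Place roughly n roughly-equally-spaced dots on the unit sphere,
    in the spirit of Deserno's "regular placement": slice the sphere
    into latitude bands of equal angular height, then fill each band
    with dots at about the same spacing along the band.
    Returns a list of (lat_deg, lon_deg) pairs."""
    dots = []
    area = 4.0 * math.pi / n           # target area per dot
    d = math.sqrt(area)                # target spacing between dots
    m_theta = int(round(math.pi / d))  # number of latitude bands
    d_theta = math.pi / m_theta
    d_phi = area / d_theta
    for i in range(m_theta):
        theta = math.pi * (i + 0.5) / m_theta   # band's polar angle
        m_phi = max(1, int(round(2.0 * math.pi * math.sin(theta) / d_phi)))
        for j in range(m_phi):
            phi = 2.0 * math.pi * j / m_phi
            dots.append((90.0 - math.degrees(theta), math.degrees(phi)))
    return dots
```

    Each dot then owns about the same patch of surface area, though as noted above the counts per band won’t fit each latitude circle exactly, so the “about” matters.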

    Think of them as the central dot of a “soap bubble” of air. Each bubble pushes up against its neighbors and makes a flat plane at each intersection. Normally they will be roughly the same shape (and always about the same volume, due to the construction method of packing the surface with unit volumes). So IF there’s a discontinuity in the alignment of the dots (say you drop one in a row compared to the row above or below), your soap bubble will have somewhat differently shaped faces at that point, BUT the volumes will stay the same and the neighbor bubble centers (dots) are at well defined places.

    So while the exact math done to apportion an air flow might change some, the method does not: look at the angles to the ‘down wind’ dots, resolve the wind direction into vectors toward each, and apportion the volume accordingly.


    Wind going due West. Next dot from “me” is due west. Send 100% of ‘wind’ to that dot.

    Wind going due North. Next dot from “me” is 45 degrees NE and another 45 NW. Send half of the wind to each.

    Other winds, apportion similarly with trig.

    All I need to know is the angle from me to the ‘next dots’ in the wind direction and their relative angles. I can ignore the exact shape and distance of the edges of the cells and just accept that the construction made them all about the same volume, so about the same area of contact.
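    A minimal sketch of that apportioning, assuming compass bearings to each neighbor dot and a wind direction given as “blowing toward”; the function name is just illustrative:

```python
import math

def apportion_wind(wind_dir_deg, neighbor_dirs_deg):
    """Split the outflow from one dot among its downwind neighbors.

    wind_dir_deg: compass bearing the wind is blowing TOWARD.
    neighbor_dirs_deg: bearings from this dot to each neighbor dot.
    Each downwind neighbor is weighted by the cosine of the angle
    between the wind and the bearing to it (the wind vector's
    component toward that neighbor); upwind neighbors get nothing.
    Returns fractions that sum to 1."""
    weights = [max(0.0, math.cos(math.radians(wind_dir_deg - d)))
               for d in neighbor_dirs_deg]
    total = sum(weights)
    return [w / total for w in weights] if total else weights
```

    With neighbors due N/E/S/W, a due-west wind sends everything to the west dot; with neighbors at NE and NW, a due-north wind splits half-and-half, matching the two examples above.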

    (Where it gets dicey is trying to extend this into the 3rd dimension of altitude… so at first I’d just do it as stacked layers at the same locations of ‘dots’. Bin packing in 3D might be better / more real, but I think the direct vertical is more in keeping with gravity effects…)

    For less than about 10,000 years, I think we can safely hold plate tectonics constant. Volcanoes not so much though… but then they are the product of plate motions so… Hmmm….

  32. Ed says:

    This all rang a faint bell, and I finally remembered, The Finite Element Machine:

  33. Simon Derricutt says:

    EM – looks like Soronel Haetir’s comment might have ended up disappearing, though it arrived on the email feed. Maybe check the “deleted” bin?

  34. H.R. says:

    @Simon D – Nope, Soronel’s comment is the second one in the thread. It was late in appearing, so I had missed it too.

  35. E.M.Smith says:

    Yes, my bad.

    Almost all the time, folks commenting are already on the approved white list. So after checking “moderation” a few dozen times and finding nothing, I’ll start to just read comments that show up in the comment list at the right of a regular page… Then a day or two later I check the admin board and BINGO! Someone new, or with a new IP address as their router DHCP resets over a weekend shutdown, went into the moderation queue and I wasn’t “johnny-on-the-spot”…

    So to make up for those occasions of late approval, I always post a comment referencing theirs. Sometimes with a link to it, if it was very late.

  36. Pingback: What Happens In A Climate Cell (Model) | Musings from the Chiefio

  37. Pingback: Cell Model – Top Level Flow Of Control | Musings from the Chiefio

  38. Compu Gator says:

    E.M. Smith originally posted on 8 November 2020 at 8:23 pm GMT [*]:
    Do note that phi and theta can swap meaning between math folks and physics folks usage (Computer guys just avoid Greek ;-)

    “Computer guys […] avoid Greek”?  What a silly claim!  Please tell us that the combination of cursive theta (U+03D1: ϑ) & phi (U+03D5: ϕ) isn’t enough to baffle our gracious host!  APL aficionados would’ve immediately substituted the Byzantine or cursive rho (U+03C1: ρ or U+03F1: ϱ, respectively) for the Deserno paper’s boring ‘r’. That’s before they made use of the more-or-less complete set of APL characters, including those originally typed by overstriking, in Unicode Miscellaneous Technical Symbols (U+2300–23FF). And that’s before getting into the Mathematical Operators (U+2200–22FF).

    Programming-language guys love non-Latin alphabets (well, except those scripts whose glyphs change depending on position in a word, as in Aramaic and Arabic). The APL character set? Hey-ell yeah!

    Note * : “There Is NO Good Coordinate System For Climate Models”.

  39. V.P. Elect Smith says:


    APL is dead for a good reason. The non-Latin keyboard. Heck, not even a recognized alphabet for that matter. All sorts of strange symbols on it.

    Only very recently have general purpose computer languages moved to using Unicode characters (such as Julia). Even there, it is not commonly done.

    Note just how regularly even just Wiki Pages show up with lots of boxes instead of the intended characters. It is STILL a “work in progress” just for display.

    Per me, personally, and Greek Script:

    The problem has 2 roots. First off, the article in question used what looks to me like cursive italic Greek, and I had no clue which character was intended. Thus my note at the bottom of my program about “lollypop” vs “Cracked egg”:

    # Cracked Egg / Backwards G is Theta
    # Lollypop / Circular T is Phi

    But clearly it “isn’t enough to baffle our gracious host!”, just a PITA on the road to figuring out just what the heck the guy was trying to say. I lost a half hour to finding relatively similar cursive Greek instead of just knowing what was meant; the author using something sane (even CAPITAL Greek… heck, even lower case printed Greek…) would have been nicer to the reader. Also nice would have been dodging the Math Guys vs Physics Guys confusion of the two letters.
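    For anyone bitten by the same swap: the two conventions differ only in which name gets the polar angle and which the azimuth. A small reference sketch; the function names are my own:

```python
import math

def spherical_to_cartesian_physics(r, theta, phi):
    """Physics (ISO 80000-2) convention: theta is the polar angle
    measured from the z-axis, phi the azimuth in the x-y plane."""
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

def spherical_to_cartesian_math(r, theta, phi):
    """Common US-math convention: the names are swapped, so theta is
    the azimuth and phi the polar angle."""
    return spherical_to_cartesian_physics(r, phi, theta)
```

    Same point, same math; only the labels move, which is exactly how a half hour gets lost.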

    FWIW, I had one Russian class long, long ago and can still “puzzle out” simple Russian text now, some 45 or so years later… and I can read a little bit of PRINTED Greek; BUT my total exposure to the cursive form of either one is near NIL.

    So yes, I stand by my assertion that “Computer guys avoid Greek”. (Not 100%, as nothing involving people is 100%, but in bulk and by & large, absolutely.) You will find a few crackpots deliberately using odd character sets in programs (use for decorative effect in output is common, and not crackpotty), but most folks? Nope.

  40. V.P. Elect Smith says:

    Oh, and I note in passing that it wasn’t until 1991 that Unicode was even a thing; common use was still changing in 1996, with about 2000 as the point where it started being more common.

    The Unicode Consortium was incorporated in California on 3 January 1991, and in October 1991, the first volume of the Unicode standard was published. The second volume, covering Han ideographs, was published in June 1992.

    In 1996, a surrogate character mechanism was implemented in Unicode 2.0, so that Unicode was no longer restricted to 16 bits. This increased the Unicode codespace to over a million code points, which allowed for the encoding of many historic scripts (e.g., Egyptian hieroglyphs) and thousands of rarely used or obsolete characters that had not been anticipated as needing encoding. Among the characters not originally intended for Unicode are rarely used Kanji or Chinese characters, many of which are part of personal and place names, making them rarely used, but much more essential than envisioned in the original architecture of Unicode.

    The Microsoft TrueType specification version 1.0 from 1992 used the name Apple Unicode instead of Unicode for the Platform ID in the naming table.

    Then note that it is still changing, and that in the world of Linux, we still get “introductions” to it:

    User space programs use so-called locale information to correctly convert bytes to characters, and for other tasks such as determining the language for application messages and date and time formats. It is defined by values of special environmental variables. Correctly written applications should be capable of using UTF-8 strings in place of ASCII strings right away, if the locale indicates so.

    Most end-user applications can handle Unicode characters, including applications written for the GNOME and KDE desktop environments, the Mozilla family of products, and others. However, Unicode is more than just a character set — it introduces rules for character composition, bidirectional writing, and other advanced features that are not always supported by common software.

    Some command-line utilities have problems with multibyte characters. For example, tr always assumes that one character is represented as one byte, regardless of the locale.
    Also, common shells such as Bash (and other utilities using the getline library, it seems) tend to get confused if multibyte characters are inserted at the command line and then removed using the Backspace or Delete key.

    And so much more…

    Oh, and note that a lot of those “works” only apply to UTF-8, not UTF-16 or UTF-32. So “good luck with that” comes to mind…

    Essentially on a daily basis I run into square boxes where some Unicode character is intended to be (mostly because I look into odd alphabets and languages and stuff from history). It’s still very much a developing work in progress.

    And so, were I writing a program that I wanted to continue to work Just Fine without issues, I’d avoid using any strange character sets in the program text itself. (Display, OTOH, depends entirely on the spec for the output.)

    We’re only really about 20 years into common use, and still things are changing and “with issues”. That’s not nearly enough time to call it “done”. Not to mention that typing a Unicode char for most things takes somewhere around 4 to 6 keystrokes on a regular keyboard. What kind of person would do that AND use a language that uses {} instead of BEGIN END, or uses ls for ‘list’? It just would not be The Unix Way to type any more characters than the minimum possible…

    BTW, the original “avoid Greek” was really a bit of a joke. But since you objected, I figured I’d toss rocks ;-)

Comments are closed.