Why Volume Grid?

Just a ponder.

In all the computer models, they use a “grid” of volumes. Most are squares (cubes stacked in layers with a square outline, some with a ‘taper’ toward the poles) and at least one uses hexagons.

So I was pondering this and pondered: Why not just use vectors?

They calculate the “stuff” either at a point in the center of a grid cell, at one of the corners, or at the middle of an edge. Yet the thermometers are really just at points on the ground. Then things like “wind” are moved between boxes, very unnaturally.

So why not just have a set of vectors for wind? Each one starting at a point in your grid, with angle and speed as properties. Similarly, have a temperature vector anchored at each point with size = temperature? Seems to me a set of vectors anchored at points is easier to deal with programmatically and ought to give similar results.

So am I missing something or what?



32 Responses to Why Volume Grid?

  1. EM – I’ve been pondering about your circles and hexagons, but hadn’t got far enough to write something. I’d come to the idea of just using the points, and the air in the volume around that point can be treated as a lump to a fairly-good approximation. It will have a composition, a mass, a pressure, and a temperature, and will also have momentum. It will be acted on by the points around it, and at the end of each time step it will output a quantity of its air to other points around it, obviously in proportional amounts depending on the movement of that air, whether it’s going up or down, etc. It will also receive air from other points, with a particular composition and momentum, at the beginning of the time-step, and then average all the air it currently has for composition, momentum, effects of rotation of the Earth, solar input, moisture input if it’s at ground level, etc., before running the output at the end of the step.

    It seems that with the geodesic points division of the globe the actual ground height could be modelled. The triangles would need to be chosen sufficiently small to enable a reasonable cloud to be modelled. Clouds would be expected when the relative humidity rises above 100%, though it may be difficult to model cosmic ray seeding or other effects that stop RH exceeding 100%. Solar input would mainly happen at ground level, of course, but with the varying angle of the Sun to the ground it would be possible to say which points were in shadow from a cloud at a particular level in the grid.

    It’s not going to be a perfect model, but can be massively parallel. Putting the global data into it may be a bit of a bummer though. When there aren’t enough cores that each point can have its own, then running through the sequence of point input, averaging, seeing if a cloud forms, and outputting the air to the correct points around it in the correct ratio does give you as much resolution as you want on wind direction. New set of sun position, cloud shadows etc. for the next cycle. You can add heat absorption of greenhouse gases into the averaging process if you want, given whether the point is in shade or not. I figure that’s not going to be a large correction, but you never know…. Still, the nice point is that there are no “forcings” but everything runs from first principles. We know the absorbance of CO2 and H2O and also what they will re-radiate,

    The first part is writing the program to deal with a single point, and its inputs and outputs. Looks to be not too hard. The big job is to set up the point grid and for the points to receive solar radiation based on where they are at each point in time, and then setting all the correct data for all the points used by looking at the global conditions at a point in time. Basically, a lot of that information will need to be guessed. Maybe some sneaky approximations that all the triangular prisms of air are the same volume as we go upwards. Given the ratio of the atmosphere height to the Earth diameter, and the relative dearth of pentagons rather than hexagons, this may be close enough. Also reasonable to approximate the same drop-off with height of gravity, and to treat the gravity as being the same at the poles as the equator. Add in the corrections later, but allow for them being modified in the program spec. Such things will be tied to the points, anyway, so are part of the database for the points and can be updated programmatically.

    Triangular prisms at ground level may be a bit tricky. They will need to take into account the height of the land and what is on it in order to determine ground temperature and water that evaporates, and will also need to take account of ground water storage and flows (rivers etc.). Once above the ground level, though, the points above would be relatively simple. Treat the points as all the same base-height, so that some will be inactive in the middle of mountains? May be nicer to do that than vary the base height above the mean sea level.

    It looks a better idea to deal with points rather than volumes. It still looks a very big job to actually write the program and put the data in to start it off, though. Maybe that’s why they don’t give the data out along with the models.
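
    To make that bookkeeping concrete, here is a minimal Python sketch of the “lump of air at a point” idea. Everything in it (the PointCell record, the fixed export fraction, the neighbour weights) is made up for illustration; a real model would add moisture, rotation, solar input and the rest.

    ```python
    # Minimal sketch (not a working model): each grid point owns a "lump" of air
    # with extensive properties only; a time step exports part of the lump to its
    # neighbours, and the next step averages in whatever arrived.
    from dataclasses import dataclass, field

    @dataclass
    class PointCell:
        mass: float            # kg of air assigned to this point
        energy: float          # J (internal + latent), not a temperature
        mom_e: float = 0.0     # kg*m/s eastward momentum
        mom_n: float = 0.0     # kg*m/s northward momentum
        inbox: list = field(default_factory=list)  # parcels arriving this step

    def export(cell, neighbours, weights, frac=0.1):
        """Send `frac` of the lump out, split among neighbours by `weights`."""
        total = sum(weights)
        for nb, w in zip(neighbours, weights):
            share = frac * w / total
            nb.inbox.append((cell.mass * share, cell.energy * share,
                             cell.mom_e * share, cell.mom_n * share))
        keep = 1.0 - frac
        cell.mass *= keep; cell.energy *= keep
        cell.mom_e *= keep; cell.mom_n *= keep

    def absorb(cell):
        """At the start of the next step, mix in everything that arrived."""
        for m, e, pe, pn in cell.inbox:
            cell.mass += m; cell.energy += e
            cell.mom_e += pe; cell.mom_n += pn
        cell.inbox.clear()

    # e.g. one step between two points, all of the outflow going to b
    a, b = PointCell(1.0e9, 2.0e14), PointCell(1.0e9, 2.1e14)
    export(a, [b], [1.0]); absorb(b)
    ```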

  2. jim2 says:

    A volume will contain a mass of, say, air. In its cell, it will respond to heating, cooling, radiation, etc. But I suppose you could put that mass at a point and pretend it’s an actual volume. I didn’t really like the hexagon idea. IIRC, you can’t cover a sphere in them anyway. Triangles are simpler. But a point is even simpler than that, and you can cover a sphere with them equidistant – or not. I like the idea.

  3. jim2 says:

    The shape of the volume might matter for things like radiation absorption, so …

  4. jim2 says:

    Maybe the volume could be modeled as a sphere – I don’t see why the boundaries have to be contiguous.

  5. jim2 says:

    Just have more spheres per unit area as altitude increases. Raises the question of how to divvy up output of lower spheres into higher ones … maybe need variable volume spheres?

  6. Lionell Griffith says:

    EM: So am I missing something or what?

    Of that I am absolutely certain. The thing is everyone is missing something because we can be confident that there are a few things we know, many things we know we don’t know, and many more things we don’t know that we don’t know. We can simulate the things we know and make educated guesses about some of the things we know we don’t know. We can’t even make wild guesses about the things we don’t know that we don’t know.

    Now this should not be used as an excuse for not attempting to create a simulation, but it should give us huge reasons to question the setting of public policy based upon the results of our simulations. If the purpose of developing the simulations is to find out whether we know the really important things, I am OK with that. Anything more than that, I say No Way!

    Assuming the latter, there are a few fundamental issues that must be addressed. The really big issue, beyond all the unknowns, is that there is a multitude of both intensive and extensive variables that must take part in any realistic simulation of weather/climate. Until that issue can be properly dealt with, there is no point in going further.

    The question is: do we use volumes, points, scalars, or vectors? There is no “or”; we must use all of them as appropriate to represent the various variables and attributes: volumes and scalars for extensive variables, and points and vectors for intensive variables. Only then can we start dealing with the actual physics of the situation.

    That the whole field of climate simulation computes average temperatures is a sure indicator that the difference between an intensive and an extensive variable escapes them. This error by itself is sufficient to explain why their simulations don’t simulate anything close to reality. The non-existent CO2 / H2O feedback loop they assume pales to insignificance next to this monumental error. Their final results are only numbers, without any actual connection to reality beyond the fact that they are numbers. Oh, they ardently wish and intend that they had meaning, but there is none.

    I don’t know how to deal with this issue in a simulation of climate, I just know that a simulation needs something more than all variables being treated as if they are extensive.

  7. p.g.sharrow says:

    As I look at this, it strikes me that the data represents a fixed point, so that is a start. It is a given distance to the next data point, so that each point is the midpoint of the volume of the hexagonal comb around it. Elevation, location, wind speed and direction, air pressure, humidity and temperature are all measurements for that point. Triangular vectors to and from the adjoining data points should give the best representation of the average energy movements between the two data points.
    However, Climate might be better measured by ice volume on the oceans and the elevation of snow fields and tree line. At least that has been my experience from watching climate changes over the last 70 years…pg

  8. Lionell Griffith says:

    You might be able to avoid the intensive/extensive trap by simulating mass and energy transfers only. That would be all extensive, and you could avoid using temperature averages altogether. Then the final result would be net energy gain or loss. Interpreting that is not trivial, but it would be a physical result rather than the fiction of a so-called average temperature anomaly.

  9. E.M.Smith says:

    My general approach would be to model extensive variables and let the intensive ones be derived from them (if at all possible). So dew forms, and you reduce the mass of water in the air and compute the temperature change from that and the heat of vaporization of water. Not using temperature averages to drive things. (Still in the pondering state so ‘reality constraints’ not yet included ;-)
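
    As a rough illustration of driving the temperature from the extensive variables, a minimal sketch (the constants are rounded textbook values and the function name is made up):

    ```python
    # Sketch: condense some water vapour out of a parcel and get the temperature
    # change from the released latent heat, instead of pushing averaged
    # temperatures around.  Illustrative only.
    L_VAP = 2.5e6    # J/kg, latent heat of vaporization of water (approx.)
    CP_AIR = 1005.0  # J/(kg*K), specific heat of dry air (approx.)

    def warming_from_condensation(parcel_mass_kg, water_condensed_kg):
        """Return the temperature rise (K) of a parcel when dew / cloud water forms."""
        released = water_condensed_kg * L_VAP        # J of latent heat released
        return released / (parcel_mass_kg * CP_AIR)  # spread over the parcel's air

    # e.g. 1 g of water condensing in 1 kg of air warms it by roughly 2.5 K
    print(round(warming_from_condensation(1.0, 0.001), 2))
    ```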

    Since the surface of the Earth is nearly cue-ball smooth seen from space, I’m not sure a lot of effort needs to go into modeling it. Just call it “surface” and include some N/S and E/W drag/lift factors where there are mountains in the surface calculation.

    Then again, compared to atmosphere height, some mountains get way up there… Maybe needs more think time on that…

    @Simon:

    Good stuff. Thanks for the ideas ;-)

    @Jim2:

    Hmmm…. quantity of land would need to be kept track of for each spot, and the angle to the sun, so as to compute absorbed / reflected…

    Models with single-digit layer counts are common. Figure a 5 layer model and 100,000 feet is about 20,000 feet / layer. Even a 10 layer model gives only 10,000 feet of resolution. Seems a bit lite to me …

    I need to read some of the present model codes and see what their top of ATM level is and figure out the actual granularity. Don’t see how you can get clouds even 1/2 right if you can only form them at 20,000 and 40,000 feet…

  10. E.M.Smith says:

    @P.G.:

    IMHO, any real climate model must START with solar inputs and variation over time (sunspot cycle / 179 year cycle / major min max nodes) AND water / ice as the key dynamic; and get clouds right. Then add on other bits only as needed to get more accuracy.

    One big question is just how to make it run from “first principles” and get convergence. I’d test that by loading in constant temp, pressure, etc. as initial conditions and then see how it converges to a more real state. Modeled at all well, it ought not take long to get polar ice and tropical rising warm wet air… IFF it diverges rapidly or never approaches reality, you have something wrong… ’cause nature didn’t get a nice clean start on an ideal load of initial conditions…

  11. jim2 says:

    If you imagine points connected with pipes or tubes, then you could apply something like Kirchhoff’s law to determine mass flows from one point to another.
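
    A minimal sketch of that bookkeeping (names and numbers are illustrative; the point is just that the net of the pipe flows must equal the change of mass stored at the point):

    ```python
    # Sketch: Kirchhoff-style balance at a point -- the net of all flows through
    # the "pipes" must equal the change in mass stored at the point, so nothing
    # gets lost between cells.
    def mass_balance_ok(flows_in, flows_out, mass_before, mass_after, tol=1e-9):
        """Flows in kg per time step; returns True if the books balance."""
        net = sum(flows_in) - sum(flows_out)
        return abs((mass_after - mass_before) - net) < tol

    # e.g. 3 kg in, 1 kg out, so the stored mass should rise by 2 kg
    print(mass_balance_ok([2.0, 1.0], [1.0], 100.0, 102.0))   # True
    ```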

  12. vuurklip says:

    @Lionell:
    “… we can be confident that there are a few things we know, many things we know we don’t know, and many more things we don’t know that we don’t know. ”

    Indeed, but we can also add that there are things that we THINK we know but which we actually don’t know.

  13. p.g.sharrow says:

    Just a BIG refrigeration, heating and air-conditioning, problem. ;-)

    Water is the main working fluid. O2 & N2 are the insulation. Solar radiation is the energy source. 8-) Polar ice banks would give a good approximation of the net energy changes. K.I.S.S.
    The actual amount of energy in the atmospheric gases is very minor. It’s the Water!
    Without water, Earth’s surface would be a Venus-like hell hole! Not only is water the air-conditioner working fluid, but it is the fume-scrubber fluid that has washed most of the heavier gasses out of the atmosphere, reducing its density.

    Maybe your billiard ball in the sunlight will work as a starting position. Then add the troposphere. Next add water. Now the gross circulations can be added. These things should solve for 99% of a theoretical Earth surface temperature. Only then would day-to-day data points be of value to determine changes in actual conditions. Just some thoughts…pg

  14. If the model runs on first principles, and given a starting-point of even temperatures it produces the emergent properties we know the weather has, such as clouds, rising air masses and descending air masses, and thus high-pressure/low-pressure areas and vortices, then it’s starting to look like the basis is reasonably correct. There’s a lot we don’t know, whether we know that or not. Does the model tend to look like the world we know, or does it instead spiral off into far too hot or too cold?

    pg’s point about the water is somewhat critical. As a start point we can simply put the known water-temperatures in to see if the model works, but the full model will need to also have water flows that are themselves modified by the clouds and other variations of heat input (spectral change in the Sun?) and thus the air and water will interact.

    One other problem in the model is that we’ll mostly be dealing with very small differences between variables that are themselves much larger, so absolute accuracy will be reduced. Is 64-bit maths good enough? I don’t know. It may sometimes be better to work with the differences themselves to improve accuracy.
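
    As a tiny illustration of why working with the differences can help (ordinary floating-point behaviour, using 32-bit floats only to make the effect obvious; nothing here comes from any actual model):

    ```python
    # Sketch: adding a tiny flux to a huge absolute total loses the increments,
    # while accumulating the differences (anomalies) separately keeps them.
    import numpy as np

    total = np.float32(1.0e9)    # big absolute quantity (say, column energy in J)
    flux = np.float32(0.01)      # tiny per-step change

    absolute = total
    delta = np.float32(0.0)
    for _ in range(1000):
        absolute = absolute + flux   # vanishes below the resolution near 1e9
        delta = delta + flux         # accumulates fine on its own

    print(absolute - total)   # 0.0, the changes were lost entirely
    print(delta)              # about 10.0, kept by working in differences
    ```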

    Lionell – if we’re dealing with energy changes in the lump of atmosphere we’re assigning to a point, then we end up with the temperature at that point and not an average temperature. There will be an error from saying the whole lump is at the same temperature, but that sort of error is probably unavoidable in the finite analysis we seem to be talking about. Make the time-steps and the distances involved small enough and the error will go down.

    The weather is normally portrayed as a chaotic system where prediction is not actually theoretically possible. I suspect this is wrong, and that if we had enough data then we could in fact predict it for a much longer time-span than we do (problem only if some basics change such as solar output or volcanoes). Our input data is pretty spotty on the ground, with very little data about what’s really happening in the air. I know that here I can go a short distance and read a different temperature, and that the usefulness of the predicted air temperature is just about adequate to decide on whether I’ll need a coat or not.

    Maybe the big advantage of a “climate model” (though I’d be happier calling it a weather model) that works from first principles rather than fitting the fudge factors is that we can see what difference CO2 etc. will make as we vary it. We know already from first principles that any effect will be vastly outweighed by the amount of water vapour in the air, but in the current pseudo-scientific climate where everything depends on CO2 level then it would be nice to show what the real effect actually would be predicted to be if we got the sums right.

    Ocean circulations also need to take account of undersea volcanoes. IIRC there are some new volcanoes that have been discovered under the section of the Arctic ice-sheet that’s melting faster than expected. Such eruptions (and those of land-based volcanoes) are not predictable, so long-term predictions of the weather will get further from reality.

    Maybe one of the big things to come out of the project is that there is negative feedback in the real world that the current models don’t reflect. For the model EM has, it’s known that if the input data is varied too much then the predictions go wildly hot or cold and stay there – there’s too much positive feedback. Sure there seem to be two fairly-stable situations, ice-age or interglacial, but within a certain distance of those two stable points there is negative feedback that tends towards the current mean.

    Given that there are currently around 14 climate models in use by the IPCC, and that they all disagree with each other and with what actually has happened, a first-principles model that stays pretty close to what really happened when tested against the data from a particular time would be good. Given that we can’t predict volcanic eruptions or solar output, long-term predictions will be on the basis of “if things stay as they are”, but it could still give far better predictions of the weather we’ll experience in the next month/year than the current models. If the first level proves good, then using a tighter grid and smaller time-steps should give better predictions as well (providing the rounding errors are small enough, of course), which could be commercially valuable.

    On the coding, it looks to me that we’ll need a database with “local constants” for each point. Some can be updated on each cycle, such as angle of the Sun and local insolation. Others may be set once and then left alone, such as local gravity and height above mean sea level. At each step, we’d need to update the local constants – may be quicker to do that in a separate process so that the program for the point can just read in a record containing all the necessary constants for this step. Basically, though, hard-coding some “constant” into the program wouldn’t be a good idea, and better to pull it in from the database in case we find that it does vary from place to place. Maybe a nice idea to put the local temperature, rainfall/snowfall, wind direction/strength, barometric pressure, insolation etc. (basic weather details) back into that database or write it to another somewhat more compact one as the local weather at that point in time.

    The program is going to produce a whole lot of output data. Consider points around 1 km apart, which means it can only model rather large clouds, and we’ve got somewhere over half a billion points (509,296,248 for a square grid if my calculations are right). Go down to a half-kilometer grid and that’s around 2 billion points, roughly 2 GB of output for every byte in each point’s data record. Looks to be a lot of I/O needed, and using binary numbers rather than characters in the output file would save a lot of space. Also, using a snapshot of the data (say once each hour) may be better than saving after each time-step. Maybe a bit in the “constants” database could be set at the desired intervals.
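
    A sketch of that sizing arithmetic (the record layout, the field choices and the 1 km spacing are all assumptions purely for illustration):

    ```python
    # Sketch: a packed per-point record of "local constants" and a rough estimate
    # of how many points a 1 km grid needs and what one snapshot of that record
    # would weigh.  Everything here is illustrative.
    import math
    import struct

    # one point's fixed data: lat, lon, ground height, gravity, cell volume, albedo
    POINT_FMT = "<ddffff"                       # packed little-endian record
    record_bytes = struct.calcsize(POINT_FMT)   # 32 bytes per point here

    R_EARTH = 6.371e6     # m
    spacing = 1000.0      # m between points (assumed)
    n_points = 4 * math.pi * R_EARTH ** 2 / spacing ** 2

    print(f"{n_points:.3g} surface points at 1 km spacing")          # ~5.1e8
    print(f"{n_points * record_bytes / 1e9:.1f} GB per snapshot of this record")
    ```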

  15. Lionell Griffith says:

    Simon,

    Temperature is a non-physical proxy for something else and has units only because they are assigned to it. The physical parameter being measured is a smoothed kinetic energy of the atoms and molecules of a limited volume of the substance being measured. Even then, the “correct” temperature indication requires a thermal equilibrium between the substance and the thermometer doing the measurement.

    For a given change in temperature, different amounts of kinetic energy will be required, depending upon such things as mass, density, and molecular structure. This is why temperatures cannot properly be averaged over heterogeneous substances, of which the earth is a prime example. This is also why you must stay in the energy domain if you want the results of your calculations to be physically meaningful.
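
    A tiny numeric illustration of that point (rounded textbook heat capacities; the scenario itself is made up): averaging the temperatures of two unlike masses gives one answer, staying in the energy domain gives another.

    ```python
    # Sketch: naively averaging temperatures of unlike substances vs. conserving
    # their energy.  Heat capacities are rounded textbook values.
    CP_WATER = 4186.0   # J/(kg*K)
    CP_AIR = 1005.0     # J/(kg*K)

    # 1 kg of water at 30 C meets 1 kg of air at 10 C
    t_naive = (30.0 + 10.0) / 2.0
    t_energy = (1.0 * CP_WATER * 30.0 + 1.0 * CP_AIR * 10.0) / (CP_WATER + CP_AIR)

    print(t_naive)              # 20.0  "average temperature"
    print(round(t_energy, 1))   # 26.1  what energy conservation actually gives
    ```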

    Keep in mind that you can apply any function you want to a set of numbers. The operation can be mathematically correct, but the result is not necessarily physically meaningful. For example, you can divide a temperature by 2 times pi and take the sine of it. The result, though mathematically computable, is physically meaningless. Ditto for the result of computing the mean of a set of temperature readings scattered all over the globe.

    “The computer said” is always BS without the fine details of the computation and a huge amount of meta data about the input into the computer. Even then it is often difficult to interpret the results physically and correctly.

    Did you ever wonder why physics and chemistry classes spend so much time on units and their manipulation? The reason is that those subjects are expected to be about the physical world and not the fantasy of so-called “pure” mathematics. It is the difference between reality and fiction.

  16. Lars Silen says:

    Jim2
    The reason why you want clean geometrical volumes is that otherwise you will have problems balancing the books. For a hexagonal pillbox you want to make sure that all outgoing mass goes to an adjacent volume element and not to some undefined space between spheres.
    Hexagonal, triangular etc. forms have well-defined windows (planes) through which mass goes in or out, making the computations easier to manage.
    Using mathematical points or spheres for our volume elements leaves voids in the fabric that may become difficult to handle … meaning computationally expensive to handle.

  17. jim2 says:

    Lars – yes, see my comment above. Mass balance is essentially what Kirchhoff’s law is based on.

  18. Lionell – the temperature predicted for the point will be a result of the energy movements, and is only there because it gives us something to check against reality. The air temperature at a point is in fact what is measured. Providing we know the air composition, we can calculate the temperature from the energy and vice versa. It’s however that problem of changing air composition that made me choose energy transports between points rather than temperature changes. Since evaporation rates are calculated from temperatures, though, it’s useful to know the temperatures for some of the calculations at ground level, and it will also be useful in the cloud-generation checks.

    The only simulation that would have a chance of being exact would simulate everything – basically another Earth. The question is whether we can get the model good-enough to be useful without needing supercomputers and petabytes of very fast storage. It’s pretty obvious to me that the real system has negative feedback as well as large swings. The biosphere may need to be a part of the simulation, though we’ll probably only get the most-important ones such as forest and agriculture (land-use in general) and miss the things that could affect them such as Elm-disease or bird-flu.

    If we can simulate the energy-transports with good-enough resolution and at a fine-enough scale then we can see whether it is good enough by comparing the results to temperatures, rainfall, and barometric pressure (and possibly humidity) that are actually measured and published.

    The advantage of using points and lumping the whole of the air as being at that point is that you don’t have the problem of energy dropping into a void between volumes if there’s a problem with the geometry calculations. All the energy leaving the point has to be picked up by other points. There are always compromises to be made when trying to simplify a real-world problem so it’s computable (and gives meaningful answers). The approximation that the measured temperature is the same all through the volume represented by that point looks to be unavoidable, no matter how close the grid becomes.

    I suspect a climate model that didn’t calculate what the local temperatures were would not be well-received….

  19. cdquarles says:

    Yes, indeed, Simon. The biosphere must be included simply because all living things alter the environment they live in.

    Earth does not have a climate system. I think we are making a very large error by treating that as something real. Earth does have a weather system. Then we must remember that the constituents of air are not constant. There are chemical reactions happening, both biological and non-biological. Then add bulk movement in three dimensions, some of which amounts to net vectors from the very large numbers of particles that are moving through space at about 1 km/sec at the surface. This means that there will be discontinuities (mathematically) with variable effects and lifetimes.

  20. Lionell Griffith says:

    Simon: I suspect a climate model that didn’t calculate what the local temperatures were would not be well-received….

    Which of the currently “accepted” climate models compute the local temperature with any degree of verified accuracy? I would say none, judging from all the scatter charts I have seen comparing models with actual measurements.

    Seems to me that pretend temperatures that have no physical basis are acceptable. Tell me, what is the value of computing a non-physical and grossly inaccurate temperature? I can see it means billions of dollars of grants for the modelers. Does it mean anything for the taxpayers who earned the money, besides an alarmist cry to return their lifestyle to that of the 9th-century AD British Isles? I think not.

    The question is, do you want the model to work, meaning come close to earth’s actual climate or do you just want it to be acceptable? If you want it to work, the starting point is understanding climate in a physical sense. If you simply want it to be acceptable, start with any existing model and leave it as it is because actually working is not even on the must have list.

  21. John de Jager says:

    E.M.
    The comment by Lars Silen is the correct answer to your question.

    Because fluid flow carries mass, momentum and energy across the boundaries of cells, the best cells are those that leave no space unaccounted for into which fluid could leak.

    The finite volume method is based on the integral forms of the conservation equations rather than the differential forms, and on a properly tessellated mesh it at least does away with those kinds of inaccuracy. It’s much like making sure all the gaskets seal properly in an engine: leaking gases are the enemy of efficiency and sometimes even of operation.

    A very good introduction to the integral equations and their utility on non-cubic grids (in answer to your other post about hexagons) is J. Blazek, “Computational Fluid Dynamics: Principles and Applications”.

    Regards,
    John.

  22. E.M.Smith says:

    @Lars & John:

    OK, so you must account for all the mass flows. Yet one can do that while assigning the mass numbers to a point instead of a box. Perhaps it is more a semantic argument than a real one. The mass would still need to be given an assignment of direction and then that split between target boxes, or target ‘points’, for accounting. Which really, in a way, just says your point is going to have an implied volume for which it is “responsible”. Perhaps better to make that explicit than implied…

    Somewhere in the code one would need to differentiate between, say, two points at 60 degrees apart, and assign the mass flow to one or the other proportionately; which does then imply a ‘wall’ with things going to one side or the other… While storing that information might benefit from a set of vectors on points, it is at the cost of disguising the actual wall in the processing…
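
    Something like this is what I mean (a minimal sketch; the fixed bearings, the linear split and the function name are purely illustrative, and the crossover is exactly the implied “wall”):

    ```python
    # Sketch: split a mass flux leaving a point between two neighbour directions
    # 60 degrees apart, in proportion to how close the wind bearing is to each.
    def split_flow(flux, wind_deg, bearing_a=0.0, bearing_b=60.0):
        """Return (to_a, to_b) for a wind blowing between the two bearings."""
        w_b = (wind_deg - bearing_a) / (bearing_b - bearing_a)  # 0 at A, 1 at B
        w_b = min(max(w_b, 0.0), 1.0)
        return flux * (1.0 - w_b), flux * w_b

    # e.g. 100 kg/step blowing at 15 degrees goes mostly toward neighbour A
    print(split_flow(100.0, 15.0))   # (75.0, 25.0)
    ```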

  23. Lionell – the current climate models diverge from each other and from the measured temperatures, in that they all show more heating than is seen and some show wildly more heating than is seen. The IPCC take an average of the model results to produce their “best guess” (they call it a prediction) of how temperatures will evolve if we don’t stop emitting that evil CO2.

    Those same climate models are however also used to predict weather, and the divergence can be such that the weather forecasters mention it for the week-ahead predictions. To me, this says that the models are demonstrably wrong. They are trusting the predictions for decades ahead when we can’t be certain that predictions for the week ahead are reasonably close….

    If we’re going to try modelling at all, the physics should be right rather than using curve-fitting and adjustable parameters. We know that any model of the Earth’s weather systems must miss some things, since otherwise the number of computes and the amount of I/O would exceed the hardware we have. We thus need to prioritise the effects that we want to simulate, from the most important to the least important, and add them in most-important first until we’ve exhausted the capacity of the computes available. It won’t be perfect. The question is whether it is good enough.

    Do we know what is most important? Maybe, maybe not. It seems that the current models spend a lot of effort on the radiative processes, where it’s pretty obvious that they won’t be that important in the lower levels where weather happens and it’s the conduction and transport of energy that’s more important. Still, about the only way of testing some model is to see whether the results match reality, and whether its predictions match what we measure when that time comes around.

    EM’s idea of a major re-write and getting the physics right should give us a better simulation, and of course it can be tested by putting in the data for some historical point and seeing if the evolving weather patterns match the historical data. If it doesn’t, then we’ve missed something that’s more important than we thought it was. Find out what it might be, and add it in; re-test. Do we currently look at the position of the Moon, for example? That has tidal effects which must also apply to atmosphere height and thus air movements.

    As to what a better model would do, it would give us better weather-forecasts for a longer period than the current week or so. Once it’s shown to be better (predictions closely match reality) then this should help agriculture. Currently, 6-month predictions can be wildly out. Having a bit more certainty of when to plant and when to harvest could be useful.

    As regards climate, I would mainly categorise that in terms of range of agricultural produce and what grows where over periods of centuries. What we see on shorter timescales is weather. Climates can vary over a surprisingly short distance, too. One side of a hill can be different than the other. It’s affected by local vegetation and what’s been built. It’s really not well-defined, which is maybe why there’s such a lot of argument about it. Far better to look at what crops can be grown and what precautions need to be taken to get a good crop, and that can and does vary within a 1 hectare field here in Europe.

    The obvious failures of the current climate models have bugged EM for years. If he wants to put it right (and it’s a load of work) then I’ll add in ideas where I think they could help. They can be discussed, and dismissed if seen to be wrong or useless. Personally, I think his approach is very much better than taking a current model and fixing the bugs, since the real errors lie in the basic assumptions and thus can’t really be fixed. It needs a re-specification from scratch, though some of the algorithms may be re-used where they fit in.

  24. Lionell Griffith says:

    Simon,

    Even if the new model uses the correct physics for the major effects, there is the issue of the initial conditions being based upon a known contaminated climate data record. With the original data being “lost”, with the details of the so called corrections being “unavailable”, and with repeated undocumented corrections being applied, what is called climate data has no connection to reality.

    There is nothing left but a fantasy, totally disconnected from reality, used to support the anthropogenic climate change catastrophe myth. Without a proven complete and correct set of initial conditions, the result cannot be valid no matter how correct the model happens to be. More importantly, there is no possible way to test the model to demonstrate its validity.

    You are doing the equivalent of expecting to pull yourself up with your own bootstraps, with no bootstraps, no boots, no feet to put into the nonexistent boots, and no place to stand to start pulling. There is no there, there! It is my opinion we have no basis for even starting to try to do it, let alone expecting to be able to do it correctly. The only place to start is way back at square one.

    But…but…? Do you want something that works or do you simply want something that has behavior? If the latter, use what you have. It does at least have behavior. If you want it to work, you have a lot of work to do before you can even start working on the problem.

  25. Lionell – EM has old copies of the global data from before it got “corrected”. There are likely other people with access to un-modified data, too.

    Not being able to achieve a perfect result is not a good reason for not trying at all. People do find having weather forecasts better than having no forecasts at all. Before the computer forecasts, there were people who watched the weather and made guesses based on the time of flowerings and the number of berries, in order to predict how hard the winter was going to be and when to plant. Even today they can give better predictions that way….

    There is also a lot of data available to check against. Yep, we know it is likely only accurate to 0.5 degrees but it is what we have to work with. What we need to compare against is measured temperature, rainfall and barometric pressure at the weather stations. See whether the predictions diverge from the measured temperatures. We’re measuring at points, so predicting for points seems reasonable.

    We know the current models are wrong, and that they diverge from reality over time, with that divergence making the predictions for a week in the future being more of a guess than prediction. Having a model that gives less divergence over time is worth having, simply because it gives a better prediction for a week or two weeks ahead, thus gives better guidance for planning things that depend on the weather.

    I’m not a purist, but instead would like to improve what we’ve got. I don’t expect perfection – that would take too many computes and too much storage, as well as massive effort in getting the starting data input even if we were certain it was right. We thus start somewhere and see how the predictions match reality. If we start with uniform temperatures/pressures everywhere then will it progress to something that looks familiar, with cold at the poles and hot in the tropics? Do we get cyclones and anticyclones forming? Do we get clouds and rain/snow? In short, does it act somewhat like the real world and can we identify where it isn’t reacting like the real world? If it does that without needing “forcings” and fudge-factors, then putting in the real-world data (as good as we’ve got) and checking the developments against real-world needs to be done. Do the cyclones/anticyclones have the same paths, and does the cloud form in the same places, and is the rainfall right? Do the temperature variations match the measured ones?

    It’s a major amount of work. It does however look like we can get the physics right from the start, and thus it stands a good chance of taking a lot longer to diverge from reality. Looking at periodicities of the divergence may show us the importance of something previously considered to be unimportant (such as the position of the Moon, for example) and improve the simulation. Still, it will never be reasonable to include all drivers of the changes, so there will always be divergence. It seems unlikely that we could predict the weather 10 years into the future or even a year, but having a few more weeks of good prediction could still be worth the effort.

    Since my programming has mainly been in assembler of one sort or another, and I didn’t have a project that needed C or other high-level languages (apart from quick hacks in BASIC-type languages), it’s likely I won’t be of much use to EM. Still, he’s really scratching his own itch and seeing if it’s possible to get a good model running on a cheap supercomputer. To me, it sounds like a good idea, and he can probably do a good job of it. It will at least be clean, well-commented and well-designed code, and will do what it says. Whether the specification is correct may be a moot point, but at least it won’t have fudge-factors in it and won’t depend on having the input data carefully tuned so that the results don’t spiral to inferno-world or ice-world. Since it also looks likely that many people will be able to see both the specification and the code, I’d expect there to be few bugs.

  26. Lionell Griffith says:

    Simon,

    OK. You simply want something that has behavior and then try to see if it can be made better. Obviously it is your choice to make. You have your work cut out for you. Sorry, but I don’t think it’s much more effective than spinning your wheels.

    I do agree that trying to do a really big calculation using a cluster of cheap hardware is an interesting challenge. The result could be a very useful cluster. I am not so sure that a climate model is the really big calculation to try for. I think it is too big of a problem to scale down to useful size.

    I am not into doing the impossible, with the inadequate, immediately, for free. Although I have, rather often, done what was thought impossible with the nearly inadequate, far faster and at a lower cost than others could conceive of.

    I wish you all success on your quest. Perhaps it will turn out not to be as hopeless as I think. A difference of opinion is what makes a market.

  27. E.M.Smith says:

    Since folks are talking about what I want…

    My goals in looking at / working with the climate models, more or less in order of importance:

    1) Learn what they presently do so I can see the issues better. Thus much of my “ponder” about things like grid types or need for a grid at all. Just questioning what is used and wondering what might improve it.

    2) Find Faults. IF there are stupid coding things (like x=x tests…) then point them out. Might help someone, and lets the rest of us chuckle while showing where there are “issues”. Biggest Aw Shit so far is just that they are sensitive not just to initial conditions, but to rounding type and compiler flags. The real world ought NOT be so sensitive, as we’ve had stable climate, modulo what looks like externally driven variation via lunar and solar cycles, for thousands of years despite perturbations. i.e. positive feedback models vs negative feedback reality (perhaps inside a larger oscillator on the 10k year scale when polar ice gets too big and / or the Gulf Stream shuts down).

    3) Learn the physics and methods in the models in more detail. CFD isn’t my main area of expertise… yet… But usually my “discoveries” of interesting alternate ways of doing things happen on my first entry to an area, not after soaking up the indoctrination. (Part of why I’m not enthusiastic about “consensus” and appeal to authority. Most of the time things get moved forward by outsiders asking WTF?.. and ignoring the ‘rules’ that enthrall the others…)

    4) Explore parallel coding methods. I’ve got some skill there, but not as much as I’d like. So think of it as a ‘tech hobby’ point wanting something to play with in that area, and this looks like a good one. Lets me play with my stack of Pi boards too ;-) There’s also a general interest in seeing if Moore’s Law has reached a point where I can have a Toy Computer of about $400 size that’s functionally and structurally similar to a $Millions of maybe 20 years ago. That basic idea is also why I ported GIStemp to a white box PC with a pentium class CPU ;-) Take some of the stuffing out of the “puff” about computer size… it’s only a matter of a decade or so of time, really.

    5) Perhaps make a better model or at least contribute to more reality centric models. One with proper negative feedbacks and that allows for things like THC slowing as the Gulf Stream varies and changed atmospheric height with solar UV / IR shifts. Either by glue-ons to an existing model or via a ‘from first principles’ write, as needed.

    6) Make a model “for skeptics” so we are not going to a gun fight with a fruit pie… I’m just tired of having the crappy models thrown in our face with nothing but “they are crappy. Na na nah nah!” in our tool kit of responses. Unlikely to complete in less than several years unless someone wants to fund it, but it ought to be done. From what I’ve seen so far, the various models are mostly variations on the same theme with different folks just gluing on some small thing that interests them to a basically wrong radiative model tuned to curve match and ignoring solar / lunar-tidal and other real major drivers.

    I have no idea if this exploration will result in anything that actually gets run or any real improvement to models. It could easily be that the answer is it is a waste of time and an impossible goal as the system just can’t work given chaos, complexity, and divergence in reality. Or even just that I don’t have 10 staff-years of coding time left in me and nobody cares enough to pitch in.

    OTOH, it could also be that taking one of the simple models, taking out the fudge, adding some tides & atmosphere modulation with UV / IR, and spreading the computes over $400 of boards (or about 40 to 80 cores, or even 256 CUDA cores on a $600 nVidia board) gives better results, and suddenly we’ve got a machine gun in the knife fight ;-) and anyone can buy one and run it.

    I’m still in the first phases of the “Dig Here!”, so not a lot of conclusions yet.

    So far, my “sense of it” is that the physical part of the model is easily ported and can have some parallel work done in a not too hard way. Mostly they seem to have dumbed down the granularity to make things run on small hardware, not stupefied the physics. In some cases they leave out lower impact things like biosphere or such. This implies that just de-weighting the radiance part and making it more realistic (i.e. in moist air have the troposphere convect more and radiate nearly nil…) can also be done easily. The hard bits will be adding things like UV / IR variation of atmosphere height (as it isn’t understood well yet) or lunar driven currents (ditto) on an 1800 year cycle. Then making things parallel could be easy or hard, depending on language and method and the nature of the problem.

    I’d guess about 20 staff years of work in total. Could likely make decent progress on it all in about a year with 5 folks. But so far it looks like it’s mostly just me and just when not playing plumber or roofer or fence builder on the weekends ;-)

    Well, time to get back to work…

  28. p.g.sharrow says:

    The ability to grow certain crops on a plot of land changes over 58-62 years: climate change, or just weather? (High mountain desert.) The ability to live on a plot of land over generations: climate change, or just weather? (Greenland.)
    What is Climate Change?

    Hansen, Jones, and Mann all thought that they could use supercomputers to find the answer they sought back in the 1970s. I remember reading their papers on the subject and their proposals. If only the giant big Iron that only government could afford was put at their disposal they could poke in all the weather station data and the Great Machines would discover the answer! Modeling clouds was the hang-up, the requirement that they all pointed to as their justification for needing funding for supercomputers to solve for an understanding of climate changes.

    The ancient religion of human-caused climate change was eager to add science to its resume. And the Eloloons needed another cause to add to their campaign against humanity.

    They got the funding, hired doctoral students to program the computers to find the answers that they KNEW existed, and wrote papers that justified their funding. Actually no different than any other government-financed research. Output as advertised, not failure, is what gets you more funding, greater grants to grow your empire and fund your lifestyle…pg

  29. Larry Ledwick says:

    If only the giant big Iron that only government could afford was put at their disposal they could poke in all the weather station data and the Great Machines would discover the answer!

    They were embarrassed when the machine came back with “42” so they have been making up data ever since.

  30. E.M.Smith says:

    @P.G.:

    Pretty good summary. What I’m still exploring is the question of “Is there a converging non-massive compute approach?”. Answer TBD… The present approach is clearly going to fail. Exponential sloth with small increments of size and divergent processing mixed with chaos… AKA perma-research job…

    @Larry:

    LOL ;-)

  31. pouncer says:

    Seems to me one could start with a dodecahedron, three “colors” and then refine.

    Two white poles, five blue and five green. A white face surrounded by green faces behaves differently than a white face surrounded by blue. But how?

    From there, add more colors. Or more faces. Or both.

  32. Back when I was designing boards, we had a copier User Interface (keyboard and screen) based on a Dragonball EZ. https://en.wikipedia.org/wiki/Freescale_DragonBall . All the various built-in I/O functions were useful, but it ran at around 16MHz. Someone had the smart idea of adding in small videos to show people what to do in the case of a paper jam or similar fault, but they were writing in C and figured we’d need around 10 times the processor speed to get a 640×400 video running at 30fps. I’d been writing the test software for the board anyway using assembler, and figured it was possible to do it with what we had, so I wrote a bit of code that did it and was totally relocatable, so they could drop it anywhere and call it to display the video file. I cheated a bit by only updating the parts of the screen that changed, since the full screen update would only run at 15fps, but once that frame was there the changed bits happily updated at 30fps, leaving a lot of bandwidth for the other processes that needed to run. Unfortunately I don’t know if that code ever went live, since before it did our site had been shut and the business moved to Hungary. Globalisation….

    The basic point here is that a judicious bit of assembler for the intensively-used bits of code can speed up the processing dramatically. Though that makes it machine-dependent, it’s probably controllable by a compiler flag so that if the target architecture is correct then you get the assembler, and if not then you get the high-level language (and it’s slower). The 5x speed-up over C that I found may not apply to Fortran, but it may still be worth having. I suspect most of the speed gain is a reduction in the housekeeping overhead of subroutine calls/returns and needing to stack/unstack data for those calls, but with a dedicated subroutine you can keep track of data more easily. Pass the subroutine the address of that point in the database, and we can index from that point to both pick up data and put it back, and only write to disk when the block is full rather than having a lot of little writes.

    It’s still a massive task. One variation that may be worth thinking about is to have variable distances between points – an uneven grid. There are likely some rather large areas of the globe where a large-scale grid would be OK, for example polar regions, oceans, large deserts or large forested areas. Here, probability of clouds/rain may be adequate, and it’s likely we haven’t a lot of data to compare against anyway. For built-up areas we’d probably need a small grid. The points would thus need a database entry of cell volume. We’d still be passing parameters such as energy and air-volume from point to point, so no possibility of losing any of it in the cracks, and the division of the globe would be in triangles from largest to smallest scale where each scale-change would have 4 times the number of triangles per area with around 1/2 the side-lengths.
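
    A sketch of that scale-change idea (plain 2-D coordinates, no real geodesic bookkeeping): each refinement turns a triangle into 4 children with roughly half the side length, so the grid can be made fine only where the detail is needed.

    ```python
    # Sketch: refine a triangle into 4 half-size children, the basis of an uneven
    # grid that is only refined where it matters.  Purely illustrative geometry.
    def midpoint(p, q):
        return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

    def refine(tri):
        """Split one triangle (a, b, c) into its 4 half-size children."""
        a, b, c = tri
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

    # refine twice: 1 -> 4 -> 16 triangles, edge lengths halving at each level
    tris = [((0.0, 0.0), (1.0, 0.0), (0.5, 0.9))]
    for _ in range(2):
        tris = [child for t in tris for child in refine(t)]
    print(len(tris))   # 16
    ```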
