Weather Without Mountains Averaged Gives Climate Without Reality

So I’m sitting in the sun on my patio, dogs snoozing on the lawn, contemplating the Winter Weather Alert for the California mountains from now to next weekend and pondering Climate Models. Wondering about things like how the land contours are handled, how upslope and downslope winds are computed, how damp warm valley air turning into cold mountain snow is computed with no change of external heating or loss. Just what’s happening now.

Then two things occur to me:

1) I could look in the model code I have and see how they do it.

2) I don’t remember seeing anything like ground contours in the GCM code I’ve got (Model II).

Now this matters rather a lot. Much of weather comes directly from valleys and mountains interacting. Everything from all those loverly named downslope winds (like the Santa Anas that can make L.A. hot and miserable) to massive ridge-lift snowfalls.

After all, if climate, as the Warmers claim, is the “long term average of weather”, and if weather depends a lot on ground texture and contours (which it clearly does), what is the long term average of model weather if it does NOT have ground contours and mountains in it? Seems to me it will be rather, um, “bogus” from the get-go.

So I went off to my postings about GCMs. Did some simple word searches. Does “altitude” or “alt” appear? How about Mountain? Nope. Only in one comment by me about the layers processing.

https://chiefio.wordpress.com/category/gcm/

https://chiefio.wordpress.com/2017/01/02/model-ii-main-program/

https://chiefio.wordpress.com/2017/01/03/model-ii-dynam-and-the-dynamic-subroutines/

Now I need to get back on my Climate Model Workstation and go back through all that model code in more detail. A simple word search on a write up / description of it with a few quotes is NOT enough. What if the programmer called it ‘mntn’ as a variable name and hid it in the “surface layer” handling somewhere? It could still be in there.

However:

I’m pretty sure it isn’t. I read all this once and usually my brain hangs onto that kind of thing and reminds me later when I ask a question about it. Furthermore, there’s the issue of “scale”. Most models from that era run at about 8,000 cells. Only recently did GIStemp move to 16,000 cells. So how big is a cell?

Pondering this on the lawn chair, I used a simplification. The Earth is about 8,000 miles in diameter, about 24,000 to 25,000 miles in circumference (Wiki says 24,901 but I didn’t have it on the lawn chair). Say we have the Earth divided into a 100 x 100 grid; that gives 10,000 grid cells (so more than the usual 8k as a safety margin). What is 1/100 of 25,000 miles? 250 miles. IF you start at sea level and proceed into the Sierra Nevada Mountains, you are well past the peaks at 250 miles inland. Nevada is wider than California, and even it is only 322 miles wide. Google Maps says it’s 228 miles by car from San Francisco airport to Reno, Nevada, on the backside of the peaks. That’s by car, so shorter by air.
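To make the lawn-chair arithmetic easy to redo, here’s a quick sketch (same round numbers as above, nothing model-specific):

```python
# Back-of-envelope: cell width for an n x n global grid, using the same
# round numbers as the text (not any particular model's grid).

EARTH_CIRCUMFERENCE_MILES = 24_901  # the Wiki figure quoted above

def cell_side_miles(n: int) -> float:
    """Approximate width of one cell for an n x n global grid."""
    return EARTH_CIRCUMFERENCE_MILES / n

for n in (100, 200, 1000):
    print(f"{n} x {n} grid ({n * n:,} cells): ~{cell_side_miles(n):.0f} miles per cell")

# 100 x 100 grid (10,000 cells): ~249 miles per cell
# -> wider than the ~228 road miles from the San Francisco airport to Reno
```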

Basically the resolution of the models is too coarse to contain the Sierra Nevada Mountains. One cell spans from sea level to the peaks and down to the valley beyond with Reno in it (and then some).

IMHO, that explains why they just use a few “layers” of atmosphere. Often as few as single digits. IIRC 7 was available if you turned it on in one model. Sometimes only surface, troposphere, and stratosphere look to be referenced. How do you model the changes from moist warm sea level air as it rises 7000 feet up the mountains and becomes a Winter Storm dumping lots of snow, if you don’t have mountains and you divide 70,000 feet of air into at most 10,000 foot layers?
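As a toy check on that layer arithmetic (equal-depth layers are my simplification here; real models thicken layers with altitude):

```python
# If ~70,000 ft of weather-relevant air is cut into N equal layers, which
# layer holds a 7,000 ft Sierra crest? Equal layers are a toy version of
# the layer schemes discussed above.

ATMOSPHERE_FT = 70_000

def layer_of(altitude_ft: float, n_layers: int) -> int:
    layer_depth = ATMOSPHERE_FT / n_layers
    return int(altitude_ft // layer_depth) + 1  # 1-based layer index

for n in (3, 7, 40):
    depth = ATMOSPHERE_FT / n
    print(f"{n:>2} layers of {depth:>8,.0f} ft: 7,000 ft crest is in layer {layer_of(7_000, n)}")

#  3 layers of   23,333 ft: 7,000 ft crest is in layer 1  (same layer as sea level)
#  7 layers of   10,000 ft: 7,000 ft crest is in layer 1  (same layer as sea level)
# 40 layers of    1,750 ft: 7,000 ft crest is in layer 5
```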

Simply put, GCMs like Model II cannot correctly model real topography, so they cannot get weather right, so their time-averages of weather to give a hypothetical climate cannot be right, so their conclusions are garbage. That’s what it looks like to me. How can you get the monsoons in India without the Himalaya ridge lift wringing the water from the clouds? Or the desert in China behind them?

To Do:

I need to look at some bigger, newer, finer-grained climate model codes. In particular, those that try to be good weather models first, like the MPAS code or maybe ModelE, and see if they have this same spectacular failure mode.

https://chiefio.wordpress.com/2017/11/30/liking-the-mpas-code-much-more-than-model-ii-or-modele/

I suspect they will. As a first approximation, I think it would take at most a 25 mile cell side to capture the topography of major mountains such as the Sierra Nevada, and likely a 2.5 mile cell side to get accuracy. That works out to a grid of roughly 25,000 / 25 = 1,000 on a side, up to 25,000 / 2.5 = 10,000 on a side. Or a 1,000,000 (one million) grid cell model up to a 100,000,000 (one hundred million) grid cell model. Near as I can find out, nobody is even running things at a fraction of that grid detail.
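Spelled out as a sketch (same round 25,000 mile figure; a real grid would adjust for cells shrinking toward the poles):

```python
# Cells needed for a uniform global grid at a given cell size, using the
# same simple "circumference / cell side, squared" scheme as above.

CIRCUMFERENCE_MILES = 25_000

def cells_needed(cell_side: float) -> int:
    per_side = CIRCUMFERENCE_MILES / cell_side
    return round(per_side ** 2)

print(f"25.0 mile cells: {cells_needed(25.0):>12,}")  #     1,000,000
print(f" 2.5 mile cells: {cells_needed(2.5):>12,}")   #   100,000,000
```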

All of which leaves me wondering:

Without mountains, how can you get any climate model right?


24 Responses to Weather Without Mountains Averaged Gives Climate Without Reality

  1. E.M.Smith says:

    Looks like as of 2015 “high resolution” was 60 km on a side:

    https://www.sciencedirect.com/science/article/pii/S1873965215300050

    Future changes in precipitation intensity over the Arctic projected by a global atmospheric model with a 60-km grid size

    Shoji Kusunoki, Ryo Mizuta, Masahiro Hosaka
    https://doi.org/10.1016/j.polar.2015.08.001

    Abstract

    Future changes in precipitation intensity over the Arctic were calculated based on three-member ensemble simulations using a global atmospheric model with a high horizontal resolution (60-km grid) for the period 1872–2099 (228 years). During 1872–2005, the model was forced with observed historical sea surface temperature (SST) data, while during 2006–2099, boundary SST data were estimated using the multi-model ensemble (MME) of the Coupled Model Intercomparison Project, Phase 3 (CMIP3) model, assuming the A1B emission scenario. The annual mean precipitation (PAVE), the simple daily precipitation intensity index (SDII), and the maximum 5-day precipitation total (R5d) averaged over the Arctic increased monotonically towards the end of the 21st century. Over the Arctic, the conversion rate from water vapor to precipitation per one degree temperature increase is larger for PAVE than for R5d, which is opposite to the tropics and mid-latitudes. The increases in PAVE, SDII, and R5d can be partly attributed to an increase in water vapor associated with increasing temperatures, and to an increase in the horizontal transport of water vapor from low to high latitudes associated with transient eddies.

    Better than Model II, but still not near good enough.

  2. E.M.Smith says:

    Model-E AR5 settings:
    https://data.giss.nasa.gov/modelE/ar5/

    ModelE CMIP5 Climate Simulations
    Configurations for CMIP5 Simulations, Updates, and Issues

    GISS submitted a number of different configurations to the CMIP5 model data repository via the Earth System Grid Federation (ESGF) (or via PCMDI). We have also submitted versions of this model to the ACCMIP project. The configurations vary as a function of the degree of interactivity in the atmospheric composition, the carbon cycle, the ocean model and the atmospheric grid. Each configuration corresponds to a rundeck in the Model E distribution, as follows:

    GISS-E2-R: ModelE/Russell 2×2.5×L40. This uses the ModelE atmospheric code on a lat-lon grid, with 40 layers in the vertical, a model top at 0.1 mb and is coupled to the Russell ocean model (1×1.25×L32). There are three versions of this model that vary in how aerosols and atmospheric chemistry are handled:
    physics_version=1 (NINT), aerosols and ozone are read in via pre-computed transient aerosol and ozone fields. The aerosol indirect effect is parameterized. This corresponds to the rundeck E_AR5_NINT_oR.R
    physics_version=2 (TCAD), aerosols and atmospheric chemistry are calculated online as a function of atmospheric state and transient emissions inventories. The aerosol indirect effect is parameterised. This corresponds to the rundeck E_AR5_CAD_oR.R
    physics_version=3 (TCADI), atmospheric composition is calculated as for physics_version=2, but the aerosol impacts on clouds (and hence the AIE) is calculated. This corresponds to the rundeck E_AR5_CADI_oR.R
    GISS-E2-H: ModelE/Hycom 2×2.5×L40. This uses the same ModelE atmospheric code as above but is coupled to the HYCOM ocean model (tripolar grid ~1×1×L26 – Note that HYCOM output diagnostics are made available remapped to a cartesian 1×1 grid with a uniform 33 levels). There are also three physics versions as described above.
    GISS-E2-R-CC: ModelE/Russell 2×2.5×L40. Interactive Carbon Cycle As for GISS-E2-R with interactive terrestrial carbon cycle and oceanic bio-geochemistry. This corresponds to the rundeck E_AR5_NINT_oR_CC.R
    GISS-E2-H-CC: ModelE/Hycom 2×2.5×L40. As for GISS-E2-H with interactive terrestrial carbon cycle and oceanic bio-geochemistry. E_AR5_NINT_oH_CC.R

    So 40 vertical layers to 0.1 mb. I’d guess that top is about 40 miles up, so call it roughly 5,000 ft per layer. At least the Sierra peaks are in a different layer from sea level…

    2 x 2.5 I think is degrees. That’s 90 latitude bands x 144 longitude bands = 12,960 cells. About 138 miles on the short side.
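    The same arithmetic as a sketch (assuming a plain 2° latitude x 2.5° longitude grid as quoted, and ~69 miles per degree of latitude):

    ```python
    # Cell count and short-side size for the 2 x 2.5 degree grid quoted above.

    MILES_PER_DEG_LAT = 69  # roughly constant at all latitudes

    n_lat = int(180 / 2.0)   # 90 latitude bands
    n_lon = int(360 / 2.5)   # 144 longitude bands

    print(f"{n_lat} x {n_lon} = {n_lat * n_lon:,} cells")      # 90 x 144 = 12,960 cells
    print(f"short side: {2.0 * MILES_PER_DEG_LAT:.0f} miles")  # 138 miles
    ```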

    IMHO, not good enough to capture mountains right.

  3. ossqss says:

    It is all about resolution.

    Ever look at a 160×120 image stretched on a 4k TV?

  4. Larry Ledwick says:

    The issue of topography adds another complication that is not at all obvious.

    That is the release or storage of heat energy due to latent heat.

    Take a parcel of air near San Francisco, high humidity, moderate temperature, and push it east so it climbs the Sierra and cools due to lapse rate and orographic lifting. IFF the temperature drops enough to generate snow, you wring a lot of water (and heat energy) out of the air in the form of snow. The snow becomes a cold sink of stored cold mass until it melts (perhaps with a time lag of months before that heat sink gets eliminated by melting). As that parcel crests the summit and slides downhill it warms due to lapse rate and adiabatic compression and becomes a warm downslope wind, warming the lee side of the mountain (and since moisture was wrung out of it on the upslope trip, it sucks moisture out of the terrain, hence high desert Nevada and Idaho).

    Now repeat that experiment with a very small change – raise the ambient temp on the upslope side of the hill just enough that the air mass never cools enough to create snow and release heat of fusion as the moisture freezes. Now that same moisture load gets dumped as cold rain which quickly runs back to the sea, and the downslope wind on the lee side of the mountains is a cold dry wind. Do the same thing on a day hot enough that rain never forms and you get cool fog and clouds as the air crests the summit but no dump of heat in rain, and the downslope wind will be cool and at about the same humidity it had at a similar elevation on the upslope side.

    Three very different heat transport problems, separated only by slight changes in temperature at critical points in the journey of the air parcel. The same goes for the humidity: whether it is high enough to reach the condensation level determines whether there is rain, snow, or no significant change in the humidity of the parcel as it moves east.
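    A toy version of those three regimes, just to show how small the temperature differences are (standard rough lapse rates; everything else simplified away):

    ```python
    # Toy parcel-over-a-mountain sketch of the three regimes described above.
    # Rough standard rates: dry adiabatic ~5.5 F / 1000 ft, moist ~3.3 F / 1000 ft,
    # dew point falling ~1 F / 1000 ft in unsaturated ascent. All simplifications.

    def crest_conditions(surface_temp_f: float, surface_dewpoint_f: float,
                         crest_ft: float) -> str:
        # Lifting condensation level: where temperature (falling 5.5 F/kft)
        # meets dew point (falling 1 F/kft).
        lcl_ft = (surface_temp_f - surface_dewpoint_f) / (5.5 - 1.0) * 1000
        if lcl_ft >= crest_ft:
            return "never saturates: no precipitation, dry crest"
        # Dry ascent to the LCL, moist ascent the rest of the way up.
        t_lcl = surface_temp_f - 5.5 * lcl_ft / 1000
        t_crest = t_lcl - 3.3 * (crest_ft - lcl_ft) / 1000
        return f"snow at crest ({t_crest:.0f} F)" if t_crest <= 32 else \
               f"cold rain at crest ({t_crest:.0f} F)"

    for t in (55, 70, 80):  # small surface-temperature changes, big outcome changes
        print(f"surface {t} F, dew point 45 F -> {crest_conditions(t, 45, 7000)}")

    # surface 55 F, dew point 45 F -> snow at crest (27 F)
    # surface 70 F, dew point 45 F -> cold rain at crest (35 F)
    # surface 80 F, dew point 45 F -> never saturates: no precipitation, dry crest
    ```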

    This raises all sorts of interesting problems with determining heat transport.

  5. nickreality65 says:

    RGHE theory exists only to explain why the earth is 33 C warmer with an atmosphere than without. Not so. The average global temperature of 288 K is a massive WAG at the “surface.” The w/o temperature of 255 K is a theoretical S-B ideal BB OLR calculation at the top of the atmosphere. An obviously flawed RGHE faux-thermodynamic “theory” pretends to explain a mechanism behind this non-existent phenomenon, the difference between two made up atmospheric numbers.

    But with such great personal, professional and capital investment in this failed premise, like the man with only a hammer, assorted climate “experts” pontificate that every extreme, newsworthy weather or biospheric flora or fauna variation just must be due to “climate change.”

    The Earth’s albedo/atmosphere doesn’t keep the Earth warm, it keeps the Earth cool. As albedo increases, heating and temperature decrease. As albedo decreases, heating and temperature increase.

    Over 10,500 views of my five WriterBeat papers and zero rebuttals. There was one lecture on water vapor, but that kind of misses the CO2 point.

    Step right up, bring science, I did.

    https://principia-scientific.org/climate-science-what-doesnt-work-and-why/
    http://writerbeat.com/articles/14306-Greenhouse—We-don-t-need-no-stinkin-greenhouse-Warning-science-ahead-
    http://writerbeat.com/articles/15582-To-be-33C-or-not-to-be-33C
    http://writerbeat.com/articles/19972-Space-Hot-or-Cold-and-RGHE
    http://writerbeat.com/articles/16255-Atmospheric-Layers-and-Thermodynamic-Ping-Pong
    http://writerbeat.com/articles/15855-Venus-amp-RGHE-amp-UA-Delta-T

  6. E.M.Smith says:

    @ossqss:

    Would be interesting to get a picture of Hansen or Jones, de-res it to 16k cells (call it 160 x 100), and see if anyone can recognize them; or even tell that it is a face…
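    (A sketch of that experiment with Pillow; “face.jpg” is a stand-in filename:)

    ```python
    # Crush an image to ~16k "grid cells" (160 x 100) and blow it back up.
    # "face.jpg" is a hypothetical local file.

    from PIL import Image

    img = Image.open("face.jpg")
    tiny = img.resize((160, 100), Image.Resampling.BILINEAR)   # ~16,000 cells
    blocky = tiny.resize(img.size, Image.Resampling.NEAREST)   # re-expand, no smoothing
    blocky.save("face_16k_cells.jpg")
    ```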

    @Larry:

    Yes, the heat flow flip at RH saturation then at 0 C is a big issue. How does one compute albedo w/o snow line? How snow line w/o effect of ridge lift?

  7. Larry Ledwick says:

    The more interesting question to me is how do you account for the time lag of melting snow packs. As the snow forms it dumps heat of fusion into the atmosphere, then 3-6 months later that heat gets pulled out of the atmosphere as the snow packs melt and cold run off water runs back to the sea. (Ever go wading in a mountain stream in the spring fed by snow melt?)

    Same goes for heat transport as water vapor from the Pacific, which gets dumped as the snow forms or the rain condenses. Advection of water vapor is a huge latent heat transport mechanism which may or may not put heat into the cold storage of snow packs, depending on minor changes in temperature/humidity (i.e. very non-linear).

  8. jim2 says:

    BEST does use latitude and elevation in their temperature reconstruction. I know this isn’t specifically your target.

    ” Any adjustment needed between the stations will be done automatically as part of the computation of the optimum temperature baseline parameters. We break the local climate function into three sub-functions:

    C(x) = λ(latitude) + h(elevation) + G(x)    (8)

    The functions λ and h are adjusted to describe the average behavior of temperature with latitude and elevation. G(x) is the “geographic anomaly”.”

    Click to access 2327-4581-1-103.pdf
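    What that decomposition amounts to can be sketched as an ordinary least-squares fit, with the residual playing the role of the geographic anomaly. The station numbers below are made up, and BEST’s actual λ and h are more elaborate than straight lines:

    ```python
    # Toy version of BEST's C(x) = lambda(latitude) + h(elevation) + G(x):
    # fit linear latitude and elevation terms by least squares; the residual
    # stands in for the geographic anomaly G(x). Station data is invented.

    import numpy as np

    lat = np.array([34.0, 37.8, 39.5, 46.6])        # degrees N (hypothetical stations)
    elev = np.array([100.0, 50.0, 1400.0, 500.0])   # metres
    temp = np.array([18.5, 14.2, 8.9, 8.1])         # mean temperature, C

    A = np.column_stack([np.ones_like(lat), lat, elev])   # intercept + lambda + h
    coef, *_ = np.linalg.lstsq(A, temp, rcond=None)
    G = temp - A @ coef                                   # "geographic anomaly"

    print("per-degree-latitude slope:", coef[1])
    print("per-metre elevation slope:", coef[2])
    print("geographic anomalies:", G)
    ```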

  9. R.de Haan says:

    Climate models made by 3-year-olds.

  10. John F. Hultquist says:

    I do not consider climate(s) as a “long term average of weather.” This is the 2 buckets idea, one with hot water and one with cold water. Put your right foot in one and the left foot in the other. On average you will feel just fine.
    When Köppen first worked out a climatic classification, he used vegetation types. We now use the word “ecotone” for what he sought, namely transition areas between biomes.
    Vegetation integrates the weather. That is, wet high-sun season followed by a mild dry season allows certain plants to grow.
    Ex: https://en.wikipedia.org/wiki/Bamako#Climate

    A dry high-sun season followed by a moist cool season will allow different plants.
    Ex: https://en.wikipedia.org/wiki/Naples#Climate

    There is some notion of “average” in all this, but it has very little to do with a yearly average or a world-wide average – global warming.

    ~ ~ ~ ~
    To add to your thinking about land forms and such, there is another complication most folks have never heard of.
    I’m thinking of “cold air damming.”
    https://en.wikipedia.org/wiki/Cold-air_damming

    The thing is that warm air rides up over the cold air – the surface is not a real mountain.

    We get this in the area near Wenatchee and Quincy WA.
    Quincy is now home to many server farms, but the area is a major tree fruit region (with irrigation) and the weather is important for getting apples to color nicely.
    In the last 30 years, wine grapes have proliferated. We helped write the request to the Bureau of Alcohol, Tobacco, and Firearms to get a small area designated as its own American Viticulture Area (AVA).
    https://en.wikipedia.org/wiki/Ancient_Lakes_of_the_Columbia_Valley_AVA

    That submission includes the idea of cold air damming and the fact that the small area gets more snow than nearby areas. Thus it is differentiated from other growing areas and deserves the distinction of being its own AVA.
    We were involved with another submission – a bit to our south – and there too, we looked for a unique climate factor to include. Every AVA submission will have such a section – I think.

  11. Steve C says:

    FWIW, I believe that the POES satellites render 1 pixel from about 1x1km on average, and the VHF transmissions about 4x4km. If you could use the data from one such scan as input to a high resolution model, you might be beginning to get somewhere. Certainly the images show a pleasing amount of detail.

  12. Over the last few years the weather forecasts have been mentioning the jet stream more and how it is driving the cyclones and anticyclones. Since the jet stream is basically running all the way around, I can’t see this as driving anything – there’s no specific energy source for it to be able to do that. It makes more sense that the vortices of upwelling and downwelling air are driving the jet stream, and that these are the energy sources that produce the jet stream.

    If they are regarding the jet stream as the cause, when it is actually an effect, then maybe that’s the reason it can’t be predicted.

    This ties in to the “mountains” idea since if you don’t have the cell-size small enough to be able to take account of the geography and mountains/hills/water features in driving the air movements, then you won’t be able to predict where the air is rising and where it is falling and thus where the jet stream will be and its strength. When we were talking about re-specifying the way to write a climate model we were talking about very much smaller cell-sizes (and thus orders of magnitude more cells) and also about the elevation and slope of the ground level.

  13. E.M.Smith says:

    @Simon:

    I think you are right. I’ve pondered the global air patterns a lot. My number 1 observation was that it’s all made of circular flows. Vortices of various kinds. Rolling cylinders of air. Global bands of winds. See the “cyclone up” posting as one example:

    Gonzo Gonzalo and Cyclone Up, Vortex Down

    Then pondered how to get those iconic circular based patterns out of square grid cell layouts…

    IMHO that vortex / circularity process drives everything else. Those are the largest power centers in air flow (and possibly the ocean as well). So the Polar Vortex is gigantic (over the winter pole especially) as the whole global atmosphere eventually cycles down it. At the top of convective cells you get Cat 2 hurricane force winds headed off to the polar vortex at the stratosphere boundary (and taking an Earth-tangent arc in the process). Then cyclones and hurricanes are massive beasts moving gigantic energy, again in circular patterns. In between them and the polar vortex are all the High and Low “pressure centers”, all making circular wind patterns with rising or falling warm or cold air in them.

    The whole thing is made of circles and vortexes. Even the “Hadley Cells” and “Ferrel Cells” are cylinders rolling on their sides, wrapping around the globe’s circular perimeter.

    Now how do you capture that in a square grid?

    And yes, absolutely, the energy in hurricanes and similar vortex structures is measured in many atomic bombs scale. It’s what’s driving things. Not a high altitude jet. That’s the product.
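    For what it’s worth, the standard square-grid answer to “where are the circles?” is that the rotation lives in the vorticity, the finite-difference curl of the stored u,v winds. A minimal sketch with a made-up solid-body vortex:

    ```python
    # Rotation on a square grid: vorticity = dv/dx - du/dy (finite differences).
    # The grid never stores "circles"; it stores u,v winds whose curl is the vortex.

    import numpy as np

    n, dx = 64, 1.0
    y, x = np.mgrid[0:n, 0:n] * dx
    xc = yc = n * dx / 2
    u = -(y - yc)   # toy solid-body vortex: winds circle the center
    v = x - xc

    vorticity = np.gradient(v, dx, axis=1) - np.gradient(u, dx, axis=0)
    print(f"mean vorticity: {vorticity.mean():.2f}")   # ~2.00: uniform spin recovered
    ```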

    EM – the air carries momentum with it, and as it is moved from one part of the globe to another that momentum will interact with the pressure-gradients on our packet of air. This is what gives rise to those vortices. When we’re using that geodesic cell-structure (or even if we’re using “rectangular” cells), then as air moves from one cell to the next, the momentum of the air will be at a different angle to the axes in the new cell, and accounting for that in the calculations should naturally produce the vortices in the output data.

    I suspect that the complexity of the calculations involved in the climate models is due to the massive size of the cells that are used. A lot of the processes will need to be averaged out and fudged in order to give answers that match what is measured to happen. By the time you get down to sub-km cell size, though, and at a time-scale where the air that moves from one cell to the next is not 100% of the air in the cell (so time-steps of somewhat less than a minute), the actual calculations should be a lot simpler. Of course you have to do many orders of magnitude more of them, and the precision will need to be higher since we’ll be talking of very small changes in all parameters, but the problem is naturally massively parallel so throwing more processors at it is reasonable. The calculations would not have “forcings” since it would all be based on first-principle calculations. We do know what the properties are of air with various amounts of water vapour. In such a small cell, the water will have a known state, so we’ll know the energy the volume of air contains, its temperature, and whether it is clear air, water-cloud or ice-cloud (and thus set the solar input to cells in its shade). At this scale, also, the effect of the ground on the packet of air can be taken into account, and hills/lakes/forests will have their effect on the packet of air at ground level, and these effects will then naturally propagate upwards since each cell both affects its neighbours and is affected by them. Much the same as in a SPICE simulation, where everything affects everything else and an iterative process finds what the balance point is.
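    (That “somewhat less than a minute” is essentially the CFL advection constraint – air must not cross a whole cell in one time step. A quick check with round numbers, no particular model:)

    ```python
    # CFL-style check: the time step must keep air from crossing a full cell.
    # dt_max = cell_size / wind_speed. Round numbers, no particular model.

    def max_timestep_s(cell_size_m: float, wind_speed_ms: float) -> float:
        return cell_size_m / wind_speed_ms

    for cell_km in (250, 25, 1):
        dt = max_timestep_s(cell_km * 1000, 50.0)   # 50 m/s jet-level wind
        print(f"{cell_km:>4} km cells: dt under {dt / 60:.1f} minutes")

    #  250 km cells: dt under 83.3 minutes
    #   25 km cells: dt under 8.3 minutes
    #    1 km cells: dt under 0.3 minutes  -> sub-minute steps, as suggested above
    ```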

    Setting up the start-data for this sort of simulation would be a massive task. At ground-level, one cell per km² (or possibly more) and then multiplied by as many air-levels as required. Setting up the start parameters is going to require its own program and interpolations of ground-based point measurements.

    Still, it might give a use to those massive GPU arrays used for bitcoin mining once the bubble bursts. I think the resources needed to do the job right are beyond any single person, though. You would also need access to current weather conditions across the globe in order to set start conditions and to compare the predictions with reality, and thus fix any problems with the algorithms.

    The “Gonzo” post pictures show us that as we get further up into the atmosphere then the resolution won’t need to be as high in order to get a reasonable result. Since it’s what happens at ground level (where we live in general) that is important to us, then the resolution at ground level and the first few hundred/thousand metres needs to be much higher than it is. The reduction of resolution with height could however reduce the number of processors required.

  15. A C Osborn says:

    Simon, I agree, the weather in the UK is mostly controlled by prevailing winds, which are mainly from the west and south-west.
    However we can get big changes when the wind direction changes: an increase of 10 degrees C can easily occur when we get southerly or south-easterly summer winds with Mediterranean temperatures, or a decrease of 10 degrees C if we get north or north-easterly winds from the Arctic or Siberia.
    We also get split weather a lot, both split north to south and east to west, depending where each area is getting its wind from.

    BEST imposes the European Continent temperature trends on the whole of the UK, which is patent nonsense.

  16. cdquarles says:

    Exactly right, EM. You can’t and there is so much that we don’t know. Even then, considering what we do know is highly conditional, details like your mountain and vortex ideas matter. Even abstracted boundary conditions matter to a damped-driven deterministic chaotic system of poorly specified abstracted differential equations. [Reminder: the map is not the territory!] Finally, it is the weather that we care about because it is the weather that we live in and must survive. Not the climate, for the climate is a set of summary statistical statements about a defined location’s weather as realized over an arbitrary time baseline, just as the thermodynamic temperature is a statistical summary of the kinetic energies found in a specific sample of matter at a specific location in space and time.

  17. E.M.Smith says:

    My first idea on the “Grid Problem” was just to make it vectors instead of cells. Put a vector start point at each n x m location. Give it a magnitude and a direction. But what vectors? Wind direction is clear, but it is derived from pressure and temperature gradients. Still, I’ve got a vague feeling that the vector math would be easier to handle than all the grid / cell conversions.

    So think of a vector for wind speed. It gets scalars attached for current pressure and temperature. You do the math to move some mass (at a temp at a pressure) to the next vectors over on the trig resolved direction of that vector and then adjust the scalars for temp and pressure accordingly. (Now just expand it for things like humidity, cloud formation %, albedo / transparency / light absorption, etc. etc….)

    In the end, you have a mesh of vectors and a set of simple data moving from one vector calculation to the next. Just seems to me like it would compute faster and be more accurate / real.

    Nothing really analytical to back that up. Just a lot of decades of math and programming intuition.

    So things like “surface” can have a scalar for center altitude and a vector for angle up-slope. Now when a neighbor vector tries to deposit air into it, you know if it must change direction (vector donor is pointed down 10 degrees but recipient surface vector leaves at a 5 degree upslope…) and whether there is any compression heating or cooling. Clearly the vectors need origins and density chosen to avoid things like a 50 mile up-slope then a 75 mile down-slope over a mountain being in one vector… This runs into the fractal problem of a surface having unknown perimeter and surface area, but I think a reasonable scale could be worked out.
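    To make that concrete, a minimal sketch of what one node in such a vector mesh might carry (the field names and numbers are just my guesses at the idea, nothing more):

    ```python
    # Sketch of one node in the proposed vector mesh: a wind vector plus
    # attached scalars, with a surface slope for terrain. Field names are
    # illustrative guesses at the idea described above.

    from dataclasses import dataclass
    import math

    @dataclass
    class VectorNode:
        wind_speed: float       # magnitude, m/s
        wind_heading: float     # direction, radians
        pressure: float         # scalar, hPa
        temperature: float      # scalar, K
        surface_alt: float      # center altitude of the surface, m
        surface_slope: float    # upslope angle along the wind path, radians

        def outgoing_mass_flux(self, air_density: float = 1.2) -> float:
            """Mass moved toward the downwind neighbor per unit area-time."""
            return air_density * self.wind_speed

        def adiabatic_delta_t(self, path_len_m: float) -> float:
            """Temperature change from riding the slope (dry lapse ~9.8 K/km)."""
            climb_m = path_len_m * math.sin(self.surface_slope)
            return -9.8e-3 * climb_m

    node = VectorNode(10.0, math.radians(90), 1013.0, 288.0, 0.0, math.radians(5))
    print(node.adiabatic_delta_t(50_000))   # ~ -42.7 K over a 50 km, 5-degree climb
    ```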

    I’m thinking the whole thing could be implemented as a large data matrix and then multiple processors could just range over different parts of it. (Find a chunk of matrix where the ‘output’ values are not set and compute them, move on. Find a chunk of the matrix where the ‘input’ cells are filled and compute the result. Move on. Do some matrix transformations over the whole thing as possible / needed.)

    At any rate, it’s just an idea kicking around in the back of the pondering spot… What I’ll be doing first is just regular old grid-cell stuff. In reality, the data structuring and the processing approach shape the compute hardware needed, just as the hardware chosen influences the data structure best used and the processing approach to take. I tend to ponder both together for a long time, then choose. Other folks tend to pick their favorite somewhere in that bunch and then the rest derives rapidly (though it may be far from optimal).

    On Starting State:

    IMHO, the world tends to wobble inside bounds. This implies there are net negative feedbacks that keep it in line. (IMHO largely water as negative feedback and celestial cycles as the wobbler – with small scale chaos from hydrodynamics / aerodynamics causing things like thunderstorms at a given point vs another, or which way a cyclone wanders.) This implies that most any initial state ought to work. Over time, the “big lumps” ought to converge on the reasonable values we find persistent in the world (like a hot Sahara and frozen Antarctic) while the chaotic bits ought to manifest their known ranges and non-predictable patterns (like hurricanes landing just where… next year…). Then you could add the celestial wobblers and see if we get known cycles of weather / climate like the 1500 year and 60 year cycles.

    So in short I’m less worried about setting starting state. IMHO if your model is highly sensitive to initial conditions:

    1) You programmed it wrong as the world is not that way in reality.
    2) It’s useless even if right, so why bother programming it that way?

    Pick one and move on ;-)

    @CDQuarles:

    That’s the thing that nagged me massively about the existing models. Soooo many computes achieving nothing. Massive computing to get zero connection to the things that really drive weather and climate and a resolution rather like trying to use 8,000 black dots to recreate the Mona Lisa… Just not going to work.

    @A C Osborn:

    That’s a very good example of just what’s wrong with the present approach. The nature of reality is that some specific things (like, oh, the English Channel) can have a dramatic boundary-like effect that is glossed over. Other things (like the Polar Vortex) have several modes with dramatic effects (like the present breakdown leading to frozen conditions all over the N.H.) that are nowhere to be seen in the models or their products. (In short, they are “not even wrong”…)

    @Simon:

    The present “step forward” for current models is that they are being converted to run on things like CUDA Cores (NVIDIA processors) so they are moving toward GPU processing. Once that’s done (the bits that can be made parallel set out and coded for that) it’s much easier to move it to a different parallel scheme (even if you do have to change language constructs) like using MPICH, OpenMP, OpenMPI, etc.

    I don’t know enough about the ASICs used for mining to know how amenable they are to re-purposing. I mean, the name kind of says it is aimed at one application to the exclusion of others: Application Specific Integrated Circuit …

    OTOH, there would be a large ASIC fab facility in place and looking for products to make, and I’m certain that ASICs would work well as climate cells. Once you have a model you really like (i.e. looks and tests as ‘like reality’) ordering up a million ASIC compute nodes would be much more effective than a giant supercomputer of general purpose computers.

  18. Glenn999 says:

    Two questions:
    What is the difference between weather models and climate models?

    What is the difference between the various weather models?

    EM – the reason for the comment on the amount of data needed to set up was not that the system may be sensitive to initial conditions, since I agree that there must be negative feedback built in to the real weather systems and thus any starting condition will iterate towards the sort of conditions we see in the real world if the program is correct. However, since such a system should give far better predictions of the future weather we’re going to see, setting the starting conditions to what the real-world measurements tell us will be necessary if we use the simulation for weather prediction. Comparison with the real-world progression seems necessary if we want to show that the model is good.

    If it has rained in a certain location, then the ground will be wetter and thus the evaporation rate will change – if the ground data is put in as a set of constants then the model will not be realistic. Similarly, the ground data will change as the vegetation goes through the seasons. Flowers will change the albedo (black daisies/white daisies as with Lovelock’s Daisyworld), and the soil water content (and CO2 content in the air) will change the growing conditions. It seems that the cells in contact with the ground will be the most complex, being affected by their history and by current insolation, and to a certain extent by local fauna. A sudden increase in the local guinea-pig population will reduce the number of dandelions blooming…. There’s a lot of inter-related data to think of. Some will be positive feedback, too, since rainfall may lead to increased surface evaporation and thus more rain after a time-delay.

    Glenn999 – AFAIK the same models are used for weather prediction and climate prediction, and for climate you simply look further into the future. Since the various models diverge over a matter of days, the weather forecasters can be either fairly confident of their predictions for the next week or pretty uncertain, depending on how fast the divergence is. For the 6 month predictions you’d do as well with Old Moore’s Almanac. Given that obvious uncertainty over a few months, projecting many years into the future and screaming doom is just a tad unscientific.

  20. E.M.Smith says:

    There is some overlap in the weather vs climate models, but also significant difference. Weather models will run a lot more cells in finer steps to get accurate predictions of the next few days, where climate models (the ones I’ve looked at) will have just a few time steps per day and fairly coarse grids, but run for a simulation of many years.

    Just look at the granularity of the hurricane model predictions and you can see it. They will have hour markers on the path, and the image is fine-grained enough that you can see the hurricane, where it would be invisible at a 250 mile square grid spacing.

    Then there is “what does the model do”. Near as I can tell, weather models ignore CO2 (as it does not change inside a few weeks) while GCMs (Global Circulation Models) don’t pay much attention to things like hurricanes or blizzard paths (only gross precipitation in cells from what I’ve seen).

    Yet for looking at “long range weather”, they start to converge on the same problems. So some GCMs get labeled as also being long range weather models. Which IMHO is a bit of hype…

    I don’t know enough about the particular dedicated weather models to do a compare and contrast.

    @Simon:

    I see it as 2 different goals so 2 different things to do.

    1) Testing the model. Start with a bland world surface or even an “all wrong” one and see if the model properly (even if after a long run) converges toward a realistic world. That would give some confidence that the model was not insane.

    2) Getting the most accurate run in the least time. Start with as good and accurate a set of data as possible so the model gives a more accurate prediction quickly.

    Both are important, so I’d not choose between them but do both. As my first step would be testing the model, I’d start with #1 (so that was my comment focus) but that doesn’t mean #2 isn’t necessary. Just means it comes later.

    To the extent #1 works well, you need less precision in #2 data.

    However…

    Say there’s a case of bi-stable outcome: like, oh, a rain forest causes more rain and perpetuates itself, but if it is started as dry sand it stays dry. In those cases the initial conditions are highly important.

    What I don’t know is just how many of those cases exist.

    Clearly the rain forest has a tendency to perpetuate itself by causing more rain, and removing it pushes toward dryness; but is it stable at both ends? Don’t know… The Sahara has a wet phase and a desert phase. How sensitive is that switch to initial conditions or to conditions elsewhere? Will a model get that right? Will it evolve from any initial state, or only the “right” one?
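    That bi-stability is easy to sketch as a toy feedback loop, with coefficients invented purely to produce two stable end states:

    ```python
    # Toy bistable moisture feedback: rainfall grows superlinearly with
    # vegetation cover v (0..1), losses grow linearly. Coefficients are
    # invented; the point is that the start state picks the end state.

    def step(v: float) -> float:
        rain = 3.0 * v * v      # vegetation recycles moisture (superlinear)
        loss = v + 0.1          # evaporation / runoff
        return min(1.0, max(0.0, v + 0.1 * (rain - loss)))

    for v0 in (0.41, 0.42):     # a 0.01 difference in starting cover
        v = v0
        for _ in range(500):
            v = step(v)
        print(f"start at {v0:.2f} -> settles at {v:.2f}")

    # start at 0.41 -> settles at 0.00  (dry stays dry)
    # start at 0.42 -> settles at 1.00  (wet keeps itself wet)
    ```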

    So a known unknown…

    I have said that the best use of models is to “inform our ignorance”, so running those models with “insane” inputs then with “accurate” initial conditions would be interesting as it would point at places to explore both the model behaviors (errors…) and the instabilities in the real world (so, for example, if it showed West Texas having a wet phase available, go digging in the dirt there for evidence it ever happened). Having it tell you where there’s a “Dig Here!” to look into.

    To the extent folks limit themselves to only using “best available” initial conditions, they are not testing the models enough and are missing potential discoveries about real world bi-stable states. At least, that’s the way I see it.

  21. E.M.Smith says:

    https://www.ncdc.noaa.gov/data-access/model-data/model-datasets/global-forcast-system-gfs

    The Global Forecast System (GFS) is a weather forecast model produced by the National Centers for Environmental Prediction (NCEP). Dozens of atmospheric and land-soil variables are available through this dataset, from temperatures, winds, and precipitation to soil moisture and atmospheric ozone concentration. The entire globe is covered by the GFS at a base horizontal resolution of 18 miles (28 kilometers) between grid points, which is used by the operational forecasters who predict weather out to 16 days in the future. Horizontal resolution drops to 44 miles (70 kilometers) between grid points for forecasts between one week and two weeks.

    So a whole lot more resolution with grids of 18 miles (bumped to 44 miles for multiple weeks) and runs only a couple of weeks into the future.

    Since this brings supercomputer clusters to their knees, you can’t run it for a 100 year prediction. It also starts to give slightly insane answers with long runs anyway (i.e. you get hypothetical hurricanes that never actually form, or the prediction of a hurricane forming way out at sea has it landing in Florida when it really ends up running up to Rhode Island instead).

    To get runs of 50 and 100 years, you have to cut the resolution way way back and make the time steps larger. Though, IMHO, that doesn’t improve the sanity of the predictions any… But at those granularities, you don’t see individual hurricanes anymore. Just gross shifts of bulk properties like total precipitation.

    While I’ve not looked at the source code for any weather models, I’d be very surprised if they had inputs for the things like CO2 / GHG changes over time and historical volcano events that are seen in climate models / GCMs.

  22. Power Grab says:

    My family friend who got a degree in meteorology not many years ago had this to say when I asked about the grid size:

    “My classes actually never discussed the finer details of models. I have some off-hand knowledge based on conversations I’ve had with colleagues and some theoretical stuff in some of my classes and studies. Both sources told me that mountains and valleys have a big impact on what the models come up with, but they do account for elevation changes. Mountains, for example – this isn’t a perfect example, but when you move an air mass or a storm system over a mountain, it basically scrunches it up. Because you’re decreasing the amount of space that storm system has in the vertical. Then, when it moves off the mountain, it stretches out with the new-found space, and strengthens. “

  23. Larry Ledwick says:

    “Mountains, for example – this isn’t a perfect example, but when you move an air mass or a storm system over a mountain, it basically scrunches it up. Because you’re decreasing the amount of space that storm system has in the vertical. Then, when it moves off the mountain, it stretches out with the new-found space, and strengthens.”

    That is exactly the situation here in the mountain west. A well organized storm system will enter Colorado and Utah from the west and as it passes over the mountains it spreads out and becomes disorganized. Then as the storm drops into the Great Plains, it will find a new center of organization and spin back up, once it is in deep atmosphere again.

    If you watch the winter forecasts for snow storms, you get familiar with the problem as they attempt to forecast where the storm will reorganize along the front range of the Rockies. Just a 60 mile north or south shift of the storm center can make the difference between the Denver basin getting dumped on and the Colorado Springs area getting dumped on, because there is a ridge of high ground dividing the two basins (called the Palmer Divide) that runs basically east-north-east from near Palmer Lake and the Monument Hill area out toward Limon, Colorado. Whichever of those basins gets the most energy is where the storm will reorganize. If the storm sets up south of Colorado Springs, and then strengthens substantially as it slides out toward Springfield, Colorado, and the far southeast corner of the state, it will pull gulf moisture up out of Oklahoma and Texas and pump that moisture around the end of the Palmer Divide, then back into the Denver basin (there is high ground to the north along the Cheyenne and Laramie, Wyoming I-80 corridor that blocks further movement north and helps turn the flow back toward the southwest into Denver).

    You get ample moisture coming up from Texas and cold air getting pulled down off the Cheyenne divide I-80 area, and the result is a big snow dump in the Denver area in the winter time, all because of local terrain effects on surface flow encouraging storm development and circulation along certain tracks.

  24. Pingback: Weather Without Mountains Averaged Gives Climate Without Reality – Climate Collections
