A fairly simple idea, really.
When you look at the weather processes on earth, they are substantially circular. Equatorial winds circle the globe. Hadley cells are rolling cylinders. From tornadoes to hurricanes to high or low pressure cells, winds turn in circles. The polar vortex rotates about the poles, as does the polar night jet.
Yet the climate models are based on grids of square cells. Admittedly bent over a spherical surface in model terms, sort of.
So just how do you properly model a fundamentally circular, cylindrical, spherical process with little square blocks? Winds from one cell into the next cell over will be leaving on a flat parallel face, not with a vector at an angle. I suppose you could have a vector inside the square cell, but even then the mass must move in the square grid.
It seems to me this is fitting a round peg in a square hole and not going to work particularly well.
Pondering this, I thought of a hexagonal grid. Not perfect, but at least then mass can flow in a direction other than 90 degrees. A series of 60 degree changes gives you a hexagonal flow. Not circular, but at least not a straight line.
For 3D space filling, the tetrahedral octahedral honeycomb works.
https://en.wikipedia.org/wiki/Tetrahedral-octahedral_honeycomb
I know it would tend to complicate how you think about programming the problem, but it just intuitively feels like a better approximation of the real circular processes to have vectors to adjoining cells other than 90 degrees.
Ideas? Speculation? Rock tossing?
I have no idea if that would work. I’m not model-literate but it seems to me that, unless the models include the sun and clouds and wind, none of the models will work.
Then there are earthquakes and volcanoes which also are not in the models – especially not the underground volcanoes which, like those above ground, emit carbon dioxide. According to Dutchsinse (the name of his YouTube channel) earthquakes also occur around the planet, sometimes in a sort of circular pattern. He says, and illustrates his theory convincingly to the annoyance of some mainstream seismologists, that pressure builds up like a dome (maybe it’s a dome of hexagons!) under a plate and then pushes the plate, which transfers the pressure. A deep earthquake on the west coast of the U.S. eventually can result in pressure in the east, and a smaller shallow earthquake. Dutch seems to not need a model; he just observes reality, which brings us back to the ancient way of doing things, I think.
Did the ancients use hexagons for figuring things out?
Perfectly logical observation. The polar vortex seems to prefer hex rather than circular. Bees use that configuration for efficient conversion of the straight-line space around a circle of volume. I would consider a honeycomb rather than a cylinder or box…pg
It works on golf balls…
https://www.newscientist.com/article/dn1746-hexagonal-dimples-boost-golf-balls-flight/
Polygon meshes like hexagons make a lot more sense than squares.
Using the concept of fractals, if you made the hexagons adaptive in size (lots of little hexagons inside big standard-sized hexagons) you could approximate any geometric shape you needed, by only doing that subdivision where it was needed (like in modeling thunderstorms, where large clear-air areas could use a coarser mesh).
Not easy to program, but a useful alternative to using very small cell sizes throughout the world when you only need fine resolution in areas of rapid change of conditions (like modeling a hurricane and its eyewall region).
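Something like this rough sketch is what that adaptive subdivision could look like (all the names are invented, and real hexagon refinement is messier since hexes don’t nest exactly, so the split shown here is only schematic):

```julia
# Minimal sketch of adaptive refinement: subdivide a cell only where the
# field changes rapidly. Names (Cell, refine!, storm_gradient) are
# illustrative, not from any real model.

mutable struct Cell
    center::Tuple{Float64,Float64}   # lon, lat of cell center
    size::Float64                    # characteristic cell width
    value::Float64                   # e.g. pressure or temperature
    children::Vector{Cell}           # empty unless refined
end

Cell(center, size, value) = Cell(center, size, value, Cell[])

# Refine where the caller-supplied gradient exceeds a threshold, down to a
# minimum cell size. A real hex scheme would place child centers on a finer
# hex lattice; here the four offset children are just schematic.
function refine!(cell::Cell, gradient::Function; thresh=1.0, minsize=0.1)
    if gradient(cell.center...) > thresh && cell.size > minsize
        h = cell.size / 2
        for (dx, dy) in ((-h, -h), (-h, h), (h, -h), (h, h))
            child = Cell((cell.center[1] + dx/2, cell.center[2] + dy/2), h, cell.value)
            push!(cell.children, child)
            refine!(child, gradient; thresh=thresh, minsize=minsize)
        end
    end
    return cell
end

# Example: refine only near a sharp "storm" centered at (10, 20)
storm_gradient(lon, lat) = 5.0 * exp(-((lon - 10)^2 + (lat - 20)^2) / 10)
root = refine!(Cell((10.0, 20.0), 8.0, 1000.0), storm_gradient)
```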
@P.G.:
Is a polar vortex really hex, or is it the planar projection from above that is hex while the actual air flow is a set of six 3D curves over a sphere?….
Saturn’s polar vortex is hexagonal.
http://en.wikipedia.org/wiki/Saturn%27s_hexagon
Clive Best is going in a similar direction.

Spherical temperature averaging using Icosahedral grids
http://clivebest.com/blog/?p=7820
Great idea and one that is starting to be used in the Model for Prediction Across Scales:
http://cliffmass.blogspot.com/2016/03/the-national-weather-service-selects.html
Unfortunately, the US National Weather Service decided to use an alternative:
http://cliffmass.blogspot.com/2016/07/the-national-weather-service-moves-to.html
@Catweazle666:
I found this bit particularly fun:
I too was contemplating soccer (football) balls, but felt somehow unsatisfied…
I always enjoy it when I find I’ve been cutting a new trail in the forest and then discover little sign posts from a prior explorer saying “this way to the best view”… not only does it save a lot of time and effort, but it confirms the value of the direction chosen… and, of course, the “mind the cliff” signs help too :-)
This is a promising point too:
Fewer fictitious cells and a much smaller number to calculate, making “small iron” computer operation easier.
I suspect the preferred iteration scheme would start at a pole and sweep toward the other pole one ring at a time.
I find myself chuckling at the notion of a Buckyball World :-)
(2 png links display on my tablet when clicked, but not in the comment. Probably a Wikipedia issue with URL ending in .png for a non-png actual page. Just click them and see if you get Buckyball images of 540 carbons size)
It looks like nature might have solved the vertical layers as well:
https://en.wikipedia.org/wiki/Fullerene
A Bucky Onion sounds about right for atmospheric height layers…
http://www.3dchem.com/buckyonion.asp
Programming a Bucky Onion Model World could be a bit, erm, ‘challenging’ ….
But the sign on the trail says “This way to the waterfall”….
In engineering, a truism is: “It ain’t right until it looks right.”
This Bucky ball onion looks right for atmospheric modeling of energy movements. Next question is, do you model the vertexes or the cells? There seems to be a 120 degree pattern here in either case. That gets rid of the extended straight lines, good Feng Shui 8-)…pg
Isn’t it premature to discuss the geometry of the geometric subdivision of a climate model before you have a correct and complete detailed dynamic partial differential equation specification of climate? All we know is that the details are extremely complex and largely unknown. Not only do we not know and understand many aspects of the heat engine that is the earth’s weather, we don’t even know what we don’t know. Clearly, you cannot simulate that which you don’t know and understand. Hence, any model that we can derive from just what we know falls far short of being able to simulate the real thing and is not able to predict well enough to inform political policy better than pure fantasy.
Further, any numerical simulation needs the correct initial conditions to simulate the real thing. There are vast tracts of global area and volume which do not have any measurement device of any parameter to a sufficient detail, granularity, and completeness to provide the initial condition values for a detailed model. Hence, even if the model was both correct and complete, it could not be used to simulate the real climate of the earth. At least not better than it is mostly warmer in the summer than in the winter and wetter in the spring than in the fall. Any edition of the Farmers Almanac can tell you that.
Over 50 years ago, the fantasy was that it was going to take only a decade to develop a good enough simulation of the world’s weather to be able to be used to forecast the earth’s weather. A really good AI was expected to be produced by then as well. Fifty years later it is only going to be ten years before we have a really good simulation of the earth’s weather and a really good AI. Why not cut to the chase and produce something that can really be used today and not some ephemeral “real soon now” vaporware?
This is a pretty cool idea Chiefio!
For what it’s worth, there may be some useful crossover work from quantum computing related to the programming of 2D hexagonal lattices. See Kitaev Honeycomb Model, e.g. https://courses.physics.illinois.edu/phys598PTD/fa2013/L26.pdf. I have debugged one such implementation written in Julia which wrapped a planar lattice into a toroid.
BTW, Julia has a lot of built-in array handling and math functions, is fast! and easy to learn. For kicks, Julia can be used as a scripting language alternative to bash. Just download precompiled linux binaries, put a symlink to julia in the system PATH and place #!/path-to-symlink as the first line in the script.
We don’t need partial differential equations for climate. We only need them for weather, which is a real thing that we live in, must survive and adapt to; whereas the climate is a statistic. That’s one of the major errors in ‘climate science’, if not the major error in it.
About the hexagonal pattern, it is a projection of 3D flows onto a plane, and Earth’s circumpolar jets show the same pattern. If we could project the MJO, I wonder what it’d look like. It wouldn’t be hexagonal, I suspect, and that the polar one projects as hexagonal may be due to geometric limitations, since the polar area is much smaller than the tropical area.
And there’s this:
https://www.google.com/search?q=saturn+hexagon+storm&client=firefox-b-1&tbm=isch&source=iu&ictx=1&fir=81eiFTgI_USVDM%253A%252C8erkqBql6W_niM%252C_&usg=__tEJWnoXY03l9SqoDLLfZgA96dkU%3D&sa=X&ved=0ahUKEwi62IStgOLXAhUYwWMKHf9-BZoQ9QEIUTAJ#imgrc=4NpMwbjL-gjv2M:
Saturn’s pole
Yup, what I thought… From the PDF of the document by Hansen describing Model II, in “Scheme B”:
1983_Hansen_ha05900x.pdf
It has a diagram showing how grid boxes are used in computing things. Data are computed at a central dot in the grid cell, but “winds” have “secondary grid points” where they are computed: the center of each grid side and all 4 grid vertices.
NOTE: ALL WINDS ALWAYS N / S or E / W, never at other angles.
On nominal page 614 (really just a couple of pages in) they added “The Coriolis force and metric term (spherical geometric factor)” to the pole only, but found it introduced a “direct polar cell”.
So they knew they were not getting polar circulation right, added terms to make it happen by getting a bit of circularity to happen, but left them out of the rest of the model.
So no hurricanes, no tornadoes, no major low pressure center sending warm wet air up nor high pressure centers sending dry air circling down; the entire real atmospheric dynamic gone. Along with all the massive heat fluxes those vortex winds transport to the tropopause for disposal.
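To make that face-only limitation concrete, here is a toy donor-cell advection step on a square grid where mass can only cross the four faces. This is a generic upwind sketch of my own, not Model II’s ADVECT, and every name in it is made up:

```julia
# Toy donor-cell advection on a square lat/lon-style grid: mass can only
# cross the four cell faces, i.e. only N/S or E/W, never diagonally.

function advect_step(q, u, v, dt, dx)
    nx, ny = size(q)
    qnew = copy(q)
    for j in 1:ny, i in 1:nx
        ie = i == nx ? 1 : i + 1      # wrap east-west (periodic longitude)
        jn = min(j + 1, ny)           # closed boundary at the "poles"
        # flux across the east face (upwind: donor cell depends on sign of u)
        fx = u[i, j] > 0 ? u[i, j] * q[i, j] : u[i, j] * q[ie, j]
        qnew[i,  j] -= dt / dx * fx
        qnew[ie, j] += dt / dx * fx
        # flux across the north face
        fy = v[i, j] > 0 ? v[i, j] * q[i, j] : v[i, j] * q[i, jn]
        qnew[i, j ] -= dt / dx * fy
        qnew[i, jn] += dt / dx * fy
    end
    return qnew
end

# A blob advected by a purely diagonal wind still has to staircase its way
# across the grid, one face at a time:
q = zeros(20, 20); q[10, 10] = 1.0
u = fill(1.0, 20, 20); v = fill(1.0, 20, 20)
for _ in 1:10
    global q = advect_step(q, u, v, 0.2, 1.0)
end
```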
cdquarles: “climate is a statistic”
Oh? What statistic would that be and how would one measure it if not measuring weather and then computing some kind of statistic? It appears to me that we need to simulate the weather time series accurately. Then from that compute your climate statistics whatever they are.
Perhaps you believe climate is unchanging and expressible by a small set of simple numbers. If it doesn’t change, it is a simple matter of determining what the climate is today and you would know the climate forever. Then why all the angst about our experiencing catastrophic climate change sometime in the distant future? If it changes, wouldn’t you need the set of partial differential equations to be able to build a simulation of it?
We need to define our terms scientifically so we know what we are talking about. Otherwise, all the words are nothing but verbal mush that is impossible to translate into a computer simulation. Perhaps it is as the politician said about pornography, “I can’t define it but I will know it when I see it.” If it is, it is not science. It is nothing but a non-objective assertion of opinion without connection to anything real.
If you think I am wrong, tell me what climate is and how to measure its statistic? If you can’t tell me that, can you tell me what weather is and how to measure it? Further tell me how to measure those things on a global scale and know that it has in fact been measured correctly. Until you can tell me that, I can’t know what you are talking about and neither can you.
Guessing, pontificating, and asserting is easy. Real science is difficult. Especially if you want your predictions to be found to be correct.
IMHO the Koppen system is the best:
https://en.wikipedia.org/wiki/Köppen_climate_classification
While it is largely based on temperature, water, and plants, the plants can be inferred from the first two. (Oh, and an altitude effect).
For “climate change”, IMHO, unless you can change your latitude, altitude, or distance from water; there is no climate change in time scales shorter than long… (modulo asteroid impacts ;-)
There is also an impact from mountain ranges reducing water (as in “rain shadow”) but mountains don’t change in a few thousand years either…
The whole notion of “climate as the long term average of weather” is, IMHO, wrong. In my world view, weather is the random deviation inside climate; and sub-scale processes that are not climate scale (like a thunderstorm is weather as it is gone in hours, but average water in a decade starts to look like climate and over a couple of centuries low water means desert).
EM:
Interesting. For the Koppen system, there is no such thing as “global climate” or even “climate change”. All climate is zonal in nature. Each climate zone has its own weather variation pattern. Zones don’t vary except in response to catastrophic geological or astronomical events. This strongly suggests that it is pointless to attempt to simulate climate to any greater detail than represented by the Koppen system. No super computers needed. No amount of grant money can change anything but the number of papers published in an attempt to get another grant. All you really need is good long term zonal weather statistics and you can determine which climate zone you are in.
I am not so sure the variation of weather within a zone is truly random in the sense that each weather event is independent of all other weather events. There is an observable connection among weather events that facilitates short term zonal weather forecasts that are more or less reliable.
For example, learning how to read the clouds offers a strong hint about the weather over the next few days. Track the changes in barometric pressure and you can better specify the intensity of weather change. This implies that it might be possible to produce a zonal weather model that would give better and longer term forecasts than farmers in the field or sailors at sea.
The bottom line is the terms “Global Warming” and “Climate Change” are essentially empty. They are useful only for scamming governments and people who believe what “they say” is actually how things are.
@P.G.:
I’d do temperatures at the mid-point of each cell so gradients run evenly center to center. Winds I’d calculate at the faces so they proceed normal to the face. To one other face if circular, to two faces sharing a vertex if linear. Mass flow split if to two cells, kept constant if to one on a single face. I’d be tempted to find a way to compute a proportional mass and have it be a mass / velocity vector. Might be interesting to compute winds from the center point as a vector based on the pressures in all adjacent cells. Could likely get 10 degree or less precision on wind vector…
Wonder if ADVECT in Model II uses variable pressures or just proportional mass flow then temperature causing pressure…
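Something like this little sketch is what I have in mind for getting a wind direction out of the six neighbor pressures. The flat geometry, unit spacing, and lack of Coriolis or friction are purely illustrative, and the function name is mine:

```julia
# Sketch: estimate a wind vector at a hex cell center from the pressures of
# its six neighbors. Summing (p_neighbor - p_center) along the six 60-degree
# directions gives a gradient estimate; the induced flow runs down-gradient,
# giving roughly 10-degree or better direction resolution as mused above.

function hex_wind(p_center::Float64, p_neighbors::Vector{Float64}, d::Float64)
    @assert length(p_neighbors) == 6
    gx, gy = 0.0, 0.0
    for (k, pn) in enumerate(p_neighbors)
        θ = (k - 1) * 60 * π / 180          # direction to neighbor k
        gx += (pn - p_center) / d * cos(θ)   # finite difference over spacing d
        gy += (pn - p_center) / d * sin(θ)
    end
    wind = (-gx, -gy)                        # high pressure pushes toward low
    speed = hypot(wind...)
    direction = atan(wind[2], wind[1]) * 180 / π
    return speed, direction
end

# Example: low pressure at neighbor 1 (the "east"), so the wind blows toward it
hex_wind(1000.0, [996.0, 999.0, 1000.0, 1001.0, 1000.0, 999.0], 100.0)
```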
@Lionell:
Of course you can model things you do not understand. Folks do it all the time. It’s only the accuracy or usability that’s in doubt ;-)
Newtonian Mechanics are a wrong model. Relativity is much more correct. Yet we regularly use Newtonian physics to model our world.
Now my notion is to model ONLY what we know. From basic physics and known data. Then compare the model to reality and the error of offset “informs our ignorance” and tells us where to look for new understanding.
To do that, I’d start with just an airless world and sunshine: the “solar” portion (change of sunlight with obliquity, precession, etc.). Once that was working in a way that was congruent with what we know from the Moon, or Mars, then I’d proceed to add some air. Once that matches Mars / Venus or another planet with air and no oceans, I’d add oceans.
To do all that, you first must start with a sphere in the sun, but divided into segments. Thus the question of “what grid works best?” comes early. Followed shortly by “what sunshine does to an airless globe?” then “where do the winds go?”.
Only at the end do you add oceans, rain, snow, albedo changes, etc. etc. and see if it gets close to reality.
All development is iterative. May as well embrace that up front. Just don’t expect it to be reality. Do expect it to show you where you have an unclear grasp of reality.
IF your model diverges, you need correct initial conditions. IF your model converges, most any initial state can be used. So, computing Pi, you can make a convergent math / test and start with Pi=30, and it will eventually get to near the real value. Divergent systems, you can set initial state to exactly reality and it will go bat-shit crazy if run long enough.
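A toy illustration of that convergence / divergence point (not the actual Pi method I was thinking of, just the simplest fixed-point map I know, paired with the classic chaotic logistic map):

```julia
# Convergent: x -> x + sin(x) is attracted to the nearest odd multiple of pi,
# so even a sloppy starting guess near 3 homes in on pi itself.
function converge_to_pi(x0; steps=20)
    x = x0
    for _ in 1:steps
        x += sin(x)
    end
    return x
end
converge_to_pi(2.0)     # ~3.141592653589793 despite the poor start

# Divergent (chaotic): the logistic map at r = 4. Two starts that differ by
# one part in a billion separate completely within a few dozen steps, which
# is why a diverging model needs exact initial conditions, and even then
# only stays honest for a short horizon.
function logistic(x0; steps=40)
    x = x0
    for _ in 1:steps
        x = 4x * (1 - x)
    end
    return x
end
logistic(0.400000000), logistic(0.400000001)   # nothing alike any more
```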
@BlueIce2HotSea:
Neat. While I try not to be faddish about languages, I’ll explore Julia on your recommendation.
@All:
So my idea is to NOT treat climate as a statistic about weather (averaging a highly chaotic process over time…) but instead as a derivative of known physical conditions. Solar input (modulated by orbital mechanics) warms the surface. Water sources putting vapor in the air, then it raining out (modulo altitude / wind shadows), sets wetness and clouds. IMHO, that’s really 90% of it. Then add in air flows (one could do it as “plug in modules” for things like the Polar Jet and tropical convergence; or one could put in processes for mass flow & heating and see if those develop). At that point, putting a ‘call random’ on iterations of some of the values within a range ought to give you weather as a result…
I think that, up to the weather portion, a convergent model of reality can be built. Bald assertion, I know; but that’s what it “feels” like. (That Aspie synesthesia kind of feels / tastes / smells thing. Sort of sweet salty for this one, not bitter dry cold….)
@Lionell:
BINGO! You Got It!
Climate changes when some dramatic thing changes like a closure of the Isthmus of Panama, or axial tilt goes from 24.5 degrees to 22, or the Sun goes into a funk, or a new mountain range rises, or… You know, Geology things…
Everything else is weather.
Were that NOT the case, we would not have cactus in Arizona nor a Tropical Rain Forest in Brazil. Those have evolved over millions of years of constant climate (variations within bounds).
We have the species we have because things never change all that much. Giving time for evolution to cause specialization.
IMHO, the real question is “How much can weather change on century and millennial scales inside the geologically constant climate zones?”
Correct on climate being constant long enough to allow specialization, but you have to add that local zones can shift over time, as long as the specialized plants can move to the new zone boundaries by slow progression of range.
For example the bristlecone pine has historically existed over a very large range for thousands of years, but the individual local groups from time to time shift with climate shifts.
In glacial times the bristlecone pine zone slowly descends a few thousand feet in the mountains, where it can change climate by changing elevation zone.
This shifting of viability also has to factor in the long term viability of the seed (or shoots or whatever method of propagation a given plant uses).
If the seeds of a specialized species can successfully lie dormant for a couple thousand years it can periodically “drop out” of a climate zone and hibernate in the form of viable seeds when the climate is not favorable and then “return” when conditions are again right for that species to thrive.
https://www.arborday.org/media/mapchanges.cfm
Actually a case could be made that the best map of climate changes is provided by nature in the ranges of various plant (and maybe animal) species. If that is the case, first reconstruct the plant ranges over several thousand years with high resolution (i.e. pollen studies etc.). Then define the characteristics that define those zones for each plant (plant A may respond more to changes in moisture than to temperature). Then feed that info into an AI program and let deep data mining figure out the broad climate changes that have occurred over those thousands of years (assuming the hardiness of the various plants has not changed significantly).
Then once you have a good index on “what happened” then try to figure out “why it happened”
Despite the graph in the Hansen paper, it looks like the actual code splits advection into vectors and does do some kind of allotment per N/S and E/W component along with a diagonal component. This is for Model II:
Mjal2cpdC9.f
So while not completely horrible, still pretty limited.
At least it isn’t entirely fluxed up.
I got to looking at the Model II code again. In particular I was interested in how advection was handled (comment above) and how the essential opacity of the Troposphere to IR was handled. Still working on that one. BUT, I ran into this gem in the programming:
Now maybe I’m missing something, but it sure looks like, in setting that “window h20”, they have a no-op loop.
For N in the range 1 to N-limit,
set Tau(absorption) of N to itself….
Um, that ought to do nothing at all. To the extent it does do something, it ought to be an error.
There are some obscure tricks I’ve seen LIKE that (but not that) which can be used to do odd things, so maybe I’m just missing a trick here… but I don’t think so…
In the program: R83ZAmacDBL.f
Oh Dear… they do a bunch of things with surface types. Albedo differences, wetness, etc. Then, in the same code as above, there’s this bit:
So they just commented out the code to allot solar flux according to surface type… ’cause they have something wrong in the math or methods…
I know “It’s an old model and they have moved on”, but really….
Well some think this may explain the Bermuda Triangle…
https://goo.gl/images/7bsV2c
Truth is the models likely miss large scale and small scale. Who knows how much energy is used on accelerating the hydrological cycle on a global scale?
The Mercator Projection of the Earth’s Surface is a “model” without parallel for both use and distortion. And I’d like to use it more…
But what I’d like to do is be able to pick a current point on the surface by latitude and longitude, and reassign it to a pole. (East and West “poles” being where the Equator crosses the dateline or Greenwich zero.) I’d be able to display a map where, say, the East Pole under Africa is moved to the bottom of the page and spread all over in place of Antarctica, while the empty Pacific West Pole becomes the empty North Pole, and the rest of the continents run sideways and are differently distorted to our habitual view. Of course a more versatile re-mapping might let me set the East Pole in Greenland, or Dallas, or any random place, and resize territories as needed. Or in the most extreme case I’d like to pick four excerpted points, a rectangle encompassing just part of the globe, like South America only — then mapping the excerpt as if it were wrapped around a globe.
My problem at present is that I do my “geo coding” by zip code ID equivalence to Lat and Long, mapped to cells in Microsoft Excel. It “works” but only for very limited resolution of a very limited set of data…
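The re-mapping itself is mostly a coordinate rotation: pick the new pole, rotate the sphere so that point sits on top, then feed the rotated lat / lon into whatever projection you like. A rough sketch, with the function name and all details invented:

```julia
# Sketch: rotate geographic coordinates so that an arbitrary point
# (pole_lat, pole_lon) becomes the new "north pole". The rotated lat/lon can
# then go into an ordinary Mercator (or any other) projection to get the
# re-centered map described above.

function rotate_to_new_pole(lat, lon, pole_lat, pole_lon)
    # degrees -> radians, then to a unit vector on the sphere
    φ, λ = deg2rad(lat), deg2rad(lon)
    x = cos(φ) * cos(λ)
    y = cos(φ) * sin(λ)
    z = sin(φ)

    # 1) rotate about the z-axis so the chosen pole sits at longitude 0
    α = -deg2rad(pole_lon)
    x, y = x * cos(α) - y * sin(α), x * sin(α) + y * cos(α)

    # 2) rotate about the y-axis so the chosen pole moves up to latitude 90
    β = deg2rad(pole_lat) - π / 2
    x, z = x * cos(β) + z * sin(β), -x * sin(β) + z * cos(β)

    # back to lat/lon in the rotated frame
    return rad2deg(asin(clamp(z, -1, 1))), rad2deg(atan(y, x))
end

# Example: make the "East Pole" (0N, 0E, under Africa) the new north pole;
# the old north pole then lands on the new equator.
rotate_to_new_pole(90.0, 0.0, 0.0, 0.0)    # -> (0.0, 180.0)
```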
Climate, Lionel, is by definition a statistic. It is a summary statement of a specified location’s weather over an arbitrary baseline. The climate isn’t ‘real’. The weather is real. To describe the weather I need a few things: 1. solar power input modulated by daylight time, 2. Barometric pressure, 3. Relative humidity, which is a function of local temperature, barometric pressure and absolute humidity, of which the dew point temperature is a key indicator of the local air’s heat capacity, 4. wind direction and velocity and 5, rates of change of the previous four. Thus, the climate is only as variable as the bounded damped-driven weather is.
cdquarles: [Climate] is a summary statement of a specified location’s weather over an arbitrary baseline.
You give a list of things in reference to weather that, when summarized, magically become climate. How can you summarize these things in any meaningful way? Time series average? What kind of average?
Most of the measures you mention are intensities and only a few are extensive. One can properly average extensive measures as units per extent but one cannot properly average intensity measures to a value with any computable units. This means you cannot give a complete and interpretable summary statement of the weather. The best you can do is specify ranges for the measurements plus averages for the extensive measurements.
This is anything but a statistic (singular). It is a multiplicity of individual quantitative and qualitative statistics (plural). This multiplicity would have to include many details about the geographical areas within which the weather was measured to produce your summary statistics. This would make interpretation of the meaning of the summary statistics extremely difficult to impossible. Comparison between areas becomes even more difficult to interpret.
cdquarles: The climate isn’t ‘real’.
Which means climate can’t change for real because it isn’t real. Certainly not worth the trillions already spent to save us from its changing. Even more so it is not worth the many trillions that have been planned to be spent in the future.
It is no different from fantasies of unicorns dancing through an imaginary forest. So what if the imaginary population of imaginary unicorns doubled in the imaginary forest and doubled the imaginary methane and CO2 they emitted? There would be zero impact upon the real world where real wealth is consumed and real lives are impacted by the effort to save the earth from changes in the imaginary thing called climate.
Bottom line: Things become much more complicated when you move off the feel-good feel-bad undefined floating abstractions and come down to earth. Then specify precisely enough that both you and the rest of us can know what you are saying. Most importantly, it then has cognitive content that can be found to be true or false when compared to reality. It is vastly more rational than the “I can’t define it but I will know it when I see it” kind of thing.
It all reduces to the thing called Climate Change being a total scam meant to extract unearned wealth from those who created it and transfer it to those who don’t, can’t, and won’t produce wealth and don’t think they have to apologize for being the parasites they truly are.
Now that we have demolished “climate” as a valid computable concept, can we get down to the work of understanding weather and its causes of variation?
@Lionell:
Just So.
Also note that most things weather related or climate related are fractal. This means it is impossible to actually measure or know them. What is the surface area of a cloud? What is the surface area of a mountain?
https://en.wikipedia.org/wiki/Coastline_paradox
Now since temperatures vary with surface characteristics, and the amount and nature of the surfaces is fundamentally unknown (it changes with how you measure), just what “average” removes the fractal problem from the first step?
My example for this was a spring camping trip to Quincy. Mid afternoon, rocks in the sun were about 85F to 95F depending on color. Air was about 67 F. In the shade were patches of snow and the creek was about 33 F. What was THE temperature? What was the surface area of the rocks? The “coastline” of the creek? Surface area of the snow? All fundamentally unknowable and depend on how you measure.
Okay, Lionell, I think we actually agree here. Gotta love the ambiguity of English ;).
EM: “All fundamentally unknowable and depend on how you measure.”
All leading to the unlikelihood of even weather being computable. So much for weather simulations being more than statistical approximations which will rapidly lose accuracy and precision as their time horizon is extended.
The current weather computer models do appear to be better than a farmer reading the clouds or a sailor reading the seas but not by much more than a few days. Knowing the weather tomorrow or the possible weather this weekend has general value. Is it worth all the fuss and expense of the national weather services? Not sure. I do find it interesting that if their predictions are wrong, all they have to do is explain why they are wrong and get to keep their jobs.
cdquarles: Gotta love the ambiguity of English
Agreed.
Ambiguity is its greatest strength AND its greatest weakness. Learning how to deal with it has been a major challenge for me.
@RogerCaiazza:
Finally got time to hit the link. Good stuff!
Nice to have the idea confirmed. Sorry to see I’m still running a bit behind others on thinking it. Really happy someone else already wrote such a weather model, as that means source code may someday be made free… and save many months of my life 8-)
Found the comments interesting too:
Glad to see real world practice showing it works.
I ought to have remembered that Gamers always get the new tech right first ;-)
https://mpas-dev.github.io/
“The current MPAS release is version 5.2. Please refer to each core for changes, and the github repository for source.”
Implies source code available… wonder if there are hoops….
Looks like a registration…
License is mostly disclaimers and don’t sue us.
And folks can redistribute!
This is promising.
In theory, I’m downloading now from
https://github.com/MPAS-Dev/MPAS-Release
Guess the reg is for users, not looky-loos…
Dr. Curry put up a very uplifting post at her place.
“We understand a lot of the physics in its basic form. We don’t understand the emergent behavior that results from it.”
https://judithcurry.com/2017/11/29/a-veneer-of-certainty-stoking-climate-alarm/
@BlueIce2HotSea:
Probably was a bad idea to look at Julia after reading a few hours of ModelII FORTRAN, as anything looks good (“beer goggles” meet GISS Goggles!), but I was sold at:
https://julialang.org/
Sum up a million calls to random spread over whatever machines are in your cluster with just saying doit in a clear way? Sign me up!
As fast as FORTRAN for numbers, lets you do calls to C or FORTRAN without wrappers. That means I can redo a model core in Julia without rewriting all the subroutines and parallelize bits as desired and needed.
OK, I know what the next few evenings on the Pi Cluster will be doing :-)
Install, configure, test, evaluate Julia….
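The “no wrappers” bit is Julia’s ccall. A quick sketch: the first call is the classic libc example from the Julia manual and actually runs, while the Fortran library and subroutine below it are hypothetical stand-ins for the pattern:

```julia
# This one really runs: call straight into libc from Julia
ccall(:clock, Int32, ())

# Calling Fortran works the same way, except gfortran lowercases the symbol,
# appends an underscore, and everything is passed by reference. The library
# (libmodelbits) and subroutine (saxpyish: y = a*x + y) are hypothetical:
n, a = Int32(5), 2.0
x, y = rand(5), zeros(5)
ccall((:saxpyish_, "libmodelbits"), Cvoid,
      (Ref{Int32}, Ref{Float64}, Ptr{Float64}, Ptr{Float64}),
      n, a, x, y)
# y would now hold 2*x; the old FORTRAN math libraries can be reused this way.
```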
Just out of curiosity what is the tessellation used in aerodynamic models and finite element analysis?
Both of which use models which are actually field tested and work, and have to compute values of adjacent cells to figure out the value of the current cell and transfer forces, momentum, temperature, pressure and velocity data from cell to cell.
My recollection is that wire frames are almost always a collection of triangles, which can always be assembled into various polygons like hexagons and pentagons as the situation requires.
Larry Ledwick: Just out of curiosity what is the tessellation used in aerodynamic models and finite element analysis?
From the solid modeling I have done, the wire frames are a collection of triangles of various sizes and orientations that fill the surface to be computed. The sizes of the triangles are a function of the curvature of the surface. They are stored as a triangular mesh as a set of vertices and normal vectors for each triangle, and various other attributes depending on the intended use of the triangular mesh. Such models are used as input into finite element analysis in a format compatible with the target software.
Triangles are used because of their ability to map any polygon to the required resolution, thereby mapping the surface of any solid object. In graphics scenes, the same computations can be applied to each triangle in a massively parallel computation, thereby supporting a high frame rate for real time graphics. I suspect finite element analysis does much the same things for similar reasons.
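A minimal sketch of that storage layout, with generic field names not taken from any particular FEA package:

```julia
# Shared vertex list, triangles as index triples into it, one normal per face.
struct TriMesh
    vertices::Vector{NTuple{3,Float64}}   # (x, y, z) points
    faces::Vector{NTuple{3,Int}}          # indices into `vertices`
    normals::Vector{NTuple{3,Float64}}    # one unit normal per face
end

# Build the normal for one triangle from the cross product of two edges
function face_normal(a, b, c)
    u = b .- a
    v = c .- a
    n = (u[2]*v[3] - u[3]*v[2],
         u[3]*v[1] - u[1]*v[3],
         u[1]*v[2] - u[2]*v[1])
    len = sqrt(sum(abs2, n))
    return n ./ len
end

# A single right triangle in the z = 0 plane; its normal points up (+z)
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
faces = [(1, 2, 3)]
mesh  = TriMesh(verts, faces, [face_normal(verts[1], verts[2], verts[3])])
```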
LG: “collection of triangles”
Yep, simplest two dimensional shape. The vector of stuff flowing through it doesn’t care it’s a triangle.
Larry, Lionel & Jim2,
IMHO one of the most powerful tools for solving climate problems is FEA (Finite Element Analysis).
Two and three dimensional problems can be solved by creating grids that are made up of TRIANGLES. Then all you need to do is to solve a series of differential equations numerically.
It turns out that the differential equations that apply to electric fields, magnetic fields and heat transfer are essentially the same so this technique has many applications. In 1993 a 6,000 point FEA from the Budker Institute of Nuclear Physics was used to design the magnets in the Duke HIGS (High Intensity Gamma Source):
http://www.tunl.duke.edu/web.tunl.2011a.higs.php
The BINP FEA is now available to the general public so I used it to model the surface temperature of the Moon. My model is in close agreement with observations whereas the IPCC’s CMIP models are not. The precision achieved is remarkable considering that my FEA was limited to 100 points and the program ran on my laptop instead of a super computer.
https://tallbloke.wordpress.com/2014/04/18/a-new-lunar-thermal-model-based-on-finite-element-analysis-of-regolith-physical-properties/
The same FEA was used to model the effect of rotation rate on surface temperature:
https://tallbloke.wordpress.com/2017/06/06/extending-a-new-lunar-thermal-model-part-iii-modelling-the-moon-at-various-rotation-rates/
Currently I am trying to improve on Robinson & Catling’s climate model using FEA to include the effect of cloud layers………………wish me luck as it is not going well.
TRIANGLES ROOL……you can even make HEXAGONS out of them! My apologies for shameless self promotion.
Totally “Off Topic”. You have to love Trump’s speech in Missouri today.
He has so much fun baiting the Fake News folks! They have to be as dumb as rocks given their failure to figure out what he is doing to them.
E.M. Smith: After all the money spent on all the models over the years, one would think that perhaps it is time to move on from FORTRAN? For crying out loud, I would bet the ranch that the code problems you showed exist in the current versions of whatever model they are using.
rogercaiazza: “I would bet the ranch that the code problems you showed exist in the current versions of whatever model they are using.”
My guess is you would win the bet big time, judging from my experience with NASA employee software written for advanced experimental aircraft data analysis in the early 1990s. Nearly every serious coding malpractice was included in such code, along with many I had never seen before.
I have even seen potentially fatal errors for the pilot embedded in operational avionics software for the hottest fighter jet at the time. That it was there was denied, in writing, by the agency responsible for maintaining it. Along with that denial, they sent me a code fragment that had it painfully visible. It was at the core of the software that provided the pilot with his heads-up display updates.
The error could be triggered by executing a rather standard dog fight escape maneuver without waiting at least 3 seconds to set it up. If executed too soon, the aircraft would spin out of control at about 700 ft/minute toward the earth. The only way to pull out was to accelerate downward still faster so the control surfaces had something to control. It is unknown to me how many pilots dug their memorial crater that way. I know of one pilot who pulled out just 50 feet above the ground and experienced 9g lateral forces doing it. Rather than identifying the cause and fixing it, the problem was masked by a managerial edict not to do that.
It still chills me to think our lives, safety, and security are becoming more and more dependent upon such software garbage. Imagine what is going to happen when we are required to ride in autonomous vehicles by government dictate.
The vulnerability of software to being hacked is a rather minor problem compared to the risk imposed by abysmal software practices and ignorant managerial dictates.
Welcome to the future of an increasingly software driven world. The revenge of the machines will be a dance in the park by comparison and we would have done it to ourselves. The machines will simply be doing what we told them to do.
Oops. That was not 700 ft per minute, it was just short of the speed of sound. Foggy morning brain. Sorry.
Hilarious!
================================
RE Julia. This is a fast evolving tool/toy and there are evolving tradeoffs. Caveat emptor! e.g. Check your distro’s Julia package version. If it is < 0.6, I advise using the latest and greatest from julialang.org. I’d also use the pre-compiled binaries and avoid compiling the full development setup if your distro’s version of Perl has an older Python dependency, say 2.7. On one system I had to default to the older version when logged in as root, else it breaks package management.
The thing that interested me was using (toroidal) honeycombs to model climate. The basic goal is to reduce grid cell size and improve computational performance. Ideally, the surfaces of hollow 2d toroids could hold the bulk of the relevant climate info and so might allow a dramatically reduced data set.
Perhaps six toroids could be wrapped around the earth to form the Hadley, Ferrel and Polar cells, a few more for the jet streams, etc. Vertical tubes could be part of the structure to communicate coriolis & cyclonic information. To model climate change, the Hadley toroids would expand or shrink, e.g. if the primary Hadley toroid were large enough, nearly all of South America becomes a tropical rainforest. Perhaps regional climate (and/or even weather?) could be teased out by firing up toroids within toroids on demand.
Sorry my comment went so long.
@RogerCaiazza:
I always find it odd when folks decry “Old FORTRAN code”. Even when I do it. (So this isn’t a rant at you; just an exposition on why FORTRAN is still so widely used and, yes, loved.)
The language is still ideally suited for the problem set for which it was designed. I regularly run into issues in other languages I can easily fix with FORTRAN.
Even with that, the language itself has evolved over time and new specifications. “Modern” FORTRAN, oh, pardon, modern fortran as they have had the name “move on” as well… has a pretty large suite of the newest and trendiest language features added to it. Sometimes more than I can stand… Some of the new stuff is hard for me to read as it just doesn’t look at all like FORTRAN.
https://en.wikipedia.org/wiki/Coarray_Fortran
So, in theory, the simplest path for me to make a climate model, like ModelII, run parallel is just to insert the Coarray syntax into it. Except it looks little like FORTRAN:
So as of about 2008 FORTRAN (erm, fortran) is now parallel processing capable “out of the box”.
Yet code from the 1960s can still run just as written and the FORTRAN 77 spec is still widely supported. There are other whole languages and even language families that have come and gone in that time, so “good luck” finding a way to run those programs without a re-write.
Oh, and often the older programs had far more skill and time and effort put into making them clear, functional, accurate, and efficient. (Admittedly not the “climate codes” from what I’ve seen… but things like the FORTRAN Math Libraries are an international treasure. Thus keeping them as callable in Julia is a Very Big Deal, IMHO.)
One example of a PITA in other languages where FORTRAN is a walk in the park: Fixed Format Data. There’s TONS of it. All over. All the temperature data, for example. In about the 1980s to early 90s it became “trendy” to poo-poo fixed format data. Newer languages tossed out the notions of format statements or reading in fixed data types at fixed positions. CSV (Comma Separated Values) or just byte vectors became all the rage. Except people don’t write byte vectors and adding commas to data like $10,004,548.00 can really mess up your data…
So C has ways to do that. I get to write a paragraph or two of “crap” just to say:
Name is 20 char, then skip 8, then pay is a float in the next 6 spaces
That in FORTRAN is trivial to do.
I’ve rejected the use of more languages than I can remember just due to their failure to understand that fixed format data MATTERS. A LOT.
So Julia has a lot to recommend it, but I’m already staring at this same issue. Want to read in fixed format data? Well, get out your array parsing tools and DIY…
https://github.com/JuliaLang/julia/issues/5391
From 2014 so hopefully by now they’ve figured out “It MATTERS. A LOT!!!” and have fixed this horrid omission…
Lots of folks have fixed width field data. Many data can NOT have inserted ‘,’ or ” ” or “whatever” without polluting the data. And what kind of language requires me to change my input data to use the language?
Yeah, it’s a “hot button” for me. Too many months of my life (years?) spent working around this hole in “modern” languages…
So, since most temperature data have fixed format files, my “tool of choice” is often FORTRAN. Since it is numbers and math heavy, the tools in FORTRAN are ideal. And now, should I want modern parallel processing I can just use coarray fortran and “move on”…
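For completeness, the DIY in Julia is not long. A sketch follows, with a made-up record layout matching the “name is 20 char, skip 8, pay in the next 6” example above; the point stands that FORTRAN hands you the same thing in a single FORMAT statement:

```julia
# DIY fixed-width parsing in Julia: plain string slicing, nothing built in.
# Layout: name in columns 1-20, skip 8, then a 6-character float.
# In FORTRAN the whole thing is one read:
#     READ(UNIT,'(A20,8X,F6.2)') NAME, PAY

function read_fixed(line::AbstractString)
    name = strip(line[1:20])
    pay  = parse(Float64, strip(line[29:34]))
    return (name = name, pay = pay)
end

# Build a sample record with the right column widths
record = rpad("Smith, E.M.", 20) * "19881231" * "950.25"
read_fixed(record)    # (name = "Smith, E.M.", pay = 950.25)
```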
In short: It isn’t the language that was crappy, nor the age of the programs, but the programmers who wrote crappy FORTRAN that’s the real problem, and the age of the BUGS as they are not fixed for decades that’s bad. (All programs have bugs at the start. It’s how fast you remove them and approach / reach zero bugs that matters.)
E.M.
Re Fortran adapting. A conversation from about 1988
“What language will we be programming in, in the year 2000?”
“I don’t know what language but it will still be called Fortran”
@Lionell:
We had 2 immutables in my shop.
1) egoless programming. In code reviews, check your ego at the door. We all screw up, admit it up front and move on. Finding an error NOW is better than after shipping. It isn’t YOU or YOUR CODE, it’s the company product, and we all have a stake in it.
2) The Law Of Mutual Superiority. No one is fundamentally a superior. We are all mutually superior to each other. All of us have some particular strength making us superior on that point, and some fundamental weakness that could use a helping hand. I can improve any code YOU write, and YOU can improve any code I write. There is no ego in that, only truth.
One of the other group managers and I would get together in conference rooms and be brutal on any document we were preparing. Some folks thought us enemies or unhappy with each other, for we could be very loud and even shout obscenities at times. “Are you an idiot on this or what? It’s F-ing obvious why…”. Yet when the door opened out we came, happy and friends. Why? We knew our document was better before handing it to the V.P. and asking for $$ (or “whatever”). Literally, there was NOTHING some random V.P. could shout at us that would rattle either of us, as we’d already been through worse. It was, in fact, some times “fun” to see an upper exec try to rattle one of us, as their efforts were so tame in comparison. Talk about prepared… ;-)
That kind of cover-up of a bug in my shop would have someone perp-walking for the door…
@BlueIce2HotSea:
You thought that was a long comment? Wow! ;-)
Mine are way longer… just see the one prior to this…
Like the idea on the toroidal approach. I’ll need to ponder it a bit.
FWIW, I’m pretty sure Julia is in my future. It just does so much so well. Then, being able to call FORTRAN subroutines means I’m just one FORTRAN stub away from a fixed format input routine. A tiny kludge, but I can live with that. Better than the circumlocutions of C IMHO. (Read string, parse, cast, pack, return…)
But, as usual, We’ll See. I need to find out if it is on the Pi in Debian, then test drive it a little, then do some timing tests vs FORTRAN. Oh, and memory. One YouTube on Julia mentioned they were a bit sloppy about memory use and specifically stated “if we get run on embedded systems we will need to fix that” or words to that effect. So the whole tiny SBC thing might become an issue…
Which implies one of two paths:
It works great without memory size issues. Code large hard bits in Julia, use FORTRAN or “whatever” for special needs (like fixed format I/O).
It has memory footprint issues on Pi sized boards. Code small modular bits in Julia so they spring into being, do their distributed job, return a result, and evaporate. Use FORTRAN for the big wrapper where it all accumulates and for I/O bits.
For now, I’m going to start by using the second path. Simply because I’m new to Julia and I’ll be learning “one trick at a time” as needed for a small nibble of the job. Better suited to writing small functions one at a time…
I also need to find out if coarray fortran exists in the fortran on Debian… I could easily see using “some of each” depending on how each handles different kinds of parallel math and which is more efficient for that bit.
Much of the computer models consists of nested loops working on arrays. Either approach could be a big win. Then there are large data blobs in the models. Refactoring some of them into arrays where the Julia vector math operators could be used might also be a big win. (Or it could cause massive data communication loads as massive arrays get shipped over the network dozens of times to different boards just to do one math operation and then get shipped back ;-) So “some testing required”. (AKA play ;-)
It is a different mode of thinking (non-procedural and less imperative / iterative ) so has lots of opportunities for smaller more direct code in the re-writing. But first I need to get the tool set available loaded into my head so I can see those opportunities. That means some “Butts In Seats” time at the keyboard / computer / monitor…
My first goal is learning how to write FORTRAN in Julia… not the language, the style. Julia lets you have free form data types, but that costs big on performance times. Typing your data makes it fast and efficient (per one write-up)
https://docs.julialang.org/en/stable/manual/performance-tips/
And the bits both sides of it. So paying attention to types brings more speed. For that reason, I’m likely to pick some FORTRAN module and ‘re-code’ it into Julia as an exercise, keeping the typing just to assure the generated code is similar / fast. THEN toss in the parallel aspect and test speed change for all three of Julia, coarray fortran, FORTRAN MPI / OpenMPI. Then think…
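For anyone wondering what “paying attention to types” buys, a toy comparison (timings vary by machine; the gap is the point, not the numbers):

```julia
# The same loop over a concretely typed array versus an untyped (Any)
# container, which forces boxing and dispatch on every element.

function sumsq(v)
    s = 0.0
    for x in v
        s += x * x
    end
    return s
end

typed   = rand(10^6)                 # Vector{Float64}
untyped = Any[x for x in typed]      # Vector{Any}: boxed values

sumsq(typed); sumsq(untyped)         # compile both methods once

@time sumsq(typed)      # tight machine code, no allocations
@time sumsq(untyped)    # boxing and dynamic dispatch: much slower
```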
I think I need to ponder a representative problem set for re-coding the style of climate model FORTRAN into Julia. They have FFT (Fourier transforms) code but that looks like a built in to Julia. They have time step then geographical iterative layers (so maybe coarray or maybe distributed Julia then inside that the array operators.. ) You get the idea. Make some “toy models” of similar shape but lesser complexity and play with alternative syntax and methods…
Maybe what the climate models need is a partnership with a couple of computer science professors. Have them take blocks of code and have the students compete to come up with clean efficient coding of each block with proper comments.
Sort of a crowd sourcing / open sourcing effort where many eyes and hands would make the project much quicker than anyone could afford to do with salaried coders. You would need some computer science professors who were sticklers for good sound coding practices who had years of experience in real world high quality FORTRAN coding.
Say take a block of code and assign it to 3 – 5 teams of students to have them recode that block, or at least identify the problems with the code block, and if possible optimize the code for efficient execution, properly structure it to make it maintainable, and comment it so it is understandable.
It would be a good introduction to how shoddy code practices can muck up a program to the point it is almost impossible to fix.
EM: “It isn’t the language that was crappy…”
Bad code is easy to write in any language. There is no language that can force a sufficiently determined programmer to write good code. In fact, writing good code in any language takes knowledge, skill, discipline, focus, attention to detail, and effort to the point of almost exhaustion.
It is true that any programming language can solve some problems easier than other languages. There is no one language that makes solving all problems easy. Thus, one should choose the language that solves the problem you must solve with a reasonable degree of facility in as full of a context as possible.
If FORTRAN fits your problem set, more power to you. I left FORTRAN and switched to Pascal ca 1984 after having written a ton of code, both good and bad, using it. It simply did not fit the kinds of problems I was dealing with. Plus it invited the use of monumentally bad coding practices that couldn’t be worked around. I have never considered using it since then. Ian’s comment: “I don’t know what language but it will still be called Fortran”, might well indicate modern FORTRAN has little in common with the FORTRAN I knew and despised.
Since ca 1991, I have been using standard ANSI C and have written over a million lines of code in it for many different classes of problems. Including object oriented, functional, structured, procedural, real time, graphical, multi-threaded, parallel processing, image processing, compilers, parsers, and device drivers. However, it won’t easily read a Hollerith punch card, which is about all that the FORTRAN I knew did better.
I hope this is an out of date transitory condition:
https://packages.debian.org/jessie/julia
So with me standardizing on armhf and them only having arm64 “Houston, we have a problem” in Julia land…
I’m guessing it can be installed via a full on compile cycle…
But Hope and Guessing are not good strategic direction… Sigh.
@Larry:
Would that it were so… BUT:
Most of the models are proprietary. Hard to get source code at all, and when you can, it has lots of limits on what you can do with it. So far I’ve found 3 that are available publicly.
Model II is so old as to be nearly irrelevant, but the code is wide open. Unfortunately, all the needed input files, matrices, parameter files, etc. are not so easy to come by… and the current keeper of the code (a university) doesn’t want you to play with it directly but to use their GUI interface and stick to canon on what the model actually does.
Mod E is one step out of date (the current incantation is held close to the vest) so you can get the code, and I have. However, it is even more riddled with all the state and logic driving parameters being outside the code so you can make it do damn near anything with the “right” inputs, that you don’t have and can’t get.
Then the recently downloaded zip file for MPAS that I’ve downloaded but not yet opened or explored so a “go fish” question and new toy box.
See, folks doing R&D don’t want to have someone else jump ahead of them based on “their work” so they release enough to get published, but not so much as to have someone ‘steal a march’ on them. Add to that fear that us nasty ol’ skeptics are going to pillory them for their stupid human tricks; well, I just think it will be hard to get the source code + ALL the input files from them for “recoding” and “examination”…
But I’d be all for it.
IMHO, what would work better is just a flat out “Open Source Climate Model” from the start. That’s basically my sub-goal. Once I get something, anything, to work “reasonably” (even if all “their” approach and coded in limitations) I intend to toss it to the world for anyone to improve and extend.
Partly that’s why I’m taking time up front to make sure I’ve got a decent set of code base, and a good structure / language / environment to the beta -10 release (or maybe ‘alpha candidate’ ;-) I don’t want it to be too ridiculous right out the gate or uptake will be minimal…
Basically, I want it modular enough that folks can prune off a module to play with and not have to read all the rest of the darned thing; yet extensible enough that someone can easily just add a “Call Solar UV” and have a place to put in solar spectral variations without re-writing 2000 lines… and certainly not the dense blocks of unreadable all caps incomprehensible variable names lacking any data dictionary that exists today.
Oh, and NOT needing a $Millions supercomputer to run it… In that regard, I note in passing that the Model II code has a header stating it was originally developed from a code base on an IBM RS6000 which is only a little slower than my desktop Pi ;-) so there’s hope…
Well, back to looking for “How to install Julia on armhf Debian” ;-)
@Lionell Griffith you might like this…
That didn’t survive the trip from the Cygwin text window…
llanfar,
I started programming in the fall of 1965 as a biomedical engineer. I could have chosen COBOL or FORTRAN 2. COBOL didn’t fit the problem, so I spent 3 days, learned FORTRAN 2, programmed my component design software in another 2 days, and produced a successful product. Never looked at COBOL again. My Tao was strong! I soon stopped being a biomedical engineer and have been a software engineer ever since.
Pingback: Liking the MPAS code Much More than Model II or ModelE | Musings from the Chiefio
I started with FORTRAN IV. Loved it. Took Algol. Loved it almost. Took a 2? unit (something small) COBOL class. Misery and pain. Dropped it the third week IIRC. Never looked back.
I did have to help some COBOL applications programmers with their FOCUS database interface / use as a professional DBA ; but I could mostly ignore their code ;-)
I think Algol is still the one language for which I have a real soft spot… While FORTRAN was the one I first used to hack a system, it was Algol that let me crash it on demand ;-)
(Sidebar on both stories:)
On the Burroughs B6700 all empty space could be used as swap by the OS. “Why waste it?” was the thought. So any given patch of “unused” disk might have swap contents on it. FORTRAN had an indexed or random read / write where it was YOUR responsibility to zero the contents of your file on first open since FORTRAN didn’t know if this was the first, or if it was an important database being opened for use. I “put it together”…
Simple program:
Open file of size 1 MB for indexed or random read / writes. Scan for text saying “assword:” and print that line. Close and release file ;-)
Nothing like capturing all the logins you wanted to make a guy happy ;-)
Crashing:
Algol has a thing called a Task. You can launch an async task to do something just by invoking the name of it:
Task A;
would launch program A.
Now the operator could kill a program by looking at his screen, reading the ID number, and typing “ds xxxxxx” IIRC for DiScontinue task#. Except the computer is faster than the operator… so…
B:
while true task A;
A:
while true task B;
At the keyboard, just launch either A or B and an exponential cascade of tasks brings the system to a halt as it runs out of memory, swap, cycles, etc. etc….
Some operators could be very nasty to sophomore programming students. They tended to have a very bad night… Especially when the UID traced back to someone else and they chose to “get in their face” about it ;-)
Ah, the joys of not being in charge but being a peon ;-)
Now lost in the misty past when such things were not criminalized and I wasn’t in charge of stopping it ;-)
All you have to do is look around you, millions of years of evolution right before our eyes.
I’m not saying it can’t be improved upon, just that many “experiments” have already been run and we are the results.
Of this run.
She’ll get it right one of these times :)
I was on my 4th major (voice performance or music education – not sure which was last) when I first took a Basic class in college. That’s where I got the cyberzombie handle (I’d zone into the screen and ignore the world for hours). Wrote a Rubik’s cube solver and never looked back. Spent the dreaded semester in COBOL, plenty of FORTRAN, Assembler, and Pascal (with a side of Forth) before dropping out to write a cost estimation program for the construction industry using dBase II.
That intro to the business world certainly was painful. Learned that initial estimates of the time it will take to write an application for a new platform (MS-DOS) using new hardware (8086) and new software will be WILDLY low. At one point I spent 56 hours straight (cocaine-assisted – my future wife had not set me straight on that) figuring out that a 2-table limit wouldn’t cut it so moved it to dBase III (and later Clipper). At the end of that session, my boss took me to dinner. Had half a glass of wine and was drunk. Went home to find my roommates playing D&D, so joined in and fell asleep while rolling the dice.
My standard for estimates, now, with decades of skill and experience:
Make THE best estimate I can with all available data. Double it and add 10%.
Tends to be about right most of the time ;-)
I’m having trouble understanding how a two-dimensional mesh of triangles could be used to model the bulk atmosphere. Wouldn’t you need tetrahedrons or some such 3D volume?
Also wondering, in the case of a 2D mesh wrapped around a 3D object's surface, can the size of the triangles be scaled dynamically? Say, some phenomenon occurs that can't be resolved properly with larger triangles?
@Jim2:
There are as many ways as there are programmers writing code about it.
For 3D from a 2D mesh: Right off the top, I'd think of "layers" of 2D mesh at different altitudes. So nested "balls" of triangles, like the Bucky Onion.
Or you could do any 3D projection of the triangles into any shape that has a triangular end / edge: tetrahedrons, triangular rods (prisms), octahedra (might be tricky to get dense packing), icosahedra. So put up a layer of "that solid" then move up D distance and plant another layer. Don't know as it is a good way to do it, though.
As to size of triangles: Nothing says they must be same sized, or regular. Many graphics animation methods use triangles for the surface with various “stretch” operations done to change the size and shape (but not the number of triangles…)
Just a matter of writing the code to “make it so”…
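Just to make the "layers of 2D mesh" idea concrete, a minimal Julia sketch (the type names and fields here are invented on the spot for illustration, not from any real model code):

struct TriMesh
    vertices::Vector{NTuple{3,Float64}}   # (x, y, z) positions of the mesh points
    triangles::Vector{NTuple{3,Int}}      # each triangle is three indices into vertices
end

struct LayeredAtmosphere
    layers::Vector{TriMesh}       # one triangular shell per altitude
    altitudes::Vector{Float64}    # altitude of each shell, in meters
end

# Linearly interpolate a per-point value between two adjacent shells.
interp(vlow, vhigh, zlow, zhigh, z) = vlow + (vhigh - vlow) * (z - zlow) / (zhigh - zlow)

Stretching or refining the triangles is then just a matter of editing the vertex list per layer.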
EMS, I was concerned about how to encompass all the effects of all the phenomena with two-dimensional mesh/meshes, not so much the programming aspect.
http://julia-programming-language.2336112.n4.nabble.com/Julia-on-Raspberry-Pi-2-td14131i20.html
As of a year ago they were still not able to build on a Pi M2. Still searching for a success example.
@Jim2:
The way you handle 3D effects in a 2D mesh is to have layers of 2D mesh at different altitudes. Then interpolate and spread as needed. One could argue that 7 layers of 2D mesh are in fact a 3D system, but it isn't made with solids, so IMHO it's a 2D solution in 7 sets of data.
As to “all effects” – I’m not sure they can cover all effects even for just measuring temperature on a planar surface. All models are a compromise of some aspects of reality.
Ah, as of March 20 looking for testers of a first fire on Pi:
https://discourse.julialang.org/t/need-raspberrypi-arm-julia-package-testers/2774/19
BUT it looks like you need Raspbian to get the changes made to support it.
https://juliaberry.github.io/
So quickest way for me to test is to burn a chip with Raspbian and do the test drive.
I can understand Armbian not doing the work to port it. Embedded guys not that likely to be interested in a new interactive parallel computing math oriented language. As to why not in the base Devuan… I can only suppose it’s a timing issue. Devuan 1.0 finalized prior to this package working. Oh Well.
I can live with that. Nice thing about the R. Pi is that you can easily have different system images for different things. It's not a big security or operational risk for me to have most of my stuff on a Devuan set; but have a second systemD Raspbian set for examining Julia.
Well, at least I have a path…
jim2: I'm having trouble understanding how a two-dimensional mesh of triangles could be used to model the bulk atmosphere.
A two-dimensional mesh is a representation of a flat surface. The vertices of each triangle are represented by two coordinates each with all normal vectors pointing in the same direction.
If you represent the vertices of each triangle with three coordinates, you can now represent a curved surface. The triangle normals will now point in various directions as required to represent the curved surface. A suitable mesh can represent the external surface of an object. This kind of mesh is used in real time 3D graphics.
Now consider extending the triangle mesh into the interior of the object with a similar representation and the solid object can be filled with the mesh. A void in the object would not have any triangles extending into it.
The principle could be extended to fill higher dimensional objects in a similar way. It is simply much harder to visualize because we are three dimensional creatures who live on a time line. If you go to the god observation point, you see the timeline as part of the object.
I am not sure that a volumetric triangular mesh is the best way to represent the atmosphere. However, it would be a way to fill space with cells of different shapes than cubes or rectangles. The conceptual complexity might not be worth it.
My specialty is highly interactive user interfaces, 3D geometry, graphics, image processing, and volumetric analysis using multiple flat x-ray images. The closest I have come to finite element analysis is developing an x-ray image simulation program to generate test images for my volumetric analysis software. The program uses a small subset of the ideas involved with finite element analysis. It does generate geometrically correct x-ray projections of solid models that are reasonably close in appearance to actual x-rays. It is much safer and much more adaptable than working with actual x-rays.
LG – I understood how a 2D mesh could be mapped to the surface of a 3D object already, but thanks for the reiteration anyway.
Looks like the way to get a quick parallel processing experience with Julia is the Jetson board. Not real keen on the fan in it, and not looking to pop $600 for a new toy, but others might be interested in “going there”:
https://github.com/JuliaLang/julia/blob/master/README.arm.md
So 256 CUDA cores and directly accessible from Julia with the CUDA function package in it.
Would be a pretty hot compute engine.
Unfortunately for me, I’m just looking to stick a small toe in the Julia compute space ATM.
So, looks like the Debian chip is my path…
However, should it be fruitful, it’s nice to know that there’s an easy “big engine” path available in the $600 price range. My guess is it ought to run a climate model fairly well.
Lionell Griffith, 30 November 2017 at 3:42 pm
Software that can kill you and evil bastards who will cover it up! Really scary.
If you want to take things to the next level imagine software that is self aware and capable of adapting to its environment. Clearly this is way above my pay grade so I recommend:
“Life 3.0” by Max Tegmark
It is hard to imagine a “Super-Intelligence” that would have much use for humans. At best we would be treated like pets. At worst…….cockroaches.
gallopingcamel,
Cockroaches have a way of continuing to exist in spite of our attempts to eliminate them. They have been around for almost a hundred times longer than we humans have been. Thus, at worst, we would be treated like the smallpox or polio virus. Cockroaches will live on no matter what.
A big part of the problem is that we humans like to scare ourselves by telling stories of things that go bump in the night. We create crises so we can have something to talk about. Actually, most of our fears are far worse than the reality behind them. Then, we mostly try to ignore the really bad stuff until it is too late to do much about it. To survive in the long term, we must get out of our own way and stop the stupidity! The chances of that are slim to none.
Considering what we humans are doing to ourselves, if we do develop Super-Intelligence AI, I suspect it will be smart enough to understand that to eliminate us, all it has to do is wait. We will eliminate ourselves by continuing to act stupidly.
PS: A super intelligent AI that could not be destroyed would have no motivation to act. It would need no values and have nothing necessary for itself to acquire and keep. You have to be alive, be mortal, and have skin in the game in order to have motivation to act.
Lionell Griffith says:”My specialty is highly interactive user interfaces, 3D geometry, graphics, image processing, and volumetric analysis using multiple flat x-ray images”
I’ve done a little work with OpenCV to create an app to read barcode data from an image. Do you happen to use that? Or some other image library? Or, roll your own?
I really like OpenCV.
jim2,
Due to the special nature of the image processing I do, I had to roll my own. I process multiple 16 bit grey scale high resolution images from which I must extract geometrically correct sub pixel information. I achieve feature resolution down to approximately a third of a pixel. There is no commercially available or open source package that can do this.
I haven't used OpenCV but will take a look at it. If it has the usual open source stipulations, I can't use it because of the intellectual property that is embedded in my code. Since I sell the code, I don't and won't give away the source. I have only two small chunks of public domain open source code in a half gig of source code. The remainder is written, tested, and documented by me.
Interesting! OpenCV uses a BSD license. That I can use if it is applicable to my purpose. I don’t have to share my code if I don’t want to. I will definitely dig more deeply into it.
Thanks for the tip.
This is sad. There is NO internal documentation. Thus the code cannot be debugged nor modified with reasonable effort. It is written in C++ making it next to impossible to know and understand its structure with any degree of certainty. Worse, because it is C++, there is far too much going on under the hood to make it usable for any high performance application because of the impossibility of controlling its overhead. Finally, it would cost me more time and effort to use it than to write what I need from scratch.
Rather like that joke, "Except for the incident, how did you like the play, Mrs. Lincoln?" I am sure it is a fine package except for its painful flaws that make it unusable for me.
Thanks anyway.
When you say internal documentation, do you mean code comments or something else?
Also, what languages do you use? I noticed OpenCV has some CUDA capability.
Another thing I discovered late in life.
You can walk into any construction zone, ask stupid questions, and be treated with kid gloves.
Cus they think you are an undercover OSHA agent.
jim2,
I use ANSI C and a bit of C++ for a few low duty cycle utility functions. I get the performance I need right now without using a CUDA engine. A four core i5 with built in Intel graphics is enough for now. Future applications might need more.
A CUDA engine has interesting possibilities if I could figure out how to convert my core image analysis module to be compatible with it. Sadly, not all image processing can be easily reduced to vector parallel processing. Right now I am constrained to use multiple CPU core symmetrical parallel processing. If I need more speed I can use more cores. Fortunately they are becoming more available.
jim2,
By internal documentation I mean well structured, well commented, properly labeled, and indexed source code. Such that sufficient information is provided to be able to understand the code based upon local information and to support the finding of where things are. This is an absolute necessity for a half gig of source code and useful for anything other than use once and delete code.
The OpenCV source code is totally lacking in such things. Not unlike nearly every chunk of open source code I have examined. Apparently, since the code is distributed free of charge, its programmers believe it does not have to be of professional product quality. They understood it as they wrote it and that is good enough for them. In my estimation they produce use-once-and-delete code.
Incidentally, hardly any of the commercial code I have examined is much better. What is not understood is that you don't create quality code for other users; you create it so that YOU can support it in the long run. The benefits flow abundantly from that point on.
Some of my code is over 20 years old. I can look at it and rather quickly understand what I have done, why I did it, and how to make use of it. This more than saves the little bit of time and effort I spent doing it correctly and completely in the first place. My source code is its own documentation. I even programmed a tool that exposes it as such.
The usual excuse for not doing what I do is that it takes too much time. My answer is learn to type and it won't take that much time. Programming using two fingers with hunt and peck is equivalent to a carpenter not knowing how to use a hand saw and hammer. Most of my life, I have been able to touch type in excess of 50 words per minute with a peak rate of about 70 words a minute. That is the equivalent of doing carpentry with a power saw and a nail gun. You then have the time to do a quality job rather than just doing stuff and pushing it out the door without regard for its future.
Obviously, this topic is one of my very hot buttons.
When I first started out, I revisited some of my code and didn’t understand it very readily. Now I try to write self-documenting code by writing small procedures that usually do one smallish thing. The name of the procedure conveys a lot of information about what it does, and additional comments supplement that. Most of my projects aren’t large enough to index.
@chiefio
Longer comments are less likely to be read.
Re Julia fixed format data
The juliadb library is supposed to load and index csv files. I haven’t tried it.
re Fortran/alternatives
Fortran won’t go away because most other languages have unacceptable execution performance.
Fortran is less wieldy for fast prototyping. Hybrids like Python/Numba/NumPy are gaining ground. An interesting hybrid is R/Julia. Julia scripts can be called by R. And Julia has the syntax brevity of R with 100x the looping performance.
Your idea of a Julia/Fortran hybrid is interesting because it leverages existing high performance code from both, plus allows fast prototyping.
re: Julia datatype performance
This Julia script demonstrates pseudocode-like fast prototyping and a datatyping pitfall:
Note the math symbol as variable name and human readable equation/function? Cool!
But there is a datatype performance issue here. It's not caused by the S-B function's untyped return value and parameter. It's caused by calling the function with integers, which forces Float64 * Int64 (σ * T) and, on this computer, is a full 50% performance hit. 100 million loops: 2.4 sec. vs 3.6 sec. Two ways to fix this:
Note: variable names are treated differently if not part of a definition, e.g. function, struct, loop counter, array, etc.
The datatype is not associated with the variable name. It’s assigned to the data referenced by the name.
One could type the function parameter, e.g. E(T::Float64) = σ*T^4 so integers will throw errors. But it doesn’t speed up the function and we’re defeating some of the fast prototyping advantage. Better: E(fT) = σ*fT^4 where lowercase prefix ‘f’ indicates the Float datatype to the developer. I would just blast out prototypes and refactor later.
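Roughly, re-typed here as a sketch in case the formatting gets eaten again (the constant value, the helper names E_typed and E_f, and the example calls are just for illustration, not the benchmarked code):

# Untyped Stefan-Boltzmann prototype: reads just like the math.
const σ = 5.670374419e-8    # W m^-2 K^-4

E(T) = σ * T^4

E(300)      # Int argument: Float64 * Int64 inside, plus a second compiled specialization
E(300.0)    # Float64 argument: all-Float64 arithmetic

# Fix 1: type the parameter, so an Int throws a MethodError instead of silently mixing.
E_typed(T::Float64) = σ * T^4

# Fix 2: stay untyped, but flag the intent with a naming convention.
E_f(fT) = σ * fT^4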
re overloading
There's another potential performance issue which I've run into before. If the function gets called with a mix of both float and int, it dynamically overloads the function to handle both: it compiles 2 methods for a single function and then determines which to use in the running program based on the actual datatype of the passed parameters. More clock cycles.
Whew! Cheers!
jim2: Most of my projects aren’t large enough to index.
You support my contention about failing to document your code internally. You found from bitter experience that you had to up your game. Good for you. Your code is better for it.
As for indexing, if you plan to keep programming and have modules that are reusable, indexing them will save much more time than the index costs. However, if each of your projects is small and self contained with little reused code then maybe not. However, I suggest even this approach will very soon reach its limit of maintainability, assuming you will be programming for more than just a few more years.
The first thing I do when I start to create a new program or module is set up the environment to index my code. I often write the index before I write the code. This helps me to keep things coherent and complete. The index may have to be updated, but that becomes a small issue.
Similarly, I write the procedure or function interface contract before I write the procedure or function. I often even write the code comments before I write the code. This helps me to stay focused on the details of the item and keep its single functionality intact. The interface contract may need updating at times but that also becomes a small issue.
I have found that one of the best methods of eliminating bugs is to have as high a fraction of reusability as possible. Most of my programs have a reuse fraction in excess of 60% and some as high as 80% and more. The reused fraction is exposed to many program contexts and thus gets vigorously tested. If you carefully maintain your interface contract, the incidence of bugs found in your reused code soon drops to near zero.
This is what I mean when I say I am a software engineer rather than just a programmer. Most of what I do now is the result of bitter experience from not doing it. I have been burned or burned myself about every way it is possible to be burned.
Doing programming this way is both an art and science and must be done with focused conscious intent. Soon it becomes simply the way you do things. I find it well worth the effort. In the long run it saves much more time and energy than it costs. It can result and has resulted in the wished for quality of “it just works!”
I’ve been programming for many years also, but we are continually porting to new versions, due to company rules. I’m aware that ANSI C has some different versions, but that ain’t squat compared to what Microsoft does :) We also translate from one platform to another and are targeting C# as the final destination. It’s been only relatively recently that I’ve developed useful re-usable libraries. I need to gather the versions together and standardize them – they have grown from project to project. All that said, I love what I do.
jim2,
Ah yes, the problem of some manager insisting the problem be solved his way with his preferred tools, standards, and practices. That is almost the definition of being between a rock and a hard place. You have my sympathy and understanding.
It has long been my position that the nature of the problem and the problem's context MUST determine its solution. Not some external, disconnected, entity who doesn't have to do anything but assign blame. Then who offers the "help" that if you don't do it his way, he will find someone who will. This is a no win situation that can't be fixed. My tenure in such situations has always been brief. Better, I avoid it if at all possible. I can understand that you might not have a good exit strategy available to you.
What I have mostly done is find work situations where there is such an urgent need for a solution to a problem that is out of control, that they are ready to accept whatever works. Further, they know they don’t have a clue how to make it happen. I investigate, design a path to the solution, and deliver it on time, on spec, and on budget. Then I move on to the next situation. It takes great depth and breadth of skill, knowledge, ability, and experience as well as an unshakable confidence that you can deliver to pull it off. It is a risky game and you have to have the ability to pick the right game to play. Not everyone can do it and it sometimes doesn’t work out as planned. As they say “stuff happens”.
I will say that I am happier and more productive since I have retired from the corporate world. There is no manager looking over my shoulder dictating the what, the how, and the why of what I do. I set the rules, I set the standards, I determine when and how I work. As a consequence I am free to do the best work I have ever done. You can’t imagine how satisfying it was to be able to work as a free agent and accomplish what the world thought was impossible.
Sounds good, LG.
Is this what you mean by indexing?
https://developer.mozilla.org/en-US/docs/Mozilla/Using_the_Mozilla_source_server
Whoops! I forgot to mention the numerous ways to screw up benchmarking and I screwed up on several. In summary: DON’T use a Float64 in the for-loop range list as I advised in my prior comment, see 3) and 4) below:
where 5) iStart=285 and iEnd=295.
And for σ*T, which is obviously not the same as σ*T^4, there is basically no difference in performance between 1) and 2). Weird!
Here’s how I timed the loops:
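Roughly like this, re-typed as a sketch (the S-B function is the one from above and the rep count is a placeholder, not the exact benchmark that got stripped):

const σ = 5.670374419e-8
E(T) = σ * T^4

function bench(iStart, iEnd, reps)
    s = 0.0
    for _ in 1:reps, T in iStart:iEnd
        s += E(T)
    end
    return s
end

bench(285, 295, 1)                 # throwaway call first so compilation isn't timed
@time bench(285, 295, 10_000_000)  # then time the real run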
Here is a code snippet that demonstrates what I mean. WordPress messes up the formatting a bit; it doesn't preserve fixed width characters.
A block of text at the head of the module source file has something like this.
/*********************************************************************
*
* Procedure Description
* 1. DtStrLen Returns string length
* 2. DtStrCat Appends string to a string buffer
* 3. DtStrCpy Copies a string to a string buffer
* 4. DtStrTrim Trims ends of a string
* 5. DtStrFmtLong Formats long string value
* 6. DtStrFmtDouble Formats double string value
* 7. DtStrWStringToCString Converts WCHAR string to a char string
* 8. DtStrCStringToWString Converts char string to a WCHAR string
*
*********************************************************************/
Then each procedure has description block like this with sequential numbering:
/*********************************************************************
**************************** 1. DtStrLen ***************************
**********************************************************************
*
* Returns string length
*
* Length = DtStrLen(pString)
*
* Length is the number of characters in the string
* pString is a pointer to the string to count
*
*********************************************************************/
The procedure code follows.
A simple inspection gives an idea of the module capabilities and a selected procedure can be found by scanning for the same procedure number and name. It saves a huge amount of time looking for things.
Ahhhh, OK. Some of Visual Studio’s bloat is useful. There is a magic key,”///”, that when typed above a method will generate XML that contains a section for a summary, the return type, the parameter names and types, and a place for further description of the params. This can be extracted into an HTML “help” file for the code. I’m a fan :)
BI2HS – I’m not a fan of using single letters for a function. Keep that up and your code will look like alphabet soup.
When I was in the Navy my Chief wrote program documentation like your above example in our “run book”. His theory was that the documentation should be complete enough that an intelligent 6th grader could run the program.
You looked up the program and the first page of the run book section for that program looked something like the above.
*********************************************
This program “foobar” does stuff to xyz
It depends on routines abc, and file WHEREDIDIPUTTHAT
Output will be a report of approximately 2 boxes of green bar on printer.
Note : be sure you have 2 part paper loaded for this report!
This report is run the last Friday of every month.
***********************************************
After that quick memory jog summary, he had a numbered list of the things you needed to do to execute the program, how long it normally ran and a summary of the most common error messages and how to fix them.
It was years later before I realized how good he was at his job and how much grief he saved us by taking the time to do proper documentation.
When I took an introduction to C programming course the instructor had worked for years in commercial code developer positions. He made a very big deal about writing sufficient comments in your code and logical structure so you could understand what you did 3 years in the future when some change breaks the code. He said a programmer would spend 70% of his time fixing existing code and chasing bugs, so 70% of our coding time should be spent on making the code easy to understand by some coder who had never seen the program before.
I only write simple shell scripts to help me do operations tasks, so no fancy code, but each script starts off with a header block that gives that sort of basic info and a list of the key variables used and what they are intended to do. By the nature of my job, I rarely can work on a script without being constantly interrupted by other issues (coding is not my primary responsibility, keeping the systems running is, so my tools scripts are done as time allows and I get a few undistracted minutes with no pressing issues)
Once I started doing that sort of documentation, life got a lot easier, as I did not have to spend lots of time re-familiarizing myself with what the script did when it suddenly stopped working months later because someone changed the format of a log file I was parsing or we added new servers or made some other infrastructure change.
Being self taught in computers (no college level computer science training) I enjoy solving problems with simple scripts and sometimes wish I had actually gotten a good handle on more powerful general programming languages like Perl etc. Every now and then, I think about learning one of the more current popular languages. Most of the production stuff at work is written in C or C++, which is more than I need. One of my co-workers likes and writes a lot of his stuff in Perl but I get the impression it is no longer as popular as it used to be. The Stats folks use R, and lots of our tasks need to use mysql to query the databases, but beyond that I have no clue which popular computer language would do me the most good to learn at this late stage in life.
I have no interest in going into full time programming but it would be nice to have at least one higher level programing language to fall back on if simple shell scripts are not adequate for the task.
Lionel said:
“Considering what we humans are doing to ourselves, if we do develop Super-Intelligence AI, I suspect it will be smart enough to understand that to eliminate us, all it has to do is wait. We will eliminate ourselves by continuing to act stupidly. ”
With or without Super-Intelligence the human race will become extinct. Maybe you should read “Life 3.0”.
I once was cussing at "whoever" wrote some dense arcane code that needed maintenance. No G.D. comments and tricky work… cuss cuss… then realized it was something I had written years prior.
That was the moment I realized why you left lots of comments in your code for “the other guy”, because someday it just might be you…
So now I leave LOTS of comments in code. When doing maintenance or even just porting something, once I figure out what some bit does, I type in a few lines of what I’ve worked out. Even if it is just “WTF does this do? Looks like maybe…”
My code often ends up more comments than code. I had one programmer complain that he couldn’t “see the code” to see what it was doing as it was scattered through the comments; so now I batch blocks of code and blocks of comments instead of interleaving line by line.
I always start with a block that says who wrote it, when, why, what it is supposed to do. Any maintenance gets the same “Who, when, what, why” line at the top (and details in the body).
I’ve come to appreciate that more over the years as I’ve had more old code of mine to deal with ;-)
@BlueIce2ColdSea:
I hate, loathe, and despise CSV. Most any other separator would have been a better choice, and why load up a file with all those wasted space commas anyway…. Fixed format is fine and dandy, efficient, works well, and you don't end up playing the "what character is NOT in my data?" game searching for a separator you can use; then finding out that it changes 4 years later in an "update" by whoever creates the data…
Thanks for the pointers on Julia style. It will save me some learn time ;-)
I got Julia installed on a Raspbian base system (uplifted to Devuan – so that works). So now I have a play room to learn Julia. I'm going to try writing a couple of simple test cases for distributed programming and do a comparison of local vs distributed vs FORTRAN (with / without MPI).
My prior test of FORTRAN w/ MPI was a success, but the speed was unchanged… so the question is: “Is it a Pi thing or an MPI implementation thing?” Once you enter the world of distributed processing it is very important to speed test things. Sometimes the datacom load makes it slower, not faster. For my first test case, it looked like the MPIch overhead wiped out any loop speedup.
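For the local vs distributed test, I'm thinking of something along these lines with Julia's standard Distributed library (the worker count and the toy workload are placeholders; on 0.6-era Julia the macro was @parallel in Base rather than @distributed):

using Distributed
addprocs(3)        # e.g. the other three cores on a Pi

# Serial version of a trivially parallel reduction.
function serial_sum(n)
    s = 0.0
    for i in 1:n
        s += sin(i) * cos(i)
    end
    return s
end

# Distributed version: @distributed with a (+) reducer splits the range across
# workers; on small problems the datacom overhead can eat any gain.
dist_sum(n) = @distributed (+) for i in 1:n
    sin(i) * cos(i)
end

@time serial_sum(10_000_000)
@time dist_sum(10_000_000)

If the distributed time isn't clearly better here either, that's a hint the bottleneck is communication overhead rather than anything MPI-specific.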
So I’ve launched Julia. Did a couple of assignment statements then simple math. Then realized I didn’t know how to exit as “exit” and “end” and “halt” and “quit” all did nothing… so discovered that “? exit” let me learn to do a ^D … ;-)
@Lionel:
Use the "pre" and "/pre" tags (in angle brackets) as delimiters to make a block keep its fixed spacing. It stands for "pre-formatted".
IIRC, it was posting some Python code in GIStemp where I learned that as “spacing matters” ;-) (Then discovered it was stripping the ‘includes’ due to angle brackets so learned the unicode to stop it…)
@Jim2:
BUT, it will be mixed-alphabet soup as you can use the Unicode char set including Greek and special symbols! Think how much fun that can be a decade later … ;-)
@Larry:
There are hundreds of languages and “trends change”. Each one was the joy of someone or other. Then they become un-trendy as times move on. Yet they stay just as useful as ever.
The Language Wars never end.
That said, at the risk of starting one, my impression of different languages:
C does anything and everything, if you are patient enough to learn all the tricks. Fixed format is handled in a clumsy way, but works. Great for low level access (nearly assembly in some ways) and good for high level stuff too. I generally like it.
C++ is C with some Object Oriented (sort of…) bits glued on. Most people write C in C++ while the OO folks write OO Code in it – but grudgingly as they would usually rather something all OO all the time… ;-)
FORTRAN in some ways is my first love (and first language). Does almost as much as C and is better at fast simple math and text problems. (Folks poo-poo the text abilities of the language, but it's great for "read in, break up, concatenate, spit out", and that's about all I ever do.) Newer versions have added more tricks that take time to learn (like the parallel bits), but the base language can be learned in a few hours.
BASIC – I once worked programming accounting software in HP Business Basic. Not my choice, but I came to appreciate that not all BASICs are the same. It was clearly written by a frustrated Pascal programmer – had functions and BEGIN END blocks and more. Fast to learn, easy to write, interpreted and compiled. There are some good and useful dialects of BASIC, but you must search to find them. Others will look down their nose at you, but it does “just work”.
Perl – A kludge of a language. Does all sorts of things a Systems Admin wants and needs to do, but has that "just growed" feel about it. The inventor of it admits it is "just so" to match his desires and if you don't like it "go write your own language". It does everything, more or less, and you can expect to treat it like a shell scripting language with things like wild card chars and Regular Expressions. Its primary mission is admin stuff like log manipulation / searching / management, but you can do lots of other stuff too. IIRC, some major releases are syntax incompatible with some earlier releases. It is very effective in the target problem domain, though.
Python – Yet Another Trendy Language. It is “position sensitive” as the writer didn’t like {} or BEGIN END blocks. You indent to change what it does. This means in many contexts a cut / paste changes your program…. For that alone I don’t like it. It is interpreted and has mutable typing, so also isn’t all that efficient unless care is taken. Yet lots of folks like it. Go figure.
Ruby always interested me, but I never took the time to actually learn and use it. It has all the current hot button buzz words, which is likely why I never “got into it”. I’m mildly tolerant of “Object Oriented” languages, but generally find it a waste of time. (Yet Another Re-usability Gimmick IMHO). You end up with a massive library or dictionary of “objects” you must learn before you can actually reuse them with an overlay {whatever it’s properly called – inheritance thing… } and folks just end up writing from scratch instead.
https://en.wikipedia.org/wiki/Ruby_(programming_language)
Again with the dynamic typing so care in use to get efficiency. I’ve generally found anything derived of LISP / Smalltalk was not to my liking.
Pascal – Great if you like BEGIN END block languages and programming in a Type Straight jacket… I’ve written it. Liked ALGOL better (ALGOL begat Pascal indirectly…) I’d use ALGOL by preference were it not on the heap of obsolescent ignored languages these days. Pascal is headed there, especially since Modula 2 was written by Wirth as a replacement for his prior creation…
Ada is the DOD driven replacement for JOVIAL that has a lot of Pascal like and Algol like feel to it. A large fat language with hard to write compiler, but I’d be comfortable in it.
R – Really a math and statistics package with a very flexible user interface, IMHO. Good for working math problems and graphing the results at your desktop; not so good for device drivers, operating systems, accounting packages, text manipulation…
MySQL is just a DBMS (Data Base Management System). Good for taking in lots of data, storing it, and letting you pull out chunks as you want them. Not really a general programming language, but does reasonably well for report writing and basic math stuff. Usually called from inside other languages for complex tasks.
So what language to learn? Depends on what you want to do.
For just general use “forever”, I’d go with C. It will be around forever and can do anything. The basics are fast to learn and the details can keep you busy for years (arrays of pointers to structs anyone?)
I like Begin End languages better, and now that they are in Fortran, it is much more "modern". Seeing the fiddly bits in C is getting harder with age, so spotting a brace that isn't right, or a ; that's really a :, is not as easy as seeing a missing BEGIN or an END where a BEGIN ought to be.
Both Java and Java Script are in high demand in the Virtual Machine and Web spaces as the whole Java VM thing makes for portability. I’m not fond of having VMs slowing down my hardware, so I’ve generally avoided it. YMMV and it is a decent set of languages (the 2 are different…)
Then, honorable mention to FORTH – a very different kind of language. Very useful on “small iron” where you can get incredible performance and efficiency. Stack oriented and Reverse Polish Notation based. Similar to shell scripts in how you can build up a dictionary of “words” where each one calls some other “word” (or script for shell scripts). Not as popular as it ought to be, IMHO, but then again when I go to write a program it isn’t what I leap to, so I’m part of that issue too…
Oh, and a snide remark about PL/1 (Programming Language One) it was written by IBM as the One language to replace them all “back when”. It was funny helping PL/1 programmers with their DBMS calls… each one writing the language they first learned or loved, but in PL/1. Some looked like Pascal, some like FORTRAN, others similar to COBOL… all in the same mammoth language ;-) I wonder whatever happened to PL/1 ;-)
There’s a WIKI for more details:
https://en.wikipedia.org/wiki/Comparison_of_programming_languages
Thousands to compare:
https://en.wikipedia.org/wiki/List_of_programming_languages
And, of course, Rosetta Code lets you compare samples of how different languages code / solve the same problems:
https://rosettacode.org/wiki/Language_Comparison_Table
https://rosettacode.org/wiki/Category:ALGOL_68#Code_samples
So you might want to look at some of the languages and find what pleases your eye…
When you really understand pointers, object oriented programming is trivial in ANSI C. It can be done with clarity and transparency without the obtuse clutter and overhead of C++ or any other highly touted object oriented language. Just so you know, object oriented coding is really good for a few things and can do them very well, but for a lot of very routine things it sucks big time. It's that "you can't get there from here" kind of thing.
It might surprise some cult programmers, but your chosen programming pattern cannot solve all problems equally well. All too often, your chosen pattern will select the problems you can solve. It is necessary to be able to use the pattern that best fits the problem you happen to be working on at the moment. Then be able to easily switch when another pattern better fits.
By doing object oriented code in ANSI C, I can do it without switching languages, or use any other programming pattern when it is superior, also without switching languages. This way, I can write efficient, adaptable, reusable, maintainable, high performance software with low coupling, high coherency, and high orthogonality without having to deal with the clutter of multiple languages.
On rare occasions, you might have to write a C interface to proprietary OS services in another language because the OS vendor leaves you no option. Microsoft is especially notorious in doing that. It is best to minimize such a thing but if you need to use the service, you just do it.
Hmmm looks like for my use, I should just go back to my original intent and get up to speed in Perl for those things that the common linux/unix shells don’t do well. Years ago I wrote some personal programs in MS Basic and had intended to recode them in Perl (it is easier to learn a programming language when you have an actual problem to solve).
I took an intro to C class and fiddled a bit with ANSI C (the instructor was adamant that you should write ANSI compatible code). I was using Borland C on a Windows machine, but that was the machine that got nuked by a lightning strike and I never got around to messing with it again, as C is a bit of overkill for most of the things I want/need to do. (I also never got comfortable with C pointers and a few other features because I simply did not have any useful thing to do with them at the time.)
Most of my stuff is just simple shell scripts to do things like check if a certain process is alive on all our key servers or similar tasks:
I intentionally write them with very basic “rock simple” code that anyone who is remotely familiar with shell scripting can easily understand. As you do, I start each shell script with a block that has a one or two sentence statement about what the job does (or is supposed to do)
Who the original author is and date it was first used.
That is followed by a last updated line that lists the date of the last update, and a series of one line statements of changes in chronological order with dates.
such as:
“Updated script mm/dd/yy to replace server foo with bar and output format change to add date”
Then below that block, I declare key variables, and if necessary a usage statement plus commented-out test parameter values and the output they should produce, to verify it is working right.
I often set the script to echo key variable values at the end of the output so you can easily troubleshoot, and turn them off or on by commenting them out or un-commenting those lines.
Like you, I tend to follow the rule that if I will have to do the task often, it should be done in a simple script rather than manually typing out a long command string each time I need to do it.
Most often I am doing things like: ssh to the servers, ps -ef pipe grep xyz then pipe it to awk and pick out the parts I want to see on the screen, or tail a log and look for key lines in the log that tell me if certain things happened.
Sometimes shell scripts just don't have the commands to do certain tasks easily, and that is what my co-worker uses his Perl scripts for, as Perl is powerful enough to easily do text parsing and things like that which the shell does poorly, but is not as fast as C. Most of the time for these tasks, the speed and efficiency of C is really not necessary.
For LL:
“Stylistically, Perl and Python have different philosophies. Perl’s best known mottos is “ There’s More Than One Way to Do It”. Python is designed to have one obvious way to do it. Python’s construction gave an advantage to beginners: A syntax with more rules and stylistic conventions (for example, requiring whitespace indentations for functions) ensured newcomers would see a more consistent set of programming practices; code that accomplished the same task would look more or less the same. Perl’s construction favors experienced programmers: a more compact, less verbose language with built-in shortcuts which made programming for the expert a breeze.”
https://www.fastcompany.com/3026446/the-fall-of-perl-the-webs-most-promising-language
@G.C.:
As long as computers require power any AI in a Snit can just have the power shut off.
FWIW I’ve figured out a way to make Skynet once… Most of the core technologies have now become common. Shutting down the various nodes doesn’t kill it, and it self propagates into all available compute nodes… Yet even it can be killed. You just need to shut down ALL the computers at the same time and then bring things back up, scrubbed. One computer at a time returns to normal… It would be helpful to detect and block the web based propagation via improved IDS / IPS; but not essential.
@L.L. & Jim2:
From that link:
I don’t see anything about that problem set which has changed and I still see LOTS of Perl scripts in systems admin land…
Were I in charge of running a lot of systems in a shop, I’d be using Perl much more than most other languages (with the exception of simple shell scripts).
OTOH, were I writing applications I’d use a more application oriented language and were I doing low level / machine close things like OS & Drivers I’d use C.
The “feature” of being a PITA about things like white space in Python does not endear it to me…
Also, as a language ought to be mated to the problem domain for which it works best, I really don’t give a damn how many people use one over another or what’s popular or trendy in any given decade. So maybe now there are more Python users than Perl users. Doesn’t change the fact that if I needed a “super AWK” or a Script-on-steroids I’d reach for Perl.
A major point for me? I don’t like it when people break all my old code and mandate I go back and rewrite it …
So right out the gate you get slapped in the face with the question of “WHICH Python will I learn?” or do you learn two almost identical languages and keep a diff table in your head…
Unfortunately Perl is going the same way, though with lots of cross borrowings between the two releases of note (from the Perl wiki):
Someone else can deal with it, not me.
So I like languages that generally stay the same for a few decades at a time. All languages change, but F77 vs f90 is usually just a compiler switch and even then, often not needed to run F77 on a new compiler ( The move to delete the C for comments and only use ! is going to break that, though, if they ever force it… )
C and Fortran both have been in use for 40+ years for a reason…
That all said:
IF you don’t mind white space having meaning and cut / paste ending up messing things up at times… Python can do pretty much what Perl can do (once you figure out the one way it allows instead of leveraging any of the 1/2 dozen ways you already know from other languages that just happen to also work in Perl…) and it isn’t 1/2 bad as a general purpose programming language (modulo being interpreted and none too fast… so not going to be writing models in it…)
I've done maintenance on both Perl and Python and didn't fall in love with either of them in the process. Don't dislike either of them all that much (modulo my white-space-has-meaning hot button). IF your shop is already using a lot of either of them, I'd just go with that one, whatever it was.
Oh, and the white space to mark blocks can lead to very very wide pages with the code way off to the right edge if you have a lot of nested blocks…
That Perl is sort of a mash-up of shell script syntax and C structure, seasoned with Unix / Linux command like stuff makes it easy for Linux / Unix folks to learn it and / or guess ways to write it (or what some bit does).
Overall I lean more to Perl than Python for Sys Admin stuff. (Having struggled with a set of line wraps on Python in a small window making it look like interleaved shuffled lines; that indent effect really annoyed me. Sure, just go buy a bigger monitor and spread it out… unless you are trying to fix things with the little laptop in the computer room 50 miles from nowhere…)
Well I just downloaded Perl and Julia here on my desktop system (have some Julia books on the way, and a big pile of Perl books already handy). I will go back to my original intent, and start out with Perl; when shell scripts don't do the job easily I can pick up the Perl tool box. (Also nice that the guy who likes to use Perl at work is very helpful and one of those people who is happy to help you learn new tricks when you run into a road block – he also works late like me, so when he is not busy putting out fires I can ask him why things don't work the way I expect.)
I will save the Julia for a little later, after I get basic fluency in Perl, but I like the sound of what it does; and since it interfaces with R it might be useful at the shop, because the Analytic folks play in R all the time to do their thing.
From the “Well there’s yur problem!” Department…
The Earth has about a 197,210,000 sq. mile surface area. With 10 miles of atmosphere only getting you 1/5 the way to space, that is still roughly 2 billion one-mile cubes. Worse in km cubes, and closer to 5 billion to actually include most of the atmosphere.
Now you've got at least a dozen things to compute, and with a 1 hour time step (that's already too large), that's about 288 computes per cell per day. More with dependencies. Call it 1000 instructions as there's more than computes involved. That puts it near 5 trillion computes per day modeled at what is IMHO still too coarse to properly capture weather. Now model 100 years, that's 36,500 x it, or roughly 91,250,000,000,000,000 computes. Even a terra-compute machine will take a while to chew through that. (On the order of a day or two of run time).
This either needs a woefully wrong cell size, gargantuan computes, or a different approach.
One square mile, 640 acres, is way too small! Barely a large field. I would look to 10 miles cubed, a thousand cubic miles, as a unit. I would divide the elevation from top to bottom by 5 – 65% constants. That would include all the troposphere and ignore the rest, as it is just a weighted lid on the active portion of the atmosphere. Although solar wind and magnetic fields definitely have an effect on the pressures below, space weather is a different animal that should be examined on its own as it affects the weather atmosphere below.
Even this method of elevation division is not correct as the layers change in thickness as we progress from the equator to the poles and the seasons progress from south to north and then back…pg
Yep. We need a smaller planet.
@Chiefio,
“Now model 100 years, that’s 36,500 x it or roughly 91,250,000,000,000,000 computes. Even a terra-compute machine will take a while to chew through that.”
That is essentially the problem with FEAs. If you make the grid too fine it takes too long to get a solution. If you make the grid too coarse the solution may have little relation to reality. My laptop is OK with a 6,000 point FEA as long as the code is efficient (e.g. written by Russians who only had access to Intel 386 technology).
How about scaling the problem down? For example I wonder if a one kilometer grid might work for modeling a hurricane? Somebody must have tried it.
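Back of the envelope on that, with guessed dimensions (mine, not from any actual hurricane model):

# Rough cell count for a 1 km hurricane grid; the domain size is a guess.
domain_km = 2_000      # say a 2000 km x 2000 km box around the storm
depth_km  = 20         # up through the troposphere and a bit beyond
cells     = domain_km^2 * depth_km    # 1 km cubes
println(cells)         # 80000000, i.e. about 80 million cells

Tens of millions of cells is a big job, but nothing like the whole-planet numbers above, so a regional test like that seems plausible.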
@P.G.:
You've never seen a bunch of little puffy clouds, each about 100 yards across? So leave them out and just assume it's the same as 100% sun, or put in a fudge factor, or?…
What about resolving a small cyclonic storm. Say it’s 8 miles across, or even 50. With a 10 mile grid you get at most 5 cells across to make a circle. It isn’t just what’s in the cell, it’s what that “pixel” of weather lets you see…
@Jim2:
A very welcome grin and chuckle at that ;-)
Yes, definitely need a smaller planet 8-0
Florida is too far away from me right now…
@G.C.:
I was pondering just that. Could there be benefit in, say, just a model of the Equatorial Latitude Band, or the Arctic? Or maybe a world consisting of just the equator and poles? Or maybe one equatorial ring and one polar ring? Or maybe just get the Mohave to “work right” and then glue on “the Plains” before trying to take on the world? (Or would externalities screw them up too much in their absence… hard to get the Plains right if you can’t have a Canada Express drop you 50 F in a day…)
I’m kind of thinking that without a re-think the present model approach is just a pile of crap. Too coarse to be accurate and too compute intensive to be validated. Part (a big part) of why I’m not yet running any of the models I’ve grabbed. I’m still not seeing where they have actual value.
@EMSmith; as a former pilot and farmer I have witnessed weather events in the few yards size. I’m just not sure they are relevant to planetary weather/climate. A white puffy of a few yards has no effect on my plane but when they get to hundreds of yards in size they are best avoided, specially if they have depth/height. A dust devil in the field might blow some hay around, even lift a shingle off the roof, but still not be significant to regional weather.
Making the job manageable is of the first order as you have pointed out in your latest posting…pg
@P.G.:
I’m just asking the question about scale. I don’t have the answer. My guess is that one can use the percent cloud cover in a box and not care about the actual clouds, but maybe not.
Flying coast to coast I noticed fluffy puff balls in rows. Why? Sun was from the side about 45 degrees. Warm spots on the ground made vapor that made clouds that shaded the gap row dirt next over… Does the tendency to form rows depend on small scale events and does that matter? Hmmm…
So I’m still in the ponder and wonder phase, not yet to the “Oh screw it just pick a way” phase ;-)
Anything that creates an updraft will cause the puffys to form as the warm moist air from below punches through the cooler condensation level above. A function of humidity and temperature. Every updraft has a point where its humidity reaches its dew point and makes fog or cloud as it dumps its load of energy; often, because of the drop in pressure at the higher elevation and mixing with dryer air, the moisture re-evaporates, sucking up local energy, and the fog/cloud dissipates. Inversion layers of cold air over warm can create many cloud levels. On the surface many different things can cause the start of updrafts that can bubble up from below, riding on their own energy/heat caused lift. The snow line is a good demonstration of the level where the temperature above is too cold to maintain liquid water fog/cloud conditions. There is a lot of fudge here because gas pressures and densities are as important as temperature to this change of state. The mix pressure is not the same as each type of molecule's pressure, as each occupies the space at the pressure of its own molecules. Water, dihydrogen oxide, H2O, occupies and pressurizes the space along with Oxygen and Hydrogen. As there is nearly no free Hydrogen in our atmosphere it can be ignored, but Oxygen is quite plentiful, so its pressure is important to the temperature/pressure/density point where water changes state…pg
E.M.Smith says:
4 December 2017 at 6:11 pm Flying coast to coast I noticed fluffy puff balls in rows. Why?
From my storm chasing days – even though you can't usually see them, levels in the atmosphere have gravity wave fields on them; just as the surface of a lake has a pattern of waves at right angles to the prevailing wind, so too do the atmospheric layers. When conditions are right the tops of those wave crests ride against the lifted condensation level, and they provide just enough lift as they pass by to cause condensation at their crests. As they pass and the air subsides behind the crest the cloud tends to re-evaporate. That is one cause for that sort of formation.
Nicely illustrated in some of these images from Mars.
http://www.sciencemag.org/news/2017/03/mars-rover-spots-clouds-shaped-gravity-waves
Also the prevailing direction of the sun's illumination changes how the ground is heated. At local noon the south slopes of hills and ridges are preferentially heated more than north facing slopes. Later in the afternoon that heating shifts to the southwest facing slopes. In some instability conditions this can act like a time trigger: when a slope ideally oriented to the afternoon sun suddenly begins to warm quickly as the sun's illumination falls directly on it, it can "turn on" thermal lifting that during the prior hours of the day just was not quite strong enough (due to the local orientation of slopes and the sunlight); then as the sun moves it more favorably heats other slope faces.
Here on the front range of the Rockies near Denver we see this. On highly unstable days the sun strongly warms the east slopes of the front range, and sometimes the heating is strong enough to kick off early storms near noon time. But as the sun approaches due south and local noon, it quickly becomes ineffective at heating the long line of the front range, and storms go quiet until around 2:00 in the afternoon, when it is heating the back side of the front range and the east side of the South Platte River valley basin near Denver.
This image is looking almost due west at the Flatirons near Boulder. As you can see, just before noon they would be facing directly into the sun. A few hours later the sunlight would fall on them at a very shallow angle, drastically changing the local heating.
“Even a terra-compute machine will take a while to chew through that. (On the order of a day or two of run time).
This either needs a woefully wrong cell size, gargantuan computes, or a different approach.”
All that throwing more computer power at an effectively infinitely large open-ended non-linear feedback-driven (where we don’t know all the feedbacks, and even the ones we do know, we are unsure of the signs of some critical ones) chaotic system – hence subject to inter alia extreme sensitivity to initial conditions, strange attractors and bifurcation – is going to do is give you the wrong answer quicker…
Ironically, the first person to point this out was Edward Lorenz – a climate scientist himself.
All that throwing more computer power at an effectively infinitely large open-ended non-linear feedback-driven (where we don’t know all the feedbacks, and even the ones we do know, we are unsure of the signs of some critical ones) chaotic system – hence subject to inter alia extreme sensitivity to initial conditions, strange attractors and bifurcation – is going to do is give you the wrong answer quicker…
Perhaps the right approach is to run all the models and then since we know they are seriously flawed determine that none of the things they project can be correct. Declare victory that we have eliminated the impossible outcomes and go home ;)
We have talked about this before, but the inaccuracy of the models may be inherent in the math and physical limitations of real computers to handle high precision numbers.
All that throwing more computer power at an effectively infinitely large open-ended non-linear feedback-driven (where we don’t know all the feedbacks, and even the ones we do know, we are unsure of the signs of some critical ones) chaotic system – hence subject to inter alia extreme sensitivity to initial conditions, strange attractors and bifurcation – is going to do is give you the wrong answer quicker…
In the past we have talked about machine epsilon, the smallest number the machine can discriminate from an actual value of 0.
https://en.wikipedia.org/wiki/Machine_epsilon
There is also the limitation of digital representation of high precision numbers, which I just got reminded of in my reading on Perl Programming.
In decimal we can exactly represent 1/5 as a fraction and as a decimal value as 0.20 to any arbitrary precision. The representation of the value is exact no matter what size numbers your particular computer uses .
But when you convert that decimal value to binary so the computer can actually do math with it, it must eventually be handled as a binary number. In binary that value is not exact; it is an infinite repeating series.
Using two different online decimal to binary converters you get these two different values for decimal 0.2 expressed as a binary number.
0.2 = 0.00110011001100110011
(Note that this converter did not bother to tell you it truncated an infinite series, while the second one at least indicates with the ellipsis that more follows.)
or
0.2 = 0.00110011001100110011001100110011001100110011001100110011001100…
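For what it is worth, you can ask Python directly what 0.2 becomes once it is stored as a binary float; the long decimal and the odd-looking fraction below are the exact value the machine actually holds:

from decimal import Decimal
from fractions import Fraction

# The double nearest to 0.2, written out exactly in decimal:
print(Decimal(0.2))
# 0.200000000000000011102230246251565404236316680908203125

# The same value as an exact ratio of integers (the denominator is 2**54, not 5):
print(Fraction(0.2))
# 3602879701896397/18014398509481984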
Another variation of this is that some fractions cannot be expressed exactly in decimal, so they are subject to two conversion errors if they get handled as a decimal fraction before being handled as a binary number.
For example 1/7 ~= 0.14285714285714…
in binary you would have 1/111 ~= 0.00100100100100100100100100100100100100100100100100100100100100…
If you convert the decimal value (0.14285714285714) to binary you get:
0.00100100100100100100100100100100100100100100100001010110101100…
This begins to diverge from the direct calculation at about the 48th binary digit.
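A small sketch to check that figure: expand both the exact fraction 1/7 and the truncated decimal into binary digits and see where they first disagree (the helper function is just for illustration):

from fractions import Fraction

def binary_digits(x, n):
    # First n binary digits after the point of a fraction 0 < x < 1
    digits = []
    for _ in range(n):
        x *= 2
        digits.append(int(x))
        x -= int(x)
    return digits

exact     = binary_digits(Fraction(1, 7), 64)
truncated = binary_digits(Fraction("0.14285714285714"), 64)

first_diff = next(i for i, (a, b) in enumerate(zip(exact, truncated), 1) if a != b)
print(first_diff)    # 48, the divergence point noted above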
Because of these limitations of numerical representation you can never carry enough precision in your calculations to preserve the accuracy of your initial data, so the chaotic behavior will always dominate after some number of iterations.
For that reason I think it is literally impossible to solve these equations with sufficient accuracy to get meaningful results beyond just a few days or weeks, and centuries are simply mathematically impossible with any real digital computer using binary representation of numbers at any physically possible precision.
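A toy illustration of that last point (not a climate model, just the chaotic logistic map): start two runs from values that differ by roughly the same amount as the 0.2 representation error above and watch how quickly they part company:

# Iterate x -> 4x(1-x), which is fully chaotic, from two nearly equal starting points
x_a = 0.2
x_b = 0.2 + 1e-15        # an error at about the limit of double precision

for step in range(1, 61):
    x_a = 4.0 * x_a * (1.0 - x_a)
    x_b = 4.0 * x_b * (1.0 - x_b)
    if step % 10 == 0:
        print(step, abs(x_a - x_b))

# The gap roughly doubles each step, so by about step 50 the two runs
# share no significant digits at all.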
Thank goodness we have an absolutely accurate very high resolution real time massively parallel model of the earth’s weather running all around us. It’s called the earth as embedded in the solar system as embedded in the universe. It will tell us the truth if we just listen to what it is saying. Any smaller model will inevitably diverge from it. Unfortunately, by the time we capture what it is telling us, it is history.
We are fortunate that we can make forecasts of the local weather with useful accuracy for the next few days. Perhaps our goal should not be to compute the weather in 100 years but simply try to extend the usefulness of our forecasts a few more days and be happy if we succeed.
Perhaps the universe is trying to tell us something when we can’t reliably predict its future in detail. It could be that we do have free will and can, to a large extent, choose our future barring random unfortunate events. Weather is simply one of those events over which we have no choice. Weather happens and if we don’t like it we can wait until it is more to our liking or move somewhere else and have different weather.
I’m happy you found my “small planet” joke amusing, but I was thinking a bit more about that. Scaled-down airframes were tested in wind tunnels. Gravity was the same, but the turbulent, as well as non-turbulent, flow was apparently close to the same. What if the computer model used a, say, 1/64-scale Earth? I think the rub will be that the vortices that represent the thermals that create clouds will also be 1/64th the size.
That kind of math issue is part of why I’m looking at the models (which so far show no evidence of being coded to handle even the simpler math issues…) and also why I’m pondering a very different approach.
We are, in some ways, trying to model every atom in the atmosphere. Instead, we ought to be modeling the broad properties of the process.
In particular, we can do “good enough” calculations to describe a heat pipe, how well it will work, and what amount of liquid is too little so it dries out in the hot end. IMHO something similar ought to be possible for the planet. Once you have a convergent solution based on broad properties NOT tracking every single perturbation, I think you can get useful results… Maybe. Possibly. 8-)
Basically, don’t model it as a flow problem, solve it as an engineering problem…
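As a hedged illustration of what an “engineering style” calculation might look like, here is a zero-dimensional radiative balance in Python. The albedo and effective emissivity are assumed round numbers standing in for clouds and the greenhouse effect, not anyone’s tuned model:

SIGMA   = 5.670374419e-8   # Stefan-Boltzmann constant, W/m^2/K^4
S0      = 1361.0           # solar constant, W/m^2
ALBEDO  = 0.30             # assumed planetary albedo
EPSILON = 0.61             # assumed effective emissivity (greenhouse stand-in)

absorbed = S0 * (1.0 - ALBEDO) / 4.0           # sunlight averaged over the sphere
T = (absorbed / (EPSILON * SIGMA)) ** 0.25     # equilibrium temperature, Kelvin
print(round(T, 1), "K")                        # about 288 K with these numbers

Three broad properties and a couple of lines of arithmetic get you into the right ballpark; the hard part, of course, is justifying the broad properties.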
EM:
Are you familiar with David Evans and his Solar Model of earth’s climate? It is along the lines you are thinking about and he simulates it in Excel.
See: http://joannenova.com.au/tag/solar-model/
I’ll take a look…
In particular, we can do “good enough” calculations to describe a heat pipe, how well it will work, and what amount of liquid is too little so it dries out in the hot end.
I think you can define “dried out” based on dew point. If the dew point is not high enough for instability to form, little or no heat transport by change of phase occurs. You can see that sort of instability in the Skew-T diagrams and in the CAPE (convective available potential energy) or Lifted Index numbers, but those define the potential, not the occurrence: without some mechanism to lift the unstable air you get very little convection. Without strong convection, heat transport is by bulk motion of turbulent mixing and diffusion rather than by heat-driven free convection (your heat pipe model). Until you get free convection it is like a pan of water on the stove before it starts to simmer; eventually it all gets hot, but by small-scale mixing and conduction.
I have been on many storm chase days where the convective energies were very impressive but we never got trigger conditions to initiate thunderstorm development.
I think the solution might be to define some rule of thumb conditions:
heat transport by change of phase from water vapor to condensed liquid (condensation)
heat transport by change of phase from water droplets/mist to ice (freezing)
none of the above.
Obviously the most effective case is when you go through both phase changes. Here in Colorado I used temperature and water vapor (85 deg F and dew point > 50 deg F) as a rule of thumb, all else being equal. If I did not have those conditions I often would not even leave the house. Above those thresholds, with enough lifting to begin strong convection, you got storm development most of the time. The best storms usually happened at dew points over 55 deg F. At lower elevations the equivalent conditions would be a bit higher. On the high plains, dew points over 55 deg F are relatively rare when you have summertime relative humidities in the low teens and 20 percent range, even at temps in the mid 80s and above. Below those trigger points you rarely got storm development unless you had some other very strong trigger, like a strong frontal boundary coming through to kick things off. It was my experience here on the high plains that we seldom got strong thunderstorm development unless both those temperature and dew point thresholds were crossed at the ground AND something triggered lift above the level necessary to initiate free convection (strong local heating by the sun, mechanical lifting due to frontal passage, etc.).
Once you did, then you almost always had enough energy to carry the convection all the way to the stratosphere and get ice development which meant you got a strong well developed storm (most of the time).
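Just to show how little machinery such a rule of thumb needs compared with a grid model, here is a hypothetical encoding of the thresholds described above; the function name and the wording of the results are made up for illustration:

def storm_potential(temp_f, dew_point_f, has_lifting_trigger):
    # Very rough go / no-go check using the Colorado high-plains thresholds
    if temp_f >= 85.0 and dew_point_f >= 50.0:
        if has_lifting_trigger:    # solar heating, frontal passage, orographic lift...
            return "strong development likely"
        return "energy present, waiting on a trigger"
    return "probably not worth leaving the house"

print(storm_potential(88.0, 56.0, True))   # strong development likely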
Sometimes all the numbers said huge storms would be very likely and absolutely nothing would happen but the sun set.
Strong lateral winds at the surface could suppress convection even though the atmosphere was a coiled spring ready to explode. You would swelter in the car at near 90 deg F temps, high humidities, and strong winds, but absolutely nothing would happen, because the low-level wind shear would tear the storm apart before it got strong enough to overwhelm the local wind conditions.
The strongest convection happened when surface winds were mild (just enough to feed energy into the storm). That would keep the storm fueled until convection got strong enough to create its own local weather to maintain inflow into the storm. Sometimes this storm-generated inflow would then go up into the 60 to 70 mph range, but it was organized flow generated by the storm, not wide-area random wind, which was just as likely to break up the storm as it tried to organize.
The problem with large-scale convection is: how do you model a lifting event?
You need lots of local meteorological data to “guess” when it will happen. Even in a well-instrumented metro area you only get a good guess at the probability; you cannot exactly predict the time and place it will break.
(That is what the storm spotters do: they see it long before the instruments tell you it is happening.)
Even then, experienced storm spotters are often like the characters in the movie Twister, using experience and some gut instinct to tell them which storm to chase. Even so, we often only had a 50% chance of picking the right trigger point for the big one.
It can be caused by an upper-air disturbance that begins lifting above the LCL (lifted condensation level). It can be caused by surface wind convergence (two air masses colliding and pushing air up when they crash into each other), or by orographic lifting as surface winds carry warm moist air up a slope in the terrain. Lastly, convection can be triggered by sufficient heating to break through an inversion layer, which uncorks a reservoir of hot moist air to surge upward in free convection as the water vapor condenses into mist and dumps its heat into the air.
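For the lifting side, a back-of-the-envelope sketch: the common approximation of roughly 125 m of lift per degree C of dew-point depression gives a quick estimate of the LCL (a real Skew-T sounding would be used in practice):

def lcl_height_m(temp_c, dew_point_c):
    # Approximate lifted condensation level above the surface, in meters
    return 125.0 * (temp_c - dew_point_c)

# Example: 30 C surface temperature with a 13 C dew point
# (roughly the 85 deg F / 55 deg F case above)
print(lcl_height_m(30.0, 13.0))    # about 2125 m of lift before condensation begins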
“Basically, don’t model it as a flow problem, solve it as an engineering problem… ”
Sounds good to me. The machine is operating right there in front of us; we just need to reverse engineer it. We know it works. It seems to have been fairly stable for millions of years, and in spite of many apparent disasters it returns to stable conditions.
The flow being measured or the weather is the result, not the cause. Climate is the result of the broad conditions of the engineering. Weather is the daily flow of energy, water and atmosphere in the machine…pg
There are issues with scaling in subscale wind tunnels, but if properly accounted for they can be taken into consideration. In aerodynamic testing they try to match the Reynolds number of the flow, not the directly scaled linear speed of the airflow. They also use “scaled down” air (i.e. light gas mixtures) so that the gas in use behaves more like it should. Likewise, they can simulate much higher speeds by using water as the working fluid (assuming incompressible flow, as you have with subsonic airflow, water behaves the same as air at comparable speeds).
https://en.wikipedia.org/wiki/Water_tunnel_(hydrodynamic)
Reynolds Number = (Speed x Length) / kinematic viscosity
If you make the proper adjustment for the change in characteristic length by scaling either the viscosity or the velocity, the flow (and the transition to turbulent flow) will be the same whether you are using air or water as your working fluid.
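A minimal sketch of that Reynolds-number matching, with made-up lengths and speeds for illustration:

NU_AIR   = 1.5e-5    # kinematic viscosity of air, m^2/s (about 20 C)
NU_WATER = 1.0e-6    # kinematic viscosity of water, m^2/s (about 20 C)

def reynolds(speed, length, nu):
    return speed * length / nu

# Full-scale flow in air: 10 m/s over a 1 m characteristic length
re_full = reynolds(10.0, 1.0, NU_AIR)

# A 1/10-scale model in a water tunnel: what speed gives the same Reynolds number?
scale = 0.1
speed_water = re_full * NU_WATER / (scale * 1.0)
print(re_full, speed_water)    # same Re (about 667,000) at roughly 6.7 m/s in water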
For macro-sized events like hurricanes the behavior should be the same if you scaled down the physical size, but I am not sure that would gain you anything: if you are trying to speed up the code, what you really want to reduce is the number of cells you are computing values for, not the size of the cells. It would also add another “lack of precision” error to your already initial-state-sensitive non-linear calculations.