Some Thoughts on LENR, Vibrations, Crystals, Phonons, and Quantum Mechanics

I’ve posted a fair number of things on Cold Fusion or LENR or whatever they are calling it these days.

I think I have a handle on how to make it go more reliably. The “bottom line” (here at the top) is that you need to wiggle the atoms and protons and electrons around, faster is better, until they collide enough to join.

The more Quantum Mechanical point of view is that you need a rapid change of the electromagnetic fields so that the particle Hamiltonians change diabatically, moving the system onto a new Hamiltonian instead of staying on the same one. (Or: shake them until they bang hard enough to fuse ;-)

So a lot of detail follows, but the practical implication is that once you have H or D loaded into Pd or Ni (or potentially other metals with a metallic or partially metallic bond) you need to induce atomic and electronic oscillations so the little dears bang into the walls, so to speak. That can be with high speed change of magnetic fields, electric fields, electric flows, light (such as UV) whacking the surfaces, thermal energy (hotter is better), ultrasonics (make those phonons wobble!) and maybe more. So some folks have cavitation LENR. Some have hot LENR (Rossi). Some have high frequency E-field LENR (Brillouin), and the Papp engine uses thermal spikes with UV and E-field spikes from a giant spark.

You can load the H or D via raw pressure, or via the effective pressure of electrolysis. Just get it loaded. More is better. Probably best is some of both. Then whack it with change. Preferably controllable change so it doesn’t blow up…

The Background

Hard sledding comes first. The QM stuff.

No, I don’t fully understand this. A lot of it looks like F.M. to me… er… ‘Friendly’ Magic ;-) but it is what it is. So here’s my take on the QM of it all… First off, a Hamiltonian.

In quantum mechanics, the Hamiltonian is the operator corresponding to the total energy of the system. It is usually denoted by H, also Ȟ or Ĥ. Its spectrum is the set of possible outcomes when one measures the total energy of a system. Because of its close relation to the time-evolution of a system, it is of fundamental importance in most formulations of quantum theory.

Key Bits: It’s just talking about the energy of the system. Note the “time-evolution”. That matters as we need to control the rate of change of things to kick particles out of their Hamiltonian ruts.

Another word to know:

Generally, quantum mechanics does not assign definite values. Instead, it makes a prediction using a probability distribution; that is, it describes the probability of obtaining the possible outcomes from measuring an observable. Often these results are skewed by many causes, such as dense probability clouds. Probability clouds are approximate, but better than the Bohr model, whereby electron location is given by a probability function, the wave function eigenvalue, such that the probability is the squared modulus of the complex amplitude, or quantum state nuclear attraction.[21][22] Naturally, these probabilities will depend on the quantum state at the “instant” of the measurement. Hence, uncertainty is involved in the value. There are, however, certain states that are associated with a definite value of a particular observable. These are known as eigenstates of the observable (“eigen” can be translated from German as meaning “inherent” or “characteristic”).

So most of the time things are an ill-defined muddle of probability (so unusual things DO happen) but sometimes they have a more definite value. That is called an “eigenstate”. So eigenstates are more definite, the rest is a bit probabilistic.

The time evolution of a quantum state is described by the Schrödinger equation, in which the Hamiltonian (the operator corresponding to the total energy of the system) generates the time evolution. The time evolution of wave functions is deterministic in the sense that – given a wavefunction at an initial time – it makes a definite prediction of what the wavefunction will be at any later time.

So we are packing a crystal lattice of metal with Protons (H without the e-) and with a metallic bond cloud of electrons. (In covalent bonds, the electron is shared between two nuclei. In ionic bonds, one wins the struggle and captures the electron into the outer electron shell, so Na+ is down one and Cl- is up one in salt. In metallic bonds, the electrons run around in a loose soup of electrons. That’s why we can have electricity move and why metals are shiny as photons bounce off of the e- cloud. For colored metals, like gold, only the low energy bounces off and the high energy gets absorbed, so the metal is blue-deficient in reflections and looks golden. All thanks to that electron cloud wandering between atoms in the crystals.) Once packed, we’d like those e- wave functions to get smushed up with some P+ wave functions and become N. Slow neutrons that get stuck into a nucleus somewhere. Normally the e- and P+ don’t get close enough for that. We need a way to collapse their wave functions into a new thing. A way to get each off of their own Hamiltonian and onto a new common one.

Note the words “time evolution” again. So we need to do things that screw around with the motion / time aspect. Get those suckers smacked around by other atomic wave functions, and fast, so they get pushed over the hump into a new N function.

That’s hinted at in this quote:

Wave functions change as time progresses. The Schrödinger equation describes how wavefunctions change in time, playing a role similar to Newton’s second law in classical mechanics. The Schrödinger equation, applied to the aforementioned example of the free particle, predicts that the center of a wave packet will move through space at a constant velocity (like a classical particle with no forces acting on it). However, the wave packet will also spread out as time progresses, which means that the position becomes more uncertain with time. This also has the effect of turning a position eigenstate (which can be thought of as an infinitely sharp wave packet) into a broadened wave packet that no longer represents a (definite, certain) position eigenstate.

We want those particles sitting there, broadening their wave functions, until one finds itself overlapping with another wave function (the e- and P+) and then whack them FAST into one new wave function that takes less space… by forcing them into a known smaller space… by having the atoms around them crush them together into that eigenstate space.

Enter The Phonon

Now you and I might just call this vibration. Or vibrating atoms. But now, now we need a new name for it. Sigh. Just like “eigenstates” means “that stuff you always knew that was not probabilistic”, phonon means that vibration you always thought was just a vibration, but is now more, er, QM special…

In physics, a phonon is a collective excitation in a periodic, elastic arrangement of atoms or molecules in condensed matter, such as solids and some liquids. Often referred to as a quasiparticle,[1] it represents an excited state in the quantum mechanical quantization of the modes of vibrations of elastic structures of interacting particles.

Phonons play a major role in many of the physical properties of condensed matter, such as thermal conductivity and electrical conductivity. The study of phonons is an important part of condensed matter physics.

The concept of phonons was introduced in 1932 by Russian physicist Igor Tamm. The name phonon comes from the Greek word φωνή (phonē), which translates as sound or voice because long-wavelength phonons give rise to sound. Shorter-wavelength higher-frequency phonons give rise to heat.

So, in short, we want those suckers vibrating. In a quantum mechanical kind of way since they are small things…

Now any particle collection larger than a single hydrogen is essentially beyond description as a Hamiltonian. Too many states and too much flux. Our goal is to add a whole lot of particles and a whole lot of flux so some of those QM states end up squashing some wave functions into a new particle. How to do that?

Well, one clue is that these are thermal. So heating stuff up moves it that way. Another clue is that they make sound, so sound vibrations will move things that way. Try “hot sound” and you ought to be getting even more energetic Hamiltonians, even if you can’t write them down. Some, in that QM Probability kind of way, ought to kick some P+ and e- into being a nice Neutron. Perhaps some into turning Ni into Cu as a next step (or maybe via direct additions?)

There are some nice eye-candy graphics on that page, so take a look and watch beads oscillating…

A phonon is a quantum mechanical description of an elementary vibrational motion in which a lattice of atoms or molecules uniformly oscillates at a single frequency. In classical mechanics this is known as a normal mode. Normal modes are important because any arbitrary lattice vibration can be considered as a superposition of these elementary vibrations (cf. Fourier analysis). While normal modes are wave-like phenomena in classical mechanics, phonons have particle-like properties as well in a way related to the wave–particle duality of quantum mechanics.

So those wobbles can have the nature of one whopper of a particle. And they have a spectrum of vibrations.

For example, a rigid regular, crystalline, i.e. not amorphous, lattice is composed of N particles. These particles may be atoms, but they may be molecules as well. N is a large number, say ~10^23, and on the order of Avogadro’s number, for a typical sample of solid. If the lattice is rigid, the atoms must be exerting forces on one another to keep each atom near its equilibrium position. These forces may be Van der Waals forces, covalent bonds, electrostatic attractions, and others, all of which are ultimately due to the electric force. Magnetic and gravitational forces are generally negligible. The forces between each pair of atoms may be characterized by a potential energy function V that depends on the distance of separation of the atoms. The potential energy of the entire lattice is the sum of all pairwise potential energies:
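
That “sum of all pairwise potential energies” is easy to sketch in code. A minimal illustration follows; note that the Lennard-Jones form is just one stand-in choice of V (the quoted text only requires that V depend on the separation distance):

```python
import math
from itertools import combinations

def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """One possible pair potential V(r); chosen only for illustration."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def lattice_energy(positions, pair_potential):
    """Total potential energy = sum of V over all unordered pairs of
    particles, exactly as the quoted text describes."""
    total = 0.0
    for (x1, y1, z1), (x2, y2, z2) in combinations(positions, 2):
        r = math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2)
        total += pair_potential(r)
    return total

# Two atoms at the Lennard-Jones equilibrium separation r = 2^(1/6)*sigma
# sit at the potential minimum, V = -epsilon.
r_min = 2.0 ** (1.0 / 6.0)
print(lattice_energy([(0, 0, 0), (r_min, 0, 0)], lennard_jones))  # ≈ -1.0
```

The point is only the structure: a real calculation swaps in the actual interatomic potential and ~10^23 particles, which is exactly why the wiki then reaches for simplifying assumptions.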

It is electro-magnetic in character, as well as dipping a toe in the Van der Waals pond. Just what we need to get something else that is resisting joining together (via those same electro-magnetic and Van der Waals forces) nudged into doing what it doesn’t want to do. This also implies that some electrical and / or magnetic forces can be used to stimulate more phonons. So things like a microwave excitation of short harmonic wires, or UV absorption into metals, or VHF magnetic fields. (Shades of Tesla and his spikes of HF fields and the assertion that they did odd things…) In short, we can make more, and potentially bigger, aggregations of electromagnetic and Van der Waals forces inside a crystal lattice via induction of large phonon activity.

Cram metals with H or D and loads of excess e-, then induce lots of phonons. That ought to be about it. Having a metallic H bond would likely help, too. (More on that later).

The wiki goes on to point out that the math is intractable so a load of simplifying assumptions are made. I suspect we want the non-simple real actions and are looking to exploit a low probability edge case, but one big enough to make things hot. Take a Trillion atoms, and a ‘one in a billion per second’ reaction becomes very usable.

Solids with more than one type of atom – either with different masses or bonding strengths – in the smallest unit cell, exhibit two types of phonons: acoustic phonons and optical phonons.

Acoustic phonons are coherent movements of atoms of the lattice out of their equilibrium positions. If the displacement is in the direction of propagation, then in some areas the atoms will be closer, in others farther apart, as in a sound wave in air (hence the name acoustic). Displacement perpendicular to the propagation direction is comparable to waves in water. If the wavelength of acoustic phonons goes to infinity, this corresponds to a simple displacement of the whole crystal, and this costs zero energy. Acoustic phonons exhibit a linear relationship between frequency and phonon wavevector for long wavelengths. The frequencies of acoustic phonons tend to zero with longer wavelength. Longitudinal and transverse acoustic phonons are often abbreviated as LA and TA phonons, respectively.

Optical phonons are out-of-phase movement of the atoms in the lattice, one atom moving to the left, and its neighbour to the right. This occurs if the lattice is made of atoms of different charge or mass. They are called optical because in ionic crystals, such as sodium chloride, they are excited by infrared radiation. The electric field of the light will move every positive sodium ion in the direction of the field, and every negative chloride ion in the other direction, sending the crystal vibrating. Optical phonons have a non-zero frequency at the Brillouin zone center and show no dispersion near that long wavelength limit. This is because they correspond to a mode of vibration where positive and negative ions at adjacent lattice sites swing against each other, creating a time-varying electrical dipole moment. Optical phonons that interact in this way with light are called infrared active. Optical phonons that are Raman active can also interact indirectly with light, through Raman scattering. Optical phonons are often abbreviated as LO and TO phonons, for the longitudinal and transverse modes respectively.

Notice that light or sound are both able to set things jiggling, and that having different species in the crystal lattice lets you get both kinds of phonons. So hydrogen loading opens the door (and perhaps some other minor elements in the mix could make it interesting too. B or Li? Other metal alloys?) and then phonons make the smashing happen. The differing mass of H and Ni implies optical phonons, and that might be important. It could also explain why things only start when loading nears 1:1 ratio.
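
That acoustic-vs-optical split can be seen in the textbook 1-D diatomic chain. Here is a sketch, assuming nearest-neighbor springs of stiffness K; the alternating masses m1 and m2 are arbitrary illustration values (a light atom among heavy ones, loosely evoking H in a metal, not real Ni numbers):

```python
import math

def chain_branches(k, K=1.0, m1=1.0, m2=16.0, a=1.0):
    """Angular frequencies (acoustic, optical) of a 1-D chain of
    alternating masses m1, m2 joined by springs of stiffness K.
    Standard result:
      w^2 = K*s -/+ K*sqrt(s^2 - 4*sin^2(k*a/2)/(m1*m2)),  s = 1/m1 + 1/m2
    """
    s = 1.0 / m1 + 1.0 / m2
    root = math.sqrt(s * s - 4.0 * math.sin(k * a / 2.0) ** 2 / (m1 * m2))
    acoustic = math.sqrt(max(K * (s - root), 0.0))
    optical = math.sqrt(K * (s + root))
    return acoustic, optical

ac0, op0 = chain_branches(0.0)
print(ac0)  # 0.0  -> acoustic branch costs no energy at infinite wavelength
print(op0)  # ≈ 1.46 -> optical branch keeps a finite frequency at k = 0
```

This matches both quoted statements: the acoustic frequency goes to zero with wavevector (a whole-crystal shift costs nothing), while the optical branch stays non-zero at the zone center, and the optical branch only exists at all because the two masses differ.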

There’s more in that article, but for now it can wait. Just realize that this activity of phonons relates to sound and light, and also to emissions of EM waves like microwaves and such. The implication being that those energies going back in can create the phonon activity as well.

So what happens next?

This is where it gets pulled together.

Avoided Crossing of two Hamiltonians


The caption says:

Figure 2. An avoided energy-level crossing in a two-level system subjected to an external magnetic field. Note the energies of the diabatic states, |1⟩ and |2⟩, and the eigenvalues of the Hamiltonian, giving the energies of the eigenstates |φ1⟩ and |φ2⟩ (the adiabatic states).

Hopefully the Greek characters come through. If not, click through to the article and read it there.

What it is showing is two Hamiltonians, the red curve and the blue curve, with the avoided crossover between them. Our goal is to get things to take that black line from one corner to the other and change from one Hamiltonian to the other. What does it say enhances this outcome?

Figure 2 shows the dependence of the diabatic and adiabatic energies on the value of the magnetic field; note that for non-zero coupling the eigenvalues of the Hamiltonian cannot be degenerate, and thus we have an avoided crossing. If an atom is initially in state |φ1(t0)⟩ in zero magnetic field (on the red curve, at the extreme left), an adiabatic increase in magnetic field (dB/dt → 0) will ensure the system remains in an eigenstate of the Hamiltonian |φ1(t)⟩ throughout the process (follows the red curve). A diabatic increase in magnetic field (dB/dt → ∞) will ensure the system follows the diabatic path (the solid black line), such that the system undergoes a transition to state |φ2(t1)⟩. For finite magnetic field slew rates (0 < dB/dt < ∞) there will be a finite probability of finding the system in either of the two eigenstates. See below for approaches to calculating these probabilities.
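
The figure’s two-level system is small enough to write out. A sketch, assuming the usual toy form: diabatic levels +/-mu*B that would cross at B = 0, coupled by a constant a (both mu and a here are arbitrary illustration numbers):

```python
import math

def adiabatic_energies(B, mu=1.0, a=0.5):
    """Eigenvalues of the 2x2 Hamiltonian
        H = [[ mu*B,  a    ],
             [ a,    -mu*B ]].
    The diabatic levels +/-mu*B cross at B = 0, but the adiabatic
    (eigen-)levels +/-sqrt((mu*B)^2 + a^2) never touch."""
    half_gap = math.sqrt((mu * B) ** 2 + a ** 2)
    return -half_gap, +half_gap

low, high = adiabatic_energies(0.0, a=0.5)
print(high - low)  # 1.0 -> the minimum gap is 2*a: the "avoided" crossing
```

The red and blue curves in the figure are these two eigenvalues traced over B; the solid black diabatic line is what you follow only if B changes too fast for the system to track an eigenstate.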

Basically saying things stay on the red or blue line… but what can make things NOT stay on that line? A very fast magnetic slew rate. Or, I’d speculate, a very fast electric slew rate, or phonon slew rate.

In an adiabatic process the Hamiltonian is time-dependent, i.e. the Hamiltonian changes with time (not to be confused with Perturbation theory, as here the change in the Hamiltonian is not small; it’s huge, although it happens gradually). As the Hamiltonian changes with time, the eigenvalues and the eigenfunctions are time dependent.


Deriving conditions for diabatic vs adiabatic passage

The math in that article is quite thick at this point, but what I think it is saying is just that if you move things fast enough, they can’t be elastic enough to stay on their Hamiltonian, and things get forced into other shapes when moved very fast. I.e. onto that black line between Hamiltonians.

In 1932 an analytic solution to the problem of calculating adiabatic transition probabilities was published separately by Lev Landau and Clarence Zener,[7] for the special case of a linearly changing perturbation in which the time-varying component does not couple the relevant states (hence the coupling in the diabatic Hamiltonian matrix is independent of time).

The key figure of merit in this approach is the Landau-Zener velocity:

v_LZ = (∂|E2 - E1|/∂t) / (∂|E2 - E1|/∂q) ≈ dq/dt,

where q is the perturbation variable (electric or magnetic field, molecular bond-length, or any other perturbation to the system), and E1 and E2 are the energies of the two diabatic (crossing) states. A large v_LZ results in a large diabatic transition probability and vice versa.

Using the Landau-Zener formula the probability, P_D, of a diabatic transition is given by

P_D = e^(-2πΓ), where Γ = (a²/ħ) / |∂(E2 - E1)/∂t| and a is the coupling between the two diabatic states.

So if I’ve got that right, the more rapid and stronger the energy changes, the more likely a diabatic transition.
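
That reading can be checked numerically with the Landau-Zener expression P_D = exp(-2πΓ), Γ = a²/(ħ·|∂(E2-E1)/∂t|). A sketch in units with ħ = 1; the coupling and slew-rate values are arbitrary illustration choices:

```python
import math

def p_diabatic(coupling, slew_rate, hbar=1.0):
    """Landau-Zener diabatic transition probability:
        P_D = exp(-2*pi*Gamma),  Gamma = coupling**2 / (hbar * slew_rate)
    where slew_rate = |d(E2 - E1)/dt| at the crossing."""
    gamma = coupling ** 2 / (hbar * slew_rate)
    return math.exp(-2.0 * math.pi * gamma)

# The faster the sweep through the crossing, the likelier the jump
# onto the other Hamiltonian (the solid black line in the figure):
for slew in (0.1, 1.0, 10.0, 100.0):
    print(slew, p_diabatic(coupling=0.2, slew_rate=slew))
```

The probability climbs monotonically with slew rate toward 1, which is exactly the “stronger and faster change, more diabatic transitions” point being made in the text.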

So high frequency EM fields, bright light, ultrasonics, and even high heat can add some increased probability.

That would explain why high temp E-Cat cells are prone to instability. They start to heat up even faster and make more heat that makes it go faster and… So make it ‘warm enough’, then modulate with something faster to switch, like HF electricity, microwaves, or even ultrasonics.

OK, that’s my theoretical whack at it. Now for a Modest Suggestion on how to make a cell.

Nano-Diamonds and Ultrasonics

There’s a nice PDF on this that I’ve got, but I need to find the link again. For now, a bit less descriptive but flashy link. How to make nano-diamonds with oil and sound:

The chamber is filled with an appropriate slurry, and ultrasound does the rest. It makes tiny spots of heat and pressure so high you get diamonds to form.

What I propose is that the same apparatus ought to make LENR happen. Use a powder of Ni in water, saturated with high pressure H2, or perhaps with a transverse electric current to load the metal, then turn on the ultrasonics to provide the phonon kicker.

Call it the Smith LENR Cell if it works, and I’ll be happy ;-)

Ah, there’s the PDF:

Up to 10% conversion of organic to diamond. Temps of about 120 C in the bulk liquid. Not exactly hard to engineer. Perhaps we could get a master mechanic like P.G. to build one in the garage… First get it to make diamonds, then swap the liquid for a metal in water slurry with H loading and stand back. (Crank up the ultrasonics slowly in case it works too well ;-)

Ultrasonic Diamond Rig

Now I can also think of other ways to create the phonon activity. Make a bundle of metal rods. Bathe them in microwaves that match their length. Anyone who has put a metal trimmed bit of china in a microwave knows how that can vaporize the metal if strong enough. So keep the microwaves under control. Load the metal with H or D via pressure, or electrolysis, or whatever, then slowly add the microwaves. As radiation, or as a directly coupled electric current. (Think “antenna in a hydrogen bath”).

But wait, there’s more!

It ought to also be possible to use this same insight to make a system excited by light (just pick a color that the particular metal absorbs). Or any other phonon creating method.

And about that Papp Engine…

These folks claim they have it working:

Some other links:

Perhaps, just perhaps, as a speculative bit, the Papp Engine can work. It has a massive spark in the top from something like six overgrown spark plugs. Lots of UV, EM, and heat there, along with a SNAP from the spark. It has a cyclic compression cycle, so heat is produced, along with lots of molecular agitation. The metals might well involve things like Ni in a stainless steel. Could it be that either a noble gas itself reacts, or that the ‘special treatment’ involved the introduction of some small amount of hydrogen into the noble gas?

I have to wonder what would happen if you had a ‘side chamber’, like in precombustion chamber Diesels or Stirling engines, with a Ni gauze in it, and used hydrogen gas in the noble gas mix, with loads of spark, and that external magnetic coil; add in some sound from a big fat spark, and maybe even duct in a bit of microwave energy. If somewhere in that mix enough phonons and hydrogen could get together in that gauze to heat the gas enough to make a net gain…

There’s a lot more I’d like to say, but I’m once again out of time. It will have to wait. Things like what crystal lattice looks best. (I looked at all metal lattice types and crossed that with what is known to happen in excess energy production AND transmutations. Body centered cubic, and perhaps some face centered cubic, look to dominate.) Also some on metallic bonded H. Ni and Pd do it, some others don’t. Having metallic H bonding, with the right crystal type, with the bond distances the right size for H or D to fit, but only just, looks like the key magic sauce. Some oxides might also work, along with some alloys. Study the bond lengths, look for cubic crystallization, and try for metallic bonded hydrogen… Yes, I’ve got a load of papers to link for that set of ideas. But for now, it’s time to call it a day and have dinner. But at least I’ve put the marker down ;-)

(Also things like my usual typo pass and QA will have to wait, along with a how-to-do-Greek HTML study…)

It is a large search area, so lots of opportunities to bypass patents on things like Ni/H with something else like a B/Cu alloy or??? FWIW, Cu/Ni makes a cubic crystal, and ought to work as well as Ni. I’d add a touch of metal from each side of Ni and Pd and see what happens with H, D, and T loading. Slight variations in bond lengths and spaces ought to enhance some reactions. Perhaps even the ones you want…

In Conclusion

So that’s my theory and practical idea (if any LENR can be called ‘practical’) in one go. I’ve deliberately avoided the deep weeds of quantum mechanical math, while hopefully showing where it has a theoretical opening for this to work, and suggesting ways to make it go.

That also points at why there are so many ways reported as doing interesting things. From cavitation cells to those with electrolysis to others. And why some work and others don’t. It could be as simple as one guy doing his glass electrolysis cell with an open window pointed at the local airport radar…

I don’t know when I can get a chance to try any of these ideas, nor do I see any way I could make a living out of it (given my present circumstances), so I’m tossing it into the Copyleft and Free world. If you can make it go, all I ask is a footnote of attribution. (Though if you make $Billions on it, a few $Million would be nice too! ;-)

At any rate, I hope it gives some order to ways to think about the LENR process and ways to make it more likely to work. Some ideas on ways to stimulate phonons, to get crystal lattices that are prone to multiple modes even if the H loading is low (like a bimetallic matrix with H added in smaller than unity amounts) and even some ideas on how to make a better Papp Engine go POP!


Posted in Energy, Nukes, Science Bits | 30 Comments

Lunar Months, Tides; for Vukcevic

In another thread, Vukcevic posted a question about lunar months. Despite my being a Master Druid, I have to protest that I’m not an expert on tides. Just an informed Druid ;-) More a broad generalist than expert in any one thing. So, in response to:

vukcevic says:
27 June 2014 at 3:35 pm

Hi Mr. Smith
You are known as the expert on the tides. I’ll have to go back to your main article on the subject, but for the moment I have a short question:
Wikipedia: article
lists 5 different numbers for the lunar month. The odd one out is the synodic month, quoted as ~29.53, while the other four all have periods closely spaced between 27.2 and 27.55 days.
How do you rate the significance of the synodic month’s period in relation to ‘climate change’, compared to any of the rest?

We have this posting.

The short form is “it is the beat frequency between them, not any one month”.
The long answer follows…

Tides depend on gravity; in particular, for weather and climate, IMHO, what matters is the tractional force pulling ocean waters away from the poles and toward the equator. That is strongest under certain alignments of sun, moon, and earth; with certain orbital conditions of closest approach and straightest alignment. The closer, straighter, and more synchronized with each other, the stronger the tides and the stronger the tractional force pulling cold polar water away toward the equator. Water depth matters too: shallower or deeper channels such as the Drake Passage deflect more or less of currents such as the Circumpolar Current up the spine of South America toward the equator.

So when does what happen?

First up, the 5 months:

A synodic month is the most familiar lunar cycle, defined as the time interval between two consecutive occurrences of a particular phase (such as new moon or full moon) as seen by an observer on Earth. The mean length of the synodic month is 29.53059 days (29 days, 12 hours, 44 minutes, 2.8 seconds). Due to the eccentricity of the lunar orbit around Earth (and to a lesser degree, the Earth’s elliptical orbit around the Sun), the length of a synodic month can vary by up to seven hours.

The draconic month or nodal month is the period in which the Moon returns to the same node of its orbit; the nodes are the two points where the Moon’s orbit crosses the plane of the Earth’s orbit. Its duration is about 27.21222 days on average.

The tropical month is the average time for the Moon to pass twice through the same equinox point of the sky. It is 27.32158 days, very slightly shorter than the sidereal month (27.32166 days), because of precession of the equinoxes. Unlike the sidereal month, it can be measured precisely.

The sidereal month is defined as the Moon’s orbital period in a non-rotating frame of reference (which on average is equal to its rotation period in the same frame). It is about 27.32166 days (27 days, 7 hours, 43 minutes, 11.6 seconds). The exact duration of the orbital period cannot be easily determined, because the ‘non-rotating frame of reference’ cannot be observed directly. However, it is approximately equal to the time it takes the Moon to pass twice a “fixed” star (different stars give different results because all have proper motions and are not really fixed in position).

An anomalistic month is the average time the Moon takes to go from perigee to perigee – the point in the Moon’s orbit when it is closest to Earth. An anomalistic month is about 27.55455 days on average.

The different months tell us different things about the orbital status and alignments. Let’s take them in reverse order and start with the “Anomalistic month”. When the moon is at perigee, it is closest to the Earth, so tides are stronger. That matters. So this month length matters. But other things matter too. When that perigee point comes on top of a moon-sun alignment, the forces on tides are even stronger. So it is the interaction of the two that makes the total tide cycle. (Actually the interaction of even more than two, but for now we are just using two to show the process.)

So if the moon is closest to the earth, and the moon and sun are both lined up, we get even stronger tides. That ‘moon and sun line up’ is easiest to see, literally. When lined up with the moon on the far side of the Earth from the sun, we get a full moon. When lined up on the side toward the sun, we get a new moon. Both are strong tides. When directly overhead, the gravitational pull is slightly stronger than when it is on the other side of the planet, so you get the stronger tides when the moon and sun are both overhead during a New Moon, and weaker tides (though still strong) at the full moon. That is the Synodic Month (first on the list above).

As the Synodic and Anomalistic months move into an alignment, with perigee at the moment of the moon aligned with the sun, we get Perigean Spring Tides. Some of the strongest. The wiki on tides puts it at 7 1/2 lunations:

The changing distance separating the Moon and Earth also affects tide heights. When the Moon is closest, at perigee, the range increases, and when it is at apogee, the range shrinks. Every 7½ lunations (the full cycles from full moon to new to full), perigee coincides with either a new or full moon causing perigean spring tides with the largest tidal range. Even at its most powerful this force is still weak, causing tidal differences of inches at most.

Oh, and note in passing that the orbit of the Earth around the sun has a perihelion point where solar tide force is stronger, so that ‘beats’ against these other cycles too. But the solar tide force is smaller than the lunar, so that effect is an addition on top of the lunar cycle, not a dominant force variation. But longer term, it adds to the ‘cycles in cycles’.

That difference in height that is called “weak” does not mean “has little effect”, IMHO. It still amounts to huge quantities of water moved via the tractional forces away from the poles, and significant changes in the quantity of water that goes through the Drake Passage vs being deflected up the coast as a current. What effect does moving a 1/2 foot depth of water over the whole Southern Ocean, from Antarctica to the Equator, have on, say, ENSO?

Their use of 7 1/2 lunations is an interesting number. At 29.53059 days per lunation that is 221.479425 days. The beat between anomalistic and synodic months is 29.53059 – 27.55455 = 1.97064 difference. Dividing synodic by that gives 14.9852788941 which needs to be divided by 2 (as they are counting both new and full moon tides) for that 7 1/2 (that is closer to 7.49263). So about every 221 days a major tide, and about every 442 days has one just a tiny bit stronger (new vs full moon). Perhaps a time period useful for weather, but climate not so much…

Next up is the Draconic month. That is when the moon crosses the ‘node’ line, i.e. is exactly in the plane of the Earth / Sun orbit. When that coincides with a perigean spring tide, it makes them just a bit stronger. So yet another beat frequency to factor into the mix. When the moon is high over the North, water flows toward the north pole. When below the ecliptic, more water flows toward the south pole. That cycle too likely matters. Both “strongest” and “which pole” will matter to weather and climate.

IMHO the Sidereal month is not relevant to tides nor climate. The alignment with a star far far away does not influence the solar / lunar alignment nor the lunar Earth distance, so ought to have no effect. It might have a correlation with precession, and that precession can have some climate correlations, but it is more a correlation than a causation, so I’d look for longer term causality in precession interactions with seasons and not with a lunar stellar interaction.

That leaves the Tropical month: when the Moon passes through the same equinox point in the sky. This is simply the Sidereal month adjusted for precession of the Earth’s axis, so again, IMHO, it is not a physical driver of tides.

So that’s the ‘big lumps’ on lunar month vs. tides, IMHO: the Synodic and Anomalistic month beat frequency, with a minor Draconic beat overlay longer term.

Anything Else?

IMHO, there are longer tide cycles driven by changes other than those months. The eccentricity of both the lunar and Earth orbits changes over time. That will change the distance between us, and thus the tides. The orbital tilt can also change over time, for both the lunar orbit and the Earth orbit, vs. the Sun and vs. each other. Similarly, there are interactions of tides with surface structures, so another ‘beat frequency’ as the rotation of the Earth aligns given surface features with the perigean tides. I looked at that here:

That posting goes over things like the Saros Cycle.

The eclipse calendar tends to be set by the Saros Cycle that’s a bit over 18 years.

Fortunately for early astronomers lunar and solar eclipses repeat every 18 years 11 days and 8 hours in a “Saros” cycle.

That bit about eclipses matters. That is when the moon is crossing the ecliptic. Other than that time, it is above or below the ecliptic and pulling water more north or south.

So lunar alignments suited to an eclipse (Synodic and Draconic in sync) have a cycle of variation that repeats every 18 (ish) years, AND that re-syncs with the topography of the land every third one of those (so the same land or ocean is under the same repeating eclipse / tide forces at the same times) for a (roughly) 54 year repeat. Sort of close to the (roughly) 60 year PDO cycle (which has large error bands on the estimate of its period…)
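A little integer arithmetic shows why it takes three Saros cycles to come back over the same ground (a sketch; the 4 leap days in the 18 years is the usual case):

```python
# Why every third Saros re-syncs with Earth's rotation: work in whole hours.
# One Saros = 18 years 11 days 8 hours; with the usual 4 leap days in the
# 18 years, that is:
saros_hours = (18 * 365 + 4 + 11) * 24 + 8    # 158,048 hours
saros_days = saros_hours / 24

print(f"one Saros: {saros_days:.2f} days (~{saros_days / 365.25:.2f} years)")
print(f"left-over: {saros_hours % 24} hours, so the next eclipse lands ~1/3 of the globe to the west")

triple = 3 * saros_hours                      # the 'triple Saros' (exeligmos)
print(f"after three Saros: {triple % 24} hours left over -> same longitudes again")
print(f"that is about {triple / 24 / 365.25:.1f} years")
```

The 8-hour remainder is the whole story: each Saros the Earth has turned an extra third of a day, and three of them make that a whole day, for the roughly 54 year repeat.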

That brings together the more important lunar month beat cycles with the eclipse cycle and the Earth rotation repeat beat.

Is there more?

Well, yes.

Even longer term you can get changes in the inclination of the Earth polar tilt. Obliquity.

The Earth currently has an axial tilt of about 23.4°. This value remains approximately the same relative to a stationary orbital plane throughout the cycles of precession. However, because the ecliptic (i.e. the Earth’s orbit) moves due to planetary perturbations, the obliquity of the ecliptic is not a fixed quantity. At present, it is decreasing at a rate of about 47″ per century (see below).

Note that it is not so much the pole that bobs up and down as the ecliptic plane itself that shifts; the measure is relative to the ecliptic, so the axial tilt is described as changing because the ecliptic changes. Yet for practical purposes, that apparent change is what matters, as it determines where the tidal force aligns.
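To put that 47″ per century rate in perspective (a back-of-envelope sketch):

```python
# Back-of-envelope: how fast is ~47 arc-seconds per century of obliquity change?
rate_deg_per_century = 47 / 3600            # arc-seconds -> degrees
years_per_degree = 100 / rate_deg_per_century

print(f"{rate_deg_per_century:.4f} degrees per century")
print(f"{rate_deg_per_century * 10:.2f} degrees per millennium")
print(f"~{years_per_degree:,.0f} years per full degree of tilt change, at the current rate")
```

So a full degree of apparent tilt change takes several millennia at today’s rate, and the rate itself wanders, which is why the long-term curves are quasi-periodic rather than clean cycles.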

Now realize that the lunar orbit has a similar bobbing motion that also changes due to “planetary perturbations”…

Look at the graphs in that link on obliquity and notice that the very long term changes are quasi periodic, but not pure cycles. We know less about lunar obliquity changes, IMHO.

There are also potential resonances with other orbital ‘stuff’, so some of the tidal effects may arrive along with things like increases in meteor storms and dust…

It also looks at this paper, which is an interesting one in terms of tidal mixing physics. They specifically look at lunar orbital mechanics and how much that changes tidal mixing of the surface layers of the ocean. They show the expected size of each impact, and what the net forces are expected to be, but only via an estimate based on purely cyclical projections of present values for the cycles (reasonable, since we can’t solve the n-body orbital mechanics issues anyway). So it is good as a place to look at tides and forces as sizes, of a sort. Just realize that, very long term, some of the values they assume for modeling will change…

A Proposed Tidal Mechanism for Periodic Oceanic Cooling.

In a previous study (3) we proposed a tidal mechanism to explain approximately 6- and 9-year oscillations in global surface temperature, discernable in meteorological and oceanographic observations. We first briefly restate this mechanism. The reader is referred to our earlier presentation for more details. We then invoke this mechanism in an attempt to explain millennial variations in temperature.

We propose that variations in the strength of oceanic tides cause periodic cooling of surface ocean water by modulating the intensity of vertical mixing that brings to the surface colder water from below. The tides provide more than half of the total power for vertical mixing, 3.5 terawatts (4), compared with about 2.0 terawatts from wind drag (3), making this hypothesis plausible. Moreover, the tidal mixing process is strongly nonlinear, so that vertical mixing caused by tidal forcing must vary in intensity interannually even though the annual rate of power generation is constant (3). As a consequence, periodicities in strong forcing, that we will now characterize by identifying the peak forcing events of sequences of strong tides, may so strongly modulate vertical mixing and sea-surface temperature as to explain cyclical cooling even on the millennial time-scale.

As a measure of the global tide raising forces (ref. 5, p. 201.33), we adopt the angular velocity, γ, of the moon with respect to perigee, in degrees of arc per day, computed from the known motions of the sun, moon, and earth. This angular velocity, for strong tidal events, from A.D. 1,600 to 2,140, is listed in a treatise by Wood (ref. 5, Table 16). We extended the calculation of γ back to the glacial age by a multiple regression analysis that related Wood’s values to four factors that determine the timing of strong tides: the periods of the three lunar months (the synodic, the anomalistic, and the nodical), and the anomalistic year, defined below. Our computations of γ first assume that all four of these periods are invariant, with values appropriate to the present epoch, as shown in Table 1. We later address secular variations. Although the assumption of invariance is a simplification of the true motions of the earth and moon, we have verified that this method of computing γ (see Table 2) produces values nearly identical to those listed by Wood, the most serious shortcoming being occasional shifts of 9 or 18 years in peak values of γ.

That 6 year cycle is roughly 10x the 221 days above (9.9x), while 9 years is about 14.8x of it. It is also the case that 9 years is 1/2 of a Saros, while 18 years is almost exactly one Saros cycle. All indications that the lunar / tidal cycles line up with climate changes.
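Those ratios are easy to verify (a quick sketch, using the ~221 day interval from earlier and an 18.03 year Saros):

```python
# Checking how the 6 / 9 / 18 year cycles line up with the ~221 day
# perigean spring tide interval and the 18.03 year Saros cycle.
PERIGEAN_DAYS = 221.48   # the 7.5 lunation figure from above
SAROS_YEARS = 18.03
DAYS_PER_YEAR = 365.25

ratios = {y: y * DAYS_PER_YEAR / PERIGEAN_DAYS for y in (6, 9, 18)}
for y, r in ratios.items():
    print(f"{y:>2} years = {r:5.1f} x the ~221 day tide interval")

print(f" 9 years = {9 / SAROS_YEARS:.2f} Saros")
print(f"18 years = {18 / SAROS_YEARS:.2f} Saros")
```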

A time-series plot of Wood’s values of γ (Fig. 1) reveals a complex cyclic pattern. On the decadal time-scale the most important periodicity is the Saros cycle, seen as sequences of events, spaced 18.03 years apart. Prominent sequences are made obvious in the plot by connected line-segments that form a series of overlapping arcs. The maxima, labeled A, B, C, D, of the most prominent sequences, all at full moon, are spaced about 180 years apart. The maxima, labeled a, b, c, of the next most prominent sequences, all at new moon, are also spaced about 180 years apart. The two sets of maxima together produce strong tidal forcing at approximately 90-year intervals.

There’s more, but just read the paper. Here’s a nice graph from it:

Lunar Cycles

For your question, IMHO, that graph is the “money quote” in that it shows Syzygy vs. Perihelion (so the alignment of the Moon with the Sun at closest approach) along with lunar nodal declination effects, and the net-net of those on ‘lunar angular velocity’ as a proxy of sorts for tidal force.

So, with that, hopefully that answers your question?

Some More Lunar Links

Just for completion, here’s a few more links to things I’ve posted about lunar cycles over the years. Some show my ‘progress’ from a rough grasp toward better detail, and some have some speculative bits in them, but ‘history is what it is’ ;-)

And Vuk, you ought to especially like this one: ;-)

And my favorite speculation is that we are just trying to figure out (again) what the Ancients already knew:

So, at the end of all that, the simple fact is that if it IS Luna that does it to our climate via tide cycles, then we cannot accurately predict very long term, as we cannot solve the n-body orbital problem. We can get “close” via a lot of calculation, via brute force iterations, but that will suffer from drift over the long term, and from a lack of exact numbers for initial conditions (such as: just what is the mass of the Trojan asteroids?…) Likely not enough error to mess things up in a 100 year prediction, but enough to make using ancient proxy data suspect, as we don’t really know what the configuration of the planets and Luna (and the Earth!) was 100,000 years ago. It’s just ‘projections’ based on models… perhaps good ones, or perhaps not. Hopefully good enough for ‘all practical purposes’.

With that, hopefully the above descriptions and set of links will give you enough background to ponder so that you come up with some new and useful insight.

Subscribe to feed

Posted in Earth Sciences, Favorites, Science Bits, Stonehenge | Tagged , , , | 7 Comments

Certified Pool Boys and Higher Education

The other day, floating in the pool, I had one of ‘those moments’. The “Ah Hah” moment. It was a small one. Yes, they come in sizes. Some grand and deep. Others shallow and small, but still refreshing. The spouse has credentials. Lots of credentials. She does not have “Early Childhood Education” so can’t teach preschool; but has K-12 with single room schoolhouse endorsement, so can be put on her own running a one room school with K-12 and not much support at all. She has a variety of “Special Education” credentials and has taught several levels of it. (No, I don’t know the specifics. It’s the usual government alphabet soup of unenlightening acronyms and I just don’t have it ‘stick’ in the brain. One she taught was SED – Severely Emotionally Disabled. How that differs from RSP and the other TLAs (Three Letter Acronyms ;-) she’s racked up is something I’ve not quite mastered ;-)

At any rate, the spouse is looking for a job out here. She was interested in doing something other than teaching, having done that for quite a while now. No Joy. Loads of resumes. Not much response or action. A 3 week gig teaching ESL (English as a Second Language) to Brazilian kids. (They get ‘immersion’ by spending a month or so at Disney and related parks while having ESL classes each week day for about 6? hours. Oh, and it counts as credits toward their degree back home.) Bottom line was that a lot of folks look at a resume that says “teacher” and don’t see “clerk” or “administrative assistant” or “secretary” or whatever they have open. She is “siloed” into the Teacher silo.

At work, there are more silos. Only a DBA person can do the Data Base Administration. They are certified in it, and not much else. Only the Application folks can do applications coding. They are certified in it and not much else. Only the Project Manager can do PM stuff (and they want a PMP or similar Project Management certificate). Etc. etc. etc… So a load of folks are stuck in their silo and can not step over the line to learn what the other side does. The only way over the line is to go out and get a certification, where you spend a few hundred to a thousand dollars to a certification agency to learn the BOK (Body Of Knowledge) and the ‘received wisdom’ as controlled by them. The end result is that a lot of understanding gets lost when a product has to cross silo boundaries…

Recently an application was brought up in the Disaster Recovery site. Could not get the backups to restore. The Application folks can’t look at it. It isn’t the application. The DBA can’t fix it. It’s not a database issue. The folks who do restores (an outsourced service provider runs the operations) can’t fix it as they just ‘do the prescribed restores’. The ‘Solution Architects’ are the ones ‘certified’ to make solutions, and they made this one, so it is their baby. Except… they just design new solutions, not fix old ones… So we have opened a full on project, including project numbers and sign offs and all, to “design a new backup / restore process”. Which resulted in another month or two of delay as folks needed to horse all the bureaucracy around that is involved in a project. Just to get backup data read in to disk at the new location. (In reality, there are a few more complications involving chip sets, backup formats, and different backup software standards at the two sites, but those are technical, not organizational, issues.)

Organizationally, we have entered a kind of ‘Analysis Paralysis’ based entirely on the silo structure of the organization, and certification mandate mania. Nobody can just “go fix it.” This stands out to me as my time in Silicon Valley was dominated by “Just DO IT!” organizations where you just fixed it. My resume includes DBA, networks, routers, applications, operations, hardware installs, sales, support, compiler QA, software production and fulfillment, teaching at the Community College level, and more. In a siloed world, I could never have done 90% of it.

Today, to get “certified” in all the things I can do would cost about $2000 per scrap of ‘turf’, and there are at least a dozen of them. Then it would take another $2000 (average, some are more) per year to ‘maintain the cert’. Also a bunch of CEUs (Continuing Education Units) for each. In short, somewhere between $24,000 and $50,000 per year (depending on just which certs I’d collect – they multiply faster than rabbits…) and then I’d be spending all my time maintaining certs, not working. So I’m slowly being defined out of existence by the Certification Bastille. It is not possible to ‘become me’ in that world. The generalist who learns a new area in a week, and does it very well. The guy who parachutes in to a company and ‘fixes what is broken whatever it is’ even if never seen before.

But what about pools?

So what does this have to do with swimming pools?

The Epiphany Moment came while floating in the resort hot tub. Another ’50 something’ couple was in the spa with us. We were talking about finding jobs. The guy said he got hired ‘same day’ at Disney. (The spouse has been trying to get hired there for 3 months now with ‘no joy’). How? We asked… “They needed a pool guy and I am a Certified Pool And Spa Operator.” They wanted a Cert, and he had it. Yes, a Certified Pool Boy.

Now I learned how to do pool maintenance some time back. On my own. About 3 hours all told. Fixed the Florida Friends’ pool chemistry and did some ‘shock’ to clean out the green. If you have any grasp of basic chemistry, it’s nearly trivial. BUT, I could not get a job at a large company as “Pool Boy” since I’m not a certified pool boy. The ‘opportunity’ is closed to me, even if I wanted to do it in my retirement years.

We are becoming a nation (world?) where opportunity closes as soon as you get your first certification and where choices, both for the person and for the organizations, are eliminated. You are put on a ‘track’ and forced into a silo; there to remain until you don’t have enough ongoing CEUs to retain your cert. (Then you are deemed no longer competent – for reasons that are a mystery to me… and discarded. There is a catch-22 in the end game. For many certs you must maintain the cert or lose it, and to get the cert again you have to be employed in the field, but you can’t be employed without the cert, so… I’ve looked at a couple where I’m very qualified, but having not worked in that particular area for the last 5 years, can not even apply).

In Conclusion

So why the rant?

Simple. Loss of freedom.

The Certification gives some minimal assurance that the person has some clue about the job; but it does not guarantee morality nor competence. Mostly it functions to restrict supply and raise wage rates for those in The Guild. Initially this can be a generally beneficial effect. My Dad sold real estate prior to Real Estate Licensing. He then got his license. He could have grandfathered in to a Brokers license, but didn’t bother. My college roomy did get his. AFTER 4 years of college, a bunch of mandatory additional real estate classes, and a few years working for a Realtor / Broker. He was no better a real estate guy than my Dad. It cost him a lot of time and money to get there. It made the Broker richer. It raised commission costs and helped to assure a closed guild with high costs and lower productivity.

In computing, the Cert Racket is making $Millions for the likes of Micro$oft, Oracle, et al. At a couple of $Thousand per cert, and several levels of cert, how much can you rake in if every person working on your product has to pay out $5000 / year to keep their job? (I looked into it; for the cert levels I’d get, it would take about $5k / product to keep up the ‘couple of certs’ each.) It keeps the ‘riffraff’ out of the job market for those in the guild, so a DBA doesn’t need to worry that some smart ass Applications guy will offer to do both. But…

In the end, you have highly siloed organizations with nobody who understands the whole picture, who has worked all sides of the issues. The process ossifies. Prices rise greatly. The whole thing starts to freeze up as the BOK does not welcome innovation. And personal freedom is cut short. The spouse has now accepted an offer of being a substitute teacher as they want her and her certs, even though she very much wanted to have change in her life. We are all impoverished, both by higher prices and by fewer choices with less liberty. All in the name of a ‘certainty’ that the certificate does not supply.

If you start looking at the list of certifications and licenses needed for simple things, it will start to curl your hair. Speaking of which, curling hair is one of them. If you want to braid or curl hair, you need a license for that… Sigh. The whole goal being to develop a local monopoly for The Guild in each field. To eliminate choice, and the freedom that goes with it. While raking in cash for government licensing agencies and corporate Certification mills.

Here’s a link for one of the Pool Boy certs I found on a first look:

CPO® certification courses are designed to provide individuals with the basic knowledge, techniques, and skills of pool and spa operations. The Certified Pool/Spa Operator® certification program has delivered more training than any other program in the pool and spa industry since 1972, resulting in more than 375,000 successful CPO® certifications in 93 countries. Many state and local health departments accept the CPO® certification program.

So is this how societies age and ossify? It is at least a part of it…


Posted in Economics, Trading, and Money | Tagged , , , | 31 Comments

LENR Year Of Answers?

Ran into an interesting article on Wired that does a nice “roll up” of the LENR news. It makes it look like 2014 is likely to be the year of “fish or cut bait” for Cold Fusion / Low Energy Nuclear Reactions.


Because folks who make the gadgets are saying they will ship in 2014. Commercial product. Other folks are putting money down for shipments. Either they happen, or they don’t. It is shown real, or “stuff hits the fan”.

I can’t really do any better job than the author of the Wired article already did. I’ve followed some of their links, and I’ll paste here some quotes from those links that help make the case, but really, just read that article.

15 January 14 by David Hambling

Yes, I’m coming to this late. About 5 months late. Oh Well. Hopefully still ‘fresh’ enough to be of interest.

In December, Cyclone Power Technologies, a US company known for its highly innovative Cyclone Engine, announced that Dr Yeong Kim would be joining their consulting team. Dr Kim is a professor at Purdue University and a leading researcher in LENR. In a press statement Dr Kim said that his new role with Cyclone was an opportunity for research to understand and harness cold fusion.

The Cyclone Engine is an external combustion engine — a high-tech steam engine — that can use virtually anything as fuel, from oil or gas to biomass or powdered coal. It can also be powered by waste heat or solar collectors, and Dr Kim suggests that a future Cyclone Engine might have cold fusion as its heat source.

Further down…

Meanwhile Brillouin, one of the lead contenders for commercialising LENR technology, announced in December that they had signed a licence agreement with an un-named South Korean company after a year of due diligence. The deal, [...] licenses the Koreans to manufacture cold fusion units, with production and installation in 2014.

So we’ve got a 2014 delivery date claim for Brillouin tech. The article references a link in Pure Energy Systems News:

After a year of due diligence, a firm in S. Korea has signed a license with Brillouin, according to Bob George, CEO. They hope to roll out manufacturing plans by the end of 2014, as well as retrofitting a stranded asset power plant with their clean, easily-affordable, “cold fusion” boiler technology.

by Sterling D. Allan
Pure Energy Systems News

So not only a 2014 date, but also power plant scale. That’s a pretty big sized boiler. Not a table top scale. In the following quote “Bob” is their CEO:

But the development that Bob said is “the most significant event” they’ve had, and which I could be the first to announce, is that just before Christmas, they signed a multi-million dollar licensing contract with a firm in South Korea,[...]

This contract came after a year of the firm performing their due diligence. [...]

He hopes that by the end of 2014 they will be ready for roll-out of manufacturing, handing over a set of prints to licensees to build and beta-test units. He said that they would have already done the beta testing on their end, by then.

OK, it’s got a weasel in it with the “end of 2014” that can easily be stretched into mid 2015… but only with some eyebrows raised. By that time, even with a stretch, they ought to have large hardware being moved around and visible.

What Bob is most keen to secure by contract is a “stranded asset” power plant in the range of 5-10 MW willing to beta test their HHT system as a retrofit solution to replace their coal-, or biomass-, or other polluting source that has had to be shut down due to environmental regulations. They would take out the old boiler and scrubber and replace it with their HHT technology. He thinks this could begin to be installed by the end of 2014, as well.

The cost for producing power in such a retrofitted scenario would be 2 cents per kilowatt-hour. Bob is confident that once one plant has been retrofitted as a demonstrator, many others will want to retrofit as well.

They do have another power plant ready to implement the technology, but it’s not a stranded asset scenario.

Well, that’s pretty clear. 5 MW is going to be visible in the construction. Also note the 2¢ / kWh cost. Nice. IF this is real, the whole attempt to kill off western industry via coal bans and CO2 caps goes “up in smoke”, as this just slides into the boiler room… I don’t know which matters more: the derailing of the CO2 Tyrants, or the wide availability of cheap electricity. Let’s all hope it’s real.
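As a rough scale check (my arithmetic, not anything from the article), here is what the low end of that 5-10 MW range looks like at that price, run flat out:

```python
# Rough scale check (my arithmetic, not from the article): a 5 MW plant
# at 2 cents / kWh, run flat out all year.
capacity_kw = 5_000
price_per_kwh = 0.02          # dollars
hours_per_year = 8_760

annual_kwh = capacity_kw * hours_per_year
print(f"annual output : {annual_kwh:,} kWh")
print(f"annual revenue: ${annual_kwh * price_per_kwh:,.0f} (assumes 100% uptime)")
```

Call it a bit under a $Million a year of electricity at the small end, assuming full uptime.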

Further down that Wired article, they mention a New Energy Times article that spots a shift in government attitudes toward funding some LENR research.

The DoE provides funding for innovative energy projects via their Advanced Research Projects Agency for Energy (ARPA-E). The latest funding opportunity announcement included a new addition in the list of technologies which the DoE is interested in: alongside solar, photochemical reactors and radioisotope thermoelectrics and many more, Low Energy Nuclear Reactions made the cut.

U.S. Department of Energy Invites Submission of LENR Proposals
Jan. 3, 2014 – By Steven B. Krivit –

New Energy Times has just learned that, on Sept. 27, 2013, the Department of Energy’s Advanced Research Projects Agency-Energy (ARPA-E) quietly announced a funding opportunity for low-energy nuclear reaction (LENR) research, among other areas.

This first-ever direct invitation from the Department of Energy for submission of proposals to fund this research marks a significant point in the field’s history. [...]
ARPA-E made its announcement in its “Funding Opportunity No. DE-FOA-0001002, CFDA Number 81.135,” at this Web site. [...] Here is a direct link to a PDF of the invitation. LENRs are listed in item 3.6 in Figure 3 on page 11 of 27 in the PDF.

The links are:

The PDF link:

It also looks like ARPA likes obscure URLs…

The Wired article goes on to look at Rossi and his connection, via a couple of hops, to a Chinese money source. Then it wanders off to the Martin Fleischmann Memorial Project and a claim of a variation that produces detectable radiation. Link here:

We have accomplished a few experiments that appear to show small amounts of excess energy (6 to 10%). These results still face a number of open questions that we are diligently working through.

We have shared evidence of a repeatable rise in counts from a Geiger counter. We continue to work to validate these results in new experiments and with better instruments.

OK, positive results. But not exactly an earth shaking ‘rise’…

I actually found this article more interesting:

It covers some fascinating processing of metals at a very small feature scale, and also has a fascinating sidebar or two. Like this one on a novel way to weld glass plates:

My best guess is that the Ni micro-powder had adsorbed moisture on its surface with an H-O-H attached to a surface nickel oxide oxygen atom as …-Ni-Ni-O-H-O-H. When the Fe2O3 is added, a loose bond comes from the dangling H atom as
…-Ni-Ni-O-H-O-H-O-Fe-O-Fe-O-Fe-O-…. Depending on the initial humidity, there could be longer chains of H-O-H-O-H … between the two surfaces.

Hydrophilic bonding is used commercially to bond flat glass plates together, for example to make hermetic crystal packages or optical interferometer components. Just take two clean, flat plates of glass, wet them, place them together, and heat. Initially each surface would look something like
…-Si-O-Si-O with a dangling oxide on the surface. The water chain between them forms
When heated, H-O-H groups drop out of the sandwich until you are left with only
and, at that stage the glass surfaces are permanently bonded. This also occurs in nature, agglomerating smaller oxides particles into larger clusters, and is one reason why nanopowder is not found in nature on the Earth.

Might be fun to try heating some wet glass plates, powders, whatever.

In Conclusion

Looks like while I wasn’t paying attention, the LENR field has gotten much further along. It also looks like this year ought to be the year we get a final answer to “is it or isn’t it?”.

At this point I’d normally love to spend a day or so delving into one Rabbit Hole or another, and adding a lot of ‘what is happening now’ and getting caught up. Unfortunately, instead I’ll be at work managing computer stuff and assuring that Disaster Recovery sites work for this coming hurricane season. Which promises to be a dud. But I digress…

So if folks have a favorite update link or article, feel free to post ‘em up here. With any luck we’ll have more than just “someday maybe” news stories and possibly even a construction site photo or two in the next 6 months.


Posted in Nukes, Oil Energy Nukes, Science Bits, Tech Bits | Tagged , , , | 165 Comments

Le Chatelier and his Principle vs The Trouble with Trenberth

There is a marvelous and generalized Principle that if you push things, they push back. (As a rough paraphrase). It is Le Chatelier’s Principle.

Not always, and not as uniform as a Law, but still very widely applicable. I first ran into it in Chemistry, but the wiki says it has found much broader uses:

In chemistry, Le Chatelier’s principle, also called Chatelier’s principle or “The Equilibrium Law”, can be used to predict the effect of a change in conditions on a chemical equilibrium. The principle is named after Henry Louis Le Chatelier and sometimes Karl Ferdinand Braun who discovered it independently. It can be stated as:

If a chemical system at equilibrium experiences a change in concentration, temperature, volume, or pressure, then the equilibrium shifts to counteract the imposed change and a new equilibrium is established.

This principle has a variety of names, depending upon the discipline using it (see homeostasis, a term commonly used in biology). It is common to take Le Chatelier’s principle to be a more general observation, roughly stated:

Any change in status quo prompts an opposing reaction in the responding system.

In chemistry, the principle is used to manipulate the outcomes of reversible reactions, often to increase the yield of reactions. In pharmacology, the binding of ligands to the receptor may shift the equilibrium according to Le Chatelier’s principle, thereby explaining the diverse phenomena of receptor activation and desensitization. In economics, the principle has been generalized to help explain the price equilibrium of efficient economic systems. In simultaneous equilibrium systems, phenomena that are in apparent contradiction to Le Chatelier’s principle can occur; these can be resolved by the theory of response reactions.

Once again, this is not a Law. There can be cases in multiple variables or cases with known positive feedback where that new equilibrium is a long ways away. Think of nudging a stone over the edge of a small bump on a tall mountain. It WILL come to rest again, but a long ways down…

Still, as a general rule, one ought to look for such things. It is important to expect some manifestation of Le Chatelier’s Principle and look for it as a first behavior. To assume it isn’t active in a physical system is to start off expecting the unlikely.
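A toy numerical example of the principle (the reaction, K, and concentrations here are all made up for illustration): a simple A ⇌ B system with K = [B]/[A] = 2. Add more A, and re-equilibration converts two-thirds of it to B, opposing the change:

```python
# Toy Le Chatelier demo: A <-> B with (made-up) equilibrium constant
# K = [B]/[A] = 2. Perturb by adding A; the system converts most of the
# added A to B, partially counteracting the disturbance.
K = 2.0

def equilibrate(total):
    """Split a fixed total concentration so that [B]/[A] = K."""
    a = total / (1 + K)
    return a, total - a

a0, b0 = equilibrate(3.0)         # start at equilibrium: A = 1.0, B = 2.0
a1, b1 = equilibrate(3.0 + 0.9)   # add 0.9 of A, let it re-settle

print(f"before: A = {a0:.2f}, B = {b0:.2f}")
print(f"after : A = {a1:.2f}, B = {b1:.2f}")
print(f"of the 0.9 of A added, only {a1 - a0:.2f} stays as A;")
print(f"the other {b1 - b0:.2f} is converted to B, opposing the change")
```

That is the principle in one function: the disturbance is not erased, but most of it gets absorbed on the way to the new equilibrium.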

So why mention this?

Because, at the core of it, the atmosphere is a physical / chemical system, and this principle has wide application in such systems. Because a (nearly trivial) increase in the trace gas CO2 is a nudge to the equilibrium. Now the air is a dynamic disequilibrium system in most places at most times, but it is seeking equilibrium. So any “nudge” ought to expect a “counter nudge”. Where I would speculate that “counter nudge” comes from is water. Why? Because it is THE dominant feature of the IR bands of interest, and of the heat moving properties of the troposphere in general. Heck, even above that it has some role.

So a tiny increase of CO2 could be expected to be offset by a similar tiny change in the water cycle as the system seeks to return to the prior conditions. Right out the gate, to PRESUME a positive feedback runaway system, as the Global Warmers do, is a mistake. The first presumption ought to be a negative feedback and Le Chatelier taking his due. Only when that can NOT be shown, ought suspicion move on to a positive feedback somewhere.

So “what is the size of water”? Can we see what it does in the IR bands in the atmosphere in any informative way?

In a prior posting I showed the graph before. It is well worth repeating:

Stratosphere radiation by species

That dashed line across the middle is the idealized Tropopause. (In reality it varies from about 17 km over the equator down to near ground level at the poles (or “indistinct” as they said in the wiki…)).

Below the Tropopause, convection rules. Above it, radiative heat transport rules. Right off the bat, we have a big clue about why AGW (human-caused ‘Global Warming’) is based on errors. The belief that radiative forcing at ground level ‘matters’ is simply shown to be a fantasy by the existence of the Troposphere. BY DEFINITION, it is convection, evaporation, condensation, clouds, and rain that matter in the Troposphere. But let’s look at that graph some more and pick up some interesting bits.

It comes from:

Which is paywalled, but the abstract says:

A line-by-line model (LBLRTM) has been applied to the calculation of clear-sky longwave fluxes and cooling rates for atmospheres including CO2, O3, CH4, N2O, CCl4, CFC-11, CFC-12, and CFC-22 in addition to water vapor. The present paper continues the approach developed in an earlier article in which the radiative properties of atmospheres with water vapor alone were reported. Tropospheric water vapor continues to be of principal importance for the longwave region due to the spectral extent of its absorbing properties, while the absorption bands of other trace species have influence over limited spectral domains. The principal effects of adding carbon dioxide are to reduce the role of the water vapor in the lower troposphere and to provide 72% of the 13.0 K d−1 cooling rate at the stratopause. In general, the introduction of uniformly mixed trace species into atmospheres with significant amounts of water vapor has the effect of reducing the cooling associated with water vapor, providing an apparent net atmospheric heating. The radiative consequences of doubling carbon dioxide from the present level are consistent with these results. For the midlatitude summer atmosphere the heating associated with ozone that occurs from 500 to 20 mbar reaches a maximum of 0.25 K d−1 at 50 mbar and partially offsets the cooling of 1.0 K d−1 contributed by H2O and CO2 at this level. In the stratosphere the 704 cm−1 band of ozone, not included in many radiation models, contributes 25% of the ozone cooling rate. Radiative effects associated with anticipated 10-year constituent profile changes, 1990–2000, are presented from both a spectral and spectrally integrated perspective. The effect of the trace gases has been studied for three atmospheres: tropical, midlatitude summer, and midlatitude winter. 
Using these results and making a reasonable approximation for the polar regions, we obtain a value for the longwave flux at the top of the atmosphere of 265.5 W m−2, in close agreement with the clear-sky Earth Radiation Budget Experiment (ERBE) observations. This agreement provides strong support for the present approach as a reference method for the study of radiative effects resulting from changes in the distributions of trace species on global radiative forcing. Many of the results from the spectral calculations reported here are archived at the Carbon Dioxide Information and Analysis Center for use by the community.

Yes, it’s another model… Measured would be better, but “going with it”, it’s a very detailed “line by line” model. They are specifically modeling the cooling of IR sensitive gasses (water vapor, CO2, Ozone). That’s what we want to know. (They also run off into Panic Global Warming Land, but hey, even Chicken Little might have some useful information…)
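
As a quick sanity check on the abstract’s stratopause figure (CO2 providing 72% of a 13.0 K/day cooling rate), the CO2 share works out to:

```python
# Arithmetic check on the Clough & Iacono abstract: 72% of the
# 13.0 K/day cooling rate at the stratopause is attributed to CO2.
co2_share = 0.72 * 13.0
print(f"CO2 cooling at the stratopause: {co2_share:.2f} K/day")  # ~9.36
```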

I got the actual image from another source (reference in the Troposphere Rules posting) that captioned it:

3. Stratospheric cooling rates: The picture shows how water, carbon dioxide and ozone contribute to longwave cooling in the stratosphere. Colours from blue through red, yellow and to green show increasing cooling, grey areas show warming of the stratosphere. The tropopause is shown as dotted line (the troposphere below and the stratosphere above). For CO2 it is obvious that there is no cooling in the troposphere, but a strong cooling effect in the stratosphere. Ozone, on the other hand, cools the upper stratosphere but warms the lower stratosphere. Figure from: Clough and Iacono, JGR, 1995; adapted from the SPARC Website.

I’ve added the bold bit.

Now notice that there is a nice bright diamond of CO2 radiating away heat in the stratosphere, but just below it and below the dashed line, in the troposphere, the CO2 band is doing nothing. That’s what they are talking about. The CO2 band is already closed in the troposphere. It is NOT going to be any more closed with more CO2. Furthermore, “downwelling” IR from the CO2 above it will NOT open it. (Nor reach the ground). A shut door is shut. All the noise and smoke about CO2 and downwelling IR is just stuff and nonsense. That door is shut. CO2 as a radiative agent is limited to action in the stratosphere, where it is a net cooling agent.
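
The “shut door” point above is the band-saturation argument, and a toy Beer-Lambert sketch shows its shape: transmittance through an absorbing column goes as T = exp(−τ), so once optical depth τ is large, doubling the absorber changes almost nothing. The τ values below are illustrative only, not measured optical depths for the 15-micron CO2 band.

```python
# Beer-Lambert toy for the band-saturation ("shut door") argument:
# transmittance T = exp(-tau). At large optical depth, doubling the
# absorber barely moves T. Optical depths here are illustrative.

import math

def transmittance(tau):
    return math.exp(-tau)

for tau in (0.1, 1.0, 10.0):
    t1 = transmittance(tau)
    t2 = transmittance(2 * tau)  # doubling the absorber column
    print(f"tau={tau:5.1f}  T={t1:.4f}  T(doubled)={t2:.4f}")
```

At τ = 0.1 doubling the absorber matters a lot; at τ = 10 the change is down in the fifth decimal place, which is the sense in which the door is “shut”.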

Add more CO2, it will not make the ground warmer.

Now enter Le Chatelier. At most, CO2 could cause some other species to have a reduced effect. It could cause some change in the percentage of IR radiated in the stratosphere via O3 or via the small amount of water vapor present. It might couple to the water in the troposphere and put some energy into the tropospheric water vapor engine (which then convects it to the stratosphere). Le Chatelier says to expect an adaptation toward restoration of equilibrium, and there is a lot of water available to act in just that way. So that’s the first and most likely place to look.

I’d look for “downwelling” IR from the stratospheric CO2 to be absorbed into water vapor or CO2 at the tropopause, and cause that layer to very slightly rise in height, or to put a bit more energetic water in the tropopause / stratosphere mixing band; or perhaps to just be radiated back to space from those water vapor cloud tops. (Notice all that red from radiative water near the top of the troposphere…) In short, to attribute any effect in the troposphere to CO2 IR radiation is an error. It is water that dominates the Troposphere. CO2 IR effects matter in the stratosphere, and there they are cooling.

So what about Trenberth?

From his paper here: we get this graph.

Trenberth 04 / 2009 Energy Balance

Now the Trouble With Trenberth begins…

First off, notice that “back radiation” from “greenhouse gasses” at 333 Watts. It is shown as coming from the Tropopause, right at the cloud tops, next to the precipitation arrow. Exactly HOW do we get “down welling” and “back radiation” from CO2 in exactly the place where the prior graph shows “nothing happening” from CO2? Just not going to happen. But notice he does not say “CO2”, he says “greenhouse gasses”. Wait a minute… what the??… What he is showing, per the first graph, is the back radiation from the water vapor and clouds, NOT CO2!

Now what would Le Chatelier be looking for….

Notice that Evapotranspiration and thermals are shown as constants? IFF there were more “back radiation” hitting the surface, don’t you think a bit of added evaporation would happen? That thermals might pick up a couple more Watts and carry the heat a hair higher? That added water and clouds would cause more precipitation and cooling? As that added water rises, it would make a bit more clouds, and those 79 Watts reflected from clouds would increase.
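
One way to see the point is a toy surface budget: if evaporation and thermals respond to any extra back radiation, little of it is retained at the surface. The split fractions below are pure assumptions for illustration, not measurements; only the 80 and 17 W/m² base fluxes come from Trenberth-style diagrams.

```python
# Toy surface energy budget. Base fluxes are Trenberth-style numbers;
# the partitioning of any extra back radiation between responses is an
# assumption made purely to illustrate Le Chatelier-style damping.

base = {"evapotranspiration": 80.0, "thermals": 17.0}  # W/m^2

def absorb_extra(extra_wm2, evap_frac=0.6, thermal_frac=0.2):
    """Split a hypothetical extra surface flux between responses.
    evap_frac and thermal_frac are assumed, not measured."""
    d_evap = extra_wm2 * evap_frac
    d_therm = extra_wm2 * thermal_frac
    retained = extra_wm2 - d_evap - d_therm
    return d_evap, d_therm, retained

# Of 2 W/m^2 extra: 1.2 goes to evaporation, 0.4 to thermals,
# and only 0.4 is retained to warm the surface.
print(absorb_extra(2.0))
```

Trenberth’s diagram, by holding evaporation and thermals fixed, is effectively the `evap_frac = thermal_frac = 0` case, where the surface retains everything.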

The added CO2 in the stratosphere would be radiating more to space, making a colder stratosphere. That colder air tends to descend to the surface in polar vortex flows, so I’d expect a cooler night-time pole with more cold air (and polar water) flowing in toward the equator.

All of these things in conformance with Le Chatelier and his principle.

Trenberth shows them all as holding constant (or in the case of downflow of the stratosphere at the poles, missing entirely).

Trenberth assumes that CO2 will cause more IR to pass through a closed radiative window, in a convective, non-radiative atmospheric zone, without being absorbed into the already saturated CO2 band; that it will heat the surface but NOT cause more convection or more evaporation; and that surface temperatures will rise without a compensating mass flow or phase change. And that, IMHO, is The Trouble With Trenberth. It is non-physical and violates Le Chatelier’s Principle.

Given a choice of embracing Trenberth or Le Chatelier, I’ll take Le Chatelier… The system adapts, and doesn’t change much. Most likely a trivial change of water distribution and no temperature change at all.

Posted in AGW Science and Background