Broken Feature Hell

Sigh.

I’m stuck once again in Broken Feature Hell.

For decades I’ve been working in I.T. doing computer stuff. The whole time has been marked by features that don’t work. Either as ‘bugs’, or as ‘Real Soon Now’ broken promises, or sometimes as “let the customer do QA” (in the Micro$oft model). Sometimes as Sales Guy Hype Not Shipping Yet…

Once, at Apple, I was looking to buy a load of network gear. One Sales Guy had a great product spec. I was ready to buy several and “Make His Day”. Then I said “OK, just bring in a sample and set it up. Show me that it works.” He was hesitant. Yes, often that is simply because it is a PITA to do the releases and drag the samples out and get the tech guy assigned and all that time he is not selling. But, I was insistent, there would be no sale without a Demo. A working Demo.

So a couple of weeks later they show up with a Demo Unit. We put it on the table. He hands me the manual. I look over the chassis. Nice lights with all the right labels. “OK, plug it in. Let’s fire it up.” says I. He does a sales pitch. “Um, plug it in. Power. Now please.” says I. He looks a bit pained and pale. “Does it work or not? I want to see it run, right now.” says I….

Well, turns out that this prototype unit had worked maybe a few days ago then failed and they were still trying to make it go right back in the lab, but they brought over the nice empty case for me to look at….

A very dramatic case of Broken Feature Hell, in that the whole product didn’t work, and in spectacular fashion, but such is the world of I.T. and new products.

Another time I spent close to a week trying to track down why a client mail server would only deliver mail from outside the company every 20-ish minutes. It worked fine. Just that the mail coming in and going out only happened to move once every 20 minutes or so. For 20 minutes, mail would go out. Often instantly. But no inbound. Then for about 20 minutes, inbound would flow (often instantly) but outbound would not flow. Eventually I worked out that they had 2 “default gateways” set in the configuration. One inside, one outside.

Now the “default gateway” is also known as the router of last resort. That is, there is exactly ONE “LAST” resort. I argued with the client for the better part of a few hours that this was a Very Bad Idea. Eventually, on the Microsoft “support” web site I found a description of this “feature”. Seems that a Windoz box will rotate between multiple “routers of last resort” on about a 20 minute rotation. They stated about this bug “This behavior is by design”…

So with that in hand, I could demonstrate WHY the mail did what it did, and then was allowed to set up the mail server as it ought to have been. A static route to the inside interface for inside corporate mail, and a default gateway that pointed outbound for ‘everything else’. Mail flowed instantly both ways…
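For flavor, the general shape of that fix on a Windows server of that era looks like this (all the addresses here are made up for illustration; the real corporate ranges were different):

    REM remove the second, inside "default gateway" from the NIC configuration, then
    REM add a persistent static route for the internal corporate networks via the inside router:
    route -p add 10.0.0.0 mask 255.0.0.0 10.1.1.1
    REM leave exactly one default gateway, pointing at the outside / Internet router:
    route -p add 0.0.0.0 mask 0.0.0.0 192.168.0.1

One router of last resort, plus explicit routes for everything you actually know about.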

All due to a broken “feature”.

Much of my life has been spent in Broken Feature Hell, and I’ve gotten pretty good at navigating the turf. I can see a broken feature being hyped in a sales call faster than most anyone else; and I can smell the fear when you ask about it…

The Present Land Of Broken Features

So I had this “Bright Idea”. Make a Qemu Bigendian system on a chip so that I could make the bigendian parts of GIStemp work on any old PC. Qemu is a free system emulator. It has SPARC support in it (and I’m pretty sure either a SPARC or SGI MIPS was the base system on which GIStemp was run). Easy Peasy…

The first cut of research showed all the right features present. The docs all said it worked on Windows as well as *nix machines (Linux, Unix, POSIX, etc.). The feature list stated it had several bigendian chips emulated (SPARC, MIPS, PowerPC, ARM with big endian set…). OK, looks reasonable. Probably a few Broken Features, but ought to be livable.

I’ve also gotten very good at navigating Broken Feature Minefields and charting the course through them that works. As long as there are “enough” features that work, you can usually find a set that lines up. Even if it takes some exploration.

But sometimes… sometimes not so much. Lots of potential paths, but then after a day or two down that road comes the precipice or the road block. Sometimes it is a small one and you can build a bridge over or around it. Install a package or find the ‘just so’ flag settings that let it work enough. Sometimes it is a hard stop and you back up a few days and try again.

In this case, the number of features is large, and the number that don’t work is large too. LOTS of paths to explore, almost all of them ending badly. It is Broken Feature Hell. All the work, none of the progress.

What’s What

Do realize that in software development projects a fair amount of Grim Determination is required. An unwillingness to give up. So this Complaint by me does not mean I’m giving up. Not until every path is explored, and marked as a dead end, can you say that Broken Feature Hell has ended in death of the project. For now it is just a bleat from a hot cliff overlooking a dead valley Yet Again.

OK, first up, the Windows support. It’s slim. Yes, I have it running on my laptop. Yes, it works. Yes I have a SPARC emulation running. But several “features” don’t work. First off, the ‘prebuilt’ system images (Debian on a SPARC 32) limit you to Debian Etch. Debian stopped supporting SPARC 32 back in 4.x release land. Now they are at 7.x land. So old code, not updated, no new features. I’m OK with that, I guess. The current Debian wants to support SPARC 64 chips, but doesn’t have it working yet. Yes, I know, free software and free labor making it go, if you want to you can contribute time to it or pay for it. Still, it means that SPARC is not really working all that well.

Then, there’s that small matter of flags to Qemu. The two prebuilts come in windowing and command line. OK, the command line one is fast enough and all I need for GIStemp anyway. It does work. The windowing one works too, after a fashion. As the emulation gets no help at all from the real graphics hardware, it is God Damn Slow. Painfully so. OK, I could ignore it I suppose. Except…

At the GIStemp source code download page, the tarball is not in FTP land, but in HTML land. No Worries, as the wget command gets HTTP files too… but attempts to use it from the command line, while working fine on most any other site, give 404 and 403 and other errors on the GIStemp page (depending on what options are set).
http://data.giss.nasa.gov/gistemp/sources/
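For reference, a hedged sketch of the sort of invocations in play (the tarball name below is a stand-in; the real file is whatever that index page lists). One common cause of this kind of 403 is a web server that refuses wget’s default User-Agent string:

    # plain fetch of the tarball; this is the style of call that was coming back 403 / 404:
    wget http://data.giss.nasa.gov/gistemp/sources/GISTEMP_sources.tar.gz
    # usual workarounds: offer a browser-like User-Agent and walk the index page instead:
    wget --user-agent="Mozilla/5.0" -r -np -l 1 -A "*.tar.gz" http://data.giss.nasa.gov/gistemp/sources/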

So is that a fault of the GIStemp web page? Or of wget? Or of THIS particular Etch version of Debian wget? Or… So a ‘quick’ 20 minute launch and set up of the windowing Qemu SPARC and the browser lets me download the GIStemp code (that I showed in the last posting). OK, I’ve got the code. But…

It is in the windowing prebuilt image, not in the command line image.

No Problem, thinks I, there’s launch options to not do windowing from the windowing version… a few launch attempts later and I realize those “features” don’t work. I can launch it, and it may be running without ANY visible console, but it did not launch a command line version. Sigh. Is there a work around past this? Maybe, hike down those four roads a day or two each and report back…
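For what it’s worth, on a stock Qemu these are the flags one would normally reach for to run the windowing build without a display; whether they behave in this particular Windows port is the open question (the disk image name is a stand-in for whichever prebuilt you grabbed):

    REM route the console to the terminal instead of opening a graphics window:
    qemu-system-sparc -nographic -hda debian_etch_sparc.img
    REM or: keep a serial console on the terminal but explicitly disable the display:
    qemu-system-sparc -display none -serial stdio -hda debian_etch_sparc.img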

So I can download the code, or I can have a working command line system that is livable, but not both at the same time…

Yes, I can do things like put the source code on a CD, and find out how to mount the CD inside the command line Qemu (IFF that feature works in the Windows port…). There are plenty more paths to explore.
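One hedged sketch of that path, assuming the -cdrom feature does work in the Windows port: build an ISO holding the tarball, add something like “-cdrom gistemp_sources.iso” (a made-up file name) to the Qemu launch line, then inside the Debian guest:

    # mount the emulated CD and pull the tarball off it
    # (the device name varies; /dev/cdrom, /dev/sr0, or /dev/scd0 are the usual suspects)
    mount -t iso9660 /dev/cdrom /mnt
    cp /mnt/*.tar.gz ~/
    umount /mnt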

I’m not giving up yet.

But….

Welcome to Broken Feature Hell where you may spend days wandering in the woods only to find yourself back at the last camp. Again.

So far I’ve found a half dozen or so launch flags that look like they don’t work (including the -k en-us flag to let me make the keyboard a US keyboard instead of the GB one it has as shipped; at present I can’t find the ‘pipe’ symbol vertical bar that is essential to *nix command line use…). I also had a ‘failure to configure’ on one instance (apt-get upgrade) but it worked on another. BOTH running from an SD chip. What was different? Not much… and nothing that ought to matter… so a sometimes randomly broken feature…

I now have a FORTRAN compiler installed, but no code yet. And another image has the code, but can’t install the compiler. And a third has the data… and features that ought to let me boot one the same way as the other don’t seem to work, and the interface to peripherals may or may not work and may or may not let me get the code moved; but it is painful to set up in any case (all via command line options at boot time, an area that already seems to have a lot of broken features).
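One dodge that may route around all of that, assuming Qemu’s default user-mode networking is working (it exposes the host side to the guest as 10.0.2.2): push the files out of one image to any box running an SSH server, then pull them into the other. The user name, path, and tarball name here are placeholders:

    # inside the guest that has the code: push the tarball out to the host (or the RPi file server)
    scp GISTEMP_sources.tar.gz someuser@10.0.2.2:/some/share/
    # inside the guest that has the compiler: pull it back down
    scp someuser@10.0.2.2:/some/share/GISTEMP_sources.tar.gz .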

And that is how you know you are in Broken Feature Hell.

There’s just enough options left to try that you keep on searching for The One Path. But so many broken that the odds of that path existing are dancing with zero.

At that point, the Grim Determination starts to be a non-feature as you spend way too much time looking for The One True Path and not enough time asking “Is this sane? Is there a better way?”

Often that is asked just before you find the One True Path… so hope springs eternal.

Yet…

In Conclusion

I’ve not listed all the Broken Features I’ve run into. Things like looking at the MIPS and PowerPC emulations and finding them not all that complete either. This was just a sample for the flavor of it.

I’m not sure exactly what path I’m going to take out of this. Likely continue with the emulator for a while. I’d been wanting to set up a Raspberry Pi general purpose server (but being little endian it can’t run the last part of GIStemp directly). Networking seems robust on the Qemu SPARC. My current “This Time For Sure!” is an RPi file server with the sources, and a command line Qemu SPARC to unpack and run them. All the parts look like they work. Just ought to be a matter of doing it. I’d guess about 2 days of work (if done straight through).

We’ll see.

Or maybe I’ll just take $100 and buy an old PowerMac and turn it into a Debian box… Forget all the roundabout stuff and go for straight hardware. Only question being just exactly which of the hundreds of Mac configs actually works well with Debian ;-)

Well, time to get on with the day. Wish me luck as I explore more “features”…


Down the Rabbit Hole with Qemu, GIStemp and more

So sometimes something gets my attention and I’m “down the rabbit hole” for a few days (weeks… months… ye…)

In this case, it was an old problem, back again.

I never got the last steps of GIStemp to run, as all the machines I had at hand were ‘little-endian’ and GIStemp was ‘big-endian’ in the last steps. FORTRAN is a bit unforgiving about endian-ness in data structures. Data written as FORTRAN ‘unformatted’ binary simply takes on the byte order of the processor that wrote it. Endian is an oblique reference to Gulliver’s Travels and the wiki mentions it:

“In the discipline of computer architecture, the terms big-endian and little-endian are used to describe two possible ways of laying out bytes in memory. The terms derive from one of the satirical conflicts in the book, in which two religious sects of Lilliputians are divided between those who crack open their soft-boiled eggs from the little end, and those who use the big end.”

Basically, if you write the bytes 1 2 3 4 into a computer as one word, are they stored as 1 2 3 4 or as 4 3 2 1? (There are also mixed, ‘middle-endian’ word order layouts on some odd hardware, so it could even be 2 1 4 3 or 3 4 1 2…)
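A quick way to see which kind of box you are on, from any Linux command line (bash and the usual GNU tools assumed), including inside an emulated SPARC:

    # write the four bytes 0x11 0x22 0x33 0x44 to a file, then read them back as one 32-bit word:
    printf '\x11\x22\x33\x44' > /tmp/word
    od -An -tx4 /tmp/word
    # a little-endian box (Intel) prints 44332211; a big-endian box (SPARC) prints 11223344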

Most of the time for almost everyone this endian issue is completely hidden.

Except…

For programmer geeks like me who keep the world sorted out for people who don’t care, or know, that endian issues are all over the place.

So GIStemp keeps endian issues out of the way until the very end, in Steps_4_5 code.

So I had most of everything that mattered by Step_3, and didn’t get the last bit running, as the machines I had at home were either little-endian PCs (Intel chips) or were Macintosh boxes with Motorola or PowerPC chips (big endian) but running Mac O/S and not something I was willing to blow away to install a Linux port.

So the Big Endian bits were left as an unfinished part of the GIStemp port that I never did get to run.

Different Time Perception

I sense time differently than other folks.

I know this, but can’t change it. I remember being 3 or so years old and running down a dirt track with two tire tracks and grass in the middle and falling down and realizing that falling hurts skinned knees in just the same way that I remember being a ’20 something’ and dealing with a romantic rejection in just the same way that I remember missing a meeting at work a week ago; and in just the same way that I see what the world will be like a year from now, but can’t stop it from happening. To other people those are very different experiences. To me, it is all the same perception. All of it is “now”.

So I’ll set something aside to get back to it ‘later’, and then a 1/2 decade later pick it up again at just that spot. Only ‘lately’ have I realized that other folks don’t do that. That a decade ago is faded and lost to them. That it doesn’t just ‘pick up again’. So I’ll say things like “I need to do a posting on that”, and it is the same to me if it is tomorrow or a decade later. Other folks, not so much. They see it as not delivering as time ends shortly after the statement… Oh Well…

So, some ‘long time ago’ I looked at GIStemp and noted that it wasn’t using USHCN.V2:

http://chiefio.wordpress.com/2009/11/06/ushcn-v2-gistemp-ghcn-what-will-it-take-to-fix-it/

That clearly shows it was 2009. It is now 2014. I make that 1/2 decade. To me it was just yesterday. Oh Well…

So why dwell on this? Because sometimes exact dates matter. Note that 2009 well.

So around 2009 to 2010 there was a change of USHCN and some GIStemp code changed. This is after I did the GIStemp port. This posting is about what has changed in GIStemp, so that date matters.

So let’s look at some GIStemp date stamps, and along the way look at new ways to run very old code.

Does anybody know what time it is?

I finally, and apparently about a 1/2 decade later, found a solution to my Big Endian machine need. That being a bit of emulator software that lets you make emulated Big Endian machines on Little Endian Intel chip machines. That being Qemu or the “Quick EMUlator” software. http://wiki.qemu.org/Main_Page is the home page.

Qemu is an open source software bit that lets you emulate other hardware. Now, on my laptop, I have a Sun SPARC bigendian emulator running. Basically, I have a SPARC based SPARCstation 5 or SPARCstation 10 running. Now, to make this particularly ironic, I have a SPARCstation 5 and 10 in my garage. I bought them for about $5 each when some company in Silicon Valley was going out of business. Yet it is quicker and easier to make the emulator run on my laptop. (Not to mention that by now the lithium batteries are likely dead and the machines have lost their identity as their battery backed NVRAM contents evaporate…)

So where to get Qemu? Well… that depends. I got the “for Windows” version that is scarce. The “for Linux” version is available for just about any Linux. But… my laptop and my machine at work are Windows Intel machines. So is there a way to get a SPARCstation running on a WinTel box? Yes.

http://lassauge.free.fr/qemu/

Has a couple of Qemu for Windows releases. I installed the 1.5.3 one. It worked fine. (Details in a future posting, though how many decades is an open perceptual difference ;-)

I now have a SPARC running Debian Linux on my laptop. (Technically, on an SD Chip in a slot in the side of my laptop… but…)
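For the curious, the launch line ends up being roughly this shape (the disk image name stands in for whichever prebuilt Etch image you grabbed; the full details and the exact working flags are for that future posting):

    REM emulate a SPARCstation 5 class machine with 256 MB of RAM, booting the prebuilt Debian Etch disk image
    qemu-system-sparc -M SS-5 -m 256 -hda debian_etch_sparc.img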

The GIStemp Download

So first I tried using “wget” to get the GIStemp sources (as I was running a non-windowing version of Qemu). No joy. So I went whole hog and started a full on X-Windows based Debian On SPARC emulation. It is slow. Very slow. But livable. Barely. And I got the “current” GIStemp source code downloaded via HTTP / Web Browser to my Virtual Machine.

Some Day I’ll write up the details of how to make this go. For now, the important bit is that I did get the download into a Virtual Machine on my laptop. I did get the compressed archive unpacked. It is ready to configure, compile, and make go. But what surprised me was the time stamps. To me, they are not very new. Most of GIStemp is unchanged. Yes, I need to do a ‘diff’ of the sources and figure out exactly what changed. But at a cursory level, it looks like ‘not much’.

Does this mean that they “double dip” and do the GISS adjustments on top of the GHCN / NOAA / NCDC adjustments? I don’t know yet. That depends on the exact differences in the code. That will come in the future. For now, the date stamps do not show much difference.

So what does it look like? Here are some screen shots.

Images Of GIStemp / Qemu Now

This is the top level picture of Qemu running in a window on my WinTel PC.

Qemu with Linux on emulated SPARC: a screenshot of a SPARC instance of Virtual Machine Linux on a WinTel laptop

So here is a screen shot of a Big Endian SPARC (emulated) processor running Linux. Note the tar ball of GIStemp sources and the unpacked directory of them.

Note the Date Stamps on the GIStemp sources as unpacked. Not much change in the last 1/2 decade or so…

GIStemp sources, 9 July 2014 listing with date stamps

Step0 is largely unchanged. Step1, the Python step, is also not much changed.

To me, at a first glance, it looks like the “pick up the data” processes in Step0 have changed a little, but the actual “apply processing” in Step1 has not changed much.

Has GIStemp “double dipped” by having NCDC apply adjustments and GIStemp do “the same old same old” on top of them? I don’t know. But the date stamp pattern does not look like much change. Yes, I’ll go through the code “line by line” and see what did change. It just isn’t really looking like they ripped out a lot of stuff on a first glance…
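Once the old sources are off the Mac, the comparison itself is the easy part. A recursive diff of the two trees does it (directory names here are placeholders):

    # quick summary of which files differ at all:
    diff -rq gistemp_old/ gistemp_new/
    # full line by line differences, with context, saved off for reading later:
    diff -ru gistemp_old/ gistemp_new/ > gistemp_changes.diff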

How about Steps 2 and 3?

GIStemp Step 2 and Step 3 date stamps on ls listing

Again, it doesn’t look like a lot of change.

For completion, here is the last bit. Steps4_5:

listing of GIStemp Steps 4 and 5 source code

The only notable change looks to involve HadCRUT R2 release changes.

OK, I’m definitely “down the rat hole” as I’m going to do a character by character compare of old and new. But it will take some time. My old copies are on a Mac from about 20 years ago (though it is with me). So it will take a while to get it booted and figure out how to get the old stuff from it to the new Virtual Machine SparcStation 10 emulator. And time is what I don’t have a lot of right now.

My sense of it is that GIStemp will not have really changed much from ‘last look’ and that they double dip the NCDC homogenization / adjustments. But close examination will answer it, or end it.

Conclusion

Qemu is your friend. Running full on X-Windows is slow, but livable if needed. Running it as a text only ‘small’ shell Linux is quite fast. The emulated SPARC has no hardware video processing, so a single core ends up doing all the graphics work; given that, it is surprisingly effective, though the slow graphics isn’t everything.

(Someday I hope to make a multi-processor Linux Damn Fast machine, but that is a ways off…)

What has been demonstrated? That a BigEndian solution is in hand, if slow, and that GIStemp has not changed much, per the date stamps. More work needed on that to show exactly what changed, and what it does.

I will be spending the next few weeks making a GIStemp On A Chip, with GIStemp and data loaded onto a BigEndian system image under Qemu; all on a modest SD Card. If this works out well, then anyone can run GIStemp via a small emulator download.

So that’s what I’m doing now. Yet Another GISTemp Port.

Details as they are available.

I’ll be making a post or two about how to set up a Qemu SPARC 10, how to make it portable (on a chip / thumbdrive) and how to have a portable GIStemp on a chip that runs on most any Windows machine you have laying around.


Some Thoughts on LENR, Vibrations, Crystals, Phonons, and Quantum Mechanics

I’ve posted a fair number of things on Cold Fusion or LENR or whatever they are calling it these days.

I think I have a handle on how to make it go more reliably. The “bottom line” (here at the top) is that you need to wiggle the atoms and protons and electrons around, faster is better, until they collide enough to join.

The more Quantum Mechanical point of view is that you need to have rapid change of the electromagnetic fields so that the particles undergo a diabatic change and thus move to a new Hamiltonian instead of staying on the same one. (or, shake them until they bang hard enough to fuse ;-)

So a lot of detail follows, but the practical implication is that once you have H or D loaded into Pd or Ni (or potentially other metals with a metallic or partially metallic bond) you need to induce atomic and electronic oscillations so the little dears bang into the walls, so to speak. That can be with high speed change of mag fields, electric fields, electric flows, light (such as UV) whacking the surfaces, thermal energy (hotter is better), ultrasonics (make those phonons wobble!) and maybe more. So some folks have cavitation LENR. Some have hot LENR (Rossi). Some have high frequency e-field LENR (Brillouin), and the Papp engine uses thermal spikes with UV and E-field spikes from a giant spark.

You can load the H or D via raw pressure, or via electric pressure via electrolysis. Just get it loaded. More is better. Probably best is some of both. Then whack it with change. Preferably controllable change so it doesn’t blow up…

The Background

Hard sledding comes first. The QM stuff.

No, I don’t fully understand this. A lot of it looks like F.M. to me… er… ‘Friendly’ Magic ;-) but it is what it is. So here’s my take on the QM of it all… First off, a Hamiltonian.

https://en.wikipedia.org/wiki/Hamiltonian_%28quantum_mechanics%29

In quantum mechanics, the Hamiltonian is the operator corresponding to the total energy of the system. It is usually denoted by H, also Ȟ or Ĥ. Its spectrum is the set of possible outcomes when one measures the total energy of a system. Because of its close relation to the time-evolution of a system, it is of fundamental importance in most formulations of quantum theory.

Key Bits: It’s just talking about the energy of the system. Note the “time-evolution”. That matters as we need to control the rate of change of things to kick particles out of their Hamiltonian ruts.

Another word to know:

https://en.wikipedia.org/wiki/Quantum_mechanics

Generally, quantum mechanics does not assign definite values. Instead, it makes a prediction using a probability distribution; that is, it describes the probability of obtaining the possible outcomes from measuring an observable. Often these results are skewed by many causes, such as dense probability clouds. Probability clouds are approximate, but better than the Bohr model, whereby electron location is given by a probability function, the wave function eigenvalue, such that the probability is the squared modulus of the complex amplitude, or quantum state nuclear attraction.[21][22] Naturally, these probabilities will depend on the quantum state at the “instant” of the measurement. Hence, uncertainty is involved in the value. There are, however, certain states that are associated with a definite value of a particular observable. These are known as eigenstates of the observable (“eigen” can be translated from German as meaning “inherent” or “characteristic”).

So most of the time things are an ill-defined muddle of probability (so unusual things DO happen) but sometimes they have a more definite value. That is called an “eigenstate”. So eigenstates are more definite, the rest is a bit probabilistic.

The time evolution of a quantum state is described by the Schrödinger equation, in which the Hamiltonian (the operator corresponding to the total energy of the system) generates the time evolution. The time evolution of wave functions is deterministic in the sense that – given a wavefunction at an initial time – it makes a definite prediction of what the wavefunction will be at any later time.

So we are packing a crystal lattice of metal with Protons (H without the e-) and with a metallic bond cloud of electrons. (In covalent bonds, the electron is shared between two nuclei. In ionic bonds, one wins the struggle and captures the electron into the outer electron shell, so Na+ is down one and Cl- is up one in salt. In metallic bonds, the electrons run around in a loose soup of electrons. That’s why we can have electricity move and why they are metallic shiny as photons bounce off of the e- cloud. For colored metals, like gold, only the low energy bounces off and the high energy gets absorbed and the metal is blue deficient in reflections and looks golden. All thanks to that electron cloud wandering between atoms in the crystals.) Once packed, we’d like those e- wave functions to get smushed up with some P+ wave functions and become N. Slow neutrons that get stuck into a nucleus somewhere. Normally the e- and P+ don’t get close enough for that. We need a way to collapse their wave functions into a new thing. A way to get each off of their own Hamiltonian and onto a new common one.
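For scale, here is the bare reaction being described, written out (this is just the particle masses, not any particular LENR theory): a neutron weighs more than a proton plus an electron, so whatever squeezes them together has to supply the difference:

    p + e^- \to n + \nu_e, \qquad (m_n - m_p - m_e)c^2 \approx 0.78\ \mathrm{MeV}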

Note the words “time evolution” again. So we need to do things that screw around with the motion / time aspect. Get those suckers smacked around by other atomic wave functions, and fast, so they get pushed over the hump into a new N function.

That’s hinted at in this quote:

Wave functions change as time progresses. The Schrödinger equation describes how wavefunctions change in time, playing a role similar to Newton’s second law in classical mechanics. The Schrödinger equation, applied to the aforementioned example of the free particle, predicts that the center of a wave packet will move through space at a constant velocity (like a classical particle with no forces acting on it). However, the wave packet will also spread out as time progresses, which means that the position becomes more uncertain with time. This also has the effect of turning a position eigenstate (which can be thought of as an infinitely sharp wave packet) into a broadened wave packet that no longer represents a (definite, certain) position eigenstate

We want those particles sitting there, broadening their wave functions, until they find themselves overlapping with each other (the e- and P+) and then whack them FAST into one new wave function that takes less space… by forcing them into a known smaller space… by having the atoms around them crush them together into that eigenstate space.

Enter The Phonon

Now you and I might just call this vibration. Or vibrating atoms. But now, now we need a new name for it. Sigh. Just like “eigenstates” means “that stuff you always knew that was not probabilistic”, phonon means that vibration you always thought was just a vibration, but is now more, er, QM special…

https://en.wikipedia.org/wiki/Phonon

In physics, a phonon is a collective excitation in a periodic, elastic arrangement of atoms or molecules in condensed matter, such as solids and some liquids. Often referred to as a quasiparticle,[1] it represents an excited state in the quantum mechanical quantization of the modes of vibrations of elastic structures of interacting particles.

Phonons play a major role in many of the physical properties of condensed matter, such as thermal conductivity and electrical conductivity. The study of phonons is an important part of condensed matter physics.

The concept of phonons was introduced in 1932 by Russian physicist Igor Tamm. The name phonon comes from the Greek word φωνή (phonē), which translates as sound or voice because long-wavelength phonons give rise to sound. Shorter-wavelength higher-frequency phonons give rise to heat.

So, in short, we want those suckers vibrating. In a quantum mechanical kind of way since they are small things…

Now any particle collection larger than a single hydrogen is essentially beyond exact solution of its Hamiltonian. Too many states and too much flux. Our goal is to add a whole lot of particles and a whole lot of flux so some of those QM states end up squashing some wave functions into a new particle. How to do that?

Well, one clue is that these are thermal. So heating stuff up moves it that way. Another clue is that they make sound, so sound vibrations will move things that way. Try “hot sound” and you ought to be getting even more energetic Hamiltonians, even if you can’t write them down. Some, in that QM Probability kind of way, ought to kick some P+ and e- into being a nice Neutron. Perhaps some into turning Ni into Cu as a next step (or maybe via direct additions?)

There is some nice eye candy graphics on that page, so take a look and watch beads oscillating…

A phonon is a quantum mechanical description of an elementary vibrational motion in which a lattice of atoms or molecules uniformly oscillates at a single frequency. In classical mechanics this is known as a normal mode. Normal modes are important because any arbitrary lattice vibration can be considered as a superposition of these elementary vibrations (cf. Fourier analysis). While normal modes are wave-like phenomena in classical mechanics, phonons have particle-like properties as well in a way related to the wave–particle duality of quantum mechanics.

So those wobbles can have the nature of one whopper of a particle. And they have a spectrum of vibrations.

For example, a rigid regular, crystalline, i.e. not amorphous, lattice is composed of N particles. These particles may be atoms, but they may be molecules as well. N is a large number, say ~10^23, and on the order of Avogadro’s number, for a typical sample of solid. If the lattice is rigid, the atoms must be exerting forces on one another to keep each atom near its equilibrium position. These forces may be Van der Waals forces, covalent bonds, electrostatic attractions, and others, all of which are ultimately due to the electric force. Magnetic and gravitational forces are generally negligible. The forces between each pair of atoms may be characterized by a potential energy function V that depends on the distance of separation of the atoms. The potential energy of the entire lattice is the sum of all pairwise potential energies:

It is electro-magnetic in character, as well as dipping a toe in the Van der Waals pond. Just what we need to get something else that is resisting joining together (via those same electro-magnetic and Van der Waals forces) nudged into doing what it doesn’t want to do. This also implies that some electrical and / or magnetic forces can be used to stimulate more phonons. So things like a microwave excitation of short harmonic wires, or UV absorption into metals, or VHF magnetic fields. (Shades of Tesla and his spikes of HF fields and the assertion that they did odd things…) In short, we can make more, and potentially bigger aggregations of electromagnetic and Van der Waals forces inside a crystal lattice via induction of large phonon activity.

Cram metals with H or D and loads of excess e-, then induce lots of phonons. That ought to be about it. Having a metallic H bond would likely help, too. (More on that later).

The wiki goes on to point out that the math is intractable so a load of simplifying assumptions are made. I suspect we want the non-simple real actions and are looking to exploit a low probability edge case, but one big enough to make things hot. Take a Trillion atoms, and a ‘one in a billion per second’ reaction becomes very usable.

Solids with more than one type of atom – either with different masses or bonding strengths – in the smallest unit cell, exhibit two types of phonons: acoustic phonons and optical phonons.

Acoustic phonons are coherent movements of atoms of the lattice out of their equilibrium positions. If the displacement is in the direction of propagation, then in some areas the atoms will be closer, in others farther apart, as in a sound wave in air (hence the name acoustic). Displacement perpendicular to the propagation direction is comparable to waves in water. If the wavelength of acoustic phonons goes to infinity, this corresponds to a simple displacement of the whole crystal, and this costs zero energy. Acoustic phonons exhibit a linear relationship between frequency and phonon wavevector for long wavelengths. The frequencies of acoustic phonons tend to zero with longer wavelength. Longitudinal and transverse acoustic phonons are often abbreviated as LA and TA phonons, respectively.

Optical phonons are out-of-phase movement of the atoms in the lattice, one atom moving to the left, and its neighbour to the right. This occurs if the lattice is made of atoms of different charge or mass. They are called optical because in ionic crystals, such as sodium chloride, they are excited by infrared radiation. The electric field of the light will move every positive sodium ion in the direction of the field, and every negative chloride ion in the other direction, sending the crystal vibrating. Optical phonons have a non-zero frequency at the Brillouin zone center and show no dispersion near that long wavelength limit. This is because they correspond to a mode of vibration where positive and negative ions at adjacent lattice sites swing against each other, creating a time-varying electrical dipole moment. Optical phonons that interact in this way with light are called infrared active. Optical phonons that are Raman active can also interact indirectly with light, through Raman scattering. Optical phonons are often abbreviated as LO and TO phonons, for the longitudinal and transverse modes respectively.

Notice that light or sound are both able to set things jiggling, and that having different species in the crystal lattice lets you get both kinds of phonons. So hydrogen loading opens the door (and perhaps some other minor elements in the mix could make it interesting too. B or Li? Other metal alloys?) and then phonons make the smashing happen. The differing mass of H and Ni implies optical phonons, and that might be important. It could also explain why things only start when loading nears 1:1 ratio.

There’s more in that article, but for now it can wait. Just realize that this activity of phonons relates to sound and light, and also to emissions of EM waves like microwaves and such. The implication being that those energies going back in can create the phonon activity as well.

So what happens next?

https://en.wikipedia.org/wiki/Adiabatic_process_%28quantum_mechanics%29

This is where it gets pulled together.

Avoided Crossing of two Hamiltonians


The caption says:

Figure 2. An avoided energy-level crossing in a two-level system subjected to an external magnetic field. Note the energies of the diabatic states, |1⟩ and |2⟩, and the eigenvalues of the Hamiltonian, giving the energies of the eigenstates |φ_1⟩ and |φ_2⟩ (the adiabatic states).

Hopefully the Greek characters come through. If not, click through to the article and read it there until I learn Greek ;-)

What it is showing is two Hamiltonians, the red curve and the blue curve, with the avoided crossover between them. Our goal is to get things to take that black line from one corner to the other and change from one Hamiltonian to the other. What does it say enhances this outcome?

Figure 2 shows the dependence of the diabatic and adiabatic energies on the value of the magnetic field; note that for non-zero coupling the eigenvalues of the Hamiltonian cannot be degenerate, and thus we have an avoided crossing. If an atom is initially in state |φ_1(t_0)⟩ in zero magnetic field (on the red curve, at the extreme left), an adiabatic increase in magnetic field (dB/dt → 0) will ensure the system remains in an eigenstate of the Hamiltonian |φ_1(t)⟩ throughout the process (follows the red curve). A diabatic increase in magnetic field (dB/dt → ∞) will ensure the system follows the diabatic path (the solid black line), such that the system undergoes a transition to the state |φ_2(t_1)⟩. For finite magnetic field slew rates (0 < dB/dt < ∞) there will be a finite probability of finding the system in either of the two eigenstates. See below for approaches to calculating these probabilities.

Basically saying things stay on the red or blue line… but what can make things NOT stay on that line? A very fast magnetic slew rate. Or, I’d speculate, a very fast electric slew rate, or phonon slew rate.

In an adiabatic process the Hamiltonian is time-dependent i.e, the Hamiltonian changes with time (not to be confused with Perturbation theory, as here the change in the Hamiltonian is not small; it’s huge, although it happens gradually). As the Hamiltonian changes with time, the eigenvalues and the eigenfunctions are time dependent.

Time.

Deriving conditions for diabatic vs adiabatic passage

The math in that article is quite thick at this point, but what I think it is saying is just that if you move things fast enough, they can’t be elastic enough to stay on their Hamiltonian, and things get forced into other shapes when moved very fast. I.e. onto that black line between Hamiltonians.

In 1932 an analytic solution to the problem of calculating adiabatic transition probabilities was published separately by Lev Landau and Clarence Zener,[7] for the special case of a linearly changing perturbation in which the time-varying component does not couple the relevant states (hence the coupling in the diabatic Hamiltonian matrix is independent of time).

The key figure of merit in this approach is the Landau-Zener velocity:

v_LZ = [(∂/∂t)|E_2 - E_1|] / [(∂/∂q)|E_2 - E_1|] ≈ dq/dt,

where q is the perturbation variable (electric or magnetic field, molecular bond-length, or any other perturbation to the system), and E_1 and E_2 are the energies of the two diabatic (crossing) states. A large v_LZ results in a large diabatic transition probability and vice versa.

Using the Landau-Zener formula the probability, P_D, of a diabatic transition is given by
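For reference, the standard Landau–Zener result is (with a being the coupling between the two diabatic states):

    P_D = e^{-2\pi\Gamma}, \qquad \Gamma = \frac{a^2/\hbar}{\left| \frac{\partial}{\partial t}(E_2 - E_1) \right|}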

So if I’ve got that right, the more rapid and stronger the energy changes, the more likely a diabatic transition.

So high frequency EM fields, bright light, ultrasonics, and even high heat can add some increased probability.

That would explain why high temp E-Cat cells are prone to instability. They start to heat up even faster and make more heat that makes it go faster and… So make it ‘warm enough’, then modulate with something faster to switch, like HF electricity, microwaves, or even ultrasonics.

OK, that’s my theoretical whack at it. Now for a Modest Suggestion on how to make a cell.

Nano-Diamonds and Ultrasonics

There’s a nice PDF on this that I’ve got, but I need to find the link again. For now, a bit less descriptive but flashy link. How to make nano-diamonds with oil and sound:

http://www.hielscher.com/ultrasonic-synthesis-of-nanodiamonds.htm

The chamber is filled with an appropriate slurry, and ultrasound does the rest. It makes tiny spots of heat and pressure so high you get diamonds to form.

What I propose is that the same apparatus ought to make LENR happen. Use a powder of Ni in water, saturated with high pressure H2, or perhaps with a transverse electric current to load the metal, then turn on the ultrasonics to provide the phonon kicker.

Call it the Smith LENR Cell if it works, and I’ll be happy ;-)

Ah, there’s the PDF:

http://www.chm.bris.ac.uk/pt/diamond/pdf/drm17-931.pdf

Up to 10% conversion of organic to diamond. Temps of about 120 C in the bulk liquid. Not exactly hard to engineer. Perhaps we could get a master mechanic like P.G. to build one in the garage… First get it to make diamonds, then swap the liquid for a metal in water slurry with H loading and stand back. (Crank up the ultrasonics slowly in case it works too well ;-)

Ultrasonic Diamond Rig

Now I can also think of other ways to create the phonon activity. Make a bundle of metal rods. Bathe them in microwaves that match their length. Anyone who has put a metal trimmed bit of china in a microwave knows how that can vaporize the metal if strong enough. So keep the microwaves under control. Load the metal with H or D via pressure, or electrolysis, or whatever, then slowly add the microwaves. As radiation, or as a directly coupled electric current. (Think “antenna in a hydrogen bath”).

But wait, there’s more!

It ought to also be possible to use this same insight to make a system excited by light (just pick a color that the particular metal absorbs). Or any other phonon creating method.

And about that Papp Engine…

These folks claim they have it working: http://www.pappengine.com/

Some other links:

http://revolution-green.com/unconventional-green-energy/the-papp-engine-history-and-current-progress/

http://www.infinite-energy.com/iemagazine/issue51/papp.html

Perhaps, just perhaps, as a speculative bit, the Papp Engine can work. It has a massive spark in the top from something like 6 overgrown spark plugs. Lots of UV, EM, and heat there, along with a SNAP from the spark. It has a compression cycle, so heat is produced, along with lots of molecular agitation. The metals might well involve things like Ni in a stainless steel. Could it be that either a noble gas itself reacts, or that the ‘special treatment’ involved the introduction of some small amount of hydrogen into the noble gas?

I have to wonder what would happen if you had a ‘side chamber’ like in pre-combustion chamber Diesels, or Stirling engines, with a Ni gauze in it, and used Hydrogen gas in the noble gas mix, with loads of spark, and that external magnetic coil, add in some sound from a big fat spark, and maybe even duct in a bit of microwave energy; if somewhere in that mix enough phonons and hydrogen could get together in that gauze to heat the gas enough to make a net gain…

There’s a lot more I’d like to say, but I’m once again out of time. It will have to wait. Things like what crystal lattice looks best. ( I looked at all metal lattice types and crossed it with what is known to happen in excess energy production AND transmutations. Body centered cubic, and perhaps some face centered cubic look to dominate). Also some on metallic bonded H. Ni and Pd do it, some others don’t. Having metallic H bonding, with the right crystal type, with the bond distances the right size for H or D to fit, but only just, looks like the key magic sauce. Some oxides might also work, along with some alloys. Study the bond lengths, look for cubic crystallization, and try for metallic bonded hydrogen… Yes, I’ve got a load of papers to link for that set of ideas. But for now, it’s time to call it a day and have dinner. But at least I’ve put the marker down ;-)

(Also things like my usual typo pass and QA will have to wait, along with a how-to-do-Greek HTML study…)

It is a large search area, so lots of opportunities to bypass patents on things like Ni/H with something else like a B/Cu alloy or??? FWIW, Cu/Ni makes a cubic crystal, and ought to work as well as Ni. I’d add a touch of metal from each side of Ni and Pd and see what happens with H, D, and T loading. Slight variations in bond lengths and spaces ought to enhance some reactions. Perhaps even the ones you want…

In Conclusion

So that’s my theory and practical idea (if any LENR can be called ‘practical’) in one go. I’ve deliberately avoided the deep weeds of quantum mechanical math, while hopefully showing where it has a theoretical opening for this to work, and suggesting ways to make it go.

That also points at why there are so many ways reported as doing interesting things. From cavitation cells to those with electrolysis to others. And why some work and others don’t. It could be as simple as one guy doing his glass electrolysis cell with an open window pointed at the local airport radar…

I don’t know when I can get a chance to try any of these ideas, nor do I see any way I could make a living out of it (given my present circumstances) so I’m tossing it into the Copy Left and Free world. If you can make it go, all I ask is a footnote of attribution. (Though if you make $Billions on it, a few $Million would be nice too! ;-)

At any rate, I hope it gives some order to ways to think about the LENR process and ways to make it more likely to work. Some ideas on ways to stimulate phonons, to get crystal lattices that are prone to multiple modes even if the H loading is low (like a bimetallic matrix with H added in smaller than unity amounts) and even some ideas on how to make a better Papp Engine go POP!


Lunar Months, Tides; for Vukcevic

In another thread, Vukcevic posted a question about lunar months. Despite my being a Master Druid, I have to protest that I’m not an expert on tides. Just an informed Druid ;-) More a broad generalist than expert in any one thing. So, in response to:
http://chiefio.wordpress.com/2014/06/04/lenr-year-of-answers/#comment-58616

vukcevic says:
27 June 2014 at 3:35 pm

Hi Mr. Smith
You are known as the expert on the tides I’ll have to go back to your main article on the subject, but for the moment I have a short question :
Wikipedia: article http://en.wikipedia.org/wiki/Lunar_month
lists 5 different numbers for the lunar month Odd one out is the synodic month quoted as ~29.53, while the other four are all with periods closely spaced between 27.2 and 27.55 days
How do you rate significance of the synodic month’s period in relations to ‘climate change’ compared to any of the rest?
Thanks.

We have this posting.

The short form is “it is the beat frequency between them, not any one month”.
The long answer follows…

Tides depend on gravity, and what matters most to weather and climate, IMHO, is the tractional force pulling ocean waters away from the poles and toward the equator. That force is strongest under certain alignments of sun, moon, and earth; with certain orbital conditions of closest approach and straightest alignment. The closer, straighter, and more synchronized with each other, the stronger the tides and the stronger the tractional force pulling cold polar water away toward the equator. It also changes how shallow or deep channels such as Drake Passage effectively are, and so how much of a current like the Circumpolar Current gets deflected up the spine of South America toward the equator. http://chiefio.wordpress.com/2010/12/22/drakes-passage/

So when does what happen?

First up, the 5 months:

A synodic month is the most familiar lunar cycle, defined as the time interval between two consecutive occurrences of a particular phase (such as new moon or full moon) as seen by an observer on Earth. The mean length of the synodic month is 29.53059 days (29 days, 12 hours, 44 minutes, 2.8 seconds). Due to the eccentric orbit of the lunar orbit around Earth (and to a lesser degree, the Earth’s elliptical orbit around the Sun), the length of a synodic month can vary by up to seven hours.

The draconic month or nodal month is the period in which the Moon returns to the same node of its orbit; the nodes are the two points where the Moon’s orbit crosses the plane of the Earth’s orbit. Its duration is about 27.21222 days on average.

The tropical month is the average time for the Moon to pass twice through the same equinox point of the sky. It is 27.32158 days, very slightly shorter than the sidereal month (27.32166 days), because of precession of the equinoxes. Unlike the sidereal month, it can be measured precisely.

The sidereal month is defined as the Moon’s orbital period in a non-rotating frame of reference (which on average is equal to its rotation period in the same frame). It is about 27.32166 days (27 days, 7 hours, 43 minutes, 11.6 seconds). The exact duration of the orbital period cannot be easily determined, because the ‘non-rotating frame of reference’ cannot be observed directly. However, it is approximately equal to the time it takes the Moon to pass twice a “fixed” star (different stars give different results because all have proper motions and are not really fixed in position).

An anomalistic month is the average time the Moon takes to go from perigee to perigee – the point in the Moon’s orbit when it is closest to Earth. An anomalistic month is about 27.55455 days on average.

The different months tell us different things about the orbital status and alignments. Let’s take them in reverse order and start with the “Anomalistic month”. When the moon is at perigee, it is closest to the Earth so tides are stronger. That matters. So this month length matters. But other things matter too. When that perigee point comes on top of a moon-sun alignment, the forces on tides are even stronger. So it is the interaction of the two that makes the total tide cycle. (Actually the interaction of even more than the two, but for now we are just using two to show the process).

So if the moon is closest to the earth, and the moon and sun are both lined up, we get even stronger tides. That ‘moon and sun line up’ is easiest to see, literally. When lined up with the moon on the far side of the Earth from the sun we get a full moon. When lined up on the side toward the sun we get a new moon. Both are strong tides. When directly overhead, the gravitational pull is slightly stronger than when it is on the other side of the planet, so you get the stronger tides when the moon and sun are both overhead during a New Moon and weaker tides (though still strong) when at the full moon. That is the Synodic Month (first on the list above).

As the Synodic and Anomalistic months move into an alignment, with perigee at the moment of the moon aligned with the sun, we get Perigean Spring Tides. Some of the strongest. The wiki on tides puts it at 7 1/2 lunations:

http://en.wikipedia.org/wiki/Tide

The changing distance separating the Moon and Earth also affects tide heights. When the Moon is closest, at perigee, the range increases, and when it is at apogee, the range shrinks. Every 7½ lunations (the full cycles from full moon to new to full), perigee coincides with either a new or full moon causing perigean spring tides with the largest tidal range. Even at its most powerful this force is still weak causing tidal differences of inches at most

Oh, and note in passing that the orbit of the Earth around the sun has a perihelion point where solar tide force is stronger, so that ‘beats’ against these other cycles too. But the solar tide force is smaller than the lunar, so that effect is an addition on top of the lunar cycle, not a dominant force variation. But longer term, it adds to the ‘cycles in cycles’.

That difference in height that is called “weak” does not mean “has little effect”, IMHO. It still amounts to huge quantities of water moved via the tractional forces away from the poles, and significant changes in quantity of water that goes through Drake Passage vs deflection up the coast as a current. What does moving 1/2 foot depth of water over the whole Southern Ocean from Antarctica to the Equator have as an effect on, say, ENSO?

Their use of 7 1/2 lunations is an interesting number. At 29.53059 days per lunation that is 221.479425 days. The difference between the anomalistic and synodic months is 29.53059 – 27.55455 = 1.97604 days. Dividing the synodic month by that gives about 14.94, which needs to be divided by 2 (as they are counting both new and full moon tides) for that 7 1/2 (closer to 7.47). So about every 221 days a major tide, and about every 442 days one just a tiny bit stronger (new vs full moon). Perhaps a time period useful for weather, but climate not so much…
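For a cross check, the textbook beat period for two nearly equal cycles can be worked directly, taking the two month lengths at face value:

    T_{beat} = \frac{T_{syn} \cdot T_{anom}}{T_{syn} - T_{anom}} = \frac{29.53059 \times 27.55455}{1.97604} \approx 411.8\ \mathrm{days} \approx 13.9\ \mathrm{lunations}

Halving that, to count perigee landing on either a new or a full moon, gives roughly 206 days, i.e. about 7 lunations; the same general ballpark as the wiki’s 7½ lunation rule of thumb.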

Next up is the Draconic month. That is when the moon crosses the ‘node’ line, i.e. is exactly in the plane of the Earth / Sun orbit. When that coincides with a perigean spring tide, it makes them just a bit stronger. So yet another beat frequency to factor into the mix. When the moon is high over the North, water flows toward the north pole. When below the ecliptic, more water flows toward the south pole. That cycle too likely matters. Both “strongest” and “which pole” will matter to weather and climate.

IMHO the Sidereal month is not relevant to tides nor climate. The alignment with a star far far away does not influence the solar / lunar alignment nor the lunar Earth distance, so ought to have no effect. It might have a correlation with precession, and that precession can have some climate correlations, but it is more a correlation than a causation, so I’d look for longer term causality in precession interactions with seasons and not with a lunar stellar interaction.

That leaves the Tropical month. When the moon passes through the same equinox point in the sky. This is simply the same as the Sidereal month adjusted for precession of the Earth axis, so again, IMHO, is an unphysical thing in terms of tides.

So that’s the ‘big lumps’ on lunar month vs tides, IMHO. The Synodic and Anomalistic month beat frequency, with a minor Draconic beat overlay longer term.

Anything Else?

IMHO, there are longer tide cycles that are driven by other changes than those months. The circularity of both the lunar and Earth orbits changes over time. That will change the distance between us, and thus the tides. The orbital tilt can also change over time for both the lunar orbit and the Earth orbit vs the sun and vs each other. Similarly there are interactions of tides with the surface structures, so another ‘beat frequency’ as the rotation rate of the earth aligns given surface features with the Perigean tides. I looked at that here:

http://chiefio.wordpress.com/2013/01/24/why-weather-has-a-60-year-lunar-beat/

That posting goes over things like the Saros Cycle.

The eclipse calendar tends to be set by the Saros Cycle that’s a bit over 18 years.

http://www.myastrologybook.com/Saros-cycle-of-moon-lunar-saros.htm

Fortunately for early astronomers lunar and solar eclipses repeat every 18 years 11 days and 8 hours in a “Saros” cycle.

That bit about eclipses matters. That is when the moon is crossing the ecliptic. Other than that time, it is above or below the ecliptic and pulling water more north or south.

So lunar alignments suited to an eclipse (Synodic and Draconic in sync) have a cycle of variation that repeats every 18 (ish) years, AND, since each Saros carries an extra 8 hours or so, the track shifts about a third of the way around the globe each time; so it resyncs with the topology of the land every third one of those (the same land or ocean under the same repeat frequency of eclipses / tide forces at the same times) for a (roughly) 54 year repeat. Sort of close to the (roughly) 60 year PDO cycle (that has large error bands on the estimate of the cycle…)

That brings together the more important lunar month beat cycles with the eclipse cycle and the Earth rotation repeat beat.

Is there more?

Well, yes.

Even longer term you can get changes in the inclination of the Earth polar tilt. Obliquity.

https://en.wikipedia.org/wiki/Axial_tilt

The Earth currently has an axial tilt of about 23.4°. This value remains approximately the same relative to a stationary orbital plane throughout the cycles of precession. However, because the ecliptic (i.e. the Earth’s orbit) moves due to planetary perturbations, the obliquity of the ecliptic is not a fixed quantity. At present, it is decreasing at a rate of about 47″ per century (see below).

Note that the pole does not bob up and down so much as the Earth bobs up and down and the measure is relative to the ecliptic, so the axial tilt is described as changing due to the ecliptic changing. Yet for practical purposes, that apparent change is what matters as that determines where the tidal force aligns.

Now realize that the moon has a similar bobbing up and down moment that also changes due to “planetary perturbations”…

Look at the graphs in that link on obliquity and notice that the very long term changes are quasi periodic, but not pure cycles. We know less about lunar obliquity changes, IMHO.
http://chiefio.wordpress.com/2014/01/24/the-moons-orbit-is-wrong-it-can-change-a-lot-and-tides-will-too/

There are also potential resonances with other orbital ‘stuff’, so some of the tidal effects may arrive along with things like increases in meteor storms and dust…
http://chiefio.wordpress.com/2011/11/03/lunar-resonance-and-taurid-storms/

That posting also looks at this paper: http://www.pnas.org/content/97/8/3814.full which is interesting in terms of tidal mixing physics. They specifically look at lunar orbital mechanics and how much that changes tidal mixing of the surface layers of the ocean. They show the expected size of each impact, and what the net forces are expected to be, but only via estimates based on purely cyclical projections of present values for the cycles (reasonable, since we can’t solve the n-body orbital mechanics problem anyway). So it is a good place to look at tides and forces as rough sizes. Just realize that, very long term, some of the values they assume for modeling will change…

A Proposed Tidal Mechanism for Periodic Oceanic Cooling.

In a previous study (3) we proposed a tidal mechanism to explain approximately 6- and 9-year oscillations in global surface temperature, discernable in meteorological and oceanographic observations. We first briefly restate this mechanism. The reader is referred to our earlier presentation for more details. We then invoke this mechanism in an attempt to explain millennial variations in temperature.

We propose that variations in the strength of oceanic tides cause periodic cooling of surface ocean water by modulating the intensity of vertical mixing that brings to the surface colder water from below. The tides provide more than half of the total power for vertical mixing, 3.5 terawatts (4), compared with about 2.0 terawatts from wind drag (3), making this hypothesis plausible. Moreover, the tidal mixing process is strongly nonlinear, so that vertical mixing caused by tidal forcing must vary in intensity interannually even though the annual rate of power generation is constant (3). As a consequence, periodicities in strong forcing, that we will now characterize by identifying the peak forcing events of sequences of strong tides, may so strongly modulate vertical mixing and sea-surface temperature as to explain cyclical cooling even on the millennial time-scale.

As a measure of the global tide raising forces (ref. 5, p. 201.33), we adopt the angular velocity, γ, of the moon with respect to perigee, in degrees of arc per day, computed from the known motions of the sun, moon, and earth. This angular velocity, for strong tidal events, from A.D. 1,600 to 2,140, is listed in a treatise by Wood (ref. 5, Table 16). We extended the calculation of γ back to the glacial age by a multiple regression analysis that related Wood’s values to four factors that determine the timing of strong tides: the periods of the three lunar months (the synodic, the anomalistic, and the nodical), and the anomalistic year, defined below. Our computations of γ first assume that all four of these periods are invariant, with values appropriate to the present epoch, as shown in Table 1. We later address secular variations. Although the assumption of invariance is a simplification of the true motions of the earth and moon, we have verified that this method of computing γ (see Table 2) produces values nearly identical to those listed by Wood, the most serious shortcoming being occasional shifts of 9 or 18 years in peak values of γ.

That 6 year cycle is roughly 10x the 221 days noted above (9.9x), while 9 years is about 14.9x of it. It is also the case that 9 years is about 1/2 of a Saros, while 18 is almost exactly one Saros cycle. All indications that the lunar / tidal cycles line up with climate changes.
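A quick check of those multiples (my arithmetic, taking the roughly 221 day cycle quoted earlier in the post as given):

    # Check the claimed multiples of the ~221 day cycle, and the Saros fractions.
    YEAR  = 365.25     # days
    CYCLE = 221.0      # days, the value quoted earlier in the post

    for years in (6, 9, 18):
        days = years * YEAR
        print(f"{years:>2} years = {days:6.0f} days = {days / CYCLE:4.1f} x {CYCLE:.0f} days")

    saros_years = 223 * 29.530589 / YEAR          # ~18.03 years
    print(f"9 years / Saros  = {9 / saros_years:.2f}   (about one half)")
    print(f"18 years / Saros = {18 / saros_years:.2f}  (almost exactly one)")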

A time-series plot of Wood’s values of γ (Fig. 1) reveals a complex cyclic pattern. On the decadal time-scale the most important periodicity is the Saros cycle, seen as sequences of events, spaced 18.03 years apart. Prominent sequences are made obvious in the plot by connected line-segments that form a series of overlapping arcs. The maxima, labeled A, B, C, D, of the most prominent sequences, all at full moon, are spaced about 180 years apart. The maxima, labeled a, b, c, of the next most prominent sequences, all at new moon, are also spaced about 180 years apart. The two sets of maxima together produce strong tidal forcing at approximately 90-year intervals.

There’s more, but just read the paper. Here’s a nice graph from it:

http://www.pnas.org/content/97/8/3814/F3.expansion.html

Lunar Cycles

For your question, IMHO, that graph is the “money quote” in that it shows Syzygy vs Perihelion (so the alignment of the moon with the sun at closest approach) along with lunar nodal declination effects, and the net-net of those on ‘lunar angular velocity’ as a proxy of sorts for tidal force.

So, with that, hopefully that answers your question?

Some More Lunar Links

Just for completion, here’s a few more links to things I’ve posted about lunar cycles over the years. Some show my ‘progress’ from a rough grasp toward better detail, and some have some speculative bits in them, but ‘history is what it is’ ;-)

http://chiefio.wordpress.com/2013/01/04/lunar-cycles-more-than-one/

http://chiefio.wordpress.com/2014/02/16/tides-vectors-scalars-arctic-flushing-and-resonance/

http://chiefio.wordpress.com/2014/05/04/arctic-flushing-and-interglacial-melt-pulses/

http://chiefio.wordpress.com/2012/03/18/volcano-moons/

http://chiefio.wordpress.com/2011/11/14/meteor-showers/

http://chiefio.wordpress.com/2014/01/25/a-remarkable-lunar-paper-and-numbers-on-major-standstill/

http://chiefio.wordpress.com/2013/01/22/australia-bushfires-a-lunar-cycle/

http://chiefio.wordpress.com/2010/09/20/moon-causes-monthly-atmospheric-tides/

http://chiefio.wordpress.com/2014/01/04/static-vs-dynamic-scored-air/

http://chiefio.wordpress.com/2014/02/04/which-way-what-water-wanders/

http://chiefio.wordpress.com/2013/01/29/arctic-standstill-tropical-saros/

And Vuk, you ought to especially like this one: ;-)

http://chiefio.wordpress.com/2013/10/05/vukcevic-geomagnetic-tidal/

And my favorite speculation is that we are just trying to figure out (again) what the Ancients already knew:

http://chiefio.wordpress.com/2012/12/20/why-a-henge/

So, at the end of all that, the simple fact is that if it IS Luna that does it to our climate via tidal cycles, then we can not accurately predict the very long term, as we can not solve the n-body orbital problem. We can get “close” via a lot of brute force iterative calculation, but that will suffer from drift over long terms, and from the lack of exact numbers for initial conditions (such as just what is the mass of the Trojan asteroids?…) Likely not enough error to mess things up in a 100 year prediction, but enough to make using ancient proxy data suspect, as we don’t really know what the configuration of the planets and Luna (and the Earth!) was 100,000 years ago. It’s just ‘projections’ based on models… perhaps good ones, or perhaps not. Hopefully good enough for ‘all practical purposes’.
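As a toy illustration of that ‘drift from brute force iterations’ point (mine, and deliberately crude; real ephemeris codes use far better integrators), here is a two-body circular orbit in scaled units stepped forward with a naive method. The orbit slowly spirals outward purely from accumulated numerical error, and the longer you run it, the worse it gets; errors in the starting conditions compound in the same way.

    # Toy demo: a circular two-body orbit (GM = 1, radius = 1) stepped with a
    # naive explicit Euler integrator. The radius should stay 1.0 forever, but
    # numerical error accumulates and the orbit spirals outward over time.
    import math

    def integrate(x, y, vx, vy, dt, steps):
        for _ in range(steps):
            r3 = (x * x + y * y) ** 1.5
            ax, ay = -x / r3, -y / r3            # gravity toward the origin
            x, y = x + vx * dt, y + vy * dt      # crude update: position first...
            vx, vy = vx + ax * dt, vy + ay * dt  # ...then velocity
        return math.hypot(x, y)

    # Start exactly on a circular orbit (speed 1 at radius 1 in these units).
    for steps in (1_000, 10_000, 100_000):
        radius = integrate(1.0, 0.0, 0.0, 1.0, dt=0.01, steps=steps)
        print(f"after {steps:>7} steps, radius = {radius:.3f}  (should stay 1.000)")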

With that, hopefully the above descriptions and set of links will give you enough background to ponder so that you come up with some new and useful insight.

Posted in Earth Sciences, Favorites, Science Bits, Stonehenge

Certified Pool Boys and Higher Education

The other day, floating in the pool, I had one of ‘those moments’. The “Ah Hah” moment. It was a small one. Yes, they come in sizes. Some grand and deep. Others shallow and small, but still refreshing. The spouse has credentials. Lots of credentials. She does not have “Early Childhood Education”, so she can’t teach preschool; but she has K-12 with a single room schoolhouse endorsement, so she can be put on her own running a one room school, K-12, with not much support at all. She has a variety of “Special Education” credentials and has taught several levels of it. (No, I don’t know the specifics. It’s the usual government alphabet soup of unenlightening acronyms and I just don’t have it ‘stick’ in the brain. One she taught was SED – Severely Emotionally Disabled. How that differs from RSP and the other TLAs (Three Letter Acronyms ;-) she’s racked up is something I’ve not quite mastered ;-)

At any rate, the spouse is looking for a job out here. She was interested in doing something other than teaching, having done that for quite a while now. No Joy. Loads of resumes. Not much response or action. A 3 week gig teaching ESL (English as a Second Language) to Brazilian kids. (They get ‘immersion’ by spending a month or so at Disney and related parks while having ESL classes each week day for about 6? hours. Oh, and it counts as credits toward their degree back home.) Bottom line was that a lot of folks look at a resume that says “teacher” and don’t see “clerk” or “administrative assistant” or “secretary” or “whatever they have”. She is “siloed” into the Teacher silo.

At work, there are more silos. Only a DBA person can do the Data Base Administration. They are certified in it, and not much else. Only the Application folks can do applications coding. They are certified in it and not much else. Only the Project Manager can do PM stuff (and they want a PMP or similar Project Management certificate). Etc. etc. etc… So a load of folks are stuck in their silo and can not step over the line to learn what the other side does. You can cross that line only if you go out and get a certification. Where you spend a few hundred to a few thousand dollars to a certification agency to learn the BOK (Body Of Knowledge) and the ‘received wisdom’ as controlled by them. The end result is that a lot of understanding gets lost when a product has to cross silo boundaries…

Recently an application was brought up in the Disaster Recovery site. Could not get the backups to restore. The Application folks can’t look at it. It isn’t the application. The DBA can’t fix it. It’s not a database issue. The folks who do restores (an outsourced service provider runs the operations) can’t fix it as they just ‘do the prescribed restores’. The ‘Solution Architects’ are the ones ‘certified’ to make solutions, and they made this one, so it is their baby. Except… They just design new solutions, not fix old ones… So we have opened a full on project, including project numbers and sign offs and all, to “design a new backup / restore process”. Which resulted in another month or two of delay as folks needed to horse around all the bureaucracy that is involved in a project. Just to get backup data read in to disk in the new location. (In reality, there are a few more complications involving chip sets, backup formats, and different backup software standards at the two sites, but those are technical, not organizational, issues.) Organizationally, we have entered a kind of ‘Analysis Paralysis’ based entirely on the silo structure of the organization, and certification mandate mania. Nobody can just “go fix it.” This stands out to me as my time in Silicon Valley was dominated by “Just DO IT!” organizations where you just fixed it. My resume includes DBA, networks, routers, applications, operations, hardware installs, sales, support, compiler QA, software production and fulfillment, teaching at the Community College level, and more. In a siloed world, I could never have done 90% of it.

Today, to get “certified” in all the things I can do would cost about $2000 per scrap of ‘turf’, and there are at least a dozen of them. Then it would take another $2000 (average; some are more) per year to ‘maintain the cert’, plus a bunch of CEUs (Continuing Education Units) for each. In short, somewhere between $24,000 and $50,000 per year (depending on just which certs I’d collect – they multiply faster than rabbits…) and then I’d be spending all my time maintaining certs, not working. So I’m slowly being defined out of existence by the Certification Bastille. It is not possible to ‘become me’ in that world. The generalist who learns a new area in a week, and does it very well. The guy who parachutes in to a company and ‘fixes what is broken, whatever it is’, even if he’s never seen it before.

But what about pools?

So what does this have to do with swimming pools?

The Epiphany Moment came while floating in the resort hot tub. Another ’50 something’ couple was in the spa with us. We were talking about finding jobs. The guy said he got hired ‘same day’ at Disney. (The spouse has been trying to get hired there for 3 months now with ‘no joy’). How? We asked… “They needed a pool guy and I am a Certified Pool And Spa Operator.” They wanted a Cert, and he had it. Yes, a Certified Pool Boy.

Now I learned how to do pool maintenance some time back. On my own. About 3 hours all told. Fixed the Florida Friends’ pool chemistry and did some ‘shock’ to clean out the green. If you have any grasp of basic chemistry, it’s nearly trivial. BUT, I could not get a job at a large company as “Pool Boy” since I’m not a certified pool boy. The ‘opportunity’ is closed to me, even if I wanted to do it in my retirement years.

We are becoming a nation (world?) where opportunity closes as soon as you get your first certification, and where choices, both for the person and for the organizations, are eliminated. You are put on a ‘track’ and forced into a silo; there to remain until you don’t have enough ongoing CEUs to retain your cert. (Then you are deemed no longer competent – for reasons that are a mystery to me – and discarded. There is a catch-22 in the end game: for many certs you must maintain the cert or lose it, and to get the cert again you have to be employed in the field, but you can’t be employed without the cert, so… I’ve looked at a couple where I’m very qualified but, having not worked in that particular area for the last 5 years, can not even apply.)

In Conclusion

So why the rant?

Simple. Loss of freedom.

The Certification gives some minimal assurance that the person has some clue about the job; but it does not guarantee morality nor competence. Mostly it functions to restrict supply and raise wage rates for those in The Guild. Initially this can be a generally beneficial effect. My Dad sold real estate prior to Real Estate Licensing. He then got his license. He could have grandfathered in to a Brokers license, but didn’t bother. My college roomy did get his. AFTER 4 years of college, a bunch of mandatory additional real estate classes, and a few years working for a Realtor / Broker. He was no better a real estate guy than my Dad. It cost him a lot of time and money to get there. It made the Broker richer. It raised commission costs and helped to assure a closed guild with high costs and lower productivity.

In computing, the Cert Racket is making $Millions for the likes of Micro$oft, Oracle, et. al. At a couple of $Thousand per cert, and several levels of cert, how much can you rake in if every person working on your product has to pay out $5000 / year to keep their job? (I looked into it; for the cert levels I’d get, it would take about $5k / product to keep up the ‘couple of certs’ each.) It keeps the ‘riffraff’ out of the job market for those in the guild, so a DBA doesn’t need to worry that some smart ass Applications guy will offer to do both. But…

In the end, you have highly siloed organizations with nobody who understands the whole picture, who has worked all sides of the issues. The process ossifies. Prices rise greatly. The whole thing starts to freeze up as the BOK does not welcome innovation. And personal freedom is cut short. The spouse has now accepted an offer to be a substitute teacher, as they want her and her certs, even though she very much wanted a change in her life. We are all impoverished, both by higher prices and by fewer choices with less liberty. All in the name of a ‘certainty’ that the certificate does not supply.

If you start looking at the list of certifications and licenses needed for simple things, it will start to curl your hair. Speaking of which, curling hair is one of them. If you want to braid or curl hair, you need a license for that… Sigh. The whole goal being to develop a local monopoly for The Guild in each field. To eliminate choice, and the freedom that goes with it. While raking in cash for government licensing agencies and corporate Certification mills.

Here’s a link for one of the Pool Boy certs I found on a first look:
http://www.nspf.org/en/cpo.aspx

CPO® certification courses are designed to provide individuals with the basic knowledge, techniques, and skills of pool and spa operations. The Certified Pool/Spa Operator® certification program has delivered more training than any other program in the pool and spa industry since 1972, resulting in more than 375,000 successful CPO® certifications in 93 countries. Many state and local health departments accept the CPO® certification program.

So is this how societies age and ossify? It is at least a part of it…

Posted in Economics, Trading, and Money