The DIY Garage Nuke Method – Or how to make a bomb without really trying…

Disclaimers and Caveats

First off, we need some disclaimers and caveats.

I’ve been thinking about this since about 1985 or so. I’ve sat on ALL of it for about 30 years. I’ve watched as bits and pieces, dribs and drabs have entered the public domain. At this point, substantially all of it is easily found in just a few hours of web searching, so it isn’t like I’m giving away a bunch of deep dark secrets.

Second, nuclear engineering is not hard at all, conceptually. It is terrifically hard in the details. The details matter. So giving a conceptual guide does not really provide “aid and comfort” to anyone. They are as likely to kill themselves as anyone else if they don’t already know the details. (Little things are left out, like, oh, how many gamma of high energy you are likely to absorb if you actually try any of this and just how fast your flesh will melt…) So please, no carping about “aid and comfort”, OK?

Finally, I’m a complete amateur at this. Now on one hand, it is startling to me that, as an amateur, I have a decent clue on how to go about this. OTOH, anything I say here could easily spell doom for anyone foolish enough to think this will work. Frankly, I haven’t a clue if it will work. (Well, I have a clue, just not a very good one, and it might well be a suicide pill…)

Political Dimension

Over the years, the nuclear club has grown to include folks like Red China, Pakistan, North Korea and soon Iran. These are among the least morals bound nations in the world and they already have “the bomb”. Folks who are relatively moral have also fallen into ‘less moral’ ways. These include the present Russia. Finally, we have the relatively moral countries of France, UK, USA, and Israel.

This excludes the “could have them if we wanted in no time at all” and the “had them before but probably don’t now” or the “NATO owns them be we have them, tee hee” countries of: Germany, Japan, South Korea, Belarus, Kazakhstan, Ukraine, South Africa, Belgium, Turkey, Netherlands, and even Italy. “How to make a Big Boom!” is one of the worst kept ‘secrets’ in the world. We largely depend on morals and hard to make SNM (Special Nuclear Materials that I like to call “boom stuff”) to keep things from going to hell in a hand basket “right quick”.

So, frankly, I’m not particularly bought into the notion that “security by obscurity” is going to work any longer. IMHO, just about ANY nuclear engineer can do a credible job of making a big-badda-boom and certainly far better than me. So, IMHO, at most I can “spill the beans” about what everybody in the “community” already knows.

That means I’m not particularly worried that anything I’m about to say:

a) Works.
b) Won’t get you killed.
c) Isn’t already widely known.
d) Isn’t already known to be the wrong path.
e) Is of little use to anyone, especially those with a desire to kill me.

In short, anyone who really wants to get The Bomb will be showing up in Pakistan or North Korea with a couple of $Billion and walking away with a working device, not reading some blog by a guy who has only looked at this from the outside for 40 years.

The Tech

First up, we need to state the goals. I’ve mostly been interested in two things. Smallest possible reactor that I could actually make if I really wanted to make one (and didn’t mind a visit from people with badges, uniforms, guns, and maybe even padded cells. I’d hope they would recognize my genius and offer me a job, but that isn’t likely. Why is left as an exercise for the student ;-)

I’ve only secondarily been interested in “path to a Big-Badda-Boom” and mostly as an intellectual exercise in information theory. (Hint: Look where the silence is greatest… the “negative space” tells a great deal.) As a minor point I’ve wondred “why doesn’t FOO look at BAR?” and found my own answers.

I’ve often said that I thought there was a “back door” to SNM, but rarely given hints to what it might be. As that “back door” is now being openly discussed, and it already proven to work (via the USA Mike tests and the Indian U233 bomb) it seems a bit daft to keep silent about that which is openly acknowledged. Besides, that “back door” comes with some pretty significant personal death probabilities from gamma rays and makes a device that gets ever more “toasty” over time while emitting a gamma signature that makes it pretty easy to spot. So at most the game moves from “ignorance is bliss” to “detection and intervention” wins.

So what is this hypothetical “back door”? U233. India made a bomb from it. USA tested a mix material bomb with it (and I’ve seen implied further tests). It is about as good as Plutonium (in some ways a bit better, in contaminant radiation a bit worse) and as Taylor (our best boutique bomb maker) said: There is good plutonium for bombs, and there is better Pu for bombs, but there is no bad Pu for bombs. (As a remembered paraphrase. For actual quote, read John Mcfee “The Curve of Binding Energy”. A great read and to some extent a biography of Taylor. )

Fissile material

In 1946 the public first became informed of U-233 bred from thorium as “a third available source of nuclear energy and atom bombs” (in addition to U-235 and Pu-239), following a United Nations report and a speech by Glenn T. Seaborg.

The United States produced, over the course of the Cold War, approximately 2 metric tons of uranium-233, in varying levels of chemical and isotopic purity. These were produced at the Hanford Site and Savannah River Site in reactors that were designed for the production of plutonium-239. Historical production costs, estimated from the costs of plutonium production, were 2–4 million USD/kg. There are few reactors remaining in the world with significant capabilities to produce more uranium-233.

Nuclear fuel

Uranium-233 has been used as a fuel in several different reactor types, and is proposed as a fuel for several new designs (see Thorium fuel cycle), all of which breed it from thorium. Uranium-233 can be bred in either fast reactors or thermal reactors, unlike the uranium-238-based fuel cycles which require the superior neutron economy of a fast reactor in order to breed plutonium, that is, to produce more fissile material than is consumed.
The long-term strategy of the nuclear power program of India, which has substantial thorium reserves, is to move to a nuclear program breeding uranium-233 from thorium feedstock.

I think you can see that this is largely well discussed and “old news” to anyone in the industry. So anything I can say is at most letting the “unwashed” in on the story.

U238 is the usual and most common form of Uranium. U235 is the usual “boom stuff” of SNM (for Uranium bombs). Little mentioned is U233 that is an even better “boom stuff” and much like Pu in that regard. U233 is easily made form Thorium. In fact, since Thorium is not fissile, all Th reactors pretty much depend on turning Th into U233 that is then fissile in order to work. Mostly folks expect that the “other isotopes” that make the whole process rather incredibly radioactive will make diversion of U233 into bombs “beyond the pale”. I suspect that official “guidance” away from Th and into U reactors was largely to divert attention from that simple fact / process.

So, with that in mind, the question becomes: How to make U233 without too much U232 (that makes it very “hot” with high energy gammas) and how to extract that easily from the process?

Normal power reactors are all about the opposite. How to keep the fuel so “dirty” that it is impossible to make a bomb (as attempts at explosive “assembly” will squib out first due to excess neutrons) or make it relatively deadly to the bomb maker (via so much ambient radiation that attempting to make the device kills you or lights up the whole place with a beacon of radiation easily seen from afar… while it cooks you…)

IMHO, this is why we have not had Throrium MSR (Molten Salt Reactors) commercialized to date. (We had them running decades ago…) Folks realized that the fuel could have the intermediate Protactinium-233 and that maybe it could be removed from the reactor and then when it converts to U-233, the product is pure enough. There are many plans for this, widely discussed, so not a big deal. Most of them need some fairly complicated machinery and it’s all theoretical (at least until China gets their reactors running) so nobody cares much.

So the question becomes: How do you make relatively pure U-233 without too much U-232 in a not too complicated process? Preferably using chemical separation instead of things like isotopic centrifuge separation.

Boot Strap Reactor

First off, you need some way to get some more concentrated nuclear fuel than raw uranium deposits. Yes, many reactors can use un-enriched Uranium (such as the Canadian CANDU, that was suppressed to some extent by the USA in an attempt to prevent what actually happened. India used a CANDU like design to make the SNM for their “devices”…) So we could just go buy / build a CANDU like reactor. But that is big, and expensive, and takes tons of Heavy Water.

While deuterium oxide is readily available and not very expensive ( I have about 3 grams of it in a bottle somewhere… a gift from an engineer friend) it draws attention when you start buying it by the rail car. So is there an smaller, easier way to make a natural uranium fueled reactor with a lot less heavy water?


The “homogeneous reactor”.

In many ways, if you just look at ‘the path not taken’ through the history of the US nuclear plan, you find it littered with the easier means to an end. I’m tempted to say that it looks like every decision was made based on “hide what is easy, show the hard expensive path as working” so as to mislead folks for generations. If that was the case: Good job! It worked!. It also leaves a clear trail of crumbs for finding the “easy way” by just stepping off that path and exploring the “negative space”. (Negative Space Analysis is one of my best and favorite tools. I don’t know if it has ever been codified as a formal thing, but I’ve used it for decades to good effect. Look for what “ought to be but is not” and go there…)

Basically, you blend the U as a salt with water as the coolant and moderator and put it in a bucket. Yup, that simple. (Staying alive is left as an exercise for the student ;-)

Aqueous homogeneous reactors (AHR) are a type of nuclear reactor in which soluble nuclear salts (usually uranium sulfate or uranium nitrate) are dissolved in water. The fuel is mixed with the coolant and the moderator, thus the name “homogeneous” (‘of the same physical state’) The water can be either heavy water or ordinary (light) water, both of which need to be very pure.

A heavy water aqueous homogeneous reactor can achieve criticality (turn on) with natural uranium dissolved as uranium sulfate. Thus, no enriched uranium is needed for this reactor.

The heavy water versions have the lowest specific fuel requirements (least amount of nuclear fuel is required to start them). Even in light water versions less than 1 pound (454 grams) of plutonium-239 or uranium-233 is needed for operation. Neutron economy in the heavy water versions is the highest of all reactor designs.

Their self-controlling features and ability to handle very large increases in reactivity make them unique among reactors, and possibly safest. At Santa Susana, California, Atomics International performed a series of tests titled The Kinetic Energy Experiments. In the late 1940s, control rods were loaded on springs and then flung out of the reactor in milliseconds. Reactor power shot up from ~100 watts to over ~1,000,000 watts with no problems observed.

Aqueous homogeneous reactors were sometimes called “water boilers” (not to be confused with boiling water reactors), as the water inside seems to boil, but in fact this bubbling is due to the production of hydrogen and oxygen as radiation and fission particles dissociate the water into its constituent gases. AHRs were widely used as research reactors as they are self-controlling, have very high neutron fluxes, and were easy to manage. As of April 2006, only five AHRs were operating according to the IAEA Research Reactor database.

Corrosion problems associated with sulfate base solutions limited their application as breeders of uranium-233 fuels from thorium. Current designs use nitric acid base solutions (e.g. uranyl nitrate) eliminating most of these problems in stainless steels.

Here, on one small quote, we have all you need to get started. Use Uranyl Nitrate for longer term runs. But use Uranyl Sulphate for the first start up run to get enough U-233 to make the light water Uranyl Nitrate one run longer term. It is largely self controlling, needs little care in the operation, and works on very small quantities of SNM (or none for a U-SO4 approach).

Run your plain water U-sulfate reactor until you get a couple of pounds of U-233 in a breeder blanket, then swap to a light water version from that point on. Eventually collect enough U-233 (via a relatively clean method below) until you can not only run your breeder, but make a pile of “boom stuff” too.

Assemble and test. (From a distance, please, and you DID wear your lead underwear during this process, right? ;-)

Some Sizing

OK, the Wiki has the idea, but is this thing REALLY small enough to fit in the garage or back yard? (Be aware that various programs look for odd gamma signatures, so you ought to expect a visit form the nice mean with guns and badges even if done out of common view…)

From that same Wiki:

Enrico Fermi advocated construction at Los Alamos of what was to become the world’s third reactor, the first homogeneous liquid-fuel reactor, and the first reactor to be fueled by uranium enriched in uranium-235. Eventually three versions were built, all based on the same concept. For security purposes these reactors were given the code name “water boilers”. The name was appropriate because in the higher power versions the fuel solution appeared to boil as hydrogen and oxygen bubbles were formed through decomposition of the water solvent by the energetic fission products.

The reactor was called LOPO (for low power) because its power output was virtually zero. LOPO served the purposes for which it had been intended: determination of the critical mass of a simple fuel configuration and testing of a new reactor concept. LOPO achieved criticality, in May 1944 after one final addition of enriched uranium. Enrico Fermi himself was at the controls. LOPO was dismantled to make way for a second Water Boiler that could be operated at power levels up to 5.5 kilowatts.

Named HYPO (for high power), this version used solution of uranyl nitrate as fuel whereas the earlier device had used enriched uranyl sulfate. This reactor became operative in December 1944. Many of the key neutron measurements needed in the design of the early atomic bombs were made with HYPO. By 1950 higher neutron fluxes were desirable, consequently, extensive modifications were made to HYPO to permit operation at power levels up to 35 kilowatts this reactor was, of course, named SUPO. SUPO was operated almost daily until its deactivation in 1974.

In 1952, two sets of critical experiments with heavy water solutions of enriched uranium as uranyl fluoride were carried out at Los Alamos to support an idea of Edward Teller about weapon design. By the time the experiments were completed, Teller had lost interest, however the results were then applied to improve the earlier reactors. In one set of experiments the solution was in 25-and-30-inch-diameter (640 and 760 mm) tanks without a surrounding reflector. Solution heights were adjusted to criticality with D2O solutions at D/235U atomic ratios of 1:230 and 1:419 in the smaller tank and 1:856 to 1:2081 in the larger tank. In the other set of experiments solution spheres were centered in a 35-inch-diameter (890 mm) spherical container into which D2O was pumped from a reservoir at the base. Criticality was attained in six solution spheres from 13.5- to 18.5-inch diameter at D/235U atomic ratios from 1:34 to 1:431. On completion of the experiment that equipment too was retired.

Got that? We’re working with spheres of 13.5 inch to 30 inch diameter. Something you could put in a small truck and drive around. Easily fits in a garage or back yard. “Nuke in a jar” sized. Adding a neutron reflector helps make it smaller. What’s a reflector?

A neutron reflector is any material that reflects neutrons. This refers to elastic scattering rather than to a specular reflection. The material may be graphite, beryllium, steel, tungsten carbide, or other materials. A neutron reflector can make an otherwise subcritical mass of fissile material critical, or increase the amount of nuclear fission that a critical or supercritical mass will undergo. An example of this is the Demon Core, a subcritical plutonium pit that went critical in two separate fatal incidents when the pit’s surface was momentarily surrounded by too much neutron reflective material.

While buying a 30 inch (or even a 24 inch) “beryllium sphere” (shades of Galaxy Quest ;-) might draw attention, I doubt that steel would even be noticed. Oh, and plain old water makes a decent reflector, so if you have a swimming pool, just put the sphere in the middle of it…

We will see this again in device design:

A similar envelope can be used to reduce the critical size of a nuclear weapon, but here the envelope has an additional role: its very inertia delays the expansion of the reacting material. For this reason such an envelope is often called a tamper. The weapon tends to fly to bits as the reaction proceeds and this tends to stop the reaction, so the use of a tamper makes for a longer lasting, more energetic, and more efficient explosion. The most effective tamper is the one having the highest density; high tensile strength is unimportant because no material remains intact under the extreme pressures of a nuclear weapon. Coincidentally, high density materials are excellent neutron reflectors. This makes them doubly suitable for nuclear weapons. The first nuclear weapons used heavy uranium or tungsten carbide tamper-reflectors.

On the other hand, a heavy tamper necessitates a larger high explosive implosion system. The primary stage of a modern thermonuclear weapon may use a lightweight beryllium reflector, which is also transparent to X-rays when ionized, allowing the primary’s energy output to escape quickly to be used in compressing the secondary stage.

While the effect of a tamper is to increase efficiency, both by reflecting neutrons and by delaying the expansion of the bomb, the effect on the critical mass is not as great. The reason for this is that the process of reflection is time consuming.

Not mentioned is that putting your Boom Device in a swimming pool or the bottom of a boat under the lead “ballast” also does the job… So avoid living near the Potomac in DC…

For now, what we care about is that we can make that 30 inch sphere into a 1.small or 2 foot sphere with some surrounding reflective stuff, be it steel, water, or beryllium.


The commonly used materials for neutron moderators and reflectors are light water (H2O), heavy water (D2O), graphite (C), zirconium hydride (ZrHx), and beryllium (Be). When utilizing such materials in nuclear reactors, fundamental properties such as thermal and mechanical properties should be understood. This chapter has provided an overview of the fundamental properties of beryllium and zirconium hydride required for neutron reflectors. The outline of zirconium hydride has described the influence of temperature and hydrogen concentration on the basic properties of the hydride.

Note that metal hydrides work and plain old water ain’t too bad…

So our first cut will be a U-sulphate based reactor, likely about a meter or 30 inches in diameter, using natural uranium, but with an added shell containing a Thorium “slurry” or similar aqueous formulation where we breed U-233 (and also acting a bit as a reflector and shielding) with an outer shell of a water jacket (or graphite / charcoal / zirconium hydride / Beryllium / {whatever} neutron reflector). This has to last long enough to breed a couple of pounds of U-233 in the blanket, or we get to re-make it enough times to get there. Whatever, it’s just time vs finesse.

I think you can already see where this is going. A “device” that needs no enrichment to run, using at most a 30 inch sphere of heavy water, and maybe a 60 inch total diameter with breeder blanket and water jacket. All up about a 5 foot sphere. Not exactly a large project.

Once you have the ‘couple of pounds’ of U-233 what’s next? (and it need not be low radiation. While that would make handling it easier, all you really need is the enrichment factor for the next step. A “crazy Ivan” or Jihadi can be found to help if needed, but I suspect that Lead Underwear and fast hands would be enough. Or care in how the breeder blanket is run.) You make the light water version of the same thing, using the breeder U-233 to ‘enrich’ the fuel of natural uranium.

This limits your initial need for heavy water to a fairly small quantity. A 2 foot sphere is Pi*1 volume or 3.1415 cubic feet. So at most about 3 cubic feet of heavy water. IIRC, it’s about 64 lbs / ft^3 for regular water, or about 190 to 200 lbs for regular water. Call that about 20 gallons to 25 gallons. Not exactly a large quantity, though it would likely require a team of buyers at a gallon or less each with a plausible cover story to pass notice. Still way below the trainload scale most folks think is an issue….

OK, so now we have done our first run with heavy water, have a breeder blanket of Th making U-233, and can move on to the enriched (with U-233) light water reactor from this point forward. At most, we’ve had a 30 gallon can of heavy water “tickle” of the monitoring systems, and the whole operation fits in a moving 18 wheeler anyway… so good luck finding where those shell companies each buying one gallon sent their goods… and where it is now.

The Breeder Running Stage

At this point we have a running nuclear breeder in a truck. It is using a light water design with a breeder blanket that needs Thorium (available in beach sand in the Carolinas or Florida and elsewhere) and light water, not much else. We might need a second truck as the ‘refinery’ to take the breeder blanket product and process it, but more on that later. It is about the timing of processes and the need to store ‘volume’ for a while… Bottom line is that we are breeding “Special Nuclear Material” and don’t need a farm of centrifuges to do it nor do we need an ongoing source of “suspicious” materials like heavy water.

We are over the hump of getting natural uranium isotopes to fission and into the realm of breeders and ‘enriched’ fuels without needing a single centrifuge or exotic method. Just chemical separation.

At this point the goal is maximal production of U-233 with minimal U-232 in it. Any fuel with too much U-232 goes into the reactor core as nitrate or phosphate. (Sidebar: Nitrate has fewer corrosion issues. I’ve seen a page saying a U-phosphate reactor was made in the UK… then silence… so most likely the phosphate works best. In any case, the stainless steel corrosion issue is likely behind us now as is the heavy water phase. I’d do a lot more research on the U-phosphate cycle were I actually doing this instead of just making a blog posting out of it. The silence on that front is deafening…)

In any case, the reactor still fits in a mobile rig or easily can be put under a few dozen meters of dirt under a house somewhere…

We are, at this point, likely in the range of a 2 foot reactor with a 1 foot breeder / reflector shell; so the whole thing is about 4 foot diameter or less. Hell, it would fit in my office with a 2 foot concrete shield with room to squeeze past it to my desk…

At this point, it is largely just finding optimizing things and ways to better handle the chemistry. Likely Grad Student stuff.

Some Links and Background


In addition to the Aircraft Reactor Experiment, the Bulk Shielding
Reactor, and the Tower Shielding Facility built as part of its
Aircraft Nuclear Project for the Air Force, the Laboratory had
three other major reactor designs in progress during the mid-1950s:
its own new research reactor with a high neutron flux; a portable
package reactor for the Army; and the Aqueous Homogeneous Reactor,
which was unique because it combined fuel, moderator, and coolant
in a single solution (designed as one of five demonstration
reactors under AEC auspices).

Initial studies of homogeneous reactors took place toward the close
of World War II. It pained chemists to see precisely fabricated
solid-fuel elements of heterogeneous reactors eventually dissolved
in acids to remove fission products–the “ashes” of a nuclear
reaction. Chemical engineers hoped to design liquid-fuel reactors
that would dispense with the costly destruction and processing of
solid fuel elements. The formation of gas bubbles in liquid fuels
and the corrosive attack on materials, however, presented daunting
design and materials challenges.

With the help of experienced chemical engineers brought to the
Laboratory after its acquisition of the Y-12 laboratories, the
Laboratory proposed to address these design challenges. George
Felbeck, Union Carbide manager, encouraged their efforts. Rather
than await theoretical solutions, Laboratory staff attacked the
problems empirically by building a small, cheap experimental
homogeneous reactor model. Engineering and design studies began in
the Reactor Experimental Engineering Division under Charles
Winters, and in 1951 the effort formally became a project under
John Swartout and Samuel Beall.

This was the Laboratory’s first cross-divisional program. Swartout
provided program direction to groups assigned in the Chemistry,
Chemical Technology, Metallurgy, and Engineering divisions, while
Samuel Beall led construction and operations. Beecher Briggs headed
reactor design; Ted Welton, Milton Edlund, and William Breazeale
were in charge of reactor physics; Edward Bohlmann directed
corrosion testing; and Richard Lyon and Irving Spiewak performed
fluid flow studies and component development.

A homogeneous (liquid-fuel) reactor had two major advantages over
heterogeneous (solid-fuel and liquid-coolant) reactors. Its fuel
solution would circulate continuously between the reactor core and
a processing plant that would remove unwanted fissionable products.
Thus, unlike a solid-fuel reactor, a homogeneous reactor would not
have to be taken off-line periodically to discard spent fuel.
Equally important, a homogeneous reactor’s fuel and the solution in
which it was dissolved served as the source of power generation.
For this reason, a homogeneous reactor held the promise of
simplifying nuclear reactor designs.

A building to house the Homogeneous Reactor Experiment was
completed in March 1951. The first model to test the feasibility of
this reactor used uranyl sulfate fuel. After leaks were plugged in
the high-temperature piping system, the power test run began in
October 1952, and the design power level of one megawatt (MW) was
attained in February 1953. The reactor’s high-pressure steam
twirled a small turbine that generated 150 kilowatts (kW) of
electricity, an accomplishment that earned its operators the
honorary title “Oak Ridge Power Company.”

Marveling at the homogeneous reactor’s smooth responsiveness to
power demands, Weinberg found its initial operation thrilling.
“Charley Winters at the steam throttle did everything, and during
the course of the evening, we electroplated several medallions and
blew a steam whistle with atomic steam,” he exulted in a report to
Wigner, asking him to bring von Neumann to see it. Despite his
enthusiasm, Weinberg found AEC’s staff decidedly bearish on
homogeneous reactors and, in a letter to Wigner, he speculated that
the “boiler bandwagon has developed so much pressure that everyone
has climbed on it, pell mell.” Weinberg surmised that the AEC was
committed to development of solid-fuel reactors cooled with water
and Laboratory demonstrations of other reactor types–regardless of
their success–were not likely to alter its course.

Despite AEC preferences, the Laboratory dismantled its Homogeneous
Reactor Experiment in 1954 and obtained authority to build a large
pilot plant with “a two-region” core tank. The aim was not only to
produce economical electric power but also to irradiate a thorium
slurry blanket surrounding the reactor, thereby producing
fissionable uranium-233. If this pilot plant proved successful, the
Laboratory hoped to accomplish two major goals: to build a
full-scale homogeneous reactor as a thorium “breeder” and to supply
cheap electric power to the K-25 plant to enrich uranium.

Initial success stimulated international and private industrial
interest in homogeneous reactors,
and in 1955 Westinghouse
Corporation asked the Laboratory to study the feasibility of
building a full-scale homogeneous power breeder. British and Dutch
scientists studied similar reactors, and the Los Alamos Scientific
Laboratory built a high-temperature homogeneous reactor using
uranyl phosphate fluid fuel. If the Laboratory’s pilot plant
operated successfully, staff at Oak Ridge thought that homogeneous
reactors could become the most sought-after prototype in the
intense worldwide competition
to develop an efficient commercial
reactor. Proponents of solid-fuel reactors, the option of choice
for many in the AEC, would find themselves in the unenviable
position of playing catch-up. But this was not to be.

Section 6.0 Nuclear Materials

Nuclear Weapons Frequently Asked Questions

Version 2.18: 20 February 1999

This material may be excerpted, quoted, or distributed freely
provided that attribution to the author (Carey Sublette) and
document name (Nuclear Weapons Frequently Asked Questions) is
clearly preserved. I would prefer that the user also include the
URL of the source.

I think that covers the requested attribution an links…

There is a LOT more of value at that link than I will quote here.

6.2 Fissionable Materials

There are three isotopes known which are practical for use as fission explosives. These are U-235, Pu-239, and U-233. Of these only U-235 occurs in nature. Pu-239 and U-233 must be produced by bombarding other isotopes with neutrons. A third element, thorium (Th-232), can only undergo fast fission, but can also be used for breeding U-233. There are other elements that are also fissile but they have no practical significance for a variety of reasons. These elements are summarized in subsection 6.2.4.
[…] U-233

This fissile uranium isotope (half-life 162,000 years) is not found in nature. It is instead bred from thorium-232 in a manner similar to the production of Pu-239:

Th-232 + n -> Th-233
Th-233 -> (22.2 min, beta) -> Pa-233
Pa-233 -> (27.0 day, beta) -> U-233

A two-step side reaction chain also occurs during breeding leading to the production of U-232:

Th-232 + n -> Th-231 + 2n
Th-231 -> (25.5 hr, beta) -> Pa-231
Pa-231 + n -> Pa-232
Pa-232 -> (1.31 day, beta) -> U-232
The production of U-232 through this process depends on the presence of significant amounts of un-thermalized neutrons since the cross section of the initial n,2n reaction is small at thermal energies.

If significant amounts of the isotope Th-230 are present then U-232 production is augmented by the reaction:

Th-230 + n -> Th-231
which continues as before.
The presence of U-232 is important because of its decay chain:

U-232 -> (76 yr, alpha) -> Th-228
Th-228 -> (1.913 yr, alpha) -> Ra-224
Ra-224 -> (3.64 day, alpha & gamma) -> Rn-220
Rn-220 -> (55.6 sec, alpha) -> Po-216
Po-216 -> (0.155 sec, alpha) -> Pb-212
Pb-212 -> (10.64 hr, beta & gamma) -> Bi-212
Bi-212 -> (60.6 min, beta & gamma, 64%) -> Po-212
Bi-212 -> (60.6 min, alpha & gamma, 36%) -> Tl-208
Po-212 -> (3×10^-7 sec, alpha) -> Pb-208 (stable)
Tl-208 -> (3.06 min, beta & gamma) -> Pb-208
The rapid decay sequence beginning with Ra-224 produces a large amount of energetic gamma rays. About 85% of this total gamma energy output is due to the last isotope in the sequence, thallium-208 which produces the most energetic gamma rays (up to 2.6 MeV). The amount of gamma radiation emitted is proportional to the amount of Th-228 present.

The buildup of U-232 as a contaminant is unavoidable during the production of U-233. This is similar to the plutonium isotope contamination problem discussed below in plutonium production, but occurs at a much lower rate. The first (n,2n) reaction only occurs when neutrons with energies in excess of 6 MeV are encountered. Only a small percentage of fission neutrons are this energetic, and if the thorium breeding blanket is kept in a reactor region where it is only exposed to a well moderated neutron flux (i.e. essentially no neutrons above the Th-232 fission threshold of 500 KeV) this reaction can be nearly eliminated. The second reaction proceeds very efficiently with thermalized neutrons however, and minimizing U-232 from this source requires choosing thorium that naturally has a low Th-230 concentration.

If the above precautions are followed, weapons-grade U-233 can be produced with U-232 levels of around 5 parts per million (0.0005%). U-233 with more than 50 ppm (0.005%) of U-232 is considered low grade.

In a commercial fuel cycle the build-up of U-232 is not really a disadvantage, and may even be desirable since it reduces the proliferation potential of the uranium. In a fuel economy where the fuel is reprocessed and recycled the U-232 level could build up to 1000 – 2000 ppm (0.1 – 0.2%). In a system specifically engineered to accumulate U-232, levels of 0.5 – 1.0% can be reached.

Over the first couple years after U-233 containing U-232 is processed, Th-228 builds up to a nearly constant level, balanced by its own decay. During this time the gamma emissions build up and then stabilize. Thus over a few years a fabricated mass of U-233 can build up significant gamma emissions. A 10 kg sphere of weapons grade U-233 (5 ppm U-232) could be expected to reach 11 millirem/hr at 1 meter after 1 month, 0.11 rem/hr after 1 year, and 0.20 rem/hr after 2 years. Glove-box handling of such components, as is typical of weapons assembly and disassembly work, would quickly create worker safety problems. An annual 5 rem exposure limit would be exceeded with less than 25 hours of assembly work if 2-year old U-233 were used. Even 1 month old material would require limiting assembly duties to less than 10 hours per week.
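Those dose figures track the Th-228 ingrowth math directly. Here is a minimal sketch (the standard two-member Bateman relation, using only the half-lives quoted in the decay chain above; the cross-check of the dose ratios is mine, not from the FAQ):

```python
import math

def th228_relative_activity(t_years, u232_half_life=76.0, th228_half_life=1.913):
    """Th-228 activity grown into initially pure U-232, relative to the
    parent U-232 activity (two-member Bateman equation)."""
    l1 = math.log(2) / u232_half_life   # U-232 decay constant, 1/yr
    l2 = math.log(2) / th228_half_life  # Th-228 decay constant, 1/yr
    return (l2 / (l2 - l1)) * (math.exp(-l1 * t_years) - math.exp(-l2 * t_years))

# The gamma output is proportional to the Th-228 present, so dose ratios
# between material ages should match the quoted 11 mrem/hr -> 0.11 rem/hr -> 0.20 rem/hr:
one_month = th228_relative_activity(1 / 12)
one_year = th228_relative_activity(1.0)
two_years = th228_relative_activity(2.0)
print(round(one_year / one_month, 1))   # ~10x growth from 1 month to 1 year
print(round(two_years / one_month, 1))  # ~17x growth from 1 month to 2 years
```

The quoted 2-year figure (0.20 rem/hr, about 18 times the 1-month value) agrees with this to within the rounding of the quoted numbers.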

In a fully assembled weapon exposures would be reduced by absorption by the tamper, case, and other materials. In a modern light weight design this absorption would be unlikely to achieve more than a factor of 10 attenuation, making exposure to weapons assembled two years previously an occupational safety problem. The beryllium reflectors used in light weight weapons would also add to the background neutron level due to the Be-9 + gamma -> Be-8 + neutron reaction. The U-232 gammas also provide a distinctive signature that can be used to detect and track the weapons from a distance. The heavy tampers used in less sophisticated weapon designs can provide much higher levels of attenuation – a factor of 100 or even 1000.

With deliberately denatured grades of U-233 produced by a thorium fuel cycle (0.5 – 1.0% U-232), very high gamma exposures would result. A 10 kg sphere of this material could be expected to reach 11 rem/hr at 1 meter after 1 month, 110 rem/hr after 1 year, and 200 rem/hr after 2 years. Handling and fabrication of such material would have to be done remotely (this is also true of fuel element fabrication). In an assembled weapon, even if a factor of 1000 attenuation is assumed, close contact of no more than 25 hours/year with such a weapon would be possible and remain within safety standards. This makes the diversion of such material for weapons use extremely undesirable.

The short half-life of U-232 also gives it very high alpha activity. Denatured U-233 containing 1% U-232 content has three times the alpha activity of weapon-grade plutonium, and a correspondingly higher radiotoxicity. This high alpha activity also gives rise to an even more serious neutron emission problem than the gamma/beryllium reaction mentioned above. Alpha particles interact with light element contaminants in the fissile material to produce neutrons. This process is a much less prolific generator of neutrons in uranium metal than the spontaneous fission of the Pu-240 contaminant in plutonium though.

To minimize this problem the presence of light elements (especially beryllium, boron, fluorine, and lithium) must be kept low. This is not really a problem for U-233 used in implosion systems since the neutron background problem is smaller than that of plutonium. For gun-type bombs the required purity level for these elements is on the order of 1 part per million. Although achieving such purity is not a trivial task, it is certainly achievable with standard chemical purification techniques. The ability of the semiconductor industry to prepare silicon in bulk with a purity of better than one part per billion raises the possibility of virtually eliminating neutron emissions by sufficient purification.

U-233 has a spontaneous fission rate of 0.47 fissions/sec-kg. U-232 has a spontaneous fission rate of 720 fissions/sec-kg.

Despite the gamma and neutron emission drawbacks, U-233 is otherwise an excellent primary fissile material. It has a much smaller critical mass than U-235, and its nuclear characteristics are similar to plutonium. The U.S. conducted its first test of a U-233 bomb core in Teapot MET in 1955 and has conducted quite a number of bomb tests using this isotope, although the purpose of these tests is not clear. India is believed to have produced U-233 as part of its weapons research and development, and officially includes U-233 breeding as part of its nuclear power program.

Its specific activity (not counting U-232 contamination) is 9.636 milliCi/g, giving it an alpha activity (and radiotoxicity) about 15% of plutonium. A 1% U-232 content would raise this to 212 milliCi/g.
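That specific activity is just A = λN per gram. A quick sketch from the half-life (the 159,200-year figure is the more recently published half-life value; 162,000 years is the rounder value quoted earlier in the FAQ excerpt):

```python
import math

AVOGADRO = 6.022e23
SECONDS_PER_YEAR = 3.156e7
BQ_PER_CURIE = 3.7e10

def specific_activity_mCi_per_g(half_life_years, atomic_mass_g_per_mol):
    """Specific activity in milliCuries per gram from A = lambda * N."""
    lam = math.log(2) / (half_life_years * SECONDS_PER_YEAR)  # decay constant, 1/s
    atoms_per_gram = AVOGADRO / atomic_mass_g_per_mol
    return lam * atoms_per_gram / BQ_PER_CURIE * 1000.0

print(round(specific_activity_mCi_per_g(159_200, 233), 2))  # ~9.64, matching the quoted figure
print(round(specific_activity_mCi_per_g(162_000, 233), 2))  # ~9.47 with the rounder half-life
```

So the quoted 9.636 mCi/g corresponds to the 159,200-year half-life; the small difference from the 162,000-year value is just the half-life in the denominator.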

The key bits here being that we want to find natural Th that is low in Th-230 if possible, but we also want to include a neutron moderating shell that cuts the neutron energy down to below the 500 KeV level. OK, make sure that the “moderator” between the 2 foot reactor sphere and the ‘breeder’ sphere is effective. Maybe fill it with powdered charcoal briquettes if needed… Not exactly a major technical hurdle…

An Issue Of Timing

Note that the Wiki has the ‘consensus’ that U-233 will be too contaminated by U-232 to be usable.

Weapon material

The first detonation of a nuclear bomb that included U-233, on 15 April 1955.
(picture of folks looking at mushroom cloud deleted – E.M.Smith)

As a potential weapon material pure uranium-233 is more similar to plutonium-239 than uranium-235 in terms of source (bred vs natural), half-life and critical mass, though its critical mass is still about 50% larger than for plutonium-239. The main difference is the unavoidable co-presence of uranium-232 which can make uranium-233 very dangerous to work on and quite easy to detect.

While it is thus possible to use uranium-233 as the fissile material of a nuclear weapon, speculation aside, there is scant publicly available information on this isotope actually having been weaponized:

Note that they leave out the Indian test and the USA MET test… Wonder why… We’ve had U-233 bombs already made and tested.

The United States detonated an experimental device in the 1955 Operation Teapot “MET” test which used a plutonium/U-233 composite pit; its design was based on the plutonium/U-235 pit from the TX-7E, a prototype Mark 7 nuclear bomb design used in the 1951 Operation Buster-Jangle “Easy” test. Although not an outright fizzle, MET’s actual yield of 22 kilotons was sufficiently below the predicted 33 that the information gathered was of limited value.

Oh, here it is. Nice “damning by faint praise” guys…

The Soviet Union detonated its first hydrogen bomb the same year, the RDS-37, which contained a fissile core of U-235 and U-233.
In 1998, as part of its Pokhran-II tests, India detonated an experimental U-233 device of low-yield (0.2 kt) called Shakti V.

Yes, just ignore that “low-yield”… nothing to see here, move along, move along…

Any ideas about WHY a twice proven to go boom “boom stuff” that is very similar to plutonium (though needing a bit more) would be pooh-poohed in such a way? I’m sure it’s not worth looking at… /sarc;>

In reality land, the MET test was a last minute swap of materials on a quest for information, not an optimization, and the India test was part of a series of “limit” exercises all done at once (as they figured they had one shot at getting past “monitoring”) that included a power reactor bomb (supposedly not possible, yet it blows…) and was in some ways a test of ‘minimal to go boom’, so expected to be low-yield.

Bottom line is that the U-233 “boom stuff” works rather well.

The B Reactor and others at the Hanford Site optimized for the production of weapons-grade material have been used to manufacture U-233.[13][14][15][16]
U-232 impurity
Production of 233U (through the irradiation of thorium-232) invariably produces small amounts of uranium-232 as an impurity, because of parasitic (n,2n) reactions on uranium-233 itself, or on protactinium-233:
232Th (n,γ) 233Th (β−) 233Pa (β−) 233U (n,2n) 232U
232Th (n,γ) 233Th (β−) 233Pa (n,2n) 232Pa (β−) 232U
The decay chain of 232U quickly yields strong gamma radiation emitters:
232U (α, 68.9 years)
228Th (α, 1.9 year)
224Ra (α, 3.6 day, 0.24 MeV)
220Rn (α, 55 s, 0.54 MeV)
216Po (α, 0.15 s)
212Pb (β−, 10.64 h)
212Bi (α, 61 m, 0.78 MeV)
208Tl (β−, 3 m, 2.6 MeV)
208Pb (stable)
This makes manual handling in a glove box with only light shielding (as commonly done with plutonium) too hazardous (except possibly in a short period immediately following chemical separation of the uranium from its decay products), instead requiring complex remote manipulation for fuel fabrication.
The hazards are significant even at 5 parts per million. Implosion nuclear weapons require U-232 levels below 50 PPM (above which the U-233 is considered “low grade”; cf. “Standard weapon grade plutonium requires a Pu-240 content of no more than 6.5%.” which is 65000 PPM, and the analogous Pu-238 was produced in levels of 0.5% (5000 PPM) or less). Gun-type fission weapons additionally need low levels (1 ppm range) of light impurities, to keep the neutron generation low.

The Molten-Salt Reactor Experiment (MSRE) used U-233, bred in light water reactors such as Indian Point Energy Center, that was about 220 PPM U-232.

The point? Figuring out how to minimize the U-232 content of your U-233 matters. Rather a lot. So where does it come from and how to minimize it?

Alongside its abundance, one of thorium’s most attractive features is its apparent resistance to nuclear proliferation, compared with uranium. This is because thorium-232, the most commonly found type of thorium, cannot sustain nuclear fission itself. Instead, it has to be broken down through several stages of radioactive decay. This is achieved by bombarding it with neutrons, so that it eventually decays into uranium-233, which can undergo fission.

As a by-product, the process also produces the highly radiotoxic isotope uranium-232. Because of this, producing uranium-233 from thorium requires very careful handling, remote techniques and heavily-shielded containment chambers. That implies the use of facilities large enough to be monitored.

The paper suggests that this obstacle to developing uranium-233 from thorium could, in theory, be circumvented. The researchers point out that thorium’s decay is a four-stage process: isotopically pure thorium-232 breaks down into thorium-233. After 22 minutes, this decays into protactinium-233. And after 27 days, it is this substance which decays into uranium-233, capable of undergoing nuclear fission.

Ashley and colleagues note from previously existing literature that protactinium-233 can be chemically separated from irradiated thorium. Once this has happened, the protactinium will decay into pure uranium-233 on its own, with little radiotoxic by-product.

“The problem is that the neutron irradiation of thorium-232 could take place in a small facility,” Ashley said. “It could happen in a research reactor, of which there are about 500 worldwide, which may make it difficult to monitor.”

The researchers note that from an early small-scale experiment to separate protactinium-233, approximately 200g of thorium metal could produce 1g of protactinium-233 (and therefore the same amount of uranium-233) if exposed to neutrons at the levels typically found in power reactors for a month. This means that 1.6 tonnes of thorium metal would be needed to produce 8kg of uranium-233. They also point out that protactinium separation already happens, as part of other chemical processes.

At this point, what is needed is an awareness of the timing of isotope conversions. Th to Pa to U. What are the timings?

From above, we see that Th-232 goes through a decay path to U-233 that has a ‘long pause’ at the Pa-233 to U-233 transition.

Th-232 + n -> Th-233
Th-233 -> (22.2 min, beta) -> Pa-233
Pa-233 -> (27.0 day, beta) -> U-233

Notice that Pa-233 has a 27 day time for conversion to U-233? So you can suck out the Pa early, wait a week or three, and then separate the U-233 as a relatively clean product. (Yes, I’m leaving a detail here a bit opaque… but the astute student will see the timing issues.)
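The timing being pointed at is plain exponential decay. A minimal sketch of how much of a chemically isolated Pa-233 batch has become U-233 after a given wait, using only the 27.0-day half-life quoted above:

```python
import math

PA233_HALF_LIFE_DAYS = 27.0  # from the breeding chain quoted above

def fraction_converted(days, half_life=PA233_HALF_LIFE_DAYS):
    """Fraction of an isolated Pa-233 sample that has beta-decayed
    to U-233 after `days`: 1 - exp(-lambda * t)."""
    return 1.0 - math.exp(-math.log(2) * days / half_life)

# Conversion after a few waiting periods, in weeks:
for weeks in (1, 3, 8, 16):
    print(weeks, "weeks:", round(fraction_converted(7 * weeks), 2))
```

One half-life (27 days) converts exactly half; it takes a few months to convert most of a batch.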

In short, it is very ‘doable’ to make a U-233 “boom stuff” device via a thorium cycle and with only chemical processes. It takes a unit about the size of an 18 wheeler truck, and it does not need a lot of things that draw attention, other than a few gallons of heavy water at the very start.

Frankly, this scares the hell out of me and I’d love to see a proof that this pathway is completely bogus.

Subscribe to feed

Posted in Nukes, Science Bits | Tagged , , , | 1 Comment

One Tenth Bar, Short Waves, and Solar Spectral Change

I’ve seen these issues come around before, but I don’t remember seeing anyone “connect the dots”.

First, a reminder of my earlier Tropopause Rules where we saw that below the tropopause CO2 does nearly nothing, but above the tropopause, in the stratosphere, it cools via radiation.

In many ways the paper about a 0.1 bar tropopause is just saying the same thing, but for a generalized ‘all atmospheres with SW absorbing gases’. This Google Books entry lists those shortwave absorbing gases for Earth as being H2O (water vapor), O2 and O3 (Oxygen and Ozone), NO2, and NO3, and leaves out SO2 as it “almost corresponds with the strongest Ozone band”. So basically, water vapor and life derived oxygen and nitrite / nitrate.

Note that CO2 is not part of this block of gases.

In this graph, you can see that the CO2 band only radiates in the stratosphere. Below the dashed line of the tropopause it does nothing radiatively.

Also notice that Ozone is a ‘hot spot’ in the stratosphere as it is absorbing a lot of UV light, and down in the troposphere it is water vapor that’s picking up the heat (and moving it skyward as water vapor is lighter than air and rises to make clouds and rain).

Stratosphere radiation by species

So what’s new here, now? First off, this paper talks about that 0.1 bar (more or less) constant tropopause height, but is paywalled so the abstract is all we get…


Common 0.1 bar tropopause in thick atmospheres set by pressure-dependent infrared transparency

T. D. Robinson & D. C. Catling

Nature Geoscience 7, 12–15 (2014) doi:10.1038/ngeo2020
Received 28 March 2013 Accepted 29 October 2013 Published online 08 December 2013

A minimum atmospheric temperature, or tropopause, occurs at a pressure of around 0.1 bar in the atmospheres of Earth, Titan, Jupiter, Saturn, Uranus and Neptune, despite great differences in atmospheric composition, gravity, internal heat and sunlight. In all of these bodies, the tropopause separates a stratosphere with a temperature profile that is controlled by the absorption of short-wave solar radiation, from a region below characterized by convection, weather and clouds. However, it is not obvious why the tropopause occurs at the specific pressure near 0.1 bar. Here we use a simple, physically based model to demonstrate that, at atmospheric pressures lower than 0.1 bar, transparency to thermal radiation allows short-wave heating to dominate, creating a stratosphere. At higher pressures, atmospheres become opaque to thermal radiation, causing temperatures to increase with depth and convection to ensue. A common dependence of infrared opacity on pressure, arising from the shared physics of molecular absorption, sets the 0.1 bar tropopause. We reason that a tropopause at a pressure of approximately 0.1 bar is characteristic of many thick atmospheres, including exoplanets and exomoons in our galaxy and beyond. Judicious use of this rule could help constrain the atmospheric structure, and thus the surface environments and habitability, of exoplanets.

Note that Mars is out as the atmosphere is too thin and it lacks SW absorbing gases. Venus is out due to other reasons. But for many other planets, the tropopause will be near 0.1 bar, or at about 1/10th of an Earth atmosphere of pressure. That also means more than 90% of all the Earth atmosphere is BELOW the tropopause, in the zone where infrared radiation is essentially meaningless as it does not travel far. I’ve bolded the bit in the abstract that points out that “at higher pressures, atmospheres become opaque to thermal radiation”. Memorize that. Show it to all your friends (and the occasional enemy ;-)

So below 1/10 bar, IR doesn’t get the job done and it is all about convection. I’d also suggest that some details will depend on evaporation, conduction, precipitation… but they are secondary. With less evaporation, you would just get more convection until the heat is moved back up. Which is why we ignore the day at our peril.

So right off the bat we can say that for 90% of the atmosphere, CO2 “back radiation” is entirely a non-starter. And in the 10% where it does radiate, it radiates to space, not the surface, as the top of the tropopause stops the shortwave radiation “right quick”. It’s a one way diode for heat to go up the convection escalator and then radiate only to space.
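That 90% figure follows from hydrostatics: pressure at a level is the weight of the overlying air, so the mass fraction above the 0.1 bar level is just 0.1/1.0. A quick sketch, which also estimates where 0.1 bar sits on Earth using an assumed ~7.6 km scale height (my number for illustration, not from the paper):

```python
import math

SURFACE_PRESSURE_BAR = 1.0
TROPOPAUSE_PRESSURE_BAR = 0.1
SCALE_HEIGHT_KM = 7.6  # assumed rough Earth scale height (illustration only)

# Pressure at a level equals the weight of air above it, so:
mass_fraction_below = 1.0 - TROPOPAUSE_PRESSURE_BAR / SURFACE_PRESSURE_BAR
print(mass_fraction_below)  # 0.9 -> 90% of the atmosphere lies below 0.1 bar

# Isothermal barometric estimate of the altitude of the 0.1 bar level:
altitude_km = SCALE_HEIGHT_KM * math.log(SURFACE_PRESSURE_BAR / TROPOPAUSE_PRESSURE_BAR)
print(round(altitude_km, 1))  # ~17.5 km, near the tropical tropopause
```

Which lands right about where the real tropical tropopause sits, so the 0.1 bar rule and everyday experience of Earth's atmosphere line up.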

The following does a very good job of summarizing the paper.

In terms of comparative planetology, we have (Figure 2) very different situations: while some atmospheres are dominated by CO2 or N2, others are mainly composed of H2 and He, the lightest gases available, with a substantially different chemistry. Distances from the Sun are also extremely variable, causing a broad range of temperatures from the hot Venus to the cold Uranus and Neptune. However, it is striking to note that all of them (well, all except Venus, for reasons yet to be discussed) display a tropopause at very much the same level. But why does this happen?

This is the intriguing question studied by Robinson and Catling in their work. To do so, they developed a relatively simple 1-D model to account for the temperature structure of the atmospheres under very different conditions. This model accounts for the solar and infrared radiation absorption but also for diffusion and convection. It is known that the infrared opacity is a power law of pressure; the exact coefficient depends on which kind of absorption is dominant at each atmospheric level. In the higher atmosphere, absorption is dominated by Doppler broadening of the lines produced by the atmospheric constituents (yielding a power of 1), while below the middle stratosphere the absorption is mainly pressure-broadened and collision-induced, resulting in an infrared opacity proportional to the square of pressure, regardless of which particular species is absorbing radiation. This model is able to give an analytical expression for temperature and, setting the derivative of this function to zero, it is fairly simple to find not only the minimum of temperature (namely the tropopause) but also the conditions under which such a minimum will develop.

I note in passing that a power law with a squared function in it, starting at 1/10th of the total, is going to be very effective rather quickly at stopping IR from penetrating back down against convective rise.
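To make that concrete, here's a toy calculation (my normalization to the tropopause level for illustration; the paper's actual model has fitted coefficients): with infrared optical depth scaling as the square of pressure, opacity relative to the 0.1 bar level grows a hundredfold by the time you reach the 1 bar surface.

```python
def relative_ir_opacity(p_bar, p_ref_bar=0.1, exponent=2.0):
    """IR optical depth relative to the tropopause reference level,
    assuming the pressure-squared scaling described above."""
    return (p_bar / p_ref_bar) ** exponent

# Opacity growth descending from the tropopause toward the surface:
for p in (0.1, 0.2, 0.5, 1.0):
    print(f"{p} bar: {relative_ir_opacity(p):.0f}x")
```

So even a short way below the tropopause the atmosphere is already several times more opaque to IR, which is the point about the squared power law being "very effective rather quickly".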

He does (kind of) explain the Venus differences:

Venus has been already stated as lacking a proper tropopause. This is true for the global mean, but not for some latitudes of the planet with a stratospheric temperature inversion, where a tropopause can indeed be found. So Venus is only an exception in the global mean.

That “over averaging” thing again. Averaging out the parts where it still holds.

And I found this bit particularly interesting:

In the near future, we expect to retrieve spectra of exoplanet atmospheres; assuming a 0.1 bar pressure tropopause can help to determine the surface temperature in a much more accurate way and therefore to determine the feasibility of finding liquid water on them.

So with just the spectra of the atmosphere, some idea of the level of stellar radiation, and this formula, we can figure out the surface temperature of the planet and the water phase as liquid, solid, or gas. But per the Global Warmers, just not on Earth? Eh?…

Now, just a quick note before the next paper / link. It references SED, but does not translate that to words. I believe it means Spectral Energy Distribution. Our sun has just gone a bit quiet: TSI didn’t drop much, but the SED shifted dramatically away from UV, so I think this matters.

Now, on to a bit of NASA speak…

This includes the graphs from the 0.1 bar paper, and also comments on the usability of this method for finding potentially habitable worlds. This is an excerpt of their comments on that paper:

This model was recently used to explain why Earth, Jupiter, Saturn, Titan, Uranus, and Neptune all share a common tropopause temperature minimum in their atmosphere at 0.1 bar pressure (Robinson and Catling 2013). The explanation lies in the physics of infrared radiative transport, and should apply to countless worlds outside the Solar System. Furthermore, the assumption of a 0.1 bar tropopause can be used to help constrain surface pressure or surface temperature on an exoplanet, the combination of which determine habitability.

So we have a rule that applies to at least 6 major bodies of our solar system, that can be applied to planets anywhere in the galaxy, and can constrain surface pressure or surface temperature. And we know that the surface pressure on Earth is constrained to 1 Bar, so it is not the variable. I think that means this formula is going to be constraining surface temperature when applied to Earth… Just saying…

They go into albedo aspects a bit more than I would expect, and cover some other turf too, but one particular paragraph a ways into it caught my eye.

Shields, Meadows, Bitz, Pierrehumbert and collaborators used a hierarchy of climate models to explore the effect of the interaction between the parent star’s radiation and the planet’s wavelength dependent reflectivity (from surface ice and snow, and atmospheric absorption) on planetary climate. Their results indicate that planets orbiting cooler, redder (M-dwarf) stars are less sensitive to decreases in stellar insolation (as shown in Figure 2 below), and episodes of low-latitude glaciation may be less likely to occur on M-dwarf planets in the habitable zone than on planets orbiting stars with high visible and near-UV output. This is due to absorption of near-infrared radiation by lower-albedo surface ice and atmospheric absorption by CO2 and water-vapor. However, at the outer edge of the habitable zone, high levels of CO2 mask the ice-albedo effect, leaving the traditional limit of the outer edge of the HZ unaffected by the spectral dependence of ice and snow albedo (Shields et al., 2013). Ongoing simulations also indicate that the amount of increased stellar flux required to melt a planet out of a snowball state is highly sensitive to host star SED. We find that a distant frozen M-dwarf planet orbiting beyond the outer edge of its star’s habitable zone without a continuously active carbon cycle is likely to melt more easily out of global ice cover as its host star ages and its luminosity increases (Shields et al., in prep).

I’ve bolded a bit.

To me, this seems to indicate that SED matters rather a fair amount. Shifts of SED can shift the habitable zone of the orbit space, and it can change how hard or easy it is to exit a snowball state. We, not being around a redder dwarf, will be a bit more sensitive to SED as it interacts with surface albedo such as ice and snow.

As I read this it is focused on red dwarfs, so I’m less clear on how this applies to our yellow medium sized star with a load of UV. But it is pretty clear that shifts of UV vs IR matter, potentially a lot, and interacts with ice cover and water vapor and oxygen content. It also seems to say that high levels of CO2 (though it doesn’t say what those are) can mask the ice-albedo effect. So, since IR is absorbed by the ice, having more CO2 means the ice is less likely to melt? Do I have that right?

IMHO, this is going to take some more thinking. It is also very obvious that this is NOT in the climate models. “Settled science” my SED…


Posted in AGW Science and Background, Science Bits | Tagged , , , , , , | 10 Comments

Seeking Mertonian Norms in Global Warming Studies

There is an interesting set of understandings of how Science ought to be conducted. These are the basis of the psychology and ethics of Science as I have learned it. They seem, to me, to be sorely lacking in what passes for Science in the world of “climate science”.

The two key words are “Mertonian” and “cudos”.

Mertonian norms

CUDOS is an acronym used to denote principles that should guide good scientific research. According to the CUDOS principles, the scientific ethos should be governed by Communalism, Universalism, Disinterestedness, Originality and Skepticism.

Notice in particular that “Skepticism”… which I’ve bolded… It kind of makes using “skeptic” as a derisive term a bit of an indicator that someone is out of touch with how Science is expected to work…

CUDOS is based on the Mertonian norms introduced in 1942 by Robert K. Merton. Merton described “four sets of institutional imperatives [comprising] the ethos of modern science”: “universalism, communism, disinterestedness, and organized skepticism.” These four terms could already be arranged to form CUDOS, but “originality” was not part of Merton’s list.

In contemporary academic debate the modified definition outlined below is the most widely used (e.g. Ziman 2000).

Communalism – all scientists should have equal access to scientific goods (intellectual property), and there should be a sense of common ownership in order to promote collective collaboration; secrecy is the opposite of this norm.

Well. All those “not sharing data or code” moments from Jones, UEA, et al. come to mind. Not meeting even the first criterion. I’d also add a wet raspberry at all those paywalled publications…

Universalism – all scientists can contribute to science regardless of race, nationality, culture, or gender.

And here I’d add “level of acceptance by the Established Illuminati of the field”. Though perhaps that fits under “culture”. I’m definitely from a different culture than they are. That is, amateurs can do Science just fine, thank you very much. Universalism, to be really universal, IMHO, must also extend to those folks not degreed or published in the field.

from The American Heritage® Dictionary of the English Language, 4th Edition
n. A person who engages in an art, science, study, or athletic activity as a pastime rather than as a profession.

From the first entry of the first definition of the first listing, it specifically calls out science.

Kind of puts all those “appeal to authority” arguments from the Warmers in perspective as non-Mertonian, doesn’t it?

Back at the original link:

Disinterestedness – scientists are supposed to act for the benefit of a common scientific enterprise, rather than for personal gain.

OMG! Given the $Billions being sucked out of granting institutions (often as political patronage via government “guidance”) and given the extremely vocal and very non-disinterested advocacy shouted at our faces, this one is just so “blown” it is beyond description. Hardly any “disinterested” characters at all on the Warmers Bandwagon.

Organized Skepticism – scientific claims must be exposed to critical scrutiny before being accepted.

Well, the only “Organized Skepticism” comes from outside the Church Of Global Warming Dogma and from a very unorganized rabble of folks with essentially no funding. Despite the shouted propaganda claims that Skeptics are funded by “big oil”, the reality is that far more oil money flows to The Warming Cause than ever made it to Skeptics. By and large the Skeptics come from completely non-funded or self-funded areas, or get at most an occasional small grant or two from established granting agencies to their established schools (Dr. Soon comes to mind). For which they are promptly vilified and attacked.

The funding arena is horridly lopsided toward The Warming Cause and the methods used to keep it that way are strictly Saul Alinsky.

I’d count this as one of THE most strongly violated of the Mertonian Norms by the Warmers.

The Wiki includes a list of “counter-norms” with a heading saying they might be removed, so I’m quoting them here just to preserve them. To me they are pretty obvious.

As a balance to the Mertonian norms, the following counter-norms are often discussed

Solitariness (secrecy, miserism) is often used to keep findings secret in order to be able to claim patent rights, and in order to ensure primacy when published.

Particularism is the assertion that whilst in theory there are no boundaries to people contributing to the body of knowledge, in practice this is a real issue, particularly when you consider the ratio of researchers in rich countries compared with those in poor countries, but this can be extended to other forms of diversity. In addition, scientists do judge contributions to science by their personal knowledge of the researcher.

Interestedness arises because scientists have genuine interests at stake in the reception of their research. Well received papers can have good prospects for their careers, whereas, conversely, being discredited can undermine the reception of future publications.

Dogmatism because careers are built upon a particular premise (theory) being true which creates a paradox when it comes to asserting scientific explanations.

Also, an interesting, if a bit long, look at the norms in practice and how to extend them to things like Administration is here:

I suspect that the lack of these norms in Administration (and especially in large Grant Bestowing Organizations) is a major problem.

So What?

So what’s a person to do? I don’t have a good answer. I do think this needs to be put, directly, into the face of the most vocal on the Warmers side. They ought to be challenged directly on conformance to Mertonian Norms and if they fail, no kudos for lack of CUDOS.

Will it happen? Not likely. Government driven “research” is about social control. Most large “charitable organizations” have been widely parasitised by folks with a Progressive agenda and a desire to push Global Warming for social control purposes (as evidenced by the specific behaviours of the Agenda 21 folks in pushing for widespread action to take over organs of control, even local government boards, to push The Agenda). So as long as there is any chance at all that their goal of control can be reached, the push will stay in place, and Science and Mertonian Norms be damned.

Only when it is absolutely clear that this is a busted flush and the hunt for the guilty begins will there be a backing off. (Though do be aware that Progressives, Socialists, Communists, etc. never quit. They just hide for a while and try again in different ways and different scales.) So don’t expect The Club Of Rome, Maurice Strong, and others to just quit. They have specifically pushed for a $200 Billion per year take from developed nations to be spread around via their Cronies. You don’t just walk away from that kind of money, science be damned…

There has been a many-decades (nearly ‘generational’) effort to shift funding agencies and popular culture toward their goals. It will likely take just as long and just as effective an effort to move things back to a Mertonian world. Given the hundreds of $Billions on their side at the moment, and given the “top cover” they enjoy (see the whitewash of Climategate, along with other “caught in the cookie jar” moments that get punished with the reward of large new grants…), it is a very difficult thing to reverse. Frankly, I suspect a Year Without A Summer and a Dalton Minimum like event is about the only hope.

But I’m not all dismal on it. There is evidence that the Average Joe and Jane are not buying it. Some whole countries have junked the Global Warming mantra (Australia and Canada look to be well on their way) while others have never drunk the Kool-Aid. China, India, and Pakistan are building coal power plants at a tremendous rate, while Russia is eyeball deep in gas and oil production. Yeah, Russia was willing to play along with the show as long as they got buckets of subsidy money, but they are not buying it. So that puts most of Asia in the not-buying-it column. Along with Canada, Australia, and the US population being marginal on the whole thing, there is some hope. (South America and Africa just don’t matter much to the global economy, so forgive me if I don’t focus on them much here.)

Mostly it is just the EU, Britain, and the Progressive-Liberal-Socialist-NWO side of the USA elite that are firmly bought in. Average working stiffs not so much. A popular revolution at the ballot box is not out of the question. (My sense of things is that most folks are just waiting to run out the clock on Obama and get someone with an America First attitude swapped in.)

I do have one small idea to add: perhaps we can get the Mertonian Norms remembered and included in the expectations for any grant money handed out. Simply advocate that those government slush funds like the NSF, who hand out $Millions, include a stipulation that the recipients follow these norms. At least it might remind them of what Real Science is supposed to be like.

Also, when someone is busy slandering the Skeptics (or worse, calling for their incarceration or death – yeah, it happened) it would be worth pointing out to them that Skepticism is one of the Mertonian Norms – then, when they don’t know what those are, enlighten them ;-)


Posted in AGW Science and Background, Political Current Events, Science Bits | 1 Comment

Rise Of The Mesh-ings

A couple of years back, when Egypt was in turmoil during the ouster of Mubarak (strange that we have to name which time it was in turmoil…) The Powers That Be shut down internet access so as to halt social media, email, Skype, and all from showing what was happening. At that time I made a posting about ways to make an “ersatz network”, mostly the longhand way: building routing points, DNS servers, and such, getting people connected on an intranet of sorts, then having someone with a satellite phone or packet radio (or other method) link that into the broader internet if needed.

I’d even given a bit of a nod to “mesh networks” as a future thing.

There is mention made of self-forming webs in one of the pages. That’s a more complicated extension: make it so the devices discover each other and ‘link hands’, so to speak. Then, if there are enough of them, the whole area becomes one large web of communications. (The problem there is the lack of ID. If 2000 folks are all “anon”, how do you find your spouse?) But with ‘not too much work’ that can be fixed too (or may already have been fixed by someone…). For example, take a blog where folks can post under any name not already taken. Pretty quickly folks could paste up messages saying “John Q. Public looking for Jane, need lunch in 10 minutes.”
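That first-come, first-served name idea is simple enough to sketch. Here is a minimal toy version (the class, names, and node IDs are all my own invention for illustration, not any real mesh software’s API):

```python
# Minimal sketch of a first-come, first-served name registry for an
# otherwise-anonymous mesh: the first node to claim a name keeps it,
# later claimants are told to pick another.

class NameRegistry:
    def __init__(self):
        self._claims = {}  # name -> node id that claimed it first

    def claim(self, name, node_id):
        """Return True if `name` was free (or already ours), else False."""
        owner = self._claims.setdefault(name, node_id)
        return owner == node_id

registry = NameRegistry()
assert registry.claim("John Q. Public", "node-a") is True
assert registry.claim("John Q. Public", "node-b") is False  # already taken
assert registry.claim("Jane", "node-b") is True
```

In a real distributed mesh you would need consensus on who claimed first, which is the hard part; this just shows the user-visible behaviour.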

PirateBox DIY

PirateBox can be configured to run on many devices, including wireless routers, single-board computers, laptops, and mobile phones. Key hardware platforms include the TP-Link MR3020 and the Raspberry Pi, both of which start at around US$35.

PirateBox will potentially run on most OpenWrt compatible routers with USB storage. Check out this tutorial and be sure to visit the forum for support and more info.

OpenWrt with Mesh
Thanks to lead PirateBox developer Matthias Strubel, PirateBox can now be configured to create wireless mesh networks using Alexandre Dulaunoy’s Forban. This feature is still in testing – for more info, check out this forum post.

So those distributed mesh self organizing bits are being worked on…

The “how to” do it on a Raspberry Pi mostly has a download of a prebuilt image. I’d want to know what was in it before using it for anything where badges and guns were involved (or checking accounts and money…)

Remember that a “mesh network” is one that mostly self-forms between a whole bunch of gadgets; they all share over their individual connections to the nearest ones and, via the mesh of connections, with the further nodes that are out of range for a single hop.
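The multi-hop relay idea is easy to caricature in a few lines. A toy flood of a message across a mesh, with invented node names and topology (real mesh stacks add routing, acknowledgements, and much more):

```python
# Toy flood-relay over a mesh: each node forwards a message to its
# neighbours, a seen-set stops loops, and a TTL bounds the hop count.

from collections import deque

mesh = {  # node -> radio neighbours within single-hop range
    "A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"],
}

def flood(origin, ttl):
    """Return the set of nodes the message reaches within `ttl` hops."""
    reached = {origin}
    frontier = deque([(origin, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if hops == ttl:
            continue  # TTL expired; don't relay further
        for peer in mesh[node]:
            if peer not in reached:
                reached.add(peer)
                frontier.append((peer, hops + 1))
    return reached

# "A" and "D" are three hops apart, far beyond single-radio range:
assert flood("A", ttl=1) == {"A", "B"}
assert flood("A", ttl=3) == {"A", "B", "C", "D"}
```

The point is that nodes out of direct radio range still get the message, which is exactly the trick the apps below exploit.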

Well that “being worked on” has become “done deal”. And in a far more elegant and complete way than I’d expected. I’d figured that the Libertarian mindset of the Geek World would take umbrage at having THEIR communications screwed around with by mere governments…. and I was right. Furthermore, it wasn’t just the Linux world. Android got in on the act (though as a derivative of Linux the only real surprise there is that Google is willing to let go of that much control and monitoring…) But in any case, someone would do it. So both Linux and Android. And….

Macintosh / Apple / iStuff too. Of course, you need a newer iThingy to get it to work. But a small price (and self healing over a couple of years anyway).

How an Under-Appreciated iOS 7 Feature Will Change the World

Mike Elgan (3:07 pm PDT, Mar 22nd 2014)

A curious download hit Apple’s app store this week: a messaging app called FireChat.

It’s a new kind of app because it uses an iOS feature unavailable until version 7: the Multipeer Connectivity Framework. The app was developed by the crowdsourced connectivity provider Open Garden and this is their first iOS app.

The Multipeer Connectivity Framework enables users to flexibly use WiFi and Bluetooth peer-to-peer connections to chat and share photos even without an Internet connection. Big deal, right?

But here’s the really big deal — it can enable two users to chat not only without an Internet connection, but also when they are far beyond WiFi and Bluetooth range from each other — connected with a chain of peer-to-peer users between one user and a far-away Internet connection.

It’s called wireless mesh networking. And Apple has mainstreamed it in iOS 7. It’s going to change everything. Here’s why.

It can also extend an Internet connection to a place where none exists — for example, to a hotel basement, cave or to rural areas where cell tower connections are non-existent.

It does that through the mesh networking capability inherent in the Multipeer Connectivity Framework. With multiple users in the area, FireChat can relay messages just like the internet does, from node to node (phone to phone).

(Apple’s AirDrop works in the same way, by the way.)

Share content with AirDrop from your iPhone, iPad, or iPod touch
With AirDrop, you can share photos, videos, websites, locations, and more with people nearby with an Apple device.

What you need
To share content with AirDrop, both people need one of these devices using iOS 7 or later, or a Mac with OS X Yosemite:
iPhone 5 or later
iPad (4th generation or later)
iPad mini
iPod touch (5th generation)
You also need to turn on Wi-Fi and Bluetooth. If you want to share with your contacts, sign in to your iCloud account.
Learn more about using AirDrop to share with people using a Mac with Yosemite.

So go ahead, evil dictator, cut the local WiFi and even the internet trunk. We’ll just share our short texts, emails, files, photos, movies, you name it directly with each other, no central node need apply. And when any node has a connection to the rest of the world, we ALL have a connection to the rest of the world. (Though how fast a connection is unclear and likely to be slow, but workable for getting the important bits of news out or in.)

While sharing with just recognized contacts requires an iCloud login, sharing with anyone does not. In an emergency, opening up a connection to “All” lets things just flow, but with a loss of privacy.

Were I in the Media Business, I’d be equipping all my remote reporting teams with an iPhone, iPad, Macintosh, etc. along with a satellite internet feed gizmo. Then, when they are reporting from a ‘hot zone’, they can just ‘mesh’ together and to their uplink. If things get really dodgy, then they can open things to “all” and let ‘er rip to an entire demonstration worth of folks.

(there’s more at both links)



Mobile phones normally can’t be used when cellular networks fail, for example during a disaster. This means that millions of vulnerable people around the world are deprived of the ability to communicate, when they need it most.

We have spent the past four years working with the New Zealand Red Cross to create a solution. We call it the Serval Mesh, and it is free software that allows smart-phones to communicate, even in the face of catastrophic failure of cellular networks.

It works by using your phone’s Wi-Fi to communicate with other phones on the same network. Or even by forming impromptu networks consisting only of mobile phones. Mesh communications is an appropriate technology for complementing cellular networks. Think of it like two-way radio or CB radio that has been propelled into the 21st century. For long-range communications you will still need to make use of cellular or fixed telephone networks or the internet.

This software allows you to easily make private phone calls, send secure text messages and share files in caves, in subways, in the Outback, in Australia or Africa, in Europe or the United States — even when cellular networks fail or are unavailable.

You can also keep using your existing phone number on the mesh, which is really important in a disaster when people are trying to get back in contact with each other.

Our software:

* Is completely open and open-source; free for all
* Can be carried and activated in seconds by those who need it when it is needed
* Is carrier independent
* Can be installed during an emergency from only one phone
* Its distributed nature makes the network resilient
* Can use your existing phone number
* Encrypts mesh phone calls and mesh text messages by default
* Can distribute pictures, videos and any other files
What’s New
* MeshMS text messages display delivery notifications and timestamps, and are only stored encrypted, but are not backwards compatible
* Vastly simplified the connection screen
* Added phone dialler for tablet users
* Our routing protocol has been improved
* The Peer List is more responsive to changing network conditions
* Reduced power consumption in a number of use cases
For a complete list of changes, see

Want an Android to share with a Mac or Ithingy?

Mesh networking from ios to android

Is there any framework that connects an iOS device to an Android device using a mesh network?

There are apps like FireChat that enable users to speak to each other using only Bluetooth and WiFi (via Apple’s Multipeer Connectivity framework). But is there any way to connect iOS devices to Android devices using multipeer connectivity of some kind?

I’m trying to build an app like FireChat to be used by some friends here in college, but it needs to connect iOS devices to Android devices. If there were only iOS devices, the Multipeer Connectivity framework would be just fine, but in this case I don’t know which framework to use in order to connect all these devices.

I believe the Open Garden SDK may be able to meet your needs.

Basically it is an SDK for multipeer communication, by the creators of Firechat. And they claim that it is the same technology that Firechat uses, so I believe it will work with Bluetooth.

They also claim it works on Android and iOS, and as FireChat now works on Android too, I would believe that it is true.

Sorry for all the hypotheticals, but I have not been granted access to it yet, so I can’t confirm any of these facts.

Looks like the needed bits exist and folks are working on the implementations.

Also expect multiple implementations from multiple sources for meshing, so that if any one gets broken, others will step up.

By Steven Max Patterson

Android phones are connecting without carrier networks
A new prototype backup network connects Android phones through a mesh network established with the phones’ Wi-Fi chips, which can come in handy during emergency situations.

Network World | Feb 12, 2013 10:12 AM PT

While the cellphone network in Haiti survived the devastating earthquake in 2010, the added load of international aid workers who arrived in the aftermath caused it to crash. Josh Thomas and Jeff Robble, both working at Mitre, saw this problem and created a working prototype backup network using only the Wi-Fi chips on Android smartphones. This capability won’t be shipped on new mobile phones anytime soon, but it is a really interesting open innovation project to understand and follow, and for some an Android project to which they might contribute.

The Smart Phone Ad-Hoc Networks (SPAN) project reconfigures the onboard Wi-Fi chip of a smartphone to act as a Wi-Fi router with other nearby similarly configured smartphones, creating an ad-hoc mesh network. These smartphones can then communicate with one another without an operational carrier network. SPAN intercepts all communications at the Global Handset Proxy (see figure at right) so applications such as VoIP, Twitter, email etc., work normally.

The source from the Linux Wireless Extension API was merged into the Android kernel source and compiled. The modified version of Android was used to root specific models of Android smartphones to expose and harness the ad-hoc routing features of the onboard Wi-Fi chip to enable this intercept.

It is really a framework for further research to refine how to build the special case of an ad-hoc mesh network. SPAN’s routing module is designed to be plug-and-play so it can be easily replaced. Researchers and developers interested in experimenting with new routing protocols save months of man-hours needed to build the entire app by using the SPAN framework.

The current version can be toggled between widely adopted routing protocols OLSRd and Dijkstra to test the differences in the performance of network discovery and routing. In testing SPAN, the limits of these routing protocols were discovered. Network discovery floods a network with “hello” packets so a routing table can be built. This type of discovery works well in static networks because the amount of bandwidth used for discovery is limited to infrequent changes in the network.
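That discover-then-route pattern (hello packets build a neighbour table, then a shortest-path algorithm turns the tables into routes) can be sketched in miniature. Node names and link costs here are my own invention; real OLSR is far more involved:

```python
# Miniature of discovery + shortest-path routing: "hello" exchanges
# yield each node's neighbour table with link costs; Dijkstra then
# turns those tables into routes.

import heapq

links = {  # neighbour tables as discovery might report them
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 1, "D": 5},
    "C": {"A": 4, "B": 1, "D": 1},
    "D": {"B": 5, "C": 1},
}

def shortest_path(src, dst):
    """Classic Dijkstra: return (total cost, node list) from src to dst."""
    heap = [(0, src, [src])]
    done = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in done:
            continue
        done.add(node)
        for peer, link_cost in links[node].items():
            if peer not in done:
                heapq.heappush(heap, (cost + link_cost, peer, path + [peer]))
    return None

# The direct A-C link is poor (cost 4), so traffic routes around it:
assert shortest_path("A", "D") == (3, ["A", "B", "C", "D"])
```

The bandwidth problem the article describes shows up when `links` changes constantly: every change means another round of hello flooding before routes are valid again.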

Some background on that Serval app up above:

Researchers enable mesh WiFi networking for Android smartphones
Australian researchers have created a system that allows VoIP calls to be …

by Casey Johnston – Jan 31, 2011 9:30am PST

An Australian research group from Flinders University has found a way to apply WiFi mesh networking onto the Android operating system, allowing phones to act as access points over radio waves to transmit voice calls as data. While the system currently only works between phones relatively close together, the researchers hope the use of transmitters will extend the service to remote areas for emergency use.

The system, named Serval, can relay VoIP calls between phones using their WiFi networking. Individual phones can also act as relay points, and theoretically should be able to bridge together a phone in a remote area with no service to one with access to the cellular network, where the call can finally be relayed to its intended recipient.

In its present state, Serval can only connect between phones that are no more than a few hundred meters away from each other, and the call quality is horrendous. But its creators say that coverage could be extended in areas with no reception by installing transmitter boxes that could pass along the call, which would be good enough for an emergency situation.

“A few hundred meters” is a pretty good distance, especially in a crowd. I note in passing that folks are already thinking about repeater boxes. Linux in a lunch pail with a big antenna and you have a mobile “hot spot / mesh node” for greater distance and clarity. I also note that sending text and pictures is not time-dependent, so the voice quality problem from packet delays does not apply to them.
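Because text and pictures are not time-dependent, a relay can simply hold them until the recipient wanders into range. A toy store-and-forward sketch (class and names are mine, for illustration only):

```python
# Sketch of store-and-forward delivery: a relay node queues messages
# for peers it can't currently reach and hands them over whenever the
# peer comes into radio range.

from collections import defaultdict

class RelayNode:
    def __init__(self):
        self._pending = defaultdict(list)  # recipient -> queued messages
        self._in_range = set()

    def send(self, recipient, message):
        if recipient in self._in_range:
            return [message]  # deliver immediately
        self._pending[recipient].append(message)  # hold for later
        return []

    def peer_appeared(self, peer):
        """Peer came into radio range: flush anything queued for it."""
        self._in_range.add(peer)
        return self._pending.pop(peer, [])

relay = RelayNode()
assert relay.send("jane", "photo.jpg") == []        # jane out of range
assert relay.peer_appeared("jane") == ["photo.jpg"]  # delivered on arrival
```

Voice can’t tolerate that kind of delay, which is why the call quality suffers while file and message delivery works fine.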

And, of course, you can do it with Linux boxes / tablets / phones / cards / {whatever}…

Setting up a Linux-based Open-Mesh Wireless Network, Part 1

Hardware and Software
May 26, 2009
By Eric Geier

Mesh networks are a type of wireless network. As you’ll discover, mesh networking is great for blanketing Wi-Fi in larger areas. They are especially useful in places where the environment changes frequently, such as people and walls moving around in malls, trees and buildings growing around an apartment complex, boats moving around the docks, and trucks coming in and out of stops. Additionally, they are perfect for locations and applications where it’s hard to run network cabling.

I would also add that they are really great for letting a cluster of robots share computes for a good hard think session… shades of SkyNet…

Instead of having to run Ethernet cables to each of the access points, mesh networks work wirelessly. Only one mesh node (or more for larger networks) must be grounded and plugged into an Internet connection. Other mesh nodes, acting as repeaters, can be placed throughout a building or outdoor area, only requiring power. When someone surfs the web from a repeater, the traffic hops from node-to-node, making it back to a gateway. The hops can vary depending upon the current signal levels among them all. Hence the common saying about mesh, “self configuring and healing”, and why they are perfect for busy areas.
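That “self configuring and healing” behaviour boils down to: recompute the path to a gateway whenever the mesh changes. A toy sketch with an invented topology (not Open-Mesh’s actual firmware logic):

```python
# Sketch of "self configuring and healing": traffic hops node-to-node
# toward a gateway; when a relay drops out, the next route computation
# simply finds another path.

from collections import deque

def hops_to_gateway(mesh, start, gateways):
    """BFS hop count from `start` to the nearest gateway, or None."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if node in gateways:
            return hops
        for peer in mesh.get(node, []):
            if peer not in seen:
                seen.add(peer)
                frontier.append((peer, hops + 1))
    return None

mesh = {"A": ["B", "C"], "B": ["A", "G"], "C": ["A", "G"], "G": ["B", "C"]}
assert hops_to_gateway(mesh, "A", {"G"}) == 2  # via B or C

mesh["B"] = []          # relay node B fails...
mesh["A"].remove("B")
assert hops_to_gateway(mesh, "A", {"G"}) == 2  # ...and the route heals via C
```

Real meshes weight hops by signal level rather than counting them equally, but the recover-by-recomputing idea is the same.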

Where does Linux or open source come into play? Well, there’s Open-Mesh, a volunteer-based organization that provides hardware and services for mesh networks. The comparatively low-cost hardware, or nodes, are loaded with open-source firmware.

The service or dashboard is provided for free by Open-Mesh and lets operators manage their mesh networks online. Then for user authentication (username and password-based access) or pay-for-use applications, there’s the free CoovaOM or CoovaAAA services in addition to other paid options.

In this two-part tutorial series, we’ll set up a mesh network using the Open-Mesh gear and services. First we’ll gather the hardware, create a Dashboard account, and configure the network settings. Then in the next part, we’ll experiment with the internal splash page, third-party captive portal, set up web filtering with OpenDNS, and finally install the nodes and test coverage. Now let’s get started!

Expect to see folks showing up at major demonstrations or events with those management / authentication apps in a lunch box or backpack ready to go for the event.

And, yes, you can do it with the Raspberry Pi:

Since he’s got several Raspberry Pi boards on hand [Eric Erfanian] decided to see what he could pull off using the robust networking tools present in every Linux installation. His four-part series takes you from loading an image on the SD cards to building a mesh network from RPi boards and WiFi dongles. He didn’t include a list of links to each article in his post. If you’re interested in all four parts we’ve listed them after the break.

He says that getting the mesh network up and running is easiest if none of the boards are using an Ethernet connection. He used the Babel package to handle the adhoc routing since no device is really in charge of the network. Each of the boards has a unique IP manually assigned to it before joining. All of this work is done in part 3 of the guide. The link above takes you to part 4 in which [Eric] adds an Internet bridge using one of the RPi boards which shares the connection with the rest of the mesh network.

If the power of this type of networking is of interest you should check out this home automation system that takes advantage of it.

And, for those inveterate diehards who insist on things that are Industrial Strength, you can do it on BSD Unix. Here’s the pointer:

A wireless mesh network, sometimes called WMN, is a typical wireless network but using a mesh topology instead. These networks are often seen as special ad-hoc networks since there’s no central node that will break connectivity (in contrast with common wireless networks that you have at home/office, where there’s a central Access Point). 802.11s is an amendment to the 802.11-2007 wireless standard that describes how a mesh network should operate on top of the existing 802.11 MAC. If you want to know more, check the resources section. You may already know about the Wireless Distribution System, WDS for short, and if you do, just think of 802.11s as the standard that will expand and unite WDS. Note that 802.11s is much more complex than WDS (for example, 802.11s includes a routing protocol, an airtime link metric protocol and a congestion control signaling protocol, just to name a few).

This project aims to implement the upcoming 802.11s wireless mesh standard (not yet ratified) on the FreeBSD operating system (of course :-) )

Development is occurring in the FreeBSD HEAD branch and experimental support is present in FreeBSD 8.0.

This work was sponsored by The FreeBSD Foundation.

In Conclusion

Did I mention that “It’s a bad idea to annoy the Geek!”?
We have keyboards and we know how to use them…

So TPTB decided to get in the grill of the Tech Generation (of all ages…) and we collectively are responding with a nice Aikido rotating side step and letting all those negative waves go flying by, while we continue to communicate as we wish. At most, TPTB can disrupt things for a little while, in a new way. Then we adapt and keep on keeping on.

Yeah, I suppose they could flood the entire assigned bands with jammer noise. Then we would just need to respond with an external gizmo to frequency hop over all available bandwidth (a known communications method). Just a few dollars and time. The process marches on.
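The frequency-hopping trick is just two radios sharing a secret seed and deriving the same pseudo-random channel sequence, so they stay in lockstep while anyone without the seed sees only a moving target. A toy sketch (channel list and seed invented for illustration; real FHSS lives in the radio hardware):

```python
# Sketch of frequency hopping: a shared seed yields an identical
# pseudo-random channel sequence on both ends of the link.

import random

CHANNELS = [2412, 2437, 2462, 5180, 5200, 5220]  # MHz, example set

def hop_sequence(shared_seed, hops):
    rng = random.Random(shared_seed)  # deterministic for a given seed
    return [rng.choice(CHANNELS) for _ in range(hops)]

alice = hop_sequence("shared-secret", 8)
bob = hop_sequence("shared-secret", 8)

assert alice == bob                           # same seed: hop in lockstep
assert all(ch in CHANNELS for ch in alice)    # always on a legal channel
```

A jammer would have to flood all six channels at once instead of one, which multiplies the power it needs; that is the whole point of the technique.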

Oh, and as a side note, these mesh networks don’t have those nice big central servers where all your communications can be saved and dredged through by companies and agencies. Expect to see increasing use of encrypted packet communications over meshes. Central Services? To quote P.G. “We don’t need them!”…


Posted in Emergency Preparation and Risks, Political Current Events, Tech Bits | 12 Comments

Welcome to Political Manipulation Awareness Day!

Just a short note to commemorate this day of awareness. As some of you may have noticed, there is a great deal of hype and noise going on today. We are all supposed to have our “awareness” raised. So, to do my part, I’m recognizing this as the Earth Political Manipulation Awareness Day!

Call it P-MAD or EP-MAD if you like. Just remember the MAD part of it.

So, in honor of the high effort put forth by such manipulative political agencies as The Club Of Rome (who have been bringing us “Oh, No! Running Out!!!” ‘awareness’ since the very bogus book Limits To Growth in the ’70s), and to all the Malthusians who have been preaching ‘awareness’ of “Doom In Our Time!” from too many people to ever feed, and most recently to the IPCC and minions and useful idiots who have been force feeding us ‘awareness’ of the clear and present demise of the entire planet in a too cold / too hot / too wet / too dry / apocalyptic acid in an ever more dilute ocean from the missing Antarctic; in honor of all them, and more, I propose to heft a glass of modestly cheap white wine while I raise my awareness of their cheap Political Manipulation.

What else?

Today I have put a new liner in my (mandatory) recycle bin. Nothing else. Just the liner. Made from oil. Disposable. Got it from the store by driving my 2 ton station wagon over (with only me in it) using high octane gasoline. Everything else went into the garbage can.

This evening, I’ll be cooking BBQ marinated chicken over a mesquite (thanks P.G.!) and briquette fire. The mesquite tastes better, but I’m going to get it started using petroleum on charcoal. Note that the name “charcoal” is for the two ingredients from which it is made. Char, made from wood and often called charcoal, and coal. Yup. Real honest to goodness coal goes into charcoal briquettes. Once those are started, I’ll lay on the mesquite and once it is involved, the food goes on. Meat. Copious quantities. With a side of baked potatoes with real butter. Thanks to all those folks who have been raising my ‘awareness’ about the amount of grain and water that goes into making a pound of meat, I’ve developed a hankering for more of it ;-)

I’ve also got time to water the lawn and garden, making sure my bunny and plants are well watered and fed. I’ll be using Miracle Grow fully chemical fertilizer. The stuff works great. I’ve been made very “aware” of usage of fertilizers on farms, and figure it seems to work well.

I’ve also got some alien seeds from my seed archive. I’m still trying to figure out which one is most “foreign” here, but once I’ve worked that out, some of them go into the garden. I understand Purslane and Goosefoot are both weedy species that are also usable for food. Though I’ve been wanting to plant my Italian Dandelions for a while. Selected cultivar with larger, more tasty leaves. We’ll see what strikes my fancy, er, ‘awareness’…

Finally, I’m cleaning up part of my patio, so going to generate more landfill fodder as old things no longer needed get thrown away. Need to “clean up the earth” (by filling a landfill) after all… So the old “pop-up” tent like cover gets the heave ho. Yeah, it is a plastic top over metal folding legs, and could likely be recycled, but… the locals decided that only plastic with numbers on it can be recycled and the tarp top has no numbers, plus getting it off of the metal would be a pain, so the metal can’t be recycled. Oh Well…

At least I’m more ‘aware’ of it.

I’ve also got lots of lights on in the house and yard for the cheery “aware” look. I’m very aware of how happy I am to have lots of light. Even especially aware of the incandescent lamps on dimmers. A few years back when the Eco-Nazis came for the lightbulbs, I bought a large stash. What ought to be a lifetime supply. On dimmers they last much, much longer (even a 5% power reduction can give a large lifetime increase). So the political and legal manipulation of what kind of lighting I can buy was very successful at raising my awareness, and I reacted accordingly. There is nothing quite as pleasing as that warm yellow glow of a pure incandescent bulb. No flicker (fast or slow) like with fluorescents, no excess blue spike causing insomnia like from LED bulbs, no mercury to worry about. Instant on. Instant off. 100% dimmable range. No hum, buzz or whine. Wonderfully aware of them ;-)

So be sure to do what you can to raise your awareness of political manipulation on this day, and of course, react accordingly!

To your health! (Raises glass of wine…) Now where did I put that meat and marinade…


Posted in Political Current Events | 2 Comments