30 miles due west of Oroville, in the flat

Yet more “Flooding Drought”.

This is very near my old home town. It is the Seasonal Flood as in my childhood. This area is “protected” by Shasta Dam on the Sacramento River, so this flood is from water in that catchment, plus local rains. This IS NOT related to the Oroville dam problems.

Yet it is flooded.

The horrible thing this means is that releases from Lake Shasta are too high for present rain conditions, which implies the operators are very worried about something. Perhaps a too full lake with too much snow above it and a warmish atmospheric river starting?

The pictures of I-5 at Williams as a water-covered road are also not good. That roadbed is raised relative to the land on each side, and has good drainage (normally).

This is what is downslope from Central Valley reservoirs that are full.

This (toward Colusa and Williams) is where Oroville residents head to leave town (though 30 miles from Oroville). The main road west goes directly to this flood zone. The southern road is along the banks of the Feather River, so the prime flood direction. The northern road to Chico took 3 hours for what is normally a 25 minute drive.


And we have 3 more days of steady rain ahead.

From this posting:


Subscribe to feed

Posted in Human Interest, News Related | Tagged , , | 14 Comments

RAID, LVM, Gaggle Of Disks…

What to do if you have a stack of “modest” sized disks, say a couple of TB each, but you need a single directory of about 6 TB?

I suppose you could go out and buy a new 8 TB disk (some space is lost to formatting and such). Or move some of the files to another disk (and put symbolic links in the original location – I’m running a wget, so if the files are just gone, they would be downloaded again). But the first option is expensive and requires moving a lot of data up front. The second means an ongoing need to shuffle data around and to make sure the wget is structured so that it really doesn’t try to download all that stuff again. Either way, it’s a kludge.
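For concreteness, the symlink workaround is just a move plus a link. Here’s a minimal sketch, using throwaway temp directories to stand in for the real disks (the real paths would be your own mount points):

```shell
# Move a directory to a second disk, leave a symlink behind at the old path.
# Temp dirs stand in for real mount points in this illustration.
src=$(mktemp -d)     # pretend this is the original, full disk
dst=$(mktemp -d)     # pretend this is the second disk

mkdir "$src/ghcn"
echo "station data" > "$src/ghcn/file.dat"

mv "$src/ghcn" "$dst/ghcn"       # shift the data to the other disk
ln -s "$dst/ghcn" "$src/ghcn"    # old path still resolves

cat "$src/ghcn/file.dat"         # reads through the symlink
```

The wget still sees the old path, so it doesn’t re-download, but you’re on the hook to keep doing this as disks fill.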

There are alternatives.


The first one most folks think of is a RAID group: Redundant Array of Inexpensive Disks. This is most often used to make a group of disks where any one disk can fail, be replaced, and you lose no data. There are a bunch of RAID levels. Mirrors (2 sets of disks, each holding one copy of the data). Striped groups (where each file has blocks on each disk, usually done to increase read and write speed, as you can have a block buffered and read or written on each disk). And higher RAID types. Most often this is RAID 5, where data blocks are spread over several disks along with enough parity data to reconstruct the blocks on any one disk, were it to crash.

More on RAID levels:


In computer storage, the standard RAID levels comprise a basic set of RAID (redundant array of independent disks) configurations that employ the techniques of striping, mirroring, or parity to create large reliable data stores from multiple general-purpose computer hard disk drives (HDDs). The most common types are RAID 0 (striping), RAID 1 and its variants (mirroring), RAID 5 (distributed parity), and RAID 6 (dual parity).

RAID levels cover things like gluing together a set of disks, but often have a large time cost in building, and changing, the structure. When you add or remove a disk, the RAID does a “rebuild”, and that can take a long time, especially on slow hardware like the Pi.

A striped group gives performance improvement as reads / writes are spread over several disk spindles and heads.

A mirror group gives data security, but at a high cost in duplicated disks and reads / writes.

RAID 3 and 4 are fairly specialized combinations of bit or byte striping and parity (on a dedicated disk for RAID 4).

RAID 5 has the parity distributed over all the disks, and RAID 6 has two copies of the parity so that you can lose 2 disks and survive.

All that parity has a large cost in computes, especially when the compute engine is small. Thus the very long rebuild times. Even adding a new empty disk involves a ‘rebuild’ as the data and parity get spread over that new disk and recomputed.

I built a RAID as my first cut at this problem, and then found that the ‘rebuild’ when I added a third disk was going to take a day. During that time, the RAID array is at risk. Every time I would add or remove a disk, that same process would happen. Furthermore, one disk is lost to parity, so for 3 disks, you get 2 disks of storage. Each added disk improves that ratio, so more smaller disks are more space-efficient than a few giant ones. My USB hub has only 4 slots, so at best I could get 3 disks worth of space usable. For 6 TB that would mean using 4 x 2 TB disks, and that would be “close” on total space. When it ran out, I’d be basically stuck. Adding another hub and more disks would start to get pricey, and then there would be the rebuild time.

Oh, and since for RAID 5, the basis is a striped group:

“RAID 5 consists of block-level striping with distributed parity.”

Each disk (or partition) must be of the same size. Well, some can be bigger than others, but the only space used will be the size of the smallest disk or partition. So if you have 4 x 1 TB disks, but one of them has a 100 GB partition set aside for something else, you will get 4 chunks of 900 GB each used, and only 3 x 900 GB available after parity. Spending 4 TB to get 2.7 TB starts to bite pretty quickly, especially when after formatting you are closer to 2.5 TB.
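That arithmetic is easy enough to sketch in the shell: RAID 5 usable space is (n − 1) times the smallest member. Sizes here are the hypothetical ones from above, in GB:

```shell
# RAID 5 back-of-envelope: usable = (number of members - 1) x smallest member
set -- 1000 1000 1000 900   # four member sizes in GB; one trimmed by that 100 GB partition
n=$#
min=$1
for s in "$@"; do
  if [ "$s" -lt "$min" ]; then min=$s; fi
done
usable=$(( (n - 1) * min ))
echo "$usable GB usable from $n members"   # 2700 GB, i.e. 2.7 TB before formatting losses
```

Run it with your own sizes to see how fast the parity tax plus the smallest-member rule eats your raw capacity.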

For anyone wanting to play with making a RAID, pretty good directions are here:


The very abbreviated form is:

If your Debian / Devuan has been a while since the last update, bring it up to date:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade

Personally, I’d skip the dist-upgrade, especially since it can screw up your Devuan on Pi in some cases (replacing the kernel on BerryBoot seems to kill it).

The program that implements RAID on the Debian family is “mdadm” (multi-disk admin?) so install it.

apt-get install mdadm

Then you plug in your disks and create your RAID. Quoting the article:

mdadm -Cv /dev/md0 -l0 -n2 /dev/sd[ab]1 ( configure mdadm and create a raid array at /dev/md0 using raid0 with 2 disks ; sda1 and sdb1. To create a raid1, replace the line to read mdadm -Cv /dev/md0 -l1 -n2 /dev/sd[ab]1 )

Clearly for RAID 5 you would use -l5 instead. Also note that you can list the disks explicitly without the wildcard [ab] bit. So like:

mdadm -Cv /dev/md0 -l5 -n3 /dev/sda1 /dev/sdb1 /dev/sdc3

I’ve not tested that command, but think I have the syntax right and no typos… one hopes. IIRC, that’s what I did with my test case. Note that you can use different partitions on different disks and your particular disk partition names will vary. Note that you now have a RAID group on /dev/md0 but not a file system. So make one:

mkfs /dev/md0 -t ext4 

You can now mount /dev/md0 like any other disk. I mounted it as /RAID for my testing.

For a fair time I searched for how to keep straight which disks were in the RAID. They get marked with a magic number (the superblock) on the disk itself and assembled at boot time. Removing one can be a challenge.
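For anyone stuck in the same spot, these are the mdadm inspection and teardown commands I believe apply. I’ve not run this exact sequence here, so check your man page before pointing them at real disks:

```shell
cat /proc/mdstat                  # quick view of assembled arrays and any rebuild progress
sudo mdadm --detail /dev/md0      # member disks, state, RAID level of the array
sudo mdadm --examine /dev/sda1    # read the superblock (that "magic number") off one member

# To retire an array for good: stop it, then wipe the superblock on each member
sudo mdadm --stop /dev/md0
sudo mdadm --zero-superblock /dev/sda1
```

Until the superblocks are zeroed, the boot-time scan will keep finding and reassembling the array, which is why “just unplugging it” doesn’t make it go away.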

Is there something less complicated, that takes less computes, and is more efficient with the disks?

LVM, an easier way

Logical Volume Manager.

The purpose of LVM is different from that of RAID. RAID is to handle data protection and performance, while LVM is for the purpose of making volume management easy.

Before anyone asks, yes, you can use the two together (IFF you are prone to loving hyper-complex environments and enough levels of indirection to cause your eyes to glaze… but folks have used RAID to build the underlying data vault, then used LVM on top of it to make administering the disks easier).

With LVM you can “glue together” a gaggle of disks so that they look like one giant disk to the world. Or break up one gaggle of disks into a different gaggle of logical disks.

I just used it to create what looked like one giant disk by glueing together a 4 TB disk, a 2 TB disk, and a 1.5 TB disk. Notice that volume sizes can be anything and we’re not talking about data preservation or speed of access here. Just one BIG file system made out of several different disk bits.

The LVM Wiki is pretty good:


First you do the usual “upgrade / update” of the system. Then you install the LVM code and start the service:

sudo apt-get install lvm2
sudo service lvm2 start

The wiki has you install a graphical management bit, but I didn’t bother.

apt-get install system-config-lvm

Now there is a 3 level set of “stuff” to keep track of during the rest. Physical disks or disk partitions. Groups of “volumes” (called volume groups). And logical volumes created inside a volume group. There are commands to create, inspect, and manage things at each level. (So you can see how adding RAID above or below and adding a couple of more levels can be a bit confusing…)

OK, at the physical level we need to assign disks or disk partitions to the Volume Group. You can pretty much mix and match bits of disks at this level, though the pages encourage slugging in whole disks as simpler to manage. I built mine out of partitions and put a swap partition as slice ‘b’ on each disk. Why? Because I’m an old school surly curmudgeon who doesn’t like the idea of running swap onto a splotch of disk on an LVM volume in an LVM Group on a gaggle of physical disk partitions… but you can put swap on an LVM volume if you like, then just slug in whole disks for space. So instead of using /dev/sda1 for disk space and /dev/sda2 for swap (and partitioning accordingly) you can just add /dev/sda to the LVM group and parcel it out as desired to files or swap.
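For folks who do want swap on LVM anyway (not my preference, as noted), the sketch would be roughly this. The volume group name Tgroup is hypothetical here (it’s the shorter name I wish I’d used):

```shell
sudo lvcreate -n swap -L 2g Tgroup    # carve out a 2 GB logical volume for swap
sudo mkswap /dev/Tgroup/swap          # write the swap signature on it
sudo swapon /dev/Tgroup/swap          # enable it
swapon -s                             # confirm it shows up in the swap list
```

Untested as typed, but those are the standard steps; the gain is you can resize swap later with lvresize instead of repartitioning disks.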

So once the LVM service is installed, how do you hand it disks or partitions?

As usual for all things systems admin, you either put a “sudo” in front of commands or run them as root. Just a reminder… So what is that command?

pvcreate /dev/sda2

This marks that partition as part of the LVM batch. If you used “pvcreate /dev/sda” you would assign the whole disk.

There are a bunch of physical volume commands, but I’ve not found one that tells you how much real data is on any given physical disk.

PV commands list

pvchange — Change attributes of a Physical Volume.
pvck — Check Physical Volume metadata.
pvcreate — Initialize a disk or partition for use by LVM.
pvdisplay — Display attributes of a Physical Volume.
pvmove — Move Physical Extents.
pvremove — Remove a Physical Volume.
pvresize — Resize a disk or partition in use by LVM2.
pvs — Report information about Physical Volumes.
pvscan — Scan all disks for Physical Volumes.

You would think pvs would tell you how much of each physical volume had data on it. It doesn’t. It tells you how much has a file system built on it:

root@orangepione:~# df /LVM
Filesystem                                 1K-blocks       Used  Available Use% Mounted on
/dev/mapper/TemperatureData-NoaaCdiacData 7207579544 2688218088 4189744724  40% /LVM

root@orangepione:~# pvs
  PV         VG              Fmt  Attr PSize PFree
  /dev/sda1  TemperatureData lvm2 a--  3.64t    0 
  /dev/sdb1  TemperatureData lvm2 a--  1.82t    0 
  /dev/sdc1  TemperatureData lvm2 a--  1.36t    0 

So with 60% empty, pvs shows nothing free. OK… It makes a certain kind of sense in that I can’t add a new Logical Volume as the space is committed to the /LVM mount point (made from the Volume Group “TemperatureData” and the Logical Volume “NoaaCdiacData” – and yes, I wish I’d used shorter names ;-)

As I understand it, unless you make it a striped group, then files are allotted in order from first disk to last disk, so I can assume that the 2.6 TB used is all on that first /dev/sda1 physical volume at this point… but I’d really like a command that let me know for sure…
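LVM can’t see inside the file system, so no LVM command will show which disk holds actual file data. But the reporting commands do take custom output columns, and (assuming your lvm2 version has these fields; check the man pages) seg_pe_ranges at least shows the order in which PVs back the logical volume, which, for a linear LV filling from the front, says where the first chunks of blocks live:

```shell
# Which physical extents back the logical volume, in order:
sudo lvs -o lv_name,lv_size,seg_pe_ranges TemperatureData

# Per-PV view of allocated vs free extents:
sudo pvs -o pv_name,vg_name,pv_size,pv_used,pv_free
```

That’s still allocation, not data, but it confirms the guess that the used blocks start on /dev/sda1.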

OK, you have handed over some disk or partition to the physical volume list. Now how to do that Volume Group and Logical Volume stuff?

Create your Volume Group. I used TemperatureData as the name and wish I’d used TGroup…

vgcreate myVirtualGroup1 /dev/sda2

Then add another disk or partition to it with:

vgextend myVirtualGroup1 /dev/sda3

There are lots of things you can do with Volume Groups:

VG commands list

vgcfgbackup — Backup Volume Group descriptor area.
vgcfgrestore — Restore Volume Group descriptor area.
vgchange — Change attributes of a Volume Group.
vgck — Check Volume Group metadata.
vgconvert — Convert Volume Group metadata format.
vgcreate — Create a Volume Group.
vgdisplay — Display attributes of Volume Groups.
vgexport — Make volume Groups unknown to the system.
vgextend — Add Physical Volumes to a Volume Group.
vgimport — Make exported Volume Groups known to the system.
vgimportclone — Import and rename duplicated Volume Group (e.g. a hardware snapshot).
vgmerge — Merge two Volume Groups.
vgmknodes — Recreate Volume Group directory and Logical Volume special files
vgreduce — Reduce a Volume Group by removing one or more Physical Volumes.
vgremove — Remove a Volume Group.
vgrename — Rename a Volume Group.
vgs — Report information about Volume Groups.
vgscan — Scan all disks for Volume Groups and rebuild caches.
vgsplit — Split a Volume Group into two, moving any logical volumes from one Volume Group to another by moving entire Physical Volumes.

I’ve not explored most of those commands…

OK, you have a nice big volume group, now what? How to split out what looks like a disk to the system and mount it? Create a Logical Volume.

lvcreate -n myLogicalVolume1 -L 10g myVirtualGroup1

Now I used NoaaCdiacData for my logical volume name and wish I’d used NCData…

lvcreate -n NCData -L 100g Tgroup

Then format it to ext4 (or something else if you have good reason to).

mkfs -t ext4 /dev/Tgroup/NCData

You could now do a mount on /test to see if it worked:

mount /dev/Tgroup/NCData /test
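To make the mount survive a reboot, the matching /etc/fstab line would be something like this (using the /LVM mount point from above; adjust names and mount point to your own):

```shell
# /etc/fstab entry for the LVM logical volume
/dev/Tgroup/NCData  /LVM  ext4  defaults  0  2
```

The device path under /dev/{volume group}/{logical volume} is stable across reboots, which is one of the niceties LVM gives you over raw sdX names.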

There are lots of Logical Volume commands too:

LV commands

lvchange — Change attributes of a Logical Volume.
lvconvert — Convert a Logical Volume from linear to mirror or snapshot.
lvcreate — Create a Logical Volume in an existing Volume Group.
lvdisplay — Display the attributes of a Logical Volume.
lvextend — Extend the size of a Logical Volume.
lvreduce — Reduce the size of a Logical Volume.
lvremove — Remove a Logical Volume.
lvrename — Rename a Logical Volume.
lvresize — Resize a Logical Volume.
lvs — Report information about Logical Volumes.
lvscan — Scan (all disks) for Logical Volumes.

I had to add disks to my Volume Group after it was built, and then extend the size of the file system to include those other disks. The “lvextend” command grows the Logical Volume, and then the resize2fs command expands the file system to fill that extended space.

Essentially that’s it. If folks want more worked examples of the lvextend and resize2fs commands, let me know and I’ll add them, but it is fairly simple.
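For the record, the grow sequence is short. A hedged sketch, assuming a new partition /dev/sdd1 and the Tgroup / NCData names from above (untested as typed):

```shell
sudo pvcreate /dev/sdd1                        # hand the new partition to LVM
sudo vgextend Tgroup /dev/sdd1                 # add it to the volume group
sudo lvextend -l +100%FREE /dev/Tgroup/NCData  # grow the LV into all the new free space
sudo resize2fs /dev/Tgroup/NCData              # grow the ext4 file system to match
```

With ext4 that resize can be done while the file system is mounted, which beats a day-long RAID rebuild by a wide margin.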

In Conclusion

So that’s where I’m at on scraping the approximately 6 TB of “superghcnd” that looks like it is hourly data for a selection of GHCN sites. About 10 GB / day… I chose to use LVM just to avoid several days worth of “rebuild” on RAID volumes, and because I could glue together a gaggle of different disks into one logical volume image. I risk that any disk loss can cause all of it to be lost, but since it is a duplicate of an online server, I’m able to reload it if needed (as long as NOAA keeps it up).

Sometime after I have a full copy, should I desire more security, I could make a RAID volume (and maybe put LVM on top of it), then gradually grow the RAID as I copied over data and shrink the LVM group… Or just toss a couple of $Hundred more at a couple of added 4 TB disks. It’s a full 3 weeks until the download is finished, and I’ve got plenty of raw space at the moment, so lots of time to think about the next step. At the moment, I’m happy to just have it all download and then leave the disks turned off 90% of the time. Turned off disks in a drawer have a long MTBF (Mean Time Between Failure).

At present I have about 2 TB additional empty disk, beyond the 4 TB free in the LVM Logical Volume at the moment, so things are fine for now. I think I’m going to need about 2.5 of that 4 TB to have the download finish. Then I’ll decide on “safe in a drawer” or “Move to a RAID”. I already have a simple copy of the data that does not include the “superGHCNd” mammoth chunk, so the only bit at risk is that huge chunk of unclear value. I think just moving one day of that data and the ‘diff’ files to a duplicate is enough “protection”.

So there you have it. The “joys” of slugging TB of data around and how to do it.


Posted in Tech Bits | Tagged , , | 6 Comments

Trump Campaign Stop in Melbourne (FL that is…)

Well, we’ve finally done it. We’ve reached the point of perpetual Presidential Campaigning.

The cleverness of it is impressive, though. Blocks NGOs from attacking him without compliance with the whole PAC thing. Lets him hold support rallies for the President as he governs and lets him get loads of “earned media” while bypassing the news cycle.
Just love it.

Remember you can watch it live, and without talking heads telling you what to think, at RightSide Media:


Happening now as a live feed with reruns later…


Posted in Political Current Events | Tagged , , | 35 Comments

Frittatomelette – a hybrid

The spouse was especially fond of the omelette I made this morning, so I thought I would share the single detail that makes it different.

Americans often call a frittata an omelette. The difference is subtle, but important.

In a classical omelette, the egg mix is cooked almost to done, then filling is laid in the middle and the egg folded over it. In a frittata, the filling is placed into the frying pan and cooked some, then the egg mix is added, and when near the finished point, folded and served.

Over the years, I’ve played with both. Trying to work out what is best. Making a cheese frittata is an exercise in eggs that don’t set up right. Making a ham omelette means having cold ham with undercooked egg mix on the bits and lacking that browning that enhances the flavor. Just how can one make a decent Ham & Cheese omelette with that problem?

My solution is a hybrid. Place the ham bits in a lump of butter in the skillet. Saute or fry them until browned just enough. Add the egg mix (couple of eggs beaten with a Tbs or two of milk) and let it cook to the almost all set stage (lowering the heat helps here so the bottom surface doesn’t overcook while the top layer is not cooked yet… at lower heat the whole depth warms more evenly). Just about the time it’s ready to set up on the top layer, at that gelatinous but still not set up stage, sprinkle on finely shredded cheese. I use the Mexican Taco Mix shreds. Fold, and finish (that, for me, means let it sit just long enough for the folded flaps to stick, then turn the whole thing over to seal and finish).

For things like a Denver, I also fry the onion and peppers bits with the ham. Essentially, I make a frittata out of any bits that fry well, and an omelette with the bits that ought not be fried, like cheese and avocado and whatnot. The Frittatomelette.

With that, time to refill the coffee cup and admire the stormy weather with a hot cup a Joe and a full tummy ;-)


Posted in Food | Tagged , , , | 13 Comments

Late Late Change and Gaga Carpool

It would seem that I’ve not watched Late Late Night non-news in a long long time, and it’s a whole different thing now. On Youtube, I stumbled onto Carpool Karaoke and despite sort of hating karaoke, well, it was Lady Gaga and… I had to look. It was 15 minutes of just fun…

All this time, I had no idea. It seems there are many of these with different performers… and James Corden who, it would seem, took over Late Late Night at some point…

Somehow I think I’ve got a few days of this ahead of me… Sia, Britney Spears, Elton John…

Oh God, it’s happening…

I think I’m going to hit the fridge for that 1/2 gallon of Castle Pietra Pinot Grigio… it’s gonna be a long night… then this happens…

I think 1/2 gallon isn’t going to be enough…


Posted in Arts, Favorites | 5 Comments

SCOTUS Justice Alito said what?



Alito speech excerpt:

Here’s another example: regulation of the emission of carbon dioxide and other greenhouse gases. Now, Americans are, obviously, of two minds about the regulation of greenhouse gases and the question of climate change. But one thing that I think is beyond dispute is that whatever our country does about this matter is important. It will have a profound effect on the environment, or the economy, or on both. In a healthy republic, this issue would be publicly debated, and the basic policy choices would be made by the elected representatives of the people. That is the system prescribed by our Constitution. But that is not what has happened.

The Clean Air Act was enacted by Congress way back in 1970, and it regulates the emission of “pollutants” – that’s the term in the statute. Now, what is a pollutant? A pollutant is a substance that is harmful to human beings or to animals or to plants. Carbon dioxide is not a pollutant. Carbon dioxide is not harmful to ordinary things, to human beings, or to animals, or to plants. It’s actually needed for plant growth. All of us are exhaling carbon dioxide right now. So, if it’s a pollutant, we’re all polluting.

When Congress authorized the regulation of pollutants, what it had in mind were substances like sulfur dioxide, or particulate matter—basically, soot or smoke in the air. Congress was not thinking about carbon dioxide or other greenhouse gases. Yet in an important case decided by the Supreme Court in 2007, called Massachusetts v. EPA, a bare majority of the Court held that the Clean Air Act authorizes EPA to regulate greenhouse gases. Armed with that statutory authority, the EPA has issued detailed regulations for power plants, for factories, for motor vehicles.

The economic effects of these regulations are said to be enormous. I am not a scientist or an economist, and it is not my place to say whether these regulations represent good or bad public policy. But I will say that a policy of this importance should have been decided by elected representatives of the people in accordance with the Constitution and not by unelected members of the judiciary and bureaucrats. But that is the system we have today, and it is a big crack in our constitutional structure.

Hit the link for more…


Posted in AGW and GIStemp Issues, Political Current Events | Tagged , , , , | 10 Comments

Month One of Trump

The prior thread is here:


As of now, we have had Flynn hounded out of office for actually doing his job and opening a dialog with what The Left is pushing as our biggest national security threat ever (seems like they never have grasped the concept of “know your enemy”) so he has been encouraged to leave as Trump and Pence got blindsided by The Media and the Democrat Political Assassination Squads.

IMHO letting him go was a mistake as it just feeds the sharks… Better use would be to let them be focused on him and teased by the tempting morsel so close yet unreachable. Now they have tasted blood and want more, so need the next sacrificial body to rend.

Concern Troll Senator McCain was just on CNN expressing “concern” over our having no Security Advisor in the face of all the Soviet Russian Issues. Hey, Johnny, how about not hounding a perfectly good one out of office… then bitching about it. Oh, and getting with the post-cold-war era while you are at it. Russia as Evil Empire is sooo 1970s…

The Soros Sponsored Rent-A-Mob is not in the streets about Russia at the moment, having gotten a nice take-down on that already this week, but instead have gone home so that they can swamp Republican Town Haul Meetings (no, not a typo, they haul in a load of loot from back home donors…) and disrupt and chant in an attempt to get them to turn on Trump. IMHO the best thing to do would be for Bikers For Trump to start showing up en masse at Republican meetings and just “mingle”… /sarc;

Soros continues to try to push for an American Color Revolution just like the joyous ones he funded and furthered in places like Libya, Egypt, Ukraine, Syria, etc. etc. Didn’t they turn out to be just wonderful places? Well, if you ignore the deaths, rubble, bombs, wars,… He’s pissed at Putin for catching on and tossing his ass (and his Open Society NGOs asses) out of Russia – along with issuing a Soros International Arrest Warrant. Hey, Don Donald: Want fewer problems? Just recognize the Russian Warrant and swap Soros for Snowden… kind of a ‘win win’… Or heck, just give Putin Soros and let him keep Snowden then he has all the political headaches of both.

Unfortunately, the well oiled Democrats are just loving it rolling in all that Soros money and paid for Rent-A-Mob free disruption staff, so have sold out the nation to the expedience of Kaos… pardon, chaos… and are well on their way to sedition-land. Having zero of anything positive to offer the nation, they are doing the only things they can do: disrupt, have tantrums, character assassinations, complain, cry, fling poo. All the best you can expect from a 3 year old. But hey, it’s $Millions on the line.

The good news is that all that disruption on their part just shows how well on target is POTUS Trump and his Band Of Merry Millionaires as they set about “fixing” (now where is that banding gun…) the government. Dear Donald: Just keep remembering that the more they scream about it, the more we Deplorables know you are in the right place doing the right thing. Don’t let the media bleating wear you down. While they think it horrible (as they are on the wrong end of the bander) we think it makes them much more tasty and tender over time…

Finally, we had the outrageous display of a misogynistic anti-semitic mob attacking a woman and a Jew. Yes, the despicable dimocratic assault on the family of the POTUS in the form of Ivanka. Guess it is open season on women and Jews in the Democratic Party and at Nordstrom’s but I note in passing that Macy’s seems to still have the Ivanka clothing line. Isn’t it interesting how the party that screams “misogynist” and “anti-Semite” the most is quite happy with DOING those things as long as it is to the family of someone they hate. Can we call that a “hate crime”? Pretty please? Near as I can tell, Ivanka is a proponent of women’s issues, leans Democratic, and is in all ways visible “one of their own”. But never mind that, it’s feeding time in the shark tank and cannibalism is fine as long as you hate them first.

It isn’t even a whole month yet…
One wonders what the next 11 months will bring?


Posted in Political Current Events | Tagged , , | 139 Comments