Just a quick note about the status of building out compute infrastructure using the Raspberry Pi.
This is related to making a very secure and private, Tails-like R.Pi as a secure workstation; but it is more about ‘all that back room work’ most folks don’t think about.
I have two Raspberry Pi boards in use at present. A third one, a R.Pi B+ model, didn’t survive the trips to / from Florida. Likely because I didn’t have any anti-static protection on it, and Florida has a LOT of lightning and the associated charge. The one in the antistatic bag did fine. I had expected to use three boards for all this, but in fact two are more than enough.
The first board, a R.Pi B+ model, is doing most of the infrastructure. The second, a R.Pi Model 2, is acting as the workstation. I’ve covered the performance of it before. Adequate, not great, some small type ahead in WordPress editing, and not quite doing sound / video as I want; but that will improve as software gets debugged (and / or I update the system ;-)
This posting is mostly about the use of the B+ as infrastructure server.
Up until now, I’d had the Powered USB Hub on the R.PiM2 along with all the disks. I’ve been unpacking a decade worth of backups and storage grabs from various computers as they neared End Of Life (or in some cases after brief resurrection). Over the years, having backed up A to B, then B (with A) to C, then, when C died, dumped the wad back on A… Well, it doesn’t take too many “powers of 2” and we’re talking Terabytes. So it was time to take out some trash. I’ve been at it for a couple of weeks now, and likely still have a couple more to prune down to what is really useful. That went faster on the M2.
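For the curious, the kind of pruning described above can be jump-started with a pipeline that flags byte-identical files across the backup trees. This is just a sketch (the directory is a placeholder, and I’m assuming GNU md5sum and uniq as shipped on Raspbian); review the output before deleting anything:

```shell
#!/bin/sh
# Sketch: list groups of byte-identical files under a backup tree, so the
# A -> B -> C -> A duplicates can be reviewed (not auto-deleted!).
# DIR is a placeholder for whichever archive mount you are pruning.
DIR="${1:-.}"
find "$DIR" -type f -exec md5sum {} + \
  | sort \
  | uniq -w32 --all-repeated=separate   # group lines whose first 32 chars (the hash) match
```

Tools like fdupes do the same job with more polish, but the pipeline above runs on a bone-stock install.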
But I’d also launched a data scrape for temperature data. Having little clue how big the source was, what I expected to take a day or two is still running. I’m presently at 322 GB used for temperature site data and rising… That tended to ‘lock down’ the M2 and the associated terminal / screen as it ran for days. Which precluded using other computers with ‘the good screen’ and precluded advancing on development as the machine was ‘working’.
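If you’re tempted to kick off a similar scrape, a bandwidth cap plus an eye on disk usage is cheap insurance. A sketch, with the URL and target directory as placeholders (the wget flags are standard GNU wget):

```shell
#!/bin/sh
# Sketch: a polite site mirror plus an hourly disk-usage tally.
# The URL and /WD/scrape are placeholders -- substitute your target and data mount.
wget --mirror --no-parent --wait=1 --limit-rate=200k \
     --directory-prefix=/WD/scrape "http://example.com/data/" &

# Watch how big the scrape has grown, once an hour:
while sleep 3600; do
  du -sh /WD/scrape
done
```

The --limit-rate cap is what keeps the rest of the board (and your ISP relationship) happy while it grinds away for days.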
So for the last couple of days I’ve “bitten the bullet” and done the back end work to make the infrastructure more supportive of my varying work style.
The Model B+
First up, I moved the Powered HUB and the disks onto the R.Pi B+. I expected the thing to sag under the load, but it is doing just fine. In fact, it worked reasonably well for the last couple of days of ongoing unpacking.
The only really big thing I noticed was that when doing an ‘unzip’ or ‘gunzip’ or similar, it tends to “peg” the CPU at full (and other processes slow down). When running NFS (Network File System) services it spawns somewhere over a 1/2 dozen nfsd daemons, and if you try to run a couple of decompresses, a heavy NFS transfer, and a disk to disk copy; well, the poor dear does it, but things bog down. So you learn to only have a couple of things going on at once.
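For reference, that ‘over a 1/2 dozen’ is just the packaged default thread count. On Debian-family systems (Raspbian included, I believe) it lives in /etc/default/nfs-kernel-server; treat this as a sketch of where to look, not a tuning recommendation:

```
# /etc/default/nfs-kernel-server  (Debian / Raspbian packaging -- location may differ elsewhere)
# Number of nfsd kernel threads to start. 8 is the usual default; fewer trims
# NFS's claim on the single B+ core, more helps when several clients pile on.
RPCNFSDCOUNT=8
# Then: sudo service nfs-kernel-server restart
```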
I could have the wget scrape, a modest NFS file transfer, and one unzip going, and that was about fully loaded. The M2 would just put the uncompress function on one CPU, NFS on another, and the minor loads like the wget and everything else on a third, and still be only about 65% CPU used. One unzip or decompress and the B+ took all available cycles. The scheduler does a good job of it, though, so the NFS slows down, or the unzip does, but terminal response stays OK. Just don’t expect this little board to serve a compressed file system on the fly to a 1/2 dozen clients while archiving the news… and playing your favorite songs…
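One cheap lever on a one-core board like the B+: start the known CPU hog at low priority, so nfsd and the terminal get first call on cycles. A sketch (the file name is a placeholder; ionice is the util-linux tool, which I believe ships on Raspbian):

```shell
#!/bin/sh
# Sketch: run the decompress at lowest CPU priority so NFS traffic and
# terminal response stay snappy on the single core.
nice -n 19 gunzip big_archive.gz
# With util-linux installed, you can also demote its disk I/O:
#   nice -n 19 ionice -c3 gunzip big_archive.gz
```

The kernel scheduler already does most of the balancing, as noted above; nice just tips the contest further toward the interactive stuff.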
What all do I have on it now?
Well, it is an NFS file server for 3 TB of data (soon to be 4 TB, I think). It is doing 2 wget site scrapes. I have DNS running from it. It is doing sporadic BitTorrent server duty for 26 GB of old Linux (soon to be upgraded to more like 500 GB, and newer versions too ;-) along with Samba (Microsoft file services) running but not tested yet, and an Apache Web Server that works, but only has one generic page and isn’t being hit with traffic at all other than when a mangled URL hits.
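For those who haven’t set up NFS before (the fuller write-up comes later), the server side boils down to a few lines in /etc/exports. The paths and subnet below are placeholders standing in for my disk farm layout, not something to copy blindly:

```
# /etc/exports on the B+ -- paths and subnet are examples only
/WD/archive   192.168.1.0/24(ro,async,no_subtree_check)     # read-only deep archive
/WD/working   192.168.1.0/24(rw,async,no_subtree_check,root_squash)
# After editing, re-export without a restart:
#   sudo exportfs -ra
```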
That seems to be a decent load level for it. When not heavily trafficked, it is running about 30% to 50% CPU, and only goes to 100% with decompressing, large file moves locally, or very heavy NFS traffic (usually in combination). I’m quite happy with it as ‘The Infrastructure Box’. (On the ‘someday’ list is to buy a replacement for the dead board, and duplicate the OS/SD-chip from this one, put it all in a metal ammo-can, depot it somewhere safe and have the whole thing ready to go if I ever need an instant “back in service” effort. Also put an encrypted copy of the data in ‘The Cloud’ somewhere obscure.)
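The ‘duplicate the OS/SD-chip’ part of that someday-list is straightforward with dd. A sketch, done from another Linux box with the card in a USB reader; /dev/sdX is deliberately left generic because dd to the wrong device is fatal (check with lsblk first):

```shell
#!/bin/sh
# Sketch: image the infrastructure Pi's SD card for the ammo-can depot copy.
# /dev/sdX is a placeholder -- verify the device with lsblk before running!
dd if=/dev/sdX of=rpi-infra.img bs=4M
gzip rpi-infra.img          # mostly-empty cards compress well
# Restore to a fresh card later:
#   gunzip -c rpi-infra.img.gz | dd of=/dev/sdX bs=4M
```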
The Model 2
This machine is now freed up for further development work. I’ve also moved my “home directory” onto a USB dongle. It can move between machines so that I always have “the usual tools” available. It can go on the Chromebox, the ASUS, or the Pi as the mood hits. Also, the ‘disk farm’ is now NFS mounted on any machine that needs it.
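On the client side, ‘NFS mounted on any machine that needs it’ is one fstab line per share. Host name and paths here are stand-ins for my actual setup:

```
# /etc/fstab on each client -- host and paths are examples
# 'noauto' so a powered-down server doesn't hang the boot; mount on demand.
pi-bplus:/WD/archive   /archive   nfs   ro,noauto,soft   0   0
# Then, when wanted:
#   sudo mount /archive
```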
While I’ve not found how to mount NFS file systems on the Chromebox, that hasn’t been an issue, really. It’s a low priority. I did find that you can get a terminal with Ctrl-Alt-T and that you can install an SSH client app. (They have removed SSH from CROSH, the Chrome Shell – a mistake IMHO). So yesterday I had the Chromebox on The Good Monitor enjoying videos and music (something it does very well…) while having two SSH terminals open in tabs on the Pi.B+ doing file management. All near silent.
Today, I’ve moved the monitor to the R.PiM2 and I’m doing this posting with a few more-usable terminal windows opened. A real Linux with real terminal windows works much better than tabs in the Chrome browser, IMHO. And this, too, is essentially silent. I also don’t have to wonder if the “App” in Chrome is creaming off my login and password and shipping it to Google… Not that I think they care about my 2 TB of temperature data, Linux antique release archive, and canonical collection of dead Windows Machines Software… but it’s the principle of the thing… So while it is nice to know I can do Systems Management from an SSH App on the Chromebox, I’d rather use a system that can be shown locked down.
I do still get the Boot Rainbow screen saying the power supply provided is a touch weak (likely the same one shipped with the prior Pi models, and they didn’t think that the 4 core might need more…), but it has not shown the little rainbow square in the upper right corner (the running under-voltage warning) once booted. Oddly, some afternoons I’d get that when it had a loaded USB hub on it. Just about peak A/C demand time. I suspect the power company was sagging volts by a few then. We’ll see what happens this afternoon. Only on hot days did that happen, and not enough to crash the box. It could also just be that the little power brick doesn’t take hot days well and sags then. Further testing for “someday”. Just realize that an industrial power brick would be better if you intend to load up a Model 2 with stuff and work. The provided one is “ok I guess”, but will go to the next B+ I buy, when I’ll also get that oversized power brick.
I have my TBs of data mounted via NFS on the “machine du jour” and today that is the Model 2. I’ve also had them mounted to the ASUS box. USB ‘home directory’ dongle moving between both. One ‘trick’: Make a generic home directory so you can log in and use the box without the dongle. Then mount the dongle file system over the top of it when desired. Now all your history and browser cache and such moves with you. Unmounted, the box looks like a normal generic Linux, with a sparse home directory. Mounted, it is ‘your space’. Pull the power and the dongle and it goes back to ‘generic uninteresting’. (This is not complete privacy, as some fingerprints are left in system logs and such, but far better than the usual “all my stuff is here, paw through it” and vastly better than the Microsoft “We stash stuff EVERYWHERE for your PRISM Program staff”… A good and easy “big lumps done” step, though.)
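In case the ‘trick’ isn’t obvious, here’s the shape of it as shell commands. Device name, mount point, and user are all placeholders; this is a sketch of the idea, not my exact setup:

```shell
#!/bin/sh
# Sketch: overlay the dongle's home directory on top of the generic one.
# /dev/sda1 and /home/me are placeholders for the dongle and the generic home.
sudo mount /dev/sda1 /home/me    # dongle contents now hide the sparse generic files
cd "$HOME"                       # re-enter home to pick up the mounted view
# ... work as usual; history, browser cache, etc. land on the dongle ...
sudo umount /home/me             # box reverts to 'generic uninteresting'
```

A regular mount simply shadows whatever was underneath it, which is exactly the behavior wanted here: nothing on the generic home is touched while the dongle is in place.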
Then, with lightly used files on the NFS server, you can have the deeper archives available everywhere. Next step there being to put that server a bit remote from the obvious workspace. (Eventually with encryption on the disks, but that might take a more beefy board. We’ll see in a few weeks.) The final step being to have that data served as encrypted blobs from The Cloud and only decrypted at point of use, but that’s a bit further out for me.
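That ‘encrypted blobs in The Cloud’ step can be prototyped today with nothing fancier than tar and openssl (gpg works just as well). A sketch; the path and passphrase handling are simplified placeholders, and a real setup wants a proper key-management story:

```shell
#!/bin/sh
# Sketch: wad up an archive tree and encrypt it BEFORE it leaves the house.
# /WD/archive and the pass: handling are placeholders -- don't put real
# passphrases on a command line in production.
tar -czf - -C /WD archive \
  | openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:CHANGEME -out archive.tgz.enc

# Decrypt only at point of use:
#   openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:CHANGEME -in archive.tgz.enc | tar -xzf -
```

The cloud end only ever sees archive.tgz.enc, which is the whole point: decryption happens at point of use, never at rest on someone else’s disk.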
The status as of now is pretty good. One dirt cheap R.Pi B+ is doing essentially all the infrastructure work, and doing it well. I’ve got about $120 all told in that setup, and most of that is the fat disk.
My desktop is ‘behind a firewall behind the boundary router firewall’ and working well. Someone would need to break through the first router and not be noticed as they tried to break through the second one. Hard, as the blinking lights would show traffic when I’m not doing anything… (Yes, I have blinky lights on each wire on the routers ;-) and I watch them…
The Model 2 is adequate for a desktop, but “someday” I’m going to get a larger single CPU in something (maybe that Cubieboard) as it does bog a bit on video. Or maybe software improvements will fix that as they make better use of the GPU / FPU hardware (Graphics Processing Unit, Floating Point [math] Unit). For most stuff, it’s fine. (And since the Chromebox is destined to be media server in the living room anyway, let’s just say that Netflix can go there instead… and Google can know what the spouse likes to watch on TV…)
The entire shop can be built out for around $300, including the $120 spent on disks and WiFi router. So far the R.Pi portion is about $100 for both boards and kit. Not too bad, IMHO. Add the Chromebox as media station and it’s still under $480. With TBs of disk, 2 levels of network and firewall, infrastructure services, and workstation. I’m good with that. And at those price points, duplicate some of the hardware in an iron EMP-shield ammo-box (water tight seal too), depot it where The Gendarmes will not be hoovering up everything (like a friend’s attic or basement), put an encrypted wad of data on ‘the cloud’ as backup, and you are pretty much ‘good to go’ for recovery in one or two days (potentially as short as hours…) post “event” (natural or otherwise).
There is still a good chunk of more work to do. It all needs a good polish and final write up (with things like how to config NFS for those unfamiliar with it). I also have more ‘services’ to install and turn on (and just get running better, like that temperature SQL database load…) and it needs another week or three of trash removal from old archives. (I expect to recover about 1/2 the disk space at least). But even just as it stands now it’s very workable.
Oh, and I need to dust off one of my deprecated UPS (Uninterruptible Power Supply) boxes, replace some batteries, and make the power supply “Democrat Proof”. (Not a political comment. Just an observation that California under Republicans has stable electricity and under Democrats we have rolling brown outs and blackouts. Which reminds me, we had a power failure a few weeks back and my generator didn’t start. Need to clean the carb and tune it up for winter.) As this is all very low power equipment, even one of the small cheap UPS boxes ought to be plenty. Or maybe just get a fat stack of 5 NiCad batteries and call it done… since it all runs on 5 VDC, it is a bit silly to go through all those AC / DC / AC / DC swaps… wall, UPS, UPS inverter, power brick… But that’s for “someday”.
The key point here, really, is just that I’m now “living the vision”: I’m essentially moved onto a USB dongle that gets plugged into the Machine Du Jour, which is mostly the R.PiM2, and only the ASUS when doing GIStemp development and / or something that demands XP or 64 bit Intel processor power. The Chromebox is mostly just a media server (and mediocre SSH / terminal access).
We’ll see where the tight spots are in this shoe and what it takes to stretch them. For now, though, it’s comfortable enough.