Down The Tech Rabbit Hole – systemD, BSD, Dockers

For the last couple of days I’ve been on a walkabout in the outback of tech land. Down the various rabbit holes along the way. One muse / ponder leading to another. I’ve about reached the end of this particular wander, but thought I’d share a couple of interesting bits from along the way.

First off, I’d started by wanting to try, again, to get a Linux or BSD running on the Macbook Air. Why is pretty simple. It has no SSD (that died, which is why I got it for “free”), so MacOS, being fat, finding 2 GB of memory just not quite enough, and just knowing swap must be on SSD, does a lot of swap… to a VERY PAINFULLY SLOW mini-SD card… Every so often it just stops “working” for a few seconds to go play with swap once a dozen web pages have been opened (depending on page weight), and it seems to never let go of memory until you exit Firefox / Opera.

I just figured with a Linux / BSD I could control swap and it would run nicely in 1/2 the existing memory.

Well, that’s still an ongoing effort, but I’ve learned a few more tricks. I now need to corral them all in one place, lay out the dependencies, and try again to get a working install. THE major complication is that the Air has NO ETHERNET, just WiFi; and the WiFi driver is a proprietary binary blob that isn’t in the releases. So you can’t just install minimal and upgrade, as you have no WiFi; nor can you download and install the driver; nor… So there’s a kabuki dance you must do that is very unclear as to exactly what to do, in what order, to get the install done and a working network connection. But at least now I know the problem and have a couple of (different…) exemplars to model after.

Along the way I ran into an interesting discussion of using virtual private servers for various things, not the least of which is a personal private VPN. This company has some fairly low cost VPS choices and seems well regarded:

Their lowest cost server with only 128 MB of RAM is just enough for a headless Debian / Devuan doing something like VPN for a couple of folks:


Plan     RAM / VSwap     CPU      IPv4   Storage   Bandwidth   Price
128MB    128 / 64 MB     1 Core   1      12 GB     500 GB      $15 / yr
256MB    256 / 128 MB    1 Core   1      25 GB     1000 GB     $8 / qtr
512MB    512 / 256 MB    2 Cores  1      50 GB     2000 GB     $5 / mo

That’s $15 / year for a minimal headless service providing box that will do a lot of interesting things. The next two tiers work out to $32 / year and $60 / year, so still quite cheap. Bandwidth is per month, an odd metric that seems to have originated in the digital cellphone market, but Oh Well… Unless you are watching a lot of TV / video it’s likely enough for everything else. (It would give a good reason to quash video commercials and crap).

So probably at some indeterminate future time I’m going to “get one” just to play with it if nothing else.

Which then led to the idea of deploying applications out to such a machine. I’d thought of something like putting up a GISStemp on the AWS cloud, but feeding Amazon was not that attractive. Still, it would be nice for things like climate models to be able to deploy out to a huge compute farm for a run, then button it up when done, and not need to do any of that back room computer building / maintaining stuff. Here was a place where I could get VMs of my own to play with, just supply money… So another “someday” project idea was sprouted.

Which, this morning, sent me off looking at the idea of a swarm of “containers” as a way to cluster. Unlikely in that most compute intensive applications want the closest possible (minimum communications bandwidth / latency / cost) connections between nodes; and containers are better for isolated instances that don’t need much communications (like booking engines or serving up news pages to individuals). But I’ve had a desire to explore the whole “container” tech thing. It took off just after I left any real development role, so I’ve watched it but not done it.

Sidebar on Containers: The basic idea is to isolate a bit of production application from all the rest of the system and make sure it has a consistent environment. So you package up your DNS server with the needed files and system config and what-all, and stick it in a container that runs under a host operating system. It isn’t a full-on Virtual Machine, so avoids that overhead and inefficiency, but it does isolate your applications from “update and die” problems, most of the time. “Docker” is a big one. Lately Red Hat et al. have been pushing for a strongly systemD dependent Kubernetes instead. The need to rapidly toss a VM into production and bring up a ‘container’ application on it drove (IMHO) much of the push to move all sorts of stuff into systemD to make booting very fast (even if it then doesn’t work reliably… /snark;) Much of the commercial world has moved to putting things in Docker or other container systems. On BSD the equivalent is called “jails”, as it keeps each application instance isolated from the system and from other applications. On “my Cray” we used a precursor tech, change root or “chroot”, to isolate things for security; but I got off that train before it reached the “jails” and “docker” station.
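For flavor, that old chroot trick can be sketched in a few lines. (Paths here are illustrative, and the final step needs root, so it is shown commented out.)

```shell
# Assemble a minimal root filesystem, then confine a shell to it.
JAIL=/tmp/demo-jail
mkdir -p "$JAIL/bin" "$JAIL/lib"
cp /bin/sh "$JAIL/bin/"          # the confined process needs its own shell
# Also copy every library that `ldd /bin/sh` lists into $JAIL,
# preserving paths, then (as root):
# chroot "$JAIL" /bin/sh         # this shell now sees $JAIL as "/"
```

The confined process can’t see anything outside its little tree, which is the germ of the whole container idea.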

Pondering putting a Docker container on a VPS (Virtual Private Server) off in RamNode land, I wondered if I could play with Docker containers on the Raspberry Pi? The idea of putting up a virtual image inside a dinky SBC (Single Board Computer) with very limited speed caused a momentary shudder… but for things lighter than browsers it ought to be fine. Even on my local system it would let me isolate some services from seeing the actual system and put in another layer of security, in that a compromised application would still be stuck in the container. Not impossible to break out, but significantly harder. But still, would anyone have been so silly as to have ported this? Well, yes! I’m not the only silly person in tech land ;-)

Posted by Matt Richardson
Executive Director, Raspberry Pi Foundation North America
30th Aug 2016 at 12:57 pm

Docker comes to Raspberry Pi

If you’re not already familiar with Docker, it’s a method of packaging software to include not only your code, but also other components such as a full file system, system tools, services, and libraries. You can then run the software on multiple machines without a lot of setup. Docker calls these packages containers.

Think of it like a shipping container and you’ve got some idea of how it works. Shipping containers are a standard size so that they can be moved around at ports, and shipped via sea or land. They can also contain almost anything. Docker containers can hold your software’s code and its dependencies, so that it can easily run on many different machines. Developers often use them to create a web application server that runs on their own machine for development, and is then pushed to the cloud for the public to use.

While we’ve noticed people using Docker on Raspberry Pi for a while now, the latest release officially includes Raspbian Jessie installation support. You can now install the Docker client on your Raspberry Pi with just one terminal command:

curl -sSL | sh

From there, you can create your own container or download pre-made starter containers for your projects. The documentation is thorough and easy to follow. You can also follow this Pi-focused guide by Docker captain Alex Ellis.
Docker Swarm

One way you can use Raspberry Pi and Docker together is for Swarm. Used together, they can create a computer cluster. With Swarm containers on a bunch of networked Raspberry Pis, you can build a powerful machine and explore how a Docker Swarm works. Alex shows you how in this video:

So it’s been there about 2 years now. Originally via a ‘grab a URL and run it as a script through the shell as root’ process – very dodgy, but often done. A bit more poking around shows that ‘apt-get install docker’ ought to work now (though some testing of that belief is in order…) and would be much safer.
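The safer habit can be sketched in a few lines. (The function below is my own illustration, not an official procedure; note also that on Debian / Raspbian the Docker package is actually named docker.io, since the name ‘docker’ was already taken by an unrelated package.)

```shell
# Fetch, inspect, THEN run -- instead of piping a URL straight to a root shell.
fetch_then_run() {
  url="$1"
  curl -fsSL -o /tmp/install.sh "$url"   # download it first
  less /tmp/install.sh                   # actually read what will run as root
  sh /tmp/install.sh                     # then run it deliberately
}
# Or skip the vendor script entirely where the distro packages it:
#   sudo apt-get install docker.io
```

Either way beats handing an unread script to root.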

So, OK, YASP Yet Another Someday Project… Install and learn / play with docker on the Pi. Configure my mini-cluster to run docker as a “docker swarm”. Find some interesting application to feed to the swarm and learn to operate / manage it.
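The swarm bring-up itself looks to be only a few commands. A hedged sketch (addresses made up, and I’ve not run this yet):

```shell
# Turn a pile of networked Pis into one Docker swarm.
MANAGER=192.168.0.10           # hypothetical address of the head node
# On the head node:
#   docker swarm init --advertise-addr $MANAGER
# That prints a 'docker swarm join --token ...' command; paste it into
# each worker node. Then, back on the manager:
#   docker service create --name hello --replicas 4 nginx
#   docker node ls             # should list every Pi in the swarm
echo "swarm manager would advertise on $MANAGER"
```

Services get spread across whatever nodes have joined, which is the “cluster” part of the pitch.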

There’s an interesting picture of a Docker Swarm built on Raspberry Pi Zero boards at the link.

Following a lead back from that site to the author’s blog resulted in YASP again! He runs his blog from a Raspberry Pi at home!

Here’s his posting in detail on setting up Docker on a Pi (though I suspect it’s now in the apt-get repositories)

And here is his series on setting up his blog on a R.Pi

Alex Ellis on nginx, blog, docker, linux, cloud 16 February 2018

Run your blog with Ghost, Docker and LetsEncrypt

Host your own blog just like mine with Ghost, Docker, Nginx and LetsEncrypt for HTTPS. Find out about key Day 2 operations like backup and analytics too. »

I am carefully resisting the urge to set up a blog on my Raspberry Pi, but weakening! I may do the process just to experiment with it, but not put it into production. OTOH, putting it up on a hosted VPS would mean I could run a blog from a reliable platform without needing to do maintenance / recovery after things like lightning strikes when on the other side of the country… It wouldn’t be running on a R.Pi, but on a VPS, yet everything else could stay the same. Oh, and the VPS has a fixed IP address. (I’d want to get registered for a domain name, though, and that adds a whole ‘nother set of Someday Tasks…) And at that point that muse slows to a halt… yet more email and bureaucracy, with more published PII (Personal Identifying Information – a term used in businesses to describe the stuff that gets you sued and in congressional grillings when it leaks…) but mandated to be public for a domain reg.

Still, there would be no ads and it would be private (no corporation harvesting information from the back side, like contact stats… no ‘google analytics’). Maybe someday…

I do find it just an infectious idea, though. Having a little blog engine sitting on the desk, serving away…

Then I briefly took a left turn into Yocto. Yocto is a ‘builder of systems’ that lets folks (mostly the embedded systems guys) custom roll a new operating system with just the bits they want included in it. The embedded system folks are often using THE smallest hardware possible (save $1 on a million toaster ovens, you have $1 million more profit) and trying to get the MAX performance out of it, so usually pruning and trimming and tuning like everyone else did “in the old days”. That often means a Custom Build. Well, since building from scratch is long and hard, someone made a tool to make it easier: Yocto. And it looks like someone noticed the R.Pi, so there’s a Yocto for that:

Building Raspberry Pi Systems with Yocto

08 Mar 2018

This post is about building Linux systems for Raspberry Pi boards using tools from the Yocto Project.

Yocto is a set of tools for building a custom embedded Linux distribution. The systems are usually targeted for a particular application like a commercial product.

If you are looking for a general purpose development system with access to pre-built packages, I suggest you stick with a more user-friendly distribution like Raspbian.

Yocto uses what it calls meta-layers to define the configuration for a system build. Within each meta-layer are recipes, classes and configuration files that support the primary build tool, a python framework called bitbake.

The Yocto system, while very powerful, does have a substantial learning curve, and you may want to look at another popular but simpler tool for building embedded systems, Buildroot.

I have created a custom layer for the RPi boards called meta-rpi.

The systems build from this layer use the same GPU firmware, linux kernel and include the same dtb overlays as the official Raspbian systems. No hardware functionality (compared to Raspbian) is lost using Yocto. It is only the userland software that differs and that is configurable by you.

There are some example images in meta-rpi that support the programming languages and tools that I use in my own projects.

Why this interests me is that the embedded guys are often resistant to stupid ideas, instability-inducing code, and bloat. That is, Yocto lets you choose NOT to use systemd. (It does let folks use it should they wish.) You know, that whole “init freedom” thing? So a maybe-someday, if I have issues that need a custom build, is to look more at Yocto for the R.Pi and try building a custom system with it that does not include systemd.
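For reference, the “no systemd” choice in Yocto comes down to a couple of lines of configuration. A sketch of a conf/local.conf fragment (variable names per the Yocto docs of roughly that era; check your own release before trusting it):

```
# conf/local.conf fragment: build the image around sysvinit, not systemd
MACHINE = "raspberrypi3"
DISTRO_FEATURES_remove = "systemd"
VIRTUAL-RUNTIME_init_manager = "sysvinit"
VIRTUAL-RUNTIME_initscripts = "initscripts"
```

Bitbake then just leaves systemd out of the build, which is the whole point.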

The “gotcha” lurking here is that the embedded systems folks rarely use things like browsers on their targets, so most of the stuff with hard systemd hooks that will break if it is not there is just not of interest to them. This implies you can make the cut-down minimal non-systemd embedded OS just fine, but the application-land stuff with systemD dependencies will likely break. So Devuan for the desktop for the foreseeable future, until proven otherwise. OTOH, should you want to build, say, a dedicated SQL server in a container, using a Yocto based minimal build to drive it could make sense…

At which point I pondered the question of Docker on BSD. It is sort of there… I sort of want to run BSD. But does it run well or just barely? The answer is “not well”. Experiment with it, sure, but production? Use jails.
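For contrast, a minimal FreeBSD jail is compact to define. A sketch of /etc/jail.conf (hostname, path, and address are all illustrative):

```
# /etc/jail.conf -- one jail, started with: service jail start www
www {
    host.hostname = "www.example.org";
    path = "/usr/jail/www";
    ip4.addr = 192.0.2.10;
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
}
```

No daemon, no layer cake; the kernel itself does the isolating.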

Down in comments is this:

Jan 31, 2018
Some recent interesting development regarding CoreOS:

Which basically says Red Hat has bought CoreOS (a minimal cut down “core” Linux just for running containers) and is pushing hard on Kubernetes as their container technology of choice.

The comment just below that is telling… (Why I like the BSD forums – so many clueful in so compact a space…)

Jan 31, 2018
CoreOS _heavily_ relies on and makes use of systemd
and provides no secure multi-tenancy as it only leverages cgroups and namespaces and lots of wallpaper and duct tape (called e.g. docker or LXC) over all the air gaps inbetween…
Most importantly, systemd can’t be even remotely considered production ready (just 3 random examples that popped up first and/or came to mind…), and secondly, cgroups and namespaces (combined with all the docker/LXC duct tape) might be a convenient toolset for development and offer some great features for this use case, but all 3 were never meant to provide secure isolation for containers; so they shouldn’t be used in production if you want/need secure isolation and multi-tenancy (which IMHO you should always want in a production environment).

SmartOS uses zones, respectively LX-zones, for deployment of docker containers. So each container actually has its own full network stack and is safely contained within a zone. This provides essentially the same level of isolation as running a separate KVM VM for each container (which seems to be the default solution in the linux/docker world today) – but zones run on bare-metal without all the VM and additional kernel/OS/filesystem overhead the fully-fledged KVM VM drags along.[…]

The three links have three horror stories of systemD induced crap. Makes me feel all warm and fuzzy that I rejected it on sight as the wrong idea. Confirmation, what a joy.

First link:

10 June 2016
Postmortem of yesterday’s downtime

Yesterday we had a bad outage. From 22:25 to 22:58 most of our servers were down and serving 503 errors. As is common with these scenarios, the cause was cascading failures, which we go into in detail below.

Every day we serve millions of API requests, and thousands of businesses depend on us – we deeply regret downtime of any nature, but it’s also an opportunity for us to learn and make sure it doesn’t happen in the future.

Below is yesterday’s postmortem. We’ve taken several steps to remove single points of failure and ensure this kind of scenario never repeats again.


While investigating high CPU usage on a number of our CoreOS machines we found that systemd-journald was consuming a lot of CPU.

Research led us to a discussion which included a suggested fix. The fix was tested and we confirmed that systemd-journald CPU usage had dropped significantly. The fix was then tested on two other machines a few minutes apart, also successfully lowering CPU use, with no signs of service interruption.

Satisfied that the fix was safe it was then rolled out to all of our machines sequentially. At this point there was a flood of pages as most of our infrastructure began to fail. Restarting systemd-journald had caused docker to restart on each machine, killing all running containers. As the fix was run on all of our machines in quick succession all of our fleet units went down at roughly the same time, including some that we rely on as part of our service discovery architecture. Several other compounding issues meant that our architecture was unable to heal itself. Once key pieces of our infrastructure were brought back up manually the services were able to recover.

The start of the cascading failure was restarting systemd-journald on all machines in quick succession. […]

2nd Link:

Why did we build our solution on top of FreeBSD?
Posted by Egil Hasting on Nov 7, 2017 8:35:27 AM

Synergy SKY offers multiple software solutions; however, the one I am going to talk about here is an analytic solution for video related CDRs (Call Detail Records).

[description of system design omitted -E.M.Smith]

Now we needed to decide on what would be the OS of the hardware or virtual hardware (the one called “SAS Appliance at Customer location” in the diagram)? Linux of course… We had experience with CentOS from a previous product and building a virtual appliance. However, the more we talked about it, the more we realized that we should be looking around for options.

One major issue we experienced was that Systemd managed to crash the whole dbus-systemd connection if our software took too much memory, leaving the system in an unstable state, with reboot as the only option.
(I don’t think we can blame CentOS directly for that; however, pulling such premature technology into a proclaimed stable platform makes no sense at all!!). We had some other minor issues as well, which are probably weeded out in the latest versions of CentOS (and not worth mentioning).

The third looks like a Twitter discussion of their experience but I’ve not gone into it.

In Conclusion

So that was this morning after morning coffee… (Day in the life of a hacker sort ;-)

I’m pretty sure I’m going to play with a VPS. For $15 / year it’s less than the shipping cost on most new toys ;-)

I’m pretty sure I’ll do a minimal trial of a Docker Swarm on my mini-cluster.

I’m thinking maybe someday I’ll try a toy blog bring up on a R.Pi.

I’m certain I’ll continue to do everything possible to avoid SystemD.

I’ll not be using any OTHER tech pushed by Red Hat (Kubernetes, CoreOS) either, since they WILL be highly dependent on SystemD, and the Red Hat management has already shown they are prone to bad decisions with the SystemD roll out.

Nice to know, since CoreOS is one of the choices for OS type at the VPS vendor. Instead I’ll choose Debian and do a Devuan “uplift” on it (or at least find out if that is not possible – perhaps it will require a KVM VPS instead of their OpenVZ choice, and $30+ / year instead of $15).

Well, it’s time for lunch and a coffee. I’m pretty much awake now and ready to start my day… Nothing like a run through tech land to warm up the brain cells ;-)



About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present "hot buttons" are the mythology of Climate Change and ancient metrology; but things change...
This entry was posted in Tech Bits. Bookmark the permalink.

2 Responses to Down The Tech Rabbit Hole – systemD, BSD, Dockers

  1. E.M.Smith says:

Interesting idea. Docker engine (where you build containers) on a Pi Zero that connects via USB networking through your main computer (Windows or Mac or whatever) to the internet.

    So for a few bucks you get a Docker engine, can build containers and deploy them onto other Docker platforms, and it all fits in a pocket. Hmmm….

It also looks like you can break applications into separate containers, then stitch them together (I’m just working my way through the tutorial now…), which might mean (provided some interprocess communication can be done between operational machines running different containers) that things with several processing steps could have parts run on different machines, shoveling data along from stage to stage. Provided it lets you do that ;-)

    I’d like to do something with Docker just to get familiar with it, but at this point I don’t know what I’d use it for… I think I need to know more about what kind of I/O can be done from a container. Just data or can it do networking things? So can you put a web proxy in a container? Ah, the joys of learning something brand new…
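Partially answering my own question, as a sketch: containers do get a real network stack, and on a user-defined bridge they can resolve each other by name. Image names below are placeholders I made up:

```shell
# Multi-stage pipeline idea: each stage in its own container, all on
# one user-defined network, handing data along stage to stage.
pipeline_sketch() {
  docker network create stages
  docker run -d --name stage1 --network stages example/producer
  docker run -d --name stage2 --network stages example/consumer
  # stage2 can reach stage1 at hostname "stage1" on the stages network
}
```

So a web proxy in a container looks entirely plausible; it can listen on a published port like any other service.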

  2. E.M.Smith says:

    Things that make you want to strangle somebody…

    I’ve got this nice cluster of headless nodes. I’ve used it without change for a while. A few weeks (months?) ago I could no longer ssh in to the nodes. OK, other things to do, didn’t debug it beyond “systems are up when I plug in monitor and KB” and ssh doesn’t work.

    Well, today I got my circular 2it and debugged.

    1) Seems somewhere along the line you must now put a file named ssh in /boot for ssh to be active at boot time.

    2) MaxStartups in /etc/ssh/sshd_config was getting complaints. I commented it out.
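For my notes, the two fixes as a shell sketch (wrapped in a function since I only ran the steps by hand; paths as on Raspbian):

```shell
# (1) flag file so sshd is enabled at boot; (2) stop sshd choking on MaxStartups
fix_headless_ssh() {
  sudo touch /boot/ssh
  sudo sed -i 's/^MaxStartups/#MaxStartups/' /etc/ssh/sshd_config
  sudo service ssh restart
}
```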

    Now ssh works again and I can use the headless nodes again.

I’ve not sorted out if both those changes were needed, nor have I figured out “what changed” on MaxStartups. It still shows on web searches. I had set the values to 3 digit numbers as I wanted effectively unlimited limits; has someone now set it to a limit of 2 digits? Is MaxSessions now something else? Whatever…

    Dick With Factor strikes again as someone, somewhere, just can’t leave it alone… and I got an “upgrade OS and die” result.

    OK, it’s working, I know the next thing to check if any other boards have the problem, and all is getting better in the universe.

Comments are closed.