Raspberry Pi Operating Systems Review

Or OSs On Parade…

First, a tiny bit of terminology and the parts of the software that make your computer “go”. We’ll start at the hardware and work our way up to the part you see…


The hardware, things like screens and disk drives and mice, needs specific software that knows THAT hardware and how to make it go / talk to it. These are called “drivers”. It is no end of grief trying to find the right driver for a specific monitor and / or video card, as the makers of these things keep mutating them. Since each driver takes space and time to maintain, older drivers get dropped from newer operating systems. Eventually the newest code won’t run on your old hardware. Similarly, new Whiz Bang! hardware often won’t work as no driver is in the system yet. The sweet spot seems to be hardware about 1 to 2 years old. The driver typically has two halves. One talks to the hardware directly, putting bits in registers and reading return bits. The other talks to the operating system and puts things in more abstract forms. Glue code translates between those two halves. In this way, porting a driver to a new Operating System (OS) environment only needs a rewrite of the OS-side half, while a new bit of hardware only takes a rewrite of the hardware-side half. Often (though not always!) these are written as “modules”, parts of the OS that only get loaded into memory when needed.

One of the BIG advantages for a company like Apple, with more closed hardware, is the small number of drivers they must get running correctly.

One of the BIG issues for Linux is that it is expected to run on every single thing ever produced, or that will be produced in the next 5 years. It isn’t a rational expectation, but it is what people expect.

When porting an OS to a new batch of hardware, most of the work goes into getting the “tool chain” (programs that build programs, like compilers and linkers) ported, porting and starting the ‘kernel’ (or core services), and then the drivers all need porting and testing. Then you have the basics up.


As noted, there are some very core services used by just about everything. How to open a file. How to connect bits from the output of a program to the input of another. Those sorts of things go into one block of code called the “kernel”. There is a constant battle to keep out as much as possible (since a fat kernel is a slow buggy insecure kernel) and to put IN as much as possible (since if your bit of code is in the kernel it can go very fast and do anything with lots of privileges making your life easier…)

Linus Torvalds wrote the Linux kernel, and he is the guy who decides what goes into the kernel, what goes into a ‘loadable module’ (sort of…) and what stays out.

Now that’s not a hard rule. System builders can choose a “monolithic kernel”, where it all gets linked together into one image, or a “modular” kernel, where some bits are left on disk and only loaded when needed. There are specific reasons why you might want one over the other in a given use. (Modular lets you have more choices of drivers, for example, while not sucking down a few GB for drivers you don’t use or need… while monolithic will always have what you need in memory and never take a long pause to find it on disk – great for time sensitive things like routers.) But Linus generally decides what class of thing can be in a module, or not.

The Linux kernel forms the base for not just the usual Linux machines, but for ChromeOS, Android, and who knows what experimental things. Since the source code is published, you can make anything you want from it; but at the time you want to sell it, the license applies…

There are other kernels for other operating system choices. BSD, for example, has a history that leads back to the ’70s and Unix at Bell Labs. It is an entirely different kernel with different system calls and choices. There are variations here, too. So, for example, the NetBSD flavor emphasizes portability, with a kernel designed to run on a huge range of hardware, while OpenBSD is more worried about security issues.

Then there are the things like Plan 9 where the guys at Bell Labs went on to make a whole new thing. That kernel is designed to support things like networked machines sharing workload and data more directly.

The point? Picking your kernel tends to also pick a lot of other things. LOTS of stuff (end user applications for example) was created on Linux. It expects to find Linux interfaces and Linux services and more. It will not run on a BSD kernel. (Some folks have made ‘shim’ programs to let some of that Linux Only code run on BSD, translating Linux system calls into their BSD equivalents, but that’s not exactly perfect all the time…)

For this reason, just stomping a foot down on “BSD!” or “Linux!” without working out the dependencies can result in a lot of suffering later when, after a few weeks work, you find out your favorite Browser that you Just Can’t Live Without! is not available on the kernel you selected… or that no driver exists for your Mondo-Kool-XRB-2020 Video Display… that you just put down $4000 to buy.

It is much Much MUCH better to work out those issues before you commit to The One Holy Kernel or Distribution. (I make this easier for me by having a half dozen chips (mini-SD cards) with a half dozen operating systems on them so that what’s missing from one, I can use from the other. While I hope to end that need ‘someday’, it works best for Right Now to have that flexibility. Besides, it’s all of about 1 minute to reboot on a new chip…)


There are chunks of code that do things everyone needs. These tend to be put in shared “libraries”, often named something like libfoo.so on Linux systems. These do things like format strings, do I/O to device drivers, do math. They are typically supplied whole by your installation. When rebuilding from source code, they need to be compiled in a particular order so that the ones needed by later steps exist first. Often a set is made on one machine that isn’t the target (say, on an Intel x86 laptop with the binary target being an ARM chip Pi board) so that they can be just imported to the target ‘ready made’ before it has a compiler of its own (or just because the target is very slow and the laptop is very fast…). This is called a ‘cross compiler’ as it is making (compiling) binary code for a system with a different architecture than its own.

So we have drivers to talk to hardware, a kernel with basic services, perhaps modules for some optional basic services, and a library of widely used program bits needed by just about everyone for things like math and text handling (and more).

Build Chain or Tool Chain:

This is stuff like compilers and linkers and loaders and YACC (Yet Another Compiler Compiler) and more. These are programs that turn text like FORTRAN or C into bags of binary bits that run in the CPU. Beyond knowing that you need them to write programs, and that the folks who use them know what to do with them (or are learning it), you can pretty much just install them as needed and forget about it.

In bootstrapping a new build of an operating system, it is necessary to put a lot of attention into the tool chain as it both depends on “libraries” of already compiled code, and it BUILDS those libraries. So in the case where you are making a ‘first build’, it is now customary to make a ‘cross compiler’ on some other already running system, use that to make a set of starter libraries, then use both of those to make the target system tool chain, then use those target system libraries and tool chain to remake themselves. (This is all done to wash out mistaken dependencies where, for example, your C compiler is expecting a library from your cross compiler system, but you forgot about that, and now it doesn’t build on your target system…)

In the “old days” one would sometimes ‘hand compile’ or ‘hand assemble’ a bootstrap compiler that was very minimal and just enough to compile enough of the tool chain on the target system for it to start building more complete versions of the tools, then of itself. With the ubiquitous presence of computers and cross compilers today, that’s pretty much history. (I’d not be surprised if some guys making small appliances with minimal hardware still did this. The FORTH folks especially are good at this.)


There’s a lot of “tools” in a typical operating system. From a “terminal window” to a disk partitioning tool like ‘parted’ or the graphical version ‘gparted’. Once you have a kernel, libraries, and tool chain running, the next step is usually to build a bunch of tools (starting with the ones you use most).

This is a list of thousands, so I’m not going to put it here… but it does things as diverse as spool print out to your printer, let you configure and set up your WiFi link, compress / uncompress files, and generally get stuff done oriented toward the system itself. (i.e. not oriented toward your term paper or calendar).

Systems (“distributions” or “distros”) are highly variable as to the particular utilities included. Some of them have tied the tools to a particular “desktop” choice – a mistake IMHO – by using “kits” that make it easy for those utilities to present their interface on that window system or desktop, but that also tie them to it. For this reason, choosing, say, KDE vs GNOME vs LXDE can determine just what utilities are available to you.

I generally try to avoid desktops / window systems that have such ‘lock in’ and things with too much “Swiss Army Knife” design. I generally like the Unix Way of small programs, doing one thing well, and independent of all others. We will see this issue return in discussions further down.

At this point, you have the basic Unix / Linux of the ’80s to ’90s. Pretty much a command line interface with utilities and ability to write programs. Then a vendor would write some “applications programs” and sell them to you, to be installed on top of this stuff.

Then X happened…

Xorg, Windows & Desktops

Note that from here on down, we are NOT talking about the Operating System. We are talking about a pretty Graphical User Interface layer on top of it. You can put different layers on different operating systems. So saying “I want a MATE system” is incomplete. You want a MATE system on Linux, or a MATE system on BSD, or…

I’ve often run into folks who say “I just love Fedora Gnome as my operating system” when they really mean “I love the Gnome interface on {whatever} operating system”… While there are some differences (often from porting decisions) between MATE on Ubuntu vs MATE on Fedora vs MATE on {whatever}, they are not significant to most folks.

In these discussions “Windows” does not mean “MS Windows”. LOooong before Microsoft appropriated the name “Windows”, it was in wide use for all sorts of “windowing systems”. I resent their theft. So here it means “Those boxes of graphics on your screen” as opposed to a simple line text interface.

Starting with X

The X-Windows system was written at MIT. It works in all sorts of places tolerably well. It is a big fat ugly pig with painful configuration and upside down thinking in it. (For example, your desktop terminal, connecting to a mainframe, is the ‘server’ and the mainframe is the ‘client’ – backwards from all prior use. Cute, but a PITA. MIT is often like that. Too ‘cute’ by half.) You can see the kind of bafflegab (that is valid) which this causes in statements like this one:


X terminals

An X terminal is a thin client that only runs an X server. This architecture became popular for building inexpensive terminal parks for many users to simultaneously use the same large computer server to execute application programs as clients of each user’s X terminal. This use is very much aligned with the original intention of the MIT project.

So the “client” is the X “server”… OK… Millions of folks have had to fight that kind of “cuteness” for a very long time, and will for a long time to come.

But I digress…

In any case, X runs everywhere, it is a pain, and all sorts of folks have built all sorts of graphical ‘environments’ on top of it.

I won’t belabor the details too much, but just realize there is a very real and important difference (to someone…) between a Window Manager, a Window System, a GUI, a Widget Toolkit (like a library for things inside a window manager), a Desktop, a… You can spend days just sorting that out.

What most folks care about is what is most often called a “Desktop”, but don’t be surprised if you find a given “desktop” really isn’t, but is a window manager instead… (it has to do with the total capabilities it gives you…)

So what are these beasts? This is where the bulk of the food fight over given “distros” is fought. Folks have a given ‘look and feel’ they like, and certain tools they want, and if those are tied to KDE or MATE, then THAT is what they must have. But that ISN’T the Operating System!

A very short list, with just a comment or two. (I am omitting ’tiled’ window systems as I find them a pain since you must know all sorts of command shortcuts to get anything done. Fine for ‘the experienced geek’, a PITA for everyone else.)


twm (Tab Window Manager) is a window manager for the X Window System. Started in 1987 by Tom LaStrange, it has been the standard window manager for the X Window System since version X11R4. The name originally stood for Tom’s Window Manager, but the software was renamed Tab Window Manager by the X Consortium when they adopted it in 1989. twm is a re-parenting window manager that provides title bars, shaped windows and icon management. It is highly configurable and extensible.

twm was a breakthrough achievement in its time, but it has been largely superseded by other window managers which, unlike twm, use a widget toolkit rather than writing directly against Xlib.

Various other window managers—such as vtwm, tvtwm, CTWM, and FVWM—were built on twm’s source code.

twm is still standard with X.Org Server, and is available as part of many X implementations.

twm gives you a basic graphical environment and doesn’t need a ‘widget kit’ (nor tie you into one), but it is primitive by today’s expectations and you get to do a lot of the work of making things ‘go’ yourself if writing an application for it. (Or for any of its derivatives.)


With Xfce we get more “goodies”, as it is a “desktop environment”, not just a ‘window manager’.

Xfce (pronounced as four individual letters) is a free and open-source desktop environment for Unix and Unix-like operating systems, such as Linux, Solaris, and BSD.

Xfce aims to be fast and lightweight, while still being visually appealing and easy to use. Xfce embodies the traditional UNIX philosophy of modularity and re-usability. It consists of separately packaged parts that together provide all functions of the desktop environment, but can be selected in subsets to suit user needs and preference. Another priority of Xfce is adherence to standards, specifically those defined at freedesktop.org.

As I’m not fond of some of the things done by freedesktop.org, that last line is thin tea… but I generally find it a workable system. It is what I’m using now on Slackware, though I generally prefer LXDE. Do note that, like X, it runs in many kernel environments from Linux to Solaris (Unix SystemV derived) to BSD.


LXDE (abbreviation for Lightweight X11 Desktop Environment) is a free desktop environment with comparatively low resource requirements. This makes it especially suitable for resource-constrained personal computers such as netbooks or system on a chip computers.

LXDE is written in the C programming language, using the GTK+ 2 toolkit, and runs on Unix and other POSIX compliant platforms, such as Linux and BSDs. The goal of the project is to provide a desktop environment that is fast and energy efficient.

In 2010, tests suggested that LXDE 0.5 had the lowest memory usage of the four most popular desktop environments of the time (GNOME 2.29, KDE Plasma Desktop 4.4, and Xfce 4.6), and that it consumed less energy which suggests mobile computers with Linux distributions running LXDE 0.5 drained their battery at a slower pace than those with other desktop environments.

LXDE is the default desktop environment of Knoppix, Lubuntu, LXLE Linux, Peppermint Linux OS and Raspbian, among other distributions.

LXDE uses rolling releases for the individual components (or group of components with coupled dependencies). The default window manager used is Openbox, but a third-party window manager may be configured to be used with LXDE, such as Fluxbox, IceWM or Xfwm. LXDE includes GPL-licensed code as well as LGPL-licensed code.

Note that it is widely used in just the kinds of systems that are my target, resource limited ones. It is also called “fully featured”, a big plus, IMHO.

The downside? It depends on a ‘widget toolkit’ for graphics: GTK+. If you don’t have that, it doesn’t work. Since I use Gimp, having the Gimp tool kit around isn’t a burden to me since it will be there anyway. (Yes, they changed what the acronym stands for since Gimp, but I’m not into changing history…)

Also note at the bottom of the quote that the “window manager” on which it depends can be changed. People who really care about that kind of thing get excited about it, me not so much. I’m sure there are some kinds of resource or look and feel things they like, but I’ve just not cared enough to figure it out. (Or, conversely, I’m too busy getting a basic desktop running and browser up to care about the shapes of corners or the way windows animate – that I always shut off anyway…)

From here on down, things get bigger, fatter, more resource hungry, and more tied to particular tools, kits, and utilities. They also give some added features (but not ones I’ve really needed once on LXDE…)

Still they are where you start to notice that “Hey, I wanted FOO tool and it isn’t here? I want my KDE back!”…


MATE: I’ve used it. I like it. It’s too green as I like blue better ;-) Works well on an Intel machine, OK on the Pi M2, too much for the Pi B IMHO. It is basically GNOME from before they screwed it up with GNOME 3, but I never really liked GNOME much. (I was a KDE guy then, before it got screwed up ;-)

MATE (/ˈmɑːteɪ/; Spanish pronunciation: [ˈmate]) is a desktop environment forked from the now-unmaintained code base of GNOME 2. It is named after the South American plant yerba mate and tea made from the herb, mate. The name was originally all capital letters to follow the nomenclature of other Free Software desktop environments like KDE and LXDE. The recursive backronym “MATE Advanced Traditional Environment” was subsequently adopted by most of the MATE community, again in the spirit of Free Software like GNU (“GNU’s Not Unix!”). The use of a new name, instead of GNOME, avoids conflicts with GNOME 3 components.


GNOME 3 (released in April 2011) replaced the classic desktop metaphor, substituting its native user interface: GNOME Shell. This action led to some criticism from parts of the free software community. Some users refused to accept the new interface design of GNOME and called for continued development of GNOME 2. An Argentine user of Arch Linux started the MATE project in order to meet this demand and announced the availability of MATE on 18 June 2011.

As I like my ‘desktop metaphor’, Gnome 3 was dead on arrival with me. MATE is a reasonable alternative.

Now notice the baggage it drags with it, and how much code your little Pi might have to load and execute to draw a window or click something:

Software components

See also: List of GTK+ applications

MATE has forked a number of applications originating as the GNOME Core Applications, and developers have written several other applications from scratch. The forked applications have new names. Most of them used names from Spanish, including:

Caja (box) – File manager (from Nautilus)
Pluma (quill/feather/pen) – Text editor (from Gedit)
Atril (lectern) – Document viewer (from Evince)
Engrampa (staple) – Archive manager (from Archive Manager)
MATE Terminal – Terminal emulator (from GNOME Terminal)
Marco (frame) – Window manager (from Metacity)
Mozo (waiter) – Menu item editor (from Alacarte)

Thus my staying on LXDE…


KDE (/ˌkeɪdiːˈiː/) is an international free software community developing free and libre software like Plasma Desktop, KDE Frameworks, and many cross-platform applications designed to run on modern Unix-like and Microsoft Windows systems. It further provides tools and documentation for developers to write such software, which makes it a central development hub and home for many popular applications and projects like Calligra Suite, Krita, digiKam, and many others.

The Plasma Desktop is one of the most recognized projects of KDE and the default desktop environment on many Linux distributions, such as openSUSE, Mageia, Kubuntu, and Manjaro Linux.

Back when KDE was just the K Desktop Environment, it was nice and light and worked well. Then they decided to make it a Swiss Army Knife of All Things to All People and it got fat and dependency hell ridden. I no longer use it. Just look at that list! It’s become a development environment. I have a C compiler for that!

Like many things German (where it comes from) it has tried to take over all functions it can and has more options and complexity than a Tiger Tank. You need a few languages just to install one PART of it:

KDE Frameworks

In KDE 4 series KDE Platform consists of the libraries and services needed to run KDE applications. Libraries include: Solid, Nepomuk, Phonon, etc. Packages include: kdelibs, kdepimlibs and kdebase-runtime. The libraries must be licensed under one of the LGPL, BSD license, MIT License and X11 license.

While the KDE Platform is mainly written in C++, it includes bindings for other programming languages. Bindings use the following generic technologies:

Smoke: for creating bindings for Ruby, C# and PHP
SIP: for creating bindings for Python
Kross: Embedded scripting for C++ applications, with support for Ruby, Python, JavaScript, QtScript, Falcon and Java

Stable and mature bindings available for the following programming languages:

Ruby (Korundum, built on top of QtRuby)
C# (However, the current framework for binding to C# and other .Net languages has been deprecated, and the replacement only compiles on Windows.)

With the move to Qt 5, the KDE Platform was transformed into a modular multitude of what is now referred to as KDE Frameworks.

All that to get my browser to run? No thanks…


As mentioned above, when GNOME moved away from the “desktop metaphor” I moved away from it. The desktop metaphor works just fine, thanks.

GNOME (pronounced /ɡˈnoʊm/ or /ˈnoʊm/) is a desktop environment that is composed entirely of free and open-source software. GNOME was originally an acronym for GNU Network Object Model Environment. Its target operating system is Linux, but it is also supported on most derivatives of BSD.

GNOME is developed by The GNOME Project, which is composed of both volunteers and paid contributors, the largest corporate contributor being Red Hat. It is an international project that aims to develop software frameworks for the development of software, to program end-user applications based on these frameworks, and to coordinate efforts for internationalization and localization and accessibility of that software.

GNOME is part of the GNU Project.

Pushed by Red Hat, and with ever increasing links into the SystemD block, it is not on my menu for a long time to come. Oh, and it’s gotten fat too.

Summary On Windows

There are a dozen more ways to get and make windows of various kinds. Each layer of the system has several players and the ‘mix and match’ combinatorics will run into the hundreds.

Lots of folks just love one system or the other, and / or want the particular tools and environment ONLY available in that subset world (like all the KDE parts). Fine with me, have at it. I just want a basic window system where I can pop a browser and a couple of terminal windows, open GIMP and / or Open Office, and be done. LXDE lets me do that with minimal fuss and resources (though MATE is a close second…)

Just don’t confuse that with the Operating System. It’s more the paint and seat-covers and some of the engine package options, in car buying terms.

Now some of these are not available on some operating system base builds. GNOME, in particular, will likely never again be workable on BSD. Why? Because BSD isn’t going with SystemD (and work to make a ‘shim’ keeps running into the way SystemD keeps mutating to invade and take over more system functions…). For this reason, the Grand Unified Does All Desktop tied to SystemD will only ever run on SystemD based boxes (until and unless someone finds a way to make a decent shim, which I doubt will ever happen…)

That, BTW, is one of my complaints about SystemD. Since it is so invasive AND forces layers above it to have “lock in” to it, it fractures the ability to have “mix and match” choices. You can’t swap your init system (some folks, such as Slackware, liking the BSD rc.d style; others, like just about everything else up until now, using SystemV Init; folks now being forced into an alien SystemD Way…) and increasingly you can’t change your window manager or desktop environment. Being non-interested in “lock ins”, I’m non-interested in SystemD. GNOME, being heavy in the Red Hat world, and Red Hat pushing SystemD like crazy (as they want lock ins…) means IMHO GNOME is dead for non-SystemD. (Since it was already “dead to me” at GNOME 3, no big loss…)

What surprises me is the GNU project folks accepting that.

OK, enough of the “Window Environment Overview”. On to “What Linux and BSD types are there?”

Some Source Lists

First, a digression on Unix. In The Beginning, Thompson & Ritchie Created Unix, and it was good…

Then all hell broke loose and it started spreading and mutating. For a variety of odd legal reasons involving US Anti Trust Law (back when we used it…) various folks made different variations. (Eventually leading to a clean break of Linux as a restart from scratch). Here’s a nice graphic from the Unix history page in the wiki:

You can see the genetic relationship between the various operating systems, with BSD on top, Linux as a sprout out of the middle, then the whole Version 7 / System V / Posix stuff in the rest. I dispute the dotted line from Linux back to “Research Unix” as inspiration. In reality it was BSD that was common on Universities and “in the wild” and was, IMHO, the inspiration for GNU & Linux.

Technically, only the kernel is “Linux”. Most of the compilers and utilities and such are made by the GNU project. Stallman and friends get their panties in a bunch when nobody says “GNU / Linux” but just says Linux. Oh Well. Like most folks, I’ll use GNU for their tools explicitly (like gcc or GNU cc for the C compiler) but Linux for the whole bundle.

https://en.wikipedia.org/wiki/Linux has lots more details, including a nice graphic of what is in “kernel mode” and what is in ‘user land’ for anyone wishing a better description of the cut off.

Dozens of different flavors of “Linux” exist (echoes of “GNU / Linux” please…) and there’s a bunch of words to specify exactly what you are talking about. In general terms, a “release” is a specific instantiation of a numerical level, like Red Hat 7.2. While a “distribution” or “distro” is the list of vendor, kernel, tools & utilities, window environment, target hardware. Like Red Hat Linux GNOME x86 or Ubuntu Linux MATE ARMv7. Distro + release usually fully qualifies the specific object: CentOS Linux 6.3 GNOME x86 being what I run on my x86 box sometimes. Debian Jessie (that is also 8.x IIRC) Linux LXDE ARMv7 on my Raspberry Pi M2. Note that the utilities and tools are typically not listed, so you get to look them up “somewhere”.

Nice thing is, Distrowatch tends to keep lists of all those bits:


If you think I’ve looked at a lot of Linux / BSD releases, take a look at their inventory… Part of why I’m reluctant to just start “rolling my own distro” is the flood of them already out there (and the large number that died after release in the wild…)

But to check what’s in and what’s out of any given distro, they’ve likely got it. Plus pointers to the parent web site for that distro.


There is a link to a wonderful (giant) graphic showing the family tree of the major Linux distributions out there.

It is on this page in wiki commons, but I’ll leave it for folks to pull up on their own as it is really a big chart to read.


The basic point to observe in the chart is that Debian, Red Hat, Slackware, and their spawn (literally…) account for most of the Linux distros out there. The one at the root of a tree is called the “upstream” for those that follow. So Debian is the “upstream” for Ubuntu and Raspbian. Those two are just a repackage and polish of their upstream with local customizations. (Like your local dealer putting on custom wheels and a performance exhaust kit when you buy a new car.)

Now you can see that Knoppix is a derivative of Debian, but that does NOT mean that as Debian cuts over to SystemD, all the rest must go too. There will be pressure to do that, but many may well “fork” and stay on the older system design. So while I’m generally looking to abandon the Debian tree of choices over time, for now there will still be some in there I like. For example, Knoppix as a live system.

The other conclusion from that graph is that many “different” Linux distributions are really 99.9% identical to their upstream and their siblings.

This site claims to have a ‘definitive list’ on the Pi. I doubt it, but it is pretty good. I note, for example, that Alpine isn’t listed, yet I’m running it on my DNS server Pi…


Android (RTAndroid) actively updated, and here is a video tutorial
Angstrom Linux
Arch Linux ARM
Chromium OS
Debian ARM
Fedora ARM Pi 2/3 only.
Fedberry , a Fedora Remix for Pi 2 and 3
Kali Linux
Meego MER + XBMC
Nard SDK (Embedded systems)
OSMC , Open source media center
Pidora , a Fedora Remix for Pi 1 [No longer maintained]
PwnPi , a Raspbian clone for penetration testing. Does not seem to be actively maintained. Last version from 2012 works on Pi 1. Replace the files on the /boot partition with those of latest Raspbian to make it work on Pi 2.
Plan 9
Raspbian , a Debian derivative
Raspbmc is now OSMC
RISC OS
Slackware ARM
Ubuntu Mate
Void Linux
Windows 10 IoT Core

Needless to say, I’m not going to review all of those…

If there’s any of them where an opinion is desired, holler. Otherwise I’m just going to say that I’m most fond of Devuan due to the lack of SystemD. This is followed by Debian / Raspbian and the rest of that family. That includes Ubuntu, which is very complete, but fat and SystemD afflicted. Alpine is nice for routers. Fedora / CentOS for more of a production environment experience. Beyond that it mostly becomes the land of personal toys and specific needs. Some turf wars over style of administration too. (Like what package manager you use for software updating.)

Special mention for BSD.  It’s a very special place.  Much more robust and sturdy than Linux and far more stable (it actually has a guiding group in charge).  Some things are just a pain to make work (like installing X-Windows) and others are a dream (the build system).  I love it.  Yet I’m not running it on my desktop.  Entirely due to the X-Windows pain and suffering.  I have a perpetual “To Do Task” of making a comfortable BSD desktop that I just never quite get around to doing…   But it really is the better operating system.

In Conclusion

Originally this posting started as a review of the choices and I was going to put up compare-and-contrast descriptions of the major Pi choices. Then SystemD hit. Then some kernel security bugs hit. In the end, I was too busy re-making things to finish this posting. Now, the major non-SystemD choices are few, so less reason to fret over choices I’m not going to make (like Fedora and things on old kernels).

So instead it’s a review of the parts of an OS, a list of them, and an open discussion of them if anyone wishes.

Over time, several postings that were never finished have accumulated in “drafts”, and I’m going through them and either tossing them or finishing them. This one was first written about a year ago, but I think it is still useful.


About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present “hot buttons” are the mythology of Climate Change and ancient metrology; but things change...
This entry was posted in Tech Bits. Bookmark the permalink.

6 Responses to Raspberry Pi Operating Systems Review

  1. Steven Fraser says:

    @EM: Great article, and a fun read.

    Working for hardware and software companies since the ’80s, I’ve had occasion to see engineers go through the process you describe for an OS port to new wire-wrap prototype, including the cross-compilation you described here. Powerful Jedi they were.

    Of interest to me in the commercial ‘sons of Unix’ is the group that derived from the research done at Carnegie Mellon … Mach. When Jobs left Apple and started NeXT, he licensed Mach for the Cube, and it made its way into Apple when he returned. All the OS X incarnations resulted from that.

    Good stuff.

  2. E.M.Smith says:


    Glad you liked it! Talking about computer operating system guts is majorly prone to alienation and yawns, so nice to know I kept it (somewhat) interesting!

    When I first started back in the ’70s, we actually loaded a bootstrap compiler from tape into the HP mini-computer then used that to “compile the operating system” and then to compile the compiler that recompiled the compiler then compiled the OS then… IIRC, it took about a day to get to a running system. I’m glad those days are over ;-)

    At one time I ran QA and the compute infrastructure (and documentation…) for a small compiler company (that eventually got absorbed by Red Hat). I also was “Build Master” for a small company that made a network communications appliance (combined web server, time server, email server, etc. etc.) and managed the cross compilation of that OS onto the target hardware.

    One day I was a bit bored… so I set up a few boxes to have Beowulf facilities and ported the build environment to it. Ran a build. Instead of 8 to 10 hours, it finished in about 2. I then happily went to announce this to my boss, the V.P. of Engineering who had complained about long build times to the Engineering staff… and fretted over the need to coordinate development builds between staff.

    Much to my surprise, his only concern was that it might change the build product, and he didn’t want that risk nor that QA workload. Never mind that we did the full QA on builds anyway. So I was patted on the head and told to go back to doing it the old way… Sigh. It doesn’t pay to be too far ahead of the curve. (This was the ’90s, and parallel builds were new then.) Oh Well.


    It’s a micro-kernel design:

    Basically, a whole lot more stuff is loaded only as needed rather than being stuck into the kernel as a single blob. IMHO a much better way to design things. It can be a bit tricky (the kernel can’t assume that everything it needs is ALWAYS loaded, and must remember to check for / load bits sometimes), but it does let you have a small, fast core to the kernel.

    With the way the Linux kernel has gotten fatter over the years, I’d vote for a micro-kernel do-over.
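
To make the “loaded only as needed” idea concrete, here is a toy sketch of my own (not code from any real kernel — every name in it is invented for illustration): a tiny dispatcher that keeps almost nothing resident and pulls a user-space “server” in the first time a message is sent to it, the way a micro-kernel keeps its core small:

```python
class MicroKernel:
    """Toy micro-kernel core: routes messages, loads servers on demand."""

    def __init__(self, registry):
        self._registry = registry  # service name -> factory function
        self._running = {}         # servers actually loaded so far

    def send(self, service, message):
        if service not in self._running:            # check / load on demand
            self._running[service] = self._registry[service]()
        return self._running[service](message)

def fs_server():
    """Factory for a trivial 'file system' server living outside the core."""
    files = {}
    def handle(msg):
        op, name, *rest = msg
        if op == "write":
            files[name] = rest[0]
            return "ok"
        return files.get(name)                      # 'read' path
    return handle

kernel = MicroKernel({"fs": fs_server})             # core starts empty
kernel.send("fs", ("write", "motd", "hello"))       # first call loads fs
print(kernel.send("fs", ("read", "motd")))          # -> hello
```

The point of the sketch is only the shape: the core knows how to deliver messages and how to fetch a missing server, and everything else lives outside it.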

    Then again, I’m not the guy going to write it so…. but others are thinking about it:


    It’s possible. And “possible” is all it is. This would require you modifying the source code of the core of Linux. There’s no way to do it on an already-built system. It would require an enormous amount of work to “port”, or in-place convert, something as complex and widely-depended on as the Linux kernel to a totally different architectural pattern. We’re talking thousands of hours of work and millions of lines of code. Even if that were successful, it probably wouldn’t work with almost every available Linux driver, module, or kernel-hook-dependent library (i.e. libc).

    However, if you want to see what GNU/unixish system that runs on a microkernel looks like, take a look at GNU/hurd. It’s a robust enough microkernel system that you can run Arch Linux–er, Arch Hurd–on it.

    This topic has been discussed before. To review some of the discussions about why/how likely it is that Linux itself (rather than an alternative like hurd) would ever switch to a microkernel, browse this thread.

    Now if only GNU/hurd would develop at faster than glacial speed…


    Well, there’s a reason I play with R. Pi and build OSs and stuff. It’s because I’ve done that kind of stuff for decades and kind of like doing it ;-) Just wish someone would pay me to do it. Then again, I’ve not bothered to actually apply anywhere. Something about the corporate environment being no fun anymore.

    I’d planned that at this point I’d be teaching at a local Community College (have the State lifetime credential) but they’ve decided they want “Manufacturer Certs” and I’m just not going to pop a few $Thousand a year to Microsoft.

  3. jim2 says:

    I switched to Mate when Ubuntu went with the Unity desktop and haven’t looked back. Like that it has drivers for some common apps and doesn’t bitch about them, plus I was able to make it look and work kinda like MS Windows, which damped down some noise from the Better Half.

  4. Larry Geiger says:

    “I dispute the dotted line from Linux back to “Research Unix” as inspiration.” It looks to me like the dotted line goes to Minix and not Research Unix. Which I think would be accurate. Linus actually mentions Minix I believe. I would have to look up the reference.

  5. E.M.Smith says:

    Minix was written for teaching and about 4 years before Linux, so I’d not be surprised to find it was in the text used when Linus was in school. Or that he was aware of it before starting.


    It’s a micro-kernel design, so clearly a technologically different path from Linux.

    AND… 4 years is kind of short overlap for MINIX awareness to spread all over the globe.


    MINIX (from “mini-Unix”) is a POSIX-compliant (since version 2.0), Unix-like operating system based on a microkernel architecture.

    Early versions of MINIX were created by Andrew S. Tanenbaum for educational purposes. Starting with MINIX 3, the primary aim of development shifted from education to the creation of a highly reliable and self-healing microkernel OS. MINIX is now developed as open-source software.

    MINIX was first released in 1987, with its complete source code made available to universities for study in courses and research. It has been free and open-source software since it was re-licensed under the BSD license in April 2000.


    Linux (/ˈlɪnəks/ LIN-əks) is a family of free and open-source software operating systems built around the Linux kernel. Typically, Linux is packaged in a form known as a Linux distribution (or distro for short) for both desktop and server use. The defining component of a Linux distribution is the Linux kernel, an operating system kernel first released on September 17, 1991, by Linus Torvalds.

    Possible, but…

  6. E.M.Smith says:

    On a related note:

    While looking up the creation date for Linux, ran into this page:


    The Linux kernel is an open-source monolithic Unix-like computer operating system kernel. The Linux family of operating systems is based on this kernel and deployed on both traditional computer systems such as personal computers and servers, usually in the form of Linux distributions,[9] and on various embedded devices such as routers, wireless access points, PBXes, set-top boxes, FTA receivers, smart TVs, PVRs, and NAS appliances. The Android operating system for tablet computers, smartphones, and smartwatches uses services provided by the Linux kernel to implement its functionality. While the adoption on desktop computers is low, Linux-based operating systems dominate nearly every other segment of computing, from mobile devices to mainframes. As of November 2017, all of the world’s 500 most powerful supercomputers run Linux.

    The Linux kernel was conceived and created in 1991 by Linus Torvalds for his personal computer and with no cross-platform intentions, but has since expanded to support a huge array of computer architectures, many more than other operating systems or kernels. Linux rapidly attracted developers and users who adopted it as the kernel for other free software projects, notably the GNU Operating System. The Linux kernel has received contributions from nearly 12,000 programmers from more than 1,200 companies, including some of the largest software and hardware vendors.

    The Linux kernel API, the application programming interface (API) through which user programs interact with the kernel, is meant to be very stable and to not break userspace programs (some programs, such as those with GUIs, rely on other APIs as well). As part of the kernel’s functionality, device drivers control the hardware; “mainlined” device drivers are also meant to be very stable. However, the interface between the kernel and loadable kernel modules (LKMs), unlike in many other kernels and operating systems, is not meant to be very stable by design.

    I had not realized Linux was now the only OS kernel used on the Top500 list of supercomputers.


    Kind of surprising. I’d have thought some proprietary OS would survive there.
