MyPi *Nix Vague Roadmap and Ponderings

I’ve been pondering, between various minor catastrophes and political explosions, just what the “roadmap” of My Pi *Nix ought to be. It is likely THE hardest decision of the process, and the one with THE most fallout downstream, both in total workload and in likely success.

So I’m going to post “where I’ve got to” and ask for opinions. No, you need not be a Linux God to have an opinion. It can be as simple as “Will it run Mint? I like Mint.” or “I don’t like red/orange background like SliTaz has”… or as complex as “The Gentoo build system is a PITA to learn, but then you will save hundreds of hours of development due to {feature and effect list}”

So I’m just going to toss my ponderings at the wall to see what sticks, and you are welcome to do the same.

Goals and Comments

Here is a (partial) list of things I’m trying to achieve. If some other release / distribution already does all this, I’d love to know because I’d rather be drinking beer by the garden than building the Linux Kernel on a Raspberry Pi… I’m also going to be tossing in some random commentary about things like what a given goal means to me in terms of building as (parenthetical sniditude)…

Goals (in no particular order):

1) The kernel ought to use all available hardware on the SOC. (So it ought to be built with the V7 instruction set and at least the hardware float, if not an optional NEON SIMD float processor; unlike FreeBSD, which is V6 and softfp, and others that are non-NEON hardfp. So some choices might need a kernel devo / remake process).

2) Userland similarly needs to be built as v7 / hard floating point / optional NEON. (So many userlands would need a recompile / flag tuning and / or debugging hard float options).

3) Upstream support is desirable to as full an extent as possible. (So a semi-abandoned upstream supplier is not a good thing, and a large userbase of upstream is a good thing.)

4) No Systemd. It may be wonderful. It may be God’s Own Gift to big systems running on Virtual Machines. I don’t like it. I don’t like what it does. I don’t like the attitude toward screwing with “everything I know”. I don’t like the “finger in every pie” and “one security bug and you are 100% screwed and get to read a giant inscrutable blob of code”. And I really don’t like the Microsoft Binary Blob design flavor of it, from log files to all the rest of it. Not negotiable. No, I don’t care if it is fast, so is plunging to your death off a high cliff.

5) K.I.S.S. build. Having a build system that takes 6 months to learn is just not going to cut it for a “Joe and Jane Sixpack” buildable system.

6) Eventually, some day, I’d like to build it with the MUSL libraries. This is a bit geekly, but they are much smaller and more efficient than the glibc libraries, and IMHO somewhat more secure. (They do have problems of failure to build and bugs in new things ported to them at present (when things depend on side effects or non-spec behaviours), so will be easier over time as more folks use them).

7) Explore using the same “Multi-lib” ability used for the 32 bit / 64 bit builds in x86 / AMD-64 land to make a dual-lib glibc / MUSL hybrid during any library transition. That way you can still use things that fail to build with MUSL due to some brain dead dependency on a bad idea in glibc…

8) Security and reliability first. “Bling” last.

9) Efficient and small resource footprint, given the target.

10) A Build System that does not suck a person dry, though might tax the hardware. (Let the machine work, I can always drink beer by the garden while it does.)
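To make goals 1 and 2 concrete, here is a rough sketch of what the toolchain settings might look like in the build environment. The -march / -mfloat-abi / -mfpu options are standard GCC flags; the particular values chosen are my assumption for a Pi 2 class ARMv7 part, not a tested recipe:

```shell
# Sketch of per-build flag selection. The BASE vs NEON variants are my
# assumption for a Pi 2 class ARMv7 chip; the flags are standard GCC.
BASE_CFLAGS="-O2 -march=armv7-a -mfloat-abi=hard -mfpu=vfpv3"
NEON_CFLAGS="-O2 -march=armv7-a -mfloat-abi=hard -mfpu=neon-vfpv4"

USE_NEON=yes    # flip to "no" for the plain hard-float build

if [ "$USE_NEON" = "yes" ]; then
    CFLAGS="$NEON_CFLAGS"
else
    CFLAGS="$BASE_CFLAGS"
fi
CXXFLAGS="$CFLAGS"
export CFLAGS CXXFLAGS    # every package build then inherits the choice

echo "$CFLAGS"
```

Setting the flags once in the environment and exporting them is the K.I.S.S. way (goal 5) to keep every package on the same footing without per-package fiddling.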

Some Ideas and Ponderings

First off, the “no systemd” requirement right off the bat tosses out many “upstream” providers and their kit: Arch, Debian / Raspbian, Ubuntu, Fedora. So it goes. It leaves in LFS (Linux From Scratch), FreeBSD, Slackware, and a few others including Gentoo that can “go both ways”.

Then #5 kind of kills off Gentoo. Their build system lets you do anything, including making your product 12 ways for 14 targets via trick build script jiggling… It also means you must control everything with a modestly arcane system of USE flags and more. It will take me a couple of months to get good at that build system, and IMHO there is no hope that the “average Joe” who just wants to make a system build would be able to figure it out on a weekend.

Since part of the goal here is something the Average Jane and Joe can have a hope of assuring themselves is OK to build (in terms of security), having an arcane inscrutable build system makes that nearly impossible. Might as well just download prebuilt binaries and skip the weekend of build time. So that kicks out Gentoo.

LFS has a modestly well defined and fairly foolproof build (though Beyond Linux From Scratch, which builds the userland, has more “well, this might work” caveats in it…) but not a large developer base to depend on. Slackware has a gigantic userbase and a relatively clear build method, but no dependency resolution at all; you are on your own for that. (No package system or build system isn’t quite as bad as you might think.)

Most of the other ports are even more minor, or more idiosyncratic. Like Alpine Linux, which does much of what I want (a MUSL build, for example) but is very raw so far; or Puppy, which has a great small distribution but an idiosyncratic design and build process. Puppy has a certain appeal, but has a particular POV about it.

In the end, I find myself thinking “LFS or Slackware?”.

LFS with some build scripting would be quick to bring up, and clear in what it is doing (exactly what the script says and not a lot of ‘interpreting’ settings and flags). But it has a limited set of software demonstrated to “play well on the Pi”.

Slackware has everything. Including a large userbase. But the Pi area is a bit raw and not their main area of interest. Their ports system is a bit ersatz and based on “community support”. The base system has one browser that isn’t my favorite, and some of the design choices, while not bad, are, er, ‘not mainstream’… (In some cases that is good, like still using Berkeley rc scripts instead of systemd or System V init).

This all leaves me thinking: do a first pass LFS build, then a reassessment, and adapt if needed. Yet a Slackware based build is going to have more “upstream” help as other users port things and put them on their build / packages site.

I could easily see a LFS first, then a transition to Slackware once the process was well characterized.

Process Things

This ought to be a multi-step process. Start with a pre-built kernel and libraries and a ‘good enough’ userland, as I’ve already gotten to work. Add any missing build parts.

A) Make a custom Userland, based on a scripted build, sorting out dependencies and order of build, and defining the set of included packages.

B) Rebuild the libraries if needed (i.e. if I’ve got a desire to add hard float or ‘whatever’ is missing in the base build)

C) Rebuild the kernel with the new libraries (if needed).

D) Rebuild userland. Final pass on the build script and dependencies.
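The scripted build of steps A and D could be as simple as this skeleton: walk an ordered package list, record each result, and stop at the first failure so the dependency ordering problems show themselves. The package names and the build_one body are placeholders for illustration, not a tested recipe; a real pass would untar, configure, make, and install each one.

```shell
# Skeleton of the scripted userland build: one package per line, in
# build order, with a log so a later rebuild can be checked against it.
printf 'zlib\nncurses\n' > pkglist.txt   # sample list for illustration
: > build.log

build_one() {
    # Stand-in for the real recipe: tar xf, configure, make, install.
    echo "building $1"
}

while read -r pkg; do
    [ -z "$pkg" ] && continue
    if build_one "$pkg" >> build.log 2>&1; then
        echo "OK: $pkg"
    else
        echo "FAILED: $pkg"    # stop here so the breakage is obvious
        break
    fi
done < pkglist.txt
```

The virtue of this shape is exactly the “clear in what it is doing” point below: the script IS the documentation of the build, and the log is the audit trail.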

Yet I can also see the case for doing a 2 track approach. Have a rapid bring-up “quick and high function” system based on just sucking down binaries where available. Get to running very fast. Then, over a longer period of time, do the high-PITA rebuild from sources: more secure, and perhaps faster running if compile options can be tuned. It would give “usable quicker” but reach the final goal later.

Proposed Roadmap

(Or more accurately, rough sketch of the dirt trails out back…)

Base it on LFS with a scripted build. Use the FreeBSD Ports system to lay out dependencies. (It has a command to list dependencies for any package, and things ought to be very similar.)
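Even without the FreeBSD machinery on hand, the “lay out dependencies” step can be sketched with tsort (shipped in coreutils). The package pairs below are invented for the example; on a FreeBSD box they would come from running `make all-depends-list` in each port’s directory:

```shell
# Each "left right" pair means "left must be built before right".
# The pairs are made up for illustration, not a real dependency survey.
printf '%s\n' \
  'zlib openssl' \
  'openssl openssh' \
  'zlib libpng' \
  'libpng xorg-libs' |
tsort
# tsort prints one package per line, dependencies before dependents,
# which is exactly the ordered pkglist the build script wants.
```

Feed that output into the scripted build and the “order of build” problem in step A largely solves itself.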

Eventually add some kind of “ports” system package and build manager, but not up front, only after a final build is acceptable.

Optimize the build for the hardware. (I.e. use the hard float and NEON and V7 flags, as appropriate).

Add a “multilib” facility like that used for x86 / AMD-64 builds, but with glibc and MUSL. That would let things that work be built with MUSL, and those that fail fall back to the glibc build.
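The fallback half of that multilib idea is simple enough to sketch in the build script. Here try_build is a stub standing in for “configure && make” with the given compiler (musl-gcc is the wrapper script musl itself installs); “badpkg” simulates a package with a glibc-ism that fails under MUSL:

```shell
# Stub so the sketch runs standalone: pretend "badpkg" has a glibc
# dependency that breaks under MUSL. A real try_build would run the
# package's configure && make with CC set to the given compiler.
try_build() {
    pkg=$1; cc=$2
    if [ "$cc" = "musl-gcc" ] && [ "$pkg" = "badpkg" ]; then
        return 1
    fi
    return 0
}

# Try MUSL first; fall back to glibc only when the MUSL build fails.
build_package() {
    pkg=$1
    if try_build "$pkg" musl-gcc; then
        echo "$pkg: built against MUSL"
    else
        echo "$pkg: MUSL failed, falling back to glibc"
        try_build "$pkg" gcc && echo "$pkg: built against glibc"
    fi
}

build_package zlib
build_package badpkg
```

Over time, as MUSL support improves upstream, fewer packages take the fallback path and the glibc side of the multilib can shrink away.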

The order of build that I would follow, in thumbnail:

1) Get a working kernel (done!) using glibc. Along with a userland built on glibc (done!, for a narrow base…)

2) Build any needed porting tools and ‘build-utils’ parts. Get Xorg running. (WIP at the moment).

3) Rebuild an optimal kernel using best build flags. (Probably not needed for LFS as it was already done?)

4) Rebuild base applications and userland with optimal compiler flags (as needed).

5) Test scripted build (based on above build records) via a rebuild. Iterate as needed.

6) Attempt the multi-lib extension. Small MUSL based test builds of selected easy applications.

7) Port a ports-like build system for the selected programs based on the script. (Optional)

8) Experiment with a MUSL based kernel and libraries. IFF it goes great, swap over.

9) Port userland to MUSL to the extent possible (and really make sure the multilib works…)

It looks to me like everything from about 6 on down is entirely optional R&D / Devo work. I could easily see spending a long time on #5 fleshing out userland with desktops, browsers, etc. There would need to be some reassessment at that point as to how much goes into userland enhancements and debugging, vs how much goes into that R&D for a sleek efficient package… (but with the potential for a LOT of userland bits that get bugs from library dependencies… thus my interest in a dual library model.)


So that’s where my pondering led me. Toss rocks at it. It is much much easier to change direction now than after I’ve sunk a few months into doing something stupid…

One big hole is just the entire lack of any kind of “what does userland look like” list. I’m leaning toward xfce and a browser based on Firefox (GNUzilla or Pale Moon or… whatever compiles ;-) as the base set. I’d also expect LibreOffice and the usual stuff like gparted and a terminal. Any other suggestions to put on the “must have” list welcome. (As of now I’m just figuring to run through the menus on Debian or Fedora and make a list.)

Too ambitious? Not ambitious enough? Skipped something critical? Ought to have my head examined? ;-)

Oh, and I’m not married to this as The One True Path. I’m likely at some point going to do a “Woof” build of Puppy just to learn what it does. I also want to get FreeBSD running with full-on X-windows just because… (but a lot of Linux stuff isn’t in BSD land). But whatever mainline is chosen gets the most time. The other things mostly get time only when “stuck” elsewhere and wanting a break / different experience.

So kick it around and let me know what y’all think.


About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present “hot buttons” are the mythology of Climate Change and ancient metrology; but things change...
This entry was posted in Tech Bits.

15 Responses to MyPi *Nix Vague Roadmap and Ponderings

  1. Larry Ledwick says:

    I have no complaints with what you have outlined (much of it too deep in the grass for me to have a knowledgeable opinion and I am inclined to trust your judgement on such things).

    My observations are:
    A colorblind-aware build, i.e. turn off the default use of color highlighting, which most Linux builds default to. I literally cannot read some things on many default builds, and the first two commands I run in windows (or my profile) are “unalias vi” and “unalias ls” to remove the color highlighting used in those commands in CentOS.

    Two options: a setting “colors=yes” or “colors=no” in the build script, or default to old fashioned Unix no-color highlighting and provide a setting to turn colors on after the install.
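    In profile terms, that switch could be as small as this sketch (the alias names are my guess at typical CentOS-style defaults; the real list would match whatever the base build ships):

```shell
# Sketch of the colors=yes/no idea as a shipped profile snippet.
alias ls='ls --color=auto'      # stand-ins for the distro defaults
alias vi='vim'
alias grep='grep --color=auto'

COLORS=no    # the one knob: set to yes to keep color highlighting

if [ "$COLORS" = "no" ]; then
    unalias ls vi grep 2>/dev/null
fi
```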

    Also, as I am sure you are inclined anyway, be sure the default build includes the appropriate packages to run the most useful / powerful troubleshooting commands (basic sysadmin tool kit) right out of the box:
    htop, top, netstat, iostat, vmstat, traceroute, last, nslookup, etc.

    Other than that, I would need to ponder a bit, but being red-green colorblind, default color highlighting is a pet peeve for me (and approximately 10% of the European Caucasian user base). The assumption that all users want color highlighting, and forcing you to dig around to figure out the best way to shut it off when you have to struggle to read the output of some commands, is really aggravating.

  2. E.M.Smith says:

    I like a b&w default because the deep blue on black terminal window, or yellow on a white bg, are unreadable by me! Since bg color or wallpaper are unknown, odds are some color is going to fail.

  3. beng135 says:

    Long ago on primitive/slow boxes, xfce was by far the fastest X-windows package that met my needs, so I agree w/that.

    Your “roadmap” reminds me of outage-planning we had to do when I was a power-plant engineer. Plan before-hand, then adapt/modify as you go along.

  4. Martin Fisher says:

    The arrival of the internet has made many things easier. (I am building Odroid XU4 HA clusters, but couldn’t do it without the community support to overcome driver issues etc). The bad news is the price that has to be paid, in this case the pervasive nature of Systemd. I admire what you are trying to achieve but I don’t think there’s an easy path. You are back to the pre-internet days where no one else is doing what you are doing, so it has to be done alone, including, it would seem, writing device drivers for hardware unsupported by the non-Systemd OS. As you probably remember, when Unix and the 68000/10/20 first came out there were 10+ companies each doing what you are trying to do: take semi-standard OS and semi-standard hardware and make it work in a way they thought was best. Rolling your own has a certain beauty to it, but the smaller the community the smaller the number of eyeballs checking the code for security/backdoors etc.

    So, are you being too ambitious? I don’t think so, but if the goal is total understanding and total security I think the ROI for the “sleek efficient packages” with complete hardware support conflicts with that goal. If the goal is having fun then there’s no cost except time, and building new OS on new hardware would seem more fun than say watching TV.

    Also, why go for efficiency? The Pi is so cheap that for the time spent optimizing you could have connected a couple together, and then who cares? (And I’m aware of the elegance of optimization, having started on a PDP-8 with 256 some bytes of Page 0 memory)

    BSD was “good enough” 30 years ago, and while technology has moved on, I don’t think the basic requirements have really changed, the utilities have improved and the commands have got more verbose/capable, so if Linux stuff is missing from BSD, is it really needed? (YMMV)


  5. p.g.sharrow says:

    @EMSmith; I have read this post twice, once yesterday morning and once this morning. I must ruminate on it a bit more.

    The ability of the Pi to change character with the change of a chip seems to be handy and parts are inexpensive so several are doable if needed. “let the machines do the work” is a good concept. Computer time is cheap nowadays.

    Abilities that I see needed are;
    #1 communications
    #2 office suite – database / spreadsheet, word processor
    #3 process control and sensors

    Some ability by the user to understand what is under the hood is essential, so some amount of training must be planned for. Like you, I prefer a car that I can troubleshoot and fix rather than one that must be serviced by a factory trained rep with the latest diagnostic gear. We don’t need all the hand holding and eye candy, just something that works and is inherently secure. I am rebelling at this constant drumbeat to upgrade into larger and larger masses of bloatware filled with embedded issues.

    Presently I am working as a wood butcher building a “post and timber” gazebo for a neighbor that must be completed tomorrow for a yard party Saturday. Two months of gathering and milling local wood, mortise and tenon joinery with trenails, and now finishing up today, I hope.
    6,000-year-old technology meets the 21st century ;-) …pg

  6. Larry Ledwick says:

    I like that modular idea for the builds.
    Perhaps a one stop shopping place download of a build specifically setup for your local DNS filter, all the other attack interfaces stripped out, and a starting place list of sites to block.
    (do one thing and do it well)

  7. Bill S says:

    I think I want to know what “a woof build of puppy” means. All I know for sure is that raspbian is not making me feel warm and comfy. And that I should not have given away the only laptop with Ubuntu installed.

  8. Jay says:


    Are you stuck going for the Pi?

    I have one also and run the raspbian.

    But if you want an xfce distro without systemd, have you gone back and looked at the old Mepis? There is a new offshoot derivative, MX-15.

    No systemd, xfce (with a community supported kde version if that’s of interest)
    I run it on three AMD systems with 100% rock solid stability.
    A great community behind it.

    (a lighter version with smaller DEs is available as Anti-X)

  9. p.g.sharrow says:

    @EMSmith; It appears from your post that your brain has already settled on LFS as the base to begin the creation of *NIX, so I would posit that the decision is in. ;-) Just accept it and move on.
    I see no reason to question that judgment and concur…pg

  10. Another Ian says:


    Seen this?

  11. E.M.Smith says:


    Thanks for the comments. I’ll “ruminate” over them with a bit of Smith’s Cider

    I’m chasing down another interesting port at the moment: “Void” Linux. Looks to have done many of the things I want to do (alternate libraries, using runit instead of Systemd, runs on the Pi) already; but it will need a test drive and commentary…

    A few quick particular comments:


    Thanks! Must be my disaster planning background showing through ;-)


    I think there are enough of us resistant to Systemd to make a community of it. Slackware, Devuan, LFS, Alpine, Void, one Gentoo branch, all the BSDs, and a few others is not a small group.

    @Bill S:

    My apologies. I try to keep jargon minimal, and try to remember to “translate” all jargon at least once prior to use, but sometimes fail. Especially when deep in “fast think” mode. So, some translations:

    “Puppy” is a particular flavor or Linux. A “distribution”.

    “Woof” is the name of the package assembly program that builds a release media (CD / whatever) for the “Puppy” family of linux releases. It is an automated “pick one from column A and one from column B” release builder.

    So a “Woof build of Puppy” is a machine driven customized version of the Puppy Linux distribution based on your check boxes. Basically a semi-automated system building system.

    @Another Ian:

    Yes, distrowatch is a great resource. I usually “check out” distros there when first evaluating them. Like the one I’m presently chasing.


    Nope. Not fixated on the Pi. Just it’s my first target. I also have 4 x86 based boxes in the office to “do something with” and 3 more in the garage… I’m hopeful that a solution for all will be found. My intent, too, is to use a CubieTruck for my file server, since it supports SATA and has no binary blob boot loader.

    My reason for not going straight to x86 boards out of the gate is twofold:

    1) They are becoming “boot freedom” hostile with UEFI (and it brings a big security question, being essentially about 100 MB of “binary blob”… and it tries to block OS choice).

    2) IF I can get all the computes I need for $40, why pay $400?

    But I’ll likely continue to run some x86 and amd-64 boards as long as I can easily put MY OS of choice on them.


    I do something a bit odd outside of Silicon Valley. I’ll make a preliminary decision and run with it, but still have some cycles devoted to “proving up” that path. (Like in CPU chips where both arms of a branch are evaluated, then you throw away the one not taken once the decision is made). Likely from my many years on supercomputer arches where that’s the norm…

    So a preliminary decision for LFS is made. Done deal. Wrapped and shipped. And yet might change tomorrow… Welcome to the Silicon Valley Disruptive Tech Planning Method ;-) I’m presently doing a minor search on Void Linux as it looks like it has already done much of what I want to do. Yet I’m proceeding with the LFS ‘layout’ of build plan and also evaluating the difficulty of adding a build system (despite disparaging the need above). You pick your main line of attack, but don’t marry it… and keep an eye on the alternatives so one doesn’t cut you off just before ship and AFTER spending all that Venture Cap money… at least, in Silicon Valley and IF you got V.C. money ;-)

    It isn’t a sign of weakness nor equivocation, but of flexible market response and resource optimizing… even IF hard for “upper management” to “get it” sometimes… (What do you mean you are wasting time looking at something you already rejected?… Yet military types do tend to ‘get it’ easily… something about the battle plan lasting until first contact with the enemy and all ;-)


    My “visioning” would be to pick a decent distribution, then make what Fedora calls “spins” of it. In my world, “spins” based on original package sources and where you can compile 100% of them yourself, or download binaries as desired. Custom “spins” for several classes of “device”, from DNS Pi Hole to Fileserver to DonglePi to Desktop to… all the while with some audit trail back to the original sources (including “U-Built-It” as a choice) and with decent selections to optimize the hardware.

    Then wrapped and packed in such a way that “Joe and Jane Sixpack” have some hope of doing the build / audit process themselves, if desired.

    IMHO right now the world divides into “Roll your own, be a Geek, and shut up” too tech and “Take The Binaries, trust us Geeks, and shut up” too trusting. I’m trying to bridge that gap.

  12. E.M.Smith says:

    @Regis Lianfar:

    I have a Raspberry Pi B+ that’s my DNS server. It has been up continuously for about 6? months. It only goes down when I choose to shut it down. It has never crashed…

    The Orange Pi in that article looks interesting. Didn’t know you can get them for $15… but maybe that’s in another place… or maybe I’ve just not looked ;-)

    In industrial settings, commercial Linux servers and Unix servers are known to run for years without stopping, other than for scheduled maintenance. I once had to do an “emergency shutdown” on the build server for a software company. I’d just joined, and doing my “walk around” found that the build box was not blowing hot air out the back… the fan was stopped, only occasionally turning one or two turns, then halting again. Since burning up is hard on computers… and power supply failures take longer to fix than a cleaning, I sent out a “going down now” message and then, in 5 minutes, shut it down. Logs showed it had been running continuously for about 2 years… and the ‘dust bunnies’ had just gummed up the fan.

    About 30 minutes of cleaning, it was back up and running fine. Still running a year or two later when I left the place… (with an annual scheduled cleaning ‘going forward’…).

  13. Larry Ledwick says:

    When I was working at Sun our data centers had quite a few Solaris servers with uptimes greater than 600-700 days. If it ain’t broke don’t fix it.

  14. beng135 says:

    EM, any opinion on PC-BSD? Based on FreeBSD, which you’ve mentioned.

Comments are closed.