There is a concept in computer programming of “Dependency Hell”. It comes about, IMHO, when folks forget to follow the K.I.S.S. principle (Keep It Simple, Stupid) and / or just don’t pay attention to a couple of basics of computing. In particular, the fact that ALL change is incredibly expensive in time and effort, while changes that are incompatible with other parts of the system (or with other changes) can be lethal (to the project, the product, or the whole system).
The Unix Way of “Do one small thing and do it well” comes from this understanding. One Small Thing done well is unlikely to change much. If I have a program that just takes a byte stream and directs it to a file, that’s not got a lot of room for “enhancements”, revisions, or bugs. If my “init system” is just a PID 1 (Process ID #1) that launches some other processes listed in a script or configuration file, well, my init system is unlikely to ever need much change, revision, or “enhancement”, nor will it have much in the way of bugs (if any). This has been fundamental and true for about 50 years of Unix history.
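To make that concrete, here is a minimal sketch of the traditional idea (the daemons and paths are illustrative, not any particular distro’s actual script): the whole init system is PID 1 running a short shell script that starts things, one line each.

    #!/bin/sh
    # Hypothetical minimal rc script. PID 1 runs this once at boot;
    # each line just starts one service. That's the whole "init system".
    /sbin/syslogd      # system logger
    /usr/sbin/cron     # job scheduler
    /usr/sbin/sshd     # remote login daemon

Not much room for bugs in that.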
What has happened relatively recently is an explosion of (gratuitous?) change and “enhancement” that looks to me like it is NOT making things better and IS making things worse. Simply because it makes for a huge growth in Dependency Hell issues.
Here’s the Wiki on it:
I’ve bolded some bits.
Dependency hell is a colloquial term for the frustration of some software users who have installed software packages which have dependencies on specific versions of other software packages.
The dependency issue arises around shared packages or libraries on which several other packages have dependencies but where they depend on different and incompatible versions of the shared packages. If the shared package or library can only be installed in a single version, the user may need to address the problem by obtaining newer or older versions of the dependent packages. This, in turn, may break other dependencies and push the problem to another set of packages.
Dependency hell takes several forms:
Many dependencies
An application depends on many libraries, requiring lengthy downloads, large amounts of disk space, and being very portable (all libraries are already ported enabling the application itself to be ported easily). It can also be difficult to locate all the dependencies, which can be fixed by having a repository (see below). This is partly inevitable; an application built on a given computing platform (such as Java) requires that platform to be installed, but further applications do not require it. This is a particular problem if an application uses a small part of a big library (which can be solved by code refactoring), or a simple application relies on many libraries.
Long chains of dependencies
If app depends on liba, which depends on libb, …, which depends on libz. This is distinct from “many dependencies” if the dependencies must be resolved manually (e.g., on attempting to install app, the user is prompted to install liba first. On attempting to install liba, the user is then prompted to install libb, and so on.). Sometimes, however, during this long chain of dependencies, conflicts arise where two different versions of the same package are required (see conflicting dependencies below). These long chains of dependencies can be solved by having a package manager that resolves all dependencies automatically. Other than being a hassle (to resolve all the dependencies manually), manual resolution can mask dependency cycles or conflicts.
Conflicting dependencies
If app1 depends on libfoo 1.2, and app2 depends on libfoo 1.3, and different versions of libfoo cannot be simultaneously installed, then app1 and app2 cannot simultaneously be used (or installed, if the installer checks dependencies). When possible, this is solved by allowing simultaneous installations of the different dependencies. Alternatively, the existing dependency, along with all software that depends on it, must be uninstalled in order to install the new dependency. A problem on Linux systems with installing packages from a different distributor (which is not recommended or even supposed to work) is that the resulting long chain of dependencies may lead to a conflicting version of the C standard library (e.g. the GNU C Library), on which thousands of packages depend. If this happens, the user will be prompted to uninstall all those packages.
Circular dependencies
If application A depends upon and can’t run without a specific version of application B, but application B, in turn, depends upon and can’t run without a specific version of application A, then upgrading either application will break the other. This scheme can go deeper in branching. Its impact can be quite heavy if it affects core systems or the update software itself: a package manager (A) which requires a specific run-time library (B) to function may brick itself (A) in the middle of the process of upgrading this library (B) to the next version. Due to the incorrect library (B) version, the package manager (A) is now broken, and thus no rollback or downgrade of library (B) is possible. The usual solution is to download and deploy both applications, sometimes from within a temporary environment.
Package Manager Dependencies
Dependency hell is unlikely, but possible, to result from installing a prepared package via a package manager (e.g. APT), because major package managers have matured and official repositories are well maintained. This is the case with current releases of Debian and major derivatives such as Ubuntu. Dependency hell, however, can result from installing a package directly via a package installer (e.g. RPM or dpkg).
Diamond dependency
When a library A depends on libraries B and C, both B and C depend on library D, but B requires version D.1 and C requires version D.2. The build fails because only one version of D can exist in the final executable.
Package managers like yum are prone to have conflicts between packages of their repositories, causing dependency hell in Linux distributions such as CentOS and Red Hat Enterprise Linux.
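So much for the Wiki. For a concrete feel of the “conflicting / diamond” cases, here is roughly how you would go looking for one on a Debian-style box. The commands are standard tools; app1 and libfoo are hypothetical names standing in for whatever packages are fighting:

    # Which shared libraries (and versions) does a binary actually resolve to?
    ldd /usr/bin/app1 | grep libfoo
    # What does a package declare that it depends on?
    apt-cache depends app1
    # And what depends on a given library (the things an upgrade could break)?
    apt-cache rdepends libfoo1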
This was to a large extent mitigated by having a huge common base in Linux (often shared with the BSD world) and a very large number of folks aware of the issues and working (with the right attitudes) to keep any consequences small. A key concept here is the “Dick With Factor”.
Some folks just can’t stop themselves from “Dicking around with things”. This is often seen in relatively inexperienced folks (not enough burned fingers in their past) or folks whose ego exceeds their ability. Folks who “have a bright idea” that isn’t really all that bright, and folks who “want to make a name” for themselves (at the expense of aggravation for everyone else). They tend to screw things up most of the time, though occasionally a good idea gets through.
One Big Onslaught of “Dick With Factor” was the “System V. Consider it Standard!” push by AT&T. It fractured Unix into 3 different worlds, but only 2 of them directly. Originally, the Unix set free in the world was Version 7 (note: I’m abbreviating the history. By definition there were 6 prior versions, but they were mostly inside Bell Labs or of limited reach. Commonly, the start of Unix history is held at 1971 or so, and the Version most folks first saw was 7. You might ask: Why is System FIVE after Version SEVEN? Please don’t ;-) There is no good answer other than some marketing dweeb in AT&T with a high Dick With Factor).
Version 7 was the base from which all the BSDs sprang. AT&T was forbidden to make money off of software then, being a monopoly, so it granted U.C. Berkeley a sweetheart license that included the right to grant sub-licenses, all for free. Universities and Colleges around the world adopted BSD as their teaching tool. Sun Microsystems used it as the base for Sun OS, and many other manufacturers also used it.
Then AT&T went through the trauma of a Government Dick With Process and got broken up, but also got the right to make a profit off of software (again, I’m compressing a few years of history into one sentence. If that bothers you, go write a book…) They tried to reclaim the BSD licenses, but could not. So what’s the next best thing? Declare that the only Standard Unix is the one YOU sell and rename, repackage and Dick With a lot of stuff to make it different and incompatible. Thus System V, consider… it standard?
Well, this broke all sorts of stuff. Among them, the init process. rc.d gave way to init.d and other “changes for change’s sake”, and a whole lot of Systems Admins got to double or square their workload. This happened while I was at Apple, and there was a great deal of annoyance when Sun OS became Solaris and System V-isms crept in. HP took the squared approach: you could set flags to do things the BSD / V7 way or the System V way. (Oh, also don’t ask why they went from version numbers to “System”…) Similar things happened with other vendors. Huge numbers of scripts and programs got “flags” to set to configure them for one world or the other and figure out how to fix their dependencies… Welcome to Dependency Hell. By Design from AT&T.
Well, I said 3, not 2… The US Federal Government, being a big buyer of Unix systems and a big user of BSD, didn’t like Dependency Hell; so they, of course, made a committee (and we all know how good committees are at elegant design… NOT!) who set about defining THE Standard: POSIX. Which is mostly a mash-up of System V-isms and BSD-isms plus a minimal set of common stuff that must be there. After a few years everyone was “POSIX Compliant” and incompatible with each other… But now we had 3 “standards” with different dependencies.
So along came Linux…
Here we get the added complexity that AT&T had started suing folks who did things that looked too much like Unix, so one set of Gratuitous Changes was done just to be able to say “Look, we are different!”. Eventually AT&T got tired of the game and sold Unix Rights off to other folks, who also eventually got tired of attempting to wring money via lawfare out of folks doing something for free… But we were left with yet more compatibility issues.
Linux is, really, just the “kernel”. That core of the operating system that knows how to make disks work, put things on screens, find library modules, manage memory and basically keep the hardware happy. It was largely written by Linus Torvalds. Now there’s a whole Foundation wrapped around it. A key problem here is that you can buy Foundation position (and power) via application of sufficient money. Not everyone with lots of money is good of heart and loves Linux. Microsoft has bought a seat…
The part people interact with most is either GNU software (commands like “cat” and “awk” and things like the C compiler) or applications on top of it (Firefox, GIMP, LibreOffice). GNU is a meaningless name that is claimed to mean “GNU’s Not Unix” in a recursive sort of way. Cute? Ah, yeah, sure… Each Application is individually developed, and depends on what’s under it: all those “libraries” of basic functions from GNU and all those system calls in the Kernel.
Now here’s the bad part: IF your application expects GNU facilities of the “wrong” sort, or a Linux system call that’s changed – it breaks.
Note, too, that if you want your application to run on Solaris, HPUX, FreeBSD, etc. etc. you get an entirely different set of libraries and system calls to deal with. Sometimes in a glorious confusion of what was historically System V or BSD derived, sometimes Linux derived. Sometimes just different.
Most of this is hidden from your view by the Application Developers who take it in the shorts to make all these things line up right and work. (But sometimes that fails and you get broken software…)
Into this already complex mess have arrived 2 more bits of confusion.
So with that, my historical preamble is done. The next section looks at where we are at now, and references some bits of that history above as context.
ARM vs Intel vs AMD
Each kernel has a release number. You MUST have the right release number for your hardware to work. “Why” is pretty simple. If, for example, you have a kind of memory that’s new, and the old kernel code doesn’t understand it, you can’t use that memory. Ditto for disk drivers, display drivers, all sorts of hardware bits.
For this reason the Kernel must be at least somewhat customized for each machine. This was really not that hard when it was almost entirely Intel CPUs in use (and their AMD Clones). For other odd CPU types, the vendor tended to do all the work and sometimes provided “patches” back to the Linux Foundation developers, who might “mainline” them. So, for example, if you used a MIPS or PowerPC CPU, then most likely your vendor did the “porting” work and either applied their patches to the kernel, sold you a proprietary OS, or sent the patches to Linus, and in a few months to years the Official Mainline Kernel would support your hardware. Maybe. Most of the time, Intel x86 or AMD64 got all the attention first and everyone else lived on patches or hope.
The “embedded systems” folks just loved the dirt cheap ARM chip. Often CPUs could be fabbed into devices at rates as low as 5 ¢ per core for the license. LOTS of ARM chips went into things like routers, switches, IoT devices, and cell phones. Part of the attraction was that a vendor could buy a license and then make their own cores adding on, leaving out, or changing things as they liked. This matters because the kernel must be able to handle those missing bits, added bits, or changed bits.
Needless to say, with hundreds of vendors screwing around with the basic chip spec, the job of keeping the kernel patched to support every possible Dick With Factor is “not small”. In fact, it is a royal PITA.
As a consequence, not all ARM chips get “mainline” support, even if the vendor patches the kernel and sends the patches to The Linux Foundation. This is where you get folks talking about this or that SBC being on a “very old patched kernel”. Their Guy maybe took 3.18 and figured out how to make it go, made patches, and now he’s off doing something else. Since then, the Mainline Kernel has moved on to 4.18++ and, between security patches and just Dick With Changes, has become ever more different from 3.18. Eventually the GNU layer and / or the Applications Layer will need something from 3.19+, and 3.18 is just not going to work. Welcome to Kernel Dependency Hell.
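The quick way to know where a board stands is just to ask it (version numbers here are illustrative):

    # What kernel is this board actually running?
    uname -r
    # An orphaned SBC tends to answer something like "3.18.44-vendor-custom",
    # while a mainline-supported board tracks close to the current kernel.org release.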
So it’s a Big Deal when an SBC Vendor (Single Board Computer) says their “chipset” is supported by the mainline kernel. That means their patch set gets incorporated in with all the other changes with each new kernel released.
In many ways, this process of mainline kernel acceptance is both an indicator of success and a factor in granting it. You can only stay on a very old back level kernel so long before Dependency Hell breaks your device. You can only continue to integrate your patches into a new kernel via application of money and programmer skill – and not all vendors can do that longer term.
This is an example of why “Community Size” matters. The Big Fish get bigger as they stay mainline.
This is also why Linus is not happy about ARM chips. There’s dozens of them, all tossing patches at Linux and all clamoring to be mainline, but not all of them are providing money, time or effort to get the work done; and some of them are direct competitors of those big companies that have bought seats on The Linux Foundation… (but they assure us nothing untoward would ever enter their decisions..)
Why this matters to me (and I hope to you) is that while Intel is THE Mainline target, it has had some horrid hardware / firmware level security “flaws” lately (and one wonders if TLAs “helped” design for that end state…) and costs $Hundreds / CPU. Compare $Hundreds for a CPU to pennies. Hmmm….
So for the absolute minimum risk of Kernel Dependency Hell, buy Intel / AMD computers. For lowest cost and higher security, ARM chips. Then remember that an SBC using an odd chipset, perhaps from some vendor in China who patched their own kernel but doesn’t play well with the Linux Foundation, is likely to have an old kernel (if not now, soon…); you are dependent on them to patch it, and you can find yourself easily in Kernel Dependency Hell after a year or two.
So, right out the gate, given my decision to live on ARM based SBCs, I’ve got a higher risk of Dependency Hell, and a more difficult time getting any OS not supported by the vendor to work on any given odd SBC / chipset. This is why the Raspberry Pi “just works” most of the time: a very large community, lots of folks using the devices so kernel patches get mainlined, kernels kept up to date. It is also why a board like the Odroid C1 or the Orange Pi One can be a pain to keep running and up to date. Are their kernel patches mainlined? For the Orange Pi One, adoption has been low (judging by the observed supported OSs available). Partly this may be the small memory (1/2 GB) available, perhaps the H3 chipset. Does it really matter? What matters is that you get a small choice of supported Linux distributions. The rest is more of a “roll your own” operation, and that MAY involve doing kernel patches for a newer kernel.
For the last several weeks, I’ve been trying different Linux Distributions on some of my SBCs and running into these issues.
So it’s FINE for a vendor to have a low cost hot board, but if they only supply one OS choice, and the kernel is older with custom patches, well, it isn’t going to be keeping up with changes longer term, and then you will eventually enter EOL Dependency Hell.
There’s a certain “crap shoot” aspect to new hardware, too. When a “Hot New Board” comes out with a new chipset (like the RockPro64), the first OS ports will use instruction subsets that work, but are very much not optimal. For the Raspberry Pi, the Floating Point Hardware is often ignored. That’s a big part of your total compute capacity to just ignore and effectively throw away… The armv7 or armhf 32 bit instruction set will run fine on the aarch64 / armv8 64 bit hardware, but you only use 1/2 your word length. It is still VERY common for v8 systems to have a v7 operating system on them, wasting a lot of the hardware. BUT it is easier to port, and armhf is more fully debugged and changing more slowly, so fewer changes means less Dependency Hell. Plus, people are lazy and programmers expensive. If v7 works, well hell, ship it! And go work on some other project…
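You can check whether you are in that situation yourself. A quick sketch (Debian-family commands; the output values are illustrative):

    uname -m                    # "armv7l" = 32 bit kernel; "aarch64" = 64 bit kernel
    lscpu | grep -i 'op-mode'   # what the CPU itself can do, e.g. "32-bit, 64-bit"
    dpkg --print-architecture   # userland: "armhf" (32 bit) vs "arm64" (64 bit)

A v8 board running a v7 OS shows 64-bit capable hardware with a 32 bit kernel and an armhf userland.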
As a consequence it may take a few years for some new SBC / chipset to let you fully use all that new hot hardware. The FPU, GPUs, all your word width. A board may be running at 1/4 or even 1/10th of the ability of the hardware, just due to the operating system being a “quick port” and not a full optimized port and the kernel may not support some of the hardware anyway. So buying a new board is fine and all, but buying it 2 years after release may be better. BUT, if nobody buys it for 2 years, will it still be sold?… And again, community size matters.
So that’s the hardware layer. Moving on…
The Great SystemD Debacle
There’s a hierarchy of GNU / Linux OS development. THE Big Dog is Red Hat. They make Red Hat, Fedora, CentOS, and contribute a lot of the time and money to development that keeps things going. For decades this was a Very Good Thing.
Others would look “upstream” to them for most of the development work, then layer on their particular bits. Debian, for example, has Red Hat as their “upstream”. Debian then applied their package manager and other specific preferences.
Then lots of others would look at someone like Debian as their upstream and often just indulged in some particular “eye candy” cosmetics or specific customization or, like Ubuntu, a heavy QA layer and tweaking.
So when Red Hat decided to rip out core functionality of GNU/Linux and replace it with a HUGE piece of work, one that fundamentally Dicks With all sorts of dependencies, their “downstreams” had to make a decision: figure out how to NOT accept that, know you are headed for Dependency Hell, piss on the Big Dog, and hire enough staff to duplicate the ones at Red Hat on that part of Linux; or just go along.
Most chose to just Go Along. It is the easiest and cheapest choice. Just layer your eye candy over “whatever” and move on.
But the SystemD Dependency Hell just doesn’t stop.
I’ve had 4 boards “brick” from a bad interaction of SystemD with fstab (easily fixed once you know it is happening and what it is), and there have been many stories of systems hanging, requiring a reboot to fix things, etc. (I’ve had to reboot just to get fstab changes to be seen.)
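For anyone hitting the same thing, the work-around I know of is a couple of lines. The device name and mount point here are hypothetical; the options are standard systemd fstab options:

    # /etc/fstab: "nofail" tells systemd not to hang (or drop to emergency mode)
    # at boot when this disk is absent; the timeout caps how long it waits.
    /dev/sda1  /mnt/usb  ext4  defaults,nofail,x-systemd.device-timeout=10s  0  2

And since SystemD turns fstab into generated “mount units” behind your back, an fstab edit is not seen until you tell it to regenerate them (or reboot):

    systemctl daemon-reload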
The basic fact is that the approach of SystemD is fundamentally flawed. It’s a fat and buggy thing getting fatter and buggier over time, and introducing all sorts of Dependency Hell for applications and systems admins to deal with.
As the SystemD cancer spreads throughout the core of Linux, ever more dependencies on it or from it show up, and ever more things become a PITA. But as long as it is easier to just “go along to get along” and do what Red Hat does, folks will just put up with it. But it’s wrong and a pain. It is not an “init process”; it is a Service Manager that wants to own and control your whole system (and not in a good way).
So some of us chose to look elsewhere. We want the *Nix way and a world that is more stable.
It seems odd to call the most stable and traditional *Nix “alternative”, but there it is.
UN-fortunately, a large part of the world had hitched to the Red Hat wagon train, so just went along with it. Now they are committed and will most likely not turn back. It was mostly a smaller group of surly curmudgeons, conservative systems admins, and tech hacker types that were fooling around with other stuff and didn’t go along. Of them, an even smaller subset focus on ARM based SBCs. The world is much easier for folks on AMD/Intel PCs. You have a lot more choices. Knoppix and VOID for example.
My exploration has shown me that it’s a technical challenge to NOT do a SystemD release; partly due to issues of a different kind of Dependency Hell, partly due to those ecosystems having been populated with more hard core hacker types from the get go, so they expect you to be the same.
So here’s my biased opinion of what I’ve found:
BSD
I really have a long term love of BSD. It does its job very well. It is highly secure. There are three main variations for SBCs: OpenBSD, NetBSD, and FreeBSD. More alike than different, but with important differences. OpenBSD cares most about security and hates “binary blobs”, so things like the binary blob boot loader for the Raspberry Pi cause them to say no. FreeBSD has a great ports and packages system, but is a little looser than OpenBSD on things like binary blobs and running proprietary codes. FreeBSD would be my first choice. NetBSD likes to run on everything if possible, but its ports system isn’t as good as FreeBSD’s, and it is a little less friendly for that.
BSD in general is often described as “User Hostile” and likes it that way. I find it very easy to get running to the CLI (Command Line Interface) level, but a pain to make a good X-window system run. I’ve done it, but I hate it.
Installing “packages” often involves compiling them from source code. This can take a while, and it can give more efficient code, but when it breaks, you are the hacker in charge of fixing it.
Many of the BSD ports just find a way to make their prior release run and then stop caring. For one of them, OpenBSD IIRC, there was a v6 instruction set version for v7 and v8 ARM chips. Yeah, it works; but … about the rest of the hardware and the larger word size? Well, you can always port it yourself…
In general, BSD has NOT followed Linux into the various new and different ways of doing things. For this reason their market share is lower (mostly pro shops and back room servers, along with Education) and they have fewer hands working on it. In general, too, there’s a Central Committee that keeps a steady hand on the direction of change. This is great in that you have a stable, well thought out system (where things like SystemD are quashed for being stupid in design), but it is bad in that change is slow, and if you want the new bright ideas, you are SOL.
Also realize that as an entirely different system, not everyone wants to bother porting their applications to BSD and certainly not to all of them. It is an entire domain of Dependency Hell that they can dodge just by saying “We only do Linux”.
It is the oldest of the Old School styles, being fundamentally unchanged since about 1980. It also takes a fair amount of skilled work to embrace it.
Slackware
Slackware is the oldest Linux still kicking around. They never embraced the System V init and still use a BSD / Version 7 style rc.d. They also are not fond of automated dependency-resolving package management.
In general, installing it and making it run were not hard. The GUI came up easy enough. Just generally nice.
What’s not so nice?
By definition, without dependency resolving automation, YOU are the one doing the dependency resolving. For most packages most of the time, this is not an issue. “slackpkg install” is not hard to type, nor is “slackpkg update” or “slackpkg upgrade”.
Where I found it a bother was on failures. “slackpkg install chromium” finds nothing. “slackpkg install gimp” worked just fine. Even gave me the menu item in the drop down. GIMP, however, doesn’t work. “slackpkg install libreoffice” doesn’t work. Is it just not ported to aarch64? Don’t know. Nothing says. I’m on my own to work out dependencies. Welcome to Dependency Hell…
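For the record, the kind of session involved. A sketch (package and file names illustrative):

    slackpkg update                  # refresh the package list
    slackpkg install gimp            # fine, when a package exists for your arch
    # When something is missing, the hunting is manual:
    slackpkg search chromium         # is it in the repository at all?
    slackpkg file-search libGL.so    # which package ships the library I'm missing?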
I’m still going to run it. On the RockPro64 as my main browser and media station in the office.
I’m using it on the Rock64 right now to type this. In another screen I have a chroot Gentoo running. It has been doing an upgrade to the armv7 Gentoo for about 36 hours now. More on that below. As an armv7 capable (armv8 native) A53 core machine, it’s a more “vanilla” architecture, and so more codes ought to be ported to it. Later I’m going to see if LibreOffice, Chromium, and GIMP install on it (right now it is running close to 100% on Gentoo compiles in the chroot…)
I find Slackware comfortable, mostly, and “easy enough” for most things, unless they break. Then I get to deal with Dependency Hell or shift to some other system for that function. For now, I’m just dividing the work between systems. It is a race condition to see if my Slackware skill increases faster or some other system shows up that I like more.
Gentoo
IF you run Gentoo, you ARE a systems developer. The only questions are how experienced and how good. ALL installs and builds are from source code and a small bootable core. I recently found out why…
The big focus of Gentoo is their “Portage system”. It handles all the setting of compile flags and preferences, “USE” variables and more. The problem is just that you must know what all the knobs are and go set them… Things do NOT “just work”.
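For flavor, a fragment of the main knob file (values illustrative, not a recommendation):

    # /etc/portage/make.conf (fragment)
    CFLAGS="-O2 -pipe"            # compiler flags applied to everything you build
    MAKEOPTS="-j4"                # parallel build jobs; roughly one per core
    USE="X -systemd -bluetooth"   # global feature knobs: build with X support,
                                  # without systemd or bluetooth support

Get one of those wrong for your hardware and you find out hours of compiling later.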
I’ve been trying, on and off, for a couple of years to get a GUI Gentoo running on ARM hardware. I’ve not been trying that hard, but still, not yet. I’ve gotten a few to the CLI level of install.
The basic install process is to use a chroot window on some other Linux and download a minimal tarball image in it. Extract the tarball, then use portage in it to download ALL the source code and compile it for your particular computer.
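In rough outline it looks like this (a sketch; the Gentoo handbook has the full incantation, and details like the tarball name vary):

    # Assumes an empty filesystem mounted at /mnt/gentoo and a stage3
    # tarball already downloaded into it.
    cd /mnt/gentoo
    tar xpf stage3-*.tar.xz               # unpack the minimal system, keeping permissions
    mount --bind /dev  /mnt/gentoo/dev    # expose the host's devices,
    mount --bind /proc /mnt/gentoo/proc   # processes,
    mount --bind /sys  /mnt/gentoo/sys    # and sysfs inside the chroot
    cp /etc/resolv.conf /mnt/gentoo/etc/  # so DNS works in there
    chroot /mnt/gentoo /bin/bash          # now you are "in" the new system
    emerge-webrsync                       # fetch the portage tree; then compile... everything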
The major problem is just that you have no clue what all the portage knobs and levers are, nor what to set them to anyway. I sunk 2 days into trying to find the right mirror for my arm64 system and still it wasn’t quite right.
No Problem, I thought, here’s a binary image for the Pine64. I’ll just install it and then I’ll be 90%+ of the way there, and can just “emerge update” and “emerge install-new” and “emerge upgrade” (or whatever) and be on my way. Well, no. Remember those bolded lines from above?
If application A depends upon and can’t run without a specific version of application B, but application B, in turn, depends upon and can’t run without a specific version of application A, then upgrading either application will break the other. This scheme can go deeper in branching. Its impact can be quite heavy if it affects core systems or the update software itself: a package manager (A) which requires a specific run-time library (B) to function may brick itself (A) in the middle of the process of upgrading this library (B) to the next version. Due to the incorrect library (B) version, the package manager (A) is now broken…
That’s exactly what I ran into. I needed to upgrade portage to do the upgrade, but it needed an upgraded dependency (Python, IIRC) that it could not upgrade until I upgraded portage, which needed an upgraded… BRICK.
(Technically not bricked but just stuck, as the OS worked, just could not be changed).
This was a system image that was all of 2 or 3 years old. I hit the same problem on the Orange Pi One IIRC and it was a 2 year old image. That’s just 2 years from “Fine” to “Bricked From Dependency Hell”.
Do NOT expect to use your Gentoo system every couple of years and otherwise leave it un-upgraded. You ARE committed to FREQUENT updates to avoid this lock up / update-bricking.
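When you do hit the cycle, there is sometimes an escape hatch worth trying before giving up. No promises on a years-old image, but for the record:

    # Rebuild just the one tool, ignoring its usual dependency bookkeeping:
    emerge --oneshot sys-apps/portage   # rebuild portage alone, without recording it
    emerge --oneshot dev-lang/python    # or rebuild python alone, then retry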
I’m now trying to work through the process with the Raspberry Pi image from 2018. I’m doing the update / upgrade on it in a chroot window to the 14.1 release, all updated. Then I’ll try to step it to 14.2, which is the present released version (but one back from -current, which is really “development”).
This upgrade has been running for about 36? hours now. It has crashed the chroot once, and I had to reboot the whole box to get a new chroot to work (just closing the terminals didn’t do it). It is presently on 38 out of 40 packages, so may be done soon. (It did 99 out of 138 before the window crashed.) With luck, I’ll “only” need to do this 2 more times… and then it will be in sync. Maybe. IF I can step through the changes without once again stepping into Dependency Hell…
THEN I’ll get to try that GUI install again… Oh Joy…
What Is Needed
What is needed is for someone like Devuan to make a nice distribution that works out of the box, makes it easy to add programs with their package manager, is well QA checked, and is SystemD free.
I really, really like Devuan, and when it works it is great. My only complaint is that the Odroid XU4 2.0 release image seems to be broken. I have 2.0 running on my PC and my Raspberry Pi boxes. I’d love to run it everywhere. But either I’m seriously screwing the pooch on the XU4 install (unlikely, IMHO) or they did a lousy QA job and haven’t noticed for the better part of a year. Not Good.
So I worked out a “work around” to marry their userland to a kernel / boot loader from an Ubuntu uSD card. It worked nicely, except that after the boot, the keyboard and mouse didn’t work. So most likely I didn’t match up the Device Tree Blob (DTB) with the kernel and the LXDE window environment in such a way that they play well together. Welcome to Dependency Hell…
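Roughly the shape of that “marriage”, for the curious (device names and partition layout are illustrative; they vary by image):

    # Both uSD cards attached to a helper Linux box via USB readers:
    mount /dev/sdb2 /mnt/ubuntu       # the Ubuntu card's root filesystem
    mount /dev/sdc2 /mnt/devuan       # the Devuan card's root filesystem
    cp -a /mnt/ubuntu/boot/.        /mnt/devuan/boot/          # kernel image + DTBs
    cp -a /mnt/ubuntu/lib/modules/. /mnt/devuan/lib/modules/   # matching kernel modules
    # If kernel, DTBs, and modules don't all agree, you get exactly the kind of
    # half-working boot (no keyboard, no mouse) described above.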
Sometime this weekend I’ll look at it again.
I’d like to see Devuan really take off. More hands and eyes fixing and testing things. It would be my ideal. VOID is also interesting, but has low interest in ARM chips. Similarly Knoppix would be a nice base to start from, but they are very PC Centric.
What it comes down to is just where among the various levels do I want to stick my oar into the sea of Dependency Hell. Assembling my own DTB / Kernel / Userland tarball on the XU4 for Devuan, and maybe trying it on other boards too? At the source code dependencies of Gentoo? Package dependencies of Slackware?
Or just live a while longer on Armbian, as SystemD continues to make “everything you know is wrong” and erodes my ability to manage my own systems as their Dick With Factor dicks with more stuff?
All because some folks can’t “keep it zipped” and avoid screwing up things, and do not realize just how evil Dependency Hell can be.
I do hope that the wider Linux community comes to realize just how bad it can be. At present, Linux is at risk of dying in Dependency Hell and I think most of them don’t even see it. Heck, I’m thinking a used Macintosh is a good idea… It doesn’t take much for folks to “just walk away”.