Linux Dying In Dependency Hell

There is a concept in computer programming of “Dependency Hell”. It comes about, IMHO, when folks forget to follow the K.I.S.S. principle (Keep It Simple, Stupid) and/or just don’t pay attention to a couple of basics of computing. In particular, they fail to realize that ALL change is incredibly expensive in time and effort, while changes that are incompatible with other parts of the system (or other changes) can be lethal (to the project, product, or whole system).

The Unix Way of “Do one small thing and do it well” comes from this understanding. One Small Thing done well is unlikely to change much. If I have a program that just takes a byte stream and directs it to a file, there’s not a lot of room for “enhancements”, revisions, or bugs. If my “init system” just runs as PID 1 (Process ID #1) and launches some other processes listed in a script or configuration file, well, my init system is unlikely to ever need much change, revision, or “enhancement”, nor will it have much in the way of bugs (if any). This has been fundamental and true for about 50 years of Unix history.
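
To make that concrete, here is a toy sketch (my own illustration, not any real distro’s code; the /etc/rc.daemons file name is made up) of roughly everything such a traditional init has to do once the kernel hands it control:

    #!/bin/sh
    # Toy sketch of a traditional init: read a list of daemons from a
    # config file and start each one, in order. Real rc scripts vary,
    # but the core job is about this small.
    DAEMONS=/etc/rc.daemons        # hypothetical file: one command per line

    while read -r daemon; do
        case "$daemon" in
            ''|'#'*) continue ;;   # skip blank lines and comments
        esac
        echo "Starting: $daemon"
        $daemon &                  # launch it, move on to the next
    done < "$DAEMONS"

    wait                           # then just sit and reap children

That is the whole program. Not much surface there for bugs, or for Dependency Hell to grab onto.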

What has happened relatively recently is an explosion of (gratuitous?) change and “enhancement” that looks to me like it is NOT making things better and IS making things worse, simply because it makes for a huge growth in Dependency Hell issues.

Here’s the Wiki on it:

https://en.wikipedia.org/wiki/Dependency_hell

I’ve bolded some bits.

Dependency hell is a colloquial term for the frustration of some software users who have installed software packages which have dependencies on specific versions of other software packages.

The dependency issue arises around shared packages or libraries on which several other packages have dependencies but where they depend on different and incompatible versions of the shared packages. If the shared package or library can only be installed in a single version, the user may need to address the problem by obtaining newer or older versions of the dependent packages. This, in turn, may break other dependencies and push the problem to another set of packages.

Dependency hell takes several forms:

Many dependencies
An application depends on many libraries, requiring lengthy downloads, large amounts of disk space, and being very portable (all libraries are already ported enabling the application itself to be ported easily). It can also be difficult to locate all the dependencies, which can be fixed by having a repository (see below). This is partly inevitable; an application built on a given computing platform (such as Java) requires that platform to be installed, but further applications do not require it. This is a particular problem if an application uses a small part of a big library (which can be solved by code refactoring), or a simple application relies on many libraries.

Long chains of dependencies
If app depends on liba, which depends on libb, …, which depends on libz. This is distinct from “many dependencies” if the dependencies must be resolved manually (e.g., on attempting to install app, the user is prompted to install liba first. On attempting to install liba, the user is then prompted to install libb, and so on.). Sometimes, however, during this long chain of dependencies, conflicts arise where two different versions of the same package are required (see conflicting dependencies below). These long chains of dependencies can be solved by having a package manager that resolves all dependencies automatically. Other than being a hassle (to resolve all the dependencies manually), manual resolution can mask dependency cycles or conflicts.

Conflicting dependencies
If app1 depends on libfoo 1.2, and app2 depends on libfoo 1.3, and different versions of libfoo cannot be simultaneously installed, then app1 and app2 cannot simultaneously be used (or installed, if the installer checks dependencies). When possible, this is solved by allowing simultaneous installations of the different dependencies. Alternatively, the existing dependency, along with all software that depends on it, must be uninstalled in order to install the new dependency. A problem on Linux systems with installing packages from a different distributor (which is not recommended or even supposed to work) is that the resulting long chain of dependencies may lead to a conflicting version of the C standard library (e.g. the GNU C Library), on which thousands of packages depend. If this happens, the user will be prompted to uninstall all those packages.

Circular dependencies
If application A depends upon and can’t run without a specific version of application B, but application B, in turn, depends upon and can’t run without a specific version of application A. Upgrading any application will break another. This scheme can be deeper in branching. Its impact can be quite heavy, if it affects core systems or update software itself: a package manager(A), which requires specific run-time library(B) to function, may brick itself(A) in the middle of the process when upgrading this library(B) to next version. Due to incorrect library (B) version, the package manager(A) is now broken- thus no rollback or downgrade of library(B) is possible. The usual solution is to download and deploy both applications, sometimes from within a temporary environment.

Package Manager Dependencies
Dependency hell is unlikely but possible to result from installing a prepared package via a package manager (e.g. APT), because major package managers have matured and official repositories are well maintained. This is the case with current releases of Debian and major derivates such as Ubuntu. Dependency hell, however, can result from installing a package directly via a package installer (e.g. RPM or dpkg).

Diamond dependency
When a library A depends on libraries B and C, both B and C depend on library D, but B requires version D.1 and C requires version D.2. The build fails because only one version of D can exist in the final executable.

Package managers like yum, are prone to have conflicts between packages of their repositories, causing dependency hell in Linux distributions such as CentOS and Red Hat Enterprise Linux.
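
To make the “conflicting dependencies” and “diamond” cases concrete, here is a toy sketch (the package names and the deps.txt format are invented for illustration) that flags a shared library wanted at two different versions:

    # Toy data: each line is "consumer dependency version"
    cat > /tmp/deps.txt <<'EOF'
    app1 libfoo 1.2
    app2 libfoo 1.3
    app1 libbar 2.0
    EOF

    # If the same dependency shows up at two versions, and only one version
    # can be installed at a time, that is the conflict described above.
    awk '($2 in seen) && seen[$2] != $3 {
             print "CONFLICT: " $2 " wanted at " seen[$2] " and " $3
         }
         { seen[$2] = $3 }' /tmp/deps.txt

Running it prints a conflict for libfoo and stays quiet about libbar. Every package manager mentioned above is, at heart, trying to automate that check across thousands of packages.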

This was to some large extent mitigated by having a huge common base in Linux (often shared with the BSD world) and a very large number of folks aware of the issues and working (with the right attitudes) to keep any consequences small. A key concept here is the “Dick With Factor”.

Some folks just can’t stop themselves from “Dicking around with things”. This is often seen in relatively inexperienced folks (not enough burned fingers in their past) or folks whose ego exceeds their ability. Folks who “have a bright idea” that isn’t really all that bright, and folks who “want to make a name” for themselves (at the expense of aggravation for everyone else). They tend to screw things up most of the time, though occasionally a good idea gets through.

One Big Onslaught of “Dick With Factor” was the “System V, consider it standard!” push by AT&T. It fractured Unix into 3 different worlds, but only 2 of them directly. Originally, the Unix set free in the world was Version 7 (note: I’m abbreviating the history. By definition there were 6 prior versions, but they were mostly inside Bell Labs or of limited reach. Commonly, the start of Unix history is held at 1971 or so, and the Version most folks first saw was 7. You might ask: Why is System FIVE after Version SEVEN? Please don’t ;-) There is no good answer other than some marketing dweeb in AT&T with a high Dick With Factor).

Version 7 was the base from which all the BSDs sprang. AT&T was forbidden to make money off of software then, being a monopoly, so it granted U.C. Berkeley a sweetheart license that included the right to grant sub-licenses, all for free. Universities and colleges around the world adopted BSD as their teaching tool. Sun Microsystems used it as the base for Sun OS, and many other manufacturers also used it.

Then AT&T went through the trauma of a Government Dick With Process and got broken up, but also got the right to make a profit off of software (again, I’m compressing a few years of history into one sentence. If that bothers you, go write a book…). They tried to reclaim the BSD licenses, but could not. So what’s the next best thing? Declare that the only Standard Unix is the one YOU sell, and rename, repackage, and Dick With a lot of stuff to make it different and incompatible. Thus System V, consider… it standard?

Well, this broke all sorts of stuff, among them the init process. rc.d gave way to init.d and other “changes for change’s sake”, and a whole lot of Systems Admins got to double or square their workload. This happened while I was at Apple, and there was a great deal of annoyance when Sun OS became Solaris and System V isms crept in. HP took the squared approach: you could set flags to do things the BSD / V.7 way or the System V way. (Oh, also don’t ask why they went from version numbers to “System”…) Similar things happened with other vendors. Huge numbers of scripts and programs got “flags” to set to configure them for one world or the other, and everyone got to figure out how to fix their dependencies… Welcome to Dependency Hell. By Design, from AT&T.

Well, I said 3, not 2… The US Federal Government, being a big buyer of Unix systems and a big user of BSD, didn’t like Dependency Hell, so they, of course, made a committee (and we all know how good committees are at elegant design… NOT!) that set about defining THE Standard: POSIX. Which is mostly a mash-up of System V isms and BSD isms and a minimal set of common stuff that must be there. After a few years everyone was “POSIX Compliant” and incompatible with each other… But now we had 3 “standards” with different dependencies.

So along came Linux…

Here we get the added complexity that AT&T had started suing folks who did things that looked too much like Unix, so one set of Gratuitous Changes was done just to be able to say “Look, we are different!”. Eventually AT&T got tired of the game and sold Unix Rights off to other folks, who also eventually got tired of attempting to wring money via lawfare out of folks doing something for free… But we were left with yet more compatibility issues.

Linux is, really, just the “kernel”. That core of the operating system that knows how to make disks work, put things on screens, find library modules, manage memory and basically keep the hardware happy. It was largely written by Linus Torvalds. Now there’s a whole Foundation wrapped around it. A key problem here is that you can buy Foundation position (and power) via application of sufficient money. Not everyone with lots of money is good of heart and loves Linux. Microsoft has bought a seat…

The part people interact with most is either GNU software (commands like “cat” and “awk” and things like the C compiler) or applications on top of it (Firefox, Gimp, LibreOffice). GNU is a recursive name said to stand for “GNU’s Not Unix”. Cute? Ah, yeah, sure… Each Application is individually developed, and depends on what’s under it: all those “libraries” of basic functions from GNU and all those system calls in the Kernel.

Now here’s the bad part: IF your application expects GNU facilities of the “wrong” sort, or a Linux system call that’s changed – it breaks.

Note, too, that if you want your application to run on Solaris, HPUX, FreeBSD, etc. etc. you get an entirely different set of libraries and system calls to deal with. Sometimes in a glorious confusion of what was historically System V or BSD derived, sometimes Linux derived. Sometimes just different.

Most of this is hidden from your view by the Application Developers who take it in the shorts to make all these things line up right and work. (But sometimes that fails and you get broken software…)

Into this already complex mess have arrived 2 more bits of confusion: the explosion of ARM hardware, and SystemD.

So with that, my historical preamble is done. The next sections look at where we are now, and reference some bits of that history above as context.

ARM vs Intel vs AMD

Each kernel has a release number. You MUST have the right release for your hardware to work. “Why” is pretty simple: if, for example, you have a kind of memory that’s new, and the old kernel code doesn’t understand it, you can’t use that memory. Ditto for disk drivers, display drivers, and all sorts of other hardware bits.

For this reason the kernel must be at least somewhat customized for each machine. This was really not that hard when it was almost entirely Intel CPUs in use (and their AMD clones). For other odd CPU types, the vendor tended to do all the work and sometimes provided “patches” back to the kernel developers, who might “mainline” them. So, for example, if you used a MIPS or PowerPC CPU, then most likely your vendor did the “porting” work and either applied their patches to the kernel, sold you a proprietary OS, or sent the patches to Linus, and in a few months to years the Official Mainline Kernel would support your hardware. Maybe. Most of the time, Intel x86 or AMD64 got all the attention first and everyone else lived on patches or hope.

The “embedded systems” folks just loved the dirt cheap ARM chip. Often CPUs could be fabbed into devices at rates as low as 5 ¢ per core for the license. LOTS of ARM chips went into things like routers, switches, IoT devices, and cell phones. Part of the attraction was that a vendor could buy a license and then make their own cores adding on, leaving out, or changing things as they liked. This matters because the kernel must be able to handle those missing bits, added bits, or changed bits.

Needless to say, with hundreds of vendors screwing around with the basic chip spec, the job of keeping the kernel patched to support every possible Dick With Factor is “not small”. In fact, it is a royal PITA.

As a consequence, not all ARM chips get “mainline” support, even if the vendor patches the kernel and sends the patches to The Linux Foundation. This is where you get folks talking about this or that SBC being on a “very old patched kernel”. Their Guy maybe took 3.18 and figured out how to make it go, made patches, and now he’s off doing something else. Since then, the Mainline Kernel has moved on to 4.18++, and between security patches and just Dick With Changes it is becoming ever more different from 3.18; eventually the GNU layer and / or the Applications Layer will need something from 3.19+, and 3.18 is just not going to work. Welcome to Kernel Dependency Hell.
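
If you want to see where any given board stands, a couple of stock commands tell the story (these exist on essentially any Linux; /etc/os-release is standard on most recent distros):

    uname -r               # the kernel release actually running (a 3.x here means a stale vendor kernel)
    uname -m               # the hardware architecture string
    cat /etc/os-release    # which distribution and release shipped it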

So it’s a Big Deal when an SBC (Single Board Computer) vendor says their “chipset” is supported by the mainline kernel. That means their patch set gets incorporated, along with all the other changes, into each new kernel released.

In many ways, this process of mainline kernel acceptance is both an indicator of success and a factor in granting it. You can only stay on a very old back level kernel so long before Dependency Hell breaks your device. You can only continue to integrate your patches into a new kernel via application of money and programmer skill – and not all vendors can do that longer term.

This is an example of why “Community Size” matters. The Big Fish get bigger as they stay mainline.

This is also why Linus is not happy about ARM chips. There’s dozens of them, all tossing patches at Linux and all clamoring to be mainline, but not all of them are providing money, time or effort to get the work done; and some of them are direct competitors of those big companies that have bought seats on The Linux Foundation… (but they assure us nothing untoward would ever enter their decisions..)

Why this matters to me (and I hope to you) is that while Intel is THE Mainline target, it has had some horrid hardware / firmware level security “flaws” lately (and one wonders if TLAs “helped” design for that end state…) and costs $Hundreds / CPU. Compare $Hundreds for a CPU to pennies. Hmmm….

So for the absolute minimum risk of Kernel Dependency Hell, buy Intel / AMD computers. For lowest cost and higher security, ARM chips. Then remember that an SBC using an odd chipset, perhaps from some vendor in China who patched their own kernel but doesn’t play well with the Linux Foundation, is likely to have an old kernel (if not now, soon…); you are dependent on them to patch it, and you can find yourself easily in Kernel Dependency Hell after a year or two.

So, right out the gate, given my decision to live on ARM based SBCs, I’ve got a higher risk of Dependency Hell, and a more difficult time getting any OS not supported by the vendor to work on any given odd SBC / chipset. This is why the Raspberry Pi “just works” most of the time: a very large community, lots of folks using the devices so kernel patches get mainlined, kernels kept up to date. It is also why a board like the Odroid C1 or the Orange Pi One can be a pain to keep running and up to date. Are their kernel patches mainlined? For the Orange Pi One, adoption has been low (judging by the observed supported OSs available). Partly this may be the small memory (1/2 GB) available, perhaps the H3 chipset. Does it really matter? What matters is that you get a small choice of supported Linux distributions. The rest is more of a “roll your own” operation, and that MAY involve doing kernel patches for a newer kernel.

For the last several weeks, I’ve been trying different Linux Distributions on some of my SBCs and running into these issues.

So it’s FINE for a vendor to have a low cost hot board, but if they only supply one OS choice, and the kernel is older with custom patches, well, it isn’t going to be keeping up with changes longer term, and then you will eventually enter EOL Dependency Hell.

There’s a certain “crap shoot” aspect to new hardware, too. When a “Hot New Board” comes out with a new chipset (like the RockPro64), the first OS ports will use instruction subsets that work, but are very much not optimal. For the Raspberry Pi, the Floating Point Hardware is often ignored. That’s a big part of your total compute capacity to just ignore and effectively throw away… The armv7 / armhf 32 bit instruction set will run fine on aarch64 / armv8 64 bit hardware, but you only use 1/2 your word length. It is still VERY common for v8 systems to have a v7 operating system on them, wasting a lot of the hardware. BUT it is easier to port, and armhf is more fully debugged and changing more slowly, so fewer changes means less Dependency Hell. Plus, people are lazy and programmers expensive. If v7 works, well hell, ship it! and go work on some other project…
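
If you want to check whether your own board is in that situation, something like this works on most Linux systems (the usual paths; your image may differ):

    uname -m        # armv7l = 32 bit kernel, aarch64 = 64 bit kernel
    file /bin/ls    # shows whether the userland binaries are 32 or 64 bit
    grep -E -m1 'Features|flags' /proc/cpuinfo   # look for vfp / neon: FPU and SIMD present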

As a consequence it may take a few years for some new SBC / chipset to let you fully use all that new hot hardware. The FPU, GPUs, all your word width. A board may be running at 1/4 or even 1/10th of the ability of the hardware, just due to the operating system being a “quick port” and not a full optimized port and the kernel may not support some of the hardware anyway. So buying a new board is fine and all, but buying it 2 years after release may be better. BUT, if nobody buys it for 2 years, will it still be sold?… And again, community size matters.

So that’s the hardware layer. Moving on…

The Great SystemD Debacle

There’s a hierarchy of GNU / Linux OS development. THE Big Dog is Red Hat. They make Red Hat, Fedora, and CentOS, and contribute a lot of the time and money to development that keeps things going. For decades this was a Very Good Thing.

Others would look “upstream” to them for most of the development work, then layer on their particular bits. Debian, for example, effectively treats Red Hat’s output as “upstream” for much of the core plumbing, then applies their package manager and other specific preferences.

Then lots of others would look at someone like Debian as their upstream and often just indulged in some particular “eye candy” cosmetics or specific customization or, like Ubuntu, a heavy QA layer and tweaking.

So when Red Hat decided to rip out core functionality of GNU/Linux and replace it with a HUGE piece of work, one that fundamentally Dicks With all sorts of dependencies, their “downstreams” had to make a decision: figure out how to NOT accept that, knowing you are headed for Dependency Hell, pissing on the Big Dog, and hiring enough staff to duplicate the ones at Red Hat on that part of Linux; or just go along.

Most chose to just Go Along. It is the easiest and cheapest choice. Just layer your eye candy over “whatever” and move on.

But the SystemD Dependency Hell just doesn’t stop.

I’ve had 4 boards “brick” from a bad interaction of SystemD with fstab (easily fixed once you know it is happening and what it is), and there have been many stories of systems hanging, requiring a reboot to fix things, etc. (I’ve had to reboot just to get fstab changes to be seen.)
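
For anyone hitting the same thing: on a SystemD system, /etc/fstab gets translated into generated “mount units”, so edits to fstab are not seen until those units are regenerated. A minimal sketch of the workaround, using the standard systemctl commands:

    # After editing /etc/fstab on a SystemD box:
    sudo systemctl daemon-reload   # regenerate mount units from the new fstab
    sudo mount -a                  # then attempt the mounts
    # Under classic init, "mount -a" alone was the whole story.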

The basic fact is that the approach of SystemD is fundamentally flawed. It’s a fat and buggy thing getting fatter and buggier over time, and introducing all sorts of Dependency Hell for applications and systems admins to deal with.

As the SystemD cancer spreads throughout the core of Linux, ever more dependencies on it or from it show up and ever more things become a PITA. But as long as it is easier to just “go along to get along” and do what Red Hat does, folks will just put up with it. But it’s wrong and a pain. It is not an “init process”, it is a Service Manager that wants to own and control your whole system (and not in a good way).

So some of us chose to look elsewhere. We want the *Nix way and a world that is more stable.

The Alternatives

It seems odd to call the most stable and traditional *Nix “alternative”, but there it is.

UN-fortunately, a large part of the world had hitched to the Red Hat wagon train, so just went along with it. Now they are committed and will most likely not turn back. It was mostly a smaller group of surly curmudgeons, conservative systems admins, and tech hacker types that were fooling around with other stuff and didn’t go along. Of them, an even smaller subset focus on ARM based SBCs. The world is much easier for folks on AMD/Intel PCs. You have a lot more choices. Knoppix and VOID for example.

My exploration has shown me that it’s a technical challenge to NOT do a SystemD release, partly due to issues of a different kind of Dependency Hell, and partly because those ecosystems have been populated with more hard core hacker types from the get go, so they expect you to be the same.

So here’s my biased opinion of what I’ve found:

BSD

I really have a long term love of BSD. It does its job very well. It is highly secure. There are three main variations for SBCs: OpenBSD, NetBSD, and FreeBSD. More alike than different, but with important differences. OpenBSD cares most about security and hates “binary blobs”, so things like the binary blob boot loader for the Raspberry Pi cause them to say no. FreeBSD has a great ports and packages system, but is a little looser than OpenBSD on things like binary blobs and running proprietary code. FreeBSD would be my first choice. NetBSD likes to run on everything if possible, but its ports system isn’t as good as FreeBSD’s, and it is a little less friendly for that.

BSD in general is often described as “User Hostile” and likes it that way. I find it very easy to get running to the CLI (Command Line Interface) level, but a pain to make a good X-window system run. I’ve done it, but I hate it.

Installing “packages” often involves compiling them from source code. This can take a while, and it can give more efficient code, but when it breaks, you are the hacker in charge of fixing it.
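
On FreeBSD, for example, the two roads look about like this (gimp as the example; its port lives under graphics/ in the standard ports tree):

    # Prebuilt binary package, if one exists for your architecture:
    pkg install gimp

    # Or build it yourself from the ports tree; you own any breakage:
    cd /usr/ports/graphics/gimp && make install clean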

Many of the BSD ports just find a way to make their prior release run and then stop caring. For one of them, OpenBSD IIRC, there was a v6 instruction set version for v7 and v8 ARM chips. Yeah, it works; but what about the rest of the hardware and the larger word size? Well, you can always port it yourself…

In general, BSD has NOT followed Linux into the various new and different ways of things. For this reason their market share is lower (mostly pro shops and back room servers along with Education) and they have fewer hands working on it. In general, too, there’s a Central Committee that keeps a steady hand on the direction of change. This is great in that you have a stable well thought out system (where things like SystemD are quashed for being stupid in design) but it is bad in that change is slow and if you want the new bright ideas, you are SOL.

Also realize that as an entirely different system, not everyone wants to bother porting their applications to BSD and certainly not to all of them. It is an entire domain of Dependency Hell that they can dodge just by saying “We only do Linux”.

It is the oldest of the Old School styles, being fundamentally unchanged since about 1980. It also takes a fair amount of skilled work to embrace it.

Slackware

Slackware is the oldest Linux still kicking around. They never embraced the System V Init and still use a BSD / Version 7 style rc.d. They also are not fond of automated dependency resolution package management.

In general, installing it and making it run were not hard. The GUI came up easy enough. Just generally nice.

What’s not so nice?

By Definition, without dependency resolving automation YOU are the one doing the dependency resolving. For most packages, most of the time, this is not an issue. “slackpkg install” is not hard to type, nor is “slackpkg update” or “slackpkg upgrade”.

Where I found it a bother was on failures. “slackpkg install chromium” finds nothing. “slackpkg install gimp” worked just fine. Even gave me the menu item in the drop down. GIMP, however, doesn’t work. “slackpkg install libreoffice” doesn’t work. Is it just not ported to aarch64? Don’t know. Nothing says. I’m on my own to work out dependencies. Welcome to Dependency Hell…
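
When a package installs but won’t run, the old school move is to hunt down the missing pieces yourself. A sketch (the library name is a placeholder):

    # Which shared libraries does the binary want, and which are missing?
    ldd /usr/bin/gimp | grep 'not found'

    # Then search the repository for whatever turned up missing:
    slackpkg search libsomething   # placeholder name for the missing library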

I’m still going to run it. On the RockPro64 as my main browser and media station in the office.

I’m using it on the Rock64 right now to type this. In another screen I have a chroot Gentoo running. It has been doing an upgrade to the armv7 Gentoo for about 36 hours now. More on that below. As an armv7 capable (armv8 native) A53 core machine, it’s a more “vanilla” architecture and so more codes ought to be ported to it. Later I’m going to see if LibreOffice, Chromium, and Gimp install on it (right now it is running close to 100% on Gentoo compiles in the chroot…)

I find Slackware comfortable, mostly, and “easy enough” for most things, unless they break. Then I get to deal with Dependency Hell or shift to some other system for that function. For now, I’m just dividing the work between systems. It is a race condition to see if my Slackware skill increases faster or some other system shows up that I like more.

Gentoo

IF you run Gentoo, you ARE a systems developer. The only questions are how experienced and how good. ALL installs and builds are from source code and a small bootable core. I recently found out why…

The big focus of Gentoo is their “Portage system”. It handles all the setting of compile flags and preferences, “USE” variables and more. The problem is just that you must know what all the knobs are and go set them… Things do NOT “just work”.

I’ve been trying, on and off, for a couple of years to get a GUI Gentoo running on ARM hardware. I’ve not been trying that hard, but still, not yet. I’ve gotten a few to the CLI level of install.

The basic install process is to use a chroot window on some other Linux and download a minimal tarball image in it. Extract the tarball, then use portage in it to download ALL the source code and compile it for your particular computer.
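
In outline, that dance looks about like this (a sketch of the usual handbook steps; the stage3 file name and /mnt/gentoo path are the conventional examples):

    mkdir -p /mnt/gentoo
    tar xpf stage3-*.tar.xz -C /mnt/gentoo   # unpack the minimal stage3 image
    mount -t proc proc /mnt/gentoo/proc      # give the chroot a live /proc
    mount --rbind /sys /mnt/gentoo/sys
    mount --rbind /dev /mnt/gentoo/dev
    cp /etc/resolv.conf /mnt/gentoo/etc/     # DNS inside the chroot
    chroot /mnt/gentoo /bin/bash             # from here, portage builds the world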

The major problem is just that you have no clue what all the portage knobs and levers are, nor what to set them to anyway. I sunk 2 days into trying to find the right mirror for my arm64 system, and still it wasn’t quite right.

No Problem, I thought, here’s a binary image for the Pine64. I’ll just install it, and then I’ll be 90%+ of the way there and can just “emerge update” and “emerge install-new” and “emerge upgrade” (or whatever) and be on my way. Well, no. Remember those bolded lines from above?

Circular dependencies
If application A depends upon and can’t run without a specific version of application B, but application B, in turn, depends upon and can’t run without a specific version of application A. Upgrading any application will break another. This scheme can be deeper in branching. Its impact can be quite heavy, if it affects core systems or update software itself: a package manager(A), which requires specific run-time library(B) to function, may brick itself(A) in the middle of the process when upgrading this library(B) to next version. Due to incorrect library (B) version, the package manager(A) is now broken-

That’s exactly what I ran into. I needed to upgrade portage to do the upgrade, but it needed an upgraded dependency (Python, IIRC) that it could not upgrade until I upgraded portage, which needed the upgraded dependency…. BRICK.

(Technically not bricked but just stuck, as the OS worked, just could not be changed).

This was a system image that was all of 2 or 3 years old. I hit the same problem on the Orange Pi One IIRC and it was a 2 year old image. That’s just 2 years from “Fine” to “Bricked From Dependency Hell”.

Do NOT expect to use your Gentoo system only every couple of years and otherwise leave it un-upgraded. You ARE committed to FREQUENT updates to avoid this lock up / update-bricking.
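
The defensive routine, done often, is roughly this (standard portage commands; --oneshot keeps portage itself out of the world file):

    emerge --sync                       # refresh the portage tree
    emerge --oneshot sys-apps/portage   # update the package manager itself FIRST
    emerge -uDN @world                  # then update everything else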

I’m now trying to work through the process with the Raspberry Pi image from 2018. I’m doing the update / upgrade on it in a chroot window to the 14.1 release, all updated. Then I’ll try to step it to 14.2, which is the present released version (but one back from -current, which is really “development”).

This upgrade has been running for about 36? hours now. It has crashed the chroot once, and I had to reboot the whole box to get a new chroot to work (just closing the terminals didn’t do it), and it is presently on 38 out of 40 packages, so it may be done soon. (It did 99 out of 138 before the window crashed.) With luck, I’ll “only” need to do this 2 more times… and then it will be in sync. Maybe. IF I can step through the changes without once again stepping into Dependency Hell…

THEN I’ll get to try that GUI install again… Oh Joy…

What Is Needed

What is needed is someone like Devuan to make a nice, works out of the box, easy to add programs with their package manager, well QA checked, and SystemD free distribution.

I really really like Devuan, and when it works it is great. My only complaint is that the Odroid XU4 2.0 release image seems to be broken. I have 2.0 running on my PC and my Raspberry Pi boxes. I’d love to run it everywhere. But either I’m seriously screwing the pooch on the XU4 install (unlikely IMHO) or they did a lousy QA job and haven’t noticed in the better part of a year. Not Good.

So I worked out a “work around” to marry their userland to a kernel / boot loader from an Ubuntu uSD card. It worked nicely, except that after the boot, keyboard and mouse didn’t work. So most likely I didn’t match up the Device Tree Blob (DTB) with the kernel and the LXDE window environment in such a way that they play well together. Welcome to Dependency Hell….
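
For the record, the “marriage” itself is conceptually simple, which is why the failure smells like a DTB / kernel mismatch rather than a flaw in the method. A rough sketch (device names and mount points are examples only; the XU4’s real card layout differs):

    mount /dev/sdX2 /mnt/donor    # root fs of the working Ubuntu card (example device)
    mount /dev/sdY2 /mnt/devuan   # root fs of the Devuan userland
    cp -a /mnt/donor/boot/. /mnt/devuan/boot/                 # kernel, initrd, DTBs
    cp -a /mnt/donor/lib/modules/. /mnt/devuan/lib/modules/   # matching kernel modules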

Sometime this weekend I’ll look at it again.

I’d like to see Devuan really take off. More hands and eyes fixing and testing things. It would be my ideal. VOID is also interesting, but has low interest in ARM chips. Similarly Knoppix would be a nice base to start from, but they are very PC Centric.

What it comes down to is just where among the various levels do I want to stick my oar into the sea of Dependency Hell. Assembling my own DTB / Kernel / Userland tarball on the XU4 for Devuan, and maybe trying it on other boards too? At the source code dependencies of Gentoo? Package dependencies of Slackware?

Or just live a while longer on Armbian, as SystemD continues to make “everything you know is wrong” and erodes my ability to manage my own systems as their Dick With Factor dicks with more stuff?

All because some folks can’t “keep it zipped” and avoid screwing up things, and do not realize just how evil Dependency Hell can be.

I do hope that the wider Linux community comes to realize just how bad it can be. At present, Linux is at risk of dying in Dependency Hell and I think most of them don’t even see it. Heck, I’m thinking a used Macintosh is a good idea… It doesn’t take much for folks to “just walk away”.


64 Responses to Linux Dying In Dependency Hell

  1. YMMV says:

    They say MacOS is based on BSD. In my experience, Mac is pretty good. They do add more features every upgrade cycle, so older Macs can’t run the newer MacOS after a number of those cycles. That doesn’t bother everybody. Better than Windows where you don’t dare let an old OS connect to the internet.

    Do we still have to say *NIX or is it just habit? Who or what is the driving force behind SystemD?

  2. E.M.Smith says:

    @YMMV:

    The current crop of MacOS is basically a highly modified BSD running on a Mach kernel. You can jump under the GUI skin and find a shell with ls, df, etc.

    You don’t have to say *Nix; instead you can say “Unix, Linux, POSIX, and other Unix like operating systems”, but *nix is a lot faster… as “anything”-Nix.

    The driving force behind SystemD, IMHO, is superfast boots on a VM in big cloud computing shops. A machine crashes, you want to respin up the VM image inside a couple of seconds so your service interruption is near zero. It can cost $Millions per hour to have a booking engine down, for example (I upgraded one live once at $4 million per hour if I failed).

    That leads to NOT doing the sequential boot of rc.d or init.d, as that takes a minute or two. At $4 million per hour, call it roughly $66,666 lost per minute….

    So they accept the issues of non-synchronous boots in exchange for speedy boots.

    After that, the mission creep sets in. How to control and restart services with your Service Manager and avoid downtime reboots (but poorly implemented).

    Now me, at home, do I care about relaunch of a VM with restart of a process where it left off in under a few seconds? Or do I care about easy systems admin, readable log files, stability release to release, not needing a $5000 recertification each year from Red Hat, etc.?

    It may be God’s gift to Cloud Service Centers, but it is crap for home gamers and small shops.

  3. p.g.sharrow says:

    @EMSmith; thank you for the lesson on computer operating systems from the “EMSMITH UNIVERSITY of Computer Science”! I really got my money’s worth this time…pg

  4. E.M.Smith says:

    @P.G.:

    Any time… “It’s what I do”…

    FWIW, right now I’m watching RSBN on the Odroid N2 in the bedroom at the Trump rally… then in another terminal window, I’ve done:

    ssh to the Rock64 that’s in the office running Slackware.

    Opened a terminal on it, and in that window, did a SU to root.

    That, then, let me do a “chroot” into the Gentoo partition on it…

    where I’m now doing a test X11 / LXDE install process…

    So how many levels of indirection is that? I’m not sure (sake is involved as it IS Friday ;-)

    But really, I think I’m into it 3 OS distributions, 3 (or maybe 2) machines (depending on your attitude about the TV) and a whole lot of context….

    Dependency Hell? Dependency HELL! I don’t care about no Steeeenkiing Dependency HELL!!!!

    (It takes a certain amount of hubris to do this kind of thing… I admit… )

  5. p.g.sharrow says:

    If you can walk and chew gum at the same time, Why not pat your self on the back as well. LoL ;-)…pg

  6. jim2 says:

    I’m sad to see Linux sliced and diced in the worst way, but I’m not sure about the cloud aspect of it.

    Typically for a cloud application, there are a few VMs on different racks for the same app, or even in different parts of the country/world. So if one rack nukes a server, you don’t even notice. After all, all the apps in the cloud are web apps.

    That said, Mint does boot a lot faster now. But I also keep getting bad disk sectors from time to time, even after a new disk drive. I’m wondering if it’s trying to write something to a log as an admin when I don’t routinely run the machine as an admin user.

    At any rate, I’m with you on keeping the old Nix way. I don’t twiddle bits often, but when I do, I want it to work as I expect.

  7. H.R. says:

    @p.g. – That was a back-handed, lefthanded, tongue-in-cheek over the top compliment to our host, E.M., yet at the same time it was one of the nicest things written to/about/for E.M. that I’ve probably ever seen posted here.

    Well done. Nicely said. Funny, accurate, sincerely given, and greatly appreciated, seconded and supported by this Chiefio Blog aficionado.

    This is waaaay more than just a fun place to hang out.
    .
    .
    .
    .
    I showed up here to support and encourage E.M. when he started this blog to a) have a place to archive and display his analyses/takedowns regarding CAGW that he was tired of re-re-re-posting on WUWT.

    b) He also was taking on the GISS Fortran code just to see what, exactly, was under the hood. Well do I remember his frustration at the nightmarish mish-mash of embarrassing-to-college-Freshmen Fortran coding that did who-knows-EFF-all to already diddled data. It amazed E.M. and all of us that he was even able to get the @#$! thing to compile and run. Aside – I think no one was more surprised than E.M. ;o)

    Many or most of us came here from WUWT, but have stayed here because of the wider-ranging topics, uummmm… different pace (threads stay active much longer as we ruminate on or find additional info for the thread topic), and because E.M. does take the time and effort to acknowledge and respond to almost each and every one of our comments*** (Yes, you too, Serioso).

    So, p.g., that simple thanks to E.M. that you gave encapsulates a lot of why we are all here having a good, informative time. We learn not only from E.M., but from each of the others that post here. Everyone pitches in. E.M. is just the glue that binds us. he has skilz that do that.
    .
    .
    .
    So far, I have had the opportunity to meet E.M., Ossqss, and Rhoda Klapp solely because this blog exists. I am a better man with a ‘+many’ bump to my life for having done so.

    I am looking forward to meeting other regulars here on this blog; p.g., Larry L., gallopingcamel, jim2, perhaps the elusive Gail Combs & hubby (intriguing dude, eh?), cdquarles, philjourden, another Ian, beththeserf, and too many more to name. Mea culpa and apologies to those I did not explicitly name. I would even relish the chance to meet and greet Serioso even though we are 180 degrees opposed on about…. everything except ice cream, and I’m not sure about that ;o) However you perceive it, Serioso is still part of the conversation here and would be good to meet up with, regardless of blood pressure effects. I’d love to say howdy even though we are 99.9% opposite in viewpoint.

    That was a very nice thing you wrote there, p.g. (Obviously) It set me off.

  8. E.M.Smith says:

    ok, FIRST off and clearly: BLUSH!

    Hey, I’m just “some guy” with an open notebook and an open mind… ok?
    (but don’t stop!!!) ;-)

    @P.G.:

    My basic idea is simple: IF you have not crashed yet, you didn’t try hard enough. IF you have not got more balls in the air than you think you can catch, you are not improving. You FAILED!? Good, what did you learn?

    Though some times I look at the things I do and wonder: “WT? do you think you are trying to do? I mean REALLY?”…

    So I’m about 1/2 drunk ATM. (IF I’m lucky, about 1/2 hour more to fully gone! ;-) yet I’m still doing an X11 build in a window on the bedroom TV / Odroid N2. I mean, come on, what kind of A-hole thinks he can type 3 sheets to the wind WHILE doing Systems Programming? Really?

    Yet “it’s what I do”… Credit it to a Viking Ancestry that valued drinking the other guy under the table then taking his country from him…

    BTW, after the second upgrade cycle (18 more packages), the install of x11 is 36 of 59 packages done… THEN I get to start the LXDE install (though I’m thinking of trying Lumina instead…)

    BTW #2: Grifon Delle Venezie Pinot Grigio is rather good. Rather way good… 250 ml down and 500 ml to go ;-) This is AFTER a fifth of 18% Sake with the sushi…

    @Jim2:

    Yes, absolutely. I desperately wish they had gone for redundant servers and NOT Dick With init. But they did both. Sigh. A Crappy Idea can sell to stupid management as long as it looks like it can get a bonus in the process…

    I have it on decent authority that this was the thought process, though. But who knows isn’t talking…

    @H.R.:

    I’m very much up for more “Meet & Greets” (especially if others buy my beer :-0

    And: Hey, Serioso! Me and H.R. would be happy to buy you chips, beer, nachos, whatever and have a “night together” in the bar of your choosing. Seriously.

    I’m willing to drive the 3000 miles to the East Coast and meet in the bar of your choosing at the date of your choosing (with at least a couple of months advance notice to arrange things, and as long as it doesn’t land on a grandkid’s birthday or something) just to have a good ‘ol beer fueled “discussion”. (No physical contact allowed, all participants to enjoy the experience, and BS allowed as long as it is forgotten in the meeting notes ;-) In other words “Good time, no worries, no regrets”.

    Why? Because DIFFERENCE is FUN and I’d like a good time talking over differences.

    I would like to have something near Denver in spring, too. Right now winter is just too cold ( I know it is fall, but the weather doesn’t…).

    FWIW I’m likely to be in Chicago about Thanksgiving as there’s a new granddaughter expected then / there. Shortly after, I’m going to Florida to set up “new digs”, then about spring will be moving “everything” to Florida (arrangements willing…). So I’m likely to be doing coast to coast 3 or 4 times in the coming year. Anyone who wants a “meet and greet” on that California – Chicago – Florida triangle, just speak up (and declare beer likely to be provided ;-)

    Just one Big FYI:

    I think Democrats are currently flirting with insanity and I think Socialism has it in the Rear View Mirror, BUT: Anyone who buys me a beer is my Sworn Beer Brother and I will defend them against ALL comers, no matter what! Nothing, and I do mean nothing, is stronger than the bond of the beer. Say anything, think anything, whatever. If you bought the last pitcher, I’m duty bound to slay anyone who speaks ill of you.

    And that’s not just the Pinot Grigio speaking… I think… but IF it is, well, I’m duty bound in any case… as long as you order me an IPA or better ;-)

  9. Larry Ledwick says:

    EM that is the first rational reason I have seen for systemd being pushed on the world.

    Note when I was over at IBM they were running Red Hat for mainframe and the reason I had a job was they were rolling out their so called “On Demand” service. The selling feature was a commodity install of linux where they had a “standard” install so that some major client could call them up and say we are doing a big advertising promo and would like you to provide us 12 servers to handle the promotion, and they could say sure we can provision the servers and have them ready for you in a few hours.

    Then we would get a call back 24 hours after they kicked off the promo – Uhhh, this is more popular than we expected, can you provide another 8 servers??

    Sure give us a couple hours and we will have them on line.

    Bad news is everyone wanted to do the standard install, but “can you please tweak this, and also tweak that, and oh by the way we like to use XYZ, do you support that application?”

    Commodity servers were a good idea in the board room, but no one really wanted a cookie cutter server; everyone wanted to fiddle with the standard install. Still, the idea of being able to roll out new VM servers in a matter of an hour or two was the enabling capability that drove their concept for the commodity server farm they were trying to sell to clients.

  10. H.R. says:

    E.M.: “BUT: Anyone who buys me a beer is my Sworn Beer Brother and I will defend them against ALL comers, no matter what! Nothing, and I do mean nothing, is stronger than the bond of the beer. Say anything, think anything, whatever. If you bought the last pitcher, I’m duty bound to slay anyone who speaks ill of you.”

    I am absolutely ROTFLOL!!

    Please do recall, I bought you those two Unholy beers mixed in with that six-pack sampler. I believe you may have recalled that… a few hours later after you regained consciousness.

    Should the apocalypse come strolling by, I expect you to knock off a few of the zombies on my 08:30.
    .
    .
    .
    I am expecting Ossqss to have my 06:00 covered as well as my 16:30. I gave him a tin of Buckeyes that probably sealed his life-long fealty to my health and safety. I’m bringing him another box of 50 Buckeyes this trip to Dee-double-triple ensure that he has my back. I don’t think he shared any Buckeyes with his Mrs. or the kids, so they will probably just say, “Oh… hi, uuuuuhh…H.R. was it?”

    I’m hoping Rhoda Klapp is in-State this winter. If Rhoda is in but can’t make it up from the Naples area, I will try to make arrangements to take my turn traveling and head down that way for a nice chat. It will be worth my time 1,000-fold.
    .
    .
    .
    The Mrs. is coming out of her surgery reasonably well. (I thought I mentioned the surgery, didn’t I?) She had a bad day today, but all indications are that the surgery will help with her mobility, which has been declining since her 2008 stroke. The long term prognosis is excellent.

    That being the case, and hopefully trading in my Honda and the V-10 F-250 for a newer truck next year, we still plan on heading West for the Badlands, Rushmore, Grand Canyon, Colorado (relatives there) and other wonders of the American West. (Vegas, Baby!)

    I’ve already mentioned in previous threads that I will try to stop somewhere amenable to Larry L. (schedule doesn’t matter to me since I’m retired) and since I can’t get the camper up to p.g.’s place, stay at one of the several RV places down the hill from him and then drive up to visit.

    This is all nice stuff to put on the “fuzzy future” calendar, hoping that the stars, sun and moon align.

  11. Larry Ledwick says:

    The Mrs. is coming out of her surgery reasonably well.

    Good to hear she is recovering well! It often is a gradual process, but good news all the same.

    I have reasonable job flexibility for time off; with a couple weeks notice I can schedule most any day you folks might be within a couple hundred miles of the Denver Metro area.

  12. Graeme No.3 says:

    It would seem to this novice that a BSD base with X windows (or some other graphical interface) would have appeal to quite a few people launching into ARM computers with less expertise than they really need.

  13. jim2 says:

    EM – I think you may have misbeerunderstood. I am saying cloud data centers DO use multiple VMs for one app, each on an independently provisioned rack. So if power fails on one rack, the other two are already up and running. That is the minimum fail-resistant feature. Additionally, with more dollars, one can have redundant VMs at different data centers entirely, or even different countries.

    If the customer has only infrastructure in the cloud (network, VMs, storage, etc) but no OS or apps, then the customer is responsible for configuring the VMs, installing OS, and apps. Or can buy VMs and OS, but install apps. Or, if it is an app already supplied by the vendor, buy VM, OS, and the app.

    VMs can be spun up/down in seconds. The customer can either manage number of VMs manually, or have that process automated.

    All the services are pay as you go. So, by that alone it is cheaper. You don’t have to buy a server room, servers, people with skilz to set up and maintain. If you are a new business, you can have a web app up and running quickly and dump it just as quickly. And it can be available world wide, or just in one country.

    On top of that, it is highly and quickly scalable, both in adding things like storage, or a complete copy of what you already have.

    So, depending on the options chosen, you will need only people to, say, set up and run the web site. And even that can be contracted out.

    This is why businesses and governments are moving to the cloud.

    Of course, I have to wonder what happens to pricing once everyone is in the cloud. On the pricing topic, if yours is a Microsoft shop, then the MS cloud can get you MS servers way cheaper than Amazon. I’m sure Apple can provide Apple-centric services cheaper than MS.

  14. E.M.Smith says:

    @Graeme No.3:

    It has a lot of appeal, but then folks discover they don’t just do “apt install lxde” to get LXDE windows, and have second thoughts. Those who make it past that point are worth hiring ;-)

    @Larry L:

    So now ponder: When IBM can instead say “12 Servers? Now or an hour from now?” and spin up the added 8 in a minute as VMs, think that would sell? Think they would then lean on Red Hat for that ability?

    @H.R.:

    Absolutely I remember them. Fondly.

    Hope all finds you and your spouse well…

  15. jim2 says:

    Here is the Azure VM page with a drop-down of VMs available. Different regions can have different offerings.

    https://azure.microsoft.com/en-us/pricing/details/virtual-machines/linux/

  16. Larry Ledwick says:

    So now ponder: When IBM can instead say “12 Servers? Now or an hour from now?” and spin up the added 8 in a minute as VMs, think that would sell? Think they would then lean on Red Hat for that ability?

    Not sure what the question is???? That is exactly what they did, using either AIX Unix or Red Hat on mainframe.

    The problem was the fact that the customers did not really want cookie cutter installs. They would either ask for tweaks to the “standard” install right up front, or shortly after going live ask us to install some special tweak or additional software package; so very quickly none of the installs were still standard, and each client ended up with a non-standard build, which was exactly what they were trying to avoid.

    The intent of the cookie cutter standards was to have well worked out combinations of software that played well with each other and were relatively bug free and reliable. Once you started monkeying with the build and changing things or tossing in a non-standard software package you suddenly started breaking things due to incompatibility or dependency issues as you pointed out above.

    There were other management issues that were the final knife in the back for that operation. They outsourced all our talent to off shore providers in places like India and Brazil. On the accounting sheets they thought they were saving lots of salary by going to places that paid maybe 1/5 as much for staffing, but their productivity was only 1/6 or 1/8 as much (i.e. they lost all the advantage in higher head counts and longer turn around on outages).

    They had guaranteed up time standards in the contracts, with huge penalties for excessive down time. It only took one long outage a month or a quarter to miss a 99.99% up time standard.

    One month has 720 hours of possible uptime (30 day month); at 99.99% that leaves you a total allowed down time of only about 4.3 minutes before breaking your performance guarantee.

    It only took one screw up in India, of some dba sitting on his hands waiting for a formal request to do what needed to be done, to eat up all the salary saved with a $100k+ excessive down time penalty.

    I have sat on conference calls where everyone on the call agrees (after checking their own area of responsibility) that the problem was a dba issue. Then we find out that the on call dba we had been dealing with for 3 hours just walked away at quitting time, handing the phone to the new shift dba with no turn over at all, so we had to walk them through all the trouble shooting steps we had already done and convince them that yes, your data base is broken, and we have been waiting for 5 hours for you to run the command necessary to fix it.

    I remember one time actually getting on the phone and, after waking up the dba (very slow to respond to my question), telling him: everyone on the call agrees that the database has an XYZ problem. (If I recall, the index was screwed up and it needed to be re-indexed.)
    He agreed.
    I then asked him if he had started running the checks / fixes to that problem.
    He then said no do you want me to do that?
    Yeeeesss please that is why we have been sitting on the phone for several hours, could you do that?
    Sure! give me a minute.

    Just made you want to scream, but those contractors (most of them) would not do anything, including obvious tasks, without a specific request. Zero initiative.

    In fact I think it was a cultural thing: they did not want to risk making a mistake, because they knew there were 1000 other dbas waiting to take their chair if they did something that clobbered the data base. So they took the path of least risk: only do what they were explicitly told to do. They had the paper certifications and training but near zero real world experience, and always had to step through a laundry list of trouble shooting tasks even if it was obvious that we were way beyond the basics.

    There were, in fairness, a couple of them that had an American style “can do attitude”, and they were a real pleasure to work with, but they were almost always 3rd tier support, and it took a couple hours of screwing around with the tier 1 or tier 2 support before we could even get them on the phone.

  17. jim2 says:

    As much as I would like to see the traditional linux managers continue, I’m starting to think it might be a good idea to learn systemd and live with it. Obviously the big boys want it, so that’s where most of the inertia will live.

    If the SBC builders want their boards to thrive, they will have to move to systemd, like it or not. Same with users and distro maintainers. If they want usage, the distros will have to accommodate systemd.

  18. E.M.Smith says:

    @Larry:

    You had said hours to days, I shifted it to minutes for a VM boot. I now see you were talking about the custom work…

    @Jim2:

    Um, remember that I’ve built out big data centers, installed redundant servers and colocation facilities, and done Disaster Recovery projects for dozens of applications using interstate networks and automatic failover. Basically, I’ve made all those things you described… including virtual machines. I know they are done. I’m just adding a layer of why the OS Guys would want more rapid booting.

    Per outsourced and cloud being cheaper, see Larry’s comment. That has also been my experience.

    The only (semi) real benefit is managerial. Your I.T. staff is Overhead & Administrative expense in the budget, as is the computer facility. O&A is ALWAYS interpreted as a deadweight loss to be cut by bean counters, even if critical to operations. As rented functions, they bill to the departments and become “operations” or R&D expenses, seen as essential to profit and growth: a good thing.

    Executives get a bonus for reducing O&A and increasing R&D or Operations (manufacturing, etc.).

    Per SystemD: No.

    Yes, I’ve learned it and I use it even now, when required. It is not hard to learn. It is stupid in what it does. No, it isn’t a good idea, and no, surrender is not an option.

    There are more than enough of us to keep alive a clean Gnu Linux. The entire teams at Devuan, Gentoo, Slackware, Void, Knoppix, Puppy, and at least 73 other distributions.

    I will complain about the process and it will take time to stabilize, but the future does NOT belong to systemDemented.

  19. jim2 says:

    RE systemd. There will be more people working on systemd distros. New tech will be adopted more quickly and bugs fixed faster. That’s why I think they will come to rule.

    And as far as the cloud goes, follow the money :)

  20. Larry Ledwick says:

    Yes, the delay was mostly administrative. The physical spin up of the VM itself does not take long at all, but the organizational overhead of taking the order, pushing it down to the worker bees, finding a worker bee who was not already working on something else, getting him/her all the necessary information for the request (i.e. IP address and network info, and any config information like system host name, file systems that needed to be mounted, etc.), and actually making it happen meant that they promised a short turn around but were not obligated to get it operational for about 24 hours (although they did it faster in some situations and for already established accounts that already had contracts). It also depended on time of day and staffing; we were very limited on technical staff at night, while during the day they could easily pull someone off some other task for a few minutes to do a more urgent task.

    The old under-promise, over-deliver thing.

    If there was some problem, you needed to include a time buffer to sort it out.
    For example, on more than one occasion I got a call at the ops desk at 1:00 in the morning, asking me to go down to one of the server rooms and see if I could figure out why a server was not rebooting properly. It took me 30 minutes from the time I got the call to look up the “expected” location of the physical device, walk down a 1/2 mile or so of corridors, go into the server room, and find the physical footprint of the server (which was not always where it was supposed to be; sometimes not even in the right server room), then poke at it: make sure all the data cables were properly inserted, check for error lights, check that no one had tripped over a power cord (in one case someone had left a CD in the CD-ROM drive and the system was trying to boot off an application CD instead of the disk drive), then walk back to the ops area, call them back, and tell them what I had found.

  21. jim2 says:

    Cloud computing isn’t comparable to old-fashioned outsourcing. I myself had to train my Indian replacements back in the early 2000s. I was not impressed with them at all, and the business kept asking me to stay longer and longer. I stayed until I got the job I wanted elsewhere.

    Cloud computing is highly automated, so there isn’t any manual installing of VMs and such. Software runs the cloud. And you (a business/government) can still have your own technical people to configure a database, cloud network, and/or application. You can spin up a dev system, develop your product with your own programmers, then turn the dev system off when it’s not needed. Scripts can be written and re-used to set up an entire system/application.

    There are also many storage options, the pricing depending on whether it is ‘hot’ storage with a lot of activity, ‘cool’ storage with infrequent activity, or archival storage with very infrequent activity.

    I know in our shop we have a disaster recovery exercise every year. It is a PITA. Disaster recovery in the cloud is relatively easy since your system can span the country or the world. The only question I could stump the cloud guy with is what would happen in a Carrington event :) He didn’t know what that was. Tsk tsk! At any rate, there is always some scenario that can knock out an entire system. But for most routine disasters, DR would be fairly simple in the cloud.

    So basically a business doesn’t have to buy and maintain servers and software. That’s the big advantage here. It can still have tech people to spec it out and configure it. How many of what kinds of techs you will need depends upon what options you select.

  22. E.M.Smith says:

    @Jim2:

    Been there, done that, got the T shirt….

    My big question was “What happens when ALL your DR Cloud customers ALL want to spin up a DR image at the same time?”

    Crickets….

    Everyone can have a colo-DR image, but only about 20% can run at the same time…. so nobody is allowed to have a continental scale catastrophe… or for some, even regional.

    BTW, I was the DR Manager who was making the PITA DR test happen each year…. I’m sorry for your pain…

    We’d fail over apps from Orlando to Carolina and then back. Every app, every year.

    BUT, we never did ALL apps at the same time, and we never did it while every OTHER customer of the CoLo facility was doing the same….

    Look, I’m OK with “cloud computing”. In fact, once I have the whole Climate Database thing clean and pure, my intent is to make a container version anyone can download and run AND put up a Cloud Computing spin of it (and, maybe, someday, a Cloud version of whatever climate model I get running nicely) just so anyone only pays for running it and not for having a data center 24x7x365 to run it once in July…. It very much has a place.

    BUT….

    I’ve been to this rodeo a few times. I was around when “service bureaus” were a thing, and when the “debate” over your own Mainframe vs service bureau vs Mini-computer happened, and 2 or 3 other similar “local vs remote” cycles happened.

    NONE of this is NEW. It’s the same old stuff, recycled with new names.

    Communications speeds high and costs low: folks do “service bureaus” aka “cloud” computing.
    Local memory / disk / CPU speeds fast and cheap: folks do local data centers.

    The cycle repeats every change of relative cost of communications vs local computes.

    At least twice (and I think it’s 3 times) since I’ve been paying attention.

    BTW: I have no dog in the fight. I just think it is stupid to have the fight….

  23. jim2 says:

    I guess I do have a dog in the fight – my job :) But I’m not paid to promote cloud computing. We still have a mainframe, but it’s no longer ours on site as it used to be. The company is looking to get rid of it, and will. I, too, see the cycles of centralized, decentralized, and again centralized.

    For myself, I like to have my PC. Not a smartphone and not a PC in the cloud. I’m more amenable to systemd than you are, only because I really don’t want to have to deal with dozens of dependencies just to install my Descent emulator :) I loved that game. So, yes, I’m lazy. I figure systemd will get things worked out. It is open source so I can figure out how to configure it and see what it’s doing if necessary.

    That said, I’m cool. No fight, just observations.

  24. gallopingcamel says:

    Windoze was a bust for me. They kept demanding more money for new operating systems even though I was happy with Windows 95.

    So I tried Linux but was unable to install “Red Hat” (using their $20 install disk) back when its market cap was greater than that of any other North Carolina company. I managed to install “Knoppix” but could not abide the annoying video problems.

    Then I found Ubuntu, which was faster than Windoze and much more stable. I was an Ubuntu devotee until they launched their “Unity” interface. I jumped ship to Linux Mint, which I see as Ubuntu without the Unity interface.

    What our fearless leader is discussing on this thread is way above my pay grade yet it gives me a sinking feeling in my gut. Things are going to get worse and I feel completely helpless.

  25. p.g.sharrow says:

    I’ve been on Wintel boxes since 1986. Started dabbling with Linux 0.96, but I was using 8088 machines and needed MsDos to run the programs that were needed for my uses. Linux seemed to be the right direction to go, BUT! nothing I needed would run on it, and I didn’t have time to waste trying to figure out how to make it work on my own. As I finally got computers of good enough quality I tried Dual Boot, but at some point MsDos would realize there was a “strange, Damaged?” partition, automatically try to “fix” it, and trash the whole disk system.
    Then I finally had accumulated enough boxes to set up a dedicated Linux computer with Ubuntu, and as I was getting comfortable with it something happened and I got locked out. Crap! I don’t have time for this! I have real computer work that needs to be done, and I haven’t the time to learn everything in the Linux world needed to “build” a usable system that would meet my needs.
    I can “See” that Linux is the wave of the future, BUT! how do I get there? I am 73 years old and am tired of trying to get up to speed when things keep changing faster than I can learn them while doing everything else. For me the computer is just another tool, Not THE Tool! I just want the Damn thing to work!…pg

  26. Larry Ledwick says:

    I am 73 years old and am tired of trying to get up to speed when things keep changing faster than I can learn them while doing everything else. For me the computer is just another tool, Not THE Tool! I just want the Damn thing to work!…pg

    2 years behind you, but same feeling. “Amen”

    I have used desktop computers on everything from DOS 3.x through DOS 6.2, tried FreeDOS, Windows 3.1, 95, 2000, XP, and now Win 7. (Work is using Win 10, and I will have to switch yet again as Win 7 64-bit goes out of support.) Tried Linux several times starting in about 1998; Slackware was the first one I could actually get to install and run. All the other attempts ran into some hardware compatibility issue, usually video card, monitor, or networking. Caldera was the first one to install painlessly, but at the time I had nothing to run it on but an underpowered Intel PC.
    I think it was Caldera or Corel Linux that I got running well enough to crunch work units for SETI@home, but nothing I needed to use would run on it. In the early 1990s WordPerfect was the standard word processing software for use in government, and I standardized on it and Netscape.

    They would run fine on Windows of that era but not on Linux (that I could figure out). Like you, I have better things to do with my time than bang my head on the keyboard trying to figure out some obscure setting I needed to use to get things running.

    When I got into digital photography, processing images became my highest-priority computer usage aside from connecting to work, and Photoshop Elements has never been ported to Linux. I could get IrfanView to run in Wine but never could get Photoshop Elements to run, so I simply surrendered to the demon and started using Windows even though I hated it. I found that if you got it running and did not screw with it, most of the time it was reliable (except when Microsoft pushed a broken patch and made your system unusable).

    When I went back into IT I started out on mainframes and then migrated to Solaris. I briefly got Solaris x86 (7 – 10) running at home, but same problem: I could do Unix stuff, but nothing I needed to do would run on it. Then at work they moved from Solaris to CentOS, which is where I am today. I do basic scripting at work but mostly babysit production systems; nothing in the sysadmin domain, so I have no interest in learning systemd admin stuff I will never use (actually am not allowed to use at work due to permissions).

    I’ve got a Linux Mint install sitting on the floor near my feet (finally got it installed after fighting with Secure Boot for 2 days, until I physically pulled the disk out of the brand new computer and formatted it on an older Windows system so the BIOS saw it as a new disk).

    It worked, but I got tired of having to do updates every time I logged on, and although I got IrfanView and OpenOffice running on it, I never could figure out how to get Photoshop to run or get it to connect to our work VPN.

    I need Windows at home to connect to work reliably by VPN, so there are lots of incentives not to screw with Linux until they finally get their heads out of their ass and come up with a consumer version that, out of the box, can run all Windows applications in an emulator without spending several days trolling web sites trying to find the magic sauce to get things to work right.

    Like you, a computer is a tool, like an electric razor or a toaster or a pickup truck; it should just work out of the box.
    After 30 years of consumer PC development that should not be a problem any more for any OS.
    (Most of that incompatibility is actually intentional, so the various tribes can maintain market share by forcing you to join their tribe to get stuff done and satisfy outside demands like “Your resume should be provided in Windows Microsoft Word RTF format” or some other mandatory requirement you have no control over and no way to work around.)

    After spending all day trying to get 30 hours of processing done in 20 hours or so, without melting servers or crashing databases or filling up disks, in a rapidly growing infrastructure with an ever increasing work load, while also figuring out why some program suddenly does not work only to find out they pushed an update without telling anyone, my idea of fun at home does not include trying to get yet another computer problem sorted out by googling 30 pages of solutions to the problem, none of which seem to work on my system.

    Windows 7 64-bit does everything I need to get done, mostly reliably, except when outside forces break it for me. (Yes, Microsoft, I am talking about you, and the occasional random blue screens with error messages whose actual meaning no one knows; although recent experience suggests it is actually browsers with a memory leak triggered by something certain web pages do.)

  27. llanfar says:

    Kids these days… I started a new job on an ugly code base (7,200 source files spread across 5 servers). It was a nightmare just getting it to build (made worse by the complete lockdown they have even for developers – I have an otherwise nice laptop that is just a dumb terminal into a slow Windows 10 VM). Had one of the existing devs helping (he said it could take weeks to get everything working – guess they don’t mind shoveling out cash just to bring someone up to speed). In any case…

    The guy I worked with wouldn’t drop to the command line for anything. I had managed to get a Cygwin environment set up, but he wouldn’t trust the results I was getting just tracking down files using find piped through multiple greps (including grep -P – LOVE Perl regexes).
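
    (For the curious, the flavor of pipeline in question; the patterns and names here are made-up illustrations, not the actual search:)

    # track down files by name: filter find's output through successive greps,
    # with -P enabling Perl-compatible regexes (GNU grep, as shipped in Cygwin)
    find . -type f \
      | grep -i 'order' \
      | grep -vP '/(test|mock)/' \
      | grep -P 'Handler\d+\.java$'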

  28. Graeme No.3 says:

    Back in the Dark Ages I got to switch from WordPerfect to MSWord on MSDOS98, thanks to an edict from the multinational USA company that had taken us over. I would describe the experience, except (censored) (censored) (censored) wouldn’t mean much. Every file (MSDS) had to be re-formatted individually thanks to the lousy conversion program from MS, and there were about 5,000 of them.
    Then the product data was dropped from Unix (sorry, no idea which, but it was supposed to crash in 2000 AD. It didn’t) into MS Excel, where it ran into the known (to all but the H.O. computer section of 56 in the USA) bug of corrupting the data at about 8,160 lines. As there were 8,260 lines (initially, with daily additions) and 32 columns, and this was dumped into MS Access (a turf war between the local Computer Section, who wanted MS SQL, and the USA one over there, who wanted all data entered by hand into the Oracle DB), I came close to a nervous breakdown.
    I located UNIX software that would have prevented this by avoiding MS (abruptly rejected), then other software that could be linked to the existing DB, would set up its own Oracle-compatible DB, and would calculate the necessary Safety & Labelling info.
    This cost $45,000, was approved in a day, and was mostly never used due to the on-going turf war. AFAIK it never was used at all, as the company was closed down a few years after I left. I moved away from anything to do with computers until I ‘retired’ (semi-voluntarily) and switched to Apple. I have occasionally thought of going over to Linux, but I have had my quota of things that don’t work.
    Good luck to those who think the general public is interested in fixing problems with computers. Try hiring someone, like our host, who can supply some common sense with the expertise.

  29. corev says:

    My story is similar to everyone else’s. Except I am older, approaching 76, and have been in the computer bidness since the US space program. Started working on the Apollo program.

    OTOH, I love working on the new computer technologies, and like EM am interested in the SoCs. Also, like EM, I have had the dependency issues, and the less-than-full development efforts, on them.

    Like others here, I am too old to spend inordinate amounts of time continuously learning and/or debugging new system changes. Stability, even stability with bugs, is Windows’ strength. Even with 5-6 running computers (desktops, laptops, and SoCs), when I want to go back to my comfort zone, it is usually on the desktop with Windows.

    Stability rules business and personal costs. Testing and development decisions based upon stability have always been the carrot for users.

  30. jim2 says:

    My wife and I have learned to survive with a Linux Mint PC at home and no smart phones (but we do have a couple of “dumb” ones). LibreOffice will save documents in Word docx, docm, and rtf formats, as well as earlier Word versions.

    We use GIMP for image editing. GIMP also can do art and animations. Even the wife has learned to use it.

    Here is an illustrated list of GIMP filters:

    https://alvinalexander.com/design/gimp-catalog-filters-effects-examples-cheat-sheet

    I use Windows 10 at work. I’m a few years younger than some of you it appears, but not by much :)

  31. jim2 says:

    BTW, GIMP works on Windows too.

    https://www.gimp.org/

  32. Simon Derricutt says:

    Sometime back in the early 90s I bought a Slackware CD and installed it on my PC. It took a while to understand the tables needed for X to drive the screen OK and match it to the hardware I actually had. Later on, Mandrake came along and was a whole lot easier to get the system running. Nice to have the stability of Slackware, but it’s a bit like a Meccano set in that you have to know what you want before you build it, and as a noob you just don’t.

    Somewhere around I’ve probably still got a boot disk for PCDOS 1.x (when the PC with 64kb of RAM was an item and the keyboard was an optional extra), and I did a fair amount of work in MSDOS 2.0 when the company I was working for started selling 40Mb hard disks which exceeded the 32Mb limit for DOS2. Needed a device-driver to split the disk into smaller partitions and to be able to access those partitions. Getting pretty deep into the systems programming at the time, in other words, and having fun running a virtual second machine in DOS2 where you could switch between “normal” and “the telex machine” with a key combination.

    Looking back, things changed at a pretty breakneck speed. However, programs tended to get upgraded yearly and the bugs weren’t normally obvious. These days, running currently Lubuntu on this machine, there’s maybe an average of 50-100Mb of fixes to download each week. OK, it’s a lot faster to download stuff now, but it just seems there shouldn’t be quite that much needing to be fixed.

    Like pg, though, these days the computer is just another tool I use, and I want it to work rather than having to spend a lot of time finding and patching the bugs. Those dependency problems are a pain in the butt, since I don’t have the time to write the programs I want to use and thus need to go find a program that does what I need and make sure its dependencies are satisfied. One such program was written in Python 2.6, which was therefore incompatible with both of the current (mutually incompatible) versions available, 2.7 and 3.6. I did find the library mismatch and fixed it, so it works in 2.7 now, but the galloping bit-rot probably means I can’t rely on it working when they dick with the definitions next time. Any update I apply may break it, and I don’t know beforehand. On the other hand, not applying the updates could mean my system would be open to some exploit. Damned if you do and damned if you don’t.
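
    (A partial dodge, for what it’s worth; untested advice, and it assumes a Debian-family package manager: apt can “hold” a package so routine updates skip it.)

    # freeze the interpreter a fragile app depends on at its current version
    sudo apt-mark hold python2.7
    apt-mark showhold              # list everything currently held
    # release the hold once the app is known to survive the newer version
    sudo apt-mark unhold python2.7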

    The advantage of writing in a higher-level language is that you can ignore the machine-level differences in what instructions are available and rely on the language-writer to make the different processors do the same thing for the same instruction. If you’ve looked at the spec sheet for one of the PIC family of microprocessors (a few hundred pages) and needed to get all the registers set up correctly, and it’s different for the next in the family, then the higher language overlay reduces the effort quite a lot. You lose the precise control over timing, but normally that’s acceptable in exchange for being able to have library routines that do what you want without tweaking for the precise variant you’re using. However, I haven’t understood OO languages yet since I prefer writing in assembler.

    With Linux, there’s no single organisation that defines things. That is bound to lead to dependency hell, when someone has produced one function you want and someone else has produced another one, and they may not be working from the same forks. Still, if there were a central control, the system wouldn’t get changed so quickly and wouldn’t have the bandwidth to keep up with the rate of change of the hardware. Thus BSD remains behind the times on new hardware, and Slackware remains “some hand assembly required”. I’m not sure there is a good way to get both the benefits of fast adaptation and a reliable base to work from. Central control just won’t have the bandwidth, much the same as the USSR couldn’t get production and consumption properly balanced. Free enterprise is bound to end up with different people taking different and incompatible paths, but Darwin ensures that the best paths will get fixes and the poor choices become moribund.

    When I last tried the Pi3, it crashed frequently and didn’t have the programs I needed it to have. For the new box, I thus got an Intel processor, where there’s more chance of the programs I will need being available and fairly bug-free. Yep, I should re-try the Pi and see if I can run a sufficient subset for net use, but I’d really want to fix it and then run it without keeping on patching every week. It’s a tool, and I want to switch it on and have it do the task I need it to. Finding out why it doesn’t simply takes too much time, and I have other things that are more important.

  33. E.M.Smith says:

    @G.C.:

    Yes it could get bad. But I’m hopeful this is just a temporary bump as folks remake what we had before. This happens at every major “fork” event. It is why distributions exist in such variety. Somebody didn’t like an upstream decision.

    The only real difference is the breadth and depth of systemD changes and bugs.

    Initially folks just went along with “a new init”. We have lots of them, what’s one more? I did.

    Then mission creep turned into Mission Everything… Bugs started showing up. At Debian, part of the core developer team called BS on it. After being overruled, they chose to leave and start a new fork. That has worked very well, especially for anyone on an Intel PC. It did take a year or two, but my only complaint, really, is the limited support for odd SBCs and potentially one failure of QA on the XU4 port. Very small issues, really. Mostly I just like to gripe about it.

    Every year, they get more “downstream” distributions as folks fork off from Debian and go with Devuan as their upstream. This, too, takes time, as folks decide to take the plunge.

    “Fortunately” systemD is the gift that keeps on giving motivation to run away from it. Every year, more do. I still hope for a brighter future. Just right now, it is a bother. Mostly because it has been years since I made a Gentoo or ran Slackware for any time and I need to relearn some stuff that Debian allowed me to ignore. This takes time, as does exploring the options, and so I complain about that, too.

    Eventually, either someone like me decides to make the distribution they really want, all polished and nice, and a new one happens, or they join an existing team to help; then things get better. (Or enough folks have already shown up that the person can just download and go.)

    Puppy Linux started that way with one guy. So did Knoppix. Void is similar, but a few guys started it, and now it is gathering “contributors”. Then for Devuan, I’ve been in that last “download and go” group, but am thinking I need to step up to contributor (porting to the variety of SBCs in my pile, doing a better QA suite). All this is just how the Linux community works.

    But before I commit to becoming a Devuan Devo, I’m exploring how much pain and work is involved for other spots in the ecosystem. IF, for example, Slackware were drop-in and worked fine, I could just use it. At the end of the Distro Trials, I’ll be making my choice and choosing my workload. The end product, in any case, will be a nice systemD-free distro that’s comfortable. Worst case is it spawns a new distro and I’m the key developer (setting up download sites and GitHub and…). Best case is Devuan picks up more developers and a corporate sugar daddy and I just hit download. In between is getting really good with Slackware or Gentoo and patching things for a variety of SBCs.

    Do remember: right now, Devuan on a PC is fine, works well, and has a nice desktop. I’m using it on the Evo.

    For a long time, the easy choice for a small SBC maker was to do a Debian port; then Ubuntu came essentially for free as Debian Tweaked. So several of the board makers only have them. I believe that over time, more will look to Devuan for that more trouble-free experience. Given the boot bricking I’ve had from Ubuntu, I’ll no longer buy any SBC that is Debian / Ubuntu only. That will show up in sales as more folks do the same. Sales drive decisions…

    So yeah, the theme of the article is more dismal. That’s the view from inside the devo world. The view from the PC User pov is much better :-)

  34. YMMV says:

    Larry Ledwick: “come up with a consumer version that out of the box can run all”
    There’s the Linux dilemma right there. Linux never had consumers, not really. It never had a consumer version. Unix was started by professionals, some very good ones. Linux was always amateurs, which is not to say they weren’t good. Neither group had any interest in consumers.
    Linus, 1991: “just a hobby, won’t be big and professional like gnu”.

    If you’re an IT pro, you expect to beat your head against the wall, spend lots of time learning new things to keep up, fixing broken things. That’s the job. That’s why you get paid the big bucks.
    If you’re a hobbyist, same thing except you do it for fun. You’re a geek.
    If you’re a consumer, you expect someone else has done all the work, you expect it to work right out of the box. It’s a tool. You have more important things to do. You have a life.
    Different strokes for different folks. Apple was the first to get serious about consumers, with design and testing. Well, there were others, but those names have been forgotten.

    Simon Derricutt: “Darwin ensures that the best paths will get fixes and the poor choices get moribund.”
    Half right. The poor choices die out. But there’s nothing that says that the best paths succeed.
    Darwin states it more or less as the last one standing is the winner. I hope he didn’t say ‘best’.

    But speaking of evolution, *NIX is a case study. Conclusion? Things split and diverge.
    Look at the tree of religions, same thing. Lots of forking and branching going on. Evolving to one best? Hardly.

  35. E.M.Smith says:

    OK, with all the talking down Linux going on….

    I’ve lived almost exclusively on it for about a decade and it does everything I need.

    Debian was that largely consumer-friendly version; then Ubuntu took it, cleaned up any loose ends, and put a pretty face on it. Very little work at all to install and use.

    Different from Mac and Microsoft, but not that much harder. Microsoft has shifted paradigms, user interfaces, and obsoleted software faster. (I’ve used the same software suite and interface of GIMP, LibreOffice, and Firefox on LXDE for about a decade, while MS obsoleted my Win 7 and is now on 10, I think, with entirely changed application widgets… to the point they are now alien.)

    Then all the folks saying they don’t use Linux: No smart phones? Tablets? Smart TVs? Macs? Just about every device uses Linux. Android and Chrome OS are just Linux reskinned. The Mac starts from BSD, so technically isn’t Linux but the “user hostile” Unix…

    Folks use Linux all the time, but only notice the difficult ones.

    What is broken right now is that Red Hat has, IMHO, broken that suite of comfortable distributions, so now Ubuntu is buggy and Debian has issues. Well, OK, it will take a bit of time, but new old-style distros that are polished will rise again. Devuan is well on its way.

    It was fixing those painful bits you all described that made Debian one of THE major upstreams and led to Ubuntu as the common user friendly distribution. It is just that now Red Hat has broken things and Debian / Ubuntu decided to surrender. So a bit of pain to get through but not too long.

    FWIW, this morning I installed NetBSD on the Pine64. It comes up with basic X windows working (so that’s a big gain), and my attempt to install XFCE told me it was already the newest version, so I may just need to find it and launch it. XFCE is almost as good as LXDE, IMHO. So it may be much easier now than a few years back. More in a few days… I did have to install their package manager, so still some assembly required… we’ll see how it goes.

    BTW, the market for commercial Linux desktops is big. Many tech companies have it all over Engineering. It has a commercial market, just not yet for novices and Noobs. Then look at the millions of Raspberry Pi boards sold. That’s a lot of folks…

  36. E.M.Smith says:

    Ah, per this, it uses twm by default. So OK, the good news is it is running a window manager out of the box. The bad news is I want a different one. The good news is they give the recipe…. So likely working this afternoon. Looks like I need to dampen my complaints about installing X11.

    https://www.netbsd.org/docs/guide/en/chap-x.html
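
    (If that guide means what I think it means, the whole recipe is one line in ~/.xinitrc; a sketch, not yet tried on this board, assuming the XFCE packages really are installed:)

    # replace the default twm session with XFCE when startx runs
    echo "exec startxfce4" > ~/.xinitrc
    startx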

  37. YMMV says:

    “It has a commercial market, just not yet for novices and Noobs.” Very good point.
    Still, there is a high cost of entry (just not in dollars): a steep learning curve, beginning with choosing the right distribution. It’s not for everyone, not for ordinary consumers.
    That said, note that I am not saying anything bad about *NIX; I will save that for all the Windows products, including the ones which were sort-of based on *nix but different.

  38. gallopingcamel says:

    @Chiefio,
    Thanks for your lengthy reply that I have read three times already.

    My lack of understanding is a problem of old age. Some of your faithful have admitted to being over 70 years old but they are spring chickens compared to this arthritic camel.

    I bought copies of Photoshop and Dreamweaver for Windows 95. These were the last Windows apps I ever paid for. While I have a copy of Windows Vista, I don’t use it because neither of these apps works with it. Fortunately both apps worked perfectly with Ubuntu (Version 8) given that I had “WINE” installed. Even when I switched to Linux Mint in 2010 they still worked. They were working in 2013, but neither has worked since I upgraded to Mint 18.

    I need to update my website so should I turn the clock back by re-installing Mint 16 or is there a “Work Around” that will allow me to run ancient Windows apps on later versions of Mint?

  39. gallopingcamel says:

    As mentioned above the “Brotherhood of Beer” is a powerful force for good in a naughty world. I still treasure the “Charlie & Jake’s” mug that our gifted leader presented to me in Melbourne, Florida.

    Today I live in Mebane, North Carolina and if any of you are passing by please get in touch. The amazing, elusive Gail Combs lives nearby.

  40. E.M.Smith says:

    @G.C.:

    The answer depends on your needs. If this is to be a “one off”, I’d just do a roll back, do the deed, then roll forward. If this is to be an ongoing need, I’d try to find out why they quit and fix that.

    A web search on “Dreamweaver wine linux fix” might help…

  41. Larry Ledwick says:

    By the way, anyone here using Oracle VirtualBox to run a Linux distribution on a Windows system?
    Our sysadmins use it, and they suggest it as an alternative to a dual boot system, as you have access to the Linux session at any time from within Windows.

  42. llanfar says:

    @Larry I had issues running VirtualBox in macOS – VMware was much more capable (but not free). Not sure if that mirrors the Microsoft Windows world.

  43. cdquarles says:

    @ Larry,
    You can. I have. That said, Windows 10 will do it without needing VirtualBox. I’ve done that, too. There are a few things you need to do first for it to run (and I am using the Pro version on a machine that has virtualization capability inbuilt; I expect the Enterprise version can handle it better). You can even get Windows 10 for free; just sign your life away ;p. It’ll be buggier, though, and they do expect some feedback.

  44. E.M.Smith says:

    @Larry:

    I ran Solaris in a VM on a Win 10 desktop machine. I’m pretty sure it was VirtualBox (might have been qemu?). I’ve also run several other Linux versions inside VB (for sure) on my laptop a half dozen years ago.

    Generally they all worked fine. The only issue I had is that they were very, very slow. Iron is bigger now, so it may not matter anymore.

    I had a dozen or so VMs on the laptop and that way I could play with different distributions while “on the road” without a lot of fuss.

    The Solaris thing was just curiosity. Supposedly the desktops were entirely locked down by the I.T. Masters Of The Universe. I was a contractor, but in the security area, so figured “why not poke the bear?” on a slow day… I put a VM and Solaris on a USB stick and then executed it via a cmd line invocation. As I try to remember the command, I think it was qemu, as VB had to be “installed” and that was forbidden, but qemu could just be invoked as a run command… Whatever… As I was then able to have it run, and was “root” (superuser) inside of it, I had a full suite of “interesting tools” to probe the hardware, hard disks, and network. What’s the utility of a Root Priv Unix Box on someone’s network? 8-)

    I’d not done anything with it other than ping a box and look at the underlying system. Not wanting to set off any IDS / IPS systems. But it was pretty clear that I had escalated my privs. A few days later my boss came by (a guy I’ve worked with and for, on and off, for about 35 years, so “we’ve met”) and saw Solaris running on my Wintel box. The look was precious. “Oh God, he’s at it again…”. I assured him I was being careful and not doing anything that would cause trouble, and he suggested I not get caught doing that nothing ;-) Since the whole thing was molasses slow, I’d decided to just pack it in anyway, but from time to time I’d boot it up just to see the Solaris screen ;-)

    Hope that helps…

    I don’t know if the sloth was the 2011 hardware, the running from an SD card in a USB adapter, or my configuration. Probably lots of tuning possible. Or just use a lighter weight distribution than Solaris…
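
    (Reconstructing from memory, and with a made-up image name, the invocation would have been something in this family:)

    # qemu runs a whole OS image as an ordinary user process, no install step,
    # which is how it slid past the "no installing software" lockdown
    qemu-system-x86_64 -m 1024 -hda solaris10.img
    # on a Linux host, adding -enable-kvm trades molasses for near-native speed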

  45. Larry Ledwick says:

    Hmmm, I’ll have to play with it a bit, I guess. I currently have Win 7 (64-bit) on my primary desktop, but have brand new Win 10 (64-bit) install media for when I have to upgrade eventually.

    I am currently downloading some up-to-date Linux ISO images and will tinker with it when the mood strikes. I have to be careful not to bollix up my home system so I cannot VPN to work, so I tend to do things on different computers. The one I log into work with, I seldom mess with other than updating the antivirus software. The one I do most of my photo stuff on I also don’t screw with much (classic Windows method of install: get it to work, then leave it alone setup-wise).

    The one I am typing on now is my primary desktop, which I use for daily browsing and occasionally tinker with a bit. I have Julia for Windows installed, but realized I really need to use Julia in a ’Nix environment to learn the things I want to learn for use at work, so it’s back to trying to get a Linux system attached to this box. As I mentioned above, I have Linux Mint installed on a system which is sitting on the floor, but I never power it up because, with 3 computers always set up, I am out of useful desk space, and it is a royal pain to pull it out and run it without taking something else offline or setting up a temporary table to put it on (which makes it almost impossible to move around in the limited space I have).

    I need a bigger house!

  46. jim2 says:

    As someone mentioned, Win10 will run Linux out of the box. Might require the Enterprise edition.

  47. E.M.Smith says:

    @Larry L:

    Part of why I’ve moved to SBCs. I presently have 8 computers on my (small) desk top and can boot them into about 3 x that many distros / builds by chip swaps. My present limitation is the number that can have a monitor on them, but the others can be running and I just open a remote shell / window.

    You can have a Real Linux ™ box for under / about $50 for all your learning; it will fit in a cigar box with PSU and docs, and all it takes is an HDMI monitor and kb/mouse to be up and running. A quad-core A53 (Pi M3) is more than enough for database / language tinkering, and a hex-core A72 / A73 with A53s is just killer: the RockPro64 has 2 fast A72 cores and 4 slower A53s, and the Odroid N2 has 4 fast and 2 slower. In between, the octo-core XU4 is my favorite even though it is only 32-bit.

    Do Note: My moaning and complaining about SystemD mostly falls into 3 things, and for what you want to do, with just a little care, you can ignore them for a year or two. I’ve chosen to do that on the Odroid N2 (media station, where I’m posting from now, and occasionally other stuff) in the bedroom. It is running Armbian, which hides / fixes most of the SystemD stuff fairly well (so far). So I’m willing to just let it run.

    a) Bricks boot of Ubuntu. This doesn’t happen on Armbian. IF you make your fstab such that you comment out any disk you remove (do not depend on “noauto 0 0” to have the system ignore it; see the sketch below), you don’t have this problem. The biggest issue is to realize a “black screen at boot” is NOT dead hardware most of the time; it’s a missing non-mount disk…
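
    For example, a hypothetical fstab for case a), with the removed disk commented out entirely:

    # /etc/fstab (illustrative devices): if the /data disk is unplugged,
    # comment out its whole line; don't trust noauto alone to keep boot alive
    /dev/mmcblk0p1  /      ext4  defaults          0  1
    #/dev/sda1      /data  ext4  defaults,noauto   0  0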

    b) Constant changing of admin stuff and generally dicking around with everything. Well, if you just want to install a browser and a database and run a language, you get past that the first day. A PITA, but a tiny one.

    c) Probable instability and unpredictability going forward. Like “Surprise! eth0 is gone and we’ve renamed all your devices!!!” so you get to “go fish” on getting your network to work again. (Shut off at boot by Armbian, but not by the newest Ubuntu.) Well, if you are not running a professional shop and intend to set it up and just use it for 6 months or a year, this doesn’t matter too much. Boot Armbian and get “whatever” working, then move on.
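
    (The escape hatch for c), for the record: booting with net.ifnames=0 restores the classic eth0 names. A sketch, assuming a GRUB-based PC; SBC boot loaders take the same kernel argument by other routes:)

    # /etc/default/grub: tell the kernel to keep old-style interface names
    GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"
    # then regenerate the grub config and reboot
    sudo update-grub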

    So, were I looking to do what you want to do (play with Julia on *Nix), I’d buy a widely supported SBC and put Armbian or Devuan on it and call it done. Probably a Raspberry Pi Model 3 if I’d not done much with SBCs / OS builds, or a Pine Rock64 otherwise (USB 3.0, more memory, slightly higher clock). Both have big communities, lots of OS choices, and work very well. (Only the Ubuntu boot brick caused me to speak ill of the Rock64 before; now I know it was Ubuntu / SystemD.)

    I have a Compaq Evo sitting next to the desk, but find that I just don’t ever boot it up anymore. It’s dual boot, Windows XP / Devuan. But why bother?…

    Oh, one other honorable mention:

    There are a LOT of “USB Live Boot” systems out there. You just put one on a USB disk or stick and tell your BIOS to boot it. No dual install stuff. It loads into memory, runs, writes changes back to the USB device (or optionally is amnesiac) and you are done. I’ve got 2 x USB sticks that way, one Knoppix CD, and a Rescue DVD. It lets you bypass that whole “reformat my disk” and change things process on the PC. Just tell it to “boot over there” ;-) Oh, and you can also set up PXE Boot: boot over the network from some other NAS or server. I’ve been meaning to set this up but haven’t finished it yet. Did get to the USB stick with the server on it, and booted one system once, but then something else came up ;-)
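
    (For when I do finish the PXE bit: the server half is not much config. A minimal dnsmasq sketch, with made-up paths and addresses:)

    # /etc/dnsmasq.conf: hand PXE clients a boot file over TFTP
    enable-tftp
    tftp-root=/srv/tftp                        # holds pxelinux.0, kernel, initrd
    dhcp-range=192.168.1.100,192.168.1.200,12h
    dhcp-boot=pxelinux.0                       # file name offered to PXE clients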

    The point?

    There are several ways to get outside the “one box one OS” limit. All of them work. I’ve used them all. Each has its best primary uses. Knoppix / Rescue for system repair or just incognito browsing. USB boot for easy “dual” or more booting systems. There’s even a CD that does multi-boot, so you can just set up your PC to boot from, in order: CD, hard disk, whatever. Then, with the CD in, it gives you a boot loader that looks around for what all OSs are on various media and gives you a menu to choose from ;-) I had to use that on a system that couldn’t understand boot-from-USB at the BIOS level. Then VMs are more for when you need parallel operation, both running right now…

    But really, think about it. $50 and be done with intact desktop real estate…

  48. Larry Ledwick says:

    I have 3 Raspberry Pi 3 Model B motherboards sitting in a box along with power supplies, cases, heat sinks, some powered USB hubs, etc., which I have never gotten around to assembling and powering up, so I am good to go with that solution already; it is just an option that I have not gotten around to playing with.

    At one time I was going to put together a mini test domain with all the key elements of an industrial production environment, but I never got around to doing it, as I was trying to figure out what version of Linux I wanted to fiddle with, among other design decisions.

    I stopped buying pieces for the R-Pi systems when you started playing with the other systems, found the Pis a bit too slow for day-to-day use, and much preferred using systems like the Rock64 and Odroid N2.

    I figured in a little while you would have tested many options, some of which would not even occur to me, and provided a nice summary of those options.

    Thanks for all the effort involved in doing what you are doing!

    I have been watching, but other day to day issues have been more important to channel my time into recently so it got pushed to the back burner.

    I will ponder a project for days or weeks, sometimes months or years, slowly accumulating pieces and parts, and then at some point I get to the “Okay, let’s do this thing” mood and get tunnel vision on that project for a while.

  49. jim2 says:

    I also appreciate EM sharing his micro-computer adventures. Like Larry, I’ve had a lot on my plate and don’t yet have my PiM3 set up. But I’ve got all I need including wireless KB/mouse and a monitor dedicated to the small computer/laptop suite.

  50. p.g.sharrow says:

    @Larry; Start hitting Goodwill or other such places for needed keyboards, mice, and monitors. I picked up several for my RasPi SBCs. Very cheap used equipment. The monitor that I use for the Pi-3 has several USB ports, one of which I use as the Pi power supply. ..pg

  51. E.M.Smith says:

    @Larry:

    I know that tunnel effect :-)

    FWIW the Pi Model 3 is just enough to be a Daily Driver on browsers, and quite fast enough for most other things. If you already have one, it is best to just use it, then only buy faster if you are unhappy.

    I did most of that marathon of global temperature graphs by country on an R.Pi M3, including database, Python, graphics, and all. Yes, the XU4 and N2 are faster, but they are often idle in normal use.

    The only places where I find the Pi really fails are playing video in Linux (it works in dedicated video like Kodi that uses the GPU) and a specific kind of distributed computing (OpenMP). It does fine with big distributed chunks like distcc compiles.

    What I would suggest is that you get one R.Pi going and just try it. About an hour to load the OS to uSD (see the sketch below), put the case together and plug things in, then boot.
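
    Loading the uSD is one command on any Linux box; a sketch with a made-up image name, and do double-check the device with lsblk first, since dd to the wrong disk is fatal:

    lsblk                                    # find the uSD card (say /dev/sdX)
    sudo dd if=devuan_rpi3.img of=/dev/sdX bs=4M status=progress conv=fsync
    sync                                     # make sure writes are flushed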

    Which Linux? Depends on what you do… but generally, I’d say start with Devuan (works great: efficient, flexible, and a familiar apt-get package system) or Armbian (also Debian-like and nicely debugged). In both, everything just works.

  52. p.g.sharrow says:

    A few years ago, my grandson brought the Raspberry Pi to my attention because of my interest in SBCs as a basis for my BeerCan computer concept. So I dug into the board and foundation, and was impressed enough to buy an early Pi-1 to try it out. Got an SD imaged, hooked the Pi up to the TV, and booted it. Crappy old analog tube TV, too poor a picture to really do the setup. Shut it down to get a monitor and hook it up. Tried and tried to boot it up, but it just wouldn’t work. Found out that there was a poor solder joint problem under one of the IC chips. When the Pi-2s came out we got several to work on, as the Chief started his Odyssey into the Pi form-factor SBCs. Later we got a RasPi-3 under Debian with Chrome and OpenOffice running to play with. It always seems to work and is handy for remote troubleshooting of the Ethernet network. A bit slower than the WinTel boxes but not as cranky; it just works!
    For the last few years I have been watching EM hack his way through the OS jungle to find the most likely foundation to use for everything. It seems to me that Devuan is that foundation OS to use, but I await Mr Smith’s conclusion, as he is the resident expert in this field.
    I based my conclusion on the RasPi as it has a very large user base and is the least likely to become an orphan. It was designed to serve as an educational tool for robotics and programming and is NOT part of a for-profit corporation that might sell out at any time. The RasPi SBCs are being used in great numbers as the board that large numbers of aftermarket products are being created on top of. I am delighted to see so many of us here ready to follow into this adventure…pg

  53. E.M.Smith says:

    @P.G.: Any computer choice is at most a 3 year commitment. You can use them for a decade or even more, but after 3 years there is something 10 x as fast for the same money. Just look at 700 MHz single core 32 bit Pi 1 to quad core 1.4 GHz 64 bit Pi 3.

    My Pi 1 is still in use, but as a DNS and time server… My 2 x Pi 2s are just compute servers for distcc, a squid proxy server, another DNS / time server, and the ad-blocking PiHole. They are in a single dogbone case stack with my Orange Pi One NAS NFS server. But I’ll not be buying more of them. Tech has moved on…

    While I’m doing a bunch of further OS Exploration devo stuff for the various odd SBCs I’ve got, that is just to keep them in use AND move them off SystemD. Devuan IS my OS of choice. Going forward, unless there is a strong reason otherwise, I’ll just limit my hardware buys to systems that run Devuan. Honorable mention goes to Armbian, which also always just works. My only concern there is how long they can keep up fixing systemD bugs and hiding the annoying bits (like the device renaming that breaks most normal folks’ eth0 configs…). So I’m guessing a year, 2 max, then issues. So I’m happy running it on the SBCs I have now “for a while”.

    I don’t expect that OS preference to change.

    I am going to bring up at least 2 boards on NetBSD, mostly to refresh my skills and because they have X11 / twm already running at install, so my major gripe is resolved. It is also the only non-systemD OS on one of them. While I prefer BSD for servers, it is not friendly to most folks and can be picky. Lots of new Linux stuff doesn’t run there either. So that doesn’t make it my number one general purpose pick. More a hobby thing.

    As of now, the R.Pi product works most easily and well enough. The Pine64 company product is easiest to put a variety of OS types on, and is faster per $. Then the Odroid stuff is better-designed hardware, but the OS selection is more limited and their boot process makes DIY ports of an OS harder. Also, unlike Pine, they are not made in China.

    There are a lot of hardware choices I’ve not tried. Most notably the Banana Pi products that do have a Devuan port (IIRC Sunxi?). I’d likely experiment with one of them (IF I weren’t already overstocked on boards :-)

    Oh, and a note on heterogeneous shop issues: For various kinds of computing you want everything The Same. Distributed compiles, for example, want 100% matching libraries, word length, instruction set… For some other codes and uses, it doesn’t matter. So depending on your intended use, buying many different boards like I did is a worse decision than buying several lower-performance boards that match. You can make heterogeneous clusters, but not as easily, nor are they as easy to manage.

    Because of that, at some point I’ll make a dedicated compile farm for the source-build systems; Gentoo, for example, takes over a day to update. Better if I can make that 1/4 day on a 4-node Gentoo distcc cluster (a sketch below). This does NOT mean 4 new boards. Just 4 uSD cards flashed with the same Gentoo build level (as I already have for Devuan … which doesn’t need it…)
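
    (A sketch of the distcc side, with hypothetical host names: each helper node runs the daemon, and the build box lists them.)

    # on each helper node: accept compile jobs from the local net
    distccd --daemon --allow 192.168.1.0/24
    # on the build box: name the helpers, then spread the compile across them
    export DISTCC_HOSTS="node1 node2 node3 node4"
    make -j8 CC="distcc gcc"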

    NetBSD has both a binary download choice for packages, like Debian, and a source-build choice, like Gentoo and Slackware, so it might also benefit from a distcc cluster set of images.

    You can see how three OS types have resulted in 3 x 4 system images to make and store… and also how standardizing up front can eliminate 2/3 of the work… So chasing after the new hot board is not as good as picking one and sticking with it for 2 to 3 years…

    Unless your purpose IS to search that space to optimize the choice. I’m doing a partial optimization, so I have 4 vendors (Raspberry, Orange, Odroid, Pine64) and, for some, a few board designs. That ignores a dozen more makes, but life is short :-) so at some point exploring ends and you make your partial map, and choose. My main conclusion is to pick one with the OS you prefer… then remember just about everything runs on Raspberry Pi.

    Which is why I have 5 x Raspberry Pi and only one of everything else (except the Orange Pi One, which was something like $14, so I ordered 2, figuring a spare could be good). I actually bought my Pis in 2s so had 6, but static discharge killed one of my Pi 1 boards one static-y day when I was not careful about carpet and sliding shoes…

  54. gallopingcamel says:

    What I love about the above discussion is that it plays to my obsession as an engineer. Engineers are really “Cheap” by which I mean that an engineer can do for $100 things that any fool can do for $1,000.

    Chiefio said:
    “You can have a Real Linux ™ box for under / about $50 for all your learning, it will fit in a cigar box with PSU and docs, and all it takes is an HDMI monitor and kb/mouse to be up and running. ”

    While I can’t match our leader’s $50, I am not too far behind with my HP “Elitebook 840” purchased from “PC Liquidations” for $100. The battery was dead on arrival, so it cost me $39 to replace it.

    While this is a laptop with a dual-core i5 CPU (1.6 GHz), it is much faster than my quad-CPU “HP2000”, so I am a happy camper. You can buy this from HP for $1,316.

  55. H.R. says:

    Here’s a story of how Microsoft took something really simple and really, really broke it. It’s ‘not exactly’ the same problem we’re discussing, but a few here might recall it.

    About a dozen +/- years ago, when laptops cost $2,000 – $3,000, were 2 inches thick, and weighed 8 pounds, and Windoze had already become so bloated that after loading it you only had room for two spreadsheets, e-mail, and your resume in a Word document, the Microsofties came out with – I forget what it was called, Lite? Mobile? – a stripped-down version of Windows to be used on cheap, small, lightweight devices. It was designed to surf the interwebs and handle e-mail. That was it. The device was about 10″ x 12″ and weighed maybe a pound and a half.

    I got one of those gadgets with WindozeNothing 1.0 for about $100, which was amazing for the time. I had a desktop at home and at work for my other needs and could not justify the money for a laptop.

    I loved that thing! Right out of the box, it worked great doing web searches and e-mail and it was comparatively fast since it wasn’t trying to do anything and everything else.

    Then they started pushing out updates/fixes. First a few here and there for security patches. Then those fixes caused other problems, so they pushed out fixes to the problems their fixes caused. After several months, it seemed like the updates were coming every few days and sometimes one day right after another. My guess was that the stripped down version stripped out the dependencies that were (more or less) accounted for in the full version.

    After about a year my little web surfer/e-mailer became so slow and so glitchy that it was pretty much unusable unless you were a stone cold masochist. I got tired of turning it on just so I could watch it update itself. When it wasn’t updating, websurfing and e-mail had slowed to a crawl. I just gave up.

    I’m not sure what happened to that device. It might be in a box somewhere. Anyhow, it is one fine example of fixing something until it is broken.

    Anyone else remember that fiasco?

  56. p.g.sharrow says:

    No, I don’t really remember that period. I was running Windows 3.11 over DOS 6.2 in our business computers. I Was UP to SPEED and had everything working well; computers talked to each other. And then a printer/fax machine died. No biggie, just get a new one. Well, it required Windows 95 or newer. Had to find and install drivers to get it to work. Things started to break software-wise. Computers began to get grumpy and not talk to each other. We had to get a newer computer to run new business software upgrades; now nothing worked well and I was way out of date and obsolete. Finally we got XP into everything, I got UP to SPEED again, everything was working well, and Microsoft Attacked again! We are now into Windows 7, and I have stopped 2 Microsoft efforts to help me “UP GRADE” to 10. NO! I refuse to play this game again. Everything in the world will run Linux, not Microsoft. Planned, forced obsolescence is dead. Even if I have to pound a stake in its heart myself…pg

  57. Steve C says:

    pg – Lest you miss one of those “upgrades”, Steve Gibson has a helpful tool … “Never 10”.
    “The elegance of this “Never 10” utility, is that it does not install ANY software of its own. It simply and quickly performs the required system editing for its user.”
    https://www.grc.com/never10.htm
    Also works for those slightly eccentric folk who love their Windows 8!

    And, if we’re honest, ALL these dependency hassles, whatever the OS, are effects of not making the new version properly downward compatible in some way. Okay, if you’re making huge changes to the new version, by all means add commands and options that aren’t there in previous versions, but unless it really, really breaks stuff, you really ought to cater for formerly valid calls.

    Windows often handled it by programs copying an earlier version of one (or more) of your system’s DLLs into the program’s own directory and looking there first for its calls. It seemed to work OK, but I’m not sure how you’d handle that in *nix. Logically, every program could bring all its dependencies with it, though it would lead to (even more) bloat.
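
    (There is a rough *nix equivalent, for what it’s worth: the dynamic linker checks LD_LIBRARY_PATH before the system paths, so an app can ship private copies of its libraries. A sketch, with a made-up /opt layout:)

    # run the app against its own bundled libraries first
    LD_LIBRARY_PATH=/opt/myapp/lib /opt/myapp/bin/myapp
    # or bake the private path in at link time with an rpath
    gcc -o myapp main.c -L/opt/myapp/lib -Wl,-rpath,/opt/myapp/lib -lfoo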

    We seem to have drifted rather too far from “the program talks to the OS talks to the hardware” and too close to every version of everything demanding that everything be just as it was on the designer’s machine. It’s reminiscent of the old saw about the computer industry loving standards so much that every company had its own …

  58. E.M.Smith says:

    @H.R:

    Uh, vaguely… but isn’t that THE Microsoft Way? Updating you into obsolescence? Part of why I embraced Linux… I’d get “unusable” Microsoft gear, often for free, after some update cycle, install Red Hat or Mandrake or whatever, and be set for a half decade.

    @P.G.:

    My feelings exactly. Though it did help the contracting gigs ;-)

    @Steve C.:

    Everyone saw MS raking in more cash via gratuitous change and breakage without pushback… so they followed the money… I railed against it for years, but only got blank looks from the fleeced and dirty looks from the guys getting rich.

    Eventually you just shut up.

    IMHO, systemD is bringing that same system to Linux for the same reasons. Even including a Push Update process. Thus my “no thanks”.

  59. jim2 says:

    EM has already noted how extensive systemd is, but I was taken aback at just how extensive. Here is a diagram of the stack:

    https://www.linux.com/tutorials/understanding-and-using-systemd/

  60. E.M.Smith says:

    @Jim2:

    Yeah… feel my pain… Thus my push to purge it from my systems within the year… it will not get better, only worse. My only real complaint is them calling this complete do-over remake “Linux” and pushing it on the rest of us. Had they announced “Red Hat OS” and left the rest of us alone, it would be fine.

  61. Steve C says:

    Or “Chapeau Rouge Assistant Pernicieux” system, perhaps. Acronym acceptable.

  62. E.M.Smith says:

    Just another SystemD Oh-C.R.A.P. note:

    Today I booted the XU4 running Armbian. It is my favorite hardware, and there is a Devuan port for it, so I figured I might give it another go at a “fix”. It did NOT put my desktop picture up (that’s odd… but sometimes it takes a while, as the async way-of-the-systemD makes such operations unpredictable, and often I’m already logged in with a terminal or htop window open before it displays my desktop wallpaper…) so I proceeded to launch the browser and do a “whois” on a questionable new comment.

    No whois command, but I can install it… so I do “apt-get install whois” and (still waiting for wallpaper…) it says:

    root@ArmbianUX4:/# apt-get install whois
    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    The following NEW packages will be installed:
      whois
    0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
    Need to get 70.5 kB of archives.
    After this operation, 274 kB of additional disk space will be used.
    Get:1 http://cdn-fastly.deb.debian.org/debian buster/main armhf whois armhf 5.4.3 [70.5 kB]
    Fetched 70.5 kB in 0s (147 kB/s)
    Selecting previously unselected package whois.
    (Reading database ... 108941 files and directories currently installed.)
    Preparing to unpack .../archives/whois_5.4.3_armhf.deb ...
    Unpacking whois (5.4.3) ...
    Setting up whois (5.4.3) ...
    Processing triggers for man-db (2.8.5-2) ...
    E: Write error - ~LZMAFILE (28: No space left on device)
    

    WT? I don’t have any LZMA-compressed file, and I’m pretty sure installing a program doesn’t need one. And whois did install and work.

    root@ArmbianUX4:/# df
    Filesystem      1K-blocks      Used Available Use% Mounted on
    [...]
    /dev/mmcblk1p1   29522764   3324968  25850264  12% /
    [...]
    tmpfs             2097152     21740   2075412   2% /tmp
    /dev/sda12      103081248    705920  97132448   1% /SG2/home
    /dev/zram0          49584     48560         0 100% /var/log
    

    WT? Again… It’s the zram-based /var/log partition that was just created de novo at boot by Armbian. One of their “nice touches” is that logs are written to zram and then only ‘every so often’ written out to actual file system space, saving uSD card wear.
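
    (Side note: if you want to confirm what’s backing /var/log on an image like this, zramctl from util-linux will list the zram devices — assuming the image ships it:)

    # List zram devices with their size and compression stats
    # (zramctl is part of util-linux; availability is an assumption).
    zramctl
    # Confirm /var/log really sits on /dev/zram0, and how full it is:
    df -h /var/log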

    But WHY is it full already, what filled it up?

    Now usually at this point you would go to /var/log, do a ‘du -s *’ and see who’s the fat pig, then tail or grep the file to see just what process has run amok and have a clue what’s going on with the system. But Poettering has obfuscated the logs by making them binary, so you must launch his log reading tools to gain clue.

    root@ArmbianUX4:/# cd /var/log
    root@ArmbianUX4:/var/log# du -ms * | sort -rn
    48	journal
    [...]
    

    Everything else, 1 MB or less, so “journal” is my problem…
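
    (To be fair, journald will at least report its own footprint, if you know to ask — one line, using a standard journalctl option:)

    # Total space used by archived and active journal files:
    journalctl --disk-usage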

    root@ArmbianUX4:/var/log# ls -l journal
    total 1264
    drwxr-sr-x 2 root systemd-journal   4096 Nov  6 16:15 10101010101010101010101010101010
    drwxr-sr-x 2 root systemd-journal   4096 Oct 17 16:26 e206206f70f742c3ae58f65233c83031
    -rw-r----- 1 root systemd-journal    240 Oct  8 22:28 system.journal
    -rw-r----- 1 root systemd-journal 634880 Oct  7 04:40 system@300ca8f83fb043589048f276da55b90b-0000000000000000-0000000000000000.journal
    -rw-r----- 1 root root            634880 Oct  1 03:06 system@fed3cbadaab24bc08a668a035505d579-0000000000000000-0000000000000000.journal
    

    So WT? What is a system@{garbage} file? Why do I have two of them with different groups? Why are they large? BUT, more importantly:

    root@ArmbianUX4:/var/log/journal# du -ms * | sort -rn
    47	10101010101010101010101010101010
    1	system@fed3cbadaab24bc08a668a035505d579-0000000000000000-0000000000000000.journal
    1	system@300ca8f83fb043589048f276da55b90b-0000000000000000-0000000000000000.journal
    1	system.journal
    1	e206206f70f742c3ae58f65233c83031
    

    What the ***S$#@ is this 101010etc directory and why does it take up all the space? Is it something I can safely just “blow away”? What is in it? It looks like a very suspicious directory name. Will I be able to reboot if I DO blow it away?

    root@ArmbianUX4:/var/log/journal# ls 10101010101010101010101010101010/
    system.journal
    system@0005951dabba6bf5-23ac0d9b65e58951.journal~
    system@0005951f3a114557-83841fa94eb915d4.journal~
    system@000596afdacc0afe-490aed52502f1f01.journal~
    system@d8d45c54a30640f4816c929ae4a07fd8-0000000000000001-000596953e01064e.journal
    user-1616@000596afc66e6d4b-12bf172b560f4ebc.journal~
    user-1616@000596afdab74470-da91f178635f1df7.journal~
    

    Well, it claims to be more systemD logs of some sort (who knows what kind). Is this a clever ruse by a bit of system-cracking software, or just more systemD bogus crap behaviour? Do I purge this system and start over clean, or just delete these files and hope? Or spend the rest of the day trying to untangle the madness that is the systemD logging system to figure out what it has done and whether this is normal?
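
    (One checkable explanation: journald names its per-machine log directory after /etc/machine-id, so if that file on this Armbian image holds the same 1010… string, the directory is just a placeholder machine ID baked into the image, not an intruder. A quick check:)

    # journald keeps logs under /var/log/journal/<machine-id>/ ;
    # if the weird directory name matches this file, mystery solved.
    cat /etc/machine-id
    ls /var/log/journal/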

    So just on logs alone, I could lose all of today to SystemD crap. Or play “roll the dice” with my system health. Either way, plans for the day are out the window and the comfort level is very uncomfortable.

    Oh, and it isn’t like you can just head or tail one of these, look at a bit of the contents, and know “Oh, that’s just a kernel log” or “that’s just a runaway cron job”. Nope.

    root@ArmbianUX4:/var/log/journal# file system@300ca8f83fb043589048f276da55b90b-0000000000000000-0000000000000000.journal
    system@300ca8f83fb043589048f276da55b90b-0000000000000000-0000000000000000.journal: data

    Everything is just inscrutable “data”.
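
    (For the record, the one reader that can make sense of these is journalctl itself, which can be pointed at a single file — substitute any of the real filenames listed above:)

    # Dump one journal file in readable form (--file is a standard
    # journalctl option); pipe to head for a quick peek.
    journalctl --file=/var/log/journal/10101010101010101010101010101010/system.journal | head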

    Thanks, Poettering, for pissing on my day. Again.

    So I’ll be gone for an indeterminate ‘while’ while I just blow away some of these files and hope it doesn’t kill my system. Then reboot.

    “But hope is not a strategy. -E.M.Smith”

    though it can be a tactic… the strategy is to get SystemD the hell off my systems.
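
    (If anyone else lands here with a full journal, the journald-sanctioned cleanup — less of a dice roll than hand-deleting files — is roughly this, using standard journalctl options; the 20M cap is an arbitrary choice for a ~48 MB zram partition:)

    # Close out the active journal file, then prune archives
    # until total journal usage is under the given cap.
    journalctl --rotate
    journalctl --vacuum-size=20M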

  63. E.M.Smith says:

    Well, I just deleted anything that looked like a log file with a date older than today and rebooted. /var/log is no longer full and it looks like it worked… but…

    I still don’t have my desktop wallpaper, and just trying to choose to have it re-set to the image (which is still in my Desktop directory) doesn’t work…

    So I guess the question for me is just “How much do I want my wallpaper… and is this a symptom of something bigger?”

    Sigh.

    BTW, my home dir is on a Real Disk partition so this can’t be a uSD card wear issue in my directories…

    I think I’ll just proceed to that “try to make a working Devuan on this SBC” task… though perhaps using a different board as Daily Driver… It would mean more plugging and unplugging of cables as I swap between boards (instead of just swapping uSD cards), but I can live with that…

  64. Pingback: Links 9/11/2019: Linux Journal Goes Dark (Offline), KStars 3.3.7, OpenSUSE Name Change Aborted | Techrights
