SystemD – it keeps getting worse

There are some things where your first impression is “WTF?”, but which on closer inspection turn out to have good features. The tip of the pyramid is not much to look at, but the deeper you dig, the better it gets. Then there are other things where the top view is great, but “the deeper you dig, the deeper the doo”…

My first impression of SystemD was “why on earth do that?”. Digging a little, it became “What the? That’s not right.”. Now I’ve gone further into it, using it on a couple of platforms, and had that sinking feeling of “everything you know is wrong now”, along with some inexplicable bad behaviours from those systems. (Things like a directory in which I was residing being apparently deleted and recreated, and “kill -9 PID” not causing that Process ID to die.)

So I decided to look a bit more at just what IS SystemD and what all is it doing (now, and if possible in the future given the massive rate of “take over” of functions that it has done so far). I started with The Wiki. Yes, there’s a wiki on it. Oddly, a not too horridly biased one.

Since Wikis have had a tendency to be rewritten if referenced for criticism of The Left, and probably other times as well, I’m going to quote more of it than I would were it a more stable platform. Some bits are out of order, and any bold is mine.

systemd is an init system used by some Linux distributions to bootstrap the user space and manage all processes subsequently, instead of the UNIX System V or Berkeley Software Distribution (BSD) init systems. The name systemd adheres to the Unix convention of naming daemons by appending the letter d. It is published as free and open-source software under the terms of the GNU Lesser General Public License (LGPL) version 2.1 or later. One of systemd’s main goals is to unify basic Linux configurations and service behaviors across all distributions.

As of 2015, many Linux distributions have adopted systemd as their default init system. The increasing adoption of systemd has been controversial, with critics arguing the software has violated the Unix philosophy by becoming increasingly complex, and that distributions have been forced to adopt it due to the dependency of various other software upon it, including, most notably, the GNOME 3 desktop environment.

Right out of the gate, the intent is to have ALL distributions “unified” in using it. That is exactly counter to the Free and Open Source Software (FOSS) way. ALL software is available for the express purpose of changing over time to become whatever YOU want, if YOU (and your friends) put in the time to work on it. Even the kernel. Roll your own if you like. Add some bit of kit you need for your particular hardware or process. Make it a Real Time Kernel if doing things like robotics. Etc. Now along comes this bit of “shimware” that forces itself between “userland” and the kernel, and is so full of hooks into everything that it is hard to root it out for your system needs.

Systemd is that deep blue layer just below userland isolating it from the kernel and hardware.

So by design this thing is supposed to isolate you and install itself between all other code and the kernel. To take control of all outside the kernel. And to force “uniformity”. That is not The Unix Way, nor the FOSS way, nor even “playing well with others”. Not nice.

Poettering describes systemd development as “never finished, never complete, but tracking progress of technology”. In May 2014, Poettering further defined systemd as aiming to unify “pointless differences between distributions”, by providing the following three general functions:

A system and service manager (manages both the system, as by applying various configurations, and its services)
A software platform (serves as a basis for developing other software)
The glue between applications and the kernel (provides various interfaces that expose functionalities provided by the kernel)

“One ring to rule them all, one ring to bind them…”

Now notice that 2nd line. “Software platform” for “developing other software”? What is this now, a tool chain? An environment for devo? A constantly mutating monster? By design? And that third line “expose functionalities provided by the kernel”? I thought that was the job of the kernel calls and interface? Why do I need a shim stuffed in there in my way?

But it gets worse.

As of August 2015, systemd also provides a login shell, callable via machinectl shell.

Graphical frontends

A few graphical frontends are available, including:


systemd-ui

Also known as systemadm, it is a simple GTK+-based graphical front-end for systemd. It provides a simple user interface to manage services and a graphical agent to request passwords from the user. As of 2014 the systemadm program has received little development or maintenance, because development focus has shifted to command-line tools like systemctl and systemd-analyze.


systemd-kcm

Provides a graphical systemd frontend for the KDE Plasma 5 desktop. It integrates into the system settings window and allows monitoring and controlling of systemd units and logind sessions, as well as graphical editing of configuration files.

Graphical front ends and a login to this layer between the kernel and ALL users processes? Really? Can you say “REALLY JUICY Attack surfaces!”? I knew you could…

Why on earth have a login (protected how, by whom, with what audit trail independent of that system?) into the guts of a system critical for security logging and authentication? Where does this put my two factor authenticator protection? Sigh. Yeah yeah, they might have done it well, with hooks back out to the external authenticators, but now I don’t know. One of the keys to a breaker proof system is knowing. And, since this is (by design) a constantly moving target, you can never really know.

Now, most folks just download a binary blob and run with it. Maybe look at the source code on a server somewhere in the “cloud”. This presents an enormous and frankly easy way for “Agencies” to bugger your system. Just watch your wire for a system update check, redirect the request to their server (they do all this part already sometimes) and “update” your systemd with one where they can “login” to it. Now they OWN you. I can’t help but wonder, given the PRISM program, if this is not also “by design”. Make a giant blob nobody but the developers will ever actually read, make key things dependent on it (since they could not corrupt Linus and the kernel, put a layer above it comes immediately to mind) and have it be adopted nearly everywhere. Game and set done, Match underway now IMHO.

Since systemd supports only Linux and cannot be easily ported to other operating systems due to the heavy use of Linux kernel APIs, there is a need to offer compatible APIs on other operating systems such as OpenBSD.

In a September 2014 ZDNet interview, prominent Linux kernel developer Theodore Ts’o expressed his opinion that the dispute over systemd’s centralized design philosophy, more than technical concerns, indicates a dangerous general trend toward uniformizing the Linux ecosystem, alienating and marginalizing parts of the open-source community, and leaving little room for alternative projects. In this he found similarities with the attitude he found in the GNOME project toward non-standard configurations. On social media, Ts’o also later compared the attitudes of two key developers to that of GNOME’s developers.

I gave up on Gnome about a decade back for just that kind of reason. Well, that, and it was a big fat pig of code that didn’t run well and was not, IMHO, designed right. The subsequent flowering of a half dozen alternative “light weight” replacements like xfce and lxde show I wasn’t the only one.

Now one clear “good thing” is that this means by definition systemd must fail at one of its stated main goals. You can’t stamp out all those nasty little variations of preference and “unify” it all if you leave out the world of BSD. It also means that, worst case, I can run BSD and be happy. (BSD is used as the base for some of the best, most hardened systems in the world. China uses it as the base for Kylin, I used it as the base for our hardened systems at Apple when I was there, and it is widely used in other industrial strength places and by TLAs as well. Three Letter Agencies…) Hmmm… one wonders if that might be why systemd was designed to not play well with BSD…

On the good news side, there are some islands of sanity unwilling to be led by the Judas Goat of really fast boot times. LFS (Linux From Scratch) is not ‘going there’, for the simple reason that things were changing too fast to keep up even if they wanted to, and they didn’t much want to. Slackware uses the BSD style “rc.d” init system from before System V came out from AT&T ‘way back when’, as AT&T tried to mutate Version 7 Unix (the base of BSD) into something different so they could make money off of a proprietary license. Many of us, then, considered Version 7 the last real Unix… (It was the basis for SunOS, and the shift to Solaris on a System V base was not greeted with praise by many of us. I ran SunOS at Apple as long as I could…) Gentoo lets you run systemd, but enables OpenRC by default.
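For flavor, the BSD style init that Slackware kept is just small shell scripts with start/stop entry points, one per service. Here is a minimal sketch in that spirit; the service name “mydaemon” is made up for illustration, and real Slackware scripts live as rc.* files under /etc/rc.d/:

```shell
#!/bin/sh
# Minimal sketch of a BSD style rc.d service script.
# "mydaemon" is a hypothetical example, not a real service.
rc_mydaemon() {
  case "$1" in
    start)
      echo "Starting mydaemon"
      # A real script would launch the daemon here, e.g.: /usr/sbin/mydaemon &
      ;;
    stop)
      echo "Stopping mydaemon"
      # A real script would signal it, e.g.: kill "$(cat /var/run/mydaemon.pid)"
      ;;
    restart)
      rc_mydaemon stop
      rc_mydaemon start
      ;;
    *)
      echo "Usage: rc.mydaemon {start|stop|restart}"
      ;;
  esac
}

rc_mydaemon start
```

The whole init policy for a service fits on one readable page of shell, which is the transparency critics say gets buried behind unit files and a compiled PID 1.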

So that’s your “short list” of alternatives. Slackware, Gentoo, LFS, and BSD systems. (And things derived from them).

As of now, I’ve gotten a full Slackware running on the Pi Model 2, Gentoo up to where I need to install X-windows (yet another dog’s breakfast of code…) and I’m likely to start a LFS attempt this week. BSD some time later.

Why? Well, I need to “pick one” to concentrate on. I have learned through the years (decades now?) that it is best to do as complete a search as possible “up front”, as that removes the most issues later. Oddly, I also follow an entirely conflicting philosophy when “time is of the essence”. I’ve done very many “fast bring up” and “emergency speed” projects. “‘Fast, good, cheap: pick any two’, and you already chose fast. -E.M.Smith”… so if you want it good, you will use excess resources.

Once the client understands that, the “Fast Project” method is used. Pick one path as early as possible, start on it, keep someone investigating the other paths, and if needed, swap horses mid stream, preserving as much of the work done as possible in the transition. For as many parallel steps as possible, ‘rapid prototype’ them for early defect / issue discovery. Propagate that info back upstream. In the end, you do the work 3 times, but with a LOT of time overlap, so you finish faster than the alternative methodical linear approach. Rather like modern computer cores that precompute some paths through the code, then when they come to the branch / decision throw away the branch not taken… So again, I’m not alone in seeing how this strategy increases speed to goal. But on this project, I am only one person, so there is no excess resource to spend… so the up front search is the faster line.

I’ve already found that both Gentoo and Slackware are cleaner and, IMHO, seem faster. LFS is likely to be very small and fast. There is a port of LFS to the MUSL libraries (a candidate for small, fast libraries to replace the bloated and slow glibc) and I may well leverage off of that. There’s a nice comparison of the different library choices here:

Now the reason this matters to a systemd discussion is that the “libraries” are what everything else is built upon. IF they are fat, your system is fat. If they are slow, your system is slow. Systemd is locked to glibc and you MUST use it. (Which is likely why the embedded folks at Gentoo just walked away from it. uClibc is essential to making small secure routers on minimal hardware.) Since I want a small fast system that also runs on minimal hardware, I’m likely going to step away from glibc too. (This is not without “issues”. The more recent GNU C compiler has a tie to glibc, so that LFS on MUSL had to cross compile from a glibc based machine…) That comparison also has a link to this:

This is Sabotage, an experimental distribution based on musl libc and busybox.

Currently Sabotage supports i386, x86_64, MIPS, PowerPC32 and ARM(v4t+). ARM hardfloat (hf) is supported via crosscompilation of stage1, since it requires a recent GCC which we can’t easily bootstrap in stage0 due to library dependencies of GCC introduced with 4.3.

The preferred way to build Sabotage is using a native Linux environment for the desired architecture. It is now also possible to cross-compile large parts of it. As cross-compiling is hairy and support for it is quite new, expect breakage. Native builds are well tested and considered stable.

OK, so it can’t completely self host and self build… Something for later…

But, to the point, systemd force locks you into the glibc libraries, and that matters. From the chart in the link above, here is a comparison of the library sizes and of the compiled size of a “minimal C program”.

Bloat comparison        musl    uClibc  dietlibc  glibc
Complete .a set         426k    500k    120k      2.0M †
Complete .so set        527k    560k    185k      7.9M †
Smallest static C prog  1.8k    5k      0.2k      662k

Yup, THE smallest possible C program ends up 660 kB larger than needed, from bloat in the libraries. The entire .so set ends up 7.3 Meg of bloat.

Think that matters at run time on small hardware? You betcha!
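Putting rough numbers on that claim, taking the table’s figures at face value with 1 k = 1024 bytes (the 1.8k musl figure is approximated here as 1843 bytes):

```shell
# Back of the envelope arithmetic from the bloat table above.
# Sizes are the "Smallest static C prog" row: glibc 662k vs musl ~1.8k.
glibc_prog=$((662 * 1024))     # 677888 bytes
musl_prog=1843                 # roughly 1.8k
overhead=$((glibc_prog - musl_prog))
echo "glibc static overhead vs musl: ${overhead} bytes"
```

So a do-nothing program statically linked against glibc carries roughly two thirds of a megabyte it does not need, which is real money on a small ARM board with limited flash and RAM.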

So one of my design goals is a non-glibc based system, for obvious reasons. Yet systemd prevents that.

The only logical conclusion is to dump systemd. Which is exactly what the embedded systems folks at Gentoo did. Good on ya!

Preserving the History quotes

This is from the History section of the wiki, here mostly to preserve it, since it is presently flagged with:

“This article’s Criticism or Controversy section may compromise the article’s neutral point of view of the subject. Please integrate the section’s contents into the article as a whole, or rewrite the material. (November 2015)” the PC Police don’t like the negative waves…

History and controversy

The design of systemd has ignited controversy within the free software community. Critics argue that systemd is overly complex and suffers continued feature creep, and that its architecture violates the design principles of Unix-like operating systems. There is also concern that it forms a system of interlocked dependencies, thereby giving distribution maintainers little choice but to adopt systemd as more user-space software comes to depend on its components.

In May 2011, Fedora became the first major Linux distribution to enable systemd by default.

In a 2012 interview, Slackware’s lead Patrick Volkerding expressed reservations about the systemd architecture, stating his belief that its design was contrary to the Unix philosophy of interconnected utilities with narrowly defined functionalities. As of August 2014, Slackware does not support or use systemd, but Volkerding has not ruled out the possibility of switching to it.

In January 2013, Lennart Poettering attempted to address concerns about systemd in a blog post called The Biggest Myths.

Between October 2013 and February 2014, a long debate among the Debian Technical Committee occurred on the Debian mailing list, discussing which init system to use as the default in Debian 8 “jessie”, and culminating in a decision in favor of systemd. The debate was widely publicized and in the wake of the decision the debate continues on the Debian mailing list. In February 2014, after Debian’s decision was made, Mark Shuttleworth announced on his blog that Ubuntu would be following through as well in implementing systemd, despite his earlier comments in October 2013 that described systemd as “hugely invasive and hardly justified”.

In March 2014, Eric S. Raymond opined that systemd’s design goals were prone to mission creep and software bloat. In April 2014, Linus Torvalds expressed reservations about the attitude of Kay Sievers, a key systemd developer, toward users and bug reports with regard to modifications Sievers sent to the Linux kernel itself. In late April 2014, a campaign to boycott systemd was launched, with a website listing various reasons against its adoption.

In an August 2014 article published in InfoWorld, Paul Venezia wrote about the systemd controversy, and attributed the controversy to violation of the Unix philosophy, and to “enormous egos who firmly believe they can do no wrong”. The article also characterizes the architecture of systemd as similar to that of svchost.exe, a critical system component in Microsoft Windows with a broad functional scope.

In November 2014, Debian maintainers and Technical Committee members Joey Hess, Russ Allbery, Ian Jackson and systemd package maintainer Tollef Fog Heen resigned from their positions. All four justified their decision on the public Debian mailing list and in personal blogs with their exposure to extraordinary stress levels related to ongoing disputes on systemd integration within the Debian and open source community that rendered regular maintenance virtually impossible.

In December 2014, a fork of Debian, called Devuan, was announced by a group calling themselves the “Veteran Unix Admins”. Its intention is to provide a Debian variant without systemd installed by default.

In August 2015, systemd now provides a login shell, callable via machinectl shell.

In October 2015, an article titled “Structural and semantic deficiencies in the systemd architecture for real-world service management” was published, which criticized systemd in several areas, including its design as an object system with too many layers of indirection that make it prone to ordering-related failure cases, a difficult to predict execution model, its non-deterministic boot order, implicit state in unit file configurations and its general inadequacy at providing a uniform external abstraction for unit types.
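For reference, the unit files themselves are short ini style configs; the “implicit state” complaint above is about everything the terse syntax pulls in behind the scenes (default dependencies, implicit ordering), not the surface syntax. A minimal, hypothetical service unit looks roughly like this (mydaemon.service is a made-up name for illustration):

```ini
# /etc/systemd/system/mydaemon.service -- hypothetical example unit
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/sbin/mydaemon
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Note how little is stated explicitly: the ordering against everything besides network.target, the default dependencies, and the failure semantics are all supplied implicitly by systemd, which is precisely the sort of thing that paper objects to.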

Other than that, no problem! ;sarc/

No, I don’t need my Unix world re-made in the image of Microsoft architecture and svchost.exe design. It is exactly that design approach that drives me away from Micro$oft. Well, that, and their participation in PRISM…

In Conclusion

So that’s what I’ve been up to the last day or two. Wandering in the desert of systemd, finding just how much it damages choice, even down to the library system you must run, finding out it is an operating system wanna-be running on top of the real OS kernel and filtering what the real OS userland can do, complete with its own login… Fighting the urge to retch in the sink…

From the wiki:

“ systemd provides us with a great set of APIs to get done what we want to get done. In ‘earlier times’, we had to do these things for ourselves. Not so long ago, gnome-settings-daemon shipped some system-level services that were a poorly-written mess of #ifdef in order to do basic things like set the time and enable/disable NTP just so that we could provide a UI for this in gnome-control-center. We had to handle the cases for each operating system, for ourselves, often without having access to those systems for testing. These are components that run as root.”

— Ryan Lortie

So one bit of over reaching bloatware that was trying to do things with the OS that it never ought to have done, needed a more “uniform” layer below it to take the pain and suffering out of their bad decision to be a giant Does-All and not be part of The Unix Way of modular code each doing one thing very well. No thanks. Just deep six Gnome and move on…

“ I understand the natural urge to design something newer than sysvinit, but how about testing it a bit more? I have 5 different computers, and on any given random reboot, 1 out of 5 of these won’t boot. That’s a 20% failure rate. It’s been a 20% failure rate for over 6 years now.

Exactly how much system testing is needed to push the failure rate to less than 1-out-of-5? Is it really that hard to test software before you ship it? Especially system software related to booting!? If systemd plans to take over the world, it should at least work, instead of failing.”

— Linas Vepstas

“ I don’t actually have any particularly strong opinions on systemd itself. I’ve had issues with some of the core developers that I think are much too cavalier about bugs and compatibility, and I think some of the design details are insane (I dislike the binary logs, for example), but those are details, not big issues.”

— Linus Torvalds

Nice when you find yourself out on a fringe edge, and next to you stands someone like Linus… Yes, Linus, I too saw some of it as a bit insane, and really detest binary logs you can only read with a special program…

With that, I’ve narrowed my base candidates to Gentoo, Slackware, LFS and their derivatives. Most of the rest are either already systemd afflicted, or headed that way. BSD is hanging in the wings as a fallback candidate, but it is so different from Linux that for most folks it would be a bit hostile to install. For me, it would take more work to make it go and make it handholdy enough for others. Puppy has some very interesting build tools, and I’m likely to raid it for some design aspects and tools. (That whole boot to memory and build system kit). As a base, though, Puppy has some issues. It is, by default, single user. It is prone to scatter as they have a lot of neat tools to roll your own, and their own code base shifts over the years. Sometimes Slackware, sometimes others. Flexibility may be nice later, but up front it makes the search space large and the options non-converging. So I’m going to pick bits out of it if they work for me, but not just make another Puppy… (besides, while I like dogs, the constant dog metaphor in everything drives me up a wall…)

As of now, the most likely process I see is a LFS build against glibc (later against MUSL, which has been shown to work but needs a complicated cross compile build system), then a 3 way final compare, and a full on system build. For now, my hunch is that the first one will be LFS based, as their build system is fully documented and they have an auto generated script that drives it. Essentially, most of the work is already automated if desired, and self hosted / contained. Eventually I’d like to get to Gentoo, but the learning curve on their build system will take some time, so it is a parallel task of lower priority. Slackware calls to me with that rc.d BSD flavor and the way it can offer both binary blobs and builds from source, so I’m likely to put fleshing it out for my install ahead of Gentoo exploration. That it only comes with the Konqueror browser is a bit of a pain, since attempts to edit an article with it did not end well… I’d rather not start out with the need to port browsers… (Yes, packages are available… but I’d rather not commit to that whole approach just yet.)

So that’s where I am now. LFS up next, Slackware being finished out as desired, and Gentoo hanging over my head ;-) I really don’t look forward to slogging through the Gentoo unique installation method AND x-windows in the same bucket…

OK, back to work! ;-)


About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present "hot buttons" are the mythology of Climate Change and ancient metrology; but things change...
This entry was posted in Tech Bits. Bookmark the permalink.

21 Responses to SystemD – it keeps getting worse

  1. Larry Ledwick says:

    I see systemd as a reflection of the modern trend in the IT community to favor socialism and one size fits all solutions. When Unix and Linux first came out, they were the ultimate free expression vehicle. As you mentioned, want to do it a different way? Just roll your own and get on with your custom version.

    In that sense, early Linux was the ultimate IT example of Libertarian and free expression, but as it has become fashionable for the new generation of geeks to support progressive/socialist expressions in all things, systemd is just a smiley face version of totalitarianism (or Microsoft like view of the software world if you will), do it my way and like it because we will smother you with so much code even we don’t really know what all it does (or could do).

    There is no such thing as bug free code. The bigger the blob, the more likely it does something “undefined, unexpected, or unplanned”. It is just the reality of the sheer complexity of large code modules.

    The best practical example I can think of is from the mid 1980s: they were looking at nuclear reactors and safety, and deep analysis of the possible failure paths showed that at some point the complexity introduced by multiple layers of “safety systems” actually increased the chance of failure. More importantly, the failure modes were more likely to be unanticipated and obscure, so less identifiable and more difficult to respond to in a timely manner.

    I liked the rc script approach as well.

  2. wyoskeptic says:

    Re Larry and “There is no such thing as bug free code.” Back in the day when I was doing programming (Industrial real time process control / scada) my standard answer about bugs in the coding was “If x is the number of bugs you have found, then x+1 is the absolute minimum the coding contains.” As for the max number, I never wanted to go there; it was too depressing.

  3. E.M.Smith says:


    I briefly thought of posting a one line program that by inspection must be bug free… then I remembered about 1980, when I wrote a program (trying to debug a problem) of the form:

    PRINT “you can not get here!”

    When run, it printed “you can not get here!”

    How? Called the vendor… compiler bug…. No program runs alone, they run on top of thousands to millions of lines of compiler, assembler, libraries, operating system and utilities, device drivers, etc etc…

    Since that day I put in “you can not get here” diagnostic prints… when other programmers complain about that “style” and wasted code, they get The Story…

  4. Larry Ledwick says:

    We have run into that here at work: programs which have run fine for years suddenly break due to changes in header files or versions of MySQL which on the surface should handle the code just fine, but in one version you bump into an edge case where the code chooses a different default behavior in an ambiguous case or implicit definition.

  5. E.M.Smith says:


    Why I don’t like self updating systems and have a fondness for static linked libraries in critical applications… and yes, other programmers and sysadmins call me a retro worrywart… but my stuff crashed less… I always shut off all auto update on everything… I only update after a full backup and during slack time.

    I’m not the only one, either. Last contract at Disney, they regularly had change freeze windows. (I was on the change control committee conference calls weekly… well, a few times each week as my projects moved forward…) During high attendance times, nothing changes. (Think what would happen if hotel room assignment and billing failed between Dec 20 and Jan 5 or so… or food ordering failed… or…)

  6. Thus spake the master programmer:
    “Though a program be but three lines long, someday it will have to
    be maintained.”
    — Geoffrey James, “The Tao of Programming”

  7. Larry Ledwick says:

    Sun microsystems also had change freeze windows which provided an interesting look at the real cause of most outages. We would get pounded with problems just before the change freeze as folks tried to get stuff done and sneak under the wire before the change freeze. Problems were almost always user error during those last minute changes. Then the system would be rock solid stable during the freeze when nothing in the code was changing. Then after the freeze window closed and they were allowed to fiddle with stuff again, we would have a second surge of problems as they threw stuff at the system to try to catch up to their internal schedules. Again the problems were almost always user errors. Small bugs in scripts, forgotten updates to header files, wrong headers, version mismatches etc.

    That was the experience that led me to my current philosophy — 90%+ of all computer problems are caused by trying to improve things. Get it working then leave it alone until you really “need” to change things again. (if it ain’t broke – don’t fix it !)

    My current job our team does some help desk like functions and every time there is a patch push by Microsoft or major update to browsers or other high use software systems we always have someone (often several) who have problems caused by the update. Sometimes we never do learn what the real conflict was as the expedient solution is often “shut up and reboot” (the dogbert customer support solution) followed by “uninstall and reinstall”.

    The complexity of systems is so great it is a practical impossibility to test all possible combinations of user options, system configurations and add-ons. Some of those possible combinations simply don’t like each other and do not play well with each other. Unfortunately since you cannot predict those situations, you just have to beat the gremlins into submission each time you make significant changes to a large population of users systems.

  8. wyoskeptic says:

    @ E.M.: your compiler example reminds me of an error trap routine I had to put into a compiled BASIC program (circa 1988, on an IBM PS/2) so that the program would continue to run regardless. Simply put, the trap routine would print out the line #, error # and a few other parameters, would save the same data to disk, and then jump over the error section and return to the main program. (Process control, real time, 24/7; we could never allow that CPU to sit idle waiting for any sort of input, be it keyboard or anything else.) I had a line that saved data to disk (added to the end of a sequential file). Simply put, the location was c:\data files\errors\ xxxx.yyy.

    I did a little modifying at one point, compiled, and went home for the day. I got an alarm call because the computer had shut down completely. I went in to find that the printer had gone through a complete box of paper. An entire ream of paper that had “error in Line # such and such, error # such and such…” printed on it. All because in my haste to finish up, I had put into the data save line a minor little itty bitty error. I had used the path of c;\data files\errors\ xxxx.yyy.

    So, one error then the program became locked into the error trap routine over and over.

    That semi colon instead of colon … sigh.

    It did prove that the error trap worked, though. Sort of. Up until the printer ran out of paper, that is …

    It never ends.

  9. wyoskeptic says:

    Whoops. c;\data files\errors\ xxxx.yyy for the second listing.
    Computers. Gotta Love ’em.

    [Reply: Fixed it for ya, though I had to bold it to see the tail on the ; ]

  10. E.M.Smith says:


    I had something similar while in school. Taking Algol 68 as my second computer language. The B6700 had a very nice Algol that included an extension for I/O (not in the language spec… IMHO always an error to not have I/O built in and as part of the spec…) that was similar to FORTRAN format statements. I had something like:


    but my printout was NOT giving me a new line between lines… so it looked like a long wrapping repeat of lines filling 100% of the page. Try as I might, I could not get the thing debugged. Even thinking the card might be bogus, I duped it ( 029 keypunch…) and the problem remained.

    Finally, in desperation, I just retyped the card. 100% certain that could NEVER fix it, since I was just typing the same thing again.

    That fixed it.

    I compared cards and printout and such. Text on the cards matched. Text on the paper showed a proper statement with a ; in place. Finally put the two cards on top of each other and inspected holes. One hole was different. Decoded the ASCII on the two cards by hand…

    The ; was actually a : BUT…

    The line printer had moving paper so the : all looked like ; since the paper smeared the dots down. The 029 punch in question had a busted pin so was NOT printing a tail on the ; on the card… Essentially, given the combo of paper moving and punch minus a pixel, the : and ; were indistinguishable, and to that particular version of ALGOL, one said to suppress line feeds…


    To this day I don’t like to see a ; or : with semantic importance in computer languages….
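    The one-punch-hole difference above can be illustrated with a toy formatter (purely hypothetical, not actual Burroughs ALGOL syntax): one delimiter ends each record with a line feed, while its near-twin suppresses the line feed entirely, producing exactly the run-together printout described.

```python
def emit(records, delimiter):
    """Toy line formatter: ';' ends each record with a newline,
    while ':' (one punch hole away) suppresses the line feed."""
    sep = "\n" if delimiter == ";" else ""
    return sep.join(records) + (sep if delimiter == ";" else "")

rows = ["row one", "row two", "row three"]
print(emit(rows, ";"))   # three separate lines
print(emit(rows, ":"))   # one run-together line, as on the smeared printout
```

    With a smeared printout and a keypunch missing the tail pin, the two delimiters are visually identical — but semantically opposite.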

  11. Another Ian says:


    Don’t I remember that Wordstar blew it on a supposedly new beauter idea?

  12. E.M.Smith says:

    @Another Ian:

    Looks to me like the “new interface” started the decline, then the guy who was remaking the remake had a heart attack and the whole thing fell in. I just remember it was the hot stuff, then the interface changed, and I never had to upgrade it again ;-) ( I was doing I.T., not using it…)

    Later I got a free copy bundled with some Linux I bought (still on my shelf…). Never used it.

    WordStar 2000

    At the time, the IBM Displaywriter System dominated the dedicated word processor market. IBM’s main competition was Wang Laboratories. Such machines were expensive and were generally accessed through terminals connected to central mainframe or midrange computers.

    When IBM announced it was bringing DisplayWrite to the PC, MicroPro focused on creating a clone of it which they marketed, in 1984, as WordStar 2000. WordStar 2000 supported features such as disk directories, but lacked compatibility with the file formats of existing WordStar versions and also made numerous unpopular changes to the interface. Gradually competitors such as WordPerfect reduced MicroPro’s market share. MultiMate, in particular, used the same key sequences as Wang word processors, which made it popular with secretaries switching from those to PCs.

    BYTE stated that WordStar 2000 had “all the charm of an elephant on motorized skates”, warning in 1986 that an IBM PC AT with hard drive was highly advisable to run the software, which it described as “clumsy, overdesigned, and uninviting … I can’t come up with a reason why I’d want to use it”. WordStar 2000 had a user interface that was substantially different from the original WordStar, and the company did little to advertise this. However, it had a lasting impact on the word processing industry by introducing keyboard shortcuts that are still widely used, namely Ctrl-B for Bold, Ctrl-I for Italic, and Ctrl-U for Underline.

    WordStar became popular in large companies without MicroPro. The company, which did not have a corporate sales program until December 1983, developed a poor reputation among customers. PC Magazine wrote in 1983 that MicroPro’s “motto often seems to be: ‘Ask Your Dealer'”, and in 1985 that

    Almost since its birth 4 years ago, MicroPro has had a seemingly unshakable reputation for three things: arrogant indifference to user feedback (“MicroPro’s classic response to questions about WordStar was, ‘Call your dealer’”); possession of one of the more difficult-to-use word processors on the market; and possession of the most powerful word processor available.

    By late 1984 the company admitted, according to the magazine, that WordStar’s reputation for power was fading, and by early 1985 its sales had decreased for four quarters while those of MultiMate and Samna increased. Several MicroPro employees meanwhile formed rival company NewStar. In September 1983 it published WordStar clone NewWord, which offered several features the original lacked, such as a built-in spell checker and support for laser printers. Advertisements stated that “Anyone with WordStar experience won’t even have to read NewWord’s manuals. WordStar text files work with NewWord”. Despite competition from NewStar, Microsoft Word, WordPerfect, and dozens of other companies—which typically released new versions of their software every 12 to 18 months—MicroPro did not release new versions of WordStar beyond 3.3 during 1984 and 1985, in part because Rubinstein relinquished control of the company after a January 1984 heart attack. His replacements canceled the promising office suite Starburst, purchased a WordStar clone, and used it as the basis of WordStar 2000, released in December 1984. It received poor reviews—by April 1985 PC Magazine referred to WordStar 2000 as “beleaguered”—due to not being compatible with WordStar files and other disadvantages, and its selling at the same $495 price as WordStar 3.3 confused customers. Company employees were divided between WordStar and WordStar 2000 factions, and fiscal year 1985 sales declined to $40 million.

    So yeah, someone had a “better idea” for cheapening the product and keeping the price high while cutting developer costs, and forgot about the customer and consistency of product quality and interface, so blew up the company…

    Sadly, a common story. Ignore the guys who write the software at your peril, and when you have bright people doing exceptional things, don’t ever think you can replace them with cheaper folks and get the same result. “Fast, good, cheap. Pick any two. And you already chose cheap. -E.M.Smith” means that when you then choose “fast”, you get crap and the company heads to the crapper… Every. Single. Time.

  13. Another Ian says:


    IIRC there was WS 2000, which was the new-beaut that abandoned the old interface, and then there was WS 4, where there was an attempt to get things back on track. Which worked OK in my experience, but by then they’d opened the field to WordPerfect – at least in our govt outfit.

    But a moment I treasure. We got our first micro – an S100 with 8 inch floppies. At the same time a different branch got an Apple. So much for consistency, and there is another story around that, but not now.

    I drew the job of running and training – coming from an intro to card-punch CDCs, and at the age of about 33! I said to the typing pool, “There is a thing called a word processor on this, and I need someone to run the typing while I run the instructions.” So they volunteered the youngest recruit. Who was sold when she realised that she was NEVER going to have to re-type anything ever again.

    They had Canon daisywheel typewriters and I cooked one up to link to the S100.

  14. Pingback: boycott systemd – Boring Tech

  15. bk says:

    I think the root problem is that as we moved from the 286 era through the age of Linux there were vast numbers of very real, very material problems to be solved. Today we have a lot of settled architecture. GUIs just haven’t really “changed” in any material way because your basic title bar, frame, menu sort of thing is pretty much done, and proven. Users in Top500 HPC space are pretty happy, users in microcontroller land are pretty happy, 16 PB is, well, enough to make most anybody happy.

    That doesn’t mean there are no problems to solve, like integrating GPUs into all of the problem domains to which they can contribute – if we could figure out how. There just aren’t enough of these problems anymore.

    So you have the likes of Pottering… The boy likes computers, but he has no real problems to solve (and is likely not intellectually suited to solving some of the ones we do have). He has to justify himself. So he’s left “solving” problems that don’t really exist. I saw the term “Architectural Astronaut” somewhere; it’s a really good term. Like Star Trek: they warp over to some coordinates, beam in, and thereupon start solving whatever the immediate local problem seems to be. Universe? What Universe? We have a show to put on.

    You may notice the MO of your basic Astronaut. No scholarly papers, no RFCs, no irc threads with anybody, no real intellectual discourse of any kind. Just the sudden appearance of half-baked code and truly dubious marketing.

    That doesn’t mean the “solution” doesn’t solve various problems, for various people. But just like with good ‘ol Cap’t Kirk, the “Great and Final Answer to Everything” is, quite naturally, accomplished in the space of a single acre of ground, on some random planet, with a complete and total lack of awareness that there’s even a “Big Picture” out there.

    Or… He’s just a Microsoft plant. I heard they really want that 1.5% of the desktop market upgraded to Win 10.

  16. E.M.Smith says:


    Interesting insights… I’d not thought of looking at it from the human motivation POV…

    Yes, that ‘fits’… “got to make my bones, so let’s create a problem to solve”…

  17. Gail Combs says:

    Sounds like running into the ‘mature field’ problem. At this point just about everyone has some sort of computer and most run OK so why upgrade?

    Yeah, you are going to get upgrades and new innovation, but that steep new-product-idea curve has now leveled off a bit. However, Microsoft and co. are not going to want to downsize to deal with the existing (replacement type) customer base.

    And yes they still do manufacture buggy whips, I have three. {:>D

  18. E.M.Smith says:


    My Amish relatives have lots of buggy whips… and buggies… And some REALLY big horses ;-)

    When a child, I wanted a Belgian or Percheron … with a saddle ;-)

    has one presently as the top listing, shown with rider… so I’m not crazy, see, other people do it too ;-)

    Per Microsoft: They have a BIG problem. Computer costs are getting so low that they are a significant part of the purchase price. Then there’s that whole Android / MacOS / Linux thing as folks move to tablets. Their answer? Buy patents and sue…

    Oh, and they have announced they are going to cure cancer in 10 years… Yeah, right. (They see it as a DNA computation problem. They are wrong. It’s a programming problem and a data integrity problem. They intend to make some magic DNA that will detect when a cell goes rogue and kill it… what could possibly go wrong…)

  19. Gail Combs says:

    E.M. I helped a guy training a Clydesdale as a tourney horse for SCA events and at age 8 rode a Percheron named Sour Pus. I actually got a trot out of her!

    As far as cancer goes, a doctor in Germany (1975) told my Mom and me they had a blood test for cancer. When I came home I asked Mom’s USA doctor about it and got: “Yeah, there is a test, so who cares? We would not know WHERE the cancer is.”

    Since both of my parents ended up with cancer, AND we now have chemo that doesn’t CARE where the cancer is, WHY in Hades isn’t this test used to screen people with a family history of cancer on a periodic basis???

  20. Pingback: Any User, One Line SystemD Crash | Musings from the Chiefio

Comments are closed.