Fedora on Pi – a short note

I’m doing a test drive of the Berryboot Fedora install today. So far, I like it.

It uses the MATE desktop, which is in many ways comfortable.

It didn’t ‘balk’ when I hand-edited the /etc/passwd file to add the user “pi” with the same user ID and group ID as on Debian. I mounted my (now living on an external disk) pi home directory and then logged in as ‘pi’. There’s all my stuff, and it works nicely.
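For anyone wanting to try the same trick, the shape of it is sketched below. Raspbian’s usual IDs (uid/gid 1000) are an assumption here; check your actual numbers with `id pi` on the Debian side first, since the whole point is that the numeric IDs must match the ownership of the files in the shared home directory.

```shell
# Sketch only: the passwd entry added on Fedora must carry the SAME numeric
# uid/gid that owns the files in the shared home directory (1000 is Raspbian's
# default for 'pi', and is an assumption here).
entry='pi:x:1000:1000:,,,:/home/pi:/bin/bash'
uid=$(echo "$entry" | cut -d: -f3)
gid=$(echo "$entry" | cut -d: -f4)
echo "uid=$uid gid=$gid"
```

On the real system that line (plus a matching entry in /etc/group) gets appended as root, and then the external-disk home directory is mounted over /home/pi before logging in.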

Launched FireFox. It is a faster browser than the IceWeasel and IceApe clones on Debian. No idea why. Using IceWeasel on Debian (the newer one) had slow spots and was just “an issue”, even with squid running. With the IceAgeNow page open, it tends to consume most of a CPU (likely due to the way ads are done on that site; I have seen this on several browsers with them, so now I use them as a test case). At present, I have my site, WUWT, and IceAgeNow all open AND I’m putting in a posting.

No type-ahead lag. CPU is at 53% of one core. That’s a significant gain.

I did install Squid on it, and found that it installs with a different user ID, so I can’t just point it at the same cache directory on the external disk drive. No big deal, I’m just leaving it with the defaults for now.

Overall the system is clean and fast. It does have that “slightly stuffy” feeling of all things Red Hat. Crisp, but in a “you WILL do it my way” sort of way.

Yet it works, and from a first look, rather well. I suspect that the new Debian (Jessie) is having “issues” from trying to integrate systemd and message bus processes. Fedora was where it was developed and integrated from the start (only one of the reasons I’m not fond of systemd… it forces change in so much other stuff, all of which will take a good bit of tuning and debugging to get back to where they were prior to the conversion…) But “it is what it is” and Fedora looks to be using those basics better, for now.

It looks like some other folks like it too:


[fedora-arm] New Raspberry Pi 2 with ARM v7 processor
User Digital user0007 at yahoo.com
Sun Feb 22 17:40:19 UTC 2015

Fedora is running indeed on RPI2 – you may use Berryboot – Boot menu / OS installer for ARM devices (http://sourceforge.net/projects/berryboot/). The Berryboot web site is http://www.berryterminal.com/doku.php/berryboot It is needed to download a 32.8 MB zip file and to copy the unzipped files to a FAT or FAT32 formatted microSD card. Then you may add to the Berryboot main menu the existing Fedora ARM 21 OS with MATE desktop. It is running very well, the desktop is nice and fast, Firefox v.33.1 is already pre-installed (Raspbian OS has only old Firefox version Iceweasel). Audio hardware should be added. The previous version included in Berryboot was Fedora 18 with xfce desktop – it was running very well too on RPI2.

I’ll be “living on Fedora” for a few days now. Mostly to see how it does with a wider range of things. See what the “build script” for it would need to be to get “all my usual tools” in place. It isn’t all that big a deal to do:

yum install squid

instead of

apt-get install squid

but knowing what is already in, and out, is the longer part of the process.
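One way to keep track of the “in and out” part for a build script is to diff a wish list against the installed list. A rough sketch, with made-up package names for illustration (on Fedora the installed list would come from `rpm -qa --qf '%{NAME}\n'`, on Debian from `dpkg-query -W -f '${binary:Package}\n'`):

```shell
# Compare a wish list of packages against what's installed; whatever is left
# over is what the build script still needs to pull in. Names here are
# illustrative, not an actual inventory.
printf '%s\n' squid gimp gparted transmission | sort > /tmp/wanted.txt
printf '%s\n' gparted transmission libreoffice | sort > /tmp/installed.txt
comm -23 /tmp/wanted.txt /tmp/installed.txt   # prints gimp, then squid
```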

I’ve been using CentOS on the Antek/ASUS box, and it is just a slightly older Fedora with a bit more QA and a package set more aimed at large data centers. The two are more alike than different.

But what I care about is the browser performance, mostly. This one is significantly smoother. (Hey, they both can be nearly identical, but if one is using 1.5 X the CPU, it hits 100% and bogs while the other doesn’t. As of now, I’ve not seen this Firefox hit 100%…)

I did mount a ‘real swap’ on it, and with just the browser and 2 terminal windows open I’ve got 16,052 blocks of swap already used. It’s a bit of a memory hog build. That may be where the extra speed comes from, a willingness to put more stuff in memory to save some cycles. On an all SD card system this would likely mean more SD card wear. As I really like having “real swap” this isn’t an issue for me. And as SD cards are cheap, just be ready to restore a backup if you run on an SD card for a year or two.
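(For scale: assuming those are the usual 1 KiB blocks that `swapon -s` and `free -k` report, 16,052 blocks is only about 15 MiB:)

```shell
# Convert swap "blocks" to MiB, assuming 1 block = 1 KiB (the usual Linux
# reporting unit; check your own tools' units before trusting this).
blocks=16052
echo "$(( blocks / 1024 )) MiB of swap in use"   # integer MiB: 15
```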

Ah! On clicking “save” during the draft of this article: the CPU usage for FireFox went to 117%, so more than one core. It is both ‘multicore aware’ and efficient. Nice. IIRC, IceApe is not multicore aware yet, limiting it to one core.

In Conclusion

So that’s the update from this posting. If you are a Fedora / Red Hat fan, or just like MATE, it seems to be an industrial strength ‘re-mix’ for the ARM chip set. So far. (This is still early in the test drive.)

Oh, and I’m running from a Class 4 card, so it is a reasonable speed chip, but not like a Class 10 Ultra at 30 MB/second. It’s not the chip that lets this be fast.

Now that my home directory is external, a lot of the ‘issues’ of living on another OS for a while go away. All my current stuff and projects come with me. That makes ‘variety testing’ easier. Don’t be surprised if I’m bouncing between distributions for a while. But, with that said, the ease of making postings on a real FireFox without pauses is likely to keep me here when posting. At least for now.

There is a reasonable selection of ‘the usual suspects’ installed. Libre Office and gparted and transmission are all already in place. Didn’t see Gimp, though, so I’m installing it now: “yum install gimp”.

I’ve also not yet tested sound. That will be later today. It ought to work, though. We’ll see what happens on a youtube video…

All in all, it is looking like a decent release.


About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present "hot buttons" are the mythology of Climate Change and ancient metrology; but things change...
This entry was posted in Tech Bits. Bookmark the permalink.

22 Responses to Fedora on Pi – a short note

  1. E.M.Smith says:

    Just for grins, I swapped over to the Ubuntu MATE on the Pi. It is themed with a particularly pond slime green color. It also uses ‘squid3’ when you install squid… which then will not start due to ‘directory not found’. Moving a terminal window up to the top of the desktop causes it to ‘auto zoom’ to fill the entire screen. I’ve found no way to make it go back to normal. I can minimize it to the tab at the bottom. You can close / kill it. But just “go back to normal” is not obvious. Very annoying.

    It does have a Very Nice system monitor with a ‘wiggly line’ for each CPU, each in its own color.

    Opening 3 tabs in FireFox it is using 4080 kb of swap (so glad I mounted it) but Ubuntu has always been a bit of a memory hog.

    It also has another interesting “feature”, at least I think it is a feature… maybe:

    At login, if you select “guest”, it just lets anyone log in. It looks like it is making a “disposable user” and it claims nothing is saved, warning that if you want to save anything, it needs to be copied off to somewhere else. IFF that is really true, it is a risk in that you are depending on their code to prevent malicious code insertion; but a benefit in that you have an incognito login on demand.

    It does also mean that I can’t let Ubuntu just be lying around as an alternate boot on any SD card where I’m seriously interested in security. Having a “powerfail, reboot, login for just anyone” is not a security feature. So: “Note to self: No Ubuntu as alternate boot choice on secure cards”.

    (Why I do these test drives… to find stuff like that which is not obvious when you just “install it in case I want to play with it some day”… It might be a Fine and Secure disposable user… but any time you let a person have a login and a USB port there is just sooo much potential… )

    The Firefox seems the same as on Fedora, and it claims to have working audio ( I already had a video ad run on one page). So likely it is a very nice home user option.

    For me, I find enough annoying that I’m unlikely to do more than make sure it works and doesn’t do anything bad. With the pond scum color theme, the blow-up windows, and squid that doesn’t work (and I’m not interested in debugging), it is not a place I’ll be hanging out much. I might make a dedicated very small chip with just Ubuntu on it for a ‘disposable system’ where it is 100% generic and all I ever do is log in as ‘guest’.

    With that, back to the bit mine ;-)

  2. E.M.Smith says:

    Ah, if you grab the title bar with the mouse and pull down, the window goes back to normal… just don’t expect any widgets or controls to do that, and don’t expect any clue anywhere that you can do it…

  3. E.M.Smith says:

    Just an FYI… unlikely to bite folks unless they do a lot of the OS du jour like I do…

    With my home directory on /Diskname/home/pi it gets mounted on each operating system. That way all my ‘stuff’ comes with me. Well, that includes some config files that the operating system and its services and parts like to use. Things you don’t normally think about. One of them is the stuff that lets X-Windows work.

    In particular, there is a file named .Xauthority (note that it starts with a dot – that says “don’t show this file in a normal file listing with the ls command”). You must use the -a for “all” option to see it, so do a:

    ls -al 

    in your home directory to see it.

    When you do the ‘startx’ command to start the x-windows system (to get all that graphical interface stuff) it spawns an Xauth command that looks at that file. All well and good.


    Ubuntu did something to it that causes it to be incompatible at least with the older Debian.

    I went to boot “my usual” system, and when doing ‘startx’, it just sat there ‘hung’. Not doing anything. Well, I did a CTRL-z to put it in the background and looked around. Eventually figuring out it was the .Xauthority file that was having Xauth hang on it. I copied the last one from the old home directory default /home/pi/.Xauthority back in to /Diskname/home/pi and after issuing a “kill -9 {process ID number}” on the startx process, doing startx then worked.
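    For anyone hitting the same hang, here is the fix simulated on throwaway files (file names and cookie contents are illustrative; on the real system the copy runs against your mounted home directory, after killing the hung startx):

    ```shell
    # Simulated .Xauthority recovery on temp files. In real life: find the hung
    # startx with ps, kill -9 it, then restore the saved known-good file.
    demo=$(mktemp -d) && cd "$demo"
    echo "ubuntu-written cookie" > .Xauthority           # the version Xauth hangs on
    echo "known-good cookie"     > .Xauthority.OLDHOME   # copy saved from old /home/pi
    cp .Xauthority.OLDHOME .Xauthority                   # put the working one back
    cat .Xauthority
    ```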

    Thus my being able to make this comment using this chip.

    The point of it all?

    1) This is why I leave the old defaults laying around when I do things. If I’m going to change a config file, I do “cp whatever.config whatever.config.DEFAULT” or something similar before I edit the original. I also leave the old default home directory in place (and even leave a symbolic link to it from the new mount point) when mounting a new home directory from another disk.

    2) When you share things between systems, sometimes the problem does not show up on the system that caused it.

    3) I’m documenting “how to fix it” if someone else runs into it.

    4) This may be an example of how dbus and systemd can cause issues of compatibility. In the .xsession-errors file I found:

    (lxpanel:3602): Gtk-CRITICAL **: IA__gtk_misc_set_alignment: assertion 'GTK_IS_MISC (misc)' failed
    ** (zenity:4675): WARNING **: Error retrieving accessibility bus address: org.freedesktop.DBus.Error.ServiceUnknown: The name org.a11y.Bus was not provided by any .service files
    ** (zenity:4675): WARNING **: Can't load fallback CSS resource: Failed to import: The resource at '/org/gnome/adwaita/gtk-fallback.css' does not exist 
    ** (zenity:4675): WARNING **: Can't load fallback CSS resource: Failed to import: The resource at '/org/gnome/adwaita/gtk-fallback.css' does not exist
    (lxpanel:3602): Gtk-CRITICAL **: IA__gtk_misc_set_alignment: assertion 'GTK_IS_MISC (misc)' failed
    XIO:  fatal IO error 11 (Resource temporarily unavailable) on X server ":0"
          after 89814 requests (89810 known processed) with 0 events remaining.
    pcmanfm: Fatal IO error 11 (Resource temporarily unavailable) on X server :0.

    which looks like maybe dbus was having issues. As this Debian release has not moved to systemd, I suspect that “something” was set in the .Xauthority file to say ‘expect systemd stuff’ and when it wasn’t there, this Xauth hung, hanging the whole thing. Maybe… that’s my debug speculation anyway.

    So just “be advised” that moving between non-systemd operating systems and those that are well into it may cause your X-Windows system to hang if you ever regress to the prior system.

    (And why on God’s Earth a program intended to do ‘initialization’ at boot time has its fingers into the X-Windows system is anyone’s guess… part of that whole “all your binaries are belong to us!” behaviour of DBus and systemd and why many ‘old hands’ don’t like it…)

  4. Larry Ledwick says:

    Hmmm the common refrain of “new and improved”
    what could possibly go wrong if we do this?

  5. E.M.Smith says:

    But wait! There’s More!!

    …. I rebooted back into Fedora (after moving the image onto a very small 4 GB mini-SD chip) and noticed it was using about half my network bandwidth in that very nice four-color system monitor… Why?…. I pondered.

    I have 3 terminal windows open, but I’m not using the network….

    Then it shifted as the network usage fell, and CPU pegged….

    Why?…. I pondered. I’m not using the CPU…

    A process named “dnf” was using my machine…

    Cutting to the chase. The dnf command replaces ‘yum’ (but keeps all the command syntax intact… so why change it? )

    “Why? Don’t ask why. Down that path lies insanity and ruin. -E.M.Smith”

    So at every boot up this thing auto-launches an update process. That’s gonna get real old if I’m using a system that is launched from a static binary each time… and I scrub it back to clean after each use… It is also a ‘beacon’ to Red Hat and who knows who all else that this particular system has booted up. Hardly “incognito”.

    (Has Red Hat joined the Prism prison?… maybe… maybe not… have to look into it… but Systemd and DBus would be a great way to root around in what different programs on your computer might be doing… “Back Doors R Us NSA” folks would love that access… but it is only unproven paranoia at this point. You know, the normal expected part of the job of the security guy doing sysadmin kind of worries…)

    No worries, thinks I… I’ll just pull down ‘preferences’ and shut off automatic update… Uh, no…

    Doing a web search turns up dozens of folks trying to turn off auto-update from various bad things, one being an update of a package that killed 20 machines at one guy’s site. (Why I always turn off auto-update… Nobody but ME gets to decide when to put my machines at risk and nobody but ME gets to kill them and take the blame ;-)

    First off, due to the swap from Yum to dnf, there’s several different recipes. Second, it isn’t easy and I’m not sure you can ‘get them all’ in one place. At least one page had a discussion of how non-privileged users could update gnome from a popup request(!) and why can’t the sysadmin shut that off… Just ghastly…

    But here’s what I found:


    This first one talks about yum, that I think no longer applies to the Fedora release on the Pi.

    In general, Fedora 21 does not install updates automatically unless you have it configured that way (using either yum-updatesd or an hourly/daily/weekly cron job that executes something like su -c ‘yum update’, or yum-cron). Check Fedora’s wiki page for auto-update to see whether it’s set up on your system.

    If by “turn off automatic updates” you mean updating the metadata of your installed repos, do this: open the yum config file /etc/yum.conf (sudo gedit /etc/yum.conf) and increase the value for metadata_expire to your desired value. The higher this value, the less frequently Fedora will download package information from the repos you have installed.

    Nevertheless, I do not recommend not keeping the metadata up-to-date!

    What you could do in order to use less bandwidth for your update process is use the parameter called bandwidth in /etc/yum.conf (i.e. bandwidth=20%). Check out man yum.conf or this page for a description of the parameter and other neat stuff you can configure…

    and a second answer:

    In F21 there are two different components:

    since dnf is installed by default there’s a background service that updates the repos metadata automatically (this is usually not that big, I’d say 20-50MB), you can disable that using systemctl, as root:

    systemctl disable dnf-makecache.timer

    IIUC gnome-software which downloads updates in the background then apply them offline, i.e. on next reboot, you can disable that using gsettings:

    gsettings set org.gnome.software download-updates false

    or use dconf-editor to edit that key.

    To update the system you can manually use yum or dnf from a terminal.

    Now doing a systemctl command is not hard, but certainly not what a typical user of a Pi is going to be familiar with. Also note that this will not be persistent over reboots from a base Berryboot image. (It lets you ‘reset’ to the base image if desired to purge your changed data – i.e. make it amnesiac to some degree). So FIRST you have to configure it to not update, THEN save (‘backup’) the image with the changes from Berryboot, THEN remake your chip with that ‘modified’ image… Sheesh.

    All to stop it from constantly sucking down “only” 50 MB just for the metadata about packages. Lord help you if package updates happen too.

    From the first time they moved from being Red Hat to being Fedora, the general direction of Red Hat Linux releases has been antithetical to my basic attitudes as a sysadmin. I’ve used it for decades (from the beginning, actually…) and it just keeps getting more ‘me hostile’ with each release.

    This one runs nice and looks good, but man, making it a locked down system I can trust is getting more and more difficult.

    BTW, automatic software updates sound like a good idea, but they can crash and brick your system by surprise any time they happen, and they can be subjected to ‘man in the middle’ attacks. Also, since Prism means the ‘man in the middle’ might be your software vendor… for any system that you want to have be secure from that mode of attack, ALL automatic updates need to be firmly and undeniably OFF. You have an ongoing limited exposure to ‘zero day’ and recent hacks as you don’t have ‘the latest bug patches’, but as of the Prism Program I think that’s less of an issue for me than Big Brother, given the typical exploits found against Linux. (It isn’t that easy to crack it and I’m not much of a target anyway.)

    At any rate, you can see that the general direction of Red Hat continues toward the Central Authority model with them deciding how you will run your shop…

    Oh Well…

    If I can’t get this sucker locked down in a reasonable chunk of time it will go back on the ‘never mind’ list…

    Oh, and this link about an earlier way to do it on an earlier system using yum has an interesting example of why I (and many many others) do not turn on auto-updates:


    How can I turn off Fedora 20’s update-upon-reboot?

    Two-thirds of my Fedora installations were deeply affected, due to unlucky timing, by the recent bad-SELinux-update debacle or inept recovery from it. I understand and accept that there are no guarantees with Fedora. If I run “yum update”, I’d better have done a backup first and the consequences are mine. But automatic updates are another story. They had better work. Therefore, Fedora + automatic updates is an especially risky combination. I want to be able to opt certain computers out of the random drama: if I don’t press “update”, I want stable state.

    I looked in Settings and I looked in Software, but I did not see options to control the automatic updates.

    And in comments it has the reference to the self-updating-users-problem:

    user settings in gnome itself, i.e. those that apply to update notifications popping up (see https://ask.fedoraproject.org/en/question/38951/why-is-gnome-tool-software-not-updating/ ). In order to disable the plugin do:

    gsettings set org.gnome.settings-daemon.plugins.updates active false

    This is unfortunately (see why at the bottom) a per-user setting. You can play with other options gsettings list-recursively org.gnome.settings-daemon.plugins.updates, but every time you change something there you may need to “restart” gnome-settings-daemon with:

    # this may kill gnome-settings-daemon without it being able to restart itself!
    # better logout and login again (or reboot?)
    su -c "kill -15 `pidof /usr/libexec/gnome-settings-daemon`"

    What I’m actually horrified by is that we seem to still have this problem: http://www.theregister.co.uk/2009/11/19/fedora_12_root_imbroglio/ , i.e. a non-privileged user can apply updates through a pop-up notification. Can someone from the seniors confirm this and act if necessary?

    So while it feels like a smooth and somewhat faster system, I’m getting that same old “not for me” feeling again… I don’t need 50 MB of downloads on every reboot…

  6. gallopingcamel says:

    In 1999 Red Hat had a share price of ~$80 per share that put the value of the company ahead of most North Carolina corporations including Lowe’s and Duke Power.

    Back then I paid $20 for a Red Hat disk. I failed to install Red Hat’s version of Linux because it was not “User Friendly” enough for semi-computer-literate people like this camel. I got stuck on the disk partitioning and never managed to get to a log in prompt.

    Later I tried Knoppix. I was able to log in but my screen resolution was lousy and I could not get rid of green lines at the top and bottom of my screen.

    Ubuntu turned out to be a distro that ordinary mortals could handle so I was able to break away from the tyranny of Bill Gates and his dreadful Windows operating systems in 2007. Compared to Windows, Ubuntu is compact so everything happens faster. It is also far more secure so there is no need to combat viruses, malware etc.

    A few years ago Ubuntu introduced the “Unity” GUI that I found totally obnoxious. It featured large disappearing icons and many of the Ubuntu “Apps” I liked stopped working.

    Fortunately, I found Linux “Mint” which is essentially Ubuntu prior to the Unity interface so now I run Mint.

    Rightly or wrongly I equate Fedora with “Red Hat”. While someone like Chiefio can play tunes on every distro imaginable I am not the least bit tempted to try Fedora. Once bitten, twice shy.

    Once you have a Linux operating system all the basic software needed for everyday business applications is available for free. Email, word processor, spreadsheet, data base, and much much more. All these basic applications work better than they ever will in Windoze. I still use Windoze compatible file formats so I can share my files with Windoze users.

    My problem with Linux is specialized software, such as the following:

    I used Quicken’s “Quick Books” for my sub-chapter “S” corporation. Then Quicken demanded that I upgrade with the threat that my on-line banking feature would stop working. It turned out that they were not bluffing. I don’t respond well to coercion so instead of upgrading I installed “Gnucash” which is a free Linux program.

    While Gnucash is not as slick as Quickbooks nobody is trying to force me to pay again for software I already bought.

    In 2006 I was using TurboTax and was very pleased with it. Unfortunately it did not work well with Linux so I looked for an alternative and found “TaxAct”. IMHO TaxAct is superior to TurboTax and it is free if you don’t have to submit a state return (I live in Florida with no state income tax).

    I use “Photoshop” and don’t like the Linux GIMP photo editor. Fortunately Photoshop works really well in Linux if you install WINE.

    I have not been able to find a decent Linux web site editor “App” but it does not matter as my version of Adobe “Dreamweaver” and associated tools (Fireworks etc.) works well in Mint 17.

    Today, only two of my applications still require a Windoze operating system:

    My OTDR is a uOTR204 from AFS (Advanced Fiber Solutions). While the “Post Processor” runs well in Linux, this OTDR uses a proprietary (Closed Source) USB driver.

    I use the student version of “Quickfield” to solve differential equations. It requires a Windoze OS:

  7. beng135 says:

    EM, yes, automatic updates on Mint 17 Quiana have broken not only Firefox and seamonkey, but Opera browser as well. The browsers crash at random times — unacceptable. Tried all the “solutions” on the Mint forums (a fair number of people there seem to have browser problems) — no joy. Thing is, I ran Firefox for six months on Mint w/o any issue, then after a Mint automatic update, the problems began. I’m exhausted from dealing w/the problem, so pretty much given up on Mint for now & get my linux-fix from puppy linux which uses seamonkey.

  8. E.M.Smith says:


    Sorry to hear of your “tale of woe”. Is there no easy “roll back” method available? Or was this more of a major update rather than an incremental browser update? FWIW I’ve noticed that the ‘systemd’ release of Debian (Jessie) is a little faster, but also seems a little ‘quirkier’ to me. Nothing fatal, just little things. Like: make a terminal window taller, and your text goes to the top with that bar, sitting with half a blank page below it. Until… type something in it, and the text is re-flowed all the way to the bottom, rather abruptly. It ought to reflow as you stretch the window… smoothly.

    Oddly enough, CentOS (that is basically Fedora with more stability and some different settings and default packages) comes with automatic updates disabled by default. Data Center guys don’t like auto-update-break-by-whole-data-center-by-surprise…. It’s not just me ;-)

    That is likely also why I tend to pack-rat a collection of old releases. So when some surprise Aw-Shit shows up, I can roll back to older release levels if needed.

    FWIW, since CentOS also runs a few levels back from Fedora (guess who gets to do the free QA work for CentOS ?…) you could likely install a CentOS release and find happiness again. I’m running 6.4 and 6.5 (last release pre-Systemd) and I’m happy with it. Firefox works fine.


    The green lines are changed with the ‘overscan’ setting at boot time. BerryBoot does a nice job of asking if you have them, or not, and then doing the right thing. I’ve looked at the BerryBoot process and nothing at all prevents porting it to a non-Raspberry Pi box. I’ve gotten fond of it as it does several very nice things (more in a posting ‘later’) and would not mind having it on my Intel box… Then again, there may be enough variations in PC hardware to make a port complex…

    Fedora was just a marketing re-name of the Red Hat releases. They (Red Hat company) decided to use Red Hat for only the company, and divide the software releases into a Paid Support version and a Free-Don’t-Bug-Me-Kid version. To support the idea that one was worth paying for, they changed the names to be different. Red Hat Enterprise Linux (RHEL) was the one you paid money to get (and got to keep the Red Hat name) while the “freebie” was renamed Fedora.

    CentOS is made by a bunch of “folks like me” – data center junkies who work on big iron or buildings full of racks of small iron – who take Fedora and basically turn it into a free RHEL clone, but without the name or the price tag…

    Ubuntu is basically just a Debian with some “look and feel” added (that a lot of folks liked… right up to Unity ;-) and some eye-candy and a bit more Q.A. Debian has been known to occasionally have an update that causes ‘issues’… but you can back out things modestly easily (as long as it didn’t ‘brick’ on you). Ubuntu tends to hold back a bit more on release of new code and issue a more stable base. Most of the time ;-) My complaint about Ubuntu is that they, too, have an “all your system are belong to us!” mentality with it being “chatty” back to their site. I don’t know that there is any risk in it, as ordinary “keep me up to date with patches and such” does cause a certain degree of “I have / send me” chatter. I just don’t like the idea of my system:

    1) Beacon Broadcasting: Here I am, I have THIS operating system with THIS code and you can look up all my exposures in your database. I run THESE applications, so you can know what I do, and I have exactly THAT configuration so you can ‘finger me’ based on those details.

    2) Auto-Updating: Covered above. In a company, you do NOT want an auto update just before year end close of the books, quarter end financial reporting, month end, payroll, peak sales, etc. etc. so why do I want it just any old time at home? ( I have been known to get contracts at any old time that expect me to have a working workstation…)

    3) Giving me loads of ‘eye candy’ and not so much core function. I hate to say it, but it’s true. I’d rather have the entire “build suite” pre-installed (including not just the C compiler, but all the other software devo tools, and add FORTRAN) than have a really cool desktop look with the latest Cool Stuff and animated dancing craplets that get in the way of my doing things.

    4) Suddenly changing my compute paradigm under me. Ubuntu with Unity, Red Hat with systemd (now infecting Debian, through it Ubuntu, Arch, and a few more). Not wanting to rehash the systemd wars, ‘init’ lost and I ‘get that’, but I don’t have to go willingly. It is just conceptually wrong to any long time Unix guy. Violates the core principle of “many small parts each doing one job very well” with one giant binary blob that is too dense to be provably secure and provably stable; with a ‘finger in every pie’ so if it DOES have an issue, the whole world is broken at once and you can’t shut off that part or backlevel it. In a year or two, if ANY security hole is found in systemd, then nearly 100% of those major distributions will all be exposed at the same time. Impossible to fix and close in time to be anything less than a disaster. (It is the Microsoft paradigm of design, IMHO. It is “the registry” in a way. Binary Blob From Hell stuck in the middle of everything.)

    Oh, GallopingCamel: Do note that Ubuntu Mint is on Berryboot for the Raspberry Pi. You might want to give it a whirl and let folks know what you think of it. Re-skinned in a nicer color (I’m not fond of ‘pond scum’ as a color theme…) it’s a decent package for the average person, IMHO. Debian with a lot of the wrinkles taken out. Just not quite for me…

    You might want to look at:

    I count 23 packages in the ‘free’ listing alone, then there is the ‘pay for it’ listing…


    FWIW, I’ve become a bit ‘smitten’ with Berryboot. It’s just a well thought out bit of code that does a few things very very well. I’ve found directions for ‘sucking in’ the operating system of your choice, for example. Want to make you own OS for the R.Pi and not get bogged down in the whole boot process? No problem. It’s about 5 lines of commands…
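    From what I’ve seen of those directions, the conversion is essentially “comment out the SD-card mounts in the image’s fstab, then squash the root filesystem”. Here’s that fstab step simulated on a scratch copy (the squash step needs the real rootfs, so it’s only shown as a comment, and the exact mksquashfs options are from memory of the Berryboot docs, so verify them there):

    ```shell
    # Berryboot mounts the partitions itself, so the image's own /dev/mmcblk0p*
    # fstab entries must be commented out. Simulated here on a scratch fstab:
    work=$(mktemp -d) && mkdir -p "$work/etc"
    printf '%s\n' '/dev/mmcblk0p1 /boot vfat defaults 0 2' \
                  '/dev/mmcblk0p2 / ext4 defaults 0 1' > "$work/etc/fstab"
    sed -i 's|^/dev/mmcblk0|#&|' "$work/etc/fstab"
    grep -c '^#' "$work/etc/fstab"   # both entries now commented: prints 2
    # Then, roughly: sudo mksquashfs rootfs converted.img -comp lzo -e lib/modules
    ```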

    I’m also increasingly fond of it as a platform for a secure computing platform. It stores your base OS as a squashfs file system (rather like I did with /usr /sbin and such in an earlier posting) and layers a writable layer over that. So at the end of any session you can choose to ‘reset’ to where you started. Incognito on the cheap. (Not quite perfect as some bits will still be on the SD card, but unless you have NSA on your back, good enough). You can, by default, encrypt the Linux part of the card. It is trivially easy to do and works well and efficiently.

    The kicker, though, is that you can clone any one OS on the card and you can back it up to other media and restore it again to that card or to another (just hold down the “Add OS” button and the choice pops up). Now “put it together”:

    Install a basic OS like Debian. Do your “build script” to get all the stuff on it you want. Pull out any bits that you don’t like. Customize whatever you want. Now ‘backup’ (really a kind of export) that image. It gives you a choice of ‘base image’ (that first installed bit) or the whole thing including your customized bits. This essentially merges your customized layer with the base layer into one squashfs image that is stored off on USB drive.

    Now, build a new card. Tick the ‘encrypt’ box. Hold down the add OS button, and pull in your custom image. Now your ‘base image’ has all the things you want in it. At this point, you can reboot on it, and go browsing or ‘whatever’. When done, at the next boot click the ‘reset’ to base button and all that history of where you browsed and what you did goes away. Also any virus or other malware along with cookies and all gets nuked too. You are back to your clean installed base. Or you can not do that and pick up where you left off.

    To me, this is a happy halfway house between wide-open remember-everything normal systems, and Tails, which is forcibly amnesiac and a bit of a PITA to use. IFF you are a journalist working with secret sources, or one of those secret sources, by all means use Tails. But for “folks like me” who just need a bit of privacy and some comfort too, this is a nice mix. It isn’t “NSA proof”, but likely good enough to beat the local gendarmes. It will be strongly malware resistant at only a small penalty in comfort, by being ‘amnesiac enough’ and on my schedule of when to forget.

    At this point my intent is to make a Debian base release, with my ‘build script’ bits in it, and see if I can get tor working on it. It’s not quite Tails, but more than I need. Having that as a base OS choice, and right next to it a ‘regular’ build, lets me choose with one click which world to boot. I’m good with that.

  9. beng135 says:

    EM, I removed FF from updated Mint & installed an earlier version (the version on the original Mint installation DVD which had no issues) & it crashed quickly. So pretty much attribute it to Mint updates. And Opera also crashing supports the Mint origins of the issue. I guess I could reinstall the original Mint & update only FF, but that’s getting to be a bit too many issues.

  10. E.M.Smith says:


    As Mint is just a look and feel layer of the basic OS, I’d be more inclined to think it was a general release issue. That could be checked by seeing if FF on Ubuntu non-Mint also has issues.

    But at this point it looks like your interest has likely moved on…

    I didn’t have any issues with FF on Ubuntu Mint on the RaspberryPi (Berryboot edition), but I also didn’t give it a big shakeout. For $60 and a weekend putting it together it might be an easier solution 8-) Then again, maybe not everyone likes to spend a nice All Saints Day hacking computers ;-)

    FWIW, this kind of ‘mysteriously a bunch of things stop working quite right’ is exactly the kind of issues I’d expect to manifest as folks go through the “teething” of moving from init based systems to systemd / message bus based. Part of why I’m not so interested in ‘living that dream’… The fundamental way processes communicate with each other and with the kernel is being replaced. The potential for Aw Shits (IMHO) is just too high. It might just be paranoia on my part (having lots of burned fingers from far less wide reaching ‘improvements’…) or it might be valid “visioning the future”. Time will tell. In a couple of years it will all be ironed out and stabilized in either case. I’m happy to live “back level” until then. But that’s just me :-]

  11. Larry Ledwick says:

    Classic reason why I stay 1 or 2 releases behind in windows versions.
    Why should I be the beta tester? I remember one urgent update in windows broke my network adapter. Had to physically remove the network card, install another crappy network card, then remove it and put back in the Intel Pro / 100+ card which had been working flawlessly for a long time, to get windows to install the proper old driver for the card which had never skipped a beat until windows “improved things”.

    I like the simplicity of the RC scripts myself, and from the sound of it systemd has its fingers in a few too many pies. I will stick with the non-systemd OS releases as well when I finally get the time to build up the new R Pi 2’s.

  12. gallopingcamel says:

    Thanks for explaining what I was perceiving “through a glass darkly”.

    I don’t have the smarts to “boldly go where Chiefio has gone before” so I will stick with Mint with the auto-update turned off. Currently my general purpose browser (Firefox) is behaving itself and so is Chrome that I use for Netflix on my old (Sony Bravia) HDTV that lacks direct access to the Internet.

    I hope the faithful here have noticed how much money you save by getting rid of your cable TV company. I bought two “4K” TVs for $800 each and recouped the expense in less than a year. From here on I am saving $150 per month as my TVs no longer need set top boxes ($7 per month) or DVRs ($19 per month). Instead of paying the cable company for 100 channels that I never watch or $3 per VoD movie, I pay Netflix $9/month and watch what I want when I want.

    For sports lovers it won’t be long before you can buy sports programming direct from the source. That way you will soon be able to cut out the middle man (your cable company) and have direct access to the stuff you really want. For example I want to watch the ACC in general and the UNC Tarheels in particular. I live in Florida where there is nothing but the SEC and the ugly Gators. In a year or two you will be able to follow your favorite team no matter where you live.

    My wife insists on watching the local news and “Good Morning America”. Instead of paying the cable company to provide this kind of programming I bought an antenna from HD Frequency that pulls in about 50 channels. OK, most of those channels are about as interesting as watching paint dry but my wife can now watch the “Main Stream Media” (ABC, CBS, FOX etc) for free.

    While watching local TV broadcasts is free, you are stuck with the advertising. In the run-up to a presidential election the political advertising will be utterly obnoxious, and now that I don’t have a DVR I can no longer “Fast Forward” over the ads. There has to be a solution to this problem.

  13. beng135 says:

    Well, so as not to be a quitter and also waste hard-earned Mint experience, I reformatted the partition & reinstalled from the original DVD. Will not update ANYTHING, including Firefox & see how it pans out. If it crashes, then I’ll call it quits for Mint. Posting from it now, but sometimes the crash would not happen for several hrs of browsing (sometimes instantly).

  14. E.M.Smith says:


    You could always turn a Raspberry Pi into a DVR …



    Personally, I’d likely go the Model 2 route for the extra speed, but hey, buy them both, they’re cheap enough ;-)


    FWIW, I usually accumulate “images” of systems. Starting at “fresh install”, next at “just finished my usual configs”, then at “installed all my other stuff and moved in” and then at any other “update” point. I toss out the very oldest of the “incremental” parts (keeping the “fresh install” and “just finished my usual”) when I run out of disk space on the TB sized USB drive ;-)
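    That kind of rotation is easy to script; a minimal sketch (the “base_” naming convention, the directory, and the keep-count here are all made up for illustration, with a temp directory standing in for the real USB drive):

```shell
# Rotate system images on the backup drive: baselines (prefixed
# "base_" here, an illustrative convention) are kept forever, and
# only the newest KEEP incremental images survive a prune.
BUPDIR=$(mktemp -d)   # stand-in for the real USB drive directory
KEEP=3

# Fake image files with staggered dates: two baselines, five incrementals.
touch -t 201401010000 "$BUPDIR/base_fresh_install"
touch -t 201401020000 "$BUPDIR/base_configured"
for i in 1 2 3 4 5; do
    touch -t "2015010${i}0000" "$BUPDIR/inc$i"
done

# Prune: list newest-first, skip baselines, drop everything past KEEP.
ls -1t "$BUPDIR" | grep -v '^base_' | tail -n +$((KEEP + 1)) |
while read -r old; do
    rm -f "$BUPDIR/$old"
done

ls "$BUPDIR"
```

    Run against a real archive directory you would point BUPDIR at the drive and drop the fake-file setup, of course.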

    Takes the fear out of “Heck, I’m going to just reformat the sucker and try again” 8=}

  15. gallopingcamel says:

    A few years ago I built my own DVR using Hauppauge TV receiver cards installed in spare slots on my PC. It worked but packaged DVRs by Scientific Atlanta, Motorola and others worked better so I abandoned “Home Brew” and paid tribute to my cable company.

    Now I have jumped ship from the “Cable Company” I am once again open to a “Home Brew” solution to killing the advertising on the programs my wife watches. That Raspberry Pi approach you linked above is a fraction of the cost of my earlier Hauppauge solution. I will have to try it.

  16. beng135 says:

    Thanks, EM. Couldn’t sleep, so stress-testing Firefox 28.0 on freshly installed Mint — good so far w/a dozen opened tabs.

    Not sure how to image Linux. Do you image on a thumbdrive? Or can you do it on the linux partition itself, or another HD partition? I have plenty of room.

    Question. Instead of a separate partition, I just typically make a swap file at the partition root. Made it (2 Gigs) & turned it on. To get it going at boot-up, appended /etc/fstab with:

    none /mintswap.swp swap sw 0 0

    Rebooted — “free” command reports it didn’t work? Remembering similar issue from previous Mint installs, changed to:

    /mintswap.swp none swap sw 0 0

    Rebooted, and:

    bee@daisy ~/Desktop $ free
    total used free shared buffers cached
    Mem: 8176448 1672360 6504088 18716 136028 663988
    -/+ buffers/cache: 872344 7304104
    Swap: 2097148 0 2097148


    bee@daisy ~/Desktop $ swapon -s
    Filename Type Size Used Priority
    /mintswap.swp file 2097148 0 -1


    bee@daisy ~/Desktop $ cat /etc/mtab
    /dev/sda6 / ext4 rw,relatime,errors=remount-ro 0 0
    proc /proc proc rw,noexec,nosuid,nodev 0 0
    sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
    none /sys/fs/cgroup tmpfs rw 0 0
    none /sys/fs/fuse/connections fusectl rw 0 0
    none /sys/kernel/debug debugfs rw 0 0
    none /sys/kernel/security securityfs rw 0 0

    etc, etc.

    The swap file doesn’t show on mtab. Anyways, doesn’t figure — mtab shows the “form” for miscellaneous entries to be like my original append (“none” first), not the later append that worked??? Tried the sequence again to confirm, and the same thing happened. Confusing.

  17. beng135 says:

    Oh, EM, you prb’ly know, but a tip I came across — appending


    to /etc/sysctl.conf is much better than the default value of 60. Supposedly 5 is even better for a low-memory box.

  18. E.M.Smith says:


    The best two ways of monitoring swap use, IMHO, are just to have ‘top’ running in a window as it reports swap size and use, and to do a ‘swapon -s’ that reports how much on which partition or file. It doesn’t really matter at all if you use a real partition or a swap file, IMHO. I usually have one of each ;-)

    As “swap” isn’t a mounted file system, it ought not to show up in mtab, IIRC.
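    On the fstab puzzle above: the first field of an /etc/fstab line is always the device (or file), and the second is the mount point; swap has no mount point, so “none” goes in field two. The “none” entries in mtab are the reverse case: pseudo-filesystems that have no device, so “none” sits in field one. Which is why the working line follows the standard order:

```
# /etc/fstab fields: <device/file> <mount point> <type> <options> <dump> <pass>
/mintswap.swp  none  swap  sw  0  0
```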

    Swap is ‘special’ ;-)

    I think I have ‘swappiness’ set to 1 or some such. I’ve not done much with it and don’t really know what it does, other than making it more / less prone to swapping…. but part of me thinks “either you need to swap or you don’t”, so I’m not grokking the idea at some level…
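    If you want to poke at it, swappiness is just a sysctl knob; a quick look (the value 10 below is only an example, not a recommendation):

```shell
# Read the current swappiness: higher values make the kernel more
# eager to push idle pages out to swap.
cat /proc/sys/vm/swappiness

# To change it for this boot only (needs root), something like:
#   sysctl vm.swappiness=10
# and to make it stick across boots, a line in /etc/sysctl.conf:
#   vm.swappiness=10
```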

  19. E.M.Smith says:


    An “image” of a system is just a backup copy at a particular state. Preferably in an easy to recover and fairly fast way that preserves ALL the bits. An “archive copy”. I used the word “image” as many of the other words imply a particular tool to use. There are many…

    In The Beginning, when God (Thompson and Ritchie) created Unix, there was the “dump” command. It had nice options for doing ‘towers of Hanoi’ dump levels, with 0 being the lowest level, or ‘full dump’. It was widely used for incrementals with three levels: 9 (daily incremental), 5 (end of week), 3 (month end), and a level 0 quarterly full dump. Now it isn’t even showing up on my system when I do a “man dump”. Doing an apt-get install dump got and installed it, so it is still around, just no longer installed by default. So you could use dump to get an image.
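    That rotation might look something like this in cron form (a sketch only; the tape device name and the timings are made up, and the flag spelling varies between dump versions):

```
# Illustrative root crontab for a Tower-of-Hanoi style dump rotation.
# /dev/nst0 (tape) and /dev/sda1 are hypothetical; check your dump(8).
0 2 1 1,4,7,10 *   dump -0uf /dev/nst0 /dev/sda1   # quarterly full (level 0)
0 2 28 * *         dump -3uf /dev/nst0 /dev/sda1   # month end (level 3)
0 2 * * 6          dump -5uf /dev/nst0 /dev/sda1   # end of week (level 5)
0 2 * * 1-5        dump -9uf /dev/nst0 /dev/sda1   # daily incremental (level 9)
```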

    Then there was “tar”, the TApe Archiver. With it, you write to the tape drive by default. It has options for ‘append’ for incrementals. So you can do a full dump by default with something like:

    tar cv /

    Though looking at the Debian man page it looks like the ‘default to tape device’ is not stated any more, so it might have ‘issues’ on that ‘whole name space’ thing. There are a few dozen more options (or so it seems…) for just about anything you would like to do. A common one being to send the output to somewhere other than the default /dev/tape (especially now that many folks don’t have a tape drive…) One I commonly use is:

    tar cvf /Output/Filename.tar /DirectoryTo/StartWith

    You can also use this to move a directory and all its contents from one place to another. Decades back, that was hard to do as the ‘cp’ (CoPy) command only copied files. It can also be a tiny bit tricky to keep all the ownership and permissions settings right with copies and moves, so I set those bits in tar as well. From memory it was something like:

    (cd /FromDir; tar cvf - .) | (cd /ToDir; tar xf - )

    but with a couple of more tar settings for ownership and permissions preserving. Oh, and the FromDir and ToDir were usually $1 and $2 so they could be passed in with an argument like:

    cpdir /from/this/dir /to/that/directory

    (I had named my scriptlette ‘cpdir’)
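    A reconstruction of that ‘cpdir’ scriptlette might look like this (not the original script; the p flags preserve permissions, and when run as root GNU tar keeps ownership too):

```shell
# cpdir: copy a directory tree, preserving permissions (and, as root,
# ownership). A reconstruction of the scriptlette described above.
cpdir() {
    from=$1
    to=$2
    mkdir -p "$to"
    ( cd "$from" && tar cpf - . ) | ( cd "$to" && tar xpf - )
}

# Usage:
#   cpdir /from/this/dir /to/that/directory
```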

    Now you might think you could clone a whole system with:

    cpdir / /archive/place

    but that ends up in a recursive copy problem when most of the system is in /archive/place and tar starts to copy it, too…
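    The usual escape from that trap is tar’s exclude option. A small self-contained demo (the directory layout is made up):

```shell
# Avoid the recursive-copy trap: exclude the destination from tar's walk.
d=$(mktemp -d)
mkdir -p "$d/sub" "$d/archive"
echo data > "$d/sub/file"

# Without --exclude, tar would try to archive the growing tarball itself.
tar cf "$d/archive/clone.tar" --exclude=./archive -C "$d" .

tar tf "$d/archive/clone.tar"   # lists ./sub/file but nothing under ./archive
```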

    BTW, now ‘cp’ has been made smarter and you can just do:

    cp -a /fromdir /todir

    “way back when” there was ‘cp -r’ to copy directories, but it didn’t keep timestamps and such quite right…

    With the arrival of the AT&T profit motive we got the gratuitously changed System V (“consider it all the property of AT&T and send us money”…). It came with “cpio”, which stands for “copy in / copy out”. It had slightly more convenient preservation of metadata and a few other more flexible things (like easier-to-do non-tape devices) along with enough flexibility and other options to choke a large horse. I’ve used it to backup and image whole systems too. Oddly, Raspbian has cpio installed by default. (but no ‘dump’… how odd…)

    The “cpio” command handles some things better than tar. IIRC it did the whole crossing file systems control and metadata handling a bit better. It has a zoo of options and I’ve used it to copy whole systems. About 20 years ago…

    The “granddaddy” way to image a disk (and with it, presumably the whole Unix / Linux on it) was the “dd” command (said to stand for “Convert and Copy”, but ‘cc’ was already taken by the C Compiler so they went with dd instead). It has very nice options for things like turning ASCII into EBCDIC and vice versa… along with all kinds of block padding and other things that only really matter down in the land of raw blocks. I’ve often imaged a disk or copied a system with dd. To or from disks. To or from tape. All sorts of ways. You see this one still listed on the ‘how to set up your Raspberry Pi’ pages for a quick ‘back up your SD card’ via something like:

    dd bs=4M if=/dev/sdXN of=/usbBUPdir/systemFromDate

    Where bs is block size of 4 Meg, ‘if’ is input file and ‘of’ is output file. The X is the drive letter and the N is the partition number. To take the whole SD card, one leaves off the N. For example, putting my SD card into a card reader on another Linux machine it might mount it at, say /dev/sdc1 and /dev/sdc3 (not mounting my swap partition at /dev/sdc2). To ‘grab the whole image’ I do:

    dd bs=4M if=/dev/sdc of=/usbBUPdir/dd_Secure_System_date

    This will grab every single block on the card and ‘image’ it onto that destination disk as the file “dd_Secure_System_date”, including all the encrypted blobs, any bit errors, the exact layout of the bytes even if files are heavily fragmented, etc. It will even copy 55 GB of ’empty space’ off of your 64 GB device, and your image WILL be 64 GB in size. By contrast, ‘tar’ and the others can do various kinds of compression, and since they read file by file, a restore lands each file in contiguous space.
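    A common middle ground is to pipe dd through a compressor, so all that zeroed ‘empty space’ squeezes down to almost nothing. A sketch, with a file of zeros standing in for the real /dev/sdX:

```shell
# Image "a device" through gzip so zeroed free space compresses away.
# A 16 MB file of zeros stands in for /dev/sdX here.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=16 2>/dev/null
dd if="$img" bs=4M 2>/dev/null | gzip > "$img.gz"

ls -l "$img" "$img.gz"   # the .gz is a tiny fraction of the raw size

# Restoring is the reverse pipe (to a real device, as root):
#   gunzip -c image.gz | dd of=/dev/sdX bs=4M
```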

    There are a few other tools too, and Berryboot brings its own with it. It has a ‘backup’ button, and then you can “install” that image again in the future if desired.

    Which one to use?

    Depends a lot on what you are trying to do, which particular options matter to you, will you be moving between systems, are things very large but sparse, or just what all.

    Traditionally a “tarball” is used to move things around, often a compressed tarball, so ending in .tar (or .tgz / .tar.gz when compressed). Least space, lets you take only what you want, restore is automatically defragmented, and everybody has it.

    The use of cpio was mostly on System V type systems, but has spread. I find it less variable between systems than some early tar varieties. But often more complicated to set up. The “find” command has a built in ‘cpio’ option so you can easily use a find command to pick out any particular files you like based on just about any attribute and only THEY go to the output. You can spend weeks learning all the choices of the ‘find / cpio’ combinations… I rarely use it, but when needed it is nice to have.
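    For instance, grabbing only recently changed config files (a self-contained sketch; the names are made up):

```shell
# Use find to pick the files and cpio to archive them: here, only
# *.conf files modified in the last week. Names are illustrative.
d=$(mktemp -d)
echo old > "$d/old.conf"
echo new > "$d/new.conf"
touch -t 201501010000 "$d/old.conf"   # push one mtime far into the past

( cd "$d" && find . -name '*.conf' -mtime -7 | cpio -o ) \
    > "$d/recent.cpio" 2>/dev/null

cpio -it < "$d/recent.cpio" 2>/dev/null   # lists only ./new.conf
```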

    I often use ‘dd’ as it always just works. Wastes space. Doesn’t give fine control. Fast, grabs it all, and doesn’t complain much. Restores EXACTLY as was copied… Good for whole disk right back into THE SAME DISK TYPE, not so good for portability across divergence…

    These days I find myself using cp -a and rsync more.

    Which brings up rsync…

    This was designed to copy over the internet and handle stops, intermittent connections, timeouts, and restarts much better. But you can use it to duplicate a directory locally.

    rsync -avP /From/Dir /To/Dir

    It, too, has a few dozen options and will take about a week to learn them all (or at least most of them ;-)

    So I’ll use tar to make a tarball of some important directory or the /etc and /home/dir parts of a system where the rest is pretty generic. And I’ll use ‘dd’ to snag a chip image with full directory structure and partition structure preserved (including the FAT16, FAT32, ext4, swap etc. layout) and restore it exactly. I’ll use rsync when I want a readable copy at the other end and want it checked as each file is copied. I’m slowly learning to remember to do ‘cp -a’ instead of my old ‘cpdir’ to just clone a subdirectory. And I rarely run ‘dump’ these days but will likely set it up just for nostalgia’s sake ;-) It tends to be focused on disk partitions and you are supposed to assure they are idle when dumping them. Kind of a PITA really unless you go to ‘single user mode’ and stop all the daemons.

    Sorry you asked how to ‘image’ a system? 8-)

    BTW, I think that likely is not an exhaustive list. Systems Admins have been working for 40 years+ on better ways to copy exactly this or that thing (files, directories, partitions, disks, devices, whole systems) with all sorts of filtering for only some files not others and all sorts of partition spanning, or not and all sorts of conversions, or not, and all sorts of metadata preservation, or not, and portability, or not, and… So what all has been invented beyond my preferred set is hard to say for me. For example, I made cpdir before cp -a came along. I also have a ‘crush’ command that does a tar | compress > file.tarx that I made long before folks taught tar how to do inline compression.

    Since thousands of sysadmins were doing “tar | compress” someone just added compress as an option to tar. Now you can do it either way. Multiply that behaviour by a few dozen…
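    Both spellings still work and give equivalent results; a quick demo:

```shell
# The old pipeline and tar's built-in -z flag produce equivalent archives.
d=$(mktemp -d)
echo hello > "$d/file"

tar cf - -C "$d" file | gzip > "$d/piped.tgz"   # tar | compress, spelled out
tar czf "$d/builtin.tgz" -C "$d" file           # same thing via the -z flag

tar tzf "$d/piped.tgz"     # both list exactly: file
tar tzf "$d/builtin.tgz"
```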

    As a final hint:

    Now, I mostly use dd to image the SD cards, tar | compress to grab selected file systems and sub-directories, and rsync to copy between systems or between device types. I rarely use dump, cpio, or cp -a as the first two are a pain to get right and the last one is ‘new’ for me.

    And remember “The man pages are your friend”…

  20. beng135 says:

    Thanks, EM. Sorry about turning this into a Linux discussion forum, but you’re so darn helpful.

    I edgamacated myself (and came across the dd command you illustrate — the different variations presented scared me a bit tho), but Mint had the tool I wanted — the Gnome Disk Manager. So I booted to the installation DVD to be safe, and used that program to image the 52 Gig mint partition to another, “share” partition made for miscellaneous uses (took ~30 minutes). The image file includes a separate bootsqm.dat file along w/it. So hopefully I can restore that to the mint partition if necessary. ‘Course, HD failure would nix that, but WD drives are very dependable. Prb’ly need to buy a few more thumbdrives tho (the drive manager can image to any big-enough attached drive or partition).

    Stressing Firefox a lot — no problem yet.

  21. Larry Ledwick says:

    Great summary of copy image options!

    At work under Centos most of our backups are now written out to tape using gtar (GNU tar), mostly the same as traditional tar, and we just recently started using rsync to grab an image on a critical server and sync a backup server to the active server for our most important data. This gives us a manual failover if the primary system takes a crap, while avoiding the headaches of having a high availability cluster and hot failover server.

    We just switch port numbers and the failover server becomes the active database and takes over the load with the same info the primary had at the time of the last rsync. This is a database which does not change much, being static data between updates.

    We still have some old backup tapes which were written out in UFS Dump but most everything has moved to gtar now.

    The big challenge with backups is technology change, like changing tape formats and new tape loaders and such. Sometimes the new equipment or tape format forces you to overhaul all your backup scripts. On small personal systems the same sort of transition has been going on for a while as spinning disks changed from parallel to serial interfaces, and now we have SSDs (solid state disks) and of course the several versions of USB devices and SD cards.

    I like your idea of taking multiple images at various stages of development. Much easier to just say screw it and format and drop back a generation if you do something that totally mangles things.

    It looks like you can use dd for windows to do the same thing on a windows system.
    Guess I need to test that one of these days.


  22. E.M.Smith says:


    Hey, it’s a Linux thread, so no worries!

    Many linux / unix commands can be scary, especially run as root. Lord help you if you swap i for o in that “if” and “of” set… The standard rule is to type the command, then sit on your hands while you read it, twice, and only then, pull a hand out to hit return…


    Yeah, “time moves on” and the old backups don’t… I have some nice 9 Track Tape with some backups on it from about 20 years ago. Probably no longer readable even if I could find a 9 Track tape drive to use with them… IIRC, 10 years was the max to be expected before the oxide flaked and the magnetic flux faded… Not all that important a tape really ( one of them is the first 1/2 million primes that took weeks to calculate then… and now is a day, maybe… and the program to calculate them, that is better re-written since I think the language used may not still exist…)

    Most of my stuff has been in ‘rolling archives’ since those tapes as I learned from them… but a lot of it really ought to be tossed in the trash too. Does anyone really want my net-news archive from 1982?… or the config for it on an AIX machine? or…

Comments are closed.