Distcc node build on Devuan 1.0

It took a couple of days but I’ve now completely rebuilt the distributed compiler cluster (“build monster”) on the Devuan 1.0 release.

Why this matters:

1) It is now running on a real, official, post-Q.A. Devuan release, not a slapped-together development version or an “uplifted” ‘whatever’ with hope that all the moving parts align right.

2) It is all on a 4.x kernel now. So no issues from the incompatible “enhancement” of ext4 that caused older journals to be “improved” into a format that no longer worked on older releases. That wasn’t a Devuan issue. It first showed up in my shop on Armbian when I needed to run a 4.x kernel to get the Odroid XU4 to work right (support for those newer cores and such). Plug in a disk there, and then it would not work back on the original machine. So, OK, I did an out-of-cycle sudden upgrade of almost everything… Sigh. (Oddly, my big NFS Network File System server with 12 TB of disk on it is the one system still back on a 3.x kernel. Then again, its disks never move between systems as they are NFS shared. That’s the Orange Pi One. “Someday” I need to both do the “uplift” to Devuan on it and advance the kernel; but for now it’s fine and I’m not touching it.)

3) It seems faster and more professional. I’ve only been using it a couple of days, but things just seem faster. Firefox needed fewer settings changed, as more of them were “as I like them” out of the box. (Only had to shut off auto-update of search engines and change the search engine to DuckDuckGo. Spell checker already installed. Mostly security-oriented settings by default.) The htop (performance monitor) CPU bars are more often moving in sync. Where before there was a tendency to “peg one core”, it looks like now the work is multithreaded and spread over more cores. I’m typing this on the Pi M3 and it is quite acceptable. The slightly annoying lag from before is not showing up nearly as much… I think that’s the 1-core vs 4-cores effect.

4) I was able to more precisely document the build steps. The prior distcc build was when I was trying to sort out what release of OS to use (as SystemD was trying to take over the world) and things were more muddled. Now I’ve got a nicely written log of steps and it worked first time out the gate. This posting will have my log of actions in it. (I didn’t write a build script as once I had the first chip made, I just cloned it and changed the hostname / IP and didn’t need to do more builds).

Sidebar on SD Cards

I went out to buy 3 x 8 GB mini-SD cards for the cluster. Turns out I could not find any. Not at Best Buy nor Walmart nor Walgreens nor Office Max. Everyone has moved up to 16 GB minimum on the rack. When doing whole-SD-card backups, the backup consumes the full card size regardless of how many bits are actually used.

I’d planned to make the thing on smaller chips (as there isn’t much space used, really; about 3 GB). Instead I’ve got 16 GB mini-SD cards in them. Oh well. At some point I’m going to see if I can still order 4 or 8 GB Ultra-speed mini-SD cards from Amazon. Looks like I need to lay in an inventory before their manufacture is totally halted.

Testing Oddity

In testing the cluster, I had an “issue” with it seeming to not work. I had made a test case of 8 copies of “Hello world” written in C, and a Makefile to launch them all in one batch. I’d launch it, and in about a second I’d have all the compiles done and the executables present. I had monitor windows up on all three nodes (12 cores) and nothing showed up. What the?…

So I turned off “localhost” as a destination in the .bashrc settings. Still nothing.

Eventually I turned on logging and saw it was completing per the logs, and it looked like things were in fact being distributed out to the cluster nodes.

Finally I changed it so ALL 8 compiles were sent to just ONE node. Did the make, thought I saw a flicker but it was gone. I had been looking where I was typing. Stared directly at the monitor bars, and launched it again touch typing ( I had a scriptlet that removed the executables and .o files then did the make). There was about 1 second of the 4 usage bars of the 4 cores going to about the 1/2 way used point, then back to near zero. That was it. 8 compiles in about one second on just one board. No wonder I didn’t see anything using all 12 cores! We’re talking about 1/4 to 1/3 of a second of load then! And shorter than the refresh time of the monitor.

I think this thing is going to really rip on bigger jobs.

The Build Steps

Unlike my usual “build script” postings, this one is just a narrative. I’ll eventually put this into a script, but not right now.

I used the Raspberry Pi M2 image on both the M2 boards and the M3 board. All are now running identical code. The image is downloaded from here:


Then unpacked into an .img file with unxz. That image file is shoved onto a mini-SD card (which I mounted into a USB adapter) using the Pi via a dd command. Check VERY CAREFULLY that you get the right device letter when it is mounted, then unmount it and proceed:

dd bs=10M conv=fsync of=/dev/sdx if=devuan_jessie_1.0.0_armhf_raspi2.img

where X in /dev/sdx is the correct letter for your SD card device and if= whatever image you are using.

Then you get to wait a few minutes for the card to write.

The Devuan image does NOT auto-size to use the whole card. So at this point you have a 1.8 GB or so system on your 16 GB card and attempts to add more software to it will run out of space…

So I used ‘gparted’ to just resize the ext4 partition to use the rest of the card. It’s the easiest way to do it, fully graphical, and found under “preferences” on Debian / Devuan. (IF it isn’t installed, just do an “apt-get install gparted” on your build system). Now you can proceed to booting and configuring.
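If you’d rather do the grow step from the command line, resize2fs does the same filesystem-expansion gparted drives under the hood. A sketch of just the filesystem-grow mechanics, demonstrated on a file-backed image so it can be tried without root or a real card (sizes are arbitrary; on a real card you would first enlarge the partition with fdisk or parted, then run resize2fs on the partition device):

```shell
# Demo of the filesystem-grow step on a plain file (no root needed).
PATH="$PATH:/sbin:/usr/sbin"     # mkfs/e2fsck often live in sbin
img=$(mktemp)
truncate -s 64M "$img"
mkfs.ext4 -q -F "$img"           # small ext4 fs, like the 1.8 GB image
truncate -s 128M "$img"          # "the card" just got bigger
e2fsck -f -p "$img" >/dev/null   # resize2fs wants a clean, checked fs
resize2fs "$img"                 # grow the fs to fill the new size
```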

I swapped the card into the Pi M3 (taking out my “build builder” card) and booted. First up, change the root password. Second, I did an “adduser” to add my usual account for me. Then the usual bring it up to date via:

apt-get update
apt-get upgrade

and wait a while… When done, I did a reboot even though it likely isn’t needed.

In one trial, I tried adding xorg and lxde first (for a graphical interface) and for unknown reasons that just gave me a blinking cursor in the upper left corner. Doing it after the other stuff was installed worked. I’m not sure why. In any case, doing “apt-get install” is line oriented anyway…

So, log in as root. Do a series of “apt-get install”, then reboot, then do “apt-get install lxde” and all is good. The install of lxde looks to do the xorg bits as a dependency, so no need to explicitly do xorg.

What I installed, more or less in order of installation:

apt-get install scrot
apt-get install dnsutils
apt-get install build-essential
apt-get install ntfs-3g
apt-get install parallel
apt-get install distcc
apt-get install gfortran
apt-get install distcc-pump
apt-get install ccache
apt-get install dmucs
apt-get install htop
apt-get install gparted
apt-get install unionfs-fuse
apt-get install squashfs-tools
apt-get install hfsplus hfsutils

Now distcc-pump isn’t being used by me yet, and may not be needed for what I’m doing anyway. I mostly just want to play with it to get familiar. It might help when doing full Linux system builds, but isn’t required. I’m using gfortran for various climate models, but if you are not doing FORTRAN, it’s not for you. Similarly, ccache speeds up long large C system compilations by caching parts of the build for reuse. Likely overkill for my use, but we’ll see.

Then “htop” is a great little system monitor with a process table listing and CPU load bars for each core. (It shows up under the ‘System Tools’ menu.) I can’t live without ‘gparted’ to adjust disk partitions, set file system labels, etc.

Then we get to file systems. I’m intending to use squashfs file systems to protect things like /usr from being changed. (We saw that in an earlier posting) So it, and unionfs, let you do interesting things with compressed and local file systems in a file. The HFS file system is what’s on the Mac. If you don’t intend to mount Mac disks on your system, it’s not needed. I’ve got a few Macs in the house (for the family members who don’t ‘do’ Linux…) so I sometimes have a Mac formatted disk to use.

At this point, I did another reboot. Again, not likely needed, but having been bit once with the blinky cursor after installing X, I didn’t want to be reset to zero again. I also used dd to make a backup of the chip into my “build builder” system so I could resume at this point without redoing it all.

dd bs=10M conv=fsync if=/dev/sdx of=Distcc_Devuan_1.0_NoX_12Nov2017

The “conv=fsync” likely is overkill, but forcing a disk sync is a good thing anyway ;-) Again, be sure you get the right drive letter for sdX and the “if” and “of” are pointing the right way! Output File (of) can be whatever you like. I name things with the use, the OS, release and date.
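A raw dd image captures every block, used or not, so a mostly-empty card compresses dramatically. As an optional extra step (not part of my workflow above), xz can shrink the backup the same way the Devuan images themselves are shipped. Demonstrated here on a throwaway file full of zeros rather than a real card image:

```shell
# Compress a (dummy) raw image after the dd backup. xz -T0 uses all
# cores; the .xz replaces the original file.
backup=$(mktemp -d)/Distcc_Devuan_demo.img
dd if=/dev/zero of="$backup" bs=1M count=4 2>/dev/null
xz -T0 "$backup"                 # leaves $backup.xz, removes the original
ls -l "$backup.xz"
```

Restoring is the reverse: unxz the file, then dd it back to a card.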

Ok, backup made, reboot with that chip.

At this point I did some more apt-get install steps, starting with LXDE which is my desktop of choice:

apt-get install lxde
apt-get install firefox-esr
apt-get install gimp
apt-get install libreoffice

Not one of those is really needed on the distcc headless nodes. I tend to install them everywhere so that IFF I need to put a monitor and keyboard on one of them for some debugging, I’ve got all the usual tools and bits installed. It burns a bit of storage on the card, and running a graphical UI uses a bit more memory, but there seems to be plenty anyway.

Configuration Bits

There are some things that need configuring. First up, edit /etc/hostname. I used headless1, headless2, and headend for my three. Use whatever names you like. Then add them all in /etc/hosts. Here’s an example from the chip I’m using at the moment, with x.x.x.x standing in for the actual addresses. (I’m trying out a Sony mini-SD Class 10 to see if it is noticeably slower than the Sandisk Ultra ones. It is a bit, but still fine.)

127.0.0.1        sonydevuan SonyDevuan devuan localhost
::1              localhost ip6-localhost ip6-loopback
fe00::0          ip6-localnet
ff00::0          ip6-mcastprefix
ff02::1          ip6-allnodes
ff02::2          ip6-allrouters
x.x.x.x          Opi opi orangepi
x.x.x.x          H0 h0 Headend headend
x.x.x.x          H1 h1 Headless1 headless1
x.x.x.x          H2 h2 Headless2 headless2
x.x.x.x          C1 c1 OdroidC1 odroidc1

I’m also going to install distcc et al. onto the Odroid C1 and the Orange Pi as similar v7 core cluster nodes. As they are Armbian uplifted to Devuan, there might be some compatibility issues, but it ought to work. “We’ll see”… someday when I have enough distcc load to care ;-) Also note that I often trip over capitalization, so I’ve put the names in all the likely capitalization combinations ;-)

Links to distcc are put in /usr/local/bin on this build, so:

ln -s /usr/bin/distcc /usr/local/bin/gcc
ln -s /usr/bin/distcc /usr/local/bin/g++ 
ln -s /usr/bin/distcc /usr/local/bin/cc 
ln -s /usr/bin/distcc /usr/local/bin/c++
ln -s /usr/bin/distcc /usr/local/bin/cpp

Since /usr/local/bin already is ahead of /usr/bin in the default search path, no changes to the PATH variable were needed.
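A quick way to confirm the masquerade is taking effect is to check what the PATH resolves. Here’s a sketch using a scratch directory standing in for /usr/local/bin, so it runs anywhere without root; on a real node, `command -v gcc` should print /usr/local/bin/gcc:

```shell
# Simulate the masquerade: a scratch dir stands in for /usr/local/bin.
fakebin=$(mktemp -d)
printf '#!/bin/sh\necho "distcc wrapper called"\n' > "$fakebin/gcc"
chmod +x "$fakebin/gcc"
PATH="$fakebin:$PATH"
hash -r                      # forget any cached lookup of gcc
command -v gcc               # resolves to the wrapper, not /usr/bin/gcc
gcc                          # runs the wrapper
```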

Then edit the .bashrc file in your home directory and add:

export DISTCC_HOSTS="localhost/4"

But use your own IP numbers; the number after the “/” is how many jobs to send to that particular board.
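For illustration, a hypothetical full line with placeholder private addresses (these are not my real numbers; substitute your own node IPs):

```shell
# localhost gets 4 jobs; each remote board gets 4 as well.
# The addresses below are placeholders, not a real numbering scheme.
export DISTCC_HOSTS="localhost/4 192.168.0.11/4 192.168.0.12/4"
echo "$DISTCC_HOSTS"
```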

On each of the headless nodes, I added “distccd” to the /etc/rc.local file.

#!/bin/sh -e
# rc.local
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
# In order to enable or disable this script just change the execution
# bits.

## regen ssh keys on first boot
[ -f /etc/ssh/ssh_host_rsa_key.pub ] || dpkg-reconfigure openssh-server

distccd -a x.x.x.x/24 --daemon

exit 0

This launches a set of distccd daemons waiting for work from the given network addresses.

Oh, and as root I did a “service distcc start” to make it go on the headend machine. (I did it on all of them, but it likely isn’t needed on the headless nodes as they have daemons running.)

When I first ran it with the test case, I got an error message that “//.distcc” could not be created due to a permissions problem. I thought .distcc was supposed to be created in your home directory (so there ought not to be permissions issues), but on the off chance it was /.distcc I went ahead and did a mkdir /.distcc and chowned it to distccd. I also rebooted so all changes would be in effect. One or the other thing fixed the error. “Someday” I’ll remove the /.distcc directory and see if that was it or not… As it now contains a directory named “state”, I think it was needed ;-) There is likely a configuration setting to put this directory somewhere more reasonable (thus the two // that look like it wanted an ENV variable with a path name in the middle), so likely some configuration step I skipped.

In Conclusion

It is all working. I’m really liking the Devuan 1.0 official release. It just has that quality feel of kindred spirits at the controls of the build process. It is compiled with various settings that make it work well, and with due attention to things like security settings even in Firefox (no default notification of crashes, for example). I suspect they took the time to assure more things are multi-core ready too.

I’ve got both a 64 bit arm64 and a 32 bit armhf build on chips for the Pi M3. The 32 bit runs great and I’ve not found any issues yet. The 64 bit seems a bit more sluggish. Not a surprise, really. The Pi has poor I/O structure and speed and now you are loading words that are 2 x as long for each instruction. It looks like they used hybrid (both arm64 and armhf instructions available) so it may just be the FireFox that was sluggish on 64 bit. More testing needed ;-) thus my build on this Sony chip to see how 32 bit does and on slower chips.

At this point, with the build cluster AND my major desktops all on Devuan 1.0, I’m happy with it and there’s no going back. The Odroid XU4 is still on Armbian with an “uplift” to Devuan (and has a couple of minor issues like the cursor not showing up until you open your first window… which is a challenge without a cursor to show you where to click…) so I’m waiting for a real XU4 build. They have a Devuan build for “xu” but I think it is only the 3, despite claims to the contrary. They use different chip sets and the octo core is ‘odd’.

IF I get really ambitious and try to build a Devuan from scratch, I may try it targeted at the XU4. But maybe after I’ve been successful with a Pi build ;-)

So, with that, hopefully I’ve not left out any steps nor gotten anything backwards in the remembering. I’ve not ‘retested’ this process via a scripted build so it could have errors. That is, it was a build once and remember, not a QA’d scripted rebuild, so YMMV…



About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present "hot buttons" are the mythology of Climate Change and ancient metrology; but things change...
This entry was posted in Tech Bits.

12 Responses to Distcc node build on Devuan 1.0

  1. E.M.Smith says:

    Ah, forgot to state to turn on distcc in /etc/default/distcc

    # should distcc be started on boot?
    #
    # STARTDISTCC="true"
    STARTDISTCC="false"

    Uncomment the “true” and comment out the “false”…

    Also set what network is allowed and what IP the “listener” uses:

    # Networks have to be in CIDR notation, f.e. 192.168.1.0/24
    # Hosts are represented by a single IP Adress 192.168.1.1


    # Which interface should distccd listen on?
    # You can specify a single interface, identified by its IP address, here.
    # LISTENER="127.0.0.1"


    # You can specify a (positive) nice level for the distcc process here
    # NICE="10"


    # You can specify a maximum number of jobs, the server will accept concurrently
    # JOBS=""


    Probably also some way to set the location of .distcc here as well, but no example so some man page diving needed…

  2. E.M.Smith says:

    Ah, one more mystery solved. From the man page:

    Per-user configuration directory to store lock files and state files.
    By default ~/.distcc/ is used.

    So something in the “~” substitution is not right… a “bug”…in the default.

    But I can fix it in the ~/.bashrc file… with:

    export DISTCC_DIR="$HOME/.distcc"

  3. Steven Fraser says:

    @EM: Do you use compression on the SD card handling routines? For things holding executable binaries, there is a lot of wasted space for the instructions and the operands. You could benefit (if you care) from some object-format non-lossy compression and inflation at the appropriate times.

    Years ago (at SWTPC, actually) I wrote a tape restore program that would put the ‘holes’ back into a db file. Then, and now (I think) it was possible under Unix to write a file out to sequential access tape, but if there were ’empty’ blocks in the file (or gaps if you will) they would get written out as all nulls. My approach was to inspect the blocks at the bit level after read, and if all-null, the restore program would simply lseek out to where the first non-null file block would be, and begin writing there.

    With today’s tools, such a hamfisted approach as I used may not be necessary. 16 GB is not so much memory that a compression could not be re-expanded ‘on the fly’ from a small source to a large result, especially if you consider handling the inodes a bit manually. Yes, it would be slower, but a nice byproduct would be that your archives would be nearly incomprehensible, except to the highly-initiated, very talented geek.

  4. E.M.Smith says:


    I’ve run from a compressed SD image. It’s a bit slower… Since I have more SD storage than I need, I don’t care to be efficient with it, particularly.

    IIRC, though, using squashfs (which is compressed) was a bit faster. Probably due to it being read only so no sloth from rewriting SD card blocks while decompressed bits are cached effectively. Putting /usr on squashfs was nice, when I did that test.

    The other thing I’ve tried, that was very nice, is taking a hard disk and putting various partitions on it in the old school way. So /usr /var /home /etc swap and such. Just copy the SD card filesystems to their allotted place and then mount the disk over the SD filesystem point. (So mount disk:/var at /var and it hides the SD original.) Just by commenting out the mount in /etc/fstab, you go back to the SD original…
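    The fstab side of that looks something like this (device names and mount points here are examples for illustration, not my actual layout):

```shell
# Hypothetical /etc/fstab entries for the "mount over" trick.
# Each disk partition mounts on top of the SD original, hiding it:
#   /dev/sda2  /usr   ext4  defaults,noatime  0  2
#   /dev/sda3  /var   ext4  defaults,noatime  0  2
#   /dev/sda4  /home  ext4  defaults,noatime  0  2
# Comment a line out and reboot to fall back to the SD copy.
```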

    Significant speed-up to the Pi, updates hit the disk not the SD partitions, easy recovery from a bad change, and longer SD life due to far fewer writes. It is how my Daily Driver has been for a couple of years. Well worth it.

    Now the interesting question is would compression on the real disk be a winner? I think it would. The Pi M3 has excess computes for the IO speed, so decompression ought to be nearly free. Writes being bytes not SD Mammoth Block would also be fast. I’ve never tested it though.

    Would be an Interesting test case…
    SD vs compressed SD vs disk vs compressed disk vs squashfs on both…

  5. jim2 says:

    How do you get away with assigning the same IP to different machines? H0 h0 Headend headend H1 h1 Headless1 headless

  6. Steven Fraser says:

    @EM: I really like the idea of mount/hiding of two different medium types: Slower SD for pretty much static files, loaded into faster filesystems that then are re-addressed via the mount cycle. As you say, makes the original media invisible as far as the block filesystem goes, but an SD would still be a character device visible in the /dev (or wherever) dir. I guess the ultimate would be a physical power-down of the SD group. You really only need them accessible when doing a boot, or when you want to update the code on them. This is getting scary close to bootstrapping from tape :-)

    If storage capacity is not the goal, but rather speed, then I would skip the SD compression investment and spend on disc striping that gets more disk heads (spelled that carefully) active at the same time. If there are IO bottlenecks that are not in the controller architecture, there could be speed advances there, particularly during the read. Some experiments with performance tools should show you which controllers and drives are throttling the overall performance. I’ve seen something as simple as RAID-1 (disk mirroring with 2 devices) allow the system to reduce seek time advantageously.

    It may also be worth time to investigate how the kernel/driver/controller interaction spreads the data out over the volume, what I would call the allocation strategy. Years ago, on very slow systems, we would spend time prioritizing the location of often-used files into the filesystem, mostly by controlling the sequence of the data set loads on the drive. Stuff that needed the fastest access would be put on the disk where the drive would spend less time seeking and in rotational delay. Stuff accessed seldom, or which did not need to be speedy, we would put on last.

    The down side of such a strategy in your case would mean something like some extra steps in boot, with a fresh mkfs on a spinning disk whenever you wanted to refresh it and re-optimize it.

    Cool setup you have.

  7. E.M.Smith says:


    I hand edited the pasted copy to not be my real IP numbers and didn’t catch that I used the same thing twice… It’s a bad idea to publish your internal numbering scheme… so I don’t.

    I’ll update the posting to have different fictional numbers ;-)


    Well, the end game of all that is a squashfs / FUSE file system in memory ala Knoppix and similar. Damn fast and no SD card ‘wear’.

    For the Pi (and other cheap boards) the USB 3.0 disks and flash drives are dramatically faster than the USB 2.0 interface, which on the Pi is further shared with the Ethernet… That is substantially always your bottleneck unless you are doing writes to a card, then the SD large block write time kills you. The rest of the typical performance issues pretty much never get a chance to show up. Things like raid groups and mirrors just don’t matter on the Pi as you can’t get enough I/O speed through the USB to matter. Need something with a SATA or similar real disk port on it (CubieTruck) for those games to matter.

    From what I’ve seen, the best and easiest gain is to just copy to real disk and mount over.

    Now that said, you can also just directly install to disk. Then you only use the boot partition from the SD card to load it up. I did that on a few of the other systems. (When testing Slackware, LFS, Gentoo, FreeBSD, etc.). You edit the /boot/cmdline.txt IIRC.

    dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=/dev/mmcblk0p7 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait

    So change that root=/dev/mmcblk0p7 to root=/dev/sda1 (or wherever you put the real root partition) and away you go. SD card only used to boot up, then idle. Works very nicely, just like a real *Nix system ;-)

    In theory, you could use a 2 GB card and have oodles of space left over… if you could find any for sale anymore ;-)

    I’ve intended to get around to making a ram based file system like in Knoppix and playing with that, but just haven’t had the time. It’s not particularly hard, just takes some care and different tricks. Essentially, look at how the “Live CD” folks work and copy that. Load it all at boot time into ramdisk and be done.

    Though for me, just having an Ultra-class SD card seems fast enough, and putting key file systems onto real disk partitions is more than enough on the main system. It’s also conceptually simpler than editing cmdline.txt files and installing to USB directly… and you can still use the SD card stand-alone with a quick edit of the /etc/fstab entries.

  8. jim2 says:

    Super Talent DRAM Disk 16GB USB 3.0 Flash Drive Review (they have 8 gig also)


  9. Steven Fraser says:

    @EM: Looks good to me. Thx for your insights.

  10. CoRev says:

    EM have you been following this new version of the XU4s https://ameridroid.com/products/odroid-mc1, an already built cluster.

  11. E.M.Smith says:


    I’d not seen that one. Interesting stack.

    This article talks about it:

    And their inspiration being a cluster they built for testing…

    The staff at Hardkernel built a big cluster computing setup for testing the stability of Kernel 4.9. The cluster consisted of 200 ODROID-XU4’s (i.e, with a net total of 1600 CPU cores and 400GB of RAM), as shown in Figure 1.

    Golly! 1600 CPUs is a lot of umph!

    Unfortunately, lots of noise sets my tinnitus to making louder sounds, so I really REALLY hate fans… and the MC1 has a big ass fan on the back of the stack. It also looks like they really hobbled the I/O abilities of the board. Fine if it will only EVER be a compute engine, not so fine if you expect to move a lot of data to / from disk and / or want to display things:

    The ODROID-MC1 circuit board is a trimmed version of that used in the ODROID-HC1 (Home Cloud One) Network Attached Storage (NAS), with the SATA adapter removed. The ODROID-HC1 circuit board, in turn, is a redesigned ODROID-XU4 with the HDMI connector, eMMC connector, USB 3.0 hub, power button and, slide switch removed.

    So no SATA (only on the HC1) and no USB 3.0 (granted, it isn’t quite working right on the XU4 at the moment, but a few more SW releases it ought to stabilize). Then, no eMMC so you can’t speed up the OS with it and no HDMI connector so forget debugging with a monitor OR using it to drive a “video wall”.

    It is a very sad truth, but still a truth, that most folks are fixated on CPU cores and speeds and ignore I/O. Folks in HPC High Performance Computing know you must have a balance between them, and spend as much time looking at I/O speeds, feeds and fabrics as they do at CPU speed (and memory size / speed…). You see this in pretty much every SBC out there. Lots of cores and MHz, not enough memory and lousy disk / network I/O. (Some are now getting SATA or similar and USB 3.0 on a dedicated controller with some GigE networking; but it’s still rare and expensive in the land of SBCs. It’s the place folks love to cut corners…)

    For me, as soon as Devuan on the XU4 is clean and stable (and preferably direct from Devuan) I’d consider making a stack of the fanless models. But most likely I’d choose a different model. Why?

    Those 8 cores sure look attractive, but how do you keep them fed and full of memory? IF you have 8 jobs running, they are sharing that 2 GB memory. That’s 256 MB / core… Not exactly big. Then you have a GigE, but again split between 8, so 128 Mb / core really. I bought it thinking it was going to run BIG/little (i.e. only 4 cores at any one time) but was (happily) surprised to see it will run all 8 at once. But that just makes the memory / IO limits closer…

    In reality, to build a cluster with best performance, I’d want a single core SBC with a fast clock (like 2 GHz+) and 2 GB memory and a GigE switched network. Oh, and either SATA or USB 3.0 for the disk. At that point, you have a more or less balanced system that ought to meet Amdahl’s Other Law about system balanced design. (clock = memory = network / IO roughly in Gig each…)

    Oh, and then you also have 1/8 the heat to dissipate in that heat sink too… ;-)

    But everyone is going to multicore and ignoring heat and system balance. Oh Well.

    In reality, for most SBCs, you can think of the Quad Core as really being a single or double core that has a short term burst ability before it overheats / runs out of input / stalls on output / or swaps memory out… Since the other cores really are only costing you about $1 each in board cost, it isn’t really much of a “loss” to figure them mostly a waste… So I buy quad core chipset SBCs and figure I get about 1 to 2 cores of real work out of them most of the time…

    IMHO, the one SBC closest to a “balanced system” that I’ve seen so far looks like the Odroid C1.

    32 bit so the memory goes further. 1 GB
    Nice big heat sink so the heat gets out well.
    1.5 GHz

    Only real “issue” is USB 2.0 is a bit slow for real balance.

    Still has 4 cores so 4 x CPU compared to a direct GHz GB GigE value, but if you think of it as a nice 1 core balance with 3 for small stuff and bursts, it’s a decent balance ;-)

    There may be someone out there with a better balanced SBC, lord knows I’ve not looked at all of them, or even most / many. I’ve also not spent as much attention on exact type of memory and memory bus (which matters rather a lot to performance). Then again, at a price range of $15 for the low end to $60 for the higher end, spending a lot of time on analysis isn’t as effective as just buying a few and putting them to work ;-)

    In reality, I’ve not been able to generate enough workload to keep my paltry stack busy for more than a few minutes. The XU4 alone is hard for me to keep “loaded up” with real tasks. There comes a point where worry about keeping the system balanced so all parts are working about equally is just not important anymore. I think that was driven home with the Orange Pi One at about $16 IIRC. Sure, the thing could only run one core full before it started to overheat, but the only time I was able to do that was with artificial loads… With a small heat sink on it, it went to 2 cores before heat throttle cut back the MHz. That’s about my typical “pushing it hard” load on my desktop machine… So at some point you just have to admit that having a $16 machine with 2 cores at 100% is “better” in some ways than a $80 machine with all 4 cores fully loadable with 100% workload… since you can buy 5 of those cheap boards and have 10 fully functional cores (and another 10 available for short burst loads…)

    Well, I guess I need to admit that I have “board envy” and would love to buy some of those bigger hotter cluster oriented things… but I also have to admit that I have zero need for them at the moment and can’t get my present cluster bogged down (yet!) with existing workloads. Maybe after I get a full on Linux system build set up to compile, or get one of the climate models to a runnable state…

  12. E.M.Smith says:

    Another Day, Another Quirk…

    I’d been using a set of file systems on a Seagate disk as my “interior work / Headend to cluster” system under Raspbian uplifted to Devuan. When I instead put Devuan 1.0 on that system, the EXT4 disk would read fine (so the “journal” issue was fixed for mounting and using) but on rebooting, the file systems would not fsck. I’d get a nag to “install a newer e2fsck”… but there was none newer when I did an apt-get. (I think I’d need to go into the next release and grab a backport…)

    Well, I took the slower but easier way out. Dumping 1.5 TB of disk partitions (one at a time), reformatting them to type EXT3, and reloading them. That was yesterday….

    At this point, I think I’ve still got 2 or 3 disks that are EXT4 that might bring grief with compatibility level issues, but ‘whatever’. They are the rarely used ones, and as likely as not the OS/e2fsck update will come through before I use them again…

    One side effect was that I got to spend a couple of days working on the Pi M3 as my “daily driver” under Devuan 1.0 while all this was going on. My impressions follow.

    First off, realize I’ve mostly been living on the Odroid XU4 for the weeks prior. Going from 8 cores where 4 of them are 2 GHz and have more pipelined (parallel execution) architecture means something like a 4 x performance for those cores compared to the Pi cores. Keep that in mind.

    So I really like Devuan 1.0 as a full install. It’s just more polished and less of a hack where the “uplift” works but feels a bit glued together. At this point, I’d be willing to shop for SBC boards just based on what Devuan lists in their downloads section as a formal release.


    devuan_jessie_1.0.0_arm64_raspi3.img.xz            23-May-2017 11:20    175M
    devuan_jessie_1.0.0_armel_raspi1.img.xz            23-May-2017 11:13    172M
    devuan_jessie_1.0.0_armhf_chromeacer.img.xz        23-May-2017 11:14    231M
    devuan_jessie_1.0.0_armhf_chromeveyron.img.xz      23-May-2017 11:15    235M
    devuan_jessie_1.0.0_armhf_n900.img.xz              23-May-2017 11:15    155M
    devuan_jessie_1.0.0_armhf_odroidxu.img.xz          23-May-2017 11:16    154M
    devuan_jessie_1.0.0_armhf_raspi2.img.xz            23-May-2017 11:14    172M
    devuan_jessie_1.0.0_armhf_sunxi.img.xz             23-May-2017 11:17    152M

    All three Raspberry Pi architectures are listed. I’ve got both the raspi3 and raspi2 on micro-SD cards for the Pi M3. They both work well, though the arm64 raspi3 image seems just a bit slower on the browser (it ought to be much faster on double precision math in model runs). I’m mostly running the raspi2 32 bit image on the Pi M3 so as to be binary identical with the 2 x Pi M2 boards in my distcc compile cluster. I run the 64 bit image sometimes just to give it a test drive, really. It’s the best 64 bit Pi M3 image I’ve run so far, and I’d be comfortable using it on a regular basis.
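    The reason binary identical matters: distcc ships preprocessed source to the helper nodes, which compile it and hand object files back to be linked locally, so every node’s gcc has to emit objects for the same ABI. A 64 bit node answering a 32 bit client would hand back unlinkable objects. The client side of my setup amounts to a hosts file like this (the addresses and per-host job limits here are placeholders, not my actual nodes):

```shell
# Sketch of a distcc client setup for a small Pi cluster.
# Addresses and per-host job limits are illustrative only.
mkdir -p "$HOME/.distcc"
cat > "$HOME/.distcc/hosts" <<'EOF'
localhost/2 192.168.1.21/4 192.168.1.22/4
EOF

# A build then gets farmed out with something like:
#   make -j10 CC="distcc gcc"
# (-j at roughly the sum of the per-host limits keeps all nodes busy)
```

    The `host/limit` syntax caps concurrent jobs per node, so the head end doesn’t drown the little boards.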

    There’s two “chromebook” releases, so you can go buy a cheap Chromebook and purge the Google Crap from it, install Devuan, and be golden. I’m seriously considering that as a Christmas Toy Request to the spouse ;-) No, I have zero need for it… The recycled MacBook is fine, and I’d likely be better served to pop $200 for an SSD kit and fix it… but then again, for about $180 I could have a Devuan laptop ;-) Decisions decisions…

    The OdroidXU image is claimed to run on all the XU platforms, including the XU4, but my attempts have failed. I suspect they built on a different one (with octo-core big.LITTLE) but didn’t notice the video cores are different on the XU4. Or maybe I just muffed the install… On my “someday” list is to figure out what’s really the deal.

    That just leaves n900 and sunxi images.

    The N900 is a Nokia smartphone, for those folks wanting a phone with Linux on it:


    An application called “Easy Debian” installs a Debian LXDE image on the internal memory, enabling applications such as IceWeasel (Firefox browser) and all of the OpenOffice.org suite to run within Maemo. Other applications in the Synaptic package manager that are included in the Debian installation, such as GIMP, can run within the LXDE interface. Software can also be added to Debian using Maemo’s chroot utility using Synaptic or apt-get at the command line, such as Stellarium or the zim desktop wiki, and this can then be accessed either via the LXDE desktop, by icons in the program manager, or by shortcuts on the desktop.

    Small and somewhat slow, but if you want a phone that YOU control, there you go.


    The images use recent kernels (4.10, 4.6, and such) which causes a specific
    issue when you try to resize or fsck ext{2,3,4}: they use some ext4 features
    not yet available in the standard Devuan Jessie e2fsprogs. Because of this, you
    are advised to install e2fsprogs from jessie-backports if you are in need of
    using these tools on these images.

    Currently supported images:

    * Acer Chromebook (chromeacer)
    * Veyron/Rockchip Chromebook (chromeveyron)
    * Nokia N900 (n900)
    * Odroid XU (odroidxu)
    * Raspberry Pi 0 and 1 (raspi1)
    * Raspberry Pi 2 and 3 (raspi2)
    * Raspberry Pi 3 64bit (raspi3)

    So there’s the backports issue… I’m happy just moving to EXT3 for a while. I’ve been meaning to try out btrfs anyway, so I may try it on some disk instead of fussing with EXT4 if I really want a journalling file system.
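    If I do end up needing EXT4 tools that understand the newer feature flags, the route the release notes suggest would look roughly like this. It’s untested by me, needs root, and the repository line is my best guess at the Devuan mirror layout, so it’s shown as echoes only:

```shell
#!/bin/sh
# Sketch (echoes only) of pulling e2fsprogs from jessie-backports.
run() { echo "WOULD RUN: $*"; }

# Add the backports repo -- check the Devuan docs for the exact mirror line
run sh -c 'echo deb http://deb.devuan.org/merged jessie-backports main >> /etc/apt/sources.list'
run apt-get update
# Ask for e2fsprogs (which carries e2fsck) from backports explicitly;
# plain apt-get install would keep the stock Jessie version
run apt-get -t jessie-backports install e2fsprogs
```

    The `-t jessie-backports` pin is the important bit: backports packages sit at a lower priority, so they don’t install unless asked for by name.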

    Then we have the sunxi / Allwinner tranche:

    Allwinner boards with mainline U-Boot and mainline Linux can be booted
    using the sunxi image, and flashing the according u-boot blob found in
    the u-boot directory here. The filenames are board-specific, but this
    file is commonly known as “u-boot-sunxi-with-spl.bin”.

    Currently supported Allwinner boards:

    * Olimex OLinuXino Lime (A10)
    * Olimex OLinuXino Lime (A20)
    * Olimex OLinuXino Lime2 (A20)
    * Olimex OLinuXino MICRO (A20)
    * Banana Pi (A20)
    * Banana Pro (A20)
    * CHIP (R8)
    * CHIP Pro (GR8)
    * Cubieboard (A10)
    * Cubieboard2 (A20)
    * Cubietruck (A20)
    * Cubieboard4 (A80)
    * Cubietruck Plus (A83t)
    * Lamobo R1 (A20)
    * OrangePi2 (H3)
    * OrangePi Lite (H3)
    * OrangePi Plus (H3)
    * OrangePi Zero (H2+)
    * OrangePi (A20)
    * OrangePi Mini (A20)
    * Allwinner-based q8 touchscreen tablet (A33)
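    For completeness, the usual sunxi flashing recipe is to write the image, then dd the board’s u-boot blob to an 8 KiB offset on the same card (the standard location where the Allwinner boot ROM looks for the SPL loader). A sketch, with the device name a placeholder and everything echoed rather than executed, since a wrong of= here eats a disk:

```shell
#!/bin/sh
# Dry-run sketch of writing the sunxi image plus its u-boot blob.
# /dev/sdX is a placeholder -- verify the device with lsblk first!
run() { echo "WOULD RUN: $*"; }

run unxz -k devuan_jessie_1.0.0_armhf_sunxi.img.xz
run dd if=devuan_jessie_1.0.0_armhf_sunxi.img of=/dev/sdX bs=4M
# The Allwinner boot ROM reads the SPL from an 8 KiB offset on the card:
run dd if=u-boot-sunxi-with-spl.bin of=/dev/sdX bs=1024 seek=8
run sync
```

    Note the blob goes to the raw device, not a partition, which is why the image’s first partition is laid out to leave that gap free.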

    Ok, so they don’t have an Orange Pi One… dang it. OK, I’ll leave my file server / site scraper as an Armbian/Devuan uplift for a while…

    Any future buys will likely be from this list, until and unless they come out with an Odroid direct supported build or I ‘roll my own’ from a source build. The “uplift” is a bit quirky on the XU4, and Devuan doesn’t have a C1 or C2 build. Oh Well. The Armbian / Devuan (Armbdevuan) works well on the C1 and C2, as near as I have seen. But for low end play toys I’m more likely to go for a C.H.I.P. and for high end the Cubie family. Though the Orange Pi One has worked just fine, so I expect their other boards would be fine too. Mostly just needing a bigger heat sink than the little one I stuck on it. But in reality, I can’t justify even a $10 toy board buy until I get some actual work running on the stack I’ve already got. Doing “board reviews” is not really my goal here, after all…

    With all that said:

    I really really like the speed of the 2 GHz A15 core. It mostly shows up in Firefox, though. FF will peg a Pi M3 core for a couple of seconds from time to time. Easy enough to live with, but you notice it compared to “no wait states” on the XU4.

    IMHO, the ideal system would be a quad core A15 or similar (A53, A57, A9? Some digging needed) running at about 2 GHz. Rarely does the XU4 use more than 1/4 of the cores and even then, mostly never “pegs” one in regular desktop use. Having “only” 4 cores would be absolutely fine, as long as they are relatively fast ones. I suppose I could just slap a giant heat sink on the Pi M3 and overclock it, but I’m not fond of “board abuse” ;-0
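    For the record (and not recommending board abuse), the Pi overclock knobs live in /boot/config.txt. Something like the following is the general shape; the values here are illustrative, not settings I’ve validated on my own board:

```
# /boot/config.txt -- illustrative overclock entries for a Pi M3
arm_freq=1350      # up from the stock 1200 MHz
over_voltage=4     # modest core voltage bump for stability
temp_limit=80      # throttle a bit earlier to protect the chip
```

    Without a decent heat sink the firmware just thermal-throttles back down anyway, so the big heat sink really isn’t optional for this game.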

    So that’s where I’m at. Sold on Devuan. Like the “out of the box onto the board” real 1.0 release better than the “uplift”, especially on “odd” hardware with mixed CPU core types. Happy to buy hardware that supports that software habit, going forward. Think the Pi M3 is “just enough”, but I like the comfort of the faster 2 GHz cores…

    With that, I’m “back to work” in my usual work flow on the XU4 and pretty much past the last “issues” from the system update load. (Kernels are the newer secure ones; most file systems are out of EXT4 Compatibility Hell; all boards are on Devuan 1.0 or an Armbdevuan build; the future hardware path is set on “Devuan Supported”.)

Comments are closed.