A Patching We Will Go…

Or “Washing Dirty COWs”… (“dirty”, as in not very well done, Copy On Write bug)

Basically, all you need to do on Linux is swap in a patched kernel.

Now there’s lots of potential hair on that dog. For example, under Debian, the patch is in the Jessie 8 kernel but may or may not be in Wheezy 7 or earlier; I just can’t tell yet. Retrofitting a kernel built for a much newer major release sometimes “has issues” on older releases (libraries are different, expected services and interfaces might change). So for very old releases, it is best to build a brand new kernel from source, adding the patch and compiling against the right libraries. Because being a “kernel developer” is generally thought to be “not easy”, most folks just do an entire operating system update / upgrade. Since I’m very unkeen on SystemD, I’m trying to avoid that and stay on Wheezy. For this reason, many of my older Fedora, Ubuntu, and Debian images will be harder to update, and / or I’ll just not bother.

The Upgrade / Update Path

So, mostly, this top posting is going to be for folks who are going ahead and doing the update / upgrade process and ending up with Debian Jessie and a new patched kernel on a Raspberry Pi, or folks running Arch (like my Daily Driver) that does have SystemD in it, but not in too obnoxious a way. I’ll come back to the harder cases later when I’ve got a couple of usable chips cleanly patched / upgraded.


For those running the most recent Jessie / Raspbian as your system, the fix is nearly trivial:


You may have seen the news recently about a bug in the Linux kernel called Dirty COW – it’s a vulnerability that affects the ‘copy-on-write’ mechanism in Linux, which is also known as COW. This bug can be used to gain full control over a device running a version of Linux, including Android phones, web servers, and even the Raspberry Pi.

You don’t need to worry though, as a patch for Raspbian Jessie to fix Dirty COW has already been released, and you can get it right now. Open up a terminal window and type the following:


This is for generic Raspbian / NOOBS only! (Until proven otherwise…)
And even that has not been tested by me…

and people wonder why I have 60 system images on a dozen chips
with backups of each…

sudo apt-get update
sudo apt-get install raspberrypi-kernel

Once the install is done, make sure to reboot your Raspberry Pi and you’ll be Dirty COW-free!

Yup, that’s all it takes. Two lines run via ‘sudo’ or you can become root and just type the parts after sudo…
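If you want to confirm the reboot actually left you on a new kernel, a quick look is easy. A little sketch (the package name is the Raspbian one from above; the exact version strings you see will depend on when you run the update):

```shell
# What kernel am I actually running now?
uname -r
# And what raspberrypi-kernel package is installed? (Raspbian / Debian only;
# prints nothing on other systems)
dpkg -s raspberrypi-kernel 2>/dev/null | grep '^Version:' || true
```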

It will likely take some testing and / or FM (“Friendly” Magic…) to make the newest kernel work with Wheezy, so that’s going to be for “another day”. As several of my multi-boot chips have a Jessie image on them, my first pass will just be doing that on them. At that point I’ll have several Raspbian images on chips that I can boot and run without Dirty COW worries. THEN I’ll come back and work on the Wheezy issue and, some time after that, the Ubuntu (that ought to be about the same) and Fedora (who knows what) fixes.

FWIW, this bug was only introduced in 2007, so older kernels will not have it. Digging Here! into just when it arrived:


PostPosted: Sat 22 Oct 2016, 05:20 Post subject:
Maybe not moot

Slacko5.7 by default has settings of -rwxr-xr-x for a great majority of files.

Group and World read-only privileges can be changed with this bug. And in addition, it’s a kernel bug since 2.6.22.


So if 2.6.21 or smaller, you are OK (some of those very old systems of mine in the garage, or some old live CDs…) but if newer than that, it’s a problem. What number is fixed?


#2 2016-10-24 17:43:40


Re: [SOLVED] Status of CVE-2016-5195 (“Dirty COW”)?

It was fixed for kernel 4.8.3 and Arch is now on 4.8.4 so you’re covered ;-)

All well and good if you have kept updating regularly or have autonomous updating turned on. If not, how do you know your kernel level? “man uname” is your friend…

UNAME(1)                         User Commands                        UNAME(1)

       uname - print system information

       uname [OPTION]...

       Print certain system information.  With no OPTION, same as -s.

       -a, --all
              print  all  information,  in the following order, except omit -p
              and -i if unknown:

       -s, --kernel-name
              print the kernel name

       -n, --nodename
              print the network node hostname

       -r, --kernel-release
              print the kernel release

Soo… what’s my status?

chiefio@ARCH_pi_64:/boot$ uname -r

Oh Dear… Looks like I need an update…
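Since distros backport fixes into older-numbered kernels, the version number is only a hint, but as a first cut you can compare what uname reports against the vulnerable window. A sketch (the 2.6.22 / 4.8.3 boundaries are the mainline ones quoted above; a distro kernel with an “in-window” number may already carry the backported fix):

```shell
# classify KERNEL_RELEASE -> pre-bug | fixed | vulnerable (mainline numbering)
classify() {
    v=$(printf '%s' "$1" | cut -d- -f1)      # strip "-v7+" style suffixes
    # le A B : true if A <= B in version order
    le() { [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
    if le "$v" "2.6.21"; then echo "pre-bug"
    elif le "4.8.3" "$v"; then echo "fixed"
    else echo "vulnerable -- or a distro backport; check your advisory"
    fi
}
classify "$(uname -r)"
```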

How to do it? Very similar to Debian, but with “pacman” instead of “apt-get”.

sudo pacman -Syy

sudo pacman -Su

Unfortunately, when I did that, I ended up with “issues”. First off, it complained about a file being in the way, so I moved it out of the way and the upgrade proceeded.

-rw-r--r-- 1 root  root  369577 Oct 11 08:30 brcmfmac43430-sdio.bin
-rw-r--r-- 1 alarm alarm 309681 Mar  1  2016 brcmfmac43430-sdio.bin.old
-rw-r--r-- 1 alarm alarm   1076 Mar  1  2016 brcmfmac43430-sdio.txt
-rw-r--r-- 1 root  root  488193 Apr 19  2016 brcmfmac43455-sdio.bin


So you can see the .old and the replacement that the upgrade put in just above it.
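For the record, “moved it out of the way” was nothing fancier than the classic mv-aside pattern (the path is whatever pacman complains about; mine was the brcmfmac43430-sdio.bin firmware file above):

```shell
# move_aside FILE -- park the conflicting file next to itself as FILE.old,
# so the package manager can be re-run cleanly.
move_aside() {
    mv -- "$1" "$1.old"
}
# e.g. (run as root; the path is from MY error message, yours may differ):
#   move_aside /usr/lib/firmware/brcm/brcmfmac43430-sdio.bin
#   pacman -Su
```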

But then, when rebooted post update, several things showed systemd “failed to start” errors at boot time, and now it also refuses to do the update again (looks like networking is broken).

The good news is that I had a ‘fall back’ system image on the same chip (named “Arch Chiefly Built”) from which I’d cloned the working copy. Then there’s the fact that my working directory is on its own file system, so it transports to the other images. What have I lost? At most, whatever I installed on the Chiefly image Clone but didn’t note anywhere and didn’t make a new system save image. (Don’t know what that would be, but hey, not a big deal to install ‘whatever’ again…)


Oh, and the kernel didn’t update to a new value either. So looks like “needs work” on the Arch upgrade procedure…

More later as I work through this… (this will be a ‘living document’ for the duration of the updates…)



About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present “hot buttons” are the mythology of Climate Change and ancient metrology; but things change...
This entry was posted in Tech Bits. Bookmark the permalink.

17 Responses to A Patching We Will Go…

  1. Larry Ledwick says:

    Does Arch have an equivalent to Ubuntu’s
    apt-get -f install, followed by apt full-upgrade

    I accidentally killed my Ubuntu system during an online upgrade the other day to the new version 16.04.1 and it would not boot to the corrupted image.
    I booted from CD and then ran those two commands and it fixed the failed install.

  2. E.M.Smith says:

    Don’t know. I’m fairly new to Arch. Only used it on the Pi.

  3. tom0mason says:

    You probably know but just for completeness here are some links —
    With these methods backing-up your important stuff is preferred!

    Updating firmware via github — there are plenty of caveats and warnings here, see the comments.

    Another tutorial/guide for Arch Raspberry Pi

    Also there is this ‘Upgrade Arch Linux for Raspberry Pi 2’ from https://pavelfatin.com/upgrade-arch-linux-for-raspberry-pi-2/
    When you are fully updated have a test with this
    A ‘c’ file that contains instructions on use and the code, I originally found referenced at https://www.redpacketsecurity.com/testing-dirty-cow-cve-2016-5195/

    Good luck with the update.

  4. tom0mason says:

    Looks like my comment (#comment-74130) has gone to spam. It has many Arch Raspberry links that probably cause this action.

  5. E.M.Smith says:


    Yup, in spam. No idea why as I have acceptable link count set higher than that.

    FWIW, I had moved on to trying the Debian upgrade of the kernel. It was late, I wasn’t thinking enough…

    I’m on a BerryBoot chip, not generic NOOBS built, and the Raspbian folks like to pretend BerryBoot doesn’t exist, so apparently did zero environmental checks before blasting the kernel into “where it ought to be” and that blew my chip… (not the hardware, just overwrote something in BerryBoot Land).

    So now I’m configuring a second chip that is generic with my saved config files and build script… then I can more easily recover the backup image of my main chip and / or just repair the boot partition…

    It is at times like this I’m glad “my stuff” is on a USB drive, not attached and fine, and I have dd copy images of my chips (usually from the last big change or update).

    Since the OS chip is disjoint from the user space, at worst I just reinstall the system, run my build script, put back the saved config files, and discover what, if anything, interesting I added to the system since then (generally in a posting)… About 4 hours.

    I’ll likely spend more than that doing discovery on what was busted on the first chip and sucking out the intact partitions, if any. (Should be almost all of them… the kernel replace ought to have only hit the boot partition and not the partition table…)

    BTW, per Arch: I’m almost a NOOB on it, having never used it prior to the posting where I evaluated it, so any pointers to where those guys hide their tips highly appreciated!

    Well, at least folks know what I’m doing today ;-)

    (Happy to run out ahead and find the sinkholes… well, sort of happy…)

  6. Larry Ledwick says:

    Well I certainly appreciate you running ahead of us through the mine field, and watching over your shoulder through your posts.

    I am currently waiting on some bits and pieces for my RPi’s, I am going to try to see if a slightly different form of off system storage works. I am ordering a spare USB SD card reader to plug into the Pi and make use of some older fast SD cards I bought for my cameras as off system solid state drives.
    I got the 3A 5V power modules so will see if they can support the power draw of the RPi and the external SD reader. Some of those larger fast SD chips intended for things like cameras might make for compact low power external storage if it works without needing another power wart to power them or an external USB docking station for a spinning disk or true SSD disk.

    These fast Sandisk SD Extreme Pro cards work great in the cameras and have good data rates at 95 MB/s, come in sizes from 16 gig to 128-256 gig, and I have a few of the smaller ones on hand which now have been replaced by the larger sizes in the camera.


    Since the card reader device can read multiple types of cards including compact flash, which is faster yet, being available with data rates up to 160 MB/s, it might work out for a small package size off system storage that you could buy new chips for almost anywhere in the country.
    Since the reader can handle just about any card form factor from the micro’s to the full size compact flash you could use just about anything you could find in the stores if you needed more storage, and memory cards for personal devices, cameras etc. are sold just about everywhere in the major chain stores and even road side shops in tourist areas.

  7. pg sharrow says:

    @Larry; glad to hear you are pursuing this avenue of storage. I was wondering about using my reader in a Pi as well. I will also be hanging over the neighbor’s fence as he plows his field. ;-)…pg

  8. tom0mason says:

    BTW, per Arch: I’m almost a NOOB on it, having never used it prior to the posting where I evaluated it, so any pointers to where those guys hide their tips highly appreciated!

    No problem with that. I’m a NOOB on Arch too! Just testing Arch via Manjaro on my old IBM T60, as it is a 32 bit distro. I’m not up to speed with it, it all feels so alien — Pacman and systemd to struggle with. Can’t say I’m a fan of either.
    If I can’t settle soon I’ll be going back to looking at BSD again, GhostBSD just updated their 32bit distro, and seems to be well supported.

  9. E.M.Smith says:


    I regularly use a Targus SD to USB adapter for storage and for SD card read / write for making archive copies. If a giant multi interface one sucks too much power, just plug it into a powered USB hub. I use 2 different ones for big USB disks… a few TB in 4 disks at once…

    I have an old Media Gear combo adapter that I loved (though USB 2.0) but sadly over the years one OS after another has stopped supporting whatever it needs… I keep it now for use on archival systems if needed. I’d buy a replacement, but the last thing I need right now is another search and evaluate task…


    I’m deep enough in the swamp today that a “screw it, put BSD on the DNS box and move on” is starting to call my name. Updates on Debian and Arch that ought to be trivial are fighting me… I now have a second chip with Arch on it, where I was just bringing it up to date to be my replacement daily driver as I recover the other chip, that is being cranky. I can still boot other images on the chip (including the initial base image I saved) but the update is in the 3rd reboot and I’m not yet to the “install my packages” step.

    I don’t want to be Talking Dirt w/o proof, but both are systemD and both are being a PITA. Frankly, living with Dirty COW for another week is looking good…

  10. E.M.Smith says:


    Base Archlinux same as used for Daily Driver.
    Apply build script, tested and proven before to make Daily Driver. It fails to build many packages.
    Berryboot reset to base Archlinux.
    Apply general Pacman update (as in script). It works.
    Run script (minus update) and packages install.
    SystemD “failed to start” errors all over the place… system hangs in boot. No clear path to bootable for manual fix, no idea what TO fix. Only choice: reset to base and manually do one step at a time, rebooting between each, just like Microsoft…

    Wonder if Bevmo is open on Sunday….

    I think I’m going to just drop back to an older daily driver image for a week while I commit to a non-systemD release and fully script the install of it.

    FWIW, I’ve never had these kinds or number of issues using scripted builds, ever.

  11. E.M.Smith says:

    OK, by doing an “update” but NOT an “upgrade” the Debian Jessie script (done one line at a time by hand, but the same commands) let me rebuild a workable Pi workstation. Now I can use it to restore my chip backups and / or work on recovering individual partitions ON those chips. (The Chromebox and Tablet are nice for TV and browsing, but if you want to do things like “dd” an SD card into an archive or nfs mount a remote data store and download an old chip image onto an SD card, well, let’s just say they’re a bit daft…)

    I still need to do a few file updates (things like /etc/fstab and /etc/exports and such) to get some things automatically mounted and “swap” where and how I like it, but that’s all secondary.

    I’m functional again (even if it likely isn’t a Dirty COW clean system… I likely need to do that “upgrade” to get that…)

    OK, time for a break.. (BTW, posting this from the “Temp Daily Driver”… using my preferred browser.)

  12. tom0mason says:

    I feel for you!
    I’m not a fan of systemd. To me it’s just a method of closing off users so developers can have an easier job. The erroneous assumptions are —
    1. users don’t like/want/need to tinker with the OS or its programs and parameters,
    2. that developers/programmers innately know what users like, want and need.
    Both assumptions are wrong.
    Sure the init system had some issues but it was easily open to inspection and correction. Inits had a logic to follow unlike systemd with its dependency/setting/services/jobs/ in a spaghetti of inter-relationships.

    When systemd messes-up the boot process (as far as I can find) you’re left in the M$ position of clean it all off and start again with a fresh install and reboot. There is no logical way that I’ve found to debug the start process as everything is playing pass the parcel through too many system files to be able to track them. And of course little is in plain text.
    Somehow with a non-booting system you have to get systemctl running so you can interrogate the failed system, and list the jobs, services, targets, etc, etc. It’s an illogical mess but don’t worry because the programmers/developer that did it have left you with this https://fedoraproject.org/wiki/How_to_debug_Systemd_problems. It doesn’t work unless you know the secret pathway through the dependency maze.

    As I originally said good luck with fighting it, personally I think you’d be better off using a non-systemd distro.

  13. E.M.Smith says:

    Interesting side note:

    While going through my various ‘chips’ and doing the Round Tuit Backups (about 20 total images for about 1/3 TB !) I re-discovered the Encrypted BerryBoot Daily Driver test chip. Launching Debian on it, the browser is a bit pokey and I’m reminded why I wasn’t all that keen on the encrypted experience… BUT: Then I launched Fedora / Red Hat on it. Browser is fast and snappy…. Hmmm…..

    Now I’m not embracing the Red Hat experience, they have gone way too far around the bend into Industrial VM First, all you hangers on second… But I do note in passing that the experience is what it is.

    My first thought is that maybe attempting to glue on SystemD post facto is prone to a whole lot more performance issues than having it developed in house and integrated from day one. Maybe Red Hat going for “Competitive Advantage” via control of the SystemD devo path?

    At any rate, just thought I’d note that the Fedora speed in the browser is similar to the experience under Arch (and under older smaller releases in years past).


    IMHO the “goals” of SystemD are more convoluted.

    First off, since several divergent flavors of Linux had sprouted, it is an attempt to make “One Linux” (their term…) by forcing everyone back into one init process. Lets you write simpler code.

    Second, boot time on Virtual Machines in a “Cloud” shop (or a Disaster Recovery shop) is a Very Big Deal. When your booking engine books $6,000,000 / hour, then a minute costs you $100,000 and ANY outage is unacceptable. So great emphasis was put on “up fast, in seconds, on a VM spin up”. Most of us just don’t give a damn about VM spin times measured in a minute or less.

    Third, put Red Hat in control. Competitive advantage in the commercial marketplace.

    Now I suspect the rest of these are also true, but it is more my speculation:

    Fourth, let Pottering take control and push his vision of what is right (in that very German way of One Way and No Individual Control!)


    So anyway, long story short, we came to the conclusion that Upstart is conceptually wrong, and it moved at glacial speeds. It also had the problem that Canonical tried very hard to stay in control of it. They made sure, with copyright assignment, that they made it really hard to contribute, but that’s what Linux actually lives off. You get these drive-by patches, as I would call them, where people see that something is broken, or something could be improved. They do a Git checkout, do one change, send you it and forget about it.
    Putting it all together, we realised that Upstart wouldn’t be it. So at one Linux Plumbers Conference, four years ago or so, Kay Sievers and I said that we should do something about it, after we saw at the conference how Upstart wasn’t moving ahead. And then we started working on it, pulled out the old Babykit code, gave it a new name, and started proposing it.

    A lot of people understood that this was the better approach. It was a lot more complex than Upstart – to make it clear, I think Upstart actually has its benefits. The source code is very, very nice, and it’s very simple, but I think it’s too simple. It doesn’t have this engine that can figure out what the computer is supposed to be doing.

    So we started writing Systemd, and Red Hat didn’t like it at all. Red Hat management said: no, we’re going for Upstart, don’t work on that. So I said, OK, I’ll work on it in my free time. Eventually Red Hat realised that the problems we solved with Systemd were relevant, and were problems that needed to be solved, and that you couldn’t ignore them.

    It’s that desire to have the computer out think you and be in control that is exactly what I do not want. I want simple, clear, human obvious recipes for what my machine is to do at boot time.

    Fifth: TLA desires? Totally speculative, but were I a TLA, this would be a wet dream to me. One Stop Shopping for buggering your whole system. Stuff flying around in dbus easy to pick out and play with. One master control point to flip most anything to do a bit of what I want. Almost all of it buried in a bag-o-bits where maybe a dozen folks in the world look and can understand it. If they were not in on the original design goal, they will certainly be looking at juicy exploits.

    In Conclusion

    So none of that brings anything to ME that I want. It does make sure that 30 years of Systems Admin knowledge and experience gets tossed out and “everything you know is wrong”, while making control harder to impossible (the “computer” decides what to do, remember…) and making everything way more opaque.

    I want small, simple, human readable, human controllable, and definitive state setting. Not a binary blob that decides for me what ought to be done, silly human…

    BTW, I am transitioning to a non-systemD release, I just need to work out which one does what I want while not being too new and buggy… For now, I’m ‘sort of on systemD’ due to having moved to Debian some years back and Debian being dominant on the Pi and having “gone there” into systemD against my wishes… So for now, I’m kind of stuck with the 3 main releases on the Pi fully committed to it (Raspbian, Ubuntu, Fedora) and Arch part-in and headed that way fast. Pickings get slim after that (yes, lots of releases, but not as easy to get running nor as NOOBS friendly – see my prior postings on bringing up several of them). It’s the classical problem of needing to keep working on a system while you completely swap it out…

    That was made all the more PITA in that my strategy of “just stay on the last non-systemD release with Wheezy” just got spiked by the Dirty COW bug in that I need to move up a kernel release or three… all ‘suddenly and against my will’… “Stay Back Level” is not compatible with “Be current as of this week to kill Dirty COW”… So I’m suddenly in the thick of it.

    I do have a minor secondary interest in ‘keeping current’ on the skill set, so as much as I piss on systemD, I need to also be able to “make it go” if I get a contract using it… Something like the Mercedes Mechanic who needs to know how to tune up a POS Chevy since there are a whole lot more of them…


    OK, just as a status update:

    I’ve got all my backups done, and I’ve actually catalogued the chip pile, so now I’ve got 3 “Replacement Daily Driver” chips and I’m back up and running as usual. I can now choose to either recover the scrogged chips, or just over-write them (as I can always ‘restore and recover’ from the backups).

    I do still have the problem that the kernel on all the working DD chips is still Dirty COW friendly… so I’m still where I was 2 weeks ago, but I’ve got a more free hand in how I move forward. I did find my VOID chip and booted it on the laptop – slower than I liked. I also found the Alpine chip, and will likely use it as the basis for the DNS Server rebuild. (Non-systemd and uses musl and has a hardened design… but not as ‘desktop friendly’ just yet).

    I’m going to make a pot of coffee, step back from the terminal, and think just a bit about best path forward: Fix-rebuild-Old DDriver chips? Just flash new builds onto them? Incremental roll forward from one of the working systems? Etc. etc.

    Most likely I’m going to flash one of the chips with an entirely new build (AFTER assuring that the NOOBS or BerryBoot builds have fixed Dirty COW…) and use it as the build platform for the next step that will be ‘Back to that transition away from SystemD…” Then, once that works, I can basically flush about a dozen chips contents as they are no longer of interest (Dirty COW vulnerable).

  14. Larry Ledwick says:

    Comment on above:

    When your booking engine books $6,000,000 / hour, then a minute costs you $100,000 and ANY outage is unacceptable.

    I have always thought that was a false comparison (at least for some situations).

    If you are running a corporate sales outlet that sells your product (cough Sun Microsystems cough), the idea that someone would not buy a multi-million dollar order of equipment from you if your system was down for 20 minutes always struck me as silly. They spent days/weeks doing system architecture design and deciding what hardware they wanted to use for the build out. I’m pretty sure they will come back after lunch and place the order anyway.

    Now for a concert ticket vendor who is competing against several other outlets for ticket sales to an event he is only a broker for — yes folks will try the other ticket vendors because they are in a race with other people to book tickets before they are sold out.

    If time to sell out is not an issue and especially if you are a sole source supplier for a product like Sun was, not so much.

    Over at IBM this was very much an issue because we supported lots of clients like Coke (TM), Blockbuster video etc. that ran promotions or did new movie releases and when they opened the promotion the servers would get absolutely blasted and they often had to provision new servers to increase capacity when they realized the response was higher than anticipated.

    Just my man in the trenches view of two entirely different market niches, and yes Sun got really cranky if their e-commerce site went down for even minutes. In the case of IBM they also were dealing with SLAs (service level agreements) and the specified penalty hits (big $ penalties) if they failed to meet SLA service levels due to down time.

  15. Larry Ledwick says:

    By the way EM how exactly are you doing your backups of the chips?
    Are you writing images of them out to a hard drive?
    If so what command string are you using to do that?

    I just got in my order of micro-SD chips so need to think about how to backup and restore images as I fiddle with things.

  16. tom0mason says:

    Well at least you’re moving forward. I’m glad you are lighting the way for the rest of us who follow on later.

    “When your booking engine books $6,000,000 / hour, then a minute costs you $100,000 and ANY outage is unacceptable. So great emphasis was put on “up fast, in seconds, on a VM spin up”. Most of us just don’t give a damn about VM spin times measured in a minute or less.”
    Having come from industries where reliability was priority over speed — maintaining older equipment to very strict requirements — I have always appreciated the trailing edge of technology (with good periodic maintenance) more than the gee-wizzz (and now it just caught fire!) latest stuff. So for me, if a boot-up takes a few seconds more but ensures that the system is better known, properly documented, stable, secure (or at least no major issues and plenty of mitigation strategies) and maintainable, I’ll prefer it.

    Linux in general is heading down the systemd path. Can’t say I like it because, as you say, it is a big shift where “30 years of Systems Admin knowledge and experience gets tossed out and ‘everything you know is wrong’, while making control harder to impossible (the ‘computer’ decides what to do, remember…) and making everything way more opaque.” For us with less knowledge and experience it’s another steep learning curve to climb.

    If you have time there is interesting piece on https://forums.freebsd.org/threads/49372/ called “systemd is obsolete”, it’s short and the comments are worth a read.

  17. E.M.Smith says:


    I was at an ‘entertainment house’ and the bookings were things like vacations of several thousands each (hotel, meals, cars, park tickets…). On the other end were travel agents trying to close a ‘deal’ and if you were down, they would “suggest”, say, Universal Studios and Sea World as choices instead… Any vacation NOT booked was often lost… Some of the “DIY” from home via the internet would come back, but even then, many were “hot to buy now” and just moved on. But yeah, once you’ve R&D’ed your way to a particular chunk of gear, you don’t like to back down to 2nd choice…

    Per Chip Dups:

    I’m lazy and burning disk space instead of finesse on it. Just dd the chip to USB 2 TB disk.

    dd bs=1M if=/dev/sd{x} of=/BUPS/USB_Disk/Chips_Archive/Disk_{x}_Date

    Where {x} is the drive letter. Note: no number qualifier. I’m not caring about /dev/sdb1 Fat32 vs /dev/sdb2 as swap vs /dev/sdb3 as type ext4. Just the whole damn thing.

    This means a 32 GB chip with 9 G on it will make a 32 GB image on the USB disk…
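Wrapped up as a tiny function, it looks about like this; adding gzip on the pipe shrinks all that empty space, at the cost of the image not being directly mountable (the paths and date-stamp naming are just my convention, and triple-check the device name before pointing dd at anything):

```shell
# dump_chip DEVICE OUTDIR -- whole-device, date-stamped, gzip-compressed image.
# Restore later with:  gunzip -c image.img.gz | dd bs=1M of=/dev/sdX
dump_chip() {
    out=$2/$(basename "$1")_$(date +%Y%m%d).img.gz
    dd bs=1M if="$1" 2>/dev/null | gzip > "$out" && echo "$out"
}
# e.g.  dump_chip /dev/sdb /BUPS/USB_Disk/Chips_Archive
```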

    Now for some things, where I’m interested in particular parts of it, I’ll do something like tar to create an archive of just that bit. Say I’ve got /Chip/Linux/ISOs with 30 GB of stuff that hasn’t changed in forever and next to is /Chip/Linux/New_Stuff that I’ve not backed up anywhere. Then I’ll do something like:

    cp -r /Chip/Linux/New_Stuff /BUPS/Linux/
    (cd /Chip/Linux/New_stuff; tar cvf - .) | compress > /BUPS/Linux/New_Stuff.tarx
    or similar things like
    cd /Chip/Linux/New_stuff; tar czvf /Bups/Linux/New_Stuff.tarz .

    or any of a few others as the particular capabilities of any given linux release vary as does what I remember at the moment ;-)

    All the above ‘from the top of the mind’ so might not work on any given platform, or at all, if I’ve got a flag wrong… or an ending… like targz vs tarz vs… RTFM and “man” are your friends…

    Oh, and I’ve been known to use rsync too. I’ve a couple of ‘scriptletts’ that handle the annoying way it does flags…

    Interesting side bar:

    Those 8 GB cheap sticks I got have a blinky light on them. Activity causes it to flash. I’ve set up the pi user on a newly installed Debian to have the home dir pointed to one of the sticks. In Firefox, that little sucker sure flashes a lot… often when the browser takes a pause… I think it’s cache and such…

    My initial conclusion is that just having home directories on an external (relatively small but fast) USB real disk might work wonders both for speed and for SD card wear… Also separates OS from user data. I made a symbolic link from /USB/home/pi to /home/pi, THEN mounted the USB over it, copied /home/pi to /USB/home/pi, and changed /etc/passwd to have the user pi with home directory /USB/home/pi. That way, if the stick is in, it goes to USB; if it is out, it follows the symlink back to the chip at /home/pi.

    Lets me have an external storage (privacy / isolated IO / security of blinky lights / etc) if desired, or leave it in the pouch and have the chip run fine as generic…

    (There is evidence of the swap in /etc/passwd as the home dir location, and in the symlink back, and in the /etc/fstab entry for /USB; so this isn’t forensics proof. For that, you would need to undo those footprints…)
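    For anyone wanting to copy the trick, the on-chip part boils down to planting that symlink while nothing is mounted on /USB. Here it is as a function with the root prefix made explicit, so you can dry-run it somewhere harmless first (the /USB/home/pi layout is just my convention):

```shell
# setup_fallback ROOT USER -- create the on-chip symlink so that
# $ROOT/USB/home/$USER resolves back to $ROOT/home/$USER whenever no
# stick is mounted over $ROOT/USB. Run with ROOT=/ on the real system
# (stick unmounted); then mount the stick, copy the home dir onto it,
# and point /etc/passwd at /USB/home/$USER (usermod -d).
setup_fallback() {
    root=${1%/}; user=$2
    mkdir -p "$root/USB/home"
    ln -sf "$root/home/$user" "$root/USB/home/$user"
}
```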


    I’ll give it a read after dinner… (that I need to start cooking…)

    Today I’ve gotten Debian-on-a-Stick (no Spanish Accent!) running on the laptop, so it is now usable as a ‘workstation’ and for browsing as I fiddle with my Pi-Chips (no relation to cow chips… though a dirty COW is involved ;-)

    It does Video with audio OK too so the fact that the Chromebox is out of action (due to screen being on Pi doing OS admin…) is not an issue.

    So I’ve not wasted the whole week. I’m now basically functional again as the “questionable” chips and various issues (some self created…) get cleaned up.

    I’ve also done a fresh download of OSs to a BerryBoot chip (using it now for this comment) but have not looked into the rather important detail of whether this Jessie has the kernel upgrade or not… But at least if it gets nuked by someone, I don’t care. (BerryBoot lets me reset to a pristine starting state; I’ve got nothing on it I care about other than an install of Firefox and an update / upgrade cycle.)

    Now comes the hard work… Sigh.

Comments are closed.