The Good, The Bad, and The Slow

I’ll not be decorating this posting with links simply because I ran through a lot of them and it would take a couple of hours to track back and select / sort / enter them. It’s almost entirely the conclusions that matter (and if you don’t agree with them it’s easy enough for you to Dig Here! and find any evidence you like).

First off, the Good:

I bought a Seagate 8 TB disk at Costco for $129.xx and installed it on my Raspberry Pi. It is much smaller than the 2 x 4 TB MyBook disks on my file server and it is USB 3.0 speed. That is nominally $130 / 8 = $16.25 / TB.

My God Man, at that price what’s not to like? It’s hardly worth throwing out the data trash when you can just move it to a “someday” folder and sort it later or not… just call it a “deep backup” ;-)

Then, given some of the recent turmoil in the land of Linux: what with Linus on “sabbatical” having been badgered into self doubt by the SJW Mob, the SystemD crap from RedHat infesting too many releases, then IBM buying Red Hat and Microsoft suing companies using Linux for “patent infringement” (but not willing to share just what patents…) all while buying a seat on the Linux Foundation… well, I decided to take another look at BSD.

At present, all three of the major BSD forks run on the Raspberry Pi M3, and two of them on all R. Pi sizes 1, 2 and 3. OpenBSD has now shown up on the Pi M3, while NetBSD and FreeBSD have been there for a while. There has also been some progress on making Xorg and LXDE / KDE / etc. work more easily and be less painful to set up and configure. Part of that is HDMI to TV-monitor-like devices being more of a standard (fewer knobs to twiddle to get something to work with every single wacko display you might have). There is now a clear realization that the ARM processor and Single Board Computers are here to stay, there are millions of them, and they are worth supporting.

The Bad:

That 8 TB disk is really 7.2 TB to gparted and 7.5 to the df command (no, I don’t know why they do not agree). That makes it closer to $18 / TB or with California Sales Taxes almost $20 / TB usable. OK, still not going to break the bank, but not the same.
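Run the numbers (a quick sketch; the roughly 9% California sales tax figure is my assumption, not a quote from the receipt):

```shell
# $129 for 8 "marketing" TB, which software sees as about 7.28 binary TB.
awk 'BEGIN {
  usable = 8e12 / 2^40                  # bytes sold vs 2^40-byte TB: ~7.28
  printf "pre-tax:  $%.2f/TB\n", 129 / usable
  printf "with tax: $%.2f/TB\n", 129 * 1.09 / usable  # assuming ~9% CA tax
}'
```

That lands right around the $18 pre-tax and almost-$20 taxed figures.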

Then, that USB 3.0 would be great if I could use it. Search on “USB 3.0 fail” or similar, like “USB 3.0 Fix”, and you find all sorts of articles about USB 3.0 failures and flaky behaviour on things from Windows 10 to the Odroid XU4 to, well, lots of stuff. Overall, USB 3.0 still “has issues”. Near as I can tell, the one SBC I have with 3.0 ports on it (the XU4) still does not have it working reliably. Hardware? Software? Fundamental failure of design? It would seem nobody knows yet. So the disk is plugged into a USB 2.0 port of my Raspberry Pi M3 (and I’m thinking the Pi lacks USB 3.0 not because the R. Pi folks are dolts and slow, but because they refuse to ship it when it doesn’t work right yet…)

In the various ARM BSD releases you find all sorts of caveats. First off, the on-board WiFi doesn’t work. Dongles do. OK… no driver for the odd chip the R. Pi used and nobody writing one at the moment. Not a big deal, but limiting. OpenBSD only runs on the Pi M3 (why bother with a v6 or v7 instruction set port… just go with the less proven, debugged or stable v8 instruction set -CURRENT port…) so NetBSD or FreeBSD look more stable and with better debugging at present. Basically it is still a bit of a Hackers Paradise and not something easy for Joe Average to just set up and run. Yes, you can “cook book” it, but as soon as something goes bump in the night, well… So likely still another year or two for mainline uses.

The Slow:

I’ve been copying my 8 TB (nominal – 7.64 TB formatted) LVM volume group over to the 8 TB Seagate drive. First off, just to have a clean full backup before I go in and start tossing out trash. Second, I don’t really want to be using LVM.

Yes, I know all the reasons “The Experienced SysAdmin will want LVM”, since a Logical Volume Manager lets you glue on new capacity or swap out disks with relative “ease”. HOWEVER: Having had to recover the LVM group to a different SBC when the file server uSD card bit-rotted, let’s just say suddenly installing and configuring LVM was not fun. Furthermore, at the 4 TB / volume size, doing all those loverly maintenance whiz-bangs over USB 2.0 takes forever, so I’m just not going to do them anyway. Then there is the PITA of how to back up a 6.5 TB (used) file system. GAK! The experience was just not what I had expected.

So easier to just get a new 8 TB single disk, copy on to it, and have a second copy on the old LVM group until such time as it isn’t needed (then maybe get a second one of these monster disks… FWIW, it’s about 8 x 4 x 1 inches in physical size… about like a small book.) So that’s what I did… and am doing… and will be doing for a couple of more days…

I’ve had the copy process running for a couple of days. I’m not yet 1/2 finished. This is likely to run 5 to 6 days for the copy. Call it 1 TB to 1.5 TB per day. I’d say it is slow but really a TB is very very big. All at USB 2.0 speeds because USB 3.0 isn’t ready for prime time on SBCs. Sigh.
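As a sanity check on “slow”, the arithmetic (assuming roughly 6 TB of used data moved over the 5 or so days):

```shell
# ~6 TB moved in ~5 days, expressed as sustained throughput.
awk 'BEGIN { printf "%.1f MB/s\n", 6e12 / (5 * 24 * 3600) / 1e6 }'
```

Call it about 14 MB/s sustained, which, with the read and the write sharing one USB 2.0 bus plus seeks and filesystem overhead, is about what you would expect.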

The “good news” is the R. Pi (on Devuan 2.0 straight from the maker) has been rock solid and reliable. It had run 49 days straight when I shut it down to do some testing on other boards prior to bringing it up with the LVM cluster on it. It has copied over 2.8 TB of data (so far) without an issue. I’m typing this on it as it does the copy. (It is ‘D’ for disk wait in Htop so CPU to spare – USB not so much…)

Then some of the BSD ports use the v6 instruction set. You know, the old Pi Model A & B one. Often with only “soft float” math. So not a nice 64 bit instruction set, and ignoring the math co-processor. Not going to cut it in terms of performance or efficiency, and a real PITA for any science / math stuff or encryption. Yes, I know: smaller instruction width so less memory used, and the Pi is ‘thin’ on memory, while the other ARM boards sometimes have less and may be missing a math co-processor, so “one size fits all”… IIRC it was NetBSD that was this way (as they try to run on all hardware in the world, even the very small ARM boards). FreeBSD was mostly the v7 instruction set (the older Pi M2), and can run on the Pi M3 and the newer 64-bit revision of the Pi M2. All that argues for using FreeBSD for now and doing your own “build from scratch” if you want lots of speed and things set to use all the Pi hardware. Not exactly Joe Average friendly.

That said, I’m likely going to do it. Why? So at least one of my devices is running BSD, I’ve got the experience base up to date, and whatever happens with Linux I can easily slide off to something I really love and without a pause. Besides, the Devuan Releases “just work” out of the box so I need somewhere to put my Tinker Time ;-)

One of my R. Pi M3 boards arrived with a failed WiFi chip / system on it anyway (why spend $10 of shipping and $50 of time to get a $35 board exchanged… just suck it up and use it for non-WiFi things). So by putting BSD on it I’m not losing anything. It will likely become a generic “server” for experimental infrastructure stuff. A software build box with some TB of disk too. A place to archive my tech collection ;-) Even if it is a bit slow on math at the moment, that won’t matter too much.

Once the LVM copy is done, then my main file server store for temperature archives becomes A Single Disk and can move anywhere. I’ll assess the LVM group and perhaps do an “update scrape” using it. I last did a data scrape over a year ago (maybe 2?) when CDIAC posted its “going out of business” notice. I need to check in and see what archives died, and what changed so much that an “update” would wipe out my deep archive of the older state.

Basically there is a long slow “unscrambling eggs” process prior to a data scrape update. Eventually I’d like to recover the 2 x 4 TB Western Digital MyBook disks from the LVM group, but “we’ll see”… When just MOVING the data takes nearly a week, manually assessing it is not going to go fast.

In Conclusion

It’s not yet time to make a jump to BSD, and Devuan is still running stable, true, and clean. I’m going to update my kit to Devuan 2.0 as released (it has been a stable desktop for some time now) and then just forget about all the other SystemD infested releases. Move their archives off to “junk disk” and set it in the corner ;-) I’m likely to keep a working copy of Puppy Linux “just because” – it is small, works well, and has a different build approach. I want to get a better understanding of their build process. I don’t know if they have gone over to SystemD or not, though.

But other than that, I’m settled on Devuan, with an experimental BSD “as time permits”. The bits of hardware I have that do not have a Devuan port (2 x Orange Pi One… all the OTHER Orange Pi models have ports, sigh) will stay Armbian with a Devuan “uplift” until a pure Devuan is available. FWIW, I got the new Devuan 2.0 to boot on the Odroid XU4, so “moving my stuff” from the older chip with Armbian on it is “in the works”.

It will be a nice reduction in complexity to have basically one OS running across almost everything. Note that my “Interior Router, proxy server & DNS” is going to stay on Alpine. It is a router oriented distribution that just runs great on small hardware, and the Pi B+ is very small hardware, having only one core at 700 MHz and the v6 instruction set. It is rock solid anyway. It has “just run” for years now and only gets shut down when the power fails.

And, I can get started on that stuff in just a 1/2 week when the data copy to the 8 TB disk is done… grumble…

If anyone knows of an affordable SBC with stable and reliable USB 3.0, well, let me know. I think I’ve reached the point where it matters to me… and Christmas is coming so I can start dropping hints ;-)


About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present "hot buttons" are the mythology of Climate Change and ancient metrology; but things change...
This entry was posted in Tech Bits. Bookmark the permalink.

17 Responses to The Good, The Bad, and The Slow

  1. Larry Ledwick says:

    On my system at home, using spinning disk and a USB 2.0 docking station, it takes 24 hours / 2 TB to copy data onto a new disk (i.e. direct serial write, no need to find sectors) when I do my major backups of my photography library. Soooo, about 4.5 days is my guess.

    Of course that depends a bit if you have lots of little files or a few huge files to copy also.

  2. E.M.Smith says:

    I could also bother to find the most efficient file by file copy process… is it cp -r or wget or rsync…

    I’m using a script-lette I wrote 30 years ago, back before those things worked and properly saved permissions. The old tar cf - | (cd foo; tar xf -) with some added parameters. Why? Because I KNOW it always works and always preserves permissions and time stamps and such on ANY system, and I don’t really care if it takes 4 days or 6… Or at least don’t care enough to write, debug, test, prove and implement an rsync command…
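    For anyone who hasn’t seen the idiom, a minimal sketch (the mktemp directories and sample file are just for the demo):

```shell
# The old reliable: a tar pipe copies a tree while preserving permissions,
# timestamps, and (when run as root) ownership, on any Unix.
SRC=$(mktemp -d); DEST=$(mktemp -d)
mkdir -p "$SRC/data" && echo "payload" > "$SRC/data/file.txt"
( cd "$SRC" && tar cf - . ) | ( cd "$DEST" && tar xpf - )
```

    The rough rsync equivalent would be something like rsync -aH "$SRC"/ "$DEST"/, but as noted, the tar pipe has decades of “it just works” behind it.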

    There’s a time for sloth ;-)

    Besides, now I have a week to work on learning all the “Gotchas!” in rsync ;-)

  3. gallopingcamel says:

    All this stuff sounds really scary. Microsoft and IBM taking over the Linux world………Eek! I switched to Linux to escape Microsoft and IBM.

    This kind of thing does not bother Chiefio because he knows how to home brew something without having to pay tribute to IBM or Bill Gates.

    Oh great wizard Chiefio please tell me that Linux Mint will remain FREE!

  4. KnReLe says:

    One SBC that I know of, that has a USB3 port on it, is the Rock64
    I have one of these running here with an external disk (spinning 1 TB Seagate) for some testing, and it has been up and running since I started it early this summer. There is an RK3328 4-core processor, a 64-bit OS, and 4 GB of RAM in this thing, and it is similar in size and shape to the Raspberry Pi. Like the Pi3B+, it can boot and run off this disk, so there is no need for any SD-card after initial setup.

  5. Steve C says:

    I do agree with the “Sort Later Maybe” principle of operation. I shovel stuff onto “only” 1 and 2TB drives (haven’t needed more, yet) and it gets looked through occasionally for those “I’m sure I had …” moments. (And usually I find it, a few discs and several distractions down the line. :-)

    Only one Aw Shit moment in a fair few years doing it, and alas it was Mint, not the old XP machine, wot dun it. I had been keeping a subdirectory for a favourite comedy show, which was up to about 4 series at the time. Alphabetical sort, select and drag the next lot onto the subdirectory, then watched with growing WTF as the thing churned away for 30 seconds or so before declaring “Not a directory”. Somehow it has compacted the entire subdirectory into what it now tells me is a 1GB+ audio file which, surprise!, doesn’t play at all. I am not impressed. My Mint has a hole in the middle.

    Re Microsoft’s continual corroding of our right to compute, I wonder how far into *nix they’ll take their assault. Hasn’t the Apple OS been essentially a private *nix for a few iterations? It’s clear the plan is “One kernel ring to rule them all”.

    I’m pleased to see I’m not the only one afflicted with USB 3.0 that just doesn’t fly. The little external drives I save onto are all USB 3.0, but on every occasion I’ve plugged any of them into a 3.0 socket on the PC it has either not worked at all, or it appears and disappears as it feels. (It can’t even stream an MP3 all the way through, which hardly stretches its claimed capabilities.) Plug any of them into USB 2.0 and it works perfectly – and identical results with a USB 3.0 HDD dock, now permanently plumbed with a “retro” USB 2.0 cable. Needs further attention.

  6. E.M.Smith says:


    Define “FREE”…

    FWIW, I’d figured the “crisis” would come when Linus retired. That the Linux Foundation would need to find a similar mind with similar ideals to keep the kernel work going well. Now it looks like it is coming sooner. With big money buying seats on the board, and Linus under SJW attack, it does put the future of the kernel development in question.

    That said: IF you have a working machine now, you already have a working kernel. This will continue to work. Furthermore, the source code for it is already public so easy enough for some other group to step in and take over development from that point forward.

    The “stuff” wrapped around the kernel is properly called GNU Userland. It doesn’t come from the Linux Foundation. It comes from hundreds of individual folks doing their own development work. That is not changed by the corporate moves, other than that Red Hat was the source for significant numbers of patches and enhancements. There’s good news in this too. Since Red Hat started pushing a more “Corporate” agenda about 15 years back, their “style” has gotten more offensive. SystemD is only the latest of these things they have pushed. With them now being IBM, it is reasonable to expect more of the community being willing to “push back” on some of the directions set by Red Hat. Basically, all the folks who just “went along” because Red Hat were historically “good netizens” will now think twice and be more at liberty to say “Stuff It!” on that SystemD, or whatever it is that offends.

    Richard Stallman is the key player in GNU land.

    The second big question will come when he is gone. Has he cultivated a suitable replacement or team of true believers? I think so, but who knows…

    Overall, the Free Software Movement is now so large it isn’t possible to buy it up or derail it much. Somebody goes “too far” and those offended just “Fork a new branch” and ignore them. Devuan, for example. Bunch of us just saying NO! to SystemD.

    OK, per MINT:

    You didn’t say your provider, but I’m going to assume the history from here:

    Linux Mint is a community-driven Linux distribution based on Debian and Ubuntu that strives to be a “modern, elegant and comfortable operating system which is both powerful and easy to use.” Linux Mint provides full out-of-the-box multimedia support by including some proprietary software and comes bundled with a variety of free and open-source applications.

    The project was conceived by Clément Lefèbvre and is being actively developed by the Linux Mint Team and community.

    So it comes from Ubuntu that comes from Debian. NOT Red Hat.

    Looking at the chart of descent:

    You can see that Debian is a root source for a huge number of releases. They accept patches and some code from Red Hat (like their bad decision to adopt SystemD…) but if the code arriving is not acceptable, they can reject it. (As the Devuan folks did, leaving Debian in a split over that SystemD decision to form a new fork.)

    So “worst case” you would have the folks at Canonical (Ubuntu) splitting off from Debian as an upstream provider, or the folks at Mint doing such a split from Canonical as starting code base. At the limit, they would reach back to LFS (Linux From Scratch); all it takes is the staff to do the software maintenance work.

    The whole idea of “upstream” is that you leverage off the work of those folks so you don’t have to do it all yourself – most of the maintenance is done for you “upstream”. So you can either bypass them (and take more workload) or swap to a different “upstream”. If you look at that (giant!) chart of Linux descent, you find there are LOTS of choices but they stem from a few “upstream” roots. Red Hat is a big one, but not the only one.

    Debian is a giant one, then just under it is Slackware (who have avoided the whole SystemD thing, as they never even moved from the BSD-like init to the SystemV-like one ;-) Then you get to Red Hat (so those downstream releases may “have issues” and move their “upstream” if needed).

    There’s also Gentoo (Enoch) and Arch as roots. Both have more technical (less desktop) oriented downstreams (folks making routers, devices, etc.), though there are desktop users.

    Then at the bottom you see dozens of stand-alone releases. Folks who just grabbed a code base to start from (from anywhere they liked) and don’t depend on a given “upstream”. Nothing prevents any release from just stepping out like that. I made an LFS build in a few hours. I could easily customize it in some way, package it up, and call it ChiefiOS should I wish.

    At the very bottom you find the Android series as a self-rooted development.

    So, given all that:

    Nobody can quash any release as long as enough folks want it.

    Most of the kernel work is about adding support for new hardware types. Writing drivers for new peripherals types or code to use new features in new CPU types. The present supported hardware types will be sold for at least a dozen years more, so even if things froze today, you can keep on buying working computers for years. ( I exploit this by generally getting old “obsolete” equipment that the vendors of commercial software no longer support and putting Linux on it. One of my computers is about 25 years old and still runs… ) So even if Linus left town tomorrow, I’m “set” for my remaining probable life span – without doing any coding myself.

    That’s both the risk and the beauty of Open Source. ANYONE can decide to go their own way and see if they collect enough “followers”.

    Sidebar on Kernels:

    The really hard part is making one that works on 2000 different CPU types with 2000000 kinds of odd peripherals stuck into them. If you are just trying to make one small system “go”, it is much much easier. Scheduler, Memory Manager, some other bits, and device drivers. In Computer Grad School folks are often assigned to write one. (The guy who wrote Forth made the kernel of it in a few hours – yes, it is a special case, but it shows the minimal case. It runs direct on the hardware easily.)

    So even if Linus folded up shop today, it’s pretty easy for someone else to make a kernel that runs on a bit of hardware and wrap the Gnu userland around it. (essentially you need to write code to implement the system calls). This is what Apple did. They chose the Mach kernel and used it as their base. So that’s what I’d do.

    But I don’t expect any Bad Thing getting too horrible any time soon. It will take a decade more, IMHO.

  7. Soronel Haetir says:

    Re the reported disk size, my experience is that drive makers claim sizes in multiples of powers-of-ten while software continues to use the traditional powers-of-two values. They do this because it allows them to fudge the claimed size higher while remaining factually correct.

    8,000,000,000,000/(2**40) ~= 7.28, I’m not sure where the one that reports 7.5TB is getting its value from.
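    A one-liner confirms that first figure:

```shell
# 8 TB marketing (powers of ten) expressed in binary 2^40-byte units.
awk 'BEGIN { printf "%.2f\n", 8e12 / 2^40 }'   # prints 7.28
```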

  8. E.M.Smith says:

    Oh, and worth mention is that some folks who mostly do Linux also port their UserLand to run on top of a BSD kernel. Debian has a FreeBSD kernel option and Gentoo seems to run on all the major BSD types.

    I’ve not “gone there” just because I didn’t see any reason to. But, should Linus leave and the Linux Kernel starts being bogus, it IS possible to keep the UserLand you know and love and swap to a different kernel. Take some work? Yeah, but so does any new kernel…

    Gentoo/FreeBSD: Gentoo Linux developers; FreeBSD base; GPL / BSD licenses; Server, Workstation, Network Appliance; uses the Gentoo framework.
    Gentoo/OpenBSD: Gentoo Linux developers; OpenBSD base; GPL / BSD licenses; Server, Workstation, Network Appliance, Embedded; uses the Gentoo framework.
    Gentoo/NetBSD: Gentoo Linux developers; NetBSD base; GPL / BSD licenses; Server, Workstation, Network Appliance, Embedded; uses the Gentoo framework.
    Gentoo/DragonflyBSD: Robert Sebastian Gerus (project not yet officially supported by Gentoo); DragonFly BSD base; Server, Workstation, Network Appliance; uses the Gentoo framework.
    Debian GNU/kFreeBSD: the Debian GNU/kFreeBSD team; GNU userspace on the FreeBSD kernel; first release 2011-02-06; latest 7.5 (2014-04-26); DFSG licensing; general purpose.
    Debian GNU/NetBSD: the Debian GNU/kNetBSD team; GNU userspace on the NetBSD kernel; DFSG licensing; general purpose; abandoned.
    MidnightBSD: Lucas Holt; based on FreeBSD 6.1 beta; first release 2007-08-04; latest 0.8.6 (2017-08-28); BSD license; Desktop; GNUstep-based Desktop Environment.

    FWIW, the archive copy is now 2/3 done ( 4.x TB out of 6.x…) so only a couple of more days to go…

    I remember a time when Windows NT could not stay up that long. At Ericsson we had a process of rebooting all the NT servers every Friday, otherwise they would hang early in the following week. So every Friday the whole company did a reboot… Compare BSD, where I’ve had systems up and running for YEARS without a reboot. One I took down because the fan on the power supply was failing… (I was new at the company, did my usual ‘walk behind the racks’ with the back of a hand facing them to sense air flow and temperatures… that one was hardly blowing at all. It was sporadically turning a little, but gummed up…) First time it had been shut down in over a year (IIRC it was about 1.5 years).

    And some folks wonder why I like BSD and Linux ;-)

  9. E.M.Smith says:

    One odd bit: The process of the copy is almost always “D Disk Wait” on the creation of the pipe (reading the LVM disks). You would think with 2 spindles there would be less of an issue, but perhaps the data are a bit more scattered, requiring disk seeks? Or it is just less efficient in that mode.

    BUT, it’s at 30 CPU hours for the extract (writing) and only 10 CPU hours for the reading (creating).

    Now one point is just that the “from” disk is ext4 and the “to” disk is ext3, so perhaps the journal or update of access times is causing it…. Nope! Just looked at /etc/fstab and it is mounted “noatime” so there are no updates for access time. Curiouser and curiouser…

    As a reminder, I’m using ext3 since there is a version difference in ext4 where moving from “older” to a “newer” system changes the file system in such a way you can never go back to an “older” system and read that disk. Not an issue for an NFS mounted LVM volume with only the server ever seeing it; big issue for a disk that is expected to move around between systems. So I’ve slowly converted almost all my disks back to ext3. It works fine everywhere.
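    For reference, the relevant /etc/fstab entry looks something like this (the device name and mount point here are made up; noatime is the part that matters):

```
# ext3, no access-time updates on reads
/dev/sda1   /archive   ext3   defaults,noatime   0   2
```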

  10. E.M.Smith says:

    5/6 ths done! Only one TB left to go…

    Maybe tomorrow it will finish…

  11. E.M.Smith says:

    And the COPY is DONE!

    After a mere 7221 minutes 24.35 seconds, or about 120 1/3 hours, or almost a round 5 days (5.0148).

    Woo Hoo!

    Now I can get on with the rest of this stuff…

    FWIW, there were 83 minutes of “User” time and 3701 minutes of “system time” reported and watching top the “tosser” was at about 16 1/2 hours of CPU while the “catcher” was about 46 1/2 hours of CPU time. (Top itself has racked up 7 hours 26 minutes…)

    Basically I’m putting the new disk on the file server SBC, setting aside the LVM group for “do something later” and cleaning up the desktop now that the real estate and wires needed can be reduced.

    Then I can get back to the business of making a Devuan 2.0 secure system I like ;-)

  12. H.R. says:

    *champagne cork*

  13. Steve Crook says:

    > That makes it closer to $18 / TB or with California Sales Taxes almost $20 / TB usable.

    ROFL. I have to think back to when I started my programming career, on a mainframe, when the disks were in large packs of… 20MB. 6-8 platters about 12″ in diameter. Thing is, 20MB seems so ridiculously small I have to constantly remind myself it wasn’t at least GB.

    It’s an amazing world we live in.

  14. E.M.Smith says:

    @Steve Crook:

    Then there’s that hard disk tech I started with… The Winchester Drive. Named for the 30-30 Winchester Rifle, as it was to have 2 spindles of 30 MB each (and about 24 inches diameter…)

    From the wiki:

    In 1973, IBM introduced the IBM 3340 “Winchester” disk drive and the 3348 data module, the first significant commercial use of low mass and low load heads with lubricated platters and the last IBM disk drive with removable media. This technology and its derivatives remained the standard through 2011. Project head Kenneth Haughton named it after the Winchester 30-30 rifle because it was planned to have two 30 MB spindles; however, the actual product shipped with two spindles for data modules of either 35 MB or 70 MB. The name ‘Winchester’ and some derivatives are still common in some non-English speaking countries to generally refer to any hard disks (e.g. Hungary, Russia).

    The photos in the wiki show how huge those things were.

    I continue to be amazed at the low cost and high data density. Unfortunately, “continuous use” stats suffer greatly. These things are designed for sporadic use and must spin down the spindle often to prevent dying in short order.

    But yeah, if you told me when I was at Apple (and a 20 GB disk was the new Big Deal) that I’d have 20 TB sitting on my desk top, I’d have said you were nuts. All up I think I’ve got somewhere around 30 TB. I’ve drained most of the data off of all my older 500 GB, 1 TB, 2 TB, 3 TB etc. disks and I’m semi-consolidated on the LVM group (now the 8 TB disk) and backup copies.

    Biggest problem now is just finding the trash to throw out AND finding the good stuff you want.

    “A Find is a terrible thing to waste! -E.M.Smith”


  15. E.M.Smith says:

    FWIW, I spent much of last night doing just that. Trying to clean up GB of crap left in various places over the last couple of years. When ext4 came out with the “new” version that would make it non-compatible with the old version, I had to suddenly duplicate my Home Directory in several places. Then the Monster USB drive died (reached read-write limit) and I had to restore that mobile home directory to a couple of other places. Then the whole SystemD thing and I abandoned some systems (and the associated home directories / saved stuff).

    Well, I’d mostly done that by just not mounting a partition on a drive, or putting a whole drive in a drawer, or sometimes compressing the image onto an archive.

    Heck, I’ve got something like 3 copies of the 500 GB disk from my old HP laptop. (The fan died, so I gently sucked an image out and left it turned off since.) So 1.5 to 2 TB of “whatever” was on that disk – some of which was backups of other older 100 GB disks!

    Sporadically I’ve gone back and unscrambled some bits of it, but there’s still more to do. For a while I was making more “splatter” faster than I was removing the old. Now not quite so much.

    So one thing I’m doing is just making “by kind” archives. JPEGs, MP4s, PDFs, .IMG, etc. Then I just work through some archive or old system image and toss an object into that folder. If it is a duplicate name, it hollers, and I can choose “same or different” and toss, keep newest, keep both, whatever. I figure in about 2 years I’ll be done ;-)
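    In script form the sweep is nothing fancy. A sketch (directory names invented for the demo; mv -i is what does the “hollering” on a duplicate name):

```shell
# Sort loose files into "by kind" folders, prompting before clobbering dupes.
SRC=$(mktemp -d); DEST=$(mktemp -d)
touch "$SRC/vacation.jpg" "$SRC/manual.pdf"   # stand-in clutter
mkdir -p "$DEST/JPEG" "$DEST/PDF"
find "$SRC" -type f -iname '*.jpg' -exec mv -i {} "$DEST/JPEG/" \;
find "$SRC" -type f -iname '*.pdf' -exec mv -i {} "$DEST/PDF/" \;
```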

    I lose some historical data (what was I working on at a point in time?) and for some cases I need to keep a “package” together (like a Linux release with .img, .txt, etc. files in a folder), so one of my “kind” choices is “Software” and it goes by folders for a product / release. That’s actually the 2nd largest block. (Temp data scrape is #1.) At some time I really need to decide if I need my canonical archive of Centos 3.0 and Centos 6.0. One runs on old gear; the other is the last one prior to SystemD(efective). We’ll see. I might just pick some old 1 TB disk I don’t care about much anymore, move the deprecated software onto it, and chuck it in a drawer.

    I’m also to the point where I’m pretty sure where my OS choices are going in the future, and which past ones I’m never returning to. So things like Ubuntu on the Pi? Just a bit too fat and slow, and the UI annoys me (things like moving a panel to the edge of the screen cause it to do crazy stuff: it goes away or rolls up or something). I ALWAYS put a ‘top’ panel right at the upper edge, so this always drives me around the bend. So why keep copies of it around?

    I’m pretty much set, now, on ARM based software on SBCs. I still have two Intel based machines in use, but one is dying and the other uninteresting in use. The Mac is not long for this world. It is displaying strong EOL aspects. (Balding keys, SSD died so running from external uSD, can’t upgrade SW since a couple of years back…) Then the Chromebox is a bit of a pain to use and very “snoopy”. Makes a nice media station though. I think it will be going out of support soon too (It’s about 6? 8? years old now). Then I’ll try putting a Linux on it ( IF I can find an old one that still runs on them). So mostly I’m just looking at newer ARM SBCs and wondering “Why keep the old crap now?”. Nostalgia?

    Why keep an old 32 bit 256 MB Pentium class machine and all the software releases for it for a decade? Eh? I’ve got 3 or 4 of them in the closet and garage. Am I REALLY ever going to fire it up again when I’ve got a quad core (or octo-core) 1.2 GHz 2 GB memory SBC on the desktop?

    Um, no.

    Only thing that’s really an issue is that I have a large stack of CD and DVD “backup and archive” disks and one by one my CD / DVD drives have died. New equipment comes without them… So at some point I need to move off of that “forever archive” just because the readers are not forever… and I think “sooner” is before moving to Florida (and “later” may be before that too…).

    It is a lot like the garage problem. All that “stuff” just stuck out of sight for “someday I’ll get it cleaned up” and then “someday” comes ;-)

  16. Larry Ledwick says:

    I moved to that archive by type system about 4 years ago, all the PDFs get moved to one file in the archive, all the txt files to another etc, between type and date range you can usually find what you are looking for fairly fast.

    Then once I got a copy of everything in those files I started renaming files with more descriptive names if I could not figure out what they were about from the original name. This was mostly a problem with the very old files created before long file names were supported.

    At some point I will probably do what I did with photography files and start saving them by year of creation. Right now all the images created this year get put in one file system and same with prior years. In most cases I can guess +/- 1 year when a picture I was looking for was created, and that method block sorts things into manageable piles. Then within the year I break them down under broad categories like “landscape”, “people” “severe weather” etc.

    I had to do that because a while back I did an image count and found I had several hundred thousand unique images in several 2 TB drives. Sometimes I double up and keep a group of shots taken at the same time all together in a sub directory, and also include copies in a sorted by age file system. Luckily the camera puts a unique image number on each image and when I rename it I leave that number as the last element in the file name so once I narrow down a date range for an image I can also do a brute force search for a block of images with similar id numbers in their image name.

    Like you I need to cull those images and toss all the hopelessly bad images but just never find the time (I have an aversion to destroying data and keep almost 100% of the images I shoot unless they are absolutely useless like a black image or a picture of my shoes.)

  17. H.R. says:

    @E.M – why keep an old cassette recorder around, let alone those old computers?

    There is a lot of good information stranded on media that no longer has a readily available device that is able to access that information. How much valuable information is stranded in old media? I dunno.

    It may be valuable in the ultimate “aww sh!t” scenario. Some technology has advanced step-by-step to where no one can recreate the top level from scratch. However, if you can go back a ways, you might get to the point where there’s a good handle on the tech and then rebuild the last few steps from there.

    It’s somewhat analogous to your observation of the tech levels one should expect in a survival situation; stone age? Renaissance? 1700s or 1800s? 1930’s? 1970s?

    Hey! I’m just trying to give you a little justification for pack-ratting that stuff.😜

    “I’ll give up my Commodore 64 when they pry it from my cold, dead fingers.”
    ~ H.R.

Comments are closed.