I’ve been tepidly looking for documentation on which CPU is the slow one and which the fast one when looking at a “top” or “htop” command output for the Odroid XU4. It has 8 cores, 4 of them slower Cortex A7 type and 4 of them faster A15 type. Well, I got tired of the occasional poking around not yielding much, so decided to just test it myself.
In theory, an operating system can be tuned for maximum performance or for minimum energy consumption, as desired. The intent of the “Big / little” architecture is to let you make a system using a SBC (Single Board Computer) with an ARM “Big / little” chip in it and have it use very little power when idle, but ramp up to high performance when needed. The idea being that you use the A7 (lower power and lower speed) cores until they are not enough, THEN you jump up to the A15 cores to “get ‘er done”.
But in the “htop” command, the usage bars typically are in cores 5, 6, 7 & 8 with only minor blips up to cores 1-4 when doing normal things like running a browser and having a terminal window open. So were cores 1-4 the A15 cores? And if so, why, when something demanding launched, would it start in cores 5-8 and then stay there?
Well, the answer is that cores 5-8 are the high performance A15 cores, and Debian just starts things on them most of the time by default. Now this chip does have frequency scaling and some other power management built into it, so that isn’t as wasteful as you might think. Running an A15 core at a low clock rate is not going to use much power at all, and ramping up the clock with demand lets you easily gain performance without a lot of fancy scheduler work to move the process to a different type of core.
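If you’d rather not benchmark to find out which cores are which, the kernel’s cpufreq sysfs files report each core’s maximum clock. A sketch (paths assume a Linux kernel with cpufreq enabled, as the XU4 images have):

```shell
# Print each core's maximum clock from the cpufreq sysfs tree. On a
# big.LITTLE chip the two clusters report different maximums, which
# tells you which cores are the fast ones.
show_core_freqs() {
  for c in /sys/devices/system/cpu/cpu[0-9]*
  do
    f="$c/cpufreq/cpuinfo_max_freq"
    if [ -r "$f" ]
    then
      echo "${c##*/}: $(cat "$f") kHz max"
    else
      echo "${c##*/}: no cpufreq info readable"
    fi
  done
}

show_core_freqs
```

On the XU4 the A15 cluster reports the higher number, so the fast cores identify themselves without any test jobs at all.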
In essence, the XU4 under Debian (Devuan uplift or not) really acts like a quad-core 2 GHz A15 chip that can glue on 4 more cores of A7 1.4 GHz performance if needed; and it sporadically tosses very small tasks to the A7 cores under normal use (there are tiny blips of use of them in htop).
I’m OK with that. For this board, it is run from mains power, so it isn’t like I need to save every Watt of power from a Li-ion battery… I’d rather have the bigger cores running and avoid the scheduler action and context switch penalty.
I made a little script called “looper”. It lets me load up a core with a task that has no I/O, so it pegs the CPU at 100% “doing nothing really”, but without interacting with other shared systems (like I/O) that might cause it to enter wait states. “bcat” is a little script that prints out my personal scripts from my “bin” directory. A specialized “cat” (concatenate and print) command, if you will. Saves me typing a “cd ~/bin” when I want to look at one of my scripts ;-) Over the years, that kind of ‘mini script’ can save hours of typing …
chiefio@odroidxu4:~$ bcat bcat
cd $HOME/bin
more $*
I have another one, ‘cmd’, that does a “cd” (change directory) into my bin directory and pops up a given command into the vi editor (or opens a new session to make a new command) and then sets the permission bits to ‘executable’.
chiefio@odroidxu4:~$ bcat cmd
cd $HOME/bin
vi $1
allow $1
And yes, “allow” is another one of mine. It does a chmod +x $1 … I actually wrote it first, because I’d write something and forget to set the execute bits and I just wanted to say “allow it, damn it”… so I did ;-)
But that’s all a digression to explain why I have “bcat looper” to show what is in “looper”. It also nicely shows how shell scripting is a threaded interpretive language and some of the advantages of a library of “words” you create to do things for you in Linux / Unix land. In many ways, the Linux I use is full of commands different from the one everyone else uses. My own shorthand. So I first wanted to just allow things to run, then I wanted a command to make commands, then one to just look at them, then… All of them saving me typing and remembering for me exactly which option flags I wanted to set.
But, back at “looper”:
chiefio@odroidxu4:~$ bcat looper
i=0
lim=$1
while [ $i -lt $lim ]
do
i=$(( i+1 ))
done
echo $i
It basically just takes one argument, “$1”, that is the loop limit. I used 1,000,000 in my runs. It has a counter ($i) that gets initialized to zero. Then it has a “do math only” loop from 0 to $lim. (I could have just used $1 there, but $i and $1 look similar, so I stuffed the value into $lim, which is easier to notice is different and is a limit.)
So the “while” loop looks for less than limit, and once the limit is reached, goes on down to print out, or “echo” the final value reached by the loop counter. Inside the loop, I just increment the loop counter by 1 each pass. Basically, it’s a count to a million loop with the parameter I passed into it. Again, not liking to type things over and over, why type the argument “1,000,000” 8 times when I can make a new word to do that for me? I named it loop1 for loop 1 million.
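As a quick sketch (my addition, not part of the original looper), here is the same loop wrapped as a function with a little argument checking:

```shell
# looper as a shell function, with minimal argument checking added.
# Pure CPU work: count from 0 up to the given limit, then print the count.
looper() {
  lim=${1:?usage: looper limit}   # bail out with a usage note if no limit given
  i=0
  while [ "$i" -lt "$lim" ]
  do
    i=$(( i + 1 ))
  done
  echo "$i"
}

looper 1000   # prints 1000
```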
chiefio@odroidxu4:~$ bcat loop1
time looper 1000000
I also put the “time” command in front of it so it will report the “real” or elapsed time, how much was “user” time in the script and how much was “sys” system time doing overhead to run the script. I then launched 8 of these into the background.
loop1 > loop1& loop1 > loop2& loop1 > loop3& loop1 > loop4& loop1 > loop5& loop1 > loop6& loop1 > loop7& loop1 > loop8&
Note that the “&” puts a given command running in the background, and the “greater than” sign sends the output to a file. (In this case that sends 1000000 into each file, a kind of silly thing to do really, but it gets that out of the report from the “time” command, which by default does not go to the “standard output” but goes to your screen as the “standard error” device.) So this long line rapidly launches 8 jobs and sends the output of each to a different file in my current working directory.
I hit “enter” and watch “htop”. The usage bars go to 100%, starting in cores 5-8, and then filling cores 1-4. After 28 seconds, the first batch of four completed; then after 1 minute 2 seconds, the second batch finished. Of interest to me was that, once running in a slower A7 core, even after the A15 cores were no longer busy, the processes stayed in those cores. The scheduler didn’t move them. I would expect that any tight loop would be treated that way, only moving a process on an interrupt of some sort. That also likely explains why they set up the OS to start with the A15 cores. It would take some new code to do the “launch an interrupt and move a process to a different faster core IF it is at 100% in a slow core and an A15 core is available”, and that code isn’t written yet. Basically, they didn’t want to fool around with the scheduler in the first porting effort to this board (or nobody volunteered to do it – schedulers are tricky things).
So here’s the output to the screen:
real 0m27.959s  user 0m27.800s  sys 0m0.015s
real 0m28.246s  user 0m28.135s  sys 0m0.005s
real 0m28.284s  user 0m28.130s  sys 0m0.020s
real 0m28.316s  user 0m27.995s  sys 0m0.005s
real 1m1.838s   user 1m1.815s   sys 0m0.015s
real 1m1.845s   user 1m1.835s   sys 0m0.010s
real 1m1.850s   user 1m1.825s   sys 0m0.005s
real 1m2.338s   user 1m2.220s   sys 0m0.015s
You can see that the 4 A15 cores finished first, then I got to sit here staring at the htop display showing cores 1-4 pegged at 100% for another half minute until the last 4 results came in.
All in all, from first sitting down at the keyboard until results were known was about 4 minutes. Far far faster than writing it up in this article. That’s what I like about shell scripting in *nix, and personal mini-script tools. Things can be done “right quick” ;-)
So now I know a bit more about how to use this board. Make sure, IFF I’m going to load it up with hard tasks, to launch the 4 biggest and hardest ones first, then load up the rest of the cores.
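Another way to get that, rather than counting on launch order, is to pin each job to a chosen core explicitly with taskset (from util-linux). One catch: taskset numbers cores from 0, so htop’s cores 5-8 are cpus 4-7 to taskset. A sketch (the pin_count wrapper is my own naming):

```shell
# pin_count CPU LIM: run a small counting loop pinned to the given core.
# taskset numbers cores from 0, so on the XU4 "-c 4" through "-c 7"
# would force a job onto the A15 cluster.
pin_count() {
  taskset -c "$1" sh -c 'i=0; lim='"$2"'; while [ $i -lt $lim ]; do i=$((i+1)); done; echo $i'
}

pin_count 0 10000   # runs entirely on cpu 0, prints 10000
```

With that, the scheduler’s core-type stickiness stops mattering: you decide up front which cluster each job gets.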
Still TBD is to run the test with, say, 16 loopers running and see if ‘taking interrupts’ has them all finish equally, or if the scheduler has core type stickiness based on first dispatch core type.
This is not the kind of information the average desktop user would need to know, but it does matter if using the board for ‘distcc’ compiles of whole operating systems (where you load up all the cores in the cluster with jobs and final completion time depends on how fast each job gets done) or in running things like models and simulations. But now I know. Load up 4 “big ones” and let the rest take the small ones.
Well, doing the “16 jobs” run took all of about 2 minutes to get written and done…
The script to launch 16 such jobs:
I did this without redirecting the output, so ignore the 1000000 repeated in the output below.
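A launcher like that can be sketched as a simple loop (the run_loopers name is mine, and the real script presumably wrapped each job in “time” as in the 8-job run):

```shell
# run_loopers N LIM: launch N background copies of a pure-CPU counting
# loop, each writing its final count to its own file, then wait for the lot.
looper() {
  i=0
  while [ "$i" -lt "$1" ]; do i=$(( i + 1 )); done
  echo "$i"
}

run_loopers() {
  n=$1 lim=$2 j=1
  while [ "$j" -le "$n" ]
  do
    looper "$lim" > "loop$j" &
    j=$(( j + 1 ))
  done
  wait    # block until every background job has finished
}

run_loopers 4 10000   # writes 10000 into loop1 .. loop4
```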
Ten finish in about 1 min 13 seconds. Six more take variously longer, about 1 1/2 minutes each, plus a few seconds for the slower ones. At the end, the A7 cores again had 4 jobs pegging them, and the A15 were empty (other than the OS / browser).
I can’t tell for sure, but it looks like as long as there are interrupts the jobs share cores, and at the end, the ones in the A7 cores finish in the A7 cores. It looks like if a given task gets assigned to an A7 core it stays there under 100% load with excess tasks. I.e., did the 6 jobs taking 1 1/2 minutes all start and stay in A7 cores? I think so, as the “user” time remains at about 1 minute for each of those jobs.
It could stand some more testing, but it looks like the scheduler assigns a task to a core type and then leaves it there until done, even under conditions of interrupt preemption.
@EMSmith; Interesting experiment on that 8core. Not sure if I see any long term value in that BIG/little setup. Kind of like having 2 engines in my big truck. 1 small for freeway hauling and a big 1 for mountain work. and hope the little one can help the big one on the steep grades.
I suppose the designers had a good reason at the time when they laid the thing out, but it escapes me…pg
I do some similar small scripts as aliases, but really the same concept. Instead of typing out:
sudo su - mysql
I type cms which is the alias for that command string.
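In shell terms that’s just an alias definition in the rc file (assuming bash or a similar shell):

```shell
# In ~/.bashrc (or your shell's rc file): a three-letter alias
# standing in for the longer command string.
alias cms='sudo su - mysql'
```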
Interesting look at how they did the internal architecture. I know when Beowulf clusters were a big thing in Linux, they found that for maximum efficiency of the cluster you needed to choose a cpu that just barely got done with the work unit when the system got back to it to see if it was still busy.
No point constantly checking on a slow cpu “are you done yet”?
And no point giving a work unit to a really fast cpu which finishes and sits there idle for a long time before you get back to see if it is done.
Like bowls of porridge, you want it “just right”.
Building a lot of those Odroid XU4 boards into a massive cluster, that ability to choose the speed of the cpu used might be very helpful on massive compute problems, where you could manage processing time so that all work units ended at the right time for optimum performance and minimum wait time.
Reminds me of the good old four-barrel carburetors. Two smaller barrels in front, for everyday driving (sipping gas, as it were), and two larger ones in back, for when you’d punch it….
Ahh, those were the days :) (Gas was MUCH cheaper then, too).
Not completely on topic, but related. I had been considering Qubes as a more secure OS alternative, but I like your idea of using different, smaller machines for various on-line activities. Maybe someday when I have a chunk of time …
But one thing about that setup bothers me. It may not be a problem for on-line security, but overall I think it is a security issue. SD cards have many more memory cells than they need, and a cell load-leveling manager is used to ensure the memory cells are used uniformly. This, as you know, because the cells “wear out.” Like a magnetic hard drive, when a cell is “erased”, it actually isn’t. It’s just not used for a while, at the whim of the cell manager. So data is still in the inactive cells and can be read. This might even expose this data online, but it would definitely be there if someone took possession of the SD card.
What are your thoughts on this?
I have thought of that issue with solid state drives (the same as SD cards in that regard). One way around it would be to periodically write very large files of random data to the device to overwrite those cells. (Of course that would add to your write cycles and slightly shorten the life of the device; it would also miss cells which have been retired and to which additional writes have been blocked.)
Perhaps a small loop script that creates pseudo random character strings, runs them through a hash algorithm, and writes the hashes out to the device until it fills up, then deletes all those junk files, leaving the free cells populated with those hash string values.
Even on a fast device it would have some time overhead to write out to the device.
I am sure there are some disk shredder utilities out there that already will randomize unallocated memory cells on ssd and flash memory devices.
It is very helpful in tablets and cell phones, where battery size and life drive everything else. Every microWatt is treated like gold, as it drives sales worth $Billions.
I was very surprised to see that all 8 cores can run together. Not the design use case due to heat production and power demand, but good for my needs.
That’s the idea…
Everything depends on your threat to defend against. For my financial box, it is random viruses or malware letting bad guys in. Those usually come via email, clickbait, or malware websites. By doing email and web browsing on other, disjoint systems, they are not able to get into the financial system.
Per empty block data:
That exists on hard disks too. In fact my first system crack was on a B6700 using FORTRAN. A file opened for random read/write was not zeroed out, as the compiler didn’t know if it was an old or new file. Swap was not a dedicated slice, but shared the free pool of the general files. I’d just open a few megabyte file for random access and then search it for “assword:”. Kept me in computer time nicely… (College undergrads got the smallest allocation, and I wanted to learn new tricks faster, but that needed more cycles…)
So my standard kit includes a command named MB that writes a megabyte of crap (sometimes just zeros; some versions on MS Windows boxes seed it with repeated copies of MS Word, Excel, and Lookout er Outlook…) then repeatedly copies A concatenated onto file B, then B concatenated with A, repeat until disk full. Usually just a minute or three on small drives. TB hard disks take hours…
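A minimal sketch of the same idea, using dd and /dev/urandom instead of doubling a seed file (the scrub_free name and the 1 MB chunking are my choices, not the real MB script):

```shell
# scrub_free DIR MAX_MB: fill free space with random junk files, then
# delete them, leaving the freed cells holding noise instead of old data.
# Pass a huge MAX_MB to really run the disk to full.
scrub_free() {
  dir=$1 max=$2 i=0
  while [ "$i" -lt "$max" ] &&
        dd if=/dev/urandom of="$dir/junk.$i" bs=1M count=1 2>/dev/null
  do
    i=$(( i + 1 ))
  done
  rm -f "$dir"/junk.*    # free the space again
  echo "scrubbed $i MB in $dir"
}

scrub_free /tmp 2   # demo: write and remove 2 MB of junk
```

Note it still can’t touch cells the wear-leveler has retired, as mentioned above.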
The SD card has the feature that a pair of dikes chops it in two swiftly and easily if expedience is expected, though I like the application of a neon sign transformer myself. Something about 12 kV and silicon chips not playing well together…
Oh, and I did a run of 64… looks like the allotment of jobs favors the fast cores at just the right balance that most cores run to empty about the same time. It seems to just schedule any given job to one core type, but more jobs to the A15 set in just the right number (assuming the jobs are all equally long).
Not related except to be glad you’re using RPi! CIO has brought this up before, but it’s worse than we thought. From the article:
he reported that systems using Intel chips that have AMT, are running MINIX.
Neither Linux nor any other operating system have final control of the x86 platform
Between the operating system and the hardware are at least 2 ½ OS kernels (MINIX and UEFI)
These are proprietary and (perhaps not surprisingly) exploit-friendly
And the exploits can persist, i.e. be written to FLASH, and you can’t fix that
In addition, thanks to Minnich and his fellow researchers’ work, MINIX is running on three separate x86 cores on modern chips. There, it’s running:
TCP/IP networking stacks (4 and 6)
Drivers (disk, net, USB, mouse)
MINIX also has access to your passwords. It can also reimage your computer’s firmware even if it’s powered off. Let me repeat that. If your computer is “off” but still plugged in, MINIX can still potentially change your computer’s fundamental settings.
“this behaviour is by design” ;-)
I stopped buying Intel based machines when the whole UEFI and AMT / Management Engine stuff started coming out. I was able to last a long time on old Intel / AMD hardware, and during that time I planned my path and executed it (if slowly… as faster hardware finally arrived…)
At this point, I’ve retired my Intel based systems, mostly. Most of them are now just “archival” in that I’ve not scrubbed and tossed them yet, so in a real “aw shit” case I can plug them in and recover them for use again or suck a backup copy of very old data off. I’ve still got an Intel based Chromebox in use, but it’s relegated to just driving the TV with videos and sporadic generic web browsing for entertainment purposes. Basically, I really don’t care if someone finds out I’m watching a Youtube comparing the C.H.I.P. to the Pi Zero at the moment ;-)
Sidebar on that: The CHIP, with A8 core, is faster than the Pi Zero, and with built in WiFi could be used in a ‘wireless cluster’ headless. The A8 core runs the same instruction set, 32 bit v7, as the Pi and could be integrated into my “distcc” compile cluster easily. There is a question of library levels and keeping things in sync with hybrid clusters, but manageable I think. In any case, a small cluster of C.H.I.P. machines would have each core with its own heat sink for good heat management, 4 GB of on board flash with nice speed, a fast A8 core and WiFi for the connectivity so no rats nest of wires and ethernet hub / switch. Once configured with fixed IP address, it would just take a power cable to it, and boot it up. Having nodes you can add by just plugging in a USB power source at about $12 including power lead and cheap PSU is an interesting thought.
So I don’t care if someone cracks into the Chromebox box, or if Google / YouTube etc. etc. know what tech videos and music videos I watch on the TV. It has no microphone, no camera, and is powered down 95%+ of the time anyway.
I’ve got one old HP laptop plugged in on the desktop just in case I need a Wintel box. So far, it’s been powered off for about a year and a half ;-) I’m thinking it’s about time to take the last Wintel bits and make more room in the office ;-)
So yeah, I’m not running their crap. Just not going to happen. I’ve found the Odroid XU4 is just a dandy desktop (despite some minor quirks from my personal choice of an odd duck mix of OS, that being an Armbian version of Debian “uplifted” to Devuan; the Ubuntu is likely more comfortable and certainly more debugged for most folks). The C2 works well for limited things, so has been assigned to “financial transactions” until such time as the 64 bit OS gets a bit more functional (browser gets fixed) and stable. I figure about a year. Then the Pi M3, being a bit slower but more stable and “rock solid”, is used for all things internal and requiring more assurance of correctness. None of them is particularly easy to crack, and I’ve wrapped operational behaviours around them to make it very hard to “get to my stuff” (since at any one time, most of it is powered off, and what is powered on is of limited use case and isolated from other use cases).
The Pi M3 is just a bit slow for full comfort as a desktop. It’s very much “good enough”, but would be better if just a little faster. Yet everything “just works” as the user base is gigantic and the software mature. I’m likely to buy the Pi M4 whenever it comes into existence ;-)
The Odroids are very much fast enough, but with more rough edges in the software arena, made worse by my choice of a kludge OS most folks will not be running (i.e. a very young port with very narrow user base, Armbian uplift to Devuan, and on newer kernels so as to get cross-cluster compatibility on ext4 file systems), exacerbated by my use of an HDMI / DVI adapter instead of a direct HDMI monitor. I’ll likely get a 24 inch HDMI TV for the office just to make that issue go away (and let the C1 and Orange Pi both drive it so I can run them other than headless if desired…). But for folks with a real HDMI monitor, and running a more common “spin” or “distro” of Linux, they would have fewer quirks to deal with and more than enough speed.
All in all, at this point, I’m just not seeing any “new” hardware I want to play with. I’ve got more computes than I can effectively use almost all the time, already, and a desktop experience that’s “just fine” for all the things I do. Given that, I can see no need at all to pay any money to Intel or Microsoft to buy their borked products with NSA Approved Hobble built in.
Sidebar on Clean Slates:
Were I “starting over”, first off, I’d not have the legacy Pi B+ or Pi M2’s to deal with. I’d just get Pi M3 arch. I’d have a real HDMI monitor / TV, and likely just run the Odroid C1 (32 bit and well supported v7 architecture) as one board of “external browser box” that doesn’t need to be integrated into everything else (different network anyway). While much faster, the C2 and the XU4 have “strange” architectures that run newer codes. One is 64 bit, and the other has two types of cores in it, so things like the scheduler and linker / loader are different. Video drivers are a bit odd too. So I’d have a Pi Stack of M3 boards for the compile / model cluster / misc. servers and internal desktop; an Odroid C1 as the external exposed system; and a swap of SD card for the “financial services” box use case. Even that would likely be overkill.
OTOH, I like playing with toys at the edge, so I’ve had lots of fun with the C2 and XU4 ;-) Even the Orange Pi Zero was a fun experiment with minimal cost, high performance. While currently running headless as a file server / site scraper, with a nice heat sink on it and driving a real HDMI monitor it ought to be quite usable as a very cheap but good enough station. They all pushed me to learn a bit new about the video settings and “issues”, along with being good examples of heat management (the Odroids being at the excellent end with giant heat sinks, and the Orange Pi at the cheap end with no heat sink, so you can run one core of the 4 before it heat limits ;-) but that illustrates the value of a big heat sink, or only one core / board in a cluster …)
But Intel? We don’t need no steenking Intel! ;-)
And, you’re having fun :) I’m still running AMD, a few years old. While I don’t have anything to hide, I simply, on principle, don’t appreciate all the spying by our government. I don’t appreciate all the built-in back doors that make it easy for hackers to get into my machine.
I still have a RPi3, but haven’t done much with it other than get it running. I also have an extra router waiting to participate in a sub-net. Alas, between professional work and upgrading the house, I’m still pretty busy even though the kids are long gone.
I think all my boxes are either AMD or pre-ME Intel. (Some are very old “white box PCs” that I upgraded from 486 CPUs to AMD 400 MHz Pentium-class chips ;-) via motherboard swaps about 15? years ago… or maybe longer.) I’ve got the Chromebox that’s Intel, and only about 5? years old, but it’s more a Google Leakage issue, IMHO. IIRC, the chip set in it was lower end and not with added “features” for management.
That just leaves the inherited Macbook. It’s some kind of Intel, and about a decade old, but I just don’t care. Running from an external mini-SD card (due to SSD death), it’s slow in any case. If someone adds ANY increased activity, it’s going to be molasses slow and I’ll know it. I only use it for sporadic web browsing and as a remote terminal server anyway. It’s too painfully slow for anything else. On my “to do” list is to assess the CPU et al. and perhaps buy the $200 replacement SSD and install it. It can’t take any further Mac OS upgrades, so that’s a fundamental limit on security right there. I’d have to sign up for a Linux install too, to use it for much more than now. So “someday” I’ll need to try booting it with Linux from a different SD card, then make my decision.
My present leaning is toward just getting a real Linux Laptop on a damn fast ARM SOC, but I might just build my own for the fun of it.
And yes, I AM having fun ;-)
There’s something about living on old cast off computers that’s liberating. You just don’t worry about what you do with them. Blow it up, melt it, rip the guts out and build something else, who cares? ;-) Then the Pi is so cheap it’s just a nothing worry. When you find the gas fill up costs 2 Pi boards, it puts it in perspective. So IF I avoid one dinner and a movie out with the spouse, I can buy 2 high end boards, or 3 Pi boards with PSU. Since they will consume far more entertainment time than the movie and dinner, it’s a good decision to put my play time there ;-)
As to why I care about security:
Well, I have zero to hide too, which is why I post my stuff for the world to see. Want to know what I’m thinking and doing? It’s all here. However, I have a professional interest in security (having been responsible for it at the corporate level since about 1983 at major companies) and I not only need to keep my skilz up, but enjoy The Game.
That said: At some point I started to worry about what I was seeing. “Bad choices” on security becoming the norm. (MicroSoft in particular making horrid design choices). Eventually it sunk into my pea brain that “this behaviour is by design”. I was very gratified when the whole PRISM program was outed by Snowden; simply because it showed I wasn’t paranoid, I was right, and they were out to get my machines.
So I felt a bit of professional duty to “everyone else” to clue them in that they were NOT secure, and to show ways to become more secure. So I set about trying to find “secure enough” solutions that could be implemented by a Regular Joe or Jane in the home. I’m not there yet (my spouse could never configure a Linux properly…) but I’m getting closer. I think I’ve already worked out a “good enough” system of hardware, software, and operational behaviours that can be done by the ‘slightly technical’ type. Not enough to keep out a determined TLA, but enough to keep out most bad things, and enough to slow TLAs and force them to decide you really are a target worth the extra effort and budget before proceeding.
As to making the swap:
Just pick one little thing at a time to move onto the “other than Wintel” platform. Set up the Pi M3 with its own keyboard, mouse, and monitor (or do some plugging back and forth… ) Install Debian / Raspbian (and don’t worry about SystemD for a long while as it is an esoteric target) and then just do one thing on it, like browse web pages for personal interest, or news, or whatever. Or maybe use it for reading your email (thus isolating email and phish attacks from your real work computer).
As time permits, add other uses, some on other chips (again for isolation). Maybe one weekend bring up that 2nd router and get comfortable with having a 2 layer network.
One step at a time, put growth into the new kit and let the old kit shrink / ossify.
In a few years you will find yourself looking at that old laptop and wondering if you ought to just scrub it as you haven’t used it in a year or 2…
Get 2 mini-SD cards. Install a generic Raspbian on both. Use one card ONLY for financial things. IF you pay bills on line or move money, use only that card. On the second one, do your web browsing for fun. Now you have one chip where you are essentially only using a common browser (like Firefox) at secure sites (like your bank), and 99%+ of the time it sits in a plastic chip carrier, safe and secure. No risk of email phish or web browsing malware getting into it. You’ve fairly strongly increased the security of your financial life on line. I’d even go so far as to use a 3rd chip for online purchases of stuff, isolating that from your banking actions, but I rarely buy anything on line ;-) That second chip can now be used for visiting fun sites, reading news articles and blogs. If some site shoves malware onto it, they can, at most, break your system (so you re-flash the chip – about a 1 hour job), or they can find out you read Fisherman’s Life regularly … not exactly interesting to them. Leave everything else on your Wintel box as is. Minimal disruption of your present skills and habits.
Whole thing ought to take, at most, about 4 hours to set up. If the browser is a little slow, who cares, it’s only paying 10 minutes of bills, so if it’s now 12, no big. If something isn’t to your liking, you can just go do it on the Wintel until you have the time to find an alternative you like. Plus, a whole lot of risk has been removed from your Wintel box. Since it isn’t doing financial stuff nor general web browsing, the potential ways in are reduced as is the potential damage of a hacker or malware making it in. Less internet interactions means less risk and isolation of use cases means less cross contamination risk.
Over time, you can add more chips as other isolated use cases come up that you would like to try. On some chips, you can stack several use cases if they are compatible. For example, I’d put my junk email account on the public browser chip; my private email account on an isolated chip. Whenever you get tired of swapping chips, buy another Pi ;-)
But yes, I know the pressures of house and work. I’m “booked up” for the next year on home repairs. (Spouse wants a new bath renovation, and then the roof will be tested in the rain today – I did the repair, now we find out if it holds and I can proceed to reroof.) I’m about a year behind on where I wanted to be on the SBC compute package. I’ve published bits and chunks, but it isn’t a smooth integrated process yet. (Then again, we had a ‘surprise!’ Linux kernel swap needed for security, then a second one to the more cutting edge kernel for ext4 compatibility reasons… along with SystemD crawling into everything, sucking down about 6 months to find an alternative I liked.)
So it’s slow to implement. That’s OK. I’m already way more secure than most folks on a Wintel box that’s 3 years out of date ;-)
At least there’s now an official Devuan Release for the Pi so all the “roll your own” is gone, it’s just copy to chip and boot. That makes the whole Pi Stack much easier to work with on a systems administrator level. I’ve also settled on Alpine for my DNS server / router chip (the old Pi B+) and it’s been stable for years now. Then the use of Armbian for “other chips” has worked well, so that exploration out of the way. The Orange Pi running 24 x 7 for months now without a sniffle. So I’ve got all the parts and pieces up and doing their thing, and worked out the “what, why, and how”. We’re into the polish and document phase ;-)
Unfortunately, to do that right, I typically do a full re-install from scratch using the on site integrated systems. It cleans out the false path leftovers and any ‘stuff’ that wandered in and I didn’t notice. (Like installing mysql on a system that doesn’t need it and finding it launches a bunch of daemons that suck down memory even when not being used). So basically a few days work, given the number of systems I’m working / playing with. Oh Well. It can wait for some winter day when it is lousy weather and no work on the house can be done …
So just pick some simple single “first step”, and take it. It really is that easy. Boot the Pi and browse with it, for example.