DNS Server on Pi Alpine Linux

Well, as of now, I’ve got my home DNS server running on a Pi Model B+ with Alpine Linux.

There were a few quirks in the process, but mostly it just went very very well. The little sucker is blindingly fast, with a tiny memory footprint, and has security and routing features built in from the get go. I’m happy.

Some background on the decision: I fretted far too long over this. Yet Another Package Manager to learn. Yet Another init process to sort out (it uses OpenRC instead of rc.d files as in BSD, or SystemV Init like most other sane things outside BSD, or the {bastard} systemD).

Just being way too lazy. Turns out that the “package manager” parts you need to “learn” are one name (apk) and so far one option (add). Oh, and an “update” to start it off right. As for OpenRC, so far all I’ve needed is one command to tell it to start services on boot up: “rc-update add servicename”. That’s it. Sigh. Maybe an hour all told reading web pages and looking for pain, and not finding any, and then thinking “Hell with it, just try it” and finding out they were NOT leaving things out…

I’d pondered BSD. I love it. It makes great routers and firewalls. It installs fairly easily up to the point of X-Windows (which mostly sucks everywhere but really really sucks when you have to do it yourself, as it is in BSD near as I can tell). It is also huge to build as it has every package in the universe available. But generally you can make a locked down ‘stripper’ from it and be fairly small and fast. But a lot of work as “the experienced systems admin” is expected to want to customize everything anyway, so why start with something they don’t want? Just more work for them to strip that out first…

I’d pondered Void. Small and using musl libraries and with security features like Alpine, but with a working out-of-the-box GUI… Reading the pages on it, it is still a bit of a young port. Running it from SD on the laptop was a bit slower than I wanted (but that could just be the laptop not liking SD cards). It has its quirks too: its own design for several key parts, like package managers and init processes.

For anyone wanting the canonical download set of all things void, visit:


Unlike trillions of other existing distros, Void is not a modification of an existing distribution. Void’s package manager and build system have been written from scratch.

xbps is the native system package manager, written from scratch with a 2-clause BSD license.

xbps allows you to quickly install/update/remove software in your system and features detection of incompatible shared libraries and dependencies while updating or removing packages (among others). See the usage page for a brief introduction.


We use runit as the init system and service supervisor.

runit is a simple and effective approach to initialize the system with reliable service supervision. See the usage page for a brief introduction.


xbps-src is the xbps package builder, written from scratch with a 2-clause BSD license.

This builds the software in containers through the use of Linux namespaces, providing isolation of processes and bind mounts (among others). No root required!

Additionally xbps-src can build natively or cross compile for the target machine, and supports multiple C libraries (glibc and musl currently).

So Void has the same “learn new stuff” issues as Alpine, with the same musl (smaller, faster, and likely more secure) library security features.



Void was the first distribution to switch to LibreSSL by default, replacing OpenSSL. Due to the Heartbleed fiasco we believe that the OpenBSD project has qualified and pro-active developers to provide a more secure alternative.

So the major claims to fame were the build system (and I was not looking forward to learning a complex build system) and LibreSSL vs OpenSSL. Add in the usual expectations of a young port (missing packages, bugs not discovered so far, too much work and not enough hands to fix everything fast) and it was not exactly what I wanted on a dedicated, and ignored, appliance that Had To Work Always (or the family would be nagging me about maintenance issues when DNS was crappy…)

I decided it would be better as an experimental desktop platform, and a bit later.


But what tipped me to Alpine was the heritage. It started life as a router project that fit on a floppy.



Originally, Alpine Linux began as a fork of the LEAF project. The members of LEAF wanted to continue making a Linux distribution that could fit on a single floppy disk, whereas the Alpine Linux developers wished to include some more heavyweight packages such as Squid and Samba, as well as additional security features and a newer kernel. One of the original goals was to create a framework for larger systems; although usable for this purpose, this is no longer a primary goal.

The LEAF project is itself a follow-on from the Linux Router Project, which I loved way back when.

The LEAF – Linux Embedded Appliance Framework […] is a collection of Linux distributions that began as a fork from the Linux Router Project (LRP) “linux-on-a-floppy” distribution. Most users of these distributions are primarily interested in router and firewall functions, particularly as combined with the convenience of major features of general Linux distributions such as shells, packet filtering, SSH servers, DNS services, file servers, webmin and the like. LEAF is a common choice when commercial NAT routers are insufficiently flexible or secure, or are unattractively nonconformant to open source philosophy.


I’ve bolded some bits.


Alpine Linux is an independent, non-commercial, general purpose Linux distribution designed for power users who appreciate security, simplicity and resource efficiency.


Alpine Linux is built around musl libc and busybox. This makes it smaller and more resource efficient than traditional GNU/Linux distributions. A container requires no more than 8 MB and a minimal installation to disk requires around 130 MB of storage. Not only do you get a fully-fledged Linux environment but a large selection of packages from the repository.

Binary packages are thinned out and split, giving you even more control over what you install, which in turn keeps your environment as small and efficient as possible.


Alpine Linux is a very simple distribution that will try to stay out of your way. It uses its own package manager called apk, the OpenRC init system, script driven set-ups and that’s it! This provides you with a simple, crystal-clear Linux environment without all the noise. You can then add on top of that just the packages you need for your project, so whether it’s building a home PVR, or an iSCSI storage controller, a wafer-thin mail server container, or a rock-solid embedded switch, nothing else will get in the way.


Alpine Linux was designed with security in mind. The kernel is patched with an unofficial port of grsecurity/PaX, and all userland binaries are compiled as Position Independent Executables (PIE) with stack smashing protection. These proactive security features prevent exploitation of entire classes of zero-day and other vulnerabilities.

This thing has been a small, fast, secure router and firewall from the very start. Generations of tuning and polishing, and enhancements. It is unlikely you will find mysterious networking bugs in something with that history behind it.

Sure, it’s grown up. A Lot. No longer fitting on a single floppy. But it fits Very Nicely on a 4 GB SD card AND runs entirely from memory. Things get real fast when you never take a pause for I/O to disks, SD cards, or whatever…

As a generally more “shaken out” heritage, with small fast footprint, and security features up front, it looked like my best choice.

My review of my first install of it as an initial survey is here:

Besides, on my DNS server I don’t need things like X and GUIs… It runs ‘headless’ with just power and network cable plugged in.

So I decided to ‘give it a go’ today. As of a few hours later, it is my running DNS server.

dnspi:~$ df
Filesystem           1K-blocks      Used Available Use% Mounted on
devtmpfs                 10240         0     10240   0% /dev
shm                     223180         0    223180   0% /dev/shm
/dev/mmcblk0p1         3260176     87208   3172968   3% /media/mmcblk0p1
tmpfs                   223180     14132    209048   6% /
tmpfs                    44636       128     44508   0% /run
cgroup_root              10240         0     10240   0% /sys/fs/cgroup
/dev/loop0               19328     19328         0 100% /.modloop

The SD card is that mmcblk0p1 thingy. That’s 87 MB used. That’s it. The whole enchilada. I could put this thing on a 128 MB card and have too much room left over.

Mem: 45216K used, 401144K free, 14264K shrd, 744K buff, 25360K cached
CPU:   0% usr   0% sys   0% nic  99% idle   0% io   0% irq   0% sirq
Load average: 0.00 0.00 0.00 1/81 1395
 1395  1381 chiefio  R     1616   0%   0   0% top
 1380  1378 chiefio  S     5572   1%   0   0% sshd: chiefio@pts/0
 1378  1337 root     S     5556   1%   0   0% sshd: chiefio [priv]
 1337     1 root     S     2976   1%   0   0% /usr/sbin/sshd
 1308     1 root     S     1612   0%   0   0% /usr/sbin/crond -c /etc/crontabs
 1381  1380 chiefio  S     1612   0%   0   0% -ash

This is on the small Pi with ‘only’ 512 MB of memory. Notice that 401 MB of it are “free” and only 45 MB are used for running. (The rest goes to the GPU and / or the RAMdisk). Note, too, that with an SSH login to it, and a remote display of ‘top’ running, and doing DNS services and whatall, it is at load average 0.00 and 99% idle. The load is in a fraction of a percent somewhere.

Now that’s what a Linux is supposed to do. None of this 10% of your quad core CPU sucked up by SystemD playing with its bits…

Ok, on to the “how To” part…

Install and Configure

I covered the basics of this in the prior article, but I needed to do it all over again to get the newest copy and put it on the B+ (as I’d made the other one on the Pi-2 and that’s not going to run on the B+… different instruction sets). The “short form” is that you download a compressed tarball, extract it, stuff it onto a Fat-32 formatted SD card, stick it in the Pi and boot. That’s about it. Then you configure.

Their directions are here:



This section will help you format and partition your SD card:

Download the Alpine for Raspberry Pi tarball, which is named alpine-rpi-<version>-armhf.rpi.tar.gz. You will need version 3.2.0 or greater if you have a Raspberry Pi 2.
Mount your SD card to your workstation
Use gnome-disks or fdisk to create a FAT32 partition. If you are using fdisk, the FAT32 partition type is called W95 FAT32 (LBA) and its ID is 0xC.
Mark the newly created partition as bootable and save
Mount the previously created partition
Extract the tarball contents to your FAT32 partition
Unmount the SD Card.
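If fdisk feels fiddly, the same layout can be fed to sfdisk as a small script file; here’s a sketch (the layout values are illustrative, not from the Alpine directions):

```text
# alpine-card.sfdisk -- one bootable FAT32 (LBA) partition spanning the card.
# Apply with:  sfdisk /dev/sdX < alpine-card.sfdisk
# then format: mkfs.vfat /dev/sdX1
label: dos
start=2048, type=c, bootable
```

Type `c` is the same “W95 FAT32 (LBA)” ID 0xC the fdisk instructions mention.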

Now I used ‘gparted’ on Debian to do the formatting of the card, but in fact most cards come already FAT32. For reasons that now seem silly, I chose to put a 512 MB ‘swap’ partition as the second partition on the card. That seems really silly now, as there is clearly no need to swap anything with 10x the used memory sitting free… Oh Well… I used gunzip (I think…) to uncompress it. (Why they bother to compress it is a mystery: 87 MB uncompressed, 84 MB compressed…) Then I made a directory on my hard disk (mkdir /{path_to}/Alpine; cd /{path_to}/Alpine; cp ~chiefio/saved_download_Alpine_stuff .) and did a ‘tar xvf alpine-rpi-3.4.5-armhf.rpi.tar’, after which you can remove the tar file.

ls -l a*
-rw-r--r-- 1 root    root 86681600 Nov  5 13:43 alpine-rpi-3.4.5-armhf.rpi.tar
-rw-r--r-- 1 chiefio  500 84700261 Nov  5 13:41 alpine-rpi-3.4.5-armhf.rpi.tar.gz

and you are left with:

root@R_Pi_DebJ_DD:/SG2/ext/Alpine# ls
apks			bcm2708-rpi-cm.dtb   bcm2710-rpi-cm3.dtb  cmdline.txt  overlays
bcm2708-rpi-b.dtb	bcm2709-rpi-2-b.dtb  boot		  config.txt   start.elf
bcm2708-rpi-b-plus.dtb	bcm2710-rpi-3-b.dtb  bootcode.bin	  fixup.dat

That stuff gets copied onto the SD card and it’s ‘good to go’. I used “cp -r . /SD/mountpoint”, or wherever your SD card gets mounted…

This next bit is important. Alpine does NOT SAVE STATE OF CHANGES unless you tell it to do so. Anything you change evaporates at the next boot unless you do “lbu ci” or “lbu commit”.

So if you screw something up, just reboot and try again… and do the lbu ci at the end.


Alpine Linux will be installed as diskless mode, hence you need to use Alpine Local Backup (lbu) to save your modifications between reboots. Follow these steps to install Alpine Linux:

Insert the SD Card into the Raspberry Pi and turn it on
Login into the Alpine system as root. Leave the password empty.
Type setup-alpine
Once the installation is complete, commit the changes by typing lbu commit

Type reboot to verify that the installation was indeed successful.
Post Installation
Update the System

Upon installation, make sure that your system is up-to-date:

apk update
apk upgrade

Don’t forget to save the changes:

lbu commit

First login is as root, with NO password. You set the password later.

Pay close attention to the prompts in the ‘setup-alpine’. First time through I got distracted and hit ‘return’ to take the default on DNS servers, and then it could not find any of the mirrors as the default was no DNS servers… Sigh. Reboot…

Also note that the “apk update” is very important. I didn’t do it before trying to install dnsmasq and it turns out that the first time it is run, it builds the package index from the mirror, so do it FIRST, then ‘apk add’ something…

So what all did I do?

I gave “setup-alpine” my desired IP number and gateway, and DNS server. I told it to set my time zone and answered the other questions (that are all the usual things…) If you don’t know what the usual things are, just run it and note what it wants to know, then reboot without saving and you will be exactly where you started. Mostly things like what kind of keyboard and what clock daemon to run and similar. When in doubt, take the default. (But NOT for DNS… at least give it a known-good server if you have no other you like better.)

I did:

apk update
apk add iptables
apk add iptables-doc
apk add iputils
apk add dnsmasq
rc-update add iptables
rc-update add dnsmasq
adduser {your username}
apk add sudo
lbu ci

That’s pretty much it. Reboot headless and ssh into it for any further configuration.

I still need to add my IP filter choices, and my DNS entries to nuke advertising and other annoyances. For that, I need to suck them out of the old config of the DNS server (or remake them from a current avoid list…)

For now, I’m happy with lightning fast DNS response from it, even if a bit wide open. I initially pointed it to my Telco boundary router for its DNS, so I also need to give it a longer list of ‘upstream’ servers to try.
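A sketch of what that ‘longer upstream list’ can look like in /etc/dnsmasq.conf (the addresses and blocked domain here are placeholder examples, not my actual picks):

```text
# /etc/dnsmasq.conf sketch -- upstream list plus an ad-block example.
# The upstream addresses are illustrative placeholders only.
no-resolv                      # ignore /etc/resolv.conf; use only the servers below
server=203.0.113.1             # e.g. the Telco boundary router
server=203.0.113.2             # e.g. a second upstream to try if the first is slow
cache-size=1000                # cache answers locally for speed
address=/ads.example.com/0.0.0.0   # send an annoyance domain to a dead address
```

After editing, restart dnsmasq and remember the lbu commit so the change survives a reboot.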

Future Stuff

At some point I will want to set up VPN on it, so I can VPN to my home spigot on the world from the coffee shop and have my traffic encrypted in the WiFi snoop zone there…


There is more on package management here:



The apk tool has the following applets:
add 	Add new packages to the running system
del 	Delete packages from the running system
fix 	Attempt to repair or upgrade an installed package
update 	Update the index of available packages
info 	Prints information about installed or available packages
search 	Search for packages or descriptions with wildcard patterns
upgrade 	Upgrade the currently installed packages
cache 	Maintenance operations for locally cached package repository
version 	Compare version differences between installed and available packages
index 	Create a repository index from a list of packages
fetch 	Download (but not install) packages
audit 	List changes to the file system from pristine package install state
verify 	Verify a package signature 

To do more stuff with networking, see:


It has a link to suggestions on how to set up iptables firewall rules, plus an example of restarting networking service:
/etc/init.d/networking restart

They suggested a couple more packages that I’m not familiar with, so I added them just to play with:

apk add iproute2
apk add drill

I’m sure there will be more things I try with it over time. The old server had a torrent server running on it, along with a small apache web server (so dead requests from DNS redirecting to self would hit it and get a ‘worked!’ light weight page in response). It also was a testbed for NFS file services. Over time I’ll need to decide if this card does those services, or if I move them further inside the perimeter.

In Conclusion

I feel a bit the fool for taking so long to ‘just do it!’ (as my old work-mate and skiing instructor, Walt, would say…).

Turns out the whole thing took less time than just typing it up here. What next? Well, just “to be sure”, I’m going to do one last lbu commit and then make a reserve copy of the card. I do that via a plain ‘dd’ of the whole bag of bits onto a patch of hard disk. So I’ll shut down the dnspi, take the card and mount it on my desktop PiM3, and do:

dd bs=1M if=/dev/sdx of=/CardArchives/DNSPI_Alpine_server

Where if= (“input file”) is the SD card device. So if, say, it is mounted at /SD/card you would do a ‘df’ and see that /SD/card was /dev/sdb1, and then just drop that digit (as we are taking the whole card, not just one partition) and do a ‘umount /SD/card’ first… So in this example it would be:

dd bs=1M if=/dev/sdb of=/wherever/you/like/File_Name
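To see the dd pattern safely, here’s a runnable sketch that uses an ordinary scratch file standing in for the (unmounted) card device; the file names are invented for illustration:

```shell
# Demonstrate the dd image-and-verify pattern with a scratch file
# standing in for the unmounted SD card device (e.g. /dev/sdb).
SRC=demo-card.bin                                        # stand-in for /dev/sdb
dd if=/dev/urandom of="$SRC" bs=1M count=4 2>/dev/null   # fake "card" contents
dd bs=1M if="$SRC" of=backup.img 2>/dev/null             # the actual backup step
cmp -s "$SRC" backup.img && echo "backup verified"
```

The cmp at the end is a cheap sanity check that the image really matches the source; with a real card, run it before you wipe or re-use the card.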

Well, not bad for a Saturday afternoon…



About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present “hot buttons” are the mythology of Climate Change and ancient metrology; but things change...
This entry was posted in Tech Bits.

17 Responses to DNS Server on Pi Alpine Linux

  1. Larry Ledwick says:

    Sounds like a very good setup.

  2. E.M.Smith says:

As it is presently inside from the Telco boundary router, it doesn’t really need iptables configured (assuming the telco firewall works…) but I’m from the “belt AND suspenders” school… I intend to lock it down to only be able to communicate outbound via tcp/ip to the specific DNS servers I list, and only on the DNS ports, to the time service, and to the Alpine Mirror (though that likely only when doing maintenance) then inbound only to my (non-routing) networks. That ought to make it as close to impervious as you can get.
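    The general shape of that lockdown, sketched in iptables-save format (the addresses and networks are illustrative placeholders, not my actual rule set):

    ```text
    # Sketch only: default-deny, then allow just DNS service inward and the
    # listed upstreams plus NTP outward. Load with iptables-restore; on Alpine,
    # "/etc/init.d/iptables save" persists the running rules.
    *filter
    :INPUT DROP [0:0]
    :FORWARD DROP [0:0]
    :OUTPUT DROP [0:0]
    -A INPUT -i lo -j ACCEPT
    -A INPUT -s 192.168.1.0/24 -p udp --dport 53 -j ACCEPT     # DNS queries from the LAN
    -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    -A OUTPUT -o lo -j ACCEPT
    -A OUTPUT -d 203.0.113.1 -p udp --dport 53 -j ACCEPT       # one listed upstream DNS
    -A OUTPUT -p udp --dport 123 -j ACCEPT                     # NTP time service
    -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    COMMIT
    ```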

    At that point, making it an NFS server ought to be safe for things like my canonical collection of obsolete software… (how now damn COW…) I could also see making it a time server inward (keeping time queries from leaking out to the internet) and maybe add my own Mirror for Debian or whatever I end up using… so builds and installs don’t leave my space.

    It has nice router features (like bonding) so I think I can set up the WiFi dongle to serve DNS to my internal network (though need to avoid bridging or routing…) and might even be able to make it an Access Point (WiFi router). Presently it serves DNS to that net via the WiFi router which treats it as “outside” via a NAT and wire…

    At any rate, it’s a nice fully featured router should I wish to do the set up.

    Oh, and I might try an eMail server and torrent server just for giggles…

    But all that is for later. For now it just needs to rack up some run time and history… even if it is so efficient it is near zero load… (dang that annoys me… it feels like I’m wasting computes ;-)

  3. tom0mason says:

    Sounds like you’ve done it.
    Alpine sounds like a distro from the past, where Linux modules would do a few things simply and well, which is a fine thing to find out.

    Thanks for the info EM, well done. :)

  4. pg sharrow says:

    You can’t “waste” a Pi. Just rechip it for its next task!
    A fine piece of lab equipment that can be as specific or general as needed.
    Small, low power requirements, lots of IO available,
    Just need to settle on software to match…pg

  5. pg sharrow says:

    OH! yes, forgot to mention cost. A dozen are cheaper than 1 cheap laptop…pg

  6. Steven Fraser says:

    Your ‘small footprint’ comments reminded me of my days at SWTPC, using UniFLEX on a 6809 system with 1 M RAM, booting off a 1.2 M floppy. The whole damn kernel, with fd, hd, tty, parallel print, and tape drivers, fit into 64K. It was coded in 6809 ASM, not quite as convenient as C, but there was a good compiler for it.

    You flashed me back… Thx

  7. E.M.Smith says:


    It isn’t the Pi being wasted, as it is providing the desired service, but rather those unused computes…

    In the supercomputer world, we would sell our idle time for about $1200 per CPU hour. That mind set sticks with you… So I’m thinking things like dusting off my Golomb ruler finding code.
    Or mining bitcoins, or porting a climate model and letting it run for months ( 30 days x 24 hours is 720 cpu hours or the same as a 720 core machine for an hour run…and might actually do something…)

    I just keep thinking I should make some kind of “computes soup” out of those unused bits… ;-)

  8. pg sharrow says:

    My big box MSwindoz sucks 120 watts and runs the cpu 20-80% displaying a web page, as well as eating 750 bits of my bandwidth internet service all the time. Now that is computes waste! Electric waste, service waste.
    My little RasPi-2B does the same job on 10 watts and 0-7% cpu usage. Wow! Little waste there, and it barely taps at the router. Doesn’t nag me about one thing or another and complain about my usage. Just sits there and politely does what I ask and only what I ask for it to do.
    Now getting a box full of Raspberry Pis to do real computing! That would be something!
    Now that you have “settled” on the DNServer, I guess I should see if I can follow instructions and put the Pi-B+ to work. ;-) GOD has blessed us this fall, so we can play…pg

  9. E.M.Smith says:


    If you get stuck on anything, just holler, I’m standing (er…. sitting) by.

    When I get a chance (after Church) I’ll post some of my DNS server config files. It really does cut down A Lot on internet traffic and delays to have local DNS. Each DNS lookup can take a few seconds if it must go out to God Only Knows Where for resolving, and you can have a dozen or more in one web page. Pretty soon you are talking minutes. For folks paying for bandwidth by the byte (like on mobile hot spots) blocking Ads and local DNS can save a lot of cash too.

    Oh, and note that I did get ‘distcc’ running for parallel builds across multiple R.Pi boards. I’ll likely make a Beowulf out of them at some point just for giggles… (Lots of folks have done it). I just don’t have a lot of problems that need a cluster…

  10. E.M.Smith says:

    Just for grins, and comparison, here’s the R.PiM3 with Debian Jessie on it, doing only “top” in one panel:

    top - 13:22:18 up 3 min,  1 user,  load average: 0.89, 0.75, 0.31
    Tasks: 179 total,   1 running, 178 sleeping,   0 stopped,   0 zombie
    %Cpu(s):  0.5 us,  0.1 sy,  0.0 ni, 98.3 id,  1.1 wa,  0.0 hi,  0.0 si,  0.0 st
    KiB Mem:    881700 total,   464916 used,   416784 free,    63612 buffers
    KiB Swap:  4194300 total,        0 used,  4194300 free.   246360 cached Mem
      PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND     
     1784 root      20   0  112072  34932  16868 S   1.3  4.0   0:02.46 Xorg        
     1900 chiefio   20   0   47012  18528  15784 S   1.0  2.1   0:00.86 lxterminal  
     1927 root      20   0    5112   2540   2144 R   0.7  0.3   0:00.39 top         
     1066 ntp       20   0    5768   3852   3420 S   0.3  0.4   0:00.05 ntpd        
        1 root      20   0   24332   4120   2684 S   0.0  0.5   0:06.01 systemd  

    So 465 MB of memory used, to do basically nothing but X and a single terminal window with “top”. About 2% of a quad core at higher MHz too.

    Here it is with IceWeasel opened:

    top - 13:25:37 up 6 min,  1 user,  load average: 0.57, 0.82, 0.44
    Tasks: 172 total,   1 running, 171 sleeping,   0 stopped,   0 zombie
    %Cpu(s):  2.9 us,  0.4 sy,  0.0 ni, 94.8 id,  1.9 wa,  0.0 hi,  0.0 si,  0.0 st
    KiB Mem:    881700 total,   714460 used,   167240 free,    70332 buffers
    KiB Swap:  4194300 total,        0 used,  4194300 free.   324092 cached Mem
      PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND     
     1940 chiefio   20   0  638816 211168  66308 S   3.6 24.0   1:17.86 iceweasel   
     1784 root      20   0  126428  48708  17472 S   1.7  5.5   0:09.67 Xorg        
     1900 chiefio   20   0   47532  18728  15856 S   1.3  2.1   0:01.66 lxterminal  
     1927 root      20   0    5112   2540   2144 R   0.7  0.3   0:01.61 top         
      886 nobody    20   0    2292   1444   1324 S   0.3  0.2   0:00.26 thd         
     1066 ntp       20   0    5768   3852   3420 S   0.3  0.4   0:00.07 ntpd        
        1 root      20   0   24332   4120   2684 S   0.0  0.5   0:06.07 systemd   

    714 MB, or 3/4 of the 1 GB of memory, used. 5% of CPU (of 4 much faster CPUs so about 25% equivalent of the B+ CPU…) Just to have a browser open and top running…

    FWIW, it often runs over into Swap Space if I do anything really interesting…

  11. E.M.Smith says:

    The init commands:


    Quick-Start Information
    Alpine Linux uses OpenRC for its init system.
    The following commands are available to manage the init system:
        rc-update add service [runlevel]
            Add a service to a runlevel
        rc-update del service [runlevel]
            Remove a service from a runlevel
        rc-service service [start | stop | restart]
            Equivalent to /etc/init.d/service [start | stop | restart]
        rc-status
            To check services and their set runlevels
        rc [runlevel]
            To change to a different runlevel
        reboot
            Equivalent to shutdown -r now from traditional GNU/Linux systems
        poweroff
            Equivalent to shutdown -h now from traditional GNU/Linux systems; to turn off the machine
  12. E.M.Smith says:

    needed to do:

    apk add file

    To get the ‘file’ command. It lets you see what is on a disk partition via ‘file -s /dev/sdxn’ where x is the drive letter and n is the slice number. For example, ‘file -s /dev/sda1’ gets the first partition of the first disk and tells you what it is… usually an MBR / Microsoft FAT32 for things you have not reformatted otherwise.

    Also, it looks like “lbu commit” or “lbu ci” only saves config changes in /etc unless you tell it more. So my “home directory” evaporates after each reboot at the moment. (Not an issue as I’d just made the ‘chiefio’ login as a landing place for ssh logins… but it gives me a ‘no home dir’ when I log in…)


    The first thing you need to know is this: By default lbu only cares about modifications in /etc and its subfolders, with the exception of /etc/init.d!
    Please have a look at lbu include to save files/folders located elsewhere than in /etc.

    Alpine has the following tools for permanently storing your modifications:

    lbu commit (Same as ‘lbu ci’)
    lbu package (Same as ‘lbu pkg’)
    lbu status (Same as ‘lbu st’)
    lbu list (Same as ‘lbu ls’)
    lbu diff
    lbu include (Same as ‘lbu inc’ or ‘lbu add’)
    lbu exclude (Same as ‘lbu ex’ or ‘lbu delete’)
    lbu list-backup (Same as ‘lbu lb’)
    lbu revert
    Include special files/folders to the apkovl

    Assume that you have some files that you want to permanently save, but they are located somewhere else than in /etc.
    It could be /root/.ssh/authorized_keys (used by sshd to authenticate ssh-users). Such files/folders can be added to lbu’s include list with the following command:

    usage: lbu include|inc|add [-rv] …
    lbu include|inc|add [-v] -l

    -l List contents of include list.
    -r Remove specified file(s) from include list instead of adding.
    -v Verbose mode.


    lbu add -v /home/dir

    Interesting side effects you get into when running from a RAM file system that isn’t persistent but uses an ‘overlay’ and you need to save things off to have them persist. It has the useful side effect (especially for a router / secure appliance) that unless any system cracker knows that and takes very specific steps, a simple reboot flushes their stuff… Since this thing takes about 20 seconds to reboot, might be worth having cron do a 2 x daily reboot… Say 8 AM and midnight?
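    That twice-daily flush would be a couple of lines in BusyBox crond’s root crontab (a sketch using the 8 AM / midnight times mused about above; since /etc/crontabs is under /etc, an lbu commit makes it persist):

    ```text
    # /etc/crontabs/root -- min hour day month weekday command
    0 8 * * * /sbin/reboot    # 8 AM flush
    0 0 * * * /sbin/reboot    # midnight flush
    ```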

    The same lbu command also lets you save your state file onto a remote device or machine, so you can easily suck out your config in a pristine state and shove it back in later, even after a full wipe of the system and re-install. Nice. (PITA for a Daily Driver Workstation but as an appliance, just what you want. Or potentially as a disposable mini-browser box… By definition everything is in memory and goes away on power down…)

  13. E.M.Smith says:

    Adding a “squid” proxy server. It caches web pages so if, for example, you visit the same page inside the cache limit you don’t spend any Telco time or money to fetch it again… This can really speed things up, especially if you haven’t blocked repetitive adverts yet… so that 3000th time you get the Amazon ad for the thing you bought last week doesn’t have 3000 downloads…

    Alpine has 2 ways of doing it: “Transparent”, where you force all traffic through it via your router config, and “Explicit”, where you have to configure your browser to use it. I’ll be doing the Explicit so I can choose what happens (even though Transparent might be better, as the spouse and others would also benefit from the cache… but that involves me taking on permanent I.T. Explainer duties on their computers… and being on call if something breaks).


    has a nice tutorial. I’m initially not going to include SSL interception (so that https sites are not cached and do not do any saving) as I’m not interested in getting into that whole Cert Authority set up and issues this morning. I’ll do that on another day…
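    For reference, a minimal explicit-proxy squid.conf looks roughly like this, using standard squid directives (the values are examples, not the exact config used here):

    ```text
    # Minimal explicit (non-transparent) squid.conf sketch -- example values only.
    http_port 3128                       # clients point their browser proxy here
    cache_mem 64 MB                      # RAM cache size (a common default)
    acl localnet src 192.168.1.0/24      # example internal client network
    http_access allow localnet           # only the local net may use the proxy
    http_access deny all
    ```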

    SSL interception or SSL bumping

    The official squid documentation appears to prefer the term SSL interception for transparent squid deployments and SSL bumping for explicit proxy deployments. Nonetheless, both environments use the ssl_bump configuration directive (and some others) in /etc/squid/squid.conf for their configuration. In general terminology, SSL interception is generally used to describe both deployments, and that will be the term used here. We are, of course, dealing with an explicit forward proxy configuration here.
    Behaviour without SSL interception

    Clients behind an explicit proxy use the ‘CONNECT’ HTTP method. The first connection to the proxy port uses HTTP and specifies the destination server (often termed the Origin Content Server, or OCS). After this the proxy simply acts as a tunnel, and blindly proxies the connection without inspecting the traffic.

    Behaviour with SSL interception

    Using this method, clients still use the CONNECT method but the client uses the certificate from the proxy (so it must be a certificate trusted by the client) to encrypt the traffic. Thus, the proxy is able to decrypt and view the traffic on the client-side before creating another encrypted connection server-side. This enables the proxy to, in essence, launch a man-in-the-middle ‘attack’ but also allows it to do all the things it can with plain, unencrypted HTTP traffic, like change the browser User-Agent reported to the server.

    With that added, and the DNS kill list updated, my ‘wasted bandwidth’ for things like advertising crap and repeatedly downloading the same DNS entries will be greatly reduced. (It is amazing, really, how often your browser will hit the same ads.google.crap DNS lookup and advert downloads…)

    I’d done all this on a Raspberry Pi “Daily Driver” chip so it was in the client side, and really liked the results. Moving up to the DNS Server / Proxy Server means not needing to do it in all the clients… just point their DNS server and Browser at the DNS Pi board…

    Details to follow as I get them done.

  14. E.M.Smith says:

    Well that was quick… I did override their setting of 64 MB for memory cache and set it to 256 (the normal default) which ballooned the size of the memory footprint greatly, but hey, it’s not doing anything with that memory anyway…

    Mem: 88060K used, 358300K free, 39444K shrd, 1212K buff, 57340K cached
    CPU:   0% usr  18% sys   0% nic  81% idle   0% io   0% irq   0% sirq
    Load average: 0.00 0.00 0.00 1/88 1903
     1903  1399 root     R     1616   0%   0   9% top
     1855  1853 squid    S    16988   4%   0   0% {squid} (squid-1) -YC -f /etc/squi
     1853     1 root     S    12412   3%   0   0% /usr/sbin/squid -YC -f /etc/squid/
     1394  1392 chiefio  S     5612   1%   0   0% sshd: chiefio@pts/0
     1392  1364 root     S     5556   1%   0   0% sshd: chiefio [priv]

    So now it’s 88 MB used, with about 39 MB shared… I’ve still got 358 MB free, so not worried. Oh, and I added a USB stick as swap. Why? Don’t ask why… I just feel better knowing the little dear has swap…
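    For anyone wanting to do the same, the swap setup is roughly this (the device name /dev/sda1 is an assumption; check yours with fdisk -l first):

```shell
# Turn a USB stick partition into swap (DESTROYS any data on it).
# /dev/sda1 is an assumed device name -- verify with 'fdisk -l' first.
mkswap /dev/sda1                 # write a swap signature
swapon /dev/sda1                 # enable it immediately
grep -i swaptotal /proc/meminfo  # SwapTotal should now include the stick
```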

    Note the ‘squid’ running in the process list.

    Here are some diffs of my /etc/squid/squid.conf (based on the model in the howto guide) against the squid.conf.default in the base install (the default did work, BTW). I’ve trimmed out the clutter (things like comment changes):

    dnspi:/etc/squid# diff squid.conf squid.conf.default 
    --- squid.conf
    +++ squid.conf.default
    +# Example rule allowing access from your local networks.
    +# Adapt to list your (internal) IP networks from where browsing
    +# should be allowed
     acl localnet src 10.0.0.0/8	# RFC1918 possible internal network
     acl localnet src 172.16.0.0/12	# RFC1918 possible internal network
     acl localnet src 192.168.0.0/16	# RFC1918 possible internal network
    -## Allow anyone to use the proxy (you should lock this down to client networks only!):
    -# acl localnet src all
    -## IPv6 local addresses:
    -#acl localnet src fc00::/7       # RFC 4193 local private network range

    I blocked the IPv6 ranges since I don’t run IPv6… so they got commented out with a #.

    -acl Safe_ports port 210		# waiss
    +acl Safe_ports port 210		# wais
    -acl QUERY urlpath_regex cgi-bin \? asp aspx jsp
    -## Prevent caching jsp, cgi-bin etc
    -cache deny QUERY

    These were just their own differences, not my changes. Was wais misspelled? Was QUERY something they forgot? Who knows…

    +# Recommended minimum Access Permission configuration:
    +# Deny requests to certain unsafe ports
     http_access deny !Safe_ports
    +# Deny CONNECT to other than secure SSL ports
     http_access deny CONNECT !SSL_ports
    +# Only allow cachemgr access from localhost
     http_access allow localhost manager
     http_access deny manager
    -## We strongly recommend the following be uncommented to protect innocent
    -## web applications running on the proxy server who think the only
    -## one who can access services on "localhost" is a local user
    -http_access deny to_localhost
    +# We strongly recommend the following be uncommented to protect innocent
    +# web applications running on the proxy server who think the only
    +# one who can access services on "localhost" is a local user
    +#http_access deny to_localhost

    Since they recommended it, it’s uncommented…

    +# Example rule allowing access from your local networks.
    +# Adapt localnet in the ACL section to list your (internal) IP networks
    +# from where browsing should be allowed
    -http_access allow localnet
    -http_access allow localhost

    Turns out that you need one or both of those last two “allow” lines, despite the top definition of the non-routing networks as OK. That top item just adds them to the ACL list; this is the part that turns on that access list…
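    In other words, the pattern is always two-part: an acl line names a set of addresses, and an http_access line acts on it. A minimal sketch (the 192.168.1.0/24 range is just an example):

```
acl localnet src 192.168.1.0/24   # define the set (no effect by itself)
http_access allow localnet        # actually grant that set access
http_access deny all              # refuse everyone else
```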

    +# And finally deny all other access to this proxy
     http_access deny all
    +# Squid normally listens to port 3128
     http_port 3128
    -## Uncomment and adjust the following to add a disk cache directory.
    -## 1024 is the disk space to use for cache in MB, adjust as you see fit!
    -## Default is no disk cache
    -#cache_dir ufs /var/cache/squid 1024 16 256
    -## Better, use 'aufs' cache type, see 
    -##http://www.squid-cache.org/Doc/config/cache_dir/ for info.
    -cache_dir aufs /var/cache/squid 1024 16 256
    -## Recommended to only change cache type when squid is stopped, and use 'squid -z' to
    -## ensure cache is (re)created correctly
    +# Uncomment and adjust the following to add a disk cache directory.
    +#cache_dir ufs /var/cache/squid 100 16 256

    That was where I chose to set the disk cache (which will actually be in memory, as that’s where the ‘disk’ lives on this RAM based system…) to 1024 (about a GB) since I’ve got lots… with the swap… We’ll see how that works out over time. I may need to trim it back if there’s an issue, since it may well be that the RAM disk is a fixed size and can’t roll to swap (in fact, that may be likely… I’ve just not gone looking at it yet… besides, tormenting a system is part of debugging, isn’t it? ;-) I may need to move this onto the USB stick… then again, I’m not sure USB stick access is faster than my internet spigot…)
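    One quick way to check whether that cache directory really is sitting on a RAM-backed filesystem is to ask df for the filesystem type (the path is the cache_dir from the config above):

```shell
# If the Type column says tmpfs, the "disk" cache is actually RAM
# and will compete with everything else for memory.
df -hT /var/cache/squid
```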

    Then I told it not to keep core dumps or logs via that # since I’m never going to read them…

    -## Leave coredumps in the first cache dir
    -#coredump_dir /var/cache/squid
    +coredump_dir /var/cache/squid
    -## Where does Squid log to?
    -#access_log /var/log/squid/access.log
    -## Use the below to turn off access logging
    -access_log none
    -## When logging, web auditors want to see the full url, even with the query terms
    -#strip_query_terms off
    -## Keep 7 days of logs
    -#logfile_rotate 7

    Here’s the actual RAM cache. Kind of redundant since the “disk” cache is also in RAM at the moment. But in any case, there’s some tuning I need to do once I find where the Pi locks up…

    -## How much RAM, in MB, to use for cache? Default since squid 3.1 is 256 MB
    -cache_mem 256 MB
    -## Maximum size of individual objects to store in cache
    -maximum_object_size 10 MB

    I just couldn’t stand the idea of only 10 MB (the cap on any single cached object) when there was 400 MB free… Maybe I’ll bounce this up to a very high number (as it WILL go to swap) and shrink the “disk” cache size… Maybe after the first time it runs out of “disk” ;-)
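    If I do go that way, the tuning knobs are just these three directives (the numbers here are hypothetical, not what I’m running):

```
cache_mem 512 MB                            # RAM object cache
maximum_object_size 50 MB                   # biggest single object to keep
cache_dir aufs /var/cache/squid 512 16 256  # "disk" cache: 512 MB, 16/256 subdirs
```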

    Here’s where the default read_ahead was set in the model. I also took their advice to set forwarded_for to delete. Then much of the rest is just the differences between their online model and the default.

    -## Amount of data to buffer from server to client 
    -read_ahead_gap 64 KB
    -## Use X-Forwarded-For header?
    -## Some consider this a privacy/security risk so it is often disabled
    -## However it can be useful to identify misbehaving/problematic clients
    -#forwarded_for on 
    -forwarded_for delete 
    -## Suppress sending squid version information
    -httpd_suppress_version_string on
    -## How long to wait when shutting down squid
    -shutdown_lifetime 30 seconds
    -## Replace the User Agent header.  Be sure to deny the header first, then replace it :)
    -#request_header_access User-Agent deny all
    -#request_header_replace User-Agent Mozilla/5.0 (Windows; MSIE 9.0; Windows NT 9.0; en-US)

    Just for grins, I’ve told it to display the “hostname” of “The_Shadow” ;-)

    -## What hostname to display? (defaults to system hostname)
    -visible_hostname The_Shadow 
    -## Add any of your own refresh_pattern entries above these.
     refresh_pattern ^ftp:		1440	20%	10080
     refresh_pattern ^gopher:	1440	0%	1440
     refresh_pattern -i (/cgi-bin/|\?) 0	0%	0

    Looking at that, I likely ought to have diffed it the other way around… default to running… Oh well, if it is confusing to anyone, I can just post the whole config file instead of the diffs.

    With that, I’ve now got a nice little caching proxy server and caching DNS server. My internet traffic will be reduced, and the leakage of information (via things like repeated DNS lookups) also muted just a bit. Visits to web pages that have not changed will also be much faster, and any adverts that sneak through the screens will be cached, so I don’t need to keep downloading them again and again…

    Next up will be dealing with all that “do it for https” stuff. It doesn’t look hard, but “one step at a time”… and maybe after lunch… I think I’d like to push this one “to the wall” and see if I can break it first. (i.e. fill up the “disk”…)

  15. E.M.Smith says:

    Well, since WordPress tosses me into HTTPS no matter what I do (yes, I know, a ‘good security practice’…) and that just tunnels through the proxy, I’m getting no cache benefit for my most used pages. It just adds the proxy / tunnel delay. So I’m going ahead and doing that whole SSL proxy thing. So far, it’s been pretty easy. I haven’t added the self signed cert authority to my browser yet, but the install on the DNSPi has been smooth.

    dnspi:/etc/squid# apk -U add ca-certificates openssl
    fetch http://mirrors.gigenet.com/alpinelinux/v3.4/main/armhf/APKINDEX.tar.gz
    (1/2) Installing ca-certificates (20160104-r4)
    (2/2) Installing openssl (1.0.2j-r0)
    Executing busybox-1.24.2-r11.trigger
    Executing ca-certificates-20160104-r4.trigger
    OK: 41 MiB in 64 packages
    dnspi:/etc/squid# lbu commit
    dnspi:/etc/squid# ls
    cachemgr.conf          errorpage.css          mime.conf              squid.conf             squid.conf.documented
    cachemgr.conf.default  errorpage.css.default  mime.conf.default      squid.conf.default
    dnspi:/etc/squid# pwd
    dnspi:/etc/squid# openssl req -newkey rsa:4096 -x509 -keyout /etc/squid/squid.pem -out /etc/squid/squid.pem -days 365 -nodes
    Generating a 4096 bit RSA private key
    writing new private key to '/etc/squid/squid.pem'
    You are about to be asked to enter information that will be incorporated
    into your certificate request.
    What you are about to enter is what is called a Distinguished Name or a DN.
    There are quite a few fields but you can leave some blank
    For some fields there will be a default value,
    If you enter '.', the field will be left blank.
    Country Name (2 letter code) [AU]:US
    State or Province Name (full name) [Some-State]:California
    Locality Name (eg, city) []:.       
    Organization Name (eg, company) [Internet Widgits Pty Ltd]:The Big Chiefio
    Organizational Unit Name (eg, section) []:Department of Organization and Security
    Common Name (e.g. server FQDN or YOUR name) []:DOOS
    Email Address []:pub4all@aol.com
    dnspi:/etc/squid# ls
    cachemgr.conf          errorpage.css          mime.conf              squid.conf             squid.conf.documented
    cachemgr.conf.default  errorpage.css.default  mime.conf.default      squid.conf.default     squid.pem
    dnspi:/etc/squid# ls -l squid.pem 
    -rw-r--r--    1 root     root          5435 Nov  7 18:37 squid.pem
    dnspi:/etc/squid# chmod 400 squid.pem 
    dnspi:/etc/squid# ls -l squid.pem 
    -r--------    1 root     root          5435 Nov  7 18:37 squid.pem
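    A quick sanity check that the combined key-and-certificate file came out right is to ask openssl to print the subject and validity dates back out of it:

```shell
# Print the DN and the notBefore/notAfter dates from the cert just made
openssl x509 -in /etc/squid/squid.pem -noout -subject -dates
```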

    Then it looks like this bit gets glued to the bottom of the squid.conf file (per the link in the prior comment to the ‘doing proxy’ how to…)

    ## Use the below to avoid proxy-chaining
    always_direct allow all
    ## Always complete the server-side handshake before client-side (recommended)
    ssl_bump server-first all
    ## Allow server side certificate errors such as untrusted certificates, otherwise the connection is closed for such errors
    sslproxy_cert_error allow all
    ## Or maybe deny all server side certificate errors according to your company policy
    #sslproxy_cert_error deny all
    ## Accept certificates that fail verification (should only be needed if using 'sslproxy_cert_error allow all')
    sslproxy_flags DONT_VERIFY_PEER
    ## Modify the http_port directive to perform SSL interception
    ## Ensure to point to the cert/key created earlier
    ## Disable SSLv2 because it isn't safe
    http_port 3128 ssl-bump cert=/etc/squid/squid.pem key=/etc/squid/squid.pem generate-host-certificates=on options=NO_SSLv2

    And then:

    squid -k reconfigure

    and I now have to debug and change the proxy / cert stuff in my browser. I tried just running the browser straight, but got connection refused, so I don’t know if it’s the browser side or the squid side that needs some cleanup. More as I preen it…

  16. E.M.Smith says:

    OK, on first use it failed with the proxy refusing connections… the “cookbook” said to do a particular thing, which I did, but…

    The “cookbook” has /usr/lib/squid3/ssl_crtd

    rc-service squid stop
    rm -rf /var/lib/ssl_db
    /usr/lib/squid3/ssl_crtd -c -s /var/lib/ssl_db 
    rc-service squid start

    First off, I didn’t have any /var/lib/ssl_db (but did the remove anyway), and secondly, the directory name had the ‘3’ dropped, so it is now just /usr/lib/squid/ssl_crtd:

    ls -l /var/lib/ssl_db
    (showed nothing)
    rc-service squid stop
    rm -rf /var/lib/ssl_db
    /usr/lib/squid/ssl_crtd -c -s /var/lib/ssl_db
    rc-service squid start

    Even though I’ve not “installed” the certificate into my browser, it looks like I’m connecting to things OK.

  17. E.M.Smith says:

    Interesting… Running a youtube video through the proxy server, it peaked at about 69% of the one CPU core, then backed off to about 30% with bursts to 56%. One might want to have YouTube on the proxy bypass list…

    Mem: 107948K used, 338412K free, 43284K shrd, 1212K buff, 61608K cached
    CPU:   3% usr   9% sys   0% nic  30% idle   0% io   0% irq  56% sirq
    Load average: 0.79 0.22 0.07 1/108 2408
     2323  2321 squid    S    33632   8%   0  33% {squid} (squid-1) -YC -f /etc/squi
        3     2 root     SW       0   0%   0  12% [ksoftirqd/0]
        7     2 root     SW       0   0%   0   2% [rcu_preempt]
     2405  2367 root     R     1616   0%   0   1% top
     2354  2352 chiefio  S     5576   1%   0   0% sshd: chiefio@pts/0
     2321     1 root     S    14540   3%   0   0% /usr/sbin/squid -YC -f

    I also note that memory usage has bumped up to 108 MB with the RAM buffer and added features. “Only” 338 MB more to figure out how to get busy ;-)

    CPU use when loading “regular” pages tends to run about 1% to 8% for one user, so there’s room for plenty of folks at the same time.

    I’ve also found that Android (i.e. my Tablet) doesn’t let you set Proxy Settings anywhere… “There’s an App for that” at a price… So I may yet set up the proxy server as “transparent” (once I’m comfortable it’s fine for the whole family, or maybe by putting it inside my own private network space instead… Decisions decisions…)
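    For the record, the transparent route would mean a squid.conf change plus a firewall redirect on the router — a sketch only, untested here (port number and interface name are assumptions):

```
# squid.conf: listen on a separate port in intercept mode
http_port 3129 intercept

# and on the router, redirect LAN web traffic to it, e.g.:
#   iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3129
```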

Comments are closed.