ARMing Tails – An Incremental Raspberry? Pi please…

Just a small note and modest suggestion.

TAILS Source Code

I was looking to see if the Tails source code is easily downloaded. It is available, but via GIT. That may be a great thing, and is currently trendy, but it is not an archiving / source control system I've ever used… It has some alien ideas in it that are likely good for massively distributed code development, but not so good for "just let me un-tar this and look at the source"… The learning curve doesn't look too steep, but when you have maybe 30 minutes to spend on an "Is this idea worth it?" question, sinking a day or two into installing and learning a source code control system and how to navigate it is, er, a show stopper.

Source tree starts here: git clone https://git-tails.immerda.ch/tails

The good news is that they have a “web interface” (that I’ve not explored so can’t comment on ease of use… due to my browser wanting to use a higher level of security than they accept, or maybe the other way around):

https://git-tails.immerda.ch/tails/

Secure Connection Failed

An error occurred during a connection to git-tails.immerda.ch.

Unable to generate public/private key pair.

(Error code: sec_error_keygen_fail)

* The page you are trying to view can not be shown because the authenticity of the received data could not be verified.

* Please contact the website owners to inform them of this problem. Alternatively, use the command found in the help menu to report this broken site.

Or maybe the NSA is trying to do a “man in the middle” on it and fumbled things so I’m locked out instead of tracked… (Hey, NSA Guy! Fix your MITM code, will ya? I just want to browse the Tails sources and see if it is a load of work or not. Thanks!)

I’ll likely try a few other platforms, browser and source IP addresses and see if I can find a combo that works… (Hey, NSA guy! Starbucks, near the HP Pavilion, downtown San Jose. I’ll be the one with the Sharks hat on…)

ARM Port Status?

I then did “the usual” of looking to see if anyone was already doing an ARM chip port. The mail thread on it says “kinda sorta slowly maybe”. The folks are focused on portable devices (phones / tablets), not on ARM as ARM. That puts a giant roadblock at the front end, as you need things like touch screens and ‘boot from unusual media’ glued in before you can even get started. This is typical:

https://mailman.boum.org/pipermail/tails-dev/2014-January/004633.html

Nate,

My R&D group has actually been doing some work along these lines. I’ve
been working to get our current work open sourced so we can share some of
the lessons we have learned and some or all of our relevant code. I’m not
sure how long it will take for me to get permission, but I am hoping it
will be some time this month. Keep me in the loop on this discussion (if
it moves outside of this list, I already read this list).

One of the key issues here is that the “boot off CD” model for desktops /
laptops translates poorly into the main model of SD card boot on Android
devices. Most Android devices will not boot automatically from an SD card,
which means that in general traces must be left on the phone (we are
currently just working with phones) of the fact that you use / have used
Tails. We can ensure that anything that happens during a Tails session is
encrypted before it can touch persistent store, but we want the same level
of deniability offered by CD or USB boot on a laptop. If possible on an
un-rooted phone even.

On the same note: If someone has or wants to build a list of devices that
will automatically boot from SD card if it is inserted, or if some magic
key combination is pressed during boot, I would be insanely happy. I think
we have part of such a list which is one of the things I want to open up
when I can. I know there are some, but we want to build a solution for a
broader range of devices, and it seems like auto-SD boot is rare on phones.

On Thu, Jan 2, 2014 at 1:56 PM, Nathan of Guardian wrote:

> —–BEGIN PGP SIGNED MESSAGE—–
> Hash: SHA1
>
> Hello, everyone. Finally joining this list.
>
> I’d like to start an overdue discussion on how we can bring TAILS to
> smartphone or tablet hardware in a usable way. I know we can produce a
> firmware/ROM based on Android or possibly Ubuntu Touch that matches
> the TAILS spec, but the question for me has been how do we match the
> “boot from CD/USB” aspect of TAILS.
>
> There are two interesting developments on this front:
>
> 1) An increasing amount of devices allow you to mount USB storage [0]
> from the Micro USB port. This might be an opportunity to create a
> recover/bootloader that can load a TAILS Mobile image from attached
> storage.
>
> 2) Ubuntu has just released a dual boot system[1] that allows easy
> switching between Android and Ubuntu on one device. If TAILS Mobile
> were to be based on Ubuntu Touch, then this would allow for a nice
> device with a standard Android system for daily use, and then an easy
> to access TAILS mode for more sensitive work.
>
> Apologies if I have missed any discussion or progress on TAILS Mobile
> distribution, but better late than never!
>
> All the best,
> +n8fr8
>

There were a couple of messages “upstream” of this one that specifically just called out ARM, but it very rapidly moved to “phones and tablets” and not just ARM. No mention of ARM Chromebooks that can be made to boot Linux and are “good to go” as platforms, for example.

The FAQ is even less interesting:

https://tails.boum.org/support/faq/index.en.html#index7h2

Does Tails work on ARM architecture, Raspberry Pi, or tablets?

For the moment, Tails is only available on the x86 and x86_64 architectures. The Raspberry Pi and many tablets are based on the ARM architecture. Tails does not work on the ARM architecture so far.

Look for a tablet with an AMD or Intel processor. Try to verify its compatibility with Debian beforehand, for example make sure that the Wi-Fi interface is supported.

They are also rather absolutist about wanting 100% of every security feature, with that meaning deniability that Tails was ever on the box, so no Tails on installed media. I think there ought to be a ‘middle ground’ where you can have a Tails box, like the Pi, but where the chip comes out and nothing is left on the box (just like pulling your USB drive from a PC…)

By using the Chromebook or Raspberry Pi, you get a keyboard, mouse, boot from SD card (IMHO just as able to be kept anon via pulling it) and all, and without taking on the hard bits of the cell phone “up front”. That gives an easier and likely quicker path to a phone port (IMHO). Stir in that all the main Linux and Onion bits are already ported, just how big is what’s left to port / adapt?

So my “modest suggestion” is just that maybe doing a Tails port to the Raspberry Pi would be a much easier place to start. It already has the ARM port of Debian done, so ported bits exist for pretty much everything that TAILS uses, and there is an existing TOR router port.

Onion Pi turns Raspberry Pi into Tor proxy and wireless access point
“Foil the NSA and Prism with a Tor proxy,” Raspberry Pi Foundation says.

by Jon Brodkin – Jun 18, 2013 2:40pm EDT

There are folks talking about it on the Pi boards:

https://www.raspberrypi.org/forums/viewtopic.php?t=63335&p=468916

Though mostly complaining about paying for a DVD of the source archives. And some “Must be hard” negative waves.

by kamikazejoe » Tue Jul 29, 2014 9:04 pm
[…]
I don’t think there is any technical reason someone couldn’t port Tails to the Raspberry Pi. The biggest hurdle is likely the memory requirements. It would likely be unusably slow.

As they are both Debian based, and retrofitting the Tails mods into the Raspbian version of Debian ought to be straightforward, I fail to see why memory would be a significant hurdle (I’ve run Tails on small-memory machines), nor why a browser would be any slower than normal Tails.

But you get the picture. Both sides of the fence interested, two sides not talking, both thinking the other side is too hard. Sigh.

And Me?

I’m interested, and likely could “do it”, but my kernel and OS code exposure was 25 years ago and BSD / SunOS / Ultrix / Unicos. Not ARM and Linux. And no GIT.

So a slow ramp up and learning curve. Add in that I’m not exactly “rich” on free time, and it’s looking like a big time sink with unclear success parameters. The Project Manager in me thinks more and different “resource” would do it better and faster…

Don’t know if I’ll ever get a “round tuit”. I’m going to at least explore how hard it would be to extract a source code set for “diffing” against the base code. But once I’m over “one day” whacking on it, I’m likely to move on.

If anyone “has clue” on this already, give a holler.


22 Responses to ARMing Tails – An Incremental Raspberry? Pi please…

  1. Steve Crook says:

    Ultrix? Yaaaaaay. That and Xenix were my first experiences of programming in the Unix world in the late eighties. I came to it from programming in Cobol, on a mainframe. Culture shock doesn’t adequately describe the experience.

  2. p.g.sharrow says:

@EMSmith, As a managerial sort, maybe a good use of your time is to search out information rather than create it. Lay bread crumbs for others to find, and see what shows up. Too bad the need to work for a living gets in the way of more useful use of your efforts. ;(

    At 69 I still must earn food and shelter.
    If you retire, you die. ;-) pg

  3. E.M.Smith says:

    Well, just on a lark I asked “which git” on the R.Pi.
    It’s installed.

    pi@RaPiM2 ~ $ which git
    /usr/bin/git
    pi@RaPiM2 ~ $

    So then I issued the clone command from above. About 5 minutes later it was done. I can see why folks use git… that’s impressively fast. It’s one thing to have the words “efficient on networks” and another to have a download and unpack of an OS in 5 minutes on a regular WAN…

    pi@RaPiM2 ~ $ git clone https://git-tails.immerda.ch/tails
    Cloning into 'tails'...
    remote: Counting objects: 194876, done.
    remote: Compressing objects: 100% (54601/54601), done.
    remote: Total 194876 (delta 128923), reused 189350 (delta 125093)
    Receiving objects: 100% (194876/194876), 50.84 MiB | 232 KiB/s, done.
    Resolving deltas: 100% (128923/128923), done.
    Checking out files: 100% (2914/2914), done.
    pi@RaPiM2 ~ $ 
    

    Looking at it:

    drwxr-xr-x 14 pi   pi    4096 Aug  1 15:34 tails
    pi@RaPiM2 ~ $ du -s tails
    93276	tails
    pi@RaPiM2 ~ $ 
    pi@RaPiM2 ~ $ 
    pi@RaPiM2 ~ $ ls tails
    auto        COPYING   ikiwiki-cgi.setup    Rakefile              submodules
    bin         data      ikiwiki.setup        README                vagrant
    build-wiki  debian    import-translations  refresh-translations  wiki
    Changelog   doc       lib                  release
    config      features  po                   run_test_suite
    pi@RaPiM2 ~ $ 
    

    So it’s about 93 MB stored in GIT. ( I think it does a compressed store in the bit blobs it uses). Guess I ought to learn how to suck a chunk out as char… Even just a simple ‘diff’ of one small source code module vs the regular i86 Debian can give you an idea how much change is involved. ( Likely git has a ‘diff’ like function built in, I’ll explore that first, since first ‘check in’ ought to have been the base Debian code.)
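    Just so the intent is clear, these are the sort of stock git one-liners I have in mind for “sizing” the thing (the tag name is only a placeholder; I have not yet checked what tags actually exist in this repository):

    git ls-files | wc -l                 # how many files are actually tracked
    git log --oneline | wc -l            # how many commits of history
    git tag -l | head                    # what release tags exist to diff against
    git diff --stat some-old-tag..HEAD   # rough size of change between a tag and now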

    Simply knowing that the ‘size’ is closer to 90 MB than 900 MB gives clue. It is cut down and minimal (as they claim) with just what is needed to work as mail / browsing, and that isn’t a big chunk of code to port. It also looks like there are human readable bits already, and not everything is stuck in those binary blob things git uses. More on that after some responses to people:

    @P.G.:

Um, just what do you think this posting might be? I know, it’s a bit crumby… ;-)

    @Steve Crook:

Hard to say which of them I liked better. BSD was more “pure”, but things were sometimes harder to do than they ought to be, and limited. ULTRIX seemed to almost always work right, have just what you wanted, and was more complete (even if some of it was ‘impure’…). SunOS somewhere between the two (and not yet the mutant that was System V Consider it Capriciously Changed of Solaris…)

    Then there was Unicos. The “release date” coincides exactly with our first install date at Apple. Why? We had a project that was going to start on COS and migrate when UNICOS was released. Management looked at it, and we decided that the pain and suffering of a conversion would be less than the P&S of a beta release. After a LOT of convincing, Cray shipped our machine SN210 an XMP-48 with Unicos on it. And source code. As we would run into a problem, we would often just fix it and send the patch to Cray. (They had been worried we would cry in our beer and demand service… and I think were surprised at the exact opposite, despite our telling them that…) It was a very very young, not quite complete port at the start. But there was a quality and elegance about bits of it that was intriguing. Then most of it was just bog standard Unix and hard to notice as anything else.

For ULTRIX, it took about 2 years of constant haranguing and swapping one of our VAXEN to BSD while shouting at them that we would do it to all of them and shut off license payments to get them to send us the source code so we could fix bugs ourselves… then they sent it on a box of microfiche… A few months later we finally got the source code on a disk pack IIRC, after pointing out to them that patching and recompiling from film was slow… Why on God’s earth they thought Unix was a trade secret is beyond me (even their enhancements…). Part of what shoved me firmly into the Free Software world. The BSD box was “no problems”… Sun was pretty easy about getting source code too.

Not only did that let us fix bugs and keep our R&D on the fast track, but it also let us “trick out” the OS for security. Patching holes same day, sure, but also planting land mines. Like a “shadow password file” before anyone expected it and a “special” way of becoming root. Get root by any other way and you rang bells in the firehouse. (We had booby trapped just about every “navigation” and “inspection” command like ls, cd, pwd, cat, cp, mv, vi with a check of whether you were ‘root’ and by the proper means…. If you didn’t know the secret sauce, as soon as you rooted and did nearly anything essential, we had folks on pagers and at terminals being alerted… ) I doubt that it ‘spills the beans’ on that old security hack since it’s been 30 years… and the attacks are different now.

So folks would “get root” on our honeypot that was directly internet connected, and by the time they figured out there was a router to the real stuff, we knew their kit and had them. Often inside an hour or so the machines were patched. How I got a record of 7 1/2 years as manager and no break-ins from outside. (Had one jerk employed inside the security perimeter ‘get root’ on one of the VAXen… we detected it almost in real time and I just physically pulled the power plug – he had locked us out of root incidentally – with zero loss of data. Total time he “had root” about 2 minutes… Then looked at the paper logs and went to talk to his manager… Part of our assumptions was that “inside” folks would not “attack”. He claimed he was just exploiting a bug to motivate us. I pointed out that we had heard him when he told us about it, but a bug on an “inside only” machine was not a priority… as we had all those layers in depth… and we’d get to it in about a week. He wasn’t satisfied… Yet his boss understood the cost of the machine in question having been powered down and being out of action for a 1/2 day and “explained it to him”… We never had a problem with him again… )

The Cray had a 1/2 TB data store attached (StorageTek tape robot) and did something called “migration” where inodes stayed on disk, but data blocks that were unused ‘migrated’ out to tape. Still looked like a giant one layer file system, and would auto-retrieve data blocks if needed. Basically using disk as cache for the tape robot. I extended the “df” command (named it “dm” for “disk migrated”) to inspect the inode for information about migrated vs non sizes. So you could get the regular output, or how much was on tape, for any file or file system level. Not exactly a giant bit of systems programming, but it let me get under the skins and look around and do something non-managerial ;-)

    @Programmers:

    I looked one more layer down in the git tree. Here’s the ‘ls’ of that.

    pi@RaPiM2 ~/tails $ ls *
    build-wiki  ikiwiki-cgi.setup    Rakefile              release
    Changelog   ikiwiki.setup        README                run_test_suite
    COPYING     import-translations  refresh-translations
    
    auto:
    build  clean  config  scripts
    
    bin:
    delete-merged-git-branches
    
    config:
    amnesia                binary_rootfs          chroot_local-packageslists
    APT_overlays.d         chroot_apt             chroot_local-patches
    base_branch            chroot_local-hooks     chroot_local-preseed
    binary_local-hooks     chroot_local-includes  chroot_sources
    binary_local-includes  chroot_local-packages
    
    data:
    splash.png
    
    debian:
    changelog  compat  control  copyright  gbp.conf  link  rules
    
    doc:
    examples  README  README.eCAFE
    
    features:
    apt.feature                  ssh.feature
    build.feature                step_definitions
    checks.feature               support
    config                       time_syncing.feature
    dhcp.feature                 tor_bridges.feature
    domains                      tor_enforcement.feature
    electrum.feature             torified_browsing.feature
    encryption.feature           torified_git.feature
    erase_memory.feature         torified_gnupg.feature
    evince.feature               torified_misc.feature
    i2p.feature                  tor_stream_isolation.feature
    images                       totem.feature
    misc_files                   unsafe_browser.feature
    pidgin.feature               untrusted_partitions.feature
    po.feature                   usb_install.feature
    root_access_control.feature  windows_camouflage.feature
    scripts
    
    lib:
    python3
    
    po:
    ar.po  de.po     fr_CA.po  ja.po  pl.po          sk.po     tails.pot
    az.po  el.po     fr.po     km.po  POTFILES.in    sk_SK.po  tr.po
    ca.po  en_GB.po  hr_HR.po  ko.po  POTFILES.skip  sl_SI.po  uk.po
    cs.po  es.po     hu.po     lv.po  pt_BR.po       sq.po     zh_CN.po
    cy.po  fa.po     id.po     nb.po  pt.po          sr.po     zh.po
    da.po  fi.po     it.po     nl.po  ru.po          sv.po     zh_TW.po
    
    submodules:
    jenkins-tools  pythonlib
    
    vagrant:
    definitions  lib  provision  Vagrantfile
    
    wiki:
    lib  src
    pi@RaPiM2 ~/tails $ 
    

    So at a first blush it doesn’t look big, and the number of modules manageable. If git can ‘diff’ vs base Debian (or if ‘patches’ has that info) then it ought to just be a matter of applying them to the Raspbian sources and doing a ‘test / debug / QA’ cycle.

    We’ll see… if slowly…

  4. E.M.Smith says:

    Well, the kinds of things you learn using “new tools”.

    I was wandering the tree looking for where all the binary blobs of source were at. Didn’t see them.

There’s a feature in Unix / Linux land that any file name starting with a ‘.’ is not shown by default in a listing via ‘ls’. Git exploits that to hide what looks like the actual source archive:

    pi@RaPiM2 ~/tails $ ls -a
    .     build-wiki  data      .git               ikiwiki.setup        Rakefile              run_test_suite
    ..    Changelog   debian    .gitignore         import-translations  README                submodules
    auto  config      doc       .gitmodules        lib                  refresh-translations  vagrant
    bin   COPYING     features  ikiwiki-cgi.setup  po                   release               wiki
    pi@RaPiM2 ~/tails $ ls .git
    branches  config  description  HEAD  hooks  index  info  logs  objects  packed-refs  refs
    pi@RaPiM2 ~/tails $ du -ks .[a-z]*
    57880	.git
    4	.gitignore
    4	.gitmodules
    pi@RaPiM2 ~/tails $ 
    

    So 57 MB of the total are hidden from normal view in the .git directory.

    Gee, thanks…

    (One of my personal ‘hotbuttons’ is things that obscure meaning. Like that “rainbow square” that means low volts on the R.PiM2… not obvious… See:
    https://chiefio.wordpress.com/2015/07/22/raspberry-pi-software-setup/#comment-63348 )

    So here we have a source code management system that hides the source code. Yeah, yeah, I know… “The experienced operator will know… ” and seeing things a million times is tedious. But really, hiding the core function? Sigh.

    But at least I did find where the giant Binary Blob is located:

    pi@RaPiM2 ~/tails/.git $ ls objects/
    info  pack
    pi@RaPiM2 ~/tails/.git $ ls -l */*
    -rwxr-xr-x 1 pi pi  452 Aug  1 15:27 hooks/applypatch-msg.sample
    -rwxr-xr-x 1 pi pi  896 Aug  1 15:27 hooks/commit-msg.sample
    -rwxr-xr-x 1 pi pi  189 Aug  1 15:27 hooks/post-update.sample
    -rwxr-xr-x 1 pi pi  398 Aug  1 15:27 hooks/pre-applypatch.sample
    -rwxr-xr-x 1 pi pi 1704 Aug  1 15:27 hooks/pre-commit.sample
    -rwxr-xr-x 1 pi pi 1239 Aug  1 15:27 hooks/prepare-commit-msg.sample
    -rwxr-xr-x 1 pi pi 4898 Aug  1 15:27 hooks/pre-rebase.sample
    -rwxr-xr-x 1 pi pi 3611 Aug  1 15:27 hooks/update.sample
    -rw-r--r-- 1 pi pi  240 Aug  1 15:27 info/exclude
    -rw-r--r-- 1 pi pi  168 Aug  1 15:34 logs/HEAD
    
    logs/refs:
    total 8
    drwxr-xr-x 2 pi pi 4096 Aug  1 15:34 heads
    drwxr-xr-x 3 pi pi 4096 Aug  1 15:34 remotes
    
    objects/info:
    total 0
    
    objects/pack:
    total 57396
    -r--r--r-- 1 pi pi  5457600 Aug  1 15:34 pack-05a782003d99b1fb3d560159e5565f942b23f3d6.idx
    -r--r--r-- 1 pi pi 53313166 Aug  1 15:34 pack-05a782003d99b1fb3d560159e5565f942b23f3d6.pack
    
    refs/heads:
    total 4
    -rw-r--r-- 1 pi pi 41 Aug  1 15:34 master
    
    refs/remotes:
    total 4
    drwxr-xr-x 2 pi pi 4096 Aug  1 15:34 origin
    
    refs/tags:
    total 0
    pi@RaPiM2 ~/tails/.git $ 
    

    Such a nice obvious name – NOT!

    -r--r--r-- 1 pi pi 53313166 Aug 1 15:34 pack-05a782003d99b1fb3d560159e5565f942b23f3d6.pack

    53 MB of inscrutable blob. Looks like “use the tool” is the only way to see anything. But I think I knew that already…
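    (For the record, and mostly as a note to self: the pack can be peeked at with git’s own plumbing without unpacking anything. These are stock git commands, nothing Tails-specific:)

    git verify-pack -v .git/objects/pack/pack-05a782003d99b1fb3d560159e5565f942b23f3d6.idx | head
    git cat-file -p HEAD:Changelog | head    # dump any one tracked file as plain text
    git show HEAD:auto/config | less         # same idea, different porcelain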

  5. E.M.Smith says:

    Well, after more effort than it ought to have taken, I’ve found a way to get a copy of the sources dumped out where I can print, peruse, and diff them without spending too much more time screwing around learning a whole bunch of ‘git’ that is of no use to me.

    From:
    http://alvinalexander.com/git/git-cheat-sheet-git-reference-commands
    I got clue about exporting:

    Git export
    How to do a SVN-like “export” with Git:

    git archive master | bzip2 > project-source.tar.bz2

After screwing around a bit with “git --help” and then “git archive -l” that lists available formats:

    pi@RaPiM2 ~/tails $ git archive -l
    tar
    tgz
    tar.gz
    zip
    pi@RaPiM2 ~/tails $ 
    

    I decided to see if I could get a tar archive dumped:

    pi@RaPiM2 ~/tails $ git archive --format tar master > tails.tar
    pi@RaPiM2 ~/tails $ file tails.tar 
    tails.tar: POSIX tar archive
    pi@RaPiM2 ~/tails $ 
    

    So now I’ve got it to a tar file and can ‘untar’ it into a non-binary-blob where I can then do simple print / listings and do ‘diff’ to compare basic Debian vs Tails vs Raspbian sources for any given chunk of code.
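    Roughly what I have in mind (the Raspbian path is just a placeholder for wherever a comparison tree ends up living):

    mkdir tails-src
    tar -xf tails.tar -C tails-src
    ls tails-src
    diff -ru /some/raspbian/tree/etc tails-src/config/chroot_local-includes/etc | less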

    UPDATE: After looking in the untarred blob, I’m not seeing the actual source listings. It may be I need a different set of options to my dump command or that I’ve just not looked in the right places. But this is not a ‘done deal’ yet. Why folks have to be so obscure about things is beyond me. A little care can make things much more transparent. End update.

    There may well be some more elegant way to do this using “git” built in facilities, but frankly cloning git repositories and spending the weekend learning to use git like an expert are not on my “todo” list at the moment. I’m much happier to just be able to “diff wget ../raspbian/wget” and see what all changes. Just “sizing the issue”, not needing to do a lot of edit, change, commit, push, etc. etc. etc…

    We’ll see what surprises I run into as I actually unpack the tar file and look at what is really in it ;-)

BTW, that there are essentially no directions for how to just “look at the source for a specific program” or how to “compare a listing of foo.c with /not-git-archive/source/foo.c” is very discouraging. It either means that the builders and users of git don’t do that much / ever, or that they think it beneath them to point out how to do such trivial things… or worse, question the ability of anyone who wants to do that instead of “git --obscurity-flag {command not documented well} [option you never heard of]”.

    Oh Well. Comes with the turf in SysProg land…

    @P.G.:

    FWIW, the “manager of systems programmers” is expected to use these kinds of tools in just this way (even if limited) to be able to monitor the work of others, and to size the work so decent project schedule and resource decisions can be made. If you ever wondered what a Linux Project Manager does (even if with a bit more technical fooling around than their hired expert sysprogs) this is it.

    Part of why I’m “bothering” to do this is just to update my skilz so that in an interview when someone says “We use git here” I don’t flinch or have to say “I’ve never used that” but instead say “I used it a little when looking at porting Tails to the Raspberry Pi.” MUCH more marketable answer…

  6. E.M.Smith says:

    Interesting….

    I may have been working on a false assumption. I’d assumed there were base listings of the C code for each module used. It looks like they may instead just point at the Debian archives and give “patches” to them for what they change. If so, that might well make for a very easy port to Raspbian (just port the patch and resolve conflicts… and repoint what archives are used for the base…)

    I know, I’m speculating at this point. Would be easier for someone who had any clue at all about how the Tails build is done. ( Or I could read their intro documents, but where’s the fun in that ;-)

    In the “debian” directory, there is a file named gbp.conf that contains:

    # Configuration file for git-buildpackage and friends

    [DEFAULT]
    debian-branch = master
    debian-tag = %(version)s

    [git-dch]
    full = True
    auto = True
    git-log = --no-merges
    snapshot-number = snapshot + 1
    id-length = 0
    meta = False

    and one named “control” that says:

    pi@RaPiM2 ~/tails/EMS/debian $ more control
    Source: tails
    Section: web
    Priority: optional
    Build-Depends: debhelper (>= 7.0.15)
    Build-Depends-Indep: dpkg-dev (>= 1.14.25)
    Maintainer: Tails developers
    Standards-Version: 3.8.1
    Homepage: https://tails.boum.org/
    Vcs-Browser: https://git-tails.immerda.ch/tails
    Vcs-Git: git://git.tails.boum.org/tails

    Package: tails
    Architecture: all
    Depends: ${misc:Depends},
    dpkg-dev,
    eatmydata,
    gettext,
    ikiwiki (>= 3.20110431),
    libyaml-perl,
    libyaml-yaml-perl,
    live-build (>= 2.0.3),
    perlmagick,
    po4a,
    syslinux (>= 2:4.01+dfsg-1),
    time,
    whois
    Conflicts:
    libyaml-libyaml-perl
    Description: a Tor-ified, amnesic Live System
    The Amnesic Incognito Live System is aimed at preserving its users’ privacy:
    - any outgoing connection to the Internet is forced to go through
    the Tor network
    - no trace is left on local storage devices unless explicitly asked
    .
    This package provides the source and tools that are necessary to build
    such a Live System using Debian Live.

    That seems to be pointing to yet another git source. Then there are the diffs and patches in config. This is chroot_local-patches/dhcp-dont-sent-hostname.diff

A leading - means a line removed, a + means a line added:

    diff -Naur orig/etc/dhcp/dhclient.conf new/etc/dhcp/dhclient.conf
    --- orig/etc/dhcp/dhclient.conf  2014-07-31 22:31:11.363605131 +0200
    +++ new/etc/dhcp/dhclient.conf  2014-07-31 22:31:43.535349519 +0200
    @@ -14,7 +14,8 @@
     option rfc3442-classless-static-routes code 121 = array of unsigned integer 8;
     
     #send host-name "andare.fugue.com";
    -send host-name = gethostname();
    +#send host-name = gethostname();
    +supersede host-name "amnesia";
     #send dhcp-client-identifier 1:0:a0:24:ab:fb:9c;
     #send dhcp-lease-time 3600;
     #supersede domain-name "fugue.com home.vix.com";
    diff -Naur orig/etc/NetworkManager/NetworkManager.conf new/etc/NetworkManager/NetworkManager.conf
    --- orig/etc/NetworkManager/NetworkManager.conf  2014-07-31 22:31:19.347541763 +0200
    +++ new/etc/NetworkManager/NetworkManager.conf  2014-07-31 22:31:58.823227808 +0200
    @@ -1,5 +1,8 @@
     [main]
    -plugins=ifupdown,keyfile
    +plugins=keyfile
     
     [ifupdown]
     managed=false
    +
    +[ipv4]
    +dhcp-send-hostname=false

    And here is the cupsd-IPv4_only.patch:

    --- chroot.orig/etc/cups/cupsd.conf  2012-05-13 11:48:30.860005431 +0000
    +++ chroot/etc/cups/cupsd.conf  2012-05-13 11:48:38.600005570 +0000
    @@ -17,7 +17,7 @@
     
     # Only listen for connections from the local machine.
    -Listen localhost:631
    +Listen 127.0.0.1:631
     Listen /var/run/cups/cups.sock
     
     # Show shared printers on the local network.

So IFF (and it’s a big IF) I’m understanding what I’m seeing correctly, Tails mostly just points at a Debian base release, adds a “change root” or chroot environment, removes a whole lot of other software that will not be used (and the security holes they have) and patches or changes a lot of files to prevent data leakage. I.e. not a whole lot of custom compiled C code that I’ve found.

    OK, IFF that’s true, it ought to be “not too hard” to point at the Raspbian base instead and “give it a go”.
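    As a concrete (if untested) sketch of what “give it a go” means here, with a Raspbian tree unpacked under ./chroot and the patches pulled from the Tails config tree (paths are illustrative only):

    cd chroot
    # dry run first; both patches above use one leading path component, so -p1:
    patch -p1 --dry-run < ../tails/config/chroot_local-patches/cupsd-IPv4_only.patch
    # if that reports no rejects, apply for real:
    patch -p1 < ../tails/config/chroot_local-patches/cupsd-IPv4_only.patch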

    I’m going to “take a break” with some garden time and ponder this with my bunny. (She often has very good advice about priorities…. like “Bean leaf fetching really is #1, you know” ;-) After that, depending…, I may actually read the build documentation for Tails and see what it says. Now that I’ve unseated my preconceived notions ;-)

    It’s looking to me like Tails is relatively comprehensible in what it does, and not all that hard to work through.

    BTW, as a side note, the first entry in the “changelog” says it started from Privatix as a base, but has since replaced almost all that code with Debian. Interesting point. Wonder if a look at Privatix source / build system would be interesting…

    amnesia (0.1) UNRELEASED; urgency=low

    * Forked Privatix 9.03.15, by Markus Mandalka:
    http://mandalka.name/privatix/index.html.en
    Everything has since been rewritten or so heavily changed that nothing
    remains from the original code… apart of a bunch of Gnome settings.
    * hardware support:
    - install a bunch of non-free wifi firmwares
    - install xsane and add the live user to the scanner group
    - install aircrack-ng
    - install xserver-xorg-video-geode on i386 (eCafe support)
    - install xserver-xorg-video-all
    - install firmware-linux from backports.org
    - install system-config-printer
    - added instructions in README.eCAFE to support the Hercules eCAFE EC-800
    netbook

  7. Larry Ledwick says:

    Our developers use git to manage code at work. If you have any issues with it ping me at my work email and I will put you in touch with the guy that sets up git and keeps it clean (ie fixing broken branches and enforcing proper methods — sort of like herding cats).
    I personally never touch it as it is outside the scope of the team I work on.

  8. E.M.Smith says:

    @Larry:

    Thanks! I think I’ve gotten over the worst of it. Set up a git repo via a clone. Wandered in it a bit. Found out how to dump a tar archive I can work with. Extracted that.

At this point the next large chunk of effort is just coming to understand how Tails is made. I’m over the first big hump there, in that I’ve had the AhHa moment of realizing it is mostly a config / diff thing and not a libs and C compiles thing. MUCH easier to deal with. (Largely because sysadmin and configs of a system are the sweet spot of my ‘wheel house’…. ) It also started from Privatix, and while that project is officially “abandoned” I was able to download the source code / how to build bits for it.

This puts me in the position to relatively easily take a Raspbian core, and start to tighten it up. First goal being the Privatix level (as that was where Tails started) and secondary goal being “full on Tails” ( I need to find a way to say that and not snicker ;-)

    Basically plotting out the steps to get from Raspbian to Tails one chunk of mods at a time. At that point a project plan can be laid out and shared. And it escapes my control… but perhaps some residual influence ;-) Or if nobody cares, it will be just me and the RaPiM2 making a portable hard core $60 NSA middle finger…

    (Hey, NSA guy! Nothing personal. I sent you guys my resume post 9-11 and it was dead air; then you started sucking up everything that was private, even mine – Supreme Court finding of an essential Right Of Privacy be damned – so I’m kinda forced into this. But for a call from a recruiter, I’d be in your chair now, so “I get it”. If ever an IRL moment happens, Mocha is on me – or IPA if off duty.)

Maybe someday I’ll need to do the whole “check out / check in” thing in GIT, but that’s a long way in the future at this point. Since RaTails is started from an entirely different code base (Raspbian) it isn’t exactly a “fork” of Tails and is more like a new beginning… Rather like Tails isn’t a fork of Privatix even though that is where it started. It will likely take a year or two before I’d be to the point where a “merge” into Tails could be remotely considered; and even then it’s more like a merge of brand names than of tech.

    Or it might just drown in the bottle of Sangria I’ve just opened as I find a new muse and realize this isn’t much value to the world and nobody but me really finds it of use or interest… Life is like that sometimes. So I’ll cast this bread upon the water and the fish and I will ponder who is the Evil Bastard, where is the hook, and do we have any common ground in a water world…

The one really good bit of news, for me, is just that with maybe 4 work hours today (interleaved with garden, bunny, shopping, dinner prep, and now Sangria) I’ve gotten rather far. I’ve gone from the taproot in Privatix (source code in hand), to the current source GIT repository cloned and inspected, to an extract made and the code base perused to enough depth to have some clue about it. Not bad, IMHO.

    Just think what you will “know” tomorrow.
    (“K” from Men In Black)….

  9. Paul Hanlon says:

    Hi Chiefio,

I’d definitely persevere with what you’re doing. It hasn’t been done by anyone else. And I think you’re on the right track in that it is system admin and config changes mostly. From the Tails website at https://tails.boum.org/contribute/how/code/:

    Our focus on low-effort maintainability has practical consequences.

    First of all, we tend to carry the smallest possible delta with our upstreams (i.e. upstream software and Debian). For details about this, read our relationship with upstream statement. Moreover, we encourage you to improve Tails by working on Debian.

    Second, we try not to reinvent the wheel, and we flee the Not invented here syndrome like the plague. Very little code is actually written specifically for Tails: most of what we call code work on Tails is more similar to system administration than it is to programming. We glue existing pieces together. When we need a feature that no software provides yet, we tend to pick the best existing tool, and do whatever is needed to get the needed feature upstream… which sometimes implies to write a patch ourselves.

If you do plan on sharing it formally with people then you’re likely going to have to learn Git as it is the preferred medium for this sort of thing. There is a book called Pro Git that is free. Terse, but it covers all the commands.

    You’re right about the info out on the web. The “sexy” bit of Git is branching and merging so you get lots of stuff on that (and a lot of that badly written or explained), but less on the simple basics of using Git. That is part of the reason I’m porting this to a web interface, so that I can learn it once, but not have to use the various commands on a regular basis.

Toughest bit I found was the concept of the staging area, a separate “place” where files become tracked. In reality, it is where pointers to the original files are kept. When you commit, you are just committing the differences between the last and current iterations of a file. There’s also “plumbing”, the low level commands, and “porcelain”, the high level commands like add, commit, push, merge, etc. But normally, you won’t need to go into that. It was originally written almost entirely in bash scripts, but most of it has since been ported to C.
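    For instance, the everyday porcelain round trip is really only this (nothing Tails-specific, just generic Git):

    git add auto/config                   # stage the change (put it in the index)
    git status                            # see what is staged versus merely modified
    git commit -m "tweak config for ARM"  # record what was staged
    git diff HEAD~1 HEAD                  # show what that commit actually changed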

As Larry alluded to up-thread, on big teams you will have a person (or persons) dedicated just to managing the Git repository. Well paid too.

  10. E.M.Smith says:

    @Paul:

    Thanks for the encouragement. That quote definitely does reflect what I saw, and is also encouraging.

    GIT is one of those things I’ve dreaded. Yet Another Complicated Tool that’s become trendy. I’ve been through so many of them now. But since it looks like this one is sticking, I need to know some of it. (As you pointed out, not a lot, but enough to deal with the specialist, or maybe manage them…).

    The good thing is that as a manager sort, you just need a basic working ability and general understanding of scope of function. The bad thing is that you need it over all the skill areas of your staff…

    In any case, I’m learning enough of it to be basically functional. Like it or not.

  11. Paul Hanlon says:

    With you on that bro’. It’s just another layer of bureaucracy (for the most part) getting between you and what you’re interested in. For small projects, it’s overkill, but for larger projects with “Agile Development” (continuous updates to live code) methods, it’s absolutely vital.

    Same with unit testing, where a program is run through a pile of pass/fail tests (basically assert()) before it is committed. It can get to a point where you spend more time writing the tests than you do actually coding. But what these do provide is a way of measuring the whole process, and replicating it, so for a software house it’s worth it.

    But yes, the constant need to update skillsets just so you can do exactly what you were doing before is wearying. Progress, I suppose, but an awful lot of “value” destroyed on the way.

  12. Larry Ledwick says:

There is also a non-https link for the git stuff at:
    http://git.tails.boum.org/tails

  13. E.M.Smith says:

    @Larry:

    Interesting link. Shows what is happening in their Git world. Found the diff tab interesting too.

    @Paul:

In that same vein… Looking through the docs on building your own Tails version has cemented my initial impressions. In large part it is just making a “Live-CD” version of Debian (minus a bunch of stuff you don’t need and without writes to the CD attempted) with a few special communications tools that run through The Onion Router (TOR).
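    (Aside: the “runs through TOR” part is easy enough to sanity check once a box is up, assuming the stock Tor SOCKS port of 9050. A generic check, not something the Tails build itself does:)

    curl --socks5-hostname 127.0.0.1:9050 https://check.torproject.org/ | grep -i congratulations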

It really is fairly simple once you get past a lot of the semi-capricious complexity. ( AKA stuff that “jus’ Growed” over the years, undoubtedly). Things like having TWO build methods documented (one in a VM world using things like vagrant and rake…) and one that is more “long hand” without a VM. Yet that manual process has 2 different ways to cache the Debian repositories…

    It’s the old exponential growth of complexity via bifurcations problem…

Like why use “rake” that I’ve never heard of? Because someone was a Ruby programmer and didn’t like the “make” syntax so invented “Ruby Make” aka “rake” and these folks liked it… So now I get to learn “Ruby” and “Ruby-Make” syntax to figure out what is going on… hope it isn’t too different from the thousand other things I’ve had to soak up in a few hours over the last decades… Besides, why wouldn’t I want to learn “Yet Another Syntax For Doing A Make”.. sigh or sarc; as you see fit…

So conceptually (and in terms of total work) it is a little harder than making a custom Live-CD, since basically you ARE making a Live-CD with some custom bits; but some of those custom bits are themselves a bit more complicated than the typical custom desktop build. OK, I’m good with that… Especially given that TOR is already ported to the Pi:

    https://learn.adafruit.com/onion-pi

Feel like someone is snooping on you? Browse anonymously anywhere you go with the Onion Pi Tor proxy. This is a fun weekend project that uses a Raspberry Pi, a USB WiFi adapter and Ethernet cable to create a small, low-power and portable privacy Pi.

    Using it is easy-as-pie. First, plug the Ethernet cable into any Internet provider in your home, work, hotel or conference/event. Next, power up the Pi with the micro USB cable to your laptop or to the wall adapter. The Pi will boot up and create a new secure wireless access point called Onion Pi. Connecting to that access point will automatically route any web browsing from your computer through the anonymizing Tor network.

    Who is this good for?
    If you want to browse anonymously on a netbook, tablet, phone, or other mobile or console device that cannot run Tor and does not have an Ethernet connection. If you do not want to or cannot install Tor on your work laptop or loan computer. If you have a guest or friend who wants to use Tor but doesn’t have the ability or time to run Tor on their computer, this gift will make the first step much easier.

    The $69 hardware kit: https://www.adafruit.com/products/1410

    and that I’ve found 2 different “Make a Live-CD type Raspbian” builds already done (by folks who wanted to halt SD card corruption on power fails and / or be able to swap out the SD card once the system was booted.)

    https://github.com/simonpoole1/raspbian-live-build
    https://www.raspberrypi.org/forums/viewtopic.php?f=66&t=91537&p=639924

    So at this point it is largely looking like an integration / merge of those three projects.

    Take one of the Pi Live builds. Use their “lessons learned” to modify the Tails Build. Test / debug. Look at the Onion Pi build for any lessons learned (and / or) if the Tails build hangs on it) and integrate. Test / fix / test / package / Beta / feedback -fix / test / package / ship…

    Probably about 1 long weekend for someone experienced with the parts in motion… I’d guess about a solid week for me (or about a month elapsed as I don’t have a ‘solid week’). But not onerous in any case.

  14. E.M.Smith says:

    Well the ‘tar’ extract seems to be identical to the “clone” minus the .git archive. Looks like there may be some other finesse to all this. I did find a note saying “probably don’t want master” and implying that the interesting bits were in devo or wherever… so a bit more polish needed to understand just what is happening there.

    Wandering through the extract (or the clone) basically shows that it’s a build of a Debian Live with some custom bits added in. The major build command is a “lb build” or an “lb config” to configure it. The “magic sauce” at the starting point looks to be in a directory named “auto”.

    pi@RaPiM2 ~/tails/auto $ ls
    build clean config scripts
    pi@RaPiM2 ~/tails/auto $

    That looks to be where “lb build”, “lb config”, and “lb clean” get their marching orders. “scripts” has a small collection of other bits that get done.

    pi@RaPiM2 ~/tails/auto $ ls scripts
    ikiwiki-supported-languages packages-from-acng-log tails-custom-apt-sources
    pi@RaPiM2 ~/tails/auto $

    So some language fix-up, no idea yet what “acng” is or why it is logged, and then the bit that likely does what little custom sources are used.
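    If I’m reading live-build right (I haven’t verified this against the Tails docs yet, so treat it as my working assumption), the “lb” wrapper looks for these auto/ scripts and runs them, so the whole dance from the top of the tree is just:

    cd ~/tails
    lb clean     # runs ./auto/clean (listed below)
    lb config    # runs ./auto/config, with any extra arguments passed through
    lb build     # runs ./auto/build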

    Primarily the builds all look to the outside somewhere for sources in git archives. A key issue I need to overcome is how to get those sources out into a file where I can look at and edit them with “vi” (or any other editor). I’ll get there, but this way of doing builds sure hides the basic things like “get a copy and read it”. I know, I know: The experienced programmer will know… (but not when they keep moving the cheese every couple of years…)
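    For just reading the underlying Debian code outside of git, the old-fashioned route still works (assuming deb-src lines are enabled in /etc/apt/sources.list; the package name here is only an example):

    apt-get source bash     # unpacks the Debian source tree for 'bash' right here
    ls bash-*/
    # then plain old vi / less / diff on the result, no git required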

    The “clean” script typically cleans up left over crap after a build. Usually don’t need to really care about it much, but worth checking to see where it thinks other bits leave crap:

(Note that I make no warranty that I will have spotted all the places WordPress might have stolen text due to angle brackets < >.)

    pi@RaPiM2 ~/tails/auto $ cat clean
    #!/bin/sh
    
    set -x
    
    for dir in chroot/{dev/pts,proc,sys,var/lib/dpkg} ; do
        if mountpoint -q "$dir" ; then
            umount "$dir"
        fi
    done
    
    lb clean noauto ${@}
    
    # rm -f build-*.log
    
    # Remove generated files
    rm -f config/binary config/bootstrap config/chroot config/common config/source
    
    # Remove empty directories in config tree
    if ls config/*/ > /dev/null 2>&1 ; then
        rmdir --ignore-fail-on-non-empty config/*/
    fi
    
    # files copied or created in the config stage
    rm -f config/chroot_local-includes/etc/amnesia/environment
    rm -f config/chroot_local-includes/etc/amnesia/version
    rm -f config/chroot_local-includes/usr/share/doc/amnesia/Changelog
    for list in config/chroot_local-packageslists/*.list ; do
        if [ "$list" != 'config/chroot_local-packageslists/tails-common.list' ]; then
            rm -f "$list"
        fi
    done
    
    # files copied or created in the build stage
    rm -f config/chroot_local-includes/usr/share/amnesia/build/variables
    
    # static wiki
    rm -rf config/chroot_local-includes/usr/share/doc/tails/website wiki/src/.ikiwiki
    find wiki/src -name *.pot -exec rm {} \;
    pi@RaPiM2 ~/tails/auto $

    So it looks like the “config/chroot_local*” space is where work is done.

    pi@RaPiM2 ~/tails/auto $ ls ../config
    amnesia binary_local-includes chroot_local-includes chroot_local-preseed
    APT_overlays.d binary_rootfs chroot_local-packages chroot_sources
    base_branch chroot_apt chroot_local-packageslists
    binary_local-hooks chroot_local-hooks chroot_local-patches
    pi@RaPiM2 ~/tails/auto $

    So now we know what all those are for.

    Looking at “./auto/config” script, there is evidence for how to build an alternative architecture, so adding ARM in many ways will just be duplicating that (perhaps with some odd bits, there are always odd bits…)

    pi@RaPiM2 ~/tails/auto $ cat config
    #! /bin/sh
    # automatically run by "lb config"
    
    set -x
    
    # we require building from git
    if ! git rev-parse --is-inside-work-tree; then
        echo "${PWD} is not a Git tree. Exiting."
        exit 1
    fi
    
    . config/amnesia
    if [ -e config/amnesia.local ] ; then
        . config/amnesia.local
    fi
    
    export LB_BOOTSTRAP_INCLUDE='eatmydata'
    
    # init variables
    RUN_LB_CONFIG="lb config noauto"
    
    # init config/ with defaults for the target distribution
    $RUN_LB_CONFIG --distribution wheezy ${@}
    
    # set Amnesia's general options
    $RUN_LB_CONFIG \
        --apt-recommends false \
        --backports false \
        --binary-images iso \
        --binary-indices false \
        --checksums none \
        --bootappend-live "${AMNESIA_APPEND}" \
        --bootstrap "cdebootstrap" \
        --archive-areas "main contrib non-free" \
        --includes none \
        --iso-application="The Amnesic Incognito Live System" \
        --iso-publisher="https://tails.boum.org/" \
        --iso-volume="TAILS ${AMNESIA_FULL_VERSION}" \
        --memtest none \
        --mirror-binary "http://ftp.us.debian.org/debian/" \
        --mirror-bootstrap "http://ftp.us.debian.org/debian/" \
        --mirror-chroot "http://ftp.us.debian.org/debian/" \
        --packages-lists="standard" \
        --tasks="standard" \
        --linux-packages="linux-image-3.16.0-4" \
        --syslinux-menu vesamenu \
        --syslinux-splash data/splash.png \
        --syslinux-timeout 4 \
        --initramfs=live-boot \
        ${@}
    
    # build i386 images on amd64 as well, include a bunch of kernels
    hw_arch="`dpkg --print-architecture`"
    if [ "$hw_arch" = i386 -o "$hw_arch" = amd64 ]; then
        $RUN_LB_CONFIG \
            --architecture i386 \
            --linux-flavours "586 amd64" \
            ${@}
    # build powerpc images on powerpc64 as well, include only powerpc kernel
    elif [ "$hw_arch" = powerpc -o "$hw_arch" = powerpc64 ]; then
        $RUN_LB_CONFIG \
            --architecture powerpc \
            --linux-flavours powerpc \
            ${@}
    fi
    
    install -d config/chroot_local-includes/etc/amnesia/
    
    # environment
    TAILS_WIKI_SUPPORTED_LANGUAGES="$(ikiwiki-supported-languages ikiwiki.setup)"
    [ -n "$TAILS_WIKI_SUPPORTED_LANGUAGES" ] || exit 16
    echo "TAILS_WIKI_SUPPORTED_LANGUAGES='${TAILS_WIKI_SUPPORTED_LANGUAGES}'" \
        >> config/chroot_local-includes/etc/amnesia/environment
    
    # version
    echo "${AMNESIA_FULL_VERSION}" > config/chroot_local-includes/etc/amnesia/version
    if git rev-list HEAD 2>&1 >/dev/null; then
        git rev-list HEAD | head -n 1 >> config/chroot_local-includes/etc/amnesia/version
    fi
    echo "live-build: `dpkg-query -W -f='${Version}\n' live-build`" \
        >> config/chroot_local-includes/etc/amnesia/version
    # os-release
    cat >> config/chroot_local-includes/etc/os-release < config/chroot_local-includes/usr/share/amnesia/readahead-list
    fi
    
    # custom APT sources
    tails-custom-apt-sources > config/chroot_sources/tails.chroot
    pi@RaPiM2 ~/tails/auto $

Note those lines with PowerPC listed. That’s a very different chip from the two Intel architectures.

This implies the build system is already all set up to use alternative chips. Just need to swap what chip. (A “quick and dirty” test could be done just by changing PowerPC to ARM where it shows up, leaving all the logic constant. That would show where there is no library or git archive or ‘whatever’ for that chipset / arch in other scripts that expect one. Yeah, a hack. But a useful one…) To the extent that Raspbian isn’t in the Debian sources and is in its own repository, we might need to change where these lines point:

    --mirror-binary "http://ftp.us.debian.org/debian/" \
    --mirror-bootstrap "http://ftp.us.debian.org/debian/" \
    --mirror-chroot "http://ftp.us.debian.org/debian/" \
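    Purely as a thought experiment (none of this is tested, the kernel flavour name and the Raspbian mirror are my guesses, and live-build may well want more than this), the shape of an added branch in that if/elif ladder might be something like:

    # build armhf images on a Raspberry Pi style host
    elif [ "$hw_arch" = armhf ]; then
        $RUN_LB_CONFIG \
            --architecture armhf \
            --linux-flavours rpi \
            --mirror-binary "http://archive.raspbian.org/raspbian/" \
            --mirror-bootstrap "http://archive.raspbian.org/raspbian/" \
            --mirror-chroot "http://archive.raspbian.org/raspbian/" \
            ${@}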

    We also see from the very top that it expects GIT trees and nothing else will do.

    Same thing in the build script. I won’t post that whole thing here as it is about twice as large as the above set. It also has a similar structure and expectations ( i.e. about GIT).

    Here is the chunk at the start of the main build process:

    ### Main
    
    # we require building from git
    git rev-parse --is-inside-work-tree &> /dev/null \
        || fatal "${PWD} is not a Git tree."
    
    . config/amnesia
    if [ -e config/amnesia.local ] ; then
        . config/amnesia.local
    fi
    
    # a clean starting point
    rm -rf cache/stages_rootfs
    
    # get LB_BINARY_IMAGES
    . config/binary
    
    # get LB_ARCHITECTURE and LB_DISTRIBUTION
    . config/bootstrap
    
    # save variables that are needed by chroot_local-hooks
    echo "LB_DISTRIBUTION=${LB_DISTRIBUTION}" >> config/chroot_local-includes/usr/share/amnesia/build/variables
    echo "POTFILES_DOT_IN='$(
        /bin/grep -E --no-filename '[^ #]*\.in$' po/POTFILES.in \
        | sed -e 's,^config/chroot_local-includes,,' | tr "\n" ' '
    )'" \
        >> config/chroot_local-includes/usr/share/amnesia/build/variables

    So we can see that when there are build problems (and maybe even before a test build) there are some bits in the “chroot” area that might need changing / tuning. Like …”build/variables”.

It then sets permissions on a bunch of files, and I’m skipping that text. Then it builds the “squash file system” image of Tails (the all-$CAPS things are variables substituted at run time):

    # build the image
    
    : ${MKSQUASHFS_OPTIONS:='-comp xz -Xbcj x86 -b 1024K -Xdict-size 1024K'}
    MKSQUASHFS_OPTIONS="${MKSQUASHFS_OPTIONS} -wildcards -ef chroot/usr/share/amnesia/build/mksquashfs-excludes"
    export MKSQUASHFS_OPTIONS
    
    # get git branch or tag so we can set the basename appropriately, i.e.:
    # * if we build from a tag: tails-$ARCH-$TAG.iso
    # * if we build from a branch: tails-$ARCH-$BRANCH-$VERSION-$DATE.iso
    # * if Jenkins builds from a branch: tails-$ARCH-$BRANCH-$VERSION-$TIME-$COMMIT.iso
    if GIT_REF="$(git symbolic-ref HEAD)"; then
        GIT_BRANCH="${GIT_REF#refs/heads/}"
        CLEAN_GIT_BRANCH=$(echo "$GIT_BRANCH" | sed 's,/,_,g')
        if [ -n "$JENKINS_URL" ]; then
            GIT_SHORT_ID="$(git rev-parse --short HEAD)"
            BUILD_BASENAME="tails-${LB_ARCHITECTURE}-${CLEAN_GIT_BRANCH}-${AMNESIA_VERSION}-${AMNESIA_NOW}-${GIT_SHORT_ID}"
        else
            BUILD_BASENAME="tails-${LB_ARCHITECTURE}-${CLEAN_GIT_BRANCH}-${AMNESIA_VERSION}-${AMNESIA_TODAY}"
        fi
    else
        GIT_CURRENT_COMMIT="$(git rev-parse HEAD)"
        if GIT_TAG="$(git describe --tags --exact-match ${GIT_CURRENT_COMMIT})"; then
            CLEAN_GIT_TAG=$(echo "$GIT_TAG" | tr '/-' '_~')
            BUILD_BASENAME="tails-${LB_ARCHITECTURE}-${CLEAN_GIT_TAG}"
        else
            # this shouldn't reasonably happen (e.g. only if you checkout a
            # tag, remove the tag and then build)
            fatal "Neither a Git branch nor a tag, exiting."
        fi
    fi

The reference to “jenkins” is to a “build server” regression testing product:

    http://www.yolinux.com/TUTORIALS/Jenkins.html

    So either “yet another thing to learn” or an option I can ignore for now… just like the build in a virtual machine environment…

There is then a bunch of merges of branches that, once it is sorted out where Raspbian and any new / changed TOR bits live, might need changing (or just different branches or tags or…)

It also includes a call to “./build-wiki” to build the docs wiki. Followed by a ‘refresh’ of translations (found in the ‘po’ directory as ‘portable objects’). Then it gets to the meat of it.

    case "$LB_BINARY_IMAGES" in
        iso)
            BUILD_FILENAME_EXT=iso
            BUILD_FILENAME=binary
            which isohybrid >/dev/null || fatal 'Cannot find isohybrid in $PATH'
            installed_syslinux_utils_upstream_version="$(syslinux_utils_upstream_version)"
            if dpkg --compare-versions \
                "$installed_syslinux_utils_upstream_version" \
                'lt' \
                "$REQUIRED_SYSLINUX_UTILS_UPSTREAM_VERSION" ; then
                fatal \
                    "syslinux-utils '${installed_syslinux_utils_upstream_version}' is installed, " \
                    "while we need at least '${REQUIRED_SYSLINUX_UTILS_UPSTREAM_VERSION}'."
            fi
            ;;
        iso-hybrid)
            BUILD_FILENAME_EXT=iso
            BUILD_FILENAME=binary-hybrid
            ;;
        tar)
            BUILD_FILENAME_EXT=tar.gz
            BUILD_FILENAME=binary-tar
            ;;
        usb-hdd)
            BUILD_FILENAME_EXT=img
            BUILD_FILENAME=binary
            ;;
        *)
            fatal "Image type ${LB_BINARY_IMAGES} is not supported."
            ;;
    esac
    BUILD_DEST_FILENAME="${BUILD_BASENAME}.${BUILD_FILENAME_EXT}"
    BUILD_MANIFEST="${BUILD_DEST_FILENAME}.list"
    BUILD_PACKAGES="${BUILD_DEST_FILENAME}.packages"
    BUILD_LOG="${BUILD_DEST_FILENAME}.buildlog"
    BUILD_START_FILENAME="${BUILD_DEST_FILENAME}.start.timestamp"
    BUILD_END_FILENAME="${BUILD_DEST_FILENAME}.end.timestamp"

    Then it goes into clean up.

    So that’s the core of it, and where it all starts. But in terms of actually looking at sources, it’s not so easy. The Debian source is all in the Debian archives, but it will be a bit of ‘go fish’ to figure out just what to get, get just that, and look at it. Probably don’t need to most of the time, but… for a security review you kinda hafta (no alternative, really) read THE sources… While this makes the build easier, it (IMHO) introduces a minor risk at build time that a “Man In The Middle” attack could be done by a TLA (Three Letter Agency)…
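
    For the Debian side of that ‘go fish’, a minimal sketch of pulling and checking one package’s source (the package name is just an example, and this assumes deb-src lines are enabled in sources.list):

    sudo apt-get install devscripts debian-keyring
    apt-get source tor    # fetches the .dsc plus tarballs and unpacks them
    dscverify --keyring /usr/share/keyrings/debian-keyring.gpg tor_*.dsc    # check the signature chain

    As I understand it, apt-get source also checks what it downloads against the signed archive metadata, which takes some of the sting out of the MITM-at-build-time worry.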

    Then there’s still the question of Tails and TOR specific sources and modifications. While I think I know where those are now (and I’m going there next… “features”); that’s still TBD.

    Overall, the “project” is looking “not that hard” and is mostly overcomplicated by an intricate build process with lots of “options” on how to build. In a VM, or not. With Jenkins, or not. Etc. (or not…)

    Time for coffee and then the next chunk…

  15. E.M.Smith says:

    Oh heck, didn’t make it to coffee… looked in ‘scripts’ instead. Just as an FYI, the one about “acng” starts off by announcing that it is Perl, so you get to read Perl… Yet Another Trendy Language…

    #!/usr/bin/perl

    use strict;
    use warnings FATAL => 'all';
    use 5.10.1;

    use autodie;
    use Carp::Assert;
    use Carp::Assert::More;
    use IO::All;
    use List::MoreUtils qw{uniq};

    my $usage = "Usage: $0 ACNG_LOG IP EPOCH_START EPOCH_END [OUTPUT_BIN_PKGS OUTPUT_SRC_PKGS]";

    I’ve used Perl a little, and read a book or two on it, so can usually puzzle out what it’s doing… I hope… (it has some tricky bits…)

    The key part seems to be here:

    ### Extract urls from loglines within the build time range

    my @urls = map {
        url_in_logline($_)
    } interesting_loglines(
        $logfile, $client_ip, $epoch_start, $epoch_end
    );

    It is fishing around to find URLs that are handed off to something else in print commands further down. I think…

    So this one mostly looks like a ‘go fish and reformat’ bit of glue somewhere in the build process. One would need to watch out for changes to the URLs in whatever writes those log lines, and make sure they don’t screw up whatever consumes this script’s output “wherever” it gets called…
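
    If all you want is a quick sanity check of what that Perl is up to, a rough shell approximation (the log location and line layout are assumptions here, so adjust to what apt-cacher-ng actually writes) would be something like:

    ACNG_LOG=/var/log/apt-cacher-ng/apt-cacher.log    # assumed default location
    CLIENT_IP=192.168.1.10                            # the build host, per the usage line above
    grep "$CLIENT_IP" "$ACNG_LOG" \
    | grep -Eo '[^ ]+\.(deb|dsc|tar\.[a-z0-9]+)' \
    | sort -u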

    tails-custom-apt-sources is the script that points the build at the remote repository holding the Tails-specific bits (it emits the APT source lines for it):

    pi@RaPiM2 ~/tails/auto/scripts $ cat tails-custom-apt-sources
    #!/bin/bash

    set -e

    APT_MIRROR_URL="http://deb.tails.boum.org/"
    DEFAULT_COMPONENTS="main"
    BASE_BRANCHES="stable testing devel feature/jessie"

    fatal() {
        echo "$*" >&2
        exit 1
    }

    git_tag_exists() {
        local tag="$1"

        test -n "$(git tag -l "$tag")"
    }

    version_was_released() {
        local version="$1"

        version="$(echo "$version" | tr '~' '-')"
        git_tag_exists "$version"
    }

    version_in_changelog() {
        dpkg-parsechangelog | awk '/^Version: / { print $2 }'
    }

    output_apt_binary_source() {
        local suite="$1"
        local components="${2:-$DEFAULT_COMPONENTS}"

        echo "deb $APT_MIRROR_URL $suite $components"
    }

    output_overlay_apt_binary_sources() {
        for suite in $(ls config/APT_overlays.d) ; do
            output_apt_binary_source "$suite"
        done
    }

    current_branch() {
        git branch | awk '/^\* / { print $2 }'
    }

    on_base_branch() {
        local current_branch=$(current_branch)

        for base_branch in $BASE_BRANCHES ; do
            if [ "$current_branch" = "$base_branch" ] ; then
                return 0
            fi
        done

        return 1
    }

    base_branch() {
        cat config/base_branch | head -n1
    }

    branch_name_to_suite() {
        local branch="$1"

        echo "$branch" | sed -e 's,[^.a-z0-9-],-,ig' | tr '[A-Z]' '[a-z]'
    }

    ### Sanity checks

    [ -d config/APT_overlays.d ] || fatal 'config/APT_overlays.d/ does not exist'
    [ -e config/base_branch ] || fatal 'config/base_branch does not exist'

    [ "$(cat config/base_branch | wc -l)" -eq 1 ] \
        || fatal 'config/base_branch must contain exactly one line'

    if on_base_branch && ! [ "$(base_branch)" = "$(current_branch)" ] ; then
        echo "base_branch: $(base_branch)" >&2
        echo "current_branch: $(current_branch)" >&2
        fatal "In a base branch, config/base_branch must match the current branch."
    fi

    ### Main

    if version_was_released "$(version_in_changelog)"; then
        if [ -n "$(ls config/APT_overlays.d)" ]; then
            fatal 'config/APT_overlays.d/ must be empty while releasing'
        fi
        output_apt_binary_source "$(branch_name_to_suite "$(version_in_changelog)")"
    else
        output_apt_binary_source "$(branch_name_to_suite "$(base_branch)")"
        output_overlay_apt_binary_sources
    fi

    So to the extent that things need custom fitting for ARM, one either is married to their archive and gets permission to start an ARM fork, or does a “clone and split”… and reworks this to point at your own set.
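
    As a hedged sketch only (the mirror URL and branch name below are placeholders, not real services), that “point at your own set” change would be confined to the top of the script:

    APT_MIRROR_URL="http://deb.example.org/tails-arm/"    # your own ARM package repository
    DEFAULT_COMPONENTS="main"
    BASE_BRANCHES="stable testing devel feature/arm"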

    I would also point out that two “git submodules” look to be referenced in the submodules directory:

    pi@RaPiM2 ~/tails $ ls submodules
    jenkins-tools pythonlib
    pi@RaPiM2 ~/tails $ !!/*
    ls submodules/*
    submodules/jenkins-tools:

    submodules/pythonlib:
    pi@RaPiM2 ~/tails $

    So that’s pythonlib from “somewhere” and “jenkins-tools” from “somewhere”, and it’s unclear how to substitute ARM versions… but that can be left as an exercise for the student ;-) I note in passing that the “lib” directory includes only “python3”, and the “vagrant” directory seems dedicated to folks using Vagrant VM-method builds, while the “wiki” directory, the “po” directory, and the “data” directory are all oriented toward documentation and appearance issues (language packs, splash.png image, docs, etc.).
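
    For what it’s worth, those empty submodule directories fill in with the standard git incantation, run from the top of the clone (the contents come from whatever repositories .gitmodules points at):

    cd ~/tails
    git submodule update --init    # fetches jenkins-tools and pythonlib
    ls submodules/pythonlib        # should now have content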

    Oh, and “bin” only has delete-merged-git-branches in it, as a clean-up tool (“bin” is for “binary executable” in Unix / Linux speak).

    So that mostly just leaves “debian” and “features” directories (plus any .[whatever] hidden bits).

    Debian doesn’t have much in it:

    pi@RaPiM2 ~/tails $ ls debian
    changelog compat control copyright gbp.conf link rules
    pi@RaPiM2 ~/tails $ ls -l debian/
    total 204
    -rw-r--r-- 1 pi pi 180706 Aug 1 15:34 changelog
    -rw-r--r-- 1 pi pi 2 Aug 1 15:34 compat
    -rw-r--r-- 1 pi pi 994 Aug 1 15:34 control
    -rw-r--r-- 1 pi pi 921 Aug 1 15:34 copyright
    -rw-r--r-- 1 pi pi 229 Aug 1 15:34 gbp.conf
    -rw-r--r-- 1 pi pi 58 Aug 1 15:34 link
    -rwxr-xr-x 1 pi pi 29 Aug 1 15:34 rules
    pi@RaPiM2 ~/tails $ cat debian/rules
    #!/usr/bin/make -f
    %:
    dh $@
    pi@RaPiM2 ~/tails $
    pi@RaPiM2 ~/tails $ cat debian/gbp.conf
    # Configuration file for git-buildpackage and friends

    [DEFAULT]
    debian-branch = master
    debian-tag = %(version)s

    [git-dch]
    full = True
    auto = True
    git-log = --no-merges
    snapshot-number = snapshot + 1
    id-length = 0
    meta = False

    pi@RaPiM2 ~/tails $

    So these are some Debian packaging specifics that might need to change for a different Raspberry Pi archive, or maybe not.

    and that leaves “features” which looks to be of interest:

    I’ll just note that I’m going to wander through it, really after coffee this time, and that a brief look finds Yet Another Trendy Language in use… Nothing like “kitchen sink” languages in a build process…

    pi@RaPiM2 ~/tails $ ls features
    apt.feature images tor_enforcement.feature
    build.feature misc_files torified_browsing.feature
    checks.feature pidgin.feature torified_git.feature
    config po.feature torified_gnupg.feature
    dhcp.feature root_access_control.feature torified_misc.feature
    domains scripts tor_stream_isolation.feature
    electrum.feature ssh.feature totem.feature
    encryption.feature step_definitions unsafe_browser.feature
    erase_memory.feature support untrusted_partitions.feature
    evince.feature time_syncing.feature usb_install.feature
    i2p.feature tor_bridges.feature windows_camouflage.feature
    pi@RaPiM2 ~/tails $

    The ones ending in .feature look like descriptive text. Here is the top of one file:

    pi@RaPiM2 ~/tails $ cat features/tor_bridges.feature
    @product
    Feature: Using Tails with Tor pluggable transports
    As a Tails user
    I want to circumvent censorship of Tor by using Tor pluggable transports
    And avoid connecting directly to the Tor Network

    Background:
    Given a computer
    And the network is unplugged
    And I start the computer
    And the computer boots Tails
    And I enable more Tails Greeter options
    And I enable the specific Tor configuration option
    And I log in to a new session
    And the Tails desktop is ready
    And I save the state so the background can be restored next scenario
    […]
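
    That layout is the Gherkin style used by the Cucumber test tool, which (as far as I can tell) is what drives these scenarios via the Ruby step_definitions below. A hedged sketch of poking at one directly, assuming plain cucumber is installed and Tails doesn’t insist on its own wrapper script:

    sudo apt-get install cucumber    # or: gem install cucumber
    cd ~/tails
    cucumber features/tor_bridges.feature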

    The items named config, domains, images, misc_files, scripts, step_definitions, and support are directories, and it will take time to go through them and learn what they do. I did take a look into “support” and found some Ruby… (the .rb ending usually means Ruby language inside…)

    pi@RaPiM2 ~/tails/features $ ls support
    config.rb env.rb extra_hooks.rb helpers hooks.rb
    pi@RaPiM2 ~/tails/features $

    So picking one at random, here’s the top bit of it:

    pi@RaPiM2 ~/tails/features $ cat support/hooks.rb
    require 'fileutils'
    require 'time'
    require 'tmpdir'

    # For @product tests
    ####################

    def delete_snapshot(snapshot)
      if snapshot and File.exist?(snapshot)
        File.delete(snapshot)
      end
    rescue Errno::EACCES => e
      STDERR.puts "Couldn't delete background snapshot: #{e.to_s}"
    end

    def delete_all_snapshots
      Dir.glob("#{$config["TMPDIR"]}/*.state").each do |snapshot|
        delete_snapshot(snapshot)
      end
    end

    I’ve never used Ruby, but it looks similar to many other languages I have used. Define things, have begin / end blocks (whatever words or symbols a language uses for them), do assignments and functions. Read things and print them out. Always the same, and always mutated by someone with a need to stir pots…

    Step_definitions is full of Ruby too.

    pi@RaPiM2 ~/tails/features $ ls step_definitions/
    apt.rb erase_memory.rb root_access_control.rb unsafe_browser.rb
    build.rb evince.rb ssh.rb untrusted_partitions.rb
    checks.rb firewall_leaks.rb time_syncing.rb usb.rb
    common_steps.rb git.rb torified_gnupg.rb windows_camouflage.rb
    dhcp.rb i2p.rb torified_misc.rb
    electrum.rb pidgin.rb tor.rb
    encryption.rb po.rb totem.rb

    With luck none of those will need much changing (or any at all if very lucky).

    The other directories are either small, or a large block of png image files:

    pi@RaPiM2 ~/tails/features $ ls scripts
    otr-bot.py vm-execute
    pi@RaPiM2 ~/tails/features $ ls misc_files/
    sample.pdf sample.tex
    pi@RaPiM2 ~/tails/features $ ls images/
    CupsTestPage.png
    DesktopReportAnError.png
    [… and listing more .png files for a couple of pages]

    That just leaves:

    pi@RaPiM2 ~/tails/features $ ls config
    defaults.yml
    pi@RaPiM2 ~/tails/features $ ls domains
    default_net.xml default.xml disk.xml fs_share.xml storage_pool.xml volume.xml
    pi@RaPiM2 ~/tails/features $ vi config/defaults.yml

    So some .xml files (“ml” is usually ‘markup language’ and xml is kinda sorta like html… only x-tended…) and that .yml file… Yet Another Markup Language?

    http://fileinfo.com/extension/yml

    File created in the YAML (YAML Ain’t Markup Language) format, a human-readable data format used for data serialization; allows data to be written and read independent of any particular language; can be incorporated into many different programming languages using supporting YAML libraries, including C/C++, Ruby, Python, Java, Perl, C#, PHP, and others.

    One example of YAML usage is the database.yml file, which is used by Ruby on Rails to save connection information when connecting to a database.

    Oh, how “fun”… someone naming something with a name that LOOKS like a markup language, but is instead a “backronym” to make a deliberately confusing name as a way of being “cute”. A format that exists solely to fix the trouble caused by “Kitchen Sink” language uses… Sigh. “Here’s your sign”…

    In any case, it has some data stuffed into it.

    pi@RaPiM2 ~/tails/features $ cat config/*
    DEBUG: false
    PAUSE_ON_FAIL: false
    SIKULI_RETRY_FINDFAILED: false
    MAX_NEW_TOR_CIRCUIT_RETRIES: 5
    TMPDIR: "/tmp/TailsToaster"

    Unsafe_SSH_private_key: |
    -----BEGIN RSA PRIVATE KEY-----
    MIIEowIBAAKCAQEAvMUNgUUM/kyuo26m+Xw7igG6zgGFMFbS3u8m5StGsJOn7zLi
    J8P5Mml/R+4tdOS6owVU4RaZTPsNZZK/ClYmOPhmNvJ04pVChk2DZ8AARg/TANj3
    qjKs3D+MeKbk1bt6EsA55kgGsTUky5Ti8cc2Wna25jqjagIiyM822PGG9mmI6/zL
    YR6QLUizNaciXrRM3Q4R4sQkEreVlHeonPEiGUs9zx0swCpLtPM5UIYte1PVHgkw
    ePsU6vM8UqVTK/VwtLLgLanXnsMFuzq7DTAXPq49+XSFNq4JlxbEF6+PQXZvYZ5N
    eW00Gq7NSpPP8uoHr6f1J+mMxxnM85jzYtRx+QIDAQABAoIBAA8Bs1MlhCTrP67q
    awfGYo1UGd+qq0XugREL/hGV4SbEdkNDzkrO/46MaHv1aVOzo0q2b8r9Gu7NvoDm
    q51Mv/kjdizEFZq1tvYqT1n+H4dyVpnopbe4E5nmy2oECokbQFchRPkTnMSVrvko
    OupxpdaHPX8MBlW1GcLRBlE00j/gfK1SXX5rcxkF5EHVND1b6iHddTPearDbU8yr
    wga1XO6WeohAYzqmGtMD0zk6lOk0LmnTNG6WvHiFTAc/0yTiKub6rNOIEMS/82+V
    l437H0hKcIN/7/mf6FpqRNPJTuhOVFf+L4G/ZQ8zHoMGVIbhuTiIPqZ/KMu3NaUF
    R634jckCgYEA+jJ31hom/d65LfxWPkmiSkNTEOTfjbfcgpfc7sS3enPsYnfnmn5L
    O3JJzAKShSVP8NVuPN5Mg5FGp9QLKrN3kV6QWQ3EnqeW748DXMU6zKGJQ5wo7ZVm
    w2DhJ/3PAuBTL/5X4mjPQL+dr86Aq2JBDC7LHJs40I8O7UbhnsdMxKcCgYEAwSXc
    3znAkAX8o2g37RiAl36HdONgxr2eaGK7OExp03pbKmoISw6bFbVpicBy6eTytn0A
    2PuFcBKJRfKrViHyiE8UfAJ31JbUaxpg4bFF6UEszN4CmgKS8fnwEe1aX0qSjvkE
    NQSuhN5AfykXY/1WVIaWuC500uB7Ow6M16RDyF8CgYEAqFTeNYlg5Hs+Acd9SukF
    rItBTuN92P5z+NUtyuNFQrjNuK5Nf68q9LL/Hag5ZiVldHZUddVmizpp3C6Y2MDo
    WEDUQ2Y0/D1rGoAQ1hDIb7bbAEcHblmPSzJaKirkZV4B+g9Yl7bGghypfggkn6o6
    c3TkKLnybrdhZpjC4a3bY48CgYBnWRYdD27c4Ycz/GDoaZLs/NQIFF5FGVL4cdPR
    pPl/IdpEEKZNWwxaik5lWedjBZFlWe+pKrRUqmZvWhCZruJyUzYXwM5Tnz0b7epm
    +Q76Z1hMaoKj27q65UyymvkfQey3ucCpic7D45RJNjiA1R5rbfSZqqnx6BGoIPn1
    rLxkKwKBgDXiWeUKJCydj0NfHryGBkQvaDahDE3Yigcma63b8vMZPBrJSC4SGAHJ
    NWema+bArbaF0rKVJpwvpkZWGcr6qRn94Ts0kJAzR+VIVTOjB9sVwdxjadwWHRs5
    kKnpY0tnSF7hyVRwN7GOsNDJEaFjCW7k4+55D2ZNBy2iN3beW8CZ
    -----END RSA PRIVATE KEY-----
    Unsafe_SSH_public_key: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC8xQ2BRQz+TK6jbqb5fDuKAbrOAYUwVtLe7yblK0awk6fvMuInw/kyaX9H7i105LqjBVThFplM+w1lkr8KViY4+GY28nTilUKGTYNnwABGD9MA2PeqMqzcP4x4puTVu3oSwDnmSAaxNSTLlOLxxzZadrbmOqNqAiLIzzbY8Yb2aYjr/MthHpAtSLM1pyJetEzdDhHixCQSt5WUd6ic8SIZSz3PHSzAKku08zlQhi17U9UeCTB4+xTq8zxSpVMr9XC0suAtqdeewwW7OrsNMBc+rj35dIU2rgmXFsQXr49Bdm9hnk15bTQars1Kk8/y6gevp/Un6YzHGczzmPNi1HH5 amnesia@amnesia"
    pi@RaPiM2 ~/tails/features $

    Unlikely to need changing, IMHO, but I’d compare with settings in the OnionPi for things like the number of retries and TMPDIR and such. As the SD card doesn’t take well to lots of writes, one thing that might be needed is a RAMdisk area where frequently written things can live. While I think that is taken care of by the live-CD build, it is worth a double check if ever one gets to the operational testing stage. I.e. make sure “tmp” isn’t on the SD card…
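
    A minimal sketch of that double check, assuming a stock Raspbian-style /etc/fstab (the size below is a placeholder; tune it to the RAM actually available):

    echo 'tmpfs  /tmp  tmpfs  defaults,noatime,nosuid,size=256m  0  0' | sudo tee -a /etc/fstab
    sudo mount -a
    df -h /tmp    # should report tmpfs, not the SD card partition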

    Now with that out of the way, this time for sure, coffee calling my name…

    It also looks like a review of “how to read Ruby” is in order before I run through the .rb scripts, and figuring out how to get actual source out of GIT archives is still on my ‘to do’ list… So it might be a while before I add more here.

    But this gives a decent view of the ‘lay out’, where to look for things, what skill sets are needed, likely size of task, potential pitfalls, etc. etc. All that Project Manager stuff ;-)

    From here on out, it becomes more “Technical Resource” and less “Tech P.M.”… For anyone who wondered what a computer technical P.M. did… this is it. Usually at this point one starts to sketch out the staffing plan and their skill sets, budget expectations, schedule desired, performance bonus compensation plan ;-) , computer and other hard resources needed, software licenses if any, hotelling or facilities needs, etc. I’m likely to skip most of that just because at this point ‘facilities’ is “what I’ve got” as is computers, software, etc. etc. and schedule is “whatever happens” while “compensation” is zero and bonus is 10% x compensation 8-{ while staffing is “me” and anyone who finds this compelling. Presently also zero.

    So I’m likely to transition to ‘technical resource’ going forward, and likely that will be in future postings IFF I proceed to do anything. While it doesn’t look that hard, it would suck up a chunk of my time that might be better elsewhere. And thinking about that does involve coffee ;-)

  16. E.M.Smith says:

    I’ve unpacked the Privatix build sources.
    http://www.mandalka.name/privatix/privatix_11.04.11_source.tar.bz2

    You can see the genetic relationship. The Privatix build is MUCH simpler, but has the same basic structure. Looking into that, it is the “live-build” software that brings much of that structure with it.

    On the generic Raspberry Pi there is no live-build, so I don’t know if it was a name change, or just not built for it. (Those LiveCD Raspbian efforts above are likely the answer on that one.) At any rate, without that, neither Privatix nor Tails is going to build with the present scripts and methods.

    I found an online man page for the Ubuntu version:
    http://manpages.ubuntu.com/manpages/natty/man7/live-build.7.html
    and it talks of many of the same directories (config, chroot,…)

    So there’s an important bit of clue here, from all this:

    First, learn “live-build” and find a way to get it onto the R.Pi. Then the rest can follow with the right repositories et al.

    So “next up” is just to look at the two existing “LiveCD Raspbian” efforts and see if one of them has already done the “live-build” porting work.

    Second is to use live-build “somewhere” to gain experience with it. Initially Privatix on an x86 box (since I have one and since it already is supposed to work there), then on to an ARM platform.
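
    For that warm-up, a minimal live-build run looks roughly like this (assuming the distro’s live-build package installs cleanly; the Tails scripts layer their big config/ tree on top of this kind of skeleton):

    sudo apt-get install live-build
    mkdir ~/lb-test && cd ~/lb-test
    lb config        # writes a default config/ tree, with the now-familiar directories
    sudo lb build    # chroots, installs packages, and emits the image
    ls -lh *.iso *.img 2>/dev/null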

    Two things learned along the path:

    1) Live-build can only build for the architecture on which it is running. No cross compilation. So an ARM port can only be built on an ARM platform. Since there are ARM platforms out there, this can be done. However, it might require a different base system on ARM to do the first one (or might not, I’m speculating about potential issues, depending on the existence of a Raspbian live-build port).

    2) The build of Privatix wants an 8 GB build area that may be a single file or that much space in /tmp. Since the R.Pi uses a FAT file system on the SD card, that is limited to a 4 GB file size. This won’t build. (The error message in the code specifically calls out file system limitations as a potential reason for failure, implying FAT and 4 GB don’t cut it.) So one needs to make an NTFS or EXTx file system on another device and mount it appropriately onto the R.Pi for this to work at all. Since I already have that with my WD disk (which provides about 4 GB of swap along with 2 x 50 GB EXT3 file systems), that’s easy; a sketch of setting up such a build area is below.
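
    A sketch of setting up that build area (device names are placeholders for wherever the WD disk actually lands):

    sudo mkfs.ext4 /dev/sda3      # a spare partition on the external disk
    sudo mkdir -p /mnt/build
    sudo mount /dev/sda3 /mnt/build
    df -h /mnt/build              # confirm at least ~8 GB free before starting the build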

    So the ‘easy path’ for me to follow is to put the WD disk on the R.Pi and try a build of either Raspbian LiveCD or Privatix (or both) and see what breaks. Perhaps after building my skill using Privatix on the x86 box. Once live-build works smoothly, then the build rig is working and proven (and I’ve built the needed live-build skill).

    The next step would clearly be to then move to porting Tails. With the build mastered, the rig proven, an ARM build of Privatix already done; then adding the layers of complexity in the Tails build just becomes a question of hacking at the jungle a few weekends in a row… (Maybe in time for Christmas ;-)

    Est. of Labor:

    Privatix on x86 – 1 day.
    Privatix on ARM – depends on existence of live-build for ARM. With it, 2 days. Without, 4.
    Raspbian on ARM Live-CD – likely about 1 day, depending on how they did it.

    Subtotal to get ARM Live-build ready to go: 4 to 6 work days. (Likely closer to 16 to 24 elapsed days…)

    From that point to RaTails: Add another 8 work days, or about 1 elapsed month.

    Now multiply by 2 and add 10% for what will likely be a very accurate number.

    No, no smiley face. That’s the formula I have used for decades that is most accurate for my guesses. It has often come to within single digit percents of actuals. So about 5 months working on it part time. Why that “bump”? Because people take coffee breaks, days off, have bad days, go to meetings, have sniffles and take their dog to the vet… In reality, nobody does a month straight at just one thing. (Or, rather, when they do it is hell on them and their family, so best not to plan on it. Save that for the emergency Aw Shit that happens regularly… like when lightning hits your flag pole…)

    This method says the “crash schedule” could be as short as 12 to 14 work days. A hard press for 2 weeks burn out schedule. Or about 2 months with focus and priority, but without burn out schedules. Or about 5 months part time on a regular priority with The Usual interruptions of work life. That range is “about right” from my experience with a lot of projects. 10:1 ratio between Special Forces level of effort and “assigned to the draftee”. It is highly unlikely to fall outside that range to either side unless someone who has done this a thousand times with these bits of code were to do it. They might get it down to 1 work week. Maybe.

    Were I staffing this, and given the need for C, shell scripting, Ruby, Python, Perl, Linux / Debian and lord knows what else, I’d figure about $10,000 a month for staff and about $1000 of equipment. (But I already have the equipment and I’m not hiring the staff.) Facilities would be one office with amenities. Those run a few hundred a month. So add in about $3000 for an office with power, light, A/C, desk…

    Estimated cost for project: approx. $54,000 if purchased (roughly 5 months x $10,000 of staff, plus about $1000 of equipment and $3000 of facilities), with bounds at about $20k-$100k.

    I’d say it’s worth that. Maybe.

    OK, next up I’m going to give a live build of Privatix on an x86 a whirl and see what surprises pop up. Probably tonight or tomorrow. And read up on those LiveCD R.Pi ports to see how they did them. Live-build or not. Eventually I’ll make a “LiveCD” version of one of them and then I can have my R.Pi running without the SD card in it (a fun thing in any case… think of the utility of a live-CD type R.Pi running. The “card” has a generic build on it, and it is just pull power to snuff the running copy. Someone takes your computer, you are out $40 and need to express one overnight from Amazon… Not a big hit…)

    That’s likely to take to the end of the month, so this thread will likely go more quiet for a while.

  17. Paul Hanlon says:

    Hi Chiefio,

    I’d like to help – if I can. I’m just not sure where I can fit in. So what I’ll do is download Tails and the Adafruit image and look over it through the weekend, and see where I can least get in the way. It’s definitely a worthwhile project.

  18. E.M.Smith says:

    @Paul:

    Well, one easy way to help is to figure out how to get a copy of the actual source code being used in one of these “make” processes. I’m getting the impression that they suck in binary blobs via the git archives and never look at source code. The extract (tar ball) I made of the “master” is likely not the right one (they say on the Tails web page that it likely isn’t…), but I think that is because the context was “to do development” and the “master” is the finished product.

    At any rate, while not critical to “doing a port” at this point, at SOME point there will be a bug and something won’t compile. Then it will be essential to know how to get eyes on the actual .c text, and I have no clue yet how to do that.
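
    That said, a first guess at it (untested, so treat it as a sketch; the tag name is a placeholder) would be to let Git hand over a plain tree and a tarball of it:

    git clone https://git-tails.immerda.ch/tails tails
    cd tails
    git tag | tail -5                  # see what release tags exist
    git checkout <some-release-tag>    # put that release into the working tree
    git archive --format=tar HEAD -o ../tails-src.tar    # a plain tarball of that checkout, no .git needed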

    The other bit I’m not getting to for a couple of days (need to make some money and explore other things) is figuring out how close the already ported Tor code in the OnionPi is to what is in Tails, and how close the “live-CD” versions of Raspbian are to a “Live-CD” build as done by Tails.

    The closer those are, the less work to do (just integrate); the more different they are, the more work (one or the other needs to be adapted and retrofitted with Tails-x86).

    So:

    1) Compare build methods for OnionPi, “Live-CD” Raspbian, Tails. IFF the first two are using git archives, it might be as easy as pointing the Tails build at their blobs… and then bug fixing.

    2) Extract actual source files, somehow. (Lower priority, but necessary) I know, easy for someone who already knows GIT, but that isn’t me right now.

    3) Do a ‘test build’ of Tails and see how quickly it breaks and on what steps.

    4) Segment the Tails build into workable steps (so, for example, make it a straight “Live-CD” type build without worrying about getting the TOR stuff running / built) and lay out the order of work to make each one go. For example: a) Straight Live-CD build. b) Add TOR facilities. c) Add custom apps that flow only through TOR. d) Add QA testing.

    5) Generic: Figure out, based on “live-build”, just which of those directories and all their files are really important to understand. (No sense spending a week learning Ruby if, for example, those are all QA suite tests and either work or don’t and are not architecture related…)

    How’s that for starters? ;-)

    And many thanks for “stepping up”!!

  19. Paul Hanlon says:

    Thanks, E.M.

    That gives me something to get my teeth into, and is in line with what I was going to do anyway. Let me get back to you in a couple of days.

  20. Paul Hanlon says:

    Hi Chiefio,

    Just an update so you know I’m not forgetting. Sorry to take so long. I have my own tale of woe to tell. Not only have I got busier, but I managed to hose ulibc on Wheezy so badly that I couldn’t even chroot in from a rescue disk and repair it. So I’m now the proud user of a Debian Jessie OS, and so far I’m liking it. Could just be me, but things seem a little snappier. Also, it’s got a more up-to-date ulibc, so I won’t have that issue again.

    Anyway, I thought I’d try loading up the Onion Pi software to see what was changed, and also the difference in the dmesg output on startup. Loading the software was straightforward enough; the instructions start here.

    It’s a basic install of Raspbian on a Pi 1 Model B. The usual things at startup: expand the filesystem, enable ssh. Once that’s done, the Pi is then set up as a WiFi Access Point. Adafruit have a very comprehensive how-to here.

    The next stage is to load Tor. There is a script on the Adafruit website that does most of the work, but one section of it, which adds text to /etc/tor/torrc, isn’t properly quoted, so I put that in manually (the sort of block involved is sketched below).
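
    For reference, what gets appended is the standard Tor transparent-proxy configuration. The exact lines belong to the Adafruit how-to, so treat the following as an approximation from the Tor documentation (and the 192.168.42.1 address is an assumption for the access point’s LAN side); pasted in by hand it looks roughly like:

    printf '%s\n' \
    'Log notice file /var/log/tor/notices.log' \
    'VirtualAddrNetwork 10.192.0.0/10' \
    'AutomapHostsOnResolve 1' \
    'TransPort 9040' \
    'TransListenAddress 192.168.42.1' \
    'DNSPort 53' \
    'DNSListenAddress 192.168.42.1' \
    | sudo tee -a /etc/tor/torrc
    sudo service tor restart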

    There is still a small misconfiguration somewhere, because when I use an ordinary browser on another laptop it shows a different IP on first use but then keeps that IP thereafter. I’ll have a look at that later, along with loading Tails in VirtualBox.

  21. Pingback: RaTails – Draft High Level Steps | Musings from the Chiefio

  22. Paul Hanlon says:

    Hi Chiefio,

    So I loaded up Tails (twice, the first time didn’t take). This works quite differently from just loading the Tor browser on an existing system or using the Onion Pi solution. With both of those, when I accessed the internet, I would get a popup with a Captcha which had to be submitted to get the page requested, which to me totally negated trying to be anonymous.

    Doesn’t matter that this reply would be routed through Tor, it still smelt fishy. I thought it might be a misconfiguration on my part, but when it happened on both setups and multiple times under different circumstances, I didn’t go much further with them.

    With Tails however, there was none of that. Straight through to any page, and much faster too. It didn’t even feel like the request was being bounced around multiple times. It’s kind of scary being able to put a USB stick in a computer (any computer), do pretty much anything you want, remove the stick and nobody any the wiser, well at least until the secret police visit the office you did it from. I haven’t tested the proxy server capabilities of this yet, but soon enough for that. I’m sure it works fine, given how smooth everything else went.

    There was another very interesting thing I came across while browsing. It’s a toolset called Katoolin, which helps one install all the Kali Linux tools on an existing installation. So how about installing it on top of Tails? Now anybody who knows one end of a USB stick from the other can be a “hacker”. *Shudder*
