In an earlier posting I’d been looking at how to make computing a bit more secure. One approach was to use a virtual machine on existing hardware. This has two security exposures (that I know of).
1) A software key logger on the host O/S will still log your keystrokes.
2) Network monitoring will still see the network traffic.
To fix #2, one can use any of several anonymizing systems. I’ve been using TOR (The Onion Router) Browser and it seems to work fairly well. Couple that with the use of a network “dongle” that can be disposed (taking the MAC Address with it) and using a public access point (such as your local Starbucks or library) ought to give fairly secure, and relatively anonymous, network connections.
To fix #1, I’ve proposed a ‘disposable host OS’. Build the host environment on CD, DVD, or Flash drive. Now every boot is a clean, new, boot. Any “wares” that get shoved onto the box have a very limited lifetime. Any moment of worry is solved with a simple reboot.
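A minimal sketch of building such a disposable boot device from a Live ISO. The image name and target device below are placeholders; double-check the device name before running anything like this, since dd overwrites whatever it points at:

```shell
#!/bin/sh
# Sketch: copy a Live OS image onto a flash drive so that every boot
# is a clean, new boot. Image and device names are placeholders.
write_image() {
    iso=$1
    device=$2    # e.g. /dev/sdX -- dd will overwrite this completely!
    dd if="$iso" of="$device" bs=4M conv=fsync
    sync         # make sure everything actually hits the media
}

# Example call (placeholder names):
# write_image knoppix.iso /dev/sdX
```

The same approach works for a CD / DVD, just with a burning tool instead of dd.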
To fix the unstated #3 of “A Tallbloke and the Constable Moment” where they steal your hardware, I’ve proposed a cheap Single Board Computer base platform that is essentially disposable anyway.
I’d done the download of VirtualBox and found it worked fairly well, if a bit slow on some things / some OS types.
I’d done the download of TrueCrypt and like it very much. At this point, I’ve had most of my working files living inside TrueCrypt volumes for many months with no problems (other than forgetting the password to one testing image that, thanks to paranoia on my part, had nothing of importance in it). Lessons learned: don’t make a dozen images with a variety of passwords for ‘testing’ without keeping notes, and for “production” you want a memorable password, but not a written note near the computer…
Putting a VirtualBox machine inside a TrueCrypt volume also was tested and worked fairly well (modulo the potential for slowness).
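From memory of the TrueCrypt 7.x era Linux command line, the daily routine for that combination looks roughly like the sketch below. Treat the exact flags and paths as assumptions and verify against `truecrypt --help` on your own install:

```shell
# Rough sketch of the TrueCrypt text-mode workflow on Linux.
# Flags and paths are from memory -- verify against `truecrypt --help`.

# Create a container interactively (size, cipher, password are prompted):
truecrypt --text --create /secure/vms.tc

# Mount it, then keep the VirtualBox machine folder inside the mount:
mkdir -p /media/tc
truecrypt --text /secure/vms.tc /media/tc

# ... run the virtual machine whose files live under /media/tc ...

# Dismount when done; the VM goes back to being an opaque encrypted blob:
truecrypt --text --dismount
```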
The two packages are quick to install, the instructions are pretty clear, and not all that complicated. They generally “work as advertised” and while it helps to be ‘technical’, it likely is not necessary.
Then I hit the “testing OS versions” wall.
I wanted to try several variations of Linux / Unix and see which ones gave a good, fast email and browsing experience. Only after that point would it be ‘worth it’ to do the work to integrate even more security inside that basic system image. (So, for example, one might want to have an encrypted volume from a USB drive mounted inside that image or have the TOR browser installed, or have it configured to have any swap on an encrypted partition, or… you get the idea.)
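For the encrypted-swap piece mentioned above, most Linux distributions can key swap with a fresh random key at each boot via `/etc/crypttab`. A hedged config sketch (the partition name is a placeholder for your real swap device):

```shell
# Config fragment, not a script. /dev/sda2 is a placeholder.
#
# /etc/crypttab -- re-key swap from /dev/urandom on every boot, so
# nothing written to swap survives a reboot:
#   cryptswap  /dev/sda2  /dev/urandom  swap,cipher=aes-xts-plain64
#
# /etc/fstab then points at the mapped device instead of the raw one:
#   /dev/mapper/cryptswap  none  swap  sw  0  0
```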
Installing Ubuntu took way too long (in part because it insists on a long “play with fonts from the internet” step). Worse, when you are trying to build a reasonably provably secure image, a ‘download a bunch of stuff from the internet to the target’ step puts you at risk of a ‘man in the middle’ type attack, where what you download is not what you expected. So after a few hours doing “download and install” just to find out you want a different browser: it gets old.
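One partial mitigation for that man-in-the-middle worry is to check every download against a checksum published somewhere you trust, ideally fetched over a different channel than the file itself. A small sketch (filenames and the checksum value are illustrative):

```shell
# Verify a downloaded image against a published SHA-256 checksum
# before trusting it. Filenames and checksums here are placeholders.
verify_iso() {
    iso=$1
    expected=$2
    actual=$(sha256sum "$iso" | awk '{print $1}')
    if [ "$actual" = "$expected" ]; then
        echo "OK: checksum matches"
    else
        echo "MISMATCH: got $actual" >&2
        return 1
    fi
}

# Usage (values are illustrative, not real):
# verify_iso some-distro.iso 5a1b2c3d...
```

This doesn’t help if the attacker also controls the checksum page, which is why fetching the checksum over a separate channel (or checking a GPG signature) matters.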
Wouldn’t it be nice to be able to try a new / different OS in 10 or 20 minutes instead of 5 or 10 hours?
Well, I ran into an interesting web page that informed me someone else had that issue too.
http://ryantrotz.com/2012/03/virtualbox-images-made-easy-with-virtualboxes-org/
If you have read my other articles such as the VirtualBox Beginners Tutorial, you would know that VirtualBox is my favorite tool for desktop virtualization. Aside from being cross platform and open source, it has opened up many creative solutions to problems that I could not tackle any other way. My only gripe getting started with VirtualBox is the time that it used to take to get a new operating system running.
Fortunately, the guys over at VirtualBoxes.org have created a set of pre-rolled Linux, OpenSolaris, FreeBSD, and BSD distributions that are (you guessed it) specifically designed to work with VirtualBox. By using VirtualBoxes.org, I am now able to spin up a new virtual machine in minutes.
So I immediately ran off to virtualboxes.org and checked them out. The download page for images is here:
http://virtualboxes.org/images/
As of this posting, the list of OS images is fairly large.
On their site each line is a link to the image:
GNU/Linux (GNU userland tools running on top of the Linux kernel)
Archlinux (website).
CentOS (website): the installation is done from the DVD, with default parameters set
Damn Small Linux (website): the installation is done from the CD, with default parameters set.
Debian (website): the installation has been done from the netinstall ISO image for the x86 architecture.
DeLi Linux (website).
Dreamlinux (website): the installation has been done from the CD, with default parameters set.
Fedora (website).
Fluxbuntu (website): the installation is done from the CD, with default parameters set.
Gentoo (website): the installation is done from the ISO image, then customized .
gNewSense (website): the installation is done from the CD, with default parameters set.
gOS (website).
Kubuntu (website): the installation is done from the CD, with default parameters set.
LinuxMint (website): the installation is done from the CD, with default parameters set.
Mandriva (website): the installation is done from the CD, with default parameters set.
Moblin 2 (website): the installation is done from the .img/.iso file provided by the project.
moonOS (website).
OpenSUSE (website).
PCLinuxOS (website).
Puppy Linux (website).
Sidux (website).
Slackware (website): the installation has been done from the first CD, selecting the bare minimum disk sets.
SliTaz (website)
Tiny Core Linux (website)
TinyMe (website)
Ubuntu (website): the installation is done from the CD, with default parameters set.
Ubuntu Server (website): the installation is done from the CD.
Ubuntu Studio (website): the installation is done from the CD, with default parameters set
Xubuntu (website): the installation is done from the CD, with default parameters set.
VectorLinux (website): the installation is done from the CD, with default parameters set.
Zenwalk (website): the installation is done from the Standard Edition CD, with default parameters set.
GNU/OpenSolaris (GNU userland tools running on top of the OpenSolaris kernel)
OpenSolaris (website).
Nexenta (website): the installation is done from the CD.
MILAX (website): the installation has been done from the official ISO image.
GNU/FreeBSD (GNU userland tools running on top of the FreeBSD kernel)
Debian GNU/kFreeBSD (website): the installation has been done from the daily mini.iso.
BSD
FreeBSD (website): the installation is done from the bootonly ISO.
Other
AROS (website): the installation has been done from the nightly build ISO image.
FreeDOS (website): the installation has been done from the official ISO image.
Haiku (website): the image has been done from the nightly build HDD raw image.
MINIX (website): the installation has been done from the official ISO image (MINIX 3).
ReactOS (website): the installation has been done from the official ISO image.
SYLLABLE (website): the installation has been done from the official ISO image.
Android-x86 (website): the installation has been done from the daily ISO image.
Plan 9 (website): the installation has been done from the ISO image.
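These images typically arrive as a compressed virtual disk file. Assuming VirtualBox’s `VBoxManage` tool and a downloaded `.vdi` (the VM name, OS type, and file name below are placeholders), wrapping one in a fresh virtual machine from the command line looks roughly like:

```shell
# Sketch: wrap a downloaded virtualboxes.org disk image in a new VM.
# VM name, memory size, and file names are placeholders.
VM=TestPuppy

VBoxManage createvm --name "$VM" --register
VBoxManage modifyvm "$VM" --memory 512 --ostype Linux

# Attach the pre-rolled disk image to an IDE controller:
VBoxManage storagectl "$VM" --name IDE --add ide
VBoxManage storageattach "$VM" --storagectl IDE \
    --port 0 --device 0 --type hdd --medium puppy_linux.vdi

VBoxManage startvm "$VM"
```

The same can of course be done through the GUI; the CLI form is just handy when testing a pile of images in a row.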
On the one hand, I’m thrilled that I can just do a “grab and go” on several images for much more rapid performance testing.
On the other hand, my list of “targets” is now about 5 times as large ;-)
I don’t know if net that is a time savings or not. But the end result will likely be much better.
Once I’ve figured out “what I like”, then I can get on with the job of building the basic system “from scratch” on a secure machine (download sources, preen them, build on a box built from a CD set…) and customizing the OS (hardening some parts, installing TOR and / or similar, making a mounted Truecrypt volume be a default option, etc.) That, then, gives the “inner software” layer.
Yes, a lot of work.
But the end point ought to be a reasonably good performing and very secure environment for doing day to day things like email, browsing, and even financial transactions. As each boot is ‘new’, the OS cannot be infected with a virus that then compromises your login / password. Using an encrypted https type page ought to be strong enough for everything except a TLA (Three Letter Agency) attack on you. (The TLA can just have the ‘institution’ hand over the transaction at their end…)
Putting that OS on a CD / DVD / locked Flash Drive lets it stay secure from change. Booting it into a Virtual Machine that lives inside an encrypted ‘container’ file system keeps forensics out of the bits of dross the OS leaves laying about. Putting your “stuff” on an encrypted USB drive lets you have persistent state and information, but it only becomes ‘available’ after decryption (after the OS is up and running).
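For the “stuff on an encrypted USB drive” piece, LUKS via `cryptsetup` is the stock Linux route. A hedged sketch (the partition name is a placeholder, this needs root, and the format step destroys whatever is on the stick):

```shell
# One-time setup of an encrypted USB stick with LUKS.
# /dev/sdX1 is a placeholder partition -- this WIPES it.
cryptsetup luksFormat /dev/sdX1
cryptsetup luksOpen /dev/sdX1 mystuff
mkfs.ext3 /dev/mapper/mystuff        # first time only

# Each session thereafter: unlock, mount, work, then reverse it.
cryptsetup luksOpen /dev/sdX1 mystuff
mount /dev/mapper/mystuff /mnt/mystuff
# ... read and write your persistent state ...
umount /mnt/mystuff
cryptsetup luksClose mystuff
```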
But none of those steps can happen until I’ve settled on an OS that is “fast enough and good enough”… which is where I’d stopped.
Yes, I’d tested a few, but the process was very slow as each one could require a few installs to ‘get it right’. Using virtualboxes.org gets me past that “issue” ( I think…)
FWIW, I’d been thinking I needed to write up a rough “how to get VirtualBox and install inside Truecrypt”, but found that the same site has already done a pretty good job of it. Not fully ‘hand holding’ but gives the basic order of attack on the process:
http://ryantrotz.com/2011/08/how-to-secure-your-files-using-truecrypt-and-virtualbox/
So anyone wanting to join in can just “do what they describe” (which is roughly what I did too) and then try out some of the various pre-made system images and report back on what worked well and which were just painful to use or molasses slow.
The Other Path
It is worth mentioning that I’m not seeing a virtual machine inside TrueCrypt as a replacement for the “SBC with OS on Flash” disposable system. The two are related and interoperating. So on the one hand just having a basic Linux on an SBC booted from flash lets you have a ‘secure enough’ platform for things like general random browsing. The whole thing can fit into a “dongle” that just needs a USB / wireless keyboard / mouse and a video display plugged in. Doorbell rings, you hit “reboot” and go answer it. History is gone. RAM is re-written. Viruses that try to creep in get scrubbed on reboot too.
While that doesn’t prevent things like network monitoring and doesn’t have an encrypted file container for your ‘persistent’ stuff; it does work fast and is nice for casual things.
Having a VirtualBox with an entirely encapsulated OS (but without a lot of tweaks) inside a TrueCrypt volume will not prevent TLAs from putting keyloggers into your system nor protect you from all attacks, but is “nice” for things like keeping your private stuff private on an offline system. So an “R&D” box that doesn’t connect to the internet (so no keylogger things matter) is always left in a secure state just by shutting down the virtual machine / dismounting the encrypted volume.
Those two steps, and nothing else, give a nice mix of ‘private browsing’ with a virus free / reasonably secure internet presence along with a very secure and very private “work room” environment.
Those are the two bits I’m building first.
Then comes the long harder slog of making a fully “rolled together” SBC with full encryption and OS from Flash, that also has all the mounted encrypted removable personal volumes and anonymous browsing software pre-installed. That step will ‘take a while’ and depends on playing with / selecting from, many of those OS types listed above, then trying them on selected hardware.
So that’s where I am at the moment. VirtualBox and Truecrypt installed and working well. OS testing in progress. General “layout” of the approach defined. Steps to completion roughed out.
I’ve not made a formal ‘project’ of this, but did do a few different searches for TrueCrypt enabled Linux releases or other evidence that someone had already “gone there”. Didn’t find much. (It looks like TrueCrypt is not under a completely PC enough free enough license for the Linux distros to bundle it). So while I kept thinking “Someone ought to be doing that”, it is looking like “I am someone”… As I’m already up to my eyeballs in ‘projects’ (some falling behind already) I’m not thrilled at the idea of adding another one. Yet I think there is a real need for such a “demonstrably secure” bundle. So maybe I ought to make it a formal project, try to build a developer community, make and distribute a formal ‘distro’ of it. We’ll see.
….While Big Brother knows which Starbucks or library you are in, and records your voice anytime He needs just by activating your phone’s mic / camera too. Need to try telepathy instead.
Big Chief would say: That NWO and its Big Brother game through gadgets is utterly childish; white people will die when the fourth white buffalo is born.
@Adolfo:
Well, as the SBC has no microphone nor camera, and as I have tape / bandaids for the laptop if desired (not to mention software to monitor the state) I’m not all that worried. Cell phone is more of an issue, IMHO.
And whatever you’re taking, can it be sent through the mail? “White Buffalo?” Powdered or what?
;-)
Just google for “white buffalo” (also in youtube)
White bison/buffalo:
EM – You can burn a Linux Mint CD and run from CD just about in the time to download it.
EM – You don’t have to install Mint from CD, it will run from CD, and as you point out, from a thumb drive.
@jim2:
There are lots of BBC releases (Bootable Business Card) and similar Live CD releases. What I’m trying to figure out is just which one is complete enough to be useful, while being small and fast enough to work well in a Virtual Machine, while being VirtualBox friendly.
Unfortunately, that takes time to become familiar with the different distributions and how they work (or don’t) under VirtualBox… For example, this comment is from Firefox inside Damn Small Linux. Speed is pretty good (especially given that it’s inside a VM running from an SD card). Seems like “enough” for most things.
Except… by default, the window extends below the actual screen. So I’m trying to figure out how to ‘fix’ that… (Just now booted up, so barely begun). And will it STAY fast enough if running from inside an encrypted filesystem?
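One hypothetical fix for the window-bigger-than-screen problem, assuming the guest is running X: ask `xrandr` which modes it knows about and pick one small enough to fit inside the VirtualBox window. The mode name below is a guess:

```shell
# Inside the guest's X session:
xrandr             # list the modes X thinks are available
xrandr -s 800x600  # switch to a smaller one (mode name is a guess)
```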
Don’t know of any way to figure that out other than trying each one and assessing the result.
Playing with anything that has to do with the OS is very scary for guys like me that don’t know all the ins and outs of what Microsoft does. A lot of what I read here is very interesting, but to do it scares me to death. How hard would it be for someone to make a USB drive that had a full virtual capability installed on it? To run it I go to a program on the USB drive, let’s call it runV, where I open it and have it run a program to make itself the functioning OS, that way I don’t play with the regular OS. When I’m done I close and restart and it reverts. It could be a virtual system on a USB you carry around and can run from anyone’s computer. Run encrypted files and you have a pretty handy safe system.
I’m guessing to not do it through the normal boot it would be too machine specific and jumping into locations would be difficult and fraught with danger.
@BobN:
What you are describing is a ‘virtual machine’ in one sense, and a ‘bootable OS’ device in another sense.
The two are different in a fundamental way.
For a “bootable USB” or “bootable CD” or “bootable business card” (that is really just a small CD) you are making a storage device with a full operating system built on it. Then you boot up the computer from that device (instead of the usual disk). In many ways, that’s the ideal. But there are “issues” (aren’t there always ;-) Mostly that the installed OS doesn’t really want to let go of the machine and many company I.T. shops try to prevent you from doing this (as it can be a bypass of their control and security… in fact, in MS Windows 8 they make the boot loader tied to ONLY Windows 8 to lock that option out).
Many systems admins have a “BBC” or CD or even just a “USB Stick” with their tools on it. Walk up to a box and reboot… go on your way. ( I have a few of these). Knoppix is a pretty good and common one. It would be my preferred option with this laptop (as it has been with many prior systems) but for the fact that this laptop wants a “special” video driver and I’m not all that interested in making my own re-built version with special drivers.
So the other option is a “Virtual Machine”. This requires that some software be installed on your existing operating system. (Like installing “Word” or any other software). The “Virtual Box” software is just such a virtual machine environment. OK, sounds great. But if you put it on a “USB Stick” or CD and just stick that in the machine, it isn’t going to work. Software must be installed… and if your I.T. Department has locked you out of doing software installs, you can’t do that. They also tend to be much slower than a directly booted operating system.
For example, Virtual Box by default only uses one out of four CPU “cores” on my laptop. Attempting to use more than that with some OS images doesn’t work. Even if you turn on more cores, at least one must be left to the “Host OS” so that it can do its part of the work. Then the “indirection” of the virtual environment takes some resources too. Sometimes the ‘guest’ operating system doesn’t like the virtual hardware and hangs or crashes. Sometimes that can be fixed with settings in the virtual machine, sometimes it can’t.
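Those per-VM resource knobs are settable from the command line as well as the GUI. Assuming VirtualBox’s `VBoxManage` tool and a powered-off VM named “TestVM” (a placeholder):

```shell
# Give an existing (powered-off) VM more cores and RAM.
# "TestVM" is a placeholder name.
VBoxManage modifyvm TestVM --cpus 2 --memory 1024

# Confirm what the VM is now configured with:
VBoxManage showvminfo TestVM | grep -iE 'cpu|memory'
```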
Finally, and the one that is most annoying to me, for Virtual Box I’m not getting it to use the whole screen. Things run in a small window. At this point I don’t know if I’m just not knowing the right places to set something, or if it just never will be ‘big’ and will always be a tiny window.
So for now, at least, I’m seeing the Virtual Machine as most useful for a limited set of tasks (and that it will take some time to find an efficient and stable release of a guest OS even at that.) That is part of why I’m also looking at the dedicated Single Board Computer option.
Hope that helps…
If YOU (not the I.T. department) own the computer and you have control of what can be run, or not, then an easy way to “play” and not break anything is just to install VirtualBox (or similar virtual machine software). Then you can play around with doing various software / OS installations inside of the virtual environment and not worry about damage to the host operating system
BTW, for “just trying something”, it’s easy to download a Knoppix CD image, stick it in the drive and reboot, and see if it works on your hardware.
http://knoppix.net/
The present release comes with the Chrome browser. I liked the older one with KDE and Firefox better. But it generally works.
@Jim2:
BTW, is that an endorsement of Mint? Ought I ‘give it a go’?
Also: Found an interesting site: http://livecdlist.com/
Long list of live CD distributions and a brief description of each. Several have “Security” in that field. Maybe someone has a hardened version ‘ready to go’…
Found an interesting Ubuntu distribution. On the one hand, it’s secure. On the other hand, it has networking shut off. Still, it has TrueCrypt support built in and doesn’t leave bits of crap all over the file system.
https://www.privacy-cd.org/en/home-mainmenu-71/55-was-ist-ubuntu-privacy-remix
Looks like it would be very useful for the “offline workstation”. They even have a ‘dual boot’ form that has the choice of booting “Tails” for using a TOR base to do web browsing, then doing a reboot to “Ubuntu Privacy” and working with the data. As a reboot happens between those steps, it’s very hard for anything to cross that barrier and do anything interesting to your work environment (which can’t communicate anything out anyway).
Not quite as secure as the whole deal of two different machines, one that never has connectivity, but pretty darned close. They also have a source code download so you can make your own CDs (for the truly worried ;-)
So that gets me about 3/4 of the way to what I’m looking for.
Also, Puppy Linux is small, clean, and fast in a virtual machine while having a decent browser experience. Did need to set the ‘screen size’ up, and I gave it a fair amount of memory. All in all, pretty comfortable set of tools (including “Libre Office” – OpenOffice with an easier license…) It also lets you save the machine state to a file, if desired, so likely can be configured to save data and changes across boot ups.
Also of interest is that “others” have had the same basic idea about ‘modestly secure’ portable environments. The DOD has a page where you can download their ‘browser and not much more’ with some security features built in bootable CD:
http://www.linuxjournal.com/content/linux-distribution-lightweight-portable-security
I’d be a little more worried about what was in the “public” download, so I’d quarantine it in a sandbox and monitor network traffic and such for a while to make sure it wasn’t a trojan; but still, it looks pretty good on first inspection.
I do find it encouraging that both of those distributions do things similar to what I was describing. It gives some (small?) confirmation to the approach. That other folks are building such systems (including the DOD) also confirms that the “alternative” of a Windoz laptop is pretty darned insecure…
At the moment I’ve got the Secure Ubuntu downloading. Then we’ll see how the CD works in a desktop box and in the Virtual Machine on the laptop.
(Why not use the CD in the laptop? Because this HP laptop needs a funny / new video driver and has had ‘issues’ getting the wireless to work per other folks online stories. So I figure about 6 months more before random releases are likely to support it without a lot of extra work on my part ;-)
There’s an interesting alternative to Truecrypt in FreeOTFE (http://www.freeotfe.org/). Enables you to make encrypted volumes in files, partitions or entire drives, lots of hashing and encryption algorithms, no ‘signature’ to identify the volumes, support for smartcards if you want to show off, etc., etc.. You can even use it on PDAs with the same volumes as on your PC(s), or without installation / administrator rights on PCs, etc. (If you know of good reasons NOT to use it, please shout, but it certainly looks pretty cool.). I occasionally use the TOR Browser too, but there’s always that worry that it’s sitting on top of the Windows underworks (a.k.a. giant security hole).
If you do sort out a “demonstrably secure bundle”, I’ll certainly give it a try if you can make it reasonably idiot-proof. For various historical reasons, though, I have to ask, could we have a bit of support for Wifi dongles please? That’s been one of the biggest hurdles to me ever getting a Linux system going – last time around, the thing expected me to compile drivers for a wifi chipset I didn’t have, then somehow ‘steal’ bits from that to make drivers for the set I did have, and all without net connectivity (by definition!) on a strange OS. Not impressed!
But with Windows spiralling down the plughole at the rate it is, I really want to find a quick semi-technical way to get into Linux, and securely for obvious reasons. I’m not stupid, just frustrated at having to start my experience of a new OS every time with trying to sort out major problems without having any idea where to start (though it is a while since I last tried a *nix, to be fair. They may have got their act(s) more together in that time – though I’ve thought that many times.). If there’s already a good guide to learning *nix tricks quickly, a pointer would be very welcome. Watching with interest.
@Steve C; I know how you feel. I have been fighting Microsoft since 1985. At present I have to deal with 98. XP and now Windows 8. I know I have to come up to speed with some kind of Linux at some point but this old dog does not look forward to the needed effort. Specially as I have to do it on my own, and there a couple of “have to have” programs that want to see a Microsoft window.
This EMSmith bundle may be the start of something that will let us get away from connecting the Internet to our important machines, and may even give us a way to change the way we look at personal computers. pg
@SteveC:
I’ll take a look at FreeOTFE. Many good folks work on the “free” stuff.
Linux generally has a 1 to 2 year lag on major hardware changes. More obscure chip sets take longer. Some vendors are better than others. Early WiFi was a major pain to make go. MUCH better now. All depends on the chip vendor in some ways. (IF they release a Linux driver with the Windows driver, it’s usually nearly the same ‘roll out’ time to the OS. IF they only release a Windows Binary, well, it takes a fair amount of high skill and / or reverse engineering without the docs that the company code monkey had…)
It pays to select the hardware that is known to “play well with others”. I knew this, yet still ‘just assumed’ my HP g6 would be in that group. It isn’t. (For a long time Dell was stellar at Linux support. Don’t know about now, though.) Part of why I find the Virtual Machine approach interesting is that it isolates the Linux into a virtual hardware environment that is usually based on the most common things that work well and ‘play very well with others’. So I’ve got several Linux releases / distributions running on my g6 that I could never get going on the real hardware. (Some very old that would NOT support this hardware.)
With that said, some large operations have committed to Linux (including large chunks of Germany). As China gets bit by the Microsoft lockout in Windows 8 (i.e. blocks piracy), I expect we will see LOTS of Linux compatibility from China source computers too ;-)
Since Linux is much more efficient than Windows, I’ve had great success just picking up “Old and useless” machines one major MS release back. By then, the 2 to 4 year old “new” hardware is supported in most common Linux releases and the speed is great. Typically the conversion is a LOT less painful than doing the Windows upgrade…
Just to “get your feet wet” the simplest way to “play” with Linux is to download Knoppix and make a CD / DVD. Boot from that and you get a Linux that’s pretty complete. Only real downside is that things are not persistent between boots. (So stuff must be written to removable media and things like bookmarks need to be exported / imported). For sporadic learning or general browsing, it’s fine. As it uses its own hardware abstraction layer, it works on a lot of different hardware too. (Not all, though…)
@P.G.Sharrow:
Long ago I ‘made the shift’ from thinking of my computer as “My Computer” and committing to it for all sorts of stuff (trying to build the perfect mix); to thinking of it as place to temporarily put some data for particular processes. Data ought to live on “other media” whenever possible. OS and Apps configged for a use and then left alone (until convenient to you to do an upgrade / replace). Multiple machines for different uses is Just Fine (especially old laptops that fit in a small drawer…)
Then again, I was supporting “NFS File Servers” decades ago and the Cray was just a giant “Compute Engine” while email and network stuff were done on “User home directory Front End” machines. It’s a standard compute paradigm in those kinds of shops.
So I have a laptop for my “User interface” and a Linux backend “compute server” for the GIStemp stuff. An old Windows (dual boot) with Office installed and some other Windows specific software for “old Windows file access and specific software”. It also boots a couple of year old Linux for use with particular relatively safe download / browsing things. (If it gets hosed, I’ll just re-install anyway, and the other disk partitions are not mounted so no contagion). Another Linux is my archival file server. Booted up every couple of years so the canonical backup can be accumulated. (It was my private DNS and email server, but I’ve let that lapse…)
In a way, what I’m doing now, is to find a way to put that same “disposable Linux” environment for modestly secure web browsing / email / misc use into a Virtual Machine on the Laptop. Still a ‘dispose at will’ environment and “not MS so less likely to be infected” on browsing. Still isolated from my general “user interface” machine (even though living on the same hardware).
Basically, I trade about 4 feet of shelf space for a MUCH simpler upgrade / recovery path for lots of uses and a much more robust and diverse set of tools to use. Any one box can die and “not much happens” too. (So, for example, I managed to mangle the Windows in the Evo. Just shut it off. Plugged the backup disk into the laptop and copied the files here. Could have used the old Windows 2000 box instead.) So one “drag and drop” to be back in business. Now I can deal with the broken box at my leisure. Similarly, the GIStemp Linux can just sit there. When I need to use it, I fire it up. In between, I have no worry that some “upgrade” will break the environment and put me back in upgrade hell.
FWIW, when I do dump a box, I’ll often just harvest the disk and put it on the shelf. If I ever want that “personality” back, it goes into another box. ( Old “white box PCs” were great at this. Newer proprietary designs less so. For them I now do more “copy to a file server”…) On my “some day” list is to take those archival systems and encrypt them. But for now, that’s not needed. (My canonical list of rants on rec.skiing and email thread on ‘which sushi is best’ can serve as a time sync for anyone dumb enough to try dredging the archive after a ‘Tallbloke and the Constable’ moment ;-)
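The “copy to a file server” variant of that disk harvesting can be a one-liner. A sketch, with device and host names as placeholders:

```shell
# Image a retired disk straight to the file server, compressed in flight.
# /dev/sdb and "fileserver" are placeholders.
dd if=/dev/sdb bs=4M | gzip -c | ssh fileserver 'cat > old-box-disk.img.gz'

# Restoring that "personality" later is the pipe reversed:
# ssh fileserver 'cat old-box-disk.img.gz' | gunzip | dd of=/dev/sdc bs=4M
```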
OK, long rambling thing… but I think you see how the “system” works. I tend NOT to “upgrade” but to “migrate selected operations” to new platforms. And later do a “decommission discontinued operations with archival or roll-forward” on final EOL for a box / platform.
This effort can be seen as just taking that “style” one step further. Data separation from process from the very start. Every boot a “new platform”… And I don’t really care if my bookmarks get loaded into a Mac, PC, or Linux browser… (Open Office / Libre Office helped a lot here. I’ve got a LOT of old docs that are now no longer tied to particular vendors…)
At any rate “back to work” on the OS testing for me ;-)
EM – I switched from Ubuntu to Mint when Ubuntu went to the Unity GUI. Mint is built on Ubuntu, though. The Mint group seems to listen to the users and not just cram stuff down their collective throat, plus it is not so adverse to proprietary drivers that are needed for the web and other stuff. I like it better than Ubuntu was before Unity even. Installs fast, too.
@Jim2:
OK, I’ll give it a try. I’m not at all liking that Unity interface, so it’s likely a distribution I’ll like.
So far, testing in Virtual Box has confirmed some expectations: Big Fat releases run very slowly and small tiny releases work better. (Well Duh… things designed for limited resources work better in resource constrained slow emulators…) So Fedora was a way slow pig, yet Browser Linux is livable.
Some of the ‘prerolled’ distributions from virtualboxes.org work fine, several are broken (often in a ‘can not find the disk at boot up’ kind of way). The Fedora release had bizarre keymappings, for example. Tried to do this comment from inside it, but the @ had gone away (most of the special chars were those strange things they only use in Europe ;-)
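When a prebuilt image comes up with a European keymap like that, the usual quick fix is one command, either inside X or on a text console:

```shell
# Switch the guest back to a US keyboard layout.
setxkbmap us       # inside an X session
# loadkeys us      # on a plain text console instead
```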
So I’m now focusing the “search” in on smaller lighter distributions that are more likely to work well (even if I must ‘roll my own’ and do an actual Virtual Box install of it). So the BBC and CD distros are the primary targets. (A couple have shown good promise.) This ought to also make a good model for what will run OK on an SBC with one CPU and not much hardware…
Oh, and some of the distributions work Just Fine, but “have configuration issues”. Like the BSD releases. They do what ALL BSD releases do at boot up: give you a root: login prompt. Kind of hard to test the browser when the whole GUI world is turned off (and as it is a volatile, evaporates-on-reboot environment, doing the work to config it for startx is dim…)
The Secure Ubuntu worked just fine… but since it has had networking shut off, it doesn’t talk to anything. A bit of overkill for the ‘workstation’ when run inside a virtual machine…
Some releases have horrid latency from keystroke to character on the screen (Fedora) while others are hardly noticed (Puppy). It’s a ‘crap shoot’ for any given release inside the VM world.
I think I just need to pick a ‘good enough’ small distribution and move on to hardware selection, testing…
Another fantastic post that makes me feel very humble (my wife tells me I have much to be humble about).
Everywhere I teach there is guaranteed to be a great “Audio/Visual” set up which I can seldom use because university IT departments are reluctant to provide “Administrator” access. In order to run specialized programs I hook my laptop up to the A/V connectors and away we go. This is not always as simple as it sounds. The IT department in a large university in North Carolina went apes**t when I disabled their $300,000, “State of the Art” A/V system. It had a 20 foot screen that descended automatically plus six large flat screen displays with cameras for “Video Conferencing” plus a control room out of “Star Trek”.
The obvious solution is to leave the laptop at home, plug a 64 GB flash drive into the university’s computer and then reboot from the flash drive. No more rummaging around in a rat’s nest of cables, no need for screwdrivers or box wrenches and a more cordial relationship with the IT department.
When it comes to software Chiefio gets “under the hood” but the best I can do is to “kick the tires”, so I hunted for the appropriate “How To”:
http://www.pendrivelinux.com/
It sounded simple when I started working on it four years ago. Making a bootable flash drive is simple indeed. The trouble started when I tried to add special software and Windoze to my Ubuntu based system.
Unless any of you folks can tell me how to get dual boot (Windoze and Linux) working on a large flash drive I plan to switch my efforts towards a “Virtual Machine Environment”. Thank you, Chiefio, for giving a clue about what to try next.
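On the “making a bootable flash drive” part: the usual recipe on Linux is just a raw `dd` copy of a hybrid .iso image onto the stick. A minimal sketch, with the caveat that the file name and `/dev/sdX` device are my placeholders (not from pendrivelinux) and must be edited for your setup:

```shell
#!/bin/sh
# Sketch: write a Linux .iso onto a flash drive so the drive becomes bootable.
# ASSUMPTIONS (mine): the image is ./linux.iso and the stick shows up as
# /dev/sdX -- check with `lsblk` first; dd to the wrong device destroys it!
ISO="./linux.iso"
DEV="/dev/sdX"

if [ ! -b "$DEV" ]; then
    echo "Edit DEV to point at your real flash drive first." >&2
    exit 0
fi

# Raw-copy the image, 4 MB at a time, and flush to the device before unplugging.
dd if="$ISO" of="$DEV" bs=4M conv=fsync
sync
```

This makes a single-OS stick; the dual-boot (Windows plus Linux) case is a different and messier problem, as noted above.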
Some notes on testing various Linux Distributions inside VirtualBox. While most of these are the prebuilt ones from virtualboxes.org, some are “live CD” images and some are direct traditional installations. Surprising to me was the number of “pre-built” V.B. versions that don’t boot. Don’t know if it is ‘release level’ or what (several of them are much older than the current releases on the respective web sites, plus the version of V.Box mentioned in the builds is much older).
I’m now less enthusiastic about their “service” in that it doesn’t look like you can reliably download a pre-build and just run it.
Overall, the smallest Distributions work best in a Virtual Box machine. Puppy Linux and Damn Small Linux both worked well (modulo some minor quirks) while Fedora and Mepis both worked, but were painfully fat and slow.
The implication here is that any environment to be run in a V.Box environment ought to have emphasis on small, fast, and efficient. (For basic things like Open Office, web browsing, and email / chat it doesn’t really take much operating system anyway…)
I’m likely going to proceed to using one of the Puppy or DSL releases as a test bed for learning things like “how to save files to USB” and “persistent environment” across boots. After that, I’ll integrate those “lessons learned” into one of the more security oriented releases to see if I can get both “small fast and usable” along with “Secure with personal settings persistent” in one package.
With that, here are some notes on particular trials:
Knoppix:
Boots, but incredibly slowly. Could have taken a nap… Attempting to launch Chrome causes a ‘startup sequence’ do-over and back to the desktop environment.
Browser Linux:
Booting from flash is a tiny bit slow (only release tested) but works fine. This is a version of Puppy Linux that has only a browser in it.
Puppy.f:
Includes Libre Office. Works fine. Boots a little faster from disk than did Browser Linux, but not enough to matter much.
Slax.f:
From Bootable ISO image on Flash Drive. Boot is a tiny bit slow compared to disk. Generally nice looking. Firefox didn’t handle the mouse correctly and it took some effort to get things closed down. Got a “Kwin” error, so I think the KDE Windows manager has issues… maybe…
Mepis:
From the big, fat, and slow school. Even from hard disk. It works (once you remember the default password starts with a capital letter: “Anon”…) I need to resize the window to avoid a lot of scroll bar usage, but that’s common. Firefox works too. But… it’s just so darned slow! I’ve run into this before. Some versions of Firefox “spin” a lot on some kinds of script usage (or something like that). I remember fixing it in one case by shutting off some setting in Firefox. So possibly with enough ‘fiddling’ it can be made comfortable and fast enough to use. Lots of “glitz”, but at the expense of working well on small machines, IMHO. Clearly “developed” on a large box.
DSLinux:
Boots well. Works fine with one exception. Making a posting from Firefox wanted a ‘login’ to WordPress (it does that sometimes). The posting succeeded, but the browser crashed… Overall, a functional, fast product, but may need some tuning. Mouse Integration is best turned off. Full screen mode is nice.
Debian Sarge:
Fails to boot.
“Partition check:
/dev/ide/host2/bus0/target0/lun0:hde: irq timeout: status=0x50 {DriveReady SeekComplete }
repeat from hde: 4 x, then reset happens then end_request: I/O error, dev 20:00 (hde), sector 0 (or 2 or 4 or 6) eventually ending in a kernel panic “Attempted to kill init”…
FreeBSD:
Boots to login prompt. In typical BSD fashion, you are now left to figure out how to make a GUI environment on your own. I love BSD as it is an old-school, robust, reliable-as-a-rock Unix, but a simple fact of life is that Unix is known for being “User Hostile”. The need to ‘roll your own’ GUI is an example. Great feature, booting to a login prompt, for folks building servers and doing system recovery. Not so good for “Joe User” looking for a quick web browser.
FreeBSD Debian:
Supposedly has the Debian tools / look in it. Boots to login prompt.
TinyCore:
Boots to nearly empty environment. Useless for browsing. With additions likely usable.
SliTaz:
Fails to boot. “atkbd.c: Spurious ACK on isa0060/serio0. “
OpenSuse:
Sort of boots, then can’t find the disk and fails. “Waiting for device /dev/disk/by-id/scsi-SATA_VBOX_HARDDISK_Vbe7 etc etc -part2 to appear:” and hangs. At some point I’ll try a download from their web site rather than the ‘pre-rolled’ from virtualboxes.org that doesn’t work.
Gentoo:
eth0 does not exist. Fails. Then gives ‘gentoo login:’ prompt.
Fedora:
Boots nicely. VERY slow / fat. Keyboard map is wrong (Euro?). Could not make a comment on a web page from Firefox as the @ was not available on my keyboard (and most of the special characters were bizarre things with umlauts and accents and ‘stuff’). Not tolerable or usable in this form.
http://www.linux-kvm.org/page/Main_Page
I think that is the way to go. Virtual Box is now Oracle’s baby (and big companies are pals with TLAs).
Probably more trouble to set up at first, but more secure.
There is a great bunch of How-to’s on http://www.havetoknowhow.com
He is building a media server but uses virtualization, KVM, VNC, SSH on an Ubuntu server.
Kernel Based Virtual Machine
KVM (for Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, that provides the core virtualization infrastructure and a processor specific module, kvm-intel.ko or kvm-amd.ko. KVM also requires a modified QEMU although work is underway to get the required changes upstream.
Using KVM, one can run multiple virtual machines running unmodified Linux or Windows images. Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, etc.
The kernel component of KVM is included in mainline Linux, as of 2.6.20.
KVM is open source software.
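For anyone wondering whether their box can run KVM at all, the usual quick check is to look for the vmx (Intel VT) or svm (AMD-V) flags in /proc/cpuinfo and see whether the kvm modules are loaded. A rough sketch (Linux only; this is the generic check, not anything specific from the how-to links above):

```shell
#!/bin/sh
# Sketch: check whether this box can do KVM full virtualization (Linux only).
# The cpuinfo "flags" lines advertise vmx (Intel VT) or svm (AMD-V).
count=$(grep -E -c '(vmx|svm)' /proc/cpuinfo 2>/dev/null || true)
if [ "${count:-0}" -gt 0 ]; then
    echo "CPU has virtualization extensions ($count logical CPUs report them)"
else
    echo "No vmx/svm flags found -- KVM full virtualization won't work here"
fi

# Is the kvm kernel module loaded? (kvm_intel or kvm_amd sits on top of kvm.ko)
if lsmod 2>/dev/null | grep -q '^kvm'; then
    echo "kvm module loaded"
else
    echo "kvm module not loaded"
fi
```

Note the flags can also be disabled in the BIOS even when the CPU supports them, so a zero count is worth a trip into the BIOS setup before giving up.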
@EM – Thanks for your explanation. I will give the virtual thing a try and see if it works for me. I’m retired and miss having my IT guy a quick call away to do what I want. When I have to make the choices it gets a bit scary and makes me wish I had paid more attention to details and not relied on someone to do my bidding. I guess that a statement on life, best do it yourself if you want to know things.
This is a list of less ‘resource hog’ oriented Linux releases:
https://en.wikipedia.org/wiki/Lightweight_Linux_distribution
Of note:
I agree fully with that “interface stays out of your way”. A LOT of the newer distributions are all up in your grill with Graphics, Glitz, Garbage. Visually pretty, I suppose, to have an opening panel with 1024 x 1280 of 64 bit deep color in such a complex form that it will not compress well… sure does exercise that graphics card in your hot box… Then having fancy hidden dynamic menus is “way cool”… unless you are new to that environment and just want to see what software can be run, or how to shut down the box. So give me an old school menu driven system with a menu bar and clear clues to the user where to look to make the basics happen.
Then there is that point about Ubuntu by default expecting a 1 GHz processor and 1 GB RAM. Sheesh! Talk about over the top! No wonder it runs like a pig on everything I’ve tried. I have ONE machine in that class (the newest laptop) and it is Linux Hostile so can only use Linux in a virtual machine environment (unless I want to reformat the whole disk and build a Linux install from the ground up with special video and wireless drivers).
OK, so I’m definitely seeing the pattern here. Avoid the releases built by “Code Bloat R Us” developers and reduce the “search space” to the low resource demand releases.
Is this a big hit to features and performance? Not that I’ve ever seen. One example: When you compile a Linux from sources, one of the things you can set is a default memory chunk size. Make that small, and a program only gets small chunks of memory and may need to ask for them repeatedly if it needs a lot. Make it very large, and a big chunk of memory is given to the program at launch, and then any time more is needed, big chunks are carved out and handed over. That makes Big Fat Programs (like video games) go a bit faster, but means every little program that does some minor task ends up with a big chunk of allocated memory doing nothing. Set that number small, you get a much smaller Linux that runs faster for most things. Set it large, you get a large fat Linux that runs Large Fat Programs a bit more effectively but has bloated memory usage for other stuff.
Mandrake Linux regularly set that chunk size larger over time. (Mandrake was a derivative of Red Hat.) So Mandrake would advertise that it did some things faster or better, but you needed twice as much memory in your box for the ‘minimum’ to install it. I’d run Mandrake and Red Hat on the same hardware, and frankly, there wasn’t much real difference. But Red Hat ran in much smaller boxes…
So some of those ‘lightweight’ Linux releases will just have some of those kinds of settings put back to the ‘small’ size. Like this one:
“Lubuntu – Lightweight in comparison to Ubuntu. 600 MB download. Package Manager: Synaptic”
This also means that I could build my own release and set those parameters to the “small” sizes.
In other cases, some developers ran off to build a Brave New World (often in the area of GUIs where they loaded up the Graphics end of things). So there are a half dozen ways to make the User Interface have more (whatever…) that someone wanted. Many of these were developed by folks who didn’t give a damn about resource usage. IMHO, GNOME is one of them. Others are much more “resource lite”. KDE is, IMHO, one of them. To some extent it is just a question of WHEN they were developed. Older versions were built in a more resource-constrained time, and equipment put a natural brake on exuberance… So:
“Fedora with Xfce – Lightweight version of Fedora”
https://en.wikipedia.org/wiki/Xfce
So folks who want a less “pudgy” system can look for Xfce interface systems, for example.
(Yes, this puts “Debian 7” on my list of things to look at).
At any rate, that “gives clue” as to what I’m looking at and what’s likely to end up in the final system. A “lightweight” Linux (meaning “not fat with bloat”, not meaning “low ability”) probably with KDE or XFCE.
I’m still hoping to avoid the need to “roll my own”, but will IFF I must. (I’ve done it before and it’s not all that hard, but takes time and testing.) And in any case, the list of low resource usage Linux releases gives a nice starting point.
@RZW:
Thanks for that briefing! To me, KVM meant “Keyboard Video Mouse” as in KVM Switch. But that’s likely a dying bit of jargon… I’d gone off to Virtual Box due to the laptop being Linux Hostile, but figured I’d come back to ‘what is the base system’ as a later iteration (moving it all onto a small desktop box that’s very Linux Friendly). Looks like I need to check what chip is in it and what virtualization it supports… ( I think it does).
Yes, Oracle owning V.B. is a bit of a worry. ANY large company gets a visit from a TLA and they are compromised. Not one of them will say “No, we will not let you insert things into our code” when they are told “You do $XXX Billion in Federal Business each year. I’m sure you would like to keep doing that business.” I’ve ‘seen things’ in various software that just shout “designed to be open to a TLA”. IMHO, some of the more persistent “bugs” in Microsoft are in that category.
That’s part of why I simply state as a fundamental principle that any REALLY secure work ought to be done on a box without a network connection. Heck, I’d even go so far as using CDs to move the data back and forth instead of a USB stick… or a R/W DVD could be used so you can “scrub” it after use. Then when really worried, feed it to the fireplace when done…
Unless someone wants to put a trojan in a few billion CDs per year (and it ought to show up in the color pattern on the platter anyway) it’s a very unlikely exposure path, and has NO path to moving data back OUT of the secure system unless you write that data on to the disk. Have each “burn” be a full disk worth, there just isn’t much room left for any ‘leakage’ anyway…
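The “burn a full disc, then scrub it” workflow can be sketched with the stock dvd+rw-tools. The device and image names below are my placeholders, and I’ve left the commands as echoes so nothing burns or blanks by accident; strip the `echo` to actually run them:

```shell
#!/bin/sh
# Sketch of the burn-then-scrub workflow with dvd+rw-tools.
# ASSUMPTIONS (mine): burner is /dev/dvd, payload image is ./transfer.iso.
# Commands are echoed for safety -- remove 'echo' to really burn/blank.
DEV="/dev/dvd"
IMG="./transfer.iso"

# 1) Burn the image as a closed, full session:
echo growisofs -dvd-compat -Z "$DEV=$IMG"

# 2) After use, blank the DVD+RW so the data is gone:
echo dvd+rw-format -blank=full "$DEV"
```

A full blank pass is slow but overwrites the whole disc, which fits the “no room left for leakage” idea; for the truly paranoid, the fireplace still wins.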
@BobN:
You’re welcome. As long as you don’t mind slow service times, I’m happy to ‘splain things.
FWIW, one “nice” thing about the way tech changes is that it pretty much ‘turns over’ in about 5 years. That means that anything you didn’t learn 5 years or more ago doesn’t matter much any more ;-) It also means that you can ‘come up to speed’ with only learning a modest amount of the more recent stuff.
Take Virtual Machines. I’d dealt with them a little bit professionally, but mostly like clusters of real hardware. ( I actually started with VM/CMS on IBM gear. The VM is for Virtual Machine… https://en.wikipedia.org/wiki/VM/CMS ) So my experience was not very relevant now. I’ve gotten “OK” with Virtual Box by simply doing the download, install, and play-with cycle. Maybe 6 months elapsed time? And about 10 hours contact time (prior to the latest Linux release testing).
That means you can “catch up with me” in about 10 hours. (Since I’m going to be doing something else over the weekend ;-) In 15, you can be giving ME advice… So just in the last couple of days I explored / learned about the “Full Screen” option (that works nicely, but you need to know about rt-ctl-L and rt-ctl-H etc. to swap back to non-full versions). Similarly, for many releases, turning off ‘mouse integration’ is more reliable. (rt-ctl on my system.) Now you know those things ‘up front’. That gives you about 3 hours of my ‘learning time’ in a couple of minutes. Now you only need 7 hours to catch up ;-)
Similarly, much of my Linux experience is in a KDE based world on Red Hat (about 7.x era). Not all that useful in a completely different GUI environment or on an entirely different distribution… (Like Ubuntu doing that ‘fonts from the internet’ thing that was a surprise…) So we are starting off equal on those bits of “new stuff”.
Mostly it’s just a matter of being stubborn enough to not give up, and to keep searching to find out “what went wrong” and “how to fix it” or even just “where did they hide that button?”…
Back at Virtual Box:
It has its quirks. It’s very picky about where files are located. Build a virtual machine in one particular folder on your disk, and you had better not move or rename anything. No way to edit the path in V.B. that I’ve found. It sticks “snapshots” in a particular directory in YOUR home directory. I think that can be changed, but you must look for it. For general ‘playing’, neither of those is too bad an issue. For making a secure system inside an encrypted box, well, you don’t want a snapshot in your home directory… And you need to know to build the system INSIDE that encrypted file system from the very start, ’cause you can’t move it later. (Though you can just erase it and start over.) I also found that in deleting V.Machines it would ask “Delete all files?” and I’d say yes… but it didn’t delete ALL the files. It left a log file and some other bits in that directory in my home directory. So there is also the need to come up with a ‘secure deletion procedure’ that involves manually deleting / scrubbing and verifying deletion of ‘bits of dross’ left lying around. I have not done that yet, so you know as much about it as I do…
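Both of those quirks can probably be tamed from the command line. VBoxManage does have a `setproperty machinefolder` subcommand to move where new machines (and their snapshots) get created, and GNU `shred` can scrub the leftover logs. A hedged sketch, where the /mnt/crypt mount point and the leftover-file path are my example assumptions, not anything VirtualBox guarantees:

```shell
#!/bin/sh
# Sketch: keep VirtualBox machine files inside an encrypted volume, and
# scrub the droppings it leaves behind after "Delete all files?".
# ASSUMPTION (mine): the encrypted volume is mounted at /mnt/crypt.
if command -v VBoxManage >/dev/null 2>&1; then
    # Create all NEW machines under the encrypted mount instead of the
    # default "VirtualBox VMs" folder in your home directory:
    VBoxManage setproperty machinefolder /mnt/crypt/VMs
else
    echo "VirtualBox not installed; skipping machinefolder change"
fi

# Overwrite-then-remove a straggler file (shred is in GNU coreutils).
# The path below is only an example of the kind of leftover to hunt for.
leftover="$HOME/VirtualBox VMs/OldVM/Logs/VBox.log"
if [ -f "$leftover" ]; then
    shred -u -- "$leftover"
fi
```

Caveat on `shred`: it only reliably scrubs on file systems that overwrite in place, so on journaling or flash media it is “better than nothing”, not a guarantee.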
So hopefully that’s a bit encouraging. Near as I can tell, for general “playing and learning” Virtual Box is not that hard to ‘make go’ and doesn’t do any ‘bad thing’. A nice “sandbox” where you can try stuff without worry. THE biggest issue I’ve run into is just that it is dog slow on large, full (bloated?) Linux releases. It really wants a large fast box for them. (One more reason to go with virtualizing one more layer down ala KVM on the final solution…) So I’d not expect Ubuntu to run worth a damn in V.Box by default. Thus my notes above about what does work.
One final point: While playing with Linux in the Virtual Box environment is fairly easy and safe, it is NOT the same as on real hardware. In an ideal world, it would be better to have an old PC you could ‘scrub’ and just play with doing ‘real installs’ on it. It will be much more efficient and fast in the running. It is also more secure in that there is no “host” to compromise with things like key loggers. (Thus my end target is an SBC, real hardware with a real OS on it.) So for “production”, I’d not use Virtual Box. Too slow and with some “issues” on security (though enough for casual browsing security / privacy).
So “Go For It!” and no worries! After all, by Monday you can know more about any particular Linux release on Virtual Box than I do!
(This is called “The Law Of Mutual Superiority” in geek land. “Anything I do, you can improve; and anything you do, I can improve.” Because each of us can explore into an area where the other has not gone, or apply a POV the other does not have. It is one of my favorite things about computer tech. ANYONE can become the Alpha Geek on some particular point… So RZW is more expert on KVM in our group. My notes here show my learning curve on Virtual Box. In most “Open Source” projects, folks pick a spot to raise their flag and just start in. Pretty soon they are the “expert” in that spot. Just pick a spot, stick your flag in the ground, and start digging… )
Hahaha more expert in KVM – didn’t tried it yet, just found that nice how-to on
http://www.havetoknowhow.com. From there I did Ubuntu server installation SSH, VNC but not KVM.
What I did that was interesting is a network of a virtual Windows 2003 server DNC and a virtual Windows Server 2003 with Exchange Server. All on an HP i5 laptop with 8 GB memory, Windows 7 Pro 64-bit, with VMware Workstation. Had to make images of the disks with Clonezilla and gparted and convert them to virtual machines. That was fun. At the moment I am experimenting with a bunch of Ubuntu desktops with some spec software – servers and some automated routing of stuff between them, all on the above mentioned laptop.
Well, Linux Ubuntu Mint KDE launched (after grumbling about something in the BIOS that it didn’t like, but that didn’t seem to stop it from working).
The first launch was incredibly slow / painful. The Virtual Machine setup had not selected “Ubuntu” for the type as I’d called it “Mint”, so it only gave it 256 MB by default. OK, a quick shutdown and change of memory allocation to 1 GB, and it boots faster. (Still slow, IMHO, but no longer painful.)
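That memory bump can also be done without the GUI: VBoxManage has a `modifyvm` subcommand for exactly this. A small sketch, where the VM name “Mint” is just my assumption of what it’s registered as on your box:

```shell
#!/bin/sh
# Sketch: change a VirtualBox VM's RAM allocation from the shell.
# ASSUMPTION (mine): the VM is registered under the name "Mint".
VM="Mint"
MEM_MB=1024

if command -v VBoxManage >/dev/null 2>&1; then
    VBoxManage modifyvm "$VM" --memory "$MEM_MB"     # VM must be powered off
    VBoxManage showvminfo "$VM" | grep -i '^memory'  # confirm the new size
else
    echo "Would run: VBoxManage modifyvm $VM --memory $MEM_MB"
fi
```

`VBoxManage list vms` will show the registered names if you aren’t sure what to put in VM.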
The good: Looks like things work well and without much issue. Even “mouse integration” looks like it works right (so I can just slide the mouse out of the VM frame and it is working in the host OS frame – no need to click rt-ctrl). Firefox comes up clean, and works well. (I’m typing this in Linux Ubuntu Mint and there is no noticeable keystroke lag / typeahead.)
The Bad: It is definitely from the Big Fat Linux family. Could use a decent diet… Still, it’s not as slow as other BFLinux releases, so they’ve done some work on tuning.
The Mediocre: The GUI is NOT your typical KDE. It’s got transparent menus / panels and most everything you really need is hidden. ( I think they call that “elegant” and “clean”; in reality it is just a PITA for the novice and annoying for folks like me who know where things are; but they are not there anymore…) After some random clicking on things (that seems to be the only way to figure out what something does: click it and hope it is not “shutdown” or “nuke the disk”…) I found a toggle icon on a slide-out menu on an empty box it puts on your desktop by default. That let me turn on some kind of “show home directory on the desktop” and made progress. More “fiddling around” found an “application launcher” icon in the lower left (off screen… scroll scroll scroll) that let me launch something to let me see the applications.
This is better than an Applications Menu how, exactly? Because it’s harder to find? More obscure? Uses more resources? Unintuitive to most new users?
OK, it’s also a High Glitz High Resources GUI. Transparent things that look cool (and make reading text harder…) and consume more memory and CPU. But hey, it looks “elegant” and “clean”… and has “cool”… I’m sure if installed native on relatively large hardware; then given a week of using it to learn where things are hidden, it would be just fine…
(Yes, I’m an old school Unix / Linux user who doesn’t like it when folks move my cheese or force me to find where they moved everything THIS time. I’ve been through it too many times…)
Summary:
All in all it’s a decent release that looks nice and has no obvious Aw Shits lurking. In a week or two I could get used to it. It’s a bit fat for a VM partition, yet Firefox is “fast enough” on it. Workable, but not ideal. If you need a “high feature set” environment, it’s likely one of the more usable. If you want “small, fast, and efficient” and maybe even ‘predictable where things are’ I’d look to one of the mini-releases / small-releases.
I’ll likely install it on dedicated hardware at some future time. We’ll see.
@RZW:
Another of the standard truisms of the industry: “A designated expert is the guy who’s read one manual page ahead of you.”
You’ve read one manual page more than I have. You are now the designated expert…
Congratulations!
EM – Hopefully you installed the Mate version of Mint. You can make Mint look pretty much like Windoze. I did that with the previous version of Mint, but just left the Gnome 3 as it came out of the box, this time. Also, you can add shortcuts to the panel for the stuff you really use a lot.
@Jim2: I can make any release of any of them look like anything… with enough work. The GUI level and themes are all “mobile”. The question is “what does the not-so-techy user get”?
IIRC it was a “Mate” that I downloaded. Mostly I was looking for the “Live CD” and “KDE” options:
linuxmint-13-kde-dvd-32bit
For a ‘first look’ I just wanted a quick sanity check of “Does it boot? Does it run? Is it a pig?”. At this point “It boots. It runs. It’s fast enough even in an emulator.” so it’s on the ‘short list’ of interesting releases. Frankly, just that EVERYTHING worked, even mouse integration, was a big surprise / bonus. First one yet that was completely problem free.
So yeah, I griped about the GUI. I really don’t like the way the whole industry is taking GUI styles. Way too much “insider glitz” and not enough “self prompting for the novice”.
On my “someday” list is to go into an old Corel Linux I have that had a spectacularly usable GUI / Theme, fish out the settings and files, and port it to something newer. It was sort of 1/2 way between a Windoze and Mac look and feel. Crisp, clean, functional. (It’s my archive server. Never needed to upgrade it, not going to either.)
So my “gripe” isn’t that the interface CAN’T be fixed; it is that the novice and newby ought never be presented with a user hostile set of things. Advanced users can easily turn on an environment that is more “spartan” if they like. (In other words, having a menu item that says “hide Menu” is easier to use than NOT having a menu item that says “show menus”… and just needing to know that “Right Click – alt – esc brings up a popup menu and then on the fourth item pick options, then right click – ctl – shift on the third icon of a goat and on the 5th line, click it…)
Hey, I worked at Apple for 7+ years in ‘the early years’. We had a person whose entire job was “Color Estheticist”… having an intuitive and consistent GUI is “a thing” for me…
Don’t ask me what I think of a lime green color scheme ;-)
EM – I’m not a fan of menus that change depending on what you have selected. That makes some things really, really hard to find, but alas, at work, I’m stuck with Windoze 7. Microdick should just try to keep what works for users and change things slowly, but, well …
OT: I was just wondering how your situation is, concerning the gas shortage in Cali.
EM-
Fascinating stuff, for me it’s like watching a bunch of shooting stars on a warm night. Don’t really know ‘exactly’ what y’all are talking about but it’s fascinating! Which is not to say that, per my normal mode, I feel too out of it to make a comment that I hope is close to intelligent or some nearby mark and actually on the dartboard. Here goes –
The human brain came to mind. We traditionally think of it as having three primary parts (or more if we count all the other nerves connected to it) – which I know you know and I can’t spell without looking them up. In basic terms, as I am able to recall: 1.) Brain Stem –
medulla oblongata, 2.) Little Grey Ball – cerebellum, and 3.) Big Grey Ball – cerebrum – in two pieces. I now feel it’s appropriate to mention the 4.) Spinal Cord, and the 5.) Major and Minor Branching Cords and the 6.) Nerve Ends. Sounds like EM is building a multipart, stand alone brain, with some various cords, and a couple of senses. Then I thought that the WiFi, or whatever, “hook-up” was like our voice and ears and the voice and ears of others – maybe just ‘sound waves’? Long analogy short, we’re talking about building an intelligent system, with all the senses, capabilities of thought, self protection, etc., etc. We can’t detect or block the ‘problem’ others; there’s no privacy to speak of on this planet called the Web. We need our own Navajo ‘code talkers’ because we’re still blind as a bat without radar/sonar to everything and everyone around us.
IOW – We Need Eyes, right?;-)
Well… but what is the purpose of hiding out from the MONOCULAR and pyramidal sight of those New World Order BIG BROTHERS? Remember what happened to all those who attempted the same in past history? Where are they now?
It seems that childish game is worse than drinking Kool-Aid with Arsenic Acid…
This is why the Chinese say: “Just wait at your front door… and you’ll see the corpse of your enemy passing by.”
Hmmm…. Thought I’d posted a reply here from one of my virtual machine trials, but don’t see it… Maybe I need to check the SPAM queue ;-)
@Jim2: Looks like I didn’t get MATE the first time, so I’ve since downloaded it. Trial later today. Don’t be so quick to blow off work. Don’t know how ‘locked down’ your work machines are, but for some hardware “there are ways”… If they didn’t lock down the BIOS and you can boot from CD, just sticking in a Knoppix CD can give a working Linux… (Though any decent I.T. shop will be locking things down… it’s something I taught in the forensics unit I ran once upon a time… when I handed out Knoppix CDs to the class and demonstrated the ‘exposure’ ;-) Now “Don’t do this, OK! wink wink”…)
@Sera:
Didn’t know there was a gas shortage ’till I went to fill up yesterday. $4.79 / gallon… See the new posting ;-)
@Adolfo:
Simply put: I am not a fatalist. So folks are trying to kill me, steal my stuff, and put me in chains? Because of that I ought to give up and surrender? Is that your POV? Me? I’d rather go down swinging. So as “they” ratchet up the “control and intrusive monitoring”, I’ll ramp up the anarchy and privacy.
I’ll also do it visibly so that others can see it being done, and so that others can know how to do it as well. BOTH enhancing the ability to communicate despite barriers AND the ability to prevent “unwanted communications” to those who would pry.
As I have virtually nothing to hide or care about, I’m “safe” to do the work in public view. Worst case: I’m a geek playing with geek toys and a guy who’s done security for a living looking at security tools. If I’m really lucky, I’ll get hired by one side or the other. ( I’m not so sure I really care anymore which side, or can reliably tell them apart… on at least one occasion I think, I was working on a TLA project via an intermediary, but maybe not… ) All I require is that I not do anything that gets me arrested / fined / killed.
So ‘why bother’? Simply put: Because it CAN let the Average Guy do normal things outside the view of them, whoever they are. Be they Chinese Government Hackers, Russian Cowboys, Corporate Scum (like Google wanting to gather all possible information about you; or Facebook using it against you after the fact via changing privacy rules), or even random hackers. Oh, or even the US Government and TLAs world wide. Because IMHO I need it now to do my ‘regular stuff’. (Like, oh, online trading of stocks – where I need a provably clean browser when I log in and ‘do my stuff’. I presently use a dedicated machine, but eventually it’s likely to ‘pick up something’, so I need to go to a ‘fresh on boot’ process.)
So you can quit, or you can match the escalation.
To quote someone or other: “Sure the game is rigged, but it’s the only game in town.” So might as well play it better than most.
@pascvakcs:
Nothing so grand. Just trying to keep OUT of normal computer use all the folks trying to stick their electronic tongue down my throat… Essentially building an increasingly defensive environment as the folks “out there” get more offensive (both senses ;-)
Have a couple of candidates now that seem to work pretty well. So I’m slowly honing things to the point where it becomes a tolerable experience. Almost there. It’s still a bit easier and faster to just use the laptop and regular browser, but a couple of cases are now very close. I was about ready to praise SliTaz when I discovered that doing ‘package updates’ causes a reset.
It comes with Midori Secure browser built in and using duckduckgo.com for web searches by default. So they’ve already taken all the ‘track me’ stuff out of it. Supports “TOR Routing”, but I’ve not found how to turn it on yet. (Was installing the TOR package when I discovered the ‘reset’ effect. Happens on other packages too).
Oh, and the menus are obvious, easy to use, and include mostly stuff I want them to include. Adding packages is nearly trivial too.
So, until I find out how to prevent it from doing a reset in the VM, it’s probably best for a USB stick or a CD boot. (It even has a floppy boot option and you can download the entire package library to an .iso image, so even net booting off a file server and installing packages locally can be done). Eventually I’ll build a custom version and install it in the VM (avoiding that reset problem) even if I don’t find another way to avoid it. Oh, and I need to install a Flash player into Midori ;-)
It’s my ‘most likely’ candidate at this point (but I’m going to keep on trying alternatives as time permits… The Virtual Box process makes “just trying something” fast and easy, even if it also makes some things more prone to hang / crash issues).
At this point, I’m quite comfortable with the idea of booting any of several distributions in a V.M. and doing some browsing. Fast enough and comfortable enough for “times I care”.
Well. Looks like some of the “performance problems” may not have to do with the virtual machine.
I “burned” several .iso images onto real CDs instead of just feeding them to the V.Box. Then went in the other room and tried them in the Evo. (That, it turns out, can’t boot from a DVD, so those particular releases will need to await my connecting a monitor to ‘the other other box’ ;-)
This is a fairly beefy box with a couple of Gig of memory and a pretty good CPU.
Aside from learning that a couple of releases had ‘other issues’ (like discovering that my almost favorite so far, SliTaz, had ‘video driver issues’ on that box – so open a menu and close it, a white patch left on the wallpaper… and another had strange keyboard mappings so I could not find an “@” to type an email address…) there were a couple that OUGHT to have run like a bat out of hell (being small and light) but Firefox / Iceweasel / whatever you want to call Mozilla by another name… became “dog slow” after I opened a couple of windows….
Exploration showed that the CPU was pegged.
Further exploration found web pages saying to enter “about:config” and then set the browser disk cache entry to something like 16328 (instead of the 50,000 ish size mine was at) to “fix it”.
Which would be fine… except this is all booting from CD ROM as in Read Only… OK…
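Since the live CD is read-only at runtime, a fix like that has to be baked in at build time instead. A minimal sketch of the idea: drop the pref into a Firefox profile skeleton before mastering the image (the directory layout here is an assumption to check against your particular release):

```shell
# Hypothetical profile skeleton; real releases keep this somewhere under
# /etc/skel/.mozilla/ -- verify the path before mastering the .iso.
mkdir -p profile-skel
cat > profile-skel/user.js <<'EOF'
// Shrink the disk cache so the browser stops pegging the CPU (value in KB).
user_pref("browser.cache.disk.capacity", 16328);
EOF
cat profile-skel/user.js
```

On boot, Firefox reads user.js into the profile, so the setting survives even though the media underneath is read-only.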
So now I need to revisit just how “slow” each “slow” release above really might be, if not using Firefox / Iceweasel…
The major “take away” I have from this, for me, is that the “download and go” is unlikely to work very well on many kinds of hardware. There really is a need for a ‘customize and config’ step. (Which I was reluctantly deciding anyway, based on the desire to have a ‘privacy hardened’ set of features…) So looks like at least some kind of “build” step will be needed.
Knoppix worked very well from CD, for example, except the Firefox has the CPU Hog Option locked on. Slax was nice too, except the CPU Hog issue was there. SliTaz was nice in the V.Box, except it barfed on package adds and TOR was a package add. ALL of those can be easily fixed if I do a “Modify then burn my own CD .iso file” step. Guess what I’m going to try doing next? (Probably next week some time…)
I’d always wanted to learn how to make a ‘bootable CD .iso file from scratch’, so I guess that’s what I’m going to (finally?) do… ready or not.
OTOH, I have at least one release that’s working “well enough” in the desktop box to be a “clean browser environment” for things like stock account log-in and other “financial type” transactions. (That is, I don’t have to depend on my anti-virus and firewall to prevent snoop codes… At this point, given the number of ‘day zero’ attacks that have surfaced, I have no confidence that one can depend on winning the ‘race condition’ enough to prevent folks from getting online financial information during normal use of a multi-purpose computer / browser combination.)
So I’m going to revisit the “slow” releases and assure they were slow, and it wasn’t just a stupid browser. Then I’m going to “pick one or two” to trial as “build my own .iso” and see how that goes. Once that step is well established, I’ll move on to the step of customizing for privacy and security, and making all three of CD / DVD / and USB Stick variations.
By the end of that, we ought to have a pretty damn secure “private work box”, and a PDS Public Browser / communicator boot system. After that it’s just working out the “store and forward” of data and files between them in a secure way and move parts of it onto a “disposo-system” SBC. I’d guess about 6 months to do it. (Less if I just did that, longer if other more interesting things come along…) During the process, if anyone sees a particular need, or interest, just holler. I’m likely to be a bit flexible on order of execution. (So I’m concentrating on browsers, presuming folks would be using an HTTPS connection to some cloud email and swapping files encrypted on the “private box” if they wanted to swap secure data. But if instead someone wanted a secure VPN site-to-site instead, I’d look to put that kind of code in first.)
I’m also dual tracking: Virtual Box on Windows and Bootable Linux Appliances. I’m planning a quick look at a Virtual Linux Box Environment (as I just don’t trust Windows…) and could likely be persuaded to make a “Virtual Box Linux Base” early on. (Boot from CD / DVD that launches Virtual Box, inside of which one puts “whatever”. Now it’s not possible for a key logger on the host to compromise what’s done in the V.Box, as the host boots new each time.)
A lot of work? Oh yeah…
But as I’m pretty much locked in to doing some financial things on line AND locked in to doing account maintenance / blog management on line, I’m pretty much forced to create “probably secure” ways to do it that are not based on winning a security race condition…
That I can also provide a solution for a “Tallbloke and the Constable Moment” in the process and / or help a thousand “FOIA-2011”s blossom, well, that’s just sugar on top ;-)
OK, back to work for me…
EM – I could get fired for using Linux from a thumb drive, or even just plugging in a non-issued thumb drive. We use Iron Key, if we need a TD. It’s not THAT big of an issue :)
@Jim2:
Oh. That kind of shop. OK, I suggest a personal tablet with 4G ;-)
I am doing my best to understand all this. I tried “Red Hat” when it was North Carolina’s biggest company by total capitalization. Yes, bigger than Duke Power, Lowe’s and all the rest.
I failed the “Red Hat” test as I could not install the operating system from the pretty disk I bought for $20.
A few years later I tried again with Knoppix. This time my meager software smarts were sufficient to install the OS and run programs on my laptop. Eventually I gave up and returned to Windoze because the screen insisted on working at low resolution and I was too dumb to fix it.
Then Bill Gates clobbered me with Vista. This time the worm turned and I vowed to free myself from the tyrant who forced me to buy a new operating system every few years and also to pay for new versions of my applications!
This time Linux was easy. Ubuntu Edgy Eft was as slick as snot on a doorknob. Bless you Ubuntu, you freed me from captivity; Bill Gates will never get another penny from me. Sadly, Ubuntu has bloated to the point that I am looking for alternatives. Currently I am flirting with Mint Maya but thanks to Chiefio I am having second thoughts. Old though I am, a “Hard Body” still gets me going.
I was hoping that some of you would tell me how to create a flash drive capable of running a “State of the Art” A/V system. Currently, I use Mafalda (my laptop) called after a cute little lady from Argentina. While I love Mafalda, she weighs in at about 2 Kg and I would prefer to lug around something much smaller.
Chiefio talks about “bloat” so here is an anecdote about efficient code. For 12 years I worked in the physics department of a university in North Carolina. My job was to build a “Free Electron Laser” that eventually turned out to be the world’s brightest gamma ray source.
We needed all kinds of sophisticated software to design the machine. Our engineers were having problems with FEA (Finite Element Analysis) programs that required serious expenditures for super-computer time. That big waste of money came to a screeching halt when we hired a couple of engineers from BINP. The Budker Institute of Nuclear Physics is located in Novosibirsk, one of the USSR’s “Secret Cities”.
The BINP engineers had a 6,000 point FEA program that ran on IBM PCs. In those days our mechanical engineers were using Intel 486 CPUs to run AutoCAD. The Russians gave us a 5.25″ floppy disk (1.2 MB) from which we installed their FEA program. We decided to test the program on a thorny magnetic field mapping problem. When we hit the “Run” button the Russians recommended that we make a cup of coffee as it would take at least 30 minutes before the “Post-processor” would kick in.
Before we could load up the coffee maker the computer beeped, much to the alarm of the BINP engineers who thought that something had gone wrong. It turned out that the Russians had designed amazingly efficient code. Necessity is the mother of invention. They only had IBM 286 machines while we in the “West” had IBM 486 machines. We threw away our advantage by writing bloated code.
galloping – 1) Sounds like a fun job. 2) It’s good to have engineers that know math. :)
@GallopingCamel:
Yup! I’ve shared before that my Brother-In-Law had a chart showing that the improvement in “computes” at NASA in Aerospace had been greater due to software / algorithm than due to Moore’s Law / hardware. (And the necessary corollary being that BAD software can consume all hardware advances and then some… IMHO illustrated nicely by MicroSoft where the user available performance of their computers has not noticeably changed from x486 boxes to things with supercomputer speed… Spread sheet, browser, writing all about the same speed…)
Another story is when I was a Senior Consultant on the RAMIS II DBMS product. It was (is?) a very easy to use product for gluing together different kinds of files / databases and doing nearly natural language reporting. So: “Table file sales. Print yearly by salesman across regions and total.” (You could do a lot more complicated and detailed stuff than that, but you get the idea…)
Each of us was to ‘pick a specialty’ along with generally being able to use the product well to create whole systems (like, oh, a sales tracking system). You could also do things like extract files / data from the accounting system (in some other product) and match that data to the Sales DB and make composite reports. On the fly… But it could be, er, less than efficient. Doing things like multiple sorts and extracts without any key fields, as one example. So I chose to specialize in “efficiency” since I care about it… I made a 26 point checklist of “things to do”.
So I get sent to the State Of California. They have a LARGE IBM mainframe that’s at about 98% utilization pretty much 24 x 7 and they are looking at a couple of million to buy another one… I take about a week. Extracts done many times by many reports get turned into one extract and multiple uses. Sorts done many times on a non-sorted field get a sorted key file added. Etc. etc. At the end of the week, when I left, the machine was at 5% utilization…. We had a Very Happy Customer!
So I’ve spent a few decades now just trying to convince folks that “Thinking Matters!”. Just THINK about what you are trying to do, and spend some of that time working up a better method and some of it finding a more efficient process. It can make a factor of 10 difference in resource usage pretty easily… The Russians have always been quite good at that. Partly because they had to. Partly because “they think well” and still value it. (Heck, chess players can be rock stars…)
Per running individual A/V systems from a flash drive: Depends on WHICH A/V system…
Are you looking for a thumb drive you can plug into whatever is on the desk where you walk in?
or
Are you looking for a commercial light laptop to buy that you can put your thumb drive into?
The first is high risk, as you WILL have compatibility issues eventually with some gear somewhere. The second is pretty easy, but you need to sort out the compatibility issues on your own. (Personally, I’d likely just get a Macintosh Laptop as it will drive a lot of A/V gear directly with some of the slickest software on the planet… then again, it runs $Thousand+ )
So look for a “Linux Laptop” and you are pretty much guaranteed that you can run it from a thumbdrive (leaving the MS stuff intact on the disk).
Just hit the Linux-laptops site and look at “candidates”. You want the one that has the A/V power and ports you want, but in the discussion says “Easy to install and worked fine”. It’s what I OUGHT to have done before buying my HP G6… ( I knew to do it, but just ‘had the hots’ to get a new laptop and bought what was on the shelf based on price performance and availability…without doing the ‘check software ease’ point..)
An example (from a slightly different HP g6 – the dual core AMD one):
http://www.linux-laptop.net/
has a list of vendor names. Click one, and you get a list of models. Click them and you get the “story” of what someone experienced with it. Like:
http://akria.wordpress.com/2008/06/30/installing-ubuntu-804-on-the-hp-g6031ea-laptop/
(Arrived at from the link in the index in the first site)
And on it goes… I have a similar network ‘issue’ and video driver ‘issue’ with this, newer, g6; but also have a brain dead 4 physical partition disk layout so get to do a dump / format / restore that I’m just not willing to risk on a new machine with painful recovery… So it’s staying Windoze Only for a couple of years. (THEN I’ll convert it ;-)
So you can hunt and peck on any given laptops of interest and see what looks easy and what’s just a “Stay Away!” and what has ‘no information’ meaning YOU will be the guinea pig.
Alternatively, you can just hit some of the sites that sell pre-configured laptops. (AFTER doing a web search on them and the particular product…) such as (random links from search):
http://www.linuxcertified.com/linux_laptops.html
http://www.thelinuxlaptop.com/
http://emperorlinux.com/
Even Amazon:
http://www.amazon.com/s?ie=UTF8&keywords=linux%20laptops&page=1&rh=i:aps,k:linux%20laptops&tag=duckduckgo-d-20
More discussion here:
http://www.pcworld.com/article/211113/how_to_buy_a_linux_laptop.html
Finally, don’t know if you need AV “production”, but there’s a Linux for that! ;-)
http://distrowatch.com/table.php?distribution=avlinux
A review / write up of it:
http://www.linuxjournal.com/content/home-av-linux
Hope that helps ;-)
I’m going to toss some links in here, with minimal comments. Why? They are links about how to make .iso CD images under various distributions of Linux. Things change over time, and the last time I did anything like this was about 15+ years ago. Since then, all sorts of distributions have run off into all sorts of directions. For example, I’ve never done much with “package” based builds. Even after Red Hat went to RPMs. By then I pretty much had what I wanted, or for special purposes / playing / testing didn’t need to build a system. Since rpms, we’ve had a dozen other package managers blossom. The one on SliTaz was incredibly easy to use (and frankly, has single handedly convinced me to start using package management ;-)
The end result, though, is that the administrative build process has now diverged between the various families of distributions. There isn’t ‘one way’ to make a CD of a distribution… That means I can’t proceed with ‘learning it’ prior to release selection as it isn’t an “it” it’s a “them”… So, various links go here until I settle at least on a family of Linux to use. Then this list will save me some search time…
List of live CDs: http://livecdlist.com/ ( probably posted already)
The Pendrive USB installer that lets you make a variety of USB installed Linux releases:
http://www.pendrivelinux.com/universal-usb-installer-easy-as-1-2-3/#button
Knoppix Sources (based on Debian): http://debian-knoppix.alioth.debian.org/
SliTaz sources: http://mirror.slitaz.org/sources/
General idea of how to duplicate a CD / DVD (pretty trivial if you know dd ):
http://www.linuxlookup.com/howto/create_iso_image_file_linux
As part of Unix lore, there’s a fun story behind “dd”. It stands for “convert and copy”… Why? Because “cc” was already used for the “C Compiler” ;-)
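The dd copy itself is a one-liner. A minimal sketch of the idea, using a scratch file as a stand-in for the optical drive (on real hardware you’d read from something like /dev/cdrom, a device name that varies by distro):

```shell
# Stand-in "disc": 16 blocks of 2048 bytes, the CD-ROM sector size.
dd if=/dev/urandom of=disc bs=2048 count=16 2>/dev/null
# The image copy itself; on real gear: dd if=/dev/cdrom of=backup.iso bs=2048
dd if=disc of=backup.iso bs=2048 conv=noerror,sync 2>/dev/null
# Matching checksums mean a faithful, mountable image.
md5sum disc backup.iso
```

The conv=noerror,sync options keep dd going past bad sectors, padding them with zeroes, which is handy on scratched media.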
Making a customized Ubuntu (another Debian derived OS):
https://aaporantalainen.wordpress.com/2010/04/16/howto-make-your-own-linux-livecd/
Done from inside Virtual Box and with an Audio / Visual orientation. Uses “remastersys” that has its own wiki: https://en.wikipedia.org/wiki/Remastersys and a how to use it example:
http://www.remastersys.com/ubuntu.html
And so much more: https://duckduckgo.com/?q=linux+howto+make+live+CD+
How to “flash” your bios using a CD (it’s just so obscure and unlikely anyone would ever need to do it that it’s actually curiously interesting as the kind of arcane thing that can be done with a Linux box…) http://www.nenie.org/misc/flashbootcd.html
A different way of making a Debian release using “live tools”:
http://fak3r.com/2012/04/30/howto-create-a-linux-livecd/
Discussion of method under Slackware:
http://www.linuxquestions.org/questions/slackware-14/howto-livecd-dvd-with-ramdisk-coolness-691480/
A Knoppix oriented search: https://duckduckgo.com/?q=knoppix+howto+liveCD+
Gentoo alternate installation methods: http://www.gentoo.org/doc/en/altinstall.xml
OK, I’m kicking around basing off of Debian, Slackware, or SliTaz (that has its own package manager). Likely to be a couple of days as I look at what worked fast and well under Virtual Box and what “had issues”. Sorting them into Debian base vs Slackware base vs (whatever). Once that’s done, I’ll be booting my (older, more Linux friendly) box off of the target OS (perhaps adding in a dedicated disk, or seeing if I can do the whole thing off of USB…) and doing a basic build / customize cycle. After that, I’ll do the “burn to CD” and THEN find out if I’ve “got clue” about this whole process by then… or not… Only after THAT step, can I start the testing / QA on that decision path to see if my choice was right. (Welcome to the world of software development… where you ‘make a good guess’ and find out weeks or months later if it was all wasted time or not… Why I spend so much time ‘up front’ assessing things before charging off down a path…)
So I’ll be spending time contemplating these charts and “trying things”:
https://en.wikipedia.org/wiki/Comparison_of_Linux_Live_Distros
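For what it’s worth, the “burn my own .iso” step in most of those how-tos boils down to a single mkisofs / genisoimage invocation once the live filesystem tree is populated. A hedged sketch (the directory layout and isolinux paths are assumptions to check against the chosen distro; shown here as a command string rather than executed):

```shell
# The usual El Torito boot incantation for an isolinux-based live CD.
# livecd/ would hold the kernel, initrd, compressed root fs, and the
# isolinux/ bootloader files copied in from the syslinux package.
cmd='genisoimage -o custom.iso -R -J \
  -b isolinux/isolinux.bin -c isolinux/boot.cat \
  -no-emul-boot -boot-load-size 4 -boot-info-table livecd'
echo "$cmd"
```

The -R and -J flags add Rock Ridge and Joliet extensions so long filenames survive on both Unix and Windows mounts.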
Chiefio,
The light laptop (Mafalda) with Linux is what I have. It works every time without much effort on my part. Ubuntu never asks me whether I want to install the hardware it has just detected (dumb question). It just finds the necessary drivers and installs them. Most of the time all I have to do is connect my laptop to the projector via a VGA cable and let Linux work its magic. It only becomes a chore when I encounter a “High End” A/V system with fancy water cooled projectors that Linux is not familiar with.
I think you are telling me that my idea of booting N**U computers from a flash drive is “high risk”. That makes me feel better. I was thinking the problem was my lack of computer savvy.
Your insights are most helpful.
“……a LARGE IBM mainframe that’s at about 98% utilization pretty much 24 x 7 and they are looking at a couple of million to buy another one… I take about a week. Extracts done many times by many reports get turned into one extract and multiple uses. Sorts done many times on a non-sorted field get a sorted key file added. Etc. etc. At the end of the week, when I left, the machine was at 5% utilization…. ”
Great story! Millions of hardware dollars saved by taking the trouble to improve software performance. My guess is that big savings were made in operating costs as the batch processing no longer required staffing 24/7!
Today, high performance hardware is so cheap that there is not much interest in efficient code!
Here is another anecdote about software performance. The GE “Medium” transformer plant in Rome, Georgia needed 13 engineer weeks to design a transformer using AutoCAD. This may sound like a long time until you realize that a “Medium” transformer can weigh up to 40 tonnes.
In 1990 I reviewed GE’s engineering methods and recommended the introduction of DA (Design Automation). Engineers (like me) love automating other people’s jobs but finally we have outsmarted ourselves by automating our own jobs. Using a DA product pioneered by Babcock & Wilcox the design time was reduced from 13 man-weeks to one man-day.
Last year I dropped by the plant to see how things were going only to discover it had moved to Mexico!
Code efficiency can be an amazing thing. I ran a development group in the Bay and along with delivering a product with hardware, firmware and drivers we had to deliver the test software to manufacturing. I hired a crazy Russian to do reliability testing on the test code. The programmers soon came to me complaining that he was rewriting all their code when they handed it over. I took a few days and audited his results. Not only was he fixing bugs, the efficiency was unbelievable. We had a typical run time of 20 minutes for our products; this guy was doing everything in the spec, plus a few more things he added, and doing it in 7 minutes.
Every week we had status meetings and the Manufacturing guy would always demand the run time status of the software. This wasn’t my first Rodeo, so I told him 30 minutes. Every week he would yell and scream and say he needed more time removed for him to hit production numbers. Every week I would walk it down a bit and he was happy for the improvement, but each time would come up with an excuse why it had to be lowered still further. As we got close to the hand-off he said it was impossible to do his job and make his numbers unless he had a test time of 15 minutes.
We handed off shortly after that with the now 6 minutes of run time. Now it was my turn to scream about upping the numbers as he had such excess time. It’s all a game, but efficient code can sure do wonders in a test environment. It’s just too bad the Russian guy was half crazy; we practically had to lock him in a room away from the rest. Not the most social guy.
There is a feature called Undo Disks, at least in Windows Virtual PC, that you can turn on. It provides a means of reverting the OS to the state it was in when you turned the Undo Disks option on.
From the Virtual PC help file:
“Undo Disks is a feature that saves changes to a virtual machine in a separate undo disk file in case you want to reverse the changes. The Undo Disks setting applies to all virtual hard disks attached to the virtual machine.
When you run a virtual machine that is using Undo Disks, any changes to a virtual hard disk are temporarily stored in an undo disk (.vud) file, rather than in the virtual hard disks attached to the virtual machine. As you continue to make changes to a virtual machine, those changes continue to grow in the undo disk. If you decide to either apply or discard the changes stored in an undo disk, that action applies to all changes stored in the undo disk—in other words, you cannot selectively apply or discard changes on an undo disk.
…
After you turn on the Undo Disks feature for a virtual machine, an undo disk stores all the changes associated with that virtual machine from the time that you start using the feature. These changes can include data that you add or delete to a virtual machine, updates applied to the guest operating system or applications, or even the addition or removal of applications. All virtual hard disks attached to the virtual machine are unaffected by the changes unless you apply the changes. To delete the undo disk file, you discard all the changes. This effectively leaves the virtual hard disks intact and eliminates the possibility that the changes will be applied to the virtual hard disk.
“
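For what it’s worth, VirtualBox (the tool discussed up-thread) has a rough analog: mark the base disk immutable, and each boot’s changes land in an auto-resetting differencing image. Shown here as a command string only (the disk filename is a placeholder, and older VirtualBox versions spell the subcommand modifyhd):

```shell
# Make the base disk read-only from the VM's point of view; per-session
# writes go to a differencing image that resets on the next power-on.
cmd='VBoxManage modifymedium disk "clean-base.vdi" --type immutable'
echo "$cmd"
```

That gives much the same “discard everything on reboot” behavior the Virtual PC help file describes, without a separate apply/discard step.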
The problem with using one core versus more than one core in the Virtual PC environment is that if you install with only one core and later turn on more cores, the operating system is not expecting a multi-core environment and will not have installed the necessary components.
However if you install the OS with 2 or more cores enabled, it will work properly even if later you change the setting to a single core environment.
I have had a lot of experience using virtual machines, in the Microsoft Windows environment.
@BobN:
We called them “raw meat” guys… “Put them in a room, lock the door, and slide raw meat under it every day…” Every team needs some ;-)
FWIW, I think it is a learnable mental state. I was originally from the “user” side of things. Touchy feely consulting on IBM Mainframes in ‘end user’ DBMS report writing…. When I wanted to move up in shops, I was told I “wasn’t technical enough”… so I moved myself over to a Unix shop at a high end manufacture… Years later I’m running the “Systems Guys” on a Unix Cray Supercomputer site…. So I go out to apply for Director jobs… I’m told “You are too technical, we need someone with more user focus”! ( I did get a Director of I.T. slot … at a tech start up with about 100 people rising to 200 in a year…)
So, in the middle of that process, I was doing Unix Sys Admin and SysProg work. Doing the sporadic “coding frenzy”… The intensity of the focus and all, well, some of the “social skills” go offline for a while… But the code is very very good… (And when you hear people muttering about “raw meat” you know you’ve made it ;-)
Then I slid back a bit more toward ‘social stuff’ and moved up the management food chain. Still, had to keep up the Skilz … So we got this Cray, and a giant (then) 1 TB tape robot backing store. Cray makes them work together, but the disk commands (like du, disk utilization) are a bit confused. Data blocks can be on disk, or migrated out to tape… So one day, just to see if I still ‘had what it takes’ I hack the du command. Added a -m flag to show what blocks are ‘in the file, but migrated’. So now you can do “du” and get total file sizes on disk, or “du -m” to get size migrated to tape (and some other option I added for the sum of both but don’t remember now…)
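The underlying trick generalizes: a file whose allocated blocks are far smaller than its length has data living somewhere else (migrated to tape then; sparse files today). A minimal sketch of that block-versus-size test with GNU stat (the Cray HSM specifics are long gone; this just shows the general idea):

```shell
# A file 1 MB long with (nearly) no blocks actually allocated on disk.
truncate -s 1M sparse.dat
alloc=$(stat -c '%b' sparse.dat)   # 512-byte blocks allocated on disk
size=$(stat -c '%s' sparse.dat)    # logical length in bytes
echo "allocated=$((alloc * 512)) bytes of a $size byte file"
```

A migration-aware du does essentially this comparison per file, attributing the shortfall to the tape tier.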
Turned out to be easier than I expected, but still got “that look” from folks of “You did what?”… Not quite up to “raw meat muttering” standards, but enough to avoid “He’s a suit – no skilz.” looks ;-)
I think it’s my background (ranging over such wide programming turf) that makes me so critical of the “quality” of work in GIStemp and related. It’s just so… “limited” and “messy”. Like someone had a FORTRAN class, but never really wrote code for a living. Shipping production product. There’s just things you notice… (Though whoever did the Python chunk did quality work. Step 2 IIRC. Clearly Hansen contracted that bit out to someone with clue…)
@GallopingCamel:
The problem with “booting from USB” is just that the hardware and software must be compatible. As long as you built the two to work together, no problem. BUT, take a bootable USB drive and stick it in “some random hardware”… no idea what will work, or not.
IF the problem is just driving “strange” AV gear, and you have a laptop known to “play well with Linux”; consider making a custom version of Linux that lives on a thumb drive (or even on a CD / DVD ) in THAT laptop. Now that release only has to do one job: Drive THAT AV gear. You don’t care if the thing can’t run Open Office or if it has a crappy GUI on the desktop…
Just that it gets lots of “odd” AV drivers early.
So that’s what I’d do. Look for a version of Linux that drives the target hardware, and build it onto an alternate boot device for the laptop you already use. Like putting AV Linux on a CD or installing a set of custom drivers downloaded from the vendor…
FWIW, Mexico is likely to continue to grow and even the Peso is likely to be stronger relative to the US$.
Oh, and love the story! A lot of the “nuts and bolts” of engineering is being automated; but it still takes someone who understands what they are making to decide what it ought to be and to “sanity check” the software… One of my favorites is the “compiler compiler”…
https://en.wikipedia.org/wiki/Compiler-compiler
You feed it a language grammar for a computer language, and it generates a compiler for you!
BobN,
Loved your story. You are lucky you only had one Russian. In my (Physics) department at Duke we had several on the staff and as many as six installing equipment made in Novosibirsk. They were really noisy. They would get together to sort out problems and within minutes they were yelling at each other. Kind of scary until you got used to it.
One Christmas we had three quite senior Russian engineers visiting and they made the mistake of looking around a Sears store together. Imagine very sinister looking guys dressed in long black leather coats walking around a busy store. They were arrested and I had to bail them out of jail.
Duke University took a very dim view of the incident. It ended on a happy note when the Mayor of Durham (Sylvia S. Kerckhoff) invited them to the council chambers and presented them with keys to the city.
@ gallopingcamel – a man that can appreciate the culture. I can just see the guys walking around. It’s funny to think about, but at the time I’m sure it was upsetting. A Duke man! Great school, but I love to hate the basketball team, I’m a big Gonzaga man myself.
@ EM – Interesting life work story. It’s amazing how people get pigeon-holed in life. I think everyone goes through a bit of that, but it’s so good to prove them wrong.
I always had trouble with young guys just out of school, they were always convinced they could do firmware in high level languages. At first I would argue with them, but I soon learned that it was easier to just say, I don’t care what language you use, it just has to meet these requirements. Usually you would check in with them a week or two later and they would be busy coding away in assembly. Nothing like looking at the key requirements to help you make decisions.
It was always funny to watch the old timers when the young guns would make those remarks, a bit of a smirk and they would grab their coffee and get busy.
The hardware guys were always so much easier to handle than the software/firmware designers. I always thought it must have been my liking hardware design better, but was never sure. I think programmers are just a different breed. Would love to have had the opportunity to do some psychological testing between the groups and look for trends, I suspect there would be some.