Monster Crushes, PNY freezes, USB Sticks as Home Directory

Those who have been following a while will remember when Tallbloke was raided by the Constables and a bunch of his equipment was taken away for “inspection” to be used against him. It was later returned, without much in the way of a deeply felt apology, IIRC…

This first block is just as a reminder of why I’m doing postings on things that involve preserving your privacy and / or thwarting ham handed police invasions. It is not out of some sense of admiration for criminals, nor any desire to break laws, nor even a tendency to paranoia. I am, by preference and by employment, on the “White Hat” side of things. I have been in an Eagle Scout post where the theme was “Law Enforcement” and we were the traffic cops at the Boy Scout Jamboree. (Also had police training in various other things including sniffing a bit of MACE… I suggest avoiding it…) I have often worked in computer security and taught a session on Computer Forensics at a college.


But for some minor variations, I fit the same model as Tallbloke did at the time. We occasionally visit each other’s sites, and had the guy / gal who boosted the ClimateGate emails randomly chosen ME instead of HIM as the contact for releasing things, I would have been the one raided. Once a bullet comes that close, you go looking for a bullet-resistant vest and protective behaviours.

So that’s why I do postings like this, even if this is a minor one. We’ve had some on Truecrypt, on making a portable Linux workstation (“dongle pi”), on sockpuppets, on encrypting files, etc. This one is more about moving your data off of the hard disk.

For anyone who wants a trip down memory lane, some links:

The warrant entitles the police to enter and search the premises for “evidence of an indictable offence” referring to section 15 of the Police and Criminal Evidence Act. In its Climategate series, even the Guardian was unable to conclude that there had been a crime. So I wonder how the Detective Inspector came to the conclusion that the computers at Tall Bloke’s residence would provide “material that is likely to be relevant evidence and be of substantial value to the investigation of the offence”.

UK police seize computers of skeptic blogger in England

Anthony Watts / December 23, 2011

UPDATE: 12/21/11 4PM – BBC covers Tallbloke, finally, Richard Black still silent – Norfolk constabulary to share / hand off Climategate investigation, and Greg Laden caves – see below

Dec 14th -The first blogger to break the Climategate2 story has had a visit from the police and has had his computers seized. Tallbloke’s Talkshop first reported on CG2 due to the timing of the release being overnight in the USA. Today he was raided by six UK police (Norfolk Constabulary and Metropolitan police) and several of his computers were seized as evidence. He writes:

After surveying my ancient stack of Sun Sparcstations and PII 400 pc’s, they ended up settling for two laptops and an adsl broadband router. I’m blogging this post via my mobile.

That means his cellphone. In his blog report are all the details, including actions in the US involving WordPress and the US Department of Justice. Jeff Id at The Air Vent also has a report here.

Strange and troubling that they’d seize his computers for comments dropped onto a US service (from the cloud). There wouldn’t be any record on his PCs of FOIA placing the comments; that would be in the server logs.

As I’m fond of saying “It isn’t about me”. In this case, too, it isn’t about me. When Climategate2 first broke I decided it was going to be all over the place and didn’t want to just do more “me too” postings. Still, this will impact me. AND you.

Tallbloke visits here sometimes. That means any forensics / investigation team worth their salt will have looked to see what he was doing and ‘checked out the place’. Realize that you are likely being watched (though also likely to a very small degree; in the grand scheme of things, I’m small potatoes and not very interesting to police activities).

I will be taking (have already taken) protective measures. I can’t say what all of them are, but as of now if my computers are taken, it will have roughly no impact. The backup copy is out of the house and I’m posting this from a Starbucks. (Not paranoid, just it was on the way…) There are also offsite computers available to me. Nuf said. In general I recommend to folks that they make duplicate copies of anything they care about and keep at least one copy off site anyway. Mine tend to live in a vault. It doesn’t cost much and if you have a fire in the house, it is WELL worth it. Ditto the random thief. It is just prudent behaviour.

The USB Stick And SD Adapter

Now the reasons for taking the computer are to search through it for “interesting bits” of crap left lying around by the operating system, and to root through the underwear drawer and dirty laundry hamper of your data looking for evidence to be used against you. Never mind “secure in your papers and effects” (and doubly so for folks outside the USA who don’t even have an ignored constitution to point at…). The dictum is “get old George to sign the warrant, he’ll sign anything” and sort it out later. Or with the new “security” laws, they have the “sign your own” warrants…

So you KNOW that you are not secure in your “papers and effects” and especially not so on your computer disk.

Some of these postings have been about things like using TAILS to do your web browsing for anything that isn’t suited to the front page or 6 O’Clock news (that can be quite innocent… like looking up “How to make a nuke” after reading about Iran in the news and wondering how hard is it, really, and will “the deal” actually make it any harder and what politician is lying the most to you… but maybe not what you want to be explaining to sleepy old George at 2 A.M. after being arrested for terrorist interests in nuclear technology…)

But hardening the OS doesn’t do much good if your home directory is full of saved files on how to build a nuke, and / or how the Israelis are likely to bomb Iran. Encryption can help (sometimes quite a lot) and is an element we will revisit in the future. But one simple thing is just to “not be there”.

In combat (and this IS about a kind of combat: combat for your right to privacy and to be left alone in peace) there is a phrase: “Hit them where they ain’t”. It means to attack your opponent’s weak spot. In Karate, there is an opposing ideal: “Be the empty vessel”. This has many meanings, from be at peace and do not let anger drive your actions, to quiet the mind and let the brain stem and training do the fighting, to don’t be where they hit. (Though the last bit is more Aikido-like, where you flow with the attack and let it pass you by… do not fight with your opponents and they must fight with themselves…)

This posting is about that point of view more than others. Then there is a very long technical bit about testing one bit of hardware…

The Empty Vessel

The goal of a computer hack and the goal of computer forensics is the same.

Break through your defenses, take your information, and use it against you.

Your first line of defense is simply to have the very best defense against system compromise that you can make: from hardened firewalls, to hardened systems, to encrypted system disks, to hard-to-crack passwords. (A passphrase that is long but easy to remember is better than one that is short, complicated, and forgotten… so “I love my daughter her child and my wife more than life itself.” is better than “!F*7G$d#th”.)
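To put rough numbers on that claim, here is a back-of-envelope sketch. It assumes a ~30 symbol alphabet for a lowercase-plus-punctuation phrase and ~94 printable ASCII symbols for the “complex” password; those alphabet sizes are my assumptions, not measurements:

```shell
# Back-of-envelope brute-force comparison. Assumptions: ~30 usable
# symbols for a plain phrase, ~94 for full printable ASCII.
phrase="I love my daughter her child and my wife more than life itself."
short='!F*7G$d#th'

# bits of raw search space = length * log2(alphabet size)
bits() { awk -v n="$1" -v s="$2" 'BEGIN { printf "%.0f", n * log(s) / log(2) }'; }

echo "phrase: ${#phrase} chars, ~$(bits "${#phrase}" 30) bits of search space"
echo "short:  ${#short} chars, ~$(bits "${#short}" 94) bits of search space"
```

By this crude measure the long phrase wins by a very large margin, even with the smaller alphabet. (Real attackers use dictionaries, so the true margin is smaller, but length still dominates.)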

But none of that does much good if the Gendarmes have broken your door down, have you at gun point, have taken your computer and disks and run off with it all. Perhaps even demanding that you give over the pass phrase or you will be rotting in jail for years for just saying “Um, but I’m an innocent private person with rights, so, no. Thank you.”

In that case, it is convenient to have the hard disk full of things that you really don’t give a damn about. For me, it’s a canonical collection of Linux releases, images from various web sites about weather and climate, and a jpg / gif / png archive of most stuff I’ve ever put in a posting. Along with a bunch of crap accumulated over the years of no particular value, like my invoices for decades to companies that have gone away for things like being their electronic janitor… that the IRS already has. So squeeze me enough, I’ll “give it up”.
By design.

Now, were I to be doing anything I wanted private (Lord I wish my life was that interesting…) I’d have it off in a small patch of “something” that could be easily missed in just such a “search and steal… er, seize” operation. At one employer, I was aware they were fond of archiving “all your stuff” for a very long time and, frankly, if I downloaded a copy of Tails, I didn’t want that in their “WTF Moment Archives” if someone searched them. So I put my “home directory” under their Windows 7 OS on an external plug-in SD card. A 64 GB card worked very nicely and all my stuff left with me at the end of each day. I’ve been at companies where, overnight, the Uber Management decided “all contractors must go” and you had zero opportunity to pick up even your toothbrush and coffee cup; forget the canonical collection of saved emails praising your work… so “my stuff” was always in my pocket when I was not at the keyboard. This is also generally good practice: not leaving things where someone can get to the keyboard and copy them.

That, having worked remarkably well, had me thinking it was a great way to go for my home Linux box. It would also mean the end of 5 different home directories on 4 machines… and the maintenance headaches that come with them. (And since I use the “browser du jour” on each, at about 4 per machine (Opera, IceXxx, FireFox, Epiphany, Konqueror, Midori, …), that can run out to 20 sets of “favorites” or “Bookmarks” just in the one room(!)…)

Except the home machine didn’t have a nice SD slot in it. And the nice little adapter was not something I wanted tied up all day… (My two other, bigger, older ones don’t work on Linux for unclear reasons…)

No Worries, I thought, I’ll just use a USB Stick…

In either case, you simply take “your stuff” and pull it from the machine when you are not at the keyboard. (Remember to unmount it first!)
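A minimal “pull it cleanly” sketch of that step; the mount point /media/stick is an example, not a fixed name:

```shell
# Flush pending writes and unmount before yanking the stick.
eject_stick() {
  mp="$1"
  sync                                    # push buffered writes out to the device
  if grep -qs " $mp " /proc/mounts; then  # only unmount if actually mounted
    umount "$mp"                          # fails ("device busy") if files are open
  fi
}

eject_stick /media/stick
```

If umount complains the device is busy, something (a shell, a browser) still has a file open there; close it first, or the last writes may be lost on the yank.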

In the case of a knock (or crash / bang) at the door, you reach over and yank.

For the SD card solution, a micro-SD card (used in an adapter) is about the size of my little fingernail. The ways that can be made to disappear are beyond counting: from the cuff of pants, to the cheek-tooth space, to a large ear canal, to a potted plant, to a drop into the air vents on the heater… or even just into your shoe.

It makes finding it a matter of searching the entire space with a sieve.

Not something your typical warrant says is OK to do.

Were I doing this “seriously”, I’d be using a Class 10 or “Ultra” speed micro-SD card. But I’m not that serious…

So I used a USB “stick” about the size of my little finger to the first joint.


I’ve not (yet) tested speed and performance on an encrypted USB or SD “disk”. If you really, really have important stuff to keep secure, and are not just doing it to pee in the Constable’s tea, you WILL encrypt that file system. Then yanking it from the slot and power-failing the box makes it useless to anyone else in any case. In some future test, I’ll get to testing speed on encrypted versions.
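For reference, a sketch of where that future test would start: LUKS via cryptsetup, the usual whole-partition encryption on Linux. The device /dev/sdb4 and the mapper name “stick” are example values only, it must be run as root, and luksFormat destroys whatever is on the partition:

```shell
# Sketch only, not tested here: encrypt a stick partition with LUKS.
cryptsetup luksFormat /dev/sdb4     # one time: create the encrypted container
cryptsetup open /dev/sdb4 stick     # prompts for the passphrase
mkfs.ext2 /dev/mapper/stick         # file system lives inside the container
mount /dev/mapper/stick /mnt        # then use it like any other disk

# When done (or before yanking):
umount /mnt
cryptsetup close stick              # key leaves the kernel; data unreadable
```

Once closed (or power-failed), the partition is just noise without the passphrase.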

For now, this is more about the convenience of “taking it with me” and “security by obscurity” of it being unclear where the data live and where the stick has gone.

Comparative Sticks

There’s nothing like a side by side comparison to tell you what comes up short.

I already had experience from the 64 GB micro-SD chip (in adapter) on a higher end Windows box. It was FINE. Didn’t notice a thing.

I expected the same in the move to the Stick. I have a couple of them, and as the two small ones were in use already, decided to use a new one. Off to Walmart…

(The 4 GB very old stick has a CentOS build on it and seemed to work fine, the 16 GB newer PNY with convenient wrist strap is used for moving things around and is the file truck du jour where speed is not important but the wrist strap is ;-)

So I bought a new, but very cheap, 32 GB PNY tiny little thing for something like $16 “on sale”. This looks like it:

Keep your important information safe and secure with the PNY 32GB USB Flash Drive. Its futuristic design offers a sleek look. The slim feature is a convenient way to store and share files. The 32GB flash drive is compatible with most PC and Mac laptop and desktop computers. Take it to the office, school and anywhere in between. Attach it to your key ring, book-bag or any other accessory to help meet your needs.

PNY 32GB Metal Attache USB Flash Drive:

Capacity: 32GB

USB 2.0 interface

Thin capless, metal design

Metal attache USB 2.0 flash drive is compatible with most PC and Mac laptop and desktop computers

Included key fob is perfect to attach to key chains, backpacks and more

That it is USB 2.0 didn’t matter as my machines are all 2.0 at this time (with the exception of the Chromebox… I think… but this would not be used there.)

I moved onto it. Easy enough. Just:

cd /home/{yourdir}; tar cf - . | (cd /mounted/name && tar xvf -)

in Unix / Linux terms. Then edit /etc/passwd and point your home directory there.
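For the /etc/passwd side of that: the home directory is the 6th colon-separated field, and usermod -d (run as root) will make the edit for you. A sketch of the same change on a sample line; the user name and paths are examples only:

```shell
# The 6th field of a passwd entry is $HOME. Sample line only; on a real
# system, "usermod -d /mounted/name chiefio" (as root) does this edit.
line='chiefio:x:1000:1000:Chiefio:/home/chiefio:/bin/bash'
newline=$(echo "$line" | awk -F: -v OFS=: '{ $6 = "/mounted/name"; print }')
echo "$newline"
# -> chiefio:x:1000:1000:Chiefio:/mounted/name:/bin/bash
```

Log out and back in for the new home directory to take effect, and make sure the stick is mounted before you do, or you land in a nonexistent $HOME.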

I was all set to be a Happy Camper.

So I was somewhat surprised when things were noticeably slower…

The Chrome Browser is chatty. It tells you things like when it is hitting cache. It keeps copies of pages you have visited in .cache in your home directory (I think… I’ve not verified what it is caching by actually looking in the files). So when running Chrome, I’d see “waiting for cache” at the bottom a lot. IceApe and IceWeasel (almost the same browser) would also take long pauses, but didn’t say why.

It was all just notably prone to “taking a break” from seconds to a minute or ???


Was Linux unsuited to this? Was the EXT3 file system with journals and 2 or 3x the number of writes the issue? (write log that you will do it, write file, write log that you finished doing it). What was the deal?

I’ll save the non-techies the pain of the details. Those are below.

The short form is:

The PNY USB stick has a sporadic “take a break” that can run from a fractional second to a minute. It does this often. It does it on NTFS as well as EXT Linux file systems (but not so much on FAT32) and is not suited to this use.

A “Monster” brand stick that is about the same in raw speed was vastly better in actual testing and use. Moving onto it makes everything run fine. I’m using it now, and “no problems.”

Please note: I am NOT endorsing the following link. They just have the drive in question and I’ve not found it at Amazon or Walmart.

Mine is Pink. So I’ve named it “Pink Monster”… It is ONLY descriptive, OK? Not some clever inside joke (snicker) and not at all a play on (snort) words at ALL (giggle)… I got it at Walgreen’s on sale for something like $16? likely because folks didn’t look at the package long enough to figure out that the “micro” end with a cap over it was only ONE end and the other end fits a regular USB. The color they had was pink, ok? Well, they had black too, but most of my other ones are black and I didn’t want to be trying to pick them out… I really wanted blue… honest!

Now this is a 3.0 stick. But as my machines are 2.0, that ought not to be the issue. Also, the typical times are about the same between the PNY and the Monster. So I think I’m safe in saying the PNY “has issues”.

In short: WHICH USB stick you buy can have a drastic impact on your experience. I would avoid the PNY and use the Monster (and others only after testing).

The Technical Test and Details

The “test rig” is an old Compaq Evo (real Compaq, not after the HP merger ;-) so quite old) with 2 GB of memory, a 1.x GHz processor, and a hard disk. More details if folks really want them. But this being about the USB drives… other than it being a 2.0 box running the Devuan variant of Debian, I’m not seeing how it matters.

First off, what do the two sticks’ file systems look like as mounted:

Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/sda4       10230780 3968216   5742864  41% /home
/dev/sdb1        1021984       4   1021980   1% /CF32
/dev/sdb3        5119996   26204   5093792   1% /CNTFS
/dev/sdb4       23871356 2702116  19956620  12% /Chiefio
/dev/sdc1        2043984       4   2043980   1% /T1
/dev/sdc3        5119996   26204   5093792   1% /T2
/dev/sdc4       20734580   44996  19636300   1% /T3

The PNY was already in use, so it is the “sdb” group: /CF32 (FAT32), /CNTFS (NTFS), and /Chiefio (EXT3).

The “Monster” was mounted as /T1 (FAT32), /T2 (NTFS), and /T3 (EXT2). I’d originally set out to test EXT2 vs EXT3 (with journaling), which is why those two don’t match.

You can see how NTFS and EXT both use some disk for preparation that FAT32 does not: the /T2, /T3 and /CNTFS partitions are all “empty” yet have blocks taken, while both FAT32 partitions show nearly none used. /Chiefio is full of my home directory, so shows real use (about 12%, which ought not to influence the tests).

The sda4 /home is on the real hard disk. It’s the gold standard. Both USB sticks have a FAT32 partition on sdX1, a Linux swap on sdX2, and NTFS on sdX3.

Then the PNY has an EXT3 partition on sdb4 while the Monster has an EXT2 on sdc4.

We can use the comparative FAT or NTFS speeds to compare the two bits of USB hardware. The EXT partitions will compare relative EXT type speeds.

First up, the hard disk benchmark.

The benchmark makes a file with 100 bytes in it.
It then copies that file 10 times into another file to make 1000 bytes.
Then copies that 100 times to make a file with 100,000 bytes in it.
Finally, that 100kB file is copied 10 times to make a 1 MB file.

This tests a mix of reads, writes, and for hard disks, seeks.

Here is a test on the real disk. First up, we look at the file system type on the disk:

root@EVOdebian:/home/chiefio# file -s /dev/sda4
/dev/sda4: sticky Linux rev 1.0 ext3 filesystem data, UUID=18332e98-280a-4d0a-be25-277a43b6153a (large files)

We can see it says “ext3” in the description. So a journaling linux file system.

I’d done a change of home directory to be in this file system (so /home/chiefio).
The script prints out date stamps at various points; these turn out to be less useful than expected.

chiefio@EVOdebian:~$ pwd
/home/chiefio
chiefio@EVOdebian:~$ time MB

Tue Aug 11 13:26:16 PDT 2015

Tue Aug 11 13:26:16 PDT 2015

Tue Aug 11 13:26:17 PDT 2015

Tue Aug 11 13:26:17 PDT 2015

Tue Aug 11 13:26:17 PDT 2015

-rw-r--r-- 1 chiefio chiefio     100 Aug 11 13:26 OneH
-rw-r--r-- 1 chiefio chiefio  100000 Aug 11 13:26 OneHT
-rw-r--r-- 1 chiefio chiefio 1000000 Aug 11 13:26 OneMillion
-rw-r--r-- 1 chiefio chiefio    1000 Aug 11 13:26 OneTh

real    0m0.615s
user    0m0.016s
sys     0m0.116s

So, an elapsed time of 0.6 seconds for all that file creation and bit moving. That’s our benchmark: we want things near the half-second mark or a little more.

Next up, the script that does it.

chiefio@EVOdebian:/Chiefio/SPEEDTEST$ cat /usr/local/bin/MB
#!/bin/sh
date

# Start by putting 100 Char into a file

cat /dev/null > OneTh
cat /dev/null > OneHT
cat /dev/null > OneMillion

echo "123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789" > OneH
date

# Then make 10 copies of that into another file for 1000 chars

for i in 0 1 2 3 4 5 6 7 8 9
do
        cat OneH >> OneTh
done
date

#Make 100 x copies of One Thousand for 100 thousand

for i in 0 1 2 3 4 5 6 7 8 9
do
        for j in 0 1 2 3 4 5 6 7 8 9
        do
                cat OneTh >> OneHT
        done
done
date

# Make 10 copies of 100 Thousand for 1 Million Bytes.

for i in 0 1 2 3 4 5 6 7 8 9
do
        cat OneHT >> OneMillion
done
date

ls -l One*

Why do it this way? Well, first off, it is easy ;-)

But it also mixes a variety of reads and writes of different sizes.

And here is a ‘run’ of it against the matching EXT3 file system “USB Stick”:

Doing the Speed Test on EXT3 with Journaling. PNY brand 32 GB stick.

chiefio@EVOdebian:~$ cd /Chiefio/SPEEDTEST/
chiefio@EVOdebian:/Chiefio/SPEEDTEST$ df .
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/sdb4       23871356 2703212  19955524  12% /Chiefio
chiefio@EVOdebian:/Chiefio/SPEEDTEST$ file -s /dev/sdb4
/dev/sdb4: sticky Linux rev 1.0 ext3 filesystem data, UUID=c34ebac6-e950-4a84-842a-f5091efe123f, volume name "chiefio" (needs journal recovery) (large files)

chiefio@EVOdebian:/Chiefio/SPEEDTEST$ time MB

Tue Aug 11 13:16:13 PDT 2015

Tue Aug 11 13:16:13 PDT 2015

Tue Aug 11 13:16:13 PDT 2015

Tue Aug 11 13:16:13 PDT 2015

Tue Aug 11 13:16:13 PDT 2015

-rw-r--r-- 1 chiefio chiefio     100 Aug 11 13:16 OneH
-rw-r--r-- 1 chiefio chiefio  100000 Aug 11 13:16 OneHT
-rw-r--r-- 1 chiefio chiefio 1000000 Aug 11 13:16 OneMillion
-rw-r--r-- 1 chiefio chiefio    1000 Aug 11 13:16 OneTh

real    0m0.589s
user    0m0.036s
sys     0m0.092s

After the test: at 0.589 s the elapsed time is quite good. At this point, the difference in day-to-day feel between the hard disk and the PNY stick is hard to explain.

How about an EXT2 file system on the Monster 32 GB Stick?

Doing the Speed Test on EXT2 file system:

chiefio@EVOdebian:~$ cd /T3/EXT2speedTest/
chiefio@EVOdebian:/T3/EXT2speedTest$ df .
Filesystem     1K-blocks  Used Available Use% Mounted on
/dev/sdc4       20734580 46092  19635204   1% /T3
chiefio@EVOdebian:/T3/EXT2speedTest$ file -s /dev/sdc4
/dev/sdc4: sticky Linux rev 1.0 ext2 filesystem data (mounted or unclean), UUID=34e83e01-f370-4efc-b00e-e27afde9d44e, volume name "ChiefioEXT" (large files)
chiefio@EVOdebian:/T3/EXT2speedTest$ time MB

Tue Aug 11 13:16:20 PDT 2015

Tue Aug 11 13:16:20 PDT 2015

Tue Aug 11 13:16:20 PDT 2015

Tue Aug 11 13:16:20 PDT 2015

Tue Aug 11 13:16:20 PDT 2015

-rw-r--r-- 1 chiefio chiefio     100 Aug 11 13:16 OneH
-rw-r--r-- 1 chiefio chiefio  100000 Aug 11 13:16 OneHT
-rw-r--r-- 1 chiefio chiefio 1000000 Aug 11 13:16 OneMillion
-rw-r--r-- 1 chiefio chiefio    1000 Aug 11 13:16 OneTh

real    0m0.516s
user    0m0.032s
sys     0m0.096s

Hmmm…. Only slightly faster. So what explains the noticeable pauses when running a browser out of the PNY stick? They happen when, for example, Chrome is waiting for a file from /{homedir}/.cache, per the message it puts up.

Looking at the time on the FAT32 partitions:

Here’s the Monster stick:

root@EVOdebian:/T1# time MB

Tue Aug 11 13:39:01 PDT 2015

Tue Aug 11 13:39:01 PDT 2015

Tue Aug 11 13:39:01 PDT 2015

Tue Aug 11 13:39:01 PDT 2015

Tue Aug 11 13:39:01 PDT 2015

-rwxr-xr-x 1 root root     100 Aug 11 13:39 OneH
-rwxr-xr-x 1 root root  100000 Aug 11 13:39 OneHT
-rwxr-xr-x 1 root root 1000000 Aug 11 13:39 OneMillion
-rwxr-xr-x 1 root root    1000 Aug 11 13:39 OneTh

real    0m0.547s
user    0m0.016s
sys     0m0.088s

and then the PNY Stick:
(Due to the permissions when mounted I had to run this one as root)

root@EVOdebian:/CF32# time MB

Tue Aug 11 13:40:42 PDT 2015

Tue Aug 11 13:40:42 PDT 2015

Tue Aug 11 13:40:43 PDT 2015

Tue Aug 11 13:40:43 PDT 2015

Tue Aug 11 13:40:43 PDT 2015

-rwxr-xr-x 1 root root     100 Aug 11 13:40 OneH
-rwxr-xr-x 1 root root  100000 Aug 11 13:40 OneHT
-rwxr-xr-x 1 root root 1000000 Aug 11 13:40 OneMillion
-rwxr-xr-x 1 root root    1000 Aug 11 13:40 OneTh

real    0m0.521s
user    0m0.028s
sys     0m0.076s

A couple of hundredths of seconds. Not significant, IMHO.

For completion, the two NTFS partitions on those Sticks:

First, the PNY 32 GB stick.

chiefio@EVOdebian:/CNTFS$ pwd
/CNTFS
chiefio@EVOdebian:/CNTFS$ time MB

Tue Aug 11 13:42:33 PDT 2015

Tue Aug 11 13:42:33 PDT 2015

Tue Aug 11 13:42:33 PDT 2015

Tue Aug 11 13:42:34 PDT 2015

Tue Aug 11 13:42:34 PDT 2015

-rwxrwxrwx 1 root root     100 Aug 11 13:42 OneH
-rwxrwxrwx 1 root root  100000 Aug 11 13:42 OneHT
-rwxrwxrwx 1 root root 1000000 Aug 11 13:42 OneMillion
-rwxrwxrwx 1 root root    1000 Aug 11 13:42 OneTh

real    0m0.782s
user    0m0.032s
sys     0m0.112s

Then the Monster 32 GB NTFS partition. Let’s look at the partition layout first:

/dev/sdc3        5119996   26204   5093792   1% /T2
/dev/sdc4       20734580   45212  19636084   1% /T3
root@EVOdebian:/T1# cd ../T2
root@EVOdebian:/T2# file -s /dev/sdc3
/dev/sdc3: sticky DOS/MBR boot sector, code offset 0x52+2, OEM-ID "NTFS    ", sectors/cluster 8, Media descriptor 0xf8, sectors/track 32, heads 64, hidden sectors 8194048, dos < 4.0 BootSector (0x80), FAT (1Y bit by descriptor); NTFS, sectors/track 32, sectors 10239999, $MFT start cluster 4, $MFTMirror start cluster 639999, bytes/RecordSegment 2^(-1*246), clusters/index block 1, serial number 0595a52d55f6e31ef; contains Microsoft Windows XP/VISTA bootloader BOOTMGR

You can see “NTFS” in the /dev/sdc3 line if you scroll to the right… (Since this one was not mounted with NTFS in the path name, I thought I’d document it this way.)

root@EVOdebian:/T2# time MB

Tue Aug 11 13:44:45 PDT 2015

Tue Aug 11 13:44:45 PDT 2015

Tue Aug 11 13:44:45 PDT 2015

Tue Aug 11 13:44:45 PDT 2015

Tue Aug 11 13:44:45 PDT 2015

-rwxrwxrwx 1 root root     100 Aug 11 13:44 OneH
-rwxrwxrwx 1 root root  100000 Aug 11 13:44 OneHT
-rwxrwxrwx 1 root root 1000000 Aug 11 13:44 OneMillion
-rwxrwxrwx 1 root root    1000 Aug 11 13:44 OneTh

real    0m0.712s
user    0m0.012s
sys     0m0.096s

Almost identical. The PNY has slightly longer NTFS times while the FAT32 times are slightly shorter. Either “random variation” in the test conditions as other things take precedence (like top or the terminal X server getting serviced temporarily), or just below the level of accuracy.

Doing a couple of runs in a row on the same stick shows variation of about 0.02 s in the hundredths place, so I’d not attribute significance to anything finer than that.

real    0m0.737s
user    0m0.028s
sys     0m0.104s

real    0m0.756s
user    0m0.048s
sys     0m0.088s

real    0m0.756s
user    0m0.024s
sys     0m0.116s

real    0m0.738s
user    0m0.024s
sys     0m0.116s
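For a quick sense of that normal run-to-run spread, here are the mean and range of those four “real” times, computed with a small awk one-liner (the values are copied from the runs above):

```shell
# Mean and range of the four "real" times listed above, via awk.
stats=$(printf '%s\n' 0.737 0.756 0.756 0.738 |
  awk 'NR == 1 { min = $1; max = $1 }
       { s += $1; if ($1 > max) max = $1; if ($1 < min) min = $1 }
       END { printf "mean %.3f range %.3f", s / NR, max - min }')
echo "$stats"
# -> mean 0.747 range 0.019
```

A range of ~0.02 s on a ~0.75 s run is the noise floor here; anything inside it is not a real difference.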

Overall conclusion? USB sticks are fast enough for multiple read / write cycles on small to medium files, with variation between sticks of less than the testing precision.

NTFS is a comparatively slow file system. Differences between EXT2 and EXT3 exist, but are not dramatic. This might change with repeated small-file writes and journal updates.

Need a new test for that.
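That new test could be as simple as timing a burst of tiny file writes, which is where a journal does its extra bookkeeping. A sketch; run it with the target directory on the stick under test (/tmp/sf_test is just an example path):

```shell
# Small-file stress sketch: write n tiny files into dir, then report time.
small_files() {
  dir="$1"; n="$2"
  mkdir -p "$dir"
  i=0
  while [ "$i" -lt "$n" ]; do
    echo "data $i" > "$dir/f$i"   # each write creates its own tiny file
    i=$((i + 1))
  done
}

start=$(date +%s)
small_files /tmp/sf_test 1000
end=$(date +%s)
echo "1000 small files in $((end - start))s"
rm -rf /tmp/sf_test
```

On an EXT3 (journaled) partition each of those writes carries journal overhead, so a stick that chokes on metadata updates should show up clearly here.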

FAT32 times and EXTx times are not dramatically different.

Here’s another relatively fast time sample EXT2 run on the Monster stick:

real    0m0.522s
user    0m0.028s
sys     0m0.080s

Something odd happened with the next runs, on the PNY stick. It “paused”, rather like I’d seen during browser use. Here’s one of the worst runs of the batch:

real    0m6.459s
user    0m0.032s
sys     0m0.100s

Note that the “real” time has whole seconds in it, not the 0.x of all the others.
Some oddity from the PNY, or what?

Who knows.

But in several tests in a row on the Monster, not a single pause. In a half dozen on the PNY, two pauses of about 6 seconds each.

I’m going to swap to the Monster for a while and see if the browser pauses end.

(These are mostly my live notes. I actually have now swapped to the Monster stick and at this time it is fine as my working directory.)

Interesting…. Did some speed tests in the other filesystem types on the PNY.

The NTFS partition gave me one run of about 2 seconds, then a long boring series of normal 0.7-ish times.

Then this:

chiefio@EVOdebian:/CNTFS$ time MB

Tue Aug 11 14:07:49 PDT 2015

Tue Aug 11 14:07:49 PDT 2015

Tue Aug 11 14:07:49 PDT 2015

Tue Aug 11 14:07:50 PDT 2015

Tue Aug 11 14:07:50 PDT 2015

-rwxrwxrwx 1 root root     100 Aug 11 14:07 OneH
-rwxrwxrwx 1 root root  100000 Aug 11 14:07 OneHT
-rwxrwxrwx 1 root root 1000000 Aug 11 14:07 OneMillion
-rwxrwxrwx 1 root root    1000 Aug 11 14:07 OneTh

real    0m2.645s
user    0m0.060s
sys     0m0.088s
chiefio@EVOdebian:/CNTFS$ time MB

Tue Aug 11 14:07:52 PDT 2015

Tue Aug 11 14:07:52 PDT 2015

Tue Aug 11 14:07:52 PDT 2015

Tue Aug 11 14:07:53 PDT 2015

Tue Aug 11 14:07:53 PDT 2015

-rwxrwxrwx 1 root root     100 Aug 11 14:07 OneH
-rwxrwxrwx 1 root root  100000 Aug 11 14:07 OneHT
-rwxrwxrwx 1 root root 1000000 Aug 11 14:07 OneMillion
-rwxrwxrwx 1 root root    1000 Aug 11 14:07 OneTh

real    0m0.744s
user    0m0.020s
sys     0m0.124s
chiefio@EVOdebian:/CNTFS$ time MB

NINE tests with 0.72 to 0.77 or so skipped

chiefio@EVOdebian:/CNTFS$ time MB

Tue Aug 11 14:09:25 PDT 2015

Tue Aug 11 14:09:25 PDT 2015

Tue Aug 11 14:09:25 PDT 2015

Tue Aug 11 14:09:26 PDT 2015

Tue Aug 11 14:09:26 PDT 2015

-rwxrwxrwx 1 root root     100 Aug 11 14:09 OneH
-rwxrwxrwx 1 root root  100000 Aug 11 14:09 OneHT
-rwxrwxrwx 1 root root 1000000 Aug 11 14:09 OneMillion
-rwxrwxrwx 1 root root    1000 Aug 11 14:09 OneTh

real    1m4.521s
user    0m0.032s
sys     0m0.108s

Yes, that last one is ONE MINUTE and 4.5 seconds. Sheesh.

Notice, though, that the mid-script date stamps show very little elapsed time. For some reason, the PNY is not giving the final acknowledge-and-close back to the script for a long time.

Same thing showed up in the FAT32 partition, though not as strongly:

Tue Aug 11 14:13:26 PDT 2015

Tue Aug 11 14:13:26 PDT 2015

Tue Aug 11 14:13:26 PDT 2015

Tue Aug 11 14:13:26 PDT 2015

-rwxr-xr-x 1 root root     100 Aug 11 14:13 OneH
-rwxr-xr-x 1 root root  100000 Aug 11 14:13 OneHT
-rwxr-xr-x 1 root root 1000000 Aug 11 14:13 OneMillion
-rwxr-xr-x 1 root root    1000 Aug 11 14:13 OneTh

real    0m1.938s
user    0m0.032s
sys     0m0.072s

Methinks PNY “has issues”… More so with non-FAT partitions, but issues nonetheless.

Also of interest: I’ve copied the PNY partition (EXT3) over to the Monster (EXT2), and there is a significant size difference on the media, but “du -ks *” gives the same sizes for all directories in each:

/dev/sdb4       23871356 2702112  19956624  12% /Chiefio
/dev/sdc1        2043984       4   2043980   1% /T1
/dev/sdc3        5119996   26252   5093744   1% /T2
/dev/sdc4       20734580 2571596  17109700  14% /T3

I have to presume the extra space used on the EXT3 copy (roughly 130 MB, per the df numbers above) is the journal…

In Conclusion

It is not possible to know how any given USB stick will perform with a live file system on it unless you test it. At $10 to $30 each, that can get expensive for an individual. The Linux community needs to do some benchmarks and publish them.
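In lieu of published numbers, a crude first-pass screen can be done with dd before trusting a new stick. TARGET below is an assumption; point it at a directory on the mounted stick:

```shell
# Crude raw-write screen with dd. conv=fsync makes dd wait until the data
# has actually reached the device before reporting a rate.
TARGET="${TARGET:-/tmp}"    # example default; set TARGET to the stick's mount
dd if=/dev/zero of="$TARGET/ddtest.bin" bs=1M count=16 conv=fsync
rm "$TARGET/ddtest.bin"
```

This only measures sustained sequential writes, so it would not have caught the PNY’s sporadic stalls; for that, repeat a real-workload test (like MB above) several times in a row and watch for outliers.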

Me? I’m sticking with Monster and SD cards (where the speed class is printed on them) until further testing is done and published. Any new brand will have one full test before more instances are bought.


About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present “hot buttons” are the mythology of Climate Change and ancient metrology; but things change...
This entry was posted in FOIA Climategate, Tech Bits. Bookmark the permalink.

17 Responses to Monster Crushes, PNY freezes, USB Sticks as Home Directory

  1. Larry Ledwick says:

    Another little bit regarding the PNY usb devices. I occasionally get pny usb or sd cards when the brand I normally use is not available. I just recently picked up a new camera and in reading up on the camera I came across a notation not to use even the class 10 PNY sd chips in this camera. I have 3 different cameras which all need the fast sd cards to avoid filling the buffers when taking rapid fire shots. The PNY Elite Performance 64 GB card (90 MB/s) works fine in 2 of the cameras but the new camera will not even recognize the chip and format it. I have no such problem with the SanDisk Extreme Pro 64GB chips (95 MB/s). They work fine in all three cameras.

    PNY is doing something odd (perhaps not following RFC’s) on their chip for a top line Nikon camera to not like their product. Just an FYI.
    I will stick with the SanDisk Extreme pro chips from now on, and will not buy or use these PNY’s except in non-critical applications.

  2. Kent Gatewood says:

    Do smart phones retain information?

    If yes, can the phone be left clean with the information transferred to a stick?

    I carry a pre-smart phone, does it retain anything of interest?

    We have an old rotary dial phone. Is it still usable, and would it have any advantages in thwarting a search?

    OT: at what age would you start introducing a child to computers?

    Given all the fascinating stuff you post, the bunnies always pop in my mind. How are they?

    This may be the equivalent of actors sharing the stage with children and animals.

    Thank you for your posts.

  3. E.M.Smith says:


    All good stuff, thanks! So it looks like it is a generic issue even into their SD cards. Hmmm….

    I generally bought SanDisk USBs up until the “Cruizer” type came out ( Windows 8 compatible, but not working well in other things… I have exactly ONE of them, and it only works in the Chromebox…) and have been happy with their SD cards (most of my fast / big cards are theirs).

    I did buy a variety of cheap ones for the old Nikon that only takes 2 GB cards and the old Kodak that is a P&S POS… Some of them seem fine in more pressed usage (like R.Pi and in an adapter as file systems…). Kingston, Patriot. Don’t know if I even have a PNY SD card, but I likely do. I have a lot of them ;-) Quickly checking….

    Looks like one 16 GB SD, and 2 x 8 GB micro-SD. Interestingly, all with ’empty’ notes in their cases… I tended to only use them for unimportant things anyway, and tended to use the Kingston and Sandisk for anything that mattered. (The Patriot were largely bought in a blister pack by the 1/2 dozen dirt cheap when the 2GB limit on the old Nikon was out of alignment with a world gone to 16 GB entry points… and I couldn’t find much of the good 2 GB still around. So used only in an old slow camera where they seem fine.)

    Maybe I ought to speed test my inventory ;-)

    Interesting to note that all the inventory of 32 GB and 64 GB chips of any kind tend to be SanDisk and high speed ones… I seem to allocate money for “good, even if expensive” and “cheap, if the loss on a bad buy won’t matter”…

    Since I occasionally want to do “destructive testing” type things, I think the PNY will work well for that ;-) Like maybe putting swap on one of them and seeing how long it lasts…. (though occasionally taking a 1 second time out in swap might be a deal killer…)


    You are most welcome for the posts. I just hope they help or entertain someone somewhere.

    OK, some of my opinions:

    First, and most important, the Bunnies. I’m now down to the Last Bunny. As they typically only live a few years, just not having new batches of little ones lets the herd shrink fairly rapidly. Butterscotch is a panda pattern with butterscotch colored patches, and with the Mini-Rex coat, while also smaller. That had been my breeding goal at the start, so “we got there”. A lovely bunny. But little interest in anyone else to continue the line, and I needed to be preparing for life “on the road” more. Now that the kids have all moved out, if the wife and I go anywhere, there is nobody to tend livestock. At her present age, I think it’s about 1 or 2 years left until no bunnies.

    She has two whole yards to herself (the “play yard” in front is fenced off from the rest of the front yard – done originally for the kids – and connects with the back yard garden). She can sometimes be found under the Tangelo tree out front enjoying the “pasture” under it. Often she is under a “bean arbor” between two garden squares. Also enjoys lounging under the hammock near the water dish and sometimes goes into hiding (somewhere… it wouldn’t be hiding if I knew where it was…) probably under the tool shed that has a bunny hole on at least one side.

    Daily I tend the food dish, water, and pick some bean leaves and collard leaves for her. She has transitioned from a ‘free born near wild’ bunny to ‘pet’ and now comes up and asks to be petted and given a bean leaf treat. It is one of the high points of accomplishment, IMHO, that I’ve been accepted. Friend by choice, not by cage.

    So you can picture me, about 2 pm most days, sometimes first thing in the morning, making a tour of the garden, seeing what to pick, plant, or water; and making a choice selection of leaves for Butterscotch as she keeps the space under the hammock under guard ;-)

    Kids and Computers:

    As soon as they show interest. There are lots of kids programs and games. IIRC mine had their own (Apple II?) at about 5 years old? Something like that. Other kids seem to start a bit earlier and some quite later. I don’t think it matters all that much.

    Basically, get some very young age games, and see if the kid is interested. If not, set them down with the other toys and try again in a few months.

    The rotary phone stores nothing and is useless to a searcher. The phone company does have your call records and will know who you called and when (but not which person in the family).

    I have a modestly dumb pre-smart phone. It knows my call history ( who and when) and my phone book (who I contact). That’s about it. Since the Telco has the call history anyway, I’m not seeing a big exposure increase from it. It also does store a very limited set of text messages, but the Telco ships those to the NSA and holds them for police anyway… So your only real exposure is to hackers or someone stealing the phone and them getting your contacts.

    Smart phones CAN be cleaned, and often have a micro-SD card slot where you can transfer / store information prior to wiping the phone.

    That said, a smart phone is just a hand held computer with a lousy keyboard and too small a screen, but pretty good audio. It has all the exposures and risks of any computer. You are 100% dependent on the Operating System provider to keep you and your data safe (and we know the two majors – Google with Android and Apple with iOS – are in bed with PRISM and the NSA). You are already “pre-hacked” but the Gov’t thinks “security by obscurity” will keep you safe from everyone but them… They are wrong. I’ll not be buying a smart phone at any time in the foreseeable future. I *may* build my own phone. (I’ve made a kind of ‘kit phone’ that was large and clunky and used it via WiFi and it worked, but “needs a lot of polish” comes to mind). There is a project to make a really secure phone underway by folks better than me, so I’m mostly waiting for them. Think “Tails on a phone” kind of thing…

    NEVER EVER put financial information into your phone.

    OK, now back to work… I have some more data on the PNY chip that will come shortly in a new comment…

  4. E.M.Smith says:

    I made a variation on the test called GB that makes a BillionByte file. It has even more dismal times on the PNY as it more regularly gets caught by ‘whatever’ it is. Watching “top” while it ran, I could see both the journal update program and even “cat” spending more time in disk wait whenever the lags hit.

    IMHO, the chip goes off to do some kind of ‘housekeeping’ and is not doing the main job of feeding data during that time. As these cards basically have a tiny little computer in them (sometimes just a 4 bit word size one…) and some folks have even hacked in and gotten the Linux prompt from them… I think the PNY folks either have a very weak controller chip driving the memory or programming that is ‘not so swift’ and instead of doing maintenance when otherwise idle goes off to do things like ‘leveling’ whenever it damn well pleases…

    The new data:

    The testing program basically just got another set of nested loops at the end to make an even bigger target file. Big enough to let you watch while it was running and see what is happening. I then ran “top” in another window – it shows you what processes are running, which are in I/O wait state, and how much of the CPU time is spent I/O waiting. I.e. I could assure myself it was “disk wait” on the chip and not some other program taking resources.

    Finally, as the file grew, I had an “ls” in another window track the growth. It is NOT linear. It has surges and pauses. Sometimes the PNY is fast, sometimes taking a break…

    The results:

    chiefio@EVOdebian:/T3/ST$ while true
    > do
    > ls -l OneBillion
    > sleep 30
    > done
    -rw-r--r-- 1 root root 114000000 Aug 12 14:01 OneBillion
    -rw-r--r-- 1 root root 149483520 Aug 12 14:02 OneBillion
    -rw-r--r-- 1 root root 150728704 Aug 12 14:02 OneBillion
    -rw-r--r-- 1 root root 232275968 Aug 12 14:03 OneBillion
    -rw-r--r-- 1 root root 241291264 Aug 12 14:03 OneBillion
    -rw-r--r-- 1 root root 312340480 Aug 12 14:04 OneBillion
    -rw-r--r-- 1 root root 337809408 Aug 12 14:04 OneBillion
    -rw-r--r-- 1 root root 394252288 Aug 12 14:05 OneBillion
    -rw-r--r-- 1 root root 413466624 Aug 12 14:05 OneBillion
    -rw-r--r-- 1 root root 497840128 Aug 12 14:06 OneBillion
    -rw-r--r-- 1 root root 517509120 Aug 12 14:06 OneBillion
    -rw-r--r-- 1 root root 570761216 Aug 12 14:07 OneBillion
    -rw-r--r-- 1 root root 651661312 Aug 12 14:07 OneBillion
    -rw-r--r-- 1 root root 684617728 Aug 12 14:08 OneBillion
    -rw-r--r-- 1 root root 713740288 Aug 12 14:08 OneBillion
    -rw-r--r-- 1 root root 790573056 Aug 12 14:09 OneBillion
    -rw-r--r-- 1 root root 873627648 Aug 12 14:09 OneBillion
    -rw-r--r-- 1 root root 885739520 Aug 12 14:10 OneBillion
    -rw-r--r-- 1 root root 925200384 Aug 12 14:10 OneBillion
    -rw-r--r-- 1 root root 1000000000 Aug 12 14:11 OneBillion
    -rw-r--r-- 1 root root 1000000000 Aug 12 14:11 OneBillion

    The start of this is me typing a “while do” loop after the GB script had been running a while. Then you get a “ls -l” or long file listing of the “One Billion” file as it is being built, one every 30 seconds. They OUGHT to get about the same amount bigger from sample to sample. They don’t.

    Look at the 2nd and 3rd line down. It goes from 149 MB to 150 MB. The next one goes from 150 MB to 232 MB. So a 1 MB increment, then an 82 MB increment. Then 9 MB. 71 MB. 25 MB. 57 MB. 19 MB. 84 MB. 20 MB. etc.

    Spectacularly unpredictable and irregular. Now during some of those ‘slow’ episodes, I could see individual programs like “cat” and the journal writer stuck in disk wait on the chip. It was doing “something” but that wasn’t servicing disk requests… (or “data” requests since it really isn’t a disk, it just pretends to be one…)

    Here’s the base case off of the Monster brand:

    Wed Aug 12 13:36:31 PDT 2015

    -rw-r--r-- 1 chiefio chiefio 1000000000 Aug 12 13:36 OneBillion
    -rw-r--r-- 1 chiefio chiefio        100 Aug 12 13:35 OneH
    -rw-r--r-- 1 chiefio chiefio     100000 Aug 12 13:35 OneHT
    -rw-r--r-- 1 chiefio chiefio    1000000 Aug 12 13:35 OneMillion
    -rw-r--r-- 1 chiefio chiefio       1000 Aug 12 13:35 OneTh

    real 1m24.056s
    user 0m2.120s
    sys 0m7.016s

    I thought it was a bit slow at 1 min 24 seconds… but boy was I wrong. I briefly thought maybe I ought to cut the test back to fewer “rounds”… then I tested the PNY:

    Wed Aug 12 13:58:57 PDT 2015
    -rw-r--r-- 1 root root 1000000000 Aug 12 13:58 OneBillion
    -rw-r--r-- 1 root root        100 Aug 12 13:52 OneH
    -rw-r--r-- 1 root root     100000 Aug 12 13:52 OneHT
    -rw-r--r-- 1 root root    1000000 Aug 12 13:52 OneMillion
    -rw-r--r-- 1 root root       1000 Aug 12 13:52 OneTh
    All Done
    real	6m41.130s
    user	0m0.464s
    sys	0m9.089s

    (Doing an “ls -l” on the files lets us check that the time stamps match expectations…)

    Yes, SIX MINUTES. But wait, there’s more… But first, notice that the OneBillion file has a time stamp of 13:58 which is 6 minutes later than the 13:52 on everything else. Clearly the sloth is happening during the file writing, not after in some kind of script hang event.

    I then immediately reran it.

    Wed Aug 12 14:11:03 PDT 2015
    -rw-r--r-- 1 root root 1000000000 Aug 12 14:11 OneBillion
    -rw-r--r-- 1 root root        100 Aug 12 14:01 OneH
    -rw-r--r-- 1 root root     100000 Aug 12 14:01 OneHT
    -rw-r--r-- 1 root root    1000000 Aug 12 14:01 OneMillion
    -rw-r--r-- 1 root root       1000 Aug 12 14:01 OneTh
    All Done
    real	9m26.250s
    user	0m0.504s
    sys	0m9.321s

    Now we’re up to NINE MINUTES for what ought to take one.

    Just amazing.

    Conclusion? Use the PNY only for small files in situations where you don’t expect to move a lot of data quickly. “Archival” sounds about right… Or “kids toy”… On second thought, I’d not inflict this on a kid…

  5. E.M.Smith says:

    Well, this is interesting…

    I’d done a test of EXT3 on the Monster some time back and it was acceptable (though not with this test, so not directly comparable). Up to now I’d assumed the file system type wasn’t a factor, since the times were so far apart.

    But maybe it is tickling an idiosyncrasy?…

    I just reformatted the PNY partition from EXT3 (with journaling recovery) to EXT2 (no journal). The two filesystem types are nearly identical other than that (by design to make conversion easy).
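    For reference, the journal can also be dropped in place rather than reformatting, since EXT3 is essentially EXT2 plus the journal. Here is a sketch done against a loopback image file so nothing real is at risk (sizes and file names are illustrative; on the real stick you would run tune2fs against the unmounted partition device instead):

```shell
# make a scratch "partition" in an ordinary file instead of a real stick
dd if=/dev/zero of=stick.img bs=1M count=64 2>/dev/null
mkfs.ext3 -q -F stick.img               # start life as EXT3 (journaled)
tune2fs -O ^has_journal stick.img       # strip the journal: now EXT2 on disk
tune2fs -l stick.img | grep 'features'  # has_journal should be gone now
```

    The same `tune2fs -O ^has_journal` works in the other direction too (`-O has_journal` adds one), which is why conversion between the two is painless.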

    So re-ran the test on the PNY. Surprise surprise…

    Done Making One Billion
    Wed Aug 12 16:29:51 PDT 2015
    -rw-r--r-- 1 chiefio chiefio 1000000000 Aug 12 16:29 OneBillion
    -rw-r--r-- 1 chiefio chiefio        100 Aug 12 16:28 OneH
    -rw-r--r-- 1 chiefio chiefio     100000 Aug 12 16:28 OneHT
    -rw-r--r-- 1 chiefio chiefio    1000000 Aug 12 16:28 OneMillion
    -rw-r--r-- 1 chiefio chiefio       1000 Aug 12 16:28 OneTh
    All Done
    real	1m36.968s
    user	0m2.824s
    sys	0m6.380s

    You got it… 1 1/2 minutes.

    30 second growth jumps? 304 MB, 314 MB, …

    So something in the PNY really doesn’t like EXT3 for large files? Or what?

    “Someday” I’ll need to test the Monster for large files on EXT3 also. As it stands, the test is unclear at this point. Was my prior assessment of “not much difference” between EXT2 and EXT3 based on too small a file and data transfer size? Or is the difference only the PNY?

    Tune in next episode for further exciting adventures of USB Drive Man!… ;-)

    Just be advised that on a ‘re-run’ it took longer:

    Wed Aug 12 16:40:38 PDT 2015
    -rw-r--r-- 1 chiefio chiefio 1000000000 Aug 12 16:40 OneBillion
    -rw-r--r-- 1 chiefio chiefio        100 Aug 12 16:37 OneH
    -rw-r--r-- 1 chiefio chiefio     100000 Aug 12 16:37 OneHT
    -rw-r--r-- 1 chiefio chiefio    1000000 Aug 12 16:37 OneMillion
    -rw-r--r-- 1 chiefio chiefio       1000 Aug 12 16:37 OneTh
    All Done
    real	3m29.752s
    user	0m2.536s
    sys	0m7.172s

    So over a ‘double’ run to run. Not nearly as bad as with EXT3, but not what you want for predictable performance…

    Now I’m doing a 3rd ‘re-run’ and it is even slower… I think we “have clue” now. Something in the “garbage collection” and / or “leveling” code is horrid. It has 20 GB to play with, so could just slam in the new GB and be done; but it looks like recent read/write/delete in the GB range makes it go get busy with “cleaning up” and give short service to immediate requests… So it works FAST, for a little while, then slows down as the workload goes up and as more housekeeping stacks up. In short, it doesn’t have its priorities straight…

    Wed Aug 12 16:48:10 PDT 2015
    -rw-r--r-- 1 chiefio chiefio 1000000000 Aug 12 16:48 OneBillion
    -rw-r--r-- 1 chiefio chiefio        100 Aug 12 16:42 OneH
    -rw-r--r-- 1 chiefio chiefio     100000 Aug 12 16:42 OneHT
    -rw-r--r-- 1 chiefio chiefio    1000000 Aug 12 16:42 OneMillion
    -rw-r--r-- 1 chiefio chiefio       1000 Aug 12 16:42 OneTh
    All Done
    real	6m1.250s
    user	0m2.328s
    sys	0m7.408s

    And we’re now back up to SIX minutes, in regular steps… Even on the EXT2 file system.

    The PNY just doesn’t like a lot of data moving around and changing…

  6. beng135 says:

    Thanks. Copied your script to see if I can get it running…

  7. E.M.Smith says:


    Here’s the bit I added at the bottom to make it do a Billion:

    echo Making One Billion

    for i in 0 1 2 3 4 5 6 7 8 9
    do
    for j in 0 1 2 3 4 5 6 7 8 9
    do
    for k in 0 1 2 3 4 5 6 7 8 9
    do
    cat OneMillion >> OneBillion
    done
    done
    done

    echo Done Making One Billion

    ls -l One*
    echo All Done

    You will notice I put some comment “echos” to say when starting and finishing the Billion just before the “date” markers.

    I also left the 3 levels of loop un-indented. Typically those are pretty printed with indents like the others, but for a quick “add a chunk” it is easier to just “copy, paste, change i to j; next”
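    Putting the pieces together: the full script never got posted, so here is a sketch of what the complete thing plausibly looks like, with the smaller-file stages reconstructed from the sizes in the listings (OneH = 100 bytes on up through OneBillion = 10^9; this is my reconstruction, not the original):

```shell
#!/bin/sh
# Size-ladder test file builder. Each stage concatenates the previous
# file ten times, so every file is an exact power of ten in size,
# matching the "ls -l" listings in the post.
rm -f OneH OneTh OneHT OneMillion OneBillion

for i in 0 1 2 3 4 5 6 7 8 9
do
echo "123456789" >> OneH              # 10 bytes per line (newline included)
done

for i in 0 1 2 3 4 5 6 7 8 9
do
cat OneH >> OneTh                     # 1,000 bytes
done

for i in 0 1 2 3 4 5 6 7 8 9
do
for j in 0 1 2 3 4 5 6 7 8 9
do
cat OneTh >> OneHT                    # 100,000 bytes
done
done

for i in 0 1 2 3 4 5 6 7 8 9
do
cat OneHT >> OneMillion               # 1,000,000 bytes
done

echo Making One Billion
for i in 0 1 2 3 4 5 6 7 8 9
do
for j in 0 1 2 3 4 5 6 7 8 9
do
for k in 0 1 2 3 4 5 6 7 8 9
do
cat OneMillion >> OneBillion          # 1,000,000,000 bytes
done
done
done
echo Done Making One Billion
date

ls -l One*
echo All Done
```

    Run it under “time” while sitting in a directory on the chip under test, e.g. “cd /T3/ST; time sh GB”.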

  8. E.M.Smith says:

    It is worth noting that the GB version of the script can easily be extended, and that you can make an exponential growth disk consumption script just by doing something like:

    for i in 0 1 2 3 4 5 6 7 8 9
    do
    cat A >> B
    cat B >> A
    done

    Coupling these things together (i.e. make a Gig sized file, then do an exponential copy) can let you “knock together” a script in a few minutes that will completely scrub all empty space on a disk. I regularly do this when I’m deleting a bunch of crap from a disk. First delete all of it, then do a GB and expcopy and when that dies for being out of disk space… delete it all.

    Then anyone fishing in your deleted data blocks gets a very long string of digits from 0 to 9 (or whatever you put in there with the first echo “Don’t Waste Your Time It Is Deleted” >> First.file command…)
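    Spelled out as an actual script, that idea looks something like this (a sketch only, not my original long-hand version; the seed/filler names and the CAP dry-run limit are additions so it can be tried safely):

```shell
#!/bin/sh
# Fill free space with recognizable trash, then delete the trash.
# CAP (in bytes) limits growth for safe dry runs; set CAP=0 to run
# until the disk really is full (the actual scrub use).
CAP="${CAP:-1000000}"
echo "Don't Waste Your Time It Is Deleted" > seed
while :
do
cat seed   >> filler || break     # appends start failing at disk-full
cat filler >> seed   || break     # sizes grow roughly 2.6x per pass
if [ "$CAP" -gt 0 ] && [ "$(wc -c < seed)" -ge "$CAP" ]
then
break
fi
done
echo "scrubbed roughly $(( $(wc -c < seed) + $(wc -c < filler) )) bytes"
sync                              # force everything out to the device
rm -f seed filler                 # hand the overwritten blocks back
```

    With CAP=0 the loop ends only when a cat fails for lack of space, which is exactly the “run until it dies, then delete it all” behavior described above.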

    The list of things after the “for” doesn’t need to be the digits in order. It takes any values and will just stuff them into the variable ( “i” in the above example) so that in the loop you can use $i in whatever process you are doing. So instead of 10 items, you can just list 20 or 40 if you want the thing to take fewer nested loops. I use 0 to 9 and nest loops more just from habit.

    I’ve often made a “disk scrubber” on the fly that does something like

    cp /usr/bin/gcc junk

    (Or use any large convenient file full of stuff that isn’t of any interest to forensics)

    This puts about 1/2 MB in the first file up front. Then the script:

    for j in 1 2 3 4 5 6 7 8 9 0 a b c d e f g h i j k l m n o p
    do
    cat junk >> junk2
    cat junk2 >> junk
    cat junk >> junk2
    cat junk2 >> junk
    cat junk >> junk2
    cat junk2 >> junk
    cat junk >> junk2
    done

    Now this starts with 1/2 MB that gets duplicated into junk2. That gets added to junk (making it 1 MB) and that gets added to junk2 making 1.5 MB; then junk goes to 2.5, junk2 to 4, junk to 6.5, and junk2 to 10.5 MB.

    In one quick bit, you have 10.5+6.5 or 17 MB used in the first pass of the script. Now the next pass starts doing doubles on that… and this continues for 26 rounds in this case. Huge, and I doubt you will get to the end before the disk is full.

    Since the script part never clears the files, only uses >> to append, you can run the script as many times as needed and each time starts from a larger start size for junk and junk2 … Say you named this script “doubler”, just typing:

    doubler & doubler & doubler & doubler & doubler & doubler & doubler & doubler &

    will launch 8 instances of it into the background. (Each “&” says put that command into the background and keep going; with “;” separators they would instead run one at a time, with only the last one backgrounded.) Between them, that will be an astounding number of bytes.

    I think you can see that the whole thing could be typed in and launched in under 2 minutes by “an experienced operator” and then you can just walk away from the machine knowing that the disk is going to lock up Real Soon Now…

    There are more elegant ways to fill a disk with trash, but this was my first one and I still have a fondness for it ;-)

    The one used in testing the SD cards / USB sticks is somewhat over long due to my desire to have the files be exact multiples of 1000 in size ;-)

    FWIW, on any new machine, one of the things I typically install with my set of personalized commands is an MB command that makes a Megabyte file, a GB command that makes a GB file, and some kind of exponential GB duplicator to fill all free space with a single nibble typed at the keyboard. (‘nohup’ tells a command to ignore ‘hangup’ signals, like being logged off of a terminal.) So put “nohup filldisk &” in a command that points to something like the above, and say you called it “bye”: now you could type “bye” at your keyboard, then ‘fix’ that by typing ‘exit’, and know that all free space on your disk was going to be overwritten and that someone watching over your shoulder would see little.

    One enhancement of great power and certain to cause you damage:

    rm is the “remove” command. rm takes options. The -r option says to recursively descend into any directory and keep removing all the way to the bottom. The -f option says to “force” it, and don’t ask any questions. Now couple that with the “.” file name (it means “my current directory”), and you can remove all files all the way to the bottom of any directory you are in. “rm -rf .” will destroy any files and directories to which you have write access from that point down. (Many current versions of rm refuse a bare “.” operand, in which case “rm -rf ./*” plus the hidden files does the same job.) This means you could ‘enhance’ your “bye” script by putting “cd ; nohup rm -rf .” as the first line. At that point, by typing “bye”, you will remove ALL your files and directories, then fill all freespace with trash. Use sparingly…

    BTW, if someone is very very serious, they can sometimes recover the stuff written just before a regular overlay write was done on magnetic disk, but that takes really hard core effort. Agency kind of stuff. IFF you have that kind of issue, it takes more than this. You need to re-write each patch of disk a few times to homogenize the traces… and with more random data stream. There are “shredder” programs for that… this is more of a “quick and dirty good enough most of the time” for quickly cleaning up behind yourself. (it also doesn’t work if the power plug gets pulled as it takes time to erase and re-write 10 GB of disk… )
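    For that “really hard core” case there is a standard shredder in the GNU toolset, shred (best-effort only on flash, where wear leveling can keep stale copies of your data around no matter what you write over the file):

```shell
# demonstrate on a throw-away file:
echo "sensitive stuff" > secrets.txt

# shred overwrites the file in place: -n 3 gives three passes of random
# data, -z adds a final pass of zeros to hide that shredding happened,
# and -u truncates and removes the file when done
shred -n 3 -z -u secrets.txt
```

    On journaled or copy-on-write file systems shred’s own documentation warns the overwrites may not land on the original blocks, which is another reason the “fill all free space” approach above still earns its keep.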

    Also, do note that if you type “rm -rf ./*” it will remove all the named files in your current working directory (not starting with a . like .cache hidden files); BUT, if you have an accidental space: “rm -rf . /*” this will remove your current working directory (the dot) including things like .cache, then start at “/” the root of all files, and remove absolutely everything (*) killing the box if you have root privileges. Be VERY VERY VERY careful doing “rm -rf” at any time, but especially as root… and don’t lean on the space bar…

    So make absolutely sure you type the right thing into any scrubber script… and don’t blame me… In particular, that “cd ; nohup rm -rf .” really ought to be “cd /home/[mydir]; nohup…” since a bare “cd” might take you somewhere unintended if, for example, you were logged in as ‘root’ or had done an ‘su’ to some other user name… Make sure you know where your fingers are located when using power saws and explosives…


    Also, note that if you put this as a script in one of the directories that you are deleting, it will be gone at the time you go to use the ‘fill up’ parts of it… so ‘bye’ needs to be somewhere like /tmp, or you need to customize the rm part of it to leave ‘bye’ and ‘doubler’ alone when doing the delete… which is part of why I usually do this ‘long hand’ if I have the time; but if not, you need to have thought through what gets removed just before you will be using it…
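    One way to package that up while dodging the foot-guns above (a sketch only: the bye_demo directory is a stand-in so it can be tried safely, and in real use you would point “doomed” at your own home directory and chain the free-space filler afterward):

```shell
#!/bin/sh
# Demonstrate the "bye" delete step on a scratch directory.
doomed=./bye_demo                   # in earnest: /home/yourname
mkdir -p "$doomed/sub"
echo "secret" > "$doomed/sub/file"

# delete by explicit path from OUTSIDE the tree: no bare "cd" surprises,
# and a script living in /tmp never deletes itself mid-run
rm -rf "$doomed"

# ...in the real script, follow with something like:
#   nohup /tmp/doubler > /dev/null 2>&1 &
```

    Naming the doomed directory explicitly, rather than whatever “.” happens to be, is the whole safety margin here.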

  9. E.M.Smith says:

    OK, some more testing on the PNY USB stick. I decided to try a ‘typical’ unpack of a gzipped tarball archive and restore the data to a partition. This is partly to do just that and put back on the stick the things I’d packed up to test the Monster (which has now replaced the PNY, while the PNY goes off to a low demand FTP archive)… and partly to speed test things on different partition types.

    The results continue a trend of poor performance on non-FAT file systems, but with a surprise. NTFS is extraordinarily sucky.

    First off, I did a base case test against the hard disk. This is in some ways a ‘worst case’ as you have disk head seeks of increasing size as the thing has to shuttle back and forth between the large archive tarball and the destination write area. Watching “top”, it was not CPU limited on the zcat unpacking decompression step.

    First, we can see that the ‘gzipped’ archive is about 2 GB. A nice size for testing.

    chiefio@EVOdebian:~$ cd /home
    chiefio@EVOdebian:/home$ ls -l Pink.gz 
    -rw-r--r-- 1 root root 1928066757 Aug 11 12:00 Pink.gz

    Now we do the ‘zcat’ that uncompresses it into a tar extract in a “junk” directory in the same file system. the “..” says go up one directory; then get the file Pink.gz and send it through the command zcat to unpack it. “time” gives us time statistics for execution.

    root@EVOdebian:/home/junk# time zcat ../Pink.gz | tar xf -
    real    3m13.179s
    user    0m20.637s
    sys     0m20.285s

    So on real hard disk with head seeks, it takes a bit over 3 minutes to unpack and restore this compressed tarball. Uncompressed size is also about 2 GB as the archive mostly consists of already maximally packed stuff.

    I formatted the PNY into 3 new partitions. Each about 10 GB. In order, FAT32, NTFS, and EXT2.

    I then timed the unpack / restore into each of them. The results are a real surprise to me.

    First up, the worst of the batch. NTFS. Normally this is a little bit slower than the rest, but in this case, it was startling. I’d done the exercise on the FAT32 and it was similar to the 3 minute real disk time.

    root@EVOdebian:/home# time zcat Pink.gz | (cd /T2; tar xf - )
    real    43m15.450s
    user    0m22.665s
    sys     0m33.134s

    Yes, you read that right. 43 minutes. Almost 3/4 of an HOUR. For a 2 GB file restore. The PNY has “issues” and they are not just with the EXT file system types. In fact, EXT2 was much better. Still sucky compared to what it ought to be, but not this sucky.

    root@EVOdebian:/home# time zcat Pink.gz | (cd /T3; tar xf - )
    real    8m12.827s
    user    0m21.445s
    sys     0m19.257s

    So eight minutes. About 3 x worse than it ought to be, but about 6 x better than NTFS. Sheesh.

    On to the “watching the file grow” logs. These mostly let you look for inconsistency of speed over time.

    Here’s the ‘as run size monitor’ for the FAT32 run. You can see that it was already at 1 GB by the time I got 5 trivial lines of text typed. Then it ‘steps up’ by about 300 MB per 30 second iteration to the end. Or about 600 MB / minute until full at 2 GB.

    chiefio@EVOdebian:/home$ while true
    > do
    > df /T1
    > sleep 30
    > done
    Filesystem     1K-blocks    Used Available Use% Mounted on
    /dev/sdc1       10229992 1286120   8943872  13% /T1
    Filesystem     1K-blocks    Used Available Use% Mounted on
    /dev/sdc1       10229992 1574128   8655864  16% /T1
    Filesystem     1K-blocks    Used Available Use% Mounted on
    /dev/sdc1       10229992 1850344   8379648  19% /T1
    Filesystem     1K-blocks    Used Available Use% Mounted on
    /dev/sdc1       10229992 2060232   8169760  21% /T1
    Filesystem     1K-blocks    Used Available Use% Mounted on
    /dev/sdc1       10229992 2060232   8169760  21% /T1

    The EXT2 logs show some wandering between about 20 MB and 200 MB ( so 40 MB / min to 400 MB / min) at each increment.

    chiefio@EVOdebian:/home$ while true
    > do
    > df /T3
    > sleep 30
    > done
    Filesystem     1K-blocks   Used Available Use% Mounted on
    /dev/sdc3       10768640 106796  10114824   2% /T3
    Filesystem     1K-blocks   Used Available Use% Mounted on
    /dev/sdc3       10768640 123216  10098404   2% /T3
    Filesystem     1K-blocks   Used Available Use% Mounted on
    /dev/sdc3       10768640 249908   9971712   3% /T3
    Filesystem     1K-blocks   Used Available Use% Mounted on
    /dev/sdc3       10768640 567364   9654256   6% /T3
    Filesystem     1K-blocks   Used Available Use% Mounted on
    /dev/sdc3       10768640 845528   9376092   9% /T3
    Filesystem     1K-blocks   Used Available Use% Mounted on
    /dev/sdc3       10768640 846348   9375272   9% /T3
    Filesystem     1K-blocks   Used Available Use% Mounted on
    /dev/sdc3       10768640 883044   9338576   9% /T3
    Filesystem     1K-blocks   Used Available Use% Mounted on
    /dev/sdc3       10768640 899916   9321704   9% /T3
    Filesystem     1K-blocks   Used Available Use% Mounted on
    /dev/sdc3       10768640 964296   9257324  10% /T3
    Filesystem     1K-blocks    Used Available Use% Mounted on
    /dev/sdc3       10768640 1005812   9215808  10% /T3
    Filesystem     1K-blocks    Used Available Use% Mounted on
    /dev/sdc3       10768640 1096096   9125524  11% /T3
    Filesystem     1K-blocks    Used Available Use% Mounted on
    /dev/sdc3       10768640 1107780   9113840  11% /T3
    Filesystem     1K-blocks    Used Available Use% Mounted on
    /dev/sdc3       10768640 1126616   9095004  12% /T3
    Filesystem     1K-blocks    Used Available Use% Mounted on
    /dev/sdc3       10768640 1408676   8812944  14% /T3
    Filesystem     1K-blocks    Used Available Use% Mounted on
    /dev/sdc3       10768640 1700776   8520844  17% /T3
    Filesystem     1K-blocks    Used Available Use% Mounted on
    /dev/sdc3       10768640 2007120   8214500  20% /T3
    Filesystem     1K-blocks    Used Available Use% Mounted on
    /dev/sdc3       10768640 2069900   8151720  21% /T3
    Filesystem     1K-blocks    Used Available Use% Mounted on
    /dev/sdc3       10768640 2074024   8147596  21% /T3

    Then we have the NTFS logs. I’m going to do this in parts so I can comment on things.

    chiefio@EVOdebian:/home$ while true
    > do
    > df /T2
    > sleep 30
    > done
    Filesystem     1K-blocks    Used Available Use% Mounted on
    /dev/sdc2       10239996   91628  10148368   1% /T2
    Filesystem     1K-blocks    Used Available Use% Mounted on
    /dev/sdc2       10239996  364956   9875040   4% /T2
    Filesystem     1K-blocks    Used Available Use% Mounted on
    /dev/sdc2       10239996  684828   9555168   7% /T2
    Filesystem     1K-blocks    Used Available Use% Mounted on
    /dev/sdc2       10239996  870120   9369876   9% /T2
    Filesystem     1K-blocks    Used Available Use% Mounted on
    /dev/sdc2       10239996  887056   9352940   9% /T2
    Filesystem     1K-blocks    Used Available Use% Mounted on
    /dev/sdc2       10239996  889052   9350944   9% /T2

    It goes from 91 MB to 364 MB “right quick” for about a 273 MB jump (or about 550 MB / minute).
    Next increment is to 684 MB, so about a 320 MB jump, about the same…
    Then about a 186 MB jump… slowing more…
    Then a 17 MB jump… oh oh, hit the skids… and then:

    That last one is only a 2 MB increment. In 30 seconds. That’s a 4 MB / minute rate. I get about 66 kB / second. WTF? We’re talking modem speeds here…

    It then continues on like that, fairly consistently, to the end:

    Filesystem     1K-blocks    Used Available Use% Mounted on
    /dev/sdc2       10239996  892864   9347132   9% /T2
    Filesystem     1K-blocks    Used Available Use% Mounted on
    /dev/sdc2       10239996  894924   9345072   9% /T2
    Filesystem     1K-blocks    Used Available Use% Mounted on
    /dev/sdc2       10239996  896920   9343076   9% /T2
    Filesystem     1K-blocks    Used Available Use% Mounted on
    /dev/sdc2       10239996  913028   9326968   9% /T2
    Filesystem     1K-blocks    Used Available Use% Mounted on
    /dev/sdc2       10239996  915028   9324968   9% /T2
    Filesystem     1K-blocks    Used Available Use% Mounted on
    /dev/sdc2       10239996  916776   9323220   9% /T2
    Filesystem     1K-blocks    Used Available Use% Mounted on
    /dev/sdc2       10239996  923284   9316712  10% /T2
    Filesystem     1K-blocks    Used Available Use% Mounted on
    /dev/sdc2       10239996  927120   9312876  10% /T2
    Filesystem     1K-blocks    Used Available Use% Mounted on
    /dev/sdc2       10239996  929124   9310872  10% /T2

    I stopped the logging part way through this as it was not worth it any more… Maybe it changed later, but I doubt it. The curious can divide 2 GB by 45 minutes (and maybe even take out the first burst) and get the average. I’m happy at “don’t even think about using NTFS on a PNY dongle”…

    And with this, I’m done with testing the PNY. It’s going off to be a fairly static archive of old stuff that gets plugged into my WiFi router and almost never used. (It has things like my canonical collection of Bibles on it… and some other misc interesting but unchanging things like some Linux iso images and such. Nice to be able to grab if needed without looking around, but of zero tendency to change / be re-written, and I don’t mind if someone gets into the WiFi router and steals it… maybe they would benefit from the reading material…)

    I MAY (and likely won’t as I’m pretty sure of the answer) do some similar A/B tests on the Monster and on some SanDisk chips and USBs, but that will be in some future posting as I’m now behind on things I thought I’d have done by now.

    With that petty gripe out of the way: It is of value to have done this, though. It lets me know that I need to never buy a PNY product again, that I need to segregate out the ones I already have for low change duties (and NOT for OS use on the R.Pi or home directories or, really, much of anything…) and it lets me know that I need to TEST any brand (and maybe even model) before attributing performance issues to ALL SD cards or USB dongles.

    It also makes me glad that most of my money for such devices has gone to SanDisk and Monster and Kingston, with only a tiny bit wasted on PNY.

    And finally, it lets me know that I can get decent performance out of the existing stock of PNY chips and dongles by making them all FAT32. I tend to prefer NTFS as it handles time stamps better, ownership better, and large files better. Since I tend to make large compressed tarballs of things, I often need over 4 GB file size and since I use timestamps a lot, that matters. But I’m willing to make these FAT32 for things like the crummy Kodak camera that needs a chip, but not a fast one, and only does FAT32; and as a FAT32 “general purpose mover” to move things between different systems that don’t know NTFS and EXT file system types. So my money on them isn’t wasted, just put into specific uses that will let me avoid wondering why that file move is taking almost an hour…
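For anyone in the same FAT32 boat with big tarballs: the 4 GB limit is easy to live with if you split the archive before it lands on the stick. A quick sketch — the file name is hypothetical and the chunk size is just chosen to sit safely under the limit:

```shell
# FAT32 caps a single file just under 4 GiB, so large tarballs get split
# into chunks before the copy, and rejoined on the other side.
# "backup.tgz" is a made-up example name.

fat32_split() {   # fat32_split backup.tgz -> backup.tgz.part-aa, -ab, ...
    split -b 3500M "$1" "$1.part-"
}

fat32_join() {    # fat32_join backup.tgz  -> reassembles the original
    cat "$1.part-"* > "$1"
}
```

The parts copy to the stick as ordinary sub-4 GB files, and `cat` in suffix order gives back a byte-identical tarball.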

    And now, time for morning coffee… round two 8-)

  10. Larry Ledwick says:

    It has been fun watching you dig into this. I will also be retiring my PNY devices. I accidentally bought some online for backup camera chips as noted above. I have gotten a couple of USB PNY devices as cheap consumer flash devices. Perhaps that intended use is why they don’t care much about large file transfers: they expect that in most cases the devices will be used to save a copy of 5 images for Grandma, and most users will never get into large file transfers. If your images are little 200 kB JPG files, no big deal, but the top line cameras can now write 40 MB files at 3-5 images a second in continuous shooting mode, like you would use to capture fast sports action.

    I do wonder if Nikon intentionally blocks the PNY chips on the new D7200 or if that is only a secondary effect of something PNY does. The older D7000 and the newer D810 will both use the PNY chips no problem, so it has to be something specific on the D7200 that prevents it from even recognizing the chips as valid devices.

    I guess they will become just archive chips to store some backup stuff which won’t be changing much, and I will now intentionally standardize on the SanDisk Extreme Pro 64GB chips (95 MB/s) for my photography. In rapidly changing fast action situations, you only get one chance to get the shot. If the camera hangs for a couple of seconds while the buffer is writing data out to the SD card, you missed the action. (Had that happen many times on older cameras which would fill the buffer at 15 shots taken in continuous mode.)

  11. E.M.Smith says:


    Well, I did a couple of tests on an actual SD card (not the USB “drive”) and it has the same problem, but to a lesser extent:

    root@EVOdebian:/home# time zcat Pink.gz | ( cd /T1; tar xf - )
    real	6m17.344s
    user	0m22.833s
    sys	0m35.806s
    root@EVOdebian:/home# !time
    time zcat Pink.gz | ( cd /T1; tar xf - )
    real	9m54.838s
    user	0m23.117s
    sys	0m34.034s
    root@EVOdebian:/home# !!
    time zcat Pink.gz | ( cd /T1; tar xf - )
    real	10m17.311s
    user	0m22.721s
    sys	0m33.982s

    And given that FAT32 seems sort of OK, I’ve just reformatted mine to all FAT32 and consigned them to the “Oh Yeah, that old stuff” usage bin. Anything important, and all new buys, to be SanDisk or sample tested on other brands. (somehow I sense a series of tests on my current stock of mixed bag brands nagging at me… maybe next week ;-)

    The round of NTFS tests on that SD Card were interesting. Since it is somewhat faster than the stick, the process was CPU bound on the mount.ntfs process that handles NTFS file systems under Linux for the first 1/2 GB, then it swapped to being I/O bound, eventually (rapidly) hitting 98% I/O wait state. Basically you hit a very hard wall fairly quickly. AND it gets worse with each use after new formatting…

    But at this point all my PNYs are off to FAT32 land (except the USB that has 3 partitions on it for now and I’m tired of looking at it ;-) where I might do a speed test off the FTP server with the three partitions… but only a couple of bottles of wine from now ;-)

    Oh Well. Lesson Learned, and now can have benefit for all. (And I’m glad you got a chuckle out of watching me flounder my way through this and figure out where the crap was smelliest… but “it’s what I do”…)

  12. Power Grab says:

    What strategies could help minimize the damages from recent break-ins into systems like the OPM, CVS, Anthem, IRS, etc… aside from simply not having your data in those systems?

  13. E.M.Smith says:

    @Power Grab:

    I likely ought to make a separate posting in answer to that. There’s rather a lot that can be done, and it isn’t well suited to a single quick comment.

    As a rough outline:

    1) Data not in a computer. Have it on removable media. If you can’t do that…

    2) Data in a computer not connected to the internet. If you can’t do that…

    3) Data in a “hardened” computer. This is a long list of things done to it. To start with, it isn’t Microsoft… The Macintosh is much better, and the number of break-ins and hacks far less out of the box, so a home user or business with little support staff can go a long way just by using a Mac. My preferred platform is a hardened BSD based operating system (BSD Unix), but Linux is up there too these days. That, though, requires some skill to set up properly and operate. Much of what I’ve posted here has been about how to make a slightly hardened computer for personal use via things like running from a CD-ROM, using a distinct system for web browsing that isn’t the one your data lives on, using Tails for hard core things, etc. Any computer ought to have ONLY the software it absolutely needs to do its job installed. So no ftp or web browser or even ssh installed on a box that is the email server. It gets the kernel and email and only anything else needed to support it (like a login shell and any languages like perl or python that the email service depends upon). The fewer tools on the box, the more the hacker has to bring with them, and the larger the odds of catching them.

    4) Tuned access control systems. Another long list, starting with token type devices. (We were an early adopter of the “SecurID” cards at Apple a couple of decades back, and it was a significant part of how I had zero break-ins over 7+ years.)
    Use of “two factor” authentication is a big help. The person wanting access needs to know a secret (the password or pass phrase) and have a property (either possession of the token / card, or a physical attribute in biometric authentication). Don’t have it? You don’t get it.

    5) Hack yourself first. Set up an ACTIVE and constantly operating intrusion detection, penetration testing, and activity profiling operation. Include a “honey pot” system and have it fully and specially instrumented to collect data on how others are attempting to gain entry.

    6) Perimeter Defense. I used to put this about #3, but what with everyone and their cousin using full-of-holes desktops (from FLASH to Microsoft to Javascript to…) it is pretty much the rule that you DO have folks “past the perimeter” via everything from phishing to spam emails to ‘click me’ trojans to… But you still need to do it. A Darned Good Firewall, with a REAL DMZ. Only those services in the DMZ subnet can connect out to internet data feeds or provide data (like via FTP). ALL other paths shut. (So your ‘bad guy’ can’t just FTP the data out unless he is in the DMZ…) Proxy server for web services in the DMZ. Etc. All DMZ hosts very hardened, with only the very minimal installed software to do their jobs (preferably on CD-ROM or other ‘read only’ source).
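The “ALL other paths shut” posture boils down to deny-by-default plus a short whitelist. A rough iptables sketch, with the interface roles and DMZ subnet number entirely made up for illustration:

```shell
# Deny-by-default forwarding on the firewall box; only the DMZ subnet
# gets out, and only on the services it is supposed to provide.
# 192.168.2.0/24 is an example DMZ subnet, not a recommendation.
iptables -P FORWARD DROP
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -s 192.168.2.0/24 -p tcp --dport 21 -j ACCEPT   # DMZ FTP out
iptables -A FORWARD -s 192.168.2.0/24 -p tcp --dport 80 -j ACCEPT   # DMZ proxy out
```

Everything not explicitly accepted simply never forwards, which is what stops the ‘bad guy’ on an inside desktop from FTPing your data out.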

    At that point, you have the basics done.

    Then you can start “tricking things out”.

    Set up scripts that run on a regular basis (say, every day at 7 AM and 9 PM) that look at every file on the system. File size, access time, and modification time can all be easily seen. Add a SHA or MD5 hash compute if you have the CPU power. Ship that to a different computer, preferably via a non-obvious means. It compares the values to a locked down copy (i.e. Read Only or CD-ROM) and looks for changes. Part of the list is Things That Never Change: OS files like running computer binaries. If any of them change, it issues a Red Alert to the sys-admins, who immediately start a forensic inspection / response to attack. Others, like the passwd file, do change in normal use, so they go onto more of a “hey, this changed, ok?” list that is reviewed at start of shift… If, suddenly, the passwd file grew by 100%, most likely something is wrong…
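That nightly sweep can be sketched in a few lines of shell. This is a minimal sketch, not anyone’s production setup: the watched directory and baseline location are assumptions, and a real version would also record sizes and timestamps and actually page the admins.

```shell
# Snapshot every file under a directory as "md5  path" lines, sorted so
# two runs are directly comparable.
snapshot() {   # usage: snapshot /bin
    find "$1" -type f -exec md5sum {} + | sort -k2
}

# Compare a fresh snapshot against the locked-down baseline copy; any
# difference is the "Red Alert" case described above.
check_integrity() {   # usage: check_integrity baseline.md5 /bin
    baseline=$1; dir=$2
    if snapshot "$dir" | diff -q "$baseline" - >/dev/null; then
        return 0
    fi
    echo "RED ALERT: files under $dir changed"   # real version alerts the admins
    return 1
}
```

Run `snapshot` once onto read-only media as the baseline, then `check_integrity` from cron twice a day.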

    Make root access only via ‘special means’ like “sudo”. Hack it so that there’s a secret token written to a secret file. Now, all navigation ( cd, pwd ) and inspection (ls, cat, ed, vi) commands get a hack that checks for “Am I Root? If so, do I have the Magic Token set?” Any NO answer causes a Red Alert intrusion and defense response.
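The token check itself is tiny; the real work is the hack of wiring it into every watched command. A sketch, where the token path and the alert action are made up for illustration:

```shell
# Tripwire called at the top of each watched command: root activity
# without the secret token present is treated as an intrusion.
check_token() {   # usage: check_token [tokenfile]
    token=${1:-/var/run/.magic_token}
    if [ "$(id -u)" -eq 0 ] && [ ! -f "$token" ]; then
        echo "RED ALERT: root activity without the magic token" >&2
        return 1   # real version fires the intrusion response
    fi
    return 0
}
```

Legitimate admin sessions write the token via the hacked sudo; anyone who got root some other way trips the alarm on their first `ls`.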

    And on like that.

    Realize that this is only a very top level overview of what can be done. We’re talking about a computer science specialty that fills several books (and that is with the stuff that is published…)

    Now the break-ins you mentioned were all at major institutions. They OUGHT to have a dedicated staff of about 4 to 8 people doing just this stuff (IFF you want 24 x 7 x 365 coverage). Typically nobody does until after they have been hacked and someone gets fired. Even folks who do have that kind of operation often get tired of spending that money and decide to reassign the headcount since “nothing happened”… and get a bonus for more profit and all… until something happens… but they hope that is on the next guy’s watch and he gets fired…

    In short, there are LOTS of things that can be done, and LOTS of companies and agencies not in the news… they are the ones willing to spend the money to make the systems and staff the operations to keep things safe.

    Which means: THE NUMBER ONE thing you need to do is have a culture of security valuation. From the I.T. manager to the grunts, they typically know it matters, but need to be reminded that the rest of management values it too (to avoid thinking that an order to ‘cut costs 10% or else’ means the Big Boss doesn’t care about security…). The user community needs to be told, not asked, told, by upper management: IF the IT folks say something is not secure, we will not provide that service. (I had to fight that battle, often almost alone, for many years…) And the upper management needs to understand that hiring the cheapest guy you can get on an H1b visa to do the minimum possible to get your operation running is NOT being security prepared…

    Value it. Remind folks of that value. Pay for it. Build it. Test it. Test it again every day. Monitor all of it. Do NOT depend on outside companies, especially those in the PRISM program, as they are known to be providing backdoors to their government keepers.

    BTW, anyone wanting that done can hire me. “It’s what I do”… Operations and installations of computers and networks. Usually I hire (or have the company hire if they don’t already have one) better systems jockeys than me to do the hard core bit twiddling and the ACL configs in the routers (Access Control Lists), mostly because I’m usually slower at it than they are and often need to ‘catch up’ on the latest changes since the last contract. I’m usually the PM type (Project Manager). I’ve also been Director Of Information Technology for a few years at some companies. Basically I do ‘bit twiddling’ to keep the skilz up, but mostly am employed to manage the process. Available now. Reasonable rates. Happy to travel anywhere a US passport is valid that is not in or near a war zone or fond of harassing ‘infidels’… Functional tourist level of French, Spanish and maybe Italian (it’s been a while), with a smattering of German (enough to rent cars and get to hotels) and at one time a bit of Swedish (I can read interoffice memos in it, given enough time and a dictionary…). Prefer places with decent beer and wine. Motel 6 or better rooms. (When I’m working I’m usually paying for the room… and not in it much. But hey, someone wants to pop for a Hilton or Ritz it up, I’m OK with that too ;-)

    And while I’m glad Snowden spilled the beans on what was happening, he violated the fundamental Sys Admin’s contract: What happens at the Client’s stays at the Client’s. Like the French reference in My Fair Lady: I don’t care what you do, as long as you say it properly… just don’t expect me to be actively engaged in anything that can get me arrested or shot. (Snowden’s gig had him making about double market rate, and living like a prince in Hawaii, and he flushed that? I admire his sense of duty to whistle blow, but man, I could not walk away from that, nor violate the fundamental Sys Admin contract. Like the old saw goes: “Can you keep a secret? [answer: yes!] Good, so can I.”…)

    Commercial over: Hope that answers your question.

  14. E.M.Smith says:

    I just re-read the question…

    I answered “What can the COMPANIES do to protect the data” and realized it might have meant “What can I, a customer, do to protect myself?”. If the latter: Not much. Once your data leaves your hands and enters their systems, it’s pretty much all theirs to protect, and you have little other than recourse after the fact. (HIPAA suits etc.)…

    You can, I suppose, ask what their data protection policies are, and if enough folks do that they might take it more seriously.

    For things where the name is not a critical element (like medical records) I’ve been known to rotate the middle initial, then when mail comes for “Michael R. Smith” I know who leaked or sold the name list… but aside from that kind of thing, and especially for govt and medical sites, you pretty much are stuck with what they’ve got.

  15. Power Grab says:

    Hmmm…yes…well, I guess my question was a bit ambiguous. Still, I’m glad to have you provide a capsule of valuable strategies. And a commercial. ;-) It’s all good. Thanks!

    Well, take health-related systems, for example. I wonder if it would be possible to have them give me a dump or transcript of the transactions they have logged under my ID.

    I’m thinking that if someone were to have obtained my credentials and happen to be using my identity to receive health care, then I would likely be the best person to identify it. I doubt that anyone on the other side of the desk would be working on that.

    Inaccuracies might not only be caused by illegitimate use, but also by errors. Say a doctor’s office had to convert their paper documentation to electronic form. Who’s to say that even got done accurately?

    At the risk of further hijacking this thread, I’m going to stop now.

    Thanks again for the list of strategies. I must make a hard copy.

    (I like to have hard copies of stuff that I want to continue to have access to even if the power goes out.)

  16. E.M.Smith says:

    @Power Grab:

    Paper copies also don’t have the drive fail, or become obsolete, or the needed software no longer runs or… (Let’s see… just HOW do I read that stuff I saved on my 5 1/4 inch floppies? The 3 1/2 inch floppies? The ones saved in Appleworks format?) Then there are the collection of links to VERY interesting web pages that have gone “404” and been deleted…

    OK: I think HIPAA gives you some kind of rights with respect to inspection of your medical records, if you are in the USA. (Frankly, IMHO, the whole mandate to have electronic records has NOTHING to do with efficiency and everything to do with our Spy Nation intrusive suck-up-all-you-can government.) Outside the USA? Who knows…

    For credit stuff, my simple solution is to not do it. At present, I have exactly zero credit cards. I do have a couple of debit cards. Good for everything so far except for a rental car agency and one hotel in Riverside both of whom demanded a credit card. I paid cash at the hotel… someone else picked up the car and I gave them cash.

    One of them is from Walmart. For $6 you can buy the one with no “loading” charge. Any time, you can put any amount of money onto it at any Walmart (many open to midnight) and it is available in minutes. I use it wherever there is a risk of the number being skimmed… which is anywhere that takes cards… (The $3 or so one has a $3 charge each time you add money… so payback is one loading of the card…)

    If you turn $500 / month through it, it’s free. Less than that, there’s $5 / month deducted (or $60 / year if all months are light). Less than my old AMEX card cost. IFF anyone ever skims the number, I MIGHT be out the $100 or so that’s my typical “emergency money” sitting on it. At which time I get a new one for $6 and start over. Often it is closer to $30 on it if I’ve just spent $70 of that on gas and have not gone to Walmart yet to refill it.

    Now, use THAT card all those places folks want your card number, and you have little to worry about… banking hours better than at banks.

    You CAN pay more and get all sorts of other brands (including AMEX… if you really MUST have an ego boost), but I was fine with the Walmart brand and lower costs.

    Also, get yourself a rental box address from a local Mail Box R Us type company. Use THAT address on anything you can. (Voters reg in Florida didn’t like it, but I suspect more “liberal” States won’t mind ;-) Use it, too, on any registration of a domain name (since that is ‘discover-able’ electronically on line…) and that way anyone fingering “” gets the address of your mailbox service… and not a googlable picture of your front door…

    Now you have protected most of your electronic money transactions to a large degree and you have protected your physical location to some degree ( depending on if you get an address that doesn’t get sniffed out by voter reg or drivers license or…) and when your medical info gets hacked, they get a chance at a $100 lottery at most and a nice idea of where your mail is stored by someone else…

    Finally, it does help a lot if you have a name like John Smith or Jose Gonzales. I came by it naturally, but it is legal in many cases to use a different “stage name” (see all the actors and authors who do it?). Even a minor shift of spelling can be useful. Johnathan vs Jonnathan can show if someone has got your data from a place it ought to have been safe…

    Essentially, since you can’t depend on the security of the place holding the data, you have to ‘fuzz up’ the input. And since you can’t depend on the security of online and even card swipe in store transactions, you need to limit the potential damage. (The spouse has a credit card of low limit used for most things that just must have a credit card, or where the bank liability difference with a debit card matters…. YOU are out of luck if your debit card gets hit, but the bank is SOL if you protest a charge to your credit card… So NEVER use a debit card linked to your big fat checking account where your paycheck is deposited. That card is ONLY used AT THE BANK to get the cash to load the other debit card…)

    PITA? You bet. But this mess was brought to you by design by the bankers and government.

  17. Power Grab says:

    Heh…I’ve already got two of those: a plain vanilla name and a box address that looks like a street address. Also a screen name that looks like general text. :-) (I did that on purpose.)

Comments are closed.