Dear Sony (and any other company wanting security)

I’ve been hoping for a bit more detail on how the “hack” (really a “crack” or break-in) was done. I’m not particularly fond of spouting off about what was done ‘wrong’ when I don’t know just what was done. But details are sparse.

What is known is mostly what was taken. The usual emails. Some other docs, and the cream of it all – movie files.

So what can be done to prevent this kind of thing?

My Background

I’ve spent a good many years in computing, and particularly in computer security. I’ve worked contracts at major companies (like Charles Schwab and Disney) in computer security related areas. For 7 1/2 years at Apple Computer my group kept the Engineering Network secure (and ran “Apple.com” – even when The Internet Worm took down many companies, it did not take US down.) So I have a rather significant degree of experience with keeping companies secure.

A General Approach and Caveat

The caveat is that since the NSA has leaned on folks to leave back doors and exposures in their network equipment and operating systems, it is no longer possible to run commercial software and have real security. That’s not a political statement, or any kind of “position”. It’s just my observation of what the exposure means. Since the ’80s there has been a constant pressure from The Feds to leave things just insecure enough that they could break in, but (hopefully) individuals could not. (Anyone old enough to remember the fight over 56 bit DES – Data Encryption Standard – where they wanted fewer bits for their cracking pleasure? Or the persecution of folks over using PGP – Pretty Good Privacy?) So this is not new. It’s a long standing behaviour.

What has changed is that they have now largely succeeded. Apple ‘caved’ in about 2012 (the Snowden leaks give the exact dates for each major maker ‘giving in’ to the Prism program). Now it is simply vain and arrogant to think that such exposures will not be found by others and exploited by them. That they are not in the news does not mean they have not been found. A smart system cracker keeps his best cracks private.

So first and foremost, you must accept that if you have not “rolled your own” you ARE pre-cracked, buggered, bagged and tagged, and exposed. Period. Full stop. Let that sink in…

The good news is that there is a not-too-hard way out of that. China figured this out and has started rolling out their own official software for Chinese companies. Even better news is that this is based on a BSD or Linux core, with only modest added hardening. That’s the same thing my group was using at Apple. So a ‘quick start’ is just to take a look at what they have done with Kylin. Oh, and it handles kanji (Chinese characters) well ;-)

There are many other folks using the same base for secure computing. I was part of evaluating a secure / hardened Unix at Schwab, and it came from Israel. This is a well known place to look for security.

For a simple starting point, OpenBSD is a security oriented Unix. You can do anything desired on Unix / Linux systems. Some things may be a bit harder than you are familiar with (like running Excel in an emulated PC); but others are just as easy (like using Open Office instead of Excel). Yes, it is different. You can be hacked, cracked and served up as sushi, or you can run your shop securely. You have already seen what happens if you take the easy / cheap path…

Now that, alone, is not enough. Even Unix and Linux can be configured badly and have exposures; so for a major company like you, pop the bucks to hire some decent Unix / Linux guys and listen to them when they say it’s a bad idea to do something… (I’m presently available if you need someone to fix things right. My last contract just ended after 18 months; but there are plenty of folks as good as me and a fair number who are better.)

So, assuming you are converting your core servers to some form of Unix / Linux, have it properly hardened, and are following best practices for things like penetration testing and active monitoring; you are well along. Now your major remaining exposure is all those desktops running Microsoft and the Cisco routers with NSA doors in them that are your “firewall”. What can you do?

An Architectural View

The first thing that entered my mind on hearing of the hack and movie theft was: WHY oh WHY do you have your movie archive on your main corporate network?

Now at Apple, we had a top secret project. I think I can talk about it a bit now, as the time for that tech is long past. We were designing a new approach to desktops that was about 10 years ahead of the time, and about 5 years ahead of the competition. OK, that’s something a LOT of folks would like to get a peek at. We defended it in a fairly simple way. Segmenting the network.

The network was in 4 major ‘chunks’ that were NOT connected to each other in any direct (complete or network level) way. The innermost secure zone was able to log into the secure side of our main compute engine (a Cray Supercomputer) and get to the various services inside our building. The rest of Engineering could connect to the Cray as well, but only via a different network interface and that was automatically put into a “chroot” or “change root” limited compute space. Someone who broke into the main Engineering area would not even see all the very secure area systems or storage. No, it doesn’t need a Cray. Any Unix box will do. We just happened to be using that one. Folks in the secure area could initiate connections outbound, but outside could not initiate them inbound.

Today, with the exposure of browsers and Java in virtual machines, I’d not allow them on desktop machines used for secure projects. Give those folks two computers. They are cheap compared to the risk. One is used for the day to day stuff of email and browsing. The other for secure work like rendering movies and such. An air gap is your friend.
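That chroot-on-login trick is still easy to do today. As a minimal sketch (not what we ran at Apple; the group name and jail path are invented for illustration), modern OpenSSH can drop a whole class of users into a jail the moment they connect, via sshd_config:

```
# /etc/ssh/sshd_config on the shared machine (illustrative names)
Match Group eng-general
    ChrootDirectory /srv/jail/%u
    AllowTcpForwarding no
    X11Forwarding no
```

Anyone in the hypothetical eng-general group lands in their own jail and never even sees the secure side’s file systems; the secure-zone folks come in on a different interface and never hit that Match block.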

Next up, all corporate functions were on their own network. Engineering was a distinct network. If one broke into the Engineering side, they could not get to the Corporate side. (Or the other way). We had an email bridge between them, but not much more than that. (Today I’d likely add some shared file services – again with chroot barriers inside the machines). Now a break in to the tech side does not expose corporate email logs.

So those three segments (secret areas, engineering, corporate) were the major breaks. But we had more. There was a semi-public network. A boundary network, if you will. Apple.com lived in it, as did a few other machines and services. We let most anyone in the company have an account on that machine if they wanted it for wide open internet access (we didn’t allow many services from inside the secret or secure zones… so things like FTP File Transfer Protocol were limited or controlled). Now there was the ability to pull a file from the Apple.com zone into the Engineering zone, so folks could get things, but it was a 2 step process. We also had a fairly slow and limited connection between the two networks so any unusual activity would show up very rapidly on monitoring. Folks could even get email accounts for spouses, if they wanted. Everyone had to agree that they understood this was for Non-secure and often non-work purposes. No important stuff to be left on that machine / network.

Why do that? Honeypot.

That whole network (now often called a DMZ or Demilitarized Zone network) was littered with traps, detectors, and Apple.com was a giant ‘honey pot’. It LOOKED like the right place to attack, but it wasn’t. A whole lot of folks tried. Some even managed to break into it. All the time they were doing that, we were watching. What was their skill set? What wares (tools) did they use? Did they have any new tricks? We had “buggered” all the navigation and inspection commands (things like ls to list files and cd to change directory) to check whether you had become root. If so, it should have been done via a special method that left a marker in a secret spot. If you were root, but not via that method, alarms went off in the systems area and we started watching. Every hack was caught in that honeypot area and none made it to the inside. (This would now fail on things like phishing and using browser weaknesses to get directly to desktops, so one would need a different set of protections against that. See the above statement about having two computers, one on the secure net and one for outside facing activities…)
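The “buggered command” trap can be sketched in a few lines of shell. This is my own reconstruction of the idea, not the original code; the marker path, alert log, and wrapper arrangement are invented for illustration:

```shell
#!/bin/sh
# Wrapper placed ahead of the real ls/cd-style commands on the honeypot.
# A sanctioned path to root leaves MARKER behind; root without it is an alarm.
MARKER="/var/run/.legit_root"        # secret file the approved su wrapper creates
ALERT_LOG="/tmp/root_alerts.log"     # in practice, send this somewhere off-box

check_root_path() {
    # $1: effective uid of the caller (normally "$(id -u)")
    uid="$1"
    if [ "$uid" -eq 0 ] && [ ! -f "$MARKER" ]; then
        # Root, but not via the blessed method: note it quietly.
        echo "ALERT: unmarked root shell at $(date)" >> "$ALERT_LOG"
        echo "suspect"
    else
        echo "ok"
    fi
}

# The wrapper then runs the real command either way, so the intruder sees
# nothing odd:  check_root_path "$(id -u)" >/dev/null; exec /bin/ls.real "$@"
```

The key design point is that the intruder gets a perfectly normal-looking command back; only the watchers see the alert.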

So folks would bang on Apple.com and it was so bad we had custom cut ‘login shells’ for user names “steve”, “jobs”, and “woz”. They would just let you try to log in, grab your screen, print something along the lines of “Go away and come back when you have more skilz” and drop your connection. Dozens a day. But if an ‘interesting’ attack came along, we got time to inspect it. There was a very limited router in the back of that network, named in an obscure way, that only let email and some limited other information go ‘inbound’. (NetNews in the days before browsing, and some log file transfers, mostly.) That router was very strongly locked down, monitored like crazy, and not very powerful. Turns out that was a feature…

The internet worm cracked into Apple.com but was not able to break through that stupid limited router to the inside before it had brought Apple.com to its knees on performance. We later figured out it eventually might have gotten in, but by then our honeypot / limited communications strategy had already brought the staff to alert and we had downed the equipment.

Now not all this is suited to today. It is shown as an example of how to think about things, not as a recipe for you, now. Each site needs its own approach.

The key elements, though, are relevant. Have a honeypot and a honeypot network. Booby trap and rig that thing for so many ways of watching and measuring that nothing much can be changed without someone taking a look. Segment your networks (and not just with segments inside some router… use things that can’t be bypassed with config changes or software – real hard wire segmentation.) Yes, it will cost a bit more and not be as fast or efficient. Security is always a bit more costly and slows things down.

Put your company secrets and most valuable assets where they are NOT connected to the general network. Either on completely isolated networks, or not even on line at all. WHY is a movie on live disks on the network once it is a ‘wrap’? Once it has gone to a master, put those masters off line. Unplug the box or put it on a tape on the shelf. (I never have understood why folks think they need to put things like power plants and dams on the internet. Leave them disconnected, or put them on a private leased line if you must have remote access.)

Put choke points between major segments of business function. Monitor those choke points heavily, and limit what they can do.
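As one hypothetical flavor of such a choke point: a small Linux box hard-wired between two segments, forwarding only mail and logging everything else it drops. The interface names and the “SMTP only” policy here are illustrative, not a recipe:

```
# Corporate segment on eth1, engineering on eth2; pass only SMTP between them.
iptables -A FORWARD -i eth1 -o eth2 -p tcp --dport 25 -j ACCEPT
iptables -A FORWARD -i eth2 -o eth1 -m conntrack --ctstate ESTABLISHED -j ACCEPT
# Everything else gets logged (feed this to the humans watching) then dropped.
iptables -A FORWARD -j LOG --log-prefix "CHOKE-DROP: "
iptables -A FORWARD -j DROP
```

The point is that the box itself is a dumb, slow, heavily watched bottleneck: any attempt to shovel a movie archive through it shows up immediately in the logs and as a performance hit.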

It really isn’t very hard to set up one email server for the executives, another for the general staff, and a third for engineering (and maybe even a 4th for marketing / sales, and…) Put them on their own segments, and now a system cracker has to break into 3 or 4 different machines (and discover them first) to ‘get it all’. They also have to traverse several choke points, find yet more (and sometimes different ;-) security traps and monitors. Eventually they might be able to do it all, but the time it takes drastically increases the odds they will get caught.

Pay some folks to monitor those log files of crack attempts, not just look at them after the attack. We had a twisted pair hard wire from the system console for Apple.com that went to a different building a couple of blocks away where it printed the console log directly to ‘green stripe’ paper. We could have just left it as a patch of disk used for a file. We could have saved a box of paper every few days. Yet, one day, the operator was changing the paper and realized it was printing the log faster than it should have been. A systems guy looked into it and found the machine was under attack. The attacker had nuked the log file on disk… but could not erase the paper log… Our sysprog rapidly saw all that had been done, and owned the guy.

A small cost, a bit of awareness, some more ‘indirect’ ways of doing things. The difference between a catch and being pwned… (now I’d likely send that log to another small log file server with ‘write once’ media).
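That “the log is printing faster than it should” catch can be automated as well. A minimal sketch (file names and the threshold are invented), run from cron every few minutes:

```shell
#!/bin/sh
# Flag an unusual burst of console-log lines since the last sample.
LOG="/tmp/console.log"               # the log being watched
STATE="/tmp/console.log.lastcount"   # line count at the previous sample
THRESHOLD=100                        # max new lines per interval before alerting

log_rate_check() {
    last=$(cat "$STATE" 2>/dev/null || echo 0)
    now=$(wc -l < "$LOG")
    echo "$now" > "$STATE"
    delta=$((now - last))
    if [ "$delta" -gt "$THRESHOLD" ]; then
        echo "ALERT: $delta log lines this interval (limit $THRESHOLD)"
    else
        echo "ok: $delta lines"
    fi
}
```

The paper printer had one property this lacks: it could not be erased by the attacker. Hence the note above about sending the log to a separate box with write-once media; the rate check then runs where the attacker can’t reach it.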

The Hard Nut To Crack

The Desktop is going to be your largest point of grief. Between web browsers being hackable, phishing, the dominance of a known insecure system (Microsoft – aka Virus Central) and users not being so bright… Frankly, I don’t think it can be made secure. Given the MSoft dominance of workspace tools (Office / Word / Excel) and the general I.T. Dept attitude that MS is OK (often under pressure from the V.P. Finance… I’ve been there…) it would be very hard to convince folks to use something else.

Apple Macintosh is based on a Unix core. It’s pretty darned secure (especially compared to Microsoft). Were it not for their participation in the NSA ‘pre-buggered-R-Us’ program, I’d say to use them. They will keep out most folks, but not all. Doing a ‘roll your own’ on the desktop OS using Red Hat or OpenBSD will be quite a bit harder than just buying an Apple, a Chromebook (also pre-buggered but otherwise secure), or similar. Probably not worth doing it for the general desktop; but worth doing it for secure operations. The only way, frankly, that I can see to make the desktop reasonably secure today, would be to put in two networks. Put a Chromebox on each desktop for browsing and external email (all of $179 each) and have them on one network. Have a second machine for all other work (corporate documents, internal email, development). Don’t let the two networks connect. Drastic? Yes. But when each machine is known to come from the factory pre-compromised, fixing that takes drastic measures.

I was inside a secure network, on a corporate site, with a ‘locked down’ Microsoft box where I could not get admin privileges. I was able to bring up a virtual machine (on a USB drive) and from that launch a browser and have root access on my virtual machine (that means I could install all sorts of software and ‘goodies’). It is not enough to think you have the machines under administrative control. You must assume they will be compromised and put in a load of traffic analysis, penetration testing, monitoring, and intrusion detection. Having the network segmented into “gets to the internet” vs doesn’t buys a lot of safety for everything without that much trouble. Now if I ‘get root’ on a box (or a VM on that box), any attempt to send data to the outside world either fails outright, looks darned suspicious (to a person or a monitor box), or shows up as a performance hit at a choke point.

In Conclusion

This has just been a few highlights. There’s a lot more that can be done, but that gets down into the technical weeds. The major ‘take away’ is to realize that “business as usual” or “what everyone else does” is just not good enough. The rush to convenience and cool hates security. The government hates YOUR security. The vendors have been blackmailed into making insecure products (in the false belief that the only ones good enough to exploit those things are the Government TLAs – Three Letter Agencies.) The ‘pre-buggered’ products are promoted more than the others as ‘the standard’ so following that lead is accepting insecurity. The only clear way out is via using open software systems where they have many eyes looking at them. No, that doesn’t guarantee security (as the recent OpenSSL bug shows) but it is at least saying you know it isn’t ‘buggered deliberately at the factory’. At present, that means a BSD version or Linux. China figured this out, and they are not dumb.

I’ve used 2 or 3 machines for my personal needs for several years. Usually one for email and browsing and another for ‘private stuff’ (like the GIStemp analysis). It isn’t that hard, and not very expensive. Often just not tossing out your old machine when you buy a newer one. I typically use 2 or 3 different OS types at any one time. Microsoft when forced to by a work need. Unix / Linux for my personal or secure uses. Sometimes an Apple (or now a Chromebox) for User Interface focused things (like browsing or email). Sometimes that has meant finding a way to move something from one box to another. Easier in the days of floppies and CDs; harder now that tablets often don’t even have a USB connection. But I’ve always found a way. It makes me very comfortable to know that most of the time 99%+ of all my files are ‘offline’ in unreadable media; that when I’m using a ‘browser box’ it can’t compromise my main private work, my email, or my archives. I keep email on one network (now an external server) and ‘my stuff’ on a separate one, only connected when needed, and then usually via a network box with ‘blinky lights’ that lets me know when data is moving. I’ve so far avoided any significant hack or virus. The only ‘failure’ was a recent loss of email as AOL decided to nuke my account for low usage. Even there, I could have avoided that by using my own email server and doing an SMTP download rather than trust the vendor. But I digress… The point is that segmented use segments risk, and that’s a very good thing.

So, Sony, I suggest asking your head I.T. Guy some very pointed questions, AND asking his “customers” why they insist on convenience over security AND asking the I.T. Guy’s boss why he doesn’t listen when the I.T. Guy says something isn’t all that secure (or ask the I.T. Guy why he’s given up warning…) And ask yourself WHY everything has to be connected to everything and online at all times. If you don’t get the answer that “It doesn’t”, you aren’t talking to the right folks… No, it’s not cool, or trick, or trendy, or easy. But it is secure and effective.


About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present “hot buttons” are the mythology of Climate Change and ancient metrology; but things change...

22 Responses to Dear Sony (and any other company wanting security)

  1. Bernd Palmer says:

    I’m not very “fluent” in security matters, but I’ve asked myself the same type of questions: why does everything have to be linked or linkable to the outside world, why does everybody in the company need access to everything at any time within the company (marketing to production to financial to development to …).

  2. Larry Ledwick says:

    When I worked for Sun Microsystems around the year 2000, they used similar physical segmentation on their system. They had a large interconnected system that most day to day work was done on and a separate core system that was very difficult to get to for their core critical business functions. It used more than a simple password to access and only certain users had login access to the only direct access server. Except for that one limited path, it was completely airgapped from the rest of the day to day system. When we needed to move a file from the outer system to the inner system or vice versa it was done by sneaker net. We wrote the data out to a tape, then walked the tape across the data center and loaded it in a tape drive that was only accessible on the high security side.

    I use a similar air gap setup here at home, right now I have 3 computers sitting on my desks here (only one is powered up at the moment). One that I use as my daily browsing system. Its high capacity storage is a remote 1 TB drive that I normally leave turned off. I only power that puppy up when I need to access those files. Rest of the time a hacker would have to figure out how to physically press the power button on that external USB drive to access the data. For simple things like email and browsing, all are done on the local drives. The OS is loaded by itself on the C: drive so it is trivial to do a fresh install of the OS if I ever felt compromised.

    My second system I use for my photography hobby and it rarely is even turned on, and all the 8+ TB of disk storage is partitioned up onto 5 separate USB external drives which are only powered on when I need to get to a specific batch of files.

    If I am feeling paranoid (ie a high vulnerability exploit has recently been announced) I physically cut off the external network connection by unplugging the cat 5 cable. This however pisses off photoshop and similar consumer applications which feel a need to always talk to the internet even when I have no need for it.

    This is one of my pet peeves, why in the hell are help files on the internet?

    If your network is down, you are hosed if you can’t get to the help files and other critical files???

    Throw in a good high end antivirus package with permissions locked down and you are good to go. On the antivirus package I always order the physical media for the install so if I have to I can bring up the bare system with antivirus without ever accessing the internet.

    I do not even use wifi in my home. All my internet connections are hard wire cat 5. I live just a few miles from several major IT companies and my apartment complex has lots of high end geeks in residence. If you power up wifi on a lap top you get a list of about 30-40 wifi networks you could hit from my living room so the exposure by that path is pretty high if you muff security on your wifi system. Best to just not enable it. That is much stronger protection than any password or encryption system.

    My third computer is my most recently retired system. It is physically powered down all the time, and has a piece of tape over the ethernet connector on the mother board. If I want to connect to the internet with it, I have to really want to! Most of the time I power it up briefly to check for some old archive file and if I find it, I down load it onto a flash drive stick and physically move that over to the other systems.

    I periodically write a disk image of my most important files out to brand new USB external disks and those disks get put in a box and stored some place else as a poor man’s off site storage.

    Some day I will get a few more 3 TB drives and do an image to them and put them in a safe deposit box at a local bank as true off site storage. At the current price of high capacity spinning disks, it is the cheapest way to go for high reliability backups. Better than the cloud, in my opinion:
    a) you never let the data out of your control
    b) unlike cloud storage it can’t just magically disappear when someone you have no control over does something stupid like go out of business or have internet connectivity issues.

    I also periodically get on “shields up” and probe the system from the outside to verify it is still locked down like I expect it to be. (at least for bone stupid stuff like open ports and file sharing).

    If things start acting strange I unplug the network connection and initiate a full in depth scan before I get back on line.

    something about an ounce of prevention vs a pound of cure.

  3. philjourdan says:

    Crackers do not tell of ANY of their cracks. The news is made when security companies discover compromised systems, or discover the hole themselves.

  4. philjourdan says:

    An air gap is the only truly secure method, and then it is only as good as the employees. I know one of the top security specialists in the world (other than you), Darren Manners. While there are still some crackers who romp around in lightly secured networks, the vast majority of cracks today are not some secret squirrel stuff. It is much simpler: it is social engineering. And there is NOTHING that is going to stop that (especially at a place like a movie studio where egos are easily stroked). An Air Gap can allow a cracker to get one time access from a Diva, but since it requires action by an inside person (to move the protected data from a secure network to an open one), it also requires ongoing interaction by the cracker. And most are not going to take that risk (they prefer to be in and gone, not start a relationship with some inflated ego).

    Your suggestions are excellent to secure networks that are accessible from the Internet. But the sad fact is the way real secrets are stolen has not changed since the days of Mata Hari. It is someone on the inside either through nefarious intentions, or dubious intelligence.

  5. Ian W says:

    As both E.M. and Philjourdan state, the problem is the weakness in human security. That becomes a real problem when the insecure individual is a security guru or SysAdmin. There may be no protection against exploits by them, as they are writing the monitoring and reporting code and the security systems. This was the case with the recent NSA leaks: the issue was that Snowden knew which secure files to access.
    So I propose a different partitioning and that is the security staff do not ‘need to know’ what the files they are protecting are. They should not have any expertise in the function of the company nor any knowledge of the content of the files or why they are being secured. They should not have access to the decrypted content or means to decrypt content – that is not their job. This makes the task of a disaffected SysAdmin or IT Security wanting to leak information far more difficult.

  6. E.M.Smith says:

    @Bernd Palmer:

    While there are some really arcane aspects of computer security, IMHO simply asking some direct and obvious questions catches some really big lumps. Like “Why does Finance and Accounting need to be on an internet accessible network?” or “Why does the Heating Ventilation and A/C system have to be on the internet AND connected to the corporate network?” (A couple of recent hacks have happened via the HVAC internet connection). It’s bad enough that you are exposing your heat and A/C to hackers, but giving them a largely non-secure access point, outside of your control, on your internal network is just dumb. Frankly, how hard is it to set the A/C to “heat at 68 F, cool at 74 F” and walk away? Maybe add a calendar and TOD on/off. All that fits in a dedicated box that does not need internet access. Just Say NO! to the ‘internet of things’… (Last thing I want is some internet connected A/C, fridge, lights, whatever – either in my home or at work.)
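    To underline how little smarts the “set it and walk away” case needs: the whole decision fits in a few lines on a dedicated box with no network at all. A sketch using the setpoints above (illustration, not real thermostat firmware):

```shell
#!/bin/sh
# Dumb thermostat logic: heat below 68 F, cool above 74 F, otherwise idle.
hvac_decide() {
    # $1: current temperature in whole degrees F; prints the action to take
    t="$1"
    if [ "$t" -lt 68 ]; then
        echo "heat"
    elif [ "$t" -gt 74 ]; then
        echo "cool"
    else
        echo "off"
    fi
}
```

    Nothing in that loop needs an internet connection, a vendor cloud, or a corporate network link.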

    Similarly, I’m not all that keen on having a TV that is internet aware. A recent hacker conference featured how to crack into internet capable TVs and do things like turn on the camera to watch folks at home… TVs are not very secure platforms… and the crack was pretty easy. Yes, I use Netflix. From a specific computer, that is reasonably secure, that is left off when not in use, and that even has the line to the TV disconnected when not needed. Oh, and no camera or microphone in it…

    None of that needs special computer understanding to ‘get it’…

    Maybe I’m a bit paranoid, or maybe I’ve just been doing Sys Admin work long enough to qualify for that old saw “I’m not paranoid, I’m the Sys Admin – They ARE out to get me!”… but it sure looks to me like several government agencies and more than a couple of companies are actively pushing folks to want insecure behaviours and products. The ‘best fit’ to the pattern is a deliberate attempt to convince folks to adopt products and methods that expose everyone everywhere. The question then becomes “Why?”…

    @Larry:

    Sounds like you’ve been around the block a few times ;-)

    It’s a lot like what I do. I like to plug my systems into a hub or router with ‘blinky lights’ so I can watch the traffic. Any spike when I didn’t ask for something to happen, the disconnect button gets pushed. I can’t stand “auto update” software and shut off everything I can (increasingly hard to do).

    FWIW, I was at a VAR (Value Added Reseller) in Silicon Valley when Sun was building out their (then new) Newark campus. We designed their network layout (including labs) and installed the router / switch gear and all the patch panels. Sun folks knew how to do security (and it was rather like what I’ve done and was described above).

    @PhilJourdan:

    Well… sometimes they tell… I’m fond of telling folks what my first ‘crack’ method was. Why? Well, it was about 40 years ago and no longer relevant ;-) (Exploited a compiler / swap space interaction to get released swap space that had not been scrubbed then searched for passwords… worked well, then, on that B6700 machine.)

    Per people: Sadly, yes. At Apple, we had a professional bug (spying device) sweep done. The consulting firm also gave us a bit of training on some things. In one case, they asked us what we thought the “going rate” was for an employee to “sell out” for a bribe. I said “everyone has a price, I’d guess mine would be enough money to live the rest of my life unemployed as it would mean the end of my career if caught… Call it a $Million or two.” I was rather astounded when the instructor said it was considerably less than that at about $1000 to $2000.

    So that’s part of why you segment things. Someone in “Shipping” with a computer terminal can’t get access to the “Engineering” network or the “Executive” mail server. Now you must get to, and corrupt, the RIGHT person in the RIGHT place (who is much more likely to leave finger prints in the monitor and tracking systems for that place) to ‘get in’. It’s harder to bribe the exec staff to leak the exec email…

    FWIW, we had a large budget for KVM switches (back when that meant Keyboard, Video, Mouse ;-) and each desk had two machines with one KVM set. Flip switch one way, you were on the secret side, flip the other, on the Engineering network. Two completely distinct hard wired networks at each desk. Yes, someone could have built a router and connected the two sides (but that would have been caught ‘in other ways’…). We also physically ‘walked the network’ every so often to assure no unexpected connections existed. Today I’d likely have a hard wired working / secure network and a WiFi ‘outside’ network. Give folks an iPad / Chromebook or Android / Linux tablet for external connectivity and a desktop for internal.

    None of that will stop the janitor from being bribed to plug in a dongle, or walk out with a box. But physical security is a separate topic. ( I had to deal with a janitor stealing boxes once… but he was taking ‘new in box’ for the sale…) There are ways to monitor and record to mitigate that. The old “trust but verify”. It needs to be clear that being caught is probable.

    Also putting browsers only on the WiFi tablets means that phishing and similar things can’t get to the corporate side, so protects to some extent from the Stupid User Problem.

  7. E.M.Smith says:

    Oh, I probably ought to add that at a few companies we used a ‘physical token’ in addition to a login secret (password) for access to the more secure parts. Mostly we used SecureID cards (that make a new number each minute and you must enter that number at the challenge… so no card, no number, no access). Folks complained about the need to carry that card around, but it did a fine job of keeping folks out of important systems. Multifactor authentication is your friend too.

  8. Jason Calley says:

    Hey philjourdan! “It is social Engineering. And there is NOTHING that is going to stop that”

    Maybe an air gap at the power outlet?

    I am only half joking… I am just wondering whether there really are some things that it might be wiser to leave out of digital storage completely.

  9. E.M.Smith says:

    @Ian W:

    I’m a bit conflicted on that one. On the one hand, at the sites I’ve protected, my guys and I had full access to everything. I selected folks with a strong moral compass and helped build a culture where letting information leak would be a failure. On the other hand, it was just such a moral compass that caused Snowden to leak (and while I don’t think I could have done that leak were I in his shoes, it does look to me like he is on the moral side… showing that our own government is pervasively attacking our privacy and security).

    Then again, at Apple, we had little interest in what anyone was doing ‘inside the files’. It would have had almost no impact if folks had bags of encrypted bits and only decrypted them at time of use. So yeah, I’m generally an advocate of having secret / private stuff encrypted. (Frankly, the only reason I can see for email not being 100% automatically encrypted with public key methods by all email clients is that The Government (via the NSA / TLAs) likely leaned on some folks to not include it by default.) The one downside I can see is that there WILL be a fair flood of ‘help desk calls’ asking us to decrypt various files where we’d need to say we could not. From forgetting what the key was, to folks leaving the company and not decrypting first, to folks having health issues that leave them unable to decrypt. Especially going back to backups from 6 months back and realizing it was a very different key then… Heck, as it is, most folks click the ‘remember me’ boxes for logins and passwords at various sites.

    Overall, I’d have to put that in the “nice to have but likely too painful to the users” to succeed. Useful for really secret projects, where requiring that the manager keep a copy of the keys would be acceptable and where a very secure way to archive them would be paid for… Not so useful for things like marketing brochure creation or janitor scheduling…

    One major “win” from it would be easier assignment of responsibility for leaks. If the only folks with the decryption key to the (for example) movie file are the creator of it and their manager, you have exactly two folks who could have leaked it.

    As an aside: On Apple.com I liked to put up a file of a few MB size (back when that was slow to move over the network – now I’d use a few GB). It would be named with ‘attractive’ names. Things like “Mac ROM code” or “Project New Mac”. It would actually contain something useless (one of my favorites was to have multiple copies of MicroSoft binaries ;-) or it was all blanks with the first line saying: “Congratulations! You have found that the password to this file is YouHaveBeenPwned!”

    Nothing like spending a half hour downloading a file (and showing on monitors…) then spending a few days to crack it; only to find out it was a total waste. Psych warfare can be fun! ;-)

    I do advocate for having ALL WAN connections encrypted, having encrypted wireless, having browsers set up to request HTTPS by default, etc. When in doubt, encrypt. But it needs to be relatively user friendly or folks won’t use it.

    I guess what I’m saying is that at the systems level, automatically encrypting those things that can be done automatically is just fine. (Network, disks) While at the user data level, I’d rather leave that as a utility the user is responsible to use (so I don’t get those phone calls…) Then sprinkle in things like some honeypot time wasting distractors just for fun ;-) I’d also strongly advocate for 100% public key encryption of all email – but that’s largely just pissing into the wind of public apathy… it needs to be built in to email clients by the vendors as an auto-key-exchange and auto-encrypt-going-forward function; and they are not doing it.

    Heck, in my ‘setting up’ the encryption on containers on my disks I managed to forget the key on one container. (Lucky for me it was just a test case, not actual data). But as a consequence I tend to decrypt containers before making archive copies. Presently I have a 32 GB “system on an SD card” where I’ve forgotten the password. Not a big deal. It’s a firewall / router with a DHCP and BitTorrent server on it. I can recreate it in about 4 hours and have the data elsewhere. But that password / key management issue is not small.

    On the topic of sys admins: It’s also worth noting that there ought to be distinct logins for Admin use (that can get special privs) and a completely separate login used for general email, browsing, whatever. That prevents a phishing or malware attack from capturing a sysadmin account, as those activities are not done on that account. I usually have 3 different logins just for me on each system I run. One for “junk uses”. One for “admin access” (though even that still needs a sudo to actually ‘get root’). One for “special projects” – often a new one for each project. Again that ‘segment the space’ behaviour.

  10. Larry Geiger says:

    Larry’s Computer Rule
    Rule # 1: If it’s digital, it’s everywhere.
    Rule # 2: Never forget Rule #1.

    If it’s encrypted and someone wants to see it, it will eventually be decrypted. Eventually. Maybe long after you care, but nevertheless.
    Most business and personal stuff is innocuous and there is so much of it that it’s irrelevant. It’s ok to email your mom and say how the grandchildren are doing or that you played Bunko on Tuesday or the weather is nice here in Florida. Just be careful.
    Images are another matter. Your child’s FaceBook photo is on a hard drive in some far out of the way place in China. Bet on it. Your high school annual photo is on someone’s hard drive in Brazil. I have very few images on the inner-webs. Fewer and fewer as time goes on. IMHO. YMMV.

  11. Power Grab says:

    I love it when you talk security to us. :-)

    Things are so much more complicated these days, especially with all the backdoors built into everything! But back in the day, when we had a dedicated word processing system that ran on 8 inch floppy disks, my officemate and I discovered evidence that someone was entering our office during the night and using our system. We learned from the janitor that the perps claimed to have permission because they had a ring of keys from an administrator.

    We couldn’t get anyone higher up to take any action to help keep our system secure, so we took it upon ourselves to do it. We agreed together to remove the system boot disk every night and hide it. We hid it in a disk box in a filing cabinet. The box was labeled “Bad Disks”.

    We stopped having trouble with unauthorized use after that. :-)

  12. Larry Ledwick says:

    Looks like the Sony hack may have been social engineering or related lax security. News is reporting the breach happened because the attackers stole admin credentials from someone, and that is why they had the keys to the kingdom.

    http://www.cnn.com/2014/12/18/politics/u-s-will-respond-to-north-korea-hack/index.html

  13. E.M.Smith says:

    @Larry Geiger:

    Encryption is always a time based race form of security. At Apple in the ’90s we had a Cray. It was a $40,000,000.00 machine. We had 2 TB of STK tape archive. That was a $100,000.00 or so IIRC (or that might be the tape cost, with the robot being $250,000.00… it’s been a while). Now my old HP laptop has as much processing power, and 2 TB of USB drive at about the same speed is about $100.

    Then, we figured we could ‘pre-compute’ all possible passwords and salt combinations and just do a password “lookup” via the public visible ‘cryptext’ of passwords in the passwd file. We didn’t do that, but planned to go to hidden password cryptext “soon”. We did use it to test password strength on various logins / systems. Sometimes doing brute force decrypts and passing judgment on the clear text ;-) (Why have an idle daemon when you can have a password cracking program running ;-)

    At present, minimum password strength requires about a dozen characters from the full character set. Then it was about 8 of a limited set. In a few more years, it will be “bigger than people can remember and use” and we will need to go to some other system (likely token based).

    Pretty much everything that was DES encrypted is wide open today, so anyone who recorded traffic in the DES days can do a replay / decrypt on it. MicroSoft triple-DES has been shown to be DES+fluff in actual difficulty (gee thanks, PRISM…) so all MS based encryption until very recently is essentially wide open now.

    BUT…

    Things like AES, Blowfish, and other systems are exponentially more difficult. Moore’s Law is coming to an end. We are likely entering the period where encryption done today will hold secure for more than a decade or two. (Yes, Moore’s Law is ending. Already we have to go to multiple cores to handle heat / size / performance limits, and that is a linear gain, not a power gain… also the number of electrons that store a bit will be ‘less than one’ with not too many more halvings of cell size. At that point it isn’t possible to use smaller cells with fewer electrons…)

    Time will tell.

    @Power Grab:

    Sometimes the simple solutions are the most elegant ;-)

    Also it is often the case that the folks who care least about security are the management. The higher up, the less interested. Go figure. “Security” is overhead, and overhead costs are to be minimized.

    @Larry Ledwick:

    Not surprised. As I noted above, I’d have a hard separation between the browsing / external email services and all the internal services. Even to the point of separated machines, networks, logins, and credentials. Have a ‘browsing and email’ network and give everyone a login of their name and unique password. Internally have a different network, a distinct login (likely a bit more cryptic, like SMEM123 for Smith E.M.xxx), and an enforced different password (have the external rule require a number, and the internal require a number, special char, caps…) along with NO browsers, or not allowing browser protocols through to the outside… Also different admin logins and different admin passwords. Also have a ‘2-factor’ method to ‘get root’ or privs (biometrics or physical token).

    That’s the best I can think of to block that line of attack.

    Then all that segmentation (noted above) prevents a hack of someone’s email or even their internal machine from getting to all the corporate cookies. They get EITHER the financials OR the e-mail OR the movies OR the engineering OR … not ideal, but it limits the damage a lot. Having 2 factor credentials on the inside for privs also prevents a lot of it from working, especially the social engineering part.

    But all that depends on someone being willing to pay for it. Rarely is that the case (Apple and Schwab were two that did – that are long enough in the past I can mention them ;-)

    Frankly, the hardest part about security is getting management to care about it and fund it. Even the PCI / Sar-Box stuff that is mandated by law is often done more as a check box for ‘compliance’ than an actual full on commitment. (PCI is Payment Card Industry. There is also PII, Personal Identifying Information, that shows up in payment and HIPAA / health contexts, while Sar-Box is Sarbanes-Oxley legislation on financial exposures, email retention, and related. But that’s getting way down in the legal weeds…) There are many companies “doing it right”, but they don’t make the news often. Many more are exposed, but since everyone else does it that way, the box is checked and they don’t see why they ought to pay more for ‘good enough’…

    Oh, and try explaining things like wanting to build a honeypot with spousal access to a non-technical exec… or why you need $40,000 for hardening routers and firewalls that they think they bought 2 years ago already… Heck, I was at one site where I found the Sys Admin was in cahoots with the folks buying computers. Flat out stealing equipment for resale. He had a set of backdoors in the Cisco router firewall AND I found a twisted pair from the phone closet that went into a wall with a hidden ISDN dial in (it was fast then…) for his private use. Had to do an ‘on the fly’ complete rebuild of the boundary router/ firewall config, pull his wires, and more. The company declined to prosecute the guy as they didn’t want the publicity and they resented the request for a new firewall to be bought. He was gone, so why upgrade it?… Sigh.

    That kind of attitude is the major problem. Once you are past that you can start to talk tech solutions and level of defense.

  14. Larry Ledwick says:

    One interesting thing about passwords is that length matters more than complexity in human terms. From a brute force cracking point of view, the following two passwords are equally difficult to crack.

    H5kisyGx3#enR7tw%
    Ilikel0ngPaswds3$
    

    http://whatis.techtarget.com/definition/password-entropy

    As a result a very long pass phrase with an easily remembered insertion of uppercase, special characters and numbers and ideally a segment of a nonsense or non-dictionary word group can be a very strong password but easily remembered. Using a nonsense or non-dictionary segment in the password string helps defeat tools like “rainbow tables” which presume that common words and symbols are used.
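    To put numbers on that: brute-force search space grows as length times log2(alphabet size), so length wins quickly. A quick sketch in Python (95 being roughly the printable ASCII set; this assumes each character is chosen uniformly, which real rainbow/dictionary attacks exploit when it isn’t true):

```python
import math

def entropy_bits(length: int, alphabet_size: int) -> float:
    """Bits of brute-force search space, assuming each character is
    drawn uniformly at random from the alphabet."""
    return length * math.log2(alphabet_size)

# Both example passwords above are 17 characters drawn from the ~95
# printable ASCII characters, so identical brute-force cost:
print(round(entropy_bits(17, 95), 1))   # ~111.7 bits for each

# Length beats complexity: a 20-char lowercase-only phrase outruns
# 8 "complex" full-set characters.
print(round(entropy_bits(20, 26), 1))   # 94.0
print(round(entropy_bits(8, 95), 1))    # 52.6
```

    The caveat is the uniformity assumption: a phrase of common dictionary words has far fewer real choices per character than the raw alphabet suggests, which is why the nonsense / non-dictionary segment matters.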

    http://en.wikipedia.org/wiki/Dictionary_attack

    The rainbow table attack is basically the precomputed password lookup you were talking about, but even there you have to know some information, like what hash system the password you are attacking used, and the likely character set or dictionary language used by the person making the password.

    One way to get around remembering difficult passwords is to use a secure password archive tool like KeePass to store them.

    http://keepass.info/index.html

    That particular tool also lets you have the system generate a very complex password for high security applications, like your financial or medical files, and then gives you a relatively secure way to store the passwords in a recoverable form. This can also simplify the issue you mentioned earlier of keeping track of old passwords, which you might find needed to decrypt some old file.

    Keeping the KeePass info on a separate system is the better choice, as the bad guy then needs to access two systems to get to the solution. If the KeePass archive is stored on some old laptop which is never connected to anything else, it becomes a physically impossible hurdle for the attacker to get to the password archive file unless he has unlimited access to the physical facility, plus some local knowledge which, if good security is used, should only be known to one person.

  15. philjourdan says:

    @Jason Calley – Actually your joke is about the only solution. The PC empowered the people, but alas, most of them have no clue on how to wisely use the power. Witness Facebook.

  16. Larry Ledwick says:

    Technical details of the attack method from CERT
    https://www.us-cert.gov/ncas/alerts/TA14-353A

  17. Late last century, I was a contracted programming/admin slave to a division of a large multi-national.

    One fine morning, I was advised that there was going to be a security audit by a third party (very large consluting firm) of the network and server security. So I installed some “root kits”, back-doors, SUID-root shells in the dotted directories of long-gone users, multiple root logins, etc.

    A few weeks later, the results of the “security audit” came back; eliminate user access to FTP and force everybody to use anonymous FTP. No mention of the classic security holes in the mission-critical ERP/MRP servers.

    Over a cup of coffee, I had a word with the branch manager, who requested that I not put it in writing. It was more important for senior management to appear to have had the security audit done than for it to have been effective.

    If security is actually important: Watch the watchdogs.

  18. E.M.Smith says:

    @Larry Ledwick:

    Good points. I tend to use that “phrase” approach, but with ‘twists’. Number for words, ‘alternate’ spelling of words, special chars and sometimes other language substitutions.

    Love to eat spaghetti on Sunday four dinner.
    becomes
    7ove2Eatspagettisur$unday4comeda!

    Makes a password that is not too hard to remember, resistant to dictionary attack, and rainbow table resistant. Also hard to brute force.
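    That kind of ‘twisting’ can even be sketched as a little program. A toy Python version (the substitution tables here are illustrative inventions, not my actual scheme; and of course anything an attacker can guess you script, they can script too):

```python
# Toy "phrase with twists": whole-word number substitutions, a leet swap
# on leading letters, and caps at remembered positions.
WORD_SUBS = {"to": "2", "for": "4", "four": "4", "and": "&"}
CHAR_SUBS = {"l": "7", "s": "$", "o": "0"}

def mangle(phrase: str) -> str:
    out = []
    for i, word in enumerate(phrase.rstrip(".!").lower().split()):
        word = WORD_SUBS.get(word, word)            # "four" -> "4"
        if word and word[0] in CHAR_SUBS:
            word = CHAR_SUBS[word[0]] + word[1:]    # "sunday" -> "$unday"
        if i % 2 == 1:
            word = word[:1].upper() + word[1:]      # caps at odd positions
        out.append(word)
    return "".join(out) + "!"

print(mangle("Love to eat spaghetti on Sunday four dinner."))
# prints 7ove2eat$paghetti0n$unday4Dinner!
```

    Easy to regenerate from the remembered phrase plus a few remembered rules, yet nothing in it survives a dictionary pass.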

    Another one you can do is “keyboard pattern”. So in the middle of that you put:
    asdfghjkl;’
    just the middle row of the keyboard (or up/down runs or squares or…). Easy to remember a physical pattern and that it goes just before the $… With just the ‘around the outside’ of the key layout you get about 27 keys. It CAN vary by particular keyboard too, and then you can add ‘change ups’ like having every 4th one be with caps lock…

    By that point, anything short of brute force is not going to work.

    @PhilJourdan:

    So true. I have a pretty smart friend who is involved with computer management… he doesn’t see any issue worth really worrying about with Facebook and Linked-In. Sees the issue, just doesn’t care enough…

    @Larry Ledwick:

    Time for some reading…

    @Bernd:

    I’ve worked in The Compliance Department of a large co. It was “all about the check box” and you were not making friends if you pointed out real problems that existed. Another guy in the shop was just laid off in a RIF. ‘Oddly’, just after showing where there was a long standing problem in the actual compliance that was being ‘checked off’ regularly anyway…

    I’ve regularly found that I can “do interesting things” after security audits. Why? Many “audit” firms just “round up the usual suspects” with some script. Never going to catch the ‘neat stuff’.

    One example? Saw a proposal for a fully integrated file system where networked files would be referenced via three dots before the slash. So “/” is the “root” of all file systems. “../” means go up one directory from where you are. ( so if in /bar/bite/me and you do ‘cd ..’ you end up in /bar/bite/ and another ‘cd ..’ puts you in /bar ). Now for these folks, doing …/ would move you out of the machine name space. …/machine/bar would get files from the other machine (named ‘machine’). Think anyone would find that you had put in a new form of the file system that fully integrated that syntax? What if instead of …/machine it was just …/hiddenfilespace in nominally free blocks? A binary compare of things that manage that space would show a difference, but would they know what that difference meant?
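    That three-dot convention is simple to sketch. A hypothetical Python resolver (the real product’s exact syntax isn’t known here; this just shows the idea of ‘…/’ as an escape from the local name space):

```python
def resolve(path: str):
    """Split a path using the '.../' escape: three dots leave the local
    file space and the next component names the remote machine."""
    prefix = ".../"
    if path.startswith(prefix):
        machine, _, rest = path[len(prefix):].partition("/")
        return machine, "/" + rest
    return "localhost", path

print(resolve(".../machine/bar"))   # ('machine', '/bar')
print(resolve("/bar/bite/me"))      # ('localhost', '/bar/bite/me')
```

    The point of the example stands either way: an auditor diffing binaries would see a change, but not what the change means.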

    How about spotting a VM in a ‘bag o bits’? One for an ‘unusual’ OS? (Mentioned before was that on a ‘fully locked down’ MicroSoft machine I was able to launch a VM with Linux from a USB stick… that would not even be in the machine at the time of a sweep…) I mentioned this to my boss… who asked that I not tell anyone… I wasn’t supposed to be able to “install unapproved software”… So I had access to every bit of Debian (or any other Linux / Unix I wanted) with root on my VM. Think that might be useful for scanning other systems? (In their defense, they had pretty good pro-active network security, and had I started a port sweep, even a slow one, they would have found me… and my boss didn’t want me walked out 8-)

    But yes: The “compliance business” is often much more about check boxes than reality. Sorry to say.

    The point about ‘.’ directories is a good one. You can hide all sorts of interesting stuff in . directories. I’ve also used non-printing characters as file names. Shows up as a blank line in the ls listing, but… make it a .nonprint and… (Found that by accident when a name was formed badly and I suddenly could not type anything that would remove it… it had one such spurious char in the name; but make the whole name of them and…)

    Oh Well.

  19. kevosphere says:

    Hi EM

    Just for posterity – a guy writing at the Daily Beast who also has computer skills:

    (director of security operations for DEF CON, the world’s largest hacker conference, and the principal security researcher for the world’s leading mobile security company, Cloudflare)

    thinks it was a Sony insider job.

    http://www.thedailybeast.com/articles/2014/12/24/no-north-korea-didn-t-hack-sony.html

    Ignore the comments thread – as the commenters seem to have ignored the writer’s assessment, and it devolves into an ‘Obama did/didn’t do it’ food fight.

  20. E.M.Smith says:

    @Kevosphere:

    Interesting article. I’d not looked into the hack particularly in any detail. What he lists / found are fairly interesting issues. In particular, the hard coded inside info. How was it gotten, if not from inside? Did some undetected hack happen, that did no damage, but got all the needed goods? From someone suddenly POd about a picture? Um…. Then the assertion that a list of proxies is a N.K. signature? Dodgy at best.

    Then again, I may be biased. Any time I’ve tried to call the FBI in on a hack they have been nice, professional, slow, and not all that impressive… Now the last of those was a good 25+ years ago, so not the same folks as today, and not with the “new” emphasis they’ve put on computer tech. But still… organizations tend to have persistent styles.

    Which brings me to the final point of agreement… So we have code with a lot of hard coded stuff in it, and some “fingerprints”, yet nothing in it looks like Korean phrases or names? No little Easter Eggs or “digs”? Um, OK…

    So from my POV, it’s a toss up as to a N.Korean op, or a “disgruntled” insider using a N.K. False Flag. Until more info is shown, there’s not a lot more to say.

  21. Larry Ledwick says:

    I used to have a DBA that was super paranoid and would append a non-printing character to the end of a file name. If you tried to type the file name based on what you saw in an ls listing it would not work, because you did not include the hidden non-printing character, but if you did an ls of the directory and used cut and paste on the file name you could cd to it with no problem. A quick visual scan of the file system names would not raise anyone’s curiosity but it was actually a pretty effective security feature to block day to day users from getting into files that they should not be playing around in, even if they technically had permissions to access the files.
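    That trick is easy to demonstrate. A quick Python sketch (assuming a POSIX filesystem, where almost any byte is legal in a name; repr() is what exposes the hidden character):

```python
import os
import tempfile

# Create a file whose name carries a trailing non-printing BEL character.
d = tempfile.mkdtemp()
hidden = "report.txt\x07"
open(os.path.join(d, hidden), "w").close()

for name in os.listdir(d):
    print(repr(name))               # repr() exposes the trailing '\x07'

# The name as a user would type it from a casual listing fails...
print(os.path.exists(os.path.join(d, "report.txt")))   # False
# ...while the exact name (e.g. via cut and paste) succeeds.
print(os.path.exists(os.path.join(d, hidden)))         # True
```

    Not real access control, of course, just friction; but as Larry says, friction is often enough to keep casual users out.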

  22. E.M.Smith says:

    @Larry:

    I’ve done that ;-)

    Not very often, and after a while it’s a bit of a PITA so tended to be “let go”. It can also be fun to put in a / with quoting that is not actually a level in the file name… but also a PITA. Now, once, just to see if I could do it, I embedded a ” /” in a file name. Yes, blank and /. As you might guess, even a ‘cut and paste’ gives undesired results…. At least then, about 25+ years ago, you had to type it by hand with the escapes to get it to work.

    Then there is the classic … directory. Any file name or directory name starting with a . is non-printing, so by naming it … you had a non-printing directory that looked sort of like a reference ‘up one’ or like you skipped something in the printout (if you forced print). That one was a real PITA… One company came out with an extended file system where … was used as the flag for “machine name follows” or “exit local file space” and it didn’t work in that context. (So, for them, a path name of …/peaches/etc/fstab was a reference to the fstab file on the machine ‘peaches’ in the /etc directory. Elegant, in a way…)

    Also, on one machine some long time ago, I got the ‘backspace’ to work without ‘erasing’ the previous character from existence, but with removal from display. Listings would momentarily show the ‘missing’ character, then overlay it with a blank. Unless you knew what it was, and where it went, it was very hard to ‘get it’… Now I’ve forgotten what the magic dust was that did that, but could likely recreate it.

    I’ve also had an NFS mounted directory full of symbolic links to the actual file locations. When not mounted, you could get to those files (as they were real on the machine) but not via those NFS directory paths. Anyone using “my listings” when I was not logged on got hidden garbage files (that were overlaid when the NFS mount was in place with the symlink redirections). Not a particularly effective way to protect the files (as they were still there and readable, scattered around the real file space) but more of an annoyance to anyone looking over the shoulder or trying to steal / use some script with embedded paths (as they would only work right when I was logged on and the NFS mount for the redirects done…) The “less clueful” would be frustrated and prone to giving up (or leaving evidence in the garbage file access dates…) while the more clueful would observe the symlink flag on an ls of the directory…

    Ah the joys of obfuscated file naming competition ;-)

    Maybe I had too much free time as a full time Sys Admin ;-)

Comments are closed.