Apple vs. FBI vs. Spy vs. Spy

Or “Why Apple is right and the FBI is wrong”.

I’ve made a short list of the things I can see that are “wrong” with the position taken by the FBI on the question of forcing Apple to create a version of their operating system that can be unlocked at will. (I’ll be calling that “The Hack” further down.)

The basic problem for the FBI is that Apple has put security features into their operating system that make “guessing” the unlock code a bad strategy. (BTW, this is common in other places, too.) The iPhone gives you 10 “guesses” on the unlock code, then it will (can) erase all the data in the phone. Since there are many possible unlock codes (for 4 digits, 10,000), it is unlikely you will ‘get it right’ and highly likely that the phone will be scrubbed and / or bricked. (“Bricking” meaning locked up, never to be opened again.)
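To put numbers on that, here is a tiny back-of-the-envelope calculation (Python; the numbers are purely illustrative of the 4-digit case):

```python
# Odds of guessing a 4-digit PIN before the 10-attempt wipe triggers.

PIN_SPACE = 10 ** 4      # 10,000 possible 4-digit codes
MAX_TRIES = 10           # attempts allowed before the phone can erase itself

# With ten distinct guesses, the chance of hitting the right code is
# simply tries / keyspace.
p_success = MAX_TRIES / PIN_SPACE
p_wipe = 1 - p_success

print(f"Chance of guessing the PIN: {p_success:.2%}")   # 0.10%
print(f"Chance the phone wipes:     {p_wipe:.2%}")      # 99.90%
```

A one-in-a-thousand chance of getting in, against a near certainty of destroying the evidence. That is why “guessing” is a bad strategy.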

That feature set was not in older iPhones, so all the arguments of the form “Apple helped them get into other iPhones” are broken arguments. Apple helped them get into a completely different kind of iPhone that did not require breaking these security features.

That’s the basic problem. The FBI wants a “hack” that lets them have unlimited “guesses” to unlock the phone; then they can extract the information. There is a second step to this, in that the software prevents guessing at computer speeds. You can only guess at human speed and must wait a few seconds to try again. The FBI wants that removed too, so that an automated search can be done on the unlock code keyspace.

Now think for just a moment about what happens if you try too many times to get into your bank account or even your online email. For many / most systems you either get increasingly long wait times (usually exponentially so) or you simply get a password lockout and must get a password reset from the vendor (for some vendors, they can’t do that and you are just locked out forever). This is the usual and customary way of doing things. The FBI wants to change the usual and customary so that THEY get unfettered access.
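As a sketch of that “usual and customary” policy, here is roughly what an exponential-backoff lockout looks like. The function name and thresholds are made up for illustration, not any vendor’s real policy:

```python
# Toy model of the common login throttle: each failed attempt doubles
# the wait, and after a hard cap the account locks entirely.

def lockout_delay(failed_attempts, base_seconds=1.0, hard_lock_after=10):
    """Return the wait (seconds) before the next try, or None if locked out."""
    if failed_attempts >= hard_lock_after:
        return None                      # hard lockout: vendor reset required
    return base_seconds * (2 ** failed_attempts)   # 1s, 2s, 4s, 8s, ...

for n in range(12):
    print(n, lockout_delay(n))
```

By attempt 9 the wait is over 8 minutes; one more failure and you are talking to the vendor. That is the behavior the FBI wants stripped out.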

Once Apple has done this, all other vendors WILL be subject to this same legal precedent. It doesn’t matter at all that the FBI says they don’t care about the precedent, it will still exist.

All arguments of the form “The FBI doesn’t want to set a precedent” are void since a precedent WILL be set, want it or not.

So that’s our basic problem space.

What are the specific “issues” I see, beyond just the FBI wanting in and Apple saying “no”? Here’s my list with a few notes. I’m sure there are other issues as well, and folks are welcome to add anything they see that I’ve missed.

The Problems

This DOES create a generic Hack Tool

This is NOT the same as prior phone unlocks. This creates a capability that all sorts of folks will want to grab. It is fundamentally the equivalent of an iPhone Encryption Nuclear Bomb. It, simply put, causes massive destruction of the security feature across all iPhones globally. ANYONE who gets their hands on this new weak version of the operating system can open any iPhone. That’s the whole design goal of the software the FBI is demanding: Open an iPhone against the will of the owner.

Yes, the warrant says to ‘key’ the unlock code to that one phone in some half-assed attempt to prevent a general solution; however, that is simply dumb. (Sorry judge, it is.) It just means any stolen copy will need to have one serial number swapped for another (or perhaps a hashed version of it). “Agencies” deal with that kind of problem all the time. Hard, but not too hard. Furthermore, before you can do that ‘key to one phone’ step, you must create the software that does the unlock and test it to show it works. Either you create it without the ‘key to the phone’ and then add that step (thus first producing the generic, not-keyed-to-one-phone tool), or you key it to some other phone for testing and then change the key, thus creating the “update the number for a new phone” process and / or code. In either case, you have a general purpose unlock tool. Any argument of the form “It is just this one phone” is bogus and ignorant.
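To see why “keyed to one phone” is no real barrier, here is a toy version of such a check. The serial number is hypothetical and the whole thing is a sketch, but the point stands: the entire phone-specific part is one constant (or one hash of it), which a thief simply swaps out to retarget any other device:

```python
# Toy "keyed to one phone" gate: the only device-specific protection is
# a single hash constant. Serial numbers here are invented.

import hashlib

# Hash of the (hypothetical) warranted phone's serial number.
AUTHORIZED_HASH = hashlib.sha256(b"F2LLXXXXXXXX").hexdigest()

def hack_enabled(device_serial):
    """Gate the unlock tool: compare a hash of the serial to one constant."""
    return hashlib.sha256(device_serial).hexdigest() == AUTHORIZED_HASH

print(hack_enabled(b"F2LLXXXXXXXX"))  # True  -- the one warranted phone
print(hack_enabled(b"F2LLYYYYYYYY"))  # False -- until someone edits one line
```

Retargeting the tool is a one-line change to `AUTHORIZED_HASH`. That is the entire strength of “just this one phone”.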

A VERY valuable Hack Tool brings new threats

This tool is now VERY valuable. The iPhone is now used for all sorts of financial transactions. It is more correctly a ‘pocket computer that makes calls’ than just a phone with features. We’re talking $Billions of dollars behind those accounts that can be exposed via an unlock feature. This changes the nature of the folks who would want that tool. No longer just a kid down the street hacking for fun and a thrill. We now bring in all sorts of State actors and Agencies and organized crime. The existence of the unlock code paints a Giant $Billion Target on the back of Apple.

Quis custodiet ipsos custodes?

Who will watch the watchers and who will guard the guards?

But not just Apple as a legal entity. It also paints a target on the programmers involved, on the data archive and I.T. department, on the managers of those areas, on their service providers (like telcos and networks), and on down the list to the janitors with access to the building. Who is going to keep them all safe for decades? Who is going to assure not one of them succumbs to a $Million bribe? (Perhaps coming with an offer of a free mansion in China… and State protection.) Who is going to assure they NEVER get a photo in their email of their family, taken through a gun sight? Who is going to assure no non-state actor ever gets access to that program or any backup / archive copies of it, even if they present a perfect work history, no ‘priors’, and apply for a job working in that group? Are you going to forbid Muslims from getting jobs in that group? Really? Are you going to cover the discrimination lawsuits if you do? Who is going to assure ISIS and / or any other such group doesn’t get someone planted? Or Russia or China or Germany or the UK or…

When I ran a secure site, we had to deal with such issues. We found out that, then, the going rate to subvert an employee was about $3000. Yeah, that cheap. Call it $30,000 today and it is still cheap. For that, you can, typically, buy physical access to a site, or get a copy of the backup tapes dropped off. We took great pains to assure that was not possible at our site, but bump that up to $100,000 and even our measures would likely have failed. Oh, and we would not have been resistant to that telescopic sight photo of family nor to State actors with “other means”…

The site I ran had a large Cray and it was considered “Export of a Munition” to let “the wrong people” have accounts on it. I was subject to a prison term if I made a mistake on granting access. I’ve dealt with this world, at least from the border post to it…

BTW, we specifically decided NOT to put in truck barriers around our computer room, figuring we didn’t have anything valuable enough to worry about ‘explosive entry’. With The Hack code, ‘explosive entry’ would be an expected risk. Apple would need to build a bunker somewhere to properly protect it, even during development. Yes, it is THAT big a risk.

As someone who kept the Apple Engineering Network and Supercomputer Center safe for over 7 years, I’ve spent some of the “best years of my life” solving just this kind of problem, and that was 25 years ago when we had a much simpler problem set to defend against. With the present set of risks (including all the weaknesses in operating systems and security methods put in place by our own NSA, buggered firmware in devices from China, and weaknesses / exposures from having web enabled code run on most platforms), I’m fairly confident that kind of “never got hacked” record cannot be repeated. Especially against this caliber of threat level. That is what is meant when Tim Cook says The Hack is “too risky to create”. He KNOWS he will get that gunsight photo (and perhaps a ‘buy out offer’ from a Chinese company and…)

Frankly, protecting The Hack during development and testing scares the poo out of me. The code will exist at a minimum in a data vault / archive, on the development computers, and on backups of everything. It will be accessible as it passes through network devices (and wires) and unless care is taken to shield the building, via the air (either as WiFi network access, as WiFi snooping, or as ‘leakage’ in Spy vs Spy tools that can scrape screens and capture keystrokes). It could well end up on some USB dongles for a ‘grab and go’ exploit, too. Locking all that down would take Lockheed Blue Cube or NSA site security levels… yet how well did those work against Snowden? Hmmm? It is easy to secure something of little value, but The Hack is of very high value.

Apple must protect The Hack code and any device it has ever been on (or destroy them quickly) for a very long time. Further, it must provide exceptional levels of protection to the PEOPLE and the PLACES where they work, potentially for years to decades. A level of protection that is presently not in evidence, even at the NSA.

Slave Labor Is Illegal

So who is going to pay for all this added labor and security? Last time I looked, slave labor was illegal. Here the FBI is NOT just asking for a device or software to be handed over, it is asking for labor to be expended, against the will of the provider, and without payment. Last I looked, that was called slavery. Is it OK to enslave companies as long as the employees are paid? To steal from the shareholders? Do we want to establish that precedent? (And it will be a precedent if it happens.)

It Breaks the iPhone, a “taking”

The Hack exists for the purpose of breaking security on the iPhone. As that is a major feature, The Hack breaks the iPhone. This has implications, and implications can not simply be ignored. This constitutes “a taking” under law. The value spent by millions of iPhone users will be destroyed for the benefit of the FBI, the US Government and, via the precedent set, dozens of OTHER governments world wide. Who will be paying all those iPhone owners for the “taking”? Who will make good their losses when a Chinese Customs agent takes their phone to the back room “for inspection” and sucks all the information out? Who will compensate Apple for lost sales (to ANY and ALL innocent people who want a really secure phone) once this deed is done? How will the “taking” be compensated? Hmmm? And make no mistake about it, there will be others happy to provide a competitor to Apple in the “security” space, some of them well outside the reach of US law. (A link at the bottom has a competitor secure phone review.)

The Hack WILL eventually leak

It is just a matter of time. All encryption is a race condition. Over time, weaknesses are exposed. Even if the code is stored on an encrypted disk, briefly, and then erased once the phone is unlocked, there will be backups, or someone will have ‘snagged a copy’. Eventually even what is today unbreakable becomes breakable. Methods improve, computers get faster.

So The Hack will eventually be free ‘in the wild’. The only question is “How soon?”. Soon enough to be a problem, or only a historical artifact? Nobody knows.

My bet would be on “about 2 years”. After the first year, folks will start getting sloppy about it. There will also have been another doubling of compute power via Moore’s Law. State Actors who ‘snagged a copy’ of it even as encrypted network traffic and / or encrypted disk will likely have found a way to get the goods out. Or they will have paid someone enough money to get an Apple badge and access.
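The “doubling of compute power” arithmetic behind that bet can be sketched like this. All numbers are illustrative, and Moore’s Law is treated loosely as “attacker compute doubles every couple of years”:

```python
# Rough model of "all encryption is a race condition": as hardware
# doubles, a crack that is infeasible today becomes practical.

import math

def years_until_feasible(crack_years_today,
                         acceptable_years=1.0,
                         doubling_period_years=2.0):
    """Years of hardware doublings until the crack fits in the acceptable time."""
    doublings = math.log2(crack_years_today / acceptable_years)
    return max(0.0, doublings * doubling_period_years)

# A key that would take ~1000 years to brute-force on today's hardware:
print(round(years_until_feasible(1000.0), 1))  # 19.9 -- under two decades
```

So even a “thousand year” brute force is only a couple of decades of patience away, before you count algorithmic improvements, sloppy handling, or bribes.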

What will the world do then? Probably buy the iPhone 27 … but I snarc ;-)

This WILL set bad security precedents

It does not matter one bit what the FBI says about precedent, they will be setting them no matter what anyone wants. The precedent for slave labor. The precedent for unfettered access. The precedent for “create custom code on demand”. The precedent for “break your product for us”. The precedent for “Use YOUR secret signing key for OUR purposes”. And likely a few more.

We will need to live with those precedents for a very long time.

Unbreakable Encryption is already here, and has been for a while

From “One-Time Pads” to several years of use of the Enigma device by the Nazis in W.W.II, we’ve always lived in a world with “Unbreakable” Encryption. (Though the Enigma was broken due to some spectacular skill, luck in capturing one, and stupidity on the part of the users in repeating sign-on / sign-off message headers and footers.) Some encryption, like one-time pads, if used properly, remains unbreakable forever. The Navajo Code Talkers were Unbreakable during W.W.II and kept a national secret until just a few decades back (that same system would still work, IMHO). Even now, our online banking and dozens of other uses (including the “Software Signing Key” used by Apple and others to assure only official updates to software get installed) are all based on presently unbreakable encryption.
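For the curious, a one-time pad really is just a few lines of code: XOR the message with a truly random key of the same length, used once. A sketch (not production crypto, but the scheme itself is information-theoretically unbreakable when the pad is random, secret, and never reused):

```python
# One-time pad: XOR each message byte with a same-length random pad.

import secrets

def otp_encrypt(plaintext, key):
    """XOR plaintext with the pad. XOR is its own inverse, so this decrypts too."""
    assert len(key) == len(plaintext), "pad must match message length"
    return bytes(p ^ k for p, k in zip(plaintext, key))

message = b"ATTACK AT DAWN"
pad = secrets.token_bytes(len(message))     # truly random, used exactly once

ciphertext = otp_encrypt(message, pad)
print(otp_encrypt(ciphertext, pad))         # b'ATTACK AT DAWN'
```

Without the pad, every possible plaintext of that length is equally likely, which is precisely why no amount of compute power helps.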

There is nothing new, at all, in the ‘threat’ of unbreakable encryption. Any argument of the form that this is somehow new or an existential threat is bogus. It’s a very old and very mundane threat at best.

So the FBI wants to have unbreakable encryption broken and to establish the precedent that it MUST be done “on demand”. That, then, will assure ALL those other uses are subject to the same rules. No bank, no telecom company, no software company, heck, no television maker, will stand in the way if Apple folds. Expect the Internet Of Things to be full of Government Eyes and Ears, and ALL communications (including the microphone and camera in your laptop or your internet-connected TV) to be pwned by them. Oh, and by EVERY OTHER GOVERNMENT on the planet too. (What? You think only the USA would use the power of law to force divulgence of The Hack and / or creation of new ones? A precedent is a global thing…)

But Wait, there’s More:

Al Qaeda and ISIS already have their own I.T. departments and already have created their own encryption and messaging tools. They have a hot line message system to tell folks what is secure, and what isn’t. At best, the FBI demand for The Hack will simply move all of their traffic onto those systems and off of the iPhone.

I’d speculate further that the reason the other devices were destroyed and this one was not is simply because there is nothing of interest on it. The San Bernardino shooters were not dumb. They knew which devices to break and which to ignore. They most likely were already using those non-US sourced encrypted apps to communicate, and not the Apple device. The Apple iPhone requires that you get apps from the approved vetted source. Not the kind of thing that is attractive to the Jihadi I.T. Department…

So we’re going through all these histrionics for what? All the shouting over “IF YOUR FAMILY WERE AT RISK!!!” is just political sob story machinery. They ARE at risk, but not from the iPhone. From those other devices that used the ISIS Approved Apps or the Al Qaeda Approved Apps, not the iPhone that uses Apple Approved apps and doesn’t allow the use of unapproved ones. (Yes, with a lot of work you can jailbreak your phone and get unapproved apps onto it, but for a phone issued by your employer, you would have the constant risk of THEIR I.T. department noticing and blowing the whistle on you. Not a risk worth taking.)

In short, The Hack will do precisely NOTHING to uncover any plot, protect anyone in the future, or prevent terrorists from using unbreakable encryption. They already have it in a ‘roll your own’ way, and would rather use other devices where they can install the apps of their choice from their own I.T. department.

You Will Never Know The Hack Was Spread

It is very important to realize that after the FBI gets The Hack made, it will be immediately available to the NSA and others, and YOU will never be allowed to know it was taken by them for their use.

Under FISA procedures, all hearings and decisions are conducted in secret. The Department of Justice has not disclosed even the most basic information about the court’s activities despite repeated requests from Congress, the American Civil Liberties Union and other advocacy groups.

Furthermore, by skirting reports of illegal warrants and unlawful surveillance by the FISA court itself, the FISA Court of Appeals and the U.S. Supreme Court have failed to address several fundamental issues. It is critical that the Congress ensure our judicial system is lawful and proper by providing proper oversight of this secret court.

So the NSA walks up to the FISA court, says “We need a copy to combat terrorism” and they are given a positive ruling. At that point, Apple can say exactly nothing. It’s all a secret and you go to jail for spilling anything. So the next Snowden can then leak The Hack to the world from the NSA… if it hasn’t already leaked from someone else.

Violation Of Contract

This is the most minor of the issues, IMHO. The Hack is a violation of implied contract. In all sorts of materials, Apple has promoted their security. To then create The Hack is a violation of the implied contract to provide that security. Many governments and agencies around the world depended on those assurances when they bought the product. (I’ve seen one report that this feature set was put in at the request of one US Agency so that they would be able to buy the phones.)

Every person who bought any iPhone subject to The Hack can now walk into small claims court and claim a loss due to breach of implied contract to provide a secure phone. Who’s going to pay for all that legal time? All those phones?

Don’t tell me it won’t happen. It is a very large world full of many different legal systems. Somewhere it will be upheld as a breach of contract. Then there’s that whole precedent thing again…

Then all the loss of marketing value and sales comes back to the “taking” aspect…

In Conclusion

I think this makes clear some of the more murky corners of this issue. It isn’t just a simple “Apple wants to sell to criminals” vs “Apple wants to protect my privacy” vs “The FBI wants everything”.

It is also SPY vs SPY on a global scale. It opens Apple to all sorts of risks, costs, and hideous side effects. At the same time, it does NOTHING to prevent unbreakable encryption communications by bad actors, since they already have their own “app for that”. (Several, in fact). All the time opening every single one of us to attack from all corners of the globe WHEN (and it is a when) The Hack leaks out, or is bribed out, or is extorted out, or FISA gets a request…

Apple is doing the right and moral thing. It is the Government that is being excessive and grabby.



About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present “hot buttons” are the mythology of Climate Change and ancient metrology; but things change...
This entry was posted in News Related, Political Current Events, Tech Bits.

40 Responses to Apple vs. FBI vs. Spy vs. Spy

  1. Larry Ledwick says:

    Your thoughts mirror mine on this issue. It is a nose under the tent problem more than anything else and would live on over and over again for every other encryption based product in the world.

    Interesting to note that John McAfee has offered to crack the phone with some of his security team using a truly one-off method which would not open that back door. He points out that the US government is about 20 years behind the cutting edge technology of the hacker and security industry. With physical access to the phone, he asserts it would take him about 3 weeks to crack it, by physically opening up the phone and, at the chip level, pulling all the data off onto another system where a brute force attack could be done successfully without needing to back-door the OS. He could just put that image on another computer and bang on it until he got the right access code. If it locks up, you just start again on another clone image where you left off on the previous attempt. Crude but effective, and it completely sidesteps the OS protection system in a manner that does not defeat every Apple phone in existence.

  2. Serioso says:

    Here’s the part I don’t understand: Assuming the iPhone has a physical structure like other computers, why doesn’t the FBI make 1000 copies of the memory chip and then simply try all 10,000 code combinations? Doesn’t sound like very hard work! Or does Apple have a way to make the memory chip inaccessible/uncopyable? Could this all be a way for the FBI to obfuscate the actual lack of difficulty in breaking into an iPhone? I remain suspicious…

  3. Larry Ledwick says:

    I think you nailed it Serioso; the way the FBI worded the order, it demands that they use only that one technique. If they just wanted the phone’s contents they could have simply gone to Apple and said “We need your help, can you extract the data from this phone?” and left the method up to Apple.
    If they had done that Apple might have helped as long as the process was kept on the QT so that they did not damage their brand by openly cracking the phone.

    The technique you mention is implied in some of John McAfee’s media discussions. In one video I watched he more or less said the same thing you did, but did not go into enough detail to answer your question.

    I assume a hack like that would involve buying a half dozen iPhones and working out the method before you actually crack the case on the real one. In any case to maintain chain of custody, an FBI observer would have to watch the whole process so they would at least know the method used and take it back to the FBI for in house development once it was proven to work.

  4. Sandy McClintock says:

    @EMS, that was a really well argued and thought provoking article :)

  5. Terry Jay says:

    An aside. I read where the County is the one that enabled the 10 and destroy option after it had been returned to them as the owner of the phone, and now the County can’t recall the pass code they installed. The story may be hooey, but it was in the press a few days ago.

  6. Larry Ledwick says:

    Interesting video clip regarding the Apple vs US Gov issue. This commentator at least recognizes that there might be other ways to do it rather than a generic one size fits all OS hack. She mentions basing the crack on specific info unique to the phone like its MAC address.

    I could see where you could do something that would lift the pin limit for the phone using something like two factor authentication.

    For example to bypass the limit you would need the MAC address of the device, plus a secret key from apple plus a secret key from the FBI plus some hash of something else unique to the phone not easily acquired without physical access to the device or perhaps info like the original date of activation.

    By using multifactor authentication from competing interests (i.e. the FBI secret key and the manufacturer’s secret key) you make it very difficult for some unrelated third party to acquire a master key to all phones.

    Of course you still have the problem that it would only be as secure as those secret keys, and someone could be bought to divulge them. That could be partially limited by changing the secret keys periodically, but the phone would need some means, like apt-get update, to go look up the current value or, like a security certificate, to validate those updated keys.

    Certainly not a trivial matter, but at least this commentator recognizes that there might be multiple ways to skin the cat.

  7. Larry Ledwick says:

    Just had a follow up thought, to force the user trying to crack the phone to be in physical possession of the phone you could require that for the access pin limit to be lifted they would first have to change the configuration of the physical phone.

    Remember the days when you had to set jumper settings on disk drives and mother boards to enable or block features.

    You could require that the pin limit crack would only work if the phone had been physically opened and one or more jumpers on the PC board opened or closed. This jumper setting could be tied to the specific device so that different devices would need different jumper changes.

    Again, a huge management overhead for the manufacturer, but at least achievable without breaking every similar phone in the world, since nothing works without someone granting deep physical access to the specific device.

  8. Kneel says:

    Even IF they get what they want, it will be like the limited encryption rules were – a complete joke that only stopped honest, law abiding US citizens. Everyone else will laugh and go elsewhere for the product they want and know is available.

  9. Since they have the physical phone in their possession, then extracting the memory chip (presumably Flash) and reading the contents is a fairly-simple rework job. By doing the same thing to a phone with a known memory content (assuming that the data is stored encrypted) they would know what sort of data to look for and could thus decrypt the target chip. A brute-force attack ought to work. Apple’s help should not be needed.

    There’s thus no reason to create a hack that will work on a phone that is not in your physical possession, and the request to do so stinks somewhat. In this case, the government is the legal owner of the phone and its data – a recent ruling showed that personal messages on a work-supplied phone are not considered as private to the user but instead belong to the employer. This isn’t unreasonable – if you want to send private messages you shouldn’t be using the company kit and bandwidth, and you certainly shouldn’t do it on company time. Apple could reasonably comply with the request by helping with the extracted memory-image, but that would set a precedent that would kill their business. Damned if you do and damned if you don’t…. By making the request through the courts, they’ve put Apple in an impossible position.

    As Chiefio has stated, codes are unbreakable but encryption is just difficult to break. Encryption thus buys you some time before someone can work out what you said, and that time reduces as the computer power rises, and maybe there’ll be step-changes as people work out a new algorithm for cracking or how to actually use quantum computing. State-of-the-art encryption is probably good for about 5 years, though, unless someone really wants to know what’s inside and expense is no object.

    Still, if the FBI say they can’t crack the code when they’ve got the phone in their hands and a free hand with what they do to it, I suspect they aren’t telling the truth. Even lifting the write-enable pin on the Flash would work, after all. It looks to me that they just want an easier method that can be applied to phones they don’t own. That bit of code should not be written for all the reasons Chiefio states. It’s not needed if you have the phone. As part of the production-line tests, there will most likely be a rig that can read/write the memory anyway, so even the chip-desoldering probably isn’t really required. There will be simulations available of the phone that can read that memory-image, and the simulations can be run in parallel to find the PIN. No new software really needed after all, just legwork.

    It’s pretty certain that the phone will end up decoded in the end, though, and that nothing useful will be found on it.

  10. Richard Bellew says:

    I state up-front that I do not have the expertise to enter this discussion on a technical level (much though I enjoy reading about it) but I do have an iPhone and so have a (probably dumb) question I would like to ask. My understanding is that the FBI have requested Apple to build ‘The Hack’ as they cannot at present get past the iOS Lock-Screen. Suppose Apple were to comply with this request. How does The Hack then get installed on the target iPhone? Presumably as an iOS upgrade? (Is there any other way?) My iPhone (OK, it’s an old 4S) will download an upgrade, but then won’t install it till I ask it to. But I can’t ask it to until I’ve entered the unlock code. And the reason the FBI want The Hack is because they haven’t got the unlock code? That does not compute, unless I’ve misunderstood something pretty fundamental. Simon Derricutt said “…they just want an easier method that can be applied to phones they don’t own.” Certainly sounds like it, doesn’t it.

  11. p.g.sharrow says:

    Apple has built Millions of these “hack proof” phones, and Law enforcement agencies already have hundreds of these things in their hands that they want to download. Governments all over the world want a back door for their use and Apple says there IS NO BACK DOOR. As long as there is no back door, Apple is safe from a deluge of demands from any “pissant” jurisdiction-owned judge demanding that the unhackable phone be unlocked for them. The FBI publicly says that they need this one phone opened, but privately this is just a drive to establish that all equipment providers MUST provide back doors. “Let no crisis go unused” is always the cry of bureaucrats that want more control over the people. If there is a crack, NSA will seize it on the quiet, for their own purposes. Apple has no alternative; “There is no back door built in by the manufacturer” is the only reply they can give. We know that nothing is hack proof if you have enough time and resources to spend. The FBI is lying, to make Apple create a back door into all iPhones for all governments to demand the use of.
    In my opinion there is nothing of value in that phone, as the terrorists knew that the local government owned it and he signed an affidavit that they could download any contents from it at any time. They did not waste the effort to destroy it because there is nothing of value in it…pg

  12. p.g.sharrow says:

    @Richard; as I understand it, the “Hack Proof Apple iPhone” is version 6s or newer. There is unlocking software available that must be installed Before the phone is “locked”, but the County IT guys neglected to install it before the phone was handed out…pg

  13. Larry Ledwick says:

    I saw one item that said the intended method was to use the update feature on the phone via the data port. Apparently Apple can “force” certain critical OS upgrades without the users intervention by signing the update with the proper secret key. The phone then recognizes the update as a critical OS upgrade from Apple and bypasses the need for the user to approve it.
    I can’t find the specific link right now but that is what I recall off the top of my head.

  14. Petrossa says:

    Just overwriting the Secure Enclave firmware in memory via software is a perfectly straightforward solution. BIOSes have been copied into RAM since forever; the Secure Enclave is nothing more than a specialized BIOS. That Apple wants to pretend its dwindling iPhone market share is impervious to information capture is understandable but not realistic nor possible.

    In other words, if you really care about your privacy don’t use anything connected to Internet, client cards, credit cards, don’t pass by License Plate readers, don’t use tollbooths, etc. anything that registers your existence.

    Which means, in fact, becoming a hermit living off the land.

    This fight was lost decades ago. Don’t bother.

  15. Richard Bellew says:

    @pg: Thank you for the explanation. I knew I was behind the curve, but I hadn’t realized it was that bad!

  16. E.M.Smith says:

    Well, a bit of digging showed the guts of the iPhone 6:

    NAND flash is a distinct chip.

    Law Enforcement can “lift the chip” and subject it to a brute force decryption. Yeah, a PITA and a load of computes needed.


    It all depends on just how strong the encryption used might be. While the unlock code is only a few digits, they might have designed it to use a VERY complicated encryption key. In that case, brute force will not work.

    HOWEVER, however… in that case, the decryption key must be stored somewhere in the phone in the ‘unlock’ process or area, which just means you need to brute force THAT step of the decrypt / unlock process.

    At a first blush look-over, IMHO, one ought to be able to ‘lift the chip’ and with directed memory access, pluck out any hidden key and then decrypt the rest of the chip. Anyone modestly familiar with how the IOS works on the security side ought to be able to ‘flesh out’ that line of attack (i.e. a ‘position paper’ from Apple on how it does the encryption ought to be enough) and I’d guess about a couple of weeks work after that. Oddly (or maybe not so oddly…) that’s about the same estimate of work as McAfee gave for “his team” to break in…

    So I’m even more of the opinion now that this is a move to force Apple to create an unlock / backdooring bit of code that can be “pushed” as a “critical update” onto any iPhone remotely to enable wide open access “on demand” by Agencies and “law enforcement”.

    Were it REALLY about “just this one phone”, they would already have that NAND Flash in a mount outside the phone, write enable turned off, and the data in it backed up on alternative media…

    Oh, and unless there is some other bit of ‘scratch space’ that isn’t shown in the diagram (or that I missed in a brief look-see) that holds something “special”, the “magic bits” are in NAND on a chip, and that implies strongly that the idea of “just make 1000 copies of it” and run each one through 10 code trials really ought to work.

    I’m seeing less and less need for any “special” IOS version or “critical update” push and a very reasonable path via hardware extraction and direct attack on a locked copy of the NAND chip.


    Not everyone needs to be a technologist to have an opinion about what is right, or to participate by asking interesting questions.

    Per your question: Substantially what P.G. said. It’s all about the ability to push a ‘critical update’ into a locked phone as a generic attack. For a specific attack, the NSA is excellent at things like extracting data from chips and doing decryption attacks. Made even easier by the simple fact that they KNOW what method Apple used, how the keys are generated, how big they are, etc. etc. In short, they know exactly what attacks to apply.

    Frankly, I’m pretty sure the NSA has the skilz to make their own unlocked version of IOS and push it… Apple will NOT be secure against an NSA attack / infiltration and I’d wager they already have had their “Secret Signing Key” lifted by the NSA. As mentioned above: I was in charge of Engineering Computer Operations for 7+ years at Apple and that included network security and security of the Engineering machines. Had the NSA showed up at my building with a Warrant and said “Give us access, or go to jail. Oh, and you MUST be silent about this.” I would have given a “I have to ask Legal for permission” statement (followed by “I have to kick this up to my Management”), then when they pulled out their guns +/or had The Lawyer show me The Law, just handed over the keys. Now it’s MUCH more likely they would have simply gone to the V.P. or P. and had THEM tell me to “Give this new employee access”, but if for some reason that didn’t work, they just work on the “food chain”… FWIW, I was one of the MOST careful about access and the MOST resistant to handing over anything to anyone. When guys show up with guns, badges, and a paddy wagon parked out front who can ‘disappear’ you and yours, with a warrant in hand, well… ‘better part of valor’ and all that…

    Which really gets back to my statement in the posting above that Apple would need to create a bunker of some kind for the creation of this code. Simply put: You can NOT depend on any single or few PEOPLE for your security. They can be coerced into compliance. (Look at the IRS, and they rarely even use guns…) You need a bunker that has very strong physical barriers to entry, and very strong SYSTEMS to prevent entry, with multiple authorizations required for access and with constant MONITORING by remote security and by local guards. That whole “Quis custodiet ipsos custodes?” thing. For this level of exposure, you need that level of protection, so that WHEN the Power Actors show up, you just point at the camera, say “Above my pay grade, but The Boss is now being informed” and wait for the call, the guards, or the police as necessary.

    That level of protection was not in place at any Silicon Valley company where I’ve worked. To me, that says an NSA “Visit” would be an easy success. (As evidence, I would point to the PRISM program that is essentially the same thing, but minus the physical visit and with a ‘persuasion of management’ as the main entry point…). Now, in this case, the assertion is that Tim Cook has said “No” to that level of approach. To me, that says “Next stop, elsewhere on the food chain”. As NSA is very very good at doing intrusions, unless Apple has been very, very good at defending, NSA has the “signing keys” and source code and can do what they want. (They have already used the “critical update” push method before and know it is very useful. None of this is a new idea to them; more like S.O.P., Standard Operating Procedure.)

    With that said: The FBI is no NSA. Last time I dealt with them (admittedly 30 years ago…) when we called up to say “We have what is most likely a Russian agent bouncing off our router and attacking a military site” we were ‘informed’ that “THE Agent who handles that kind of crime is on vacation. He can call you back in a week or 2.” Yes, that bad. We “shut down” the attack on our own, AFTER calling the military site that was under attack using a phone number we got out of their router… and having them say “How did YOU get this NUMBER?”… See, they were not supposed to have any internet connection… Yes, they were only worried about having their clandestine internet spigot advertised to their bosses, not worried that a probable Russian had already gotten into it… (Said Russian had been unable to get past our ‘Honey Pot’ and into our site, so had ‘moved on’. We were watching the whole time and when we saw who he was looking at next, decided to ‘blow our cover’ by taking action. FWIW, he never came back to our router again…) So I’m fairly sure that the FBI (even with their recent large expansion of staffing and attention to the area) is unlikely to have NSA level skilz, or even Apple level skilz… And “Agencies” rarely like to ask each other for “help”…

    Oh Well.

    Sometimes I miss the “old days” and Spy vs Spy… Other times not so much ;-)


    I think your approach might work, after looking at the ‘teardown’ link / photos. It would depend a little bit on how good the Apple folks were at obfuscating which part of the FLASH held what. A few ‘trials’ with other iPhones ought to show where updates get planted, where unlock code changes get written, etc. IFF Apple were very tricky about it, that location would ‘wander’. It would be a Royal PITA to make that happen, but that’s the only ‘defeat’ I see on first glance. With that, you have an encrypted Bag’O-bits and no idea which part does what. (BUT, somehow the BIOS needs to know… so attack there…)

    Looks to me like a much better line of attack in any case.

    I’m also made rather suspicious by the FACT that the warrant specifies the exact method Apple must use to do The Hack. A more reasonable and expected warrant would just say “Help us get the bits as clear text”… and not specify to “build a tool that does exactly this”.


    Globally, there will be thousands to tens of thousands “standing by”…

    BTW, this general “problem” of vendor compromise is WHY I’ve put up articles about how to encrypt disks and files using the Raspberry Pi and Linux. With the BIOS under YOUR control and with the encryption key only in YOUR hands, those encrypted blobs are fundamentally unbreakable. ( Modulo the “guy with a gun asking for the passphrase” issue…)

    FWIW, I’ve used a few of these “containers” over the last few years and they work rather well. Biggest issue I’ve run into is trying to remember the passphrase used on a file on a backup copy a couple of years after the fact… I’ve not yet found a solution I really like to the issue of “Key Escrow” for old keys. Then again, most folks probably don’t have a dozen “containers” with different keys changed a few times as they were ‘experimenting’ ;-) Not lost anything (yet), but it’s the biggest risk, IMHO. Minor issue is putting an SD card into some OTHER device that doesn’t have a clue it holds an encrypted LINUX file system and it saying “Empty card. Format?”… so have a way of knowing which chips you encrypted…

    Second FWIW: I also make sure that anything I really care about on the phone is stored on the removable mini-SD card. Why destroy a phone if all you need to do is take out the dinky card and melt it?

    Oh well, the shadow game must go on.

  17. E.M.Smith says:

    While driving today, I was thinking. It’s something I do to keep busy while the brain stem runs the car. (Really, it does. I’m mostly just a passenger making suggestions about best route…)

    IF (and it is an IF at this point, but a likely IF…) the NAND Flash can be duplicated and re-flashed onto target phones, then one can do a “search the key space” attack. I pondered “What would be the budget and staff?” Something manager types tend to do ;-)

    It ought to take about 5 to 6 seconds to key in a key sequence. Ten of those is “about a minute”. So for one phone, the “search 10” time is 1 minute plus reload FLASH time. Figuring it might be a long process to get the flash reflashed and / or Govt Employees are not the fastest, figure on about 4 minutes to reflash the phone. That makes it about 5 minutes / 10 keys. Call it 120 per hour. Just to allow some coffee time, call it 100 per hour per phone-person. That makes a regular work shift about 800 keys searched. (In reality, you would have a ‘flashing station’ where several phones would be reflashed at once and the worker would not be idle for 4 minutes waiting; they would just pick up the next one and ‘move on’. But I want this to be a worst case…)

    There are some 10,000 choices as I understand it ( I could be very wrong, but somewhere or other I saw it was a 4 digit code. IF it can be longer than that, all bets are off…) So that makes a full search about 10,000 / 800 staff-days. Or 12.5 staff-days. Since there are 5 days in a work week, that would be 2.5 work weeks for one person, or one week for 2 full time and one 1/2 time employee. Add in a Supervisor and a Manager, and lose a bit of time to staff meetings / training, figure a staff of 5 could have it all done and wrapped up in about a week.

    Even if you allow a week for a good Engineer to work out the process for cloning the FLASH and making the cloning station, we’re talking way less than a month, closer to 1/2 month, and a staff level of less than 1/2 dozen, max, with 2 “overhead” positions in the supervisor and manager.
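
    For what it’s worth, a quick script confirms the staffing arithmetic above. All the inputs are the guesses from the estimate, not measured figures:

```python
# Back-of-envelope check of the "search the key space" budget above.
codes = 10_000             # 4-digit unlock keyspace
guesses_per_reflash = 10   # tries allowed before the wipe would trigger
minutes_per_cycle = 5      # ~1 min of keying plus ~4 min to reflash
raw_per_hour = 60 // minutes_per_cycle * guesses_per_reflash  # 120 codes / hour
per_hour = 100             # rounded down for coffee time, per the estimate
per_day = per_hour * 8     # 800 codes per worker per shift
staff_days = codes / per_day
print(staff_days)          # 12.5 worker-days for a full sweep of the keyspace
```

    So 12.5 worker-days, i.e. 2.5 weeks for one person or about a week for a small team, matching the estimate.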

    Sure, this all depends on the assumption that “the FLASH can be copied” and that all you need do is find the 4 digit unlock code. Apple might have put some tricks in that prevent such a thing (though a test could be done in 1 day to find out…) and bollix up the estimate. BUT, the simple fact is that IFF chip cloning is possible (and it very much ought to be), this approach is something that would be achievable by a small group with little funding.

    Yes, the work would be boring as hell. So is filing billing jackets. So is managing backup tapes. I’ve done both of them… Most work is pretty dull. It still gets done.

    Looks to me like there is one Very Big and Very Important question to be answered before all sorts of legal machinations take place: Can the iPhone 6 NAND Flash be cloned into another phone and then the unlock sequence key used? (I could see a case where, for example, some CPU chip serial number is made part of the unlock hash…) It ought to take a decent hardware Engineer with a good “phone repair” bench about an hour to answer that question. (But I’d give him 2 to 4 in his budget since it is a bit ‘explorational’…)

  18. Terry Jay says:

    If the architecture is known, what connects to what, any reason not to simply disassemble the whole thing to start with? Air gaps everywhere. Then proceed to clone or whatever.

    And returning to the earlier comment that the County installed/activated the 10-and-done issue, how is this the manufacturer’s problem?

  19. EM – it’s almost certain that Apple have an emulator for their iPhones, so that rather than running on real hardware they have complete control. In that emulator, if there’s a processor ID it can be reset to whatever is wanted. In any case, the call to read processor ID could instead be patched in a real phone.

    Meantime the NSA have a lot of practice cracking encryption, and ought to know how Apple have encrypted the data so they have a good start-point. With a 4-number code to crack, the problem almost seems trivial given the resources available.

    In your time protecting data, your main weapon was really the air-gap. That doesn’t apply here, since they have the hardware in their hands. The whole thing looks like both sides are lying as to what their capabilities are.

    An interesting point – at the moment the implications are that if you forget the code for your own iPhone (or your ex-partner has maliciously changed it, or your kid plays with the numbers >10 times) then Apple won’t help you recover the data. Although this is always a problem with safe encryption there will probably be a lot of people this is going to bite. A simple hardware problem could cause a problem too (misreading the kb input). It’s great while it’s all working to spec, but a simple error could cause a critical data-loss. Unintended consequences.

  20. Larry Ledwick says:

    An interesting point – at the moment the implications are that if you forget the code for your own iPhone (or your ex-partner has maliciously changed it, or your kid plays with the numbers >10 times) then Apple won’t help you recover the data.

    Which is why they have a backup to the cloud feature which in this case was not activated recently enough to save recent data.

    Good security requires good practices. If you choose to activate the encryption 10-and-done feature, you also need to look at the other feature sets that cover those issues, such as the remote management feature the county was paying for but had not activated, and the backup-to-cloud feature. And slap the hand of the local IT guy who messed with the phone after it was seized as evidence; once it was evidence, it should only have been touched by a well trained forensic data specialist who knew the consequences of any steps taken to access the phone.

  21. Larry – once you put data in the cloud I presume it is also encrypted, and probably using the same pass-code, and probably just as locked to the actual handset if so desired. Same problem in trying to recover it, really. I don’t however put data in the cloud since I don’t know where it is, under what jurisdiction, and who has access to it.

    I’d hazard also that most iPhone users won’t naturally use good practices on backups and security, but will instead rely on the built-in safety features until they have a big data loss. Things are, after all, a lot more reliable these days, and when people do get bitten it’s a big loss and a harsh lesson.

    This particular netbook I’m using now is Android-based, which makes backup not easy except for to the cloud. OK for me since it’s only used for net stuff and no backup is needed. If I was using it for local data I’d be annoyed. I suspect part of the reason for cloud-based backup is to provide another source for data-mining. I haz your data and I’ll use it…. It’s even implied that Apple could have got the data back easily from the cloud, though I’m not certain about that. Is it secure or not? Kind of hard to be certain from the reporting, where they say that the lack of backup to the cloud meant they couldn’t help. Or have I misinterpreted that?

    One way or another, this has opened up a nice can of worms.

  22. Swordmaker says:

    Let me clear up some of your questions and misunderstandings about Apple’s system for data security on iOS devices. Keep in mind this applies to the A8, A9, and later system-on-a-chip (SoC) processors, but some of this also applies to dedicated separate hardware on the older iPhones, such as was used in the iPhone 5C in question. The iPhone 5C has an A6 processor.

    Secure Enclave Explanation

    Apple uses a dedicated chip to store and process the encryption. They call this the Secure Enclave processor. The secure enclave stores a full 256-bit AES encryption key.

    Within the secure enclave itself, you have the device’s Unique ID (UID). The only place this information is stored is within the secure enclave. It can’t be queried or accessed from any other part of the device or OS. Within the phone’s processor you also have the device’s Group ID (GID). Both of these numbers combine to create 1/2 of the encryption key. These are numbers that are burned into the silicon, aren’t accessible outside of the chips themselves, and aren’t recorded anywhere once they are burned into the silicon. Apple doesn’t keep records of these numbers. Since these two different pieces of hardware combine together to make 1/2 of the encryption key, you can’t separate the secure enclave from its paired processor.

    The second half of the encryption key is generated using a random number generator chip. It creates entropy using the various sensors on the iPhone itself during boot (microphone, accelerometer, camera, etc.). This part of the key is stored within the Secure Enclave processor as well, where it resides and doesn’t leave. This storage is tamper resistant and can’t be accessed outside of the encryption system. Even if the UID and GID components of the encryption key were compromised on Apple’s end, it still wouldn’t be possible to decrypt an iPhone, since that’s only 1/2 of the key.

    The secure enclave is part of an overall hardware based encryption system that completely encrypts all of the user storage. It will only decrypt content if provided with the unlock code. The unlock code itself is entangled with the device’s UID so that all attempts to decrypt the storage must be done on the device itself. You must have all 3 pieces present: the specific secure enclave, the specific processor of the iPhone, and the flash memory that you are trying to decrypt. Basically, you can’t pull the device apart to attack an individual piece of the encryption or get around parts of the encryption storage process. You can’t run the decryption or brute forcing of the unlock code in an emulator. It requires that the actual hardware components are present, and can only be done on the specific device itself.
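
    As an illustrative sketch only (Apple’s actual key construction is not public at this level of detail, and the UID/GID values here are invented), entangling the passcode with burned-in hardware secrets means the same passcode yields a different key on different silicon, which is why the attempt must run on the original device:

```python
# Sketch of passcode "entanglement" with per-device hardware secrets.
# UID / GID are hypothetical stand-ins for the values burned into the silicon;
# PBKDF2 stands in for whatever key-derivation Apple actually uses.
import hashlib

UID = b"burned-in-unique-id"   # per-device secret, never leaves the chip
GID = b"burned-in-group-id"    # per-processor-family secret

def derive_unlock_key(passcode: str, uid: bytes, gid: bytes) -> bytes:
    # Mix the short passcode with the hardware secrets to get the real key.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), uid + gid, 100_000)

key_on_device = derive_unlock_key("1234", UID, GID)
key_elsewhere = derive_unlock_key("1234", b"other-uid", GID)
print(key_on_device != key_elsewhere)  # True: same passcode, wrong hardware, wrong key
```

    That is the property that defeats the “clone the NAND and guess elsewhere” approach discussed earlier in the thread.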

    The secure enclave also has hardware-enforced time-delays and key-destruction. You can set the phone to wipe the encryption key (and all the data contained on the phone) after 10 failed attempts. If you have the data-wipe turned on, then the secure enclave will nuke the key that it stores after 10 failed attempts, effectively erasing all the data on the device. Whether the device-wipe feature is turned on or not, the secure enclave still has a hardware-enforced delay between attempts at entering the code: attempts 1-4 have no delay, attempt 5 has a delay of 1 minute, attempt 6 has a delay of 5 minutes, attempts 7 and 8 have a delay of 15 minutes, and attempts 9 or more have a delay of 1 hour. This delay is enforced by the secure enclave and cannot be bypassed, even if you completely replace the operating system of the phone itself. If you have a 6-digit pin code, it will take, on average, nearly 6 years to brute-force the code. A 4-digit pin will take almost a year. If you have an alphanumeric password, the amount of time required could extend beyond the heat-death of the universe. Key destruction is turned on by default.
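
    Taking that delay schedule at face value (and assuming the wipe feature is off, so you can keep guessing forever), a quick calculation reproduces the “almost a year” figure for a worst-case full sweep of the 4-digit keyspace:

```python
# Worst-case time to walk the full 4-digit keyspace under the hardware
# delay schedule described above (wipe-after-10 assumed OFF).
def delay_minutes(attempt: int) -> int:
    """Enforced wait BEFORE the given attempt number, per the schedule."""
    if attempt <= 4:
        return 0
    if attempt == 5:
        return 1
    if attempt == 6:
        return 5
    if attempt in (7, 8):
        return 15
    return 60  # attempt 9 and every attempt after: one hour each

total_min = sum(delay_minutes(n) for n in range(1, 10_001))
print(total_min / 60 / 24)  # ~416 days -- "almost a year", as stated
```

    (The same arithmetic makes a 6-digit sweep far longer still, which only strengthens the point.)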

    Even if you pull the flash storage out of the device, image it, and attempt to get around key destruction that way, it won’t be successful. The key isn’t stored in the flash itself; it’s only stored within the secure enclave, whose storage you can’t remove or image. Even the user’s passcode is not stored on the device. It’s been converted into a one-way HASH which is stored in the Secure Enclave. Even if the hash could be read, which it can’t, being one-way you cannot derive the original passcode from it. Each time the passcode is entered, the hash is recalculated and then compared with the stored hash. If they match, the log-in continues to the stage that builds the encryption key and unlocks the encryption. Once these steps are completed, the passcode is discarded and not retained anywhere in RAM, so no persistent shadow of that input can be read by any app.
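
    A minimal sketch of that store-only-a-hash check. PBKDF2 and the salt here are purely illustrative of the general technique; whatever the enclave really uses internally is not public at this level:

```python
# Sketch of "store a one-way hash, recompute and compare on each entry".
# The passcode itself is never stored; only the derived hash is kept.
import hashlib
import hmac
import os

salt = os.urandom(16)  # per-device random value (illustrative)

def hash_passcode(passcode: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 200_000)

stored_hash = hash_passcode("4951")  # set once; the passcode is then discarded

def try_unlock(candidate: str) -> bool:
    # Recompute the hash for the candidate and compare in constant time.
    # The original passcode cannot be recovered from stored_hash.
    return hmac.compare_digest(hash_passcode(candidate), stored_hash)

print(try_unlock("4951"))  # True
print(try_unlock("0000"))  # False
```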

    Each boot, the secure enclave creates its own temporary encryption key, based on its own UID and a random number generator with proper entropy, which it uses to store the full device encryption key in RAM. Since the encryption key is stored in RAM encrypted, it can’t simply be read out of system memory by reading the RAM bus.

    The length of the actual encryption key itself is huge, possibly at least 132 characters, given the combination of entangled user passcode, internal UID, GID, and random number seeded from sensors (which was read when the encryption was first initiated, with the seed stored in the Secure Enclave so each re-creation of the key is identical), each of which can be any of the 223 possible characters in the Apple character set.

    Attempting to brute force the data itself on the Flash drive or a cloned copy is futile. I once calculated the time it would take using a supercomputer capable of trying and testing 300,000 possible keys per second, or about 9.5 trillion per year (not a trivial task, as you’d have to determine whether the data decrypted into anything sensible as text in almost every known language, compressed photos, compressed videos, etc., to determine if a key was or was not a hit!). It turns out that it would only take 5.62 X 10^195 years to try every possible encryption key.

    I think the need or interest in what’s on that iPhone would be moot before you got too far into the task. Besides, it’s estimated that the Universe will have run down to a soup of sub-atomic particles by about 4.6 X 10^80 years as entropy causes all atoms, protons, neutrons, and even electrons to decay. So, not to worry. Of course, the probers could get very, very lucky and hit the right key on the first ten thousand tries, but then I could win the lottery without buying a ticket, too.
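
    The 10^195 figure follows from the long composite key described above. Even the bare 256-bit AES keyspace alone, at the same assumed testing rate, is hopeless, as a couple of lines of arithmetic shows:

```python
# Scale check: full sweep of just a 256-bit AES keyspace at the
# assumed rate of 300,000 key trials per second.
rate = 300_000                      # keys tested per second (assumed above)
seconds_per_year = 3600 * 24 * 365
keys = 2 ** 256                     # 256-bit keyspace
years = keys / (rate * seconds_per_year)
print(f"{years:.2e}")               # ~1.22e64 years for a full sweep
```

    About 10^64 years for the 256-bit key alone, so any plausible keyspace is already far beyond the projected lifetime of the universe.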

    I hope this helps you understand how secure the later iOS devices really are.

    On the iPhone 5C, a lot of this is done in software. Later models moved it all to hardware. That’s why Apple said it could get data out of the 5C, but building the hack puts ALL iPhones and iPads at risk.

  23. Company I, 23rd South Carolina says:

    …a lot of you guys are clouding the issue with tech talk….I’ll clear it up for you:
    A)Terrorists are coming to kill us in the name of Islam.
    B) Apple can help protect us from that.
    C) Apple won’t.
    ….if a Muslim walked into Apple HQ and hosed down the employee cafeteria with automatic weapons fire I wouldn’t shed a tear.

  24. E.M.Smith says:

    Looks like Facebook and Twitter have just been threatened by ISIS over account closures:

    Zuckerberg and Dorsey now under threat personally. So that “scenario” of opening the phone painting a big target on Apple now has significant support…


    Thanks for the technical details. I’d expected something like that in the “next” iPhone, and didn’t realize it was already in this one. That is the kind of hideously devious stuff I’d expect from the folks at Apple, and rather well thought out (as was the defense of the place when I was there, though this is very much more grown up in the world of bigger threats).

    Given that, my speculation about attack on the FLASH is moot. (Believe it or not, I’d actually thought to myself during that speculation “It would be kind of dumb to just encrypt it with the short key… they would likely use a larger key internally… but then ought to protect that with hardware… likely in the next iPhone”… So I was headed in the right direction, just had not kept watching them closely enough to know they’d already gone there. Oh Well.)

    Given your description, I’m not seeing any obvious line of attack. Which is sort of the point behind security design. As the timer is in hardware, and the hardware is immutable, the timer IS going to bite. Not seeing how a swap / rewrite of the OS is going to let multiple trials happen, either. At this point the problem is either “beyond my pay grade”, or would require a large P.O. to Starbucks and a dedicated month or two to find an angle to explore.

    Maybe I will get an iPhone after all… (The rest of the family have them. I have an old LG flip phone ;-)

    @Company I, 23rd South Carolina:

    Well, since I am a tech type and since this is a tech posting and since the problem must be a technical one, it is entirely appropriate to explore the technical aspects. As noted by Swordmaker, the tech aspect looks set to dominate any legal ruling anyway. That whole secure hardware enclave angle is incredibly secure, and as someone with 30+ years of dealing with computer security and encryption I’m not seeing even what direction to attack it from. I’m thinking maybe things like reading out the secure enclave memory with exotic scanning for electron density, and other experimental starting points, since all the non-exotic ones look closed.

    Now, to your position:

    Terrorists will always be coming in the name of Islam
    Apple most likely can’t help protect us, given the Secure Enclave structure described.
    I’m glad Apple will not waste time on this.

    Now, realize too:

    Your GOVERNMENT is coming to get you and yours in the name of “security”.
    Apple can help with that, and has.
    I’m sure they will do even more.

    If an FBI agent walked into Apple HQ with a warrant and got politely shown the door, I’d not shed a tear…

  25. “Those who surrender freedom for security will not have, nor do they deserve, either one.”
    – Benjamin Franklin

  26. E.M.Smith says:

    Looking into it a bit more…

    Has some useful details.

    First off, it implies that the phone is an iPhone 5c, so not the absolute newest.

    Another one of the researchers, Senior Security Consultant at IOActive and hardware reverse engineering specialist Andrew Zonenberg, explained the complex and delicate process of de-capping followed by an invasive microchip attack, a hack he suspects is known to and used by some U.S. government intelligence agencies, if probably not the FBI. It could be used on the iPhone 5c, he said, though he’s never attempted it on that particular device.

    It then covers a method of taking the chip out of the encapsulation and reading off bits one at a time to get the UID extracted in an exotic kind of scanning.

    Assuming that the hacker has already poured months and tens of thousands of dollars into research and development to know ahead of time exactly where to look on the chip for the target data — in this case the iPhone’s unique ID (UID) — the hacker would, micron by micron, attempt to expose the portion of the chip containing exactly that data.

    The hacker would then place infinitesimally small “probes” at the target spot on the chip and read out, literally bit by bit, the UID data. The same process would then be used to extract data for the algorithm that the phone normally uses to “tangle” the UID and the user’s passkey to create the key that actually unlocks the phone.

    I’d envisioned a more generic energetic based approach (though not worked out any details – thinking maybe an NMR type approach of looking for energetic atoms) but this physical probe looks more attainable.

    Has some other interesting bits in it. Looks like the ARM chip used is also designed with security in mind:

    There are numerous reasons Apple moved to the A7 processor. One reason is the hardware requirements of Touch ID. To economically create the Secure Enclave, Apple needed a processor that is already aware of the concept of encryption and security at a native level and has the dedicated hardware to make a segregated and secure area with in the processor architecture.

    About three years ago ARM began to look into this very issue, and through a number of partnerships created what is now known as TrustZone/SecurCore [4]. TrustZone technology is tightly integrated into the A7 processor and extends throughout the system via the AMBA AXI bus and specific TrustZone System IP blocks. This system approach means that it is possible to secure peripherals such as secure memory, crypto blocks, keyboard, screen and sensors to ensure they can be protected from software attack.

    Then they go on to say it looks hard to crack…

    The A7 Is Optimized For Secure Mobile Payments

    Thus we can really see just how deep the security runs in DNA of the A7 processor. The deep level hardware based secure architecture is rather rock solid. It would require a rather large magnitude of hardware hacking to even attempt access to the data stored in the Secure Enclave.

    It also looks like the idea of using a fingerprint has been carefully limited:

    To use Touch ID you will also have to create a passcode as a backup. Only that passcode can unlock the phone if the phone is either rebooted (for example, after a full battery drain) or hasn’t been unlocked for 48 hours. This is a genius feature meant to set a time limit for criminals if they try to find a way to circumvent the fingerprint scanner.

    So if someone finds your phone, they can’t just come and collect a finger 3 days later…

    It’s looking more and more like there isn’t a clean line of attack on this thing and that any line of attack is going to be experimental and take months, at best.

    Unless there is something special about this particular release of the iPhone (C vs S vs ?) that has it open to the Software Signing Key approach, it looks pretty much locked to me.

    (Yes, that’s a ‘dig here!’ on the particular model software / hardware maturity and level; but I’m unlikely to do that digging as I have “things to do”…)

    This, too, might explain why Tim Cook is willing to stand up and take a position of open defiance. IFF he already knows it can’t be technically done in any reasonable period of time and money, then it’s easier to make the morality and privacy argument (setting that precedent) knowing all the time you have the Ace in the pocket of “can’t do it anyway”…

  27. Larry Ledwick says:

    …a lot of you guys are clouding the issue with tech talk….I’ll clear it up for you:
    A)Terrorists are coming to kill us in the name of Islam.
    B) Apple can help protect us from that.
    C) Apple won’t.
    ….if a Muslim walked into Apple HQ and hosed down the employee cafeteria with automatic weapons fire I wouldn’t shed a tear.

    As noted above, terrorism is embedded in Islam and has threatened this country and her citizens from the day of its founding. By the year 1650 the Barbary pirates held 30,000 captives in Algiers alone from their piracy on the high seas in the Mediterranean.

    March 1786, Thomas Jefferson and John Adams went to London to negotiate with Tripoli’s envoy (about the piracy on American ships)
    Tripoli’s envoy, ambassador Sidi Haji Abdrahaman
    “It was written in their Koran, that all nations which had not acknowledged the Prophet were sinners, whom it was the right and duty of the faithful to plunder and enslave; and that every mussulman (follower of Islam) who was slain in this warfare was sure to go to paradise.”

    We fought our first war with them in (1801-1805). The British made two attempts to suppress Algerian piracy after 1815, and it was finally ended by the French in 1830. This is not new; only the historical illiteracy of the public makes it seem new. The newest rampage of terror started in the 1970s (yes, Martha, we the Western democracies have been at war with radical Islam for almost 50 years now). Just a few of the major incidents:
    September 5, 1972 Munich Olympic Games
    October 23, 1983 Marine Barracks, Lebanon
    December 27, 1985 machine gun attacks at Vienna and Rome Airports
    December 21, 1988 Pan Am Flight 103
    February 26, 1993 first World Trade Center bombing, New York
    1998 bombings of the U.S. Embassies in Nairobi, Kenya and Dar es Salaam, Tanzania
    October 12, 2000 USS Cole
    September 11, 2001 second attack on the World Trade Center, New York

    Apple cannot help, because the technology of their newer phones makes it impossible. You can issue a court order that orders you to put the planet Jupiter in a quart mason jar and put it on Funk & Wagnall’s porch, but that does not mean it is possible to do.

  28. Larry Ledwick says:

    Hmmm tried to just put the link in but I guess wordpress wanted to dump the whole thing.

  29. E.M.Smith says:


    I’m seeing a Cato story (perhaps the Tumblr format confused wordpress for a little while?).


    One hopes the iPhone has an anti-butt-dial feature built in, or else a whole lot of folks can end up locked out PDQ… ;-)

    BTW, “my time protecting data” started in about 1980, and was Air Gap then whenever possible. Yet by about 1983? at Apple I had given the OK to “connect to the internet” and installed our (their) very first such connection. For at least a half decade “my time” included many much more elegant solutions than an “air gap”. (We had to build these before I’d allow the connection.)

    Not the least of which was one of the first uses of the (then new) BSD idea of shadow password files (though we left what LOOKED like the original in place so as to cause folks to spin on it and tip their hand, unlike the new version that signals the crypt-text is missing, and hid the real shadow password file much more deeply, in a very hard to find place / way), and a chroot architecture on the Cray (so that the really, really secret half was not even visible to the folks in the rest of Engineering, who were already inside the Engineering network, that was already inside the Keep Out The Rest Of Company firewall, that was already inside the firewall to the DMZ, that was already inside the Corporate Boundary Firewall that was connected to the Internet…).

    Plus a few dozen other “neat tricks” that I’ll not be divulging just here. Well, maybe just one more: we used the SecurID card that changes the code every minute. On connecting to anything “secure”, you needed to have The Card to get privs. Two Factor Authentication way before it was trendy. You need to know “your secrets” and have “your card”. (It also included a ‘distress code’ that would grant access, but set off silent alarms that you were being threatened…) It was Very New then, now more ‘old hat’. Very effective, too.

    I don’t think it is spilling any beans now, at this late date, given that the Cray was sold in the ’90s just about when I left and both chroot and shadow passwords are pretty standard (expected) things today. So while I love the Air Gap, I’ve used a much broader bag of tricks, then and now.

    THE big difference between then and now is the plethora of “click me and I run code on YOUR machine with YOUR privs” exposures. The risk from The Stupid User Trick is way higher now. (Curse You Java and Javascript!!!) There was some phishing exposure then, but not much, really. Yet now we have intrusion monitor hardware you can buy (no need to roll your own anymore!) and networks that can detect “it looks wrong” on their own.

    Today I’d likely have 2 internal networks: one for machines with browsers on them, and one for things that had to be secure (servers, admin stations, etc.), with a fat firewall in between with loads of authentication to get into that ‘inner circle’ even IF you were logged onto your desktop machine. Rather like a PII or HIPAA level network and a desktop network. (Though any actual PII would be on Yet Another network with ACLs so only selected people / machines could get any access at all.)

    Then I’d have every machine fingerprinted and a periodic check of the fingerprint done. ANY change to the machine gets a flag and a red light… Now the phish can’t change the machine (preferably booted via something like a PXE boot from a read-only server), and unless it captures your 2 Factors it can’t get into the inner circle. It COULD still watch what you do, at least until a reboot… but I have some ideas how to protect against that, too… but I haven’t done the whole thing since I just don’t need to do that any more ;-)

    Oh, and I’d likely try to force all web activity to be conducted from inside a Virtual Machine that was quarantined and disposed of at the end of the session, but would likely get a lot of ‘push back’ on that. Maybe having any internal oriented traffic in the VM would be acceptable…
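    That machine fingerprinting can be as simple as a baseline of file hashes compared on a schedule; tools like Tripwire made a business of exactly this. A minimal Python sketch (the function names and the choice of SHA-256 are mine, not any particular product):

```python
import hashlib
import os

def fingerprint(root: str) -> dict[str, str]:
    """Hash every regular file under `root` into a {path: sha256} baseline."""
    fp = {}
    for dirpath, _, files in os.walk(root):
        for name in sorted(files):
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    fp[path] = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                pass  # unreadable file: skip rather than crash the sweep
    return fp

def changed(baseline: dict[str, str], current: dict[str, str]) -> list[str]:
    """Paths added, removed, or modified since the baseline was taken."""
    keys = baseline.keys() | current.keys()
    return sorted(k for k in keys if baseline.get(k) != current.get(k))
```

    The real operational trick is keeping the baseline itself somewhere the attacker can’t rewrite it (read-only media, or a separate verification host), or the flag and red light never fire.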

    Alphabet Soup, for anyone wondering what I was talking about:
    PII – Personal Identifying Information (credit card data, name, address, …) and the law about protecting it.
    HIPAA – Health Insurance Portability and Accountability Act, the Health law about privacy of medical records.
    ACL – Access Control List, used in routers to select who gets in, or not, and when, and from where, and…
    PXE – Preboot Execution Environment, a method of booting your computer from a network server rather than local disk (there are others that may be better).
    chroot – the Linux / Unix “change root” command that moves your world view inside the machine into a small jail that looks normal and works normally, but doesn’t have all the ‘goods’ in it.
    DMZ – Demilitarized Zone, a network connected to the outside (boundary router usually) via some protections, but not as protected as the really private company network.
    BSD – Berkeley Software Distribution, probably THE best version of UNIX out there, IMHO.
    Crypt-text – what your password looks like when encrypted.
    Phishing – that letter from the Nigerian Prince asking to send you money… just click here…
    Privs – “privileges”, the ability to do things, especially powerful things, on a machine.

    Hmmm…. I wonder if I’m missing being “in the game” again…

  30. Larry Ledwick says:

    Another take on the Apple vs FBI pretty much in line with your observations:

  31. Larry Ledwick says:

    McAfee takes another tack on opposing the FBI’s demand for Apple to perform work that they are opposed to doing. Just where is the line between government compulsion and slavery?

  32. Larry Ledwick says:

    If you build it he will come (Field of Dreams)
    If there is demand, someone will try to fill it. Looks like a security vendor may have, or is working on, a crack for the Apple phones.

  33. Larry Ledwick says:

    15:57 mountain time 3/28/16
    The Associated Press Verified account

    BREAKING: FBI uses mystery method to break into gunman’s iPhone without Apple’s help, ending court case.

  34. E.M.Smith says:

    Yeah, I was watching Fast Money and they announced it. They guessed it was a particular Israeli company. Something like Cell{????}

    Security is always a race condition…

    I doubt we will ever hear what was found on the phone, citing “security concerns”… meaning “there wasn’t anything there and we didn’t want to let the world know this was a big waste of time just to try and leverage Apple into a backdoor”…

    5 days ago these folks:
    look to have pegged it as:

  35. p.g.sharrow says:

    Sounds like a win-win to me. The feds get into an old iPhone, and no precedent is established to force the equipment maker to provide a back door into all newer devices…pg

  36. E.M.Smith says:


    And Apple gets an acquisition target for all those $Billions stranded off shore due to a stupid tax policy ;-)

  37. Larry Ledwick says:

    Now that raises the question: which third party attribution is the misdirection?

    I only see 4 possibilities:
    They did it with in-house technical capabilities (cough, NSA), and the court order and the subsequent two offered 3rd parties are simply plausible deniability covers?

    They still have not cracked the phone but want to put out a rumor that the phones can be cracked as a counter measure, and don’t want the down side of a court case or to reveal that the phone is really secure.

    One of the disclosed 3rd parties actually cracked the phone and the other tried unsuccessfully, but alternate options were put out to induce FUD.

    Or both 3rd parties cracked the phone: have party A (the Israeli company) crack the phone, examine the contents, then tweak the contents to plant false leads, then provide the phone to party B (the Chinese company); they crack the phone, capture the doctored contents, and run off into a hall of mirrors chasing leads which are not real, or at least demonstrate that China has the ability to crack the phone.

    Isn’t electronic counter intelligence a fun game?

Comments are closed.