ARM chips to beat Intel Real Soon Now?

I love it when a video makes me feel like a prescient seer ;-)

So for the last couple of years I've stated that I'm intentionally abandoning Intel chips in favor of ARM chips. While my reasons don't directly match those given in this video, they are related. For example, I hate fans and want a silent machine. ARM dominates phones and tablets because those devices need low power usage / low battery drain – which also means low heat and no fan needed.

He does not mention the UEFI or Intel Management Engine issues. Too bad on that front. But he does mention the cost advantages of lower transistor counts (and the M.E. does not come without more transistors and more power drain…)

Surprising to me is the idea that Microsoft is pushing for ARM chips. Then again, they know they have lost the phone & tablet markets so need to embrace that landscape as it stands.

Apple was an early adopter of ARM. Back about 1983 I sat a couple of hundred feet away from the guy who was evaluating it for the Newton device, and worked in the same department. So no real surprise that Apple is finding ways to expand its use of ARM.

Also note that Apple has gone through four different processor types over the years. The Apple I & II ran on a very old architecture, the 8-bit MOS Technology 6502. Then came the Motorola 68000 family in the Mac. Apple helped develop, and used, the PowerPC chips for a good while, then more recently moved to Intel chips. Their code base is designed for rapid porting.

Then there is just the point that for decades Intel has lived off of Moore's Law (first stated by Gordon Moore, who went on to co-found Intel), and Moore's Law is coming to an end. Chips are heading toward 5 nm process nodes. We're running into quantum effects and the limits of light wavelengths and the stencil-like masks used in photolithography. That wall is forcing multi-core designs with ever more cores. Much easier to do with small, power-efficient ARM cores.
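
To make the multi-core point concrete, here is a minimal Python sketch (my own toy workload, not anything from the video) that splits a job across however many cores the OS reports – the pattern that a pile of small ARM cores handles naturally:

```python
# Split a simple summation across every core the OS reports.
# Illustrates "more, cheaper cores" rather than one ever-faster core.
import os
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    cores = os.cpu_count() or 1
    n = 10_000_000
    step = n // cores
    chunks = [(i * step, n if i == cores - 1 else (i + 1) * step)
              for i in range(cores)]
    with Pool(cores) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(f"{cores} cores, total = {total}")
```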

So, here's the video, about 25 minutes:

Strange how companies that get in bed with TLAs and align themselves against their customers' best interests eventually run into trouble…

One of the features of the ARM business model is that everyone can design their own particular mix from a kit of parts (CPU, GPU, etc.) and then have it fabbed up at a general semiconductor fab (fabrication) shop. MUCH cheaper. Some ARM cores run a few pennies per CPU; compare that to Intel at hundreds of dollars. That is why the SBC (Single Board Computer) market is largely dominated by ARM chips. Hard to make a full computer on a board for under $30 when the CPU alone is $200.
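
As a back-of-the-envelope check on that last point, a trivial sketch using only the round numbers above (not real bill-of-materials data):

```python
# Rough cost comparison: CPU price as a share of a cheap SBC's budget.
# All three figures are the round numbers from the post, not real BOM data.
BOARD_BUDGET = 30.0      # target price of the whole board, USD
ARM_CORE_COST = 0.05     # "a few pennies" per ARM core
INTEL_CPU_COST = 200.0   # a desktop-class Intel part

for name, cpu in [("ARM", ARM_CORE_COST), ("Intel", INTEL_CPU_COST)]:
    share = cpu / BOARD_BUDGET
    print(f"{name}: CPU alone is {share:.0%} of a ${BOARD_BUDGET:.0f} board budget")
```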

Then there is just the fact that, over time, Moore's Law has meant an ever larger chunk of the problem space can be handled on ever cheaper platforms. In the beginning, even the simplest problems took the biggest computers available. Then the mainframe gave way to the minicomputer, which could handle much of the load. Then the PC carved off a large chunk of that. Over time this left a trail of dead companies behind as their niche moved to smaller hardware. Gone are DEC, Tandem, and others. Merged are Sperry, Univac, Burroughs. IBM moved strongly into services to make up for hardware sales losses. Even Apple moved into the iPad, iPhone, and other dinky devices as the line of PC-class jobs moved down into them.

The $40 million supercomputer of 1984 handled problems that now fit on my Raspberry Pi M3 with memory and computes to spare. Ever less of the problem space is left for the giant computers of today, and that inevitable march of the problem-class line is now crossing over the Intel CPUs. Some supercomputers now offer an option of thousands of ARM chips instead of Intel.
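
For a rough sense of "computes to spare", here is a crude way to estimate sustained floating point throughput on one of these boards. It is only a sketch, not a proper benchmark, and it assumes NumPy is installed; a dense n×n matrix multiply costs about 2n³ floating point operations:

```python
# Crude MFLOPS estimate from a timed dense matrix multiply (needs NumPy).
import time
import numpy as np

n = 1024                       # bigger n gives a steadier estimate
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
np.dot(a, b)                   # ~2 * n**3 floating point operations
elapsed = time.perf_counter() - start

print(f"~{2 * n**3 / elapsed / 1e6:.0f} MFLOPS sustained on this machine")
```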

As it stands now, I have ever less need for anything faster or larger than an ARM based SBC system. Everything I do can be done on them, with the possible exception of compiling Firefox – though that looks to be a memory limit more than a CPU limit ;-)
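
On that memory limit: the practical knob is usually how many parallel compile jobs fit in RAM, not how many cores you have. A minimal sketch of the idea, assuming Linux and a guessed ~1.5 GB per compile job (not a measured Firefox figure):

```python
# Cap parallel build jobs by available RAM instead of core count (Linux).
import os

GIB = 1024 ** 3
ASSUMED_RAM_PER_JOB = 1.5 * GIB   # a guess; big C++ link steps can need far more

def available_ram_bytes():
    """Read MemAvailable from /proc/meminfo (value is reported in kB)."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                return int(line.split()[1]) * 1024
    raise RuntimeError("MemAvailable not found")

cores = os.cpu_count() or 1
ram_jobs = max(1, int(available_ram_bytes() // ASSUMED_RAM_PER_JOB))
jobs = min(cores, ram_jobs)
print(f"cores={cores}, RAM allows {ram_jobs} -> build with -j{jobs}")
```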

Intel isn't dead yet, but it has been stagnant for a while and it is certainly taking arrows. ARM chips are consuming its problem space / market. Nibbled to death by ducks is not a good way to go. They need a major rethink: a disruptive technology, not incremental Moore's Law change. After 30 years of living off of it, I doubt their corporate culture is capable of the disruptive leap that is needed.



7 Responses to ARM chips to beat Intel Real Soon Now?

  1. Ossqss says:

    Both are susceptible to Meltdown and Spectre however. Kinda makes ya wonder about those possibly mandated back doors. I recall reading about Foreshadow a few months ago also. Perhaps it is time to revert to the Kenbak-1? ;-p

    https://www.wired.com/story/foreshadow-intel-secure-enclave-vulnerability/

  2. philjourdan says:

    I would not write Intel off just yet. They have had poop dumped on them many times, and yet each time have managed to dig their way out and come out smelling like roses.

    I would not be surprised to learn they are already designing a chip to compete with ARM. Remember, Intel makes more than CPUs. And they are usually among the better players in those other fields. The "CISC do it all" may be dying, but Intel will survive.

  3. Steve says:

    ESR writes on the death of the PC and the rise of compute bricks. Compute bricks work by separating the display & keyboard from the processor, allowing us to reduce the size of the system needed. Non-x86 processors play here as they don't have the power & heat of the x86 processors.

  4. hubersn says:

    I doubt that, in 1983, anybody at Apple had access to any info on the first ARM (development of the ARM1 started in 1983, with first working silicon available in 1985, but I don't think this was available outside Acorn and VLSI), let alone would be thinking about anything like the Newton.

    Only in 1990 did the Acorn-VLSI-Apple ARM spinoff happen, which led to the development of the ARM600/610 with the integrated MMU that Apple wanted for the Newton.

  5. E.M.Smith says:

    @Hubersn:

    I was a bit sloppy with my sentence. I joined Apple about then and sat in the building with the guy who did the ARM evaluation work. So saying I sat near him is correct, but it implies he started looking at the ARM at my start date at Apple, which is not the case. It was after I'd been there "a little while". I was in that building a couple of years, and can't say exactly when during those years he was doing the initial ARM work. It could easily have been 1985. Then I moved to the building where we constructed our Cray site for the rest of my time there. I probably ought to have said something like "from about 1983 to 1985 I sat near the guy that evaluated the ARM for various uses and eventually the Newton"….

    I remember that the ARM chip was not yet a product. He was in contact with the engineers doing the development work and at first was just describing the "paper spec" (under non-disclosure) for what was intended to be made. At the time the Newton was not even an idea. The idea was something called Magic Crystal (which eventually ended up in competition with the Newton and got spun out as the General Magic company – where I eventually worked a couple of years in the early '90s). The idea of Magic Crystal is now embodied in your iPhone, a few generations later. I don't know exactly when Magic Crystal split off a Newton variation, but both projects ran in parallel as ideas for 'a while'. Had I said "Magic Crystal", though, nobody would know what I was talking about, so I used the later daughter project name of Newton that people do know. Newton work began in 1987 per the wiki.

    As we were in the same general group (Advanced Technology Group) we attended the same meetings, and in staff meetings folks would pitch their projects. What I remember of it was that he generally did processor evaluations (for all uses) and was saying basically ~"We need to watch these guys 'cause if they deliver what they are saying, it will be a big deal", and a discussion of RISC vs CISC followed. IIRC, the then VP of ATG said to keep following it and order some samples when they had something fabricated. The guys who made the Newton a couple of years later were in the same staff meetings. I also vaguely remember they wanted some changes before the chip would meet their needs, so they worked with the ARM guys to get them, which took a few years. Does all that add up to 5 years? I'd guess about so.

    Some months to a year after the first discussions, the engineer who did CPU evals had an early ARM chip in a breadboard rig on his desk and was doing actual hardware evaluation on a very early shipment of the first product. I got to eyeball it but didn't really care much, so I don't remember much other than looking at the breadboard and thinking it looked a mess ;-) He was excited about the speed for the price.

    As Apple was always trying to build the future, they would choose a design point based on what would exist at the time they intended to sell a product, not based on what was on the shelf today (as that would be obsolete in 3 to 5 years when you started shipping product…). Then they would revise the design as reality caught up. The company threw away more "projects" than they kept (a constant source of angst and annoyance in ATG…) as sometimes reality didn't catch up as expected. The ARM chip discussion was exactly that kind of "what will exist in a couple of years" question; it was not about "what we can buy today", and it involved a startup company.

    So was that ’83, or ’84, or maybe even ’85? I can’t say for certain. By ’85 I think we were in the other building, so I’d say it was most likely ’84. But then you start getting into “what month” as my start dates at the company and in each building were not on neat year boundaries.

    Best calibration would likely be to ask “When was the ARM chip spec done enough for someone to start ‘talking it up’ with potential customer engineering departments; but not yet in fab?” Remember that this was not “Salesman to customer” but “Engineer to Engineer” (and they may have been using an informal channel of fellows who met at conventions or who just knew each other from school… IIRC this particular Engineer was from overseas somewhere, perhaps Britain, but it’s fuzzy trying to remember details of accent from 35 years ago when I wasn’t paying attention to it anyway at the time… Lots of folks at Apple were ‘a bit different’ and translating that into origin is not always accurate.)

    I can say with certainty it was no later than ’85 and most likely earlier by about a year. It was not my first month on the job, so while I was there earlier I can’t say just which month of which year that discussion happened. IIRC it was about 6 months to a year later he got early evaluation silicon to play with and felt it was inadequate to the projected needs (likely the MMU issue you mentioned) but was still enthusiastic about it and pitched that this was just the first iteration of a design life cycle. Then some long cycle of him working with them on development goals happened and I was busy doing Cray stuff and not paying much attention.

    When Apple got the silicon they wanted, the Newton won the internal competition and Magic Crystal was "canned". The Engineer who thought it up got permission to take the idea and run, and formed General Magic. Then a couple of years later Apple did another round of layoffs and I "laid myself off" and went to General Magic as a Director of I.T. for a couple of years.

    I can likely consult some old copy of my resume to get it down to start months at each company and narrow it even more, but I don’t see where that’s particularly helpful. My only point was that I’d been aware of the ARM chip from the start, whenever that was, and that the folks at Apple had an early adopter involvement with it and with the development of it.

    Sidebar on the Cray:

    It was bought to develop a CPU chip set for Apple. That was why I was in the group that did CPU stuff. The "high concept" was to simulate the chip instruction set on the Cray so you could do all the software development work before you had silicon. This would give roughly a three year head start to market over any competition. We did, in fact, achieve that technical goal. At one point we had the world's largest PC – Personal Cray ;-) as it had a Gigabit network on it to (serial number 1) a frame buffer for full motion 3-D animation (we used a Mac to encode the mouse… yes, we put a mouse on the Cray ;-)
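
    To illustrate the general idea of "simulate the instruction set so software can run before silicon exists", here is a toy sketch – an invented three-instruction machine, nothing like the actual Apple design, just the shape of a fetch/decode/execute loop:

```python
# Toy fetch/decode/execute loop: the core of any instruction set simulator.
# (An invented 3-instruction machine for illustration only.)

def run(program):
    regs = [0] * 4          # four general purpose registers
    pc = 0                  # program counter
    while pc < len(program):
        op, a, b = program[pc]
        if op == "LOADI":   # regs[a] = immediate value b
            regs[a] = b
        elif op == "ADD":   # regs[a] += regs[b]
            regs[a] += regs[b]
        elif op == "PRINT": # show register a
            print(f"r{a} = {regs[a]}")
        pc += 1
    return regs

# "Software running on hardware that doesn't exist yet":
run([("LOADI", 0, 40), ("LOADI", 1, 2), ("ADD", 0, 1), ("PRINT", 0, 0)])
```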

    The project eventually got killed just months from silicon (IMHO for stupid internal political reasons) by an "Apple Fellow" whose only claim to fame seemed to be killing things that he could kill. We had the software layer running, we had the chip instruction set done and coded and the transistor/gate design done; all we needed to do was the layout of traces / masking and fab. When it was killed, some of the instruction set design work went on to become part of the PowerPC chip design. This added about 4 years to the eventual ship date and completely blew the time-to-market advantage, so instead of a world dominating system that outperformed a Silicon Graphics workstation at the price of a PC, it was "just another Mac"… but so it goes.

    It was rather neat to be running a new OS build with an application layer on a simulated chip set on a Cray. I can now do roughly the same things (application layer graphics) on real silicon on my Raspberry Pi. The simulated chip set was to be 4 cores at a time when most folks only had one in their designs, and with a performance level way ahead of everyone else at the time, at about 400 MFLOPS – which is now easily found on a cheap SBC. It is all about time-to-market, which was the high concept… What Silicon Graphics could sell for tens of thousands of dollars, Apple intended to sell for about $4000, and you can now buy for $40. Get to market early, you rake in the extra bucks. Be late, you take in about the cost of a dinner out… Had Apple done the 'reduce to silicon', they would have taken most of the desktop workstation business from Sun, Silicon Graphics, and a few others. Oh well…

    Now the time of the hot desktop is pretty much over. The line of performance needed for that level of task has moved down scale into the cheap desktop PC and is headed for the world of even cheaper ARM chips with multiple cores; 6 to 8 cores are now available in $60-range SBCs. Those will take over the laptop just as ARM chips took over phones & tablets earlier. Then the desktop / tower will fall to Moore's Law side effects as ever fewer $$ are needed for that level of performance. Even high end gaming stations are approaching photo-realistic video, and once you have that, you don't really need much more in the way of computes. But they will be the last of the desktops to flip. So maybe in the really long run it was OK that Apple dumped the hot desktop concept for the iPhone…

    Hopefully that all clarifies enough on the timing of things.

    UPDATE:
    https://en.wikipedia.org/wiki/ARM_architecture#Acorn_RISC_Machine:_ARM2

    In the late 1980s Apple Computer and VLSI Technology started working with Acorn on newer versions of the ARM core.

    That roughly fits the timeline I remember. A year of discussion & getting the first test sample evaluated, then about another year of discussing what was desired, then the "working together officially" project approval at about 1987-88-ish, and things moved from "one guy with a contact" to a formal project with product ideas…

    https://en.wikipedia.org/wiki/Apple_Newton

    “Apple started developing the platform in 1987 and shipped the first devices in 1993.”

    Which is also about what I remember. The ARM chip was discussed & evaluated for a couple of years, then the Newton guys were told they could get what they wanted done, and it became a project about the same time the formal relationship started.

  6. Taz says:

    Can you provide insight as to how Intel was "surprised" by Spectre? Or were they surprised?

  7. jim2 says:

    Microsoft has a new open source project — Project Mu. This is the company’s open-source release of the Unified Extensible Firmware Interface (UEFI) core which is currently used by Surface devices and Hyper-V.

    https://it.slashdot.org/story/18/12/20/1457229/microsoft-announces-project-mu-an-open-source-release-of-the-uefi-core
