AMD – Poster Child for tech change vs sloth

I've mentioned before my paradigm of 'problem space' vs technical advancement. Basically the entire problem space of computing is a bell curve. Easy problems that need a computer are few; then it rises through a large number of problems that benefit from medium levels of compute power. Finally, at the very high end, there are a very few problems that take all the compute power you can create and then some. In the beginning, the largest computers could only take on the smaller problems. As compute power and technology improved, the scope of the problem space that could be addressed by computers expanded. We saw the explosive growth of companies like IBM, Control Data, Cray, Burroughs, Sperry, and Univac, all making mainframe and "supercomputer" scale hardware.

As they reached the far end of the bell curve, other competitors came along making smaller, cheaper hardware that used newer technology. They began to 'nibble off' the lower demand areas of the compute curve. Eventually they expanded into the middle ground. At that point we had the growth of the "Mini-computer" makers: HP, DEC (Digital Equipment Corporation) with the PDP-11 and VAX, and Sun Microsystems. Eventually Intel and the microprocessor gave rise to the dominance of the Personal Computer at the low end of the compute scale.

IBM made a successful transition into smaller machines (and, increasingly, services) but the others did not. As each class of machine could take on ever larger problems, the space 'left over' that needed "supercomputers" became less and less at the far right of the bell curve. Cray survived, but only after transitioning to a design using thousands of microprocessors in an integrated cluster. Sperry, Burroughs, and Univac all merged, with parts eventually going to Honeywell while the remainder survives as Unisys, increasingly focused on software and services. Other companies, like RCA and GE, made computers for a while, then stopped. Some, like CDC, gave up and stopped entirely.

All the while, the "little boxes" consumed ever more of the "problem space", adding features like color animated displays and sound that take an ever larger compute load, while shrinking in size. The Personal Computer was large enough to provide many kinds of "server" functions, and increasingly computer rooms were filled with racks of microprocessor driven machines. Now the Tablet (and even the cell phone) are continuing the trend to ever smaller boxes doing ever more of the large compute load tasks. This trend is not ending yet.

So Tandem, DEC, and other mini-computer makers went away (via merger into HP), while other makers of PC scale machines, and even of those rack mounted "server" PCs, found ever tougher going. Compaq was also absorbed into HP. Others simply went out of business (it's a long list of CP/M and DOS based machine makers). Apple was in decline at one point, then transitioned to the small phone and pad / tablet market. Folks rarely buy a full sized "box" computer these days; they buy laptops or tablets. This trend to fewer and smaller "Personal Computers" is showing up in sales data today. Sales of Personal Computers, in the sense of desk side or desk top boxes, are down. Tablets and smart phones are up.

When Intel was The Big Dog of micro-processors, many folks wanted to compete with them. One of those was AMD, Advanced Micro Devices. They are one of several CPU makers, but Intel is the leader. When a market slows down, the leader has more room to survive than the guy competing on price with smaller economies of scale. AMD has spent LOADS of money to keep making big micro-processors to compete in the desktop and laptop markets. The special purpose chips used in cell phones and tablets (smaller, lower power, dedicated embedded processors) were largely made by others. AMD is fighting for a larger share of a shrinking market. ARM chips are the "hot chip" in tablets and cell phones. Folks buying an iPhone or iPad are getting an ARM chip, not an Intel / AMD architecture chip…

So what has AMD stock done lately?

AMD After earnings 19 Oct 2012 (chart)

It’s now just a $2 “perpetual option” on not going out of business…

Here is a 10 year chart of INTC Intel, vs AMD Advanced Micro Devices, vs ARM Arm Holdings.

AMD vs INTC ARM 10 yr (chart)

Here we can clearly see that Intel has a large diverse sales base and isn’t particularly winning, nor losing, over time. Stable, with wobbles. So it can be traded on a modestly fast chart for “rolling” movements inside those ranges. ARM has had a rocket ride up (on the explosion of demand in tablets and phones) but now is stabilized. AMD was a “Golden Child” a while back, looking to carve off a large chunk of Intel business. Unfortunately, they grabbed that nettle just as the tablets were poised to take over. At the time of highest stock price, they OUGHT to have bought out ARM Holdings. (Frankly, I was at Apple a long time back when the first design evaluations of the ARM chip were underway. At that point, Apple ought to have bought them. But they didn’t ask me…)

AMD was a great ride, but "buy and hold" just doesn't work. Things move in "spike and drop" unless you see management preparing for the NEXT tech cycle. Also visible are the roughly annual pops and runs. Trading those runs with a 1 year chart, knowing that buying at the end of a 'rip' is a mistake and selling the 'dip' is also a mistake, would make a lot of money. There's a reason the market aphorism is "buy the dips, sell the rips"… Once that big blow off top was reached, shorting was the right thing to do. PSAR (those red dots on the price chart) tells you when to buy / sell by changing sides of the price line. Also note RSI at 80, then MACD just plunges. All the time that MACD is below zero, it's a short, not a long. In the last three years, MACD has wobbled each side of zero, mostly below. Some few "long side trades", but mostly a short. DMI with red on top also says when to be out. Blue on top was strong during that "spike" run up.
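
For the curious, that MACD regime rule is simple enough to sketch in a few lines. This is a toy illustration with made-up prices (not any charting package's exact calculation, and not trading advice): MACD is just a fast exponential moving average minus a slow one, and its sign picks the long or short side.

    def ema(values, period):
        """Exponential moving average with the usual smoothing factor 2/(period+1)."""
        k = 2.0 / (period + 1)
        out = [values[0]]
        for v in values[1:]:
            out.append(v * k + out[-1] * (1 - k))
        return out

    def macd_regime(closes, fast=12, slow=26):
        """Label each bar 'long' or 'short' from the sign of MACD = fast EMA - slow EMA."""
        macd = [f - s for f, s in zip(ema(closes, fast), ema(closes, slow))]
        return ["long" if m > 0 else "short" for m in macd]

    # A long slide like AMD's leaves MACD pinned below zero:
    closes = [30, 32, 35, 40, 38, 33, 25, 18, 12, 9, 7, 6, 5, 4, 3, 2.5, 2.2, 2.1]
    print(macd_regime(closes)[-1])     # -> short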

http://www.marketwatch.com/story/amd-reports-third-quarter-results-and-announces-restructuring-2012-10-18

press release

Oct. 18, 2012, 4:15 p.m. EDT
AMD Reports Third Quarter Results and Announces Restructuring
Fourth Quarter Actions to Target Cost Savings of More Than $200 Million Through 2013

Restructuring sometimes works; more often it is the kiss of death in a dying business.

AMD AMD -16.79% today announced revenue for the third quarter of 2012 of $1.27 billion, a net loss of $157 million, or $0.21 per share, and an operating loss of $131 million. The company reported a non-GAAP net loss of $150 million, or $0.20 per share, and a non-GAAP operating loss of $124 million. AMD is also announcing a restructuring plan designed to reduce operating expenses and better position the company competitively.

First off, notice that AFTER that long plunge from the top to down ‘near nothing’, we got a 16%+ drop in one day. That’s why you never ride a loser down. Once price is under the SMA stack and dropping, you exit. Also note that they are focused on cutting costs, not creating a new RISC chip to compete with ARM Holdings…

But at least they seem to recognize what is the core problem:

“The PC industry is going through a period of very significant change that is impacting both the ecosystem and AMD,” said Rory Read, AMD president and CEO. “It is clear that the trends we knew would re-shape the industry are happening at a much faster pace than we anticipated. As a result, we must accelerate our strategic initiatives to position AMD to take advantage of these shifts and put in place a lower cost business model. Our restructuring efforts are designed to simplify our product development cycles, reduce our breakeven point and enable us to fund differentiated product roadmaps and strategic breakaway opportunities.”

Doing that with less money, making losses, and a stock price about to hit “Penny Stock” levels is not good. But at least they know they screwed up… a half decade ago…

So that’s the charts. What about behind the tech?

Chips and Changes

A couple of years back I figured a rough time when Moore's Law would fail (for memory devices based on electricity, at least). The number of electrons being used to store "one bit" had become small enough to be roughly counted. A couple of hundred thousand. You can only "divide by two" so many times before you have ONE electron, indivisible. We're reaching the point where fundamental limits of physics are preventing much further shrinkage, or higher speed, of storage cells and switches.
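
A quick back-of-the-envelope on that "divide by two" limit, taking the couple-hundred-thousand electron figure above as a rough starting point:

    # How many halvings from ~200,000 electrons per bit down to ONE, indivisible?
    from math import log2
    electrons_per_bit = 200000       # rough order-of-magnitude figure from the text
    print(log2(electrons_per_bit))   # ~17.6 -- fewer than 18 halvings left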

http://www.tgdaily.com/hardware-features/57314-arm-risc-chips-to-erode-x86-market-share

ARM RISC chips to erode x86 market share
Posted on July 18, 2011 – 18:18 by Aharon Etengoff

I’d say “has eroded”… but better late than never…

A prominent industry analyst has confirmed that ARM’s RISC-based chips remain on track to claim a sizable chunk of the lucrative notebook PC market by 2015.

According to IHS principal analyst Matthew Wilkins, RISC-powered ARM processors will ship in nearly one out of every four notebook PCs (22.9%) by 2015.

Unsurprisingly, the projected jump in market share for ARM is directly linked to Microsoft’s upcoming RISC-friendly Windows 8 operating system.

“Over the next generation, billions of PCs were shipped based on x86 microprocessors supplied by Intel and assorted rivals - mainly AMD. However, the days of x86's unchallenged domination are coming to an end as Windows 8 opens the door for the use of the ARM processor, which already has achieved enormous popularity in the mobile phone and tablet worlds.”

Indeed, ARM support will enable the full-fledged Windows PC operating system to work on highly integrated chips that are more space- and power-efficient than traditional x86 microprocessors.



So it’s going to get worse under Windows 8…

The ARM chip is a RISC (Reduced Instruction Set Computer) chip. The Intel / AMD chips are CISC (Complex Instruction Set Computer). There has been a very long fight between RISC and CISC. The pendulum shifts from time to time based on which tech moves the cost curve or performance curve furthest. Right now, CISC is reaching a hard limit on clock rate and chip size.

So “what comes next”?

We've already gone to "more cores" in CPU chips from Intel and AMD. My laptop has a '4 core' chip. Supercomputers have gone to thousands of CPUs working in massively parallel arrays. (My Cray was only 4 CPUs, but they were very fast ones for the day. Now the laptop has about the same processor power, also with 4 CPUs…) So one might speculate on more processors, and RISC ones, packed into ever smaller and cheaper packages…

A 64 core chip of RISC processors (image)

From: http://www.zdnet.com/chipmaker-takes-to-kickstarter-to-become-the-raspberry-pi-of-parallel-computing-7000005771/

Chipmaker takes to Kickstarter to become the Raspberry Pi of parallel computing

Summary: Chip company Adapteva hopes to launch a Raspberry Pi-style parallel computing revolution via a Kickstarter funding campaign, but getting enough developers could be difficult.

Established chip company Adapteva has launched a Kickstarter fundraising campaign to create a low-cost parallel chip board for supercomputing — a kind of Raspberry Pi for parallel programming.
[...]
However, there need to be more developers skilled in parallel programming for future applications to get the most out of the chips they run on. This is because in the mid-2000s, chip companies stopped being able to economically deliver significant clock rate increases on CPU cores and were forced to move to a multicore strategy.

If you can’t make the clock faster, and are having trouble shrinking the die size by orders of magnitude, going to more CPUs in a “cluster” (even if on one small chip) is the way to go.

That article spends some time moaning over how few folks have software that runs on multiple CPUs. They need to look at all the (large) software libraries written by the massively parallel supercomputer companies… This is a known technology. On Linux, there are message passing kernel extensions (several variations) that move processes to other boxes. That same code will work on multi-core machines. (Mosix is one example.) It is a start, but not an end point.
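
As a rough illustration of how little the program structure changes when you add CPUs, here is a minimal sketch using Python's standard multiprocessing pool. It only spreads work across the cores in one box; Mosix / LinuxPMI style migration across machines is a different mechanism, but the "farm out chunks, gather results" shape is much the same.

    from multiprocessing import Pool, cpu_count

    def crunch(n):
        # stand-in for one chunk of a bigger job
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        chunks = [200000] * 16                    # sixteen chunks of work
        with Pool(cpu_count()) as pool:           # one worker process per core
            results = pool.map(crunch, chunks)    # chunks run in parallel
        print(len(results), "chunks done on", cpu_count(), "cores")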

So my point?

Expect to see ever more parallel processing inside your computers. Windows will have more trouble using that effectively than will Linux. AMD will struggle coming up with a non-CISC competitor to the ARM chip family. ( Were I at AMD, I’d look to fund that startup with an option to buy the company…) Expect to see Intel continue along as a Ho Hum company making OK stuff in a wide range of products, but no big stock move. It’s the folks taking advantage of the new move into ARM / RISC chips and ever more parallel processing who will win.

Me? I built a Beowulf Cluster from scratch once “just for fun”. Didn’t do much with it (other than put ppmake in play so distributed compile of complex code was sped up). I’m currently looking mostly at secure computing Linux solutions, but along the way I’m keeping track of those Linux Distributions that are set up for cluster or parallel computing. Mosix, as of the 2.6 kernel, became a ‘for profit’ company and stopped development of the free public code. That has moved into the LinuxPMI project. This is focused on process migration over a network to separate machines; but that’s an easier problem than closely coupled CPU sharing. Still, if you can share over a network, you can share faster inside the box. The Linux kernel already has support for ‘multi-core’ CPUs (up to 8 is easy, I’ve seen up to 64 listed as a settable flag). So it’s ready to go for the next few years (decade?).

Who will lose the most from this? Likely the mini-computer / mainframe makers and to lesser extent the supercomputer makers. The supercomputers have already been pushed out into the long tail of ‘near infinite’ computes demand problems. But a “PC” with 64 CPUs in it will likely run most any engineering codes pretty easily, and handle computing for accounting for a large number of companies… It will also make a damn fine graphics heavy tablet ;-)

Postscript on Bittorrent

There are interesting things you can learn by accident. I’ve started using Bittorrent to download Linux releases. (And promptly became mildly addicted… now having a few hundred GB of them ;-)

It has a nice display of how many folks are "seeds" making the release available for download; and, for most, resolves their IP address to a location and displays a flag. For slightly older ARM releases, the major locations of seeds are in Germany, Ukraine, Sweden, and some in Russia. Finland also shows up for some releases. For newer releases, France comes into play. Releases that have heavy US seeding tend to be Intel / AMD focused.

The implication here is interesting. It implies that northern and eastern Europe are ahead of the curve on the use of Linux on ARM chips. ( It’s also possible that they are playing ‘catch up’ and the USA guys already did their download a couple of years ago when I wasn’t looking…) Still, the number of seeds for x86 is ‘way high’ when compared to the number for “ARM”. The further implication here is that there’s plenty of room for Some Guy in the USA to get good on the ARM chips and have a niche. Guess I really DO need to order that Raspberry Pi now. ;-)

As a side note, the x86 chip releases were more likely to have a Latin American seed. Argentina and Colombia both showed up. There were also many entries that were only an IP number. Some of those mapped to China and other Asian sites. Only once do I remember seeing a Japanese flag show up. (Then again, I've started with the older, smaller releases. They "expire" sooner. So I might well find that the 'trendy' Japanese show up on the newest releases of the hottest topics, or mostly on Japanese language releases.) Still, it's a bit fun to notice 'who and where' are looking at any particular release level and technology base.

For example, one ARM release had only 2 seeds, one German and one Swedish. The same era x86 release of the same Debian code had several hundred seeds. Of those I connected with (only a dozen) many were U.S., French, Latin American, etc. Though at that scale Bittorrent does some selection for 'near and fast', which will bias the 'data' of the connections. But just the scale of present interest in x86 vs ARM implies there are a lot of folks not ready for the transition to RISC massively parallel cores just yet…

One other note to make: The technology of “Go Fast” tends to be invented in the supercomputer world, then moves ‘down scale’ over time. So pipeline architecture and ‘look ahead’ and parallel computing of threads and throwing away the path that wasn’t taken; first in supercomputers, now in PC chips. Supercomputers went massively parallel in the last dozen or so years, now it’s time to reach the mini and personal computer spaces.


About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present "hot buttons" are the mythology of Climate Change and ancient metrology; but things change...
This entry was posted in Economics, Trading, and Money, Tech Bits. Bookmark the permalink.

33 Responses to AMD – Poster Child for tech change vs sloth

  1. p.g.sharrow says:

    After reading this post–twice– I think I need to sleep on it. pg

  2. j ferguson says:

    I apologize for not having coalesced this thought into something you could take to the bank.
    I suspect that simultaneously with the doubling and redoubling expressed in Moore's law has come a halving and re-halving of the information content of a bit – not in the Claude Shannon sense, but more as a part of conceptual content. A bit which is a component of a byte, and in turn of an ASCII letter or numeral, is most likely carrying more fractional information than a bit contributing to a graphic or video.

    What percentage of the activity of our processor population is consumed by singularly insignificant graphic bit-handling?

  3. philjourdan says:

    Intel, in addition to making the x86 CISC chips, also branched out into a lot of other peripheral chips that are the core of their solid earnings (the x86 is volatile due to RISC, AMD, and now ARM). AMD was never much more than a copy cat, but they were competing with the big dog (as you stated), so yes, they have had a nice long run, but it appears to be over.

    Intel will continue – not as a behemoth (that may change with the changing market), but as a solid player. Intel is the GM of stocks now. Nothing flashy or get rich quick, just something to retire on.

  4. adolfogiurfa says:

    What about a photon processing chip?

  5. Robbo says:

    “Basically the entire problem space of computing is a bell curve. ”

    I suggest it’s very skewed and fat-tailed. The appetite for computer-intensive tasks keeps growing, as does the appetite to needlessly computer-intensify simple tasks.

  6. philjourdan says:

    @Adolfoguirfa – patent it and you can be the next Bill Gates!

  7. E.M.Smith says:

    @Adolfo:

    There are some “optical computers” or at least parts for them:
    https://en.wikipedia.org/wiki/Optical_computing
    the problem is that they are about as smart and compact as vacuum tubes…
    http://www.telegraph.co.uk/science/science-news/6128693/Optical-computer-performs-first-ever-calculation.html


    ‘Optical computer’ performs first ever calculation
    An ‘optical computer’ which uses light particles rather than traditional circuitry has performed the first ever calculation, as scientists hope it could pave the way for a computer smaller and faster than anything seen before.

    Scientists have hailed the step, despite the calculation taking longer than a schoolchild.

    The optical quantum chip uses single particles of "whizzing" light which could eventually pave the way for a "super-powerful quantum computer".

    The photonic chip, roughly the size of a penny, managed to find the prime factors of 15, and give the answer – three and five (3 X 5).

    The team at Bristol University argued that while finding prime factors may seem like a mathematical abstraction, the task lies at the heart of modern encryption schemes, including those used for secure internet communication.

    Cherry Lewis, the project spokeswoman, said: “We are almost getting to the point now where conventional computers cannot go any smaller so we need to go down a completely new route. We are talking nano-scale. Particles of light.”

    Article includes a picture of a table top with mirrors and such on it…

    On second thought, vacuum tubes and wires would be much smaller and much faster…

    @PhilJourdan:

    I think he needs to make something small and fast first ;-)

    Per Intel: Yup. It's sort of the AC/Delco of computer parts… nothing exciting…

    @P.G.Sharrow:

    Mostly just says “Things change and in the CPU world you need to look at least 5 years ahead or lose. To see 10 years ahead, look at what the Supercomputer guys are doing at any given time.” followed by “AMD didn’t look very far ahead”, which they admitted…

    @J. Ferguson:

    You have hit on an important point. When computers were doing mostly cost accounting and / or computing exactly how much steel to put in a bridge, each bit ‘mattered more’. As they are doing more “Playing MP3s” or Flash movies, the results not only “matter less” but even inside that, each “bit” is somewhat less important (some more so than others…)

    Modern computers also move more "stuff" into dedicated processors. GPU Graphics Processing Units are becoming very common / universal. An FPU Floating Point Unit was always essential for math intensive systems (and Cray took that even further, having an array of 64 math processors and doing "vector" processing where you could, with one instruction, have 64 number sets worked on in parallel). It was called a 'stride', and if you had things like a "do loop" with a chunk of 64 steps, it would be unrolled into a parallel stride by the compiler. This encouraged what looked like silly code sometimes, but was really communicating to the VPU "take these 64 and multiply by those 64 in one stride / one instruction / one clock…"

    So over time the “bits” have been put into more specific buckets. Some for the GPU, some for the FPU, some for the Vector Units… Now the pendulum is swinging a bit back toward a gaggle of general purpose processors (a LARGE gaggle), and when that reaches a limit, then more FPU / GPU / Vector units will be added to each one. Stir and repeat ;-)

    But it’s worse even than that… A character is 8 bits. Part of what we’ve done is make word sizes longer. Now at 64 bits. Ever larger storage, memory, etc. So you either waste more of that, or spend time / CPU cycles “packing and unpacking” 64 bit words.
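
    A tiny illustration of that pack / unpack step (nothing Intel or AMD specific; Python's struct module here just stands in for what the hardware or compiler does):

        import struct

        word = struct.unpack("<Q", b"ABCDEFGH")[0]     # eight 8-bit characters -> one 64-bit word
        print(hex(word))                               # 0x4847464544434241 in little-endian layout
        assert struct.pack("<Q", word) == b"ABCDEFGH"  # ...and unpacked back into characters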

    So it goes…

    FWIW, I'm posting this from a "SeaMonkey" browser (Yet Another FireFox variation) from Slacko Puppy. Running on an HP Vectra 733 MHz box with 254 MB of memory. It's slow and halting. The same box runs Red Hat 7.2 so fast I never had any issues. Then again, it has no browser on it that works these days… Looks like even Puppy can have bloat issues…

    Ah, well… on to the next candidate (perhaps a non-Slackware Puppy?).

  8. j ferguson says:

    E.M.
    I confess to being hopelessly ignorant, but I suspect that bits continue to march into and out of our machines in single file, no matter how they are split up and paralleled once they get onto the net or once inside the machine.

    Likely every bit pays the same fare whether it’s part of the white background in a jpeg, or a component of text of a document or calculation. One could suppose that the graphic pixel bits although having minimal information value are helping pay the passage for the heavy lifting bits.

    Another way to look at it might be by comparison to development of the VCR which, as I understand it, was driven by the pornography market, at least initially. There were a couple of decades where MS could push the next OS by issuing a new version of Flight Simulator which required a hotter processor and new OS. Games drove hardware and OS development.

    If you believe all the stuff that’s being posted about the unlikelihood that Windows 8 will ever make a dent in the business world, you might assume that its market will be hearth and home, but this time not driven by the next Flight Simulator because they ran out of steam on that track – ah, er, got out of Train Simulator too.

    Running on Verizon 3G (grandfathered unlimited volume) we find ourselves throttled by others on the same cell skyping and downloading movies while we’re struggling to do things which could be done with command line, or text based app.

    If bits are not all the same in the dark, then neither are bytes and maybe we should be thinking about how the bandwidth is consumed.

    But that would grind against my basically libertarian druthers.

    Maybe there is some way to trick the system.

    Maybe I’m nuts.

  9. E.M.Smith says:

    @j ferguson:

    It is very important to realize that “Not all bits are created equal”. This matters as many things take advantage of that fact.

    For example, a pixel of display. You can choose how many "bits of depth" you give to color. Early displays were 8 or 16. Gives a few colors, and things look crayola and blocky. Move that up to 24 bits and you have a delightful color palette of millions of colors. Photo quality shading and appearance. Change the "high order bit" (that first one of 24) and the whole color family being displayed changes. Reds become blues or yellows become blacks. ( Not sure exactly what the top bit value is, so that's an example, not an exact… but change the first bit, all hell breaks loose in your esthetics…) Yet that low order bit, that 'millionth color shade', isn't even something normal humans can see. It is just overkill.

    So some smart folks figured they could 'diddle the last bit' of an image and use that to send hidden information. Steganography. There's even an app for it, so kids can send hidden messages to each other in school that look like pictures of 'school stuff'… "Honest, I was asking about the Mona Lisa…" All sorts of things (even porn and secret messages) can be sent that way. (In response, some ISPs started diddling the low order bits of images themselves, to thwart this; which caused folks who had a legitimate photo quality copy of the Mona Lisa to cry foul…)
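
    A toy version of that 'diddle the last bit' trick, working on a list of 8-bit color channel values (the function names here are mine, not any particular app's):

        def hide_bits(channels, message_bits):
            """Overwrite the least significant bit of each channel with one message bit."""
            out = list(channels)
            for i, bit in enumerate(message_bits):
                out[i] = (out[i] & ~1) | bit
            return out

        def recover_bits(channels, count):
            """Read the hidden bits back out of the low-order bits."""
            return [value & 1 for value in channels[:count]]

        pixels = [200, 201, 97, 98, 120, 15]      # e.g. R,G,B byte values from an image
        secret = [1, 0, 1, 1, 0, 1]
        stego = hide_bits(pixels, secret)
        assert recover_bits(stego, len(secret)) == secret
        # Each channel moves by at most 1 step out of 255 -- invisible to the eye.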

    Similarly, in an image, you may have a wide field of "all 'the same' color". Maybe a few low order bits wander back and forth, but no real information. So you could exchange a few thousand bytes that are all "about the same" for the text "1000 x 'color value' goes here". That, essentially, is how all the various compression methods work. So your MPEG video is compressed because "not all bits are created equal" and lots of them can be ignored… and replaced with one value repeated. JPG, MPEG, etc. all depend on that process.
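
    The 'swap a long run of the same value for one count' idea, at its most stripped down. Real JPG / MPEG compressors do transforms and quantisation on top of this; the sketch below is only the redundancy-squeezing core:

        def rle_encode(values):
            """Collapse runs of identical values into [count, value] pairs."""
            runs = []
            for v in values:
                if runs and runs[-1][1] == v:
                    runs[-1][0] += 1
                else:
                    runs.append([1, v])
            return runs

        sky = [200] * 1000 + [199, 200, 200]       # a big patch of nearly constant "sky"
        print(rle_encode(sky))                     # [[1000, 200], [1, 199], [2, 200]]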

    The flip side is that the compressed image is a whole lot more sensitive to ‘flipping a bit’ as you’ve removed all the ones that did not matter… Something similar happens in encryption. (the two processes are related, mathematically speaking…)

    So when you use a lot of encrypted and compressed files, you want a much better and more reliable disk system… (There’s a large technology involved in making the various speed vs compression vs risk vs privacy vs… tradeoffs, so many compression / encryption methods…)

    CRC codes (cyclic redundancy check) catch errors, and the related "ECC" Error Correcting Codes go a step further: you add some bits that let you 'reconstruct' some number of lost bits. Adding MORE redundant information to make "loss of a bit or two" less of an issue.

    Now blend the added bits of CRC codes with the compression of unwanted redundancy… You get files that are much smaller (as the unwanted redundant bits are squeezed out) but resistant to damage from a lost bit (so work OK on storage systems that are not perfect). LOTS of work goes into that bit of magic… Getting just the redundancy you want, and no more.
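
    For a concrete feel of how a few added check bits let you rebuild a lost one, here is a minimal Hamming(7,4) sketch: four data bits, three parity bits, and any single flipped bit can be located and put back. (This is the textbook ECC scheme, shown only to illustrate planned redundancy, not how any particular disk or controller does it.)

        def hamming74_encode(d1, d2, d3, d4):
            """Four data bits -> seven code bits [p1, p2, d1, p3, d2, d3, d4]."""
            p1 = d1 ^ d2 ^ d4          # covers code positions 1, 3, 5, 7
            p2 = d1 ^ d3 ^ d4          # covers code positions 2, 3, 6, 7
            p3 = d2 ^ d3 ^ d4          # covers code positions 4, 5, 6, 7
            return [p1, p2, d1, p3, d2, d3, d4]

        def hamming74_repair(code):
            """Recompute the checks; the syndrome points at the flipped bit (0 = clean)."""
            p1, p2, d1, p3, d2, d3, d4 = code
            pos = (p1 ^ d1 ^ d2 ^ d4) + 2 * (p2 ^ d1 ^ d3 ^ d4) + 4 * (p3 ^ d2 ^ d3 ^ d4)
            fixed = list(code)
            if pos:
                fixed[pos - 1] ^= 1    # flip the offending bit back
            return fixed

        clean = hamming74_encode(1, 0, 1, 1)
        damaged = list(clean)
        damaged[5] ^= 1                # simulate one bit lost in storage or transit
        assert hamming74_repair(damaged) == clean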

    So yes, a LOT of work going in to finding ways to make all bits equally important, so you can use them the most efficiently possible.

    FWIW, very few bits move "single file" any more. Those on a "serial bus" do (so USB, the Universal SERIAL Bus, will). Those from the keyboard go as full bytes. Those to the display go as a more complex structure, but typically with lots of wires, not just one as a serial feed… But most data goes to / from disks. They have very wide data paths. (One of the tricks used on the old Crays was to have multiple banks of read heads so you could pick up a LOAD of data fast without waiting as much.) There's many a 'black art' that goes into how many heads, how many tracks are read at once, and how those are bundled up into packets of bits before they ever go toward the computer. (The "disk controller" does a lot of assembly prior to sending to the CPU.) Similarly, networking that used to be more 'serial' as it was over the phone, one whistle at a time, has gone to more parallel (more different pitch whistles at the same time). So some bits march in a line, others march abreast…

    The initial development of the VCR wasn't porn driven, but Sony lost dominance due to porn. They came out with a very much better VCR, the Betamax machine, but by then Porn was already on VHS of the older format, and folks "stayed away in droves" from the new format as they had those old porn tapes to watch…. Sony tried again with the 8mm… Same lack of change. So "Porn" is / was more a preventer of advance, but it DID really push the growth of sales… mostly of the older lower quality tape decks.

    So no, you are not nuts. Just playing a bit of ‘catch up’. Folks long ago figured out they needed to do what you propose, and you have watched it happen on your TV. It’s ‘gone digital’ so more channels can be packed in the same space via such bit compression. You now get video from wars all over the place, sent over very low bit rate phone lines (that stuff that’s blocky and smeary when things move) due to very very high compression ratios (at the loss of even more quality… but you get SOME picture…)

    Whole herds of programmers are devoted to cramming as much as possible down as small a chunk of bandwidth as possible, and each new step forward in compression method creates a new hero (with a new big fat bonus…)

    So it goes…

  10. j ferguson says:

    E.M. What? My internet connection isn’t serial? I really have been asleep.

  11. E.M.Smith says:

    @J. Ferguson:

    Well “it is and it isn’t”. It only has a couple of wires, so things go ‘serial’, but as it’s an analog signal, it can have multiplexed tones, so things are carried in ‘parallel’…. (Doesn’t really map to apply digital terms to an analog box… but you get the idea. In reality it does really complex whistling that the box decodes into digital stuff… that gets sent to your computer as digital bits that are probably serial again… Swapping from parallel to serial happens all over the compute process…)

  12. Pascvaks says:

    Random Thoughts -
    GM is a classic case of startup – grow a bunch – consolidate – grow a lot – mature – grow big – age – consolidate – grow less – age – get fat – get fatter – get fattest – diet – shrink a little – diet – shrink more – diet – shrink – get dumb – get dumber – dry up – blow away (or something like that;-).
    Anyway, as with humans, corporations age. As EM has pointed out, the time between startup and consolidate and fattest and shrink and die seems to be accelerating. Guess there’s a corollary, of sorts, to Moore’s Law: The bigger you get the slower you think.

    Humpback whales talk to humpback whales, dolphins to dolphins, people to some people. The 'reproductive' systems of everything comes to mind as another form of 'communication'. Don't we need different 'species'? Totally different 'languages'? Totally different physical systems? One type of 'cell' is not enough? Most cells are vegetable. Many others are animal. Within each group there are so many different types of major systems, genera, phyla, classes, etc. As things get stupid in computerland, shouldn't we be getting to some evolutionary moments for the protection of our own against "them"? Shouldn't we get out of this killer soup called the computer and internet ocean and start crawling out onto dry land? Just a thought.

  13. adolfogiurfa says:

    @E.M. Interesting thing that of bits in color. If it could be taken into consideration FREQUENCY, then, as in any key you press in a piano keyboard, has within itself, or rather develops, an “inner octave”, where one DO for example, turns into a complete set of seven notes of a shorter wavelength, thus the same computer system could become a super-computing system if allowed to read a variable wavelength/ frequency input so as to produce also a variable output:
    http://www.giurfa.com/tetraktis_language.pdf

  14. adolfogiurfa says:

    So, you see: In the beginning was the Word, and the Word was with God, and the Word was God.
    In this business of “creation” we have to manage the wavy nature of nature.

  15. Jay says:

    AMD has a nice new processor…the FX8350. It has 8 (almost) cores, just out yesterday, about 200 bucks. Benchmarks are holding up well against Intel I7 especially for multi-threaded applications. Single thread performance is lagging a bit. See the linux benchmarking site…

    http://www.phoronix.com/scan.php?page=article&item=amd_fx8350_visherabdver2&num=1

    So, a bit higher power consumption, almost as fast as the Intel top of the line, but it is way cheaper.
    I will be buying one. The direction forward will be to make application use multi-cores more and more, so a good effort by AMD, improving on the earlier Bulldozer FX8150 released earlier this year.

    -Jay

  16. BobN says:

    @EM – what you describe I lived through on the hardware side. The whole cycle of innovate, growth, consolidation, obsolescence can be seen in almost every aspect of life; some refer to it as the "Yellow brick road".
    My first job out of college was for Univac, designing memory systems for the mainframes. They produced good solid computers, but took no risk; why should they, they had a great business. I remember sitting in a meeting where upper management wanted our opinion: should they buy this little company called Intel? The established leaders all said we don't need them; if we need it, we will design it. I was the lone dissenting vote to buy, out of about 40 guys. Later on I wanted to use a microprocessor, the new Intel 4 bitter, to simplify my built in tester. I could move 4 big boards full of chips into a half sized board, but was denied the use of the part; it was too radical for a new design. I soon packed my bags and moved on to an environment where I could use the new technology.
    I moved into the disk world which was more competitive and had plenty of opportunity to innovate.
    This area thrived, starting with the big box 14 inch disk companies, but soon they were challenged by the 5.25 inch drives and then the 3.5, 2.5 and 1.8 inch drives. The big box companies fell on hard times and were replaced by the fast moving start-ups. Soon the competitive pressures kicked in and it too consolidated, and has continued to consolidate until there is a handful left. They are at the end of the Yellow brick road and are being challenged by Solid State Memory. Some will argue, but the heyday of drives is ending, and they will fade out as new advances gradually obsolete this older mechanical technology.
    The point of the previous paragraph is to show an example of another sector besides computers undergoing the life cycle to obsolescence. It's happening everywhere, it just looks a little different with each. RIM, Motorola and Nokia owned the cell phone market, but did not adapt and are paying dearly for their inability to adapt to a changing world.
    The traditional paths just mentioned now have a new disruptive force: global competition. Previously it was mostly US based companies fighting it out, but Asia is playing havoc with the traditional life cycles. Samsung in particular is interesting to observe. They go after everything, but only after the market is proven. Memory chips, they dominate through manufacturing muscle. Actually, any high volume product, they are there, the latest example being the iPhone. With this type of business approach the small start-ups that usually enter a field after a breakthrough product are not occurring; the Samsungs make it impossible to innovate and compete. They are killing the start-up environment, which is not only jobs, but the technical base of the country.
    With this developing business environment the venture capital people are pulling back and are much more cautious. The innovation forces are dying and will be difficult to resurrect.

    It's funny, things being designed into modern chips were all borrowed from the mainframe designs; it just took a number of years to get it all incorporated. I always smiled when a new x86 device came out, the big new features were old hat in mainframes. The processor revolution was really just a miniaturization path.

  17. j ferguson says:

    BobN. Great story. Does make one wonder what would have become of Intel had Univac bought it. Likely not good, don't you think?

  18. philjourdan says:

    @j ferguson – I agree – a big company would have killed Intel. And perhaps delayed the PC revolution.

    @BobN – many, MANY years ago (about the same time you were fighting with your bosses at Univac), the lesson of adapt or die was learned by the Swiss Watch Making industry. Their time pieces were second to none, but the new quartz technology was far cheaper and easier to build and more accurate. They ignored it – and the rest is history. So it is not like these changes in the computer industry are somehow unique – the willingness of men to ignore new trends is a constant.

    And finally, I am intrigued by your throw away statement about hard drives. While I salivate at the thought of a solid state drive for my next computer (I tend to shy away from bleeding edge, and jump on things usually with revision 2.0), the cost/unit of storage plus the density just does not seem to be there. Are you confident those obstacles will disappear, and if so, how long do we wait?

    I agree SSDs will have a market, but for mass storage, it seems that the old HDs still have a huge market.

  19. j ferguson says:

    My father in law, EE-MIT ’34 went to work for Western Electric in 34 – stayed with Bell System entire career. He was at Teletype in the late ’50s where he attended a meeting where the possibility of data by wire by means other than rtty was discussed. The issue was what was Teletype’s business, communications or the manufacturing and deployment by lease of exquisite (term of art) electro-mechanical devices. The argument on the manufacturing side was that they were better at the design and manufacture of these things than anyone else in the world, so why not continue with this edge that they had? They decided to stick with electro-mechanical devices.

    He transferred to Holmdel, where he went to work as project manager (according to family lore) of successors to Bell 103 dataset, the 300 Baud acoustical modem with the pulse dial.

    He could remember the arguments in the Teletype meeting quite well and that sticking with machinery was the dominant opinion. And you can see what that got them.

  20. Richard Ilfeld says:

    At my entry investment — Intel is paying me 4.19% dividends.
    Helpful in tolerating the volatility, and this return has been there for a while.
    Processing power has been growing far faster than our ability to access it. And the industry is still too young for us to properly value reliability.
    I think the future will be in "assist" uses of the technology — like the assist given to application engineering for the internal combustion engine when microprocessing is mated to it. Many will seem trivially stupid (I don't want a talking refrigerator either) but the incremental gains in precision moving from mechanics to electronics will have incalculable value… and potentially increase our risks as well. My guess is that the number of microprocessors consumed annually has several orders of magnitude left to go — but (re: AMD) companies usually succeed by being the best at something, not by being a cheaper me-too. Someone who can pay a 4% dividend can eat your lunch in a price war.

  21. j ferguson says:

    Richard Ilfeld. When I was a Sun Microsystems reseller in the late '80s, we thought Windows was a "cheaper me-too." I had no idea what effect the competition of a pretty-good, but not the best, GUI would have on our business. I had, by then, realized that our business was not in the bulge of the bell-curve E.M. referred to earlier, but to keep in the pointy part, we had continuously to find applications which couldn't be supported by PCs, that did things that enough people needed, to keep us going. Me-too can be very effective as a business approach so long as you understand that it is what you are doing.

  22. Paul Hanlon says:

    I have a PC that I built myself for about EUR1200. It has two top of the range graphics cards in it, and its "theoretical" speed is 4 Teraflops. The SSD on it means it boots in about 20 seconds or so. I bought it to teach myself OpenCL and try to figure out this parallel processing paradigm. We're seeing ordinary devices like phones and tablets now coming with GPU acceleration, along with things like WebGL (and its brother, WebCL), which will allow us to program 3D games and apps in Javascript for the browser. Parallel processing definitely seems to be the way it is going to go.

    The Raspberry Pi has now had its RAM increased to 512MB (for no extra cost), and it too has a GPU. They've also open sourced all of the ARM code that communicates with the GPU, which gives people the same access to the GPU as an OS would have. My guess is that eventually they will release the VideoCore4 drivers themselves and then it will be a totally open computer. You could strap a load of these together for a supercomputer (indeed, it has already been done). There's even an RTOS port being worked on. Even though there isn't an OpenCL port to it, you can still use the old methods in OpenGL to get your supercomputing goodness.

    I’m sure I read somewhere that AMD were going to make an announcement that was strongly hinted at being a tie up in some way with ARM to develop a new RISC chip. My box is completely AMD. The chip is AMD and the graphics cards are ATI (which AMD owns). I do hope they are thinking along these lines, because it is getting to the point where there is nothing that an X86 chip can do that an ARM can’t, with the ARM using way less power and generating almost no heat. I’d hate to see AMD fail.

  23. BobN says:

    @ j ferguson – Yes, buying Intel would have killed the culture and spirit and it would have never achieved all it has. We all got lucky on that.

    @ philjourdan – I figured the drive comment would ruffle a few feathers. I worked Disk and Flash memory most of my career and we always heard that the end was near, but innovation and the all important cost per bit kept it alive. I was always a huge skeptic myself as I just never saw what would replace it. I now believe differently. Flash prices have gotten down to be competitive and the speed and power make for very competitive advantages. Flash is capturing everything mobile, and usage keeps shifting from desk top to laptop, iPad or phones. The low end is already gone.
    In the corporate environment SSDs are being added for performance and power/cooling savings. The IOPS offered by SSD allow for fewer servers to be required to get the job done. Drives are being used more and more as backup devices, but they are mechanical and will fail. It is price competitive as a JBOD arrangement, but as a RAID it is not that great a solution. New memory devices are on the horizon that will be offering Terabyte solutions; when they are reality, the volume in the Disk business will shrink, there will be further consolidation, and the lower volume will drive up prices, further ending their advantage.
    Having said all that, I could argue the other side, but my feeling is that mechanical storage is coming to an end. Just an opinion.

    As a side note, I believe successful companies keep adding product lines and diversifying until they are not competitive in any of them. They lose focus and let others gain market share, which they never get back. Do one thing well and focus; that is the key to long term success.

  24. philjourdan says:

    @BobN – No feathers ruffled, just a lot of curiosity! Like you, I did not see how they would break the 1gb limit (back when 32mb drives were common). I have no doubt that it will continue to expand and what is here in 100 years, no one today has yet imagined.

    I accept your expertise about SSDs. But I guess I am from Missouri in that regard, so I will buy them when I see them! ;-)

  25. j ferguson says:

    Somewhere we have a Canon 2 mega-pixel camera with an IBM 460 meg drive (circa 2001) which is about the size of a postage stamp. I think they sold the design to Hitachi who I suppose made it in larger capacities. I think I'll have a look and see if it still works.

    It might be really interesting to learn if there were any internal disputes at IBM about the wisdom of making this thing and what the arguments might have been.

    I had always supposed they did it because they could.

  26. BobN says:

    @ j ferguson – I worked with camera companies and we spent hours arguing over their business models. They were all convinced that Flash cards would grow their business and would make people take a lot more pictures, and that would just make them print out more pictures, so their business model would just expand. I tried to convince them that the business would evolve to an electronics business and with that it would evolve and totally change. Just look what viewing pictures on an iPad has done to the business. Their core competency, chemicals, has become a minor part of the business.

    I have come to the conclusion over the years that if you want to properly assess your business, ask an outsider. They aren't bogged down with the NIH ("not invented here") and don't over estimate the impact of things on the drain board. I had a top executive in this area call me just a few years ago and say he wished he had listened and not tried to sell me on their model.

  27. E.M.Smith says:

    @Pascvaks:

    The "growth cycle" of companies varies by industry. In "high tech" where things change fast, lack of fast change inside the company leads to rapid corporate senescence and death. See RIM for example. Apple is consuming its business. From "indispensable" must have business device a decade back to "you have a what?"… Or "pagers". Anyone remember pagers? Mandatory for off hours support work 20 years ago. Don't know if entry level folks even know what they are today. Other industries are not fast to change, so the "cycle" depends more on financial size and national policies. Steel, for example. As long as you are big, and in a country with decent (low) taxes and labor costs, you win. In the USA, the survivors are mostly doing scrap recycle and specialty steels. Bulk plain steel is cheaper from low cost basis industrialized places like Brazil, China, and Russia. Big enough for economies of scale, poor enough to be cheap…

    Then there are things like cars where “fashion” and “style” can dominate. So Mercedes and BMW survive in Germany despite high costs due to reputation and “style” issues. Japanese and Korean makers displaced a lot of American production (especially in small efficient cars) and are working on the BMW / Mercedes share now. But the cost basis in Japan has been rising… So for cars, it’s a complex interplay of design / style issues, cost basis, and reputation. Play those well, the larger companies can persist forever. Play them badly, well, Studebaker merged to become Rambler that merged to become American Motors that merged with Jeep that merged into Chrysler that was bought by Mercedes who couldn’t fix it and sold it to FIAT who are seeing what they can do with it/them… Compare FORD who’s been just fine, thank you very much, all along… So how you run your company matters more than technical change…

    Short form: Fields with rapid technical change have rapid corporate life cycles. Fields with slow technical change have slower life cycles dominated by other things.

    To extend your analogy: Then there are fungi. Sort of ‘plant like’ but sort of “animal like” in some cellular ways. So perhaps we need a different kind of compute species. More durable like some fungi… FWIW, the machines are getting a whole lot smarter. What’s getting dumber is the software. We have collectively chosen to write crappy software that’s fast to make as ‘hardware is cheap now’. IMHO that’s a mistake. Eventually we’ll figure that out…

    @Adolfo:

    We had such ‘multi frequency’ computers early on. Went to binary systems as silicon was advancing fastest. Nothing prevents returning to ‘multistate’ devices, but they have a hard hill to climb now to catch up. Analog computers can be very fast for some types of problems, but not to as much precision… Then there was multi-state logic. Still being worked on by some:
    http://www.ternarylogic.com/multi-valuedlogic.html

    @Jay:

    AMD has typically been cyclical with ‘near death experiences’ followed by new big pushes. I think it can keep flogging CISC for a while, but not too long. It really needs to get some kind of RISC multi-core deal going. ARM licenses to many chip makers, so it ought to be fast for them to ‘ramp up’. Even if all they did was pack more cores in the can or make one with integrated secondary support chips, it would sell.

    While a ‘bear’ to make software for it, I personally would like to see a single package with a standard Intel architecture CISC processor AND a ‘few’ RISC co-processors for doing some specific tasks damn-fast in small silicon. Kind of like having an attached Floating Point Unit or Graphics Processor, but more generalized… Probably not much chance of a commercial success, though… software folks would not want to deal with two instruction sets / architectures…. Being able to do all the ‘general purpose’ person oriented things on a CISC chip, while handing off large blocks of simple repetitive things (like calculating a 100,000 cell spread sheet) to a small cluster of RISC chips… well, some modestly well done code could just fly…

    @BobN:

    I’ve had some similar experiences. At companies that are no longer in business…

    I have an 8 GB real spinning disk drive inside a "flash card". Bought it simply because I thought it was kind of cool. As it is infinitely writable, unlike FLASH, you can use it for things like Linux / Unix swap space. ( I once ran Linux on a 'thumb drive' directly, including SWAP. It lasted about a month… then the thing started having bit errors as I'd written it too many times…) That card plugs into a USB adapter, so I can have a REAL fully installed Linux on a USB dongle that fits in a couple of square inches…. IMHO it will be a while longer before spinning disks are completely obsolete. Keeps for years unplugged too ;-)

    But yes, nearing the end of that Yellow Brick Road…

    Per Samsung et. al. and startups: There are still startups, but they've moved focus. Social Media and Cloud stuff. And geography. California made it too painful to do much here, and China made it cheap and easy to work there. The incubators moved… Some went to Texas and some to China or elsewhere. Samsung isn't really in the 'startup' area. It is in the high volume economies of scale business. So a startup can just contract construction to a company like them (or, as Apple has done, to Foxconn). Lots of rip off of intellectual property, though, so better to keep some bits secret ;-)

    @philjourdan:

    And before those watches we had the buggy makers and before them we had the village Blacksmith (my great Grandpa…) thus has it always been… Wells Fargo ran a stage coach line; but knew how to change with the times.

    I have a 500 GB USB hard disk that cost me about $50. It’s about the size of a deck of cards. But it’s a couple of years old. Last time I looked they were about 2 TB in size… More than fast enough for anything I do.

    @J. Ferguson:

    Oddly, I used a Teletype machine (with attached paper tape punch / reader) in college. To this day I wish I had one. Storing a program on paper tape had a certain charm to it. Especially the "Mylar paper tape" used by HP in the '70s… Still have a couple of programs (FORTRAN I think) on rolls of that stuff in a shoe box in the garage… Really reliable for printing console logs that could not be deleted too… ( we did that at Apple, despite some folks making snide remarks, on a TTY of some sort. Caught a couple of system crackers that way…) Oh well… Times change…

    @Richard Ilfeld:

    Some industries are “commodity products” and it’s all about costs. Make steel ‘to spec’ and after that it is pretty much a price driven issue. Competing on price and availability are THE way to success. Others, like clothes, are all style and name. NOBODY will buy a discount Tiffany, so cutting price is an error, period.

    CPU processors are in a middle ground. Not quite commodity yet, but headed that way. Still a lot rides on reputation for quality and new features. BUT, they must be categorized into different types. The "big ones" in your high end server style PCs (or even the ones in a high end laptop) are very pricey and we're likely using nearly as many as we ever will (in terms of 'packages'… there will be more 'cores' in each package). It's the 'little ones' that are going gangbusters. "Embedded systems" CPUs are now so cheap that you can add 'intelligence' to anything for $10 or so. (The whole Raspberry Pi board is $25 in single units…) At that point, if you can replace a $15 industrial strength set of push button switches with a voice recognition program and a cheap microphone, well… "call it a feature and charge extra"!

    My favorite bad example is the GM transmission. Eliminating a lot of very good design work making a ‘hydraulic control system’ to decide when to shift gears with a little dab of computer. Just hate it. Works fine, and much cheaper; until it dies… So it is going with many other machines. My oven has a computer chip in it. Didn’t want one, but can’t avoid it. Just wanted a thermostat / dial. That takes more machining and quality parts… Similarly the washer / dryer. It’s all on one small computer board now with the nearly ubiquitous ‘touch pad’ keypad that’s dirt cheap to make (and about as reliable… the microwave oven has a few numbers wearing off already…) That is where the computers are invading everything.

    So how long before nobody knows how to make a mechanical control system? Probably close to that now. Last time I went looking for a simple mechanical thermostat for the heater I could not find one. (Between the banning of anything with mercury in it and the cheapness of ‘programmable thermostats’ it’s just killed the simple bulb on a metal spiral with a handle…) I do NOT want a programmable thermostat and I’m the one the family will expect to keep it programmed. I want a simple “move the lever back and forth for hot and cold”… But I get a computer, want it or not…

    At any rate, Intel has dominant position and is cost conscious, so unlikely to be displaced any time soon. (Or ever if they keep doing things right).

    @Paul Hanlon:

    Don’t think AMD will fail, just suffer a lot…

    Per “DIY Supercomputers”: It’s not that hard. The iconic one was the “Stonesouper Computer”:

    https://en.wikipedia.org/wiki/Stone_Soupercomputer

    I made an 8 node cluster once just for fun. For a while in the late ’80s and early ’90s there were folks making racks of ‘board systems’ that were clustered that way. Then the large makers started making 2000 CPU clusters under one skin.

    You are quite correct, though, that doing it with Raspberry Pi will also work. One of my first thoughts on seeing it was “big hub and a stack of those is a nice B’Wulf…for cheap”. It’s a ‘someday’ project once I’ve got ‘a few’. I’ve collected a few versions of Linux that ‘self cluster’ for just that day. (Mosix and the Quantian release, along with a couple of others).

    I'd not be surprised at all to see someone take a board, put a dozen multi-core chips on it (with memory and com gear), put a few dozen of them into a backplane, and have an 'instant cheap supercomputer'… Heck, I'd do it and sell them, but for the fact that the lifetime of the niche is very very short…

    Just about 2 years ago I finally disposed of several of the 'cluster nodes'. They were old and not used any more. I kept the 400 MHz AMD ones, but the 486 class and even some P100 ish ones were just not interesting any more. I moved on. I'm pretty sure I'll make another cluster before I'm done with the tech. Likely a cluster of tiny little boards… each one faster than the Cray from the '80s ;-) Don't know what to do with it though… Maybe sign up for one of those distributed computing projects, like protein folding for medical research, or the folks finding ever larger prime numbers…

  28. Paul Hanlon says:

    The first time I read about Beowulf clusters was the CalTech one, I think around 2004, the name escapes me atm, and I was fascinated by it. It put the power of a supercomputer within reach of the ordinary person. Then graphics cards came along and were found to be much better at Big Computation. Then the kerfuffle happened over the NIWA dataset in, I think the end of 2009.

    So I derived my own dataset from the Daily Raw GHCN2 dataset, with the intention being that it would be absolute temperatures rather than anomaly. Using the laptop, and learning C along the way, I got to parsing every station, and even had graphs and other info on a dynamically generated web page for each station.

    Then I got bogged down in how to derive a global average absolute temperature. Grids are useless: only 10% of the planet is covered by grids with an actual temp station in them, using a 2.5 x 2.5 degree grid. To cut a long story short, I hit on Voronoi diagrams, which get the land area of a station by splitting it with its nearest neighbours, but to calculate them I would need something substantially more powerful than a laptop, especially if I was going to be including the ICOADS dataset for sea temps, hence the idea to build the one I have.

    Along the way I had the idea that once it was setup, I would open it out to other people to use remotely on an ad-hoc basis. I would put up an interface like GitHub which would let people code and store their programs and data on one computer, and then run it on the “supercomputer”.

    Unfortunately, the problems that this is trying to solve, are so disparate that only a limited subset of them would benefit. Anything with Big Data is best solved by a server farm, because they are optimised for I/O, whereas a graphics card based solution is ideal for small data with lots of computations.

    I may yet still do it, and take a leaf out of your book by using Mosix, but security would be challenging, and I wouldn’t want people using it for breaking passwords, unless it was the FOIA2011 one :-). But even if I don’t, I will have learnt OpenCL and how to calculate Voronoi sets, so I’m not completely wasting my time :-).

  29. CompuGator says:

    BobN says (25 October 2012 at 8:31 pm):

    [....] I worked Disk and Flash memory most of my career and we always heard that the end was near, but innovation and the all important cost per bit kept it alive.

    IBM has been expecting "the end" of that mechanical technology for something like 4 decades. Altho' in the mid-1970s, 'twas (magnetic?) bubble memory that seemed the best candidate for the spoiler of the rotating magnetic storage (i.e.: disk & drum) party, and new champion of its direct-access storage business. IBM was especially concerned that it was close to hitting the limits for disks on access time and transfer rate. So IBM's researchers kept working away, continuing to increase bubble-memory reliability by decimal orders of magnitude. But its disk engineers in San José kept moving the goal posts, continuing to stay better, by what I recall as about 10^4–10^5, in data-error rates.

    I was always a huge skeptic myself as I just never saw what would replace it. I now believe different.

    Alas, about 1 decade ago, IBM packed up its pioneering San José main plant, shipping it lock, stock, and barrel, over to Hitachi in China.

  30. BobN says:

    @CompuGator – I am one of the few souls around that actually designed a controller for a Bubble memory. It was interesting and as fun as those things get, but we never got it out of the lab. Just way too many issues to even consider a market run against a disk.
    Something that was very interesting as a design project was a disk built from charge coupled devices. It was a real challenge to keep rotating the data and keep track of it, and still provide fast access. Ended up with a crazy, barber pole scheme. Boy did we search looking for the alpha particle problem.
    I had a couple projects that never went anywhere, but I learned as much from them as the success.

    Speaking of magnetics. Here is an interesting link.
    http://www.networkworld.com/news/2013/061913-ferromagnetics-271033.html

  31. CompuGator says:

    [Re:] “Ferromagnetics breakthrough could change storage as we know it”
    (www.networkworld.com URL offered by BobN, immediately above)

    I especially liked the claim of “non-volatile memory”. But isn’t the “flash memory” (i.e.: glorified EEPROM?), that an experiment by Chiefio wore out by using for paging or swapping, also at least nominally “non-volatile”?

    The prospect of wearing out a "solid-state drive" inside its sealed box that's celebrated for being free of moving mechanisms, by using it to replace traditional applications of rotating magnetic memory (e.g.: paging or swapping), sure ain't the "great leap forward" that it seems to be marketed as.

  32. BobN says:

    There is a huge difference in the two memories. EEPROM have a much worse failure rate and can not be used for much other than 10-100 code changes.

    Using SSD to replace traditional disk is making big changes to computer centers. The high IOPS offered by SSD reduce the number of servers required to get a job done, so fewer can be used, which saves money along with huge power differences. Because of poor disk performance, servers used to have to use a lot of drives and spread the data around to get the desired performance. They just kept adding drives to solve the speed problem. Much of the Web serving type applications would be hard pressed to exist without SSD. It may seem mundane, but the reality is a speed revolution.
    I think you will see another big jump in system performance when SSD access methods change; right now they emulate disk, and they could be significantly better with architecture changes.

Comments are closed.