The End of x86?

by mjfern

For the last several decades the x86 architecture, with its major proponents Intel and AMD, has dominated the CPU market in personal computers (PCs). Today, Intel and AMD control 80.4% and 11.52%, respectively, of the worldwide microprocessor market (iSuppli, 2010).

Despite the prevalence of x86, there are tell-tale signs that the architecture is in the early stages of being disrupted. Drawing on Clayton Christensen's work, the classic signs of disruption are as follows:

1. The current technology is overshooting the needs of the mass market.

Due to a development trajectory that has followed Moore's Law in lockstep, and to the emergence of cloud computing, the latest generation of x86 processors now exceeds the performance needs of the majority of customers. Because many customers are content with older-generation microprocessors, they are holding on to their computers for longer periods of time or, if purchasing new computers, are seeking out machines with lower-performing, less expensive microprocessors.

2. A new technology emerges that excels on different dimensions of performance.

While the x86 architecture excels on processing power – the number of instructions handled within a given period of time – the ARM architecture excels at energy efficiency. According to Data Respons (datarespons.com, 2010), an “ARM-based system typically uses as little as 2 watts, whereas a fully optimized Intel Atom solution uses 5 or 6 watts.” The ARM architecture also has an advantage in form factor, enabling OEMs to design and produce smaller devices.
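
To put those wattage figures in perspective, here is a minimal back-of-the-envelope sketch in Python. The 2 W and 5-6 W draws come from the Data Respons quote above; the 25 Wh battery capacity is an assumed, illustrative figure (not from any cited source), so treat the output as a rough comparison rather than a measurement.

```python
# Rough runtime comparison on a fixed battery budget, using the power
# draws quoted above. BATTERY_WH is an assumed, illustrative capacity.

BATTERY_WH = 25.0  # assumed netbook-class battery, in watt-hours

platforms = {
    "ARM-based system": 2.0,       # watts, per the Data Respons quote
    "Optimized Intel Atom": 5.5,   # watts, midpoint of the quoted 5-6 W
}

for name, watts in platforms.items():
    hours = BATTERY_WH / watts
    print(f"{name}: ~{hours:.1f} hours on a {BATTERY_WH:.0f} Wh battery")
```

On those assumptions the ARM system runs for roughly 12.5 hours against roughly 4.5 hours for the Atom, which is the practical gap the quote is pointing at.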

3. Because this new technology excels on a different dimension of performance, it initially attracts a new market segment.

While x86 is the mainstay technology in PCs, the ARM processor has gained significant market share in the embedded-systems and mobile-device markets. ARM-based processors are used in more than 95% of mobile phones (InformationWeek, 2010). The ARM architecture is also the main choice for deployments of Google's Android, and it is the basis of Apple's A4 system-on-a-chip, which is used in the latest-generation iPod touch and Apple TV, as well as the iPhone 4 and iPad.

4. Once the new technology gains a foothold in a new market segment, further technology improvements enable it to move up-market, displacing the incumbent technology.

With its foothold in the embedded systems and mobile markets, ARM technology continues to improve. The latest-generation ARM design (the Cortex-A15) retains the energy efficiency of its predecessors but supports clock speeds of up to 2.5 GHz, making it competitive with Intel's chips from the standpoint of processing power. As evidence of ARM's move up-market, the startup Smooth-Stone recently raised $48 million in venture funding to produce energy-efficient, high-performance ARM-based chips for servers and data centers. I suspect we will begin seeing the ARM architecture in next-generation laptops, netbooks, and smartphones (e.g., the A4 in a MacBook Air).

5. The new, disruptive technology looks financially unattractive to established companies, in part because they have a higher cost structure.

In 2009, Intel's cost of sales and operating expenses were a combined $29.6 billion. In contrast, ARM Holdings, the company that develops and supports the ARM architecture, had total expenses (cost of sales plus operating expenses) of $259 million. Unlike Intel, ARM does not manufacture chips; instead it licenses its technology to OEMs and other parties, and the chips are often manufactured by a contract foundry (e.g., TSMC). Given ARM's low cost structure, and the competition in the foundry market, "ARM offers a considerably cheaper total solution than the x86 architecture can at present…" (datarespons.com, 2010). Intel is loath to follow ARM's licensing model because it would substantially reduce Intel's revenues and profitability.

In short, the ARM architecture appears to be in the early stages of disrupting x86, not just in the mobile and embedded systems markets, but also in the personal computer and server markets, the strongholds of Intel and AMD. This is evidenced in part by investors' expectations for ARM's, Intel's, and AMD's future performance in microprocessor markets: today ARM Holdings has a price-to-earnings ratio of 77.93, while Intel and AMD have price-to-earnings ratios of 10.63 and 4.26, respectively.

For Intel and AMD to avoid being disrupted, they must offer customers a microprocessor with processing power and energy efficiency comparable to (or better than) the latest-generation ARM chips, and offer it at a price at or below the cost of an ARM license plus contract-foundry manufacturing. The Intel Atom is a strong move in this direction, but the Atom is facing resistance in the mobile market and in emerging thin-device markets (e.g., tablets) due to concerns about its energy efficiency, form factor, and price point.

The x86 architecture is supported by a massive ecosystem of suppliers (e.g., Applied Materials), customers (e.g., Dell), and complements (e.g., Microsoft Windows). If Intel and AMD are not able to fend off ARM, and the ARM architecture does displace x86, it would cause turbulence for a large number of companies.

  • Anonymous

    As has a segment of .Net… it doesn't mean ARM is going to get the full breadth and width of either platform

  • MrCrisp

    OK, for the purposes of my amusing analogy let's say
    Ferrari (googling: http://www.ferrari-tractors.com/ ) DOH, trying:
    Porsche (googling: http://www.porschetractors.com/ DOH). Err, OK, a nondescript performance car from a manufacturer that doesn't make tractors. Not quite as snappy, but technically more accurate!

  • MrCrisp

    The point I was trying to make is that performance no longer doubles every couple of years as it used to. My previous PCs, bought at roughly three-year intervals, were 40 MHz, 350 MHz, 1.7 GHz, and 3.2 GHz. Now, five years after my 3.2 GHz machine, you can't buy a machine with the expectation of the performance increases I got from my previous upgrades.

    Yep, Murphy's Law would have been a good one for me to quote on point 1.

  • MrCrisp

    OK, could well be. I learnt ARM back in '95; at that point it was solely targeted at the low-power market, and that's something that hasn't changed and probably won't change.

  • MrCrisp

    My PC power supply is about 750 watts, plus the monitors, and I think the chip uses about 75 watts, i.e. less than 10%, so other ways to cut back usage may be more effective. Like: I've got a work PC (2 monitors), a home PC (2 monitors), and a laptop (17 inch), all of which are on at the moment. Sorry, planet!

  • Pingback: La disrupción del ordenador personal « Disruptive Sketchbook 2.0

  • Conrad Carter

    “Moore’s Law failed about 5 years ago,…” You’re conflating two things. Moore’s law is not about clock speed or overall speed, but about the density of components on a chip. That factor has continued in near-perfect conformity with Moore’s prediction during these last five years.

  • http://stereochro.me/ Keith Gaughan

    Not just could: it was and is. '95 is just when the market realised how well suited it was to applications outside of its high-performance origins. There's absolutely nothing about ARM *requiring* it to be targeted towards embedded use; it's just that its low-power aspects make it well suited to it.

    There's not really anything stopping it from emerging as a force in the server market, for the very same reasons it became attractive to the low-power and embedded markets.

  • MrCrisp

    I guess I don't understand why you would want to replace an x86 with something substantially slower, which ARM currently is.

    I can see how power usage may be an issue if you work off a battery, but I plug my computers into the mains. If ARM increased GHz, instruction set, etc. to compete with x86 on speed, then their power consumption would go up anyway.

  • Pingback: The End of x86? « Yet Another Technology Blog

  • http://www.best-registrycleaner.net registrycleaner

    Energy efficiency is always important.

  • Shender

    You obviously do not understand Moore's Law. In fact it was never really a law, just a statement about device-shrink physics. It states that the number of transistors on ICs doubles every 18 months. It never said anything about frequency. This statement has held true for the most part since he coined the phrase.

  • Joe

    Microsoft declared that Windows 8 will be designed for ARM as well.

  • Pingback: Intel’s Delusions | Danilo Campos.blog

  • old computer architect

    Most of the comments here are quite skeptical about the prospect of an x86 end (which doesn't necessarily mean the end of Intel or AMD). Some are just the opinions of fan-boys, while others may need some history lessons:

    In the beginning of computing there were a few big computers (almost custom built) in the world. A few companies made a name manufacturing mainframes (notably IBM & CDC). Mainframes dominated the market for 20-30 years.

    In the late '60s and early '70s a new type of computer emerged and dominated the market for almost 30 years more. This was the minicomputer, whose prime exponents were the PDPs and VAXes from DEC.

    DEC was so dominant that it failed to see another revolution coming (indeed, the quote "There is no reason for any individual to have a computer in his home" is attributed to Ken Olsen, President, Digital Equipment Corporation). That revolution was the personal computer, and it ended with the x86 architecture dominating the personal computer market, as it has done for more than 30 years. DEC was acquired in June 1998 by Compaq (a PC maker).

    The x86 survived the RISC revolution in the '90s; however, two factors contributed significantly: (1) code compatibility was important to PC users, and (2) the revolution was coming from a superior platform (servers & workstations) with a smaller market volume. So despite the superiority of the RISC designs, the much wider market of the x86 ended up with that platform eating the server, workstation, and supercomputer markets.

    For those that are clever, you may have already noticed a pattern here: dominant platforms have always been replaced by an "inferior" platform with a much broader market. A broader market is critical since it allows the "inferior" platform to be more cost-competitive; the lower profit margin is compensated by the higher sales volume.

    In the ARM vs x86 debate the volume clearly favors ARM. Each year about 100 million x86 processors are sold, whereas ARM processors sold are counted in the billions. ARM is used in many embedded products (cars, HDTVs, Blu-ray players, …) and in cell phones. Cell phones alone account for about a billion ARMs manufactured every year. Regarding code compatibility, do you know anyone worrying about the ISA executed by their TV, their car, or their cell phone? People just replace the product with a new one and are not even conscious that they are throwing away old software and buying new software (bundled in the new product).

    My 2 cents.

  • dinox

    Did you understand what Keith is saying? "as a force in the server market for the very same reasons it became attractive to low-power and embedded market."
    x86 is aimed at heavier compute (floating-point computation). Do you think that is what servers are used for? Most common servers are used for databases, file storage, and hosting, so servers don't do much heavy computation (e.g., calculating pi to 32 million decimals, running SiSoft Sandra or Prime95, and the like).
    Low power consumption and a smaller die size make it possible to build a server with many cores for multitasking, and that can overtake x86.
    Common measurements for ARM are IPS (instructions per second) and integer compute performance.

  • Pingback: ARM-based Processors Moving Up-market into Servers | x86 | Intel | AMD

  • Ryan

    You got the order all wrong. Don’t ever become an engineer.

  • Ryan

    Umm, the Windows registry is probably the dumbest, most inefficient idea ever conceived for an OS. Who are you to talk about efficiency?

  • Ryan

    Clueless cretin…

  • Ryan

    The fact is, dumb people conflate clock speed with throughput and instruction-set efficiency all the time. Especially tech journalists. They're all so infatuated with big numbers, but they don't have the slightest clue what they mean.

    For example, a dual-core CPU running at 2.5 GHz can theoretically handle double the load of a single core in perfect circumstances. That doesn't equate to "2 x 2.5 GHz = 5 GHz", but most people don't know any better. If they can't even comprehend this, god help any layman trying to comprehend RISC vs CISC, FLOPS vs clock speed, branch prediction, caching, etc.

    In short, I wish tech journos would put their CPU dick-swinging competition away and get an education.

  • Ryan

    Suuuure. Air-conditioning a large room full of huge, inefficient Intel machines (doubling as space heaters) sounds really cost-effective. Capital investments in large buildings full of oversized rack mounts sound like a great use of money.

    It’s only a matter of time before this kind of stop-gap garbage gets sent to the same trash heap as mainframes and tape drives.

  • Ryan

    It's nothing to do with RISC vs CISC, you douchebag. The supporting technology doesn't define the implementation. RISC is not ARM and CISC is not x86. They're just the most successful examples.

    The Model T was a box of metal powered by an exploding chamber of gas. Why over-engineer a problem horses can do so well already?

    The revelation here is that RISC processors are finally more than toy CPUs and glorified microcontrollers. They’re handling real, general purpose workloads like never before. You can reduce history to vague statements like “It’s been this way since the 80s” if you like but it doesn’t make you look any less ignorant about the facts.

  • Ryan

    Errmm, Brosef, GHz is a measure of clock speed, which is only one factor in overall performance. Moore's law is about "power", not clock speed. Power is more accurately measured in FLOPS, which, believe it or not, is still closely adhering to Moore's law.

    Oops, went and made yourself look like an idiot, didn't you?

  • Ryan

    By the way, since even Intel and AMD are accepting the fact that future progress now lies in multi-core designs, their former advantage of having more powerful cores is no longer relevant.

    Many cheap and efficient ARM cores beat a few overweight and expensive x86 cores.

  • Ryan

    I’ve got a headless ARM plug server running on about 2.5 watts peak draw. See the difference?

  • Ryan

    Multi-core machines are even keeping performance on Moore's curve too. Let's face it, most people are just idiots. If they haven't got some big numbers and fancy graphs to compare, they'll just make shit up. Quite a sad phenomenon.

  • Pingback: The End of x86? An Update.
