
Apple, ARM, and Intel

Stratechery by Ben Thompson

Posted on Tuesday, June 16, 2020

Mark Gurman at Bloomberg is reporting that Apple will finally announce that the Mac is transitioning to ARM chips at next week’s Worldwide Developer Conference (WWDC):

Apple Inc. is preparing to announce a shift to its own main processors in Mac computers, replacing chips from Intel Corp., as early as this month at its annual developer conference, according to people familiar with the plans. The company is holding WWDC the week of June 22. Unveiling the initiative, codenamed Kalamata, at the event would give outside developers time to adjust before new Macs roll out in 2021, the people said. Since the hardware transition is still months away, the timing of the announcement could change, they added, while asking not to be identified discussing private plans. The new processors will be based on the same technology used in Apple-designed iPhone and iPad chips. However, future Macs will still run the macOS operating system rather than the iOS software on mobile devices from the company.

I use the word “finally” a bit cheekily: while it feels like this transition has been rumored forever, until a couple of years ago I felt pretty confident it was not going to happen. Oh sure, the logic of Apple using its remarkable iPhone chips in Macs was obvious, even back in 2017 or so:

  • Apple’s A-series chips had been competitive on single-core performance with Intel’s laptop chips for several years.
  • Intel, by integrating design and manufacturing, earned very large profit margins on its chips; Apple could leverage TSMC for manufacturing and keep that margin for itself and its customers.
  • Apple could, as they did with iOS, deeply integrate the operating system and the design of the chip itself to both maximize efficiency and performance and also bring new features and capabilities to market.

The problem, as I saw it, was why bother? Sure, the A-series was catching up on single-thread, but Intel was still far ahead on multi-core performance, and that was before you got to desktop machines where pure performance didn’t need to be tempered by battery life concerns. More importantly, the cost of switching was significant; I wrote in early 2018:

  • First, Apple sold 260 million iOS devices over the last 12 months; that is a lot of devices over which to spread the fixed costs of a custom processor. During the same time period, meanwhile, the company only sold 19 million Macs; that’s a much smaller base over which to spread such an investment.
  • Second, iOS was built on the ARM ISA from the beginning; once Apple began designing its own chips (instead of buying them off the shelf) there was absolutely nothing that changed from a developer perspective. That is not the case on the Mac: many applications would be fine with little more than a recompile, but high-performance applications written at lower levels of abstraction could need considerably more work (this is the challenge with emulation as well: the programs that are the most likely to need the most extensive rewrites are those that are least tolerant of the sort of performance slowdowns inherent in emulation).
  • Third, the PC market is in the midst of its long decline. Is it really worth all of the effort and upheaval to move to a new architecture for a product that is fading in importance? Intel may be expensive and may be slow, but it is surely good enough for a product that represents the past, not the future.
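That second point is worth making concrete. Here is a minimal sketch of the kind of architecture-specific code that does not survive a simple recompile; the function and file are hypothetical, but the pattern is common in codecs, games, and scientific software:

    #include <stddef.h>

    /* Portable scalar version: moving this to ARM is just a recompile. */
    static void add_arrays_scalar(const float *a, const float *b,
                                  float *out, size_t n) {
        for (size_t i = 0; i < n; i++)
            out[i] = a[i] + b[i];
    }

    #if defined(__x86_64__)
    #include <immintrin.h>
    /* x86-only hot path: SSE intrinsics have no meaning on ARM and
       must be rewritten by hand for the new architecture. */
    static void add_arrays(const float *a, const float *b,
                           float *out, size_t n) {
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);
            __m128 vb = _mm_loadu_ps(b + i);
            _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
        }
        for (; i < n; i++)
            out[i] = a[i] + b[i];
    }
    #elif defined(__aarch64__)
    #include <arm_neon.h>
    /* The ported path: the same algorithm, but every intrinsic has
       been translated to its NEON counterpart. */
    static void add_arrays(const float *a, const float *b,
                           float *out, size_t n) {
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            float32x4_t va = vld1q_f32(a + i);
            float32x4_t vb = vld1q_f32(b + i);
            vst1q_f32(out + i, vaddq_f32(va, vb));
        }
        for (; i < n; i++)
            out[i] = a[i] + b[i];
    }
    #else
    #define add_arrays add_arrays_scalar
    #endif

The scalar version ports with a recompile; the intrinsic paths are exactly the hand-written code that must be translated to the new instruction set, and that tolerates emulation overhead least.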

However, the takeaway from the Daily Update where I wrote that was that I was changing my mind: ARM Macs felt inevitable, because of changes at both Apple and Intel.

Apple and Intel

A year before that Daily Update, Apple held a rather remarkable event for five writers where the company seemed to admit it had neglected the Mac; from TechCrunch:

Does Apple care about the Mac anymore? That question is basically the reason that we’re here in this room. Though Apple says that it was doing its best to address the needs of pro users, it obviously felt that the way the pro community was reacting to its moves (or delays) was trending toward what it feels is a misconception about the future of the Mac. “The Mac has an important, long future at Apple, that Apple cares deeply about the Mac, we have every intention to keep going and investing in the Mac,” says Schiller in his most focused pitch about whether Apple cares about the Mac anymore, especially in the face of the success of the iPhone and iPad. “And if we’ve had a pause in upgrades and updates on that, we’re sorry for that — what happened with the Mac Pro, and we’re going to come out with something great to replace it. And that’s our intention,” he says, in as clear a mea culpa as I can ever remember from Apple.

Yes, Schiller was talking about the Mac Pro, which is what the event was nominally about, but that wasn’t the only Mac long in the tooth, and the ones that had been updated, particularly the laptops, were years into the butterfly keyboard catastrophe; meanwhile there was a steady stream of new iPhones and iPads with new industrial designs and those incredible chips.

Those seemingly neglected Macs, meanwhile, were stuck with Intel, and Apple saw the Intel roadmap that has only recently become apparent to the world: it has been a map to nowhere. In 2015 Intel started shipping 14nm processors in volume from fabs in Oregon, Arizona, and Ireland; chip makers usually build fabs once per node size, seeking to amortize the tremendous expense over the entire generation, before building new fabs for new nodes. Three years later, though, Intel had to build more 14nm capacity after hiring Samsung to help it build chips; the problem is that its 10nm chips were delayed by years (the company just started shipping 10nm parts in volume this year).

Meanwhile, TSMC was racing ahead, with 7nm chips in 2017, and 5nm chip production starting this year; this, combined with Apple’s chip design expertise, meant that as of last fall iPhone chips were comparable in speed to the top-of-the-line iMac chips. From Anandtech:

We’ve now included the latest high-end desktop CPUs as well to give context as to where the mobile is at in terms of absolute performance. Overall, in terms of performance, the A13 and the Lightning cores are extremely fast. In the mobile space, there’s really no competition as the A13 posts almost double the performance of the next best non-Apple SoC. The difference is a little bit less in the floating-point suite, but again we’re not expecting any proper competition for at least another 2-3 years, and Apple isn’t standing still either. Last year I’ve noted that the A12 was margins off the best desktop CPU cores. This year, the A13 has essentially matched the best that AMD and Intel have to offer – in SPECint2006 at least. In SPECfp2006 the A13 is still roughly 15% behind.
[Chart: SPEC2006 performance of the Apple A13 versus the best AMD and Intel desktop CPUs, via AnandTech]

The Intel Core i9-9900K Processor in those charts launched at a price of $999 before settling in at a street price of around $520; it remains the top-of-the-line option for the iMac, at an upgrade price of $500 above the Intel Core i5-8600K, a chip that launched at $420 and today costs $220. The A13, meanwhile, probably costs between $50 and $60.

This is what made next week’s reported announcement feel inevitable: Apple’s willingness to invest in the Mac seems to have truly turned around in 2017 — not only has the promised Mac Pro launched, but so has an entirely new MacBook line with a redesigned keyboard — even as the cost of sticking with Intel has become not simply about money but also performance.

The Implications of ARM

The most obvious implication of Apple’s shift — again, assuming the reporting is accurate — is that ARM Macs will have superior performance to Intel Macs on both a per-watt basis and a per-dollar basis. That means that the next version of the MacBook Air, for example, could be cheaper even as it has better battery life and far better performance (the i3-1000NG4 Intel processor that is the cheapest option for the MacBook Air is not yet for public sale; it probably costs around $150, with far worse performance than the A13).

What remains to be seen is just how quickly Apple will push ARM into its higher-end computers. Again, the A13 is already competitive with some of Intel’s best desktop chips, and the A13 is tuned for mobile; what sort of performance gains can Apple uncover by building for more generous thermal envelopes? It is not out of the question that Apple, within a year or two, will have by far the best-performing laptops and desktop computers on the market, just as it does in mobile.

This is where Apple’s tight control of its entire stack can really shine: first, because Apple has always been less concerned with backwards compatibility than Microsoft, it has been able to shepherd its developers into a world where this sort of transition should be easier than it would be on, say, Windows; notably the company has over the last decade deprecated its Carbon API and ended 32-bit support with the current version of macOS. Even the developers that have the furthest to go are well down the road.

Second, because Apple makes its own devices, it can more quickly leverage its ability to design custom chips for macOS. Again, I’m not completely certain the economics justify this — perhaps Apple sticks with one chip family for both iOS and the Mac — but if it is going through the hassle of this change, why not go all the way (notably, one thing Apple does not need to give up is Windows support: Windows has run on ARM for the last decade, and I expect Boot Camp to continue, and for virtualization offerings to be available as well; whether this will be as useful as Intel-based virtualization remains to be seen).

What is most interesting, and perhaps most profound, is the potential impact on the server market, which is Intel’s bread and butter. Linus Torvalds, the creator and maintainer of Linux, explained why he was skeptical about ARM on the server in 2019:

Some people think that “the cloud” means that the instruction set doesn’t matter. Develop at home, deploy in the cloud. That’s bullshit. If you develop on x86, then you’re going to want to deploy on x86, because you’ll be able to run what you test “at home” (and by “at home” I don’t mean literally in your home, but in your work environment). Which means that you’ll happily pay a bit more for x86 cloud hosting, simply because it matches what you can test on your own local setup, and the errors you get will translate better… Without a development platform, ARM in the server space is never going to make it. Trying to sell a 64-bit “hyperscaling” model is idiotic, when you don’t have customers and you don’t have workloads because you never sold the small cheap box that got the whole market started in the first place… The only way that changes is if you end up saying “look, you can deploy more cheaply on an ARM box, and here’s the development box you can do your work on”. Actual hardware for developers is hugely important. I seriously claim that this is why the PC took over, and why everything else died… It’s why x86 won. Do you really think the world has changed radically?

ARM on Mac, particularly for developers, could be a radical change indeed that ends up transforming the server space. On the other hand, the shift to ARM could backfire on Apple: Windows, particularly given the ability to run a full-on Linux environment without virtualization, combined with Microsoft’s developer-first approach, is an extremely attractive alternative that many developers just don’t know about — but they may be very interested in learning more if that is the price of running x86 like their servers do.

Intel’s Failure

What is notable about this unknown — will developer preferences for macOS lead servers to switch to ARM (which, remember, is cheaper and likely more power-efficient in servers as well), or will the existing x86 installed base drive developers to Windows/Linux? — is that the outcome is out of Intel’s control.

What started Intel’s fall from king of the industry to observer of its fate was its momentous 2005 decision to not build chips for the iPhone; then-CEO Paul Otellini told Alexis Madrigal at The Atlantic what happened:

“We ended up not winning it or passing on it, depending on how you want to view it. And the world would have been a lot different if we’d done it,” Otellini told me in a two-hour conversation during his last month at Intel. “The thing you have to remember is that this was before the iPhone was introduced and no one knew what the iPhone would do…At the end of the day, there was a chip that they were interested in that they wanted to pay a certain price for and not a nickel more and that price was below our forecasted cost. I couldn’t see it. It wasn’t one of these things you can make up on volume. And in hindsight, the forecasted cost was wrong and the volume was 100x what anyone thought.”

What is so disappointing about this excuse is that it runs directly counter to what made Intel great; in 1965, Bob Noyce, then at Fairchild Semiconductor, shocked the semiconductor world by announcing that Fairchild would price its integrated circuit products at $1, despite the fact it cost Fairchild far more than that to produce them. What Noyce understood is that the integrated circuit market was destined to explode, and that by setting a low price Fairchild would not only accelerate that growth, but also drive down its costs far more quickly than it might have otherwise (chips, remember, are effectively zero marginal cost items; the primary costs are the capital costs of setting up manufacturing lines).
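The arithmetic behind Noyce’s bet is worth spelling out; the figures below are hypothetical, chosen only to illustrate the shape of the economics:

    % Chip unit economics: near-zero marginal cost means the fixed
    % cost of the fab dominates, so unit cost collapses with volume.
    \[
      \text{unit cost} \approx \frac{\text{fixed capital cost}}{\text{volume}} + \text{marginal cost}
    \]
    % For a hypothetical $1 billion fab:
    %   at 10 million chips: $1B / 10M = $100 per chip
    %   at 1 billion chips:  $1B / 1B  = $1 per chip

Underestimate volume by 100x, as Otellini admitted Intel did, and you overestimate unit cost by roughly 100x as well.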

That is the exact logic that Otellini “couldn’t see”, so blinded was he by the seemingly dominant PC paradigm and Intel’s enviable profit margins.

Worse, those volumes went to manufacturers like TSMC instead, funding the research and development and capital investment that have propelled TSMC into the fabrication lead.

CORRECTION: A source suggested that this sentence was wrong:

What started Intel’s fall from king of the industry to observer of its fate was its momentous 2005 decision to not build chips for the iPhone.

XScale, Intel’s ARM chips, were engineered to be fast, not power-efficient, and Intel wasn’t interested in changing its approach; this is particularly striking given that Intel had just recovered from having made the same mistake with the Pentium 4 generation of its x86 chips. Moreover, the source added, Intel wasn’t interested in doing any sort of customization for Apple: its attitude was take-it-or-leave-it for, again, a chip that wasn’t even optimized correctly. A better sentence would have read:

Intel’s fall from king of the industry to observer of its fate was already in motion by 2005: despite the fact Intel had an ARM license for its XScale business, the company refused to focus on power efficiency and preferred to dictate designs to customers like Apple, contemplating their new iPhone, instead of trying to accommodate them (like TSMC).

What is notable is that this doesn’t change the sentiment: the root cause was Intel’s insistence on integrating design and manufacturing, certain that its then-lead in the latter would leave customers no choice but to accept the former, and pay through the nose to boot. It was a view of the world that was, as I wrote, “blinded…by the seemingly dominant PC paradigm and Intel’s enviable profit margins.”

My apologies for the error, but also deep appreciation for the correction.

That is why, last month, it was TSMC that was the target of a federal government-led effort to build a new foundry in the U.S.; I explained in Chips and Geopolitics:

Taiwan, you will note, is just off the coast of China. South Korea, home to Samsung, which also makes the highest end chips, although mostly for its own use, is just as close. The United States, meanwhile, is on the other side of the Pacific Ocean. There are advanced foundries in Oregon, New Mexico, and Arizona, but they are operated by Intel, and Intel makes chips for its own integrated use cases only.

The reason this matters is because chips matter for many use cases outside of PCs and servers — Intel’s focus — which is to say that TSMC matters. Nearly every piece of equipment these days, military or otherwise, has a processor inside. Some of these don’t require particularly high performance, and can be manufactured by fabs built years ago all over the U.S. and across the world; others, though, require the most advanced processes, which means they must be manufactured in Taiwan by TSMC.

This is a big problem if you are a U.S. military planner. Your job is not to figure out if there will ever be a war between the U.S. and China, but to plan for an eventuality you hope never occurs. And in that planning the fact that TSMC’s foundries — and Samsung’s — are within easy reach of Chinese missiles is a major issue.

I think the focus on TSMC was correct, and I am encouraged by TSMC’s decision to build a foundry in Arizona, even if they are moving as slowly as they can on a relatively small design; at the same time, what a damning indictment of Intel. The company has not simply lost its manufacturing lead, and is not simply a helpless observer of a potentially devastating shift in developer mindshare from x86 to ARM; when its own country needed to subsidize the building of a foundry for national security reasons, Intel wasn’t even a realistic option, while a company from a territory claimed by China was.

To that end, while I am encouraged by and fully support this bill by Congress to appropriate $22.8 billion in aid to semiconductor manufacturers (the amount should be higher), I wonder if it isn’t time for someone to start the next great U.S. chip manufacturing company. No, it doesn’t really make economic sense, but this is an industry where aggressive federal industrial policy can and should make a difference, and it’s hard to accept the idea of taxpayer billions going to a once-great company that has long since forgotten what made it great. Intel has prioritized profit margins and perceived lower risk for decades, and it is only now that the real risks of caring about finances more than fabrication are becoming apparent, for both Intel and the United States.