Some time ago, in an Apple campus building, a group of engineers got together. Isolated from others in the company, they took the guts of old MacBook Air laptops and connected them to their own prototype boards with the goal of building the very first machines that would run macOS on Apple’s own, custom-designed, ARM-based silicon.
To hear Apple’s Craig Federighi tell the story, it sounds a bit like a callback to Steve Wozniak in a Silicon Valley garage so many years ago. And this week, Apple finally took the big step that those engineers were preparing for: the company released the first Macs running on Apple Silicon, beginning a transition of the Mac product line away from Intel’s CPUs, which have been industry-standard for desktop and laptop computers for decades.
In a conversation with Apple SVP of Software Engineering Craig Federighi, SVP of Worldwide Marketing Greg Joswiak, and SVP of Hardware Technologies Johny Srouji shortly after the M1 announcement, we learned that—unsurprisingly—Apple has been planning this change for many, many years.
Ars spoke at length with these execs about the architecture of the first Apple Silicon chip for Macs (the Apple M1). While we had to get in a few inquiries about the edge cases of software support, there was really just one big question on our minds: What are the reasons behind Apple’s radical change?
Why? And why now?
We started with that big idea: “Why? And why now?” We got a very Apple response from Federighi:
The Mac is the soul of Apple. I mean, the Mac is what brought many of us into computing. And the Mac is what brought many of us to Apple. And the Mac remains the tool that we all use to do our jobs, to do everything we do here at Apple. And so to have the opportunity… to apply everything we’ve learned to the systems that are at the core of how we live our lives is obviously a long-term ambition and a kind of dream come true.
“We want to create the best products we can,” Srouji added. “We really needed our own custom silicon to deliver truly the best Macs we can deliver.”
Apple began using x86 Intel CPUs in 2006 after it seemed clear that PowerPC (the previous architecture for Mac processors) was reaching the end of the road. For the first several years, those Intel chips were a massive boon for the Mac: they enabled interoperability with Windows and other platforms, making the Mac a much more flexible computer. They allowed Apple to focus more on increasingly popular laptops in addition to desktops. They also made the Mac more popular overall, in parallel with the runaway success of the iPod, and soon after, the iPhone.
And for a long time, Intel’s performance was top-notch. But in recent years, Intel’s CPU roadmap has been less reliable, both in terms of performance gains and consistency. Mac users took notice. But all three of the men we spoke with insisted that wasn’t the driving force behind the change.
“This is about what we could do, right?” said Joswiak. “Not about what anybody else could or couldn’t do.”
“Every company has an agenda,” he continued. “The software company wishes the hardware companies would do this. The hardware companies wish the OS company would do this, but they have competing agendas. And that’s not the case here. We had one agenda.”
When the decision was ultimately made, the circle of people who knew about it was initially quite small. “But those people who knew were walking around smiling from the moment we said we were heading down this path,” Federighi remembered.
Srouji described Apple as being in a special position to make the move successfully: “As you know, we don’t design chips as merchants, as vendors, or generic solutions—which gives the ability to really tightly integrate with the software and the system and the product—exactly what we need.”
Designing the M1
What Apple needed was a chip that took the lessons learned from years of refining mobile systems-on-a-chip for iPhones, iPads, and other products, then added all sorts of additional functionality to address the expanded needs of a laptop or desktop computer.
“During the pre-silicon, when we even designed the architecture or defined the features,” Srouji recalled, “Craig and I sit in the same room and we say, ‘OK, here’s what we want to design. Here are the things that matter.’”
When Apple first announced its plans to launch the first Apple Silicon Mac this year, onlookers speculated that the iPad Pro’s A12X or A12Z chips were a blueprint and that the new Mac chip would be something like an A14X—a beefed-up variant of the chips that shipped in the iPhone 12 this year.
Not exactly so, said Federighi:
The M1 is essentially a superset, if you want to think of it relative to A14. Because as we set out to build a Mac chip, there were many differences from what we otherwise would have had in a corresponding, say, A14X or something.
We had done lots of analysis of Mac application workloads, the kinds of graphic/GPU capabilities that were required to run a typical Mac workload, the kinds of texture formats that were required, support for different kinds of GPU compute and things that were available on the Mac… just even the number of cores, the ability to drive Mac-sized displays, support for virtualization and Thunderbolt.
There are many, many capabilities we engineered into M1 that were requirements for the Mac, but those are all superset capabilities relative to what an app that was compiled for the iPhone would expect.
Srouji expanded on the point:
The foundation of many of the IPs that we have built and that became foundations for M1 to go build on top of it… started over a decade ago. As you may know, we started with our own CPU, then graphics and ISP and Neural Engine.
So we’ve been building these great technologies over a decade, and then several years back, we said, “Now it’s time to use what we call the scalable architecture.” Because we had the foundation of these great IPs, and the architecture is scalable with UMA.
Then we said, “Now it’s time to go build a custom chip for the Mac,” which is M1. It’s not like some iPhone chip that is on steroids. It’s a whole different custom chip, but we do use the foundation of many of these great IPs.
Unified memory architecture
UMA stands for “unified memory architecture.” When potential users look at M1 benchmarks and wonder how it’s possible that a mobile-derived, relatively low-power chip is capable of that kind of performance, Apple points to UMA as a key ingredient for that success.
Federighi claimed that “modern computational or graphics rendering pipelines” have evolved, and they’ve become a “hybrid” of GPU compute, GPU rendering, image signal processing, and more.
UMA essentially means that all the components—a central processor (CPU), a graphics processor (GPU), a neural processor (NPU), an image signal processor (ISP), and so on—share one pool of very fast memory, positioned very close to all of them. This runs counter to a common desktop paradigm of, say, dedicating one pool of memory to the CPU and another to the GPU on the other side of the board.
When users run demanding, multifaceted applications, the traditional pipelines may end up losing a lot of time and efficiency moving or copying data around so it can be accessed by all those different processors. Federighi suggested Apple’s success with the M1 is partially due to rejecting this inefficient paradigm at both the hardware and software level:
We not only got the great advantage of just the raw performance of our GPU, but just as important was the fact that with the unified memory architecture, we weren’t moving data constantly back and forth and changing formats that slowed it down. And we got a huge increase in performance.
And so I think workloads in the past where it’s like, come up with the triangles you want to draw, ship them off to the discrete GPU and let it do its thing and never look back—that’s not what a modern computer rendering pipeline looks like today. These things are moving back and forth between many different execution units to accomplish these effects.
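The contrast Federighi describes can be sketched in miniature. What follows is a toy model, not Apple's implementation: in a split-memory design, each processor works on its own copy of the data, so every handoff costs a transfer; in a unified design, every stage reads and writes one shared buffer.

```python
# Conceptual sketch only -- a toy model of the two memory designs
# Federighi contrasts, not Apple's actual implementation.

def discrete_pipeline(frame: bytearray) -> bytearray:
    """Split-memory model: each stage copies data into its own pool."""
    gpu_pool = bytearray(frame)                        # copy CPU pool -> GPU pool
    gpu_pool = bytearray(b ^ 0xFF for b in gpu_pool)   # "GPU" stage: invert pixels
    cpu_pool = bytearray(gpu_pool)                     # copy the result back
    return cpu_pool

def unified_pipeline(frame: bytearray) -> bytearray:
    """Unified-memory model: every stage touches the same shared buffer."""
    for i in range(len(frame)):                        # same stage, in place
        frame[i] ^= 0xFF                               # no transfer, no reformat
    return frame
```

Both pipelines compute the same result; the difference is that the split-memory version pays for two full copies of the frame per pass, which is the overhead Federighi says UMA eliminates.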
That’s not the only optimization. For a few years now, Apple’s Metal graphics API has employed “tile-based deferred rendering,” which the M1’s GPU is designed to take full advantage of. Federighi explained:
Where old-school GPUs would basically operate on the entire frame at once, we operate on tiles that we can move into extremely fast on-chip memory, and then perform a huge sequence of operations with all the different execution units on that tile. It’s incredibly bandwidth-efficient in a way that these discrete GPUs are not. And then you just combine that with the massive width of our pipeline to RAM and the other efficiencies of the chip, and it’s a better architecture.
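The loop structure Federighi alludes to can be sketched as follows. This is a simplified, hypothetical illustration of tile-based rendering in general, not Metal's actual API: rather than sweeping each render pass across the whole frame, all passes run on one small tile while it sits in fast local memory, and only then does work move to the next tile.

```python
# Hypothetical sketch of tile-based rendering; tile size and pass
# signature are illustrative, not Metal's real interface.
TILE = 32  # tile edge in pixels, sized to fit fast on-chip memory

def render_tiled(width, height, passes):
    """Run every render pass over one tile before moving to the next,
    so each tile stays resident in fast local memory for all passes."""
    frame = [[0] * width for _ in range(height)]
    for ty in range(0, height, TILE):
        for tx in range(0, width, TILE):
            # The whole pass sequence executes while this tile is "hot".
            for render_pass in passes:
                for y in range(ty, min(ty + TILE, height)):
                    for x in range(tx, min(tx + TILE, width)):
                        frame[y][x] = render_pass(frame[y][x], x, y)
    return frame
```

An "old-school" GPU would instead run each pass over the entire frame before starting the next, touching slow external memory on every pass; the tiled ordering does the same work with far less memory bandwidth.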
The next step for Mac hardware
Some tech enthusiasts have expressed skepticism that what Apple has done with the M1 will scale far enough to unseat high-end desktops like a specced-out iMac or a Mac Pro.
It’s one thing to improve performance on low-end machines, where performance has not previously been a significant priority. It’s another entirely to challenge machines that were already built to deliver for professional workflows or gamers, the argument goes.
Unfortunately, Apple doesn’t tend to talk about upcoming products in any helpful detail, so that remains a question mark for doubters. “We set out with M1 as the first in a series of chips, and we designed it to solve this problem,” Federighi told us. “And I suggest that your readers judge us by: Apple put their mind to solving that problem, how did that work out?”
Despite Apple’s unwillingness to get specific, we do have some clues in the current lineup. This week, Apple launched three different Macs with the M1 inside: the MacBook Air and entry-level variants of the Mac mini desktop and the 13-inch MacBook Pro. For the most part, the chip is the same in all of them. (The low-end MacBook Air’s M1 has seven GPU cores instead of eight.) When we reviewed the Mac mini this week, we found that it beat all but the highest-end desktop machines in CPU tasks and was knocking on the door of fast discrete GPUs.
Users expect the MacBook Pro to provide more robust performance than the Air and have expressed some confusion as to how these machines are now differentiated. So we asked Apple how—apart from that single GPU core—the MacBook Pro offers better performance.
“It’s all about performance per watt, right? Power efficiency,” Srouji answered. He pointed out the thermal budget of the system can be different based on the form factor. “It can also be different based on the cooling you have or you don’t have. And that determines what we call sustainable peak performance.”
In other words, the MacBook Air and Pro should perform similarly when bursts of speed are needed, but the Pro can maintain that performance for longer—critical for many demanding workflows like video editing. In such scenarios, the Air would throttle sooner to stay cool. According to Apple, the M1 scales to better performance the more power you can give it.
To that point, Srouji showed a chart during his presentation on new Apple Silicon Macs a while back. “It wasn’t a marketecture chart,” Federighi said. “It was a real chart, and it showed performance at different power levels.”
Federighi said that anyone looking at the chart can see where the MacBook Air fits on the scale, “but you saw that line kept going out to the right, and when it did, the performance went up. That area to the right of that 10-watt line is the difference between being in a MacBook Air and being in a MacBook Pro.”
“And it gets bigger,” Srouji added.
Notably, Apple didn’t update the higher-end configurations of the MacBook Pro or Mac mini—the devices with pro-user-friendly 32GB or 64GB of RAM, or with four Thunderbolt 3 ports, or 4TB of storage.
“We designed M1 targeting a specific set of systems. Those are the systems we set out to design, and those are the ones we’re selling,” Federighi said when we asked why there weren’t any M1 MacBook Pros with 32 or 64GB of RAM. But Apple has already confirmed that it plans to move the entire lineup to Apple Silicon within a couple of years—including big performers like the Mac Pro.
Below: Benchmarks of the M1 from our Mac mini review.
The implication seems to be that this is merely the opening volley. These might be some of the fastest Macs yet released, but they are likely the slowest Macs with Apple Silicon the company will ever introduce. Unless huge changes are coming to the basic structure of the Mac product line, it only goes up from here.
(And FYI: we also asked if Apple plans to introduce cheaper Macs, on the assumption that using its own silicon is more economical. “We don’t do cheap—you know that,” Joswiak admitted. “Cheap is for other people, because we try to build a better product.”)
Of course, future hardware performance isn’t the only question that Mac users have about this transition. All three of the Apple executives we spoke with talked about the company’s focus on integrating hardware and software—and there’s a lot to consider in the latter.
The software side
A move to a new architecture brings the expectation that Macs will now natively run new kinds of software—and stop natively running other kinds, including the entire existing Mac software lineup.
That said, the new version of macOS includes Rosetta 2, which translates applications made for Intel Macs to run on Apple Silicon and the M1. We were surprised at how well it works; for more on that, read our review.
Further, Apple has shipped new, universal versions of each and every one of its currently maintained Mac apps, from Mail to Safari, Voice Memos, and Xcode. And some critical software-as-a-service companies like Microsoft and Adobe have announced plans to do the same. With users likely to still be running Intel Macs for years to come, universal apps will probably be the norm on the Mac for a long time.
But in addition to that, Apple’s shift in architecture opens the floodgates to software made for the iPhone and the iPad.
Bringing iPhone and iPad apps to the Mac
For the first time, apps for those platforms can run natively on the Mac. While the Mac is often associated with high-quality apps, it hasn’t seen the sheer scale of third-party software support that iOS and Windows get. Opening it up to iOS apps might be a promising way to close that gap, but it’s essential that users have the confidence that they’ll find only good experiences in the Mac App Store.
Last year, Apple introduced Project Catalyst, a framework meant to help developers fairly easily adapt iPad apps to work well on the Mac. So the tools are there, in theory. In our review this week, we found some neat examples of apps that work well in macOS, but we felt that there were far too many that don’t offer an ideal user experience.
According to Federighi, “about 90 percent of apps work well.” As for why some might not, he said:
There are a number of reasons an app might not work well. It might use technology that just isn’t available or sensible on the Mac. A gyroscope would be an excellent example. There are other apps that just for one reason or the other, they’re old apps and use some kind of SPI, make certain assumptions about the underlying execution environment that cause them not to work. So we do a certain amount of automated testing of existing titles to see if they crash and so forth. We’ll automatically screen them from the store if they have those problems.
For “top titles,” Apple also does a manual review, and apps that have been through that process in addition to the automated review are presented to users slightly differently in the Mac App Store.
It’s important to note exactly how these apps end up on the App Store. Essentially, Apple sends out a developer agreement that active developers must sign annually to continue releasing and supporting apps on the App Store. According to Federighi, “Part of that sign-off said, essentially, ‘Do you want to make your apps available for [Apple Silicon Macs]?’ So they had an option right there to opt out, and they can continue to opt out at any time.”
As time goes on, we’re likely to see some developers put in some effort to make their apps play nicely on the Mac. It’s early days still for Apple Silicon, but within a few years, many millions of Macs with the M1 or M1 successors or siblings will be on the market. The more there are, the easier it will be for developers to justify the effort. Time will tell how quickly that process moves.
What about Windows?
But while Apple Silicon Macs can run existing Mac, iPhone, and iPad software, the new architecture cannot immediately run applications built for x86 operating systems besides macOS, and Rosetta 2 doesn’t offer any help on this front. The ability to run Windows software was a small part of the success of Intel-based Macs, so some users—particularly those with certain professional workflows—will see that as a loss.
We asked what an Apple Silicon workflow will look like for a technologist who lives in multiple operating systems simultaneously. Federighi pointed out that the M1 Macs do use a virtualization framework that supports products like Parallels or VMware, but he acknowledged that these would typically virtualize other ARM operating systems.
“For instance, running ARM Linux of many vintages runs great in virtualization on these Macs. Those in turn often have a user mode x86 emulation in the same way that Rosetta does, running on our kernel in macOS,” he explained.
While running Linux is important for many, other users are asking about Windows. Federighi pointed to Windows in the cloud as a possible solution and mentioned CrossOver, which is capable of “running both 32- and 64-bit x86 Windows binaries under a sort of WINE-like emulation layer on these systems.” But CrossOver’s emulation approach is not as consistent as what we’ve enjoyed in virtualization software like Parallels or VMware on Intel Macs, so there may still be hills to climb ahead.
As for Windows running natively on the machine, “that’s really up to Microsoft,” he said. “We have the core technologies for them to do that, to run their ARM version of Windows, which in turn of course supports x86 user mode applications. But that’s a decision Microsoft has to make, to bring to license that technology for users to run on these Macs. But the Macs are certainly very capable of it.”
It’s likely to be a while before we see how the Windows-on-Apple-Silicon-Macs future plays out. In the meantime, though, Apple plans to continue to provide software updates to Intel-based Macs. Federighi said:
From a software point of view, we haven’t created a branch of macOS. There’s not the version of macOS for M1-based Macs and a different version of macOS for Intel. They’re literally the same installer. It’s the same source tree. It’s the same OS we’re building every night. It’s a single project, and that will continue to be the case.
So as we build next year’s [major macOS release] and so forth, we’re building it as a universal OS that works on both systems. And so, if you buy an Intel Mac today, or if you already own one, you’re going to continue—just as you would have expected—getting free macOS upgrades for years to come.
We asked if there will be new Intel Mac hardware launches, too. Joswiak responded:
When we said we would support Intel systems for years to come, that was talking about the operating system… What we did say from a system standpoint, is that we still had Intel systems that were in the pipeline, that we were yet to introduce. And certainly that was so. The very next month, we introduced an Intel-based iMac.
It’s clear, then, that Apple Silicon is not just part of the Mac’s future plans. Beyond ongoing software support for Intel devices, it will soon drive the entirety of the Mac strategy.
When the decision to move to Apple-designed chips for the Mac was made those many years ago, Federighi said the team “kind of drew an X on the calendar and said, ‘We’re going to do this.’” He added that years later, the goal was met “pretty much the day we picked.”
Recalling the whole ordeal, Srouji said:
People have been very passionate working with this project, the whole thing. Not only M1—the software, the system, putting all this together. It’s been hard, but most rewarding. We love Mac. We love computers, building computers. I think we’re making history, and that’s how our engineers take it. And this is a year that has had many challenges, by the way. And despite all of these challenges, we nailed it, we delivered on time. It’s something I’m going to remember for the rest of my life, how rewarding and hard it was.
To a similar point, Federighi said, “There was plenty of blood, sweat, and tears, but I’d say at some level we know how to do this. We knew what we were getting into.”
“We’ve done this before,” added Joswiak, referring to previous transitions to PowerPC and to Intel. But this time, it’s all baked at home—and that makes it a different kind of victory for Apple. “We are giddy. We are excited about it,” he said. “This has been one heck of a week, and our enthusiasm for this is incredibly genuine.”
Years after Apple engineers gathered in a room to modify MacBook Airs to become the first Apple Silicon Macs, the company delivered the culmination of all that tinkering—well, the first step of the culmination, anyway.
That might conjure the mythology of Woz in a garage for some, but it’s a completely different world now, of course, and a very different Apple. We’re entering a new era for the Mac—and we don’t yet know what that era will look like.
For now, though, Srouji seemed plenty confident. “There were many moments where it was hard and tough,” he admitted. “But me personally, I never doubted that the decision we made was the right decision.”
Listing image by Aurich Lawson / Apple