This weekend, Intel released preliminary information on its newest laptop part, the Xe Max discrete GPU, which works in tandem with Tiger Lake’s integrated Iris Xe GPU.
We first heard about Xe Max at Acer’s Next 2020 launch event, where it was listed as a part of the upcoming Swift 3x laptop—which will only be available in China. The new GPU will also be available in the Asus VivoBook Flip TP470 and the Dell Inspiron 15 7000 2-in-1.
Intel Xe Max vs. Nvidia MX 350
During an extended product briefing, Intel stressed to us that the Xe Max beats Nvidia’s entry-level MX 350 GPU in just about every conceivable metric. In another year, this would have been exciting—but the Xe Max is only slated to appear in systems that feature Tiger Lake processors, whose Iris Xe integrated GPUs already handily outperform the MX 350 in both Intel’s tests and our own.
The confusion here largely springs from mainstream consumer expectations of a GPU versus what Intel’s doing with the Xe Max. Our GPU tests largely revolve around gaming, using 3DMark’s well-known benchmark suite, which includes fps-focused gaming tests such as Time Spy and Night Raid. Intel’s expectations for the Xe Max instead revolve, almost entirely, around content creation with a side of machine learning and video encoding.
Xe Max is, roughly speaking, the same 96-Execution-Unit (EU) GPU found in the Tiger Lake i7-1185G7 CPU we’ve tested already this year. The major differences, beyond not being on-die with the CPU, are a higher clock rate, dedicated RAM, and a separate TDP budget.
Tiger Lake’s Iris Xe has a peak clock rate of 1.35GHz, and it shares the CPU’s TDP constraints. Iris Xe Max has its own 25W TDP and a higher peak clock rate of 1.65GHz. It also has its own 4GiB of dedicated RAM—though that RAM is the same LPDDR4X-4266 that Tiger Lake itself uses, which is something of a first for discrete graphics and might lead to better power efficiency.
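As a quick sanity check, the peak-clock gap alone implies roughly a 1.2x ceiling on Xe Max’s advantage over the integrated part. This is a deliberate simplification—it assumes both GPUs sustain their peak clocks and ignores memory bandwidth and thermals—but it lines up with the real-world gains Intel quotes:

```python
# Back-of-the-envelope comparison of Xe Max vs. integrated Iris Xe.
# Assumes both parts sustain their peak clocks and are otherwise
# identical (same 96 EUs, same LPDDR4X-4266 memory) -- a simplification.
iris_xe_peak_ghz = 1.35  # integrated; shares the CPU's TDP budget
xe_max_peak_ghz = 1.65   # discrete; its own 25W TDP

ratio = xe_max_peak_ghz / iris_xe_peak_ghz
print(f"peak-clock ratio: {ratio:.2f}x")  # peak-clock ratio: 1.22x
```

That ~1.22x theoretical ceiling is consistent with the roughly 1.2x Gigapixel AI speedup Intel reported over a Tiger Lake laptop with Iris Xe graphics alone.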
An enhancement to Iris Xe, not a replacement
Intel’s marketing material promotes the idea of workloads that run on both the integrated Iris Xe GPU and the discrete Xe Max GPU simultaneously, using the term “additive AI.” However, this shouldn’t be confused with traditional multi-GPU rendering schemes such as AMD CrossFire or Nvidia SLI—Xe Max and Iris Xe won’t be teaming up to handle display duties on a single display.
Intel’s version of “additive AI” refers instead to workloads that can be easily divvied up to run on separate GPUs with separate memory spaces. Such tasks can be assigned to both iGPU and dGPU simultaneously, and, since the two have very similar performance profiles, there’s little risk of the slightly slower integrated GPU dragging down the overall workload.
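To illustrate the idea (this is a hypothetical sketch, not Intel’s scheduling API—the device names and throughput figures are ours), splitting a batch of independent tasks across two near-equal GPUs can be as simple as partitioning the work in proportion to each device’s relative speed:

```python
# Hypothetical sketch of "additive" workload splitting across two GPUs
# with separate memory spaces. Device names and relative throughputs
# are illustrative, not Intel's actual numbers or API.
def split_batch(items, throughputs):
    """Partition items proportionally to each device's relative speed."""
    total = sum(throughputs.values())
    chunks, start = {}, 0
    for device, speed in throughputs.items():
        n = round(len(items) * speed / total)
        chunks[device] = items[start:start + n]
        start += n
    # Any rounding remainder goes to the last device listed.
    last = list(throughputs)[-1]
    chunks[last] += items[start:]
    return chunks

# Two GPUs with similar performance profiles (iGPU vs. dGPU):
work = list(range(100))
plan = split_batch(work, {"iris_xe": 1.0, "xe_max": 1.2})
print({device: len(chunk) for device, chunk in plan.items()})
```

Because the two devices are so closely matched, even this naive proportional split keeps both busy without one finishing long before the other—which is the whole appeal of pairing Xe Max with Iris Xe rather than with a much faster or slower partner.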
Sharpening images with Gigapixel AI
One of Intel’s more impressive demonstrations involved using Topaz Gigapixel AI to sharpen grainy images. If you’re not familiar with Gigapixel AI, it’s essentially the modern-world, real-life version of Blade Runner‘s infamous enhance scene. Of course, Topaz can’t add genuine information to a photo that wasn’t already there—but it can, and does, use machine learning to produce information that looks like it should belong.
In Intel’s demonstration, an Xe Max-equipped laptop used Gigapixel AI to enhance a very large, grainy photo seven times faster than a similar laptop equipped with an Nvidia MX 350 GPU could. While that was impressive, we pressed Intel for comparisons to hardware an Xe Max-equipped laptop might more reasonably compete with “in the wild.”
After a day or so, an Intel engineer got back to us and said we could expect an Xe Max-equipped laptop to complete Gigapixel AI workloads seven times as fast as an MX 350, five times as fast as a GTX 1650, and 1.2 times as fast as a Tiger Lake laptop with Iris Xe graphics alone.
The engineer also noted that Xe Max has considerable untapped potential that further optimization could unlock—and that in MLPerf ResNet-50 workloads, using INT8 data, Intel is already seeing 1.4x improvements over standalone Tiger Lake systems without Xe Max.
Xe Max vs. Iris Xe
The real question—for now, at least—is who will benefit enough from an Xe Max-equipped laptop to justify the additional cost. We don’t think either gamers or general-purpose computing users should get too worked up about it yet—neither workload is likely to see much benefit.
Xe Max doesn’t always accelerate gaming workloads at all. As the slide above demonstrates, running Metro Exodus on the Xe Max boosts performance by roughly 7fps—but running DOTA 2 on it would decrease performance by roughly the same amount. Luckily, Intel’s Xe Max implementation shifts workloads automatically: an Xe Max-equipped Tiger Lake system should route each game to the better-suited GPU, without needing the gamer to know which is which.
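Conceptually—Intel hasn’t published the mechanism, so this is our own hypothetical rendering of it—that per-title routing amounts to a lookup against measured performance, falling back to the integrated GPU whenever the discrete part doesn’t actually help. The absolute fps figures below are illustrative stand-ins; only the rough ±7fps deltas come from Intel’s slide:

```python
# Hypothetical per-game GPU dispatch, mimicking the driver behavior
# described above. Absolute fps values are illustrative, not Intel's
# published numbers; only the ~7fps deltas reflect Intel's slide.
MEASURED_FPS = {
    # game: (iris_xe_fps, xe_max_fps)
    "Metro Exodus": (23, 30),  # dGPU roughly 7fps faster
    "DOTA 2": (90, 83),        # dGPU roughly 7fps slower
}

def pick_gpu(game):
    """Route a title to whichever GPU benchmarks faster for it."""
    igpu_fps, dgpu_fps = MEASURED_FPS[game]
    return "xe_max" if dgpu_fps > igpu_fps else "iris_xe"

print(pick_gpu("Metro Exodus"))  # xe_max
print(pick_gpu("DOTA 2"))        # iris_xe
```

The upshot for gamers: the decision is invisible, and the worst case is simply getting the same performance Iris Xe would have delivered anyway.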
For machine learning, the story is more positive—Xe Max does, at least, seem to consistently outperform the integrated Tiger Lake GPU. But Tiger Lake’s integrated-graphics success story takes much of the shine off of Xe Max. A 20- to 40-percent performance increase for a few hundred dollars extra isn’t a terrible proposition—but it’s a considerably smaller boost than most people expect from adding a discrete GPU.
For now, users who encode a lot of video throughout their day are likely the best target for Xe Max. For this one very specific task, an Xe Max-equipped laptop runs rings around a Tiger Lake laptop without it—or even a laptop with an otherwise vastly more powerful RTX 2080 Super.