The release of Intel’s 4th generation of Xeon processors, code-named Sapphire Rapids, is fast approaching, with the current plan being a release in the February or March timeframe. It is easy to see Sapphire Rapids as just another “bigger, better, faster” release, but in my mind it isn’t, for reasons I will develop in this post.
Getting the obvious out of the way: yes, Sapphire Rapids is “bigger, better, faster” than its predecessor, Ice Lake. Some of the information leaked here paints an interesting picture around the “faster” capabilities of Sapphire Rapids. Clock-speed-wise, for instance, Sapphire Rapids isn’t really faster than Ice Lake: the fastest Platinum Ice Lake has a base clock speed of 2.8GHz, and the fastest Platinum Sapphire Rapids currently has an estimated base clock speed of 2.8GHz as well. I use the word “estimated” because, leaked information notwithstanding, nothing is certain until Sapphire Rapids actually ships.
Where Sapphire Rapids packs a much “bigger” punch than Ice Lake is in core count. Ice Lake tops out at 40 cores, whereas Sapphire Rapids is estimated to top out at 60 cores, a 50% increase in core count. That is definitely not insignificant from a performance perspective, but, as nothing in life is free, the cost in clock speed for all these cores is significant: the 60-core Platinum CPU looks to have a base clock speed of 1.9GHz, whereas the 40-core Ice Lake processor runs at 2.3GHz. The bottom line is that, nowadays, most applications prefer more cores at a lower clock speed over fewer cores at a higher clock speed.
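To see why the trade often favors cores, here is a back-of-the-envelope sketch using the figures above. The `aggregate_ghz` helper is hypothetical and deliberately naive: it multiplies core count by base clock and ignores IPC, turbo behavior, and memory bandwidth, so treat it as a rough illustration, not a benchmark.

```python
# Naive cores x base-clock product for the two top bins discussed above.
def aggregate_ghz(cores: int, base_clock_ghz: float) -> float:
    """Total base-clock cycles available per nanosecond across all cores."""
    return cores * base_clock_ghz

ice_lake = aggregate_ghz(40, 2.3)         # top 40-core Ice Lake bin
sapphire_rapids = aggregate_ghz(60, 1.9)  # estimated 60-core Sapphire Rapids bin

print(round(ice_lake, 1))                        # 92.0
print(round(sapphire_rapids, 1))                 # 114.0
print(round(sapphire_rapids / ice_lake - 1, 2))  # 0.24
```

Even with the lower base clock, the 60-core part comes out roughly 24% ahead on this crude aggregate, which is exactly why throughput-oriented workloads tend to prefer the extra cores.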
Talking about cores, Intel has completely changed its die architecture with Sapphire Rapids. All previous generations of Intel Xeon Scalable processors were monolithic: the I/O controller, the memory controller, and the different levels of cache all lived on the same die. With Sapphire Rapids, Intel is moving away from monolithic dies to a tile-based architecture, as shown in the picture below.
According to leaked information, each tile will have access to the resources of every other tile, effectively making the processor look monolithic from the outside. Each tile also contains everything: cores, a memory controller, an I/O controller, and cache. It will be interesting to see whether Intel realizes the same benefits from its modular architecture as AMD did from its “chiplet” architecture.
Another interesting aspect of Sapphire Rapids is the “hyper-specialization” of the CPU lineup. The Sapphire Rapids lineup is divided into nine categories:
- P – Cloud-IaaS
- V – Cloud-SaaS
- M – Media Transcode
- H – DB & Analytics
- N – Network/5G/Edge (High TPT/Low Latency)
- S – Storage & HCI
- T – Long-Life Use/High Tcase
- U – 1-Socket
- Q – Liquid Cooling
Each category leverages different key features and accelerators to boost performance for its target workloads. This is the first significant paradigm shift brought by the Sapphire Rapids release. Time will tell whether customers truly see the value in this “hyper-specialization” or whether they prefer the more generalist lineups of previous Intel generations and of AMD today.
One of the categories caught my eye: the Q category, which stands for liquid cooling. For the first time, Intel is releasing a CPU that requires liquid cooling in order to enable higher performance. Related to liquid cooling, as shown in the leaked article, one characteristic of the top Sapphire Rapids bins is their TDP: 350W, up from 270W for the top Ice Lake bin. That is a 30% increase in TDP, which will have significant consequences from a heat perspective, and which in turn explains the need for a liquid-cooled-only CPU to harness more performance out of the platform.
In previous posts (https://www.engineeringtechnologists.com/post/the-chilling-effect-of-dell-smart-cooling and https://www.engineeringtechnologists.com/post/liquid-cooling-in-the-datacenter-why-what-how), I talked about the advent of liquid cooling in the datacenter. With the Q-labelled bins, Intel is sending a clear signal to the industry that, going forward, if you want top performance out of its processors, liquid cooling will be a requirement, and based on the graph below, TDP is not going down any time soon:
So, liquid cooling is here to stay and will start gaining a foothold in datacenters across the world.
The “hyper-specialization” of processors is the first shift brought by Sapphire Rapids, but it isn’t the only one. Which brings me to what I consider the biggest paradigm shift (always keep the best for last, right? 😊). Before I get to it, I need to go back in history.
Since the advent of the first PC 40 years ago, despite all the increases in performance and technology, one thing has always been true: the processor and the memory have always been collocated on the motherboard. For as long as most of us can remember, motherboards have had CPU socket(s) with RAM slots next to them. Sapphire Rapids is about to break that paradigm. One of the universal truths of computing is about to no longer be true.
Sapphire Rapids breaks that paradigm by implementing 2 separate technologies:
– High Bandwidth Memory, aka HBM,
– Compute Express Link, aka CXL.
In my next post, I will dive deeper into HBM and CXL, but for today let me just say that HBM will allow motherboards with no DIMM slots, and CXL will allow DRAM to live outside the motherboard, both of which are unheard of in general-purpose compute architectures.
In my view, CXL is such a game changer that I will host an interview with Amnon Izhar, an expert on everything CXL related, so stay tuned for that too.
Now, to answer my own question, which is the title of this post: I think Sapphire Rapids truly is a game changer. Cascade Lake was a significant jump in performance over Sandy Bridge, but it wasn’t a game changer, whereas between the new “fit for purpose” hyper-specialization, HBM, and CXL, Sapphire Rapids is that game changer.
Opinions expressed in this article are entirely our own and may not be representative of the views of Dell Technologies.