There are many who are not aware of what Intel Optane is, and in turn not aware that VxRail supports Intel Optane media. It’s not just a cool name that makes you feel like getting into a racecar and doing 150 mph on the freeway – although that sounds like a lot of fun! I would also assume that if you are reading this, you might be someone who would like to consume Optane for its performance benefits but aren’t quite sure how to leverage it or how it’s configured and managed by VxRail. Hopefully by the end of this you will feel more comfortable about what it is, its benefits, and how you can leverage Optane in VxRail.
What is Optane?
I am going to make this as brief and concise as I can, as you can dive very deep into the innards of how Optane actually works, and that’s not the objective of this post.
Simply put, Intel Optane is a technology that is a combination of memory and storage media, interface hardware, and Software IP. There are 2 types of storage media you can leverage:
1. Intel Optane Persistent Memory (PMEM, or DCPMM as it’s also referred to)
2. Intel Optane SSDs
The thing to note here is that it’s not necessarily just a HW play but rather BOTH HW and SW. You have to have the right chipset, CPU and media devices to make this thing happen.
The first requirement in order to leverage Intel Optane is that you must have 2nd Generation Intel Xeon Scalable processors (Cascade Lake). Without this, it’s a non-starter.
The second requirement is….I’ll let you guess it….YES! Intel Optane PMEM and/or SSDs. There are considerably more details to leveraging Optane PMEM versus Optane SSDs and we will get into that here in a bit.
Intel sees Optane as bridging the gap between Memory and Storage. They are bringing dense and persistent storage into Memory-land and at the same time giving Storage the performance that you get with memory. This provides a wider range of options when it comes to addressing workloads and catering to their needs.
You can see on this pyramid diagram where Optane fits into the Memory/Storage spectrum. Hopefully this puts into perspective where Optane fits in the stack and provides a little bit more understanding of WHAT it is.
How can you leverage Intel Optane Technology in your VxRail environment?
Today, with VxRail we ship PowerEdge 14G nodes which leverage the 2nd Gen Intel Scalable CPUs and we also offer both Intel Optane PMEM and SSDs as configurable options. There are certain configuration guidelines to note that are helpful to know going into this.
These configuration guidelines apply to all Intel-based VxRail node models, which are based on the PowerEdge R640 (2S 1U1N), R740XD (2S 2U1N), R840 (4S 2U1N), and the C6420 (2U4N) appliance, with the exception of a couple of specific models: the D-Series node, which is based on the PowerEdge XR2, does NOT support Intel Optane, and the V570/F and S570 do not currently support NVMe or Optane SSDs but do support Optane PMEM configurations.
Intel Optane PMEM
VxRail 14G nodes with Intel 2nd Gen Scalable CPUs have a total of 24 DIMM slots per node on a dual-socket system and 48 DIMM slots per node on the quad-socket systems. Each CPU has 12 DIMM slots, divided into 6 memory channels with 2 DIMMs per channel (DPC). Not all 2nd Gen CPUs support Optane, and I have included a matrix (Fig. A) to show which CPU bins support Optane PMEM.
*Side Note: All VxRail node options are capable of dual socket with the exception of the P580N, which is a quad-socket node. The quad-socket node is a direct scale-up from the 2-socket node configurations.
Figure A – DCPMM CPU Support Chart
Here is a matrix (Fig. B) that shows which memory types can be configured together in a VxRail node. On this chart, Optane PMEM is labeled DCPMM (Data Center Persistent Memory Module):
Figure B – DCPMM DIMM Support Chart
The PowerEdge Memory DIMM layout is as per the diagram below (Fig. C) and for the sake of simplicity this represents a 2-socket config:
Figure C – PowerEdge Memory Layout
General DCPMM Configuration Rules:
- Gold or Platinum 2nd Generation Intel Xeon Scalable processor (Cascade Lake) CPU
- PowerEdge does not support the single Silver SKU that Intel enabled for Intel Optane DC persistent memory
- Runs at 2666 MT/s (which is the max speed when there are 2 DIMMs on a channel)
- Note that large memory configs require an M or L CPU SKU
- No special SKU when: memory <= 1024GB
- M SKU when: 1024GB < memory <=2048GB
- L SKU when: 2048GB < memory
- Maximum of one PMem per channel.
- If only one DIMM is populated on a channel, it should always go to the first slot in that channel (white slot).
- If a PMem and a DDR4 DIMM are populated on the same channel, always plug PMem on the second slot (black slot).
- If the PMem is configured in Memory Mode, the recommended DDR4 to PMem capacity ratio is 1:4 to 1:16 per iMC/CPU.
- Each socket in a multi-socket system must be populated identically.
- Intel Optane PMEM part numbers must be the same for all installed PMEM modules.
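Putting a few of these rules together, here is a quick sanity-check sketch in Python (a hypothetical helper of my own, not a Dell tool) that picks the required CPU SKU suffix and validates the Memory Mode DDR4-to-PMEM capacity ratio for a proposed population:

```python
# Hypothetical sanity check for the DCPMM rules above -- not a Dell utility.

def cpu_sku_suffix(total_memory_gb: int) -> str:
    """Return the Cascade Lake SKU suffix required for a total memory size."""
    if total_memory_gb <= 1024:
        return ""    # no special SKU needed
    elif total_memory_gb <= 2048:
        return "M"
    return "L"

def memory_mode_ratio_ok(dram_gb: int, pmem_gb: int) -> bool:
    """In Memory Mode, DDR4:PMEM capacity should fall between 1:4 and 1:16."""
    ratio = pmem_gb / dram_gb
    return 4 <= ratio <= 16

# Example population: 2 sockets, each with 6 x 32GB DDR4 + 6 x 128GB PMEM
dram_gb = 2 * 6 * 32     # 384 GB DRAM
pmem_gb = 2 * 6 * 128    # 1536 GB PMEM
total = dram_gb + pmem_gb

print(cpu_sku_suffix(total))                   # 1920 GB total -> "M" SKU
print(memory_mode_ratio_ok(dram_gb, pmem_gb))  # 1536:384 = 1:4 -> True
```

The example DIMM sizes are illustrative only; the point is that the SKU thresholds and the 1:4 to 1:16 ratio are easy to check before you order hardware.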
Intel Optane PMEM Operating Modes:
There are 2 Operating Modes to choose from when leveraging Intel Optane PMEM:
1. Memory Mode – DRAM acts as a cache for the most frequently accessed data while the PMEM modules give you a large memory capacity. The cache operations are maintained by the CPU’s integrated memory controller. In this mode, PMEM is volatile and acts as traditional DRAM. It’s seen as DRAM by the system and in vSphere.
2. App Direct Mode – Apps and operating systems are aware that there are 2 types of direct memory on the host and can direct which reads or writes are suitable for DRAM or PMEM. This option gives you persistence as long as the OS or hypervisor is enabled with a persistent memory-aware file system – ESXi is one of those that supports this feature! Lucky you!
Mixing PMEM operating modes on a system is NOT supported so you will need to pick a horse and ride it. Here is a picture to help put some color to the 2 modes. I like pictures and I hope you like this one.
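To make the difference between the two modes concrete, here is a tiny illustrative model (my own sketch, based only on the mode descriptions above) of what capacity the host ends up seeing in each case:

```python
# Illustrative model of how the two PMEM modes present capacity to the host.
# This is a sketch of the behavior described above, not a real API.

def visible_memory_gb(dram_gb: int, pmem_gb: int, mode: str) -> dict:
    if mode == "memory":
        # Memory Mode: DRAM becomes a hidden cache; the host sees only the
        # PMEM capacity, and it behaves as volatile RAM.
        return {"ram": pmem_gb, "persistent": 0}
    if mode == "app_direct":
        # App Direct Mode: DRAM stays ordinary RAM; PMEM is exposed
        # separately as a persistent region.
        return {"ram": dram_gb, "persistent": pmem_gb}
    raise ValueError(f"unknown mode: {mode}")

print(visible_memory_gb(384, 1536, "memory"))      # {'ram': 1536, 'persistent': 0}
print(visible_memory_gb(384, 1536, "app_direct"))  # {'ram': 384, 'persistent': 1536}
```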
Configuring Intel Optane PMEM in VxRail Systems
There are some steps to configuring Optane PMEM to be leveraged by the node and ultimately by ESXi. Instead of making this blog even longer and more painful to read, I will spare you and include the links to the documents you will want to leverage when configuring Optane PMEM.
The first is the PMEM User’s Guide. This document covers the necessary BIOS configuration parameters needed to enable App Direct Mode or Memory Mode, as well as OS and hypervisor configuration.
Second, you will want to familiarize yourself with the VMware KB article referencing Optane PMEM here. This is for good measure, but one thing to keep in mind is that whichever Optane configurations we allow in VxRail are supported by VMware, as we maintain compatibility between HW and SW versions.
Intel Optane SSDs
This one will be easy for you! Intel Optane SSDs are a vSAN Cache Disk option which will benefit your cluster by providing lower latency and higher endurance than the other cache alternatives which are SAS and NVMe. They come in 2 different capacity options: 375GB and 750GB.
Now, I know what you may be thinking….those are kinda small. They are, but for good reason! Obviously there is a price premium that comes along with Optane, and we know the drill…the larger the capacity, the more expensive the drive! That is, until the tech matures a bit and eventually comes down in price per GB.
Intel Optane SSDs carry an endurance rating of 20.5 PBW on the 375GB drive and 40.1 PBW on the 750GB drive. That works out to be quite a bit more than SAS or NVMe!
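If you prefer thinking in drive-writes-per-day (DWPD), those PBW figures convert easily. Here is a quick sketch, assuming the common 5-year warranty window (the window is my assumption, not from the spec sheet):

```python
# Convert a PBW endurance rating into drive-writes-per-day (DWPD).
# Assumes a 5-year window and decimal units (1 PB = 1,000,000 GB).

def dwpd(pbw: float, capacity_gb: int, years: int = 5) -> float:
    total_writes_gb = pbw * 1_000_000
    days = years * 365
    return total_writes_gb / days / capacity_gb

print(round(dwpd(20.5, 375), 1))  # 375GB Optane: ~30 DWPD
print(round(dwpd(40.1, 750), 1))  # 750GB Optane: ~29 DWPD
```

Roughly 30 full drive writes per day for five years is an enormous amount of headroom for a vSAN cache tier, which absorbs every write before de-staging.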
The biggest reason Optane is justifiable, and why the size of these drives is of slightly less concern, is the performance of the drives compared to SAS and NVMe caching disks. As you can see on the performance chart below, we are netting 25-30% lower latency than NVMe drives, and that advantage is sustained as the load increases. From a vSAN perspective, because these drives perform so well, they are able to handle the de-stage from cache to capacity with less latency than SAS and NVMe, so they really make up for the smaller capacity in a variety of ways.
*6 x VxRail P570 Hosts, 2 x 6226R CPU, Two DG, Cap=3 x 3.84 TB Toshiba SAS
VxRail provides disk slot assignments that allow for easy performance and capacity planning when it comes to architecting a vSAN storage based environment. Here is an easy breakdown of how many and what slots for caching we support in our node types:
E-Series Nodes: 2x Disk Groups (Slots 8,9)
P,V-Series: Up to 4x Disk Groups (Slots 20,21,22,23)
P580N: Up to 3x Disk Groups (Bay 1, Slots 8,9,10,11)
G-Series: 1x Disk Group per node, 4 nodes per chassis (Slots 0,6,12,18)
*Note: You cannot mix Optane with SAS or NVMe cache disks on the same system, but for systems that support All NVMe (cache and capacity) you may use Optane SSDs as the cache disk selection and NVMe as the capacity disk selection.
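As a back-of-the-napkin example, here is the raw cache-to-capacity math for the test bed in the footnote above, assuming one 375GB Optane cache drive per disk group (the cache drive size here is my assumption; the footnote only lists the capacity drives):

```python
# Raw cache vs. capacity for the footnoted test bed:
# 6 x P570 hosts, 2 disk groups per host, 3 x 3.84 TB capacity drives per DG.
# Assumed: one 375GB Optane cache drive per disk group.

hosts = 6
disk_groups_per_host = 2
cap_drives_per_dg = 3
cap_drive_tb = 3.84
cache_drive_tb = 0.375

raw_capacity_tb = hosts * disk_groups_per_host * cap_drives_per_dg * cap_drive_tb
raw_cache_tb = hosts * disk_groups_per_host * cache_drive_tb

print(round(raw_capacity_tb, 2))                       # 138.24 TB raw capacity
print(round(raw_cache_tb, 2))                          # 4.5 TB raw cache
print(round(raw_cache_tb / raw_capacity_tb * 100, 2))  # ~3.26% cache ratio
```

Spreading cache across more, smaller disk groups (as the slot assignments above allow) is how you keep that ratio healthy as capacity grows.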
I see this as a very exciting and critical technology for the future of computing. We are seeing a shift away from traditional SAS/SATA media toward NVMe from a storage perspective. We are also seeing memory DIMM sizes increase to allow servers to have upwards of 4TB of RAM per node, and in some cases beyond that! The 2 are converging – memory and storage. I think eventually we will be leveraging hosts that have nothing but memory DIMMs for storage capacity! I guess we will see how this plays out over the coming years, but it’s fun to think about!