Running vCD on VxRail

The topic of running VMware Cloud Director (vCD) on VxRail has come up a few times lately, with some varying questions around potential requirements, so I thought that it may be useful to create a post to talk through it at a high level.

Two of the questions that stood out were:

Q. “Must a customer have VCF on VxRail in order to run vCD?”

A. No, VCF is not required in order to run vCD on a VxRail.

Q. “Does VxRail provide seamless integration with vCD?”

A. I am going to say yes to this. “Seamless” can mean different things to different people, especially in a sales cycle (!), but with regard to VxRail, the integration with vCD is as straightforward as it can be, while at the same time adding the benefits of VxRail Lifecycle Management to the infrastructure supporting vCD.

Both questions are easy to address, but they showed that some clarification around the topic would be helpful. In this post I plan to talk through:

  • How VxRail can be leveraged and architected for a vCD environment
  • How VxRail Lifecycle Management can benefit the solution
  • How vCD can run with VCF on VxRail

Please keep in mind that the planning and implementation of vCD is entirely a VMware engagement, and that this post does not constitute any level of validation, though I will try to explain what the primary checks and balances are to look out for!

Consuming VxRail resources

In terms of how vCD consumes, partitions and then presents underlying infrastructure resources to its Tenants, there are many great options within vCD to suit various use cases, but from VxRail’s own perspective it is very straightforward: “Here’s my vCenter Server”.

vCenter Server

It doesn’t matter if the VxRail’s vCenter Server is embedded or external, managing a single cluster or multiple clusters. vCD discovers and configures the vCenter Server as standard.

When connecting a vCenter Server, vCD does have the option of configuring it as a Shared or Dedicated instance, but again, this is irrelevant to VxRail. As far as VxRail is concerned, its resources are presented to vCD via the vCenter Server connection, so how they are subsequently configured and distributed by vCD is secondary to VxRail.
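To make that “Here’s my vCenter Server” hand-off a little more concrete, the sketch below assembles the kind of information vCD needs when a vCenter Server is registered. The field names and structure are illustrative only (loosely modelled on the shape of the vCD provider APIs, not taken from them); in practice the registration is done through the Provider Admin Portal or an SDK, and the hostnames and account names here are placeholders.

```python
# Illustrative sketch: the information vCD needs to register a VxRail
# cluster's vCenter Server. Field names are placeholders, not a real
# vCD API payload -- registration is normally done via the Provider
# Admin Portal or an SDK.

def vim_server_registration(name: str, vc_url: str, username: str,
                            is_dedicated: bool = False) -> dict:
    """Build an illustrative vCenter registration request for vCD."""
    if not vc_url.startswith("https://"):
        raise ValueError("vCenter Server URL must use HTTPS")
    return {
        "vimServer": {
            "name": name,          # how this vCenter appears within vCD
            "url": vc_url,         # e.g. the VxRail-managed vCenter Server
            "username": username,  # service account vCD uses to connect
            "isEnabled": True,
        },
        # Shared vs Dedicated is a vCD-side decision; VxRail simply
        # presents its resources through the vCenter connection.
        "isDedicated": is_dedicated,
    }

req = vim_server_registration("vxrail-vc01",
                              "https://vc01.example.local",
                              "vcd-svc@vsphere.local")
print(req["vimServer"]["name"])  # -> vxrail-vc01
```

Whether the vCenter is embedded or external to the VxRail makes no difference to this step; either way it is just a vCenter endpoint from vCD’s point of view.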


Networking

The networking requirements for vCD are layered on top of the VxRail, as they might be with any application. While VxRail has its own fundamental networking hardware and software requirements, additional vCD network components can be added as required.

From a hardware perspective, and depending on whether the associated network architecture requires traffic separation, additional NICs and ports may be required. VxRail provides plenty of flexibility around networking options, so this is not an issue. Refer to the VxRail Spec Sheet for more info.

What is required by vCD, in association with the addition of a vCenter Server, is the configuration of an associated instance of NSX-V or NSX-T (note: it is one or the other, the two cannot be mixed).

In the case of NSX-V, the vCD Portal UI requires the NSX-V details to be entered immediately alongside the vCenter Server, as shown below:

This step can be disabled/skipped if NSX-T is in use, as the addition/registration of NSX-T can be completed after the vCenter Server has been added, as shown below:

For more information about adding vSphere and network resources, please refer to the VMware Cloud Director Service Provider Admin Portal Guide.
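The ordering rules above can be summarised in a short illustrative snippet: with NSX-V the manager details go in together with the vCenter Server, while NSX-T can be registered as a separate step afterwards. The step descriptions are my own shorthand for the portal workflow, not API calls.

```python
# Illustrative summary of the vCD resource-registration ordering.
# Step strings are shorthand for the Provider Admin Portal workflow.

def registration_steps(nsx_flavor: str) -> list:
    """Return the registration order for a given NSX flavor.

    NSX-V details must be entered together with the vCenter Server;
    NSX-T can be registered afterwards as a separate step. Mixing both
    flavors against one vCenter is not supported.
    """
    if nsx_flavor == "NSX-V":
        return ["register vCenter Server + NSX-V Manager together"]
    if nsx_flavor == "NSX-T":
        return ["register vCenter Server (skip the NSX-V step)",
                "register NSX-T Manager"]
    raise ValueError("nsx_flavor must be 'NSX-V' or 'NSX-T', never both")

for step in registration_steps("NSX-T"):
    print(step)
```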

General Architecture

How the overall solution architecture is deployed on or with VxRail is also quite straightforward, and is generally down to following vCD architecture guidance.

vCD Management Components

A VMware Cloud Director instance consists of one or more vCD Cells configured in a Server Group, where these Cells can be Linux or appliance-based. While vCD connects to and consumes vCenter Servers, ESXi Hosts and NSX resources, other related management components can be used also, for example:

  • vCD Database (with Linux deployment)
  • RabbitMQ
  • Management vCenter Server
  • NSX-V or NSX-T
  • vRealize Operations Manager (vROps)
  • vRealize Log Insight (vRLI)
  • vRealize Network Insight (vRNI)
  • vRealize Orchestrator (vRO)
  • vRealize Suite Lifecycle Manager (vRSLCM)

A separate cluster for Management is generally the preferred approach. The management components can be deployed on a VxRail or a non-VxRail platform, for example:

vCD Mgmt On-VxRail

If a dedicated/separate Management cluster is required for vCD and other related management components, such as is outlined in the VVD for Cloud Providers, then they can be deployed on a VxRail as standard. This VxRail cluster would be in addition to any workload VxRail clusters that would be consumed by vCD Provider vDCs.

vCD Mgmt With-VxRail

Alternatively, if a Management cluster is already deployed, or to be deployed, on non-VxRail hardware, then that is fine also. No hard feelings, do what you gotta do. In this case, vCD would then point at and use VxRail clusters solely for vCD Provider vDC use.

Consolidated / All-in-one

There may also be a case for running Management and Workloads within a single Cluster, in which case both Management and Tenant Workloads would be running on the same VxRail system. Resource consumption, contention, and separation would be a significant consideration in this case, especially if only a single cluster is used!

vCD Tenant Workload Infrastructure

vCD can logically partition access to the available storage, compute and networking resources as required, providing vCD Tenants secure multi-tenant access to the appropriate resources, as determined by their respective Organizations and the associated Organization vDC and Provider vDC logical constructs.
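As a concrete illustration of that partitioning, the sketch below works through the arithmetic behind an Allocation Pool style Org vDC, where a guarantee percentage determines how much of a tenant’s allocation the provider must actually reserve on the underlying cluster. The numbers and the function itself are illustrative, not part of any vCD API.

```python
# Sketch of the resource arithmetic behind a vCD "Allocation Pool"
# Org vDC (values are illustrative). The Provider vDC exposes the raw
# cluster capacity; each Org vDC carves out an allocation with a
# guarantee percentage that must be reserved on the tenant's behalf.

def org_vdc_reservation(allocation_ghz: float, guarantee_pct: float) -> float:
    """CPU the provider must reserve for this Org vDC, in GHz."""
    if not 0.0 <= guarantee_pct <= 1.0:
        raise ValueError("guarantee_pct must be between 0 and 1")
    return allocation_ghz * guarantee_pct

# e.g. a tenant allocated 100 GHz of CPU with a 75% guarantee
print(org_vdc_reservation(100.0, 0.75))  # -> 75.0
```

The same idea applies to memory; summing the reservations across Org vDCs gives a rough view of how much of a VxRail cluster’s capacity is already committed.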

The following graphic suggests how VxRail could be used within a VMware Cloud Director solution, where a single VxRail Cluster hosts the vCD Management components, while multiple VxRail Clusters, via their associated vCenter Servers, provide the tenant workload infrastructure.

As described previously, the VxRail platform is consumed by vCD via the vCenter Server, and it is this vCenter Server that is immediately ingested by the vCD Provider vDC in order to provide the associated VxRail infrastructure resources up through the logical vCD constructs for Tenant access.

Also suggested above is how various VxRail offerings could be leveraged to provide appropriate resources to specific workloads and use cases in vCD. For example, where VDI services are being provided from vCD, VxRail V Series nodes may be most appropriate (V Series: a VDI-optimized 2U/1-node platform with GPU hardware for graphics-intensive desktops and workloads). Similar consideration should be given to VxRail nodes optimized for efficiency, performance, storage, or compute workloads. Refer to the VxRail Spec Sheet for more details.

Lifecycle Management (LCM)

The overall lifecycle management of the solution can be separated out into three layers:

  • Infrastructure (VxRail)
  • Networking (NSX)
  • Management (vCD +)

Keep the VMware Interoperability Matrix close to hand at all times, as manual validation of compatibility between the various layers will be required.
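A minimal sketch of what that manual validation amounts to: look up the proposed vCD/vSphere/NSX-T combination in a table you maintain from the Interoperability Matrix. The version combinations below are placeholders to show the shape of the check, not real support statements — always take the supported combinations from the VMware Product Interoperability Matrices themselves.

```python
# Minimal sketch of a cross-layer compatibility lookup. The matrix
# contents are PLACEHOLDERS -- populate this from the real VMware
# Product Interoperability Matrices before relying on it.

SUPPORTED = {
    # vCD version -> supported (vSphere, NSX-T) combinations (illustrative)
    "10.2": {("7.0", "3.1"), ("7.0", "3.0")},
    "10.1": {("6.7", "3.0")},
}

def combo_supported(vcd: str, vsphere: str, nsxt: str) -> bool:
    """True if this vCD/vSphere/NSX-T combination appears in the matrix."""
    return (vsphere, nsxt) in SUPPORTED.get(vcd, set())

print(combo_supported("10.2", "7.0", "3.1"))  # -> True
print(combo_supported("10.1", "7.0", "3.1"))  # -> False
```

Running a check like this before any VxRail, NSX, or vCD upgrade makes it harder to drift into an unsupported combination between the layers.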

Infrastructure LCM

The automated VxRail LCM will take care of upgrading all of the vCD infrastructure as required, ensuring that each upgrade automatically transitions the customer between VxRail ‘Continuously Validated States’. Approximately 25,000 engineering hours go into the testing and validation of VxRail major releases, ensuring that each release/state is a ‘happy’ state, requiring no additional customer effort such as tweaking drivers. The outcome of this effort is a single update bundle that is used to update the entire HCI stack, including the operating system, the hardware’s drivers and firmware, and management components such as VxRail Manager and vCenter Server.

If an embedded vCenter Server is managing a VxRail cluster, then the VxRail LCM process automatically upgrades the vCenter Server (and PSC with 4.7.x) as well as the ESXi hosts. If an external vCenter Server is used to manage the VxRail cluster(s) then the vCenter Server (and PSC) will require manual upgrade outside of the automated ESXi host LCM.

In a multi-cluster VxRail environment such as we are discussing, VxRail SaaS MultiCluster Management provides the ability to monitor and lifecycle manage all of the associated VxRail systems across the business.

Networking LCM

The upgrade of NSX is independent of the VxRail LCM process, and should be completed as required in advance of any VxRail upgrade. Manual validation of compatible vSphere versions is required. For more details refer to NSX-T Data Center Upgrade Guide, as well as Dell KB000184538 specific to VxRail major version upgrades with NSX-T in place (This is not a consideration when using NSX-V on VxRail).

Management LCM

Specific to the vCD architecture discussed previously, one of the benefits of separating the Management stack from the Tenant workloads is the ability to decouple/separate the lifecycle of their respective infrastructures.

Specific to the upgrade of vCD, primary consideration should be given to the new/target version compatibility with vSphere and NSX, while also considering any other components directly interacting with vCD.

Individual component upgrades in the management stack can be managed as required, taking note of the vCD Release Notes as well as referencing the VMware Product Interoperability Matrices. Depending on what vRealize Suite components are deployed, vRealize Suite Lifecycle Manager could also be used to automate multiple vRealize component upgrades.

vCD with VCF on VxRail

The topic of running vCD on/with VCF on VxRail comes up from time to time. While vCD is not integrated with VCF, and VCF is in no way aware of vCD, it is possible to run vCD on top of the VCF platform, where vCD will consume VCF VI Workload Domain resources in much the same manner as we have discussed in this post, as well as leverage the existing NSX infrastructure.

All associated versioning and LCM of the VCF on VxRail platform would be owned exclusively by SDDC Manager, with the vCD specific components requiring manual compatibility checking and LCM.

Please take a look at this recent whitepaper from VMware on this topic:

VMware Cloud Provider Platform: Architectural Guidelines Powered by VMware Cloud Foundation 4.0

So that’s it, that pretty much covers the high-level talking points that have come up on this topic in recent times. Obviously there’s much, much more to be discussed in terms of vCD itself, but hopefully the above info helps provide a better understanding of what is possible with vCD on VxRail.

Thank you for reading, hopefully you found the information on here useful!


Check out more of Steve’s posts on his personal blog: Scamallach
