Google Anthos is a modern application development and hybrid cloud technology platform. Anthos brings some of the key public cloud capabilities into on-premises data centers. Google Kubernetes Engine (GKE) is at the core of the Anthos offering and enables development of modern applications based on a microservices architecture. With GKE on-premises, you can run fully managed Kubernetes clusters in your data center, managing them in the GCP console alongside your cloud-based clusters.

In addition to GKE, Anthos includes service management through Anthos Service Mesh, configuration management, serverless capabilities (Cloud Run for Anthos), and additional monitoring and logging components.

Anthos on VxRail

VxRail is in a unique position to deliver a private cloud platform for Anthos as well as for offerings from AWS, Azure, and VMware. It reduces TCO compared with a traditional three-tier architecture by eliminating infrastructure silos: VxRail converges traditional compute and storage silos onto industry-standard servers, providing infrastructure agility and scalability while dramatically simplifying operations.

Running Anthos on VxRail delivers a seamless, automated operations experience for VxRail infrastructure across cloud-native and traditional workloads. Intelligent lifecycle management in VxRail automates non-disruptive upgrades and patches, keeping the infrastructure in a continuously validated state so that workloads keep running and clusters stay optimized. Together, VxRail and Anthos make it easy to standardize both IT and developer operations on-premises just as in the Google public cloud.

The storage class definition in Kubernetes maps to policies defined through vSAN Storage Policy Based Management (SPBM) to achieve different levels of SLAs. This also provides the option to consume data services such as snapshots, encryption, deduplication, and compression at container-volume granularity.
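For illustration, a minimal StorageClass that maps to a vSAN storage policy might look like the following sketch; the class and policy names are placeholders for whatever SPBM policies you have defined in vCenter:

```yaml
# Hypothetical StorageClass: maps Kubernetes volumes to a vSAN SPBM policy.
# "vsan-gold-policy" is a placeholder for a policy defined in vCenter.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsan-gold
provisioner: csi.vsphere.vmware.com        # vSphere CSI driver
parameters:
  storagepolicyname: "vsan-gold-policy"    # SPBM policy name
reclaimPolicy: Delete
```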

VMware Cloud Native Storage (CNS) and its corresponding Kubernetes CSI driver are used to present storage to containers. DevOps and platform teams use the CSI driver to deliver dynamic, automated provisioning of persistent volumes for containers through native Kubernetes APIs, enabling infrastructure-as-code operations.
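As a simple example, a developer can then claim storage declaratively and the CSI driver dynamically provisions a vSAN-backed volume that satisfies the claim; the claim name, size, and class below are illustrative:

```yaml
# Illustrative PersistentVolumeClaim: requests a 20Gi volume from the
# hypothetical "vsan-gold" StorageClass shown above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: vsan-gold
  resources:
    requests:
      storage: 20Gi
```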

Anthos Architecture

An Anthos clusters on VMware deployment consists of an admin cluster, one or more user clusters, and an admin workstation.

Admin cluster

The admin cluster hosts the base layer of Anthos clusters on VMware. It runs the following Anthos components:

  • Admin cluster control plane: The admin cluster’s control plane includes the Kubernetes API server, the scheduler, and several controllers for the admin cluster.
  • User cluster control planes: For each user cluster, the admin cluster has a node that runs the control plane for the user cluster. The control plane includes the Kubernetes API server, the scheduler, and several controllers for the user cluster.
  • Add-ons: The admin cluster runs several Kubernetes add-ons, such as Grafana, Prometheus, and Google Cloud’s operations suite. Anthos clusters on VMware runs these add-ons on admin cluster nodes separate from the other control plane components.

The user control planes are managed by the admin cluster. They run on nodes in the admin cluster, not in the user clusters.

User cluster

User clusters are where you deploy and run your containerized workloads and services.

Anthos Networking

Anthos on vSphere uses an island mode configuration in which Pods can talk directly to each other within a user cluster but cannot be reached from outside the cluster. The cluster forms a full node-to-node mesh across its nodes, allowing a Pod to reach other Pods in the cluster directly. All egress traffic from a Pod to targets outside the cluster is sourced from the host node’s IP address. Anthos on vSphere includes an L7 load balancer with an Envoy-based ingress controller that handles Ingress object rules for ClusterIP Services deployed within the cluster. The ingress controller itself is exposed as a NodePort Service in the cluster.
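As a sketch, an Ingress rule served by that controller might route traffic for a hostname to a ClusterIP Service in the user cluster; the hostname and service name below are assumptions:

```yaml
# Hypothetical Ingress: routes HTTP requests for web.example.com to an
# existing ClusterIP Service named "web-frontend" in the user cluster.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: web.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-frontend
                port:
                  number: 80
```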

Anthos includes a built-in L4 load balancer and also supports external F5 Networks L3/L4 load balancers. The VIP on the load balancer points to the ports in the NodePort Service for the ingress controller; this is how external clients reach services within the cluster. Alternatively, manual load-balancing mode can be enabled.
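For L4 exposure, a workload can simply be published as a Service of type LoadBalancer, and the configured load balancer allocates a VIP for it; the names and ports below are assumptions:

```yaml
# Illustrative L4 exposure: the bundled or F5 load balancer assigns a VIP
# to this Service and forwards traffic to the matching Pods.
apiVersion: v1
kind: Service
metadata:
  name: web-frontend-lb
spec:
  type: LoadBalancer
  selector:
    app: web-frontend    # assumed Pod label
  ports:
    - port: 80
      targetPort: 8080   # assumed container port
```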

Serverless on Anthos

Integrated with Anthos, Cloud Run for Anthos provides a flexible serverless development platform for hybrid and multi-cloud environments. Cloud Run for Anthos is Google’s managed, fully supported Knative offering; Knative is an open-source project that enables serverless workloads on Kubernetes. Cloud Run is also available as a fully managed serverless platform on Google Cloud, without the Kubernetes platform requirements.

Cloud Run for Anthos is suitable for running stateless applications. Examples of applications that run well on Cloud Run for Anthos include microservices, web front ends, API gateways, API middleware, event handlers, and ETL jobs.
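As a minimal sketch, such a stateless service is described with a Knative Service resource, and Cloud Run for Anthos scales the container instances with request load, including down to zero. The name and image below are placeholders:

```yaml
# Hypothetical Knative Service for Cloud Run for Anthos.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-api
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my-project/hello-api:latest   # placeholder image
          ports:
            - containerPort: 8080
```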

Anthos on vSphere Installation

Below are the high-level installation steps:

1. Install the gcloud CLI

2. Create a Google Cloud project

3. Create service accounts

4. Install the admin workstation

5. Deploy the admin cluster

6. Deploy the user cluster(s)

7. Connect the clusters to the GCP Console

After completing the Anthos on vSphere deployment, the Kubernetes clusters can be managed from the GCP console. The following screenshots highlight the deployed system in the console.

For further details on Anthos on vSphere, review the official documentation: https://cloud.google.com/anthos/clusters/docs/on-prem/

#IWORK4DELL

Opinions expressed in this article are entirely my own and may not be representative of the views of Dell Technologies.
