Sunday, April 18, 2021
vSphere with Tanzu using NSX-T - Part7 - Enable workload management
In the previous posts we discussed the following:
Part1: Prerequisites
Part2: Configure NSX-T
Part3: Edge Cluster
Part4: Tier-0 Gateway and BGP peering
Part5: Tier-1 Gateway and Segments
Part6: Create tags, storage policy, and content library
We are all set to configure and enable workload management. Before stepping into the configuration, I just want to give an overall picture of the vSphere with Tanzu architecture and its components.
Once you enable workload management, the vSphere cluster is transformed into a supervisor cluster. The supervisor cluster consists of three supervisor control plane VMs plus the ESXi hosts, which also act as worker nodes. You can then run traditional VMs and containers side by side: containers can run as native vSphere Pods directly on the ESXi hosts (a minimal sketch of this follows), or you can deploy Tanzu Kubernetes clusters in VM form factor in a vSphere namespace and run container workloads on them.
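For instance, a plain Pod manifest applied to a supervisor namespace is scheduled as a vSphere Pod directly on an ESXi host. Here is a minimal sketch; the namespace name "demo-ns" and the image are placeholder assumptions, not values from this environment:

```
# A plain Pod applied to a supervisor namespace ("demo-ns" is a
# placeholder) runs as a vSphere Pod directly on an ESXi host.
kubectl apply -n demo-ns -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
spec:
  containers:
  - name: nginx
    image: nginx:1.19
EOF
```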
Following are the steps to enable workload management:
- Log in to vCenter and go to Menu - Workload Management.
- Click Get started.
- Select NSX-T and click next.
- Select the cluster.
- Select a size and click next.
- Select the storage policy and click next.
- Provide management network details and click next.
- Provide workload network details and click next.
- Add the content library and click next.
- Click finish.
- This process takes a few minutes to configure and bring up the supervisor cluster. In my case, it took around 30 minutes to complete.
- You can see the progress in the vCenter UI.
- You can now see that the supervisor control plane VMs are deployed; a quick CLI verification is sketched below.
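At this point you can verify access from the CLI. The following is a minimal sketch, assuming the kubectl-vsphere plugin has already been downloaded from the control plane VIP; the server address and username are placeholders:

```
# Log in to the supervisor cluster Kube API (placeholder address/user).
kubectl vsphere login --server=<control-plane-VIP> \
  --vsphere-username administrator@vsphere.local \
  --insecure-skip-tls-verify

# The three SupervisorControlPlaneVMs and the ESXi worker nodes
# should all be listed as Kubernetes nodes.
kubectl get nodes
```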
Workload management is now enabled and the vSphere cluster is transformed into a supervisor cluster. Let's have a look at the objects that are automatically created in NSX-T.
- You can see a T1 Gateway is now provisioned.
- Multiple segments are created, one corresponding to each namespace inside the supervisor control plane.
- Multiple SNAT rules are also in place on the newly created T1 Gateway; these allow the Kubernetes objects in each namespace to reach the external network/internet. Address translation uses the egress range 192.168.72.0/24 that we provided during the workload management configuration.
- You can also see two load balancers attached to the T1 Gateway:
- Distributed load balancer: all services of type ClusterIP are implemented as distributed load balancer virtual servers. This handles east-west traffic.
- Server load balancer: all services of type LoadBalancer are implemented as server load balancer L4 virtual servers, and all Ingress objects are implemented as L7 virtual servers (see the sketch after this list).
- Under the server load balancer, you can see two virtual servers: one for the Kubernetes API (port 6443) and the other for downloading the CLI tools (port 443) used to access the cluster.
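To see this mapping in action, here is a hedged sketch: applying a service of type LoadBalancer in a namespace ("demo-ns", the selector, and the ports are placeholder assumptions) results in a new L4 virtual server on the server load balancer, with the external IP allocated from the ingress range configured during enablement:

```
kubectl apply -n demo-ns -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer   # realized as an L4 virtual server in NSX-T
  selector:
    app: web           # placeholder selector
  ports:
  - port: 80
    targetPort: 8080
EOF

# EXTERNAL-IP is allocated from the ingress range configured
# during workload management enablement.
kubectl get svc web-lb -n demo-ns
```

A service of type ClusterIP, by contrast, would appear on the distributed load balancer and is reachable only inside the cluster.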
The next step is to create namespaces, on which you can then create Tanzu Kubernetes clusters. Usually, the vSphere administrator creates namespaces for developers and grants them access, so that they can deploy TKG clusters, vSphere Pods, or VMs in their respective namespaces. We will cover all of this in the next part.
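As a preview, a minimal TanzuKubernetesCluster manifest looks roughly like the sketch below; the cluster name, namespace, VM class, storage class, and Kubernetes version are placeholder assumptions, based on the v1alpha1 schema of this era:

```
kubectl apply -f - <<EOF
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkc-demo        # placeholder cluster name
  namespace: demo-ns    # a vSphere namespace created by the admin
spec:
  distribution:
    version: v1.18      # a release available in the content library
  topology:
    controlPlane:
      count: 1
      class: best-effort-small      # placeholder VM class
      storageClass: tanzu-storage   # placeholder storage policy
    workers:
      count: 3
      class: best-effort-small
      storageClass: tanzu-storage
EOF
```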
Hope it was useful. Cheers!