Saturday, April 24, 2021

vSphere with Tanzu using NSX-T Blog Series


Part1 - Prerequisites
Part2 - Configure NSX
Part3 - Edge Cluster
Part4 - Tier-0 Gateway and BGP peering
Part5 - Tier-1 Gateway and Segments
Part6 - Create tags, storage policy, and content library
Part7 - Enable workload management
Part8 - Create namespace and deploy Tanzu Kubernetes Cluster
Part9 - Monitoring
Part10 - Upgrade Tanzu Kubernetes Cluster
Part11 - Troubleshooting Tanzu Kubernetes Cluster
Part12 - Deploy application on TKC and access it
Part13 - Export WCP admin kubeconfig
Part14 - Testing TKC storage using kubestr
Part15 - Working with etcd on TKC with one control plane
Part16 - Troubleshooting content library related issues
Part17 - Troubleshooting TKC stuck at updating phase
Part18 - Troubleshooting vSphere pods with ProviderFailed status
Part19 - Troubleshooting TKC stuck at creating phase
Part20 - Safely deleting NotReady nodes from a TKC
Part21 - Pointers while upgrading the stack
Part22 - Working with NGINX Ingress Controller
Part23 - Supervisor cluster certificates expiry
Part24 - Kubernetes component certs in TKC
Part25 - Spherelet
Part26 - Jumpbox kubectl plugin to SSH to TKC node
Part27 - nullfinalizer kubectl plugin
Part28 - Create a custom VM Class
Part29 - Logging using Loki stack
Part30 - Troubleshooting inaccessible TKC with server pool members missing in the LB VS
Part31 - Troubleshooting inaccessible TKC with expired control plane certs
Part32 - Troubleshooting BGP related issues
Part33 - Troubleshooting intermittent connection timeouts to apiserver and workloads
Part34 - CPU and Memory utilization of a supervisor cluster
Part35 - Monitoring supervisor cluster health with Python and vCenter APIs

Sunday, April 18, 2021

vSphere with Tanzu using NSX-T - Part7 - Enable workload management

In the previous posts we discussed the following:

Part1: Prerequisites
Part2: Configure NSX-T
Part3: Edge Cluster
Part4: Tier-0 Gateway and BGP peering
Part5: Tier-1 Gateway and Segments
Part6: Create tags, storage policy, and content library


We are all set to configure and enable workload management. Before stepping into the configuration, here is an overall picture of the vSphere with Tanzu architecture and its components.


Once you enable workload management, the vSphere cluster is transformed into a supervisor cluster. The supervisor cluster consists of three supervisor control plane VMs, with the ESXi hosts acting as worker nodes. This lets you run traditional VMs and containers side by side: containers can run as native vSphere pods directly on the ESXi hosts, or you can deploy Tanzu Kubernetes clusters in VM form factor on a vSphere namespace and run your container workloads on them.
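Once enablement completes (steps follow below), one quick way to see this architecture is from kubectl: after logging in to the supervisor cluster API endpoint, the control plane VMs and the ESXi hosts all show up as Kubernetes nodes. A minimal sketch, assuming the vSphere plugin for kubectl is installed and using placeholder values for the server VIP and SSO user:

    # Log in to the supervisor cluster API endpoint (placeholders for your environment)
    kubectl vsphere login --server=<supervisor-vip> \
      --vsphere-username administrator@vsphere.local \
      --insecure-skip-tls-verify

    # The supervisor control plane VMs and the ESXi hosts are all listed as nodes
    kubectl get nodes -o wide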

Following are the steps to enable workload management:

  • Log in to vCenter - Menu - Workload Management.
  • Click Get Started.
  • Select NSX-T and click Next.
  • Select the cluster.
  • Select a size for the control plane VMs and click Next.
  • Select the storage policy and click Next.
  • Provide management network details and click Next.
  • Provide workload network details and click Next.
  • Add the content library and click Next.
  • Click Finish.
  • This process will take a while to configure and bring up the supervisor cluster; in my case, it took around 30 minutes to complete.
  • You can see the progress in the vCenter UI.



  • You can now see the supervisor control plane VMs are deployed.
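If you prefer the CLI over the vCenter UI, you can also confirm the control plane VMs exist with govc. A minimal sketch, assuming govc is installed and GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD point at your vCenter; the name pattern matches what I see in my lab:

    # List VMs whose names match the supervisor control plane pattern
    govc find / -type m -name 'SupervisorControlPlaneVM*'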




Workload management is now enabled and the vSphere cluster has been transformed into a supervisor cluster. Let's have a look at the objects that are automatically created in NSX-T.
  • You can see that a T1 Gateway has been provisioned.

  • Multiple segments are created, one corresponding to each namespace inside the supervisor control plane.

  • Multiple SNAT rules are also in place on the newly created T1 Gateway; these allow the control plane Kubernetes objects residing in their corresponding namespaces to reach the external network/internet. Address translation uses the egress range 192.168.72.0/24 that we provided during the workload management configuration.

  • You can also see two load balancers attached to the T1 Gateway:
    • Distributed load balancer: all services of type ClusterIP are implemented as distributed load balancer virtual servers. This is for east-west traffic.
    • Server load balancer: all services of type LoadBalancer are implemented as L4 virtual servers, and all Kubernetes Ingress resources are implemented as L7 virtual servers. (A sketch of this is shown after this list.)

  • Under the server load balancer, you can see two virtual servers: one for the Kubernetes API (6443) and another for downloading the CLI tools (443) used to access the cluster.
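The 443 virtual server is what serves the CLI tools download page. For example, you can pull the kubectl vSphere plugin bundle straight from the supervisor VIP; the address below is a placeholder for your environment:

    # Download and extract the vSphere plugin for kubectl (served on port 443)
    curl -kLO https://<supervisor-vip>/wcp/plugin/linux-amd64/vsphere-plugin.zip
    unzip vsphere-plugin.zip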

Note that this newly created T1 Gateway (domain-c8:6ea515f0-39da-431b-93bf-0d6a5e4a0f77) is connected to the T0 Gateway for external connectivity through BGP.
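To see the server load balancer in action later, you can create a service of type LoadBalancer from inside a cluster; NSX realizes it as a new L4 virtual server and allocates the external IP from the ingress range. A minimal sketch with placeholder names and ports:

    # web-demo.yaml -- a minimal LoadBalancer service (hypothetical names)
    apiVersion: v1
    kind: Service
    metadata:
      name: web-demo
    spec:
      type: LoadBalancer
      selector:
        app: web-demo
      ports:
      - port: 80
        targetPort: 8080

    # NSX realizes this as an L4 virtual server; EXTERNAL-IP comes from the ingress range
    kubectl apply -f web-demo.yaml
    kubectl get svc web-demo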
 
The next step is to create namespaces, on which you can then create Tanzu Kubernetes clusters. Usually the vSphere administrator creates namespaces for developers and grants them access, so that they can deploy TKG clusters, vSphere pods, or VMs on their respective namespaces. We will cover all of this in the next part, but a quick preview of what a cluster definition looks like is sketched below.
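A minimal sketch of a TanzuKubernetesCluster manifest, applied while your kubectl context is set to the namespace; the cluster name, namespace, VM class, storage class, and Kubernetes version are placeholders for your environment:

    # tkc-demo.yaml -- minimal Tanzu Kubernetes cluster (placeholder values)
    apiVersion: run.tanzu.vmware.com/v1alpha1
    kind: TanzuKubernetesCluster
    metadata:
      name: tkc-demo
      namespace: demo-namespace       # a vSphere namespace you created
    spec:
      distribution:
        version: v1.18                # must match an image in the content library
      topology:
        controlPlane:
          count: 1
          class: best-effort-small    # VM class assigned to the namespace
          storageClass: tanzu-storage-policy   # storage policy from Part6
        workers:
          count: 3
          class: best-effort-small
          storageClass: tanzu-storage-policy

Apply it with kubectl apply -f tkc-demo.yaml and the control plane and worker node VMs will be provisioned under the namespace.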

Hope it was useful. Cheers!