In the previous posts we discussed the following:
Part1: Prerequisites
Part2: Configure NSX-T
Part3: Edge Cluster
Part4: Tier-0 Gateway and BGP peering
Part5: Tier-1 Gateway and Segments
Part6: Create tags, storage policy, and content library
- Log in to vCenter and navigate to Menu > Workload Management.
- Click Get started.
- Select NSX-T and click next.
- Select the cluster.
- Select a control plane size and click next.
- Select the storage policy and click next.
- Provide management network details and click next.
- Provide workload network details and click next.
- Add the content library and click next.
- Click finish.
- This process takes a while to configure and bring up the supervisor cluster; in my case, it took around 30 minutes to complete.
- You can follow the progress in the vCenter UI (if you prefer a terminal, a small API polling sketch is included at the end of this post).
- You can now see the supervisor control plane VMs are deployed.
- You can see a T1 Gateway is now provisioned.
- Multiple segments are now created, one corresponding to each namespace inside the supervisor control plane.
- Multiple SNAT rules are also in place on the newly created T1 Gateway. These help the Kubernetes objects residing in their corresponding namespaces on the control plane reach the external network/internet, using the egress range 192.168.72.0/24 that we provided during the workload management configuration for address translation. (A quick way to verify this from inside a pod is sketched at the end of this post.)
- You can also see two load balancers attached to the T1 Gateway:
- Distributed load balancer: all Services of type ClusterIP are implemented as distributed load balancer virtual servers. This handles east-west traffic.
- Server load balancer: all Services of type LoadBalancer are implemented as server load balancer L4 virtual servers, and all Ingress objects are implemented as L7 virtual servers (see the example at the end of this post).
- Under the server load balancer, you can see two virtual servers: one for the Kubernetes API (port 6443) and the other for downloading the CLI tools (port 443) used to access the cluster.
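For reference, here is a minimal sketch of how the rollout can be watched from a terminal instead of the vSphere Client, by polling the vCenter Namespace Management REST API until the cluster reports a RUNNING config status. It assumes the vSphere 7 Automation API endpoints /api/session and /api/vcenter/namespace-management/clusters and the config_status/kubernetes_status fields; the hostname, credentials, and certificate handling are lab placeholders, so verify the field names against your vCenter build.

```python
import time

import requests

VCENTER = "vcenter.lab.local"              # placeholder FQDN for your vCenter
USERNAME = "administrator@vsphere.local"   # placeholder credentials
PASSWORD = "********"

# Create an API session (basic auth against POST /api/session).
token = requests.post(
    f"https://{VCENTER}/api/session",
    auth=(USERNAME, PASSWORD),
    verify=False,                          # lab only; use a trusted certificate in production
).json()
headers = {"vmware-api-session-id": token}

# Poll the Namespace Management API until the supervisor cluster reports RUNNING.
while True:
    clusters = requests.get(
        f"https://{VCENTER}/api/vcenter/namespace-management/clusters",
        headers=headers,
        verify=False,
    ).json()
    for c in clusters:
        # Field names assumed from the Namespace Management API documentation.
        print(c.get("cluster_name"), c.get("config_status"), c.get("kubernetes_status"))
    if any(c.get("config_status") == "RUNNING" for c in clusters):
        break
    time.sleep(60)                         # the whole rollout took ~30 minutes in my lab
```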
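To verify the SNAT behaviour mentioned above, one quick check from inside any pod is to ask an IP echo service which source address your traffic arrives from and compare it against the egress range. This is only a sketch: api.ipify.org is just an example endpoint, 192.168.72.0/24 is the egress CIDR from my configuration, and it assumes no additional NAT happens upstream of the T0 (otherwise you will see your perimeter address instead).

```python
import ipaddress
import urllib.request

# Egress range provided during the workload management configuration (yours may differ).
EGRESS_CIDR = ipaddress.ip_network("192.168.72.0/24")

# Any "what is my IP" echo service works; this one is only an example.
observed = urllib.request.urlopen("https://api.ipify.org").read().decode().strip()

inside = ipaddress.ip_address(observed) in EGRESS_CIDR
print(f"Outbound traffic is seen as {observed} "
      f"({'inside' if inside else 'outside'} {EGRESS_CIDR})")
```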
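The server load balancer is also easy to see in action: create a Service of type LoadBalancer in a supervisor namespace and NSX realizes it as an L4 virtual server, whereas a plain ClusterIP Service lands on the distributed load balancer instead. Below is a minimal sketch using the Kubernetes Python client; the namespace demo-ns, the app=web selector, and the kubeconfig context (obtained via the vSphere plugin for kubectl) are assumptions you would replace with your own.

```python
from kubernetes import client, config

# Use the kubeconfig context created by the vSphere plugin for kubectl.
config.load_kube_config()
v1 = client.CoreV1Api()

# A LoadBalancer Service: NSX realizes this as an L4 virtual server on the
# server load balancer; type "ClusterIP" would use the distributed load balancer.
svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="web-lb"),
    spec=client.V1ServiceSpec(
        selector={"app": "web"},                                  # assumed pod label
        ports=[client.V1ServicePort(port=80, target_port=8080)],
        type="LoadBalancer",
    ),
)
v1.create_namespaced_service(namespace="demo-ns", body=svc)

# Once NSX assigns an external address, it shows up in the Service status.
created = v1.read_namespaced_service("web-lb", "demo-ns")
if created.status.load_balancer.ingress:
    print("External IP:", created.status.load_balancer.ingress[0].ip)
```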