Sunday, April 18, 2021
vSphere with Tanzu using NSX-T - Part7 - Enable workload management
In the previous posts we discussed the following:
Part1: Prerequisites
Part2: Configure NSX-T
Part3: Edge Cluster
Part4: Tier-0 Gateway and BGP peering
Part5: Tier-1 Gateway and Segments
Part6: Create tag, VM storage policy, and content library
- Log in to vCenter - Menu - Workload Management.
- Click Get started.
- Select NSX-T and click next.
- Select the cluster.
- Select a control plane size and click next.
- Select the storage policy and click next.
- Provide management network details and click next.
- Provide workload network details and click next.
- Add the content library and click next.
- Click finish.
- This process takes a while to configure and bring up the supervisor cluster. In my case, it took around 30 minutes to complete.
- You can see the progress in the vCenter UI.
- You can now see the supervisor control plane VMs are deployed.
- You can see a T1 Gateway is now provisioned.
- Multiple segments are now created, one corresponding to each namespace inside the supervisor control plane.
- Multiple SNAT rules are also in place on the newly created T1 Gateway. These let the control plane Kubernetes objects in their corresponding namespaces reach the external network/ internet, translating their source addresses to the egress range 192.168.72.0/24 that we provided during the workload management configuration.
- You can also see two load balancers attached to the T1 Gateway:
- Distributed load balancer: All services of type ClusterIP are implemented as distributed load balancer virtual servers. This handles east-west traffic.
- Server load balancer: All services of type LoadBalancer are implemented as server load balancer L4 virtual servers, and all Ingress resources are implemented as L7 virtual servers.
- Under the server load balancer, you can see two virtual servers: one for the Kubernetes API (6443) and one for downloading the CLI tools (443) used to access the cluster. A quick verification using these endpoints is sketched below.
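To confirm the supervisor cluster is reachable, you can download the CLI tools from that 443 virtual server and log in with the kubectl vSphere plugin. A minimal sketch; the control plane VIP 192.168.71.2 is a hypothetical placeholder, so use the load balancer address shown in your Workload Management view:

# log in to the supervisor cluster (VIP and username are environment-specific examples)
kubectl vsphere login --server=192.168.71.2 --vsphere-username administrator@vsphere.local --insecure-skip-tls-verify

# the login adds a kubectl context named after the VIP
kubectl config use-context 192.168.71.2

# verify the control plane nodes and namespaces
kubectl get nodes
kubectl get namespaces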
Monday, March 29, 2021
vSphere with Tanzu using NSX-T - Part6 - Create tag, VM storage policy, and content library
In the previous posts we discussed the following:
Part1: Prerequisites
Part2: Configure NSX-T
Part3: Edge Cluster
Part4: Tier-0 Gateway and BGP peering
Part5: Tier-1 Gateway and Segments
Create and assign a tag
- Select the datastore. Click Assign.
- Add Tag.
- Provide a name and click create new category.
- Provide a category name and click create.
- Click create.
- Now a tag is created and you can assign it to the vSAN datastore.
- Select the tag and click assign.
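As a CLI alternative to the UI steps above, the same tag operations can be done with govc (from the govmomi project). A rough sketch, assuming govc is already pointed at your vCenter via GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD, and using an example inventory path for the vSAN datastore:

# create the category and the tag (names match the UI steps above)
govc tags.category.create Tanzu
govc tags.create -c Tanzu Tanzu

# attach the tag to the vSAN datastore (path is an example; adjust to your inventory)
govc tags.attach Tanzu /Datacenter/datastore/vsanDatastore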
Create VM storage policy with tag-based placement
- Provide a name and click next.
- Select tag-based placement and click next.
- Click the tag category drop-down, select "Tanzu", then click browse tags, select the "Tanzu" tag, and click next.
- Select the compatible datastore, vSAN in this case, and click next.
- Click finish.
- You can now see the newly created VM storage policy.
Create content library
- Click create.
- Provide details and click next.
- Provide the subscription URL and click next.
- Accept the SSL thumbprint by clicking yes, select the vSAN datastore as the storage location for library contents, and click next.
- Click finish.
- You can now see the newly created content library.
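The subscribed content library can also be created with govc. A sketch, assuming the VMware-published subscription URL for Tanzu Kubernetes releases (verify the current URL against VMware documentation) and the same vSAN datastore; flag names may vary slightly across govc versions:

# create a subscribed content library backed by the vSAN datastore
# (you may be prompted to trust the publisher's SSL thumbprint)
govc library.create -sub "https://wp-content.vmware.com/v2/latest/lib.json" -ds vsanDatastore kubernetes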
Friday, March 19, 2021
vSphere with Tanzu using NSX-T - Part5 - Tier-1 Gateway and Segments
In the previous posts we discussed the following:
Part1: Prerequisites
Part2: Configure NSX-T
Part3: Edge Cluster
Part4: Tier-0 Gateway and BGP peering
- Add Tier-1 Gateway.
- Provide a name, select the linked T0 Gateway, and select the route advertisement settings.
- Add Segment.
- Provide segment name, connected gateway, transport zone, and subnet.
- Here we are creating an overlay segment; the gateway address 172.16.10.1/24 defines both the gateway IP and the subnet for this segment.
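The same T1 Gateway and segment can also be created through the NSX-T Policy API instead of the UI. A minimal sketch with curl; the manager FQDN, the gateway/segment IDs, and the tier-0/transport zone paths are placeholders for whatever exists in your environment:

# create the Tier-1 Gateway linked to the T0 (IDs/paths are placeholders)
curl -k -u admin -X PATCH https://nsxmgr.lab.local/policy/api/v1/infra/tier-1s/t1-gw-01 \
  -H "Content-Type: application/json" \
  -d '{"display_name": "t1-gw-01", "tier0_path": "/infra/tier-0s/<t0-id>", "route_advertisement_types": ["TIER1_CONNECTED"]}'

# create the overlay segment; gateway_address carries both the gateway IP and the subnet
curl -k -u admin -X PATCH https://nsxmgr.lab.local/policy/api/v1/infra/segments/seg-app-01 \
  -H "Content-Type: application/json" \
  -d '{"display_name": "seg-app-01", "connectivity_path": "/infra/tier-1s/t1-gw-01", "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<overlay-tz-id>", "subnets": [{"gateway_address": "172.16.10.1/24"}]}'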
Sunday, February 7, 2021
vSphere with Tanzu using NSX-T - Part4 - Tier-0 Gateway and BGP peering
In the previous posts we discussed the following:
Part1: Prerequisites
Part2: Configure NSX-T
Part3: Edge Cluster
Configure Tier-0 Gateway
- Add Segments.
- Create a segment "ls-uplink-v54"
- VLAN: 54
- Transport Zone: "edge-vlan-tz"
- Create a segment "ls-uplink-v55"
- VLAN: 55
- Transport Zone: "edge-vlan-tz"
- Add Tier-0 Gateway.
- Provide the necessary details as shown below.
- Add 4 interfaces and configure them as per the logical diagram given above.
- edge-01-uplink1 - 192.168.54.254/24 - connected via segment ls-uplink-v54
- edge-01-uplink2 - 192.168.55.254/24 - connected via segment ls-uplink-v55
- edge-02-uplink1 - 192.168.54.253/24 - connected via segment ls-uplink-v54
- edge-02-uplink2 - 192.168.55.253/24 - connected via segment ls-uplink-v55
- Verify the status shows success for all four interfaces that you added.
- Routing and multicast settings of T0 are as follows:
- You can see a static route is configured. The next hop for the default route 0.0.0.0/0 is set to 192.168.54.1.
- The next hop configuration is given below.
- BGP settings of T0 are shown below.
- BGP Neighbor config:
- Verify the status shows success for the two BGP neighbors that you added.
- Route re-distribution settings of T0:
- Add route re-distribution.
- Set route re-distribution.
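For reference, the default static route shown above (0.0.0.0/0 via 192.168.54.1) can also be set through the NSX-T Policy API. A sketch; the manager FQDN and the T0 ID are placeholders:

# set the default route on the T0 (IDs are placeholders)
curl -k -u admin -X PATCH https://nsxmgr.lab.local/policy/api/v1/infra/tier-0s/t0-gw-01/static-routes/default-route \
  -H "Content-Type: application/json" \
  -d '{"display_name": "default-route", "network": "0.0.0.0/0", "next_hops": [{"ip_address": "192.168.54.1", "admin_distance": 1}]}'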
Configure TOR Switches
---On TOR A---
conf
router bgp 65500
neighbor 192.168.54.254 remote-as 65400 #peering to T0 edge-01 interface
neighbor 192.168.54.254 no shutdown
neighbor 192.168.54.253 remote-as 65400 #peering to T0 edge-02 interface
neighbor 192.168.54.253 no shutdown
neighbor 192.168.54.3 remote-as 65500 #peering to TOR B in VLAN 54
neighbor 192.168.54.3 no shutdown
maximum-paths ebgp 4
maximum-paths ibgp 4
---On TOR B---
conf
router bgp 65500
neighbor 192.168.55.254 remote-as 65400 #peering to T0 edge-01 interface
neighbor 192.168.55.254 no shutdown
neighbor 192.168.55.253 remote-as 65400 #peering to T0 edge-02 interface
neighbor 192.168.55.253 no shutdown
neighbor 192.168.54.2 remote-as 65500 #peering to TOR A in VLAN 54
neighbor 192.168.54.2 no shutdown
maximum-paths ebgp 4
maximum-paths ibgp 4
---Advertising ESXi mgmt and VM traffic networks in BGP on both TORs---
conf
router bgp 65500
network 192.168.41.0/24
network 192.168.43.0/24
Thanks to my friend and vExpert Harikrishnan @hari5611 for helping me with the T0 configs and BGP peering on TORs. Do check out his blog https://vxplanet.com/.
Verify BGP Configurations
The next step is to verify the BGP configs on TORs using the following commands:
show running-config bgp
show ip bgp summary
show ip bgp neighbors
Follow the VMware documentation to verify the BGP connections from a Tier-0 Service Router. In the screenshot below, you can see that both Edge nodes have the BGP neighbors 192.168.54.2 and 192.168.55.3 in the Estab state.
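On the Edge node side, those checks look roughly like this on each Edge node's CLI (the VRF number varies; take it from the SERVICE_ROUTER_TIER0 entry in the first command's output):

get logical-routers
vrf 1
get bgp neighbor summary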
In the next article, I will talk about adding a T1 Gateway, adding new segments for apps, connecting VMs to the segments, and verifying connectivity to different internal and external networks. I hope this was useful. Cheers!