At this stage, we are almost done with the networking and NSX-T related prerequisites. Before getting into the Workload Management configuration, we need to create a tag, assign the tag to the datastore, create a VM storage policy, and create a content library.
Create and assign a tag
Select the datastore. Click Assign.
Add Tag.
Provide a name and click create new category.
Provide a category name and click create.
Click create.
Now that the tag is created, you can assign it to the vSAN datastore.
Select the tag and click assign.
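If you prefer the console over the vSphere Client, a rough govc equivalent of the tag steps is sketched below. The category/tag name "Tanzu" and the datastore inventory path are assumptions from my lab; adjust them to your environment and set the usual GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD variables first.

# Create the category and the tag, then attach the tag to the vSAN datastore
govc tags.category.create Tanzu
govc tags.create -c Tanzu Tanzu
govc tags.attach Tanzu /Datacenter/datastore/vsanDatastore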
Create VM storage policy with tag based placement
Provide a name and click next.
Select tag based placement and click next.
Click the drop-down for tag category and select "Tanzu", then click browse tags, select "Tanzu", and click next.
Select the compatible datastore, vSAN in this case, and click next.
Click finish.
You can now see the newly created VM storage policy.
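If you want to double-check from the console, govc can list the storage policies known to vCenter (the policy name used in my lab is an assumption):

# The newly created tag-based policy should appear in this list
govc storage.policy.ls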
Create content library
Click create.
Provide the details and click next.
Provide the subscription URL and click next.
Accept the SSL thumbprint by clicking yes. Select the vSAN datastore as the storage location for the library contents and click next.
Click finish.
You can now see the newly created content library.
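For reference, a subscribed library can also be created with govc. The library name and subscription URL below are examples (the URL shown is the commonly documented Tanzu Kubernetes releases subscription endpoint); depending on your environment you may also need to supply the publisher's SSL thumbprint.

# Create a subscribed content library backed by the vSAN datastore
govc library.create -sub 'https://wp-content.vmware.com/v2/latest/lib.json' -ds vsanDatastore kubernetes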
All the prerequisites are done now. The next step is to configure Workload Management, which will be covered in the next part. Hope it was useful. Cheers!
The next step is to create a Tier-1 Gateway and network segments.
Add Tier-1 Gateway.
Provide a name, select the linked Tier-0 Gateway, and select the route advertisement settings.
Add Segment.
Provide segment name, connected gateway, transport zone, and subnet.
Here we are creating an overlay segment; 172.16.10.1/24 sets the gateway IP (and subnet mask) for this segment, so the segment subnet is 172.16.10.0/24.
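For reference, the same Tier-1 Gateway and overlay segment can be created through the NSX-T Policy API. The object IDs, display names, and transport zone path below are assumptions for illustration; only the 172.16.10.1/24 gateway address comes from the steps above.

# Create the Tier-1 Gateway linked to the existing Tier-0
curl -k -u admin -X PATCH https://<nsx-manager>/policy/api/v1/infra/tier-1s/t1-tanzu \
  -H 'Content-Type: application/json' \
  -d '{"display_name":"t1-tanzu","tier0_path":"/infra/tier-0s/t0-gw","route_advertisement_types":["TIER1_CONNECTED"]}'

# Create the overlay segment with 172.16.10.1/24 as its gateway address
curl -k -u admin -X PATCH https://<nsx-manager>/policy/api/v1/infra/segments/seg-172-16-10 \
  -H 'Content-Type: application/json' \
  -d '{"display_name":"seg-172-16-10","connectivity_path":"/infra/tier-1s/t1-tanzu","transport_zone_path":"/infra/sites/default/enforcement-points/default/transport-zones/<overlay-tz-id>","subnets":[{"gateway_address":"172.16.10.1/24"}]}'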
Now, let's verify whether this segment is being advertised (route advertisement) or not. Following are the screenshots from both edge nodes, and you can see that the Tier-0 SR is aware of the 172.16.10.0/24 network:
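Those screenshots come from the edge node CLI; the checks look roughly like this (the VRF number of the Tier-0 SR varies per deployment):

# SSH to each edge node as admin
get logical-routers
vrf <tier-0 SR vrf id>
get route 172.16.10.0/24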
As the Tier-0 Gateway is connected to the TOR switches via BGP, we can also verify whether the TOR switches are aware of this newly created segment.
You can see that the TORs learned the 172.16.10.0/24 network via BGP. Let's connect a VM to this segment, assign an IP address, and test network connectivity.
You can also view the network topology from NSX-T.
This is the traffic flow: VM - Network segment - Tier-1 Gateway - Tier-0 Gateway - BGP peering - TOR switches - NAT VM - External network.
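A quick way to exercise that path end to end is from the test VM itself. The commands below assume a Linux VM with an address on the 172.16.10.0/24 segment and that the NAT VM provides outbound access; 192.168.54.1 is the TOR-side next hop used in my lab, and 8.8.8.8 is just a sample external destination.

ping -c 3 172.16.10.1     # segment gateway on the Tier-1
ping -c 3 192.168.54.1    # upstream TOR interface
traceroute 8.8.8.8        # external destination via T1 -> T0 -> TOR -> NAT VM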
The next step is to create a Tier-0 Gateway, configure its interfaces, and BGP peer with the L3 TOR switches. Following is a high-level logical representation of this configuration:
Configure Tier-0 Gateway
Before creating the Tier-0 Gateway, we need to create two uplink segments.
Add Segments.
Create a segment "ls-uplink-v54"
VLAN: 54
Transport Zone: "edge-vlan-tz"
Create a segment "ls-uplink-v55"
VLAN: 55
Transport Zone: "edge-vlan-tz"
Add Tier-0 Gateway.
Provide the necessary details as shown below.
Add 4 interfaces and configure them as per the logical diagram given above.
edge-01-uplink1 - 192.168.54.254/24 - connected via segment ls-uplink-v54
edge-01-uplink2 - 192.168.55.254/24 - connected via segment ls-uplink-v55
edge-02-uplink1 - 192.168.54.253/24 - connected via segment ls-uplink-v54
edge-02-uplink2 - 192.168.55.253/24 - connected via segment ls-uplink-v55
Verify that the status shows success for all four interfaces that you added.
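A quick sanity check at this point is to ping the uplink IPs from the TORs (addresses as configured above):

# From TOR A (VLAN 54)
ping 192.168.54.254
ping 192.168.54.253
# From TOR B (VLAN 55)
ping 192.168.55.254
ping 192.168.55.253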
Routing and multicast settings of T0 are as follows:
You can see a static route is configured. The next hop for the default route 0.0.0.0/0 is set to 192.168.54.1.
The next hop configuration is given below.
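For reference, the equivalent default route via the NSX-T Policy API would look roughly like this (the Tier-0 ID and route ID are assumptions):

curl -k -u admin -X PATCH https://<nsx-manager>/policy/api/v1/infra/tier-0s/t0-gw/static-routes/default-route \
  -H 'Content-Type: application/json' \
  -d '{"network":"0.0.0.0/0","next_hops":[{"ip_address":"192.168.54.1","admin_distance":1}]}'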
BGP settings of T0 are shown below.
BGP Neighbor config:
Verify that the status shows success for the two BGP neighbors that you added.
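You can also confirm the sessions from the edge node CLI; a rough sketch (the Tier-0 SR VRF number varies per deployment):

get logical-routers
vrf <tier-0 SR vrf id>
get bgp neighbor summary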
Route re-distribution settings of T0:
Add route re-distribution.
Set route re-distribution.
Now, the T0 configuration is complete. The next step is to configure BGP on the Dell S4048-ON TOR switches.
Configure TOR Switches
---On TOR A---
conf
router bgp 65500
 neighbor 192.168.54.254 remote-as 65400    # peering to T0 edge-01 interface
 neighbor 192.168.54.254 no shutdown
 neighbor 192.168.54.253 remote-as 65400    # peering to T0 edge-02 interface
 neighbor 192.168.54.253 no shutdown
 neighbor 192.168.54.3 remote-as 65500      # peering to TOR B in VLAN 54
 neighbor 192.168.54.3 no shutdown
 maximum-paths ebgp 4
 maximum-paths ibgp 4
---On TOR B---
conf
router bgp 65500
 neighbor 192.168.55.254 remote-as 65400    # peering to T0 edge-01 interface
 neighbor 192.168.55.254 no shutdown
 neighbor 192.168.55.253 remote-as 65400    # peering to T0 edge-02 interface
 neighbor 192.168.55.253 no shutdown
 neighbor 192.168.54.2 remote-as 65500      # peering to TOR A in VLAN 54
 neighbor 192.168.54.2 no shutdown
 maximum-paths ebgp 4
 maximum-paths ibgp 4
---Advertising ESXi mgmt and VM traffic networks in BGP on both TORs---
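The subnets below are placeholders for illustration (the actual ESXi management and VM traffic subnets in my lab are not listed here); the idea is simply to originate them from BGP on both TORs:

conf
router bgp 65500
 network 192.168.10.0/24    # ESXi mgmt network (placeholder)
 network 192.168.20.0/24    # VM traffic network (placeholder)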
Thanks to my friend and vExpert Harikrishnan @hari5611 for helping me with the T0 configs and BGP peering on TORs. Do check out his blog https://vxplanet.com/.
Verify BGP Configurations
The next step is to verify the BGP configs on TORs using the following commands:
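The outputs themselves were captured as screenshots; on the Dell S4048-ON (OS9) the usual checks are:

show ip bgp summary       # state of each BGP peer and prefixes received
show ip bgp neighbors     # detailed per-neighbor session information
show ip route bgp         # routes learned via BGP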
In the next article, I will talk about adding a Tier-1 Gateway, adding new segments for apps, connecting VMs to the segments, and verifying connectivity to different internal and external networks. I hope this was useful. Cheers!
K-Bench is a framework to benchmark the control and data plane aspects of a Kubernetes cluster. More details are available at https://github.com/vmware-tanzu/k-bench. In my case, I am going to conduct this benchmarking study on a Tanzu Kubernetes cluster which is provisioned using Tanzu Kubernetes Grid service on a vSphere 7 U1 cluster.
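Getting the tool onto a jump host that has kubectl access to the Tanzu Kubernetes cluster is straightforward; a minimal sketch (the install script name follows the upstream repo at the time of writing):

git clone https://github.com/vmware-tanzu/k-bench.git
cd k-bench
./install.sh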
Once the installation is done, it will say "Completed k-bench installation."
Step 3: Run the benchmark
./run.sh
If you don't specify any test, it runs the default set of tests. All tests are defined under the config directory. If you browse to the config directory and list its contents, there are separate folders for each test. Folders starting with cp and dp refer to control-plane and data-plane related tests, respectively.
If no specific test is mentioned, it runs everything defined in the default directory. You can also see the details of each test and its results in the logs. The directories starting with "results" contain log files corresponding to each test run.
Following is a sample log that shows a summary of pod creation throughput, pod creation average latency, pod startup total latency, list/update/delete pod latency, etc.
Now, if you want to run a specific test case, you can do it as follows:
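For example, to run only the inter-node data plane network test. The -r/-t/-o flags and the dp_network_internode test name follow the upstream README at the time of writing; check ./run.sh -h and the folder names under config in your copy.

./run.sh -r "kbench-run-dp-network" -t "dp_network_internode" -o ./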
As soon as you run the above command, two pods will be created inside "kbench-pod-namespace" on two worker nodes as you see below.
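You can watch them come up with kubectl:

kubectl get pods -n kbench-pod-namespace -o wide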
It will then start an iperf3 process inside those two pods to generate network load following a client-server model, as per the actions defined in the config.json file.
Sample logs are given below. It shows details like the amount of data transferred, transfer rate, network latency, etc.
Once the test run is complete, the pods and other resources created will be automatically deleted. Similarly, you can select the other set of tests that are pre-defined in the framework. I believe you have the flexibility to define custom test cases too as per your requirements. I hope it was useful. Cheers!