Sunday, February 7, 2021

vSphere with Tanzu using NSX-T - Part4 - Tier-0 Gateway and BGP peering

In the previous posts we discussed the following: 

Part1: Prerequisites
Part2: Configure NSX-T
Part3: Edge Cluster


The next step is to create a Tier-0 Gateway, configure its interfaces, and BGP peer with the L3 TOR switches. Following is a high-level logical representation of this configuration:


Configure Tier-0 Gateway


Before creating the Tier-0 Gateway, we need to create two segments.

  • Add Segments.
    • Create a segment "ls-uplink-v54"
      • VLAN: 54
      • Transport Zone: "edge-vlan-tz"
    • Create a segment "ls-uplink-v55"
      • VLAN: 55
      • Transport Zone: "edge-vlan-tz"

  • Add Tier-0 Gateway.
    • Provide the necessary details as shown below.

    • Add 4 interfaces and configure them as per the logical diagram given above.
      • edge-01-uplink1 - 192.168.54.254/24 - connected via segment ls-uplink-v54
      • edge-01-uplink2 - 192.168.55.254/24 - connected via segment ls-uplink-v55

      • edge-02-uplink1 - 192.168.54.253/24 - connected via segment ls-uplink-v54
      • edge-02-uplink2 - 192.168.55.253/24 - connected via segment ls-uplink-v55
    • Verify that the status shows Success for all four interfaces you added.

  • Routing and multicast settings of T0 are as follows:

    • You can see a static route is configured. The next hop for the default route 0.0.0.0/0 is set to 192.168.54.1. 

    • The next hop configuration is given below.

  • BGP settings of T0 are shown below.

    • BGP Neighbor config (a scripted alternative is sketched after this list):

    • Verify that the status shows Success for the two BGP neighbors you added.

  • Route re-distribution settings of T0:

    • Add route re-distribution.
    • Set route re-distribution.
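
As an aside, the same BGP neighbor configuration can also be applied through the NSX-T Policy API instead of the UI. Below is a minimal sketch; the NSX Manager FQDN (nsxmgr.lab.local), Tier-0 ID (t0-gw-01), locale-services ID (default), and neighbor ID (tor-a) are placeholders you would replace with values from your environment:

# Placeholder names: nsxmgr.lab.local, t0-gw-01, default, tor-a
curl -k -u admin -X PATCH \
  "https://nsxmgr.lab.local/policy/api/v1/infra/tier-0s/t0-gw-01/locale-services/default/bgp/neighbors/tor-a" \
  -H "Content-Type: application/json" \
  -d '{"neighbor_address": "192.168.54.2", "remote_as_num": "65500"}'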

Now, the T0 configuration is complete. The next step is to configure BGP on the Dell S4048-ON TOR switches.

Configure TOR Switches


---On TOR A---
conf
router bgp 65500
! peering to T0 edge-01 interface
neighbor 192.168.54.254 remote-as 65400
neighbor 192.168.54.254 no shutdown
! peering to T0 edge-02 interface
neighbor 192.168.54.253 remote-as 65400
neighbor 192.168.54.253 no shutdown
! peering to TOR B in VLAN 54
neighbor 192.168.54.3 remote-as 65500
neighbor 192.168.54.3 no shutdown
maximum-paths ebgp 4
maximum-paths ibgp 4


---On TOR B---
conf
router bgp 65500
! peering to T0 edge-01 interface
neighbor 192.168.55.254 remote-as 65400
neighbor 192.168.55.254 no shutdown
! peering to T0 edge-02 interface
neighbor 192.168.55.253 remote-as 65400
neighbor 192.168.55.253 no shutdown
! peering to TOR A in VLAN 54
neighbor 192.168.54.2 remote-as 65500
neighbor 192.168.54.2 no shutdown
maximum-paths ebgp 4
maximum-paths ibgp 4


---Advertising ESXi mgmt and VM traffic networks in BGP on both TORs---

conf
router bgp 65500
network 192.168.41.0/24
network 192.168.43.0/24
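
To confirm from the switch side that these networks are actually being advertised to the T0 uplink interfaces, you can query a specific neighbor (OS9 syntax; options may vary slightly by firmware version):

show ip bgp neighbors 192.168.54.254 advertised-routes
show ip route bgp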


Thanks to my friend and vExpert Harikrishnan @hari5611 for helping me with the T0 configs and BGP peering on TORs. Do check out his blog https://vxplanet.com/


Verify BGP Configurations


The next step is to verify the BGP configs on TORs using the following commands:

show running-config bgp

show ip bgp summary

show ip bgp neighbors


Follow the VMware documentation to verify the BGP connections from a Tier-0 Service Router. In the screenshot below, you can see that both Edge nodes have the BGP neighbors 192.168.54.2 and 192.168.55.3 in the Estab state.
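
For reference, the verification sequence on an Edge node CLI looks like this (the VRF ID varies per deployment; use the one listed for the Tier-0 service router in the first command's output):

get logical-routers
vrf 1
get bgp neighbor summary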


In the next article, I will talk about adding a T1 Gateway, adding new segments for apps, connecting VMs to the segments, and verifying connectivity to different internal and external networks. I hope this was useful. Cheers!

Saturday, January 23, 2021

Benchmarking Kubernetes infrastructure using K-Bench

K-Bench is a framework to benchmark the control and data plane aspects of a Kubernetes cluster. More details are available at https://github.com/vmware-tanzu/k-bench. In my case, I am going to conduct this benchmarking study on a Tanzu Kubernetes cluster which is provisioned using Tanzu Kubernetes Grid service on a vSphere 7 U1 cluster.

Step 1: Clone the K-Bench repo

git clone https://github.com/vmware-tanzu/k-bench.git


Step 2: Install

./install.sh


Once the installation is done, it will say "Completed k-bench installation."

Step 3: Run the benchmark

./run.sh


If you don't specify any test, it runs the default set of tests. All tests are defined under the config directory, with a separate folder for each test. Folder names starting with cp and dp refer to control plane and data plane tests, respectively.


As noted, when no specific test is mentioned, everything defined in the default directory is run. Details of each test and its results are available in the logs; the directories starting with "results" contain the log files for each test run.
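
For example, to see which tests are available and where the logs of a run landed (paths relative to the cloned repo):

cd k-bench
ls config/        # each sub-folder (cp_*, dp_*, default, ...) holds one predefined test
ls -d results*/   # one results directory per run, containing the log files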


Following is a sample log that shows a summary of pod creation throughput, pod creation average latency, pod startup total latency, list/update/delete pod latency, etc.


Now, if you want to run a specific test case, you can do it as follows:

Usage: ./run.sh -r <run-tag> [-t <comma-separated-tests> -o <output-dir>]

DP network internode test

For example, you can run a data plane test to check the network performance between two nodes as shown below.

./run.sh -r "kbench-run-on-tkg-cluster-02"  -t "dp_network_internode" -o "./"


As soon as you run the above command, two pods will be created inside "kbench-pod-namespace" on two worker nodes as you see below.
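
You can watch them come up with kubectl; the namespace name comes from the test's config:

kubectl get pods -n kbench-pod-namespace -o wide   # -o wide shows the worker node each pod landed on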


It then starts an "iperf3" process inside each of the two pods to create network load, following a client-server model as per the actions defined in the config.json file.
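
Conceptually, this is the standard iperf3 client-server pattern, equivalent to running something like the following in the two pods (illustrative only; the actual invocation is driven by config.json):

iperf3 -s                          # in the server pod
iperf3 -c <server-pod-ip> -t 30    # in the client pod, pointed at the server pod's IP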


Sample logs are given below. It shows details like the amount of data transferred, transfer rate, network latency, etc.


Once the test run is complete, the pods and other resources created will be automatically deleted. Similarly, you can select the other set of tests that are pre-defined in the framework. I believe you have the flexibility to define custom test cases too as per your requirements. I hope it was useful. Cheers!

Related posts


Storage performance benchmarking of Tanzu Kubernetes clusters
Monitoring Tanzu Kubernetes cluster using Prometheus and Grafana



Friday, January 1, 2021

Dell EMC PowerFlex MP for vROps 8.x - Part7 - Create custom reports

In March 2020, I published a blog on how to create custom views and reports in vROps 8.x. This article explains how to create a custom storage report for Dell EMC PowerFlex using the PowerFlex Management Pack for vROps 8.x. 

A sample PowerFlex Storage Report PDF and the report template are available in my GitHub repo for download. You can use them as a starting point and modify them as per your requirements.

To create a new view: Dashboards - Views - Add.

Provide a name and description for the new view. Here, for example, I will create a view that shows PowerFlex Protection Domain Info.



Select List.


Select Protection Domain as subject and group it by PowerFlex Rack/ Appliance System.


Double click or drag and drop the selected metrics or properties to include in the view. In the following screenshot, I selected 4 capacity metrics to include in the view.


You can also select and change the units and transformation as per requirements. Once it is done, click Save.

Now a view is created. Similarly, you can create multiple views for the different PowerFlex resource kinds. The next step is to include this view in an existing template or in a new template. 

To create a new report template: Dashboards - Reports - Add.

  • Provide a name and description for the new report template.
  • From the views and dashboards, find the PowerFlex Protection Domain Info view that we created earlier, and double-click or drag and drop it to the right pane. You can add multiple views to a report template.
  • Select PDF and CSV.
  • Select the layout options you'd like and click Save.
  • Now the custom report template is created. You can select it and click Run.

Select PowerFlex, then select PowerFlex World, and click OK.


The report will run in the background and will be available to download under the "Generated Reports" tab. You can select it and download the PDF or CSV file. You can even configure a schedule to generate a report and email it or save it to a location automatically based on your requirements. Hope it was useful. Cheers!
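
If you ever need to pull these reports programmatically, the vROps Suite API also exposes report operations; for instance, listing report definitions to find the ID of this custom template. The hostname and credentials below are placeholders, and the exact endpoints are worth confirming against the Suite API docs for your version:

curl -k -u admin -H "Accept: application/json" \
  "https://vrops.lab.local/suite-api/api/reportdefinitions"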


Monday, December 28, 2020

vSphere with Tanzu using NSX-T - Part3 - Edge Cluster

In the previous post, we went through basic NSX-T configurations. The next step is to deploy Edge VMs and create an Edge cluster. Before creating Edge VMs, we need to create two trunk port groups on the VDS using the VCSA web UI. Uplinks of the Edge VMs will be connected to these two trunk port groups.

  • Create 2 trunk port groups on the VDS (a scripted alternative is sketched after this list).
    • "Trunk01-NSXT-Edge"
    • "Trunk02-NSXT-Edge"


  • Add Edge Transport Nodes.
    • Create two Edge VMs "edge-01, edge-02".
    • Click ADD EDGE VM, follow the wizard, and provide all the details.






  • Click Finish. Follow the same process and deploy one more Edge VM.

  • The next step is to create an Edge Cluster "edge-cluster-01" and add the two Edge VMs to it.


The NSX-T Edge Cluster is now ready. Next, we have to add a Tier-0 Gateway and configure BGP peering with the router or L3 TOR switches. This will be covered in the next part. Hope it was useful. Cheers!

Related posts


Part1: Prerequisites
Part2: Configure NSX-T

Tuesday, December 15, 2020

Dell EMC PowerFlex MP for vROps 8.x - Part6 - Create custom alerts

In this post, we will take a look at creating custom alerts for PowerFlex by adding symptom definitions and alert definitions. Refer to my previous blog post to understand more about the alerting aspects in vROps. Here we will take an example scenario and see how we can create custom symptom definitions and alert definitions.

Scenario


The user is running latency-sensitive, business-critical applications on PowerFlex storage. Below are the symptoms they would like to define; alerts should be produced for these symptoms, and they should affect the "Health" badge of the PowerFlex volume object.


Step1: Add Symptom Definitions


Go to Alerts - Symptom Definitions - Click Add.

Select base object type: Expand PowerFlex Adapter - Select Volume.

  • Select the metric User Data SDC Read Latency (ms): double-click it twice so that you can define both warning and critical symptoms.
  • Select the metric User Data SDC Write Latency (ms): double-click it twice so that you can define both warning and critical symptoms.

Now, fill all the required fields as per the conditions we defined earlier.


Click Save. Now, as you can see below, the 4 symptom definitions are created.


Step2: Add Alert Definitions


Go to Alerts - Alert Definitions - Click Add.

  • Provide an alert name, select the base object type and advanced settings, and click Next.

  • Filter and search the symptoms that we created earlier. Drag and drop the two volume read latency related symptoms and select Any. Click Next.

  • If you want to provide any recommendations, you can add them in this step and click Next.
  • Select the vSphere Solution's Default Policy, click Next, and then click Create.
Similarly, you can create an alert definition for PowerFlex Volume Write Latency too.


Now, we are all done. Let's test the alerts! I am using FIO to generate IO load on one of the PowerFlex volumes.
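
For reference, a similar random-read load can be generated with something like this (device path, block size, and runtime are placeholders; point fio only at a dedicated test volume):

# CAUTION: run only against a dedicated test volume
fio --name=latency-test --filename=/dev/sdX --rw=randread \
    --bs=4k --iodepth=16 --direct=1 --runtime=120 --time_based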


You can see the Read Latency for this volume is greater than 1 ms, so a warning alert should be produced for this specific volume.




Hope it was useful. Cheers!

Related posts


Part1: Install
Part2: Configure
Part3: Dashboards
Part4: Resource kinds and relationships
Part5: Collection interval 

