
Saturday, April 24, 2021

vSphere with Tanzu using NSX-T Blog Series


Part1 - Prerequisites
Part2 - Configure NSX
Part3 - Edge Cluster
Part4 - Tier-0 Gateway and BGP peering
Part5 - Tier-1 Gateway and Segments
Part6 - Create tags, storage policy, and content library
Part7 - Enable workload management
Part8 - Create namespace and deploy Tanzu Kubernetes Cluster
Part9 - Monitoring
Part10 - Upgrade Tanzu Kubernetes Cluster
Part11 - Troubleshooting Tanzu Kubernetes Cluster
Part12 - Deploy application on TKC and access it
Part13 - Export WCP admin kubeconfig
Part14 - Testing TKC storage using kubestr
Part15 - Working with etcd on TKC with one control plane
Part16 - Troubleshooting content library related issues
Part17 - Troubleshooting TKC stuck at updating phase
Part18 - Troubleshooting vSphere pods with ProviderFailed status
Part19 - Troubleshooting TKC stuck at creating phase
Part20 - Safely deleting NotReady nodes from a TKC
Part21 - Pointers while upgrading the stack
Part22 - Working with NGINX Ingress Controller
Part23 - Supervisor cluster certificates expiry
Part24 - Kubernetes component certs in TKC
Part25 - Spherelet
Part26 - Jumpbox kubectl plugin to SSH to TKC node
Part27 - nullfinalizer kubectl plugin
Part28 - Create a custom VM Class
Part29 - Logging using Loki stack
Part30 - Troubleshooting inaccessible TKC with server pool members missing in the LB VS
Part31 - Troubleshooting inaccessible TKC with expired control plane certs
Part32 - Troubleshooting BGP related issues
Part33 - Troubleshooting intermittent connection timeouts to apiserver and workloads
Part34 - CPU and Memory utilization of a supervisor cluster
Part35 - Monitoring supervisor cluster health with Python and vCenter APIs

Friday, March 19, 2021

vSphere with Tanzu using NSX-T - Part5 - Tier-1 Gateway and Segments

In the previous posts we discussed the following: 

Part1: Prerequisites

Part2: Configure NSX-T

Part3: Edge Cluster

Part4: Tier-0 Gateway and BGP peering


The next step is to create a Tier-1 Gateway and network segments. 

  • Add Tier-1 Gateway.
    • Provide a name, select the linked Tier-0 Gateway, and select the route advertisement settings.

  • Add Segment.
    • Provide the segment name, connected gateway, transport zone, and subnet.
    • Here we are creating an overlay segment; the gateway IP for this segment is 172.16.10.1/24, which corresponds to the subnet 172.16.10.0/24.
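
Instead of the UI, the same Tier-1 Gateway and overlay segment can also be created through the NSX-T Policy REST API. The following is only a minimal sketch; the manager address, credentials, object names, and the overlay transport zone UUID are assumptions and must be adjusted to your environment.

# Create a Tier-1 Gateway linked to the existing Tier-0 (names assumed)
curl -k -u 'admin:<password>' -X PATCH \
  -H 'Content-Type: application/json' \
  https://<nsx-manager>/policy/api/v1/infra/tier-1s/t1-gw-01 \
  -d '{"display_name": "t1-gw-01",
       "tier0_path": "/infra/tier-0s/t0-gw-01",
       "route_advertisement_types": ["TIER1_CONNECTED"]}'

# Create an overlay segment attached to the Tier-1 Gateway
curl -k -u 'admin:<password>' -X PATCH \
  -H 'Content-Type: application/json' \
  https://<nsx-manager>/policy/api/v1/infra/segments/seg-172-16-10 \
  -d '{"display_name": "seg-172-16-10",
       "connectivity_path": "/infra/tier-1s/t1-gw-01",
       "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<tz-overlay-uuid>",
       "subnets": [{"gateway_address": "172.16.10.1/24"}]}'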

Now, let's verify whether this segment is being advertised via route advertisement. Following are the screenshots from both edge nodes, and you can see that the Tier-0 SR is aware of the 172.16.10.0/24 network:
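
If you prefer the CLI, the same check can be done from the edge node console. This is a minimal sketch, assuming SSH access to the edge nodes as admin; the VRF ID of the Tier-0 SR may differ in your setup.

get logical-routers
# note the VRF ID of the SERVICE_ROUTER_TIER0 instance
vrf 1
# replace 1 with the VRF ID noted above
get route
# 172.16.10.0/24 should show up as a route on the Tier-0 SR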


As the Tier-0 Gateway is connected to the TOR switches via BGP, we can verify whether the TOR switches are aware of this newly created segment.


You can see that the TORs are aware of the 172.16.10.0/24 network via BGP. Let's connect a VM to this segment, assign an IP address, and test network connectivity.
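
A simple way to test this end to end from a Linux test VM is sketched below; the VM IP is assumed to be from the 172.16.10.0/24 range, and external reachability depends on the NAT VM mentioned in the traffic flow later in this post.

# From the test VM (gateway and TOR IPs as per this lab)
ping 172.16.10.1      # segment gateway on the Tier-1
ping 192.168.54.2     # TOR A interface in VLAN 54
ping 8.8.8.8          # an external address, reachable via the NAT VM

# On the TOR switches, confirm the segment is learned via BGP
show ip route bgp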


You can also view the network topology from NSX-T.


This is the traffic flow: VM - Network segment - Tier-1 Gateway - Tier-0 Gateway - BGP peering - TOR switches - NAT VM - External network.
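
From the VM side, a traceroute gives a rough picture of these hops. This is only a sketch; the intermediate addresses depend on your environment, and NSX-T uses the 100.64.0.0/16 range by default for the Tier-0/Tier-1 transit link.

traceroute -n 8.8.8.8
# first hop - 172.16.10.1 (segment gateway on the Tier-1)
# next hop  - Tier-0/Tier-1 transit address (100.64.x.x by default)
# then      - TOR switch, NAT VM, and the external network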

Hope this was useful. Cheers!

Sunday, February 7, 2021

vSphere with Tanzu using NSX-T - Part4 - Tier-0 Gateway and BGP peering

In the previous posts we discussed the following: 

Part1: Prerequisites
Part2: Configure NSX-T
Part3: Edge Cluster


The next step is to create a Tier-0 Gateway, configure its interfaces, and BGP peer with the L3 TOR switches. Following is a high-level logical representation of this configuration:


Configure Tier-0 Gateway


Before creating the Tier-0 Gateway, we need to create two VLAN segments for its uplinks.

  • Add Segments.
    • Create a segment "ls-uplink-v54"
      • VLAN: 54
      • Transport Zone: "edge-vlan-tz"
    • Create a segment "ls-uplink-v55"
      • VLAN: 55
      • Transport Zone: "edge-vlan-tz"

  • Add Tier-0 Gateway.
    • Provide the necessary details as shown below.

    • Add 4 interfaces and configure them as per the logical diagram given above.
      • edge-01-uplink1 - 192.168.54.254/24 - connected via segment ls-uplink-v54
      • edge-01-uplink2 - 192.168.55.254/24 - connected via segment ls-uplink-v55

      • edge-02-uplink1 - 192.168.54.253/24 - connected via segment ls-uplink-v54
      • edge-02-uplink2 - 192.168.55.253/24 - connected via segment ls-uplink-v55
    • Verify that the status shows success for all four interfaces that you added.

  • Routing and multicast settings of T0 are as follows:

    • You can see a static route is configured. The next hop for the default route 0.0.0.0/0 is set to 192.168.54.1. 

    • The next hop configuration is given below.

  • BGP settings of T0 are shown below.

    • BGP Neighbor config:

    • Verify that the status shows success for the two BGP neighbors that you added.

  • Route re-distribution settings of T0:

    • Add route re-distribution.
    • Set route re-distribution.

Now, the T0 configuration is complete. The next step is to configure BGP on the Dell S4048-ON TOR switches.

Configure TOR Switches


---On TOR A---
conf
router bgp 65500
neighbor 192.168.54.254 remote-as 65400
#peering to T0 edge-01 interface
neighbor 192.168.54.254 no shutdown
neighbor 192.168.54.253 remote-as 65400
#peering to T0 edge-02 interface
neighbor 192.168.54.253 no shutdown
neighbor 192.168.54.3 remote-as 65500
#peering to TOR B in VLAN 54
neighbor 192.168.54.3 no shutdown
maximum-paths ebgp 4
maximum-paths ibgp 4


---On TOR B---
conf
router bgp 65500
neighbor 192.168.55.254 remote-as 65400
#peering to T0 edge-01 interface
neighbor 192.168.55.254 no shutdown
neighbor 192.168.55.253 remote-as 65400
#peering to T0 edge-02 interface
neighbor 192.168.55.253 no shutdown
neighbor 192.168.54.2 remote-as 65500
#peering to TOR A in VLAN 54
neighbor 192.168.54.2 no shutdown
maximum-paths ebgp 4
maximum-paths ibgp 4


---Advertising ESXi mgmt and VM traffic networks in BGP on both TORs---

conf
router bgp 65500
network 192.168.41.0/24
network 192.168.43.0/24


Thanks to my friend and vExpert Harikrishnan @hari5611 for helping me with the T0 configs and BGP peering on TORs. Do check out his blog https://vxplanet.com/


Verify BGP Configurations


The next step is to verify the BGP configs on TORs using the following commands:

show running-config bgp

show ip bgp summary

show ip bgp neighbors
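
To see what is actually being exchanged on a given peering, you can also check the advertised routes per neighbor. A sketch on Dell OS9, using TOR A's peering towards the edge-01 uplink from the config above:

show ip bgp neighbors 192.168.54.254 advertised-routes
# should include the 192.168.41.0/24 and 192.168.43.0/24 networks added earlier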


Follow the VMware documentation to verify the BGP connections from a Tier-0 Service Router. In the screenshot below, you can see that both edge nodes have the BGP neighbors 192.168.54.2 and 192.168.55.3 in the Estab state.
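
For reference, the equivalent check from the edge node CLI looks roughly like the following sketch (assuming SSH to the edge as admin; the VRF ID of the Tier-0 SR may differ):

get logical-routers
# note the VRF ID of the SERVICE_ROUTER_TIER0 instance
vrf 1
get bgp neighbor summary
# both TOR neighbors should be listed with state Estab
get route bgp
# routes learned from the TORs, such as 192.168.41.0/24 and 192.168.43.0/24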


In the next article, I will talk about adding a Tier-1 Gateway, adding new segments for apps, connecting VMs to the segments, and verifying connectivity to different internal and external networks. I hope this was useful. Cheers!

Monday, December 28, 2020

vSphere with Tanzu using NSX-T - Part3 - Edge Cluster

In the previous post, we went through basic NSX-T configurations. The next step is to deploy Edge VMs and create an Edge cluster. Before creating Edge VMs, we need to create two trunk port groups on the VDS using the VCSA web UI. Uplinks of the Edge VMs will be connected to these two trunk port groups.

  • Create 2 trunk port groups on the VDS.
    • "Trunk01-NSXT-Edge"
    • "Trunk02-NSXT-Edge"


  • Add Edge Transport Nodes.
    • Create two Edge VMs: "edge-01" and "edge-02".
    • Click ADD EDGE VM, follow the wizard, and provide all the details.






  • Click Finish. Follow the same process and deploy one more Edge VM.

  • The next step is to create an Edge Cluster "edge-cluster-01" and add the two Edge VMs to it.
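
Once both Edge VMs are deployed and added to the cluster, a quick sanity check from each edge node's console confirms they are registered with the NSX management plane. A minimal sketch, assuming SSH access to the edge nodes as the admin user:

get managers
# the NSX manager node(s) should be listed as connected

get interface eth0
# management interface details of the edge node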


The NSX-T Edge Cluster is now ready. Next, we have to add a Tier-0 Gateway and configure BGP peering with the router or L3 TOR switches. This will be covered in the next part. Hope it was useful. Cheers!

Related posts


Part1: Prerequisites
Part2: Configure NSX-T

Tuesday, December 8, 2020

vSphere with Tanzu using NSX-T - Part2 - Configure NSX

In this post, we will take a look at the NSX-T configuration steps that are required before enabling workload management. These configurations establish connectivity to the supervisor cluster, supervisor namespaces, and all objects that run inside the namespaces, such as vSphere Pods and Tanzu Kubernetes Clusters.

At this point, I assume that the vSAN cluster is up and NSX-T 3.0 is installed. The NSX-T appliance is connected to the same management network as the VCSA and ESXi nodes; in my case, this is VLAN 41. Note that all the ESXi nodes of the vSAN cluster are connected to one vSphere Distributed Switch, with two uplinks from each node connecting to TOR A and TOR B.


NSX-T configurations

  • Add Compute Managers. I've added the vCenter server here.


  • Add Uplink Profiles.
    • Create a host uplink profile "nsx-uplink-profile-lbsrc" (this is for host TEP using VLAN 52).
    • Create an edge uplink profile "nsx-edge-uplink-profile-lbsrc" (this is for edge TEP using VLAN 53).



  • Add Transport Zones.
    • Create an overlay transport zone "tz-overlay".
    • Create a VLAN transport zone "edge-vlan-tz".




  • Add IP Address Pools.
    • Create an IP address pool for host TEPs "TEP-Pool-01" (this is for host TEP using VLAN 52).
    • Create an IP address pool for edge TEPs "Edge-TEP-Pool-01" (this is for Edge TEP using VLAN 53).



  • Add Transport Node Profiles.


  • Configure Host Transport Nodes. Select the required cluster and click Configure NSX to prepare all the ESXi nodes as transport nodes. A quick TEP connectivity check is sketched below.
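
After host preparation completes, it is worth confirming TEP-to-TEP connectivity and MTU between the hosts. The following is a minimal sketch from an ESXi shell; the remote TEP IP is a placeholder, and the vmk interface numbers depend on your environment.

esxcfg-vmknic -l
# identify the TEP vmkernel interfaces created by NSX (often vmk10 and vmk11)

vmkping ++netstack=vxlan -d -s 1572 <remote-TEP-IP>
# -d sets don't-fragment; -s 1572 sends a 1600-byte packet (1572 + 28 bytes of
# headers) to validate the 1600-byte MTU requirement for the overlay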

The next step is to deploy Edge VMs (Edge Transport Nodes) and create an Edge Cluster. We will cover this in the next part. Hope it was useful. Cheers!

Related posts


Part1 - Prerequisites


Monday, December 7, 2020

vSphere with Tanzu using NSX-T - Part1 - Prerequisites

In this blog series, I would like to share my experience deploying vSphere with Tanzu using NSX-T 3.0 networking. Following is a very high-level workflow of setting up the environment from scratch:


Software versions used for this study:
  • vSphere 7 U1
  • NSX-T 3.0.1.1

If you are using VCF, some of the deployment steps mentioned above are already automated. But VCF is not in the scope of this blog series; this post is aimed at explaining the workflow and the background configurations required to prepare the environment before enabling workload management. This will help you understand the backend configuration tasks and the logical workflow, and it will also be useful for troubleshooting.

In my lab, I have 4 Dell EMC PowerEdge rack servers connected to 2 TOR switches, with 2 uplinks from each server. This is a consolidated architecture where both the management components and the application workloads run on the same 4-node vSAN cluster. I am using ESXi 7 U1 and VCSA 7 U1. Here, the TORs act as L3 switches, and we do BGP peering between the TOR switches and the NSX-T Tier-0 Gateway, which will be explained in a later part of this blog series.

Network connectivity



IP address schema and VLANs




I will not be covering how to create VLANs, configure switch ports, deploy a vSAN cluster, etc. I assume that the network switches are correctly configured and the vSAN cluster is up and running. The next step is to deploy an NSX-T 3.0 appliance. You can either deploy it as a single node or in HA using 3 NSX-T appliances. In my lab, I have only one NSX-T 3.0 instance. Following are some of the reference materials for deploying NSX-T 3.0: