
Friday, March 19, 2021

vSphere with Tanzu using NSX-T - Part5 - Tier-1 Gateway and Segments

In the previous posts, we discussed the following: 

Part1: Prerequisites

Part2: Configure NSX-T

Part3: Edge Cluster

Part4: Tier-0 Gateway and BGP peering


The next step is to create a Tier-1 Gateway and network segments. The UI steps are listed below; a scripted equivalent using the NSX-T Policy API is sketched after the list.

  • Add Tier-1 Gateway.
    • Provide a name, select the linked Tier-0 Gateway, and select the route advertisement settings.

  • Add Segment.
    • Provide the segment name, connected gateway, transport zone, and subnet.
    • Here we are creating an overlay segment; the gateway address 172.16.10.1/24 defines both the gateway IP and the subnet (172.16.10.0/24) for this segment.
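
If you prefer automation, the same Tier-1 Gateway and segment can be created through the NSX-T Policy API. Below is a minimal sketch in Python; the manager FQDN, credentials, gateway/segment IDs, and transport zone ID are assumptions, not values from the screenshots.

    import requests

    NSX = "https://nsx.lab.local"          # assumed manager FQDN
    AUTH = ("admin", "VMware1!VMware1!")   # assumed credentials

    # Tier-1 Gateway linked to the existing Tier-0, advertising connected segments
    t1 = {
        "display_name": "t1-gw-01",
        "tier0_path": "/infra/tier-0s/t0-gw-01",   # assumed Tier-0 ID
        "route_advertisement_types": ["TIER1_CONNECTED"],
    }
    requests.patch(f"{NSX}/policy/api/v1/infra/tier-1s/t1-gw-01",
                   json=t1, auth=AUTH, verify=False).raise_for_status()

    # Overlay segment attached to the Tier-1; 172.16.10.1/24 is the gateway address
    seg = {
        "display_name": "seg-172-16-10",
        "connectivity_path": "/infra/tier-1s/t1-gw-01",
        "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<tz-overlay-id>",
        "subnets": [{"gateway_address": "172.16.10.1/24"}],
    }
    requests.patch(f"{NSX}/policy/api/v1/infra/segments/seg-172-16-10",
                   json=seg, auth=AUTH, verify=False).raise_for_status()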

Now, let's verify whether this segment is being advertised (route advertisement). Following are screenshots from both edge nodes, where you can see that the Tier-0 SR is aware of the 172.16.10.0/24 network:
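
For reference, the checks in those screenshots come from the NSX edge node CLI. Roughly (the VRF number of the Tier-0 SR varies per deployment, so look it up first):

    nsx-edge-01> get logical-routers
    nsx-edge-01> vrf 1                     # enter the Tier-0 SR VRF (ID from the previous command)
    nsx-edge-01(tier0_sr)> get route 172.16.10.0/24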


As the Tier-0 Gateway is connected to the TOR switches via BGP, we can verify whether the TOR switches are aware of this newly created segment. 
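
On a Cisco-style TOR (assumed here; adjust the syntax to your switch OS), the check would look something like:

    tor-a# show ip bgp 172.16.10.0/24
    tor-a# show ip route 172.16.10.0/24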


You can see that the TORs are aware of the 172.16.10.0/24 network via BGP. Let's connect a VM to this segment, assign an IP address, and test network connectivity. 
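
For example, from a test VM given 172.16.10.100/24 (an assumed address on this segment) with 172.16.10.1 as its gateway, a quick check could be:

    ping 172.16.10.1      # segment gateway on the Tier-1
    ping 8.8.8.8          # an external IP, exercising the full path out via the TORs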


You can also view the network topology from NSX-T.


This is the traffic flow: VM -> network segment -> Tier-1 Gateway -> Tier-0 Gateway -> BGP peering -> TOR switches -> NAT VM -> external network.

Hope this was useful. Cheers!

Tuesday, December 8, 2020

vSphere with Tanzu using NSX-T - Part2 - Configure NSX

In this post, we will look at the configuration steps required before enabling workload management. These steps establish connectivity to the supervisor cluster, supervisor namespaces, and all objects that run inside the namespaces, such as vSphere Pods and Tanzu Kubernetes clusters. 

At this point, I assume that the vSAN cluster is up and NSX-T 3.0 is installed. The NSX-T appliance is connected to the same management network as the VCSA and ESXi nodes; in my case, that is VLAN 41. Note that all the ESXi nodes of the vSAN cluster are connected to one vSphere Distributed Switch, and each node has two uplinks connecting to TOR A and TOR B.


NSX-T configurations

  • Add Compute Managers. I've added the vCenter server here.


  • Add Uplink Profiles.
    • Create a host uplink profile "nsx-uplink-profile-lbsrc" (this is for host TEP using VLAN 52).
    • Create an edge uplink profile "nsx-edge-uplink-profile-lbsrc" (this is for edge TEP using VLAN 53).
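
If you prefer to script these steps, the profiles can also be created through the NSX-T Manager API. A minimal sketch in Python, assuming a manager at nsx.lab.local with admin credentials (the names and VLANs mirror the values above; the uplink labels are assumptions):

    import requests

    NSX = "https://nsx.lab.local"          # assumed manager FQDN
    AUTH = ("admin", "VMware1!VMware1!")   # assumed credentials

    # Host uplink profile: load-balance source teaming over two uplinks, TEP VLAN 52
    host_profile = {
        "resource_type": "UplinkHostSwitchProfile",
        "display_name": "nsx-uplink-profile-lbsrc",
        "transport_vlan": 52,
        "teaming": {
            "policy": "LOADBALANCE_SRCID",
            "active_list": [
                {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
                {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
            ],
        },
    }
    requests.post(f"{NSX}/api/v1/host-switch-profiles",
                  json=host_profile, auth=AUTH, verify=False).raise_for_status()
    # The edge profile "nsx-edge-uplink-profile-lbsrc" is identical except transport_vlan: 53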



  • Add Transport Zones.
    • Create an overlay transport zone "tz-overlay".
    • Create a VLAN transport zone "edge-vlan-tz".
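
The equivalent Manager API calls for the transport zones would look roughly like this (same session setup as the earlier sketch; the N-VDS switch name is an assumption):

    import requests

    NSX = "https://nsx.lab.local"
    AUTH = ("admin", "VMware1!VMware1!")

    for name, tz_type in [("tz-overlay", "OVERLAY"), ("edge-vlan-tz", "VLAN")]:
        tz = {
            "display_name": name,
            "transport_type": tz_type,
            "host_switch_name": "nvds-1",   # assumed switch name
        }
        requests.post(f"{NSX}/api/v1/transport-zones",
                      json=tz, auth=AUTH, verify=False).raise_for_status()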




  • Add IP Address Pools.
    • Create an IP address pool for host TEPs "TEP-Pool-01" (this is for host TEP using VLAN 52).
    • Create an IP address pool for edge TEPs "Edge-TEP-Pool-01" (this is for Edge TEP using VLAN 53).
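
The pools can likewise be created via the Manager API. Since the actual subnets come from your IP schema, every address below is a placeholder:

    import requests

    NSX = "https://nsx.lab.local"
    AUTH = ("admin", "VMware1!VMware1!")

    # Host TEP pool on VLAN 52 (addresses are assumptions)
    tep_pool = {
        "display_name": "TEP-Pool-01",
        "subnets": [{
            "cidr": "192.168.52.0/24",
            "gateway_ip": "192.168.52.1",
            "allocation_ranges": [{"start": "192.168.52.11", "end": "192.168.52.50"}],
        }],
    }
    requests.post(f"{NSX}/api/v1/pools/ip-pools",
                  json=tep_pool, auth=AUTH, verify=False).raise_for_status()
    # "Edge-TEP-Pool-01" is the same structure on the VLAN 53 subnet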



  • Add Transport Node Profiles.


  • Configure Host Transport Nodes. Select the required cluster and click Configure NSX to convert all the ESXi nodes into transport nodes.
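
Behind the scenes, Configure NSX on a cluster attaches the transport node profile to that cluster's compute collection. A hedged API sketch (both IDs are placeholders you would look up first):

    import requests

    NSX = "https://nsx.lab.local"
    AUTH = ("admin", "VMware1!VMware1!")

    tnc = {
        "resource_type": "TransportNodeCollection",
        "display_name": "tnc-vsan-cluster",
        "compute_collection_id": "<id from /api/v1/fabric/compute-collections>",
        "transport_node_profile_id": "<id from /api/v1/transport-node-profiles>",
    }
    requests.post(f"{NSX}/api/v1/transport-node-collections",
                  json=tnc, auth=AUTH, verify=False).raise_for_status()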

The next step is to deploy Edge VMs (Edge Transport Nodes) and create an Edge Cluster. We will cover it in the next part. Hope it was useful. Cheers!

Related posts


Part1 - Prerequisites



Monday, December 7, 2020

vSphere with Tanzu using NSX-T - Part1 - Prerequisites

In this blog series, I would like to share my experience deploying vSphere with Tanzu using NSX-T 3.0 networking. Following is a very high-level workflow of setting up the environment from scratch:


Software versions used for this study:
  • vSphere 7 U1
  • NSX-T 3.0.1.1

If you are using VCF, some of the deployment steps mentioned above are already automated. However, VCF is outside the scope of this blog series; this post aims to explain the workflow and background configurations required to prepare the environment before enabling workload management. This is useful for understanding the backend configuration tasks and logical workflow, and it helps with troubleshooting too.

In my lab, I have 4 Dell EMC PowerEdge rack servers connected to 2 TOR switches with 2 uplinks from each server. This is a consolidated architecture where both the management components and the application workloads run on the same 4-node vSAN cluster. I am using ESXi 7 U1 and VCSA 7 U1. Here, the TORs act as L3 switches, and we do BGP peering between the TOR switches and the NSX-T Tier-0 Gateway, which will be explained in a later part of this series.

Network connectivity



IP address schema and VLANs




I will not be covering how to create VLANs, configure switch ports, deploy a vSAN cluster, etc. I assume that the network switches are correctly configured and the vSAN cluster is up and running. The next step is to deploy the NSX-T 3.0 appliance. You can deploy it either as a single node or in HA using 3 NSX-T appliances. In my lab, I have only one NSX-T 3.0 instance. Following are some reference materials for deploying NSX-T 3.0:

Wednesday, September 17, 2014

NAT

-NAT is the process of mapping one IP address to another
-NAT is used to conserve public IP addresses
-It provides security by hiding internal IP addresses

Types of NAT

-Static NAT (one to one)
  • One private IP is mapped to one public IP
  • Generally used for hosting public servers
-Dynamic NAT (many to many)
  • Many private IPs are mapped to many public IPs
  • Can only be configured for outbound traffic
-PAT (many to one)
  • Many private IPs are mapped to one public IP
  • All users can access the internet at the same time
  • Can only be configured for outbound traffic
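
To make these concrete, here is roughly how each type looks on a Cisco IOS router. This is a hedged sketch: the interface names and the RFC 1918 / documentation addresses are assumptions, not values from these notes.

    ! Mark inside and outside interfaces (needed for every NAT type)
    interface GigabitEthernet0/0
     ip nat inside
    interface GigabitEthernet0/1
     ip nat outside
    !
    ! Static NAT (one to one): 192.168.1.10 always appears as 203.0.113.10
    ip nat inside source static 192.168.1.10 203.0.113.10
    !
    ! Dynamic NAT (many to many): hosts matching ACL 1 borrow a free pool address
    access-list 1 permit 192.168.1.0 0.0.0.255
    ip nat pool PUBLIC-POOL 203.0.113.20 203.0.113.30 netmask 255.255.255.0
    ip nat inside source list 1 pool PUBLIC-POOL
    !
    ! PAT (many to one): all hosts share one interface IP, distinguished by port
    ! (use either the dynamic pool rule above or this overload rule, not both for the same ACL)
    ip nat inside source list 1 interface GigabitEthernet0/1 overload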

-In practice, we use dynamic source NAT or destination NAT, depending on the purpose
-For accessing a publicly hosted web server, we use a destination NAT rule, where all requests coming to the public IP of that web server are mapped to an internal private IP
-A group of internal users accessing the internet is handled with a source NAT rule, where all internal requests are mapped to an external public IP
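
For example, the web-server case above maps to a port-forwarding style rule on Cisco IOS (addresses and port are placeholders): inbound traffic to the public IP is translated to the internal server.

    ! Inbound TCP/80 to 203.0.113.10 is forwarded to the internal server 192.168.1.10
    ip nat inside source static tcp 192.168.1.10 80 203.0.113.10 80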