
Tuesday, December 8, 2020

vSphere with Tanzu using NSX-T - Part2 - Configure NSX

In this post, we will take a look at the NSX-T configuration steps that are required before enabling workload management. These configurations establish connectivity to the supervisor cluster, supervisor namespaces, and all the objects that run inside the namespaces, such as vSphere pods and Tanzu Kubernetes clusters.

At this point, I assume that the vSAN cluster is up and NSX-T 3.0 is installed. The NSX-T appliance is connected to the same management network as the VCSA and ESXi nodes; in my case, that is VLAN 41. Note that all the ESXi nodes of the vSAN cluster are connected to one vSphere Distributed Switch, with two uplinks from each node connecting to TOR A and TOR B.
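Before starting the NSX configuration, it can be worth sanity-checking the host networking from an ESXi shell on one of the cluster nodes. A minimal sketch using standard esxcli commands (output fields vary slightly between ESXi builds):

# List the physical NICs (vmnics) and their link state
esxcli network nic list

# Confirm the host is attached to the expected vSphere Distributed Switch
# and see which vmnics are used as its uplinks
esxcli network vswitch dvs vmware list

# List the VMkernel interfaces and their IPv4 config (management vmk on VLAN 41 here);
# NSX host preparation will later add TEP vmk interfaces on VLAN 52
esxcli network ip interface ipv4 get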


NSX-T configurations

  • Add Compute Managers. I've added the vCenter server here.


  • Add Uplink Profiles.
    • Create a host uplink profile "nsx-uplink-profile-lbsrc" (this is for host TEP using VLAN 52).
    • Create an edge uplink profile "nsx-edge-uplink-profile-lbsrc" (this is for edge TEP using VLAN 53).



  • Add Transport Zones.
    • Create an overlay transport zone "tz-overlay".
    • Create a VLAN transport zone "edge-vlan-tz".




  • Add IP Address Pools.
    • Create an IP address pool for host TEPs "TEP-Pool-01" (this is for host TEP using VLAN 52).
    • Create an IP address pool for edge TEPs "Edge-TEP-Pool-01" (this is for Edge TEP using VLAN 53).



  • Add Transport Node Profiles.


  • Configure Host Transport Nodes. Select the required cluster and click Configure NSX to prepare all the ESXi nodes as transport nodes. A quick way to sanity-check all of these objects through the NSX Manager API is sketched below.
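Once the hosts are prepared, you can cross-check the objects created above against the NSX Manager REST API. This is a hedged sketch using the management-plane API paths as I recall them in NSX-T 3.0 (the Policy API is the newer alternative); the manager FQDN and credentials are placeholders, and jq is only used for readable output:

NSX_MGR="nsx-manager.example.com"      # placeholder FQDN for the NSX Manager
CRED="admin:<password>"                # replace with your NSX admin credentials

# Compute managers (should list the vCenter server added earlier)
curl -ks -u "$CRED" https://$NSX_MGR/api/v1/fabric/compute-managers | jq '.results[].display_name'

# Transport zones (expect tz-overlay and edge-vlan-tz)
curl -ks -u "$CRED" https://$NSX_MGR/api/v1/transport-zones | jq '.results[].display_name'

# Uplink profiles (host switch profiles) and TEP IP pools
curl -ks -u "$CRED" https://$NSX_MGR/api/v1/host-switch-profiles | jq '.results[].display_name'
curl -ks -u "$CRED" https://$NSX_MGR/api/v1/pools/ip-pools | jq '.results[].display_name'

# Host transport nodes (the prepared ESXi hosts)
curl -ks -u "$CRED" https://$NSX_MGR/api/v1/transport-nodes | jq '.results[].display_name'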

The next step is to deploy Edge VMs (Edge Transport Nodes) and create an Edge Cluster. We will cover that in the next part. Hope it was useful. Cheers!

Related posts


Part1 - Prerequisites




Monday, December 7, 2020

vSphere with Tanzu using NSX-T - Part1 - Prerequisites

In this blog series, I would like to share my experience deploying vSphere with Tanzu using NSX-T 3.0 networking. Following is a very high-level workflow of setting up the environment from scratch:


Software versions used for this study:
  • vSphere 7 U1
  • NSX-T 3.0.1.1

If you are using VCF, some of the deployment steps mentioned above are already automated. But VCF is not in the scope of this blog series; this post is aimed at explaining the workflow and the background configurations that are required to prepare the environment before enabling workload management. This is useful for understanding the backend configuration tasks and the logical workflow, and it will also help with troubleshooting.

In my lab, I have 4 Dell EMC PowerEdge rack servers connected to 2 TOR switches, with 2 uplinks from each server. This is a consolidated architecture where both the management components and the application workloads run on the same 4-node vSAN cluster. I am using ESXi 7 U1 and VCSA 7 U1. Here, the TORs act as L3 switches, and we do BGP peering between the TOR switches and the NSX-T T0 Gateway, which will be explained in a later part of this blog series.

Network connectivity



IP address schema and VLANs




I will not be covering how to create VLANs, configure switch ports, deploy a vSAN cluster, etc. I assume that the network switches are correctly configured and the vSAN cluster is up and running. The next step is to deploy the NSX-T 3.0 appliance. You can deploy it either as a single node or in HA using 3 NSX-T appliances; in my lab, I have only one NSX-T 3.0 instance. Following are some of the reference materials to deploy NSX-T 3.0:

Thursday, July 9, 2020

Tanzu Kubernetes Grid (TKG) on vSphere 6.7 U3 - Part3

In this blog, I will explain how to deploy an FIO application pod with persistent storage on your Tanzu Kubernetes workload cluster.

Step 1: Deploy a K8s workload cluster

tkg create cluster <cluster name> --plan=dev


Now the workload K8s cluster is deployed with a Master, LB, and Worker node.
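With the workload cluster up, the FIO pod with persistent storage can be deployed. Here is a minimal, hypothetical sketch: first fetch the kubeconfig of the new cluster with tkg get credentials and point kubectl at it (verify the exact context name with kubectl config get-contexts), then apply a PersistentVolumeClaim and an FIO pod manifest. The object names, PVC size, and container image below are placeholders and not from the original post:

# Get the kubeconfig/context for the new workload cluster
tkg get credentials <cluster name>
kubectl config use-context <cluster name>-admin@<cluster name>

# Create a PVC and a pod that runs fio against the mounted volume
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fio-pvc                 # hypothetical name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi             # placeholder size
  # storageClassName: <storage class>   # omit to use the default StorageClass
---
apiVersion: v1
kind: Pod
metadata:
  name: fio-pod                 # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: fio
    image: <your-fio-image>     # placeholder: any image with fio installed
    command: ["fio", "--name=randwrite", "--rw=randwrite", "--bs=4k",
              "--size=1G", "--directory=/data", "--runtime=60", "--time_based"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: fio-pvc
EOF

Once the pod completes, kubectl logs fio-pod shows the FIO results, and the bound volume can be checked with kubectl get pvc.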


Wednesday, June 24, 2020

Tanzu Kubernetes Grid (TKG) on vSphere 6.7 U3 - Part2

In this post, I will explain how to deploy and manage multiple Kubernetes workload clusters using TKG CLI.

To view the management cluster: tkg get management-cluster
To create a new workload cluster: tkg create cluster <cluster name> --plan=<cluster plan>


Now, as per the default dev plan, one master node, one worker node, and a load balancer are deployed.
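Beyond creating clusters, a few other TKG CLI operations are handy when managing multiple workload clusters. These are based on the TKG 1.1 CLI as I recall it, so verify the exact commands and flags with tkg --help:

# List the workload clusters managed by the current management cluster
tkg get cluster

# Scale an existing workload cluster (flag name per the TKG 1.1 CLI; check tkg scale cluster --help)
tkg scale cluster <cluster name> --worker-machine-count 3

# Fetch the kubeconfig/credentials for a workload cluster
tkg get credentials <cluster name>

# Delete a workload cluster that is no longer needed
tkg delete cluster <cluster name>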


Tuesday, June 23, 2020

Tanzu Kubernetes Grid (TKG) on vSphere 6.7 U3 - Part1


TKG is an enterprise-ready Kubernetes runtime that provides a consistent, upstream-compatible implementation of Kubernetes, tested, signed, and supported by VMware.

Installation

I am using a 3-node vSAN cluster running vSphere 6.7 U3 to deploy TKG. The first step is to prepare a VM that will be used to kick-start the deployment process. Here I am using a CentOS 7 VM with a desktop UI. Download the TKG CLI, TKG Kubernetes OVA, and Load Balancer OVA from the following link:


I am using the following versions:
  • VMware Tanzu Kubernetes Grid CLI 1.1 Linux
  • VMware Tanzu Kubernetes Grid 1.1.0 Kubernetes v1.18.2 OVA
  • VMware Tanzu Kubernetes Grid 1.1 Load Balancer OVA

Unzip and install the TKG CLI on the CentOS VM, as sketched below.
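A minimal sketch of installing the CLI from the downloaded archive (the file name below is a placeholder; it depends on the exact download):

# Unpack the downloaded TKG CLI gzip archive
gunzip tkg-linux-amd64-v1.1.0*.gz

# Make the binary executable and put it on the PATH as 'tkg'
chmod +x tkg-linux-amd64-v1.1.0*
sudo mv tkg-linux-amd64-v1.1.0* /usr/local/bin/tkg

# Verify the installation
tkg version

Note that the bootstrap VM also needs Docker, since tkg init brings up a temporary kind cluster locally before the management cluster is deployed to vSphere.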

Saturday, March 21, 2020

vRealize Operations Manager 8 - Custom views and reports

vROps has a number of built-in view and report templates. In this post, I will explain how to create custom views and reports.

To create a new view: Dashboards - Views - New view

Provide a name and description for the new view. Here, for example, I will create a view that shows datastore write performance stats.


Select List.


Select Datastore as subject and group it by cluster compute resource.


Drag and drop the selected metrics or properties to include in the view.
In the following screenshot, I selected the property "Is Local" and included it in the view.


I added another property, "Type", to the view.


Similarly, I added some metrics like Write IOPS, Latency, and Throughput to the view.
I selected "Maximum" as the transformation to get the peak value of the selected metrics.


In the screenshot below, you can see that I selected the "Maximum" and "Average" transformations for the Write IOPS, Latency, and Throughput metrics. You can also select the units as per your requirements. Here, I selected "us" (microseconds) for latency and "MBps" for throughput.

Click Save. 


Similarly, I created three custom views that show:
  • "Datastore read performance stats"
  • "Datastore write performance stats" 
  • "Datastore capacity usage"


To create a new report template: Dashboards - Reports - New template


Provide a name and description for the new report template.


From the views and dashboards, find the three custom datastore views that we created earlier and drag and drop them to the right pane as shown below.


Select PDF and CSV.


Select all the layout options if you like, and click Save.


Now the custom report template is created. You can select it and click Run.


Select vSphere Hosts and Clusters, then select vSphere World, and click OK.


The report will run in the background and will be available to download under the "Generated Reports" tab. You can select it and download the PDF or CSV file.


You can even configure a schedule to generate a report and email it or save it to a location automatically based on your requirements. Hope it was useful. Cheers!



Friday, December 20, 2019

VMware PowerCLI 101 - part6 - vSphere networking

Networking is one of the important factors for ensuring service availability. Incorrect network configurations can lead to the unavailability of data and services, and if this happens in a production environment, it will negatively affect the business.

In this article, I will briefly explain how to use PowerCLI to work with basic vSphere networking.

Connect to vCenter server using:
Connect-VIServer <IP address of vCenter>


VM IP


To get all IP details of a VM:
(Get-VM -Name <VM name>).Guest.IPAddress




VM network adapters, MAC, and IP


To get all network adapters, MAC address and IP details of a VM:
(Get-VM -Name <VM name>).Guest.Nics | select *



OR

(Get-VM -Name <VM name>).ExtensionData.Guest.Net


VDS


To get all the Virtual Distributed Switches (VDS):
Get-VDSwitch



To get all the details of a specific VDS:
Get-VDSwitch -Name <VD Switch name> | select *




To get VDS security policy:
Get-VDSwitch -Name <VD Switch name> | Get-VDSecurityPolicy | select *



VD Port group


To get all port groups of a specific VDS:
Get-VDPortgroup -VDSwitch <VD Switch name>



To get all the details of a specific port group in a VDS:
Get-VDPortgroup -VDSwitch <VD Switch name> -Name <Port group name> | select *




VD Port


To get all VD ports of a specific VD port group in a VDS:
Get-VDSwitch <VD Switch name> | Get-VDPortgroup <Port group name> | Get-VDPort

To get only active VD ports of a specific VD port group in a VDS:
Get-VDSwitch <VD Switch name> | Get-VDPortgroup <Port group name> | Get-VDPort -ActiveOnly


To get all details of a specific VD port in a VDS:
Get-VDPort -Key <Value> -VDSwitch <VD Switch name> | select * 




VM connected to a VD port


To get the VM that is connected to a VD port, first get the connectee details of the port (which include the Id of the connected VM), and then look up the VM by that Id:
(Get-VDPort -Key <Value> -VDSwitch <VD Switch name>).ExtensionData.Connectee
Get-VM -Id <VM Id>



Find VM using NIC MAC


Get-VM | where {$_.ExtensionData.Guest.Net.MacAddress -eq '<MAC Address>'}


Hope it was useful. Cheers!

