
Monday, December 28, 2020

vSphere with Tanzu using NSX-T - Part3 - Edge Cluster

In the previous post, we went through basic NSX-T configurations. The next step is to deploy Edge VMs and create an Edge cluster. Before creating Edge VMs, we need to create two trunk port groups on the VDS using the VCSA web UI. Uplinks of the Edge VMs will be connected to these two trunk port groups.

  • Create 2 trunk port groups on the VDS.
    • "Trunk01-NSXT-Edge"
    • "Trunk02-NSXT-Edge"


  • Add Edge Transport Nodes.
    • Create two Edge VMs: "edge-01" and "edge-02".
    • Click ADD EDGE VM, follow the wizard, and provide all the details.






  • Click Finish. Follow the same process and deploy one more Edge VM.

  • The next step is to create an Edge Cluster "edge-cluster-01" and add the two Edge VMs to it.
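
For reference, the same Edge cluster could also be created through the NSX-T Manager API. The sketch below is only an assumption-laden example (PowerShell 7+ for -SkipCertificateCheck, a placeholder NSX Manager FQDN, and NSX admin credentials), not the exact steps I followed in the UI:

# Hypothetical NSX Manager FQDN and admin credentials
$nsxMgr = "nsx-manager.lab.local"
$cred = Get-Credential
# Look up the transport node IDs of the two Edge VMs
$tns = (Invoke-RestMethod -Method Get -Uri "https://$nsxMgr/api/v1/transport-nodes" -Authentication Basic -Credential $cred -SkipCertificateCheck).results
$members = $tns | Where-Object { $_.display_name -in "edge-01","edge-02" } | ForEach-Object { @{ transport_node_id = $_.id } }
# Create the Edge cluster with both Edge nodes as members
$body = @{ display_name = "edge-cluster-01"; members = @($members) } | ConvertTo-Json -Depth 5
Invoke-RestMethod -Method Post -Uri "https://$nsxMgr/api/v1/edge-clusters" -Authentication Basic -Credential $cred -SkipCertificateCheck -ContentType "application/json" -Body $body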


The NSX-T Edge Cluster is now ready. Next, we have to add a Tier-0 Gateway and configure BGP peering with the router or L3 TOR switches. This will be covered in the next part. Hope it was useful. Cheers!

Related posts


Part1: Prerequisites
Part2: Configure NSX-T

Tuesday, December 8, 2020

vSphere with Tanzu using NSX-T - Part2 - Configure NSX

In this post, we will take a look at the configuration steps required before enabling workload management. These configurations establish connectivity to the Supervisor Cluster, Supervisor Namespaces, and all objects that run inside the namespaces, such as vSphere Pods and Tanzu Kubernetes clusters.

At this point, I assume that the vSAN cluster is up and NSX-T 3.0 is installed. The NSX-T appliance is connected to the same management network as the VCSA and ESXi nodes; in my case, this is VLAN 41. Note that all the ESXi nodes of the vSAN cluster are connected to one vSphere Distributed Switch, with two uplinks from each node connecting to TOR A and TOR B.


NSX-T configurations

  • Add Compute Managers. I've added the vCenter server here.
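
The same step can also be done via the NSX-T Manager API. Here is a minimal sketch, assuming PowerShell 7+ (for -SkipCertificateCheck), NSX admin credentials, and placeholder names and thumbprint for my lab vCenter:

$nsxMgr = "nsx-manager.lab.local"   # hypothetical NSX Manager FQDN
$cred = Get-Credential              # NSX-T admin credentials
$body = @{
    display_name = "vcsa"
    server       = "vcsa.lab.local"               # placeholder vCenter FQDN
    origin_type  = "vCenter"
    credential   = @{
        credential_type = "UsernamePasswordLoginCredential"
        username        = "administrator@vsphere.local"
        password        = "<vcenter-password>"
        thumbprint      = "<vcenter-sha256-thumbprint>"
    }
} | ConvertTo-Json -Depth 5
Invoke-RestMethod -Method Post -Uri "https://$nsxMgr/api/v1/fabric/compute-managers" -Authentication Basic -Credential $cred -SkipCertificateCheck -ContentType "application/json" -Body $body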


  • Add Uplink Profiles.
    • Create a host uplink profile "nsx-uplink-profile-lbsrc" (this is for host TEP using VLAN 52).
    • Create an edge uplink profile "nsx-edge-uplink-profile-lbsrc" (this is for edge TEP using VLAN 53).
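
As a reference, an equivalent API call for the host uplink profile might look like the sketch below (PowerShell 7+, placeholder NSX Manager FQDN; LOADBALANCE_SRCID corresponds to the "Load Balance Source" teaming implied by the "lbsrc" naming). The edge uplink profile would be the same call with transport_vlan 53:

$nsxMgr = "nsx-manager.lab.local"; $cred = Get-Credential   # placeholders
$body = @{
    resource_type  = "UplinkHostSwitchProfile"
    display_name   = "nsx-uplink-profile-lbsrc"
    transport_vlan = 52
    teaming        = @{
        policy      = "LOADBALANCE_SRCID"
        active_list = @(
            @{ uplink_name = "uplink-1"; uplink_type = "PNIC" },
            @{ uplink_name = "uplink-2"; uplink_type = "PNIC" }
        )
    }
} | ConvertTo-Json -Depth 5
Invoke-RestMethod -Method Post -Uri "https://$nsxMgr/api/v1/host-switch-profiles" -Authentication Basic -Credential $cred -SkipCertificateCheck -ContentType "application/json" -Body $body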



  • Add Transport Zones.
    • Create an overlay transport zone "tz-overlay".
    • Create a VLAN transport zone "edge-vlan-tz".
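
A hedged API equivalent for the overlay transport zone is shown below; the VLAN transport zone is the same call with transport_type "VLAN" and display_name "edge-vlan-tz":

$nsxMgr = "nsx-manager.lab.local"; $cred = Get-Credential   # placeholders
$body = @{ display_name = "tz-overlay"; transport_type = "OVERLAY" } | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri "https://$nsxMgr/api/v1/transport-zones" -Authentication Basic -Credential $cred -SkipCertificateCheck -ContentType "application/json" -Body $body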




  • Add IP Address Pools.
    • Create an IP address pool for host TEPs "TEP-Pool-01" (this is for host TEP using VLAN 52).
    • Create an IP address pool for edge TEPs "Edge-TEP-Pool-01" (this is for Edge TEP using VLAN 53).
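
And a sketch of creating the host TEP pool via the API; the subnet values below are hypothetical, so substitute your actual VLAN 52 addressing (the edge TEP pool is the same call with the VLAN 53 subnet):

$nsxMgr = "nsx-manager.lab.local"; $cred = Get-Credential   # placeholders
$body = @{
    display_name = "TEP-Pool-01"
    subnets      = @(
        @{
            cidr              = "192.168.52.0/24"      # hypothetical subnet for VLAN 52
            gateway_ip        = "192.168.52.1"
            allocation_ranges = @( @{ start = "192.168.52.11"; end = "192.168.52.50" } )
        }
    )
} | ConvertTo-Json -Depth 5
Invoke-RestMethod -Method Post -Uri "https://$nsxMgr/api/v1/pools/ip-pools" -Authentication Basic -Credential $cred -SkipCertificateCheck -ContentType "application/json" -Body $body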



  • Add Transport Node Profiles.


  • Configure Host Transport Nodes. Select the required cluster and click CONFIGURE NSX to convert all the ESXi nodes into transport nodes.
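
Once the cluster preparation completes, you can quickly verify the transport nodes from the API as well; a minimal sketch (PowerShell 7+, placeholder NSX Manager FQDN):

$nsxMgr = "nsx-manager.lab.local"; $cred = Get-Credential   # placeholders
# List all transport nodes (the ESXi hosts, and later the Edge nodes)
$tns = (Invoke-RestMethod -Method Get -Uri "https://$nsxMgr/api/v1/transport-nodes" -Authentication Basic -Credential $cred -SkipCertificateCheck).results
$tns | Select-Object display_name, id
# Check the realization state of one of them
Invoke-RestMethod -Method Get -Uri "https://$nsxMgr/api/v1/transport-nodes/$($tns[0].id)/state" -Authentication Basic -Credential $cred -SkipCertificateCheck | Select-Object state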

The next step is to deploy Edge VMs (Edge Transport Nodes) and create an Edge Cluster. We will cover that in the next part. Hope it was useful. Cheers!

Related posts


Part1 - Prerequisites




Monday, December 7, 2020

vSphere with Tanzu using NSX-T - Part1 - Prerequisites

In this blog series, I would like to share my experience deploying vSphere with Tanzu using NSX-T 3.0 networking. At a very high level, the workflow of setting up the environment from scratch is as follows:

  • Deploy and configure the vSAN cluster (VCSA, ESXi, VDS).
  • Deploy the NSX-T 3.0 appliance and add the vCenter server as a compute manager.
  • Configure NSX-T: uplink profiles, transport zones, TEP IP pools, and host transport nodes.
  • Deploy Edge nodes and create an Edge cluster.
  • Create a Tier-0 gateway and configure BGP peering with the TOR switches.
  • Enable workload management on the cluster.

Software versions used for this study:
  • vSphere 7 U1
  • NSX-T 3.0.1.1

If you are using VCF, some of the deployment steps mentioned above are already automated. However, VCF is not in the scope of this blog series; this post is aimed at explaining the workflow and background configurations required to prepare the environment before enabling workload management. Understanding these backend configuration tasks and the logical workflow is also helpful for troubleshooting.

In my lab, I have 4 Dell EMC PowerEdge rack servers connected to 2 TOR switches, with 2 uplinks from each server. This is a consolidated architecture where both the management components and the application workloads run on the same 4-node vSAN cluster. I am using ESXi 7 U1 and VCSA 7 U1. The TORs act as L3 switches, and we do BGP peering between the TOR switches and the NSX-T Tier-0 Gateway, which will be explained in a later part of this blog series.

Network connectivity



IP address schema and VLANs




I will not be covering how to create VLANs, configure switch ports, deploy a vSAN cluster, etc. I assume that the network switches are correctly configured and the vSAN cluster is up and running. The next step is to deploy the NSX-T 3.0 appliance. You can deploy it either as a single node or in HA mode using 3 NSX-T appliances. In my lab, I have only one NSX-T 3.0 instance.
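
If you want to script the appliance deployment instead of using the vSphere UI, a rough PowerCLI sketch is below. The path, host, and VM names are placeholders, and the OVF properties you must fill in (management IP, gateway, DNS, passwords, role, etc.) should be inspected from the OVA itself:

Connect-VIServer <IP of vCenter server>
$ova    = "/iso/nsx-unified-appliance-3.0.1.1.ova"    # placeholder path to the NSX-T OVA
$vmhost = Get-VMHost "esxi01.lab.local"               # placeholder ESXi host
$ds     = Get-Datastore "vsanDatastore"
$ovfCfg = Get-OvfConfiguration -Ovf $ova
$ovfCfg.ToHashTable()                                 # inspect the required OVF properties
# ...set the OVF properties for your environment before importing...
Import-VApp -Source $ova -OvfConfiguration $ovfCfg -Name "nsx-manager-01" -VMHost $vmhost -Datastore $ds -DiskStorageFormat Thin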

Friday, October 23, 2020

VMware PowerCLI 101 - part8 - Working with vSAN

This article explains how to work with vSAN resources using PowerCLI. 

Note: I am using the following versions:
PowerShell: 5.1.14393.3866
VMware PowerCLI: 12.1.0.17009493


Connect to vCenter:
Connect-VIServer <IP of vCenter server>

List all vSAN get cmdlets:
Get-Command Get-Vsan*


vSAN runtime info:
$c = Get-Cluster Cluster01
Get-VsanRuntimeInfo -Cluster $c


vSAN space usage:
Get-VsanSpaceUsage
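
You can also scope it to a specific cluster and pick individual properties; the property names below are what I see in my setup, so verify them with Get-Member first:

Get-VsanSpaceUsage -Cluster (Get-Cluster Cluster01) | Get-Member
Get-VsanSpaceUsage -Cluster (Get-Cluster Cluster01) | select Cluster, CapacityGB, FreeSpaceGB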


vSAN cluster configuration:
Get-VsanClusterConfiguration


vSAN disk details:
Get-VsanDisk


View all properties of a disk:
(Get-VsanDisk)[31] | select *


View disk vendor, model, firmware revision, physical location, operational state:
(Get-VsanDisk)[31].ExtensionData


vSAN disk group details:
Get-VsanDiskGroup


Get all properties of a disk group:
(Get-VsanDiskGroup)[0] | select *

Thursday, July 9, 2020

Tanzu Kubernetes Grid (TKG) on vSphere 6.7 U3 - Part3

In this blog, I will explain how to deploy an FIO application pod with persistent storage on your Tanzu Kubernetes workload cluster.

Step 1: Deploy a K8s workload cluster

tkg create cluster <cluster name> --plan=dev


Now the workload K8s cluster is deployed with a master node, a load balancer, and a worker node.
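
To start using the new cluster from the bootstrap VM, something along these lines should work with the TKG 1.x CLI (the admin context name may differ slightly in your setup; list the available contexts first):

tkg get credentials <cluster name>
kubectl config get-contexts
kubectl config use-context <cluster name>-admin@<cluster name>
kubectl get nodes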


Tuesday, June 23, 2020

Tanzu Kubernetes Grid (TKG) on vSphere 6.7 U3 - Part1


TKG is an enterprise-ready Kubernetes runtime that provides a consistent, upstream-compatible implementation of Kubernetes, tested, signed, and supported by VMware.

Installation

I am using a 3-node vSAN cluster running vSphere 6.7 U3 to deploy TKG. The first step is to prepare a VM that will be used to kickstart the deployment process. Here I am using a CentOS 7 VM with a desktop UI. Download the TKG CLI, the TKG Kubernetes OVA, and the Load Balancer OVA from the following link:


I am using the following versions:
  • VMware Tanzu Kubernetes Grid CLI 1.1 Linux
  • VMware Tanzu Kubernetes Grid 1.1.0 Kubernetes v1.18.2 OVA
  • VMware Tanzu Kubernetes Grid 1.1 Load Balancer OVA

Unzip and install TKG CLI on the CentOS VM.
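
The install itself is just placing the binary on the PATH; a rough sketch is below (the downloaded file name is a placeholder, use whatever you actually downloaded):

gunzip tkg-linux-amd64-v1.1.0-vmware.1.gz
chmod +x tkg-linux-amd64-v1.1.0-vmware.1
sudo mv tkg-linux-amd64-v1.1.0-vmware.1 /usr/local/bin/tkg
tkg version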

Saturday, April 18, 2020

vSAN performance benchmarking

In this article, I will briefly explain performance benchmarking considerations, factors affecting performance, and some of the best practices. We do performance benchmarking to understand the capabilities and bottlenecks of a system; the system could be a storage system, CPU, GPU, network switch, etc. Now let's consider a VMware vSAN cluster infrastructure. It includes multiple components, and each of them contributes to the overall performance. In this case, the vSAN cluster is the solution under test, and we have to conduct performance benchmarking to understand its storage performance behavior, which includes the IOPS, latency, and throughput the cluster can deliver under varying loads.

The goal of benchmarking
  • Identify bottlenecks
    • Hardware bottleneck
    • Software bottleneck
    • Application bottleneck
  • Compare tradeoffs
  • Manage expectations
  • Make decisions

Usually, in a real-world scenario, benchmarking is done once the cluster is deployed and ready, and before starting to host production workloads on top of it. As these benchmark values define the performance maximums, they help you decide when to scale or upgrade the cluster before it hits a bottleneck.

Fundamental factors of vSAN performance

Server hardware
  • Compatibility as per vSAN HCL
Host
  • Number of hosts in the cluster
  • Power settings
  • CPU - number of cores and frequency 
Storage
  • Hybrid or All-flash
  • NVMe, SAS, or SATA
  • Number of disk groups per host
  • Storage controller configuration
  • Compatibility of hardware devices as per vSAN HCL
Network
  • 10/25/40 GbE
  • MTU 
  • LAG
SPBM policy
  • FTT (Failures To Tolerate)
  • FTM (Mirroring / Erasure Coding)
  • Thin or Thick provision
Security
  • Encryption
  • Checksum
Other
  • Stripe width
  • Flash read cache reservation
  • IOPS limit for object
All of the above factors affect performance, so you should understand the benefits and tradeoffs of each. A PowerCLI sketch for defining a vSAN storage policy to use during benchmarking is shown below.
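
This is only a minimal sketch, assuming an existing Connect-VIServer session; the policy name is made up, and you should confirm the exact vSAN capability names in your environment with Get-SpbmCapability -Name "VSAN.*":

# Example policy: FTT=1 (mirroring) with a stripe width of 2
$rules = @(
    New-SpbmRule -Capability (Get-SpbmCapability -Name "VSAN.hostFailuresToTolerate") -Value 1
    New-SpbmRule -Capability (Get-SpbmCapability -Name "VSAN.stripeWidth") -Value 2
)
New-SpbmStoragePolicy -Name "vsan-bench-ftt1-sw2" -AnyOfRuleSets (New-SpbmRuleSet -AllOfRules $rules)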

Benchmarking methodology

(Benchmarking methodology flow diagram. Image credit: VMware)

Storage benchmarking tools

IO load generation tools
Application-specific tools
  • HammerDB (MSSQL, Oracle)
  • Jetstress (MS Exchange)
  • SLOB (Oracle)
  • DBGen (MSSQL, Oracle)

Best practices

  • Understand the production performance metrics.
  • Test what you plan to deploy.
  • Workload modeling.
  • Plan for use case testing.
  • Choose an appropriate size for benchmarking.
  • Choose the right tool.
  • Pre-allocate blocks while testing.
  • Test for a longer time duration.
  • Deploy multiple VMs with multiple VMDKs.
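
To illustrate a few of these points (file-based so the test files are laid out before the measured run, time-based with a longer runtime, and multiple jobs per VM), a fio command could look like the sketch below; the block size, read/write mix, and queue depth are placeholders you should model on your production workload:

fio --name=vsan-bench --ioengine=libaio --direct=1 --rw=randrw --rwmixread=70 \
    --bs=4k --iodepth=32 --numjobs=4 --size=20g \
    --time_based --runtime=1800 --group_reporting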


Friday, November 1, 2019

vRealize Operations Manager 7.5 - Part9 - Dashboard sharing

The ability to share dashboards using URLs without login requirements is a very useful feature starting from vROps 7.0.

  • Select the dashboard that you want to share and click on the share icon as marked in the screenshot below.



  • Click on "COPY LINK" and provide it to whoever necessary. You can also set link expiry to 1 Day, 1 Week, 1 Month, 3 Months, or Never Expire as per your requirement.

  • To unshare a link, enter the link that you want to unshare as shown below and click "UNSHARE".

Hope it was useful. Cheers!

