
Saturday, December 10, 2022

vSphere with Tanzu using NSX-T - Part21 - Pointers while upgrading the stack

Here are some pointers to keep in mind while upgrading the entire vSphere with Tanzu stack.

The main workflow while upgrading the entire stack is:

  1. upgrade NSX-T
  2. upgrade vCenter server
  3. upgrade ESXi nodes
  4. upgrade vSAN and vDS (if required)
  5. upgrade WCP

Note: Make sure you follow the compatibility matrix while performing upgrades in a production environment.

NSX-T

While NSX-T is being upgraded, the host transport nodes also need to be updated. NSX components will be installed/ updated on the ESXi nodes, and this may also require a reboot.

In NSX-T: System > Lifecycle Management > Upgrade > Upgrade NSX > Hosts

Plan

  • Upgrade order across groups - Serial
  • Pause upgrade condition - When an upgrade unit fails to upgrade

Host Groups

  • Upgrade order within group - Serial
  • Upgrade mode - Maintenance

Notes: 

  • There is an In-place upgrade mode for the host groups. In that mode the NSX components on the ESXi node are updated first, and only then is the node placed in maintenance mode (MM) and rebooted. We have had cases where the NSX component update failed on an ESXi node and the workload/ TKC VMs running on it lost network connectivity. Fixing this requires a reboot of the ESXi node, but while trying to put the node in MM the TKC VMs running on it fail to migrate, leaving a forced reboot of the ESXi as the only option, which affects the workloads running on it. To avoid this, the safest choice is the Maintenance upgrade mode, so that the TKC VMs are migrated off the ESXi node before the NSX component is updated; even if the component update fails, it is safe to reboot the node because it is already in MM.
  • We have also noticed cases where an ESXi node in a WCP cluster fails to enter MM. Some common causes are listed below (see the kubectl sketch after this list):
    • Orphaned or inaccessible VMs are one of the main causes. You can follow this article to find and clean them up.
    • In some cases powered-off VMs (vSphere pods) were present on the ESXi host, and the host was stuck at 100% while entering maintenance mode. Check those VMs and delete them if required to proceed further.
    • There are cases where TKC VMs present on the ESXi host fail to migrate. Check whether these VMs still exist at the Kubernetes layer; if not, delete them.
    • Check whether any TKC nodes are still present on the ESXi host. A TKC node may be unable to migrate to the other hosts in the cluster due to resource constraints; check whether it is using a guaranteed vmclass. The solution is to find unused resources/ TKCs in the cluster, check with the owner/ user, and delete them if they are no longer needed. Alternatively, check with the user and change the guaranteed vmclass to besteffort if possible, or temporarily reduce the number of worker nodes if the user agrees.

  • Verify whether any pods are running on the respective ESXi worker node, and safely drain the node before proceeding:
    kubectl get pods -A -o wide | grep ESXi-FQDN
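
A minimal sketch of these checks from the supervisor cluster is shown below. The host FQDN is a placeholder for your environment, and whether you are allowed to cordon/ drain supervisor worker nodes manually depends on your setup and permissions:

    # list pods still scheduled on the host (replace with the actual ESXi FQDN)
    kubectl get pods -A -o wide | grep esxi01.example.com

    # list TKCs and their VMs to spot nodes that have not migrated off the host
    kubectl get tanzukubernetescluster -A
    kubectl get virtualmachines -A -o wide

    # cordon and drain the node before the NSX component update/ reboot
    kubectl cordon esxi01.example.com
    kubectl drain esxi01.example.com --ignore-daemonsets --delete-emptydir-data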

Hope it was useful. Cheers!

Sunday, May 30, 2021

vSphere with Tanzu using NSX-T - Part8 - Create namespace and deploy Tanzu Kubernetes Cluster

In the previous posts we discussed the following:

vSphere with Tanzu using NSX-T - Part1 - Prerequisites

vSphere with Tanzu using NSX-T - Part2 - Configure NSX

vSphere with Tanzu using NSX-T - Part3 - Edge Cluster

vSphere with Tanzu using NSX-T - Part4 - Tier-0 Gateway and BGP peering

vSphere with Tanzu using NSX-T - Part5 - Tier-1 Gateway and Segments

vSphere with Tanzu using NSX-T - Part6 - Create tags, storage policy, and content library

vSphere with Tanzu using NSX-T - Part7 - Enable workload management


Now that we have enabled workload management, the next step is to create namespaces on the supervisor cluster and set resource quotas as per requirements. The vSphere administrator can then give developers access to these namespaces, where they can deploy Tanzu Kubernetes clusters, VMs, or vSphere pods.

  • Create namespace.

  • Select the cluster and provide a name for the namespace.

  • Now the namespace is created successfully. Before handing over this namespace to the developer, you can set permissions, assign storage policies, and set resource limits.

Let's have a look at the NSX-T components that are instantiated when we created a new namespace.
  • A new segment is now created for the newly created namespace. This segment is connected to the T1 Gateway of the supervisor cluster.

  • A SNAT rule is also now in place on the supervisor cluster T1 Gateway. This helps the Kubernetes objects residing in the namespace to reach the external network/ internet. It uses the egress range 192.168.72.0/24 that we provided during the workload management configuration for address translation.

We can now assign a storage policy to this newly created namespace.

  • Click on Add Storage and select the storage policy. In my case, I am using Tanzu Storage Policy which uses a vsanDatastore.

Let's apply some capacity and usage limits for this namespace. Click edit limits and provide the values.


Let's set user permissions to this newly created namespace. Click add permissions.


Now we are ready to hand over this new namespace to the dev user (John).


Under the first tile you can see a Copy link option. Provide this link to the dev user, who can open it in a web browser to download the CLI tools needed to connect to the newly created namespace.


Download and install the CLI tools. In my case, CLI tools are installed on a CentOS 7.x VM. You can also see the user John has connected to the newly created namespace using the CLI.
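
For example, a minimal login sketch (the supervisor cluster address and the username john@vsphere.local are placeholders for your environment):

    kubectl vsphere login --server=<supervisor-cluster-ip> \
      --vsphere-username john@vsphere.local \
      --insecure-skip-tls-verify

    # switch to the context created for the namespace by the login plugin
    kubectl config use-context ns-01-dev-john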


The user can now verify the resource limits of the namespace using kubectl.
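
For example (a sketch; the exact resource quota object names created by WCP may vary):

    kubectl get resourcequota -n ns-01-dev-john
    kubectl describe resourcequota -n ns-01-dev-john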


You can see the following limits:
  • cpu-limit: 21.818
  • memory-limit: 131072Mi
  • storage: 500Gi
Storage is limited to 500 GB and memory to 128 GB, which is very straightforward. We (the vSphere admin) had set the CPU limit to 48 GHz, and what you see here is a cpu-limit of 21.818 CPU cores for this namespace. To give some more background on this calculation: each ESXi host used in this study has 20 physical cores and a total CPU capacity of 44 GHz, and there are 4 such hosts in the cluster. The computing power of one physical core is therefore (44/ 20) = 2.2 GHz, so to limit the CPU to 48 GHz, the number of CPU cores should be limited to (48/ 2.2) = 21.818.

Apply the following cluster definition YAML file to create a Tanzu Kubernetes cluster under the ns-01-dev-john namespace.

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-01
  namespace: ns-01-dev-john
spec:
  topology:
    controlPlane:
      count: 3
      class: guaranteed-medium
      storageClass: tanzu-storage-policy
    workers:
      count: 3
      class: guaranteed-xlarge
      storageClass: tanzu-storage-policy
  distribution:
    version: v1.18.15
  settings:
    network:
      services:
        cidrBlocks: ["198.32.1.0/12"]
      pods:
        cidrBlocks: ["192.1.1.0/16"]
      cni:
        name: calico
    storage:
      defaultClass: tanzu-storage-policy
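
Assuming the manifest above is saved as tkg-cluster-01.yaml (the filename is arbitrary), it can be applied and monitored as follows:

    kubectl apply -f tkg-cluster-01.yaml

    # watch the cluster and its node VMs come up
    kubectl get tanzukubernetescluster -n ns-01-dev-john
    kubectl get virtualmachines -n ns-01-dev-john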


Log in to the Tanzu Kubernetes cluster directly using the CLI and verify.
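
A sketch of the direct login using the vSphere plugin for kubectl (server address and username are placeholders):

    kubectl vsphere login --server=<supervisor-cluster-ip> \
      --vsphere-username john@vsphere.local \
      --tanzu-kubernetes-cluster-name tkg-cluster-01 \
      --tanzu-kubernetes-cluster-namespace ns-01-dev-john \
      --insecure-skip-tls-verify

    kubectl config use-context tkg-cluster-01
    kubectl get nodes -o wide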


You can see the corresponding VMs in the vCenter UI.


Now, let's have a look at the NSX-T side.
  • A Tier-1 Gateway is now available with a segment linked to it.


  • You can see a server load balancer with one virtual server that provides access to the KubeAPI (6443) of the Tanzu Kubernetes cluster we just deployed (see the kubectl sketch after this list).


  • You can also find a SNAT rule. This helps the Tanzu Kubernetes cluster objects to reach the external network/ internet. It uses the egress range 192.168.72.0/24 that we provided during the workload management configuration for address translation.
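
You can correlate these NSX-T objects from the Kubernetes side as well, for example by listing the services in the namespace (a sketch; the control plane service name is generated by WCP and will vary):

    kubectl get services -n ns-01-dev-john
    # the control plane service of type LoadBalancer exposes port 6443, and its
    # external IP maps to the NSX-T server load balancer virtual server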

Note: This architecture is based on vSphere 7 U1; there are changes in newer versions. With vSphere 7 U1c the architecture changed from a per-TKG-cluster Tier-1 Gateway model to a per-Supervisor-namespace Tier-1 Gateway model. For more details, refer to the blog series published by Harikrishnan T @hari5611.

In the next part we will discuss monitoring aspects of the vSphere with Tanzu environment and Tanzu Kubernetes clusters. I hope this was useful. Cheers!

Sunday, April 18, 2021

vSphere with Tanzu using NSX-T - Part7 - Enable workload management

In the previous posts we discussed the following: 

Part1: Prerequisites

Part2: Configure NSX-T

Part3: Edge Cluster

Part4: Tier-0 Gateway and BGP peering

Part5: Tier-1 Gateway and Segments

Part6: Create tags, storage policy, and content library


We are all set to configure and enable workload management. Before stepping into the configurations I just want to give an overall picture of vSphere with Tanzu architecture and different components. 


Once you enable workload management, the vSphere cluster is transformed into a supervisor cluster. The supervisor cluster consists of 3 supervisor control plane VMs and the ESXi hosts, which also act as worker nodes. You can then run traditional VMs and containers side by side: containers can run as native vSphere pods directly on the ESXi hosts, or you can deploy Tanzu Kubernetes clusters in VM form factor in a vSphere namespace and run container workloads on them.

Following are the steps to enable workload management:

  • Log in to vCenter - Menu - Workload Management.
  • Click Get started.
  • Select NSX-T and click next.

  • Select the cluster.

  • Select a size and click next.

  • Select the storage policy and click next.

  • Provide management network details and click next.

  • Provide workload network details and click next.

  • Add the content library and click next.

  • Click finish.

  • This process will take a few minutes to configure and bring up the supervisor cluster. In my case, it took around 30 minutes to complete.
  • You can see the progress in the vCenter UI.



  • You can now see the supervisor control plane VMs are deployed.
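
Once you log in to the supervisor cluster with the vSphere plugin for kubectl (covered in the next part), a quick way to verify this is to list the nodes; a sketch:

    kubectl get nodes -o wide
    # expect 3 SupervisorControlPlaneVM nodes plus one node per ESXi host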




Workload management is now enabled and the vSphere cluster is transformed to a supervisor cluster. Let's have a look at the objects that are automatically created in NSX-T.
  • You can see a T1 Gateway is now provisioned.

  • Multiple segments are now created corresponding to each namespace inside the supervisor control plane.

  • Multiple SNAT rules are also now in place for the newly created T1 Gateway, which helps the control plane Kubernetes objects residing in their corresponding namespaces to reach the external network/ internet. It uses the egress range 192.168.72.0/24 that we provided during the workload management configuration for address translation.

  • You can also see two load balancers attached to the T1 Gateway:
    • Distributed load balancer: all services of type ClusterIP are implemented as distributed load balancer virtual servers. This is for east-west traffic.
    • Server load balancer: all services of type LoadBalancer are implemented as server load balancer L4 virtual servers, and all ingress is implemented as L7 virtual servers.

  • Under the server load balancer, you can see two virtual servers. One for the KubeAPI (6443) and the other for downloading the CLI tools (443) to access the cluster.

Note that this newly created T1 Gateway (domain-c8:6ea515f0-39da-431b-93bf-0d6a5e4a0f77) is connected to the T0 Gateway for external connectivity through BGP.
 
The next step is to create namespaces, on which you can then create Tanzu Kubernetes clusters. Usually, the vSphere administrator creates namespaces for developers and provides access so that they can deploy TKG clusters, vSphere pods, or VMs in their respective namespaces. We will cover all these in the next part.

Hope it was useful. Cheers!

Monday, March 29, 2021

vSphere with Tanzu using NSX-T - Part6 - Create tag, VM storage policy, and content library

In the previous posts we discussed the following: 

Part1: Prerequisites

Part2: Configure NSX-T

Part3: Edge Cluster

Part4: Tier-0 Gateway and BGP peering

Part5: Tier-1 Gateway and Segments


At this stage, we are almost done with the networking/ NSX-T related prerequisites. Before getting into the Workload Management configuration we need to create a tag, assign the tag to the datastore, create a VM storage policy, and create a content library.

Create and assign a tag

  • Select the datastore. Click Assign.
  • Add Tag.
  • Provide a name, and click on create new category.
  • Provide a category name and click create.
  • Click create.
  • Now a tag is created and you can assign it to the vSAN datastore.
  • Select the tag and click assign.
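
If you prefer the CLI, the same tag creation and assignment can be scripted with govc. This is just a sketch, assuming govc is installed and configured with your vCenter credentials; the datacenter name DC01 and the datastore path are placeholders:

    # create the category and tag, then attach the tag to the vSAN datastore
    govc tags.category.create Tanzu
    govc tags.create -c Tanzu Tanzu
    govc tags.attach Tanzu /DC01/datastore/vsanDatastore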

Create VM storage policy with tag based placement

  • Provide a name and click next.
  • Select tag based placement and click next.
  • Click the tag category drop-down and select "Tanzu", then click browse tags, select "Tanzu", and click next.
  • Select the compatible datastore, vSAN in this case, and click next.
  • Click finish.
  • You can now see the newly created VM storage policy.

Create content library

  • Click create.
  • Provide the details and click next.
  • Provide the subscription URL and click next.
  • Accept the SSL thumbprint by clicking yes. Select the vSAN datastore as the storage location for library contents, and click next.
  • Click finish.
  • You can now see the newly created content library.

All the prerequisites are done now. The next step is to configure workload management which will be covered in the next part. Hope it was useful. Cheers!