
Friday, August 15, 2025

Understanding NUMA: Its Impact on VM Performance in ESXi

VMware ESXi hosts use Non-Uniform Memory Access (NUMA) architecture to optimize CPU and memory locality. Each NUMA node consists of a subset of CPUs and memory. Accessing local memory within the same NUMA node is significantly faster than remote memory access. Misaligned NUMA configurations can lead to latency spikes, increased CPU Ready Time, and degraded VM performance.


Key symptoms

The common symptoms for Virtual Machines (VMs) on ESXi with a misconfigured or misaligned Non-Uniform Memory Access (NUMA) configuration primarily manifest as performance degradation and increased latency. The main issue caused by NUMA misalignment is that the VM's vCPUs frequently end up accessing memory that belongs to a different physical NUMA node on the ESXi host (known as remote access), which is significantly slower than accessing local memory.

The resulting symptoms for the VM include:

  • Overall Slowness and Unresponsiveness: Services and applications running inside the guest OS may respond slowly or intermittently. The entire VM can feel sluggish.

  • High CPU Ready Time (%RDY): This is the most critical ESXi-level metric. CPU Ready Time represents the percentage of time a VM was ready to run but could not be scheduled on a physical CPU. High %RDY values (often above 5% or 10%) can indicate that the VM's vCPUs are struggling to get scheduled efficiently, which can happen when they are spread across multiple NUMA nodes (NUMA spanning).

  • Excessive Remote Memory Access: When a VM consumes more vCPUs or memory than is available on a single physical NUMA node, a portion of its memory traffic becomes "remote." You can check this using the esxtop utility on the ESXi host.

Common misconfigurations


Misalignment often occurs when the VM's vCPU and memory settings exceed the resources of a single physical NUMA node on the host. Common causes include the following (a quick PowerCLI audit is sketched after this list):

  • Over-Sized VM: Allocating more vCPUs than the physical cores available in a single physical NUMA node or allocating more memory than the physical memory on a single NUMA node.
  • Hot-Add Features: Enabling CPU Hot-Add or Memory Hot-Add can disable vNUMA (Virtual NUMA) for the VM, preventing the VMkernel from presenting an optimized NUMA topology to the guest OS.
  • Incorrect Cores per Socket Setting: While vSphere 6.5 and later are smarter about vNUMA, configuring the Cores per Socket value manually in a way that doesn't align with the host's physical NUMA topology can still lead to poor scheduling and memory placement, particularly when licensing dictates a low number of virtual sockets.
  • Setting VM Limits: Setting a memory limit on a VM that is lower than its configured memory can force the VMkernel to allocate the remaining memory from a remote NUMA node.
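
As a quick audit, a PowerCLI sketch like the following can flag oversized VMs and VMs with hot-add enabled. This is a minimal sketch, assuming an active vCenter connection; <cluster name> is a placeholder and the hot-add flags come from the VM's vSphere API config data:

Get-Cluster <cluster name> | Get-VM | Select Name, NumCpu, MemoryGB,
    @{N="CpuHotAdd";E={$_.ExtensionData.Config.CpuHotAddEnabled}},
    @{N="MemHotAdd";E={$_.ExtensionData.Config.MemoryHotAddEnabled}} |
    Sort-Object NumCpu -Descending

Compare NumCpu and MemoryGB against the core count and memory of a single physical NUMA node on your hosts, and review any VM that reports True for the hot-add flags.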

Check NUMA assignments in ESXi

  • SSH into the ESXi node.
  • Issue the esxtop command and press m for the memory view, then press f to open the field selection and G to enable the NUMA statistics fields.
  • You should be able to view the NUMA related information like NRMEM, NLMEM, and N%L.
    • NRMEM (MB): NUMA Remote MEMory
      • This is the current amount of a VM's memory (in MB) that is physically located on a remote NUMA node relative to where the VM's vCPUs are currently running.
      • High NRMEM indicates NUMA locality issues, meaning the vCPUs must cross the high-speed interconnect (like Intel's QPI/UPI or AMD's Infinity Fabric) to access some of their data, which results in slower performance.
    • NLMEM (MB): NUMA Local MEMory
      • This is the current amount of a VM's memory (in MB) that is physically located on the local NUMA node, meaning it's on the same physical node as the vCPUs accessing it.
      • The ESXi NUMA scheduler's goal is to maximize NLMEM to ensure fast memory access.
    • N%L: NUMA % Locality
      • This is the percentage of the VM's total memory that resides on the local NUMA node.
      • A value close to 100% is ideal, indicating excellent memory locality. If this value drops below 80%, the VM may experience poor NUMA locality and potential performance issues due to slower remote memory access.
  • Issue the esxtop command and press v to see the virtual machine screen.
  • From the virtual machine screen note down the GID of the VM under consideration, and press q to exit the screen.
  • Now issue the sched-stats -t numa-clients command. This lists the NUMA client details for the VMs on the host. Check the groupID column to match the GID of the VM.
  • For example, the GID of the VM I am looking at is 7886858. This is a 112 vCPU VM running on an 8-socket physical host.

  • You can see the VM is spread across NUMA nodes 0, 1, 2, and 3.
  • The remoteMem is 0 for each of these NUMA nodes, which means the vCPUs are accessing only local memory on their respective NUMA nodes.
  • To view the physical NUMA details of the ESXi host you can use the sched-stats -t numa-pnode command (a PowerCLI alternative is sketched after this list). You can see this server has 8 NUMA nodes.
  • To view the NUMA latency, you can use the sched-stats -t numa-latency command.
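
If you prefer PowerCLI to SSH for the host-side picture, the physical NUMA node count and core layout can also be pulled remotely. A minimal sketch, assuming an active vCenter connection; <ESXi FQDN> is a placeholder and esxcli hardware memory get is the same namespace the host CLI exposes:

$esxcli = Get-EsxCli -VMHost <ESXi FQDN> -V2
# Includes the NUMA Node Count of the host
$esxcli.hardware.memory.get.Invoke()
# Physical packages/cores/threads, to estimate cores per NUMA node
(Get-VMHost <ESXi FQDN>).ExtensionData.Hardware.CpuInfo | Select NumCpuPackages, NumCpuCores, NumCpuThreads

Dividing NumCpuCores by the NUMA node count gives a rough figure for the physical cores available per pNUMA node, which is the number to compare VM vCPU counts against.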

Verify NUMA node details at guest OS


Windows
  • The easiest way is to go to Task Manager - Performance - CPU
    • Right click on the CPU utilization graph and select Change graph to - NUMA nodes
    • If there is only one NUMA node, you may notice the option is greyed out.
  • To get detailed info you can consider using the Sysinternals utility coreinfo64.

Linux
  • To view NUMA related details from the Linux guest OS layer, you can use the following commands:
lscpu | grep -i NUMA
dmesg | grep -i NUMA

Remediation


The most common remediation steps for fixing Non-Uniform Memory Access (NUMA) related performance issues in ESXi VMs revolve around right-sizing the VM to align its resources with the physical NUMA boundaries of the host.

The primary goal is to minimize Remote Memory Access (NRMEM) and maximize Local Memory Access (N%L). The vast majority of NUMA issues stem from a VM's resource allocation crossing a physical NUMA node boundary.

  • Right-Size VMs: Keep vCPU count within physical cores of a single NUMA node.
  • Evenly Divide Resources: For monster/wide VMs, ensure the total vCPU count divides evenly across the physical NUMA nodes the VM spans.
    • Example: If a VM needs 16 vCPUs on a host with 12-core NUMA nodes, configure the vCPUs so they split evenly across NUMA nodes (e.g., 2 sockets x 8 cores per socket to create 2 vNUMA nodes, aligning with 2 pNUMA nodes); a PowerCLI sketch follows this list.
  • Cores per Socket Setting (Important for older vSphere/Licensing): While vSphere 6.5 and later automatically present an optimal vNUMA topology, you should still configure the Cores per Socket setting on the VM to create a vNUMA structure that aligns with the physical NUMA boundaries of the host. This helps the guest OS make better scheduling decisions.
  • Disable VM CPU/ Memory Hot-Add: Plan capacity upfront.
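
For example, the 16 vCPU case above could be reshaped into 2 sockets x 8 cores per socket and have hot-add switched off with a VirtualMachineConfigSpec. This is a minimal sketch, assuming the VM is powered off; <VM name> is a placeholder and the spec properties come from the vSphere API:

$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.NumCPUs = 16                  # total vCPUs
$spec.NumCoresPerSocket = 8         # 2 virtual sockets x 8 cores = 2 vNUMA nodes
$spec.CpuHotAddEnabled = $false     # hot-add can disable vNUMA, so keep it off
$spec.MemoryHotAddEnabled = $false
(Get-VM -Name <VM name>).ExtensionData.ReconfigVM($spec)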

NUMA awareness is critical for troubleshooting and optimizing VM performance on ESXi. Misconfigured NUMA placements can severely impact latency-sensitive workloads like databases and analytics. Regular checks at both the hypervisor and guest OS layers ensure memory locality, reduce latency, and improve efficiency.

Hope it was useful. Cheers!

Saturday, January 18, 2025

VMware PowerCLI 101 - part12 - Get uptime and boot time of ESXi nodes and VMs

You can use the following PowerCLI cmdlets to get the uptime and boot time of ESXi hosts and VMs.

  • Get ESXi boot time.
> Get-Cluster | Get-VMHost | Select Name, @{Name="Boot time";Expression={$_.ExtensionData.Runtime.BootTime}}

Name                                  Boot time
----                                  ---------
esxi1.wxxxxx.com                      2/27/2025 10:13:45 AM
esxi2.wxxxxx.com                      6/18/2025 2:39:38 PM

  • Get ESXi uptime.
> Get-Cluster | Get-VMHost | Select Name, @{Name="Uptime (Days)";Expression={(Get-Date) - $_.ExtensionData.Runtime.BootTime | Select-Object -ExpandProperty Days}}

Name                                  Uptime (Days)
----                                  -------------
esxi1.wxxxxx.com                      116
esxi2.wxxxxx.com                      5

  • Get VM boot time.
> Get-Cluster | Get-VM | Select Name, @{Name="Boot time";Expression={$_.ExtensionData.Runtime.BootTime}}

Name                                      Boot time
----                                      ---------
test-vineeth
rhel84-vineeth
VMware vCenter Server                     2/25/2025 11:28:13 AM
vCLS-50a1b507-5a1d-5e00-8410-7599fc72e5b5 6/18/2025 2:51:59  PM
  • Get VM uptime.
> Get-Cluster | Get-VM | Select Name, @{Name="Uptime (Days)";Expression={(Get-Date) - $_.ExtensionData.Runtime.BootTime | Select-Object -ExpandProperty Days}}

Name                                      Uptime (Days)
----                                      -------------
test-vineeth
rhel84-vineeth
VMware vCenter Server                     118
vCLS-50a1b507-5a1d-5e00-8410-7599fc72e5b5 5
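
  • If you need this as a single report, the same calculated properties can be combined, sorted, and exported. A minimal sketch (the CSV path is just an example; the if check skips powered-off VMs that have no boot time):
> Get-Cluster | Get-VM | Select Name, @{Name="Boot time";Expression={$_.ExtensionData.Runtime.BootTime}}, @{Name="Uptime (Days)";Expression={if ($_.ExtensionData.Runtime.BootTime) {((Get-Date) - $_.ExtensionData.Runtime.BootTime).Days}}} | Sort-Object "Uptime (Days)" -Descending | Export-Csv -Path vm-uptime-report.csv -NoTypeInformation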

Hope it was useful. Cheers!

Saturday, June 22, 2024

vSphere with Tanzu using NSX-T - Part31 - Troubleshooting inaccessible TKC with expired control plane certs

In the course of managing multiple Tanzu Kubernetes Clusters (TKC), I encountered an unexpected issue: the control plane certificates had expired, preventing us from accessing the cluster using the kubeconfig file. To make matters worse, we were unable to SSH into the TKC control plane Virtual Machines (VMs) due to the vmware-system-user password expiring in accordance with STIG Hardening.

The recommended workaround for updating the vmware-system-user password expiry involves applying a specific daemonset on Guest Clusters. However, this approach requires access to the TKC using its admin kubeconfig file, which was unavailable due to the expired certificates.

Warning: In case of critical production issues that affect the accessibility of your Tanzu Kubernetes Cluster (TKC), it is strongly advised to submit a product support request to our team for assistance. This will ensure that you receive expert guidance and a timely resolution to help minimize the impact on your environment.

To resolve this issue, I followed an alternative workaround: I reset the root password of the TKC control plane VMs through the vCenter VM console, as outlined in this knowledge base article. Once the root password was reset, I was able to log directly into the TKC control plane VM using the VM console.




After gaining access to the TKC control plane VM, I proceeded to renew the control plane certificates using kubeadm, as detailed in this blog post. It's essential to apply this process to all control plane nodes in your cluster to ensure proper functionality.

root [ /etc/kubernetes ]# kubeadm certs check-expiration

root [ /etc/kubernetes ]# kubeadm certs renew all
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[renew] Error reading configuration from the Cluster. Falling back to default configuration

certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed

Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.

Although this workaround required some additional steps, it ultimately allowed us to regain access to our Tanzu Kubernetes Cluster and maintain its security and functionality.

Hope it was useful. Cheers!

Saturday, May 25, 2024

vSphere with Tanzu using NSX-T - Part30 - Troubleshooting inaccessible TKC with server pool members missing in the LB VS

Encountering issues with connectivity to your TKC apiserver/ control plane can be frustrating. One common problem we've seen is the kubeconfig failing to connect, often due to missing server pool members in the load balancer's virtual server (LB VS).

The Issue

The LB VS, which operates on port 6443, should have the control plane VMs listed as its member servers. When these members are missing, connectivity problems arise, disrupting your access to the TKC apiserver.

Troubleshooting steps

  1. Access the TKC: Use the kubeconfig to access the TKC.
    ❯ KUBECONFIG=tkc.kubeconfig kubectl get node
    Unable to connect to the server: dial tcp 10.191.88.4:6443: i/o timeout
    
    
  2. Check the Load Balancer: In NSX-T, verify the status of the corresponding load balancer (LB). It may display a green status indicating success.
  3. Inspect Virtual Servers: Check the virtual servers in the LB, particularly on port 6443. They might show as down.
  4. Examine Server Pool Members: Look into the server pool members of the virtual server. You may find it empty.
  5. SSH to Control Plane Nodes: Attempt to SSH into the TKC control plane nodes.
  6. Run Diagnostic Commands: Execute diagnostic commands inside the control plane nodes to verify their status. The issue could be that the control plane VMs are in a hung state, and the container runtime is not running.
    vmware-system-user@tkc-infra-r68zc-jmq4j [ ~ ]$ sudo su
    root [ /home/vmware-system-user ]# crictl ps
    FATA[0002] failed to connect: failed to connect, make sure you are running as root and the runtime has been started: context deadline exceeded
    root [ /home/vmware-system-user ]#
    root [ /home/vmware-system-user ]# systemctl is-active containerd
    Failed to retrieve unit state: Failed to activate service 'org.freedesktop.systemd1': timed out (service_start_timeout=25000ms)
    root [ /home/vmware-system-user ]#
    root [ /home/vmware-system-user ]# systemctl status containerd
    WARNING: terminal is not fully functional
    -  (press RETURN)Failed to get properties: Failed to activate service 'org.freedesktop.systemd1'>
    lines 1-1/1 (END)lines 1-1/1 (END)
    
  7. Check VM Console: From vCenter, check the console of the control plane VMs. You might see specific errors indicating issues.
    EXT4-fs (sda3): Delayed block allocation failed for inode 266704 at logical offset 10515 with max blocks 2 with error 5
    EXT4-fs (sda3): This should not happen!! Data will be lost
    EXT4-fs error (device sda3) in ext4_writepages:2905: IO failure
    EXT4-fs error (device sda3) in ext4_reserve_inode_write:5947: Journal has aborted
    EXT4-fs error (device sda3) xxxxxx-xxx-xxxx: unable to read itable block
    EXT4-fs error (device sda3) in ext4_journal_check_start:61: Detected aborted journal
    systemd[1]: Caught <BUS>, dumped core as pid 24777.
    systemd[1]: Freezing execution.
    
  8. Restart Control Plane VMs: Restart the control plane VMs. Note that sometimes your admin credentials or administrator@vsphere.local credentials may not allow you to restart the TKC VMs. In such cases, decode the username and password from the relevant secret and use these credentials to connect to vCenter and restart the hung TKC VMs.
    ❯ kubectx wdc-01-vc17
    Switched to context "wdc-01-vc17".
    
    ❯ kg secret -A | grep wcp
    kube-system                                 wcp-authproxy-client-secret                                               kubernetes.io/tls                                  3      291d
    kube-system                                 wcp-authproxy-root-ca-secret                                              kubernetes.io/tls                                  3      291d
    kube-system                                 wcp-cluster-credentials                                                   Opaque                                             2      291d
    vmware-system-nsop                          wcp-nsop-sa-vc-auth                                                       Opaque                                             2      291d
    vmware-system-nsx                           wcp-cluster-credentials                                                   Opaque                                             2      291d
    vmware-system-vmop                          wcp-vmop-sa-vc-auth                                                       Opaque                                             2      291d
    
    ❯ kg secrets -n vmware-system-vmop wcp-vmop-sa-vc-auth
    NAME                  TYPE     DATA   AGE
    wcp-vmop-sa-vc-auth   Opaque   2      291d
    ❯ kg secrets -n vmware-system-vmop wcp-vmop-sa-vc-auth -oyaml
    apiVersion: v1
    data:
      password: aWAmbHUwPCpKe1Uxxxxxxxxxxxx=
      username: d2NwLXZtb3AtdXNlci1kb21haW4tYzEwMDYtMxxxxxxxxxxxxxxxxxxxxxxxxQHZzcGhlcmUubG9jYWw=
    kind: Secret
    metadata:
      creationTimestamp: "2022-10-24T08:32:26Z"
      name: wcp-vmop-sa-vc-auth
      namespace: vmware-system-vmop
      resourceVersion: "336557268"
      uid: dcbdac1b-18bb-438c-ba11-76ed4d6bef63
    type: Opaque
    
    
    ***Decode the username and password from the secret and use them to connect to the vCenter.
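    ***For example, the base64 values can be decoded with plain PowerShell; a minimal sketch (the strings below are placeholders for the values copied from the secret):
    
    PS /Users/vineetha> [Text.Encoding]::UTF8.GetString([Convert]::FromBase64String("<base64 username from the secret>"))
    PS /Users/vineetha> [Text.Encoding]::UTF8.GetString([Convert]::FromBase64String("<base64 password from the secret>"))
    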
    ***Following is an example using PowerCLI:
    
    PS /Users/vineetha> get-vm gc-control-plane-f266h
    
    Name                 PowerState Num CPUs MemoryGB
    ----                 ---------- -------- --------
    gc-control-plane-f2… PoweredOn  2        4.000
    
    PS /Users/vineetha> get-vm gc-control-plane-f266h | Restart-VMGuest
    Restart-VMGuest: 08/04/2023 22:20:20	Restart-VMGuest		Operation "Restart VM guest" failed for VM "gc-control-plane-f266h" for the following reason: A general system error occurred: Invalid fault
    PS /Users/vineetha>
    PS /Users/vineetha> get-vm gc-control-plane-f266h | Restart-VM
    
    Confirm
    Are you sure you want to perform this action?
    Performing the operation "Restart-VM" on target "VM 'gc-control-plane-f266h'".
    [Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is "Y"): Y
    
    Name                 PowerState Num CPUs MemoryGB
    ----                 ---------- -------- --------
    gc-control-plane-f2… PoweredOn  2        4.000
    
    PS /Users/vineetha>
    
  9. Verify System Pods and Connectivity: Once the control plane VMs are restarted, the system pods inside them will start, and the apiserver will become accessible using the kubeconfig. You should also see the previously missing server pool members reappear in the corresponding LB virtual server, and the virtual server on port 6443 will be up and show a success status.

Following these steps should help you resolve the connectivity issues with your TKC apiserver/control plane effectively. Ensuring that your load balancer's virtual server is correctly configured with the appropriate member servers is crucial for maintaining seamless access. This runbook aims to guide you through the process, helping you get your TKC apiserver back online swiftly.

Note: For critical production issues related to TKC accessibility, I strongly recommend raising a product support request.

Hope it was useful. Cheers!

Saturday, December 10, 2022

vSphere with Tanzu using NSX-T - Part21 - Pointers while upgrading the stack

Here are some pointers.

The main workflow while upgrading the entire stack is:

  1. upgrade NSX-T
  2. upgrade vCenter server
  3. upgrade ESXi nodes
  4. upgrade vSAN and vDS (if required)
  5. upgrade WCP

Note: Make sure you follow the compatibility matrix while performing upgrades in a production environment.

NSX-T

While NSX-T is getting upgraded, the host transport nodes will also need to be updated. NSX components will be installed/updated on the ESXi nodes, which may also require a reboot.

In NSX-T: System > Lifecycle Management > Upgrade > Upgrade NSX > Hosts

Plan

  • Upgrade order across groups - Serial
  • Pause upgrade condition - When an upgrade unit fails to upgrade

Host Groups

  • Upgrade order within group - Serial
  • Upgrade mode - Maintenance

Notes: 

  • There is an in-place upgrade mode for the host groups; in this mode the NSX component on the ESXi node gets updated first, and the node is then placed in MM and rebooted. But we have had some cases where the NSX component update on the ESXi node fails and the workload/TKC VMs running on that ESXi lose network connectivity. To fix this the ESXi node has to be rebooted, and while trying to put the node in MM, the TKC VMs running on it fail to get migrated, so the final option is to force reboot the ESXi, which affects the workloads running on it. To avoid this, the safest option is to choose the upgrade mode as Maintenance, so that the TKC VMs get migrated off the ESXi node before the NSX component is updated, and even if the component update fails, it is safe to reboot the node as it is already in MM.
  • We have also noticed that in some cases the ESXi nodes in a WCP cluster fail to enter MM. Following are some common cases:
  • Orphaned or inaccessible VMs are one of the main causes. You can follow this article to find those and clean them up.
  • I have noticed cases where some VMs (vSphere pods) were present on the ESXi host in a powered-off state. In this case the ESXi host was stuck at 100% while entering maintenance mode. You will need to check on those VMs and delete them if required to proceed further.
  • There are cases where some TKC VMs were present on the ESXi host and they fail to get migrated. Check whether these VMs are still present at the Kubernetes layer. If not, delete those VMs!
  • Check if there are any TKC nodes still present on the ESXi node. There can be cases where the TKC node or nodes are unable to migrate to other available nodes in the cluster due to resource constraints. Check whether the TKC nodes that are unable to migrate are using a guaranteed vmclass. The solution is to find unused resources/TKCs in the cluster, check with the owner/user, and then delete them if they are not being used. Another option is to check with the user and change the guaranteed vmclass to besteffort if possible, or temporarily reduce the number of worker nodes if the user agrees to do so.

  • Verify whether any pods are running on the respective ESXi worker node:
    kubectl get pods -A -o wide | grep ESXi-FQDN
    Safely drain the node.

Hope it was useful. Cheers!

Friday, December 10, 2021

ESXi in a HA cluster fails to Enter Maintenance Mode and gets stuck

Recently we came across a situation where, when we tried to put an ESXi host into Maintenance Mode, it got stuck at a certain level. These ESXi nodes were part of a vSphere with Tanzu 7 U3 cluster. While troubleshooting we noticed that there were some orphaned or inaccessible VMs running on the host. We deleted those orphaned and inaccessible VMs, and the ESXi node then entered Maintenance Mode successfully.

You can use VMware PowerCLI to list those orphaned and inaccessible VMs.

(Get-VMHost <host_fqdn> | Get-VM | Where {$_.ExtensionData.Summary.Runtime.ConnectionState -eq "orphaned"}) | select Name,Id,PowerState

(Get-VMHost <host_fqdn> | Get-VM | Where {$_.ExtensionData.Summary.Runtime.ConnectionState -eq "inaccessible"}) | select Name,Id,PowerState

You can try to delete them using the Remove-VM cmdlet.

Remove-VM -VM <vm_name> -DeletePermanently 

If that does not work, you can try with dcli.

dcli> com vmware vcenter vm delete --vm <vm-id>

Hope it was useful.

Friday, December 20, 2019

VMware PowerCLI 101 - part6 - vSphere networking

Networking is one of the important factors in ensuring service availability. Incorrect network configurations can lead to the unavailability of data and services, and if this happens in a production environment it will negatively affect the business.

In this article, I will briefly explain how to use PowerCLI to work with basic vSphere networking.

Connect to vCenter server using:
Connect-VIServer <IP address of vCenter>


VM IP


To get all IP details of a VM:
(Get-VM -Name <VM name>).Guest.IPAddress




VM network adapters, MAC, and IP


To get all network adapters, MAC address and IP details of a VM:
(Get-VM -Name <VM name>).Guest.Nics | select *



OR

(Get-VM -Name <VM name>).ExtensionData.Guest.Net


VDS


To get all the Virtual Distributed Switches (VDS):
Get-VDSwitch



To get all the details of a specific VDS:
Get-VDSwitch -Name <VD Switch name> | select *




To get VDS security policy:
Get-VDSwitch -Name <VD Switch name> | Get-VDSecurityPolicy | select *



VD Port group


To get all port groups of a specific VDS:
Get-VDPortgroup -VDSwitch <VD Switch name>



To get all the details of a specific port group in a VDS:
Get-VDPortgroup -VDSwitch <VD Switch name> -Name <Port group name> | select *




VD Port


To get all VD ports of a specific VD port group in a VDS:
Get-VDSwitch <VD Switch name> | Get-VDPortgroup <Port group name> | Get-VDPort

To get only active VD ports of a specific VD port group in a VDS:
Get-VDSwitch <VD Switch name> | Get-VDPortgroup <Port group name> | Get-VDPort -ActiveOnly


To get all details of a specific VD port in a VDS:
Get-VDPort -Key <Value> -VDSwitch <VD Switch name> | select * 




VM connected to a VD port


To get the VM that is connected to a VD port:
(Get-VDPort -Key <Value> -VDSwitch <VD Switch name>).ExtensionData.Connectee
Get-VM -Id <VM Id>



Find VM using NIC MAC


Get-VM | where {$_.ExtensionData.Guest.Net.MacAddress -eq '<MAC Address>'}


Hope it was useful. Cheers!


Friday, June 7, 2019

VMware PowerCLI 101 - Part3 - Basic VM operations

Previous posts in this blog series covered how to install PowerCLI, connect to an ESXi host, and work with the vCenter server. In this article, we will go through basic VM operations.

Get basic VM details:
Get-VM -Name <name of the virtual machine>

To view all properties of a VM object:
Get-VM -Name <name of the virtual machine> | Get-Member

To start a VM:
Get-VM -Name <name of the virtual machine> | Start-VM

To restart a VM:
Get-VM -Name <name of the virtual machine> | Restart-VMGuest

To shut down a VM:
Get-VM -Name <name of the virtual machine> | Shutdown-VMGuest

To delete a VM permanently:
Get-VM -Name <name of the virtual machine> | Remove-VM -DeletePermanently

Let's have a look into the "ExtensionData" property of a virtual machine.

Status related details of a VM:
(Get-VM -Name <name of the virtual machine>).ExtensionData | select GuestHeartbeatStatus,OverallStatus,ConfigStatus


CPU, Memory config details of a VM:
(Get-VM -Name <name of the virtual machine>).ExtensionData.Config.Hardware



CPU, Memory hot add/ remove features:
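For example, these flags can be read from the VM's config data (a minimal sketch; the property names come from the vSphere API):
(Get-VM -Name <name of the virtual machine>).ExtensionData.Config | select CpuHotAddEnabled,CpuHotRemoveEnabled,MemoryHotAddEnabled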


Layout details of a VM:
(Get-VM -Name <name of the virtual machine>).ExtensionData.LayoutEx.File | sort Key | ft


VM tools status and other guest details:
(Get-VM -Name <name of the virtual machine>).ExtensionData.Guest




Network related info of a VM:
(Get-VM -Name <name of the virtual machine>).ExtensionData.Guest.Net


Hope this was useful. Cheers!

Friday, May 24, 2019

VMware PowerCLI 101 - Part2 - Working with vCenter server

In my previous blog, we discussed how to install the PowerCLI module and connect to a stand-alone ESXi host. Now let's start with connecting to the vCenter server.

Connect-VIServer <IP of vCenter server>

Get the list of data centers present under this vCenter server: Get-Datacenter


Get the list of clusters under a datacenter: Get-Datacenter DC01 | Get-Cluster

To verify the status of HA, DRS, and vSAN of a cluster:
(Get-Cluster Cluster01) | select Name,DrsEnabled,DrsAutomationLevel,HAEnabled,HAFailoverLevel,VsanEnabled


Get the list of ESXi hosts part of a specific cluster: 
Get-Datacenter DC01 | Get-Cluster Cluster01 | Get-VMHost


To get the list of VMs hosted on a specific ESXi host:
Get-Datacenter DC01 | Get-Cluster Cluster01 | Get-VMHost 192.168.105.11 | Get-VM


To get a list of powered on and powered off VMs:
(Get-Cluster Cluster01 | Get-VM).where{$PSItem.PowerState -eq "PoweredOn"}
(Get-Cluster Cluster01 | Get-VM).where{$PSItem.PowerState -eq "PoweredOff"}


An efficient way of doing it in a single go is given below:
$vmson, $vmsoff = (Get-Cluster Cluster01 | Get-VM).where({$PSItem.PowerState -eq "PoweredOn"}, 'split')

$vmson will have the list of VMs that are PoweredOn and $vmsoff will have the list of VMs that are PoweredOff.


Cluster level inventory:
Get-Cluster | Get-Inventory

Datastore details of a cluster:
Get-Cluster | Get-Datastore | select Name, FreeSpaceGB, CapacityGB, FileSystemVersion, State


Hope this was useful. Cheers! In the next post, we will talk about performing basic VM operations using PowerCLI.

Saturday, November 17, 2018

Real time VMware VM resource monitoring using PowerShell

This post is about monitoring the resource usage of a list of virtual machines hosted on VMware ESXi clusters using PowerCLI. The output format is shown below and it refreshes automatically every few seconds.

Prerequisites:
  • VMware.PowerCLI module should be installed on the node from which you are running the script
  • You can verify using: Get-Module -Name VMware.PowerCLI -ListAvailable
  • If not installed, you can find the latest version from the PSGallery: Find-Module -Name VMware.PowerCLI
  • Install the module: Install-Module -Name VMware.PowerCLI
Note:
  • I am using PowerCLI Version 11.0.0.10380590

Latest version of the project and code available at: github.com/vineethac/vmware_vm_monitor

Sample screenshot of output:


Notes:

VMware guidance: CPU Ready Time and Co-Stop values per core greater than 5% and 3% respectively could be a performance concern.
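
For a quick spot check of this with PowerCLI, the realtime cpu.ready.summation counter can be converted to a percentage. A minimal sketch, assuming the standard 20-second realtime sampling interval (<VM name> is a placeholder; the counter is reported in milliseconds per interval):

$vm = Get-VM -Name <VM name>
$ready = Get-Stat -Entity $vm -Stat cpu.ready.summation -Realtime -MaxSamples 1 | Where-Object {$_.Instance -eq ""} | Select-Object -First 1
# summation (ms) / (interval seconds x 1000) x 100 = %RDY across all vCPUs
$readyPct = ($ready.Value / (20 * 1000)) * 100
"{0}: CPU Ready {1:N2}% aggregate, ~{2:N2}% per vCPU" -f $vm.Name, $readyPct, ($readyPct / $vm.NumCpu)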

Hope this will be useful for resource monitoring as well as right-sizing of VMs. Cheers!


Monday, January 29, 2018

PowerCLI to remove orphaned VM from inventory

Recently I came across a situation where one of the VMs running on ESXi 6.0 was orphaned. This happened after I updated the ESXi host with a patch. All the files associated with that VM were available in the datastore, but the VM was displayed in the inventory as orphaned. There was no option available in the web console to remove the VM from the inventory. In order to register the original VMX file, the orphaned VM needs to be removed from the inventory first.

Steps to resolve this issue:

1. Connect to vCenter server using PowerCLI: Connect-VIServer 192.168.14.15
2. Get list of VMs: Get-VM | Select Name



3. In my case "HCIBench_1.6.5.1_14G" was orphaned.
4. Remove the orphaned VM: Remove-VM HCIBench_1.6.5.1_14G


5. As you can see in the above screenshot, the orphaned VM is removed from the inventory. Now you can go to the datastore where the original VM files are stored, click on the VMX file and register it (in my case, when I tried to register the VM it resulted in an error mentioning that the item already exists. I had to reboot the vCenter server to fix this issue. After rebooting the vCenter I was able to register the VMX file of the VM).
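
If you prefer to register the VMX using PowerCLI instead of the web console, New-VM can do that as well. A minimal sketch (the datastore, folder, and host names below are placeholders):

New-VM -VMFilePath "[<datastore name>] <VM folder>/<VM name>.vmx" -VMHost <ESXi host>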

Hope it was helpful to you. Cheers!

Reference: blog.vmtraining.net