
Saturday, June 22, 2024

vSphere with Tanzu using NSX-T - Part31 - Troubleshooting inaccessible TKC with expired control plane certs

In the course of managing multiple Tanzu Kubernetes Clusters (TKC), I encountered an unexpected issue: the control plane certificates had expired, preventing us from accessing the cluster using the kubeconfig file. To make matters worse, we were unable to SSH into the TKC control plane Virtual Machines (VMs) due to the vmware-system-user password expiring in accordance with STIG Hardening.

The recommended workaround for updating the vmware-system-user password expiry involves applying a specific daemonset on Guest Clusters. However, this approach requires access to the TKC using its admin kubeconfig file, which was unavailable due to the expired certificates.

Warning: In case of critical production issues that affect the accessibility of your Tanzu Kubernetes Cluster (TKC), it is strongly advised to submit a product support request to our team for assistance. This will ensure that you receive expert guidance and a timely resolution to help minimize the impact on your environment.

To resolve this issue, I followed an alternative workaround: I reset the root password of the TKC control plane VMs through the vCenter VM console, as outlined in this knowledge base article. Once the root password was reset, I was able to log directly into the TKC control plane VM using the VM console.
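If you prefer PowerCLI over clicking through the vSphere Client, the following is a minimal sketch to locate the TKC control plane VMs and pop open a console to them. The vCenter address and VM name pattern below are placeholders; Open-VMConsoleWindow launches the console from your workstation.

Connect-VIServer <vCenter_FQDN>
Get-VM -Name "<tkc-name>-control-plane-*" | select Name,PowerState
Get-VM -Name <control_plane_vm_name> | Open-VMConsoleWindow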




After gaining access to the TKC control plane VM, I proceeded to renew the control plane certificates using kubeadm, as detailed in this blog post. It's essential to apply this process to all control plane nodes in your cluster to ensure proper functionality.

root [ /etc/kubernetes ]# kubeadm certs check-expiration

root [ /etc/kubernetes ]# kubeadm certs renew all
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[renew] Error reading configuration from the Cluster. Falling back to default configuration

certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed

Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.

Although this workaround required some additional steps, it ultimately allowed us to regain access to our Tanzu Kubernetes Cluster and maintain its security and functionality.

Hope it was useful. Cheers!

Saturday, May 25, 2024

vSphere with Tanzu using NSX-T - Part30 - Troubleshooting inaccessible TKC with server pool members missing in the LB VS

Encountering issues with connectivity to your TKC apiserver/ control plane can be frustrating. One common problem we've seen is the kubeconfig failing to connect, often due to missing server pool members in the load balancer's virtual server (LB VS).

The Issue

The LB VS, which operates on port 6443, should have the control plane VMs listed as its member servers. When these members are missing, connectivity problems arise, disrupting your access to the TKC apiserver.

Troubleshooting steps

  1. Access the TKC: Use the kubeconfig to access the TKC.
    ❯ KUBECONFIG=tkc.kubeconfig kubectl get node
    Unable to connect to the server: dial tcp 10.191.88.4:6443: i/o timeout
    
    
  2. Check the Load Balancer: In NSX-T, verify the status of the corresponding load balancer (LB). It may display a green status indicating success.
  3. Inspect Virtual Servers: Check the virtual servers in the LB, particularly on port 6443. They might show as down.
  4. Examine Server Pool Members: Look into the server pool members of the virtual server. You may find it empty.
  5. SSH to Control Plane Nodes: Attempt to SSH into the TKC control plane nodes.
  6. Run Diagnostic Commands: Execute diagnostic commands inside the control plane nodes to verify their status. The issue could be that the control plane VMs are in a hung state, and the container runtime is not running.
    vmware-system-user@tkc-infra-r68zc-jmq4j [ ~ ]$ sudo su
    root [ /home/vmware-system-user ]# crictl ps
    FATA[0002] failed to connect: failed to connect, make sure you are running as root and the runtime has been started: context deadline exceeded
    root [ /home/vmware-system-user ]#
    root [ /home/vmware-system-user ]# systemctl is-active containerd
    Failed to retrieve unit state: Failed to activate service 'org.freedesktop.systemd1': timed out (service_start_timeout=25000ms)
    root [ /home/vmware-system-user ]#
    root [ /home/vmware-system-user ]# systemctl status containerd
    WARNING: terminal is not fully functional
    -  (press RETURN)Failed to get properties: Failed to activate service 'org.freedesktop.systemd1'>
    lines 1-1/1 (END)lines 1-1/1 (END)
    
  7. Check VM Console: From vCenter, check the console of the control plane VMs. You might see specific errors indicating issues.
    EXT4-fs (sda3): Delayed block allocation failed for inode 266704 at logical offset 10515 with max blocks 2 with error 5
    EXT4-fs (sda3): This should not happen!! Data will be lost
    EXT4-fs error (device sda3) in ext4_writepages:2905: IO failure
    EXT4-fs error (device sda3) in ext4_reserve_inode_write:5947: Journal has aborted
    EXT4-fs error (device sda3) xxxxxx-xxx-xxxx: unable to read itable block
    EXT4-fs error (device sda3) in ext4_journal_check_start:61: Detected aborted journal
    systemd[1]: Caught <BUS>, dumped core as pid 24777.
    systemd[1]: Freezing execution.
    
  8. Restart Control Plane VMs: Restart the control plane VMs. Note that sometimes your admin credentials or administrator@vsphere.local credentials may not allow you to restart the TKC VMs. In such cases, decode the username and password from the relevant secret and use these credentials to connect to vCenter and restart the hung TKC VMs.
    ❯ kubectx wdc-01-vc17
    Switched to context "wdc-01-vc17".
    
    ❯ kg secret -A | grep wcp
    kube-system                                 wcp-authproxy-client-secret                                               kubernetes.io/tls                                  3      291d
    kube-system                                 wcp-authproxy-root-ca-secret                                              kubernetes.io/tls                                  3      291d
    kube-system                                 wcp-cluster-credentials                                                   Opaque                                             2      291d
    vmware-system-nsop                          wcp-nsop-sa-vc-auth                                                       Opaque                                             2      291d
    vmware-system-nsx                           wcp-cluster-credentials                                                   Opaque                                             2      291d
    vmware-system-vmop                          wcp-vmop-sa-vc-auth                                                       Opaque                                             2      291d
    
    ❯ kg secrets -n vmware-system-vmop wcp-vmop-sa-vc-auth
    NAME                  TYPE     DATA   AGE
    wcp-vmop-sa-vc-auth   Opaque   2      291d
    ❯ kg secrets -n vmware-system-vmop wcp-vmop-sa-vc-auth -oyaml
    apiVersion: v1
    data:
      password: aWAmbHUwPCpKe1Uxxxxxxxxxxxx=
      username: d2NwLXZtb3AtdXNlci1kb21haW4tYzEwMDYtMxxxxxxxxxxxxxxxxxxxxxxxxQHZzcGhlcmUubG9jYWw=
    kind: Secret
    metadata:
      creationTimestamp: "2022-10-24T08:32:26Z"
      name: wcp-vmop-sa-vc-auth
      namespace: vmware-system-vmop
      resourceVersion: "336557268"
      uid: dcbdac1b-18bb-438c-ba11-76ed4d6bef63
    type: Opaque
    
    
    ***Decode the base64-encoded username and password from the secret and use them to connect to the vCenter (see the PowerShell sketch after this list).
    ***Following is an example using PowerCLI:
    
    PS /Users/vineetha> get-vm gc-control-plane-f266h
    
    Name                 PowerState Num CPUs MemoryGB
    ----                 ---------- -------- --------
    gc-control-plane-f2… PoweredOn  2        4.000
    
    PS /Users/vineetha> get-vm gc-control-plane-f266h | Restart-VMGuest
    Restart-VMGuest: 08/04/2023 22:20:20	Restart-VMGuest		Operation "Restart VM guest" failed for VM "gc-control-plane-f266h" for the following reason: A general system error occurred: Invalid fault
    PS /Users/vineetha>
    PS /Users/vineetha> get-vm gc-control-plane-f266h | Restart-VM
    
    Confirm
    Are you sure you want to perform this action?
    Performing the operation "Restart-VM" on target "VM 'gc-control-plane-f266h'".
    [Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is "Y"): Y
    
    Name                 PowerState Num CPUs MemoryGB
    ----                 ---------- -------- --------
    gc-control-plane-f2… PoweredOn  2        4.000
    
    PS /Users/vineetha>
    
  9. Verify System Pods and Connectivity: Once the control plane VMs are restarted, the system pods inside them will start, and the apiserver will become accessible using the kubeconfig. You should also see the previously missing server pool members reappear in the corresponding LB virtual server, and the virtual server on port 6443 will be up and show a success status.
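To decode the credentials from the secret shown in step 8, here is a minimal PowerShell sketch. The secret name and namespace are the ones from this example; the vCenter FQDN is a placeholder.

$u64 = kubectl get secret wcp-vmop-sa-vc-auth -n vmware-system-vmop -o jsonpath='{.data.username}'
$p64 = kubectl get secret wcp-vmop-sa-vc-auth -n vmware-system-vmop -o jsonpath='{.data.password}'
$user = [Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($u64))
$pass = [Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($p64))
Connect-VIServer -Server <vCenter_FQDN> -User $user -Password $pass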

Following these steps should help you resolve connectivity issues with your TKC apiserver/control plane effectively. Ensuring that your load balancer's virtual server is correctly configured with the appropriate member servers is crucial for maintaining seamless access. This runbook aims to guide you through the process, helping you get your TKC apiserver back online swiftly.

Note: For critical production issues related to TKC accessibility, I strongly recommend raising a product support request.

Hope it was useful. Cheers!

Saturday, December 10, 2022

vSphere with Tanzu using NSX-T - Part21 - Pointers while upgrading the stack

Here are some pointers.

The main workflow while upgrading the entire stack is:

  1. upgrade NSX-T
  2. upgrade vCenter server
  3. upgrade ESXi nodes
  4. upgrade vSAN and vDS (if required)
  5. upgrade WCP

Note: Make sure you follow the compatibility matrix while performing upgrades in a production environment.

NSX-T

While NSX-T is being upgraded, the host transport nodes also need to be updated. NSX components will be installed/ updated on the ESXi nodes, and a reboot may also be required.

In NSX-T: System > Lifecycle Management > Upgrade > Upgrade NSX > Hosts

Plan

-Upgrade order across groups - Serial

-Pause upgrade condition - When an upgrade unit fails to upgrade

Host Groups

-Upgrade order within group - Serial

-Upgrade mode - Maintenance

 

Notes: 

  • There is an in-place upgrade mode for the host groups. In this mode the NSX component on the ESXi node gets updated first, and the node is then placed in MM and rebooted. We have had cases where the NSX component update on an ESXi node fails and the workload/ TKC VMs running on that node lose network connectivity. To fix this the ESXi node has to be rebooted, but while trying to put the node in MM the TKC VMs running on it fail to get migrated, and the only remaining option is to force reboot the ESXi, which affects the workloads running on it. To avoid this, the safest option is to choose the Maintenance upgrade mode, so that the TKC VMs are migrated off the ESXi node before the NSX component is updated; even if the component update fails, we are safe to reboot the node as it is already in MM.
  • We have also noticed cases where ESXi nodes in a WCP cluster fail to enter MM. Following are some common causes:
  • Orphaned or inaccessible VMs are one of the main causes. You can follow this article to find those and clean them up.
  • I have seen cases where some VMs (vSphere pods) were present on the ESXi host in a powered-off state, and the host was stuck at 100% while entering maintenance mode. You will need to check those VMs and delete them if required to proceed further (see the PowerCLI sketch after this list).
  • There are cases where some TKC VMs present on the ESXi host fail to get migrated. Check whether these VMs are still present at the Kubernetes layer. If not, delete those VMs!
  • Check if there are any TKC nodes still present on the ESXi node. There can be cases where the TKC node or nodes are unable to migrate to other available nodes in the cluster due to resource constraints. Check whether the TKC nodes that are unable to migrate are using a guaranteed vmclass. The solution is to find unused resources/ TKCs in the cluster, check with the owner/ user, and delete them if they are not being used. Another option is to change the guaranteed vmclass to besteffort if possible, or temporarily reduce the number of worker nodes if the user agrees.

  • Verify whether any pods are running on the respective ESXi worker node:
    kubectl get pods -A -o wide | grep ESXi-FQDN
    Safely drain the node.
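As referenced in the list above, here is a minimal PowerCLI sketch to spot powered-off or orphaned/ inaccessible VMs on a specific ESXi host before retrying maintenance mode. The host FQDN is a placeholder.

Get-VMHost <ESXi_FQDN> | Get-VM | Where {$_.PowerState -eq "PoweredOff"} | select Name,Id,PowerState
Get-VMHost <ESXi_FQDN> | Get-VM | Where {$_.ExtensionData.Summary.Runtime.ConnectionState -in "orphaned","inaccessible"} | select Name,Id,PowerState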

Hope it was useful. Cheers!

Friday, December 10, 2021

ESXi in a HA cluster fails to Enter Maintenance Mode and gets stuck

Recently we came across a situation where an ESXi host got stuck at a certain stage while trying to enter Maintenance Mode. These ESXi nodes were part of a vSphere with Tanzu 7 U3 cluster. While troubleshooting, we noticed some orphaned and inaccessible VMs registered on that host. Once those VMs were deleted, the ESXi node entered Maintenance Mode successfully.

You can use VMware PowerCLI to list those orphaned and inaccessible VMs.

(Get-VMHost <host_fqdn> | Get-VM | Where {$_.ExtensionData.Summary.Runtime.ConnectionState -eq "orphaned"}) | select Name,Id,PowerState

(Get-VMHost <host_fqdn> | Get-VM | Where {$_.ExtensionData.Summary.Runtime.ConnectionState -eq "inaccessible"}) | select Name,Id,PowerState

We then deleted those orphaned and inaccessible VMs. You can try to delete them using the Remove-VM cmdlet.

Remove-VM -VM <vm_name> -DeletePermanently 

If that does not work, you can try with dcli.

dcli> com vmware vcenter vm delete --vm <vm-id>

Hope it was useful.

Friday, December 20, 2019

VMware PowerCLI 101 - part6 - vSphere networking

Networking is one of the most important factors in ensuring service availability. Incorrect network configuration leads to unavailability of data and services, and in a production environment this directly impacts the business.

In this article, I will briefly explain how to use PowerCLI to work with basic vSphere networking.

Connect to vCenter server using:
Connect-VIServer <IP address of vCenter>


VM IP


To get all IP details of a VM:
(Get-VM -Name <VM name>).Guest.IPAddress




VM network adapters, MAC, and IP


To get all network adapters, MAC address and IP details of a VM:
(Get-VM -Name <VM name>).Guest.Nics | select *



OR

(Get-VM -Name <VM name>).ExtensionData.Guest.Net


VDS


To get all the Virtual Distributed Switches (VDS):
Get-VDSwitch



To get all the details of a specific VDS:
Get-VDSwitch -Name <VD Switch name> | select *




To get VDS security policy:
Get-VDSwitch -Name <VD Switch name> | Get-VDSecurityPolicy | select *



VD Port group


To get all port groups of a specific VDS:
Get-VDPortgroup -VDSwitch <VD Switch name>



To get all the details of a specific port group in a VDS:
Get-VDPortgroup -VDSwitch <VD Switch name> -Name <Port group name> | select *




VD Port


To get all VD ports of a specific VD port group in a VDS:
Get-VDSwitch <VD Switch name> | Get-VDPortgroup <Port group name> | Get-VDPort

To get only active VD ports of a specific VD port group in a VDS:
Get-VDSwitch <VD Switch name> | Get-VDPortgroup <Port group name> | Get-VDPort -ActiveOnly


To get all details of a specific VD port in a VDS:
Get-VDPort -Key <Value> -VDSwitch <VD Switch name> | select * 




VM connected to a VD port


To get the VM that is connected to a VD port:
(Get-VDPort -Key <Value> -VDSwitch <VD Switch name>).ExtensionData.Connectee
Get-VM -Id <VM Id>
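The two commands above can be chained; here is a minimal sketch, assuming the connectee is a virtual machine (the port key and switch name are placeholders):

$port = Get-VDPort -Key <Value> -VDSwitch <VD Switch name>
$moref = $port.ExtensionData.Connectee.ConnectedEntity      # ManagedObjectReference of the connected VM
Get-VM -Id ("{0}-{1}" -f $moref.Type, $moref.Value)         # e.g. VirtualMachine-vm-123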



Find VM using NIC MAC


Get-VM | where {$_.ExtensionData.Guest.Net.MacAddress -eq '<MAC Address>'}


Hope it was useful. Cheers!




Friday, June 7, 2019

VMware PowerCLI 101 - Part3 - Basic VM operations

Previous posts in this blog series covered how to install PowerCLI, connect to an ESXi host, and the basics of working with the vCenter server. In this article, we will go through basic VM operations.

Get basic VM details:
Get-VM -Name <name of the virtual machine>

To list all the properties and methods of a VM object:
Get-VM -Name <name of the virtual machine> | Get-Member

To start a VM:
Get-VM -Name <name of the virtual machine> | Start-VM

To restart a VM:
Get-VM -Name <name of the virtual machine> | Restart-VMGuest

To shut down a VM:
Get-VM -Name <name of the virtual machine> | Shutdown-VMGuest

To delete a VM permanently:
Get-VM -Name <name of the virtual machine> | Remove-VM -DeletePermanently

Let's have a look into the "ExtensionData" property of a virtual machine.

Status related details of a VM:
(Get-VM -Name <name of the virtual machine>).ExtensionData | select GuestHeartbeatStatus,OverallStatus,ConfigStatus


CPU, Memory config details of a VM:
(Get-VM -Name <name of the virtual machine>).ExtensionData.Config.Hardware



CPU, Memory hot add/ remove features:
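These flags live under the VM's Config object; a minimal example to check them:
(Get-VM -Name <name of the virtual machine>).ExtensionData.Config | select CpuHotAddEnabled,CpuHotRemoveEnabled,MemoryHotAddEnabled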


Layout details of a VM:
(Get-VM -Name <name of the virtual machine>).ExtensionData.LayoutEx.File | sort Key | ft


VM tools status and other guest details:
(Get-VM -Name <name of the virtual machine>).ExtensionData.Guest




Network related info of a VM:
(Get-VM -Name <name of the virtual machine>).ExtensionData.Guest.Net


Hope this was useful. Cheers!

Friday, May 24, 2019

VMware PowerCLI 101 - Part2 - Working with vCenter server

In my previous blog, we discussed how to install the PowerCLI module and connect to a stand-alone ESXi host. Now let's start by connecting to the vCenter server.

Connect-VIServer <IP of vCenter server>

Get the list of data centers present under this vCenter server: Get-Datacenter


Get the list of clusters under a datacenter: Get-Datacenter DC01 | Get-Cluster

To verify the status of HA, DRS, and vSAN of a cluster:
(Get-Cluster Cluster01) | select Name,DrsEnabled,DrsAutomationLevel,HAEnabled,HAFailoverLevel,VsanEnabled


Get the list of ESXi hosts part of a specific cluster: 
Get-Datacenter DC01 | Get-Cluster Cluster01 | Get-VMHost


To get the list of VMs hosted on a specific ESXi host:
Get-Datacenter DC01 | Get-Cluster Cluster01 | Get-VMHost 192.168.105.11 | Get-VM


To get a list of powered on and powered off VMs:
(Get-Cluster Cluster01 | Get-VM).where{$PSItem.PowerState -eq "PoweredOn"}
(Get-Cluster Cluster01 | Get-VM).where{$PSItem.PowerState -eq "PoweredOff"}


An efficient way of doing it in a single go is given below:
$vmson, $vmsoff = (Get-Cluster Cluster01 | Get-VM).where({$PSItem.PowerState -eq "PoweredOn"}, 'split')

$vmson will have the list of VMs that are PoweredOn and $vmsoff will have the list of VMs that are PoweredOff.


Cluster level inventory:
Get-Cluster | Get-Inventory

Datastore details of a cluster:
Get-Cluster | Get-Datastore | select Name, FreeSpaceGB, CapacityGB, FileSystemVersion, State


Hope this was useful. Cheers! In the next post, we will talk about performing basic VM operations using PowerCLI.

Saturday, November 17, 2018

Real time VMware VM resource monitoring using PowerShell

This post is about monitoring the resource usage of a list of virtual machines hosted on VMware ESXi clusters using PowerCLI. The output format is shown below; it refreshes automatically every few seconds.

Prerequisites:
  • VMware.PowerCLI module should be installed on the node from which you are running the script
  • You can verify using: Get-Module -Name VMware.PowerCLI -ListAvailable
  • If not installed, you can find the latest version from the PSGallery: Find-Module -Name VMware.PowerCLI
  • Install the module: Install-Module -Name VMware.PowerCLI
Note:
  • I am using PowerCLI Version 11.0.0.10380590

Latest version of the project and code available at: github.com/vineethac/vmware_vm_monitor

    Sample screenshot of output:


    Notes:

    VMware guidance: CPU Ready time and Co-Stop values per core greater than 5% and 3% respectively could be a performance concern.
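    For reference, here is a rough PowerCLI sketch to pull these counters and convert them to a per-core percentage. The VM name is a placeholder, and realtime samples cover 20-second (20000 ms) intervals.

    $vm = Get-VM -Name <VM name>
    $ready = Get-Stat -Entity $vm -Stat cpu.ready.summation -Realtime -MaxSamples 15 | Where-Object {$_.Instance -eq ""}
    $costop = Get-Stat -Entity $vm -Stat cpu.costop.summation -Realtime -MaxSamples 15 | Where-Object {$_.Instance -eq ""}
    # summation values are milliseconds per 20000 ms realtime sample; divide by vCPU count for a per-core figure
    ($ready | Measure-Object Value -Average).Average / 20000 * 100 / $vm.NumCpu
    ($costop | Measure-Object Value -Average).Average / 20000 * 100 / $vm.NumCpu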

    Hope this will be useful for resource monitoring as well as right sizing of VMs. Cheers!


    Monday, January 29, 2018

    PowerCLI to remove orphaned VM from inventory

    Recently I came across a situation where one of the VMs running on ESXi 6.0 became orphaned. This happened after I updated the ESXi host with a patch. All the files associated with that VM were still available in the datastore, but the VM was displayed in the inventory as orphaned. There was no option available in the web console to remove the VM from the inventory. In order to register the original VMX file, the orphaned VM needs to be removed from the inventory first.

    Steps to resolve this issue:

    1. Connect to vCenter server using PowerCLI: Connect-VIServer 192.168.14.15
    2. Get list of VMs: Get-VM | Select Name



    3. In my case "HCIBench_1.6.5.1_14G" was orphaned.
    4. Remove the orphaned VM: Remove-VM HCIBench_1.6.5.1_14G


    5. As you can see in the above screenshot, the orphaned VM is removed from the inventory. Now you can go to the datastore where the original VM files are stored, click on the VMX file and register it. (In my case, when I tried to register the VM it resulted in an error saying that the item already exists. I had to reboot the vCenter server to fix this; after the reboot I was able to register the VMX file of the VM.)

    Hope it was helpful to you. Cheers!

    Reference: blog.vmtraining.net

    Monday, November 14, 2016

    Best practices while virtualizing Microsoft SQL servers using Hyper-V

    -Limit min and max memory for SQL server
    -Use fixed size VHDX
    -Split data and log files into separate VHDX disks
    -Use multiple SCSI controllers
    -Right sizing and not over allocating resources
    -Making use of multiple RAID disk groups (for sequential and random access)
    -For tier-1 mission critical applications use RAID 10 for data, log files, and tempDB for best performance and availability
    -For lower tier SQL workloads when cost is a concern, data and tempDB can be on RAID 5
    -Use of SSDs or tiered storage for higher IOPS
    -If using VMQ in a Hyper-V environment, enable vRSS on the guest OS side to spread network processing load across multiple CPUs
    -I prefer using fixed size memory for SQL VMs
    -Exclude SQL DB related files (*.mdf, *.ldf, *.ndf, *.bak, *.trn) from on-access antivirus scans
    -Disable content indexing on SQL data/ log/ tempDB drives
    -Enable lock pages in memory (group policy setting)
    -Enable Instant File Initialization (IFI) for database files
    -Set OS power plan to high performance
    -OS performance options - visual effects - adjust for best performance
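    A few of the above settings can be applied with the Hyper-V PowerShell module; a minimal sketch (the VM name, sizes and paths below are placeholders):

    # fixed size VHDX files for data and log
    New-VHD -Path E:\VMs\SQL01\data.vhdx -SizeBytes 200GB -Fixed
    New-VHD -Path E:\VMs\SQL01\log.vhdx -SizeBytes 100GB -Fixed

    # additional SCSI controller with separate disks for data and log files
    Add-VMScsiController -VMName SQL01
    Add-VMHardDiskDrive -VMName SQL01 -ControllerType SCSI -ControllerNumber 1 -Path E:\VMs\SQL01\data.vhdx
    Add-VMHardDiskDrive -VMName SQL01 -ControllerType SCSI -ControllerNumber 1 -Path E:\VMs\SQL01\log.vhdx

    # fixed (static) memory for the SQL VM
    Set-VMMemory -VMName SQL01 -DynamicMemoryEnabled $false -StartupBytes 16GB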

    Tuesday, August 9, 2016

    Hyper-V VM deployment using powershell and VHDX templates

    The following PowerShell script can be used to deploy virtual machines on a Hyper-V host.

    CODE:

    #Start
    #VM name
    [string]$vmname = Read-Host "Name of VM"
    $vmcheck = Get-VM -name $vmname

    #To check for duplicate VM on the host
    if(!$vmcheck)
    {
    Write-Host "Above warning can be ignroed as there is no duplicate VM. Please proceed and enter following details. `n"
    [int32]$gen = Read-Host "Generation type"
    [int32]$cpu = Read-Host "Number of vCPU"
    [string]$vmpath = Read-Host "Enter path for VM configuration files (Eg: E:\VM)"
    [string]$dynamic = $null

    while("yes","no" -notcontains $dynamic)
    {
    $dynamic = Read-Host "Will this VM use dynamic memory? (yes/no)"
    }

    #Dynamic memory parameters
    if($dynamic -eq "yes")
    {
    [int64]$minRAM = Read-Host "Minimum memory (MB)"
    [int64]$maxRAM = Read-Host "Maximum memory (MB)"
    [int64]$startRAM = Read-Host "Starting memory (MB) [Note: Specify value between $minRAM and $maxRAM]"
    $minRAM = 1MB*$minRAM
    $maxRAM = 1MB*$maxRAM
    $startRAM = 1MB*$startRAM

    #Creating the VM with dynamic RAM
    New-VM -Name $vmname -Path $vmpath -Generation $gen
    Set-VM -Name $vmname -DynamicMemory -MemoryStartupBytes $startRAM -MemoryMinimumBytes $minRAM -MemoryMaximumBytes $maxRAM
    }

    else
    {
    #Creating the VM with static RAM
    [int64]$staticRAM = Read-Host "Static memory (MB)"
    $staticRAM = 1MB*$staticRAM
    New-VM -Name $vmname -Path $vmpath -Generation $gen -MemoryStartupBytes $staticRAM
    }

    #Setting VM auto start to none and auto stop to shutdown
    Set-VM -Name $vmname -ProcessorCount $cpu -AutomaticStartAction Nothing -AutomaticStopAction ShutDown

    #Creating VM hard disk directory
    New-Item -path $vmpath\$vmname -name "Virtual Hard Disks" -type directory

    #Enabling processor compatibility configuration for migration
    Set-VMProcessor $vmname -CompatibilityForMigrationEnabled $true
    }#vmcheck ends here

    else
    {
    Write-Host "A VM named $vmname already exists"
    }
    #End

    Now the VM is created, but it doesn't have a virtual hard disk (VHDX file) yet. Assuming you already have a sysprepped VHDX template, copy that template to the Virtual Hard Disks folder of the VM you just created and rename it as per your standard. Attach the disk to the SCSI controller (Gen 2) or the IDE controller (Gen 1), change the boot order so that the hard drive is the first boot entry, and connect the NIC to a vSwitch. Now you can start your VM; a minimal sketch of these steps is given below.
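    The sketch below covers those manual steps for a Gen 2 VM; the template path and vSwitch name are placeholders.

    $vhdx = "$vmpath\$vmname\Virtual Hard Disks\$vmname.vhdx"
    Copy-Item -Path <path_to_sysprepped_template.vhdx> -Destination $vhdx

    # attach the disk to the SCSI controller and make it the first boot device (Gen 2)
    Add-VMHardDiskDrive -VMName $vmname -ControllerType SCSI -Path $vhdx
    Set-VMFirmware -VMName $vmname -FirstBootDevice (Get-VMHardDiskDrive -VMName $vmname)

    # connect the NIC to a vSwitch and power on
    Connect-VMNetworkAdapter -VMName $vmname -SwitchName <vSwitch_name>
    Start-VM -Name $vmname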


    Reference:

    techthoughts
    starwindsoftware

    Saturday, July 9, 2016

    How Live Migration works on Hyper-V

    1.Live migration setup

    • Source host creates a TCP connection with the destination host
    • VM configuration data is transferred to destination host
    • A skeleton VM is setup at destination host
    • Physical memory is allocated to that VM
    2.Memory pages are transferred from the source to destination host

    • In-state memory (working set) of the VM will be transferred first
    • Default page size is 4 KB
    • All utilized pages will be copied to destination
    • Modified pages are tracked by source and marked as being modified
    • Several iterations of the copy process will take place
    3.Remaining modified pages will be transferred to destination host

    • VM is then registered and the device state is transferred
    • Fewer modified pages mean a faster migration
    • Total working set is copied to destination
    4.Move storage handle from source to destination

    • Until this step, the VM at the destination host is not online
    5.Once control of storage is transferred, VM will be online and resumed at destination

    • Now the VM is completely migrated and running on destination host
    6.Network cleanup

    • A message is sent to the physical network switch, causing it to relearn the MAC address of the migrated VM
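    To actually trigger a live migration with PowerShell, here is a minimal sketch. The VM and host names are placeholders, and it assumes live migration, authentication and migration networks are already configured on both hosts.

    Enable-VMMigration                                  # on both source and destination hosts, if not already enabled
    Move-VM -Name <VM_name> -DestinationHost <destination_host_name>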

    Sunday, April 26, 2015

    VM Resource Metering


    'VM Resource Metering' is a feature in Windows Server 2012 R2, which will help us to keep track of the resources consumed by virtual machines. By default this feature is disabled.

    You can enable metering for a particular VM using the PowerShell command: Get-VM <virtual machine name> | Enable-VMResourceMetering

    If you want to enable it for all VMs, you can use: Get-VM | Enable-VMResourceMetering

    You can view the resource usage of all virtual machines using: Get-VM | Measure-VM

    You can sort the list using: Get-VM | Measure-VM | sort AvgRAM -Descending
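    You can also report on a single VM, reset the counters, or turn metering off (the VM name is a placeholder):

    Measure-VM -VMName <virtual machine name> | Format-List
    Reset-VMResourceMetering -VMName <virtual machine name>
    Disable-VMResourceMetering -VMName <virtual machine name>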