Sunday, July 10, 2016

What happens when you enable Intel VT

Let's consider the difference between virtualized and non-virtualized platforms.


Here VMM refers to the hypervisor. The processor provides different privilege levels for instruction execution; these levels are called rings (Ring 0, 1, 2 and 3).

When you enable Intel VT:
  • In a non-virtualized environment the OS runs in Ring 0, and a single operating system controls all hardware resources
  • Four privilege levels (rings) are employed on VT platforms
  • When VT is enabled, the hypervisor runs in Ring 0 instead of an OS, and the guest OS runs in Ring 1 or Ring 3
  • VT allows the hypervisor to present each guest OS with a virtual machine (VM) environment that emulates the hardware environment the guest OS expects

When you enable Intel VT-x:
  • Intel VT-x is a hardware-assisted virtualization technology
  • Hardware support for processor virtualization enables system vendors to provide simple, robust, and reliable hypervisor software
  • VT-x consists of a set of virtual machine extensions (VMX) that support virtualization of the processor hardware for multiple software environments using virtual machines
  • A hypervisor written to take advantage of Intel® Virtualization Technology runs in a new CPU mode called "VMX root" mode, while the guest OS runs in "VMX non-root" mode. The VMM manages the virtual machines through the VM Exit and VM Entry mechanism
  • The hypervisor has its own privilege level (VMX root) where it executes
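
On a Windows host you can quickly check whether these extensions are available and enabled in the firmware. The snippet below is a minimal sketch using the built-in Get-ComputerInfo cmdlet (Windows 10 / Server 2016 or later):

    # List the Hyper-V related hardware requirements reported by Windows.
    # These properties indicate whether VM monitor mode extensions (VT-x),
    # SLAT and firmware-enabled virtualization are present on the host.
    Get-ComputerInfo -Property "HyperV*"

    # Note: if the Hyper-V role is already installed and a hypervisor is
    # running, Windows may only report HyperVisorPresent = True instead of
    # the individual requirement properties.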

The figure below shows the difference in ring levels between Intel VT and Intel VT-x.


Reference: Intel

Saturday, July 9, 2016

Anatomy of Hyper-V cluster debug log

  • Get-ClusterLog dumps the events to a text file
  • Location: C:\Windows\Cluster\Reports\Cluster.log
  • It captures the last 72 hours of logs
  • The cluster log is in GMT (to accommodate geographically spanned multi-site clusters)
  • Usage: Get-ClusterLog -TimeSpan (which gives the last "x" minutes of logs)
  • You can also set the level of logging
  • Set-ClusterLog -Level 3 (level 3 is the default)
  • Levels range from 0 to 5 (increasing the logging level has a performance impact)
  • Level 5 provides the highest level of detail
  • Log format:
    [ProcessID] [ThreadID] [Date/Time] [INFO/WARN/ERR/DBG] [ResourceType] [ResourceName] [Description]
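
For example, the commands below pull a fresh log from every node and adjust the logging level; the 15-minute window and destination folder are just illustrations:

    # Generate the cluster log for the last 15 minutes on all nodes and
    # copy the files to a single folder (C:\Temp is only an example path)
    Get-ClusterLog -TimeSpan 15 -Destination C:\Temp

    # Write timestamps in local time instead of GMT
    Get-ClusterLog -TimeSpan 15 -UseLocalTime

    # Check the current log level, raise it to the most verbose setting,
    # and revert to the default once troubleshooting is finished
    (Get-Cluster).ClusterLogLevel
    Set-ClusterLog -Level 5
    Set-ClusterLog -Level 3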

Troubleshooting Live Migration issues on Hyper-V

  1. Check whether enough resources (CPU, RAM) are available on the destination host
  2. Make sure all nodes in the cluster follow the same naming standard for vSwitches
  3. Check whether NUMA spanning is enabled. If NUMA spanning is disabled, the VM must fit entirely within a single physical NUMA node, or it will not start, restore or migrate
  4. Constrained delegation should be configured for all servers in the cluster if you are using the Kerberos authentication protocol for live migration
  5. Check that live migration is enabled in the Hyper-V settings (see the sketch after this list)
  6. Verify the Hyper-V-High-Availability logs in Event Viewer
  7. Finally, check the cluster debug log (Get-ClusterLog -TimeSpan) in C:\Windows\Cluster\Reports\Cluster.log
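
The commands below are a rough sketch of how to verify items 1, 4, 5 and 6 from a PowerShell prompt on each Hyper-V node:

    # 1. Processor and memory capacity on the destination host
    Get-VMHost | Select-Object Name, LogicalProcessorCount, MemoryCapacity

    # 5. Verify live migration is enabled and how it is configured
    Get-VMHost | Select-Object VirtualMachineMigrationEnabled,
        VirtualMachineMigrationAuthenticationType, MaximumVirtualMachineMigrations

    # 4/5. Enable live migration and, if constrained delegation is in place,
    # switch the authentication protocol to Kerberos
    Enable-VMMigration
    Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos

    # 6. Recent entries from the Hyper-V high availability admin log
    # (channel name may vary slightly between Windows versions)
    Get-WinEvent -LogName Microsoft-Windows-Hyper-V-High-Availability-Admin -MaxEvents 20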

How Live Migration works on Hyper-V

1. Live migration setup

  • The source host creates a TCP connection with the destination host
  • VM configuration data is transferred to the destination host
  • A skeleton VM is set up on the destination host
  • Physical memory is allocated to that VM
2. Memory pages are transferred from the source to the destination host

  • The in-state memory (working set) of the VM is transferred first
  • The default page size is 4 KB
  • All utilized pages are copied to the destination
  • Pages modified during the copy are tracked by the source and marked as modified
  • Several iterations of the copy process take place
3. Remaining modified pages are transferred to the destination host

  • The VM is then registered and the device state is transferred
  • Fewer modified pages means a faster migration
  • At this point the entire working set has been copied to the destination
4. The storage handle is moved from the source to the destination host

  • Until this step, the VM on the destination host is not online
5. Once control of the storage is transferred, the VM is brought online and resumed on the destination

  • The VM is now completely migrated and running on the destination host
6. Network cleanup

  • A message is sent to the physical network switch, causing it to relearn the MAC address of the migrated VM
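
To put the steps above in context, a live migration is typically started with one of the commands below; the VM, node and path names are placeholders:

    # Clustered VM: move the VM role to another node in the failover cluster
    Move-ClusterVirtualMachineRole -Name "VM01" -Node "HV-NODE2" -MigrationType Live

    # Standalone Hyper-V hosts: shared-nothing live migration including storage
    Move-VM -Name "VM01" -DestinationHost "HV-HOST2" -IncludeStorage -DestinationStoragePath "D:\VMs\VM01"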

Wednesday, May 11, 2016

Nutanix Certifications

For anyone who is keen to take the Nutanix Administration course and become a certified NPP (Nutanix Platform Professional), I strongly recommend visiting their education portal http://nuschool.nutanix.com and enrolling. I completed the course and cleared the NPP Certification Exam 4.5 last Monday. It's a decent self-paced course that takes around 8-10 hours.

The exam has 50 objective questions, and you need a minimum of 80% to pass. If you are not successful on the first attempt, don't worry; you have two more chances. On my first attempt I scored only 3523/5000, and on my second attempt I scored 4500/5000.

Once you pass the NPP, you can apply for the next level of certification, which is NSS (Nutanix Support Specialist), and then for the ultimate level, NPX (Nutanix Platform Expert).

Tuesday, March 29, 2016

Hyper-V on Nutanix

This article briefly explains Hyper-V on the Nutanix virtual computing platform. The figure below shows an 'N' node Nutanix architecture, where each node is an independent server unit with a hypervisor, processor, memory and local storage (a combination of SSD and HDD). Along with this, each node runs a CVM (Controller VM) through which storage resources are accessed.

'N' node Hyper-V over Nutanix architecture
Local storage from all the nodes is combined into a virtualized, unified storage pool by the Nutanix Distributed File System (NDFS). This can be thought of as an advanced NAS that delivers a unified storage pool across the cluster, with features like striping, replication, error detection, failover and automatic recovery. From this pool, shared storage resources are presented to the hypervisors using containers.


NDFS - Logical view of storage pool and containers


As mentioned above, each Hyper-V node has its own CVM, which is shown below.
Hyper-V node with a CVM

PRISM console

View of storage pool in PRISM
View of containers in PRISM
Containers are mounted on the Hyper-V hosts as SMB 3.0 based file shares where virtual machine files are stored. This is shown below.

SMB 3.0 share path to store VHDs and VM configuration files

Once the share path is configured properly as shown above, you can create virtual machines on your Hyper-V server.
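
For example, once the container is available as an SMB 3.0 share, a VM can be created with its configuration files and VHDX placed directly on that share. The share path and VM name below are hypothetical:

    # \\NTNX-CLUSTER\container1 is a made-up share path; replace it with the
    # SMB share exposed by your Nutanix container
    New-VM -Name "TestVM01" `
           -Generation 2 `
           -MemoryStartupBytes 4GB `
           -Path "\\NTNX-CLUSTER\container1\TestVM01" `
           -NewVHDPath "\\NTNX-CLUSTER\container1\TestVM01\TestVM01.vhdx" `
           -NewVHDSizeBytes 60GB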

Tuesday, January 26, 2016

Zoning and LUN masking

Zoning and LUN masking are used to isolate SAN traffic and restrict access to storage devices. For example, you might manage separate zones for test and production environments so that they do not interfere with each other. If you want to restrict certain hosts from accessing the storage devices, you have to set up zoning, which is generally done at the FC switch level. Zoning is of two types: soft zoning and hard zoning.

Soft zoning is based on the WWN of the device, while hard zoning is configured at the FC switch port level. Soft zoning offers greater flexibility: even if you move a device from one port to another on the FC switch, it keeps the same access rights, because the restriction is based on the device's WWN. The downside is that WWN spoofing can be used to gain access to zones you aren't supposed to see. Hard zoning at the switch port level gives tighter access control but less flexibility compared to soft zoning; if you now move the device from one port to another, it will no longer be able to see its zone partners. On the other hand, you can't spoof a physical port unless you are physically standing at the switch.

Once zoning is done, you can further restrict access to SAN LUNs by using LUN masking. This prevents certain hosts from seeing specific LUNs hosted on the storage device. LUN masking is done at the storage controller level or at the OS level of the storage device. It is recommended to use zoning and LUN masking together to secure storage traffic.
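
As an illustration, on Windows Server the Storage module can manage masking sets when the array is registered through a storage management provider (SMI-S/SMP). This is only a sketch; the subsystem, disk, initiator and target names are made-up examples, and the exact workflow depends on your array vendor:

    # List masking sets already defined on registered storage subsystems
    Get-MaskingSet

    # Create a masking set that exposes one virtual disk (LUN) to a single host HBA
    New-MaskingSet -StorageSubSystemFriendlyName "ArraySubsystem01" `
                   -FriendlyName "Prod-Host01-Mask" `
                   -VirtualDiskNames "ProdLUN01" `
                   -InitiatorAddresses "10000000C9A1B2C3" `
                   -TargetPortAddresses "50060E8005A1B2C3"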