
Friday, March 15, 2019

VMFS vs VVOL

Let's start with a quick comparison.

VMFS
  • LUN-centric approach
  • Pre-provisioning of LUNs
  • Multiple datastores needed for different performance capabilities
  • Management difficulties, as a single VM may span multiple datastores

VVOL
  • VM-centric
  • vSphere is aware of array capabilities through the VASA provider
  • No pre-provisioning
  • One VVOL datastore can represent the whole array
  • Storage policies can be applied at the per-VMDK level
  • Some vSphere operations are offloaded to the array using VAAI (full cloning, snapshots)
  • VVOL snapshots are faster (different from traditional snapshots with redo logs)

Note: The explanations below regarding the SAN are based on Dell EMC Unity (hybrid array).

VMFS environment

A traditional VMFS datastore setup is shown below. Say you have two VMs: one with 3 virtual disks and a second with 2 virtual disks. Your array has 4 storage pools with specific capabilities, and based on those capabilities the pools are classified into 4 service levels (Platinum, Gold, Silver and Bronze). In this case you need to provision 4 datastores/LUNs from the respective pools to meet the requirements of the two virtual machines.

VMFS

You can clearly see that the first VM spans 3 datastores and the second VM spans 2 datastores. If your environment has hundreds or thousands of VMs, management becomes too complicated. In this case, as the datastores are properly named, the vSphere admin can easily identify the service level/capability of a specific datastore. But if a naming convention is not followed, the vSphere admin has to contact the storage admin to find out the capabilities of that specific datastore/LUN. Again, lots of communication is needed, making it a complex process!

VVOL environment

Now let's have a look at the VVOL environment, shown in the figure below. You have the same array, but instead of provisioning datastores/LUNs from the respective pools, here we create one vvol datastore. The array has 4 storage pools, each with specific capabilities like drive type, RAID level, tiering policy, FAST Cache ON/OFF etc., and they are classified into 4 service levels: Platinum, Gold, Silver, and Bronze. So each pool has its own capability profile. You can provide any name, but here I just gave the same name as the service level of each pool.

Pool_01 -> Service level: Platinum -> Capability profile: Platinum
Pool_02 -> Service level: Silver   -> Capability profile: Silver
Pool_03 -> Service level: Bronze   -> Capability profile: Bronze
Pool_04 -> Service level: Gold     -> Capability profile: Gold


Note: vvol datastores cannot be created without a capability profile.

Next steps are:

  • Create the vvol datastore on the SAN
  • Provide a name for the vvol datastore
  • Add the capability profiles that need to be part of the vvol datastore
  • In this case, all 4 capability profiles are part of the vvol datastore
  • You can also limit the amount of space the vvol datastore will use from each capability profile
  • Once that's done, the vvol datastore is created on the SAN
VVOL

At this point, you can go ahead and configure host access to this vvol datastore. Now you have to let your vSphere environment know about the capabilities of the storage array. That is done by registering a new storage provider on your vCenter server, making use of the VASA (vSphere APIs for Storage Awareness) provider. Once registered, the vSphere environment communicates with the VASA provider through the array management interface (OOB network), which forms the control path. The next step is creating a vvol datastore on the ESXi host to which you provided access earlier. For each vvol datastore created on the storage array, two protocol endpoints (PEs) are automatically generated to communicate with an ESXi host, forming the data path. If you create another vvol datastore on the array and provide access to the same ESXi host, two more PEs will be created. PEs act like targets. And on the ESXi side, if you look at the storage devices, you can see 2 proxy LUNs which connect to the respective PEs.
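
If you prefer to register the provider with PowerCLI instead of the vCenter UI, a minimal sketch is given below. The vCenter name, provider name, URL and credentials are all placeholders; the exact VASA URL is the one published by your array.

# Minimal sketch with placeholder names: register the array's VASA provider in vCenter
# so that vSphere becomes aware of the array capabilities (requires VMware.PowerCLI)
Connect-VIServer -Server 'vcenter.lab.local'

# Placeholder URL and credentials; use the VASA URL and account published by your array
New-VasaProvider -Name 'Unity-VASA' -Username 'vasa_user' -Password 'vasa_password' -Url 'https://unity.lab.local:8443/vasa/version.xml'

# Verify the provider registration
Get-VasaProvider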

Now you have to create VM storage policies based on the service levels. Here you have 4 service levels, so you have to create 4 storage policies. Assign storage policies on a per-VMDK basis as per your requirements.

Eg: Storage_Policy_Gold -> VMDK 02 (second VM) -> it will be placed in Pool_04 automatically
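
A rough PowerCLI sketch of that flow is given below. The capability name, the 'Gold' value and the VM name 'VM02' are placeholders; what your array really exposes depends on the VASA provider, so check Get-SpbmCapability first.

# Rough sketch with placeholder names: build a 'Gold' policy from a VASA-advertised
# capability and attach it to one VMDK (uses the SPBM cmdlets in VMware.PowerCLI)
$cap  = Get-SpbmCapability | Where-Object {$_.Name -match 'ServiceLevel'} | Select-Object -First 1
$rule = New-SpbmRule -Capability $cap -Value 'Gold'
$set  = New-SpbmRuleSet -AllOfRules $rule
$pol  = New-SpbmStoragePolicy -Name 'Storage_Policy_Gold' -AnyOfRuleSets $set

# Attach the policy to the second VMDK of the second VM ('VM02' is a placeholder)
$vmdk2 = Get-VM -Name 'VM02' | Get-HardDisk | Select-Object -Index 1
Get-SpbmEntityConfiguration -HardDisk $vmdk2 | Set-SpbmEntityConfiguration -StoragePolicy $pol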

So in this case, instead of having 4 VMFS datastores we needed only 1 VVOL datastore which has all the required capabilities. This means you don't need to provision more LUNs from the SAN. Storage management becomes easy with just one VVOL datastore, and there is more granularity with SPBM at the individual VMDK level. With VVOLs the SAN is aware of all the VMs and their corresponding files hosted on it. This makes space reclamation very easy and straightforward: the moment a VM or a VMDK is deleted, that space is immediately freed, as the SAN has complete insight into the virtual machines stored on it. Data mobility between different storage pools based on service level becomes effortless, as it is handled directly by the SAN internally based on SPBM. Altogether, VVOL simplifies storage/LUN management.

Hope it was useful. Cheers!




Wednesday, December 12, 2018

Inactive or missing VMware VMFS datastore

Today I came across a situation where one of the shared VMFS datastores in a 4 node ESXi 6.5 cluster was found missing/ inactive after a planned reboot. This post is about the steps I followed to resolve this issue/ re-mount the inactive shared datastore.

On an ESXi node, to list the datastores that are available to mount: esxcfg-volume -l
To mount the available datastore (persistent across reboots): esxcfg-volume -M <UUID>
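
The same can also be done from PowerCLI with Get-EsxCli if you prefer; a hedged sketch, where the host name and UUID are placeholders:

# Hedged PowerCLI equivalent of esxcfg-volume, using the esxcli storage vmfs snapshot namespace
$esxcli = Get-EsxCli -VMHost 'esxi01.lab.local' -V2

# List unresolved VMFS volumes (detected as snapshot/replica copies) available to mount
$esxcli.storage.vmfs.snapshot.list.Invoke()

# Mount a volume by its UUID (persistent across reboots, like esxcfg-volume -M)
$esxcli.storage.vmfs.snapshot.mount.Invoke(@{volumeuuid = '5xxxxxxx-xxxxxxxx-xxxx-xxxxxxxxxxxx'})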

Sample screenshot is given below:


Hope it was useful. Cheers!

Reference:
https://community.spiceworks.com/topic/2108624-missing-datastore-after-upgrade-from-esxi-6-0-to-6-5

Saturday, November 17, 2018

Real time VMware VM resource monitoring using PowerShell

This post is about monitoring the resource usage of a list of virtual machines hosted on VMware ESXi clusters using PowerCLI. The output format is shown below and gets refreshed automatically every few seconds.

Prerequisites:
  • VMware.PowerCLI module should be installed on the node from which you are running the script
  • You can verify using: Get-Module -Name VMware.PowerCLI -ListAvailable
  • If not installed, you can find the latest version from the PSGallery: Find-Module -Name VMware.PowerCLI
  • Install the module: Install-Module -Name VMware.PowerCLI
Note:
  • I am using PowerCLI Version 11.0.0.10380590

Latest version of the project and code available at: github.com/vineethac/vmware_vm_monitor

    Sample screenshot of output:


    Notes:

VMware guidance: CPU Ready and Co-Stop values per core greater than 5% and 3% respectively could be a performance concern.
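
To illustrate how a per-core CPU Ready figure can be derived, here is a minimal sketch (not the project's script); it assumes an existing Connect-VIServer session and uses the 20-second real-time sampling interval.

# Minimal sketch: approximate CPU Ready % per core for powered-on VMs using real-time stats
$intervalMs = 20000   # real-time samples cover a 20-second interval
Get-VM | Where-Object {$_.PowerState -eq 'PoweredOn'} | ForEach-Object {
    $vm = $_
    # Aggregate ready time (ms) across all vCPUs is reported with an empty Instance
    $readyMs = (Get-Stat -Entity $vm -Stat 'cpu.ready.summation' -Realtime -MaxSamples 1 |
                Where-Object {$_.Instance -eq ''}).Value
    [pscustomobject]@{
        VM              = $vm.Name
        vCPU            = $vm.NumCpu
        ReadyPctPerCore = [math]::Round(($readyMs / ($intervalMs * $vm.NumCpu)) * 100, 2)
    }
} | Sort-Object ReadyPctPerCore -Descending | Format-Table -AutoSize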

    Hope this will be useful for resource monitoring as well as right sizing of VMs. Cheers!


    Saturday, October 20, 2018

    Real time VMware datastore performance monitoring using PowerShell

I had a scenario where we had to monitor the real-time performance statistics of multiple shared VMFS datastores that are part of a multi-node VMware ESXi cluster. So this post is about a short script which uses VMware PowerCLI to get read/write IOPS and latency details of these shared datastores across all nodes in the cluster. The output format is shown below and gets refreshed automatically every few seconds.

    Prerequisites:
    • VMware.PowerCLI module should be installed on the node from which you are running the script
    • You can verify using: Get-Module -Name VMware.PowerCLI -ListAvailable
    • If not installed, you can find the latest version from the PSGallery: Find-Module -Name VMware.PowerCLI
    • Install the module: Install-Module -Name VMware.PowerCLI
    Note:
    • I am using PowerCLI Version 11.0.0.10380590


    Latest version of the project and code available at: github.com/vineethac/datastore_perfmon
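
The core idea is roughly as sketched below; this is not the exact project code, and 'MyCluster' is a placeholder cluster name.

# Rough sketch: latest real-time read/write IOPS and latency per shared VMFS datastore,
# collected from every host in the cluster ('MyCluster' is a placeholder)
$vmhosts = Get-Cluster -Name 'MyCluster' | Get-VMHost

# Get-Stat reports datastores by VMFS UUID in the Instance field, so build a UUID-to-name map
$uuidToName = @{}
Get-Datastore | Where-Object {$_.Type -eq 'VMFS'} | ForEach-Object {
    $uuidToName[$_.ExtensionData.Info.Vmfs.Uuid] = $_.Name
}

$metrics = 'datastore.numberReadAveraged.average','datastore.numberWriteAveraged.average',
           'datastore.totalReadLatency.average','datastore.totalWriteLatency.average'

Get-Stat -Entity $vmhosts -Stat $metrics -Realtime -MaxSamples 1 |
    Where-Object {$uuidToName.ContainsKey($_.Instance)} |
    Select-Object @{N='Host';E={$_.Entity.Name}},
                  @{N='Datastore';E={$uuidToName[$_.Instance]}},
                  MetricId, Value, Unit |
    Sort-Object Datastore, Host, MetricId | Format-Table -AutoSize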

    Sample screenshot of output:


    Hope it was useful. Cheers!


    Monday, January 29, 2018

    PowerCLI to remove orphaned VM from inventory

Recently I came across a situation where one of the VMs running on ESXi 6.0 was orphaned. This happened after I updated the ESXi host with a patch. All the files associated with that VM were still available in the datastore, but the VM was displayed in the inventory as orphaned. There was no option available in the web console to remove the VM from the inventory. In order to register the original VMX file, the orphaned VM needs to be removed from the inventory first.

    Steps to resolve this issue:

    1. Connect to vCenter server using PowerCLI: Connect-VIServer 192.168.14.15
    2. Get list of VMs: Get-VM | Select Name



    3. In my case "HCIBench_1.6.5.1_14G" was orphaned.
    4. Remove the orphaned VM: Remove-VM HCIBench_1.6.5.1_14G


5. As you can see in the above screenshot, the orphaned VM is removed from the inventory. Now you can go to the datastore where the original VM files are stored, click on the VMX file and register it. (In my case, when I tried to register the VM it resulted in an error mentioning that the item already exists. I had to reboot the vCenter server to fix this; after rebooting vCenter I was able to register the VMX file of the VM.)
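
If you prefer to stay in PowerCLI for the re-registration as well, New-VM can register an existing VMX file; a rough sketch with placeholder datastore, folder and host names:

# Rough sketch with placeholder names: register an existing VMX file from a datastore
New-VM -VMFilePath '[Datastore01] HCIBench_1.6.5.1_14G/HCIBench_1.6.5.1_14G.vmx' `
       -VMHost (Get-VMHost -Name 'esxi01.lab.local')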

    Hope it was helpful to you. Cheers!

    Reference: blog.vmtraining.net

    Thursday, August 31, 2017

    Benchmarking hyper-converged vSphere environment using HCIBench

In this article I will briefly explain how to use HCIBench for storage benchmarking of vSphere deployments. I recently had a chance to try HCIBench on a ScaleIO cluster running on a 3-node ESXi 6.0 cluster. The latest version of the tool can be downloaded from here.

I conducted the tests with HCIBench version 1.6.5.1. Installation is very straightforward: you can deploy it on one of the ESXi nodes using the "Deploy OVF Template" option.
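
If you prefer PowerCLI over the web client wizard, the deployment can be sketched roughly as below. This is a generic OVA deployment sketch rather than anything HCIBench-specific: the file path, host and datastore names are placeholders, and the appliance's OVF properties (networks, management IP, root password) should be reviewed in the returned configuration object before importing.

# Generic OVA deployment sketch with placeholder names; review $ovfConfig before importing
$ovaPath   = 'C:\Temp\HCIBench-1.6.5.1.ova'
$vmhost    = Get-VMHost -Name 'esxi01.lab.local'
$datastore = Get-Datastore -Name 'Datastore01'

$ovfConfig = Get-OvfConfiguration -Ovf $ovaPath
$ovfConfig.ToHashTable()      # inspect the appliance's OVF properties and fill them in

Import-VApp -Source $ovaPath -OvfConfiguration $ovfConfig -Name 'HCIBench' `
            -VMHost $vmhost -Datastore $datastore -DiskStorageFormat Thin

If you use the web client wizard instead, the steps are: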

    *Browse and select the template.
    *Provide a name and select location.
    *Select a resource to host HCIBench.
    *Review details and click Next.
    *Accept license agreements.
    *Select a datastore to store HCIBench files.
    *Select networks (as shown below).


    Note:
    Public network is the network through which you can access management web GUI of HCIBench.
    Private network is the network where HCIBench test VMs will be deployed.

*Provide management IP details (I am using a static IP) and the root password for HCIBench, as shown below.


    *Click Next and Finish.
    *Once the deployment is complete, you can power on the HCIBench VM.
    *Web GUI can be accessed by: <IP address>:8080
    *Provide root credentials to access the configuration page.

On the configuration page, fill in the necessary details as given below.

    *vCenter IP.
    *vCenter username and password.
    *Datacenter name.
    *Cluster name.
    *Network name (the network on which HCIBench test VMs will be deployed).

    Note:
The network onto which the HCIBench test VMs will be deployed should have a DHCP server to provide IPs to the test VMs. If you do not have a DHCP server available in that network, select "Set Static IP for Vdbench VMs" as shown below. In this case HCIBench will provide IPs to the test VMs through the private network that you chose in the earlier installation step. Here, I don't have a DHCP server in the VMNet-SIO network, so I connected eth1 of HCIBench to VMNet-SIO (private network) and checked the option "Set Static IP for Vdbench VMs".

    *Provide datastore names.


    *Provide ESXi host username and password.
    *Total number of VMs.
    *Number of data disks per VM.
    *Size of each data disk.

    Note:
Make sure to keep a sensible ratio between the total number of datastores, the number of ESXi nodes and the number of test VMs. Here I have 3 ESXi nodes and 3 datastores in the cluster, so if I deploy 9 VMs for the test, each ESXi node will host 3 test VMs, one on each datastore.


    *Give a name for the test.
    *Select the test parameters.
    *You can use the Generate button to customize the parameters (shown in next screenshot).


    *Input parameters as per your requirement and submit.


*If you are doing this for the very first time, you will find the option below at the end of the page to upload the Vdbench ZIP file. You can upload the file if you have one, or download it from the Oracle website using the Download button.
*After providing the necessary inputs, save the configuration and validate. If the validation passes, you can click on Test to begin the benchmarking run.


    *Once the test is complete, click on the Result button and it will take you to the directory where result files are stored. 

    Hope the article was useful to you. Happy benchmarking. Cheers!

    Tuesday, December 8, 2015

Installing Nutanix CE on nested ESXi 5.5 without using SSD

Nutanix Community Edition (CE) is a great, totally free way to test and learn Nutanix; it includes the software-based controller VM (CVM) and the Acropolis hypervisor. This article will help you install Nutanix CE on nested ESXi 5.5 without using SSD drives.

    -Download links :


    -Extract the downloaded *.gz file and you will get a *.img file. Rename that file to ce-flat.vmdk.

    
    Files
-Now create a descriptor file as given below:

    # Disk DescriptorFile
    version=4
    encoding="UTF-8"
    CID=a63adc2a
    parentCID=ffffffff
    isNativeSnapshot="no"
    createType="vmfs"

    # Extent description
    RW 14540800 VMFS "ce-flat.vmdk"
    # The Disk Data Base
    #DDB

    ddb.adapterType = "lsilogic"
    ddb.geometry.cylinders = "905"
    ddb.geometry.heads = "255"
    ddb.geometry.sectors = "63"
    ddb.longContentID = "2e046b033cecaa929776efb0a63adc2a"
    ddb.uuid = "60 00 C2 9b 69 2f c9 76-74 c4 07 9e 10 87 3b f9"
    ddb.virtualHWVersion = "10" 

    -Copy the above content, paste it in a text file and save it as ce.vmdk.
-Create a VM on ESXi 5.5
    
    VM properties
    -RAM (min) 16 GB
    -CPU (min) 4 (here I used 2 virtual sockets * 6 cores per socket)
    -Network adapter E1000
    -VM hardware version 8
    -Guest OS selected as Linux CentOS 4/5/6 (64-bit)
    -Initially the VM is created without a hard disk
    -Enable CPU/ MMU virtualization


    CPU/ MMU Virtualization
    -Upload ce.vmdk and ce-flat.vmdk to the datastore

    Datastore
    
    
    -Add hard disk (ce.vmdk) to the VM
     
-Add 2 more hard disks to the VM of size 500 GB each. So now we have a total of 3 hard disks.

    Hard disk 1 - ce.vmdk - SCSI (0:0)
    Hard disk 2 - 500 GB - SCSI (0:1) (this drive will be emulated as SSD drive)
    Hard disk 3 - 500 GB - SCSI (0:2)

-Download the *.vmx file from the datastore, add the line vhv.enable = "TRUE" to it, and upload it back to the datastore. This enables nested hypervisor installation on ESXi 5.5.

-To emulate an SSD, edit VM settings > Options > Advanced > General > Configuration Parameters > Add Row, and add the name scsi0:1.virtualSSD with value 1
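
As an alternative to hand-editing the VMX, both tweaks can be applied from PowerCLI with New-AdvancedSetting while the VM is powered off; a small sketch where 'NutanixCE' is a placeholder VM name:

# Hedged sketch: apply the nested-virtualization and virtual-SSD flags via PowerCLI
# instead of editing the VMX manually ('NutanixCE' is a placeholder; VM must be powered off)
$vm = Get-VM -Name 'NutanixCE'
New-AdvancedSetting -Entity $vm -Name 'vhv.enable'         -Value 'TRUE' -Confirm:$false
New-AdvancedSetting -Entity $vm -Name 'scsi0:1.virtualSSD' -Value '1'    -Confirm:$false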

-Power on the VM and Nutanix will start the installation wizard
-Log in as root with password: nutanix/4u
-vi /home/install/phx_iso/phoenix/sysUtil.py
-Press Insert (or i) and edit SSD_rdIOPS_thresh = 50 and SSD_wrIOPS_thresh = 50
-Save the file (press Esc, then :wq!) and reboot
-Log in with 'install' and hit Enter (no password)
-Click proceed
-The installer will now verify all prerequisites, and then you can enter the following details
-Host IP, subnet mask, gateway
-CVM IP, subnet mask, gateway
-Select create single-node cluster and enter a DNS IP (8.8.8.8)
-Scroll down through the license agreement, select accept and click proceed
-Set promiscuous mode on the vSwitch to Accept
-Once the installation is complete, you can access the PRISM dashboard with the CVM IP address and NEXT account credentials

    
    PRISM dashboard

    Monday, March 30, 2015

Choosing a NIC for your VM in VMware ESXi

If you let VMware choose a network adapter automatically while creating a VM, it will select a compatible NIC type, but it may not be the one with the best performance. Each generation of virtual network adapter has different features and performance levels. Considering ESXi 5 or higher versions, the following network adapters are available, listed in order of preference from best to worst.

    -VMXNET 3
    -VMXNET 2 (Enhanced)
    -E1000
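
To check which adapter type your existing VMs use, and to switch one to VMXNET 3, here is a quick hedged PowerCLI sketch ('MyVM' is a placeholder name; VMXNET 3 requires VMware Tools in the guest):

# Report the NIC type of every VM, then switch a placeholder VM to VMXNET 3
Get-VM | Get-NetworkAdapter | Select-Object Parent, Name, Type | Sort-Object Parent

# Changing the type replaces the adapter, so expect the guest to re-detect its NIC
Get-VM -Name 'MyVM' | Get-NetworkAdapter | Set-NetworkAdapter -Type Vmxnet3 -Confirm:$false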

Reference: VMware

How to run Hyper-V in a VM on ESXi 5.5

You can now virtualize Hyper-V as a VM running on VMware ESXi 5.5. Initially, if you try to install the Hyper-V role, you will get an error message stating that "Hyper-V cannot be installed". Follow the steps below to get it installed.

    1. Enable CPU/ MMU virtualization in VM settings.


2. Power off the Hyper-V VM and remove it from the inventory. Browse the datastore and download the VMX configuration file of the VM to your local machine. Open it using WordPad and change the guestOS line to:

guestOS = "winhyperv"

Save the file and upload it back to the datastore. Now right-click on the VMX file and add it to the inventory. Power on the VM and install the Hyper-V role. It should now install without any error.
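
The same guest OS change can also be made from PowerCLI while the VM is powered off. This is a hedged sketch: 'HyperV-VM' is a placeholder name, and 'windowsHyperVGuest' is my assumption for the guest ID that corresponds to guestOS = "winhyperv", so verify it in your environment.

# Hedged sketch: set the guest OS type via PowerCLI instead of editing the VMX by hand
# ('HyperV-VM' is a placeholder; 'windowsHyperVGuest' is assumed to map to guestOS = "winhyperv")
Set-VM -VM (Get-VM -Name 'HyperV-VM') -GuestId 'windowsHyperVGuest' -Confirm:$false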

Note:
This is NOT recommended in a production environment. Use it only for self-study and testing in labs.