Friday, May 24, 2019

VMware PowerCLI 101 - Part2 - Working with vCenter server

In my previous blog, we discussed how to install the PowerCLI module and connect to a stand-alone ESXi host. Now let's start by connecting to the vCenter server.

Connect-VIServer <IP of vCenter server>

Get the list of data centers present under this vCenter server: Get-Datacenter


Get the list of clusters under a datacenter: Get-Datacenter DC01 | Get-Cluster

To verify the status of HA, DRS, and vSAN of a cluster:
(Get-Cluster Cluster01) | select Name,DrsEnabled,DrsAutomationLevel,HAEnabled,HAFailoverLevel,VsanEnabled


Get the list of ESXi hosts part of a specific cluster: 
Get-Datacenter DC01 | Get-Cluster Cluster01 | Get-VMHost


To get the list of VMs hosted on a specific ESXi host:
Get-Datacenter DC01 | Get-Cluster Cluster01 | Get-VMHost 192.168.105.11 | Get-VM


To get a list of powered on and powered off VMs:
(Get-Cluster Cluster01 | Get-VM).where{$PSItem.PowerState -eq "PoweredOn"}
(Get-Cluster Cluster01 | Get-VM).where{$PSItem.PowerState -eq "PoweredOff"}


A more efficient way of doing this in a single pass is given below:
$vmson, $vmsoff = (Get-Cluster Cluster01 | Get-VM).where({$PSItem.PowerState -eq "PoweredOn"}, 'split')

$vmson will have the list of VMs that are PoweredOn and $vmsoff will have the list of VMs that are PoweredOff.
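
For example, the two collections can then be used directly:

$vmson.Count
$vmsoff.Count
$vmson | Select-Object Name, PowerState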


Cluster level inventory:
Get-Cluster | Get-Inventory

Datastore details of a cluster:
Get-Cluster | Get-Datastore | select Name, FreeSpaceGB, CapacityGB, FileSystemVersion, State
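
If you also want the percentage of free space per datastore, a calculated property can be added. This is just a sketch using the same FreeSpaceGB and CapacityGB properties returned by Get-Datastore:

Get-Cluster Cluster01 | Get-Datastore | Select-Object Name, CapacityGB, FreeSpaceGB, @{Name='FreePercent'; Expression={[math]::Round(($_.FreeSpaceGB / $_.CapacityGB) * 100, 1)}}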


Hope this was useful. Cheers! In the next post, we will talk about performing basic VM operations using PowerCLI.

Friday, May 17, 2019

VMware PowerCLI 101 - Part1 - Installing the module and working with stand-alone ESXi host

This blog post series is for all those who would like to kickstart and learn VMware PowerCLI from the very basics. Let's start with installing the PowerCLI module.

First, verify whether the VMware.PowerCLI module is already installed on your machine.

Get-Module VMware.PowerCLI -ListAvailable


In this case, as shown above, the VMware.PowerCLI module is not installed. Now let's find the latest version of the module available in the PowerShell Gallery.

Find-Module VMware.PowerCLI



11.2.0.12780525 is the latest version that is available in PSGallery.

To install this module use: Install-Module VMware.PowerCLI

Installation of this module may take a couple of minutes since this is the first time it is being installed. Once it is successfully installed, you can verify using: Get-Module VMware.PowerCLI -ListAvailable

To list all the available cmdlets: Get-Command -Module VMware*

Now, let's go ahead and connect to an ESXi host.

Connect-VIServer <IP of ESXi server>

Provide the necessary credentials, and once connected successfully the session details (server name, port, and user) will be displayed.


To get basic details of the ESXi host: Get-VMHost <IP of ESXi server>


To get more information about a cmdlet you can make use of the PowerShell Help System.
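
For example, to read the full help or just the examples of a cmdlet (you may need to run Update-Help once to download the latest help content):

Get-Help Get-VMHost -Full
Get-Help Get-VMHost -Examples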

You can find all the properties and methods available for an object using: Get-Member

Example: Get-VMHost 192.168.105.1 | Get-Member

To retrieve specific properties of an ESXi host object: 

$h1 = Get-VMHost 192.168.105.1

Model detail: $h1.Model
Processor type: $h1.ProcessorType


Another very useful property is "ExtensionData". Let's have a detailed look at this property.


This property provides a lot of information regarding system runtime, hardware health status, etc.


System and hardware health status:


Memory and CPU health status:
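
As a rough sketch, the same health information can be pulled from ExtensionData as shown below. These property paths come from the vSphere API's HostRuntimeInfo object, and the exact output may vary with the ESXi version and hardware.

# Overall system health sensor readings
$h1.ExtensionData.Runtime.HealthSystemRuntime.SystemHealthInfo.NumericSensorInfo

# Memory and CPU hardware status
$h1.ExtensionData.Runtime.HealthSystemRuntime.HardwareStatusInfo.MemoryStatusInfo
$h1.ExtensionData.Runtime.HealthSystemRuntime.HardwareStatusInfo.CpuStatusInfo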


Configuration status: $h1.ExtensionData.ConfigStatus
Configuration issues: $h1.ExtensionData.ConfigIssue

A few more useful details that are available under "ExtensionData" are given below.


There is a property called "RebootRequired" which shows whether a reboot is pending for the system.

System uptime and resource utilization are available under the "QuickStats" property.
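
For example, assuming the same $h1 host object (QuickStats sits under ExtensionData.Summary on the host object):

$stats = $h1.ExtensionData.Summary.QuickStats
$stats | Select-Object OverallCpuUsage, OverallMemoryUsage, Uptime

# Convert the uptime from seconds to days/hours/minutes
New-TimeSpan -Seconds $stats.Uptime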


Note:
"OverallCpuUsage" is in MHz, "OverallMemoryUsage" is in MB and "Uptime" is in Seconds.

List of all system capabilities: $h1.ExtensionData.Capability

BIOS version: $h1.ExtensionData.Hardware.BiosInfo


In the next article, we will discuss connecting to the vCenter server using PowerCLI and performing day-to-day operations. Hope this was useful. Cheers!

Friday, April 19, 2019

Cisco switch configuration backup using PowerShell

In this article, I will briefly explain how to back up the running configuration of Cisco switches to a TFTP server location using PowerShell.

Prerequisites
  • A TFTP server should be configured and running.
  • PowerShell module Posh-SSH should be installed on the node from which the script is running.

Workflow
  1. Collect credentials to SSH into the switch
    $creds = Get-Credential
  2. Create a new SSH session to the first switch in the list
    $sw_ssh = New-SSHSession -ComputerName <Management IP of Cisco switch> -Credential $creds -Force -ConnectionTimeout 300
  3. Invoke the command to back up the running config to the TFTP server over the SSH session
    $cmd_backup = "copy running-config tftp://<IP of TFTP server>/config_backup.txt vrf management"
    Invoke-SSHCommand -Command $cmd_backup -SSHSession $sw_ssh
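
Putting these steps together, below is a minimal sketch that loops through a list of switches. The switch IPs, the TFTP server IP, and the file naming are placeholders you need to adjust for your environment (the "vrf management" part applies only if your switch uses a management VRF):

Import-Module Posh-SSH

$creds    = Get-Credential
$switches = @("192.168.1.10", "192.168.1.11")   # management IPs of the switches
$tftp     = "192.168.1.50"                      # IP of the TFTP server

foreach ($switch in $switches) {
    # Create an SSH session to the switch
    $sw_ssh = New-SSHSession -ComputerName $switch -Credential $creds -Force -ConnectionTimeout 300

    # Back up the running config to the TFTP server with the switch IP and date in the file name
    $file = "$($switch)_$(Get-Date -Format yyyyMMdd).txt"
    $cmd_backup = "copy running-config tftp://$tftp/$file vrf management"
    Invoke-SSHCommand -Command $cmd_backup -SSHSession $sw_ssh

    # Close the session
    Remove-SSHSession -SSHSession $sw_ssh | Out-Null
}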

You can schedule this PowerShell script using Task Scheduler so that the running configuration of the switches is backed up automatically on a daily basis or as per your requirements.
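
For example, on a Windows machine the script can be registered as a daily scheduled task. This is only a sketch; the task name, script path, and schedule below are assumptions:

$action  = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-NoProfile -File C:\Scripts\cisco_switch_backup.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -At 2am
Register-ScheduledTask -TaskName "CiscoSwitchBackup" -Action $action -Trigger $trigger

Hope this was useful. Cheers!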

Complete project reference
https://github.com/vineethac/cisco_switch_backup

Related article
Dell EMC switch configuration backup using PowerShell

Friday, April 12, 2019

VMware VVols: Integrating Dell EMC Unity with vSphere environment

This article provides the step-by-step procedure to configure a Virtual Volume (VVol) based vSphere environment using Dell EMC Unity SAN storage. You can go through my previous post to get an understanding of the differences between VMFS and VVol.

Step1: Register a new storage provider in vCenter

Note: Registering a storage provider exposes all the array capabilities to vCenter through the VASA API.

Select: vCenter server -> Configure -> Storage providers -> Register a new storage provider (+)

Register new storage provider in vCenter

After successful registration of storage provider


Step2: Create a VVol datastore on the storage array (Unity)

Create VVol datastore (storage container)

Provide a name

Select capability profiles for the VVol datastore

Note: Each pool has its own characteristics and is associated with a specific capability profile. Adding capability profiles to a VVOL datastore basically adds the corresponding pools to that VVOL construct. In the above figure we have added 3 capability profiles, which means pool_01/02/03 are now part of VVOL_Datastore_01.

Configure access to ESXi hosts

Summary

VVol datastore (storage container) creation completed

The storage container (VVOL datastore) has been created on the array and the next step is to add it to ESXi hosts.

Step3: Add a new datastore in the vSphere environment through vCenter

Add new VVol datastore

Select the VVol datastore and provide a name

VVol datastore creation complete

VVol datastore

Step4: Create VM storage policies

Select VM Storage Policies

Create VM storage policy

Select vCenter and provide a name



Storage type and rule

Select service level

Select the compatible storage


Now the storage policy (Platinum) has been created. Similarly, I have created Silver and Bronze policies which are shown below.

Sample storage policies

Step5: Migrate virtual machines to VVol datastore

Migrate VM01 to VVol datastore

While migrating the VM, you can choose the disk format and the VM storage policy and it will display the compatible VVOL datastores.


Select compatible storage


If you would like to apply storage policies at the disk level, you can edit the VM storage policies setting of the VM and apply a policy per VMDK as required, as shown below.


Select VM storage policy per VMDK

Hope this was helpful. Cheers!


Friday, March 15, 2019

VMFS vs VVOL

Let's start with a quick comparison.

VMFS
  • LUN-centric approach
  • Pre-provisioning of LUNs
  • Use of multiple datastores for different performance capabilities
  • Management difficulties as a single VM may span across multiple datastores

VVOL
  • VM-centric
  • vSphere is now aware of array capabilities through the VASA provider
  • No pre-provisioning
  • One VVol datastore can represent the whole array
  • Storage policies can be applied at the per-VMDK level
  • Some of the vSphere operations are offloaded to the array using VAAI (full cloning, snapshots)
  • VVol snapshots are faster (different from traditional snapshots with redo logs)

Note: The explanations below regarding the SAN are based on Dell EMC Unity (hybrid array).

VMFS environment

A traditional VMFS datastore setup is given below. Say you have two VMs: one with 3 virtual disks and the second with 2 virtual disks. Your array has 4 storage pools with specific capabilities, and based on those capabilities the pools are classified into 4 service levels (Platinum, Gold, Silver, and Bronze). In this case, you need to provision 4 datastores/LUNs from the respective pools to meet the requirements of the two virtual machines.

VMFS

You can clearly see that the first VM spans 3 datastores and the second VM spans 2 datastores. If your environment has hundreds or thousands of VMs, management becomes complicated. In this case, as the datastores are properly named, the vSphere admin can easily identify the service level/capability of a specific datastore. But if a naming convention is not followed, the vSphere admin has to contact the storage admin to know the capabilities of a specific datastore/LUN. Again, a lot of communication is needed, making it a complex process!

VVOL environment

Now let's have a look at the VVol environment, which is shown in the figure below. You have the same array, but instead of provisioning datastores/LUNs from the respective pools, here we create one VVol datastore. The array has 4 storage pools, each with specific capabilities like drive type, RAID level, tiering policy, FAST Cache ON/OFF, etc., and they are classified into 4 service levels: Platinum, Gold, Silver, and Bronze. So each pool has its own capability profile. You can provide any name, but here I just gave each capability profile the same name as the service level of its pool.

Pool_01 -> Service level: Platinum -> Capability profile: Platinum
Pool_02 -> Service level: Silver   -> Capability profile: Silver
Pool_03 -> Service level: Bronze   -> Capability profile: Bronze
Pool_04 -> Service level: Gold     -> Capability profile: Gold


Note: A VVol datastore cannot be created without a capability profile.

Next steps are:

  • Create a VVol datastore in the SAN
  • Provide a name for the VVol datastore
  • Add the capability profiles that need to be part of the VVol datastore
  • In this case, all 4 capability profiles are part of the VVol datastore
  • You can also limit the amount of space the VVol datastore will use from each capability profile
  • Once it's done, the VVol datastore is created in the SAN
VVOL

At this point, you can go ahead and configure host access to this VVol datastore. Now you have to let your vSphere environment know about the capabilities of the storage array. That is done by registering the new storage provider on your vCenter server, making use of the VASA (vStorage APIs for Storage Awareness) provider. Once registered, the vSphere environment communicates with the VASA provider through the array management interface (OOB network), which forms the control path.

The next step is creating a VVol datastore on the ESXi host to which you provided access earlier. For each VVol datastore created on the storage array, two protocol endpoints (PEs) are automatically generated to communicate with an ESXi host, forming the data path. If you create another VVol datastore on the array and provide access to the same ESXi host, two more PEs will be created. PEs act as targets, and on the ESXi side, if you look at the storage devices, you can see 2 proxy LUNs which connect to the respective PEs.
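
If you want to check the protocol endpoints from the ESXi side using PowerCLI, a rough sketch is given below. It assumes the esxcli "storage vvol" namespace is available on your ESXi version:

$esxcli = Get-EsxCli -VMHost (Get-VMHost <IP of ESXi server>) -V2

# List the protocol endpoints visible to this host
$esxcli.storage.vvol.protocolendpoint.list.Invoke()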

Now you have to create VM storage policies based on the service levels. Here you have 4 service levels, so you have to create 4 storage policies. Assign storage policies on a per-VMDK basis as per your requirements.

E.g.: Storage_Policy_Gold -> VMDK 02 (second VM) -> it will be placed in Pool_04 automatically.
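
For reference, the same per-VMDK policy assignment can also be done with PowerCLI's SPBM cmdlets (from the VMware.VimAutomation.Storage module). This is only a sketch; the VM name, disk name, and policy name are assumptions for this example:

# Get the storage policy and the second hard disk of the VM
$policy = Get-SpbmStoragePolicy -Name "Storage_Policy_Gold"
$vmdk   = Get-VM "VM02" | Get-HardDisk | Where-Object {$_.Name -eq "Hard disk 2"}

# Apply the policy to that specific VMDK
Set-SpbmEntityConfiguration -Configuration (Get-SpbmEntityConfiguration -HardDisk $vmdk) -StoragePolicy $policy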

So in this case, instead of having 4 VMFS datastores, we needed only 1 VVol datastore which has all the required capabilities. This means you don't need to provision more LUNs from the SAN. Storage management becomes easy with just one VVol datastore, and there is more granularity with SPBM at each VMDK level. With VVols, the SAN is aware of all the VMs and their corresponding files hosted on it. This makes space reclamation very easy and straightforward: the moment a VM or a VMDK is deleted, that space is immediately freed, as the SAN has complete insight into the virtual machines stored on it. Data mobility between storage pools based on service level becomes effortless, as it is handled directly by the SAN internally based on SPBM. Altogether, VVols simplify storage/LUN management.

Hope it was useful. Cheers!
