
Monday, October 7, 2019

VMware PowerCLI 101 - Part 5 - Real time storage IOPS and latency

It is very important to monitor and analyze the performance of storage subsystem components, as it directly affects application performance. In this article, I will briefly explain how to use PowerCLI to get real-time storage IOPS and latency of the following:

              • Virtual disk
              • Datastore
              • Disk/ LUN 
              • Storage adapter
              • Storage path
Connect to the vCenter Server using:
Connect-VIServer <IP address of vCenter>

To see the list of all available stats for a specific entity, you can use Get-StatType. For example, to list all real-time stats for a virtual machine you can use:
Get-StatType -Entity <VM name> -Realtime | sort
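
The full list can be long, so you can filter it down to a specific metric group. For example, to see only the virtual disk counters of a VM:
Get-StatType -Entity <VM name> -Realtime | Where-Object { $_ -like 'virtualDisk.*' } | sort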

Virtual disk

To get real-time IOPS and latency of all virtual disks of a VM named 'lustre01':
Get-Stat -Entity lustre01 -Realtime -MaxSamples 1 -Stat virtualDisk.numberReadAveraged.average,virtualDisk.numberWriteAveraged.average,virtualDisk.totalReadLatency.average,virtualDisk.totalWriteLatency.average | sort Instance,MetricId | select MetricId, Value, Unit, Instance
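
If you want to watch these counters continuously instead of taking a single sample, a simple polling loop is enough. A minimal sketch (real-time stats are collected at 20-second intervals, hence the sleep):

while ($true) {
    # Pull the latest real-time sample of virtual disk IOPS and latency
    Get-Stat -Entity lustre01 -Realtime -MaxSamples 1 -Stat virtualDisk.numberReadAveraged.average,virtualDisk.numberWriteAveraged.average,virtualDisk.totalReadLatency.average,virtualDisk.totalWriteLatency.average |
        Sort-Object Instance, MetricId |
        Select-Object Timestamp, MetricId, Value, Unit, Instance |
        Format-Table -AutoSize
    # Real-time counters update every 20 seconds
    Start-Sleep -Seconds 20
}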

Datastore

To get real-time IOPS and latency of a datastore (with Uuid: 5bea72bb-5d72ed6a-1d85-246e96792988) from an ESXi host (IP: 192.168.105.10):
Get-Stat -Entity 192.168.105.10 -Stat datastore.numberReadAveraged.average,datastore.numberWriteAveraged.average,datastore.totalReadLatency.average,datastore.totalWriteLatency.average -Realtime -MaxSamples 1 -Instance 5bea72bb-5d72ed6a-1d85-246e96792988 | Select MetricId, Value, Unit, Instance | Sort-Object MetricId

Note: You can get the Uuid of a datastore using (Get-Datastore vol01).ExtensionData.Info.Vmfs.Uuid
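
To run the same query against every VMFS datastore mounted on the host, you can resolve each datastore's Uuid on the fly. A minimal sketch (assuming the host 192.168.105.10 from the example above):

# Query real-time IOPS and latency for each VMFS datastore on the host
Get-VMHost 192.168.105.10 | Get-Datastore | Where-Object { $_.ExtensionData.Info.Vmfs } | ForEach-Object {
    $ds = $_
    Get-Stat -Entity 192.168.105.10 -Realtime -MaxSamples 1 -Instance $ds.ExtensionData.Info.Vmfs.Uuid -Stat datastore.numberReadAveraged.average,datastore.numberWriteAveraged.average,datastore.totalReadLatency.average,datastore.totalWriteLatency.average |
        Select-Object @{N='Datastore';E={$ds.Name}}, MetricId, Value, Unit
}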


Refer to my article "Real time VMware datastore performance monitoring using PowerShell" for monitoring real-time performance statistics of multiple shared VMFS datastores that are part of a multi-node VMware ESXi cluster.

Disk/ LUN

To get real-time IOPS and latency of a disk (eui.387de1af35b93f6ff0a9bef000000000): 
Get-Stat -Entity 192.168.105.10 -Disk -Realtime -Instance eui.387de1af35b93f6ff0a9bef000000000 -MaxSamples 1 -Stat disk.numberWriteAveraged.average,disk.numberReadAveraged.average,disk.totalWriteLatency.average,disk.totalReadLatency.average | Select MetricId, Value, Unit, Instance
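
If you are not sure of the device identifier, you can first list the canonical names (naa.*/ eui.*) of all disks on the host and then use one of them as the -Instance value:

Get-ScsiLun -VMHost 192.168.105.10 -LunType disk | Select-Object CanonicalName, CapacityGB, Vendor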


Storage adapter

To get real-time IOPS and latency of a storage adapter: 
Get-Stat -Entity 192.168.105.10 -Realtime -MaxSamples 1 -Stat storageAdapter.totalReadLatency.average, storageAdapter.totalWriteLatency.average, storageAdapter.numberReadAveraged.average, storageAdapter.numberWriteAveraged.average -Instance vmhba64 | Select-Object MetricId, Value, Unit, Instance | Sort-Object MetricId
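
The adapter names (vmhba*) to use with -Instance can be listed using Get-VMHostHba:

Get-VMHostHba -VMHost 192.168.105.10 | Select-Object Device, Type, Status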


Storage path

To get real-time IOPS and latency of a storage path:
Get-Stat -Entity 192.168.105.10 -Realtime -MaxSamples 1 -Stat storagePath.totalReadLatency.average, storagePath.totalWriteLatency.average, storagePath.numberReadAveraged.average, storagePath.numberWriteAveraged.average -Instance fc.300fb123ba76519c:b436362bae5b217-fc.300fb123ba76519c:b436362bae5b217-eui.387de1af35b93f6ff0a9beec00000001 | Select MetricId,Value,Unit,Instance | Sort-Object MetricId
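
Path instance names in this initiator-target-device format can be discovered with Get-ScsiLunPath. For example, using the eui.* device from the Disk/ LUN section above:

Get-ScsiLun -VMHost 192.168.105.10 -CanonicalName eui.387de1af35b93f6ff0a9bef000000000 | Get-ScsiLunPath | Select-Object Name, State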


Friday, March 15, 2019

VMFS vs VVOL

Let's start with a quick comparison.

VMFS

  • LUN-centric approach
  • Pre-provisioning of LUNs
  • Use of multiple datastores for different performance capabilities
  • Management difficulties, as a single VM may span multiple datastores

VVOL

  • VM-centric
  • vSphere is aware of array capabilities through the VASA provider
  • No pre-provisioning
  • One VVOL datastore can represent the whole array
  • Storage policies can be applied at the individual VMDK level
  • Some vSphere operations are offloaded to the array using VAAI (full cloning, snapshots)
  • VVOL snapshots are faster (different from traditional snapshots with redo logs)


Note: The explanations below regarding the SAN are based on Dell EMC Unity (hybrid array).

VMFS environment

A traditional VMFS datastore setup is given below. Say you have two VMs: one with 3 virtual disks and the second with 2 virtual disks. Your array has 4 storage pools with specific capabilities, and based on those capabilities the pools are classified into 4 service levels (Platinum, Gold, Silver and Bronze). In this case you need to provision 4 datastores/ LUNs from the respective pools to meet the requirements of the two virtual machines.

[Figure: VMFS environment]

You can clearly see that the first VM spans 3 datastores and the second VM spans 2 datastores. If your environment has hundreds or thousands of VMs, management becomes complicated. In this case, as the datastores are properly named, the vSphere admin can easily identify the service level/ capability of a specific datastore. But if a naming convention is not followed, the vSphere admin has to contact the storage admin to learn the capabilities of a specific datastore/ LUN. Again, lots of communication is needed, making it a complex process!

VVOL environment

Now let's have a look at the VVOL environment, shown in the figure below. You have the same array, but instead of provisioning datastores/ LUNs from the respective pools, here we create one VVOL datastore. The array has 4 storage pools, each with specific capabilities like drive type, RAID level, tiering policy, FAST Cache ON/ OFF etc., and they are classified into 4 service levels: Platinum, Gold, Silver, and Bronze. So each pool has its own capability profile. You can provide any name, but here I just gave each capability profile the same name as the service level of its pool.

Pool_01 -> Service level: Platinum -> Capability profile: Platinum
Pool_02 -> Service level: Silver   -> Capability profile: Silver
Pool_03 -> Service level: Bronze   -> Capability profile: Bronze
Pool_04 -> Service level: Gold     -> Capability profile: Gold


Note: A VVOL datastore cannot be created without a capability profile.

The next steps are:

  • Create the VVOL datastore in the SAN
  • Provide a name for the VVOL datastore
  • Add the capability profiles that need to be part of the VVOL datastore
  • In this case, all 4 capability profiles are part of the VVOL datastore
  • You can also limit the amount of space the VVOL datastore can use from each capability profile
  • Once that's done, the VVOL datastore is created in the SAN
[Figure: VVOL environment]

At this point, you can go ahead and configure host access to this VVOL datastore. Now you have to let your vSphere environment know about the capabilities of the storage array. That is done by registering the new storage provider on your vCenter Server, making use of the VASA (vSphere APIs for Storage Awareness) provider. Once registered, the vSphere environment communicates with the VASA provider through the array management interface (OOB network), which forms the control path. The next step is creating a VVOL datastore on the ESXi host to which you provided access earlier. For each VVOL datastore created on the storage array, two protocol endpoints (PEs) will be automatically generated to communicate with an ESXi host, forming the data path. If you create another VVOL datastore on the array and provide access to the same ESXi host, two more PEs will be created. PEs act like targets. And on the ESXi side, if you look at the storage devices, you can see 2 proxy LUNs which connect to the respective PEs.

Now you have to create VM storage policies based on the service levels. Here you have 4 service levels, so you have to create 4 storage policies. Assign a storage policy at the per-VMDK level as per your requirements.

E.g.: Storage_Policy_Gold -> VMDK 02 (second VM) -> it will be placed in Pool_04 automatically
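
As a quick illustration, the same assignment can be done with PowerCLI's SPBM cmdlets. A minimal sketch, assuming a policy named 'Storage_Policy_Gold' already exists and the second VM is named 'vm02' (both names are illustrative):

# Fetch the storage policy and apply it to the second virtual disk of the VM
$policy = Get-SpbmStoragePolicy -Name 'Storage_Policy_Gold'
$disk = Get-HardDisk -VM vm02 -Name 'Hard disk 2'
Set-SpbmEntityConfiguration -Configuration (Get-SpbmEntityConfiguration -HardDisk $disk) -StoragePolicy $policy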

So in this case, instead of having 4 VMFS datastores we need only 1 VVOL datastore which has all the required capabilities. This means you don't need to provision more LUNs from the SAN. Storage management becomes easy with just one VVOL datastore, and SPBM provides more granularity at the individual VMDK level. With VVOLs the SAN is aware of all the VMs and their corresponding files hosted on it. This makes space reclamation straightforward: the moment a VM or a VMDK is deleted, that space is immediately freed, as the SAN has complete insight into the virtual machines stored on it. Data mobility between storage pools based on service level becomes effortless, as it is handled internally by the SAN based on SPBM. Altogether, VVOLs simplify storage/ LUN management.

Hope it was useful. Cheers!


Monday, December 5, 2016

Generic Storage System Architecture


The above diagram shows a generic stand-alone storage system architecture, where a storage OS is installed on a bare-metal server, making it a storage server. Here I am using an enterprise-class storage OS named Open-E DSS V7, installed on a Dell PowerEdge R720xd. The R720xd can have up to twelve 3.5" drives at the front plus two 2.5" drives at the rear. In the diagram, the last 2 disks (Disk Group03) are the 2.5" drives installed at the rear, used as the OS drives in RAID1. Apart from that, we have five 15K and five 7.2K SAS disks grouped into two RAID groups. Comparing disk IOPS, 'Disk Group01' can be considered fast and 'Disk Group02' slow.

As I haven't mentioned the size of each SAS disk, let's assume a 10TB RAID5 virtual disk (VD) is created using 'Disk Group01' and a 12TB RAID5 VD is created using 'Disk Group02'. You can configure hot spares for each disk group if you have additional disks. As mentioned above, for the OS installation we created a 10GB RAID1 VD using 'Disk Group03'. After installation of the OS (Open-E DSS V7), it scans and shows the 10TB and 12TB VDs as available storage units.

The next step is to create volume groups. Here we create two volume groups (VG00 and VG01) to differentiate fast and slow storage.

  • VG00 uses VD01
  • VG01 uses VD02
Once the volume groups are created, you can carve out LUNs separately based on your requirements. For example, if you want a LUN that will be used as a datastore for your virtual machines, create it on VG00 (fast); if you need it for storing general-purpose backup files, create it on VG01 (slow). So depending on your requirements you can decide where to create each LUN.

Note: Here I classified the RAID disk groups based on speed. You can also divide them based on reads and writes, choosing a RAID10 disk group for write-intensive operations and a RAID5 disk group for reads. They can even be divided based on access type: a disk group exclusively for sequential access (SQL logs) and another for random access (SQL data, VM datastores etc.).