
Friday, June 21, 2019

vRealize Operations Manager 7.5 - Part 2 - Configure vCenter adapter and dashboards overview

In this post, I will explain how to configure the vCenter adapter and will also walk through some of the native dashboards.

Configuring vCenter adapter


  • Login to vROps.
  • Click the Administration tab.
  • Select the vCenter Adapter. 
  • Click the gear icon.



  • Provide necessary details.


  • Test connection.




  • Click save settings and close.
  • Once the above steps are done, within a few seconds the "Adapter Status" will show Data receiving and the "Collection State" will show Collecting.

Note: After configuring the vCenter adapter, you should wait a few days for all the data to be collected and populated.
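If you prefer to confirm the adapter configuration outside the UI, the vROps Suite API can be queried with curl. This is just a minimal sketch: the hostname and credentials are placeholders, and -k skips certificate verification for a lab setup.

  # Acquire an authentication token from the Suite API
  curl -sk -X POST https://vrops.lab.local/suite-api/api/auth/token/acquire \
       -H "Content-Type: application/json" -H "Accept: application/json" \
       -d '{"username":"admin","password":"your-password"}'

  # Use the returned token to list the configured adapter instances
  curl -sk https://vrops.lab.local/suite-api/api/adapters \
       -H "Accept: application/json" \
       -H "Authorization: vRealizeOpsToken <token-from-previous-call>"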

Dashboards


This is where most system administrators and operations engineers spend their time: understanding and evaluating the operational aspects of their virtual infrastructure, capacity planning, troubleshooting issues, performance optimization, etc. vROps ships with many pre-canned dashboards out of the box. The screenshot below shows how to select and navigate between the dashboards available in vROps.


Now, I will just briefly explain some of my favorite dashboards. 

Operations Overview


This dashboard provides a data center summary: the total number of clusters, hosts, total VMs, running VMs, datastores, etc. It also has widgets showing the top VMs experiencing CPU contention, memory contention, and disk latency.


Utilization Overview


This dashboard provides a summary of the environment based on the selection; in this case, I selected a cluster. It shows the total CPU/ memory/ storage capacity of the selected environment, along with its usable, used, and remaining capacity. This is very useful for capacity planning.

Cluster Utilization


This dashboard shows the CPU, memory, disk IOPS and network usage trends at the cluster level.


Datastore Utilization


This dashboard provides detailed info on datastore usage trends and a heatmap based on datastore capacity/ utilization.


Heavy Hitter VMs


This dashboard provides cluster-level CPU, memory, IOPS, and network throughput. It also lists the VMs that have generated the highest CPU demand, memory demand, IOPS, and network throughput. This is very useful for identifying the VMs with the highest resource consumption.



Hope it was useful. Cheers!

Saturday, June 15, 2019

vRealize Operations Manager 7.5 - Part 1 - Installation


For evaluation, you can download a trial version of vROps 7.5.

Installation


  • Log in to VCSA.
  • Select an ESXi host and Deploy OVF Template.
  • Browse and select the vROps 7.5 OVA file.
  • Provide a name and select a location.
  • Select a host and click next.

Now, if you are using the Chrome browser and you have not opened VCSA by its FQDN, you might face the error below: "Transfer failed: The OVF descriptor is not available".


A workaround for this issue is to open and log in to VCSA by its FQDN using the Firefox browser (alternatively, the OVA can be deployed from the command line with ovftool; see the sketch at the end of this post).

  • Repeat the same steps; this time you will not get the error.

  • Accept the license agreements.
  • Select a deployment configuration.

  • Select storage.

  • Select a destination network.
  • Provide the time zone and network details (you can leave the network properties blank if a DHCP server is configured on the destination network selected in the previous step). Click next and finish.
  • Once the OVF template deployment is completed you can power on the VM.
  • Open the console to view installation progress. It will take a few minutes.

  • Once the installation is complete, you will see the console screen below. To proceed to the next stage of the installation, open the IP address in a web browser.


  • You will get the options below. As this is a fresh installation, I prefer Express Installation; it is the fastest way to deploy with minimal inputs.

  • Click next on the getting started page.
  • Set the admin account password, click next and finish.


  • Accept EULA.
  • Enter a product key or you can continue in evaluation mode.
  • Click next and finish.
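As an aside, the same OVA can be deployed from the command line with VMware's ovftool instead of the vSphere client wizard, which also avoids the Chrome OVF descriptor issue mentioned earlier. A rough sketch only: the file name, datastore, network, and host below are placeholders, the deployment option must match one defined in the OVF (small is shown as an example), and the network properties are left unset so the appliance picks up DHCP.

  ovftool --acceptAllEulas \
          --name=vROps-75 \
          --deploymentOption=small \
          --datastore=datastore1 \
          --network="VM Network" \
          --powerOn \
          vRealize-Operations-Manager-Appliance-7.5.0.ova \
          vi://root@esxi01.lab.local/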

Hope it was useful. Cheers!




Friday, March 15, 2019

VMFS vs VVOL

Let's start with a quick comparison.

VMFS
  • LUN centric approach
  • Pre-provisioning of LUNs
  • Use of multiple datastores for different performance capabilities
  • Management difficulties as a single VM may span multiple datastores

VVOL
  • VM centric
  • vSphere is aware of array capabilities through the VASA provider
  • No pre-provisioning
  • One VVOL datastore can represent the whole array
  • Storage policies can be applied at the per-vmdk level
  • Some vSphere operations are offloaded to the array using VAAI (full cloning, snapshots)
  • VVOL snapshots are faster (different from traditional snapshots with redo logs)

Note: The explanations below regarding the SAN are based on Dell EMC Unity (hybrid array).

VMFS environment

A traditional VMFS datastore setup is given below. Say you have two VMs: one with 3 virtual disks and a second with 2 virtual disks. Your array has 4 storage pools with specific capabilities, and based on those capabilities the pools are classified into 4 service levels (Platinum, Gold, Silver, and Bronze). In this case, you need to provision 4 datastores/ LUNs from the respective pools to meet the requirements of the two virtual machines.

VMFS

You can clearly see that the first VM spans 3 datastores and the second VM spans 2 datastores. If your environment has hundreds or thousands of VMs, management becomes very complicated. Here, as the datastores are properly named, the vSphere admin can easily identify the service level/ capability of a specific datastore. But if a naming convention is not followed, the vSphere admin has to contact the storage admin to find out the capabilities of that specific datastore/ LUN. That means a lot of back-and-forth communication, making it a complex process!

VVOL environment

Now let's have a look at the VVOL environment shown in the figure below. You have the same array, but instead of provisioning datastores/ LUNs from the respective pools, here we create one vvol datastore. The array has 4 storage pools, each with specific capabilities like drive type, RAID level, tiering policy, FAST Cache ON/ OFF etc., and they are classified into 4 service levels: Platinum, Gold, Silver, and Bronze. So each pool has its own capability profile. You can provide any name, but here I just gave each capability profile the same name as the service level of its pool.

Pool_01 -> Service level: Platinum -> Capability profile: Platinum
Pool_02 -> Service level: Silver   -> Capability profile: Silver
Pool_03 -> Service level: Bronze   -> Capability profile: Bronze
Pool_04 -> Service level: Gold     -> Capability profile: Gold


Note: vvol datastores cannot be created without a capability profile.

Next steps are:

  • Create vvol datastore in the SAN
  • Provide a name for the vvol datastore
  • Add capability profiles that need to be part of the vvol datastore
  • In this case, all 4 capability profiles are part of the vvol datastore
  • You can also limit the amount of space the vvol datastore uses from each capability profile
  • Once it's done, the vvol datastore is created on the SAN
VVOL

At this point, you can go ahead and configure host access to this vvol datastore. Now you have to let your vSphere environment know about the capabilities of the storage array. That is done by registering a new storage provider on your vCenter Server, making use of the VASA (vSphere APIs for Storage Awareness) provider. Once registered, the vSphere environment communicates with the VASA provider through the array management interface (OOB network), which forms the control path.

The next step is creating a vvol datastore on the ESXi host to which you provided access earlier. For each vvol datastore created on the storage array, two protocol endpoints (PEs) are automatically generated to communicate with an ESXi host, forming the data path. If you create another vvol datastore on the array and provide access to the same ESXi host, two more PEs will be created. PEs act like targets, and on the ESXi side, if you look at the storage devices, you can see 2 proxy LUNs which connect to the respective PEs.
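To verify this from the ESXi side, the vvol namespaces in esxcli can be used. A quick sketch, run from an SSH session on the host (output fields vary by release):

  # List the VASA providers known to this host
  esxcli storage vvol vasaprovider list

  # List the protocol endpoints (PEs) discovered by this host
  esxcli storage vvol protocolendpoint list

  # List the vvol storage containers (datastores) presented to this host
  esxcli storage vvol storagecontainer list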

Now you have to create VM storage policies based on the service levels. Here you have 4 service levels, so you create 4 storage policies. Assign a storage policy to each VMDK as per requirements.

Eg: Storage_Policy_Gold -> VMDK 02 (second VM) -> it will be placed in Pool_04 automatically

So in this case, instead of having 4 VMFS datastores, we needed only 1 VVOL datastore which has all the required capabilities. This means you don't need to provision more LUNs from the SAN, and storage management becomes easy with just one VVOL. There is more granularity, with SPBM applied at the individual VMDK level. With VVOLs the SAN is aware of all the VMs and their corresponding files hosted on it. This makes space reclamation very easy and straightforward: the moment a VM or a VMDK is deleted, that space is immediately freed, as the SAN has complete insight into the virtual machines stored on it. Data mobility between different storage pools based on service level also becomes effortless, as it is handled directly by the SAN internally based on SPBM. Altogether, VVOL simplifies storage/ LUN management.

Hope it was useful. Cheers!




Friday, February 15, 2019

Deleting inaccessible objects from vSAN cluster

Scenario:


Resolution:

  • Log in to the vSAN cluster's VCSA as root.
  • Run rvc.
  • Provide a username that has administrator privileges @localhost as shown below.
  • Change directory to localhost.
  • Now browse through and set the directory to the respective cluster (in my case the cluster name is "Cluster-Rack7").
  • Run vsan.check_state -r <path to your cluster>
  • You can purge vswp objects using: vsan.purge_inaccessible_vswp_objects <path to cluster>
  • In this case it's not a vswp object, so we have to find the owner node of the object and delete it forcefully from there.
  • Run vsan.cmmds_find -u <UUID of object> <path to cluster> to find the owner of the object.
  • SSH into the owner node and delete the inaccessible object file forcefully using /usr/lib/vmware/osfs/bin/objtool delete -u <UUID of inaccessible object> -f -v 10
  • Perform a vSAN health check to verify the status (a consolidated session sketch follows this list).
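Putting the above together, a rough session might look like the following. The datacenter name and object UUID are placeholders; the cluster name is the one used in this example.

  # On the VCSA shell: connect RVC to the local vCenter
  rvc administrator@vsphere.local@localhost

  # Inside RVC: navigate to the cluster ("DC01" is a placeholder datacenter name)
  cd /localhost/DC01/computers/Cluster-Rack7

  # Check object state and purge inaccessible vswp objects, if any
  vsan.check_state -r .
  vsan.purge_inaccessible_vswp_objects .

  # Find the owner node of a specific inaccessible object
  vsan.cmmds_find -u <UUID-of-object> .

  # Then SSH to the owner ESXi host and force-delete the object
  /usr/lib/vmware/osfs/bin/objtool delete -u <UUID-of-object> -f -v 10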

Hope it was useful. Cheers!
