
Wednesday, July 3, 2019

vRealize Operations Manager 7.5 - Part 4 - High Availability

In this post, I will explain the steps to expand an existing vROps installation and enable high availability.

Before making a design/architecture decision on enabling high availability for vROps, I strongly recommend going through the VMware vROps documentation links below and understanding the functional/technical implications.

  1. About vRealize Operations Manager High Availability
  2. High Availability Considerations
  3. About vRealize Operations Manager Cluster Nodes

Expand an existing installation


Deploy a new vROps appliance. Once the deployment is complete, open the management IP of the appliance in a web browser. This time, select "Expand an Existing Installation".


Click next.

Provide a name for this new node.
Select the node type as "Data".
Provide the IP address or FQDN of the master node and click validate.
Accept the certificate and click next.


Provide the admin password and click next.

Note: If you don't know the admin password, you can ask the vROps admin for the cluster's shared passphrase and use that instead.

Click finish.

After this step, you will be redirected automatically to the admin page.


As you can see in the above screenshot, the installation is in progress and waiting to finish the cluster expansion. It may take a few minutes. Once it is complete, you will see a "Finish adding new node(s)" button.


Click "Finish adding new node(s)" and click ok.


This will take a few minutes.
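
If you prefer to watch the expansion from a script rather than the admin UI, the cluster state can be polled over the vROps admin (CaSA) API. The sketch below is a minimal example; the endpoint path and the response field are assumptions for illustration, so verify both against the API documentation for your vROps version.

# Minimal polling sketch. The endpoint and response field below are assumptions;
# verify against the vROps admin (CaSA) API documentation for your version.
import time
import requests

VROPS_MASTER = "https://vrops-master.example.com"   # hypothetical master node FQDN
ADMIN_USER = "admin"
ADMIN_PASS = "admin-password-here"                   # placeholder only

def cluster_is_online() -> bool:
    resp = requests.get(
        f"{VROPS_MASTER}/casa/sysadmin/cluster/online_state",   # assumed endpoint
        auth=(ADMIN_USER, ADMIN_PASS),
        verify=False,        # fresh appliances typically use self-signed certificates
        timeout=30,
    )
    resp.raise_for_status()
    # Assumed response shape: {"cluster_online_state_snapshot": "ONLINE", ...}
    return resp.json().get("cluster_online_state_snapshot") == "ONLINE"

# Poll every minute until the expanded cluster reports ONLINE again.
while not cluster_is_online():
    print("Cluster still coming online, waiting...")
    time.sleep(60)
print("Cluster is online.")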


Now you can see that the new data node is online and running. The next step is to enable high availability.


Enable High Availability


To configure high availability, click "Enable".

Cluster Restart Required: The cluster needs to be restarted in order to configure HA. This may require up to 20 minutes during which vRealize Operations Manager will not be available.

To ensure complete protection the two nodes (Master and Replica) should not share hardware.

Click Ok.
Click "Yes" to continue HA configuration.


This will take a few minutes; the cluster will be taken offline while HA is being enabled.


After a few minutes, the cluster is back online and HA is enabled. As you can see in the screenshot below, one node is "Master" and the other is "Master Replica".
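
You can also double-check the node roles from outside the UI by querying the cluster membership over the API. Again, this is only a sketch: the endpoint name and the JSON fields are assumptions, so confirm them against the CaSA/Suite API reference for your release.

# Sketch: list node roles after enabling HA. Endpoint and JSON fields are assumed.
import requests

VROPS_MASTER = "https://vrops-master.example.com"   # hypothetical master node FQDN
ADMIN_USER = "admin"
ADMIN_PASS = "admin-password-here"                   # placeholder only

resp = requests.get(
    f"{VROPS_MASTER}/casa/sysadmin/cluster/membership",   # assumed endpoint
    auth=(ADMIN_USER, ADMIN_PASS),
    verify=False,
    timeout=30,
)
resp.raise_for_status()

# Assumed shape: a list of node records, each with "name" and "role" fields.
for node in resp.json().get("nodes", []):
    print(node.get("name"), "->", node.get("role"))
# With HA enabled you would expect one master and one replica role listed here.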


vROps also ships with dashboards that give you details on the health of the vROps environment itself, including cluster statistics and performance of vROps. A sample screenshot of the "Self Health" dashboard is shown below.


Hope it was useful. Cheers!


Tuesday, December 27, 2016

Highly Available SOFS On Clustered Storage Spaces

This article briefly explains the design and the steps required to deploy a highly available scale-out file server (SOFS) on clustered storage spaces using Windows Server 2012 R2.

Note: Here we are using only a single JBOD, but there are several Storage Spaces certified JBODs available in the market (e.g. DATAON) which are enclosure aware. That means when you use multiple JBODs together, you get data redundancy at the JBOD level too.


  1. JBOD is connected to both file servers using shared SAS HBA connectors
  2. MPIO is enabled on both servers (SERVER 01/ 02)
  3. Make sure JBOD disks are available to both servers
  4. In this case we have a total of 24 disks (6 SSD and 18 SAS HDD)
  5. Create a new storage pool (steps 5-10 are also scripted in the sketch after this list)
    1. By default, all available disks are included in a pool named the primordial pool
    2. If the primordial pool is not listed under storage spaces, the storage doesn't meet the requirements of Storage Spaces
    3. If you select the primordial pool, all available disks will be listed under physical disks
    4. Select option create new storage pool
    5. Give it a name
    6. Select physical disks you want to be in the pool
    7. Hot spares can be configured too at this stage
  6. Verify new storage pool is listed under storage pools
  7. The next step is to create virtual disks (these are not VHDX files; they are called spaces)
    1. Now select the new storage pool that you have just created in step 5
    2. Click on tasks, select new virtual disk
    3. Give a name
    4. Select the storage layout (Mirror, Parity, Simple)
    5. Choose provisioning type (Thin, Fixed)
    6. Specify size
  8. Now create volumes
    1. Right click the virtual disk (space) that you have just created in the previous step and select new volume
    2. Select server name and then the virtual disk name
    3. Specify volume size, drive letter, file system type, allocation unit size and volume label 
  9. Create failover cluster using the 2 file servers
    1. Provide a cluster name
    2. Volumes that you created in step 8 will be listed as available volumes
    3. Add those as cluster shared volumes
    4. Now it appears in C:\ClusterStorage\
  10. Next step is to create a highly available SOFS
    1. On failover cluster manager - roles - new clustered role - file server - file server for scale out application data (SOFS) 
    2. Provide client access point name (eg: SMB-FS01)
    3. Right click on SOFS role in failover cluster manager and select add shared folder
    4. Choose SMB share server applications
    5. Provide a name (eg: DATA01)
    6. Local path to share (C:\ClusterStorage\volume1\shares\DATA01)
    7. Remote path to share (\\SMB-FS01\DATA01)
Now you have a highly available file share (\\SMB-FS01\DATA01), built on clustered storage spaces, to store your virtual machine files.
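
The same build-out can be scripted instead of clicking through Server Manager and Failover Cluster Manager. Below is a minimal sketch that drives the equivalent PowerShell cmdlets from Python; the pool, space, share and access-group names, the 2 TB size, and the paths are placeholders, so adjust them and verify the cmdlet parameters against your environment before use.

# Sketch: script the pool -> space -> volume -> SOFS share build-out by invoking
# PowerShell from Python on one of the clustered file servers. All names, sizes,
# and paths are placeholders for illustration.
import subprocess

def ps(command: str) -> None:
    """Run a PowerShell command and raise if it fails."""
    subprocess.run(["powershell.exe", "-NoProfile", "-Command", command], check=True)

# Step 5: create a storage pool from all poolable JBOD disks
# (assumes a single storage subsystem is visible to the server).
ps('New-StoragePool -FriendlyName "Pool01" '
   '-StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName '
   '-PhysicalDisks (Get-PhysicalDisk -CanPool $true)')

# Step 7: carve a mirrored, thin-provisioned space (virtual disk) out of the pool.
ps('New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "Space01" '
   '-ResiliencySettingName Mirror -ProvisioningType Thin -Size 2TB')

# Step 8: initialize, partition, and format a volume on the new space.
ps('Get-VirtualDisk -FriendlyName "Space01" | Get-Disk | Initialize-Disk -PassThru | '
   'New-Partition -UseMaximumSize -AssignDriveLetter | '
   'Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data01" -Confirm:$false')

# Step 10: add the SOFS role and the application share (run this after the
# failover cluster exists and the volume has been added as a CSV).
ps('Add-ClusterScaleOutFileServerRole -Name "SMB-FS01"')
ps('New-Item -ItemType Directory -Path "C:\\ClusterStorage\\Volume1\\shares\\DATA01" -Force')
ps('New-SmbShare -Name "DATA01" -Path "C:\\ClusterStorage\\Volume1\\shares\\DATA01" '
   '-FullAccess "DOMAIN\\Hyper-V-Hosts"')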

Reference: Microsoft 

Wednesday, October 26, 2016

3 Node Hyper-V 2012 R2 Cluster Design

The diagram below shows a traditional 3-tier highly available cluster with 3 Hyper-V nodes and 2 shared storage nodes (storage in Active-Passive or Active-Active mode), all connected via network switches.

Compute

*Hyper-V nodes are running on DELL PowerEdge R630 rack servers

Networking

*Shared storage is accessed via MPIO over two separate VLANs (61 and 62); the host-side MPIO setup is sketched at the end of this post
*VM traffic is over NIC teaming (switch independent and dynamic), as scripted in the sketch below
*Live migration/ cluster network is also teamed together
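
The two teams in this design can be created on each Hyper-V node with the New-NetLbfoTeam cmdlet. The sketch below drives it from Python via PowerShell; the adapter and team names are placeholders for whatever your hosts actually report.

# Sketch: create the VM-traffic and live-migration/cluster teams on a Hyper-V node.
# Adapter names and team names are placeholders; adjust to match your hardware.
import subprocess

def ps(command: str) -> None:
    subprocess.run(["powershell.exe", "-NoProfile", "-Command", command], check=True)

# Switch-independent, dynamic team for VM traffic (as in the design above).
ps('New-NetLbfoTeam -Name "VM-Team" -TeamMembers "NIC3","NIC4" '
   '-TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic -Confirm:$false')

# Team for the live migration / cluster network.
ps('New-NetLbfoTeam -Name "LM-Cluster-Team" -TeamMembers "NIC5","NIC6" '
   '-TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic -Confirm:$false')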

Storage

*We are using DELL PowerEdge R320 with Open-E DSS V7 as storage nodes
*Each node has 8 x 10K SAS drives with RAID 5
*Storage can be in either Active-Passive or Active-Active cluster mode
*In Active-Passive mode, only one of the storage nodes will be active. That means the resources of only one storage node are utilized at a time
*In Active-Active mode, both servers will be active and serving storage traffic 
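
On the host side, multipath access to the storage nodes can also be scripted. The sketch below assumes the Open-E DSS volumes are presented to the Hyper-V nodes over iSCSI, with one portal address per storage VLAN (61 and 62); the portal IPs and the target IQN are placeholders.

# Sketch: enable MPIO and connect an iSCSI target over both storage VLANs.
# Portal addresses and the target IQN are placeholders; this assumes the Open-E
# DSS storage is presented to the Hyper-V nodes over iSCSI.
import subprocess

def ps(command: str) -> None:
    subprocess.run(["powershell.exe", "-NoProfile", "-Command", command], check=True)

# Enable the MPIO feature and let MSDSM claim iSCSI devices (a reboot may be needed).
ps('Enable-WindowsOptionalFeature -Online -FeatureName MultiPathIO')
ps('Enable-MSDSMAutomaticClaim -BusType iSCSI')

# One portal per storage VLAN (61 and 62), then a multipath-enabled session via each.
for portal in ("10.0.61.10", "10.0.62.10"):          # placeholder portal IPs
    ps(f'New-IscsiTargetPortal -TargetPortalAddress "{portal}"')
    ps(f'Connect-IscsiTarget -NodeAddress "iqn.example:target0" '
       f'-TargetPortalAddress "{portal}" -IsMultipathEnabled $true -IsPersistent $true')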


Saturday, July 9, 2016

How Live Migration works on Hyper-V

1. Live migration setup

  • The source host creates a TCP connection with the destination host
  • VM configuration data is transferred to the destination host
  • A skeleton VM is set up on the destination host
  • Physical memory is allocated to that VM
2. Memory pages are transferred from the source to the destination host (see the sketch after this list)

  • The in-state memory (working set) of the VM is transferred first
  • The default page size is 4 KB
  • All utilized pages are copied to the destination
  • Modified pages are tracked by the source and marked as being modified
  • Several iterations of the copy process take place
3. Remaining modified pages are transferred to the destination host

  • The VM is then registered and the device state is transferred
  • Fewer modified pages means a faster migration
  • The total working set is copied to the destination
4. Move the storage handle from source to destination

  • Until this step, the VM on the destination host is not online
5. Once control of storage is transferred, the VM is brought online and resumed at the destination

  • Now the VM is completely migrated and running on the destination host
6. Network cleanup

  • A message is sent to the physical network switch, causing it to relearn the MAC address of the migrated VM
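
The iterative pre-copy behaviour in steps 2 and 3 is easier to picture with a toy simulation. The sketch below is not Hyper-V code; it simply models copying all utilized pages, re-copying whatever the running VM dirties in the meantime, and switching to the final stop-and-copy once the dirty set is small. Page counts and thresholds are made up.

# Toy sketch of the iterative pre-copy phase described above: copy all used pages,
# then keep re-copying the pages the VM dirties until the remaining set is small
# enough for the final stop-and-copy. Page counts and thresholds are illustrative.
import random

PAGE_COUNT = 100_000          # 4 KB pages in the VM's working set
STOP_COPY_THRESHOLD = 1_000   # switch to stop-and-copy below this many dirty pages
MAX_ITERATIONS = 10

def run_precopy():
    dirty = set(range(PAGE_COUNT))        # first pass: every utilized page
    for iteration in range(1, MAX_ITERATIONS + 1):
        transferred = len(dirty)          # pages copied in this pass
        # While copying, the running VM keeps modifying a fraction of its memory;
        # the source host tracks those pages and marks them dirty again.
        dirty = set(random.sample(range(PAGE_COUNT), transferred // 10))
        print(f"iteration {iteration}: copied {transferred} pages, "
              f"{len(dirty)} dirtied meanwhile")
        if len(dirty) < STOP_COPY_THRESHOLD:
            break
    # Final step: pause the VM, copy the last dirty pages plus the device state,
    # hand over the storage handle, and resume the VM on the destination host.
    print(f"stop-and-copy: transferring final {len(dirty)} pages and device state")

run_precopy()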