
Saturday, June 30, 2018

Introduction to Nutanix cluster components

In this article I will briefly explain the different components of a Nutanix cluster. The major components are listed below; a quick way to see them running on a Controller VM (CVM) follows the list.

Nutanix cluster components
  1. Stargate: Data I/O manager for the cluster.
  2. Medusa: Access interface for Cassandra.
  3. Cassandra: Distributed metadata store.
  4. Curator: Handles Map Reduce cluster management and cleanup.
  5. Zookeeper: Manages cluster configuration.
  6. Zeus: Access interface for Zookeeper.
  7. Prism: Management interface for Nutanix UI, nCLI and APIs.
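A quick way to see these services on a running cluster is to SSH into any Controller VM (CVM) and run cluster status; the exact service list varies between releases, so treat the note below as a rough illustration rather than a definitive reference.

cluster status

# A healthy node lists its services (Zeus, Medusa, Stargate, Curator,
# Prism and others) with their process IDs and an UP state.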
Stargate
  • Responsible for all data management and I/O operations.
  • It is the main point of contact for a Nutanix cluster.
  • Workflow: read/write from the VM <-> Hypervisor <-> Stargate.
  • Stargate works closely with Curator to ensure data is protected and optimized.
  • It also depends on Medusa to gather metadata and Zeus to gather cluster configuration data.
Medusa
  • Medusa is the Nutanix abstraction layer that sits in front of the database that holds the cluster metadata.
  • Stargate and Curator communicate with Cassandra through Medusa.
Cassandra
  • It is a distributed, high-performance and scalable database.
  • It stores all metadata about all VMs stored in a Nutanix datastore.
  • It needs verification from at least one other Cassandra node to commit its operations.
  • Cassandra depends on Zeus for cluster configuration.
Curator
  • Curator constantly analyzes the environment and is responsible for managing and distributing data throughout the cluster.
  • It performs disk balancing and information life cycle management.
  • A Curator master node is elected, which manages task and job delegation.
  • The master node coordinates periodic scans of the metadata database and identifies cleanup and optimization tasks that Stargate or other components should perform.
  • It is also responsible for analyzing the metadata; this analysis is shared across all Curator nodes using a MapReduce algorithm.
Zookeeper
  • It runs on 3 nodes in the cluster.
  • It can be increased to 5 nodes in the cluster.
  • Zookeeper coordinates the cluster's distributed services.
  • One Zookeeper node is elected as the leader.
  • All Zookeeper nodes can process read requests.
  • The leader handles cluster configuration write requests and forwards them to its peers.
  • If the leader fails to respond, a new leader is elected automatically.
Zeus
  • Zeus is the Nutanix library interface which all other components use to access cluster configuration information.
  • It is responsible for cluster configuration and leadership logs.
  • If Zeus goes down, all goes down!
Prism
  • Prism is the central entity for viewing activity inside the cluster.
  • It is the management gateway for administrators to configure and monitor a Nutanix cluster.
  • A Prism leader is also elected among the nodes.
  • Prism depends on data stored in Zookeeper and Cassandra (a couple of nCLI examples of what it exposes are sketched below).
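Since Prism also fronts nCLI, a couple of read-only commands give a feel for what it exposes. These are standard nCLI verbs, but option names can vary slightly between releases, so treat this as a sketch:

ncli cluster info           # basic cluster details
ncli host list              # the nodes that make up the cluster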

Note: All the information provided above is based on the Nutanix 4.5 Platform Professional (NPP) administration course.

Wednesday, May 11, 2016

Nutanix Certifications

For all those who are enthusiastic about learning the Nutanix administration course and becoming a certified NPP (Nutanix Platform Professional), I strongly recommend visiting their education portal http://nuschool.nutanix.com and enrolling yourself. I've completed the course and cleared the NPP Certification Exam 4.5 last Monday. It's a decent self-paced course which may take around 8-10 hours.

The exam has 50 objective questions and you need a minimum of 80% to pass. If you are not successful on the first attempt, don't worry, you have two more chances. On my first attempt I scored only 3523/5000, and on my second attempt I scored 4500/5000.

Once you pass the NPP, you can apply for the next level of certification, which is NSS (Nutanix Support Specialist), and then the ultimate level, NPX (Nutanix Platform Expert).

Tuesday, March 29, 2016

Hyper-V on Nutanix

This article briefly explains Hyper-V on the Nutanix virtual computing platform. The figure below shows an 'N' node Nutanix architecture, where each node is an independent server unit with a hypervisor, processor, memory and local storage (a combination of SSD and HDD). Along with this there is a CVM (Controller VM) through which storage resources are accessed.

'N' node Hyper-V over Nutanix architecture
Local storage from all the nodes is combined to form a virtualized and unified storage pool by the Nutanix Distributed File System (NDFS). This can be considered an advanced NAS which delivers a unified storage pool across the cluster, with features like striping, replication, error detection, failover, automatic recovery, etc. From this pool, shared storage resources are presented to the hypervisors as containers; these objects can also be listed from the command line, as sketched below.
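The same storage pool and container objects can also be listed with nCLI from any CVM. A minimal sketch; output trimmed and option names may vary slightly by release:

ncli storagepool list       # the unified pool built from all local disks
ncli container list         # containers carved from the pool and presented to Hyper-V as SMB 3.0 shares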


NDFS - Logical view of storage pool and containers


As mentioned above, each Hyper-V node has its own CVM which is shown below.
Hyper-V node with a CVM

PRISM console

View of storage pool in PRISM
View of containers in PRISM
Containers are mounted to Hyper-V as SMB 3.0 based file shares where virtual machine files are stored. This is shown below.

SMB 3.0 share path to store VHDs and VM configuration files

Once the share path is given properly as mentioned above, you can create virtual machines on your Hyper-V server.

Tuesday, December 8, 2015

Installing Nutanix CE on nested ESXi 5.5 without using SSD

This is a great way to test and learn Nutanix using the Community Edition (CE), which is totally free, with the Nutanix software-based Controller VM (CVM) and the Acropolis hypervisor. This article will help you install Nutanix CE on nested ESXi 5.5 without using SSD drives.

-Download links :


-Extract the downloaded *.gz file and you will get a *.img file. Rename that file to ce-flat.vmdk.
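From a Linux or macOS shell this step is just a decompress and a rename; the downloaded filename below is only a placeholder for whatever the CE image is actually called:

gunzip ce.img.gz            # placeholder name; yours will differ
mv ce.img ce-flat.vmdk      # the flat extent referenced by the descriptor below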


Files
-Now create a descriptor file as given below :

# Disk DescriptorFile
version=4
encoding="UTF-8"
CID=a63adc2a
parentCID=ffffffff
isNativeSnapshot="no"
createType="vmfs"

# Extent description
RW 14540800 VMFS "ce-flat.vmdk"
# The Disk Data Base
#DDB

ddb.adapterType = "lsilogic"
ddb.geometry.cylinders = "905"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.longContentID = "2e046b033cecaa929776efb0a63adc2a"
ddb.uuid = "60 00 C2 9b 69 2f c9 76-74 c4 07 9e 10 87 3b f9"
ddb.virtualHWVersion = "10" 

-Copy the above content, paste it in a text file and save it as ce.vmdk.
-Create a VM on ESXi 5.5

VM properties
-RAM (min) 16 GB
-CPU (min) 4 (here I used 2 virtual sockets * 6 cores per socket)
-Network adapter E1000
-VM hardware version 8
-Guest OS selected as Linux CentOS 4/5/6 (64-bit)
-Initially the VM is created without a hard disk
-Enable CPU/ MMU virtualization


CPU/ MMU Virtualization
-Upload ce.vmdk and ce-flat.vmdk to the datastore

Datastore


-Add hard disk (ce.vmdk) to the VM
 
-Add 2 more hard disks to the VM of 500 GB each, so now we have a total of 3 hard disks.

Hard disk 1 - ce.vmdk - SCSI (0:0)
Hard disk 2 - 500 GB - SCSI (0:1) (this drive will be emulated as SSD drive)
Hard disk 3 - 500 GB - SCSI (0:2)

-Download the *.vmx file from the datastore, add the line vhv.enable = "TRUE" to it and upload it back to the datastore. This is to enable nested hypervisor installation on ESXi 5.5.

-To emulate an SSD, edit the VM settings - Options - Advanced - General - Configuration Parameters - Add Row, and add the name scsi0:1.virtualSSD with value 1
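Both of the tweaks above end up as plain key/value lines in the VM's .vmx file, so if you prefer the shell over the GUI you can append them to the downloaded .vmx before uploading it back. A sketch, assuming the file is saved locally as ce.vmx and the VM is powered off:

echo 'vhv.enable = "TRUE"' >> ce.vmx          # allow the nested hypervisor
echo 'scsi0:1.virtualSSD = "1"' >> ce.vmx     # present hard disk 2 (SCSI 0:1) as an SSD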

-Power up the VM and Nutanix will start the installation wizard
-Log in as root with the password nutanix/4u
-vi /home/install/phx_iso/phoenix/sysUtil.py
-Press Insert to enter insert mode, then edit SSD_rdIOPS_thresh = 50 and SSD_wrIOPS_thresh = 50 (a sed alternative is sketched after these steps)
-Save the file (press Esc, then type :wq! to save) and reboot
-Log in with 'install' and hit enter (no password)
-Click proceed
-The installer will now verify all prerequisites and then you can enter the following details
-Host IP, subnet mask, gateway
-CVM IP, subnet mask, gateway
-Select create single-node cluster and enter DNS IP (8.8.8.8)
-Scroll down through the license agreement and select accept and click proceed
-Set promiscuous mode on the vSwitch to Accept
-Once the installation is complete, you can access the PRISM dashboard with the CVM IP address and your NEXT account credentials
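As referenced in the sysUtil.py step above, the same edit can also be done non-interactively with sed instead of vi. A sketch, assuming the two thresholds appear as plain top-level assignments in that file:

sed -i 's/^SSD_rdIOPS_thresh.*/SSD_rdIOPS_thresh = 50/' /home/install/phx_iso/phoenix/sysUtil.py
sed -i 's/^SSD_wrIOPS_thresh.*/SSD_wrIOPS_thresh = 50/' /home/install/phx_iso/phoenix/sysUtil.py
reboot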


PRISM dashboard

Monday, November 30, 2015

Nutanix : a web-scale hyper converged infrastructure solution for enterprise datacenters

Nutanix is an industry leader in hyper-converged infrastructure and software-defined storage optimized for virtual workloads. You can think of it as a cluster-in-a-box solution with compute, storage and hypervisor consolidated together into a 1U or 2U enclosure. Interestingly, in a Nutanix architecture there is no RAID and no need for SAN storage either. Storage is totally local, using direct-attached disks (a combination of both SSD and SAS disks) for storing data.

What does it look like?


Front and rear side of a Nutanix appliance (eg : NX-1000)


Each Nutanix box contains 4 independent nodes which are clustered together. This is shown in the figure below.

Nutanix box with 4 nodes

Each of these nodes operates independently with its own CPU, RAM, HDDs, etc., and all the nodes are clustered together. So each time you want to increase compute and storage capacity, you can add more boxes (with 1, 2 or 4 nodes depending on the need) to the cluster. A detailed logical architecture of a single node is given below.

Single Nutanix node architecture

You can see that in each node there are SSDs as well as SAS HDDs for storage. There is also a Controller VM, which is actually a virtual storage controller that runs on each and every node, improving scalability and resiliency while preventing performance bottlenecks. This Controller VM is something like a VSA, but it does more than that: it is more intelligent than a traditional VSA and is capable of functionalities like automated tiering, data locality, de-duplication and much more. All storage controllers in a cluster communicate with each other, forming the Nutanix Distributed File System.

For each read there are 3 levels of cache: an in-memory cache within each node, then a hot tier (SSDs) and finally a cold tier (SAS HDDs). The hypervisor communicates with the Controller VM just like it would communicate with a physical storage controller. When a write operation happens, the VM contacts the virtual storage controller and the data is written first to the local SSDs. To ensure protection, the data is then replicated to multiple nodes in the cluster, so that it is always available even if a node fails. We can have RF2 (2-way replication) or RF3 (3-way replication). It is an auto-healing system: if a node fails and only one copy of some data is left, the system will automatically identify it (using MapReduce-style analytics) and replicate it to other nodes. A quick way to check the replication factor from the command line is sketched below.
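If you want to see this on a live cluster, the replication factor is visible from nCLI on any Controller VM. A minimal sketch; the exact field names in the output can differ slightly between releases:

ncli container list     # each container reports its replication factor (2 or 3)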

If you want to add more nodes, all you have to do is connect them to the network and power them on; they will be auto-discovered using an auto-discovery protocol which runs on top of IPv6. So it's very easy to add a new node to a cluster. You can dynamically expand your cluster resources by adding more boxes without shutting down the cluster. Rolling upgrades can also be done without downtime by updating the Controller VMs one by one across the cluster. Each node is clustered at the Nutanix architecture level, and you can cluster them at the hypervisor level too (say, a VMware ESXi cluster using vCenter Server), providing a highly available web-scale hyper-converged solution.

DELL and Nutanix have partnered together and introduced the DELL XC Series appliances, optimized for virtual workloads.

References :
www.nutanix.com