Tuesday, December 8, 2015

Installing Nutanix CE on nested ESXi 5.5 without using SSDs

Nutanix Community Edition (CE) is a free way to test and learn Nutanix; it includes the software-based Controller VM (CVM) and the Acropolis hypervisor. This article will help you install Nutanix CE as a nested VM on ESXi 5.5 without using SSD drives.

-Download link : the Nutanix CE image (a *.img.gz file) is available from the Nutanix Next community portal (next.nutanix.com).


-Extract the downloaded *.gz file and you will get a *.img file. Rename that file to ce-flat.vmdk.
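On a Linux machine this step can be done with commands along these lines (the archive name here is only a placeholder for whatever file you downloaded; on Windows a tool such as 7-Zip can extract the .gz archive instead) :

gunzip ce.img.gz
mv ce.img ce-flat.vmdk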


Files
-Now create a descriptor file as given below :

# Disk DescriptorFile
version=4
encoding="UTF-8"
CID=a63adc2a
parentCID=ffffffff
isNativeSnapshot="no"
createType="vmfs"

# Extent description
RW 14540800 VMFS "ce-flat.vmdk"
# The Disk Data Base
#DDB

ddb.adapterType = "lsilogic"
ddb.geometry.cylinders = "905"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.longContentID = "2e046b033cecaa929776efb0a63adc2a"
ddb.uuid = "60 00 C2 9b 69 2f c9 76-74 c4 07 9e 10 87 3b f9"
ddb.virtualHWVersion = "10" 

-Copy the above content, paste it in a text file and save it as ce.vmdk.
-Create a VM on ESXi 5.5

VM properties
-RAM (min) 16 GB
-CPU (min) 4 (here I used 2 virtual sockets * 6 cores per socket)
-Network adapter E1000
-VM hardware version 8
-Guest OS selected as Linux CentOS 4/5/6 (64-bit)
-Initially the VM is created without a hard disk
-Enable CPU/ MMU virtualization


CPU/ MMU Virtualization
-Upload ce.vmdk and ce-flat.vmdk to the datastore

Datastore


-Add hard disk (ce.vmdk) to the VM
 
-Add 2 more hard disks of 500 GB each to the VM. So now we have a total of 3 hard disks :

Hard disk 1 - ce.vmdk - SCSI (0:0)
Hard disk 2 - 500 GB - SCSI (0:1) (this drive will be emulated as an SSD)
Hard disk 3 - 500 GB - SCSI (0:2)

-Download the VM's *.vmx file from the datastore, add the line vhv.enable = "TRUE" to it and upload it back to the datastore. This enables nested hypervisor installation on ESXi 5.5.

-To emulate an SSD, edit the VM settings - Options - Advanced - General - Configuration Parameters - Add Row, and add the name scsi0:1.virtualSSD with value 1
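After both of these edits, the relevant entries in the .vmx file should look roughly like the following (only these two lines are added; the rest of the file stays as it is) :

vhv.enable = "TRUE"
scsi0:1.virtualSSD = "1"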

-Power up the VM and Nutanix will start the installation wizard
-Log in as root with the password nutanix/4u
-vi /home/install/phx_iso/phoenix/sysUtil.py
-Press Insert (or i) to enter insert mode, then edit the thresholds to SSD_rdIOPS_thresh = 50 and SSD_wrIOPS_thresh = 50 (the edited lines are shown after this list)
-Save the file (press Esc, then type :wq!) and reboot
-Log in with the username 'install' and hit Enter (no password)
-Select Proceed
-The installer will now verify all prerequisites, and then you can enter the following details :
-Host IP, subnet mask, gateway
-CVM IP, subnet mask, gateway
-Select create single-node cluster and enter DNS IP (8.8.8.8)
-Scroll down through the license agreement, select Accept and then Proceed
-Set promiscuous mode on the vSwitch to Accept
-Once the installation is complete, you can access the Prism dashboard using the CVM IP address and your Nutanix NEXT account credentials
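The edited threshold lines in sysUtil.py referred to above should look roughly like this (an excerpt only; the rest of the file is left untouched) :

SSD_rdIOPS_thresh = 50   # lowered read-IOPS threshold so the nested virtual disk passes the SSD check
SSD_wrIOPS_thresh = 50   # lowered write-IOPS threshold so the nested virtual disk passes the SSD check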


PRISM dashboard

Monday, November 30, 2015

Nutanix : a web-scale hyper-converged infrastructure solution for enterprise datacenters

Nutanix is an industry leader in hyper-converged infrastructure and software-defined storage optimized for virtual workloads. You can think of it as a cluster-in-a-box solution, with compute, storage and hypervisor consolidated into a 1U or 2U enclosure. Interestingly, the Nutanix architecture has no RAID and no need for SAN storage either: storage is entirely local, using direct-attached disks (a combination of SSD and SAS drives) to store data.

What does it look like?


Front and rear sides of a Nutanix appliance (e.g. NX-1000)


Each Nutanix box contains 4 independent nodes which are clustered together. This is shown in the figure below.

Nutanix box with 4 nodes

Each of these nodes operates independently, with its own CPU, RAM, HDDs etc., and all the nodes are clustered together. Whenever you want to increase compute and storage capacity, you can add more boxes (with 1, 2 or 4 nodes, depending on the need) to the cluster. A detailed logical architecture of a single node is given below.

Single Nutanix node architecture

In each node there are SSDs as well as SAS HDDs for storage, and there is a Controller VM, a virtual storage controller that runs on every node to improve scalability and resiliency while preventing performance bottlenecks. The Controller VM is something like a VSA, but it does more than that: it is more intelligent than a traditional VSA and provides functionality such as automated tiering, data locality, deduplication and much more. All the storage controllers in a cluster communicate with each other, forming the Nutanix distributed file system. For reads there are 3 levels: an in-memory cache within each node, then the hot tier (SSDs) and finally the cold tier (SAS HDDs). The hypervisor communicates with the Controller VM just as it would communicate with a physical storage controller.

When a write happens, the VM contacts the virtual storage controller and the data is first written to the local SSDs. To protect the data, it is then replicated to other nodes in the cluster so that it remains available even if a node fails; you can have RF2 (2-way replication) or RF3 (3-way replication). The system is also self-healing: if a node fails and only one copy of some data is left, the system automatically detects this (using MapReduce-style analytics) and replicates that data to other nodes.

If you want to add more nodes, all you have to do is connect them to the network and power them on; they are discovered automatically using a discovery protocol that runs on top of IPv6, so adding a new node to a cluster is very easy. You can dynamically expand cluster resources by adding more boxes without shutting down the cluster, and rolling upgrades can be done without downtime by updating the Controller VMs one by one. Each node is clustered at the Nutanix architecture level, and you can cluster at the hypervisor level too (say, a VMware ESXi cluster managed by vCenter Server), providing a highly available, web-scale hyper-converged solution.

Dell and Nutanix have partnered to introduce the Dell XC Series appliances, optimized for virtual workloads.

References :
www.nutanix.com

Saturday, November 28, 2015

Shared Nothing Live Migration

Shared Nothing Live Migration is a Hyper-V 3.0 feature that lets us live migrate virtual machines from one Hyper-V server to another without shared storage or cluster membership.
 
Note : Failover clustering provides HA, but shared nothing live migration is a mobility solution that gives flexibility for the planned movement of VMs between Hyper-V hosts without downtime.

Hyper-V settings for Live migrations 

As a prerequisite for this, we need to standardize network connectivity on the Hyper-V host machines (e.g. vSwitches should have the same names for VM traffic, iSCSI traffic etc.). For the shared nothing live migration traffic we can use a separate VLAN (say, VLAN 90) so that it won't affect the local LAN.

Separate VLAN for live migration traffic


We also need to configure constrained delegation on the Hyper-V servers so that the Kerberos authentication protocol is used when managing the servers remotely. This is shown below.

Use Kerberos

Delegation to specified services
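With the network and delegation prerequisites in place, a shared nothing live migration can also be started from PowerShell instead of Hyper-V Manager. A rough example (the VM name, destination host and destination path below are made-up values for illustration) :

Move-VM -Name "VM01" -DestinationHost "HYPERV02" -IncludeStorage -DestinationStoragePath "D:\VMs\VM01"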


 

Best practice recommendations for iSCSI network adapters



Note : all of those settings are enabled by default; as a best practice, we need to disable them on all iSCSI NICs.

Also, if your network and network devices support jumbo frames, they should be enabled on the network adapters too.


Recommended BIOS settings for DELL PowerEdge 12G servers

BIOS settings for optimal performance

Memory mode : Optimizer
Node interleave : Disabled
Logical processor : Enabled
QPI frequency : Maximum frequency
CPU power management : Maximum performance
Turbo boost : Enabled
C1E : Disabled
C-states : Disabled
Memory frequency : Maximum performance

Thursday, November 5, 2015

RAID configuration using PERC

PERC stands for PowerEdge RAID Controller. Here we have 3 physical disks present. We will configure 2 virtual disks (VDs) in RAID 5 using these 3 physical disks.

VD00 - 100 GB
VD01 - 1.7 TB

Once the system starts, press Ctrl+R to enter the PERC configuration utility and follow the steps shown below.

No configuration present and 3 disks available

Press F2 and create new VD

VD00 properties

Click OK

VD00 - 100 GB created

Press F2 and add new VD

VD01 properties

VD00 and VD01 created

Now we have successfully created the 2 VDs. The next step is to initialize both of them.

Initialization of VD00

Start Init

Click OK

Initialization of VD00 in progress

Similarly, initialize the next VD too. Once it is complete, you can exit the PERC utility and reboot the machine.

Saturday, October 24, 2015

Enabling jumbo frames on a Hyper-V 2012 R2 Core NIC using PowerShell

Enabling jumbo frames on a NIC using PowerShell is shown below :

Enabling jumbo frames using PowerShell


To list advanced properties of NIC3 : Get-NetAdapterAdvancedProperty -Name "NIC3"

To change jumbo mtu to 9000 : Get-NetAdapterAdvancedProperty -Name "NIC3" -RegistryKeyword "*jumbopacket" | Set-NetAdapterAdvancedProperty -RegistryValue 9000
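After changing it, you can re-run the query to confirm that the value is now 9000 :

Get-NetAdapterAdvancedProperty -Name "NIC3" -RegistryKeyword "*jumbopacket"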


Jumbo frames

Jumbo frames are Ethernet frames with more than 1500 bytes of payload. Generally, jumbo frames can carry up to 9000 bytes, but variations exist. Enabling jumbo frames in specific cases (say, in your storage area network) can improve performance and network throughput. But if you enable them, every device in the path, including the source, the destination and the devices in between (switches etc.), must support jumbo frames and have them enabled.

Enabling jumbo frames on a VMware virtual network adapter (vmxnet3) is shown below :

Jumbo Packet on vmxnet3 adapter
Steps to enable jumbo frames on a VMware vSphere 4.1 vSwitch are shown below :

List vSwitch
By default the MTU is 1500. Say you want to change vSwitch1 to an MTU of 9000; it can be done as follows :

Enabling jumbo frame with MTU 9000 on vSwitch1
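The commands behind those two screenshots are most likely the standard esxcfg-vswitch ones, run from the ESXi console (shown here as a guide) :

esxcfg-vswitch -l                  (lists the vSwitches and their current MTU)
esxcfg-vswitch -m 9000 vSwitch1    (sets the MTU of vSwitch1 to 9000)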
You can verify connectivity after enabling jumbo frames by specifying the packet size with the ping command, as given below :

Verifying jumbo frames
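With a 9000-byte MTU, the largest payload you can ping without fragmentation is 8972 bytes (9000 minus 28 bytes of IP and ICMP headers). Typical commands, given here as a guide, are :

ping -f -l 8972 <target IP>        (Windows; -f forbids fragmentation, -l sets the payload size)
vmkping -d -s 8972 <target IP>     (ESXi; -d forbids fragmentation, -s sets the payload size)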



Creating VHD/ VHDX templates

Templates can save you a lot of time compared to building a server from scratch. The following are the steps to create a VHD/ VHDX template that can be used on Hyper-V servers.

1. Create a virtual machine (say Windows Server 2008 R2 with an 80 GB hard drive)
2. Make sure you create a fixed disk
3. Install the OS
4. Install Windows updates
5. Install the Hyper-V integration services
6. Install any applications as per your requirements

Once you are done with all the above steps, it's time to sysprep your machine. This is shown below :

sysprep

Make sure you check the Generalize and Shutdown options. Once the system completes the sysprep operation, it will automatically shut down. You can now take a copy of this VHD, rename it as you wish and save it to the location where you keep your templates. Also, make the file read-only so that you don't accidentally boot it up.
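The same generalize-and-shutdown run can also be started from an elevated command prompt using the standard sysprep switches (shown here as a guide) :

C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown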

Note : You can create VHD templates for OS earlier than Windows Server 2012 and VHDX templates for Windows Server 2012 and later.

Friday, October 2, 2015

VMware virtual switches

Virtual Standard Switch (VSS)

VSS


Virtual Distributed Switch (VDS)

VDS


Reference : mylearn.vmware.com


Monday, September 28, 2015

Ports on FC Switched Fabric (FC-SW)

The diagram below gives a brief idea of the different types of ports on a Fibre Channel switched fabric.

Ports on FC Switched Fabric



Friday, August 28, 2015

RAID controller

As the RAID controller is responsible for the operations on the RAID array drives, it is very important to have an enterprise-class controller for enhanced performance, increased reliability and fault tolerance. You can choose a RAID controller based on your business requirements and budget. Here I will be explaining the Dell PowerEdge RAID Controller (PERC). When choosing a controller, there are a few critical hardware features that affect performance to keep in mind :
  • Read policy
  • Write policy
  • Controller cache memory
  • CacheCade
  • Cut-through I/ O
  • FastPath

PERC Series

PERC H810 - high performance RAID controller that can be connected to JBOD

PERC H710P - ideal for implementing hybrid server platforms based on SAS HDDs with high performance and enterprise class reliability

PERC H310 - entry-level RAID controller (no cache)

Reference :
Dell
Microsoft

Saturday, August 8, 2015

How to calculate total IOPS supported by a disk array

IOPS stands for input/ output operations per second. 

Consider a RAID array with 4 disks in RAID 5. Each disk is a 4 TB 15K SAS drive. We can use the formulas below to calculate the maximum IOPS supported by the RAID array.

Raw IOPS = Disk Speed IOPS * Number of disks

Functional IOPS = (Raw IOPS * Write % / RAID penalty) + (Raw IOPS * Read %)

No. of disks = 4 (4 TB 15K SAS)
IOPS of a single 15K SAS disk = 175 - 210
RAID penalty = 4 (for RAID 5)
Read - Write ratio = 2:1 (say we have 66 % reads and 33 % writes)

From the above details :

Raw IOPS = 175 * 4 = 700
Functional IOPS = (700 * 0.33 / 4) + (700 * 0.66) = 519.75

Therefore, the maximum IOPS supported by this RAID array  = 520
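If you want to try different disk counts, RAID penalties or read/write mixes, the formula is easy to script. Below is a minimal Python sketch of the same calculation (the function name and the numbers are just the ones from this example) :

def functional_iops(disk_iops, num_disks, read_pct, write_pct, raid_penalty):
    # Functional IOPS = (Raw IOPS * Write % / RAID penalty) + (Raw IOPS * Read %)
    raw = disk_iops * num_disks
    return (raw * write_pct / raid_penalty) + (raw * read_pct)

# 4 x 15K SAS disks (~175 IOPS each), RAID 5 penalty of 4, 66 % reads / 33 % writes
print(functional_iops(175, 4, 0.66, 0.33, 4))   # prints 519.75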

The above calculation is entirely based on the assumption that the read-write ratio is 2:1. In real-world scenarios it will vary depending on the type of workload, which means workload characterization is also important when calculating the maximum IOPS value supported by your RAID array. It also depends on the RAID level, as the write penalty differs between RAID types. Disk type (SSD, SAS, SATA etc.), disk RPM and the number of disks also affect the total IOPS value.

From this we can conclude that if your current IOPS usage is close to the maximum supported, you have to be cautious: heavy workloads can lead to I/O contention, causing high latency and performance degradation on your storage server.

Thursday, July 9, 2015

Server load balancing using KEMP Load Master

This article explains the basic configuration steps for load balancing multiple web servers using a KEMP load balancer. In my setup, I have two web servers (INVLABSWEB01 and INVLABSWEB02) which are load balanced using a KEMP Load Master. For testing purposes I've used a virtual Load Master appliance (VLM-5000). After the installation is complete, you have to activate the license. Once you are done with that, you can get to the home page of the Load Master using a web browser, as shown below.

Home page

Now you have to add a virtual service. Click add new, provide a virtual address, give it a name, select a protocol and click add this virtual service.

Add new virtual service

Select a service type. Check the box to activate the service.


Expand the standard options and select the options as shown below. If you don't select the Force L7 option, the virtual service will be forced to Layer 4. Transparency can be enabled or disabled depending on the use case.

If a persistence mode is enabled, the same client will subsequently be connected to the same real server, depending on the mode selected. There is also a timeout value that can be set, which determines how long this particular connection is remembered.

The scheduling method determines how the Load Master selects a real server for a particular service. There are several methods, such as round robin, weighted round robin, least connection, resource based (adaptive) etc. Here I have selected round robin.

Basic properties and standard options

If you want to enable SSL acceleration, that can be done here.

SSL acceleration

Advanced options such as caching, compression and access control can be configured here.

Advanced properties

The Edge Security Pack (ESP) feature can be enabled here.

ESP feature

Click add new to add real servers to the virtual service (VS).

Real servers

Provide real server address, port number and click add this real server.

Add real server

Similarly, I've added two web servers (192.168.6.30 and 192.168.6.31) here.

Real servers

Click on view/ modify services to view the VS that you have just created.

Virtual services

Click on real servers to view the real servers (INVLABSWEB01 and INVLABSWEB02).

Real servers

Now both of my web servers are load balanced. If you want to take one of the servers out of the load balancer, click the Disable button for the respective server.

Reference :
Kemp Technologies

Wednesday, July 8, 2015

Multi-Master Model and FSMO Roles

Consider an enterprise with multiple Domain Controllers (DCs). A multi-master enabled database like Windows Active Directory (AD) allows updates to be made on any DC in the enterprise, but this creates the possibility of conflicts that can lead to problems. To avoid this, certain operations are handled by a single DC; because such a role is not permanently bound to one DC and can be moved, it is referred to as a Flexible Single Master Operation (FSMO) role. Currently there are 5 FSMO roles in Windows. These roles prevent conflicting operations and are vital for the smooth operation of AD as a multi-master system. Of the 5 FSMO roles, 2 are forest-wide (one set per forest) and 3 are domain-wide (one set per domain).

Forest wide roles

-Schema master : controls all updates and modifications to the schema (e.g. changes to the attributes of an object).

-Domain naming master : responsible for adding or removing domains in a forest.


Domain wide roles

-RID master : allocates pools of Relative IDs (RIDs) to the DCs within a domain. When an object is created it gets an SID, which consists of a domain SID (the same for all SIDs created in the domain) and a RID that is unique within the domain.

-PDC emulator : responsible for time sync, password changes etc.

-Infrastructure master : responsible for updating references from objects in its domain to objects in other domains. The infrastructure master role should not be on the same DC that hosts the Global Catalog (GC), unless there is only one DC in the domain. If they are on the same server, the infrastructure master will not function properly: it will never find data that is out of date, and so will never replicate changes to the other DCs in the domain. If all DCs in the domain host a GC, it doesn't matter which DC holds the infrastructure master role, as all DCs will already be up to date thanks to the GC.


If you want to transfer FSMO roles from one DC to another, you can follow the below steps.

To check current FSMO status
Steps before role transfer
Use ntdsutil to transfer the roles. You have to connect to the server to which you want to transfer the roles; the screenshot above shows the whole process, and a typical command sequence is given below.
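The sequence in that screenshot is essentially the standard ntdsutil dialogue; a typical run (given here as a guide, using the DC name from this lab) looks like this :

C:\> netdom query fsmo                   (optional: shows the current role holders)
C:\> ntdsutil
ntdsutil: roles
fsmo maintenance: connections
server connections: connect to server INVLABSDC01
server connections: quit
fsmo maintenance: transfer PDC
fsmo maintenance: transfer RID master
fsmo maintenance: transfer infrastructure master
fsmo maintenance: transfer naming master
fsmo maintenance: transfer schema master
fsmo maintenance: quit
ntdsutil: quit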

Transfer roles
Click Yes to transfer the role and then transfer all roles one by one.

FSMO status after role transfer
Now all roles are moved from INVLABSDC02 to INVLABSDC01.