
Sunday, March 11, 2018

PowerShell quick start guide

This post is for all those who would like to kickstart their learning of PowerShell from the very basic level. When I started learning PowerShell, I wrote a few articles so that they would be helpful for folks wondering how to start, and so that I could refer back to them whenever needed. In this post I am simply putting all of them together in proper order so that anyone can easily make use of them.

PowerShell 101 blog series
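
If you want to try something right away, the classic discovery trio below is all you need to start exploring on your own (a small standalone example, separate from the series itself):

  Get-Command -Noun Service        # discover cmdlets that work with services
  Get-Help Get-Service -Examples   # read usage examples for a cmdlet
  Get-Service | Get-Member         # inspect the objects a cmdlet returns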

Tuesday, July 18, 2017

Best practices while building a Hyper-V host

This article briefly explains some of the best practice considerations while building a stand-alone Hyper-V 2012 R2/2016 host. You can use any compatible hardware, but here I will be explaining using Dell PowerEdge servers as I work with them every day.

  1. Select proper hardware

    You have to be really careful before purchasing a server. Analyze the requirements first and work a bit on capacity planning too. For example, a PowerEdge R630 is a good choice to start with as it is a 1U system with 2 processors. It can take up to 1.5 TB of memory, but most SMB customers generally go with somewhere around 128 GB. Choosing the right network controller is also very important as it directly impacts the data transfer performance of the virtual machines. If you are planning to use converged networking on the host, select an appropriate adapter. I recommend at least one dual-port 10G network card for a converged network configuration. If you are looking for redundancy at the network card level, go with two dual-port 10G cards. In case of budget limitations, you can also go with multiple dual-port or quad-port 1G cards as per your requirements.

  2. Number of disks, type of disks and RAID controller

    On a stand-alone host, customers will mostly use the local drives, and if they need additional storage it will be provisioned from a SAN. The number of disks and the type of disks (SSD, SAS, NL-SAS etc.) have a direct impact on performance at the storage level. It is also very important to select the right RAID controller. The RAID types supported, the size of the controller cache, read/write policies etc. are some of the major parameters to consider while choosing the RAID controller card. Dell uses PERC (PowerEdge RAID Controller). For example, you can select the PERC H730P, which has 2 GB of cache memory and a BBU, and supports RAID levels 0, 1, 5, 6, 10, 50 and 60. The H730P also supports 4 KB block size disk drives. For more info please have a look at my article RAID configuration using PERC.

  3. Always use and follow HCL (Hardware Compatibility List)
  4. Configure out-of-band management. Dell uses iDRAC which helps you to manage your server remotely
  5. Update BIOS, firmware and drivers to the latest and greatest version
  6. Make sure you install all the necessary Windows updates
  7. Partition style, file system and AUS (Allocation Unit Size) of the drive where VM files will be saved

    Say you are going to save all the VM files in drive D. While creating this drive, select the GPT partition style. If you are running Hyper-V 2012 R2, use the NTFS file system with a 64 KB AUS. If you are on Hyper-V 2016, use ReFS with a 4 KB AUS. Please refer to this MSFT article for more info.
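
    In PowerShell, preparing such a drive could look like the sketch below (assuming the new data disk shows up as disk number 1; adjust for your system):

      # Assumed: the new data disk is disk number 1
      Initialize-Disk -Number 1 -PartitionStyle GPT
      New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter D

      # Hyper-V 2012 R2: NTFS with a 64 KB allocation unit size
      Format-Volume -DriveLetter D -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "VM"

      # Hyper-V 2016: ReFS with a 4 KB allocation unit size instead
      # Format-Volume -DriveLetter D -FileSystem ReFS -AllocationUnitSize 4096 -NewFileSystemLabel "VM"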

  8. Always create a NIC team and select the right teaming mode

    The most widely used teaming mode is switch independent + dynamic load balancing, as it is the least complicated in terms of configuration and has no dependency on your switches. But if your switches support VLT (for Dell) or vPC (for Cisco) technology, then the best teaming mode is LACP + dynamic load balancing, which gives you redundancy as well as the aggregated throughput of all the active links in the team.
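
    Both modes can be created with New-NetLbfoTeam; a minimal sketch, assuming two 10G ports named NIC1 and NIC2:

      # Switch independent + dynamic (no switch-side configuration needed)
      New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2" `
          -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

      # LACP + dynamic (requires a matching port channel on the switches)
      # New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2" `
      #     -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic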

  9. Use separate VLANs for different types of traffic

    It is recommended to use separate VLANs for management, VM traffic, iSCSI, live migration and iDRAC.

  10. If using converged networking, assign proper minimum bandwidths as in the sketch below. A reference article with QoS recommendations is linked here.
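
    A sketch of a weight-based converged setup, assuming the team from item 8 and example weights (tune the weights so all vNICs total around 100; the VLAN line also illustrates item 9):

      # Create the vSwitch with weight-based minimum bandwidth mode
      New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
          -MinimumBandwidthMode Weight -AllowManagementOS $false

      # Host vNICs for each traffic type
      Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
      Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
      Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 10

      # Assign minimum bandwidth weights per traffic type
      Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
      Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 20
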
  11. Use a minimum number of vSwitches
  12. Configure MPIO if adding additional storage from a SAN
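
    A minimal sketch of enabling MPIO for iSCSI storage (the feature installation requires a reboot before the claim takes effect):

      # Install the Multipath I/O feature
      Install-WindowsFeature -Name Multipath-IO

      # Claim iSCSI-attached disks for MPIO
      Enable-MSDSMAutomaticClaim -BusType iSCSI
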
  13. Set jumbo frames for iSCSI interfaces 
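
    For example, on an iSCSI NIC named iSCSI1 (an assumed name; the exact keyword and value can vary by driver):

      Set-NetAdapterAdvancedProperty -Name "iSCSI1" `
          -RegistryKeyword "*JumboPacket" -RegistryValue 9014

      # Verify end to end with a non-fragmenting ping to the SAN
      ping -f -l 8000 <SAN iSCSI IP>
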
  14. Disable NetBIOS over TCP/ IP and DNS registration on iSCSI NICs

    Please check my post: Best practice recommendations for iSCSI network adapters
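
    A sketch of both settings for an assumed interface alias iSCSI1 (for NetBIOS, the value 2 means 'disable NetBIOS over TCP/IP'):

      # Turn off DNS registration on the iSCSI interface
      Set-DnsClient -InterfaceAlias "iSCSI1" -RegisterThisConnectionsAddress $false

      # Disable NetBIOS over TCP/IP on the same adapter
      $idx = (Get-NetAdapter -Name "iSCSI1").InterfaceIndex
      Get-CimInstance Win32_NetworkAdapterConfiguration -Filter "InterfaceIndex = $idx" |
          Invoke-CimMethod -MethodName SetTcpipNetbios -Arguments @{ TcpipNetbiosOptions = 2 }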

  15. Enable shared-nothing live migration. For more info please check my previous post
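
    The PowerShell side of this could look like the sketch below (Kerberos authentication additionally requires constrained delegation to be set up in AD):

      Enable-VMMigration
      Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos `
          -VirtualMachineMigrationPerformanceOption Compression
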
  16. If using 10G NICs, make use of VMQ

    Please have a look at this MSFT article, which provides tips on VMQ CPU assignment
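
    For example, steering VMQ interrupts away from core 0 could look like this (assumed adapter name and values; on hyperthreaded systems use even processor numbers):

      # Let the 10G adapter's VMQ queues use 8 cores starting at processor 2
      Set-NetAdapterVmq -Name "10G-1" -BaseProcessorNumber 2 -MaxProcessors 8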

  17. If antivirus is installed, add exclusions so VHD/ VHDX files are not scanned
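
    With Windows Defender, for example, the exclusions could be added like this (a sketch; include your VM folders, assumed here to be E:\VM, as well):

      Add-MpPreference -ExclusionExtension ".vhd", ".vhdx", ".avhd", ".avhdx"
      Add-MpPreference -ExclusionPath "E:\VM"
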
  18. Run necessary stress tests to benchmark the system

    Benchmarking helps you get an overview of the IOPS numbers in the best/worst case scenarios, so that you can provision your workload accordingly and avoid IO congestion at the storage level. You can use synthetic benchmarking tools like Iometer, DiskSpd etc. for conducting stress tests. Also go through my article: How to calculate total IOPS supported by a disk array. But please note that these calculations don't take into account the effect of the controller cache. That means the actual IOPS values while benchmarking the system will be higher than the values you get from the formula, precisely because of that cache. When you select a write-back policy, all write IOs land directly on the controller cache and are acknowledged there; later they are flushed to the disk array. The larger the cache size, the higher the IO performance. This shows the importance of choosing the right RAID controller.
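
    For example, a DiskSpd run against the VM volume could look like this (assumed parameters; size the test file well beyond the controller cache to see realistic disk numbers):

      # 60 s run: 64 KB blocks, random, 30% writes, 4 threads, 8 outstanding IOs
      diskspd.exe -c20G -d60 -b64K -r -w30 -t4 -o8 -Sh -L E:\VM\test.dat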

  19. Choose the High Performance OS power plan
  20. Make sure PSRemoting is enabled
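
    Both are one-liners (the GUID below is the built-in High performance power scheme):

      powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c   # High performance plan
      Enable-PSRemoting -Force                                   # enable PowerShell remoting
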
  21. Enable proper monitoring either using a monitoring tool or custom scripts
  22. As a DR plan you can consider using Hyper-V replica
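
    Once a replica server is configured, enabling replication for a VM is a single cmdlet (a sketch with assumed names):

      Enable-VMReplication -VMName "VM01" -ReplicaServerName "hv-replica.mydomain.local" `
          -ReplicaServerPort 80 -AuthenticationType Kerberos
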
  23. Enable RDP
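
    RDP can be switched on from PowerShell too (a sketch; the firewall rule group name assumes an English OS):

      Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server" `
          -Name fDenyTSConnections -Value 0
      Enable-NetFirewallRule -DisplayGroup "Remote Desktop"
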
  24. I strongly recommend creating a diagram to visualize the connectivity of the host to your network
  25. Organize VM files and folders properly as shown below
Here all the VMs are stored in the E:\VM folder

The virtual hard disks, VM config files, snapshot files etc. of each VM are organized in a proper folder structure

I hope this will be helpful if you are totally new to Hyper-V, and please feel free to let me know if you have any other best practice suggestions that I missed mentioning here. Cheers!

Sunday, July 10, 2016

What happens when you enable Intel VT

Let's consider the difference between virtualized and non-virtualized platforms.


Here VMM refers to the hypervisor. There are different privilege levels in the processor for instruction execution. These levels are called rings (Ring 0, 1, 2, 3).

When you enable Intel VT:
  • In a non-virtualized environment the OS runs in Ring 0, and a single operating system controls all hardware resources
  • Four privilege levels (rings) are employed on VT platforms
  • When VT is enabled, the hypervisor runs in Ring 0 instead of an OS, and the guest OS runs in Ring 1 or Ring 3
  • VT allows the hypervisor to present each guest OS a virtual machine (VM) environment that emulates the hardware environment needed by the guest OS

When you enable Intel VT-x:
  • Intel VT-x is a hardware-assisted virtualization technology
  • Hardware support for processor virtualization enables system vendors to provide simple, robust, and reliable hypervisor software
  • VT-x consists of a set of virtual machine extensions (VMX) that support virtualization of processor hardware for multiple software environments using virtual machines
  • A hypervisor written to take advantage of Intel® Virtualization Technology runs in a new CPU mode called “VMX Root” mode and the guest OS in “VMX Non-root” mode. The VMM manages the virtual machines through the VM Exit and VM Entry mechanisms
  • The hypervisor has its own privilege level (VMX Root) where it executes
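
If you want to verify this from a running Windows system, one way (my own addition, not from the Intel reference) is to query the Win32_Processor CIM class; note that VirtualizationFirmwareEnabled can report False once a hypervisor such as Hyper-V is already running:

  # Check VT-x support and firmware state from Windows (a sketch)
  Get-CimInstance -ClassName Win32_Processor |
      Select-Object Name, VMMonitorModeExtensions, VirtualizationFirmwareEnabled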

The figure below shows the difference in ring levels between Intel VT and Intel VT-x


Reference: Intel

Monday, November 30, 2015

Nutanix: a web-scale hyper-converged infrastructure solution for enterprise datacenters

Nutanix is an industry leader in hyper-converged infrastructure and software-defined storage optimized for virtual workloads. You can even think of it as a cluster-in-a-box solution with compute, storage and hypervisor consolidated into a 1U or 2U enclosure. Interestingly, in a Nutanix architecture there is no RAID and no need for SAN storage either. Storage is totally local, using direct-attached disks (a combination of both SSD and SAS disks) for storing data.

What does it look like?


Front and rear side of a Nutanix appliance (e.g. NX-1000)


Each Nutanix box contains 4 independent nodes which are clustered together. This is shown in the figure below.

Nutanix box with 4 nodes

Each of these nodes operates independently with its own CPU, RAM, HDDs etc., and all of the nodes are clustered together. So each time you want to increase compute and storage capacity, you can add more boxes (with 1, 2 or 4 nodes depending on the need) to the cluster. The detailed logical architecture of a single node is given below.

Single Nutanix node architecture

You can see that in each node there are SSDs as well as SAS HDDs for storage. There is also a controller VM, which is actually a virtual storage controller that runs on each and every node to improve scalability and resiliency while preventing performance bottlenecks. This controller VM is something like a VSA, but it does more than that: it is more intelligent than a traditional VSA and is capable of functionalities like automated tiering, data locality, de-duplication and much more. All the storage controllers in a cluster communicate with each other, forming the Nutanix distributed file system. For reads, there are 3 levels of cache: an in-memory cache within each node, then a hot tier (SSDs) and finally a cold tier (SAS HDDs). The hypervisor communicates with the controller VM just as it would with a physical storage controller. When a write operation happens, the VM contacts the virtual storage controller and the data is written first to the local SSDs. To ensure protection, the data is then replicated to multiple nodes in the cluster, so that it is always available even if a node fails. We can have RF2 (2-way replication) or RF3 (3-way replication). It is also an auto-healing system: if a node fails and only one copy of some data is left, the system will automatically identify it using MapReduce-type analytics and replicate it to other nodes.

If you want to add more nodes, all you have to do is connect them to the network and power them on; they will be auto-discovered using a discovery protocol that runs on top of IPv6. So it is very easy to add a new node to a cluster, and you can dynamically expand your cluster resources by adding more boxes without shutting down the cluster. Rolling upgrades can be done without downtime by updating the controller VMs one by one across the cluster. Each node is clustered at the Nutanix architecture level, and you can cluster at the hypervisor level too (say, a VMware ESXi cluster using vCenter Server), providing a highly available web-scale hyper-converged solution.

Dell and Nutanix have partnered together and introduced the Dell XC Series appliances optimized for virtual workloads.

References:
www.nutanix.com

Monday, March 30, 2015

How to run Hyper-V on a VM in Hyper-V

If you try to install the Hyper-V role on a VM in Hyper-V, you will get an error message stating "Hyper-V cannot be installed: A hypervisor is already running". Follow the steps given in this blog to get it installed using PowerShell commands; the key commands are sketched below.
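
On Windows Server 2016 and later hosts, the relevant commands are usually the following (a minimal sketch, assuming a VM named HV01; run them from the physical host while the VM is powered off):

  # Expose the host's virtualization extensions to the guest (assumed VM name HV01)
  Set-VMProcessor -VMName "HV01" -ExposeVirtualizationExtensions $true

  # Nested VMs need MAC address spoofing to reach the network
  Set-VMNetworkAdapter -VMName "HV01" -MacAddressSpoofing On

  # Dynamic memory is not supported with nested virtualization
  Set-VMMemory -VMName "HV01" -DynamicMemoryEnabled $false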

Note:
It is NOT recommended in a production environment. Use this only for self study and testing in labs. 

How to run Hyper-V on a VM in ESXi 5.5

You can now virtualize Hyper-V as a VM running on VMware ESXi 5.5. Initially, if you try to install the Hyper-V role, you will get an error message stating "Hyper-V cannot be installed". Follow the steps below to get it installed.

1. Enable CPU/ MMU virtualization in VM settings.


2. Power off the Hyper-V VM and remove it from the inventory. Browse the datastore, download the VMX configuration file of the VM and save it to your local machine. Open it using WordPad and change the guestOS line to:

guestOS = "winhyperv"

Save the file and upload it back to the datastore. Now right-click the VMX file and add it back to the inventory. Power on the VM and install the Hyper-V role. It should now install without any error.
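
For newer ESXi builds, community guidance usually combines the guestOS change with two more VMX keys; a hedged sketch of the full set (the two extra keys are an assumption, not part of the original steps):

guestOS = "winhyperv"
vhv.enable = "TRUE"
hypervisor.cpuid.v0 = "FALSE"

Here vhv.enable exposes hardware-assisted virtualization to the guest, and hypervisor.cpuid.v0 hides the hypervisor CPUID bit from the guest OS.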

Note:
It is NOT recommended in a production environment. Use this only for self study and testing in labs. 

Wednesday, September 17, 2014

What is IT IMS?

IT IMS refers to IT Infrastructure Management Services. Information Technology is critical in every business nowadays, right from entertainment, media, banking and automobile to education and so on. The use of computing devices and networked systems is growing rapidly. It is vital for a business to keep its hardware resources, networks and applications fully functional, available and running in 24/7 mode. The discipline of managing and maintaining the hardware devices, networks and applications for continuous service availability with no downtime is referred to as IT IMS.

It mainly covers the following areas:

- End user computing
- Desktop administration
- OS administration
- Desktop troubleshooting and maintenance
- Server administration
- Server monitoring
- Networking (routing and switching)
- Network security
- Network monitoring
- Service desk operations
- Virtualization of resources and its management
- Data center operations and maintenance
- Management of on-premise as well as cloud IT infrastructure etc.

IT IMS includes multiple operations and can be broadly classified into three areas, based on process models and operational levels.

1. Service desk operations
2. Network administration
3. Server administration