
Tuesday, December 27, 2016

Highly Available SOFS On Clustered Storage Spaces

This article briefly explains the design and the steps required to deploy a highly available Scale-Out File Server (SOFS) on clustered storage spaces using Windows Server 2012 R2. A PowerShell sketch of the same workflow is included after the steps.

Note: Here we are using only a single JBOD, but there are several Storage Spaces certified JBODs available on the market (eg: DataON) that are enclosure aware. That means when you use multiple JBODs together, you get data redundancy at the JBOD level too.


  1. JBOD is connected to both file servers using shared SAS HBA connectors
  2. MPIO is enabled on both servers (SERVER 01/ 02)
  3. Make sure JBOD disks are available to both servers
  4. In this case we have a total of 24 disks (6 SSD and 18 SAS HDD)
  5. Create a new storage pool
    1. By default, all available disks are included in a pool named the primordial pool
    2. If the primordial pool is not listed under storage spaces, the storage does not meet the requirements of storage spaces
    3. If you select the primordial pool, all available disks will be listed under physical disks
    4. Select the option to create a new storage pool
    5. Give it a name
    6. Select physical disks you want to be in the pool
    7. Hot spares can also be configured at this stage
  6. Verify new storage pool is listed under storage pools
  7. Next step is to create virtual disks (these are not VHDX files; they are called spaces)
    1. Now select the new storage pool that you have just created in step 5
    2. Click on tasks, select new virtual disk
    3. Give it a name
    4. Select the storage layout (Mirror, Parity, Simple)
    5. Choose provisioning type (Thin, Fixed)
    6. Specify size
  8. Now create volumes
    1. Right click the virtual disk (space) that you have just created in the previous step and select new volume
    2. Select the server name and then the virtual disk name
    3. Specify volume size, drive letter, file system type, allocation unit size and volume label 
  9. Create a failover cluster using the two file servers
    1. Provide a cluster name
    2. The volumes that you created in step 8 will be listed as available volumes
    3. Add those as cluster shared volumes
    4. They now appear under C:\ClusterStorage\
  10. Next step is to create a highly available SOFS
    1. In Failover Cluster Manager - Roles - New Clustered Role - File Server - File Server for scale-out application data (SOFS)
    2. Provide a client access point name (eg: SMB-FS01)
    3. Right click the SOFS role in Failover Cluster Manager and select add shared folder
    4. Choose the SMB share profile for server applications
    5. Provide a name (eg: DATA01)
    6. Local path to the share (C:\ClusterStorage\volume1\shares\DATA01)
    7. Remote path to the share (\\SMB-FS01\DATA01)
Now you have a highly available file share (\\SMB-FS01\DATA01), built on clustered storage spaces, where you can store your virtual machine files.
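
If you prefer scripting, the PowerShell sketch below mirrors the same workflow end to end. Treat it as a minimal outline rather than a tested deployment script: the names used here (Pool01, VDisk01, CLUSTER01, SMB-FS01, DATA01, SERVER01/02) and the sizes are placeholders for your environment, and it assumes a single storage subsystem is visible to both servers.

# Steps 1-2: install the required features and let MPIO claim the SAS paths
Install-WindowsFeature -Name Multipath-IO, Failover-Clustering, FS-FileServer -IncludeManagementTools
Enable-MSDSMAutomaticClaim -BusType SAS

# Steps 3-5: pool the JBOD disks (primordial disks are the ones reported as poolable)
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName Pool01 `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks
# Optional hot spare, as in step 5.7:
# Set-PhysicalDisk -FriendlyName PhysicalDisk23 -Usage HotSpare

# Step 7: create a space (virtual disk) with mirror layout and fixed provisioning
New-VirtualDisk -StoragePoolFriendlyName Pool01 -FriendlyName VDisk01 `
    -ResiliencySettingName Mirror -ProvisioningType Fixed -Size 1TB

# Step 8: initialize, partition and format the new disk
Get-VirtualDisk -FriendlyName VDisk01 | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel Volume1 -Confirm:$false

# Step 9: build the cluster and add the volume as a cluster shared volume
New-Cluster -Name CLUSTER01 -Node SERVER01, SERVER02
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"   # name assigned by the cluster; verify with Get-ClusterResource

# Step 10: SOFS role and a continuously available share
Add-ClusterScaleOutFileServerRole -Name SMB-FS01
New-Item -ItemType Directory -Path C:\ClusterStorage\Volume1\Shares\DATA01
New-SmbShare -Name DATA01 -Path C:\ClusterStorage\Volume1\Shares\DATA01 -ContinuouslyAvailable $true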

Reference: Microsoft 

Tuesday, December 8, 2015

Installing Nutanix CE on nested ESXi 5.5 without using SSD

Nutanix Community Edition (CE) is completely free and is a great way to test and learn Nutanix; it ships with the Nutanix software-based controller VM (CVM) and the Acropolis hypervisor. This article will help you install Nutanix CE on a nested ESXi 5.5 host without using SSD drives.

-Download links:


-Extract the downloaded *.gz file and you will get a *.img file. Rename that file to ce-flat.vmdk.


-Now create a descriptor file as given below:

# Disk DescriptorFile
version=4
encoding="UTF-8"
CID=a63adc2a
parentCID=ffffffff
isNativeSnapshot="no"
createType="vmfs"

# Extent description
RW 14540800 VMFS "ce-flat.vmdk"
# The Disk Data Base
#DDB

ddb.adapterType = "lsilogic"
ddb.geometry.cylinders = "905"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.longContentID = "2e046b033cecaa929776efb0a63adc2a"
ddb.uuid = "60 00 C2 9b 69 2f c9 76-74 c4 07 9e 10 87 3b f9"
ddb.virtualHWVersion = "10" 

-Copy the above content, paste it into a text file, and save it as ce.vmdk. Note that the extent size on the RW line (14540800 sectors of 512 bytes, roughly 7 GB) must match the actual size of your ce-flat.vmdk; adjust it if your CE image version differs.
-Create a VM on ESXi 5.5

VM properties
-RAM (min) 16 GB
-CPU (min) 4 (here I used 2 virtual sockets * 6 cores per socket)
-Network adapter E1000
-VM hardware version 8
-Guest OS selected as Linux CentOS 4/5/6 (64-bit)
-Initially the VM is created without a hard disk
-Enable CPU/ MMU virtualization
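
For reference, these properties map roughly to the following .vmx entries. This mapping is my illustration under the 12-vCPU layout above, not part of the original steps:

memsize = "16384"
numvcpus = "12"
cpuid.coresPerSocket = "6"
ethernet0.virtualDev = "e1000"
guestOS = "centos64Guest"
virtualHW.version = "8"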


-Upload ce.vmdk and ce-flat.vmdk to the datastore



-Add hard disk (ce.vmdk) to the VM
 
-Add 2 more hard disks of 500 GB each to the VM. So now we have a total of 3 hard disks.

Hard disk 1 - ce.vmdk - SCSI (0:0)
Hard disk 2 - 500 GB - SCSI (0:1) (this drive will be emulated as an SSD)
Hard disk 3 - 500 GB - SCSI (0:2)

-Download the VM's *.vmx file from the datastore, add the line vhv.enable = "TRUE" to it, and upload it back to the datastore. This enables nested hypervisor installation on ESXi 5.5.

-To emulate an SSD, edit VM settings - options - advanced - general - configuration parameters - add row, and add the name scsi0:1.virtualSSD with value 1
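
After both edits, the .vmx file should contain these two extra lines (the SCSI ID in the second line matches hard disk 2 above):

vhv.enable = "TRUE"
scsi0:1.virtualSSD = "1"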

-Power up the VM and Nutanix will start the installation wizard
-Log in as root with the password nutanix/4u
-vi /home/install/phx_iso/phoenix/sysUtil.py
-Press Insert, then edit the values to SSD_rdIOPS_thresh = 50 and SSD_wrIOPS_thresh = 50
-Save the file (press Esc, then type :wq!) and reboot
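
These thresholds set the minimum random read/ write IOPS a disk must deliver for the installer to treat it as an SSD; lowering them to 50 lets the emulated disk pass that check. After the edit, the two lines in sysUtil.py should read:

SSD_rdIOPS_thresh = 50
SSD_wrIOPS_thresh = 50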
-Log in as 'install' and hit enter (no password)
-Click proceed
-The installer will now verify all prerequisites, and then you can enter the following details:
-Host IP, subnet mask, gateway
-CVM IP, subnet mask, gateway
-Select create single-node cluster and enter DNS IP (8.8.8.8)
-Scroll down through the license agreement and select accept and click proceed
-Set promiscuous mode on the vSwitch to Accept (required for the nested VMs' network traffic)
-Once the installation is complete, you can access the PRISM dashboard using the CVM IP address and your NEXT account credentials

