
Friday, April 28, 2017

Storage Spaces Direct - Volumes and Resiliency

Storage Spaces Direct (S2D) is the Microsoft implementation of software-defined storage (SDS). This article briefly explains the different types of volumes that can be created on an S2D cluster. Once you enable S2D using the Enable-ClusterS2D cmdlet, it automatically claims all eligible physical disks in the cluster and forms a single storage pool. On top of this pool you can create multiple volumes, as explained below.
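As a minimal sketch of that first step (cmdlet names are the Windows Server 2016 defaults; the pool name S2D* assumes the auto-generated "S2D on &lt;ClusterName&gt;" naming), enabling S2D and inspecting the resulting pool might look like:

```powershell
# Enable Storage Spaces Direct on the current failover cluster;
# this claims all eligible local disks and builds a single pool
Enable-ClusterS2D

# Inspect the automatically created pool
Get-StoragePool -FriendlyName S2D* |
    Select-Object FriendlyName, HealthStatus, Size, AllocatedSize

# List the physical disks that were claimed into the pool
Get-StoragePool -FriendlyName S2D* | Get-PhysicalDisk |
    Select-Object FriendlyName, MediaType, CanPool
```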

Mirror
  • Recommended for workloads that have strict latency requirements or that need lots of mixed random IOPS
  • Eg: SQL Server databases or performance-sensitive Hyper-V VMs
  • If you have a 2-node cluster, Storage Spaces Direct will automatically use two-way mirroring for resiliency
  • If your cluster has 3 or more nodes, it will automatically use three-way mirroring
  • A three-way mirror can sustain two fault domain failures at the same time
New-Volume -FriendlyName "Volume A" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S* -Size 1TB
  • You can create a two-way mirror by specifying -PhysicalDiskRedundancy 1
New-Volume -FriendlyName "Volume A" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S* -Size 1TB -PhysicalDiskRedundancy 1
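To confirm which mirror type a volume actually received, you can query the virtual disk behind it (a sketch using the standard Storage module; run against the S2D cluster):

```powershell
# Show the resiliency of each virtual disk in the pool:
# PhysicalDiskRedundancy 2 = three-way mirror, 1 = two-way mirror
Get-VirtualDisk |
    Select-Object FriendlyName, ResiliencySettingName, PhysicalDiskRedundancy
```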

Parity
  • Recommended for workloads that write less frequently, such as data warehouses or "cold" storage, traditional file servers, VDI etc.
  • Creating dual parity volumes requires a minimum of 4 nodes; a dual parity volume can sustain two fault domain failures at the same time
New-Volume -FriendlyName "Volume B" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S* -Size 1TB -ResiliencySettingName Parity
  • You can create single parity volumes using the command below
New-Volume -FriendlyName "Volume B" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S* -Size 1TB -ResiliencySettingName Parity -PhysicalDiskRedundancy 1

Mixed/ Tiered / Multi-Resilient (MRV)
  • In Windows Server 2012 R2 Storage Spaces, storage tiers are defined by dedicated physical media: SSD for the performance tier and HDD for the capacity tier
  • In Windows Server 2016, tiers are differentiated not only by media type but also by resiliency type
  • MRV = Three-way mirror + dual-parity
  • In an MRV, the three-way mirror portion acts as the performance tier and the dual parity portion as the capacity tier
  • Recommended for workloads that write in large, sequential passes, such as archival or backup targets
  • Writes land in the mirror portion of the volume and are gradually rotated into the parity portion later
  • Each MRV has a 32 MB write-back cache by default
  • ReFS starts rotating data into the parity portion once the mirror portion reaches 60% utilization; as utilization increases, data is moved to the parity portion faster
  • A minimum of 4 nodes is required to create an MRV
New-Volume -FriendlyName "Volume C" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S* -StorageTierFriendlyNames Performance, Capacity -StorageTierSizes 1TB, 9TB
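The tier names used above must already exist in the pool. On a Windows Server 2016 S2D cluster the default mirror (Performance) and parity (Capacity) tier templates are created for you when S2D is enabled; a quick check might look like this (a sketch, assuming the default tier names):

```powershell
# Verify the tier templates that New-Volume will combine into the MRV
Get-StorageTier |
    Select-Object FriendlyName, ResiliencySettingName, PhysicalDiskRedundancy
```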


Tuesday, December 27, 2016

Highly Available SOFS On Clustered Storage Spaces

This article briefly explains the design and the steps required to deploy a highly available scale-out file server (SOFS) on clustered storage spaces using Windows Server 2012 R2.

Note: Here we are using only a single JBOD, but there are several Storage Spaces certified JBODs available in the market (e.g. DATAON) which are enclosure aware. That means when you use multiple JBODs together, you get data redundancy at the JBOD level too.


  1. JBOD is connected to both file servers using shared SAS HBA connectors
  2. MPIO is enabled on both servers (SERVER 01/ 02)
  3. Make sure JBOD disks are available to both servers
  4. In this case we have a total of 24 disks (6 SSDs and 18 SAS HDDs)
  5. Create new storage pool
    1. By default all available disks are included in a pool named the primordial pool
    2. If the primordial pool is not listed under Storage Spaces, the storage does not meet the requirements of Storage Spaces
    3. If you select the primordial pool, all available disks will be listed under physical disks
    4. Select the option to create a new storage pool
    5. Give it a name
    6. Select physical disks you want to be in the pool
    7. Hot spares can be configured too at this stage
  6. Verify new storage pool is listed under storage pools
  7. Next step is to create virtual disks (these are not VHDX files; they are called spaces)
    1. Now select the new storage pool that you have just created in step 5
    2. Click on tasks, select new virtual disk
    3. Give a name
    4. Select the storage layout (Mirror, Parity, Simple)
    5. Choose provisioning type (Thin, Fixed)
    6. Specify size
  8. Now create volumes
    1. Right click the virtual disk (space) that you have just created in the previous step and select new volume
    2. Select server name and then the virtual disk name
    3. Specify volume size, drive letter, file system type, allocation unit size and volume label 
  9. Create failover cluster using the 2 file servers
    1. Provide a cluster name
    2. Volumes that you created in step 8 will be listed as available volumes
    3. Add those as cluster shared volumes
    4. Now it appears in C:\ClusterStorage\
  10. Next step is to create a highly available SOFS
    1. In Failover Cluster Manager: Roles - New Clustered Role - File Server - File Server for scale-out application data (SOFS)
    2. Provide client access point name (eg: SMB-FS01)
    3. Right click on SOFS role in failover cluster manager and select add shared folder
    4. Choose SMB share server applications
    5. Provide a name (eg: DATA01)
    6. Local path to share (C:\ClusterStorage\volume1\shares\DATA01)
    7. Remote path to share (\\SMB-FS01\DATA01)
Now you have a highly available file share (\\SMB-FS01\DATA01), built over clustered storage spaces, to store your virtual machine files.
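For reference, the GUI walkthrough above can be sketched in PowerShell roughly as follows. This is a hedged outline, not the exact configuration from the walkthrough: names such as POOL01, VDISK01, FS-CLUS01 and the access-granting group DOMAIN\Hyper-V-Hosts are placeholders, and parameters like size and file system should be adjusted to match your environment.

```powershell
# Step 5: create the pool from the available (primordial) disks
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName POOL01 `
    -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks

# Step 7: create a mirrored, fixed-provisioned space
New-VirtualDisk -StoragePoolFriendlyName POOL01 -FriendlyName VDISK01 `
    -ResiliencySettingName Mirror -ProvisioningType Fixed -Size 2TB

# Step 8: initialize, partition and format the new space
Get-VirtualDisk -FriendlyName VDISK01 | Get-Disk |
    Initialize-Disk -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel DATA01

# Step 9: build the two-node cluster and add the volume as a CSV
New-Cluster -Name FS-CLUS01 -Node SERVER01, SERVER02
Get-ClusterAvailableDisk | Add-ClusterDisk | Add-ClusterSharedVolume

# Step 10: add the SOFS role and create the SMB share
Add-ClusterScaleOutFileServerRole -Name SMB-FS01
New-Item -ItemType Directory -Path C:\ClusterStorage\Volume1\Shares\DATA01
New-SmbShare -Name DATA01 -Path C:\ClusterStorage\Volume1\Shares\DATA01 `
    -FullAccess DOMAIN\Hyper-V-Hosts
```

Run these on one of the cluster nodes; the share then resolves as \\SMB-FS01\DATA01 exactly as in the GUI procedure.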

Reference: Microsoft