Paper: Microsoft Hyper-V scalability with EMC Symmetrix VMAX

EMC has released a white paper titled Microsoft Hyper-V scalability with EMC Symmetrix VMAX. The 17-page paper highlights a Hyper-V scalability test performed by EMC during the creation of one of the largest Hyper-V environments in the world.

The environment consists of a 16-node Hyper-V cluster backed by a Symmetrix VMAX storage array, scaling to 64 virtual machines per cluster node for a total of 1,024 virtual machines. The goal was to see how well the Symmetrix VMAX storage array performs and how its performance can be optimized.

For the initial deployment of the VMs, System Center Virtual Machine Manager (SCVMM) was used, combining templates with Windows PowerShell to create the VMs. Because SCVMM deploys VMs over the network, however, populating the environment this way would have taken days. It was therefore decided to use the TimeFinder local storage replication feature, which was capable of duplicating a fully populated data LUN containing 64 VMs 15 times.
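
As an illustration, template-based deployment with the SCVMM PowerShell snap-in might look roughly like the sketch below; the server, template, host, and path names are all hypothetical, not taken from the paper:

    # Minimal sketch, assuming the SCVMM PowerShell snap-in is installed;
    # all names and paths are hypothetical.
    Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager

    Get-VMMServer -ComputerName "scvmm01" | Out-Null
    $template = Get-Template | Where-Object { $_.Name -eq "W2K8R2-Template" }
    $hostNode = Get-VMHost -ComputerName "hvnode01"

    # Create 64 VMs on one cluster node. Each deployment copies the
    # template VHD over the network, which is why this approach is slow
    # compared to array-based cloning with TimeFinder.
    1..64 | ForEach-Object {
        New-VM -Template $template -Name ("VM{0:D3}" -f $_) `
               -VMHost $hostNode -Path "C:\ClusterStorage\Volume1"
    }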

The performance results were measured by starting the virtual machines in parallel on a group of four parent nodes, moving to the next group every 30 minutes.
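
A staged startup of that kind could be scripted along the lines of the following sketch; the node grouping and the use of SCVMM's Start-VM with -RunAsynchronously are assumptions about the test harness, not the paper's actual script:

    # Minimal sketch of a staged parallel VM startup; node names and
    # grouping are hypothetical.
    $nodeGroups = @(
        @("hvnode01","hvnode02","hvnode03","hvnode04"),
        @("hvnode05","hvnode06","hvnode07","hvnode08"),
        @("hvnode09","hvnode10","hvnode11","hvnode12"),
        @("hvnode13","hvnode14","hvnode15","hvnode16")
    )
    foreach ($group in $nodeGroups) {
        foreach ($node in $group) {
            # Start every VM on the node without waiting for completion,
            # so startups within a group run in parallel.
            Get-VM | Where-Object { $_.HostName -eq $node } |
                ForEach-Object { Start-VM -VM $_ -RunAsynchronously }
        }
        Start-Sleep -Seconds 1800   # 30-minute interval between groups
    }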

The graphs in the paper show that as the I/O from the virtual machines increased, array activity increased to accommodate the requests: IOPS reached nearly 40,000 after 5 minutes, and by the end of the test the disks were over 85% busy.

Optimizing performance can be achieved by:

  • Utilizing multiple HBAs, so that saturation of the I/O paths is prevented
  • Utilizing multiple front-end controllers, preventing a queue-full condition that causes an I/O bottleneck on the host server
  • Using EMC’s multipathing software PowerPath, which optimizes load balancing (see the powermt sketch after this list)
  • Quick-formatting NTFS volumes to preserve thin provisioning space savings, so that disk space is consumed on an as-needed basis (see the sketch after this list)
  • Avoiding dynamic VHDs for heavy workloads, as a fixed VHD performs 10 to 15 percent better than a dynamic VHD (also shown in the sketch below)
  • Using appropriate drive counts when using Virtual Provisioning, spreading the data devices in a thin pool across enough drives to keep up with the aggregated I/O requirements
  • Watching the accumulated I/O load on Cluster Shared Volumes (CSVs), since I/O can arrive in parallel from any number of parent nodes in the cluster
  • Sizing snap pools with sufficient drives for the change rate and workload, spreading the save devices across enough drives to support the copy-on-first-write activity coming from the source devices
  • Using metadevices with a larger number of smaller hypers, which perform better than metadevices with a small number of large hypers
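
On the multipathing point, PowerPath’s powermt CLI can display and set the load-balancing policy. This is a minimal sketch run from an elevated prompt on a host with PowerPath installed; the choice of the Symmetrix-optimized policy here is an assumption, not a recommendation from the paper:

    # Minimal sketch using PowerPath's powermt CLI.
    powermt display dev=all        # list every device and its paths across the HBAs
    powermt set policy=so dev=all  # Symmetrix-optimized load balancing (assumption)
    powermt save                   # persist the configuration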
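
And for the first two bullets above, a quick format plus a fixed-size VHD might be created as follows. This sketch assumes a Windows Server host where the Storage and Hyper-V PowerShell modules are available; the drive letter, label, path, and size are hypothetical:

    # Minimal sketch; drive letter, label, path, and size are hypothetical.

    # A quick format (Format-Volume's default, made explicit here) writes
    # only the NTFS metadata instead of zeroing every block, so a thinly
    # provisioned LUN keeps consuming space on an as-needed basis.
    Format-Volume -DriveLetter E -FileSystem NTFS -NewFileSystemLabel "CSV1" -Full:$false

    # A fixed VHD is allocated in full up front, avoiding the block
    # allocation overhead that makes dynamic VHDs slower under heavy load.
    New-VHD -Path "E:\VMs\vm001.vhd" -SizeBytes 40GB -Fixed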

The paper further discusses an even larger environment, in which seven 16-node clusters providing a total of 7,168 VMs are attached to a scaled-out VMAX storage solution.
