Benchmarks: 64-blade storage performance with VMware VI 3.5

The VMware performance team has published a new study of VMFS performance with 64 blades running VMware Infrastructure 3.5 concurrently accessing the same shared volume through a 2Gbps Fibre Channel link:

…It is clear from Figure 1 that except for sequential read there is no drop in aggregate throughput as we scale the number of hosts. The reason sequential read drops is that the sequential streams coming in from different ESX Server hosts are no longer sequential when intermixed at the storage array, and thus become random. Writes generally do better than reads because they are absorbed by the write cache and flushed to disks in the background.

…Virtual machines deployed in typical customer environments may not have as high a rate and therefore may be able to scale further. In general, because of varying block sizes, access patterns, and number of outstanding commands, the results you see in your VMware environment will depend on the types of applications running. The results will also depend on the capabilities of your storage and whether it is tuned for the block sizes in your application. Also, processing very small commands adds some compute overhead in any system, be it virtualized or otherwise…
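The study's point about sequential streams turning random is easy to see with a small simulation. The following is a minimal Python sketch, not from the study itself; the host count, block counts, and offsets are purely illustrative. Each host reads its own region of the shared volume in strictly sequential order, yet the merged request order arriving at the array jumps across the volume:

```python
# Hypothetical illustration: per-host sequential reads look random
# once they are interleaved at the storage array.
NUM_HOSTS = 4          # assumed number of ESX Server hosts, for illustration
BLOCKS_PER_HOST = 5    # assumed short streams, to keep the output readable

# Each host reads its own region of the shared volume sequentially.
streams = [
    [host * 1000 + i for i in range(BLOCKS_PER_HOST)]
    for host in range(NUM_HOSTS)
]

# Round-robin interleaving approximates how concurrent requests from
# many hosts arrive at the array's queue.
interleaved = [stream[i] for i in range(BLOCKS_PER_HOST) for stream in streams]

print(interleaved)
# [0, 1000, 2000, 3000, 1, 1001, 2001, 3001, ...]
# Adjacent requests jump across the volume, so the array sees a
# random-like pattern and sequential prefetching stops helping.
```

Nothing here models the array itself; the interleaving alone shows why aggregate sequential-read throughput drops as hosts are added, while writes, which the study notes are absorbed by the write cache and flushed in the background, are less affected by the ordering.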