Capacity planning has always been a key phase in most virtualization projects. Unfortunately, only a limited number of customers see real value in this activity, as it requires expensive products, the skills to use them, and a significant amount of time to produce results that are sometimes only partially useful.
While the adoption of capacity planning tools remains very low, their importance is higher than ever as virtual infrastructures grow in complexity and add more dimensions to consider.
Virtualization architects no longer deal only with well-known problems like virtual machine density per host (VMs per core) and proper storage capacity for basic server consolidation vs. VDI use cases.
Here are three good examples:
- The optimal use of next-generation CPUs with six or more cores increases VM density but depends on network capacity, so the adoption of different technologies, like 10Gbit Ethernet, should become a fundamental constraint to consider during planning.
- The capability to run archived virtual machines to verify the integrity of guest operating systems after a hot backup, something vendors will start to deliver within 6-9 months, requires additional, unplanned resources inside virtual infrastructures that should be accounted for in any new virtualization project.
- The advent of new high-performance remote desktop protocols for VDI will require multiple GPUs inside virtualization hosts, and their number will depend on how many virtual desktops the company wants to run and what kind of workloads will be executed inside them.
In 2011, GPU processor speed and on-board memory size may become key elements to evaluate in VDI projects.
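The interplay described above — more cores raising VM density until another resource becomes the bottleneck — can be sketched as a back-of-envelope calculation. This is a minimal illustration, not a real planning tool: the function name and all the sample figures (VMs per core, per-VM bandwidth and memory) are assumptions chosen only to show how the binding constraint shifts.

```python
def max_vm_density(cores, vms_per_core, nic_gbit, avg_vm_mbit, ram_gb, avg_vm_ram_gb):
    """Return the VM count a host can carry, bounded by its scarcest resource.

    All parameters are hypothetical planning inputs, not vendor figures.
    """
    by_cpu = cores * vms_per_core            # CPU-bound ceiling
    by_net = (nic_gbit * 1000) // avg_vm_mbit  # network-bound ceiling (Gbit -> Mbit)
    by_ram = ram_gb // avg_vm_ram_gb         # memory-bound ceiling
    return min(by_cpu, by_net, by_ram)

# A six-core host with 10Gbit Ethernet: CPU is the limit.
print(max_vm_density(6, 6, 10, 200, 96, 2))  # -> 36

# The same host with a 1Gbit NIC: the network, not the CPU, caps density.
print(max_vm_density(6, 6, 1, 200, 96, 2))   # -> 5
```

The second call shows the point made in the first example: without enough network capacity, next-generation CPUs cannot deliver the density they promise, which is exactly the kind of dimension capacity planning has to capture.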
Other recent and upcoming technologies may add other dimensions to capacity planning and there are at least two approaches to manage this increasing complexity.
The first one is to fully embrace the unified/fabric computing approach, where one or more vendors working as a joint venture offer a complete, monolithic stack that is sized and sold to serve a specific number of virtual machines, running a specific kind of workload, for a specific number of users.
The other is to stick with more flexible virtual data center designs and start seriously investing in capacity planning, assuming that existing products will evolve rapidly and introduce new dimensions well before customers really need to consider them.