Guide to virtualization adoption – Part 7

In the last part of this series we examined some of the security challenges virtualization raises, focusing on disaster recovery strategies and the different approaches current products offer.

In this part we’ll consider an aspect of virtual datacenter management that is still rarely evaluated: automatic virtual machine provisioning and automation in general.

We’ll see how this apparently superfluous capability is about to become a top requirement, driving vendors’ efforts over the next few years.

From server sprawl to VM sprawl

At first, server virtualization’s ability to consolidate tens of physical servers onto a single host was considered a real solution to the uncontrolled proliferation of new servers in many companies. But early adopters experienced just the opposite.

The cost of implementing a new server in a virtual datacenter has dropped dramatically, since provisioning now takes hours, sometimes minutes, instead of weeks or months.

The only real limits on deployment are the availability of physical resources to assign to new virtual machines and, where Windows is used, the price of its licenses. The latter hardly matters when a large corporation has a volume licensing agreement with Microsoft.

Suddenly IT managers find that moving from planning to actual implementation is easier than ever before, which often gives a false perception of the infrastructure’s limits.

Multi-tier architectures now seem less complex to build, isolating applications for security, performance or compatibility reasons is the first scenario contemplated, and new applications are deployed for testing without hesitation.

In this scenario, companies that don’t enforce a strict policy face different challenges depending on their size.

Larger corporations, still trying to understand how to account for virtualization use in their cost centres, will grant new resources to the departments that request them, but will oblige infrastructure administrators to manually track down, at a later time, which virtual machines are really used and by whom.

Smaller companies, with no authorization process to follow, may grant provisioning capabilities to several individuals, even ones without deep virtualization skills, just to speed up projects.

Within a short time, anybody who needs a new virtual machine can simply assemble it and power it on.

In such uncontrolled provisioning environments two things usually happen. First, those who deploy new virtual machines have no understanding of the big picture: how many virtual machines a physical host can really handle, how many are planned to be hosted there, and which kinds of workloads are best suited to a given location. Second, every new deployment compromises the big picture itself, leading to performance issues and continuous rebuilding of consolidation plans.

Last but not least, every new virtual machine means a set of operating system and application licenses, which require special attention before being assigned.

Without really realizing it’s happening, a company can grow a jungle of virtual machines with no documentation, no associated licenses, no precise role, or even no owner, compromising the overall health of its virtual datacenter.

The need for automation

Even in a more controlled environment, once the virtual datacenter grows significantly, IT managers need new ways to perform everyday operations, with tools able to scale further when needed.

The biggest problem to address when handling a large number of virtual machines is their placement: as we have said many times during this series, correct distribution of workloads is mandatory to achieve good performance with the available physical resources.

Choosing the best host to serve a virtual machine, based on its free resources and the workloads it already hosts, is not easy even during the planning phase, where capacity planning tools are highly desirable.

Doing the same operation manually during everyday datacenter life is overwhelming, not only because of the time needed to decide on placement, but also because the whole environment is almost liquid, with machines moving from one host to another to balance resource usage, for host maintenance, or for other reasons.

The best placement becomes a relative concept, and administrators find it more difficult every day to identify it clearly.
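To make the problem concrete, here is a minimal, purely illustrative sketch of such a placement decision in Python. It uses a naive greedy rule (pick the host left with the most headroom after hosting the new machine); real products weigh far more factors, such as workload profiles, planned deployments and migration constraints, and this is not any vendor’s actual algorithm.

```python
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class Host:
    name: str
    cpu_free: float                                   # free CPU capacity, e.g. in GHz
    ram_free: float                                   # free memory, e.g. in GB
    vms: list[str] = field(default_factory=list)      # names of VMs already placed here


@dataclass
class VMRequest:
    name: str
    cpu: float
    ram: float


def place(vm: VMRequest, hosts: list[Host]) -> Host | None:
    """Greedy placement: choose the host that keeps the most headroom after hosting the VM."""
    candidates = [h for h in hosts if h.cpu_free >= vm.cpu and h.ram_free >= vm.ram]
    if not candidates:
        return None                                   # no host fits: the consolidation plan must be revised
    best = max(candidates, key=lambda h: min(h.cpu_free - vm.cpu, h.ram_free - vm.ram))
    best.cpu_free -= vm.cpu
    best.ram_free -= vm.ram
    best.vms.append(vm.name)
    return best
```

Even this toy rule would have to be re-run constantly, because the free resources of every host change each time a machine is deployed or migrated, which is exactly why the decision has to be automated.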

Another notable problem of large virtual infrastructures is the customization of virtual machines on deployment.

While virtualization technologies used in conjunction with tools like Microsoft Sysprep have made it easy to create clones and distribute them with new parameters, current deployment processes don’t scale well and only consider single operating systems.

In large infrastructures, business units rarely ask for single virtual machines; more often they ask for a multi-tier configuration. Just consider testing a new e-commerce site, which implies at least a front-end web server, a back-end database server, and a client machine, which could run automated tasks to measure performance and efficiency.

So every time these mini virtual infrastructures need to be deployed, IT administrators have to manually put in place specific network topologies, access permissions, service level agreement policies, and so on.

In such a scenario it’s also unlikely that the required virtual machines will need only the simple customization Sysprep can offer: installing specific applications, connecting to existing remote services, and executing scripts before and after deployment are all operations to be performed for each virtual infrastructure, with a huge loss of time.
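As a rough sketch of what an automated provisioning tool has to capture, the hypothetical structures below describe a multi-tier configuration as data, with per-tier pre- and post-deployment scripts. The `TierSpec`, `LabSpec` and `provision` names are invented for illustration; `provision` stands in for whatever platform-specific call actually clones and customizes the virtual machine.

```python
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class TierSpec:
    name: str                                              # e.g. "web", "db", "client"
    template: str                                          # golden image to clone from
    network: str                                           # isolated lab network segment
    pre_deploy: list[str] = field(default_factory=list)    # scripts run before the clone boots
    post_deploy: list[str] = field(default_factory=list)   # scripts run after customization


@dataclass
class LabSpec:
    name: str
    tiers: list[TierSpec]


def deploy_lab(lab: LabSpec, provision: Callable[[TierSpec], str]) -> list[str]:
    """Deploy every tier through a platform-specific `provision` callback; returns VM ids."""
    deployed = []
    for tier in lab.tiers:
        for script in tier.pre_deploy:
            print(f"[{lab.name}/{tier.name}] pre-deploy: {script}")
        vm_id = provision(tier)                            # clone + Sysprep-style customization
        for script in tier.post_deploy:
            print(f"[{lab.name}/{tier.name}] post-deploy: {script}")
        deployed.append(vm_id)
    return deployed
```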

Finally, if the required virtual infrastructure represents a typical scenario used to test several different stand-alone projects from several departments, it will have to be destroyed and recreated on demand.

On every new provisioning, both requestors and administrators will have to remember the correct settings and customizations for all tiers.
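Continuing the same hypothetical sketch, keeping the whole configuration in one definition is what allows a lab to be torn down and recreated identically on demand, without anyone having to remember per-tier settings. The template, network and callback names below are placeholders.

```python
ecommerce_lab = LabSpec(
    name="ecommerce-test",
    tiers=[
        TierSpec("web",    template="win2003-iis",   network="lab-net-1"),
        TierSpec("db",     template="win2003-sql",   network="lab-net-1"),
        TierSpec("client", template="winxp-loadgen", network="lab-net-1",
                 post_deploy=["run_load_tests.cmd"]),
    ],
)

# Every request for this scenario redeploys exactly the same topology and customizations.
# `my_platform_provision` is a placeholder for the real platform-specific provisioning call.
vm_ids = deploy_lab(ecommerce_lab, provision=my_platform_provision)
```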

An emerging market

Considering such significant risks and needs in today’s virtual datacenters, it’s not surprising that a few companies are working hard to offer reliable and scalable automated provisioning tools.

Young start-ups like Dunes from Switzerland, Surgient from Austin, and VMLogix from Boston have to compete against the current virtualization market leader, VMware, which decided to acquire know-how and an already available product from another young company, Akimbi, in the summer of 2006.

Akimbi Slingshot had proven to be an interesting product well before the acquisition, but VMware spent a lot of time improving it further, integrating it with its flagship ESX Server and VirtualCenter solutions.

This integration will be an important selling point, since it leverages the skills VMware customers have already acquired, inside a familiar management environment.

On the other side, more and more IT managers are looking at agnostic products able to automate virtual machine provisioning in mixed environments, where the chosen virtualization platform doesn’t become an issue.

Here the Surgient products (VQMS/VTMS/VMDS) or VMLogix LabManager have much more appeal, being able to support VMware platforms as well as Microsoft ones, and in the near future Xen too.

Apart from Dunes, all the mentioned vendors are now focusing their products on the very first practical application of automated provisioning: so-called virtual lab management.

So it’s easy to find a priority commitment to basic provisioning capabilities, like multi-tier deployments, enhanced customization of deployed clones, and physical resource scheduling.

And it’s probably all customers feel they need at the moment, while virtual datacenters have still to reach critical mass.

Tomorrow they will look for the features that are less easy to find today, like management of the provisioning authorization workflow, or license management.

In any case, the autonomic datacenter is still far away, and so far only Dunes, with its Virtual Service Orchestrator (VS-O), offers a true framework for performing full automation of today’s virtual datacenters.

This article originally appeared on SearchServerVirtualization.