Guide to virtualization adoption – Part 1

Embracing virtualization is much easier said than done, especially when you decide to migrate your existing production environment rather than simply building a small virtual lab from scratch for testing and development scenarios.

What’s behind an enterprise virtualization project, and what issues will IT managers have to face along the way?

In this long series we’ll identify the phases a virtualization adoption is made of.

And if the virtualization market seems crowded today, we’ll discover that the opposite is true: it is still largely immature, and several areas have yet to be covered in a satisfactory way.

Candidates’ recognition

The very first phase of a large virtualization project involves identifying the physical servers to be virtualized.

This operation can be much harder than expected, and is possibly the most time-consuming one when the company lacks an effective enterprise management policy.

So one of the issues here is discovering and inventorying the entire datacenter.

A second, no less critical, task is performing a complete performance measurement across the whole server population, storing valuable data for the capacity planning phase.
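To give a rough idea of what such a measurement collector can look like, here is a minimal sketch that samples basic utilization counters on a single host and appends them to a CSV file for later analysis. It assumes the third-party psutil package; the sampling interval, field names and output path are illustrative choices, not something prescribed by any of the tools discussed in this article.

```python
# Minimal sketch: periodically sample CPU, memory and disk I/O on one host
# and append the samples to a CSV file for later capacity-planning analysis.
# Assumes the third-party psutil package; interval and file name are
# illustrative assumptions.
import csv
import time
from datetime import datetime

import psutil

SAMPLE_INTERVAL_S = 60          # one sample per minute (assumption)
OUTPUT_FILE = "perf_samples.csv"

def take_sample():
    """Return one timestamped sample of basic utilization counters."""
    io = psutil.disk_io_counters()
    return {
        "timestamp": datetime.now().isoformat(timespec="seconds"),
        "cpu_percent": psutil.cpu_percent(interval=1),
        "mem_percent": psutil.virtual_memory().percent,
        "disk_read_bytes": io.read_bytes,
        "disk_write_bytes": io.write_bytes,
    }

def main():
    fields = ["timestamp", "cpu_percent", "mem_percent",
              "disk_read_bytes", "disk_write_bytes"]
    with open(OUTPUT_FILE, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        if f.tell() == 0:            # write a header only for a new file
            writer.writeheader()
        while True:
            writer.writerow(take_sample())
            f.flush()
            time.sleep(SAMPLE_INTERVAL_S)

if __name__ == "__main__":
    main()
```

In a real project this collection would run for weeks across every server in the inventory, so that the data captures month-end peaks and other periodic load patterns, not just a typical day.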

This step is often overlooked because IT management usually has a rough idea of which servers are less resource-demanding and immediately assumes that’s enough to go ahead.

Sometimes a benchmarking analysis reveals unexpected bottlenecks, caused either by real problems or simply by a poor evaluation of the servers’ workloads.

The best advice in the first case is to pause the project immediately and resolve the bottleneck: moving a badly performing server into a virtual environment can seriously impact the whole infrastructure, making troubleshooting far more difficult afterwards.

A precise calculation of performance averages and peaks is also fundamental for the next phase of our virtualization adoption, capacity planning, when it will be necessary to consolidate complementary roles on the same host machine.
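As a small worked example of turning the raw samples into the averages and peaks that capacity planning needs, the following sketch reads the CSV produced by the collector above and reports the mean, 95th-percentile and maximum CPU utilization. The file layout matches the earlier sketch and the percentile choice is an assumption, not a value taken from any vendor methodology.

```python
# Minimal sketch: derive average and peak (95th-percentile) CPU utilization
# from the samples collected earlier. CSV layout and the 95th-percentile
# definition of "peak" are illustrative assumptions.
import csv
from statistics import mean

def summarize(csv_path="perf_samples.csv"):
    with open(csv_path, newline="") as f:
        cpu = [float(row["cpu_percent"]) for row in csv.DictReader(f)]
    if not cpu:
        return None
    cpu.sort()
    p95_index = min(len(cpu) - 1, int(round(0.95 * (len(cpu) - 1))))
    return {
        "samples": len(cpu),
        "cpu_avg": round(mean(cpu), 1),
        "cpu_p95": cpu[p95_index],   # peak defined as the 95th percentile
        "cpu_max": cpu[-1],
    }

if __name__ == "__main__":
    print(summarize())
```

Using a high percentile rather than the absolute maximum is a common way to describe peaks without letting a single short spike dominate the sizing decision, while the true maximum is still kept for sanity checking.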

After collecting the measurements, we have to identify good candidates for virtualization among the inventoried population.

Contrary to what some customers may think (or some consultants may claim), not every physical server can be virtualized today.

Three factors are critical in deciding which service can go into a virtual machine: virtualization overhead, dependency on highly specific hardware, and product support.

Virtualization overhead is something future improvements in virtualization will mitigate more and more, but for now it still has to be taken seriously.

I/O workload is a critical stop issue in virtualization adoption, and servers that rely heavily on data exchange cannot be migrated so easily.

Databases and mail servers are particularly hard to move into virtual infrastructures.

In both cases the overhead that virtualization adds to the I/O stream has a significant impact on performance, sometimes to the point that migration is discouraged.

But there is no general rule for these or other server types: it really depends on the workload.

There are case studies where customers virtualized such servers without any particular effort, and others where the migration succeeded only after the virtual machines were given double the expected resources.

The second stop issue is related to the special hardware on which production servers depend.

At the moment virtualization products are able to virtualize the standard range of ports, including old serial and parallel ones, but vendors are still unable to virtualize new hardware components on demand.

A telling example, without looking very far, is modern, powerful video adapters, needed for game development or CAD/CAM applications, which are today’s most hotly contested unsupported hardware.

The third stop issue in confirming a server as a virtualization candidate is product support.

The market has only been seriously considering modern server virtualization for the last two years, and vendors are very slow to state that they support their products inside virtual machines.

It’s easy to understand why: the number of factors in a virtual infrastructure that affect performance is so large that application behaviour can be severely influenced by something the vendor’s support staff cannot control, or may not even know is present.

Microsoft itself, while offering a virtualization solution, has been reluctant to support its products inside its own Virtual Server, and even today a few Windows Server technologies remain unsupported.

So, even when a server seems a good candidate for virtualization, the final word belongs to the vendor of the applications running on it.

While every virtualization provider has its own, usually undisclosed, list of supporting vendors, it’s always better to query your application’s vendor directly for confirmation.

Going virtual without support is a high risk to take, and is not advisable even after a long period of testing.
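Pulling the three factors above together, a first-pass screen over the inventory could be as simple as the following sketch. The server names, flags and the rule of treating heavy I/O as "review rather than reject" are illustrative assumptions, not criteria taken from any real assessment tool.

```python
# Minimal sketch: screen inventoried servers against the three stop issues
# discussed above (I/O overhead, special hardware, vendor support).
# Data structure, field names and sample entries are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    io_heavy: bool            # heavy I/O workload (databases, mail servers, ...)
    special_hardware: bool    # depends on hardware the hypervisor cannot emulate
    vendor_supported: bool    # application vendor confirms support in a VM

def screen(servers):
    """Split servers into likely candidates and servers needing further review."""
    candidates, review = [], []
    for s in servers:
        if s.special_hardware or not s.vendor_supported:
            review.append(s)          # hard blockers until resolved
        elif s.io_heavy:
            review.append(s)          # possible, but needs load testing first
        else:
            candidates.append(s)
    return candidates, review

if __name__ == "__main__":
    fleet = [
        Server("fileserver01", io_heavy=False, special_hardware=False, vendor_supported=True),
        Server("sqlcluster01", io_heavy=True, special_hardware=False, vendor_supported=True),
        Server("cadstation01", io_heavy=False, special_hardware=True, vendor_supported=False),
    ]
    ok, todo = screen(fleet)
    print("candidates:", [s.name for s in ok])
    print("needs review:", [s.name for s in todo])
```

The point of a screen like this is not to automate the decision, but to make the reasons for excluding or deferring a server explicit and reviewable before capacity planning starts.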

From a product point of view, the market still offers few alternatives in this area, even though the candidates’ recognition problem can be approached by four kinds of specialists: hardware vendors, operating system vendors, application vendors and neutral virtualization specialists.

Hardware vendors that provide big iron for virtualization and offer outsourcing services, like IBM or HP, usually have internal technologies for recognizing virtualization candidates.

In rare cases these tools are even available for customer use, like the IBM Consolidation Discovery and Analysis Tool (CDAT).

Operating system vendors do not currently provide tools for this, but that is about to change: all of them, from Microsoft to Sun, through Novell and Red Hat, are going to implement hypervisors in their platforms and will have to offer tools to accelerate virtualization adoption.

It’s no coincidence that Microsoft announced at the WinHEC 2006 conference in May a new product called Virtual Machine Manager, addressing the needs of this phase and more.

Application vendors will hardly ever offer specific tools for virtualization, even though they know more than anybody else about what the task requires.

The best move we could expect from them would be an application profiling tool, featuring a database of average values, to be used in performance comparisons between physical and virtual test environments.

Today, a concrete solution for customers comes from the fourth category, the neutral virtualization specialists.

Among them the best known is probably PlateSpin with its PowerRecon, which offers a complete and flexible solution for inventorying and benchmarking the datacenter’s physical machines, eventually passing data to physical-to-virtual migration tools, which we’ll cover in the fourth phase of our virtualization adoption.

In the next part we’ll address the delicate phase of capacity planning, on which the success or failure of the whole project depends.

This article originally appeared on SearchServerVirtualization.