Guide to virtualization adoption – Part 7

In the last part of this series we examined some of the security challenges virtualization raises, focusing on disaster recovery strategies and the different approaches current products offer.

In this new part we’ll consider an aspect of virtual datacenter management that is still rarely evaluated: automatic virtual machine provisioning and automation in general.

We’ll see how this apparently superfluous capability will become a top need very soon, driving vendors’ efforts in the next few years.

From server sprawl to VM sprawl

At the beginning, server virtualization’s capability to consolidate tens of physical servers onto a single host was considered a real solution to the uncontrolled proliferation of new servers in many companies. But early adopters experienced just the opposite.

The cost of implementing a new server in a virtual datacenter has dropped dramatically, considering provisioning now takes hours, sometimes minutes, instead of weeks or months.

The only real limitations on deployment are the availability of physical resources to assign to new virtual machines and, where Windows is used, its license price. The latter hardly matters when a large corporation has a volume licensing agreement with Microsoft.

Suddenly IT managers find that moving from planning to actual implementation is easier than ever before, which often gives a false perception of the infrastructure’s limits.

Multi-tier architectures now seem less complex to build, isolating applications for security, performance or compatibility reasons is the first scenario contemplated, and new applications are deployed for testing without hesitation.

In this scenario, companies that do not enforce a strict policy face different challenges depending on their size.

Larger corporations, still trying to understand how to account for virtualization use in their cost centres, will grant new resources to requesting departments, but will oblige infrastructure administrators to manually track down, at a later time, which virtual machines are really in use and by whom.

Smaller companies, with no authorization process to respect, may grant provisioning capabilities to several individuals, even ones without deep virtualization skills, just to speed up projects.

Within a short time, anybody who needs a new virtual machine can simply assemble it and power it on.

In such uncontrolled provisioning environments two things usually happen: first, those who deploy new virtual machines have no understanding of the big picture, namely how many virtual machines a physical host can really handle, how many are already planned to be hosted there, and which kinds of workloads are best suited for a given location. Second, every new deployment compromises that big picture, leading to performance issues and continuous rebuilding of consolidation plans.

Last but not least, every new virtual machine means a set of operating system and application licenses, which require special attention before being assigned.

Without really realizing it’s happening, a company can grow a jungle of virtual machines with no documentation, no tracked licenses, no precise role, and sometimes not even an owner, impacting the overall health of its virtual datacenter.

The need for automation

Even in a more controlled environment, when the virtual datacenter grows significantly IT managers need new ways to perform everyday operations, with tools able to scale further when needed.

The biggest problem to address when handling a large number of virtual machines is their placement: as we have said many times during this series, correct distribution of workloads is mandatory to achieve good performance with the given physical resources.

Choosing the best host to serve a virtual machine, depending on its free resources and the workloads it already hosts, is not easy even during the planning phase, where capacity planning tools are highly desirable.

Doing the same operation manually during everyday datacenter life is overwhelming, not only because of the time needed to decide placement, but also because the whole environment is almost liquid, with machines moving from one host to another to balance resource usage, for host maintenance, or for other reasons.

Best placement becomes a relative concept, and administrators find it more difficult every day to identify clearly.
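
To make the placement problem concrete, here is a minimal sketch of a naive greedy strategy, assuming a purely hypothetical data model of hosts and requests with only CPU and memory figures; real schedulers weigh far more factors (I/O, affinity rules, maintenance windows, migration cost) and re-evaluate decisions continuously.

```python
# Minimal sketch of a greedy placement heuristic over a hypothetical inventory:
# pick the host with the most free memory that can still satisfy the VM's
# CPU and memory requirements.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Host:
    name: str
    cpu_mhz_free: int
    mem_mb_free: int
    vms: List[str] = field(default_factory=list)


@dataclass
class VMRequest:
    name: str
    cpu_mhz: int
    mem_mb: int


def place(vm: VMRequest, hosts: List[Host]) -> Optional[Host]:
    # Keep only hosts that can satisfy both resource requirements.
    candidates = [h for h in hosts
                  if h.cpu_mhz_free >= vm.cpu_mhz and h.mem_mb_free >= vm.mem_mb]
    if not candidates:
        return None  # no host can take this workload: time to revisit capacity
    best = max(candidates, key=lambda h: h.mem_mb_free)
    best.cpu_mhz_free -= vm.cpu_mhz
    best.mem_mb_free -= vm.mem_mb
    best.vms.append(vm.name)
    return best


if __name__ == "__main__":
    hosts = [Host("esx01", 8000, 16384), Host("esx02", 12000, 8192)]
    for req in [VMRequest("web01", 2000, 4096), VMRequest("db01", 4000, 8192)]:
        target = place(req, hosts)
        print(req.name, "->", target.name if target else "unplaced")
```

Even in this toy form it’s clear why the “best” host is a moving target: every placement changes the free-resource picture for the next request.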

Another notable problem of large virtual infrastructures is the customization of virtual machines on deployment.

While virtualization technologies used in conjunction with tools like Microsoft Sysprep make it easy to create clones and distribute them with new parameters, current deployment processes don’t scale well and only consider single operating systems.

In large infrastructures, business units rarely require single virtual machines, asking more often for a multi-tier configuration. Just consider testing a new e-commerce site, which implies at least a front-end web server, a back-end database server, and a client machine that could run automated tasks to measure performance and efficiency.

So every time one of these mini virtual infrastructures needs to be deployed, IT administrators have to manually put in place specific network topologies, access permissions, service level agreement policies, and so on.

In such a scenario it’s also improbable that the required virtual machines will need only the simple customization Sysprep can offer: installing specific applications, connecting to existing remote services, executing scripts before and after deployment, and so on are all operations to be performed for each virtual infrastructure, with a huge loss of time.

Finally, if the required virtual infrastructure represents a typical scenario for testing several different stand-alone projects from several departments, it will have to be destroyed and recreated on demand.

On every new provisioning, both requestors and administrators will have to remember the correct settings and customizations for all tiers.
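
As an illustration only, here is a minimal sketch of the kind of declarative definition a lab management tool might capture for the e-commerce example above; the template names, hook scripts and deploy/destroy steps are hypothetical, not any vendor’s actual format.

```python
# Hypothetical declarative definition of a multi-tier test lab. The point is
# that network topology, per-tier customization and post-deployment hooks are
# captured once, so the whole environment can be recreated (and destroyed)
# on demand instead of being rebuilt by hand each time.

ECOMMERCE_LAB = {
    "network": {"name": "ecom-test", "isolated": True},
    "tiers": [
        {"name": "web",    "template": "win2003-iis",   "count": 1,
         "post_deploy": ["deploy_site.cmd"]},
        {"name": "db",     "template": "win2003-sql",   "count": 1,
         "post_deploy": ["restore_testdb.cmd"]},
        {"name": "client", "template": "winxp-loadgen", "count": 1,
         "post_deploy": ["run_load_tests.cmd"]},
    ],
}


def deploy(lab: dict) -> None:
    """Walk the definition and print the provisioning steps a lab manager
    product would perform (cloning, customization, hook execution)."""
    print(f"creating isolated network {lab['network']['name']}")
    for tier in lab["tiers"]:
        for i in range(tier["count"]):
            vm = f"{tier['name']}{i + 1:02d}"
            print(f"cloning {tier['template']} -> {vm}")
            for script in tier.get("post_deploy", []):
                print(f"  running post-deploy hook {script} in {vm}")


def destroy(lab: dict) -> None:
    """Tear the lab down so it can be recreated on demand."""
    for tier in lab["tiers"]:
        for i in range(tier["count"]):
            print(f"deleting {tier['name']}{i + 1:02d}")
    print(f"removing network {lab['network']['name']}")


if __name__ == "__main__":
    deploy(ECOMMERCE_LAB)
    destroy(ECOMMERCE_LAB)
```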

An emerging market

Considering such big risks and needs in today’s virtual datacenters, it’s not surprising that a few companies are working hard to offer reliable and scalable automated provisioning tools.

Young start-ups like Dunes from Switzerland, Surgient from Austin, and VMLogix from Boston have to compete against the current virtualization market leader, VMware, which decided to acquire know-how and an already available product from another young company, Akimbi, in the summer of 2006.

Akimbi Slingshot proved to be an interesting product well before the acquisition, but VMware has spent a lot of time improving it further, integrating it with its flagship ESX Server and VirtualCenter solutions.

This integration will be an important selling point, since it leverages the skills VMware customers have already acquired in a familiar management environment.

On the other side, more and more IT managers look at agnostic products able to automate virtual machine provisioning in mixed environments, where the chosen virtualization platform doesn’t become an issue.

Here Surgient’s products (VQMS/VTMS/VMDS) or VMLogix LabManager have much more appeal, being able to support VMware platforms as well as Microsoft ones, and in the near future Xen too.

Apart from Dunes, all the mentioned vendors are now focusing their products on the very first practical application of automated provisioning: so-called virtual lab management.

So it’s easy to find a priority commitment to basic provisioning capabilities, like multi-tier deployments, enhanced customization of deployed clones, and physical resource scheduling.

And that is probably all customers feel they need at the moment, while virtual datacenters still have to reach critical mass.

Tomorrow they will look for features that are harder to find today, like management of the provisioning authorization workflow, or license management.

In any case the autonomic datacenter is still far away, and so far only Dunes, with its Virtual Service Orchestrator (VS-O), offers a true framework for full automation of today’s virtual datacenters.

This article originally appeared on SearchServerVirtualization.

Guide to virtualization adoption – Part 8

At the end of our long journey through the virtualization world, we have finally arrived at the last stage of enterprise management.

After evaluating which security solutions best fit our needs in the last instalment of this series, we now have only performance measurement and reporting issues left to consider before having a really good understanding of all aspects of a virtualization adoption project.

Tracking virtual machine performance, to pinpoint problems or just to produce meaningful reports of resource consumption, is actually one of the most complex tasks for virtualization professionals. And not only because a virtual machine’s behaviour is strictly related to the underlying host, but also because it heavily depends on what other virtual machines are doing.

And, as with other challenges faced so far, we will see that the market currently offers only a few products really able to address these issues.

Virtualization needs new metrics

First of all we must understand that the traditional ways of measuring performance in a datacenter don’t successfully apply to virtual infrastructures.

Depending on the perspective, virtual servers are either practically identical to physical servers or completely different.

Looking from the inside, virtual machines offer all the traditional counters a performance monitor may need and usually tracks, so existing reporting products seem good enough if you simply install their agents in every guest operating system.

But in a virtual world some of the numbers obtained are much less valuable, while others are simply meaningless.

A typical example is memory consumption and memory paging in a VMware ESX Server environment.

VMware’s flagship product has a special feature called ballooning.

Thanks to ballooning, ESX can temporarily use for other purposes some of the memory the system administrator assigned to a virtual machine: at any moment a special driver included in the VMware Tools can request memory from the guest OS, just like inflating a balloon; that memory is then freed and immediately reallocated to other VMs that need it.

While this happens the guest operating system is obliged to page out, showing unexpected, slight performance degradation.

When everything is back to normal, ESX deflates the balloon and gives the memory back to its original virtual machine.

In the above scenario we have a guest OS reporting misleading memory and page file usage, which may lead to completely wrong deductions about how a virtual machine is performing.
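
A back-of-the-envelope sketch, with made-up numbers and variable names (not real VMware counters), of how the same virtual machine looks either starved or healthy depending on whether the balloon is taken into account:

```python
# Hypothetical example: a VM assigned 4096 MB whose balloon driver has
# inflated to 1024 MB. Inside the guest, memory looks nearly exhausted and
# the pagefile is busy, but the pressure really comes from the host
# reclaiming memory, not from the workload itself.

assigned_mb = 4096     # memory granted to the VM by the administrator
ballooned_mb = 1024    # memory reclaimed via the balloon (a host-level figure)
guest_used_mb = 3900   # what the guest OS reports as "in use"

# Memory the workload actually consumes, excluding the inflated balloon.
workload_mb = guest_used_mb - ballooned_mb

guest_view = guest_used_mb / assigned_mb   # looks like ~95% utilization
host_view = workload_mb / assigned_mb      # the workload really needs ~70%

print(f"guest-reported utilization: {guest_view:.0%}")
print(f"utilization net of balloon: {host_view:.0%}")
# An in-guest agent alone would flag this VM as memory-starved, while
# host-level counters show the shortage is on the host, not in the VM.
```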

Going further, we can easily recognize that some other measurements make sense only in relation to what’s happening on the host.

In a scenario where a virtual machine frequently reports very high CPU usage, we cannot simply conclude it’s time for a virtual hardware upgrade, add a second virtual CPU, and feel confident about an improvement.

Sometimes excessive vCPU usage means the virtual machine is not being served fast enough at the host level, which may require fine-tuning the hypervisor’s resource management or increasing the number of physical CPUs. And this can be discovered only by tracking specific values at the host level.
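
Again purely illustrative, with sample figures and a rule-of-thumb threshold rather than real counters from any specific tool: the same high in-guest CPU reading leads to opposite conclusions depending on how much of it is host-level “ready” time.

```python
# Hypothetical sample over a 20-second interval for one virtual machine.
# "used" is time the vCPU actually ran on a physical core; "ready" is time
# the vCPU was runnable but waiting for the host scheduler, a figure the
# guest OS can never see. Inside the guest, usage can look pegged near 100%
# because the guest cannot distinguish waiting from running.

interval_ms = 20000
vcpu_used_ms = 9000    # vCPU execution time measured at host level
vcpu_ready_ms = 7000   # time spent queued for a physical CPU

used_pct = vcpu_used_ms / interval_ms
ready_pct = vcpu_ready_ms / interval_ms

# 10% ready time is used here as an arbitrary illustrative threshold.
if ready_pct > 0.10:
    print(f"ready time {ready_pct:.0%}: the host is overcommitted; "
          "adding a vCPU would likely make contention worse")
else:
    print(f"used {used_pct:.0%}, ready {ready_pct:.0%}: "
          "the workload is genuinely CPU-bound; consider more vCPUs")
```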

So we need to change our measuring approach, but what exactly do we need to track?

In a highly dense virtual datacenter, with tens of virtual machines on a single host, we absolutely need to consider interdependencies and track the whole system as a single entity rather than as a sum of elements.

And since the relationship between virtual machines and hosts becomes critical, reporting solutions have to handle the liquidity of every virtual datacenter, seamlessly adapting to hot or cold guest operating system migrations within the infrastructure.

Last but not least, these products have to address scalability: when administrators have to consider the performance of thousands of virtual machines deployed on hundreds of hosts, reporting solutions must work in a fully automated fashion and provide smart summaries which are still human-readable and meaningful.
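
A rough sketch of the “liquidity” issue, using invented records: per-VM samples only become meaningful for host-level summaries once they are joined with a time-varying placement log rather than a static inventory.

```python
# Hypothetical data: placement events emitted whenever a VM starts or
# migrates, plus raw per-VM samples. A reporting engine has to keep the
# VM-to-host mapping as a time series to attribute load to the right host.

from collections import defaultdict

# (timestamp, vm, host) records from the placement log.
placements = [
    (0,   "web01", "esx01"),
    (0,   "db01",  "esx02"),
    (300, "web01", "esx02"),   # a live migration moved web01 at t=300
]

# (timestamp, vm, cpu%) samples collected from the platform.
samples = [(60, "web01", 40), (60, "db01", 70), (360, "web01", 55)]


def host_at(vm: str, ts: int) -> str:
    """Return the host a VM was on at a given time, from the placement log."""
    current = None
    for t, name, host in placements:
        if name == vm and t <= ts:
            current = host
    return current or "unknown"


per_host = defaultdict(list)
for ts, vm, cpu in samples:
    per_host[host_at(vm, ts)].append(cpu)

for host, values in per_host.items():
    print(f"{host}: avg vCPU load {sum(values) / len(values):.0f}% "
          f"across {len(values)} samples")
```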

Populating an almost empty segment

The performance tracking and reporting segment is one of the emptiest in today’s virtualization industry.

This is partly because of complexity, partly because demand is still limited, and partly because of little awareness that traditional solutions are quickly becoming inadequate.

Obviously virtualization platform vendors offer more or less enhanced reporting tools, but at the moment none of them addresses customers’ needs with a serious, dedicated solution.

So at the moment we have to look at third-party virtualization ISVs, which are providing only a few products addressing a limited market.

Among the current players we can certainly mention vizioncore, which focuses exclusively on VMware with its esxRanger.

This product provides a wide range of charts, tracking the performance history of virtual machines and hosts, and it’s a very good entry-level product.

vizioncore also offers a free edition which gives low-budget departments a decent capability to understand what’s happening in their infrastructure.

Devil Mountain Software (DMS) tries to embrace a much wider audience with its Clarity Suite 2006, supporting hardware virtualization solutions (VMware, Microsoft, but only Windows-based virtual machines) as well as application virtualization ones (Softricity, Altiris).

Clarity Suite is a hosted solution more focused on profiling virtualized workloads, comparing performance with a scoring system.

The solution does some simple correlation between virtual machine and host metrics, useful for capacity planning and what-if scenarios, but it’s still far from being a complete reporting system for virtualized environments.

Like vizioncore, DMS also offers a free version of Clarity Suite, which is unfortunately very limited in the number of deployable agents and in features.

A last company worth mentioning is the new entry Netuitive, which, like vizioncore, focuses on VMware ESX Server only, but offers innovative features: its SI solution automatically profiles virtual machine and host performance, creating behaviour profiles which it correlates and uses to recognize odd behaviours.

As soon as they appear, Netuitive SI reacts by asking the VMware infrastructure to reconfigure its resource pools, so that performance bottlenecks are addressed immediately, well before any human intervention.

Is reporting the first aspect where datacenter automation will begin?

This article originally appeared on SearchServerVirtualization.

Virtual hardware hot add may not be a Viridian exclusive

As Mark Russinovich’s WinHEC 2007 keynote confirms, Windows Server 2008 (formerly codenamed Longhorn) introduces some remarkable kernel changes, so that the new Microsoft operating system now supports hot add of hardware without any downtime.

This feature, called Dynamic Hardware Partitioning (DHP), depends on Windows itself and is designed to work with processors, memory and some PCI Express cards such as network cards (depending on the availability of DHP-aware drivers), regardless of whether the added hardware is physical or virtual.

DHP has been available in the OS since Windows Server 2003 Service Pack 1, where it’s limited to memory hot add. The same limitation will be present in the 32-bit versions of Windows Server 2008.

Another big limitation (valid for all versions, 32- or 64-bit) is that Microsoft still doesn’t support hot removal of hardware (planned for future Windows versions), so system administrators will still have to turn off the physical or virtual machine before removing a component.

So, in theory, every virtualization product on the market able to run a 64-bit Windows Server 2008 guest OS will also be able to offer the hot add capabilities Microsoft presented when talking about the upcoming Windows Server Virtualization hypervisor (codenamed Viridian).

And since the first version of Viridian will no longer expose hot add capability as previously announced, VMware and other virtualization providers have a chance to offer it before Microsoft itself.

Rumor: VMware to buy Opsware after IPO

SYS-CON is reporting a brief news item claiming VMware is in talks to buy US vendor Opsware once it has collected money from its initial public offering, expected this June.

While it’s expected that VMware will proceed with more acquisitions after the IPO, as of today there are no signs this would involve a target like Opsware, which provides datacenter automation capabilities and could extend VMware’s domain well beyond virtualization.

Despite that, such an acquisition may make sense, considering virtualization is the first step (and datacenter automation the second) toward general-purpose grid computing, a market VMware may want to move into once hardware virtualization becomes a free commodity.

This scenario is backed by the kind of acquisitions VMware has made so far: Akimbi (in 2006), which provides automation in virtual lab management, and Propero (2007), which provides automation in virtual desktop provisioning.

Will Microsoft sunset VMware?

Massimo Re Ferrè, IT Architect at IBM, published an interesting new insight on his personal blog about the chances Microsoft has of wresting control of the virtualization market from VMware’s hands:

However, looking ahead, with some level of confidence we can say that if Xen is going to make storm-like damages to VMware … MS Viridian, also known as Windows Virtualization will likely have the potential of causing hurricane-like devastations (to VMware).

This is true for a number of reasons…

  • …first being that Viridian will be close in terms of performance, architecture and features to VI3 (so in a nutshell nothing to do with the current MS Virtual Server product).
  • The other reason is that MS is a marketing machine and despite the fact that the product is good or bad as long as it has the Microsoft label in front of it, it will get LOTS of visibility.
  • Last but not least most of these functions will be embedded into the OS costs so the MS value proposition will be “free” or very cheap depending on how they will decide to license some add-on management features.

Massimo also covers the chances Xen has to establish itself as the preferred hypervisor:

As far as I can see “Xen included in the distributions” is not taking the market by storm and the reasons, in my opinion are:

  • Missing management functionalities: this is not Suse and RedHat primary business so what they have done (so far at least) is to add the open-source code and provide a very basic interface to use it (mostly text base)
  • Not perceived as an agnostic virtual platform: although it can technically support Windows I don’t see many customers going crazy to install RedHat or Suse Linux to host their Windows servers
  • Not clear strategy: Suse and RedHat have just added this to their distributions and they are already talking about adding new open source hypervisors (such as the KVM – Kernel Virtual Machine). While this could be a good strategy for a geek I don’t think that it’s going to interest any “business customer”: they don’t want “the latest cool stuff”, they rather want something stable/solid to run their applications on

Although Massimo’s post was published before Microsoft announced the dropping of key features from the first Viridian release, it is still a valuable analysis of possible scenarios and well worth reading.

To further evaluate the future of the virtualization market, you may want to read another couple of insights published by virtualization.info: The long chess game of VMware (one year old) and The Microsoft virtualization chance (one month old).

Whitepaper: Performance Tuning and Benchmarking Guidelines for VMware Workstation 6

Just a week after the launch of the new Workstation 6.0, VMware published a valuable 34-page whitepaper detailing performance tuning best practices and recommended benchmarking approaches:

There are some areas in which the best-performing configurations of VMware Workstation virtual machines vary slightly from the configurations of native machines. One of the goals of this book is to provide guidance about these variations. To this end, we discuss configuration of the host, the VMware Workstation software, and the operating systems and applications in the individual virtual machines.

The benchmarking guidelines in this book are intended to assist in the acquisition of meaningful, accurate, and repeatable benchmarking results. These guidelines are useful when you perform comparisons between VMware Workstation and either native systems or other virtualization products, as well as when you run benchmarks in virtual machines in general. The guidelines should not, however, be considered VMware best practices in all cases. Among other reasons, this is because benchmarking sometimes involves saturating one resource while overprovisioning others, something that would not be desirable in a production environment.

Some points, mainly about benchmarking, are particularly interesting:

We don’t recommend use of the following benchmarks, as our experience has shown that they can produce unexpected results in virtual machines:

  • Sisoft Sandra
  • LMbench

while others seem pretty odd:

Try not to overcommit CPU resources:

  • Avoid running two or more single-processor virtual machines on a single-processor host system, even if the single-processor host has hyper-threading.

Is VMware now trying to say that desktop virtualization is no longer good for running multiple virtual machines at once?

Read the whole whitepaper at source.

Release: Xen 3.1

After a long development phase, Xen reaches version 3.1 (previously labelled 3.0.5) and gains some interesting features:

  • XenAPI (providing support for virtual lab management capabilities and configuration metadata)
  • Dynamic memory control for non-paravirtualized virtual machines (Windows)
  • Support for basic save/restore/migrate operations for non-paravirtualized virtual machines (Windows)
  • Support for 32bit paravirtualized guests
  • Support for virtual disks on raw partitions

Download it as source, binary tarballs or RPMs.

Expect new releases of the XenEnterprise and Virtual Iron platforms based on the new engine from now on.

Microsoft will use Virtual Machine Manager as desktop broker for VDI

Despite not considering Virtual Desktop Infrastructure (VDI) ready for enterprise deployment, Microsoft is embracing the technology pretty fast, having modified Windows Vista licensing to be VDI-friendly.

At WinHEC 2007 Microsoft revealed even more, disclosing that the upcoming System Center Virtual Machine Manager (SCVMM) will act as the desktop broker in a VDI environment, as BetaNews reported:

Under the new system, a thin client logging on will request a VM image from SCVMM. Based on the user profile it pulls up from that logon, SCVMM will then locate the best server on which the image of Vista will be run. Applications licensed to that user will then be run from the VM, as well as the seat for Vista that’s licensed to that user. But only a thin virtualization connection package will address that image remotely…

Read the whole article at source.

VMware moved earlier but took a different approach: instead of building desktop brokering capabilities from scratch on top of its management tool, VirtualCenter, it preferred to acquire an existing desktop broker provider: Propero.

Update: A slide from WinHEC 2007 describes in more detail the architecture Microsoft plans to use for VDI scenarios. It clarifies that Virtual Machine Manager will not act as the desktop broker, as previously supposed.

VMware to release API for storage provisioning

Quoting from TechWorld:

Expect an API interface between VMware and virtualised storage resource providers for automated and dynamic storage provisioning.

This was backed up by EqualLogic EMEA VP John Joseph saying: “Stay tuned” when asked if this meant an automated interface between VMware’s virtual infrastructure and EqualLogic PS arrays such that a virtual server could be provisioned automatically with a volume of storage from an EqualLogic iSCSI storage pool.

This means that VMware will have an API through which storage provisioning commands can be issued and through which storage status messages can be delivered by a storage resource to VMware, such as ‘I’m getting full up.’…

Read the whole article at source.

Microsoft seems to be moving in the same direction, working with QLogic to integrate N_Port ID Virtualization (NPIV) into the upcoming System Center Virtual Machine Manager.

Microsoft will deliver new hypervisor through Windows Update

It was already well known that Microsoft will deliver its new virtualization platform, Windows Server Virtualization (WSV, codenamed Viridian), 180 days after the Windows Server 2008 (formerly codenamed Longhorn) RTM. It was also known that the new hypervisor will appear as a new server role among those available in the Server Core edition of Windows, a thin version of the operating system with just the kernel and a few subsystems on top (and the GUI will not be one of them).

What was unknown is the delivery method to be used to distribute Viridian inside Windows Server 2008. At WinHEC 2007 Microsoft clarified that the new hypervisor will come through Windows Update, as reported by Bink.nu.