Guide to virtualization adoption – Part 5

In the last part of this series we completed the final phase of our journey towards the virtual datacenter.

At this point we have already performed exhaustive capacity planning based on existing and expected workloads, decided which hardware to adopt, and moved part or all of the physical machine population into virtual machines.

Now our efforts will be directed at the complex challenges of virtual infrastructure management: controlling the availability, usage and access of physical and virtual resources; deploying disaster recovery solutions; automating the provisioning of new virtual machines and other tasks; and monitoring and reporting on datacenter usage.

Since modern virtualization is still a very young market, we'll discover how hard these tasks are to achieve, with immature tools and missing solutions, and a particular void in the disciplines of performance analysis and troubleshooting.

Challenges of liquid computing

The first mandatory task of every IT manager, virtualization adopter or not, is managing existing resources.

Tracking physical machine usage, operating system and product license availability, and service reachability helps us understand whether purchased assets satisfy demand, and react quickly when a fault happens.

This operation, which can be very time-consuming even in small environments, becomes more complex when working with virtual infrastructures.

IT managers now also have to worry about a new class of problems, such as efficient and controlled virtual machine deployment, rational assignment of physical resources, and in some cases even accountability.

The ease of creating new virtual machines, and their independence from the underlying hardware, leads to the idea of liquid computing, where it's hard to know exactly where anything is.

This property increases the risk of so-called VM sprawl, the same problem we have had for the last five years with traditional computing, but with a much faster expansion rate.

To avoid it, virtualization management tools should provide a reliable security system, where permissions limit operators' ability to create new machines, and a strong monitoring system, reporting on allocated but unused resources.

Today only the first is implemented in most virtualization platforms, usually by tying virtual infrastructure access to centralized LDAP directories, while administrators are still in big trouble when they need to measure the efficiency of their virtual datacenters.
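The second need, spotting allocated but unused resources, can be approximated even with very simple tooling. The following is a minimal sketch in Python: the inventory layout, the per-VM figures and the thresholds are illustrative assumptions about data already collected by monitoring, not a reference to any vendor API.

    # Sketch: flag virtual machines that hold resources but show little activity.
    # The inventory is assumed to be filled from existing monitoring data.
    inventory = {
        "vm-web01": {"vcpus": 2, "ram_mb": 2048, "avg_cpu_pct": 1.5, "days_since_login": 90},
        "vm-db02":  {"vcpus": 4, "ram_mb": 4096, "avg_cpu_pct": 35.0, "days_since_login": 2},
    }

    CPU_IDLE_THRESHOLD = 3.0   # percent, illustrative
    STALE_DAYS = 60            # illustrative

    def suspected_sprawl(vms):
        """Return the names of VMs that look allocated but unused."""
        return [name for name, m in vms.items()
                if m["avg_cpu_pct"] < CPU_IDLE_THRESHOLD
                and m["days_since_login"] > STALE_DAYS]

    print(suspected_sprawl(inventory))   # ['vm-web01']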

Going further, once a new virtual machine has been created, the virtual infrastructure manager faces the new problem of deciding where it should be hosted.

As we already saw during the capacity planning phase, virtual workloads should be placed carefully, considering which already deployed workloads could be complementary, to avoid overloading resources.

Here management tools should help, assisting placement after new virtual machines are created.

The upcoming Virtual Machine Manager from Microsoft, for example, will offer a rating system for available physical machines, assigning one or more stars to each of them so administrators immediately know where a new virtual machine fits best.

This scoring system will adapt to the evolving infrastructure, even if sysadmins decide not to follow previous suggestions, so that at any moment it provides the best advice.
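The idea behind such a rating system can be illustrated with a toy sketch: score each host by how much headroom it would retain after hosting the new workload. The weights, thresholds and sample data below are purely illustrative assumptions and have nothing to do with the actual algorithm Microsoft will ship.

    # Toy host-rating sketch: more free headroom after placement = more stars.
    hosts = {
        "esx01": {"free_cpu_pct": 55, "free_ram_mb": 6144},
        "esx02": {"free_cpu_pct": 15, "free_ram_mb": 1024},
    }

    def stars(host, vm_cpu_pct, vm_ram_mb, max_stars=5):
        cpu_left = host["free_cpu_pct"] - vm_cpu_pct
        ram_left = host["free_ram_mb"] - vm_ram_mb
        if cpu_left < 0 or ram_left < 0:
            return 0                                          # does not fit at all
        headroom = min(cpu_left / 100.0, ram_left / 8192.0)   # normalize, illustrative
        return max(1, int(round(headroom * max_stars)))

    new_vm_cpu, new_vm_ram = 20, 1024
    ranked = sorted(hosts.items(),
                    key=lambda kv: stars(kv[1], new_vm_cpu, new_vm_ram), reverse=True)
    for name, host in ranked:
        print("%s: %d star(s)" % (name, stars(host, new_vm_cpu, new_vm_ram)))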

But even with such a system, in some environments virtual machine creation may still not be easy enough. For example, big ISPs remodelling their hosting offering around virtualization need smart tools to deploy hundreds or thousands of virtual machines on demand, in seconds.

At the moment few 3rd party products can fill all the virtualization management holes, and many companies prefer to develop in-house solutions instead of spending big money for little flexibility.

In such complex scenarios virtualization management solutions have to offer software development kits (SDKs) allowing wide customization and different degrees of automation.

A wide, open programmable interface and strong support are key selling points here, and so far VMware has done a pretty good job compared to its competitors.
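To give an idea of the kind of automation such SDKs enable, here is a minimal Python sketch of bulk provisioning. The Connection class and its methods are invented placeholders standing in for whatever client library the chosen platform actually ships, not a real vendor API.

    # Hypothetical bulk-provisioning sketch; Connection is a placeholder, not a real SDK.
    class Connection(object):
        def best_host(self, cpu_pct, ram_mb):
            return "esx01"                                   # placement stub
        def clone_vm(self, template, name, host):
            print("cloning %s -> %s on %s" % (template, name, host))
        def power_on(self, name):
            print("powering on %s" % name)

    conn = Connection()                                      # a real SDK would authenticate here
    template = "win2003-web-template"
    for i in range(1, 101):                                  # deploy 100 web servers on demand
        name = "hosted-web-%03d" % i
        host = conn.best_host(cpu_pct=10, ram_mb=512)        # placement decided programmatically
        conn.clone_vm(template, name, host=host)
        conn.power_on(name)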

Last but not least, today's IT managers have to face a very new problem: accountability.

In a corporation of average complexity, several departments may work with virtual machines and share the same physical servers, using them in different proportions during a fiscal year.

When each of these departments has its own cost centre, it's pretty hard to track who is responsible for paying for the underlying hardware.

And even when costs are handled by a single entity inside the company, enforcing controls over who may use physical resources, and how much of them can be requested, is very hard.

At the moment only a small number of customers are addressing these kinds of issues, which are bound to become common problems in a few years; those already in trouble may want to look at IBM's offering, which pioneered the segment with its Tivoli add-on called Usage and Accounting Manager.

Multiple platforms, multiple issues

The needs mentioned above increase further when a big company has to handle more than one virtualization platform.

In big corporations each department often has the autonomy to choose its preferred solutions, even if only one product will be used in the production environment.

So an IT manager may need to manage VMware ESX Server and Xen at the same time, hoping to maintain control with a single, centralized tool.

The market offering for such tools is multiplying as demand for them rises.

Solutions from IBM, Cassatt, BMC Software, Enomaly and Scalent are at the moment the most popular, but new contenders like Opsware are coming.

In many cases support for multiple virtual infrastructures means IT managers don't have to worry about which technology was used to create a virtual machine: these tools are able to control it and, where possible, migrate applications from one set of virtual hardware to another, something otherwise achievable only with dedicated P2V tools.

When choosing one of these super-consoles, it's critical to verify that they leverage the existing management tools provided by virtualization vendors; otherwise the return on investment may never come.

This article originally appeared on SearchServerVirtualization.

Guide to virtualization adoption – Part 6

In the last part of this series we took a look at how complex resource management in a virtual datacenter can be. But it’s just the first challenge administrators have to face when working with virtualization.

An even more delicate task, common to every infrastructure, is guaranteeing high availability of virtual machines, ensuring fast disaster recovery as well as reliable fail-over of faulty pieces of the environment: guests, hosts or storage.

Performing these operations is today more complex than on physical servers, because there are new technical issues to address and the market basically lacks effective solutions.

Backup

In a virtual datacenter the backup strategy can mimic the traditional one: install a backup agent inside every guest operating system and copy files, partitions or the whole virtual disk elsewhere.

This approach works flawlessly but has a painful downside when applied to virtual environments: every virtual machine uses the same I/O channel offered by the host OS, so if several of them start their backup at the same time an I/O bottleneck is unavoidable.

To prevent it, administrators should carefully plan waves of backups, putting enough delay between them so that guest OSes never overlap during these intensive operations.
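A minimal sketch of such staggering, assuming each guest exposes some way to trigger its backup agent remotely (the trigger_backup() function below is a placeholder for whatever mechanism is actually in place):

    import time

    def trigger_backup(vm_name):
        # Placeholder: call the in-guest backup agent (SSH, WMI, vendor CLI, ...).
        print("starting backup of %s" % vm_name)

    backup_wave = ["vm-web01", "vm-web02", "vm-db01", "vm-mail01"]
    DELAY_MINUTES = 45      # rough estimate of one guest backup window, illustrative

    for vm in backup_wave:
        trigger_backup(vm)
        time.sleep(DELAY_MINUTES * 60)   # serialize backups to avoid saturating host I/O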

Unfortunately this method does not scale and cannot avoid backup overlap when there are more than a few virtual machines, considering that each virtual disk is at least 3GB in size, can grow to 10-20GB or even more depending on application requirements, and may take hours to back up.

Guest OS backup also obliges administrators, at restore time, to first recreate empty virtual machines and then boot a bare-metal recovery CD inside them.

An alternative approach consists of performing the backup at host level, transparently to the guest operating systems.

Since virtual machines are self-contained in a few files, sitting on the host OS file system like a spreadsheet or a picture, virtualization newcomers may think backup is easier here, while in fact it's much more difficult, even if more desirable in many cases.

First of all, virtual machines are treated just like open files locked by a process or an application (think of a .PST mail archive held open by Microsoft Outlook): these files must be accessed in special ways, freezing an image of their state (what we usually call a snapshot) and backing up that image.

This task can be accomplished only if the backup software knows how to handle open files, although in some cases it can be helped by the host OS: for example Microsoft Windows Server 2003 offers a feature called the Volume Shadow Copy Service (VSS), which can be invoked by 3rd party solutions to perform snapshots.

Even knowing how to handle open files, we still have to face another challenge to perform a live backup: virtual machines are not just open files but complete operating systems accessing a complete set of virtual hardware.

Each time a state snapshot is taken everything freezes, including virtual memory and interrupts: in the virtual world this translates into a power failure, which may or may not corrupt the guest file system structure.

Even if a robust OS doesn't corrupt data on power failure, it's an approach few vendors are willing to support.

Among them we find vizioncore, which built its popularity around esxRanger, a product able to perform live backup of virtual machines on VMware ESX Server with a fair degree of automation.

And for those brave enough to try such a scenario even without support, there is the well-known VMBK script by Massimiliano Daneri, which also performs a basic live backup of VMware ESX Server virtual machines.

Microsoft will offer this kind of support for its Virtual Server 2005 starting with the imminent Service Pack 1, but will not allow the standard Microsoft Backup to be used for the task.

The generally accepted approach, and the only one really endorsed by virtualization vendors to work around the live backup issue, is to suspend or shut down running virtual machines, perform the backup, and then resume or restart them.

Unfortunately this approach prevents offering highly available services and obliges administrators to use traditional agent-based backups for mission-critical virtual machines.
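For virtual machines that can tolerate the downtime, the suspend-copy-resume cycle is simple enough to script. The sketch below is a hedged illustration in Python: it assumes an ESX-style host where a vmware-cmd utility can control virtual machines and where the VM files live under a path like /vmfs/..., so both the command syntax and the paths must be checked against the actual environment.

    import os, shutil, subprocess

    VMX = "/vmfs/volumes/storage1/web01/web01.vmx"    # illustrative path
    DEST = "/backup/web01"                            # must already exist

    def vm_cmd(operation):
        # Assumes an ESX-style 'vmware-cmd' utility; verify the exact syntax on your release.
        subprocess.check_call(["vmware-cmd", VMX, operation])

    vm_cmd("suspend")                                 # quiesce the virtual machine
    try:
        vm_dir = os.path.dirname(VMX)
        for entry in os.listdir(vm_dir):              # copy .vmx, .vmdk, .nvram, ...
            path = os.path.join(vm_dir, entry)
            if os.path.isfile(path):
                shutil.copy2(path, DEST)
    finally:
        vm_cmd("start")                               # resume even if the copy failed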

Putting aside the live backup problems, which will eventually be addressed as operating systems become more virtualization-friendly, it is worth noting that even this second approach stresses the host I/O channels somewhat.

To completely eliminate the problem we have to move the backup point from the host to the storage facility, where our virtual machine files can be manipulated without directly impacting the virtualization platform.

VMware has been the first to adopt this solution, but today its Consolidated Backup (VCB) has notable limitations: it works only with ESX Server, it only acts as a proxy for real 3rd party backup solutions (obliging customers to install and configure different scripts for different products), and it's not able to perform a restore.

Staying at the storage level, a different method involves using SAN management software to invoke LUN cloning, but this approach usually doesn't provide enough granularity, since the storage facility doesn't natively recognize how LUNs are formatted and therefore cannot offer per-virtual-machine backups.

Recognition of the LUN format depends on the storage management software we bought and on which file systems it supports.

It may recognize NTFS-formatted LUNs, allowing us to back up VMware Server for Windows virtual machines, while not supporting VMFS, preventing us from doing the same with VMware ESX Server virtual machines.

If the LUN format is unrecognized, or we simply don't have any enhanced storage management solution, we'll have to clone the whole LUN, which contains several virtual machines, obliging us to restore all of them even if just one is needed.

Fail-over and clustering

Obviously, providing high availability in a virtual datacenter doesn't only involve virtual machine backup. As in traditional environments, we should be able to configure clusters or at least fail-over arrangements.

But HA in virtualization can take place at two levels instead of just one: we can work at guest level, relying on OS and application disaster recovery capabilities, or at host level, facing a new kind of problem.

Implementing HA configurations at guest level is almost identical to what we already do in physical environments: there are some technical issues to address, like configuring a static MAC address for each virtual network interface, and some limitations, which depend on the chosen virtualization platform and on the chosen HA software.

But it's basically always possible to create a virtual cluster, or even a mixed one, where one or more nodes are virtual machines while the others are physical.

Much more complex, but much more needed, is providing high availability for hosts.

In this scenario, considering fail-over for example, the virtual machines running on one host have to be copied to another one and continuously synchronized, replicating virtual disk and virtual memory modifications.

This operation has the same problems we already discussed for live backup, but adds the complexity of doing everything as fast as possible and as often as possible.

Here vizioncore is once again a protagonist with esxReplicator, able to copy running virtual machines from one VMware ESX Server to another, with or without a centralized storage facility. Unfortunately this product doesn't handle the network modifications needed to perform a real fail-over, so we have to manually switch between a faulty host and a cold stand-by one.

A more dynamic solution is provided by VMware itself, which introduced with ESX Server 3 and VirtualCenter 2 a fail-over option based on VMotion.

Unlike vizioncore's esxReplicator, VMware HA automatically restarts the virtual machines of a faulty host, but it's much more demanding in terms of configuration: it requires VirtualCenter and VMotion, and won't work if the VMs are not stored on a Fibre Channel SAN.

A different approach is possible thanks to the physical-to-virtual (P2V) migration tools we already mentioned in an earlier part of this series.

Since they can also perform virtual-to-virtual migrations, we could configure them to replicate virtual machine contents from one host to another.

Here PlateSpin is the preferred choice at the moment, offering live migration for Windows operating systems and already orienting its technology towards disaster recovery use.

Unfortunately, just like vizioncore, PlateSpin doesn't handle every aspect of fail-over, so we still have to intervene manually.

Fail-over is a good solution, but surely the most desirable HA configuration is clustering, where multiple hosts act as an execution front-end for commonly shared virtual machines. If one of them goes down there is no service interruption, because the virtual machines are always available through the remaining hosts.

Clustering can be implemented at host level as a native feature of the virtualization platform or as a 3rd party solution.

In Virtual Server's case, for example, Windows is the host OS and Microsoft provides clustering of the physical virtualization nodes through its Cluster Service.

ESX Server, on the other hand, has no such feature and counts on external solutions like Symantec's Veritas Cluster Server to achieve the task. The recent acquisition of Rainfinity by EMC Corporation leaves some hope that one day the Rainwall technology could be used to perform ESX clustering natively.

However, today's clustering solutions for virtualization are far from mature, and customers are strongly advised to perform thorough tests before adopting one.

Fail-over and clustering configurations are also complicated by differing architectures: when virtual machines are moved from one host to another they could be served by CPUs from different vendors, which are similar but not identical, and current virtualization platforms are still unable to handle these differences in real time during a live migration.

Similarly, if the available hosts have different hardware configurations, a virtual machine's virtual hardware assignments may not be satisfiable (think of a VM with four virtual CPUs), preventing migration altogether.

This may get worse in the near future, depending on how vendors implement support for paravirtualization.

As is known, this approach requires new-generation CPUs, able to run the host operating system at a special ring level. If the virtualization platform is not able to run the usual binary translation and paravirtualization concurrently, or cannot switch seamlessly between them, this will prevent mixing old and new physical servers, obliging us either to renew the whole hardware infrastructure each time we buy new gear, or to decide carefully how to aggregate hosts for high availability.

Last but not least, we have to guarantee reliable access to the storage facility, which is surely the most critical step.

This is usually addressed by so-called multipathing: when hosts have two or more HBAs on board, configured to reach the SAN through more than one path, the storage management software knows how to dynamically prefer a working link over faulty ones.

But since it's a software feature provided at the driver level, there are some restrictions.

Depending on which virtualization platform you choose, you may not have this capability: the current architecture of VMware ESX Server, for example, doesn't allow storage vendors to plug in their own drivers, and the ones provided don't support dynamic multipathing.

When choosing a hosted solution like VMware Server or Microsoft Virtual Server, instead, you are relying on the host operating system to support OEM drivers, which is always the case.

This article originally appeared on SearchServerVirtualization.

Guide to virtualization adoption – Part 7

In the last part of this series we examined some of the security challenges virtualization raises, mainly talking about disaster recovery strategies and the different approaches current products offer.

In this new part we'll consider an aspect of virtual datacenter management that is still rarely evaluated: automatic virtual machine provisioning, and automation in general.

We’ll see how this apparently superfluous capability will become a top need very soon, driving vendors’ efforts in the next few years.

From server sprawl to VM sprawl

At the beginning, server virtualization's capability to consolidate tens of physical servers into just one host was considered a real solution to the uncontrolled proliferation of new servers in many companies. But early adopters experienced just the opposite result.

The cost of implementing a new server in a virtual datacenter has been dramatically reduced, considering that provisioning now takes hours, sometimes minutes, instead of weeks or months.

The only real limitations on deployment are the availability of physical resources to assign to new virtual machines and the price of a Windows license, where used. The latter doesn't even matter when a large corporation has a volume licensing agreement with Microsoft.

Suddenly IT managers find that moving from planning to actual implementation is easier than ever before, which often gives a false perception of the infrastructure's limits.

Multi-tier architectures now seem less complex to build, isolating applications for security, performance or compatibility reasons is the first scenario contemplated, and new applications are deployed for testing without hesitation.

In this scenario, companies not enforcing a strict policy face different challenges depending on their size.

Bigger corporations, still trying to understand how to account for virtualization use in their cost centres, will grant new resources to the departments requesting them, but will oblige infrastructure administrators to manually track down, at a later time, which virtual machines are really used and by whom.

Smaller companies, having no authorization process to respect, may grant provisioning capabilities to several individuals, even without deep virtualization skills, just to speed up projects.

Within a short time, anybody needing a new virtual machine can simply assemble it and power it on.

In such uncontrolled provisioning environments two things usually happen. First, those who deploy new virtual machines have no understanding of the big picture: how many virtual machines a physical host can really handle, how many are planned to be hosted, and which kinds of workload are best suited to a certain location. Second, every new deployment compromises the big picture itself, leading to performance issues and the continuous rebuilding of consolidation plans.

Last but not least, every new virtual machine means a set of operating system and application licenses, which require special attention before being assigned.

Without really realizing it's happening, a company could grow a jungle of virtual machines without documentation, licenses, a precise role, or even an owner, impacting the overall health of its virtual datacenter.

The need for automation

Even in a more controlled environment, when the virtual datacenter grows significantly IT managers need new ways to perform the usual operations, with tools able to scale further when needed.

The biggest problem to address when handling a large number of virtual machines is their placement: as we have said many times during this series, correct distribution of workloads is mandatory to achieve good performance with the given physical resources.

Choosing the best host to serve a virtual machine, depending on its free resources and already hosted workloads, is not easy even during the planning phase, when capacity planning tools are highly desirable.

Doing the same operation manually during everyday datacenter life is overwhelming, not only because of the time needed to decide placement, but also because the whole environment is almost liquid, with machines moving from one host to another to balance resource usage, for host maintenance, or for other reasons.

Best placement becomes a relative concept, and administrators find it more difficult every day to identify it clearly.

Another notable problem of large virtual infrastructures is the customization of virtual machines at deployment time.

While virtualization technologies used in conjunction with tools like Microsoft Sysprep have made it easy to create clones and distribute them with new parameters, current deployment processes don't scale well and only consider single operating systems.

In large infrastructures, business units rarely require single virtual machines, asking more often for a multi-tier configuration. Just consider the testing of a new e-commerce site, which implies at least a front-end web server, a back-end database server, and a client machine that could run automated tasks to measure performance and efficiency.

So every time these mini virtual infrastructures need to be deployed, IT administrators have to manually put in place specific network topologies, access permissions, service level agreement policies, and so on.

In such a scenario it's also unlikely that the required virtual machines will need only the simple customization Sysprep can offer: installation of specific applications, interconnection with existing remote services, execution of scripts before and after deployment, and so on are all operations to be performed for each virtual infrastructure, with a huge loss of time.
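To make the idea concrete, a lab-automation tool essentially turns a declarative description of such a multi-tier environment into a series of provisioning calls. The sketch below is purely illustrative: the lab dictionary layout and the provision()/run_script() helpers are invented placeholders, not any vendor's actual format or API.

    # Illustrative multi-tier lab definition; structure and helpers are placeholders.
    lab = {
        "network": "test-vlan-42",
        "tiers": [
            {"name": "web01",    "template": "win2003-iis", "post_script": "install_site.cmd"},
            {"name": "db01",     "template": "win2003-sql", "post_script": "restore_testdb.cmd"},
            {"name": "client01", "template": "winxp-load",  "post_script": "start_loadtest.cmd"},
        ],
    }

    def provision(template, name, network):
        print("cloning %s -> %s on %s" % (template, name, network))   # placeholder

    def run_script(vm_name, script):
        print("running %s inside %s" % (script, vm_name))             # placeholder

    def deploy(lab):
        for tier in lab["tiers"]:
            provision(tier["template"], tier["name"], lab["network"])
            run_script(tier["name"], tier["post_script"])

    deploy(lab)   # the same definition can be torn down and redeployed on demand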

Finally, if the required virtual infrastructure represents a typical scenario used to test several different stand-alone projects from several departments, it will have to be destroyed and recreated on demand.

On every new provisioning, both requestors and administrators will have to remember the correct settings and customizations for all tiers.

An emerging market

Considering such big risks and needs in today's virtual datacenters, it's not surprising that a few companies are working hard to offer reliable and scalable automated provisioning tools.

Young start-ups like Dunes from Switzerland, Surgient from Austin, and VMLogix from Boston have to compete against the current virtualization market leader, VMware, which decided to acquire know-how and an already available product from another young company, Akimbi, in the summer of 2006.

Akimbi Slingshot had proven to be an interesting product well before the acquisition, but VMware has spent a lot of time improving it further, integrating it with its flagship ESX Server and VirtualCenter solutions.

This integration will be an important selling point, since it leverages the skills VMware customers have already acquired in a familiar management environment.

On the other hand, more and more IT managers are looking at agnostic products able to automate virtual machine provisioning in mixed environments, where the chosen virtualization platform doesn't become an issue.

Here Surgient's products (VQMS/VTMS/VMDS) and VMLogix LabManager have much more appeal, able to support VMware platforms as well as Microsoft ones, and in the near future Xen too.

Apart from Dunes, all the mentioned vendors are now focusing their products on the very first practical application of automated provisioning: so-called virtual lab management.

So it's easy to find a priority commitment to basic provisioning capabilities, like multi-tier deployments, enhanced customization of deployed clones, and physical resource scheduling.

And that's probably all customers feel they need at the moment, while virtual datacenters have still to reach critical mass.

Tomorrow they will look for features that are far less easy to find, like management of the provisioning authorization flow, or license management.

In any case the autonomic datacenter is still far away, and so far only Dunes, with its Virtual Service Orchestrator (VS-O), offers a true framework for full automation of today's virtual datacenters.

This article originally appeared on SearchServerVirtualization.

Guide to virtualization adoption – Part 8

At the end of our long journey through the virtualization world, we have finally arrived at the last stage of enterprise management.

After evaluating which security solutions best fit our needs in the last instalment of this series, we just need to consider performance measurement and reporting issues before we have a really good understanding of every aspect of a virtualization adoption project.

Tracking virtual machine performance, to pinpoint problems or just to produce meaningful reports of resource consumption, is actually one of the most complex tasks for virtualization professionals. And not only because virtual machine behaviour is strictly related to the underlying host, but also because it heavily depends on what the other virtual machines are doing.

And as with the other challenges faced so far, we will see that the market currently offers few products really able to address it.

Virtualization needs new metrics

First of all we must understand that traditional ways of measuring performance in a datacenter don't apply successfully to virtual infrastructures.

Depending on the perspective, virtual servers are either pretty much identical to physical servers or completely different.

Looking from the inside, virtual machines offer all the traditional counters a performance monitor may need and usually tracks, so existing reporting products are good enough if you simply install their agents in every guest operating system.

But in a virtual world some of the numbers obtained are much less valuable, while others are simply meaningless.

A typical example is memory consumption and memory paging in a VMware ESX Server environment.

VMware’s flagship product has a special feature called ballooning.

Thanks to ballooning, ESX can temporarily use for other purposes some of the memory the system administrator assigned to a virtual machine: at any moment a special driver included in the VMware Tools can request memory from the guest OS, just like inflating a balloon; that memory is then freed and immediately reallocated to other VMs in need of it.

While this happens the guest operating system is obliged to page out, showing unexpected, slight performance degradation.

When everything is back to normal, ESX deflates the balloon and gives the memory back to its original machine.

In the above scenario the guest OS reports incorrect memory and page file usage, which may lead to completely wrong deductions about how a virtual machine is performing.
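This is why any meaningful memory report has to pair the guest counter with the host-side view of the same VM. The sketch below only illustrates the reconciliation idea; the field names and sample figures are assumptions, not the actual counters exposed by a specific monitoring API.

    # Reconcile guest-reported memory with host-side data for the same VM.
    # Field names and values are illustrative, not a real API.
    samples = {
        "vm-app01": {"guest_used_mb": 1900, "host_granted_mb": 1400, "host_ballooned_mb": 600},
    }

    for vm, m in samples.items():
        if m["host_ballooned_mb"] > 0:
            print("%s: guest sees %d MB in use, but %d MB is reclaimed by the balloon driver;"
                  " guest paging figures are unreliable right now"
                  % (vm, m["guest_used_mb"], m["host_ballooned_mb"]))
        else:
            print("%s: guest counters can be trusted (%d MB granted by the host)"
                  % (vm, m["host_granted_mb"]))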

Going further, we can easily see how some other measurements make sense only in relation to what's happening on the host.

In a scenario where a virtual machine frequently reports very high CPU usage, we cannot simply conclude it's time for a virtual hardware upgrade, add a second virtual CPU, and feel confident about an improvement.

Sometimes excessive vCPU usage means the virtual machine is not being served fast enough at host level, which may require fine-tuning of the hypervisor's resource management or an increase in the number of physical CPUs. And this can be discovered only by tracking specific values at host level.
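A rough way to encode that rule of thumb: only recommend adding a vCPU when the guest is busy and the host is not struggling to schedule it. ESX does expose a "CPU ready time" style counter for exactly this purpose, but the thresholds and sample data in the sketch below are illustrative assumptions.

    # Toy decision helper: is a busy VM undersized, or just starved by the host scheduler?
    def diagnose(guest_cpu_pct, host_ready_pct):
        if guest_cpu_pct < 80:
            return "VM not CPU-bound; no action needed"
        if host_ready_pct > 10:       # VM spends >10% of its time waiting for a physical CPU
            return "host contention: tune resource shares or add physical CPUs"
        return "genuinely CPU-bound: consider adding a vCPU"

    print(diagnose(guest_cpu_pct=95, host_ready_pct=18))   # host contention ...
    print(diagnose(guest_cpu_pct=95, host_ready_pct=2))    # genuinely CPU-bound ...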

So we need to change our measuring approach, but what exactly do we need to track?

In a highly dense virtual datacenter, with tens of virtual machines on a single host, we absolutely need to consider interdependencies and track the whole system as a single entity rather than as a sum of elements.

And since the relationship between virtual machines and hosts becomes critical, reporting solutions have to handle the liquidity of every virtual datacenter, seamlessly adapting to hot or cold guest operating system migrations within the infrastructure.

Last but not least, these products have to address scalability: when administrators have to consider the performance of thousands of virtual machines deployed on hundreds of hosts, reporting solutions must work in a fully automated way and provide smart summaries which are still human-readable and meaningful.

Populating an almost empty segment

The performance tracking and reporting segment is one of the emptiest in today's virtualization industry.

In part because of complexity, in part because demand is still limited, and in part because of little awareness that traditional solutions are quickly becoming inadequate.

Obviously virtualization platform vendors offer more or less enhanced reporting tools, but at the moment none of them addresses customer needs with a serious, dedicated solution.

So at the moment we have to look to 3rd party virtualization ISVs, which provide only a few products addressing a limited market.

Among the current players we can surely mention vizioncore, which focuses exclusively on VMware with its esxCharter.

This product provides a wide range of charts, tracking the history of virtual machine and host performance, and it's a very good entry-level product.

vizioncore also offers a free edition which gives low-budget departments a decent capability to understand what's happening in their infrastructure.

Devil Mountain Software (DMS) tries to embrace a much wider audience with its Clarity Suite 2006, supporting hardware virtualization solutions (VMware, Microsoft, but only Windows-based virtual machines) as well as application virtualization ones (Softricity, Altiris).

Clarity Suite is a hosted solution more focused on profiling virtualized workloads, comparing performance with a scoring system.

The solution performs some simple correlations between virtual machine and host metrics, useful for capacity planning and what-if scenarios, but it's still far from being a complete reporting system for virtualized environments.

Like vizioncore, DMS also offers a free version of Clarity Suite, which is unfortunately very limited in the number of deployable agents and in features.

A last company worth mentioning is the new entry Netuitive, which, like vizioncore, focuses on VMware ESX Server only, but offers innovative features: its SI solution automatically profiles virtual machine and host performance, creating behaviour profiles which it correlates and uses to recognize anomalous behaviour.

As soon as anomalies appear, Netuitive SI reacts by asking the VMware infrastructure to reconfigure its resource pools, so that performance bottlenecks are addressed immediately, well before any human intervention.

Is reporting the first aspect where datacenter automation will begin?

This article originally appeared on SearchServerVirtualization.

Virtual hardware hot add may not be a Viridian exclusive

As Mark Russinovich's WinHEC 2007 keynote confirms, Windows Server 2008 (formerly codenamed Longhorn) introduces some remarkable kernel changes, so that the new Microsoft operating system now supports hot add of hardware without any downtime.

This feature, called Dynamic Hardware Partitioning (DHP), depends on Windows itself and is designed to work with processors, memory and some PCI Express cards such as network cards (depending on the availability of DHP-aware drivers), regardless of whether the added hardware is physical or virtual.

DHP has been available inside the OS since Windows Server 2003 Service Pack 1, where it's limited to memory hot add. The same limitation will be present in the 32-bit versions of Windows Server 2008.

Another big limitation (valid for all versions, 32- or 64-bit) is that Microsoft still doesn't support hot removal of hardware (planned for future Windows versions), so system administrators will still have to turn off the physical or virtual machine before removing a component.

So, in theory, every virtualization product on the market able to run a 64-bit Windows Server 2008 guest OS will also be able to offer the hot add capabilities Microsoft presented when talking about its upcoming Windows Server Virtualization hypervisor (codename Viridian).

And since the first version of Viridian will no longer expose the hot add capability as previously announced, VMware and other virtualization providers have a chance to offer it before Microsoft itself.

Rumor: VMware to buy Opsware after IPO

SYS-CON is reporting a brief news item claiming VMware is in talks to buy US vendor Opsware once it has collected money from its Initial Public Offering, expected this June.

While VMware is expected to proceed with more acquisitions after the IPO, as of today there are no signs that this may involve a target like Opsware, which provides datacenter automation capabilities and could extend VMware's domain well beyond virtualization.

Despite that, such an acquisition could make sense, considering that virtualization is the first step (and datacenter automation the second) toward general-purpose grid computing, a market where VMware may want to move once hardware virtualization becomes a free commodity.

Such a scenario is backed by the kind of acquisitions VMware has made so far: Akimbi (in 2006), which provides automation in virtual lab management, and Propero (2007), which provides automation in virtual desktop provisioning.

Will Microsoft sunset VMware?

Massimo Re Ferrè, IT Architect at IBM, published an interesting new insight on his personal blog about the chances Microsoft has of wresting control of the virtualization market from VMware's hands:

However, looking ahead, with some level of confidence we can say that if Xen is going to make storm-like damages to VMware … MS Viridian, also known as Windows Virtualization will likely have the potential of causing hurricane-like devastations (to VMware).

This is true for a number of reasons…

  • …first being that Viridian will be close in terms of performance, architecture and features to VI3 (so in a nutshell nothing to do with the current MS Virtual Server product).
  • The other reason is that MS is a marketing machine and despite the fact that the product is good or bad as long as it has the Microsoft label in front of it, it will get LOTS of visibility.
  • Last but not least most of these functions will be embedded into the OS costs so the MS value proposition will be “free” or very cheap depending on how they will decide to license some add-on management features.

Massimo also covers the chances Xen has of establishing itself as the preferred hypervisor:

As far as I can see “Xen included in the distributions” is not taking the market by storm and the reasons, in my opinion are:

  • Missing management functionalities: this is not Suse and RedHat primary business so what they have done (so far at least) is to add the open-source code and provide a very basic interface to use it (mostly text base)
  • Not perceived as an agnostic virtual platform: although it can technically support Windows I don’t see many customers going crazy to install RedHat or Suse Linux to host their Windows servers
  • Not clear strategy: Suse and RedHat have just added this to their distributions and they are already talking about adding new open source hypervisors (such as the KVM – Kernel Virtual Machine). While this could be a good strategy for a geek I don’t think that it’s going to interest any “business customer”: they don’t want “the latest cool stuff”, they rather want something stable/solid to run their applications on

Although Massimo's post was published before Microsoft announced the dropping of key features from the first Viridian release, it is still a valuable analysis of possible scenarios and is worth reading.

To further evaluate the future of the virtualization market you may want to read another couple of insights published by virtualization.info: The long chess game of VMware (one year old) and The Microsoft virtualization chance (one month old).

Whitepaper: Performance Tuning and Benchmarking Guidelines for VMware Workstation 6

Just a week after the launch of the new Workstation 6.0, VMware published a valuable 34-page whitepaper detailing performance tuning best practices and recommended benchmarking approaches:

There are some areas in which the best-performing configurations of VMware Workstation virtual machines vary slightly from the configurations of native machines. One of the goals of this book is to provide guidance about these variations. To this end, we discuss configuration of the host, the VMware Workstation software, and the operating systems and applications in the individual virtual machines.

The benchmarking guidelines in this book are intended to assist in the acquisition of meaningful, accurate, and repeatable benchmarking results. These guidelines are useful when you perform comparisons between VMware Workstation and either native systems or other virtualization products, as well as when you run benchmarks in virtual machines in general. The guidelines should not, however, be considered VMware best practices in all cases. Among other reasons, this is because benchmarking sometimes involves saturating one resource while overprovisioning others, something that would not be desirable in a production environment.

Some points, mainly about benchmarking, are particularly interesting:

We don’t recommend use of the following benchmarks, as our experience has shown that they can produce unexpected results in virtual machines:

  • Sisoft Sandra
  • LMbench

while others seem pretty odd:

Try not to overcommit CPU resources:

  • Avoid running two or more single-processor virtual machines on a single-processor host system, even if the single-processor host has hyper-threading.

Is VMware now trying to say that desktop virtualization is no longer good for running multiple virtual machines at once?

Read the whole whitepaper at the source.

Release: Xen 3.1

After a long development phase Xen reaches version 3.1 (previously labelled 3.0.5) and gains some interesting features:

  • XenAPI (providing support for virtual lab management capabilities and configuration metadata; see the sketch after this list)
  • Dynamic memory control for non-paravirtualized virtual machines (Windows)
  • Support for basic save/restore/migrate operations for non-paravirtualized virtual machines (Windows)
  • Support for 32bit paravirtualized guests
  • Support for virtual disks on raw partitions
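As a rough idea of what the new XenAPI looks like from a client's point of view, here is a hedged Python sketch using XML-RPC; the endpoint URL, port and exact call semantics are assumptions to verify against the Xen 3.1 documentation, since xend has to be explicitly configured to expose the API.

    # Hedged sketch of talking to the Xen 3.1 XenAPI over XML-RPC.
    # Endpoint, port and credentials are illustrative; verify against your xend configuration.
    import xmlrpclib

    server = xmlrpclib.Server("http://xenhost.example.com:9363/")
    session = server.session.login_with_password("root", "password")["Value"]

    for vm_ref in server.VM.get_all(session)["Value"]:
        print(server.VM.get_name_label(session, vm_ref)["Value"])

    server.session.logout(session)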

Download it as source, binary tarballs or RPMs.

From now on, expect new releases of the XenEnterprise and Virtual Iron platforms based on the new engine.

Microsoft will use Virtual Machine Manager as desktop broker for VDI

Despite it’s not considering Virtual Desktop Infrastructures (VDIs) ready for enterprise deployment, Microsoft is embracing the technology pretty fast, modifying the Windows Vista licensing to be VDI-friendly.

At WinHEC 2007 Microsoft reveals even more, disclosing upcoming System Center Virtual Machine Manager (SCVMM) will act as desktop broker in a VDI environment, as BetaNews reported:

Under the new system, a thin client logging on will request a VM image from SCVMM. Based on the user profile it pulls up from that logon, SCVMM will then locate the best server on which the image of Vista will be run. Applications licensed to that user will then be run from the VM, as well as the seat for Vista that’s licensed to that user. But only a thin virtualization connection package will address that image remotely…

Read the whole article at the source.

VMware moved earlier but took a different approach: instead of building desktop brokering capabilities from scratch on top of its management tool, VirtualCenter, it preferred to acquire an existing desktop broker provider: Propero.

Update: A slide from WinHEC 2007 describes in more detail the architecture Microsoft plans to use for VDI scenarios. It clarifies that Virtual Machine Manager will not act as the desktop broker, as previously supposed.