Guide to virtualization adoption – Part 6

In the last part of this series we looked at how complex resource management in a virtual datacenter can be. But that is just the first challenge administrators face when working with virtualization.

An even more delicate task, common to every infrastructure, is guaranteeing high availability of virtual machines: ensuring fast disaster recovery as well as reliable fail-over of faulty components in the environment, whether guests, hosts or storage.

Performing these operations is more complex today than on physical servers, because there are new technical issues to address and the market basically lacks effective solutions.

Backup

In a virtual datacenter the backup strategy can mimic the traditional one: install a backup agent inside every guest operating system and copy files, partitions or the whole virtual disk elsewhere.

This approach works flawlessly but has a painful downside in virtual environments: every virtual machine uses the same I/O channel offered by the host OS, so if several of them start their backups at the same time an I/O bottleneck is unavoidable.

To prevent this, administrators have to carefully plan a wave of backups, putting enough delay between them so that guest OSes never overlap during these intensive operations.
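
To see how quickly the delays add up, here is a minimal scheduling sketch in Python; the guest inventory, disk sizes and sustained backup throughput are illustrative assumptions, not measured figures:

```python
from datetime import datetime, timedelta

# Illustrative inventory: virtual disk sizes in GB (assumed, not measured)
vms = {"web01": 4, "db01": 18, "mail01": 10}
throughput_gb_per_hour = 6  # assumed sustained rate on the shared I/O channel

start = datetime(2006, 1, 1, 22, 0)  # backup window opens at 22:00
for name, size_gb in vms.items():
    duration = timedelta(hours=size_gb / throughput_gb_per_hour)
    print(f"{name}: starts {start:%H:%M}, expected to end {start + duration:%H:%M}")
    start += duration  # the next guest may begin only after this one finishes
```

Under these assumptions three modest guests already fill more than five hours of the night; a few dozen would not fit in any backup window at all.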

Unfortunately this method does not scale and cannot avoid overlapping backups once there are more than a few virtual machines: each virtual disk starts at around 3GB, can grow to 10-20GB or even more depending on application requirements, and may take hours to back up.

Guest OS backup also obliges administrators, at restore time, to first recreate empty virtual machines and then boot a bare-metal recovery CD inside them.

An alternative approach consists of performing the backup at the host level, transparently to the guest operating systems.

Since virtual machines are self-contained in single files, sitting on the host OS file system like a spreadsheet or a picture, virtualization newcomers may think backup is easier here; it is actually much more difficult, even if more desirable in many cases.

First of all, virtual machines appear just like open files locked by a process or an application (think of a .PST mail archive held by Microsoft Outlook): such files must be accessed in special ways, freezing an image of their state (what we usually call a snapshot) and backing up that image.

This task can be accomplished only if the backup software knows how to handle open files, although in some cases the host OS can help: for example, Microsoft Windows Server 2003 offers a feature called Volume Shadow Copy Service (VSS), which 3rd-party solutions can invoke to perform snapshots.
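
Whatever the mechanism, the open-file workflow always follows the same freeze, copy, release pattern. Here is a minimal Python sketch of it; create_snapshot and release_snapshot are hypothetical stand-ins for whatever the platform actually provides (a VSS requestor on Windows Server 2003, for instance):

```python
import shutil

def create_snapshot(volume: str) -> str:
    """Freeze a point-in-time image of the volume; return the path exposing it.
    Hypothetical stand-in for a platform facility such as a VSS requestor."""
    raise NotImplementedError

def release_snapshot(snapshot_path: str) -> None:
    """Discard the frozen image once the copy is done."""
    raise NotImplementedError

def backup_vm_file(volume: str, vm_file: str, destination: str) -> None:
    snapshot = create_snapshot(volume)  # the live file stays locked and untouched
    try:
        # Copy from the frozen image, never from the running, locked file
        shutil.copy2(f"{snapshot}/{vm_file}", destination)
    finally:
        release_snapshot(snapshot)
```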

Even knowing how to handle open files, we still face another challenge in performing a live backup: virtual machines are not just open files but complete operating systems accessing a full set of virtual hardware.

Each time a state snapshot is taken everything freezes, including virtual memory and interrupts: in the virtual world this translates into a power failure, which may or may not corrupt the guest file system structure.

Even if a robust OS doesn’t corrupt data on power failure, it’s an approach few vendors are willing to support.

One of them is vizioncore, which built its popularity around esxRanger, a product able to perform live backups of virtual machines on VMware ESX Server with a fair degree of automation.

And for those brave enough to try such a scenario even without support, there is the famous VMBK script by Massimiliano Daneri, which also performs a basic live backup of VMware ESX Server virtual machines.

Microsoft will offer this kind of support for its Virtual Server 2005 starting with the imminent Service Pack 1, but will not allow the standard Microsoft Backup to be used for the task.

The generally accepted approach, and the only one really endorsed by virtualization vendors to work around the live backup issue, is to suspend or shut down running virtual machines, perform the backup, and then resume or restart them.
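
A minimal sketch of this cold workflow, assuming a hypothetical vm-control command standing in for the platform’s real management CLI, with illustrative paths:

```python
import shutil
import subprocess

def hypervisor_cmd(vm: str, action: str) -> None:
    # "vm-control" is a hypothetical placeholder for the platform's own CLI
    subprocess.run(["vm-control", vm, action], check=True)

def cold_backup(vm: str, vm_dir: str, destination: str) -> None:
    hypervisor_cmd(vm, "suspend")  # guest state is flushed to disk, files unlock
    try:
        shutil.copytree(vm_dir, destination)  # consistent copy, no open-file tricks
    finally:
        hypervisor_cmd(vm, "resume")  # service interruption ends here
```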

Unfortunately this approach prevents offering highly available services and obliges administrators to fall back on traditional agent-based backups for mission-critical virtual machines.

Putting aside live backup problems, which will eventually be addressed as operating systems become more virtualization-friendly, it is worth noting that even this second approach stresses the host I/O channels a bit.

To eliminate the problem completely, we have to move the backup point from the host to the storage facility, where our virtual machine files can be manipulated without directly impacting the virtualization platform.

VMware has been the first to adopt this solution, but as of today its Consolidated Backup (VCB) has notable limitations: it works only with ESX Server, it only acts as a proxy for real 3rd-party backup solutions (obliging customers to install and configure different scripts for different products), and it’s not able to perform a restore.

Staying at the storage level, a different method involves using SAN management software to invoke LUN cloning, but this approach usually doesn’t provide enough granularity: the storage facility doesn’t natively recognize formatted LUNs and therefore cannot back up single virtual machines.

Recognition of the LUN format depends on the storage management software we bought and on which file systems it supports.

It may recognize NTFS-formatted LUNs, allowing us to back up VMware Server for Windows virtual machines, while not supporting VMFS, preventing us from doing the same with VMware ESX Server virtual machines.

If the LUN format is unrecognized, or we simply don’t have any enhanced storage management solution, we’ll have to clone the whole LUN, which contains several virtual machines, obliging us to restore all of them even if just one is needed.
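
The consequence is easy to picture in code. In this trivial sketch (the LUN contents are illustrative), asking the storage facility for one guest brings back every guest sharing the LUN:

```python
# Illustrative contents of one LUN shared by several guests
lun_contents = ["web01.vmdk", "db01.vmdk", "mail01.vmdk"]

def restore_from_lun_clone(wanted_vm: str) -> list[str]:
    # No file-level access inside the LUN: the clone is all or nothing,
    # so recovering one guest drags every other guest along with it
    print(f"restoring the whole LUN just to recover {wanted_vm}")
    return lun_contents
```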

Fail-over and clustering

Obviously, providing high availability in a virtual datacenter doesn’t only involve virtual machine backup. As in traditional environments, we should be able to configure clusters or at least fail-over structures.

But HA in virtualization can take place at two levels instead of just one: we can work at the guest level, relying on OS and application disaster recovery capabilities, or at the host level, facing a new kind of problem.

Implementing HA configurations at the guest level is almost identical to what we already do in physical environments: there are some technical issues to address, like configuring a static MAC address for each virtual network interface, and some limitations, which depend on the chosen virtualization platform and HA software.
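
On VMware platforms, for instance, pinning the MAC address comes down to two lines in the guest’s .vmx file. A minimal Python sketch follows; the address itself is an illustrative value inside VMware’s static range, and a real tool would replace existing ethernet0 keys rather than blindly appending:

```python
def set_static_mac(vmx_path: str, mac: str = "00:50:56:00:10:01") -> None:
    # The address is an illustrative value inside VMware's static range
    # (00:50:56:00:00:00 - 00:50:56:3F:FF:FF); a production tool would
    # replace existing ethernet0 keys instead of appending duplicates
    with open(vmx_path, "a") as vmx:
        vmx.write('ethernet0.addressType = "static"\n')
        vmx.write(f'ethernet0.address = "{mac}"\n')
```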

But it’s basically always possible to create a virtual cluster, or even a mixed one, where one or more nodes are virtual machines while the others are physical.

Much more complex, but much more needed, is providing high availability for hosts.

In such a scenario, considering fail-over for example, virtual machines running on one host have to be copied to another and continuously synchronized, replicating virtual disk and virtual memory modifications.

This operation has the same problems we already discussed for live backup, but adds the complexity of doing everything as fast as possible and as often as possible.
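
A minimal sketch of such a replication loop; sync_vm_files is a hypothetical stand-in for a product’s replication engine, and the interval is an illustrative trade-off between host load and the data loss window:

```python
import time

def sync_vm_files(vm: str, source_host: str, standby_host: str) -> None:
    """Hypothetical stand-in for the replication engine: snapshot the VM,
    then ship only the blocks changed since the previous pass."""
    raise NotImplementedError

def replicate_forever(vm: str, source: str, standby: str, interval_s: int = 300) -> None:
    while True:
        sync_vm_files(vm, source, standby)  # same open-file issues as live backup
        time.sleep(interval_s)  # shorter interval: less data lost on fail-over, more load
```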

Here vizioncore is once again a protagonist with esxReplicator, able to copy running virtual machines from one VMware ESX Server to another, with or without a centralized storage facility. Unfortunately this product doesn’t handle the network modifications needed to perform a real fail-over, so we have to switch manually between a faulty host and a cold stand-by one.

A more dynamic solution is provided by VMware itself, which introduced a fail-over option based on VMotion with ESX Server 3 and VirtualCenter 2.

Unlike vizioncore’s esxReplicator, VMware HA automatically restarts the virtual machines of a faulty host, but it’s much more demanding in terms of configuration: it requires VirtualCenter and VMotion, and won’t work if VMs are not stored on a Fibre Channel SAN facility.

A different approach is possible thanks to the physical-to-virtual (P2V) migration tools we already mentioned in a former part of this series.

Since they can also perform virtual-to-virtual migrations, we could configure them to replicate virtual machine contents from one host to another.

Here PlateSpin is the preferred choice at the moment, offering live migration for Windows operating systems and already orienting its technology toward disaster recovery use.

Unfortunately, just like vizioncore, PlateSpin doesn’t handle every aspect of fail-over, so we still have to intervene manually.

Fail-over is a good solution, but surely the most desirable HA configuration is clustering, where multiple hosts act as an execution front-end for commonly shared virtual machines. If one of them goes down, there is no service interruption because the virtual machines remain available through the remaining hosts.

Clustering capability can be implemented at the host level as a native feature of the virtualization platform or as a 3rd-party solution.

In Virtual Server’s case, for example, Windows is the host OS and Microsoft provides clustering of the physical virtualization nodes through its Cluster Service.

ESX Server, on the other hand, has no such feature but counts on external solutions like Symantec Veritas Cluster Server to achieve the task. The recent acquisition of Rainfinity by EMC Corporation leaves some hope that one day the Rainwall technology could be used to perform ESX clustering natively.

However, as of today clustering solutions for virtualization are far from mature, and customers are highly recommended to perform thorough tests before adopting one.

Fail-over and clustering configurations are also complicated by heterogeneous architectures: when virtual machines are moved from one host to another, they could be served by CPUs of different vendors, which are similar but not identical, and current virtualization platforms are still unable to handle these differences in real time during a live migration.

In similar fashion, if the available hosts have different hardware configurations, a virtual machine’s virtual hardware assignments may not be satisfiable (think of a VM with four virtual CPUs), preventing migration at all.
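
Both constraints can at least be checked before attempting a move. A minimal sketch, with illustrative host and VM records:

```python
from dataclasses import dataclass

@dataclass
class Host:
    cpu_vendor: str       # e.g. "Intel" or "AMD"
    cpus: int
    free_memory_mb: int

@dataclass
class VM:
    vcpus: int
    memory_mb: int

def can_migrate(vm: VM, source: Host, target: Host) -> bool:
    if target.cpu_vendor != source.cpu_vendor:
        return False  # platforms cannot yet mask vendor differences live
    if vm.vcpus > target.cpus:
        return False  # e.g. a four-vCPU guest landing on a two-way host
    return vm.memory_mb <= target.free_memory_mb
```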

This may get worse in the near future, depending on how vendors implement support for paravirtualization.

As is known, this approach requires new-generation CPUs, able to run the host operating system at a special ring level. If the virtualization platform cannot concurrently run both the usual binary translation and paravirtualization, or cannot seamlessly switch between them, this will prevent using a mix of old and new physical servers, obliging us either to renovate the whole hardware infrastructure each time we buy new gear, or to carefully decide how to aggregate hosts for high availability.

Last but not least, we have to guarantee reliable access to the storage facility, which is surely the most critical point.

This is usually addressed by so-called multipathing: when hosts have two or more HBAs on board, configured to reach the storage through more than one path, the storage management software knows how to dynamically prefer a working link over faulty ones.
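
At its simplest, the logic is no more than preferring the first healthy path; in this minimal sketch, path_is_alive is a hypothetical stand-in for the driver-level health probe:

```python
def path_is_alive(path: str) -> bool:
    """Hypothetical driver-level health probe for one link to the LUN."""
    raise NotImplementedError

def pick_path(paths: list[str]) -> str:
    for path in paths:  # paths listed in order of preference
        if path_is_alive(path):
            return path
    raise RuntimeError("all paths to the storage facility are down")
```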

But since this is a software feature provided at the driver level, there are some restrictions.

Depending on which virtualization platform you choose, you may not have this capability: the current architecture of VMware ESX Server, for example, doesn’t allow storage vendors to plug in their own drivers, and the provided ones don’t support dynamic multipathing.

When choosing a hosted solution like VMware Server or Microsoft Virtual Server, instead, you are relying on the host operating system’s support for OEM drivers, which is practically always granted.

This article originally appeared on SearchServerVirtualization.