Xen 3.0 expected for September 2005

Quoting from InformationWeek:


Companies looking to squeeze more out of their IT infrastructure investments have for years been able to build virtual servers within mainframe, Unix, and even Windows environments. A movement to deliver this capability to Linux environments is gaining momentum thanks to the Xen hypervisor, an open-source software virtualization tool managed by startup XenSource Inc. and backed by some of IT’s biggest vendors.
Advanced Micro Devices Inc. and Intel are developing 64-bit processors that will make use of Xen hypervisor, while Linux providers Novell and Red Hat are working with XenSource to provide support for users consolidating server environments. Hewlett-Packard and IBM are contributing code to the Xen project and working with XenSource to develop new uses for the technology.

XenSource in January launched with a $6 million round of funding led by Kleiner Perkins Caufield & Byers and Sevin Rosen Funds. To succeed, they’ll have to take on well-established competitors, since Microsoft and VMware, a subsidiary of storage maker EMC, offer proprietary software that can be used to create virtual servers on Intel or AMD x86-based servers. Yet in a market where business-software buyers increasingly welcome an open-source alternative, XenSource could find an opening. “[Xen] is still very immature, but it offers a lot of promise that will be realized first by Linux users and then in other environments,” predicts IDC analyst Dan Kusnetzky.

Xen, which is licensed under the GNU General Public License, works on servers running any open-source operating system, including Linux and NetBSD, with ports to FreeBSD and Plan 9 under development. When Intel and AMD deliver new processors within the next year that support virtual servers on the chip level, Xen should be able to run on proprietary operating systems as well.

A more subtle difference between the Xen hypervisor and competing proprietary technologies is that Xen keeps cache memory that records the state of each virtual server and operating system. XenSource does this through “para-virtualization,” or splitting the operating-system drivers in half: one half operates inside the virtual server while the other half runs in a separate domain where this cache memory is stored. “This saves users time and resources when switching between virtual servers,” says Simon Crosby, VP of strategy and corporate development for XenSource.

Yet Xen’s potential for widespread adoption is clearly tied to chip-level advancements Intel and AMD are promising to deliver later this year.

Intel’s Vanderpool processor technology and AMD’s Pacifica processor will offer an interface with proprietary operating systems in a way that the Xen hypervisor can’t do on its own. “AMD and Intel will make the hypervisor’s job easier, particularly for operating systems for which source code is not available,” Crosby says.

XenSource says that the next version, Xen 3.0, will come out by September. By the end of this year, 64-bit AMD and Intel technology also is due out, including Intel’s Vanderpool. Version 3.0 will also let users create virtual machines that run applications requiring multiple processors.

Xen enters a market poised for heavy growth over the next few years, says IDC analyst Dan Kusnetzky. The market for virtual environment software, including management and security tools, reached $19.3 million worldwide in 2004 and will grow 20% annually through 2008.

For Xen to appeal to business users, the technology needs support from major Linux backers such as Red Hat and Novell, in addition to XenSource. While Red Hat and Novell have some developers working on the project, they haven’t integrated Xen support into their existing services, a move that XenSource predicts is likely given those companies’ strong interest in the growth of Linux. “People in large enterprises like to buy from a single vendor, so we expect Red Hat and Novell to offer support [for Xen] with their products,” Crosby says. XenSource will also support Xen and will, by the end of the year, develop utility software with a graphical interface that can be used to manage the technology. “This is an absolute requirement so that more people can use it,” Crosby says.

XenSource will in April host a Xen developer summit to determine how the technology should progress. “It’s an essential piece in the open-source process,” Crosby says. One of the issues likely to be discussed is security. “We’re still figuring out how to make the hypervisor more secure,” he says. IBM has a project under way called Secure Hypervisor to create a run-time environment that securely manages context switching between virtual servers. The goal is to prevent unauthorized information transfers between virtual servers and to ensure that all virtual servers are governed by the same security policies, Crosby says.

Most Xen users are still in the experimental stage. “The lion’s share of people pushing it out are in the hosting world and those who are running it in large data center deployments in banks and Fortune 500 companies,” Crosby says. “They’re still learning about it.”

One Xen pioneer is stretching where virtualization can be applied, exploring whether virtualization techniques can be used to create a network router that can segment its internal resources, such as CPU cycles, memory, and network bandwidth. “As such, I am exploring the possibility of running routers on multiple virtual machines (or domains in Xen terminology), with one virtual machine router (routelet in my project) for each network flow requiring quality of service guarantees,” Ross McIlroy, a research student at Scotland’s University of Glasgow, says in an E-mail interview.

McIlroy’s goal is to partition the flows, preventing one overloaded flow from impacting the service provided to another quality-of-service flow. The success of this experiment could have a positive impact on applications that transport isochronous data across a network, for example teleconferencing or voice-over-IP applications, which require a network providing quality of service.

McIlroy knows that his project isn’t exactly what Xen’s creators had in mind. “Xen provides me with an ideal basis for the creation of a prototype router which should test [Xen’s basic] theories,” he says. McIlroy has found Xen’s para-virtualization technique useful in reducing the overhead that’s normally generated when creating a virtual environment.

Microsoft unveils roadmap for its virtualization technologies

Microsoft Watch reports that Virtual PC v2 (codenamed “Hedgehog”) and Virtual Server v2 are expected in 2006 and will run on 64-bit architectures, as announced by Microsoft while discussing its roadmap for 64-bit server availability.

Microsoft Watch also reports that a Virtual PC 2004 Service Pack 2 and a Virtual Server 2005 Service Pack 1 are expected within this year. eWeek reports that Virtual PC 2004 Service Pack 2 could run 32-bit virtual machines on WOW64 or on native 64-bit architectures.

More details as soon as available.

Intel Vanderpool holds promise, some pitfalls

Quoting from The Inquirer:


Intel introduced VT or Vanderpool Technology a few IDFs ago with great fanfare and little hard information. Since then, as the technology got closer and closer to release, there has been a little more info, but even more questions. In the following four-part article, I will tell you a little about what VT is, and what it does for you. The first part is about virtualisation: what it is and what it does, followed by what problems it has in the second part. The third chapter will be more on Vanderpool (VT) itself and how it works on a technical level. The closing chapter will be on the uses of VT in the real world, and most likely how you will see, or hopefully not see, it in action.
Virtualisation is a way to run multiple operating systems on the same machine at the same time. It is akin to multitasking, but where multitasking allows you to run multiple programs on one OS on one set of hardware, virtualisation allows multiple OSes on one set of hardware. This can be very useful for security and uptime purposes, but it comes at a cost.

Imagine an OS that you can load in nearly no time, and if it crashes, you can simply throw it out, and quickly load a new one. If you have several of these running at the same time, you can shut one down and shunt the work off to the other ones while you are loading a fresh image. If you have five copies of Redhat running Apache, and one goes belly up, no problem. Simply pass incoming requests to the other four while the fifth is reloading.

If you save ‘snapshots’ of a running OS, you can reload it every time something unpleasant happens. Get hacked? Reload the image from a clean state and patch it up, quick. Virused? Same thing. Virtualisation provides the ability to reinstall an OS on the fly without reimaging a hard drive like you would with Ghost. You can simply load, unload and save OSes like programs.

It also allows you to run multiple different OSes on the same box at the same time. If you are a developer that needs to write code that will run on 95, 98, ME, 2000 and XP, you can have five machines on your desk or one with five virtual OSes running. Need to have every version of every browser to check your code against, but MS won’t let you do something as blindingly obvious as downgrading IE? Just load the old image, or better yet, run them all at once.

Another great example would be for a web hosting company. If you have 50 users on an average computer, each running a low level web site, you can have 50 boxes or one. 50 servers is the expensive way to go, very expensive, but also very secure. One is the sane way to go, that is until one person wants Cold Fusion installed, but that conflicts with the custom mods of customer 17, and moron 32 has a script that takes them all down every Thursday morning at 3:28am. This triggers a big headache for tech support as they get hit with 50 calls when there should be one.

Virtualisation fixes this by giving each user what appears to be their own computer. For all they know they are on a box to themselves, no muss, no fuss. If they want plain vanilla SuSE, Redhat with custom mods, or a Cold Fusion setup that only they understand, no problem. That script that crashes the machine? It crashes an instance, and with any luck, it will be reloaded before the person even notices the server went down, even if they are up at 3:28am. No one else on the box even notices.

But not all is well in virtualisation land. The most obvious thing is that 50 copies of an OS on a computer take up more resources and lead to a more expensive server. That is true, and it is hard to get around under any circumstances: more things loaded take more memory.

The real killer is the overhead. There are several techniques for virtualisation, but they all come with a performance hit. This number varies wildly with CPU, OS, workload and number of OSes you are running, and I do mean wildly. Estimates I hear run from 10% CPU time to over 40%, so it really is a ‘depends’ situation. If you are near the 40% mark, you are probably second guessing the sanity of using a VM in the first place.

The idea behind VT is to lower the cost of doing this virtualisation while possibly adding a few bells and whistles to the whole concept. Before we dig into how this works, it helps if you know a little more about how virtualisation accomplishes the seemingly magic task of having multiple OSes running on one CPU.

There are three main types of virtualisation: Paravirtualisation, Binary Translation and emulation. The one you may be familiar with is emulation: you can have a Super Nintendo emulator running in a window on XP, in another you have a Playstation emulator, and in the last you have a virtual Atari 2600. This can be considered the most basic form of virtualisation; as far as any game running is concerned, it is running on the original hardware. Emulation is really expensive in terms of CPU overhead: if you have to fake every bit of the hardware, it can take a lot of time and headaches. You simply have to jump through a lot of hoops, and do it perfectly.
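
To make the emulation idea concrete, here is a minimal, purely illustrative C sketch of the fetch-decode-execute loop at the heart of any emulator; the toy instruction set is invented for this example and is not taken from any real console. Every guest instruction is interpreted in software instead of running on the real CPU, which is exactly where the overhead comes from.

#include <stdio.h>
#include <stdint.h>

/* Toy guest instruction set, invented purely for illustration. */
enum { OP_LOAD, OP_ADD, OP_PRINT, OP_HALT };

int main(void) {
    /* "Guest program": load 2, add 3, print the result, halt. */
    uint8_t program[] = { OP_LOAD, 2, OP_ADD, 3, OP_PRINT, OP_HALT };
    int acc = 0;                      /* emulated accumulator register */

    for (size_t pc = 0; ; ) {         /* emulated program counter      */
        switch (program[pc++]) {      /* fetch and decode...           */
        case OP_LOAD:  acc  = program[pc++]; break;   /* ...and execute */
        case OP_ADD:   acc += program[pc++]; break;
        case OP_PRINT: printf("%d\n", acc);  break;
        case OP_HALT:  return 0;
        }
    }
}

Each guest instruction costs many host instructions here, which is why full emulation is the slowest of the three approaches.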

The other end of the spectrum is the method currently in vogue, and endorsed by everyone under the sun, Sun included: Paravirtualisation (PV). PV is a hack, somewhat literally: it makes the hosted OSes aware that they are in a virtualised environment, and modifies them so they will play nice. The OSes need to be tweaked for this method, and there has to be a back and forth between the OS writers and the virtualisation people. In this regard, it isn’t as much a complete virtualisation as it is a cooperative relationship.

PV works very well for open source OSes where you can tweak what you want in order to get them to play nice. Linux, xBSD and others are perfect PV candidates; Windows is not. This probably explains why RedHat and Novell were both touting Xen last week and MS was not on the list of cheerleaders.

The middle ground is probably the best route in terms of tradeoffs; it is Binary Translation (BT). What this does is look at what the guest OS is trying to do and change it on the fly. If the OS tries to execute instruction XYZ, and XYZ will cause problems to the virtualisation engine, it will change XYZ to something more palatable and fake the results of what XYZ should have returned. This is tricky work, and can be CPU time consuming, both for the monitoring and the fancy footwork required to have it all not blow up. Replacing one instruction with dozens of others is not a way to make things run faster.
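
As a rough, hypothetical illustration of the rewrite-and-fake idea (a toy C sketch under invented opcodes, not how VMware or any real binary translator is implemented), a translation pass can scan the guest instruction stream, replace a "privileged" opcode with a call into the monitor, and have the monitor fake the result the guest expected:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Toy guest instruction set, invented purely for illustration. */
enum { OP_NOP, OP_ADD1, OP_READ_CR0, OP_CALL_VMM, OP_HALT };

/* What the monitor pretends CR0 contains for this guest. */
static int vmm_fake_cr0(void) { return 0x11; }

int main(void) {
    uint8_t guest[] = { OP_ADD1, OP_READ_CR0, OP_NOP, OP_HALT };
    uint8_t xlated[sizeof guest];
    memcpy(xlated, guest, sizeof guest);

    /* Translation pass: rewrite the troublesome instruction before it runs. */
    for (size_t i = 0; i < sizeof xlated; i++)
        if (xlated[i] == OP_READ_CR0)
            xlated[i] = OP_CALL_VMM;

    /* Run the translated stream; the privileged read never reaches the CPU. */
    int acc = 0;
    for (size_t pc = 0; ; pc++) {
        switch (xlated[pc]) {
        case OP_ADD1:     acc += 1;             break;
        case OP_CALL_VMM: acc = vmm_fake_cr0(); break; /* faked result */
        case OP_NOP:                            break;
        case OP_HALT:     printf("guest sees CR0 = %#x\n", acc); return 0;
        }
    }
}

Even in this toy you can see the monitoring, the rewriting and the substitution of one instruction with several, which is where the overhead the article describes comes from.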

When you add in things like self modifying code, you get headaches that mere mortals should not have. None of the problems have a simple solution, and all involve tradeoffs to one extent or another. Very few problems in this area are solved, most are just worked around with the least amount of pain possible.

So, what are these problems? For the x86 architecture, at least in 32 bits, there are enough to drive you quite mad, but as a general rule, all involve ring transitions and related instructions. Conceptually, rings are a way to divide a system into privilege levels: you can have an OS running in a level that a user’s program cannot modify. This way, if your program goes wild, it won’t crash the system, and the OS can take control, shutting down the offending program cleanly. Rings enforce control over various parts of the system.
There are four rings in x86, 0, 1, 2, and 3, with the lower numbers being higher privilege. A simple way to think about it is that a program running at a given ring cannot change things running at a lower numbered ring, but something running at a low ring can mess with a higher numbered ring.

In practice, only rings 0 and 3, the highest and lowest, are commonly used. OSes typically run in ring 0 while user programs are in ring 3. One of the ways the 64-bit extensions to x86 ‘clean up’ the ISA is by losing the middle rings, 1 and 2. Pretty much no one cared that they are gone, except the virtualization folk.

Virtual Machines (VM) like VMware obviously have to run in ring 0, but if they want to maintain complete control, they need to keep the OS out of ring 0. If a runaway task can overwrite the VM, it kind of negates half the reason you want it in the first place. The obvious solution is to force the hosted OS to run in a lower ring, like ring 1.

This would be all fine and dandy except that the OSes are used to running in ring 0, and having complete control of the system. They are set to go from 0 to 3, not 1 to 3. In a PV environment, you change the OS so it plays nice. If you are going for the complete solution, you have to force it into ring 1.

The problem here is that some instructions will only work if they are going to or from ring 0, and others will behave oddly if not in the right ring. I mean ‘oddly’ in a computer way, IE a euphemism for really really bad things will happen if you try this; wear a helmet. It does prevent the hosted OS from trashing the VM, and also prevents the programs on the hosted OS from trashing the OS itself, or worse yet, the VM. This is the ‘0/1/3’ model.

The other model is called the ‘0/3’ model. It puts the VM in ring 0 and both the OS and programs all in ring 3, but essentially it does the rest of the things like the 0/1/3 model. The deprivileged OS in ring 3 can be walked on by user programs with much greater ease, but since there are no ring traversals, an expensive operation, it can run a little faster. Speed for security.

Another slightly saner way to do 0/3 is to have the CPU maintain two sets of page tables for things running in ring 0. One set of page tables would be for the OS, the other set for the old ring 3 programs. This way you have a fairly robust set of memory protections to keep user programs out of OS space, and the other way around. Of course this will once again cost performance, just in a different way, resulting in the other end of the speed vs security tradeoff.

To sum it all up, in 0/1/3, you have security, but take a hit when changing from 3 to 1, 3 to 0, or 1 to 0, and back again. In 0/3, you have only the 0 to 3 transition, so it could potentially run faster than a non-hosted OS. If you have a problem, the 0/3 model is much more likely to come down around your ears in a blue screen than the 0/1/3 model. The future is 0/3 though, mainly because, as I said earlier, 64-bit extensions do away with rings 1 and 2, so you are forced to the 0/3 model. That is, in computer terms, called progress, much in the same way devastating crashes are considered ‘odd’.

On paper this seems like the perfect thing to do, if you can live with a little more instability, or in the case of 0/1/3, a little speed loss. There are drawbacks though, and they broadly fall into four categories of ‘odd’. The first is instructions that check the ring they are in, followed by instructions that do not save the CPU state correctly when in the wrong ring. The last two are dead opposites, instructions that do not cause a fault when they should, and others that fault when they should not, and fault a lot. None of these make a VM writer’s life easier, nor do they speed anything up.

The first one is the most obvious, an instruction that checks the ring it is in. If you deprivilege an OS to ring 1, and it checks where it is running, it will return 1 not 0. If a program expects to be in 0, it will probably take the 1 as an error, probably a severe error. This leads to the user seeing blue, a core dump, or another form of sub-optimal user experience. Binary Translation can catch this and fake a 0, but that means tens or hundreds of instructions in the place of one, and the obvious speed hit.
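
As a concrete case, the code segment selector can be read by any program, and its two low bits encode the ring it is running in (the Current Privilege Level). A minimal sketch, assuming GCC on an x86 or x86-64 Linux box:

#include <stdio.h>

int main(void) {
    unsigned long cs;
    /* Reading %cs is unprivileged and never faults; the low two bits are
       the Current Privilege Level. A normal user program sees 3, a kernel
       sees 0, and a guest kernel deprivileged into ring 1 would see 1
       instead of the 0 it expects, unless the monitor rewrites or traps
       this kind of check. */
    __asm__ volatile ("mov %%cs, %0" : "=r"(cs));
    printf("running in ring %lu\n", cs & 3);
    return 0;
}

Run as an ordinary process this prints ring 3; the point is that nothing stops a deprivileged guest kernel from doing the same thing and noticing it is no longer in ring 0.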

Saving state is a potentially worse problem. Some things in a CPU are not easily saved on a context switch. A good example of this is the ‘hidden’ segment-register state. Once the segment registers are loaded, some portions of them cannot be saved, leading to unexpected discontinuities between the memory-resident portions and the actual values in the CPU. There are workarounds of course, but they are tricky and expensive performance wise.

Instructions that do not fault when they should pose an obvious problem. If you are expecting an instruction to cause a fault that you later trap, and it doesn’t, hilarity ensues. Hilarity for the person writing code in the cubicle next to you at the very least. If you are trying to make things work under these conditions, it isn’t all that much fun.

The opposite case is things that fault when they should not, and these things tend to be very common. Writes to CR0 and CR4 will fault if not running in the correct ring, leading to crashes or if you are really lucky, lots and lots of overhead. Both of these fault trappings, or lack thereof are eminently correctable on the fly, but they cost performance, lots of performance.

What the entire art of virtualization comes down to is moving the OS to a place where it should not be, and running around like your head is on fire trying to fix all the problems that come up. There are a lot of problems, and they happen quite often, so the performance loss is nothing specific, but more of a death by 1000 cuts.

IBM throws weight behind multi-OS push

Quoting from ZDNet:


IBM has quietly added a new option to the suddenly vogue market for “hypervisor” software that lets a computer run multiple operating systems simultaneously, CNET News.com has learned.

But Big Blue’s efforts aren’t likely to squash a potential rival just flexing its muscles.

IBM has released source code for its Research Hypervisor, or rHype, on its Web site, letting anyone examine the approach of a company renowned for its expertise in the field. One distinguishing feature: rHype works with multiple processor varieties, including IBM’s Power family, widely used x86 chips such as Intel’s Xeon, and the new Cell microprocessor codeveloped by IBM, Sony and Toshiba.

The project potentially competes with two commercial products–Microsoft’s Virtual Server and EMC’s VMware–and with the open-source Xen software that has attracted support from numerous computing heavyweights.

But given rHype’s open-source nature and IBM’s actions so far, rHype is more likely to be a help than a hindrance to Xen. Specifically, it could help Xen move from its current base of x86 chips to IBM’s Power.

“We’ve spent quite some time talking to its authors,” Xen founder Ian Pratt said. “Now that the rHype code is open source, it’s a great starting point for a port of Xen to Power.”

The rHype software may be incorporated directly into Xen because both packages are governed by the General Public License (GPL), Pratt said. And IBM isn’t shying away: Its programmers have been contributing to the Xen project.

It makes sense for IBM to help Xen, said Charles King, principal analyst of Pund-IT Research. “It sounds like a natural point of intersection, given IBM’s natural interest in open source and in virtualization,” King said.

IBM is the 800-pound gorilla when it comes to hypervisor software, which it has supported for decades on its mainframes and has brought to its Power-based Unix servers. But for x86 servers, IBM chose a partnership with VMware rather than bring its own technology to market.

IBM declined to comment on most details of rHype. However, Tom Bradicich, chief technology officer for IBM’s Intel-based xSeries server line, said Tuesday that it’s not likely IBM will turn rHype into a product.

“It’s in the realm of the possible, but we don’t foresee it at this time,” Bradicich said.

IBM has used rHype to aid three internal projects. One is sHype, the Secure Hypervisor project to build barriers between different virtual machines. Another is validating features of the Cell processor, which has nine separate processing cores. And a third is an IBM supercomputing project called PERCS (Productive, Easy-to-use, Reliable Computing System).

Hypervisor hype
A hypervisor–a term IBM is trying to trademark–is basic software that runs atop the processor, allocating resources such as processing power, memory and network links. By creating virtual connections to these resources–“virtualizing” them–the hypervisor provides a flexible foundation that can let a computer run multiple operating systems and thus multiple tasks more efficiently.

Juggling numerous tasks has long been a useful ability for corporate computing centers. Now such abilities are increasingly useful at home as computer networks get more complex and useful, King said.

“It’s fascinating to me that something that’s been seen as a benefit for enterprise data centers is percolating its way down into the set-top box,” King said.

The rHype software virtualizes only some resources, which makes it fall into the same “paravirtualization” category as the Xen hypervisor project. IBM developed security software called sHype on the rHype software, but in January it pledged to create a version that will work with Xen.

Among features IBM touts with rHype:

• A design that can handle sophisticated memory tasks and that works well with high-speed cache memory.

• Support for IBM’s open-source K42 operating system for multiprocessor servers.

• The ability to run on several processor simulators, including the “Mambo” simulator of IBM’s PowerPC 970 family of processors, the general-purpose QEMU simulator and the BOCHS x86 simulator. rHype also has run on VMware.

• Interfaces to use the software on servers with multiple processors and with multithreaded processors–those that can execute multiple simultaneous instruction sequences.

Divide and conquer
Xen and rHype contrast with virtual machine software such as VMware and Virtual Server, which employ full virtualization. That means an operating system doesn’t need to be modified, as generally is the case with paravirtualization today, but runs more slowly.

There are other ways of dividing a system so it can run multiple operating systems. Some higher-end IBM Intel servers have hardware-based partitions. And Linux-Vserver, SW-soft’s Virtuozzo and Solaris containers divide a single instance of an operating system so it appears that separate users have their own copies.

In addition to the rHype project, IBM has a commercial hypervisor running on machines that use its Power processors. Because rHype uses the same interfaces as the commercial hypervisor, Linux doesn’t have to be modified to run on an rHype-Power foundation. With rHype on x86 chips, Linux must be modified to work.

IBM isn’t the only company interested in helping Xen grow beyond x86 servers. Hewlett-Packard programmers have been working on Xen for computers using Intel’s Itanium 2 processor.

Although IBM is sharing the source code underlying the rHype project, it currently isn’t accepting modifications from outsiders.

Virtual-Server Face-Off

Here is another comparison between VMware and Microsoft virtualization products. Quoting from WindowsITPro:


The year 2004 might very well become known as the year of virtualization. Initially, IT pros recognized the value of virtualization technologies in test and demo environments. However, with the advent of ever more powerful systems coupled with continued improvements in virtual machine (VM) technologies, virtualization has since become a production-level technology that enables server consolidation.

VMware was first to market in the virtualization space with the release of VMware GSX Server in 2001. In October 2004, Microsoft entered the virtualization market with Virtual Server 2005, sparking much interest, especially among customers who have come to rely on VM technologies. A comparison of these two titans of virtualization leads to a clear recommendation as to the product that can best address a particular organization’s needs.

The VM Architecture
Both products install on top of the base OS and provide a software layer that emulates a physical system. You can install a guest OS on each emulated system, or VM, and you can run multiple VMs concurrently as if each were installed on a separate physical system.

Each VM owns its own virtual hardware, consisting of a processor, disk, memory, and network. VMs aren’t aware of other VMs as anything other than networked systems. The virtual server product handles the task of virtualizing the hardware and sharing it with all the VMs. The virtual server also provides virtual networking services that can connect VMs together as well as giving them access to external network resources.

Review Criteria
Although both products possess similar overall functionality, they also have several significant differences. In evaluating the products, the first criterion I considered was the host and guest OSs they support. The host OSs are the OS platforms on which you can install the VM software. The guest OSs are the OSs that the virtual servers can run. I also compared ease of use and overall manageability, looking at the process of creating new VMs as well as the ability to manage the virtual server and the VMs.

Finally, I compared the performance of the VMs running under each product. To check the overall performance of the guest OSs, I used SiSoftware Sandra 2005 Lite benchmarking software (http://www.SiSoftware.net). I compared the results of tests run on the VM that I created on each virtual server—a vanilla installation of Windows Server 2003, Enterprise Edition—to the results of baseline tests I ran on the physical machine. I performed all tests on an HP ProLiant ML350 with dual Intel Xeon 3.2GHz processors, 2GB of RAM, and a dual-channel Ultra320 SCSI controller connected to four 36GB, 15,000rpm hard drives running Windows 2003 as the host OS.

Microsoft Virtual Server 2005
Microsoft makes two versions of Virtual Server 2005: Standard Edition and Enterprise Edition. Standard Edition supports host servers with up to 4 processors; Enterprise Edition supports host servers that have as many as 32 processors. However, the product doesn’t provide SMP support for the VMs running on a Virtual Server 2005 server.

By using Physical Address Extension (PAE), Virtual Server can support up to 64GB of memory, and each VM can address up to 3.6GB of memory. Both versions support a maximum of 64 VMs per host. Microsoft supports Virtual Server for use only on 32-bit host platforms.

Unsurprisingly, Virtual Server supports only Microsoft host and guest OSs. Although I used the product to run other OSs, such as Linux, I don’t recommend that you do so in a production environment because of the lack of support.

I’m accustomed to using Microsoft Virtual PC, but Virtual Server is a very different animal and took a little getting used to. Instead of using a Windows-based management console, you manage Virtual Server through the Administration Website console that you see in Figure 1. You access the management program either by selecting the Administration Website option on the Virtual Server 2005 server or by pointing a browser to http://server:1024/VirtualServer/VSWebApp.exe.

Although it took a while for me to feel comfortable with it, the console made it very easy to manage Virtual Server from across the network. The Administration Website provides a thorough overview of the status of the configured VMs, including current performance data and even a mini screen view. I appreciated the no-footprint management offered by the Administration Website, but unfortunately the price you pay for it is having to run IIS 6.0 on the host. The installation process automatically configures IIS, adds the Administration Website, and sets permissions for the site, but you still have an extra element to manage.

Unlike GSX Server, Virtual Server has no wizard to step you through the process of creating VMs. Instead, you need to use the links provided by the Administration Website to manually create a virtual hard disk (VHD), a virtual network, and a VM that utilizes the VHD and virtual network. After I became familiar with the process, I found the interface fairly easy to navigate, but it lacked some of the niceties that I’ve come to expect from a Windows application, such as the ability to browse the file system when creating VHDs. I like the Administration Website’s ability to provide remote control for all the VMs. After you create a VM, you’ll probably want to install the Virtual Machine Additions, software drivers that increase screen resolution by adding an SVGA driver and enable better mouse tracking and control.

Virtual Server supports four types of VHDs: dynamically expanding, fixed-size, differencing, and linked. The host OS sees dynamically expanding and fixed-size VHDs as a large .vhd file that contains the file system for the guest VM. Dynamically expanding disks start small and automatically grow as the guest VM requires additional storage. Much like a physical hard drive, a dynamically expanding disk can grow only until it reaches its predefined limit. As you’d expect, the guest VM experiences a delay when the VHD must be expanded. Fixed-size VHDs are allocated when you create them and don’t grow.

Dynamically expanding, fixed-size, and differencing VHDs support using an optional undo disk. Undo disks let you reset all changes that have been made to a dynamic, fixed-size, or differencing disk. Undo disks store all configuration and data changes made to the VM during the session and prompt you to save or discard the changes when you shut down the VM. Differencing disks let you isolate changes that occur within a guest VHD; all changes that occur in the parent VHD are stored in the differencing disk. Unlike an undo disk, which is associated with the entire VM, a differencing disk is associated with a particular VHD. Compared with GSX Server’s differencing disks, Virtual Server’s built-in differencing disks are a snap to create.

Linked VHDs are different from the other types of VHDs. Linked disks convert an entire host file system’s partition to a VHD. Afterward, the host can no longer access that portion of the file system. You can’t use linked disks with undo disks or differencing disks.

You can configure virtual networking to use either the host system’s NIC or a user-defined virtual network that only VMs can access. If you use the host NIC, any VM connected to the virtual network can access the network that the host is connected to. Otherwise, the VM can access only the internal virtual network. Virtual Server can also provide a virtual DHCP server, so you don’t need to configure a guest VM on an internal network to act as a DHCP server.

One especially nice feature is Virtual Server’s ability to configure shared SCSI VHDs, which lets you set up Microsoft Cluster service over two VM nodes. Another welcome feature is the ability to transfer VMs created with Virtual PC 2004 to Virtual Server. One annoying limitation of Virtual Server is that, like Virtual PC, it lacks support for USB devices. Although you can use USB keyboards and mouse devices, you can’t plug in USB flash drives with Virtual Server and have them recognized in the VMs. Virtual Server also has a strong set of COM-based APIs that you can use in conjunction with VBScript to create your own custom management scripts.

Microsoft offers the Virtual Server Migration Toolkit (VSMT) as a free add-on to Virtual Server. Available for download at http://www.microsoft.com/windowsserversystem/virtualserver/evaluation/vsmt.mspx, the VSMT can convert physical machines to VMs and VMware VMs to Virtual Server–compatible VMs. VMware offers a similar product, called the VMware P2V Assistant, but you must purchase it separately.

VMware GSX Server 3.1
Now in its third release, VMware GSX Server offers two licensing levels: one for systems with one or two CPUs, and the other for systems with up to 32 CPUs. Like its competitor, GSX Server doesn’t provide SMP support for the guest OSs and lets you run a maximum of 64 VMs concurrently on one host, depending on the resources the VMs require. GSX Server supports up to 64GB of memory on PAE-enabled Windows systems, and each VM can address up to 3.6GB of memory.

When I wrote this review, GSX Server officially supported only 32-bit hosts. However, the product also provides “experimental support” for 64-bit hosts, which basically means that they work but aren’t recommended for use in a production setting. I expect VMware to announce official support for 64-bit host OSs after Microsoft releases Windows 2003 for 64-bit Extended Systems later this year.

GSX Server has a decided advantage over Virtual Server in the area of supported host and guest OSs. In addition to supporting all Windows OSs, GSX Server supports a variety of Linux systems as hosts, as you can see in Table 1. The product’s client OS support is equally extensive.

If you’ve used VMware Workstation or an earlier version of GSX Server, you’ll find managing GSX Server to be a breeze: management is handled through the Windows-based Virtual Machine Console. Although it provides decidedly less information than Virtual Server’s Administration Website, it’s easier to use and noticeably more responsive.

Setting up new VMs under GSX Server is decidedly easier than using Virtual Server’s piecemeal VM creation process. GSX Server’s New Virtual Machine Wizard provides an easy-to-use interface that steps you through VM, VHD, and network creation. You’ll probably want to install VMware’s VMTools on all your VMs. VMTools provides a higher-performance video driver and enables cutting and pasting text between the VMs and the host.

VMware gives you several options for remotely managing GSX Server. The Windows-based Virtual Machine Console can connect to networked GSX Server systems. A Web-based management interface enables basic VM management functions, such as displaying and controlling VMs. You can also use a set of scripting APIs for Perl and COM, called the vmPerl and vmCOM APIs, respectively.

GSX Server supports two basic types of virtual disks: raw and virtual. Raw disks directly access a local disk partition. Virtual disks appear to the GSX Server host OS as a file. That file, which has an extension of .vmdk, stores the VM’s entire file system. You can dynamically expand virtual disk files, or you can preallocate files when you create them.

GSX Server’s undo disks let you save or discard all the changes in a VM at the end of a session, and virtual disks have a snapshot feature that lets you capture the current state of the virtual disk. GSX Server also supports differencing, but the associated process is manual and isn’t nearly as easy to use as Virtual Server’s differencing disk capability.

You have a choice of three types of virtual networking for GSX Server VMs: host-only, Network Address Translation (NAT), and bridged. Host-only networking restricts you to internal VMs that have no outside connections. The NAT option lets VMs connect to the outside network using the host IP address. GSX Server provides its own built-in DHCP server for host-only and NAT configurations. Bridged networking lets VMs access the outside network. Alternatively, you can choose None to disable the network hardware.

GSX Server lets you set up Microsoft Cluster service using shared SCSI VHDs. You can also transfer to GSX Server any VMs that you’ve created with VMware Workstation. One key advantage GSX Server has over Virtual Server 2005 is full support for USB devices—I could freely transfer data between GSX Server VMs and USB flash drives.

Performance
To test performance, I used the Sandra benchmarking software’s combined performance index tests running on a fresh installation of Windows 2003, Enterprise Edition. I tested a variety of system performance factors, including basic display performance, memory access speed, and file-access and networking performance.

For Virtual Server 2005, I performed all tests on the local server that was running Virtual Server, using the Virtual Machine Remote Control Client running in full-screen mode. I configured the VM to use 384MB of RAM and used a fixed SCSI VHD so the test wouldn’t be affected by dynamic expansion. The VHD was also on a different disk spindle than the drive on which the host OS was installed. To determine whether the Virtual Machine Additions made a significant performance difference, I first ran a set of tests without the Virtual Machine Additions installed, then ran another set after installing them.

In all the performance tests, the VMs running under Virtual Server were slower than those running under GSX Server. The CPU arithmetic test shows Virtual Server lagging behind GSX Server by about 20 percent. The multimedia test showed similar results. The other tests were closer, but GSX Server held onto a 17.5 percent advantage in file system performance and a 5 percent edge in network performance. The presence of the Virtual Machine Additions gave a bigger boost to Virtual Server’s file and network access performance than it did to the product’s arithmetic and multimedia performance.

I configured GSX Server’s VM to use 384MB of RAM and a preallocated virtual SCSI hard disk that was located on a separate physical hard disk from the host system’s OS. I ran two sets of tests: the first without VMTools and the second with VMTools. VMs running under GSX Server provide notably better performance than those running under Virtual Server. Considering that GSX Server is in its third release and Virtual Server is in its first release, it wasn’t surprising that GSX Server is faster.

A Clear Choice
Both products are of excellent quality, and neither gave me any significant problem. If you need to run Linux or other guest OSs in a production environment, VMware GSX Server is the clear choice. VMware officially supports most popular Linux distributions. You can find more information about or download a 30-day evaluation version of VMware GSX Server 3.1 at http://www.vmware.com/products/server/gsx_features.html.

For those who have a Microsoft-only environment, however, Virtual Server 2005 is the better value. Significantly less costly than GSX Server, Virtual Server offers all the same capabilities for Windows guest OSs, albeit slightly slower performance. For more information about Virtual Server 2005 or to download a 180-day evaluation version, go to http://www.microsoft.com/virtualserver.

A resource for VMware exam preparation

A new, previously undiscovered site about virtualization was brought to my attention today: http://vmwareprofessional.com.
This site, maintained by Dominic Rivera, will help you prepare for both the old (ESX-only) and the new (also covering VirtualCenter) VMware VCP certification exams.

I strongly suggest my readers take a look at this site if they are approaching certification.

Thanks to Peter Erleshofer for this news.

Minime project hits beta

Massimiliano Daneri has finally found some time to further develop his ambitious project of replacing VMware VirtualCenter with a simpler and more powerful (it can also manage Microsoft Virtual Server 2005) virtual infrastructure management tool: Minime.
He has just announced that the first Minime public beta is coming. I’m personally testing a private beta release and have to say it’s really good work.

Compliments to Massimiliano for his hard work!

Does VMware really have competitors?

If you read my blog you know this: this month the press is crowded with articles about Xen and Virtual Iron. Many writers are pointing out that, after years of predominance, VMware now has a lot of important competitors offering the same or better virtualization technologies.

My personal opinion is that none of this can be said today.

While Xen and Virtual Iron are interesting virtualization projects, today both offer only Linux virtualization, while VMware is extending its guest operating system list with the much-requested Sun Solaris, on top of the Windows, Linux, BSD and NetWare OS families it already supports.
Until Xen or Virtual Iron offer support for at least Windows guest OSes, they won’t be comparable with VMware at all. Today the only real VMware competitors are SVISTA and Microsoft Virtual PC/Server (considering that we know they can run *nix guest OSes, even if unsupported).

If Xen and Virtual Iron only offer Linux guest OSes in the future, they will compete much more with virtualization solutions like Virtuozzo (even if the underlying technology is different). And actually their toughest competitor is the just-launched Containers technology in Sun Solaris 10.
The problem is: who will adopt Xen, which requires modifying guest OS kernels, or Virtual Iron, which is a commercial product, when Sun offers native software partitioning at zero cost (even for commercial projects), embedded in its operating system?

What is Xen, and why is it cool?

Quoting from Nathan Torkington at O’Reilly Developers Weblogs:


I got into Xen a few weeks ago, and I’m loving that they’re getting lots of attention. But most folks don’t know what they are and why their product is so cool. Allow me to shed light on the matter …
Xen is like the Mach microkernel, where you can have multiple operating systems running at once and a thin kernel handles switching between them and managing device access. This thin layer in Xen is called the hypervisor, and is analogous to the Mach microkernel. It provides an idealized hardware layer that you port your OS to, and in return you get the ability to run multiple operating system instances at once (e.g., run two copies of Redhat’s latest, one copy of the Novell Desktop, and an OpenBSD), freeze and restore snapshots of a running OS, and more.

What you can’t do with Xen is run Windows on it–that’s always going to be VMware’s niche (at least until Intel’s VM technology becomes ubiquitous). But Xen makes a whole lot of situations possible that are slow or impossible at the moment. Two applications that are working well for Xen: testing and server load balancing. If you’re working on your app and want to test it on a staging server, it’s no fun to reboot, or negotiate time on a shared staging server, and it’s way less fun to rebuild if your app hoses the staging server. The Xen way, you run your development OS and your staging OS on your machine at the same time and switch from one to the other when you need to. If the staging server gets borked, you delete that running OS and reload from a saved stable snapshot.

In the server room, it’s often easier and more secure to manage a single service on a running machine. The more ways into a box, the less defensible it is and the more risk for damage and service downtime if the box is compromised. So run Xen and use one OS instance per service. If a service is compromised, only that service is compromised. If you experience high load, say due to Slashdotting, you can easily reconfigure machines to run different services. (You can rdist the snapshot of an OS running that service and then bring it up on however many machines you need).

The potential for Xen is great. We’re going to feature them at OSCON because their technology is just so cool. Lots of companies like RedHat and HP are very interested in what Xen makes possible, because the hypervisor enables things that seemed like wishful fantasy a few years ago. I loved my time meeting with one of the company’s founders and playing with Xen–they’re very smart engineers with their heads screwed on right. There’s obviously a lot of work to be done in making Xen friendlier to install, getting more tools around the administration of Xen, etc., so the interest and involvement of companies with big budgets is a good thing. They’ll help move Xen from the research lab where it was born to data centers and developer desktops where it can be ubiquitous and useful.

So look for lots of action from Xen. I expect the next versions of Novell, HP, etc.’s offerings will feature Xen support (either standard or as an alternate kernel shipped with the distro). I hope there’ll be a great distro like Ubuntu or Gentoo offering a Xen install as well as a solo install. This will give everyone a painless way to do some very cool things and open the door for even cooler things down the line.