Intel Vanderpool holds promise, some pitfalls

Quoting from The Inquirer:


Intel introduced VT, or Vanderpool Technology, a few IDFs ago with great fanfare and little hard information. Since then, as the technology got closer and closer to release, there has been a little more info, but even more questions. In the following four-part article, I will tell you a little about what VT is and what it does for you. The first part is about virtualisation: what it is and what it does, followed by the problems it has in the second part. The third chapter will be more on Vanderpool (VT) itself and how it works on a technical level. The closing chapter will be on the uses of VT in the real world, and most likely how you will see, or hopefully not see, it in action.
Virtualisation is a way to run multiple operating systems on the same machine at the same time. It is akin to multitasking, but where multitasking allows you to run multiple programs on one OS on one set of hardware, virtualisation allows multiple OSes on one set of hardware. This can be very useful for security and uptime purposes, but it comes at a cost.

Imagine an OS that you can load in nearly no time, and if it crashes, you can simply throw it out, and quickly load a new one. If you have several of these running at the same time, you can shut one down and shunt the work off to the other ones while you are loading a fresh image. If you have five copies of Red Hat running Apache, and one goes belly up, no problem. Simply pass incoming requests to the other four while the fifth is reloading.

If you save ‘snapshots’ of a running OS, you can reload it every time something unpleasant happens. Get hacked? Reload the image from a clean state and patch it up, quick. Virused? Same thing. Virtualisation provides the ability to reinstall an OS on the fly without reimaging a hard drive like you would with Ghost. You can simply load, unload and save OSes like programs.

It also allows you to run multiple different OSes on the same box at the same time. If you are a developer that needs to write code that will run on 95, 98, ME, 2000 and XP, you can have five machines on your desk or one with five virtual OSes running. Need to have every version of every browser to check your code against, but MS won’t let you do something as blindingly obvious as downgrading IE? Just load the old image, or better yet, run them all at once.

Another great example would be for a web hosting company. If you have 50 users on an average computer, each running a low level web site, you can have 50 boxes or one. 50 servers is the expensive way to go, very expensive, but also very secure. One is the sane way to go, that is until one person wants Cold Fusion installed, but that conflicts with the custom mods of customer 17, and moron 32 has a script that takes them all down every Thursday morning at 3:28am. This triggers a big headache for tech support as they get hit with 50 calls when there should be one.

Virtualisation fixes this by giving each user what appears to be their own computer. For all they know they are on a box to themselves, no muss, no fuss. If they want plain vanilla SuSE, Redhat with custom mods, or a Cold Fusion setup that only they understand, no problem. That script that crashes the machine? It crashes an instance, and with any luck, it will be reloaded before the person even notices the server went down, even if they are up at 3:28am. No one else on the box even notices.

But not all is well in virtualisation land. The most obvious problem is that 50 copies of an OS on a computer take up more resources and lead to a more expensive server. That is true, and it is hard to get around under any circumstances: more things loaded take more memory.

The real killer is the overhead. There are several techniques for virtualisation, but they all come with a performance hit. This number varies wildly with CPU, OS, workload and number of OSes you are running, and I do mean wildly. Estimates I hear run from 10% CPU time to over 40%, so it really is a ‘depends’ situation. If you are near the 40% mark, you are probably second guessing the sanity of using a VM in the first place.

The idea behind VT is to lower the cost of doing this virtualisation while possibly adding a few bells and whistles to the whole concept. Before we dig into how this works, it helps if you know a little more about how virtualisation accomplishes the seemingly magic task of having multiple OSes running on one CPU.

There are three main types of virtualisation: paravirtualisation, binary translation and emulation. The one you may be familiar with is emulation: you can have a Super Nintendo emulator running in a window on XP, in another a Playstation emulator, and in the last a virtual Atari 2600. This can be considered the most basic form of virtualisation; as far as any game running is concerned, it is running on the original hardware. Emulation is really expensive in terms of CPU overhead; if you have to fake every bit of the hardware, it can take a lot of time and headaches. You simply have to jump through a lot of hoops, and do it perfectly.

The other end of the spectrum is the method currently in vogue, and endorsed by everyone under the sun, Sun included, Paravirtualisation (PV). PV is a hack, somewhat literally, it makes the hosted OSes aware that they are in a virtualised environment, and modifies them so they will play nice. The OSes need to be tweaked for this method, and there has to be a back and forth between the OS writers and the virtualisation people. In this regard, it isn’t as much a complete virtualisation as it is a cooperative relationship.

PV works very well for open source OSes, where you can tweak what you want in order to get them to play nice. Linux, the BSDs and others are perfect PV candidates; Windows is not. This probably explains why Red Hat and Novell were both touting Xen last week and MS was not on the list of cheerleaders.

The middle ground is probably the best route in terms of tradeoffs; it is Binary Translation (BT). What this does is look at what the guest OS is trying to do and changes it on the fly. If the OS tries to execute instruction XYZ, and XYZ will cause problems to the virtualisation engine, it will change XYZ to something more palatable and fake the results of what XYZ should have returned. This is tricky work, and can be CPU time consuming, both for the monitoring and the fancy footwork required to have it all not blow up. Replacing one instruction with dozens of others is not a way to make things run faster.
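As a rough sketch (the instruction names below are invented for illustration, not real x86), binary translation can be modelled as a pass over the guest's instruction stream that expands each privileged operation into a longer, safe sequence of monitor calls:

```python
# Toy model of binary translation: privileged "instructions" in the
# guest stream are rewritten into longer safe sequences that call
# into the virtual machine monitor instead of touching real hardware.
# All instruction names here are made up for illustration.

PRIVILEGED = {"WRITE_CR0", "WRITE_CR4", "HALT"}

def translate(block):
    """Rewrite a basic block, expanding privileged ops into safe ones."""
    out = []
    for insn in block:
        if insn in PRIVILEGED:
            # One privileged instruction becomes several monitor calls.
            out += ["SAVE_GUEST_STATE",
                    f"VMM_EMULATE_{insn}",
                    "RESTORE_GUEST_STATE"]
        else:
            out.append(insn)
    return out

translated = translate(["ADD", "WRITE_CR0", "MOV"])
```

Note the one-to-three expansion of `WRITE_CR0` in the output: replacing one instruction with several is exactly why BT costs performance.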

When you add in things like self modifying code, you get headaches that mere mortals should not have. None of the problems have a simple solution, and all involve tradeoffs to one extent or another. Very few problems in this area are solved, most are just worked around with the least amount of pain possible.

So, what are these problems? For the x86 architecture, at least in 32 bits, there are enough to drive you quite mad, but as a general rule, all involve ring transitions and related instructions. Conceptually rings are a way to divide a system into privilege levels, you can have an OS running in a level that a user’s program can not modify. This way, if your program goes wild, it won’t crash the system, and the OS can take control, shutting down the offending program cleanly. Rings enforce control over various parts of the system.
There are four rings in x86, 0, 1, 2, and 3, with the lower numbers being higher privilege. A simple way to think about it is that a program running at a given ring can not change things running at a lower numbered ring, but something running at a low ring can mess with a higher numbered ring.
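The numbering rule can be sketched as a one-line predicate (a toy model of the privilege ordering, not real hardware behaviour):

```python
def can_modify(actor_ring, target_ring):
    """Lower ring number = more privilege. An actor may touch state
    at its own ring or at any higher-numbered (less privileged) ring."""
    return actor_ring <= target_ring

assert can_modify(0, 3)      # the kernel can touch user-mode state
assert not can_modify(3, 0)  # user code cannot touch kernel state
```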

In practice, only rings 0 and 3, the highest and lowest, are commonly used. OSes typically run in ring 0 while user programs are in ring 3. One of the ways the 64-bit extensions to x86 ‘clean up’ the ISA is by losing the middle rings, 1 and 2. Pretty much no one cared that they are gone, except the virtualization folk.

Virtual Machines (VM) like VMware obviously have to run in ring 0, but if they want to maintain complete control, they need to keep the OS out of ring 0. If a runaway task can overwrite the VM, it kind of negates half the reason you want it in the first place. The obvious solution is to force the hosted OS to run in a lower ring, like ring 1.

This would be all fine and dandy except that the OSes are used to running in ring 0, and having complete control of the system. They are set to go from 0 to 3, not 1 to 3. In a PV environment, you change the OS so it plays nice. If you are going for the complete solution, you have to force it into ring 1.

The problem here is that some instructions will only work if they are going to or from ring 0, and others will behave oddly if not in the right ring. I mean ‘oddly’ in a computer way, i.e. a euphemism for really, really bad things will happen if you try this; wear a helmet. It does prevent the hosted OS from trashing the VM, and also prevents the programs on the hosted OS from trashing the OS itself, or worse yet, the VM. This is the ‘0/1/3’ model.

The other model is called the ‘0/3’ model. It puts the VM in ring 0 and both the OS and programs in ring 3, but otherwise does the rest of the things like the 0/1/3 model. The deprivileged OS in ring 3 can be walked on by user programs with much greater ease, but since there are no ring traversals, which are expensive operations, it can run a little faster. Speed for security.

Another slightly saner way to do 0/3 is to have the CPU maintain two sets of page tables for things running in ring 0. One set of page tables would be for the OS, the other set for the old ring 3 programs. This way you have a fairly robust set of memory protections to keep user programs out of OS space, and the other way around. Of course this will once again cost performance, just in a different way, resulting in the other end of the speed vs security tradeoff.

To sum it all up, in 0/1/3, you have security, but take a hit when changing from 3 to 1, 3 to 0, or 1 to 0, and back again. In 0/3, you have only the 0 to 3 transition, so it could potentially run faster than a non-hosted OS. If you have a problem, the 0/3 model is much more likely to come down around your ears in a blue screen than the 0/1/3 model. The future is 0/3 though, mainly because, as I said earlier, 64-bit extensions do away with rings 1 and 2, so you are forced to the 0/3 model. That is, in computer terms, called progress, much in the same way devastating crashes are considered ‘odd’.

On paper this seems like the perfect thing to do, if you can live with a little more instability, or in the case of 0/1/3, a little speed loss. There are drawbacks though, and they broadly fall into four categories of ‘odd’. The first is instructions that check the ring they are in, followed by instructions that do not save the CPU state correctly when in the wrong ring. The last two are dead opposites, instructions that do not cause a fault when they should, and others that fault when they should not, and fault a lot. None of these make a VM writer’s life easier, nor do they speed anything up.

The first one is the most obvious, an instruction that checks the ring it is in. If you deprivilege an OS to ring 1, and it checks where it is running, it will return 1 not 0. If a program expects to be in 0, it will probably take the 1 as an error, probably a severe error. This leads to the user seeing blue, a core dump, or another form of sub-optimal user experience. Binary Translation can catch this and fake a 0, but that means tens or hundreds of instructions in the place of one, and the obvious speed hit.

Saving state is a potentially worse problem. Some things in a CPU are not easily saved on a context switch. A good example is the ‘hidden’ segment-register state. Once segment registers are loaded into the CPU, some portions of them cannot be read back out, leading to unexpected discontinuities between the memory-resident copies and the actual values in the CPU. There are workarounds of course, but they are tricky and expensive performance-wise.

Instructions that do not fault when they should pose an obvious problem. If you are expecting an instruction to cause a fault that you later trap, and it doesn’t, hilarity ensues. Hilarity for the person writing code in the cubicle next to you at the very least. If you are trying to make things work under these conditions, it isn’t all that much fun.

The opposite case is things that fault when they should not, and these things tend to be very common. Writes to CR0 and CR4 will fault if not running in the correct ring, leading to crashes or, if you are really lucky, lots and lots of overhead. Both of these fault trappings, or lack thereof, are eminently correctable on the fly, but they cost performance, lots of performance.
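A minimal trap-and-emulate sketch, assuming a monitor that catches the guest's faulting control-register writes and applies them to a shadow copy (the class and its register model here are purely illustrative, not any real VMM's API):

```python
class Monitor:
    """Toy trap-and-emulate: the guest's control-register writes fault
    when it runs deprivileged, and the monitor catches each fault and
    applies the write to a shadow copy instead of the real register."""
    def __init__(self):
        self.shadow_cr = {"CR0": 0, "CR4": 0}
        self.faults_handled = 0

    def guest_write(self, reg, value):
        # Outside ring 0 the real write would fault; model that fault
        # as a call into the monitor's handler.
        self.faults_handled += 1
        self.shadow_cr[reg] = value  # guest later reads this shadow value

mon = Monitor()
mon.guest_write("CR0", 0x80000001)
```

Every such write costs a fault, a decode, and an emulated update, which is where the "lots and lots of overhead" comes from.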

What the entire art of virtualization comes down to is moving the OS to a place where it should not be, and running around like your head is on fire trying to fix all the problems that come up. There are a lot of problems, and they happen quite often, so the performance loss is nothing specific, but more of a death by 1000 cuts.

IBM throws weight behind multi-OS push

Quoting from ZDNet:


IBM has quietly added a new option to the suddenly vogue market for “hypervisor” software that lets a computer run multiple operating systems simultaneously, CNET News.com has learned.

But Big Blue’s efforts aren’t likely to squash a potential rival just flexing its muscles.

IBM has released source code for its Research Hypervisor, or rHype, on its Web site, letting anyone examine the approach of a company renowned for its expertise in the field. One distinguishing feature: rHype works with multiple processor varieties, including IBM’s Power family, widely used x86 chips such as Intel’s Xeon, and the new Cell microprocessor codeveloped by IBM, Sony and Toshiba.

The project potentially competes with two commercial products–Microsoft’s Virtual Server and EMC’s VMware–and with the open-source Xen software that has attracted support from numerous computing heavyweights.

But given rHype’s open-source nature and IBM’s actions so far, rHype is more likely to be a help than a hindrance to Xen. Specifically, it could help Xen move from its current base of x86 chips to IBM’s Power.

“We’ve spent quite some time talking to its authors,” Xen founder Ian Pratt said. “Now that the rHype code is open source, it’s a great starting point for a port of Xen to Power.”

The rHype software may be incorporated directly into Xen because both packages are governed by the General Public License (GPL), Pratt said. And IBM isn’t shying away: Its programmers have been contributing to the Xen project.

It makes sense for IBM to help Xen, said Charles King, principal analyst of Pund-IT Research. “It sounds like a natural point of intersection, given IBM’s natural interest in open source and in virtualization,” King said.

IBM is the 800-pound gorilla when it comes to hypervisor software, which it has supported for decades on its mainframes and has brought to its Power-based Unix servers. But for x86 servers, IBM chose a partnership with VMware rather than bring its own technology to market.

IBM declined to comment on most details of rHype. However, Tom Bradicich, chief technology officer for IBM’s Intel-based xSeries server line, said Tuesday that it’s not likely IBM will turn rHype into a product.

“It’s in the realm of the possible, but we don’t foresee it at this time,” Bradicich said.

IBM has used rHype to aid three internal projects. One is sHype, the Secure Hypervisor project to build barriers between different virtual machines. Another is validating features of the Cell processor, which has nine separate processing cores. And a third is an IBM supercomputing project called PERCS (Productive, Easy-to-use, Reliable Computing System).

Hypervisor hype
A hypervisor–a term IBM is trying to trademark–is basic software that runs atop the processor, allocating resources such as processing power, memory and network links. By creating virtual connections to these resources–“virtualizing” them–the hypervisor provides a flexible foundation that can let a computer run multiple operating systems and thus multiple tasks more efficiently.

Juggling numerous tasks has long been a useful ability for corporate computing centers. Now such abilities are increasingly useful at home as computer networks get more complex and useful, King said.

“It’s fascinating to me that something that’s been seen as a benefit for enterprise data centers is percolating its way down into the set-top box,” King said.

The rHype software virtualizes only some resources, which makes it fall into the same “paravirtualization” category as the Xen hypervisor project. IBM developed security software called sHype on the rHype software, but in January it pledged to create a version that will work with Xen.

Among features IBM touts with rHype:

• A design that can handle sophisticated memory tasks and that works well with high-speed cache memory.

• Support for IBM’s open-source K42 operating system for multiprocessor servers.

• The ability to run on several processor simulators, including the “Mambo” simulator of IBM’s PowerPC 970 family of processors, the general-purpose QEMU simulator and the BOCHS x86 simulator. rHype also has run on VMware.

• Interfaces to use the software on servers with multiple processors and with multithreaded processors–those that can execute multiple simultaneous instruction sequences.

Divide and conquer
Xen and rHype contrast with virtual machine software such as VMware and Virtual Server, which employ full virtualization. With full virtualization, an operating system doesn't need to be modified, as it generally must be with paravirtualization today, but it runs more slowly.

There are other ways of dividing a system so it can run multiple operating systems. Some higher-end IBM Intel servers have hardware-based partitions. And Linux-VServer, SWsoft's Virtuozzo and Solaris containers divide a single instance of an operating system so it appears that separate users have their own copies.

In addition to the rHype project, IBM has a commercial hypervisor running on machines that use its Power processors. Because rHype uses the same interfaces as the commercial hypervisor, Linux doesn’t have to be modified to run on an rHype-Power foundation. With rHype on x86 chips, Linux must be modified to work.

IBM isn’t the only company interested in helping Xen grow beyond x86 servers. Hewlett-Packard programmers have been working on Xen for computers using Intel’s Itanium 2 processor.

Although IBM is sharing the source code underlying the rHype project, it currently isn’t accepting modifications from outsiders.

Virtual-Server Face-Off

Here is another comparison between VMware and Microsoft virtualization products. Quoting from Windows IT Pro:


The year 2004 might very well become known as the year of virtualization. Initially, IT pros recognized the value of virtualization technologies in test and demo environments. However, with the advent of ever more powerful systems coupled with continued improvements in virtual machine (VM) technologies, virtualization has since become a production-level technology that enables server consolidation.

VMware was first to market in the virtualization space with the release of VMware GSX Server in 2001. In October 2004, Microsoft entered the virtualization market with Virtual Server 2005, sparking much interest, especially among customers who have come to rely on VM technologies. A comparison of these two titans of virtualization leads to a clear recommendation as to the product that can best address a particular organization’s needs.

The VM Architecture
Both products install on top of the base OS and provide a software layer that emulates a physical system. You can install a guest OS on each emulated system, or VM, and you can run multiple VMs concurrently as if each were installed on a separate physical system.

Each VM owns its own virtual hardware, consisting of a processor, disk, memory, and network. VMs aren’t aware of other VMs as anything other than networked systems. The virtual server product handles the task of virtualizing the hardware and sharing it with all the VMs. The virtual server also provides virtual networking services that can connect VMs together as well as giving them access to external network resources.

Review Criteria
Although both products possess similar overall functionality, they also have several significant differences. In evaluating the products, the first criterion I considered was the host and guest OSs they support. The host OSs are the OS platforms on which you can install the VM software. The guest OSs are the OSs that the virtual servers can run. I also compared ease of use and overall manageability, looking at the process of creating new VMs as well as the ability to manage the virtual server and the VMs.

Finally, I compared the performance of the VMs running under each product. To check the overall performance of the guest OSs, I used SiSoftware Sandra 2005 Lite benchmarking software (http://www.SiSoftware.net). I compared the results of tests run on the VM that I created on each virtual server—a vanilla installation of Windows Server 2003, Enterprise Edition—to the results of baseline tests I ran on the physical machine. I performed all tests on an HP ProLiant ML350 with dual Intel Xeon 3.2GHz processors, 2GB of RAM, and a dual-channel Ultra320 SCSI controller connected to four 36GB, 15,000rpm hard drives running Windows 2003 as the host OS.

Microsoft Virtual Server 2005
Microsoft makes two versions of Virtual Server 2005: Standard Edition and Enterprise Edition. Standard Edition supports host servers with up to 4 processors; Enterprise Edition supports host servers that have as many as 32 processors. However, the product doesn’t provide SMP support for the VMs running on a Virtual Server 2005 server.

By using Physical Address Extension (PAE), Virtual Server can support up to 64GB of memory, and each VM can address up to 3.6GB of memory. Both versions support a maximum of 64 VMs per host. Microsoft supports Virtual Server for use only on 32-bit host platforms.

Unsurprisingly, Virtual Server supports only Microsoft host and guest OSs. Although I used the product to run other OSs, such as Linux, I don’t recommend that you do so in a production environment because of the lack of support.

I’m accustomed to using Microsoft Virtual PC, but Virtual Server is a very different animal and took a little getting used to. Instead of using a Windows-based management console, you manage Virtual Server through the Administration Website console that you see in Figure 1. You access the management program either by selecting the Administration Website option on the Virtual Server 2005 server or by pointing a browser to http://server:1024/VirtualServer/VSWebApp.exe.

Although it took a while for me to feel comfortable with it, the console made it very easy to manage Virtual Server from across the network. The Administration Website provides a thorough overview of the status of the configured VMs, including current performance data and even a mini screen view. I appreciated the no-footprint management offered by the Administration Website, but unfortunately the price you pay for it is having to run IIS 6.0 on the host. The installation process automatically configures IIS, adds the Administration Website, and sets permissions for the site, but you still have an extra element to manage.

Unlike GSX Server, Virtual Server has no wizard to step you through the process of creating VMs. Instead, you need to use the links provided by the Administration Website to manually create a virtual hard disk (VHD), a virtual network, and a VM that utilizes the VHD and virtual network. After I became familiar with the process, I found the interface fairly easy to navigate, but it lacked some of the niceties that I’ve come to expect from a Windows application, such as the ability to browse the file system when creating VHDs. I like the Administration Website’s ability to provide remote control for all the VMs. After you create a VM, you’ll probably want to install the Virtual Machine Additions, software drivers that increase screen resolution by adding an SVGA driver and enable better mouse tracking and control.

Virtual Server supports four types of VHDs: dynamically expanding, fixed-size, differencing, and linked. The host OS sees dynamically expanding and fixed-size VHDs as a large .vhd file that contains the file system for the guest VM. Dynamically expanding disks start small and automatically grow as the guest VM requires additional storage. Much like a physical hard drive, a dynamically expanding disk can grow only until it reaches its predefined limit. As you’d expect, the guest VM experiences a delay when the VHD must be expanded. Fixed-size VHDs are allocated when you create them and don’t grow.

Dynamically expanding, fixed-size, and differencing VHDs support using an optional undo disk. Undo disks let you reset all changes that have been made to a dynamic, fixed-size, or differencing disk. Undo disks store all configuration and data changes made to the VM during the session and prompt you to save or discard the changes when you shut down the VM. Differencing disks let you isolate changes that occur within a guest VHD; all changes that occur in the parent VHD are stored in the differencing disk. Unlike an undo disk, which is associated with the entire VM, a differencing disk is associated with a particular VHD. Compared with GSX Server’s differencing disks, Virtual Server’s built-in differencing disks are a snap to create.
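The differencing- and undo-disk idea is essentially copy-on-write: reads fall through to the parent image, writes land in a separate overlay, and throwing the overlay away reverts the disk. A toy model (not the actual VHD on-disk format) might look like:

```python
class DifferencingDisk:
    """Toy copy-on-write disk: reads fall through to the read-only
    parent image, writes go to the child overlay, and discarding the
    overlay reverts to the parent's state (like an undo-disk reset)."""
    def __init__(self, parent):
        self.parent = parent   # block number -> bytes, never modified
        self.delta = {}        # changed blocks only

    def read(self, block):
        return self.delta.get(block, self.parent.get(block, b""))

    def write(self, block, data):
        self.delta[block] = data   # the parent image is never touched

    def discard_changes(self):
        self.delta.clear()         # drop the overlay: instant revert

base = {0: b"boot", 1: b"data"}
disk = DifferencingDisk(base)
disk.write(1, b"patched")
assert disk.read(1) == b"patched"
disk.discard_changes()
assert disk.read(1) == b"data"
```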

Linked VHDs are different from the other types of VHDs. Linked disks convert an entire host file system’s partition to a VHD. Afterward, the host can no longer access that portion of the file system. You can’t use linked disks with undo disks or differencing disks.

You can configure virtual networking to use either the host system's NIC or a user-defined virtual network that only VMs can access. If you use the host NIC, any VM connected to the virtual network can access the network that the host is connected to. Otherwise, the VM can access only the internal virtual network. Virtual Server can also provide a virtual DHCP server, so you don't need to configure a guest VM on an internal network to act as a DHCP server.

One especially nice feature is Virtual Server’s ability to configure shared SCSI VHDs, which lets you set up Microsoft Cluster service over two VM nodes. Another welcome feature is the ability to transfer VMs created with Virtual PC 2004 to Virtual Server. One annoying limitation of Virtual Server is that, like Virtual PC, it lacks support for USB devices. Although you can use USB keyboards and mouse devices, you can’t plug in USB flash drives with Virtual Server and have them recognized in the VMs. Virtual Server also has a strong set of COM-based APIs that you can use in conjunction with VBScript to create your own custom management scripts.

Microsoft offers the Virtual Server Migration Toolkit (VSMT) as a free add-on to Virtual Server. Available for download at http://www.microsoft.com/windowsserversystem/virtualserver/evaluation/vsmt.mspx, the VSMT can convert physical machines to VMs and VMware VMs to Virtual Server–compatible VMs. VMware offers a similar product, called the VMware P2V Assistant, but you must purchase it separately.

VMware GSX Server 3.1
Now in its third release, VMware GSX Server offers two licensing levels: one for systems with one or two CPUs, and the other for systems with up to 32 CPUs. Like its competitor, GSX Server doesn’t provide SMP support for the guest OSs and lets you run a maximum of 64 VMs concurrently on one host, depending on the resources the VMs require. GSX Server supports up to 64GB of memory on PAE-enabled Windows systems, and each VM can address up to 3.6GB of memory.

When I wrote this review, GSX Server officially supported only 32-bit hosts. However, the product also provides “experimental support” for 64-bit hosts, which basically means that they work but aren’t recommended for use in a production setting. I expect VMware to announce official support for 64-bit host OSs after Microsoft releases Windows 2003 for 64-bit Extended Systems later this year.

GSX Server has a decided advantage over Virtual Server in the area of supported host and guest OSs. In addition to supporting all Windows OSs, GSX Server supports a variety of Linux systems as hosts, as you can see in Table 1. The product’s client OS support is equally extensive.

If you’ve used VMware Workstation or an earlier version of GSX Server, you’ll find managing GSX Server to be a breeze with the Windows-based Virtual Machine Console. Although it provides decidedly less information than Virtual Server’s Administration Website, it’s easier to use and noticeably more responsive.

Setting up new VMs under GSX Server is decidedly easier than using Virtual Server’s piecemeal VM creation process. GSX Server’s New Virtual Machine Wizard provides an easy-to-use interface that steps you through VM, VHD, and network creation. You’ll probably want to install VMware’s VMTools on all your VMs. VMTools provides a higher-performance video driver and enables cutting and pasting text between the VMs and the host.

VMware gives you several options for remotely managing GSX Server. The Windows-based Virtual Machine Console can connect to networked GSX Server systems. A Web-based management interface enables basic VM management functions, such as displaying and controlling VMs. You can also use a set of scripting APIs for Perl and COM, called the vmPerl and vmCOM APIs, respectively.

GSX Server supports two basic types of virtual disks: raw and virtual. Raw disks directly access a local disk partition. Virtual disks appear to the GSX Server host OS as a file. That file, which has an extension of .vmdk, stores the VM’s entire file system. You can dynamically expand virtual disk files, or you can preallocate files when you create them.

GSX Server’s undo disks let you save or discard all the changes in a VM at the end of a session, and virtual disks have a snapshot feature that lets you capture the current state of the virtual disk. GSX Server also supports differencing, but the associated process is manual and isn’t nearly as easy to use as Virtual Server’s differencing disk capability.

You have a choice of three types of virtual networking for GSX Server VMs: host-only, Network Address Translation (NAT), and bridged. Host-only networking restricts you to internal VMs that have no outside connections. The NAT option lets VMs connect to the outside network using the host IP address. GSX Server provides its own built-in DHCP server for host-only and NAT configurations. Bridged networking lets VMs access the outside network. Alternatively, you can choose None to disable the network hardware.
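The NAT option can be sketched as a port-mapping table kept by the host: outbound VM packets are rewritten to use the host's address and a fresh port, and the table routes replies back to the right VM. All addresses and ports below are made up for illustration:

```python
class Nat:
    """Toy NAT for VM networking: outbound packets get the host's
    address and a fresh port; the mapping table delivers replies
    back to the originating VM."""
    def __init__(self, host_ip):
        self.host_ip = host_ip
        self.next_port = 40000
        self.table = {}  # host port -> (vm_ip, vm_port)

    def outbound(self, vm_ip, vm_port):
        port = self.next_port
        self.next_port += 1
        self.table[port] = (vm_ip, vm_port)
        return (self.host_ip, port)   # what the outside world sees

    def inbound(self, host_port):
        return self.table[host_port]  # which VM gets the reply

nat = Nat("192.0.2.10")
seen_by_world = nat.outbound("10.0.0.5", 5000)
```

This is why NATed VMs can reach the outside network while the outside network sees only the host's IP address.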

GSX Server lets you set up Microsoft Cluster service using shared SCSI VHDs. You can also transfer to GSX Server any VMs that you’ve created with VMware Workstation. One key advantage GSX Server has over Virtual Server 2005 is full support for USB devices—I could freely transfer data between GSX Server VMs and USB flash drives.

Performance
To test performance, I used the Sandra benchmarking software’s combined performance index tests running on a fresh installation of Windows 2003, Enterprise Edition. I tested a variety of system performance factors, including basic display performance, memory access speed, and file-access and networking performance.

For Virtual Server 2005, I performed all tests on the local server that was running Virtual Server, using the Virtual Machine Remote Control Client running in full-screen mode. I configured the VM to use 384MB of RAM and used a fixed SCSI VHD so the test wouldn’t be affected by dynamic expansion. The VHD was also on a different disk spindle than the drive on which the host OS was installed. To determine whether the Virtual Machine Additions made a significant performance difference, I first ran a set of tests without the Virtual Machine Additions installed, then ran another set after installing them.

In all the performance tests, the VMs running under Virtual Server were slower than those running under GSX Server. The CPU arithmetic test shows Virtual Server lagging behind GSX Server by about 20 percent. The multimedia test showed similar results. The other tests were closer, but GSX Server held onto a 17.5 percent advantage in file system performance and a 5 percent edge in network performance. The presence of the Virtual Machine Additions gave a bigger boost to Virtual Server’s file and network access performance than it did to the product’s arithmetic and multimedia performance.
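For clarity, the "percent advantage" figures quoted above follow from simple relative arithmetic on the raw benchmark indices. A minimal sketch in Python, using made-up placeholder scores rather than the article's actual Sandra results:

```python
# Illustrative only: the scores below are invented placeholders,
# not the measured Sandra indices from the review.

def percent_advantage(winner_score, loser_score):
    """Percentage by which the winner outperforms the loser
    (higher score = better)."""
    return (winner_score - loser_score) / loser_score * 100

gsx_fs, vs_fs = 4700, 4000          # hypothetical file-system indices
print(f"GSX Server advantage: {percent_advantage(gsx_fs, vs_fs):.1f}%")
```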

I configured GSX Server’s VM to use 384MB of RAM and a preallocated virtual SCSI hard disk that was located on a separate physical hard disk from the host system’s OS. I ran two sets of tests: the first without VMTools and the second with VMTools. VMs running under GSX Server provide notably better performance than those running under Virtual Server. Considering that GSX Server is in its third release and Virtual Server is in its first release, it wasn’t surprising that GSX Server was faster.

A Clear Choice
Both products are of excellent quality, and neither gave me any significant problems. If you need to run Linux or other guest OSs in a production environment, VMware GSX Server is the clear choice. VMware officially supports most popular Linux distributions. You can find more information about or download a 30-day evaluation version of VMware GSX Server 3.1 at http://www.vmware.com/products/server/gsx_features.html.

For those who have a Microsoft-only environment, however, Virtual Server 2005 is the better value. Significantly less costly than GSX Server, Virtual Server offers all the same capabilities for Windows guest OSs, albeit slightly slower performance. For more information about Virtual Server 2005 or to download a 180-day evaluation version, go to http://www.microsoft.com/virtualserver.

A resource for VMware exam preparation

A new site about virtualization was brought to my attention today: http://vmwareprofessional.com.
This site, maintained by Dominic Rivera, will help you prepare for both the old (ESX-only) and the new (also covering VirtualCenter) VMware VCP certification exams.

I strongly suggest that readers who are approaching certification take a look at this site.

Thanks to Peter Erleshofer for this news.

Minime project hits beta

Massimiliano Daneri has finally found some time to further develop his ambitious project of replacing VMware VirtualCenter with a simpler yet more powerful virtual infrastructure management tool (it can manage Microsoft Virtual Server 2005 as well): Minime.
He has announced that the first Minime public beta is coming. I’m personally testing a private beta release and have to say it’s really good work.

Congratulations to Massimiliano for his hard work!

Does VMware really have competitors?

If you read my blog you already know this: this month the press is crowded with articles about Xen and Virtual Iron. Many writers are pointing out that after years of dominance, VMware now has a lot of important competitors offering the same or better virtualization technologies.

My personal opinion is that none of this can be said today.

While Xen and Virtual Iron are interesting virtualization projects, today both offer only Linux virtualization, while VMware is extending its guest operating system list with the much-requested Sun Solaris, on top of the Windows, Linux, BSD and NetWare OS families it already supports.
Until Xen or Virtual Iron offer support for at least Windows guest OSes, they won’t be comparable with VMware at all. Today VMware’s only real competitors are SVISTA and Microsoft Virtual PC/Server (considering that we know they can run *nix guest OSes, even if unsupported).

If Xen and Virtual Iron only ever offer Linux guest OSes, they will compete much more with virtualization solutions like Virtuozzo (even if the underlying technology is different). And in that case their toughest competitor is the just-launched Containers technology in Sun Solaris 10.
The question is: who will adopt Xen, which requires modifying guest OS kernels, or Virtual Iron, a commercial product, when Sun offers native software partitioning at zero cost (even for commercial projects), embedded in its operating system?

What is Xen, and why is it cool?

Quoting from Nathan Torkington at O’Reilly Developers Weblogs:


I got into Xen a few weeks ago, and I’m loving that they’re getting lots of attention. But most folks don’t know what they are and why their product is so cool. Allow me to shed light on the matter …
Xen is like the Mach microkernel, where you can have multiple operating systems running at once and a thin kernel handles switching between them and managing device access. This thin layer in Xen is called the hypervisor, and is analogous to the Mach microkernel. It provides an idealized hardware layer that you port your OS to, and in return you get the ability to run multiple operating system instances at once (e.g., run two copies of Redhat’s latest, one copy of the Novell Desktop, and an OpenBSD), freeze and restore snapshots of a running OS, and more.

What you can’t do with Xen is run Windows on it–that’s always going to be VMware’s niche (at least until Intel’s VM technology becomes ubiquitous). But Xen makes a whole lot of situations possible that are slow or impossible at the moment. Two applications that are working well for Xen: testing and server load balancing. If you’re working on your app and want to test it on a staging server, it’s no fun to reboot, or negotiate time on a shared staging server, and it’s way less fun to rebuild if your app hoses the staging server. The Xen way, you run your development OS and your staging OS on your machine at the same time and switch from one to the other when you need to. If the staging server gets borked, you delete that running OS and reload from a saved stable snapshot.

In the server room, it’s often easier and more secure to manage a single service on a running machine. The more ways into a box, the less defensible it is and the more risk for damage and service downtime if the box is compromised. So run Xen and use one OS instance per service. If a service is compromised, only that service is compromised. If you experience high load, say due to Slashdotting, you can easily reconfigure machines to run different services. (You can rdist the snapshot of an OS running that service and then bring it up on however many machines you need).

The potential for Xen is great. We’re going to feature them at OSCON because their technology is just so cool. Lots of companies like RedHat and HP are very interested in what Xen makes possible, because the hypervisor enables things that seemed like wishful fantasy a few years ago. I loved my time meeting with one of the company’s founders and playing with Xen–they’re very smart engineers with their heads screwed on right. There’s obviously a lot of work to be done in making Xen friendlier to install, getting more tools around the administration of Xen, etc., so the interest and involvement of companies with big budgets is a good thing. They’ll help move Xen from the research lab where it was born to data centers and developer desktops where it can be ubiquitous and useful.

So look for lots of action from Xen. I expect the next versions of Novell, HP, etc.’s offerings will feature Xen support (either standard or as an alternate kernel shipped with the distro). I hope there’ll be a great distro like Ubuntu or Gentoo offering a Xen install as well as a solo install. This will give everyone a painless way to do some very cool things and open the door for even cooler things down the line.

Virtual Iron, VMware virtually duke it out

Quoting from Internet News:


VMware has been the undisputed leader in the x86 virtualization space, teaming up with large systems vendors to find placement among the loads of Intel servers that companies like Dell (Quote, Chart), IBM (Quote, Chart) and HP (Quote, Chart) have been selling.

But one new start-up looking to advance the notion of using software to consolidate infrastructure in data centers has arrived this week, promising to do virtualization on a much broader level, starting with Intel machines running Linux.

Virtual Iron Software has created software that can assume the processing and performance of several connected hardware servers, which is a distinction from VMware right out of the gate.

Virtualization software allows customers to consolidate resources and move them around where necessary, a key component of utility or on-demand computing systems.

Virtual Distinctions and Contradictions

While virtualization software from VMware is designed to carve up a single, x86 (define) box into small partitions, Virtual Iron’s VFe 1.0 software allows users to take multiple computers in a data center and use them as “Lego-like” building blocks, or virtual computers.

Virtual Iron CTO and co-founder Scott Davis said this gives his company a broader applicability. But Raghu Raghuram, senior director of strategy and market development at VMware, is quick to disagree. He questioned Virtual Iron’s ability to gain traction based on demand for such technology on Intel-based systems.

“The proposition that you can combine two or more boxes with Virtual Iron’s software to replace a larger box starts to become cost effective when you are talking about applications running on large SMP servers, such as 8-ways or 16-ways or 32-ways,” Raghuram said. “Of the five-plus million Intel servers sold every year, only a few thousands fall into this category. We therefore think that this is a small market opportunity.”

VMware, Raghuram said, is focused on the millions of servers running applications that run on one-, two- and four-way servers. As multi-core processors from Intel and AMD become the norm, tomorrow’s two-way or four-way will itself have eight or 16 CPUs, which should boost VMware’s opportunity and diminish Virtual Iron’s chances in the space.

But Davis argued that VMware’s market for targeting x86 machines has gotten as good as it’s going to get despite the fact that VMware announced that its fourth-quarter revenues for 2004 were $71 million, a 159 percent increase year-over-year.

Davis believes the market for virtualization on multi-processor servers will grow, citing IDC estimates that almost 65 percent of all business processing applications or workloads continue to run on SMP-based servers.

Moreover, he said “we expect x86 server partitioning to rapidly be commoditized by advances in Intel and AMD processors.”

While this may be speculative, there are signs that Intel and AMD are doing more with virtualization for their chips. AMD this week announced it will port XenSource’s Xen open-source virtualization software to its AMD64 technology. Like VMware products, the Xen hypervisor is an x86 virtual machine that lets a single machine run multiple operating systems.

In fact, Pund-IT analyst Charles King said Xen is likely to be a greater threat to VMware than Virtual Iron, which he said sports an approach so different from VMware’s that Virtual Iron will more likely compete with grid computing vendors like Platform Computing or United Devices.

Despite the opposing opinions about the market’s direction, there is a lot of money to be made in virtualization software. IDC said the market garnered $19.3 billion from 2003 to 2004, with about $1.5 billion coming in the virtual processing space where Virtual Iron plays.

Virtual Iron Tech Garners Praise

At least one analyst was duly impressed with Virtual Iron’s novel approach to complexity issues in virtualization technology that go back 30 years.

IDC analyst Dan Kusnetzky said people are usually trying to find a way to bring the application to the resources they have. Virtual Iron has dragged the resources of a system — storage, processors and memory — to the application.

“They saw the same problem everyone else did, which is ‘we want to somehow aggregate the resources from a lot of inexpensive machines and in the end produce the work that used to require a supercomputer,’” Kusnetzky said. “It’s a simple conceptual difference. But it is truly rocket science underneath — I am absolutely certain of it.”

But Kusnetzky acknowledged that while the technology might be impressive, the Acton, Mass.-based company of 35 employees has a ways to go to prove itself in a market dominated by VMware.

“Their little voice is likely to be drowned out by the big voices of other vendors,” he said. “I think that means this company is going to have to work very rapidly to develop alliances and partnerships with these companies so when they are presenting their message, Virtual Iron is part of their message.”

Ironically, Kusnetzky said this is how VMware started. It aligned itself with major server vendors and deepened or broadened those partnerships. VMware recently extended its pact with IBM to offer VMware’s VirtualCenter, VMotion, ESX Server and Virtual SMP software on IBM eServer xSeries and BladeCenter systems.

“On the edges, VMware and Virtual Iron will compete with each other,” he said. “But VMware can’t do what Virtual Iron does, which is spread resources among a number of blades or boxes connected by InfiniBand. This is not to say VMware could not implement this, but it would probably take two to three years to implement.”

VMware nears Workstation 5.0 release a bit more

VMware has just sent its beta testers the Release Candidate 2 availability announcement. Since this is a public beta program, everybody can download it from the official VMware site.
At this time the release notes page hasn’t been updated, so I cannot list what’s new. In any case, don’t expect new features, since the RC development phase is usually code-frozen and dedicated to bug fixes only.

More to come…

Xen lures big-name endorsements

Quoting from ZDNet:


Xen lets multiple operating systems run on the same computer, a feature that’s useful for extracting as much work as possible from a single system. The technology is common among high-end servers today, but on mainstream systems it requires proprietary “virtual machine” software from EMC subsidiary VMware.

At the LinuxWorld Conference and Expo here, numerous companies voiced Xen support in the form of endorsements, programming help and software contributions. Sun Microsystems, Hewlett-Packard, Novell, Red Hat, Intel, Advanced Micro Devices and Voltaire all are involved, but one of the more interesting allies is IBM, which has decades of experience in the area.

“Two or three months ago, it wasn’t on anybody’s radar. Now it’s going to make a big change in how everyone uses Linux,” said Chris Schlaeger, vice president of research and development for Novell’s SuSE Linux.

The change illustrates what can go right in the world of open-source software: a project can trigger a cascade of cooperation by multiple interested parties. When it works well, as in the case of Linux, that cooperation can lead to a unified, fast-developing project rather than proprietary, mutually incompatible competitors.

“The open-source community has finally decided to smooth over its differences and get behind one virtualization project, which means it’s actually going to happen rather than having 12 warring fiefdoms, each with about two soldiers,” said Illuminata analyst Gordon Haff.

Xen began three years ago at the University of Cambridge in England, said Ian Pratt, project leader and a founder of XenSource, a start-up that develops and supports the software and is trying to make it a standard computer feature. “Being ubiquitous on Linux is the first step to that,” he said.

Xen and other approaches to dividing a computer into separate partitions rely on a concept called virtualization, which lets programs run on a software simulation of actual hardware. In the case of VMware, this foundation is called a virtual machine.

One difference between VMware and Xen: The former completely simulates a machine, which theoretically allows any operating system to run unmodified on a virtual machine. Xen, on the other hand, uses “paravirtualization,” which doesn’t go as far. That means faster performance but also requires an operating system to be modified to run, Pratt said.
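The distinction Pratt describes can be caricatured in a few lines of code. This is a conceptual sketch only, not real hypervisor code: the class name, method names, and the single "privileged resource" are invented for illustration.

```python
# Conceptual sketch: full virtualization traps and emulates privileged
# instructions from an unmodified guest; paravirtualization has the
# (modified) guest kernel call the hypervisor directly.

class ToyHypervisor:
    def __init__(self):
        self.page_table_base = 0    # one stand-in privileged resource

    # Full virtualization (VMware-style): hardware traps the guest's
    # privileged instruction, and the hypervisor must decode and
    # emulate it. Every trap costs a world switch plus a decode step.
    def trap_and_emulate(self, instruction, operand):
        if instruction == "mov_cr3":        # x86: load page-table base
            self.page_table_base = operand
        else:
            raise ValueError(f"unhandled instruction: {instruction}")

    # Paravirtualization (Xen-style): the guest kernel is modified to
    # invoke an explicit hypercall, skipping trap delivery and decoding.
    def hypercall_set_page_table(self, base):
        self.page_table_base = base

hv = ToyHypervisor()
hv.trap_and_emulate("mov_cr3", 0x1000)   # unmodified-guest path
hv.hypercall_set_page_table(0x2000)      # modified-guest path
```

Either path reaches the same state; paravirtualization simply trades the ability to run unmodified guests for fewer, cheaper hypervisor entries, which is exactly the performance edge and the porting requirement described above.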

Higher-level software, however, doesn’t need to be modified, he said.

The requirement for a modified operating system will loosen with Intel’s coming Vanderpool Technology, or VT, due in 2005, Pratt said. It will mean unmodified operating systems will run on Xen, though not as fast as modified ones. That means Windows will run on Xen even though open-source programmers don’t have access to change Windows itself.

Falling by the wayside
Xen competitors that haven’t caught on include Plex86 and User-mode Linux. While the latter made it into the most recent version of SuSE Linux from Novell, it likely won’t last.

“User-mode Linux is most likely dead,” Schlaeger said. The management tools Novell developed to administer that software will be re-used to control Xen instead, added Markus Rex, general manager of SuSE Linux.

Xen will likely be incorporated into Novell’s upcoming SuSE Linux Professional 9.3 and later into the next version of its premium product SuSE Linux Enterprise Server, Rex said.

Linux seller Red Hat also has Xen plans. The virtualization package is being added to Red Hat’s experimental Fedora Core 4 product and will probably be in version 5 of Red Hat Enterprise Linux, said Paul Cormier, executive vice president of engineering. Like Novell, Red Hat plans to add management tools to control aspects such as the creation or removal of Xen virtual machines.

Hewlett-Packard strongly endorsed Xen this week, saying it will contribute software to the effort. “Our expectation is that Xen will provide a viable, open-source alternative in virtualization platforms,” said Martin Fink, vice president of Linux, in a keynote address at the Linux show. HP, too, hopes to profit from software to manage virtual machines.

Intel began contributing software to the Xen project in January so it could use VT extensions, said Phil Brace, director of marketing for Intel’s digital enterprise group.

Expanding into new areas
Currently Xen works with Linux on computers using x86 processors such as Intel’s Pentium, but efforts are under way to extend it into other domains. This week, AMD announced it’s helping to bring Xen to 64-bit x86 chips such as its Opteron, future generations of which, employing “Pacifica” technology, will have new virtualization support.

There’s experimental support for Intel’s Itanium family now in Xen, Pratt said. And IBM has expressed interest in moving it to the Power chip, Schlaeger said.

Among operating systems, the NetBSD variant of Unix works on Xen–and the version was done so quickly, Pratt said, that XenSource hired the NetBSD programmer who did the work, Christian Limpach.

And Sun’s Solaris–which the company has begun aggressively pushing for x86 servers–is another likely candidate, said John Fowler, executive vice president of Sun’s network systems group. “We think the open-source virtual hypervisor is the way to go,” he said. (Hypervisor is a term IBM is trying to trademark referring to a layer of software that lets hardware be divided up so multiple operating systems can run on it.)

The IBM connection
Sources familiar with IBM’s plans expect Big Blue to play a significant part in Xen. The company has decades of experience in the field with mainframes, Unix servers and Intel-based servers.

Although IBM has a sales and development partnership with VMware, it also has an in-house hypervisor project for x86 chips–a project that came to light in a January posting to the Xen mailing list. Researchers in IBM’s labs were using it as a foundation for a project called sHype, or Secure Hypervisor, to make virtual machines less susceptible to attack. The software uses rules that govern administrative privileges and the flow of information between virtual machines.

“We now plan to contribute this to Xen by integrating our security architecture into it,” IBM researcher Reiner Sailer said in the posting. Pratt responded favorably in his posting: “It’ll be great to have IBM contributing to Xen security.”

That’s not all. One source familiar with IBM’s plans said the company expects to contribute software for two key computing technologies–input-output services for communicating with devices such as network cards and virtual memory for extending physical memory using hard drives.

Despite the Xen support, IBM reaffirmed its VMware ties Thursday. “IBM has a strong and vital business relationship with VMware. That relationship is stronger than it’s ever been,” said spokesman Jim Larkin.

VMware, for its part, labels Xen a “nascent” virtualization project that’s hampered by its requirement that the operating system be modified. “Xen will not be very useful for the overwhelming majority of customers that have deployed standard Linux operating systems today,” VMware said in a statement.

But VMware–combined with Intel’s VT technology and Microsoft’s competing Virtual Server–faces a definite threat, Haff said.

VMware has higher-level VirtualCenter and VMotion management software, Haff said, but the core virtualization product is crucial to the subsidiary. “It’s where most of their money comes from today,” Haff said.