Microsoft offers Virtual PC 2004 with Service Pack 1 for free

While announcing unlimited virtual machines for Windows Server 2003 Datacenter Edition and up to 4 virtual machines for Windows Vista Enterprise Edition, Microsoft also released Virtual PC 2004 with Service Pack 1 for free.

The product now includes a recent hotfix specifically designed for laptop systems, correcting several issues.

Download it here.

The company also announced that the next version, Virtual PC 2007, will be released during 2007, will support Vista, and will likewise be completely free.

The virtualization.info Virtualization Industry Roadmap has been updated accordingly.

Microsoft offers unlimited virtualization with Windows Server 2003 Datacenter Edition and 4 virtual machines with Windows Vista Enterprise Edition

From his blog, Compl Torris, TechNet Ireland Manager, unofficially announces that Microsoft will offer customers purchasing Windows Server 2003 Datacenter Edition with a Volume License the capability to run unlimited virtual machines hosting any flavor of Windows Server, Enterprise or Datacenter editions, with effect from 1st October 2006.

In the official announcement Microsoft also stated that Windows Vista Enterprise Edition customers will be able to run up to 4 virtual machines for free (Ben Armstrong specifies that this policy applies to any virtualization product, not just Virtual PC).

Surgient gets new financing

Quoting from Statesman:

Austin-based Surgient Inc. has raised $20 million to accelerate its sales of software services in the United States and Europe. It also will need to buy more servers.

Surgient’s software-based services enable companies to use the Web to demonstrate products or provide customer training.

It has 65 customers, many of them software makers, and Surgient plans to expand its sales to other major companies in the United States and Europe.

Surgient expects to have about $20 million in revenue this year and to become self-sustaining by the second half of 2007…

Read the whole article at source.

Release: PHD esXpress 2.3 for VMware ESX Server

Quoting from the PHD official announcement:

PHD Technologies, Inc. (PHD), today officially announced the point release of esXpress, the Intelligent Backup Solution for VMware ESX Server(R). With its newest features, replication and mass restorations, PHD proves yet again why esXpress is the leader in virtual enterprise backups.

“The esXpress Mass/Auto Restore features allow you to recover from one to hundreds of virtual machines with very little user input. Our clients have reported their time to recovery has been cut to less than half using this feature.”

Additionally, there are many new and improved features in esXpress 2.3, including an improved real-time status screen, new scheduling options and reports engine, and a backend database that can be used for trend analysis.
The mass restore can be a scheduled process synchronized with esXpress, providing an easy and efficient mechanism to snapshot and replicate virtual machines with auto restore at the push of a button…

Download it here.

The virtualization.info Virtualization Industry Roadmap has been updated accordingly.

Event: Linux Symposium 2006

This year’s Linux Symposium, taking place in Ottawa from June 19th to 22nd, has very interesting sessions about virtualization, with speakers from Xen (which will introduce version 3.1), VMware, IBM, Intel and others; a few illustrative code sketches follow the session list:

  • Transparent Paravirtualization for Linux
    Paravirtualization has a lot of promise, in particular in its ability to deliver performance by allowing the hypervisor to be aware of the idioms in the operating system. Since Linux kernel changes are necessary, it is very easy to get into a situation where the paravirtualized kernel is incapable of executing on a native machine, or on another hypervisor. It is also quite easy to expose too many hypervisor metaphors in the name of performance, which can impede the general development of the kernel with many hypervisor-specific subtleties.
    VMI, or the Virtual Machine Interface, is a clearly defined, extensible specification for OS communication with the hypervisor. VMI delivers great performance, but doesn’t require that Linux kernel developers be aware of metaphors that are only relevant to the hypervisor. There is a clear distinction between the resource name space available to the virtual machine and to the hypervisor. As a result, it can keep pace with the fast release cycle of the Linux kernel: a new kernel version can be trivially paravirtualized. With VMI, a single Linux kernel binary can run on a native machine and on one or more hypervisors. VMI, in a natural way, promotes hypervisor diversity.
    We provide a working patch to the latest Linux revision, and performance data (a) on native hardware to show the negligible cost of VMI and (b) on the VMware hypervisor to show its benefits. We will also share some future work directions.
  • Utilizing IOMMUs for Virtualization in Linux and Xen
    IOMMUs are hardware devices that translate device DMA addresses to proper machine physical addresses. IOMMUs have long been used for RAS (prohibiting devices from DMA’ing into the wrong memory) and for performance optimization (avoiding bounce buffers and simplifying scatter/gather). With the increasing emphasis on virtualization, IOMMUs from IBM, Intel and AMD are being used and re-designed in new ways, e.g., to enforce isolation between multiple operating systems with direct device access. These new IOMMUs and their usage scenarios have a profound impact on some of the OS and hypervisor abstractions and implementation.
    We describe the issues and design alternatives of kernel and hypervisor support for new IOMMU designs. We present the design and implementation of the changes made to Linux (some of which have already been merged into the mainline kernel) and Xen, as well as our proposed roadmap. We discuss how the interfaces and implementation can adapt to upcoming IOMMU designs and to tune performance for different workload/reliability/security scenarios. We conclude with a description of some of the key research and development challenges new IOMMUs present.
  • X86-64 XenLinux: Architecture, Implementation, and Optimizations
    Xen 3.0 has been officially released with x86-64 support added. In this paper, we discuss the architecture, design decisions, and various challenging issues we needed to solve when we para-virtualized x86-64 Linux.
    Although we reused the para-virtualization techniques and code employed by x86 XenLinux as much as possible, there are notable differences between x86 XenLinux and x86-64 XenLinux. Because of the limited segmentation support in x86-64, for example, we needed to run both the guest kernel and applications in ring 3, raising the problem of protecting one from the other. This also complicated system call handling and event handling, including exceptions such as page faults and interrupts. For example, the native device drivers run in ring 3 in x86-64 XenLinux today.
    Xen itself had to be extended to support x86-64 XenLinux. To handle transitions between kernel and user mode securely, for example, Xen tracks the mode of the guest and controls the page tables used for each mode. We also discuss other extensions to x86 XenLinux made in support of x86-64, including page table management, 4-level writable page tables, shadow page tables for live migration, new hypercalls, DMA, and IA-32 binary support.
    The current x86-64 XenLinux has compelling performance for practical use. We compare performance between native x86-64 Linux and XenLinux, and analyze the causes of visible performance regressions. We also discuss performance optimizations, especially how to overcome the overheads caused by the transitions between user and kernel mode. Optimization experiments are also presented.
    Finally, we discuss how the patches for x86-64 XenLinux can be merged upstream, and we present efforts in that direction. We summarize the code shared with x86 XenLinux and the changes to native x86-64 Linux.
  • Linux as a Hypervisor – An Update
    Through its history, the Linux kernel has had increasing demands placed on it as it supported new applications and new workloads. A relatively new demand is to act as a hypervisor, as virtualization has become increasingly popular. In the past, there were many weaknesses in the ability of Linux to be a hypervisor. Today, there are noticeably fewer, but they still exist.
    Not all virtualization technologies stress the capabilities of the kernel in new ways. User-mode Linux (UML) is the only prominent example of a virtualization technology which uses the capabilities of a stock Linux kernel. As such, UML has been the main impetus for improving the ability of Linux to be a hypervisor. A number of new capabilities have resulted in part from this, some of which have been merged and some of which haven’t. Many of these capabilities have utility beyond virtualization, as they have also been pushed by people who are interested in applications that are unrelated to virtualization.
    An early problem was the inability of ptrace on Linux/i386 to nullify intercepted system calls. This was fixed very early, as it is essential in order to virtualize system calls. Another ptrace weakness was its requirement that both system call entry and exit must be intercepted. A ptrace extension, PTRACE_SYSEMU, addresses this: it causes only system call entrances to be intercepted, producing a noticeable performance improvement in UML, even on workloads that aren’t system call-dependent. UML wasn’t one of the main drivers behind AIO and O_DIRECT, but it benefits from them. These allow UML to behave more like the host kernel by allowing multiple outstanding I/O requests, and to be more fully in charge of its own memory by bypassing the host’s caching.
    Another I/O improvement that improves the virtualization capabilities of the kernel is the ability to poke a hole in a file. Proposals for a sys_punch system call had circulated for years. madvise()’s MADV_REMOVE, which was the first to be merged, removes a range of pages from a tmpfs file. This allows Linux to support memory hotplug in its guests. FUSE (Filesystems in Userspace) is another recent addition of interest. It doesn’t contribute to the ability to host virtual machines, but it does contribute to the ability to manage them. UML uses FUSE to export its filesystem to the host, allowing some guest system management to be done from the host.
    There are a number of other capabilities which are not merged. The large number of virtual memory areas (VMAs) that UML creates on the host is a noticeable performance problem. Ingo Molnar implemented a new system call, remap_file_pages, to fix this problem. This allows pages within a mapping to be rearranged, reducing the number of VMAs for a UML process from nearly one per mapped page to one. PTRACE_SYSEMU notwithstanding, system call interception is still a performance problem. Ingo has another patch, VCPU (Virtual CPU), which improves this. In effect, it allows a process to trace itself, eliminating the context switching that ptrace currently requires.
  • Evolution in the Kernel Debugging Utilizing Hardware Virtualization With Xen
    Xen’s ability to run unmodified guests with the virtualization available in hardware opens new doors of possibilities in kernel debugging. It is now possible to debug the Linux kernel much as you would debug a user process in Linux. Since virtualization hardware enables Xen to implement full virtualization, there is no need to change the kernel in any way to debug it. For example, you can boot from a disk installed with any standard Linux distribution inside a fully virtualized Xen guest, start a gdbserver-xen and remotely connect gdb to it. When you get into gdb, the virtual machine is paused at the reset vector. Now you can use gdb to debug the kernel very much as you would use gdb to debug a user process. If you like, you can start debugging from the BIOS code, then move on to boot-loader debugging and kernel debugging.
    You can poke into the state of the virtual machine by using the standard gdb commands to access memory locations or registers. If you supply symbols to gdb, you can also use function and variable names in gdb. This new debugging technique has a few advantages over kdb: for example, there is no need to modify the kernel you are trying to debug, and if the kernel misbehaves the debugger is not lost with it, because the debugger lives outside of the kernel’s address space. This paper demonstrates the new evolutionary debug techniques using examples. It also explains how the new technique actually works.
  • Xen 3.1 and the Art of Virtualization
    Xen 3 was released in December 2005, bringing new features such as support for SMP guest operating systems, PAE and x86_64, initial support for IA64, and support for CPU hardware virtualization extensions (VT/SVM). In this paper we provide a status update on Xen, reflecting on the evolution of Xen so far, and look towards the future. We will show how Xen’s VT/SVM support has been unified and describe plans to optimize our support for unmodified operating systems. We discuss how a single `xenified’ kernel can run on bare metal as well as over Xen. We report on improvements made to the Itanium support and on the status of the ongoing PowerPC port. Finally we conclude with a discussion of the Xen roadmap.
  • Virtual Scalability: Charting the Performance of Linux in a Virtual World
    Many past topics at Ottawa Linux Symposium have covered Linux Scalability. While still quite valid, most of these topics have left out a hot feature in computing: Virtualization. Virtualization adds a layer of resource isolation and control that allows many virtual systems to co-exist on the same physical machine. However, this layer also adds overhead which can be very light or very heavy. We will use the Xen hypervisor, Linux 2.6 kernels, and many freely available workloads to accurately quantify the scaling and overhead of the hypervisor. Areas covered will include: (1) SMP Scaling: use several workloads on a large SMP system to quantify performance with a hypervisor. (2) Performance Tools: discuss how resource monitoring, statistical profiling, and tracing tools work differently in a virtualized environment. (3) NUMA: discuss how Xen can best make use of large systems which have Non-Uniform Memory Access.
  • HTTP-FUSE Xenoppix
    We developed HTTP-FUSE Xenoppix, which boots Linux, Plan9 and NetBSD on the Xen Virtual Machine Monitor from a small bootable (6.5MB) CD-ROM. The bootable CD-ROM includes the boot loader, kernel and miniroot only. Most of the files are obtained via the Internet through the network loopback block device HTTP-FUSE CLOOP. HTTP-FUSE CLOOP is made from cloop (Compressed Loopback block device) and FUSE (Filesystem in Userspace). It reconstructs a block device from many small, split and compressed block files served by HTTP servers.
    The name of each block file is a unique hash value (MD5) of its contents. Block regions which have the same contents are held together in a single file named by that hash value, reducing total storage space. The block files are cached in local storage and are reusable: if the necessary block files already exist in local storage, the driver doesn’t require a network connection. The file name is also used to verify its contents, which helps security. When contents are updated, a new file is created with a new unique hash value name; old block files don’t need to be deleted and can be used to roll back the file system.
    The performance of HTTP-FUSE CLOOP is sensitive to network latency, so we added two boot options for HTTP-FUSE Xenoppix: Download-ahead and Netselect. Download-ahead downloads and caches the necessary block files before the HTTP-FUSE CLOOP driver requires them at boot time. Netselect finds the nearest download site among the candidates. In this paper we report the performance of booting each OS with HTTP-FUSE Xenoppix.
    The next version will include CPU virtualization technology and will enable more OSes to boot without kernel modification. We also plan to include trusted boot offered by a TPM, because HTTP-FUSE Xenoppix aspires to become a trial environment of OSes for anonymous users.
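
To make the "Transparent Paravirtualization" session above a bit more concrete, here is a minimal C sketch of the general technique (this is not VMware's actual VMI code, and every identifier in it is hypothetical): privileged operations are routed through a table of function pointers, so the same kernel binary can select a native backend or a hypervisor backend at boot.

```c
/* Illustrative sketch of a VMI/paravirt-ops style indirection layer.
 * All identifiers are hypothetical; this is not VMware's VMI code. */
#include <stdio.h>

/* Table of "privileged" operations the kernel routes through one layer. */
struct vmi_ops {
    const char *backend;
    void (*write_cr3)(unsigned long pgd);   /* switch page tables     */
    void (*cpu_halt)(void);                 /* idle the (virtual) CPU */
    void (*flush_tlb)(void);                /* flush the (virtual) TLB */
};

/* Native backend: on real hardware these would be privileged instructions. */
static void native_write_cr3(unsigned long pgd) { printf("mov %%cr3, %#lx\n", pgd); }
static void native_halt(void)                   { printf("hlt\n"); }
static void native_flush_tlb(void)              { printf("reload cr3\n"); }

/* Hypervisor backend: the same operations become hypercalls. */
static void hv_write_cr3(unsigned long pgd)     { printf("hypercall: set_pgd(%#lx)\n", pgd); }
static void hv_halt(void)                       { printf("hypercall: yield_cpu()\n"); }
static void hv_flush_tlb(void)                  { printf("hypercall: flush_tlb()\n"); }

static const struct vmi_ops native_ops = { "native",     native_write_cr3, native_halt, native_flush_tlb };
static const struct vmi_ops hv_ops     = { "hypervisor", hv_write_cr3,     hv_halt,     hv_flush_tlb };

/* At boot the kernel probes for a hypervisor and picks one backend;
 * the rest of the kernel only ever calls through 'ops'. */
static const struct vmi_ops *ops;

int main(void)
{
    int hypervisor_present = 1;             /* pretend a boot-time probe found one */
    ops = hypervisor_present ? &hv_ops : &native_ops;

    printf("backend: %s\n", ops->backend);
    ops->write_cr3(0x1000);
    ops->flush_tlb();
    ops->cpu_halt();
    return 0;
}
```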
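
The hole-punching capability mentioned in the "Linux as a Hypervisor" session can be tried from user space. Below is a minimal sketch assuming a Linux host (kernel 2.6.16 or later) with tmpfs mounted on /dev/shm; the file name is made up for the demo, and any C compiler on Linux should build it.

```c
/* Minimal demo of punching a hole in a tmpfs-backed file with
 * madvise(MADV_REMOVE), the kind of mechanism UML can use to give
 * memory back to the host (assumes Linux >= 2.6.16, tmpfs on /dev/shm). */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t len = 4 * 1024 * 1024;          /* 4 MiB "guest memory" file */
    int fd = open("/dev/shm/uml_demo", O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0 || ftruncate(fd, len) < 0) { perror("setup"); return 1; }

    char *mem = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }

    memset(mem, 0xAA, len);                      /* dirty every page */

    /* Punch a 1 MiB hole in the middle: the pages and their backing store
     * are freed, and the range reads back as zeroes afterwards. */
    size_t hole_off = 1024 * 1024, hole_len = 1024 * 1024;
    if (madvise(mem + hole_off, hole_len, MADV_REMOVE) != 0)
        perror("madvise(MADV_REMOVE)");
    else
        printf("hole punched, first byte of hole is now %#x\n",
               (unsigned)(unsigned char)mem[hole_off]);

    munmap(mem, len);
    close(fd);
    unlink("/dev/shm/uml_demo");
    return 0;
}
```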
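
And for the HTTP-FUSE Xenoppix session, this toy sketch shows the content-addressing idea only: split an image into fixed-size blocks and name each block by the MD5 of its contents, so identical blocks collapse into one file. It is not the real Xenoppix tooling; the 256 KB block size is an assumption, and it requires OpenSSL (build with -lcrypto).

```c
/* Toy sketch of HTTP-FUSE CLOOP style content addressing: print the
 * "block index -> content hash" mapping an index file would store.
 * Not the real Xenoppix tooling.  Build with: cc blocks.c -lcrypto */
#include <openssl/md5.h>
#include <stdio.h>

#define BLOCK_SIZE (256 * 1024)   /* illustrative block size */

static void hex_name(const unsigned char *digest, char *out)
{
    for (int i = 0; i < MD5_DIGEST_LENGTH; i++)
        sprintf(out + 2 * i, "%02x", digest[i]);
    out[2 * MD5_DIGEST_LENGTH] = '\0';
}

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <image-file>\n", argv[0]); return 1; }

    FILE *in = fopen(argv[1], "rb");
    if (!in) { perror("fopen"); return 1; }

    static unsigned char block[BLOCK_SIZE];
    unsigned char digest[MD5_DIGEST_LENGTH];
    char name[2 * MD5_DIGEST_LENGTH + 1];
    size_t n;
    unsigned long index = 0;

    /* In the real system each block would also be compressed and written
     * out as a file literally named <hash>; identical blocks would then
     * deduplicate automatically. */
    while ((n = fread(block, 1, BLOCK_SIZE, in)) > 0) {
        MD5(block, n, digest);
        hex_name(digest, name);
        printf("block %lu -> %s\n", index++, name);
    }
    fclose(in);
    return 0;
}
```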

The complete list of session presentations is available in volume 1 and volume 2.

ToutVirtual extends support for Xen and changes product name

In February 2006 ToutVirtual launched the first beta of a new management tool for virtualization platforms, called ShieldIQ, initially supporting just VMware Player, Workstation and GSX Server.

Today the company renames its product, now called VirtualIQ 525, and expands support to the upcoming VMware Server and Xen (implying it will probably support Virtual Iron 3.0 as well in the near future).

Quoting from the ToutVirtual official announcement:

ToutVirtual, an emerging leader in management software for virtual computing infrastructures, today announced VirtualIQ 525 software that supports a variety of policy-based and automated management functions for VMWare GSX Server, VMWare Server, VMWare ESX Server, VMWare Player, VMWare Workstation, and open-source Xen. VirtualIQ 525 software is currently being made available as freeware configured to manage up to 5 CPUs or 25 virtual machines….

The product is offered as a virtual appliance. Download it here.

Note 3 things:

  • the product still seems to be in beta: the download page clearly shows an Early Access Program label.
  • there are no demos, screenshots or support documentation (except a FAQ document) for this product
  • the company relies heavily on a major source of income, VMware, yet misspells its name throughout the press release and the whole website. After more than 8 years on the market…
    (corrected immediately after this article)

Tech: Virtual Machine Monitors vs Hypervisors

Ben Armstrong posted a brief explanation of Virtual Machine Monitors (VMMs) and hypervisors:


In the simplest terms – the VMM is the piece of software responsible for monitoring and enforcing policy on the virtual machines for which it is responsible. This means that the VMM keeps track of everything happening inside of a virtual machine, and when necessary provides resources, redirects the virtual machine to resources, or denies access to resources (different implementations of VMMs provide or redirect resources to varying levels – but that is a topic of discussion for another day).

Classically there are two types of VMM.

A type II VMM is one that runs on top of a hosting operating system and then spawns higher-level virtual machines. Examples of type II VMMs include the JavaVM and .Net environment. These VMMs monitor their virtual machines and redirect requests for resources to appropriate APIs in the hosting environment (with some level of processing in between).

A type I VMM is one that runs directly on the hardware without the need for a hosting operating system. Type I VMMs are also known as ‘hypervisors’ – so the only true difference between a VMM and a hypervisor is where it runs. The functionality provided by both is equivalent. Examples of type I VMMs include the mainframe virtualization solutions offered by companies such as Amdahl and IBM, and on modern computers by solutions like VMware ESX, Xen and Windows virtualization…

IBM uses the word hypervisor for everything:


Hypervisors use a thin layer of code in software or firmware to achieve fine-grained, dynamic resource sharing.

There are two types of hypervisors. Type 1 hypervisors run directly on the system hardware. The following figure shows one physical system with a type 1 hypervisor running directly on the system hardware, and three virtual systems using virtual resources provided by the hypervisor.

Type 2 hypervisors run on a host operating system that provides virtualization services, such as I/O device support and memory management.

Type 1 hypervisors are typically the preferred approach because they can achieve higher virtualization efficiency by dealing directly with the hardware. Type 1 hypervisors provide higher performance efficiency, availability, and security than type 2 hypervisors. Type 2 hypervisors are used mainly on client systems where efficiency is less critical. Type 2 hypervisors are also used mainly on systems where support for a broad range of I/O devices is important and can be provided by the host operating system…

vizioncore to release new product for VMware ESX Server upgrade

virtualization.info received an exclusive preview of a new vizioncore product, called esxMigrator, which will be able to simplify and speed up the migration from ESX Server 2.x to the new VMware Infrastructure 3.

The product will sport the following main features:

  • Minimizes downtime during the migration process from 2.x to 3.0 to as little as 60 seconds
  • Leverages core technologies from existing vizioncore offerings
  • Automates the full migration process based on VMware recommendations
  • Produces migrated virtual machines that are fully VI3-compliant per VMware’s best practices
  • Allows the final cutover of the migration process to be scheduled
  • Keeps the source virtual machine 100% consistent and unmodified during migration/conversion


The product is expected to be launched later this month.

The virtualization.info Virtualization Industry Roadmap has been updated accordingly.

Virtual Iron rejects paravirtualization

On its corporate blog, Virtual Iron’s management officially distances itself from paravirtualization technology.

Just a few months ago Virtual Iron announced that the new 3.0 version, expected this month in its first beta, would be based on the Xen hypervisor, the open source project that is the prime example of paravirtualization.
Is this a paradox? Not really.
Virtual Iron will be based on Xen as announced, but will depend entirely on AMD and Intel hardware assistance to virtualize guest operating systems.

The reason why Virtual Iron decided to skip paravirtualization is clearly stated in the blog post, which also reports some raw benchmark comparisons:


Paravirtualization requires substantial engineering efforts in modifying and maintaining an operating system. However, these heroic efforts are inevitably losing the battle against Moore’s Law and hardware advances being made in the x86 space. By the time the first product with paravirtualization appears on the market, more than 80% of the shipping x86 server processors from Intel and AMD will have hardware-based virtualization acceleration integrated into the chips (Intel-VT and AMD-V or “Rev-F”). This hardware-based acceleration is designed to optimize pure virtualization performance, primarily the virtualization of CPU, and it renders OS paravirtualization efforts as completely unnecessary and behind the technology curve.

The current batch of chips offering hardware-based virtualization acceleration from Intel and AMD, primarily helps with virtualization of CPUs and very little for virtualizing an I/O subsystem. To improve the I/O performance of unmodified guest OSs, we are developing accelerated drivers. The early performance numbers are encouraging. Some numbers are many times faster than emulated I/O and close to native hardware performance numbers.

Just to give people a flavor of the performance numbers that we are getting, below are some preliminary results on Intel Woodcrest (51xx series) with a Gigabit network, SAN storage and all of the VMs at 1 CPU. These numbers are very early. Disk performance is very good and we are just beginning to tune performance.

                                     Native    VI-accel
Bonnie-SAN (bigger is better)
  Write, KB/sec                      52,106      49,500
  Read, KB/sec                       59,392      57,186
netperf (bigger is better)
  tcp req/resp (t/sec)                6,831       5,648
SPECjbb2000 (bigger is better)
  JRockit JVM thruput                43,061      40,364

I strongly agree with this vision: the big problem with paravirtualization is that the performance it achieves is not a desirable benefit when you have to trade it against kernel modifications in your guest operating system.

Hardly any software house would agree to support its applications on a paravirtualized OS, for the simple reason that the environment is no longer a controlled one. Reliability, security and compatibility have to be proved all over again in such a scenario, and no vendor would be able to guarantee the same level of testing that happened during the original operating system’s development. Not in a reasonable amount of time.

Also, the biggest trade-off is that paravirtualization doesn’t allow running Microsoft Windows and, as I have said many times, this is a limitation big enough to keep the technology out of the largest part of the market: the SMB segment. It is the exact reason why Xen itself will eventually follow the same path as Virtual Iron.

How good does paravirtualization performance have to be to justify all of this?
Also: is it simpler to replace hardware (when new virtualization improvements become available in new CPUs) or to replace the operating system (when a new paravirtualized OS becomes available)?
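
As a footnote to the hardware-assist argument above: whether a given x86 machine exposes Intel VT or AMD-V can be checked from CPUID. Here is a small sketch using GCC/Clang's <cpuid.h> (the relevant bits are CPUID.1:ECX[5] for VMX and CPUID.80000001h:ECX[2] for SVM; keep in mind the BIOS can still disable the feature even when the bit is set).

```c
/* Minimal check for hardware virtualization support on x86/x86-64,
 * built with GCC or Clang (uses <cpuid.h>).  A set bit only means the
 * CPU implements the extension; firmware may still have it disabled. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 1, ECX bit 5: Intel VT-x (VMX). */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
        printf("Intel VT-x (VMX) reported by CPUID\n");

    /* CPUID leaf 0x80000001, ECX bit 2: AMD-V (SVM). */
    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
        printf("AMD-V (SVM) reported by CPUID\n");

    return 0;
}
```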

Review: Ars Technica reviews Parallels Desktop for Mac

After Macworld, Ars Technica also publishes a review of Parallels Desktop for Mac. This time the conclusion is:

For all the naysayers and people who may still be unhappy with the transition to Intel chips, it’s hard to ignore the advantage of virtualization, which opens up a broad spectrum of applications and utilities that are no longer crippled by having to run in Virtual PC’s emulated environment.

People pondering the switch to a MacBook can rest assured that with the exception of USB device support and hardware accelerated 3-D applications, their needs will be well met by this little workhorse of a program. Between the networking that just works, the impressive speed and the inability of the client operating systems to know they are running within a “virtual machine,” I think you’ll be hard-pressed to find software for any x86 OS that doesn’t work within a Parallels VM. If you’re still not certain, you can always try the fully-working demo and make your decision later. Just keep in mind that the price tag jumps from US$50 to US$80 after July 15. Until then, you’ll have to send the remaining US$30 to me.

Pros

  • Fast and overall responsiveness in OSes is very good
  • Clean, unobtrusive interface
  • Seamless networking with no configuration needed
  • Additional tools for Windows make file sharing and mouse movement better
  • Disk image compacting tool saves hard drive space
  • Very good application compatibility for software within client OSes
  • Runs multiple instances of the application to use more than one core/CPU when running two or more client VMs
  • Connect image option is a time and disk saver for downloaded installers
  • Well priced, even at US$80

Cons

  • Not suitable for games or complex 3-D modeling applications
  • Limited USB hardware support
  • No option to use more than a single CPU core
  • Can’t burn DVDs and CDs within VMs
  • Improved mouse movement driver for Windows VMs only

The review is much more complete than the one from Macworld and it’s worth reading.