VMware Workstation beta program continues

Flexbeta reports that a new VMware Workstation 5 beta build (11888) is available. Many fixes are in place and several components have been updated:

– Issues solved in beta 11888, 2005.01.07

PR 55642: Virtual machines fail to power off cleanly (updated 2005.01.07)
Under some circumstances, a virtual machine would fail to power off, either appearing to hang or displaying a blank screen. This was fixed in build 11888.

PR 57020: Failures When Performing Multiple File Drag-and-Drop Operations (updated 2005.01.07)
Using drag-and-drop to copy multiple files from host to guest could cause unexpected behavior, such as a virtual machine crashing or VMware Workstation freezing. This was fixed in build 11888.

PR 56435: Virtual Machine Crashes When Installing Windows Server 2003 SP1 RC1 (updated 2005.01.07)
When upgrading a virtual machine from Windows 2003 to the latest release candidate of Windows 2003 SP1, the virtual machine process crashed on the subsequent reboot. This was fixed in build 11888.

PR 56469: Unrecoverable Error During Virtual Machine Power Operations (updated 2005.01.07)
Virtual machines configured with a large amount of memory (usually more than 1 GB) could crash during power operations. The logged error message was:
ASSERT C:/ob/bora-11571/bora/devices/mainmem/mainMemHosted.c:216
This was fixed in build 11888.

PR 56479: Panic When Using CTRL-ALT-DEL to Reboot a Virtual Machine (updated 2005.01.07)
Under some circumstances, using CTRL-ALT-DEL or CTRL-ALT-INS from within a virtual machine would cause the virtual machine to crash. The logged error message was:
This was fixed in build 11888.

PR 56693: Virtual Machine Crashes on Power Off (updated 2005.01.07)
Under some circumstances, a virtual machine in full screen mode might crash at power off. The logged error message was:
ASSERT /build/mts/release/bora-11608/bora/mks/main/xinfo.c:501
This was fixed in build 11888.

PR 56907: Virtual Machine Crashes During Disk Operation (updated 2005.01.07)
Under rare circumstances, it was possible for a virtual machine to crash while performing disk operations. The logged error message was:
MONITOR PANIC: ASSERT vmcore/vmm/cpu/dt.c:2375
This was fixed in build 11888.

PR 49964: glibc and NPTL-based Threading Model (Fedora Core 3 Host Compatibility) (updated 2005.01.07)
Fedora Core 3 is the first widely used Linux distribution to adopt a change to glibc to use an NPTL-based threading model. This model was incompatible with VMware Workstation and could cause Workstation to core dump when running on such a host. We have modified Workstation to work correctly with the new glibc as of build 11888. (Note: Fedora is not a supported host operating system in VMware Workstation 5.0.)

Microsoft Virtual Server 2005 Service Pack 1 news leaks out

Megan Davis posted on her blog some interesting news about the upcoming Virtual Server 2005 SP1:

Here’s what Kurt Schmucker, the program manager for Virtual Server 2005 Service Pack 1, says about the release:

“As with typical service packs from Microsoft, Virtual Server 2005 Service Pack 1 will be primarily a rollup of fixes we have seen since the product was released to improve performance and increase scalability. In addition, with Service Pack 1, Virtual Server 2005 will have host support for Windows Server 2003 Service Pack 1 x64 Edition (note that this does not include IA64), provide PXE support, qualify Windows XP SP2 as a host and as a guest, and include the Virtual Disk Precompactor, a utility that is designed to “zero out” — that is, overwrite with zeros — any available blank space on a virtual hard disk.

A public beta is slated for the end of first quarter 2005, with product release planned for the second half of calendar year 2005.”

Internet Service Providers start adopting logical virtualization technologies

Netcraft reports that EV1Servers, and now Go Daddy, have started using so-called “virtual private servers” (VPS), a logical virtualization technology provided by SWsoft with its Virtuozzo platform.

A single VPS can be dedicated to a single customer without the space, money, maintenance time, staffing and other costs of new physical servers, achieving an impressive VPS-to-physical-server ratio.

EV1Servers is expanding beyond its core niche selling discount dedicated servers, introducing virtual private servers (VPS), storage solutions and managed services. The changes at the “all new” EV1Servers are a response to the evolving needs of its customers, as well as tougher competition in the dedicated server market.
EV1Servers is forging boldly into the VPS market, a strategy that allows it to capture shared hosting customers looking to move up, while squeezing more revenue from each server. VPSes use “virtual partitions” that allow a single machine to be used by multiple customers, with better security than shared hosting but many of the features of a dedicated server. Marsh believes VPS is “poised to break open a new top end shared hosting market,” and has priced EV1’s offerings at $39 a month.

“We see our hosting company customers as the primary distribution channel for this product,” said Marsh. “In early October we will host a Virtuozzo training session for hosting providers who are interested in offering VPS hosting. We hope this will help jump start a new and potentially lucrative product line for these customers.”

Domain registrar Go Daddy has begun selling virtual private servers (VPS) and dedicated servers, continuing an expansion that helped it become one of the fastest-growing hosting providers of 2004 in our Hosting Provider Switching Analysis. The move comes as the Scottsdale, Ariz. provider is preparing a major publicity campaign to increase its visibility, kicked off by a Super Bowl ad.

Go Daddy is using SWsoft’s Virtuozzo to power its VPS offering, following in the footsteps of EV1Servers, which announced a major VPS hosting initiative in September. VPS uses “virtual partitions” that allow a single machine to be used by multiple customers, with better security than shared hosting but many of the features of a dedicated server. While it has been a pioneer in discount pricing of domains and shared hosting, Go Daddy’s dedicated server offerings start at $219 a month and VPS at $39.95 a month, well above the offerings of current price leaders in those categories.

Novell, Red Hat eye virtualization for Linux

Quoting from ComputerWorld:

Novell Inc. last week said it will soon detail plans to include server virtualization technology in its SUSE Linux operating system. Red Hat Inc. intends to do the same thing with its Linux distribution, and a leading contender for both vendors may be an open-source virtualization technology called Xen.
Both Red Hat and Novell said they’re also looking at a number of other virtualization technologies. Novell, for instance, is eyeing Acton, Mass.-based start-up Katana Technology Inc.’s promised virtualization software, which is expected to run on Linux machines. Beyond that, all Novell will say is that it plans to act quickly. “We want to be aggressive about it,” said Ed Anderson, vice president of marketing at Novell.

Hewlett-Packard Co., Intel Corp. and Advanced Micro Devices Inc. are already working with Xen, according to officials at each of those companies. Intel and AMD are particularly interested in ensuring that Xen works well with their chip-partitioning technologies, which are due out next year.

Xen is available for download from the Web site of the University of Cambridge in England, where the 3-year-old open-source effort is based. The creators of Xen plan to open a company called XenSource Inc. in Palo Alto, Calif., within the next few weeks to support users of the technology.

Waiting for Acceptance

But corporate users may not embrace Xen until mainstream IT vendors back the technology.

That’s the case for Bob Armstrong, director of technical services at Delaware North Cos., a Buffalo, N.Y.-based hospitality services provider. Armstrong uses VMware Inc.’s virtualization software to run 19 guest operating systems on two production servers, each with two CPUs. He has virtualized about 25% of his data center and plans to increase that to about half of his systems over the next 18 months.

Armstrong said the technology from Palo Alto-based VMware, which is a division of EMC Corp., has allowed him to cut hardware spending by one-third. He also uses NetWare servers and will look at Novell’s virtualization technology. “Anywhere we can leverage our Novell investment, we would love to do that,” Armstrong said. “If we weren’t a Novell shop, we wouldn’t consider it.”

Xen supports Linux but not Windows, which means it’s unlikely to be adopted by Carmine Iannace, manager of IT architecture at Welch Foods Inc. in Concord, Mass. Iannace is running VMware environments that support Windows, Linux and Solaris. “We want to have the ability to run Windows, Solaris and Linux on the same server, and we really haven’t found anyone else who can provide that for us,” he said.

But Iannace added that the emergence of Linux vendors will increase competition in the virtualization market and help corporate users “by keeping a check on prices.”

Xen doesn’t support Windows because it requires a modification to the operating system kernel. However, Intel’s planned chip-partitioning technology and a similar offering due from AMD are expected to allow Windows to run in a virtualized environment without modifications.

VMware pushes hard for ESX Server

VMware has just introduced a much-requested license upgrade: from a GSX Server license to ESX Server + Virtual SMP + VirtualCenter Agent + VMotion, the so-called Virtual Infrastructure Node (VIN) bundle. Customers can upgrade by paying just the price difference between the two commercial offerings.

Quoting from the official announcement:

We are pleased to announce that you can now upgrade your VMware GSX Server software to the ESX Server Virtual Infrastructure Node. For the first time, VMware is offering upgrades to our most capable virtual infrastructure product. The ESX Server Virtual Infrastructure Node bundles our datacenter-class ESX Server product together with the revolutionary VMotion and Virtual SMP add-ons and a VirtualCenter Agent for advanced management.

It has always been easy to move GSX Server virtual machines to ESX Server hosts when you need the performance and robustness of its bare-metal architecture. This new upgrade program now lets you replace your GSX Server software with the VMware ESX Server Virtual Infrastructure Node bundle at a price that gives full credit for the list price of your GSX Server purchase.

Now and Xen

Quoting from Linux Magazine:

How would you like to run several operating systems at once on the same physical hardware with virtually no performance overhead — and for free? That’s the promise and the purpose of Xen, a relatively new open source project that turns one piece of hardware into many, virtually. If you’re looking to cut costs or maximize usage or both, follow the path to Xen.

Hardware virtualization allows multiple operating systems to run simultaneously on the same hardware. With such a system, many servers can run on the same physical host, providing more cost-effective use of valuable resources, including CPU, power, and space. Additionally, separate instances of one or more operating systems can be isolated from each other, providing an additional degree of security and easier management of system-wide resources like configuration files and library versions.

Up until now, there have been no open source solutions for efficient, low-level virtualization of operating systems. But now there’s Xen, a virtual machine manager (VMM) developed at the University of Cambridge.

Xen uses a technique called paravirtualization, where the operating system that is to be virtualized is modified, but the applications run unmodified. Paravirtualization achieves unparalleled performance, while still supporting existing application binaries.

At the moment, Xen supports a slightly modified Linux 2.4 kernel and NetBSD, with full support of OpenBSD coming in a few months. Xen even supports an experimental version of Windows XP (however, XP cannot be distributed, except to those who’ve signed Microsoft’s academic license), and ports of Linux 2.6 and Plan 9 are in development.

Xen 1.0 has been publicly available for just over a year, and Xen 2.0 will be released shortly after you read this. This article discusses the benefits of hardware virtualization, explains why Xen was built in the first place, and previews some of the exciting, new features available in 2.0.

What is Xen?

Think of Xen as a next-generation BIOS: Xen is a minimally invasive manager that shares physical resources (such as memory, CPU, network, and disks) among a set of operating systems. Effectively, Xen is transparent, as each operating system believes it has full control of its own physical machine. In fact, each operating system can be managed completely independently of the others.

Moreover, Xen divides resources very rigidly: it’s impossible for a misbehaving guest (an operating system that runs on a Xen host) to deny service to other guests. Simultaneous yet discrete operation is incredibly valuable.

For example, consider the problems inherent with hosting a set of services for different user groups. Perhaps you’re an application service provider, selling rack mount web server accounts. Or, perhaps you want to install a set of dissimilar services on the same physical host, but want to avoid the overhead of trying to get system-wide configuration files to play nicely with all of them. Xen allows the installation of many operating system instances on the same host.

Xen is also useful in factoring servers for enterprise administration. The database administrator and web administrator may have entirely separate OS configurations, root shells, and so on, while sharing common physical hardware.

Virtualization has applications for home users, too. For example, consider the benefit of application sandboxing: applications that are at risk for attack by worms or viruses (think web browsers and email clients) can be run within a completely separate virtual machine. If, for whatever reason, one sandbox becomes infected, it can simply be destroyed and recreated, leaving the rest of the system untouched. The same applies for downloading applications off of the Internet that you don’t necessarily trust, like games or file sharing tools — just run them in a separate, isolated, OS instance.

Unlike User Mode Linux (UML, see http://www.linux-mag.com/2004-01/uml_01.html) and Bochs (see http://www.linux-mag.com/2003-10/guru_01.html), Xen provides excellent performance. Unlike virtual servers, Xen provides real low-level resource isolation, preventing individual operating system instances from interfering with the performance of others. And unlike commercial virtualization packages, Xen is free.


Many other existing packages for virtualization do what’s often referred to as pure virtualization. In pure virtualization, the virtualization layer presents an exact replica of the underlying physical hardware to the operating systems that run above it. Many CPUs make such a form of virtualization very easy, in some cases even providing specific support for it.

One big benefit of pure virtualization is that the operating system software need not be modified to run, because it sees the illusion of raw hardware. Unfortunately, x86 processors do not provide specific support for virtualization. More specifically, they don’t virtualize very well at all. (To understand why pure virtualization is so inefficient, see the sidebar “Why Pure Virtualization is Bad.”)

Sidebar: Why Pure Virtualization is Bad

Xen’s approach to virtualization is called paravirtualization: the interfaces presented to the operating system are not those of the raw physical devices. While paravirtualization enhances performance, it comes at a cost: operating system code must be modified before it can run on Xen. In essence, Xen is a new architecture, slightly different from x86, that operating systems must be ported to.

There are three crucial problems with purely virtualizing the x86 architecture, and all are very difficult to address, as solutions are bound to introduce a severe performance overhead.

Memory management is quite tricky to virtualize effectively. The virtual machine manager often provides the guest with a shadow page table, which appears to be a set of physically contiguous memory, and then remaps all accesses to this page table behind the scenes (at considerable cost).

Xen’s approach is to let the OS know what pages of memory it really has (machine addresses) and then allow a mapping onto a contiguous range (pseudo-physical addresses). This means that the OS can have raw access to its page table, with Xen being involved only to validate updates for safety (specifically, to prevent one OS from attempting to map memory that doesn’t belong to it).
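The pseudo-physical mapping described above can be sketched in a few lines. This is an illustrative model, not Xen code: the class name and structure are invented, but it shows how a guest can see a contiguous pseudo-physical address space while actually owning scattered machine frames, and how a hypervisor-style ownership check can reject unsafe mappings.

```python
# Illustrative sketch (not Xen's data structures): pseudo-physical <-> machine
# frame translation for a paravirtualized guest.

PAGE_SIZE = 4096

class PseudoPhysicalMap:
    def __init__(self, machine_frames):
        # machine_frames: the (non-contiguous) machine frame numbers this guest owns
        self.p2m = list(machine_frames)                          # pseudo-physical -> machine
        self.m2p = {m: p for p, m in enumerate(machine_frames)}  # reverse map

    def to_machine(self, pseudo_addr):
        """Translate a guest pseudo-physical address to a machine address."""
        frame, offset = divmod(pseudo_addr, PAGE_SIZE)
        return self.p2m[frame] * PAGE_SIZE + offset

    def owns(self, machine_frame):
        """Hypervisor-style safety check: may this guest map that frame?"""
        return machine_frame in self.m2p

# The guest sees frames 0..2 contiguously; in reality it owns frames 7, 3, 42.
m = PseudoPhysicalMap([7, 3, 42])
assert m.to_machine(0) == 7 * PAGE_SIZE
assert m.to_machine(PAGE_SIZE + 10) == 3 * PAGE_SIZE + 10
assert m.owns(42) and not m.owns(5)
```

A page-table update that referenced frame 5 here would fail the `owns` check, which is the validation role Xen plays in the real design.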

Certain instructions on x86 (pushf, for instance) only result in a trap when run in supervisor mode (CPU ring zero, where the operating system normally lives). However, when virtualized, the operating system no longer runs at the appropriate level, and these instructions no longer result in traps.

In full virtualization this is commonly addressed with a technique called code scanning: the virtual machine manager examines the executing binary and redirects these calls. But since this run-time scanning can be very expensive, Xen does it beforehand. One of the tasks involved in porting an OS to Xen is to replace privileged instructions with the appropriate calls.

Sharing I/O devices such as network cards with pure virtualization means that the device driver in the guest OS must be able to interact with what it thinks is the raw physical device.

Rather than providing support for a virtualized version of every possible peripheral device, one approach is to map all underlying devices to the illusion of a single common one. This means that as long as the operating system running on top has support for that device, it will run without problems. Unfortunately, this also means that the system ends up running two device drivers for each device. In the case of network interfaces, extra device drivers typically mean extra copies, and so result in a per-byte overhead on each packet sent and received.

The paravirtualization approach to this problem is to provide the guest with a single idealized driver for each class of device. In the case of network interfaces, the guest OS driver interacts with a pair of buffers that allow messages to be sent and received without incurring an extra copy as they pass to Xen.
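The idealized-driver idea above can be illustrated with a toy model. This is a hypothetical sketch, not Xen's actual ring interface: the class and method names are invented, but it shows the essential shape of the design: the guest exchanges descriptors with the hypervisor through a pair of shared rings instead of poking at emulated hardware registers.

```python
from collections import deque

# Hypothetical sketch of an "idealized" paravirtual network device: a pair of
# bounded rings (guest->hypervisor and hypervisor->guest) carrying buffer
# descriptors. In the real design pages are handed over, not copied.

class IdealizedNic:
    def __init__(self, ring_size=8):
        self.tx_ring = deque(maxlen=ring_size)   # guest -> hypervisor
        self.rx_ring = deque(maxlen=ring_size)   # hypervisor -> guest

    def send(self, buf):
        """Guest side: enqueue a transmit descriptor; refuse if the ring is full."""
        if len(self.tx_ring) == self.tx_ring.maxlen:
            return False                          # back-pressure instead of drop
        self.tx_ring.append(buf)
        return True

    def deliver(self, buf):
        """Hypervisor side: place an inbound buffer on the receive ring."""
        self.rx_ring.append(buf)

    def receive(self):
        """Guest side: dequeue the next received buffer, if any."""
        return self.rx_ring.popleft() if self.rx_ring else None

nic = IdealizedNic()
nic.deliver(b"hello")
assert nic.receive() == b"hello"
assert nic.send(b"out") is True
```

Because both sides operate on shared descriptors, no per-packet copy is implied by the interface itself, which is the property that lets Xen avoid the extra-copy overhead of emulated devices.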

Solving these problems for pure virtualization is hard work, and several other software projects have made heroic efforts to reduce the associated performance costs.

In designing Xen, the software’s development team came to the conclusion that these just weren’t the right problems to solve. Paravirtualization seems to work quite a bit better, despite the one-time effort of porting an OS. And that cost is actually slight: the original port of Linux to run on Xen involved changing or adding about three thousand lines of source code, representing about 1.5 percent of the total Linux source. Moreover, about half of the changes are in the code for the new device drivers.

With Xen, most of the changes required to paravirtualize an existing OS are in the architecture-specific part of the operating system code. (The Linux 2.6 for Xen effort aims to further isolate the code in hopes that Xen will be included as a separate architecture within the 2.7 kernels.)

The paravirtualization of device drivers (described in the “Why Pure Virtualization is Bad” sidebar) adds another benefit: device drivers only need to be implemented once for all operating systems. Any guest can use any driver that’s supported by Xen.

Xen’s Latest Tricks

The initial release of Xen focused largely on making virtualization work and providing hard performance isolation between guest operating systems. In the year since getting isolation to work, many new features have been added that really demonstrate the benefits of virtualization.

Because Xen strictly isolates operating system instances, system reliability is enhanced.

Device drivers are commonly seen as a major source of instability. As drivers run in the kernel, driver bugs have a tendency to run amok, corrupting system memory and causing crashes.

In the original release of Xen, device drivers ran within Xen itself, exporting a common interface to all guests regardless of the specific device they were using. This simplified device support in the guest, but was ultimately a bad decision, because a driver crash could potentially crash Xen itself, just like in a non-virtualized OS.

In Xen 2.0, the Xen developers attacked this problem head-on, moving drivers up into their own guest OS domains. Drivers now run in an isolated virtual machine in the same way that a guest operating system does, yet drivers remain shared between guests as before. When a new domain is configured, the administrator chooses its hardware. Examining a hardware bus from within a guest only reveals the devices that have been exported to it.

The performance of placing device drivers in a completely separate OS instance is surprisingly good. Xen 2.0 includes specific mechanisms for the page-flipping that was used to transfer network data in the original release of Xen. Guests can share and exchange pages at very low overhead, and Xen carefully tracks page ownership to ensure stability in the case of a crashing or misbehaving guest.

The additional cost to consider is context switch times, because now both the driver and the guest must be scheduled before an inbound packet or disk block is received. Fortunately, due to the bulk nature of both of these types of devices, drivers are largely able to batch requests, resulting in minimal performance degradation.

Xen can still allow raw device access to guests that need it by making the hardware visible to a guest. This is suitable for devices that are generally used by a single domain, such as video and sound, with one caveat: allowing device DMA access to guests is very dangerous. On the x86, DMA has unchecked access to physical memory, and so an erroneous (or malicious) target address can result in the overwriting of arbitrary system memory. Hopefully, newer I/O MMU support in emerging hardware can help address this particular issue, as it’s a major problem in existing systems.

In the common case though, where raw device access isn’t needed, driver isolation adds plenty to reliability. As an added benefit, driver crashes may be corrected in a running system. A privileged guest in Xen can be configured to monitor the health of each driver. Should the driver become unresponsive, crash, or attempt to consume excessive resources, it can be killed and restarted. Fault-injection experiments have shown that restarts are very fast, on the order of a hundred microseconds. A network card can crash and be restarted almost unnoticed as a transfer is in progress.

Finally, there are commonly large differences between drivers for the same device on different operating systems. A Linux driver may expose hardware features that are missing from its Windows counterpart, or a Windows driver may exist where Linux simply isn’t supported. Such disparities are largely due to organization: driver support for a specific platform needs an interested community of users to demand it, and considerable OS expertise to develop it.

Virtualization puts an interesting twist on the age-old problem of driver support. Hardware drivers can be written once, using whatever OS they choose. Xen’s current, sample drivers are Linux drivers running on a cut-down Linux kernel. With those in place, all that’s left to do is write the idealized drivers for each guest OS to interface with the top of the hardware driver.

Encapsulating application and OS state within a managed virtual machine allows for a range of exciting system services. One of the most useful of these is the ability to suspend a virtual machine and resume it at another time or in another place.

For example, a complex application can be configured in isolation from the rest of the system and within its own OS instance, and can then be “canned” so that a fresh copy of the application can be quickly instantiated whenever necessary.

Suspending a VM requires Xen to store its configuration and execution context to a file. Configuration details include parameters such as CPU allocation, network connections, and disk-access privileges, while execution context contains memory pages and CPU and register states.
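The two halves of a state file described above can be sketched as a simple serialization. This is a hypothetical illustration: the field names and JSON format are invented for clarity (a real state file holds binary page contents and register dumps), but the split between configuration details and execution context mirrors the description.

```python
import json

# Hypothetical sketch (invented field names, not Xen's on-disk format) of a
# suspended VM's state file: configuration details plus execution context.

def suspend(vm):
    state = {
        "config": {                       # CPU allocation, network, disk privileges
            "vcpus": vm["vcpus"],
            "nics": vm["nics"],
            "disks": vm["disks"],
        },
        "context": {                      # register state and memory page contents
            "registers": vm["registers"],
            "pages": vm["pages"],
        },
    }
    return json.dumps(state)

def resume(blob):
    # A real resume must also rewrite physical addresses in page tables,
    # since the VM is given a different set of machine pages on resume.
    return json.loads(blob)

vm = {"vcpus": 1, "nics": ["vif0"], "disks": ["sda1"],
      "registers": {"eip": 4096}, "pages": ["page0-bytes", "page1-bytes"]}
restored = resume(suspend(vm))
assert restored["context"]["registers"]["eip"] == 4096
```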

Although resuming a virtual machine is largely a matter of reinstating its configuration and reloading its execution context, it’s somewhat complicated by the fact that the newly-resumed virtual machine will be allocated a different set of physical memory pages. Since Xen doesn’t provide full memory virtualization, each guest OS is aware of the physical address of each page that it owns. Resuming a virtual machine therefore requires Xen to rewrite the page tables of each process, and rewrite any other OS data structures that happen to contain physical addresses. This task is relatively simple for XenLinux, as most parts of the OS use a pseudo-physical memory layout, which is translated to real physical addresses only for page-table accesses.

Virtual machine migration can be thought of as a special form of suspend/resume, in which the state file is immediately transferred and resumed on a different target machine. Migration is particularly attractive in the data center, where it allows the current workload to be balanced dynamically across available rack space.

However, although Xen’s suspend/resume mechanism is very efficient, it may not be suitable for migrating latency-sensitive or high-availability applications. This is because the virtual machine cannot resume execution until its state file has been transferred to the target system, and this delay is largely determined by its memory size: a complex VM with a large memory allocation takes a correspondingly long time to transfer.

To avoid prolonged downtimes, Xen provides a migration engine that transfers a VM’s configuration information and memory image while the VM is still executing. The goal of the migration engine is to stop execution of the VM only while its (relatively tiny) register state is transferred. The “fly in the ointment” is that this can lead to an inconsistent memory image at the target machine if the VM modifies a memory location after it’s been copied. Xen avoids these inconsistencies by detecting when a memory page is updated after it is copied, and retransferring that page.
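The pre-copy loop just described can be sketched as follows. This is an illustrative model, not Xen's migration engine: memory is a plain dict and `get_dirty_pages` is a stand-in for the dirty logging described below, but the control flow — copy while running, retransfer dirtied pages, then a brief final stop-and-copy — matches the description.

```python
# Illustrative sketch of pre-copy live migration (not Xen's implementation):
# transfer memory while the VM runs, retransfer pages dirtied in the
# meantime, and pause only for the final small remainder.

def precopy_migrate(memory, get_dirty_pages, max_rounds=5):
    """memory: dict page_number -> contents.
    get_dirty_pages(): set of pages written since the previous round.
    Returns the memory image reconstructed on the target."""
    target = {}
    to_send = set(memory)                        # round 1: everything
    for _ in range(max_rounds):
        for page in to_send:
            target[page] = memory[page]          # copy while the VM still runs
        to_send = get_dirty_pages()              # pages modified during the copy
        if not to_send:
            break
    # Final stop-and-copy: the VM is paused, remaining dirty pages are sent.
    for page in to_send:
        target[page] = memory[page]
    return target

memory = {0: "a", 1: "b", 2: "c"}
dirty_rounds = iter([{1}, set()])                # page 1 is dirtied during round 1
image = precopy_migrate(memory, lambda: next(dirty_rounds))
assert image == memory
```

The `max_rounds` cap matters in practice: a VM that dirties pages faster than they can be sent would otherwise never converge, so real engines eventually force the stop-and-copy phase.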

To do this without requiring OS modifications, Xen installs a shadow page table beneath the VM. In this mode of operation, the guest’s page table is no longer registered with the MMU. Instead, regions of the guest page table are translated and copied into the shadow table on demand.

Shadow page tables are not new: they are used in fully-virtualizing machine monitors such as VMware’s products to translate between a guest’s view of a contiguous physical memory space and the reality that its memory pages are scattered across the real physical memory space.

Shadow page tables are not used by the migration engine for full translation, but for dirty logging. The page mappings in the shadow table are therefore identical to those in the guest table, except for pages that the migration engine has transferred to the target system. Transferred pages are always converted to read-only access when their mappings are copied into the shadow table, and any attempt to update such a page causes a page fault. When Xen observes a write-access fault on a transferred page, it marks the page as “dirtied,” which informs the migration engine to schedule another transfer. Writable mappings of the page are again permitted until the page is retransferred (and again marked read-only).
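The dirty-logging mechanism above reduces to a small state machine per page. This sketch uses invented structures (a writable flag standing in for the shadow table's read-only mappings), but it follows the described protocol: transferred pages are demoted to read-only, the first subsequent write faults and marks the page dirty, and write access is restored until the next transfer.

```python
# Sketch of shadow-page-table dirty logging (hypothetical structures, not
# Xen's): transferred pages are mapped read-only in the shadow table, so the
# first write after transfer faults and flags the page for retransfer.

class ShadowTable:
    def __init__(self, pages):
        self.writable = {p: True for p in pages}  # shadow mappings mirror the guest's
        self.dirty = set()

    def mark_transferred(self, page):
        self.writable[page] = False               # demote to read-only access

    def guest_write(self, page):
        if not self.writable[page]:
            # Write fault: log the page as dirtied, restore write access
            # until it is retransferred and demoted again.
            self.dirty.add(page)
            self.writable[page] = True

    def pages_to_retransfer(self):
        d, self.dirty = self.dirty, set()
        return d

st = ShadowTable([0, 1, 2])
st.mark_transferred(0)
st.mark_transferred(1)
st.guest_write(1)        # faults: page 1 is logged as dirty
st.guest_write(2)        # never transferred, still writable, no fault
assert st.pages_to_retransfer() == {1}
```

Note that only the first write after a transfer pays the fault cost; subsequent writes to the same page proceed at full speed until the migration engine retransfers it.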

Future Work

Xen is still in active development. In fact, by the time you read this article, there will likely be many new features available. Here is just a small sample of what you can look forward to:

One of the next releases of Xen will provide a real-time account of all the resources used by each active OS. This allows each guest OS to be charged for resources consumed and can also be used to establish consumption limits.

Xenoservers is a project to globally distribute a set of Xen-based hosts. The intent is to deploy Xen on a broad set of hosts across the Internet as a platform for global service deployment. (More information is available on the XenoServers web site at http://www.xenoserver.org.)

Getting Xen

Xen and XenoLinux are available as a single ISO image that can be downloaded and burned to CD. The CD is bootable, so you can bring up a demo without modifying your system simply by booting off the Xen CD.

The ISO image is available from Sourceforge and via BitTorrent. See the Xen download page at http://www.cl.cam.ac.uk/Research/SRG/netos/xen/downloads.html for links.

The Xen development team continues to develop new features for Xen and is always looking for enthusiastic people to join the project. If you’d like to participate, drop us a line!

HP webinar: Virtual Machine Management Pack

HP is delivering a live Webinar on January 14, 2005 on Virtual Machine Management. Here’s the description from HP: “Learn the advantages of virtual machine technology and understand how the Virtual Machine Management Pack allows you to manage and control the VMWare and Microsoft Virtual Server resources in your environment.”

To register, go to http://www.hpbroadband.com/program.cfm?key=Q91MTB88Y

Thanks to Megan Davis for this information.

VMware celebrates seven years of continual innovation and execution

Quoting from the official announcement:

VMware, Inc., the global leader in virtual infrastructure software for industry-standard systems, today [5th January] celebrated its seven-year anniversary, commemorating seven years of continual innovation and execution.

VMware was founded in January 1998 to bring mainframe-class virtual machine technology to industry-standard computers. In 1999, VMware delivered its first product for the desktop, VMware Workstation. VMware Workstation has revolutionized software development by making it possible to develop faster, test more comprehensively and deploy even the most complex enterprise applications in virtual machines. The product is now a de-facto standard for development with more than 2.5 million users.

VMware entered the server market in 2001 with VMware ESX Server and VMware GSX Server. ESX Server and GSX Server are virtual infrastructure software products for partitioning, consolidating and managing computing resources. The products have been adopted by thousands of IT organizations worldwide and have saved customers hundreds of millions of dollars in costs through providing server consolidation, fast provisioning and disaster recovery.

In 2003, VMware introduced 2-Way VMware Virtual SMP (symmetric multiprocessing) that allows virtual machines to span two physical processors, making virtual machines ideal for resource-intensive enterprise applications.

Also in 2003, VMware launched VMware VirtualCenter with groundbreaking VMotion technology, and the company firmly established itself as the thought leader in the fast-growing virtual infrastructure marketplace. VirtualCenter is virtual infrastructure management software that provides a central point of control for virtual computing resources. Using VMotion technology, virtual machines can be migrated while running, allowing for dynamic load balancing and zero-downtime maintenance.

In 2004, VMware delivered the VMware Virtual Infrastructure Software Developer Kit that provides standards-based interfaces that enable ISVs, partners and customers to control VMware virtual infrastructure and to integrate virtual infrastructure deployments into existing management frameworks.

VMware delivered support for 64-bit computing in 2004, once again extending virtualization capabilities for industry-standard platforms. VMware also announced it would deliver 4-Way VMware Virtual SMP, making it possible to extend the benefits of virtual infrastructure to the most demanding enterprise workloads.

Also in 2004, VMware introduced a breakthrough new enterprise desktop management and security product, VMware ACE. VMware ACE is targeted at the problems of contractor, telecommuter and mobile laptop management and enables IT managers to provision secure, standardized PC environments throughout the extended enterprise. With the introduction of VMware ACE, VMware again demonstrated its relentless innovation and aggressive technology leadership.

“In just seven years, VMware has successfully created a new category of software, virtual infrastructure, and is poised to extend our leadership position in this market,” said Diane Greene, president of VMware. “We are an organization that thrives on consistently bringing innovative ideas to market in highly robust and high value products.”

VMware firsts include:

– First to demonstrate the value of virtualization on commoditized platforms
– First to virtualize the x86 architecture (VMware Workstation)
– First to deliver a hosted virtual machine monitor; the hosted architecture integrates a virtual machine monitor with an existing operating system (VMware Workstation)
– First to enable transparent memory sharing of virtual machines on a commoditized platform (VMware ESX Server)
– First to handle a modern I/O subsystem in a virtualized x86-based system (VMware ESX Server)
– First to enable a single virtual machine to span multiple physical processors on an x86-based system (VMware ESX Server)
– First operating system to support 64-bit extensions on an x86-based system (VMware ESX Server)
– First to enable a running virtual machine to move across physical boundaries (VMware VMotion)
– First to enable automatic conversion of a physical x86-based environment, including the operating system and applications, into a virtual environment (VMware P2V Assistant)
– First to deliver comprehensive Virtual Rights Management technology (VMware ACE)

VMware’s comprehensive virtual infrastructure solutions for enterprise desktops, servers and development and test groups solve the hard problems of efficiency, flexibility and security and provide an easy on-ramp to next generation computing models.

Key facts about VMware:

– VMware is the world’s #1 provider of virtual infrastructure
– VMware is one of the fastest growing $100 million+ software companies
– VMware is relied upon by more than 80 percent of the FORTUNE 100 and leading organizations worldwide
– VMware has a partner ecosystem that covers all leading processor, infrastructure and management vendors and includes more than 1,000 global and regional resellers, the top major x86 system OEMs and more than 50 technology partners
– VMware virtual infrastructure products continued to be recognized for excellence with major awards from the industry’s leading publications

Happy Birthday VMware!

eWEEK names VMware VirtualCenter a top product of 2004

Quoting from the official announcement:

VMware, Inc., the global leader in virtual infrastructure software for industry-standard systems, today announced that eWEEK has named VMware VirtualCenter a Top Product of 2004. The award is the third major industry accolade VirtualCenter has received since its introduction in November 2003. Past industry recognition includes the CNET Most Promising Technology of the Year award and the Windows IT Pro Best New Product award.

“We are honored to be identified by one of the industry’s most respected publications for our ongoing commitment to innovation and excellence,” said Karthik Rau, director of product management for VMware. “Over the past year we saw virtual infrastructure mature and become the de facto standard among leading enterprise IT departments for making the data center scalable and manageable. It is exciting for VirtualCenter to be singled out for its strategic value to our customers.”

Used by thousands of IT organizations worldwide, VMware VirtualCenter is virtual infrastructure management software that provides a central and secure point of control for virtual computing resources. VirtualCenter creates a more responsive data center, enabling faster reconfiguration and reallocation of applications and services. VirtualCenter allows for instant provisioning of servers and decreases user downtime while optimizing the data center.

VirtualCenter provides a powerful way to connect IT to business needs. With VirtualCenter, IT infrastructure becomes more flexible, efficient and responsive. VirtualCenter uniquely leverages virtual computing, storage and networking to improve data center management and reduce cost. Together with VMotion technology, virtual machines can be migrated while running for dynamic load balancing and zero-downtime maintenance.

“eWEEK Labs was especially impressed with VMware’s unique VMotion technology,” commented Francis Chu, technical analyst at eWEEK Labs. “VMotion allows IT managers to run critical applications on virtual machines that can be changed on the fly, so there is almost no downtime when moving virtual machine files from one host to another.”

VMware VirtualCenter received excellent ratings from eWEEK Labs for its usability, manageability and scalability.