Microsoft working on new virtualization technologies for Longhorn

Quoting from IT Jungle:


Microsoft gave more credence to its insistence that Longhorn will be more than just another service pack at the WinHEC 2005 conference on Monday, when officials detailed new virtualization technologies the company is working on for the next version of Windows. When Longhorn Server ships in 2007, it could sport a new hypervisor layer, dynamic hardware partitioning software, co-processors used to manage partitions, and other high-end features designed to decrease downtime. Such features have long been common in mainframe, RISC/Unix, and proprietary midrange servers.

In his Windows Server Overview and Roadmap presentation, Sean McGrane, a Windows product manager at Microsoft, detailed the company’s X64 server strategy and offered a glimpse of some of the new virtualization technologies it is working on for the Longhorn Server operating system.

Combine the current bump in processing capability made possible by the 64-bit processors from Intel and AMD with the coming multi-core versions of those processors, and present-day symmetric multiprocessing (SMP) machines are going to look pretty wimpy compared to the systems that will be available in just a few years, McGrane says. By his math, a future single-socket server with four processor cores integrated onto one chip will do the same amount of work as one of today’s eight-way, 32-bit, X86-based servers.

In addition to keeping Moore’s Law alive and well, this massive increase in processing capability is going to help Microsoft sell new solutions, including a high performance computing (HPC) version of Windows for scientific workloads, called Compute Cluster Edition. For business applications, Microsoft wants to get into the application consolidation game, McGrane says. However, when applications running across many smaller servers are consolidated onto fewer, larger servers, downtime becomes much more of an issue. As a result, Microsoft will be looking to build a range of new reliability functions into Longhorn Server. These include better error detection, introduced through the Windows Hardware Error Architecture (WHEA), as well as sophisticated new virtualization technologies.

Microsoft’s planned virtualization technologies will boost overall reliability by enabling Windows to make better use of the underlying hardware without causing resource conflicts among existing operating systems and applications. Virtualization will also boost reliability by enabling users to add or replace failing hardware components before they cause unplanned downtime, McGrane explains.

He also said that Microsoft is looking to use virtualization in two ways. In the “Hot Addition” mode, dynamic hardware partitioning would allow users to add more processors, memory, or I/O to meet changing needs, without needing to reboot the server to add that capacity. In the “Hot Replace” mode, a failing processor, memory module, or I/O adapter is replaced with a redundant spare kept on site, again without causing downtime. “If a customer gets worried [about a failing component], you swap it out before it fails,” McGrane says. “Memory and processors never go away from the application and drivers point of view.” Permanently removing hardware without requiring a restart is a trickier maneuver and will not be supported in the first release of Longhorn, he says.
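The article itself contains no code, but a small sketch may help illustrate what “hot addition” means from the operating system’s side: newly added processors and memory simply show up in the totals the OS reports, with no reboot in between. The snippet below is an illustrative assumption on my part, not anything Microsoft has published; it only uses standard Win32 calls (GetSystemInfo, GlobalMemoryStatusEx) to poll those totals.

    /* Illustrative sketch only (not from the article): poll the processor and
     * memory totals the OS currently exposes. On a server supporting hot
     * addition, these numbers would grow after new hardware is brought
     * online, without a reboot. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SYSTEM_INFO si;
        MEMORYSTATUSEX ms;
        ms.dwLength = sizeof(ms);

        for (;;) {
            GetSystemInfo(&si);            /* processors visible to the OS */
            GlobalMemoryStatusEx(&ms);     /* physical memory visible to the OS */

            printf("CPUs: %lu  RAM: %llu MB\n",
                   si.dwNumberOfProcessors,
                   (unsigned long long)(ms.ullTotalPhys / (1024 * 1024)));

            Sleep(5000);                   /* check again in five seconds */
        }
    }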

The virtualization plans for Longhorn call for a variety of new technologies, including: a new Partition Manager that provides a user interface for the creation and management of partitions; a hardware-based Service Processor that controls the interaction of regular processors and I/O; hardware-based partitioning for segregating processors at the socket level; and a software-based virtualization layer for sub-socket partitioning, which is commonly called a hypervisor.

Some of these new technologies, such as I/O virtualization and hardware partitioning, will require cooperation from Microsoft’s hardware partners. For others, such as the new hypervisor layer, Microsoft already has much of the base technology. There has been speculation that Virtual Server 2005 will disappear as a product if Microsoft decides to build virtualization capabilities directly into the operating system, though that is not yet a done deal. Building them in would have its advantages, since Virtual Server 2005 exacts a higher level of overhead than a hypervisor would, particularly one assisted by the on-chip hardware virtualization technologies in future Intel and AMD chips, which Intel calls Virtualization Technology and AMD has code-named “Pacifica.”
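For readers curious what those on-chip assists look like from software, the check a hypervisor (or any program) can make for their presence is a single CPUID query. The sketch below is added for illustration and is not part of the article; it relies on the documented CPUID feature bits (ECX bit 5 of leaf 1 for Intel’s VMX, ECX bit 2 of leaf 0x80000001 for AMD’s SVM) via the MSVC __cpuid intrinsic.

    /* Illustrative sketch: detect the hardware virtualization extensions the
     * article mentions, using the MSVC __cpuid intrinsic and the feature
     * bits documented by Intel (VMX) and AMD (SVM). */
    #include <intrin.h>
    #include <stdio.h>

    int main(void)
    {
        int regs[4]; /* EAX, EBX, ECX, EDX */

        __cpuid(regs, 1);
        int intel_vtx = (regs[2] >> 5) & 1;   /* CPUID.1:ECX bit 5 = VMX */

        __cpuid(regs, 0x80000001);
        int amd_svm = (regs[2] >> 2) & 1;     /* CPUID.80000001h:ECX bit 2 = SVM */

        printf("Intel VT-x: %s, AMD-V (Pacifica/SVM): %s\n",
               intel_vtx ? "yes" : "no",
               amd_svm ? "yes" : "no");
        return 0;
    }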

Servers equipped with dynamic hardware partitioning and a software-based hypervisor layer would provide other advantages over Virtual Server 2005. While the 32-bit Virtual Server 2005 already allows different operating systems, such as Windows Server 2003, Windows NT, Novell NetWare, or even Linux, to run on the same server, Microsoft’s future partitioning would provide a greater level of isolation between partitions, requiring less time, money, and energy to be invested in compatibility testing and certification.

It makes sense from Microsoft’s point of view. High-end Unix and mainframe servers from Hewlett-Packard, IBM, Sun Microsystems, and others have offered various levels of virtual or logical dynamic partitioning for years, and a few vendors even offer sub-processor partitioning. These technologies have enabled higher server utilization and encouraged server consolidation, both of which save customers money. Microsoft needs a compelling virtualization story, too, if it is serious about breaking into the high-end market and about helping customers get better utilization out of their Windows machines.