Intel confirms virtualization and security technologies in 2006

Quoting from X-bit Labs:


Intel Corp.’s president Paul Otellini recently confirmed the company’s plans to bring the security and virtualization capabilities of the platforms code-named LaGrande and Vanderpool to market in 2006, in line with previous expectations. Both capabilities are likely to advance computers by a significant margin, as both enable new usage models.

Lyndon, Bridge Creek, Averill – New Platforms from Intel

Security and virtualization capabilities will be added in the course of Intel’s forthcoming platform enhancements. In 2005 Intel plans to advance its personal computing – now called digital home and office – platforms with dual-core Smithfield processors featuring 2MB of cache, Enhanced Intel SpeedStep (EIST), EM64T and iAMT. A year later, in 2006, Intel’s follow-up platforms will get the Vanderpool and LaGrande technologies, next-generation iAMT and next-generation dual-core processors produced using 65nm process technology. Currently Intel refers to its ’05 digital home and office platform as Lyndon, while the ’06 platforms are called Bridge Creek and Averill.

As usual, Intel did not indicate any core clocks, frequencies or performance levels for its future products. Intel also did not confirm whether it plans to sell packages of its desktop components under a single brand, as it does with notebook hardware branded Intel Centrino.

Vanderpool and LaGrande – Corner Stones of Next-Generation Computing

Intel has been discussing its plans to enable the extended security features code-named LaGrande for years now. It was previously anticipated that the technology would be implemented in the currently shipping 90nm processors, such as the Intel Pentium 4 “Prescott”; however, Intel did not officially confirm this during the launch of the chip.

Besides security capabilities, the Santa Clara, California-based Intel has also been planning to enable advanced parallelism for personal computers in order to increase reliability and add new usage models for end-users. Vanderpool is a hardware technology that splits a system into several virtual partitions that operate independently while sharing the same PC resources. Server processors and platforms are also likely to get a virtualization technology: Intel calls it Silvervale, but does not reveal any differences compared to Vanderpool.

Besides the innovative Vanderpool and LaGrande technologies, Intel will also add certain features likely to be required by numerous professional systems: iAMT, a remote system management capability, and EM64T, a 64-bit capability that enables more than 4GB of memory and boosts performance in certain applications.

Microsoft Longhorn – The Catalyst of the Next Computing Era

While all the technologies discussed here are supported in hardware, the potential of these revamped capabilities is only likely to be fully exposed under Microsoft’s forthcoming Longhorn operating system.

In particular, Vanderpool and LaGrande, just like the competing technologies from Intel Corp.’s main rival Advanced Micro Devices – the security technology code-named Presidio and the virtualization technology code-named Pacifica – will require support from the operating system and are unlikely to be fully functional on the current generation of OSes.

Microsoft Longhorn is currently anticipated to tie the whole system’s feature-set together: not only must processors and chipsets support capabilities like virtualization, but graphics cards, hard disk drives, I/O controllers and other hardware are also likely to require support for certain functionality to take advantage of Longhorn.

Currently Microsoft Longhorn is expected for release in the 2006 – 2007 timeframe.

Few details about VMware ESX Server 3.0 emerge

VMware’s ESX Server 2.5 release has been expected for a week now; meanwhile, a few details about ESX Server 3.0 are starting to emerge in VMware Community posts.
In summary:

– Release date expected for 2005 (unclear if Q1 or end of year) [I personally would opt for the “end of year” option]
– iSCSI support
– The obsolete Red Hat 7.2-based service console could be abandoned

Programming Virtual Server 2005 with Visual Studio.NET

Quoting from .NET Developer’s Journal:


Microsoft Virtual Server 2005 (VS2005), a new addition to the Microsoft Server family, emulates physical hardware to create multiple independently operating environments or virtual machines (VMs).

This article reviews portions of a C# application leveraging the VS2005 API and will get you started with VS2005. The complete application code demonstrates one of the more interesting aspects of VMs: differencing hard drives. These disks allow you to toggle a VM from one configuration to another and back. This capability is especially useful for software testing and verification, where hardware resources are limited.

VS2005 is amazingly powerful but also adds a new level of complexity to server management. The VS2005 background and code provided in this article are just the tip of the iceberg to get you started. Take some time to work with virtualization. You’ll quickly see how easily virtualization can benefit your organization.

Virtual Server Background
A full introduction to VS2005 could fill a book. The objective of this article is not to overwhelm you with details, but to show you how to write a simple interface program to VS2005 that does useful work. Before diving into code, some brief background on virtualization is necessary.

Virtualization enables a single physical server, or host, to be partitioned into many independently operating virtual servers. As depicted in Figure 1, the VS2005 software emulates the disk, network, keyboard/video/mouse, processor, and memory needed to create a server. Each server is totally isolated from the others with no visibility into other VMs running on the same host. The virtualization software governs resource usage to ensure that one VM does not consume all of the host’s CPU capacity.

It is important to distinguish between hosts and VMs for VS2005 compatibility. The host that runs the virtualization software must be running Windows Server 2003 (Windows XP works but is not “supported”). The VMs can run a broad range of operating systems, including most flavors of Linux, DOS, Novell, Windows NT, and recent Windows releases, but Microsoft will only support Windows. Microsoft, in fact, intends for VMs to become the primary supported platform for Windows NT.

What Makes Virtualization Compelling?
Virtualization is compelling because of the increased efficiency and control it brings to the physical server data center. Our company, Surgient, Inc., is an on-demand enterprise software company whose applications automate software sales, marketing, training, and testing processes. Surgient has been using virtualization platforms since 2001 as an early beta tester of first-generation virtual server products. During that time, we have deployed over a thousand hosts and countless VMs. If you’ve experienced the Microsoft Visual Studio.NET hands-on trial, then you’ve used the Surgient demo management product and you’ve also used a VM! The VMs powering that site have served over 100,000 demos of VS.NET.

From the beginning, we recognized that VMs were fundamentally different and much more flexible than physical servers. The host’s APIs enable programmatic control to manage the virtual servers. The price of this flexibility is additional planning and management of data center operations.

On a virtualization host, servers compete for limited amounts of memory (RAM), processor (CPU), and storage (disk). VS2005 VMs, like physical servers, need sufficient RAM to operate. They block out their full memory footprint on the host. You can, however, release a VM’s memory by turning off the VM, which enables you to maintain more VM configurations than you can concurrently run.

These idle virtual images voraciously consume large amounts of disk. This means that a typical host may not have enough space to store many idle 4-10 GB virtual image files. VS2005 partially solves this problem by allowing VMs to chain image files into differencing layers that share a common base image.

The VS2005 differencing layer technology is called differencing disks (undo drives are a variant of differencing disks). This technology enables you to create a base image of the operating system and then install applications into a difference disk layer. A single base image can then be used by several VMs that each have a unique differencing disk.

Virtualization Example
Our sample scenario is a code development and testing environment for a three-tier application. The host server is a dual-processor server with 2GB of RAM and 120GB of RAID 5 disk. The proposed demo application must support both SQL Server and Oracle, each of which requires at least 1GB of RAM for testing. How can we use VS2005 to test the application in all its permutations using only the host server?

Memory is the obvious contention point. Of the available 2GB on the host, one quarter is reserved as overhead for the host operating system and VS2005, leaving just 1.5GB for the three application tiers. Since the databases each require 1GB, the entire application will just barely run on the host. To test both database platforms, we will have to keep one turned off while we are testing the other. Toggling the servers will keep our RAM use within the limits of the host.

Storage is a less obvious but equally serious challenge. Assuming 100GB available of the 120GB total should give ample room to store VM disk files. If we assume 10GB per VM, then we could store 10 VMs. Ten quickly drops to five if we plan to keep one archive copy of each VM.

Differencing disks provide the solution to both these issues. Our database server requires a 4GB Windows 2003 base image and two distinct differencing disks (SQL at 4GB and Oracle at 6GB). Without differencing, the two servers would consume 18GB of disk. Sharing the base image uses just 14, saving 4GB of disk. It is not necessary to archive the shared base because it does not change. We do, however, want to archive each differencing disk. The total storage is 24 GB, down from 36 GB without the use of differencing disks (see Figure 2).
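
To make that arithmetic explicit, here is a back-of-the-envelope check of the figures above (a hypothetical sketch, not part of the article’s sample application; all sizes in GB):

int baseImage = 4, sqlDiff = 4, oracleDiff = 6;
int withoutDiff = (baseImage + sqlDiff) + (baseImage + oracleDiff);  // 18: two self-contained servers
int withDiff = baseImage + sqlDiff + oracleDiff;                     // 14: shared base saves 4GB
int archive = sqlDiff + oracleDiff;                                  // 10: only the differencing disks need archiving
int totalWithDiff = withDiff + archive;                              // 24GB total
int totalWithoutDiff = withoutDiff * 2;                              // 36GB total (live plus archive copies)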

Toggling the VM power states and working with shared base images addresses the resource limitations of the host.

Programming Virtualization Control
To use our testing environment, we must create a small .NET application that toggles between server configurations by changing both the VM’s power state and its differencing disk configuration.

The C# sample application will provide the following features:

– Create a VM with appropriate components, including differencing and undo drives
– Manage the VM’s power (Start and Stop)
– Change the disk configuration of an existing machine
– Provide a simple user interface

This article focuses on key VS2005 interfacing points in the application. The entire program is available online here.

Before the program can run, it needs a reference to the VS2005 API. VS2005 provides a COM interface and requires a .NET Interop assembly to use it. Visual Studio creates the Interop automatically when we add Microsoft Virtual Server from the COM Components tab of the Tools…Add/Remove Toolbox Items menu. This process adds the Microsoft.VirtualServer.Interop reference to the code.
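
In code, that reference appears as an ordinary using directive at the top of the file:

using Microsoft.VirtualServer.Interop;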

In addition to referencing the COM Interop, VS2005 requires COM security to enforce access control. Our sample application includes a dedicated class, COMServices, to provide this critical initialization. Your VS2005 application must include this or similar code. A call to COMServices.Initialize() is all that is needed before we can start using the VS2005 API.
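
The article does not reproduce COMServices itself, so the following is a minimal hypothetical sketch of what such an initializer might look like, built on the standard CoInitializeSecurity COM call via P/Invoke (the class name, constants and chosen security levels are assumptions, not the article’s actual code):

using System;
using System.Runtime.InteropServices;

static class COMServices
{
    [DllImport("ole32.dll")]
    static extern int CoInitializeSecurity(IntPtr pSecDesc, int cAuthSvc,
        IntPtr asAuthSvc, IntPtr pReserved1, uint dwAuthnLevel, uint dwImpLevel,
        IntPtr pAuthList, uint dwCapabilities, IntPtr pReserved3);

    const uint RPC_C_AUTHN_LEVEL_PKT_PRIVACY = 6;  // encrypt and verify each packet
    const uint RPC_C_IMP_LEVEL_IMPERSONATE = 3;    // let the COM server impersonate the caller

    public static void Initialize()
    {
        // Set process-wide COM security once, before any Virtual Server call.
        CoInitializeSecurity(IntPtr.Zero, -1, IntPtr.Zero, IntPtr.Zero,
            RPC_C_AUTHN_LEVEL_PKT_PRIVACY, RPC_C_IMP_LEVEL_IMPERSONATE,
            IntPtr.Zero, 0, IntPtr.Zero);
    }
}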

The VS2005 API has two primary categories of functions. The first category controls the virtualization platform on the host, while the second controls states and attributes of individual VMs. The interface to the host is created by instantiating a new VMVirtualServerClass object. Once this object exists, it is possible to obtain VMVirtualMachine objects by either creating new VMs with CreateVirtualMachine or getting a reference to an existing VM with FindVirtualMachine.

The example application calls the host object “vs” for Virtual Server. Here is the code to attach to the host API:

VMVirtualServerClass vs = new VMVirtualServerClass();
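
If the VM already exists on the host, a reference can be obtained instead of creating one; a minimal sketch using the FindVirtualMachine method mentioned above (the VM name is illustrative):

VMVirtualMachine existing = vs.FindVirtualMachine("vm01");  // expected to return null if no such VM is registered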

Creating a Virtual Machine
The first step in creating a VM is deciding where to store the multigigabyte virtual disk files that store the data of the VM’s hard drives. The VS2005 default is to bury the files deep in the Documents and Settings directory tree, which can cause serious issues on systems with multiple partitions as the largest files default to the operating system’s partition. Change the default path for VMs from the VS2005 Web interface in the Virtual Server…Server Properties section. The VMs are stored in directories under the default virtual machine configuration folder. In this example, the host will store VMs on the second partition (d:) in the “vms” subdirectory.

Creating a VM is a multistep process. The basic CreateVirtualMachine method only creates a VM stub. The VM’s RAM, disk, and network must be configured before it is usable. However, you cannot just attach disk and network to a VM; you must “install” virtual devices before you can attach media to them. Specifically, you must add a network adapter to your VM before you can attach it to the network and you must specify which IDE or SCSI ports you are using when you attach drives.

The first step is to create the VM stub. The application calls the VM object vm. Here is the code requesting the host object to create a VM:

VMVirtualMachine vm = vs.CreateVirtualMachine("vm01", @"d:\vms\vm01");

With the VM stub, it is possible to configure the VM’s properties. Memory is the easiest to configure:

vm.Memory = 256;

Attaching a hard drive requires an existing virtual hard disk (VHD) file. You can use an existing one or create one dynamically. VHD files are configured to a maximum possible size and expand dynamically as data is added. The maximum size is specified in megabytes, so the code sample uses a 1K multiplication to improve readability. Here is how the host object is told to create a VHD file:

vs.CreateDynamicVirtualHardDisk(@"d:\vms\vm01.vhd", 16 * 1024);

Instead of creating a new disk (shown above), you can add a differencing disk to an existing hard disk. A differencing disk inherits the maximum size from its parent and also stores the parent’s location in its header. You must supply both a unique disk name and the parent disk when you create a difference disk:

vs.CreateDifferencingVirtualHardDisk(@"d:\vms\vm02.vhd", @"d:\vms\parent.vhd");

Once the disk file exists, it can be attached to the VM by selecting a bus (IDE or SCSI) and the bus address. If the disk is a differencing disk, only the difference disk file is provided for connection. The parent disk is not programmatically connected because the difference disk already has the reference location for its parent disk. In this example, VS2005 will connect the drive at address IDE 0:0:

vm.AddHardDiskConnection(@"d:\vms\vm01.vhd", VMDriveBusType.vmDriveBusType_IDE, 0, 0);

Undoable mode is an important VS2005 feature because it allows you to maintain a working session for your server. When using undoable drives, you can maintain, commit, or discard the working session. There is minimal performance impact for this feature, and it eliminates the time wasted recovering or rebuilding server environments. Undoable mode is an attribute of the VM and applies to all drives:

vm.Undoable = true;

Connecting a new network adapter to the correct host network is more challenging. Attached adapters are exposed as a NetworkAdapters array on the VM object. To create the network adapter for the VM:

vm.AddNetworkAdapter();

When installed, VS2005 automatically creates a virtual network for each physical host network interface card (NIC) plus an extra “internal” network that can be shared between VMs but is not externally connected. Virtual networks may be created or added from the Web interface in the Virtual Networks section. The host object offers an array of VirtualNetworks. Connect a VM to a network by providing a reference to the desired host network to an adapter’s AttachToVirtualNetwork method:

vm.NetworkAdapters[1].AttachToVirtualNetwork(vs.VirtualNetworks[0]);

Managing VM Power
VS2005 allows absolute control over a VM’s power, including a saved state that releases a VM’s memory and CPU resources. Suspend is useful because the VM immediately resumes work when restarted, avoiding an operating system reboot.

Basic power management uses the VM’s Startup and TurnOff methods. These are not advised for most cases: TurnOff is dangerous because it does not shut the guest down gracefully, and Startup does not wait for startup to complete before returning control.

To provide a graceful shutdown, the “Virtual Machine Additions” must be installed on the VM’s GuestOS – Microsoft’s name for the operating system running on the VM. VS2005 prompts you for the Additions in the Web interface. Once the Additions are installed, you first check the CanShutdown property of the VM’s GuestOS attribute:

if (vm.GuestOS.CanShutdown) vm.GuestOS.Shutdown();

Waiting for start or shutdown completion requires asynchronous calls to the VS2005 interface. Many VS2005 methods return a VMTask object that can be used to monitor task completion:

VMTask vt = vm.GuestOS.Shutdown();
if (vt != null) vt.WaitForCompletion(-1);
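
Startup can be handled the same way; assuming Startup also returns a VMTask (as the VS2005 methods described above generally do), a sketch might look like:

VMTask startTask = vm.Startup();
if (startTask != null) startTask.WaitForCompletion(-1);  // -1 waits indefinitely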

Going Forward
Programming helps you unlock and automate some of the most powerful features of Microsoft Virtual Server 2005. This article covered key points to consider about using and programming Virtual Server. The complete sample application provides additional context and more features, including switching a VM’s differencing disk as discussed in the example scenario. We believe that virtualization technology can radically improve the way you use server capacity and hope this article takes you a step closer to that realization.

IBM releases Virtual Machine Manager add-on for its Director

Quoting from official announcement:


– Description
Virtual Machine Manager (VMM) can be installed on systems running IBM Director on Microsoft operating systems.
VMM extensions can be installed together with the matching IBM Director components: agent, server and console.

VMM is not a standalone tool; it requires an existing IBM Director environment.

– Product overview
Virtual Machine Manager enables the use of the following virtualization applications in an IBM Director environment:

o VMware ESX Server in a VMware VirtualCenter environment

o Microsoft(R) Virtual Server

When Virtual Machine Manager and these virtualization applications are installed, you can perform the following tasks from IBM Director Console:

o Discover and report status about virtualization components

o Log in to the management interface of the virtualization application

o Perform migration and power operations on virtual machines

– Compatibility with IBM Director
Virtual Machine Manager 1.0 is supported for use with IBM Director 4.20.

You can download it here.

Lack of tests could block virtualization

Quoting from ComputerWorld:


Server virtualization technology is expected to play a big role in increasing CPU utilization rates on x86-based servers in the next few years. But attendees at Gartner Inc.’s data center conference here this week said one potential roadblock is the need to test packaged applications on virtualized systems.

That issue could put the relationships between users and software vendors to the test if vendors are reluctant to troubleshoot their applications on servers running virtualization software, according to Gartner analysts and IT managers at the conference.

Tony Fernandes, vice president of technology infrastructure at Inventure Solutions Inc., the internal IT arm of Vancouver City Savings Credit Union in British Columbia, plans to begin testing Microsoft Corp.’s Virtual Server software next year.

Fernandes said he expects to have to train his staff to perform some application troubleshooting tasks, and he views the testing process as an opportunity to find out if his vendors’ use of the word partner rings hollow. “Partner is this great word, but how many spell it correctly?” Fernandes said. He added that he plans to give the application vendors he works with this message: “You say that I’m an important customer, so show it.”

Fernandes and other users said they have two strategies for dealing with vendor resistance to testing. One approach involves training internal IT staffers to do the necessary work on virtualized servers. The other might be called the blunt-force method: threatening to take their business elsewhere. Fernandes said he thinks that in 95% of the cases at his company, he could find an alternative application vendor if necessary.

Conference attendees said the software most likely to need such troubleshooting comes from point-solution vendors that develop specialized applications, often for specific vertical industries.

Many of those vendors are relatively small and don’t have the funding or expertise to test their software in virtualized environments, said William Miller, manager of computing services at Roche Diagnostics Corp., an Indianapolis-based maker of medical diagnostics equipment.

Miller plans to conduct his own tests of third-party software on virtualized servers and then seek help from vendors if there are problems. He said he will deliver a message similar to the one that Fernandes has in mind: “Support me, or I’ll go find another point solution.”

Application support by third-party software vendors is “the main issue” in adopting virtualization technology, said Luis Franco, vice president of technology at Banesco Bank in Venezuela. Some vendors “don’t want to assure the quality of their applications” on virtualized servers, Franco said, adding that the bank needs to increase the skills of its own personnel as a result.

In the long run, though, application vendors may have little choice other than to make the adoption of server virtualization software as easy as possible for users.

Gartner analyst Tom Bittman predicted that the average CPU utilization rate on two-way Wintel servers will increase to 40% by 2008, up from about 25% now. The rise will be partly driven by an increase in virtualization offerings, including Microsoft’s Virtual Server, he said.

Some users believe that x86-based servers are so inexpensive, there’s no point in buying virtualization software for them, Bittman said. But he argued that users may be spending more on x86-based servers as a whole than they do on mainframes or Unix systems.

The low-end servers also generate a lot of heat because of their increasing CPU power and density, contributing to cooling problems in many data centers, Bittman said.

Dave Mahaffey, technical systems administrator at the Santa Clara Valley Water District in San Jose, said he was at the conference to talk to vendors and research virtualization issues. “We’ve got a server for every application, and it’s getting out of hand,” Mahaffey said. He added that he wants to consolidate servers and increase their CPU utilization rates from the current level of between 15% and 20% to as much as 50%.

Whitepaper: 64-bit computing with Intel EM64T and AMD AMD64

The IBM Redbooks department released this cool paper about 64-bit technologies:


– Abstract

There are now three 64-bit implementations in the “Intel® compatible processor” marketplace:

Intel IA64, as implemented on the Itanium 2 processor
Intel EM64T, as implemented on the Xeon DP “Nocona” and future Xeon MP processors
AMD AMD64, as implemented on the Opteron processor

There is some uncertainty as to what a 64-bit processor is and even more importantly, what the benefit of 64-bit computing is. This document introduces the EM64T and AMD64 architectures and explains where 64-bit processing is useful and relevant for customers in this marketplace.

While this is not strictly virtualization related, VMware is starting to support 64-bit architectures, so this reading could be interesting for some of you.

Whitepaper: VMware ESX Server: Scale Up or Scale Out?

The IBM Redbooks department released another (draft) paper on VMware’s mainstream product, ESX Server:


– Abstract

There has been a certain amount of discussion lately regarding optimal and efficient implementations of ESX Server, VMware’s flagship x86 hardware virtualization product. This document is a guide, or a basis for discussion, on best practices for the scalability of an ESX Server implementation on IBM xSeries and BladeCenter servers.

In this document, we focus the discussion on two major scenarios: the so-called scale-up implementation and the scale-out implementation. These often-used terms describe an architecture based either on one or a few big x86 server systems, or on many smaller x86 server systems. The former is defined as a scale-up solution (that is, you add components and resources to the big system in order to achieve scalability), and the latter as a scale-out solution (that is, you add new server systems to your “server farm” in order to scale it according to your needs).

The objective of this document is to create the basis for a broad understanding of the advantages and disadvantages of both approaches, without compromising our analysis by tying it to a limited set of products. This should assure readers that our analysis is as vendor-agnostic as possible.

We assume that readers of this paper have an understanding of the VMware products — specifically ESX Server, VirtualCenter and VMotion.