The virtues of virtualization

Quoting from CIO Asia:

During the past few decades, CIOs have stood at the center of one of the great technological revolutions in history: the replacement of the physical atom by the computational bit as the medium of commerce and culture. The profession might be forgiven for thinking that nothing is left for the next generation but tinkering. What could compare with a transition like that?

Actually, something almost as big might be coming over the horizon: the replacement of the bit with the virtual bit. Virtualization is the substitution of physical computing elements, either hardware or software, with artificial impostors that exactly replicate the originals, but without the sometimes inconvenient need for those originals to actually exist. Need a 1 terabyte hard drive, but only have 10 100GB drives? No problem, virtualization software can provide an interface that makes all 10 drives look and act like a single unit to any inquiring application. Got some data you need from an application you last accessed in 1993 on an aging MicroVAX 2000 that hit the garbage bin a decade ago? A virtual Digital VMS simulator could save your skin.

Stated like that, virtualization can sound like little more than a quick and dirty hack, and indeed, for most of the history of computing, that is exactly how the technique was viewed. Its roots lie in the early days of computing, when it was a means of tricking single-user, single-application mainframe hardware into supporting multiple users on multiple applications. But as every aspect of computing has grown more complex, the flexibility and intelligence that virtualization adds to the management of computing resources have become steadily more attractive. Today it stands on the lip of being the next big thing.

Raising the Dead
The Computer History Simulation Project, coordinated by Bob Supnik at SiCortex, uses virtualization to fool programs of historical interest into thinking that they are running on computer hardware that vanished decades ago. Supnik’s project has a practical end as well: Sometimes old systems are so embedded in the corporate landscape that they must be kept running. If the real hardware is unavailable, the only way to keep the old machines running is to virtualize them.

In a more contemporary example of the power of virtualization, about three years ago J. R. Simplot, a $3 billion food and agribusiness company in Boise, Idaho, found itself in a phase of especially rapid growth in server deployments. Of course, with rapid growth comes the headache of figuring out how to do everything faster. In this case, the company’s IT center concluded that their old server procurement system had to be accelerated.

Servers, of course, are pieces of physical equipment; they come with their own processing, memory, storage resources and operating systems. What the Simplot team did was use virtualization tools from VMware, a virtual infrastructure company, to create software-only servers that interacted with the network just like hardware servers, although they were really only applications. Whenever Simplot needed another server it would just flip the switches appropriate to the server type (Web, application, database, e-mail, FTP, e-commerce and so on). From that point, an automated template generated the virtual machine on a specific VMware ESX host machine.
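
The article doesn’t spell out what one of those templates contains, but the mechanics are easy to sketch: each server role maps to a canned hardware profile, and the automation renders a VM configuration (a .vmx file, in VMware’s case) for the target ESX host. Below is a minimal sketch in Python; the role profiles and the handful of .vmx keys shown are illustrative assumptions, not Simplot’s actual templates.

```python
# Sketch of role-based server provisioning: each server type maps to a
# canned hardware profile, and the automation renders a minimal VMware
# .vmx configuration for the target ESX host. The profiles and values
# below are illustrative assumptions, not Simplot's actual templates.

ROLE_PROFILES = {
    "web":      {"memsize": "512",  "guestOS": "winnetstandard"},
    "database": {"memsize": "2048", "guestOS": "winnetenterprise"},
    "email":    {"memsize": "1024", "guestOS": "winnetstandard"},
}

def make_vmx(name: str, role: str) -> str:
    """Render a minimal .vmx file for the requested server role."""
    profile = ROLE_PROFILES[role]
    lines = [
        'config.version = "8"',
        f'displayName = "{name}"',
        f'guestOS = "{profile["guestOS"]}"',
        f'memsize = "{profile["memsize"]}"',
        f'scsi0:0.fileName = "{name}.vmdk"',   # backing virtual disk
        'ethernet0.present = "TRUE"',
    ]
    return "\n".join(lines) + "\n"

print(make_vmx("web-042", "web"))
```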

Virtual Improvements
According to Tony Adams, a technology analyst at Simplot, there were gains all across the board. The time to get a new server up and running on the system went from weeks to hours or less. Uptime also increased, because the servers were programs and could run on any supported x86 hardware anywhere. If a machine failed or needed maintenance, the virtual server could be quickly moved to different hardware.

Perhaps most important were the gains in utilization efficiencies. Servers are built for specific roles. Sometimes demand for a particular role is in sync with available resources, but usually it isn’t. In the case of “real” servers, if there is a mismatch, then there is nothing that you can do about it; you’re stuck with what you have. If you end up with an average utilization rate of 10 percent per server, so be it. (The need to provide for peak demand makes the problem worse, and utilization can often be far below even 10 percent.) Low utilization means IT is stuck with unnecessary maintenance issues, security faces unnecessary access issues (they have to worry about protecting more machines), and facilities must deal with unnecessary heat and power issues.

Virtualization fixes these problems. The power to design any kind and number of servers that you like allows you to align capacity with load continuously and precisely. In the case of Simplot, once Adams’s servers turned virtual, he was able to deploy nearly 200 virtual servers on only a dozen physical machines. And, he says, typical CPU, network, disk and memory utilization on the VMware ESX boxes is greater than 50 percent—compared with utilization of around 5 percent on dedicated server hardware.
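
The arithmetic behind those gains is worth making explicit. Here is a back-of-the-envelope check using the figures Adams cites, assuming (unrealistically) identically sized machines:

```python
# Back-of-the-envelope consolidation math using the figures above.
# Assumes identically sized machines, which real fleets never are.

dedicated_servers = 200    # workloads formerly on dedicated hardware
dedicated_util = 0.05      # ~5% average utilization per dedicated box
esx_hosts = 12             # physical machines after consolidation

# Total work, expressed in "fully busy server" equivalents.
aggregate_demand = dedicated_servers * dedicated_util      # 10.0

consolidation_ratio = dedicated_servers / esx_hosts        # ~16.7:1
implied_utilization = aggregate_demand / esx_hosts         # ~0.83

print(f"consolidation ratio: {consolidation_ratio:.1f}:1")
print(f"implied average host utilization: {implied_utilization:.0%}")
# Adams reports >50% in practice; the gap between that and the ~83%
# ceiling here is headroom kept for peaks, failover and maintenance.
```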

Virtualization also makes disaster recovery planning simpler, because it allows you to write server clusters appropriate to whatever infrastructure you have on hand. As Adams points out, conventional disaster recovery schemes force you to have an exact clone of your hardware sitting around doing nothing. “But personally, what I really like,” he says, “is the remote manageability. I can knock out new [servers] or do repairs anywhere on the Net, without even going to the data center.”

Adams wants one machine to look like many machines, but it is just as possible to virtualize the other way: making many machines look like one. Virtualization underlies the well-known RAID storage tricks that allow many disks to be treated as one huge drive for ease of access, and one disk to be treated as many for the purpose of robust backup. Another prime use for virtualization is development. The hardware world is growing much more complex all the time: Product cycles are turning faster, the number of device types is always rising, and the practice of running programs over networks means that any given program might come in contact with a huge universe of hardware. Developers can’t begin to afford to buy all of this hardware for testing, and they don’t need to: Running products on virtualized models of the hardware allows for quality assurance without the capital expense. Virtualizing the underlying hardware also gives developers far more control. Peter Magnusson, CTO of Virtutech, a systems simulation company in San Jose, Calif., points out that you can stop simulated hardware anywhere you like, any time you want to investigate internal details.
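
The “many as one” direction is simple enough to sketch in a few lines: a thin mapping layer translates offsets in one large logical volume into (disk, offset) pairs on smaller backing disks laid end to end, which is the core idea behind spanning and RAID volumes. A toy illustration, not a real block driver:

```python
# Toy illustration of "many disks as one": translate an offset in a
# single large logical volume into (disk, offset) on smaller backing
# disks laid end to end. Real volume managers layer striping, parity,
# caching and failure handling on top of this basic idea.

class SpannedVolume:
    def __init__(self, disk_sizes):
        self.disk_sizes = list(disk_sizes)
        self.capacity = sum(self.disk_sizes)

    def locate(self, logical_offset):
        """Map a logical offset to (disk_index, offset_on_disk)."""
        if not 0 <= logical_offset < self.capacity:
            raise ValueError("offset outside volume")
        remaining = logical_offset
        for index, size in enumerate(self.disk_sizes):
            if remaining < size:
                return index, remaining
            remaining -= size

# Ten 100GB drives presented as one 1TB volume, as in the CIO Asia
# example quoted earlier.
GB = 10**9
volume = SpannedVolume([100 * GB] * 10)
print(volume.capacity // GB, "GB total")   # 1000 GB
print(volume.locate(250 * GB))             # disk 2, 50GB into that disk
```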

Unreal Future
During the next year or two, virtualization is on track to move from its current success in storage, servers and development, to networks and data centers. So CIOs will then be able to build software versions of firewalls, switches, routers, load balancers, accelerators and caches, exactly as needed. Everything that was once embodied in cards, disks and physical equipment of any kind, will be organized around a single point of control. If virtualization vendor promises materialize, changes that once were out of the question, or that at least would have required considerable man-hours and operational risk, will be done in minutes, routinely.

What those changes will mean is very much a topic for current discussion. For instance, all the new knobs and buttons virtualization provides will raise issues of policy, because it will be possible to discriminate among classes of service that once had to be handled together. You will, for instance, be able to write a Web server that gives customers who spend above a certain limit much better service than those who spend only half as much. There will be huge opportunities for automation. Infrastructure may be able to reconfigure itself in response to changes in demand, spinning out new servers and routers as necessary, the way load balancing is done today. (Certainly IBM et al. have been promoting just such a vision of the on-demand computing future.)

Virtualization examples so far have all been hardware-centric, because the inherent inflexibility of hardware means the elasticity advantages of virtualization are greater than with software. However, virtualization can work anywhere in the computing stack. You can virtualize both the hardware and the operating system, which allows programs written for one OS to run on another, and programs written for a virtual OS to run anywhere (similar to how Java maintains its hardware independence through the Java Virtual Machine).

Quite possibly the growth of virtualization predicts a deep change in the responsibilities of CIOs. Perhaps in the not-too-distant future no CIO will ever think about hardware: Raw physical processing and storage will be bought in bulk from information utilities or server farms. Applications will be the business of the departments or offices requiring them. The center of a CIO’s job will be the care and feeding of the execution environment. The very title of CIO might vanish, to be replaced, of course, by CVO.

Taking It All In
In that world, virtualization could graduate into a full-throated simulation of entire systems, the elements of which would not be just computing hardware, as now, but all the motors, switches, valves, doors, engines, vehicles and sensors in a company. The model would run in parallel with the physical company, in real time. Where now virtualization is used for change management, disaster recovery planning, or maintenance scheduling for networks and their elements, it would in the future do the same for all facilities. Every object or product sold would come with a model of itself that could fit into one of these execution environments. It would be the CVO’s responsibility to make sure that each company’s image of itself was accurate and complete and captured the essentials. And that would not be a virtual responsibility in the least.

PlateSpin will be optimized for Microsoft Virtual Server

Quoting from the PlateSpin official announcement:

PlateSpin Ltd today announced that their patent-pending OS Portability PowerX product line is optimized for Microsoft’s current and future virtualization technologies, including Microsoft Virtual Server 2005 R2, through its licensing of the Microsoft virtual hard disk (VHD) format. PlateSpin shares with Microsoft a focus on enabling self-managing dynamic systems that deliver higher business value through automation, flexible resource utilization, interoperability and knowledge-based processes. PlateSpin PowerConvert works with Microsoft Virtual Server to optimize the data center by automating migrations and accelerating server consolidation projects. Additionally, PowerConvert is ideal for ensuring business continuity through rapid, infrastructure-independent recovery, and for easing test lab deployment and development.

“Microsoft’s commitment to virtualization is evidenced by their decision to license the VHD format royalty-free, making it easy for vendors, such as PlateSpin, to work effectively with Microsoft Virtual Server 2005,” said Stephen Pollack, PlateSpin CEO. “With a clear roadmap, and the openness of the VHD format licensing, Microsoft is well positioned to grow the adoption of future versions of Windows virtualization technology into production environments in data centers across the world.”

PlateSpin PowerConvert automates the deployment of Microsoft VHDs across the data center infrastructure and completely decouples the software layer, making it independent of the hardware it previously relied upon.

“Microsoft is licensing the VHD format as royalty free to provide strategic vendors, such as PlateSpin, the ability to accelerate development on the Virtual Server platform,” said Zane Adam, director of marketing, Windows Server Division, Microsoft Corp. “We are enabling the partner ecosystem to create value-added solutions to the Windows platform. The Microsoft VHD file format, and other standards initiatives, will help ensure a smooth transition to future versions of Windows virtualization technology, and provides cross-platform support and an ongoing common format for easier security, management, reliability and migration.”

Multi-directional migrations are made possible with PlateSpin’s underlying OS Portability technology, which performs automated migrations from virtually any source to any target in the data center including physical servers, blade servers, virtual machines, and archive/backup images.

“Microsoft Virtual Server 2005 is a critical component in many server consolidation projects that we see today,” said Eric Courville, vice president of Global Sales and Business Development from PlateSpin. “The combination of PlateSpin PowerConvert and Microsoft Virtual Server 2005 is a proven solution, and we are working closely with Microsoft to ensure that we leverage their technology today and into the future so that we can offer our customers a market-leading solution for optimizing the data center.”

PlateSpin PowerConvert is available today for use with industry leading server technologies, image archives, and virtualization products including Microsoft Virtual Server. For more information on PlateSpin PowerConvert and its underlying OS Portability technology, please visit www.platespin.com.

Acronis True Image Enterprise Server extends disaster recovery capabilities using Microsoft Virtual Server 2005 R2

Quoting from the Acronis official announcement:

Acronis Inc., the technological leader in storage management software, announced that Acronis True Image Enterprise Server extends Microsoft’s current and future virtualization technologies, including Microsoft Virtual Server 2005 R2, and can provide full disaster recovery for both physical and virtual servers.

“As system virtualization becomes mainstream, IT managers will find a greater need for disk imaging for disaster recovery and systems deployment,” said Walter Scott, CEO of Acronis. “Acronis True Image Enterprise Server today provides the disaster recovery and bare-metal restore requirements for both the physical server and the virtual servers residing on it.”

“Combining Acronis’ real-time disk imaging and backup capabilities with Microsoft’s virtualization technologies, and royalty-free Virtual Hard Disk (VHD) license, provides a powerful value proposition for enterprise and hosting customers,” he said. “Server virtualization is the enabling technology for tomorrow’s dynamic data centers, but without comprehensive tools like those offered by Microsoft, enterprises will struggle to fully realize the benefits of the dynamic data centers.”

“Enterprises will be able to provision new physical servers instantaneously using Acronis’ award-winning disk imaging solution on systems running Microsoft Virtual Server 2005 R2,” said Stephen Lawton, Acronis director of marketing. “Once deployed, the new servers can be partitioned into virtual servers to help better utilize a customer’s server investment.”

“Recent events along the Gulf Coast and power outages in Los Angeles have brought to the fore the need for companies large and small to have disaster recovery strategies,” he continued. “As virtual servers are introduced into more networks, IT managers need a disaster recovery plan that addresses both physical and virtual systems.”

“Acronis True Image Enterprise Server helps extend Virtual Server 2005 R2 capabilities to aid customers in developing powerful disaster recovery strategies,” said Zane Adam, director of marketing, Windows Server Division, Microsoft Corp. “The work Microsoft is doing to enable the partner ecosystem with virtualization technologies demonstrates our openness so that partners can add value on top of the Windows platform.”

Acronis True Image Enterprise Server is currently being shipped. It carries a suggested list price of $999; volume discounts are available. For additional information, visit www.acronis.com/enterprise/products/ATIESWin/.

Novell SuSE releases Linux 10.0 and embeds Xen 3.0 technical preview

Novell just released SuSE Linux 10.0 (previously referred to as SuSE Linux Professional).
It now includes a Xen 3.0 technical preview (b6715), which supports the AMD and Intel virtualization enhancements.

Quoting from the xen-3.0_6715-2.i586.rpm package:

Xen is a virtual machine monitor for x86 that supports execution of multiple guest operating systems with unprecedented levels of performance and resource isolation.

This package contains the Xen HyperVisor.

Modern computers are sufficiently powerful to use virtualization to present the illusion of many smaller virtual machines (VMs), each running a separate operating system instance. Successful partitioning of a machine to support the concurrent execution of multiple operating systems poses several challenges. Firstly, virtual machines must be isolated from one another: it is not acceptable for the execution of one to adversely affect the performance of another. This is particularly true when virtual machines are owned by mutually untrusting users. Secondly, it is necessary to support a variety of different operating systems to accommodate the heterogeneity of popular applications. Thirdly, the performance overhead introduced by virtualization should be small.
Xen uses a technique called paravirtualization: The guest OS is modified, mainly to enhance performance.

The Xen hypervisor (microkernel) does not provide device drivers for your hardware (except for CPU and memory). This job is left to the kernel that’s running in domain 0. Thus the domain 0 kernel is privileged; it has full hardware access. It is started immediately after Xen itself starts up. Other domains have no access to the hardware; instead they use virtual interfaces that are provided by Xen (with the help of the domain 0 kernel).

Xen does allow other operating systems to boot; ports of NetBSD (Christian Limpach), FreeBSD (Kip Macy) and Plan 9 (Ron Minnich) exist.
A port of Windows XP was developed for an earlier version of Xen, but is not available for release due to licence restrictions.
In addition to this package you need to install kernel-xen and xen-tools to use Xen.
Xen 3 also supports full emulation, allowing unmodified guests to run by relying on hardware support. Install xen-tools-ioemu if you want to use this.
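
To make the domain 0 / guest split concrete, here is roughly what a guest definition for Xen 3.0’s xm tools looks like. These config files are plain Python assignments; the kernel paths, volume name and bridge name below are illustrative assumptions, not values taken from the SuSE packages.

```python
# Illustrative Xen 3.0 paravirtualized guest config (e.g. /etc/xen/vm01).
# xm config files are plain Python assignments; the paths and device
# names here are assumptions to be adjusted for your own installation.

name    = "vm01"
memory  = 256                     # MB of RAM for the guest
kernel  = "/boot/vmlinuz-xen"     # Xen-aware guest kernel (from kernel-xen)
ramdisk = "/boot/initrd-xen"
root    = "/dev/sda1 ro"

# Virtual block device backed by an LVM volume in domain 0.
disk = ["phy:/dev/system/vm01,sda1,w"]

# One virtual NIC, attached to the bridge managed by domain 0.
vif = ["bridge=xenbr0"]

# For an unmodified guest on VT-capable hardware (xen-tools-ioemu
# installed), the config would instead use the HVM builder:
#   builder = "hvm"
#   kernel  = "/usr/lib/xen/boot/hvmloader"
#   device_model = "/usr/lib/xen/bin/qemu-dm"
```

The guest would then be started from domain 0 with something like xm create vm01 -c.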

Quoting from the xen-tools-ioemu-3.0_6715-2.i586.rpm package:

Xen is a virtual machine monitor for x86 that supports execution of multiple guest operating systems with unprecedented levels of performance and resource isolation.

This package contains the needed BIOS and device emulation code to support unmodified guests. (You need virtualization support in hardware to make use of this.)
….

Microsoft adapts Windows Server System licensing to virtualization scenarios

Quoting from the Microsoft official announcement:

As IT solutions become more complex, operational costs often continue to climb. It should come as no surprise then that organizations have been clamoring for ways to scale back the amount of time and money they spend maintaining their IT systems so they can instead ramp up efforts to better run their enterprise and bolster their bottom line.

Through its Dynamic Systems Initiative (DSI), a cross-company effort to address IT customers’ desire to be more cost-efficient, proactive and responsive to business requirements, Microsoft is attempting to address customer needs by making it easier for them to take advantage of the benefits of server virtualization technology. Organizations that virtualize computing environments can increase operational efficiency through server consolidation, application re-hosting, disaster recovery and software test and development.

With the help of customers, partners and industry analysts, Microsoft has developed new licensing and use rights to better enable customers to take advantage of virtualization and accommodate advances in technology. To get a sense of how Microsoft has adapted its Windows Server System licensing to reflect this growing demand, PressPass spoke with Brent Callinicos, Microsoft’s corporate vice president for Worldwide Licensing and Pricing.

PressPass: What changes to Windows Server System Licensing is Microsoft announcing today?

Callinicos: Customers are using virtualization technologies more and more for a variety of reasons – as part of server consolidations, for test and development, to create a more agile infrastructure that allows them to move workloads from machine to machine regardless of hardware, to establish better business continuity and to reduce downtime.

We wanted our licensing to allow customers to embrace virtualization benefits and eliminate any potential barriers. As a result, we have devised licensing policies that we feel best reflect how our customers want to use this virtualization technology.

First, we are licensing by running instance, which is to say the number of images, installations and/or copies of the original software stored on a local or storage network. Instead of licensing every inactive or stored virtual instance of a Windows Server System product, customers can now create and store an unlimited number of instances, including those for back-up and recovery, and only pay for the maximum number of running instances at any given time.

Second, we are providing easier deployment across servers. Customers can now move active instances from one licensed server to another licensed server without limitation, as long as the physical server is licensed for that same product. So, customers will now be able to store a set of instances on a storage network and deploy any instance to a rack server or blade server that has an available license for that server software.

Third, we are providing customers with greater flexibility with Windows Server System products that are currently licensed by processor, such as Microsoft SQL Server, BizTalk Server, Internet Security Accelerator Server and others. Customers can now stack multiple virtual instances on a machine by licensing for the number of virtual processors being used, rather than for all of the physical processors on the server.

Lastly, we recognize customers are using virtualization to consolidate servers. Therefore, we now have a policy for Windows Server 2003 R2 Enterprise Edition that allows customers to run up to four running virtual instances on one server at no additional cost. And we’ll go further with the Datacenter Edition of Windows Server “Longhorn,” the code name for the next version of Windows Server, by allowing customers to run unlimited virtual instances on one server at no additional charge.
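
A quick sketch of what the per-processor change means in practice; the host size and VM counts below are invented for illustration:

```python
# Illustration of the per-virtual-processor licensing rule described
# above. The host size and VM counts are invented for this example.

physical_cpus = 4            # processors in the host server
vm_virtual_cpus = [1, 1]     # two SQL Server instances, 1 vCPU each

# Old model: a per-processor product was licensed for every physical
# processor in the server, regardless of what the VMs actually used.
old_licenses = physical_cpus

# New model: license only the virtual processors the running
# instances actually use.
new_licenses = sum(vm_virtual_cpus)

print(f"per-processor licenses, old rule: {old_licenses}")   # 4
print(f"per-processor licenses, new rule: {new_licenses}")   # 2

# Under the running-instance rule, stored (inactive) images need no
# license at all, so templates and backups can be kept freely.
```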

PressPass: What role did partners, customers and analysts play in affecting this change in licensing?

Callinicos: Last year we looked at the advancements in hardware, specifically how dual-core and multi-core processors will impact the industry. We reviewed that along with our existing policies and set out to ensure that our policies are flexible enough to allow customers to take advantage of the performance benefits associated with dual-core chips without incurring additional server software costs.

At about the same time, we began to look at other emerging technologies, like virtualization. We sought the advice of several customers that participate on our Licensing Advisory Council and expressed a desire to create licensing policies that reflected how customers wanted to use this technology. The council viewed this as an industry-leading measure. We also talked to partners and industry analysts and asked them to research how customers were using virtualization and what they valued about the benefits associated with using software in virtual environments. We learned that customers want to more dynamically deploy their virtual machines, and we wanted our licensing to reflect that.

From the very beginning, partners, customers and analysts have aided our licensing evolution and have taken a hands-on approach in the process. Response to date has been very favorable.

PressPass: What impact will today’s licensing announcements have on customers and the industry?

Callinicos: There are several benefits to virtual-machine technology. There is a reduction in total cost of ownership by improving hardware utilization and consolidating workloads on fewer servers. The virtual machine market is a fairly niche market. It’s somewhere in the US$400 million revenue range. We’re very early on in its mainstream adoption on x86 platforms. But hopefully with the technology support and more flexible licensing policies that Microsoft is providing, we’ll be able to make it more mainstream. We strongly believe in the benefits of virtualization, and that’s why we’ve evolved our policies to better reflect how it’s being used presently, and how we expect it to be used in the future, as customers strive to achieve self-managing dynamic systems.

PressPass: How do Microsoft’s licensing policies differ from other enterprise application vendors’ policies for virtualized environments?

Callinicos: Many software companies don’t have clearly stated licensing policies for running server software in virtual environments, so it is difficult to say. However, most are focused on licensing server software by installed copy – whether in a virtual environment or not. Based on feedback from customers and partners, we’re moving away from that to licensing server software by running instance.

Update: Ben Armstrong, Microsoft Virtual Machines Team Program Manager, published on his blog, Virtual PC Guy, a new whitepaper about the Microsoft virtualization licensing terms, covering all cases with explanations. It’s a must-read for everyone.

Books: Rob’s Guide to Using VMware – Second Edition

Rob’s Guide to Using VMware
Second Edition

Release Date: July, 2005
ISBN: 9080893439
Edition: 2
Pages: 352
Size: 9.2″ x 7.0″ x 1.0″

Summary
The second edition of Rob’s Guide to Using VMware picks up where the first edition left off. New topics covered in the book are VMware ACE and VMware GSX Server. The book now also contains information on VMware and Linux. This new edition features an overview of the new version 5 of VMware Workstation. Many topics from the first edition have been updated, and new topics have been added.

Words from the Author
This new book is updated for all the new versions of VMware products. It is based on VMware Workstation version 5 and all the tips and procedures have been updated for this latest version.
New topics include: Multiple Snapshots, Teaming virtual machines and Linux installation and configuration. There is also a section that introduces VMware ACE.

The sections covering Physical to Virtual conversion and Clustering have been updated with new techniques and they now cover new information on the complete range of VMware products, including ACE v1, GSX Server v3.2 and ESX Server v2.5.
For P2V you will find updated information on imaging techniques that can be used to copy physical machines into virtual machines, including how to restore Windows and Linux machines that resided on IDE hard disks into virtual machines with SCSI disks, such as on VMware ESX Server. The tools that I cover for P2V have been updated to include Acronis True Image for Windows (including the server versions), plus freely available Linux tools and default commands to copy a machine into a virtual machine directly over the network.

The sections on VMware GSX Server and ESX Server will get you started with these high-end VMware products. The book contains an introduction to VMware’s server products which will introduce the reader to the main concepts. I have also included an introduction to VMware VirtualCenter to give you an overview of how that product can help you manage a multi-server VMware environment.

The information in this book is based on Microsoft Windows, Linux and Novell NetWare and Open Enterprise Server. For each operating system the book contains unique configuration tips on using virtual disks, networking and more.

Table of Contents

  • Chapter 1 – VMware Overview
  • Chapter 2 – Fast Track to VMware Workstation
  • Chapter 3 – What’s New in VMware Workstation 5
  • Chapter 4 – Install Operating Systems and VMware Tools
  • Chapter 5 – Introduction to VMware ACE
  • Chapter 6 – General VMware Configuration and Tips
  • Chapter 7 – Virtual Disks, Floppies and CD-Roms
  • Chapter 8 – Performance Tuning and Optimization
  • Chapter 9 – Optimizing your Virtual Machine Environment
  • Chapter 10 – Transferring Data to and from a Virtual Machine
  • Chapter 11 – Networking Configurations
  • Chapter 12 – VMware Tips for Windows
  • Chapter 13 – VMware Tips for Linux
  • Chapter 14 – VMware Tips for Netware
  • Chapter 15 – Introduction to Physical to Virtual Conversion
  • Chapter 16 – Symantec Ghost’s Peer-to-Peer Imaging
  • Chapter 17 – Other Peer-to-Peer or Multicast Networking Solutions
  • Chapter 18 – Using Acronis True Image File Based Imaging via the Network
  • Chapter 19 – Create a Virtual Disk from an Image File
  • Chapter 20 – Imaging the Really Cheap Way
  • Chapter 21 – Modify your Restored Operating System to Work with VMware
  • Chapter 22 – Introduction to Clustering in VMware
  • Chapter 23 – Preparing for a Windows Server 2003 Cluster
  • Chapter 24 – Preparing for a Netware 6.5 Cluster
  • Chapter 25 – Shared Disk Cluster with VMware Workstation
  • Chapter 26 – Shared Disk Cluster with VMware GSX Server
  • Chapter 27 – Shared Disk Cluster with VMware ESX Server
  • Chapter 28 – Installing Novell Cluster Services
  • Chapter 29 – Configuring Microsoft Cluster Services Software
  • Chapter 30 – Introduction to VMware GSX Server
  • Chapter 31 – Hardware & Software Requirements and Design
  • Chapter 32 – Upgrading GSX Server Software
  • Chapter 33 – Installation and Configuration on Windows
  • Chapter 34 – Installation and Configuration on Linux
  • Chapter 35 – Managing GSX Server and Virtual Machines
  • Chapter 36 – Creating and Configuring Virtual Machines on GSX Server
  • Chapter 37 – GSX Server Advanced Configurations
  • Chapter 38 – VMware ESX Server Introduction
  • Chapter 39 – Getting Started with VMware ESX Server
  • Chapter 40 – Managing Virtual Machines
  • Chapter 41 – Some Extra Tips on Using VMware ESX Server
  • Chapter 42 – Introduction to VirtualCenter

You can also buy the electronic version of this book at http://books4brains.com

Books: The HP Virtual Server Environment

The HP Virtual Server Environment
Release Date: September, 2005
ISBN: 0131855220
Edition: 1
Pages: 552
Size: 9.0″ x 7.0″ x 1.5″

Summary
Use HP virtualization to maximize IT service quality, agility, and value.

  • Includes coverage of HP’s new Integrity Virtual Machines, Global Workload Manager, Virtualization Manager, and Capacity Advisor
  • Plan, implement, and manage virtualization to drive maximum business value
  • Understand HP’s virtualization solutions for partitioning, utility pricing, high availability, and management for HP Integrity and HP 9000 servers
  • Manage your existing resources to drive unprecedented levels of utilization

Virtualization offers IT organizations unprecedented opportunities to enhance service quality, improve agility, and reduce cost by creating an automated balance in system resources. Now, there’s a comprehensive guide to virtualization based on the industry’s most flexible and complete solution: HP’s Virtual Server Environment (VSE).

Two leading HP architects and customer consultants help you identify the best “sweet spot” VSE solution for your environment, then plan, implement, and manage it.

The HP Virtual Server Environment systematically introduces VSE technologies for partitioning, utility pricing, high availability, and management, as well as HP’s powerful, unique goal-based approach to workload management. Whether you’re a solution designer, architect, or engineer, you’ll find realistic examples, deep insight, and practical tips, all with one goal: to help you maximize the business value of virtualization.

  • Architect flexible, dynamic configurations that adapt instantly to business requirements
  • Choose the right solutions from HP’s partitioning continuum: nPars, vPars, HP Integrity Virtual Machines, and Secure Resource Partitions
  • Use utility pricing solutions to deploy instant, temporary, or pay-per-use capacity wherever you need it
  • Improve utilization and control your virtual environment with HP-UX Workload Manager, HP Serviceguard, HP Global Workload Manager, Virtualization Manager, and Capacity Advisor
  • Integrate VSE technologies into heterogeneous HP-UX, Linux, and Windows environments on HP Integrity and HP 9000 servers

About the Authors

Dan Herington is the Chief Architect for HP’s Virtual Server Environment Advanced Technology Center. The ATC is a lab-based organization whose mission is to ensure the success of customers implementing solutions based on the Virtual Server Environment technologies. For the past four years he has been an architect in the lab working on HP’s industry-leading Workload Manager and VSE Management products. He has also been responsible for communicating the technical vision of HP’s partitioning, utility pricing, and workload management products to the field and customers. This dual responsibility has provided him with a unique opportunity both to craft the technical message being delivered to HP’s field and customers, and to ensure that future versions of the products satisfy the requirements customers have for these solutions. He has delivered hundreds of seminars on HP’s Virtual Server Environment technologies at customer visits, HP field and customer training sessions, and trade shows throughout the world. Most recently he has been a key contributor to defining the vision for the next-generation Virtual Server Environment and its management tools, including a number of the new products covered by this book. This role has included working with the project teams that are responsible for delivering the individual products to ensure that they make up a well-integrated solution, and making sure it is easy for customers to realize the vision of an Adaptive Enterprise with these products.

Prior to rejoining HP in 2000, Dan held senior technical and management positions with a large systems integrator and a start-up software company. Dan started his career with HP in Cupertino, California, where he held various technical positions in the OpenView Program. He was involved in the transition of this program from Cupertino to Ft. Collins, Colorado and later moved to Grenoble, France, where he helped incubate the OpenView program in Europe in the early 1990s.

Bryan Jacquot is a Software Architect in HP’s Virtual Server Environment Advanced Technology Center. He has worked as both a software engineer and software architect on enterprise system administration applications for over six years. He developed Partition Manager, the first graphical user interface for managing HP’s Superdome servers. Additionally, Bryan developed Kcweb, the first HP-UX web-based system administration interface, which is used for tuning the HP-UX kernel. In his current position, he serves as a technical software architect for the next generation of system management tools for HP’s Virtual Server Environment. In addition, he works closely with customers to gather requirements and give presentations and training for HP’s Virtual Server Environment. Bryan holds a Bachelor of Science degree in Computer Science from Montana State University-Bozeman. Additionally, he is an HP Certified IT Professional, a Microsoft Certified System Administrator, and a Red Hat Certified Technician.

Microsoft discloses virtualization roadmap details

I just attended the live Microsoft Virtualization Roadmap webcast, arranged to detail what to expect from Microsoft virtualization in the near future.

One third of the presentation focused on what Virtual Server 2005 is and which management tools are available today.
Another third focused on what Virtual Server 2005 R2 will offer.
The last third focused on what technology will arrive in the coming years.

The webcast recording is available here: http://go.microsoft.com/?linkid=4018895

I’ll try to summarize what Mike Neil, Microsoft Virtualization Product Manager, said:

  • This year (as you already know) we can expect just the Virtual Server 2005 R2 RTM, with several features:
    • great performance improvement in memory handling and CPU usage
    • clustering for hosts (DAS and SAN, with Fibre Channel and iSCSI supported), host failover, and VM migration between hosts (the downtime will depend on the network speed).
      Host clustering will be available for free as a separate download.
    • 64-bit architecture support for hosts (while guests will stay on a 32-bit virtual architecture until the Longhorn timeframe)
    • PXE boot support for virtual network interfaces
    • Linux support
    • Win2003 SP1 and WinXP SP2 support
  • In 1H 2006 a new Virtual Server version will hit beta, with RTM scheduled for 2H 2006, providing:
    • much better performance for Linux VMs
    • AMD and Intel virtualization technology support (mainly providing better performance for third-party VMs)
  • In the Longhorn Server timeframe (nothing more specific) a new virtualization technology will appear, with these planned features:
    • hypervisor technology with microkernel approach (virtualization device drivers will not stay in the hypervisor)
    • integrated in every Longhorn Server version (will not require a dedicated OS edition)
    • AMD and Intel virtualization technology support (mainly providing better performance for third-party VMs)
    • one parent partition (containing the virtualization stack) and multiple dependent child virtual partitions
    • 64-bit and 32-bit guests
    • multiprocessing support (up to 4 CPUs) for guests
    • mixed virtualized and emulated devices
    • live snapshots of child partitions (through Volume Shadow Copy Service integration)
  • At the same time a new wave of virtualization management tools will be released:
    • enterprise-level support for thousands of physical and virtual machines (I think he’s referring to the upcoming System Center management product)

Since the slides often referred to third-party VMs during the webcast, I had the impression Microsoft could start supporting other guest OSes, like Solaris.

During the closing Q&A, Mike Neil officially answered the classic Hyper-Threading question: he said Virtual Server 2005 environments should turn it off until R2.

Egenera BladeFrame System to support Xen Hypervisor

Quoting from the XenSource official announcement:

Egenera Inc., a global leader in utility computing, and XenSource, Inc., the leader in infrastructure-virtualization solutions based on the open source Xen hypervisor, today announced an alliance that will provide enterprise customers with the industry’s most integrated, available and manageable virtualized environment—from CPU to datacenter. Under the terms of the agreement, Egenera and XenSource will support the Xen hypervisor on the Egenera® BladeFrame® product line. Additionally, XenSource will join Egenera’s Accelerate alliance program in order to ensure seamless support of the integrated solution to customers. Combining Xen with the Egenera BladeFrame will enable customers to better utilize today’s ultra-fast processors and manage virtual machines more simply and effectively—enabling server consolidation and decreasing IT costs and administration time.

“We believe that managing virtual resources is the next big battleground in the industry and that Xen is emerging as a key technology for the enterprise,” said Pete Manca, senior vice president of engineering at Egenera. “Our alliance with XenSource gives customers a clear path to creating a totally virtualized environment that is highly available, manageable and secure. The synergy between the Egenera BladeFrame and the open source Xen hypervisor will help customers maximize the utilization of their computing resources and realize the business benefits of true utility computing.”

“With its virtualization of server, storage and network resources, the Egenera BladeFrame is the ideal platform for running Xen,” said Moshe Bar, CTO of XenSource. “Xen is the industry’s choice for virtualization of mission-critical applications in the datacenter, because of its outstanding performance and its adoption as an open industry standard. When running on the Egenera BladeFrame, Xen provides fine-grained control and virtualization of CPU, memory, network and storage resources, enabling CIOs to increase utilization and reduce TCO. This alliance will bring manageable and cost-effective virtual computing to enterprise datacenters worldwide.”

The Egenera BladeFrame system is a new server architecture specifically designed to reduce datacenter complexity and improve business responsiveness. The BladeFrame’s advanced virtualization technology and Egenera PAN Manager software dynamically allocate and repurpose servers to applications as needed without manual intervention. PAN Manager software will interoperate seamlessly with Xen virtualization to provide comprehensive control of the virtualized infrastructure.

Xen is the industry-standard, open source, infrastructure-virtualization software created and maintained by the founders of XenSource, Inc. and developed collaboratively by 20 of the world’s most innovative datacenter solution vendors, including Egenera. Xen allows multiple virtual server instances to run concurrently on the same physical server, with near-native performance and per-virtual-server performance guarantees. The technology is being adopted worldwide in enterprise datacenters to support server consolidation and reduce total cost of ownership. Xen 3.0, to be released shortly, supports up to 32-way SMP workloads as well as 32- and 64-bit processors and Intel® VT-x virtualization technology.

Sun to put VMware virtualization on its Sun Fire and StorEdge

Quoting from Enterprise Networks & Servers:

Sun Microsystems Inc. will deliver VMware’s full line of server virtualization capabilities on Sun’s Sun Fire x64 (x86, 64-bit) servers and the Sun StorEdge 6920 system. In addition, Sun and VMware also signed a technology agreement to provide support for the Solaris 10 Operating System (OS) as a guest OS on future VMware server and desktop products.

As a result of these agreements, Sun will resell the VMware ESX Server, VMware GSX Server and VMware Workstation products, providing customers purchasing Sun Fire x64 servers or workstations from Sun, such as the single- and dual-core Sun Fire V20z and Sun Fire V40z, with the option of adding the virtualization software to their systems.

Sun joins Dell, HP, IBM, NEC and FSC in offering VMware ESX Server on its servers.

VMware ESX Server is virtual infrastructure software for partitioning, consolidating and managing systems in mission-critical environments. With VMware ESX Server on Sun Fire x64 servers and support for the Solaris OS, IT organizations can extend their options to easily provision new services running on the Solaris 10 OS, Windows and standard Linux distribution operating systems on the same piece of hardware, thus helping to increase utilization of the servers.

VMware customers will be able to take advantage of key features of Solaris 10, including Solaris Dynamic Tracing (DTrace) and Solaris Containers on any architecture. With technology advances such as enhancements to the network stack and special optimizations for multithreaded x64/x86 architectures, Solaris 10 is powering enterprise applications at record speeds on Sun Fire systems. In addition, the Sun StorEdge 6920 system, Sun’s flagship mid-tier storage system, complements VMware ESX Server by enabling customers to implement cost-effective consolidation initiatives and robust business continuity capabilities.

Enterprises in many industries typically run one application service on one server, and often utilize only 5 to 15 percent of their Linux or Windows server hardware capability. With virtual infrastructure, IT organizations can provision new services, change the amount of resources dedicated to a software service easily and consolidate disparate systems.