Microsoft Virtual Server 2005 review by BentUser

Virtual Server 2005 receives yet another review, this time by the BentUser portal.

You may find it interesting, since it includes some performance comparisons with real hardware in various scenarios.
The results should nevertheless be taken with caution: no documentation is provided about how the host OS was configured, and the hardware used is not what anyone would choose for a virtualization server.

Read it here.

VMware totally revamps web site

VMware has just restyled its whole website. I find the new layout much more rational, clearer and simpler to interact with.

An interesting addition is a new comparison between the GSX and ESX Server products. In it, VMware clearly states something not widely known among customers: GSX Server achieves at best 4 VMs per CPU, while ESX Server reaches 8 VMs per CPU.

It seems VMware is preparing to launch Workstation 5.5 in great style in the VMworld 2005 timeframe.

Citrix positions itself as an Application Virtualization vendor

The buzzword of the moment is surely virtualization. Every vendor I see these months is trying to adapt its old and new product descriptions to fit the virtualization concept.

Citrix shouldn’t need to, but Brian Madden reports that the Citrix iForum conference brought a more explicit description of its flagship product, Presentation Server, now presented as an application virtualization solution.

This brings me back to a post from six months ago, where I suggested the IT industry adopt a standard naming convention for the various virtualization technologies. Something we probably still need.

PremiTech enhances End-User Systems Management Solution for Citrix and VMware

Quoting from the PremiTech official announcement:

PremiTech today announced the latest version of its proven software for managing end-user quality of IT service, Performance Guard™ 4.1. This new version offers Citrix Presentation Server™ customers enhanced usability and notification features, as well as support for applications virtualized on VMware servers. PremiTech will showcase Performance Guard, developed in close cooperation with Citrix and its customers, in Booth #604 at Citrix® iForum™ 2005, October 9-12, in Las Vegas, Nev.

Performance Guard is utilized by more than 200 global companies including an impressive array of some of the largest Citrix customers in the world, such as Ingersoll-Rand, Maersk Sealand, General Electric, Verizon Wireless, State Street Bank and the U.S. Environmental Protection Agency. The product is supported and distributed by leading Citrix platinum partners, such as Bell Business Solutions, IBM Global Services, RapidApp, DynTek, Vector-MTM and IPM. PremiTech also recently received the DABCC Seal of Approval for its ability to solve customers’ problems and achieve the highest standards of excellence.

With Performance Guard, IT organizations are able to monitor end-user performance and network latency, as well as the service on Presentation Server and supporting back-end servers to obtain consistent, accurate measurement of Citrix application performance in real time. The new release makes it even easier for IT staff to proactively ensure optimal service delivery for their business users while reducing operational costs through early resolution of problems.

New features of Performance Guard 4.1 include:

  • Added performance metrics: Especially important for Presentation Server users, Performance Guard now monitors additional key performance indicators, such as context switches.
  • Improved network analysis: Performance Guard monitors the entire network to pinpoint and document the exact cause of poor response times and broken sessions.
  • Improved distribution of scheduled reporting: Performance alarms and reports can be sent automatically to appropriate personnel inside and outside of IT.
  • Enhanced alarms and auto-baselining: Performance Guard intuitively configures itself to recognize baseline performance and notify IT at any point when the system deviates from normal conditions.
  • Strengthened correlation engine: Performance Guard now maintains server metrics for an extended period of time, providing the ability to correlate application and system performance for improved root cause identification.
  • Active Citrix login measurement: The Performance Guard agent now makes repeated login measurements in order to ensure the Citrix server is up and running and to identify differences in login times.
  • VMware support: For firms consolidating servers and using virtualized applications in their production environments, Performance Guard now performs baseline measurements and provides pre- and post-migration metrics for the VMware virtual infrastructure.

“This latest release shows that PremiTech is dedicated to making our jobs easier and helping us reduce support costs through improved efficiencies,” said Dean Matvey, Network Services Division of Ingersoll-Rand’s Infrastructure Sector. “With Performance Guard, we are able to provide a precise explanation on the cause of system issues and an exact account of how we’re going to fix them. You couldn’t get that from any other systems management product.”

About Performance Guard
Performance Guard is an off-the-shelf, standard solution that provides the ability to monitor end-user quality of IT service in Desktop, Citrix and VMware environments. It delivers consistent, accurate measurement of application performance from the end-user perspective in real time. This data is essential for the rapidly growing number of companies using access infrastructure solutions from Citrix to measure mission-critical application performance, identify performance bottlenecks and resolve issues. Using one centralized server with Performance Guard, companies are able to improve system performance, increase user productivity, reduce expenses in infrastructure and help-desk resources, and ensure compliance with Service Level Agreements (SLA).

The virtues of virtualization

Quoting from CIO Asia:

During the past few decades, CIOs have stood at the center of one of the great technological revolutions in history: the replacement of the physical atom by the computational bit as the medium of commerce and culture. The profession might be forgiven for thinking that nothing is left for the next generation but tinkering. What could compare with a transition like that?

Actually, something almost as big might be coming over the horizon: the replacement of the bit with the virtual bit. Virtualization is the substitution of physical computing elements, either hardware or software, with artificial impostors that exactly replicate the originals, but without the sometimes inconvenient need for those originals to actually exist. Need a 1 terabyte hard drive, but only have 10 100GB drives? No problem, virtualization software can provide an interface that makes all 10 drives look and act like a single unit to any inquiring application. Got some data you need from an application you last accessed in 1993 on an aging MicroVAX 2000 that hit the garbage bin a decade ago? A virtual Digital VMS simulator could save your skin.

Stated like that, virtualization can sound like little more than a quick and dirty hack, and indeed, for most of the history of computing, that is exactly how the technique was viewed. Its roots lie in the early days of computing, when it was a means of tricking single-user, single-application mainframe hardware into supporting multiple users on multiple applications. But as every aspect of computing has grown more complex, the flexibility and intelligence that virtualization adds to the management of computing resources have become steadily more attractive. Today it stands on the lip of being the next big thing.

Raising the Dead
The Computer History Simulation Project, coordinated by Bob Supnik at SiCortex, uses virtualization to fool programs of historical interest into thinking that they are running on computer hardware that vanished decades ago. Supnik’s project has a practical end as well: Sometimes old systems are so embedded in the corporate landscape that they must be kept running. If the real hardware is unavailable, the only way to keep the old machines running is to virtualize them.

In a more contemporary example of the power of virtualization, about three years ago J. R. Simplot, a $3 billion food and agribusiness company in Boise, Idaho, found itself in a phase of especially rapid growth in server deployments. Of course, with rapid growth comes the headache of figuring out how to do everything faster. In this case, the company’s IT center concluded that their old server procurement system had to be accelerated.

Servers, of course, are pieces of physical equipment; they come with their own processing, memory, storage resources and operating systems. What the Simplot team did was use virtualization tools from VMware, a virtual infrastructure company, to create software-only servers that interacted with the network just like hardware servers, although they were really only applications. Whenever Simplot needed another server it would just flip the switches appropriate to the server type (Web, application, database, e-mail, FTP, e-commerce and so on). From that point, an automated template generated the virtual machine on a specific VMware ESX host machine.

Virtual Improvements
According to Tony Adams, a technology analyst at Simplot, there were gains all across the board. The time to get a new server up and running on the system went from weeks to hours or less. Uptime also increased, because the servers were programs and could run on any supported x86 hardware anywhere. If a machine failed or needed maintenance, the virtual server could be quickly moved to different hardware.

Perhaps most important were the gains in utilization efficiencies. Servers are built for specific roles. Sometimes demand for a particular role is in sync with available resources, but usually it isn’t. In the case of “real” servers, if there is a mismatch, then there is nothing that you can do about it; you’re stuck with what you have. If you end up with an average utilization rate of 10 percent per server, so be it. (The need to provide for peak demand makes the problem worse, and utilization can often be far below even 10 percent.) Low utilization means IT is stuck with unnecessary maintenance issues, security faces unnecessary access issues (they have to worry about protecting more machines), and facilities must deal with unnecessary heat and power issues.

Virtualization fixes these problems. The power to design any kind and number of servers that you like allows you to align capacity with load continuously and precisely. In the case of Simplot, once Adams’s servers turned virtual, he was able to deploy nearly 200 virtual servers on only a dozen physical machines. And, he says, typical CPU, network, disk and memory utilization on the VMware ESX boxes is greater than 50 percent—compared with utilization of around 5 percent on dedicated server hardware.

Virtualization also makes disaster recovery planning simpler, because it allows you to write server clusters appropriate to whatever infrastructure you have on hand. As Adams points out, conventional disaster recovery schemes force you to have an exact clone of your hardware sitting around doing nothing. “But personally, what I really like,” he says, “is the remote manageability. I can knock out new [servers] or do repairs anywhere on the Net, without even going to the data center.”

Adams wants one machine to look like many machines, but it is just as possible to virtualize the other way: making many machines look like one. Virtualization underlies the well-known RAID storage tricks that allow many disks to be treated as one huge drive for ease of access, and one disk to be treated as many for the purpose of robust backup. Another prime use for virtualization is development. The hardware world is growing much more complex all the time: Product cycles are turning faster, the number of device types is always rising, and the practice of running programs over networks means that any given program might come in contact with a huge universe of hardware. Developers can’t begin to afford to buy all of this hardware for testing, and they don’t need to: Running products on virtualized models of the hardware allows for quality assurance without the capital expense. Virtualizing the underlying hardware also gives developers far more control. Peter Magnusson, CTO of Virtutech, a systems simulation company in San Jose, Calif., points out that you can stop simulated hardware anywhere you like, any time you want to investigate internal details.

Unreal Future
During the next year or two, virtualization is on track to move from its current success in storage, servers and development, to networks and data centers. So CIOs will then be able to build software versions of firewalls, switches, routers, load balancers, accelerators and caches, exactly as needed. Everything that was once embodied in cards, disks and physical equipment of any kind, will be organized around a single point of control. If virtualization vendor promises materialize, changes that once were out of the question, or that at least would have required considerable man-hours and operational risk, will be done in minutes, routinely.

What those changes will mean is very much a topic for current discussion. For instance, all the new knobs and buttons virtualization provides will raise issues of policy, because it will be possible to discriminate among classes of service that once had to be handled together. You will, for instance, be able to write a Web server that gives customers who spend above a certain limit much better service than those who spend only half as much. There will be huge opportunities for automation. Infrastructure may be able to reconfigure itself in response to changes in demand, spinning out new servers and routers as necessary, the way load balancing is done today. (Certainly IBM et al. have been promoting just such a vision of the on-demand computing future.)

Virtualization examples so far have all been hardware-centric, because the inherent inflexibility of hardware means the elasticity advantages of virtualization are greater than with software. However, virtualization can work anywhere in the computing stack. You can virtualize both the hardware and the operating system, which allows programs written for one OS to run on another, and programs written for a virtual OS to run anywhere (similar to how Java maintains its hardware independence through the Java Virtual Machine).

Quite possibly the growth of virtualization predicts a deep change in the responsibilities of CIOs. Perhaps in the not-too-distant future no CIO will ever think about hardware: Raw physical processing and storage will be bought in bulk from information utilities or server farms. Applications will be the business of the departments or offices requiring them. The center of a CIO’s job will be the care and feeding of the execution environment. The very title of CIO might vanish, to be replaced, of course, by CVO.

Taking It All In
In that world, virtualization could graduate into a full-throated simulation of entire systems, the elements of which would not be just computing hardware, as now, but all the motors, switches, valves, doors, engines, vehicles and sensors in a company. The model would run in parallel with the physical company and in real-time. Where now virtualization is used for change management, disaster recovery planning, or maintenance scheduling for networks and their elements, it would in the future do the same for all facilities. Every object or product sold would come with a model of itself that could fit into one of these execution environments. It would be the CVO’s responsibility to make sure that each company’s image of itself was accurate and complete and captured the essentials. And that would not be a virtual responsibility in the least.
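To make the drive-aggregation example from the quoted article concrete, here is a minimal sketch of the idea in Python: several small backing drives presented to the caller as one linear volume. The class and method names are my own illustration, not any vendor’s API, and real volume managers add striping, redundancy and metadata on top of this basic address translation.

```python
class VirtualVolume:
    """Presents several fixed-size backing drives as one linear address space.

    This is plain concatenation, the simplest possible aggregation: a linear
    offset is translated to (drive index, offset within that drive). Sizes are
    scaled down here; the article's example would be ten 100 GB drives
    appearing as a single 1 TB volume.
    """

    def __init__(self, drive_size_bytes, drive_count):
        self.drive_size = drive_size_bytes
        # Each bytearray stands in for one physical drive.
        self.drives = [bytearray(drive_size_bytes) for _ in range(drive_count)]

    @property
    def capacity(self):
        # The application only ever sees this single combined capacity.
        return self.drive_size * len(self.drives)

    def _locate(self, offset):
        # Map a linear offset onto (drive index, offset within that drive).
        return divmod(offset, self.drive_size)

    def write(self, offset, data):
        # Writes transparently span drive boundaries.
        for i, byte in enumerate(data):
            drive, pos = self._locate(offset + i)
            self.drives[drive][pos] = byte

    def read(self, offset, length):
        out = bytearray()
        for i in range(length):
            drive, pos = self._locate(offset + i)
            out.append(self.drives[drive][pos])
        return bytes(out)


# A 5-byte write starting at offset 98 lands partly on drive 0, partly on
# drive 1, yet the caller neither knows nor cares.
vol = VirtualVolume(100, 10)
vol.write(98, b"hello")
print(vol.capacity, vol.read(98, 5))
```

The point of the sketch is the `_locate` translation step: everything the article calls virtualization, whether of drives, servers or whole data centers, is some richer version of that indirection layer.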

PlateSpin will be optimized for Microsoft Virtual Server

Quoting from the PlateSpin official announcement:

PlateSpin Ltd today announced that their patent-pending OS Portability PowerX product line is optimized for Microsoft’s current and future virtualization technologies, including Microsoft Virtual Server 2005 R2, through its licensing of the Microsoft virtual hard disk (VHD) format. Platespin shares with Microsoft a focus on enabling self-managing dynamic systems that deliver higher business value through automation, flexible resource utilization, interoperability and knowledge-based processes. PlateSpin PowerConvert works with Microsoft Virtual Server to optimize the data center by automating migrations and accelerating server consolidation projects. Additionally, PowerConvert is ideal for ensuring business continuity through rapid infrastructure independent recovery, and easing test lab deployment and development.

“Microsoft’s commitment to virtualization is evidenced by their decision to license the VHD format royalty-free, making it easy for vendors, such as PlateSpin, to work effectively with Microsoft Virtual Server 2005,” said Stephen Pollack, PlateSpin CEO. “With a clear roadmap, and the openness of the VHD format licensing, Microsoft is well positioned to grow the adoption of future versions of Windows virtualization technology into production environments in data centers across the world.”

PlateSpin PowerConvert automates the deployment of Microsoft VHDs across the data center infrastructure and completely decouples the software layer making it independent from the hardware it was previously reliant upon.

“Microsoft is licensing the VHD format as royalty free to provide strategic vendors, such as Platespin, the ability to accelerate development on the Virtual Server platform,” said Zane Adam, director of marketing, Windows Server Division, Microsoft Corp. “We are enabling the partner ecosystem to create value-added solutions to the Windows platform. The Microsoft VHD file format, and other standards initiatives, will help ensure a smooth transition to future versions of Windows virtualization technology, and provides cross platform support and an ongoing common format for easier security, management, reliability and migration.”

Multi-directional migrations are made possible with PlateSpin’s underlying OS Portability technology, which performs automated migrations from virtually any source to any target in the data center including physical servers, blade servers, virtual machines, and archive/backup images.

“Microsoft Virtual Server 2005 is a critical component in many server consolidation projects that we see today,” said Eric Courville, vice president of Global Sales and Business Development from PlateSpin. “The combination of PlateSpin PowerConvert and Microsoft Virtual Server 2005 is a proven solution, and we are working closely with Microsoft to ensure that we leverage their technology today and into the future so that we can offer our customers a market-leading solution for optimizing the data center.”

PlateSpin PowerConvert is available today for use with industry leading server technologies, image archives, and virtualization products including Microsoft Virtual Server. For more information on PlateSpin PowerConvert and its underlying OS Portability technology, please visit www.platespin.com.

Acronis True Image Enterprise Server extends disaster recovery capabilities using Microsoft Virtual Server 2005 R2

Quoting from the Acronis official announcement:

Acronis Inc., the technological leader in storage management software, announced that Acronis True Image Enterprise Server extends Microsoft’s current and future virtualization technologies, including Microsoft Virtual Server 2005 R2, and can provide full disaster recovery for both physical and virtual servers.

“As system virtualization becomes mainstream, IT managers will find a greater need for disk imaging for disaster recovery and systems deployment,” said Walter Scott, CEO of Acronis. “Acronis True Image Enterprise Server today provides the disaster recovery and bare-metal restore requirements for both the physical server and the virtual servers residing on it.”

“Combining Acronis’ real-time disk imaging and backup capabilities with Microsoft’s virtualization technologies, and royalty-free Virtual Hard Disk (VHD) license, provides a powerful value proposition for enterprise and hosting customers,” he said. “Server virtualization is the enabling technology for tomorrow’s dynamic data centers, but without comprehensive tools like those offered by Microsoft, enterprises will struggle to fully realize the benefits of the dynamic data centers.”

“Enterprises will be able to provision new physical servers instantaneously using Acronis’ award-winning disk imaging solution on systems running Microsoft Virtual Server 2005 R2,” said Stephen Lawton, Acronis director of marketing. “Once deployed, the new servers can be partitioned into virtual servers to help better utilize a customer’s server investment.”

“Recent events along the Gulf Coast and power outages in Los Angeles have brought to the fore the need for companies large and small to have disaster recovery strategies,” he continued. “As virtual servers are introduced into more networks, IT managers need a disaster recovery plan that addresses both physical and virtual systems.”

“Acronis True Image Enterprise Server helps extend Virtual Server 2005 R2 capabilities to aid customers in developing powerful disaster recovery strategies” said Zane Adam, director of marketing, Windows Server Division, Microsoft Corp. “The work Microsoft is doing to enable the partner ecosystem with virtualization technologies demonstrates our openness so that partners can add value on top of the Windows platform.”

Acronis True Image Enterprise Server is currently being shipped. It carries a suggested list price of $999; volume discounts are available. For additional information, visit www.acronis.com/enterprise/products/ATIESWin/.

Novell SuSE releases Linux 10.0 and embeds Xen 3.0 technical preview

Novell has just released SuSE Linux 10.0 (previously referred to as SuSE Linux Professional).
It now includes a Xen 3.0 technical preview (build 6715), which supports the AMD and Intel virtualization enhancements.

Quoting from the xen-3.0_6715-2.i586.rpm package:

Xen is a virtual machine monitor for x86 that supports execution of multiple guest operating systems with unprecedented levels of performance and resource isolation.

This package contains the Xen HyperVisor.

Modern computers are sufficiently powerful to use virtualization to present the illusion of many smaller virtual machines (VMs), each running a separate operating system instance. Successful partitioning of a machine to support the concurrent execution of multiple operating systems poses several challenges. Firstly, virtual machines must be isolated from one another: it is not acceptable for the execution of one to adversely affect the performance of another. This is particularly true when virtual machines are owned by mutually untrusting users. Secondly, it is necessary to support a variety of different operating systems to accommodate the heterogeneity of popular applications. Thirdly, the performance overhead introduced by virtualization should be small.
Xen uses a technique called paravirtualization: The guest OS is modified, mainly to enhance performance.

The Xen hypervisor (microkernel) does not provide device drivers for your hardware (except for CPU and memory). This job is left to the kernel running in domain 0. Thus the domain 0 kernel is privileged; it has full hardware access. It is started immediately after Xen itself boots. Other domains have no access to the hardware; instead they use virtual interfaces provided by Xen (with the help of the domain 0 kernel).

Xen also allows booting other operating systems; ports of NetBSD (Christian Limpach), FreeBSD (Kip Macy) and Plan 9 (Ron Minnich) exist.
A port of Windows XP was developed for an earlier version of Xen, but is not available for release due to licence restrictions.
In addition to this package you need to install the kernel-xen and xen-tools to use Xen.
Xen 3 also supports full emulation and allows running unmodified guests, relying on hardware support. Install xen-tools-ioemu if you want to use this.
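To give an idea of the paravirtualized setup the package description talks about, a minimal Xen 3 guest (domU) definition looked roughly like the following. The paths, image file and VM name are illustrative examples, not taken from the SuSE packages:

```
# /etc/xen/vm1 -- minimal paravirtualized guest (domU) definition
kernel  = "/boot/vmlinuz-xen"         # Xen-aware guest kernel
ramdisk = "/boot/initrd-xen"          # matching initrd
memory  = 256                         # MB assigned to the guest
name    = "vm1"
disk    = ["file:/var/lib/xen/images/vm1.img,hda1,w"]  # virtual block device
vif     = [""]                        # one virtual NIC, bridged via domain 0
root    = "/dev/hda1 ro"
```

The guest would then be started from domain 0 with `xm create vm1`, with all its disk and network I/O flowing through the virtual interfaces that domain 0 backs with the real drivers.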

Quoting from the xen-tools-ioemu-3.0_6715-2.i586.rpm package:

Xen is a virtual machine monitor for x86 that supports execution of multiple guest operating systems with unprecedented levels of performance and resource isolation.

This package contains the needed BIOS and device emulation code to support unmodified guests. (You need virtualization support in hardware to make use of this.)
….