The virtues of virtualization

Quoting from CIO Asia:

During the past few decades, CIOs have stood at the center of one of the great technological revolutions in history: the replacement of the physical atom by the computational bit as the medium of commerce and culture. The profession might be forgiven for thinking that nothing is left for the next generation but tinkering. What could compare with a transition like that?

Actually, something almost as big might be coming over the horizon: the replacement of the bit with the virtual bit. Virtualization is the substitution of physical computing elements, either hardware or software, with artificial impostors that exactly replicate the originals, but without the sometimes inconvenient need for those originals to actually exist. Need a 1 terabyte hard drive, but only have ten 100GB drives? No problem: virtualization software can provide an interface that makes all ten drives look and act like a single unit to any inquiring application. Got some data you need from an application you last accessed in 1993 on an aging MicroVAX 2000 that hit the garbage bin a decade ago? A virtual Digital VMS simulator could save your skin.
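As a rough illustration of the many-drives-as-one idea, the hypothetical Python sketch below (illustrative names and sizes, not any vendor’s product) presents several small backing stores as one large logical drive by translating logical offsets:

```python
# Hypothetical sketch: present several small backing files as one large
# logical "drive" by mapping a logical offset to (backing file, local offset).
# Paths and sizes are illustrative only.

class ConcatenatedDisk:
    def __init__(self, backing_paths, drive_size):
        self.paths = backing_paths      # e.g. ten 100GB block files
        self.drive_size = drive_size    # capacity of each backing file in bytes

    def _locate(self, offset):
        # Which backing file holds this logical offset, and where in it?
        index, local = divmod(offset, self.drive_size)
        return self.paths[index], local

    def read(self, offset, length):
        # Simplified: assumes the read does not cross a backing-file boundary.
        path, local = self._locate(offset)
        with open(path, "rb") as f:
            f.seek(local)
            return f.read(length)

# Ten 100GB backing files behave, to the caller, like a single 1TB drive.
disks = [f"/var/vdisks/disk{i}.img" for i in range(10)]
volume = ConcatenatedDisk(disks, drive_size=100 * 10**9)
```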

Stated like that, virtualization can sound like little more than a quick and dirty hack, and indeed, for most of the history of computing, that is exactly how the technique was viewed. Its roots lie in the early days of computing, when it was a means of tricking single-user, single-application mainframe hardware into supporting multiple users on multiple applications. But as every aspect of computing has grown more complex, the flexibility and intelligence that virtualization adds to the management of computing resources have become steadily more attractive. Today it stands on the cusp of being the next big thing.

Raising the Dead
The Computer History Simulation Project, coordinated by Bob Supnik at SiCortex, uses virtualization to fool programs of historical interest into thinking that they are running on computer hardware that vanished decades ago. Supnik’s project has a practical end as well: Sometimes old systems are so embedded in the corporate landscape that they must be kept running, and if the real hardware is no longer available, virtualizing those machines is the only way to do it.

In a more contemporary example of the power of virtualization, about three years ago J. R. Simplot, a $3 billion food and agribusiness company in Boise, Idaho, found itself in a phase of especially rapid growth in server deployments. Of course, with rapid growth comes the headache of figuring out how to do everything faster. In this case, the company’s IT center concluded that its old server procurement process had to be accelerated.

Servers, of course, are pieces of physical equipment; they come with their own processing, memory, storage resources and operating systems. What the Simplot team did was use virtualization tools from VMware, a virtual infrastructure company, to create software-only servers that interacted with the network just like hardware servers, although they were really only applications. Whenever Simplot needed another server, it would just flip the switches appropriate to the server type (Web, application, database, e-mail, FTP, e-commerce and so on); from that point, an automated template generated the virtual machine on a specific VMware ESX host machine.
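What such a template-driven workflow could look like is sketched below in Python; the template names, pick_host() and the provisioning step are hypothetical stand-ins for illustration, not VMware ESX interfaces.

```python
# Hypothetical sketch of template-driven server provisioning.
# TEMPLATES, pick_host() and provision() are illustrative stand-ins,
# not VMware ESX interfaces.

TEMPLATES = {
    "web":      {"cpus": 2, "memory_gb": 4,  "disk_gb": 40},
    "database": {"cpus": 4, "memory_gb": 16, "disk_gb": 200},
    "mail":     {"cpus": 2, "memory_gb": 8,  "disk_gb": 100},
}

def pick_host(hosts):
    # Place the new virtual machine on the least-loaded physical host.
    return min(hosts, key=lambda h: h["vm_count"])

def provision(server_type, hosts):
    spec = TEMPLATES[server_type]   # "flip the switches" for the chosen role
    host = pick_host(hosts)
    host["vm_count"] += 1
    # A real deployment would call the hypervisor's management interface here;
    # this sketch just returns the request that would be sent.
    return {"host": host["name"], "type": server_type, **spec}

hosts = [{"name": "esx01", "vm_count": 14}, {"name": "esx02", "vm_count": 9}]
print(provision("web", hosts))
```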

Virtual Improvements
According to Tony Adams, a technology analyst at Simplot, there were gains all across the board. The time to get a new server up and running on the system went from weeks to hours or less. Uptime also increased, because the servers were programs and could run on any supported x86 hardware anywhere. If a machine failed or needed maintenance, the virtual server could be quickly moved to different hardware.

Perhaps most important were the gains in utilization efficiencies. Servers are built for specific roles. Sometimes demand for a particular role is in sync with available resources, but usually it isn’t. In the case of “real” servers, if there is a mismatch, there is nothing you can do about it; you’re stuck with what you have. If you end up with an average utilization rate of 10 percent per server, so be it. (The need to provide for peak demand makes the problem worse, and utilization can often be far below even 10 percent.) Low utilization means IT is stuck with maintenance it doesn’t need, security has more machines to protect, and facilities must deal with unnecessary heat and power loads.

Virtualization fixes these problems. The power to create any kind and number of servers you like allows you to align capacity with load continuously and precisely. In the case of Simplot, once Adams’s servers went virtual, he was able to deploy nearly 200 virtual servers on only a dozen physical machines. And, he says, typical CPU, network, disk and memory utilization on the VMware ESX boxes is greater than 50 percent, compared with around 5 percent on dedicated server hardware.
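The arithmetic behind those numbers is easy to restate; the snippet below simply recomputes the consolidation ratio and the per-machine utilization gain from the figures Adams cites.

```python
# Back-of-the-envelope consolidation arithmetic using the figures above.
servers_before = 200    # one dedicated box per server role
hosts_after = 12        # ESX hosts carrying the same ~200 virtual servers
util_before = 0.05      # ~5% utilization on dedicated hardware
util_after = 0.50       # >50% utilization on the consolidated hosts

consolidation_ratio = servers_before / hosts_after   # ~16.7 virtual servers per host
utilization_gain = util_after / util_before          # 10x better use of each machine

print(f"{consolidation_ratio:.1f} virtual servers per physical host")
print(f"{utilization_gain:.0f}x utilization per machine")
```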

Virtualization also makes disaster recovery planning simpler, because it allows you to re-create server clusters on whatever infrastructure you have on hand. As Adams points out, conventional disaster recovery schemes force you to have an exact clone of your hardware sitting around doing nothing. “But personally, what I really like,” he says, “is the remote manageability. I can knock out new [servers] or do repairs anywhere on the Net, without even going to the data center.”

Adams wants one machine to look like many machines, but it is just as possible to virtualize the other way: making many machines look like one. Virtualization underlies the well-known RAID storage tricks that allow many disks to be treated as one huge drive for ease of access, and one disk to be treated as many for the purpose of robust backup.

Another prime use for virtualization is development. The hardware world is growing much more complex all the time: Product cycles are turning faster, the number of device types is always rising, and the practice of running programs over networks means that any given program might come in contact with a huge universe of hardware. Developers can’t begin to afford to buy all of this hardware for testing, and they don’t need to: Running products on virtualized models of the hardware allows for quality assurance without the capital expense. Virtualizing the underlying hardware also gives developers far more control. Peter Magnusson, CTO of Virtutech, a systems simulation company in San Jose, Calif., points out that you can stop simulated hardware anywhere you like, any time you want to investigate internal details.
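Magnusson’s point about control is easiest to see with a toy example; the sketch below (a hypothetical Python illustration, not Virtutech’s product) steps a tiny simulated machine one instruction at a time and can pause at any point to inspect its internal state.

```python
# Hypothetical toy machine simulator: because the "hardware" is just data,
# execution can be paused after any instruction to inspect internal state.

class ToyMachine:
    def __init__(self, program):
        self.program = program   # list of (operation, operand) tuples
        self.pc = 0              # program counter
        self.acc = 0             # single accumulator register

    def step(self):
        op, arg = self.program[self.pc]
        if op == "load":
            self.acc = arg
        elif op == "add":
            self.acc += arg
        self.pc += 1

    def run(self, break_at=None):
        while self.pc < len(self.program):
            if self.pc == break_at:
                # Stop the "hardware" anywhere and look inside it.
                return f"paused at pc={self.pc}, acc={self.acc}"
            self.step()
        return f"halted with acc={self.acc}"

m = ToyMachine([("load", 2), ("add", 3), ("add", 5)])
print(m.run(break_at=2))   # pause mid-program and inspect the simulated state
```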

Unreal Future
During the next year or two, virtualization is on track to move from its current success in storage, servers and development to networks and data centers. CIOs will then be able to build software versions of firewalls, switches, routers, load balancers, accelerators and caches, exactly as needed. Everything that was once embodied in cards, disks and physical equipment of any kind will be organized around a single point of control. If virtualization vendors' promises materialize, changes that once were out of the question, or that at least would have required considerable man-hours and operational risk, will be done in minutes, routinely.

What those changes will mean is very much a topic for current discussion. For instance, all the new knobs and buttons virtualization provides will raise issues of policy, because it will be possible to discriminate among classes of service that once had to be handled together. You will be able, say, to write a Web server that gives customers who spend above a certain limit much better service than those who spend only half as much. There will be huge opportunities for automation. Infrastructure may be able to reconfigure itself in response to changes in demand, spinning out new servers and routers as necessary, the way load balancing is done today. (Certainly IBM et al. have been promoting just such a vision of the on-demand computing future.)
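The automation scenario above is essentially a control loop; the sketch below is a hypothetical Python illustration of demand-driven scaling, with made-up capacity figures and a decision function standing in for whatever provisioning interface a real deployment would call.

```python
# Hypothetical autoscaling decision: spin out or retire virtual servers as
# demand changes. Thresholds and capacities are illustrative only.

def rebalance(requests_per_sec, active_servers,
              capacity_per_server=500, target_headroom=0.2):
    # How many servers are needed to carry the load with some headroom?
    effective_capacity = capacity_per_server * (1 - target_headroom)
    needed = int(requests_per_sec / effective_capacity) + 1
    if needed > active_servers:
        return ("spawn", needed - active_servers)   # provision new virtual servers
    if needed < active_servers:
        return ("retire", active_servers - needed)  # hand resources back
    return ("hold", 0)

print(rebalance(requests_per_sec=4200, active_servers=6))   # -> ('spawn', 5)
```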

The virtualization examples so far have all been hardware-centric, because hardware's inherent inflexibility makes the elasticity gains from virtualization greater there than with software. However, virtualization can work anywhere in the computing stack. You can virtualize both the hardware and the operating system, which allows programs written for one OS to run on another, and programs written for a virtual OS to run anywhere (similar to how Java maintains its hardware independence through the Java Virtual Machine).

Quite possibly the growth of virtualization portends a deep change in the responsibilities of CIOs. Perhaps in the not-too-distant future no CIO will ever think about hardware: Raw physical processing and storage will be bought in bulk from information utilities or server farms. Applications will be the business of the departments or offices requiring them. The center of a CIO’s job will be the care and feeding of the execution environment. The very title of CIO might vanish, to be replaced, of course, by CVO.

Taking It All In
In that world, virtualization could graduate into a full-throated simulation of entire systems, the elements of which would not be just computing hardware, as now, but all the motors, switches, valves, doors, engines, vehicles and sensors in a company. The model would run in parallel with the physical company, in real time. Where now virtualization is used for change management, disaster recovery planning, or maintenance scheduling for networks and their elements, it would in the future do the same for all facilities. Every object or product sold would come with a model of itself that could fit into one of these execution environments. It would be the CVO’s responsibility to make sure that each company’s image of itself was accurate and complete, and captured the essentials. And that would not be a virtual responsibility in the least.