In a post on the corporate blog, Virtual Iron management officially distances itself from paravirtualization technology.
Just a few months ago Virtual Iron announced that the new 3.0 version, expected this month in its first beta, would be based on the Xen hypervisor, the open source project that is the foremost example of paravirtualization.
Is this a paradox? Not really.
Virtual Iron will be based on Xen as announced, but will rely entirely on AMD and Intel hardware assistance to virtualize guest operating systems.
The reason why Virtual Iron decided to skip paravirtualization is clearly stated in the blog post, which also reports some raw benchmark comparisons:
…
Paravirtualization requires substantial engineering efforts in modifying and maintaining an operating system. However, these heroic efforts are inevitably losing the battle against Moore’s Law and hardware advances being made in the x86 space. By the time the first product with paravirtualization appears on the market, more than 80% of the shipping x86 server processors from Intel and AMD will have hardware-based virtualization acceleration integrated into the chips (Intel-VT and AMD-V or “Rev-F”). This hardware-based acceleration is designed to optimize pure virtualization performance, primarily the virtualization of CPU, and it renders OS paravirtualization efforts as completely unnecessary and behind the technology curve.
…
The current batch of chips offering hardware-based virtualization acceleration from Intel and AMD, primarily helps with virtualization of CPUs and very little for virtualizing an I/O subsystem. To improve the I/O performance of unmodified guest OSs, we are developing accelerated drivers. The early performance numbers are encouraging. Some numbers are many times faster than emulated I/O and close to native hardware performance numbers.
Just to give people a flavor of the performance numbers that we are getting, below are some preliminary results on Intel Woodcrest (51xx series) with a Gigabit network, SAN storage and all of the VMs at 1 CPU. These numbers are very early. Disk performance is very good and we are just beginning to tune performance.
| | Native | VI-accel |
| --- | --- | --- |
| **Bonnie-SAN (bigger is better)** | | |
| Write, KB/sec | 52,106 | 49,500 |
| Read, KB/sec | 59,392 | 57,186 |
| **netperf (bigger is better)** | | |
| tcp req/resp (t/sec) | 6,831 | 5,648 |
| **SPECjbb2000 (bigger is better)** | | |
| JRockit JVM thruput | 43,061 | 40,364 |
I strongly agree with this vision: the big problem of paravirtualization is that the performance it achieves is not a desirable benefit when it has to be traded off against kernel modification of your guest operating system.
Hardly any software house would agree to support its applications on a paravirtualized OS, for the simple reason that the environment is no longer controlled. Reliability, security and compatibility have to be proven again in such a scenario, and no vendor would be able to guarantee the same level of testing that took place during original operating system development. Not in a reasonable amount of time, at least.
Also, the biggest trade-off is that paravirtualization doesn't permit running Microsoft Windows and, as I have said many times, this limitation is big enough to keep the technology out of the largest part of the market: the SMB segment. This is the exact reason why Xen itself will eventually go down the same path as Virtual Iron.
How good does paravirtualization performance have to be to justify suffering all of this?
Also: is it simpler to replace hardware (when new virtualization improvements become available in new CPUs) or to replace the operating system (when a new paravirtualized OS becomes available)?