The open source virtualization platform that made its way into the kernel is now at version 70.
This new build introduces a number of bug fixes and performance improvements in virtual networking and symmetric multi-processing.
Download it here.
Multiple bidders are currently competing for the FastScale acquisition, as virtualization.info has learned from multiple sources.
FastScale is a US startup launched almost one year ago, funded by ATA Ventures, Leapfrog Ventures and Hunt Ventures, which offers an innovative approach to reduce the overall virtual machine size.
In the last two years, well before the public launch, the company worked closely with VMware, probably to shape its solution in a way that would fit the virtualization giant's strategy for virtual appliances.
This is the reason why virtualization.info (wrongly) speculated a FastScale acquisition by VMware in January 2008.
It’s unknown if VMware is part of the ongoing acquisition bid now. The only name reported so far is Sun.
The other FastScale partners (HP, IBM, Microsoft and Red Hat) may be part of the bid as well.
In a panel at Cisco Live! conference the company CEO, John Chambers, stated that Cisco doesn’t need to buy VMware, as NetworkWorld reports.
Chambers had to address this scenario because of the many persistent rumors of a VMware acquisition in early 2009.
The only problem is that the rumored bidder is not Cisco but Intel, as virtualization.info has heard from many different sources.
Second day at the Burton Group conference in San Diego.
Covering virtualization topics on stage today are Steve Herrod, CTO at VMware, and Chris Wolf, Senior Analyst at Burton Group.
Steve Herrod is on stage. He introduces VMware's effort on OVF (the Open Virtualization Format).
Herrod explains that OVF was born from the need to distribute virtual appliances across multiple hypervisors and to manage them through a number of management solutions.
OVF is designed so that ISVs can author a virtual machine, distribute it, and deploy it on the customer's virtual platform.
An OVF package can be authored from scratch or simply exported from a virtual infrastructure.
OVF packages can then be distributed easily because the format (an extensible XML document with 10 core sections) is designed to validate the target environment (at the hardware, security and integrity levels), to include licensing information, to embed multiple VMs, and to stream the content without waiting for a full download.
Deployment is strictly controlled by three different portability levels, which define the restrictions in place to run an OVF package.
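As a rough illustration of the idea, an OVF descriptor is an XML envelope that standard tooling can inspect. The sketch below uses a deliberately simplified, hypothetical descriptor (the real OVF schema uses namespaced sections such as References, DiskSection and NetworkSection, with different element names):

```python
import xml.etree.ElementTree as ET

# Simplified, hypothetical OVF-style descriptor for illustration only;
# not the actual OVF schema.
descriptor = """<?xml version="1.0"?>
<Envelope>
  <References>
    <File id="disk1" href="appliance-disk1.vmdk"/>
  </References>
  <VirtualSystem id="my-appliance">
    <Name>Sample Appliance</Name>
  </VirtualSystem>
</Envelope>"""

root = ET.fromstring(descriptor)

# List the disk images the package references and the VMs it embeds.
files = [f.get("href") for f in root.iter("File")]
systems = [vs.get("id") for vs in root.iter("VirtualSystem")]

print(files)    # ['appliance-disk1.vmdk']
print(systems)  # ['my-appliance']
```

Because the descriptor is plain XML, a management tool can read the file manifest and embedded virtual systems before downloading any disk content, which is what makes streamed deployment practical.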
Chris Wolf is on stage talking about the mobility and orchestration opportunities and challenges in virtual infrastructures.
The challenges include the current immaturity of most live migration technologies, the incompatibilities between different hypervisors (virtual hard drive formats, emulated hardware), incompatibilities between physical CPUs, the need to enforce SLAs and compliance policies, often-flawed capacity planning, and the lack of standards.
Wolf sees a brighter future with more interoperable hardware, more flexible storage architectures, more intelligent virtual networking, more attention to security (although the lack of standardization is critical in this area), and massive adoption of automation tools (to dynamically provision the whole computing stack, not only the servers).
The last key show of the day is a panel about management, interoperability and standards with the DMTF, CiRBA, Cisco, HP and Novell.
During the panel Winston Bumpus, President of the Distributed Management Task Force (DMTF) and VMware Director of Standards, announced the publication of a nearly complete OVF standard specification.
The final version of the specification may be out within one month.
After over three years of development (the product was originally announced at the WinHEC 2005 conference), Microsoft finally releases its first bare-metal virtualization platform today: Hyper-V.
During this very long process the product was delayed, changed name, and lost some planned key features.
Unlike Virtual Server and Virtual PC, Hyper-V is a type-1 virtual machine monitor (aka hypervisor), featuring an architecture very similar to the one used by Xen and its commercial derivatives.
This allows a direct comparison with platforms like Citrix XenServer, Virtual Iron, the upcoming Sun xVM Server and obviously with VMware ESX.
Unlike the latter, Hyper-V adopts a microkernel developed from scratch (so it's not the Windows kernel) which is less than 1MB in size and delegates most tasks to a so-called Parent Partition.
Depending on the configuration you adopt, the parent partition automatically loads a full copy of Windows Server 2008 or the new Windows Server 2008 Server Core.
Being a first-generation product, Hyper-V cannot really compete with the above on features, but it clearly offers a performance boost (up to +107% in disk I/O activity) and some much-deserved improvements over Virtual Server 2005 R2.
As with Virtual Server 2005, Microsoft supports most of its applications inside virtual machines, but one of the products still unsupported (on any hypervisor) is Exchange 2007.
Now Microsoft reveals that the Exchange team will release a support statement within 60 days, finally giving the OK to the much-awaited mail server consolidation projects.
The company also supports 3rd party applications through an optional certification program. At the launch date three companies are already qualified to run their products on Hyper-V: Diskeeper, IBM (with DB2) and Symantec.
With Hyper-V Microsoft will also compete on the embedded hypervisor front against VMware (with ESXi) and Citrix (with XenServer Express): OEMs like HP, Dell, IBM, Fujitsu, Hitachi, NEC and Unisys are already preparing to ship their hardware with the integrated hypervisor.
As already announced the price of Hyper-V in these configurations will be $28.
The new hypervisor doesn’t change the licensing scheme already introduced for Virtual Server: Windows Server 2008 Standard Edition license includes one virtual machine, the Enterprise Edition includes up to four VMs, and the Datacenter Edition allows unlimited VMs.
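The per-edition entitlements above translate into a simple licensing calculation. As a back-of-the-envelope sketch (the function name and structure are ours, but the entitlement figures come from the scheme just described):

```python
# VM entitlements per Windows Server 2008 license, as described above:
# Standard = 1 VM, Enterprise = 4 VMs, Datacenter = unlimited.
ENTITLEMENTS = {"standard": 1, "enterprise": 4, "datacenter": None}

def licenses_needed(edition: str, vm_count: int) -> int:
    """Minimum number of licenses to cover vm_count VMs on one host."""
    per_license = ENTITLEMENTS[edition]
    if per_license is None:
        return 1  # one Datacenter license covers unlimited VMs on the host
    return -(-vm_count // per_license)  # ceiling division

print(licenses_needed("standard", 5))    # 5
print(licenses_needed("enterprise", 5))  # 2
print(licenses_needed("datacenter", 50)) # 1
```

For consolidation projects with more than a handful of VMs per host, this arithmetic is why the Enterprise and Datacenter editions quickly become the cheaper options.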
Microsoft Hyper-V is fully integrated with 64-bit Windows Server 2008, so any download of the OS includes it. Download a trial here.
For those customers already using the beta or the release candidate of Hyper-V, the product will be updated through the Windows Update service beginning July 8.
To demonstrate how much the company is betting on this new product, Microsoft has been adopting Hyper-V internally for a while and has already migrated into virtual machines all the web front-ends that serve the TechNet and MSDN websites.
Customers now await the upcoming System Center Virtual Machine Manager 2008, currently in beta, to centrally manage Hyper-V (along with Virtual Server and VMware ESX), and MAP 3.1, also in beta, to perform accurate capacity planning.
The virtualization.info Virtualization Industry Roadmap has been updated accordingly.
This year’s Burton Group conference starts with a keynote from Drue Reeves, Vice President and Research Director, and Chris Wolf, Senior Analyst.
Reeves starts by talking about the current rigidity of IT data centers, their under-utilized resources, and the long time needed to change their infrastructure in reaction to any new challenge.
To address these limitations the idea of Dynamic Data Center is emerging.
This new infrastructure is service-oriented and can move workloads around with new flexibility, is energy efficient, and still enforces security compliance.
Two major trends converge to provide these capabilities: virtualization and management orchestration/automation.
Both have challenges to address: for the former these are licensing, security and management, for the latter interoperability is the key problem.
Reeves acknowledges the increased adoption of server virtualization, but some directly related technologies create additional issues: uneven support from storage vendors, a throughput bottleneck until I/O virtualization arrives, the lack of a strong authentication model in storage facilities, and more.
He also talks about the dynamic data center inhibitors: the IT staff’s fear to lose control, the CEO’s failure to understand the ROI.
Now Chris Wolf is on stage and his speech is specifically focused on virtualization.
Wolf acknowledges that the market is being populated by multiple good-enough hypervisors. They may not compete feature by feature, but all of them are acceptable solutions for customers.
Still, all of them introduce new challenges: licensing and support, high availability, security, interoperability, management, virtual desktops, storage.
Even today, "supporting virtualization" remains an unclear statement from most vendors.
Tracking licenses in a virtual, dynamic world is a complex task to accomplish.
High availability solutions still follow legacy, inefficient models which require a virtual machine full restart.
Security solutions are still unready for the dynamic data center: it’s impossible to attach a security policy or tool to the virtual machine, and when the VM is dynamically relocated the migration breaks any protection in place.
The current lack of management standardization prevents interoperability and slows down management vendors in providing valuable cross-platform solutions.
The management solutions available today need to become more effective in many areas: controlling VM sprawl, providing compliance auditing, controlling provisioning, and more.
Virtual desktops and application virtualization are redefining the desktop paradigm, including the way we provide IT support to end-users.
At this point Wolf highlights an important point: every vendor in the desktop space should have an effective VDI strategy. He’s specifically talking about Apple, which is one of the few major vendors completely outside the virtualization market.
Back to the challenges: storage still has a lot of shortcomings that don’t make it very virtualization-friendly, and it needs a lot of technology improvements.
Wolf’s final advice: customers should continue to strongly demand improvements in licensing, support and open standards. At the same time they should start considering the adoption of virtual desktops and high-bandwidth 10Gb Ethernet as soon as possible.
Parallels announces today a major agreement to offer a VDI solution powered by Quest/Provision Networks Virtual Access Suite (VAS).
The deal involves Virtuozzo Containers, the Parallels OS virtualization solution, and not the upcoming hardware virtualization platform Parallels Server.
Considering the strong presence of Virtuozzo in the hosting industry, it’s easy to imagine Parallels working to offer a rentable VDI model like the one Desktone is trying to market.
The use of OS virtualization as a back-end infrastructure implies lower overhead and a higher consolidation ratio: Parallels and Quest report around 130-140 virtual desktops per host, rather than the 30-60 seen in VDI scenarios with hardware virtualization platforms.
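The reported ratios make for a striking capacity comparison. A quick back-of-the-envelope calculation (the deployment size is hypothetical, and the per-host figures are the vendors' claims, not independent benchmarks):

```python
import math

def hosts_needed(total_desktops: int, desktops_per_host: int) -> int:
    """Hosts required to serve a desktop population at a given density."""
    return math.ceil(total_desktops / desktops_per_host)

desktops = 1000  # hypothetical deployment size

# Conservative end of each range reported above.
print(hosts_needed(desktops, 130))  # OS virtualization (Virtuozzo): 8 hosts
print(hosts_needed(desktops, 30))   # hardware virtualization: 34 hosts
```

If the vendors' numbers hold, the hardware footprint difference is roughly fourfold, which is the core of the cost argument for an OS-virtualization back end.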
The solution is priced at $140 per concurrent desktop connection and is available now through the sales channels of both companies.
On its side Provision Networks, even after the Quest acquisition, continues to close remarkable deals.
Before Parallels, the company signed a similar agreement with Virtual Iron in April 2007 and with HP in June 2007.
Parallels has a chance to raise much attention with this agreement, considering the overall lower price of the solution and the possibly higher performance.
Additionally, Parallels will almost certainly extend this agreement to Parallels Server once it’s out.
After a quiet start, the US firm Veeam continues to expand its presence in the virtualization industry and is becoming more aggressive: in March it announced its first major product, Backup, and now it proceeds to its first acquisition.
NWorks is a 14-year-old company focused on management plug-ins for major enterprise management platforms like HP OpenView and Microsoft System Center Operations Manager (SCOM).
The company has offered plug-ins to handle VMware infrastructures on these platforms since 2005, and Veeam probably plans to integrate them into its Monitor and Reporter products.
NWorks was acquired for an undisclosed sum.
Today PlateSpin launches the first update of its flagship product, PowerConvert, since the Novell acquisition.
This new major release introduces some welcome features.
The virtualization.info Virtualization Industry Roadmap has been updated accordingly.
Before the official launch, planned by July 10, VMware releases one last build, the Release Candidate, of its brand new application virtualization solution: ThinApp (formerly Thinstall Application Virtualization Suite).
As expected, the new build (3.396) doesn’t introduce any new features, but it does include some important enhancements.
Enroll in the beta program here.