The risk of using free virtualization products

Since the launch of VMware Player, the first free desktop virtualization solution, and Microsoft Virtual Server, the first free server virtualization solution, the IT world has never been the same.
A revolution in the way we think about computing resources has begun, and it will accelerate greatly now that VMware Server 1.0 and Microsoft Virtual PC 2004 have also been released as free products.

At this moment the IT world is being shaken by two concurrent phenomena: on one side, server virtualization technology itself; on the other, the fact that virtualization has become completely free before it ever had the chance to go mainstream.

While free virtualization is a huge benefit for the whole industry, getting it this fast could bring a lot of problems.

The problem
The problems free virtualization could raise in the coming years depend mainly on three factors: the complexity of the technology, its critical role in the business, and the ease of adoption.

A virtualized datacenter brings new challenges: IT staff has to handle technical incompatibilities, performance penalties, lack of product support, interoperability, accountability, and much more.
Professionals and companies have not had enough time to become truly expert at handling all of this in the new scenarios. There are many aspects still to be fully understood and much experience to collect before we reach the level of confidence we have today with physical servers.

While desktop virtualization is widely deployed but has a limited impact on the way business services are delivered, server virtualization completely changes the approach to the datacenter, from hardware purchasing to resource management.
And while desktop virtualization is a technology companies can abandon at any moment if it doesn't meet expectations, server virtualization is, most of the time, an adoption with no way back.

The fact that today's free solutions were yesterday's commercial products, advertised as enterprise-grade, implies that companies from small businesses to large enterprises will embrace them, both because they cost nothing and because they are trusted as reliable. And when a highly desirable technology suddenly becomes free, a mass of professionals approaches it, with or without the required knowledge.

Where's the risk? The biggest one is for small and medium businesses, which will surely see free server virtualization as the biggest opportunity to lower costs.
In these environments, the time and budget allocated for IT staff training, outside consulting, and testing is small or non-existent, and technologies often end up thrown into production without adequate skills and experience.

This is where technology complexity comes in, along with the many factors that can compromise a virtualization project: poor capacity planning, superficial host and guest OS configuration, missing policies for virtual machine provisioning, lack of knowledge of the required third-party tools, and poor investigation of supported configurations. All of these lead to disappointing performance, virtual machine sprawl, and increased management effort.

Such bad results will not only translate into money spent correcting deficiencies or reverting back to physical servers, but will also become the reason why companies stay away from virtualization as much as possible, believing the technology is much less useful and reliable than expected.

At the end of the day, surrendering to the mirage of a complex solution like server virtualization coming at no cost will damage companies in the short and medium term.

Future trends
It's a safe bet that server virtualization will remain free, will extend to datacenter-class solutions (still a profitable part of vendors' offerings today), and will become pervasive, included in every operating system.

The biggest contribution in this direction will come from Microsoft, which has announced it will embed a new virtualization technology called Windows Server Virtualization in upcoming versions of its server operating system, codenamed Longhorn.

Within two years or little more, virtualization as a commodity will appear in millions of installations, becoming a de facto standard in datacenter architectures.

Investing in training or consulting today is not just a way to ensure free virtualization delivers its supposed benefits; it's also a way to build knowledge and stay ahead of the competition in the near future.

Conclusion
Free virtualization can appear to be a very simple technology for solving very complex problems, and this appearance can lead companies to consider an investment in training or outside help optional.

The reality is that today's virtualization is very hard to handle and requires new capabilities IT staff doesn't yet have.
Companies that adopt free virtualization too lightly could face show-stopping issues at a point in the project where correcting course or reverting back to physical servers will result in a big waste of money.

This article originally appeared on SearchServerVirtualization.

Release: Scalent V/OE 2.0

Quoting from the Scalent official announcement:

Scalent Systems today announced general availability of Scalent™ Virtual Operating Environment (V/OE) version 2.0, the industry’s next-generation server infrastructure repurposing technology.

Serving as virtualization middleware V/OE enables data center operations owners to rapidly change entire systems and associated topologies, which servers are running, what software is running on them, and how they’re connected to network and storage, without altering physical infrastructure.

Scalent V/OE 2.0 extends Scalent’s broad hardware and OS support, with the introduction of additional enterprise extensibility, including:

  • Support for Solaris 10 on x86 and SPARC
  • Support for enterprise-class bladed Ethernet switches (for example, the Cisco 65xx)
  • Addition of a programmatic interface for third-party systems integration

Virtual Strategy Magazine also published a podcast about this release with the company's Vice President of Marketing, Kevin Epstein.

The virtualization.info Virtualization Industry Roadmap has been updated accordingly.

Podcast: TechNet Radio interviews Jeff Woolsey of Microsoft

Channel 9 published a TechNet Radio interview with Jeff Woolsey, Lead Program Manager for Windows Virtualization at Microsoft, about the upcoming Windows Server Virtualization (WSV).

Listen to the whole interview at source.

If you missed the WinHEC 2006 presentation, be sure to check the webcasts and the virtualization.info Q&A about Windows Server Virtualization with Mike Neil, Virtual Machine Technologies Product Unit Manager at Microsoft.

Ron Oglesby on hypervisors' future ubiquity

Brian Madden published an article by Ron Oglesby about the future of virtualization in the medium term.
Ron focuses on which upcoming change in virtualization could further revolutionize the IT world, predicting it will be the binding of the hypervisor to hardware and its ubiquity in desktop and server machines:


Right now I believe that the real race going on in the virtualization space isn’t about who can Vmotion or support four processor VMs, etc. The real race is about who has the first lightweight fully integrated hypervisor that is OEM’ed on servers and desktops.

The Future is a thin layer that is OEM’ed that can work with and control all these devices. It will not be as bulky as any Windows or Linux OS you have ever seen and will more closely resemble a glorified piece of firmware that boots and starts dividing up resources to whatever number of VMs you have running on the machine. Of course it will still have some type of interface while the server and its VMs are running, but it will be extremely lightweight and self-sustaining. This will come with every x86 server and desktop. What you will buy is not the hypervisor but the management tools that wrap around it…

Read the whole article at source.

Whitepaper: Roadmap to Virtual Infrastructure: Practical Implementation Strategies

VMware published a very interesting paper about the steps CIOs should take to gradually implement virtualization technologies in their companies, from the pilot program to adoption in production environments:


We cover organizational charter, stakeholder buy-in strategies to ease the common non-technical resistance that can affect virtualization rollouts. We also highlight key areas of IT infrastructure and operations most impacted by virtualization.

We include some actionable next steps and templates for how to build an effective virtualization support team, assess readiness of your organization to adopt virtualization, and scope initial projects to help ensure success and develop your organization’s capabilities for broader virtualization deployment…

Read the whole paper at source.

VMware to support para-virtualization in all products

Quoting from SYS-CON India:

VMware announced plans to support paravirtualized Linux and Solaris x86 operating systems in future releases of VMware virtual infrastructure platform products — Workstation, GSX Server and ESX Server.

“VMware’s support for paravirtualized Linux and Solaris x86 operating systems and our experience with enabling virtual operating environments for more than 10,000 enterprise server customers is consistent with our continued commitment to give customers greater choice,” said Jeffrey Engelmann, executive vice president of marketing at VMware.

VMware’s support for these additional operating systems means more customers can gain additional leverage from using VMware’s management software VirtualCenter with VMotion technology across heterogeneous operating system environments. This further extends the applicability of VMware virtual infrastructure as an enterprise-wide strategy…

Read the whole article at source.

Note that the article quotes Mr. Engelmann's statement, but no official announcement has been made so far. It's highly probable this news was not meant to be disclosed before next week.

Amazon launches Xen-powered virtual datacenter on demand

Following a trend started by Sun with its Grid (and evaluated by many other companies like Nortel), Amazon launched a public virtual computing facility: Elastic Compute Cloud (EC2).

The facility is powered by Xen, granting customers flexibility:

Amazon EC2 enables you to increase or decrease capacity within minutes, not hours or days. You can commission one, hundreds or even thousands of server instances simultaneously. Of course, because this is all controlled with web service APIs, your application can automatically scale itself up and down depending on its needs

and full control:

You have complete control of your instances. You have root access to each one, and you can interact with them as you would any machine. Each instance predictably provides the equivalent of a system with a 1.7Ghz Xeon CPU, 1.75GB of RAM, 160GB of local disk, and 250Mb/s of network bandwidth


(Courtesy of Cast Blog)

In other words, while the Sun Grid permits customers to run submitted applications in its grid computing facility without direct access to the Solaris Containers configuration, Amazon EC2 grants full control over the virtual hardware, guest operating system, installed applications, and even the virtual network between virtual machines (as stated in the preliminary documentation).

The whole thing is remotely configured, launched, and managed through web services, so I expect community-made GUIs to appear very soon.
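To make the idea concrete, here is a minimal Python sketch of what driving such a service from a script could look like. The client class, its method names, and the image identifier are hypothetical placeholders, not taken from Amazon's documentation; the point is simply that starting, inspecting, and terminating instances becomes an ordinary call an application can issue on its own.

    # Hypothetical sketch: scaling a pool of instances programmatically.
    # Ec2Client, its methods and "ami-webtier" are illustrative stand-ins,
    # not the actual web service API; real calls would go over HTTPS.

    class Ec2Client:
        def __init__(self):
            self._instances = []

        def run_instances(self, image_id, count):
            # Pretend to launch "count" copies of the given machine image.
            new = ["i-%05d" % (len(self._instances) + n) for n in range(count)]
            self._instances.extend(new)
            return new

        def terminate_instances(self, instance_ids):
            self._instances = [i for i in self._instances if i not in instance_ids]

        def describe_instances(self):
            return list(self._instances)

    def rescale(client, desired):
        # Grow or shrink the running pool until it matches the desired size.
        running = client.describe_instances()
        if len(running) < desired:
            client.run_instances("ami-webtier", desired - len(running))
        elif len(running) > desired:
            client.terminate_instances(running[desired:])

    ec2 = Ec2Client()
    rescale(ec2, 10)   # scale up to ten instances for peak load
    rescale(ec2, 2)    # scale back down when traffic drops
    print(ec2.describe_instances())   # two instances left running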

Exactly like the Sun Grid, Amazon EC2 has a pay-per-use model:

  • $0.10 per instance-hour consumed (or part of an hour consumed)
  • $0.20 per GB of data transferred outside of Amazon (i.e., Internet traffic)
  • $0.15 per GB-Month of Amazon S3 storage used for your images (charged by Amazon S3)
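As a rough illustration of how these rates add up, here is a small Python sketch that estimates a monthly bill from the three published prices; the workload figures (instance count, hours, transferred gigabytes, image size) are invented for the example.

    # Rough monthly cost estimate from the published EC2/S3 rates above.
    # The workload figures below are invented for illustration only.

    INSTANCE_HOUR = 0.10   # $ per instance-hour (or part thereof)
    TRANSFER_GB   = 0.20   # $ per GB transferred outside of Amazon
    S3_GB_MONTH   = 0.15   # $ per GB-month of Amazon S3 image storage

    def monthly_cost(instances, hours_per_instance, gb_out, image_gb):
        compute = instances * hours_per_instance * INSTANCE_HOUR
        traffic = gb_out * TRANSFER_GB
        storage = image_gb * S3_GB_MONTH
        return compute + traffic + storage

    # Example: 3 instances running around the clock for a 30-day month,
    # pushing 50 GB to the Internet, with a 2 GB machine image stored on S3.
    print("$%.2f" % monthly_cost(3, 24 * 30, 50, 2))   # -> $226.30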

Amazon isn't new to mass-scale virtualization projects, having already launched its storage virtualization facility, Simple Storage Service (S3), earlier this year.

Create an account to use EC2 here.

Personally, I was expecting VMware to be the first to launch such a service, considering the company has all the pieces to provide a similar offering:

  • a datacenter-class virtualization platform (ESX Server 3.0)
  • a datacenter-class management tool (VirtualCenter 2.0)
  • a very promising automation solution, gained with the Akimbi acquisition (Virtual Lab Manager 1.0, formerly Slingshot)
  • a plethora of datacenter-class storage solutions, granted by its owner EMC

VMware could hardly have a better chance to prove the reliability of its own products.

VMware co-founder working on a new virtualization start-up

Quoting from The Register:


This week we’ll be taking a look at the talented workers at two of our favorite start-ups – Nuova Systems and Montalvo Systems.

Well, of special note, we’ve found VMware’s founder and former CTO Ed Bugnion – not pronounced ‘bunion’ – at Nuova as the start-up’s VP of engineering. Bugnion used to be a graduate student of fellow VMware founder and Stanford professor Mendel Rosenblum.

Nuova has managed to keep the nature of its products pretty quiet, although Bugnion’s presence helps confirm some of the rumors we’ve heard. Our sources claim Nuova is working on a virtualization system that would combine server, storage and networking technology in a single box. The system is meant to align with Cisco and Intel’s larger strategy around Data Center Ethernet (DCE).

Broadly, DCE is a proposal to add more virtualization to networks and make it possible for myriad types of traffic to share Ethernet networks. It’s not hard to imagine a company such as Cisco seeing Nuova and DCE as a means of encroaching on the turf of Sun, IBM, HP and Dell. But Cisco would prefer you don’t think about that just yet.

Who else is at Nuova? Well, there’s Fabio Ingrao, the project lead for server start-up Fabric7. And there’s Dan Lenoski, the former VP of engineering at Cisco. You’ll also find a bunch of former Juniper, Netillion and Sun Microsystems executives. Quite the talented bunch…

Read the whole article at source.