Licensing issues with Virtuozzo and other OS partitioning solutions

One of the most complex issues to solve when dealing with new virtualization technologies is licensing.

While approaches like server virtualization (à la VMware, Parallels, Xen and Microsoft itself) and OS partitioning (à la SWsoft, Sun, UML) could deliver real savings on operating system costs as well, traditional licensing models weren’t created with virtualization scenarios in mind and aren’t actually helping adoption of the new technology.

Microsoft, more than any other company, has problems with OS licensing in virtual machines, and it’s moving to adapt to the shape datacenters are taking.

From a certain point of view, the fact that Microsoft entered the server virtualization segment is accelerating change, since the company is directly affected by its own inflexible licensing scheme.
The result is more lenient license terms for Windows Server 2003 R2 Enterprise Edition, which allows up to 4 installations in virtual instances on top of the physical one, and Windows Server 2003 R2 Datacenter Edition, which allows unlimited installations in virtual instances.
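As a back-of-the-envelope illustration of these terms, here is how license counting works out for a single physical host. A minimal sketch in Python; the assumption that additional Enterprise licenses stack (4 more virtual instances each) is ours, not something the terms above spell out:

    def windows_licenses_needed(edition, virtual_instances):
        # Rough license count for one physical host under the Windows
        # Server 2003 R2 terms described above (hypothetical helper).
        if edition == "Datacenter":
            return 1  # unlimited virtual instances per licensed host
        if edition == "Enterprise":
            # One license covers the physical installation plus up to 4
            # virtual instances; we assume extra licenses add 4 more each.
            return max(1, -(-virtual_instances // 4))
        raise ValueError("edition not covered by the terms above")

    print(windows_licenses_needed("Enterprise", 10))   # -> 3 licenses
    print(windows_licenses_needed("Datacenter", 10))   # -> 1 license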

But despite this effort, customers are seriously confused, sometimes unable to understand how Microsoft licensing applies to the new virtualization technologies they are evaluating.

This is the case with SWsoft’s OS partitioning solution, Virtuozzo, which is able to create multiple, independent virtual instances of the same underlying operating system: something Sun introduced with Solaris 10 under the name Solaris Containers, and that Linux has had for years thanks to the UML (User Mode Linux) project.

In the Virtuozzo case, customers are disoriented by the fact that partitions don’t appear as complete virtual machines, unlike in VMware products, where you have to install a new operating system from scratch.
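To give an idea of why partitions look different, here is roughly what creating one involves. A minimal sketch driving the vzctl control utility from Python; the container ID, OS template name and addresses are illustrative, and exact flags may differ between Virtuozzo releases:

    import subprocess

    def vz(args):
        # Run a vzctl command on the Virtuozzo host (requires root).
        subprocess.check_call(["vzctl"] + args)

    # The partition is created from an existing OS template in seconds;
    # there is no operating system installation step.
    vz(["create", "101", "--ostemplate", "fedora-core-5"])
    vz(["set", "101", "--hostname", "test1", "--ipadd", "10.0.0.101", "--save"])
    vz(["start", "101"])
    vz(["exec", "101", "ps", "ax"])  # the new instance is already running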

Recently SWsoft’s CEO, Serguei Beloussov, commented on this topic on CNET News and, without clearly explaining to readers how Windows licensing applies to Virtuozzo virtual instances, asked for more transparency:


Customers deserve more clarity about the licensing issues surrounding operating system virtualization. This should be a straightforward matter unless software vendors decide to suddenly charge per each virtualized environment. If that occurs, they’ll essentially be charging extra for the same bits and bytes of software they have already charged for…

His last, conditional statement suggests SWsoft is currently telling its customers they don’t need extra operating system licenses for virtual instances, but that’s just an interpretation.

To provide the clarity Mr. Beloussov is demanding, virtualization.info reached Mike Neil, Senior Director of Virtualization Strategy at Microsoft, and asked for an official statement on the issue:

A virtual operating system environment that enables a separate machine identity or separate administrative rights requires an operating system license.
In this case, each Virtuozzo virtual environment requires an operating system license.

Each instance of the OS can deliver value by providing additional flexibility for customers to deploy their business workloads.

Neil also added that Virtuozzo partitions, like traditional virtual machines, benefit from the new licensing terms introduced with the Windows Server 2003 R2 Enterprise Edition and Datacenter Edition mentioned above.

It should now be clear that, as of today, Microsoft considers OS partitioning identical to server virtualization from a licensing point of view.
This may change in the future, since the company is expected to launch an OS partitioning technology of its own, but for the moment, verify your licenses.

Xen 3.0.3 expected next month, 3.0.4 before Christmas

Quoting from IT Week:

Red Hat is preparing to release version 6 of its free Fedora Core Linux operating system next month.

The updated system includes version 3.0.3 of the open-source Xen virtualisation software, which was originally scheduled for release early in July.

XenSource had previously said most of the changes in 3.0.3 affect Xen’s PV capabilities. The update includes optimisations to improve paravirtualised USB and network performance, and a PV frame buffer that enables graphical displays of virtual machines (VMs).

The release is also expected to include a new CPU scheduler and support for a basic non-uniform memory access (Numa) memory allocator.

Klorese said the 3.0.4 release of Xen is expected before Christmas. “Version 3.0.4 should appear eight to 12 weeks after 3.0.3. We’ve loosened the spacing up a bit from the eight-week [release cycle] I talked about previously,” he added.

Earlier this year Klorese said version 3.0.4 would be optimised to run on servers fitted with four CPUs…

Read the whole article at source.

VMware products achieve Windows IT Pro awards

Quoting from the VMware official announcement:

VMware, Inc., the global leader in software for industry-standard virtualized desktops and servers, today announced that Windows IT Pro honored numerous VMware products in its annual Readers’ Choice awards, which it developed as a way to let its readers evaluate and recommend the best technology products in the industry.

Windows IT Pro readers named VMware ESX Server the Best Virtualization Software Product and VMware Capacity Planner the Top Capacity Planning and Trend Analysis Software Product. Readers also recognized these VMware products as solid picks that placed in the top three in several Windows IT Pro categories: VMware Player in the Best New Product category, VMware VirtualCenter in the Best Applications and Operations Management Tool category, VMware VirtualCenter in the Best Remote Management Tool category, VMware VirtualCenter with VMotion technology in the Best Enterprise Backup/Recovery/Archive Software category, VMware ACE in the Best Proxy Server/Web Access Control and Monitoring Solution category and VMware Workstation in the Best Software Deployment Tool category…


Webcast: Leveraging Software Virtualization Technology

Altiris and Network World arranged a new webcast about application virtualization for September 13th:


Learn how software virtualization:

  • Reduces the cost of deploying and maintaining applications
  • Peacefully coexists with, and adds value to, other forms of virtualization
  • Combined with software streaming, improves end-user experience and productivity, and further reduces demand on IT resources

Register for the webcast here.

3rd virtualization.info anniversary

3 years ago I started this blog.

On this day one year ago, I was counting 80,000 visits.
Today I count nearly 1 million.

I’ve put a lot of effort into this project so far and will continue to do so: expect great new things in the coming weeks.
Meanwhile, I would ask every reader to submit suggestions and wishlists for a better virtualization.info.

To celebrate, I would like to re-publish an interview SearchServerVirtualization conducted with me some months ago:

Andrew Kutz: You are one of the most well known, if not the leading, evangelists of virtualization on the internet today. Your roots, however, are in information technology security. What is your take on the relationship between information technology security and virtualization?

Alessandro Perilli: Being a security professional means, among other things, dealing all the time with a lot of different platforms, multi-tier products and networking devices.
Just think about testing a new exploit against several versions of Windows or Linux. Or about testing a network intrusion detection system’s features: the simplest scenario would involve an attacking platform, a target, and a firewall in the middle.

Setting up a laboratory can be very expensive, and you need a lot of time to rebuild it from scratch before testing a new scenario.
When I saw virtualization for the first time, I immediately understood I would be able to create a security lab in a box without much effort, cutting away reinstallation times.
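As an illustration, resetting a whole lab scenario shrinks to a script along these lines. A minimal sketch assuming VMware’s vmrun utility and its revertToSnapshot/start commands; the paths and snapshot name are illustrative:

    import subprocess

    LAB = [
        "/vmlab/attacker/attacker.vmx",   # illustrative paths
        "/vmlab/target/target.vmx",
        "/vmlab/firewall/firewall.vmx",
    ]

    def reset_lab(snapshot="clean"):
        # Revert every lab VM to its pristine snapshot and boot it,
        # instead of reinstalling each operating system from scratch.
        for vmx in LAB:
            subprocess.check_call(["vmrun", "revertToSnapshot", vmx, snapshot])
            subprocess.check_call(["vmrun", "start", vmx])

    reset_lab()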

I also immediately felt virtualization could be used for security purposes in its own right, like sandboxing and honeypotting. So it soon became a mandatory companion of my security toolbox.

AK: Your accreditations in security speak for themselves, but what is your level of experience with the current crop of virtualization technologies (VMware, Microsoft, Xen, Parallels, Vanderpool/Silverdale, Pacifica, etc…)?

AP: In the early days of modern virtualization I was involved in projects with VMware and Microsoft technologies as soon as they became a viable solution for corporations.
Then, thanks to virtualization.info, my work expanded to many if not all products available in this niche.
Today I extensively test, and implement for several customers, the large majority of technologies out there, from platforms to P2V tools to provisioning automation and disaster recovery.

If a new virtualization technology or product comes out, I’m working on it within one or two months.

AK: These days, almost anyone can start a blog and claim expertise on a wide variety of subject matter. Do you have any advice for IT professionals to help them gauge the worth of an information source when it pertains to virtualization?

AP: Sure: don’t follow the virtualization.info model. Don’t misunderstand me: I’m not saying this to avoid competition.
virtualization.info was born to fill the need of three years ago: aggregating scattered news about an emerging technology to understand trends and what products were out there.
Now that virtualization is starting to be widely adopted, this need has changed, and virtualization.info itself has had to extend its mission accordingly.

Starting a new blog now to do what virtualization.info did three years ago is useless.
Customers are looking for valuable content, not for another ten blogs publishing the same news again and again, just changing the title or the quoted excerpt.
Also, everybody with enough blogging experience knows that news aggregation can be completely automated with existing tools, and there is no expertise at all in that.

At this point in the virtualization industry’s evolution, I feel customers are mainly looking for technical tips, because implementation is still the big issue these days.
Any blog providing such content would be considered a valuable information source.

AK: You founded the False Negatives project to help provide security consulting and training in Italy. Do you have any plans to expand this to include virtualization, and if so are you hiring? 🙂

AP: False Negatives is a project meant just for high-level security consulting, like strategic advisory or architectural design, and there are no plans at the moment to expand the offering to virtualization outsourcing services.

But I can’t say there are no opportunities in this direction: virtualization.info is acting as a hub for vendors, system integrators, virtualization professionals and customers, both in the US and in EMEA.

I’m not hiring, but I do accept resumes from virtualization experts for every company department, from engineering to marketing.

You can consider virtualization.info a sort of virtualization headhunter, where the best experts worldwide have a chance to be engaged by top players in the market.

AK: You have positioned yourself as primarily an aggregator of virtualization news, but you rarely give your personal opinions on the subject. On Tuesday, May 23rd, 2006, Paul Murphy claimed that modern virtualization is being sold as a solution to a problem the industry no longer has. What is your personal take on the current state of virtualization and where do you see it not only a year from now, but 3 years as well?

AP: I believe it’s quite evident modern virtualization is still in its infancy.
We still have to solve fundamental problems with implementation and support, and I think it’s natural we are still concentrating on obvious applications of the technology, like server consolidation, which may not be the best solution for every customer.

I don’t see big changes within a year from now: some vendors still have to prove their virtualization platforms are fast and reliable enough, others still have to prove their virtualization tools are useful, and others still have to provide product support in virtual environments. This is a slow process which won’t substantially change within a year.

Within 3 years, more probably 5, virtualization solutions will be more evolved and will start to offer experimental datacenter automation.
I imagine scenarios where, for example, virtual machines clone themselves and enable load sharing when performance drops below a certain service level agreement.
Or virtual machines invoking a snapshot when a network attack is detected, sending the attacker’s hard disk modifications to the security department.
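The control logic of such automation could be as simple as the following minimal sketch; pool and monitor stand in for a purely hypothetical management API:

    import time

    SLA_MAX_RESPONSE_MS = 500  # illustrative service level threshold

    def adaptive_loop(pool, monitor):
        # pool and monitor are hypothetical stand-ins for whatever
        # management and measurement APIs such a platform would expose.
        while True:
            for vm in pool.virtual_machines():
                if monitor.response_time_ms(vm) > SLA_MAX_RESPONSE_MS:
                    clone = pool.clone(vm)            # add capacity
                    pool.add_to_load_balancer(clone)  # enable load sharing
                if monitor.attack_detected(vm):
                    # Freeze the attacker's disk modifications as evidence
                    # for the security department.
                    pool.snapshot(vm, label="incident")
            time.sleep(60)  # re-evaluate the service level once a minute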

In the medium term I believe virtualization is the path to something bigger than what security vendors today too liberally call the self-defending network. Something I would rather call the adaptive datacenter.
In this picture, today’s vendors offering so-called virtual lab automation solutions will be key players tomorrow.

AK: I am a fan of open source software, especially of Tim O’Reilly’s idea of software as another commodity. Openness alone will not win Xen VMware’s current market dominance, though. The formation of XenSource was a huge step, but what else do you think needs to happen for Xen to become, in the eyes of IT managers everywhere, a viable alternative to VMware?

AP: Today Xen has two problems: first of all, it has to offer Microsoft Windows support. We know this is about to happen this year thanks to hardware assistance from AMD and Intel.
Secondly, it has to provide management tools that let more customers embrace Xen paravirtualization even with limited knowledge of Linux. Here too there are companies, like XenSource itself, Virtual Iron and recently Enomaly, which are offering or are going to offer solutions in this direction.

A third critical point would be pushing the market to officially state product support in Xen paravirtualized infrastructures.
Without wide support from application vendors, there is little chance companies will seriously consider Xen adoption.

AK: On April 3rd, 2006, the Computer Business Review discussed the state of application virtualization.
Just a few days ago on May 19th it was announced that Microsoft is in talks to buy Softricity, one of the leading manufacturers of application virtualization solutions. Application virtualization is quite obviously the new hotness, but in your opinion where does it fit in the bigger picture?

AP: I think application virtualization is a fundamental companion of server virtualization.

In everyday productivity, end users need to address application compatibility, co-existence, testing and portability issues. Application virtualization is much better suited to solving these problems than server virtualization because, in some senses, it is simpler and faster to use, requires fewer resources and has a lower impact on performance.

So I believe that, while server virtualization will fill datacenter needs, application virtualization will satisfy requirements on the client side, making the most of the whole infrastructure.

AK: On a purely technical level it seems that AMD’s Pacifica virtualization technology may best Intel’s own VT, if only for the fact that AMD’s CPUs include a memory controller that will be VT aware out of the box, while Intel’s separate memory controller will not be VT ready until 2007. On paper this could mean that the AMD chips will be faster at handling VT. I find this tidbit of information interesting because it shows that as the interest in virtualization grows, so must the hardware support for it in order to meet consumer expectations. To me the next piece of hardware that needs to build support for virtualization is the video card. Until this happens roommate OS installations (the term for side-by-side OS installations on a machine with a hypervisor) will not be able to run graphic-intensive applications at bare-metal speed. Do your sources have any information regarding what ATI and nVidia might be doing, and do you think that this is a logical step or simply a pipe dream?

AP: It’s true that one of the fastest-emerging requests in virtualization use is 3D/CAD development. And there are some rumours, mainly fed by a specific Apple patent filed in 2002 and recently granted.

I’m not a graphics expert, so I can’t say whether modern video adapters already have the hardware capabilities to accommodate something I would call video partitioning, but we have to note the market trend is actually going in the opposite direction: solutions like nVidia SLI or ATI CrossFire aim to aggregate rendering power, not to partition it.

I also think that, while I have heard some customers asking for reliable 3D support in virtualization products, market demand is still too low to make it happen today.

AK: I e-mail you out of the blue and say: Alessandro, I want to learn about virtualization and what it can do for me, where do I start?
What is your response?

AP: When I started approaching virtualization there were neither books nor vendor courses (and still today I strongly believe there is a significant lack of training material).
I learned a huge amount by silently following newsgroups for years.
Still today, the most precious source of knowledge and real-world case studies is the community.

So my suggestion is: read the books you find about the product you need to learn, but never forget to carefully monitor all the web forums, newsgroups and blogs out there covering virtualization.
No book is updated enough, or complete enough, to offer you the same breadth.

Thank you for reading and enjoy your stay.

Benchmarks: Evaluation of ESX Server Under CPU Intensive Workloads

Phillip J. Windley and his student Terry Wilcox published a very interesting 32-page paper about how VMware ESX Server 2.5.2 virtual machine performance depends on the amount of assigned virtual RAM and on whether Intel Hyper-Threading or Virtual SMP is enabled:

We present a summary of our evaluation of VMWare ESX Server 2.5.2. In particular we confirm and work around known timing issues with guest operating systems running on ESX server. Our work validates and adds to the work of other groups modeling the behavior of ESX Server during CPU intensive workloads by exploring in more detail the effects of Hyper-Threading and the overhead of Virtual SMP.

We report and measure a previously unknown performance penalty for allocating too much RAM in virtual machines with Linux as the guest operating system.

This paper also describes the testbed we used to manage and run our tests including a virtualization test management system we developed to run the tests we performed.

We describe timing issues that affect performance testing on ESX Server and a method for measuring runtimes that gives accurate results.
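The excerpt doesn’t spell out their measurement method, but one common workaround for unreliable guest clocks, not necessarily the one the authors use, is to time workloads from a machine outside the virtual infrastructure. A minimal sketch, with a hypothetical guest hostname and benchmark command:

    import subprocess
    import time

    def timed_run(guest, command):
        # Time a guest workload from an external physical machine,
        # sidestepping the guest's drifting clock. Assumes key-based
        # SSH access to the guest.
        start = time.time()
        subprocess.check_call(["ssh", guest, command])
        return time.time() - start

    print(timed_run("esx-guest-01", "./cpu_benchmark"))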

The reported conclusions are extremely appealing:

  • Single CPU virtual machines scale better than virtual machines using Virtual SMP
  • Hyper-Threading increases throughput if there are a large number of virtual CPUs, but makes no difference if the number of virtual CPUs is less than or equal to the number of physical CPUs
  • Do not allocate excessive resources to virtual machines. Additional resources may hurt performance

Read the whole paper at source.