Blades are no good for virtualization

It’s common practice for some vendors to try to conflate server virtualization with blade technology.
At the moment blades can only help with physical consolidation, adding nothing new to current virtualization technologies.

SearchServerVirtualization finally published a critique of this forced pairing:

…these efforts haven’t been enough to convince Rod Lucero, CTO of the Minnesotan VMware consultancy VMPowered, to recommend blade servers to customers undertaking large-scale virtualization and consolidation projects.

The big problem with blades, Lucero said, is the limited number of network interface cards (NICs) you can attach to a blade — “two, and that’s it,” he said, although some newer blade systems are starting to support more.

Lucero’s other gripe with blades is that they are often sold diskless, thus requiring the blades to boot from SAN – a less-than-perfect process.

If you are going to go with blades, you’re best off buying them fully populated…he’s often seen companies invest heavily in a sparsely populated blade chassis, only to find that they don’t really need the extra capacity down the road…

Read the whole article at source.

UML creator criticizes Xen, OpenVZ

Quoting from Linux.com:

Dike also noted that SWsoft and XenSource are trying to get OpenVZ and Xen technology, respectively, into the mainline kernel, but says that’s unlikely. Dike says that Xen “doesn’t fit in the Linux world,” and called it “a technological dead end.” He predicts a “family of virtualization stuff under KVM.” Whatever makes it into the mainline kernel, says Dike, is what the distros will follow.

He also says that OpenVZ is unlikely to be adopted in the mainline kernel tree, at least as it is. Dike says that OpenVZ has to have “code sprinkled all over the place” to work, and it violates conventions within the kernel…

Read the whole article at source.

Update: Kir Kolyshkin, Project Manager of OpenVZ, politely answered Dike’s criticisms on the official blog.

VMware codename Fusion could run virtualized Mac OS X instances

IT Week reports a possible scoop about the capabilities of the forthcoming VMware product codenamed Fusion:

It should be noted that OS X is not one of the operating systems that can be run on another virtual machine, however. Virtualised OS X remains a sticky subject because Apple will not allow its operating system to run on non-Apple hardware.

VMware has hinted that it might enable virtualised instances of Mac OS X by implementing some sort of check to make sure that an OS X virtual machine was running on Apple hardware, but so far it seems that OS X will remain walled off from the world of virtualised deployment.

If this feature is confirmed, maybe through an exclusive deal with Apple (which is unlikely but possible), Parallels may find it more difficult to maintain the interest of Mac OS X users.

SWsoft on Virtuozzo isolation capabilities

A typical criticism leveled at OS partitioning solutions is that they offer less isolation than server virtualization solutions.

On the corporate blog, Ilya Baimetov, Director of Technology at SWsoft, provides his point of view on the topic, clarifying Virtuozzo’s capabilities in:

  • Namespace isolation
  • Functional isolation
  • Fault isolation
  • Performance isolation
  • Security isolation

Read the whole article at source.

Xen integration in Sun Solaris 10 for summer 2007

Several news sources are quoting the official Sun announcement about its commitment to integrate Xen into upcoming Solaris 10 releases. Every article also reports that this integration should appear in the next Update 4, which is expected during the summer, even if Sun has not been so clear about the release date.

This isn’t the only Sun strategy for offering Linux through virtualization: its OS partitioning technology, known as Solaris Containers (or Zones), is expected to be further improved and to introduce the new Solaris Containers for Linux Applications feature, able to run unmodified Red Hat Enterprise Linux or CentOS binaries inside a standard zone, with official support expected to extend to other Linux and BSD distributions.

Solaris Containers for Linux Applications (formerly BrandZ) is already implemented in current OpenSolaris builds, but it’s unclear when the bits will make their way into an official OS update.

The virtualization.info Virtualization Industry Roadmap has been updated accordingly.

Announcement: virtualization.info Rent-A-Lab

Today virtualization.info launches Rent-A-Lab: the first virtual datacenter rentable on demand.

Offered in partnership with Kybernetika, virtualization.info Rent-A-Lab is a highly available, enterprise-class infrastructure you can reach, configure and control completely online, from anywhere in the world, 24/7.

It offers full control over 1 management station, 9 servers, 2 SANs and 2 network switches, allowing you to do any sort of work with virtualization technology:

  • test new software before using it in a corporate environment
  • evaluate a new feature or a new beta
  • train company staff on new technologies
  • benchmark a specific configuration
  • study for a certification exam
  • show your customers a new product

and more. All from your notebook or desktop.

virtualization.info Rent-A-Lab is made for virtualization, so each part of the infrastructure is certified to run the new VMware Infrastructure 3, Microsoft Virtual Server 2005 (and the upcoming Windows Server Virtualization hypervisor), Xen/XenEnterprise/Virtual Iron, SWsoft Virtuozzo or Sun Solaris Containers. But you can also use it to test virtualization solutions like those from Altiris, Cassatt, Citrix, Dunes, Leostream, PlateSpin, Scalent, Surgient, vizioncore, VMLogix, etc.

Customers can rent the full infrastructure or just part of it, depending on their needs, for 1 day or more, up to 2 weeks in a row.

To see a demo of what can be done with this infrastructure, and to check equipment characteristics and prices, visit the new virtualization.info Rent-A-Lab site.

IDC already calls for Virtualization 2.0

In December 2006 IDC published its predictions for the current year, heralding the rise of Virtualization 2.0 and foreseeing widespread adoption of virtual appliances:

  1. The next wave in virtualization emerges, which IDC calls Virtualization 2.0. Users will focus on continuity, disaster recovery, and high availability
  2. Software appliances will become a household word in 2007. The convergence of virtual machine technology and a new initiative by several tool vendors is giving birth to this new form of software packaging
  3. The use of Linux paravirtualization will be mostly sizzle – not steak. Few users are going to substitute their current kernel with a paravirtualized kernel
  4. Management of virtual infrastructure takes center stage at large enterprises, extending adoption of virtualization across test, development, and production
  5. Virtualization and security will become stronger focal points for ITIL/ITSM vendors, who will do more to add support for virtualization and managing virtual environments to their service management offerings

I think Virtualization 1.0 is not even near, given the many limits in hardware and software support, disaster recovery (something which should be included in any 1.0 product version) and enterprise management.
I also have a very different point of view about virtual appliances, which introduce many more risks than benefits.

VMware publishes a community supported HCL for ESX Server

One of the biggest limits (or benefits, depending on your point of view) of VMware ESX Server is its limited support for hardware and software.

For example, customers cannot try the product on cheap IDE/SATA lab setups, cannot replace the embedded iSCSI initiator (currently a very old Cisco implementation), etc.

The company’s policy on hardware/software support is now slightly changing, with the publication of a community-driven Hardware/Software Compatibility List: VMware customers inclined to experiment can now report which products work with which version of ESX Server, while others can rate the suggested implementations.

VMware support will act accordingly:

VMware Global Support Services (GSS) will assist customers in problem analysis to determine whether or not the technical issue is related to the 3rd party hardware or software. In order to isolate the Error, we reserve the right to request that the 3rd party hardware or software be removed. This will only be done where we have reason to believe the issue is related to the 3rd party hardware or software.

If VMware GSS cannot directly identify the root cause or it is reasonably suspected that the problem is related to the 3rd party hardware or software, we will direct the customer to open a support request with the 3rd party vendor’s support organization.

It’s reasonable to think VMware will use this initiative to lower testing costs, to understand which 3rd party products are most requested, and to rely on community efforts to identify which integrations are most reliable, so that they can be moved, after enough QA, to the official ESX Server compatibility guides.

Benchmarks: Advantages of Dell PowerEdge 2950 Two Socket Servers over Hewlett-Packard Proliant DL 585 G2 Four Socket Servers for Virtualization

Dell published an interesting 16-page paper where a 2-way Dell PowerEdge 2950 is compared against the new 4-way HP ProLiant DL585 G2, both running several virtual machines on a VMware Infrastructure 3 platform:

There is a lot of debate these days around what is the optimal hosting platform for a virtualization deployment. Most of this debate is centered around the decision to deploy either 2 socket or 4 socket building blocks as the basis of the infrastructure. In order to illustrate the advantages of using two-socket servers for virtualization over four-socket servers, a test was conducted with VMware Infrastructure 3 on the Dell PowerEdge 2950 and the HP ProLiant DL585 G2.

The results of these tests show that three PowerEdge 2950 two-socket servers can provide up to 44% more performance, 57% more performance per watt and a 95% average advantage in price / performance than two HP ProLiant DL585 G2 four-socket servers.

Read the whole document at source.

Measurements focus on the performance obtained by several Microsoft SQL Server 2005 and MySQL 5 virtual machines running concurrently, and the document is worth reading.