Release: vizioncore esxMigrator 1.0

As expected, vizioncore has just released a new product in its offering, still focused on VMware ESX Server:

esxMigrator provides a powerful tool that can support the upgrade process from ESX Server to VI3, by enabling smooth and seamless migrations to the new platform with minimal downtime even for complex environments.

A GUI-driven management tool to help ease the migration process and enable companies to leverage the new efficiencies of VI3, esxMigrator completes the migration with minimal downtime. The existing production environment can still be used by the workforce, even as individual virtual machines are being replicated and ported to the new production server. During the migration process, the source virtual machine remains intact and unmodified. Because the source retains its integrity until the cutover to the new server is completed, administrators can leverage this feature to support roll-back contingency plans. The destination virtual machine is also migrated in a consistent state, keeping all data intact…

Among its features:

  • Allows source virtual machines to be used continuously during migration process
  • Keeps both source and destination virtual machines intact, supporting roll-back contingency plans
  • Eliminates manual scripting
  • Can be used by any level IT administrator
  • Migrates multiple virtual machines
  • Windows GUI-driven migration tool
  • Reduces downtime to simple reboot
  • Offers scheduling option to control timing of reboots

Watch the demo here. Download it here.

The virtualization.info Virtualization Industry Roadmap has been updated accordingly.

Red Hat suddenly changed its mind: Xen is not stable enough

Quoting from ZDNet:

“[Xen] is not stable yet, it’s not ready for the enterprise,” Red Hat’s vice president of International Operations, Alex Pinchev, told ZDNet Australia today via telephone.

“We don’t feel that [Xen] is stable enough to address banking, telco, or any other enterprise customer, so until we are comfortable, we will not release it.”

Red Hat has spent “millions” of dollars testing Xen, according to Pinchev, and has hundreds of customers around the world trying out beta versions of the software…

Read the whole article at source.

Barely four months ago Red Hat made a big splash announcing the launch of its Integrated Virtualization strategy, which would include the integration of Xen in Red Hat Enterprise Linux 5 and a series of services and tools around it.
Now, suddenly, the company has changed its mind. Why?

Possibly because:

XenSource launches its Partner Program

Quoting from the XenSource official announcement:

XenSource, the leader in infrastructure virtualization solutions based on the open source Xen(TM) hypervisor, today introduced a comprehensive channel partner program for companies planning to deliver virtualization solutions based on the Xen hypervisor. Distribution of XenEnterprise for North America and Europe will be delivered through agreements with ComputerLinks, DataSolutions, Interquad, ITway and Tech Data.

The program is significant as it enables system integrators, hosting service providers and consultants to expand their company’s core virtualization skills by providing the expertise required to help end-users plan and deploy Xen virtualization solutions based on XenSource’s XenEnterprise multi-OS virtualization platform. The program includes a designated XenSource Certified Professional training and validation program.

The channel program offers:

  • Authorization to resell XenEnterprise
    offering members the opportunity to be among the first companies to deliver Xen-based virtualization
  • Training and Certification
    offering members the opportunity to become certified experts in the delivery of Xen-based virtualization
  • Demand Generation
    providing a lead distribution process and the ability to leverage standardized XenEnterprise demand generation programs
  • Branding and Messaging
    enabling partners to differentiate their offerings through access to XenSource trademarked branding and integrated messaging
  • Access to expertise
    providing access to the industry leading knowledge and tools for Xen-based virtualization through the XenEnterprise multi-OS virtualization platform

There is no fee for charter members to join through 2006…

Is defragmentation a real benefit for virtualization?

In its latest editorial, Processor mentions two opposing positions on the value of defragmentation in virtual infrastructures:


Another issue facing those setting up virtual servers is the increasing disparity between high-performing CPU and memory chips and storage. According to Diskeeper’s Materie, the fastest hard drives available in the mainstream marketplace run at 15,000rpm. In contrast, CPUs and memory are measured in nanoseconds.

According to Materie, server administrators need to look for ways to make the slowest component in the network as fast as possible to alleviate bottlenecks. SMEs can strive for relative parity by employing high-performance hard drive setups. SANs have dropped significantly in price over the last several years, making them affordable for SMEs.

In addition, Materie says that defragmenting storage allows for faster access to data by consolidating file fragments. When the CPU is not searching all over the hard drive (be it virtual or physical) for pieces of a given file, you can expect quicker performance and a decrease in bottlenecks through I/O channels.

For his part, Illuminata’s Haff says that the benefits of defragmenting storage in a virtual setup are minimal, even though virtualization has the potential to cause greater disk fragmentation.

“The problem is that CPU and memory are so much faster than disks that optimizing disk speed is like a drop in the proverbial lake,” Haff says. “As a last tweak, optimization sometimes makes cost-effective sense, but it is not a problem standing in the way of adopting virtualization.”

Read the whole article at source.

I have to strongly disagree with Haff’s position: in my real-world experience, defragmentation of both host and guest disks has a notable impact, since I/O is a critical bottleneck in every virtual infrastructure.

While it’s true defragmentation isn’t the final solution to performance issues (and I don’t think anybody at Diskeeper or the other defrag companies believes it is), it’s worth remembering that its price, and the effort it requires to work, is minimal.
Putting in place a meaningful defrag schedule (so operations don’t start at workload peaks) brings benefits there is no reason to give up.

VMware Ultimate Virtual Appliance Challenge phase 2 ends

The impressive VMware marketing campaign called the Ultimate Virtual Appliance Challenge, promising $100,000 for the lightest, most useful, most creative virtual machine ever, is nearing its conclusion.

Voting for the best Virtual Appliance officially ended on July 31st.
The virtualization community voted for several different appliances but didn’t show a clear preference, with a maximum score of 3/5.

Apart from the first couple of entries, many of the highest-ranked ones are virtual appliances running traditional products, which could run as a LiveCD or be installed without much effort. Others are plagued by very complex configuration steps or notable size.
For these reasons I believe most submitters missed the true spirit of the competition: create a no-brainer, slim, single-purpose, original and useful tool, just like a hi-fi stereo (or an iPod if you prefer), avoiding throwing in existing tools without customization.

I didn’t check all the virtual appliances, but so far none seems to me an effort big enough to be awarded $100,000.
What I personally hoped to see (and can’t find), given the above prerequisites:

  • a URL filtering (aka content filtering or Internet filter) tool, able to switch between proxy mode (if you are at home) and sensor mode (if you are at the office and can put it on a switch’s monitor port, like Websense can do)
  • a media center tool, able to store uploaded music, videos and photos, serving home users (acting like a traditional media center) and professional users (switching between radio broadcasting station mode and DJ station mode, for mixing songs at clubs)
  • a Quality of Service (QoS) tool, acting as a proxy
  • a storage hub, able to map remote storage resources (Fibre Channel or iSCSI NAS and SAN) and present them in a uniform way (something actually done in certain kinds of storage virtualization approaches)
  • a news aggregation tool, collecting news from several sources and in different formats (ATOM/RSS feeds, web page grabbing, newsgroups, Google Alerts, etc.) and providing a single resulting output, normalized and without duplicate entries

Now the final word will come from the illustrious judging panel, which will name the winners on August 14th.

After that, VMware could launch a virtual appliance marketplace.

AMD to launch virtualization in Opteron next month and I/O virtualization in 2007

Quoting from SearchServerVirtualization:


As a result, more OSes can run simultaneously on a system and share the hardware, according to Margaret Lewis, commercial software strategist with AMD. “The extensions take away some of the overhead and allow for the software to spoof this environment more simply,” said Lewis. “Now we’ll see near-native performance levels. The virtualized machine will run as well as it would on bare metal itself.”

AMD-V is already available in desktop and mobile versions of AMD processors. AMD said it expects to make an announcement next month on plans to release a virtualization-enhanced version of Opteron, the AMD server chip. AMD I/O virtualization-enabled chip sets will likely hit the market in 2007, the company said.

Read the whole article at source.

The virtualization.info Virtualization Industry Roadmap has been updated accordingly.

Microsoft Virtual Machine Manager beta 1 expected for August

After announcing the product and its related beta at WinHEC 2006, Microsoft is finally opening the Virtual Machine Manager beta program, a solution aimed at adding the enterprise management capabilities that Virtual Server customers have been demanding for years.

Unfortunately beta testers are in for a bad surprise: at the moment the program is open but no downloads are available, and a red warning advises:

The product is currently in the final stages of testing and still on track for availability this August.

At least you are still in time to enroll in the beta.

The virtualization.info Virtualization Industry Roadmap has been updated accordingly.

Tech: Configure unattended operations at virtual machine shutdown in Virtual Server 2005

Ben Armstrong posted another very good technical tip about the default action to perform when a virtual machine shuts down in Microsoft Virtual Server 2005:


Each Virtual Server virtual machine has an attribute called ‘UndoAction’. This is normally set to ‘1’ for ‘keep’, but it can also be set to ‘0’ for ‘discard’ or ‘2’ for ‘commit’.

Below is a simple script that checks and sets a virtual machine’s UndoAction attribute:

Set objVS = CreateObject("VirtualServer.Application")
Set objVM = objVS.FindVirtualMachine("The name of the virtual machine")

WScript.Echo "VM Name: " & objVM.Name
WScript.Echo "Undo action: " & objVM.UndoAction

' Set the undo action to discard
objVM.UndoAction = 0

' Confirm that the settings change stuck
WScript.Echo "New Undo action: " & objVM.UndoAction

Be sure to read the original post for updates and comments.
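The same COM interface can also drive bulk changes. As a sketch of my own (not part of Armstrong's post), assuming the VirtualMachines collection that the VirtualServer.Application object exposes, you can walk every registered virtual machine and apply a single undo policy:

```vbscript
' Sketch: enumerate every registered virtual machine and report
' (optionally set) its undo action. Requires Virtual Server 2005
' installed; run with cscript.exe.
Set objVS = CreateObject("VirtualServer.Application")

For Each objVM In objVS.VirtualMachines
    WScript.Echo objVM.Name & " undo action: " & objVM.UndoAction
    ' Uncomment to force 'discard' (0) on every machine:
    ' objVM.UndoAction = 0
Next
```

The values follow the mapping above: 0 discards, 1 keeps, and 2 commits the undo disks at shutdown.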

VMware could decide to support competitors’ hypervisors

Despite its efforts to push standardization in the server virtualization market, VMware seems unable to steer the community in its direction.
At the Ottawa Linux Symposium 2006 the company wasn’t even mentioned among the best-suited candidates for Linux kernel inclusion.

VMware is also finding problems with Microsoft, which not only won’t adhere to the suggested standards but is actively working to impose its own through an agreement with XenSource.

Solid in its market-leader position, VMware is unlikely to change its mind and embrace someone else’s standards, so the company seems to have no other choice, if it wants to further permeate the market, than supporting competitors’ implementations.

A hint in this direction comes from an old interview (likely conducted before the VMware Infrastructure 3 release) that VMware president Diane Greene granted to ComputerWorld, published today:


ComputerWorld: Infrastructure 3 supports Microsoft virtual machines. Do you plan to support Xen?

Diane Greene: That’s one reason we want to see the standards out there. If customers want it, we’re happy to support it. Just like we support all operating systems, all hardware, all storage. We’ll support all hypervisors…

It would make perfect sense if ESX Server 4.0 (or VMware Infrastructure 4 if you prefer) supported virtual machines natively running on Xen and on the upcoming Microsoft Windows Server Virtualization hypervisor, beating the Redmond giant at its own game.

Virtualization technologies have to converge to be included in Linux kernel

What emerged from the very first day of this year’s Linux Symposium is that the various virtualization approaches have to reach a common standard before being considered for Linux kernel inclusion. In other words, it’s unlikely one technology will be chosen over the others.

SWsoft is reporting the different positions on the specific virtualization approach called OS partitioning, implemented by solutions like its own products Virtuozzo (commercial) and OpenVZ (open source), Sun Solaris Containers, UML, Linux-VServer and others:

Eric Biederman wants to have so-called namespaces in kernel. Namespaces are basically a building blocks of containers, for example, with user namespace we have an ability to have the same root user in different containers; network namespace gives an ability to have a separate network interface; process namespace is when you have an isolated set of processes. All the namespaces combined together creates a container. But, as Eric states, an ability to use not all but only selected namespaces gives endless possibilities to a user.

IBM people want application containers, and for them the main purpose of such containers is live migration of those. The difference between an app. container and the “full” (system) container is a set of features: for example, an application container might lack /proc virtualization, devices, pseudo-terminals (needed to run ssh, for example) etc. So, an application container might be seen as a subset of a system container.

OpenVZ wants system containers that resemble the real system as much as possible. In other words, we want to preserve existing kernel APIs as much as possible inside a container, so all of the existing Linux distributions and applications should run fine inside a container without any modifications. Of course, the goal is not 100% achievable, for example we do not want the container to be able to set the system time.

Linux-VServer wants just about the same as OpenVZ, it’s only that their implementations of various components are different, and their level of a container resembling a real system is a bit lower (for example, in networking).

Read the whole article at source.

Solution convergence is a huge problem here, just as it is for the Xen / VMware server virtualization approaches, and an agreement seems far off at the moment.