Quoting from eWeek:
Linux vendor Red Hat Inc. is aggressively pushing to get Xen virtualization technology included in the Linux kernel as quickly as possible.
Brian Stevens, the newly appointed chief technology officer of the Raleigh, N.C., company, said that previous efforts to merge Xen into the kernel ran out of steam when nobody stepped forward to drive them. Red Hat is now stepping forward, Stevens said.
This move comes as Microsoft Corp. is pushing its own virtualization products and recently relaxed some of its licensing requirements around Windows Server 2003 to facilitate more pervasive adoption and use of those technologies.
Part of the Red Hat emerging technology team’s efforts will be to drive the Xen virtualization technologies as part of the Linux kernel rather than as part of a sidebar project, as is currently the case, Stevens said.
“My goal is to get this done in the most collaborative way possible with anyone in the community who wants to participate,” Stevens said, adding that Red Hat is committed to staffing the project with enough people who have the technical knowledge to get the work done.
The company also recently hired six additional staff members in the virtualization area alone. “We haven’t been able to focus enough on this until now to help get it done. So we’ve stepped up to work on this and help get it done. We’d like to have this done in the next two months. I don’t think it’s a long-term project at all,” he said.
A big part of the strategy is making virtualization and its management a part of a Linux system, “so this is not just maturing the technology but having the operating system itself, the kernel itself, be intimately aware that it is being virtualized so that it participates,” Stevens said.
Andrew Morton, the current maintainer of the Linux 2.6 kernel, who works for the Open Source Development Labs, in Beaverton, Ore., told eWEEK that he had expected a submission of the University of Cambridge Computer Laboratory’s Xen virtualization technology for merging into the Linux kernel quite some time ago.
“But Red Hat is a strong engineering company, and I trust them to produce a good contribution and to support it,” he said.
Once a contribution emerges from a development team, Morton said he will actively identify other stakeholders and solicit their feedback. “There are quite a few stakeholders here, including XenSource, Red Hat, IBM and Intel,” he said. “VMware is also working on virtualization in general, and they will provide feedback on the proposed design.
“I’ll then make decisions based upon that. As we haven’t recently gone through that process on Xen, I am not able to predict who will react, and how. So, the bottom line is that it is too early for me to say how it will turn out,” Morton said.
Ian Pratt, of the University of Cambridge in England and the leader of the Xen project, said there were a number of reasons for the delay in including Xen in the kernel. Primarily, Xen 3.0 had suffered from a bit of feature creep. 32-bit Physical Address Extension (PAE) support and Intel’s Virtualization Technology, for example, were added very late in the cycle. “We were aiming for an end-of-summer release, but this now looks on target for December,” Pratt said.
It didn’t make much sense to start preparing patches for sending upstream until the Xen 3 guest API was close to being frozen, because there is a significant resource cost in maintaining multiple trees, he said.
“We hit this point a month or so back, and there’s actually been a lot of activity since then,” Pratt said. “We’ve done a first cut reorganization of our patch into the form that was agreed on at the last Xen summit, forward ported it to the head of Linus [Torvalds’] tree, and put out a call for help to Red Hat, SuSE, IBM, HP and all the other stakeholders to help us beat it into shape. It’s great to see them stepping up and promising to commit some of their best guys to help.”
The technology is certainly ready for inclusion in the kernel, he said. Rearranging the patch into a form that fits in better with the existing code base needs to happen first, but this is fairly mechanical.
“We maintained our patch in a form that made our life easier, and helped us track stable Linux versions while getting the stability of our own software right. It’s now time to make the change,” Pratt said.
Pratt is confident that a patch will be ready to submit for inclusion in the kernel within two months, as none of the reorganization and cleanup work that needs to happen is very hard. “But it is essential we get the aesthetics right. And whether Andrew/Linus accept it is a different matter,” Pratt said.
He welcomes Red Hat’s support, saying the company has good engineers who are well-known and respected in the Linux community, which is bound to make the process run more smoothly. “SuSE, IBM, HP are all helping the XenSource team too, just maybe not so publicly,” he said.
Asked what recent contributions the vendors have made to the technology, Pratt said that IBM’s mandatory access control (MAC) work is in, as is support for Intel VT-x. Advanced Micro Devices Inc.’s Pacifica support is working well too, but will not make Xen Version 3.0.0.
“Further down the line we’re doing some cool stuff with I/O vendors that will result in zero-cost I/O virtualization. HP has contributed some useful tools for performance profiling and instrumentation, while SuSE has helped with PAE support and Intel with x86_64,” he said.
A lot of companies, most notably IBM, are also helping with testing. “We’ve had a lot of support from individuals in the Xen community too,” Pratt said. “Xen 3.0 is a big team effort. It is just taking a little longer than we’d hoped.”
John Loiacono, executive vice president for software at Sun Microsystems Inc., welcomed the move to drive the virtualization technology around Linux forward.
Any aggressive move by Red Hat to get the technology into the Linux kernel will be fully supported by Sun, which is embracing the Xen virtualization technology across its products and platforms. It has some of its brightest engineers working on this and is collaborating with others in the open-source community, Loiacono said.
Even Sam Greenblatt, a senior vice president at Computer Associates International Inc., told eWEEK that he is pleased with the progress made with Xen. CA will support anything going into the kernel that supports virtualization. “It’s come a long way. We just want to be careful to make sure it goes in the right way,” he said.
That marks a significant turnaround from earlier this year, when Greenblatt told eWEEK, “We think [Xen] is great innovation, but its concept of virtualization is still not to the point that we want to see in there [the kernel].”
Red Hat’s Stevens said his goal is to make virtualization as ubiquitous as possible, thereby allowing customers to decide whether they need it or not. “Our strategy is around how to make it ubiquitous, what are all the issues that make it ubiquitous and part of the platform,” he said.
“But when you get there, a range of great benefits comes with it, like the agility of being able to migrate workloads, suspend workloads, drive up utilization on a system because now you can isolate workloads from each other whereas before an entire box had to be dedicated to a specific application,” he said.
But this will require an entirely new management infrastructure around it as those that exist today revolve around managing physical boxes. “While people are extending the existing management platforms to virtual boxes, what they are not doing is changing the management paradigm, and that needs to change to one where applications, systems and resources are just meeting for a point in time. This needs to be more of a brokering system than the management of physical systems,” Stevens said.
While XenSource founder Pratt said an “entirely new management paradigm and infrastructure” is not needed to make good use of virtualization, he said such an infrastructure would enable it and could “save the big shops a ton of money by doing so.”
“XenSource will be one of the companies offering management solutions around Xen, along with a bunch of others. Hopefully XenSource’s will be best as we’ve been working on this topic in the university for a long time, so we have some deep control and automation facilities rather than just a flashy GUI,” he said.