InfoWorld names VMware ESX Server a top technology impacting IT in 2005

Quoting from the official announcement:


VMware, Inc., the global leader in virtual infrastructure software for industry-standard systems, today announced that VMware ESX Server received the InfoWorld 2005 Technology of the Year award. The InfoWorld Technology awards recognize significant technologies of the past year that promise to make the greatest impact on enterprise IT strategies as well as the products that best exemplify the implementation of those technologies.

“With our fifth annual Technology of the Year awards, we honor the most enterprising enterprise products, ones that have fundamentally altered the IT landscape,” said Steve Fox, editor-in-chief at InfoWorld.

“VMware created its ESX Server virtualization product for businesses that need truly enterprise-class virtualization. ESX Server implements the consolidation, dynamic provisioning, resource pooling and all-bases-covered availability assurance of expensive system and storage hardware,” commented Tom Yager, technical director at the InfoWorld Test Center. “But ESX Server does it with ordinary servers, modular SANs and vanilla operating systems.”

Yager continued, “Those coming down to x86 from Sparc, Power, or PA-RISC hardware should consider no option other than ESX Server. And those running more than a rack’s worth of x86 servers should think seriously about trading some raw performance, so often wasted, for the high-availability, ultimately reconfigurable server infrastructure that this product enables. It’s remarkable – even marvelous – to see VMware carry IT so far with software that fits on two CDs.”

VMware ESX Server is virtual infrastructure software for partitioning, consolidating and managing systems in mission-critical environments. Adopted by thousands of IT organizations worldwide over the last three years, ESX Server has saved hundreds of millions of dollars in costs by providing server consolidation, fast provisioning and disaster recovery. ESX Server provides a highly scalable platform with advanced resource management capabilities, which can be managed by VMware VirtualCenter. Using VMotion technology, enterprises are able to respond to changing business demands in real time and move virtual machines from one physical server to another with continuous service availability.

“We are very honored to have received the prestigious InfoWorld 2005 Technology of the Year award,” said Karthik Rau, director of product management at VMware. “We take tremendous satisfaction in seeing thousands of enterprises deploy our virtual infrastructure software in ways that drive real value and solve real problems. The InfoWorld award reaffirms what our customers have been telling us all along – that enterprises are looking at VMware virtual infrastructure as a critical technology layer for all their x86 based systems.”

Past industry recognition includes the CNET Enterprise Product of the Year award, IDG Enterprise Product of the Year award, Microsoft TechEd Europe Best of Show award, PC Magazine Technical Excellence award and Windows IT Pro Magazine Best Overall Product in 2004 award.

VMware works with Intel to optimize virtualization of Intel servers and desktops

Quoting from the official announcement:


At the Spring 2005 Intel Developer Forum, VMware, Inc., the global leader in virtual infrastructure software for industry-standard systems, today announced an expanded relationship with Intel to further drive the rapid expansion of virtualization of Intel servers and desktops throughout the enterprise.

As part of the expanded relationship, the two companies will collaborate on optimization of VMware server and desktop products working with Intel’s future chip-level virtualization technologies. The end result for enterprise users will be a robust, optimized set of operating system-agnostic virtual infrastructure solutions that leverage VMware’s seven years of extensive technology innovation and leadership in delivering high performance, production-ready virtualization products for the Intel architecture.

The current results of this collaboration can be seen at the Spring 2005 Intel Developer Forum where VMware products running on Intel’s Virtualization Technology (formerly codenamed Vanderpool Technology) prototypes will be showcased during Intel keynotes and in the solution pavilions. VMware will incorporate product support for Intel’s Virtualization Technology as it becomes available in Intel microprocessors.

“The combination of Intel Virtualization Technology and VMware’s virtualization expertise and products will lead to truly compelling usage models for enterprise computing,” said Pat Gelsinger, Senior Vice President and General Manager, Digital Enterprise Group, Intel. “Virtualization has reached a point of real maturity in the server space and will bring exciting new applications to enterprise clients including increased manageability, improved ease of use and enhanced security. We look forward to VMware’s support of Intel’s Virtualization Technology.”

“We are pleased that Intel, the world’s largest chipmaker, is adding virtualization support in the hardware. VMware endorses this move as it further validates the significance of our core market. We are excited to work with Intel to make future hardware technologies more virtualization-friendly and optimized for VMware products,” said Diane Greene, President, VMware. “We are looking forward to our increased co-operation as market leaders to proliferate the most advanced and robust virtualization solutions for the enterprise.”

In addition to product development and support, VMware and Intel are developing plans for a broad range of marketing activities that include advertising, events and solution centers to educate customers on the power of VMware virtual infrastructure on Intel servers and desktops.

Intel sees virtualization as key to child-proof PCs

Quoting from The Register:


A pair of Intel’s finest code pushers took the stage this week at the Intel Developer Forum to explain the merits of virtualization technology. In this particular case, the staffers looked to hype reasons why business users and consumers might like to have PCs that run different types of operating systems or might like to carve up PCs into different partitions. Intel, you see, is awful proud of some “hooks” it has come up with that make it easier for software makers to create partitions and the like.

“(Virtualization) is kind of like a swiss army knife,” said Greg Bryant, a director of marketing in Intel’s digital enterprise group. “It can be used for a lot of things.”

It’s true enough that virtualization has become very popular in the server market. VMware, in particular, has earned heaps of praise for being able to run multiple OSes on the same server. The case for similar technology on the desktop is less clear.

Connectix – now owned by Microsoft – does make a VirtualPC product that lets Mac users run Windows software and lets Windows users run old applications on their new computers.

Intel expects to see some business users build on these concepts. Companies might, for example, set up one partition that can run only approved software. Users can install iTunes or Doom or whatever unsafe software they like in another partition. Software makers might also create a type of “service operating system” that could be accessed no matter what has happened to the main copy of Windows or Linux.

“This lets you isolate the systems and recover if there is a malicious attack,” Bryant said.

What do consumers get?

Well, they can partition off the operating system into “for adult” and “for children” compartments.

“I really don’t want my kids messing with the Quicken files we use to pay our bills,” said Bill Leszinske, a director of marketing in Intel’s desktop group. Leszinske would return to this bad child theme again and again.

PC makers might also try and fine tune their systems to handle certain functions better. Why go through the time and trouble of booting Windows when you just want to play a DVD? Here comes the instant-on DVD partition, Leszinske said.

While the Intel marketeers did provide a couple of useful suggestions, they didn’t have answers for some of the more difficult questions posed by the IDF crowd. Things like, “Do we really need separate partitions for our evil children? Doesn’t a separate login do the trick today?” “What happens when Microsoft wants us to pay for four licenses for our four partitions?” “Do we install Norton four times?” “How about Service Pack 3?”

“Those are some of the tough things we have to get to,” Leszinske said.

So far, this type of technology seems much more useful in the server world. It’s not at all clear that consumers need to do lots of PC carving. That is unless you have really crap kids, apparently.

But whether or not Intel is ready to answer some of these “tough” questions, it is ready to release the technology. Expect to see virtualization appearing in a PC near you later this year. It will pop up in the new Itanium due out by year end as well and then in Xeons and mobile chips in 2006.

Intel, AMD call for innovation on virtual platforms

Quoting from eWeek:


Intel and Advanced Micro Devices are touting their respective hardware-level virtualization technologies as platforms that will help spur greater innovation around such environments.

At the Intel Developer Forum here this week, Intel Corp. has been showing demonstrations of its Intel Virtualization Technology—formerly code-named “Vanderpool”—which is due to start appearing later this year in desktop chips and 64-bit Itanium processors, and next year in its Xeon server chips and mobile processors.

For its part, AMD (Advanced Micro Devices Inc.) will release the specification for its “Pacifica” virtualization technology later this month.

The technology will start appearing in AMD 64 processors in mid-2006, said Margaret Lewis, commercial software strategist for the chip maker.

Virtualization decouples the operating system from the hardware, letting users run multiple operating systems on individual servers or combine multiple systems into a single virtualized pool, leading to greater flexibility in how IT resources are used.

It has been commonplace in mainframe and Unix systems for many years, but the way x86 processors are built has made it difficult and cumbersome to bring virtualization into that space.

Companies such as VMware Inc. and Microsoft Corp. are enabling it on a software level. With the new technologies promised in upcoming Intel and AMD processors, virtualization capabilities in the x86 and Itanium space will grow, officials at both companies said Wednesday.
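As a practical aside (not part of the eWeek article): once such processors shipped, Linux hosts advertised these extensions as CPU feature flags, with “vmx” marking Intel VT-capable parts and “svm” marking AMD’s. A minimal sketch of checking for them, assuming a Linux-style /proc/cpuinfo:

```python
# Hedged sketch: detect hardware virtualization support on a Linux host.
# "vmx" is the flag Intel VT-capable CPUs advertise; "svm" is AMD's.
def hw_virt_flags(path="/proc/cpuinfo"):
    try:
        with open(path) as f:
            tokens = f.read().split()
    except OSError:
        return None  # no such file: not a Linux-style host
    return sorted({flag for flag in ("vmx", "svm") if flag in tokens})

flags = hw_virt_flags()
print(flags)  # e.g. ['vmx'] on a VT-enabled CPU, [] on older x86 parts
```

Note the flags only say the silicon supports the extensions; as the analysts below point out, a hypervisor is still needed to make use of them.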

Virtualization is among a number of technologies that Intel is embedding into its processors as it focuses more on a platform approach to its products. Other technologies include dual-core processing, Intel Active Management and I/O Acceleration technologies.

At separate panel discussions here Wednesday, the two chip makers said the enhanced virtualization will lead to easier server consolidation, greater security, and better system and software utilization.

For example, users will be able to more easily create separate partitions on a single box that can be isolated from each other, reducing the likelihood that a virus in one partition will spill over into the other or onto the network, said Greg Bryant, director of planning and marketing for Intel’s Digital Enterprise Group.

Businesses looking to migrate legacy applications onto newer x86 servers will be able to create a partition on the server that will hold the legacy data, while another partition on the same server can be used to migrate the data to the new platform, AMD’s Lewis said.

In addition, having virtualization capabilities in the CPUs will increase the performance of virtual machines created through software from VMware and Microsoft, the chip makers said.

Intel and AMD are getting support from VMware and Microsoft in their virtualization pushes. Lewis said she envisions more software makers creating virtualization offerings that will run on the chip makers’ platforms, which will spur wider adoption among users.

“Once we get more and more users using the software … you’ll get them saying, ‘If it can work here, why can’t it work there?’” Lewis said at an AMD-led panel discussion at a hotel a few blocks away from the IDF convention. Also on the panel were representatives from VMware and Sun Microsystems Inc., which is building up a portfolio of servers running AMD’s 64-bit Opteron processors.

However, while the technology generated a lot of discussion at IDF, it raised a number of questions, including how well it would complement the software offerings from VMware and Microsoft.

“The question is, how much does it improve those products, and the answer today is, I don’t know,” said Gordon Haff, an analyst at Illuminata Inc. “VMware implies it isn’t really much, but you just don’t know.”

Haff also cautioned users not to assume that they will be able to create virtual machines simply by running systems that offer chip-based virtualization. “You’re still going to need to make an investment in software,” he said.

There also is the question of how software makers will license their products that are running on virtual machines. Martin Reynolds, an analyst with Gartner Inc. and moderator of the Intel panel discussion, said that will be a key challenge for software makers going forward. Users will not want to be charged a fee for every virtual machine the software runs on.

“The model’s going to have to change,” he said.

Intel drops ‘Vanderpool’ handle

Quoting from The Register:


Intel’s ‘multiple systems, one chip’ system, previously codenamed ‘Vanderpool’, will now take on the rather more prosaic moniker, Intel Virtualisation Technology (VT), outgoing CEO Craig Barrett said today.

VT is expected to debut next quarter when Intel launches ‘Smithfield’, its dual-core Pentium desktop processor, a year ahead of the company’s original release schedule.

Smithfield also got its go-to-market name today: it will ship as the Pentium D.

VT allows one processor to run multiple operating systems – or multiple instances of the same OS – simultaneously. It’s a technique long used in mainframe systems. Enterprises are generally happy to run whatever OS a particular app happens to require, so they often need to maintain multiple OSes, ideally on the same hardware to save money.

That’s not the case with desktop users – dual-booting Linux geeks being the most common exception – so you might think VT has less of a role in desktop usage scenarios than Intel might like us to believe.

Think of users running a Windows-based media server on one VT-hosted virtual machine, while playing a 3D game in a second Windows instance on a second virtual machine, Frank Spindler, Intel’s industrial technology programs director, said yesterday. That way, if the game hangs, you don’t lose the server functionality.

Maybe. Or you could just run an operating system with decent memory protection, so that the crashing game doesn’t bring down the whole system with it.

Whatever. The Pentium D has VT, whether you need it or not, and with launch day approaching, the technology needs a better go-to-market name than Vanderpool. Hence the new tag.

Curiously, Intel staffers did not refer to VT when detailing the PD today. But the company has already stated that PD and VT are shipping in the same timeframe, so it’s tempting to conclude that the latter will be part of the former. Intel’s VT blurb hints at a close relationship with dual-core architectures, and the Xeon line isn’t due to go dual-core until 2006.

XenSource to support Intel virtualization technology in Xen Hypervisor

Quoting from Business Wire:


XenSource, Inc. today announced plans to incorporate certain technology contributions from Intel Corporation into release 3.0 of the Xen hypervisor — the industry standard Open Source virtualization application that allows multiple operating systems to run concurrently on the same physical server. Intel has contributed code to the Xen project to enable support for Intel(R) Virtualization Technology (formerly code named Vanderpool), part of a collection of premier Intel technologies that can deliver new and improved computing benefits for home users and for business users and IT managers.

Xen is an Open Source hypervisor, spun-out from the University of Cambridge and developed and maintained by XenSource, Inc, that allows multiple operating systems to run concurrently on the same physical server, reducing the complexity of data center management and cost of ownership. Today the hypervisor must manage the complexity of operating system virtualization in software. Intel processors enhanced with Intel Virtualization Technology enable the hypervisor to efficiently run multiple operating systems, including unmodified legacy operating systems, in independent, isolated partitions. With virtualization, one computer system can function as multiple “virtual” systems. With enhancements to Intel’s various platforms, Intel Virtualization Technology can improve the robustness and performance of today’s software-only solutions. Xen 3.0 is targeted for availability early in the third quarter of this year, and will also include support for 64 bit processors and Symmetric Multi-Processor (SMP) guest operating systems.
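For readers unfamiliar with Xen, a guest of that era was defined by a short configuration file that the xm management tool parsed as Python. A minimal illustrative sketch, in which all paths, names and devices are hypothetical:

```python
# Illustrative Xen 2.x/3.0-era guest definition; the file is read as
# Python by the "xm" management tool. Every value here is a made-up example.
kernel = "/boot/vmlinuz-2.6-xenU"      # paravirtualized guest kernel
memory = 256                           # guest RAM in MB
name   = "demo-vm"                     # domain name shown by "xm list"
disk   = ["phy:vg0/demo-root,sda1,w"]  # backend device, guest device, mode
vif    = [""]                          # one default virtual NIC
root   = "/dev/sda1 ro"                # root= argument for the guest kernel
```

A definition like this would typically be booted with something along the lines of `xm create demo-vm.cfg`.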

“Intel Virtualization Technology, with a Xen-based hypervisor, helps allow the platform to natively support multiple operating systems and applications, and in particular makes it possible for the Xen hypervisor to run unmodified operating systems,” said Philip Brace, General Manager Marketing, Server Platform Group of Intel Corporation. “We plan continued support for the Xen project, and will work with XenSource and the Xen community to help ensure that Intel Virtualization Technology enabled Xen is available coincident with Intel platform availability.”

Intel has supported the development of Xen since its origins as a University research project at Cambridge, and continues to fund research and ongoing development of Xen. “Intel has been a first class supporter of Xen and has contributed to the Open Source code-base. This contribution will dramatically enhance the breadth of operating systems supported by the Xen hypervisor on Intel Virtualization Technology enabled servers,” said Ian Pratt, University of Cambridge Computer Laboratory and founder of XenSource. “Intel’s involvement has been invaluable, and we look forward to continuing our collaboration.”

Xen 3.0 expected for September 2005

Quoting from InformationWeek:


Companies looking to squeeze more out of their IT infrastructure investments have for years been able to build virtual servers within mainframe, Unix, and even Windows environments. A movement to deliver this capability to Linux environments is gaining momentum thanks to the Xen hypervisor, an open-source software virtualization tool managed by startup XenSource Inc. and backed by some of IT’s biggest vendors.
Advanced Micro Devices Inc. and Intel are developing 64-bit processors that will make use of Xen hypervisor, while Linux providers Novell and Red Hat are working with XenSource to provide support for users consolidating server environments. Hewlett-Packard and IBM are contributing code to the Xen project and working with XenSource to develop new uses for the technology.

XenSource launched in January with a $6 million round of funding led by Kleiner Perkins Caufield & Byers and Sevin Rosen Funds. To succeed, it will have to take on well-established competitors, since Microsoft and VMware, a subsidiary of storage maker EMC, offer proprietary software that can be used to create virtual servers on Intel or AMD x86-based servers. Yet in a market where business-software buyers increasingly welcome an open-source alternative, XenSource could find an opening. “[Xen] is still very immature, but it offers a lot of promise that will be realized first by Linux users and then in other environments,” predicts IDC analyst Dan Kusnetzky.

Xen, which is licensed under the GNU General Public License, works on servers running any open-source operating system, including Linux and NetBSD, with ports to FreeBSD and Plan 9 under development. When Intel and AMD deliver new processors within the next year that support virtual servers on the chip level, Xen should be able to run on proprietary operating systems as well.

A more subtle difference between the Xen hypervisor and competing proprietary technologies is that Xen keeps cache memory that records the state of each virtual server and operating system. XenSource does this through “para-virtualization,” or splitting the operating-system drivers in half. By doing this, one half operates the virtual server while the other half can be kept as a separate domain where this cache memory can be stored. “This saves users time and resources when switching between virtual servers,” says Simon Crosby, VP of strategy and corporate development for XenSource.
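The split-driver arrangement Crosby alludes to can be sketched abstractly: the guest-side half (the frontend) places I/O requests on a shared ring, while the half living in a separate, privileged domain (the backend) services them against the real device. A toy Python illustration of the idea, not Xen’s actual ring protocol:

```python
from collections import deque

class SharedRing:
    """Toy stand-in for the shared-memory ring a split driver communicates over."""
    def __init__(self):
        self.requests = deque()
        self.responses = deque()

class Frontend:
    """Guest-side half: forwards block reads to the backend via the ring."""
    def __init__(self, ring):
        self.ring = ring
    def read_block(self, sector):
        self.ring.requests.append(("read", sector))

class Backend:
    """Driver-domain half: owns the (hypothetical) device, services requests."""
    def __init__(self, ring, device):
        self.ring = ring
        self.device = device  # dict of sector -> data, standing in for a disk
    def service(self):
        while self.ring.requests:
            op, sector = self.ring.requests.popleft()
            if op == "read":
                self.ring.responses.append(self.device.get(sector))

ring = SharedRing()
fe = Frontend(ring)
be = Backend(ring, {0: b"boot", 7: b"data"})
fe.read_block(7)
be.service()
resp = ring.responses.popleft()
print(resp)  # b'data'
```

The point of the split is isolation: the frontend never touches real hardware, so the backend domain can be restarted or switched without the guest knowing more than a stalled queue.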

Yet Xen’s potential for widespread adoption is clearly tied to chip-level advancements Intel and AMD are promising to deliver later this year.

Intel’s Vanderpool processor technology and AMD’s Pacifica processor technology will offer an interface with proprietary operating systems in a way that the Xen hypervisor can’t do on its own. “AMD and Intel will make the hypervisor’s job easier, particularly for operating systems for which source code is not available,” Crosby says.

XenSource says that the next version, Xen 3.0, will come out by September. By the end of this year, 64-bit AMD and Intel technology is also due out, including Intel’s Vanderpool. Version 3.0 will also let users create virtual machines that run applications requiring multiple processors.

Xen enters a market poised for heavy growth over the next few years, says IDC analyst Dan Kusnetzky. The market for virtual environment software, including management and security tools, reached $19.3 million worldwide in 2004 and will grow 20% annually through 2008.

For Xen to appeal to business users, the technology needs support from major Linux backers such as Red Hat and Novell, in addition to XenSource. While Red Hat and Novell have some developers working on the project, they haven’t integrated Xen support into their existing services, a move that XenSource predicts is likely given those companies’ strong interest in the growth of Linux. “People in large enterprises like to buy from a single vendor, so we expect Red Hat and Novell to offer support [for Xen] with their products,” Crosby says. XenSource will also support Xen and will by the end of the year develop utility software with a graphical interface that can be used to manage the technology. “This is an absolute requirement so that more people can use it,” Crosby says.

XenSource will in April host a Xen developer summit to determine how the technology should progress. “It’s an essential piece in the open-source process,” Crosby says. One of the issues likely to be discussed is security. “We’re still figuring out how to make the hypervisor more secure,” he says. IBM has a project under way called Secure Hypervisor to create a run-time environment that securely manages context switching between virtual servers. The goal is to prevent unauthorized information transfers between virtual servers and to ensure that all virtual servers are governed by the same security policies, Crosby says.

Most Xen users are still in the experimental stage. “The lion’s share of people pushing it out are in the hosting world and those who are running it in large data center deployments in banks and Fortune 500 companies,” Crosby says. “They’re still learning about it.”

One Xen pioneer is stretching the boundaries of where virtualization can be applied, exploring whether virtualization techniques can be used to create a network router that can segment its internal resources, such as CPU cycles, memory, and network bandwidth. “As such, I am exploring the possibility of running routers on multiple virtual machines (or domains in Xen terminology), with one virtual machine router (routelet in my project) for each network flow requiring quality of service guarantees,” Ross McIlroy, a research student at Scotland’s University of Glasgow, says in an E-mail interview.

McIlroy’s goal is to partition the flows, preventing one overloaded flow from impacting the service provided to another quality-of-service flow. The success of this experiment could have a positive impact on applications that transport isochronous data across a network, for example teleconferencing or voice-over-IP applications, which require a network providing quality of service.
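McIlroy’s partitioning idea can be illustrated in miniature: give each flow its own “routelet” with a fixed forwarding budget per scheduling round, so a flooded flow cannot starve a well-behaved one. A toy sketch, with all names and numbers hypothetical:

```python
from collections import deque

class Routelet:
    """Toy per-flow forwarder with a fixed packet budget per round."""
    def __init__(self, budget):
        self.budget = budget      # packets forwarded per scheduling round
        self.queue = deque()
        self.forwarded = 0
    def enqueue(self, pkt):
        self.queue.append(pkt)
    def run_round(self):
        for _ in range(self.budget):
            if not self.queue:
                break
            self.queue.popleft()  # "forward" the packet
            self.forwarded += 1

# Two flows: one well-behaved VoIP stream, one flooding bulk transfer.
voip = Routelet(budget=10)
bulk = Routelet(budget=10)
for i in range(5):
    voip.enqueue(i)
for i in range(1000):
    bulk.enqueue(i)

for _ in range(3):  # three scheduling rounds
    voip.run_round()
    bulk.run_round()

print(voip.forwarded, bulk.forwarded)  # 5 30
```

All five VoIP packets get through on schedule while the flood is held to its budget, which is the isolation property the quality-of-service flows need.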

McIlroy knows that his project isn’t exactly what Xen’s creators had in mind. “Xen provides me with an ideal basis for the creation of a prototype router which should test [Xen’s basic] theories,” he says. McIlroy has found Xen’s para-virtualization technique useful in reducing the overhead that’s normally generated when creating a virtual environment.

Microsoft unveils roadmap for its virtualization technologies

Microsoft Watch reports that a Virtual PC v2 (codename “Hedgehog”) and a Virtual Server v2 are expected in 2006, running on 64-bit architectures, as announced by Microsoft while detailing its roadmap for 64-bit server availability.

Microsoft Watch also reports that a Virtual PC 2004 Service Pack 2 and a Virtual Server 2005 Service Pack 1 are expected within this year. eWeek reports that Virtual PC 2004 Service Pack 2 could run 32-bit virtual machines on WOW64 or on native 64-bit architectures.

More details as soon as available.