Alan Priestley explains Intel’s virtualisation technology

Quoting from Techworld:


With hardware virtualisation shortly to become part of Intel processors’ feature set, we asked Alan Priestley, Intel’s European marketing manager, to explain what the new technology consists of and how it works. We started by asking him to justify the technology.

Q: How many people are staying with a single box per application, rather than consolidating? You must have some idea, if you bothered to build it into the CPU. After all, it’s not free; it uses some of your transistor budget.
A: I don’t know what the mix will be. We have a generous transistor budget at the moment; it doesn’t significantly impact that.

There are still people out there today deploying thousands of DP servers as opposed to scaling up and going on to four- or eight-way servers, and there will always be different theories or rationalisations about which processing model to deploy. Is it a big SMP box, is it a Superdome, or, at the other extreme, thousands of DP servers like Google? There’ll always be things in between. And we wanted to build a platform that’s general purpose and not specific to one segment, so VT [Virtualisation Technology] is there if you want it.

But there are a lot of people who want to virtualise. They’re using VMware and Virtual Server and so on. Intel’s VT increases the reliability and robustness of virtualisation; it doesn’t remove the need for software like VMware.

Q: How does it work?
A: OSes that have been virtualised run as if they owned the whole system. We have to change the protection model. Right now, they think they’re running at ring zero [which gives the OS total access to and control over the hardware] but they’re actually running at ring 1. The OS executes ring-zero instructions and then you’ve got to trap them. So there’s a risk in doing it that way, and it limits some IT departments’ desire to use virtualisation, because it’s only in software, not like the old mainframe days when it was hard-wired.

Putting VT in gives us increased robustness because we move the virtual machine monitor (VMM) down another layer, or ring. It means the OS runs where it should do and the VMM sits underneath it. So that gives a benefit in terms of stability, which will probably increase the uptake of it.

We’ve published the programming instructions for ISVs, and it’s basically a set of instructions that allows you to switch context. Once you’re in the privileged ring, you use that set of instructions. Ring zero is where the OS normally sits, and it’s allowed to access the hardware directly. With VT, the VMM sits at ring -1, under ring zero.
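To make the trap-and-emulate model Priestley describes concrete, here is a minimal C sketch of the dispatch a software-only VMM performs when the deprivileged guest executes a ring-zero instruction. Every type and function name here is hypothetical, and real monitors such as VMware rely on far more elaborate techniques (including binary translation for the instructions that don’t trap):

```c
#include <stdint.h>

/* Hypothetical per-guest CPU state kept by a software-only VMM. */
struct vcpu {
    uint64_t rip;            /* guest instruction pointer         */
    int      interrupts_on;  /* virtualised interrupt-enable flag */
};

/* Called when the guest, deprivileged to ring 1, executes a
 * ring-zero instruction and the CPU raises a fault. The VMM
 * decodes the instruction and emulates its effect on the virtual
 * machine's state instead of on the real hardware. */
void handle_privileged_fault(struct vcpu *v, const uint8_t *insn)
{
    switch (insn[0]) {
    case 0xFA:                /* CLI: guest wants interrupts off */
        v->interrupts_on = 0; /* only the virtual flag changes   */
        v->rip += 1;          /* step past the emulated opcode   */
        break;
    case 0xFB:                /* STI: guest wants interrupts on  */
        v->interrupts_on = 1;
        v->rip += 1;
        break;
    default:
        /* decode and emulate the other privileged instructions */
        break;
    }
}
```

Every such fault costs a ring transition plus a software decode, which is exactly the overhead and fragility the next answer refers to.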

Q: What’s the significance of ring -1?
A: Today when you run VMware, it has to push the OS up to ring 1 so that the VMM can run at ring zero and have total control. The problem then is that the OS doesn’t realise it’s at ring 1, and you have to trap and emulate those ring-zero instructions. This has performance costs and stability risks. By putting the VMM underneath, we’re letting the OS have total control, but then when we want to switch context, given that you’re in a multi-tasking environment, the VMM can cut in.

There are instructions that save the context state, because one of the things that impacts performance is saving the complete machine state so you can make that switch to the other instance, and that’s now hardware-assisted. So we’ve got new instructions that enable you to get into that state and manage the virtual machine.
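Intel has since documented these as the VMX instructions. In the sketch below the mnemonics (VMXON, VMPTRLD, VMLAUNCH, VMRESUME, VMREAD) and the exit-reason field encoding are real, but the surrounding helpers are hypothetical, and this is privileged code that only makes sense inside a kernel-mode VMM:

```c
#include <stdint.h>

#define VM_EXIT_REASON 0x4402ULL  /* VMCS field encoding: exit reason */

/* Assumed helpers: the VMM has allocated the VMXON region and a VMCS
 * in ordinary memory and knows their physical addresses; setup of
 * CR4.VMXE and the VMCS revision identifier is omitted here. */
extern uint64_t vmxon_region_pa, vmcs_pa;
extern void handle_exit(uint64_t reason);

static inline uint64_t vmread(uint64_t field)
{
    uint64_t value;
    __asm__ volatile("vmread %1, %0" : "=r"(value) : "r"(field));
    return value;
}

void launch_guest(void)
{
    /* Enter VMX root operation, the mode Priestley calls ring -1. */
    __asm__ volatile("vmxon %0" : : "m"(vmxon_region_pa));

    /* Make this guest's in-memory control structure (VMCS) current. */
    __asm__ volatile("vmptrld %0" : : "m"(vmcs_pa));

    /* VMLAUNCH loads the guest state from the VMCS and drops into
     * the guest, which now genuinely runs at ring zero. */
    __asm__ volatile("vmlaunch");
}

/* On a VM exit the hardware saves the guest context back into the
 * VMCS and resumes the VMM at the host entry point recorded there.
 * The VMM services the exit and re-enters the guest with VMRESUME. */
void on_vm_exit(void)
{
    handle_exit(vmread(VM_EXIT_REASON));
    __asm__ volatile("vmresume");
}
```

The hardware, rather than trap-and-emulate software, now does the heavy lifting of saving and reloading the machine state on each switch.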

Q: Is there a set of virtual stacks that you save the machine state to?
A: The stack’s not saved in hardware; you have to flush the stack. It’s a bit like when you enter system management mode. You flush out a set of registers — the processor context — and the chances are that there’s more context in the processor than the normal instruction set can save, because of some of the machine’s internal state.

But it doesn’t save it into the processor — it’s not like the Itanium register stack engine, where you’ve got these 228 registers which are set up as general-purpose registers and you flip between them. It has to maintain the programming model that we have and know. You’ve got to be able to run NT3, virtualised.

Q: Or a DOS box?
A: Exactly.
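The point that the saved context lives in an ordinary in-memory structure rather than a hidden register file shows up in how a VT-x monitor initialises a guest: it writes the guest’s architectural state into the VMCS with VMWRITE. The two field encodings below are the real VT-x values; the helper and the set-up function are a hypothetical sketch:

```c
#include <stdint.h>

#define GUEST_RSP 0x681CULL  /* VT-x VMCS field encodings */
#define GUEST_RIP 0x681EULL

static inline void vmwrite(uint64_t field, uint64_t value)
{
    __asm__ volatile("vmwrite %1, %0" : : "r"(field), "r"(value));
}

/* Hypothetical guest set-up. The context the hardware loads and
 * saves on each switch sits in the current VMCS, a normal memory
 * region, so the familiar x86 programming model is untouched and
 * an old OS can run on top unmodified. */
void init_guest_context(uint64_t entry_point, uint64_t stack_top)
{
    vmwrite(GUEST_RIP, entry_point); /* where the guest starts */
    vmwrite(GUEST_RSP, stack_top);   /* the guest's own stack  */
    /* ... dozens more fields: control registers, segments, etc. */
}
```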

Virtualisation creates need for more-resilient servers

Quoting from Techworld:


As IT managers virtualise their x86 servers and consolidate applications on a smaller number of systems, they’re demanding more from the hardware they buy: more memory, certainly, but also added high-availability features such as multiple power supplies and cabling ports.

Some businesses have even gone a step further. Purdue Pharma LP last year started buying Stratus Technologies’ fault-tolerant servers, similar to those used by financial services companies and emergency call centres, to run Microsoft’s Active Directory and other applications in a virtual environment.

That was the drug maker’s first foray into fault-tolerant servers, and it came after the company decided to use VMware’s virtual server technology, said Stephen Rayda, director of architecture at Purdue.

Rayda said last week that he didn’t want Purdue’s IT administrators left answering for a failure of the virtualised system: “When they lose 30 servers on a single box, they’re going to get asked, ‘What could we have done to avoid this?’ ”

Austin Energy CIO Andres Carvallo said he’s buying fewer servers now and focusing his budget for Intel-based machines on systems “with higher capacity and more fault-tolerant-type features.” He embarked on an 18-month project last year to reduce his server count from about 250 systems to 80, largely through virtualisation.

Christopher Kowalsky, CIO at Education Management, a company that operates a variety of academic institutions with a total of about 66,000 students, is evaluating VMware’s software and Microsoft’s rival Virtual Server offering. “We’re the same as most organisations,” he said. “We have a lot of servers and a lot of processors, and we’re continuing to try to figure out how to best utilise them.”

But Kowalsky added that if his company does adopt virtualisation technology, he will run the software on fault-tolerant, highly resilient systems that are capable of failing over to another box. Having servers with high-availability features is “going to be a big part” of any move to virtualisation, he said.

Impact on server sales?

Although virtualisation and grid computing technologies can increase server utilisation and reduce the need for new boxes, worldwide revenue from server sales grew 5.5 per cent last year, according to IDC. IDC analyst Stephen Josselyn said he doesn’t think virtualisation will hurt server revenues.

Virtualisation is more about better utilisation of resources, Josselyn said, adding that he expects users to continue to scale out their system installations more than they scale up single systems.

But Gartner has a different take. In a report presented at its data centre conference in December, Gartner said higher processor utilisation rates could “dramatically reduce server hardware and administrative spending.”

Users typically cite ease of management, reduced support needs and associated staffing savings as the top benefits that virtualisation can provide, not server cost reductions. But even if some of the hardware that users are buying for virtualised environments is more expensive than what they used to purchase, the fact remains that they’re buying fewer servers than before.

“We’re developing a love-hate relationship with our hardware vendor,” said Alex Cruz, who is an e-mail, Web and VMware administrator at Dean Health System. Dean Health uses IBM’s eServer BladeCenter hardware, and IBM “loves the fact that we are buying blades, but obviously it has cut down on our overall cost of servers that we’re purchasing,” Cruz said.

AMD demos Pacifica PC virtualization

Quoting from BetaNews:


At a meeting to showcase its latest technologies in Austin, Texas, AMD on Wednesday demonstrated its “Pacifica” virtualization technology, which allows a user to run multiple operating systems on a single machine. While this can already be done on today’s computers, Pacifica requires no advanced software.

With the Pacifica technology, as well as Intel’s rival offering dubbed “Vanderpool,” much of the support the software needs will be built into the chip itself. While basic software is still required to switch between operating systems, the programs will be less complex and more secure than older methods.
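Both extensions advertise themselves through CPUID, so software can probe for them. Here is a small, runnable C sketch; the bit positions are the documented ones (VMX is bit 5 of ECX in leaf 1, SVM is bit 2 of ECX in extended leaf 0x80000001) and the probe uses GCC’s <cpuid.h> helper:

```c
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 1: ECX bit 5 is set on VT ("Vanderpool") parts. */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        printf("Intel VT (VMX): %s\n", (ecx & (1u << 5)) ? "yes" : "no");

    /* CPUID extended leaf 0x80000001: ECX bit 2 is set on
     * SVM ("Pacifica") capable AMD parts. */
    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx))
        printf("AMD SVM (Pacifica): %s\n", (ecx & (1u << 2)) ? "yes" : "no");

    return 0;
}
```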

“AMD has taken an inclusive approach to Pacifica by previewing it to the virtualization ISV and analyst community. This ongoing collaboration, including today’s disclosure, will ultimately provide Pacifica users with an even richer feature set and a higher performance model for hosting hypervisor-based virtualization solutions,” AMD vice president Marty Seyer said.

Initially, the company will only put the technology on its AMD Opteron and Athlon 64 processors. These Pacifica-enabled chips are expected to hit the market in the early part of 2006. Software makers, however, will get a full preview of the specifications in April.

According to AMD, performance-boosting enhancements are planned for both single- and dual-core chips in the future, although no date was set.

VMware markets virtualization’s value – do more with less

Quoting an Investor’s Business Daily interview with VMware President Diane Greene, reported on the VMware press archive:


Mom’s old saying “Waste not, want not” applies as much to tech gear as it does to personal finance, old toys and uneaten vegetables.

As techies hail the speed milestones being reached by Intel (INTC) and Advanced Micro Devices (AMD), many overlook an awkward reality: Most of that silicon brainpower isn’t being used.

VMware President Diane Greene aims to change that. Her Palo Alto, Calif., company is bringing a technology called virtualization — once the sole domain of mainframe computers — to the masses.

The VM stands for virtual machine, a technique that fools software into thinking that one computer is really two, three or a score of machines. This lets firms run lots of software on a single system rather than having to buy a separate machine for every application — and see the PCs sit idle most of the time.

The upshot: Companies can do more with less.

Virtualization isn’t new. But VMware made it possible for the first time on cheap PC servers.

Today, VMware has partnerships with all of the major server vendors and chipmakers. With the chipmakers, the pacts make it easy for VMware to run on chips made by Intel and AMD. Many of the server makers, meanwhile, resell VMware with their hardware.

In 2003, storage gear maker EMC bought the company. VMware operates as an independent subsidiary.

Greene recently spoke with IBD about virtualization.

IBD: Why does this technology matter?

Greene: It puts in a layer of software that allows you to manage your hardware separately from your software. That gives you incredible power to do all the management functions better and in ways that were not possible before. It’s a better way to compute and a profound way to give hardware independence to the software.

IBD: More tangibly, what makes this so attractive to server vendors, let alone customers?

Greene: It’s a huge (cost of ownership savings of) 20% to 30% typically — some people say it’s up to 80%. It’s also a huge timesaver.

IBD: Describe the virtualization market.

Greene: In the (market for machines powered by Intel or compatible chips), VMware is the undisputed leader. We created the market and we definitely lead it. We have a strong track record of moving our road map and technology forward pretty aggressively. We have 3 million users and 10,000 enterprise customers. Many of our customers use it on more than 50% of their Intel servers.

IBD: Who are your rivals?

Greene: Of course, the IBM (IBM) mainframe has virtualization. Then there’s virtualization on the (IBM) pSeries. And Sun Microsystems (SUNW) has built this “container” functionality into Solaris (its server operating system) for their Unix systems. Those tend to be a different set of applications and uses than what people are doing with us in the Intel space. Certainly Microsoft (MSFT) is coming into this space.

IBD: Does that worry you?

Greene: Ultimately we’re going to work well with Microsoft. We’re doing good things for Microsoft. In the Intel space, bringing in a lot of mainframe-style functionality has helped accelerate x86 (Intel-compatible) adoption into data centers (most of which run Microsoft Windows). So we’re going to want our systems to work together and go to market together. But today, they’re coming in competitively.

IBD: How much of your software goes into Microsoft Windows-based systems?

Greene: Windows dominates, just as it does in the industry. But we have a substantive set of Linux customers. Then we have a lot of customers in mixed environments: Linux and Windows and Novell (NOVL). And we’ve got experimental support for Solaris on x86.

IBD: Is that mix changing?

Greene: We’ve certainly seen a rise in Linux over the years that’s been fairly dramatic. And it’s going to be very interesting to watch Solaris x86 to see what happens there. We’re getting more and more demand for it.

IBD: Who are your partners?

Greene: HP (Hewlett-Packard, HPQ), IBM and Dell (DELL) are three of our strongest partners. We’ve been working with them for years. We work with all the storage vendors, all the Intel vendors, the chip vendors and the system management vendors like CA (Computer Associates, CA) and BMC Software (BMC).

IBD: Your parent, EMC, competes with several of those companies. How do you manage that web of relationships?

Greene: When EMC bought us, we were treated as an independent subsidiary, because our partnerships are so key to what we do. We’re about running on any (Intel-compatible) hardware and any operating system. For the first six months, everybody thought maybe (VMware’s independence from EMC) will change. Now it’s clearly established that that’s the way it is and it’s not going to change. Since the acquisition, we’ve been able to grow our relationships with IBM and HP.

IBD: Do you collaborate with EMC or maintain a separation?

Greene: We have to (have separation) because of our partnerships, because we do co-development with our partners. In the same way, we do co-development with EMC. When people do server consolidation, they generally do storage consolidation. A lot of EMC (storage-area networks) are working with VMware virtual infrastructure.

IBD: You just signed a deal with Intel. What impact will this have on your company and technology?

Greene: We’ve been working with Intel almost since the beginning. We’ve taken it a step up in that we’re going to collaborate around (Intel’s VT) virtualization technology. They’re putting support in their chip set. The first incarnation of VT is due out this year on the desktop and next year on the server.

IBD: How does Intel’s input help?

Greene: When VMware first virtualized the x86 back in 1998, there was no hardware support. That created some overhead because of the trickery we had to do to virtualize it. Now Intel is adding instruction support to the chip to reduce that overhead. For the customer, it will further expand the universe of applications that will want to take advantage of virtualization.

AMD unveils virtualization platform Pacifica

Quoting from the official announcement:


AMD today for the first time publicly disclosed key elements of its “Pacifica” virtualization technology at the AMD Reviewer’s Day in Austin, Texas. “Pacifica” will help extend AMD’s technology leadership when it brings to market technology that is designed to enhance 64-bit server virtualization technologies for x86-based servers, desktops and mobile computers.

“AMD has taken an inclusive approach to Pacifica by previewing it to the virtualization ISV and analyst community. This ongoing collaboration, including today’s disclosure, will ultimately provide ‘Pacifica’ users with an even richer feature set and a higher performance model for hosting hypervisor-based virtualization solutions,” said Marty Seyer, vice president and general manager of the Microprocessor Business Unit, Computation Products Group, AMD. “By enhancing virtualization at the processor level, and building on the success of industry-leading AMD64 technology, we believe that ‘Pacifica’ is vital to the development of best-in-class virtualization solutions.”

“Pacifica” will extend AMD64 technology with Direct Connect Architecture to enhance the virtualization experience by introducing a new model and features into the processor and memory controller. Designed to enhance and extend traditional software-only based virtualization approaches, these new features will help reduce complexity and increase security of new virtualization solutions, while protecting IT investments through backward compatibility with existing virtualization software.

By enabling a platform to efficiently run multiple operating systems and applications in independent partitions, essentially allowing one compute system to function as multiple “virtual” systems, “Pacifica” is designed to provide foundation technologies to deliver IT resource utilization advantages through server consolidation, legacy migration and increased security. Information about “Pacifica” is immediately available at www.amd.com/enterprise.
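As a rough illustration of the partitioning model, here is what hypervisor code driving these processor extensions looks like under the SVM interface AMD went on to publish in the “Pacifica” specification: set the SVME bit in the EFER register, then execute VMRUN with RAX pointing at an in-memory virtual machine control block (VMCB) holding the partition’s state. The MSR number and bit are the documented ones; the helpers and the VMCB set-up are hypothetical, and this is privileged kernel-mode code:

```c
#include <stdint.h>

#define MSR_EFER  0xC0000080u  /* extended feature enable register */
#define EFER_SVME (1u << 12)   /* secure virtual machine enable    */

/* Assumed helpers: MSR access plus the physical address of a
 * pre-initialised virtual machine control block (VMCB). */
extern uint64_t rdmsr(uint32_t msr);
extern void     wrmsr(uint32_t msr, uint64_t value);
extern uint64_t vmcb_pa;

void run_partition(void)
{
    /* Turn on the SVM extensions. */
    wrmsr(MSR_EFER, rdmsr(MSR_EFER) | EFER_SVME);

    /* VMRUN loads the guest state from the VMCB addressed by RAX,
     * runs the partition until it hits an intercepted event, then
     * saves the guest state back to the VMCB and falls through. */
    __asm__ volatile("vmrun" : : "a"(vmcb_pa) : "memory");

    /* Inspect the VMCB's exit code here, service it, and loop. */
}
```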

AMD’s commitment to provide the industry with superior technology to enable virtualization solutions is demonstrated through strategic alliances with partners including Microsoft, VMware and XenSource.

“Businesses and consumers have rapidly adopted Microsoft’s Virtual PC 2004 and Virtual Server 2005 for scenarios ranging from development and test simulation to production server consolidation,” said Rob Short, corporate vice president, Windows® Division at Microsoft Corp. “We are excited about AMD’s focus on enabling technologies such as ‘Pacifica,’ and are working with them and other partners to ensure our software virtualization solutions for the Windows platform will leverage these underlying hardware advancements. Processor virtualization extensions are an important building block for future virtual machine solutions on the Windows platform.”

“Leveraging seven years of technology innovation and leadership in virtualization, VMware today delivers production-ready virtual infrastructure products for AMD Opteron™ processor-based servers and AMD Athlon™ 64 processor-based desktops,” said Paul Chan, vice president of Research and Development, VMware. “We’re pleased to continue collaborating with AMD on virtualization technologies such as ‘Pacifica’ to optimize future AMD64 technologies for our products and further expand the deployment of virtualization in the data center.”

Today’s disclosure about “Pacifica” precedes the general availability of the “Pacifica” specification, planned for April 2005. “Pacifica,” which will provide users with hardware support to better enhance the flexibility and performance of current solutions, is planned to be available in both client and server processors from AMD in the first half of 2006. Feature enhancements are also planned for future single-core and dual-core AMD64 processors to further leverage the performance of 64-bit virtualization software.

PearPC taking donations to sue CherryOS

Quoting from Flexbeta:


PearPC’s developers are taking donations to sue Maui X-Stream, the developer of the Mac emulator CherryOS. There have been allegations that CherryOS is nothing more than PearPC’s open-source code with a GUI attached to it. According to this post, one of the PearPC developers tried to get in contact with someone from Maui X-Stream, but was eventually told to “speak with an Attorney” about the allegations. So that’s what PearPC is doing.

Linux Virtualization Consortium reportedly in the works

Quoting from LinuxBusinessWeek:


The tom-toms say that Computer Associates, anxious to flex its leadership muscles, is quietly pulling together a consortium that would optimize virtualization technologies like the open source Xen hypervisor software, and presumably the very un-open source VMware, on Linux.

Oddly enough, it looks like the effort would be done outside the Open Source Development Labs (OSDL), through which most of the big companies have been pushing their wish lists for Linux.

The consortium might be announced at the next Linux open source show in the next few weeks.

CA has reportedly been approaching the usual suspects, companies such as IBM, Dell and Hewlett-Packard, along with some of the other ISVs.

CA has also been pushing lately to winnow the number of open source licenses down to a reasonable figure. Three is what it has in mind, instead of the three score now in existence.