VMware preparing a minor GSX Server update

VMware has just started the GSX Server 3.2 beta program. It introduces only wider host and guest OS support and the expected bug fixes, without new features:


New Operating System Support

– VMware GSX Server 3.2 adds full support for the following 64-bit host operating systems on AMD64 and Intel® EM64T processors:
Microsoft Windows Server 2003 x64 Edition

– VMware GSX Server 3.2 adds experimental support for the following 64-bit host operating systems on AMD64 and Intel EM64T processors:
Red Hat Enterprise Linux 4
Red Hat Enterprise Linux 3 Update 4
SUSE LINUX Enterprise Server 9 Service Pack 1
SUSE LINUX 9.2

– VMware GSX Server 3.2 adds support for the following 32-bit host and guest operating systems:
Microsoft® Windows Server 2003 Service Pack 1
Mandrake Linux 10.0 and 10.1
Red Hat® Enterprise Linux 4
Red Hat Enterprise Linux 3 Update 4
Red Hat Enterprise Linux 2.1 Update 6
SUSE® LINUX Enterprise Server 9 Service Pack 1
SUSE LINUX 9.2

Notice that the experimental support for Sun Solaris 10 introduced with VMware Workstation 5.0 doesn’t appear in this GSX 3.2 beta build. I’m afraid we’ll need to wait until GSX Server 4.0.

Virtualization world still looking for a standard naming convention

As readers have probably noticed over the years, virtualization firms and the IT press still use different terms to refer to the various virtualization technologies.
A clear example is the latest ComputerWorld article I posted: I strongly disagree with the naming convention used there.

The article reports “Software VM” for products like VMware GSX Server/Microsoft Virtual Server, “Hardware VM” for products like VMware ESX Server/Xen, and “Application Containers” for products like SWSoft Virtuozzo/Sun Solaris Containers.
Actually there is no commonly agreed naming convention for these three kinds of virtualization, and the one chosen there is unfair: it overstates the difference between products like VMware GSX Server and ESX Server, and it leaves out real hardware virtualization entirely.

I know many players are reading virtualization.info: for sure Microsoft, VMware, Xen, IBM, SWSoft, PlateSpin, Leostream. I’m unsure if Sun also reads the blog.
For all of them I propose a single, simpler naming convention, based on where the hypervisor resides:

Hardware Partitioning or Hardware Hypervisor for products like the IBM, Intel, and AMD virtualization technologies.
Kernel Partitioning or Kernel Hypervisor for products like VMware ESX Server or Xen.
OS Partitioning or OS Hypervisor for products like VMware GSX Server or Microsoft Virtual Server or SVISTA.
Application Partitioning or Application Hypervisor for products like SWSoft Virtuozzo or Sun Solaris Containers.
Session Partitioning or Session Hypervisor for products like Microsoft Terminal Server or Citrix Presentation Server.
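For illustration, the proposed convention could be expressed as a simple lookup from product to hypervisor layer, using only the example products listed above (the "IBM LPAR" entry is my own hypothetical stand-in for the IBM hardware technologies):

```python
# Sketch of the proposed naming convention: map each product to the
# layer where its hypervisor resides. Examples taken from the list above.
PARTITIONING_LAYER = {
    "IBM LPAR": "Hardware Hypervisor",          # hypothetical example name
    "VMware ESX Server": "Kernel Hypervisor",
    "Xen": "Kernel Hypervisor",
    "VMware GSX Server": "OS Hypervisor",
    "Microsoft Virtual Server": "OS Hypervisor",
    "SWSoft Virtuozzo": "Application Hypervisor",
    "Sun Solaris Containers": "Application Hypervisor",
    "Microsoft Terminal Server": "Session Hypervisor",
    "Citrix Presentation Server": "Session Hypervisor",
}

def layer_of(product: str) -> str:
    """Return the proposed classification for a product."""
    return PARTITIONING_LAYER.get(product, "unclassified")

print(layer_of("Xen"))                 # Kernel Hypervisor
print(layer_of("VMware GSX Server"))   # OS Hypervisor
```

The point of the convention is visible in the table itself: GSX and ESX end up one layer apart rather than in entirely different families.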

It’s an open discussion; please give virtualization.info some feedback.

Under the hood: the soul of a virtual machine

Quoting from ComputerWorld:


Although virtualization tools have similar objectives and use a virtualization software layer, called a resource manager or hypervisor, to manage virtual machines, the basic architectures vary.
In software-based VMs, the resource manager sits on top of a host operating system and juggles the requests of multiple guest operating systems loaded on top of it. Microsoft Virtual Server 2005 and VMware GSX Server follow this model.

Other products, such as Xen and VMware’s ESX Server, use a hypervisor that sits between the guest operating systems and the hardware. Because the software layer sits on the “bare metal,” these are sometimes referred to as hardware VMs. Direct contact with the system hardware allows the VMs to work more efficiently.

Other products, such as Solaris Containers in Sun Microsystems Inc.’s Solaris 10 and SWsoft Inc.’s Virtuozzo, also use a software-based model but eliminate guest operating systems in favor of “virtualized operating systems,” or application containers. Each application appears to have the operating system to itself, but in fact, core elements, such as the kernel and system libraries, are shared. This approach is more efficient than running a full-blown guest operating system in each VM and saves on software costs because one operating system license can be used for all VMs on a physical server. But there’s a catch: Virtual operating systems can support only applications that will run on the host operating system.
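The three models quoted above differ in one easily stated way: full virtual machines (hosted or bare-metal) can boot any guest OS, while application containers can only run software that runs on the host operating system. A toy Python sketch of that distinction (illustrative only):

```python
# Toy model of the three architectures described above.

def can_run(model: str, host_os: str, guest_os: str) -> bool:
    """Can `guest_os` run under a virtualization layer of this kind?"""
    if model in ("hosted", "bare-metal"):
        # Full VMs present emulated hardware, so any guest OS can be installed.
        return True
    if model == "container":
        # Containers share the host kernel and system libraries, so only
        # software that runs on the host operating system is supported.
        return guest_os == host_os
    raise ValueError(f"unknown model: {model}")

print(can_run("hosted", "Windows Server 2003", "Linux"))   # True
print(can_run("container", "Solaris 10", "Windows"))       # False
```

This is exactly the "catch" the article mentions, and also why containers can skip per-VM guest OS licenses.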

IDC analyst Dan Kusnetzky says each approach fits a different need. “Those who need power will want approaches that are very lightweight. Others are more concerned about optimizing resources,” he says. “A single approach will not fit the need everywhere.”

Microsoft to fully support Linux in Virtual Server by end of the year

Quoting from About Virtualization blog:


During a preview of the upcoming Virtual Server Service Pack 1 by Jeff Woolsey, Program Manager for the Virtual Machine Technology Group, Microsoft revealed that it will provide official support for Linux under Virtual Server.
While Woolsey was showing off Red Hat Linux running as a virtual machine during Steve Ballmer’s keynote at the Microsoft Management Summit, Ballmer interrupted:

“As much as that hurts my eyes, I know that’s an important capability for the virtual server technology for our customers.”

Microsoft will also support Linux in the surrounding ecosystem, providing management and monitoring support in the MOM management pack.
Developed by Vintela, the management pack will also support standard Linux systems, Mac OS, and UNIX systems like AIX, HP-UX and Solaris. With the continued cooperation between Sun and Microsoft, support for Solaris under Virtual Server might be a possibility. McNealy and Ballmer will meet shortly to discuss interoperability efforts, perhaps even Solaris running under Windows.

Microsoft working on new virtualization technologies for Longhorn

Quoting from IT Jungle:


Microsoft gave more credence to its insistence that Longhorn will be more than just another service pack at the WinHEC 2005 conference Monday when officials detailed new virtualization technologies that the company is working on for the next version of Windows. When Longhorn Server ships in 2007, it could sport a new hypervisor layer, dynamic hardware partitioning software, co-processors used to manage partitions, and other high-end features designed to decrease downtime. These are features that are common in mainframe, RISC/Unix, and proprietary midrange servers.

In his Windows Server Overview and Roadmap presentation, Sean McGrane, a Windows product manager with Microsoft, detailed his company’s X64 server strategy and provided a glimpse at some new virtualization technologies his company is working on for the Longhorn Server operating system.

When you combine the current bump in processing capabilities made possible by the 64-bit processors from Intel and AMD with the coming multi-core versions of these processors, present-day symmetric multiprocessing (SMP) machines are going to look pretty wimpy compared to the systems available in just a few years, McGrane says. By McGrane’s math, one of these uni-socket servers of the future (with four processors integrated onto a single chip) will do the same amount of work as one of today’s eight-way, 32-bit, X86-based servers.

In addition to keeping Moore’s Law alive and well, this massive increase in processing capability is going to help Microsoft sell new solutions, including a high performance computing (HPC) version of Windows for scientific workloads, called Compute Cluster Edition. For business applications, Microsoft wants to get into the application consolidation game, McGrane says. However, when applications running across many smaller servers are consolidated onto a fewer number of larger servers, downtime becomes much more of an issue. As a result, Microsoft will be looking to build a range of new reliability functions into Longhorn Server. These include better error detection, which is being introduced through Windows Hardware Error Architecture (WHEA), as well as sophisticated new virtualization technologies.

Microsoft’s planned virtualization technologies will boost overall reliability by enabling Windows to make better use of the underlying hardware without causing resource conflicts among existing operating systems and applications. Virtualization will also boost reliability by enabling users to add or replace failing hardware components before they cause unplanned downtime, McGrane explains.

He also said that Microsoft is looking to use virtualization in two ways. In the “Hot Addition” mode, dynamic hardware partitioning would allow users to add more processors, memory, or I/O to meet changing needs, without needing to reboot the server to add that capacity. In the “Hot Replace” mode, a failing processor, memory stick, or I/O adapter is replaced with a redundant spare kept on site–again, without causing downtime. “If a customer gets worried [about a failing component], you swap it out before it fails,” McGrane says. “Memory and processors never go away from the application and drivers point of view.” Permanently removing hardware without requiring a restart is a tricky maneuver that would not be supported with the first release of Longhorn, he says.
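The “Hot Replace” mode can be sketched with a toy model (purely illustrative; none of this reflects Microsoft’s actual partitioning interface): the spare comes online before the failing part is retired, so the capacity visible to applications never dips.

```python
# Toy model of "Hot Replace": swap a failing component for an on-site
# spare without the application ever seeing fewer resources.

class Partition:
    def __init__(self, processors, spares):
        self.processors = list(processors)  # visible to applications
        self.spares = list(spares)          # kept on site, idle

    def hot_replace(self, failing):
        """Swap a failing processor for a spare, with no downtime."""
        if not self.spares:
            raise RuntimeError("no spare available")
        spare = self.spares.pop()
        # Bring the spare online first, then retire the failing part,
        # so capacity never drops below what applications expect.
        self.processors.append(spare)
        self.processors.remove(failing)
        return spare

p = Partition(processors=["cpu0", "cpu1"], spares=["spare0"])
p.hot_replace("cpu1")
print(p.processors)  # ['cpu0', 'spare0']
```

“Hot Addition” would be the simpler case: append to `processors` without retiring anything.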

The virtualization plans for Longhorn call for a variety of new technologies, including: a new Partition Manager that provides a user interface for the creation and management of partitions; a hardware-based Service Processor that controls the interaction of regular processors and I/O; hardware-based partitioning for segregating processors at the socket level; and a software-based virtualization layer for sub-socket partitioning, which is commonly called a hypervisor.

Some of these new technologies, such as I/O virtualization and hardware partitioning, will require cooperation with Microsoft’s hardware partners. For others, such as the new hypervisor layer, Microsoft already has much of the base technology. There has been speculation that Virtual Server 2005 will disappear as a product if Microsoft decides to build virtualization capabilities directly into the operating system, which is not yet a done deal. This would have its advantages, as Virtual Server 2005 exacts a higher level of overhead than a hypervisor–and particularly one that will be assisted by the on-chip hardware virtualization technologies in future Intel and AMD chips, dubbed Virtualization Technology by Intel and code-named “Pacifica” by AMD.

Servers equipped with dynamic hardware partitioning and a software-based hypervisor layer would provide other advantages over Virtual Server 2005. While the 32-bit Virtual Server 2005 allows different operating systems, such as Windows Server 2003, Windows NT, Novell NetWare, or even Linux, to run on the server, Microsoft’s future partitioning would provide a greater level of isolation between partitions, thereby requiring less time, money, and energy to be invested in compatibility testing and certification.

It makes sense from a Microsoft point of view. High-end Unix and mainframe servers from Hewlett-Packard, IBM, Sun Microsystems, and others have offered various levels of virtual or logical dynamic partitioning for years, and a few vendors even offer sub-processor partitioning. These technologies have enabled higher server utilization levels and engendered server consolidation, both of which save customers money. Microsoft needs a compelling virtualization story, too, if it’s serious about breaking into the high-end market and in helping customers get better utilization out of their Windows machines.

How Microsoft is using Virtual Server

Quoting from Megan Davis blog:


Many of you would like to know how Virtual Server is being used at Microsoft. Here’s a response from Jeff Woolsey, Lead Program Manager for virtualization. Thanks Jeff!

Virtual Server is being used in a variety of ways at Microsoft, including for test and development and online training, such as Microsoft Learning.

– Test and Development
Virtual Server is used by test teams throughout Microsoft, including Exchange, SQL, SBS, MOM, and many others. This is because Virtual Server allows you to rapidly deploy test servers within virtual machines while minimizing hardware requirements. Also, Virtual Server makes debugging easier. Debugging typically requires that a test computer be attached to a developer’s computer via a serial cable. With Virtual Server there’s no need for this. The process is as follows:

1. Testers reproduce the issue in a virtual machine.
2. The virtual machine is saved at the point the issue occurs.
3. The virtual machine is copied to the developer’s computer.
4. The developer connects the virtual machine to a debugger through a named pipe (a virtual serial port) and debugs the issue in the development environment.
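Step 4 relies on a named pipe standing in for the serial cable: the guest’s debug output is just a byte stream the host-side debugger reads. The following sketch (an anonymous pipe in a single Python process, with a made-up debug message) only illustrates that byte-stream idea, not the actual Virtual Server or kernel-debugger interfaces:

```python
import os

# Stand-in for the "virtual serial port over a named pipe" of step 4.
# A real setup uses a Windows named pipe and a kernel debugger; here an
# anonymous pipe plays both ends in one process, purely for illustration.
read_end, write_end = os.pipe()

# Guest side: the debug stub writes its output into the virtual serial port.
os.write(write_end, b"breakpoint hit in guest kernel\n")  # hypothetical message
os.close(write_end)

# Host side: the debugger reads the same byte stream from the pipe.
received = os.read(read_end, 4096).decode()
os.close(read_end)
print(received.strip())
```

Because the "cable" is just a pipe, the saved virtual machine can be copied to any developer's computer and debugged there, which is the whole point of the workflow above.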

– Production Use by Microsoft Learning
In the past year, Microsoft Learning has converted the majority of their online training from scripted Flash-type demos to live interactive training using Virtual Server. They started off slowly and have been ramping up with the increase in demand. Users log in and perform step-by-step interactive training with Virtual Server. On the back end, this is all done using virtual machines and Undo disks. When the customer logs in, an Undo disk is created for the session. When the user finishes and logs out, the Undo disk is discarded and the virtual machine is immediately ready for the next user.
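The Undo-disk pattern described above can be sketched as a copy-on-write overlay (an illustrative model, not the actual Virtual Server disk format): session writes land in a throwaway layer, and discarding that layer at logout resets the machine.

```python
# Toy model of an Undo disk: per-session writes go to an overlay that is
# simply discarded at logout, leaving the base disk untouched.

class UndoDiskSession:
    def __init__(self, base_disk):
        self.base = base_disk   # shared, read-only image
        self.undo = {}          # per-session copy-on-write overlay

    def read(self, block):
        # Reads prefer the overlay, falling through to the base image.
        return self.undo.get(block, self.base.get(block))

    def write(self, block, data):
        self.undo[block] = data  # never touches the base image

base = {"boot.ini": "original"}
session = UndoDiskSession(base)
session.write("boot.ini", "scribbled by trainee")
seen = session.read("boot.ini")
print(seen)  # scribbled by trainee

# Logout: discard the overlay; the VM is instantly ready for the next user.
del session
print(base["boot.ini"])  # original
```

Because "reset" is just dropping the overlay, turnaround between lab users is effectively instant.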

Benefits
Microsoft Learning is servicing more customers than ever. This is a production environment in use every day: 30,143 attendees in January alone (972 attendees daily), with 206,390 year to date. Because of the huge success of this program, Microsoft Learning is adding more hardware to increase the number of available labs.
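As a quick sanity check of the figures quoted above (assuming a 31-day January):

```python
# 30,143 January attendees over 31 days comes out to roughly
# the quoted 972 attendees per day.
january_attendees = 30_143
per_day = january_attendees / 31
print(round(per_day))  # 972
```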

Here are a few of the positive results they’ve seen:

The 90-minute lab sessions are the most popular.
Lab session use has gone up.
Time spent in the lab has gone up (averaging 75 minutes per lab now).
Customer satisfaction is up (way up!).

Customer Comments
I think this is the way IT was meant to be all along. Thank You Bill and company.
The implementation is entirely innovative and gives administrators like me a chance to experiment away from production systems.
Awesome. This is the type of thing IT training has needed for ages.
Excellent. Very useful hands on training. This module needs to be longer.
EXCELLENT! This is extremely useful hands on training.
Great! This is what admins who need to implement your products need. What about providing other training on SMS site design configurations, clusters etc.? A virtual lab setup like that will again help admins who are looking to implement this product.

Microsoft details plan to put virtualization in OS

Quoting from ComputerWorld:


Microsoft Corp. this week fleshed out the details of a plan to build virtualization capabilities directly into Windows as part of its effort to catch up to virtualization software market leader VMware Inc.
Microsoft’s plan adopts an architecture similar to the one VMware uses — a point that VMware seized upon as a validation of its technical direction. But Microsoft said vendors won’t be able to differentiate themselves on virtualization alone once the technology is supported in operating systems and chips.

The virtualization road map that Microsoft laid out at its Windows Hardware Engineering Conference here includes a lightweight “hypervisor” layer of code that will be built into the next major version of Windows, code-named Longhorn, to support the creation of virtual machines.

Microsoft is even “leaning” toward eliminating future versions of its Virtual Server and Virtual PC products, said Mark Kieffer, group program manager of Windows virtualization. But Kieffer added that a decision hasn’t been finalized.

More immediately, Microsoft plans to work with unidentified industry partners to expand the support of third-party guest operating systems, including versions of Linux, in the first service pack update for Virtual Server 2005. The update is due by year’s end and will include 64-bit compatibility and improved performance, Microsoft said.

The plans weren’t enough to sway Jason Agee, a lead infrastructure systems analyst at the Nebraska Health and Human Services System, from his commitment to VMware.

“Too little, too late,” Agee said, adding that VMware’s more mature virtualization software performs better on less-powerful hardware and is helping the agency to improve its server utilization rates.

But Tom Bittman, an analyst at Gartner Inc., said that the integration of virtualization technology with operating systems should spur broader adoption. Novell Inc. and Red Hat Inc. also plan to support virtualization technology in their Linux distributions.

Microsoft bought its way into the virtualization market two years ago through its acquisition of Connectix Corp., and it released Virtual Server 2005 last fall. Analysts said Microsoft entered the market primarily to give users of older Windows versions an upgrade path to new hardware.

But consolidating Windows NT servers with Virtual Server requires users to run a copy of Windows Server 2003 as the host operating system. The performance overhead inherent in that approach will be reduced when Microsoft moves to its hypervisor architecture, said Ben Werther, a senior product manager for Windows Server.

By contrast, VMware’s rival ESX Server, first released in 2001, doesn’t require a host operating system. Instead, it uses a hypervisor layer that runs directly on the hardware.

At WinHEC, Microsoft officials showed diagrams with the planned Windows hypervisor code layer, which will divide a system’s resources among different virtual machines. Longhorn users will be able to configure the operating system for a virtualization “role,” stripping out unneeded functionality in a so-called MinWin configuration, said Werther. But, he added, it’s still not clear if the hypervisor technology will make the first release of Longhorn Server that’s due in 2007.

Performance also is expected to improve as a result of the hypervisor’s support for upcoming virtualization extensions in chips from Intel Corp. and Advanced Micro Devices Inc.

Steven McDowell, a division marketing manager at AMD, said CPU overhead currently runs at 10% to 30% on virtualized servers. But he said AMD hopes the overhead will be “negligible” with its Pacifica virtualization technology, for which AMD released a specification this week.

Bob Armstrong, director of technical services at Delaware North Cos., said the Buffalo, N.Y.-based hospitality services provider is happy with the software it bought last year from VMware, which is a subsidiary of EMC Corp. Armstrong said Microsoft is heading in the right direction by building virtualization technology into its operating system, but he fears that “it’s going to take them a long time.”

Frank Gillett, an analyst at Forrester Research Inc., said it will take at least two years for Microsoft to deliver on its Longhorn virtualization plans. In the meantime, VMware must figure out how to stay ahead of Microsoft and Linux vendors, with general-purpose management software as one option, he said.

Raghu Raghuram, senior director of strategy and market development at VMware, said Microsoft is acknowledging that “if you want to get into the data center, you need to run an architecture that runs like ESX Server.”

But, Werther said, “the real challenge will be managing hundreds or thousands of virtual machines across a data center.” Microsoft has significantly increased its investment in virtualization management across its System Center family of management tools, he said.

Microsoft tilts toward virtualization

Quoting from Information Week:


As part of its broader push into the systems-management arena, Microsoft is beelining into the burgeoning virtualization space, and in the process has begun talking (and beginning to act) like a company more fully committed to interoperability with other platforms.

At its Microsoft Management Summit last week, the Redmond, Wash., software giant unveiled several new or updated products, notably the Virtual Server 2005 Service Pack 1 beta and a management pack that provides interfaces between Microsoft Operations Manager (MOM) 2005 and Virtual Server 2005 to allow the former to manage the latter. Company officials also disclosed future plans to bake its next-generation virtualization technology — called Hypervisor — directly into the operating system, which is expected to occur around the time Longhorn server OS is released in 2007.

Virtual Server 2005 debuted last October as Microsoft’s entry into the virtualization space, a direction the company says constitutes a key area of investment going forward. The latest service pack enables support for Linux inside Virtual Server 2005, a somewhat ironic development given Microsoft’s past posture of either outright dismissal of the open-source OS or a scathing critique of it. But it also underscores the reality that customers have massively heterogeneous IT environments — not just Windows — and they are crying out for better ways to manage all of the infrastructure.

That reality is also in large part driving demand for virtualization. Virtualization software enables a company to run multiple operating systems, applications, middleware and other software on a single server without forcing those individual elements to scrap over system resources, such as memory, cache and CPU cycles, which degrades performance. The technology also is considered a godsend by a corporate world desperate to cut down on the amount of hardware it is running and to more fully exploit the resources of the servers they do keep churning.

Microsoft’s tilt toward virtualization takes it further down a path of interoperability that in the past it has not willingly traveled. It’s a calculated move, according to Microsoft President and CEO Steve Ballmer, whose keynote speech at the management summit thematically addressed the importance of being able to play nice with others. Systems management is just one piece of the interoperability puzzle at Microsoft, where it is also focused on developing Web services and XML standards. But, according to Ballmer, management interoperability is strategic.

“[Microsoft] grew up focused in on Windows, managing Windows, taking care of Windows,” Ballmer told attendees at the management summit. “Today, I want to mark essentially a step forward where you’ll see that our dedication now is to providing you the kinds of tools that you need to manage heterogeneity in your data center.”

To underscore his point, Ballmer highlighted Microsoft’s blossoming relationship with once-arch-nemesis Sun Microsystems. Since the infamous truce over Java last year, the two IT giants have been working to make their respective technologies interoperate more smoothly. At the management event, Microsoft demonstrated a MOM 2005 console managing a Sun Solaris-based server living in the same rack as Windows servers.

With both Virtual Server 2005 and MOM 2005, Microsoft is dishing out tools that not only automate and manage the Windows environment (as the software always has), but other platforms and applications. And it’s a smart move, some partners say.

Mark Loos, managing consultant for International Network Services (INS) in Santa Clara, Calif., is one such partner who believes Microsoft is moving in the right direction. INS, which has nearly 1,000 employees worldwide specializing in infrastructure services and, recently, application development, is “taking the bull by the horns with Microsoft” and building a strong practice around the management and virtualization tools, he says.

“We’ve already done CiscoWorks, [HP] OpenView, etc., for years,” Loos says. “For Microsoft to begin to head into that area and to adjust their application development to have hooks for management [of other systems] is a great story to tell people with .Net environments.”

Microsoft’s interoperability play isn’t nirvana, it should be noted, especially in the realm of virtualization. Yes, Virtual Server 2005 will enable both Windows and other “guest” operating systems to run simultaneously inside separate virtual machines on one server. But Windows is developed to run optimally in Microsoft’s virtual machines; the others are not. In order for an OS like Linux to perform as well on Virtual Server 2005, Microsoft is looking for some help.

“We want partners to tune the third-party OSs, which is what we are enabling them to do better with [Virtual Server 2005] SP1,” says Eric Berg, director of product management for the Windows management division at Microsoft.

Virtualization leaders such as VMWare, which was acquired by EMC last year, are quick to point out that they don’t play favorites with the performance of the operating systems or other software they run. VMWare takes the agnostic approach and its software is not dependent on any operating system. Instead, it runs directly on the hardware. By comparison, Virtual Server 2005 needs Windows to operate and if Windows goes down, it does as well, according to Raghu Raghuram, senior director of strategy and market development for VMWare.

“You want your virtualization layer to be independent of the operating system so that it runs without prejudice,” Raghuram notes.

He added that the Hypervisor virtualization technology Microsoft plans to debut in two years’ time to run production applications is, frankly, an architecture that VMWare already has today. For its part, VMWare is innovating with projects that let you take a three-tiered application, with all the connectivity between the layers, and represent it all on one server, he says.

“We are going for high-level virtualization functions,” he says.

Loos works with both Microsoft and VMWare and agrees that the latter is out of the gate first with much of this technology. But, he says, the guiding light for what INS recommends for virtualization depends in large part on what the customer’s environment already looks like.

“There’s a lot of Windows shops out there,” he says. “We tell customers about the pros and cons and values to be had, and we’ll even do a bake-off if we have to.”

The Virtual Server 2005 Service Pack 1 beta version is now available for download, and among other things features 64-bit compatibility and performance improvements to the virtual machine technology.

Microsoft to bundle Virtual PC with Win64 Server

Quoting from The Inquirer:


Since we know about Win64 coming at WinHEC, what is MS going to announce? It has to surprise people and give them something to babble about, and this time is no exception.
So this week, the big surprise at the WinHEC keynote is that it will bundle VirtualPC for free with Win64 Server. Don’t tell anyone, that is the big secret, and we wouldn’t want to scoop the keynote, would we?

Now, it may be a great thing for MS and its customers, especially now that it supports Linux, but others might have a bone to pick with them. I can just see EMC throwing raw meat to its learned fiends in anticipation.

Voltaire demonstrates complete virtual data center solution with Xen server virtualization

Quoting from official announcement:


Voltaire, the leader in interconnect solutions for high performance grid computing, today announced the completed integration and support for Xen server virtualization software with its InfiniBand-based grid interconnect solutions. The combined solution, which includes Voltaire network and storage virtualization technology, creates a highly scalable data center infrastructure by eliminating I/O overhead and providing the full flexibility to choose any application to run on any virtual server, and dynamically allocate compute, network, and storage resources. Voltaire will demonstrate the solution today at the Linux on Wall Street conference in New York City in booth 222.

Voltaire grid interconnect solutions consist of layer 2-7 multiprotocol switches with integrated InfiniBand, GbE and Fibre Channel connectivity, grid management and virtualization software, adapters and other software that enable high performance applications to run on virtualized server and storage resources. Voltaire switches deliver the industry’s highest speed and lowest latency and are deployed successfully in the world’s largest production supercomputer and many other large grids.

Voltaire is collaborating with Cambridge, U.K.-based XenSource and the open source Linux community to deliver the solution. The solution leverages InfiniBand drivers from OpenIB, the open source InfiniBand project. Both the InfiniBand and Xen drivers are included in Kernel.org 2.6.11.

“We see strong demand for server virtualization solutions coming from our enterprise customers, with large Wall Street firms in the lead,” said Yaron Haviv, Chief Technology Officer, Voltaire. “Server virtualization is part of the general trend in IT today of moving to use large grids that are based on commodity servers and storage. Combining these commodity elements with open source Linux software and Voltaire multi-service switches dramatically reduces IT costs.”

“We are pleased with the progress and commitment shown by Voltaire to support Xen,” said Moshe Bar, CTO of XenSource. “The integrated solution of Xen server virtualization with Voltaire grid interconnect with hardware-based virtual I/O enables customers to build high-performance virtual data center environments.”

Voltaire switches are centrally controlled and virtualized through embedded VoltaireVision Grid Interconnect Management software. VoltaireVision uses industry-standard interfaces to enterprise management platforms and can be provisioned by automation and policy platforms such as IBM Tivoli Intelligent Orchestrator and others. The integration of provisioning management tools, server virtualization software and intelligent grid interconnect solutions is the foundation of the next generation data center.

Xen is an open source server virtualization architecture that allows users to run multiple Virtual Machines (VMs) simultaneously on the same physical server. Each VM gets a portion of the server CPU, memory and I/O from Xen, which dynamically assigns resources to virtual machines or migrates virtual machines to other servers, if needed. Through integration with Voltaire grid interconnect, Xen benefits from multi-channel hardware and OS bypass to enable full isolation between virtual machines, greater performance and scalability.
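The resource-assignment and migration behavior described above can be sketched with a toy model (illustrative only; none of these names come from the Xen API):

```python
# Toy model of the behavior described above: a host hands out shares of
# its memory to VMs, and a VM can be migrated to another host with
# spare capacity. Purely illustrative; not the actual Xen interface.

class Host:
    def __init__(self, name, memory_mb):
        self.name = name
        self.memory_mb = memory_mb
        self.vms = {}  # vm name -> allocated memory (MB)

    def free_mb(self):
        return self.memory_mb - sum(self.vms.values())

    def start_vm(self, vm, memory_mb):
        if memory_mb > self.free_mb():
            raise MemoryError("not enough free memory")
        self.vms[vm] = memory_mb

def migrate(vm, src, dst):
    """Move a VM's allocation from one host to another."""
    dst.start_vm(vm, src.vms.pop(vm))

a = Host("host-a", 4096)
b = Host("host-b", 4096)
a.start_vm("web01", 3072)
migrate("web01", a, b)           # host-a is freed up for a bigger guest
print(a.free_mb(), b.free_mb())  # 4096 1024
```

Real live migration also moves the running guest's state, of course; the sketch only captures the bookkeeping side of "dynamically assigns resources or migrates virtual machines to other servers."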

Integrated with Voltaire GridBoot for Diskless Grids and Clusters
Voltaire’s Xen server virtualization solution also offers diskless boot capabilities, which simplify the deployment and management of complex grid computing environments. Voltaire GridBoot is complementary to Xen as it enables the complete disaggregation of compute and storage resources.

Voltaire’s solution enables users to run diskless stations, perform pre-boot remote diagnostics and conduct maintenance tasks using standard tools such as OSCAR, ROCKS, IBM Tivoli Intelligent Orchestrator and other commercial applications. GridBoot coupled with Voltaire multiprotocol switches offers customers the flexibility to boot off any type of storage: FC SAN, iSCSI, NAS or from a server within the InfiniBand cluster. Using Voltaire GridBoot, customers can easily move to diskless environments to reduce deployment time and cluster operating expenses.