Tech: Controlling Virtual Server 2005 R2 with Windows PowerShell

Now that Windows PowerShell (formerly codenamed Monad) is approaching its final release (Microsoft showed the release candidate build at TechEd 2006), people are starting to look at using it in several environments.

One of them is obviously virtualization, where the powerful new shell could automate a lot of tasks (and eventually will, with Windows Server Virtualization and Virtual Machine Manager). The problem is that Virtual Server 2005 R2 only offers COM interfaces, which PowerShell (a .NET application) can’t access directly.

Luckily Ben Armstrong, helped by the PowerShell team, wrote a valuable article on how to handle the whole thing by changing the COM security level.
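
For reference, here is a minimal sketch of the starting point. This assumes Virtual Server 2005 R2 is installed locally and registers its documented "VirtualServer.Application" COM ProgID; without the COM security change described in Ben's article, the calls below may fail with an access-denied error:

```powershell
# Minimal sketch, Windows-only. Assumes Virtual Server 2005 R2 is
# installed and exposes its "VirtualServer.Application" COM ProgID.
# Without the COM security adjustment from Ben Armstrong's article,
# the property accesses below may fail with an access-denied error.
$vs = New-Object -ComObject "VirtualServer.Application"

# List every virtual machine registered with this Virtual Server host.
foreach ($vm in $vs.VirtualMachines) {
    Write-Host $vm.Name
}
```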

Could the same be done with the upcoming VMware Server, which exposes the old GSX Server VmCOM API and a brand-new C API?

Update: Ben posted some more details to further simplify the whole thing.

New VMware product details leaked

virtualization.info just discovered details about new technologies VMware is working on.

Available information suggests VMware is preparing a new offering under the umbrella name of VirtualCenter System Image (or Systems Image), which should include revamped P2V capabilities, plus new features for live VM backup and patching, operating at the host OS level (possibly packaged in a product called Integrity).

Both names and project details have leaked from open job position descriptions VMware published on a recruitment site, as the following screenshot demonstrates (the URL has been masked):



More news as soon as possible!

WinHEC 2006 virtualization sessions available online

WinHEC 2006 has been a decisive conference for Microsoft in its effort to relaunch a solid virtualization strategy.

To recap the whole list of announcements, take a look at the following posts:

Now the slides of all WinHEC sessions about virtualization have been made available online:

  • Device Virtualization Architecture
    This session discusses I/O virtualization techniques, focusing on those that will be used in Windows virtualization. It explains how devices can be shared between multiple partitions and provides the background necessary for understanding the following session.
  • How to Use the WMI Interfaces with Windows Virtualization
    This session provides attendees all of the information that they need to take advantage of the Windows Management Infrastructure (WMI) interfaces that allow remote and local management of a server that is running with Windows virtualization enabled. This knowledge will enable attendees to build software management solutions on top of the Windows virtualization architecture.
  • HyperCall APIs Explained
    This session provides attendees a robust understanding of Windows hypervisor application programming interfaces (APIs) that are used to configure and communicate with the Windows hypervisor. Makers of third-party operating systems can use this knowledge to build solutions on the Windows virtualization infrastructure.
  • Hypervisor, Virtualization Stack, and Device Virtualization Architectures
    The powerful new Windows virtualization infrastructure will be a core capability in Windows Server Longhorn and in subsequent client releases. This session provides an architectural overview of the three pillars of Windows virtualization: the hypervisor, the virtualization stack, and device virtualization. Other Windows virtualization sessions build on the groundwork that will be laid during this session.
  • I/O Memory Management Hardware goes Mainstream
    I/O memory management hardware has been an essential component of mainframe and high-end server platforms for decades. Just as other technology components that were once confined to the high end of the computing space have moved into the mainstream PC, I/O memory management hardware is now poised to make its mainstream debut. This presentation introduces the AMD I/O memory management architecture, including details of the software interface, page table formats, and table walking algorithms. The potential usage and benefits of the AMD I/O memory management architecture are also discussed.
  • Inside Microsoft’s Network and Storage VSP/VSC
    This session provides independent software vendors (ISVs) and independent hardware vendors (IHVs) an in-depth understanding of the architecture that is used in Microsoft’s network and storage virtual device drivers and familiarity with the built-in capabilities of these drivers.
  • Intel Virtualization Technology: Strategy and Evolution
    This session presents the vision and strategy for virtualization in enterprise computing, for both client and server usage models. It then discusses how system virtualization is implemented today and describes the role and value of the first-generation Intel Virtualization Technology (VT). Finally, the session provides a deep discussion of future VT architecture directions and ends with a description of the Intel virtualization roadmap.
  • Microsoft Server Virtualization Strategy and Virtual Hard Disk Directions
    This session provides attendees with insight into the direction that Microsoft is taking with its operating system virtualization technologies. It covers virtual server, virtual PC, Windows virtualization, and Microsoft’s virtual hard disk (VHD) direction. The session includes a brief history of product releases to date, the current work, and the future direction for each of these products.
  • PCIe Address Translation Services and I/O Virtualization
    This session presents some of the evolutions from the PCI I/O Virtualization working group in the two key areas of PCIe Address Translation Service (ATS) and protocols to support multiple operating system instances. The PCIe ATS specification defines a new protocol to enable I/O endpoints to efficiently work with chipsets that implement address translation and protection table technology. This session provides a functional overview of the address translation and protection table, ATS terminology, ATS wire protocol operation, critical areas of attention, and what lies ahead.

    The PCIe I/O Virtualization specifications define new protocols to enable I/O endpoints to be efficiently shared by multiple operating system instances and to break through the performance barriers that are currently gating virtualization solutions within the industry. This session covers the I/O virtualization terminology, a functional overview, I/O virtualization usage models, single-root and multi-root topologies, configuration, management, error handling, quality of service (QoS), and what lies ahead.

  • Windows Virtualization Best Practices and Future Hardware Directions
    This future-looking session gives attendees an understanding of the directions that Microsoft is taking with Windows virtualization and what independent hardware vendors (IHVs) can do to ensure interoperability between their hardware and Windows virtualization. Example topics include IOMMUs and direct memory access (DMA) remapping.
  • Windows Virtualization: Scenarios and Features
    This session discusses the scenarios and features that were used to define Windows virtualization. It looks in depth at the server consolidation, business continuity, development and test, and dynamic datacenter scenarios.
  • Windows Virtualization: Strategy and Roadmap
    This session outlines the short-term to mid-term business planning strategy and roadmap for Microsoft’s virtualization technologies. The primary focus is the expected enhancements to virtualization in the Windows Server Longhorn timeframe. The target audience is the senior business or marketing professional who wants to capitalize on the opportunity.

Webcast: How to Virtualize the Test Lab

PlateSpin and Akimbi scheduled a co-presented 1-hour webcast for June 27:

Are you leveraging your virtual infrastructure to its maximum potential?

Forward-looking IT organizations recognize that exploiting the power of their virtual infrastructure beyond the production data center will create significant benefits, saving both time and money.

Join Akimbi and PlateSpin as they explore emerging trends in server virtualization, specifically the growing interest in test lab virtualization in enterprise IT and application development and test organizations.

In this hour-long webcast, you will learn:

  • what it means to virtualize the test lab and the benefits it delivers
  • a stepwise planning approach for establishing a virtualized test lab
  • requirements and evaluation criteria for enabling technologies
  • best practices that accelerate implementation

Register for it here.

Virtual Iron joins the Distributed Management Task Force

Quoting from the Virtual Iron official announcement:

Virtual Iron Software, a provider of software solutions for creating and managing virtual infrastructure in the data center, today announced that it has joined the Distributed Management Task Force (DMTF) to help lead the development of management standards and promote interoperability in the enterprise data center.

The DMTF (www.dmtf.org) is the developer of the Common Information Model (CIM), the standard for the exchange of management information in a platform-independent and technology-neutral way. Virtual Iron joins other leading technology vendors to help streamline integration of management systems through end-to-end, multi-vendor interoperability.

Adoption of CIM standards by IT organizations enables a more integrated, cost-effective and stable approach to management. Virtual Iron is a member of the System Virtualization, Partitioning, and Clustering Work Group within DMTF. This group is focused on defining and addressing current and future customer requirements for performance, availability and interoperability in these areas.

Virtual Iron’s software currently leverages and implements multiple DMTF-supported standards such as CIM, communications/control protocols and core management services/utilities. This allows users to manage Virtual Iron directly through other management systems…

Release: Ecora Documentor, Reporter and Auditor for VMware

Quoting from the Ecora official announcement:

Ecora Software, the leader in software solutions supporting IT control, compliance, and security, announced the release of a suite of configuration management solutions for VMWare ESX Servers that provide substantial time and resource savings by simplifying virtual infrastructure management and mitigating security risks.

The suite includes Ecora Documentor for VMWare, a free application that installs and operates on a single workstation and collects hundreds of configuration settings from VMWare servers, producing comprehensive, audit-ready documentation.

Ecora Reporter for VMWare is an upgrade and provides additional functionality by offering dozens of ready-made report templates for fast, easy analysis of critical configuration data.

Ecora Auditor for VMWare provides IT staff with the ability to customize reports, as well as automatically track configuration changes.

Ecora Reporter and Auditor for VMWare come loaded with 28 ready-made report templates, including:

  • ESX security settings
  • Virtual machine permissions
  • VMFS files
  • VMFS volumes
  • ESX and GSX host overview
  • Virtual machine summary
  • Physical NIC and virtual switches
  • Linux local users and groups

Download Documentor (free), Reporter (trial) and Auditor (trial).

The virtualization.info Virtualization Industry Roadmap has been updated accordingly.

Virtualisation gaining in credibility

Quoting from ITnews:


“The virtualisation market is now established. Customers are talking about virtualisation across the board,” he said.

Supporting this claim, Harapin said of VMware’s 20,000 global enterprise customers, 90 percent were now using VMware to host frontline business applications, while 25 percent of users were now standardising on virtual infrastructure.

“We will also see the average deployment size is moving from the 100s to the 1000s within the next year,” he said. “There are currently 1,000,000 virtual VMware servers in operation.”

The company was also beginning to gain traction in the SMB market with some 500,000 downloads of its free VMware Server beta since February…

Read the whole article at source.

Microsoft highlights virtualization efforts at TechEd 2006

Bob Muglia, Senior Vice President, Server and Tools Business at Microsoft, opened the TechEd 2006 conference this week, highlighting the virtualization efforts already disclosed at the WinHEC 2006 event:


Now let’s talk about those Microsoft promises. I want to start with the first one, which is “Manage Complexity, Achieve Agility.”

This is really focused on our Dynamic Systems Initiative, something we’ve been working on now for about three or four years. It’s an area we’ve been very, very focused on in a consistent way. There are many pieces to this. Knowledge driven management is very important, design for operations with a lifecycle starting with developers all the way going through the IT lifecycle is very critical. And the third piece of it, and the one I want to focus on tonight, is virtual infrastructure and talk about the investments that Microsoft is making together with the industry to use virtualization to revolutionize the way you design your datacenters and roll out applications.

Now, when we think about virtualization, we really think about this at multiple levels. Sort of the level of virtualization that most people think about we would call hardware virtualization. This is what you use when you virtualize the entire hardware, it’s what Virtual Server does, products like VMware and Xen work at this level, and it’s a very, very effective way to achieve a great deal of isolation of applications in a packaged format, in our case we call it VHD format for Windows. Those packages are very isolated, but they are not very granular in terms of the level of control you have.

Yet this kind of virtualization, hardware virtualization is very critical, it’s a critical step and one that we’re investing heavily in, and in some ways it’s some of the first things that you’ll see business results on.

There are, however, two more places of investment that we think are pretty important. The middle one, which is absolutely the one that you’ll see the furthest out, this will take another generation of the operating system beyond “Longhorn” to really get this into place, is OS services virtualization. This is where within the operating system we virtualize the key system services, services like Win Logon to allow you to run multiple instances at the same time.

Now, this is maybe not useful in the most generalized of sense, certainly not as much as hardware virtualization, but it’s particularly interesting in hosting environments where you want to run thousands of identities, thousands of different companies on a single server, having those system services be virtualized is a key step.

The third piece, which is one that we do think will be very broadly applicable to everybody in this room and every IT shop, is application virtualization. This is the concept of being able to take and package an application up as a virtual object that can get sent down to the server or computer. It’s particularly interesting in the short run to allow applications to be run in a Windows client environment without having fear of interactions between them.

With application virtualization technology it’s possible, for example, to run two versions of Office on the same machine, which you could not do without this technology; or two versions of your business applications that use different DLLs that are incompatible, different, for example, levels of ADO that I know people have struggled with.

So providing this provides less isolation than, say, hardware virtualization but much more granularity of control. So it’s a series of tradeoffs.

Now, a scenario where we see a lot of important potential we have recently acquired a local company, Softricity, which has been a leader in this space. They have a clear leadership position in terms of application virtualization. And you’ll see us incorporate the Softricity technology into our product in the months and years to come.

So virtualization is very key, we’re making investments across the board…

Read the whole transcript here or watch the webcast here.

Release: Egenera PAN Manager 5.0

Quoting from the Egenera official announcement:

Egenera Inc., a leader in datacenter virtualization architecture, today unveiled Egenera® PAN Manager™ 5.0, the fifth-generation software release for the Egenera BladeFrame® system. PAN Manager 5.0 enables multiple Egenera BladeFrame systems—hundreds of computing resources—to be managed as a single pool of assets in order to further enhance operational simplicity, extend failover to a broader set of computing resources, and lower total cost of ownership.

Among the major new and enhanced features in Egenera PAN Manager software release 5.0 are:

  • BladeFarms
    BladeFarms extend the PAN to encompass multiple Egenera EX systems creating a dramatically expanded pool of resources which can be managed by a single management console. Servers can now fail over to any of the Processing Blades within the BladeFarm. This capability extends the N+1 failover model to improve the flexibility and efficiency of failover pools and reduces the total cost of providing failover resources. In addition, clustered applications can draw additional processing resources from any Processing Blade in the BladeFarm.
  • Named Pools
    Egenera systems allow customers to mix-and-match a variety of different Processing Blades within a single system or within a BladeFarm (e.g. 32-bit/64-bit; two-socket/four-socket, single core/dual core, AMD®/Intel®). Named pools enable customers to easily create computing pools or failover pools of specific types of assets within the PAN based on business and application requirements—providing greater choice, flexibility and granularity to maximize the efficient use of computing resources. For example, if a customer’s Oracle® database runs best on dual-core four-socket blades, customers can draw from a like Egenera blade in a named pool anywhere within the PAN.
  • Warm pBlades
    Engineered for cost-effective high availability and rapid failover, “warm pBlades” are powered-up and paused in a “warm” state awaiting PAN Manager software to initiate an operating system boot—cutting total failover time in half. This gives customers an additional option in terms of failover, allowing them to choose different levels of availability needed for particular applications.
  • Reliability, Availability, and Serviceability Improvements
    PAN Manager software release 5.0 features numerous RAS improvements, including expanded HBA status and performance data, disk performance data, and internal connection monitoring and status values for each connection.
  • Virtual Tape Support
    As virtualization technologies continue to expand to new areas within the datacenter, Egenera extends the PAN architecture to encompass these new offerings. PAN Manager software release 5.0 now includes support for virtual tape for high-performance back-up. With an increasing number of customers running large database applications on Egenera platforms, there has been a corresponding need for rapid backup.

VMware to publish a benchmarking system for virtualization platforms

Hints of this move appeared with the first blog entry of Diane Greene, VMware President, and with the recent publication of the whitepaper Performance Benchmarking Guidelines for VMware Workstation 5.5.

Now an unofficial confirmation comes from Computerworld:


Greene positioned the forthcoming release of VI3 as the start of a second phase for VMware, one where virtualisation is an established technology which users feel comfortable with.
“We spent the last eight and a half years evangelising virtualisation,” she says. “Now, it’s here to stay.”
Phase two also means facing competition from the likes of Microsoft and open source virtualisation player XenSource.

“In phase one, there were no alternatives, no shipping products,” Greene says. Looking ahead, VMware plans to issue performance benchmarks comparing its virtualisation software with that of its rivals, she says…

Read the whole article at source.