Microsoft answers criticism of its internal use of Hyper-V

On April 9, virtualization.info published an article titled How Microsoft and VMware use virtualization internally, detailing how the two competitors use their own virtualization products in-house.

The details of the Microsoft internal case study appeared in a public TechNet article that drew a number of criticisms about how poorly the vendor seems to use Hyper-V.

One of the Microsoft employees involved in that case study, David Lef, Microsoft IT Technology Architect, wrote to virtualization.info to provide additional details about the adoption of virtualization inside Microsoft:

I was not the source for all of the information in the Microsoft case study, but I was a significant contributor and presented the associated webcast. Keep in mind that at MS we deploy very early, often before products are released publicly and everything needed to support them elegantly is available. Some of the decisions early on around clustering were driven by this and by the benefit/drawback decisions that were made at operational levels. It simply did not make sense for us to bring clustering into an environment that did not need it at the time, particularly at the price of additional complexity. System Center Virtual Machine Manager 2008 handles it well and our newer infrastructure is more suited to it, but that was not initially the case. Things change. We are designing it in as an integral part of our Windows Server 2008 R2 infrastructure for Hyper-V. ALL hosts will be clustered and managed by SC VMM from the start.

The VM-to-host ratio is also very subjective, and simply looking at a few excerpts does not tell the true story. Comparing simple numbers in what are probably very different scenarios can also be very misleading. Our 10:1 (or sometimes lower) ratio is a result of the fact that we are putting rather robust and demanding workloads into VMs, as opposed to the legacy and small workloads that people have traditionally targeted for virtualization.

We position Hyper-V VMs directly against our smaller standard physical servers, challenging the people requesting new server instances (or replacing older systems) to justify going with anything other than a VM. That is how we are approaching 40% of our total population (4,500+ out of 11K instances) running in VMs and targeting 80% of net-new instances going straight to VMs. Even when we talk about our “lab environment”, this is not purely dev/test. These are really pre-production environments, supporting live data and applications with users that depend on them to get the internal business of Microsoft done (source code management, product builds, support, etc.). It also encompasses the “dogfooding” aspect, where we move very large segments of our population through beta release cycles to exercise our products and scenarios before we inflict them on our customers.

The comment about how our units are “absolutely monstrous” and “VMware is significantly cheaper” because they get the same number of VMs on a smaller host is easily explained if you realize it is a classic apples-versus-oranges comparison. We are putting 10 VMs on a host, but the VMs are running everything from SQL to enterprise core infrastructure to critical line-of-business tools. Where a single processor with 512 MB of RAM was common yesterday, we are now looking at 2-4 processors and four or more GB of RAM for each VM as the norm today.

We have to make all of this happen in a way that ensures we meet stringent SLAs, can assure performance expectations, and makes a very demanding end-user community happy. We don’t want to simply stuff as many VMs onto a host as possible without regard to the potential cost. We actually don’t want people to know or even care that their app or service is running on a VM. Most don’t, which indicates we are doing pretty well.

Virtualization is not perfect by any means. Microsoft’s, partners’, and even competitors’ products continue to get better. We all continue to refine how we use them, a big part of which is changing business and operational processes to really take advantage of the flexibility, agility, and centralization they offer. We continue to work in a direction where the underlying capabilities of the platform and the management tools around it will make it more efficient, but we are already much better off than we would have been without it. The business recognizes this, which is why virtualization is very high on our list of priorities.

As an established enterprise, we don’t have the luxury of changing things overnight. Directionally, we know we are on the right path. We are evolving things in Microsoft IT toward a better future, just as we hope others are doing and can benefit from very candid views of our experience.
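
For readers weighing the consolidation figures Lef cites, here is a quick back-of-the-envelope check of the numbers in the quote. This is only a minimal sketch: the instance counts and the 10:1 density are the rounded figures from the statement above, and the variable names are our own, not anything from Microsoft IT.

    # Rough consolidation math based on the approximate figures in Lef's statement.
    # All numbers are rounded values quoted above, not exact Microsoft IT data.

    total_instances = 11_000        # total OS instances in Microsoft IT (approx.)
    virtualized_instances = 4_500   # instances already running as Hyper-V VMs (approx.)
    vms_per_host = 10               # the ~10:1 VM-to-host ratio mentioned in the quote

    # Share of the estate that is already virtual (the "approaching 40%" figure).
    virtual_share = virtualized_instances / total_instances
    print(f"Virtualized share: {virtual_share:.0%}")          # ~41%

    # Hosts needed to carry the current virtual estate at the stated density.
    hosts_needed = -(-virtualized_instances // vms_per_host)  # ceiling division
    print(f"Hyper-V hosts at 10:1 density: {hosts_needed}")   # ~450 hosts

    # Physical servers avoided by virtualizing those instances.
    print(f"Physical servers avoided: {virtualized_instances - hosts_needed}")

In other words, even at the criticized 10:1 density, roughly 4,500 workloads fit on about 450 hosts, which is the trade-off Lef argues is acceptable given the size of the individual workloads.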