The open source Open vSwitch project was announced by Citrix in May 2009.
It reached version 1.0 exactly one year later, and in the meantime it became a key building block of the upcoming Xen Cloud Platform (XCP).
Citrix has already announced that it will appear in the next version of XenServer, codenamed Cowley, hopefully to be released early next month at the Synergy 2010 Berlin conference.
On top of that, yesterday Simon Crosby, CTO of Citrix's Virtualization and Cloud Computing division, added some interesting details about its roadmap.
First of all, the next version of the Citrix hypervisor will expand support for Single Root I/O Virtualization (SR-IOV), a technology introduced in XenServer 5.6, so that its configuration will no longer require interaction with the command-line interface.
Second, and more importantly, the next version of Open vSwitch will support SR-IOV, so that virtual traffic won't bypass the policies enforced at the switch level.
This version of Open vSwitch will specifically support the Intel 82599ES 10Gbit NIC (formerly codenamed Niantic).
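To give an idea of the manual steps that SR-IOV configuration involves today, here is a rough sketch of a XenServer 5.6-era workflow, driven from Python purely for illustration; the VF count, PCI address, and VM UUID below are hypothetical placeholders, and the exact procedure may differ from Citrix's documentation:

```python
# A rough sketch of the XenServer 5.6-era SR-IOV workflow, driven from
# Python via subprocess. The VF count, PCI address, and VM UUID are
# hypothetical placeholders; consult the Citrix documentation for the
# authoritative steps.
import subprocess

def run(cmd):
    """Run a shell command on the XenServer host and return its stdout."""
    return subprocess.run(cmd, shell=True, check=True,
                          capture_output=True, text=True).stdout

# 1. Load the Intel ixgbe driver with virtual functions enabled
#    (max_vfs was the module parameter used by drivers of that generation).
run("modprobe ixgbe max_vfs=8")

# 2. List the virtual functions the NIC now exposes as PCI devices.
vfs = [line for line in run("lspci").splitlines() if "Virtual Function" in line]
print("\n".join(vfs))

# 3. Assign one VF to a guest by PCI address, using XenServer's
#    PCI passthrough mechanism (address and UUID are examples).
vm_uuid = "0f830a2f-0000-0000-0000-000000000000"
run(f"xe vm-param-set uuid={vm_uuid} other-config:pci=0/0000:04:10.0")
```

It is exactly this kind of host-level, command-line-driven procedure that the next XenServer release is expected to fold into its management tooling.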
The performance improvement offered by SR-IOV is remarkable. Below is a comparison chart of Citrix Provisioning Server running as a virtual appliance on XenServer 5.6 while streaming 300 virtual desktops, each booting within a couple of seconds:
It's not clear whether this new version of Open vSwitch will be included in the upcoming XenServer release or not.
Update: Crosby also published an earlier post describing the need for Open vSwitch, which is really worth a read:
…To understand the need for the Open vSwitch, you have to realize that while CPU virtualization, including hardware support, has evolved rapidly over the last decade, network virtualization has lagged behind pretty badly. The dynamism that virtualization enables is the enemy of today’s locked down enterprise networks. For example, migrating a VM between servers could mean that network based firewall and intrusion detection systems are no longer able to protect it. Moreover, many enterprise networks are administered by a different group than the servers, so VM agility challenges an organizational boundary.

What we want to achieve is seamless migration of all network-related state for a workload, along with the workload. The obvious place to effect such network changes is in the last-hop switch – which now, courtesy of Moore’s Law and virtualization, is on the server itself, either in the hypervisor or (increasingly) in smart hardware associated with a 10Gb/s NIC card.

The Open vSwitch enables granular control over traffic flows, with per flow admission control, the option for rich per packet processing and control over forwarding rules, granular resource guarantees and isolation between tenants or applications, and enables us to dynamically reconfigure the network state for each VM, or for each multi-VM OVF package, as it is deployed or migrated. Network state for each virtual interface becomes a property of the virtual interface, and as a VM moves about the physical infrastructure, all of the policies associated with the VIF move with it. Suddenly the network team is no longer required in order to move a VM between servers…
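To make the per-VIF policy idea concrete, here is a minimal sketch of the kind of rules Open vSwitch can attach to a virtual interface, expressed through the standard ovs-vsctl and ovs-ofctl tools and driven from Python for illustration; the bridge name, OpenFlow port number, and MAC address below are hypothetical:

```python
# A minimal sketch of per-VIF policy in Open vSwitch, expressed through
# the standard ovs-vsctl / ovs-ofctl tools. The bridge name, OpenFlow
# port number, and MAC address are hypothetical.
import subprocess

def ovs(cmd):
    """Invoke an Open vSwitch CLI tool on the host."""
    subprocess.run(cmd, shell=True, check=True)

bridge = "xenbr0"          # hypothetical bridge backing the VM's network
vif = "vif5.0"             # hypothetical virtual interface of the VM
port = 5                   # hypothetical OpenFlow port number of that VIF
mac = "aa:bb:cc:dd:ee:ff"  # example MAC address assigned to the VM

# Per-interface resource guarantee: rate-limit the VIF's inbound
# traffic to 10 Mbit/s (ingress_policing_rate is expressed in kbps).
ovs(f"ovs-vsctl set interface {vif} ingress_policing_rate=10000")

# Per-flow admission control: forward only traffic carrying the VM's
# own source MAC, and drop anything else arriving on that port.
ovs(f"ovs-ofctl add-flow {bridge} priority=100,in_port={port},dl_src={mac},actions=NORMAL")
ovs(f"ovs-ofctl add-flow {bridge} priority=90,in_port={port},actions=drop")
```

Because these rules key on the virtual interface rather than on a physical switch port, the same policy can be re-applied wherever the VM lands after a migration, which is exactly the property Crosby describes.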
The post also offers an interesting perspective on the relationship between Open vSwitch and the Cisco Nexus 1000V:
Many people wonder if the Open vSwitch is “competitive” with the ambitions of traditional networking vendors or with the Cisco Nexus 1000v virtual switch. The answer is “No – indeed the opposite”: The Nexus 1000v from Cisco provides Cisco customers with a powerful distributed switch architecture that brings the value of the full Cisco edge processing capability to virtualized environments, including Cisco management and toolset support. I would have no hesitation in recommending the Cisco product to Cisco customers. It delivers a value-added proposition on top of the basic concept of a dynamically controllable forwarding plane, very similar to OpenFlow and the Open vSwitch.
It would be easy to implement the Nexus 1000v either in parallel with, or on top of, the Open vSwitch. Indeed the value of OpenFlow has been recognized by one Cisco research group, and HP, Dell and NEC are active participants in the development and use of OpenFlow. Startups such as Netronome and Solarflare are leading the way toward extensive hardware support of the Open vSwitch, permitting native multi-10Gb/s speed switching on server hardware that also hosts virtualized enterprise workloads.