The VMware Performance team has posted a blog article detailing the performance and use cases of VMware DirectPath I/O. DirectPath I/O is a technology, available since vSphere 4.0, that allows guests to access hardware devices directly through hardware virtualization support such as Intel VT-d and AMD-Vi. A VM with DirectPath I/O can access the physical NIC directly instead of using an emulated or paravirtualized one, which provides additional performance by saving CPU cycles and gives access to hardware features not yet supported by vSphere, such as a TCP offload engine or SSL offload. VMware recommends using DirectPath I/O only for workloads with very high packet rates.
Of course, using hardware directly within a VM has some disadvantages as well: the NIC attached to the DirectPath I/O VM can no longer be shared with other VMs, and the NIC can no longer be used with features like vMotion and Network I/O Control.
VMware did a performance review comparing DirectPath I/O with an emulated network adapter. The resulting graph, generated using netperf, details the performance gain at high packet rates; at low packet rates, the added value is not significant.
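The article does not list the exact netperf parameters VMware used, but as a rough sketch, a request/response test that drives a high packet rate could look like the following (the receiver IP, test length, and message sizes are assumptions for illustration):

```shell
# On the receiver VM: start the netperf server (listens on port 12865 by default).
netserver

# On the sender: run a TCP request/response test for 60 seconds with small
# 64-byte request and response messages. 10.0.0.2 is a placeholder IP.
netperf -H 10.0.0.2 -t TCP_RR -l 60 -- -r 64,64
```

netperf reports transactions per second; since each transaction with small messages is roughly one packet in each direction, a high transaction rate corresponds to a high packet rate, which is the regime where DirectPath I/O shows its benefit. This is an environment-dependent CLI fragment, not a standalone runnable script.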
The performance of DirectPath I/O compared to a virtual NIC was also measured in three scenarios: one web server workload and two database workloads. The web server test, which measured users requesting data from a web server, showed DirectPath I/O serving about 15% more users per %CPU used.
In the database tests, however, no notable performance gain was measured, due to the low packet rate.
The performance of an OLTP workload was also measured with SAP and DB2. In that specific situation the virtual NIC outperformed DirectPath I/O by about 3% in the default configuration, a gap that could be eliminated by setting a memory reservation for the virtual NIC configuration.
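The article does not show how that memory reservation was configured; as one possible sketch, a VM's memory reservation can be set with VMware PowerCLI (the VM name and reservation size below are placeholders, and an existing connection to vCenter via Connect-VIServer is assumed):

```shell
# Reserve the VM's memory (4096 MB assumed here) so its pages are
# backed by physical RAM and not ballooned or swapped.
Get-VM "oltp-vm" | Get-VMResourceConfiguration |
  Set-VMResourceConfiguration -MemReservationMB 4096
```

This is a configuration fragment that only runs inside a PowerCLI session connected to a vCenter or ESXi host, so it is not independently runnable.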
Conclusion:
"…DirectPath I/O is intended for specific use cases. It is another technology VMware users can deploy to boost performance of applications with very high packet rate requirements…"