Tech: VMware Load-Based Teaming Performance

The VMware performance team has published an article about VMware Load-Based Teaming (LBT) performance. LBT is a dynamic, traffic-load-aware teaming policy introduced in vSphere 4.1 that ensures the physical NIC capacity of a NIC team is used optimally. While the traditional load balancing options base their routing on virtual switch port ID, IP hash, or source MAC hash, LBT bases its decisions on current network traffic, making sure traffic is effectively distributed among the physical uplinks. LBT also takes into account disparities in physical NIC capacity, for example a mixture of 1GbE and 10GbE physical NICs in a NIC team.
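As a quick reference (not part of the original article), these teaming policies correspond to string identifiers in the vSphere API; a small Python mapping, assuming the standard VmwareUplinkPortTeamingPolicy values:

# Teaming policies and the identifiers used in the vSphere API
# (values of VmwareUplinkPortTeamingPolicy.policy).
TEAMING_POLICIES = {
    "loadbalance_srcid":     "Route based on originating virtual port ID",
    "loadbalance_ip":        "Route based on IP hash",
    "loadbalance_srcmac":    "Route based on source MAC hash",
    "loadbalance_loadbased": "Route based on physical NIC load (LBT)",
}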

LBT is not the default teaming policy when creating a Distributed Virtual Port Group, so it has to be configured as the active policy afterwards. LBT uses a wakeup period and a link saturation threshold before it starts moving traffic flows. By default the wakeup period is 30 seconds and the saturation threshold is 75% of link speed.
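The article does not show the configuration step itself, so here is a minimal pyVmomi sketch of switching an existing Distributed Virtual Port Group to LBT; the vCenter address, credentials and port group name ("dvPG-Web") are placeholders, and "loadbalance_loadbased" is the API identifier for the LBT policy:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect to vCenter (placeholder host/credentials; unverified SSL for a lab).
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Locate the Distributed Virtual Port Group by name (placeholder "dvPG-Web").
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
pg = next(p for p in view.view if p.name == "dvPG-Web")

# Build a reconfigure spec whose uplink teaming policy is LBT.
spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
spec.configVersion = pg.config.configVersion
port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy()
teaming.policy = vim.StringPolicy(value="loadbalance_loadbased")
port_cfg.uplinkTeamingPolicy = teaming
spec.defaultPortConfig = port_cfg

pg.ReconfigureDVPortgroup_Task(spec=spec)  # apply the change
Disconnect(si)

The same change can of course also be made in the vSphere Client by editing the port group's teaming and failover settings.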

The performance testing was done with the SPECweb2005 benchmark, which contains three types of workloads: Banking, Ecommerce, and Support. The Support workload, the most I/O-intensive of the three, was used here. With a load of 30,000 SPECweb2005 Support users, the efficacy of the LBT policy was measured in terms of load balancing and CPU cost.



[Figure: bandwidth usage on dvUplink1 and dvUplink2 during the four phases of the benchmark]

A detailed explanation of the bandwidth usage in each phase:

Phase 1: Because all the virtual switch port IDs of the four VMs were hashed to the same dvUplink, only one of the dvUplinks was active. During this phase of the benchmark ramp-up, the total network traffic was below 7.5Gbps. Because the usage on the active dvUplink was lower than the saturation threshold, the second dvUplink remained unused.

Phase 2: The benchmark workload continued to ramp up, and when the total network traffic exceeded 7.5Gbps (above the saturation threshold of 75% of link speed), LBT kicked in and dynamically remapped one of the vNIC ports from the saturated dvUplink1 to the unused dvUplink2, making dvUplink2 active. The usage on both dvUplinks remained below the saturation threshold.

Phase 3: As the benchmark workload ramped up further and the total network traffic exceeded 10Gbps (7.5Gbps on dvUplink1 and 2.5Gbps on dvUplink2), LBT kicked in yet again and dynamically changed the port-to-uplink mapping of one of the three vNIC ports still mapped to the saturated dvUplink1.

Phase 4: As the benchmark reached a steady state, with the total network traffic at a little over 13Gbps, both dvUplinks saw the same usage.
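To make the remapping in these phases concrete, here is a toy Python sketch (my illustration, not VMware's actual implementation): once an uplink's usage crosses the 75% saturation threshold, one vNIC flow is moved to the least-loaded uplink, which is essentially what happens between Phase 1 and Phase 2.

LINK_SPEED_GBPS = 10.0
SATURATION = 0.75 * LINK_SPEED_GBPS   # 7.5 Gbps

def rebalance(mapping, flows, n_uplinks=2):
    """One LBT-style pass: if an uplink is saturated, move one of its
    flows to the least-loaded uplink. mapping: vNIC -> uplink index,
    flows: vNIC -> current throughput in Gbps."""
    load = [0.0] * n_uplinks
    for vnic, up in mapping.items():
        load[up] += flows[vnic]
    for up, gbps in enumerate(load):
        if gbps > SATURATION:
            # Move one flow from the saturated uplink to the least-loaded
            # one (LBT makes at most one such move per wakeup period).
            victim = next(v for v, u in mapping.items() if u == up)
            mapping[victim] = load.index(min(load))
            break
    return mapping

# Phase 1 -> Phase 2: four vNIC ports hashed to dvUplink1 (index 0),
# total traffic just past the 7.5 Gbps threshold.
flows = {"vm1": 2.0, "vm2": 2.0, "vm3": 2.0, "vm4": 1.6}
mapping = {vnic: 0 for vnic in flows}
print(rebalance(mapping, flows))  # one vNIC now maps to dvUplink2 (index 1)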

These results show that LBT can serve as a very effective load balancing policy to optimally use all the available dvUplink capacity while matching the performance of a manually load-balanced configuration.  In combination with VMware Network IO Control (NetIOC), LBT offers a powerful solution that will make your vSphere deployment even more suitable for your I/O-consolidated datacenter.