Paper: Understanding Memory Resource Management in VMware ESX 4.1

Last week VMware released a massive “minor” update for its virtual infrastructure platform.
vSphere 4.1 introduces a number of remarkable new features, including one called Memory Compression:

The idea of memory compression is very straightforward: if the swapped-out pages can be compressed and stored in a compression cache located in main memory, the next access to the page only causes a page decompression, which can be an order of magnitude faster than a disk access. With memory compression, only a few incompressible pages need to be swapped out if the compression cache is not full. This means the number of future synchronous swap-in operations will be reduced. Hence, it may improve application performance significantly when the host is under heavy memory pressure. In ESX 4.1, only the swap candidate pages will be compressed. This means ESX will not proactively compress guest pages when host swapping is not necessary. In other words, memory compression does not affect workload performance when host memory is undercommitted.

[Figure: ESX 4.1 Memory Compression]
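To make the mechanism concrete, here is a minimal sketch of the swap-out and page-fault paths described above. It assumes an illustrative fixed-size compression cache, zlib as a stand-in for ESX’s actual compression algorithm, and a hypothetical 2 KB threshold for keeping a compressed 4 KB page; names such as CompressionCache, try_store and fault_in are invented for illustration and do not correspond to any ESX interface.

```python
import zlib

PAGE_SIZE = 4096             # guest page size in bytes (assumed)
THRESHOLD = PAGE_SIZE // 2   # keep a page only if it compresses to <= 2 KB (assumed)


class CompressionCache:
    """Illustrative per-host compression cache for swap candidate pages."""

    def __init__(self, capacity_pages):
        self.capacity = capacity_pages
        self.cache = {}          # page number -> compressed bytes

    def try_store(self, ppn, page_bytes):
        """Return True if the page was kept in the cache instead of being swapped."""
        compressed = zlib.compress(page_bytes)
        if len(compressed) > THRESHOLD or len(self.cache) >= self.capacity:
            return False         # incompressible page or full cache -> swap to disk
        self.cache[ppn] = compressed
        return True

    def fault_in(self, ppn):
        """On a page fault, decompress from the cache (fast) if the page is there."""
        compressed = self.cache.pop(ppn, None)
        if compressed is None:
            return None          # not cached -> synchronous swap-in from disk
        return zlib.decompress(compressed)


def swap_out(cache, ppn, page_bytes, swap_file):
    # Only swap candidate pages reach this path; pages are never compressed
    # proactively while host memory is undercommitted.
    if not cache.try_store(ppn, page_bytes):
        swap_file[ppn] = page_bytes   # fall back to host-level swapping
```

The design point the sketch tries to mirror is that compression happens only on the swap-out path, so guest pages are untouched while host memory is undercommitted, and any page that does not compress well (or no longer fits in the cache) still falls back to host-level swapping.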

Accordingly, the company updated what is probably the most popular whitepaper ever released in the history of virtualization (thanks to years of debate around memory over-commitment): Understanding Memory Resource Management in VMware ESX 4.1.

Considering the effort Microsoft has recently spent educating the market about hypervisor memory management in order to introduce its new Dynamic Memory feature for Hyper-V, customers may want to compare how the two companies see the problem.