Today, most public cloud infrastructures are black boxes. Cloud providers are not keen to disclose much about their architecture internals, their internal and third-party software layers, or their security defenses.
Customers have to blindly trust the provider, as there's no way yet to plug in monitoring agents or run on-demand assessments without major pain (assuming a provider would permit them at all).
So the idea of measuring the performance of these public clouds is rather interesting. CloudHarmony, a new project focused on benchmarking and taxonomy of public clouds, is providing some insight through early reports that are worth a look.
Benchmarking is a challenging discipline. In this scenario it seems almost impossible, as cloud platforms are distributed across datacenters worldwide, and the abstract nature of the cloud implies that the underlying hypervisor, or even the underlying physical hardware, may change without notice at any moment.
Still, CloudHarmony decided to test over 150 public clouds, measuring everything from raw CPU power to application workload performance, against the almost 100 benchmarks that are part of its suite.
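To give a sense of what a raw CPU test looks like in practice, here is a minimal sketch of a timed compute microbenchmark. This is purely illustrative and has nothing to do with CloudHarmony's actual suite; the workload, iteration count, and scoring are arbitrary choices:

```python
import time

def cpu_benchmark(iterations: int = 5_000_000) -> float:
    """Time a fixed floating-point workload and return a throughput score.

    Illustrative only: CloudHarmony's real suite is far larger and
    measures many more dimensions than this single loop.
    """
    start = time.perf_counter()
    acc = 0.0
    for i in range(1, iterations):
        acc += i ** 0.5  # simple floating-point work to keep the CPU busy
    elapsed = time.perf_counter() - start
    return iterations / elapsed  # operations per second (higher is better)

if __name__ == "__main__":
    # Take the best of a few passes to reduce scheduling noise, which
    # matters on shared, virtualized cloud hardware.
    best = max(cpu_benchmark() for _ in range(3))
    print(f"CPU score: {best:,.0f} ops/sec")
```

Running such a loop several times, and at different times of day, also hints at how much "noisy neighbor" variance a given cloud exhibits.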
The first published test covers raw CPU power and includes the following clouds:
- Amazon EC2
- The RackSpace Cloud
- Storm on Demand
- GoGrid
- Voxel
- NewServers
- Linode
- SoftLayer
- Terremark vCloud Express
- OpSource Cloud
- Speedyrails
- Zerigo
- ReliaCloud
- IBM Development & Test Cloud
- BlueLock
- Cloud Central
- RimuHosting
- ElasticHosts
- Flexiscale 2.0
Their brief conclusion is interesting:
One take-away point we observed is that heterogeneous hardware environments (where host hardware is configured with faster CPUs for larger sized instances) appear to be more conducive to true cloud CPU scaling. Of the 20 server clouds we benchmarked, only 4 appear to be providing such an environment: EC2, Storm on Demand, GoGrid and NewServers.
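Incidentally, such heterogeneous environments are often visible from inside the guest: on a Linux instance, the CPU model the hypervisor exposes can be read from /proc/cpuinfo, and comparing it across instance sizes on the same provider hints at whether larger instances land on faster hosts. A minimal sketch, assuming a Linux guest (some hypervisors mask or rewrite this field, so treat it as a hint, not proof):

```python
import re

def host_cpu_model(cpuinfo_path: str = "/proc/cpuinfo") -> str:
    """Return the CPU model string exposed to this guest.

    Comparing this value across instance sizes on one provider can
    suggest whether larger instances run on faster host hardware.
    """
    with open(cpuinfo_path) as f:
        for line in f:
            match = re.match(r"model name\s*:\s*(.+)", line)
            if match:
                return match.group(1)
    return "unknown"

if __name__ == "__main__":
    print(host_cpu_model())
```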