CMPnet Asia published an interesting interview with David Wagner, Director of Solution Marketing for Capacity Management and Provisioning at BMC.
In the interview Wagner points out that virtualization presents many challenges to confront, and that not every company is aware of all of them:
…
There are IT organizations that are very aware of the management challenges, and there are those at the other end of the spectrum that think of virtualization as just another platform to manage. I think the ones that think of it that way are doing themselves a disservice, because there are some unique risks associated with virtualized environments that don't exist in the physical environment, or are at least not as significant.

I would classify these risks in two main categories. One is all the risks associated with change. The whole reason there are risks associated with change is that when you make changes you need to know what the current state is, so that if and when problems do occur, you can revert to the point before the change was made, or you can inform the right people so that they can use tools to diagnose the problem based on knowledge of the current configuration. The unique thing about virtualized environments is that the environment's configuration itself is changing over time. So in virtual environments, you have applications that might be running on one physical machine one day and another the next, or on one virtual machine (VM) here and another there, while VMs are brought in and out of service.
This is a whole new paradigm and it creates a whole new set of availability risks and downstream management challenges.
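Wagner's first point, that you can only diagnose or revert a change if you know what the configuration was, gets harder when placement itself is fluid. As a minimal sketch (Python, with hypothetical names and data, not any specific BMC or hypervisor API), one way to keep that knowledge is an append-only log of VM-to-host placements:

```python
from datetime import datetime

# Hypothetical sketch: an append-only trail of VM-to-host placements, so the
# configuration at any past moment can be reconstructed for diagnosis.
placement_log = []  # (timestamp, vm, host) tuples, in chronological order

def record_placement(vm, host):
    """Record that `vm` is now running on `host`."""
    placement_log.append((datetime.now(), vm, host))

def host_at(vm, when):
    """Return the host `vm` was on at time `when`, or None if unknown."""
    host = None
    for ts, name, h in placement_log:
        if name == vm and ts <= when:
            host = h  # keep the latest placement at or before `when`
    return host
```

With a trail like this, the "revert to the point before the change" step Wagner describes has a concrete anchor, even after the VM has moved hosts several times.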
The other major bucket of virtualization challenges is one that simply does not exist in isolated physical environments: capacity risk. If you previously had two different applications running on two different physical servers, you could be pretty well certain they weren't going to cause problems for each other from a performance standpoint, because each had its own resources. If one application required 30 percent of CPU at 9:30 in the morning to meet response time guarantees, it could get it, because it had its own dedicated physical box. And if the other one needed 40 percent at 9:30 in the morning, that was fine. But if you combine them on a shared hardware platform in two separate virtual machines, and they both need access to the same physical resource at the same time, then by definition one of them is going to have to wait. That is a new risk that didn't exist previously.
So previously, the capacity risk of industry-standard architectures was really a cost issue: you just threw more hardware at it and knew the risk was solved. But throwing more hardware at it doesn't solve the problem here, because now you are making workloads share resources that they didn't use to share, so you need to plan for that…
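To put numbers on that capacity risk, here is a small illustrative sketch; the utilization samples and the headroom figure are assumptions, loosely echoing the 30 and 40 percent peaks in the interview:

```python
# Hypothetical per-interval CPU demand (percent) for two applications
# consolidated as two VMs on one physical server. All figures are assumed.
app_a = {"09:00": 10, "09:30": 30, "10:00": 15}
app_b = {"09:00": 20, "09:30": 40, "10:00": 25}

# Assumed CPU headroom available to these two VMs on the shared host,
# after hypervisor overhead and other tenants are accounted for.
HOST_HEADROOM = 60

for t in sorted(app_a):
    combined = app_a[t] + app_b[t]
    verdict = "ok" if combined <= HOST_HEADROOM else "contention: one VM waits"
    print(f"{t}  A={app_a[t]}%  B={app_b[t]}%  combined={combined}%  {verdict}")
```

Each application's peak is fine in isolation; it is their coincidence at 9:30 that breaks the shared box, which is exactly why capacity planning for consolidation has to look at combined time-of-day profiles rather than per-server averages.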
Read the whole interview at source.