Three years ago I started this blog.
On this day one year ago, I was counting 80,000 visits.
Today I can count nearly 1 million.
I’ve put a lot of effort into this project so far and will continue to do so: expect great new things in the coming weeks.
Meanwhile, I would ask every reader to submit suggestions and wish lists for a better virtualization.info.
To celebrate, I would like to republish an interview SearchServerVirtualization conducted with me some months ago:
Andrew Kutz: You are one of the most well-known, if not the leading, evangelists of virtualization on the internet today. Your roots, however, are in information technology security. What is your take on the relationship between information technology security and virtualization?
Alessandro Perilli: Being a security professional means, among other things, dealing all the time with a lot of different platforms, multi-tier products, and networking devices.
Just think about testing a new exploit against several kinds of Windows or Linux operating systems. Or about testing a network intrusion detection system’s features: the simplest scenario involves an attacking platform, a target, and a firewall in the middle.
Setting up a physical laboratory can be very expensive, and you need a lot of time to rebuild everything from scratch before beginning to test a new scenario.
When I saw virtualization for the first time, I immediately understood I would be able to create a security-lab-in-a-box without much effort, cutting away reinstallation time.
I also immediately felt virtualization could be used for security purposes like sandboxing and honeypotting. So it soon became a mandatory companion in my security toolbox.
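The lab-in-a-box idea above boils down to snapshot-and-revert instead of reinstall. A minimal sketch of that workflow (the `Guest` class and its methods are hypothetical, purely illustrative of the concept):

```python
# Sketch of a "security-lab-in-a-box": instead of reinstalling machines
# between tests, revert each guest to a clean snapshot.
# Class and method names are hypothetical, not any real hypervisor API.

class Guest:
    def __init__(self, name, state="clean install"):
        self.name = name
        self.state = state
        self._snapshot = state

    def snapshot(self):
        # Capture the current state so it can be restored later.
        self._snapshot = self.state

    def revert(self):
        # Restoring a snapshot takes seconds; a physical rebuild takes hours.
        self.state = self._snapshot

# The IDS test scenario from above: attacker, firewall, target.
lab = [Guest("attacker"), Guest("firewall"), Guest("target")]
for g in lab:
    g.snapshot()                            # baseline before the first run

lab[2].state = "compromised by exploit"     # a test run dirties the target
for g in lab:
    g.revert()                              # clean lab for the next scenario

print([g.state for g in lab])
# → ['clean install', 'clean install', 'clean install']
```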
AK: Your accreditations in security speak for themselves, but what is your level of experience with the current crop of virtualization technologies (VMware, Microsoft, Xen, Parallels, Vanderpool/Silvervale, Pacifica, etc.)?
AP: In the early days of modern virtualization I was involved in projects with VMware and Microsoft technologies as soon as they became viable solutions for corporations.
Then, thanks to virtualization.info, my work expanded to many, if not all, of the products available in this niche.
Today I extensively test, and implement for several customers, the large majority of technologies out there, from platforms to P2V tools to provisioning automation and disaster recovery.
If a new virtualization technology or product comes out, I work with it within one or two months.
AK: These days, almost anyone can start a blog and claim expertise on a wide variety of subjects. Do you have any advice for IT professionals that can help them gauge the worth of an information source when it pertains to virtualization?
AP: Sure: don’t follow the virtualization.info model. Don’t misunderstand me: I’m not saying this to avoid competition.
virtualization.info was born to fill a need of three years ago: aggregating scattered news about an emerging technology to understand trends and what products were out there.
Now that virtualization is starting to be widely adopted, that need has changed, and virtualization.info itself has had to extend its mission accordingly.
Starting a new blog now to do what virtualization.info did three years ago is useless.
Customers are looking for valuable content, not for another ten blogs publishing the same news again and again, just changing the title or the quoted excerpt.
Also, everybody with enough blogging experience knows that news aggregation can be completely automated with existing tools, and there is no expertise at all in that.
At this point in the virtualization industry’s evolution, I feel customers are mainly looking for technical tips, because implementation is still the big issue these days.
Any blog providing such content will be considered a valuable information source.
AK: You founded the False Negatives project to help provide security consulting and training in Italy. Do you have any plans to expand this to include virtualization, and if so are you hiring? 🙂
AP: False Negatives is a project meant only for high-level security consulting, like strategic advisory or architectural design, and there are no plans at the moment to expand the offering to virtualization outsourcing services.
But I can’t say there are no opportunities in this direction: virtualization.info acts as a hub for vendors, system integrators, virtualization professionals, and customers, both in the US and in EMEA.
I’m not hiring, but I accept resumes from virtualization experts for every company department, from engineering to marketing.
You can consider virtualization.info a sort of virtualization headhunter, where the best experts worldwide have a chance to be engaged by the top players in the market.
AK: You have positioned yourself as primarily an aggregator of virtualization news, but you rarely give your personal opinions on the subject. On Tuesday, May 23rd, 2006, Paul Murphy claimed that modern virtualization is being sold as a solution to a problem the industry no longer has. What is your personal take on the current state of virtualization, and where do you see it not only a year from now, but three years from now as well?
AP: I believe it’s quite evident that modern virtualization is still in its infancy.
We still have to solve fundamental problems around implementation and support, and I think it’s natural that we are still concentrating on obvious applications of the technology, like server consolidation, which may not be the best solution for every customer.
I don’t see big changes within one year: some vendors still have to prove their virtualization platforms are fast and reliable enough, others still have to prove their virtualization tools are useful, and others still have to provide product support in virtual environments. This is a slow process that won’t substantially change within a year.
Within three years, more probably five, virtualization solutions will be more evolved and will start to offer experimental datacenter automation.
I imagine scenarios where, for example, virtual machines clone themselves and enable load sharing when performance drops below a certain service-level agreement.
Or virtual machines invoking a snapshot when a network attack is detected, sending the attacker’s hard disk modifications to the security department.
In the medium term I believe virtualization is the path to something bigger than what today’s security vendors improperly call the self-defending network. Something I would rather call the adaptive datacenter.
In this picture, today’s vendors offering so-called virtual lab automation solutions will be key players tomorrow.
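The automation scenarios described above, SLA-driven cloning and attack-triggered snapshotting, can be sketched as a simple control loop. Every name here (the threshold, the function, the action strings) is a hypothetical stub, purely illustrative:

```python
# Toy control loop for the "adaptive datacenter" idea: clone a VM when
# response time breaks the SLA, snapshot it when an attack is detected.
# All names are hypothetical stubs, not any real automation product's API.

SLA_MAX_LATENCY_MS = 200  # assumed service-level threshold

def control_loop(vm_name, latency_ms, attack_detected):
    """Return the actions the automation layer would take for one VM."""
    actions = []
    if latency_ms > SLA_MAX_LATENCY_MS:
        # Performance fell below the agreed service level: clone the VM
        # and put the clone behind the load balancer.
        actions.append("clone-and-load-share")
    if attack_detected:
        # Freeze the evidence: snapshot the guest and ship the disk
        # modifications to the security department.
        actions.append("snapshot-and-report")
    return actions

print(control_loop("web01", latency_ms=350, attack_detected=False))
# → ['clone-and-load-share']
print(control_loop("web02", latency_ms=50, attack_detected=True))
# → ['snapshot-and-report']
```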
AK: I am a fan of open source software, especially of Tim O’Reilly’s idea of software as another commodity. Openness alone will not win Xen the market from VMware’s current dominance, though. The formation of XenSource was a huge step, but what else do you think needs to happen for Xen to become, in the eyes of IT managers everywhere, a viable alternative to VMware?
AP: Today Xen has two problems: first of all, it has to offer Microsoft Windows support. We know this is about to happen this year thanks to hardware assistance from AMD and Intel.
Secondly, it has to provide management tools that let more customers embrace Xen paravirtualization even with limited knowledge of Linux. Here too there are companies, like XenSource itself, Virtual Iron, and more recently Enomaly, which are offering or are about to offer solutions in this direction.
A third critical point is pushing the market to officially support products on Xen paravirtualized infrastructures.
Without wide support from application vendors, there is little chance companies will seriously consider Xen adoption.
AK: On April 3rd, 2006, the Computer Business Review discussed the state of application virtualization.
Just a few days ago on May 19th it was announced that Microsoft is in talks to buy Softricity, one of the leading manufacturers of application virtualization solutions. Application virtualization is quite obviously the new hotness, but in your opinion where does it fit in the bigger picture?
AP: I think application virtualization is a fundamental companion of server virtualization.
In everyday work, end users need to address application compatibility, co-existence, testing, and portability issues. Application virtualization is much better suited to solving these problems than server virtualization, because in some senses it is simpler and faster to use, requires fewer resources, and has a lower impact on performance.
So I believe that, while server virtualization will fill datacenter needs, application virtualization will satisfy requirements on the client side, making the most of the whole infrastructure.
AK: On a purely technical level it seems that AMD’s Pacifica virtualization technology may best Intel’s own VT, if only for the fact that AMD’s CPUs include a memory controller that will be VT aware out of the box, while Intel’s separate memory controller will not be VT ready until 2007. On paper this could mean that the AMD chips will be faster at handling VT. I find this tidbit of information interesting because it shows that as the interest in virtualization grows, so must the hardware support for it in order to meet consumer expectations. To me the next piece of hardware that needs to build support for virtualization is the video card. Until this happens roommate OS installations (the term for side-by-side OS installations on a machine with a hypervisor) will not be able to run graphic-intensive applications at bare-metal speed. Do your sources have any information regarding what ATI and nVidia might be doing, and do you think that this is a logical step or simply a pipe dream?
AP: It’s true that one of the fastest-emerging requests for virtualization is 3D/CAD development support. And there are some rumours, mainly fed by a specific Apple patent filed in 2002 and recently granted.
I’m not a graphics expert, so I can’t say whether modern video adapters already have the hardware needed to accommodate something I would call video partitioning, but we have to note that the market trend is actually going in the opposite direction: solutions like nVidia SLI or ATI CrossFire aim to aggregate rendering power, not to partition it.
I also think that, while I have heard some customers asking for reliable 3D support in virtualization products, market demand is still too low to make it happen today.
AK: I e-mail you out of the blue and say: Alessandro, I want to learn about virtualization and what it can do for me. Where do I start?
What is your response?
AP: When I started approaching virtualization there were neither books nor vendor courses (and still today I strongly believe there is a significant lack of training material).
I learned a huge amount of things by silently following newsgroups for years.
Still today the most precious source of knowledge and real-world case studies is the community.
So my suggestion is: read whatever books you can find about the product you need to learn, but never forget to carefully monitor all the web forums, newsgroups, and blogs out there covering virtualization.
No book is updated enough or complete enough to offer you the same breadth.