Fedora 7 includes KVM and Xen 3.1

The Fedora Linux distribution is slowly integrating virtualization as a basic OS capability.

Red Hat first integrated the Xen hypervisor in Fedora Core 5, followed by a basic GUI for it, virt-manager, in Fedora Core 6. The new version 7 includes Linux kernel 2.6.21 and therefore offers a second virtualization platform out of the box: KVM.

The move to kernel 2.6.21 also means Fedora 7 sports the paravirt-ops framework and the VMware VMI interface, so the new VMware Workstation 6.0 should be able to run it as a para-virtualized guest (with a major performance boost).

Last but not least, Fedora 7 updates the embedded Xen package to the new version 3.1 (formerly 3.0.5).

Read the full release notes about virtualization packages here or download the distribution here.

Parallels launches Desktop 3.0 Release Candidate

Sharply accelerating its beta program schedule, Parallels skips public betas for the new Desktop 3.0 and directly launches the first Release Candidate.

After introducing the display mode called Coherence in previous releases, Parallels reaches another level of Mac-Windows desktop integration with the new SmartSelect, capable of assigning a default application (on the host or in guests) to any file on both systems. In this way users don’t even need to drag & drop files from one system to another, completely forgetting they are running a virtual machine.

Parallels Desktop 3.0 also introduces snapshots, already available in competing products like VMware Workstation and Microsoft Virtual PC, and offline virtual disk access through the new Parallels Explorer utility.

Parallels also claims the capability to run 3D graphics inside virtual machines through the existing hardware cards on the host system, even if it doesn’t provide further details.

Enroll for this release candidate here.

It’s evident Parallels will launch the Desktop 3.0 RTM at the Apple WWDC conference later this month, consolidating its dominance of the Mac virtualization market well before VMware can become a serious threat with its upcoming Fusion.

WinImage 8.01 beta introduces VMware VMDK support

The new beta of the popular WinImage disk utility introduces support for the VMware virtual disk format: VMDK.

This way you’ll be able to copy files from both FAT and NTFS partitions inside the VMDK file to the host hard disk, while FAT partitions will also be able to receive files coming from the host hard drive.

Gilles Vollant introduced similar support for the Microsoft virtual disk format, VHD, at the end of 2005 with WinImage 8.0.

Enroll for the beta here.

VMware supports VDI on ClearCube blades

Quoting from the ClearCube official announcement:

ClearCube Technology, the market leader in PC blade centralized computing solutions, today announced an OEM agreement with virtualization software leader VMware to deliver a new PC blade virtualization solution for VMware Virtual Desktop Infrastructure (VDI). ClearCube is the first to certify its PC blade solution for VMware VDI. With its Sentral v5.5 management software, ClearCube now provides VMware VDI solutions that enable organizations to more efficiently manage physical and virtualized computing resources…

The VDI market seems in continuous expansion, but this trend may change as soon as VMware formally announces the Propero acquisition and accordingly launches its own branded VDI desktop broker.

Microsoft launches Virtualization Calculator 2.0

Calculating how many Microsoft licenses you need in your virtual infrastructure is not an easy task. The Windows edition (Standard, Enterprise or Datacenter) can severely impact the final price and the opportunity to grow, and back-end servers like SQL Server are part of the equation too.

To simplify this hard task, Microsoft released a license calculator in January 2007, providing a simple way to compare costs across three scenarios implementing different Windows editions.

After just five months Microsoft is ready to launch the second version of this tool, still available for free as an online application, with a completely revamped approach.

Now users can literally build up virtual machines, filling them with the Windows editions and Microsoft back-end servers they plan to deploy in production, and obtain a calculation of the complete licensing costs of the infrastructure.

This new version is also able to calculate costs when you are using third-party virtualization platforms, including hosted solutions (like SWsoft Virtuozzo) and bare-metal ones (like VMware ESX Server).

Given this feature, the Microsoft Virtualization Calculator becomes a mandatory tool in any virtualization project, and it’s highly recommended.

Access it here.

Update: Microsoft also published a 3-minute demo of the new calculator which is worth a check:

Speech: Application Virtualization Seminar

For June 13 (in Milan, Italy) and 14 (in Rome, Italy), LANDesk arranged a couple of seminars to introduce its new application virtualization offering, coming from the OEM partnership with Thinstall.

I’ll speak at the beginning of both events, talking about the current situation of the virtualization market and future trends, with a special focus on application virtualization.

If you are an Italian virtualization.info reader I hope to see you there. Register for the event here.

(to see other events where I’ll be speaking, check my speaking schedule)

Guide to virtualization adoption – Part 1

Embracing virtualization is much easier said than done, especially when you decide to migrate your existing production environment rather than just create a virtual lab from scratch for small testing and development scenarios.

What’s behind an enterprise virtualization project, and what issues will IT managers have to face during it?

In this long series we’ll identify the phases a virtualization adoption is made of, from candidates’ recognition to capacity planning, ROI calculation, physical to virtual (P2V) migration and enterprise management.

While the virtualization market seems crowded today, we’ll discover that quite the opposite is true: it’s largely immature and several areas still have to be covered in a decent way.

Candidates’ recognition

The very first phase of a large virtualization project involves recognizing which physical servers should be virtualized.

This operation can be much less simple than expected, and it is possibly the most time-consuming one when our company lacks an efficient enterprise management policy.

So one of the issues here is discovering and inventorying the whole datacenter.

A second point, no less critical, is performing a complete performance measurement of the whole server population, storing precious data for the capacity planning phase.

This step is often overlooked because IT management usually has a rough idea of which servers are less resource-demanding and immediately believes that’s enough to go ahead.

Sometimes a benchmarking analysis reveals unexpected bottlenecks, caused by real problems or simply by a bad evaluation of the servers’ workloads.

The best suggestion in the first case is to immediately pause the project and solve the bottleneck: moving a badly performing server inside a virtual environment can seriously impact the whole infrastructure, with huge troubleshooting difficulties afterwards.

A precise calculation of performance averages and peaks is also fundamental for the next phase of our virtualization adoption, capacity planning, when it will be necessary to consolidate complementary roles on the same host machine.
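
To give a concrete idea of what "averages and peaks" means in practice, here is a minimal sketch that reduces sampled CPU utilization to an average and a 95th-percentile peak; the sampling interval and the sample values are assumptions for illustration, not the output of any specific tool.

```python
import statistics

# Hypothetical CPU utilization samples (%), e.g. collected every 5 minutes.
samples = [12, 8, 15, 22, 75, 80, 30, 18, 11, 9, 64, 70, 25, 14, 10, 7]

average = statistics.mean(samples)
peak_p95 = statistics.quantiles(samples, n=20)[-1]   # 95th percentile

print(f"average: {average:.1f}%  95th-percentile peak: {peak_p95:.1f}%")
# A low average with occasional high peaks often indicates a good consolidation
# candidate, provided complementary workloads can absorb those peaks.
```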

After collecting the measurements we have to identify good candidates for virtualization among the inventoried population.

Contrary to what some customers may think (or some consultants may pretend), not every physical server can be virtualized today.

Three factors are critical in deciding which service can go into a virtual machine: virtualization overhead, dependency on highly specific hardware, and product support.

Virtualization overhead is something future improvements of virtualization will mitigate more and more, but it’s something we still have to seriously consider.

I/O workload is a critical stop issue in virtualization adoption, and servers relying heavily on data exchange cannot be migrated so easily.

Databases and mail servers are particularly hard to move into virtual infrastructures.

In both cases the overhead added by virtualization on the I/O stream significantly impacts performance, sometimes to the point that migration is discouraged.

But there isn’t a general rule for these or other server types: it really depends on the workload.

There are case studies where customers achieved virtualization without any particular effort, and others where the migration was successful only when virtual machines received double the expected resources.

The second stop issue is related to special hardware on which our production servers depend.

At the moment virtualization products are able to virtualize the standard range of ports, including old serial and parallel ones, but vendors are still not able to virtualize new hardware components on demand.

An obvious example is provided by modern and powerful video adapters, needed for game development or CAD/CAM applications, which are the most requested unsupported hardware today.

The third stop issue in confirming a server as a virtualization candidate is product support.

The market has seriously considered modern server virtualization only in the last two years, and vendors are very slow to state that they support their products inside virtual machines.

It’s easy to understand why: the number of factors in a virtual infrastructure impacting performance is so big that application behaviour can be severely influenced by something the vendor’s support staff cannot control, or may not even know is present.

Microsoft itself, while offering a virtualization solution, has been reluctant to support its products inside its own Virtual Server, and even today a few Windows Server technologies are still unsupported.

So, even when your server seems a good candidate for virtualization, the final word depends on the vendor providing the applications running inside it.

While every virtualization provider has its own, usually undisclosed, list of supporting vendors, it’s always better to directly query your application’s vendor for confirmation.

Going virtual despite missing support is a high risk to take and isn’t recommended even after a long period of testing.

From a product point of view the market still offers few alternatives in this area, even if the candidates’ recognition problem can be approached by four kinds of specialists: hardware vendors, operating system vendors, application vendors and neutral virtualization specialists.

Hardware vendors providing big iron for virtualization and offering outsourcing services, like IBM or HP, usually have internal technologies for recognizing virtualization candidates.

In rare cases these tools are even available for customer use, like the IBM Consolidation Discovery and Analysis Tool (CDAT).

Operating system vendors are not currently providing tools for virtualization, but the trend is going to change very soon: all of them, from Microsoft to Sun, through Novell and Red Hat, are going to implement hypervisors in their platforms and will have to offer tools to accelerate virtualization adoption.

It’s no coincidence that Microsoft announced at the WinHEC 2006 conference in May a new product called Virtual Machine Manager, addressing the needs of this phase and more.

Application vendors will hardly ever offer specific tools for virtualization, even if they have more knowledge than anybody else to achieve the task.

The best move to expect from them would be an application profiling tool, featuring a database of average values, to be used in performance comparisons between physical and virtual testing environments.

A concrete solution for customers today comes from the fourth category, the neutral virtualization specialists.

Among them the best known is probably PlateSpin with its PowerRecon, which offers a complete and flexible solution for inventorying and benchmarking the datacenter’s physical machines, eventually passing data to physical to virtual migration tools, which we’ll cover in the fourth phase of our virtualization adoption.

In the next part we’ll address the delicate phase of capacity planning, on which the success or failure of the whole project depends.

This article originally appeared on SearchServerVirtualization.

Guide to virtualization adoption – Part 2

In the previous article of this series we discussed the very first phase of every virtualization adoption project: recognition of the best physical servers for migration inside a virtual infrastructure.

We saw how not every service can be virtualized and how some of them require special attention. We also considered some available solutions to help with the task, discovering the virtualization market is still very poor in this aspect.

Once we have decided which machines will be virtualized, we must move to the next, most critical phase of the entire project.

Capacity planning

What is capacity planning and why is it the most delicate aspect of every project?

In this phase we evaluate the distribution of virtualized physical machines across physical hosts, sizing their resources, including processor type, memory size, mass storage size and type, redundant motherboard architecture and so on.

These machines have to host the planned virtual machines without suffering, have to survive more or less severe faults depending on the project’s requirements, and have to scale up easily.

In a project of medium complexity the chosen hardware doesn’t simply include physical servers, but also one or more storage devices, internetworking devices, network cards and cabling.

Every piece has to be chosen carefully and not only for performance needs: our decision will impact the next phase, when the return on investment (ROI) will be calculated to understand whether the project is worthwhile or not.

One critical value in hardware sizing is the virtual machines per core (VM/core) ratio.

Every virtualization platform has an expected average performance level, influenced by several factors independent of the chosen hardware, from the virtualization engine’s optimization to the load the virtual machines will have to handle. The number of virtual machines a single core (or a single CPU, in the case of non-multicore processors) can support without problems depends on these factors.

So a VMware ESX Server 3.0 and a Microsoft Virtual Server 2005 R2 have a completely different VM/core ratio on the same host.

Why is this value so vague? The number of these factors is so large that it’s quite hard to state a definitive ratio for a single product, and virtualization vendors can barely provide an indication.

For example VMware states its ESX Server is able to handle up to 8 virtual machines per core, while its Server (formerly GSX Server) can handle up to 4 virtual machines.

But these numbers can be much higher or much lower depending on things like the hosted application technology (legacy accounting software written in COBOL is not what we’d call efficient), I/O loads, etcetera.

Even if the value is so uncertain, it’s still the most critical point in a virtualization project and it’s mandatory for a product comparison. Such a comparison is sometimes impossible: at the time of writing Microsoft has yet to provide a suggested VM/core ratio for its Virtual Server 2005 R2.
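
To see how the VM/core ratio translates into hardware sizing, here is a minimal first-order sketch; the VM count and host configuration are made-up examples, the ratio of 8 VMs per core is simply the ESX Server figure quoted above, and the calculation deliberately ignores memory, I/O and failover headroom.

```python
import math

def hosts_needed(total_vms: int, vms_per_core: int, cores_per_host: int) -> int:
    """First-order estimate of host servers needed for a given VM/core ratio."""
    vm_capacity_per_host = vms_per_core * cores_per_host
    return math.ceil(total_vms / vm_capacity_per_host)

# Hypothetical example: 120 candidate servers, 8 VMs per core,
# dual-socket quad-core hosts (8 cores each).
print(hosts_needed(total_vms=120, vms_per_core=8, cores_per_host=8))  # -> 2
```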

Going beyond the mere VM/core calculation, it’s mandatory to remember we are not going to virtualize a physical server but one or more server roles, so sizing a virtual machine in the same way as the actual physical server may not be the best solution.

Given more than one host machine, a typical erroneous approach is to consolidate virtual machines following the same logic as their physical location: all production machines are virtualized on the first host, all development machines on the second one, and so on.

This mainly depends on two factors: a natural desire to maintain what is considered a logical order, and a typical cultural bias where physical location strictly relates to the contained services (a way of thinking we’ll progressively lose as we evolve towards grid computing).

This approach usually leads to bad consolidation ratios: architects trying to cram several production virtual machines into the same host will find that machine overloaded by the heavy weight of production services, while another host, serving underutilized virtual machines, will waste most of its computing time.

So it’s pretty evident the big challenge of capacity planning is finding complementary services to balance the allocation of virtual machines.

This operation has to consider several service factors, including the expected workloads during all hours of the day, the kind of physical resources requested, the tendency to show very dynamic fluctuations, etc.

Obviously these factors can change over time, scaling up or completely mutating, so the capacity planner must also try to forecast workload growth, while during enterprise management virtual administrators will have to rearrange virtual machines as the environment changes.
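
To make the idea of complementary placement concrete, here is a deliberately naive greedy sketch that spreads the heaviest workloads across the least loaded hosts instead of grouping them by role; the workload names and load scores are invented, and real capacity planning tools weigh many more factors (time-of-day profiles, memory, I/O, peaks).

```python
# Toy placement: assign each workload (heaviest first) to the least loaded host,
# so heavy and light workloads end up mixed rather than grouped by role.
workloads = {                      # invented load scores, for illustration only
    "erp-db": 70, "mail": 55, "build-server": 40,
    "intranet-web": 20, "monitoring": 10, "dns": 5,
}
hosts = {"host1": 0, "host2": 0}   # running load score per host
placement = {}

for name, load in sorted(workloads.items(), key=lambda kv: kv[1], reverse=True):
    target = min(hosts, key=hosts.get)   # least loaded host so far
    placement[name] = target
    hosts[target] += load

print(placement)   # heavy workloads split across hosts
print(hosts)       # roughly balanced totals
```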

If this seems complex enough, the biggest value is still missing from the equation: the acceptable performance level of every service.

This is usually the most overlooked aspect of capacity planning, taking for granted that virtualized applications will always perform at their best. In reality, even in the best arrangement, every piece of software needs a certain amount of physical resources to perform acceptably.

Capacity planning has to consider complementary workload scenarios and must contemplate alternative arrangements to always guarantee the expected performance for every service.

The task appears overwhelming, but luckily part of it will be simplified in the near future, when all virtual infrastructures will be able to seamlessly and dynamically move virtual machines across different hosts, depending on their workloads.
VMware just launched this feature, called Distributed Resource Scheduler (DRS), within its new Virtual Infrastructure 3 (aka ESX Server 3.0 and VirtualCenter 2.0).

Microsoft expects to offer the same capability with its upcoming Virtual Machine Manager tool.

This mass of factors can be partially managed today with the help of a few products.

The first and possibly most complete one is provided by the current market leader, VMware, which offers its Capacity Planner tool as a consulting service, available at a fixed price of $22,000 for up to 200 servers.

The biggest benefit of this tool is the huge database where average performance values of industry applications are stored.

Based on these values, VMware Capacity Planner is not only able to suggest the best possible placement, but also able to recognize troublesome applications, at both the physical and the virtual level.

VMware is not the only one offering this kind of tool: for example HP, with its Solution Sizer, and Sun, with its Consolidation Tool, offer their customers notable aid in this aspect of the project.

In both cases the products are free but are tuned and locked to size only specific servers.

Once again PlateSpin PowerRecon, already mentioned in the candidates’ recognition phase, seems to be the most cost-effective solution for workload placement.

Thanks to its new Consolidation Planning Module it’s able to offer the same capabilities as VMware Capacity Planner, minus the industry-average database.

Its biggest feature is the integration with the company’s physical to virtual (P2V) product, which we’ll see in the fourth phase of our virtualization adoption, offering a logical and integrated flow of actions during the initial steps of the project.

In the next part we’ll see how the critical work of capacity planning translates into an economic value, justifying (or not) the entire project.

This article originally appeared on SearchServerVirtualization.

Guide to virtualization adoption – Part 3

In the previous article of this series we discussed the delicate phase of capacity planning, where virtualization architects have to carefully evaluate which kind of physical machines will be needed to host all the expected virtual machines, how this evaluation depends on specific arrangement strategies, and how heavily these strategies depend on the hosted applications’ workloads.

The resulting expenditure report details part of the investment a company has to face in its virtualization adoption and will be compared against a long list of cost-saving benefits this technology provides.

ROI calculation

This is probably the most critical part of the whole project, and it’s the third one, not the first. This means a potential customer arrives at this point having already invested a significant amount of time and money just to understand whether the adoption is really affordable or not.

The return on investment (ROI) calculation is done by applying simple math to a complex set of costs our company could mitigate or eliminate once it adopts virtualization.

The operation is not so trivial because some direct costs can be underestimated in the equation, and indirect costs may not even be recognized.

Among direct costs virtualization can reduce we can include:

  • physical space cost (leased or owned) for physical servers
  • energy cost to power physical servers
  • air conditioning cost to cool the server room
  • hardware cost of physical servers
  • hardware cost of networking devices (including expensive gear like switches and fibre channel host bus adapters)
  • software cost for operating system licenses
  • annual support contracts costs for purchased hardware and software
  • hardware parts cost for expected failures
  • downtime cost for expected hardware failures
  • man-hours of maintenance cost for every physical server and networking device

Some entries deserve a deep analysis which has to be reconsidered from project to project, given the incredible speed at which the virtualization market is changing.

The software cost for operating system licenses, for example, was not a relevant entry until this year.

Using a virtual machine or a physical server didn’t change anything for customers deploying Windows, but Microsoft slightly changed its licensing agreement this year to facilitate virtualization adoption.

At the moment owners of Windows Server 2003 Enterprise Edition are allowed to run up to four more Windows instances, any edition, in virtual machines.

The imminent Windows Vista will have a license permitting a second, virtual instance to run in a virtual machine.

And the license of the upcoming Windows Server codename Longhorn Datacenter Edition will grant unlimited virtual instances of the operating system.

In the near future it’s highly probable Microsoft will further revise this strategy, allowing unlimited virtual instances of the host operating system as long as they stay on the same physical hardware.

In any case, all the listed direct costs strictly depend on two factors we examined in the previous phase: the VM/core ratio offered by the chosen virtualization platform and, most of all, the virtual machine arrangement we decided on during capacity planning.

The better we worked in the previous step, the more realistic a picture we’ll have.

But whatever arrangement or virtualization platform we decide on, some important indirect costs should be calculated as well:

  • time cost to deploy a new server and its applications
  • time cost to apply the required configuration
  • time cost to migrate to new physical hardware for severe unplanned failure
  • time cost to migrate to new physical hardware for equipment obsolescence

While these factors cannot be easily quantified, they drastically change a return on investment calculation when dealing with virtual infrastructures.

Speed and effectiveness in deploying new servers with pre-configured environments, something usually called capability to perform, is unbeatable in virtual datacenters.

In the same fashion, speed and effectiveness in moving mission-critical applications from failing hardware to safe hardware, the so-called capability to recover, is a fundamental feature of virtualization, and traditional recovery solutions can hardly compete with it.

It’s evident ROI calculation is a hard operation, and different interpretations could lead to very different results, potentially leading to the failure of our project.

To help potential customers, some vendors offer ROI calculation tools with partially pre-filled fields, providing for example VM/core ratio values or product license prices.

This is the case for SWsoft, which offers an online form for its OS partitioning product, Virtuozzo.

In other cases skilled virtualization professionals turn their experience into useful tools, available to the whole community.

This is the case for Ron Oglesby, who created an Excel spreadsheet called VMOglator, useful for evaluating virtual machine costs depending on physical hardware characteristics.

In most cases these self-service calculators aren’t really effective, both because they aren’t able to track all the concurring factors in the cost-saving analysis, and because customers are often unable, or don’t have time, to evaluate for themselves the real value of some expenditures.

The best option would be to require a traditional ROI analysis, possibly from a neutral third-party firm, but since it is a very time-consuming and expensive service, few small and medium companies can afford to commission it.

At the end of this phase we must have a list of costs for the current infrastructure, the so-called AS IS environment, a list of costs for the virtual infrastructure, and a list of costs required for the adoption, including:

  • the man hours cost to recognize physical servers to virtualize
  • the man hours cost to complete the capacity planning
  • the man hours cost to perform the ROI analysis
  • the man hours cost to perform the migration of actual infrastructure
  • the man hours and learning material cost to acquire the know-how required for migrating to the new infrastructure
  • the software license cost of all tools needed in all adoption phases

These values are the final tools a company should use to understand whether virtualization is approachable or not and in how many months it will pay back, as sketched below.
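
As a minimal illustration of the math involved, the sketch below compares the yearly cost of the AS IS environment against the projected virtual one and derives the payback period; all the figures are invented placeholders, not real quotes.

```python
# Toy payback calculation: every cost figure below is an invented placeholder.
as_is_yearly_cost = 480_000     # current physical infrastructure, per year
virtual_yearly_cost = 210_000   # projected virtual infrastructure, per year
adoption_cost = 350_000         # one-off: planning, ROI analysis, licenses,
                                # migration and training

yearly_saving = as_is_yearly_cost - virtual_yearly_cost
payback_months = adoption_cost / (yearly_saving / 12)

print(f"Yearly saving: ${yearly_saving:,}")
print(f"Payback period: {payback_months:.1f} months")   # ~15.6 months
```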

In the next part we’ll enter the operational phases of the project, beginning with the very first step: transforming existing physical machines into virtual machines through a process known as physical to virtual (P2V) migration.

This article originally appeared on SearchServerVirtualization.

Guide to virtualization adoption – Part 4

In the previous article of this series we performed the last planning phase of our virtualization adoption project, the ROI calculation, discovering the real costs behind the project.

Now we finally enter the first operational phase: the physical to virtual (P2V) migration of the physical machines we recognized as good candidates for virtualization.

Candidates’ recognition was discussed in the first part of this series, as it is responsible for providing the mandatory data for the subsequent capacity planning phase.

As we’ll see, moving the contents of a physical server inside a virtual machine is often a complex technical operation which requires time and costs money.

Physical to Virtual migration

Moving the whole content of a physical computer from its disks to a virtual machine is much more than simply copying files and folders from the starting location to a new position.

The virtual hardware presented by a virtual machine is always different from the hardware available on the original server, and immediately after migration, at the first reboot, the operating system kernel recognizes the new devices and looks for drivers to handle them.

In the effort to adjust itself to the new equipment, a kernel that doesn’t find appropriate drivers stops working completely, never reaching operational status.

Depending on the operating system, this adjustment implies more or less complexity.

Microsoft Windows is the most problematic operating system to move, and it requires a helper to seamlessly fit the new hardware.

A P2V migration tool passes the needed drivers to the kernel and initializes them at the right moment and in the right order, so that it can boot correctly and not show the frightening Blue Screen of Death (BSOD).

In Linux the adjustment is much easier and real experts would be able to perform it without commercial tools, but it’s very annoying and time-consuming.

A manual operation also wouldn’t be able to automate the remaining part of the process, which involves interaction with the target virtualization platform.

In fact P2V migration tools not only have the responsibility of moving data from the source computer to the target virtual machine and accommodating the migration, but must also be able to create a new virtual machine with the appropriate virtual hardware, power it on and install the vendor’s required performance enhancement tools.

The market leaders in this segment of the virtualization industry are PlateSpin and VMware, but other notable competitors exist, like Leostream and HelperApps.

The price of P2V migration tools is often considered very high by newcomers who relate it only to the mere operation to be performed, without considering, for example, all the costs related to unexpected errors in configurations where the physical server has been rebuilt from scratch in a virtual machine.

While in many cases this perception is wrong, PlateSpin is the first company trying a new way to knock down potential customers’ diffidence: since September 2006 its P2V migration tool has been available for rent, offering a reasonable price per conversion even for smaller projects.

In any case an alternative approach exists: today several virtualization experts and enthusiasts offer free P2V migration tools, but while they are good for converting a few servers, at the moment none of them is really able to scale for use in large conversion projects.

Customers usually discover very quickly that the amount of time spent fixing these tools when a technical issue appears is much more expensive than the cost per conversion of commercial solutions.

A valuable add-on of modern P2V migration tools is the capability to interact with candidates’ recognition and capacity planning products.

Integration between these tools speeds up virtual migrations, since every newly converted virtual machine can be immediately moved onto the least loaded host server.

At the time of writing PlateSpin is the only one offering such a service, allowing its P2V solution, PowerConvert, to integrate with its PowerRecon discovery tool.

Desirable plus

A big plus when evaluating P2V migration tools is their capability to perform the opposite process: a virtual to physical (V2P) migration.

While initially the prevalent need is to consolidate machines, once virtualization is embraced it’s easy to turn to it to solve several different problems.

The huge administrative effort of deploying new workstations for employees, for example, could be greatly reduced if the IT manager were able to configure an ideal virtual machine and then inject it into brand new hardware.

Today this is partially mitigated with the help of disk cloning utilities like Symantec Save & Restore (formerly Ghost), but these solutions present a couple of severe limitations: they depend on the hardware configuration, so the same image cannot be restored on a different server, and any modification to the desired configuration implies saving a new master image.

The first limitation is being addressed these days by disk imaging companies like Acronis, which have enhanced traditional cloning tools to restore images on different hardware, also supporting virtual machines, but in general a P2V migration tool also capable of V2P operations could be the best choice.

Another feature we should look at when evaluating a P2V migration product is its capability to perform the so-called virtual to virtual (V2V) migration.

A V2V migration moves OSes and their data between virtual machines of different vendors, taking care of differences at the host level and dissimilar virtual hardware.

As soon as multiple hypervisors find their place in datacenters, bigger companies will have to address multi-vendor management and need a simple way to move applications from one product to another.

Once again PlateSpin is the leader here, already supporting V2V migrations back and forth between VMware, Microsoft and Virtual Iron virtualization products.

Avoiding downtime

Even the most efficient P2V migration tool has a big limit: it makes the physical machine unusable during the whole process.

The migration time is directly proportional to the size of the local storage and inversely proportional to the network speed: on average, a physical machine with a 72GB disk can take up to 30 minutes to move over a standard Ethernet link.
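
As a back-of-the-envelope check, the sketch below estimates the transfer time for that 72GB disk; the sustained throughput value is an assumption for illustration, not a measured figure.

```python
# Rough migration-time estimate; the throughput figure is an assumption.
disk_gb = 72
effective_throughput_mb_s = 45   # assumed sustained rate on a gigabit link

minutes = disk_gb * 1024 / effective_throughput_mb_s / 60
print(f"~{minutes:.0f} minutes")  # roughly 27 minutes, in line with the estimate above
```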

This can translate into a very expensive service downtime, which is not an option for mission-critical environments or where a Service Level Agreement (SLA) is in place.

Luckily P2V technology is evolving and PlateSpin is already able to offer a live migration feature, completely avoiding the downtime.

This much-desired capability is possible thanks to a special technique which handles copying of all files, including open and locked ones.

At the moment live migration is only available for Windows physical servers, but the company will add Linux support in the future.

A future of automation

In the near future, candidates’ recognition, capacity planning and P2V/V2P/V2V migration tools will lose their original connotation, becoming an intelligent, automated and autonomous service dedicated to around-the-clock virtual datacenter optimization.

With perfect automation, the candidates’ recognition component would scan the whole datacenter 24 hours a day, looking for underutilized physical servers and overloaded virtual machines.

Every time one appears, it would pass the report to the capacity planning component, which would suggest the best location to migrate it to: if a physical server is underutilized it has to be converted into a virtual machine; if a virtual machine is overloaded it has to be moved to a less busy host or converted back into a physical machine.

These orders would be passed to the migration component, which would perform them seamlessly, without downtime.

At a certain point the whole environment will be completely liquid, changing its shape depending on the workload of every minute. And we won’t even be able to tell whether our services are served by a virtual or a physical machine.

In our next installment we’ll see the second, critical operational phase: enterprise management, where we’ll have to face several challenges, including resource handling and monitoring, infrastructure automation and disaster recovery.

This article originally appeared on SearchServerVirtualization.