VMware launches Workstation for Mac OS public beta

Alongside the Workstation 6 public beta launch, VMware has opened beta program enrollment for its first virtualization product for Apple Mac OS, codenamed Fusion.

Srinivas Krishnamurti, Director of Product Management and Market Development at VMware, announced the public beta on the corporate blog, The Console, highlighting the features of the current build (36932):

  • Native Cocoa UI
  • Virtual battery (the physical notebook battery indicator is exposed inside VMs)
  • Interoperability with other VMware products (regardless of host OS)

Enroll for the beta here.

VMware is surely the most popular virtualization company worldwide, but the currently exposed features hardly seem enough to dent the huge success achieved so far by its young competitor Parallels, which has been endorsed by Apple itself and is remarkably quick at releasing new builds with notable new capabilities (like Coherence).

Exactly as happened to Microsoft, VMware's delay in readying a product for Apple customers as soon as Mac OS was released for x86 architectures could make it seriously difficult for the company to win back the trust of the Mac community.

More problems may arise at RTM launch time, depending on the marketing strategy: if nothing is offered for free, customers may start wondering why VMware provides the free Player (and Server) on Windows and Linux while they have to pay on Mac OS.

VMware VMmark hits beta

VMmark, the new benchmarking system VMware is developing to measure and compare different virtualization platforms, is now in beta.

The announcement comes from the company's performance team on its new dedicated blog: VROOM!

Unfortunately at the moment there is nothing in particular to download, except the introductory whitepaper: VMmark: A Reliable Benchmarking System to Measure Virtual Machines Performances.

Developing a widely accepted performance measurement system is not easy, as it requires recognition from partners and competitors alike.

So while comments from players like Microsoft, SWsoft, Sun and others are still awaited, VMware is working to develop VMmark into a SPEC-endorsed standard.

Meanwhile Intel and IBM, which are SPEC members as well, have announced their plan to develop an independent benchmarking system.

Citrix acquires Ardence

Quoting from the Citrix official announcement:

Citrix Systems, Inc. the global leader in application delivery infrastructure, today announced a definitive agreement to acquire privately held Ardence, Inc. of Waltham, Mass. This strategic technology acquisition extends Citrix’s end-to-end application delivery infrastructure leadership by enabling the real-time, on demand provisioning of desktops, server images and service oriented architecture objects.

Several examples of how Ardence technologies will improve an enterprise’s application delivery infrastructure include:

  • Provisioning Desktops for Delivery Over a Network
    Using Ardence’s innovative OS-provisioning and remote network boot technology, any x86-based computer can be provisioned with an entire physical or virtual desktop environment from bare metal to production in minutes. This capability could be used, for example, to deliver new versions of operating systems, service packs and hot fixes to a diverse range of end users in minutes, then quickly rolled back to previous versions if problems are detected.
  • Enhanced Management of Citrix Presentation Server Components
    The provisioning capabilities of Ardence will allow IT administrators to more quickly add new servers to a Citrix Presentation Server farm and allow for the dynamic configuration of servers in a data center.
  • Provisioning Web Server Images as Load Changes
    Ardence also complements the Citrix NetScaler line of web application delivery solutions, allowing IT administrators to dynamically change the amount of storage or CPU capacity available to web applications during peak load times. For high-volume e-commerce applications, the Ardence technology can even re-provision web servers from one application to another on the fly as demand fluctuates. As customers adopt services oriented architectures (SOA) for their web application environments, the Ardence technology could also be used to provide on-demand provisioning of these application components

The financial terms of the agreement are not being disclosed. The acquisition is subject to various standard closing conditions, including applicable regulatory approvals, and is expected to close in the first quarter of 2007.

Upon close of the transaction, the Ardence team and products will remain based in Waltham, Mass. and report into the Management Systems Group (MSG), also based in the Boston area, under Lou Shipley.

Brian Madden published some interesting thoughts about this acquisition that you may want to read.

The virtualization.info Virtualization Industry Radar has been updated accordingly.

Server virtualization vs OS partitioning

As virtualization reaches more companies, they start wondering about the differences between the various approaches. It is not a matter of comparing apples with oranges, but rather of recognizing the most effective solution for a specific task. It is normal for customers to look for points of comparison, some focusing on performance and others on feature sets.

Those who have tried so far to compare the most popular representatives of both approaches, typically VMware products on one side and SWsoft Virtuozzo on the other, have failed badly, because at the end of the day you are still matching apples with oranges.
Still, the need to understand outweighs the risk of failing, so new comparisons keep popping up.

Tim Freeman, starting from the announcement of KVM's inclusion in the Linux kernel, did some notable research on the topic and highlights a very interesting paper from Princeton University, titled Container-based Operating System Virtualization: A Scalable, High-performance Alternative to Hypervisors, where Linux VServer is compared with Xen:

Hypervisors, popularized by Xen and VMware, are quickly becoming commodity. They are appropriate for many usage scenarios, but there are scenarios that require system virtualization with high degrees of both efficiency and isolation. Examples include HPC clusters, the Grid, hosting centers, and PlanetLab.

We present an alternative to hypervisors that is better suited for such scenarios. This approach is a synthesis of prior work on resource containers and security containers applied to general-purpose, time-shared operating systems. Examples of such container-based systems include Solaris 10, Virtuozzo for Linux, and Linux VServers.

This paper describes the design and implementation of Linux-VServer as a representative instance of container-based systems, and contrasts it with Xen, both architecturally and in terms of efficiency and support for isolation.

Read the whole paper here.

Parallels Desktop reaches beta 2

Parallels continues its ride towards a new release of Desktop for Mac OS.
Less than a month after the beta 1 build, beta 2 (build 3094) is already available and offers some new features:

  • USB 2.0 support
  • CD/DVD burning support
  • Parallels Transporter beta 2 bundle

Enroll for the beta here.

Meanwhile VMware finally enters the competition, releasing the first public beta of its codename Fusion product.

Virtual Iron partners with Fabric7

Quoting from the Virtual Iron official announcement:

Virtual Iron Software and Fabric7 Systems today announced a new business agreement and partnership that will help joint customers maximize the capabilities of Fabric7’s new family of enterprise servers with Virtual Iron’s virtualization and management software solutions.
Under the agreement, Fabric7 will bundle Virtual Iron’s virtual infrastructure management software with its high-performance, AMD Opteron™ processor-based servers, to provide customers with maximum flexibility and reliability when deploying and managing enterprise-class applications in a virtual environment.

The companies have also agreed to develop joint product offerings for users and to collaborate on marketing and sales…

JumpBox joins XenSource Technology Partner Program

After joining the VMware Technology Alliance Partner Program last month, JumpBox has also made a deal with competitor XenSource:

JumpBox, a new virtual appliance development service, announced today that it has joined the XenSource Technology Partner Program.

This program offers JumpBox access to XenSource’s betas and pre-release products, giving JumpBox an advantage on ensuring compatibility of its appliances with XenSource’s product family…

Improve software development department efficiency with VMware

Companies developing software for themselves or for customers know how complex, expensive and time consuming releasing a new product can be.

Development team members have to work on code independently and share it when needed, building and rebuilding it on the same or different environments, while QA engineers have to test it on multiple configurations and scenarios, until the final deployment in production, where several factors are out of control and can undermine stability and reliability.

IT managers have always had little or no ability to mitigate technical issues and smooth the release path. But server virtualization changed everything, becoming one of the first choices for boosting the process.

In this article we'll see how deploying a VMware virtual infrastructure can reduce most of the problems our development department encounters, speeding up its capability to deliver new products to new, unexpected levels.

VMware is not the only company offering virtualization solutions, but its wide range of products and its capability to seamlessly migrate a virtual machine from one platform to another make it the best choice for this scenario.

Typical problems

The very first issue in software development is environment integrity.

For many software engineers it is normal to keep their development tools on their everyday workstation. When a new project starts, the large majority of them simply start coding in the same environment they use for browsing the web, reading email, watching videos or presentations, and often even gaming.

Such systems should be perfectly clean, like the fresh installations where customers are supposed to host the product we are developing. Unfortunately this is rarely the case.

Daily use for so many tasks implies a lot of installed software, which injects libraries, requires high-level permissions, modifies environment variables and so on. Not to mention possible malware infections.

Developed code may run or fail because of these variations, and moving it to different machines, where operating systems have been compromised in different ways, will produce different results, leading to complex and time-consuming troubleshooting.

Another issue that frequently slows down complex projects is environment portability.

Software architects, engineers and product managers have to verify how a product is evolving during the whole development process, or have to collaborate on it to improve or debug its routines.

Having many people around the same monitor, or permitting remote access to the development environment, is highly impractical. On the other hand, moving code from one operating system to another is not simple.

It is not only a question of environment integrity, which cannot be guaranteed in any way, but also a mere matter of delivering all the parts needed to run a piece of code.

Any application based on database access, for example, is very hard to share if the developer, as often happens, has installed the database instance on his own machine or relies on a remote instance on a dedicated network segment that not everybody can access.

But even without a database, the development team could need libraries or, in the case of multi-tiered applications, other services which are not moved along with the code.

A third typical problem is the lack of physical resources.

When software engineers are savvy enough not to rely on locally installed services, they need remotely accessible services which have to be deployed, configured and announced to the team.

This requires time but, above all, implies machine availability, which cannot be taken for granted.

In a similar way, it often happens that hardware configurations have to be modified during development, adding for example more memory or another network interface card.
Adding new components can be even more complex in big companies where hardware is acquired in stock from OEMs.

But the number of machines needed for software development is not limited to the ones hosting required services. It also depends on how many systems the company wants to support.

QA engineers have to try the same code on several versions of the same operating system to verify that it works as expected in all possible scenarios: with different permission levels in several Windows editions, or with different library availability in several Linux distributions.

A dedicated machine is expected to be available for each of these, and things become very complex when multiple new applications are in development concurrently.

It's worth noting that the lack of physical machines, once solved for developers and testers, can soon turn into a problem for IT managers.

Once the big project is finished, they are left with a number of computers which will sit idle until the next one and could become obsolete in the meantime, forcing the replacement of some or all of them.

Even with enough resources, software engineers and testing staff still have to face the most frightening risk: running out of time.

A long series of logistical operations can severely slow down development, distracting coders from their focus.

For example, recognizing the need for environment integrity leads to always debugging code on a fresh installation, which is impossible unless developers reinstall the whole operating system from scratch every time.

But even without that level of attention, when the developed code includes an installer it is critical to work on a virgin OS.

Lack of time also affects testers, who not only need multiple physical machines for every platform on which our code has to be certified as working, but also need to reinstall the same operating system several times, maybe because the last installation failed or simply because they have to test different languages, service packs or concurrent applications.

Fundamentally, every test case should be conducted in a dedicated environment, and this implies a notable effort. Even if disk backup solutions are in place, they help only to a limited extent, considering their dependency on the underlying hardware, which could change and require saving a whole new set of images, and their restore times.

Improving the development phase

The most popular and oldest product from VMware is also the most important in the whole solution chain: Workstation.

Workstation offers a wide range of features able to address most of the software development problems mentioned above.

The probability that a software engineer tries it and still sticks with traditional tools is near zero.

The first problem Workstation addresses is environment integrity: developers and testers can count on the Snapshot feature, which allows saving a virtual machine's state and reverting back to it whenever needed.

Savvy use of the snapshot feature means developers install a brand new operating system in the virtual machine, fit it with all the tools they need to produce new code and finally save a snapshot.

This operation guarantees a pristine environment completely isolated from the everyday workstation.

In this scenario a developer is able to browse the internet, read email and even play games on his own machine without jeopardizing the development workspace.

For maximum security the virtual machine can be completely disconnected from the real network card, so there is no chance the workspace can be compromised by a remote attack or virus infection, and no need to continuously patch the operating system or install an antivirus to maintain security.

But even if the workspace cannot be compromised, it can still become overloaded with libraries, utilities and other things during a project.

In this case our software engineer can revert back to a clean state as soon as the project is closed, simply by recalling the first snapshot, within seconds.
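
As a rough illustration, this save-and-revert cycle can also be scripted: recent Workstation releases ship a command-line utility called vmrun with snapshot commands. The sketch below is only an example under those assumptions; the .vmx path and snapshot name are hypothetical and vmrun is assumed to be on the PATH.

import subprocess

# Hypothetical location of the development VM and the name of its baseline snapshot.
VMX = "/vms/dev-workspace/dev-workspace.vmx"
BASELINE = "clean-toolchain"

def vmrun(*args):
    # Invoke VMware's vmrun utility and raise if the command fails.
    subprocess.run(["vmrun", *args], check=True)

# Taken once, right after installing the guest OS and the development tools.
vmrun("snapshot", VMX, BASELINE)

# Later, when the project is closed and the workspace is cluttered,
# drop back to the clean state in seconds instead of reinstalling.
vmrun("revertToSnapshot", VMX, BASELINE)
vmrun("start", VMX)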

The snapshot feature is quite advanced and is an essential tool for QA as well.

When compatibility testing is in progress, testers need to ensure the new application works correctly with several different products, from our company or from third parties.

The Snapshot Manager permits saving multiple states of the same virtual machine, allowing testers to install one application after another without reinstalling the whole environment every time.

For example, in a scenario where our new prototype application has to be tested for compatibility with Microsoft Office and several service packs, the best approach is to save a first snapshot of the freshly installed operating system, another after the non-service-packed version of Office has been installed, and yet another after the service pack is in place.

At this point testers are able to proceed with our code installation.

If something goes wrong or if they want to test the same installation with a different service pack, they just need to revert back to the snapshot taken before the service pack installation.

Trying to do the same thing without virtualization or a lot of different physical machines would take hours or days.

This process can be further improved thanks to other Workstation features: multiple snapshot branches and linked clones.

The multiple snapshot branches feature permits setting an already taken snapshot as the original virtual machine image and taking new snapshots from there.

Linked clones act in a similar way but separate the new snapshots from the original virtual machine image location.

Both features are particularly useful for QA since they don't copy what already exists in the original virtual machine but only reference it, recording just what is done from that point on.

To clarify, we can reconsider the previous example: a tester who needs to verify the compatibility of a new application against multiple Microsoft Office versions and their service packs can start by creating a snapshot of the brand new operating system.

After installing Office 2003 on top of this snapshot, the QA engineer can set the starting point back to the snapshot taken right after installing the fresh OS and open a second branch there with Office 2000.

At this point he can take a new snapshot on each branch before installing Service Pack 2 on Office 2003 and Service Pack 1 on Office 2000.

Our application can be tested against all these environments, while the Snapshot Manager makes it easy to create and discard snapshots and linked clones.
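
To make the branching workflow more concrete, here is a minimal sketch of how the Office compatibility tree described above could be driven through vmrun; the paths and snapshot names are purely illustrative, and the Office installations themselves are assumed to happen in the guest between the calls.

import subprocess

VMX = "/vms/office-test/office-test.vmx"   # hypothetical test VM

def vmrun(*args):
    subprocess.run(["vmrun", *args], check=True)

vmrun("snapshot", VMX, "fresh-os")             # freshly installed operating system

# Branch A: install Office 2003 in the guest, then mark the pre-service-pack state.
vmrun("snapshot", VMX, "office2003-rtm")

# Go back to the fresh OS and open branch B with Office 2000.
vmrun("revertToSnapshot", VMX, "fresh-os")
# ...install Office 2000 in the guest...
vmrun("snapshot", VMX, "office2000-rtm")

# Each branch now receives its service pack and a further snapshot, so the
# prototype application can be installed, tested and discarded at will.
vmrun("listSnapshots", VMX)                    # inspect the resulting tree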

Snapshots and linked clones not only drastically reduce the time needed to prepare a new development or testing environment, but also address another critical problem we already discussed: the lack of physical resources.

With them, QA engineers don't need a new machine for every single environment to test, just enough disk space to hold several branches of snapshots and clones.

Another great feature of Workstation is Teaming, useful both in development and testing.

Teaming allows logically linking together multiple virtual machines and launching them at the same time.

It also allows users to create virtual connections between them with simulated speeds.

So, for example, a software engineer developing a multi-tier application can check how his code performs when used over a modem or a broadband connection, or a usability tester can verify how much bandwidth a networked application needs to run without delivering a bad user experience.


Addressing portability

As already said, the biggest benefit of VMware software is the capability to seamlessly share virtual machines between different products.

This not only permits developers to move their work without modifications and show it to teammates or product managers, but also allows applications to be ported to other virtualization facilities, where the code will be tested or even put into production.

So, as simply as copying a folder, the virtual machine containing the new software can be moved from Workstation to Server, the enterprise virtualization product which VMware offers for free.

There it can be tested for compatibility and usability, and its performance can be verified against stress tests.

After the QA phase, the same virtual machine can be moved again to the VMware product aimed at datacenter deployment, ESX Server, where it will be put into production.

Anytime a problem appears, the virtual machine can be moved back and forth between these platforms to patch errors or test new configurations.

And if a customer wants an onsite demonstration of the new product, the same virtual machine can be moved once again, loaded into VMware's free virtualization product for desktops, Player, and distributed to sales managers worldwide.
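
Since a Workstation virtual machine is essentially a folder of configuration (.vmx) and virtual disk (.vmdk) files, the handoff can be sketched as a plain folder copy; the directory names below are hypothetical, and moving the machine into ESX Server would additionally require an import step not shown here.

import shutil
from pathlib import Path

# Hypothetical paths: the local VM folder and a share the QA team can reach.
src = Path("/vms/new-product-demo")
dst = Path("/mnt/qa-share/new-product-demo")

# Power off (or suspend) the virtual machine first, then replicate its folder,
# which carries the .vmx configuration and the .vmdk disks together.
shutil.copytree(src, dst)

print("Copied files:", ", ".join(sorted(p.name for p in dst.iterdir())))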

Conclusion

Virtualization is not revolutionizing just datacenter planning and deployment. It is also touching the most critical part of the IT industry: application development.

VMware saw this before its competitors and is creating a whole ecosystem that improves software engineers' efficiency by cutting away unproductive time.

As side benefits, companies fully adopting virtualization gain safer environments and flexible ways to reach new customers. But it's just the beginning: today all operations are done manually, but in the near future VMware will provide automation for some of them with a new product called Virtual Lab Manager, which is expected before the end of this year.

This will greatly simplify the control and optimization of software production phases in big companies where multiple departments adopt different development tools but need to leverage virtual machine images in the mandatory testing and production virtual infrastructures.

Automation is around the corner, and with it a new dimension in the software development lifecycle.

This article originally appeared on SearchServerVirtualization.


IBM to compete with VMware on virtualization benchmarking?

Quoting from the IBM official announcement:

IBM and Intel Corporation have joined in an initiative aimed at improving how IT managers select, deploy and measure virtualized server solutions for enterprise data centers.

One of the first tools to emerge from this joint initiative is a new virtualization benchmarking methodology called vConsolidate that runs multiple instances of consolidated database, mail, Web and JAVA workloads in multiple virtual CPU partitions on Intel-based System x servers to simulate real-world server performance in a typical environment. IBM and Intel are contributing the vConsolidate methodology to an industry standards body for consideration.

Using vConsolidate to benchmark the IBM System x3950 server with four dual-core Intel® Xeon® 7100 processors shows the x3950 delivers up to 46 percent more performance throughput than a competing system when running a mix of larger two- and four virtualized processor partitions.

Based on this and other customer test results, IBM and Intel created a VMware Infrastructure Sizing Guide aimed at helping customers select and appropriately configure the various virtualized server options available to them.

The result is a tool that provides recommendations for target utilization rates, the total number of virtual machines that will be needed to run the application, and the number of physical servers required to support the computing workload and goals.

To assist customers with making virtualization adoption decisions, IBM expects to open the Virtualization Resource Center (VRC) in early 2007. Customers will be able to apply principles gleaned from vConsolidate and sizing guide activities to their particular environments and software workloads…

VMware has been developing a benchmarking model for virtualization platforms (VMmark) for quite a while, and the first details about the company's approach were disclosed at VMworld 2006.

VMware hopes VMmark will be widely accepted by other virtualization vendors and major OEMs, and it is working with SPEC to standardize it.

The description of IBM's vConsolidate is pretty similar to VMware's VMmark in approach and in its aim for standardization. IBM is also a SPEC member, just like VMware.
Is IBM distancing itself from the current VMmark development path?

Tech: How to generate a memory dump file on Virtual Server 2005

Microsoft published an interesting support article for developers trying to generate a memory dump file inside Virtual Server virtual machines:

Microsoft Windows operating systems let you use the keyboard to generate a memory dump (Memory.dmp) file. When this feature is enabled, you can generate the Memory.dmp file by holding CTRL on the right side of the spacebar while you press SCROLL LOCK two times. This step-by-step article describes how to use this feature when you run a Windows operating system on a guest computer in Microsoft Virtual Server 2005.

Read the whole article at source.
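
For reference, the keyboard-initiated dump is normally enabled inside the Windows guest through the CrashOnCtrlScroll registry value of the PS/2 keyboard driver, which is how the standard Windows feature is documented; the snippet below is a minimal sketch of that step and assumes this is the mechanism the article refers to.

import winreg

# Run inside the Windows guest with administrative rights; a reboot of the
# guest is required before the CTRL + SCROLL LOCK sequence takes effect.
key_path = r"SYSTEM\CurrentControlSet\Services\i8042prt\Parameters"
with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
    winreg.SetValueEx(key, "CrashOnCtrlScroll", 0, winreg.REG_DWORD, 1)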

Thanks to Andrew Dugdell for the news.