Books: VMware ESX Server: Advanced Technical Design Guide

Ron Oglesby, Scott Herold, and Brian Madden’s book about VMware ESX Server is now available on Amazon.

VMware ESX Server: Advanced Technical Design Guide
Release Date: July 28, 2005
ISBN: 0971151067
Edition: 1
Pages: 488
Size: 9″ x 6″ x 1″

Summary
This book is not an administrator’s guide. Rather, it’s written for IT consultants, system engineers, and architects who must plan, design, implement, and optimize VMware ESX systems. It’s filled with real-world, proven strategies created specifically for ESX Server. See how some of the world’s largest companies are using ESX Server in their production environments.

Are you thinking about using VMware ESX Server in your environment?
Do you want to use it to consolidate a few servers, or do you want to use it on a larger scale?
Are you wondering whether the ESX technology is “real” enough for production use?

If you’re wondering whether ESX Server will work for you, spend 50 bucks on this book before spending thousands of dollars on licenses.

Why less than 500 pages? It’s amazing what you can fit into a small space by taking out pointless screenshots and unrelated filler material. Buy this book now and start learning how ESX Server really works.

What’s Covered in This Book…

  • Virtualization Overview
  • How Virtualization Works
  • The real differences between VMware Workstation, GSX, and ESX
  • Hardware Allocation
  • Getting devices to work
  • Real-world server sizing
  • SAN vs. local storage
  • The inner workings of ESX networking: virtual switches, physical switches, and using ESX Server for firewalls
  • Managing the server
  • Security
  • Server Monitoring
  • Automated installations and server provisioning
  • High availability
  • Backup and disaster recovery strategies

About the Authors

Ron Oglesby is the Director of Technical Architecture for RapidApp.
He’s helped companies of all sizes develop their virtualization strategies, ESX
Server farm designs, and virtualization roadmaps. Ron is a VMware
Authorized Consultant (VAC) and VMware Certified Professional (VCP) and has 9
years of experience in the industry with the last 2 years almost completely
dedicated to virtualization. Ron co-authored the best-selling books
Windows 2003 Terminal Servers, the Citrix MetaFrame XP CCA Study Guide, and
numerous articles and white papers on server and application virtualization.

Scott Herold is a Senior Network Engineer for RapidApp. He is a
VMware Authorized Consultant (VAC), VMware Certified Professional (VCP), and the
owner of www.vmguru.com. Scott has engineered and implemented some of the
largest ESX solutions in the world, including those for several financial
services and insurance companies. The solutions he has designed range from
2-3 physical server implementations to enterprise-sized environments of 50+
8-way hosts. He is one of the most active participants in the VMware
Technology Network Forums.

The table of contents and some sample chapters are available to download here.

Guide to Apple MacOS x86 Tiger on VMware

As I reported a few weeks ago, a special VMware image is circulating on the Net: it is the experimental Apple MacOS x86, hacked to bypass the hardware check Apple created to prevent installation on unauthorized x86 machines.

The image really exists (it’s actually called tiger-x86.tar.bz2), but there are also alternatives for setting up MacOS 10.4.1 inside VMware: at the xplodenet.com site you’ll find four simple guides to accomplish the task.

A blogger posted some screenshots (which I assume are not fake).

Use VMKFSTOOLS instead of cp

Quoting from VMTS:

The VMFS-2 metadata manager serializes operations performed on different hosts that require metadata updates. This is a standard file system practice that protects a shared resource (the metadata) from being modified simultaneously by multiple hosts. Here, normal data I/O is not affected—only operations that originate from different hosts and that require a metadata update are serialized. Typically, these operations occur infrequently and performance impact is not significant.
cp, however, can change this dynamic: files copied with cp grow (in blocks of 10 KB) as data is appended to them. Growing a copied file is an operation that requires a lock on the VMFS-2 metadata.

While a cp operation is in progress, you may experience degraded I/O performance due to increased VMFS-2 metadata contention. Other symptoms may include the inability to change power states or modify a virtual machine.

The extent of any performance degradation depends on several factors, including the number of ESX Server hosts with virtual machines that use the VMFS-2 volume, the intensity and pattern of I/O activity in the virtual machines with REDO log files, and the number of concurrent cp operations.

To copy a file inside a VMFS volume without this contention, use the undocumented vmkfstools option:

vmkfstools -e /vmfs/vmfsname/target.vmdk -d vmfs /vmfs/vmfsname/source.vmdk
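In practice one might wrap that export command in a small script. The sketch below simply assembles the command shown above; the DRY_RUN guard and example paths are illustrative assumptions, not part of vmkfstools:

```shell
# Copy a VMDK within a VMFS volume via vmkfstools export instead of cp,
# avoiding the VMFS-2 metadata locking caused by cp's incremental file growth.
# DRY_RUN and the example paths are illustrative, not vmkfstools features.
vmfs_copy() {
  src="$1"; dst="$2"
  cmd="vmkfstools -e $dst -d vmfs $src"
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "$cmd"        # show what would run
  else
    $cmd               # perform the actual export on an ESX host
  fi
}
```

On a real ESX host you would invoke it with DRY_RUN=0, e.g. `DRY_RUN=0 vmfs_copy /vmfs/vol1/source.vmdk /vmfs/vol1/target.vmdk`.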

How Microsoft is using Virtual Server

Quoting from The Soul of a Virtual Machine blog:

Many of you would like to know how Virtual Server is being used at Microsoft. Here’s a response from Jeff Woolsey, Lead Program Manager for virtualization. Thanks Jeff!

Virtual Server is being used in a variety of ways at Microsoft, including for test and development and online training, such as Microsoft Learning.

Test and Development
Virtual Server is used by test teams throughout Microsoft, including Exchange, SQL, SBS, MOM, and many others. This is because Virtual Server allows you to rapidly deploy test servers within virtual machines while minimizing hardware requirements. Also, Virtual Server makes debugging easier. Debugging typically requires that a test computer is attached to a developer’s computer via a serial cable. With Virtual Server there’s no need for this. The process is as follows:

  1. Testers reproduce the issue in a virtual machine.
  2. The virtual machine is saved at the point the issue occurs.
  3. The virtual machine is copied to the developer’s computer.
  4. The developer connects the virtual machine to a debugger though a named pipe (a virtual serial port) and debugs the issue in the development environment.
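Step 4 relies on standard Windows kernel debugging over a serial connection; in a virtual machine the COM port is simply backed by a named pipe. A typical setup might look like the following (the pipe name \\.\pipe\vmdebug and the boot entry are illustrative assumptions):

```text
Guest boot.ini (Windows XP/2003), enabling kernel debugging on COM1:
  multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows (debug)" /fastdetect /debug /debugport=com1 /baudrate=115200

Virtual machine settings: map the guest's COM1 to a host named pipe,
e.g. \\.\pipe\vmdebug.

Host, attaching WinDbg to that pipe as if it were a serial cable:
  windbg -k com:pipe,port=\\.\pipe\vmdebug,resets=0,reconnect
```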

Production Use by Microsoft Learning
In the past year, Microsoft Learning has converted the majority of their online training from scripted Flash-type demos to live interactive training using Virtual Server. They started off slowly and have been ramping up with the increase in demand. Users log in and perform step-by-step interactive training with Virtual Server. On the back end, this is all done using virtual machines and Undo disks. When the customer logs in, an Undo disk is created for the session. When the user finishes and logs out, the Undo disk is discarded and the virtual machine is immediately ready for the next user.
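The Undo-disk cycle described above amounts to a copy-on-write overlay per session. A minimal sketch of the lifecycle, where the lab directory and the .vud naming are illustrative assumptions rather than Virtual Server’s actual on-disk format:

```shell
# Model of the per-session Undo disk cycle: the base disk is never written,
# each login gets a fresh overlay, and logout discards it.
# LABDIR and the .vud naming are illustrative assumptions.
LABDIR="${LABDIR:-/tmp/vslab}"
mkdir -p "$LABDIR"

session_start() {                    # user logs in: create an empty Undo disk
  : > "$LABDIR/undo_$1.vud"
}
session_write() {                    # lab changes land in the Undo disk only
  echo "$2" >> "$LABDIR/undo_$1.vud"
}
session_end() {                      # user logs out: discard the Undo disk
  rm -f "$LABDIR/undo_$1.vud"        # base disk is instantly pristine again
}
```

Because discarding a session is just deleting the overlay, the same virtual machine can serve the next student within seconds.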

Benefits
Microsoft Learning is servicing more customers than ever. This is a production environment in use every day: 30,143 attendees in January alone (972 attendees daily), with 206,390 year-to-date. Because of the huge success of this program, Microsoft Learning is adding more hardware to increase the number of available labs.

Here are a few of the positive results they’ve seen…

  • The 90-minute lab sessions are the most popular.
  • Lab session use has gone up.
  • Time spent in the lab has gone up (averaging 75 minutes per lab now).
  • Customer satisfaction is up (way up!).

Customer Comments
I think this is the way IT was meant to be all along. Thank You Bill and company.
The implementation is entirely innovative and gives administrators like me a chance to experiment away from production systems.
Awesome. This is the type of thing IT training has needed for ages.
Excellent. Very useful hands on training. This module needs to be longer.
EXCELLENT! This is extremely useful hands on training.
Great! This is what admins who need to implement your products need. What about providing other training on SMS site design configurations, clusters etc.? A virtual lab setup like that will again help admins who are looking to implement this product.

SCO entering the virtualization market

This is quite unexpected: SCO announced several new products, and among them Project Fusion reveals the company’s move into the virtualization market.
Quoting from the SCO official announcement:

The SCO Group, Inc., owner of the UNIX operating system and a leading provider of UNIX-based solutions, today reported on customer adoption of SCO OpenServer 6 and released the first maintenance pack, adding multi-core processing capabilities to OpenServer. Additionally, SCO announced the availability of education curriculum materials as well as new bundled Support and Professional Services for SCO OpenServer 6. The company also outlined the roadmap for its UNIX technology and product line.

To date, SCO OpenServer 6 has sold thousands of customer licenses including many to companies in the Fortune 1000, government agencies and many small-to-medium businesses. SCO resellers have anticipated a high degree of interest in the upgrade from their customers due to the product’s increased performance, support for more powerful hardware and a broader array of applications, as well as significant security and stability enhancements.

“OpenServer 6 provides an almost seamless migration with years of headroom built in. Some customers are experiencing a tenfold increase in performance,” said Dave Ramgren, division vice president, BIS Computer Solutions, Inc., a SCO reseller partner. “We can confidently encourage other OpenServer customers to upgrade to SCO OpenServer 6. The process of integration and configuration is efficient and a low risk for customers.”

Ongoing UNIX Strategy and Roadmap

SCO’s ongoing UNIX strategy and roadmap will focus on a powerful new product code-named Project Fusion. Based on the new 64-bit UNIX SVR6 kernel technology, Project Fusion will deliver an operating system for the Internet age. Project Fusion will also integrate server virtualization capabilities in the kernel, thus providing an ecosystem of application runtimes. UnixWare and OpenServer customers will benefit from a larger pool of applications, making the vision of a larger market a reality for solution providers.

“With the completion of Project Legend and thereby recent release of OpenServer 6, all of our development efforts are now streamlined on a single common source base and are geared towards innovation,” said Sandy Gupta, chief technology officer, The SCO Group. “Project Fusion should prove to be another fantastic product from The SCO Group. Its ability to support both 32 and 64-bit processing power will provide customers with the ability to take advantage of the new and emerging 64-bit hardware technologies when it is released,” said Bob Ungraetti, president, Garett Group Inc. “It’s nice to see that SCO is also furthering its UNIX System V technology. UNIX SVR6 technology applied in Fusion completes the modern operating system allowing users to run an array of applications on mature technology that has been developed for decades with a proven track record of stability and reliability.”

SCO plans to provide the first public demonstrations of Project Fusion during 2006. Pricing for the product will be announced as the product gets closer to shipping. Further information about the upcoming Project Fusion beta program and product release will be available at www.sco.com/fusion.

LinuxPlanet reports that Project Fusion can run SCO’s two existing operating environments, SCO Unix and UnixWare, side by side, while also supporting non-SCO OSes.

PC virtualisation on the move

Quoting from TechWorld:

Thanks to the backward, “all software owns the entire system” design of the x86 CPU architecture, PC client and server virtualisation is one of the most challenging tasks facing system software developers. Even at their best, the benefits of x86 virtualisation solutions from VMware and Microsoft are limited to reliability, convenience, and manageability.

But virtualisation’s promise as a pathway to consolidation and the way to turn aggregated compute cycles into a provisionable distributed resource remains just that. Don’t blame VMware and Microsoft. There’s only so much virtualisation one can do in software.

The sense of wonder one feels on first seeing a system split itself in two can be tough to sustain. We can be so impressed that we can run two copies of Windows simultaneously, or Windows and Linux, or Linux and that phoney pirated copy of the Intel edition of OS X Tiger, that the end goal of virtualisation is ignored or given up for lost. The dawn of the x86 virtualisation era will break with the advent of two upheavals: CPU-assisted virtualisation and para-virtualisation.

CPU-assisted virtualisation will carry x86 systems closer to the essential goal of linear virtualisation, that is, the ability to split a CPU core into two virtual cores that operate at close to 50 per cent of the performance of their real parent. Truly linear virtualisation would require a zero-overhead VMM (virtual machine manager, or hypervisor) — a theoretical goal on par with a 100 per cent efficient solar panel.

But the Pacifica technology from AMD and Intel’s forthcoming Vanderpool technology will obviate the need for the performance-sapping work-arounds that make software-based x86 virtualisation behave a lot like emulation.

Connectix, the company Microsoft acquired to bring Virtual PC and Virtual Server to its product line, illustrated this beautifully by releasing an x86 emulation solution for the PowerPC-based Macintosh that is functionally identical to its virtualisation solution for x86 systems. It’s as if Connectix figured that since x86 virtualisation called for emulating some of the operation of the CPU itself, it might as well go the extra mile and craft an x86 entirely in software.

The Pacifica and Vanderpool on-CPU virtualisation technologies eliminate the need to emulate an x86 to virtualise it. I have not seen either technology in hardware, but virtualisation done with the Pacifica or Vanderpool hardware assist will simply blow the doors off software virtualisation at its debut, assuming a virtual machine manager exists that exploits x86 hardware virtualisation.

In my estimation, AMD and Intel have placed virtualisation within the reach of open source developers. AMD’s delivery of a software-based CPU emulator incorporating the Pacifica specification guarantees that Pacifica will have software ready to roll on day one.

In the run-up to Pacifica and Vanderpool, the x86 virtualisation landscape is evolving so rapidly that there seems to be more confusion than excitement. But there is ample reason for excitement; just keep your eyes on the prize and not on vendors’ positioning.

What’s the prize? Operating systems that deliver secure, transparent, high-performance, hardware-based virtualisation as a standard feature, and management tools that take advantage of the ability to create, destroy, suspend, relocate, and monitor virtual machines throughout an enterprise.

Server virtualization is being adopted at a much greater rate than storage virtualization

Quoting from TMCnet:

59% Are Using Server Virtualization While Only 16% Are Using Storage Virtualization

TheInfoPro (TIP), www.TheInfoPro.net, has released Wave 1 of its Server Study. According to in-depth interviews with leading-edge Server professionals conducted by TheInfoPro (TIP), the top priority among Fortune 1000 companies is to cut Server costs while creating operational efficiencies. Server Virtualization was cited as the means to accomplish this, with Server Virtualization Software receiving the highest scores on TIP’s patent-pending Technology Heat Index.

TIP’s Technology Heat Index factors in the current and planned usage of over 30 different Server hardware and software technologies, including Server Virtualization, Blade Servers, Infiniband, Grid Computing, Embedded Storage Switches, and Load Balancing Software, prioritizing them based on the immediacy of planned implementation and near-term spending.

“59% of Server pros report ‘slicing’ servers into smaller virtual servers, while another 30% have the technique in plan,” notes Bob Gill, TIP’s Chief Research Officer. Server pros cite the obvious hardware savings of server consolidation as an initial motivator, but quickly experience the flexibility and reduction in operational expense of being able to dynamically provision server instances on different physical server boxes. Server “slicing” products such as VMware from EMC and Microsoft’s Virtual Server 2005 are among the most exciting products in the study, with the Virtualization Software category leading in TIP’s Technology Heat Index. While EMC’s VMware is cited as the leading vendor in users’ plans, users report that Microsoft is moving aggressively with its own Virtual Server 2005 product. Server pros report that VMware is a more mature and stable product, but Microsoft’s pricing, presence in most accounts, and deep pool of development resources make it an inevitable contender.

TIP has been studying the Open Systems Storage market for over three years and has not found nearly as much enthusiasm for Storage Virtualization. In the Wave 5 F1000 Storage Management Study released in Spring 2005, only 16% report Storage Virtualization in use while 41% do not have it in plan. Many reasons are cited for the tepid adoption rates and avoidance, including interoperability issues as many shops have multiple vendors, and hype about solutions that are not production ready.

TIP research has tracked the major shift that the Fortune 1000 has made to Tiered Storage over the past 18 months and with that move a large challenge has developed that virtualization has the potential to help with. “Number one on the storage pro’s wish list is to seamlessly move data between tiers. It is a real pain point now that tiers are in place and all data does not need to be treated equally,” notes TIP’s CEO and Founder Ken Male. “The bulk of the companies interviewed that have Storage Virtualization in the near and long term plan cite Data Migration/Mobility as the functionality they want to get out of it,” concludes Male.

The TIP Server and Storage Studies capture participating companies’ technology roadmaps, vendor performance ratings, and spending plans, along with detailed narrative commentary for context. Over 125 technology providers are discussed including IBM, HP, Dell, Sun, AMD, Intel, Egenera, Brocade, McDATA, QLogic, Cisco, Broadcom, Microsoft, EMC/VMWare, Red Hat, Novell SUSE, Opsware, RLX, Network Appliance, Emulex, 3Com, Citrix, VERITAS, Foundry, Acopia, NeoPath, NuView, HDS, Rainfinity, CommVault, CA, Softek, FalconStor and BMC.

“Wave 1 of the TIP Server Study complements and supplements our industry standard offerings in the Storage, Networking, and Information Security markets where a new wave of a study is issued every six months via in-depth interviews with domain experts at Fortune 1000, Mid-market and European companies,” comments TIP CEO Ken Male. “In many cases, Server pros are beginning to look at servers as digital instances of a specific configuration of OS, Application, and Data, to be run anywhere as opposed to a physical ‘box’,” adds Gill. “This makes the synergies between Servers and Storage that much more important.”

Over 800 IT decision makers are members of the proprietary TIPNetwork, including Citigroup, BellSouth, Honeywell, P&G, and Visa. To learn more about TIP’s independent, objective research process, visit www.theinfopro.net.

Additional information about the Server Study can be viewed in a multi-media presentation located at: www.brainshark.com/theinfopro/ServerWave1_Web

A sample from the Storage Management Study is located at: www.brainshark.com/theinfopro/Stor_W5_MGT

XenSource tests door to Windows and profits

Quoting from The Register:

Open source darling XenSource took a couple of steps toward a more serious future this week by showing the public that it can run Windows without modification and by previewing one of its first for-profit management packages.

The developers behind the Xen partitioning/virtual machine project worked long and hard to boot Windows XP SP2 – instead of Linux or even Solaris x86 – on their software. Such a feat required dealing with old 16-bit code and a host of other issues. Simon Crosby, a co-founder at Xen’s corporate face XenSource, told us that plenty of Windows work remains before the OS can run bug free on the upcoming Xen 3.0 release.

Nonetheless, the Windows support helps bring Xen closer to competing against virtual machine leader VMware, which works well with Linux and Windows across its workstation and server product lines. XenSource plans to have a much more stable Windows-ready package by year end and to pull off its main goal of supporting Windows Server 2003.

In the meantime, XenSource can claim one edge over VMware. It has tapped into Intel’s VT – or virtualization technology – tools that should appear in server chips by year end. The use of the VT technology makes it possible to run operating systems and Xen without modification. VMware’s high-end software design makes similar work more difficult, Crosby said.

VMware officials, however, say such charges are incorrect and note that the company will ship its GSX and ESX Server products for Intel VT-ready Xeon server chips as soon as Intel starts shipping the processors next year.

XenSource demoed the Windows/VT accomplishment at the Intel Developer Forum here and also showed off the XenOptimizer SE package. The company hopes customers who enjoy Xen for free will buy this management software. The XenOptimizer code works as a basic console showing all the servers running Xen, their workloads, their CPU usage, memory usage and bandwidth. Administrators can then drag and drop workloads from one server to another with a minimal hit to performance, if they choose, or set up policies for different applications to make sure the right type of server is handling the right type of software. The package also has some load balancing tools and a simple GUI.

XenSource hopes to sell XenOptimizer later this year.

Every little milestone counts for XenSource as it tries to become a serious virtual machine player in the x86 market. It has enjoyed backing from large vendors such as Intel, Sun Microsystems and IBM but has yet to announce a single, major customer. By contrast, VMware – part of EMC – claims thousands of large customers. Meanwhile, Microsoft hopes to secure its own place in the virtual market with the underwhelming Virtual Server product and future technology for the Longhorn Server operating system.

The next major releases of Red Hat Enterprise Linux and SuSE Linux Enterprise Server will include tweaks that support Xen’s paravirtualization technology.

Or as the company puts it, “A Xen 3.0 community release is targeted for availability at the end of the third quarter of this year. Hardened, enterprise-ready Xen distributions will be available from enterprise Linux distributors in early 2006. Xen 3.0 features support for SMP guest operating systems, and can take advantage of 64-bit processors as well as supporting Physical Address Extensions (PAE) for 32-bit servers with more than 4 GB of memory.”

Xen backers claim its approach to virtual machines consumes far fewer system resources than VMware’s and that Xen is better able to make use of technology being rolled out by Intel and AMD.

Microsoft prepares Virtual PC Express

Quoting from ENT News:

Microsoft plans to bolster the Software Assurance component of its volume licensing program next month with several additional benefits covering deployment services, enhanced support, training and exclusive software, according to a source familiar with Microsoft’s plans.

Microsoft will officially unveil the new Software Assurance benefits during a series of Web conferences starting at midnight on Sept. 15. But the company has not publicly detailed the benefits.

Sunny Jensen Charlebois, product manager, Worldwide Licensing and Pricing, Microsoft, declined to comment on the specific list of SA benefits assembled by ENT.

“Microsoft is continuing to enhance the value of Software Assurance to ensure it meets the needs our customers throughout each phase of the software lifecycle. We are in the process of training our field and partners on the forthcoming enhancements. On Sept. 15th, we will honor our commitment to communicate the new SA benefits directly to customers to ensure they have the information they need in a timely manner to make the best decision for their organizations,” Charlebois said.

The update for the program comes as part of Microsoft’s continuing effort to deliver additional benefits to Software Assurance customers. Many customers complain that they don’t get real value from their three-year SA contracts if no new software versions are delivered during the contract period. When Windows Vista arrives in the second half of 2006, for example, it will come five years after Windows XP first shipped.

According to the source, who asked not to be identified, specific additions to Software Assurance will include:

  • Desktop Deployment Planning Services
    Designed to assist in planning deployment of desktop software such as Windows and Microsoft Office, the planning services will be delivered by Microsoft partners and measured in engagement days. The number of days will depend on how much a customer spends on desktop SA over three years. Customers spending $60,000 will get one day, $300,000 will get three days, $600,000 will get five days, and $1.25 million will get 10 days. Customers will be able to trade training vouchers for additional deployment planning service days. The benefit is scheduled for 2006. A smaller-scale version for Microsoft Open Value customers, called Information Worker Desktop Services, is also planned.
  • Enhanced support
    Customers will now get one free base support incident per Software Assurance agreement, and incremental incidents for every $20,000 spent on servers and CALs and $200,000 spent on Information Worker and Client software. The 24×7 Web-based incident support currently available to SA customers for standard editions of servers will be extended to enterprise editions and desktop products. Customers with Premier Support contracts will also be able to convert incidents earned through SA into Premier Support incidents. The benefits, planned to be available in February, are apparently intended to cover 40 percent to 100 percent of a customer’s regular Microsoft support costs.
  • Virtual PC Express for SA
    A previously unannounced version of Microsoft Virtual PC will become part of the Software Assurance package next year. Intended to reduce compatibility issues with legacy applications when users migrate to the next platform, customers will get one instance of Virtual PC Express with each Windows client Software Assurance license. The product will allow users to run two Windows client operating systems at the same time.
  • Extended training
    Starting in February, customers with 30,000 or more licensed desktops will receive larger numbers of training vouchers. Those can be used for certain courses or traded in for additional desktop deployment planning assistance.
  • Windows Vista Enterprise Edition
    Microsoft CEO Steve Ballmer last month mentioned an Enterprise Edition of Windows Vista that would be a level above the Professional Edition. The special operating system appears to be part of Microsoft’s Software Assurance plans. With Vista deliverable late in 2006, it’s not immediately clear if Microsoft has finalized plans for the Windows Vista Enterprise Edition as a Software Assurance–only benefit, but it would make sense.

In previous interviews, Microsoft officials said that any new Software Assurance benefits would be retroactive to customers with current Software Assurance contracts.

The approach would be similar to what occurred in September 2003, when Microsoft greatly expanded the value of Software Assurance with a number of benefits. Current SA benefits include new version rights, spread payments, eLearning, Home Use Program, Employee Purchase Program, Enterprise Source License, TechNet Plus, Cold Backups for Disaster Recovery, Corporate Error Reporting and Extended Hotfix Support (introduced in July).

Microsoft’s Webcasts introducing “The Next Generation of Software Assurance” are scheduled for midnight, 8 a.m., noon, and 6:30 p.m. Pacific time on Sept. 15. The sign-up page is microsoftsoftwareassurance.savvislive.com.

VMware ESX Server VMXfile Backup Utility

Niclas Borgström has released a tool called VMVBU that increases the availability of ESX Server VMs in a SAN environment. Quoting from his site:

The purpose of this utility is to make backup copies of configuration files for VMware ESX Server between 2 VMware ESX Servers connected to a shared SAN. The reason is for increasing the availability of Virtual Machines should a hardware failure occur. By copying the VMX-files (and changing the displayName) all the administrator needs to do in the event of a hardware failure on one of the ESX Servers is to start up the Virtual Machine through the MUI on the other host.
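The copy-and-rename step the quote describes can be sketched as follows; the function name and suffix convention are illustrative assumptions, not VMVBU’s actual implementation:

```shell
# Copy a .vmx config while tagging its displayName, so the standby ESX host
# can register the VM without a name clash. Illustrative sketch, not VMVBU.
backup_vmx() {
  src="$1"; dst="$2"; tag="$3"
  # rewrite the displayName line, pass every other line through unchanged
  sed "s/^displayName = \"\(.*\)\"$/displayName = \"\1 ($tag)\"/" "$src" > "$dst"
}
```

After a hardware failure, the administrator would then just power on the copied VM through the MUI on the surviving host.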

Download VMVBU 2.0.1 here.