EMC and HP settle longstanding patent dispute

Quoting from SAP Info:


After four years of patent disputes, Hewlett Packard and EMC have agreed to amicably dismiss all claims and counterclaims with no admissions of liability.

As part of the settlement, HP will pay a net $325 million balancing payment to EMC, which HP can satisfy through the purchase, for resale or internal use, of EMC products such as the VMware product line over the next five years. EMC and HP have also signed a five-year patent cross-license agreement. HP sued EMC for allegedly infringing at least seven HP patents relating to data storage; EMC countersued with at least six infringement claims against HP. Paul Dacier, EMC senior vice president and general counsel, said the company is pleased with the agreements.

“This resolution allows EMC to protect our substantial intellectual property investments and patent portfolio while serving the best interests of our customers,” he said. “Upon completion, we expect to have a business relationship that will provide the customers of both companies with additional choices and technology that can accelerate their adoption of information lifecycle management.” Joe Beyers, vice president of intellectual property licensing at HP, said the deal “recognizes the strength of both companies’ intellectual property portfolios.” While both companies said they expect the settlement to strengthen their partnership and reselling arrangement, no details were disclosed.

Centrify joins VMware Technology Software Alliance Program

Quoting from official announcement:


Centrify Corporation, a leading provider of Active Directory-based identity, access and Group Policy management solutions, has joined the VMware® Technology Software Alliance Program. Centrify has optimized its DirectControl suite for VMware ESX Server so that customers can leverage Microsoft® Active Directory® to centralize and control access to their virtual machines and use Microsoft Group Policy to manage configurations. As one of the first VMware partners to provide Active Directory integration for authentication, authorization and group policy, Centrify is committed to delivering optimized support for VMware ESX Server.

“With DirectControl, administrators can now leverage their existing Active Directory infrastructure to have a single, secure password for all of their VMware virtual machines and the operating systems (running either Microsoft Windows or Linux or Solaris x86) and applications running in virtual machines,” said David McNeely, Director of Product Management at Centrify. “By consolidating user authorization and access for VMware ESX Server in Active Directory, Centrify DirectControl lets organizations easily add ESX Servers without increasing administrative overhead.”

DirectControl is an identity, access and Group Policy management solution that extends the capabilities of Active Directory to mixed Microsoft Windows®, UNIX® and Linux® environments and to Java and web-based applications. DirectControl enhances the security of VMware ESX Server and delivers significant cost savings by providing centralized control and a single point of administration, based on Microsoft’s Active Directory, for mixed Windows, Linux/UNIX, VMware ESX Server and Java/J2EE environments.

Administrators can reliably manage a user’s access to all systems and applications, including authentication to UNIX or Linux systems running in virtual machines, as well as to VMware ESX Server itself. Along with simplifying administration for the IT manager, this provides users with a single sign-on experience. DirectControl includes customizable, automatic capabilities to streamline administration and usage: auto-configuration of standard login methods (console, ssh, ftp); caching of login credentials for offline use; and auto-creation of home directories. In addition, any configuration file for VMware virtual machines can be centrally defined and automatically updated using Microsoft Group Policy and the DirectControl agent on the target system.
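The announcement does not describe the underlying mechanism, but the idea of centrally defined settings being enforced on a target host is easy to sketch. The following Python fragment is purely illustrative and is not Centrify's actual implementation; the file path and the .vmx keys are assumptions. It applies a policy of key/value pairs to a VMware virtual machine configuration file, roughly the way a Group Policy-driven agent might:

    # apply_policy.py - illustrative sketch only; not Centrify DirectControl code.
    # Enforce centrally defined settings on a VMware .vmx configuration file.
    def apply_policy(vmx_path, policy):
        with open(vmx_path) as f:
            lines = f.readlines()
        seen, out = set(), []
        for line in lines:
            key = line.split("=", 1)[0].strip()
            if key in policy:
                out.append('%s = "%s"\n' % (key, policy[key]))  # override with the policy value
                seen.add(key)
            else:
                out.append(line)                                 # keep unmanaged settings as they are
        for key, value in policy.items():
            if key not in seen:
                out.append('%s = "%s"\n' % (key, value))         # add managed settings that were missing
        with open(vmx_path, "w") as f:
            f.writelines(out)

    # Hypothetical usage: cap guest memory and disable the floppy on every managed VM.
    apply_policy("/path/to/guest.vmx", {"memsize": "512", "floppy0.present": "FALSE"})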

Price and Availability

Support for VMware ESX Server is included in the DirectControl solution suite. The DirectControl solution suite is licensed on a per server basis, starting at $300 per server.

Microsoft’s plans for desktop virtualization

Quoting from NewsFactor:


Microsoft Corp. recently fleshed out the details of a plan to build virtualization capabilities directly into Windows as part of its effort to catch up to virtualization software market leader VMware Inc.
Microsoft plans to adopt an architecture similar to the one VMware uses — a point that VMware seized upon as a validation of its technical direction. But Microsoft said vendors won’t be able to differentiate themselves on virtualization alone once the technology is supported in operating systems and chips.

The virtualization road map that Microsoft laid out at its Windows Hardware Engineering Conference includes a lightweight “hypervisor” layer of code that will be built into the next major version of Windows, code-named Longhorn, to support the creation of virtual machines.

Microsoft is even “leaning” toward eliminating future versions of its Virtual Server and Virtual PC products, said Mark Kieffer, group program manager of Windows virtualization. But Kieffer added that a decision hasn’t been finalized.

More immediately, Microsoft plans to work with unidentified industry partners to expand the support of third-party guest operating systems, including versions of Linux, in the first service pack update for Virtual Server 2005. The update is due by year’s end and will also include 64-bit compatibility and improved performance, Microsoft said.

The plans weren’t enough to sway Jason Agee, a lead infrastructure systems analyst at the Nebraska Health and Human Services System, from his commitment to VMware.

“Too little, too late,” Agee said, adding that VMware’s more mature virtualization software performs better on less-powerful hardware and is helping the agency to improve its server utilization rates.

But Tom Bittman, an analyst at Gartner Inc., said that the integration of virtualization technology with operating systems should spur broader adoption. Novell Inc. and Red Hat Inc. also plan to support virtualization technology in their Linux distributions.

Microsoft bought its way into the virtualization market two years ago through its acquisition of Connectix Corp., and it released Virtual Server 2005 last fall. Analysts said Microsoft entered the market primarily to give users of older Windows versions an upgrade path to new hardware.

But consolidating Windows NT servers with Virtual Server requires users to run a copy of Windows Server 2003 as the host operating system. That approach has a greater performance overhead than Microsoft’s hypervisor architecture will, acknowledged Ben Werther, a senior project manager for Windows Server.

In contrast, VMware’s rival ESX Server, first released in 2001, doesn’t require a host operating system. Instead, it uses a hypervisor layer that runs directly on the hardware.

At WinHEC, Microsoft officials showed diagrams with the planned Windows hypervisor code layer, which will divide a system’s resources among different virtual machines. Longhorn users will be able to configure the operating system for a virtualization “role,” stripping out unneeded functionality in a so-called MinWin configuration, said Werther. But, he added, it’s still not clear if the hypervisor technology will make the first release of Longhorn Server that’s due in 2007.

Performance also is expected to improve as a result of the hypervisor’s support for upcoming virtualization extensions in chips from Intel Corp. and Advanced Micro Devices Inc.

Steven McDowell, a division marketing manager at AMD, said CPU overhead currently runs at 10 percent to 30 percent on virtualized servers. But, he added, AMD hopes the overhead will be “negligible” with its Pacifica virtualization technology, for which AMD released a specification last week.

Bob Armstrong, director of technical services at Delaware North Cos., said the Buffalo, N.Y.-based hospitality services provider is happy with the software it bought last year from VMware, which is a subsidiary of EMC Corp. Armstrong said Microsoft is heading in the right direction by building virtualization technology into its operating system, but he fears that “it’s going to take them a long time.”

Frank Gillett, an analyst at Forrester Research Inc., said it will take at least two years for Microsoft to deliver on its Longhorn virtualization plans. In the meantime, he added, VMware must figure out how to stay ahead of Microsoft and Linux vendors, with general-purpose management software as one option.

Raghu Raghuram, senior director of strategy and market development at VMware, said Microsoft is acknowledging that “if you want to get into the data center, you need to run an architecture that runs like ESX Server.”

But, Werther said, “the real challenge will be managing hundreds or thousands of virtual machines across a data center.” Microsoft has significantly increased its investment in virtualization management across its System Center family of management tools, he added.

Desktop virtualization: end of the traditional operating system?

Quoting from Sci-Tech Today:


According to industry experts, the “operating system” as we know it is going to seem much less important in the near future.
Windows, for example, will not go away but it will no longer be considered the central experience of computing.

The recent release of Apple’s Tiger OS and the anticipation of Microsoft’s Longhorn remind us that, for all intents and purposes, the OS is the computer.

But that might all change in the coming years.

Enter Virtualization

Analysts say Intel did a little thing that helped to bump the OS universe off kilter, although the effects will be delayed.

The chipmaker funded an open-source project started at Cambridge University in the UK, which is now a Silicon Valley startup called XenSource. It makes a virtualization product called the Xen hypervisor, which has only about 25,000 lines of code.

In conjunction with the Xen technology, Intel has begun to optimize its chip to run Xen hypervisor. What does it do? The short answer is that it allows a computer to realize its full capabilities, including all the operating systems it holds as well as functions that do not require an operating system.

“What Xen is, is a very thin layer of software that essentially presents to the operating system an idealized hardware abstraction,” said Simon Crosby, vice president of marketing for XenSource. The OS is no longer glued to the hardware but floats above it, talking to Xen as if it were the machine.
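For readers who have never seen it, a Xen guest (a “domain”) is described in a small Python-syntax configuration file and started with the xm tool. The sketch below is only meant to make the idea concrete; the kernel path, disk image and sizes are made-up values:

    # /etc/xen/linux-guest -- illustrative Xen domU definition (Python syntax).
    # Xen presents this guest an idealized machine: a kernel to boot, a slice
    # of memory, a virtual disk and a virtual network interface.
    kernel = "/boot/vmlinuz-2.6-xenU"                  # paravirtualized guest kernel (assumed path)
    name   = "linux-guest"                             # domain name shown by 'xm list'
    memory = 256                                       # megabytes of RAM given to this partition
    disk   = ['file:/var/xen/linux-guest.img,sda1,w']  # file-backed virtual disk
    root   = "/dev/sda1 ro"
    vif    = ['']                                      # one default virtual network interface
    # Started with:  xm create linux-guest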

No Problem

The implications of such virtualization are enormous. For one, you would be able to run multiple operating systems on your desktop. Perhaps you have wanted to try the free version of Pro Tools that only runs on Windows 98 or would love to add a light Linux-based CAD program like CADvas to your system. No problem. The operating systems will not interfere with each other or the applications.

This general idea was originally intended for large computer systems, which employ partitioning to maximize the use of the machine’s hardware.

But your computer, armed with Intel’s hypervisor-enabled chips, would be able to do essentially the same thing, including doing tasks with which Windows and other operating systems are clumsy. In this paradigm, the OS could not be less important except as a tool to run the applications you need.

Get Me Browsing

“The first partition you might have is a TV partition that would come on, pretty much as soon as you turned the PC on,” said Gartner analyst Martin Reynolds. “There wouldn’t be very much code — it would load very quickly.” Boom. You are watching TV on your PC without having to run it through Windows.

“Now, if you wanted a really fast get-me-browsing Web browser, you’d have a partition for that, too,” Reynolds added, referring to the hypervisor’s capability of easily divvying up partitions. “You’d just load what you need and go.”

Reynolds says the revolution promised by Xen’s hypervisor software could be realized within five years. The era of a single operating system for each desktop might join the ranks of other computer nostalgia like DOS, monochromatic CRT displays or floppy discs.

“It’s a three year transition,” Reynolds acknowledged. “By 2010 everyone will expect hypervisor in their system.”

VMware: plenty of life after EMC

Quoting from Business Week:


EMC Corp. has done about a dozen acquisitions since 2000 in a bid to reinvent itself as more than just a data storage hardware maker. But eyebrows were still raised when it announced a $635 million cash deal for privately held VMware back in December, 2003.

It wasn’t so much the price tag — recent deals for Legato Systems and Documentum were over $1 billion each. But VMware, which makes software that allows companies to move computing tasks among servers, making the most out of their information-technology investments, was a relative unknown outside techdom. And it wasn’t exactly an intuitive fit with EMC’s (EMC) storage business.

INDIE SPIRIT
In hindsight, it may have been one of the smartest deals it made. In VMware’s first full year as part of EMC, the unit racked up the most license revenues of any recent acquisitions, according to VMware. And it’s growing like a weed, doubling revenues year-over-year, generating $80 million in sales in the first quarter of 2005 alone.

The success is arguably due to EMC not messing with a good thing. EMC kept the division as an independent subsidiary, still headquartered in Palo Alto, Calif. It’s still run by Diane Greene, who founded the company in 1998 along with her husband, Mendel Rosenblum, a Stanford University professor, and several of his graduate school students.

BusinessWeek Online reporter Sarah Lacy caught up with Greene recently to talk about the early days of VMware, why she decided to sell, and why things have gone so well since.
Edited excerpts of the conversation follow:

Q: What was the original idea behind the company?
A: We founded VMware at the beginning of 1998. There were five of us, and I was the only one on the management business side, but I came from a technical background too. There was an understanding that there were some fundamental problems with systems isolation, system compatibility, and they were trying to figure out how to solve these problems. The insight we had was to revisit the notion of virtual machines. If we could bring virtual-machine technology to the 1990s and run it on the server, that would be very high value.

We announced beta [of our first product] in early 1999, and in a couple months had 75,000 people download it. Then we started working on the server product. We brought out server product a couple years later. We introduced a way to partition up a big machine so you could run applications without running into problems. We announced our first partnership with IBM (IBM) in 2001 and partnered with HP (HPQ) and Dell (DELL) also to bring that product out.

Q: What was your financial situation like by then?
A: We were pretty much cash-neutral. We put a desktop version up on our Web site for sale — we didn’t have a sales force. It was about $299, and it basically left us cash-neutral. We did take [venture-capital] money, but we actually never used it.

Q: So if things were going so well on your own, why did you sell to EMC?
A: We had been profitable for a couple of years, and we were getting ready to go public, when all the sudden there was a tremendous amount of interest in acquiring us. It was just a matter of looking at what made the most sense for everyone at the company. You take a certain amount of risk off the table for everyone in an acquisition and you avoid all the Sarbanes-Oxley and whatnot of an IPO. We just didn’t want our momentum to be broken for a second.

And it worked out. It has been a remarkably successful acquisition. We announced it Dec. 15, and we closed it Jan. 9. Then that year we were able to more than double our revenues and double our headcount. It shows it was actually a very good decision. It’s like we didn’t skip a beat.

Q: Since your software runs on any hardware, and you rely on partnerships like IBM, HP, and Dell, was it a concern when you were bought by a hardware company?
A: It would have been almost untenable had they been a server company — a huge value is that it runs on every kind of hardware. And it would have been untenable to be bought by an operating system company, so those two kinds of acquisitions didn’t make any sense.

Q: Did they try anyway?
A: Well one that’s publicly known is Microsoft (MSFT), but the rest is all under NDA [nondisclosure agreements].

Q: Has any of your success come as a result of being part of EMC or was it all stuff VMware would have been able to accomplish otherwise?
A: Well you can never say, but I think it’s possible we were able to expand our international sales force more rapidly, even though it’s completely independent from EMC.

Q: It seems obvious what EMC got out of it. What has VMware gotten out of the deal?
A: We’re able to go full-bore and to be very bold in a way you can’t [as a stand-alone public company], with shareholders monitoring you on a quarterly basis. The good news is we’re thriving, we’re growing, bringing out great products. It’s a great thing that’s going on here, so I don’t tend to microanalyze and ask, “What did we get out of this?” It doesn’t matter.

Q: Were you worried when you first started talking about being acquired? How did you negotiate to remain autonomous?
A: Well I actually came to the conclusion that all bets were off. [Being acquired] wasn’t my first choice, but I actually decided to do it because so many other people thought it was the right thing to do. Once we decided to do it, I said, “Who knows? They’re going to pay a lot of money for us. They’ll do what they want.”

But then, as we started getting ready to implement and announce the acquisition, we all rolled up our sleeves and started rationally asking, “What has to happen for this to keep working?” It made sense to stay separate. EMC was wonderfully behind us. [EMC CEO] Joe Tucci saw the strategic reasons for doing what we did. IBM and HP are very important partners to us, so we didn’t see any reason to integrate and we saw all kinds of reasons to act as an independent subsidiary. But I never negotiated it.

There’s an immense pressure from the financial community to say, “Where are the synergies?” But we just sort of tuned that out and said, “We’re going to do what makes sense.” Now everyone endorses it because it turned out well.

CherryOS: it’s all over

Quoting from Engadget:


First they put it “on hold” last month, then they announced they were going to relaunch it as an open source project, and now the genius behind CherryOS, that OS X emulator which apparently “borrowed” some code from an earlier emulator by the name of PearPC, has decided to pull the plug on the entire ill-fated venture.
Arben Kryeziu says that it simply “was not ready”, which we believe is a euphemism for “I didn’t feel like getting sued by the people behind PearPC”.

Battle of the X64 platforms

Quoting from IT Jungle:


The X86 platform has long dominated both the server and workstation markets in terms of shipments, but in engineering and features it has continued to lag RISC/Unix and proprietary alternatives for years. While the popularity of X86 platforms and the intense competition they have brought to the market have sucked a lot of the revenue and, more importantly, a lot of the profits from the server business, creators of non-X86 platforms have, to their credit, run to higher ground, adding features and functions to their systems that the X86 could not deliver.

With the advent of a rapidly maturing X86 market as embodied in the new 64-bit X64 alternatives from Intel and Advanced Micro Devices, the competition looks to be getting even more intense. The few remaining RISC/Unix and proprietary platforms that are economically viable are going to start feeling even more pain now. That does not mean there is no longer room for alternative platforms; there most certainly is. But it is going to be very hard to bring them to market and make money.

The X64 architecture is not one but two different architectures that run the same instruction set and therefore support the same code base. There are gross similarities between the architectures — there have to be, because of the nature of chip process technology and what economic and technical forces make you do — but there are a number of genuinely different things that Intel and AMD are putting into their X64 platforms.

The main features that define the evolving X64 platforms are 64-bit memory extensions, the use of multiple cores and simultaneous multithreading on chips, integrated instruction set virtualization, power management, chipsets, and raw performance.

For the past five years or so, RISC/Unix platforms have included some form of hardware-assisted virtualization, using either virtual or logical partitions riding on top of a hypervisor layer that abstracts the processor instruction set such that virtual machine partitions equipped with their own operating systems think they are running a whole machine even though they are getting only a slice of it.

With future Xeon and Opteron processors, Intel and AMD are introducing hardware-assisted instruction set virtualization to make virtualization run more smoothly and without consuming as many resources as it does today.

There are limits to what Intel and AMD can do with virtualization on the chip, however, with current chip process technologies. The virtualization features that come with Intel’s Virtualization Technology or AMD’s “Pacifica” technology, due respectively in the “Montecito” Itaniums and future Xeons from Intel and in future Opteron processors from AMD, are only implementing instruction set virtualization in the chip rather than in VMware’s ESX Server hypervisor, Microsoft’s Virtual Server 2005 hypervisor, or the open source Xen hypervisor. However, to make a virtualized workstation or server environment, you have to virtualize memory–carving up a gob of main memory into pieces for each virtual machine and making sure that virtualized servers share memory for common functions so they use memory efficiently. Similarly, the virtualization software also has to do I/O virtualization, providing disk and network I/O access for each partition. These last two features are not going to be embedded in processors for a long time–perhaps years. They will be embedded in systems eventually, however, in some form. It is the nature of the IT industry to do this wherever possible. It is a question of transistor counts and standardization.
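Once these extensions ship, operating systems and hypervisors will detect them through CPU feature flags. On Linux the quickest check is to look for the vmx (Intel VT) or svm (AMD Pacifica/SVM) flags in /proc/cpuinfo; here is a minimal sketch:

    # hwvirt_check.py - minimal sketch: report hardware-assisted virtualization flags on Linux.
    def hw_virt_flags(cpuinfo_path="/proc/cpuinfo"):
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags = set(line.split(":", 1)[1].split())
                    return {"intel_vt (vmx)": "vmx" in flags,
                            "amd_pacifica (svm)": "svm" in flags}
        return {}

    if __name__ == "__main__":
        print(hw_virt_flags())   # e.g. {'intel_vt (vmx)': True, 'amd_pacifica (svm)': False}

Note that the flag only says the processor supports the extension; memory and I/O virtualization still have to be handled by the hypervisor, as described above.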

A Look at Guest PC 1.2

Quoting from OS News:


Guest PC is an emulator of the x86 PC for the Mac OS X platform. We had a quick look at the product and we compared it to VirtualPC 6.1 that we also happened to have in-house.

We used a dual PowerMac G4 1.25GHz with 2 GB of RAM and an ATi Radeon 9200 Pro, Mac OS X 10.3.9 as the host OS and Windows XP Pro as the guest OS. GuestPC supports DOS and Win95/98/ME/NT/XP/2k, and it may be able to boot other OSes too (we didn’t test this). When creating a new virtual PC image you can select from 300 MB to 32 GB of space for your guest OS, and from 32 MB to 512 MB of memory. Then you click the “start OS installation” button and it fires up the emulated PC, reads from your CDROM and installs the OS. It took 2 hours to install Windows XP Pro on this machine.
After the installation, Windows loaded to a full desktop in about 1.5 minutes. There are some add-on drivers that GuestPC can install to extend the experience, so we installed these too. GuestPC’s interface is really simple: there are a few icons at the bottom of the OS window showing activity on the peripherals and the CPU load. Using Command+ESC you can release the mouse cursor from the emulated window.

I used Guest PC for over a month and I must say that it’s rock solid. If GuestPC has one great feature, it is stability.

I am not happy with the performance though. The performance GuestPC gives me on this dual G4 is pretty much the same that VirtualPC 5 was giving me on a 450 MHz Cube two years ago. Loading Notepad or IE takes a few seconds, while in VirtualPC 6.1 they are almost instant on the same machine (especially Notepad). I found GuestPC to be 2-3 times slower than VirtualPC 6.1 in normal everyday usage (IE, Office, PaintShopPro etc.). I hear that GuestPC really shines when emulating Windows 98SE on a G5 system instead. Maybe that is the case, but the reality is that most Mac users own G4s, and they are more interested in emulating XP than the old and unstable Win98.

Graphics performance is pretty bleak too; there are lots of ugly redraws going on, as the emulated graphics card is a Cirrus Logic 5440 with 4 MB of VRAM. The funny thing is, I actually used to have one of these cards in “real” life (not under emulation), and it was not too bad in 2D speed at the time; in fact, its 2D speed was better than that of the 16 MB S3 Trio that VirtualPC emulates. The main problem with the emulated graphics card here is that it only emulates 4 MB, so GuestPC does not allow for more than XGA resolutions. I own a good 21″ SONY CRT monitor that can go all the way up to 2048×1536 (and 1600×1200 at 85 Hz), but GuestPC can’t take advantage of it. When I use the “full screen” option, I have to use GuestPC’s OS in XGA, which is a shame as my monitor is capable of a lot more.
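The XGA ceiling follows directly from framebuffer arithmetic: assuming an uncompressed framebuffer at 32 bits per pixel, 1024×768 needs roughly 3 MB, which fits in the emulated card’s 4 MB, while anything larger does not. A quick back-of-the-envelope check:

    # Framebuffer size: width * height * bytes per pixel (assuming 32-bit color).
    def framebuffer_mb(width, height, bits_per_pixel=32):
        return width * height * (bits_per_pixel // 8) / (1024.0 * 1024.0)

    for w, h in [(1024, 768), (1280, 1024), (1600, 1200)]:
        print("%dx%d: %.1f MB" % (w, h, framebuffer_mb(w, h)))
    # 1024x768:  3.0 MB  (fits in 4 MB)
    # 1280x1024: 5.0 MB  (does not fit)
    # 1600x1200: 7.3 MB  (does not fit)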

GuestPC can’t save the state of the PC like VirtualPC does, but it can use Hibernation. Recovering from sleep with VirtualPC takes 5-8 seconds (suspension to disk, essentially), while recovering from hibernation with GuestPC can take up to 20-25 seconds. And what’s the deal with the “shut down” and “turn off” alert window? What’s the freaking difference on an emulated PC?

And a feature request: support VirtualPC’s image files (or include a utility that converts them to the GuestPC format). This can be an incentive for an old VirtualPC user to upgrade to GuestPC.

In conclusion, I have one thing to say to the GuestPC guys: bring down the price. GuestPC costs $70, but it offers a lesser experience than the $130 Virtual PC 7. Since I don’t see why someone would pick GuestPC over VirtualPC (at least for Windows guest support) or the free Qemu, the incentive should be an even better price. A price of around $40 is more like it. Or, further optimizations must occur.

Overall: 6/10

Is VMware playing unfair about learning?

As you probably know, about a year ago (July 2004) VMware opened a training program for companies worldwide. IT professionals already certified as VMware Certified Professionals (VCP) with a fairly high exam score (85%) could become Certified Instructors, and companies meeting certain requirements and paying an annual fee could become VMware Authorized Training Center (VATC) Partners.

Details about how to become an Instructor have never been disclosed on the web site. Only direct contact with VMware can clarify what is needed.
How many of you were, or are, actually interested in becoming VMware instructors? How many of you invested in trying to obtain the certification? I know a few…
Details about how to become a VATC Partner remained undisclosed as well. But in this case, not even a direct call to VMware can clarify the requirements for achieving this special status.
How many of you were interested in receiving official training in your national language instead of English? I know a few…

Impossible? Well: my company wanted to become the VATC Partner for Italy (note that we are already both a VMware Enterprise VIP and a VMware Core Customer) and I wanted to become a VMware Certified Instructor. I contacted VMware asking for the requirements and, after keeping me on hold for _FOUR MONTHS_, they notified me that the status had been granted to another company.
OK, I have no problem having competitors, but I am interested in knowing the VATC requirements… VMware let me clearly understand that my company will not obtain VATC status, since there simply are _NO REAL REQUIREMENTS_ anyone can try to meet.

This is really incredible to me. It’s incredible that I had to wait four months for an elusive answer. It’s incredible that VMware created a partner program no one can join (or at least not in a democratic way). It’s incredible that VMware chooses not to spread training about its own technology.

Now, checking the official web site for the VATC Partner Program URL, I just discovered that every reference has disappeared… Only the German press release survived.
So should I guess that the VATC Program has been shut down without notice? If so, why did VMware just designate an Italian VATC Partner?

Really incredible. And really unfair.