Battle of the X64 platforms

Quoting from IT Jungle:


The X86 platform has long dominated both the server and workstation markets in terms of shipments, but in terms of engineering and features, it has continued to lag RISC/Unix and proprietary alternatives for years. While the popularity of X86 platforms and the intense competition they have brought to the market have sucked a lot of the revenue and, more importantly, a lot of the profits from the server business, creators of non-X86 platforms have, to their credit, run to higher ground, adding features and functions to their systems that the X86 could not deliver.

With the advent of a rapidly maturing X86 market as embodied in the new 64-bit X64 alternatives from Intel and Advanced Micro Devices, the competition looks to be getting even more intense. The few remaining RISC/Unix and proprietary platforms that are economically viable are going to start feeling even more pain now. That does not mean there is no longer room for alternative platforms; there most certainly is. But it is going to be very hard to bring them to market and make money.

The X64 architecture is not one but two different architectures that run the same instruction set and therefore support the same code base. There are gross similarities between the architectures (there have to be, given the nature of chip process technology and what economic and technical forces make you do), but there are a number of really different things that Intel and AMD are putting into their X64 platforms.

The main features that define the evolving X64 platforms are 64-bit memory extensions, the use of multiple cores and simultaneous multithreading on chips, integrated instruction set virtualization, power management, chipsets, and raw performance.

For the past five years or so, RISC/Unix platforms have included some form of hardware-assisted virtualization, using either virtual or logical partitions riding on top of a hypervisor layer. The hypervisor abstracts the processor instruction set so that virtual machine partitions, each equipped with its own operating system, think they are running a whole machine even though they are getting only a slice of it.

With future Xeon and Opteron processors, Intel and AMD are introducing hardware-assisted instruction set virtualization to make virtualization run more smoothly and without consuming as many resources as it does today.

There are limits to what Intel and AMD can do with virtualization on the chip, however, with current chip process technologies. The virtualization features that come with Intel's Virtualization Technology or AMD's "Pacifica" technology, due respectively in the "Montecito" Itaniums and future Xeons from Intel and in future Opteron processors from AMD, implement only instruction set virtualization in the chip, taking over that one function from VMware's ESX Server hypervisor, Microsoft's Virtual Server 2005 hypervisor, or the open source Xen hypervisor.

However, to make a virtualized workstation or server environment, you also have to virtualize memory: carving up a gob of main memory into pieces for each virtual machine and making sure that virtualized servers share memory for common functions so they use it efficiently. Similarly, the virtualization software has to do I/O virtualization, providing disk and network I/O access for each partition. These last two features are not going to be embedded in processors for a long time, perhaps years. They will be embedded in systems eventually, however, in some form; it is the nature of the IT industry to do this wherever possible. It is a question of transistor counts and standardization.