% fortune -ae paul murphy

Virtualization? uh huh.

Let me see if I've got this right:

Systems virtualization, as most people practice it, consists of booting multiple copies of a guest OS into memory and switching between copies as needed.

The boot and switch process itself is handled by a third OS - the hypervisor - which monitors the guest OS(es) and input sources such as the network card(s).
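In practice that looks something like this - sketched here with qemu, a hosted monitor rather than a bare-metal one, but the dance is the same; guest1.img and guest2.img are stand-in names for guest disk images you'd have built already:

    % qemu -m 512 -hda guest1.img &
    % qemu -m 512 -hda guest2.img &

Each qemu process boots its own copy of the guest OS into memory; the host OS plays monitor, time-slicing between the two and multiplexing the real network card underneath them.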

The driving factor behind the first really successful virtualization OS, IBM's VM, was the need to prevent accidental interaction between user processes - without having to dedicate an expensive machine to each process.

When NT came along in the mid-nineties it had much the same problem OS/360 demonstrated in the early days of combined COBOL and assembler programming: an exponential increase in complexity and failure rates as more applications were added to the machine. IBM's response then, conditioned by the $3 million plus price tag for a 360/370 class server, was to split the machine - first via hardware switching in the LPAR (logical partition) construct and then via CP-level software switching, and thus VM. Wintel's response, in contrast, was conditioned by $10K boxes and therefore consisted of getting more boxes - and running only one application per server.

As the Wintel and mainframe cultures started to merge, the mainframe focus on getting the maximum possible utilization from a multi-million dollar machine affected Wintel thinking. So we got first dual boot and then, as memory got cheaper, Wintel systems virtualization designed to raise hardware utilization from the traditional five to ten percent to at least the seventy to eighty percent range by running multiple concurrent applications, each under its own OS instance - consolidate eight servers idling at ten percent apiece and you're at eighty.

It seems to me, however, that while the latest Windows server OSes are pretty limited relative to Unix, they're capable of running several concurrent, but non-competing, applications. In other words, as long as you don't put two SQL Server instances (or Notes and Exchange) on the same machine, the two most basic rationales for partitioning and virtualization - server cost and the need to prevent one application from causing another to fail - simply don't apply.

So why incur the overhead and costs of doing it?

In Unix, of course, the problems that drove partitioning and virtualization never existed, and so the technology didn't develop until Sun and HP started selling larger servers to mainframers who insisted that splitting a five million dollar Sun 10K five ways made more sense than buying five smaller machines at $200K each - and keeping the four million in change.

Since then Unix virtualization efforts have gone in two very different directions.

First, the N1 resource virtualization technology for Unix did the exact opposite of traditional virtualization: tying together a large number of machines in a data center to form a more easily manageable virtual computer consisting of all of them. Microsoft's data center management tools do something similar at the rack-mount level for multiple Wintel servers.

And, second: Solaris containers (and now zones) virtualize devices (including file systems) within one OS instance to provide a user and/or application grouping that fits above "group" and below "everyone" in the traditional Unix everyone-group-user hierarchy.
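If you haven't played with zones, the whole exercise is about half a dozen commands - roughly this, with "webzone" and its path being names I've made up for illustration (see zonecfg(1M) and zoneadm(1M) for the real story):

    # zonecfg -z webzone
    zonecfg:webzone> create
    zonecfg:webzone> set zonepath=/zones/webzone
    zonecfg:webzone> commit
    zonecfg:webzone> exit
    # zoneadm -z webzone install
    # zoneadm -z webzone boot
    # zlogin webzone

One kernel, one OS instance to patch - but the application inside webzone sees its own file systems, its own process table, and its own network identity.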

Go down the rabbit hole to Wintel land and you'll find a parallel to this: a major Wintel virtualization supplier whose use of a Linux kernel means that some applications, like BEA's web server infrastructure, can run directly under VMware - no additional guest OS required.

But what do these two technologies - one representing large numbers of machines as one virtual machine, the other eliminating the need to run multiple OS instances to achieve virtualized application environments - mean for adherents to traditional OS ghosting in the land of the $10K machine? I think they mean that virtualization is popular because it was popular - and not because there's a practical reason to do it.


Paul Murphy wrote and published The Unix Guide to Defenestration. Murphy is a 25-year veteran of the I.T. consulting industry, specializing in Unix and Unix-related management issues.