% fortune -ae paul murphy

Virtualization: IT's own global warming

The argument for PC-style virtualization, in which a hypervisor switches between OS/application bundles on demand, is extremely simple: license something like VMware ESX to run multiple copies of the OS - and therefore strongly separated applications - on a single machine, and you can use one server for several jobs, thus achieving hardware, power, and space savings.

Great, except that:

  1. the incremental software cost usually exceeds the savings in hardware, space, and power - directly, because you have to pay for the VM licensing and support, and indirectly, because the OS and application licenses needed for the larger memory or multi-processor machines most people select as virtualization hosts usually cost more;

  2. the biggest IT cost source is labour, and since that's mainly determined by the number of pieces of software you have to support, it goes up rather than down when you do this - because you're bringing the application-to-OS ratio closer to one-to-one and then adding at least one more piece, the virtualization software, per surviving server;

  3. the use of the same server for multiple OS/application bundles increases both the likelihood of failure and its cost - because there's more to go wrong, you're placing more stress on the system, and a shutdown affects more users; and,

  4. the entire utilisation argument, borrowed from data processing, is simply wrong: with real users what counts is capacity available on demand, and virtualization reduces that.

The most important objection, however, is a practical one: there's a much better way - actually, two of them.

In both cases the better way is simply to let the OS do its job - and if yours isn't up to the job, change to Solaris, because it is.

Part of the issue here is that the popularity of PC-style virtualization responds to issues the Unix community has never had: the pretend professionalism implicit in copying data processing's commitment to partitioning, and the nineties NT manager's learned aversion to trusting NT with more than one application at a time.

Notice that both of these illustrate what happens when people refuse to adapt as reality changes: the cost and memory-management issues that drove partitioning and VM in the 1960s were history by the late seventies, and today's Windows servers can easily handle a number of concurrent applications, provided that the load process leaves the registry in a consistent state and no more than one application hacks it during operations.

Solaris handles the job in several ways. The most effective is simply to load your applications and run them - using the parameter-passing functionality built into start-up script processing to ensure that you don't have port, storage, or shared memory conflicts.
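By way of illustration only - the binary name, flags, and paths below are all invented, not any real product's interface - a traditional rc-style script can take the port and storage area as parameters, so each instance you start gets its own:

    #!/bin/sh
    # Hypothetical /etc/init.d/appserver sketch. Assumes an application
    # binary /opt/app/bin/appd that accepts -p (port) and -d (data
    # directory) flags; every name here is illustrative.

    PORT=${2:-8080}                      # per-instance TCP port
    DATADIR=${3:-/export/app/default}    # per-instance storage area

    case "$1" in
    start)
            /opt/app/bin/appd -p "$PORT" -d "$DATADIR" &
            ;;
    stop)
            pkill -f "appd -p $PORT"
            ;;
    *)
            echo "Usage: $0 {start|stop} [port] [datadir]"
            exit 1
            ;;
    esac

Start two copies with different port and directory arguments and they co-exist on one kernel without colliding - no hypervisor required.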

The alternative is to use Solaris containers - basically virtualised environments (zones) that share the same kernel.
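Creating one takes only a few commands; in this sketch the zone name, path, interface, and address are made up for illustration:

    # Configure, install, and boot a zone (Solaris 10 and later).
    zonecfg -z webzone "create; set zonepath=/zones/webzone"
    zonecfg -z webzone "add net; set physical=bge0; set address=192.168.1.20/24; end"
    zoneadm -z webzone install
    zoneadm -z webzone boot
    zlogin webzone    # get a shell in the zone and load the application as usual

To the application, the zone looks like a machine of its own; to the administrator, there's still only one kernel to patch and one box to power and cool.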

Either way, you get concurrent operation for multiple applications - but only one OS, and so you get the space, power, and hardware savings promised by virtualization without the offsetting administrative cost increases and without necessarily giving up instantaneous user access to the full resources of the machine.


Paul Murphy wrote and published The Unix Guide to Defenestration. Murphy is a 25-year veteran of the IT consulting industry, specializing in Unix and Unix-related management issues.