% fortune -ae paul murphy

Consolidation (Part One)

Every day, or almost every day, somebody talks to me about server consolidation and the need to get utilization rates up. That's a big deal in the general press too: Sun's president, for example, keeps blogging about how Solaris containers can help drive utilization. In real life containers are a marketing morph on trusted communities, and those are very useful, but his specific arguments on utilization are an appeal to ignorance, aimed far more at making sales than sense.

I don't want to imply that there isn't an argument to be made for getting server utilization up; there is, but it's not always appropriate and, where it is, neither virtualization nor partitioning is likely to be the right way to do it.

First, consider that the overwhelming majority of the computers whose function makes us call them "servers" are used to provide a direct service to users -and whether that's email, SQL data access, or file and print doesn't matter. What the user wants from the machine is instant response -and the faster the better. No user cares much about another user's schedule: they want their result, and right now. To deliver that we need lots of capacity on standby, not busy doing some other user's job: idle, and ready to go to work on demand. That's why most of our servers are idle upwards of 90% of the time, and too slow the other 10% or less of the time.
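
To see why sizing for instant response drives utilization down, here's a minimal back-of-the-envelope sketch in Python; the hourly demand profile is an assumption invented for illustration, not a measurement from any real server:

# Sized-for-peak arithmetic: a server provisioned to absorb its busiest hour
# without making users wait is, by construction, mostly idle the rest of the day.
# The demand profile below is assumed purely for illustration.
hourly_demand = [1, 1, 1, 1, 1, 1, 2, 5, 10, 90, 15, 8,   # requests/sec by hour
                 40, 10, 6, 5, 4, 3, 2, 2, 1, 1, 1, 1]

peak = max(hourly_demand)                          # capacity needed for instant response
average = sum(hourly_demand) / len(hourly_demand)  # what a utilization report sees

print(f"peak demand:         {peak} req/sec")
print(f"average demand:      {average:.1f} req/sec")
print(f"average utilization: {average / peak:.0%}")  # roughly 10% here; the rest is standby, not waste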

When partitioning started, in the sixties, a machine with 128K of memory and two 5MB disks cost over two million bucks -and took over 200 people earning around $4K/year to babysit. Memory management was critically important, but weak and poorly understood for large applications. As a result, hardware partitioning made sense as a way of keeping developers from bringing down production without having to buy and operate a second machine.

For the same reasons a different solution to the problem, systems virtualization, made sense too. Both solutions reduced the risk of program collision and both fit the context of the time. Of course, in those days, an electronic data processing machine cost the equivalent of 4,000 man-years of labor and there weren't any users in the modern sense outside the academic and then-emerging mini-computer markets. Today the typical server costs less than a person-month, users depend on interactive services, and schedulable batch processing retains its primacy only among those too tied to the mainframe to change.

So let's push a little reality into this consolidation stuff, shall we? What matters now is user satisfaction and service. So next time someone complains to you that your servers sit idle a lot, ask them why it matters - and point out that making productive users wait, even a second a shot, also has a cost - one that overwhelms the cost of the server in a matter of months if not weeks.
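
To put rough numbers on that claim, here's a short Python calculation; every figure in it (head count, interaction rate, labor rate, server price) is an assumption chosen for illustration, not data from this column:

# Rough arithmetic, with assumed numbers: what a one-second delay per
# interaction costs in wasted user time versus what a server costs.
users = 200                     # people hitting the service (assumption)
interactions_per_day = 120      # interactions per user per workday (assumption)
delay_seconds = 1.0             # the "second a shot" in the text
loaded_cost_per_hour = 40.0     # fully loaded labor cost, $/hour (assumption)
workdays_per_month = 21

wasted_hours_per_month = (users * interactions_per_day * delay_seconds
                          * workdays_per_month) / 3600.0
wasted_dollars_per_month = wasted_hours_per_month * loaded_cost_per_hour

server_cost = 6000.0            # assumed price of a typical server today

print(f"waiting cost: ${wasted_dollars_per_month:,.0f}/month")
print(f"months to overtake a ${server_cost:,.0f} server: "
      f"{server_cost / wasted_dollars_per_month:.1f}")

With these assumptions the waiting cost works out to about $5,600 a month, which passes the price of the server in just over a month; change the numbers to fit your own shop and the conclusion rarely moves much.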

None of this means, however, that you shouldn't consolidate. As I'll discuss next week, there are reasons to do it and methods that make sense -but not everywhere and not through partitioning or virtualization.


Paul Murphy wrote and published The Unix Guide to Defenestration. Murphy is a 25-year veteran of the I.T. consulting industry, specializing in Unix and Unix-related management issues.