% fortune -ae paul murphy

Data center energy use

It's getting to be impossible to read any major market technology journal or website without running into articles about greening your data center. I'm not in any sense against finding ways to use less energy, but most of these articles get almost everything completely backwards.

What's going on is that the editors and journalists involved legitimately try to leverage today's energy consciousness to get the page reads needed to sell advertising - but many then break the implicit contract between publishers and readers by not providing information or analysis of real value.

In particular, I think there are three areas in which nearly all of the articles you'll find via a Google News search on greening your data center mislead their audiences on one or more counts:

  1. first, most maintain that virtualization, in the PC/IBM sense of running one application under each of multiple OS instances on one computer, offers a positive net contribution to reducing corporate energy use;

  2. second, most ignore energy use for desktop and networking gear; and,

  3. third, most assume that saving energy makes a positive contribution to extra-corporate goals including cleaning up the environment and reducing global warming.

The net result is that if you talk to IT executives about this issue, you'll find that an easy majority have bought into the belief that reducing their data center energy consumption is both good for their employers and good for the planet - when, in reality, no part of that perception is generally true.

I've talked about that third issue before: basically the problem is that a lot of our energy generation and delivery infrastructure is both obsolete and inefficient, so extending its life by reducing your energy demand produces more net pollutants than using more power would - because over-stressing that infrastructure forces renewal and, in the long term, order of magnitude reductions in pollutant production.

The second set of issues is obvious: cutting a few thousand watt-hours out of a data center is nothing compared to cutting a few hundred thousand watt-hours out of desktop power consumption and cooling demand. Pointing out that a Sun Ray draws about 4 watts against 120+ watts for a PC desktop is terribly unpopular, not to say career ending, with the PC press - but it's hardly a difficult calculation to make, or to sell to senior management.
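
To put rough numbers on that, here's a minimal back-of-the-envelope sketch in Python. The per-unit wattages are the 4 and 120+ figures above; the seat count and daily duty hours are purely illustrative assumptions, and the server and cooling load behind the thin clients is deliberately left out:

  # Back-of-the-envelope desktop fleet comparison. Per-unit wattages come from
  # the figures quoted above; seat count and hours are assumptions only, and
  # the server-side load behind the Sun Rays is ignored.
  SUN_RAY_WATTS = 4
  PC_DESKTOP_WATTS = 120

  def fleet_kwh_per_day(seats, watts_per_seat, hours_on=10):
      """Daily draw for a fleet of desktop devices, in kWh."""
      return seats * watts_per_seat * hours_on / 1000.0

  seats = 300                                     # hypothetical mid-sized shop
  pc_kwh = fleet_kwh_per_day(seats, PC_DESKTOP_WATTS)
  sun_ray_kwh = fleet_kwh_per_day(seats, SUN_RAY_WATTS)
  print(pc_kwh, sun_ray_kwh, pc_kwh - sun_ray_kwh)   # 360.0 12.0 348.0 kWh/day

Even at a modest 300 seats the gap lands in the few-hundred-kilowatt-hours-per-day range - more than most data center tweaks will ever deliver.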

The virtualization business is more complex - however, as regular readers know, I love absolutist arguments, and the one on PC/IBM style virtualization is a winner in this category. It takes energy (mostly expressed as hot RAM!) to run those extra OS instances, and because the same applications could share the lesser hardware resources needed to run them faster under a single copy of the OS, it follows that virtualization uses more energy than necessary.
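
To make that absolutist argument concrete, here's a tiny sketch of the "hot RAM" overhead; both constants are assumed, illustrative values rather than measurements, chosen only to show the shape of the calculation:

  # Each extra OS instance pins its own kernel, caches and agents in powered
  # DRAM, and powered DRAM draws watts around the clock. Both constants are
  # assumptions for illustration, not measured values.
  OS_OVERHEAD_GB = 1.5        # RAM an idle guest OS keeps resident (assumed)
  WATTS_PER_GB_DRAM = 0.4     # continuous draw of powered DRAM (assumed)

  def extra_os_watts(os_instances):
      """Power spent just keeping the redundant OS copies in memory."""
      redundant = max(os_instances - 1, 0)    # one OS is needed either way
      return redundant * OS_OVERHEAD_GB * WATTS_PER_GB_DRAM

  print(extra_os_watts(8))   # eight guests vs. one shared OS -> 4.2 watts, 24x7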

That difference, although small, is real - but the case for running many applications concurrently on the same OS is built on something much more compelling than saving a few watt-hours: letting the scheduler juggle multiple applications beats using a VM because the scheduler does a better job of allocating resources to meet varying user needs.

I know some happy MCSEs, for example, who run E-communications and document services on a four-way, eight-core PowerRISC machine split into four domains, each of which runs two SuSE Linux instances. They're currently budgeting for a major hardware/software upgrade largely because the virtual machines they have for some jobs - including some reporting processes, and email during the crush minutes when people come back from lunch, coffee, or overnight absences - routinely bog down enough to cause visible delay and frequent user complaints.

What's happening in each of those cases is that the partitioning combines with the virtualization to reduce an eight-core machine to the equivalent of a single core running at about 1.05GHz with respect to each application - when running them all under a single OS instance would let Linux automatically adjust resources to meet variations in demand and thus give users the responsiveness they're paying for.
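
Here's a toy Python illustration of why pooling wins; the demand figures are invented, expressed in core-equivalents of work per interval, and the only point is the shape of the result - hard partitions strand idle capacity exactly when one application spikes:

  # Toy model: one application spikes (the post-lunch mail rush) while the
  # other seven sit nearly idle. Demand figures are invented for illustration.
  demand = [5.0, 0.2, 0.3, 0.1, 0.4, 0.2, 0.3, 0.5]   # core-equivalents wanted
  CORES = 8

  # Hard partitioning: each OS instance owns one core; spare cycles are stranded.
  partitioned = sum(min(d, 1.0) for d in demand)

  # Single OS instance: the scheduler treats all eight cores as one pool.
  pooled = min(sum(demand), CORES)

  print(partitioned, pooled)   # 3.0 vs. 7.0 core-equivalents of work completed

The spiking application gets one core-equivalent under partitioning and the whole surplus under a shared scheduler - which is exactly the lunch-rush complaint those MCSEs are budgeting new hardware to fix.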

That's one machine, whose set-up abuses users because the people running it learnt their trade wrestling uptime minutes out of applications running on Windows NT, but the same principles apply to larger data centers with hundreds or thousands of machines, each running multiple OS instance/application pairs. Using the OS properly - and this one-application-per-instance business hasn't been necessary even for Windows users since XP/2003 Server - directly reduces overheads and allows for better resource scheduling, and thus better user service - and, incidentally, does it at a marginally lower net energy cost too.


Paul Murphy wrote and published The Unix Guide to Defenestration. Murphy is a 25-year veteran of the I.T. consulting industry, specializing in Unix and Unix-related management issues.