Managing Unix

by Paul Murphy

What would your answer be if a selection team charged with hiring a new CIO to develop and implement an organization-wide "strategic systems architecture" were to ask you what management considerations most differentiate use of Windows from use of Linux?

The right answer, I think, is that the more fully the organization implements the Unix Business Architecture (UBA; explained below), the smaller and more outward-facing the Systems organization can be; conversely, the more fully it implements Microsoft's client-server architecture, the larger and more inwardly focused the IT organization has to be. Explaining that answer, and the terms it uses, is what this column is about.

First, let's look at the terminology. I use the term "UBA," or Unix Business Architecture, to mean a Unix "pure play": centralised processing, desktop smart displays like Sun's Sun Ray 1g, simplified networking, and formal management support for people who use any Unix variant - including Mac OS X, Linux, and BSD - at home. In this environment all processing happens on the server, with the desktop device limited to handling user interaction. As a result things always work: what shows up on a user's desktop depends on the user, not on the display or its location; application support is handled by lead users; there is no help desk; and security issues never reach users.

As a result the CIO in a pure Unix environment can metaphorically "face outward": looking beyond day-to-day operations in the data center to focus on business issues and user services, while committing only a minimal number of people to housekeeping tasks like server and applications management.

In contrast, Microsoft's client-server architecture puts a powerful computer on the user's desk but uses it for little more than storing the user's home environment and managing the interface to many small servers run by the data center. Notice that client-server is client-server whether those desktops run a Windows variant, Linux, or Mac OS X: the critical differences between the two architectures derive from the presence of a desktop computer capable of stand-alone use, regardless of what operating system that machine runs. In practice, of course, most client-server implementations rely on Microsoft software running on Intel-based gear, but an implementation built using Linux servers and desktops would still be client-server - and thus ultimately fall heir to the same problems.

Originally the idea behind client-server was that desktop cycles would be used to process data stored on servers. In practice, however, the realities of data backup, network failure, client software control, and the improbability of transaction serialisation in a highly distributed network have turned implementations of this idea into something of a sham: service centralisation scales easily, but true client-server doesn't.

As early as 1992, therefore, smart display maker NCD recognised the value of the web browser as a kind of universal display client and started offering Mosaic in either server-downloadable or ROM-loadable form to augment its standard X/PostScript interfaces. NCD was well ahead of its time on this, but by 2000 the impact of repeated attempts to make PC-based client-server work was clear enough that a statistical analysis of various client-server management strategies carried out by Julie Smith David, David Schuff, and Robert St. Louis ("Managing Your IT Total Cost of Ownership," Communications of the ACM, January 2002, pp. 101ff) showed that costs vary inversely with control intensity and management centralisation - meaning that the more tightly the PC desktop is controlled - effectively morphing client-server back into the mainframe architecture it replaced - the cheaper and more reliable it gets.

Of course, building a mainframe data center on Wintel isn't particularly efficient, and it has organisational costs that go beyond simple systems issues. For example, use of the Microsoft client-server environment typically means that users have to carry their computers with them when they leave their desks; that systems are considered so untrustworthy and transient that users compromise the value of large-scale software like ERP/SCM packages by using only the most basic parts of the package; that the network software stack gets so complex that the reboot/reload cycle replaces failure analysis and remediation; that the expectation of more or less random security failure is integrated into daily operations; that everything related to IT exists in a perpetual state of roll-out or upgrade; and that systems management has to provide a well-staffed help desk simply to ensure that users can get their clients started up and connected to the central service. As a result the CIO in a Microsoft client-server environment has to be "inwardly focused": concentrating on firefighting in the data center and totally committed to the daily struggle to provide the most basic of all forms of user support - simply keeping things running in the face of whatever today's security, software, budget, or staffing crisis may be.

Of course, both the UBA and the fully implemented Microsoft architecture represent extremes. In reality most organisational systems deployments fall somewhere between the two, and it's the need to drive that in-between world to a resolution in one direction or the other that should inform your answer to the hiring committee. In formulating that answer, what you need to talk about most are the managerial consequences of each choice - not in terms of technology, but in terms of how that technology affects daily IT and business operations.

You need to pick one architecture and stick with it for all servers and upwards of 95% of your desktops, because the most common situation - in which the business relies on a few Unix servers to provide truly critical services in an otherwise predominantly Microsoft environment - is also the worst possible one. What happens in this situation follows a well-understood pattern: at first the Unix systems carry most of the real workload, but the staff involved become increasingly disconnected from the Wintel majority around them and eventually leave. The MCSEs put in charge of the Unix servers then "administer" them as if they were Windows servers, applying Windows certainties through Windows telnet or some other third-party Unix management package for Windows, with the result that the cheapest and most reliable part of the IT operation is quickly transformed into its most expensive and least reliable piece.

It's not unusual, in the late stages of this transformation, to see rock-solid combinations such as an older Oracle release on SPARC, or PostgreSQL on Linux, going down daily while a cloud of MCSEs bad-mouth Unix and struggle to rescue the company from it by porting the database to Microsoft's SQL Server. Look at this process from a CIO's perspective, and the inevitability with which the numerical majority of Wintel people drives out the Unix gurus means that such mixed environments should be seen, up front, as virtual Microsoft pure plays with rapidly metastasizing cost and reliability cancers at their hearts.

Thus a decision one way or the other has to be made - and the decision to go with Microsoft is by far the easier one, because users won't question it and you won't have to lay off most of the IT staff or rebuild the relationship between IT and its users. Doing the right thing is much harder - and a lot less rewarding in terms of your corporate visibility, career, and long-term salary expectations. Remember, this kind of data center disaster doesn't grow out of any Wintel technical advantage or peer pressure from Windows advocates: what kills data center Unix is success.

IT management, like all management, focuses where the squeak is - but only for the duration of the emergency. As a result, Unix people and technology brought in to solve an urgent software, technology, or cost problem become invisible to senior IT managers as soon as the crisis ends. The rule is simple: no squeak, no grease. IT managers inundated by the daily cacophony of Wintel support grow that side of their business while ignoring what works, simply because it works.

Eventually, of course, the Unix people move on, leaving the MCSE crowd in control of the systems. The consequent eruption of reliability, performance, and systems recovery crises refocuses IT management's attention on Unix, but by then it's usually too late to save the data center without major changes in staffing and direction.

That's also the biggest long-term problem facing the Unix CIO: it's not easy to transition an organization to the UBA successfully, but once you succeed senior management will forget all about IT, unconsciously deleting it, and you, from their agenda - meaning that the most effective organisational CIOs also tend to be the least visible, least appreciated, and least promoted.

With that image in mind you can look the committee in its collective eye and say that the real consequence of the architecture decision isn't whether they should continue with a mixed environment - if that's what they have, it has to go - but what happens afterwards. That, you can tell them, boils down to whether you'll be getting in their faces just to remind them of your existence, or meeting them at sessions they've had to call to discuss the budgetary or functional compromises needed to cope with the latest Wintel crisis.


Paul Murphy wrote and published The Unix Guide to Defenestration. Murphy is a 20-year veteran of the IT consulting industry.