% fortune -ae paul murphy

Brief: 1

This is the second excerpt from the first book in the Defen series: The Board Member's IT Brief.

From 1.2.1: The consequences of culture

There's a simple bottom line to the difference between science-based computing and data processing - one that ties directly to the biggest silent assumption in computing: that a computer is a computer is a computer. In other words, that people who know how to deploy the management methods appropriate to one form of computing can usefully apply those certainties to other forms.

It just isn't true: put a Unix guy in charge of a data processing shop, or vice versa, and you'll get a mess; put a Windows or mainframe guy in charge of Unix and you'll get a disaster. People do what they know how to do - and in technology usually fight tooth and nail to avoid legitimizing anything else by learning about it.

That's also the real bottom line on the cultural differences between the two groups. Experts can happily argue the cost and normative issues all week, but the fact is that your information architecture must include all four of the major pieces: management processes, context, hardware, and software. Change any one of these without changing the other three and you will get a systems breakdown - just as if you put an 1890s steam engineer in the pilot's seat of a 747. People do what they know how to do - what they've been trained to do, what worked when they were learning the trade - and the reality of this has nothing to do with brand A versus brand B: the behavior you get from systems managers is an artifact of the role they think computing ought to fill, of the methods they're familiar with, and of their expectations about the behavior of others.

The problems you get if you mismatch skills and technology have nothing to do with the quality of the people either. If you ask a world-class downhill ski racer to compete in a cross-country biathlon, the resulting loss will be more your fault than his - not because he isn't an expert skier, and not because he's a bad guy, but simply because the downhiller's skills are all wrong for the job - he may learn about shooting, but his automatic reflexes on balance and braking will cause tumbles and delays on the cross-country course.

That's why understanding computing culture and its implications is critical: you can't play mix and match with the human pieces any more than you can put a part from a jet engine in a steam locomotive.

From 1.2.2: There's no such thing as a pure play

In all three books in this series I make the simplifying assumption that you can tell which information architecture your organization actually has in place, but in reality that's not generally true - and where you can tell, you'll almost always find it isn't a pure play. Thus a pure data processing environment wouldn't have users at all - it would have data consumers (people it delivers reports to) and data producers (people it gets data from). That's literally what their world was like until the early seventies, but there's been an apparent change since then, with today's data consumers and producers communicating directly with the centralized software via locked-down PCs acting as terminals.

Look deeper, however, and what you see is that the change is limited to the method of communication: just as COBOL on the System/360 let data processing continue using card structures despite the replacement of the physical cards with digital records, the paper-based communications methods used before terminals have been replaced by electronic ones, but the underlying structures and management assumptions survive unchanged.
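To make that concrete, here's a minimal sketch in Python (the record layout is entirely hypothetical, not any real system's) of what a card-image file looks like to modern code: the physical card is gone, but the fixed 80-column structure - and everything built around it - survives in the digital record.

  # A hypothetical 80-column "card image" record as it might survive in a
  # digital file long after the physical punched cards were retired.
  # Field positions are illustrative only.
  RECORD = ("0012345" "JONES     " "A  " "19970315" "000104750" "USD").ljust(80)

  def parse_card_image(record):
      """Slice a fixed-width card-image record by column position."""
      assert len(record) == 80, "card images are always exactly 80 columns"
      return {
          "account":  record[0:7],
          "name":     record[7:17].rstrip(),
          "code":     record[17:20].strip(),
          "date":     record[20:28],             # YYYYMMDD
          "amount":   int(record[28:37]) / 100,  # implied two decimal places
          "currency": record[37:40],
      }

  print(parse_card_image(RECORD))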

Similarly a pure data processing shop would accumulate input data as long as possible in order to run the largest possible batch jobs - because that reduces per-unit system resource consumption. Today, however, many mainframe applications appear interactive, with users submitting queries and getting responses more or less in real time.

Look closer and what you see is batches of one being executed in CICS (an IBM toolset introduced in the late 60s) or other software environments specifically designed to facilitate this. Again, the method and timing of data delivery and report transmission have changed, but the fundamentals have not.
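Here's a toy sketch, again in Python (made-up names and data; nothing here models how CICS actually works internally), of the shape of the idea - the same processing logic serving both the accumulated overnight batch and the interactive "batch of one":

  # Illustrative only: one record-processing routine driven two ways.

  def process(record):
      """The business logic - identical however it's invoked."""
      return {"account": record["account"], "fee": round(record["amount"] * 0.02, 2)}

  def overnight_batch(accumulated_records):
      """Classic data processing: hold input as long as possible, run one big job."""
      return [process(r) for r in accumulated_records]

  def transaction_monitor(request):
      """The 'interactive' case: a batch of exactly one, executed on demand."""
      return process(request)

  day = [{"account": "A1", "amount": 100.0}, {"account": "A2", "amount": 250.0}]
  print(overnight_batch(day))                                    # the big batch
  print(transaction_monitor({"account": "A3", "amount": 75.0}))  # a batch of one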

Put these things together with personal computer proliferation and what you get is the typical modern data processing environment: a mix of traditional mainframe processing for some applications and Microsoft's client-server architecture for others. That looks very far from a pure play - but remember that what counts is the management mindset and, in the PC/mainframe case, the two mindsets are merging into a PC-based form of data processing organization: one that tries to implement data processing's traditional agendas using PC tools.

The critical question is always: who's minding the shop? It's the attitude of the people in charge, not the technology, that generally determines the dominant architecture in mixed environments - but there's a crucial exception you need to be aware of.

Many organizations have obviously Wintel (Windows on Intel) based architectures and budgets in place, but on closer inspection turn out to depend on a few large scale applications running on Unix or a surviving competitor such as IBM's OS/400. What has usually happened in those situations is that the Wintel staff has grown as PC use spread in the organization, and the visibility of the people driving this has given them control of the data center's staffing and budget. Meanwhile a handful of other people, invisible to senior management precisely because their stuff works, are actually delivering the bulk of critical IT services.

One of my clients, for example, moved from a late-70s IBM System/36 environment to HP-UX with non-graphics-capable terminals in about 1990 and stabilized operations around the applications they licensed with the HP gear. Over time those became a complete set of ERP (enterprise resource planning) applications that contributed significantly to the company's ability to produce a billion dollars in 1997 revenue.

Unfortunately, the general ugliness of the original terminals, combined with HP's high-cost approach to GUI-capable X terminals and the worldwide hype over the dot dumb boom, led the company to invest heavily in Wintel software starting in 1996.

By 2000 their first commercial ERP implementation (SAP) had failed, and the plans, announced in 1997, to introduce new Windows NT 4.0 servers had to be put aside. Instead, the HP Unix machines were replaced with new HP gear so operations could continue.

The commercial ERP process was restarted in 2001 but, by late 2006, IBM's Global Services had been shown the door after another adaptive failure. Today the company supports over 1,500 PCs with more than 65 IT staff - only two of whom operate the Unix ERP applications that almost every revenue process in the company relies on - now running on IBM AIX (a Unix variant) gear installed during a series of emergency hardware upgrades in late 2004.

Look at this company's IT expenditures or assess IT management's beliefs, and its commitment to Microsoft's centralized client-server architecture is clear - but look at what keeps the company chugging along, and it's two people running Unix on four-year-old gear. Understanding the Windows/Unix staffing ratio is key to figuring out what's really going on in this kind of mixed environment. Windows needs lots of support people working directly with users, while Unix needs very few people, and in mixed environments Wintel management often restricts them to the data center.

That happens largely because of two factors: first, the staffing ratios are enormously different; second, and more importantly, systems that work aren't visible at the executive level, while systems that fail get lots of face time.

For example, the only Fortune 500 company with a 100% Unix architecture, Sun Microsystems, has only about forty IT staff supporting nearly 34,000 Sun Ray Smart Display desktops.

In contrast, pure-play client-server companies typically have staffing ratios in the range of 25:1 (users per IT staffer) - meaning it would take about 1,300 more IT people to support Sun's users if the company switched to the Microsoft client-server architecture.
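The arithmetic behind that estimate is easy to check against the figures quoted above (treating 25:1 as users per support person):

  # Back-of-the-envelope check on the staffing comparison above.
  sun_users = 34_000          # Sun Ray desktops cited in the text
  sun_it_staff = 40           # approximate current IT staff
  users_per_it_person = 25    # typical ratio quoted for client-server shops

  staff_needed = sun_users / users_per_it_person    # 1,360 people
  additional_hires = staff_needed - sun_it_staff    # 1,320 - i.e. "about 1,300 more"
  print(f"{staff_needed:.0f} needed at 25:1; {additional_hires:.0f} additional hires")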

As a result you often see companies doing what my client did: throwing a lot of money at failing projects - thus stalling their own growth and ability to adapt to external change - while underfunding, and underutilizing, the systems that keep the company going. The bottom line, however, is that neither counting the IT staff nor looking at the people best known to the executive group will really tell you what a company relies on: you have to look carefully at what the people who make the money actually use in getting their jobs done every day, and then at who is responsible for that.

Next week: from 1.3 Information Integrity

Some notes:

  1. These excerpts don't include footnotes, and most illustrations have been dropped as simply too hard to insert correctly. (The WordPress HTML "editor" as used here enables only a limited HTML subset and is implemented in ways that force frustrations reminiscent of the CP/M line delimiters MS-DOS inherited.)

  2. The feedback I'm looking for is what you guys do best: call me on mistakes, add thoughts/corrections I've missed or gotten wrong, and generally help make the thing better.


Paul Murphy wrote and published The Unix Guide to Defenestration. Murphy is a 25-year veteran of the I.T. consulting industry, specializing in Unix and Unix-related management issues.