% fortune -ae paul murphy

From Chapter Two: The appliance computing culture

This is the 16th excerpt from the second book in the Defen series: BIT: Business Information Technology: Foundations, Infrastructure, and Culture

Note that the section this is taken from, on the evolution of appliance computing, includes numerous illustrations and note tables omitted here.

Roots

The two characteristics which most significantly drove the development of management methods and hardware in this field were:

Mini-computers as front end processors.
Notice that the term "mini-computer" was used as early as 1953 to refer to machines used as pre-processors on larger systems. Thus the CDC 1604 was often used as a mini-computer (meaning small computer) front end to the CDC 6600, while many academic users of mainframe gear bought mini-computers from Digital Equipment Corporation (DEC) for the same purpose well into the eighties.

This use of minis not only off-loaded user keystroke processing from the mainframes but also had the hidden attraction of allowing use of ASCII peripherals like the DECwriter LA120 printer/terminal instead of IBM's much more expensive EBCDIC gear.

Outside the mainframe world this role for the mini-computer simply disappeared into the integrated circuit design of the modern CPU board.

In its last stand-alone form the serial I/O controller is now called a UART - for Universal Asynchronous Receiver/Transmitter. This single-chip device is essentially a smart buffer that accepts incoming characters, checks them for transmission errors, and reformats them before passing them on to the processor for action.
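To make that buffering and error checking concrete, here is a minimal software sketch - in Python, purely for illustration - of what a UART does with one incoming frame. The frame layout assumed here (one start bit, eight data bits sent least significant bit first, even parity, one stop bit) and the function name are illustrative assumptions, not a description of any particular chip.

    def decode_frame(bits):
        """One serial frame: start bit, 8 data bits (LSB first), parity bit, stop bit."""
        if len(bits) != 11 or bits[0] != 0:
            raise ValueError("framing error: bad start bit or wrong frame length")
        data_bits, parity_bit, stop_bit = bits[1:9], bits[9], bits[10]
        if stop_bit != 1:
            raise ValueError("framing error: missing stop bit")
        if (sum(data_bits) + parity_bit) % 2 != 0:
            raise ValueError("parity error: transmission corrupted")
        byte = 0
        for i, b in enumerate(data_bits):   # reassemble the parallel byte for the processor
            byte |= b << i
        return byte

    # 'A' (0x41) sent least significant bit first, with even parity:
    print(chr(decode_frame([0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1])))   # prints A

Real UARTs do this in hardware, of course, and add speed, word-length, and parity options; the sketch only shows the buffer-check-reformat sequence described above.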

In the mainframe world the machine runs 168 hours a week and hardware costs mean that management lives on the knife edge between being able to schedule all required work into the period and not paying for excess capacity or idle time. Thus capacity managers try to maximize average loads - and batches are very carefully scheduled to maximize capacity use over a 24 x 7 week.

In the mini-computer world most of the work gets compressed into the hours the users work - usually around 44 per week - and IT management's only really important job is to ensure that the service is available during those hours. In the interactive computing world the computer must do most of its work while users wait, meaning that the machine must be configured to meet unpredictable peak instantaneous loads, not weekly or monthly average loads.

That fact produced a fundamental conflict with the data processing view of the world. To people whose gear costs millions of dollars and processes data in arrears, average utilization is the only appropriate measure for capacity planning. To a mini-computer user, however, average utilization is completely irrelevant: what counts is how long the user has to wait for each transaction to complete - even if the machine is completely idle the rest of the time.
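A quick back-of-the-envelope comparison shows why the two camps talk past each other. The numbers below are invented purely for illustration: a machine rated at 1,000 transactions per hour facing 20,000 transactions a week looks largely idle to someone averaging over the 168-hour week, yet can still be too small for the busiest interactive hour.

    capacity_per_hour = 1_000    # rated throughput (assumed)
    weekly_demand = 20_000       # transactions per week (assumed)

    # Data processing view: spread the work over the whole 168-hour week.
    average_utilization = weekly_demand / (capacity_per_hour * 168)
    print(f"average utilization: {average_utilization:.0%}")    # about 12% - looks wasteful

    # Interactive view: the same work arrives in roughly 44 working hours,
    # and the busiest hour might run at three times the working-hour average.
    peak_hour_load = (weekly_demand / 44) * 3
    print(f"peak hour: {peak_hour_load:.0f} transactions against {capacity_per_hour} of capacity")

The same box reads as mostly wasted to the person doing average-load planning and as too slow to the person waiting at the terminal.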

When the first minis moved into businesses they didn't sell as computers; people didn't buy a PDP to do data processing or to start a data center. They bought applications -- a machine to do typesetting, a machine to reduce errors in receiving, a machine to improve job order scheduling, a machine like the 1975 AJ mini to do basic accounting. The fact that the machine was a general-purpose computer running a special application was irrelevant; these things came in the door with a clear business focus.

What's a report? A cultural shibboleth, that's what
In the data processing world business users are seen as generating data which, when processed, yields reports. That was central in 1923 - "sort these cards by salesman and report their total commission earnings" - and is still central today - "total these claims by territory to report earnings and variances."

Originally reports were printed, but now reports are generally output to disk in forms suitable for use with query tools. Thus most users of mainframe transaction processing systems now have interactive access to report pages on-line, but all the report queries are pre-defined - meaning that the delivery method has changed but the focus on predictable reporting has not.

To get beyond that on the mainframe you usually need to buy a decision support application - there are many acronyms for these: MIS (Management Information System), EIS (Executive Information System), DSS (Decision Support System), ADQ (Ad Hoc Query) and so on - but they all amount to the same thing: some form of data query facility that extracts data from a pre-processed file, set of files, or database. Here the pre-processing to create this file is just the current version of the old report processing step, changed in form, but not in substance.

In the application appliance world, users can frame queries according to application data, not report formats. Thus the extraction and formatting step found on the mainframe can generally be omitted. (In practice it often isn't, but that's usually because the person in charge came from the mainframe or Windows environments where pre-processing is customary.)

This difference in focus hasn't changed fundamentally since the first PICK system debuted as DM512 in 1970. DM512 included the Access free-form reporting language, which allowed users to formulate queries directly against the production database. Similarly, the latest OS/400 release includes SQL/400 as well as other tools for direct user access to production data.
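For readers who haven't seen this style of use, the sketch below suggests the flavor of such a query. It is not Access or SQL/400 syntax; it uses Python's built-in sqlite3 module as a stand-in for the production database, and the table and column names are invented for illustration. The point is only that the user's question is framed against live application data, with no extract or report-formatting step in between.

    import sqlite3

    con = sqlite3.connect("orders.db")    # hypothetical production database
    con.execute("""CREATE TABLE IF NOT EXISTS orders
                   (territory TEXT, salesman TEXT, amount REAL, shipped INTEGER)""")

    # The question is framed in terms of application data, not a report layout:
    rows = con.execute("""
        SELECT territory, SUM(amount)
        FROM orders
        WHERE shipped = 0          -- open orders, as of right now
        GROUP BY territory
    """).fetchall()

    for territory, total in rows:
        print(territory, total)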

That made minis very different from the 360s. Those too had come in with a business focus - but a focus on the automation of clerical tasks in finance that left data processing people largely disconnected from business operations.

The result in the application appliance world was enormous user pressure for faster response - meaning more memory and faster CPUs. At the same time, however, the freedom to stand down during the other 124 hours per week meant that system managers were far less concerned than their mainframe colleagues about avoiding shutdowns.

We're all immigrants, one way or the other.
The simple picture presented here, in which various computer groups are easily distinguishable by their managerial reflexes, distorts reality. In real data centers things are often much less clear-cut because many of the people who obtained responsibility for Unix or mini-computer installations got their training in the mainframe or Windows world and so bring their ideas about the nature of business computing with them.

Just as many adults who got their education in some other language never quite get colloquial English, mainframe or Windows people who think they know computing don't usually adapt to new technology, preferring, instead, to try to adapt the new technology to their understanding of computing.

As a result you quite often see large Unix systems running one application and mini-computers like IBM's iSeries being used in an automatic data processing mode - running batch jobs 24x7 while their managers focus on job scheduling and capacity planning.

This always under-serves the business and doesn't make sense in terms of the technology, but you can understand the behavior easily enough if you think about the history and the conditions in which these people learned their trade.

The interactive applications and reporting model had similar effects: driving hardware evolution toward faster CPUs working with more memory; driving software to better memory management and faster switching between user applications; and driving management to prefer packaged applications over custom-built software.

All of that happened because an interactive application is one in which users tend to log in, start the application, and then let it sit idle until they need it. When they do need it, their actions trigger a cycle in which the user's input goes to the application, the application reads or updates the shared database, the result comes back to the screen while the user waits, and the application then goes back to sitting idle.

In this model the user applications are treated as little more than interfaces to a shared database, access to stored data is typically not sequential, capacity planning and batch scheduling are essentially meaningless, and printed reports are generally of little value to the users.
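A schematic sketch of that cycle, with an in-memory dictionary standing in for the shared database (an illustrative assumption, not how any real appliance works), might look like this:

    import time

    shared_db = {"J-1017": "bracket, 200 on hand", "J-2041": "gasket, 35 on hand"}

    def session():
        while True:
            part = input("part number (blank to quit): ")   # the application sits idle here
            if not part:
                break
            started = time.perf_counter()
            answer = shared_db.get(part, "no such part")    # direct, non-sequential access
            elapsed = time.perf_counter() - started
            print(answer)
            print(f"({elapsed:.4f}s - response time, not utilization, is what the user notices)")

    if __name__ == "__main__":
        session()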

That applications model drives almost everything, from processor and OS evolution to appropriate management structure - and even the conflicts with traditional business systems experts whose internal model of "automatic data processing" is broadly unrelated to the needs and activities in the interactive applications environment.


Some notes:

  1. These excerpts don't (usually) include footnotes and most illustrations have been dropped as simply too hard to insert correctly. (The WordPress HTML "editor" as used here enables a limited HTML subset and is implemented to force frustrations like the CP/M line delimiters carried forward into MS-DOS.)

  2. The feedback I'm looking for is what you guys do best: call me on mistakes, add thoughts/corrections on stuff I've missed or gotten wrong, and generally help make the thing better.

    Notice that getting the facts right is particularly important for BIT - and that the length of the thing plus the complexity of the terminology and ideas introduced suggests that any explanatory anecdotes anyone may want to contribute could be valuable.

  3. When I make changes suggested in the comments, I make those changes only in the original, not in the excerpts reproduced here.


Paul Murphy wrote and published The Unix Guide to Defenestration. Murphy is a 25-year veteran of the I.T. consulting industry, specializing in Unix and Unix-related management issues.