% fortune -ae paul murphy

From Chapter Two: The appliance computing culture

This is the 17th excerpt from the second book in the Defen series: BIT: Business Information Technology: Foundations, Infrastructure, and Culture

Note that the section this is taken from, on the evolution of appliance computing, includes numerous illustrations and note tables omitted here.

Roots

[The appliance computing culture's interactive] applications model drives almost everything, from processor and OS evolution to appropriate management structure - and even the conflicts with traditional business systems experts whose internal model of "automatic data processing" is broadly unrelated to the needs and activities in the interactive applications environment.

Most importantly, it is reflected in how business processes are viewed. For example, the mainframe world tends to see receiving, an early candidate for interactive support, as a generator of data for later batch processing.

To the 360 manager, receiving is a process in which a clerk reviews the arriving orders to produce a receipts list, entered on cards or via a remote job entry station, that becomes the input file for the day's receiving reconciliation batch job. When that job runs, it compares the day's receipts to outstanding orders and generates a reconciliation report for distribution to warehouse management - originally as a printed report, now generally on-line.

One important consequence of this batch view of the world is that the correct execution of the reconciliation job depends only on the prior execution of the receiving job - exactly as it would have depended on prior card sorting in the twenties and thirties. From a programmer's perspective the job is therefore internally independent of other jobs, and so eventually enters the catalog as a separate application - when in fact it is simply one piece of a much larger application, integrated through process scheduling rather than through data.
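
To make that dependency structure concrete, here is a minimal modern sketch of such a reconciliation job - Python, the file names, and the csv record layout are all hypothetical stand-ins, not anything from a real 360/370 shop:

    import csv

    def reconcile(receipts_path, orders_path, report_path):
        # Load the day's receipts file left behind by the earlier receiving job.
        with open(receipts_path, newline="") as f:
            receipts = {r["order_id"]: int(r["qty"]) for r in csv.DictReader(f)}

        # Load the outstanding-orders master file.
        with open(orders_path, newline="") as f:
            orders = {r["order_id"]: int(r["qty"]) for r in csv.DictReader(f)}

        # Compare the two files and write the reconciliation report.
        with open(report_path, "w") as out:
            for order_id, ordered in sorted(orders.items()):
                received = receipts.get(order_id, 0)
                if received != ordered:
                    out.write(f"{order_id}: ordered {ordered}, received {received}\n")

    # This runs correctly only if the receiving job already ran and left
    # receipts.csv behind: integration through scheduling, not through data.
    reconcile("receipts.csv", "orders.csv", "reconciliation.txt")

The point is the dependency structure: nothing in the code ties it to the rest of the system except the files the scheduler guarantees are there.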

Relative costs and focus meant that partitioning did not develop

Although most users did not do development, a few did. For those, the cost and staffing advantages of the mini-computers meant that buying a second small machine for development and testing was cheaper and more effective than partitioning.

For example, an IBM 370/158 with 1MB of memory and the extended 1.4 MIPS processor ran to something like $60,000 per month to lease in 1978. The smallest 370, a model 115, would add $25,000 per month to that, much of it in software licensing and support. Combined with the absence of effective memory protection and the presence of a development agenda, these costs made partitioning - splitting one machine into several virtual ones - a best practice.

In comparison, a fully configured System/38 with a 2.5 MIPS processor, 1MB of RAM, and the microcode database ran around $26,000 per month, while the smallest unit, a model 100, would add only about $4,000. Combined with the general absence of a development agenda and the availability of good memory management capabilities, these costs militated against the use of partitioning.
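
To put those (approximate) figures in perspective: a second development machine meant adding $25,000 to a $60,000 monthly bill on the 370 side - more than 40% - but only $4,000 to $26,000, about 15%, on the System/38 side. At 40% the virtual machine looks attractive; at 15% a separate physical box does.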

In the application appliance world, the underlying applications model is database, not card, driven and access is seen as random, not sequential. As a result programmers - and their managers - were pushed in the direction of integration through the database rather than breaking jobs up into independent batches, and so tended to see individual functions in the context of broader business processes.

Here the receiving function is seen as part of inventory management, not as the source for the receiving report. In the appliance worldview, what arrives in inventory is stored and eventually shipped out. As a result people started to build integrated systems in which different user roles - receiver, shipping clerk, distribution manager - were all supported by different modules within one application working against one shared database (a minimal sketch of this pattern follows the list below). That, in turn, forced the evolution of management methods focused on protecting data integrity and system uptime during working hours.

Two of these were key to the success of the culture:

  1. A focus on operations rather than development; and,

  2. A focus on earning the business revenue rather than on accounting for it.
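
As promised, here is a minimal sketch of that shared-database pattern - Python and sqlite3 are purely modern stand-ins, and the table and module names are hypothetical:

    import sqlite3

    db = sqlite3.connect("inventory.db")
    db.execute("CREATE TABLE IF NOT EXISTS inventory"
               " (item TEXT PRIMARY KEY, on_hand INTEGER NOT NULL)")

    def receive(item, qty):
        # The receiving clerk's module: arriving stock updates the shared
        # inventory table directly - no cards, no input file, no batch run.
        db.execute("INSERT OR IGNORE INTO inventory VALUES (?, 0)", (item,))
        db.execute("UPDATE inventory SET on_hand = on_hand + ? WHERE item = ?",
                   (qty, item))
        db.commit()

    def ship(item, qty):
        # The shipping clerk's module: same table, same row, touched at a
        # random time; the database, not the schedule, ties the roles together.
        db.execute("UPDATE inventory SET on_hand = on_hand - ? WHERE item = ?",
                   (qty, item))
        db.commit()

    receive("widget", 100)
    ship("widget", 40)

Because every module works against live shared data, protecting that data - and keeping the system up while clerks are working - becomes the operational priority, which is exactly the management shift described above.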

As a result, at least outside the IBM world, most mini-computers were brought in specifically to run packaged applications developed as commercializations of software pioneered in university or other research centers.

Cultural Contamination

People, of course, change jobs, and so some people with data processing expertise became responsible for interactive computers - where they did things like demand COBOL for CPF and port mainframe methods to interactive gear, usually with negative consequences for their employers.

Prime, for example, specialized in engineering support; Wang started with word processing systems; and Microdata commercialized Dick Pick's integrated file management system for logistics.

This process of research commercialization had numerous unexpected effects including:

  1. Many writers during the seventies and eighties didn't see the management distinctions between mainframe computers and the applications appliance group. Thus it became popular to refer to mainframes as "general purpose computers" and minis as specialized function or special purpose machines. This reversed reality, but fit the perceptions of the time and is currently being re-run, equally inappropriately, in the Wintel PC media with the Microsoft Windows PC cast in the mainframe's place as the "general purpose" machine.

  2. Many of the more successful companies, including Data General and DEC, were spin-offs from MIT and other New England technical schools and universities. Thus Boston's Route 128 became the center of the minicomputer industry well before Unix and database work at Berkeley combined with integrated electronics research at Stanford to precipitate the move west and the creation of today's Silicon Valley.

  3. The close relationship between floating point capacity and the research use of computers led to a price/performance spiral as software developed by researchers using the latest hot processors drove the commercial success of the companies involved.

    Digital Equipment Corporation (DEC), whose PDP and VAX lines - frequently running Unix in research settings - were outstandingly effective research machines, quickly developed the largest installed base among specialized software developers - so much so, in fact, that the VAX came to dominate all non-mainframe processing in the eighties.

    HP, in contrast, built a line of mini-computers for its laboratory market, combining powerful I/O capabilities with limited floating point. That produced a small but loyal customer base and established the company as a niche computer maker.

    It was only in the late eighties that HP bought several start-ups focused on floating point and used those products, along with the Domain/OS Unix environments it acquired with Apollo, to build out its own highly successful line of PA-RISC machines running MPE and HP-UX.


Some notes:

  1. These excerpts don't (usually) include footnotes, and most illustrations have been dropped as simply too hard to insert correctly. (The wordpress html "editor" as used here allows only a limited html subset and is implemented in a way that forces frustrations reminiscent of the CP/M line delimiters MS-DOS inherited.)

  2. The feedback I'm looking for is what you guys do best: call me on mistakes, add thoughts/corrections on stuff I've missed or gotten wrong, and generally help make the thing better.

    Notice that getting the facts right is particularly important for BIT - and that the length of the thing plus the complexity of the terminology and ideas introduced suggest that any explanatory anecdotes anyone may want to contribute could be valuable.

  3. When I make changes suggested in the comments, I make those changes only in the original, not in the excerpts reproduced here.


Paul Murphy wrote and published The Unix Guide to Defenestration. Murphy is a 25-year veteran of the I.T. consulting industry, specializing in Unix and Unix-related management issues.