% fortune -ae paul murphy

Technology strategies and applications

Over the course of the next few weeks I'd like to sketch out three sets of business requirements and then ask which technologies, and correspondingly what kind of IT organization, would be best for each.

This week I'd like to start that process with an imaginary research engineering company - the other two are a manufacturing business making brass and other metal products for the construction and renovation markets, and a professional services firm offering legal, accounting, and forensic engineering support to its clients.

The research company's mission is to advance remote sensing technologies to the point that satellites can be used to reliably spot organised weapons use or training by small groups embedded in civilian populations. Funding is imagined as coming from an initial $12 million DARPA grant, renewable for ten years, along with an open-ended instrument launch commitment subject to needs and technology reviews.

The big question - and today's discussion topic - is how we judge whether one set of technological and organisational choices (i.e. a systems architecture) is better than another.

To simplify this we'll initially assume that all three organizations start from scratch - in real life, of course, everybody starts with assumptions and pre-existing ideas about business processes and technology, so what I'm really doing here is hard-wiring an assumption about avoiding the sunk cost fallacy.

As an aside, the sunk cost fallacy - counting money already spent on past decisions when making new choices - is one of the most widely used roadblocks to IT change anywhere.

Here's how that works: basic business theory says that if you're facing a decision between continuing with the status quo or changing, then you compare only the costs and benefits arising from that decision. In other words, only future costs and benefits count: the money already spent on achieving the status quo cannot be counted as part of the decision. So if you've sunk millions of dollars into maintaining your brand X infrastructure, but the net future cost of continuing with it is expected to be one dollar more than the cost of switching to something else, then you should switch - because money you've already spent doesn't count in the decision.
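
To make the arithmetic concrete, here's a minimal Python sketch of the decision the theory prescribes - every figure is invented for illustration, and notice that the millions already spent appear in the code only to be ignored:

    # The textbook rule: only future costs count. All figures invented.
    sunk_brand_x = 5_000_000             # already spent on brand X; irrelevant
    future_cost_status_quo = 2_000_001   # expected future cost of staying put
    future_cost_switch = 2_000_000       # expected future cost of switching

    # The rational comparison never touches sunk_brand_x.
    if future_cost_switch < future_cost_status_quo:
        print("switch: saves", future_cost_status_quo - future_cost_switch)
    else:
        print("stay with the status quo")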

In reality, however, people who want to continue the status quo almost always treat sunk costs as the primary justification for doing so - most often arguing that the corporate investment in learning to live with one or more applications is a priceless asset that can't be given up as part of a move to something that nets out better.

Thus when mainframers argue that their investment in data center technologies should be self-perpetuating, or when Wintel people use local application customisations to justify continuing the PC desktop, what they're really doing is arguing the sunk cost fallacy. It's wrong, but intuitively appealing and correspondingly effective - and so, at least for now, let's side-step the argument by assuming these businesses start from nothing.

That assumption should make it easy to see what's at the heart of the "best" issue: since we're designing the IT infrastructure and the business processes at the same time, how can we decide which gets primacy and under what circumstances? Does the IT egg hatch into the process chicken, or does it work the other way around?

In response I think the right answer is to define "best" in terms of the lowest net present value of the checks written to achieve the business objectives - and notice, please, that this means, first, that getting there sooner is better and, second, that paying more for IT support is fully justified if that means spending sufficiently less on something else.
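
As a sketch of that yardstick - the discount rate and both spending streams below are invented - notice how writing bigger checks for a shorter path can still win on net present value:

    # NPV of a series of annual checks (year 0 first); the rate is assumed.
    def npv(checks, rate=0.08):
        return sum(c / (1 + rate) ** t for t, c in enumerate(checks))

    # Option A: cheaper IT, but the objective takes two extra years,
    # so the organisation keeps writing checks. Figures in $ millions.
    option_a = [1.0, 1.0, 1.0, 1.0, 1.0]

    # Option B: pricier IT support, objective reached sooner.
    option_b = [1.6, 1.6, 1.2]

    print(f"A: {npv(option_a):.2f}  B: {npv(option_b):.2f}")  # B comes out lower

Here option B spends more in every year it runs, but stops sooner - and so produces the smaller number.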

In designing business processes for the manufacturing company we can, for example, "overspend" on skills, automation and quality control in order to get overall productivity net of warranty support and price premiums to the point where our Chinese competitor's labour cost advantage becomes an irrelevance in our markets.
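
To show how that netting-out might work - every number below is an assumption - a per-unit labour cost four times the competitor's can still leave the better margin once warranty claims and a reliability price premium are counted:

    # Back-of-the-envelope unit margins; all inputs are invented.
    def unit_margin(price, labour, materials, warranty_rate, warranty_cost):
        return price - labour - materials - warranty_rate * warranty_cost

    # Low-wage competitor: cheap labour, higher defect rate, no premium.
    competitor = unit_margin(price=10.00, labour=0.50, materials=4.00,
                             warranty_rate=0.08, warranty_cost=12.00)

    # Our process: "overspent" skills and automation, few claims, modest premium.
    ours = unit_margin(price=11.00, labour=2.00, materials=4.00,
                       warranty_rate=0.005, warranty_cost=12.00)

    print(f"competitor: {competitor:.2f}  ours: {ours:.2f}")  # 4.54 vs 4.94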

In this context it should be obvious that the research engineering company is really about designing and supporting a future customer's workflow aimed at spotting terrorist and criminal activities during their train-up phase - a macrocosm of what we're doing here and thus all about using and managing IT. In other words, "best" for this imaginary business is going to be whatever minimises the cost of the customer's business process.


Paul Murphy wrote and published The Unix Guide to Defenestration. Murphy is a 25-year veteran of the I.T. consulting industry, specializing in Unix and Unix-related management issues.