% fortune -ae paul murphy

Using yesterday to see tomorrow

One of the comments I expect to get about the imaginary IEED research company is that I've picked the example specifically so that Wintel wouldn't have anything to offer, and to some extent that's true.

What happened, however, wasn't that I picked an application to avoid Wintel; it's that extrapolating past trends in scientific processing locks current and foreseeable Wintel products out of contention.

Here's why: the history of widespread, non-embedded computer applications outside the faux electro-mechanical world of digital data processing has been one of ever faster processing of ever larger data sets. The IEED research group carries this to the next extreme: a real organization trying to do something like this would need real-time processing measured in teraflops on data volumes measured in gigabytes per interval - with the interval probably running two to five seconds.

For IEED, the volume and complexity spell a throughput requirement between two and three orders of magnitude greater than the best we can get out of advanced PPC processors like Cell and Xenon now - and those machines are already between one and two orders of magnitude more capable than x86.
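
To make the scale concrete, here's a rough back-of-envelope sketch in Python. Every specific number in it is an illustrative assumption - the only fixed points from the scenario are teraflops of processing, gigabytes of data, and a two-to-five second interval - so treat the ratios, not the figures, as the point:

```python
# Back-of-envelope arithmetic for the IEED throughput requirement.
# All specific figures below are illustrative assumptions, not benchmarks;
# only the rough scale (teraflops, gigabytes per 2-5 second interval)
# comes from the scenario itself.

data_per_interval_bytes = 4e9   # assume ~4 GB of instrument data per interval
interval_seconds = 3.0          # assume a 3 s interval (stated range: 2-5 s)
ops_per_byte = 1500             # assumed processing cost per input byte

required_flops = data_per_interval_bytes * ops_per_byte / interval_seconds
print(f"Required sustained rate: {required_flops / 1e12:.1f} teraflops")

# Assumed sustained (not peak) per-processor rates, again illustrative only.
sustained = {"Cell-class PPC": 10e9, "contemporary x86": 1e9}
for name, rate in sustained.items():
    print(f"{name}: ~{required_flops / rate:,.0f}x below the requirement")
```

With these assumed inputs the requirement lands at about two teraflops sustained, roughly 200x beyond the Cell-class processor and 2,000x beyond the x86 part - consistent with the orders-of-magnitude gaps described above.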

Notice, however, that were this being written in 1981 I'd be predicting megaflops against kilobytes per second, ignoring Intel's announced plans to make and sell the i8088 as irrelevant, and suggesting that Microsoft's Xenix OS on the MC68000 could be a real contender for the base architecture. Similarly, if this were being written in 1991 I'd be predicting gigaflops against megabytes per interval, scoffing at Wintel's i80486, mentioning SGI/MIPS and DEC/Alpha as potential winners threatened by their own management, and predicting that machines like the then-pending PPC-powered MacAV and the HyperSPARC-based S20 would own the future.

What history tells us, in other words, is that the mass market for processors runs somewhere around ten years behind the science and video processing markets. We can expect that to continue despite the loss of diversity in the market - which suggests that IEED's requirements, outrageous as they seem today, will eventually fit the capabilities of everyday, major market, home computing devices.
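
The ten-year figure isn't arbitrary: it's roughly what falls out if you assume processor throughput doubles every eighteen months or so. A quick sketch, with the doubling period as the only (assumed) input:

```python
# The ten-year market lag, restated as arithmetic: with throughput doubling
# every ~18 months (a Moore's-law-style assumption, not a measured figure),
# a decade of growth covers roughly two orders of magnitude.

doubling_period_months = 18     # assumed doubling cadence
lag_years = 10                  # the market lag estimated above

doublings = lag_years * 12 / doubling_period_months
growth = 2 ** doublings
print(f"{doublings:.1f} doublings -> ~{growth:,.0f}x over {lag_years} years")
# ~102x - squarely inside the two-to-three orders of magnitude gap between
# IEED's requirement and today's processors, which is why today's science
# workload looks like tomorrow's home computing workload.
```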

Of course if Wintel eventually makes the grade it won't look like Wintel today, because what's perhaps five years out for PPC and SPARC is probably closer to fifteen years out for Intel and Microsoft. Right now, for example, Microsoft's compiler limitations have combined with its ever-increasing memory, CPU, and disk "footprint" to drive Intel back to the gigahertz race - and since there's no evidence of a compiler breakthrough happening at Microsoft, Wintel is, for the near future, trapped in a diminishing-returns-to-power scenario that simply knocks it out of consideration for space-borne uses.

So why not specify an x86 grid for the ground-based back-check and development work? Largely because the T2/ZFS combination works now, makes more sense than x86 with Solaris (remember that Linux is used for the satellite processor and we can't use it in both places), offers an easy growth path, and will be cheaper to install and run. Beyond that there are no obvious technical reasons not to do it - in fact, in the real world a company like IEED would probably start with relatively small computing resources of its own and rent time on a security-cleared x86 grid like those at some of the national laboratories for extended test runs.


Paul Murphy wrote and published The Unix Guide to Defenestration. Murphy is a 25-year veteran of the I.T. consulting industry, specializing in Unix and Unix-related management issues.