% fortune -ae paul murphy

Internet Forecast: scattered clouds, no rain

A lot of people are taking this internet cloud computing idea as some kind of given for future computing - and in that context it's appropriate to mention that quite a lot of people believe that the world is running out of oil, that global warming on Mars is caused by American SUVs, and/or that the 9/11 attacks in New York and Washington were a calculated response to subsequent American provocations in Afghanistan and Iraq.

Consider, for example, these paragraphs from a very bright guy who should know better - Whitfield Diffie writing for ACM:

... Much sooner than the next half century, web services will have destroyed locality in computing.

No significant corporate computation will take place on any one organization's machines. Programs will look at various yellow pages and advertisements and choose the most cost-effective providers for their most intensive computations. Image rendering, heat flow, marketing campaign modeling, and a host of services not yet imagined will be provided by myriad companies offering proprietary solutions.

When this happens, what we call secure computation today - you did it on your own computer and protected it adequately - may be gone forever.

The reason cloud computing isn't going to happen as some kind of widespread phenomenon is that owning the resource will continue to be both individually preferred and financially preferable to renting for most businesses and most applications.

Resource ownership is generally preferred to rented access because ownership confers more control than renting.

Resource ownership is generally preferable to rented access because a business using the same software as its competitors has given up any hope of achieving competitive advantage via its information systems - and thus the more important information management is to the business, the greater the strategic opportunity offered by systems differentiation.

Notice that the contrary behavior we're seeing with today's upsurge in time sharing (as cloud computing used to be known) is driven by precisely this need for, and valuation of, control. What's happening is that user managers are responding to the control centralization attendant on data processing's ongoing takeover of the Microsoft monoculture in the same way they did when data processing took over the minicomputer revolution in the seventies and early eighties - by using outside service providers to stand off the internal "service" organization.

A stand-off, however, is just a stand-off: a real shift in control has to be driven by directional and behavioral change within IT, and that's not going to happen without the application of significant external force. In the 80s that external force came from rapid PC adoption by users leveraging free and stolen software on the one hand and data processing's automatic approval of all things IBM on the other. This time, I think, it'll come from users adopting open source software while leveraging the order of magnitude performance gains available with Linux on Cell and Solaris on UltraSPARC to regain control of their own computing.

Right now, for example, an engineering manager whose people routinely run 3D modeling calculations taking 30 minutes on a (theoretical) 48 gigaflop quad-core Xeon would need to maintain and operate a mini-grid of 85 quad-processor Xeons (at 35% grid efficiency) for each concurrent user if each such job were expected to complete in about one minute - and if he couldn't get internal IT to provide that service at a reasonable cost, he'd be smart to look for an alternative such as Sun's utility computing network or New Mexico's supercomputing service.
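
For the record, the sizing arithmetic behind that 85-machine figure works out as follows - a minimal sketch in Python using only the 30-minute baseline, the one-minute target, and the 35% efficiency figure quoted above:

    # Back-of-the-envelope grid sizing from the figures quoted above.
    baseline_minutes = 30.0    # one job on a single 48 gigaflop quad-core Xeon
    target_minutes = 1.0       # desired completion time per job
    grid_efficiency = 0.35     # effective fraction of raw grid throughput

    speedup_needed = baseline_minutes / target_minutes    # 30x
    nodes_needed = speedup_needed / grid_efficiency       # ~85.7

    print(f"speedup needed: {speedup_needed:.0f}x")
    print(f"quad-processor Xeons per concurrent user: {nodes_needed:.1f}")
    # ~85.7 - the roughly 85 machines cited above, per concurrent user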

IBM, however, has prototype Cell-based Linux workstations that could do this job in just under 90 seconds - and you can already build your own deskside machine by combining ten of Mercury Computer Systems' dual-Cell blades in a half-height rack to do the job in under 30 seconds - significantly less time than it takes to transmit the data to a remote computation facility.
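
The transmission point is easy to sanity-check. The 30-second local figure comes from the text above; the dataset size and WAN throughput below are illustrative assumptions, not numbers from anywhere:

    # Local compute time vs. the time needed just to ship the input data
    # to a remote facility. The 30s local figure is from the text; the
    # dataset size and link speed are assumed for illustration.
    local_compute_seconds = 30.0      # ten dual-Cell blades, per the text
    dataset_gigabytes = 4.0           # assumed 3D model dataset size
    wan_megabits_per_second = 100.0   # assumed effective WAN throughput

    transfer_seconds = dataset_gigabytes * 8 * 1024 / wan_megabits_per_second
    print(f"upload alone: {transfer_seconds:.0f}s vs {local_compute_seconds:.0f}s local")
    # roughly 328 seconds to upload vs. 30 seconds to compute locally -
    # and that's before the remote facility does any work or returns results.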

In other words, the day IBM or some other name-brand company gets Cell-based graphics workstations into the market, that engineering manager is going to drop out of the cloud in favor of controlling his own data on his own desktops.

Something similar is happening with respect to applications combining communications with data access. CRM applications, for example, used to be prime candidates for outsourcing because of their network reach, memory use, and high reliability requirements. Now, however, the costs of meeting the telecom requirements are much lower than they were when companies like salesforce.com got started, and a single Sun T2/X4500 combination can handle thousands of users - meaning that going in-house now makes more sense for all but the smallest users - and the forthcoming "Victoria Falls" and "Rock" processors could drop today's costs by another order of magnitude for bigger players.

To take an extreme example: Google, right now, needs around 25,000 Xeons per data center to handle English language indexing and retrieval - largely because the quick response requirement is best met, given the hardware limitations in place when the system was developed, by both distributing and replicating the first and second level indices across thousands of machines and keeping the number of available CPUs high relative to the number of queries expected.
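
For readers who haven't seen the pattern: distributing and replicating an index means every query fans out to one copy of each index slice and the answers get merged. Here's a toy sketch of the idea - shard and replica counts are placeholders, obviously not Google's real numbers:

    import random

    # Toy sharded, replicated index: each shard lives on several machines
    # (replicas), and a query asks one replica of every shard, then merges.
    NUM_SHARDS = 4
    REPLICAS_PER_SHARD = 3

    # shards[s] is a list of replicas; each replica maps term -> doc-id set
    shards = [[{} for _ in range(REPLICAS_PER_SHARD)] for _ in range(NUM_SHARDS)]

    def index_document(doc_id, terms):
        s = doc_id % NUM_SHARDS          # partition documents across shards
        for replica in shards[s]:        # write to every replica of the shard
            for t in terms:
                replica.setdefault(t, set()).add(doc_id)

    def query(term):
        hits = set()
        for s in range(NUM_SHARDS):      # fan out: one replica per shard
            replica = random.choice(shards[s])
            hits |= replica.get(term, set())
        return hits

    index_document(1, ["cloud", "computing"])
    index_document(2, ["cloud", "rock"])
    print(sorted(query("cloud")))        # [1, 2]

The replication buys response time and fault tolerance at the cost of thousands of machines - which is exactly the cost a big-memory design makes unnecessary.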

With Rock that structure can change: a custom Rock machine with enough memory (8TB!) could handle all of the index management work and thus let Google replace those 25,000 Xeons with one database server and no more than a couple of racks of T2 machines handling query management.

Notice, incidentally, that Google's present search architecture leaves lots of idle resources - already-paid-for-by-search resources that can be used for services like Gmail without generating additional fixed or variable costs - all of which would disappear if Google adopted the Rock/T2 architecture for search.

The implication, therefore, is that the next Google will do search, not cloud computing - and that illustrates the general rule: cloud computing is a response to a set of problems created by IT management in the context of normal managerial behavior and the systems limitations of a few years ago - and as those limitations get pushed further back, user management will use cloud computing more and more as a stick to beat IT and less and less as a solution to processing needs.


Paul Murphy wrote and published The Unix Guide to Defenestration. Murphy is a 25-year veteran of the I.T. consulting industry, specializing in Unix and Unix-related management issues.