% fortune -ae paul murphy

Why Client-server fails

The client-server idea goes back at least to the early 1960s and was, initially, a fairly direct extension of hardware co-processor ideas already in wide use throughout the industry.

Thus the vision going into the IBM Future Systems project in 1968 was simple: build future mainframes as dedicated relational database servers and run applications on small, dedicated computers accessing the central server for data.

The database server, eventually introduced as the System/38, was a great success, and the desktop machines, offered for sale in 1971 as the System 51XX, worked well too - but the software for implementing the client-server concept failed on the issue of multi-user serialization.

Serialization problems arise when more than one user can download the same data for use in some application process, make changes to that data, and expect to write the results back. Since you can't control the timing on this, early solutions simply locked the table on the first read; but that produced horribly unpredictable delays for other users and made systems serving more than a handful of users impractical.
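
To make the serialization problem concrete, here's a minimal Python sketch - the inventory figure, the timings, and the function names are all invented for illustration. Two clients each read the same count, adjust it locally, and write it back; the second write silently erases the first. Locking on the first read gives the right answer, but only by making every other user wait.

    import threading
    import time

    table = {"widgets_on_hand": 100}     # the shared, server-side record
    lock = threading.Lock()              # stands in for a table lock

    def naive_client(sold):
        # read ... process on the client ... write back:
        # the second writer silently erases the first writer's update
        copy = table["widgets_on_hand"]
        time.sleep(0.1)                  # client-side application processing
        table["widgets_on_hand"] = copy - sold

    def locking_client(sold):
        # "lock the table on the first read": correct, but everyone else waits
        with lock:
            copy = table["widgets_on_hand"]
            time.sleep(0.1)
            table["widgets_on_hand"] = copy - sold

    for worker in (naive_client, locking_client):
        table["widgets_on_hand"] = 100
        threads = [threading.Thread(target=worker, args=(10,)) for _ in range(2)]
        for t in threads: t.start()
        for t in threads: t.join()
        print(worker.__name__, table["widgets_on_hand"])  # naive: 90, locking: 80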

Since the whole thing was ultimately predicated on the idea that business applications are simply different views of the data, IBM's response at the time - moving RPG to the server while abandoning the smart client in favor of terminals - made perfect sense.

Eventually, however, the emergence of a market for software capable of making the business PC seem useful beyond office automation drove the evolution of stored procedures as a solution to this problem. Notice, however, that stored procedures violate the fundamental client-server dictum that data must be stored on the server but processed on the client - in other words, this solution addresses the biggest problem with client-server simply by eliminating client-server as an issue.
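
Here's a rough sketch of why that works, again in Python, this time against an in-memory SQLite database. SQLite has no stored procedures, so a single server-side UPDATE stands in for one here, but the point carries over: once the read-modify-write happens where the data lives, the client-side copy - and the serialization problem that comes with it - disappears. The inventory table and figures are invented for illustration.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE inventory (item TEXT PRIMARY KEY, on_hand INTEGER)")
    db.execute("INSERT INTO inventory VALUES ('widgets', 100)")

    # Client-server style: pull the row down, compute on the client, push it back.
    # Between the SELECT and the UPDATE another user can do exactly the same
    # thing, and one of the two sales vanishes.
    (on_hand,) = db.execute(
        "SELECT on_hand FROM inventory WHERE item = 'widgets'").fetchone()
    db.execute("UPDATE inventory SET on_hand = ? WHERE item = 'widgets'",
               (on_hand - 10,))

    # Stored-procedure style: ship the intent, not the data.  The engine applies
    # the change atomically where the data lives, so concurrent sales can't
    # overwrite each other.
    db.execute("UPDATE inventory SET on_hand = on_hand - 10 WHERE item = 'widgets'")

    print(db.execute("SELECT on_hand FROM inventory").fetchone())  # (80,)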

Serialization problems are much less important for read-only data - like HTML. For that reason the original HTML definitions and related software didn't support browser-based data entry or any form of live connection to the server: the browser was strictly limited to downloading page definition and content information, and then processing that locally to produce the readable page.

That didn't last - within months of the first browser's appearance people were adding forms capabilities, inventing JavaScript, and trying to extend web servers to connect pages to the inputs and outputs of more conventional applications.

Of these, JavaScript ultimately became the basis for what is now client-server's most obvious problem: the application developer's inability to control what happens on the client.

The most obvious examples of this occur in the interaction between locally stored cookies, locally executing JavaScript, and centrally stored data - widely referred to as "ajaxulation" - and the problem is that it's simply too easy to cheat when anyone can read and adapt the client code.

This page, for example, has a poll near the top asking whether people think the article worthwhile - and that poll enables several different kinds of abuse. It's easy, for example, to combine cookie reading with server records to identify the few people who routinely visit this blog just long enough to click the thumbs-down icon and then move on - ZDNet doesn't do this, but that's an ethical choice, not a technical one.

And, FYI, if you're not familiar with this, try it: use your browser to check the source for that part of the page - you'll be astonished at the easily accessed records you're keeping on yourself.

More interestingly, things like this are almost trivially easy to game: everything from rewriting the cookie after each use to cobbling together a few bits of Perl to directly manipulate the remote APIs. This isn't important in the context of ZDNet's poll - but the same ideas applied to some of the leading massively multiplayer online role-playing games can promote otherwise quite mediocre players to the front ranks and, where resource trading is supported, can lead to real fraud.
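
For what it's worth, here's the flavor of the thing in Python rather than Perl - the poll URL and form fields below are entirely hypothetical, but the point is that anything the page's JavaScript sends, a script can send too, minus the one-vote-per-cookie check the client was supposed to enforce.

    import requests

    POLL_URL = "https://example.com/poll/vote"      # hypothetical endpoint

    for _ in range(25):
        # A fresh request every time, with no stored cookie: a server that
        # relies on the client remembering it has already voted sees 25
        # "different" voters.
        requests.post(POLL_URL,
                      data={"poll_id": "1234", "choice": "thumbs_down"},
                      timeout=5)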

The most directly important example of this that I know of concerns e-voting. Imagine this courtroom exchange between an e-voting technology expert and a lawyer for some Democrat trying to overturn the vote result:

Lawyer: Is it possible for one of these machines to return a false result?

Expert: The software is designed to protect against that.

Lawyer: I asked if it's possible, not if you know it happened. So let me ask you again: is it possible for someone to reprogram one of these machines to record and/or report incorrect results?

Expert: I wouldn't know how.

Lawyer: Possible! Is it possible? Yes or no?

Expert: Yes.

And the reason for that "Yes" - and therefore the reason you can never trust an election result returned by the current crop of e-voting machines? Client-server.


Paul Murphy wrote and published The Unix Guide to Defenestration. Murphy is a 25-year veteran of the I.T. consulting industry, specializing in Unix and Unix-related management issues.