% fortune -ae paul murphy

Smart Displays and Productivity

As regular readers know, I maintain that a Unix/smart display architecture allows organizations to centralise processing, devolve control to users, and gain significant productivity benefits over client-server architectures. Today's rant, and tomorrow's too, is going to be in defence of this position.

First, let's clarify terms. A smart display architecture is characterised by the use of desktop machines that do nothing more than handle display output and collect user input. A smart display does not run user applications, does not have a user-accessible operating system, and is not user programmable; it typically has the biggest screen, the clearest, fastest graphics, and the best presentation quality affordable within the context set by the time and the organization using it.

This is a smart display - a 21 inch NCD X-terminal from the mid nineties.

This is another one - a current-generation 17 inch Sun Ray.

These work in very different ways, but both are interaction-only devices. Both have mean times between failures measured in the 100,000-plus hours - more than eleven years of continuous operation - both use trivial amounts of desktop power, and neither has moving parts, makes noise, or generates enough heat to matter.

Smart displays handle user interaction with applications running on one or more computers, and that's all they do - nothing more, nothing less. On the other hand, the nature of the application doesn't affect the display: that NCD, for example, ran Microsoft Office 5.0 under Sun's WABI at 1600 x 1280 at a time when PCs were functionally limited to 800 x 600 - and the Unix machine used as the primary host can be either one machine or a virtualization constructed on a world-wide network of them.
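To make that division of labour concrete, here's a minimal sketch of an X-terminal session as seen from the Unix host. The terminal hostname is hypothetical, but the mechanism is standard X11: the process runs on the host, and only drawing commands and input events cross the wire.

    % DISPLAY=ncd1:0; export DISPLAY   # ncd1 = hypothetical X-terminal hostname
    % /usr/dt/bin/dtpad &              # the editor runs here, but draws there
    % xdpyinfo | grep dimensions       # report the terminal's screen geometry
    dimensions:    1600x1280 pixels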

From a user perspective the smart display is simply a quiet, reliable device. The user turns it on, navigates some kind of login, and one or more applications auto-start in the user's GUI of choice - typically CDE on the X-terminal and Gnome or KDE on the Sun Ray.
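For the CDE case, one way (of several) to get that auto-start behaviour is the per-user session script CDE runs at each login; the business application path below is made up for illustration:

    % cat ~/.dt/sessions/sessionetc    # run by the CDE session manager at login
    #!/bin/sh
    dtmail &                           # mail client up before the user asks
    /opt/acme/bin/ledger &             # hypothetical line-of-business client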

From the traditional IT perspective the hardest things to get your head around are, first, that the things just work; and second, that this has consequences for both user and IT behaviour.

The first thing to understand is that smart displays cleanly separate applications from the system that delivers them. It's IT's job to deliver the applications, and the user's job to use them.

Thus no desktop PCs and no PC networking mean no PC help desk - and application help consequently migrates from IT to lead users within the users' own organisational units.

Smart displays eliminate a lot of the ambiguity in problem diagnosis and remediation. The display device either works or it doesn't; and if the device works but the user has a problem, that problem is usually easy to understand: it may be a training issue, or it may be an application issue, but it usually isn't an IT issue.
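That clean split is testable, too: from the host, one standard X query settles the "does the device work?" question before anyone starts guessing (hostname hypothetical again):

    % xdpyinfo -display ncd1:0 > /dev/null && echo display ok
    display ok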

Once users understand that they can do whatever they want with the applications but really can't make the delivery system fail, and that messing around with the set-up or the applications won't bring an encounter with a superior being - a help desk techie or an escalation procedure - they start to experiment. Go through an office full of clerks or juniors using Windows desktops and you'll see essentially default screens on every desk; get them to open up about how they work with PCs, and you'll hear about magical thinking: one won't wear shoes while sitting at the PC because that causes it to crash; another believes that firing up Excel and then shutting it down again before opening the Navision application client prevents data entry loss; and so on.

Go through an otherwise similar operation using smart displays and almost every screen will show user customisation: everything from choice of GUI (CDE, Gnome, or KDE if you're using Solaris) to choice of colors, choice of default application windows, and even modified application icons - but no magical thinking.
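The customisation itself is ordinary X fare: colors and defaults live in per-user resource files, so nothing a clerk changes can touch the delivery system. A typical ~/.Xdefaults fragment, loaded with xrdb -merge at session start, might read:

    dtterm*background:   midnightblue
    dtterm*foreground:   white
    Dtpad*fontList:      -*-lucida-medium-r-*-*-14-*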

Ask clerks using the typical Windows financial application to do something that's not within their daily usage pattern and they'll find all kinds of bizarre reasons for not doing it - in reality because they don't trust the system, and because there are social and personal consequences for causing PC and application failures. Put those same people to work on a similar application running on Unix with smart displays and eventually they'll try anything, including stuff neither you nor the application designers ever thought of. Why? Because people want to do their jobs, getting it right is fun, and the use of smart displays removes the penalty for failure.

So what's that worth? Well, there's a lot of experience suggesting that organizations putting in large-scale client-server systems gradually lose access to most of the functionality they paid for. What seems to happen is that each new hire learns a little less about, and is more hesitant to use, the system than the person they replace - meaning that five years into the process you may actually be using only the core bit of functionality covering normal daily operations, while winging a lot of little spreadsheets and email around so human effort can fill in for the functionality that's gone unused.

I've never seen smart displays used on the Fortune 1000 scale, but in smaller organizations you get the opposite effect, and there's no reason to believe it would be different for bigger groups. What happens is that turnover falls, both competence and confidence go up, and a much higher percentage of the available functionality actually gets used. And that's where the value is: if the software was worth getting in the first place, even a marginal increase in the return on that investment goes straight to the bottom line - and using package functionality more effectively than the other guy translates directly into local competitive advantage.

More importantly, users who gain confidence eventually get past experimenting with what's there and start to wonder about what could be there - and when they do, the business gains competitive advantage from new ideas, new business processes, new deployments, and minor but often powerful application tweaks - often set up simply as links between functions previously thought to be unrelated.

And that's the bottom line answer: you get real competitive advantage first by freeing users to make better use of the software the organization pays for, and second by getting them to go beyond that, working with IT to develop and use local, idiosyncratic change in support of whatever business processes they think make them, and thus the organization, more efficient.

Notice, however, that those benefits can only be realized if IT can deliver on user-directed change. As usual you can do this with any technology, but it's particularly cheap and easy with Unix and smart displays - stay tuned, because that's tomorrow's topic.


Paul Murphy wrote and published The Unix Guide to Defenestration. Murphy is a 25-year veteran of the I.T. consulting industry, specialising in Unix and Unix-related management issues.