% fortune -ae paul murphy

Evolution, risks, controls, and strategies

The thin client transition offers considerable benefits in terms of processing risk reduction, auditability, and the imposition of relatively low-level usage controls. Indeed the biggest organizational risk incurred by this transition results from the fact that it speeds up the move to centralized computing and allows the IT group to implement virtually the entire panoply of mainframe-community developed, and so data processing centric, COBIT controls.

That's a big plus if you're a traditional data processing manager, a SOX compliance officer, or a forward-looking Windows manager who sees that this is where Microsoft's client-server architecture is heading. On the other hand, it's a complete disaster from a user management perspective because it centralizes IT control under Finance, distances user management from IT decision making through multiple layers of committee and budget processes, and reduces the credibility of the threat to "do it ourselves".

The informed CIO's goal in directing the thin client transition should therefore be to prepare the way for the smart display world while ensuring that a reversion to data processing ideas doesn't happen. Among other things that means staging the transition - adopting thin clients for particular departmental areas or functions, letting decision making devolve to those users, and starting each team down the path to revenue partnership before adjusting IT staffing and moving on to the next group.

A big part of the problem here arises because accounting ideas are extremely stable over time while IT ideas are not. Thus our idea of a system-wide control is fifty or more years ahead of theirs: from their perspective a control is simply a policy guaranteeing the predictability of some business process, and is therefore dependent on the organization chart and management action rather than on technology.

The narrow job roles in data processing started out, for example, as 1920s organizational design (read: org chart) ideas intended to make it hard for data processing to provide executives with falsified reports - something that happened as recently as the late 1980s, when a leading Canadian business bank was driven into bankruptcy largely because its own people covered up a continuing systems failure.

The primary, and therefore most fundamental, COBIT controls today are purely paperwork based: the service level agreement, the disaster recovery plan, the waterfall documents on applications, and so on can often pass all audit checks even though IT is completely out of control and within months of bringing the entire organization to a standstill. Auditors, for example, were unable to see anything of significance going wrong as Canadian federal bureaucrats and their high-powered consultants spent an estimated two billion dollars - including an estimated $300 million on software development - on a fundamentally trivial gun registry.

Thus the thin client edge in accounting controls appears mainly at the secondary, or derived, control level - things like process logging, reporting hierarchies, and server operator credentials.

In reality, however, that's not where the security advantage is: it's in the fact that the Solaris/SPARC servers you mostly see are largely unaffected by external attacks. For the paranoid there's the additional benefit that it's simply impossible for a user to do something on a Sun Ray which can't be logged, tracked, or tied to an alert of some kind - although that only makes sense in a national security context, because actually doing it in businesses operating at normal security levels tends to be severely dysfunctional.

Thus the most important internal control is, in reality, that most OS errors and almost all attacks are simply eliminated from consideration, and that IT can know, with certainty, that the only people accessing applications or data are the people authorized to do so.
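
As an illustration of what such a derived control might look like in practice, here is a minimal sketch (in Python) that cross-checks a session log against an authorization list and flags anything anomalous. The file formats and paths are assumptions made for this example - they do not describe Sun Ray Server Software or any particular audit package.

    # Illustrative sketch only: cross-check session-log entries against an
    # authorization list and flag any access by an unauthorized account.
    # The file formats and paths below are assumptions, not a real product's.
    from pathlib import Path

    AUTH_FILE = Path("authorized_users.txt")  # one username per line (assumed)
    SESSION_LOG = Path("session.log")         # "timestamp user application" (assumed)

    def unauthorized_accesses() -> list[str]:
        authorized = set(AUTH_FILE.read_text().split())
        flagged = []
        for line in SESSION_LOG.read_text().splitlines():
            timestamp, user, application = line.split(maxsplit=2)
            if user not in authorized:
                flagged.append(f"{timestamp}: {user} ran {application}")
        return flagged

    if __name__ == "__main__":
        for alert in unauthorized_accesses():
            print("ALERT:", alert)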

Unfortunately, while it's true that the COBIT controls tend to be dysfunctional, you will nevertheless need something from which the appropriate paperwork can be drawn up to keep the auditors happy. For this, your number one tool is public performance metrics. You can't actually have a meaningful service level agreement where the IT/user split doesn't exist, but you can maintain a web site giving information like the current average load time for e-mail and draw your SLA from that on demand.
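
To make that concrete, here is a minimal sketch of how such a public metric might be computed and published. The log format, file paths, and 24-hour window are assumptions made for illustration rather than features of any standard tool; the point is only that the SLA numbers derive from continuously collected, publicly visible measurements.

    # Sketch: compute the trailing average e-mail load time from a log of
    # timed measurements and publish it as a plain-text status page.
    # Log format, paths, and the 24h window are illustrative assumptions.
    import statistics
    from datetime import datetime, timedelta
    from pathlib import Path

    LOG_FILE = Path("/var/log/metrics/email_load_times.log")  # hypothetical
    STATUS_PAGE = Path("/var/www/metrics/email.txt")          # hypothetical
    WINDOW = timedelta(hours=24)

    def recent_load_times(now: datetime) -> list[float]:
        """Parse 'ISO-timestamp seconds' lines, keeping the last 24 hours."""
        times = []
        for line in LOG_FILE.read_text().splitlines():
            stamp, seconds = line.split()
            if now - datetime.fromisoformat(stamp) <= WINDOW:
                times.append(float(seconds))
        return times

    def render_status(now: datetime) -> str:
        samples = recent_load_times(now)
        avg = statistics.mean(samples) if samples else float("nan")
        return (f"E-Mail load time, trailing 24h\n"
                f"samples: {len(samples)}\n"
                f"average: {avg:.2f}s\n")

    if __name__ == "__main__":
        # Written where the web server can serve it as the public status page.
        STATUS_PAGE.write_text(render_status(datetime.now()))

Run from cron every few minutes, something like this gives users and auditors the same continuously updated number - and the SLA document, when demanded, is just a paragraph wrapped around it.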

Similarly, you will not normally have a formal disaster recovery plan because the entire technology base is implemented as a disaster avoidance plan, but you will be able to produce one when asked by comparing desired performance (i.e. no effects on users) against the actual results of drills in which machines are unexpectedly shut down or people diverted.
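
A sketch of that comparison follows. The drill-record format and the zero-user-impact target are assumptions made for this example; the point is that the recovery-plan paperwork is generated from actual drill results rather than maintained as a document.

    # Sketch: compare disaster-drill results against the desired outcome
    # (no user-visible effect) and emit a summary suitable for the
    # recovery-plan document an auditor asks for. The record format is
    # an assumption made for this example.
    import csv
    import io

    # Each record: what was shut down, how many user sessions saw any
    # interruption, and seconds until full capacity returned.
    DRILLS = """drill_date,target,sessions_affected,recovery_seconds
    2005-03-14,app server 2,0,0
    2005-06-02,file server 1,3,45
    2005-09-21,app server 1,0,0
    """

    def drill_report(raw: str) -> str:
        rows = list(csv.DictReader(io.StringIO(raw)))
        misses = [r for r in rows if int(r["sessions_affected"]) > 0]
        lines = [f"Drills run: {len(rows)}; target: zero user-visible impact"]
        for r in misses:
            lines.append(f"  MISS {r['drill_date']}: {r['target']} affected "
                         f"{r['sessions_affected']} session(s) for "
                         f"{r['recovery_seconds']}s")
        if not misses:
            lines.append("  all drills met the target")
        return "\n".join(lines)

    if __name__ == "__main__":
        print(drill_report(DRILLS))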

Basically, what happens across the board with the concerns underlying the COBIT controls is that the smart display approach is tied so closely to user management that IT simply can't get away with the document-based control conceptualization taught to today's auditors - but the reality of real-time, user-visible metrics has the nice consequence that it's easy to boilerplate the expected documentation when asked.

Unfortunately, getting there from some "here" can be very difficult. The technology change itself is easy - rolling out Sun Rays and servers is a lot simpler than rolling out PCs - but the technology by itself combines with data processing tradition to exert a centralizing force, and that generally produces benefits to IT at a cost in flexibility and control to the user community.

That's counter-productive at the corporate level because the big dollars are in making users more effective, not in reducing IT costs. As a result you need to do the opposite thing: decentralize IT management despite adopting centralized processing. That means planning this transition from the start; it means educating, coaxing, or removing middle and line management; it means lining up IT behind service delivery, not budget management - and unfortunately it also means finessing 1920s auditor requirements.

This chapter, therefore, looks at what happens in each scenario when the appearance of thin clients tilts the technology balance in favor of those with control agendas - and SOX arguments.


Paul Murphy wrote and published The Unix Guide to Defenestration. Murphy is a 25-year veteran of the I.T. consulting industry, specializing in Unix and Unix-related management issues.