% fortune -ae paul murphy

Security metrics and issues

One of the questions coming out of last week's wintel vs lintel discussions asked which one is generally more secure. As it turns out that's an easy question to answer - unless, of course, you want to demonstrate that your answer is correct, because virtually nothing is definitively known about either the question or the answer.

So let's start by stating the question in terms appropriate to the kind of practical decision making IT administrators face with every new acquisition or significant change opportunity: "Which choice will best support efforts to establish and maintain a reasonable compromise between security and usability?"

Next, let's define "security" in terms of cost minimization; specifically, in terms of the expected value of losses associated with system integrity failures such as data destruction, loss of processing productivity, unauthorized use of (or just access to) data, and so on.

What that leads to is an imaginary three-part security metric made up of:

  1. a list of possible systems integrity failures;

    Note that this cannot be a listing of discrete events but must, instead, incorporate some kind of "liquid measure" classification to reflect event durations and scale.

  2. the estimated cost to the organization associated with each item on the list of possible failures; and,

  3. the estimated probability for each failure under different security policies (including OS choices) applied in each computing environment of interest.

Although there is no known practical way to generate the first two of these with any generality, that doesn't matter much because, except with respect to the settlement of legal actions undertaken in response to systems integrity failures, neither affects the lintel vs wintel question. Basically, we can over-simplify a little and assert that the wintel/lintel decision affects only the probability of each event - not its cost and not its nature.

In other words, what we want to compare for a broad sample of events is the conditional probability that each event's cost will be incurred given lintel versus the conditional probability that the same cost will be incurred given wintel. (To be technical, we want to compare the two cost-weighted distributions and choose the one with the smaller integral - i.e. the smaller expected loss.)
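
Under that simplification the comparison reduces to straightforward arithmetic: for each platform, sum probability times cost over the list of failure classes and pick the smaller total. Here's a minimal sketch of that calculation in Python - the failure classes, costs, and probabilities are invented placeholders, not measured data:

    # Hypothetical security metric: each failure class carries an estimated cost
    # and an estimated probability of occurring under each platform over some
    # fixed period. Every number here is invented purely to illustrate the math.
    failure_classes = [
        # (description,                   cost ($),  P(event|lintel), P(event|wintel))
        ("data destruction",               250000,   0.02,            0.06),
        ("lost processing productivity",    40000,   0.10,            0.25),
        ("unauthorized access to data",    500000,   0.01,            0.03),
    ]

    def expected_loss(prob_column):
        """Sum of cost * P(event | platform) over all failure classes."""
        return sum(row[1] * row[prob_column] for row in failure_classes)

    lintel_loss = expected_loss(2)   # column 2 holds P(event | lintel)
    wintel_loss = expected_loss(3)   # column 3 holds P(event | wintel)

    print(f"expected loss given lintel: ${lintel_loss:,.0f}")
    print(f"expected loss given wintel: ${wintel_loss:,.0f}")
    print("lower expected loss:", "lintel" if lintel_loss < wintel_loss else "wintel")

The arithmetic is trivial; the problem is that nobody has credible numbers for the probability columns.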

As far as I know no one has tried to do this, but it's obvious how a research project along these lines would proceed: simply record, across a large number of machines used in a significant variety of roles, how often events occur and with what effect. Doing this for even a few thousand machines in each group over a period of a few months should produce a definitive general answer that users could then review in light of their own applications.
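
The data reduction for such a study would be the easy part - the work is all in the collection. Assuming each participating site logged one record per incident, tagged with platform group and failure class, the per-group event rates fall out of a simple aggregation. A sketch, with an invented record format:

    from collections import Counter

    # Hypothetical incident log from the monitored machines: one record per
    # event, tagged with platform group and failure class. A real study would
    # also record duration and scale, per the "liquid measure" point above.
    incidents = [
        ("wintel", "unauthorized access"),
        ("wintel", "data destruction"),
        ("lintel", "unauthorized access"),
        # ... many thousands more records ...
    ]

    # Exposure: machines observed times months observed, per group (invented).
    machine_months = {"lintel": 6000, "wintel": 6000}

    counts = Counter(incidents)
    for (group, failure), n in sorted(counts.items()):
        rate = n / machine_months[group]
        print(f"{group:7s} {failure:22s} {rate:.5f} events per machine-month")

Those per-machine-month rates are exactly the probability estimates the metric above needs.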

There may, however, be a proxy data set that would serve: Netcraft's web server uptime and performance data. You have to pay for access to the actual data, but some of their regularly published reports, including the monthly hosting provider uptime and performance reports, are derived from it.

This data is, of course, far too narrow in scope to be remotely considered definitive - but it is probably reasonably indicative, and certainly pleasing to the Unix crowd, because Solaris and the BSDs top the reliability listings while the Linux/x86 variant routinely occupies five or more of the top ten slots in the monthly hosting provider reports.
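
To tie uptime figures like these back to the cost framework above, one rough approach is to convert uptime percentages into downtime minutes, since downtime is what actually generates losses. The figures below aren't Netcraft's numbers or format - just a sketch of the conversion:

    # Translate monthly uptime percentages into expected downtime minutes -
    # downtime being what actually generates cost. Figures are illustrative only.
    MINUTES_PER_MONTH = 30 * 24 * 60   # 43,200

    for label, uptime_pct in [("host A", 99.99), ("host B", 99.9), ("host C", 99.5)]:
        downtime = (1 - uptime_pct / 100) * MINUTES_PER_MONTH
        print(f"{label}: {uptime_pct}% uptime ~= {downtime:.0f} minutes down per month")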

So what's the bottom-line answer? We know how to answer the lintel vs wintel question with respect to security in the PC community sense of that word, but we don't have the data to do it. The very limited data we do have, however, leans heavily toward lintel over wintel.


Paul Murphy wrote and published The Unix Guide to Defenestration. Murphy is a 25-year veteran of the I.T. consulting industry, specializing in Unix and Unix-related management issues.