by Paul Murphy
My winface.com server has been getting a lot of hits lately from searches that look like this:
18.104.22.168 - - [30/Apr/2004:13:44:24 -0400] "GET /article.html HTTP/1.1" 200 35654 "http://search.msn.com/results.aspx?q=windows+vs+unix+term+paper&FORM=SMCRT" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322)"
On the one hand I suppose this reflects increased academic interest in Unix; on the other, it makes me wonder about the marking keys used to judge a student's success in differentiating the two environments. As a public service, therefore, I thought I'd provide an overview of what to look for, and what to avoid, in Unix vs. Windows comparisons.
Since there are an awful lot of internet-accessible documents which offer, or purport to offer, some contribution to this discussion, it's important to start with some basic rules for winnowing out the most obviously dishonest.
The single most common indicator of dishonest comparisons is the presence of important but unannounced anachronisms: meaning that technologies, costs, or ideas from one time frame are being compared to those from another as if they belonged together.
The most pervasive and subtle form of this usually appears in studies nominally reporting, with every appearance of utter objectivity, the result of a survey or other interview process. IDC, for example, has issued a series of reports in which the authors are able to conclude that Unix (as Linux) is much cheaper than Unix (mainly as HP-UX and Solaris) by comparing the results obtained from two series of interviews: one among older IT managers whose cost experience reflects their use of RISC/Unix ERP servers bought in the early to mid nineties, and the other among technology-savvy enthusiasts mainly running Apache on x86/Linux.
In its less subtle forms the failure to maintain cotemporaneity between the sides being compared can lead to much more obvious absurdities. If, for example, you find someone denouncing Windows 2003/XP Server because it still can't even match 1975's System V Unix with respect to something as basic as manipulating a Bell 103 modem via a DEC DL-11W TTY controller then you should forward the article to Information Week for stop-press publication but omit it from further consideration yourself.
The second largest pile of discardable studies consists of those whose conclusions derive from a single "big lie" - an absurd assertion treated as unquestionable truth. This is the foundation, for example, of much of the third-party work offered by IBM in support of mainframe Linux, where the pervasive belief engendered by years of exaggeration about mainframe performance set the expectations that led to Microsoft's apparent embarrassment at having to call IBM on zVM performance. The strength of this approach lies in the emotional commitment people have to maintaining the error, since they'll loudly support almost any conclusions built on such shared faiths.
Sometimes, of course, this approach can be so egregious that only the most brutally naive are taken in. If, for example, you come across papers arguing that code reviewed in secret by a few people who make their money selling it is more likely to prove bulletproof in use than code reviewed by thousands of independent experts, you might want to ask Green Hills Software for a finder's fee but not waste time reading them yourself.
Once you drop analyses based on either big lies or mismatched time frames you can usefully group the remainder according to whether their primary focus is on differences in technology, cost, or strategic consequences.
In many ways the technology-focused comparisons are the hardest to evaluate because you need to understand the histories and technologies on both sides of the discussion to decide what weight to give the author's arguments. This is particularly true, of course, for implementation issues, where the same terms hardly ever mean the same things to both camps. For example, most of the comparative literature on kernel design is applicable to the Windows vs. Unix debate only if you can track changes in the meanings attributed to nominally well understood terms like "thread", both over time and between camps.
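As a purely illustrative sketch (not from the column), one property most modern camps attach to the word "thread" is a shared address space: threads see each other's writes directly, where separate processes would need explicit shared memory or IPC. Kernels and eras that mapped "threads" onto separate processes gave the same word quite different semantics.

```python
# Minimal sketch of one property usually implied by "thread": a shared
# address space. All four threads mutate the same dictionary object; no
# message passing or shared-memory setup is needed.
import threading

counter = {"n": 0}
lock = threading.Lock()

def bump(times):
    # Each thread sees and updates the same 'counter' object directly.
    for _ in range(times):
        with lock:
            counter["n"] += 1

threads = [threading.Thread(target=bump, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter["n"])  # 4 threads x 1000 increments
```

Under a definition of "thread" that really meant "lightweight process with its own address space", this sharing would silently break - which is exactly the kind of terminological drift the comparisons have to track.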
Getting one level away from the detailed technology can alleviate this problem and allow some technology issues to be usefully discussed in the Windows vs. Linux context without too much risk of the vocabulary leading to false assumptions and misleading parallelisms. For example, studies comparing the technical consequences of the single-user, single-process usage assumptions built into Microsoft's use of the Windows Registry against those of the multi-user, multi-processing assumptions behind Unix's use of start-up scripts and user-accessible application configuration files can provide valuable insight to people making the Linux or BSD versus Windows decision for the first time.
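To make the Unix side of that contrast concrete, here is a hypothetical sketch of the configuration model the column describes: a per-user, human-readable file that any text tool can inspect or edit, as opposed to entries in a single binary Registry database. The file name and keys are invented for illustration.

```python
# Hypothetical per-user config file in the common "key = value" style.
# Because it is plain text, ordinary tools (grep, diff, an editor) work
# on it - one of the practical consequences the comparisons discuss.
import io

SAMPLE_RC = """\
# ~/.exampletoolrc - comments and blank lines are ignored
editor = vi
pager  = less
"""

def parse_rc(stream):
    settings = {}
    for line in stream:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and blank lines
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip()
    return settings

config = parse_rc(io.StringIO(SAMPLE_RC))
print(config)
```

The point of the sketch is structural, not the parser itself: each user carries an independent copy of such a file, so multi-user assumptions fall out naturally, where a machine-wide Registry hive has to bolt per-user keys on separately.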
Similarly, comparing the security implications of a decision to use Unix applications rather than competitive Windows software can be both topical and technically interesting while offering real contributions to the open source vs. Microsoft decision that go beyond simple technical analyses. For example, comparisons pitting Microsoft's peremptory approach to change against the somewhat austere consistency Sun imposes on the Java application model, or the generally joyous abandon with which the PHP development community flings around ideas and implementations, can contribute on both factual and judgemental levels to a strategic tools decision.
It's very important, however, in reading such comparisons to distinguish genuine applications issues from those affecting either the operating system or the hardware. On the software side, for example, the SANS Institute gets lots of publicity for its annual top-ten lists matching any Unix application vulnerability, whether that application also runs on Windows or not, against a sample of Microsoft operating system exploits. Similarly, on the hardware side, the PC security industry spends a lot of time finding and advertising buffer overflows in Unix utilities or applications without ever informing its public that actual exploits depend on design weaknesses in the x86 CPU architecture and are therefore almost impossible on SPARC or PowerPC hardware - which is why you read about a lot of such bugs being found but hardly ever about real Mac or Solaris users being affected by them.
The single largest group of Windows vs. Linux (or any other Unix) comparisons make their cases on cost. If you eschew contributions from the flat earth society (consultants and others paid to find and present significantly distorted or one-sided cost "studies") you'll find that the remainder can be grouped mainly by the depth or extent of their measures of cost.
In the simplest cases, cost measures are restricted to the cost of acquisition, but even these are subject to serious distortions the reader needs to be aware of. It's possible, for example, to claim that the marginal cost of a Windows Server 2003/XP license is zero to an academic institution with a site license and then compare that zero to the roughly $4,000 cost of a first copy of a fully supported Red Hat Enterprise suite. More commonly, however, such comparisons are included in "big lie" papers in which the authors do something like silently assuming the functional equivalence of a fully configured and supported Mac to a stripped and unsupported PC "whitebox" in order to show that a $379 no-name without software (or a monitor) is cheaper than a $799 eMac with everything.
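The "zero marginal cost" trick is easy to show with a little arithmetic. The sketch below uses the $4,000 Red Hat figure from the text; the site-license fee and seat count are invented for illustration - the distortion is that the site-license fee is paid, but amortized invisibly across seats the comparison never mentions.

```python
# Toy acquisition-cost arithmetic. The $50,000 fee and 500 seats are
# hypothetical; only the $4,000 RHEL figure comes from the column.
site_license_fee = 50_000.0   # assumed annual academic site license (hypothetical)
seats = 500                   # assumed number of covered machines (hypothetical)

marginal_cost_claimed = 0.0                  # what the dishonest comparison shows
true_per_seat = site_license_fee / seats     # what each seat actually carries

rhel_first_copy = 4_000.0                    # fully supported RHEL, per the column

print(marginal_cost_claimed, true_per_seat, rhel_first_copy)
```

With those assumed numbers each Windows seat really carries $100, not $0 - and a first supported Linux copy amortized the same way across 500 seats would look very different too. The point is not the specific figures but that both sides must be amortized the same way before the comparison means anything.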
In more sophisticated cost studies some lifetime operational costs are taken into account. These can be valuable if the cost information is supported and applies to products and services reasonably considered comparable in terms of both performance and cotemporaneity. It makes as little sense, however, to compare a fully configured Sun 6900 against a dual-Xeon server at 3.2GHz as it does to compare the cost of service on a ten-year-old HP K-Class against that same dual Xeon running Linux. In general the thing to watch out for in these is carefully chosen interview subjects - it's easy to find clueless systems managers willing to say that since Unix cost them more than Windows, it has to be generally more expensive.
What I consider the most sophisticated approach - just possibly because I've often tried to do this - extends and combines both cost and technology arguments to look at the strategic consequences of the Windows and Unix decisions for entire organizations. The trouble with these, and thus the thing to be most cynical about when reading them, is that they tend to reflect "gedanken experiments" because real measurements have never been done. In next week's column, for example, I plan to compare the management focus needed to succeed with the Unix business architecture (centralized servers, smart-display desktops, decentralized management, and formal support for Linux/BSD at home) to that needed to succeed with the traditional data center, whether implemented with Wintel or the older mainframe technologies - but you should be aware that the discussion is based on personal experience, not independent, quantified analysis of real operational data from a large number of organizations.