What differentiates Unix from Windows?

- by Paul Murphy -

What really are the most fundamental differences between Windows variants like 2003/XP and Unix variants like Linux?

From a practical perspective, cost is an obvious differentiator, as are access to source code and the ability to run outside the Intel processor environment, but it's possible to argue that those differences are neither real nor important. For example, cost usually matters in business only if the products being compared are otherwise very similar; some companies have negotiated access to Windows source; and NT 4.0 Server on Alpha was, until quite recently, the fastest way to run any Microsoft OS.

To get beyond superficialities like these we need to look at the fundamental functions of a modern business-oriented operating system and ask how these are implemented by the two groups: Microsoft and the Unix community. Conceptually, all major business-oriented operating systems, including Linux and Windows 2003/XP, are pretty similar because they use similar hardware to achieve similar goals. Specifically, all of them act as interfaces between hardware and user applications, with most able to provide a single virtual interface to the hardware for multiple, often concurrent, user applications. Thus most have four interlocking layers:

  1. The user, or applications, layer communicates with the

  2. OS services layer, which

  3. Uses kernel services to share access to

  4. Hardware controllers.

And deliver five kernel functions (a rough conceptual sketch follows the list):

  1. The scheduler mediates CPU resource sharing;

  2. The memory manager mediates the sharing of memory, including virtual memory;

  3. The virtual file system abstracts the hardware to present a common file management interface to all applications;

  4. The network interface manages network I/O;

  5. The inter-process communication (IPC) module controls inter-process messaging.
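
Purely as a conceptual illustration of that division of labour, here is a minimal C sketch that groups the five services behind a single kernel structure. The type and function names are invented for the example and correspond to no real kernel's interfaces.

    /* Hypothetical sketch: the five kernel services grouped behind one
     * kernel structure. Names are invented for illustration only and do
     * not correspond to any real kernel's interfaces. */
    #include <stddef.h>

    struct scheduler   { void  (*schedule)(void); };                                  /* CPU sharing      */
    struct mem_manager { void *(*alloc_page)(void); void (*free_page)(void *); };     /* memory and VM    */
    struct vfs         { int   (*open)(const char *path, int flags); };               /* file abstraction */
    struct net_iface   { int   (*send)(const void *buf, size_t len); };               /* network I/O      */
    struct ipc         { int   (*send_msg)(int dest, const void *msg, size_t len); }; /* messaging        */

    /* Conceptually, the kernel is the composition of these five services,
     * sitting between user applications and the hardware controllers. */
    struct kernel {
        struct scheduler   sched;
        struct mem_manager mm;
        struct vfs         fs;
        struct net_iface   net;
        struct ipc         ipc;
    };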

Take any one of these and the technical differences between how Unix and Microsoft implement the function overwhelm the commonality of terminology and purpose. It is more or less true, for example, that both Windows NT 5.X and Unix variants like Mach and some BSDs use a modified micro-kernel design with a pre-emptive scheduler focused on interruptible thread execution, but that use of the same words is just about as far as the actual similarity goes.

Look at how those ideas are implemented and what you see is that core design philosophies influence how developers make thousands of small decisions about exactly what the terms mean and how things actually get done. Since the core philosophies behind the two operating system designs are diametrically opposed, these micro-decisions tend to go in opposite directions and thereby most fundamentally differentiate the Microsoft operating systems from Unix.

To the extent, for example, that we know what decisions the Microsoft people made, it appears that they generally preferred efficiency for, and external control over, a small number of processes at the expense of scalable multi-processing and internal process control. In contrast, Unix developers, whether aiming at a true micro-kernel like BSD/Darwin or a monolithic kernel like Linux, generally made the opposite choices, favoring multiple processes running under adaptive internal controls.

That difference in design philosophy shows up everywhere. In memory management, for example, Windows NT 5.0 and its successors use clustered paging, a working-set memory analogue, and a free memory manager that fires up exactly once per second; Unix, in contrast, uses an adaptive, page-specific algorithm (often LRU, least recently used) to control paging, has no working-set equivalent, and runs its free memory manager only when needed.
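
To make the Unix side of that contrast concrete, here is a minimal, self-contained sketch of least-recently-used page replacement in C. It is illustrative only: it models just the eviction decision over a handful of frames, not the adaptive, run-when-needed scanning described above, and all names are invented for the example.

    /* Minimal LRU page-replacement sketch (illustrative only). Real Unix
     * kernels use adaptive, per-page scanning; this models just the idea
     * of evicting the least recently used frame when memory runs short. */
    #include <stdio.h>

    #define NFRAMES 4

    struct frame {
        int           page;       /* page resident in this frame, -1 if free      */
        unsigned long last_used;  /* logical clock tick of the most recent access */
    };

    static struct frame frames[NFRAMES];
    static unsigned long clock_tick = 0;

    /* Access a page: hit if already resident, otherwise evict the LRU frame. */
    static void access_page(int page)
    {
        int lru = 0;

        ++clock_tick;
        for (int i = 0; i < NFRAMES; i++) {
            if (frames[i].page == page) {           /* hit: refresh its timestamp */
                frames[i].last_used = clock_tick;
                return;
            }
            if (frames[i].last_used < frames[lru].last_used)
                lru = i;                            /* remember the coldest frame */
        }
        if (frames[lru].page != -1)
            printf("evicting page %d for page %d\n", frames[lru].page, page);
        frames[lru].page = page;                    /* miss: replace the LRU frame */
        frames[lru].last_used = clock_tick;
    }

    int main(void)
    {
        for (int i = 0; i < NFRAMES; i++)
            frames[i] = (struct frame){ .page = -1, .last_used = 0 };

        int refs[] = { 1, 2, 3, 4, 1, 2, 5, 1 };
        for (unsigned i = 0; i < sizeof refs / sizeof refs[0]; i++)
            access_page(refs[i]);
        return 0;
    }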

Another way the preference for a small number of core processes is expressed in the Windows kernel is that it runs non-threaded internally. This choice avoids "object blockage," trading off concurrency and context switching in favor of increased efficiency for, and better control of, a small number of key processes. Similarly, multi-processor memory management and inter-process communications are tightly integrated with process control to gain better use of Intel's rather limited MMU (memory management unit), in part by simplifying page management. In contrast, the Unix approach has generally been to favor process creation and context switching at the cost of some efficiency for long-running processes, favor multi-processor memory management at the cost of increased hardware complexity, and favor process- or thread-level independence at the cost of making inter-process communication more difficult.
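
The Unix bias toward cheap process creation is easiest to see in the classic fork()/wait() idiom, shown below using standard POSIX calls. This is a generic illustration of the programming model, not a claim about any particular kernel's internals.

    /* Minimal sketch of the Unix process-creation idiom: fork() a child,
     * let it run as an independently scheduled process, then reap it. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();            /* create an independent child process */

        if (pid < 0) {
            perror("fork");
            exit(EXIT_FAILURE);
        }
        if (pid == 0) {                /* child: a separate schedulable process */
            printf("child  pid=%d\n", (int)getpid());
            _exit(EXIT_SUCCESS);
        }
        waitpid(pid, NULL, 0);         /* parent: wait for the child to finish */
        printf("parent pid=%d\n", (int)getpid());
        return 0;
    }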

These kinds of decisions have consequences beyond fundamentally differentiating the multi-user communications orientation embedded in the Unix approach from the single-user, control-oriented focus in the Microsoft designs. Among those consequences, three groups, affecting security, scalability, and adaptability respectively, stand out as of interest in today's business environment. In Windows NT 5.X, for example, the hard-wired nature of the one-second interval at which the balance set manager runs almost certainly allows an attacker with application-level access to crash the kernel more or less at will. Similarly, the hard 50:50 division of the available 32-bit memory space in NT 5.2 and earlier releases can be expected to cause serious application incompatibilities when some future service pack or new release changes that split in the run-up to 64-bit system compatibility.

In contrast to intrinsic weaknesses affecting reliability and security, most simple problems affecting scalability can be kludged, meaning that Microsoft can add temporary fixes as problems are recognised simply by adding code to isolate and work around each kind of special case as it comes up. Thus the "stack" idea found everywhere in NT 5.X, in which one processing object calls another, which calls another, until the process happens to hit one that deals with whatever the problem is, presents an object lesson in institutionalised kludging. Unix, of course, has also had its share of such kludges, but a key research direction, particularly in the Solaris and BSD communities, has been to remove them and so bring the core OS closer and closer to a clean realization of the original design ideas, something that's both commercially and practically impossible for Microsoft to do.
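
Since the actual NT code is not public, the pattern being criticised can only be illustrated generically. The C sketch below shows a chain-of-responsibility arrangement in which each handler either services a request or passes it to the next object in the stack, and each new special case is accommodated by pushing another handler onto the chain; all names are hypothetical.

    /* Illustrative chain-of-responsibility ("stack") pattern: each handler
     * either services the request or passes it down the chain. This models
     * the pattern discussed in the text, not any real NT code. */
    #include <stdio.h>

    struct handler {
        int (*handle)(int request);   /* returns 1 if it handled the request */
        struct handler *next;         /* the next object in the stack        */
    };

    static int dispatch(struct handler *h, int request)
    {
        for (; h != NULL; h = h->next)
            if (h->handle(request))
                return 1;             /* someone down the stack dealt with it */
        return 0;                     /* fell off the end: unhandled          */
    }

    /* Two example handlers: one for a "special case", one generic. */
    static int special_case(int request) { return request == 42 ? (printf("special\n"), 1) : 0; }
    static int generic(int request)      { (void)request; printf("generic\n"); return 1; }

    int main(void)
    {
        struct handler tail = { generic,      NULL  };
        struct handler head = { special_case, &tail };

        dispatch(&head, 42);   /* handled by the special-case layer      */
        dispatch(&head, 7);    /* falls through to the generic handler   */
        return 0;
    }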

For example, although we don't know what Microsoft's inter-process communications management code really looks like, it's a safe bet that their code for this is at least an order of magnitude longer, and correspondingly more complex, than that used in a typical BSD kernel, despite the fact that the BSD approach is both more general and conceptually more complex.
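
As one concrete example of the general-purpose, descriptor-based IPC style that came out of BSD, the short program below uses socketpair(), a genuine BSD-originated call, to pass a message between a parent and a child process through the ordinary read()/write() file interface. It illustrates the flavour of the BSD approach rather than anything about Microsoft's unpublished code.

    /* A small example of BSD-style IPC: socketpair() yields two connected
     * endpoints that work with the ordinary descriptor read()/write() API. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        char buf[64];

        if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) == -1) {
            perror("socketpair");
            return 1;
        }
        if (fork() == 0) {                       /* child: write on one endpoint */
            close(fds[0]);
            const char *msg = "hello from the child";
            write(fds[1], msg, strlen(msg) + 1);
            _exit(0);
        }
        close(fds[1]);                           /* parent: read on the other */
        ssize_t n = read(fds[0], buf, sizeof buf);
        if (n > 0)
            printf("parent received: %s\n", buf);
        wait(NULL);
        return 0;
    }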

Some external changes are too complex to be dealt with via kludges and thus limit the OS's lifetime by constraining what can be achieved before the fundamental design breaks down. For example, the page management philosophy now embedded in the network, file system, and memory management stacks makes it functionally impossible for Microsoft to copy the page placement optimisations available for large multi-processor systems in Solaris 2.8 and later releases without first making fundamental changes to NT 5.X.

Because the change needed to take advantage of new ideas like this tends to be quite fundamental, such changes have historically been accompanied by the addition of new layers of kludged code intended to maintain some semblance of backwards compatibility with previous kludges. Unix hasn't had this problem: its fundamental philosophy and research-based development processes allow it to grow consistently closer to an ideal representation of the underlying ideas. Thus a device-dependent application like a 1991 copy of Vsifax for SunOS 4.4 works perfectly under Solaris 2.9, while Windows 2003/XP server now contains both a POSIX-compliant interface set and four generations of the Win32 interface, yet code written explicitly for devices supported by previous generations still often fails. Similarly, Solaris on SPARC users will experience no need for software change when products like the forthcoming eight-way Niagara CPU assembly hit the market, but Microsoft, and Intel with them, remain trapped in the megahertz race because Microsoft's basic Windows OS design is unable to take full advantage of even today's limited two-way thread concurrency.
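
The point about hardware threads can be illustrated with a generic POSIX threads sketch: the program below asks the operating system how many processors are online and sizes its worker pool accordingly, so the same logic exploits two hardware threads today or eight on a Niagara-class part without modification. Note that sysconf(_SC_NPROCESSORS_ONLN) is a widely supported extension rather than strict POSIX, and the example makes no claim about Solaris-specific tuning.

    /* Illustration of why threaded Unix code needs no change as hardware
     * gains more concurrent threads: the program asks how many processors
     * are online and creates one worker per available hardware thread. */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void *worker(void *arg)
    {
        printf("worker %ld running\n", (long)arg);
        return NULL;
    }

    int main(void)
    {
        long ncpus = sysconf(_SC_NPROCESSORS_ONLN);   /* 2 today, 8 on an eight-way part */
        if (ncpus < 1)
            ncpus = 1;

        pthread_t *threads = malloc(sizeof(pthread_t) * (size_t)ncpus);
        if (threads == NULL)
            return 1;

        for (long i = 0; i < ncpus; i++)              /* one worker per hardware thread */
            pthread_create(&threads[i], NULL, worker, (void *)i);
        for (long i = 0; i < ncpus; i++)
            pthread_join(threads[i], NULL);

        free(threads);
        return 0;
    }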

So what's really the difference between a Unix variant like Linux and any Windows OS? It's that Microsoft reacts to marketing pressure by making design decisions that favor running a few processes faster, but then finds itself forced first to layer in backwards compatibility and then to engage in a patch-and-kludge upgrade process until the code becomes so bloated, slow, and unreliable that wholesale replacement is again called for. In total contrast, Unix developers advance systems research to provide both long-term continuity and continuous improvement in the software's ability to do more, or do better, with respect to things like throughput, reliability, security, and communications.


Paul Murphy wrote and published The Unix Guide to Defenestration. Murphy is a 20-year veteran of the IT consulting industry.