% fortune -ae paul murphy

That Linux thing - where's the vision?

I get asked, fairly often, why I'm down on Linux. I'm not - I'm like a guy with three kids: I love all of them, but I find myself spending more time with one than with the other two. Hey, I even have the guilt that goes with the analogy!

So why? In large part it's because the strategic drivers for BSD and Solaris are clear, exciting, and aligned with my personal values - and I can't say that about Linux, because I don't understand what drives it.

Linux has been a functional, trustworthy OS since perhaps 1998 or '99 - certainly since SuSE 7.1, Debian 2.2, and the last Caldera releases. The question is: where has it gone since? And is it possible to distinguish strategic or directional evolution from defensive change aimed at pretending SCO doesn't have a case?

Express the "where it's gone since" question in terms of usage and the answer is that Linux has gone "just about everywhere" but has become dominant only in IBM installed super computing grids - a development that's now leading to the widespread use of Linux on cell for the next generation of super number crunchers.

And that's great, but there's a niggling question: is this happening because Linux is the best choice or because IBM doesn't have an alternative?

Express the "where it's gone since" question in terms of internals and the answer has some of that same "everywhere and nowhere" feel to it.

Thus there have been numerous improvements, particularly with regard to installation and management, and Linux overall has become more robust, more reliable, and more scalable. But a lot of what came with the 2.4 kernels in early 2001 has seen repeated change since, with driver APIs, the scheduler(s), and many other bits of the system changed, reverted, and changed again as the code evolved. The net effect has clearly been to improve the product along dimensions I value - particularly reliability and performance - but overall I'd suggest that the scale of net technical advance from SuSE 7.1 to SuSE 10.2 today is roughly on par with the change from Windows 2000 to Vista, and nowhere close to the scale of change between, say, FreeBSD 3.1 and DragonFly 1.8, or between Solaris 9 and Solaris 10.

Whether that's true or not depends largely on how you measure change and value its consequences - after all, defensive change doesn't count, and a superficial review of code changes as shown in the logs maintained by kernel.org won't support firm conclusions. I think, however, that just the act of trying to list significant changes to Linux since the 2.4 kernels will highlight a couple of interesting things for you.

Look, for example, at the areas where Linux has indisputably evolved: page management has improved considerably; virtualization support has become much more sophisticated; the schedulers have, on net, been improved - particularly in terms of performance on four-way multi-core SMP systems; disk array, LVM, and related driver support has improved; and many driver APIs are now more general, and more flexible, than ever before.

Red Hat has a long list of these improvements just for the latest kernel upgrade. A sample:

There were quite a few generic improvements, ranging from the much-asked-for kdump, which speeds up gathering crash dumps from these large systems, to splice for improving I/O throughput, to blktrace, which allows fine-grained statistics on I/O for tuning, to lockdep, which is used to verify that deadlocks cannot occur - plus lots of improvements aimed at predictability: real-time clocks, event connectors, and higher-resolution timers.
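
Splice is easy to gloss over, so here's a minimal sketch of what it buys you - assuming a 2.6.17 or later kernel, this copies one file to another by splicing pages through a pipe inside the kernel rather than read()ing and write()ing through user-space buffers (one end of any splice has to be a pipe, which is why the pipe is there at all):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s src dst\n", argv[0]);
            return 1;
        }
        int in = open(argv[1], O_RDONLY);
        int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
        int pfd[2];
        if (in < 0 || out < 0 || pipe(pfd) < 0) {
            perror("setup");
            return 1;
        }
        for (;;) {
            /* file -> pipe: the kernel moves page references, not data */
            ssize_t n = splice(in, NULL, pfd[1], NULL, 65536, SPLICE_F_MOVE);
            if (n < 0) { perror("splice"); return 1; }
            if (n == 0) break;              /* end of input */
            while (n > 0) {                 /* pipe -> output file */
                ssize_t m = splice(pfd[0], NULL, out, NULL, (size_t)n,
                                   SPLICE_F_MOVE);
                if (m < 0) { perror("splice"); return 1; }
                n -= m;
            }
        }
        return 0;
    }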

The file systems also saw a number of improvements, some of which were so valuable that Red Hat had back-ported them into Red Hat Enterprise Linux 4, but which are now fully upstream: block reservation, online resizing, extended attributes, many LVM features, an increase in the maximum (ext3) file system size to 16TB, NFS enhancements for larger read and write sizes, autofs and cachefs, and much-needed improvements to CIFS support for better Microsoft interoperability.
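
Extended attributes, to pick one item from that list, are directly visible to applications. A minimal sketch, assuming a file system mounted with xattr support (user_xattr on ext3, say) - the "user.reviewer" attribute name is purely illustrative:

    #include <stdio.h>
    #include <string.h>
    #include <sys/xattr.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s file\n", argv[0]);
            return 1;
        }
        const char *name  = "user.reviewer";    /* illustrative name */
        const char *value = "murphy";

        /* attach the attribute to the file */
        if (setxattr(argv[1], name, value, strlen(value), 0) < 0) {
            perror("setxattr");
            return 1;
        }

        /* and read it back */
        char buf[64];
        ssize_t n = getxattr(argv[1], name, buf, sizeof(buf) - 1);
        if (n < 0) { perror("getxattr"); return 1; }
        buf[n] = '\0';
        printf("%s = %s\n", name, buf);
        return 0;
    }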

Security saw the MLS (multi-level security) addition for defense department needs, address space randomization and other mitigations against denial-of-service attacks and viruses, a faster and more comprehensive auditing subsystem, and more features letting security-aware applications gain information about security contexts. The SELinux troubleshooter is not part of the kernel, but it deserves mention as often as possible.
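
That last item - applications gaining information about security contexts - is worth a concrete illustration. A minimal sketch using libselinux (compile with -lselinux), assuming SELinux is actually enabled on the box:

    #include <stdio.h>
    #include <selinux/selinux.h>

    int main(void)
    {
        if (!is_selinux_enabled()) {
            puts("SELinux is not enabled here");
            return 0;
        }
        char *con = NULL;
        if (getcon(&con) < 0) {     /* context of the calling process */
            perror("getcon");
            return 1;
        }
        printf("current context: %s\n", con);
        freecon(con);
        return 0;
    }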

Networking saw the most new features. A lot of work to prevent congestion shows up in response-time benefits. Offloading some functions to hardware, such as fragmentation/defragmentation, improves performance without giving up control or security. IPv6 received a lot of good work, some of it useful to IPv4 as well. Wireless is a big winner, with more hardware support, better security, and significant ease-of-use gains. More access control, giving SELinux visibility and control of packets, will prevent a lot of security violations; and lastly there's support for Intel's I/O Acceleration Technology (IOAT).
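
Some of that congestion work is visible per socket, too. As a minimal sketch, the TCP_CONGESTION socket option (Linux 2.6.13 and later) lets a program ask for, or switch, the congestion control algorithm on a connection - the "cubic" name below is illustrative and depends on what the running kernel has available:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        if (s < 0) { perror("socket"); return 1; }

        /* which algorithm is this socket using right now? */
        char algo[16];
        socklen_t len = sizeof(algo);
        if (getsockopt(s, IPPROTO_TCP, TCP_CONGESTION, algo, &len) < 0) {
            perror("getsockopt");
            return 1;
        }
        printf("congestion control: %.*s\n", (int)len, algo);

        /* switch this socket to another algorithm, if the kernel has it */
        const char *want = "cubic";         /* illustrative */
        if (setsockopt(s, IPPROTO_TCP, TCP_CONGESTION, want,
                       strlen(want)) < 0)
            perror("setsockopt");

        close(s);
        return 0;
    }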

All of these are great, but what niggles at me about them is that they seem to be externally driven - reactions to market requirements, to hardware change, or to advances made in the BSD or Solaris camps.

Don't misunderstand - I'm not saying there's nothing unique and valuable here. I'm saying that the big change drivers are largely external to the process and often have a "me too" feel about them that's the opposite of the idealism animating BSD variants like DragonFly or leading-edge commercial work like that going on under the OpenSolaris umbrella.

So what's the bottom line? I love Linux and recommend it as a kind of common man's workhorse Unix for people who won't buy a Mac, don't need Solaris, and can't handle OpenBSD - but to me it looks like a follower where the BSDs and Solaris are leaders, and that's why I don't write about it as often as I think I should.


Paul Murphy wrote and published The Unix Guide to Defenestration. Murphy is a 25-year veteran of the I.T. consulting industry, specializing in Unix and Unix-related management issues.