% fortune -ae paul murphy

Is Linux innovative or derivative?

When you ask Google for innovations in Linux, what you get is rather disappointing - on the first page there's somebody's uninformed attack on Linux as lacking innovation, some PR bumf, and five separate references to a joint IBM/Red Hat press release entitled "IBM, Red Hat Announce New Development Innovations in Linux Kernels".

At a more general level, the arguments for "Linux as innovative" come in three distinct forms:

  1. first there's the argument that the Linux kernel development process - highly collaborative, easily accessible, open source, internet-based - was, or is, innovative.

    In fact this isn't true. The principles involved date back to the invention of the scientific method; the MIT people who originated Multics were the first to enunciate open source ideas with respect to science-based computing back in the late fifties and early sixties; and the Unix team at UCB practised the same methodology, albeit without internet-enabled speed and reach, from the moment Ken Thompson dropped the first Unix tape on them in 1973.

  2. second there's the argument that the kernel itself reflects innovations in design, inclusion, and execution.

    I would argue, however, that while there's some brilliant coding, broadly speaking none of these claims is true either. The underlying design target is Unix, the rendition is best described as an x86-mediated furball, and I can't think of a single included technology that isn't either an accommodation to x86 design parameters or an adaptation to Linux and the x86 of work pioneered elsewhere.

    Thus it's easy to think of specific technologies that entered the market through Solaris 10 - DTrace, ZFS, and SMF all spring to mind. At a more global level, Plan 9 rethought the whole business of deploying computing resources - and only Solaris is moving in that direction.

    But where are the comparable Linux innovations? Here's what IBM and Red Hat came up with in that March 2007 press release unearthed by the Google search mentioned earlier:

    -- Xen Virtualization Optimizes IT Environments
    -- Security Enhanced Linux Offers Greater Data Protection
    -- Innovation for the Future: "Real-Time Linux" Application Development Platform

    Now compare those to Solaris Zones, Solaris hardware cryptography support for ZFS and iSCSI, and the absolutely precise (to the sub-nanosecond level) processor timing control possible with the CMT/SMP line running Solaris.

    I know people will point at some very specific things as demonstrating Linux technical leadership - but the arguments for the ones I know about are unsupportable.

    People will argue, for example, that Linux has been ported to more environments, including non-x86 ones, than any other OS - but that's simply not true: BSD beats Linux hands down on ports, on total installs, and on operational longevity.

    Similarly, people will argue that Linux scales better than anything else. They'll point, for example, at SGI's Linux supercomputers - but those aren't SMP machines, they're multi-processor grids in boxes (there's a brief sketch of that distinction after this list); and by that logic the world CPU scaling crown would have to go to Microsoft for the botnet: a wannabe grid entirely enabled by Microsoft Windows.

  3. the third line of argument is that the GNU/Linux combination most of us know as Linux is politically innovative because it empowers people who would otherwise not be empowered, flattens the playing field to a level within reach of most university compsci graduates, and offers people a low-cost, low-risk point of entry to the Unix world.
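
To make that SMP-versus-grid distinction concrete, here's a minimal C sketch of my own - it has nothing to do with SGI's or anyone else's actual code. On a real SMP machine every processor shares one address space, so threads can update a single counter directly and the hardware keeps their view of memory consistent; on a grid or cluster each node has private memory, and the same job needs explicit message passing over an interconnect before anyone can see a total.

    /* Illustrative sketch only, not production code: four threads on one
       shared-memory (SMP) machine incrementing a single shared counter.
       On a grid there is no shared counter - each node would keep a partial
       sum and the results would have to be combined by passing messages.
       Build with: cc -pthread smp_sketch.c */
    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4
    #define NITERS   1000000

    static long counter = 0;   /* lives in the one shared address space */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < NITERS; i++) {
            pthread_mutex_lock(&lock);   /* works only because all CPUs see the same memory */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tids[NTHREADS];

        for (int i = 0; i < NTHREADS; i++)
            pthread_create(&tids[i], NULL, worker, NULL);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(tids[i], NULL);

        printf("counter = %ld\n", counter);
        return 0;
    }

Scaling that loop across a big SMP box is an operating system and memory architecture problem; scaling it across a grid is a job distribution problem - which is why lumping the two together tells you very little about kernel innovation.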

I think that third argument is exactly right: Linux is less a technical tour de force than a social phenomenon and, seen in that context, it is incredibly successful. Let me reiterate one of my favorite quotations from Unix originator Dennis Ritchie:

From the point of view of the group that was to be most involved in the beginnings of Unix (K. Thompson, Ritchie, M. D. McIlroy, J. F. Ossanna), the decline and fall of Multics had a directly felt effect. We were among the last Bell Laboratories holdouts actually working on Multics, so we still felt some sort of stake in its success.

More important, the convenient interactive computing service that Multics had promised to the entire community was in fact available to our limited group, at first under the CTSS system used to develop Multics, and later under Multics itself. Even though Multics could not then support many users, it could support us, albeit at exorbitant cost. We didn't want to lose the pleasant niche we occupied, because no similar ones were available; even the time-sharing service that would later be offered under GE's operating system did not exist. What we wanted to preserve was not just a good environment in which to do programming, but a system around which a fellowship could form. We knew from experience that the essence of communal computing, as supplied by remote-access, time-shared machines, is not just to type programs into a terminal instead of a keypunch, but to encourage close communication.

What he means here isn't just time sharing on a single machine, but a collaborative environment in which the system is used to provide communications among people and thus enable them to share goals and ideas while contributing to each other's work.

That's exactly what Linux achieves; that's what makes it a success - and, for my money, that success itself qualifies the Linux movement as insanely innovative.


Paul Murphy wrote and published The Unix Guide to Defenestration. Murphy is a 25-year veteran of the I.T. consulting industry, specializing in Unix and Unix-related management issues.