When Plan 9 From Outer Space meets the X-files

When Plan 9 From Outer Space does battle with the X-files: who wins?

Linux gets a lot of press these days, but much of it appears condescending and is more about the phenomenon of its emergence and growth than it is about the value and use of the technology. That may be about to change, and for the better.

As a group the so-called "mainstream press" often appears to favor Microsoft and show an appalling lack of technical depth in its enthusiastic repetition of the latest Microsoft press release. There's been a lot of speculation on why this is (or whether it even happens), so far without definitive research answers one way or the other. My personal beliefs are that it is true and that the reporters involved are largely the victims of a very fundamental Microsoft marketing strategy which plays off the social rules people learn in high school: money and social skills define the in-crowd, and only nerds kvetch about stuff like the importance of better technology.

So what happens when that same mainstream press finds itself reporting that the legal and personal risks associated with Microsoft's Passport can be easily avoided by adopting better technology from the open source world? More importantly, given that the normal legal standard for judging the adequacy of professional services such as those involved in setting up an e-commerce site is consistency with best, or industry-wide, practices, what does the mainstream press do when that standard is largely set by the 66% of web administrators who use Apache and open source? I don't know, but I think we're about to find out courtesy of Plan 9's pending victory over the X-files in the matter of single sign-ons and network authentication.

In the early days of Unix, resource access and authentication were non-problems: your users signed on to the VAX or whatever your organization had and promptly maxed out the resources they were authorized for. That worked well until organizations got a second Unix machine in the door and people who needed two logins complained about the complexity of remembering two passwords. None of the solutions to this were as good as they should have been. Kludges like NIS+ and FNS could be made to work for as long as the sysadmins consistently wore their lucky underwear, but were never exactly stirring advertisements for the simplicity and user focus of Unix design ideas. The rhosts mechanism and the related xhost facility used with X11 were, on the other hand, both easy to use and wonderfully effective from a user perspective, but largely unacceptable to security-minded system managers.

Today the single sign-on problem has escaped the back rooms to become a front burner competitive issue as Microsoft's Passport service attempts to deliver a single sign-on solution for an essentially unlimited number of Windows users accessing Windows servers, while the Liberty Alliance tries to play catch-up from the competitive side of the systems world.

Passport brilliantly combines the kludgey and unstable nature of NIS+ with the insecurity of the trusted hosts concept to produce a nine step process with obvious opportunities (See, for example, Risks of the Passport Single Signon Protocol) for security and other abuses:
From: E-Legal: Microsoft Enters Into FTC Privacy Consent Decree, by Eric Sinrod (law.com)
On Aug. 8, Microsoft entered into a 20-year consent order with the Federal Trade Commission with respect to alleged failures of its Passport authentication service to protect the privacy and security of personal information. Hopefully, the consent order will result in ensuring the security of personally identifying information.
(...)
Second, Microsoft is required to establish and maintain a comprehensive information security program in writing that is reasonably designed to protect the security of personally identifiable information.

  1. a Passport user requests a secure page from a Passport partner;
  2. the Passport partner redirects the page request back to the user's browser;
  3. which redirects it back to its authorized Passport server;
  4. which uses a three step challenge/response approach to authenticating pre-registered users;
  5. after which it redirects the now authenticated user back to the Passport partner site;
  6. which instructs the user's browser to write an authentication cookie to the user's PC;
  7. whose presence then authenticates that PC to other Passport partner sites.

Looking at this you might reasonably assume that Rube Goldberg did the design but, in fact, things got this way through an evolutionary process which, in retrospect, looks as logical and inevitable as a slow motion train wreck in a B movie.
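
To make that ricochet concrete, here is a schematic sketch in Python. Everything in it is invented for illustration - the class names, the token format, and the single shared Passport object are not Microsoft's actual interfaces - but it captures the essential points: every hop passes through the user's browser, and in the end a cookie on the PC is what authenticates the user to every other partner site.

# Schematic, invented-for-illustration sketch of the Passport-style redirect dance.
class Browser:
    def __init__(self, user, password):
        self.user, self.password = user, password
        self.cookies = {}

class PassportServer:
    def __init__(self):
        self.registered = {"alice": "secret"}          # pre-registered users (step 4)
    def authenticate(self, browser):
        # stand-in for the real challenge/response exchange
        if self.registered.get(browser.user) == browser.password:
            return "ticket-for-" + browser.user        # stand-in for the real token
        return None

class PartnerSite:
    def __init__(self, passport):
        self.passport = passport
    def request_secure_page(self, browser):
        if "passport_auth" in browser.cookies:          # step 7: cookie authenticates the PC
            return "secure page for " + browser.user
        # steps 2-5: the request bounces, via the browser, to the Passport server
        token = self.passport.authenticate(browser)
        if token is None:
            return "access denied"
        browser.cookies["passport_auth"] = token        # step 6: cookie written to the PC
        return self.request_secure_page(browser)

passport = PassportServer()
browser = Browser("alice", "secret")
site_a, site_b = PartnerSite(passport), PartnerSite(passport)
print(site_a.request_secure_page(browser))   # triggers the full redirect dance
print(site_b.request_secure_page(browser))   # the cookie alone now satisfies a second site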

The approach favored by the Liberty Alliance is markedly better with fewer steps and no monopoly on holding or managing authentication information, but it really only looks good when you compare it to Passport. Consider it independently and you might feel a touch of nostalgia for the days when federated naming services provided the backbone for federated identity services.

So what's going to become of the conflict between the empire's passporters and the heroic rebels of the liberty alliance? I think Sun's going to obsolete both sides of this debate by introducing some old technology that really does solve the problem in an elegant and effective way.

Although Sun was the company that popularized the notion that "the network is the computer," they've never yet made this as transparent as it should be. Sun developed NFS as part of their first generation OS and this helped a lot, particularly within trusted communities, but it required central control of shared resources. SunOS 2.0, aka Solaris, focused more on adding scalability and reliability than on extending Unix out of the box and across the network. Thus the current release, SunOS 2.9, contains a lot of single identity tools but they're all add-ons to the basic OS rather than being truly integrated with it.

I think that SunOS 3.0 will change all that by adopting a bunch of user and resource authentication ideas from AT&T Bell Labs' Plan 9. More importantly, I think that Sun's actions will give those ideas, already available to the open source community, new impetus and lead to a battle for webshare as Linux and BSD Apache administrators decide between joining up with Microsoft and the X-files or taking off for outer space and Plan 9.

Exegesis
An exonym is a group or geographical label applied by outsiders to a group or region but which the inhabitants or members of the group do not themselves use. Xlang is Microsoft's exonym for a bunch of extensions to XML which have the effect of turning it into a two-way communications infrastructure for web programming.

Microsoft's John Montgomery makes a big deal out of this in an interview reported on devx.com:

We see XML as being kind of the next level of technology that's going to come along and provide a universal description format for data on the Web, and what this is going to enable, from our perspective, is Web programmability.

One context in which to view this use of Xlang as a web-based messaging interface is Chris Paget's recent demonstration of fundamental, and probably non-repairable, security defects in the Windows internal messaging interface.

Dot.net is, like Passport, distantly based on a family of extensions to XML I think of as the X-files, in part because even Bishop Occam would probably want to call on vast government cover-ups of alien takeovers to explain the truly weird stuff to be encountered as we troop down the XML rabbit hole in search of an explanation for Passport and related dot.net ideas.

XML started out as a sort of simplified SGML (Standard Generalized Markup Language, a 1983 ANSI standard) and thus originally inherited many of SGML's key characteristics.

SGML defines how document markup should be structured and unifies ideas on this from both the printing and computing perspectives. On the editorial, or printing, side SGML got its start the day after Gutenberg's invention of movable type made it necessary to formalize editorial instructions to typesetters. From this perspective SGML's tags were therefore instructional in nature as in: "start using 42 lines per page here".

On the computing practices side, however, SGML's roots go back only to about 1957 and Rand Corporation's first attempts to implement the COLEX text retrieval system - a development that led to the 1967 commercial release of SDC Dialog, probably (?) the first public network-based information service. COLEX was aimed at helping the Air Force sort through hundreds of thousands, or even millions, of technical documents and therefore needed some way to differentiate text by type. As a result their tags were descriptive, as in: TITLE: some title text :END_TITLE.

A third type of tag, one combining formatting information with procedural information and pioneered in early-60s MIT products like RUNOFF (which begat troff and ditroff), was intentionally eschewed by the committee because SGML was intended to describe document markup, not document processing.

The SGML specification thus defines two types of information labeling, for data identification and for presentation formatting, but does not say anything about data processing; for that you need an application that can interpret and act on SGML markup, and that interpreter, in turn, has to drive some kind of output application that puts ink on paper or pixels on screens.

As a result, the rigid separation of markup information from procedural information means that actual use of SGML needs three things:

  1. a definition of what your tags are, what actions they translate to, and to what degree, if any, they can be nested. That set of definitions constitutes the SGML document type to be produced when a document marked up using those tags is processed for formatting and is called, logically enough, a document type definition or DTD;
  2. an application which can interpret the markup and combine that with the document itself to produce output suitable for use as input to a rendering engine; and,
  3. a graphics output, or rendering, engine that produces the printed, or displayed, document.

In this context some readers may recall my problems a few weeks ago differentiating NeXTstep's use of PostScript and the use of PDF within MacOS X. PostScript is a procedural programming language; PDF combines markup with document content and shares the PostScript page imaging model and vocabulary.

NeXTstep used PostScript in both roles: as markup information in files and to process the resulting combined files for screen or print output. That, of course, works extremely well and provides such clean and consistent output that habituation to its use tends to blind the user to problems with other, less functional, display methodologies.

In Robert Cailliau's introduction to the Lie and Bos book Cascading Style Sheets: Designing for the Web (2nd Edition, Addison Wesley, 1999) he discusses Berners-Lee's work on developing HTML, points out that this took place on NeXT machines with PostScript-based displays, and makes a comment about stylesheets becoming programming languages that qualifies as prescient in the context of what's been happening with XML recently.

In the young web there were no more pagination faults, no more footnotes, no silly word breaks, no fidgeting the text to gain that extra line you sorely needed to fit everything on one page. In the window of a web page on the NeXtstep system the text was always clean. (...)

Then we descended into the dark ages because the web exploded into a community that had no idea such freedom was possible but worried about putting on the remote screen exactly what they thought their information should look like. (...)

Fortunately, SGML's philosophy allows us to separate structure from presentation and the web DTD, HTML, is no exception. Even in the NeXTstep version of 1990, Tim Berners-Lee provided for style sheets. (...)

(...) I've always had one concern: is it possible to create a powerful enough style sheet "language" without ending up with a programming language?

The important thing here is that all of this is non-procedural: the markup tells the rendering engine what to do, but not how to do it. In fact the original ANSI committee made a special point of not including another computing tradition - that of fully integrated markup and processing languages like troff/tmac or the later LaTeX.

In general the document preparation workflow envisaged in SGML is:

  1. someone loads or creates the document source text;
  2. someone adds formatting and presentation information using a DTD (markup language) like HTML;
  3. the completed document is stored;
  4. on request, the markup language is interpreted by a transformer application which outputs graphics commands for a rendering engine; and,
  5. the rendering application interprets the graphics commands to create the user readable output on screen or paper.

Notice again that the only executables here are the transformer and rendering applications: the markup language is interpreted by the transformer and rendered by the graphics engine, but the markup language does not itself take on the attributes of a programming language and does not contain executable code.
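
A toy version of that pipeline may make the division of labor clearer. In the Python sketch below the tags, the rule table, and the "rendering engine" are all invented; the point is simply that the markup is data which the transformer interprets, while the only executable code lives in the transformer and renderer themselves.

import re

document = "<TITLE>Plan 9 vs the X-files</TITLE><PARA>Markup is data, not code.</PARA>"

# a toy stand-in for a DTD plus stylesheet: which tags exist and what the
# transformer should emit around the text they label
rules = {"TITLE": ("set font 18pt bold", "set font 10pt"),
         "PARA":  ("set font 10pt",      "newline")}

def transform(markup):
    # the transformer interprets the markup and emits rendering commands
    commands = []
    for tag, text in re.findall(r"<(\w+)>(.*?)</\1>", markup):
        before, after = rules[tag]
        commands += [before, "draw " + text, after]
    return commands

def render(commands):
    # a stand-in for the graphics engine that puts pixels on screens
    for command in commands:
        print(command)

render(transform(document))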

How well this works in terms of final product quality depends in large part on the quality with which the output is rendered, something which itself depends on both the rendering application and the physical technology used.

The HTML DTD does not offer much direct formatting control; an HTML page displayed using IE on a PC with default fonts, borders, and window sizes will look very different than that same page displayed under Konqueror. What's going on is that each browser has what amounts to an internal stylesheet which determines how text marked up with a format label like <EM> is actually rendered in the local graphics environment.

Cascading stylesheets bring better control where the page meets the PC screen by providing explicit rendering instructions to replace these default choices. For example, the browser default is to show something tagged <H1> somewhat more than three font sizes bigger than, but in the same color as, something tagged <P>, but

<STYLE TYPE="text/css">
H1 { color:blue }
</STYLE>

overrides the default stylesheet to add the instruction that text presented between <H1> tags should also be rendered in blue.

Since a document can contain, directly or by reference, more than one set of rules, some complexities arise in deciding which rules to apply. In the official CSS specification those inheritance rules are executed by sorting through presentation rules to find the nearest one not overridden by an "important" label attached to an instruction in a higher-level stylesheet - a strategy roughly analogous to letting the person whose shouts sound loudest win the argument. Graphically this process can be presented as an inverted tree with formatting authority cascading down it to the lowest applicable level; hence, eventually, some more X-files: including XPaths, XLinks, XSchemas (done with the XML Schema Definition Language (XSDL), or just .XSD in DOS), and, more recently, XMLNS or XML namespace files.
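
A grossly simplified sketch of that sorting process looks like this in Python; real CSS also weighs selector specificity and the origin of each stylesheet, so treat this purely as an illustration of the loudest-shout-wins idea.

# Among rules setting the same property on the same selector, an "important"
# rule beats a normal one and, otherwise, the rule declared last (nearest the
# element) wins. Selector specificity and stylesheet origin are omitted here.

rules = [
    {"selector": "H1", "property": "color", "value": "black", "important": False, "order": 0},  # browser default
    {"selector": "H1", "property": "color", "value": "blue",  "important": False, "order": 1},  # author stylesheet
]

def resolve(selector, prop, rules):
    candidates = [r for r in rules if r["selector"] == selector and r["property"] == prop]
    candidates.sort(key=lambda r: (r["important"], r["order"]))   # the winning rule ends up last
    return candidates[-1]["value"] if candidates else None

print(resolve("H1", "color", rules))   # "blue": the author rule overrides the browser default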

When work started in 1996 on a simplified profile of SGML, to be known as XML, the need for style sheets was a well-established part of commercial reality, and two additional standards, often grouped together under the name XSLFO (Extensible Stylesheet Language, Formatting Objects) and reasonably considered generalizations of the style sheet concept, were co-developed with the XML specification to accommodate this.

These latter control how XML documents are transformed to produce documents that can be rendered by standard engines such as browsers:
XML document -> transformer (acting on XSL rules) -> HTML document

In defining an XML DTD you create and then tag the tags; i.e., you:

  1. define the label tags that will be used to label content in documents of this type; and,

  2. then tag those tags with presentation information to control how that content will be presented.

In use this produces at least an XML document containing the labels, an XMLNS (XML namespace) document containing the definitions, and an XSL document containing the presentation information for use by the output formatter.
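
A minimal sketch of that arrangement, using Python's standard xml.etree module with a dictionary standing in for the XSL presentation document (the element names and rules below are invented for illustration, not taken from any real DTD):

import xml.etree.ElementTree as ET

# the content document: tags only label data, they say nothing about appearance
content = "<report><heading>Q2 results</heading><body>Revenue was flat.</body></report>"

# a stand-in for the XSL document: label tag -> (opening, closing) presentation markup
presentation = {"heading": ("<h1>", "</h1>"), "body": ("<p>", "</p>")}

def to_html(xml_text):
    # the transformer combines content with presentation rules to produce HTML
    root = ET.fromstring(xml_text)
    parts = []
    for element in root:
        open_tag, close_tag = presentation[element.tag]
        parts.append(open_tag + (element.text or "") + close_tag)
    return "\n".join(parts)

print(to_html(content))   # output suitable for a browser's rendering engine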

This set of solutions met the needs of large numbers of people for controlled document structure and presentation. As a result a number of XML DTDs were quickly standardized, including one I've been working with, the XBRL specification for an extensible business reporting language, and one lots of people have been working with, Microsoft's XML definitions for files produced by Microsoft Office.

One of the side effects of Microsoft's 1998 decision to embrace XML was its immediate extension to provide access to procedural elements. Starting with ActiveX this has expanded to include various document, or common, object models (DOM/COM) and, most recently, SOAP. The Simple Object Access Protocol was originally intended to provide RPC-like services that bypass firewalls by using port 80 with HTTP, but it is being extended, via the Web Services Description Language (WSDL), to allow for more general forms of communications.

By taking XML across the gap from markup to procedural language, Microsoft made file interchange and information use a lot easier for Windows developers but also much more dangerous for users. After all, an XML file is still really just a text file which anyone can edit whether it contains procedural information or not.

For example, the extensions immediately made the following possible in an XML document:

<xsl:script>
<![CDATA[ Virus=new ActiveXObject("WScript.Shell");
Virus.Run("%systemroot%\\SYSTEM32\\CMD.EXE /C DIR C:\ps");]]>
</xsl:script>
(Note: this example for Microsoft Excel is from http://www.guninski.com/ex$el2.html - except that "virus" is not spelt out in the original. This code apparently works with the more recent MSXML4/5.DLL parsers for the major current (mid-July 2002) releases of Windows 2000 and Windows XP.)

See also: securitytracker.com's description of an MSXML.dll exploit with respect to SQL-Server 2000 which can allow the execution of arbitrary code by a remote attacker.

That, obviously, raises a problem: someone sends you a PowerPoint document saved as XML; should you load it, delete it, or read through the XML file looking for external executable references?

More generally, how do you know:

  1. that a document you receive from a sender has not been changed by someone else? or,
  2. that the sender will neither deny having sent the document nor claim that you, or anyone else, could have modified it en route?

The technology needed to assure a document recipient that it originates with the ostensible sender and has not been tampered with uses the XML digital signature and encryption standards. These describe how encryption can be used to authenticate documents by defining what is enciphered, how that is done, and how the results are represented in an XML document.

Specifically: the encipherment is handled via PKI - Public Key Infrastructure on the RSA model. The underlying encryption methodology is clearly explained by Ed Simon, Paul Madsen and Carlisle Adams in their An Introduction to XML Digital Signatures on the XML.com site.

The key point is that two separate keys are used such that, as Simon et al put it, "a cryptographic transformation encoded with one key can only be reversed with the other."

These keys are related via a hypothetical mathematical construct known as a one-way function. In these the computational cost of creating two keys is trivial but the computational cost of finding the second key from knowledge of the first is thought to be very high. Thus a PKI user can publish one key while keeping the other secret, thereby creating a situation in which the ability to decrypt something with the public key asserts that it was encrypted with the private key and, by extension, can only be the work of the only holder of that private key. This therefore ensures that the sender cannot repudiate the encrypted data and so amounts to a digital signature.

Similarly a sender can use the recipient's public key to encrypt data knowing that only the recipient's private key can be used to decrypt it, thereby ensuring the privacy of the message. With PKI, senders and receivers can exchange public keys and so enter into a secure, signed, exchange. Neither side can know, however, who the other is unless some third party previously attests to both identities. As a result various certification authorities have evolved on the web to certify that the identities involved are as represented and, at the cost of an additional pair of PKI encoded digital information transactions, both sender and receiver can be reasonably assured of the other's real identity.
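
For readers who want to see the mechanics, here is a short sketch using the third-party Python cryptography package (my choice of library, not anything mandated by the XML signature standards): the private key signs, anyone holding the public key can verify, and a single changed byte makes verification fail.

# Sketch of sign-with-private / verify-with-public using the (third-party)
# "cryptography" package; any PKI library exposing RSA signatures would do.

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"this PowerPoint file really did come from me"
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256())

def verify(data):
    try:
        public_key.verify(
            signature, data,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256())
        return True
    except InvalidSignature:
        return False

print(verify(message))                   # True: signed by the private key holder
print(verify(message + b" (modified)"))  # False: any change breaks the signature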

Except or Excuse?
A report by Alex Gantman on the neohapsis security track suggests that digitally signed Microsoft Office documents can be tampered with rather easily:
I have stumbled onto a potential security issue in Microsoft Word. In both cases the adversary (mis)uses fields to perpetrate the attack. It's important to note that fields are not macros and, as far as I know, cannot be disabled by the user.

Basically he suggests that, because an INCLUDETEXT statement is part of the hash calculation but what it fetches isn't (that happens at read time), you can change the included text after the document has been signed without affecting the apparent integrity of the digital signature.

The normal method for validating a digital document's internal integrity is to record a hash value (a mathematical or heuristic representation of all, or part, of the document in a single, usually short, string of text or a single number) and then to encrypt and transmit that hash value as a "digital digest". Recalculation of the hash value on document receipt and its comparison with the decrypted value is then expected to show whether or not the document has been tampered with because a content change will result in computation of a different hash, or digital digest, value.
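
The mechanics are easy to demonstrate with Python's standard hashlib; the INCLUDETEXT analogy at the end is my schematic restatement of Gantman's point, not his actual exploit.

import hashlib

def digest(text):
    return hashlib.sha256(text.encode()).hexdigest()

# any change to the content changes the digest, which is what signature checking relies on
print(digest("Pay the bearer $10"))
print(digest("Pay the bearer $1000"))

# but if only a *reference* to external text is hashed at signing time, what that
# reference fetches later can change without disturbing the digest at all
signed_field = 'INCLUDETEXT "http://example.com/terms.txt"'
print(digest(signed_field))   # unchanged no matter what the URL serves at read time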

The combination of XML with embedded digital signatures allows information suppliers to assure their customers that the documents they get are authentic and unmodified by third parties. In other words, if that hypothetical PowerPoint document contained a digital signature, and you had the software to verify it, and both the hash and key match checked out, then you would be certain that the document came from the ostensible sender and had not been modified en route. Equally importantly, if it turned out to contain a virus, the sender would be forced to acknowledge responsibility for that since you could prove who it came from and that the problem existed when the sender affixed the digital signature and thus before the document was sent to you.

These ideas can, of course, be applied in other ways to other problems. It's not a long step, for example, from using digital signature standards to authenticate documents for both sender and content to applying the same ideas to authenticate almost any kind of information exchange, including that needed for a single sign-on system, for remote procedure call authentication, or in XML documents that distribute a user's credit balance or other personal details to third parties.

If it quacks like a duck,
walks like a duck,
and looks like a duck,
should it have teeth?
The Trusted Computing Platform Alliance (TCPA) looks a lot like an open specification process at work generating the consensual basis for Microsoft's Palladium infrastructure. The TCPA folks, whose website is not fully accessible to users of Netscape 4.76 on Solaris and who don't allow just anybody to see past the "Organization" header on their front page, carry the digital signature idea forward into hardware and have produced an interesting, if somewhat frightening, 332-page main specification whose implementation would render Passport's cookies obsolete.

If you do a Google search using just "palladium" you don't find a lot of positive commentary; on the other hand, it could just be the hardware complement needed to enforce new terms that seem to be entering Windows end user licensing. For example, recent Windows XP Service Pack 1 and Windows 2000 Service Pack 3 licenses state that:

"You acknowledge and agree that Microsoft may automatically check the version of the OS Product and/or its components that you are utilizing and may provide upgrades or fixes to the OS Product that will be automatically down loaded to your computer,"

One of those components is, I imagine, illustrated in the (made-up) XML excerpt below:

<?xml version="1.0"?>
<Workbook xmlns="urn:schemas-microsoft-com:office:spreadsheet">
  <registration description="E2KXPSP2" progid="XP803AC54C" version="1.09"
   classid="{71f81c28-4695-4220-bd77-c21fedca02ab}">
  </registration>
  <DocumentProperties xmlns="urn:schemas-microsoft-com:office:office">
    <LocationOfComponents HRef="file:///H:\2KApps\MSOffice%20XP"/>
  </DocumentProperties>
</Workbook>

Using XML, with or without hardware-encoded keys, to enforce licensing might make sense in both the Microsoft and DMCA contexts (although active copy protection built into the distribution medium would be smarter), but the direct threat to Linux here is that a Windows user who needs to inter-operate with a user of open source software may become unable to do so because the XML registration tags written by OpenOffice.org won't check out with Passport.

On the open side of the ledger, these ideas pour directly into the XNS (eXtensible Name Service) attempt to specify a vendor neutral digital identity infrastructure. On the proprietary side, however, they underlie what became Microsoft's Passport Service.

One of the responses to Passport is known as the Liberty Alliance, a largely Sun-inspired effort to produce a genuinely open and inter-operable single sign-on and authentication standard.

The Liberty Alliance released its 1.0 Federated Network Identification and Authorization specification on July 11th, 2002. In many ways this is both a simplification and a generalization of the ideas behind Passport, but without the proprietary overtones and single point of control characterizing the Microsoft solution.

One of the most interesting things about this specification is its use of SAML (Security Assertion Markup Language) to define and control the messaging structures used in an actual implementation of the specification. Full details, including protocols and the SAML schemas needed, are available at http://www.projectliberty.org/ but, basically, the Liberty specification handles authorization in a three-stage process with all communications structured via SAML and flowing through the user's browser or other software agent.
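
The shape of that exchange can be sketched schematically. The code below is not SAML - the real messages are signed XML assertions defined by the Liberty and SAML schemas, and an HMAC over a shared secret stands in for the real signature - but it shows the essential structure: an identity provider produces an assertion, a service provider consumes it, and the two never need to talk directly because the user's agent carries the messages between them.

import hmac, hashlib

class IdentityProvider:
    def __init__(self, secret):
        self.secret = secret
    def issue_assertion(self, user):
        # stage 2: authenticate the user and produce a signed assertion
        tag = hmac.new(self.secret, user.encode(), hashlib.sha256).hexdigest()
        return {"subject": user, "signature": tag}

class ServiceProvider:
    def __init__(self, secret):
        self.secret = secret
    def admit(self, assertion):
        # stage 3: consume the assertion and check its signature
        expected = hmac.new(self.secret, assertion["subject"].encode(),
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, assertion["signature"])

trust_anchor = b"key material exchanged when the circle of trust was set up"
idp, sp = IdentityProvider(trust_anchor), ServiceProvider(trust_anchor)

# stage 1 is the redirect of the user's agent from service provider to identity
# provider; here the agent is simply the code carrying the assertion between the two
assertion = idp.issue_assertion("alice")
print(sp.admit(assertion))   # True: the service provider accepts the identity provider's word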

Microsoft, which had previously announced planned upgrades in its Kerberos-derived security for Passport, also announced, on July 16th, 2002, its intention to embrace SAML.

As currently defined, however, SAML is faithful to the distinctions that went into the SGML specification back in 1983 and therefore does not include procedural elements. Thus SAML is used in the Liberty specification as a technology-agnostic way of conveying assurance information between two or more procedural applications which, respectively, produce and consume the information.

If Microsoft were serious about its support for SAML as a standard it could, of course, adopt the Liberty Alliance single sign-on and authentication specification for its own use and let Passport, and such extensions of XML as its use to bypass firewalls for "web programming" and remote services execution, die of their own weight.

That may not strike you as likely any time soon, but stranger things have happened - including the growth of a significant following for Plan 9, the movie, and its eponymous influence on the Liberty Alliance specification.

When Ed Wood put his ideas about film making into Plan 9 from Outer Space the result became a cult classic defining an entire genre of B-movies. When Rob Pike and his colleagues at AT&T Bell Labs first defined the Plan 9 operating system many of their ideas seemed to be from outer space, and the linkage to the Dantesque horrors of Plan 9 led to such an inexhaustible source of bad puns and in-jokes that I half expect to see Sun release SunOS 3.0 on a Good Friday.

In operation Plan 9 looks a lot like Unix but it is quite different internally in that the original design took many Unix ideas for single machine environments and re-thought them for fully distributed, multiple machine, environments. Key among these is the link between user and machine. In Unix, a user authorization is fundamentally defined with respect to the resources available on a specific machine but, in Plan 9, user authorizations are defined for a distributed virtual machine consisting of many physical machines.

For detailed information about Plan 9 including history, design, and theory papers as well as downloadable source and/or binaries please see: http://plan9.bell-labs.com/plan9dist/

To learn more about security in Plan 9 please consult the paper by Cox et al at: http://plan9.bell-labs.com/sys/doc/auth.html.

Thus Plan 9 user services present as hierarchical file systems and the machines a user accesses exchange individuality for function. A user may, that is, access a service such as program execution on a CPU server without needing to know anything about that machine in terms of where it is, who owns it, what kind of CPU(s) it has, or what other resources may be available to it locally.

Within the current releases of Plan 9 the core user authentication functions are handled by an agent called factotum that handles all security interactions on the user's behalf: instruct factotum about the clearances you have, and any service requiring authentication can query it, instead of you, to determine what to do about the request.

Technically this has the enormous benefit of taking the entire cryptographic exchange burden out of the hands of both users and application designers; managerially it completely avoids most of the complexities associated with supporting large numbers of users in many-owner single sign-on environments.
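
A toy version of the idea, emphatically not Plan 9's actual factotum protocol but an illustration of what delegating challenge/response to a key-holding agent looks like, might run as follows in Python.

# Toy illustration of the factotum idea: applications never see keys; they hand
# a server's challenge to a local agent, which answers on the user's behalf.
# This is a schematic, not Plan 9's actual authentication protocol.

import hmac, hashlib, os

class Agent:                                   # the factotum-like key holder
    def __init__(self):
        self.keys = {}                         # service name -> secret key
    def add_key(self, service, key):
        self.keys[service] = key
    def respond(self, service, challenge):
        return hmac.new(self.keys[service], challenge, hashlib.sha256).hexdigest()

class Service:
    def __init__(self, name, key):
        self.name, self.key = name, key
    def authenticate(self, agent):
        challenge = os.urandom(16)             # fresh challenge per attempt
        answer = agent.respond(self.name, challenge)
        expected = hmac.new(self.key, challenge, hashlib.sha256).hexdigest()
        return hmac.compare_digest(answer, expected)

agent = Agent()
agent.add_key("cpu-server", b"user's secret for this realm")
print(Service("cpu-server", b"user's secret for this realm").authenticate(agent))  # True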

Showing off - in Friulian
The first implementations relied on Gnots for display purposes. The Gnot was a true smart display running the 8½ window manager - a custom-built, MC68020-based terminal with a big screen and powerful bit-mapped graphics intended to rigorously separate display from file or CPU functions.

In its simplest form, you authenticate at the local level, instruct factotum on dealing with authentication queries, and it works with the network operating system and factotum-aware applications to automatically recognize that authentication anywhere the system operates.

Given that the factotum implementation is rigorously based on a mathematical representation of the authentication problem, can use multiple encryption methods independently, and is operationally quite simple, it is likely to be extremely difficult to subvert.

Factotum is considerably simpler in concept and more robust in implementation than the protocol and strategy produced by the Liberty Alliance. The two do, however, bear a vague relationship; perhaps what you'd get if some members of the committee putting together the Liberty protocol looked at Passport to see what errors to avoid while others remembered reading about Plan 9's authentication solution. Read the Liberty documentation carefully, recognize that the alliance specification has to work over a much larger and more complex set of ownership, control, and technical interactions, and the two sets of ideas start to look like cousins.

Commercial Realities?
A relative, the Andrew File System from Carnegie Mellon, is in widespread use.

Another closely related technology, that of the pluggable authentication module, has long been available on Linux.

The Kerberos reference page provides links to information about the original MIT/CMU product.

One of the links between them is in an alliance specification concept called "circles of trust" which would map rather well to global, or enterprise-wide, Plan 9 implementations if those existed in commercial reality. In such an environment the network really would be the computer - and that's a key reason I expect SunOS 3.0 to incorporate this functionality as it absorbs more and more of the Plan 9 idea set to deliver truly distributed access to computing resources. If it happens, that will provide a powerful commercial reason for people to adopt these ideas and thus create additional impetus behind their adoption in the general open source world.

If so, we'll have a very clear cut battle for market dominance between Microsoft and open source ideas on single sign-on: the X-files pile complexity on improbable foundations to derive Passport from SGML, while Plan 9 represents the evolution of Unix through simplification and the re-thinking of very basic design ideas.

In the old days of Windows dominance the outcome would have been a no brainer: when Microsoft pointed its checkbook, people surrendered. But that world is rapidly going away. Now ordinary users mutter about security, national governments are looking at Windows brand products as national security risks, and lawyers are gearing up to feast at the torte table as the courts start to enforce our liability for losses to clients whose information we abuse.

With factotum in play, legally "accepted industry practices" may soon no longer include Passport, and trying to make a quick change back to Firefly's original ideas or the MSN wallet solutions will probably just make all of this worse for Microsoft. Why? Because "accepted industry practices" in an unregulated industry are set by a kind of majority vote, and the dominance of the Apache toolset means that Microsoft won't have the votes to enforce their approach over the objections of technologists or in spite of the legal risks run by their customers.

Open source, on the other hand, continues to gain ground as part of the solution, and a rapid, public win by factotum over Passport may finally be enough to flip attitudes in the mainstream press from near-automatic approval to due cynicism when they get those Microsoft press releases.