This story generated a lot of email, most of it positive - an extremely welcome change from the deluge of hate mail generated by my series on mainframe Linux.
There was one I particularly liked because the writer said it all in very few words:
Date: Fri, 26 Jul 2002 12:31:06 -0400
X-SMTP-RCPT-TO: firstname.lastname@example.org

Great article, great article. Our company uses MacOS X as our dev environment and run Linux on our webservers. MacOS X and Linux play nice together and make for an incredibly powerful (and inexpensive) computing solution.

john koetsier
technology solutions manager
http://www.premieragendas.com/
There were a few along the lines of: "Okay, you can take me to lunch anytime, but only to a place that provides the great service I'm used to" and quite a few that reversed the logic, as someone using the moniker "space coyote" did in the slashdot discussion of the article, to poke a little fun at everyone involved:
Knowing most Linux users, since they want everything for free they'll almost certainly try and stick the Mac user with the bill. And he/she will pay it too, but not without a _whole_lot_ of whining after they find out their free lunch isn't free anymore.
There was a happy coincidence: on the same day that the article appeared on Linuxworld with my comment that OpenOffice.org was about ready to unleash their first release for MacOS X, they released it. This version is a developer release that runs under the XFree86 extensions to Darwin, but it pretty much works, and there is a strong commitment to have it ported to Aqua by January 2003.
The day after the announcement there was an incorrect news report which, while quickly corrected by both Sun and the people at OpenOffice.org, may be a harbinger of the future. The report incorrectly classified the OpenOffice.org developer release as a StarOffice release and speculated about a move by Apple to have StarOffice replace Microsoft Office. Sun denies having such plans but it makes perfect sense to think that something like that could eventually be worked out.
Some people wrote to point out errors. I hate being wrong, but love the fact that there are readers out there who care enough to kvetch and cavil when I am. So, taking the easy ones first:
If it is a percentage change, then you should show positive changes as (newamount-oldamount)/oldamount. So the MS Operating System cost percentage changes should be 400 and 565 percent. Alternatively, you could show it all as a percentage of the original amount, so for example the -85% would be shown as 15%.
I probably don't have to say it, but he's right.
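The reader's rule is easy to check with a few lines of arithmetic (a throwaway sketch; the function names are mine, not from the article, and the figures below simply illustrate the rule):

```python
# Percentage change should be (new - old) / old, as the reader notes.

def pct_change(old, new):
    """Return the percentage change from old to new."""
    return (new - old) / old * 100

def pct_of_original(old, new):
    """Alternative form: the new amount as a percentage of the old."""
    return new / old * 100

# A cost that quintuples is a +400% change, not +500%:
print(pct_change(100, 500))       # 400.0
# And a drop to 15% of the original amount is a -85% change:
print(pct_change(100, 15))        # -85.0
print(pct_of_original(100, 15))   # 15.0
```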
Quite a few people wrote to correct me on the use of the Postscript imaging model in MacOS X. Originally paragraph three said:
Technically the most interesting thing about Apple's Unix is the use of the Postscript imaging model from NeXtStep within the MacOS X layer instead of the more traditional X based approach. With Darwin you can add X11 at any time, and even run both concurrently, but the imaging system defaults to the postscript display model.
but after the first comments came in, I asked the editor to change this to:
Technically, the most interesting aspect of Apple's Unix is its use of Adobe's PDF imaging model. This is based on the PostScript imaging model from NeXtStep, within the MacOS X layer instead of the more traditional X Window System-based approach. With Darwin you can add X11 at any time, and even run both concurrently, but the imaging system defaults to the PostScript-derived PDF display model
Unfortunately, it is a whole lot more complicated than that. In fact, the more I looked into this, the less simple and obvious it seemed to get. The bottom line appears to be that MacOS X started out with PostScript and changed to PDF for some mix of commercial and technical reasons.
Here's a fairly cogent comment describing the present situation from a person who asked not to be publicly identified:
OS X's imaging model (called Quartz by marketing folks and CoreGraphics by engineering) uses client-side rendering: the window's backing store is mapped into the client app's address space as well as the window server's, and the window server just acts as a pixel compositing engine. This allows a number of different graphics libraries to be used by client apps, including native CoreGraphics as well as QuickDraw and OpenGL (or of course an app can draw pixels directly if it wants to, as many games do.)
What might be confusing you is the marketing-speak about Quartz being PDF-based. What this really means is that the CoreGraphics API provides the same graphics primitives that PDF and PostScript Level 3 do, and that it includes a PDF parser and PDF generator, which makes it very easy to work with PDF format graphics. But there is absolutely no PostScript interpreter lying around in the OS any more, nor are any drawing commands executed by the window server.
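The client-side model the reader describes can be caricatured in a few lines. This is a toy sketch, not Apple's code; every class and function name here is invented:

```python
# Toy model of client-side rendering: each client app draws pixels into
# its own backing store, using whatever library it likes; the window
# server never interprets drawing commands, it only composites pixels.

class Window:
    def __init__(self, width, height):
        # Backing store, conceptually mapped into both the client's and
        # the window server's address space.
        self.pixels = [[None] * width for _ in range(height)]

    def fill(self, color):
        # Stand-in for CoreGraphics, QuickDraw, OpenGL, or raw pixel
        # pushing: by the time the server sees it, it's all just pixels.
        for row in self.pixels:
            for x in range(len(row)):
                row[x] = color

def composite(windows, width, height, background="black"):
    # The window server's whole job: merge backing stores back-to-front.
    screen = [[background] * width for _ in range(height)]
    for win in windows:  # later windows are in front
        for y, row in enumerate(win.pixels):
            for x, color in enumerate(row):
                if color is not None:
                    screen[y][x] = color
    return screen

back, front = Window(4, 2), Window(4, 2)
back.fill("blue")
front.fill("red")
screen = composite([back, front], 4, 2)
```

The point of the design is the division of labor: because the server only composites, any client-side graphics library can participate without the server having to understand it.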
|Please stay tuned, in a couple of weeks I hope to produce something for you under the title: "Plan 9 from Outer Space meets the X-files." Believe it or not this title will make sense in terms of the current discussion!|
PostScript is a programming language which produces, as output, a subset of its own command set in which loops, for example, are "unrolled" and function definitions discarded, yielding a command file ready for interpretation by an output driver of some kind; that flattened subset is essentially what PDF captures. That means, among other things, that many different graphics packages can be unified by having them produce PDF output for routing to a single compositing and display management system. That, of course, is what MacOS X does with its Quartz 2D, OpenGL 3D, and QuickTime drawing layers.
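That unrolling step can be mimicked in a few lines. This is a loose analogy in Python, not real PostScript or PDF, and every name in it is invented:

```python
# Loose analogy: a drawing "program" containing a procedure and a loop
# is distilled into a flat stream of primitive operators, which is the
# only thing the display layer ever has to interpret.

def tick(x):
    # Procedure analogue: its body is inlined at distill time, and the
    # definition itself never appears in the output.
    return [("moveto", x, 0), ("lineto", x, 10), ("stroke",)]

def distill():
    stream = []
    for x in (0, 10, 20):       # the loop is unrolled...
        stream.extend(tick(x))  # ...leaving only primitive operators
    return stream

stream = distill()
# stream is now loop-free and definition-free: nine plain drawing
# commands that any compositing engine could consume.
```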
As a number of readers pointed out, PDF is royalty free as long as you acknowledge Adobe's copyright and don't try to change the specification. In their opinion at least (I've not tried to confirm this with Apple) those were the key reasons behind the change to PDF: it works beautifully to unify the graphics model, and it's free. I hope that's right, because it would nicely combine technical and business elegance in one decision, something you don't see very often.
A number of people queried my claim that the Darwin component of MacOS X is free, citing costs as high as $999 for the MacOS X server software, while others complained that I didn't highlight the fact that the xServe license doesn't have a user limit. In fact, Apple says the software is "free" with the machine, and I don't believe you can buy the xServe without the software; the absence of per-client licensing is pretty much the Unix standard.
There's something very strange about hardware PC pricing.
For most assembled products, whether it's a car or a kitchen appliance, the total retail cost of all the parts is ten to twenty times that of the assembled whole. Economic theory says that's due to a combination of the buying power of the assemblers coupled with their relative freedom from the packaging, legal, and distribution costs incurred by those who sell the parts individually.
In the PC industry, however, there's a whole cottage industry assembling no-brand gear because the cost of the parts is less than the cost of the whole. You can still build a clone cheaper than you can buy a pre-assembled name brand, and that just doesn't make economic sense.
A number of readers saw the article as another blow in a war against Microsoft. That's a serious oversimplification; I'm not actually against Microsoft, I'm for Unix. My comparisons are often against Microsoft's product pricing, reliability, or performance, not because I'm personally biased against the company but because it seems to be the leader of the anti-Unix camp.
Here's what a slashdot commentator using the name "namespan" (whom I don't know, really) had to say on this:
Since a lot of the discussion here is on the technical/coolness/usability merits of OS X, I think most of you may have missed the point of the article: the insightful analysis about how for Windows/x86 systems, the OS cost is becoming more and more of the total cost of the system. And more than that: how this kind of report is exactly the kind of thing that gets attention of managers, because it's a cost/benefit analysis. Not only that, but it's a big-picture, trend-based analysis. It gives some indication of what to expect in the future, it gives some indication of how to save now, and finally, since it's slightly non-obvious, it flatters those who understand it
And, finally, several people, including an Apple sales rep, sent me references to a set of xServe benchmarks on the Apple site. All of these show the xServe beating competitive models from other companies including IBM, Sun, and Dell.
What makes two of these results particularly interesting is that they show the value of optimizing software to take advantage of the hardware; reversing an effect I think of as "regression to the dumb" to achieve impressive results.
Regression to the dumb reflects, I think, the marketing tendency to focus on simple things that are easy to communicate in a volume market and thus to elevate these simplifications to the level of de facto standards engineers then have to accommodate in product or process design.
The megahurtz wars, long a sore point for Mac and Sun users, seem to illustrate this perfectly. Each new generation of x86 CPUs has done rather less per cycle than the one before, but driven the claimed megahertz number up because that's become the number that moves product. Along the way some very good technologies have been abandoned and software developers have been taught to avoid making their code dependent on chip specific features that could easily go away with the next iteration.
So what happens if you look carefully at the technical advantages you've got and optimize your code, and your hardware, accordingly instead of just going with industry averaging practices?
In this case, Apple's Advanced Computation Group, working with Genentech, modified an application widely used in genetics and related research to make maximum use of the hardware. As a result the BLAST benchmark, which searches a genetics database for matches, shows the dual 1GHz xServe beating an IBM x330 with dual 1.4GHz P3 CPUs by factors ranging from 5.8 to 21 (and a Sun V100 by up to 52 times) depending on the length and precision of the matches.
Technically, I believe there are two factors at work here: the xServe has faster memory and a cleaner data path to the CPU, and Apple's four-way ATA design is both faster and cheaper than a single-path RAID card.
In both cases, better technology used in smarter ways wins. As in, duh. But managerially, what they've done here is pretty cool, because they're standing up for excellence instead of collapsing the technical tent and going off in search of volume.