Professional programs like those leading to the MBA or an accounting designation generally include mandatory coursework aimed at giving students some understanding of information technology. As a result, faculty at hundreds of universities and colleges need high-quality textbooks and related materials for use in courses designed to deliver on these requirements according to the ACM, IFAC/IEG, and other professional competency maps.
This paper reviews two leading textbooks aimed at this market, finding serious structural problems that militate against learning; numerous errors of both commission and omission; and extensive use of unsubstantiated, as well as often misquoted and/or fundamentally incorrect, press reports in illustrations of supposedly real-world IT applications.
A further review of six other textbooks shows that these two works are not exceptional, suggesting that this level of error is endemic in the introductory IT textbook publishing industry.
The two texts most closely scrutinised for this paper are:
This book has recently become available in a fourth edition. However, checksum calculations done on the accompanying downloadable Tech Guides [TG] show them to be unchanged.
The six books reviewed for Section Three are:
The Turban book was selected as one of the two anchors for this review because the primary author of this article had the opportunity to step in as a sessional instructor for Accounting 241, Introduction to Business Information Technology, at a local university, and this text had previously been selected for that course. The Gordon book was selected as the other anchor because, among seven previously unopened "possible 241" books accumulated from colleagues and vendor reps since the adoption of the Turban book, it seemed to come closest to covering the material specified as required in the IFAC/IEG competency map for the accounting professions.
Section Two, below, discusses the nature and extent of the errors found in the two anchor books. Section Three briefly reviews the other six candidates to see whether the anchors are exceptional. Section Four raises the question of faculty options and responsibilities with respect to using these books in teaching.
Consider the following three quotations:
These illustrate three different kinds of factual error.
The first type is simple nonsense presented as fact. At the time the book was published IBM's z916 mainframe offered up to 16 accessible CPUs and the i840 mini-computer offered up to 32.
The second type is more difficult to respond to because it usually contains multiple errors along with something sufficiently arguable, and yet technically wrong, to give reasonable readers the feeling that only a nit-picking zealot would refuse to accept the whole statement.
Thus in the example shown all the "facts" presented are wrong, but a knowledgeable reader will understand what the authors meant to say, and therefore be tempted to forgive, or even not notice, the imprecision. A student new to the subject, however, cannot be assumed to know that "systems software" is not a synonym for "operating system"; that neither systems software nor operating systems provide "self regulatory functions"; that all major operating systems today rely on boot loaders; or that "Windows" is a Microsoft brand name, not a product. That student, therefore, has no way of knowing whether the statement is meant to describe Windows 98 Professional (an MS-DOS application rather than an OS), Windows 2003/XP Professional (an OS which starts from a boot loader), or even a Windows Longhorn developer pre-release (in which a boot loaded core OS starts its file system as an application).
The problem, of course, is that our willingness to understand what the authors meant to say invites us to gloss over the fact that everything they use to say it is wrong, and therefore damaging to a student coming to the material for the first time.
The third example illustrates a type of error in which the core element of the statement might be correct if extensively qualified but is wrong as written. It is true, for example, that software striping is used in some forms of RAID and can, in some circumstances, contribute to increased throughput relative to standard I/O against single disks. In general, however, both the core statement about parallelism in RAID and its absurd trailer on physical disk size, are wrong.
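For readers who want the core idea made concrete, here is a minimal sketch of the block-level striping used in RAID 0; the disk count and block size are arbitrary illustrations, not real controller geometry:

```python
# Minimal sketch of RAID-0 style block striping: logical blocks are
# assigned round-robin to member disks, so a large sequential transfer
# can, under the right conditions, be serviced by several disks in
# parallel. The disk count and block size below are invented examples.
def stripe(data: bytes, disks: int, block_size: int) -> list[list[bytes]]:
    """Split data into fixed-size blocks and assign them round-robin to disks."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    layout = [[] for _ in range(disks)]
    for n, block in enumerate(blocks):
        layout[n % disks].append(block)
    return layout

layout = stripe(b"abcdefghijkl", disks=3, block_size=2)
# Disk 0 holds blocks 0 and 3, disk 1 holds blocks 1 and 4, and so on.
```

Even this toy layout shows why the blanket claim fails: the parallelism only pays off when requests actually span multiple member disks, which depends on workload, block size, and the other qualifications noted above.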
At bottom all three of these examples are made up of simple errors of fact, but they form a hierarchy in which the temptation to just let the errors pass increases with the amount of shared knowledge needed for a full explanation. Explaining to students that companies like CDC and Univac offered multi-processor machines long before IBM put its System 360 CPU components into a separate rack, or main frame, is easy. Explaining the effect each of the seven most common RAID levels has on data throughput in read/write operations under differing application-workload, operating-system, hardware-support, and buffer-size scenarios requires extensive shared knowledge and considerable time.
A second class of error, the failure to recognise or accommodate change, is both much subtler and much harder to deal with than simple error.
Consider the following:
What appears to be going on in the first example is that Turban et al start by explaining OCR in terms of what it meant for label recognition at a mainframe shop circa 1982, then mash together the scanning and OCR (Optical Character Recognition) processes used on PCs today, and complete their chronological tour by returning to the late eighties with the information that OCR accuracy is improved if the input documents use a font size and style for which the OCR software is optimised.
Something similar happens in the extract from Gordon et al. This definition occurs in the context of a diagram intended to indicate the parts of a computer, but which looks a lot like an IBM System 370 just after the introduction of the glass terminal in 1972 and not at all like a modern Sun V880 or Dell 6650.
From today's perspective all of the component statements in both examples are wrong but, with exceptions like the gratuitous comment about laptops, the merging of scanners and scanning software with OCR, and the omission of BSD ports, most were more or less applicable to some technologies, or components of technologies, at some time in the past.
Unfortunately there's nothing in the text to give the student coming to this for the first time any indication either that these are incomplete views or that most of the information doesn't apply today. As far as the student is concerned, the books were published in 2002 and 2004 respectively and are, therefore, current. Put this stuff on an exam, and students will agree that the "R" in OCR stands for Reader, that scanning is an OCR component, that defining fonts and sizes in the JCL improves recognition accuracy, and that computers are accessed via displays which connect to the computer via physically distinct ports on controllers connected to the CPU through other ports.
That same anachronistic view is embedded everywhere. In both books, for example, hierarchical databases are discussed on an equal footing with relational methods for data integration, while the diagramming methodologies applicable to the two technologies are discussed together with no effort at drawing distinctions in terms of either time frame or applicability. This hides the temporal relationships between technologies and methods from the reader, with the resulting contradictions preventing that reader from forming any coherent mental map of the industry.
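The distinction the books blur is easy to show in a few lines: in a hierarchical model, access follows a fixed parent-to-child path baked into the data's structure, while in a relational model the same facts are rows retrieved by predicate. The sample data below is invented purely for illustration:

```python
# Hierarchical model: the navigation path (dept -> sales -> employees)
# is fixed by the structure itself; asking a different question means
# restructuring the data. All names here are invented examples.
hierarchy = {"dept": {"sales": {"employees": ["Ng", "Ruiz"]}}}
sales_staff = hierarchy["dept"]["sales"]["employees"]

# Relational model: the same facts as rows, retrieved by a declarative
# predicate that is independent of any navigation order.
employees = [
    {"name": "Ng", "dept": "sales"},
    {"name": "Ruiz", "dept": "sales"},
    {"name": "Okafor", "dept": "audit"},
]
sales_staff_rel = [row["name"] for row in employees if row["dept"] == "sales"]
```

The diagramming methods the books lump together follow from this difference: tree diagrams describe the first model, entity-relationship diagrams the second.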
In some books anachronism is elevated to structure. The Gordon book, for example, is structured according to a mid-seventies System 360 SDLC with the IT decision modelled as a process starting with needs evaluation and progressing through design to development and implementation. That's fundamentally inappropriate enough, but layered on top of this framework is a totally incompatible, and equally inappropriate, focus on the Microsoft PC and related software. The effect is to create an image of the industry that's so distorted by inappropriate structure and improbable naivete that there's nothing contextually incongruous about the image, presented to students as realistic (pp 327ff), of a Fortune 1000 business stepping through the traditional waterfall process to deliver enterprise applications written using Microsoft Access.
The third, and in many ways most pernicious, class of error is omissions.
In the real world technologies are invented by people and marketed by companies. In Gordon's world brand name omission operates like censorship, most recently avoiding references to companies and products competing with Microsoft while occasionally bowing to their former dedication to IBM through continued use of words like "adapter," a failure to excise lengthy sections on Token Ring and ATM, and their deep structural commitment to the 360 architecture with its accompanying management methods.
Turban et al carry this one step further, sanitising their work of most direct references to specific brands while allowing their enthusiasm for all things dot.net to permeate everything. Thus neither book spends any real time on negatives: things like the inherent weaknesses in the x86 architecture that give rise to buffer overflow exploits get no mention, while the functional viability and security issues encountered by Microsoft's customers are unmentioned or given a false positive gloss: "Virus checkers identify and eliminate computer viruses." [Gordon, P. 97]
The issue with omission isn't that not discussing something is itself an error, it is that the effect in this case is to present the student with a wholly erroneous view of business computing. In the real world the internet, most telecommunications, and nearly all major corporate data centers rely on Unix. In contrast the indices to the cumulative 4,031 pages of these eight books provide a total of eight pointers to mentions of "Unix". In total, these authors devote roughly one word per thousand to all of Unix including BSD, GNU, Linux (really GNU/Linux), Solaris, the GPL, and the entire open source movement.
Teaching an introductory business computing course without reference to Unix and other non-Microsoft technologies amounts to an absurd misrepresentation, roughly comparable to teaching a course in the fundamentals of democracy without reference to England or the United States. Unfortunately it's also easily accepted as a necessary, even useful, simplification because our own knowledge as instructors gets in the way of understanding how these books affect the student. We see the use of the PC as illustrative; they see it as exhaustive.
It is possible to explain mountains in terms of molehills, but it takes comparison, not omission. Thus Apple's absence from the Gordons' world is not illustrative, it is misrepresentation. Partial absence can be as bad or worse. For example, Linux does exist in the Gordons' representation of IT, but what little they say about it is mostly wrong both in terms of what they include and in terms of what they omit.
"A variant of Unix called Linux became popular in the late 1990s. A Finnish graduate student named Linus Torvalds developed the software and purposely disclaimed any rights to it, leaving it in the public domain, with the condition that its code and all future versions developed from it remain open to view and change. Several companies, most notably Red Hat and Caldera, modified the software and then created versions having the same system calls and user interface to operate on many different types of computers."
This misstates the nature of Linux (a Linux kernel with GNU open source "outerwear"); misstates the origin of Linux (Minix); misstates the controlling role played by Mr. Torvalds (at the OSDL); confuses the GPL with an absence of copyright; omits the pivotal role played by the general access license issued by Prentice Hall on publication of the Minix source code in Andrew S. Tanenbaum's seminal 1987 book Operating Systems: Design and Implementation; misstates core OS functionality; misstates Caldera's role; and omits the role of the independent open source developers behind GNU and GUIs like KDE and GNOME.
More importantly, however, the Gordon book also omits essentially the entire open source movement, devoting one paragraph to Unix and essentially nothing at all to BSD, GNU, the OSF, the GPL, and Solaris. There are two references to Sun in the index, but one of these is wrong (the page 93 reference to "StarSuite") and the other buries mention of the industry's most innovative company in a list of OASIS members.
In contrast to Turban's coyness about their loyalties to Microsoft, Gordon et al populate their text with loving references to Windows 2003/XP, use dozens of illustrations taken from Microsoft products, and support this year's Microsoft line in things like the (incorrect) definition of XML as a computer language (P. 407), their (incorrect) invocation of an "XML Database model" on page 141, and their failure to mention the security implications of webXML.
In comparison, Turban et al do much better on the specific issue of XML, providing this definition:
XML (eXtensible Markup Language) is optimised for document delivery across the Net. It is built on the foundation of SGML. XML is a language for defining, validating, and sharing document formats. It permits authors to create, manage, and access dynamic, personalised, and customised content on the Web -without introducing proprietary HTML extensions. [Turban, TG-02]
Substitute "useful" for "optimised;" replace "It is built on the foundation of SGML" with "It establishes a class of SGML compliant DTDs;" substitute "protocol" for "language;" rephrase "-without introducing proprietary HTML extensions" as "-by replacing the HTML DTD," and this would be about right.
The fourth, and last, class of error arises from the use of popular press stories to illustrate or support some nominally factual presentation in the book. This might seem perfectly reasonable, except that the stories as told in the books tend to be unidimensional, relentlessly upbeat, frequently unsubstantiated, and often fundamentally dishonest.
On a word count basis more of the Turban text is devoted to the authors' enthusiasm for various forms of dot.net use than to any other subject. Thus the Bristol-Myers Squibb [BMS] mini-case on pages 2 and 3 of Chapter One both previews the text in its glowing portrait of BMS's many successes in the profitable exploitation of this technology and nicely exemplifies this class of error.
This mini-case starts with a list of eight "Enitiatives" undertaken as part of "the Solution" to problems faced at BMS, but several appear to have no basis in the source document and at least one contradicts widely available information about BMS. Worse, the key claim made in the text misrepresents the content of an InternetWeek interview cited as the source for the report.
Here's item three on Turban's list of "enitiatives":
"An e-procurement system makes possible purchases of equipment, PCs, and office supplies on-line. With this system 30,000 purchasing agents now use standard procedures, and inexperienced employees can be guided through the now-standardised acquisition process. The system enables BMS to track end-user spending in the company, and in turn, lets IT managers channel users to preferred suppliers" [Turban, P. 2]
Item three is referred to further under "The Results":
"It is difficult to estimate results at this early stage, but BMS's chief information officer (CIO) estimates $100 million annual savings just from e-procurement." [Turban, P. 3]
Here's the applicable excerpt from the source interview with CIO Jack Cooper as published by InternetWeek:
Bristol-Myers became serious about e-business about two years ago, when the company implemented an Ariba e-procurement system to streamline the ordering of equipment, such as PCs and office supplies, across the company's worldwide offices.
Prior to setting up that system, it was difficult to get the some 30,000 buyers across the company to comply with standard procedures for ordering equipment, much of which was bought over the phone. The Ariba ORM application leverages thin clients, the Internet and corporate intranets to track end-user spending and, in turn, let IT managers channel users to preferred suppliers.
A simple Wizard-based interface guides inexperienced employees through the acquisition process, while more experienced employees use a more sophisticated interface. Cooper estimates the system has saved the company close to $100 million since its deployment.
[InternetWeek June 12, 2000.]
Notice that Mr. Cooper is quoted by InternetWeek as saying that Ariba was implemented two years earlier and has saved the company "close to" $100 million since, but that Turban et al first round this up to $100 million and then double the implied annual savings by cutting the period covered in half.
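The inflation is easy to verify with the figures as given (interview: "close to" $100 million over roughly two years of deployment; textbook: $100 million annually):

```python
# Figures as reported; see the interview excerpt and the Turban text.
interview_total_savings = 100_000_000  # "close to $100 million", rounded up
deployment_years = 2                   # system implemented "about two years ago"

implied_annual_savings = interview_total_savings / deployment_years
textbook_annual_claim = 100_000_000    # "$100 million annual savings"

inflation_factor = textbook_annual_claim / implied_annual_savings
```

Rounding "close to" $100 million up and then halving the period covered turns roughly $50 million a year into $100 million a year: a doubling.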
More subtly, however, this mini-case consciously sets the tone for the entire book, creatively extending already unsubstantiated, and highly questionable, claims of success to bolster the case for dot.net. Within the interview, for example, Mr. Cooper apparently claims that his company's use of the internet - all of which was outsourced at the time - builds on its contributions to ARPAnet. Had the authors checked, however, they would have found that those contributions were minimal and long predated Mr. Cooper's arrival; that the company's SAP/Ariba implementation, begun in 1996, was in deep trouble; that the Clairol line whose website they praise so lavishly had been gone eighteen months before their book was published; and that the "supply partnering" process change heralded with the arrival of Ariba four years before this June 2000 interview was on hold.
This kind of naive twisting of undigested, and often self serving, mass media reports colours everything in the Turban book where few of the highlighted mini-cases can stand any serious level of scrutiny. Gordon et al do this too, but generally prefer either to embed undocumented stories directly in their narrative or to quote an entire story, with very minor modification and attribution, from one of the popular press magazines.
For example, the cheering story of plumber Andy Rodenhiser's successful use of "a CDPD WAN" is integrated in the text (P. 177) and correctly attributed to Computerworld Online (15 May, 2002) on page 196. Unfortunately, the Computerworld story was integral to a Verizon marketing campaign at the time and received wide publicity; not all of it necessarily accurate. In this case a later version, based on an actual interview conducted with Rodenhiser by Robert Mader (Contractor Magazine, October, 2002) gives a very different picture, cutting the number of plumbers involved from 27 to 17; eliminating references to both Verizon and its "CDPD WAN Services"; and, putting the focus on the use of radio with the company's existing software to reduce call-back costs.
This article draws attention to four classes of error:
1. The examples used were selected, with the exception of second samples within a book where two relate to the same topic, by randomly opening the books and reading until a suitable example presented itself.
From a teaching perspective you can correct a small number of simple errors but you cannot deal effectively with a large number of errors of varying degrees of complexity. By itself, correcting Turban's view of SQL, or Gordon's view of ATM, is no big deal but attempting to correct hundreds of simple errors will cause students to question your credibility long before you get to discussing the more complex errors found in these books.
Worse, both students and faculty are trained to believe textbooks. As a result any use of these books will result in you giving students full marks for wrong answers. Consider, for example, the following partial answers to the question:
Taken together these (non-exhaustive) answers illustrate every type of error discussed in this paper, but students will study from whatever book you made them buy, and no administrator, or committee of colleagues, is going to deny a student appeal based on correctly quoting from The Book you specified for the course. As a result you would have to give students who correctly regurgitate the nonsense in their particular book full marks for each of these answers.
As teachers charged with something like an "Introduction to Computers" course we're expected to turn out students with the grounding necessary to understand deeper explorations of specific topics like applications selection processes or E-commerce security. Unfortunately books like these not only don't contain the information needed to meet typical professional standards but put up barriers, particularly in terms of error, omission, and structure, which actively prevent us from succeeding.
Most people, of course, simply choose to close their eyes to the problem, rationalising that this stuff is good enough for an introductory course while churning out graduates who think that structured programming applies to SQL-Server; that SAP should cost $399.95 and run on a $269 PC; that IBM makes mainframes but all computers run Windows; that Unix is obsolete; that Apple makes toys; and, that Microsoft's miraculous innovations power the success of web enabled businesses everywhere.
Compromise like that is understandable, but it's also professional malfeasance and has to stop.