% fortune -ae paul murphy

From Chapter Three: The Windows Culture

This is the 25th excerpt from the second book in the Defen series: BIT: Business Information Technology: Foundations, Infrastructure, and Culture

About this grouping

From a business perspective the key characteristics of this architecture include:

  1. Desktop services are delivered through desktop computers (PCs) that have all the hardware components needed to operate as stand-alone machines;

  2. Shared services are delivered through applications, known as servers, that run on dedicated and repackaged PCs, also known as servers;

  3. PC servers are stacked using rackmounts containing multiple independent machines and usually run one application per machine.

    This example [Note: diagrams and photos are omitted here] contains eight dual-processor servers; some rackmounts hold up to 84 separate computers, which may or may not share power, disk, and network resources. Blade servers, which must share data and network resources, can be stacked up to 256 per rack.

    Although each machine in a rackmount has its own memory and CPU, units usually share power, a control monitor and keyboard, and the network connection.

  4. Mainframe-style OS virtualization has recently become important to PC data centers. In this approach a basic OS (now usually Linux based) is used to multi-task the processor(s) to run other operating systems as if they were applications - allowing one machine to run multiple applications, each under its own OS instance.

  5. Users interact mainly with a help desk organization operated at arm's length by or from the internal systems group; and,

  6. Business management's interest in systems usually focuses on reliability and cost containment, while data center management usually operates in rear-guard mode on security, unexpected user activity, and externally mandated upgrades. The underlying architecture is called "client-server" and is now generally implemented using a strategy called "three tier."

    Note that both the hardware and the software running on it are called "servers" while both the user-accessible software running on a PC, and the PC itself, are called "clients". Servers (hardware) differ from clients (hardware) only in their configuration, not their fundamental design. A typical server can be used as a PC by adding a separate monitor and keyboard, and most PC client machines can be used as servers.

    For example, a business application like a general ledger (GL) will typically be accessed using the following steps:

    1. Each time the user restarts the desktop PC, usually at least once per day, it loads one or more pieces of client software from a file server (a machine and application combination which stores files for forwarding to client PCs; in this configuration, often also called a boot server).

    2. The user starts the financial system client as an application running under the local operating system, usually some Microsoft Windows variant, on the desktop PC.

    3. That client (software) then connects to one or more application servers (software) via the network. It might, for example, connect to the application's reporting and status component running on a Windows Server 2003 machine somewhere and to the application's database running under SQL Server (a Microsoft database management system) on a Windows Server 2003 Enterprise Edition machine somewhere else.

    4. When the user takes some action, such as logging in, the client software evaluates the action and sends it to a server - in this case typically a Windows Server 2003 domain controller running Active Directory - which generates the response.

    5. Once the remote server, or servers, respond, the client software formats that response for display to the user and waits for the next user action.

    In operation a GL can therefore involve as many as a dozen different machines, all running different server applications.
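
    The client/server division of labor in the steps above is easy to see in code. What follows is a minimal, hypothetical sketch of steps 3 through 5: a desktop client connecting to an application server over TCP, sending a user action, and formatting the response. The host name, port, and JSON-over-TCP wire format are all invented for illustration - a real GL client would use a vendor-specific protocol or a database driver.

        import json
        import socket

        APP_SERVER = ("gl-app.example.com", 7001)   # hypothetical application server

        def send_action(action: str, payload: dict) -> dict:
            """Ship one user action to the application server; return its reply."""
            request = json.dumps({"action": action, "payload": payload}).encode()
            with socket.create_connection(APP_SERVER, timeout=10) as sock:
                sock.sendall(request + b"\n")
                reply = sock.makefile().readline()   # wait for the server's response
            return json.loads(reply)

        # Step 4: the user logs in; the client evaluates the action and sends it on.
        response = send_action("login", {"user": "jsmith", "password": "secret"})

        # Step 5: format the server's response for display; wait for the next action.
        print("Login", "succeeded" if response.get("ok") else "failed")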

    The first PC networks, from Novell, used a server for file and print sharing via a single shared disk drive. In this configuration, which is now the standard for office document storage and is used in almost all business implementations of the PC architecture, the central server (or rack of servers) drives one or more shared printers and acts as the repository for shared documents.

    In these configurations the client PC, at boot time, connects to the server via SMB (Microsoft's file and printer sharing protocol, originally carried over the now deprecated NetBEUI transport and today usually over TCP/IP) and represents this connection to the user as a virtual disk, usually labelled G: or H:. Users then access these documents using an application such as Microsoft Word, which treats this virtual disk as if it were a real local disk when reading or writing documents.
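
    From the application's point of view the mapped drive really is just a disk. This sketch (the share and file names are invented) shows a script reading and writing a document through the H: mapping exactly as it would on a local drive; Word's file dialogs see the same paths:

        from pathlib import Path

        # H: is the virtual disk the PC mapped to the file server at boot time;
        # the share and file names here are invented for illustration.
        doc = Path(r"H:\shared\budget_2007.txt")

        # Reads and writes travel over SMB to the server, but the application
        # sees only ordinary file operations on an ordinary drive letter.
        doc.write_text("Q1 budget draft\n")
        print(doc.read_text())

        # The same file is reachable without the drive mapping via a UNC path:
        alternate = Path(r"\\fileserver\shared\budget_2007.txt")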

    In practice, management methods vary tremendously. Small installations, from single PCs acting as their own servers to several hundred PCs and a substantial number of servers, often have no formal management strategy at all. Instead the people responsible for systems operations focus entirely on day-to-day support and on getting upgrades and related costs approved on an ad hoc basis rather than through a formal budget process.

    In larger organizations two main, and complementary, management trends exist:

    1. Help desk out-sourcing, in which most day-to-day user support is contracted to an external organization which provides telephone-based assistance and may, or may not, extend to managing the desktop PC investment and related licensing; and,

    2. Lockdowns, in which all desktop control is shifted to the central systems managers.

      In a lockdown scenario systems management attempts to get control of the PC investment by "locking down" the user's desktop operating system to define a single, company-wide boot strategy, network identification, and application suite - in effect turning the PC into a simple graphics terminal on the old mainframe model.
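
      Mechanically, much of a lockdown comes down to policy values written into each PC's registry - normally pushed from the center via Group Policy rather than set by hand. As a sketch only, the following sets one real Explorer policy value, NoControlPanel, which hides the Control Panel from the logged-in user; a full lockdown sets hundreds of such values:

        import winreg  # Windows-only standard library module

        # Policy values live under ...\Policies; Group Policy normally writes
        # them centrally, but the registry effect is the same as setting them
        # directly, as done here.
        KEY = r"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer"

        with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, KEY) as key:
            # NoControlPanel = 1 hides the Control Panel from this user.
            winreg.SetValueEx(key, "NoControlPanel", 0, winreg.REG_DWORD, 1)

        print("Control Panel hidden for the current user (effective at next logon).")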

    There is considerable research supporting the view that operational costs form a continuum: lowest with maximal centralized control or lockdown, highest with maximal user freedom. Companies which impose the fewest restrictions on user access to desktop PC services experience the highest costs, while companies which go furthest in restricting and controlling users experience the lowest.

    Since the introduction of Windows 2000 Advanced Server in early 2000, companies have been able to reduce complexity within the data center through server consolidation - replacing many small servers with fewer, larger ones. That became possible for three reasons:

    1. Intel released an upgraded Xeon chip (the successor to the Pentium Pro in Intel's server line) capable of two- and four-way symmetric multi-processing (8-way and larger machines use multiples of the four-way unit);

    2. Microsoft released Windows 2000 Advanced Server - operating system software capable of exploiting these multi-processor Xeon machines; and,

    3. The price balance between hardware and software licensing made upgraded hardware and server consolidation seem financially sensible.

    Since then Microsoft's licensing policies have changed and now (2007) militate strongly against running more than one application on upgraded servers. Instead, per-CPU pricing has driven first a move to smaller servers and, more recently, PC virtualization, in which a low-level OS is used to break a larger machine into many smaller "virtual servers," each running one licensed application in its own copy of the licensed OS.
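
    The virtual-server arrangement is easy to see from the host side: one physical machine, several independent OS instances, each holding its one licensed application. As a sketch only - assuming a Linux host managed through the libvirt toolkit and its Python bindings, neither of which is named in the text - the following lists the guests sharing one box:

        import libvirt  # assumes the libvirt-python bindings are installed

        # Connect to the local hypervisor; the qemu:///system URI is an
        # assumption for illustration - the URI depends on the layer in use.
        conn = libvirt.open("qemu:///system")

        for dom in conn.listAllDomains():
            state, max_mem_kb, _, vcpus, _ = dom.info()
            status = "running" if dom.isActive() else "stopped"
            # Each guest is a complete OS instance with its own licensed copy
            # of Windows or Linux, typically dedicated to a single application.
            print(f"{dom.name():20s} {status:8s} {vcpus} vCPU {max_mem_kb // 1024} MB")

        conn.close()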

    Note too that both server consolidation and the lockdown approach to cost containment turn the desktop PC into a terminal accessing applications on centrally controlled servers. This strategy thus represents an implementation of the low cost application appliance environment using high cost client-server hardware and software.


    Some notes:

    1. These excerpts don't (usually) include footnotes and most illustrations have been dropped as simply too hard to insert correctly. (The WordPress HTML "editor" as used here allows only a limited HTML subset and is implemented in ways that force frustrations reminiscent of the CP/M line delimiters MS-DOS inherited.)

    2. The feedback I'm looking for is what you guys do best: call me on mistakes, add thoughts/corrections on stuff I've missed or gotten wrong, and generally help make the thing better.

      Notice that getting the facts right is particularly important for BIT - and that the length of the thing plus the complexity of the terminology and ideas introduced suggest that any explanatory anecdotes anyone may want to contribute could be valuable.

    3. When I make changes suggested in the comments, I make those changes only in the original, not in the excerpts reproduced here.