The word "mainframe" originally referred to the physical layout of the IBM System 360 series: the primary CPU was mounted in a separate wiring cabinet, or "main frame." Today's mainframe, although a direct descendant of that 360 in terms of software, differs dramatically from it in internal architecture.
That divergence started in the late nineties, when IBM decided to standardize its AIX, OS/400, and zOS hardware on the same basic components and the same basic plug-together design. Thus today's z9 "enterprise" and "business" class servers look a lot like older iSeries and pSeries machines internally, and are really distinguished from those at the hardware level only in terms of packaging and accommodations made to legacy mainframe software and related ideas.
The latest z9 series CPU board, called an MCM in IBMese, is embedded in what IBM calls "a book" and comes with up to 16 enabled CPU cores (aka processing units) and up to 128GB of enabled memory. Single z9 machines now max out at four books, or 64 CPU cores. Of these, however, at least two cores per board have to be dedicated to "storage assist processing" (i.e. I/O support), and at least two per machine have to be set aside as spares - i.e. the machine offers a maximum of 54 usable processing units and 512GB of accessible memory.
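The capacity figures above are easy to sanity-check; a quick back-of-the-envelope calculation (using only the numbers quoted in the paragraph - the variable names are mine):

```python
# z9 capacity arithmetic, using the figures quoted above
books = 4              # maximum books per machine
cores_per_book = 16    # enabled CPU cores (processing units) per MCM
mem_per_book_gb = 128  # enabled memory per book, in GB

total_cores = books * cores_per_book  # 64 cores in a maxed-out machine
sap_cores = books * 2                 # two storage assist processors per board
spare_cores = 2                       # two spares set aside per machine
usable_cores = total_cores - sap_cores - spare_cores

total_mem_gb = books * mem_per_book_gb

print(usable_cores, total_mem_gb)  # 54 512
```

That is, 64 - 8 - 2 = 54 processing units left for real work, and 4 x 128GB = 512GB of accessible memory.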
The CPU used is the 1.65GHz dual-core Power5+ with some changes in the enabling firmware and socket wiring to support traditional mainframe features like CPU sparing and up to 40MB of shared cache per MCM. Interconnects across the MCM have relatively low latency, enabling dual eight-way SMP blocks in the pSeries and allowing zOS to co-ordinate up to four eight-way blocks to form a 32-processor virtual machine - the largest logical partition yet supported on the IBM mainframe.
The z9 supports up to 32-way clustering via processor dedication. It also offers a very wide range of external I/O capabilities built around a maximum of 64 standardized ("self-timing") interface boards, each capable of handling up to 2.7GB/sec in flows from plug-in Ethernet, disk, or other controllers.
All of this comes in a nicely packaged "mainframe" drawing from 6 to 19kVA and typically taking up about a square meter of floor space.
In many ways this is an entirely admirable machine: physically small, relatively power efficient, reasonably fast for many small tasks done in parallel, capable of connecting to a lot of legacy storage, and capable of supporting hot-plug replacement for almost everything from I/O devices to processor books.
IBM stresses both reliability and virtualization in its sales presentations on this product line. Of these two, the reliability claims are mostly a case of saying what the customers want to hear - because, in reality, things like having spare CPUs online are just costly hangovers from the System 360, and there's nothing otherwise significant in the architecture that isn't matched in IBM's other high-end Power products.
Notice that this doesn't mean z9 machines aren't highly reliable - they are; but the reliability gains over the comparable AIX and OS/400 machines are due to the management methods and software that go with the box, not the hardware.
The heart of the system lies in its virtualization capabilities. The maxed-out z9 can be broken up into 60 logical partitions, each of which looks like an entire machine to its operating system code. Load VM on one of these and it, in turn, can host a significant number of concurrent guest operating systems like Linux or one of the older 360 OS variants.
This structure reflects the machine's data processing heritage, in which it is assumed that all jobs are batch (even the on-line ones), all batches are computationally small and severable, and real work consists of sequential read, manipulate, and write operations on each of many very small data sets. Thus most jobs need fractional CPU shares, eight-way SMP more than suffices for the largest jobs, and the use of logical partitioning and guest operating systems offers a proven way both to enforce job severability for scheduling and to protect one job from another at run-time.
This role conceptualization also has many consequences for hardware and software licensing. In particular, software is usually licensed by processor count and relative performance - meaning that limiting licensed code to a particular logical partition size or to a resource-constrained guest operating system reduces costs significantly. Since it's difficult to apply this logic to software IBM wants to promote, or to stuff that can use the whole machine, IBM has developed licenses based on specified uses for processors. Thus the IFL (Integrated Facility for Linux) is an ordinary CPU licensed only to run Linux, the zAAP (zSeries Application Assist Processor) is one licensed only to run Java, and SAPs (CPUs used as storage assist processors) don't usually count for licensing.
An ultra-low-end machine, with a fractional single-CPU license, starts at about $100,000 before storage and application licensing.
A maxed-out, 54-processor, 1.65GHz, 2094-754 z9 "enterprise" machine with 512GB of accessible memory is thought to cost about $22.5 million at list, plus about $92,700 per month in maintenance, before software and storage.
According to an IBM TPC filing, an IBM p5-595 with AIX, 2048GB of accessible RAM, and 64 cores at 1.9GHz lists at about $12.4 million - or about 45% ($10 million) less than the mainframe.
Similarly, the list price on an IBM 9133-1EB 550Q rackmount P550 with eight cores and 16GB of memory is about $37,100. In other words, four racks filled with a total of 64 of these machines would provide about twice the total memory offered by the z9, about eight times the total CPU resource, about eight times the total I/O bandwidth, four more "hard-wired" logical partitions, and just over $20 million (90%) in change.
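As a rough check on that rack-versus-mainframe comparison (list prices and counts as quoted above; this is back-of-the-envelope only, and the raw core ratio ignores clock-speed and workload differences):

```python
# Maxed-out z9 versus four racks of P550s, figures as quoted above
z9_list = 22_500_000   # 2094-754 z9 "enterprise" list price
z9_cores = 54          # usable processing units
z9_mem_gb = 512        # accessible memory

p550_list = 37_100     # 9133-1EB 550Q list price
p550_cores = 8
p550_mem_gb = 16
machines = 64          # four racks of sixteen

rack_cost = machines * p550_list                # ~$2.37 million
change = z9_list - rack_cost                    # ~$20.1 million left over
mem_ratio = machines * p550_mem_gb / z9_mem_gb  # 2.0x the memory
core_ratio = machines * p550_cores / z9_cores   # ~9.5x on raw core count

print(f"${change / 1e6:.1f}M change, {change / z9_list:.0%} of z9 list")
```

On raw counts the core advantage works out closer to 9.5x than 8x; the "eight times" figure above presumably discounts for the z9's slightly higher clock and other overheads.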
Tomorrow I'm going to talk about running Linux on the z9, but the conundrum to ponder here is a simple one: if the value of the mainframe lies in its virtualization capabilities, why doesn't the 64-way rackmount offer a better solution?