% fortune -ae paul murphy

Turtles vs Hares

In Aesop's version of the story a bragging rabbit challenges a turtle to a race, and then loses because he shoots so far ahead he becomes overconfident enough to take a nap, only to find on waking that the turtle has made it over the finish line.

I alluded to this story last week in a response to a comment by frequent contributor JesperFrimann, in which he excoriates the idea that 256 threads executing slowly can beat 16 threads executing quickly.

It sounded good at the time, but I was wrong to say it, because the analogy between a herd of turtles beating a bunch of bunnies and stored procedures running in the Sybase shared dataspace on a T2 beating similar processes on a Power6 rests on a wide range of mistaken assumptions and beliefs.

I'll look at the actual cost/performance differences tomorrow, but today I want to point out that the biggest real difference is that he and I simply see the same machine very differently - and that in making the analogy I was inadvertently falling in line with his view of everything from threads to IT management.

Thus he sees the T5440 as having 256 manageable CPUs - countable turtles - and the DBA/sysadmin job as one of setting up the right resource controls to ensure maximal effective utilization.

I don't see it that way at all: to me the T5440 is a four-way machine with lots of SMP memory and the sysadmin/DBA job is completely conditioned by one goal: minimize user response time subject only to continuity requirements.

Thus when confronted, even in imagination, with the job of configuring a T2 or p570 purchase to replace the p690s in the example I'd cited, he turns immediately to details any data processing professional would consider important:

At the very least we would need these data from the p690s:

Partition list
Hardware list
Utilization data for all the LPARs, with daily profiles.
Complete software stack with versions.
Cluster setup

And then there is all the stuff needed to design the target solution.

Target system requirements
What is the maximum allowed utilization, what are the uptime requirements, service windows, etc.?
Software requirements of target system.
Do we need enterprise licenses or not?
What can be hosted inside the same OS instance, and what needs separate virtual machines?
Floorspace cost
Power cost
Datacenter cooling efficiency
Cost of net connections/SAN connections per port.

And he's perfectly correct in all this - except that I'm not from the data processing culture, and my sample questions are very different:

How many users, and what do we know about their usage mix?

How big are the indexes accounting for 95% or more of lookups? (Notice that DB size itself is almost a non-issue - disk is cheap.)

Is housekeeping in place? (Things like automated calls to update statistics and sp_recompile - a sketch follows after these questions.)

How nasty are the stored procedures in this thing?

What external sources/destinations are scripted in this thing and how badly are they going to complicate both regular and recovery operations?

Are there horrific performance killers in place? (e.g. the typical 3,000-line SQL inverter can be externalized to run in seconds as a Perl script - and table partitions put in to accommodate PC-style parallelism should be taken out for a ZFS/thumper environment.)
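To make the housekeeping question concrete, here's a minimal sketch of the kind of nightly job I have in mind - assuming a Sybase ASE server reachable through isql, with the server, login, database and table names all made up for illustration:

#!/bin/sh
# Nightly housekeeping for a Sybase ASE database - every name here is illustrative.
# Refreshing statistics and flagging procedures for recompile keeps query plans
# tracking the real data distribution instead of last month's.

SERVER=SYB1                            # assumed ASE server name
DB=appdb                               # assumed database
TABLES="orders order_lines customers"  # the hot tables behind most lookups

for T in $TABLES
do
isql -Usa -P"$SA_PASSWORD" -S"$SERVER" <<EOF
use $DB
go
update statistics $T
go
exec sp_recompile $T
go
EOF
done

Put that, or something like it, in cron and the housekeeping question mostly answers itself.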

In one sense there is no difference: we both ask questions about the things we care about - IT management in his case, user response in mine. He wants to know what the minimal service level is (a question hiding a wide range of assumptions - none of which I agree with - about how users relate to IT), and how this thing will fit into data center operations. I assume lots of stuff about all that: that we can plug the rack in somewhere, that we can use ZFS on a super-thumper to handle replication (telling Sybase that the storage server is a raw device and thus setting up for easy storage expansion in the rack), and so on - all in order to focus on the issues affecting users.
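That raw device bit is less hand waving than it sounds. Ignoring the plumbing between the rack and the storage server, and with made-up pool and device names, the local mechanics come down to carving a ZFS volume out of the pool and handing its character device to Sybase - a sketch, not a recipe:

# Carve a volume out of the storage pool (names illustrative).
zfs create -V 200g tank/sybase/datadev1

# Hand the volume's character device to ASE as a raw device.
isql -Usa -P"$SA_PASSWORD" -SSYB1 <<EOF
disk init
    name = "datadev1",
    physname = "/dev/zvol/rdsk/tank/sybase/datadev1",
    size = "200G"
go
EOF

Growing storage later is a volsize change on the ZFS side plus a disk resize on the ASE side, and snapshots of the same volumes are what keep the replication story simple.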

So what's the right analogy? I'm not convinced there is one - maybe something about two blind guys trying to sketch the elephant that just stepped on them - but I think the bottom line here has little to do with hardware and everything to do with role conceptualization. If he were doing this migration I'm sure he'd do a great job, but both his focus and his results would be very different from mine: he'd produce something IT could be proud of; I generally produce things IT doesn't see and users consider part of the furniture.


Paul Murphy wrote and published The Unix Guide to Defenestration. Murphy is a 25-year veteran of the I.T. consulting industry, specializing in Unix and Unix-related management issues.