% fortune -ae paul murphy

Migrating Sybase

Last week I talked about configuring a Sun "Pod" consisting of a T2000 computer in a rack with a JBOD, a UPS, and some network gear. I'm still looking for help on picking and costing the rack and other non-computing components - please, most of you must know more about rack and UPS costs than I do.

For today, however, let's imagine that we have an upgraded pod together that contains two fully loaded 32GB, 1.2GHz T2000 servers together with dual 450GB mirrored disk packs, the UPS, and network connectivity, for about $91,000 inclusive of three years of platinum support.

Now let's consider a former client of mine who has an enterprise resource management package, including financials, running in a client-server environment with a total of about 1,200 PC users. The backend for this thing currently uses Sybase ASE 12.5.1 running on a clustered pair of HP 9000 rp8420-16 servers, each with 8 PA-8800 900MHz CPUs, 64GB of RAM, and shared access to an EMC DMX 2000 disk store.

My guess is that they're paying HP about $8,000 per month in support on this gear: meaning that our pod would pay for itself in marginally less than a year and produce net savings on the order of $200,000 over the three years for which we prepaid support on the pod - and, of course, that's without considering things like a couple of surplus HP boxes, space and HVAC savings, or Sybase support cost reductions.
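The arithmetic behind that, assuming my $8,000 a month guess is roughly right, is simple enough:

    HP support:      $8,000/month x 12        =  $96,000/year
    Pod, all in:     $91,000 (including three years of platinum support)
    Payback:         $91,000 / $96,000/year   =  a bit under one year
    Three year net:  3 x $96,000 - $91,000    =  $197,000 - call it $200,000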

To get the cash, we need to do the port - so how hard, and how risky, is that?

Setting up Sybase on Solaris 10 with ZFS, and having Replication Server feed the second machine, is largely a point-and-click process; the only real exceptions are things like authorizing Solaris to use very large memory pages - a Solaris-specific feature that makes some processes (like loading rarely used indexes) go very fast.
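For the curious, the command-line version of the storage and shared memory setup amounts to only a few lines. Pool names, device names, and sizes below are illustrative guesses, not a recipe:

    # mirrored ZFS pool and filesystem for the Sybase devices (example devices)
    zpool create sybpool mirror c0t2d0 c0t3d0
    zfs create sybpool/data
    zfs set recordsize=8k sybpool/data    # match whatever ASE page size is in use

    # give the sybase user's project enough lockable shared memory for its caches
    projadd -U sybase -K "project.max-shm-memory=(privileged,24G,deny)" user.sybase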

Once Sybase is set up, the same scripts that let you make and manage the databases and backup processes on HP-UX work on Solaris, and you can transfer the data simply by loading, on Solaris, the dump files made by Sybase Backup Server on HP-UX. Adding Replication Server on the primary HP machine can then bring your Sun implementation to full concurrency and keep it that way, so the eventual transition doesn't need more than a minute of downtime.
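In practice the data move is just a dump on one side and a load on the other; the server names, database name, and dump paths below are placeholders:

    # on the HP-UX primary: dump through Backup Server
    isql -Usa -SHP_PRIMARY <<'EOF'
    dump database erp_db to "/dumps/erp_db.dmp"
    go
    EOF

    # on the T2000, after creating an erp_db at least as large
    isql -Usa -SSUN1 <<'EOF'
    load database erp_db from "/dumps/erp_db.dmp"
    go
    online database erp_db
    go
    EOF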

Basically, there's not much to the Sybase part of the job. Where you will be caught, however, is in going through the user processes to look for non-Sybase functionality - things like get and load scripts that access other systems and will need to be moved and have their permissions set appropriately. Many of these will turn out to need careful scrutiny and changes when brought over and, of course, the documentation everyone has usually turns out to have only an accidental, once-upon-a-time kind of relationship with reality.
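A crude but effective way to start that hunt is to sweep the crontabs and application script directories for anything that reaches off the box; the script directory here is obviously a stand-in for wherever theirs actually live:

    # as root on the HP-UX box: see what's scheduled outside the database
    for f in /var/spool/cron/crontabs/*; do
            echo "== $f =="
            cat "$f"
    done

    # flag scripts that touch other systems and will need new homes and permissions
    egrep -l 'ftp|rcp|remsh|scp|ssh' /opt/app/scripts/* 2>/dev/null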

You can expect to spend a day or two wandering around their system finding and debugging these scripts, followed by some downtime while they clear whatever system or information-access blockages get thrown in your way.

In my experience you can expect, furthermore, a panic callback for half a day or so early in the next accounting cycle.

Overall, however, there's no rocket science and very little risk here. Some Sunday morning at about 2AM they'll be running on HP-UX, and by 3AM you'll be overseeing their first backups on Sun without so much as a hiccup in between.

So what happens to performance?

There are two bottlenecks now: the EMC and some very convoluted stored procedures run every night to produce inverted data files for a business intelligence application hosted on some Windows/XP servers.

In both cases performance will improve dramatically: ZFS beats EMC hands down, and the Niagara processor's ability to do the memory-based character swapping needed for the stored procedure run is unrivaled by anything in the industry - basically the T2000 will blow the HP/EMC combination away.

However, I have a favorite trick in this type of situation. Rather than clustering and load balancing the primary and backup machines, I make them independent, direct all interactive traffic to the primary, and use Replication Server to keep the other machine current. Then I run the big stored procedures and Backup Server only on the replicant. That way the second machine is available for other work (like running Sun Rays during business hours), there is no need to slow primary processing to allow for a backup window, it's no longer necessary to run the big stored procedures only at night, and each time the procedures run the integrity of the backup system gets tested.
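On the replicant the night job then shrinks to something a cron entry can drive; the procedure, server, and database names here are made up for illustration:

    #!/bin/sh
    # nightly job on the replicant: build the BI extract, then dump the database
    DUMPFILE=/dumps/erp_db.`date +%Y%m%d`.dmp

    isql -Usa -SSUN_REPLICANT <<EOF
    use erp_db
    go
    exec sp_build_bi_extract    -- stand-in for the convoluted stored procedures
    go
    dump database erp_db to "$DUMPFILE"
    go
    EOF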

And if the primary fails? A trivial heartbeat script calls ifconfig to tell the secondary to respond to packets addressed to the other machine. This means that transactions continue, but most batch transfers with third party permission dependencies fail - but so what? By then an assessment can be made to see whether the rest of the changeover script should be run or not.
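The whole failover, in other words, can be as small as the sketch below; the interface name and service address are examples, and a real version would at least page somebody before or after acting:

    #!/bin/sh
    # run from cron on the secondary: if the primary stops answering,
    # take over its service address on a logical interface
    PRIMARY=192.168.10.10      # service address the users point at (example)
    IF=e1000g0                 # public interface on the secondary (example)

    ping $PRIMARY 5 >/dev/null 2>&1
    if [ $? -ne 0 ]; then
            ifconfig $IF:1 plumb
            ifconfig $IF:1 $PRIMARY netmask 255.255.255.0 up
    fi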


Paul Murphy wrote and published The Unix Guide to Defenestration. Murphy is a 25-year veteran of the I.T. consulting industry, specializing in Unix and Unix-related management issues.