% fortune -ae paul murphy

4GLs

In the most basic of all development methods, generally categorized as the waterfall method, each stage - from application concept and budget estimation through requirements, specifications, coding, and testing - is handed off from one organizational subgroup to the next. Thus coders code, testers test, and business analysts analyze - or, at least, draw diagrams - and every new signoff speeds the project to its inevitable end: a raging success when encountered on resumes, and usually just another source of user frustration in reality.

What's most fundamentally wrong with this process is that it evolved to fit tools, agendas, and reporting relationships last seen in the 1950s, and it is roughly as out of place today as a mid-fifties jeepette in Afghanistan - something, by the way, successive Liberal governments have forced Canadian troops now there to use every day.

The canonical substitute is the object-oriented development process - which cheerfully retains the same organizational structure but breaks the project into smaller pieces ultimately intended to be brought back together through the miracle of messaging: aka consequential overhead.

We need to ask, however, what happens when we break the project into the smallest possible pieces - not so much the bricks that can be brought together later to make a structure, but the bits of straw and clay out of which the bricks are to be made?

Imagine doing this without changing the traditional role separation, and what you get is a situation in which:

  1. The suggestion that the application should have a "Customer Name" blank gets budgeted.

  2. That budget gets approved, and a business analyst determines that this is generally a business name, needs a link to a database row and column, and should be at least 24 characters wide.

  3. The steering committee approves that report, sending the signed-off requirement down the waterfall for eventual coding and testing.

  4. After extensive debate, a second arrowhead is added to the right-hand end of the arrow diagramming the database connection.

  5. The steering committee approves the change, sending the re-signed-off requirement down the waterfall for more meetings and, eventually, coding and testing.

  6. Once testing signs off - six steering committee meetings later - user management initiates work with IT to budget a "Customer billing address" blank, and the process starts anew.

That's the reductio ad absurdum, but if you actually tried this you'd soon find ways to expedite matters. In particular, you'd take advantage of the fact that the bits of clay or straw being passed down the waterfall here fall naturally into one of four basic groups (sketched in code after the list):

  1. standard bits that are used with a lot of different arguments - things like a GUI or a way to link an on-screen object to one or more columns from one or more database tables;

  2. patterns of on-event actions in which the choice of action depends on the event but the content of the action depends on the context;

  3. bits of predictable user customization: some users, for example, want to see screens in Spanish; some want their report titles right-justified instead of centered; a few want their home-locale weather display, originally added as a "fun feature" to involve users, drawn from a different web source than the one IT picked; and,

  4. some more or less freestanding bits of logic that need to be individually understood and coded.
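
To make those four groups concrete, here's a minimal sketch in Python - standing in for whatever language your 4GL suite actually provides - with every name and structure invented purely for illustration:

  import os

  # Group 1: a standard bit used with different arguments - a screen
  # field linked to a database table and column.
  def bound_field(label, table, column, width=24):
      return {"label": label, "table": table, "column": column, "width": width}

  customer_name = bound_field("Customer Name", "customers", "customer_name")

  # Group 2: on-event actions - the event picks the action, while the
  # context (here, the field) supplies its content.
  handlers = {
      "validate": lambda field, value: len(value) <= field["width"],
      "save": lambda field, value: print(
          f"UPDATE {field['table']} SET {field['column']} = {value!r}"),
  }

  def on_event(event, field, value):
      return handlers[event](field, value)

  # Group 3: predictable per-user customization via the environment.
  title_align = os.environ.get("REPORT_TITLE_ALIGN", "center")

  # Group 4: a freestanding bit of logic that still has to be understood
  # and coded by hand.
  def normalize_name(value):
      return " ".join(value.split()).title()

  on_event("save", customer_name, normalize_name("  acme   widgets ltd "))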

In other words, if you had these standard bits ready to go, all you'd have to do is assemble them and then code the missing bits of group-specific logic. But how? How do you bypass the steering committee, the waterfall process, the sign-off waits, and all the rest of the impedimenta that make this so impractical?

Easy: sit down with some users, a decent workstation, and a 4GL development suite to do all of this stuff in a continuous loop (a rough sketch in code follows the list):

  1. use a standard GUI;

  2. build the application screens;

  3. use that information to populate a test database;

  4. put the on-event logic in place - selecting actions from a pop-up menu and marking exceptions as required;

  5. use environment variables to control per-user customization; and,

  6. use an English-like programming language to handle event exceptions and any other missing bits.
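
Here's roughly what steps 2 through 5 look like - again sketched in Python with sqlite3 rather than a real 4GL suite, and with a made-up screen-definition format:

  import os
  import sqlite3

  # Step 2: describe an application screen declaratively.
  screen = {
      "table": "customers",
      "fields": [
          ("customer_name", "TEXT"),    # the "Customer Name" blank
          ("billing_address", "TEXT"),  # and its billing-address sibling
      ],
  }

  # Step 3: use that same description to populate a test database.
  db = sqlite3.connect(":memory:")
  columns = ", ".join(f"{name} {sqltype}" for name, sqltype in screen["fields"])
  db.execute(f"CREATE TABLE {screen['table']} ({columns})")
  db.execute(f"INSERT INTO {screen['table']} VALUES (?, ?)",
             ("Acme Widgets Ltd.", "123 Example St."))

  # Step 4: on-event actions picked from a small menu of standard choices.
  actions = {
      "lookup": lambda: db.execute(f"SELECT * FROM {screen['table']}").fetchall(),
  }

  # Step 5: per-user customization from the environment.
  language = os.environ.get("APP_LANG", "en")  # e.g. "es" for Spanish screens

  print(language, actions["lookup"]())

The point of the sketch is the loop itself: the screen definition drives the test database, so a change made with the users at the keyboard shows up immediately, with no signoff between steps.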

It's called active prototyping - and it frustrates every rule learnt so painfully by the card punch people in the 1920s and 30s, but it applies to our time, our tools, and our needs. The bottom line? Active prototyping works. It handles complexity, it bypasses all the opportunities for communication error implicit in the waterfall process, and it replaces "object-oriented development" (i.e. organizational continuation through messaging overhead) with effective object reuse and linkage.

So how buggy are the prototypes? Let's talk tomorrow.


Paul Murphy wrote and published The Unix Guide to Defenestration. Murphy is a 25-year veteran of the I.T. consulting industry, specializing in Unix and Unix-related management issues.