Tech Tip 48

By Paul Murphy, author of The Unix Guide to Defenestration

To make a small problem big, start with the wrong tool.

Most day-to-day technical issues are easily resolved, but you always get a few where the obvious resolution leads to an increased daily or weekly workload - and that's something you generally want to avoid, whether you can automate it or not.

For example, I had about 80 Resumix users under Solaris 2.7. That program had been ported from SunOS and occasionally dumped core, but we had to live with it because the company had veered into Wintel land - and subsequent oblivion - during the dot.dumb boom.

A daily cron job:

find / -name core -exec rm {} \;

seemed to take care of it, but was actually a rather uninformed response. The problem had first surfaced as user space filling up with 22MB core dumps, so deleting them made sense. Once my delete command had proven itself, adding it to crontab to automate a daily cleanup also seemed sensible.
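Note, too, that the cron job as written walks the entire filesystem from / and removes anything named core, regular file or not. A more cautious sketch - the /export/home search root and the -type f restriction are my assumptions, not part of the original job - would look like this:

```shell
#!/bin/sh
# A more cautious version of the daily cleanup (a sketch; the /export/home
# path is an assumption -- the original searched the whole tree from /).
# -type f ensures only regular files match, so a directory or special file
# that happens to be named "core" is left alone.
SEARCH_ROOT=${SEARCH_ROOT:-/export/home}
find "$SEARCH_ROOT" -type f -name core -exec rm -f {} \;
```

Even so, this only narrows the blast radius of a cleanup approach; it doesn't address why the files appear at all.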

In effect, my initial perception of the problem - remove core dumps - combined with a bit of cleverness (the "\;") and the ability to blame Resumix for the problem, making me like my solution more than I should have.

In reality, adding:

limit coredumpsize 0

to each user's .cshrc file is much better, because the offending files never get written in the first place and there's no ambiguity afterward about which files to delete.
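The limit line above is csh syntax, matching the .cshrc the article assumes. For users running a Bourne-family shell instead - an assumption on my part, since the article only mentions .cshrc - the equivalent setting for ~/.profile is the ulimit builtin:

```shell
#!/bin/sh
# csh/tcsh users put this in ~/.cshrc (the article's fix):
#   limit coredumpsize 0
# sh/ksh/bash users get the same effect in ~/.profile with:
ulimit -c 0       # set the maximum core file size to zero blocks
ulimit -c         # query it back; should now report 0
```

Either way, the process is stopped from writing a core file at all, which is the point.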

The point here is that it usually pays to go beyond your initial response to a problem, particularly if it's more than a one-shot deal. The process of finding a second solution forces you to re-evaluate both the issue and your first solution, generally adds to your knowledge, and often lets you pick the right solution rather than just the one you already know.