By Paul Murphy, author of The Unix Guide to Defenestration
I got called in once to recover a production Oracle instance that had completely disappeared after a system restart. The shutdown and restart had been planned for a long weekend to allow building management to work on their power distribution system -- but no one expected the production database to go AWOL, and several hundred bemused users were wandering the halls, looking lost and lonely, when I arrived.
As it turned out, the DBA they had hired after me was an MCSE with a spectacularly limited knowledge of Solaris. In a highly successful bid to improve system performance she had rebuilt my configuration to get rid of those unsightly raw disks she didn't grok and put almost everything as cooked files on /tmp. That worked very well from a performance perspective, but came to an ugly and unexpected end on reboot -- because /tmp on Solaris is tmpfs, a RAM-backed filesystem that is emptied every time the machine restarts.
About a year later, I did exactly what she had done -- intentionally using /tmp for a Sybase instance. Why? Because it actually makes sense under some circumstances. Specifically, this client had a 3000+ line SQL stored procedure that produced the cube for their reporting application. That process had grown to take over four hours and now interfered with both the Sybase and system backups.
By scripting an independent Sybase instance on /tmp, I was able to use the files produced by Backup Server to populate that scratch instance, so the cube build could run after the databases had been dumped to files and while the system was writing those files out to tape. As a result they got the entire four hour window back, and the actual process, which had been severely I/O bound, ran in about 45 minutes.
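That flow can be sketched roughly as follows. This is an illustrative outline only: the server name, dump path, and procedure name (build_cube) are assumptions of mine, not the client's actual configuration.

```shell
# Hypothetical sequence -- names and paths are made up for illustration.
# After the nightly dumps complete, start a scratch ASE instance whose
# devices live on /tmp, load the dump files produced by Backup Server,
# and run the cube build there while the main system streams those
# same dump files out to tape.
startserver -f RUN_tmpserver          # scratch instance, devices on /tmp
isql -Usa -P"$SA_PASSWORD" -Stmpserver <<'EOF'
load database reporting from '/dumps/reporting.dmp'
go
online database reporting
go
exec reporting..build_cube
go
EOF
```

The point of the design is that the scratch instance reads the dump files off disk, so the expensive cube build competes with the tape write for almost nothing -- the production server is already done with its part of the night.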
For this setup I simply recreated the Sybase instance after each boot, but you can beat that by modifying /etc/init.d/RMTMPFILES to replicate the long-gone default SunOS/BSD behavior of deleting the files in /tmp while preserving the directories.
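A minimal sketch of that cleanup logic, assuming a POSIX shell; the demo directory here stands in for /tmp (the real edit would go into RMTMPFILES and run against /tmp itself at boot), and the file and directory names are illustrative.

```shell
#!/bin/sh
# Sketch of the RMTMPFILES tweak: remove plain files from a tmp area
# while leaving the directory tree intact, so per-instance device
# directories survive the cleanup. TMP stands in for /tmp here.
TMP=${TMP:-./tmp-demo}

# Set up a demo layout: a stray file plus a device directory.
mkdir -p "$TMP/sybase/devices"
touch "$TMP/stale.lock" "$TMP/sybase/devices/master.dat"

# Delete files only; -type f skips directories, so the tree remains.
find "$TMP" -type f -exec rm -f {} +
```

With the directories preserved across reboot, the post-boot script only has to rebuild the devices and reload the dumps, not recreate the whole layout.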