Also, rather than forking each time you need a timeout, it would be
nicer to have a single 'watchdog' process and use some form of fast
IPC to tell it when to be active (passing the timeout to use and the
relevant session id) and when to be inactive. A pipe would be fine.
But perhaps that could be a new DBIx::Watchdog module. Just a thought.
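A minimal sketch of that single-watchdog idea, written in Python rather than Perl purely for illustration. The `("arm", pid, seconds)` command format is made up for this sketch, and it kills the worker with SIGTERM instead of cancelling a database session by id, which is what a real DBI watchdog would more likely do:

```python
import os
import signal
import time
import multiprocessing as mp

mp.set_start_method("fork", force=True)  # lets the demo run at module level

def watchdog(conn):
    """Long-lived watchdog: reads commands from a pipe. After an 'arm'
    command it counts down; if no 'disarm' arrives in time, it signals
    the worker process (a real DBI watchdog might instead cancel the
    database session identified by the session id)."""
    armed = None                                   # (pid, deadline) or None
    while True:
        timeout = max(0.0, armed[1] - time.monotonic()) if armed else None
        if conn.poll(timeout):                     # a command arrived in time
            msg = conn.recv()
            if msg[0] == "arm":                    # ("arm", pid, seconds)
                armed = (msg[1], time.monotonic() + msg[2])
            elif msg[0] == "disarm":               # query finished in time
                armed = None
            elif msg[0] == "quit":
                return
        elif armed:                                # deadline passed, no disarm
            os.kill(armed[0], signal.SIGTERM)
            armed = None

# Demo: a 'query' that would run 10 s is killed after ~0.5 s.
cmd, wd_end = mp.Pipe()
wd = mp.Process(target=watchdog, args=(wd_end,))
wd.start()

slow_query = mp.Process(target=time.sleep, args=(10,))
slow_query.start()

start = time.monotonic()
cmd.send(("arm", slow_query.pid, 0.5))             # set the timeout
slow_query.join()                                  # watchdog terminates it
elapsed = time.monotonic() - start

cmd.send(("quit",))
wd.join()
```

Because the watchdog stays inactive (blocked on the pipe) between queries, arming and disarming costs one small pipe write rather than a fork per query.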
Forking per query is a good way to handle a few potentially
long-running queries, but it means you create a new dbh each time,
correct? That's prohibitively slow for environments like high-volume
web servers.
The watchdog process concept is good, but if you have a watchdog for
each regular process, you end up with either double the connections on
the database server, or the watchdog process having to open and close
a connection every time.
Better yet is a pool of watchdog processes, which could then also be
used in an Apache/mod_perl environment. You specify how many processes
are allowed to connect to the db, and you use those exclusively. You
can also kill any one of them at any time, effectively giving you
safe signals.
Some kind of DBI::Pool at the process level, basically, with a parent
process that manages the pool of DBI processes, any of which can be
killed by the calling process when a query takes too long.
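A rough sketch of that pool-with-killable-workers idea, again in Python purely for illustration. The `Pool` class and its method names are invented here, not an actual DBI::Pool API; each worker process stands in for a process holding one persistent dbh, and a "query" is just seconds of sleep:

```python
import time
import multiprocessing as mp

mp.set_start_method("fork", force=True)    # lets the demo run at module level

def worker(conn):
    """One pool worker. In the real scheme this process would open a single
    persistent dbh at startup and reuse it for every request it is handed."""
    for seconds in iter(conn.recv, None):  # None is the shutdown sentinel
        time.sleep(seconds)                # stand-in for executing the query
        conn.send("done")

class Pool:
    """Parent process managing a fixed number of workers. A worker whose
    query overruns is terminated and replaced -- the 'safe signal' above,
    since only that one worker's connection is lost."""
    def __init__(self, size):
        self.workers = [self._spawn() for _ in range(size)]

    def _spawn(self):
        parent_end, child_end = mp.Pipe()  # one private pipe per worker
        proc = mp.Process(target=worker, args=(child_end,))
        proc.start()
        return proc, parent_end

    def run(self, i, seconds, timeout):
        """Hand worker i a query; kill and replace it if it overruns."""
        proc, conn = self.workers[i]
        conn.send(seconds)
        if conn.poll(timeout):             # answered within the timeout
            return conn.recv()
        proc.terminate()                   # safe: kills only this worker
        proc.join()
        self.workers[i] = self._spawn()    # keep the pool at full strength
        return None

    def shutdown(self):
        for proc, conn in self.workers:
            conn.send(None)
            proc.join()

# Demo: two workers; the second one's query overruns and gets killed.
pool = Pool(2)
fast = pool.run(0, 0.1, timeout=5)         # finishes normally
slow = pool.run(1, 10, timeout=0.5)        # overruns; worker is replaced
pool.shutdown()
```

Giving each worker its own pipe (rather than one shared queue) matters here: killing a process that is blocked on a shared queue can leave the queue's lock held, while a private pipe dies cleanly with its worker.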
H