Matthew O. Persico <[EMAIL PROTECTED]> wrote:
That's what I do; fork a child which performs the do() call and then exits. You just have to be careful not to use the same handle while the child is using it.
Um, if you open a handle in the parent and then fork, two data structures are pointing to the same connection. If the parent does something, doesn't that alter the data which makes it inconsistent with the data structure in the child?
Whatever happens in the parent is of no concern to the child process. All that matters is not confusing the database by sending requests from two places at once. It might also confuse the driver if each statement updates some state in the local process's copy of the handle (for example, a count of statements executed so far), since that count would go out of sync if the child executed a statement and then the parent did. In practice, though, that doesn't seem to happen.
What I'm doing is this:
- Process forks.
- Child process does 'delete from some_table' or other operation that returns no results.
- Parent does some CPU-intensive work in the meantime.
- Before doing anything with the database again, parent waits for child to exit.
It seems to work well enough in practice.
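The steps above can be sketched as follows. This is a minimal skeleton, not a definitive implementation: the DSN, credentials, and table name are hypothetical, and the DBI calls are commented out so it runs without a live database. If the child really does inherit the parent's handle, DBI's InactiveDestroy attribute is the usual way to keep the child's exit from tearing down the shared connection.

```perl
use strict;
use warnings;

# Hypothetical connection, made before the fork (the pattern under
# discussion); commented out so no database is required to run this.
# my $dbh = DBI->connect('dbi:Informix:mydb', $user, $pass,
#                        { RaiseError => 1 });

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # Child: run the statement that returns no results, then exit.
    # $dbh->{InactiveDestroy} = 1;        # exit must not disconnect parent
    # $dbh->do('DELETE FROM some_table');
    exit 0;
}

# Parent: CPU-intensive work goes here, touching nothing DB-related.

waitpid($pid, 0);    # wait for the child before using the DB again
die "child failed\n" if $? != 0;
```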
My rule is: fork, then connect.
Possible, but that adds a lot of overhead; the only reason to fork in the first place is to make things go faster.
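For comparison, the fork-then-connect rule looks like this: the child opens its own private handle after the fork, so no connection is ever shared, at the cost of one extra connect per fork. Again a sketch only; the DSN and table are hypothetical, and the DBI lines are commented out so the skeleton runs standalone.

```perl
use strict;
use warnings;

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # Child: private connection, do the work, disconnect, exit.
    # require DBI;
    # my $dbh = DBI->connect('dbi:Informix:mydb', $user, $pass,
    #                        { RaiseError => 1 });
    # $dbh->do('DELETE FROM some_table');
    # $dbh->disconnect;
    exit 0;
}

waitpid($pid, 0);    # parent resumes database work only after this
```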
You might get away with it in some DBMSs; you won't with others.
It is a bad idea to rely on it working.
--
Jonathan Leffler ([EMAIL PROTECTED], [EMAIL PROTECTED]) #include <disclaimer.h>
Guardian of DBD::Informix v2003.04 -- http://dbi.perl.org/