Tim,
Thanks for taking the time to review my posting. I think that taking your
advice will solve >90% of the problems. It still leaves the program
exposed to an unexpected "die", but it is much better than the current
situation, where every "unexpected" fork/system call needs to be
identified and handled.
My new code has the following main loop (the code runs as CGI or
FastCGI):
my $dbh = create_db_handle();
my $pid = $$;

while ( my $cgi = get_next_cgi_request() ) {
    # The value in effect at fork time is what a child inherits, so the
    # handle is protected while the 3rd-party code may fork/system.
    $dbh->{InactiveDestroy} = 1;
    eval { ... };
    # Back to normal, but only in the parent that owns the connection.
    $dbh->{InactiveDestroy} = 0 if $pid == $$;
}

$dbh->disconnect;
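Since InactiveDestroy only matters at the moment a handle is DESTROYed,
I suppose the per-request toggling could also collapse into setting it
once, which is the literal form of your suggestion below. A rough sketch,
reusing the same application-specific names (create_db_handle,
get_next_cgi_request) plus a made-up handle_request() for the elided body:

    my $dbh = create_db_handle();
    my $pid = $$;
    $dbh->{InactiveDestroy} = 1;   # on by default: forked children won't close the connection

    while ( my $cgi = get_next_cgi_request() ) {
        eval { handle_request($cgi) };   # hypothetical per-request handler
    }

    # Only the parent that created the connection turns protection off
    # again before the normal shutdown path.
    $dbh->{InactiveDestroy} = 0 if $pid == $$;
    $dbh->disconnect;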
As a side note, I think it would be very helpful if this behavior were
built into DBI itself (or into the individual drivers). From the little
research that I did, the ability to use a DBI connection across fork
calls is limited to a few embedded databases (SQLite, ...). I could not
find any DB that works across "system" calls. For all/most "regular"
database products (Sybase, SQL Server, Oracle), having to explicitly
deal with connection destruction across system/fork is just a pain.
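For what it is worth, an application can get something close to the PID
tracking of policy (A) in my original posting below by stashing the
creating PID in a "private_*" attribute on the handle (DBI reserves that
prefix for application use). The helper names below are made up for
illustration:

    use strict;
    use warnings;
    use DBI;

    # Connect and remember which process created the handle.
    sub connect_with_pid {
        my @connect_args = @_;
        my $dbh = DBI->connect(@connect_args) or die $DBI::errstr;
        $dbh->{private_creator_pid} = $$;
        return $dbh;
    }

    # Call this in any code path that might be running in a forked child;
    # it is a no-op in the process that created the connection.
    sub protect_if_child {
        my ($dbh) = @_;
        $dbh->{InactiveDestroy} = 1
            if $$ != $dbh->{private_creator_pid};
    }

Of course, knowing when to call protect_if_child() is exactly the part
that a built-in policy (or a fork override) would take care of.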
Thanks again,
Yair
-----Original Message-----
From: Tim Bunce [mailto:[EMAIL PROTECTED] On Behalf Of Tim Bunce
Sent: Tuesday, March 25, 2008 6:41 AM
To: Lenga, Yair [CMB-FICC]
Cc: [email protected]
Subject: Re: Automatically managing inactiveDestroy property.
Sounds like you could just set InactiveDestroy on all handles by
default, but then turn it off in the parent process just before it
disconnects/exits.
You could also possibly play games overriding CORE::GLOBAL::fork() and
CORE::GLOBAL::system().
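For what it's worth, the fork override could look roughly like this
(an untested sketch; @guarded_handles is a list the application would
have to maintain itself, and the override has to be installed in a
BEGIN block before any code that calls fork is compiled):

    use strict;
    use warnings;

    our @guarded_handles;   # handles the application wants protected in children

    BEGIN {
        *CORE::GLOBAL::fork = sub {
            my $pid = CORE::fork();
            if (defined $pid && $pid == 0) {
                # In the child: make sure inherited copies of the handles
                # do not close the parent's connections when DESTROYed.
                $_->{InactiveDestroy} = 1 for @guarded_handles;
            }
            return $pid;
        };
    }

The same pattern could be applied to CORE::GLOBAL::system() if needed.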
Tim.
On Mon, Mar 24, 2008 at 12:04:48PM -0400, Lenga, Yair wrote:
> Hi,
>
> I have a complex CGI program, which needs to call an external program
> while keeping the DBI connection open. I'm struggling with the
> "InactiveDestroy" property. Per the DBI perldoc:
>
>     This attribute is specifically designed for use in Unix applications
>     that "fork" child processes. Either the parent or the child process,
>     but not both, should set "InactiveDestroy" on all their shared
>     handles. Note that some databases, including Oracle, don't support
>     passing a database connection across a fork.
>
> My problem is that I do not have any control over the interface to the
> external program (3rd-party Perl code that I cannot modify), which uses
> both 'fork' and 'system'. In the past, I solved the problem using the
> following method:
> - Before calling the 3rd-party library, the code forks.
> - The child sets InactiveDestroy on its copy of the handle.
> - The code then calls the 3rd-party library.
> - The parent waits for the child to complete.
>
> This logic worked OK until I got additional requirements to store the
> results of the 3rd-party calls in the DB. Initially, I tried to push
> the results from the child to the parent (using a pipe), but this is
> getting more complex as the requirements expanded to pass more data
> between the 3rd-party calls and the DB.
>
> My questions/suggestions:
> Would it make sense to manage 'InactiveDestroy' automatically, using
> one of the following policies:
>
> (A) Remember in the DBI handle the PID of the process that created it,
> and assume 'InactiveDestroy' is turned on when the handle goes out of
> scope in a process with a different PID (i.e., a child). For me, this
> solution would provide clean handling of all the cases.
>
> (B) Add a new property, 'AutoInactiveDestroy', which would do the same.
> The code would have to turn this property on (when the connection is
> created, or before the fork).
>
> I cannot think of any situation where (A) would not work, but my
> experience is limited to Linux/SunOS with Sybase/SQLite/SQL Server.
>
> Any feedback will be appreciated.
>
> Regards,
> Yair Lenga