Good morning (evening?)
I am adapting a demo/test written with Perl/DBI to run with DBD::Oracle. It
previously worked really well with DBD::mysql. I already reported to you some
memory problems I found in my tests, and it seems that they are now
resolved. Unfortunately, there is another kind of problem.
The test should be highly available. That means:
- If Oracle does not respond, the test client has to reconnect.
- If a transaction fails, the client should reconnect and redo the transaction.
Connections, disconnections and transactions are given a timeout this way:
eval {
    local $SIG{ALRM} = sub { die "timeout\n"; };
    alarm $TIMEOUT;
    # here I put the call to DBI->connect, $dbh->disconnect or a
    # transaction ended by $dbh->commit
    alarm 0;
};
if ($@) ...
This means that my clients are never stuck in a dead connection: they re-run
the eval block until the action completes.
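To give an idea, the retry logic looks roughly like this (a simplified sketch,
not the real code: run_trx() and reconnect() are just placeholders for my own
helpers):

my $ok = 0;
until ($ok) {
    eval {
        local $SIG{ALRM} = sub { die "timeout\n"; };
        alarm $TIMEOUT;
        run_trx($dbh);            # the statements of the transaction
        $dbh->commit;
        alarm 0;
        $ok = 1;
    };
    if ($@) {
        alarm 0;                  # make sure the alarm is cancelled
        eval { $dbh->rollback };
        eval { $dbh->disconnect };
        $dbh = reconnect();       # a new DBI->connect, also under timeout
    }
}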
In case of failure/timeout, I always disconnect and then reconnect. There
were some errors when I disconnected. I read in the man page that I have to
finish each prepared statement ($sth) before disconnecting. I do that,
but there are often segfaults and error messages like:
kghalo bad size 0x0a0025b0
********** Internal heap ERROR KGHALO2 addr=0x0 *********
******************************************************
HEAP DUMP heap name="Alloc statemen" desc=0x8272f94
extent sz=0x1024 alt=32767 het=32767 rec=0 flg=2 opc=6
parent=824dac4 owner=0 nex=0 xsz=0xbc0
Hla: 0
kgepop: no error frame to pop to for error 0
My client "finishes" the pending statements ($sth->finish) and hang in
disconnect call.
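For reference, the cleanup is roughly the following (again a simplified
sketch; @statements is just a placeholder for the list of statement handles
my client keeps):

for my $sth (@statements) {
    eval { $sth->finish if $sth && $sth->{Active} };
}
@statements = ();                 # drop my references to the handles
eval {
    local $SIG{ALRM} = sub { die "timeout\n"; };
    alarm $TIMEOUT;
    $dbh->disconnect;             # this is where it hangs or segfaults
    alarm 0;
};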
Is there a way to properly clean up the DBI/DBD context? The only way I know
is to restart my client, but that's really a poor solution :)
Any ideas are welcome...
thanks
Denis
--
Denis Pithon phone +33 (0) 1 41 40 02 13
Software Engineer fax +33 (0) 1 41 40 02 01
Lineo High Availability Group mail [EMAIL PROTECTED]