Hi, I am using HA-JDBC as a clustering solution and am running into exceptions due to duplicate keys. The application is multi-threaded, runs in multiple JVMs, and needs to insert records into a table if they do not yet exist. It is possible that two threads try to insert the same record into the database at the same time. So far, I have found two solutions for this:
1. ignore the "duplicate key exception"
2. DELETE and then INSERT the record
Unfortunately, neither is an option for me because of the way HA-JDBC works: it
sends every SQL update statement to every node. If a statement succeeds on one
node and fails on another, the failing node is considered to be malfunctioning
and is taken out of the cluster. The problem is that the statements may be
executed in a different order on different nodes. For example, node 1 executes
statement 1 successfully and statement 2 with the exception, while node 2 does
the same the other way around. HA-JDBC detects the differing behaviour and
disables one of the nodes.
One nice solution would be to have something like MySQL's "REPLACE INTO", or
an "INSERT IF NOT EXISTS" like I saw elsewhere. The semantics would be:
REPLACE INTO:
if primary key does not exist
-> INSERT
else
-> UPDATE
INSERT IF NOT EXISTS
if primary key does not exist
-> INSERT
else
-> nothing, especially no exception
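For illustration, both semantics exist natively in SQLite (REPLACE INTO, and INSERT OR IGNORE for the "no exception" variant); a minimal sketch, using a hypothetical table `t` and not HA-JDBC itself:

```python
import sqlite3

# Hypothetical two-column table, just to demonstrate the semantics.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

# REPLACE INTO: insert, or overwrite the existing row if the key exists.
conn.execute("REPLACE INTO t (id, val) VALUES (1, 'first')")
conn.execute("REPLACE INTO t (id, val) VALUES (1, 'second')")

# INSERT OR IGNORE: insert, or silently do nothing -- no exception raised.
conn.execute("INSERT OR IGNORE INTO t (id, val) VALUES (1, 'third')")

print(conn.execute("SELECT val FROM t WHERE id = 1").fetchone()[0])  # second
```

Either way the statement succeeds, so every node reports the same outcome regardless of execution order.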
Is there any way to achieve this with the current implementation, or should I
file an RFE? The second one should be easy to implement, as you simply do not
throw the exception.
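In the meantime, a portable workaround might be "INSERT ... SELECT ... WHERE NOT EXISTS": it inserts either one row or zero rows and never raises a duplicate-key error, so it succeeds identically on every node. A minimal sketch in SQLite (table name `t` is hypothetical; the same SQL should work on most databases):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

# Insert only if no row with this key exists yet; a silent no-op otherwise.
insert_if_absent = """
    INSERT INTO t (id, val)
    SELECT ?, ?
    WHERE NOT EXISTS (SELECT 1 FROM t WHERE id = ?)
"""

conn.execute(insert_if_absent, (1, "first", 1))   # inserts the row
conn.execute(insert_if_absent, (1, "second", 1))  # no-op, no exception
```

Note this only sidesteps the exception; under concurrent writers the database still needs the primary-key constraint as the final arbiter.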
--
Greetings
Kurt
