"nikita.y.vol...@mail.ru" <nikita.y.vol...@mail.ru> wrote:

> Executing concurrent transactions inserting the same value of a
> unique key fails with the "duplicate key" error under code
> "23505" instead of any of transaction conflict errors with a
> "40***" code.

This is true, and can certainly be inconvenient when using
serializable transactions to simplify handling of race conditions,
because you can't just test for a SQLSTATE of '40001' or '40P01' to
indicate the need to retry the transaction.  You have two
reasonable ways to avoid duplicate keys if the values are synthetic
and automatically generated.  One is to use a SEQUENCE object to
generate the values.  The other (really only recommended if gaps in
the sequence are a problem) is to have the serializable transaction
update a row to "claim" the number.
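Both approaches can be sketched in SQL; table and column names here are illustrative, not from the original report:

```sql
-- Approach 1: let a sequence hand out the values.  nextval() never
-- blocks and never gives two sessions the same number, but gaps are
-- possible if a transaction rolls back.
CREATE SEQUENCE t_id_seq;
INSERT INTO t (id, payload) VALUES (nextval('t_id_seq'), 'row data');

-- Approach 2 (gap-free): keep the next value in a single-row counter
-- table and have the serializable transaction UPDATE it to "claim"
-- the number.  Concurrent claimants then conflict on the counter row
-- and one of them fails with SQLSTATE 40001 instead of 23505.
UPDATE counter SET next_id = next_id + 1 RETURNING next_id;
-- ...use the returned value in the subsequent INSERT...
```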

Otherwise you need to consider errors related to duplicates as
possibly being caused by a concurrent transaction.  You may want to
do one transaction retry in such cases, and fail if an identical
error is seen.  Keep in mind that these errors do not allow
serialization anomalies to appear in the committed data, so are
arguably not violations of ACID principles -- more of a wart on the
otherwise clean technique of using serializable transactions to
simplify application programming under concurrency.
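That retry policy can be sketched in application code.  This is a minimal illustration, not driver-specific: `DatabaseError` and its `sqlstate` attribute stand in for whatever the real client library raises (e.g. psycopg2's `pgcode`), and the retry limit is arbitrary:

```python
RETRYABLE = {"40001", "40P01"}   # serialization_failure, deadlock_detected
DUPLICATE_KEY = "23505"          # unique_violation

class DatabaseError(Exception):
    """Stand-in for a real driver's exception carrying a SQLSTATE."""
    def __init__(self, sqlstate):
        super().__init__(sqlstate)
        self.sqlstate = sqlstate

def run_with_retry(txn, max_retries=5):
    """Run txn() until it succeeds, retrying on conflict SQLSTATEs.

    Serialization failures are retried up to max_retries times.  A
    duplicate-key error is retried exactly once, since it may have been
    caused by a concurrent transaction; if the identical error recurs,
    it is treated as a genuine constraint violation and re-raised.
    """
    attempts = 0
    dup_seen = False
    while True:
        try:
            return txn()
        except DatabaseError as e:
            if e.sqlstate in RETRYABLE and attempts < max_retries:
                attempts += 1
                continue
            if e.sqlstate == DUPLICATE_KEY and not dup_seen:
                dup_seen = True
                continue
            raise
```

Each attempt of `txn` is assumed to open and commit (or roll back) its own transaction, so a retry starts from a clean snapshot.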

Thinking about it just now, it seems we might be able to generate a
write conflict instead of a duplicate key error for this case by
checking the visibility information for the duplicate row.  It
might not even have a significant performance impact, since we need
to check visibility information to generate the duplicate key
error.  That would still leave similar issues (where similar
arguments can be made) relating to foreign keys; but those can
largely be addressed already by declaring the constraints to be
DEFERRED -- and anyway, that would be a separate fix.
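For reference, deferring a foreign key check to commit time looks like this (table and constraint names are illustrative):

```sql
ALTER TABLE child
  ADD CONSTRAINT child_parent_fk
  FOREIGN KEY (parent_id) REFERENCES parent (id)
  DEFERRABLE INITIALLY DEFERRED;
```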

I'm moving this discussion to the -hackers list so that I can ask
other developers:

Are there any objections to generating a write conflict instead of
a duplicate key error if the duplicate key was added by a
concurrent transaction?  Only for transactions at isolation level
REPEATABLE READ or higher?

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers