Merlin Moncure <mmonc...@gmail.com> wrote:

> Well, I'm arguing that duplicate key errors are not serialization
> failures unless it's likely the insertion would succeed upon a retry;
> a proper insert, not an upsert. If that's the case with what you're
> proposing, then it makes sense to me. But that's not what it sounds
> like... your language suggests AIUI that having the error simply be
> caused by another transaction being concurrent would be sufficient to
> switch to a serialization error (feel free to correct me if I'm
> wrong!).
>
> In other words, the current behavior is:
> txn A,B begin
> txn A inserts
> txn B inserts over A, locks, waits
> txn A commits. B aborts with duplicate key error
>
> Assuming that case is untouched, then we're good! My long-winded
> point above is that this case must fail with a duplicate key error; a
> serialization error is suggesting the transaction should be retried,
> and it shouldn't be... it would simply fail a second time.
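The interleaving Merlin lists can be written out as a two-session transcript; this is only a sketch of today's behavior, and the table `t` and the key value are hypothetical:

```sql
-- Hypothetical setup for illustration:
CREATE TABLE t (id int PRIMARY KEY);

-- Session A                           -- Session B
BEGIN;
                                       BEGIN;
INSERT INTO t VALUES (1);
                                       INSERT INTO t VALUES (1);
                                       -- blocks, waiting on A's uncommitted row
COMMIT;
                                       -- ERROR: duplicate key value violates
                                       --        unique constraint "t_pkey"
                                       -- (SQLSTATE 23505, not 40001)
```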
What I'm proposing is that for serializable transactions B would get a
serialization failure; otherwise B would get a duplicate key error.

If the retry of B looks at something in the database to determine what
its primary key should be, it will get a new value on the retry, since
it will be starting after the commit of A. If it is using a literal
key, not based on something changed by A, it will get a duplicate key
error on the retry. Either way, the retry will either succeed or fail
for a different reason than the concurrency with A.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers