I'll repost my (the OP's) case so that the references to it make more
sense to others.
Given the following table:
CREATE TABLE "song_artist" (
    "song_id"   INT8 NOT NULL,
    "artist_id" INT8 NOT NULL,
    PRIMARY KEY ("song_id", "artist_id")
);
Even guarding against the duplicate with a SELECT does not avoid the
error, because at the beginning of the transaction the key does not
exist yet:
BEGIN ISOLATION LEVEL SERIALIZABLE READ WRITE;
INSERT INTO song_artist (song_id, artist_id)
    SELECT 1, 2
    WHERE NOT EXISTS (SELECT * FROM song_artist
                      WHERE song_id = 1 AND artist_id = 2);
COMMIT;
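For reference, here is a minimal sketch of the client-side retry loop
under discussion (Python with psycopg2; the connection string, function
name, and retry limit are placeholders of mine). It retries the whole
transaction from the beginning on SQLSTATE 40001:

import psycopg2

conn = psycopg2.connect("dbname=music")          # placeholder DSN
conn.set_session(isolation_level="SERIALIZABLE")

def insert_song_artist(conn, song_id, artist_id, max_retries=5):
    # Insert the pair unless it already exists, retrying the whole
    # transaction on serialization failure (SQLSTATE 40001).
    for attempt in range(max_retries):
        try:
            with conn.cursor() as cur:
                cur.execute(
                    """INSERT INTO song_artist (song_id, artist_id)
                       SELECT %s, %s
                       WHERE NOT EXISTS (SELECT * FROM song_artist
                                         WHERE song_id = %s
                                           AND artist_id = %s)""",
                    (song_id, artist_id, song_id, artist_id))
            conn.commit()
            return
        except psycopg2.Error as e:
            conn.rollback()
            if e.pgcode == "40001":   # serialization_failure: retry
                continue
            raise                     # 23505 unique_violation escapes here
    raise RuntimeError("gave up after %d serialization failures"
                       % max_retries)

And that is exactly the problem: even with this loop, today the INSERT
can still die with a 23505 unique_violation instead of 40001, so the
loop never gets a chance to retry.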
2014-12-26 21:38 GMT+03:00 Kevin Grittner <[email protected]>:
> Tom Lane <[email protected]> wrote:
>
> > Just for starters, a 40XXX error report will fail to provide the
> > duplicated key's value. This will be a functional regression,
>
> Not if, as is normally the case, the transaction is retried from
> the beginning on a serialization failure. Either the code will
> check for a duplicate (as in the case of the OP on this thread) and
> they won't see the error, *or* the transaction which created
> the duplicate key will have committed before the start of the retry
> and you will get the duplicate key error.
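
To make that concrete against the retry sketch above: the practical
workaround today is to also classify unique_violation as retryable. A
hypothetical helper of my own; it is only safe because the retried
transaction re-checks for the duplicate itself:

def is_retryable(e):
    # e is a psycopg2.Error carrying the SQLSTATE in e.pgcode.
    # 40001 serialization_failure is always worth a retry; treating
    # 23505 unique_violation the same way is the workaround under
    # discussion, and is safe only when the retried transaction checks
    # for the duplicate itself (as my INSERT ... WHERE NOT EXISTS does).
    return e.pgcode in ("40001", "23505")
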
>
> > I think an appropriate response to these complaints is to fix the
> > documentation to point out that duplicate-key violations may also
> > be worthy of retries.
>
> > but I see no mention of the issue in chapter 13.)
>
> I agree that's the best we can do for stable branches, and worth
> doing.
>
> It would be interesting to hear from others who rely on
> serializable transactions in production environments about what
> makes sense to them. This is probably the wrong list to find such
> people directly; but I seem to recall Josh Berkus has a lot of
> clients who do. Josh? Any opinion on this thread?
>
> --
> Kevin Grittner
> EDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>