On Tue, Dec 13, 2016 at 5:30 AM, Ian Jackson <ian.jack...@eu.citrix.com> wrote:

> I am concerned that there are other possible bugs of this form.
> In earlier messages on this topic, it has been suggested that the
> "impossible" unique constraint violation is only one example of a
> possible "leakage".

As I see it, the main point of serializable transactions is to
prevent serialization anomalies from being persisted in the
database or seen by a serializable transaction which successfully
commits.  It is certainly very nice from a programming
perspective if the SQLSTATE permits easy identification of which
failures can be expected to yield a different result on retry, but
it doesn't seem to me that the standard requires that, and other
researchers and developers in this area have taken advantage of the
fact that constraints prevent certain types of serialization
anomalies from reaching the database.  In the initial
implementation of serializable transactions we noted papers that
described this, and counted on it for correctness.

Note that with a one year development cycle for major releases, a
feature often initially gets implemented in a "bare bones" format
that is useful but not ideal, and later releases build on it.
Unfortunately, no one has put resources into building on the
initial implementation (in 9.1), because it has worked well
enough that people have focused on other areas.

> Earlier you wrote:
>   If I recall correctly, the constraints for which there can be
>   errors appearing due to concurrent transactions are primary key,
>   unique, and foreign key constraints.  I don't remember seeing it
>   happen, but it would not surprise me if an exclusion constraint can
>   also cause an error due to a concurrent transaction's interaction
>   with the transaction receiving the error.
> Are all of these cases fixed by fcff8a57519847 "Detect SSI conflicts
> before reporting constraint violations" ?

No.  It was specifically meant to address duplicate keys, and there
was one particular set of steps which it was not able to address.
See post by Thomas Munro.  Hopefully, he, I, or someone else will
have a chance to work on the one known remaining issue and look for
others.  Your efforts have been helpful; it would be great if you
can find and document any other test cases which show a
less-than-ideal SQLSTATE or other outright serialization anomalies.

> I can try playing around with other kind of constraints, to try to
> discover different aspects or versions of this bug, but my knowledge
> of the innards of databases is very limited and I may not be
> particularly effective.  Certainly if I try and fail, I wouldn't have
> confidence that no such bug existed.

Right.  Proving the absence of any bug when dealing with race
conditions is notoriously hard.

> All statements in such transactions, even aborted transactions, need
> to see results, and have behaviour, which are completely consistent
> with some serialisation of all involved transactions.  This must apply
> up to (but not including) any serialisation failure error.

If I understand what you are saying, I disagree.  To prevent
incorrect results from being returned even when a transaction later
fails with a serialization failure would require blocking, and
would have a major detrimental effect on both concurrency and
throughput compared to the techniques PostgreSQL is using.  As far
as I'm aware, the guarantee you seek can only be provided by strict
two phase locking (S2PL).  Benchmarks by Dan R. K. Ports of MIT
CSAIL showed S2PL to be consistently slower than the SSI techniques
used by PostgreSQL -- with throughput up to almost 5x worse in some
workloads, even with SSI's requirement that results from a
transaction which later gets a serialization failure must be
ignored.  Please see Section 8, and particularly the performance of
S2PL in Figure 4 of this paper, which Dan and I presented to the
VLDB conference in Istanbul:


> It would be nice if the documentation stated the error codes that
> might be generated.  AFAICT that's just 40P01 and 40001 ?

Those are the only ones I know of.

> (I'm not sure what 40002 is.)

That doesn't appear to me to be related to transaction
serialization issues.

>> For the record, read-write-unique-4.spec's permutation r2 w1 w2 c1 c2
>> remains an open question for further work.
> Is this another possible bug of this form ?

Yes.  See the last specified permutation in this file:


If it's not clear how to read that to construct the problem case,
the README file might help:


I guess it's short enough to just paste here:

*** SETUP ***
CREATE TABLE invoice (
  year int,
  invoice_number int,
  PRIMARY KEY (year, invoice_number)
);
INSERT INTO invoice VALUES (2016, 1), (2016, 2);

-- connection2, in a SERIALIZABLE transaction:
SELECT COALESCE(MAX(invoice_number) + 1, 1) FROM invoice WHERE year = 2016;

-- connection1, in a SERIALIZABLE transaction:
INSERT INTO invoice VALUES (2016, 3);

-- connection2:
INSERT INTO invoice VALUES (2016, 3);

Well, the test includes commits and teardown, but this gets you to
the problem.  Connection2 gets this:

ERROR:  duplicate key value violates unique constraint "invoice_pkey"
DETAIL:  Key (year, invoice_number)=(2016, 3) already exists.

If connection1 had explicitly read the "gap" into which it inserted
its row (i.e., with a SELECT statement) there would be a
serialization failure instead.  Getting the RI index maintenance to
register as a read for this purpose is a bit tricky, and we don't
yet have a working patch for that.
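To make that concrete, here is a sketch of the variant described above, in which connection1 explicitly reads the "gap" before inserting into it.  This is not output from an actual run, just the expected behavior under SERIALIZABLE: with the explicit SELECT, the rw-conflict should be detected and connection2 should get a serialization failure (SQLSTATE 40001) rather than the duplicate-key error (23505).

```sql
-- connection2, in a SERIALIZABLE transaction:
SELECT COALESCE(MAX(invoice_number) + 1, 1) FROM invoice WHERE year = 2016;

-- connection1, in a SERIALIZABLE transaction:
-- explicitly read the "gap" before inserting into it
SELECT * FROM invoice WHERE year = 2016 AND invoice_number = 3;  -- no rows
INSERT INTO invoice VALUES (2016, 3);
COMMIT;

-- connection2:
INSERT INTO invoice VALUES (2016, 3);  -- expected: serialization failure, 40001
```

A client which retries on SQLSTATE 40001 would then simply rerun connection2's transaction, pick up the newly committed row, and compute invoice_number 4.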

Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
