On Fri, Oct 14, 2016 at 2:04 AM, Kevin Grittner <kgri...@gmail.com> wrote:
> On Wed, Oct 12, 2016 at 8:06 PM, Thomas Munro <thomas.mu...@enterprisedb.com> 
> wrote:
>> The "higher isolation levels" probably shouldn't be treated the same way.
>> I think Peter's right about REPEATABLE READ.  We should definitely
>> raise the error immediately as we do in that level, because our RR
>> (SI) doesn't care about write skew and all that stuff, it just
>> promises that you can only see data in your snapshot.
> But the whole point of the special code for both RI and INSERT/ON
> CONFLICT is to get "underneath" that and provide a "primitive" that
> can see things an application statement can't, for better
> performance and error handling.  What SERIALIZABLE promises is that
> it runs exactly the same as REPEATABLE READ, but with some
> additional monitoring for serialization failure errors in some
> places that REPEATABLE READ does not generate them -- this would be
> the first and only place that SERIALIZABLE would break that model.
> The idea seems completely wrong and arbitrary.

Ugh, yeah.  Thanks for this reminder of the relationship between SI
and SSI, which I somehow temporarily lost sight of.

> Where do you see a problem if REPEATABLE READ handles INSERT/ON
> CONFLICT without error?  In many cases it would actually be
> providing a result consistent with a serial execution of the
> transactions; and where it doesn't, it would be the same anomalies
> that are possible with anything else under REPEATABLE READ.

I thought that there was something fishy about the idea of not running
Peter's check in the case of ON CONFLICT DO NOTHING in RR, because
then there isn't an opportunity to detect serialization failure that
the DO UPDATE variant has.  Upon reflection, DO NOTHING is not very
different from INSERT with an exception handler for unique_violation
that does nothing, and that doesn't cause RR to raise an error.  I see
now that you are right, and the check is probably bogus for RR.
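To make that equivalence concrete, here is a toy model in plain Python
(not PostgreSQL; the dict-as-table and the function names are my own
invention): the "do nothing" spelling and the "swallow the error"
spelling leave the table in the same state, without surfacing any
error to the caller.

```python
# Toy model: "ON CONFLICT DO NOTHING" behaves like an INSERT whose
# unique_violation is caught by an exception handler that does nothing.
# (Plain Python standing in for a table with a unique key.)

class UniqueViolation(Exception):
    pass

def insert(table, key, value):
    """Plain INSERT: raises on a duplicate key."""
    if key in table:
        raise UniqueViolation(key)
    table[key] = value

def insert_on_conflict_do_nothing(table, key, value):
    """INSERT ... ON CONFLICT DO NOTHING: silently skips duplicates."""
    if key not in table:
        table[key] = value

def insert_swallowing_error(table, key, value):
    """INSERT wrapped in a handler that ignores unique_violation."""
    try:
        insert(table, key, value)
    except UniqueViolation:
        pass  # do nothing, just like the ON CONFLICT clause

a, b = {1: 100}, {1: 100}
insert_on_conflict_do_nothing(a, 1, 200)
insert_swallowing_error(b, 1, 200)
print(a == b)  # True: the two spellings are indistinguishable here
```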

>> But yeah, the existing code raises false positive serialization
>> failures under SERIALIZABLE, and that's visible in the isolation test
>> I posted: there is actually a serial order of those transactions with
>> the same result.
> Exactly.  The error based on the write conflict with ON CONFLICT DO
> NOTHING in your patch is really a false positive.  That doesn't
> break correctness, but it hurts performance, so it should be
> avoided if possible.

Agreed.  The check is bogus for SERIALIZABLE too, if we have proper SSI checks.

>> When working on commit fcff8a57 I became suspicious of the way ON
>> CONFLICT interacts with SSI, as I mentioned in passing back then[1],
>> thinking mainly of false negatives.  I failed to find a
>> non-serializable schedule involving ON CONFLICT that was allowed to
>> run, though I didn't spend much time on it.  One thing that worries
>> me is the final permutation of read-write-unique-4.spec, which
>> produces an arguably spurious unique constraint violation (UCV),
>> that is, a transaction that doesn't
>> commit but raises a UCV instead of the serialization failure you might
>> expect.  The ON CONFLICT equivalent might be a transaction that takes
>> the ON CONFLICT path and then commits, even though it should be
>> considered non-serializable.  I would really like to understand that
>> case better, and until then I wouldn't bet my boots that it isn't
>> possible to commit anomalies using ON CONFLICT under SERIALIZABLE
>> without Peter's check (or even with it), despite the fact that it
>> reaches predicate locking code via heap_fetch etc.
> Hm.  With the duplicate key error I fail to see how any anomaly
> could make it to a committed state in the database, although I
> agree it is unfortunate that there is that one case where it really
> should be considered a serialization failure that we haven't yet
> coerced to yield that instead of the duplicate key error.

Right, in the unique_violation case it can't commit, so there's no
problem (it would just be nicer for users if we could catch that case;
you might call it a false negative, but it is harmless because a
unique_violation saves the day).  What I'm wondering about, though, is
whether a similar ON CONFLICT schedule suffers a similar problem, but
would allow you to commit.  For example, I think the ON CONFLICT
equivalent might be something like the following (rather contrived)
schedule, which happily commits if you comment out Peter's check:

(1)  postgres=# create table bank_account (id int primary key, cash int);
(1)  postgres=# begin transaction isolation level serializable ;
(1)  BEGIN

(2)      postgres=# begin transaction isolation level serializable ;
(2)      BEGIN

(1)  postgres=# select * from bank_account where id = 1;
(1)  ┌────┬──────┐
(1)  │ id │ cash │
(1)  ├────┼──────┤
(1)  └────┴──────┘
(1)  (0 rows)

(2)      postgres=# insert into bank_account values (1, 100);
(2)      INSERT 0 1

(1)  postgres=# insert into bank_account values (1, 200) on conflict do nothing;
(1)  ...waits for tx2...

(2)      postgres=# commit;
(2)      COMMIT

(1)  INSERT 0 0
(1)  postgres=# commit;
(1)  COMMIT

If tx1 ran before tx2, then it would have succeeded in inserting (1,
200), and tx2 would have failed with unique_violation.  If tx2 ran
before tx1, then tx1's SELECT command would have seen (1, 100) and
possibly taken a different course of action.  So this schedule is
non-serializable, right?
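To double-check that reasoning, here is a toy replay in plain Python
(not PostgreSQL; the modelling is my own) of the two possible serial
orders of tx1 and tx2 from the schedule above, showing that neither
reproduces the outcome the interleaved execution committed:

```python
# Toy replay of the bank_account schedule: tx1 does SELECT then
# INSERT (1, 200) ON CONFLICT DO NOTHING; tx2 does a plain
# INSERT (1, 100).  We run them serially in both orders.

def run_serial(order):
    """Replay the transactions serially against a dict 'table'.
    Returns (tx1's SELECT result, final rows, list of errors)."""
    table = {}          # id -> cash
    select_seen = None
    errors = []
    for tx in order:
        if tx == "tx1":
            # tx1: SELECT cash WHERE id = 1, then
            #      INSERT (1, 200) ON CONFLICT DO NOTHING
            select_seen = table.get(1)
            if 1 not in table:
                table[1] = 200
        else:
            # tx2: plain INSERT (1, 100), no ON CONFLICT
            if 1 in table:
                errors.append("tx2: unique_violation")
            else:
                table[1] = 100
    return select_seen, table, errors

# Observed interleaved outcome: tx1's SELECT saw no row, both
# transactions committed, and the table ends up holding (1, 100).
observed = (None, {1: 100}, [])

print(run_serial(["tx1", "tx2"]))  # (None, {1: 200}, ['tx2: unique_violation'])
print(run_serial(["tx2", "tx1"]))  # (100, {1: 100}, [])
print(observed in (run_serial(["tx1", "tx2"]),
                   run_serial(["tx2", "tx1"])))  # False: no serial order matches
```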

If you remove ON CONFLICT DO NOTHING, then tx1 gets a unique_violation
after tx2 commits, which is similar to the last case in
read-write-unique-4.spec.  To be able to produce a cycle that SSI can
detect, perhaps an INSERT containing an implicit uniqueness check
would need to be modelled as a read followed by a write.  I couldn't
make that work, but I'm not sure if it's sensible anyway: wouldn't
overlapping transactions consisting of just a single INSERT with the
same key then produce a false positive, instead of unique_violation in
one transaction?
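To illustrate that worry with a sketch (my own modelling, not anything
PostgreSQL actually does): if the uniqueness check were treated as a
predicate read of "key k is absent" followed by a write of k, then two
overlapping transactions each doing a bare INSERT of the same key would
each read something the other writes, giving rw-antidependency edges in
both directions, which SSI would treat as a cycle:

```python
# Hypothetical modelling: each INSERT of key k "reads" k's absence and
# then "writes" k.  All transactions are assumed to overlap in time.

def rw_antidependencies(txs):
    """txs maps name -> (set of keys read, set of keys written).
    Returns edges t1 -> t2 where t1 read a key that t2 wrote."""
    return {(t1, t2)
            for t1, (reads, _) in txs.items()
            for t2, (_, writes) in txs.items()
            if t1 != t2 and reads & writes}

# Two overlapping transactions, each just INSERTing key "k":
txs = {"a": ({"k"}, {"k"}), "b": ({"k"}, {"k"})}
print(sorted(rw_antidependencies(txs)))
# [('a', 'b'), ('b', 'a')]: edges both ways, so under this modelling
# SSI would see a dangerous structure and could abort both, where today
# one transaction simply gets a unique_violation and the other commits.
```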

Thomas Munro

Sent via pgsql-general mailing list (pgsql-general@postgresql.org)