On Fri, Oct 11, 2013 at 10:02 AM, Andres Freund <and...@2ndquadrant.com> wrote:
> On 2013-10-11 08:43:43 -0400, Robert Haas wrote:
>> > I appreciate that it's odd that serializable transactions now have to
>> > worry about seeing something they shouldn't have seen (when they
>> > conclusively have to go lock a row version not current to their
>> > snapshot).
>> Surely that's never going to be acceptable. At read committed,
>> locking a version not current to the snapshot might be acceptable if
>> we hold our nose, but at any higher level I think we have to fail with
>> a serialization complaint.
> I think an UPSERTish action in RR/SERIALIZABLE that notices a concurrent
> update should and has to *ALWAYS* raise a serialization
> failure. Anything else will cause violations of the given guarantees.

Sorry, this was just a poor choice of words on my part. I totally
agree with you here. Although I wasn't even talking about noticing a
concurrent update - I was talking about noticing that a tuple that
it's necessary to lock isn't visible to a serializable snapshot in the
first place (which should also fail).
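
To restate the rule we seem to agree on as a sketch (the enum and
helper below are purely illustrative, not PostgreSQL's actual symbols):

```c
#include <stdbool.h>

/* Hypothetical isolation levels, standing in for the backend's
 * XactIsoLevel; the names here are illustrative only. */
typedef enum { READ_COMMITTED, REPEATABLE_READ, SERIALIZABLE } IsoLevel;

/*
 * Can an UPSERT that finds it must lock a row version not current to
 * its snapshot proceed anyway?  Only at READ COMMITTED, by following
 * the update chain while holding our nose; at any snapshot-based level
 * the only correct outcome is a serialization failure.
 */
bool
can_proceed_against_newer_version(IsoLevel level)
{
    return level == READ_COMMITTED;
}
```

Anything else would violate the guarantees those levels are supposed
to provide.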

What I actually meant was that it's odd that that one new case (reason
for returning) added to HeapTupleSatisfiesMVCC() will always obligate
serializable transactions to throw a serialization failure. Though
that isn't strictly true: the modifications to
HeapTupleSatisfiesMVCC() that I'm likely to propose also redundantly
cover other cases where, if I'm not mistaken, that's okay (today, if
you've exclusively locked a tuple and it hasn't been updated/deleted,
why shouldn't it be visible to your snapshot?). The onus is on the
executor-level code to notice this should-be-invisibility for
non-read-committed transactions, probably immediately after returning
from value locking.
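
In other words, something like the following shape - with the caveat
that the struct, field names, and helpers here are hypothetical
stand-ins for heap tuple header state and the real
HeapTupleSatisfiesMVCC()/ereport() machinery, just to pin down the
division of labor I have in mind:

```c
#include <stdbool.h>

/* Hypothetical tuple state, standing in for heap tuple header bits. */
typedef struct
{
    bool visible_to_snapshot; /* plain MVCC visibility verdict */
    bool locked_by_us;        /* we hold an exclusive tuple lock */
    bool updated_or_deleted;  /* tuple has a committed successor */
} TupleState;

/*
 * Sketch of the proposed extra return case: a tuple that we have
 * exclusively locked, and that has not since been updated or deleted,
 * is reported visible even when the snapshot alone would say otherwise.
 */
bool
satisfies_mvcc_sketch(const TupleState *tup)
{
    if (tup->visible_to_snapshot)
        return true;
    return tup->locked_by_us && !tup->updated_or_deleted;
}

/*
 * Executor-level follow-up, checked right after value locking: for
 * non-READ COMMITTED transactions, a tuple returned only because of
 * the lock (i.e. one the snapshot would not have seen) must raise a
 * serialization failure - the caller would ereport() at this point.
 */
bool
must_raise_serialization_failure(const TupleState *tup,
                                 bool read_committed)
{
    return !read_committed &&
           satisfies_mvcc_sketch(tup) &&
           !tup->visible_to_snapshot;
}
```

The point of splitting it this way is that the visibility routine
stays simple, and only the executor knows the isolation level well
enough to decide whether seeing the tuple was actually acceptable.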