Simon Riggs <si...@2ndquadrant.com> wrote:
> APPROACH1
> 1. Join to produce results based upon snapshot at start of query
> 2. Apply results for INSERT, UPDATE or DELETE
>
> Such failures are of great concern in practice because the time
> between 1 and 2 could be very long for large statements, or for
> smaller statements we might have sufficiently high concurrency to
> allow us to see regular failures.

I'm not sure that they're a *great* concern in a world with something
that targets UPSERT use cases, a situation that does not exist in DBMSs
with MERGE (with the notable exception of Teradata). But it is clearly
a concern that users may expect to avoid duplicate violations under
READ COMMITTED, since this has caused confusion among users of other
database systems' MERGE implementations.
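
To make the failure mode concrete, here is a minimal sketch (the table,
column names, and data are invented for illustration) of how Approach1
produces a duplicate violation under READ COMMITTED:

    -- Hypothetical schema:
    CREATE TABLE target (id int PRIMARY KEY, val text);

    -- Run concurrently in two sessions:
    MERGE INTO target t
    USING (VALUES (1, 'x')) AS s (id, val) ON t.id = s.id
    WHEN MATCHED THEN UPDATE SET val = s.val
    WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val);

    -- Both statements take their snapshots before either INSERT
    -- commits, so both joins see no match and take the INSERT arm.
    -- Once the first session commits, the second fails with a
    -- duplicate key violation on target_pkey.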

> APPROACH2 (modified from my original proposal slightly)

This write-up actually begins to confront the issues that I've raised.
I'm glad to see this.

> 1. Join...
> 2. Apply results for UPDATE; if the row is present but not visible
> via the snapshot taken at 1, do EPQ to ensure we locate the current
> live tuple
> 3. If still not visible, do speculative insertion if we have a unique
> index available, otherwise ERROR. If spec insertion fails, go to 2
>
> The loop created above can live-lock, meaning that an infinite loop
> could be created.

The loop is *guaranteed* to live-lock once you "goto 2". So you might as
well just throw an error at that point, which is the behavior that I've
been arguing for all along!

If this isn't guaranteed to live-lock at "goto 2", then it's not clear
why. The outcome of step 2 is clearly going to be identical if you don't
acquire a new MVCC snapshot, but you don't address that.
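
Spelled out as a schedule (hypothetical names again), the problem looks
like this:

    -- Session 1 begins its MERGE; its snapshot sees no row with
    -- id = 1, so the join reports the row as unmatched.

    -- Meanwhile, session 2 commits a conflicting row:
    INSERT INTO target (id, val) VALUES (1, 'y');

    -- Session 1, step 3: speculative insertion of id = 1 fails,
    -- because session 2's committed row is in the unique index.
    -- "Goto 2" re-runs step 2 against the *same* snapshot, which
    -- still cannot see session 2's row, so we reach step 3 and
    -- fail again, forever.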

You might have meant "apply an equivalent ON CONFLICT DO UPDATE", or
something like that, despite the fact that the use of ON CONFLICT DO
NOTHING was clearly implied by the "goto 2". I also see problems with
that, but I'll wait for you to clarify what you meant before going into
what they are.
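
For reference, the distinction in question, sketched with the same
invented names:

    -- DO NOTHING just skips on conflict; it cannot make a retry loop
    -- progress, because nothing changes for the next iteration:
    INSERT INTO target (id, val) VALUES (1, 'x')
    ON CONFLICT (id) DO NOTHING;

    -- DO UPDATE locks and updates the conflicting row even though
    -- that row is invisible to the statement's MVCC snapshot, which
    -- is exactly the special READ COMMITTED behavior at issue:
    INSERT INTO target (id, val) VALUES (1, 'x')
    ON CONFLICT (id) DO UPDATE SET val = EXCLUDED.val;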

> In practice, such live-locks are rare and we could detect them by
> falling out of the loop after a few tries. Approach2's purpose is to
> alleviate errors in Approach1, so falling out of the loop merely takes
> us back to the error we would have got if we didn't try, so Approach2
> has considerable benefit over Approach1.

I don't hate the idea of retrying a fixed number of times for things
like this, but I don't like it either. I'm going to assume that it's
fine for now.

> I read that step 3 in Approach2 is some kind of problem in MVCC
> semantics. My understanding is that the SQL Standard allows us to
> define what the semantics of the statement are in relation to
> concurrency, so any semantic issue can be handled by defining it to
> work the way we want.

My only concern is that our choices here should be good ones, based on
practical considerations. We both more or less agree on how this should
be assessed, I think; we just reach different conclusions.

> As you point out, whichever we choose, we will be bound by those
> semantics. So if we take Approach1, as has been indicated currently,
> what is the written explanation for that, so we can show that to the
> people who ask in the future about our decisions?

Well, Approach1 is what other systems implement. I think that it would
be important to point out that MERGE with Approach1 isn't special, but
ON CONFLICT DO UPDATE is special. We'd also say that higher isolation
levels will not have duplicate violations.
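
Something like this, say (a sketch of the promise we would be making,
with invented names; the exact error raised depends on the final
implementation):

    BEGIN ISOLATION LEVEL REPEATABLE READ;
    MERGE INTO target t
    USING source s ON t.id = s.id
    WHEN MATCHED THEN UPDATE SET val = s.val
    WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val);
    -- On a write conflict with a concurrent transaction, this fails
    -- with a serialization failure (and the client retries the
    -- transaction), rather than reporting a duplicate violation.
    COMMIT;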

--
Peter Geoghegan

