On 01/14/2014 12:20 PM, Peter Geoghegan wrote:
> I think that the prevention of unprincipled deadlocking is all down to
> this immediately prior piece of code, at least in those test cases:

> !                       /*
> !                        * in insertion by other.
> !                        *
> !                        * Before returning true, check for the special case that the
> !                        * tuple was deleted by the same transaction that inserted it.
> !                        * Such a tuple will never be visible to anyone else, whether
> !                        * the transaction commits or aborts.
> !                        */
> !                       if (!(tuple->t_infomask & HEAP_XMAX_INVALID) &&
> !                               !(tuple->t_infomask & HEAP_XMAX_COMMITTED) &&
> !                               !(tuple->t_infomask & HEAP_XMAX_IS_MULTI) &&
> !                               !HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask) &&
> !                               HeapTupleHeaderGetRawXmax(tuple) == HeapTupleHeaderGetXmin(tuple))
> !                       {
> !                               return false;
> !                       }

> But why should it be acceptable to change the semantics of dirty
> snapshots like this, which previously always returned true when
> control reached here? It is a departure from their traditional
> behavior, not limited to clients of this new promise tuple
> infrastructure. Now, it becomes entirely a matter of whether we tried
> to insert before or after the deleting xact's deletion (of a tuple it
> originally inserted) as to whether or not we block. So in general we
> don't get to "keep our old value locks" until xact end when we update
> or delete.

Hmm. So the scenario would be that a process inserts a tuple, but kills it again later in the transaction, and then re-inserts the same value. The expectation is that because it inserted the value once already, inserting it again will not block. I.e., inserting and deleting a tuple effectively acquires a value lock on the inserted values.

> Even if you don't consider this a bug for existing dirty
> snapshot clients (I myself do - we can't rely on deleting a row and
> re-inserting the same values now, which could be particularly
> undesirable for updates),

Yeah, it would be bad if updates start failing because of this. We could add a check for that, and return true if the tuple was updated rather than deleted.

> I have already described how we can take
> advantage of deleting tuples while still holding on to their "value
> locks" [1] to Andres. I think it'll be very important for multi-master
> conflict resolution. I've already described this useful property of
> dirty snapshots numerous times on this thread in relation to different
> aspects, as it happens. It's essential.

I didn't understand that description.

> Anyway, I guess you're going to need an infomask bit to fix this, so
> you can differentiate between 'promise' tuples and 'proper' tuples.

Yeah, that's one way. Or you could set xmin to invalid, to make the killed tuple look thoroughly dead to everyone.

> Those are in short supply. I still think this problem is more or less
> down to a modularity violation, and I suspect that this is not the
> last problem that will be found along these lines if we continue to
> pursue this approach.

You have suspected that many times throughout this thread, and every time there's been a relatively simple solution to the issues you've raised. I suspect that will also be true for whatever mundane next issue you come up with.

- Heikki

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
