Re: [HACKERS] heap_update() VM retry could break HOT?

2016-07-18 Thread Pavan Deolasee
On Mon, Jul 18, 2016 at 12:47 PM, Andres Freund wrote:

>
> As far as I can see, that could mean that we perform HOT updates when not
> permitted, because the tuple has been replaced in the meantime, including
> the pkey. Similarly, the wrong tuple lock mode could end up being used.
>
> Am I missing something?
>
If the to-be-updated tuple gets updated while we were retrying the VM
pinning, heap_update() should return HeapTupleUpdated, and the caller must
wait for the updating transaction to finish and then retry the update with
the new version (or fail, depending on the isolation level). Given that
HeapTupleSatisfiesUpdate() is called after l2, the logic seems fine to me.
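
For reference, here is a condensed, self-contained model of the recheck I
have in mind (a toy sketch only; toy_satisfies_update() is a made-up
stand-in for HeapTupleSatisfiesUpdate(), not actual server code):

#include <stdbool.h>
#include <stdio.h>

typedef enum { MayUpdate, Updated } ToyUpdateResult;

typedef struct
{
    int  xmax;       /* updater's xid, 0 if none */
    bool committed;  /* did that updater commit? */
} ToyTuple;

/* Stand-in for HeapTupleSatisfiesUpdate(): detect a concurrent update. */
static ToyUpdateResult
toy_satisfies_update(const ToyTuple *tup, int my_xid)
{
    if (tup->xmax != 0 && tup->xmax != my_xid && tup->committed)
        return Updated;         /* caller must wait/retry or fail */
    return MayUpdate;
}

int
main(void)
{
    ToyTuple tup = { 0, false };

    /* l2: the VM-pin retry path jumps back to (before) this recheck. */
    /* Meanwhile, another transaction updates the tuple and commits: */
    tup.xmax = 42;
    tup.committed = true;

    if (toy_satisfies_update(&tup, 7) == Updated)
        printf("HeapTupleUpdated: caller retries with the new version\n");
    return 0;
}

So even if the tuple changed during the re-pinning window, the recheck
after l2 reports the concurrent update back to the caller.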

Thanks,
Pavan

-- 
 Pavan Deolasee   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


[HACKERS] heap_update() VM retry could break HOT?

2016-07-18 Thread Andres Freund
Hi,

heap_update() retries the visibility map pinning, as explained in the
following comment:

/*
 * Before locking the buffer, pin the visibility map page if it appears to
 * be necessary.  Since we haven't got the lock yet, someone else might be
 * in the middle of changing this, so we'll need to recheck after we have
 * the lock.
 */
if (PageIsAllVisible(page))
    visibilitymap_pin(relation, block, &vmbuffer);

...

/*
 * If we didn't pin the visibility map page and the page has become all
 * visible while we were busy locking the buffer, or during some
 * subsequent window during which we had it unlocked, we'll have to unlock
 * and re-lock, to avoid holding the buffer lock across an I/O.  That's a
 * bit unfortunate, especially since we'll now have to recheck whether the
 * tuple has been locked or updated under us, but hopefully it won't
 * happen very often.
 */
if (vmbuffer == InvalidBuffer && PageIsAllVisible(page))
{
    LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
    visibilitymap_pin(relation, block, &vmbuffer);
    LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
    goto l2;
}

unfortunately the l2 target is after the following:
/*
 * If we're not updating any "key" column, we can grab a weaker lock type.
 * This allows for more concurrency when we are running simultaneously
 * with foreign key checks.
 *
 * Note that if a column gets detoasted while executing the update, but
 * the value ends up being the same, this test will fail and we will use
 * the stronger lock.  This is acceptable; the important case to optimize
 * is updates that don't manipulate key columns, not those that
 * serendipitously arrive at the same key values.
 */
HeapSatisfiesHOTandKeyUpdate(relation, hot_attrs, key_attrs, id_attrs,
                             &satisfies_hot, &satisfies_key,
                             &satisfies_id, &oldtup, newtup);
if (satisfies_key)
{
    *lockmode = LockTupleNoKeyExclusive;
    mxact_status = MultiXactStatusNoKeyUpdate;
    key_intact = true;

    /*
     * If this is the first possibly-multixact-able operation in the
     * current transaction, set my per-backend OldestMemberMXactId
     * setting. We can be certain that the transaction will never become a
     * member of any older MultiXactIds than that.  (We have to do this
     * even if we end up just using our own TransactionId below, since
     * some other backend could incorporate our XID into a MultiXact
     * immediately afterwards.)
     */
    MultiXactIdSetOldestMember();
}
else
{
    *lockmode = LockTupleExclusive;
    mxact_status = MultiXactStatusUpdate;
    key_intact = false;
}

As far as I can see, that could mean that we perform HOT updates when not
permitted, because the tuple has been replaced in the meantime, including
the pkey. Similarly, the wrong tuple lock mode could end up being used.
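
To make the interleaving concrete, a toy, self-contained model of what I
think can happen (choose_lockmode() is invented for illustration; it is
not the actual HeapSatisfiesHOTandKeyUpdate() logic):

#include <stdio.h>

typedef enum { NoKeyExclusive, Exclusive } ToyLockMode;

typedef struct
{
    int pkey;
} ToyTuple;

/* Invented stand-in: weaker lock only if key columns are unchanged. */
static ToyLockMode
choose_lockmode(const ToyTuple *oldtup, const ToyTuple *newtup)
{
    return (oldtup->pkey == newtup->pkey) ? NoKeyExclusive : Exclusive;
}

int
main(void)
{
    ToyTuple oldtup = { 1 };
    ToyTuple newtup = { 1 };

    /* Decided once, before l2: key looks intact, weaker lock chosen. */
    ToyLockMode mode = choose_lockmode(&oldtup, &newtup);

    /*
     * While we are unlocked, re-pinning the VM page, a concurrent updater
     * replaces the tuple and changes the pkey.  The goto l2 re-runs the
     * visibility checks but never recomputes the lock mode.
     */
    oldtup.pkey = 2;

    if (mode != choose_lockmode(&oldtup, &newtup))
        printf("stale decision: lock mode/HOT choice no longer valid\n");
    return 0;
}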

Am I missing something?

- Andres

