On Tue, Feb 10, 2026, at 10:09 AM, Greg Burd wrote:
> Hello,
>
> TL;DR, I'm going to put a pin in this idea for now.

Okay, I couldn't put the pen down after all. :)

Here's my thinking: this patch set can be thought of as:

a) moving HeapDetermineColumnsInfo() into the executor
b) all that HOT nonsense

I feel that (a) has value even without (b): moving a chunk of work from inside 
an exclusive buffer lock to outside that lock is a Good Thing(TM) and could, in 
this case, result in more concurrency.
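That reordering can be sketched as a hedged, hypothetical C fragment (a plain 
pthread mutex stands in for the exclusive buffer lock, a fixed-width bitmask 
for the attribute set; none of these names are PostgreSQL's):

```c
#include <assert.h>
#include <pthread.h>

#define NATTS 4

typedef struct { int cols[NATTS]; } tuple_t;

static pthread_mutex_t page_lock = PTHREAD_MUTEX_INITIALIZER;
static tuple_t page_tuple = {{1, 2, 3, 4}};

/* The potentially expensive part: done with NO lock held, using the old
 * values the executor has already fetched. */
static unsigned
modified_cols(const tuple_t *oldtup, const tuple_t *newtup)
{
    unsigned mask = 0;
    for (int i = 0; i < NATTS; i++)
        if (oldtup->cols[i] != newtup->cols[i])
            mask |= 1u << i;
    return mask;
}

/* The critical section now covers only the write itself. */
static void
apply_update(const tuple_t *newtup)
{
    pthread_mutex_lock(&page_lock);
    page_tuple = *newtup;
    pthread_mutex_unlock(&page_lock);
}
```

The shorter the critical section, the less time concurrent updaters of the 
same page spend waiting on each other; that is the entire point of (a).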

To that end, I present a single patch that *only* does (a): it moves the 
logic of HeapDetermineColumnsInfo() into the executor and doesn't change 
anything else.  Meaning that what goes HOT today (without this patch) should 
continue to go HOT tomorrow (with this patch), and nothing else.

Catalog tuples use simple_heap_update(), which calls HeapDetermineColumnsInfo() 
just as it does now, so those results are also identical.

Logically replicated tuples avoid HeapDetermineColumnsInfo() because they carry 
with them the set of changed attributes, so I surface that set from within 
slot_modify_data() and intersect it with the set of indexed attributes, 
resulting in an identical update while avoiding that overhead.
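A minimal sketch of that intersection, with plain word-sized bitmasks standing 
in for Bitmapset (the helper name is hypothetical):

```c
#include <assert.h>

/* Bit i set = attribute i modified/indexed.  The replicated change record
 * already tells us which attributes changed; only those that are also
 * indexed can require index maintenance. */
static unsigned
modified_indexed_attrs(unsigned changed_attrs, unsigned indexed_attrs)
{
    return changed_attrs & indexed_attrs;
}
```

In the patch itself this roughly corresponds to a bms_intersect() of the set 
surfaced from slot_modify_data() with the relation's indexed-attribute bitmap.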

This has to run faster; I'll measure it ASAP and post the results, but I 
thought I'd share this now to potentially keep the ball rolling.

best.

-greg

PS: I'll layer in the additional changes that expand HOT in a future post, but 
those can be viewed in the context of "maybe in v20", while I hope that this 
patch could be acceptable in v19.
From b4d0b42943b68a522b139093a27fc786921cd22f Mon Sep 17 00:00:00 2001
From: Greg Burd <[email protected]>
Date: Sun, 2 Nov 2025 11:36:20 -0500
Subject: [PATCH v20260211] Identify modified indexed attributes in the
 executor on UPDATE

Refactor executor update logic to determine which indexed columns have
actually changed during an UPDATE operation rather than leaving this up
to HeapDetermineColumnsInfo() in heap_update().

ExecWhichIndexesRequireUpdates() replaces HeapDetermineColumnsInfo()
and is called before table_tuple_update(), crucially without the need
for an exclusive buffer lock on the page that holds the tuple being
updated.  This reduces the time the lock is held later within
heapam_tuple_update() and heap_update().

Catalog tuple updates use simple_heap_update(), which still calls into
HeapDetermineColumnsInfo() to identify modified indexed attributes as
before; however, some of the same logic now in heapam_tuple_update() is
replicated in simple_heap_update().  The special case for
ItemIdIsNormal() only applies to catalog tuple updates, so that code
remains in simple_heap_update(), but not in other paths.

Updates stemming from logical replication now avoid calling
HeapDetermineColumnsInfo() as well.  The modified indexed attributes for
these updates are now simply the intersection of the attributes returned
from slot_modify_data() with the set of indexed attributes on the
relation.

Besides identifying the set of modified indexed attributes,
HeapDetermineColumnsInfo() was also responsible for part of the logic
involved in the decision to include the replica identity key or not.
This now happens within heapam_tuple_update().
---
 src/backend/access/heap/heapam.c              | 604 ++++++++----------
 src/backend/access/heap/heapam_handler.c      | 182 +++++-
 src/backend/access/table/tableam.c            |   5 +-
 src/backend/catalog/indexing.c                |  16 +-
 src/backend/executor/execMain.c               |   1 +
 src/backend/executor/execReplication.c        |   7 +
 src/backend/executor/nodeModifyTable.c        | 169 ++++-
 src/backend/replication/logical/worker.c      |  69 +-
 src/backend/utils/cache/relcache.c            |  44 +-
 src/include/access/heapam.h                   |  26 +-
 src/include/access/tableam.h                  |   8 +-
 src/include/catalog/index.h                   |   1 +
 src/include/executor/executor.h               |   3 +
 src/include/nodes/execnodes.h                 |   6 +
 src/include/utils/rel.h                       |   2 +-
 src/include/utils/relcache.h                  |   2 +-
 .../regress/expected/generated_virtual.out    |   2 +-
 src/test/regress/expected/updatable_views.out |   4 +-
 src/test/regress/sql/generated_virtual.sql    |   2 +-
 src/test/regress/sql/updatable_views.sql      |   2 +-
 20 files changed, 762 insertions(+), 393 deletions(-)

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 98d53caeea8..2da1f702486 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -37,6 +37,7 @@
 #include "access/multixact.h"
 #include "access/subtrans.h"
 #include "access/syncscan.h"
+#include "access/tableam.h"
 #include "access/valid.h"
 #include "access/visibilitymap.h"
 #include "access/xloginsert.h"
@@ -51,6 +52,7 @@
 #include "utils/datum.h"
 #include "utils/injection_point.h"
 #include "utils/inval.h"
+#include "utils/relcache.h"
 #include "utils/spccache.h"
 #include "utils/syscache.h"
 
@@ -62,16 +64,8 @@ static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 								  HeapTuple newtup, HeapTuple old_key_tuple,
 								  bool all_visible_cleared, bool new_all_visible_cleared);
 #ifdef USE_ASSERT_CHECKING
-static void check_lock_if_inplace_updateable_rel(Relation relation,
-												 const ItemPointerData *otid,
-												 HeapTuple newtup);
 static void check_inplace_rel_lock(HeapTuple oldtup);
 #endif
-static Bitmapset *HeapDetermineColumnsInfo(Relation relation,
-										   Bitmapset *interesting_cols,
-										   Bitmapset *external_cols,
-										   HeapTuple oldtup, HeapTuple newtup,
-										   bool *has_external);
 static bool heap_acquire_tuplock(Relation relation, const ItemPointerData *tid,
 								 LockTupleMode mode, LockWaitPolicy wait_policy,
 								 bool *have_tuple_lock);
@@ -3300,7 +3294,10 @@ simple_heap_delete(Relation relation, const ItemPointerData *tid)
  *	heap_update - replace a tuple
  *
  * See table_tuple_update() for an explanation of the parameters, except that
- * this routine directly takes a tuple rather than a slot.
+ * this routine directly takes a heap tuple rather than a slot.
+ *
+ * It's required that the caller has acquired the pin and lock on the buffer.
+ * That lock and pin will be managed here, not in the caller.
  *
  * In the failure cases, the routine fills *tmfd with the tuple's t_ctid,
  * t_xmax (resolving a possible MultiXact, if necessary), and t_cmax (the last
@@ -3308,30 +3305,19 @@ simple_heap_delete(Relation relation, const ItemPointerData *tid)
  * generated by another transaction).
  */
 TM_Result
-heap_update(Relation relation, const ItemPointerData *otid, HeapTuple newtup,
+heap_update(Relation relation, HeapTupleData *oldtup, HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
 			TM_FailureData *tmfd, LockTupleMode *lockmode,
-			TU_UpdateIndexes *update_indexes)
+			Buffer buffer, Page page, BlockNumber block, ItemId lp,
+			bool hot_allowed, Buffer *vmbuffer, bool rep_id_key_required)
 {
 	TM_Result	result;
 	TransactionId xid = GetCurrentTransactionId();
-	Bitmapset  *hot_attrs;
-	Bitmapset  *sum_attrs;
-	Bitmapset  *key_attrs;
-	Bitmapset  *id_attrs;
-	Bitmapset  *interesting_attrs;
-	Bitmapset  *modified_attrs;
-	ItemId		lp;
-	HeapTupleData oldtup;
 	HeapTuple	heaptup;
 	HeapTuple	old_key_tuple = NULL;
 	bool		old_key_copied = false;
-	Page		page;
-	BlockNumber block;
 	MultiXactStatus mxact_status;
-	Buffer		buffer,
-				newbuf,
-				vmbuffer = InvalidBuffer,
+	Buffer		newbuf,
 				vmbuffer_new = InvalidBuffer;
 	bool		need_toast;
 	Size		newtupsize,
@@ -3339,13 +3325,11 @@ heap_update(Relation relation, const ItemPointerData *otid, HeapTuple newtup,
 	bool		have_tuple_lock = false;
 	bool		iscombo;
 	bool		use_hot_update = false;
-	bool		summarized_update = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
 	bool		all_visible_cleared_new = false;
 	bool		checked_lockers;
 	bool		locker_remains;
-	bool		id_has_external = false;
 	TransactionId xmax_new_tuple,
 				xmax_old_tuple;
 	uint16		infomask_old_tuple,
@@ -3353,144 +3337,13 @@ heap_update(Relation relation, const ItemPointerData *otid, HeapTuple newtup,
 				infomask_new_tuple,
 				infomask2_new_tuple;
 
-	Assert(ItemPointerIsValid(otid));
-
-	/* Cheap, simplistic check that the tuple matches the rel's rowtype. */
-	Assert(HeapTupleHeaderGetNatts(newtup->t_data) <=
-		   RelationGetNumberOfAttributes(relation));
-
+	Assert(BufferIsLockedByMe(buffer));
+	Assert(ItemIdIsNormal(lp));
 	AssertHasSnapshotForToast(relation);
 
-	/*
-	 * Forbid this during a parallel operation, lest it allocate a combo CID.
-	 * Other workers might need that combo CID for visibility checks, and we
-	 * have no provision for broadcasting it to them.
-	 */
-	if (IsInParallelMode())
-		ereport(ERROR,
-				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
-				 errmsg("cannot update tuples during a parallel operation")));
-
-#ifdef USE_ASSERT_CHECKING
-	check_lock_if_inplace_updateable_rel(relation, otid, newtup);
-#endif
-
-	/*
-	 * Fetch the list of attributes to be checked for various operations.
-	 *
-	 * For HOT considerations, this is wasted effort if we fail to update or
-	 * have to put the new tuple on a different page.  But we must compute the
-	 * list before obtaining buffer lock --- in the worst case, if we are
-	 * doing an update on one of the relevant system catalogs, we could
-	 * deadlock if we try to fetch the list later.  In any case, the relcache
-	 * caches the data so this is usually pretty cheap.
-	 *
-	 * We also need columns used by the replica identity and columns that are
-	 * considered the "key" of rows in the table.
-	 *
-	 * Note that we get copies of each bitmap, so we need not worry about
-	 * relcache flush happening midway through.
-	 */
-	hot_attrs = RelationGetIndexAttrBitmap(relation,
-										   INDEX_ATTR_BITMAP_HOT_BLOCKING);
-	sum_attrs = RelationGetIndexAttrBitmap(relation,
-										   INDEX_ATTR_BITMAP_SUMMARIZED);
-	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
-	id_attrs = RelationGetIndexAttrBitmap(relation,
-										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
-	interesting_attrs = NULL;
-	interesting_attrs = bms_add_members(interesting_attrs, hot_attrs);
-	interesting_attrs = bms_add_members(interesting_attrs, sum_attrs);
-	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
-	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
-
-	block = ItemPointerGetBlockNumber(otid);
-	INJECTION_POINT("heap_update-before-pin", NULL);
-	buffer = ReadBuffer(relation, block);
-	page = BufferGetPage(buffer);
-
-	/*
-	 * Before locking the buffer, pin the visibility map page if it appears to
-	 * be necessary.  Since we haven't got the lock yet, someone else might be
-	 * in the middle of changing this, so we'll need to recheck after we have
-	 * the lock.
-	 */
-	if (PageIsAllVisible(page))
-		visibilitymap_pin(relation, block, &vmbuffer);
-
-	LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
-
-	lp = PageGetItemId(page, ItemPointerGetOffsetNumber(otid));
-
-	/*
-	 * Usually, a buffer pin and/or snapshot blocks pruning of otid, ensuring
-	 * we see LP_NORMAL here.  When the otid origin is a syscache, we may have
-	 * neither a pin nor a snapshot.  Hence, we may see other LP_ states, each
-	 * of which indicates concurrent pruning.
-	 *
-	 * Failing with TM_Updated would be most accurate.  However, unlike other
-	 * TM_Updated scenarios, we don't know the successor ctid in LP_UNUSED and
-	 * LP_DEAD cases.  While the distinction between TM_Updated and TM_Deleted
-	 * does matter to SQL statements UPDATE and MERGE, those SQL statements
-	 * hold a snapshot that ensures LP_NORMAL.  Hence, the choice between
-	 * TM_Updated and TM_Deleted affects only the wording of error messages.
-	 * Settle on TM_Deleted, for two reasons.  First, it avoids complicating
-	 * the specification of when tmfd->ctid is valid.  Second, it creates
-	 * error log evidence that we took this branch.
-	 *
-	 * Since it's possible to see LP_UNUSED at otid, it's also possible to see
-	 * LP_NORMAL for a tuple that replaced LP_UNUSED.  If it's a tuple for an
-	 * unrelated row, we'll fail with "duplicate key value violates unique".
-	 * XXX if otid is the live, newer version of the newtup row, we'll discard
-	 * changes originating in versions of this catalog row after the version
-	 * the caller got from syscache.  See syscache-update-pruned.spec.
-	 */
-	if (!ItemIdIsNormal(lp))
-	{
-		Assert(RelationSupportsSysCache(RelationGetRelid(relation)));
-
-		UnlockReleaseBuffer(buffer);
-		Assert(!have_tuple_lock);
-		if (vmbuffer != InvalidBuffer)
-			ReleaseBuffer(vmbuffer);
-		tmfd->ctid = *otid;
-		tmfd->xmax = InvalidTransactionId;
-		tmfd->cmax = InvalidCommandId;
-		*update_indexes = TU_None;
-
-		bms_free(hot_attrs);
-		bms_free(sum_attrs);
-		bms_free(key_attrs);
-		bms_free(id_attrs);
-		/* modified_attrs not yet initialized */
-		bms_free(interesting_attrs);
-		return TM_Deleted;
-	}
-
-	/*
-	 * Fill in enough data in oldtup for HeapDetermineColumnsInfo to work
-	 * properly.
-	 */
-	oldtup.t_tableOid = RelationGetRelid(relation);
-	oldtup.t_data = (HeapTupleHeader) PageGetItem(page, lp);
-	oldtup.t_len = ItemIdGetLength(lp);
-	oldtup.t_self = *otid;
-
-	/* the new tuple is ready, except for this: */
+	/* The new tuple is ready, except for this */
 	newtup->t_tableOid = RelationGetRelid(relation);
 
-	/*
-	 * Determine columns modified by the update.  Additionally, identify
-	 * whether any of the unmodified replica identity key attributes in the
-	 * old tuple is externally stored or not.  This is required because for
-	 * such attributes the flattened value won't be WAL logged as part of the
-	 * new tuple so we must include it as part of the old_key_tuple.  See
-	 * ExtractReplicaIdentity.
-	 */
-	modified_attrs = HeapDetermineColumnsInfo(relation, interesting_attrs,
-											  id_attrs, &oldtup,
-											  newtup, &id_has_external);
-
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
 	 * This allows for more concurrency when we are running simultaneously
@@ -3502,9 +3355,8 @@ heap_update(Relation relation, const ItemPointerData *otid, HeapTuple newtup,
 	 * is updates that don't manipulate key columns, not those that
 	 * serendipitously arrive at the same key values.
 	 */
-	if (!bms_overlap(modified_attrs, key_attrs))
+	if (*lockmode == LockTupleNoKeyExclusive)
 	{
-		*lockmode = LockTupleNoKeyExclusive;
 		mxact_status = MultiXactStatusNoKeyUpdate;
 		key_intact = true;
 
@@ -3521,22 +3373,14 @@ heap_update(Relation relation, const ItemPointerData *otid, HeapTuple newtup,
 	}
 	else
 	{
-		*lockmode = LockTupleExclusive;
 		mxact_status = MultiXactStatusUpdate;
 		key_intact = false;
 	}
 
-	/*
-	 * Note: beyond this point, use oldtup not otid to refer to old tuple.
-	 * otid may very well point at newtup->t_self, which we will overwrite
-	 * with the new tuple's location, so there's great risk of confusion if we
-	 * use otid anymore.
-	 */
-
 l2:
 	checked_lockers = false;
 	locker_remains = false;
-	result = HeapTupleSatisfiesUpdate(&oldtup, cid, buffer);
+	result = HeapTupleSatisfiesUpdate(oldtup, cid, buffer);
 
 	/* see below about the "no wait" case */
 	Assert(result != TM_BeingModified || wait);
@@ -3568,8 +3412,8 @@ l2:
 		 */
 
 		/* must copy state data before unlocking buffer */
-		xwait = HeapTupleHeaderGetRawXmax(oldtup.t_data);
-		infomask = oldtup.t_data->t_infomask;
+		xwait = HeapTupleHeaderGetRawXmax(oldtup->t_data);
+		infomask = oldtup->t_data->t_infomask;
 
 		/*
 		 * Now we have to do something about the existing locker.  If it's a
@@ -3609,13 +3453,12 @@ l2:
 				 * requesting a lock and already have one; avoids deadlock).
 				 */
 				if (!current_is_member)
-					heap_acquire_tuplock(relation, &(oldtup.t_self), *lockmode,
+					heap_acquire_tuplock(relation, &oldtup->t_self, *lockmode,
 										 LockWaitBlock, &have_tuple_lock);
 
 				/* wait for multixact */
 				MultiXactIdWait((MultiXactId) xwait, mxact_status, infomask,
-								relation, &oldtup.t_self, XLTW_Update,
-								&remain);
+								relation, &oldtup->t_self, XLTW_Update, &remain);
 				checked_lockers = true;
 				locker_remains = remain != 0;
 				LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
@@ -3625,9 +3468,9 @@ l2:
 				 * could update this tuple before we get to this point.  Check
 				 * for xmax change, and start over if so.
 				 */
-				if (xmax_infomask_changed(oldtup.t_data->t_infomask,
+				if (xmax_infomask_changed(oldtup->t_data->t_infomask,
 										  infomask) ||
-					!TransactionIdEquals(HeapTupleHeaderGetRawXmax(oldtup.t_data),
+					!TransactionIdEquals(HeapTupleHeaderGetRawXmax(oldtup->t_data),
 										 xwait))
 					goto l2;
 			}
@@ -3652,8 +3495,8 @@ l2:
 			 * before this one, which are important to keep in case this
 			 * subxact aborts.
 			 */
-			if (!HEAP_XMAX_IS_LOCKED_ONLY(oldtup.t_data->t_infomask))
-				update_xact = HeapTupleGetUpdateXid(oldtup.t_data);
+			if (!HEAP_XMAX_IS_LOCKED_ONLY(oldtup->t_data->t_infomask))
+				update_xact = HeapTupleGetUpdateXid(oldtup->t_data);
 			else
 				update_xact = InvalidTransactionId;
 
@@ -3694,9 +3537,9 @@ l2:
 			 * lock.
 			 */
 			LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
-			heap_acquire_tuplock(relation, &(oldtup.t_self), *lockmode,
+			heap_acquire_tuplock(relation, &oldtup->t_self, *lockmode,
 								 LockWaitBlock, &have_tuple_lock);
-			XactLockTableWait(xwait, relation, &oldtup.t_self,
+			XactLockTableWait(xwait, relation, &oldtup->t_self,
 							  XLTW_Update);
 			checked_lockers = true;
 			LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
@@ -3706,20 +3549,20 @@ l2:
 			 * other xact could update this tuple before we get to this point.
 			 * Check for xmax change, and start over if so.
 			 */
-			if (xmax_infomask_changed(oldtup.t_data->t_infomask, infomask) ||
+			if (xmax_infomask_changed(oldtup->t_data->t_infomask, infomask) ||
 				!TransactionIdEquals(xwait,
-									 HeapTupleHeaderGetRawXmax(oldtup.t_data)))
+									 HeapTupleHeaderGetRawXmax(oldtup->t_data)))
 				goto l2;
 
 			/* Otherwise check if it committed or aborted */
-			UpdateXmaxHintBits(oldtup.t_data, buffer, xwait);
-			if (oldtup.t_data->t_infomask & HEAP_XMAX_INVALID)
+			UpdateXmaxHintBits(oldtup->t_data, buffer, xwait);
+			if (oldtup->t_data->t_infomask & HEAP_XMAX_INVALID)
 				can_continue = true;
 		}
 
 		if (can_continue)
 			result = TM_Ok;
-		else if (!ItemPointerEquals(&oldtup.t_self, &oldtup.t_data->t_ctid))
+		else if (!ItemPointerEquals(&oldtup->t_self, &oldtup->t_data->t_ctid))
 			result = TM_Updated;
 		else
 			result = TM_Deleted;
@@ -3732,39 +3575,32 @@ l2:
 			   result == TM_Updated ||
 			   result == TM_Deleted ||
 			   result == TM_BeingModified);
-		Assert(!(oldtup.t_data->t_infomask & HEAP_XMAX_INVALID));
+		Assert(!(oldtup->t_data->t_infomask & HEAP_XMAX_INVALID));
 		Assert(result != TM_Updated ||
-			   !ItemPointerEquals(&oldtup.t_self, &oldtup.t_data->t_ctid));
+			   !ItemPointerEquals(&oldtup->t_self, &oldtup->t_data->t_ctid));
 	}
 
 	if (crosscheck != InvalidSnapshot && result == TM_Ok)
 	{
 		/* Perform additional check for transaction-snapshot mode RI updates */
-		if (!HeapTupleSatisfiesVisibility(&oldtup, crosscheck, buffer))
+		if (!HeapTupleSatisfiesVisibility(oldtup, crosscheck, buffer))
 			result = TM_Updated;
 	}
 
 	if (result != TM_Ok)
 	{
-		tmfd->ctid = oldtup.t_data->t_ctid;
-		tmfd->xmax = HeapTupleHeaderGetUpdateXid(oldtup.t_data);
+		tmfd->ctid = oldtup->t_data->t_ctid;
+		tmfd->xmax = HeapTupleHeaderGetUpdateXid(oldtup->t_data);
 		if (result == TM_SelfModified)
-			tmfd->cmax = HeapTupleHeaderGetCmax(oldtup.t_data);
+			tmfd->cmax = HeapTupleHeaderGetCmax(oldtup->t_data);
 		else
 			tmfd->cmax = InvalidCommandId;
 		UnlockReleaseBuffer(buffer);
 		if (have_tuple_lock)
-			UnlockTupleTuplock(relation, &(oldtup.t_self), *lockmode);
-		if (vmbuffer != InvalidBuffer)
-			ReleaseBuffer(vmbuffer);
-		*update_indexes = TU_None;
+			UnlockTupleTuplock(relation, &oldtup->t_self, *lockmode);
+		if (*vmbuffer != InvalidBuffer)
+			ReleaseBuffer(*vmbuffer);
 
-		bms_free(hot_attrs);
-		bms_free(sum_attrs);
-		bms_free(key_attrs);
-		bms_free(id_attrs);
-		bms_free(modified_attrs);
-		bms_free(interesting_attrs);
 		return result;
 	}
 
@@ -3777,10 +3613,10 @@ l2:
 	 * tuple has been locked or updated under us, but hopefully it won't
 	 * happen very often.
 	 */
-	if (vmbuffer == InvalidBuffer && PageIsAllVisible(page))
+	if (*vmbuffer == InvalidBuffer && PageIsAllVisible(page))
 	{
 		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
-		visibilitymap_pin(relation, block, &vmbuffer);
+		visibilitymap_pin(relation, block, vmbuffer);
 		LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
 		goto l2;
 	}
@@ -3791,9 +3627,9 @@ l2:
 	 * If the tuple we're updating is locked, we need to preserve the locking
 	 * info in the old tuple's Xmax.  Prepare a new Xmax value for this.
 	 */
-	compute_new_xmax_infomask(HeapTupleHeaderGetRawXmax(oldtup.t_data),
-							  oldtup.t_data->t_infomask,
-							  oldtup.t_data->t_infomask2,
+	compute_new_xmax_infomask(HeapTupleHeaderGetRawXmax(oldtup->t_data),
+							  oldtup->t_data->t_infomask,
+							  oldtup->t_data->t_infomask2,
 							  xid, *lockmode, true,
 							  &xmax_old_tuple, &infomask_old_tuple,
 							  &infomask2_old_tuple);
@@ -3805,12 +3641,12 @@ l2:
 	 * tuple.  (In rare cases that might also be InvalidTransactionId and yet
 	 * not have the HEAP_XMAX_INVALID bit set; that's fine.)
 	 */
-	if ((oldtup.t_data->t_infomask & HEAP_XMAX_INVALID) ||
-		HEAP_LOCKED_UPGRADED(oldtup.t_data->t_infomask) ||
+	if ((oldtup->t_data->t_infomask & HEAP_XMAX_INVALID) ||
+		HEAP_LOCKED_UPGRADED(oldtup->t_data->t_infomask) ||
 		(checked_lockers && !locker_remains))
 		xmax_new_tuple = InvalidTransactionId;
 	else
-		xmax_new_tuple = HeapTupleHeaderGetRawXmax(oldtup.t_data);
+		xmax_new_tuple = HeapTupleHeaderGetRawXmax(oldtup->t_data);
 
 	if (!TransactionIdIsValid(xmax_new_tuple))
 	{
@@ -3825,7 +3661,7 @@ l2:
 		 * Note that since we're doing an update, the only possibility is that
 		 * the lockers had FOR KEY SHARE lock.
 		 */
-		if (oldtup.t_data->t_infomask & HEAP_XMAX_IS_MULTI)
+		if (oldtup->t_data->t_infomask & HEAP_XMAX_IS_MULTI)
 		{
 			GetMultiXactIdHintBits(xmax_new_tuple, &infomask_new_tuple,
 								   &infomask2_new_tuple);
@@ -3853,7 +3689,7 @@ l2:
 	 * Replace cid with a combo CID if necessary.  Note that we already put
 	 * the plain cid into the new tuple.
 	 */
-	HeapTupleHeaderAdjustCmax(oldtup.t_data, &cid, &iscombo);
+	HeapTupleHeaderAdjustCmax(oldtup->t_data, &cid, &iscombo);
 
 	/*
 	 * If the toaster needs to be activated, OR if the new tuple will not fit
@@ -3870,12 +3706,12 @@ l2:
 		relation->rd_rel->relkind != RELKIND_MATVIEW)
 	{
 		/* toast table entries should never be recursively toasted */
-		Assert(!HeapTupleHasExternal(&oldtup));
+		Assert(!HeapTupleHasExternal(oldtup));
 		Assert(!HeapTupleHasExternal(newtup));
 		need_toast = false;
 	}
 	else
-		need_toast = (HeapTupleHasExternal(&oldtup) ||
+		need_toast = (HeapTupleHasExternal(oldtup) ||
 					  HeapTupleHasExternal(newtup) ||
 					  newtup->t_len > TOAST_TUPLE_THRESHOLD);
 
@@ -3908,9 +3744,9 @@ l2:
 		 * updating, because the potentially created multixact would otherwise
 		 * be wrong.
 		 */
-		compute_new_xmax_infomask(HeapTupleHeaderGetRawXmax(oldtup.t_data),
-								  oldtup.t_data->t_infomask,
-								  oldtup.t_data->t_infomask2,
+		compute_new_xmax_infomask(HeapTupleHeaderGetRawXmax(oldtup->t_data),
+								  oldtup->t_data->t_infomask,
+								  oldtup->t_data->t_infomask2,
 								  xid, *lockmode, false,
 								  &xmax_lock_old_tuple, &infomask_lock_old_tuple,
 								  &infomask2_lock_old_tuple);
@@ -3920,18 +3756,18 @@ l2:
 		START_CRIT_SECTION();
 
 		/* Clear obsolete visibility flags ... */
-		oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
-		oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
-		HeapTupleClearHotUpdated(&oldtup);
+		oldtup->t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		oldtup->t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
+		HeapTupleClearHotUpdated(oldtup);
 		/* ... and store info about transaction updating this tuple */
 		Assert(TransactionIdIsValid(xmax_lock_old_tuple));
-		HeapTupleHeaderSetXmax(oldtup.t_data, xmax_lock_old_tuple);
-		oldtup.t_data->t_infomask |= infomask_lock_old_tuple;
-		oldtup.t_data->t_infomask2 |= infomask2_lock_old_tuple;
-		HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
+		HeapTupleHeaderSetXmax(oldtup->t_data, xmax_lock_old_tuple);
+		oldtup->t_data->t_infomask |= infomask_lock_old_tuple;
+		oldtup->t_data->t_infomask2 |= infomask2_lock_old_tuple;
+		HeapTupleHeaderSetCmax(oldtup->t_data, cid, iscombo);
 
 		/* temporarily make it look not-updated, but locked */
-		oldtup.t_data->t_ctid = oldtup.t_self;
+		oldtup->t_data->t_ctid = oldtup->t_self;
 
 		/*
 		 * Clear all-frozen bit on visibility map if needed. We could
@@ -3940,7 +3776,7 @@ l2:
 		 * worthwhile.
 		 */
 		if (PageIsAllVisible(page) &&
-			visibilitymap_clear(relation, block, vmbuffer,
+			visibilitymap_clear(relation, block, *vmbuffer,
 								VISIBILITYMAP_ALL_FROZEN))
 			cleared_all_frozen = true;
 
@@ -3954,10 +3790,10 @@ l2:
 			XLogBeginInsert();
 			XLogRegisterBuffer(0, buffer, REGBUF_STANDARD);
 
-			xlrec.offnum = ItemPointerGetOffsetNumber(&oldtup.t_self);
+			xlrec.offnum = ItemPointerGetOffsetNumber(&oldtup->t_self);
 			xlrec.xmax = xmax_lock_old_tuple;
-			xlrec.infobits_set = compute_infobits(oldtup.t_data->t_infomask,
-												  oldtup.t_data->t_infomask2);
+			xlrec.infobits_set = compute_infobits(oldtup->t_data->t_infomask,
+												  oldtup->t_data->t_infomask2);
 			xlrec.flags =
 				cleared_all_frozen ? XLH_LOCK_ALL_FROZEN_CLEARED : 0;
 			XLogRegisterData(&xlrec, SizeOfHeapLock);
@@ -3979,7 +3815,7 @@ l2:
 		if (need_toast)
 		{
 			/* Note we always use WAL and FSM during updates */
-			heaptup = heap_toast_insert_or_update(relation, newtup, &oldtup, 0);
+			heaptup = heap_toast_insert_or_update(relation, newtup, oldtup, 0);
 			newtupsize = MAXALIGN(heaptup->t_len);
 		}
 		else
@@ -4015,20 +3851,20 @@ l2:
 				/* It doesn't fit, must use RelationGetBufferForTuple. */
 				newbuf = RelationGetBufferForTuple(relation, heaptup->t_len,
 												   buffer, 0, NULL,
-												   &vmbuffer_new, &vmbuffer,
+												   &vmbuffer_new, vmbuffer,
 												   0);
 				/* We're all done. */
 				break;
 			}
 			/* Acquire VM page pin if needed and we don't have it. */
-			if (vmbuffer == InvalidBuffer && PageIsAllVisible(page))
-				visibilitymap_pin(relation, block, &vmbuffer);
+			if (*vmbuffer == InvalidBuffer && PageIsAllVisible(page))
+				visibilitymap_pin(relation, block, vmbuffer);
 			/* Re-acquire the lock on the old tuple's page. */
 			LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
 			/* Re-check using the up-to-date free space */
 			pagefree = PageGetHeapFreeSpace(page);
 			if (newtupsize > pagefree ||
-				(vmbuffer == InvalidBuffer && PageIsAllVisible(page)))
+				(*vmbuffer == InvalidBuffer && PageIsAllVisible(page)))
 			{
 				/*
 				 * Rats, it doesn't fit anymore, or somebody just now set the
@@ -4066,42 +3902,21 @@ l2:
 	 * will include checking the relation level, there is no benefit to a
 	 * separate check for the new tuple.
 	 */
-	CheckForSerializableConflictIn(relation, &oldtup.t_self,
+	CheckForSerializableConflictIn(relation, &oldtup->t_self,
 								   BufferGetBlockNumber(buffer));
 
 	/*
 	 * At this point newbuf and buffer are both pinned and locked, and newbuf
-	 * has enough space for the new tuple.  If they are the same buffer, only
-	 * one pin is held.
+	 * has enough space for the new tuple so we can use the HOT update path if
+	 * the caller determined that it is allowable.
+	 *
+	 * NOTE: If newbuf == buffer then only one pin is held.
 	 */
+	use_hot_update = (newbuf == buffer) && hot_allowed;
 
-	if (newbuf == buffer)
-	{
-		/*
-		 * Since the new tuple is going into the same page, we might be able
-		 * to do a HOT update.  Check if any of the index columns have been
-		 * changed.
-		 */
-		if (!bms_overlap(modified_attrs, hot_attrs))
-		{
-			use_hot_update = true;
-
-			/*
-			 * If none of the columns that are used in hot-blocking indexes
-			 * were updated, we can apply HOT, but we do still need to check
-			 * if we need to update the summarizing indexes, and update those
-			 * indexes if the columns were updated, or we may fail to detect
-			 * e.g. value bound changes in BRIN minmax indexes.
-			 */
-			if (bms_overlap(modified_attrs, sum_attrs))
-				summarized_update = true;
-		}
-	}
-	else
-	{
-		/* Set a hint that the old page could use prune/defrag */
+	/* Set a hint that the old page could use prune/defrag */
+	if (!use_hot_update)
 		PageSetFull(page);
-	}
 
 	/*
 	 * Compute replica identity tuple before entering the critical section so
@@ -4110,9 +3925,7 @@ l2:
 	 * logged.  Pass old key required as true only if the replica identity key
 	 * columns are modified or it has external data.
 	 */
-	old_key_tuple = ExtractReplicaIdentity(relation, &oldtup,
-										   bms_overlap(modified_attrs, id_attrs) ||
-										   id_has_external,
+	old_key_tuple = ExtractReplicaIdentity(relation, oldtup, rep_id_key_required,
 										   &old_key_copied);
 
 	/* NO EREPORT(ERROR) from here till changes are logged */
@@ -4135,7 +3948,7 @@ l2:
 	if (use_hot_update)
 	{
 		/* Mark the old tuple as HOT-updated */
-		HeapTupleSetHotUpdated(&oldtup);
+		HeapTupleSetHotUpdated(oldtup);
 		/* And mark the new tuple as heap-only */
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
@@ -4144,7 +3957,7 @@ l2:
 	else
 	{
 		/* Make sure tuples are correctly marked as not-HOT */
-		HeapTupleClearHotUpdated(&oldtup);
+		HeapTupleClearHotUpdated(oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
 	}
@@ -4153,17 +3966,17 @@ l2:
 
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
-	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
-	oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
+	oldtup->t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	oldtup->t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 	/* ... and store info about transaction updating this tuple */
 	Assert(TransactionIdIsValid(xmax_old_tuple));
-	HeapTupleHeaderSetXmax(oldtup.t_data, xmax_old_tuple);
-	oldtup.t_data->t_infomask |= infomask_old_tuple;
-	oldtup.t_data->t_infomask2 |= infomask2_old_tuple;
-	HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
+	HeapTupleHeaderSetXmax(oldtup->t_data, xmax_old_tuple);
+	oldtup->t_data->t_infomask |= infomask_old_tuple;
+	oldtup->t_data->t_infomask2 |= infomask2_old_tuple;
+	HeapTupleHeaderSetCmax(oldtup->t_data, cid, iscombo);
 
 	/* record address of new tuple in t_ctid of old one */
-	oldtup.t_data->t_ctid = heaptup->t_self;
+	oldtup->t_data->t_ctid = heaptup->t_self;
 
 	/* clear PD_ALL_VISIBLE flags, reset all visibilitymap bits */
 	if (PageIsAllVisible(BufferGetPage(buffer)))
@@ -4171,7 +3984,7 @@ l2:
 		all_visible_cleared = true;
 		PageClearAllVisible(BufferGetPage(buffer));
 		visibilitymap_clear(relation, BufferGetBlockNumber(buffer),
-							vmbuffer, VISIBILITYMAP_VALID_BITS);
+							*vmbuffer, VISIBILITYMAP_VALID_BITS);
 	}
 	if (newbuf != buffer && PageIsAllVisible(BufferGetPage(newbuf)))
 	{
@@ -4196,12 +4009,12 @@ l2:
 		 */
 		if (RelationIsAccessibleInLogicalDecoding(relation))
 		{
-			log_heap_new_cid(relation, &oldtup);
+			log_heap_new_cid(relation, oldtup);
 			log_heap_new_cid(relation, heaptup);
 		}
 
 		recptr = log_heap_update(relation, buffer,
-								 newbuf, &oldtup, heaptup,
+								 newbuf, oldtup, heaptup,
 								 old_key_tuple,
 								 all_visible_cleared,
 								 all_visible_cleared_new);
@@ -4226,7 +4039,7 @@ l2:
 	 * both tuple versions in one call to inval.c so we can avoid redundant
 	 * sinval messages.)
 	 */
-	CacheInvalidateHeapTuple(relation, &oldtup, heaptup);
+	CacheInvalidateHeapTuple(relation, oldtup, heaptup);
 
 	/* Now we can release the buffer(s) */
 	if (newbuf != buffer)
@@ -4234,14 +4047,14 @@ l2:
 	ReleaseBuffer(buffer);
 	if (BufferIsValid(vmbuffer_new))
 		ReleaseBuffer(vmbuffer_new);
-	if (BufferIsValid(vmbuffer))
-		ReleaseBuffer(vmbuffer);
+	if (BufferIsValid(*vmbuffer))
+		ReleaseBuffer(*vmbuffer);
 
 	/*
 	 * Release the lmgr tuple lock, if we had it.
 	 */
 	if (have_tuple_lock)
-		UnlockTupleTuplock(relation, &(oldtup.t_self), *lockmode);
+		UnlockTupleTuplock(relation, &oldtup->t_self, *lockmode);
 
 	pgstat_count_heap_update(relation, use_hot_update, newbuf != buffer);
 
@@ -4255,32 +4068,9 @@ l2:
 		heap_freetuple(heaptup);
 	}
 
-	/*
-	 * If it is a HOT update, the update may still need to update summarized
-	 * indexes, lest we fail to update those summaries and get incorrect
-	 * results (for example, minmax bounds of the block may change with this
-	 * update).
-	 */
-	if (use_hot_update)
-	{
-		if (summarized_update)
-			*update_indexes = TU_Summarizing;
-		else
-			*update_indexes = TU_None;
-	}
-	else
-		*update_indexes = TU_All;
-
 	if (old_key_tuple != NULL && old_key_copied)
 		heap_freetuple(old_key_tuple);
 
-	bms_free(hot_attrs);
-	bms_free(sum_attrs);
-	bms_free(key_attrs);
-	bms_free(id_attrs);
-	bms_free(modified_attrs);
-	bms_free(interesting_attrs);
-
 	return TM_Ok;
 }
 
@@ -4289,7 +4079,7 @@ l2:
  * Confirm adequate lock held during heap_update(), per rules from
  * README.tuplock section "Locking to write inplace-updated tables".
  */
-static void
+void
 check_lock_if_inplace_updateable_rel(Relation relation,
 									 const ItemPointerData *otid,
 									 HeapTuple newtup)
@@ -4461,7 +4251,7 @@ heap_attr_equals(TupleDesc tupdesc, int attrnum, Datum value1, Datum value2,
  * listed as interesting) of the old tuple is a member of external_cols and is
  * stored externally.
  */
-static Bitmapset *
+Bitmapset *
 HeapDetermineColumnsInfo(Relation relation,
 						 Bitmapset *interesting_cols,
 						 Bitmapset *external_cols,
@@ -4508,10 +4298,11 @@ HeapDetermineColumnsInfo(Relation relation,
 		}
 
 		/*
-		 * Extract the corresponding values.  XXX this is pretty inefficient
-		 * if there are many indexed columns.  Should we do a single
-		 * heap_deform_tuple call on each tuple, instead?	But that doesn't
-		 * work for system columns ...
+		 * Extract the corresponding values.
+		 *
+		 * XXX this is pretty inefficient if there are many indexed columns.
+		 * Should we do a single heap_deform_tuple call on each tuple,
+		 * instead? But that doesn't work for system columns ...
 		 */
 		value1 = heap_getattr(oldtup, attrnum, tupdesc, &isnull1);
 		value2 = heap_getattr(newtup, attrnum, tupdesc, &isnull2);
@@ -4544,25 +4335,183 @@ HeapDetermineColumnsInfo(Relation relation,
 }
 
 /*
- *	simple_heap_update - replace a tuple
- *
- * This routine may be used to update a tuple when concurrent updates of
- * the target tuple are not expected (for example, because we have a lock
- * on the relation associated with the tuple).  Any failure is reported
- * via ereport().
+ *	simple_heap_update - replace a tuple
+ *
+ * This routine may be used to update a tuple when concurrent updates of the
+ * target tuple are not expected (for example, because we have a lock on the
+ * relation associated with the tuple).  Any failure is reported via ereport().
+ * Returns the set of modified indexed attributes.
  */
-void
-simple_heap_update(Relation relation, const ItemPointerData *otid, HeapTuple tup,
+Bitmapset *
+simple_heap_update(Relation relation, const ItemPointerData *otid, HeapTuple tuple,
 				   TU_UpdateIndexes *update_indexes)
 {
 	TM_Result	result;
 	TM_FailureData tmfd;
 	LockTupleMode lockmode;
+	Buffer		buffer;
+	Buffer		vmbuffer = InvalidBuffer;
+	Page		page;
+	BlockNumber block;
+	Bitmapset  *hot_attrs,
+			   *sum_attrs,
+			   *key_attrs,
+			   *rid_attrs,
+			   *mix_attrs,
+			   *idx_attrs;
+	ItemId		lp;
+	HeapTupleData oldtup;
+	bool		hot_allowed;
+	bool		summarized_only;
+	bool		rep_id_key_required = false;
+
+	Assert(ItemPointerIsValid(otid));
+
+	/* Cheap, simplistic check that the tuple matches the rel's rowtype. */
+	Assert(HeapTupleHeaderGetNatts(tuple->t_data) <=
+		   RelationGetNumberOfAttributes(relation));
+
+	/*
+	 * Forbid this during a parallel operation, lest it allocate a combo CID.
+	 * Other workers might need that combo CID for visibility checks, and we
+	 * have no provision for broadcasting it to them.
+	 */
+	if (IsInParallelMode())
+		ereport(ERROR,
+				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
+				 errmsg("cannot update tuples during a parallel operation")));
+
+#ifdef USE_ASSERT_CHECKING
+	check_lock_if_inplace_updateable_rel(relation, otid, tuple);
+#endif
+
+	/*
+	 * We must fetch these bitmaps of attributes from relcache to be checked
+	 * for various operations below before obtaining a buffer lock because if
+	 * we are doing an update on one of the relevant system catalogs we could
+	 * deadlock if we try to fetch them later on. Relcache will return copies
+	 * of each bitmap, so we need not worry about relcache flush happening
+	 * midway through this operation.
+	 */
+	idx_attrs = RelationGetIndexAttrBitmap(relation,
+										   INDEX_ATTR_BITMAP_INDEXED);
+	sum_attrs = RelationGetIndexAttrBitmap(relation,
+										   INDEX_ATTR_BITMAP_SUMMARIZED);
+	rid_attrs = RelationGetIndexAttrBitmap(relation,
+										   INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	key_attrs = RelationGetIndexAttrBitmap(relation,
+										   INDEX_ATTR_BITMAP_KEY);
+
+	block = ItemPointerGetBlockNumber(otid);
+	INJECTION_POINT("heap_update-before-pin", NULL);
+	buffer = ReadBuffer(relation, block);
+	page = BufferGetPage(buffer);
+
+	/*
+	 * Before locking the buffer, pin the visibility map page if it appears to
+	 * be necessary.  Since we haven't got the lock yet, someone else might be
+	 * in the middle of changing this, so we'll need to recheck after we have
+	 * the lock.
+	 */
+	if (PageIsAllVisible(page))
+		visibilitymap_pin(relation, block, &vmbuffer);
+
+	LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
+
+	lp = PageGetItemId(page, ItemPointerGetOffsetNumber(otid));
+
+	/*
+	 * Usually, a buffer pin and/or snapshot blocks pruning of otid, ensuring
+	 * we see LP_NORMAL here.  When the otid origin is a syscache, we may have
+	 * neither a pin nor a snapshot.  Hence, we may see other LP_ states, each
+	 * of which indicates concurrent pruning.
+	 *
+	 * Failing with TM_Updated would be most accurate.  However, unlike other
+	 * TM_Updated scenarios, we don't know the successor ctid in LP_UNUSED and
+	 * LP_DEAD cases.  While the distinction between TM_Updated and TM_Deleted
+	 * does matter to SQL statements UPDATE and MERGE, those SQL statements
+	 * hold a snapshot that ensures LP_NORMAL.  Hence, the choice between
+	 * TM_Updated and TM_Deleted affects only the wording of error messages.
+	 * Settle on TM_Deleted, for two reasons.  First, it avoids complicating
+	 * the specification of when tmfd->ctid is valid.  Second, it creates
+	 * error log evidence that we took this branch.
+	 *
+	 * Since it's possible to see LP_UNUSED at otid, it's also possible to see
+	 * LP_NORMAL for a tuple that replaced LP_UNUSED.  If it's a tuple for an
+	 * unrelated row, we'll fail with "duplicate key value violates unique".
+	 * XXX if otid is the live, newer version of the newtup row, we'll discard
+	 * changes originating in versions of this catalog row after the version
+	 * the caller got from syscache.  See syscache-update-pruned.spec.
+	 */
+	if (!ItemIdIsNormal(lp))
+	{
+		Assert(RelationSupportsSysCache(RelationGetRelid(relation)));
+
+		UnlockReleaseBuffer(buffer);
+		if (vmbuffer != InvalidBuffer)
+			ReleaseBuffer(vmbuffer);
+		*update_indexes = TU_None;
+
+		bms_free(sum_attrs);
+		bms_free(rid_attrs);
+		bms_free(key_attrs);
+		bms_free(idx_attrs);
+		/* mix_attrs not yet initialized */
+
+		elog(ERROR, "tuple concurrently deleted");
+
+		return NULL;			/* keep compiler quiet */
+	}
+
+	/*
+	 * Partially construct the oldtup for HeapDetermineColumnsInfo to work and
+	 * then pass that on to heap_update.
+	 */
+	oldtup.t_tableOid = RelationGetRelid(relation);
+	oldtup.t_data = (HeapTupleHeader) PageGetItem(page, lp);
+	oldtup.t_len = ItemIdGetLength(lp);
+	oldtup.t_self = *otid;
+
+	/* Use a bitmap of all indexed attributes here */
+	mix_attrs = HeapDetermineColumnsInfo(relation, idx_attrs, rid_attrs,
+										 &oldtup, tuple, &rep_id_key_required);
+
+	/*
+	 * We must WAL-log the replica identity attributes if they overlap with
+	 * the modified indexed attributes or if, as HeapDetermineColumnsInfo
+	 * just checked, any unmodified indexed identity attribute is stored
+	 * externally.
+	 */
+	rep_id_key_required = rep_id_key_required || bms_overlap(mix_attrs, rid_attrs);
+	bms_free(rid_attrs);
+
+	/*
+	 * HOT updates are possible when either: a) there are no modified indexed
+	 * attributes, or b) the modified attributes are all on summarizing
+	 * indexes.
+	 */
+	hot_attrs = bms_del_members(idx_attrs, sum_attrs);
+	summarized_only = !bms_is_empty(mix_attrs) && bms_is_subset(mix_attrs, sum_attrs);
+	hot_allowed = !bms_overlap(mix_attrs, hot_attrs) || summarized_only;
+	bms_free(hot_attrs);		/* no need to free idx_attrs */
+	bms_free(sum_attrs);
+
+	/*
+	 * If we're not updating any "key" attributes, we can grab a weaker lock
+	 * type. This allows for more concurrency when we are running
+	 * simultaneously with foreign key checks.
+	 */
+	if (bms_overlap(mix_attrs, key_attrs))
+		lockmode = LockTupleExclusive;
+	else
+		lockmode = LockTupleNoKeyExclusive;
+	bms_free(key_attrs);
+
+	result = heap_update(relation, &oldtup, tuple, GetCurrentCommandId(true),
+						 InvalidSnapshot, true /* wait for commit */ ,
+						 &tmfd, &lockmode, buffer, page, block, lp, hot_allowed,
+						 &vmbuffer, rep_id_key_required);
+
+	*update_indexes = TU_None;
 
-	result = heap_update(relation, otid, tup,
-						 GetCurrentCommandId(true), InvalidSnapshot,
-						 true /* wait for commit */ ,
-						 &tmfd, &lockmode, update_indexes);
 	switch (result)
 	{
 		case TM_SelfModified:
@@ -4572,6 +4521,10 @@ simple_heap_update(Relation relation, const ItemPointerData *otid, HeapTuple tup
 
 		case TM_Ok:
 			/* done successfully */
+			if (!HeapTupleIsHeapOnly(tuple))
+				*update_indexes = TU_All;
+			else if (summarized_only)
+				*update_indexes = TU_Summarizing;
 			break;
 
 		case TM_Updated:
@@ -4586,8 +4539,9 @@ simple_heap_update(Relation relation, const ItemPointerData *otid, HeapTuple tup
 			elog(ERROR, "unrecognized heap_update status: %u", result);
 			break;
 	}
-}
 
+	return mix_attrs;
+}
 
 /*
  * Return the MultiXactStatus corresponding to the given tuple lock mode.
diff --git a/src/backend/access/heap/heapam_handler.c b/src/backend/access/heap/heapam_handler.c
index cbef73e5d4b..a23cb93cac9 100644
--- a/src/backend/access/heap/heapam_handler.c
+++ b/src/backend/access/heap/heapam_handler.c
@@ -44,7 +44,9 @@
 #include "storage/procarray.h"
 #include "storage/smgr.h"
 #include "utils/builtins.h"
+#include "utils/injection_point.h"
 #include "utils/rel.h"
+#include "utils/relcache.h"
 
 static void reform_and_rewrite_tuple(HeapTuple tuple,
 									 Relation OldHeap, Relation NewHeap,
@@ -312,23 +314,177 @@ heapam_tuple_delete(Relation relation, ItemPointer tid, CommandId cid,
 	return heap_delete(relation, tid, cid, crosscheck, wait, tmfd, changingPart);
 }
 
-
 static TM_Result
 heapam_tuple_update(Relation relation, ItemPointer otid, TupleTableSlot *slot,
-					CommandId cid, Snapshot snapshot, Snapshot crosscheck,
-					bool wait, TM_FailureData *tmfd,
-					LockTupleMode *lockmode, TU_UpdateIndexes *update_indexes)
+					CommandId cid, Snapshot snapshot,
+					Snapshot crosscheck, bool wait,
+					TM_FailureData *tmfd,
+					LockTupleMode *lockmode,
+					const Bitmapset *mix_attrs,
+					TU_UpdateIndexes *update_indexes)
 {
-	bool		shouldFree = true;
-	HeapTuple	tuple = ExecFetchSlotHeapTuple(slot, true, &shouldFree);
+	bool		rep_id_key_required;
+	bool		hot_allowed;
+	bool		summarized_only;
+	bool		shouldFree = false;
+	HeapTuple	tuple;
+	HeapTupleData oldtup;
+	Buffer		buffer;
+	Buffer		vmbuffer = InvalidBuffer;
+	Page		page;
+	BlockNumber block;
+	ItemId		lp;
+	Bitmapset  *hot_attrs,
+			   *sum_attrs,
+			   *key_attrs,
+			   *rid_attrs,
+			   *idx_attrs;
 	TM_Result	result;
 
+	Assert(ItemPointerIsValid(otid));
+
+	tuple = ExecFetchSlotHeapTuple(slot, true, &shouldFree);
+
+	/* Cheap, simplistic check that the tuple matches the rel's rowtype. */
+	Assert(HeapTupleHeaderGetNatts(tuple->t_data) <=
+		   RelationGetNumberOfAttributes(relation));
+
+	/*
+	 * Forbid this during a parallel operation, lest it allocate a combo CID.
+	 * Other workers might need that combo CID for visibility checks, and we
+	 * have no provision for broadcasting it to them.
+	 */
+	if (IsInParallelMode())
+		ereport(ERROR,
+				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
+				 errmsg("cannot update tuples during a parallel operation")));
+
+#ifdef USE_ASSERT_CHECKING
+	check_lock_if_inplace_updateable_rel(relation, otid, tuple);
+#endif
+
+	block = ItemPointerGetBlockNumber(otid);
+	INJECTION_POINT("heap_update-before-pin", NULL);
+	buffer = ReadBuffer(relation, block);
+	page = BufferGetPage(buffer);
+
+	/*
+	 * Before locking the buffer, pin the visibility map page if it appears to
+	 * be necessary.  Since we haven't got the lock yet, someone else might be
+	 * in the middle of changing this, so we'll need to recheck after we have
+	 * the lock.
+	 */
+	if (PageIsAllVisible(page))
+		visibilitymap_pin(relation, block, &vmbuffer);
+
+	LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
+
+	lp = PageGetItemId(page, ItemPointerGetOffsetNumber(otid));
+
+	Assert(ItemIdIsNormal(lp));
+
+	oldtup.t_tableOid = RelationGetRelid(relation);
+	oldtup.t_data = (HeapTupleHeader) PageGetItem(page, lp);
+	oldtup.t_len = ItemIdGetLength(lp);
+	oldtup.t_self = *otid;
+
+	/*
+	 * We must include the replica identity key when either the identity key
+	 * attributes overlap with the modified indexed attributes or an identity
+	 * attribute is stored externally.  In the external case the flattened
+	 * value is not WAL-logged as part of the new tuple, so we must extract
+	 * and include it in the old_key_tuple (see ExtractReplicaIdentity).
+	 */
+	idx_attrs = RelationGetIndexAttrBitmap(relation,
+										   INDEX_ATTR_BITMAP_INDEXED);
+	rid_attrs = RelationGetIndexAttrBitmap(relation,
+										   INDEX_ATTR_BITMAP_IDENTITY_KEY);
+
+	rep_id_key_required = bms_overlap(mix_attrs, rid_attrs);
+	if (!rep_id_key_required)
+	{
+		Bitmapset  *attrs;
+		TupleDesc	tupdesc = RelationGetDescr(relation);
+		int			attidx = -1;
+
+		/*
+		 * We don't own idx_attrs, so build a fresh set instead: remove the
+		 * modified attributes to shrink the set we must test in the loop
+		 * below and to avoid extra branches inside it.
+		 */
+		attrs = bms_difference(idx_attrs, mix_attrs);
+		attrs = bms_int_members(attrs, rid_attrs);
+
+		while ((attidx = bms_next_member(attrs, attidx)) >= 0)
+		{
+			/*
+			 * attidx is zero-based, attrnum is the normal attribute number
+			 */
+			AttrNumber	attrnum = attidx + FirstLowInvalidHeapAttributeNumber;
+			Datum		value;
+			bool		isnull;
+
+			/*
+			 * The relcache never includes system attributes in these
+			 * bitmaps.
+			 */
+			Assert(attrnum > 0);
+
+			value = heap_getattr(&oldtup, attrnum, tupdesc, &isnull);
+
+			/* No need to check attributes that can't be stored externally */
+			if (isnull ||
+				TupleDescCompactAttr(tupdesc, attrnum - 1)->attlen != -1)
+				continue;
+
+			/* Check if the old tuple's attribute is stored externally */
+			if (VARATT_IS_EXTERNAL((struct varlena *) DatumGetPointer(value)))
+			{
+				rep_id_key_required = true;
+				break;
+			}
+		}
+
+		bms_free(attrs);
+	}
+	bms_free(rid_attrs);
+
 	/* Update the tuple with table oid */
 	slot->tts_tableOid = RelationGetRelid(relation);
 	tuple->t_tableOid = slot->tts_tableOid;
 
-	result = heap_update(relation, otid, tuple, cid, crosscheck, wait,
-						 tmfd, lockmode, update_indexes);
+	/*
+	 * HOT updates are possible when either: a) there are no modified indexed
+	 * attributes, or b) the modified attributes are all on summarizing
+	 * indexes.
+	 */
+	sum_attrs = RelationGetIndexAttrBitmap(relation,
+										   INDEX_ATTR_BITMAP_SUMMARIZED);
+	hot_attrs = bms_del_members(idx_attrs, sum_attrs);
+	summarized_only = !bms_is_empty(mix_attrs) && bms_is_subset(mix_attrs, sum_attrs);
+	hot_allowed = !bms_overlap(mix_attrs, hot_attrs) || summarized_only;
+	bms_free(hot_attrs);		/* no need to free idx_attrs */
+	bms_free(sum_attrs);
+
+	/*
+	 * If we're not updating any "key" attributes, we can grab a weaker lock
+	 * type. This allows for more concurrency when we are running
+	 * simultaneously with foreign key checks.
+	 */
+	key_attrs = RelationGetIndexAttrBitmap(relation,
+										   INDEX_ATTR_BITMAP_KEY);
+	if (bms_overlap(mix_attrs, key_attrs))
+		*lockmode = LockTupleExclusive;
+	else
+		*lockmode = LockTupleNoKeyExclusive;
+	bms_free(key_attrs);
+
+	result = heap_update(relation, &oldtup, tuple, cid, crosscheck, wait, tmfd,
+						 lockmode, buffer, page, block, lp, hot_allowed,
+						 &vmbuffer, rep_id_key_required);
+
 	ItemPointerCopy(&tuple->t_self, &slot->tts_tid);
 
 	/*
@@ -342,15 +498,13 @@ heapam_tuple_update(Relation relation, ItemPointer otid, TupleTableSlot *slot,
 	 * update only summarized indexes, or none at all.
 	 */
 	if (result != TM_Ok)
-	{
-		Assert(*update_indexes == TU_None);
 		*update_indexes = TU_None;
-	}
 	else if (!HeapTupleIsHeapOnly(tuple))
-		Assert(*update_indexes == TU_All);
+		*update_indexes = TU_All;
+	else if (summarized_only)
+		*update_indexes = TU_Summarizing;
 	else
-		Assert((*update_indexes == TU_Summarizing) ||
-			   (*update_indexes == TU_None));
+		*update_indexes = TU_None;
 
 	if (shouldFree)
 		pfree(tuple);
diff --git a/src/backend/access/table/tableam.c b/src/backend/access/table/tableam.c
index dfda1af412e..42acd5b17a9 100644
--- a/src/backend/access/table/tableam.c
+++ b/src/backend/access/table/tableam.c
@@ -359,6 +359,7 @@ void
 simple_table_tuple_update(Relation rel, ItemPointer otid,
 						  TupleTableSlot *slot,
 						  Snapshot snapshot,
+						  const Bitmapset *mix_attrs,
 						  TU_UpdateIndexes *update_indexes)
 {
 	TM_Result	result;
@@ -369,7 +370,9 @@ simple_table_tuple_update(Relation rel, ItemPointer otid,
 								GetCurrentCommandId(true),
 								snapshot, InvalidSnapshot,
 								true /* wait for commit */ ,
-								&tmfd, &lockmode, update_indexes);
+								&tmfd, &lockmode,
+								mix_attrs,
+								update_indexes);
 
 	switch (result)
 	{
diff --git a/src/backend/catalog/indexing.c b/src/backend/catalog/indexing.c
index 0a1a68e0644..690a2511023 100644
--- a/src/backend/catalog/indexing.c
+++ b/src/backend/catalog/indexing.c
@@ -102,7 +102,7 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple,
 	 * Get information from the state structure.  Fall out if nothing to do.
 	 */
 	numIndexes = indstate->ri_NumIndices;
-	if (numIndexes == 0)
+	if (numIndexes == 0 || updateIndexes == TU_None)
 		return;
 	relationDescs = indstate->ri_IndexRelationDescs;
 	indexInfoArray = indstate->ri_IndexRelationInfo;
@@ -314,15 +314,18 @@ CatalogTupleUpdate(Relation heapRel, const ItemPointerData *otid, HeapTuple tup)
 {
 	CatalogIndexState indstate;
 	TU_UpdateIndexes updateIndexes = TU_All;
+	Bitmapset  *updatedAttrs;
 
 	CatalogTupleCheckConstraints(heapRel, tup);
 
 	indstate = CatalogOpenIndexes(heapRel);
 
-	simple_heap_update(heapRel, otid, tup, &updateIndexes);
-
+	updatedAttrs = simple_heap_update(heapRel, otid, tup, &updateIndexes);
+	((ResultRelInfo *) indstate)->ri_ChangedIndexedCols = updatedAttrs;
 	CatalogIndexInsert(indstate, tup, updateIndexes);
+
 	CatalogCloseIndexes(indstate);
+	bms_free(updatedAttrs);
 }
 
 /*
@@ -338,12 +341,15 @@ CatalogTupleUpdateWithInfo(Relation heapRel, const ItemPointerData *otid, HeapTu
 						   CatalogIndexState indstate)
 {
 	TU_UpdateIndexes updateIndexes = TU_All;
+	Bitmapset  *updatedAttrs;
 
 	CatalogTupleCheckConstraints(heapRel, tup);
 
-	simple_heap_update(heapRel, otid, tup, &updateIndexes);
-
+	updatedAttrs = simple_heap_update(heapRel, otid, tup, &updateIndexes);
+	((ResultRelInfo *) indstate)->ri_ChangedIndexedCols = updatedAttrs;
 	CatalogIndexInsert(indstate, tup, updateIndexes);
+	((ResultRelInfo *) indstate)->ri_ChangedIndexedCols = NULL;
+	bms_free(updatedAttrs);
 }
 
 /*
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index bfd3ebc601e..cd7ae4aeec2 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -1286,6 +1286,7 @@ InitResultRelInfo(ResultRelInfo *resultRelInfo,
 	/* The following fields are set later if needed */
 	resultRelInfo->ri_RowIdAttNo = 0;
 	resultRelInfo->ri_extraUpdatedCols = NULL;
+	resultRelInfo->ri_ChangedIndexedCols = NULL;
 	resultRelInfo->ri_projectNew = NULL;
 	resultRelInfo->ri_newTupleSlot = NULL;
 	resultRelInfo->ri_oldTupleSlot = NULL;
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 743b1ee2b28..7040a69c275 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -33,6 +33,7 @@
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
+#include "utils/relcache.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/typcache.h"
@@ -937,7 +938,13 @@ ExecSimpleRelationUpdate(ResultRelInfo *resultRelInfo,
 		if (rel->rd_rel->relispartition)
 			ExecPartitionCheck(resultRelInfo, slot, estate, true);
 
+		/*
+		 * There is no need to call ExecCheckIndexedAttrsForChanges here:
+		 * slot_modify_data has already identified the changed columns.
+		 */
 		simple_table_tuple_update(rel, tid, slot, estate->es_snapshot,
+								  resultRelInfo->ri_ChangedIndexedCols,
 								  &update_indexes);
 
 		conflictindexes = resultRelInfo->ri_onConflictArbiterIndexes;
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index 6802fc13e95..d099ec79375 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -17,6 +17,7 @@
  *		ExecModifyTable		- retrieve the next tuple from the node
  *		ExecEndModifyTable	- shut down the ModifyTable node
  *		ExecReScanModifyTable - rescan the ModifyTable node
+ *		ExecCheckIndexedAttrsForChanges - find set of updated indexed columns
  *
  *	 NOTES
  *		The ModifyTable node receives input from its outerPlan, which is
@@ -54,6 +55,7 @@
 
 #include "access/htup_details.h"
 #include "access/tableam.h"
+#include "access/tupdesc.h"
 #include "access/xact.h"
 #include "commands/trigger.h"
 #include "executor/execPartition.h"
@@ -188,6 +190,131 @@ static TupleTableSlot *ExecMergeNotMatched(ModifyTableContext *context,
 										   ResultRelInfo *resultRelInfo,
 										   bool canSetTag);
 
+/*
+ * ExecCheckIndexedAttrsForChanges
+ *
+ * Determine which indexes need updating by computing the set of modified
+ * indexed attributes.
+ *
+ * The goal is for the executor to know, before calling into the table AM to
+ * process the update and before calling into the index AM to insert new
+ * index tuples, which attributes in the new TupleTableSlot, if any, truly
+ * necessitate new index tuples.
+ *
+ * Returns a Bitmapset of the modified attributes that are referenced by at
+ * least one index and hence require new index tuples.
+ */
+Bitmapset *
+ExecCheckIndexedAttrsForChanges(ResultRelInfo *resultRelInfo,
+								TupleTableSlot *old_tts,
+								TupleTableSlot *new_tts)
+{
+	int			attidx = -1;
+	Relation	relation = resultRelInfo->ri_RelationDesc;
+	TupleDesc	tupdesc = RelationGetDescr(relation);
+	Bitmapset  *idx_attrs;		/* interesting attrs */
+	Bitmapset  *mix_attrs = NULL;	/* modified, indexed attributes */
+
+	/* If no indexes, we're done */
+	if (resultRelInfo->ri_NumIndices == 0)
+		return NULL;
+
+	/*
+	 * Fetch the set of all attributes referenced across all indexes on the
+	 * relation as well as the set of attributes referenced in expressions
+	 * that generate attributes.
+	 */
+	idx_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_INDEXED);
+
+	/* Review all attributes referenced in indexes on this relation */
+	while ((attidx = bms_next_member(idx_attrs, attidx)) >= 0)
+	{
+		/* attidx is zero-based, attnum is the normal attribute number */
+		AttrNumber	attnum = attidx + FirstLowInvalidHeapAttributeNumber;
+		Datum		old_value,
+					new_value;
+		bool		old_null,
+					new_null;
+		CompactAttribute *att;
+
+		/*
+		 * If it's a whole-tuple reference, say "not equal".  It's not really
+		 * worth supporting this case, since it could only succeed after a
+		 * no-op update, which is hardly a case worth optimizing for.
+		 */
+		if (attnum == 0)
+		{
+			mix_attrs = bms_add_member(mix_attrs, attidx);
+
+			continue;
+		}
+
+		/*
+		 * Likewise, automatically say "not equal" for any system attribute
+		 * other than tableOID; we cannot expect these to be consistent in a
+		 * HOT chain, or even to be set correctly yet in the new tuple.
+		 */
+		if (attnum < 0)
+		{
+			if (attnum != TableOidAttributeNumber)
+				mix_attrs = bms_add_member(mix_attrs, attidx);
+
+			continue;
+		}
+
+		/* Fetch the attribute's descriptor */
+		att = TupleDescCompactAttr(tupdesc, attnum - 1);
+
+		/*
+		 * Skip generated columns: their values change only if one of the
+		 * base columns referenced in the generation expression has changed,
+		 * and those base columns are checked here in their own right.
+		 */
+		if (att->attgenerated)
+			continue;
+
+		/* Extract values from both slots for this attribute */
+		old_value = slot_getattr(old_tts, attnum, &old_null);
+		new_value = slot_getattr(new_tts, attnum, &new_null);
+
+		/* A change to/from NULL, so not equal */
+		if (old_null != new_null)
+		{
+			mix_attrs = bms_add_member(mix_attrs, attidx);
+			continue;
+		}
+
+		/* Both NULL, no change/unmodified */
+		if (old_null)
+			continue;
+
+		/*
+		 * We do a simple binary comparison of the two datums.  This may be
+		 * overly strict because there can be multiple binary representations
+		 * for the same logical value.  But we should be OK as long as there
+		 * are no false positives.  Using a type-specific equality operator is
+		 * messy because there could be multiple notions of equality in
+		 * different operator classes; binary image equality also matches the
+		 * test heap_update() previously performed for this purpose.
+		 *
+		 * System attributes and whole-row references were handled above, so
+		 * only regular user attributes reach this point.
+		 */
+		if (!datum_image_eq(old_value, new_value, att->attbyval, att->attlen))
+			mix_attrs = bms_add_member(mix_attrs, attidx);
+	}
+
+	bms_free(idx_attrs);
+
+	return mix_attrs;
+}
 
 /*
  * Verify that the tuples to be produced by INSERT match the
@@ -2197,14 +2324,17 @@ ExecUpdatePrepareSlot(ResultRelInfo *resultRelInfo,
  */
 static TM_Result
 ExecUpdateAct(ModifyTableContext *context, ResultRelInfo *resultRelInfo,
-			  ItemPointer tupleid, HeapTuple oldtuple, TupleTableSlot *slot,
-			  bool canSetTag, UpdateContext *updateCxt)
+			  ItemPointer tupleid, HeapTuple oldtuple, TupleTableSlot *oldSlot,
+			  TupleTableSlot *slot, bool canSetTag, UpdateContext *updateCxt)
 {
 	EState	   *estate = context->estate;
 	Relation	resultRelationDesc = resultRelInfo->ri_RelationDesc;
 	bool		partition_constraint_failed;
 	TM_Result	result;
 
+	/* The set of modified indexed attributes that trigger new index entries */
+	Bitmapset  *mix_attrs = NULL;
+
 	updateCxt->crossPartUpdate = false;
 
 	/*
@@ -2321,7 +2451,23 @@ lreplace:
 		ExecConstraints(resultRelInfo, slot, estate);
 
 	/*
-	 * replace the heap tuple
+	 * Identify which, if any, indexed attributes were modified, so the
+	 * result can be reused in a few places below.  Discard any set left
+	 * over from a previous pass through this code.
+	 */
+	bms_free(resultRelInfo->ri_ChangedIndexedCols);
+	resultRelInfo->ri_ChangedIndexedCols = NULL;
+
+	/*
+	 * We could start from the set of updated columns via
+	 * ExecGetUpdatedCols(), but that would overlook attributes modified
+	 * directly by heap_modify_tuple(), which ExecGetUpdatedCols() knows
+	 * nothing about.  Compare the old and new slots instead.
+	 */
+	mix_attrs = ExecCheckIndexedAttrsForChanges(resultRelInfo, oldSlot, slot);
+
+	/*
+	 * Call into the table AM to update the heap tuple.
 	 *
 	 * Note: if es_crosscheck_snapshot isn't InvalidSnapshot, we check that
 	 * the row to be updated is visible to that snapshot, and throw a
@@ -2335,8 +2481,12 @@ lreplace:
 								estate->es_crosscheck_snapshot,
 								true /* wait for commit */ ,
 								&context->tmfd, &updateCxt->lockmode,
+								mix_attrs,
 								&updateCxt->updateIndexes);
 
+	Assert(bms_is_empty(resultRelInfo->ri_ChangedIndexedCols));
+	resultRelInfo->ri_ChangedIndexedCols = mix_attrs;
+
 	return result;
 }
 
@@ -2553,8 +2703,9 @@ ExecUpdate(ModifyTableContext *context, ResultRelInfo *resultRelInfo,
 		 */
 redo_act:
 		lockedtid = *tupleid;
-		result = ExecUpdateAct(context, resultRelInfo, tupleid, oldtuple, slot,
-							   canSetTag, &updateCxt);
+
+		result = ExecUpdateAct(context, resultRelInfo, tupleid, oldtuple, oldSlot,
+							   slot, canSetTag, &updateCxt);
 
 		/*
 		 * If ExecUpdateAct reports that a cross-partition update was done,
@@ -3404,8 +3555,8 @@ lmerge_matched:
 					Assert(oldtuple == NULL);
 
 					result = ExecUpdateAct(context, resultRelInfo, tupleid,
-										   NULL, newslot, canSetTag,
-										   &updateCxt);
+										   NULL, resultRelInfo->ri_oldTupleSlot,
+										   newslot, canSetTag, &updateCxt);
 
 					/*
 					 * As in ExecUpdate(), if ExecUpdateAct() reports that a
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 32725c48623..968ff626f04 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -285,12 +285,14 @@
 #include "storage/procarray.h"
 #include "tcop/tcopprot.h"
 #include "utils/acl.h"
+#include "utils/datum.h"
 #include "utils/guc.h"
 #include "utils/inval.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
+#include "utils/relcache.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
@@ -1110,15 +1112,18 @@ slot_store_data(TupleTableSlot *slot, LogicalRepRelMapEntry *rel,
  * "slot" is filled with a copy of the tuple in "srcslot", replacing
  * columns provided in "tupleData" and leaving others as-is.
  *
+ * Returns a Bitmapset of the columns whose values were modified.
+ *
  * Caution: unreplaced pass-by-ref columns in "slot" will point into the
  * storage for "srcslot".  This is OK for current usage, but someday we may
  * need to materialize "slot" at the end to make it independent of "srcslot".
  */
-static void
+static Bitmapset *
 slot_modify_data(TupleTableSlot *slot, TupleTableSlot *srcslot,
 				 LogicalRepRelMapEntry *rel,
 				 LogicalRepTupleData *tupleData)
 {
+	Bitmapset  *modified = NULL;
 	int			natts = slot->tts_tupleDescriptor->natts;
 	int			i;
 
@@ -1195,6 +1200,27 @@ slot_modify_data(TupleTableSlot *slot, TupleTableSlot *srcslot,
 				slot->tts_isnull[i] = true;
 			}
 
+			/*
+			 * Determine whether the replicated value differs from the local
+			 * value by comparing the slots.  This is a subset of what
+			 * ExecCheckIndexedAttrsForChanges() does.
+			 */
+			if (srcslot->tts_isnull[i] != slot->tts_isnull[i])
+			{
+				/* One is NULL and the other is not, so the value changed */
+				modified = bms_add_member(modified, i + 1 - FirstLowInvalidHeapAttributeNumber);
+			}
+			else if (!srcslot->tts_isnull[i])
+			{
+				/* Neither value is NULL; compare them */
+
+				if (!datumIsEqual(srcslot->tts_values[i],
+								  slot->tts_values[i],
+								  att->attbyval,
+								  att->attlen))
+					modified = bms_add_member(modified, i + 1 - FirstLowInvalidHeapAttributeNumber);
+			}
+
 			/* Reset attnum for error callback */
 			apply_error_callback_arg.remote_attnum = -1;
 		}
@@ -1202,6 +1228,8 @@ slot_modify_data(TupleTableSlot *slot, TupleTableSlot *srcslot,
 
 	/* And finally, declare that "slot" contains a valid virtual tuple */
 	ExecStoreVirtualTuple(slot);
+
+	return modified;
 }
 
 /*
@@ -2918,6 +2946,7 @@ apply_handle_update_internal(ApplyExecutionData *edata,
 	ConflictTupleInfo conflicttuple = {0};
 	bool		found;
 	MemoryContext oldctx;
+	Bitmapset  *indexed = NULL;
 
 	EvalPlanQualInit(&epqstate, estate, NULL, NIL, -1, NIL);
 	ExecOpenIndices(relinfo, false);
@@ -2934,6 +2963,8 @@ apply_handle_update_internal(ApplyExecutionData *edata,
 	 */
 	if (found)
 	{
+		Bitmapset  *modified = NULL;
+
 		/*
 		 * Report the conflict if the tuple was modified by a different
 		 * origin.
@@ -2957,15 +2988,29 @@ apply_handle_update_internal(ApplyExecutionData *edata,
 
 		/* Process and store remote tuple in the slot */
 		oldctx = MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));
-		slot_modify_data(remoteslot, localslot, relmapentry, newtup);
+		modified = slot_modify_data(remoteslot, localslot, relmapentry, newtup);
 		MemoryContextSwitchTo(oldctx);
 
+		/*
+		 * Normally we'd call ExecCheckIndexedAttrsForChanges(), but here
+		 * the replication stream already carries the set of changed
+		 * columns, so use that instead.
+		 */
+		indexed = RelationGetIndexAttrBitmap(relinfo->ri_RelationDesc,
+											 INDEX_ATTR_BITMAP_INDEXED);
+
+		bms_free(relinfo->ri_ChangedIndexedCols);
+		relinfo->ri_ChangedIndexedCols = bms_int_members(modified, indexed);
+		bms_free(indexed);
+
 		EvalPlanQualSetSlot(&epqstate, remoteslot);
 
 		InitConflictIndexes(relinfo);
 
-		/* Do the actual update. */
+		/* First check privileges. */
 		TargetPrivilegesCheck(relinfo->ri_RelationDesc, ACL_UPDATE);
+
+		/* Then do the actual update. */
 		ExecSimpleRelationUpdate(relinfo, estate, &epqstate, localslot,
 								 remoteslot);
 	}
@@ -3455,6 +3500,8 @@ apply_handle_tuple_routing(ApplyExecutionData *edata,
 				bool		found;
 				EPQState	epqstate;
 				ConflictTupleInfo conflicttuple = {0};
+				Bitmapset  *modified = NULL;
+				Bitmapset  *indexed;
 
 				/* Get the matching local tuple from the partition. */
 				found = FindReplTupleInLocalRel(edata, partrel,
@@ -3523,8 +3570,8 @@ apply_handle_tuple_routing(ApplyExecutionData *edata,
 				 * remoteslot_part.
 				 */
 				oldctx = MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));
-				slot_modify_data(remoteslot_part, localslot, part_entry,
-								 newtup);
+				modified = slot_modify_data(remoteslot_part, localslot, part_entry,
+											newtup);
 				MemoryContextSwitchTo(oldctx);
 
 				EvalPlanQualInit(&epqstate, estate, NULL, NIL, -1, NIL);
@@ -3549,6 +3596,18 @@ apply_handle_tuple_routing(ApplyExecutionData *edata,
 					EvalPlanQualSetSlot(&epqstate, remoteslot_part);
 					TargetPrivilegesCheck(partrelinfo->ri_RelationDesc,
 										  ACL_UPDATE);
+
+					/*
+					 * Normally we'd call ExecCheckIndexedAttrsForChanges(),
+					 * but here the replication stream already carries the
+					 * set of changed columns, so use that instead.
+					 */
+					indexed = RelationGetIndexAttrBitmap(partrelinfo->ri_RelationDesc,
+														 INDEX_ATTR_BITMAP_INDEXED);
+					bms_free(partrelinfo->ri_ChangedIndexedCols);
+					partrelinfo->ri_ChangedIndexedCols = bms_int_members(modified, indexed);
+					bms_free(indexed);
+
 					ExecSimpleRelationUpdate(partrelinfo, estate, &epqstate,
 											 localslot, remoteslot_part);
 				}
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 6b634c9fff1..f30505d8ae3 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -2475,8 +2475,8 @@ RelationDestroyRelation(Relation relation, bool remember_tupdesc)
 	bms_free(relation->rd_keyattr);
 	bms_free(relation->rd_pkattr);
 	bms_free(relation->rd_idattr);
-	bms_free(relation->rd_hotblockingattr);
 	bms_free(relation->rd_summarizedattr);
+	bms_free(relation->rd_indexedattr);
 	if (relation->rd_pubdesc)
 		pfree(relation->rd_pubdesc);
 	if (relation->rd_options)
@@ -5276,8 +5276,8 @@ RelationGetIndexPredicate(Relation relation)
  *									(beware: even if PK is deferrable!)
  *	INDEX_ATTR_BITMAP_IDENTITY_KEY	Columns in the table's replica identity
  *									index (empty if FULL)
- *	INDEX_ATTR_BITMAP_HOT_BLOCKING	Columns that block updates from being HOT
- *	INDEX_ATTR_BITMAP_SUMMARIZED	Columns included in summarizing indexes
+ *	INDEX_ATTR_BITMAP_SUMMARIZED	Columns included only in summarizing indexes
+ *	INDEX_ATTR_BITMAP_INDEXED		Columns referenced by indexes
  *
  * Attribute numbers are offset by FirstLowInvalidHeapAttributeNumber so that
  * we can include system attributes (e.g., OID) in the bitmap representation.
@@ -5300,8 +5300,8 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 	Bitmapset  *uindexattrs;	/* columns in unique indexes */
 	Bitmapset  *pkindexattrs;	/* columns in the primary index */
 	Bitmapset  *idindexattrs;	/* columns in the replica identity */
-	Bitmapset  *hotblockingattrs;	/* columns with HOT blocking indexes */
-	Bitmapset  *summarizedattrs;	/* columns with summarizing indexes */
+	Bitmapset  *summarizedattrs;	/* columns only in summarizing indexes */
+	Bitmapset  *indexedattrs;	/* columns referenced by indexes */
 	List	   *indexoidlist;
 	List	   *newindexoidlist;
 	Oid			relpkindex;
@@ -5320,10 +5320,10 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 				return bms_copy(relation->rd_pkattr);
 			case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 				return bms_copy(relation->rd_idattr);
-			case INDEX_ATTR_BITMAP_HOT_BLOCKING:
-				return bms_copy(relation->rd_hotblockingattr);
 			case INDEX_ATTR_BITMAP_SUMMARIZED:
 				return bms_copy(relation->rd_summarizedattr);
+			case INDEX_ATTR_BITMAP_INDEXED:
+				return bms_copy(relation->rd_indexedattr);
 			default:
 				elog(ERROR, "unknown attrKind %u", attrKind);
 		}
@@ -5366,8 +5366,8 @@ restart:
 	uindexattrs = NULL;
 	pkindexattrs = NULL;
 	idindexattrs = NULL;
-	hotblockingattrs = NULL;
 	summarizedattrs = NULL;
+	indexedattrs = NULL;
 	foreach(l, indexoidlist)
 	{
 		Oid			indexOid = lfirst_oid(l);
@@ -5426,7 +5426,7 @@ restart:
 		if (indexDesc->rd_indam->amsummarizing)
 			attrs = &summarizedattrs;
 		else
-			attrs = &hotblockingattrs;
+			attrs = &indexedattrs;
 
 		/* Collect simple attribute references */
 		for (i = 0; i < indexDesc->rd_index->indnatts; i++)
@@ -5435,9 +5435,9 @@ restart:
 
 			/*
 			 * Since we have covering indexes with non-key columns, we must
-			 * handle them accurately here. non-key columns must be added into
-			 * hotblockingattrs or summarizedattrs, since they are in index,
-			 * and update shouldn't miss them.
+			 * handle them accurately here.  Non-key columns must be added to
+			 * indexedattrs or summarizedattrs, since they are in the index
+			 * and updates must not miss them.
 			 *
 			 * Summarizing indexes do not block HOT, but do need to be updated
 			 * when the column value changes, thus require a separate
@@ -5498,12 +5498,20 @@ restart:
 		bms_free(uindexattrs);
 		bms_free(pkindexattrs);
 		bms_free(idindexattrs);
-		bms_free(hotblockingattrs);
 		bms_free(summarizedattrs);
+		bms_free(indexedattrs);
 
 		goto restart;
 	}
 
+	/*
+	 * Reduce summarizedattrs to the attributes referenced only by summarizing
+	 * indexes, then fold those back into indexedattrs so that it covers every
+	 * attribute referenced by any index.
+	 */
+	summarizedattrs = bms_del_members(summarizedattrs, indexedattrs);
+	indexedattrs = bms_add_members(indexedattrs, summarizedattrs);
+
 	/* Don't leak the old values of these bitmaps, if any */
 	relation->rd_attrsvalid = false;
 	bms_free(relation->rd_keyattr);
@@ -5512,10 +5520,10 @@ restart:
 	relation->rd_pkattr = NULL;
 	bms_free(relation->rd_idattr);
 	relation->rd_idattr = NULL;
-	bms_free(relation->rd_hotblockingattr);
-	relation->rd_hotblockingattr = NULL;
 	bms_free(relation->rd_summarizedattr);
 	relation->rd_summarizedattr = NULL;
+	bms_free(relation->rd_indexedattr);
+	relation->rd_indexedattr = NULL;
 
 	/*
 	 * Now save copies of the bitmaps in the relcache entry.  We intentionally
@@ -5528,8 +5536,8 @@ restart:
 	relation->rd_keyattr = bms_copy(uindexattrs);
 	relation->rd_pkattr = bms_copy(pkindexattrs);
 	relation->rd_idattr = bms_copy(idindexattrs);
-	relation->rd_hotblockingattr = bms_copy(hotblockingattrs);
 	relation->rd_summarizedattr = bms_copy(summarizedattrs);
+	relation->rd_indexedattr = bms_copy(indexedattrs);
 	relation->rd_attrsvalid = true;
 	MemoryContextSwitchTo(oldcxt);
 
@@ -5542,10 +5550,10 @@ restart:
 			return pkindexattrs;
 		case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 			return idindexattrs;
-		case INDEX_ATTR_BITMAP_HOT_BLOCKING:
-			return hotblockingattrs;
 		case INDEX_ATTR_BITMAP_SUMMARIZED:
 			return summarizedattrs;
+		case INDEX_ATTR_BITMAP_INDEXED:
+			return indexedattrs;
 		default:
 			elog(ERROR, "unknown attrKind %u", attrKind);
 			return NULL;
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 3c0961ab36b..ca6ac1f8a4d 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -364,11 +364,11 @@ extern TM_Result heap_delete(Relation relation, const ItemPointerData *tid,
 							 TM_FailureData *tmfd, bool changingPart);
 extern void heap_finish_speculative(Relation relation, const ItemPointerData *tid);
 extern void heap_abort_speculative(Relation relation, const ItemPointerData *tid);
-extern TM_Result heap_update(Relation relation, const ItemPointerData *otid,
-							 HeapTuple newtup,
-							 CommandId cid, Snapshot crosscheck, bool wait,
-							 TM_FailureData *tmfd, LockTupleMode *lockmode,
-							 TU_UpdateIndexes *update_indexes);
+extern TM_Result heap_update(Relation relation, HeapTupleData *oldtup,
+							 HeapTuple newtup, CommandId cid, Snapshot crosscheck, bool wait,
+							 TM_FailureData *tmfd, LockTupleMode *lockmode, Buffer buffer,
+							 Page page, BlockNumber block, ItemId lp, bool hot_allowed,
+							 Buffer *vmbuffer, bool rep_id_key_required);
 extern TM_Result heap_lock_tuple(Relation relation, HeapTuple tuple,
 								 CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy,
 								 bool follow_updates,
@@ -402,8 +402,8 @@ extern bool heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple);
 
 extern void simple_heap_insert(Relation relation, HeapTuple tup);
 extern void simple_heap_delete(Relation relation, const ItemPointerData *tid);
-extern void simple_heap_update(Relation relation, const ItemPointerData *otid,
-							   HeapTuple tup, TU_UpdateIndexes *update_indexes);
+extern Bitmapset *simple_heap_update(Relation relation, const ItemPointerData *otid,
+									 HeapTuple tup, TU_UpdateIndexes *update_indexes);
 
 extern TransactionId heap_index_delete_tuples(Relation rel,
 											  TM_IndexDeleteOp *delstate);
@@ -430,6 +430,18 @@ extern void log_heap_prune_and_freeze(Relation relation, Buffer buffer,
 									  OffsetNumber *dead, int ndead,
 									  OffsetNumber *unused, int nunused);
 
+/* in heap/heapam.c */
+extern Bitmapset *HeapDetermineColumnsInfo(Relation relation,
+										   Bitmapset *interesting_cols,
+										   Bitmapset *external_cols,
+										   HeapTuple oldtup, HeapTuple newtup,
+										   bool *has_external);
+#ifdef USE_ASSERT_CHECKING
+extern void check_lock_if_inplace_updateable_rel(Relation relation,
+												 const ItemPointerData *otid,
+												 HeapTuple newtup);
+#endif
+
 /* in heap/vacuumlazy.c */
 extern void heap_vacuum_rel(Relation rel,
 							const VacuumParams params, BufferAccessStrategy bstrategy);
diff --git a/src/include/access/tableam.h b/src/include/access/tableam.h
index 251379016b0..3b080aa3711 100644
--- a/src/include/access/tableam.h
+++ b/src/include/access/tableam.h
@@ -549,6 +549,7 @@ typedef struct TableAmRoutine
 								 bool wait,
 								 TM_FailureData *tmfd,
 								 LockTupleMode *lockmode,
+								 const Bitmapset *updated_cols,
 								 TU_UpdateIndexes *update_indexes);
 
 	/* see table_tuple_lock() for reference about parameters */
@@ -1524,12 +1525,12 @@ static inline TM_Result
 table_tuple_update(Relation rel, ItemPointer otid, TupleTableSlot *slot,
 				   CommandId cid, Snapshot snapshot, Snapshot crosscheck,
 				   bool wait, TM_FailureData *tmfd, LockTupleMode *lockmode,
-				   TU_UpdateIndexes *update_indexes)
+				   const Bitmapset *updated_cols, TU_UpdateIndexes *update_indexes)
 {
 	return rel->rd_tableam->tuple_update(rel, otid, slot,
 										 cid, snapshot, crosscheck,
-										 wait, tmfd,
-										 lockmode, update_indexes);
+										 wait, tmfd, lockmode,
+										 updated_cols, update_indexes);
 }
 
 /*
@@ -2010,6 +2011,7 @@ extern void simple_table_tuple_delete(Relation rel, ItemPointer tid,
 									  Snapshot snapshot);
 extern void simple_table_tuple_update(Relation rel, ItemPointer otid,
 									  TupleTableSlot *slot, Snapshot snapshot,
+									  const Bitmapset *updated_cols,
 									  TU_UpdateIndexes *update_indexes);
 
 
diff --git a/src/include/catalog/index.h b/src/include/catalog/index.h
index b259c4141ed..14a39beab6e 100644
--- a/src/include/catalog/index.h
+++ b/src/include/catalog/index.h
@@ -132,6 +132,7 @@ extern bool CompareIndexInfo(const IndexInfo *info1, const IndexInfo *info2,
 							 const AttrMap *attmap);
 
 extern void BuildSpeculativeIndexInfo(Relation index, IndexInfo *ii);
+extern void BuildUpdateIndexInfo(ResultRelInfo *resultRelInfo);
 
 extern void FormIndexDatum(IndexInfo *indexInfo,
 						   TupleTableSlot *slot,
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index 55a7d930d26..67ecb1771c3 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -800,5 +800,8 @@ extern ResultRelInfo *ExecLookupResultRelByOid(ModifyTableState *node,
 											   Oid resultoid,
 											   bool missing_ok,
 											   bool update_cache);
+extern Bitmapset *ExecCheckIndexedAttrsForChanges(ResultRelInfo *relinfo,
+												  TupleTableSlot *old_tts,
+												  TupleTableSlot *new_tts);
 
 #endif							/* EXECUTOR_H  */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index 63c067d5aae..13284dbd70b 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -502,6 +502,12 @@ typedef struct ResultRelInfo
 	/* true if the above has been computed */
 	bool		ri_extraUpdatedCols_valid;
 
+	/*
+	 * For UPDATE, a Bitmapset of the attributes that are both indexed and
+	 * have changed in value.
+	 */
+	Bitmapset  *ri_ChangedIndexedCols;
+
 	/* Projection to generate new tuple in an INSERT/UPDATE */
 	ProjectionInfo *ri_projectNew;
 	/* Slot to hold that tuple */
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index 236830f6b93..10e5e9044ee 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -162,8 +162,8 @@ typedef struct RelationData
 	Bitmapset  *rd_keyattr;		/* cols that can be ref'd by foreign keys */
 	Bitmapset  *rd_pkattr;		/* cols included in primary key */
 	Bitmapset  *rd_idattr;		/* included in replica identity index */
-	Bitmapset  *rd_hotblockingattr; /* cols blocking HOT update */
 	Bitmapset  *rd_summarizedattr;	/* cols indexed by summarizing indexes */
+	Bitmapset  *rd_indexedattr; /* all cols referenced by indexes */
 
 	PublicationDesc *rd_pubdesc;	/* publication descriptor, or NULL */
 
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index 2700224939a..57b46ee54e5 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -69,8 +69,8 @@ typedef enum IndexAttrBitmapKind
 	INDEX_ATTR_BITMAP_KEY,
 	INDEX_ATTR_BITMAP_PRIMARY_KEY,
 	INDEX_ATTR_BITMAP_IDENTITY_KEY,
-	INDEX_ATTR_BITMAP_HOT_BLOCKING,
 	INDEX_ATTR_BITMAP_SUMMARIZED,
+	INDEX_ATTR_BITMAP_INDEXED,
 } IndexAttrBitmapKind;
 
 extern Bitmapset *RelationGetIndexAttrBitmap(Relation relation,
diff --git a/src/test/regress/expected/generated_virtual.out b/src/test/regress/expected/generated_virtual.out
index 249e68be654..c2b3cab2fa3 100644
--- a/src/test/regress/expected/generated_virtual.out
+++ b/src/test/regress/expected/generated_virtual.out
@@ -287,7 +287,7 @@ DETAIL:  Column "b" is a generated column.
 INSERT INTO gtest1v VALUES (8, DEFAULT), (9, DEFAULT);  -- error
 ERROR:  cannot insert a non-DEFAULT value into column "b"
 DETAIL:  Column "b" is a generated column.
-SELECT * FROM gtest1v;
+SELECT * FROM gtest1v ORDER BY a;
  a | b  
 ---+----
  3 |  6
diff --git a/src/test/regress/expected/updatable_views.out b/src/test/regress/expected/updatable_views.out
index 9cea538b8e8..4877a1ddce9 100644
--- a/src/test/regress/expected/updatable_views.out
+++ b/src/test/regress/expected/updatable_views.out
@@ -372,15 +372,15 @@ INSERT INTO rw_view16 (a, b) VALUES (3, 'Row 3'); -- should be OK
 UPDATE rw_view16 SET a=3, aa=-3 WHERE a=3; -- should fail
 ERROR:  multiple assignments to same column "a"
 UPDATE rw_view16 SET aa=-3 WHERE a=3; -- should be OK
-SELECT * FROM base_tbl;
+SELECT * FROM base_tbl ORDER BY a;
  a  |   b    
 ----+--------
+ -3 | Row 3
  -2 | Row -2
  -1 | Row -1
   0 | Row 0
   1 | Row 1
   2 | Row 2
- -3 | Row 3
 (6 rows)
 
 DELETE FROM rw_view16 WHERE a=-3; -- should be OK
diff --git a/src/test/regress/sql/generated_virtual.sql b/src/test/regress/sql/generated_virtual.sql
index 81152b39a79..74ab83dcff0 100644
--- a/src/test/regress/sql/generated_virtual.sql
+++ b/src/test/regress/sql/generated_virtual.sql
@@ -127,7 +127,7 @@ ALTER VIEW gtest1v ALTER COLUMN b SET DEFAULT 100;
 INSERT INTO gtest1v VALUES (8, DEFAULT);  -- error
 INSERT INTO gtest1v VALUES (8, DEFAULT), (9, DEFAULT);  -- error
 
-SELECT * FROM gtest1v;
+SELECT * FROM gtest1v ORDER BY a;
 DELETE FROM gtest1v WHERE a >= 5;
 DROP VIEW gtest1v;
 
diff --git a/src/test/regress/sql/updatable_views.sql b/src/test/regress/sql/updatable_views.sql
index 1635adde2d4..160e7799715 100644
--- a/src/test/regress/sql/updatable_views.sql
+++ b/src/test/regress/sql/updatable_views.sql
@@ -125,7 +125,7 @@ INSERT INTO rw_view16 VALUES (3, 'Row 3', 3); -- should fail
 INSERT INTO rw_view16 (a, b) VALUES (3, 'Row 3'); -- should be OK
 UPDATE rw_view16 SET a=3, aa=-3 WHERE a=3; -- should fail
 UPDATE rw_view16 SET aa=-3 WHERE a=3; -- should be OK
-SELECT * FROM base_tbl;
+SELECT * FROM base_tbl ORDER BY a;
 DELETE FROM rw_view16 WHERE a=-3; -- should be OK
 -- Read-only views
 INSERT INTO ro_view17 VALUES (3, 'ROW 3');
-- 
2.51.2
