Hello,

This is an updated version of the last patch with a few fixes, plus a layer on 
top of it that tries to clean up heap_update().

v20260219:

0001 - Not much has changed in this patch: some cleanup and a few fixed 
mistakes.  This patch passes all tests without any need to modify them.  I've 
added an ExecCompareSlotAttrs() helper function.

0002 - As in v29, I've split off the top half of heap_update() and moved it 
into heapam_tuple_update() and simple_heap_update().  I've created helper 
functions for the different steps in these early stages: 
HeapUpdateHotAllowable(), HeapUpdateRequiresReplicaId(), and 
HeapUpdateDetermineLockmode().  This allows for a cleaner set of bitmaps and 
logic (as I read it) on the heapam_tuple_update() path.  I reuse these helper 
functions in simple_heap_update() when possible, even trimming up 
HeapDetermineColumnsInfo() a bit so as to reuse 
HeapUpdateRequiresReplicaId().

I've tested with code in heapam_tuple_update() that validates that the old and 
new code paths compute identical modified-attribute bitmaps:

{
        bool            id_has_external = false;
        Bitmapset  *hot_attrs = RelationGetIndexAttrBitmap(relation,
                INDEX_ATTR_BITMAP_INDEXED);
        Bitmapset  *id_attrs = RelationGetIndexAttrBitmap(relation,
                INDEX_ATTR_BITMAP_IDENTITY_KEY);
        Bitmapset  *hdci_attrs = HeapDetermineColumnsInfo(relation, hot_attrs,
                id_attrs, &oldtup, tuple, &id_has_external);

        /* mix_attrs is the modified_attrs passed into heapam_tuple_update() */
        Assert(bms_equal(mix_attrs, hdci_attrs));
        bms_free(hot_attrs);
        bms_free(id_attrs);
        bms_free(hdci_attrs);
}

Despite that passing, two tests became non-deterministic without an "ORDER BY" 
on a SELECT.  You'll see those in generated_virtual.sql and updatable_views.sql 
in the second patch.  I don't know yet why that happened, but the results are 
otherwise identical.

I will continue performance testing.  Most things I've tried differ by less 
than 0.5% before/after this patch.  Some operations where multiple rows are 
matched in an UPDATE and there are concurrent reads are faster (10-20%); I 
need to dig into this.  I'm expanding the tests I'm running to try to find any 
cases where holding the buffer lock for less time could result in higher TPS.  
The goals for $subject were higher TPS but, more importantly, lower index 
bloat and shorter vacuum times.

Next up I'll work on re-introducing some of the other work from $subject and 
other changes in v29 and earlier patch sets.

* avoid the need for index_unchanged_by_update()
* re-add the new index AM function, "amcomparedatums()" or similar
* add a flag for types indicating that they have "sub-attributes" (JSONB, XML, 
ARRAY)
* during CREATE INDEX, when a type with sub-attributes appears in an index 
expression, store in pg_attribute the relation and some representation of 
which sub-attribute is used
* update JSONB functions that mutate content to use the pg_attribute 
information and record whether there were changes to indexed "sub-attributes"
* use that recorded information later in the executor to identify whether an 
indexed sub-attribute changed, opening the door for $subject without 
evaluating the before/after expressions
* re-examine partial indexes as well
* consider how one might layer a PHOT/WARM-thingie on this... (in a different 
thread in the future, like next year)


best.

-greg
From c996ab0e27724ff175a24da26603ec3c81a57d40 Mon Sep 17 00:00:00 2001
From: Greg Burd <[email protected]>
Date: Sun, 2 Nov 2025 11:36:20 -0500
Subject: [PATCH v20260219 1/2] Identify modified indexed attributes in the
 executor on UPDATE

Refactor executor update logic to determine which indexed columns have
actually changed during an UPDATE operation rather than leaving this up
to HeapDetermineColumnsInfo() in heap_update().

ExecCheckIndexedAttrsForChanges() replaces HeapDetermineColumnsInfo()
and is called before table_tuple_update(), crucially without the need
for an exclusive buffer lock on the page that holds the tuple being
updated.  This reduces the time the lock is held later within
heapam_tuple_update() and heap_update().

ExecCheckIndexedAttrsForChanges() in turn uses ExecCompareSlotAttrs() to
identify which attributes have changed and then intersects that with the
set of indexed attributes to identify the modified indexed set.

Besides identifying the set of modified indexed attributes,
HeapDetermineColumnsInfo() was also responsible for part of the logic
involved in the decision to include the replica identity key or not.
This now happens in heap_update() when modified_attrs_valid is false.

Catalog tuple updates use simple_heap_update() and don't pass a
modified_attrs Bitmapset into heap_update(), as indicated by the
modified_attrs_valid bool being false.

Updates stemming from logical replication also use the new
ExecCheckIndexedAttrsForChanges() in ExecSimpleRelationUpdate().

Before row triggers on UPDATE may use heap_modify_tuple() to update
attributes not identified by ExecGetAllUpdatedCols(), as is the case in
tsvector_update_trigger().  ExecBRUpdateTriggers() now identifies
changes to indexed columns not found by ExecGetAllUpdatedCols()
and adds their attributes to ri_extraUpdatedCols.  See the
tsearch.sql tests for an example of this.
---
 src/backend/access/heap/heapam.c         | 80 +++++++++++++++++---
 src/backend/access/heap/heapam_handler.c |  7 +-
 src/backend/access/table/tableam.c       |  5 +-
 src/backend/commands/trigger.c           | 20 ++++-
 src/backend/executor/execReplication.c   | 12 ++-
 src/backend/executor/execTuples.c        | 93 ++++++++++++++++++++++++
 src/backend/executor/nodeModifyTable.c   | 78 ++++++++++++++++++--
 src/backend/replication/logical/worker.c | 10 +--
 src/backend/utils/cache/relcache.c       | 15 ++++
 src/include/access/heapam.h              |  1 +
 src/include/access/tableam.h             |  8 +-
 src/include/executor/executor.h          |  8 ++
 src/include/utils/rel.h                  |  1 +
 src/include/utils/relcache.h             |  1 +
 14 files changed, 303 insertions(+), 36 deletions(-)

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 98d53caeea8..ab8b6ddb8de 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -3311,6 +3311,7 @@ TM_Result
 heap_update(Relation relation, const ItemPointerData *otid, HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
 			TM_FailureData *tmfd, LockTupleMode *lockmode,
+			const Bitmapset *modified_attrs, bool modified_attrs_valid,
 			TU_UpdateIndexes *update_indexes)
 {
 	TM_Result	result;
@@ -3320,7 +3321,6 @@ heap_update(Relation relation, const ItemPointerData *otid, HeapTuple newtup,
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
 	Bitmapset  *interesting_attrs;
-	Bitmapset  *modified_attrs;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3345,7 +3345,7 @@ heap_update(Relation relation, const ItemPointerData *otid, HeapTuple newtup,
 	bool		all_visible_cleared_new = false;
 	bool		checked_lockers;
 	bool		locker_remains;
-	bool		id_has_external = false;
+	bool		rep_id_key_required = false;
 	TransactionId xmax_new_tuple,
 				xmax_old_tuple;
 	uint16		infomask_old_tuple,
@@ -3487,9 +3487,69 @@ heap_update(Relation relation, const ItemPointerData *otid, HeapTuple newtup,
 	 * new tuple so we must include it as part of the old_key_tuple.  See
 	 * ExtractReplicaIdentity.
 	 */
-	modified_attrs = HeapDetermineColumnsInfo(relation, interesting_attrs,
-											  id_attrs, &oldtup,
-											  newtup, &id_has_external);
+	if (!modified_attrs_valid)
+	{
+		bool		id_has_external = false;
+
+		modified_attrs = HeapDetermineColumnsInfo(relation, interesting_attrs,
+												  id_attrs, &oldtup,
+												  newtup, &id_has_external);
+		rep_id_key_required = id_has_external ||
+			bms_overlap(modified_attrs, id_attrs);
+	}
+	else
+	{
+		/*
+		 * ExtractReplicaIdentity() needs to know if a modified attribute is
+		 * used as a replica identity or if any of the unmodified indexed
+		 * attributes in the old tuple are stored externally and used as a
+		 * replica identity.
+		 */
+		rep_id_key_required = bms_overlap(modified_attrs, id_attrs);
+		if (!rep_id_key_required)
+		{
+			Bitmapset  *attrs;
+			TupleDesc	tupdesc = RelationGetDescr(relation);
+			int			attidx = -1;
+
+			/* Check all unmodified indexed replica identity key attributes */
+			attrs = bms_difference(interesting_attrs, modified_attrs);
+			attrs = bms_int_members(attrs, id_attrs);
+
+			while ((attidx = bms_next_member(attrs, attidx)) >= 0)
+			{
+				/*
+				 * attidx is zero-based, attrnum is the normal attribute
+				 * number
+				 */
+				AttrNumber	attrnum = attidx + FirstLowInvalidHeapAttributeNumber;
+				Datum		value;
+				bool		isnull;
+
+				/*
+				 * System attributes are not added into interesting_attrs in
+				 * relcache.
+				 */
+				Assert(attrnum > 0);
+
+				value = heap_getattr(&oldtup, attrnum, tupdesc, &isnull);
+
+				/* No need to check attributes that can't be stored externally */
+				if (isnull ||
+					TupleDescCompactAttr(tupdesc, attrnum - 1)->attlen != -1)
+					continue;
+
+				/* Check if the old tuple's attribute is stored externally */
+				if (VARATT_IS_EXTERNAL((struct varlena *) DatumGetPointer(value)))
+				{
+					rep_id_key_required = true;
+					break;
+				}
+			}
+
+			bms_free(attrs);
+		}
+	}
 
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
@@ -3763,7 +3823,7 @@ l2:
 		bms_free(sum_attrs);
 		bms_free(key_attrs);
 		bms_free(id_attrs);
-		bms_free(modified_attrs);
+	/* modified_attrs is passed in and freed by the caller, or is NULL */
 		bms_free(interesting_attrs);
 		return result;
 	}
@@ -4111,8 +4171,7 @@ l2:
 	 * columns are modified or it has external data.
 	 */
 	old_key_tuple = ExtractReplicaIdentity(relation, &oldtup,
-										   bms_overlap(modified_attrs, id_attrs) ||
-										   id_has_external,
+										   rep_id_key_required,
 										   &old_key_copied);
 
 	/* NO EREPORT(ERROR) from here till changes are logged */
@@ -4278,7 +4337,7 @@ l2:
 	bms_free(sum_attrs);
 	bms_free(key_attrs);
 	bms_free(id_attrs);
-	bms_free(modified_attrs);
+	/* modified_attrs is passed in and freed by the caller, or is NULL */
 	bms_free(interesting_attrs);
 
 	return TM_Ok;
@@ -4562,7 +4621,8 @@ simple_heap_update(Relation relation, const ItemPointerData *otid, HeapTuple tup
 	result = heap_update(relation, otid, tup,
 						 GetCurrentCommandId(true), InvalidSnapshot,
 						 true /* wait for commit */ ,
-						 &tmfd, &lockmode, update_indexes);
+						 &tmfd, &lockmode,
+						 NULL, false, update_indexes);
 	switch (result)
 	{
 		case TM_SelfModified:
diff --git a/src/backend/access/heap/heapam_handler.c b/src/backend/access/heap/heapam_handler.c
index cbef73e5d4b..2d74fa90c7f 100644
--- a/src/backend/access/heap/heapam_handler.c
+++ b/src/backend/access/heap/heapam_handler.c
@@ -312,12 +312,11 @@ heapam_tuple_delete(Relation relation, ItemPointer tid, CommandId cid,
 	return heap_delete(relation, tid, cid, crosscheck, wait, tmfd, changingPart);
 }
 
-
 static TM_Result
 heapam_tuple_update(Relation relation, ItemPointer otid, TupleTableSlot *slot,
 					CommandId cid, Snapshot snapshot, Snapshot crosscheck,
-					bool wait, TM_FailureData *tmfd,
-					LockTupleMode *lockmode, TU_UpdateIndexes *update_indexes)
+					bool wait, TM_FailureData *tmfd, LockTupleMode *lockmode,
+					const Bitmapset *modified_attrs, TU_UpdateIndexes *update_indexes)
 {
 	bool		shouldFree = true;
 	HeapTuple	tuple = ExecFetchSlotHeapTuple(slot, true, &shouldFree);
@@ -328,7 +327,7 @@ heapam_tuple_update(Relation relation, ItemPointer otid, TupleTableSlot *slot,
 	tuple->t_tableOid = slot->tts_tableOid;
 
 	result = heap_update(relation, otid, tuple, cid, crosscheck, wait,
-						 tmfd, lockmode, update_indexes);
+						 tmfd, lockmode, modified_attrs, true, update_indexes);
 	ItemPointerCopy(&tuple->t_self, &slot->tts_tid);
 
 	/*
diff --git a/src/backend/access/table/tableam.c b/src/backend/access/table/tableam.c
index dfda1af412e..42acd5b17a9 100644
--- a/src/backend/access/table/tableam.c
+++ b/src/backend/access/table/tableam.c
@@ -359,6 +359,7 @@ void
 simple_table_tuple_update(Relation rel, ItemPointer otid,
 						  TupleTableSlot *slot,
 						  Snapshot snapshot,
+						  const Bitmapset *mix_attrs,
 						  TU_UpdateIndexes *update_indexes)
 {
 	TM_Result	result;
@@ -369,7 +370,9 @@ simple_table_tuple_update(Relation rel, ItemPointer otid,
 								GetCurrentCommandId(true),
 								snapshot, InvalidSnapshot,
 								true /* wait for commit */ ,
-								&tmfd, &lockmode, update_indexes);
+								&tmfd, &lockmode,
+								mix_attrs,
+								update_indexes);
 
 	switch (result)
 	{
diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c
index 8df915f63fb..62879ad3b4e 100644
--- a/src/backend/commands/trigger.c
+++ b/src/backend/commands/trigger.c
@@ -2978,6 +2978,7 @@ ExecBRUpdateTriggers(EState *estate, EPQState *epqstate,
 					 bool is_merge_update)
 {
 	TriggerDesc *trigdesc = relinfo->ri_TrigDesc;
+	TupleDesc	tupdesc = RelationGetDescr(relinfo->ri_RelationDesc);
 	TupleTableSlot *oldslot = ExecGetTriggerOldSlot(estate, relinfo);
 	HeapTuple	newtuple = NULL;
 	HeapTuple	trigtuple;
@@ -2985,7 +2986,9 @@ ExecBRUpdateTriggers(EState *estate, EPQState *epqstate,
 	bool		should_free_new = false;
 	TriggerData LocTriggerData = {0};
 	int			i;
-	Bitmapset  *updatedCols;
+	Bitmapset  *updatedCols = NULL;
+	Bitmapset  *remainingCols = NULL;
+	Bitmapset  *modifiedCols;
 	LockTupleMode lockmode;
 
 	/* Determine lock mode to use */
@@ -3127,6 +3130,21 @@ ExecBRUpdateTriggers(EState *estate, EPQState *epqstate,
 	if (should_free_trig)
 		heap_freetuple(trigtuple);
 
+	/*
+	 * Before UPDATE triggers may have updated attributes not known to
+	 * ExecGetAllUpdatedCols() using heap_modify_tuple() or
+	 * heap_modify_tuple_by_cols().  Find and record those now.
+	 */
+	remainingCols = bms_add_range(NULL, 1 - FirstLowInvalidHeapAttributeNumber,
+							   tupdesc->natts - FirstLowInvalidHeapAttributeNumber);
+	remainingCols = bms_del_members(remainingCols, updatedCols);
+	modifiedCols = ExecCompareSlotAttrs(tupdesc, remainingCols, oldslot, newslot);
+	relinfo->ri_extraUpdatedCols =
+		bms_add_members(relinfo->ri_extraUpdatedCols, modifiedCols);
+
+	bms_free(remainingCols);
+	bms_free(modifiedCols);
+
 	return true;
 }
 
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 2497ee7edc5..63a53d88a82 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -33,6 +33,7 @@
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
+#include "utils/relcache.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/typcache.h"
@@ -906,6 +907,7 @@ ExecSimpleRelationUpdate(ResultRelInfo *resultRelInfo,
 	bool		skip_tuple = false;
 	Relation	rel = resultRelInfo->ri_RelationDesc;
 	ItemPointer tid = &(searchslot->tts_tid);
+	Bitmapset  *mix_attrs;
 
 	/*
 	 * We support only non-system tables, with
@@ -944,8 +946,16 @@ ExecSimpleRelationUpdate(ResultRelInfo *resultRelInfo,
 		if (rel->rd_rel->relispartition)
 			ExecPartitionCheck(resultRelInfo, slot, estate, true);
 
+		mix_attrs = ExecCheckIndexedAttrsForChanges(resultRelInfo,
+													estate, searchslot, slot);
+
+		/*
+		 * Pass the modified indexed attributes identified above so that
+		 * simple_table_tuple_update() doesn't have to re-derive them
+		 * under the buffer lock.
+		 */
 		simple_table_tuple_update(rel, tid, slot, estate->es_snapshot,
-								  &update_indexes);
+								  mix_attrs, &update_indexes);
 
 		conflictindexes = resultRelInfo->ri_onConflictArbiterIndexes;
 
diff --git a/src/backend/executor/execTuples.c b/src/backend/executor/execTuples.c
index b768eae9e53..e95dde2df2e 100644
--- a/src/backend/executor/execTuples.c
+++ b/src/backend/executor/execTuples.c
@@ -66,6 +66,7 @@
 #include "nodes/nodeFuncs.h"
 #include "storage/bufmgr.h"
 #include "utils/builtins.h"
+#include "utils/datum.h"
 #include "utils/expandeddatum.h"
 #include "utils/lsyscache.h"
 #include "utils/typcache.h"
@@ -1929,6 +1930,98 @@ ExecFetchSlotHeapTupleDatum(TupleTableSlot *slot)
 	return ret;
 }
 
+/*
+ * ExecCompareSlotAttrs
+ *
+ * Compare old and new TupleTableSlots to detect which attributes have changed.
+ *
+ * This function serves two purposes:
+ * 1) After trigger execution: detect columns modified by BEFORE row
+ *    triggers beyond those explicitly SET by the statement.
+ * 2) Index maintenance: detect changes in indexed columns ahead of the
+ *    table AM update.
+ *
+ * Parameters:
+ *   tupdesc - RelationGetDescr(relation)
+ *   attrs - bitmapset of candidate attributes (offset by
+ *           FirstLowInvalidHeapAttributeNumber); only members of this
+ *           set are compared
+ *   old_tts - old tuple slot
+ *   new_tts - new tuple slot
+ *
+ * A whole-row reference or a system attribute other than tableOID in the
+ * set is always reported as modified; see the comments in the function
+ * body.
+ *
+ * Returns a Bitmapset of attribute indices (using FirstLowInvalidHeapAttributeNumber
+ * convention) that differ between the two slots.
+ */
+Bitmapset *
+ExecCompareSlotAttrs(TupleDesc tupdesc,
+					 const Bitmapset *attrs,
+					 TupleTableSlot *old_tts,
+					 TupleTableSlot *new_tts)
+{
+	int			attidx = -1;
+	Bitmapset  *modified = NULL;
+
+	while ((attidx = bms_next_member(attrs, attidx)) >= 0)
+	{
+		/* attidx is zero-based, attrnum is the normal attribute number */
+		AttrNumber	attrnum = attidx + FirstLowInvalidHeapAttributeNumber;
+		Datum		old_value,
+					new_value;
+		bool		old_null,
+					new_null;
+		CompactAttribute *att;
+
+		/*
+		 * If it's a whole-tuple reference, say "not equal".  It's not really
+		 * worth supporting this case, since it could only succeed after a
+		 * no-op update, which is hardly a case worth optimizing for.
+		 */
+		if (attrnum == 0)
+		{
+			modified = bms_add_member(modified, attidx);
+			continue;
+		}
+
+		/*
+		 * Likewise, automatically say "not equal" for any system attribute
+		 * other than tableOID; we cannot expect these to be consistent in a
+		 * HOT chain, or even to be set correctly yet in the new tuple.
+		 */
+		if (attrnum < 0)
+		{
+			if (attrnum != TableOidAttributeNumber)
+			{
+				modified = bms_add_member(modified, attidx);
+				continue;
+			}
+		}
+
+		att = TupleDescCompactAttr(tupdesc, attrnum - 1);
+		old_value = slot_getattr(old_tts, attrnum, &old_null);
+		new_value = slot_getattr(new_tts, attrnum, &new_null);
+
+		/* A change to/from NULL, so not equal */
+		if (old_null != new_null)
+		{
+			modified = bms_add_member(modified, attidx);
+			continue;
+		}
+
+		/* Both NULL, no change/unmodified */
+		if (old_null)
+			continue;
+
+		if (!datum_image_eq(old_value, new_value, att->attbyval, att->attlen))
+			modified = bms_add_member(modified, attidx);
+	}
+
+	return modified;
+}
+
 /* ----------------------------------------------------------------
  *				convenience initialization routines
  * ----------------------------------------------------------------
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index 793c76d4f82..18796baed28 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -17,6 +17,7 @@
  *		ExecModifyTable		- retrieve the next tuple from the node
  *		ExecEndModifyTable	- shut down the ModifyTable node
  *		ExecReScanModifyTable - rescan the ModifyTable node
+ *		ExecCheckIndexedAttrsForChanges - find set of updated indexed columns
  *
  *	 NOTES
  *		The ModifyTable node receives input from its outerPlan, which is
@@ -54,6 +55,7 @@
 
 #include "access/htup_details.h"
 #include "access/tableam.h"
+#include "access/tupdesc.h"
 #include "access/xact.h"
 #include "commands/trigger.h"
 #include "executor/execPartition.h"
@@ -188,6 +190,50 @@ static TupleTableSlot *ExecMergeNotMatched(ModifyTableContext *context,
 										   ResultRelInfo *resultRelInfo,
 										   bool canSetTag);
 
+/*
+ * ExecCheckIndexedAttrsForChanges
+ *
+ * Determine which indexes need updating by finding the set of modified indexed
+ * attributes.
+ *
+ * The goal is for the executor to know, ahead of calling into the table AM to
+ * process the update and before calling into the index AM for inserting new
+ * index tuples, which attributes in the new TupleTableSlot, if any, truely
+ * necessitate a new index tuple.
+ *
+ * Returns a Bitmapset of the modified indexed attributes, i.e. those that
+ * require new index tuples.
+ */
+Bitmapset *
+ExecCheckIndexedAttrsForChanges(ResultRelInfo *resultRelInfo,
+								EState *estate,
+								TupleTableSlot *old_tts,
+								TupleTableSlot *new_tts)
+{
+	Relation	relation = resultRelInfo->ri_RelationDesc;
+	TupleDesc	tupdesc = RelationGetDescr(relation);
+	Bitmapset  *attrs,
+			   *mix_attrs;
+
+	/* If no indexes, we're done */
+	if (resultRelInfo->ri_NumIndices == 0)
+		return NULL;
+
+	/*
+	 * Fetch the set of attributes explicitly SET in the UPDATE statement or
+	 * set by a before row trigger (even if not mentioned in the SQL) and get
+	 * the subset that are also indexed.
+	 */
+	attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_INDEXED);
+	attrs = bms_int_members(attrs, ExecGetAllUpdatedCols(resultRelInfo, estate));
+
+	/* Find out which, if any, modified indexed attributes changed */
+	mix_attrs = ExecCompareSlotAttrs(tupdesc, attrs, old_tts, new_tts);
+
+	bms_free(attrs);
+
+	return mix_attrs;
+}
 
 /*
  * Verify that the tuples to be produced by INSERT match the
@@ -2195,14 +2241,17 @@ ExecUpdatePrepareSlot(ResultRelInfo *resultRelInfo,
  */
 static TM_Result
 ExecUpdateAct(ModifyTableContext *context, ResultRelInfo *resultRelInfo,
-			  ItemPointer tupleid, HeapTuple oldtuple, TupleTableSlot *slot,
-			  bool canSetTag, UpdateContext *updateCxt)
+			  ItemPointer tupleid, HeapTuple oldtuple, TupleTableSlot *oldSlot,
+			  TupleTableSlot *slot, bool canSetTag, UpdateContext *updateCxt)
 {
 	EState	   *estate = context->estate;
 	Relation	resultRelationDesc = resultRelInfo->ri_RelationDesc;
 	bool		partition_constraint_failed;
 	TM_Result	result;
 
+	/* The set of modified indexed attributes that trigger new index entries */
+	Bitmapset  *mix_attrs = NULL;
+
 	updateCxt->crossPartUpdate = false;
 
 	/*
@@ -2319,7 +2368,16 @@ lreplace:
 		ExecConstraints(resultRelInfo, slot, estate);
 
 	/*
-	 * replace the heap tuple
+	 * Next up we need to find out the set of indexed attributes that have
+	 * changed in value and should trigger a new index tuple.  We could start
+	 * with the set of updated columns via ExecGetUpdatedCols(), but if we do
+	 * we will overlook attributes directly modified by heap_modify_tuple()
+	 * which are not known to ExecGetUpdatedCols().
+	 */
+	mix_attrs = ExecCheckIndexedAttrsForChanges(resultRelInfo, estate, oldSlot, slot);
+
+	/*
+	 * Call into the table AM to update the heap tuple.
 	 *
 	 * Note: if es_crosscheck_snapshot isn't InvalidSnapshot, we check that
 	 * the row to be updated is visible to that snapshot, and throw a
@@ -2333,6 +2391,7 @@ lreplace:
 								estate->es_crosscheck_snapshot,
 								true /* wait for commit */ ,
 								&context->tmfd, &updateCxt->lockmode,
+								mix_attrs,
 								&updateCxt->updateIndexes);
 
 	return result;
@@ -2555,8 +2614,9 @@ ExecUpdate(ModifyTableContext *context, ResultRelInfo *resultRelInfo,
 		 */
 redo_act:
 		lockedtid = *tupleid;
-		result = ExecUpdateAct(context, resultRelInfo, tupleid, oldtuple, slot,
-							   canSetTag, &updateCxt);
+
+		result = ExecUpdateAct(context, resultRelInfo, tupleid, oldtuple, oldSlot,
+							   slot, canSetTag, &updateCxt);
 
 		/*
 		 * If ExecUpdateAct reports that a cross-partition update was done,
@@ -3406,8 +3466,8 @@ lmerge_matched:
 					Assert(oldtuple == NULL);
 
 					result = ExecUpdateAct(context, resultRelInfo, tupleid,
-										   NULL, newslot, canSetTag,
-										   &updateCxt);
+										   NULL, resultRelInfo->ri_oldTupleSlot,
+										   newslot, canSetTag, &updateCxt);
 
 					/*
 					 * As in ExecUpdate(), if ExecUpdateAct() reports that a
@@ -3432,6 +3492,7 @@ lmerge_matched:
 									   tupleid, NULL, newslot);
 					mtstate->mt_merge_updated += 1;
 				}
+
 				break;
 
 			case CMD_DELETE:
@@ -4539,7 +4600,7 @@ ExecModifyTable(PlanState *pstate)
 		 * For UPDATE/DELETE/MERGE, fetch the row identity info for the tuple
 		 * to be updated/deleted/merged.  For a heap relation, that's a TID;
 		 * otherwise we may have a wholerow junk attr that carries the old
-		 * tuple in toto.  Keep this in step with the part of
+		 * tuple in total.  Keep this in step with the part of
 		 * ExecInitModifyTable that sets up ri_RowIdAttNo.
 		 */
 		if (operation == CMD_UPDATE || operation == CMD_DELETE ||
@@ -4719,6 +4780,7 @@ ExecModifyTable(PlanState *pstate)
 				/* Now apply the update. */
 				slot = ExecUpdate(&context, resultRelInfo, tupleid, oldtuple,
 								  oldSlot, slot, node->canSetTag);
+
 				if (tuplock)
 					UnlockTuple(resultRelInfo->ri_RelationDesc, tupleid,
 								InplaceUpdateTupleLock);
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 32725c48623..0b044154195 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -2917,7 +2917,6 @@ apply_handle_update_internal(ApplyExecutionData *edata,
 	TupleTableSlot *localslot = NULL;
 	ConflictTupleInfo conflicttuple = {0};
 	bool		found;
-	MemoryContext oldctx;
 
 	EvalPlanQualInit(&epqstate, estate, NULL, NIL, -1, NIL);
 	ExecOpenIndices(relinfo, false);
@@ -2956,15 +2955,13 @@ apply_handle_update_internal(ApplyExecutionData *edata,
 		}
 
 		/* Process and store remote tuple in the slot */
-		oldctx = MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));
 		slot_modify_data(remoteslot, localslot, relmapentry, newtup);
-		MemoryContextSwitchTo(oldctx);
 
 		EvalPlanQualSetSlot(&epqstate, remoteslot);
 
 		InitConflictIndexes(relinfo);
 
-		/* Do the actual update. */
+		/* First check privileges */
 		TargetPrivilegesCheck(relinfo->ri_RelationDesc, ACL_UPDATE);
 		ExecSimpleRelationUpdate(relinfo, estate, &epqstate, localslot,
 								 remoteslot);
@@ -3522,10 +3519,7 @@ apply_handle_tuple_routing(ApplyExecutionData *edata,
 				 * Apply the update to the local tuple, putting the result in
 				 * remoteslot_part.
 				 */
-				oldctx = MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));
-				slot_modify_data(remoteslot_part, localslot, part_entry,
-								 newtup);
-				MemoryContextSwitchTo(oldctx);
+				slot_modify_data(remoteslot_part, localslot, part_entry, newtup);
 
 				EvalPlanQualInit(&epqstate, estate, NULL, NIL, -1, NIL);
 
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 6b634c9fff1..547cf1d054d 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -2477,6 +2477,7 @@ RelationDestroyRelation(Relation relation, bool remember_tupdesc)
 	bms_free(relation->rd_idattr);
 	bms_free(relation->rd_hotblockingattr);
 	bms_free(relation->rd_summarizedattr);
+	bms_free(relation->rd_indexedattr);
 	if (relation->rd_pubdesc)
 		pfree(relation->rd_pubdesc);
 	if (relation->rd_options)
@@ -5278,6 +5279,7 @@ RelationGetIndexPredicate(Relation relation)
  *									index (empty if FULL)
  *	INDEX_ATTR_BITMAP_HOT_BLOCKING	Columns that block updates from being HOT
  *	INDEX_ATTR_BITMAP_SUMMARIZED	Columns included in summarizing indexes
+ *	INDEX_ATTR_BITMAP_INDEXED		Columns referenced by indexes
  *
  * Attribute numbers are offset by FirstLowInvalidHeapAttributeNumber so that
  * we can include system attributes (e.g., OID) in the bitmap representation.
@@ -5301,6 +5303,7 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 	Bitmapset  *pkindexattrs;	/* columns in the primary index */
 	Bitmapset  *idindexattrs;	/* columns in the replica identity */
 	Bitmapset  *hotblockingattrs;	/* columns with HOT blocking indexes */
+	Bitmapset  *indexedattrs;	/* columns referenced by indexes */
 	Bitmapset  *summarizedattrs;	/* columns with summarizing indexes */
 	List	   *indexoidlist;
 	List	   *newindexoidlist;
@@ -5324,6 +5327,8 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 				return bms_copy(relation->rd_hotblockingattr);
 			case INDEX_ATTR_BITMAP_SUMMARIZED:
 				return bms_copy(relation->rd_summarizedattr);
+			case INDEX_ATTR_BITMAP_INDEXED:
+				return bms_copy(relation->rd_indexedattr);
 			default:
 				elog(ERROR, "unknown attrKind %u", attrKind);
 		}
@@ -5368,6 +5373,7 @@ restart:
 	idindexattrs = NULL;
 	hotblockingattrs = NULL;
 	summarizedattrs = NULL;
+	indexedattrs = NULL;
 	foreach(l, indexoidlist)
 	{
 		Oid			indexOid = lfirst_oid(l);
@@ -5500,10 +5506,14 @@ restart:
 		bms_free(idindexattrs);
 		bms_free(hotblockingattrs);
 		bms_free(summarizedattrs);
+		/* indexedattrs not yet initialized */
 
 		goto restart;
 	}
 
+	/* Set indexed attributes to track all referenced attributes */
+	indexedattrs = bms_union(hotblockingattrs, summarizedattrs);
+
 	/* Don't leak the old values of these bitmaps, if any */
 	relation->rd_attrsvalid = false;
 	bms_free(relation->rd_keyattr);
@@ -5516,6 +5526,8 @@ restart:
 	relation->rd_hotblockingattr = NULL;
 	bms_free(relation->rd_summarizedattr);
 	relation->rd_summarizedattr = NULL;
+	bms_free(relation->rd_indexedattr);
+	relation->rd_indexedattr = NULL;
 
 	/*
 	 * Now save copies of the bitmaps in the relcache entry.  We intentionally
@@ -5530,6 +5542,7 @@ restart:
 	relation->rd_idattr = bms_copy(idindexattrs);
 	relation->rd_hotblockingattr = bms_copy(hotblockingattrs);
 	relation->rd_summarizedattr = bms_copy(summarizedattrs);
+	relation->rd_indexedattr = bms_copy(indexedattrs);
 	relation->rd_attrsvalid = true;
 	MemoryContextSwitchTo(oldcxt);
 
@@ -5546,6 +5559,8 @@ restart:
 			return hotblockingattrs;
 		case INDEX_ATTR_BITMAP_SUMMARIZED:
 			return summarizedattrs;
+		case INDEX_ATTR_BITMAP_INDEXED:
+			return indexedattrs;
 		default:
 			elog(ERROR, "unknown attrKind %u", attrKind);
 			return NULL;
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 3c0961ab36b..a56f3d1f378 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -368,6 +368,7 @@ extern TM_Result heap_update(Relation relation, const ItemPointerData *otid,
 							 HeapTuple newtup,
 							 CommandId cid, Snapshot crosscheck, bool wait,
 							 TM_FailureData *tmfd, LockTupleMode *lockmode,
+							 const Bitmapset *mix_attrs, bool mix_attrs_valid,
 							 TU_UpdateIndexes *update_indexes);
 extern TM_Result heap_lock_tuple(Relation relation, HeapTuple tuple,
 								 CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy,
diff --git a/src/include/access/tableam.h b/src/include/access/tableam.h
index 251379016b0..3b080aa3711 100644
--- a/src/include/access/tableam.h
+++ b/src/include/access/tableam.h
@@ -549,6 +549,7 @@ typedef struct TableAmRoutine
 								 bool wait,
 								 TM_FailureData *tmfd,
 								 LockTupleMode *lockmode,
+								 const Bitmapset *updated_cols,
 								 TU_UpdateIndexes *update_indexes);
 
 	/* see table_tuple_lock() for reference about parameters */
@@ -1524,12 +1525,12 @@ static inline TM_Result
 table_tuple_update(Relation rel, ItemPointer otid, TupleTableSlot *slot,
 				   CommandId cid, Snapshot snapshot, Snapshot crosscheck,
 				   bool wait, TM_FailureData *tmfd, LockTupleMode *lockmode,
-				   TU_UpdateIndexes *update_indexes)
+				   const Bitmapset *mix_cols, TU_UpdateIndexes *update_indexes)
 {
 	return rel->rd_tableam->tuple_update(rel, otid, slot,
 										 cid, snapshot, crosscheck,
-										 wait, tmfd,
-										 lockmode, update_indexes);
+										 wait, tmfd, lockmode,
+										 mix_cols, update_indexes);
 }
 
 /*
@@ -2010,6 +2011,7 @@ extern void simple_table_tuple_delete(Relation rel, ItemPointer tid,
 									  Snapshot snapshot);
 extern void simple_table_tuple_update(Relation rel, ItemPointer otid,
 									  TupleTableSlot *slot, Snapshot snapshot,
+									  const Bitmapset *mix_attrs,
 									  TU_UpdateIndexes *update_indexes);
 
 
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index d46ba59895d..7b0019fe15b 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -606,6 +606,10 @@ extern TupleDesc ExecCleanTypeFromTL(List *targetList);
 extern TupleDesc ExecTypeFromExprList(List *exprList);
 extern void ExecTypeSetColNames(TupleDesc typeInfo, List *namesList);
 extern void UpdateChangedParamSet(PlanState *node, Bitmapset *newchg);
+extern Bitmapset *ExecCompareSlotAttrs(TupleDesc tupdesc,
+									   const Bitmapset *attrs,
+									   TupleTableSlot *old_tts,
+									   TupleTableSlot *new_tts);
 
 typedef struct TupOutputState
 {
@@ -803,5 +807,9 @@ extern ResultRelInfo *ExecLookupResultRelByOid(ModifyTableState *node,
 											   Oid resultoid,
 											   bool missing_ok,
 											   bool update_cache);
+extern Bitmapset *ExecCheckIndexedAttrsForChanges(ResultRelInfo *relinfo,
+												  EState *estate,
+												  TupleTableSlot *old_tts,
+												  TupleTableSlot *new_tts);
 
 #endif							/* EXECUTOR_H  */
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index 236830f6b93..df5426fd7fb 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -164,6 +164,7 @@ typedef struct RelationData
 	Bitmapset  *rd_idattr;		/* included in replica identity index */
 	Bitmapset  *rd_hotblockingattr; /* cols blocking HOT update */
 	Bitmapset  *rd_summarizedattr;	/* cols indexed by summarizing indexes */
+	Bitmapset  *rd_indexedattr; /* all cols referenced by indexes */
 
 	PublicationDesc *rd_pubdesc;	/* publication descriptor, or NULL */
 
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index 2700224939a..5834ab7b903 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -71,6 +71,7 @@ typedef enum IndexAttrBitmapKind
 	INDEX_ATTR_BITMAP_IDENTITY_KEY,
 	INDEX_ATTR_BITMAP_HOT_BLOCKING,
 	INDEX_ATTR_BITMAP_SUMMARIZED,
+	INDEX_ATTR_BITMAP_INDEXED,
 } IndexAttrBitmapKind;
 
 extern Bitmapset *RelationGetIndexAttrBitmap(Relation relation,
-- 
2.51.2

From 9aba4466008a79556e31570b3a6bd94451007daa Mon Sep 17 00:00:00 2001
From: Greg Burd <[email protected]>
Date: Thu, 19 Feb 2026 14:08:17 -0500
Subject: [PATCH v20260219 2/2] Refactor heap_update() and move attribute
 determination into callers

Move the determination of modified columns out of heap_update() and into
its callers, simple_heap_update() and heapam_tuple_update(), lifting the
logic up to the executor/handler level.

- Remove modified_attrs and modified_attrs_valid parameters from
  heap_update()
- Extract buffer management (pin, lock, page fetch) into
  simple_heap_update() and heapam_tuple_update() before calling
  heap_update()
- Create helper functions: HeapUpdateHotAllowable(),
  HeapUpdateRequiresReplicaId(), HeapUpdateDetermineLockmode()
- Pass pre-computed values (hot_allowed, rep_id_key_required, and
  lockmode) to heap_update() instead of deriving them within the function
- Add ORDER BY clauses to generated_virtual.sql and updatable_views.sql
  to ensure deterministic result ordering (XXX: under review...)
---
 src/backend/access/heap/heapam.c              | 775 +++++++++---------
 src/backend/access/heap/heapam_handler.c      | 105 ++-
 src/backend/executor/execTuples.c             |  55 +-
 src/backend/executor/nodeModifyTable.c        |   7 +-
 src/backend/utils/cache/relcache.c            |  35 +-
 src/include/access/heapam.h                   |  25 +-
 src/include/utils/rel.h                       |   1 -
 src/include/utils/relcache.h                  |   1 -
 .../regress/expected/generated_virtual.out    |   4 +-
 src/test/regress/expected/updatable_views.out |   4 +-
 src/test/regress/sql/generated_virtual.sql    |   4 +-
 src/test/regress/sql/updatable_views.sql      |   2 +-
 12 files changed, 539 insertions(+), 479 deletions(-)

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index ab8b6ddb8de..93fd714ce58 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -29,6 +29,8 @@
  *
  *-------------------------------------------------------------------------
  */
 #include "postgres.h"
 
+#include "access/sysattr.h"
+#include "nodes/lockoptions.h"
 #include "access/heapam.h"
@@ -37,6 +39,7 @@
 #include "access/multixact.h"
 #include "access/subtrans.h"
 #include "access/syncscan.h"
+#include "access/tableam.h"
 #include "access/valid.h"
 #include "access/visibilitymap.h"
 #include "access/xloginsert.h"
@@ -51,6 +54,7 @@
 #include "utils/datum.h"
 #include "utils/injection_point.h"
 #include "utils/inval.h"
+#include "utils/relcache.h"
 #include "utils/spccache.h"
 #include "utils/syscache.h"
 
@@ -62,16 +66,8 @@ static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 								  HeapTuple newtup, HeapTuple old_key_tuple,
 								  bool all_visible_cleared, bool new_all_visible_cleared);
 #ifdef USE_ASSERT_CHECKING
-static void check_lock_if_inplace_updateable_rel(Relation relation,
-												 const ItemPointerData *otid,
-												 HeapTuple newtup);
 static void check_inplace_rel_lock(HeapTuple oldtup);
 #endif
-static Bitmapset *HeapDetermineColumnsInfo(Relation relation,
-										   Bitmapset *interesting_cols,
-										   Bitmapset *external_cols,
-										   HeapTuple oldtup, HeapTuple newtup,
-										   bool *has_external);
 static bool heap_acquire_tuplock(Relation relation, const ItemPointerData *tid,
 								 LockTupleMode mode, LockWaitPolicy wait_policy,
 								 bool *have_tuple_lock);
@@ -3300,7 +3296,10 @@ simple_heap_delete(Relation relation, const ItemPointerData *tid)
  *	heap_update - replace a tuple
  *
  * See table_tuple_update() for an explanation of the parameters, except that
- * this routine directly takes a tuple rather than a slot.
+ * this routine directly takes a heap tuple rather than a slot.
+ *
+ * The caller must already hold a pin and exclusive lock on the buffer
+ * containing the old tuple; this routine takes over releasing both.
  *
  * In the failure cases, the routine fills *tmfd with the tuple's t_ctid,
  * t_xmax (resolving a possible MultiXact, if necessary), and t_cmax (the last
@@ -3308,30 +3307,19 @@ simple_heap_delete(Relation relation, const ItemPointerData *tid)
  * generated by another transaction).
  */
 TM_Result
-heap_update(Relation relation, const ItemPointerData *otid, HeapTuple newtup,
+heap_update(Relation relation, HeapTupleData *oldtup, HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			TM_FailureData *tmfd, LockTupleMode *lockmode,
-			const Bitmapset *modified_attrs, bool modified_attrs_valid,
-			TU_UpdateIndexes *update_indexes)
+			TM_FailureData *tmfd, const LockTupleMode *lockmode,
+			Buffer buffer, Page page, BlockNumber block, ItemId lp,
+			bool hot_allowed, Buffer vmbuffer, bool rep_id_key_required)
 {
 	TM_Result	result;
 	TransactionId xid = GetCurrentTransactionId();
-	Bitmapset  *hot_attrs;
-	Bitmapset  *sum_attrs;
-	Bitmapset  *key_attrs;
-	Bitmapset  *id_attrs;
-	Bitmapset  *interesting_attrs;
-	ItemId		lp;
-	HeapTupleData oldtup;
 	HeapTuple	heaptup;
 	HeapTuple	old_key_tuple = NULL;
 	bool		old_key_copied = false;
-	Page		page;
-	BlockNumber block;
 	MultiXactStatus mxact_status;
-	Buffer		buffer,
-				newbuf,
-				vmbuffer = InvalidBuffer,
+	Buffer		newbuf,
 				vmbuffer_new = InvalidBuffer;
 	bool		need_toast;
 	Size		newtupsize,
@@ -3339,13 +3327,11 @@ heap_update(Relation relation, const ItemPointerData *otid, HeapTuple newtup,
 	bool		have_tuple_lock = false;
 	bool		iscombo;
 	bool		use_hot_update = false;
-	bool		summarized_update = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
 	bool		all_visible_cleared_new = false;
 	bool		checked_lockers;
 	bool		locker_remains;
-	bool		rep_id_key_required = false;
 	TransactionId xmax_new_tuple,
 				xmax_old_tuple;
 	uint16		infomask_old_tuple,
@@ -3353,204 +3339,13 @@ heap_update(Relation relation, const ItemPointerData *otid, HeapTuple newtup,
 				infomask_new_tuple,
 				infomask2_new_tuple;
 
-	Assert(ItemPointerIsValid(otid));
-
-	/* Cheap, simplistic check that the tuple matches the rel's rowtype. */
-	Assert(HeapTupleHeaderGetNatts(newtup->t_data) <=
-		   RelationGetNumberOfAttributes(relation));
-
+	Assert(BufferIsLockedByMe(buffer));
+	Assert(ItemIdIsNormal(lp));
 	AssertHasSnapshotForToast(relation);
 
-	/*
-	 * Forbid this during a parallel operation, lest it allocate a combo CID.
-	 * Other workers might need that combo CID for visibility checks, and we
-	 * have no provision for broadcasting it to them.
-	 */
-	if (IsInParallelMode())
-		ereport(ERROR,
-				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
-				 errmsg("cannot update tuples during a parallel operation")));
-
-#ifdef USE_ASSERT_CHECKING
-	check_lock_if_inplace_updateable_rel(relation, otid, newtup);
-#endif
-
-	/*
-	 * Fetch the list of attributes to be checked for various operations.
-	 *
-	 * For HOT considerations, this is wasted effort if we fail to update or
-	 * have to put the new tuple on a different page.  But we must compute the
-	 * list before obtaining buffer lock --- in the worst case, if we are
-	 * doing an update on one of the relevant system catalogs, we could
-	 * deadlock if we try to fetch the list later.  In any case, the relcache
-	 * caches the data so this is usually pretty cheap.
-	 *
-	 * We also need columns used by the replica identity and columns that are
-	 * considered the "key" of rows in the table.
-	 *
-	 * Note that we get copies of each bitmap, so we need not worry about
-	 * relcache flush happening midway through.
-	 */
-	hot_attrs = RelationGetIndexAttrBitmap(relation,
-										   INDEX_ATTR_BITMAP_HOT_BLOCKING);
-	sum_attrs = RelationGetIndexAttrBitmap(relation,
-										   INDEX_ATTR_BITMAP_SUMMARIZED);
-	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
-	id_attrs = RelationGetIndexAttrBitmap(relation,
-										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
-	interesting_attrs = NULL;
-	interesting_attrs = bms_add_members(interesting_attrs, hot_attrs);
-	interesting_attrs = bms_add_members(interesting_attrs, sum_attrs);
-	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
-	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
-
-	block = ItemPointerGetBlockNumber(otid);
-	INJECTION_POINT("heap_update-before-pin", NULL);
-	buffer = ReadBuffer(relation, block);
-	page = BufferGetPage(buffer);
-
-	/*
-	 * Before locking the buffer, pin the visibility map page if it appears to
-	 * be necessary.  Since we haven't got the lock yet, someone else might be
-	 * in the middle of changing this, so we'll need to recheck after we have
-	 * the lock.
-	 */
-	if (PageIsAllVisible(page))
-		visibilitymap_pin(relation, block, &vmbuffer);
-
-	LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
-
-	lp = PageGetItemId(page, ItemPointerGetOffsetNumber(otid));
-
-	/*
-	 * Usually, a buffer pin and/or snapshot blocks pruning of otid, ensuring
-	 * we see LP_NORMAL here.  When the otid origin is a syscache, we may have
-	 * neither a pin nor a snapshot.  Hence, we may see other LP_ states, each
-	 * of which indicates concurrent pruning.
-	 *
-	 * Failing with TM_Updated would be most accurate.  However, unlike other
-	 * TM_Updated scenarios, we don't know the successor ctid in LP_UNUSED and
-	 * LP_DEAD cases.  While the distinction between TM_Updated and TM_Deleted
-	 * does matter to SQL statements UPDATE and MERGE, those SQL statements
-	 * hold a snapshot that ensures LP_NORMAL.  Hence, the choice between
-	 * TM_Updated and TM_Deleted affects only the wording of error messages.
-	 * Settle on TM_Deleted, for two reasons.  First, it avoids complicating
-	 * the specification of when tmfd->ctid is valid.  Second, it creates
-	 * error log evidence that we took this branch.
-	 *
-	 * Since it's possible to see LP_UNUSED at otid, it's also possible to see
-	 * LP_NORMAL for a tuple that replaced LP_UNUSED.  If it's a tuple for an
-	 * unrelated row, we'll fail with "duplicate key value violates unique".
-	 * XXX if otid is the live, newer version of the newtup row, we'll discard
-	 * changes originating in versions of this catalog row after the version
-	 * the caller got from syscache.  See syscache-update-pruned.spec.
-	 */
-	if (!ItemIdIsNormal(lp))
-	{
-		Assert(RelationSupportsSysCache(RelationGetRelid(relation)));
-
-		UnlockReleaseBuffer(buffer);
-		Assert(!have_tuple_lock);
-		if (vmbuffer != InvalidBuffer)
-			ReleaseBuffer(vmbuffer);
-		tmfd->ctid = *otid;
-		tmfd->xmax = InvalidTransactionId;
-		tmfd->cmax = InvalidCommandId;
-		*update_indexes = TU_None;
-
-		bms_free(hot_attrs);
-		bms_free(sum_attrs);
-		bms_free(key_attrs);
-		bms_free(id_attrs);
-		/* modified_attrs not yet initialized */
-		bms_free(interesting_attrs);
-		return TM_Deleted;
-	}
-
-	/*
-	 * Fill in enough data in oldtup for HeapDetermineColumnsInfo to work
-	 * properly.
-	 */
-	oldtup.t_tableOid = RelationGetRelid(relation);
-	oldtup.t_data = (HeapTupleHeader) PageGetItem(page, lp);
-	oldtup.t_len = ItemIdGetLength(lp);
-	oldtup.t_self = *otid;
-
-	/* the new tuple is ready, except for this: */
+	/* The new tuple is ready, except for this: */
 	newtup->t_tableOid = RelationGetRelid(relation);
 
-	/*
-	 * Determine columns modified by the update.  Additionally, identify
-	 * whether any of the unmodified replica identity key attributes in the
-	 * old tuple is externally stored or not.  This is required because for
-	 * such attributes the flattened value won't be WAL logged as part of the
-	 * new tuple so we must include it as part of the old_key_tuple.  See
-	 * ExtractReplicaIdentity.
-	 */
-	if (!modified_attrs_valid)
-	{
-		bool		id_has_external = false;
-
-		modified_attrs = HeapDetermineColumnsInfo(relation, interesting_attrs,
-												  id_attrs, &oldtup,
-												  newtup, &id_has_external);
-		rep_id_key_required = id_has_external ||
-			bms_overlap(modified_attrs, id_attrs);
-	}
-	else
-	{
-		/*
-		 * ExtractReplicatIdentity() needs to know if a modified attrbute is
-		 * used as a replica indentity or if any of the unmodified indexed
-		 * attributes in the old tuple are stored externally and used as a
-		 * replica identity.
-		 */
-		rep_id_key_required = bms_overlap(modified_attrs, id_attrs);
-		if (!rep_id_key_required)
-		{
-			Bitmapset  *attrs;
-			TupleDesc	tupdesc = RelationGetDescr(relation);
-			int			attidx = -1;
-
-			/* Check all unmodified indexed replica identity key attributes */
-			attrs = bms_difference(interesting_attrs, modified_attrs);
-			attrs = bms_int_members(attrs, id_attrs);
-
-			while ((attidx = bms_next_member(attrs, attidx)) >= 0)
-			{
-				/*
-				 * attidx is zero-based, attrnum is the normal attribute
-				 * number
-				 */
-				AttrNumber	attrnum = attidx + FirstLowInvalidHeapAttributeNumber;
-				Datum		value;
-				bool		isnull;
-
-				/*
-				 * System attributes are not added into interesting_attrs in
-				 * relcache.
-				 */
-				Assert(attrnum > 0);
-
-				value = heap_getattr(&oldtup, attrnum, tupdesc, &isnull);
-
-				/* No need to check attributes that can't be stored externally */
-				if (isnull ||
-					TupleDescCompactAttr(tupdesc, attrnum - 1)->attlen != -1)
-					continue;
-
-				/* Check if the old tuple's attribute is stored externally */
-				if (VARATT_IS_EXTERNAL((struct varlena *) DatumGetPointer(value)))
-				{
-					rep_id_key_required = true;
-					break;
-				}
-			}
-
-			bms_free(attrs);
-		}
-	}
-
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
 	 * This allows for more concurrency when we are running simultaneously
@@ -3562,9 +3357,8 @@ heap_update(Relation relation, const ItemPointerData *otid, HeapTuple newtup,
 	 * is updates that don't manipulate key columns, not those that
 	 * serendipitously arrive at the same key values.
 	 */
-	if (!bms_overlap(modified_attrs, key_attrs))
+	if (*lockmode == LockTupleNoKeyExclusive)
 	{
-		*lockmode = LockTupleNoKeyExclusive;
 		mxact_status = MultiXactStatusNoKeyUpdate;
 		key_intact = true;
 
@@ -3581,22 +3375,15 @@ heap_update(Relation relation, const ItemPointerData *otid, HeapTuple newtup,
 	}
 	else
 	{
-		*lockmode = LockTupleExclusive;
+		Assert(*lockmode == LockTupleExclusive);
 		mxact_status = MultiXactStatusUpdate;
 		key_intact = false;
 	}
 
-	/*
-	 * Note: beyond this point, use oldtup not otid to refer to old tuple.
-	 * otid may very well point at newtup->t_self, which we will overwrite
-	 * with the new tuple's location, so there's great risk of confusion if we
-	 * use otid anymore.
-	 */
-
 l2:
 	checked_lockers = false;
 	locker_remains = false;
-	result = HeapTupleSatisfiesUpdate(&oldtup, cid, buffer);
+	result = HeapTupleSatisfiesUpdate(oldtup, cid, buffer);
 
 	/* see below about the "no wait" case */
 	Assert(result != TM_BeingModified || wait);
@@ -3628,8 +3415,8 @@ l2:
 		 */
 
 		/* must copy state data before unlocking buffer */
-		xwait = HeapTupleHeaderGetRawXmax(oldtup.t_data);
-		infomask = oldtup.t_data->t_infomask;
+		xwait = HeapTupleHeaderGetRawXmax(oldtup->t_data);
+		infomask = oldtup->t_data->t_infomask;
 
 		/*
 		 * Now we have to do something about the existing locker.  If it's a
@@ -3669,13 +3456,12 @@ l2:
 				 * requesting a lock and already have one; avoids deadlock).
 				 */
 				if (!current_is_member)
-					heap_acquire_tuplock(relation, &(oldtup.t_self), *lockmode,
+					heap_acquire_tuplock(relation, &oldtup->t_self, *lockmode,
 										 LockWaitBlock, &have_tuple_lock);
 
 				/* wait for multixact */
 				MultiXactIdWait((MultiXactId) xwait, mxact_status, infomask,
-								relation, &oldtup.t_self, XLTW_Update,
-								&remain);
+								relation, &oldtup->t_self, XLTW_Update, &remain);
 				checked_lockers = true;
 				locker_remains = remain != 0;
 				LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
@@ -3685,9 +3471,9 @@ l2:
 				 * could update this tuple before we get to this point.  Check
 				 * for xmax change, and start over if so.
 				 */
-				if (xmax_infomask_changed(oldtup.t_data->t_infomask,
+				if (xmax_infomask_changed(oldtup->t_data->t_infomask,
 										  infomask) ||
-					!TransactionIdEquals(HeapTupleHeaderGetRawXmax(oldtup.t_data),
+					!TransactionIdEquals(HeapTupleHeaderGetRawXmax(oldtup->t_data),
 										 xwait))
 					goto l2;
 			}
@@ -3712,8 +3498,8 @@ l2:
 			 * before this one, which are important to keep in case this
 			 * subxact aborts.
 			 */
-			if (!HEAP_XMAX_IS_LOCKED_ONLY(oldtup.t_data->t_infomask))
-				update_xact = HeapTupleGetUpdateXid(oldtup.t_data);
+			if (!HEAP_XMAX_IS_LOCKED_ONLY(oldtup->t_data->t_infomask))
+				update_xact = HeapTupleGetUpdateXid(oldtup->t_data);
 			else
 				update_xact = InvalidTransactionId;
 
@@ -3754,9 +3540,9 @@ l2:
 			 * lock.
 			 */
 			LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
-			heap_acquire_tuplock(relation, &(oldtup.t_self), *lockmode,
+			heap_acquire_tuplock(relation, &oldtup->t_self, *lockmode,
 								 LockWaitBlock, &have_tuple_lock);
-			XactLockTableWait(xwait, relation, &oldtup.t_self,
+			XactLockTableWait(xwait, relation, &oldtup->t_self,
 							  XLTW_Update);
 			checked_lockers = true;
 			LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
@@ -3766,20 +3552,20 @@ l2:
 			 * other xact could update this tuple before we get to this point.
 			 * Check for xmax change, and start over if so.
 			 */
-			if (xmax_infomask_changed(oldtup.t_data->t_infomask, infomask) ||
+			if (xmax_infomask_changed(oldtup->t_data->t_infomask, infomask) ||
 				!TransactionIdEquals(xwait,
-									 HeapTupleHeaderGetRawXmax(oldtup.t_data)))
+									 HeapTupleHeaderGetRawXmax(oldtup->t_data)))
 				goto l2;
 
 			/* Otherwise check if it committed or aborted */
-			UpdateXmaxHintBits(oldtup.t_data, buffer, xwait);
-			if (oldtup.t_data->t_infomask & HEAP_XMAX_INVALID)
+			UpdateXmaxHintBits(oldtup->t_data, buffer, xwait);
+			if (oldtup->t_data->t_infomask & HEAP_XMAX_INVALID)
 				can_continue = true;
 		}
 
 		if (can_continue)
 			result = TM_Ok;
-		else if (!ItemPointerEquals(&oldtup.t_self, &oldtup.t_data->t_ctid))
+		else if (!ItemPointerEquals(&oldtup->t_self, &oldtup->t_data->t_ctid))
 			result = TM_Updated;
 		else
 			result = TM_Deleted;
@@ -3792,39 +3578,32 @@ l2:
 			   result == TM_Updated ||
 			   result == TM_Deleted ||
 			   result == TM_BeingModified);
-		Assert(!(oldtup.t_data->t_infomask & HEAP_XMAX_INVALID));
+		Assert(!(oldtup->t_data->t_infomask & HEAP_XMAX_INVALID));
 		Assert(result != TM_Updated ||
-			   !ItemPointerEquals(&oldtup.t_self, &oldtup.t_data->t_ctid));
+			   !ItemPointerEquals(&oldtup->t_self, &oldtup->t_data->t_ctid));
 	}
 
 	if (crosscheck != InvalidSnapshot && result == TM_Ok)
 	{
 		/* Perform additional check for transaction-snapshot mode RI updates */
-		if (!HeapTupleSatisfiesVisibility(&oldtup, crosscheck, buffer))
+		if (!HeapTupleSatisfiesVisibility(oldtup, crosscheck, buffer))
 			result = TM_Updated;
 	}
 
 	if (result != TM_Ok)
 	{
-		tmfd->ctid = oldtup.t_data->t_ctid;
-		tmfd->xmax = HeapTupleHeaderGetUpdateXid(oldtup.t_data);
+		tmfd->ctid = oldtup->t_data->t_ctid;
+		tmfd->xmax = HeapTupleHeaderGetUpdateXid(oldtup->t_data);
 		if (result == TM_SelfModified)
-			tmfd->cmax = HeapTupleHeaderGetCmax(oldtup.t_data);
+			tmfd->cmax = HeapTupleHeaderGetCmax(oldtup->t_data);
 		else
 			tmfd->cmax = InvalidCommandId;
 		UnlockReleaseBuffer(buffer);
 		if (have_tuple_lock)
-			UnlockTupleTuplock(relation, &(oldtup.t_self), *lockmode);
+			UnlockTupleTuplock(relation, &oldtup->t_self, *lockmode);
 		if (vmbuffer != InvalidBuffer)
 			ReleaseBuffer(vmbuffer);
-		*update_indexes = TU_None;
 
-		bms_free(hot_attrs);
-		bms_free(sum_attrs);
-		bms_free(key_attrs);
-		bms_free(id_attrs);
-		/* modified attrs is passed in and free'd by the caller, or NULL */
-		bms_free(interesting_attrs);
 		return result;
 	}
 
@@ -3851,9 +3630,9 @@ l2:
 	 * If the tuple we're updating is locked, we need to preserve the locking
 	 * info in the old tuple's Xmax.  Prepare a new Xmax value for this.
 	 */
-	compute_new_xmax_infomask(HeapTupleHeaderGetRawXmax(oldtup.t_data),
-							  oldtup.t_data->t_infomask,
-							  oldtup.t_data->t_infomask2,
+	compute_new_xmax_infomask(HeapTupleHeaderGetRawXmax(oldtup->t_data),
+							  oldtup->t_data->t_infomask,
+							  oldtup->t_data->t_infomask2,
 							  xid, *lockmode, true,
 							  &xmax_old_tuple, &infomask_old_tuple,
 							  &infomask2_old_tuple);
@@ -3865,12 +3644,12 @@ l2:
 	 * tuple.  (In rare cases that might also be InvalidTransactionId and yet
 	 * not have the HEAP_XMAX_INVALID bit set; that's fine.)
 	 */
-	if ((oldtup.t_data->t_infomask & HEAP_XMAX_INVALID) ||
-		HEAP_LOCKED_UPGRADED(oldtup.t_data->t_infomask) ||
+	if ((oldtup->t_data->t_infomask & HEAP_XMAX_INVALID) ||
+		HEAP_LOCKED_UPGRADED(oldtup->t_data->t_infomask) ||
 		(checked_lockers && !locker_remains))
 		xmax_new_tuple = InvalidTransactionId;
 	else
-		xmax_new_tuple = HeapTupleHeaderGetRawXmax(oldtup.t_data);
+		xmax_new_tuple = HeapTupleHeaderGetRawXmax(oldtup->t_data);
 
 	if (!TransactionIdIsValid(xmax_new_tuple))
 	{
@@ -3885,7 +3664,7 @@ l2:
 		 * Note that since we're doing an update, the only possibility is that
 		 * the lockers had FOR KEY SHARE lock.
 		 */
-		if (oldtup.t_data->t_infomask & HEAP_XMAX_IS_MULTI)
+		if (oldtup->t_data->t_infomask & HEAP_XMAX_IS_MULTI)
 		{
 			GetMultiXactIdHintBits(xmax_new_tuple, &infomask_new_tuple,
 								   &infomask2_new_tuple);
@@ -3913,7 +3692,7 @@ l2:
 	 * Replace cid with a combo CID if necessary.  Note that we already put
 	 * the plain cid into the new tuple.
 	 */
-	HeapTupleHeaderAdjustCmax(oldtup.t_data, &cid, &iscombo);
+	HeapTupleHeaderAdjustCmax(oldtup->t_data, &cid, &iscombo);
 
 	/*
 	 * If the toaster needs to be activated, OR if the new tuple will not fit
@@ -3930,12 +3709,12 @@ l2:
 		relation->rd_rel->relkind != RELKIND_MATVIEW)
 	{
 		/* toast table entries should never be recursively toasted */
-		Assert(!HeapTupleHasExternal(&oldtup));
+		Assert(!HeapTupleHasExternal(oldtup));
 		Assert(!HeapTupleHasExternal(newtup));
 		need_toast = false;
 	}
 	else
-		need_toast = (HeapTupleHasExternal(&oldtup) ||
+		need_toast = (HeapTupleHasExternal(oldtup) ||
 					  HeapTupleHasExternal(newtup) ||
 					  newtup->t_len > TOAST_TUPLE_THRESHOLD);
 
@@ -3968,9 +3747,9 @@ l2:
 		 * updating, because the potentially created multixact would otherwise
 		 * be wrong.
 		 */
-		compute_new_xmax_infomask(HeapTupleHeaderGetRawXmax(oldtup.t_data),
-								  oldtup.t_data->t_infomask,
-								  oldtup.t_data->t_infomask2,
+		compute_new_xmax_infomask(HeapTupleHeaderGetRawXmax(oldtup->t_data),
+								  oldtup->t_data->t_infomask,
+								  oldtup->t_data->t_infomask2,
 								  xid, *lockmode, false,
 								  &xmax_lock_old_tuple, &infomask_lock_old_tuple,
 								  &infomask2_lock_old_tuple);
@@ -3980,18 +3759,18 @@ l2:
 		START_CRIT_SECTION();
 
 		/* Clear obsolete visibility flags ... */
-		oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
-		oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
-		HeapTupleClearHotUpdated(&oldtup);
+		oldtup->t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		oldtup->t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
+		HeapTupleClearHotUpdated(oldtup);
 		/* ... and store info about transaction updating this tuple */
 		Assert(TransactionIdIsValid(xmax_lock_old_tuple));
-		HeapTupleHeaderSetXmax(oldtup.t_data, xmax_lock_old_tuple);
-		oldtup.t_data->t_infomask |= infomask_lock_old_tuple;
-		oldtup.t_data->t_infomask2 |= infomask2_lock_old_tuple;
-		HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
+		HeapTupleHeaderSetXmax(oldtup->t_data, xmax_lock_old_tuple);
+		oldtup->t_data->t_infomask |= infomask_lock_old_tuple;
+		oldtup->t_data->t_infomask2 |= infomask2_lock_old_tuple;
+		HeapTupleHeaderSetCmax(oldtup->t_data, cid, iscombo);
 
 		/* temporarily make it look not-updated, but locked */
-		oldtup.t_data->t_ctid = oldtup.t_self;
+		oldtup->t_data->t_ctid = oldtup->t_self;
 
 		/*
 		 * Clear all-frozen bit on visibility map if needed. We could
@@ -4014,10 +3793,10 @@ l2:
 			XLogBeginInsert();
 			XLogRegisterBuffer(0, buffer, REGBUF_STANDARD);
 
-			xlrec.offnum = ItemPointerGetOffsetNumber(&oldtup.t_self);
+			xlrec.offnum = ItemPointerGetOffsetNumber(&oldtup->t_self);
 			xlrec.xmax = xmax_lock_old_tuple;
-			xlrec.infobits_set = compute_infobits(oldtup.t_data->t_infomask,
-												  oldtup.t_data->t_infomask2);
+			xlrec.infobits_set = compute_infobits(oldtup->t_data->t_infomask,
+												  oldtup->t_data->t_infomask2);
 			xlrec.flags =
 				cleared_all_frozen ? XLH_LOCK_ALL_FROZEN_CLEARED : 0;
 			XLogRegisterData(&xlrec, SizeOfHeapLock);
@@ -4039,7 +3818,7 @@ l2:
 		if (need_toast)
 		{
 			/* Note we always use WAL and FSM during updates */
-			heaptup = heap_toast_insert_or_update(relation, newtup, &oldtup, 0);
+			heaptup = heap_toast_insert_or_update(relation, newtup, oldtup, 0);
 			newtupsize = MAXALIGN(heaptup->t_len);
 		}
 		else
@@ -4126,42 +3905,21 @@ l2:
 	 * will include checking the relation level, there is no benefit to a
 	 * separate check for the new tuple.
 	 */
-	CheckForSerializableConflictIn(relation, &oldtup.t_self,
+	CheckForSerializableConflictIn(relation, &oldtup->t_self,
 								   BufferGetBlockNumber(buffer));
 
 	/*
 	 * At this point newbuf and buffer are both pinned and locked, and newbuf
-	 * has enough space for the new tuple.  If they are the same buffer, only
-	 * one pin is held.
+	 * has enough space for the new tuple, so we can take the HOT update path
+	 * if the caller determined that it is allowed.
+	 *
+	 * NOTE: If newbuf == buffer then only one pin is held.
 	 */
-
-	if (newbuf == buffer)
-	{
-		/*
-		 * Since the new tuple is going into the same page, we might be able
-		 * to do a HOT update.  Check if any of the index columns have been
-		 * changed.
-		 */
-		if (!bms_overlap(modified_attrs, hot_attrs))
-		{
-			use_hot_update = true;
-
-			/*
-			 * If none of the columns that are used in hot-blocking indexes
-			 * were updated, we can apply HOT, but we do still need to check
-			 * if we need to update the summarizing indexes, and update those
-			 * indexes if the columns were updated, or we may fail to detect
-			 * e.g. value bound changes in BRIN minmax indexes.
-			 */
-			if (bms_overlap(modified_attrs, sum_attrs))
-				summarized_update = true;
-		}
-	}
+	if ((newbuf == buffer) && hot_allowed)
+		use_hot_update = true;
 	else
-	{
 		/* Set a hint that the old page could use prune/defrag */
 		PageSetFull(page);
-	}
 
 	/*
 	 * Compute replica identity tuple before entering the critical section so
@@ -4170,8 +3928,7 @@ l2:
 	 * logged.  Pass old key required as true only if the replica identity key
 	 * columns are modified or it has external data.
 	 */
-	old_key_tuple = ExtractReplicaIdentity(relation, &oldtup,
-										   rep_id_key_required,
+	old_key_tuple = ExtractReplicaIdentity(relation, oldtup, rep_id_key_required,
 										   &old_key_copied);
 
 	/* NO EREPORT(ERROR) from here till changes are logged */
@@ -4194,7 +3951,7 @@ l2:
 	if (use_hot_update)
 	{
 		/* Mark the old tuple as HOT-updated */
-		HeapTupleSetHotUpdated(&oldtup);
+		HeapTupleSetHotUpdated(oldtup);
 		/* And mark the new tuple as heap-only */
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
@@ -4203,7 +3960,7 @@ l2:
 	else
 	{
 		/* Make sure tuples are correctly marked as not-HOT */
-		HeapTupleClearHotUpdated(&oldtup);
+		HeapTupleClearHotUpdated(oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
 	}
@@ -4212,17 +3969,17 @@ l2:
 
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
-	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
-	oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
+	oldtup->t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	oldtup->t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 	/* ... and store info about transaction updating this tuple */
 	Assert(TransactionIdIsValid(xmax_old_tuple));
-	HeapTupleHeaderSetXmax(oldtup.t_data, xmax_old_tuple);
-	oldtup.t_data->t_infomask |= infomask_old_tuple;
-	oldtup.t_data->t_infomask2 |= infomask2_old_tuple;
-	HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
+	HeapTupleHeaderSetXmax(oldtup->t_data, xmax_old_tuple);
+	oldtup->t_data->t_infomask |= infomask_old_tuple;
+	oldtup->t_data->t_infomask2 |= infomask2_old_tuple;
+	HeapTupleHeaderSetCmax(oldtup->t_data, cid, iscombo);
 
 	/* record address of new tuple in t_ctid of old one */
-	oldtup.t_data->t_ctid = heaptup->t_self;
+	oldtup->t_data->t_ctid = heaptup->t_self;
 
 	/* clear PD_ALL_VISIBLE flags, reset all visibilitymap bits */
 	if (PageIsAllVisible(BufferGetPage(buffer)))
@@ -4255,12 +4012,12 @@ l2:
 		 */
 		if (RelationIsAccessibleInLogicalDecoding(relation))
 		{
-			log_heap_new_cid(relation, &oldtup);
+			log_heap_new_cid(relation, oldtup);
 			log_heap_new_cid(relation, heaptup);
 		}
 
 		recptr = log_heap_update(relation, buffer,
-								 newbuf, &oldtup, heaptup,
+								 newbuf, oldtup, heaptup,
 								 old_key_tuple,
 								 all_visible_cleared,
 								 all_visible_cleared_new);
@@ -4285,7 +4042,7 @@ l2:
 	 * both tuple versions in one call to inval.c so we can avoid redundant
 	 * sinval messages.)
 	 */
-	CacheInvalidateHeapTuple(relation, &oldtup, heaptup);
+	CacheInvalidateHeapTuple(relation, oldtup, heaptup);
 
 	/* Now we can release the buffer(s) */
 	if (newbuf != buffer)
@@ -4300,7 +4057,7 @@ l2:
 	 * Release the lmgr tuple lock, if we had it.
 	 */
 	if (have_tuple_lock)
-		UnlockTupleTuplock(relation, &(oldtup.t_self), *lockmode);
+		UnlockTupleTuplock(relation, &oldtup->t_self, *lockmode);
 
 	pgstat_count_heap_update(relation, use_hot_update, newbuf != buffer);
 
@@ -4314,32 +4071,9 @@ l2:
 		heap_freetuple(heaptup);
 	}
 
-	/*
-	 * If it is a HOT update, the update may still need to update summarized
-	 * indexes, lest we fail to update those summaries and get incorrect
-	 * results (for example, minmax bounds of the block may change with this
-	 * update).
-	 */
-	if (use_hot_update)
-	{
-		if (summarized_update)
-			*update_indexes = TU_Summarizing;
-		else
-			*update_indexes = TU_None;
-	}
-	else
-		*update_indexes = TU_All;
-
 	if (old_key_tuple != NULL && old_key_copied)
 		heap_freetuple(old_key_tuple);
 
-	bms_free(hot_attrs);
-	bms_free(sum_attrs);
-	bms_free(key_attrs);
-	bms_free(id_attrs);
-	/* modified attrs is passed in and free'd by the caller, or NULL */
-	bms_free(interesting_attrs);
-
 	return TM_Ok;
 }
 
@@ -4348,7 +4082,7 @@ l2:
  * Confirm adequate lock held during heap_update(), per rules from
  * README.tuplock section "Locking to write inplace-updated tables".
  */
-static void
+void
 check_lock_if_inplace_updateable_rel(Relation relation,
 									 const ItemPointerData *otid,
 									 HeapTuple newtup)
@@ -4510,6 +4244,162 @@ heap_attr_equals(TupleDesc tupdesc, int attrnum, Datum value1, Datum value2,
 	}
 }
 
+/*
+ * HOT updates are possible when either: a) there are no modified indexed
+ * attributes, or b) the modified attributes are all on summarizing indexes.
+ * Later, in heap_update(), we can choose to perform a HOT update if there is
+ * space on the page for the new tuple and this function has determined that
+ * HOT is allowed.
+ */
+bool
+HeapUpdateHotAllowable(Relation relation, const Bitmapset *mix_attrs, bool *summarized_only)
+{
+	bool		hot_allowed;
+
+	/*
+	 * Be optimistic and start off by assuming the best case: no indexes
+	 * need updating and HOT is allowed.
+	 */
+	hot_allowed = true;
+	*summarized_only = false;
+
+	/*
+	 * Check for case (a): when there are no modified index attributes, HOT
+	 * is allowed.
+	 */
+	if (bms_is_empty(mix_attrs))
+		hot_allowed = true;
+	else
+	{
+		Bitmapset  *sum_attrs = RelationGetIndexAttrBitmap(relation,
+														   INDEX_ATTR_BITMAP_SUMMARIZED);
+
+		/*
+		 * At least one index attribute was modified, but is this case (b)
+		 * where all the modified index attributes are only used by
+		 * summarizing indexes?
+		 */
+		if (bms_is_subset(mix_attrs, sum_attrs))
+		{
+			hot_allowed = true;
+			*summarized_only = true;
+		}
+		else
+		{
+			/*
+			 * Now we know that one or more indexed attributes were updated
+			 * and that at least one of them is referenced by a
+			 * non-summarizing index.  HOT is not allowed.
+			 */
+			hot_allowed = false;
+		}
+
+		bms_free(sum_attrs);
+	}
+
+	return hot_allowed;
+}
+
+bool
+HeapUpdateRequiresReplicaId(Relation relation, const Bitmapset *mix_attrs,
+							HeapTupleData *tuple)
+{
+	bool		rep_id_key_required;
+	Bitmapset  *rid_attrs,
+			   *idx_attrs;
+
+	rid_attrs = RelationGetIndexAttrBitmap(relation,
+										   INDEX_ATTR_BITMAP_IDENTITY_KEY);
+
+	if (bms_is_empty(rid_attrs))
+	{
+		bms_free(rid_attrs);
+		return false;
+	}
+
+	idx_attrs = RelationGetIndexAttrBitmap(relation,
+										   INDEX_ATTR_BITMAP_INDEXED);
+
+	/*
+	 * ExtractReplicaIdentity() needs to know if a modified indexed attribute
+	 * is used as a replica identity or if any of the unmodified indexed
+	 * attributes in the old tuple are stored externally and used as a replica
+	 * identity.
+	 */
+	rep_id_key_required = bms_overlap(mix_attrs, rid_attrs);
+	if (!rep_id_key_required)
+	{
+		TupleDesc	tupdesc = RelationGetDescr(relation);
+		int			attidx;
+
+		/* Check only unmodified indexed replica identity key attributes */
+		idx_attrs = bms_del_members(idx_attrs, mix_attrs);
+		rid_attrs = bms_int_members(rid_attrs, idx_attrs);
+
+		/*
+		 * Start traversing the bitmap after any system or whole row
+		 * attributes; they don't influence replica identity.
+		 */
+		attidx = -FirstLowInvalidHeapAttributeNumber;
+
+		while ((attidx = bms_next_member(rid_attrs, attidx)) >= 0)
+		{
+			/*
+			 * attidx is zero-based, attrnum is the normal attribute number
+			 */
+			AttrNumber	attrnum = attidx + FirstLowInvalidHeapAttributeNumber;
+			Datum		value;
+			bool		isnull;
+
+			/*
+			 * System attributes are never included in the index attribute
+			 * bitmaps built by relcache.
+			 */
+			Assert(attrnum > 0);
+
+			value = heap_getattr(tuple, attrnum, tupdesc, &isnull);
+
+			/* No need to check attributes that can't be stored externally */
+			if (isnull ||
+				TupleDescCompactAttr(tupdesc, attrnum - 1)->attlen != -1)
+				continue;
+
+			/* Check if the old tuple's attribute is stored externally */
+			if (VARATT_IS_EXTERNAL((struct varlena *) DatumGetPointer(value)))
+			{
+				rep_id_key_required = true;
+				break;
+			}
+		}
+	}
+
+	bms_free(rid_attrs);
+	bms_free(idx_attrs);
+
+	return rep_id_key_required;
+}
+
+/*
+ * If we're not updating any "key" attributes, we can grab a weaker lock type.
+ * This allows for more concurrency when we are running simultaneously with
+ * foreign key checks.
+ */
+LockTupleMode
+HeapUpdateDetermineLockmode(Relation relation, const Bitmapset *mix_attrs)
+{
+	LockTupleMode lockmode = LockTupleExclusive;
+
+	Bitmapset  *key_attrs = RelationGetIndexAttrBitmap(relation,
+													   INDEX_ATTR_BITMAP_KEY);
+
+	if (!bms_overlap(mix_attrs, key_attrs))
+		lockmode = LockTupleNoKeyExclusive;
+
+	bms_free(key_attrs);
+
+	return lockmode;
+}
+
 /*
  * Check which columns are being updated.
  *
@@ -4520,12 +4410,10 @@ heap_attr_equals(TupleDesc tupdesc, int attrnum, Datum value1, Datum value2,
  * listed as interesting) of the old tuple is a member of external_cols and is
  * stored externally.
  */
-static Bitmapset *
+Bitmapset *
 HeapDetermineColumnsInfo(Relation relation,
 						 Bitmapset *interesting_cols,
-						 Bitmapset *external_cols,
-						 HeapTuple oldtup, HeapTuple newtup,
-						 bool *has_external)
+						 HeapTuple oldtup, HeapTuple newtup)
 {
 	int			attidx;
 	Bitmapset  *modified = NULL;
@@ -4567,10 +4455,11 @@ HeapDetermineColumnsInfo(Relation relation,
 		}
 
 		/*
-		 * Extract the corresponding values.  XXX this is pretty inefficient
-		 * if there are many indexed columns.  Should we do a single
-		 * heap_deform_tuple call on each tuple, instead?	But that doesn't
-		 * work for system columns ...
+		 * Extract the corresponding values.
+		 *
+		 * XXX this is pretty inefficient if there are many indexed columns.
+		 * Should we do a single heap_deform_tuple call on each tuple,
+		 * instead? But that doesn't work for system columns ...
 		 */
 		value1 = heap_getattr(oldtup, attrnum, tupdesc, &isnull1);
 		value2 = heap_getattr(newtup, attrnum, tupdesc, &isnull2);
@@ -4581,48 +4470,146 @@ HeapDetermineColumnsInfo(Relation relation,
 			modified = bms_add_member(modified, attidx);
 			continue;
 		}
-
-		/*
-		 * No need to check attributes that can't be stored externally. Note
-		 * that system attributes can't be stored externally.
-		 */
-		if (attrnum < 0 || isnull1 ||
-			TupleDescCompactAttr(tupdesc, attrnum - 1)->attlen != -1)
-			continue;
-
-		/*
-		 * Check if the old tuple's attribute is stored externally and is a
-		 * member of external_cols.
-		 */
-		if (VARATT_IS_EXTERNAL((varlena *) DatumGetPointer(value1)) &&
-			bms_is_member(attidx, external_cols))
-			*has_external = true;
 	}
 
 	return modified;
 }
 
 /*
- *	simple_heap_update - replace a tuple
- *
- * This routine may be used to update a tuple when concurrent updates of
- * the target tuple are not expected (for example, because we have a lock
- * on the relation associated with the tuple).  Any failure is reported
- * via ereport().
+ * This routine may be used to update a tuple when concurrent updates of the
+ * target tuple are not expected (for example, because we have a lock on the
+ * relation associated with the tuple).  Any failure is reported via ereport().
  */
 void
-simple_heap_update(Relation relation, const ItemPointerData *otid, HeapTuple tup,
+simple_heap_update(Relation relation, const ItemPointerData *otid, HeapTuple tuple,
 				   TU_UpdateIndexes *update_indexes)
 {
 	TM_Result	result;
 	TM_FailureData tmfd;
 	LockTupleMode lockmode;
+	Buffer		buffer;
+	Buffer		vmbuffer = InvalidBuffer;
+	Page		page;
+	BlockNumber block;
+	Bitmapset  *sum_attrs,
+			   *mix_attrs,
+			   *idx_attrs;
+	ItemId		lp;
+	HeapTupleData oldtup;
+	bool		hot_allowed;
+	bool		summarized_only;
+	bool		rep_id_key_required = false;
 
-	result = heap_update(relation, otid, tup,
-						 GetCurrentCommandId(true), InvalidSnapshot,
-						 true /* wait for commit */ ,
-						 &tmfd, &lockmode,
-						 NULL, false, update_indexes);
+	Assert(ItemPointerIsValid(otid));
+
+	/* Cheap, simplistic check that the tuple matches the rel's rowtype. */
+	Assert(HeapTupleHeaderGetNatts(tuple->t_data) <=
+		   RelationGetNumberOfAttributes(relation));
+
+	/*
+	 * Forbid this during a parallel operation, lest it allocate a combo CID.
+	 * Other workers might need that combo CID for visibility checks, and we
+	 * have no provision for broadcasting it to them.
+	 */
+	if (IsInParallelMode())
+		ereport(ERROR,
+				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
+				 errmsg("cannot update tuples during a parallel operation")));
+
+#ifdef USE_ASSERT_CHECKING
+	check_lock_if_inplace_updateable_rel(relation, otid, tuple);
+#endif
+
+	/*
+	 * We must fetch these bitmaps of attributes from relcache to be checked
+	 * for various operations below before obtaining a buffer lock because if
+	 * we are doing an update on one of the relevant system catalogs we could
+	 * deadlock if we try to fetch them later on. Relcache will return copies
+	 * of each bitmap, so we need not worry about relcache flush happening
+	 * midway through this operation.
+	 */
+	idx_attrs = RelationGetIndexAttrBitmap(relation,
+										   INDEX_ATTR_BITMAP_INDEXED);
+	sum_attrs = RelationGetIndexAttrBitmap(relation,
+										   INDEX_ATTR_BITMAP_SUMMARIZED);
+
+	block = ItemPointerGetBlockNumber(otid);
+	INJECTION_POINT("heap_update-before-pin", NULL);
+	buffer = ReadBuffer(relation, block);
+	page = BufferGetPage(buffer);
+
+	/*
+	 * Before locking the buffer, pin the visibility map page if it appears to
+	 * be necessary.  Since we haven't got the lock yet, someone else might be
+	 * in the middle of changing this, so we'll need to recheck after we have
+	 * the lock.
+	 */
+	if (PageIsAllVisible(page))
+		visibilitymap_pin(relation, block, &vmbuffer);
+
+	LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
+
+	lp = PageGetItemId(page, ItemPointerGetOffsetNumber(otid));
+
+	/*
+	 * Usually, a buffer pin and/or snapshot blocks pruning of otid, ensuring
+	 * we see LP_NORMAL here.  When the otid origin is a syscache, we may have
+	 * neither a pin nor a snapshot.  Hence, we may see other LP_ states, each
+	 * of which indicates concurrent pruning.
+	 *
+	 * Failing with TM_Updated would be most accurate.  However, unlike other
+	 * TM_Updated scenarios, we don't know the successor ctid in LP_UNUSED and
+	 * LP_DEAD cases.  While the distinction between TM_Updated and TM_Deleted
+	 * does matter to SQL statements UPDATE and MERGE, those SQL statements
+	 * hold a snapshot that ensures LP_NORMAL.  Hence, the choice between
+	 * TM_Updated and TM_Deleted affects only the wording of error messages.
+	 * Settle on TM_Deleted, for two reasons.  First, it avoids complicating
+	 * the specification of when tmfd->ctid is valid.  Second, it creates
+	 * error log evidence that we took this branch.
+	 *
+	 * Since it's possible to see LP_UNUSED at otid, it's also possible to see
+	 * LP_NORMAL for a tuple that replaced LP_UNUSED.  If it's a tuple for an
+	 * unrelated row, we'll fail with "duplicate key value violates unique".
+	 * XXX if otid is the live, newer version of the newtup row, we'll discard
+	 * changes originating in versions of this catalog row after the version
+	 * the caller got from syscache.  See syscache-update-pruned.spec.
+	 */
+	if (!ItemIdIsNormal(lp))
+	{
+		Assert(RelationSupportsSysCache(RelationGetRelid(relation)));
+
+		UnlockReleaseBuffer(buffer);
+		if (vmbuffer != InvalidBuffer)
+			ReleaseBuffer(vmbuffer);
+		*update_indexes = TU_None;
+
+		bms_free(idx_attrs);
+		bms_free(sum_attrs);
+		/* mix_attrs not yet initialized */
+
+		elog(ERROR, "tuple concurrently deleted");
+	}
+
+	/*
+	 * Partially construct oldtup so that HeapDetermineColumnsInfo can
+	 * examine it, then pass it on to heap_update().
+	 */
+	oldtup.t_tableOid = RelationGetRelid(relation);
+	oldtup.t_data = (HeapTupleHeader) PageGetItem(page, lp);
+	oldtup.t_len = ItemIdGetLength(lp);
+	oldtup.t_self = *otid;
+
+	mix_attrs = HeapDetermineColumnsInfo(relation, idx_attrs, &oldtup, tuple);
+	lockmode = HeapUpdateDetermineLockmode(relation, mix_attrs);
+	rep_id_key_required = HeapUpdateRequiresReplicaId(relation, mix_attrs, &oldtup);
+	hot_allowed = HeapUpdateHotAllowable(relation, mix_attrs, &summarized_only);
+
+	result = heap_update(relation, &oldtup, tuple, GetCurrentCommandId(true),
+						 InvalidSnapshot, true /* wait for commit */ ,
+						 &tmfd, &lockmode, buffer, page, block, lp, hot_allowed,
+						 vmbuffer, rep_id_key_required);
+
+	*update_indexes = TU_None;
 	switch (result)
 	{
 		case TM_SelfModified:
@@ -4632,6 +4619,10 @@ simple_heap_update(Relation relation, const ItemPointerData *otid, HeapTuple tup
 
 		case TM_Ok:
 			/* done successfully */
+			if (!HeapTupleIsHeapOnly(tuple))
+				*update_indexes = TU_All;
+			else if (summarized_only)
+				*update_indexes = TU_Summarizing;
 			break;
 
 		case TM_Updated:
@@ -4646,8 +4637,10 @@ simple_heap_update(Relation relation, const ItemPointerData *otid, HeapTuple tup
 			elog(ERROR, "unrecognized heap_update status: %u", result);
 			break;
 	}
-}
 
+	bms_free(idx_attrs);
+	bms_free(sum_attrs);
+}
 
 /*
  * Return the MultiXactStatus corresponding to the given tuple lock mode.
diff --git a/src/backend/access/heap/heapam_handler.c b/src/backend/access/heap/heapam_handler.c
index 2d74fa90c7f..54d117ea151 100644
--- a/src/backend/access/heap/heapam_handler.c
+++ b/src/backend/access/heap/heapam_handler.c
@@ -44,6 +44,7 @@
 #include "storage/procarray.h"
 #include "storage/smgr.h"
 #include "utils/builtins.h"
+#include "utils/injection_point.h"
 #include "utils/rel.h"
 
 static void reform_and_rewrite_tuple(HeapTuple tuple,
@@ -316,18 +317,97 @@ static TM_Result
 heapam_tuple_update(Relation relation, ItemPointer otid, TupleTableSlot *slot,
 					CommandId cid, Snapshot snapshot, Snapshot crosscheck,
 					bool wait, TM_FailureData *tmfd, LockTupleMode *lockmode,
-					const Bitmapset *modified_attrs, TU_UpdateIndexes *update_indexes)
+					const Bitmapset *mix_attrs, TU_UpdateIndexes *update_indexes)
 {
-	bool		shouldFree = true;
-	HeapTuple	tuple = ExecFetchSlotHeapTuple(slot, true, &shouldFree);
+	bool		shouldFree = false;
+	HeapTuple	tuple;
+	bool		rep_id_key_required = false;
+	bool		hot_allowed;
+	bool		summarized_only;
+	HeapTupleData oldtup;
+	Buffer		buffer;
+	Buffer		vmbuffer = InvalidBuffer;
+	Page		page;
+	BlockNumber block;
+	ItemId		lp;
 	TM_Result	result;
 
+	Assert(ItemPointerIsValid(otid));
+
+	tuple = ExecFetchSlotHeapTuple(slot, true, &shouldFree);
+
+	/* Cheap, simplistic check that the tuple matches the rel's rowtype. */
+	Assert(HeapTupleHeaderGetNatts(tuple->t_data) <=
+		   RelationGetNumberOfAttributes(relation));
+
+	/*
+	 * Forbid this during a parallel operation, lest it allocate a combo CID.
+	 * Other workers might need that combo CID for visibility checks, and we
+	 * have no provision for broadcasting it to them.
+	 */
+	if (IsInParallelMode())
+		ereport(ERROR,
+				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
+				 errmsg("cannot update tuples during a parallel operation")));
+
+#ifdef USE_ASSERT_CHECKING
+	check_lock_if_inplace_updateable_rel(relation, otid, tuple);
+#endif
+
+	hot_allowed = HeapUpdateHotAllowable(relation, mix_attrs, &summarized_only);
+	*lockmode = HeapUpdateDetermineLockmode(relation, mix_attrs);
+
+	block = ItemPointerGetBlockNumber(otid);
+	INJECTION_POINT("heap_update-before-pin", NULL);
+	buffer = ReadBuffer(relation, block);
+	page = BufferGetPage(buffer);
+
+	/*
+	 * Before locking the buffer, pin the visibility map page if it appears to
+	 * be necessary.  Since we haven't got the lock yet, someone else might be
+	 * in the middle of changing this, so we'll need to recheck after we have
+	 * the lock.
+	 */
+	if (PageIsAllVisible(page))
+		visibilitymap_pin(relation, block, &vmbuffer);
+
+	LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
+
+	lp = PageGetItemId(page, ItemPointerGetOffsetNumber(otid));
+
+	Assert(ItemIdIsNormal(lp));
+
+	oldtup.t_tableOid = RelationGetRelid(relation);
+	oldtup.t_data = (HeapTupleHeader) PageGetItem(page, lp);
+	oldtup.t_len = ItemIdGetLength(lp);
+	oldtup.t_self = *otid;
+
+	rep_id_key_required = HeapUpdateRequiresReplicaId(relation, mix_attrs, &oldtup);
+
+#if 1
+	{
+		Bitmapset  *hot_attrs = RelationGetIndexAttrBitmap(relation,
+														   INDEX_ATTR_BITMAP_INDEXED);
+		Bitmapset  *id_attrs = RelationGetIndexAttrBitmap(relation,
+														  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+		Bitmapset  *hdci_attrs = HeapDetermineColumnsInfo(relation, hot_attrs,
+														  &oldtup, tuple);
+
+		Assert(bms_equal(mix_attrs, hdci_attrs));
+		bms_free(hot_attrs);
+		bms_free(id_attrs);
+		bms_free(hdci_attrs);
+	}
+#endif
+
 	/* Update the tuple with table oid */
 	slot->tts_tableOid = RelationGetRelid(relation);
 	tuple->t_tableOid = slot->tts_tableOid;
 
-	result = heap_update(relation, otid, tuple, cid, crosscheck, wait,
-						 tmfd, lockmode, modified_attrs, true, update_indexes);
+	result = heap_update(relation, &oldtup, tuple, cid, crosscheck, wait, tmfd,
+						 lockmode, buffer, page, block, lp, hot_allowed,
+						 vmbuffer, rep_id_key_required);
+
 	ItemPointerCopy(&tuple->t_self, &slot->tts_tid);
 
 	/*
@@ -335,21 +415,16 @@ heapam_tuple_update(Relation relation, ItemPointer otid, TupleTableSlot *slot,
 	 *
 	 * Note: heap_update returns the tid (location) of the new tuple in the
 	 * t_self field.
-	 *
-	 * If the update is not HOT, we must update all indexes. If the update is
-	 * HOT, it could be that we updated summarized columns, so we either
-	 * update only summarized indexes, or none at all.
 	 */
+	*update_indexes = TU_None;
 	if (result != TM_Ok)
-	{
-		Assert(*update_indexes == TU_None);
 		*update_indexes = TU_None;
-	}
 	else if (!HeapTupleIsHeapOnly(tuple))
-		Assert(*update_indexes == TU_All);
+		*update_indexes = TU_All;
+	else if (summarized_only)
+		*update_indexes = TU_Summarizing;
 	else
-		Assert((*update_indexes == TU_Summarizing) ||
-			   (*update_indexes == TU_None));
+		*update_indexes = TU_None;
 
 	if (shouldFree)
 		pfree(tuple);
diff --git a/src/backend/executor/execTuples.c b/src/backend/executor/execTuples.c
index e95dde2df2e..2d7e2c46587 100644
--- a/src/backend/executor/execTuples.c
+++ b/src/backend/executor/execTuples.c
@@ -1933,46 +1933,31 @@ ExecFetchSlotHeapTupleDatum(TupleTableSlot *slot)
 /*
  * ExecCompareSlots
  *
- * Compare old and new TupleTableSlots to detect which attributes have changed.
- *
- * This function serves two purposes:
- * 1) After trigger execution: detect trigger-modified columns
- *    (pass excluding=explicitly SET columns, including=NULL)
- * 2) Index maintenance: detect changes in indexed columns
- *    (pass including=indexed AND explicitly SET columns, excluding=NULL)
- *
- * Parameters:
- *   resultRelInfo - relation information
- *   excluding - bitmapset of attributes to skip
- *   including - bitmapset of attributes to check
- *   tupdesc - RelationGetDescr(relation)
- *   old_tts - old tuple slot
- *   new_tts - new tuple slot
- *
- * If including is NULL, check all attributes EXCEPT those in excluding
- * If excluding is NULL, check ONLY attributes in including
- * If both are NULL, check all attributes
- *
- * Returns a Bitmapset of attribute indices (using FirstLowInvalidHeapAttributeNumber
- * convention) that differ between the two slots.
+ * Compare the subset of attributes in attrs between two TupleTableSlots to
+ * detect which attributes have changed.
+ *
+ * Returns a Bitmapset of attribute indices (using
+ * FirstLowInvalidHeapAttributeNumber convention) that differ between the two
+ * slots.
  */
 Bitmapset *
-ExecCompareSlotAttrs(TupleDesc tupdesc,
-					 const Bitmapset *attrs,
-					 TupleTableSlot *old_tts,
-					 TupleTableSlot *new_tts)
+ExecCompareSlotAttrs(TupleDesc tupdesc, const Bitmapset *attrs,
+					 TupleTableSlot *s1, TupleTableSlot *s2)
 {
 	int			attidx = -1;
 	Bitmapset  *modified = NULL;
 
+	/* XXX what if slots don't share the same tupleDescriptor... */
+	/* Assert(s1->tts_tupleDescriptor == s2->tts_tupleDescriptor); */
+
 	while ((attidx = bms_next_member(attrs, attidx)) >= 0)
 	{
 		/* attidx is zero-based, attrnum is the normal attribute number */
 		AttrNumber	attrnum = attidx + FirstLowInvalidHeapAttributeNumber;
-		Datum		old_value,
-					new_value;
-		bool		old_null,
-					new_null;
+		Datum		value1,
+					value2;
+		bool		null1,
+					null2;
 		CompactAttribute *att;
 
 		/*
@@ -2001,21 +1986,21 @@ ExecCompareSlotAttrs(TupleDesc tupdesc,
 		}
 
 		att = TupleDescCompactAttr(tupdesc, attrnum - 1);
-		old_value = slot_getattr(old_tts, attrnum, &old_null);
-		new_value = slot_getattr(new_tts, attrnum, &new_null);
+		value1 = slot_getattr(s1, attrnum, &null1);
+		value2 = slot_getattr(s2, attrnum, &null2);
 
 		/* A change to/from NULL, so not equal */
-		if (old_null != new_null)
+		if (null1 != null2)
 		{
 			modified = bms_add_member(modified, attidx);
 			continue;
 		}
 
 		/* Both NULL, no change/unmodified */
-		if (old_null)
+		if (null2)
 			continue;
 
-		if (!datum_image_eq(old_value, new_value, att->attbyval, att->attlen))
+		if (!datum_image_eq(value1, value2, att->attbyval, att->attlen))
 			modified = bms_add_member(modified, attidx);
 	}
 
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index 18796baed28..082dee94422 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -213,7 +213,7 @@ ExecCheckIndexedAttrsForChanges(ResultRelInfo *resultRelInfo,
 	Relation	relation = resultRelInfo->ri_RelationDesc;
 	TupleDesc	tupdesc = RelationGetDescr(relation);
 	Bitmapset  *attrs,
-			   *mix_attrs;
+			   *mix_attrs = NULL;
 
 	/* If no indexes, we're done */
 	if (resultRelInfo->ri_NumIndices == 0)
@@ -227,8 +227,9 @@ ExecCheckIndexedAttrsForChanges(ResultRelInfo *resultRelInfo,
 	attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_INDEXED);
 	attrs = bms_int_members(attrs, ExecGetAllUpdatedCols(resultRelInfo, estate));
 
-	/* Find out which, if any, modified indexed attributes changed */
-	mix_attrs = ExecCompareSlotAttrs(tupdesc, attrs, old_tts, new_tts);
+	/* Find out which, if any, modified indexed attributes changed value */
+	if (!bms_is_empty(attrs))
+		mix_attrs = ExecCompareSlotAttrs(tupdesc, attrs, old_tts, new_tts);
 
 	bms_free(attrs);
 
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 547cf1d054d..f30505d8ae3 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -2475,7 +2475,6 @@ RelationDestroyRelation(Relation relation, bool remember_tupdesc)
 	bms_free(relation->rd_keyattr);
 	bms_free(relation->rd_pkattr);
 	bms_free(relation->rd_idattr);
-	bms_free(relation->rd_hotblockingattr);
 	bms_free(relation->rd_summarizedattr);
 	bms_free(relation->rd_indexedattr);
 	if (relation->rd_pubdesc)
@@ -5277,8 +5276,7 @@ RelationGetIndexPredicate(Relation relation)
  *									(beware: even if PK is deferrable!)
  *	INDEX_ATTR_BITMAP_IDENTITY_KEY	Columns in the table's replica identity
  *									index (empty if FULL)
- *	INDEX_ATTR_BITMAP_HOT_BLOCKING	Columns that block updates from being HOT
- *	INDEX_ATTR_BITMAP_SUMMARIZED	Columns included in summarizing indexes
+ *	INDEX_ATTR_BITMAP_SUMMARIZED	Columns only included in summarizing indexes
  *	INDEX_ATTR_BITMAP_INDEXED		Columns referenced by indexes
  *
  * Attribute numbers are offset by FirstLowInvalidHeapAttributeNumber so that
@@ -5302,9 +5300,8 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 	Bitmapset  *uindexattrs;	/* columns in unique indexes */
 	Bitmapset  *pkindexattrs;	/* columns in the primary index */
 	Bitmapset  *idindexattrs;	/* columns in the replica identity */
-	Bitmapset  *hotblockingattrs;	/* columns with HOT blocking indexes */
+	Bitmapset  *summarizedattrs;	/* columns only in summarizing indexes */
 	Bitmapset  *indexedattrs;	/* columns referenced by indexes */
-	Bitmapset  *summarizedattrs;	/* columns with summarizing indexes */
 	List	   *indexoidlist;
 	List	   *newindexoidlist;
 	Oid			relpkindex;
@@ -5323,8 +5320,6 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 				return bms_copy(relation->rd_pkattr);
 			case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 				return bms_copy(relation->rd_idattr);
-			case INDEX_ATTR_BITMAP_HOT_BLOCKING:
-				return bms_copy(relation->rd_hotblockingattr);
 			case INDEX_ATTR_BITMAP_SUMMARIZED:
 				return bms_copy(relation->rd_summarizedattr);
 			case INDEX_ATTR_BITMAP_INDEXED:
@@ -5371,7 +5366,6 @@ restart:
 	uindexattrs = NULL;
 	pkindexattrs = NULL;
 	idindexattrs = NULL;
-	hotblockingattrs = NULL;
 	summarizedattrs = NULL;
 	indexedattrs = NULL;
 	foreach(l, indexoidlist)
@@ -5432,7 +5426,7 @@ restart:
 		if (indexDesc->rd_indam->amsummarizing)
 			attrs = &summarizedattrs;
 		else
-			attrs = &hotblockingattrs;
+			attrs = &indexedattrs;
 
 		/* Collect simple attribute references */
 		for (i = 0; i < indexDesc->rd_index->indnatts; i++)
@@ -5441,9 +5435,9 @@ restart:
 
 			/*
 			 * Since we have covering indexes with non-key columns, we must
-			 * handle them accurately here. non-key columns must be added into
-			 * hotblockingattrs or summarizedattrs, since they are in index,
-			 * and update shouldn't miss them.
+			 * handle them accurately here.  Non-key columns must be added
+			 * into indexedattrs or summarizedattrs, since they are in the
+			 * index and an update shouldn't miss them.
 			 *
 			 * Summarizing indexes do not block HOT, but do need to be updated
 			 * when the column value changes, thus require a separate
@@ -5504,15 +5498,19 @@ restart:
 		bms_free(uindexattrs);
 		bms_free(pkindexattrs);
 		bms_free(idindexattrs);
-		bms_free(hotblockingattrs);
 		bms_free(summarizedattrs);
-		/* indexedattrs not yet initialized */
+		bms_free(indexedattrs);
 
 		goto restart;
 	}
 
-	/* Set indexed attributes to track all referenced attributes */
-	indexedattrs = bms_union(hotblockingattrs, summarizedattrs);
+	/*
+	 * Reduce summarizedattrs to the attributes referenced only by
+	 * summarizing indexes, then fold those back into indexedattrs so that it
+	 * tracks all attributes referenced by any index.
+	 */
+	summarizedattrs = bms_del_members(summarizedattrs, indexedattrs);
+	indexedattrs = bms_add_members(indexedattrs, summarizedattrs);
 
 	/* Don't leak the old values of these bitmaps, if any */
 	relation->rd_attrsvalid = false;
@@ -5522,8 +5520,6 @@ restart:
 	relation->rd_pkattr = NULL;
 	bms_free(relation->rd_idattr);
 	relation->rd_idattr = NULL;
-	bms_free(relation->rd_hotblockingattr);
-	relation->rd_hotblockingattr = NULL;
 	bms_free(relation->rd_summarizedattr);
 	relation->rd_summarizedattr = NULL;
 	bms_free(relation->rd_indexedattr);
@@ -5540,7 +5536,6 @@ restart:
 	relation->rd_keyattr = bms_copy(uindexattrs);
 	relation->rd_pkattr = bms_copy(pkindexattrs);
 	relation->rd_idattr = bms_copy(idindexattrs);
-	relation->rd_hotblockingattr = bms_copy(hotblockingattrs);
 	relation->rd_summarizedattr = bms_copy(summarizedattrs);
 	relation->rd_indexedattr = bms_copy(indexedattrs);
 	relation->rd_attrsvalid = true;
@@ -5555,8 +5550,6 @@ restart:
 			return pkindexattrs;
 		case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 			return idindexattrs;
-		case INDEX_ATTR_BITMAP_HOT_BLOCKING:
-			return hotblockingattrs;
 		case INDEX_ATTR_BITMAP_SUMMARIZED:
 			return summarizedattrs;
 		case INDEX_ATTR_BITMAP_INDEXED:
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index a56f3d1f378..f1858d14b42 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -364,12 +364,11 @@ extern TM_Result heap_delete(Relation relation, const ItemPointerData *tid,
 							 TM_FailureData *tmfd, bool changingPart);
 extern void heap_finish_speculative(Relation relation, const ItemPointerData *tid);
 extern void heap_abort_speculative(Relation relation, const ItemPointerData *tid);
-extern TM_Result heap_update(Relation relation, const ItemPointerData *otid,
-							 HeapTuple newtup,
+extern TM_Result heap_update(Relation relation, HeapTuple oldtup, HeapTuple newtup,
 							 CommandId cid, Snapshot crosscheck, bool wait,
-							 TM_FailureData *tmfd, LockTupleMode *lockmode,
-							 const Bitmapset *mix_attrs, bool mix_attrs_valid,
-							 TU_UpdateIndexes *update_indexes);
+							 TM_FailureData *tmfd, const LockTupleMode *lockmode,
+							 Buffer buffer, Page page, BlockNumber block, ItemId lp,
+							 bool hot_allowed, Buffer vmbuffer, bool rep_id_key_required);
 extern TM_Result heap_lock_tuple(Relation relation, HeapTuple tuple,
 								 CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy,
 								 bool follow_updates,
@@ -431,6 +430,22 @@ extern void log_heap_prune_and_freeze(Relation relation, Buffer buffer,
 									  OffsetNumber *dead, int ndead,
 									  OffsetNumber *unused, int nunused);
 
+/* in heap/heapam.c */
+extern bool HeapUpdateHotAllowable(Relation relation, const Bitmapset *mix_attrs,
+								   bool *summarized_only);
+extern bool HeapUpdateRequiresReplicaId(Relation relation, const Bitmapset *mix_attrs,
+										HeapTupleData *tuple);
+extern Bitmapset *HeapDetermineColumnsInfo(Relation relation,
+										   Bitmapset *interesting_cols,
+										   HeapTuple oldtup, HeapTuple newtup);
+extern LockTupleMode HeapUpdateDetermineLockmode(Relation relation,
+												 const Bitmapset *mix_attrs);
+#ifdef USE_ASSERT_CHECKING
+extern void check_lock_if_inplace_updateable_rel(Relation relation,
+												 const ItemPointerData *otid,
+												 HeapTuple newtup);
+#endif
+
 /* in heap/vacuumlazy.c */
 extern void heap_vacuum_rel(Relation rel,
 							const VacuumParams params, BufferAccessStrategy bstrategy);
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index df5426fd7fb..10e5e9044ee 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -162,7 +162,6 @@ typedef struct RelationData
 	Bitmapset  *rd_keyattr;		/* cols that can be ref'd by foreign keys */
 	Bitmapset  *rd_pkattr;		/* cols included in primary key */
 	Bitmapset  *rd_idattr;		/* included in replica identity index */
-	Bitmapset  *rd_hotblockingattr; /* cols blocking HOT update */
 	Bitmapset  *rd_summarizedattr;	/* cols indexed by summarizing indexes */
 	Bitmapset  *rd_indexedattr; /* all cols referenced by indexes */
 
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index 5834ab7b903..57b46ee54e5 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -69,7 +69,6 @@ typedef enum IndexAttrBitmapKind
 	INDEX_ATTR_BITMAP_KEY,
 	INDEX_ATTR_BITMAP_PRIMARY_KEY,
 	INDEX_ATTR_BITMAP_IDENTITY_KEY,
-	INDEX_ATTR_BITMAP_HOT_BLOCKING,
 	INDEX_ATTR_BITMAP_SUMMARIZED,
 	INDEX_ATTR_BITMAP_INDEXED,
 } IndexAttrBitmapKind;
diff --git a/src/test/regress/expected/generated_virtual.out b/src/test/regress/expected/generated_virtual.out
index 249e68be654..6aea0346ee2 100644
--- a/src/test/regress/expected/generated_virtual.out
+++ b/src/test/regress/expected/generated_virtual.out
@@ -260,7 +260,7 @@ MERGE INTO gtestm t USING gtestm AS s ON 2 * t.a = s.b WHEN MATCHED THEN DELETE
 DROP TABLE gtestm;
 -- views
 CREATE VIEW gtest1v AS SELECT * FROM gtest1;
-SELECT * FROM gtest1v;
+SELECT * FROM gtest1v ORDER BY a;
  a | b 
 ---+---
  3 | 6
@@ -287,7 +287,7 @@ DETAIL:  Column "b" is a generated column.
 INSERT INTO gtest1v VALUES (8, DEFAULT), (9, DEFAULT);  -- error
 ERROR:  cannot insert a non-DEFAULT value into column "b"
 DETAIL:  Column "b" is a generated column.
-SELECT * FROM gtest1v;
+SELECT * FROM gtest1v ORDER BY a;
  a | b  
 ---+----
  3 |  6
diff --git a/src/test/regress/expected/updatable_views.out b/src/test/regress/expected/updatable_views.out
index 9cea538b8e8..4877a1ddce9 100644
--- a/src/test/regress/expected/updatable_views.out
+++ b/src/test/regress/expected/updatable_views.out
@@ -372,15 +372,15 @@ INSERT INTO rw_view16 (a, b) VALUES (3, 'Row 3'); -- should be OK
 UPDATE rw_view16 SET a=3, aa=-3 WHERE a=3; -- should fail
 ERROR:  multiple assignments to same column "a"
 UPDATE rw_view16 SET aa=-3 WHERE a=3; -- should be OK
-SELECT * FROM base_tbl;
+SELECT * FROM base_tbl ORDER BY a;
  a  |   b    
 ----+--------
+ -3 | Row 3
  -2 | Row -2
  -1 | Row -1
   0 | Row 0
   1 | Row 1
   2 | Row 2
- -3 | Row 3
 (6 rows)
 
 DELETE FROM rw_view16 WHERE a=-3; -- should be OK
diff --git a/src/test/regress/sql/generated_virtual.sql b/src/test/regress/sql/generated_virtual.sql
index 81152b39a79..1142bb93525 100644
--- a/src/test/regress/sql/generated_virtual.sql
+++ b/src/test/regress/sql/generated_virtual.sql
@@ -115,7 +115,7 @@ DROP TABLE gtestm;
 
 -- views
 CREATE VIEW gtest1v AS SELECT * FROM gtest1;
-SELECT * FROM gtest1v;
+SELECT * FROM gtest1v ORDER BY a;
 INSERT INTO gtest1v VALUES (4, 8);  -- error
 INSERT INTO gtest1v VALUES (5, DEFAULT);  -- ok
 INSERT INTO gtest1v VALUES (6, 66), (7, 77);  -- error
@@ -127,7 +127,7 @@ ALTER VIEW gtest1v ALTER COLUMN b SET DEFAULT 100;
 INSERT INTO gtest1v VALUES (8, DEFAULT);  -- error
 INSERT INTO gtest1v VALUES (8, DEFAULT), (9, DEFAULT);  -- error
 
-SELECT * FROM gtest1v;
+SELECT * FROM gtest1v ORDER BY a;
 DELETE FROM gtest1v WHERE a >= 5;
 DROP VIEW gtest1v;
 
diff --git a/src/test/regress/sql/updatable_views.sql b/src/test/regress/sql/updatable_views.sql
index 1635adde2d4..160e7799715 100644
--- a/src/test/regress/sql/updatable_views.sql
+++ b/src/test/regress/sql/updatable_views.sql
@@ -125,7 +125,7 @@ INSERT INTO rw_view16 VALUES (3, 'Row 3', 3); -- should fail
 INSERT INTO rw_view16 (a, b) VALUES (3, 'Row 3'); -- should be OK
 UPDATE rw_view16 SET a=3, aa=-3 WHERE a=3; -- should fail
 UPDATE rw_view16 SET aa=-3 WHERE a=3; -- should be OK
-SELECT * FROM base_tbl;
+SELECT * FROM base_tbl ORDER BY a;
 DELETE FROM rw_view16 WHERE a=-3; -- should be OK
 -- Read-only views
 INSERT INTO ro_view17 VALUES (3, 'ROW 3');
-- 
2.51.2
