On Mon, May 13, 2019 at 12:51 PM Robert Haas <robertmh...@gmail.com> wrote:

> On Fri, May 10, 2019 at 3:43 PM Ashwin Agrawal <aagra...@pivotal.io>
> wrote:
> > Meant to include the question mark in that email; somehow missed it. Yes,
> > I'm not planning to spend any time on it if there are objections. Here is
> > the list of renames I wish to perform.
> >
> > Let's start with the low-hanging ones.
> >
> > table_rescan -> table_scan_rescan
> > table_insert -> table_tuple_insert
> > table_insert_speculative -> table_tuple_insert_speculative
> > table_delete -> table_tuple_delete
> > table_update -> table_tuple_update
> > table_lock_tuple -> table_tuple_lock
> >
> > You already mentioned having no objections to renaming the two below:
> > table_fetch_row_version -> table_tuple_fetch_row_version
> > table_get_latest_tid -> table_tuple_get_latest_tid
> >
> > Now, table_beginscan and table_endscan are the ones that are
> > widespread.
>
> I vote to rename all the ones where the new name would contain "tuple"
> and to leave the others alone.  i.e. leave table_beginscan,
> table_endscan, and table_rescan as they are.  I think that there's
> little benefit in standardizing table_rescan but not the other two,
> and we seem to agree that tinkering with the other two gets into a
> painful amount of churn.
>

Thank you. Please find attached the patch to rename the agreed-upon functions.
It would be better to make all of them consistent instead of carving out an
exception for three functions, but there seems to be no consensus on that.
Given that the table_ APIs are new, we could have modified them while leaving
the systable_ ones alone, but since there were objections, I have left those
as is.
From 39fcc223a0aaeacf545e09cfe29b8d565d970234 Mon Sep 17 00:00:00 2001
From: Ashwin Agrawal <aagra...@pivotal.io>
Date: Tue, 14 May 2019 10:10:21 -0700
Subject: [PATCH] Rename table am wrappers to match table am callback names.

All the table AM wrapper functions now follow the
"table_<am_callback_name>" naming scheme, except for table_beginscan(),
table_endscan() and table_rescan(). Based on the discussion, those three
were deliberately left unrenamed to avoid excessive code churn.
---
 src/backend/access/heap/heapam.c       | 10 ++--
 src/backend/access/table/tableam.c     | 46 +++++++--------
 src/backend/commands/copy.c            | 12 ++--
 src/backend/commands/createas.c        | 14 ++---
 src/backend/commands/matview.c         | 14 ++---
 src/backend/commands/tablecmds.c       |  4 +-
 src/backend/commands/trigger.c         | 12 ++--
 src/backend/executor/execMain.c        |  8 +--
 src/backend/executor/execReplication.c | 16 +++---
 src/backend/executor/nodeLockRows.c    |  8 +--
 src/backend/executor/nodeModifyTable.c | 78 ++++++++++++-------------
 src/backend/executor/nodeTidscan.c     |  4 +-
 src/backend/utils/adt/tid.c            |  4 +-
 src/include/access/tableam.h           | 80 +++++++++++++-------------
 14 files changed, 155 insertions(+), 155 deletions(-)

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index ec9853603fd..86a964c4f48 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -1856,12 +1856,12 @@ ReleaseBulkInsertStatePin(BulkInsertState bistate)
  * The new tuple is stamped with current transaction ID and the specified
  * command ID.
  *
- * See table_insert for comments about most of the input flags, except that
+ * See table_tuple_insert for comments about most of the input flags, except that
  * this routine directly takes a tuple rather than a slot.
  *
  * There's corresponding HEAP_INSERT_ options to all the TABLE_INSERT_
  * options, and there additionally is HEAP_INSERT_SPECULATIVE which is used to
- * implement table_insert_speculative().
+ * implement table_tuple_insert_speculative().
  *
  * On return the header fields of *tup are updated to match the stored tuple;
  * in particular tup->t_self receives the actual TID where the tuple was
@@ -2434,7 +2434,7 @@ xmax_infomask_changed(uint16 new_infomask, uint16 old_infomask)
 /*
  *	heap_delete - delete a tuple
  *
- * See table_delete() for an explanation of the parameters, except that this
+ * See table_tuple_delete() for an explanation of the parameters, except that this
  * routine directly takes a tuple rather than a slot.
  *
  * In the failure cases, the routine fills *tmfd with the tuple's t_ctid,
@@ -2880,7 +2880,7 @@ simple_heap_delete(Relation relation, ItemPointer tid)
 /*
  *	heap_update - replace a tuple
  *
- * See table_update() for an explanation of the parameters, except that this
+ * See table_tuple_update() for an explanation of the parameters, except that this
  * routine directly takes a tuple rather than a slot.
  *
  * In the failure cases, the routine fills *tmfd with the tuple's t_ctid,
@@ -3951,7 +3951,7 @@ get_mxact_status_for_lock(LockTupleMode mode, bool is_update)
  *	*buffer: set to buffer holding tuple (pinned but not locked at exit)
  *	*tmfd: filled in failure cases (see below)
  *
- * Function results are the same as the ones for table_lock_tuple().
+ * Function results are the same as the ones for table_tuple_lock().
  *
  * In the failure cases other than TM_Invisible, the routine fills
  * *tmfd with the tuple's t_ctid, t_xmax (resolving a possible MultiXact,
diff --git a/src/backend/access/table/tableam.c b/src/backend/access/table/tableam.c
index baba1ea699b..2bf70fcdb60 100644
--- a/src/backend/access/table/tableam.c
+++ b/src/backend/access/table/tableam.c
@@ -219,19 +219,19 @@ table_index_fetch_tuple_check(Relation rel,
  */
 
 /*
- * simple_table_insert - insert a tuple
+ * simple_table_tuple_insert - insert a tuple
  *
- * Currently, this routine differs from table_insert only in supplying a
+ * Currently, this routine differs from table_tuple_insert only in supplying a
  * default command ID and not allowing access to the speedup options.
  */
 void
-simple_table_insert(Relation rel, TupleTableSlot *slot)
+simple_table_tuple_insert(Relation rel, TupleTableSlot *slot)
 {
-	table_insert(rel, slot, GetCurrentCommandId(true), 0, NULL);
+	table_tuple_insert(rel, slot, GetCurrentCommandId(true), 0, NULL);
 }
 
 /*
- * simple_table_delete - delete a tuple
+ * simple_table_tuple_delete - delete a tuple
  *
  * This routine may be used to delete a tuple when concurrent updates of
  * the target tuple are not expected (for example, because we have a lock
@@ -239,16 +239,16 @@ simple_table_insert(Relation rel, TupleTableSlot *slot)
  * via ereport().
  */
 void
-simple_table_delete(Relation rel, ItemPointer tid, Snapshot snapshot)
+simple_table_tuple_delete(Relation rel, ItemPointer tid, Snapshot snapshot)
 {
 	TM_Result	result;
 	TM_FailureData tmfd;
 
-	result = table_delete(rel, tid,
-						  GetCurrentCommandId(true),
-						  snapshot, InvalidSnapshot,
-						  true /* wait for commit */ ,
-						  &tmfd, false /* changingPart */ );
+	result = table_tuple_delete(rel, tid,
+								GetCurrentCommandId(true),
+								snapshot, InvalidSnapshot,
+								true /* wait for commit */ ,
+								&tmfd, false /* changingPart */ );
 
 	switch (result)
 	{
@@ -270,13 +270,13 @@ simple_table_delete(Relation rel, ItemPointer tid, Snapshot snapshot)
 			break;
 
 		default:
-			elog(ERROR, "unrecognized table_delete status: %u", result);
+			elog(ERROR, "unrecognized table_tuple_delete status: %u", result);
 			break;
 	}
 }
 
 /*
- * simple_table_update - replace a tuple
+ * simple_table_tuple_update - replace a tuple
  *
  * This routine may be used to update a tuple when concurrent updates of
  * the target tuple are not expected (for example, because we have a lock
@@ -284,20 +284,20 @@ simple_table_delete(Relation rel, ItemPointer tid, Snapshot snapshot)
  * via ereport().
  */
 void
-simple_table_update(Relation rel, ItemPointer otid,
-					TupleTableSlot *slot,
-					Snapshot snapshot,
-					bool *update_indexes)
+simple_table_tuple_update(Relation rel, ItemPointer otid,
+						  TupleTableSlot *slot,
+						  Snapshot snapshot,
+						  bool *update_indexes)
 {
 	TM_Result	result;
 	TM_FailureData tmfd;
 	LockTupleMode lockmode;
 
-	result = table_update(rel, otid, slot,
-						  GetCurrentCommandId(true),
-						  snapshot, InvalidSnapshot,
-						  true /* wait for commit */ ,
-						  &tmfd, &lockmode, update_indexes);
+	result = table_tuple_update(rel, otid, slot,
+								GetCurrentCommandId(true),
+								snapshot, InvalidSnapshot,
+								true /* wait for commit */ ,
+								&tmfd, &lockmode, update_indexes);
 
 	switch (result)
 	{
@@ -319,7 +319,7 @@ simple_table_update(Relation rel, ItemPointer otid,
 			break;
 
 		default:
-			elog(ERROR, "unrecognized table_update status: %u", result);
+			elog(ERROR, "unrecognized table_tuple_update status: %u", result);
 			break;
 	}
 
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index c39218f8dbb..a2349570df9 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -90,7 +90,7 @@ typedef enum EolType
  */
 typedef enum CopyInsertMethod
 {
-	CIM_SINGLE,					/* use table_insert or fdw routine */
+	CIM_SINGLE,					/* use table_tuple_insert or fdw routine */
 	CIM_MULTI,					/* always use table_multi_insert */
 	CIM_MULTI_CONDITIONAL		/* use table_multi_insert only if valid */
 } CopyInsertMethod;
@@ -2664,7 +2664,7 @@ CopyFrom(CopyState cstate)
 	PartitionTupleRouting *proute = NULL;
 	ErrorContextCallback errcallback;
 	CommandId	mycid = GetCurrentCommandId(true);
-	int			ti_options = 0; /* start with default table_insert options */
+	int			ti_options = 0; /* start with default table_tuple_insert options */
 	BulkInsertState bistate = NULL;
 	CopyInsertMethod insertMethod;
 	CopyMultiInsertInfo multiInsertInfo = {0};	/* pacify compiler */
@@ -2737,7 +2737,7 @@ CopyFrom(CopyState cstate)
 	 * FSM for free space is a waste of time, even if we must use WAL because
 	 * of archiving.  This could possibly be wrong, but it's unlikely.
 	 *
-	 * The comments for table_insert and RelationGetBufferForTuple specify that
+	 * The comments for table_tuple_insert and RelationGetBufferForTuple specify that
 	 * skipping WAL logging is only safe if we ensure that our tuples do not
 	 * go into pages containing tuples from any other transactions --- but this
 	 * must be the case if we have a new table or new relfilenode, so we need
@@ -2888,7 +2888,7 @@ CopyFrom(CopyState cstate)
 	/*
 	 * It's generally more efficient to prepare a bunch of tuples for
 	 * insertion, and insert them in one table_multi_insert() call, than call
-	 * table_insert() separately for every tuple. However, there are a number
+	 * table_tuple_insert() separately for every tuple. However, there are a number
 	 * of reasons why we might not be able to do this.  These are explained
 	 * below.
 	 */
@@ -3291,8 +3291,8 @@ CopyFrom(CopyState cstate)
 					else
 					{
 						/* OK, store the tuple and create index entries for it */
-						table_insert(resultRelInfo->ri_RelationDesc, myslot,
-									 mycid, ti_options, bistate);
+						table_tuple_insert(resultRelInfo->ri_RelationDesc, myslot,
+										   mycid, ti_options, bistate);
 
 						if (resultRelInfo->ri_NumIndices > 0)
 							recheckIndexes = ExecInsertIndexTuples(myslot,
diff --git a/src/backend/commands/createas.c b/src/backend/commands/createas.c
index 43c2fa91242..4c1d909d380 100644
--- a/src/backend/commands/createas.c
+++ b/src/backend/commands/createas.c
@@ -60,7 +60,7 @@ typedef struct
 	Relation	rel;			/* relation to write to */
 	ObjectAddress reladdr;		/* address of rel, for ExecCreateTableAs */
 	CommandId	output_cid;		/* cmin to insert in output tuples */
-	int			ti_options;		/* table_insert performance options */
+	int			ti_options;		/* table_tuple_insert performance options */
 	BulkInsertState bistate;	/* bulk insert state */
 } DR_intorel;
 
@@ -576,18 +576,18 @@ intorel_receive(TupleTableSlot *slot, DestReceiver *self)
 
 	/*
 	 * Note that the input slot might not be of the type of the target
-	 * relation. That's supported by table_insert(), but slightly less
+	 * relation. That's supported by table_tuple_insert(), but slightly less
 	 * efficient than inserting with the right slot - but the alternative
 	 * would be to copy into a slot of the right type, which would not be
 	 * cheap either. This also doesn't allow accessing per-AM data (say a
 	 * tuple's xmin), but since we don't do that here...
 	 */
 
-	table_insert(myState->rel,
-				 slot,
-				 myState->output_cid,
-				 myState->ti_options,
-				 myState->bistate);
+	table_tuple_insert(myState->rel,
+					   slot,
+					   myState->output_cid,
+					   myState->ti_options,
+					   myState->bistate);
 
 	/* We know this is a newly created relation, so there are no indexes */
 
diff --git a/src/backend/commands/matview.c b/src/backend/commands/matview.c
index 99bf3c29f27..5d9dfbc82e4 100644
--- a/src/backend/commands/matview.c
+++ b/src/backend/commands/matview.c
@@ -54,7 +54,7 @@ typedef struct
 	/* These fields are filled by transientrel_startup: */
 	Relation	transientrel;	/* relation to write to */
 	CommandId	output_cid;		/* cmin to insert in output tuples */
-	int			ti_options;		/* table_insert performance options */
+	int			ti_options;		/* table_tuple_insert performance options */
 	BulkInsertState bistate;	/* bulk insert state */
 } DR_transientrel;
 
@@ -481,18 +481,18 @@ transientrel_receive(TupleTableSlot *slot, DestReceiver *self)
 
 	/*
 	 * Note that the input slot might not be of the type of the target
-	 * relation. That's supported by table_insert(), but slightly less
+	 * relation. That's supported by table_tuple_insert(), but slightly less
 	 * efficient than inserting with the right slot - but the alternative
 	 * would be to copy into a slot of the right type, which would not be
 	 * cheap either. This also doesn't allow accessing per-AM data (say a
 	 * tuple's xmin), but since we don't do that here...
 	 */
 
-	table_insert(myState->transientrel,
-				 slot,
-				 myState->output_cid,
-				 myState->ti_options,
-				 myState->bistate);
+	table_tuple_insert(myState->transientrel,
+					   slot,
+					   myState->output_cid,
+					   myState->ti_options,
+					   myState->bistate);
 
 	/* We know this is a newly created relation, so there are no indexes */
 
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index baeb13e6a0c..c002e44aa29 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -4730,7 +4730,7 @@ ATRewriteTable(AlteredTableInfo *tab, Oid OIDNewHeap, LOCKMODE lockmode)
 		newrel = NULL;
 
 	/*
-	 * Prepare a BulkInsertState and options for table_insert. Because we're
+	 * Prepare a BulkInsertState and options for table_tuple_insert. Because we're
 	 * building a new heap, we can skip WAL-logging and fsync it to disk at
 	 * the end instead (unless WAL-logging is required for archiving or
 	 * streaming replication). The FSM is empty too, so don't bother using it.
@@ -5003,7 +5003,7 @@ ATRewriteTable(AlteredTableInfo *tab, Oid OIDNewHeap, LOCKMODE lockmode)
 
 			/* Write the tuple out to the new relation */
 			if (newrel)
-				table_insert(newrel, insertslot, mycid, ti_options, bistate);
+				table_tuple_insert(newrel, insertslot, mycid, ti_options, bistate);
 
 			ResetExprContext(econtext);
 
diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c
index 2beb3781450..a99441f0b93 100644
--- a/src/backend/commands/trigger.c
+++ b/src/backend/commands/trigger.c
@@ -3332,7 +3332,7 @@ GetTupleForTrigger(EState *estate,
 		 */
 		if (!IsolationUsesXactSnapshot())
 			lockflags |= TUPLE_LOCK_FLAG_FIND_LAST_VERSION;
-		test = table_lock_tuple(relation, tid, estate->es_snapshot, oldslot,
+		test = table_tuple_lock(relation, tid, estate->es_snapshot, oldslot,
 								estate->es_output_cid,
 								lockmode, LockWaitBlock,
 								lockflags,
@@ -3386,7 +3386,7 @@ GetTupleForTrigger(EState *estate,
 					ereport(ERROR,
 							(errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
 							 errmsg("could not serialize access due to concurrent update")));
-				elog(ERROR, "unexpected table_lock_tuple status: %u", test);
+				elog(ERROR, "unexpected table_tuple_lock status: %u", test);
 				break;
 
 			case TM_Deleted:
@@ -3402,7 +3402,7 @@ GetTupleForTrigger(EState *estate,
 				break;
 
 			default:
-				elog(ERROR, "unrecognized table_lock_tuple status: %u", test);
+				elog(ERROR, "unrecognized table_tuple_lock status: %u", test);
 				return false;	/* keep compiler quiet */
 		}
 	}
@@ -3412,7 +3412,7 @@ GetTupleForTrigger(EState *estate,
 		 * We expect the tuple to be present, thus very simple error handling
 		 * suffices.
 		 */
-		if (!table_fetch_row_version(relation, tid, SnapshotAny, oldslot))
+		if (!table_tuple_fetch_row_version(relation, tid, SnapshotAny, oldslot))
 			elog(ERROR, "failed to fetch tuple for trigger");
 	}
 
@@ -4270,7 +4270,7 @@ AfterTriggerExecute(EState *estate,
 			{
 				LocTriggerData.tg_trigslot = ExecGetTriggerOldSlot(estate, relInfo);
 
-				if (!table_fetch_row_version(rel, &(event->ate_ctid1), SnapshotAny, LocTriggerData.tg_trigslot))
+				if (!table_tuple_fetch_row_version(rel, &(event->ate_ctid1), SnapshotAny, LocTriggerData.tg_trigslot))
 					elog(ERROR, "failed to fetch tuple1 for AFTER trigger");
 				LocTriggerData.tg_trigtuple =
 					ExecFetchSlotHeapTuple(LocTriggerData.tg_trigslot, false, &should_free_trig);
@@ -4287,7 +4287,7 @@ AfterTriggerExecute(EState *estate,
 			{
 				LocTriggerData.tg_newslot = ExecGetTriggerNewSlot(estate, relInfo);
 
-				if (!table_fetch_row_version(rel, &(event->ate_ctid2), SnapshotAny, LocTriggerData.tg_newslot))
+				if (!table_tuple_fetch_row_version(rel, &(event->ate_ctid2), SnapshotAny, LocTriggerData.tg_newslot))
 					elog(ERROR, "failed to fetch tuple2 for AFTER trigger");
 				LocTriggerData.tg_newtuple =
 					ExecFetchSlotHeapTuple(LocTriggerData.tg_newslot, false, &should_free_new);
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index ed7c0606bf1..dd80b1e5ed8 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -2436,7 +2436,7 @@ ExecBuildAuxRowMark(ExecRowMark *erm, List *targetlist)
  * quals. For that result to be useful, typically the input tuple has to be
  * last row version (otherwise the result isn't particularly useful) and
  * locked (otherwise the result might be out of date). That's typically
- * achieved by using table_lock_tuple() with the
+ * achieved by using table_tuple_lock() with the
  * TUPLE_LOCK_FLAG_FIND_LAST_VERSION flag.
  *
  * Returns a slot containing the new candidate update/delete tuple, or
@@ -2654,9 +2654,9 @@ EvalPlanQualFetchRowMarks(EPQState *epqstate)
 			else
 			{
 				/* ordinary table, fetch the tuple */
-				if (!table_fetch_row_version(erm->relation,
-											 (ItemPointer) DatumGetPointer(datum),
-											 SnapshotAny, slot))
+				if (!table_tuple_fetch_row_version(erm->relation,
+												   (ItemPointer) DatumGetPointer(datum),
+												   SnapshotAny, slot))
 					elog(ERROR, "failed to fetch tuple for EvalPlanQual recheck");
 			}
 		}
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index f8f6463358f..a487831b69b 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -173,7 +173,7 @@ retry:
 
 		PushActiveSnapshot(GetLatestSnapshot());
 
-		res = table_lock_tuple(rel, &(outslot->tts_tid), GetLatestSnapshot(),
+		res = table_tuple_lock(rel, &(outslot->tts_tid), GetLatestSnapshot(),
 							   outslot,
 							   GetCurrentCommandId(false),
 							   lockmode,
@@ -208,7 +208,7 @@ retry:
 				elog(ERROR, "attempted to lock invisible tuple");
 				break;
 			default:
-				elog(ERROR, "unexpected table_lock_tuple status: %u", res);
+				elog(ERROR, "unexpected table_tuple_lock status: %u", res);
 				break;
 		}
 	}
@@ -337,7 +337,7 @@ retry:
 
 		PushActiveSnapshot(GetLatestSnapshot());
 
-		res = table_lock_tuple(rel, &(outslot->tts_tid), GetLatestSnapshot(),
+		res = table_tuple_lock(rel, &(outslot->tts_tid), GetLatestSnapshot(),
 							   outslot,
 							   GetCurrentCommandId(false),
 							   lockmode,
@@ -372,7 +372,7 @@ retry:
 				elog(ERROR, "attempted to lock invisible tuple");
 				break;
 			default:
-				elog(ERROR, "unexpected table_lock_tuple status: %u", res);
+				elog(ERROR, "unexpected table_tuple_lock status: %u", res);
 				break;
 		}
 	}
@@ -425,7 +425,7 @@ ExecSimpleRelationInsert(EState *estate, TupleTableSlot *slot)
 			ExecPartitionCheck(resultRelInfo, slot, estate, true);
 
 		/* OK, store the tuple and create index entries for it */
-		simple_table_insert(resultRelInfo->ri_RelationDesc, slot);
+		simple_table_tuple_insert(resultRelInfo->ri_RelationDesc, slot);
 
 		if (resultRelInfo->ri_NumIndices > 0)
 			recheckIndexes = ExecInsertIndexTuples(slot, estate, false, NULL,
@@ -490,8 +490,8 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 		if (resultRelInfo->ri_PartitionCheck)
 			ExecPartitionCheck(resultRelInfo, slot, estate, true);
 
-		simple_table_update(rel, tid, slot,estate->es_snapshot,
-							&update_indexes);
+		simple_table_tuple_update(rel, tid, slot, estate->es_snapshot,
+								  &update_indexes);
 
 		if (resultRelInfo->ri_NumIndices > 0 && update_indexes)
 			recheckIndexes = ExecInsertIndexTuples(slot, estate, false, NULL,
@@ -535,7 +535,7 @@ ExecSimpleRelationDelete(EState *estate, EPQState *epqstate,
 	if (!skip_tuple)
 	{
 		/* OK, delete the tuple */
-		simple_table_delete(rel, tid, estate->es_snapshot);
+		simple_table_tuple_delete(rel, tid, estate->es_snapshot);
 
 		/* AFTER ROW DELETE Triggers */
 		ExecARDeleteTriggers(estate, resultRelInfo,
diff --git a/src/backend/executor/nodeLockRows.c b/src/backend/executor/nodeLockRows.c
index 7674ac893c2..e8d7f24efea 100644
--- a/src/backend/executor/nodeLockRows.c
+++ b/src/backend/executor/nodeLockRows.c
@@ -185,7 +185,7 @@ lnext:
 		if (!IsolationUsesXactSnapshot())
 			lockflags |= TUPLE_LOCK_FLAG_FIND_LAST_VERSION;
 
-		test = table_lock_tuple(erm->relation, &tid, estate->es_snapshot,
+		test = table_tuple_lock(erm->relation, &tid, estate->es_snapshot,
 								markSlot, estate->es_output_cid,
 								lockmode, erm->waitPolicy,
 								lockflags,
@@ -208,7 +208,7 @@ lnext:
 				 * to fetch the updated tuple instead, but doing so would
 				 * require changing heap_update and heap_delete to not
 				 * complain about updating "invisible" tuples, which seems
-				 * pretty scary (table_lock_tuple will not complain, but few
+				 * pretty scary (table_tuple_lock will not complain, but few
 				 * callers expect TM_Invisible, and we're not one of them). So
 				 * for now, treat the tuple as deleted and do not process.
 				 */
@@ -229,7 +229,7 @@ lnext:
 					ereport(ERROR,
 							(errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
 							 errmsg("could not serialize access due to concurrent update")));
-				elog(ERROR, "unexpected table_lock_tuple status: %u",
+				elog(ERROR, "unexpected table_tuple_lock status: %u",
 					 test);
 				break;
 
@@ -246,7 +246,7 @@ lnext:
 				break;
 
 			default:
-				elog(ERROR, "unrecognized table_lock_tuple status: %u",
+				elog(ERROR, "unrecognized table_tuple_lock status: %u",
 					 test);
 		}
 
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index 444c0c05746..d159c950d0e 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -235,7 +235,7 @@ ExecCheckTIDVisible(EState *estate,
 	if (!IsolationUsesXactSnapshot())
 		return;
 
-	if (!table_fetch_row_version(rel, tid, SnapshotAny, tempSlot))
+	if (!table_tuple_fetch_row_version(rel, tid, SnapshotAny, tempSlot))
 		elog(ERROR, "failed to fetch conflicting tuple for ON CONFLICT");
 	ExecCheckTupleVisible(estate, rel, tempSlot);
 	ExecClearTuple(tempSlot);
@@ -543,11 +543,11 @@ ExecInsert(ModifyTableState *mtstate,
 			specToken = SpeculativeInsertionLockAcquire(GetCurrentTransactionId());
 
 			/* insert the tuple, with the speculative token */
-			table_insert_speculative(resultRelationDesc, slot,
-									 estate->es_output_cid,
-									 0,
-									 NULL,
-									 specToken);
+			table_tuple_insert_speculative(resultRelationDesc, slot,
+										   estate->es_output_cid,
+										   0,
+										   NULL,
+										   specToken);
 
 			/* insert index entries for tuple */
 			recheckIndexes = ExecInsertIndexTuples(slot, estate, true,
@@ -555,8 +555,8 @@ ExecInsert(ModifyTableState *mtstate,
 												   arbiterIndexes);
 
 			/* adjust the tuple's state accordingly */
-			table_complete_speculative(resultRelationDesc, slot,
-									   specToken, specConflict);
+			table_tuple_complete_speculative(resultRelationDesc, slot,
+											 specToken, specConflict);
 
 			/*
 			 * Wake up anyone waiting for our decision.  They will re-check
@@ -583,9 +583,9 @@ ExecInsert(ModifyTableState *mtstate,
 		else
 		{
 			/* insert the tuple normally */
-			table_insert(resultRelationDesc, slot,
-						 estate->es_output_cid,
-						 0, NULL);
+			table_tuple_insert(resultRelationDesc, slot,
+							   estate->es_output_cid,
+							   0, NULL);
 
 			/* insert index entries for tuple */
 			if (resultRelInfo->ri_NumIndices > 0)
@@ -765,13 +765,13 @@ ExecDelete(ModifyTableState *mtstate,
 		 * mode transactions.
 		 */
 ldelete:;
-		result = table_delete(resultRelationDesc, tupleid,
-							  estate->es_output_cid,
-							  estate->es_snapshot,
-							  estate->es_crosscheck_snapshot,
-							  true /* wait for commit */ ,
-							  &tmfd,
-							  changingPart);
+		result = table_tuple_delete(resultRelationDesc, tupleid,
+									estate->es_output_cid,
+									estate->es_snapshot,
+									estate->es_crosscheck_snapshot,
+									true /* wait for commit */ ,
+									&tmfd,
+									changingPart);
 
 		switch (result)
 		{
@@ -831,7 +831,7 @@ ldelete:;
 					inputslot = EvalPlanQualSlot(epqstate, resultRelationDesc,
 												 resultRelInfo->ri_RangeTableIndex);
 
-					result = table_lock_tuple(resultRelationDesc, tupleid,
+					result = table_tuple_lock(resultRelationDesc, tupleid,
 											  estate->es_snapshot,
 											  inputslot, estate->es_output_cid,
 											  LockTupleExclusive, LockWaitBlock,
@@ -873,7 +873,7 @@ ldelete:;
 							 * out.
 							 *
 							 * See also TM_SelfModified response to
-							 * table_delete() above.
+							 * table_tuple_delete() above.
 							 */
 							if (tmfd.cmax != estate->es_output_cid)
 								ereport(ERROR,
@@ -898,7 +898,7 @@ ldelete:;
 							 * locking the latest version via
 							 * TUPLE_LOCK_FLAG_FIND_LAST_VERSION.
 							 */
-							elog(ERROR, "unexpected table_lock_tuple status: %u",
+							elog(ERROR, "unexpected table_tuple_lock status: %u",
 								 result);
 							return NULL;
 					}
@@ -916,7 +916,7 @@ ldelete:;
 				return NULL;
 
 			default:
-				elog(ERROR, "unrecognized table_delete status: %u", result);
+				elog(ERROR, "unrecognized table_tuple_delete status: %u", result);
 				return NULL;
 		}
 
@@ -988,8 +988,8 @@ ldelete:;
 			}
 			else
 			{
-				if (!table_fetch_row_version(resultRelationDesc, tupleid,
-											 SnapshotAny, slot))
+				if (!table_tuple_fetch_row_version(resultRelationDesc, tupleid,
+												   SnapshotAny, slot))
 					elog(ERROR, "failed to fetch deleted tuple for DELETE RETURNING");
 			}
 		}
@@ -1132,7 +1132,7 @@ ExecUpdate(ModifyTableState *mtstate,
 		 * If we generate a new candidate tuple after EvalPlanQual testing, we
 		 * must loop back here and recheck any RLS policies and constraints.
 		 * (We don't need to redo triggers, however.  If there are any BEFORE
-		 * triggers then trigger.c will have done table_lock_tuple to lock the
+		 * triggers then trigger.c will have done table_tuple_lock to lock the
 		 * correct tuple, so there's no need to do them again.)
 		 */
 lreplace:;
@@ -1307,12 +1307,12 @@ lreplace:;
 		 * needed for referential integrity updates in transaction-snapshot
 		 * mode transactions.
 		 */
-		result = table_update(resultRelationDesc, tupleid, slot,
-							  estate->es_output_cid,
-							  estate->es_snapshot,
-							  estate->es_crosscheck_snapshot,
-							  true /* wait for commit */ ,
-							  &tmfd, &lockmode, &update_indexes);
+		result = table_tuple_update(resultRelationDesc, tupleid, slot,
+									estate->es_output_cid,
+									estate->es_snapshot,
+									estate->es_crosscheck_snapshot,
+									true /* wait for commit */ ,
+									&tmfd, &lockmode, &update_indexes);
 
 		switch (result)
 		{
@@ -1371,7 +1371,7 @@ lreplace:;
 					inputslot = EvalPlanQualSlot(epqstate, resultRelationDesc,
 												 resultRelInfo->ri_RangeTableIndex);
 
-					result = table_lock_tuple(resultRelationDesc, tupleid,
+					result = table_tuple_lock(resultRelationDesc, tupleid,
 											  estate->es_snapshot,
 											  inputslot, estate->es_output_cid,
 											  lockmode, LockWaitBlock,
@@ -1409,7 +1409,7 @@ lreplace:;
 							 * otherwise error out.
 							 *
 							 * See also TM_SelfModified response to
-							 * table_update() above.
+							 * table_tuple_update() above.
 							 */
 							if (tmfd.cmax != estate->es_output_cid)
 								ereport(ERROR,
@@ -1419,8 +1419,8 @@ lreplace:;
 							return NULL;
 
 						default:
-							/* see table_lock_tuple call in ExecDelete() */
-							elog(ERROR, "unexpected table_lock_tuple status: %u",
+							/* see table_tuple_lock call in ExecDelete() */
+							elog(ERROR, "unexpected table_tuple_lock status: %u",
 								 result);
 							return NULL;
 					}
@@ -1437,7 +1437,7 @@ lreplace:;
 				return NULL;
 
 			default:
-				elog(ERROR, "unrecognized table_update status: %u", result);
+				elog(ERROR, "unrecognized table_tuple_update status: %u", result);
 				return NULL;
 		}
 
@@ -1518,7 +1518,7 @@ ExecOnConflictUpdate(ModifyTableState *mtstate,
 	 * previous conclusion that the tuple is conclusively committed is not
 	 * true anymore.
 	 */
-	test = table_lock_tuple(relation, conflictTid,
+	test = table_tuple_lock(relation, conflictTid,
 							estate->es_snapshot,
 							existing, estate->es_output_cid,
 							lockmode, LockWaitBlock, 0,
@@ -1609,7 +1609,7 @@ ExecOnConflictUpdate(ModifyTableState *mtstate,
 			return false;
 
 		default:
-			elog(ERROR, "unrecognized table_lock_tuple status: %u", test);
+			elog(ERROR, "unrecognized table_tuple_lock status: %u", test);
 	}
 
 	/* Success, the tuple is locked. */
@@ -1674,7 +1674,7 @@ ExecOnConflictUpdate(ModifyTableState *mtstate,
 
 	/*
 	 * Note that it is possible that the target tuple has been modified in
-	 * this session, after the above table_lock_tuple. We choose to not error
+	 * this session, after the above table_tuple_lock. We choose to not error
 	 * out in that case, in line with ExecUpdate's treatment of similar cases.
 	 * This can happen if an UPDATE is triggered from within ExecQual(),
 	 * ExecWithCheckOptions() or ExecProject() above, e.g. by selecting from a
diff --git a/src/backend/executor/nodeTidscan.c b/src/backend/executor/nodeTidscan.c
index 156be56a57d..65d863b6a56 100644
--- a/src/backend/executor/nodeTidscan.c
+++ b/src/backend/executor/nodeTidscan.c
@@ -365,9 +365,9 @@ TidNext(TidScanState *node)
 		 * current according to our snapshot.
 		 */
 		if (node->tss_isCurrentOf)
-			table_get_latest_tid(heapRelation, snapshot, &tid);
+			table_tuple_get_latest_tid(heapRelation, snapshot, &tid);
 
-		if (table_fetch_row_version(heapRelation, &tid, snapshot, slot))
+		if (table_tuple_fetch_row_version(heapRelation, &tid, snapshot, slot))
 			return slot;
 
 		/* Bad TID or failed snapshot qual; try next */
diff --git a/src/backend/utils/adt/tid.c b/src/backend/utils/adt/tid.c
index 6ab26d8ea8b..036cd8829dd 100644
--- a/src/backend/utils/adt/tid.c
+++ b/src/backend/utils/adt/tid.c
@@ -380,7 +380,7 @@ currtid_byreloid(PG_FUNCTION_ARGS)
 	ItemPointerCopy(tid, result);
 
 	snapshot = RegisterSnapshot(GetLatestSnapshot());
-	table_get_latest_tid(rel, snapshot, result);
+	table_tuple_get_latest_tid(rel, snapshot, result);
 	UnregisterSnapshot(snapshot);
 
 	table_close(rel, AccessShareLock);
@@ -415,7 +415,7 @@ currtid_byrelname(PG_FUNCTION_ARGS)
 	ItemPointerCopy(tid, result);
 
 	snapshot = RegisterSnapshot(GetLatestSnapshot());
-	table_get_latest_tid(rel, snapshot, result);
+	table_tuple_get_latest_tid(rel, snapshot, result);
 	UnregisterSnapshot(snapshot);
 
 	table_close(rel, AccessShareLock);
diff --git a/src/include/access/tableam.h b/src/include/access/tableam.h
index ebfa0d51855..39910ee0c8c 100644
--- a/src/include/access/tableam.h
+++ b/src/include/access/tableam.h
@@ -80,7 +80,7 @@ typedef enum TM_Result
 
 
 /*
- * When table_update, table_delete, or table_lock_tuple fail because the target
+ * When table_tuple_update, table_tuple_delete, or table_tuple_lock fail because the target
  * tuple is already outdated, they fill in this struct to provide information
  * to the caller about what happened.
  *
@@ -106,13 +106,13 @@ typedef struct TM_FailureData
 	bool		traversed;
 } TM_FailureData;
 
-/* "options" flag bits for table_insert */
+/* "options" flag bits for table_tuple_insert */
 #define TABLE_INSERT_SKIP_WAL		0x0001
 #define TABLE_INSERT_SKIP_FSM		0x0002
 #define TABLE_INSERT_FROZEN			0x0004
 #define TABLE_INSERT_NO_LOGICAL		0x0008
 
-/* flag bits for table_lock_tuple */
+/* flag bits for table_tuple_lock */
 /* Follow tuples whose update is in progress if lock modes don't conflict  */
 #define TUPLE_LOCK_FLAG_LOCK_UPDATE_IN_PROGRESS	(1 << 0)
 /* Follow update chain and lock latest version of tuple */
@@ -335,12 +335,12 @@ typedef struct TableAmRoutine
 	 * ------------------------------------------------------------------------
 	 */
 
-	/* see table_insert() for reference about parameters */
+	/* see table_tuple_insert() for reference about parameters */
 	void		(*tuple_insert) (Relation rel, TupleTableSlot *slot,
 								 CommandId cid, int options,
 								 struct BulkInsertStateData *bistate);
 
-	/* see table_insert_speculative() for reference about parameters */
+	/* see table_tuple_insert_speculative() for reference about parameters */
 	void		(*tuple_insert_speculative) (Relation rel,
 											 TupleTableSlot *slot,
 											 CommandId cid,
@@ -348,7 +348,7 @@ typedef struct TableAmRoutine
 											 struct BulkInsertStateData *bistate,
 											 uint32 specToken);
 
-	/* see table_complete_speculative() for reference about parameters */
+	/* see table_tuple_complete_speculative() for reference about parameters */
 	void		(*tuple_complete_speculative) (Relation rel,
 											   TupleTableSlot *slot,
 											   uint32 specToken,
@@ -358,7 +358,7 @@ typedef struct TableAmRoutine
 	void		(*multi_insert) (Relation rel, TupleTableSlot **slots, int nslots,
 								 CommandId cid, int options, struct BulkInsertStateData *bistate);
 
-	/* see table_delete() for reference about parameters */
+	/* see table_tuple_delete() for reference about parameters */
 	TM_Result	(*tuple_delete) (Relation rel,
 								 ItemPointer tid,
 								 CommandId cid,
@@ -368,7 +368,7 @@ typedef struct TableAmRoutine
 								 TM_FailureData *tmfd,
 								 bool changingPart);
 
-	/* see table_update() for reference about parameters */
+	/* see table_tuple_update() for reference about parameters */
 	TM_Result	(*tuple_update) (Relation rel,
 								 ItemPointer otid,
 								 TupleTableSlot *slot,
@@ -380,7 +380,7 @@ typedef struct TableAmRoutine
 								 LockTupleMode *lockmode,
 								 bool *update_indexes);
 
-	/* see table_lock_tuple() for reference about parameters */
+	/* see table_tuple_lock() for reference about parameters */
 	TM_Result	(*tuple_lock) (Relation rel,
 							   ItemPointer tid,
 							   Snapshot snapshot,
@@ -914,7 +914,7 @@ table_index_fetch_end(struct IndexFetchTableData *scan)
  * supports storing multiple row versions reachable via a single index entry
  * (like heap's HOT). Whereas table_fetch_row_version only evaluates the
  * tuple exactly at `tid`. Outside of index entry ->table tuple lookups,
- * table_fetch_row_version is what's usually needed.
+ * table_tuple_fetch_row_version is what's usually needed.
  */
 static inline bool
 table_index_fetch_tuple(struct IndexFetchTableData *scan,
@@ -957,10 +957,10 @@ extern bool table_index_fetch_tuple_check(Relation rel,
  * index entry->table tuple lookups.
  */
 static inline bool
-table_fetch_row_version(Relation rel,
-						ItemPointer tid,
-						Snapshot snapshot,
-						TupleTableSlot *slot)
+table_tuple_fetch_row_version(Relation rel,
+							  ItemPointer tid,
+							  Snapshot snapshot,
+							  TupleTableSlot *slot)
 {
 	return rel->rd_tableam->tuple_fetch_row_version(rel, tid, snapshot, slot);
 }
@@ -970,7 +970,7 @@ table_fetch_row_version(Relation rel,
  * point at the newest version.
  */
 static inline void
-table_get_latest_tid(Relation rel, Snapshot snapshot, ItemPointer tid)
+table_tuple_get_latest_tid(Relation rel, Snapshot snapshot, ItemPointer tid)
 {
 	rel->rd_tableam->tuple_get_latest_tid(rel, snapshot, tid);
 }
@@ -1050,8 +1050,8 @@ table_compute_xid_horizon_for_tuples(Relation rel,
  * reflected in the slots contents.
  */
 static inline void
-table_insert(Relation rel, TupleTableSlot *slot, CommandId cid,
-			 int options, struct BulkInsertStateData *bistate)
+table_tuple_insert(Relation rel, TupleTableSlot *slot, CommandId cid,
+				   int options, struct BulkInsertStateData *bistate)
 {
 	rel->rd_tableam->tuple_insert(rel, slot, cid, options,
 								  bistate);
@@ -1066,12 +1066,12 @@ table_insert(Relation rel, TupleTableSlot *slot, CommandId cid,
  *
  * A transaction having performed a speculative insertion has to either abort,
  * or finish the speculative insertion with
- * table_complete_speculative(succeeded = ...).
+ * table_tuple_complete_speculative(succeeded = ...).
  */
 static inline void
-table_insert_speculative(Relation rel, TupleTableSlot *slot, CommandId cid,
-						 int options, struct BulkInsertStateData *bistate,
-						 uint32 specToken)
+table_tuple_insert_speculative(Relation rel, TupleTableSlot *slot, CommandId cid,
+							   int options, struct BulkInsertStateData *bistate,
+							   uint32 specToken)
 {
 	rel->rd_tableam->tuple_insert_speculative(rel, slot, cid, options,
 											  bistate, specToken);
@@ -1082,8 +1082,8 @@ table_insert_speculative(Relation rel, TupleTableSlot *slot, CommandId cid,
  * succeeded is true, the tuple is fully inserted, if false, it's removed.
  */
 static inline void
-table_complete_speculative(Relation rel, TupleTableSlot *slot,
-						   uint32 specToken, bool succeeded)
+table_tuple_complete_speculative(Relation rel, TupleTableSlot *slot,
+								 uint32 specToken, bool succeeded)
 {
 	rel->rd_tableam->tuple_complete_speculative(rel, slot, specToken,
 												succeeded);
@@ -1098,7 +1098,7 @@ table_complete_speculative(Relation rel, TupleTableSlot *slot,
  *
  * Except for taking `nslots` tuples as input, as an array of TupleTableSlots
  * in `slots`, the parameters for table_multi_insert() are the same as for
- * table_insert().
+ * table_tuple_insert().
  *
  * Note: this leaks memory into the current memory context. You can create a
  * temporary context before calling this, if that's a problem.
@@ -1115,7 +1115,7 @@ table_multi_insert(Relation rel, TupleTableSlot **slots, int nslots,
  * Delete a tuple.
  *
  * NB: do not call this directly unless prepared to deal with
- * concurrent-update conditions.  Use simple_table_delete instead.
+ * concurrent-update conditions.  Use simple_table_tuple_delete instead.
  *
  * Input parameters:
  *	relation - table to be modified (caller must hold suitable lock)
@@ -1138,9 +1138,9 @@ table_multi_insert(Relation rel, TupleTableSlot **slots, int nslots,
  * struct TM_FailureData for additional info.
  */
 static inline TM_Result
-table_delete(Relation rel, ItemPointer tid, CommandId cid,
-			 Snapshot snapshot, Snapshot crosscheck, bool wait,
-			 TM_FailureData *tmfd, bool changingPart)
+table_tuple_delete(Relation rel, ItemPointer tid, CommandId cid,
+				   Snapshot snapshot, Snapshot crosscheck, bool wait,
+				   TM_FailureData *tmfd, bool changingPart)
 {
 	return rel->rd_tableam->tuple_delete(rel, tid, cid,
 										 snapshot, crosscheck,
@@ -1151,7 +1151,7 @@ table_delete(Relation rel, ItemPointer tid, CommandId cid,
  * Update a tuple.
  *
  * NB: do not call this directly unless you are prepared to deal with
- * concurrent-update conditions.  Use simple_table_update instead.
+ * concurrent-update conditions.  Use simple_table_tuple_update instead.
  *
  * Input parameters:
  *	relation - table to be modified (caller must hold suitable lock)
@@ -1182,10 +1182,10 @@ table_delete(Relation rel, ItemPointer tid, CommandId cid,
  * for additional info.
  */
 static inline TM_Result
-table_update(Relation rel, ItemPointer otid, TupleTableSlot *slot,
-			 CommandId cid, Snapshot snapshot, Snapshot crosscheck, bool wait,
-			 TM_FailureData *tmfd, LockTupleMode *lockmode,
-			 bool *update_indexes)
+table_tuple_update(Relation rel, ItemPointer otid, TupleTableSlot *slot,
+				   CommandId cid, Snapshot snapshot, Snapshot crosscheck, bool wait,
+				   TM_FailureData *tmfd, LockTupleMode *lockmode,
+				   bool *update_indexes)
 {
 	return rel->rd_tableam->tuple_update(rel, otid, slot,
 										 cid, snapshot, crosscheck,
@@ -1227,7 +1227,7 @@ table_update(Relation rel, ItemPointer otid, TupleTableSlot *slot,
  * comments for struct TM_FailureData for additional info.
  */
 static inline TM_Result
-table_lock_tuple(Relation rel, ItemPointer tid, Snapshot snapshot,
+table_tuple_lock(Relation rel, ItemPointer tid, Snapshot snapshot,
 				 TupleTableSlot *slot, CommandId cid, LockTupleMode mode,
 				 LockWaitPolicy wait_policy, uint8 flags,
 				 TM_FailureData *tmfd)
@@ -1601,12 +1601,12 @@ table_scan_sample_next_tuple(TableScanDesc scan,
  * ----------------------------------------------------------------------------
  */
 
-extern void simple_table_insert(Relation rel, TupleTableSlot *slot);
-extern void simple_table_delete(Relation rel, ItemPointer tid,
-					Snapshot snapshot);
-extern void simple_table_update(Relation rel, ItemPointer otid,
-					TupleTableSlot *slot, Snapshot snapshot,
-					bool *update_indexes);
+extern void simple_table_tuple_insert(Relation rel, TupleTableSlot *slot);
+extern void simple_table_tuple_delete(Relation rel, ItemPointer tid,
+									  Snapshot snapshot);
+extern void simple_table_tuple_update(Relation rel, ItemPointer otid,
+									  TupleTableSlot *slot, Snapshot snapshot,
+									  bool *update_indexes);
 
 
 /* ----------------------------------------------------------------------------
-- 
2.19.1

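Not part of the patch itself, but for readers curious how a rename like this can be applied mechanically: the sketch below shows one hypothetical way to do it with sed on a sample of the affected calls. The `\b` word boundary is a GNU sed extension; it is what keeps the `table_insert` rule from mangling `table_insert_speculative`, since `_` counts as a word character. This is only an illustration under those assumptions, not how the patch above was actually produced.

```shell
#!/bin/sh
# Hypothetical sketch: mechanically apply the agreed tableam wrapper
# renames to a sample file using GNU sed word boundaries (\b).
set -e

tmp=$(mktemp)
cat > "$tmp" <<'EOF'
test = table_lock_tuple(relation, conflictTid,
table_insert(rel, slot, cid, options, bistate);
table_insert_speculative(rel, slot, cid, options, bistate, specToken);
table_get_latest_tid(rel, snapshot, result);
EOF

# Longer names first is not strictly required here because \b refuses to
# match before "_", but it keeps the intent obvious.
RESULT=$(sed \
    -e 's/\btable_lock_tuple\b/table_tuple_lock/g' \
    -e 's/\btable_insert_speculative\b/table_tuple_insert_speculative/g' \
    -e 's/\btable_insert\b/table_tuple_insert/g' \
    -e 's/\btable_delete\b/table_tuple_delete/g' \
    -e 's/\btable_update\b/table_tuple_update/g' \
    -e 's/\btable_fetch_row_version\b/table_tuple_fetch_row_version/g' \
    -e 's/\btable_get_latest_tid\b/table_tuple_get_latest_tid/g' \
    "$tmp")

echo "$RESULT"
rm -f "$tmp"
```

In practice a tree-wide rename like this would be run over `src/` with `git grep -l` feeding the file list, followed by a compile and a manual pass to re-align continuation lines (as the whitespace changes in the hunks above show).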