Returning to an old thread that I just remembered:
On 09/01/2024 08:24, vignesh C wrote:
On Thu, 9 Nov 2023 at 21:48, Heikki Linnakangas <[email protected]> wrote:
On 18/09/2023 07:08, David Rowley wrote:
On Fri, 15 Sept 2023 at 22:37, Heikki Linnakangas <[email protected]> wrote:
A few quick comments:
- It would be nice to add a test for the issue that you fixed in patch
v7, i.e. if you prepare a transaction while holding session-level locks.
- GrantLockLocal() now calls MemoryContextAlloc(), which can fail if you
are out of memory. Is that handled gracefully or is the lock leaked?
The above comments are still valid.
CFBot shows that one of the tests aborted at [1] with:
[20:54:28.535] Core was generated by `postgres: subscriber: logical
replication apply worker for subscription 16397 '.
[20:54:28.535] Program terminated with signal SIGABRT, Aborted.
[20:54:28.535] #0 __GI_raise (sig=sig@entry=6) at
../sysdeps/unix/sysv/linux/raise.c:50
[20:54:28.535] Download failed: Invalid argument. Continuing without
source file ./signal/../sysdeps/unix/sysv/linux/raise.c.
[20:54:28.627]
[20:54:28.627] Thread 1 (Thread 0x7f0ea02d1a40 (LWP 50984)):
[20:54:28.627] #0 __GI_raise (sig=sig@entry=6) at
../sysdeps/unix/sysv/linux/raise.c:50
...
...
[20:54:28.627] #2 0x00005618e989d62f in ExceptionalCondition
(conditionName=conditionName@entry=0x5618e9b40f70
"dlist_is_empty(&(MyProc->myProcLocks[i]))",
fileName=fileName@entry=0x5618e9b40ec0
"../src/backend/storage/lmgr/proc.c", lineNumber=lineNumber@entry=856)
at ../src/backend/utils/error/assert.c:66
[20:54:28.627] No locals.
[20:54:28.627] #3 0x00005618e95e6847 in ProcKill (code=<optimized
out>, arg=<optimized out>) at ../src/backend/storage/lmgr/proc.c:856
[20:54:28.627] i = <optimized out>
[20:54:28.627] proc = <optimized out>
[20:54:28.627] procgloballist = <optimized out>
[20:54:28.627] __func__ = "ProcKill"
[20:54:28.627] #4 0x00005618e959ebcc in shmem_exit
(code=code@entry=1) at ../src/backend/storage/ipc/ipc.c:276
[20:54:28.627] __func__ = "shmem_exit"
[20:54:28.627] #5 0x00005618e959ecd0 in proc_exit_prepare
(code=code@entry=1) at ../src/backend/storage/ipc/ipc.c:198
[20:54:28.627] __func__ = "proc_exit_prepare"
[20:54:28.627] #6 0x00005618e959ee8e in proc_exit (code=code@entry=1)
at ../src/backend/storage/ipc/ipc.c:111
[20:54:28.627] __func__ = "proc_exit"
[20:54:28.627] #7 0x00005618e94aa54d in BackgroundWorkerMain () at
../src/backend/postmaster/bgworker.c:805
[20:54:28.627] local_sigjmp_buf = {{__jmpbuf =
{94665009627112, -3865857745677845768, 0, 0, 140732736634980, 1,
3865354362587970296, 7379258256398875384}, __mask_was_saved = 1,
__saved_mask = {__val = {18446744066192964099, 94665025527920,
94665025527920, 94665025527920, 0, 94665025528120, 8192, 1,
94664997686410, 94665009627040, 94664997622076, 94665025527920, 1, 0,
0, 140732736634980}}}}
[20:54:28.627] worker = 0x5618eb37c570
[20:54:28.627] entrypt = <optimized out>
[20:54:28.627] __func__ = "BackgroundWorkerMain"
[20:54:28.627] #8 0x00005618e94b495c in do_start_bgworker
(rw=rw@entry=0x5618eb3b73c8) at
../src/backend/postmaster/postmaster.c:5697
[20:54:28.627] worker_pid = <optimized out>
[20:54:28.627] __func__ = "do_start_bgworker"
[20:54:28.627] #9 0x00005618e94b4c32 in maybe_start_bgworkers () at
../src/backend/postmaster/postmaster.c:5921
[20:54:28.627] rw = 0x5618eb3b73c8
[20:54:28.627] num_launched = 0
[20:54:28.627] now = 0
[20:54:28.627] iter = {cur = 0x5618eb3b79a8, next =
0x5618eb382a20, prev = 0x5618ea44a980 <BackgroundWorkerList>}
[20:54:28.627] #10 0x00005618e94b574a in process_pm_pmsignal () at
../src/backend/postmaster/postmaster.c:5073
[20:54:28.627] __func__ = "process_pm_pmsignal"
[20:54:28.627] #11 0x00005618e94b5f4a in ServerLoop () at
../src/backend/postmaster/postmaster.c:1760
[1] - https://cirrus-ci.com/task/5118173163290624?logs=cores#L51
I was able to reproduce and debug that. The patch made the assumption
that when a process is about to exit, in ShutdownPostgres(), it can only
have session-level locks left, after we have done
AbortOutOfAnyTransaction(). That assumption did not hold, because if
proc_exit() is called while we're waiting on a lock, the lock is not yet
registered with the resource owner (or if it's a session lock, it's not
yet added to the 'session_locks' list). Therefore, when
ShutdownPostgres() called AbortOutOfAnyTransaction() and
LockReleaseSession(USER_LOCKMETHOD), it would not clean up the lock
objects for the lock we were waiting on. That's pretty straightforward
to fix by also calling LockErrorCleanup() from ShutdownPostgres().
Alternatively, perhaps we should register the lock with the resource
owner earlier.
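In pseudocode, the fix described above amounts to one extra call in the shutdown path (a sketch only; the actual v10-0005 patch is in the attachment and may be structured differently):

```
static void
ShutdownPostgres(int code, Datum arg)
{
    /* Make sure we've killed any active transaction */
    AbortOutOfAnyTransaction();

    /*
     * If proc_exit() was called while we were waiting on a lock, that
     * lock is not yet registered with any resource owner, nor added to
     * the session_locks list, so neither the call above nor the one
     * below would clean it up.  Do it explicitly.
     */
    LockErrorCleanup();

    /* User locks are not released by transaction end */
    LockReleaseSession(USER_LOCKMETHOD);
}
```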
Here's a rebased version of the patch. I also split it into a few parts
for easier review:
v10-0001-Assert-that-all-fastpath-locks-are-released-befo.patch adds a
few assertions, which are probably good to have even without the rest of
the patches.
v10-0002-use-linked-list-in-ResourceOwner-to-hold-LOCALLO.patch replaces
the "cache" in ResourceOwnerData with a linked list.
v10-0003-stop-using-releaseMask-in-PostPrepare_Locks.patch contains the
changes to PostPrepare_Locks(), so that it doesn't use the 'releaseMask'
field anymore.
v10-0004-Remove-LockReleaseAll.patch removes LockReleaseAll, and
replaces it with LockReleaseSession() calls.
v10-0005-Fix-assertion-failure-from-ProcKill-in-subscript.patch fixes
the assertion failure.
I realized that after these patches, we only use PGPROC->myProcLocks
lists in assertions and in PostPrepare_Locks(). PostPrepare_Locks()
would be pretty easy to refactor to not need them either. That would
shrink the PGPROC struct considerably.
- Heikki
From 5a4a2ea67b79e8cd3e1349fda4a103a63d81ef21 Mon Sep 17 00:00:00 2001
From: Heikki Linnakangas <[email protected]>
Date: Mon, 19 Jan 2026 16:03:02 +0200
Subject: [PATCH v10 1/5] Assert that all fastpath locks are released before
exiting
---
src/backend/storage/lmgr/proc.c | 36 ++++++++++++++-------------------
1 file changed, 15 insertions(+), 21 deletions(-)
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index 063826ae576..cb19b2c1169 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -498,13 +498,11 @@ InitProcess(void)
MyProc->waitProcLock = NULL;
pg_atomic_write_u64(&MyProc->waitStart, 0);
#ifdef USE_ASSERT_CHECKING
- {
- int i;
-
- /* Last process should have released all locks. */
- for (i = 0; i < NUM_LOCK_PARTITIONS; i++)
- Assert(dlist_is_empty(&(MyProc->myProcLocks[i])));
- }
+ /* Last process should have released all locks. */
+ for (int i = 0; i < NUM_LOCK_PARTITIONS; i++)
+ Assert(dlist_is_empty(&(MyProc->myProcLocks[i])));
+ for (uint32 g = 0; g < FastPathLockGroupsPerBackend; g++)
+ Assert(MyProc->fpLockBits[g] == 0);
#endif
MyProc->recoveryConflictPending = false;
@@ -694,13 +692,11 @@ InitAuxiliaryProcess(void)
MyProc->waitProcLock = NULL;
pg_atomic_write_u64(&MyProc->waitStart, 0);
#ifdef USE_ASSERT_CHECKING
- {
- int i;
-
- /* Last process should have released all locks. */
- for (i = 0; i < NUM_LOCK_PARTITIONS; i++)
- Assert(dlist_is_empty(&(MyProc->myProcLocks[i])));
- }
+ /* Last process should have released all locks. */
+ for (int i = 0; i < NUM_LOCK_PARTITIONS; i++)
+ Assert(dlist_is_empty(&(MyProc->myProcLocks[i])));
+ for (uint32 g = 0; g < FastPathLockGroupsPerBackend; g++)
+ Assert(MyProc->fpLockBits[g] == 0);
#endif
/*
@@ -936,13 +932,11 @@ ProcKill(int code, Datum arg)
SyncRepCleanupAtProcExit();
#ifdef USE_ASSERT_CHECKING
- {
- int i;
-
- /* Last process should have released all locks. */
- for (i = 0; i < NUM_LOCK_PARTITIONS; i++)
- Assert(dlist_is_empty(&(MyProc->myProcLocks[i])));
- }
+ /* All locks should be released before exiting. */
+ for (int i = 0; i < NUM_LOCK_PARTITIONS; i++)
+ Assert(dlist_is_empty(&(MyProc->myProcLocks[i])));
+ for (uint32 g = 0; g < FastPathLockGroupsPerBackend; g++)
+ Assert(MyProc->fpLockBits[g] == 0);
#endif
/*
--
2.47.3
From 5ed0d4008afdfddf9b6fc082b12b1e51938402bd Mon Sep 17 00:00:00 2001
From: Heikki Linnakangas <[email protected]>
Date: Mon, 19 Jan 2026 15:50:48 +0200
Subject: [PATCH v10 2/5] use linked list in ResourceOwner to hold
LOCALLOCKOWNERS
---
src/backend/storage/lmgr/lock.c | 362 +++++++++++---------------
src/backend/utils/resowner/resowner.c | 144 +++++-----
src/include/storage/lock.h | 17 +-
src/include/utils/resowner.h | 6 +-
4 files changed, 246 insertions(+), 283 deletions(-)
diff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c
index 7f0cd784f79..fe2821e31f7 100644
--- a/src/backend/storage/lmgr/lock.c
+++ b/src/backend/storage/lmgr/lock.c
@@ -322,6 +322,8 @@ static HTAB *LockMethodLockHash;
static HTAB *LockMethodProcLockHash;
static HTAB *LockMethodLocalHash;
+/* A memory context for storing LOCALLOCKOWNER structs */
+static MemoryContext LocalLockOwnerContext;
/* private state for error cleanup */
static LOCALLOCK *StrongLockInProgress;
@@ -416,8 +418,8 @@ static void BeginStrongLockAcquire(LOCALLOCK *locallock, uint32 fasthashcode);
static void FinishStrongLockAcquire(void);
static ProcWaitStatus WaitOnLock(LOCALLOCK *locallock, ResourceOwner owner);
static void waitonlock_error_callback(void *arg);
-static void ReleaseLockIfHeld(LOCALLOCK *locallock, bool sessionLock);
-static void LockReassignOwner(LOCALLOCK *locallock, ResourceOwner parent);
+static void ReleaseLockIfHeld(LOCALLOCKOWNER *locallockowner, bool sessionLock);
+static void LockReassignOwner(LOCALLOCKOWNER *locallockowner, ResourceOwner parent);
static bool UnGrantLock(LOCK *lock, LOCKMODE lockmode,
PROCLOCK *proclock, LockMethod lockMethodTable);
static void CleanUpLock(LOCK *lock, PROCLOCK *proclock,
@@ -517,6 +519,17 @@ InitLockManagerAccess(void)
16,
&info,
HASH_ELEM | HASH_BLOBS);
+
+ /*
+ * Create a slab context for storing LOCALLOCKOWNERs. Slab seems like a
+ * good context type for this as it will manage fragmentation better than
+ * aset.c contexts and it will free() excess memory rather than maintain
+ * excessively long freelists after a large surge in locking requirements.
+ */
+ LocalLockOwnerContext = SlabContextCreate(TopMemoryContext,
+ "LOCALLOCKOWNER context",
+ SLAB_DEFAULT_BLOCK_SIZE,
+ sizeof(LOCALLOCKOWNER));
}
@@ -906,25 +919,7 @@ LockAcquireExtended(const LOCKTAG *locktag,
locallock->nLocks = 0;
locallock->holdsStrongLockCount = false;
locallock->lockCleared = false;
- locallock->numLockOwners = 0;
- locallock->maxLockOwners = 8;
- locallock->lockOwners = NULL; /* in case next line fails */
- locallock->lockOwners = (LOCALLOCKOWNER *)
- MemoryContextAlloc(TopMemoryContext,
- locallock->maxLockOwners * sizeof(LOCALLOCKOWNER));
- }
- else
- {
- /* Make sure there will be room to remember the lock */
- if (locallock->numLockOwners >= locallock->maxLockOwners)
- {
- int newsize = locallock->maxLockOwners * 2;
-
- locallock->lockOwners = (LOCALLOCKOWNER *)
- repalloc(locallock->lockOwners,
- newsize * sizeof(LOCALLOCKOWNER));
- locallock->maxLockOwners = newsize;
- }
+ dlist_init(&locallock->locallockowners);
}
hashcode = locallock->hashcode;
@@ -1475,17 +1470,18 @@ CheckAndSetLockHeld(LOCALLOCK *locallock, bool acquired)
static void
RemoveLocalLock(LOCALLOCK *locallock)
{
- int i;
+ dlist_mutable_iter iter;
- for (i = locallock->numLockOwners - 1; i >= 0; i--)
+ dlist_foreach_modify(iter, &locallock->locallockowners)
{
- if (locallock->lockOwners[i].owner != NULL)
- ResourceOwnerForgetLock(locallock->lockOwners[i].owner, locallock);
+ LOCALLOCKOWNER *locallockowner = dlist_container(LOCALLOCKOWNER, locallock_node, iter.cur);
+
+ dlist_delete(&locallockowner->locallock_node);
+ if (locallockowner->owner != NULL)
+ ResourceOwnerForgetLock(locallockowner);
+ pfree(locallockowner);
}
- locallock->numLockOwners = 0;
- if (locallock->lockOwners != NULL)
- pfree(locallock->lockOwners);
- locallock->lockOwners = NULL;
+ Assert(dlist_is_empty(&locallock->locallockowners));
if (locallock->holdsStrongLockCount)
{
@@ -1791,26 +1787,38 @@ CleanUpLock(LOCK *lock, PROCLOCK *proclock,
static void
GrantLockLocal(LOCALLOCK *locallock, ResourceOwner owner)
{
- LOCALLOCKOWNER *lockOwners = locallock->lockOwners;
- int i;
+ LOCALLOCKOWNER *locallockowner;
+ dlist_iter iter;
- Assert(locallock->numLockOwners < locallock->maxLockOwners);
/* Count the total */
locallock->nLocks++;
/* Count the per-owner lock */
- for (i = 0; i < locallock->numLockOwners; i++)
+ dlist_foreach(iter, &locallock->locallockowners)
{
- if (lockOwners[i].owner == owner)
+ locallockowner = dlist_container(LOCALLOCKOWNER, locallock_node, iter.cur);
+
+ if (locallockowner->owner == owner)
{
- lockOwners[i].nLocks++;
+ locallockowner->nLocks++;
return;
}
}
- lockOwners[i].owner = owner;
- lockOwners[i].nLocks = 1;
- locallock->numLockOwners++;
+ locallockowner = MemoryContextAlloc(LocalLockOwnerContext, sizeof(LOCALLOCKOWNER));
+ locallockowner->owner = owner;
+ locallockowner->nLocks = 1;
+ locallockowner->locallock = locallock;
+
+ dlist_push_tail(&locallock->locallockowners, &locallockowner->locallock_node);
+
if (owner != NULL)
- ResourceOwnerRememberLock(owner, locallock);
+ ResourceOwnerRememberLock(owner, locallockowner);
+ else
+ {
+ /* session lock */
+ LOCKMETHODID lockmethodid = LOCALLOCK_LOCKMETHOD(*locallockowner->locallock);
+
+ Assert(lockmethodid > 0 && lockmethodid <= 2);
+ }
/* Indicate that the lock is acquired for certain types of locks. */
CheckAndSetLockHeld(locallock, true);
@@ -2148,9 +2156,9 @@ LockRelease(const LOCKTAG *locktag, LOCKMODE lockmode, bool sessionLock)
* Decrease the count for the resource owner.
*/
{
- LOCALLOCKOWNER *lockOwners = locallock->lockOwners;
ResourceOwner owner;
- int i;
+ dlist_mutable_iter iter;
+ bool found = false;
/* Identify owner for lock */
if (sessionLock)
@@ -2158,24 +2166,25 @@ LockRelease(const LOCKTAG *locktag, LOCKMODE lockmode, bool sessionLock)
else
owner = CurrentResourceOwner;
- for (i = locallock->numLockOwners - 1; i >= 0; i--)
+ dlist_foreach_modify(iter, &locallock->locallockowners)
{
- if (lockOwners[i].owner == owner)
+ LOCALLOCKOWNER *locallockowner = dlist_container(LOCALLOCKOWNER, locallock_node, iter.cur);
+
+ if (locallockowner->owner == owner)
{
- Assert(lockOwners[i].nLocks > 0);
- if (--lockOwners[i].nLocks == 0)
+ Assert(locallockowner->nLocks > 0);
+ if (--locallockowner->nLocks == 0)
{
+ dlist_delete(&locallockowner->locallock_node);
if (owner != NULL)
- ResourceOwnerForgetLock(owner, locallock);
- /* compact out unused slot */
- locallock->numLockOwners--;
- if (i < locallock->numLockOwners)
- lockOwners[i] = lockOwners[locallock->numLockOwners];
+ ResourceOwnerForgetLock(locallockowner);
+ pfree(locallockowner);
}
+ found = true;
break;
}
}
- if (i < 0)
+ if (!found)
{
/* don't release a lock belonging to another owner */
elog(WARNING, "you don't own a lock of type %s",
@@ -2368,29 +2377,30 @@ LockReleaseAll(LOCKMETHODID lockmethodid, bool allLocks)
*/
if (!allLocks)
{
- LOCALLOCKOWNER *lockOwners = locallock->lockOwners;
+ dlist_mutable_iter iter;
+ int session_locks = 0;
- /* If session lock is above array position 0, move it down to 0 */
- for (i = 0; i < locallock->numLockOwners; i++)
+ dlist_foreach_modify(iter, &locallock->locallockowners)
{
- if (lockOwners[i].owner == NULL)
- lockOwners[0] = lockOwners[i];
+ LOCALLOCKOWNER *locallockowner = dlist_container(LOCALLOCKOWNER, locallock_node, iter.cur);
+
+ if (locallockowner->owner != NULL)
+ {
+ dlist_delete(&locallockowner->locallock_node);
+ ResourceOwnerForgetLock(locallockowner);
+ pfree(locallockowner);
+ }
else
- ResourceOwnerForgetLock(lockOwners[i].owner, locallock);
+ session_locks += locallockowner->nLocks;
}
- if (locallock->numLockOwners > 0 &&
- lockOwners[0].owner == NULL &&
- lockOwners[0].nLocks > 0)
+ /* Fix the locallock to show just the session locks */
+ locallock->nLocks = session_locks;
+ if (session_locks > 0)
{
- /* Fix the locallock to show just the session locks */
- locallock->nLocks = lockOwners[0].nLocks;
- locallock->numLockOwners = 1;
/* We aren't deleting this locallock, so done */
continue;
}
- else
- locallock->numLockOwners = 0;
}
#ifdef USE_ASSERT_CHECKING
@@ -2590,52 +2600,41 @@ LockReleaseSession(LOCKMETHODID lockmethodid)
while ((locallock = (LOCALLOCK *) hash_seq_search(&status)) != NULL)
{
+ dlist_mutable_iter iter;
+
/* Ignore items that are not of the specified lock method */
if (LOCALLOCK_LOCKMETHOD(*locallock) != lockmethodid)
continue;
- ReleaseLockIfHeld(locallock, true);
+ dlist_foreach_modify(iter, &locallock->locallockowners)
+ {
+ LOCALLOCKOWNER *locallockowner = dlist_container(LOCALLOCKOWNER, locallock_node, iter.cur);
+
+ if (locallockowner->owner == NULL)
+ ReleaseLockIfHeld(locallockowner, true);
+ }
}
}
/*
* LockReleaseCurrentOwner
- * Release all locks belonging to CurrentResourceOwner
- *
- * If the caller knows what those locks are, it can pass them as an array.
- * That speeds up the call significantly, when a lot of locks are held.
- * Otherwise, pass NULL for locallocks, and we'll traverse through our hash
- * table to find them.
+ * Release all locks belonging to 'owner'
*/
void
-LockReleaseCurrentOwner(LOCALLOCK **locallocks, int nlocks)
+LockReleaseCurrentOwner(struct ResourceOwnerData *owner, LOCALLOCKOWNER *locallockowner)
{
- if (locallocks == NULL)
- {
- HASH_SEQ_STATUS status;
- LOCALLOCK *locallock;
-
- hash_seq_init(&status, LockMethodLocalHash);
+ Assert(locallockowner->owner == owner);
- while ((locallock = (LOCALLOCK *) hash_seq_search(&status)) != NULL)
- ReleaseLockIfHeld(locallock, false);
- }
- else
- {
- int i;
-
- for (i = nlocks - 1; i >= 0; i--)
- ReleaseLockIfHeld(locallocks[i], false);
- }
+ ReleaseLockIfHeld(locallockowner, false);
}
/*
* ReleaseLockIfHeld
- * Release any session-level locks on this lockable object if sessionLock
- * is true; else, release any locks held by CurrentResourceOwner.
+ * Release any session-level locks on this 'locallockowner' if sessionLock
+ * is true; else, release any locks held by 'locallockowner'.
*
* It is tempting to pass this a ResourceOwner pointer (or NULL for session
- * locks), but without refactoring LockRelease() we cannot support releasing
+ * locks), but without refactoring LockRelease() we cannot support releasing XXX
* locks belonging to resource owners other than CurrentResourceOwner.
* If we were to refactor, it'd be a good idea to fix it so we don't have to
* do a hashtable lookup of the locallock, too. However, currently this
@@ -2643,52 +2642,39 @@ LockReleaseCurrentOwner(LOCALLOCK **locallocks, int nlocks)
* convenience.
*/
static void
-ReleaseLockIfHeld(LOCALLOCK *locallock, bool sessionLock)
+ReleaseLockIfHeld(LOCALLOCKOWNER *locallockowner, bool sessionLock)
{
- ResourceOwner owner;
- LOCALLOCKOWNER *lockOwners;
- int i;
+ LOCALLOCK *locallock = locallockowner->locallock;
- /* Identify owner for lock (must match LockRelease!) */
+ /* release all references to the lock by this resource owner */
if (sessionLock)
- owner = NULL;
+ Assert(locallockowner->owner == NULL);
else
- owner = CurrentResourceOwner;
+ Assert(locallockowner->owner != NULL);
- /* Scan to see if there are any locks belonging to the target owner */
- lockOwners = locallock->lockOwners;
- for (i = locallock->numLockOwners - 1; i >= 0; i--)
+ if (locallockowner->nLocks < locallock->nLocks)
{
- if (lockOwners[i].owner == owner)
- {
- Assert(lockOwners[i].nLocks > 0);
- if (lockOwners[i].nLocks < locallock->nLocks)
- {
- /*
- * We will still hold this lock after forgetting this
- * ResourceOwner.
- */
- locallock->nLocks -= lockOwners[i].nLocks;
- /* compact out unused slot */
- locallock->numLockOwners--;
- if (owner != NULL)
- ResourceOwnerForgetLock(owner, locallock);
- if (i < locallock->numLockOwners)
- lockOwners[i] = lockOwners[locallock->numLockOwners];
- }
- else
- {
- Assert(lockOwners[i].nLocks == locallock->nLocks);
- /* We want to call LockRelease just once */
- lockOwners[i].nLocks = 1;
- locallock->nLocks = 1;
- if (!LockRelease(&locallock->tag.lock,
- locallock->tag.mode,
- sessionLock))
- elog(WARNING, "ReleaseLockIfHeld: failed??");
- }
- break;
- }
+ /*
+ * We will still hold this lock after forgetting this ResourceOwner.
+ */
+ locallock->nLocks -= locallockowner->nLocks;
+
+ dlist_delete(&locallockowner->locallock_node);
+ if (locallockowner->owner)
+ ResourceOwnerForgetLock(locallockowner);
+ pfree(locallockowner);
+ }
+ else
+ {
+ Assert(locallockowner->nLocks == locallock->nLocks);
+ /* We want to call LockRelease just once */
+ locallockowner->nLocks = 1;
+ locallock->nLocks = 1;
+
+ if (!LockRelease(&locallock->tag.lock,
+ locallock->tag.mode,
+ sessionLock))
+ elog(WARNING, "ReleaseLockIfHeld: failed??");
}
}
@@ -2696,82 +2682,47 @@ ReleaseLockIfHeld(LOCALLOCK *locallock, bool sessionLock)
* LockReassignCurrentOwner
* Reassign all locks belonging to CurrentResourceOwner to belong
* to its parent resource owner.
- *
- * If the caller knows what those locks are, it can pass them as an array.
- * That speeds up the call significantly, when a lot of locks are held
- * (e.g pg_dump with a large schema). Otherwise, pass NULL for locallocks,
- * and we'll traverse through our hash table to find them.
*/
void
-LockReassignCurrentOwner(LOCALLOCK **locallocks, int nlocks)
+LockReassignCurrentOwner(LOCALLOCKOWNER *locallockowner)
{
ResourceOwner parent = ResourceOwnerGetParent(CurrentResourceOwner);
- Assert(parent != NULL);
-
- if (locallocks == NULL)
- {
- HASH_SEQ_STATUS status;
- LOCALLOCK *locallock;
-
- hash_seq_init(&status, LockMethodLocalHash);
-
- while ((locallock = (LOCALLOCK *) hash_seq_search(&status)) != NULL)
- LockReassignOwner(locallock, parent);
- }
- else
- {
- int i;
-
- for (i = nlocks - 1; i >= 0; i--)
- LockReassignOwner(locallocks[i], parent);
- }
+ LockReassignOwner(locallockowner, parent);
}
/*
- * Subroutine of LockReassignCurrentOwner. Reassigns a given lock belonging to
- * CurrentResourceOwner to its parent.
+ * Subroutine of LockReassignCurrentOwner. Reassigns the given
+ *'locallockowner' to 'parent'.
*/
static void
-LockReassignOwner(LOCALLOCK *locallock, ResourceOwner parent)
+LockReassignOwner(LOCALLOCKOWNER *locallockowner, ResourceOwner parent)
{
- LOCALLOCKOWNER *lockOwners;
- int i;
- int ic = -1;
- int ip = -1;
+ LOCALLOCK *locallock = locallockowner->locallock;
+ dlist_iter iter;
- /*
- * Scan to see if there are any locks belonging to current owner or its
- * parent
- */
- lockOwners = locallock->lockOwners;
- for (i = locallock->numLockOwners - 1; i >= 0; i--)
+ ResourceOwnerForgetLock(locallockowner);
+
+ dlist_foreach(iter, &locallock->locallockowners)
{
- if (lockOwners[i].owner == CurrentResourceOwner)
- ic = i;
- else if (lockOwners[i].owner == parent)
- ip = i;
- }
+ LOCALLOCKOWNER *parentlocalowner = dlist_container(LOCALLOCKOWNER, locallock_node, iter.cur);
- if (ic < 0)
- return; /* no current locks */
+ Assert(parentlocalowner->locallock == locallock);
- if (ip < 0)
- {
- /* Parent has no slot, so just give it the child's slot */
- lockOwners[ic].owner = parent;
- ResourceOwnerRememberLock(parent, locallock);
- }
- else
- {
- /* Merge child's count with parent's */
- lockOwners[ip].nLocks += lockOwners[ic].nLocks;
- /* compact out unused slot */
- locallock->numLockOwners--;
- if (ic < locallock->numLockOwners)
- lockOwners[ic] = lockOwners[locallock->numLockOwners];
+ if (parentlocalowner->owner != parent)
+ continue;
+
+ parentlocalowner->nLocks += locallockowner->nLocks;
+
+ locallockowner->nLocks = 0;
+ dlist_delete(&locallockowner->locallock_node);
+ pfree(locallockowner);
+ return;
}
- ResourceOwnerForgetLock(CurrentResourceOwner, locallock);
+
+ /* reassign locallockowner to parent resowner */
+ locallockowner->owner = parent;
+ ResourceOwnerRememberLock(parent, locallockowner);
}
/*
@@ -3414,10 +3365,9 @@ CheckForSessionAndXactLocks(void)
while ((locallock = (LOCALLOCK *) hash_seq_search(&status)) != NULL)
{
- LOCALLOCKOWNER *lockOwners = locallock->lockOwners;
+ dlist_iter iter;
PerLockTagEntry *hentry;
bool found;
- int i;
/*
* Ignore VXID locks. We don't want those to be held by prepared
@@ -3438,9 +3388,11 @@ CheckForSessionAndXactLocks(void)
hentry->sessLock = hentry->xactLock = false;
/* Scan to see if we hold lock at session or xact level or both */
- for (i = locallock->numLockOwners - 1; i >= 0; i--)
+ dlist_foreach(iter, &locallock->locallockowners)
{
- if (lockOwners[i].owner == NULL)
+ LOCALLOCKOWNER *locallockowner = dlist_container(LOCALLOCKOWNER, locallock_node, iter.cur);
+
+ if (locallockowner->owner == NULL)
hentry->sessLock = true;
else
hentry->xactLock = true;
@@ -3487,10 +3439,9 @@ AtPrepare_Locks(void)
while ((locallock = (LOCALLOCK *) hash_seq_search(&status)) != NULL)
{
TwoPhaseLockRecord record;
- LOCALLOCKOWNER *lockOwners = locallock->lockOwners;
+ dlist_iter iter;
bool haveSessionLock;
bool haveXactLock;
- int i;
/*
* Ignore VXID locks. We don't want those to be held by prepared
@@ -3505,9 +3456,11 @@ AtPrepare_Locks(void)
/* Scan to see whether we hold it at session or transaction level */
haveSessionLock = haveXactLock = false;
- for (i = locallock->numLockOwners - 1; i >= 0; i--)
+ dlist_foreach(iter, &locallock->locallockowners)
{
- if (lockOwners[i].owner == NULL)
+ LOCALLOCKOWNER *locallockowner = dlist_container(LOCALLOCKOWNER, locallock_node, iter.cur);
+
+ if (locallockowner->owner == NULL)
haveSessionLock = true;
else
haveXactLock = true;
@@ -3599,10 +3552,9 @@ PostPrepare_Locks(FullTransactionId fxid)
while ((locallock = (LOCALLOCK *) hash_seq_search(&status)) != NULL)
{
- LOCALLOCKOWNER *lockOwners = locallock->lockOwners;
+ dlist_iter iter;
bool haveSessionLock;
bool haveXactLock;
- int i;
if (locallock->proclock == NULL || locallock->lock == NULL)
{
@@ -3621,9 +3573,11 @@ PostPrepare_Locks(FullTransactionId fxid)
/* Scan to see whether we hold it at session or transaction level */
haveSessionLock = haveXactLock = false;
- for (i = locallock->numLockOwners - 1; i >= 0; i--)
+ dlist_foreach(iter, &locallock->locallockowners)
{
- if (lockOwners[i].owner == NULL)
+ LOCALLOCKOWNER *locallockowner = dlist_container(LOCALLOCKOWNER, locallock_node, iter.cur);
+
+ if (locallockowner->owner == NULL)
haveSessionLock = true;
else
haveXactLock = true;
diff --git a/src/backend/utils/resowner/resowner.c b/src/backend/utils/resowner/resowner.c
index 06e1121c5ff..b327c9ee610 100644
--- a/src/backend/utils/resowner/resowner.c
+++ b/src/backend/utils/resowner/resowner.c
@@ -26,12 +26,11 @@
* release-priority of each resource, and release them in that order.
*
* Local lock references are special, they are not stored in the array or
- * the hash table. Instead, each resource owner has a separate small cache
+ * the hash table. Instead, each resource owner has a separate linked list
* of locks it owns. The lock manager has the same information in its local
- * lock hash table, and we fall back on that if the cache overflows, but
- * traversing the hash table is slower when there are a lot of locks
- * belonging to other resource owners. This is to speed up bulk releasing
- * or reassigning locks from a resource owner to its parent.
+ * lock hash table, but traversing the hash table is slower when there are a
+ * lot of locks belonging to other resource owners. This is to speed up
+ * bulk releasing or reassigning locks from a resource owner to its parent.
*
*
* Portions Copyright (c) 1996-2026, PostgreSQL Global Development Group
@@ -94,18 +93,6 @@ typedef struct ResourceElem
StaticAssertDecl(RESOWNER_HASH_MAX_ITEMS(RESOWNER_HASH_INIT_SIZE) >= RESOWNER_ARRAY_SIZE,
"initial hash size too small compared to array size");
-/*
- * MAX_RESOWNER_LOCKS is the size of the per-resource owner locks cache. It's
- * chosen based on some testing with pg_dump with a large schema. When the
- * tests were done (on 9.2), resource owners in a pg_dump run contained up
- * to 9 locks, regardless of the schema size, except for the top resource
- * owner which contained much more (overflowing the cache). 15 seems like a
- * nice round number that's somewhat higher than what pg_dump needs. Note that
- * making this number larger is not free - the bigger the cache, the slower
- * it is to release locks (in retail), when a resource owner holds many locks.
- */
-#define MAX_RESOWNER_LOCKS 15
-
/*
* ResourceOwner objects look like this
*/
@@ -127,10 +114,9 @@ struct ResourceOwnerData
bool sorted; /* are 'hash' and 'arr' sorted by priority? */
/*
- * Number of items in the locks cache, array, and hash table respectively.
- * (These are packed together to avoid padding in the struct.)
+ * Number of items in the array and hash table respectively. (These are
+ * packed together to avoid padding in the struct.)
*/
- uint8 nlocks; /* number of owned locks */
uint8 narr; /* how many items are stored in the array */
uint32 nhash; /* how many items are stored in the hash */
@@ -155,8 +141,7 @@ struct ResourceOwnerData
uint32 capacity; /* allocated length of hash[] */
uint32 grow_at; /* grow hash when reach this */
- /* The local locks cache. */
- LOCALLOCK *locks[MAX_RESOWNER_LOCKS]; /* list of owned locks */
+ dlist_head locks; /* dlist of owned locks */
/*
* AIO handles need be registered in critical sections and therefore
@@ -422,6 +407,7 @@ ResourceOwnerCreate(ResourceOwner parent, const char *name)
owner = (ResourceOwner) MemoryContextAllocZero(TopMemoryContext,
sizeof(struct ResourceOwnerData));
owner->name = name;
+ dlist_init(&owner->locks);
if (parent)
{
@@ -742,8 +728,19 @@ ResourceOwnerReleaseInternal(ResourceOwner owner,
}
else if (phase == RESOURCE_RELEASE_LOCKS)
{
+ dlist_mutable_iter iter;
+
if (isTopLevel)
{
+ dlist_foreach_modify(iter, &owner->locks)
+ {
+ LOCALLOCKOWNER *locallockowner = dlist_container(LOCALLOCKOWNER, resowner_node, iter.cur);
+
+ LockReleaseCurrentOwner(owner, locallockowner);
+ }
+
+ Assert(dlist_is_empty(&owner->locks));
+
/*
* For a top-level xact we are going to release all locks (or at
* least all non-session locks), so just do a single lmgr call at
@@ -762,30 +759,30 @@ ResourceOwnerReleaseInternal(ResourceOwner owner,
* subtransaction, we do NOT release its locks yet, but transfer
* them to the parent.
*/
- LOCALLOCK **locks;
- int nlocks;
+ if (isCommit)
+ {
+ dlist_foreach_modify(iter, &owner->locks)
+ {
+ LOCALLOCKOWNER *locallockowner = dlist_container(LOCALLOCKOWNER,
+ resowner_node,
+ iter.cur);
- Assert(owner->parent != NULL);
+ LockReassignCurrentOwner(locallockowner);
+ }
- /*
- * Pass the list of locks owned by this resource owner to the lock
- * manager, unless it has overflowed.
- */
- if (owner->nlocks > MAX_RESOWNER_LOCKS)
- {
- locks = NULL;
- nlocks = 0;
+ Assert(dlist_is_empty(&owner->locks));
}
else
{
- locks = owner->locks;
- nlocks = owner->nlocks;
- }
+ dlist_foreach_modify(iter, &owner->locks)
+ {
+ LOCALLOCKOWNER *locallockowner = dlist_container(LOCALLOCKOWNER, resowner_node, iter.cur);
- if (isCommit)
- LockReassignCurrentOwner(locks, nlocks);
- else
- LockReleaseCurrentOwner(locks, nlocks);
+ LockReleaseCurrentOwner(owner, locallockowner);
+ }
+
+ Assert(dlist_is_empty(&owner->locks));
+ }
}
}
else if (phase == RESOURCE_RELEASE_AFTER_LOCKS)
@@ -873,7 +870,7 @@ ResourceOwnerDelete(ResourceOwner owner)
/* And it better not own any resources, either */
Assert(owner->narr == 0);
Assert(owner->nhash == 0);
- Assert(owner->nlocks == 0 || owner->nlocks == MAX_RESOWNER_LOCKS + 1);
+ Assert(dlist_is_empty(&owner->locks));
/*
* Delete children. The recursive call will delink the child from me, so
@@ -1047,54 +1044,59 @@ ReleaseAuxProcessResourcesCallback(int code, Datum arg)
/*
* Remember that a Local Lock is owned by a ResourceOwner
- *
- * This is different from the generic ResourceOwnerRemember in that the list of
- * locks is only a lossy cache. It can hold up to MAX_RESOWNER_LOCKS entries,
- * and when it overflows, we stop tracking locks. The point of only remembering
- * only up to MAX_RESOWNER_LOCKS entries is that if a lot of locks are held,
- * ResourceOwnerForgetLock doesn't need to scan through a large array to find
- * the entry.
*/
void
-ResourceOwnerRememberLock(ResourceOwner owner, LOCALLOCK *locallock)
+ResourceOwnerRememberLock(ResourceOwner owner, LOCALLOCKOWNER *locallockowner)
{
- Assert(locallock != NULL);
-
- if (owner->nlocks > MAX_RESOWNER_LOCKS)
- return; /* we have already overflowed */
+ Assert(owner != NULL);
+ Assert(locallockowner != NULL);
- if (owner->nlocks < MAX_RESOWNER_LOCKS)
- owner->locks[owner->nlocks] = locallock;
- else
+#ifdef USE_ASSERT_CHECKING
{
- /* overflowed */
+ dlist_iter iter;
+
+ dlist_foreach(iter, &owner->locks)
+ {
+ LOCALLOCKOWNER *i = dlist_container(LOCALLOCKOWNER, resowner_node, iter.cur);
+
+ Assert(i->locallock != locallockowner->locallock);
+ }
}
- owner->nlocks++;
+#endif
+
+ dlist_push_tail(&owner->locks, &locallockowner->resowner_node);
}
/*
- * Forget that a Local Lock is owned by a ResourceOwner
+ * Forget that a Local Lock is owned by the given LOCALLOCKOWNER.
*/
void
-ResourceOwnerForgetLock(ResourceOwner owner, LOCALLOCK *locallock)
+ResourceOwnerForgetLock(LOCALLOCKOWNER *locallockowner)
{
- int i;
+#ifdef USE_ASSERT_CHECKING
+ ResourceOwner owner;
- if (owner->nlocks > MAX_RESOWNER_LOCKS)
- return; /* we have overflowed */
+ Assert(locallockowner != NULL);
+
+ owner = locallockowner->owner;
- Assert(owner->nlocks > 0);
- for (i = owner->nlocks - 1; i >= 0; i--)
{
- if (locallock == owner->locks[i])
+ dlist_iter iter;
+ bool found = false;
+
+ dlist_foreach(iter, &owner->locks)
{
- owner->locks[i] = owner->locks[owner->nlocks - 1];
- owner->nlocks--;
- return;
+ if (locallockowner == dlist_container(LOCALLOCKOWNER, resowner_node, iter.cur))
+ {
+ Assert(!found);
+ found = true;
+ }
}
+
+ Assert(found);
}
- elog(ERROR, "lock reference %p is not owned by resource owner %s",
- locallock, owner->name);
+#endif
+ dlist_delete(&locallockowner->resowner_node);
}
void
diff --git a/src/include/storage/lock.h b/src/include/storage/lock.h
index 55ffaa5e4a5..7db33b9abbe 100644
--- a/src/include/storage/lock.h
+++ b/src/include/storage/lock.h
@@ -422,6 +422,13 @@ typedef struct LOCALLOCKOWNER
* Must use a forward struct reference to avoid circularity.
*/
struct ResourceOwnerData *owner;
+
+ dlist_node resowner_node; /* dlist link for ResourceOwner.locks */
+
+ dlist_node locallock_node; /* dlist link for LOCALLOCK.locallockowners */
+
+ struct LOCALLOCK *locallock; /* pointer to the corresponding LOCALLOCK */
+
int64 nLocks; /* # of times held by this owner */
} LOCALLOCKOWNER;
@@ -435,9 +442,9 @@ typedef struct LOCALLOCK
LOCK *lock; /* associated LOCK object, if any */
PROCLOCK *proclock; /* associated PROCLOCK object, if any */
int64 nLocks; /* total number of times lock is held */
- int numLockOwners; /* # of relevant ResourceOwners */
- int maxLockOwners; /* allocated size of array */
- LOCALLOCKOWNER *lockOwners; /* dynamically resizable array */
+
+ dlist_head locallockowners; /* dlist of LOCALLOCKOWNER */
+
bool holdsStrongLockCount; /* bumped FastPathStrongRelationLocks */
bool lockCleared; /* we read all sinval msgs for lock */
} LOCALLOCK;
@@ -570,8 +577,8 @@ extern bool LockRelease(const LOCKTAG *locktag,
LOCKMODE lockmode, bool sessionLock);
extern void LockReleaseAll(LOCKMETHODID lockmethodid, bool allLocks);
extern void LockReleaseSession(LOCKMETHODID lockmethodid);
-extern void LockReleaseCurrentOwner(LOCALLOCK **locallocks, int nlocks);
-extern void LockReassignCurrentOwner(LOCALLOCK **locallocks, int nlocks);
+extern void LockReleaseCurrentOwner(struct ResourceOwnerData *owner, LOCALLOCKOWNER *locallockowner);
+extern void LockReassignCurrentOwner(LOCALLOCKOWNER *locallockowner);
extern bool LockHeldByMe(const LOCKTAG *locktag,
LOCKMODE lockmode, bool orstronger);
#ifdef USE_ASSERT_CHECKING
diff --git a/src/include/utils/resowner.h b/src/include/utils/resowner.h
index eb6033b4fdb..05ef56edc85 100644
--- a/src/include/utils/resowner.h
+++ b/src/include/utils/resowner.h
@@ -160,9 +160,9 @@ extern void CreateAuxProcessResourceOwner(void);
extern void ReleaseAuxProcessResources(bool isCommit);
/* special support for local lock management */
-struct LOCALLOCK;
-extern void ResourceOwnerRememberLock(ResourceOwner owner, struct LOCALLOCK *locallock);
-extern void ResourceOwnerForgetLock(ResourceOwner owner, struct LOCALLOCK *locallock);
+struct LOCALLOCKOWNER;
+extern void ResourceOwnerRememberLock(ResourceOwner owner, struct LOCALLOCKOWNER *locallockowner);
+extern void ResourceOwnerForgetLock(struct LOCALLOCKOWNER *locallockowner);
/* special support for AIO */
struct dlist_node;
--
2.47.3
From 8f4ef1461179955c216256e02d73af8896826fc6 Mon Sep 17 00:00:00 2001
From: Heikki Linnakangas <[email protected]>
Date: Mon, 19 Jan 2026 15:51:15 +0200
Subject: [PATCH v10 3/5] stop using releaseMask in PostPrepare_Locks
---
src/backend/access/transam/xact.c | 18 ++++++++-
src/backend/storage/lmgr/lock.c | 65 +++++++++++++++++--------------
src/include/storage/lock.h | 4 +-
3 files changed, 53 insertions(+), 34 deletions(-)
diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c
index c857e23552f..7cf89e41699 100644
--- a/src/backend/access/transam/xact.c
+++ b/src/backend/access/transam/xact.c
@@ -2532,6 +2532,7 @@ PrepareTransaction(void)
FullTransactionId fxid = GetCurrentFullTransactionId();
GlobalTransaction gxact;
TimestampTz prepared_at;
+ HTAB *sessionandxactlocks;
Assert(!IsInParallelMode());
@@ -2681,7 +2682,17 @@ PrepareTransaction(void)
StartPrepare(gxact);
AtPrepare_Notify();
- AtPrepare_Locks();
+
+ /*
+	 * Prepare the locks and save the returned hash table, which records
+	 * whether each lock is held at the session level, the transaction
+	 * level, or both. PostPrepare_Locks() needs to know which locks are
+	 * session locks, but it cannot build the hash table itself: it only
+	 * discovers that a lock is a session lock while inside a critical
+	 * section, where allocating memory for the hash table is not
+	 * allowed.
+ */
+ sessionandxactlocks = AtPrepare_Locks();
AtPrepare_PredicateLocks();
AtPrepare_PgStat();
AtPrepare_MultiXact();
@@ -2708,7 +2719,10 @@ PrepareTransaction(void)
* ProcArrayClearTransaction(). Otherwise, a GetLockConflicts() would
* conclude "xact already committed or aborted" for our locks.
*/
- PostPrepare_Locks(fxid);
+ PostPrepare_Locks(fxid, sessionandxactlocks);
+
+ /* We no longer need this hash table */
+ hash_destroy(sessionandxactlocks);
/*
* Let others know about no transaction in progress by me. This has to be
diff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c
index fe2821e31f7..4ccf225aef1 100644
--- a/src/backend/storage/lmgr/lock.c
+++ b/src/backend/storage/lmgr/lock.c
@@ -161,6 +161,16 @@ typedef struct TwoPhaseLockRecord
LOCKMODE lockmode;
} TwoPhaseLockRecord;
+/*
+ * Hash table entry type used by AtPrepare_Locks() to communicate the
+ * session/xact lock status of each held lock to PostPrepare_Locks().
+ */
+typedef struct PerLockTagEntry
+{
+ LOCKTAG lock; /* identifies the lockable object */
+ bool sessLock; /* is any lockmode held at session level? */
+ bool xactLock; /* is any lockmode held at xact level? */
+} PerLockTagEntry;
/*
* Count of the number of fast path lock slots we believe to be used. This
@@ -3335,16 +3345,9 @@ LockRefindAndRelease(LockMethod lockMethodTable, PGPROC *proc,
* we can't implement this check by examining LOCALLOCK entries in isolation.
* We must build a transient hashtable that is indexed by locktag only.
*/
-static void
+static HTAB *
CheckForSessionAndXactLocks(void)
{
- typedef struct
- {
- LOCKTAG lock; /* identifies the lockable object */
- bool sessLock; /* is any lockmode held at session level? */
- bool xactLock; /* is any lockmode held at xact level? */
- } PerLockTagEntry;
-
HASHCTL hash_ctl;
HTAB *lockhtab;
HASH_SEQ_STATUS status;
@@ -3408,8 +3411,7 @@ CheckForSessionAndXactLocks(void)
errmsg("cannot PREPARE while holding both session-level and transaction-level locks on the same object")));
}
- /* Success, so clean up */
- hash_destroy(lockhtab);
+ return lockhtab;
}
/*
@@ -3417,6 +3419,11 @@ CheckForSessionAndXactLocks(void)
* Do the preparatory work for a PREPARE: make 2PC state file records
* for all locks currently held.
*
+ * Returns a hash table of PerLockTagEntry structs with an entry for each
+ * lock held by this backend, marking whether the lock is held at the
+ * session level, the xact level, or both. The caller is responsible for
+ * calling hash_destroy() on this table to free the memory used by it.
+ *
* Session-level locks are ignored, as are VXID locks.
*
* For the most part, we don't need to touch shared memory for this ---
@@ -3424,14 +3431,15 @@ CheckForSessionAndXactLocks(void)
* Fast-path locks are an exception, however: we move any such locks to
* the main table before allowing PREPARE TRANSACTION to succeed.
*/
-void
+HTAB *
AtPrepare_Locks(void)
{
HASH_SEQ_STATUS status;
LOCALLOCK *locallock;
+ HTAB *sessionandxactlocks;
/* First, verify there aren't locks of both xact and session level */
- CheckForSessionAndXactLocks();
+ sessionandxactlocks = CheckForSessionAndXactLocks();
/* Now do the per-locallock cleanup work */
hash_seq_init(&status, LockMethodLocalHash);
@@ -3504,6 +3512,8 @@ AtPrepare_Locks(void)
RegisterTwoPhaseRecord(TWOPHASE_RM_LOCK_ID, 0,
&record, sizeof(TwoPhaseLockRecord));
}
+
+ return sessionandxactlocks;
}
/*
@@ -3518,11 +3528,11 @@ AtPrepare_Locks(void)
* pointers in the transaction's resource owner. This is OK at the
* moment since resowner.c doesn't try to free locks retail at a toplevel
* transaction commit or abort. We could alternatively zero out nLocks
- * and leave the LOCALLOCK entries to be garbage-collected by LockReleaseAll,
- * but that probably costs more cycles.
+ * and leave the LOCALLOCK entries to be garbage-collected by
+ * ResourceOwnerRelease, but that probably costs more cycles.
*/
void
-PostPrepare_Locks(FullTransactionId fxid)
+PostPrepare_Locks(FullTransactionId fxid, HTAB *sessionandxactlocks)
{
PGPROC *newproc = TwoPhaseGetDummyProc(fxid, false);
HASH_SEQ_STATUS status;
@@ -3593,10 +3603,6 @@ PostPrepare_Locks(FullTransactionId fxid)
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("cannot PREPARE while holding both session-level and transaction-level locks on the same object")));
- /* Mark the proclock to show we need to release this lockmode */
- if (locallock->nLocks > 0)
- locallock->proclock->releaseMask |= LOCKBIT_ON(locallock->tag.mode);
-
/* And remove the locallock hashtable entry */
RemoveLocalLock(locallock);
}
@@ -3614,11 +3620,7 @@ PostPrepare_Locks(FullTransactionId fxid)
/*
* If the proclock list for this partition is empty, we can skip
- * acquiring the partition lock. This optimization is safer than the
- * situation in LockReleaseAll, because we got rid of any fast-path
- * locks during AtPrepare_Locks, so there cannot be any case where
- * another backend is adding something to our lists now. For safety,
- * though, we code this the same way as in LockReleaseAll.
+ * acquiring the partition lock.
*/
if (dlist_is_empty(procLocks))
continue; /* needn't examine this partition */
@@ -3627,6 +3629,8 @@ PostPrepare_Locks(FullTransactionId fxid)
dlist_foreach_modify(proclock_iter, procLocks)
{
+ PerLockTagEntry *locktagentry;
+
proclock = dlist_container(PROCLOCK, procLink, proclock_iter.cur);
Assert(proclock->tag.myProc == MyProc);
@@ -3644,13 +3648,14 @@ PostPrepare_Locks(FullTransactionId fxid)
Assert(lock->nGranted <= lock->nRequested);
Assert((proclock->holdMask & ~lock->grantMask) == 0);
- /* Ignore it if nothing to release (must be a session lock) */
- if (proclock->releaseMask == 0)
- continue;
+ locktagentry = hash_search(sessionandxactlocks,
+ &lock->tag,
+ HASH_FIND,
+ NULL);
- /* Else we should be releasing all locks */
- if (proclock->releaseMask != proclock->holdMask)
- elog(PANIC, "we seem to have dropped a bit somewhere");
+ /* skip session locks */
+ if (locktagentry != NULL && locktagentry->sessLock)
+ continue;
/*
* We cannot simply modify proclock->tag.myProc to reassign
diff --git a/src/include/storage/lock.h b/src/include/storage/lock.h
index 7db33b9abbe..34f1bc4383f 100644
--- a/src/include/storage/lock.h
+++ b/src/include/storage/lock.h
@@ -588,8 +588,8 @@ extern bool LockHasWaiters(const LOCKTAG *locktag,
LOCKMODE lockmode, bool sessionLock);
extern VirtualTransactionId *GetLockConflicts(const LOCKTAG *locktag,
LOCKMODE lockmode, int *countp);
-extern void AtPrepare_Locks(void);
-extern void PostPrepare_Locks(FullTransactionId fxid);
+extern HTAB *AtPrepare_Locks(void);
+extern void PostPrepare_Locks(FullTransactionId fxid, HTAB *sessionandxactlocks);
extern bool LockCheckConflicts(LockMethod lockMethodTable,
LOCKMODE lockmode,
LOCK *lock, PROCLOCK *proclock);
--
2.47.3
From 83779c35358099c6528c6a33b79f8d5fa64ccd15 Mon Sep 17 00:00:00 2001
From: Heikki Linnakangas <[email protected]>
Date: Mon, 19 Jan 2026 21:47:20 +0200
Subject: [PATCH v10 4/5] Remove LockReleaseAll
Author: David Rowley <[email protected]>
Discussion: https://www.postgresql.org/message-id/2f3621b8-abb1-b13f-fb0b-45f358a8d183%40iki.fi#63776637f09848bacb1305682519e1c3
---
src/backend/commands/discard.c | 7 +-
src/backend/replication/logical/launcher.c | 6 +-
src/backend/storage/lmgr/README | 6 -
src/backend/storage/lmgr/lock.c | 336 ++++-----------------
src/backend/storage/lmgr/proc.c | 15 +-
src/backend/utils/init/postinit.c | 6 +-
src/include/storage/lock.h | 12 +-
7 files changed, 82 insertions(+), 306 deletions(-)
diff --git a/src/backend/commands/discard.c b/src/backend/commands/discard.c
index 7b5520b9abe..f82cdbd5d6d 100644
--- a/src/backend/commands/discard.c
+++ b/src/backend/commands/discard.c
@@ -71,7 +71,12 @@ DiscardAll(bool isTopLevel)
ResetAllOptions();
DropAllPreparedStatements();
Async_UnlistenAll();
- LockReleaseAll(USER_LOCKMETHOD, true);
+ LockReleaseSession(USER_LOCKMETHOD);
+
+#ifdef USE_ASSERT_CHECKING
+ LockAssertNoneHeld(false);
+#endif
+
ResetPlanCache();
ResetTempTableNamespace();
ResetSequenceCaches();
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 3ed86480be2..85d4237c67d 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -914,7 +914,11 @@ logicalrep_worker_onexit(int code, Datum arg)
* The locks will be acquired once the worker is initialized.
*/
if (!InitializingApplyWorker)
- LockReleaseAll(DEFAULT_LOCKMETHOD, true);
+ LockReleaseSession(DEFAULT_LOCKMETHOD);
+
+#ifdef USE_ASSERT_CHECKING
+ LockAssertNoneHeld(false);
+#endif
ApplyLauncherWakeup();
}
diff --git a/src/backend/storage/lmgr/README b/src/backend/storage/lmgr/README
index 45de0fd2bd6..e7e0f29347a 100644
--- a/src/backend/storage/lmgr/README
+++ b/src/backend/storage/lmgr/README
@@ -182,12 +182,6 @@ holdMask -
subset of the PGPROC object's heldLocks mask (if the PGPROC is
currently waiting for another lock mode on this lock).
-releaseMask -
- A bitmask for the lock modes due to be released during LockReleaseAll.
- This must be a subset of the holdMask. Note that it is modified without
- taking the partition LWLock, and therefore it is unsafe for any
- backend except the one owning the PROCLOCK to examine/change it.
-
lockLink -
List link for shared memory queue of all the PROCLOCK objects for the
same LOCK.
diff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c
index 4ccf225aef1..504c4642713 100644
--- a/src/backend/storage/lmgr/lock.c
+++ b/src/backend/storage/lmgr/lock.c
@@ -22,8 +22,7 @@
* Interface:
*
* LockManagerShmemInit(), GetLocksMethodTable(), GetLockTagsMethodTable(),
- * LockAcquire(), LockRelease(), LockReleaseAll(),
- * LockCheckConflicts(), GrantLock()
+ * LockAcquire(), LockRelease(), LockCheckConflicts(), GrantLock()
*
*-------------------------------------------------------------------------
*/
@@ -341,6 +340,9 @@ static LOCALLOCK *awaitedLock;
static ResourceOwner awaitedOwner;
+static dlist_head session_locks[lengthof(LockMethods)];
+
+
#ifdef LOCK_DEBUG
/*------
@@ -530,6 +532,10 @@ InitLockManagerAccess(void)
&info,
HASH_ELEM | HASH_BLOBS);
+ /* Initialize each element of the session_locks array */
+ for (int i = 0; i < lengthof(LockMethods); i++)
+ dlist_init(&session_locks[i]);
+
/*
* Create a slab context for storing LOCALLOCKOWNERs. Slab seems like a
* good context type for this as it will manage fragmentation better than
@@ -1386,7 +1392,6 @@ SetupLockInTable(LockMethod lockMethodTable, PGPROC *proc,
proclock->groupLeader = proc->lockGroupLeader != NULL ?
proc->lockGroupLeader : proc;
proclock->holdMask = 0;
- proclock->releaseMask = 0;
/* Add proclock to appropriate lists */
dlist_push_tail(&lock->procLocks, &proclock->lockLink);
dlist_push_tail(&proc->myProcLocks[partition], &proclock->procLink);
@@ -1489,6 +1494,8 @@ RemoveLocalLock(LOCALLOCK *locallock)
dlist_delete(&locallockowner->locallock_node);
if (locallockowner->owner != NULL)
ResourceOwnerForgetLock(locallockowner);
+ else
+ dlist_delete(&locallockowner->resowner_node);
pfree(locallockowner);
}
Assert(dlist_is_empty(&locallock->locallockowners));
@@ -1828,6 +1835,7 @@ GrantLockLocal(LOCALLOCK *locallock, ResourceOwner owner)
LOCKMETHODID lockmethodid = LOCALLOCK_LOCKMETHOD(*locallockowner->locallock);
Assert(lockmethodid > 0 && lockmethodid <= 2);
+ dlist_push_tail(&session_locks[lockmethodid - 1], &locallockowner->resowner_node);
}
/* Indicate that the lock is acquired for certain types of locks. */
@@ -2188,6 +2196,8 @@ LockRelease(const LOCKTAG *locktag, LOCKMODE lockmode, bool sessionLock)
dlist_delete(&locallockowner->locallock_node);
if (owner != NULL)
ResourceOwnerForgetLock(locallockowner);
+ else
+ dlist_delete(&locallockowner->resowner_node);
pfree(locallockowner);
}
found = true;
@@ -2212,6 +2222,8 @@ LockRelease(const LOCKTAG *locktag, LOCKMODE lockmode, bool sessionLock)
if (locallock->nLocks > 0)
return true;
+ Assert(locallock->nLocks == 0);
+
/*
* At this point we can no longer suppose we are clear of invalidation
* messages related to this lock. Although we'll delete the LOCALLOCK
@@ -2314,284 +2326,46 @@ LockRelease(const LOCKTAG *locktag, LOCKMODE lockmode, bool sessionLock)
return true;
}
+#ifdef USE_ASSERT_CHECKING
/*
- * LockReleaseAll -- Release all locks of the specified lock method that
- * are held by the current process.
- *
- * Well, not necessarily *all* locks. The available behaviors are:
- * allLocks == true: release all locks including session locks.
- * allLocks == false: release all non-session locks.
+ * LockAssertNoneHeld -- Assert that we no longer hold any DEFAULT_LOCKMETHOD
+ * locks during an abort.
*/
void
-LockReleaseAll(LOCKMETHODID lockmethodid, bool allLocks)
+LockAssertNoneHeld(bool isCommit)
{
HASH_SEQ_STATUS status;
- LockMethod lockMethodTable;
- int i,
- numLockModes;
LOCALLOCK *locallock;
- LOCK *lock;
- int partition;
- bool have_fast_path_lwlock = false;
-
- if (lockmethodid <= 0 || lockmethodid >= lengthof(LockMethods))
- elog(ERROR, "unrecognized lock method: %d", lockmethodid);
- lockMethodTable = LockMethods[lockmethodid];
-
-#ifdef LOCK_DEBUG
- if (*(lockMethodTable->trace_flag))
- elog(LOG, "LockReleaseAll: lockmethod=%d", lockmethodid);
-#endif
-
- /*
- * Get rid of our fast-path VXID lock, if appropriate. Note that this is
- * the only way that the lock we hold on our own VXID can ever get
- * released: it is always and only released when a toplevel transaction
- * ends.
- */
- if (lockmethodid == DEFAULT_LOCKMETHOD)
- VirtualXactLockTableCleanup();
- numLockModes = lockMethodTable->numLockModes;
-
- /*
- * First we run through the locallock table and get rid of unwanted
- * entries, then we scan the process's proclocks and get rid of those. We
- * do this separately because we may have multiple locallock entries
- * pointing to the same proclock, and we daren't end up with any dangling
- * pointers. Fast-path locks are cleaned up during the locallock table
- * scan, though.
- */
- hash_seq_init(&status, LockMethodLocalHash);
-
- while ((locallock = (LOCALLOCK *) hash_seq_search(&status)) != NULL)
+ if (!isCommit)
{
- /*
- * If the LOCALLOCK entry is unused, something must've gone wrong
- * while trying to acquire this lock. Just forget the local entry.
- */
- if (locallock->nLocks == 0)
- {
- RemoveLocalLock(locallock);
- continue;
- }
+ hash_seq_init(&status, LockMethodLocalHash);
- /* Ignore items that are not of the lockmethod to be removed */
- if (LOCALLOCK_LOCKMETHOD(*locallock) != lockmethodid)
- continue;
-
- /*
- * If we are asked to release all locks, we can just zap the entry.
- * Otherwise, must scan to see if there are session locks. We assume
- * there is at most one lockOwners entry for session locks.
- */
- if (!allLocks)
+ while ((locallock = (LOCALLOCK *) hash_seq_search(&status)) != NULL)
{
- dlist_mutable_iter iter;
- int session_locks = 0;
+ dlist_iter local_iter;
- dlist_foreach_modify(iter, &locallock->locallockowners)
- {
- LOCALLOCKOWNER *locallockowner = dlist_container(LOCALLOCKOWNER, locallock_node, iter.cur);
+ Assert(locallock->nLocks >= 0);
- if (locallockowner->owner != NULL)
- {
- dlist_delete(&locallockowner->locallock_node);
- ResourceOwnerForgetLock(locallockowner);
- pfree(locallockowner);
- }
- else
- session_locks += locallockowner->nLocks;
- }
-
- /* Fix the locallock to show just the session locks */
- locallock->nLocks = session_locks;
- if (session_locks > 0)
+ dlist_foreach(local_iter, &locallock->locallockowners)
{
- /* We aren't deleting this locallock, so done */
- continue;
- }
- }
-
-#ifdef USE_ASSERT_CHECKING
-
- /*
- * Tuple locks are currently held only for short durations within a
- * transaction. Check that we didn't forget to release one.
- */
- if (LOCALLOCK_LOCKTAG(*locallock) == LOCKTAG_TUPLE && !allLocks)
- elog(WARNING, "tuple lock held at commit");
-#endif
-
- /*
- * If the lock or proclock pointers are NULL, this lock was taken via
- * the relation fast-path (and is not known to have been transferred).
- */
- if (locallock->proclock == NULL || locallock->lock == NULL)
- {
- LOCKMODE lockmode = locallock->tag.mode;
- Oid relid;
+ LOCALLOCKOWNER *locallockowner = dlist_container(LOCALLOCKOWNER,
+ locallock_node,
+ local_iter.cur);
- /* Verify that a fast-path lock is what we've got. */
- if (!EligibleForRelationFastPath(&locallock->tag.lock, lockmode))
- elog(PANIC, "locallock table corrupted");
+ Assert(locallockowner->owner == NULL);
- /*
- * If we don't currently hold the LWLock that protects our
- * fast-path data structures, we must acquire it before attempting
- * to release the lock via the fast-path. We will continue to
- * hold the LWLock until we're done scanning the locallock table,
- * unless we hit a transferred fast-path lock. (XXX is this
- * really such a good idea? There could be a lot of entries ...)
- */
- if (!have_fast_path_lwlock)
- {
- LWLockAcquire(&MyProc->fpInfoLock, LW_EXCLUSIVE);
- have_fast_path_lwlock = true;
- }
-
- /* Attempt fast-path release. */
- relid = locallock->tag.lock.locktag_field2;
- if (FastPathUnGrantRelationLock(relid, lockmode))
- {
- RemoveLocalLock(locallock);
- continue;
+ if (locallockowner->nLocks > 0 &&
+ LOCALLOCK_LOCKMETHOD(*locallock) == DEFAULT_LOCKMETHOD)
+ Assert(false);
}
-
- /*
- * Our lock, originally taken via the fast path, has been
- * transferred to the main lock table. That's going to require
- * some extra work, so release our fast-path lock before starting.
- */
- LWLockRelease(&MyProc->fpInfoLock);
- have_fast_path_lwlock = false;
-
- /*
- * Now dump the lock. We haven't got a pointer to the LOCK or
- * PROCLOCK in this case, so we have to handle this a bit
- * differently than a normal lock release. Unfortunately, this
- * requires an extra LWLock acquire-and-release cycle on the
- * partitionLock, but hopefully it shouldn't happen often.
- */
- LockRefindAndRelease(lockMethodTable, MyProc,
- &locallock->tag.lock, lockmode, false);
- RemoveLocalLock(locallock);
- continue;
}
-
- /* Mark the proclock to show we need to release this lockmode */
- if (locallock->nLocks > 0)
- locallock->proclock->releaseMask |= LOCKBIT_ON(locallock->tag.mode);
-
- /* And remove the locallock hashtable entry */
- RemoveLocalLock(locallock);
}
- /* Done with the fast-path data structures */
- if (have_fast_path_lwlock)
- LWLockRelease(&MyProc->fpInfoLock);
-
- /*
- * Now, scan each lock partition separately.
- */
- for (partition = 0; partition < NUM_LOCK_PARTITIONS; partition++)
- {
- LWLock *partitionLock;
- dlist_head *procLocks = &MyProc->myProcLocks[partition];
- dlist_mutable_iter proclock_iter;
-
- partitionLock = LockHashPartitionLockByIndex(partition);
-
- /*
- * If the proclock list for this partition is empty, we can skip
- * acquiring the partition lock. This optimization is trickier than
- * it looks, because another backend could be in process of adding
- * something to our proclock list due to promoting one of our
- * fast-path locks. However, any such lock must be one that we
- * decided not to delete above, so it's okay to skip it again now;
- * we'd just decide not to delete it again. We must, however, be
- * careful to re-fetch the list header once we've acquired the
- * partition lock, to be sure we have a valid, up-to-date pointer.
- * (There is probably no significant risk if pointer fetch/store is
- * atomic, but we don't wish to assume that.)
- *
- * XXX This argument assumes that the locallock table correctly
- * represents all of our fast-path locks. While allLocks mode
- * guarantees to clean up all of our normal locks regardless of the
- * locallock situation, we lose that guarantee for fast-path locks.
- * This is not ideal.
- */
- if (dlist_is_empty(procLocks))
- continue; /* needn't examine this partition */
-
- LWLockAcquire(partitionLock, LW_EXCLUSIVE);
-
- dlist_foreach_modify(proclock_iter, procLocks)
- {
- PROCLOCK *proclock = dlist_container(PROCLOCK, procLink, proclock_iter.cur);
- bool wakeupNeeded = false;
-
- Assert(proclock->tag.myProc == MyProc);
-
- lock = proclock->tag.myLock;
-
- /* Ignore items that are not of the lockmethod to be removed */
- if (LOCK_LOCKMETHOD(*lock) != lockmethodid)
- continue;
-
- /*
- * In allLocks mode, force release of all locks even if locallock
- * table had problems
- */
- if (allLocks)
- proclock->releaseMask = proclock->holdMask;
- else
- Assert((proclock->releaseMask & ~proclock->holdMask) == 0);
-
- /*
- * Ignore items that have nothing to be released, unless they have
- * holdMask == 0 and are therefore recyclable
- */
- if (proclock->releaseMask == 0 && proclock->holdMask != 0)
- continue;
-
- PROCLOCK_PRINT("LockReleaseAll", proclock);
- LOCK_PRINT("LockReleaseAll", lock, 0);
- Assert(lock->nRequested >= 0);
- Assert(lock->nGranted >= 0);
- Assert(lock->nGranted <= lock->nRequested);
- Assert((proclock->holdMask & ~lock->grantMask) == 0);
-
- /*
- * Release the previously-marked lock modes
- */
- for (i = 1; i <= numLockModes; i++)
- {
- if (proclock->releaseMask & LOCKBIT_ON(i))
- wakeupNeeded |= UnGrantLock(lock, i, proclock,
- lockMethodTable);
- }
- Assert((lock->nRequested >= 0) && (lock->nGranted >= 0));
- Assert(lock->nGranted <= lock->nRequested);
- LOCK_PRINT("LockReleaseAll: updated", lock, 0);
-
- proclock->releaseMask = 0;
-
- /* CleanUpLock will wake up waiters if needed. */
- CleanUpLock(lock, proclock,
- lockMethodTable,
- LockTagHashCode(&lock->tag),
- wakeupNeeded);
- } /* loop over PROCLOCKs within this partition */
-
- LWLockRelease(partitionLock);
- } /* loop over partitions */
-
-#ifdef LOCK_DEBUG
- if (*(lockMethodTable->trace_flag))
- elog(LOG, "LockReleaseAll done");
-#endif
+ for (uint32 g = 0; g < FastPathLockGroupsPerBackend; g++)
+ Assert(MyProc->fpLockBits[g] == 0);
}
+#endif
/*
* LockReleaseSession -- Release all session locks of the specified lock method
@@ -2600,30 +2374,21 @@ LockReleaseAll(LOCKMETHODID lockmethodid, bool allLocks)
void
LockReleaseSession(LOCKMETHODID lockmethodid)
{
- HASH_SEQ_STATUS status;
- LOCALLOCK *locallock;
+ dlist_mutable_iter iter;
if (lockmethodid <= 0 || lockmethodid >= lengthof(LockMethods))
elog(ERROR, "unrecognized lock method: %d", lockmethodid);
- hash_seq_init(&status, LockMethodLocalHash);
-
- while ((locallock = (LOCALLOCK *) hash_seq_search(&status)) != NULL)
+ dlist_foreach_modify(iter, &session_locks[lockmethodid - 1])
{
- dlist_mutable_iter iter;
+ LOCALLOCKOWNER *locallockowner = dlist_container(LOCALLOCKOWNER, resowner_node, iter.cur);
- /* Ignore items that are not of the specified lock method */
- if (LOCALLOCK_LOCKMETHOD(*locallock) != lockmethodid)
- continue;
+ Assert(LOCALLOCK_LOCKMETHOD(*locallockowner->locallock) == lockmethodid);
- dlist_foreach_modify(iter, &locallock->locallockowners)
- {
- LOCALLOCKOWNER *locallockowner = dlist_container(LOCALLOCKOWNER, locallock_node, iter.cur);
-
- if (locallockowner->owner == NULL)
- ReleaseLockIfHeld(locallockowner, true);
- }
+ ReleaseLockIfHeld(locallockowner, true);
}
+
+ Assert(dlist_is_empty(&session_locks[lockmethodid - 1]));
}
/*
@@ -2631,7 +2396,7 @@ LockReleaseSession(LOCKMETHODID lockmethodid)
* Release all locks belonging to 'owner'
*/
void
-LockReleaseCurrentOwner(struct ResourceOwnerData *owner, LOCALLOCKOWNER *locallockowner)
+LockReleaseCurrentOwner(ResourceOwner owner, LOCALLOCKOWNER *locallockowner)
{
Assert(locallockowner->owner == owner);
@@ -2640,8 +2405,8 @@ LockReleaseCurrentOwner(struct ResourceOwnerData *owner, LOCALLOCKOWNER *locallo
/*
* ReleaseLockIfHeld
- * Release any session-level locks on this 'locallockowner' if sessionLock
- * is true; else, release any locks held by 'locallockowner'.
+ * Release any session-level locks on this 'locallockowner' if
+ * sessionLock is true; else, release any locks held by 'locallockowner'.
*
* It is tempting to pass this a ResourceOwner pointer (or NULL for session
* locks), but without refactoring LockRelease() we cannot support releasing XXX
@@ -2670,8 +2435,10 @@ ReleaseLockIfHeld(LOCALLOCKOWNER *locallockowner, bool sessionLock)
locallock->nLocks -= locallockowner->nLocks;
dlist_delete(&locallockowner->locallock_node);
- if (locallockowner->owner)
+ if (!sessionLock)
ResourceOwnerForgetLock(locallockowner);
+ else
+ dlist_delete(&locallockowner->resowner_node);
pfree(locallockowner);
}
else
@@ -3239,7 +3006,7 @@ GetLockConflicts(const LOCKTAG *locktag, LOCKMODE lockmode, int *countp)
* We currently use this in two situations: first, to release locks held by
* prepared transactions on commit (see lock_twophase_postcommit); and second,
* to release locks taken via the fast-path, transferred to the main hash
- * table, and then released (see LockReleaseAll).
+ * table, and then released (see ResourceOwnerRelease).
*/
static void
LockRefindAndRelease(LockMethod lockMethodTable, PGPROC *proc,
@@ -3368,8 +3135,8 @@ CheckForSessionAndXactLocks(void)
while ((locallock = (LOCALLOCK *) hash_seq_search(&status)) != NULL)
{
- dlist_iter iter;
PerLockTagEntry *hentry;
+ dlist_iter iter;
bool found;
/*
@@ -4436,7 +4203,6 @@ lock_twophase_recover(FullTransactionId fxid, uint16 info,
Assert(proc->lockGroupLeader == NULL);
proclock->groupLeader = proc;
proclock->holdMask = 0;
- proclock->releaseMask = 0;
/* Add proclock to appropriate lists */
dlist_push_tail(&lock->procLocks, &proclock->lockLink);
dlist_push_tail(&proc->myProcLocks[partition],
@@ -4573,7 +4339,7 @@ lock_twophase_postabort(FullTransactionId fxid, uint16 info,
*
* We don't bother recording this lock in the local lock table, since it's
* only ever released at the end of a transaction. Instead,
- * LockReleaseAll() calls VirtualXactLockTableCleanup().
+ * ProcReleaseLocks() calls VirtualXactLockTableCleanup().
*/
void
VirtualXactLockTableInsert(VirtualTransactionId vxid)
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index cb19b2c1169..35483d4f370 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -895,10 +895,17 @@ ProcReleaseLocks(bool isCommit)
return;
/* If waiting, get off wait queue (should only be needed after error) */
LockErrorCleanup();
- /* Release standard locks, including session-level if aborting */
- LockReleaseAll(DEFAULT_LOCKMETHOD, !isCommit);
- /* Release transaction-level advisory locks */
- LockReleaseAll(USER_LOCKMETHOD, false);
+
+ VirtualXactLockTableCleanup();
+
+ /* Release session-level locks if aborting */
+ if (!isCommit)
+ LockReleaseSession(DEFAULT_LOCKMETHOD);
+
+#ifdef USE_ASSERT_CHECKING
+ /* Ensure all locks were released */
+ LockAssertNoneHeld(isCommit);
+#endif
}
diff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c
index 3f401faf3de..37bee8d9f7b 100644
--- a/src/backend/utils/init/postinit.c
+++ b/src/backend/utils/init/postinit.c
@@ -1344,10 +1344,10 @@ ShutdownPostgres(int code, Datum arg)
AbortOutOfAnyTransaction();
/*
- * User locks are not released by transaction end, so be sure to release
- * them explicitly.
+ * Session locks are not released by transaction end, so be sure to
+ * release them explicitly.
*/
- LockReleaseAll(USER_LOCKMETHOD, true);
+ LockReleaseSession(USER_LOCKMETHOD);
}
diff --git a/src/include/storage/lock.h b/src/include/storage/lock.h
index 34f1bc4383f..6d4b0a0e51f 100644
--- a/src/include/storage/lock.h
+++ b/src/include/storage/lock.h
@@ -25,6 +25,7 @@
#include "storage/procnumber.h"
#include "storage/shmem.h"
#include "utils/timestamp.h"
+#include "lib/ilist.h"
/* struct PGPROC is declared in proc.h, but must forward-reference it */
typedef struct PGPROC PGPROC;
@@ -351,10 +352,6 @@ typedef struct LOCK
* Otherwise, proclock objects whose holdMasks are zero are recycled
* as soon as convenient.
*
- * releaseMask is workspace for LockReleaseAll(): it shows the locks due
- * to be released during the current call. This must only be examined or
- * set by the backend owning the PROCLOCK.
- *
* Each PROCLOCK object is linked into lists for both the associated LOCK
* object and the owning PGPROC object. Note that the PROCLOCK is entered
* into these lists as soon as it is created, even if no lock has yet been
@@ -376,7 +373,6 @@ typedef struct PROCLOCK
/* data */
PGPROC *groupLeader; /* proc's lock group leader, or proc itself */
LOCKMASK holdMask; /* bitmask for lock types currently held */
- LOCKMASK releaseMask; /* bitmask for lock types to be released */
dlist_node lockLink; /* list link in LOCK's list of proclocks */
dlist_node procLink; /* list link in PGPROC's list of proclocks */
} PROCLOCK;
@@ -575,7 +571,11 @@ extern void AbortStrongLockAcquire(void);
extern void MarkLockClear(LOCALLOCK *locallock);
extern bool LockRelease(const LOCKTAG *locktag,
LOCKMODE lockmode, bool sessionLock);
-extern void LockReleaseAll(LOCKMETHODID lockmethodid, bool allLocks);
+
+#ifdef USE_ASSERT_CHECKING
+extern void LockAssertNoneHeld(bool isCommit);
+#endif
+
extern void LockReleaseSession(LOCKMETHODID lockmethodid);
extern void LockReleaseCurrentOwner(struct ResourceOwnerData *owner, LOCALLOCKOWNER *locallockowner);
extern void LockReassignCurrentOwner(LOCALLOCKOWNER *locallockowner);
--
2.47.3
From 0c65ca99640b9f93843e76e32c8959ef749a282c Mon Sep 17 00:00:00 2001
From: Heikki Linnakangas <[email protected]>
Date: Mon, 19 Jan 2026 21:45:42 +0200
Subject: [PATCH v10 5/5] Fix assertion failure from ProcKill in 'subscription'
test
---
src/backend/utils/init/postinit.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c
index 37bee8d9f7b..ba41fab4ce5 100644
--- a/src/backend/utils/init/postinit.c
+++ b/src/backend/utils/init/postinit.c
@@ -1343,10 +1343,14 @@ ShutdownPostgres(int code, Datum arg)
/* Make sure we've killed any active transaction */
AbortOutOfAnyTransaction();
+ /* If waiting, get off wait queue (should only be needed after error) */
+ LockErrorCleanup();
+
/*
* Session locks are not released by transaction end, so be sure to
* release them explicitly.
*/
+ LockReleaseSession(DEFAULT_LOCKMETHOD);
LockReleaseSession(USER_LOCKMETHOD);
}
--
2.47.3