Hello, Tom.

I was exploring this issue further and discovered something strange.

"PROCLOCK hash" and "LOCK hash" are hash tables in shared memory. All
memory for these tables is in fact pre-allocated. But for some reason
these two tables are created (lock.c:394) with init_size =/= max_size.
It causes small overhead on calling memory allocator after hash table
creation and additional locking/unlocking.

I checked all the other hash tables created via ShmemInitHash. All of
them have init_size == max_size. I suggest creating "PROCLOCK hash"
and "LOCK hash" with init_size == max_size too (see attached patch).
Granted, this change doesn't produce any noticeable performance
improvement according to pgbench. Nevertheless, it looks like the
right thing to do, and at the very least it doesn't make anything
worse.

If this patch looks OK, I believe the next logical step would be to
remove the init_size parameter from the ShmemInitHash procedure, since
in practice it always equals max_size.
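
For reference, this is what that follow-up could look like at the declaration level. The first declaration is ShmemInitHash as it currently appears in src/include/storage/shmem.h; the second is only my sketch of the simplified form, not an agreed-upon API:

```c
/* Current declaration (src/include/storage/shmem.h) */
extern HTAB *ShmemInitHash(const char *name, long init_size, long max_size,
						   HASHCTL *infoP, int hash_flags);

/* Hypothetical simplified form with init_size removed; every caller
 * would then implicitly get init_size == max_size */
extern HTAB *ShmemInitHash(const char *name, long max_size,
						   HASHCTL *infoP, int hash_flags);
```

All in-tree callers would need the corresponding one-argument adjustment, of course.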

On Fri, 11 Dec 2015 10:30:30 -0500
Tom Lane <t...@sss.pgh.pa.us> wrote:

> Aleksander Alekseev <a.aleks...@postgrespro.ru> writes:
> > Turns out PostgreSQL can spend a lot of time waiting for a lock in
> > this particular place, especially if you are running PostgreSQL on
> > 60-core server. Which obviously is a pretty bad sign.
> > ...
> > I managed to fix this behaviour by modifying choose_nelem_alloc
> > procedure in dynahash.c (see attached patch).
> 
> TBH, this is just voodoo.  I don't know why this change would have
> made any impact on lock acquisition performance, and neither do you,
> and the odds are good that it's pure chance that it changed
> anything.  One likely theory is that you managed to shift around
> memory allocations so that something aligned on a cacheline boundary
> when it hadn't before.  But, of course, the next patch that changes
> allocations anywhere in shared memory could change that back.  There
> are lots of effects like this that appear or disappear based on
> seemingly unrelated code changes when you're measuring edge-case
> performance.
> 
> The patch is not necessarily bad in itself.  As time goes by and
> machines get bigger, it can make sense to allocate more memory at a
> time to reduce memory management overhead.  But arguing for it on the
> basis that it fixes lock allocation behavior with 60 cores is just
> horsepucky.  What you were measuring there was steady-state hash
> table behavior, not the speed of the allocate-some-more-memory code
> path.
>                       regards, tom lane

diff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c
index 76fc615..86d2f88 100644
--- a/src/backend/storage/lmgr/lock.c
+++ b/src/backend/storage/lmgr/lock.c
@@ -373,16 +373,14 @@ void
 InitLocks(void)
 {
 	HASHCTL		info;
-	long		init_table_size,
-				max_table_size;
+	long		shmem_table_size;
 	bool		found;
 
 	/*
 	 * Compute init/max size to request for lock hashtables.  Note these
 	 * calculations must agree with LockShmemSize!
 	 */
-	max_table_size = NLOCKENTS();
-	init_table_size = max_table_size / 2;
+	shmem_table_size = NLOCKENTS();
 
 	/*
 	 * Allocate hash table for LOCK structs.  This stores per-locked-object
@@ -394,14 +392,13 @@ InitLocks(void)
 	info.num_partitions = NUM_LOCK_PARTITIONS;
 
 	LockMethodLockHash = ShmemInitHash("LOCK hash",
-									   init_table_size,
-									   max_table_size,
+									   shmem_table_size,
+									   shmem_table_size,
 									   &info,
 									HASH_ELEM | HASH_BLOBS | HASH_PARTITION);
 
 	/* Assume an average of 2 holders per lock */
-	max_table_size *= 2;
-	init_table_size *= 2;
+	shmem_table_size *= 2;
 
 	/*
 	 * Allocate hash table for PROCLOCK structs.  This stores
@@ -413,8 +410,8 @@ InitLocks(void)
 	info.num_partitions = NUM_LOCK_PARTITIONS;
 
 	LockMethodProcLockHash = ShmemInitHash("PROCLOCK hash",
-										   init_table_size,
-										   max_table_size,
+										   shmem_table_size,
+										   shmem_table_size,
 										   &info,
 								 HASH_ELEM | HASH_FUNCTION | HASH_PARTITION);