Hi,

I've spent quite a bit of time trying to identify cases where having
more fast-path lock slots could be harmful, without any luck. I started
with the EPYC machine I used for the earlier tests, but found nothing,
except for a couple of cases clearly unrelated to this patch - they show
up even in runs without the patch applied at all, so they're more likely
random noise or some issue with the VM (or differences from the VM used
earlier). I pushed the results to GitHub [1] anyway, if anyone wants to
take a look.

So I switched to my smaller machines, and ran a simple test on master,
with the hard-coded arrays, and with the arrays moved out of PGPROC (and
sized per max_locks_per_transaction).

I was looking for regressions, so I wanted to test a case that can't
benefit from fast-path locking, while paying the costs. So I decided to
do pgbench -S with 4 partitions, because that fits into the 16 slots we
had before, and scale 1 to keep everything in memory. And then did a
couple read-only runs, first with 64 locks/transaction (default), then
with 1024 locks/transaction.

Attached is a shell script I used to collect this - it creates and
removes clusters, so be careful. Should be fairly obvious what it tests
and how.
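
The core of it is roughly this (just a sketch, assuming a database
called "test" and $PGDATA pointing at the cluster - the attached
run-lock-test.sh is authoritative, and the exact durations, client
counts and run counts differ):

    pgbench -i -s 1 --partitions 4 test

    for locks in 64 1024; do
        psql -c "alter system set max_locks_per_transaction = $locks" test
        pg_ctl -D "$PGDATA" restart

        for mode in simple prepared; do
            for clients in 1 4 16; do
                pgbench -n -S -M "$mode" -c "$clients" -j "$clients" -T 15 test
            done
        done
    done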

The results for max_locks_per_transaction=64 look like this (the numbers
are throughput in tps):


  machine      mode  clients   master   built-in   with-guc
  ---------------------------------------------------------
       i5  prepared        1    14970      14991      14981
                           4    51638      51615      51388
             simple        1    14042      14136      14008
                           4    48705      48572      48457
     ------------------------------------------------------
     xeon  prepared        1    13213      13330      13170
                           4    49280      49191      49263
                          16   151413     152268     151560
             simple        1    12250      12291      12316
                           4    45910      46148      45843
                          16   141774     142165     142310

And compared to master:

  machine      mode  clients   built-in    with-guc
  -------------------------------------------------
       i5  prepared        1    100.14%     100.08%
                           4     99.95%      99.51%
             simple        1    100.67%      99.76%
                           4     99.73%      99.49%
     ----------------------------------------------
     xeon  prepared        1    100.89%      99.68%
                           4     99.82%      99.97%
                          16    100.56%     100.10%
             simple        1    100.34%     100.54%
                           4    100.52%      99.85%
                          16    100.28%     100.38%

So, no difference whatsoever - it's +/- 0.5%, well within random noise.
And with max_locks_per_transaction=1024 the story is exactly the same:

  machine      mode  clients   master   built-in   with-guc
  ---------------------------------------------------------
       i5  prepared        1    15000      14928      14948
                           4    51498      51351      51504
             simple        1    14124      14092      14065
                           4    48531      48517      48351
     xeon  prepared        1    13384      13325      13290
                           4    49257      49309      49345
                          16   151668     151940     152201
             simple        1    12357      12351      12363
                           4    46039      46126      46201
                          16   141851     142402     142427


And compared to master:

  machine      mode  clients   built-in    with-guc
  -------------------------------------------------
       i5  prepared        1     99.52%      99.65%
                           4     99.71%     100.01%
             simple        1     99.77%      99.58%
                           4     99.97%      99.63%
     xeon  prepared        1     99.56%      99.30%
                           4    100.11%     100.18%
                          16    100.18%     100.35%
             simple        1     99.96%     100.05%
                           4    100.19%     100.35%
                          16    100.39%     100.41%

Admittedly, with max_locks_per_transaction=1024 it's fair to expect the
fast-path locking to actually be quite beneficial to the workload. Of
course, it's also possible the GUC is set this high only because of some
rare operation (say, running pg_dump, which needs to lock everything).

I did look at the docs to see if anything needs updating, but I don't
think so. The SGML docs only talk about fast-path locking at a fairly
high level, not about how many slots we have etc. Same for
src/backend/storage/lmgr/README, which focuses on the correctness of
fast-path locking, and that's not changed by this patch.

I also cleaned up (removed) some of the Asserts checking that we got a
valid group / slot index - I don't think they added much in practice,
once I added the asserts to the macros calculating those values.


Anyway, at this point I'm quite happy with this improvement. I don't
have a firm plan for when to commit this, but I'm considering doing so
sometime next week, unless someone objects or asks for some additional
benchmarks etc.

One thing I'm not quite sure about yet is whether to commit this as a
single change, or split the way the attached patches do - with the first
patch keeping the larger arrays in PGPROC, and the second patch moving
them out and sizing them based on max_locks_per_transaction ... Opinions?



regards

[1] https://github.com/tvondra/pg-lock-scalability-results

-- 
Tomas Vondra
From 7ae67a162fdcb80746bed45260fa937fc025b08b Mon Sep 17 00:00:00 2001
From: Tomas Vondra <to...@vondra.me>
Date: Thu, 12 Sep 2024 23:09:41 +0200
Subject: [PATCH v20240912 1/2] Increase the number of fast-path lock slots

The fast-path locking introduced in 9.2 allowed each backend to acquire
up to 16 relation locks cheaply, provided the lock level allows that.
If a backend needs to hold more locks, it has to insert them into the
regular lock table in shared memory. This is considerably more
expensive, and on many-core systems may be subject to contention.

The limit of 16 entries was always rather low, even with simple queries
and schemas with only a few tables. We have to lock all relations - not
just tables, but also indexes, views, etc. Moreover, for planning we
need to lock all relations that might be used in the plan, not just
those that actually get used in the final plan. It only takes a couple
tables with multiple indexes to need more than 16 locks. It was quite
common to fill all fast-path slots.

As partitioning gets used more widely, with more and more partitions,
this limit is trivial to hit, with complex queries easily using hundreds
or even thousands of locks. For workloads doing a lot of I/O this is not
noticeable, but on large machines with enough RAM to keep the data in
memory, the access to the shared lock table may be a serious issue.

This patch improves this by increasing the number of fast-path slots
from 16 to 1024. The slots remain in PGPROC, and are organized as an
array of 16-slot groups (each group being effectively a clone of the
original fast-path approach). Instead of accessing this as a big hash
table with open addressing, we treat this as a 16-way set associative
cache. Each relation (identified by a "relid" OID) is mapped to a
particular 16-slot group by calculating a hash

    h(relid) = ((relid * P) mod N)

where P is a hard-coded prime, and N is the number of groups. This is
not a great hash function, but it works well enough - the main purpose
is to prevent "hot groups", where runs of consecutive OIDs might fill up
some of the fast-path groups. The multiplication by P ensures that.
If the OIDs are already spread out, the hash should not group them.
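
For example, with N = 64 groups (as in this patch) the mapping reduces
to (relid * 5) mod 64, because 49157 mod 64 = 5; so consecutive OIDs
16384, 16385 and 16386 end up in groups 0, 5 and 10.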

Each group is then searched linearly. With only 16 entries per group
this is cheap, and the group has very good locality.

Treating this as a simple hash table with open addressing would not be
efficient, especially once the hash table is getting almost full. The
usual solution is to grow the table, but for hash tables in shared
memory that's not trivial. It would also have worse locality, due to
more random access.

Luckily, fast-path locking already has a simple solution to deal with a
full hash table. The lock can be simply inserted into the shared lock
table, just like before. Of course, if this happens too often, that
reduces the benefit of fast-path locking.

This patch hard-codes the number of groups to 64, which means 1024
fast-path locks. As all the information is still stored in PGPROC, this
grows PGPROC by about 4.5kB (from ~840B to ~5kB). This is a trade-off,
exchanging memory for cheaper locking.
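
For the record, the new arrays are 64 * (8B of fpLockBits + 16 * 4B of
fpRelId) = 64 * 72B = 4608B per backend, compared to 72B before - hence
the ~4.5kB figure.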

Ultimately, the number of fast-path slots should not be hard coded, but
adjustable based on what the workload does, perhaps using a GUC. That
however means it can't be stored in PGPROC directly.
---
 src/backend/storage/lmgr/lock.c | 118 ++++++++++++++++++++++++++------
 src/include/storage/proc.h      |   8 ++-
 2 files changed, 102 insertions(+), 24 deletions(-)

diff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c
index 83b99a98f08..d053ae0c409 100644
--- a/src/backend/storage/lmgr/lock.c
+++ b/src/backend/storage/lmgr/lock.c
@@ -167,7 +167,7 @@ typedef struct TwoPhaseLockRecord
  * our locks to the primary lock table, but it can never be lower than the
  * real value, since only we can acquire locks on our own behalf.
  */
-static int	FastPathLocalUseCount = 0;
+static int	FastPathLocalUseCounts[FP_LOCK_GROUPS_PER_BACKEND];
 
 /*
  * Flag to indicate if the relation extension lock is held by this backend.
@@ -184,23 +184,53 @@ static int	FastPathLocalUseCount = 0;
  */
 static bool IsRelationExtensionLockHeld PG_USED_FOR_ASSERTS_ONLY = false;
 
+/*
+ * Macros to calculate the group and index for a relation.
+ *
+ * The formula is a simple hash function, designed to spread the OIDs a bit,
+ * so that even contiguous values end up in different groups. In most cases
+ * there will be gaps anyway, but the multiplication should help a bit.
+ *
+ * The selected value (49157) is a prime not too close to 2^k, and it's
+ * small enough to not cause overflows (in 64-bit).
+ */
+#define FAST_PATH_LOCK_REL_GROUP(rel) \
+	(((uint64) (rel) * 49157) % FP_LOCK_GROUPS_PER_BACKEND)
+
+/* Calculate index in the whole per-backend array of lock slots. */
+#define FP_LOCK_SLOT_INDEX(group, index) \
+	(AssertMacro(((group) >= 0) && ((group) < FP_LOCK_GROUPS_PER_BACKEND)), \
+	 AssertMacro(((index) >= 0) && ((index) < FP_LOCK_SLOTS_PER_GROUP)), \
+	 ((group) * FP_LOCK_SLOTS_PER_GROUP + (index)))
+
+/*
+ * Given a lock index (into the per-backend array), calculated using the
+ * FP_LOCK_SLOT_INDEX macro, calculate group and index (within the group).
+ */
+#define FAST_PATH_LOCK_GROUP(index)	\
+	(AssertMacro(((index) >= 0) && ((index) < FP_LOCK_SLOTS_PER_BACKEND)), \
+	 ((index) / FP_LOCK_SLOTS_PER_GROUP))
+#define FAST_PATH_LOCK_INDEX(index)	\
+	(AssertMacro(((index) >= 0) && ((index) < FP_LOCK_SLOTS_PER_BACKEND)), \
+	 ((index) % FP_LOCK_SLOTS_PER_GROUP))
+
 /* Macros for manipulating proc->fpLockBits */
 #define FAST_PATH_BITS_PER_SLOT			3
 #define FAST_PATH_LOCKNUMBER_OFFSET		1
 #define FAST_PATH_MASK					((1 << FAST_PATH_BITS_PER_SLOT) - 1)
 #define FAST_PATH_GET_BITS(proc, n) \
-	(((proc)->fpLockBits >> (FAST_PATH_BITS_PER_SLOT * n)) & FAST_PATH_MASK)
+	(((proc)->fpLockBits[FAST_PATH_LOCK_GROUP(n)] >> (FAST_PATH_BITS_PER_SLOT * FAST_PATH_LOCK_INDEX(n))) & FAST_PATH_MASK)
 #define FAST_PATH_BIT_POSITION(n, l) \
 	(AssertMacro((l) >= FAST_PATH_LOCKNUMBER_OFFSET), \
 	 AssertMacro((l) < FAST_PATH_BITS_PER_SLOT+FAST_PATH_LOCKNUMBER_OFFSET), \
 	 AssertMacro((n) < FP_LOCK_SLOTS_PER_BACKEND), \
-	 ((l) - FAST_PATH_LOCKNUMBER_OFFSET + FAST_PATH_BITS_PER_SLOT * (n)))
+	 ((l) - FAST_PATH_LOCKNUMBER_OFFSET + FAST_PATH_BITS_PER_SLOT * (FAST_PATH_LOCK_INDEX(n))))
 #define FAST_PATH_SET_LOCKMODE(proc, n, l) \
-	 (proc)->fpLockBits |= UINT64CONST(1) << FAST_PATH_BIT_POSITION(n, l)
+	 (proc)->fpLockBits[FAST_PATH_LOCK_GROUP(n)] |= UINT64CONST(1) << FAST_PATH_BIT_POSITION(n, l)
 #define FAST_PATH_CLEAR_LOCKMODE(proc, n, l) \
-	 (proc)->fpLockBits &= ~(UINT64CONST(1) << FAST_PATH_BIT_POSITION(n, l))
+	 (proc)->fpLockBits[FAST_PATH_LOCK_GROUP(n)] &= ~(UINT64CONST(1) << FAST_PATH_BIT_POSITION(n, l))
 #define FAST_PATH_CHECK_LOCKMODE(proc, n, l) \
-	 ((proc)->fpLockBits & (UINT64CONST(1) << FAST_PATH_BIT_POSITION(n, l)))
+	 ((proc)->fpLockBits[FAST_PATH_LOCK_GROUP(n)] & (UINT64CONST(1) << FAST_PATH_BIT_POSITION(n, l)))
 
 /*
  * The fast-path lock mechanism is concerned only with relation locks on
@@ -926,7 +956,7 @@ LockAcquireExtended(const LOCKTAG *locktag,
 	 * for now we don't worry about that case either.
 	 */
 	if (EligibleForRelationFastPath(locktag, lockmode) &&
-		FastPathLocalUseCount < FP_LOCK_SLOTS_PER_BACKEND)
+		FastPathLocalUseCounts[FAST_PATH_LOCK_REL_GROUP(locktag->locktag_field2)] < FP_LOCK_SLOTS_PER_GROUP)
 	{
 		uint32		fasthashcode = FastPathStrongLockHashPartition(hashcode);
 		bool		acquired;
@@ -1970,6 +2000,7 @@ LockRelease(const LOCKTAG *locktag, LOCKMODE lockmode, bool sessionLock)
 	PROCLOCK   *proclock;
 	LWLock	   *partitionLock;
 	bool		wakeupNeeded;
+	int			group;
 
 	if (lockmethodid <= 0 || lockmethodid >= lengthof(LockMethods))
 		elog(ERROR, "unrecognized lock method: %d", lockmethodid);
@@ -2063,9 +2094,12 @@ LockRelease(const LOCKTAG *locktag, LOCKMODE lockmode, bool sessionLock)
 	 */
 	locallock->lockCleared = false;
 
+	/* fast-path group the lock belongs to */
+	group = FAST_PATH_LOCK_REL_GROUP(locktag->locktag_field2);
+
 	/* Attempt fast release of any lock eligible for the fast path. */
 	if (EligibleForRelationFastPath(locktag, lockmode) &&
-		FastPathLocalUseCount > 0)
+		FastPathLocalUseCounts[group] > 0)
 	{
 		bool		released;
 
@@ -2633,12 +2667,21 @@ LockReassignOwner(LOCALLOCK *locallock, ResourceOwner parent)
 static bool
 FastPathGrantRelationLock(Oid relid, LOCKMODE lockmode)
 {
-	uint32		f;
 	uint32		unused_slot = FP_LOCK_SLOTS_PER_BACKEND;
+	uint32		i,
+				group;
+
+	/* fast-path group the lock belongs to */
+	group = FAST_PATH_LOCK_REL_GROUP(relid);
 
 	/* Scan for existing entry for this relid, remembering empty slot. */
-	for (f = 0; f < FP_LOCK_SLOTS_PER_BACKEND; f++)
+	for (i = 0; i < FP_LOCK_SLOTS_PER_GROUP; i++)
 	{
+		uint32		f;
+
+		/* index into the whole per-backend array */
+		f = FP_LOCK_SLOT_INDEX(group, i);
+
 		if (FAST_PATH_GET_BITS(MyProc, f) == 0)
 			unused_slot = f;
 		else if (MyProc->fpRelId[f] == relid)
@@ -2654,7 +2697,7 @@ FastPathGrantRelationLock(Oid relid, LOCKMODE lockmode)
 	{
 		MyProc->fpRelId[unused_slot] = relid;
 		FAST_PATH_SET_LOCKMODE(MyProc, unused_slot, lockmode);
-		++FastPathLocalUseCount;
+		++FastPathLocalUseCounts[group];
 		return true;
 	}
 
@@ -2670,12 +2713,21 @@ FastPathGrantRelationLock(Oid relid, LOCKMODE lockmode)
 static bool
 FastPathUnGrantRelationLock(Oid relid, LOCKMODE lockmode)
 {
-	uint32		f;
 	bool		result = false;
+	uint32		i,
+				group;
 
-	FastPathLocalUseCount = 0;
-	for (f = 0; f < FP_LOCK_SLOTS_PER_BACKEND; f++)
+	/* fast-path group the lock belongs to */
+	group = FAST_PATH_LOCK_REL_GROUP(relid);
+
+	FastPathLocalUseCounts[group] = 0;
+	for (i = 0; i < FP_LOCK_SLOTS_PER_GROUP; i++)
 	{
+		uint32		f;
+
+		/* index into the whole per-backend array */
+		f = FP_LOCK_SLOT_INDEX(group, i);
+
 		if (MyProc->fpRelId[f] == relid
 			&& FAST_PATH_CHECK_LOCKMODE(MyProc, f, lockmode))
 		{
@@ -2685,7 +2737,7 @@ FastPathUnGrantRelationLock(Oid relid, LOCKMODE lockmode)
 			/* we continue iterating so as to update FastPathLocalUseCount */
 		}
 		if (FAST_PATH_GET_BITS(MyProc, f) != 0)
-			++FastPathLocalUseCount;
+			++FastPathLocalUseCounts[group];
 	}
 	return result;
 }
@@ -2714,7 +2766,8 @@ FastPathTransferRelationLocks(LockMethod lockMethodTable, const LOCKTAG *locktag
 	for (i = 0; i < ProcGlobal->allProcCount; i++)
 	{
 		PGPROC	   *proc = &ProcGlobal->allProcs[i];
-		uint32		f;
+		uint32		j,
+					group;
 
 		LWLockAcquire(&proc->fpInfoLock, LW_EXCLUSIVE);
 
@@ -2739,9 +2792,16 @@ FastPathTransferRelationLocks(LockMethod lockMethodTable, const LOCKTAG *locktag
 			continue;
 		}
 
-		for (f = 0; f < FP_LOCK_SLOTS_PER_BACKEND; f++)
+		/* fast-path group the lock belongs to */
+		group = FAST_PATH_LOCK_REL_GROUP(relid);
+
+		for (j = 0; j < FP_LOCK_SLOTS_PER_GROUP; j++)
 		{
 			uint32		lockmode;
+			uint32		f;
+
+			/* index into the whole per-backend array */
+			f = FP_LOCK_SLOT_INDEX(group, j);
 
 			/* Look for an allocated slot matching the given relid. */
 			if (relid != proc->fpRelId[f] || FAST_PATH_GET_BITS(proc, f) == 0)
@@ -2793,13 +2853,21 @@ FastPathGetRelationLockEntry(LOCALLOCK *locallock)
 	PROCLOCK   *proclock = NULL;
 	LWLock	   *partitionLock = LockHashPartitionLock(locallock->hashcode);
 	Oid			relid = locktag->locktag_field2;
-	uint32		f;
+	uint32		i,
+				group;
+
+	/* fast-path group the lock belongs to */
+	group = FAST_PATH_LOCK_REL_GROUP(relid);
 
 	LWLockAcquire(&MyProc->fpInfoLock, LW_EXCLUSIVE);
 
-	for (f = 0; f < FP_LOCK_SLOTS_PER_BACKEND; f++)
+	for (i = 0; i < FP_LOCK_SLOTS_PER_GROUP; i++)
 	{
 		uint32		lockmode;
+		uint32		f;
+
+		/* index into the whole per-backend array */
+		f = FP_LOCK_SLOT_INDEX(group, i);
 
 		/* Look for an allocated slot matching the given relid. */
 		if (relid != MyProc->fpRelId[f] || FAST_PATH_GET_BITS(MyProc, f) == 0)
@@ -2903,6 +2971,10 @@ GetLockConflicts(const LOCKTAG *locktag, LOCKMODE lockmode, int *countp)
 	LWLock	   *partitionLock;
 	int			count = 0;
 	int			fast_count = 0;
+	uint32		group;
+
+	/* fast-path group the lock belongs to */
+	group = FAST_PATH_LOCK_REL_GROUP(locktag->locktag_field2);
 
 	if (lockmethodid <= 0 || lockmethodid >= lengthof(LockMethods))
 		elog(ERROR, "unrecognized lock method: %d", lockmethodid);
@@ -2957,7 +3029,7 @@ GetLockConflicts(const LOCKTAG *locktag, LOCKMODE lockmode, int *countp)
 		for (i = 0; i < ProcGlobal->allProcCount; i++)
 		{
 			PGPROC	   *proc = &ProcGlobal->allProcs[i];
-			uint32		f;
+			uint32		j;
 
 			/* A backend never blocks itself */
 			if (proc == MyProc)
@@ -2979,9 +3051,13 @@ GetLockConflicts(const LOCKTAG *locktag, LOCKMODE lockmode, int *countp)
 				continue;
 			}
 
-			for (f = 0; f < FP_LOCK_SLOTS_PER_BACKEND; f++)
+			for (j = 0; j < FP_LOCK_SLOTS_PER_GROUP; j++)
 			{
 				uint32		lockmask;
+				uint32		f;
+
+				/* index into the whole per-backend array */
+				f = FP_LOCK_SLOT_INDEX(group, j);
 
 				/* Look for an allocated slot matching the given relid. */
 				if (relid != proc->fpRelId[f])
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index deeb06c9e01..845058da9fa 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -83,8 +83,9 @@ struct XidCache
  * rather than the main lock table.  This eases contention on the lock
  * manager LWLocks.  See storage/lmgr/README for additional details.
  */
-#define		FP_LOCK_SLOTS_PER_BACKEND 16
-
+#define		FP_LOCK_GROUPS_PER_BACKEND	64
+#define		FP_LOCK_SLOTS_PER_GROUP		16	/* don't change */
+#define		FP_LOCK_SLOTS_PER_BACKEND	(FP_LOCK_SLOTS_PER_GROUP * FP_LOCK_GROUPS_PER_BACKEND)
 /*
  * Flags for PGPROC.delayChkptFlags
  *
@@ -292,7 +293,8 @@ struct PGPROC
 
 	/* Lock manager data, recording fast-path locks taken by this backend. */
 	LWLock		fpInfoLock;		/* protects per-backend fast-path state */
-	uint64		fpLockBits;		/* lock modes held for each fast-path slot */
+	uint64		fpLockBits[FP_LOCK_GROUPS_PER_BACKEND]; /* lock modes held for
+														 * each fast-path slot */
 	Oid			fpRelId[FP_LOCK_SLOTS_PER_BACKEND]; /* slots for rel oids */
 	bool		fpVXIDLock;		/* are we holding a fast-path VXID lock? */
 	LocalTransactionId fpLocalTransactionId;	/* lxid for fast-path VXID
-- 
2.46.0

From 1e3be15e39aadc58db4c9be86cfee64f0395dfd4 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <to...@vondra.me>
Date: Thu, 12 Sep 2024 23:09:50 +0200
Subject: [PATCH v20240912 2/2] Set fast-path slots using
 max_locks_per_transaction

Instead of using a hard-coded value of 64 groups (1024 fast-path slots),
determine the value based on the max_locks_per_transaction GUC. This size
is calculated at startup, before allocating shared memory.

The default value of max_locks_per_transaction is 64, which means
4 groups of fast-path locks.
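
(Per InitializeFastPathLocks, the group count is always a power of two,
so e.g. max_locks_per_transaction = 100 would also get 8 groups / 128
slots, while 1024 gets 64 groups - the value hard-coded by the previous
patch.)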

The purpose of the max_locks_per_transaction GUC is to size the shared
lock table, but it's also the best information we have about the
expected number of locks per backend. It is often set to an average
number of locks needed by a backend, but some backends may need
substantially fewer/more locks.

This means fast-path capacity calculated from max_locks_per_transaction
may not be sufficient for some backends, forcing use of the shared lock
table. The assumption is this is not a major issue - there can't be too
many such backends, otherwise max_locks_per_transaction would
need to be higher anyway (resolving the fast-path issue too).

If that happens to be a problem, the only solution is to increase the
GUC, even if the shared lock table had sufficient capacity. That is not
free, because each lock in the shared lock table requires about 500B.
With many backends this may be a substantial amount of memory, but then
again - that should only happen on machines with plenty of memory.
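
For example, with max_locks_per_transaction = 1024 and, say, ~100
backends, the shared lock table is sized for roughly 1024 * 100 * 500B,
i.e. ~50MB, while the corresponding fast-path arrays add only about
1024 * 100 * 5B, i.e. ~500kB.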

In the future we can consider a separate GUC for the number of fast-path
slots, but let's try without one first.

An alternative solution might be to size the fast-path arrays for a
multiple of max_locks_per_transaction. The cost of adding a fast-path
slot is much lower (only ~5B compared to ~500B per entry), so this would
be cheaper than increasing max_locks_per_transaction. But it's not clear
what multiple of max_locks_per_transaction to use.
---
 src/backend/bootstrap/bootstrap.c   |  2 ++
 src/backend/postmaster/postmaster.c |  5 +++
 src/backend/storage/lmgr/lock.c     | 28 +++++++++++++----
 src/backend/storage/lmgr/proc.c     | 47 +++++++++++++++++++++++++++++
 src/backend/tcop/postgres.c         |  3 ++
 src/backend/utils/init/postinit.c   | 34 +++++++++++++++++++++
 src/include/miscadmin.h             |  1 +
 src/include/storage/proc.h          | 11 ++++---
 8 files changed, 120 insertions(+), 11 deletions(-)

diff --git a/src/backend/bootstrap/bootstrap.c b/src/backend/bootstrap/bootstrap.c
index 7637581a184..ed59dfce893 100644
--- a/src/backend/bootstrap/bootstrap.c
+++ b/src/backend/bootstrap/bootstrap.c
@@ -309,6 +309,8 @@ BootstrapModeMain(int argc, char *argv[], bool check_only)
 
 	InitializeMaxBackends();
 
+	InitializeFastPathLocks();
+
 	CreateSharedMemoryAndSemaphores();
 
 	/*
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 96bc1d1cfed..f4a16595d7f 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -903,6 +903,11 @@ PostmasterMain(int argc, char *argv[])
 	 */
 	InitializeMaxBackends();
 
+	/*
+	 * Also calculate the size of the fast-path lock arrays in PGPROC.
+	 */
+	InitializeFastPathLocks();
+
 	/*
 	 * Give preloaded libraries a chance to request additional shared memory.
 	 */
diff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c
index d053ae0c409..505aa52668e 100644
--- a/src/backend/storage/lmgr/lock.c
+++ b/src/backend/storage/lmgr/lock.c
@@ -166,8 +166,13 @@ typedef struct TwoPhaseLockRecord
  * might be higher than the real number if another backend has transferred
  * our locks to the primary lock table, but it can never be lower than the
  * real value, since only we can acquire locks on our own behalf.
+ *
+ * XXX Allocate a static array of the maximum size. We could have a pointer
+ * and then allocate just the right size to save a couple kB, but that does
+ * not seem worth the extra complexity of having to initialize it etc. This
+ * way it gets initialized automatically.
  */
-static int	FastPathLocalUseCounts[FP_LOCK_GROUPS_PER_BACKEND];
+static int	FastPathLocalUseCounts[FP_LOCK_GROUPS_PER_BACKEND_MAX];
 
 /*
  * Flag to indicate if the relation extension lock is held by this backend.
@@ -184,6 +189,17 @@ static int	FastPathLocalUseCounts[FP_LOCK_GROUPS_PER_BACKEND];
  */
 static bool IsRelationExtensionLockHeld PG_USED_FOR_ASSERTS_ONLY = false;
 
+/*
+ * Number of fast-path lock groups per backend - size of the arrays in PGPROC.
+ * This is set only once during start, before initializing shared memory,
+ * and remains constant after that.
+ *
+ * We set the limit based on the max_locks_per_transaction GUC, because that's
+ * the best information we have about the expected number of locks per backend.
+ * See InitializeFastPathLocks for details.
+ */
+int			FastPathLockGroupsPerBackend = 0;
+
 /*
  * Macros to calculate the group and index for a relation.
  *
@@ -195,11 +211,11 @@ static bool IsRelationExtensionLockHeld PG_USED_FOR_ASSERTS_ONLY = false;
  * small enough to not cause overflows (in 64-bit).
  */
 #define FAST_PATH_LOCK_REL_GROUP(rel) \
-	(((uint64) (rel) * 49157) % FP_LOCK_GROUPS_PER_BACKEND)
+	(((uint64) (rel) * 49157) % FastPathLockGroupsPerBackend)
 
 /* Calculate index in the whole per-backend array of lock slots. */
 #define FP_LOCK_SLOT_INDEX(group, index) \
-	(AssertMacro(((group) >= 0) && ((group) < FP_LOCK_GROUPS_PER_BACKEND)), \
+	(AssertMacro(((group) >= 0) && ((group) < FastPathLockGroupsPerBackend)), \
 	 AssertMacro(((index) >= 0) && ((index) < FP_LOCK_SLOTS_PER_GROUP)), \
 	 ((group) * FP_LOCK_SLOTS_PER_GROUP + (index)))
 
@@ -2973,9 +2989,6 @@ GetLockConflicts(const LOCKTAG *locktag, LOCKMODE lockmode, int *countp)
 	int			fast_count = 0;
 	uint32		group;
 
-	/* fast-path group the lock belongs to */
-	group = FAST_PATH_LOCK_REL_GROUP(locktag->locktag_field2);
-
 	if (lockmethodid <= 0 || lockmethodid >= lengthof(LockMethods))
 		elog(ERROR, "unrecognized lock method: %d", lockmethodid);
 	lockMethodTable = LockMethods[lockmethodid];
@@ -3005,6 +3018,9 @@ GetLockConflicts(const LOCKTAG *locktag, LOCKMODE lockmode, int *countp)
 	partitionLock = LockHashPartitionLock(hashcode);
 	conflictMask = lockMethodTable->conflictTab[lockmode];
 
+	/* fast-path group the lock belongs to */
+	group = FAST_PATH_LOCK_REL_GROUP(locktag->locktag_field2);
+
 	/*
 	 * Fast path locks might not have been entered in the primary lock table.
 	 * If the lock we're dealing with could conflict with such a lock, we must
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index ac66da8638f..a91b6f8a6c0 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -103,6 +103,8 @@ ProcGlobalShmemSize(void)
 	Size		size = 0;
 	Size		TotalProcs =
 		add_size(MaxBackends, add_size(NUM_AUXILIARY_PROCS, max_prepared_xacts));
+	Size		fpLockBitsSize,
+				fpRelIdSize;
 
 	/* ProcGlobal */
 	size = add_size(size, sizeof(PROC_HDR));
@@ -113,6 +115,18 @@ ProcGlobalShmemSize(void)
 	size = add_size(size, mul_size(TotalProcs, sizeof(*ProcGlobal->subxidStates)));
 	size = add_size(size, mul_size(TotalProcs, sizeof(*ProcGlobal->statusFlags)));
 
+	/*
+	 * fast-path lock arrays
+	 *
+	 * XXX The explicit alignment may not be strictly necessary, as both
+	 * values are already multiples of 8 bytes, which is what MAXALIGN does.
+	 * But better to make that obvious.
+	 */
+	fpLockBitsSize = MAXALIGN(FastPathLockGroupsPerBackend * sizeof(uint64));
+	fpRelIdSize = MAXALIGN(FastPathLockGroupsPerBackend * sizeof(Oid) * FP_LOCK_SLOTS_PER_GROUP);
+
+	size = add_size(size, mul_size(TotalProcs, (fpLockBitsSize + fpRelIdSize)));
+
 	return size;
 }
 
@@ -162,6 +176,10 @@ InitProcGlobal(void)
 				j;
 	bool		found;
 	uint32		TotalProcs = MaxBackends + NUM_AUXILIARY_PROCS + max_prepared_xacts;
+	char	   *fpPtr,
+			   *fpEndPtr PG_USED_FOR_ASSERTS_ONLY;
+	Size		fpLockBitsSize,
+				fpRelIdSize;
 
 	/* Create the ProcGlobal shared structure */
 	ProcGlobal = (PROC_HDR *)
@@ -211,12 +229,38 @@ InitProcGlobal(void)
 	ProcGlobal->statusFlags = (uint8 *) ShmemAlloc(TotalProcs * sizeof(*ProcGlobal->statusFlags));
 	MemSet(ProcGlobal->statusFlags, 0, TotalProcs * sizeof(*ProcGlobal->statusFlags));
 
+	/*
+	 * Allocate arrays for fast-path locks. Those are variable-length, so
+	 * can't be included in PGPROC. We allocate a separate piece of shared
+	 * memory and then divide that between backends.
+	 */
+	fpLockBitsSize = MAXALIGN(FastPathLockGroupsPerBackend * sizeof(uint64));
+	fpRelIdSize = MAXALIGN(FastPathLockGroupsPerBackend * sizeof(Oid) * FP_LOCK_SLOTS_PER_GROUP);
+
+	fpPtr = ShmemAlloc(TotalProcs * (fpLockBitsSize + fpRelIdSize));
+	MemSet(fpPtr, 0, TotalProcs * (fpLockBitsSize + fpRelIdSize));
+
+	/* For asserts checking we did not overflow. */
+	fpEndPtr = fpPtr + (TotalProcs * (fpLockBitsSize + fpRelIdSize));
+
 	for (i = 0; i < TotalProcs; i++)
 	{
 		PGPROC	   *proc = &procs[i];
 
 		/* Common initialization for all PGPROCs, regardless of type. */
 
+		/*
+		 * Set the fast-path lock arrays, and move the pointer. We interleave
+		 * the two arrays, to keep at least some locality.
+		 */
+		proc->fpLockBits = (uint64 *) fpPtr;
+		fpPtr += fpLockBitsSize;
+
+		proc->fpRelId = (Oid *) fpPtr;
+		fpPtr += fpRelIdSize;
+
+		Assert(fpPtr <= fpEndPtr);
+
 		/*
 		 * Set up per-PGPROC semaphore, latch, and fpInfoLock.  Prepared xact
 		 * dummy PGPROCs don't need these though - they're never associated
@@ -278,6 +322,9 @@ InitProcGlobal(void)
 		pg_atomic_init_u64(&(proc->waitStart), 0);
 	}
 
+	/* We should have consumed exactly the expected amount of memory. */
+	Assert(fpPtr == fpEndPtr);
+
 	/*
 	 * Save pointers to the blocks of PGPROC structures reserved for auxiliary
 	 * processes and prepared transactions.
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 8bc6bea1135..f54ae00abca 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -4166,6 +4166,9 @@ PostgresSingleUserMain(int argc, char *argv[],
 	/* Initialize MaxBackends */
 	InitializeMaxBackends();
 
+	/* Initialize size of fast-path lock cache. */
+	InitializeFastPathLocks();
+
 	/*
 	 * Give preloaded libraries a chance to request additional shared memory.
 	 */
diff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c
index 3b50ce19a2c..1faf756c8d8 100644
--- a/src/backend/utils/init/postinit.c
+++ b/src/backend/utils/init/postinit.c
@@ -557,6 +557,40 @@ InitializeMaxBackends(void)
 						   MAX_BACKENDS)));
 }
 
+/*
+ * Initialize the number of fast-path lock slots in PGPROC.
+ *
+ * This must be called after modules have had the chance to alter GUCs in
+ * shared_preload_libraries and before shared memory size is determined.
+ *
+ * The default max_locks_per_xact=64 means 4 groups.
+ *
+ * We allow anything between 1 and 1024 groups, with the usual power-of-2
+ * logic. The 1 is the "old" value before allowing multiple groups, 1024
+ * is an arbitrary limit (matching max_locks_per_xact = 16k). Values over
+ * 1024 are unlikely to be beneficial - we're likely to hit other
+ * bottlenecks long before that.
+ */
+void
+InitializeFastPathLocks(void)
+{
+	Assert(FastPathLockGroupsPerBackend == 0);
+
+	/* we need at least one group */
+	FastPathLockGroupsPerBackend = 1;
+
+	while (FastPathLockGroupsPerBackend < FP_LOCK_GROUPS_PER_BACKEND_MAX)
+	{
+		/* stop once we exceed max_locks_per_xact */
+		/* stop once the number of fast-path slots reaches max_locks_per_xact */
+			break;
+
+		FastPathLockGroupsPerBackend *= 2;
+	}
+
+	Assert(FastPathLockGroupsPerBackend <= FP_LOCK_GROUPS_PER_BACKEND_MAX);
+}
+
 /*
  * Early initialization of a backend (either standalone or under postmaster).
  * This happens even before InitPostgres.
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 25348e71eb9..e26d108a470 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -475,6 +475,7 @@ extern PGDLLIMPORT ProcessingMode Mode;
 #define INIT_PG_OVERRIDE_ROLE_LOGIN		0x0004
 extern void pg_split_opts(char **argv, int *argcp, const char *optstr);
 extern void InitializeMaxBackends(void);
+extern void InitializeFastPathLocks(void);
 extern void InitPostgres(const char *in_dbname, Oid dboid,
 						 const char *username, Oid useroid,
 						 bits32 flags,
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index 845058da9fa..0e55c166529 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -83,9 +83,11 @@ struct XidCache
  * rather than the main lock table.  This eases contention on the lock
  * manager LWLocks.  See storage/lmgr/README for additional details.
  */
-#define		FP_LOCK_GROUPS_PER_BACKEND	64
+extern PGDLLIMPORT int FastPathLockGroupsPerBackend;
+#define		FP_LOCK_GROUPS_PER_BACKEND_MAX	1024
 #define		FP_LOCK_SLOTS_PER_GROUP		16	/* don't change */
-#define		FP_LOCK_SLOTS_PER_BACKEND	(FP_LOCK_SLOTS_PER_GROUP * FP_LOCK_GROUPS_PER_BACKEND)
+#define		FP_LOCK_SLOTS_PER_BACKEND	(FP_LOCK_SLOTS_PER_GROUP * FastPathLockGroupsPerBackend)
+
 /*
  * Flags for PGPROC.delayChkptFlags
  *
@@ -293,9 +295,8 @@ struct PGPROC
 
 	/* Lock manager data, recording fast-path locks taken by this backend. */
 	LWLock		fpInfoLock;		/* protects per-backend fast-path state */
-	uint64		fpLockBits[FP_LOCK_GROUPS_PER_BACKEND]; /* lock modes held for
-														 * each fast-path slot */
-	Oid			fpRelId[FP_LOCK_SLOTS_PER_BACKEND]; /* slots for rel oids */
+	uint64	   *fpLockBits;		/* lock modes held for each fast-path slot */
+	Oid		   *fpRelId;		/* slots for rel oids */
 	bool		fpVXIDLock;		/* are we holding a fast-path VXID lock? */
 	LocalTransactionId fpLocalTransactionId;	/* lxid for fast-path VXID
 												 * lock */
-- 
2.46.0

i5	1	master	simple	1	14080.9	14102.110047
i5	1	master	simple	4	48395.0	48354.116503
i5	1	master	prepared	1	14890.3	14934.195527
i5	1	master	prepared	4	51537.3	51522.624839
i5	1	built-in	simple	1	14148.3	14142.788300
i5	1	built-in	simple	4	48612.6	48612.890117
i5	1	built-in	prepared	1	15001.2	14972.190629
i5	1	built-in	prepared	4	50884.6	50909.650566
i5	1	built-in-guc	simple	1	14044.7	14030.903646
i5	1	built-in-guc	simple	4	48315.2	48269.468605
i5	1	built-in-guc	prepared	1	14906.4	14910.701579
i5	1	built-in-guc	prepared	4	51756.4	51749.293514
i5	2	master	simple	1	14022.7	14016.470398
i5	2	master	simple	4	48389.0	48375.120104
i5	2	master	prepared	1	15006.9	14987.175386
i5	2	master	prepared	4	51115.6	51115.335676
i5	2	built-in	simple	1	14089.5	14084.849778
i5	2	built-in	simple	4	48414.4	48429.565585
i5	2	built-in	prepared	1	14928.4	14953.955533
i5	2	built-in	prepared	4	51482.3	51469.249951
i5	2	built-in-guc	simple	1	14070.0	14026.562135
i5	2	built-in-guc	simple	4	48436.1	48420.536506
i5	2	built-in-guc	prepared	1	14744.7	14750.031143
i5	2	built-in-guc	prepared	4	51234.2	51220.396822
i5	3	master	simple	1	14096.8	14077.481886
i5	3	master	simple	4	48563.7	48562.921258
i5	3	master	prepared	1	14998.7	15008.616332
i5	3	master	prepared	4	51424.7	51395.277647
i5	3	built-in	simple	1	14172.6	14166.768129
i5	3	built-in	simple	4	48605.0	48578.513934
i5	3	built-in	prepared	1	15048.8	15034.991405
i5	3	built-in	prepared	4	51867.0	51856.876985
i5	3	built-in-guc	simple	1	14058.0	14053.123947
i5	3	built-in-guc	simple	4	48538.0	48530.856327
i5	3	built-in-guc	prepared	1	15010.9	15026.335263
i5	3	built-in-guc	prepared	4	51982.5	51989.466475
i5	4	master	simple	1	14154.6	14117.359676
i5	4	master	simple	4	48570.4	48570.852168
i5	4	master	prepared	1	14920.9	14939.788716
i5	4	master	prepared	4	51588.4	51578.604824
i5	4	built-in	simple	1	14107.9	14109.651452
i5	4	built-in	simple	4	48398.6	48400.304251
i5	4	built-in	prepared	1	14775.4	14782.665393
i5	4	built-in	prepared	4	51495.7	51460.438668
i5	4	built-in-guc	simple	1	14002.3	14006.983057
i5	4	built-in-guc	simple	4	48477.9	48455.621417
i5	4	built-in-guc	prepared	1	14943.6	14956.007087
i5	4	built-in-guc	prepared	4	51511.5	51472.586470
i5	5	master	simple	1	14230.4	14195.051584
i5	5	master	simple	4	48833.8	48826.516769
i5	5	master	prepared	1	14988.6	15018.695951
i5	5	master	prepared	4	51575.7	51569.855544
i5	5	built-in	simple	1	13911.2	13909.242643
i5	5	built-in	simple	4	48353.5	48357.250760
i5	5	built-in	prepared	1	14787.6	14804.359452
i5	5	built-in	prepared	4	51174.1	51159.879369
i5	5	built-in-guc	simple	1	14138.8	14143.359702
i5	5	built-in-guc	simple	4	48617.0	48601.743282
i5	5	built-in-guc	prepared	1	15032.7	15026.830519
i5	5	built-in-guc	prepared	4	51741.4	51723.602854
i5	6	master	simple	1	14136.2	14106.507155
i5	6	master	simple	4	48527.8	48509.882057
i5	6	master	prepared	1	14970.6	14977.933552
i5	6	master	prepared	4	51645.4	51602.356924
i5	6	built-in	simple	1	14103.5	14108.594121
i5	6	built-in	simple	4	48716.1	48707.753065
i5	6	built-in	prepared	1	15040.9	15014.295825
i5	6	built-in	prepared	4	51698.6	51698.996492
i5	6	built-in-guc	simple	1	14088.7	14070.366851
i5	6	built-in-guc	simple	4	48127.8	48053.241889
i5	6	built-in-guc	prepared	1	14981.7	14987.015459
i5	6	built-in-guc	prepared	4	51262.3	51272.774438
i5	7	master	simple	1	14196.7	14149.519287
i5	7	master	simple	4	48386.5	48364.892203
i5	7	master	prepared	1	14984.4	15007.463706
i5	7	master	prepared	4	51649.9	51647.311500
i5	7	built-in	simple	1	14041.2	14030.780130
i5	7	built-in	simple	4	48499.4	48477.027169
i5	7	built-in	prepared	1	14935.8	14965.537448
i5	7	built-in	prepared	4	51331.5	51328.492693
i5	7	built-in-guc	simple	1	14070.2	14051.170322
i5	7	built-in-guc	simple	4	48248.0	48233.555328
i5	7	built-in-guc	prepared	1	15002.2	14984.904159
i5	7	built-in-guc	prepared	4	51228.2	51222.004856
i5	8	master	simple	1	14105.6	14106.610241
i5	8	master	simple	4	48475.7	48465.042264
i5	8	master	prepared	1	14927.6	14924.996755
i5	8	master	prepared	4	51116.9	51109.524419
i5	8	built-in	simple	1	13984.8	13983.338245
i5	8	built-in	simple	4	48051.5	48066.170106
i5	8	built-in	prepared	1	14767.6	14728.470718
i5	8	built-in	prepared	4	50731.6	50725.547013
i5	8	built-in-guc	simple	1	14207.0	14172.414090
i5	8	built-in-guc	simple	4	48234.4	48238.768602
i5	8	built-in-guc	prepared	1	14951.6	14956.407597
i5	8	built-in-guc	prepared	4	51819.0	51817.032787
i5	9	master	simple	1	14128.9	14158.161197
i5	9	master	simple	4	48435.9	48446.319800
i5	9	master	prepared	1	15032.3	15070.132100
i5	9	master	prepared	4	51724.5	51714.625798
i5	9	built-in	simple	1	14096.4	14091.993716
i5	9	built-in	simple	4	48806.9	48802.519490
i5	9	built-in	prepared	1	14928.5	14951.378328
i5	9	built-in	prepared	4	51580.5	51557.651618
i5	9	built-in-guc	simple	1	14085.8	14095.357211
i5	9	built-in-guc	simple	4	48644.8	48638.348194
i5	9	built-in-guc	prepared	1	14909.6	14878.023280
i5	9	built-in-guc	prepared	4	50991.1	50947.761509
i5	10	master	simple	1	14194.6	14212.629724
i5	10	master	simple	4	48848.2	48838.334267
i5	10	master	prepared	1	15162.7	15134.028526
i5	10	master	prepared	4	51710.8	51722.211996
i5	10	built-in	simple	1	14321.5	14295.535844
i5	10	built-in	simple	4	48725.4	48735.632100
i5	10	built-in	prepared	1	15112.9	15075.799677
i5	10	built-in	prepared	4	51336.6	51340.594478
i5	10	built-in-guc	simple	1	14016.6	13995.114681
i5	10	built-in-guc	simple	4	48085.0	48069.448283
i5	10	built-in-guc	prepared	1	15020.8	15006.527436
i5	10	built-in-guc	prepared	4	51654.6	51621.983563
xeon	1	master	simple	1	12321.0	12113.686352
xeon	1	master	simple	4	46061.8	46167.908916
xeon	1	master	simple	16	141913.1	142484.077282
xeon	1	master	prepared	1	13136.7	13376.467508
xeon	1	master	prepared	4	48927.0	49125.699832
xeon	1	master	prepared	16	149617.6	149890.942091
xeon	1	built-in	simple	1	11698.6	11984.559439
xeon	1	built-in	simple	4	46253.3	46405.814755
xeon	1	built-in	simple	16	142108.7	142204.159251
xeon	1	built-in	prepared	1	13262.6	13278.766170
xeon	1	built-in	prepared	4	49373.4	49116.682926
xeon	1	built-in	prepared	16	150975.1	150882.901026
xeon	1	built-in-guc	simple	1	12411.3	12209.569675
xeon	1	built-in-guc	simple	4	46362.8	46292.608241
xeon	1	built-in-guc	simple	16	143097.6	143255.213414
xeon	1	built-in-guc	prepared	1	12891.4	12904.830056
xeon	1	built-in-guc	prepared	4	49037.5	49650.207298
xeon	1	built-in-guc	prepared	16	151605.2	151766.492639
xeon	2	master	simple	1	12594.0	12569.975147
xeon	2	master	simple	4	46095.0	46334.372497
xeon	2	master	simple	16	143110.6	143251.654967
xeon	2	master	prepared	1	13593.0	13849.216433
xeon	2	master	prepared	4	50241.3	50771.943481
xeon	2	master	prepared	16	153719.2	153964.165214
xeon	2	built-in	simple	1	12581.6	12362.760560
xeon	2	built-in	simple	4	46688.0	46920.393032
xeon	2	built-in	simple	16	143339.1	143608.465217
xeon	2	built-in	prepared	1	13377.2	13614.311099
xeon	2	built-in	prepared	4	50243.0	50331.113222
xeon	2	built-in	prepared	16	154896.0	155194.083386
xeon	2	built-in-guc	simple	1	12403.6	12432.490363
xeon	2	built-in-guc	simple	4	45747.6	46349.661315
xeon	2	built-in-guc	simple	16	143039.6	143113.625020
xeon	2	built-in-guc	prepared	1	13220.9	13345.973115
xeon	2	built-in-guc	prepared	4	48563.3	48829.965386
xeon	2	built-in-guc	prepared	16	151429.5	151732.394825
xeon	3	master	simple	1	12288.7	12298.476516
xeon	3	master	simple	4	45880.7	45765.051800
xeon	3	master	simple	16	141240.3	141242.056364
xeon	3	master	prepared	1	15067.5	14237.524890
xeon	3	master	prepared	4	48663.8	48875.792909
xeon	3	master	prepared	16	151295.7	151493.587220
xeon	3	built-in	simple	1	12428.7	12357.897364
xeon	3	built-in	simple	4	46276.9	46283.919558
xeon	3	built-in	simple	16	142198.6	142147.885070
xeon	3	built-in	prepared	1	13145.0	13129.735118
xeon	3	built-in	prepared	4	49243.3	49505.854684
xeon	3	built-in	prepared	16	151158.4	151490.885867
xeon	3	built-in-guc	simple	1	12876.9	12799.358713
xeon	3	built-in-guc	simple	4	45800.3	45910.069255
xeon	3	built-in-guc	simple	16	141606.1	141706.124079
xeon	3	built-in-guc	prepared	1	13175.5	13159.314980
xeon	3	built-in-guc	prepared	4	49724.0	49933.670965
xeon	3	built-in-guc	prepared	16	152262.3	152330.391810
xeon	4	master	simple	1	12342.7	12273.037282
xeon	4	master	simple	4	45735.3	46278.850548
xeon	4	master	simple	16	140322.0	140506.519265
xeon	4	master	prepared	1	12992.6	12901.871888
xeon	4	master	prepared	4	48357.9	48428.230404
xeon	4	master	prepared	16	149629.4	149732.767753
xeon	4	built-in	simple	1	12587.4	12567.639184
xeon	4	built-in	simple	4	46691.9	46699.962096
xeon	4	built-in	simple	16	142465.7	142756.256287
xeon	4	built-in	prepared	1	13218.2	13461.258227
xeon	4	built-in	prepared	4	48914.4	49140.509405
xeon	4	built-in	prepared	16	151772.2	151967.354148
xeon	4	built-in-guc	simple	1	12253.7	12252.431895
xeon	4	built-in-guc	simple	4	44455.3	44672.086158
xeon	4	built-in-guc	simple	16	139107.8	139458.570645
xeon	4	built-in-guc	prepared	1	12556.8	12806.292236
xeon	4	built-in-guc	prepared	4	48509.3	48610.813753
xeon	4	built-in-guc	prepared	16	151890.3	152296.476006
xeon	5	master	simple	1	12128.5	12170.079024
xeon	5	master	simple	4	46143.2	46060.806034
xeon	5	master	simple	16	140829.8	141020.431251
xeon	5	master	prepared	1	13340.0	13521.018080
xeon	5	master	prepared	4	51304.7	51024.242249
xeon	5	master	prepared	16	151775.1	151975.264934
xeon	5	built-in	simple	1	12477.3	12443.974140
xeon	5	built-in	simple	4	45561.2	45457.103949
xeon	5	built-in	simple	16	142424.3	142422.955525
xeon	5	built-in	prepared	1	13243.0	13348.347912
xeon	5	built-in	prepared	4	48858.5	48813.072588
xeon	5	built-in	prepared	16	151425.6	151647.701569
xeon	5	built-in-guc	simple	1	12437.4	12439.925856
xeon	5	built-in-guc	simple	4	45998.8	46430.348219
xeon	5	built-in-guc	simple	16	141725.0	141975.006134
xeon	5	built-in-guc	prepared	1	13320.2	13693.676450
xeon	5	built-in-guc	prepared	4	49468.9	49414.102662
xeon	5	built-in-guc	prepared	16	152397.1	152783.559526
xeon	6	master	simple	1	12299.8	12274.341755
xeon	6	master	simple	4	45690.0	45755.526417
xeon	6	master	simple	16	141695.4	141706.926812
xeon	6	master	prepared	1	12858.6	13022.596983
xeon	6	master	prepared	4	48825.9	48711.051510
xeon	6	master	prepared	16	150762.6	151108.027398
xeon	6	built-in	simple	1	12128.0	12077.033721
xeon	6	built-in	simple	4	45378.6	45568.192478
xeon	6	built-in	simple	16	141033.1	141343.963168
xeon	6	built-in	prepared	1	12965.8	13414.635061
xeon	6	built-in	prepared	4	48654.1	48712.591104
xeon	6	built-in	prepared	16	150590.4	150797.051462
xeon	6	built-in-guc	simple	1	12348.6	12351.488976
xeon	6	built-in-guc	simple	4	46467.8	46387.163084
xeon	6	built-in-guc	simple	16	143734.3	143821.513557
xeon	6	built-in-guc	prepared	1	13138.0	13788.425471
xeon	6	built-in-guc	prepared	4	50398.0	50608.933505
xeon	6	built-in-guc	prepared	16	154082.7	153894.209871
xeon	7	master	simple	1	12320.0	12799.854420
xeon	7	master	simple	4	46040.3	46140.135695
xeon	7	master	simple	16	142497.1	142876.561870
xeon	7	master	prepared	1	13200.7	13187.011622
xeon	7	master	prepared	4	49346.3	50063.218378
xeon	7	master	prepared	16	152825.5	152808.682996
xeon	7	built-in	simple	1	12213.6	11988.561946
xeon	7	built-in	simple	4	45205.1	45191.915439
xeon	7	built-in	simple	16	139702.3	139948.950278
xeon	7	built-in	prepared	1	12735.5	12944.767576
xeon	7	built-in	prepared	4	47940.1	48099.254923
xeon	7	built-in	prepared	16	148478.7	148768.219475
xeon	7	built-in-guc	simple	1	12479.3	12388.390629
xeon	7	built-in-guc	simple	4	45417.9	46094.883898
xeon	7	built-in-guc	simple	16	141538.1	141647.778772
xeon	7	built-in-guc	prepared	1	12913.6	12959.254618
xeon	7	built-in-guc	prepared	4	48440.0	48478.460796
xeon	7	built-in-guc	prepared	16	151040.8	151367.118367
xeon	8	master	simple	1	12063.3	12062.550554
xeon	8	master	simple	4	45022.6	45375.751462
xeon	8	master	simple	16	139378.0	139616.512389
xeon	8	master	prepared	1	13022.5	13034.608037
xeon	8	master	prepared	4	47756.4	48032.141669
xeon	8	master	prepared	16	150649.4	150739.169508
xeon	8	built-in	simple	1	12636.9	12582.355521
xeon	8	built-in	simple	4	46476.0	46441.387849
xeon	8	built-in	simple	16	144153.7	144359.367626
xeon	8	built-in	prepared	1	13238.2	13394.503058
xeon	8	built-in	prepared	4	49636.9	49557.493302
xeon	8	built-in	prepared	16	153845.1	154128.451439
xeon	8	built-in-guc	simple	1	12515.3	12517.217910
xeon	8	built-in-guc	simple	4	47009.4	47126.697586
xeon	8	built-in-guc	simple	16	143638.1	143847.444651
xeon	8	built-in-guc	prepared	1	13445.2	13700.718829
xeon	8	built-in-guc	prepared	4	50346.5	50134.059773
xeon	8	built-in-guc	prepared	16	152488.5	152623.934892
xeon	9	master	simple	1	12490.8	12454.969962
xeon	9	master	simple	4	46260.5	46149.849459
xeon	9	master	simple	16	142678.2	142717.472349
xeon	9	master	prepared	1	13057.0	13435.252281
xeon	9	master	prepared	4	49104.0	49233.865922
xeon	9	master	prepared	16	152324.8	152363.495307
xeon	9	built-in	simple	1	12547.5	12542.310980
xeon	9	built-in	simple	4	46361.9	46449.867649
xeon	9	built-in	simple	16	143568.7	143902.189886
xeon	9	built-in	prepared	1	13486.4	13646.233568
xeon	9	built-in	prepared	4	50489.4	51148.594980
xeon	9	built-in	prepared	16	153587.8	154082.377096
xeon	9	built-in-guc	simple	1	12200.7	12194.922957
xeon	9	built-in-guc	simple	4	46444.3	46608.534778
xeon	9	built-in-guc	simple	16	143248.9	143475.950349
xeon	9	built-in-guc	prepared	1	13327.9	13536.522876
xeon	9	built-in-guc	prepared	4	49420.3	49401.932784
xeon	9	built-in-guc	prepared	16	151859.1	152166.819766
xeon	10	master	simple	1	12611.0	12548.414937
xeon	10	master	simple	4	46396.9	46359.341299
xeon	10	master	simple	16	143053.6	143087.924347
xeon	10	master	prepared	1	13099.8	13274.694777
xeon	10	master	prepared	4	48372.9	48298.822079
xeon	10	master	prepared	16	152667.6	152607.979638
xeon	10	built-in	simple	1	12431.5	12603.897559
xeon	10	built-in	simple	4	45702.3	45837.274823
xeon	10	built-in	simple	16	141242.3	141321.486585
xeon	10	built-in	prepared	1	13004.4	13017.309945
xeon	10	built-in	prepared	4	48725.7	48660.610834
xeon	10	built-in	prepared	16	150256.6	150440.615260
xeon	10	built-in-guc	simple	1	12051.9	12046.154519
xeon	10	built-in-guc	simple	4	45916.9	46139.911810
xeon	10	built-in-guc	simple	16	141991.3	141968.515317
xeon	10	built-in-guc	prepared	1	13003.6	13005.107633
xeon	10	built-in-guc	prepared	4	48240.3	48390.738417
xeon	10	built-in-guc	prepared	16	150663.0	151043.965473

Attachment: run-lock-test.sh
Description: application/shellscript

i5	1	master	simple	1	13945.4	13954.827492
i5	1	master	simple	4	48711.9	48695.172011
i5	1	master	prepared	1	14879.3	14881.711975
i5	1	master	prepared	4	51852.1	51836.343646
i5	1	built-in	simple	1	14096.7	14105.113451
i5	1	built-in	simple	4	48051.5	48072.994414
i5	1	built-in	prepared	1	15149.0	15109.503629
i5	1	built-in	prepared	4	51799.7	51775.818023
i5	1	built-in-guc	simple	1	14132.6	14099.290708
i5	1	built-in-guc	simple	4	48341.8	48337.845499
i5	1	built-in-guc	prepared	1	14991.3	14975.235724
i5	1	built-in-guc	prepared	4	51385.2	51344.776611
i5	2	master	simple	1	14058.6	14074.818176
i5	2	master	simple	4	48833.5	48824.141424
i5	2	master	prepared	1	15262.2	15235.351798
i5	2	master	prepared	4	52125.2	52109.756349
i5	2	built-in	simple	1	14115.5	14140.507070
i5	2	built-in	simple	4	48600.6	48592.795749
i5	2	built-in	prepared	1	14975.8	15005.122055
i5	2	built-in	prepared	4	51855.1	51692.940208
i5	2	built-in-guc	simple	1	14016.9	14005.032892
i5	2	built-in-guc	simple	4	48132.1	48107.706786
i5	2	built-in-guc	prepared	1	14806.7	14825.469178
i5	2	built-in-guc	prepared	4	51070.6	51034.867739
i5	3	master	simple	1	14011.1	13999.267172
i5	3	master	simple	4	48052.7	48047.319336
i5	3	master	prepared	1	14966.7	14952.474371
i5	3	master	prepared	4	51242.4	51229.490027
i5	3	built-in	simple	1	14110.7	14071.999292
i5	3	built-in	simple	4	48247.2	48247.999507
i5	3	built-in	prepared	1	14809.9	14810.312961
i5	3	built-in	prepared	4	51448.7	51448.614480
i5	3	built-in-guc	simple	1	14048.7	14031.975903
i5	3	built-in-guc	simple	4	48604.6	48609.412321
i5	3	built-in-guc	prepared	1	14945.1	14956.656108
i5	3	built-in-guc	prepared	4	51254.7	51245.407622
i5	4	master	simple	1	14142.2	14145.531400
i5	4	master	simple	4	48863.5	48849.608202
i5	4	master	prepared	1	15128.6	15116.696271
i5	4	master	prepared	4	52158.8	52158.107937
i5	4	built-in	simple	1	14062.3	14060.649800
i5	4	built-in	simple	4	48637.5	48561.196135
i5	4	built-in	prepared	1	14950.6	14976.974040
i5	4	built-in	prepared	4	51651.1	51647.112204
i5	4	built-in-guc	simple	1	14142.3	14130.775720
i5	4	built-in-guc	simple	4	48436.2	48413.349946
i5	4	built-in-guc	prepared	1	14996.1	14987.062213
i5	4	built-in-guc	prepared	4	51418.0	51430.385529
i5	5	master	simple	1	14162.4	14140.067055
i5	5	master	simple	4	48716.0	48714.697662
i5	5	master	prepared	1	14874.2	14893.955329
i5	5	master	prepared	4	51642.8	51638.478382
i5	5	built-in	simple	1	14327.9	14315.687315
i5	5	built-in	simple	4	48575.2	48582.721717
i5	5	built-in	prepared	1	15148.0	15126.616486
i5	5	built-in	prepared	4	52061.0	52071.206161
i5	5	built-in-guc	simple	1	13969.8	13935.066582
i5	5	built-in-guc	simple	4	48706.3	48706.351805
i5	5	built-in-guc	prepared	1	15022.9	15031.313980
i5	5	built-in-guc	prepared	4	51507.2	51515.781386
i5	6	master	simple	1	13993.3	13998.592314
i5	6	master	simple	4	48385.4	48383.615463
i5	6	master	prepared	1	14939.4	14949.108858
i5	6	master	prepared	4	51351.5	51350.713986
i5	6	built-in	simple	1	14189.5	14164.803513
i5	6	built-in	simple	4	48431.3	48434.261393
i5	6	built-in	prepared	1	15186.1	15153.718243
i5	6	built-in	prepared	4	51595.1	51581.913299
i5	6	built-in-guc	simple	1	13912.9	13921.266247
i5	6	built-in-guc	simple	4	48354.3	48375.206008
i5	6	built-in-guc	prepared	1	14868.7	14886.172519
i5	6	built-in-guc	prepared	4	51305.5	51302.170437
i5	7	master	simple	1	14189.8	14175.239368
i5	7	master	simple	4	48911.2	48915.235044
i5	7	master	prepared	1	15074.6	15073.194243
i5	7	master	prepared	4	51887.9	51888.708951
i5	7	built-in	simple	1	14133.7	14132.483737
i5	7	built-in	simple	4	48609.2	48602.497604
i5	7	built-in	prepared	1	14903.9	14908.544561
i5	7	built-in	prepared	4	51121.5	50992.647588
i5	7	built-in-guc	simple	1	13984.7	13984.830373
i5	7	built-in-guc	simple	4	48190.8	48180.152544
i5	7	built-in-guc	prepared	1	14815.0	14824.572366
i5	7	built-in-guc	prepared	4	51322.2	51310.995523
i5	8	master	simple	1	14130.0	14121.928395
i5	8	master	simple	4	48255.4	48276.713939
i5	8	master	prepared	1	14918.2	14956.383950
i5	8	master	prepared	4	51352.3	51346.184069
i5	8	built-in	simple	1	14247.1	14216.424742
i5	8	built-in	simple	4	48584.3	48581.328213
i5	8	built-in	prepared	1	14912.3	14910.866013
i5	8	built-in	prepared	4	50929.8	50939.532619
i5	8	built-in-guc	simple	1	14076.1	14099.811859
i5	8	built-in-guc	simple	4	48720.0	48728.435473
i5	8	built-in-guc	prepared	1	15038.2	15048.531254
i5	8	built-in-guc	prepared	4	51851.7	51853.129676
i5	9	master	simple	1	13837.0	13844.483616
i5	9	master	simple	4	48834.0	48845.416664
i5	9	master	prepared	1	14972.0	14983.084695
i5	9	master	prepared	4	51628.7	51637.828840
i5	9	built-in	simple	1	14337.5	14294.360567
i5	9	built-in	simple	4	49205.3	49158.109018
i5	9	built-in	prepared	1	15122.1	15108.879391
i5	9	built-in	prepared	4	52111.8	52108.582691
i5	9	built-in-guc	simple	1	14010.9	14003.806280
i5	9	built-in-guc	simple	4	48584.5	48539.174916
i5	9	built-in-guc	prepared	1	15048.3	15027.601602
i5	9	built-in-guc	prepared	4	51848.5	51843.800690
i5	10	master	simple	1	14005.4	14009.837132
i5	10	master	simple	4	48524.4	48508.366950
i5	10	master	prepared	1	15028.8	14992.065530
i5	10	master	prepared	4	51142.5	51146.001485
i5	10	built-in	simple	1	14010.7	14041.493640
i5	10	built-in	simple	4	48587.0	48562.210561
i5	10	built-in	prepared	1	14957.6	14930.933240
i5	10	built-in	prepared	4	51520.2	51504.606674
i5	10	built-in-guc	simple	1	14002.5	14011.410482
i5	10	built-in-guc	simple	4	48503.6	48501.130182
i5	10	built-in-guc	prepared	1	14974.9	14992.937718
i5	10	built-in-guc	prepared	4	51514.8	51507.047113
xeon	1	master	simple	1	12457.3	12421.290207
xeon	1	master	simple	4	45750.1	45985.332307
xeon	1	master	simple	16	142636.6	143058.591697
xeon	1	master	prepared	1	13095.7	13121.200090
xeon	1	master	prepared	4	48766.5	48738.113161
xeon	1	master	prepared	16	151395.1	151532.467550
xeon	1	built-in	simple	1	11781.0	11870.365029
xeon	1	built-in	simple	4	46707.2	46638.688006
xeon	1	built-in	simple	16	143884.4	143688.011971
xeon	1	built-in	prepared	1	13314.1	13397.399643
xeon	1	built-in	prepared	4	51437.1	51211.051221
xeon	1	built-in	prepared	16	152932.6	153579.846747
xeon	1	built-in-guc	simple	1	12457.7	12323.612835
xeon	1	built-in-guc	simple	4	45892.0	45767.443306
xeon	1	built-in-guc	simple	16	141964.1	142164.895920
xeon	1	built-in-guc	prepared	1	13356.8	13483.316673
xeon	1	built-in-guc	prepared	4	49003.6	49190.649757
xeon	1	built-in-guc	prepared	16	151446.4	151689.764377
xeon	2	master	simple	1	12302.8	12305.818670
xeon	2	master	simple	4	46321.0	46278.719217
xeon	2	master	simple	16	143090.9	143378.881540
xeon	2	master	prepared	1	13294.1	13455.504052
xeon	2	master	prepared	4	49689.9	50305.283336
xeon	2	master	prepared	16	152136.7	152457.693553
xeon	2	built-in	simple	1	12213.1	12194.735753
xeon	2	built-in	simple	4	45806.6	45686.671251
xeon	2	built-in	simple	16	141558.5	141896.047974
xeon	2	built-in	prepared	1	13077.4	12887.912351
xeon	2	built-in	prepared	4	48903.0	48768.000806
xeon	2	built-in	prepared	16	150655.6	150946.858640
xeon	2	built-in-guc	simple	1	12487.6	12468.641761
xeon	2	built-in-guc	simple	4	46317.5	46255.072475
xeon	2	built-in-guc	simple	16	143521.1	143673.901901
xeon	2	built-in-guc	prepared	1	13045.2	13210.098505
xeon	2	built-in-guc	prepared	4	49633.9	49490.136535
xeon	2	built-in-guc	prepared	16	151411.6	151514.530714
xeon	3	master	simple	1	12316.2	12100.448108
xeon	3	master	simple	4	45929.8	45835.654872
xeon	3	master	simple	16	141676.4	141600.016349
xeon	3	master	prepared	1	13094.3	13186.001217
xeon	3	master	prepared	4	49108.5	49101.261003
xeon	3	master	prepared	16	149193.8	149416.838439
xeon	3	built-in	simple	1	12528.8	12533.810905
xeon	3	built-in	simple	4	47031.9	47245.958915
xeon	3	built-in	simple	16	142400.9	142435.321930
xeon	3	built-in	prepared	1	13162.6	13134.417088
xeon	3	built-in	prepared	4	49602.9	49676.601691
xeon	3	built-in	prepared	16	153723.5	153708.057905
xeon	3	built-in-guc	simple	1	12129.7	12185.239942
xeon	3	built-in-guc	simple	4	45800.9	45693.622135
xeon	3	built-in-guc	simple	16	142509.6	142465.767079
xeon	3	built-in-guc	prepared	1	13227.2	13607.218542
xeon	3	built-in-guc	prepared	4	49028.5	49285.997471
xeon	3	built-in-guc	prepared	16	151986.1	152478.424781
xeon	4	master	simple	1	12333.8	12321.136181
xeon	4	master	simple	4	45371.7	45336.617238
xeon	4	master	simple	16	141676.7	141718.240437
xeon	4	master	prepared	1	13314.6	13780.305538
xeon	4	master	prepared	4	49087.5	49302.928072
xeon	4	master	prepared	16	150996.5	151294.186193
xeon	4	built-in	simple	1	12336.0	12347.926660
xeon	4	built-in	simple	4	44687.6	44768.275156
xeon	4	built-in	simple	16	140751.7	140824.416046
xeon	4	built-in	prepared	1	15233.8	14543.205055
xeon	4	built-in	prepared	4	48716.1	48493.310586
xeon	4	built-in	prepared	16	150966.4	151106.085111
xeon	4	built-in-guc	simple	1	11949.3	12113.777423
xeon	4	built-in-guc	simple	4	46232.3	46653.438592
xeon	4	built-in-guc	simple	16	144307.6	144449.248169
xeon	4	built-in-guc	prepared	1	13352.0	13355.108998
xeon	4	built-in-guc	prepared	4	49241.4	49871.496891
xeon	4	built-in-guc	prepared	16	152192.8	152428.348687
xeon	5	master	simple	1	12143.7	12137.206377
xeon	5	master	simple	4	45646.7	45689.169637
xeon	5	master	simple	16	140830.4	140964.059442
xeon	5	master	prepared	1	13020.8	13061.404227
xeon	5	master	prepared	4	48762.8	48880.741440
xeon	5	master	prepared	16	148939.3	149180.893231
xeon	5	built-in	simple	1	12308.7	12375.878776
xeon	5	built-in	simple	4	48135.4	47906.733000
xeon	5	built-in	simple	16	142287.0	142433.458336
xeon	5	built-in	prepared	1	12651.6	12984.675023
xeon	5	built-in	prepared	4	49673.3	49636.365588
xeon	5	built-in	prepared	16	153120.1	153425.535499
xeon	5	built-in-guc	simple	1	12167.2	12158.948171
xeon	5	built-in-guc	simple	4	44843.9	44861.932622
xeon	5	built-in-guc	simple	16	139334.8	139418.873059
xeon	5	built-in-guc	prepared	1	12762.6	12840.172116
xeon	5	built-in-guc	prepared	4	48052.8	47924.203840
xeon	5	built-in-guc	prepared	16	149338.3	149466.926784
xeon	6	master	simple	1	12241.1	12192.157982
xeon	6	master	simple	4	45504.8	45693.083022
xeon	6	master	simple	16	140357.7	140600.608013
xeon	6	master	prepared	1	13116.9	13118.097681
xeon	6	master	prepared	4	49025.9	49256.292629
xeon	6	master	prepared	16	150550.5	150911.350990
xeon	6	built-in	simple	1	12117.0	12108.009794
xeon	6	built-in	simple	4	45597.7	45482.468376
xeon	6	built-in	simple	16	141024.5	141171.196029
xeon	6	built-in	prepared	1	13093.8	13136.873564
xeon	6	built-in	prepared	4	48407.0	48527.049175
xeon	6	built-in	prepared	16	148928.5	149220.865379
xeon	6	built-in-guc	simple	1	12340.7	12327.756912
xeon	6	built-in-guc	simple	4	45639.1	45732.258877
xeon	6	built-in-guc	simple	16	142188.2	142427.113552
xeon	6	built-in-guc	prepared	1	13074.3	13130.758414
xeon	6	built-in-guc	prepared	4	48054.2	47823.130155
xeon	6	built-in-guc	prepared	16	151366.0	151468.465086
xeon	7	master	simple	1	12458.5	12232.251979
xeon	7	master	simple	4	46191.9	46080.753175
xeon	7	master	simple	16	141907.8	142220.544197
xeon	7	master	prepared	1	13252.3	13220.871104
xeon	7	master	prepared	4	49208.6	49182.524739
xeon	7	master	prepared	16	150538.2	150778.213068
xeon	7	built-in	simple	1	12324.4	12312.734561
xeon	7	built-in	simple	4	45571.9	45875.664935
xeon	7	built-in	simple	16	143000.3	142999.611306
xeon	7	built-in	prepared	1	13399.6	13431.484674
xeon	7	built-in	prepared	4	49044.2	48962.880915
xeon	7	built-in	prepared	16	152351.2	152374.973553
xeon	7	built-in-guc	simple	1	12399.3	12377.823017
xeon	7	built-in-guc	simple	4	45771.1	45918.148541
xeon	7	built-in-guc	simple	16	141652.0	141589.891651
xeon	7	built-in-guc	prepared	1	12806.4	13021.277447
xeon	7	built-in-guc	prepared	4	48281.9	48684.758960
xeon	7	built-in-guc	prepared	16	149714.0	149723.274031
xeon	8	master	simple	1	12294.3	12267.598950
xeon	8	master	simple	4	46191.3	46342.282529
xeon	8	master	simple	16	141555.7	141828.810606
xeon	8	master	prepared	1	13254.4	13204.673608
xeon	8	master	prepared	4	49142.2	49378.568856
xeon	8	master	prepared	16	151609.9	151928.266387
xeon	8	built-in	simple	1	12413.6	12418.544828
xeon	8	built-in	simple	4	46309.4	46420.900468
xeon	8	built-in	simple	16	143031.0	143077.940248
xeon	8	built-in	prepared	1	13369.4	13774.173143
xeon	8	built-in	prepared	4	49453.1	49808.404341
xeon	8	built-in	prepared	16	152665.6	152766.604680
xeon	8	built-in-guc	simple	1	12270.1	12300.554079
xeon	8	built-in-guc	simple	4	44816.1	44896.275741
xeon	8	built-in-guc	simple	16	141102.9	141277.727653
xeon	8	built-in-guc	prepared	1	12853.7	13006.582353
xeon	8	built-in-guc	prepared	4	49189.6	50183.986276
xeon	8	built-in-guc	prepared	16	151390.5	151689.193537
xeon	9	master	simple	1	12358.6	12148.798480
xeon	9	master	simple	4	45615.5	45732.570486
xeon	9	master	simple	16	140979.9	141155.846623
xeon	9	master	prepared	1	13212.6	13295.050385
xeon	9	master	prepared	4	49197.9	49359.670203
xeon	9	master	prepared	16	151773.4	151696.645348
xeon	9	built-in	simple	1	12387.7	12269.813490
xeon	9	built-in	simple	4	45420.7	45559.355159
xeon	9	built-in	simple	16	141837.4	141882.647996
xeon	9	built-in	prepared	1	13218.3	13262.212145
xeon	9	built-in	prepared	4	49217.0	49419.717080
xeon	9	built-in	prepared	16	152184.0	152161.183441
xeon	9	built-in-guc	simple	1	12614.1	12607.549642
xeon	9	built-in-guc	simple	4	46348.9	46732.498833
xeon	9	built-in-guc	simple	16	142539.9	142833.568163
xeon	9	built-in-guc	prepared	1	13127.4	13224.211389
xeon	9	built-in-guc	prepared	4	49220.9	49240.426636
xeon	9	built-in-guc	prepared	16	151345.9	151606.285005
xeon	10	master	simple	1	12446.2	12473.293384
xeon	10	master	simple	4	46430.7	46283.409941
xeon	10	master	simple	16	143257.9	143243.023035
xeon	10	master	prepared	1	13327.4	13420.132901
xeon	10	master	prepared	4	49578.8	49413.363053
xeon	10	master	prepared	16	151985.1	152466.368236
xeon	10	built-in	simple	1	11670.0	11894.832095
xeon	10	built-in	simple	4	46441.1	46485.108513
xeon	10	built-in	simple	16	141231.1	141444.233496
xeon	10	built-in	prepared	1	13175.4	13404.120931
xeon	10	built-in	prepared	4	48438.3	48750.527722
xeon	10	built-in	prepared	16	151364.4	151808.579775
xeon	10	built-in-guc	simple	1	12306.6	12308.366084
xeon	10	built-in-guc	simple	4	46403.4	46508.962630
xeon	10	built-in-guc	simple	16	142039.9	142192.078044
xeon	10	built-in-guc	prepared	1	12561.5	12902.873643
xeon	10	built-in-guc	prepared	4	49244.9	49350.436567
xeon	10	built-in-guc	prepared	16	151447.2	151412.039990
