We can pass on what we've seen when running tests here with different
BufMapping and LockMgr partition sizes.

We use a TPC-C inspired benchmark. Currently it is configured to run 25
backend processes. The test runs for 16 minutes as this is the minimum
amount of time we can run and obtain useful information. This gives us
24,000 seconds (25 * 16 * 60) of processing time.

The following timings have been rounded to the nearest second and
represent the cumulative time, across all backend processes, spent
acquiring and releasing locks. For example, a value of 2,500 seconds
would mean each of the 25 backend processes took ~100 seconds on average
to acquire or release locks. In reality, the time spent locking or
releasing each partition entry is not uniform, and there are some
definite hotspot entries. We can pass on some of the lock output if
anyone is interested.
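To make the averaging convention above concrete, here is a minimal
illustration (not actual benchmark code) of how a cumulative figure maps
to a per-backend average:

```python
# The timings below are cumulative across all backends, so the
# per-backend average is the total divided by the backend count.
backends = 25
cumulative_lock_seconds = 2500   # example value from the text
per_backend = cumulative_lock_seconds / backends
print(per_backend)               # 100.0
```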

When using 16 buffer and 16 lock partitions, we see that BufMapping
takes 809 seconds to acquire locks and 174 seconds to release locks. The
LockMgr takes 362 seconds to acquire locks and 26 seconds to release
locks.

When using 128 buffer and 128 lock partitions, we see that BufMapping
takes 277 seconds (532 seconds improvement) to acquire locks and 78
seconds (96 seconds improvement) to release locks. The LockMgr takes 235
seconds (127 seconds improvement) to acquire locks and 22 seconds (4
seconds improvement) to release locks.

Overall, 128 BufMapping partitions improve locking/releasing by 628
seconds (532 + 96) and 128 LockMgr partitions improve locking/releasing
by 131 seconds (127 + 4).
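The overall improvements can be recomputed directly from the raw timings
quoted above (a sanity check, not benchmark code):

```python
# Cumulative acquire/release times in seconds, taken from the text.
buf_16 = {"acquire": 809, "release": 174}    # BufMapping, 16 partitions
buf_128 = {"acquire": 277, "release": 78}    # BufMapping, 128 partitions
lock_16 = {"acquire": 362, "release": 26}    # LockMgr, 16 partitions
lock_128 = {"acquire": 235, "release": 22}   # LockMgr, 128 partitions

buf_saving = sum(buf_16.values()) - sum(buf_128.values())
lock_saving = sum(lock_16.values()) - sum(lock_128.values())
print(buf_saving, lock_saving)   # 628 131
```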

With the improvements in the various locking times, one might expect a
corresponding improvement in the overall benchmark result. However, a 16
partition run produces 198.74 TPS and a 128 partition run produces
203.24 TPS.
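For scale, the throughput difference between the two runs works out to
roughly a 2% gain:

```python
# Relative throughput change between the 16 and 128 partition runs,
# using the TPS figures quoted above.
tps_16, tps_128 = 198.74, 203.24
gain_pct = (tps_128 - tps_16) / tps_16 * 100
print(round(gain_pct, 2))   # 2.26
```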

Part of the time saved from BufMapping and LockMgr partitions is
absorbed into the WALInsertLock lock. For a 16 partition run, the total
time to lock/release the WALInsertLock lock is 5845 seconds. For 128
partitions, the WALInsertLock lock takes 6172 seconds, an increase of
327 seconds. Perhaps we have our WAL configured incorrectly?

Other static locks are also affected, but not as much as the
WALInsertLock lock. For example, the ProcArrayLock lock increases from
337 seconds to 348 seconds. The SInvalLock lock increases from 317
seconds to 331 seconds.

Because of this expansion of time in other locks, a 128 partition run
spends only 403 seconds less in locking overall than a 16 partition run.
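A rough accounting of where the partition-related savings went, using
the figures quoted above; the small gap between this estimate and the
reported 403 seconds presumably sits in locks not listed here:

```python
# Savings from larger partition counts (per-lock improvements from the
# text) versus the increases absorbed by other locks.
saved = (532 + 96) + (127 + 4)       # BufMapping + LockMgr improvements
absorbed = (6172 - 5845) \
         + (348 - 337) \
         + (331 - 317)               # WALInsertLock, ProcArrayLock, SInvalLock
net = saved - absorbed
print(saved, absorbed, net)          # 759 352 407
```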

We can generate some OProfile statistics, but most of the time saved is
probably absorbed into functions such as HeapTupleSatisfiesSnapshot and
PinBuffer, which seem to have a very high overhead.


-----Original Message-----
[mailto:[EMAIL PROTECTED] On Behalf Of Simon Riggs
Sent: Tuesday, September 12, 2006 1:37 AM
To: Tom Lane
Cc: Mark Wong; Bruce Momjian; PostgreSQL-development
Subject: Re: [HACKERS] Lock partitions

On Mon, 2006-09-11 at 11:29 -0400, Tom Lane wrote:
> Mark Wong <[EMAIL PROTECTED]> writes:
> > Tom Lane wrote:
> >> It would be nice to see some results from the OSDL tests with, say,
> >> 8, and 16 lock partitions before we forget about the point though.
> >> Anybody know whether OSDL is in a position to run tests for us?
> > Yeah, I can run some dbt2 tests in the lab.  I'll get started on it.

> > We're still a little bit away from getting the automated testing for

> > PostgreSQL going again though.
> Great, thanks.  The thing to twiddle is LOG2_NUM_LOCK_PARTITIONS in
> src/include/storage/lwlock.h.  You need a full backend recompile
> after changing it, but you shouldn't need to initdb, if that helps.

IIRC we did that already and the answer was 16...

  Simon Riggs             
  EnterpriseDB   http://www.enterprisedb.com

---------------------------(end of broadcast)---------------------------
TIP 6: explain analyze is your friend
