Tom,

Comparing the 4 lock vs 8 lock partition runs: with 4 LockMgr lock
partitions we spent a total of 652 seconds in lock management
(acquiring/releasing), and with 8 LockMgr lock partitions we spent a
total of 536 seconds. That's an improvement of 116 seconds, but the TPS
barely moved - only a 1.21 TPS gain.

The improvement in LockMgr processing is consumed by the next
bottleneck downstream as more work is let through. In this particular
case that's the WALInsertLock lock. The 4 LockMgr partition test spent
a total of 5868 seconds in WALInsertLock lock management, whereas the
8 LockMgr partition test spent 5945 seconds - an increase of 77
seconds. WALInsertLock isn't the only static (non-partitioned) lock
that increased, just the one with the most significant increase:
WALWriteLock increased by 12 seconds, ProcArrayLock by 8 seconds and
SInvalLock by 5 seconds. That brings the total time flowing to other
locks to 102 seconds.
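
Summing those deltas:

  WALInsertLock   +77 seconds
  WALWriteLock    +12 seconds
  ProcArrayLock    +8 seconds
  SInvalLock       +5 seconds
  ---------------------------
  Total          +102 seconds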

The locks are not the only part of the puzzle. As improvements are made
to various areas like the BufMapping and LockMgr lock partitions, other
parts of the system start to get exercised in ways that were not
possible in previous releases. We're still trying to get our arms around
all the functions that might become bottlenecks when other lock
contention is minimized.

And improvements are being made. The locking changes from 8.0.x to
8.1.x made a significant difference in scalability, and based on our
testing the current lock improvements in 8.2 have realized a further
~20% improvement over 8.1.x.

We added monitoring code to the LWLockAcquire and LWLockRelease
functions to track the total time taken to pass through each. So, if a
particular backend process takes 1 second to run through LWLockAcquire,
we count that as 1 second of lock acquisition time. Whether that
backend was spinning or waiting on a semaphore, it's 1 second that was
taken away from processing a statement/request. We could also add
separate timing for semaphore waits within LWLockAcquire, if that would
be a useful statistic.
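
To make the approach concrete, here is a minimal sketch of that kind of
instrumentation (the wrapper function, the array name and
MAX_TRACKED_LOCKS are placeholders only - our actual code lives inside
LWLockAcquire/LWLockRelease themselves rather than in a wrapper):

  #include <sys/time.h>
  #include "storage/lwlock.h"  /* LWLockId, LWLockMode, LWLockAcquire */

  /* Per-lock accumulated acquire time, in seconds. MAX_TRACKED_LOCKS
   * is just an illustrative bound. */
  #define MAX_TRACKED_LOCKS 1024
  static double acquire_secs[MAX_TRACKED_LOCKS];

  static void
  LWLockAcquireTimed(LWLockId lockid, LWLockMode mode)
  {
      struct timeval start, stop;

      gettimeofday(&start, NULL);
      LWLockAcquire(lockid, mode);  /* spinning and/or semaphore wait */
      gettimeofday(&stop, NULL);

      /* Count the whole pass through the acquire path, regardless of
       * whether the time went to spinning or to a semaphore wait. */
      acquire_secs[lockid] += (stop.tv_sec - start.tv_sec)
                            + (stop.tv_usec - start.tv_usec) / 1000000.0;
  }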

Let me know if there are any other tests or metrics that would be
useful.

David

-----Original Message-----
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, September 13, 2006 1:36 PM
To: Strong, David
Cc: PostgreSQL-development
Subject: Re: [HACKERS] Lock partitions 

"Strong, David" <[EMAIL PROTECTED]> writes:
> We have some results for you. We left the buffer partition locks at
> 128 as this did not seem to be a concern and we're still using 25
> backend processes. We ran tests for 4, 8 and 16 lock partitions.

> For 4 lock partitions, it took 620 seconds to acquire locks and 32
> seconds to release locks. The test produced 199.95 TPS.

> For 8 lock partitions, it took 505 seconds to acquire locks and 31
> seconds to release locks. The test produced 201.16 TPS.

> For 16 lock partitions, it took 362 seconds to acquire locks and 22
> seconds to release locks. The test produced 200.75 TPS.

> And, just for grins, using 128 buffer and 128 lock partitions, took
> 235 seconds to acquire locks and 22 seconds to release locks. The
> test produced 203.24 TPS.

[ itch... ]  I can't help thinking there's something wrong with this;
the wait-time measurements seem sane, but why is there essentially no
change in the TPS result?

The above numbers are only for the lock-partition LWLocks, right?
What are the totals --- that is, how much time is spent blocked
vs. processing overall?

                        regards, tom lane
