On 2014-09-25 09:51:17 -0400, Robert Haas wrote:
> On Tue, Sep 23, 2014 at 5:50 PM, Robert Haas <robertmh...@gmail.com> wrote:
> > The patch I attached the first time was just the last commit in the
> > git repository where I wrote the patch, rather than the changes that I
> > made on top of that commit.  So, yes, the results from the previous
> > message are with the patch attached to the follow-up.  I just typed
> > the wrong git command when attempting to extract that patch to attach
> > it to the email.
> Here are some more results.  TL;DR: The patch still looks good, but we
> should raise the number of buffer mapping partitions as well.

>  So I'm inclined to (a) push
> reduce-replacement-locking.patch and then also (b) bump up the number
> of buffer mapping locks to 128 (and increase MAX_SIMUL_LWLOCKS
> accordingly so that pg_buffercache doesn't get unhappy).

I'm happy with that. I don't think it's likely that a moderate increase
in the number of mapping lwlocks will be noticeably bad for any
workload.
One difference is that the total number of lwlock acquisitions will be a
bit higher, because with more partitions it's more likely for the old
and new buffer to fall into different partitions, requiring two
acquisitions instead of one. But that's not really significant.

The other difference is the number of cachelines touched. Currently, in
concurrent workloads, there's already lots of L1 cache misses around the
buffer mapping locks because they're exclusively owned by a different
core/socket. So, to actually make things worse, the increase would need
to lower overall cache hit rates, either by the locks not being in *any*
cache or by displacing other content.

That leads me to wonder: have you measured different, lower, numbers of
buffer mapping locks? 128 locks is, if we align them properly (as we
should), 8KB of memory. Common L1 cache sizes are around 32KB...


Andres Freund

 Andres Freund                     http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)