On Tue, Sep 23, 2014 at 10:31 AM, Robert Haas <robertmh...@gmail.com> wrote:
> [ review ]

Oh, by the way, I noticed that this patch breaks pg_buffercache.  If
we're going to have 128 lock partitions, we need to bump
MAX_SIMUL_LWLOCKS, since pg_buffercache acquires every buffer mapping
partition lock at once.

But this gets at another point: the way we're benchmarking this right
now, we're really conflating the effects of three different things:

1. Changing the locking regimen around the freelist and clocksweep.
2. Adding a bgreclaimer process.
3. Raising the number of buffer locking partitions.

I think it's pretty clear that #1 and #2 are a good idea.  #3 is a
mixed bag, and it might account for the regressions you saw on some
test cases.  Increasing the number of buffer mapping locks means that
those locks take up more cache lines, which could slow things down in
cases where there's no reduction in contention.  It also means that
the chances that the old and new buffer tags fall into the same
buffer mapping lock partition drop from 1/16 to 1/128, which means
about 5.4 additional lwlock acquire/release cycles per 100
allocations.  That's not a ton, but it's not free either.
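The arithmetic above can be checked directly.  Reading the argument as:
an eviction needs a second partition lock unless the old and new buffer
tags happen to hash to the same partition, which occurs with probability
1/P for P partitions.  Then going from 16 to 128 partitions adds
(1/16 - 1/128) * 100 ≈ 5.47 extra acquire/release cycles per 100
allocations.  A quick sketch (the probability model is my reading of the
email, not code from the patch):

```python
# Expected extra lwlock acquire/release cycles per 100 buffer
# allocations when the number of buffer mapping partitions grows.
# Model: remapping a buffer touches two partitions unless the old and
# new tags hash to the same one (probability 1/num_partitions), so the
# chance of needing a second acquire/release rises from (1 - 1/old)
# to (1 - 1/new).

def extra_cycles_per_100(old_parts, new_parts):
    # Difference in second-lock probability, times 100 allocations.
    return (1 / old_parts - 1 / new_parts) * 100

print(round(extra_cycles_per_100(16, 128), 2))  # → 5.47
```

which matches the "about 5.4" figure quoted above.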

Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)