Jeff Janes <jeff.ja...@gmail.com> writes:
> I think that the BufFreelistLock can be a contention bottleneck on a
> system with a lot of CPUs that do a lot of shared-buffer allocations
> which can be fulfilled by the OS buffer cache.
Really?  buffer/README says

	The buffer management policy is designed so that BufFreelistLock need
	not be taken except in paths that will require I/O, and thus will be
	slow anyway.

It's hard to see how it's going to be much of a problem if you're going
to be doing kernel calls as well.  Is the test case you're looking at
really representative of any common situation?

> 1) Would it be useful for BufFreelistLock to be partitioned, like
> BufMappingLock, or via some kind of clever "virtual partitioning" that
> could get the same benefit via another means?

Maybe, but you could easily end up with a net loss if the partitioning
makes buffer allocation significantly stupider (ie, higher probability
of picking a less-than-optimal buffer to recycle).

> For the clock sweep algorithm, I think you could access
> nextVictimBuffer without any type of locking.

This is wrong, mainly because you wouldn't have any security against
two processes decrementing the usage count of the same buffer because
they'd fetched the same value of nextVictimBuffer.  That would probably
happen often enough to severely compromise the accuracy of the usage
counts and thus the accuracy of the LRU eviction behavior.  See above.

It might be worth looking into actual partitioning, so that more than
one processor can usefully be working on the usage count management.
But simply dropping the locking primitives isn't going to lead to
anything except severe screw-ups.

			regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers