Merlin Moncure <mmonc...@gmail.com> writes:
> A potential issue with this line of thinking is that your pin delay
> queue could get highly pressured by outer portions of the query (as in
> the OP's case) that will get little or no benefit from the delayed
> pin.  But choosing a sufficiently sized drain queue would work for
> most reasonable cases assuming 32 isn't enough?  Why not something
> much larger, for example the lesser of 1024, (NBuffers * .25) /
> max_connections?  In other words, for you to get much benefit, you
> have to pin the buffer sufficiently more than 1/N times among all
> buffers.
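[For concreteness, the sizing rule quoted above amounts to the following
standalone sketch.  The function name pin_delay_queue_size is hypothetical,
not existing PostgreSQL code; the arguments stand in for the NBuffers and
max_connections settings.]

    /*
     * Sketch of the proposed heuristic: cap the per-backend pin delay
     * queue at the lesser of 1024 and (NBuffers * 0.25) / max_connections.
     */
    #include <stdio.h>

    static int
    pin_delay_queue_size(int NBuffers, int max_connections)
    {
        int cap = (int) ((NBuffers * 0.25) / max_connections);

        if (cap > 1024)
            cap = 1024;
        if (cap < 1)            /* never size the queue below one slot */
            cap = 1;
        return cap;
    }

    int
    main(void)
    {
        /* e.g. shared_buffers = 16384 pages (128MB at 8kB), 100 connections */
        printf("%d\n", pin_delay_queue_size(16384, 100));  /* prints 40 */
        return 0;
    }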
Allowing each backend to pin a large fraction of shared buffers sounds
like a seriously bad idea to me.  That's just going to increase
thrashing of what remains.

More generally, I don't believe that we have any way to know which
buffers would be good candidates to keep pinned for a long time.
Typically, we don't drop the pin in the first place if we know we're
likely to touch that buffer again soon.  btree root pages might be an
exception, but I'm not even convinced of that one.

			regards, tom lane