On 9 April 2014 15:09, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Andres Freund <and...@2ndquadrant.com> writes:
>> On 2014-04-09 18:13:29 +0530, Pavan Deolasee wrote:
>>> An orthogonal issue I noted is that we never check for overflow in the ref
>>> count itself. While I understand overflowing int32 counter will take a
>>> large number of pins on the same buffer, it can still happen in the worst
>>> case, no ? Or is there a theoretical limit on the number of pins on the
>>> same buffer by a single backend ?
>
>> I think we'll die much earlier, because the resource owner array keeping
>> track of buffer pins will be larger than 1GB.
>
> The number of pins is bounded, more or less, by the number of scan nodes
> in your query plan. You'll have run out of memory trying to plan the
> query, assuming you live that long.
ISTM that there is a strong possibility that the last buffer pinned will
be the next buffer to be unpinned. We can use that to optimise this. If
we store the last 8 buffers pinned in the fast array, we are very likely
to hit the right buffer just by scanning the array. So if we treat the
fast array as a circular LRU, we get

* pinning a new buffer when the array has an empty slot is O(1)
* pinning a new buffer when the array is full causes us to move the LRU
  entry into the hash table and then reuse that element
* unpinning a buffer is most often O(1), which then leaves an empty slot
  for the next pin

Doing it that way means all usage is O(1), apart from when we hold more
than 8 pins concurrently and the usage does not follow the regular
pattern. (A sketch of this scheme is at the end of this mail.)

> The resource managers are interesting to bring up in this context.
> That mechanism didn't exist when PrivateRefCount was invented.
> Is there a way we could lay off the work onto the resource managers?
> (I don't see one right at the moment, but I'm under-caffeinated still.)

Me neither. Good idea, but I think it would take a lot of refactoring
to do that.

We need to do something about this. We have complaints (via Heikki)
that we are using too much memory in idle backends and small configs,
plus we know we are using too much memory on larger servers. Reducing
the memory usage here will reduce CPU L2 cache churn as well as
increase available RAM.

-- 
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
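
To make the scheme above concrete, here is a minimal, self-contained
sketch in C of the circular fast array with spill-to-overflow behaviour.
All names here (PinBufferLocal, spill_one, clock_hand, etc.) are
illustrative, not actual PostgreSQL code; the fixed-size overflow array
stands in for the hash table a real implementation would use, and
backend details (resource owners, dynahash, error handling) are omitted.

#include <stdint.h>

#define FAST_ARRAY_SIZE 8
#define OVERFLOW_SIZE   64      /* stand-in for a real hash table */

typedef uint32_t Buffer;        /* 0 plays the role of InvalidBuffer */

typedef struct PrivateRefCountEntry
{
    Buffer      buffer;         /* 0 means the slot is unused */
    int32_t     refcount;
} PrivateRefCountEntry;

static PrivateRefCountEntry fast[FAST_ARRAY_SIZE];
static PrivateRefCountEntry overflow[OVERFLOW_SIZE];
static int  clock_hand = 0;     /* circular hand approximating LRU order */

/* Spill the entry under the clock hand into the overflow store. */
static void
spill_one(void)
{
    int         i;

    for (i = 0; i < OVERFLOW_SIZE; i++)
    {
        if (overflow[i].buffer == 0)
        {
            overflow[i] = fast[clock_hand];
            fast[clock_hand].buffer = 0;
            return;
        }
    }
    /* a real implementation would grow a hash table here */
}

/* Pin: the common case is a hit in the 8-entry fast array. */
void
PinBufferLocal(Buffer buf)
{
    int         i;
    int         free_slot = -1;

    for (i = 0; i < FAST_ARRAY_SIZE; i++)
    {
        if (fast[i].buffer == buf)
        {
            fast[i].refcount++;
            return;
        }
        if (fast[i].buffer == 0)
            free_slot = i;
    }

    /* not in the fast array; it may have been spilled earlier */
    for (i = 0; i < OVERFLOW_SIZE; i++)
    {
        if (overflow[i].buffer == buf)
        {
            overflow[i].refcount++;
            return;
        }
    }

    if (free_slot < 0)
    {
        /* fast array full: evict the oldest entry, reuse its slot */
        spill_one();
        free_slot = clock_hand;
        clock_hand = (clock_hand + 1) % FAST_ARRAY_SIZE;
    }
    fast[free_slot].buffer = buf;
    fast[free_slot].refcount = 1;
}

/* Unpin: usually a fast-array hit, which frees a slot for the next pin. */
void
UnpinBufferLocal(Buffer buf)
{
    int         i;

    for (i = 0; i < FAST_ARRAY_SIZE; i++)
    {
        if (fast[i].buffer == buf)
        {
            if (--fast[i].refcount == 0)
                fast[i].buffer = 0;
            return;
        }
    }
    for (i = 0; i < OVERFLOW_SIZE; i++)
    {
        if (overflow[i].buffer == buf)
        {
            if (--overflow[i].refcount == 0)
                overflow[i].buffer = 0;
            return;
        }
    }
}

Note the clock hand only approximates true LRU (it does not reorder on
hits), but given the observation that the most recently pinned buffer is
usually the next one unpinned, most unpins free a slot before the hand
ever needs to evict anything, so the common path never touches the
overflow store at all.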