On Wed, Apr 6, 2011 at 6:32 PM, Kevin Grittner <kevin.gritt...@wicourts.gov> wrote:
> Robert Haas <robertmh...@gmail.com> wrote:
>> The real fix for this problem is probably to have the ability to
>> actually return memory to the shared pool, rather than having
>> everyone grab as they need it until there's no more and never give
>> back.  But that's not going to happen in 9.1, so the question is
>> whether this is a sufficiently serious problem that we ought to
>> impose the proposed stopgap fix between now and whenever we do
>> that.
>
> There is a middle course between leaving the current approach of
> preallocating half the maximum size and leaving the other half up
> for grabs and the course Heikki proposes of making the maximum a
> hard limit.  I submitted a patch to preallocate the maximum, so a
> request for a particular HTAB object will never get "out of shared
> memory" unless it is past its maximum:
>
> http://archives.postgresql.org/message-id/4d948066020000250003c...@gw.wicourts.gov
>
> That would leave some extra which is factored into the calculations
> up for grabs, but each table would be guaranteed at least its
> maximum number of entries.  This seems pretty safe to me, and not
> very invasive.  We could always revisit this in 9.2 if that's not
> good enough.
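For anyone skimming the thread, the knob in question is the init_size/max_size pair passed to ShmemInitHash(): entries up to init_size are carved out at postmaster start, while entries between init_size and max_size are allocated lazily from whatever shared memory is still unclaimed. A rough sketch of the contrast Kevin describes is below; the wrapper function, the "/ 2" split, and the use of the predicate-lock table name and types are illustrative only, not taken from his patch:

    /* Sketch only: in reality exactly one of the two ShmemInitHash()
     * calls below would exist; both are shown to contrast the sizing. */
    static HTAB *PredicateLockTargetHash;   /* illustrative stand-in */

    static void
    init_predicate_lock_target_hash(long max_table_size)
    {
        HASHCTL     info;

        MemSet(&info, 0, sizeof(info));
        info.keysize = sizeof(PREDICATELOCKTARGETTAG);
        info.entrysize = sizeof(PREDICATELOCKTARGET);
        info.hash = tag_hash;

        /*
         * Current approach: preallocate only part of max_size.  Requests
         * beyond init_size compete with every other late allocation for
         * the leftover shared memory, which is where "out of shared
         * memory" can appear even below the table's nominal maximum.
         */
        PredicateLockTargetHash = ShmemInitHash("PREDICATELOCKTARGET hash",
                                                max_table_size / 2, /* init_size */
                                                max_table_size,     /* max_size */
                                                &info,
                                                HASH_ELEM | HASH_FUNCTION);

        /*
         * Proposed approach: pass init_size == max_size, so every entry up
         * to the maximum is preallocated at startup and cannot be starved
         * out by other shared-memory consumers.
         */
        PredicateLockTargetHash = ShmemInitHash("PREDICATELOCKTARGET hash",
                                                max_table_size,     /* init_size == max_size */
                                                max_table_size,
                                                &info,
                                                HASH_ELEM | HASH_FUNCTION);
    }
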
OK, I agree.  We certainly can't have a temporary demand for
predicate locks starve out heavyweight locks for the rest of the
postmaster lifetime, or vice versa.  So we need to do at least that
much.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company