Tom Lane wrote:
> Jan Wieck <> writes:
> > Attached is a complete rework of the fix from scratch, based on Tom's 
> > suggestion.
> > The code now maintains a double linked list as suggested, but only uses 
> > it to mark all currently valid entries as invalid when hashvalue == 0. 
> > If a specific entry is invalidated it performs a hash lookup for maximum 
> > efficiency in that case.
> That does not work; you're ignoring the possibility of hashvalue
> collisions, and for that matter not considering that the hash value
> is not equal to the hash key.  I don't think your code is invalidating
> the right entry at all during a foreign key constraint update, and
> it certainly cannot do the right thing if there's a hash value collision.

Would it make sense to remove only the few oldest entries, instead
of all of them?  As is, I think this causes a storm of reloads every
once in a while, if the number of FKs in the system is large enough.
Maybe on a cache hit we could push the entry to the head of the list,
and then remove N entries from the back of the list when the threshold
is reached.
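
That scheme could be sketched roughly like this. This is a minimal standalone illustration of the policy described above (move to head on hit, evict a batch from the tail when the threshold is exceeded), not PostgreSQL code; the names (Cache, CacheEntry, cache_hit, cache_insert) and the THRESHOLD/EVICT_N constants are hypothetical:

```c
/* Minimal LRU sketch: entries live in a doubly linked list; a cache
 * hit moves the entry to the head, so the tail always holds the least
 * recently used entries.  When the count exceeds THRESHOLD, the
 * EVICT_N entries at the tail are removed in one batch. */
#include <assert.h>
#include <stdlib.h>

#define THRESHOLD 4   /* hypothetical cap on cached entries */
#define EVICT_N   2   /* hypothetical eviction batch size ("N") */

typedef struct CacheEntry
{
    int key;
    struct CacheEntry *prev, *next;
} CacheEntry;

typedef struct
{
    CacheEntry *head, *tail;
    int         count;
} Cache;

static void
unlink_entry(Cache *c, CacheEntry *e)
{
    if (e->prev) e->prev->next = e->next; else c->head = e->next;
    if (e->next) e->next->prev = e->prev; else c->tail = e->prev;
    c->count--;
}

static void
push_head(Cache *c, CacheEntry *e)
{
    e->prev = NULL;
    e->next = c->head;
    if (c->head) c->head->prev = e; else c->tail = e;
    c->head = e;
    c->count++;
}

/* On a cache hit, move the entry to the head of the list. */
static void
cache_hit(Cache *c, CacheEntry *e)
{
    unlink_entry(c, e);
    push_head(c, e);
}

/* Insert a new entry; once over THRESHOLD, evict EVICT_N entries
 * from the back of the list (the coldest ones). */
static CacheEntry *
cache_insert(Cache *c, int key)
{
    CacheEntry *e = malloc(sizeof(CacheEntry));

    e->key = key;
    push_head(c, e);
    if (c->count > THRESHOLD)
    {
        for (int i = 0; i < EVICT_N && c->tail; i++)
        {
            CacheEntry *victim = c->tail;

            unlink_entry(c, victim);
            free(victim);
        }
    }
    return e;
}
```

The point of the batch eviction is that a session that crosses the threshold pays a small, bounded reload cost for a few cold entries, rather than rebuilding the whole cache at once.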

I think the assumption here is that most sessions will not reach the
threshold at all, except for those that access all tables, such as
pg_dump -- that is, most sessions are short-lived.  But in setups
involving connection poolers, eventually all sessions would access all
tables, and thus be subject to the storm issue.

(Of course, this is all hypothetical.)

Álvaro Herrera      
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
