On Mon, 2009-12-14 at 15:24 -0500, Tom Lane wrote:
> Simon Riggs <si...@2ndquadrant.com> writes:
> > On Mon, 2009-12-14 at 20:32 +0200, Heikki Linnakangas wrote:
> >>> I have ensured that they are always the same size, by definition, so no
> >>> need to check.
> >>
> >> How did you ensure that?  The hash table has no hard size limit.
>
> > The hash table is in shared memory and the entry size is fixed. My
> > understanding was that this meant the hash table was fixed in size and
> > could not grow beyond the allocation. If that assumption was wrong, then
> > yes we could get an error. Is it?
>
> Entirely.  The only thing the hash table size enters into is the sizing
> of overall shared memory --- different hash tables then consume space
> from the common pool, which includes not only the computed space
> requirements but a pretty hefty slop overhead.  You can go beyond the
> original requested space if there is any slop left.
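To spell out the assumption I had been making, with an illustrative
snippet (not the actual patch code, just the usual ShmemInitHash()
pattern; the names and sizes are placeholders): I had read the max_size
argument as a hard cap on the number of entries.

    HASHCTL     info;

    memset(&info, 0, sizeof(info));
    info.keysize = sizeof(TransactionId);   /* placeholder key type */
    info.entrysize = sizeof(MyHashEntry);   /* fixed-size entries */
    info.hash = tag_hash;

    /*
     * max_size feeds into the shared memory size estimate at startup;
     * per the above, it is not a hard limit on the number of entries.
     */
    MyHash = ShmemInitHash("my hash table",
                           max_size,        /* init_size */
                           max_size,        /* max_size */
                           &info,
                           HASH_ELEM | HASH_FUNCTION);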
OK, thanks.

> For a number of shared hashtables that actually have a fixed set of
> entries, we avoid the risk of unexpected out-of-memory by forcing all
> the entries to come into existence during startup.  If your table
> doesn't work that way then you cannot be sure of the exact point where
> it will get an out-of-memory failure.

The data structure was originally a list of fixed size, but is now a
shared hash table.

What is the best way of restricting the hash table to a maximum size?
Your last para makes me think there is a way, but I can't see it
directly.

If there isn't a facility to do this and I need to add code, should I
add optional code to dynahash.c to track the size, or should I add that
to the data-structure code that uses the hash functions (so, internally
or externally)? A sketch of what I mean by the external option is in
the PS below.

--
 Simon Riggs           www.2ndQuadrant.com
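PS. For concreteness, here is the sort of thing I mean by the external
option. This is an untested sketch only: MyHash, MyHashEntry and
MY_MAX_ENTRIES are placeholder names, it relies on the entry count that
dynahash already maintains, and it assumes the caller holds the lock
protecting the table.

    static MyHashEntry *
    MyHashInsert(TransactionId xid)
    {
        bool        found;

        /* Refuse to grow past the nominal maximum size. */
        if (hash_get_num_entries(MyHash) >= MY_MAX_ENTRIES)
            elog(ERROR, "too many entries in my hash table");

        return (MyHashEntry *) hash_search(MyHash, &xid, HASH_ENTER, &found);
    }

The internal option would presumably mean teaching dynahash itself to
honour max_size as a hard limit; the approach you describe above, of
creating all the entries at startup, would avoid the per-insert check
altogether.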