Michael Paquier <michael.paqu...@gmail.com> writes:
> On Fri, Mar 25, 2016 at 9:17 PM, Robert Haas <robertmh...@gmail.com> wrote:
>> On Thu, Mar 24, 2016 at 9:50 AM, Aleksander Alekseev
>> <a.aleks...@postgrespro.ru> wrote:
>>> Currently this procedure has two arguments --- init_size and max_size.
>>> But since shared hash tables have fixed size there is little sense to
>>> pass two arguments. In fact currently ShmemInitHash is always called
>>> with init_size == max_size with only one exception, InitLocks procedure
>>> (see lock.c), which I believe is actually a bug.

>> No, I think we left it that way on purpose.  I don't remember the
>> discussion exactly, but I don't think it's hurting anything.

> My instinct is telling me that this is useful in this shape for
> plugins, and that it has been designed on purpose for that.

Yes, it is that way on purpose.  The comments for ShmemInitHash
explain why:

 * init_size is the number of hashtable entries to preallocate.  For a table
 * whose maximum size is certain, this should be equal to max_size; that
 * ensures that no run-time out-of-shared-memory failures can occur.

The sizes of the lock hashtables are *not* certain, and in particular
we're guessing as to the ratio of LOCK entries to PROCLOCK entries.
By setting max_size larger than init_size, we leave a pool of
initially-unallocated shared memory that can be doled out to either
hashtable on demand, thus adapting to whatever the actual load is.

(Of course, it'd be even better if such memory could be given back
when the peak demand passes.  But the lack of a full-on allocator
for shared memory is not a reason to dumb down ShmemInitHash.)

In short: the error in Aleksander's argument is the assumption that
shared hashtables have fixed size.  That's simply false.

                        regards, tom lane

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)