Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Is hashtable overhead all that large?  Each table could be made
> initially size-of-current-table/N entries.  One problem is that
> currently the memory freed from a hashtable is not put back into shmem
> freespace, is it?
Yeah; the problem is mainly that we'd have to allocate extra space to
allow for unevenness of usage across the multiple hashtables.  It's hard
to judge how large the effect would be without testing, but I think that
this problem would inhibit us from having dozens or hundreds of separate
partitions.

A possible response is to try to improve dynahash.c to make its memory
management more flexible, but I'd prefer not to get into that unless it
becomes really necessary.  A shared freespace pool would create a
contention bottleneck of its own...

			regards, tom lane