Thomas Munro <thomas.mu...@enterprisedb.com> writes:
> On Tue, Nov 29, 2016 at 6:27 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> We could imagine providing an mmgr API function along the lines of
>> "adjust this request size to the nearest thing that can be allocated
>> efficiently".  That would avoid the need for callers to know about
>> aset.c overhead explicitly.  I'm not sure how it could deal with
>> platform-specific malloc vagaries though :-(
> Someone pointed out to me off-list that jemalloc's next size class
> after 32KB is in fact 40KB by default[1].  So PostgreSQL uses 25% more
> memory for hash joins than it thinks it does on some platforms.  Ouch.
> It doesn't seem that crazy to expose aset.c's overhead size to client
> code, does it?  Most client code wouldn't care, but things that are
> doing something closer to memory-allocator work themselves, like
> dense_alloc, could care.  Such code could deal with its own overhead
> itself, and subtract aset.c's overhead using a macro.

Seeing that we now have several allocators with different overheads,
I think that exposing this directly to clients is exactly what not to do.
I still like the idea I sketched above of a context-type-specific function
to adjust a request size to something that's efficient.

But there's still the question of how we know what an efficient-sized
malloc request is.  Is there good reason to suppose that powers of 2
are OK?

			regards, tom lane