On Wed, 13 Jul 2022 at 05:44, Andres Freund <and...@anarazel.de> wrote:
> On 2022-07-12 20:22:57 +0300, Yura Sokolov wrote:
> > I don't get why "large chunk" needs additional fields for size and
> > offset. Large allocation sizes are certainly rounded to page size,
> > and allocations which don't fit in 1GB we could easily round to 1MB.
> > Then we could simply store `size>>20`. It will limit MaxAllocHugeSize
> > to `(1<<(30+20))-1` - 1PB. Doubtfully we will deal with such huge
> > allocations in the near future.
>
> What would we gain by doing something like this? The storage density
> loss of storing an exact size is smaller than what you propose here.
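For concreteness, the encoding being proposed above would look roughly
like the following. This is just a sketch with made-up names, not
something from any patch:

#include <stdint.h>

#define HUGE_ROUND_SHIFT 20    /* round huge sizes up to 1MB */
#define HUGE_ROUND (UINT64_C(1) << HUGE_ROUND_SHIFT)

/* store only the megabyte-granular size, e.g. in a 30-bit field */
static inline uint64_t
huge_size_encode(uint64_t size)
{
    return (size + HUGE_ROUND - 1) >> HUGE_ROUND_SHIFT;
}

/* recover the (rounded-up) allocation size */
static inline uint64_t
huge_size_decode(uint64_t stored)
{
    return stored << HUGE_ROUND_SHIFT;
}

With 30 stored bits, that caps the representable size at
(1 << (30 + 20)) - 1 bytes, i.e. just under 1 petabyte, as Yura says.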
I do agree that the 16-byte additional header overhead for allocations
of 1GB and over is not really worth troubling too much over. However,
if there were some way to make it so we always had an 8-byte header, it
would simplify some of the code in places such as AllocSetFree(). For
example, (ALLOC_BLOCKHDRSZ + hdrsize + chunksize) could be folded at
compile time if hdrsize were a known constant.

I did consider that, in all cases where the allocation is above
allocChunkLimit, the chunk is put on a dedicated block, and in fact,
the blockoffset is always the same for those. I wondered if we could
use the full 60 bits for the chunksize in those cases. The reason I
didn't pursue that is:

#define MaxAllocHugeSize (SIZE_MAX / 2)

That's 63 bits, so 60 isn't enough. Yeah, we likely could reduce that
without upsetting anyone. It feels like it'll be a while before not
being able to allocate a chunk of memory larger than 1024 petabytes
becomes an issue, although I do hope to grow old enough to one day come
back here and laugh at that.
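To spell out that bit budget, here's a quick standalone check (just a
sketch knocked up for illustration, not anything from the tree):

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
    uint64_t field_max = (UINT64_C(1) << 60) - 1;  /* 60-bit chunksize */
    uint64_t huge_max = SIZE_MAX / 2;              /* MaxAllocHugeSize */

    /* on a 64-bit platform this prints "fits: no"; 63 bits are needed */
    printf("fits: %s\n", huge_max <= field_max ? "yes" : "no");
    return 0;
}

David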