In memory contexts, block and chunk sizes are limited by certain
upper bounds, for example MEMORYCHUNK_MAX_BLOCKOFFSET and
MEMORYCHUNK_MAX_VALUE. Both values are 1 less than 1GB.
This means memory contexts only ever deal with blocks/chunks smaller
than 1GB, and such sizes fit in 32 bits. Currently the "Size" type,
which is 64-bit, is used, but a 32-bit integer would be enough to
store any value less than 1GB.

size_t (= Size) is the correct type in C to store the size of an object in memory. This is partially a self-documentation issue: If I see size_t in a function signature, I know what is intended; if I see uint32, I have to wonder what the intent was.
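
To illustrate with a hypothetical pair of allocators (made up for this
example, not taken from the tree):

    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* size_t documents intent: the parameter is the size of an object in memory. */
    void *alloc_buffer(size_t size)
    {
        return malloc(size);
    }

    /* uint32 leaves the reader wondering what the value represents. */
    void *alloc_buffer_narrow(uint32_t size)
    {
        return malloc(size);
    }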

You could make an argument that using shorter types would save space for some internal structs, but then you'd have to show in more detail where and why that would be beneficial. (But again, self-documentation: If one were to do that, I would argue for introducing a custom type like pg_short_size_t.)
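
A sketch of what that might look like (pg_short_size_t and the struct
below are hypothetical and do not exist in the sources):

    #include <stdint.h>

    /* Hypothetical: a named type for sizes known to stay below 1GB. */
    typedef uint32_t pg_short_size_t;

    /* Hypothetical internal struct where the narrower type would save space. */
    typedef struct ChunkHeader
    {
        pg_short_size_t chunk_size;    /* always < 1GB by construction */
        pg_short_size_t block_offset;  /* always < 1GB by construction */
    } ChunkHeader;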

Absent any strong performance argument, I don't see the benefit of this change. People might well want to experiment with MEMORYCHUNK_... settings larger than 1GB.


