On 04/04/2025 16:40, Дарья Шанина wrote:
> Hello everyone!
> I have a question.
>
> What would be better for the function autoprewarm_dump_now in the case where we need to allocate memory that exceeds 1 GB:

Hmm, so if I counted right, sizeof(BlockInfoRecord) == 20 bytes, which means that you can fit about 409 GB worth of buffers in a 1 GB allocation. So autoprewarm will currently not work with shared_buffers > 409 GB. That's indeed quite unfortunate.
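
Spelling out the arithmetic behind that estimate (assuming the default 8 kB BLCKSZ; MaxAllocSize is the 1 GB palloc cap):

    MaxAllocSize / sizeof(BlockInfoRecord) = ~1 GB / 20 bytes
                                           = ~53.7 million records
    53.7 million buffers * 8 kB per buffer = ~409 GB of shared_buffers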

> 1) allocate enough memory for the entire shared_buffers array (1..NBuffers) using palloc_extended;

That would be a pretty straightforward fix.
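
Untested sketch of what I mean, in apw_dump_now() (assuming the array variable is still called block_info_array; MCXT_ALLOC_HUGE lifts the MaxAllocSize cap):

    block_info_array = (BlockInfoRecord *)
        palloc_extended(sizeof(BlockInfoRecord) * (Size) NBuffers,
                        MCXT_ALLOC_HUGE);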

> 2) allocate the maximum currently possible (1 GB) using an ordinary palloc.

That'd put an upper limit on how much is prewarmed. It'd be a weird limitation. And prewarming matters the most with large shared_buffers.

3) Don't pre-allocate the array; write it out in a streaming fashion.

Unfortunately the file format doesn't make that easy: the number of entries is stored at the beginning of the file. You could count the entries beforehand, but the buffers can change concurrently. You could write a placeholder first and seek back to the beginning of the file to fill in the real number at the end; the problem is that the number of bytes needed for the count itself varies. I suppose we could write some spaces as placeholders to accommodate the maximum count.
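
An untested sketch of that idea, assuming the header keeps its current "<<%d>>" shape: pad the count to a fixed width ("%10d" is wide enough for INT_MAX), and fscanf's %d will skip the padding spaces on load:

    /* reserve a fixed-width slot for the count up front */
    fprintf(file, "<<%10d>>\n", 0);

    /* ... stream out one line per BlockInfoRecord, counting them
     * in num_blocks as we go ... */

    /* then seek back and overwrite the placeholder, same width */
    fseek(file, 0, SEEK_SET);
    fprintf(file, "<<%10d>>\n", num_blocks);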

In apw_load_buffers(), we also load the file into (DSM) memory. There's no similar 1 GB limit in dsm_create(), but I think it's a bit unfortunate that the array needs to be allocated upfront upon loading.
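
For comparison, the load side sizes its segment upfront roughly like this (paraphrasing apw_load_buffers() from memory; dsm_create() takes a plain Size, so MaxAllocSize doesn't apply there):

    seg = dsm_create(mul_size(sizeof(BlockInfoRecord), num_elements), 0);
    blkinfo = (BlockInfoRecord *) dsm_segment_address(seg);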

In short, ISTM the easy answer here is "use palloc_extended". But there's a lot of room for further optimizations.

--
Heikki Linnakangas
Neon (https://neon.tech)

