Hi,

On 2024-02-17 23:40:51 +0100, Matthias van de Meent wrote:
> > 5. Re-map the shared_buffers when needed.
> >
> > Between transactions, a backend should not hold any buffer pins. When
> > there are no pins, you can munmap() the shared_buffers and mmap() it at
> > a different address.
I hadn't quite realized that we don't seem to rely on shared_buffers
having a specific address across processes. That does seem to make it
more viable to remap mappings in backends.

However, I don't think this works with mmap(MAP_ANONYMOUS) - as long as
we are using the process model. To my knowledge there is no way to get
the same mapping into multiple already existing processes. Even
mmap()ing /dev/zero after sharing file descriptors across processes
doesn't work, if I recall correctly. We would have to use sysv/posix
shared memory or such (or mmap() of files in tmpfs) for the shared
buffers allocation.

> This can quite realistically fail to find an unused memory region of
> sufficient size when the heap is sufficiently fragmented, e.g. through
> ASLR, which would make it difficult to use this dynamic
> single-allocation shared_buffers in security-hardened environments.

I haven't seen fragmentation anywhere close to that bad on 64bit
machines so far - have you?

Most implementations of ASLR randomize mmap locations across multiple
runs of the same binary, not within the same binary. There are
out-of-tree Linux patches that make mmap() randomize every single
allocation, but I am not sure that we ought to care about such things.

Even if we were to care, on 64bit platforms it doesn't seem likely that
we'd run out of space that quickly. AMD64 had 48 bits of virtual address
space from the start, and on recent CPUs that has grown to 57 bits [1];
that's a lot of space.

And if you do run out of VM space, wouldn't that also affect lots of
other things, like mmap() for malloc?

Greetings,

Andres Freund

[1] https://en.wikipedia.org/wiki/Intel_5-level_paging
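
PS: For concreteness, here is a minimal sketch (not Postgres code; the
segment name is made up, error handling is omitted, and in reality the
fd would be created in the postmaster and inherited via fork()) of the
kind of fd-backed remapping I mean. Because the memory is backed by the
fd, unlike MAP_ANONYMOUS, an existing process can drop the mapping and
re-create it at a different address without losing the contents:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t sz = (size_t) 1 << 30;   /* pretend 1 GiB of shared buffers */

    /* create the segment once (postmaster), share the fd with backends */
    int fd = shm_open("/pg_shbuf_demo", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, (off_t) sz);

    /* backend: initial mapping, kernel picks the address */
    char *buf = mmap(NULL, sz, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    printf("mapped at %p\n", (void *) buf);

    /* later, with no buffer pins held: drop the mapping and re-create it;
     * the new mapping sees the same contents, even at a new address */
    munmap(buf, sz);
    buf = mmap(NULL, sz, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    printf("remapped at %p\n", (void *) buf);

    shm_unlink("/pg_shbuf_demo");
    return 0;
}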