Thanks all for digging into this problem.

AFAIU the problem is not related to the memory settings in postgresql.conf; it is the kernel that,
for whatever reason, reports ENOMEM. Correct?

On 2022-11-14 22:54, Christoph Moench-Tegeder wrote:
## klaus.mailingli...@pernau.at (klaus.mailingli...@pernau.at):

On several servers we see the error message: PANIC:  could not flush
dirty data: Cannot allocate memory

As far as I can see, that "could not flush dirty data" appears a total of
three times in the code - there are other places where PostgreSQL could
PANIC on fsync()-and-stuff-related issues, but they have different
messages.
Of these three places, there's a sync_file_range(), a posix_fadvise()
and an msync(), all in src/backend/storage/file/fd.c. "Cannot allocate
memory" would be ENOMEM, which posix_fadvise() does not return (as per
its docs). So this would be sync_file_range(), which could run out
of memory (as per the manual), or msync(), where ENOMEM actually means
"The indicated memory (or part of it) was not mapped". Both cases are
somewhat WTF for this setup.
What filesystem are you running?

The filesystem is ext4. The VM technology is mixed: VMware, KVM and Xen PV. The kernel is 5.15.0-52-generic.
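
For reference, here is a minimal standalone sketch of the flush-and-check pattern Christoph describes. It is illustrative only and assumes a Linux system with sync_file_range(); it is not the actual code from src/backend/storage/file/fd.c, and the "testfile" path is just a placeholder:

/*
 * Sketch: ask the kernel to write back dirty pages for a file with
 * sync_file_range() and report the errno if the call fails. An ENOMEM
 * here comes from the kernel, independent of shared_buffers/work_mem.
 */
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "testfile";
    int         fd = open(path, O_RDWR);

    if (fd < 0)
    {
        perror("open");
        return EXIT_FAILURE;
    }

    /* nbytes == 0 means "from offset to end of file" per the man page. */
    if (sync_file_range(fd, 0, 0, SYNC_FILE_RANGE_WRITE) != 0)
    {
        /* This is the class of failure seen in our logs. */
        fprintf(stderr, "could not flush dirty data: %s\n", strerror(errno));
        close(fd);
        return EXIT_FAILURE;
    }

    close(fd);
    return EXIT_SUCCESS;
}

If something like this returns ENOMEM on these machines, that would confirm the failure is in the kernel's writeback path rather than in PostgreSQL.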

We have not seen this with Ubuntu 18.04 and 20.04 (although we might simply not have noticed it).

I guess upgrading to PostgreSQL 13/14/15 would not help, as the problem happens in the kernel.

Do you have any advice on how to proceed? Should I look out for certain kernel changes, either in the kernel itself or in the ext4 changelog?

Thanks
Klaus
