On Fri, Aug 25, 2023 at 8:35 AM Stephen Frost <sfr...@snowman.net> wrote:

> Greetings,
>
> This is getting a bit far afield in terms of this specific thread, but
> there's an ongoing effort to give PG administrators knobs to be able to
> control how much actual memory is used rather than depending on the
> kernel to actually tell us when we're "out" of memory.  There'll be new
> patches for the September commitfest posted soon.  If you're interested
> in this issue, it'd be great to get more folks involved in review and
> testing.
>

Noticed I missed this.  I'm interested.  Test #1 would be to set the memory
limit to about the maximum physical memory available, maybe a hair under,
turn off swap, and see what happens under various dynamic load situations.
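
Roughly, something like the sketch below (Python, just for illustration) is
what I have in mind for sizing the limit a hair under physical RAM.  It
assumes swap has already been disabled separately (e.g. "swapoff -a"), and
the GUC name is only a placeholder since I haven't looked at the latest
patch to see what the knob is actually called:

#!/usr/bin/env python3
# Rough sketch: suggest a memory-limit setting just under physical RAM.
# "max_total_memory" below is a placeholder name, not necessarily what the
# patch set calls its GUC.

def mem_total_kb() -> int:
    """Read MemTotal from /proc/meminfo (Linux only)."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1])  # value is reported in kB
    raise RuntimeError("MemTotal not found")

if __name__ == "__main__":
    total_mb = mem_total_kb() // 1024
    # "a hair under" the physical max: leave ~3% headroom for the kernel etc.
    suggested_mb = int(total_mb * 0.97)
    print(f"physical RAM: {total_mb} MB")
    print(f"suggested: max_total_memory = {suggested_mb}MB  # placeholder GUC name")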

Disabling overcommit is not a practical solution in my experience; it moves
instability from one place to another and seems to make problems appear in
a broader set of situations.  For zero-downtime platforms it has a place, but
I would tend to roll the dice on a reboot even for directly user-facing
applications, given that a reboot can provide relief from systemic conditions.

My unsophisticated hunch is that postgres and the kernel are not on the
same page about memory somehow, and that the multi-process architecture
might be contributing to that issue.  Of course, viewing rearchitecture
skeptically and realistically is a good idea given the effort and risks.

I guess, in summary, I would personally rate things like better management
of resource tradeoffs, better handling of transient demands, predictable
failure modes, and stability in dynamic workloads over things like better
performance in extremely high concurrency situations.  Others might think
differently for objectively good reasons.

merlin
