Andres Freund wrote:
> On 2018-03-07 08:01:38 -0300, Alvaro Herrera wrote:
> > I wonder if this is just because we refuse to acknowledge the notion of
> > a connection pooler. If we did, and the pooler told us "here, this
> > session is being given back to us by the application, we'll keep it
> > around until the next app comes along", could we clean the oldest
> > inactive cache entries at that point? Currently they use DISCARD for
> > that. Though this does nothing to fix hypothetical cache bloat for
> > pg_dump in bug #14936.
> I'm not seeing how this solves anything? You don't want to throw all
> caches away, therefore you need a target size. Then there's also the
> case of the cache being too large in a single "session".
Oh, I wasn't suggesting to throw away the whole cache at that point;
only that that is a convenient point to do whatever cleanup we want to do.
What I'm not clear about is exactly what is the cleanup that we want to
do at that point. You say it should be based on some configured size;
Robert says any predefined size breaks [performance for] the case where
the workload uses size+1, so let's use time instead (evict anything not
used in more than X seconds?), but keeping in mind that a workload that
requires X+1 would also break. So it seems we've arrived at the
conclusion that the only possible solution is to let the user tell us
what time/size to use. But that sucks, because the user doesn't know
either (maybe they can measure, but how?), and they don't even know that
this setting is there to be tweaked; and if there is a performance
problem, how do they figure out whether it can be fixed by fooling
with this parameter? I mean, maybe it's set to 10 and we suggest "maybe
11 works better" but it turns out not to, so "maybe 12 works better"?
How do you know when to stop increasing it?
This seems a bit like max_fsm_pages, that is to say, a disaster that was
only fixed by removing it.
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services