On 2018-03-07 14:48:48 -0300, Alvaro Herrera wrote:
> Oh, I wasn't suggesting to throw away the whole cache at that point;
> only that that is a convenient point to do whatever cleanup we want to do.

But why is that better than doing so continuously?

> What I'm not clear about is exactly what is the cleanup that we want to
> do at that point.  You say it should be based on some configured size;
> Robert says any predefined size breaks [performance for] the case where
> the workload uses size+1, so let's use time instead (evict anything not
> used in more than X seconds?), but keeping in mind that a workload that
> requires X+1 would also break.

We mostly seem to have found that adding a *minimum* size before
starting to evict based on time solves both of our concerns?

> So it seems we've arrived at the
> conclusion that the only possible solution is to let the user tell us
> what time/size to use.  But that sucks, because the user doesn't know
> either (maybe they can measure, but how?), and they don't even know that
> this setting is there to be tweaked; and if there is a performance
> problem, how do they figure whether or not it can be fixed by fooling
> with this parameter?  I mean, maybe it's set to 10 and we suggest "maybe
> 11 works better" but it turns out not to, so "maybe 12 works better"?
> How do you know when to stop increasing it?

I don't think it's that complicated, for the size figure. Having a knob
that controls how much memory a backend uses isn't a new concept, and
the right value can legitimately depend on the use case.

> This seems a bit like max_fsm_pages, that is to say, a disaster that was
> only fixed by removing it.

I don't think that's a meaningful comparison. max_fsm_pages had a
persistent effect, couldn't be tuned without restarts, and the
performance drop-offs were much more "cliff"-like.


Andres Freund
