On Mon, Dec 18, 2017 at 11:46 AM, Andres Freund <and...@anarazel.de> wrote:
> I'm not 100% convinced either - but I also don't think it matters all
> that terribly much. As long as the overall hash hit rate is decent,
> minor increases in the absolute number of misses don't really matter
> that much for syscache imo.  I'd personally go for something like:
>
> 1) When about to resize, check if there's entries of a generation -2
>    around.
>
>    Don't resize if more than 15% of entries could be freed. Also, stop
>    reclaiming at that threshold, to avoid unnecessary purging cache
>    entries.
>
>    Using two generations allows a bit more time for cache entries to
>    be marked as fresh before the next resize.
>
> 2) While resizing increment generation count by one.
>
> 3) Once a minute, increment generation count by one.
>
>
> The one thing I don't quite have a good handle on is how much, and if
> any, cache reclamation to do at 3). We don't really want to throw away
> all the caches just because a connection has been idle for a few
> minutes, in a connection pool that can happen occasionally. I think I'd
> for now *not* do any reclamation except at resize boundaries.
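
For concreteness, the quoted scheme might look roughly like the sketch
below (illustrative C; the names are mine and only the 15% threshold and
generation arithmetic come from the text — this is not actual PostgreSQL
code):

```c
#include <assert.h>

/* Illustrative sketch of the quoted scheme; names and structure are
 * assumptions, only the thresholds come from the discussion above. */

typedef struct CacheEntry
{
    int     generation;     /* stamped on each access */
    int     in_use;
} CacheEntry;

static int cache_generation = 0;    /* bumped on resize (and periodically) */

/* Entries not touched since before the previous generation bump. */
static int
count_stale(const CacheEntry *entries, int nentries)
{
    int     nstale = 0;

    for (int i = 0; i < nentries; i++)
        if (entries[i].in_use &&
            entries[i].generation <= cache_generation - 2)
            nstale++;
    return nstale;
}

/*
 * About to resize: if more than 15% of entries are two generations old,
 * reclaim instead of resizing, stopping at the threshold so we don't
 * purge more entries than necessary.  Returns 1 if we resized, 0 if we
 * reclaimed instead.
 */
static int
maybe_resize(CacheEntry *entries, int nentries)
{
    int     threshold = nentries * 15 / 100;

    if (count_stale(entries, nentries) > threshold)
    {
        int     freed = 0;

        for (int i = 0; i < nentries && freed < threshold; i++)
        {
            if (entries[i].in_use &&
                entries[i].generation <= cache_generation - 2)
            {
                entries[i].in_use = 0;  /* evict stale entry */
                freed++;
            }
        }
        return 0;
    }

    cache_generation++;     /* rule 2: resizing bumps the generation */
    /* ... actual hash table enlargement would go here ... */
    return 1;
}
```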

My starting inclination was almost the opposite.  I think that you
might be right that a minute or two of idle time isn't sufficient
reason to flush our local cache, but I'd be inclined to fix that by
incrementing the generation count every 10 minutes or so rather than
every minute, and still flush things more than 1 generation old.  The
reason for that is that I think we should ensure that the system
doesn't sit there idle forever with a giant cache.  If it's not using
those cache entries, I'd rather have it discard them and rebuild the
cache when it becomes active again.
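
That timer-driven variant might look roughly like this (illustrative C;
the 10-minute tick and the "more than one generation old" cutoff mirror
the paragraph above, but the names and structure are assumptions, not
actual PostgreSQL code):

```c
#include <assert.h>
#include <time.h>

/* Sketch of a timer-driven flush: bump the generation every 10 minutes
 * and discard anything more than one generation old, so an idle backend
 * eventually gives its cache memory back.  Illustrative only. */

#define GENERATION_TICK_SECS (10 * 60)

typedef struct SyscacheEnt
{
    int     generation;     /* stamped on last access */
    int     valid;
} SyscacheEnt;

static int     syscache_gen = 0;
static time_t  last_tick = 0;

/*
 * Call periodically (e.g. from a timeout handler).  Returns the number
 * of entries flushed, or -1 if the tick interval hasn't elapsed yet.
 */
static int
syscache_tick(time_t now, SyscacheEnt *entries, int nentries)
{
    int     nflushed = 0;

    if (now - last_tick < GENERATION_TICK_SECS)
        return -1;

    last_tick = now;
    syscache_gen++;

    for (int i = 0; i < nentries; i++)
    {
        if (entries[i].valid &&
            entries[i].generation < syscache_gen - 1)
        {
            entries[i].valid = 0;   /* flushed; rebuilt on next lookup */
            nflushed++;
        }
    }
    return nflushed;
}
```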

Now, I also see your point about trying to clean up before
resizing.  That does seem like a good idea, although we have to be
careful not to be too eager to clean up there, or we'll just end up
artificially limiting the cache size when it's unwise to do so.  But I
guess that's what you meant by "Also, stop reclaiming at that
threshold, to avoid unnecessary purging cache entries."  I think the
idea you are proposing is this:

1. The first time we are due to expand the hash table, we check
whether we can forestall that expansion by doing a cleanup; if so, we
do that instead.

2. After that, we just expand.

That seems like a fairly good idea, although it might be a better idea
to allow cleanup if enough time has passed.  If we hit the expansion
threshold twice an hour apart, there's no reason not to try cleanup
again.
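
Allowing cleanup again after enough time has passed could be as simple
as rate-limiting cleanup attempts (illustrative C; the names and the
one-hour window are assumptions of mine, not anything from PostgreSQL):

```c
#include <assert.h>
#include <time.h>

/* Sketch of "allow cleanup again if enough time has passed"; names and
 * the one-hour window are illustrative assumptions. */

#define CLEANUP_MIN_INTERVAL 3600   /* seconds between cleanup attempts */

static time_t last_cleanup_attempt = 0;

/*
 * At the expansion threshold: attempt reclamation the first time, then
 * expand unconditionally until CLEANUP_MIN_INTERVAL has elapsed, after
 * which a fresh cleanup attempt is allowed again.
 */
static int
should_try_cleanup(time_t now)
{
    if (now - last_cleanup_attempt >= CLEANUP_MIN_INTERVAL)
    {
        last_cleanup_attempt = now;
        return 1;               /* try cleanup before expanding */
    }
    return 0;                   /* just expand */
}
```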

Generally, the way I'm viewing this is that a syscache entry means
paying memory to save CPU time.  Each 8kB of memory we use to store
system cache entries is one less block we have for the OS page cache
to hold onto our data blocks.  If we had an oracle (the kind from
Delphi, not Redwood City) that told us with perfect accuracy when to
discard syscache entries, it would throw away syscache entries
whenever the marginal execution-time performance we could buy from
another 8kB in the page cache is greater than the marginal
execution-time performance we could buy from those syscache entries.
In reality, it's hard to know which of those things is of greater
value.  If the system isn't meaningfully memory-constrained, we ought
to just always hang onto the syscache entries, as we do today, but
it's hard to know that.  I think the place where this really becomes a
problem is on systems with hundreds of connections + thousands of
tables + connection pooling; without some back-pressure, every backend
eventually caches everything, putting the system under severe memory
pressure for basically no performance gain.  Each new use of the
connection is probably for a limited set of tables, and only those
tables really need syscache entries; holding onto things used long in
past doesn't save enough to justify the memory used.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
