Thank you for the comment.

At Mon, 28 Aug 2017 21:31:58 -0400, Robert Haas <robertmh...@gmail.com> wrote in <ca+tgmozjn28uyjrq2k+5idhyxwbder68sctoc2p_nw7h7jb...@mail.gmail.com>
> On Mon, Aug 28, 2017 at 5:24 AM, Kyotaro HORIGUCHI
> <horiguchi.kyot...@lab.ntt.co.jp> wrote:
> > This patch have had interferences from several commits after the
> > last submission. I amended this patch to follow them (up to
> > f97c55c), removed an unnecessary branch and edited some comments.
>
> I think the core problem for this patch is that there's no consensus
> on what approach to take.  Until that somehow gets sorted out, I think
> this isn't going to make any progress.  Unfortunately, I don't have a
> clear idea what sort of solution everybody could tolerate.
>
> I still think that some kind of slow-expire behavior -- like a clock
> hand that hits each backend every 10 minutes and expires entries not
> used since the last hit -- is actually pretty sensible.  It ensures
> that idle or long-running backends don't accumulate infinite bloat
> while still allowing the cache to grow large enough for good
> performance when all entries are being regularly used.  But Tom
> doesn't like it.  Other approaches were also discussed; none of them
> seem like an obvious slam-dunk.
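For reference, the clock-hand behavior described above could look roughly like the following. This is a minimal, self-contained sketch of the idea, not PostgreSQL's actual catcache code; all names (CacheSlot, cache_sweep, etc.) are hypothetical. Each sweep evicts entries whose reference bit was not set since the previous sweep, then clears all bits for the next round:

```c
#include <stdbool.h>

/* Hypothetical, simplified model of a clock-hand expiry: each sweep
 * evicts entries not touched since the previous sweep.  Names are
 * illustrative, not PostgreSQL's catcache API. */

#define NCACHE 8

typedef struct CacheSlot
{
    bool    valid;      /* slot holds a live entry */
    bool    referenced; /* touched since the last sweep? */
    int     key;        /* stand-in for the cached lookup key */
} CacheSlot;

static CacheSlot cache[NCACHE];

/* Record a lookup hit: mark the entry as recently used. */
static void
cache_touch(int slot)
{
    cache[slot].referenced = true;
}

static void
cache_insert(int slot, int key)
{
    cache[slot].valid = true;
    cache[slot].referenced = true;
    cache[slot].key = key;
}

/* One pass of the clock hand: evict entries whose reference bit is
 * still clear from the previous pass, then clear the bits so the next
 * pass sees what was used in the meantime.  Returns the number of
 * entries evicted. */
static int
cache_sweep(void)
{
    int     evicted = 0;

    for (int i = 0; i < NCACHE; i++)
    {
        if (!cache[i].valid)
            continue;
        if (!cache[i].referenced)
        {
            cache[i].valid = false;
            evicted++;
        }
        else
            cache[i].referenced = false;
    }
    return evicted;
}
```

The point of the two-pass scheme is that entries in regular use are never evicted, while entries idle across one full sweep interval are reclaimed, bounding the bloat of an idle backend.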
I suppose that it slows intermittent lookups of non-existent objects.

I have tried a slightly different thing: removing entries by 'age', preserving a specified number (or ratio to live entries) of the youngest negative entries. The problem with that approach was that I couldn't find a good way to determine the number of entries to preserve, and I didn't want to offer additional knobs for it. So I finally proposed the patch upthread, since it doesn't need any assumption about usage. Though I can make another patch that does the same thing based on LRU, the same how-many-to-preserve problem would have to be resolved in order to avoid slowing the intermittent lookups.

> Turning to the patch itself, I don't know how we decide whether the
> patch is worth it.  Scanning the whole (potentially large) cache to
> remove negative entries has a cost, mostly in CPU cycles; keeping
> those negative entries around for a long time also has a cost, mostly
> in memory.  I don't know how to decide whether these patches will help
> more people than it hurts, or the other way around -- and it's not
> clear that anyone else has a good idea about that either.

Scanning a hash on invalidation of several catalogs (hopefully only slightly) slows a certain percentage of invalidations on perhaps most workloads. On the other hand, holding no-longer-looked-up entries will surely kill a backend under certain workloads sooner or later. This doesn't save the pg_proc cases, but it saves the pg_statistic and pg_class cases. I'm not sure what other catalogs can bloat.

I could reduce the complexity of this. The inval mechanism conveys only a hash value, so this patch scans the whole cache for the target OIDs (with possible spurious targets). That could be resolved by letting the inval mechanism convey an OID (though this may need additional members in an inval entry). Still, the full scan performed in CleanupCatCacheNegEntries doesn't seem easily avoidable: separating the hash by OID of the key, or providing a special dlist that points to tuples in buckets, would introduce another kind of complexity.
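To illustrate the point about spurious targets: since the invalidation message carries only a hash value, a cleanup pass must walk every entry and compare hashes, and a key that merely hashes to the same value as the invalidated one gets dropped too. The following is a toy, self-contained sketch of that full-scan shape; it is not the real CleanupCatCacheNegEntries, and all names here (NegEntry, cleanup_neg_entries) are hypothetical:

```c
#include <stdbool.h>

/* Hypothetical sketch of a full-scan negative-entry cleanup keyed on a
 * hash value alone, as with catcache invalidation messages.  Matching
 * is by hash, so unrelated keys with a colliding hash are also dropped
 * (harmless for correctness -- they are just re-fetched on next use). */

#define NENTRIES 16

typedef struct NegEntry
{
    bool     valid;
    bool     negative;     /* negative (lookup-miss) entry? */
    unsigned hashvalue;    /* hash of the lookup key */
} NegEntry;

static NegEntry entries[NENTRIES];

static void
entry_add(int i, bool negative, unsigned hashvalue)
{
    entries[i].valid = true;
    entries[i].negative = negative;
    entries[i].hashvalue = hashvalue;
}

/* Drop every negative entry whose hash matches the invalidated value.
 * Positive entries are left to the normal invalidation path.  Returns
 * the number of entries removed. */
static int
cleanup_neg_entries(unsigned hashvalue)
{
    int     removed = 0;

    for (int i = 0; i < NENTRIES; i++)
    {
        if (entries[i].valid && entries[i].negative &&
            entries[i].hashvalue == hashvalue)
        {
            entries[i].valid = false;
            removed++;
        }
    }
    return removed;
}
```

Conveying an OID instead of just a hash value would let the scan match exactly, at the cost of widening the inval entry, which is the trade-off discussed above.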
> Typos: funciton, paritial.

Thanks. ispell also told me of additional typos: corresnpond, belive and undistinguisable.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center