Kyotaro HORIGUCHI <horiguchi.kyot...@lab.ntt.co.jp> writes:
> At Thu, 8 Mar 2018 00:28:04 +0000, "Tsunakawa, Takayuki" 
> <tsunakawa.ta...@jp.fujitsu.com> wrote in 
> <0A3221C70F24FB45833433255569204D1F8FF0D9@G01JPEXMBYT05>
>> Yes.  We are now facing the problem of excessive memory use by
>> PostgreSQL, where some applications randomly access about 200,000
>> tables.  Based on a small experiment, we estimate that each backend
>> will use several GB to ten GB of local memory for CacheMemoryContext.
>> Total memory use will exceed 1 TB at the expected maximum number of
>> connections.
>> 
>> I haven't looked at this patch, but does it evict all kinds of entries
>> in CacheMemoryContext, i.e. relcache, plancache, etc.?

> This works only for syscaches, which could bloat with entries for
> nonexistent objects.
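
To make that failure mode concrete, here is a minimal standalone sketch
(my illustration, not the patch's code; all names are made up) of why
negative entries bloat a catcache-like structure: every probe for a
nonexistent name still allocates and keeps an entry, so a workload that
probes many distinct missing names grows the cache without bound.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative cache entry; the real catcache structures differ. */
typedef struct CacheEntry
{
    char       *key;
    int         negative;       /* 1 = "object does not exist" */
    struct CacheEntry *next;
} CacheEntry;

static CacheEntry *cache_head = NULL;
static long cache_entries = 0;

/* Stand-in for a catalog probe; here it always misses. */
static int
catalog_lookup(const char *name)
{
    (void) name;
    return 0;
}

static CacheEntry *
cache_search(const char *name)
{
    CacheEntry *e;

    for (e = cache_head; e != NULL; e = e->next)
        if (strcmp(e->key, name) == 0)
            return e;

    /* Miss: remember the result, even when the object doesn't exist. */
    e = malloc(sizeof(CacheEntry));
    e->key = strdup(name);
    e->negative = !catalog_lookup(name);
    e->next = cache_head;
    cache_head = e;
    cache_entries++;
    return e;
}

int
main(void)
{
    char        name[32];

    for (long i = 0; i < 10000; i++)
    {
        snprintf(name, sizeof(name), "no_such_table_%ld", i);
        cache_search(name);     /* each miss adds one negative entry */
    }
    printf("cache now holds %ld entries, all negative\n", cache_entries);
    return 0;
}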

> The plan cache is an utterly different thing.  It is abandoned at the
> end of a transaction or the like.

When I was at Salesforce, we had *substantial* problems with plancache
bloat.  The driving factor there was plans associated with plpgsql
functions, which Salesforce had a huge number of.  In an environment
like that, there would be substantial value in being able to prune
both the plancache and plpgsql's function cache.  (Note that neither
of those things are "abandoned at the end of a transaction".)
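
The pruning that would have helped is not complicated in principle.  A
hedged sketch (mine, not plancache.c's actual structures; PlanEntry,
lru_head, and drop_plan are made-up names): keep cached plans on a list
ordered by last use, and once a budget is exceeded, drop entries from
the cold end.  A real version would also have to skip plans that are
currently pinned.

#include <stddef.h>

/* Illustrative node; the real CachedPlanSource looks nothing like this. */
typedef struct PlanEntry
{
    struct PlanEntry *prev;
    struct PlanEntry *next;
    /* ... plan data would live here ... */
} PlanEntry;

static PlanEntry *lru_head;     /* most recently used */
static PlanEntry *lru_tail;     /* least recently used */
static long nplans;

/* Move an entry to the head of the list whenever its plan is used. */
static void
touch_plan(PlanEntry *e)
{
    if (e == lru_head)
        return;
    if (e->prev)
        e->prev->next = e->next;
    if (e->next)
        e->next->prev = e->prev;
    if (e == lru_tail)
        lru_tail = e->prev;
    e->prev = NULL;
    e->next = lru_head;
    if (lru_head)
        lru_head->prev = e;
    lru_head = e;
    if (lru_tail == NULL)
        lru_tail = e;
}

/* Evict from the cold end until the cache is back under budget. */
static void
prune_plans(long budget, void (*drop_plan)(PlanEntry *))
{
    while (nplans > budget && lru_tail != NULL)
    {
        PlanEntry  *victim = lru_tail;

        lru_tail = victim->prev;
        if (lru_tail)
            lru_tail->next = NULL;
        else
            lru_head = NULL;
        drop_plan(victim);      /* caller owns freeing the plan itself */
        nplans--;
    }
}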

> Relcache is not based on catcache and is out of the scope of this
> patch, since it doesn't get bloated with entries for nonexistent
> objects.  It uses dynahash, and we could introduce a similar feature
> there if we are willing to cap relcache size.
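
Mechanically, a cap on a dynahash-based cache is easy enough to sketch.
dynahash already exposes hash_get_num_entries(), so the check could look
roughly like this (RelationIdCache is the real hash table in relcache.c,
but evict_one_relcache_entry() is hypothetical; a workable version would
have to find a victim whose reference count is zero):

static void
enforce_relcache_cap(long cap)
{
    /* Evict until under budget, or until nothing is evictable. */
    while (hash_get_num_entries(RelationIdCache) > cap)
    {
        if (!evict_one_relcache_entry())        /* hypothetical */
            break;
    }
}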

I think if the case of concern is an application with 200,000 tables,
it's just nonsense to claim that relcache size isn't an issue.

In short, it's not really apparent to me that negative syscache entries
are the major problem of this kind.  I'm afraid that you're drawing very
large conclusions from a specific workload.  Maybe we could fix that
workload some other way.

                        regards, tom lane
