Hello,

At Thu, 8 Mar 2018 00:28:04 +0000, "Tsunakawa, Takayuki" <tsunakawa.ta...@jp.fujitsu.com> wrote in <0A3221C70F24FB45833433255569204D1F8FF0D9@G01JPEXMBYT05>
> From: Alvaro Herrera [mailto:alvhe...@alvh.no-ip.org]
> > The thing that comes to mind when reading this patch is that some time ago
> > we made fun of other database software, "they are so complicated to
> > configure, they have some magical settings that few people understand how
> > to set". Postgres was so much better because it was simple to set up, no
> > magic crap. But now it becomes apparent that that only was so because
> > Postgres sucked, i.e., we hadn't yet gotten to the point where we *needed*
> > to introduce settings like that. Now we finally are?
> 
> Yes. We are now facing the problem of too much memory use by PostgreSQL,
> where some applications randomly access about 200,000 tables. It is
> estimated based on a small experiment that each backend will use several to
> ten GBs of local memory for CacheMemoryContext. The total memory use will
> become over 1 TB when the expected maximum connections are used.
> 
> I haven't looked at this patch, but does it evict all kinds of entries in
> CacheMemoryContext, i.e. relcache, plancache, etc?
This works only for syscaches, which can bloat with entries for nonexistent objects. The plan cache is an entirely different thing; it is discarded at the end of a transaction or the like. The relcache is not based on catcache and is out of the scope of this patch, since it doesn't get bloated with entries for nonexistent objects. It uses dynahash, and we could introduce a similar feature to it if we are willing to cap relcache size.
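To illustrate that last point, here is a minimal sketch of what capping a
dynahash-based cache could look like. Only the dynahash calls themselves
(hash_create, hash_search, hash_get_num_entries, hash_seq_init,
hash_seq_search) are the real API; RelCacheEntry, last_access,
RELCACHE_SOFT_LIMIT, and the functions are hypothetical and do not reflect
what the relcache actually stores.

/*
 * Illustrative sketch only: bolting a size cap onto a dynahash table.
 * RelCacheEntry and last_access are hypothetical; the real relcache
 * entries (RelationData) have no such field today.
 */
#include "postgres.h"
#include "utils/hsearch.h"

typedef struct RelCacheEntry
{
	Oid			relid;			/* hash key */
	uint64		last_access;	/* hypothetical access counter for eviction */
	/* ... cached data would live here ... */
} RelCacheEntry;

static HTAB *RelCacheHash;
static uint64 access_clock = 0;

#define RELCACHE_SOFT_LIMIT 1000	/* hypothetical cap; would be a GUC */

static void
init_cache(void)
{
	HASHCTL		ctl;

	MemSet(&ctl, 0, sizeof(ctl));
	ctl.keysize = sizeof(Oid);
	ctl.entrysize = sizeof(RelCacheEntry);
	RelCacheHash = hash_create("sketch relcache", 400, &ctl,
							   HASH_ELEM | HASH_BLOBS);
}

/* Evict the least recently used entries until we are under the cap. */
static void
enforce_cap(void)
{
	while (hash_get_num_entries(RelCacheHash) > RELCACHE_SOFT_LIMIT)
	{
		HASH_SEQ_STATUS status;
		RelCacheEntry *ent;
		RelCacheEntry *victim = NULL;

		/* full scan to find the oldest entry; scan ends when NULL returns */
		hash_seq_init(&status, RelCacheHash);
		while ((ent = hash_seq_search(&status)) != NULL)
		{
			if (victim == NULL || ent->last_access < victim->last_access)
				victim = ent;
		}
		if (victim == NULL)
			break;
		(void) hash_search(RelCacheHash, &victim->relid, HASH_REMOVE, NULL);
	}
}

static RelCacheEntry *
lookup_relation(Oid relid)
{
	bool		found;
	RelCacheEntry *ent;

	ent = hash_search(RelCacheHash, &relid, HASH_ENTER, &found);
	ent->last_access = ++access_clock;
	if (!found)
	{
		/* newly created entry: build the cached data here, then cap */
		enforce_cap();
	}
	return ent;
}

Of course a real implementation would presumably maintain an LRU list
instead of rescanning the whole table for every eviction; the sketch only
shows that dynahash gives us the pieces needed for such a cap.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center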