On Thu, Oct 10, 2019 at 3:36 PM Amit Kapila <amit.kapil...@gmail.com> wrote:
>
> On Thu, Oct 10, 2019 at 9:58 AM Masahiko Sawada <sawada.m...@gmail.com> wrote:
> >
> > On Wed, Oct 9, 2019 at 7:12 PM Dilip Kumar <dilipbal...@gmail.com> wrote:
> > >
> > > I think the current situation is not good, but if we try to cap it to
> > > maintenance_work_mem + gin_*_work_mem, I don't think that will make
> > > the situation much better either.  However, I think the idea you
> > > proposed up-thread[1] is better.  At least maintenance_work_mem will
> > > be the top limit on what the autovacuum worker can use.
> > >
> >
> > I'm concerned that there are other index AMs that could consume as
> > much memory as GIN does. In principle we can vacuum third-party index
> > AMs and will even be able to vacuum them in parallel. I expect
> > maintenance_work_mem to be the top limit on the memory usage of a
> > maintenance command, but in practice it's hard for the core to limit
> > the memory used by bulkdelete and cleanup. So I thought that since GIN
> > is one of the index AMs, it could have a new parameter to make its job
> > faster. Such a parameter might not make the current situation much
> > better, but users would be able to set it to a lower value to reduce
> > memory usage while keeping the number of index vacuums unchanged.
> >
>
> I can understand your concern that dividing maintenance_work_mem
> between vacuuming the heap and cleaning up the indexes might be tricky,
> especially because of third-party indexes, but introducing a new GUC
> isn't free either.  I think that should be the last resort, and we need
> buy-in from more people for that.  Did you consider using work_mem for this?

Yeah, that would work too. But I wonder if it could turn into a story
similar to gin_pending_list_limit's. Previously we used work_mem as the
maximum size of the GIN pending list, but we concluded that it was not
appropriate to control both things with one GUC, so we introduced the
GUC and the storage parameter in commit 263865a4 (originally named
pending_list_cleanup_size, then renamed to gin_pending_list_limit in
commit c291503b1). I feel that story is quite similar to this issue.

Regards,

--
Masahiko Sawada

