On Fri, Dec 13, 2019 at 11:08 AM Masahiko Sawada
<masahiko.saw...@2ndquadrant.com> wrote:
>
> On Fri, 13 Dec 2019 at 14:19, Amit Kapila <amit.kapil...@gmail.com> wrote:
> >
> > > > >
> > > > > How about adding an additional argument to ReinitializeParallelDSM()
> > > > > that allows the number of workers to be reduced?  That seems like it
> > > > > would be less confusing than what you have now, and would involve
> > > > > modify code in a lot fewer places.
> > > > >
> > > >
> > > > Yeah, we can do that.  We can maintain some information in
> > > > LVParallelState which indicates whether we need to reinitialize the
> > > > DSM before launching workers.  Sawada-San, do you see any problem with
> > > > this idea?
> > >
> > > I think the number of workers could be increased in cleanup phase. For
> > > example, if we have 1 brin index and 2 gin indexes then in bulkdelete
> > > phase we need only 1 worker but in cleanup we need 2 workers.
> > >
> >
> > I think it shouldn't be more than the number with which we have
> > created a parallel context, no?  If that is the case, then I think it
> > should be fine.
>
> Right. I thought that ReinitializeParallelDSM() with an additional
> argument would reduce DSM but I understand that it doesn't actually
> reduce DSM but just have a variable for the number of workers to
> launch, is that right?
>
Yeah, probably we need to change the nworkers stored in the context, and
the new value should be no more than the number already stored there.

> And we also would need to call
> ReinitializeParallelDSM() at the beginning of vacuum index or vacuum
> cleanup since we don't know that we will do either index vacuum or
> index cleanup, at the end of index vacuum.
>

Right.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com