On Wed, Dec 18, 2019 at 11:46 AM Masahiko Sawada
<masahiko.saw...@2ndquadrant.com> wrote:
>
> On Wed, 18 Dec 2019 at 15:03, Amit Kapila <amit.kapil...@gmail.com> wrote:
> >
> > I was analyzing your changes related to ReinitializeParallelDSM(), and
> > it seems we might launch more workers than necessary for the
> > bulkdelete phase. When creating the parallel context, we used the
> > maximum of "workers required for the bulkdelete phase" and "workers
> > required for cleanup", but now if the number of workers required for
> > the bulkdelete phase is less than for the cleanup phase (as in the
> > example you mentioned), we would launch too many workers for the
> > bulkdelete phase.
>
> Good catch. Currently, when creating a parallel context, the number of
> workers passed to CreateParallelContext() is assigned not only to
> pcxt->nworkers but also to pcxt->nworkers_to_launch. We would need to
> specify the number of workers actually to launch after creating the
> parallel context, or when creating it. Or I think we could call
> ReinitializeParallelDSM() even the first time we run the index vacuum.
>
How about just adding a ReinitializeParallelWorkers() that vacuum can call
before launching the workers, even for the first pass? For now, vacuum
would be its only caller.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
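
A minimal sketch of what such a helper might look like, shown against
stubbed-out stand-ins for the parallel machinery (the real fields live on
ParallelContext in src/include/access/parallel.h; the demo struct, the
CreateParallelContext_stub() helper, the main() driver, and the worker
counts below are illustrative assumptions, not PostgreSQL source):

#include <assert.h>
#include <stdio.h>

/* Stand-in for PostgreSQL's ParallelContext (see access/parallel.h). */
typedef struct ParallelContext
{
    int nworkers;           /* workers the parallel context is sized for */
    int nworkers_to_launch; /* workers the next launch will start */
} ParallelContext;

/*
 * Stub for CreateParallelContext(): as described above, the requested
 * worker count is assigned to both fields, so the first launch always
 * starts the maximum number of workers.
 */
static ParallelContext
CreateParallelContext_stub(int nworkers)
{
    ParallelContext pcxt = {nworkers, nworkers};
    return pcxt;
}

/*
 * Sketch of the proposed ReinitializeParallelWorkers(): adjust only the
 * number of workers to launch, leaving the context sizing (nworkers)
 * alone. Callers must not ask for more workers than the context was
 * created with.
 */
static void
ReinitializeParallelWorkers(ParallelContext *pcxt, int nworkers_to_launch)
{
    assert(pcxt->nworkers >= nworkers_to_launch);
    pcxt->nworkers_to_launch = nworkers_to_launch;
}

int
main(void)
{
    /* Context sized for max(bulkdelete, cleanup) = max(2, 4) workers. */
    ParallelContext pcxt = CreateParallelContext_stub(4);

    /* Before the first launch, clamp to what bulkdelete really needs. */
    ReinitializeParallelWorkers(&pcxt, 2);
    printf("bulkdelete launches %d of %d workers\n",
           pcxt.nworkers_to_launch, pcxt.nworkers);

    /* The cleanup phase can scale back up to the full count. */
    ReinitializeParallelWorkers(&pcxt, 4);
    printf("cleanup launches %d of %d workers\n",
           pcxt.nworkers_to_launch, pcxt.nworkers);
    return 0;
}

The point of the sketch is that nworkers (which governs how the context is
sized at creation) stays fixed, while nworkers_to_launch can be lowered or
raised per index-vacuum phase before each launch of workers.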