On Fri, Mar 13, 2015 at 8:59 AM, Amit Kapila <amit.kapil...@gmail.com> wrote:
> We can't directly call DestroyParallelContext() to terminate workers, as
> it can happen that some of the workers have not yet been started by
> that time.

That shouldn't be a problem.  TerminateBackgroundWorker() not only
kills an existing worker if there is one, but also tells the
postmaster that if it hasn't started the worker yet, it should not
bother.  So at the conclusion of the first loop inside
DestroyParallelContext(), every running worker will have received
SIGTERM and no more workers will be started.
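In rough outline, that first loop does something like the following
(treat this as a sketch; the exact field names in the parallel-mode
patch may differ):

    /* Kill each launched worker, or prevent it from starting at all. */
    if (pcxt->worker != NULL)
    {
        for (i = 0; i < pcxt->nworkers; ++i)
        {
            if (pcxt->worker[i].bgwhandle != NULL)
            {
                /*
                 * Sends SIGTERM if the worker is already running;
                 * otherwise tells the postmaster never to start it.
                 */
                TerminateBackgroundWorker(pcxt->worker[i].bgwhandle);
            }
        }
    }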

> So that can lead to a problem.  I think what we need here is a way to
> know whether all workers have started (basically, we need a new
> function, WaitForParallelWorkersToStart()).  This API needs to be
> provided by the parallel-mode patch.

I don't think so.  DestroyParallelContext() is intended to be good
enough for this purpose; if it's not, we should fix that instead of
adding a new function.
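So, for a rescan, the executor node can simply do something along these
lines (function and field names here are only illustrative, not taken
from the patch):

    /* Hypothetical rescan handling for a parallel-aware node. */
    if (node->pcxt != NULL)
    {
        /*
         * Terminates any running workers, un-registers any that have
         * not been started yet, and waits for them all to exit.
         */
        DestroyParallelContext(node->pcxt);
        node->pcxt = NULL;
    }
    /* A fresh parallel context gets created the next time we're run. */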

No matter what, re-scanning a parallel node is not going to be very
efficient.  But the way to deal with that is to make sure that such
nodes have a substantial startup cost, so that the planner won't pick
them in cases where it isn't going to work out well.
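To be concrete, I'd expect the costing to charge something on the order
of the following (the names and the exact form are purely illustrative):

    /*
     * Hypothetical cost model: a fixed charge for setting up the
     * parallel context plus a per-worker launch charge, so that a
     * parallel path only wins when there's enough work to amortize the
     * setup, and repeated rescans look appropriately expensive.
     */
    startup_cost += parallel_setup_cost +
        parallel_worker_startup_cost * num_workers;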

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company