On Thu, Oct 13, 2016 at 5:28 AM, Robert Haas <robertmh...@gmail.com> wrote:
> On Wed, Oct 12, 2016 at 8:13 AM, Peter Eisentraut
> <peter.eisentr...@2ndquadrant.com> wrote:
>> On 10/4/16 10:16 AM, Julien Rouhaud wrote:
>>> Please find attached v9 of the patch, adding the parallel worker class
>>> and changing max_worker_processes default to 16 and max_parallel_workers
>>> to 8. I also added Amit's explanation for the need of a write barrier
>>> in ForgetBackgroundWorker().
>> This approach totally messes up the decoupling between the background
>> worker facilities and consumers of those facilities. Having dozens of
>> lines of code in bgworker.c that does the accounting and resource
>> checking on behalf of parallel.c looks very suspicious. Once we add
>> logical replication workers or whatever, we'll be tempted to add even
>> more stuff in there and it will become a mess.
> I attach a new version of the patch that I've been hacking on in my
> "spare time", which reduces the footprint in bgworker.c quite a bit.
A couple of comments:
@@ -370,6 +388,9 @@ ForgetBackgroundWorker(slist_mutable_iter *cur)
slot = &BackgroundWorkerData->slot[rw->rw_shmem_slot];
+ if ((rw->rw_worker.bgw_flags & BGWORKER_CLASS_PARALLEL) != 0)
slot->in_use = false;
It seems upthread we agreed to have a write barrier before setting
slot->in_use, but I don't see that in the patch.
Isn't it better to keep the planner-related checks from Julien's patch,
especially the one below:
@@ -249,6 +249,7 @@ standard_planner(Query *parse, int cursorOptions,
parse->utilityStmt == NULL &&
max_parallel_workers_per_gather > 0 &&
+ max_parallel_workers > 0 &&
> I don't think it's wrong that the handling is done there, though. The
> process that is registering the background worker is well-placed to
> check whether there are already too many, and if it does not then the
> slot is consumed at least temporarily even if it busts the cap. On
> the flip side, the postmaster is the only process that is well-placed
> to know when a background worker terminates. The worker process
> itself can't be made responsible for it, as you suggest below, because
> it may never even start up in the first place (e.g. fork() returns
> EAGAIN). And the registering process can't be made responsible,
> because it might die before the worker.
>> I think we should think of a way where we can pass the required
>> information during background worker setup, like a pointer to the
>> resource-limiting GUC variable.
> I don't think this can work, per the above.
I see the value in Peter's point, but I could not think of a way to
implement it.