On Wed, Mar 16, 2016 at 11:26 AM, Julien Rouhaud <julien.rouh...@dalibo.com>

> I'm not too familiar with parallel planning, but I tried to implement
> both in attached patch. I didn't put much effort into the
> parallel_threshold GUC documentation, because I didn't really see a good
> way to explain it. I'd be happy to improve it if needed. Also, to make
> this parameter easier to tune for users, perhaps we could divide the
> default value by 3 and use it as is in the first iteration in
> create_parallel_path() ?
> Also, global max_parallel_degree still needs to be at least 1 for the
> per table value to be considered.
It all applies and works on my end.

Is a per-table max_parallel_degree of much use here? It lets you cap the
number of workers per table - but the result is still bound by the same
formula (now driven by the GUC). So in practice it's only really useful for
limiting the number of workers, not raising it.

Would there be a common use case for limiting parallelism on a subset of
tables in a database you've explicitly configured, via the GUC, for a higher
degree of parallelism? I struggle to think of one.

I think in practice the reverse would be more common: you'd want to give
certain tables a starting point of a certain number of workers (and ramp up
to more if the formula allows it). You could set this to 0 to mean "never
use parallel agg on this table".

Another option is to expose the threshold multiplier (currently hard-coded
to 3) per table - but that might become pretty hard to explain succinctly
in the documentation.


