On Fri, Feb 17, 2017 at 11:44 AM, Ashutosh Bapat
<ashutosh.ba...@enterprisedb.com> wrote:
> That's true for a partitioned table, but not necessarily for every
> append relation. Amit's patch is generic for all append relations. If
> the child plans are joins or subquery segments of set operations, I
> doubt if the same logic works. It may be better if we throw as many
> workers (or some function "summing" those up) as specified by those
> subplans. I guess, we have to use different logic for append relations
> which are base relations and append relations which are not base
> relations.

Well, I for one do not believe that if somebody writes a UNION ALL
with 100 branches, they should get 100 (or 99) workers.  Generally
speaking, the sweet spot for parallel workers on queries we've tested
so far has been between 1 and 4.  It's straining credulity to believe
that the number that's correct for parallel append is more than an
order of magnitude larger.  Since increasing resource commitment by
the logarithm of the problem size has worked reasonably well for table
scans, I believe we should pursue a similar approach here.  I'm
willing to negotiate on the details of what the formula should look like,
but I'm not going to commit something that lets an Append relation try
to grab massively more resources than we'd use for some other plan
shape.
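
To make that concrete, here is a rough sketch of the kind of shape I
have in mind -- this is only an illustration, not the committed table
scan logic or anything from the actual patch, and the names
(nchildren, max_workers) are made up for the example:

/*
 * Illustrative sketch: scale the Append's worker count with the
 * logarithm of the number of child subplans, instead of summing the
 * workers requested by each child.  Add one worker each time the
 * child count triples, capped by max_workers.
 */
int
append_parallel_workers(int nchildren, int max_workers)
{
    int     parallel_workers = 1;
    int     threshold = 3;

    while (nchildren >= threshold && parallel_workers < max_workers)
    {
        parallel_workers++;
        threshold *= 3;
    }

    return parallel_workers;
}

With something of that shape, a 100-branch UNION ALL would end up with
around 5 workers rather than 99, which is much closer to the range
that has actually paid off for table scans.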

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

