On Thu, Feb 16, 2017 at 8:15 PM, Robert Haas <robertmh...@gmail.com> wrote:
> On Wed, Feb 15, 2017 at 11:15 PM, Ashutosh Bapat
> <ashutosh.ba...@enterprisedb.com> wrote:
>> If the user is ready to throw 200 workers and if the subplans can use
>> them to speed up the query 200 times (obviously I am exaggerating),
>> why not use those? When the user sets
>> max_parallel_workers_per_gather to that high a number, he meant it to
>> be used by a gather, and that's what we should be doing.
>
> The reason is because of what Amit Khandekar wrote in his email -- you
> get a result with a partitioned table that is wildly inconsistent with
> the result you get for an unpartitioned table. You could equally well
> argue that if the user sets max_parallel_workers_per_gather to 200,
> and there's a parallel sequential scan of an 8MB table to be
> performed, we ought to use all 200 workers for that. But the planner
> in fact estimates a much smaller number of workers, because using 200
> workers for that task wastes a lot of resources for no real
> performance benefit. If you partition that 8MB table into 100 tables
> that are each 80kB, that shouldn't radically increase the number of
> workers that get used.
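The size-based estimate Robert describes could be sketched roughly as below -- a simplified model of the heuristic, not the actual planner code; the constants assume the default 8kB block size and the 8MB min_parallel_relation_size default:

```python
BLCKSZ = 8192                      # default PostgreSQL block size, in bytes
MIN_PARALLEL_RELATION_SIZE = 1024  # pages (= 8MB), the default threshold

def estimate_workers(rel_size_bytes, max_workers_per_gather):
    """Simplified model of the size-based worker estimate: no workers
    for relations below the minimum size, one worker at the minimum,
    and one more each time the relation size triples beyond that."""
    pages = rel_size_bytes // BLCKSZ
    if pages < MIN_PARALLEL_RELATION_SIZE:
        return 0                   # too small to bother parallelizing
    workers = 1
    threshold = MIN_PARALLEL_RELATION_SIZE
    while pages >= threshold * 3 and workers < max_workers_per_gather:
        workers += 1
        threshold *= 3
    return workers

print(estimate_workers(8 * 1024 * 1024, 200))  # 8MB table -> 1 worker
print(estimate_workers(80 * 1024, 200))        # 80kB partition -> 0 workers
```

Under this model each 80kB partition is below the threshold on its own, which is why summing per-partition estimates would not justify many workers either.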
That's true for a partitioned table, but not necessarily for every
append relation. Amit's patch is generic to all append relations. If
the child plans are joins or subquery segments of set operations, I
doubt the same logic works. It may be better to throw as many workers
at the Append as the subplans themselves request (or some function
"summing" those requests up). I guess we have to use different logic
for append relations which are base relations and append relations
which are not base relations.

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


--
Sent via pgsql-hackers mailing list (email@example.com)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
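The "summing" policy suggested above could be sketched as follows -- a hypothetical illustration of one possible function, not anything in the patch; the per-subplan worker counts are whatever each child plan would have requested on its own:

```python
def append_workers(subplan_workers, max_workers_per_gather):
    """Hypothetical policy for non-base-relation append relations:
    give the Append the combined worker demand of its subplans,
    capped by the per-gather limit."""
    return min(sum(subplan_workers), max_workers_per_gather)

print(append_workers([2, 3, 1], 8))  # total demand 6, under the cap -> 6
print(append_workers([4, 4, 4], 8))  # total demand 12, capped -> 8
```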