On 3 May 2017 at 07:13, Robert Haas <robertmh...@gmail.com> wrote:
> Multiple people (including David Rowley as well as folks here at
> EnterpriseDB) have demonstrated that for certain queries, we can
> actually use a lot more workers and everything works great. The
> problem is that for other queries, using a lot of workers works
> terribly. The planner doesn't know how to figure out which it'll be -
> and honestly, I don't either.
For me, it seems pretty much related to the number of tuples processed by a worker versus how many it returns. As a general rule, I'd say the higher this ratio, the higher the efficiency of each worker will be, although that doesn't take into account contention points where workers must wait for fellow workers to complete some operation. I think parallel_tuple_cost is a good GUC to have; perhaps we can be smarter about how we use it when deciding how many workers should be employed.

By efficiency, I mean that if a query takes 10 seconds with a normal serial plan, and it takes 5 seconds after adding 1 worker, then that worker was 100% efficient.

I charted this in [1]. It would have been interesting to chart the same for a query that returned a larger number of groups, but I ran out of time. I'd expect, without having tested it, that more groups == less efficiency, due to more overhead in parallel tuple communication and more work to do in the serial portion of the plan.

[1] https://blog.2ndquadrant.com/parallel-monster-benchmark

-- 
David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
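P.S. The efficiency figure I describe above can be sketched as a quick calculation (illustrative only; the function name and formula are mine, not anything in the planner):

```python
def parallel_efficiency(serial_secs, parallel_secs, n_workers):
    """Marginal speedup contributed per added worker.

    1.0 means each added worker contributed a full worker's worth of
    speedup over the serial plan; lower values mean diminishing returns.
    """
    speedup = serial_secs / parallel_secs
    # Subtract 1 to exclude the serial baseline, then spread the
    # remaining speedup over the added workers.
    return (speedup - 1) / n_workers

# The example from above: 10s serial, 5s with one worker.
print(parallel_efficiency(10.0, 5.0, 1))  # 1.0, i.e. 100% efficient
```

So if a second worker only brings the same query down to 4 seconds, efficiency drops to 75%, which is the sort of curve the chart in [1] shows.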