sunithabeeram commented on a change in pull request #3470: [PINOT-7328] Reduce
lock contention in physical planning phase by reducing the total number of tasks
URL: https://github.com/apache/incubator-pinot/pull/3470#discussion_r233168151
##########
File path:
pinot-core/src/main/java/com/linkedin/pinot/core/plan/CombinePlanNode.java
##########
@@ -36,7 +36,7 @@
public class CombinePlanNode implements PlanNode {
private static final Logger LOGGER = LoggerFactory.getLogger(CombinePlanNode.class);
- private static final int NUM_PLAN_NODES_THRESHOLD_FOR_PARALLEL_RUN = 10;
+ private static final int MAX_PLAN_TASKS = Math.min(10, (int) (Runtime.getRuntime().availableProcessors() * .5));
Review comment:
Let's take a concrete example: a simple SELECT count(*) FROM table WHERE colx = y. If the
server has 100 segments to work with, the current code ends up creating 100 tasks. They
are all executed by a bounded number of threads in the executor service, but given how
little work typically happens in planNode.run(), each task finishes very quickly and the
threads go back to the queue for more work; that is where the contention is. Even though
there are tasks waiting to be picked up, most "pqw" threads are stuck contending to take
tasks off the queue.
With this change, we limit the total number of tasks we create, so when a thread picks up
a task it has sufficient work to do before it returns to the queue. This still takes
advantage of parallelism while containing the number of tasks. The cap of 10 is there so
that threads remain available for other queries as well (much like how it is done in
CombineOperator etc.).
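To make the intent concrete, here is a minimal sketch of the chunking idea: run all plan
nodes with at most MAX_PLAN_TASKS tasks, where each task owns a stripe of nodes. The
ChunkedPlanRunner name, the local PlanNode/Operator stand-in interfaces, and the
Math.max(1, ...) floor are illustrative assumptions, not the actual CombinePlanNode code
in this PR.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

public class ChunkedPlanRunner {

  // Stand-ins for Pinot's PlanNode/Operator interfaces, only to keep the sketch self-contained.
  interface Operator { }
  interface PlanNode { Operator run(); }

  // Cap the task count at 10 or half the available cores, whichever is smaller
  // (never below 1, a small safeguard added in this sketch), mirroring the diff above.
  private static final int MAX_PLAN_TASKS =
      Math.max(1, Math.min(10, Runtime.getRuntime().availableProcessors() / 2));

  // Run all plan nodes using at most MAX_PLAN_TASKS tasks: each task owns a stripe of
  // nodes, so a worker thread does a meaningful amount of work before it returns to the
  // executor queue instead of contending on it after every single node.
  static List<Operator> runChunked(List<PlanNode> planNodes, ExecutorService executorService)
      throws Exception {
    int numNodes = planNodes.size();
    int numTasks = Math.min(MAX_PLAN_TASKS, numNodes);
    List<Future<List<Operator>>> futures = new ArrayList<>(numTasks);

    for (int i = 0; i < numTasks; i++) {
      final int taskIndex = i;
      futures.add(executorService.submit(() -> {
        // Task i handles nodes i, i + numTasks, i + 2 * numTasks, ...
        List<Operator> operators = new ArrayList<>();
        for (int j = taskIndex; j < numNodes; j += numTasks) {
          operators.add(planNodes.get(j).run());
        }
        return operators;
      }));
    }

    // Collect the operators from every task, preserving nothing more than completeness.
    List<Operator> allOperators = new ArrayList<>(numNodes);
    for (Future<List<Operator>> future : futures) {
      allOperators.addAll(future.get());
    }
    return allOperators;
  }
}

The design point is simply that the task count, not the segment count, bounds how often
worker threads go back to the executor queue.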