kishoreg commented on a change in pull request #3470: [PINOT-7328] Reduce lock
contention in physical planning phase by reducing the total number of tasks
URL: https://github.com/apache/incubator-pinot/pull/3470#discussion_r233220024
##########
File path:
pinot-core/src/main/java/com/linkedin/pinot/core/plan/CombinePlanNode.java
##########
@@ -36,7 +36,7 @@
public class CombinePlanNode implements PlanNode {
private static final Logger LOGGER =
LoggerFactory.getLogger(CombinePlanNode.class);
- private static final int NUM_PLAN_NODES_THRESHOLD_FOR_PARALLEL_RUN = 10;
+ private static final int MAX_PLAN_TASKS = Math.min(10, (int)
(Runtime.getRuntime().availableProcessors() * .5));
Review comment:
Ok, so there are really two variables: the max number of threads and the number of segments per thread. The code is using the same variable, MAX_PLAN_TASKS, to represent both of them, which is why I got confused. If possible, let's create two separate variables.
Also, another concern: for 11 segments we will still have 10 threads, whereas with 10 segments we will be using only 1 thread. Is my understanding right?
This is where the hybrid approach might work better, where the number of threads can vary anywhere from 1 to MAX_THREADS=10.
// pseudo code
MAX_THREADS = 10;
MAX_SEGMENT_PER_THREAD = 100; // not sure what the right number is for this.
// clamp to at least 1 so fewer than MAX_SEGMENT_PER_THREAD segments still get one thread
numThreads = Math.min(MAX_THREADS, Math.max(1, numSegments / MAX_SEGMENT_PER_THREAD)); // anywhere from 1 to 10
numSegmentsPerThread = (numSegments + numThreads - 1) / numThreads; // ceiling division so every segment is covered
In your code, the outer loop will go from 0 to numThreads, and the inner loop in the callJob will go from i * numSegmentsPerThread to Math.min((i + 1) * numSegmentsPerThread, numSegments).
This will allow us to control the two variables independently.
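To make the hybrid approach concrete, here is a minimal, self-contained sketch of the partitioning logic (the class and method names are illustrative, not the actual CombinePlanNode code, and the MAX_SEGMENTS_PER_THREAD value is a guess):

```java
import java.util.ArrayList;
import java.util.List;

public class HybridPartition {
  static final int MAX_THREADS = 10;
  static final int MAX_SEGMENTS_PER_THREAD = 100; // tuning knob; right value is unclear

  // Returns [start, end) segment index ranges, one per thread.
  static List<int[]> partition(int numSegments) {
    // Thread count grows from 1 up to MAX_THREADS as the segment count grows.
    int numThreads = Math.min(MAX_THREADS, Math.max(1, numSegments / MAX_SEGMENTS_PER_THREAD));
    // Ceiling division so every segment is assigned to some thread.
    int segmentsPerThread = (numSegments + numThreads - 1) / numThreads;
    List<int[]> ranges = new ArrayList<>();
    for (int i = 0; i < numThreads; i++) {
      int start = i * segmentsPerThread;
      int end = Math.min(start + segmentsPerThread, numSegments);
      if (start < end) {
        ranges.add(new int[]{start, end});
      }
    }
    return ranges;
  }

  public static void main(String[] args) {
    // 10 segments -> a single thread handles all of them.
    System.out.println(partition(10).size());
    // 1050 segments -> 10 threads of 105 segments each.
    System.out.println(partition(1050).size());
  }
}
```

With this split, bumping MAX_THREADS or MAX_SEGMENTS_PER_THREAD tunes one knob without affecting the other, which is the independence the comment asks for.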
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]