loquisgon commented on a change in pull request #12221:
URL: https://github.com/apache/druid/pull/12221#discussion_r806214015
##########
File path:
indexing-service/src/main/java/org/apache/druid/indexing/overlord/ImmutableWorkerInfo.java
##########
@@ -119,12 +142,33 @@ public boolean isValidVersion(String minVersion)
return worker.getVersion().compareTo(minVersion) >= 0;
}
-  public boolean canRunTask(Task task)
+  public boolean canRunTask(Task task, double parallelIndexWorkerRatio)
   {
     return (worker.getCapacity() - getCurrCapacityUsed() >= task.getTaskResource().getRequiredCapacity()
+           && canRunParallelIndexTask(task, parallelIndexWorkerRatio)
            && !getAvailabilityGroups().contains(task.getTaskResource().getAvailabilityGroup()));
   }

+  public boolean canRunParallelIndexTask(Task task, double parallelIndexWorkerRatio)
+  {
+    if (!task.getType().equals(ParallelIndexSupervisorTask.TYPE)) {
+      return true;
+    }
+    return getWorkerParallelIndexCapacity(parallelIndexWorkerRatio) - getCurrParallelIndexCapacityUsed()
Review comment:
Did you look into having the supervisor task not take up any slots at all?
That way the limiting factor would be the number of sub-tasks allowed, but
all tasks could run with no starvation: supervisor tasks would always be
allowed to run, and their sub-tasks would eventually run as slots free up
(assuming there is already some fairness guarantee that older waiting
sub-tasks run before newer ones -- I am not sure that exists). The potential
issue with letting all supervisors run without taking task slots is that,
while they don't use much CPU, they may use significant memory, since they
track the metadata of their running sub-tasks; but that could be dealt with
by adding more memory to the MiddleManagers in case of OOMs.
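For illustration, the ratio-based capacity check being discussed could be sketched roughly as below. This is a hypothetical, self-contained version, not Druid's actual implementation: the method names `workerParallelIndexCapacity` and `canRunParallelIndexTask`, the `Math.max(1, ...)` floor, and the ratio-disables-at-zero behavior are all assumptions made for the sketch.

```java
// Hypothetical sketch of a ratio-based slot limit for parallel-index
// supervisor tasks; names and exact semantics are illustrative only.
public class ParallelIndexCapacitySketch
{
  /**
   * Slots that parallel-index supervisor tasks may occupy on a worker,
   * computed as a ratio of the worker's total capacity. A ratio <= 0 is
   * treated here as "no dedicated limit" (all slots available).
   */
  static int workerParallelIndexCapacity(int totalCapacity, double ratio)
  {
    if (ratio <= 0) {
      return totalCapacity;
    }
    // Floor of totalCapacity * ratio, but never below one slot.
    return Math.max(1, (int) (totalCapacity * ratio));
  }

  /**
   * A supervisor task fits only if at least one slot remains within the
   * ratio-derived limit, given the slots such tasks already occupy.
   */
  static boolean canRunParallelIndexTask(int totalCapacity, int parallelIndexCapacityUsed, double ratio)
  {
    return workerParallelIndexCapacity(totalCapacity, ratio) - parallelIndexCapacityUsed >= 1;
  }

  public static void main(String[] args)
  {
    // 10 total slots with a 0.25 ratio reserves 2 slots for supervisors.
    System.out.println(workerParallelIndexCapacity(10, 0.25)); // 2
    System.out.println(canRunParallelIndexTask(10, 2, 0.25));  // false: both slots taken
    System.out.println(canRunParallelIndexTask(10, 1, 0.25));  // true: one slot free
  }
}
```

Under a sketch like this, the starvation concern above is the case where the ratio-derived limit is already saturated by long-lived supervisors, which is what motivates the alternative of letting supervisors occupy zero slots.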
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]