FrankChen021 edited a comment on issue #9940:
URL: https://github.com/apache/druid/issues/9940#issuecomment-636332701


   I think the task slot model (a task per process) is too heavy.
   
   Issuing multiple tasks simultaneously reaches the same goal as the executor 
service proposed here, but the problem with task slots is that their number is 
fixed in the configuration file. As the number of datasources and index tasks 
grows, fewer task slots are left for compact or kill tasks. To rebalance, we 
have to enlarge the number of task slots and reboot the MiddleManager. That is 
the real problem: if Druid provided a configuration center, changing the number 
of slots would not require rebooting MiddleManagers.
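   For reference, the slot count is set per MiddleManager via 
`druid.worker.capacity` in its `runtime.properties`, which is why a change 
currently implies a restart (the value `4` below is just an example):

```properties
# middleManager/runtime.properties
# Number of task slots (peons) this MiddleManager can run concurrently.
# Changing this value today requires restarting the MiddleManager process.
druid.worker.capacity=4
```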
   
   In this scenario, operators could break the kill task down into smaller 
intervals to utilize multiple task slots. But from the user's point of view, 
users should not have to care about how to utilize task slots; they just want 
the job to finish ASAP, so the breakdown should be left to Druid. I think this 
is why this proposal is here. 
   
   Given the task model here, maybe another viable way is for the Overlord to 
break the kill task down into smaller tasks, each of which performs a kill over 
a smaller interval. 
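   A minimal sketch of that breakdown, assuming month-sized sub-intervals: the 
payload shape follows Druid's native kill-task spec (which would be POSTed to 
`/druid/indexer/v1/task` on the Overlord), while the datasource name and the 
month granularity are illustrative choices, not part of the proposal.

```python
# Sketch: split one large kill interval into per-month sub-intervals,
# each becoming its own kill task the Overlord could schedule into
# whatever task slots are free.
import json
from datetime import date

def split_interval(start: date, end: date):
    """Yield (sub_start, sub_end) pairs, one per calendar month."""
    cur = start
    while cur < end:
        # First day of the next month (rolls the year over after December).
        nxt = date(cur.year + (cur.month == 12), cur.month % 12 + 1, 1)
        yield cur, min(nxt, end)
        cur = nxt

def kill_task_spec(datasource: str, start: date, end: date) -> dict:
    # Payload shape of a native Druid kill task.
    return {
        "type": "kill",
        "dataSource": datasource,
        "interval": f"{start.isoformat()}/{end.isoformat()}",
    }

specs = [kill_task_spec("wikipedia", s, e)
         for s, e in split_interval(date(2020, 1, 15), date(2020, 4, 1))]
for spec in specs:
    print(json.dumps(spec))
```

   Each spec is an independent task, so a long retention cleanup turns into 
several short tasks that never monopolize one slot for hours.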


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


