kfaraz commented on PR #13076:
URL: https://github.com/apache/druid/pull/13076#issuecomment-1243746717

   @AmatyaAvadhanula , thanks for taking a stab at this. Could you share some 
more details regarding why these changes are needed? I see that this would 
increase the possible concurrency of supervisor update operations, but some 
more details would be helpful.
   
   If the intention is higher concurrency, I feel the approach can be 
improved. Maps such as `autoscalers` are already `ConcurrentHashMap`s and 
don't need separate locks. You could just perform atomic updates on them 
using methods like `computeIfAbsent()`.
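   To illustrate the point, here is a minimal sketch (the class and value type are hypothetical stand-ins, not the actual Druid classes): `computeIfAbsent()` runs its mapping function atomically per key, so two threads registering the same supervisor id cannot both create an entry, and no external lock is needed.

   ```java
   import java.util.concurrent.ConcurrentHashMap;
   import java.util.concurrent.ConcurrentMap;

   public class AutoscalerRegistry
   {
     // Hypothetical stand-in for the real autoscaler value type
     interface AutoScaler { }

     private final ConcurrentMap<String, AutoScaler> autoscalers = new ConcurrentHashMap<>();

     // Atomic get-or-create: the mapping function is invoked at most once
     // per key, even under concurrent calls for the same supervisorId.
     AutoScaler register(String supervisorId)
     {
       return autoscalers.computeIfAbsent(supervisorId, id -> new AutoScaler() { });
     }
   }
   ```

   The same pattern (`compute()`, `merge()`, `remove(key, value)`) covers most read-modify-write updates on these maps without any additional synchronization.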
   
   But the real concern is the purpose of the original global `lock`. AFAICT 
from the code, that `lock` is there less to protect the supervisors from each 
other and more to ensure that no other action can occur while the manager 
itself is being started or stopped. Changing the usage of this `lock` can 
have negative effects.
   
   For example, what would happen when the `SupervisorManager` is stopping and 
a new supervisor is submitted? You could try using `globalLock` on some of the 
operations inside `createOrUpdateSupervisorInternal()`, but that could still 
lead to a weird state where the stopping thread is, say, removing entries from 
the `autoscalers` map while another thread keeps adding to it.
   
   A better implementation would be to maintain an explicit lifecycle state for 
the `SupervisorManager` rather than a single `started` boolean. While it is in 
a `STARTING` or `STOPPING` state, every other action would have to fail (or 
maybe just block?).
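   Something along these lines, as a rough sketch (state names and method signatures are illustrative, not the actual `SupervisorManager` API): an `AtomicReference` holds the lifecycle state, `compareAndSet` makes the transitions race-free, and submissions are rejected unless the manager is `RUNNING`.

   ```java
   import java.util.concurrent.atomic.AtomicReference;

   public class LifecycleStateSketch
   {
     enum State { STOPPED, STARTING, RUNNING, STOPPING }

     private final AtomicReference<State> state = new AtomicReference<>(State.STOPPED);

     void start()
     {
       // Only one thread can win the STOPPED -> STARTING transition
       if (!state.compareAndSet(State.STOPPED, State.STARTING)) {
         throw new IllegalStateException("Cannot start from state " + state.get());
       }
       // ... initialize supervisors, autoscalers, etc. ...
       state.set(State.RUNNING);
     }

     void createOrUpdateSupervisor(String spec)
     {
       // Reject submissions unless the manager is fully running
       if (state.get() != State.RUNNING) {
         throw new IllegalStateException("SupervisorManager is not running");
       }
       // ... proceed with the update ...
     }

     void stop()
     {
       if (!state.compareAndSet(State.RUNNING, State.STOPPING)) {
         throw new IllegalStateException("Cannot stop from state " + state.get());
       }
       // ... stop supervisors, drain the maps ...
       state.set(State.STOPPED);
     }
   }
   ```

   Note the sketch only rejects new work; in the real class, the state check and the map mutations would still need to happen under a common lock (or the stop path would need to wait for in-flight updates) to fully close the race described above.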


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
