xwu99 commented on pull request #33941:
URL: https://github.com/apache/spark/pull/33941#issuecomment-938365699


   > A few other concerns here.
   > 
   > 1. how does this work with minimum executors, seems easiest enough to not worry about it for this and just reuse if there.
   
   The idea is resource reuse, so the min/max limits will be adjusted to take all compatible executors into account (i.e. all compatible executors will share one min/max executor count).
   
   
https://github.com/apache/spark/blob/67034f284803bd10a487b5c67eb4c552ace950c3/core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala#L558
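   
   As a rough sketch of that sharing (an illustration only, not the actual ExecutorAllocationManager code; `compatibleProfileIds` and `executorCountForRpId` are hypothetical helpers), the configured limits (`spark.dynamicAllocation.minExecutors` / `maxExecutors`) would be applied to the whole group of compatible profiles rather than to each profile in isolation:
   
   ```scala
   // Hedged sketch: clamp a requested executor target for profile `rpId` against
   // min/max limits that are shared by every compatible resource profile.
   object SharedLimitSketch {
     def targetWithinSharedLimits(
         requestedTarget: Int,
         rpId: Int,
         compatibleProfileIds: Int => Set[Int],   // hypothetical: profiles reusable with rpId
         executorCountForRpId: Int => Int,        // hypothetical: current executors per profile
         minExecutors: Int,                       // spark.dynamicAllocation.minExecutors
         maxExecutors: Int): Int = {
       // Executors of compatible profiles can run this profile's tasks, so they
       // count against the same shared min/max budget.
       val others = (compatibleProfileIds(rpId) - rpId).toSeq.map(executorCountForRpId).sum
       val remainingMax = math.max(0, maxExecutors - others)
       val remainingMin = math.max(0, minExecutors - others)
       math.min(math.max(requestedTarget, remainingMin), remainingMax)
     }
   }
   ```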
   
   > 2. how does this apply to executor monitor idle timeout.  we don't timeout executors when more tasks to be run, do we take that into account here and keep compatible ones?
   
   Compatible executors are used the same way as executors with the original profile: they will also time out when there are no more tasks. As mentioned above, they share the min/max executor count with the other compatible profiles.
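   
   To make the timeout point concrete (again just a hedged sketch, not Spark's ExecutorMonitor; `compatibleProfileIds` and `pendingTasksForRpId` are hypothetical helpers): an executor would only become eligible for the normal idle timeout (`spark.dynamicAllocation.executorIdleTimeout`) once it runs nothing and no compatible profile has pending work.
   
   ```scala
   // Hedged sketch: an executor is treated as idle only when it has no running
   // tasks and no compatible profile has pending tasks; the usual idle timeout
   // then applies exactly as for a single-profile executor.
   object IdleTimeoutSketch {
     def isIdle(
         executorRpId: Int,
         runningTasksOnExecutor: Int,
         compatibleProfileIds: Int => Set[Int],       // hypothetical helper
         pendingTasksForRpId: Int => Int): Boolean = { // hypothetical helper
       val pendingCompatible =
         compatibleProfileIds(executorRpId).toSeq.map(pendingTasksForRpId).sum
       runningTasksOnExecutor == 0 && pendingCompatible == 0
     }
   }
   ```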
   
   > 3. Does UI show proper/useful information here
   
   Will check what UI changes are needed. Maybe show some compatibility info in the Environment / Resource Profiles section.
   
   > 4. I don't think this has taken numBarrierSlotsAvailable into account - see calculateAvailableSlots
   
   I didn't notice this; will check.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


