mridulm commented on pull request #35858:
URL: https://github.com/apache/spark/pull/35858#issuecomment-1069673078


   Whether or not to acquire more resources is a policy decision made at the resource manager, not at the application level.
   Spark modulates its outstanding container requests based on the progress of its jobs - it does not know a priori what the expected runtime of a task is. If more tasks complete quickly, the outstanding container requests go down; they go back up as the number of pending tasks increases.
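   
   To make that backlog-driven behaviour concrete, here is a minimal, self-contained sketch - not Spark's actual internal API; names like `Backlog` and `targetExecutors` are illustrative - of how an executor target can be derived purely from the current task backlog and then clamped to configured bounds:
   
   ```scala
   // Minimal sketch: derive an executor target from the current task backlog.
   // This mirrors the idea that Spark scales its outstanding container requests
   // with pending work rather than assuming it knows task runtimes in advance.
   object AllocationSketch {
     // Hypothetical inputs; the real allocation logic reads these from listener events.
     final case class Backlog(pendingTasks: Int, runningTasks: Int)
   
     def targetExecutors(
         backlog: Backlog,
         tasksPerExecutor: Int,
         minExecutors: Int,
         maxExecutors: Int): Int = {
       // Executors needed to run every pending and running task concurrently.
       val needed =
         math.ceil((backlog.pendingTasks + backlog.runningTasks).toDouble / tasksPerExecutor).toInt
       // Clamp to configured bounds; the resource manager still decides what is actually granted.
       math.max(minExecutors, math.min(maxExecutors, needed))
     }
   
     def main(args: Array[String]): Unit = {
       // As tasks finish quickly the backlog shrinks and the target (and hence the
       // outstanding container requests) drops; more pending tasks push it back up.
       println(targetExecutors(Backlog(pendingTasks = 40, runningTasks = 8), tasksPerExecutor = 4, minExecutors = 1, maxExecutors = 50)) // 12
       println(targetExecutors(Backlog(pendingTasks = 4, runningTasks = 8), tasksPerExecutor = 4, minExecutors = 1, maxExecutors = 50))  // 3
     }
   }
   ```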
   
   The resource manager factors in a variety of policy considerations - quota enforcement, acquisition of resources, preemption of existing containers, etc. - in order to satisfy those resource requests.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
