abhishekd0907 commented on pull request #35858:
URL: https://github.com/apache/spark/pull/35858#issuecomment-1070358079


   > Whether to acquire more resources or not is a policy decision at the resource manager, not at the application level. Spark modulates the outstanding requests for containers based on the progress of its jobs - it does not know a priori what the expected runtime of a task is. If more tasks complete quickly, the outstanding container requests will go down, or go up as the number of pending tasks increases.
   > 
   > The resource manager will be factoring in a variety of policy decisions - quota enforcement, acquisition of resources, preemption of existing containers, etc. - in order to satisfy resource asks.
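   
   For context, my understanding of the current behaviour described above is roughly the following (a simplified paraphrase of `ExecutorAllocationManager`'s upscaling logic; the function name and shape here are mine, not actual Spark code):
   
   ```scala
   // Simplified paraphrase of the current upscaling heuristic: the executor
   // target tracks the number of pending + running tasks, with no notion of
   // how long the remaining tasks are expected to run.
   def maxExecutorsNeeded(
       pendingTasks: Int,
       runningTasks: Int,
       tasksPerExecutor: Int): Int = {
     // Ceiling division: enough executors to give every task a slot.
     (pendingTasks + runningTasks + tasksPerExecutor - 1) / tasksPerExecutor
   }
   ```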
   
   I believe there is some merit in estimating the expected completion time of a stage from the run times of its already-completed tasks. I agree this estimate will not always be accurate, especially in cases where task times are skewed, but I believe we can come up with a best-effort, data-driven, self-correcting heuristic and leverage it when requesting new executors. The new dynamic allocation logic would incorporate three key pieces of information: a) the estimated completion time of active stages, b) the nodes immediately available, and c) the latency of adding new nodes (sketched below). We could also make this new logic configurable by leveraging the Resource Profiles framework, so users enable it only for workloads where it works better than the DefaultResourceProfile.
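   
   To make this concrete, here is a minimal sketch of the heuristic I have in mind. All names and signatures (`StageCompletionEstimator`, `estimateRemainingSeconds`, `extraExecutorsToRequest`, and their parameters) are hypothetical and only illustrate the idea, not the actual change:
   
   ```scala
   object StageCompletionEstimator {
   
     // (a) Estimate remaining wall-clock seconds for a stage from the run
     // times of its already-completed tasks. Self-correcting: every newly
     // completed task refines the average, so skew is gradually absorbed.
     def estimateRemainingSeconds(
         completedTaskSecs: Seq[Double],
         pendingTasks: Int,
         runningSlots: Int): Option[Double] = {
       if (completedTaskSecs.isEmpty || runningSlots <= 0) {
         None // no data yet: fall back to the default allocation behaviour
       } else {
         val avgTaskSecs = completedTaskSecs.sum / completedTaskSecs.size
         // Remaining work spread over the slots currently executing tasks.
         Some(pendingTasks * avgTaskSecs / runningSlots)
       }
     }
   
     // Combine (a) the estimate with (b) executors immediately available and
     // (c) new-node provisioning latency to decide how many extra executors
     // are actually worth asking for.
     def extraExecutorsToRequest(
         estimatedRemainingSecs: Double,
         immediatelyAvailable: Int,
         newNodeLatencySecs: Double,
         maxExtra: Int): Int = {
       if (estimatedRemainingSecs <= newNodeLatencySecs) {
         // The stage would finish before a new node even comes up, so only
         // use what is already available instead of provisioning more.
         math.min(immediatelyAvailable, maxExtra)
       } else {
         maxExtra
       }
     }
   }
   ```
   
   The point is not the exact formula but that the request size becomes a function of the estimated remaining work, rather than only of pending task counts.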
   
   Let me know your thoughts, @mridulm, and whether I am missing any points.

