GitHub user jerryshao commented on the pull request:

    https://github.com/apache/spark/pull/10761#issuecomment-172700451
  
    Hi @kevincox, IIUC your description of dynamic allocation sounds quite similar to the preemption mechanisms found in cluster managers.
    
    > This allows jobs to utilize an entire cluster while it is otherwise idle, but when another job starts (especially a development or interactive job) the currently running jobs can scale back to make room for it. This means there is no longer a trade-off between cluster utilization and interactive job launch latency.
    
    I doubt it is a good idea to address this kind of resource-related problem at the application level. Spark is just one application among others and does not have a whole picture of cluster usage, so solving this problem inside Spark is quite hard. For YARN, on the other hand, with capacity scheduler preemption enabled and priorities supported, this problem can be handled relatively easily; a rough sketch follows.
    
    The complexity of this module makes the refactoring non-trivial work; we should have a fuller discussion before starting to refactor this.


