Github user pwendell commented on the pull request:

    https://github.com/apache/spark/pull/2401#issuecomment-55821489
  
    Hey, will this have compatibility issues for existing deployments? I know
    of many clusters that simply have Spark request the entire amount of
    memory on the node. With this change, if a user upgrades, their jobs
    could just starve. What if instead we "scale down" the size of the
    executor based on what the user requests, i.e. if they request 20GB
    executors we reserve a few GB of that for this overhead? @andrewor14 how
    does this work in YARN? It might be good to have similar semantics to
    what they have there.
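
    For illustration, the "scale down" idea could look roughly like the
    sketch below. This is not existing Spark code: reserveOverhead is a
    hypothetical helper, and the 7% factor with a 384MB floor is just my
    understanding of YARN's memoryOverhead convention.

        // Hypothetical sketch: carve the overhead out of the user's request
        // instead of adding it on top, so a 20GB request still occupies 20GB.
        def reserveOverhead(requestedMemoryMB: Int): (Int, Int) = {
          val overheadFactor = 0.07 // assumed, mirroring YARN's convention
          val minOverheadMB = 384   // assumed floor, as in YARN
          val overheadMB =
            math.max((requestedMemoryMB * overheadFactor).toInt, minOverheadMB)
          (requestedMemoryMB - overheadMB, overheadMB)
        }

        // e.g. a 20GB request becomes ~18.6GB of heap plus ~1.4GB of overhead
        val (heapMB, overheadMB) = reserveOverhead(20 * 1024)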

