Github user andrewor14 commented on the pull request:

    https://github.com/apache/spark/pull/2485#issuecomment-57007876
  
    Hey, I just talked to @pwendell about this. I think it's better for us to 
have separate yarn and mesos configs rather than generalizing this into a common 
`spark.executor.memory.overhead.*` config. The reason is that this memory 
overhead doesn't make sense for standalone mode or for other cluster managers 
that don't launch executors in containers. I think it's fine as long as the 
yarn and mesos configs have the same semantics, so users of one mode aren't 
confused when they switch to the other.
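
    For illustration, the two per-manager settings might look like this in 
`spark-defaults.conf`. The YARN name matches the existing 
`spark.yarn.executor.memoryOverhead` setting; the Mesos name is an assumption 
here, mirroring the YARN one as the comment suggests:

    ```properties
    # YARN: extra off-heap memory (in MB) added to each executor container request
    spark.yarn.executor.memoryOverhead   512

    # Mesos: intended to carry the same semantics for executors launched in
    # Mesos containers (config name assumed to mirror the YARN one)
    spark.mesos.executor.memoryOverhead  512
    ```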


