Github user andrewor14 commented on the pull request: https://github.com/apache/spark/pull/2401#issuecomment-57694715

@brndnmtthws Thanks for adapting this to what YARN does. One concern I have now, though, is that `spark.yarn.executor.memoryOverhead` and `spark.mesos.executor.memoryOverhead` mean different things: the former is an absolute amount of memory in MB (default 384 MB), while the latter is a fraction of the executor memory (15%). I think this will be confusing to users. Any ideas on what we should do here, @pwendell @tgravescs?
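To make the mismatch concrete, here is a hypothetical sketch (not code from either the YARN or Mesos scheduler backend) of how the two interpretations would size a container for an 8 GB executor, assuming YARN's flat 384 MB default and the proposed 15% Mesos fraction:

```scala
// Hypothetical illustration of the two "memoryOverhead" interpretations
// being discussed; values and names are assumptions for the example only.
object MemoryOverheadExample {
  def main(args: Array[String]): Unit = {
    val executorMemoryMb = 8192 // spark.executor.memory = 8g

    // YARN-style: spark.yarn.executor.memoryOverhead is an absolute value in MB.
    val yarnOverheadMb = 384
    val yarnTotalMb = executorMemoryMb + yarnOverheadMb

    // Proposed Mesos-style: spark.mesos.executor.memoryOverhead is a fraction
    // of the executor memory (here 15%).
    val mesosOverheadFraction = 0.15
    val mesosTotalMb =
      executorMemoryMb + (executorMemoryMb * mesosOverheadFraction).toInt

    println(s"YARN-style total:  $yarnTotalMb MB")  // 8576 MB
    println(s"Mesos-style total: $mesosTotalMb MB") // 9420 MB
  }
}
```

So the same numeric setting would request very different amounts of memory depending on which backend reads it, which is the source of the potential user confusion.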