[ https://issues.apache.org/jira/browse/SPARK-2444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14058930#comment-14058930 ]
Thomas Graves commented on SPARK-2444:
--------------------------------------
Maybe I'm missing something in what you are asking. The default overhead is
384MB, and you can set it larger than that with the config
spark.yarn.executor.memoryOverhead. You then get an Xmx of what you specify
(32GB) and a container size of what you specify plus
spark.yarn.executor.memoryOverhead. Different applications can have very
different values for this, which is why we decided to let the user set it.
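For example (a sketch, with the 1GB overhead value purely illustrative), a job
that wants 32GB executor heaps would end up with roughly 33GB containers:

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setMaster("yarn-client")                           // running against Yarn
      .setAppName("overhead-example")                     // name is illustrative
      .set("spark.executor.memory", "32g")                // becomes the executor Xmx
      .set("spark.yarn.executor.memoryOverhead", "1024")  // MB added on top for the container
    val sc = new SparkContext(conf)
    // Yarn then allocates executor containers of roughly 32GB + 1024MB each.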
Elsewhere on Yarn you have to specify both of these. On MapReduce, for
instance, you specify the container size and then specify the Xmx separately.
With Spark on Yarn it is the inverse, based on what the community thought was
easier.
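To make the contrast concrete, a MapReduce job sets both knobs itself (standard
Hadoop 2.x property names, values illustrative):

    import org.apache.hadoop.conf.Configuration

    val jobConf = new Configuration()
    jobConf.set("mapreduce.map.memory.mb", "2048")      // container size requested from Yarn
    jobConf.set("mapreduce.map.java.opts", "-Xmx1536m") // JVM heap set separately, below it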
> Make spark.yarn.executor.memoryOverhead a first class citizen
> -------------------------------------------------------------
>
> Key: SPARK-2444
> URL: https://issues.apache.org/jira/browse/SPARK-2444
> Project: Spark
> Issue Type: Improvement
> Components: Documentation
> Affects Versions: 1.0.0
> Reporter: Nishkam Ravi
>
> A higher value of spark.yarn.executor.memoryOverhead is critical to running
> Spark applications on Yarn (https://issues.apache.org/jira/browse/SPARK-2398),
> at least for 1.0. It would be great to have this parameter highlighted in the
> docs/usage.