[ https://issues.apache.org/jira/browse/SPARK-2444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14058987#comment-14058987 ]

Marcelo Vanzin commented on SPARK-2444:
---------------------------------------

As Tom mentions, the setting is documented. You don't see it on the site 
because the documentation is part of 1.0.1, which hasn't been released yet.
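
For reference, setting it explicitly looks something like this (the app name 
and the 1024 MB value are purely illustrative, not recommendations; the 
setting takes megabytes):

    import org.apache.spark.{SparkConf, SparkContext}

    // Explicitly raise the YARN executor memory overhead.
    // 1024 (MB) here is an illustrative value, not a recommendation.
    val conf = new SparkConf()
      .setAppName("memory-overhead-example")
      .set("spark.executor.memory", "4g")
      .set("spark.yarn.executor.memoryOverhead", "1024")
    val sc = new SparkContext(conf)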

As for a better default, I'm planning to look at deriving a reasonable default 
for that value from the job's configuration; I hope to start on that today. 
Also, to highlight what Patrick mentioned in SPARK-1930, we should also 
account for Python overhead when running PySpark on YARN.
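
To sketch the kind of heuristic I have in mind (a fixed fraction of executor 
memory with a floor; the 0.07 factor and 384 MB floor below are assumptions 
for the sketch, not committed behavior):

    // Hedged sketch: derive the overhead from executor memory, with a floor.
    // overheadFactor and overheadMinMb are illustrative assumptions.
    def defaultMemoryOverheadMb(executorMemoryMb: Int): Int = {
      val overheadFactor = 0.07
      val overheadMinMb = 384
      math.max((executorMemoryMb * overheadFactor).toInt, overheadMinMb)
    }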

> Make spark.yarn.executor.memoryOverhead a first class citizen
> -------------------------------------------------------------
>
>                 Key: SPARK-2444
>                 URL: https://issues.apache.org/jira/browse/SPARK-2444
>             Project: Spark
>          Issue Type: Improvement
>          Components: Documentation
>    Affects Versions: 1.0.0
>            Reporter: Nishkam Ravi
>
> A higher value of spark.yarn.executor.memoryOverhead is critical to running 
> Spark applications on YARN (https://issues.apache.org/jira/browse/SPARK-2398), 
> at least for 1.0. It would be great to have this parameter highlighted in the 
> docs/usage. 



