[https://issues.apache.org/jira/browse/SPARK-2444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14059147#comment-14059147]
Nishkam Ravi commented on SPARK-2444:
-------------------------------------
[~tgraves] Thanks for the pointer. I was referring to
http://spark.apache.org/docs/1.0.0/running-on-yarn.html
[~vanzin] [~sowen] I have a simple patch for this. I haven't submitted a PR yet
because a multiplier was shot down earlier in the original PR
(https://github.com/apache/spark/pull/894). From my experiments so far, a
multiplier rather than a constant value makes more sense as the default: the
memory overhead seems to increase with YARN container size (as Sean points
out), and in this case it's better to over-allocate than under-allocate. I'm
running additional experiments to tune the multiplier.
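To make the idea concrete, here is a minimal sketch of a multiplier-based default. The factor (0.07), the floor (384 MB), and the names OverheadFactor / MinOverheadMB / defaultOverhead are placeholders for illustration only; the actual values are exactly what is still being tuned.
{code:scala}
object MemoryOverhead {
  // Hypothetical defaults: overhead scales with executor memory,
  // with a fixed lower bound so small containers still get headroom.
  val OverheadFactor = 0.07 // placeholder multiplier, subject to tuning
  val MinOverheadMB  = 384  // placeholder floor, in MB

  /** Default off-heap overhead (MB) derived from the executor heap size (MB). */
  def defaultOverhead(executorMemoryMB: Int): Int =
    math.max((OverheadFactor * executorMemoryMB).toInt, MinOverheadMB)
}

// e.g. an 8 GB executor would get max((0.07 * 8192).toInt, 384) = 573 MB
{code}
The point of the max() is that a pure multiplier under-serves small executors, while a pure constant under-serves large ones; combining the two covers both ends.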
> Make spark.yarn.executor.memoryOverhead a first class citizen
> -------------------------------------------------------------
>
> Key: SPARK-2444
> URL: https://issues.apache.org/jira/browse/SPARK-2444
> Project: Spark
> Issue Type: Improvement
> Components: Documentation
> Affects Versions: 1.0.0
> Reporter: Nishkam Ravi
>
> A higher value of spark.yarn.executor.memoryOverhead is critical to running
> Spark applications on YARN (https://issues.apache.org/jira/browse/SPARK-2398),
> at least for 1.0. It would be great to have this parameter highlighted in the
> docs/usage.
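For reference, the parameter can already be set explicitly per application. A minimal sketch against the Spark 1.0 API; the app name, master, and the 8g / 1024 values are arbitrary examples (the overhead is specified in MB and is requested from YARN on top of the executor heap):
{code:scala}
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("example")                              // placeholder app name
  .setMaster("yarn-client")                           // or yarn-cluster
  .set("spark.executor.memory", "8g")                 // executor heap
  .set("spark.yarn.executor.memoryOverhead", "1024")  // example value, in MB

val sc = new SparkContext(conf)
{code}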