[ https://issues.apache.org/jira/browse/SPARK-3535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134592#comment-14134592 ]

Andrew Ash commented on SPARK-3535:
-----------------------------------

Why does the task need extra memory if the heap size equals the available task 
memory?  Filesystem cache?

> Spark on Mesos not correctly setting heap overhead
> --------------------------------------------------
>
>                 Key: SPARK-3535
>                 URL: https://issues.apache.org/jira/browse/SPARK-3535
>             Project: Spark
>          Issue Type: Bug
>          Components: Mesos
>    Affects Versions: 1.1.0
>            Reporter: Brenden Matthews
>
> Spark on Mesos does not account for any memory overhead.  The result is 
> that tasks are OOM killed nearly 95% of the time.
> As with the Hadoop on Mesos project, Spark should set aside 15-25% of the 
> executor memory for JVM overhead.
> For example, see: 
> https://github.com/mesos/hadoop/blob/master/src/main/java/org/apache/hadoop/mapred/ResourcePolicy.java#L55-L63
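
A minimal sketch in Scala (with assumed names and constants, not Spark's
actual code) of the kind of policy the linked ResourcePolicy.java applies:
reserve a slack fraction of the task memory, with a floor, so the JVM heap
is sized strictly below the Mesos memory limit:

    object MemoryOverheadSketch {
      // Illustrative constants matching the 15-25% guidance above.
      val OverheadFraction = 0.20 // assume 20% slack for non-heap JVM usage
      val MinOverheadMb    = 384  // assumed floor, in MB

      // Given the Mesos memory limit for a task, return a safe -Xmx value.
      def heapSizeMb(taskMemMb: Int): Int = {
        val overheadMb = math.max((taskMemMb * OverheadFraction).toInt, MinOverheadMb)
        taskMemMb - overheadMb
      }
    }

Under these assumptions, a 4096 MB Mesos task would get roughly a 3277 MB
heap, leaving ~819 MB for thread stacks, metaspace, and off-heap buffers.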


