[ https://issues.apache.org/jira/browse/SPARK-6085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342398#comment-14342398 ]

Ted Yu commented on SPARK-6085:
-------------------------------

In my opinion, the priority for this JIRA should be Major.

Users who deploy Spark on YARN in production are highly likely to hit 
computation failures. This would impact their business, and without intimate 
knowledge of Spark it would take them some time to figure out the root cause.
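Until the default changes, a minimal sketch of the usual workaround, assuming
Spark 1.x on YARN where the keys spark.yarn.executor.memoryOverhead and
spark.yarn.driver.memoryOverhead take values in megabytes; the 1024 MB figures
below are illustrative, not a recommended default:

  import org.apache.spark.{SparkConf, SparkContext}

  // Sketch only: raise the container memory overhead explicitly so YARN
  // does not kill executors for exceeding their memory allocation.
  val conf = new SparkConf()
    .setAppName("memory-overhead-example")
    // Extra off-heap headroom YARN grants each executor container, in MB.
    .set("spark.yarn.executor.memoryOverhead", "1024")
    // Same headroom for the driver container (yarn-cluster mode).
    .set("spark.yarn.driver.memoryOverhead", "1024")

  val sc = new SparkContext(conf)

The same settings can equally be passed at submit time, e.g.
spark-submit --conf spark.yarn.executor.memoryOverhead=1024 ...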

> Increase default value for memory overhead
> ------------------------------------------
>
>                 Key: SPARK-6085
>                 URL: https://issues.apache.org/jira/browse/SPARK-6085
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>            Reporter: Ted Yu
>            Priority: Minor
>
> Several users have reported that the current default memory overhead value 
> resulted in failed computations in Spark on YARN.
> See this thread:
> http://search-hadoop.com/m/JW1q58FDel
> Increasing the default value for memory overhead would improve the 
> out-of-the-box user experience.


