[ https://issues.apache.org/jira/browse/SPARK-11154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000438#comment-15000438 ]

Thomas Graves commented on SPARK-11154:
---------------------------------------

Sorry, actually, thinking about this more I'm torn. New configs might be less 
confusing than changing it in 2.0, or for new users hitting the issue [~sowen] 
pointed out. But I hate to see yet more configs, and I don't like the change 
of memory to mem in spark.yarn.am.memory, because that makes it inconsistent 
with the other Spark configs such as spark.executor.memory.


> make specification of spark.yarn.executor.memoryOverhead consistent with 
> typical JVM options
> ------------------------------------------------------------------------------------------
>
>                 Key: SPARK-11154
>                 URL: https://issues.apache.org/jira/browse/SPARK-11154
>             Project: Spark
>          Issue Type: Improvement
>          Components: Documentation, Spark Submit
>            Reporter: Dustin Cote
>            Priority: Minor
>
> spark.yarn.executor.memoryOverhead is currently specified in megabytes by 
> default, but it would be nice to let users specify the size as they would a 
> typical JVM -Xmx option, with 'm' or 'g' appended to the end to explicitly 
> indicate megabytes or gigabytes.
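For illustration, a suffix-aware parser along the lines of what the
description asks for might look like the following Scala sketch. The object
and method names here are hypothetical, not Spark's actual API, and Spark's
real handling of such byte strings may differ:

  object MemoryStringParser {
    // Parse a memory size string such as "384", "512m", or "2g" into
    // megabytes. A bare number is treated as megabytes, matching the
    // current default interpretation of spark.yarn.executor.memoryOverhead.
    def parseToMb(s: String): Long = {
      val trimmed = s.trim.toLowerCase
      trimmed.takeRight(1) match {
        case "g" => trimmed.dropRight(1).trim.toLong * 1024  // gigabytes -> MB
        case "m" => trimmed.dropRight(1).trim.toLong         // already MB
        case _   => trimmed.toLong                           // no suffix: assume MB
      }
    }
  }

  // Usage:
  //   MemoryStringParser.parseToMb("384")  // 384
  //   MemoryStringParser.parseToMb("512m") // 512
  //   MemoryStringParser.parseToMb("2g")   // 2048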


