[
https://issues.apache.org/jira/browse/SPARK-11154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14973322#comment-14973322
]
Sean Owen commented on SPARK-11154:
-----------------------------------
I think that if this is done at all, it would have to be with a new property.
The old one would then be deprecated but continue to function. This would have
to be done for all such properties.
> make specification of spark.yarn.executor.memoryOverhead consistent with
> typical JVM options
> ------------------------------------------------------------------------------------------
>
> Key: SPARK-11154
> URL: https://issues.apache.org/jira/browse/SPARK-11154
> Project: Spark
> Issue Type: Improvement
> Components: Documentation, Spark Submit
> Reporter: Dustin Cote
> Priority: Minor
>
> spark.yarn.executor.memoryOverhead is currently specified in megabytes by
> default, but it would be nice to allow users to specify the size as though it
> were a typical -Xmx option to a JVM where you can have 'm' and 'g' appended
> to the end to explicitly specify megabytes or gigabytes.
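A minimal sketch of the kind of suffix parsing the request describes, assuming a hypothetical helper (this is not Spark's actual implementation; Spark's real config parsing may differ):

```java
// Hypothetical sketch: parse a JVM-style size string ("512m", "2g", or a
// bare number treated as megabytes, matching the current default) into MB.
public class MemoryStringSketch {
    static long parseMemoryMb(String s) {
        String v = s.trim().toLowerCase();
        if (v.endsWith("g")) {
            // 'g' suffix: gigabytes, converted to megabytes
            return Long.parseLong(v.substring(0, v.length() - 1)) * 1024L;
        } else if (v.endsWith("m")) {
            // 'm' suffix: already megabytes
            return Long.parseLong(v.substring(0, v.length() - 1));
        }
        // Legacy behavior preserved: a bare number means megabytes
        return Long.parseLong(v);
    }

    public static void main(String[] args) {
        System.out.println(parseMemoryMb("384"));  // bare number -> 384 MB
        System.out.println(parseMemoryMb("512m")); // explicit megabytes
        System.out.println(parseMemoryMb("2g"));   // gigabytes -> 2048 MB
    }
}
```

Under this scheme the deprecated bare-megabytes form keeps working, which fits the comment above about leaving the old property functional.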
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)