[ https://issues.apache.org/jira/browse/SPARK-4665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Ryan Williams closed SPARK-4665.
--------------------------------
Resolution: Won't Fix
Per discussion on [#3525|https://github.com/apache/spark/pull/3525], people
don't feel that configuring memory overhead in this way is useful enough to
be worth adding a config parameter for.
> Config value for setting YARN container overhead to a fraction of executor
> memory
> ---------------------------------------------------------------------------------
>
> Key: SPARK-4665
> URL: https://issues.apache.org/jira/browse/SPARK-4665
> Project: Spark
> Issue Type: Improvement
> Components: YARN
> Reporter: Ryan Williams
>
> Currently, the {{spark.yarn.executor.memoryOverhead}} config lets you specify
> an absolute number of MB of "overhead" memory to allocate for each executor.
> It turns out to be more useful to set this as a fraction of the executor
> memory size; the current default is effectively a lower bound of 7% of
> executor memory.
> This lower-bound percentage should be configurable as well; I've heard rules
> of thumb that closer to 10% is desirable, and in general it makes more sense
> to specify the fraction you're targeting than an absolute number of MB.
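For illustration, here is a minimal sketch of the fraction-based lookup the
description asks for, written against the defaults it mentions (a 7% factor;
Spark's YARN allocator of this era also applied a 384 MB floor). The
{{spark.yarn.executor.memoryOverheadFraction}} key and the helper itself are
hypothetical, not actual Spark settings, since the proposal was closed as
Won't Fix:
{code:scala}
object MemoryOverheadSketch {
  // Values mirroring the Spark 1.x YARN defaults referenced above.
  val MemoryOverheadMin = 384          // MB floor applied by the YARN allocator
  val DefaultOverheadFraction = 0.07   // the 7% default mentioned in the description

  // Hypothetical resolution order: an absolute memoryOverhead setting wins;
  // otherwise a (hypothetical) fraction setting, falling back to the 7% default,
  // subject to the 384 MB floor.
  def executorMemoryOverhead(executorMemoryMb: Int, conf: Map[String, String]): Int =
    conf.get("spark.yarn.executor.memoryOverhead").map(_.toInt).getOrElse {
      val fraction = conf
        .get("spark.yarn.executor.memoryOverheadFraction") // hypothetical key
        .map(_.toDouble)
        .getOrElse(DefaultOverheadFraction)
      math.max((executorMemoryMb * fraction).toInt, MemoryOverheadMin)
    }

  def main(args: Array[String]): Unit = {
    // 8 GiB executor, no overrides: max(8192 * 0.07, 384) = 573 MB
    println(executorMemoryOverhead(8192, Map.empty))
    // Same executor targeting the 10% rule of thumb: max(8192 * 0.10, 384) = 819 MB
    println(executorMemoryOverhead(8192,
      Map("spark.yarn.executor.memoryOverheadFraction" -> "0.10")))
  }
}
{code}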