[ 
https://issues.apache.org/jira/browse/SPARK-18765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcelo Vanzin resolved SPARK-18765.
------------------------------------
    Resolution: Won't Fix

I missed that this is already fixed in 2.0; since it's a new feature, I'd 
rather not add it to 1.6 (especially since it's unclear whether there will be 
many more releases in that line).

> Make values for spark.yarn.{am|driver|executor}.memoryOverhead have 
> configurable units
> --------------------------------------------------------------------------------------
>
>                 Key: SPARK-18765
>                 URL: https://issues.apache.org/jira/browse/SPARK-18765
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 1.6.3
>            Reporter: Daisuke Kobayashi
>            Priority: Trivial
>
> {{spark.yarn.\{driver|executor|am\}.memoryOverhead}} values are limited to 
> megabytes today: users provide a value without a unit and Spark assumes it is 
> in MB. Since the overhead is often a few gigabytes, we should change the 
> memory overhead to work the same way as the executor and driver memory configs.
> Given that 2.0 already covers this, it is worth having the 1.x code line 
> support this capability as well. My PR lets users pass the values in multiple 
> ways (backward compatibility is not broken), for example:
> {code}
> spark.yarn.executor.memoryOverhead=300m --> converted to 300
> spark.yarn.executor.memoryOverhead=500 --> converted to 500
> spark.yarn.executor.memoryOverhead=1g --> converted to 1024
> spark.yarn.executor.memoryOverhead=1024m --> converted to 1024
> {code}
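> For illustration, here is a minimal, self-contained sketch of the suffix 
> handling described by the table above (hypothetical names; not the actual 
> PR or Spark code):
> {code}
> // Illustrative sketch only: parse an overhead value such as "300m", "1g" or
> // "500" into mebibytes, defaulting to MB when no unit suffix is given so
> // existing plain-number configs keep working.
> object MemoryOverheadParser {
>   def toMb(value: String): Long = {
>     val v = value.trim.toLowerCase
>     if (v.endsWith("g")) v.dropRight(1).trim.toLong * 1024L
>     else if (v.endsWith("m")) v.dropRight(1).trim.toLong
>     else v.toLong  // no suffix: assume MB (backward compatible)
>   }
>
>   def main(args: Array[String]): Unit = {
>     Seq("300m", "500", "1g", "1024m").foreach { s =>
>       println(s"$s --> ${toMb(s)}")  // prints 300, 500, 1024, 1024
>     }
>   }
> }
> {code}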



