[ https://issues.apache.org/jira/browse/SPARK-4299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-4299.
------------------------------
    Resolution: Not a Problem

This may have been fixed along the way, but from examining related issues 
recently (like https://issues.apache.org/jira/browse/SPARK-5861) I know that 
in yarn-cluster mode the driver memory setting is not applied to the 
spark-submit process's JVM heap, since that process is not the driver.
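
For illustration, a minimal sketch of a yarn-cluster submission (the 
application class and jar names are hypothetical). The 4g below sizes the 
driver's heap inside the YARN ApplicationMaster container on the cluster, not 
the local spark-submit JVM, so a small submission host can still request a 
large driver heap:

    ./bin/spark-submit \
        --master yarn-cluster \
        --driver-memory 4g \
        --class com.example.MyApp \
        my-app.jar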

> In spark-submit, the driver-memory value is used for the 
> SPARK_SUBMIT_DRIVER_MEMORY value
> -----------------------------------------------------------------------------------------
>
>                 Key: SPARK-4299
>                 URL: https://issues.apache.org/jira/browse/SPARK-4299
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.1.0
>            Reporter: Virgile Devaux
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> In the spark-submit script, the lines below:
>   elif [ "$1" = "--driver-memory" ]; then
>     export SPARK_SUBMIT_DRIVER_MEMORY=$2
> are wrong: spark-submit is not the process that runs the driver when you're 
> in yarn-cluster mode. So, when I launch spark-submit on a light server with 
> only 2 GB of memory and want to allocate 4 GB of memory to the driver (which 
> will run in the resource manager on a big YARN server with, say, 64 GB of 
> RAM), spark-submit fails with an OutOfMemoryError.
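
A minimal sketch of one possible guard in the spark-submit script (the 
DEPLOY_MODE variable is hypothetical; the script of that era may have tracked 
deploy mode differently), so the local JVM heap is only sized when the driver 
actually runs in the submitting process:

    elif [ "$1" = "--driver-memory" ]; then
        # Only size the local JVM for client-mode drivers; in yarn-cluster
        # mode the driver heap is allocated on the cluster by YARN.
        if [ "$DEPLOY_MODE" != "cluster" ]; then
            export SPARK_SUBMIT_DRIVER_MEMORY=$2
        fi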


