[
https://issues.apache.org/jira/browse/SPARK-4299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sean Owen reopened SPARK-4299:
------------------------------
Wait a sec. On testing with 1.3.0, I'm not convinced it's resolved. But it is
a duplicate of SPARK-3884.
> In spark-submit, the driver-memory value is used for the
> SPARK_SUBMIT_DRIVER_MEMORY value
> -----------------------------------------------------------------------------------------
>
> Key: SPARK-4299
> URL: https://issues.apache.org/jira/browse/SPARK-4299
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 1.1.0
> Reporter: Virgile Devaux
> Original Estimate: 0.5h
> Remaining Estimate: 0.5h
>
> In the spark-submit script, the lines below:
> elif [ "$1" = "--driver-memory" ]; then
> export SPARK_SUBMIT_DRIVER_MEMORY=$2
> are wrong: spark-submit is not the process that will run the driver when
> you're in yarn-cluster mode. So when I launch spark-submit on a light server
> with only 2 GB of memory and want to allocate 4 GB to the driver (which will
> run in the resource manager on a big fat YARN server with, say, 64 GB of
> RAM), spark-submit fails with an OutOfMemoryError.
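>
> A sketch of the kind of guard that would avoid this (hypothetical shell,
> not the shipped script; SPARK_SUBMIT_DRIVER_MEMORY is the real variable,
> DEPLOY_MODE and DRIVER_MEMORY are illustrative names): only export the
> driver memory for the launcher JVM when the driver actually runs inside
> spark-submit, i.e. client mode:
>
>   DEPLOY_MODE="client"
>   while (($#)); do
>     case "$1" in
>       --deploy-mode)   DEPLOY_MODE=$2;   shift ;;
>       --driver-memory) DRIVER_MEMORY=$2; shift ;;
>     esac
>     shift
>   done
>
>   if [ "$DEPLOY_MODE" != "cluster" ] && [ -n "$DRIVER_MEMORY" ]; then
>     # In cluster mode the driver heap is allocated on the YARN side,
>     # so the local launcher JVM should keep its default size.
>     export SPARK_SUBMIT_DRIVER_MEMORY=$DRIVER_MEMORY
>   fi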