[ https://issues.apache.org/jira/browse/SPARK-5772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sean Owen resolved SPARK-5772.
------------------------------
    Resolution: Won't Fix

> spark-submit --driver-memory not correctly taken into account
> -------------------------------------------------------------
>
>                 Key: SPARK-5772
>                 URL: https://issues.apache.org/jira/browse/SPARK-5772
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Submit
>    Affects Versions: 1.2.0, 1.2.1
>         Environment: Debian 7
>            Reporter: Guillaume Charhon
>            Priority: Minor
>
> The spark.driver.memory setting does not seem to be correctly taken into
> account. I came across this issue when I hit a java.lang.OutOfMemoryError:
> Java heap space while training a random forest.
> I ran all my tests with 1 master and 4 worker nodes. All machines have 16
> cores and 106 GB of RAM, running Debian 7 on Google Compute Engine.
> When the memory error occurred, I noticed that the master had only 265.4 MB
> registered on the Executors page of the Web UI, while the worker machines
> had 42.4 GB.
> On the command line:
>   ../hadoop/spark-install/bin/spark-submit --driver-memory=83971m predict.py
> --> does NOT work (master memory is not correct)
> In spark-defaults.conf:
>   spark.driver.memory 83971m
> --> works
> In spark-env.sh:
>   SPARK_DRIVER_MEMORY=83971m
> --> works
> In all tests, spark.driver.memory is displayed with the correct value
> (83971m) on the Web UI (http://spark-m:4040/environment/).
> However, the Executors page (http://spark-m:4040/executors/) shows that the
> memory is not correctly allocated when the option is passed on the
> spark-submit command line.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
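For reference, the three configuration routes the reporter compares can be sketched as below. This is a hedged sketch, not a confirmed diagnosis: the `predict.py` path, the install path, and the 83971m value come from the report, while the note about the `=` syntax is an assumption on my part (spark-submit in the 1.2.x era is documented with space-separated option values, e.g. `--driver-memory 83971m`, and the report uses `--driver-memory=83971m`).

```shell
# Command-line form (reported NOT to take effect in 1.2.x).
# The report passes "--driver-memory=83971m"; spark-submit at that time
# expected a space-separated value, so the "=" form may not have been
# parsed as the driver-memory option at all (assumption, not confirmed
# in the report):
../hadoop/spark-install/bin/spark-submit --driver-memory 83971m predict.py

# Workaround reported to work: conf/spark-defaults.conf
#   spark.driver.memory  83971m

# Workaround reported to work: conf/spark-env.sh
#   export SPARK_DRIVER_MEMORY=83971m
```

A further standard route, equivalent to the spark-defaults.conf entry, is passing the property directly: `spark-submit --conf spark.driver.memory=83971m predict.py`. Whether it shares the client-mode limitation on 1.2.x is not stated in the report.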