Repository: spark
Updated Branches:
  refs/heads/master d4950e6be -> 188ea348f
[SPARK-11242][SQL] In conf/spark-env.sh.template SPARK_DRIVER_MEMORY is documented incorrectly

Minor fix on the comment

Author: guoxi <gu...@us.ibm.com>

Closes #9201 from xguo27/SPARK-11242.

Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/188ea348
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/188ea348
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/188ea348

Branch: refs/heads/master
Commit: 188ea348fdcf877d86f3c433cd15f6468fe3b42a
Parents: d4950e6
Author: guoxi <gu...@us.ibm.com>
Authored: Thu Oct 22 13:56:18 2015 -0700
Committer: Sean Owen <so...@cloudera.com>
Committed: Thu Oct 22 13:56:18 2015 -0700

----------------------------------------------------------------------
 conf/spark-env.sh.template | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/spark/blob/188ea348/conf/spark-env.sh.template
----------------------------------------------------------------------
diff --git a/conf/spark-env.sh.template b/conf/spark-env.sh.template
index 990ded4..771251f 100755
--- a/conf/spark-env.sh.template
+++ b/conf/spark-env.sh.template
@@ -36,10 +36,10 @@
 # Options read in YARN client mode
 # - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
-# - SPARK_EXECUTOR_INSTANCES, Number of workers to start (Default: 2)
-# - SPARK_EXECUTOR_CORES, Number of cores for the workers (Default: 1).
-# - SPARK_EXECUTOR_MEMORY, Memory per Worker (e.g. 1000M, 2G) (Default: 1G)
-# - SPARK_DRIVER_MEMORY, Memory for Master (e.g. 1000M, 2G) (Default: 1G)
+# - SPARK_EXECUTOR_INSTANCES, Number of executors to start (Default: 2)
+# - SPARK_EXECUTOR_CORES, Number of cores for the executors (Default: 1).
+# - SPARK_EXECUTOR_MEMORY, Memory per Executor (e.g. 1000M, 2G) (Default: 1G)
+# - SPARK_DRIVER_MEMORY, Memory for Driver (e.g. 1000M, 2G) (Default: 1G)
 # - SPARK_YARN_APP_NAME, The name of your application (Default: Spark)
 # - SPARK_YARN_QUEUE, The hadoop queue to use for allocation requests (Default: 'default')
 # - SPARK_YARN_DIST_FILES, Comma separated list of files to be distributed with the job.

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
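For context, the variables renamed in this diff are set by users in conf/spark-env.sh before launching Spark in YARN client mode. A minimal sketch of such a file follows; the specific values and the HADOOP_CONF_DIR path are illustrative assumptions, not recommendations from the commit:

```shell
# Example conf/spark-env.sh overrides for YARN client mode.
# All values below are illustrative; tune them for your cluster.
export HADOOP_CONF_DIR=/etc/hadoop/conf   # assumed path to Hadoop configuration files
export SPARK_EXECUTOR_INSTANCES=4         # number of executors to start (Default: 2)
export SPARK_EXECUTOR_CORES=2             # cores per executor (Default: 1)
export SPARK_EXECUTOR_MEMORY=2G           # memory per executor (Default: 1G)
export SPARK_DRIVER_MEMORY=1G             # memory for the driver (Default: 1G)
```

Spark's launch scripts source this file, so plain `export` assignments are all that is needed.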