yutoacts commented on a change in pull request #33604:
URL: https://github.com/apache/spark/pull/33604#discussion_r683071677
##########
File path: conf/spark-env.sh.template
##########
@@ -32,14 +32,18 @@
# - SPARK_LOCAL_DIRS, storage directories to use on this node for shuffle and RDD data
# - MESOS_NATIVE_JAVA_LIBRARY, to point to your libmesos.so if you use Mesos
-# Options read in YARN client/cluster mode
+# Options read in any mode
# - SPARK_CONF_DIR, Alternate conf dir. (Default: ${SPARK_HOME}/conf)
-# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
-# - YARN_CONF_DIR, to point Spark towards YARN configuration files when you use YARN
# - SPARK_EXECUTOR_CORES, Number of cores for the executors (Default: 1).
# - SPARK_EXECUTOR_MEMORY, Memory per Executor (e.g. 1000M, 2G) (Default: 1G)
# - SPARK_DRIVER_MEMORY, Memory for Driver (e.g. 1000M, 2G) (Default: 1G)
+# Options read in any cluster mode using HDFS (e.g. YARN)
Review comment:
Thanks for the review. You're right: I meant cluster mode under any cluster manager, but that could be confused with YARN cluster mode. I've fixed the wording; hopefully it makes more sense now.
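
For context, a minimal sketch of how the variables discussed in this hunk might be set in a concrete `spark-env.sh`. The paths and sizes below are placeholder assumptions for illustration, not values from this PR; `HADOOP_CONF_DIR` applies only when Spark needs to read Hadoop/HDFS configuration, regardless of which cluster manager is used:

```shell
# Read in any mode
export SPARK_CONF_DIR="${SPARK_HOME:-/opt/spark}/conf"   # alternate conf dir (placeholder default)
export SPARK_EXECUTOR_CORES=2        # cores per executor (template default: 1)
export SPARK_EXECUTOR_MEMORY=2G      # memory per executor (template default: 1G)
export SPARK_DRIVER_MEMORY=2G        # driver memory (template default: 1G)

# Read in any cluster mode that uses HDFS (e.g. YARN) -- path is a placeholder
export HADOOP_CONF_DIR=/etc/hadoop/conf
```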
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
For additional commands, e-mail: [email protected]