ivoson commented on code in PR #36716:
URL: https://github.com/apache/spark/pull/36716#discussion_r902794551


##########
docs/job-scheduling.md:
##########
@@ -83,6 +83,10 @@ This feature is disabled by default and available on all coarse-grained cluster
 [Mesos coarse-grained mode](running-on-mesos.html#mesos-run-modes) and [K8s mode](running-on-kubernetes.html).
 
 
+### Caveats
+
+- In [standalone mode](spark-standalone.html), without explicitly setting cores for each executor, each executor will get all the cores of a worker. In this case, when dynamic allocation is enabled, Spark may acquire many more executors than expected. If you want to use dynamic allocation in [standalone mode](spark-standalone.html), you are recommended to explicitly set cores for each executor until the issue [SPARK-30299](https://issues.apache.org/jira/browse/SPARK-30299) is fixed.

Review Comment:
   > "...without explicitly setting `spark.executor.cores`..."
   
   done.
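   
   For anyone landing on this thread later, here is a minimal `spark-submit` sketch of the recommended setup from the caveat above (the master URL, core count, executor bound, and jar name are illustrative placeholders, not values from this PR):
   
   ```sh
   # Dynamic allocation on a standalone cluster: set spark.executor.cores
   # explicitly so a single executor does not claim all of a worker's cores,
   # which can cause Spark to acquire many more executors than expected.
   spark-submit \
     --master spark://master:7077 \
     --conf spark.executor.cores=4 \
     --conf spark.dynamicAllocation.enabled=true \
     --conf spark.dynamicAllocation.shuffleTracking.enabled=true \
     --conf spark.dynamicAllocation.maxExecutors=20 \
     my-app.jar
   ```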



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

