bjornjorgensen commented on code in PR #45982:
URL: https://github.com/apache/spark/pull/45982#discussion_r1561282169


##########
docs/job-scheduling.md:
##########
@@ -96,6 +96,12 @@ All other relevant configurations are optional and under the 
`spark.dynamicAlloc
 `spark.shuffle.service.*` namespaces. For more detail, see the
 [configurations page](configuration.html#dynamic-allocation).
 
+
+### Caveats
+
+- In [standalone mode](spark-standalone.html), if `spark.executor.cores` is not explicitly set, each executor will get all the available cores of a worker. In this case, when dynamic allocation is enabled, Spark may acquire far more resources than expected. If you want to use dynamic allocation in [standalone mode](spark-standalone.html), you are recommended to explicitly set cores for each executor until [SPARK-30299](https://issues.apache.org/jira/browse/SPARK-30299) is fixed.
+- In [K8s mode](running-on-kubernetes.html), we can't using this feature by 
set `spark.shuffle.service.enabled` to `true` due to Spark on K8s doesn't 
support external shuffle service yet.

Review Comment:
   In [K8s mode](running-on-kubernetes.html), we cannot use this feature by 
setting `spark.shuffle.service.enabled` to `true` because Spark on K8s does not 
yet support the external shuffle service.
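
   As a side note for readers of the standalone-mode caveat above, a minimal sketch of explicitly capping executor cores when enabling dynamic allocation (the master URL, core and memory values, and application jar are illustrative placeholders; the `spark.*` property names are real Spark configurations):

   ```shell
   # Sketch: enable dynamic allocation on a standalone cluster while pinning
   # executor size, so a single executor does not claim every core of a worker.
   spark-submit \
     --master spark://master-host:7077 \
     --conf spark.dynamicAllocation.enabled=true \
     --conf spark.shuffle.service.enabled=true \
     --conf spark.executor.cores=2 \
     --conf spark.executor.memory=4g \
     your-app.jar
   ```

   Without the explicit `spark.executor.cores` line, each executor would take all cores of its worker, which is the behavior the caveat warns about.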



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

