dongjoon-hyun commented on a change in pull request #29897:
URL: https://github.com/apache/spark/pull/29897#discussion_r497279320
##########
File path: docs/running-on-kubernetes.md
##########
@@ -307,7 +307,18 @@ And, the claim name of a `persistentVolumeClaim` with volume name `checkpointpvc`:
spark.kubernetes.driver.volumes.persistentVolumeClaim.checkpointpvc.options.claimName=check-point-pvc-claim
```
-The configuration properties for mounting volumes into the executor pods use prefix `spark.kubernetes.executor.` instead of `spark.kubernetes.driver.`. For a complete list of available options for each supported type of volumes, please refer to the [Spark Properties](#spark-properties) section below.
+The configuration properties for mounting volumes into the executor pods use prefix `spark.kubernetes.executor.` instead of `spark.kubernetes.driver.`.
+
+For example, you can mount a dynamically-created persistent volume claim per executor by using `OnDemand` as a claim name and `storageClass` and `sizeLimit` options like the following. This is useful in case of [Dynamic Allocation](configuration.html#dynamic-allocation).
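(The concrete config example that the added paragraph points to with "like the following" is truncated in this excerpt. A minimal sketch of such a per-executor on-demand PVC configuration, assuming an illustrative volume name `data`, storage class `gp`, size `500Gi`, and mount path `/data`:

```
# Volume name "data", storage class "gp", size "500Gi", and path "/data" are illustrative placeholders.
spark.kubernetes.executor.volumes.persistentVolumeClaim.data.options.claimName=OnDemand
spark.kubernetes.executor.volumes.persistentVolumeClaim.data.options.storageClass=gp
spark.kubernetes.executor.volumes.persistentVolumeClaim.data.options.sizeLimit=500Gi
spark.kubernetes.executor.volumes.persistentVolumeClaim.data.mount.path=/data
spark.kubernetes.executor.volumes.persistentVolumeClaim.data.mount.readOnly=false
```

With `claimName=OnDemand`, Spark creates a fresh PVC for each executor pod instead of mounting a pre-created claim.)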
Review comment:
@dbtsai. What do you mean by the following?
> Currently, this doesn't support DA yet.

Since Apache Spark 3.0.0, dynamic allocation on K8s has been supported via shuffle data tracking. This feature was also designed with both the additional-large-disk requirement and the dynamic allocation scenario in mind. For example, under dynamic allocation, executor IDs increase monotonically and without bound, so users cannot prepare pre-populated PVCs.
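For context, a minimal sketch of how dynamic allocation is typically enabled on K8s since 3.0.0, using shuffle tracking in place of an external shuffle service (the executor bound here is an illustrative value):

```
# Shuffle tracking stands in for the external shuffle service on K8s.
spark.dynamicAllocation.enabled=true
spark.dynamicAllocation.shuffleTracking.enabled=true
# Illustrative upper bound; executor IDs still grow monotonically over the app's lifetime.
spark.dynamicAllocation.maxExecutors=100
```

Because executor IDs keep increasing as executors come and go, claims cannot be created ahead of time per executor, which is why on-demand PVC creation matters for this scenario.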