dongjoon-hyun commented on a change in pull request #29846:
URL: https://github.com/apache/spark/pull/29846#discussion_r493881157
##########
File path: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/MountVolumesFeatureStep.scala
##########
@@ -84,4 +112,15 @@ private[spark] class MountVolumesFeatureStep(conf: KubernetesConf)
(volumeMount, volume)
}
}
+
+ override def getAdditionalKubernetesResources(): Seq[HasMetadata] = {
+ additionalResources
+ }
+}
+
+private[spark] object MountVolumesFeatureStep {
+ val PVC_ON_DEMAND = "OnDemand"
+ val PVC = "PersistentVolumeClaim"
+ val PVC_POSTFIX = "-pvc"
+ val PVC_ACCESS_MODE = "ReadWriteOnce"
Review comment:
Ya, it's possible, but I didn't do that in this PR for the following reasons.
- `PVC_ON_DEMAND`: It's a dummy placeholder because the existing code expects some pre-defined names. In this case, it's better to recommend a fixed name instead of making it configurable.
- `PVC_POSTFIX`: It could be made configurable, but that wouldn't give much benefit because it's part of a transient id.
- `PVC_ACCESS_MODE`: Although this makes a lot of sense, I'm keeping this PR focused on a fixed mode because it aims to generate a new PVC for each executor. In other words, this PR is not suggesting creating a `ReadWriteMany` PVC and sharing it across multiple executors. (A rough sketch of how these constants fit together follows this list.)
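To make the intent concrete, here is a minimal sketch of how these constants could be used to build a per-executor PVC. The helper name, its parameters, and the exact fabric8 builder chain are illustrative assumptions, not the code in this PR:

```scala
import scala.collection.JavaConverters._

import io.fabric8.kubernetes.api.model.{PersistentVolumeClaim, PersistentVolumeClaimBuilder, Quantity, ResourceRequirementsBuilder}

import org.apache.spark.deploy.k8s.features.MountVolumesFeatureStep._

object OnDemandPvcSketch {
  // Hypothetical helper: build one on-demand PVC for a single executor pod.
  // `podName`, `volumeName`, `storageClass`, and `sizeLimit` are illustrative parameters.
  def buildOnDemandPvc(
      podName: String,
      volumeName: String,
      storageClass: String,
      sizeLimit: String): PersistentVolumeClaim = {
    // The claim name is derived from the transient pod name, which is why
    // PVC_POSTFIX is a fixed suffix rather than a user-facing configuration.
    val claimName = s"$podName-$volumeName$PVC_POSTFIX"
    new PersistentVolumeClaimBuilder()
      .withKind(PVC)
      .withApiVersion("v1")
      .withNewMetadata()
        .withName(claimName)
      .endMetadata()
      .withNewSpec()
        .withStorageClassName(storageClass)
        .withAccessModes(PVC_ACCESS_MODE)  // fixed to ReadWriteOnce: one PVC per executor
        .withResources(new ResourceRequirementsBuilder()
          .withRequests(Map("storage" -> new Quantity(sizeLimit)).asJava)
          .build())
      .endSpec()
      .build()
  }
}
```

The key point is that the claim name embeds the executor pod name, so each executor gets its own `ReadWriteOnce` claim instead of sharing one.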
For a `ReadWriteMany` PVC, we don't need this PR at all. The existing Spark PVC feature can mount a single `ReadWriteMany` PVC into all executors without any problem, and there is no burden in maintaining that PVC because it's always a single one. In addition, we also support NFS (e.g. AWS EFS) mounting. A minimal configuration sketch of that existing approach follows.
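The sketch below uses the documented `spark.kubernetes.executor.volumes.*` configuration keys; the volume name `shared`, the claim name `shared-rwm-pvc`, and the mount path `/data` are made-up placeholders, not values from this PR:

```scala
import org.apache.spark.SparkConf

// Hypothetical example (script/spark-shell style): mount one pre-created
// ReadWriteMany PVC into every executor via the existing volume options.
// "shared", "shared-rwm-pvc", and "/data" are placeholders.
val conf = new SparkConf()
  .set("spark.kubernetes.executor.volumes.persistentVolumeClaim.shared.options.claimName", "shared-rwm-pvc")
  .set("spark.kubernetes.executor.volumes.persistentVolumeClaim.shared.mount.path", "/data")
  .set("spark.kubernetes.executor.volumes.persistentVolumeClaim.shared.mount.readOnly", "false")
```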