dongjoon-hyun commented on a change in pull request #32288:
URL: https://github.com/apache/spark/pull/32288#discussion_r618826955
##########
File path:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/KubernetesClusterSchedulerBackend.scala
##########
@@ -134,6 +134,13 @@ private[spark] class KubernetesClusterSchedulerBackend(
}
}
+ Utils.tryLogNonFatalError {
+ kubernetesClient
+ .persistentVolumeClaims()
+ .withLabel(SPARK_APP_ID_LABEL, applicationId())
+ .delete()
+ }
Review comment:
Previously, the PVC lifecycle was tied to the executor pod.
Now, the lifecycle is tied to the driver pod, so the PVCs will be deleted when
the driver pod dies.
This code supports early deletion at application termination.
It works the same way as
`spark.kubernetes.driver.service.deleteOnTermination`.
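For context, tying the PVC lifecycle to the driver pod relies on the standard Kubernetes `ownerReferences` mechanism: once the driver pod is recorded as the owner of a PVC, the Kubernetes garbage collector deletes the PVC automatically when the driver pod is deleted. A hedged sketch of the resulting PVC metadata (the resource names and UID below are illustrative, not taken from this PR):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: spark-exec-1-pvc                 # illustrative PVC name
  labels:
    # SPARK_APP_ID_LABEL, matched by the withLabel(...).delete() call above
    spark-app-selector: spark-application-id
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: spark-driver                   # the driver pod set by addOwnerReference
    uid: 00000000-0000-0000-0000-000000000000   # illustrative UID
```

With the owner reference in place, Kubernetes garbage-collects the PVC when the driver pod is removed; the explicit `delete()` in the scheduler backend only makes that cleanup happen earlier, at application stop.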
##########
File path:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodsAllocator.scala
##########
@@ -339,6 +339,9 @@ private[spark] class ExecutorPodsAllocator(
resources
.filter(_.getKind == "PersistentVolumeClaim")
.foreach { resource =>
+ if (conf.get(KUBERNETES_DRIVER_OWN_PVC) && driverPod.nonEmpty) {
+ addOwnerReference(driverPod.get, Seq(resource))
+ }
Review comment:
Yes, correct!
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]