Github user mccheah commented on a diff in the pull request:
https://github.com/apache/spark/pull/21067#discussion_r181474442
--- Diff: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/KubernetesClusterSchedulerBackend.scala ---
@@ -59,15 +59,17 @@ private[spark] class KubernetesClusterSchedulerBackend(
   private val kubernetesNamespace = conf.get(KUBERNETES_NAMESPACE)
   private val kubernetesDriverPodName = conf
-    .get(KUBERNETES_DRIVER_POD_NAME)
+    .get(KUBERNETES_DRIVER_JOB_NAME)
--- End diff --
You set the job name here, but for the driver pod you really want the pod
name. It also seems difficult to pass the pod name through the driver config,
since the pod's name is derived from the job and is only known after the job
has started. But we can probably use a unique label to look up the driver pod.
Is the label mapped to `job-name` guaranteed to be unique?
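
For illustration, a minimal sketch of the label-based lookup suggested above, using the fabric8 `KubernetesClient` that the scheduler backend already holds. The `job-name` label key and the `namespace`/`jobName` parameters here are assumptions for the example, not something this PR defines:

```scala
import io.fabric8.kubernetes.api.model.Pod
import io.fabric8.kubernetes.client.KubernetesClient

import scala.collection.JavaConverters._

object DriverPodLookup {
  // Sketch: resolve the driver pod by label rather than by a pre-configured pod name.
  // Assumes the Job controller stamps the driver pod with a `job-name` label and that
  // at most one pod in the namespace carries that label value.
  def findDriverPod(
      client: KubernetesClient,
      namespace: String,
      jobName: String): Option[Pod] = {
    client.pods()
      .inNamespace(namespace)
      .withLabel("job-name", jobName)
      .list()
      .getItems
      .asScala
      .headOption
  }
}
```

If the label is not guaranteed to be unique, the lookup would have to disambiguate further, e.g. by also filtering on another driver-specific label.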
---