Github user liyinan926 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20669#discussion_r174608662
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/KubernetesClusterManager.scala
---
@@ -33,7 +33,9 @@ private[spark] class KubernetesClusterManager extends ExternalClusterManager wit
override def canCreate(masterURL: String): Boolean = masterURL.startsWith("k8s")
override def createTaskScheduler(sc: SparkContext, masterURL: String): TaskScheduler = {
- if (masterURL.startsWith("k8s") && sc.deployMode == "client") {
+ if (masterURL.startsWith("k8s") &&
+ sc.deployMode == "client" &&
+ !sc.conf.contains(KUBERNETES_EXECUTOR_POD_NAME_PREFIX)) {
--- End diff --
I think it's safer to introduce a new internal config key that is used only for
this purpose. Checking for the presence of `KUBERNETES_EXECUTOR_POD_NAME_PREFIX`
isn't sufficient, because that key is set by the submission client code, which
may also be invoked in client mode.
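
As a rough sketch of the suggestion, the idea is to gate the check on a dedicated internal flag rather than on `KUBERNETES_EXECUTOR_POD_NAME_PREFIX`. The key name `spark.kubernetes.submitInDriver`, the helper object, and the plain `Map`-based conf below are all hypothetical stand-ins for illustration, not the actual Spark implementation:

```scala
// Hedged sketch: models the proposed check with a plain Map in place of SparkConf.
// The config key name below is a hypothetical example of a dedicated internal flag
// that only the submission client would set when it launches the driver in-cluster.
object InClusterClientModeCheck {
  val SUBMIT_IN_DRIVER_KEY = "spark.kubernetes.submitInDriver"

  // True when this looks like a user-launched client-mode driver: the master is a
  // k8s URL, the deploy mode is "client", and the internal flag was never set
  // (i.e. the submission client did not launch this process).
  def isUserClientMode(
      masterURL: String,
      deployMode: String,
      conf: Map[String, String]): Boolean =
    masterURL.startsWith("k8s") &&
      deployMode == "client" &&
      !conf.contains(SUBMIT_IN_DRIVER_KEY)
}
```

Because the flag exists only for this check, its presence is unambiguous: unlike `KUBERNETES_EXECUTOR_POD_NAME_PREFIX`, no other code path sets it incidentally.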
---