GitHub user liyinan926 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20669#discussion_r174564815
  
    --- Diff: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/KubernetesClusterManager.scala ---
    @@ -33,7 +33,9 @@ private[spark] class KubernetesClusterManager extends ExternalClusterManager wit
       override def canCreate(masterURL: String): Boolean = masterURL.startsWith("k8s")
     
       override def createTaskScheduler(sc: SparkContext, masterURL: String): TaskScheduler = {
    -    if (masterURL.startsWith("k8s") && sc.deployMode == "client") {
    +    if (masterURL.startsWith("k8s") &&
    +      sc.deployMode == "client" &&
    +      !sc.conf.contains(KUBERNETES_EXECUTOR_POD_NAME_PREFIX)) {
    --- End diff ---
    
    Is this sufficient to prevent end users from using client mode? What
    about adding a special key when calling `spark-submit` in the driver and
    testing that key here instead?
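
    A minimal sketch of the suggested check, assuming a hypothetical internal
    flag such as `spark.kubernetes.submitInDriver` that the submission client
    would set on the driver's SparkConf when it launches the driver pod. The
    flag name and the method body are illustrative only, not the actual change
    in this PR, and `SparkException` / `TaskSchedulerImpl` are assumed to be in
    scope as in the existing file:

        // Sketch only: "spark.kubernetes.submitInDriver" is a hypothetical flag that
        // spark-submit could set on the driver's SparkConf in cluster mode, letting
        // the cluster manager tell the in-driver submission apart from an end user's
        // client-mode SparkContext.
        override def createTaskScheduler(sc: SparkContext, masterURL: String): TaskScheduler = {
          val launchedBySubmissionClient =
            sc.conf.getBoolean("spark.kubernetes.submitInDriver", false)
          if (masterURL.startsWith("k8s") &&
            sc.deployMode == "client" &&
            !launchedBySubmissionClient) {
            // Assumed to keep the existing behavior of rejecting unsupported client mode.
            throw new SparkException("Client mode is currently not supported for Kubernetes.")
          }
          new TaskSchedulerImpl(sc)
        }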

