GitHub user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22904#discussion_r238440694
  
    --- Diff: 
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/SparkKubernetesClientFactory.scala
 ---
    @@ -67,8 +66,16 @@ private[spark] object SparkKubernetesClientFactory {
         val dispatcher = new Dispatcher(
           ThreadUtils.newDaemonCachedThreadPool("kubernetes-dispatcher"))
     
    -    // TODO [SPARK-25887] Create builder in a way that respects configurable context
    -    val config = new ConfigBuilder()
    +    // Allow for specifying a context used to auto-configure from the user's K8S config file
    +    val kubeContext = sparkConf.get(KUBERNETES_CONTEXT).filter(c => StringUtils.isNotBlank(c))
    +    logInfo(s"Auto-configuring K8S client using " +
    +      s"${if (kubeContext.isDefined) s"context ${kubeContext.get}" else "current context"}" +
    +      s" from the user's K8S config file")
    +
    +    // Start from an auto-configured config with the desired context.
    +    // Fabric 8 uses null to indicate that the user's current context should be used, so if
    +    // there is no explicit setting, pass null.
    +    val config = new ConfigBuilder(autoConfigure(kubeContext.getOrElse(null)))
    --- End diff --
    
    What happens here when the context does not exist? Does it fall back to the 
default?
    
    E.g., in cluster mode, the config you're adding will be propagated to the driver, and this code will then be called with the same context as on the submission node. What if that context does not exist inside the driver container?
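
    For reference, the Option handling in the snippet under review can be sketched in plain Scala. Fabric8's `autoConfigure` is stubbed out here as a hypothetical helper, since the real one reads the kubeconfig; the sketch only shows the blank-filtering and the `isDefined`/`orNull` pattern (note that an inverted `isEmpty` check would throw on `kubeContext.get` when no context is set):

    ```scala
    object KubeContextSketch {
      // Hypothetical stand-in for Fabric8's Config.autoConfigure(String):
      // the real method reads the user's kubeconfig; here we just echo the
      // context name, or "current" when passed null (Fabric8's convention
      // for "use the current context").
      def autoConfigure(context: String): String =
        Option(context).getOrElse("current")

      def chooseContext(raw: Option[String]): String = {
        // Blank values are treated the same as unset, mirroring the
        // StringUtils.isNotBlank filter in the diff.
        val kubeContext = raw.filter(_.trim.nonEmpty)
        // The condition must be isDefined, not isEmpty, or the
        // kubeContext.get below throws when no context is configured.
        val desc =
          if (kubeContext.isDefined) s"context ${kubeContext.get}"
          else "current context"
        println(s"Auto-configuring K8S client using $desc")
        autoConfigure(kubeContext.orNull)
      }
    }
    ```

    This does not answer the fallback question itself: whether a named context that is absent from the driver container's kubeconfig falls back to the current context is up to Fabric8's `autoConfigure`, not this wrapper.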


---
