Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22904#discussion_r239903144
  
    --- Diff: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/SparkKubernetesClientFactory.scala ---
    @@ -67,8 +66,16 @@ private[spark] object SparkKubernetesClientFactory {
         val dispatcher = new Dispatcher(
           ThreadUtils.newDaemonCachedThreadPool("kubernetes-dispatcher"))
     
    -    // TODO [SPARK-25887] Create builder in a way that respects configurable context
    -    val config = new ConfigBuilder()
    +    // Allow for specifying a context used to auto-configure from the users K8S config file
    +    val kubeContext = sparkConf.get(KUBERNETES_CONTEXT).filter(c => StringUtils.isNotBlank(c))
    +    logInfo(s"Auto-configuring K8S client using " +
    +      s"${if (kubeContext.isEmpty) s"context ${kubeContext.get}" else 
"current context"}" +
    +      s" from users K8S config file")
    +
    +    // Start from an auto-configured config with the desired context
    +    // Fabric 8 uses null to indicate that the users current context should be used so if no
    +    // explicit setting pass null
    +    val config = new ConfigBuilder(autoConfigure(kubeContext.getOrElse(null)))
    --- End diff --
    
    > I think this enhancement does not apply to client mode
    
    If you mean "client mode inside a k8s-managed docker container", then yes, 
you may need to do extra stuff, like mount the appropriate credentials. But in 
the "client mode with driver inside k8s pod" case, Spark does not create that 
pod for you. So I'm not sure how Spark can help with anything there; the 
`serviceName` configuration seems targeted at propagating the credentials of 
the submitter to the driver pod, and in that case Spark is not creating the 
driver pod at all.
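
As a side note for readers skimming the diff above, here is a minimal sketch of the Fabric 8 auto-configuration pattern it relies on. The context name and namespace below are illustrative assumptions rather than values from the PR; the relevant behaviour is that `Config.autoConfigure(null)` falls back to the user's current kubeconfig context:

```scala
import io.fabric8.kubernetes.client.{Config, ConfigBuilder, DefaultKubernetesClient}

// Hypothetical context name for illustration; None would mean "use whatever context is current".
val kubeContext: Option[String] = Some("minikube")

// Fabric 8 treats a null context as "use the user's current context",
// so pass null (via Option.orNull) when no explicit context was configured.
val base: Config = Config.autoConfigure(kubeContext.orNull)

// Apply builder overrides on top of the auto-configured settings.
val config: Config = new ConfigBuilder(base)
  .withNamespace("default")
  .build()

val client = new DefaultKubernetesClient(config)
```

When running inside a pod, the same auto-configuration picks up whatever kubeconfig or service-account credentials have been mounted into that pod, which is consistent with the point above that Spark is not the one creating the driver pod in that scenario.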


---
