[ https://issues.apache.org/jira/browse/SPARK-25887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16716615#comment-16716615 ]

ASF GitHub Bot commented on SPARK-25887:
----------------------------------------

aditanase commented on a change in pull request #22904: [SPARK-25887][K8S] 
Configurable K8S context support
URL: https://github.com/apache/spark/pull/22904#discussion_r240524639
 
 

 ##########
 File path: 
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/SparkKubernetesClientFactory.scala
 ##########
 @@ -67,8 +66,16 @@ private[spark] object SparkKubernetesClientFactory {
     val dispatcher = new Dispatcher(
       ThreadUtils.newDaemonCachedThreadPool("kubernetes-dispatcher"))
 
-    // TODO [SPARK-25887] Create builder in a way that respects configurable context
-    val config = new ConfigBuilder()
+    // Allow for specifying a context used to auto-configure from the user's K8S config file
+    val kubeContext = sparkConf.get(KUBERNETES_CONTEXT).filter(c => StringUtils.isNotBlank(c))
+    logInfo(s"Auto-configuring K8S client using " +
+      s"${if (kubeContext.isDefined) s"context ${kubeContext.get}" else "current context"}" +
+      s" from the user's K8S config file")
+
+    // Start from an auto-configured config with the desired context.
+    // Fabric 8 uses null to indicate that the user's current context should be used, so if
+    // there is no explicit setting pass null.
+    val config = new ConfigBuilder(autoConfigure(kubeContext.getOrElse(null)))
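 
 For reference, a minimal standalone sketch of what the new code does, assuming only the
 Fabric 8 client library (the helper name `clientForContext` is hypothetical, and the real
 factory also applies Spark-specific auth, namespace and dispatcher options omitted here):
 
     import io.fabric8.kubernetes.client.{Config, ConfigBuilder, DefaultKubernetesClient}
 
     // Resolve a client config from the user's kubeconfig file, honouring an
     // explicit context when one is supplied; Fabric 8 treats a null context
     // as "use the current context".
     def clientForContext(context: Option[String]): DefaultKubernetesClient = {
       val base: Config = Config.autoConfigure(context.orNull)
       new DefaultKubernetesClient(new ConfigBuilder(base).build())
     }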
 
 Review comment:
   Of course :) I think what I'm asking is what real-world options we foresee 
for client mode in a K8S context. I don't see many beyond a pod that the user 
creates. Whether it's plain Spark or something like a Jupyter/Zeppelin notebook, 
at the end of the day it's probably a Docker container living in a pod.
   
   Thanks for taking the time; I won't spam this thread any more. I'll continue 
the discussion on the issue I created, which I'll repurpose for improving the 
docs in this area.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Allow specifying Kubernetes context to use
> ------------------------------------------
>
>                 Key: SPARK-25887
>                 URL: https://issues.apache.org/jira/browse/SPARK-25887
>             Project: Spark
>          Issue Type: Improvement
>          Components: Kubernetes
>    Affects Versions: 2.3.0, 2.3.1, 2.3.2, 2.4.0
>            Reporter: Rob Vesse
>            Priority: Major
>
> In working on SPARK-25809, support was added to the integration testing 
> machinery for Spark on K8S to use an arbitrary context from the user's K8S 
> config file.  However, this can fail or produce false positives because, 
> regardless of what the integration test harness does, the K8S submission 
> client uses the Fabric 8 client library in such a way that it only ever 
> configures itself from the current context.
> For users who work with multiple K8S clusters, or who have multiple K8S 
> "users" for interacting with their cluster, being able to use an arbitrary 
> context without first having to run {{kubectl config use-context 
> <context>}} is an important improvement.
> This would be a fairly small fix to {{SparkKubernetesClientFactory}}, along 
> with an associated configuration key, likely {{spark.kubernetes.context}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
