Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22904#discussion_r238779965
--- Diff: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/SparkKubernetesClientFactory.scala ---
@@ -67,8 +66,16 @@ private[spark] object SparkKubernetesClientFactory {
     val dispatcher = new Dispatcher(
       ThreadUtils.newDaemonCachedThreadPool("kubernetes-dispatcher"))
-    // TODO [SPARK-25887] Create builder in a way that respects configurable context
-    val config = new ConfigBuilder()
+    // Allow for specifying a context used to auto-configure from the user's K8S config file
+    val kubeContext = sparkConf.get(KUBERNETES_CONTEXT).filter(c => StringUtils.isNotBlank(c))
+    logInfo(s"Auto-configuring K8S client using " +
+      s"${if (kubeContext.isDefined) s"context ${kubeContext.get}" else "current context"}" +
+      s" from user's K8S config file")
+
+    // Start from an auto-configured config with the desired context
+    // Fabric 8 uses null to indicate that the user's current context should be used, so if there
+    // is no explicit setting pass null
+    val config = new ConfigBuilder(autoConfigure(kubeContext.getOrElse(null)))
--- End diff ---
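
For reference, a hedged sketch of how a user might opt into a named context once this lands; `minikube`, the master URL and the image name below are only example values, not anything defined by this PR:

```scala
import org.apache.spark.SparkConf

// Illustrative submission-side config: pick a named kubeconfig context via the
// setting this diff introduces. All concrete values here are examples.
val conf = new SparkConf()
  .setMaster("k8s://https://192.168.99.100:8443")          // example API server URL
  .set("spark.kubernetes.context", "minikube")             // example context name
  .set("spark.kubernetes.container.image", "spark:example") // example image
```
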
Yes, and I was referring to the K8S config file :) And yes, the fact that we would propagate `spark.kubernetes.context` into the pod shouldn't be an issue, because there won't be any K8S config file inside the pod for it to interact with: the in-pod K8S configuration comes from the service account token that gets injected into the pod.
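
To make that concrete, a small sketch (the paths below are the standard Kubernetes in-cluster mounts and default kubeconfig location, not anything this PR adds) of why the propagated setting is inert inside the pod:

```scala
import java.nio.file.{Files, Paths}

// Inside a Spark driver/executor pod the injected service account credentials
// exist, but a kubeconfig normally does not, so a propagated
// spark.kubernetes.context has no config file to select a context from.
val serviceAccountToken =
  Paths.get("/var/run/secrets/kubernetes.io/serviceaccount", "token")
val kubeConfigFile =
  Paths.get(sys.env.getOrElse("HOME", "/root"), ".kube", "config")

println(s"service account token present: ${Files.exists(serviceAccountToken)}")
println(s"K8S config file present:       ${Files.exists(kubeConfigFile)}")
```
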
---