Yikun commented on a change in pull request #35215:
URL: https://github.com/apache/spark/pull/35215#discussion_r790218477



##########
File path: 
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/KubernetesClusterSchedulerBackend.scala
##########
@@ -76,13 +76,12 @@ private[spark] class KubernetesClusterSchedulerBackend(
 
   private def setUpExecutorConfigMap(driverPod: Option[Pod]): Unit = {
     val configMapName = KubernetesClientUtils.configMapNameExecutor
-    val resolvedExecutorProperties =
-      Map(KUBERNETES_NAMESPACE.key -> conf.get(KUBERNETES_NAMESPACE))
     val confFilesMap = KubernetesClientUtils

Review comment:
      Thanks for clarifying, but Kubernetes allows any key as a ConfigMap key 
(even if the key name is 'namespace'), so we shouldn't break that behavior 
because of a Spark implementation detail or workaround.
   
   
   So I think any key put into the ConfigMap should be converted to the real 
ConfigMap key, rather than applying a hack or special-casing that occupies a 
key a user could legitimately set.
   
   
   If we just need to set the namespace for executors separately, I think what 
we really need is to introduce a new conf like 
spark.k8s.executor.configmap.namespace; otherwise, fall back to 
spark.k8s.namespace directly.
   
   
   And I also don't see any problem with just using conf.namespace as this PR 
does. Could you explain further if you have any other concerns?
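
   For illustration only, a rough sketch of the alternative mentioned above, 
using Spark's internal ConfigBuilder DSL. The conf name 
`spark.k8s.executor.configmap.namespace` is hypothetical (taken from this 
comment, not an existing conf), and this is not part of the PR:

   ```scala
   // Hypothetical sketch: a dedicated conf for the executor ConfigMap
   // namespace, so the generic ConfigMap key space is left untouched.
   // ConfigBuilder is Spark's internal config DSL (org.apache.spark.internal.config).
   private[spark] val KUBERNETES_EXECUTOR_CONFIGMAP_NAMESPACE =
     ConfigBuilder("spark.k8s.executor.configmap.namespace")
       .doc("Namespace value written into the executor ConfigMap; when unset, " +
         "fall back to spark.kubernetes.namespace directly.")
       .version("3.3.0") // placeholder version for the sketch
       .stringConf
       .createOptional
   ```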




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


