dongjoon-hyun commented on code in PR #41257:
URL: https://github.com/apache/spark/pull/41257#discussion_r1228906376
##########
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala:
##########
@@ -165,6 +165,16 @@ private[spark] object Config extends Logging {
.checkValue(_ <= 1048576, "Must have at most 1048576 bytes")
.createWithDefault(1048576) // 1.0 MiB
+ val SPARK_CONF_DIR_CONFIG_MAP_NAME =
+ ConfigBuilder("spark.kubernetes.sparkConfDir.configMapName")
+ .internal()
+    .doc("Name of the config map that would be mounted as driver and executor's " +
Review Comment:
~Do you think we can simply reuse the existing driver ConfigMap?~
- ~Technically, this proposal is not a simple reuse. It enforces a user-provided ConfigMap, which adds a little overhead.~
- ~As you know, if we have an existing ConfigMap, we can achieve this with a pod template instead of this new configuration.~
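
The pod-template alternative mentioned above could be sketched roughly as below. This is only an illustrative assumption: the ConfigMap name `my-spark-conf` and the mount path are hypothetical, not taken from the PR.

```yaml
# Hypothetical executor pod template (passed via
# spark.kubernetes.executor.podTemplateFile) that mounts a pre-existing
# ConfigMap named "my-spark-conf" as the Spark conf directory.
# The ConfigMap name and mountPath are illustrative assumptions.
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: spark-kubernetes-executor
      volumeMounts:
        - name: spark-conf-volume
          mountPath: /opt/spark/conf
  volumes:
    - name: spark-conf-volume
      configMap:
        name: my-spark-conf
```

With a template like this, an existing ConfigMap can be mounted without introducing a new Spark configuration key, which is the trade-off the review comment weighs.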
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]