Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/19954#discussion_r157327721
--- Diff: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala ---
@@ -133,30 +132,78 @@ private[spark] object Config extends Logging {
   val JARS_DOWNLOAD_LOCATION =
     ConfigBuilder("spark.kubernetes.mountDependencies.jarsDownloadDir")
-      .doc("Location to download jars to in the driver and executors. When using" +
-        " spark-submit, this directory must be empty and will be mounted as an empty directory" +
-        " volume on the driver and executor pod.")
+      .doc("Location to download jars to in the driver and executors. When using " +
+        "spark-submit, this directory must be empty and will be mounted as an empty directory " +
+        "volume on the driver and executor pod.")
       .stringConf
       .createWithDefault("/var/spark-data/spark-jars")

   val FILES_DOWNLOAD_LOCATION =
     ConfigBuilder("spark.kubernetes.mountDependencies.filesDownloadDir")
-      .doc("Location to download files to in the driver and executors. When using" +
-        " spark-submit, this directory must be empty and will be mounted as an empty directory" +
-        " volume on the driver and executor pods.")
+      .doc("Location to download files to in the driver and executors. When using " +
+        "spark-submit, this directory must be empty and will be mounted as an empty directory " +
+        "volume on the driver and executor pods.")
       .stringConf
       .createWithDefault("/var/spark-data/spark-files")

+  val INIT_CONTAINER_DOCKER_IMAGE =
+    ConfigBuilder("spark.kubernetes.initContainer.docker.image")
--- End diff --
> Is it a required config?

No; one may forgo the init container by building the dependencies into the Docker image itself and supplying them via `local:///` paths.
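The required-vs-optional distinction can be sketched as follows. Note that `SimpleConfigBuilder`, `ConfigEntry`, and `resolve` below are hypothetical simplifications for illustration, not Spark's actual internal `ConfigBuilder` API:

```scala
// Hedged sketch: simplified stand-ins for Spark's internal config machinery.
// An entry created with createOptional has no default, so leaving it unset
// resolves to None, which a caller can read as "skip the init container".
final case class ConfigEntry(key: String, default: Option[String])

final class SimpleConfigBuilder(key: String) {
  // Optional entry: unset means the feature is not used.
  def createOptional: ConfigEntry = ConfigEntry(key, None)
  // Defaulted entry: always resolves to some value.
  def createWithDefault(d: String): ConfigEntry = ConfigEntry(key, Some(d))
}

object InitContainerSketch {
  // Mirrors the optional init-container image config under discussion.
  val initContainerImage: ConfigEntry =
    new SimpleConfigBuilder("spark.kubernetes.initContainer.docker.image").createOptional

  // None signals "no init container"; dependencies are then expected to be
  // baked into the main image and referenced via local:/// paths.
  def resolve(settings: Map[String, String], entry: ConfigEntry): Option[String] =
    settings.get(entry.key).orElse(entry.default)
}
```

With this shape, code that launches the driver pod can pattern-match on the resolved `Option[String]` and only add the init container when a value is present.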
---