Github user onursatici commented on a diff in the pull request:
https://github.com/apache/spark/pull/22146#discussion_r214339066
--- Diff: docs/running-on-kubernetes.md ---
@@ -185,6 +185,21 @@ To use a secret through an environment variable use the following options to the
 --conf spark.kubernetes.executor.secretKeyRef.ENV_NAME=name:key
 ```
+## Pod Template
+Kubernetes allows defining pods from [template files](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/#pod-templates).
+Spark users can similarly use template files to define the driver or executor pod configurations that Spark configurations do not support.
+To do so, specify the Spark properties `spark.kubernetes.driver.podTemplateFile` and `spark.kubernetes.executor.podTemplateFile`
+to point to local files accessible to the `spark-submit` process. To allow the driver pod to access the executor pod template
+file, the file will be automatically mounted onto a volume in the driver pod when it's created.
+
+It is important to note that Spark is opinionated about certain pod configurations, so there are values in the
+pod template that will always be overwritten by Spark. Therefore, users of this feature should note that specifying
+the pod template file only lets Spark start with a template pod instead of an empty pod during the pod-building process.
+For details, see the [full list](#pod-template-properties) of pod template values that will be overwritten by Spark.
+
+Pod template files can also define multiple containers. In such cases, Spark will always assume that the first container in
+the list will be the driver or executor container.
--- End diff ---
@skonto True, but this avoids adding a new spark conf for the container name.
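
For illustration, here is a minimal sketch of that first-container convention, assuming only the `podTemplateFile` property names from the diff above; the file paths, container names, sidecar image, and master URL placeholders are all hypothetical:

```bash
# Hypothetical driver pod template with two containers. Spark treats the
# FIRST container in the list as the driver container (filling in its image,
# resources, env, etc.); the second is carried over as an untouched sidecar.
cat > /tmp/driver-pod-template.yaml <<'EOF'
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: spark-driver        # first container: used as the driver
    - name: log-shipper         # second container: left as-is by Spark
      image: fluent/fluent-bit:latest
EOF

# Point spark-submit at the template files via the new properties.
bin/spark-submit \
  --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.kubernetes.driver.podTemplateFile=/tmp/driver-pod-template.yaml \
  --conf spark.kubernetes.executor.podTemplateFile=/tmp/executor-pod-template.yaml \
  --conf spark.kubernetes.container.image=<spark-image> \
  local:///path/to/examples.jar
```

With the first-container convention, nothing beyond the template file itself is needed to pick the container, which is the trade-off being discussed here.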