Github user erikerlandson commented on a diff in the pull request:
https://github.com/apache/spark/pull/22146#discussion_r212423941
--- Diff: docs/running-on-kubernetes.md ---
@@ -775,4 +787,183 @@ specific to Spark on Kubernetes.
This sets the major Python version of the docker image used to run the
driver and executor containers. Can either be 2 or 3.
</td>
</tr>
+<tr>
+ <td><code>spark.kubernetes.driver.containerName</code></td>
+ <td><code>"spark-kubernetes-driver"</code></td>
+ <td>
+ This sets the driver container name. If you are specifying a driver
+ [pod template](#pod-template), you can match this name to the
+ driver container name set in the template.
+ </td>
+</tr>
+<tr>
+ <td><code>spark.kubernetes.executor.containerName</code></td>
+ <td><code>"spark-kubernetes-executor"</code></td>
+ <td>
+ This sets the executor container name. If you are specifying an
+ executor [pod template](#pod-template), you can match this name to the
+ executor container name set in the template.
+ </td>
+</tr>
+<tr>
+ <td><code>spark.kubernetes.driver.podTemplateFile</code></td>
+ <td>(none)</td>
+ <td>
+ Specify the local file that contains the driver
+ [pod template](#pod-template). For example,
+ <code>spark.kubernetes.driver.podTemplateFile=/path/to/driver-pod-template.yaml</code>
+ </td>
+</tr>
+<tr>
+ <td><code>spark.kubernetes.executor.podTemplateFile</code></td>
+ <td>(none)</td>
+ <td>
+ Specify the local file that contains the executor
+ [pod template](#pod-template). For example,
+ <code>spark.kubernetes.executor.podTemplateFile=/path/to/executor-pod-template.yaml</code>
+ </td>
+</tr>
+</table>
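+
+For example, a minimal driver pod template might look like the following
+sketch. The container name matches the documented default of
+<code>spark.kubernetes.driver.containerName</code>; the resource limit and
+file path are illustrative placeholders only.

```yaml
# Hypothetical driver pod template (illustrative values only).
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: spark-kubernetes-driver   # matches the default spark.kubernetes.driver.containerName
      resources:
        limits:
          memory: 2Gi                 # illustrative resource limit
```

+It would then be supplied at submit time with
+<code>--conf spark.kubernetes.driver.podTemplateFile=/path/to/driver-pod-template.yaml</code>.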
+
+### Pod template properties
+
+See the table below for the full list of pod specification fields that will be
+overwritten by Spark.
+
+### Pod Metadata
+
+<table class="table">
+<tr><th>Pod metadata key</th><th>Modified value</th><th>Description</th></tr>
+<tr>
+ <td>name</td>
+ <td>Value of <code>spark.kubernetes.driver.pod.name</code></td>
+ <td>
+ The driver pod name will be overwritten with either the configured or
+ default value of <code>spark.kubernetes.driver.pod.name</code>.
+ The executor pod names will be unaffected.
+ </td>
+</tr>
+<tr>
+ <td>namespace</td>
+ <td>Value of <code>spark.kubernetes.namespace</code></td>
+ <td>
+ Spark makes strong assumptions about the driver and executor
+ namespaces. Both driver and executor namespaces will
--- End diff --
If the Spark conf value for namespace isn't set, can Spark use the template
setting, or will Spark's conf default also override the template?
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]