Github user liyinan926 commented on a diff in the pull request:
https://github.com/apache/spark/pull/21748#discussion_r204138506
--- Diff: docs/running-on-kubernetes.md ---
@@ -129,20 +129,27 @@ Spark executors must be able to connect to the Spark driver over a hostname and
 executors. The specific network configuration that will be required for Spark to work in client mode will vary per
 setup. If you run your driver inside a Kubernetes pod, you can use a [headless service](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services) to allow your
-driver pod to be routable from the executors by a stable hostname. Specify the driver's hostname via `spark.driver.host`
-and your spark driver's port to `spark.driver.port`.
+driver pod to be routable from the executors by a stable hostname. When deploying your headless service, ensure that
+the service's label selector will only match the driver pod and no other pods; it is recommended to assign your driver
+pod a sufficiently unique label and to use that label in the node selector of the headless service. Specify the driver's
--- End diff --
s/`node selector`/`label selector`/.
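For illustration, a headless Service using a label selector as suggested might look like the sketch below. All names, the label key/value, and the port are hypothetical placeholders, not values mandated by Spark:

```yaml
# Hypothetical headless Service routing a stable hostname to the Spark driver pod.
# Assumes the driver pod carries the unique label spark-driver-selector: my-driver
# (label key and value are placeholders chosen for this example).
apiVersion: v1
kind: Service
metadata:
  name: spark-driver-headless
spec:
  clusterIP: None                      # "None" is what makes the Service headless
  selector:
    spark-driver-selector: my-driver   # label selector matching only the driver pod
  ports:
    - name: driver-rpc
      port: 7078                       # example port; must match spark.driver.port
      protocol: TCP
```

With such a Service deployed, `spark.driver.host` would be set to the Service's DNS name and `spark.driver.port` to the exposed port, so executors can reach the driver by a stable hostname.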
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]