Github user liyinan926 commented on a diff in the pull request:
https://github.com/apache/spark/pull/21748#discussion_r202752192
--- Diff: docs/running-on-kubernetes.md ---
@@ -117,6 +117,37 @@ If the local proxy is running at localhost:8001, `--master k8s://http://127.0.0.1:8001` can be used
 as the argument to spark-submit. Finally, notice that in the above example we specify a jar with a specific URI with
 a scheme of `local://`. This URI is the location of the example jar that is already in the Docker image.
+## Client Mode
+
+Starting with Spark 2.4.0, it is possible to run Spark applications on Kubernetes in client mode. When a Spark
+application runs in client mode, no separate pod is deployed to run the driver. When running in client mode, it is
+recommended to account for the following factors:
+
+### Client Mode Networking
+
+Spark executors must be able to connect to the Spark driver over a hostname and a port that are routable from the
+Spark executors. The specific network configuration required for Spark to work in client mode varies per setup. If
+you run your driver inside a Kubernetes pod, you can use a
+[headless service](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services) to make your
+driver pod routable from the executors by a stable hostname. Specify the driver's hostname via `spark.driver.host`
+and the driver's port via `spark.driver.port`.
+
+### Client Mode Garbage Collection
--- End diff --
Can this be renamed to `Executor Pod Garbage Collection in Client Mode`?
---
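For context, the headless-service setup the added docs describe could be sketched as below. This is a hedged illustration, not part of the diff: the Service name `spark-driver-svc`, the `default` namespace, port `7078`, the pod label `app=spark-driver`, and the application jar path are all hypothetical, and it assumes the driver pod already exists and that its image exposes `spark-submit`. It is an invocation/config fragment that requires a live cluster, so it is not runnable standalone.

```bash
# Sketch only (hypothetical names): create a headless Service (clusterIP: None)
# so executors can reach the driver pod by a stable DNS name. Note that
# `kubectl create service clusterip` sets a selector of app=<name>, so the
# driver pod is assumed to carry the matching label.
kubectl create service clusterip spark-driver-svc --clusterip="None" --tcp=7078:7078

# Submit in client mode from inside the driver pod, pointing executors at the
# Service's DNS name via spark.driver.host/spark.driver.port.
spark-submit \
  --master k8s://https://kubernetes.default.svc \
  --deploy-mode client \
  --conf spark.driver.host=spark-driver-svc.default.svc.cluster.local \
  --conf spark.driver.port=7078 \
  local:///opt/app/my-app.jar
```

The headless Service matters because a plain ClusterIP address would proxy traffic, whereas a headless Service's DNS record resolves directly to the driver pod's IP, giving executors a stable, routable hostname.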
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]