Repository: spark
Updated Branches:
  refs/heads/master 0f56977f8 -> eea4a0330


[MINOR][K8S] Invalid property "spark.driver.pod.name" is referenced in docs.

## What changes were proposed in this pull request?

"Running on Kubernetes" references `spark.driver.pod.name` in a few places; it
should be `spark.kubernetes.driver.pod.name`.
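For context, a minimal sketch of how the corrected property is passed on submission; the master URL and pod name below are hypothetical examples, not values from this patch:

```shell
# Client-mode submission from inside a driver pod (illustrative values).
# Setting spark.kubernetes.driver.pod.name lets the scheduler attach an
# OwnerReference from each executor pod back to this driver pod.
spark-submit \
  --master k8s://https://kubernetes.example.com:6443 \
  --deploy-mode client \
  --conf spark.kubernetes.driver.pod.name="$(hostname)" \
  --conf spark.kubernetes.container.image=spark:example \
  local:///opt/spark/examples/src/main/python/pi.py
```

In client mode the driver pod's name typically matches its hostname, hence the `$(hostname)` shorthand above; any mechanism that yields the actual pod name works.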

## How was this patch tested?
See changes

Closes #23133 from Leemoonsoo/fix-driver-pod-name-prop.

Authored-by: Lee moon soo <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/eea4a033
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/eea4a033
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/eea4a033

Branch: refs/heads/master
Commit: eea4a0330b913cd45e369f09ec3d1dbb1b81f1b5
Parents: 0f56977
Author: Lee moon soo <[email protected]>
Authored: Sat Nov 24 16:09:13 2018 -0800
Committer: Dongjoon Hyun <[email protected]>
Committed: Sat Nov 24 16:09:13 2018 -0800

----------------------------------------------------------------------
 docs/running-on-kubernetes.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/eea4a033/docs/running-on-kubernetes.md
----------------------------------------------------------------------
diff --git a/docs/running-on-kubernetes.md b/docs/running-on-kubernetes.md
index a9d4488..e940d9a 100644
--- a/docs/running-on-kubernetes.md
+++ b/docs/running-on-kubernetes.md
@@ -166,7 +166,7 @@ hostname via `spark.driver.host` and your spark driver's port to `spark.driver.p
 
 ### Client Mode Executor Pod Garbage Collection
 
-If you run your Spark driver in a pod, it is highly recommended to set `spark.driver.pod.name` to the name of that pod.
+If you run your Spark driver in a pod, it is highly recommended to set `spark.kubernetes.driver.pod.name` to the name of that pod.
 When this property is set, the Spark scheduler will deploy the executor pods with an
 [OwnerReference](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/), which in turn will
 ensure that once the driver pod is deleted from the cluster, all of the application's executor pods will also be deleted.
@@ -175,7 +175,7 @@ an OwnerReference pointing to that pod will be added to each executor pod's Owne
 setting the OwnerReference to a pod that is not actually that driver pod, or else the executors may be terminated
 prematurely when the wrong pod is deleted.
 
-If your application is not running inside a pod, or if `spark.driver.pod.name` is not set when your application is
+If your application is not running inside a pod, or if `spark.kubernetes.driver.pod.name` is not set when your application is
 actually running in a pod, keep in mind that the executor pods may not be properly deleted from the cluster when the
 application exits. The Spark scheduler attempts to delete these pods, but if the network request to the API server fails
 for any reason, these pods will remain in the cluster. The executor processes should exit when they cannot reach the

