This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
     new 9cd4be6  [SPARK-38561][K8S][DOCS][3.3] Add doc for `Customized Kubernetes Schedulers`
9cd4be6 is described below

commit 9cd4be69fe39462ea5fdf8949a489a5c152a3dfe
Author: Yikun Jiang <[email protected]>
AuthorDate: Tue Mar 29 11:01:06 2022 -0700

    [SPARK-38561][K8S][DOCS][3.3] Add doc for `Customized Kubernetes Schedulers`
    
    ### What changes were proposed in this pull request?
    This PR adds documentation for the basic framework capability for Customized Kubernetes Schedulers.
    
    ### Why are the changes needed?
    Guide users on how to use Spark on Kubernetes with a custom scheduler.
    
    ### Does this PR introduce _any_ user-facing change?
    No
    
    ### How was this patch tested?
    CI passed
    
    Closes #35955 from Yikun/SPARK-38561-3.3.
    
    Authored-by: Yikun Jiang <[email protected]>
    Signed-off-by: Dongjoon Hyun <[email protected]>
---
 docs/running-on-kubernetes.md | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/docs/running-on-kubernetes.md b/docs/running-on-kubernetes.md
index 6fec9ba..a262f98 100644
--- a/docs/running-on-kubernetes.md
+++ b/docs/running-on-kubernetes.md
@@ -1713,6 +1713,25 @@ spec:
     image: will-be-overwritten
 ```
 
+#### Customized Kubernetes Schedulers for Spark on Kubernetes
+
+Spark allows users to specify a custom Kubernetes scheduler.
+
+1. Specify a scheduler name.
+
+   Users can specify a custom scheduler using the <code>spark.kubernetes.scheduler.name</code> or
+   <code>spark.kubernetes.{driver/executor}.scheduler.name</code> configuration.
+
+2. Specify scheduler related configurations.
+
+   To configure the custom scheduler, the user can use [Pod templates](#pod-template), add labels (<code>spark.kubernetes.{driver/executor}.label.*</code>), annotations (<code>spark.kubernetes.{driver/executor}.annotation.*</code>) or scheduler-specific configurations (such as <code>spark.kubernetes.scheduler.volcano.podGroupTemplateFile</code>).
+
+3. Specify scheduler feature step.
+
+   Users may also consider using <code>spark.kubernetes.{driver/executor}.pod.featureSteps</code> to support more complex requirements, including but not limited to:
+  - Create additional Kubernetes custom resources for driver/executor scheduling.
+  - Set scheduler hints dynamically according to configuration or existing Pod info.
+
 ### Stage Level Scheduling Overview
 
 Stage level scheduling is supported on Kubernetes when dynamic allocation is enabled. This also requires <code>spark.dynamicAllocation.shuffleTracking.enabled</code> to be enabled since Kubernetes doesn't support an external shuffle service at this time. The order in which containers for different profiles are requested from Kubernetes is not guaranteed. Note that since dynamic allocation on Kubernetes requires the shuffle tracking feature, this means that executors from previous stages t [...]
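Taken together, the three steps documented in this patch can be sketched as one `spark-submit` invocation. This is an illustrative fragment only, not part of the commit: the scheduler name `volcano`, the pod-group annotation, and the `VolcanoFeatureStep` class are assumptions about one possible custom-scheduler setup, and the image and jar paths are placeholders.

```bash
# Illustrative sketch only. Assumes a custom scheduler named "volcano" is
# already installed in the cluster; <...> values are placeholders.
# - spark.kubernetes.scheduler.name          -> step 1 (scheduler name)
# - the *.annotation.* settings              -> step 2 (scheduler config)
# - spark.kubernetes.driver.pod.featureSteps -> step 3 (feature step)
spark-submit \
  --master k8s://https://<k8s-apiserver-host>:6443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.container.image=<spark-image> \
  --conf spark.kubernetes.scheduler.name=volcano \
  --conf spark.kubernetes.driver.annotation.scheduling.k8s.io/group-name=spark-pg \
  --conf spark.kubernetes.executor.annotation.scheduling.k8s.io/group-name=spark-pg \
  --conf spark.kubernetes.driver.pod.featureSteps=org.apache.spark.deploy.k8s.features.VolcanoFeatureStep \
  local:///opt/spark/examples/jars/spark-examples.jar
```

The per-role variants (`spark.kubernetes.driver.scheduler.name` / `spark.kubernetes.executor.scheduler.name`) can replace the single `spark.kubernetes.scheduler.name` setting when driver and executor pods should go to different schedulers.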

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
