This is an automated email from the ASF dual-hosted git repository.

yuteng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/yunikorn-site.git


The following commit(s) were added to refs/heads/master by this push:
     new 83d0c61e24 [YUNIKORN-2675] A Example doc of RayCluster and RayJob management wit… (#441)
83d0c61e24 is described below

commit 83d0c61e2482453713330e09f17709fb2801a67c
Author: mean-world <[email protected]>
AuthorDate: Sun Jun 30 11:42:06 2024 +0800

    [YUNIKORN-2675] A Example doc of RayCluster and RayJob management wit… (#441)
    
    * [YUNIKORN-2675] A Example doc of RayCluster and RayJob management with Yunikorn
---
 docs/assets/ray_cluster_cluster.png            | Bin 0 -> 169047 bytes
 docs/assets/ray_cluster_on_ui.png              | Bin 0 -> 150116 bytes
 docs/assets/ray_cluster_operator.png           | Bin 0 -> 153070 bytes
 docs/assets/ray_cluster_ray_dashborad.png      | Bin 0 -> 68327 bytes
 docs/assets/ray_job_job.png                    | Bin 0 -> 170877 bytes
 docs/assets/ray_job_on_ui.png                  | Bin 0 -> 146317 bytes
 docs/assets/ray_job_ray_dashboard.png          | Bin 0 -> 58353 bytes
 docs/user_guide/workloads/run_ray_cluster.md   |  83 +++++++++++++++++++++++++
 docs/user_guide/workloads/run_ray_job.md       |  73 ++++++++++++++++++++++
 docs/user_guide/workloads/workload_overview.md |   2 +
 sidebars.js                                    |   4 +-
 11 files changed, 161 insertions(+), 1 deletion(-)

diff --git a/docs/assets/ray_cluster_cluster.png b/docs/assets/ray_cluster_cluster.png
new file mode 100644
index 0000000000..be1da55704
Binary files /dev/null and b/docs/assets/ray_cluster_cluster.png differ
diff --git a/docs/assets/ray_cluster_on_ui.png b/docs/assets/ray_cluster_on_ui.png
new file mode 100644
index 0000000000..78643183e6
Binary files /dev/null and b/docs/assets/ray_cluster_on_ui.png differ
diff --git a/docs/assets/ray_cluster_operator.png b/docs/assets/ray_cluster_operator.png
new file mode 100644
index 0000000000..7973e2b1ee
Binary files /dev/null and b/docs/assets/ray_cluster_operator.png differ
diff --git a/docs/assets/ray_cluster_ray_dashborad.png b/docs/assets/ray_cluster_ray_dashborad.png
new file mode 100644
index 0000000000..cbc56fa191
Binary files /dev/null and b/docs/assets/ray_cluster_ray_dashborad.png differ
diff --git a/docs/assets/ray_job_job.png b/docs/assets/ray_job_job.png
new file mode 100644
index 0000000000..312d08e11b
Binary files /dev/null and b/docs/assets/ray_job_job.png differ
diff --git a/docs/assets/ray_job_on_ui.png b/docs/assets/ray_job_on_ui.png
new file mode 100644
index 0000000000..bcefeb5abf
Binary files /dev/null and b/docs/assets/ray_job_on_ui.png differ
diff --git a/docs/assets/ray_job_ray_dashboard.png b/docs/assets/ray_job_ray_dashboard.png
new file mode 100644
index 0000000000..7e80da3023
Binary files /dev/null and b/docs/assets/ray_job_ray_dashboard.png differ
diff --git a/docs/user_guide/workloads/run_ray_cluster.md b/docs/user_guide/workloads/run_ray_cluster.md
new file mode 100644
index 0000000000..19bceeadb4
--- /dev/null
+++ b/docs/user_guide/workloads/run_ray_cluster.md
@@ -0,0 +1,83 @@
+---
+id: run_ray_cluster
+title: Run Ray Cluster
+description: How to run Ray Cluster jobs with YuniKorn
+keywords:
+ - Ray_crd
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+## Note
+This example demonstrates how to set up [KubeRay](https://docs.ray.io/en/master/cluster/kubernetes/getting-started.html) and run a [Ray Cluster](https://docs.ray.io/en/master/cluster/kubernetes/getting-started/raycluster-quick-start.html) with the YuniKorn scheduler. It relies on the admission controller to set the default applicationId and queue name. For more details, refer to [Yunikorn supported labels](https://yunikorn.apache.org/docs/user_guide/labels_and_annotat [...]
+
+## Modify YuniKorn settings
+Follow the [YuniKorn install guide](https://yunikorn.apache.org/docs/) and modify the YuniKorn ConfigMap `yunikorn-defaults` so that the Ray operator's Kubernetes service accounts are treated as system users:
+```
+kubectl patch configmap yunikorn-defaults -n yunikorn --patch '{"data":{"admissionController.accessControl.systemUsers": "^system:serviceaccount:kube-system:,^system:serviceaccount:default:"}}'
+```
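
The patched value is a comma-separated list of anchored regular expressions; roughly, the admission controller treats any user whose name matches one of them as a system user. Service-account users have the form `system:serviceaccount:<namespace>:<name>`. A minimal sketch of that matching in plain Python (the service-account names below are illustrative, not taken from this guide):

```python
import re

# Regex list from the systemUsers patch above.
system_user_patterns = [
    r"^system:serviceaccount:kube-system:",
    r"^system:serviceaccount:default:",
]

def is_system_user(username: str) -> bool:
    """Return True if the username matches any configured system-user pattern."""
    return any(re.search(p, username) for p in system_user_patterns)

# The KubeRay operator runs under a service account (name illustrative):
print(is_system_user("system:serviceaccount:default:kuberay-operator"))  # True
print(is_system_user("system:serviceaccount:other-ns:some-controller"))  # False
```

Any service account in the `kube-system` or `default` namespace therefore passes the check, which covers the Helm-installed KubeRay operator below.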
+
+## Setup a KubeRay operator
+```
+helm repo add kuberay https://ray-project.github.io/kuberay-helm/
+helm repo update
+helm install kuberay-operator kuberay/kuberay-operator --version 1.1.1
+```
+- The result should be as shown below
+![ray_cluster_operator](../../assets/ray_cluster_operator.png)
+
+## Create a Ray Cluster
+```
+helm install raycluster kuberay/ray-cluster --version 1.1.1
+```
+- Ray Cluster result
+  ![ray_cluster_cluster](../../assets/ray_cluster_cluster.png)
+- YuniKorn UI
+  ![ray_cluster_on_ui](../../assets/ray_cluster_on_ui.png)
+  
+### Configure your Ray Cluster (optional)
+If you disable the admission controller, you need to add `schedulerName: yunikorn` to the [raycluster spec](https://github.com/ray-project/kuberay/blob/master/helm-chart/ray-cluster/templates/raycluster-cluster.yaml#L40).
+```
+# example
+metadata:
+  labels:
+    applicationId: ray-cluster-0001
+    queue: root.ray.clusters
+spec:
+  schedulerName: yunikorn # Kubernetes hands these pods to the yunikorn scheduler
+```
+
+## Submit a Ray job to the Ray Cluster
+```
+export HEAD_POD=$(kubectl get pods --selector=ray.io/node-type=head -o custom-columns=POD:metadata.name --no-headers)
+echo $HEAD_POD
+
+kubectl exec -it $HEAD_POD -- python -c "import ray; ray.init(); print(ray.cluster_resources())"
+```
+
+Services in Kubernetes aren't directly accessible by default. However, you can use port-forwarding to connect to them locally.
+```
+kubectl port-forward service/raycluster-kuberay-head-svc 8265:8265
+```
+After the port-forward is set up, you can access the Ray dashboard at `http://localhost:8265` in your web browser.
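
To check the forwarded port from a script rather than a browser, a small sketch using only the Python standard library (the URL assumes the `kubectl port-forward` command above is running):

```python
from urllib.request import urlopen
from urllib.error import URLError

def dashboard_reachable(url: str = "http://localhost:8265", timeout: float = 2.0) -> bool:
    """Return True if an HTTP server (e.g. the Ray dashboard) answers at url."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (URLError, OSError):
        # Connection refused or timeout: the port-forward is not up.
        return False

print(dashboard_reachable())
```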
+
+- Ray Dashboard
+  ![ray_cluster_ray_dashborad](../../assets/ray_cluster_ray_dashborad.png)
+
diff --git a/docs/user_guide/workloads/run_ray_job.md b/docs/user_guide/workloads/run_ray_job.md
new file mode 100644
index 0000000000..bc7a193baf
--- /dev/null
+++ b/docs/user_guide/workloads/run_ray_job.md
@@ -0,0 +1,73 @@
+---
+id: run_ray_job
+title: Run RayJob
+description: How to run RayJobs with YuniKorn
+keywords:
+ - Ray_crd
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+## Note
+This example shows how to set up [KubeRay](https://docs.ray.io/en/master/cluster/kubernetes/getting-started.html) and run a [Ray Job](https://docs.ray.io/en/master/cluster/kubernetes/getting-started/rayjob-quick-start.html) with the YuniKorn scheduler. It relies on the admission controller to set the default applicationId and queue name. For more details, refer to [Yunikorn supported labels](https://yunikorn.apache.org/docs/user_guide/labels_and_annotations_in_yunikorn) and [Yu [...]
+
+## Modify YuniKorn settings
+Follow the [YuniKorn install guide](https://yunikorn.apache.org/docs/) and modify the YuniKorn ConfigMap `yunikorn-defaults`:
+```
+kubectl patch configmap yunikorn-defaults -n yunikorn --patch '{"data":{"admissionController.accessControl.systemUsers": "^system:serviceaccount:kube-system:,^system:serviceaccount:default:"}}'
+```
+
+## Setup a KubeRay operator
+```
+helm repo add kuberay https://ray-project.github.io/kuberay-helm/
+helm repo update
+helm install kuberay-operator kuberay/kuberay-operator --version 1.1.1
+```
+
+### Configure your Ray Cluster (optional)
+If you disable the admission controller, you need to add `schedulerName: yunikorn` to the [raycluster spec](https://github.com/ray-project/kuberay/blob/master/helm-chart/ray-cluster/templates/raycluster-cluster.yaml#L40). Pods that share the same applicationId label are grouped under the same application.
+```
+# example
+metadata:
+  labels:
+    applicationId: ray-cluster-0001
+    queue: root.ray.clusters
+spec:
+  schedulerName: yunikorn # Kubernetes hands these pods to the yunikorn scheduler
+```
+
+## Run a RayJob
+```
+kubectl apply -f https://raw.githubusercontent.com/ray-project/kuberay/v1.1.1/ray-operator/config/samples/ray-job.sample.yaml
+```
+
+- View the job status
+    ![ray_job_job](../../assets/ray_job_job.png)
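
The job status can also be read programmatically from the RayJob object. A hedged sketch that parses the output of `kubectl get rayjob <name> -o json`; the `status.jobStatus` field path is assumed from the KubeRay RayJob CRD and should be verified against your KubeRay version:

```python
import json

def rayjob_status(kubectl_json: str) -> str:
    """Extract jobStatus from `kubectl get rayjob <name> -o json` output."""
    obj = json.loads(kubectl_json)
    # Field path assumed from the KubeRay RayJob CRD; adjust if your version differs.
    return obj.get("status", {}).get("jobStatus", "UNKNOWN")

# Trimmed-down sample payload for illustration:
sample = '{"status": {"jobStatus": "SUCCEEDED", "jobDeploymentStatus": "Complete"}}'
print(rayjob_status(sample))  # SUCCEEDED
```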
+
+Services in Kubernetes aren't directly accessible by default. However, you can use port-forwarding to connect to them locally.
+```
+kubectl port-forward service/raycluster-kuberay-head-svc 8265:8265
+```
+After the port-forward is set up, you can access the Ray dashboard at `http://localhost:8265` in your web browser.
+
+- Ray Dashboard
+    ![ray_job_ray_dashboard](../../assets/ray_job_ray_dashboard.png)
+- YuniKorn UI
+    ![ray_job_on_ui](../../assets/ray_job_on_ui.png)
\ No newline at end of file
diff --git a/docs/user_guide/workloads/workload_overview.md b/docs/user_guide/workloads/workload_overview.md
index 7040e79bd4..e9d27903c7 100644
--- a/docs/user_guide/workloads/workload_overview.md
+++ b/docs/user_guide/workloads/workload_overview.md
@@ -58,3 +58,5 @@ Examples of more advanced use cases can be found here:
 * [Run Flink Jobs](run_flink)
 * [Run TensorFlow Jobs](run_tf)
 * [Run MPI Jobs](run_mpi)
+* [Run Ray Cluster](run_ray_cluster)
+* [Run RayJob](run_ray_job)
diff --git a/sidebars.js b/sidebars.js
index 018de07bd0..bff0847f49 100644
--- a/sidebars.js
+++ b/sidebars.js
@@ -46,7 +46,9 @@ module.exports = {
                     'user_guide/workloads/run_spark',
                     'user_guide/workloads/run_flink',
                     'user_guide/workloads/run_tf',
-                    'user_guide/workloads/run_mpi'
+                    'user_guide/workloads/run_mpi',
+                    'user_guide/workloads/run_ray_cluster',
+                    'user_guide/workloads/run_ray_job'
                 ],
             },
             {


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
