This is an automated email from the ASF dual-hosted git repository.

wilfreds pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/yunikorn-site.git


The following commit(s) were added to refs/heads/master by this push:
     new 70ad1f0167 [YUNIKORN-1876] Build environment setup update (#324)
70ad1f0167 is described below

commit 70ad1f016729d608a65cd352d3a9d35d0147faeb
Author: Craig Condit <[email protected]>
AuthorDate: Tue Oct 17 12:25:45 2023 +1100

    [YUNIKORN-1876] Build environment setup update (#324)
    
    Update and cleanup the documentation to reflect the current usage and
    best practices of building YuniKorn.
    
    Closes: #324
    
    Signed-off-by: Wilfred Spiegelenburg <[email protected]>
---
 docs/assets/goland_debug.png                  | Bin 0 -> 290778 bytes
 docs/developer_guide/build.md                 | 210 ++++++++++++--------
 docs/developer_guide/env_setup.md             | 265 ++++++++++++++++++--------
 docs/developer_guide/openshift_development.md |  23 ++-
 4 files changed, 333 insertions(+), 165 deletions(-)

diff --git a/docs/assets/goland_debug.png b/docs/assets/goland_debug.png
new file mode 100644
index 0000000000..aa8803f804
Binary files /dev/null and b/docs/assets/goland_debug.png differ
diff --git a/docs/developer_guide/build.md b/docs/developer_guide/build.md
index 03fc459208..1013a9fdb9 100644
--- a/docs/developer_guide/build.md
+++ b/docs/developer_guide/build.md
@@ -22,21 +22,26 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-YuniKorn always works with a container orchestrator system. Currently, a 
Kubernetes shim [yunikorn-k8shim](https://github.com/apache/yunikorn-k8shim)
-is provided in our repositories, you can leverage it to develop YuniKorn 
scheduling features and integrate with Kubernetes.
-This document describes resources how to setup dev environment and how to do 
the development.
+YuniKorn always works with a container orchestrator system. Currently, a
+Kubernetes shim ([yunikorn-k8shim](https://github.com/apache/yunikorn-k8shim))
+is provided, which offers a drop-in scheduler for the Kubernetes platform.
+This document describes how to set up and use a local development environment.
 
-## Development Environment setup
+## Dev Environment setup
 
-Read the [environment setup guide](developer_guide/env_setup.md) first to 
setup Docker and Kubernetes development environment.
+Read the [Dev Environment Setup](developer_guide/env_setup.md) guide first to
+set up a Docker and Kubernetes development environment.
 
-## Build YuniKorn for Kubernetes
+## Build YuniKorn
 
-Prerequisite:
-- Golang: check the `.go_version` file in the root of the repositories for the 
version Yunikorn requires. The minimum version can change per release branch.  
Earlier Go versions might cause compilation issues. 
+Prerequisites:
+- Golang: check the `.go_version` file in the root of the repositories for the
+version YuniKorn requires. The minimum version can change per release branch.
+Using earlier Go versions will cause compilation issues.
 
-You can build the scheduler for Kubernetes from 
[yunikorn-k8shim](https://github.com/apache/yunikorn-k8shim) project.
-The build procedure will build all components into a single executable that 
can be deployed and running on Kubernetes.
+You can build the scheduler for Kubernetes from the
+[yunikorn-k8shim](https://github.com/apache/yunikorn-k8shim)
+project. The build procedure will build all components into a single executable
+that can be deployed and run on Kubernetes.
 
 Start the integrated build process by pulling the `yunikorn-k8shim` repository:
 ```bash
@@ -44,56 +49,62 @@ mkdir $HOME/yunikorn/
 cd $HOME/yunikorn/
 git clone https://github.com/apache/yunikorn-k8shim.git
 ```
-At this point you have an environment that will allow you to build an 
integrated image for the YuniKorn scheduler.
-
-### A note on Go modules and git version
-Go use git to fetch module information.
-Certain modules cannot be retrieved if the git version installed on the 
machine used to build is old.
-A message similar to the one below will be logged when trying to build for the 
first time.
-```text
-go: finding modernc.org/[email protected]
-go: modernc.org/[email protected]: git fetch -f origin refs/heads/*:refs/heads/* 
refs/tags/*:refs/tags/* in <location>: exit status 128:
-       error: RPC failed; result=22, HTTP code = 404
-       fatal: The remote end hung up unexpectedly
-```
-Update git to a recent version to fix this issue.
-Git releases later than 1.22 are known to work.
 
-### Build Docker image
+At this point you have an environment that will allow you to build an
+integrated image for the YuniKorn scheduler.
 
-Building a docker image can be triggered by following command.
+### Build Docker images
 
-```
+Building the Docker images can be triggered by the following command:
+```shell script
 make image
 ```
 
-The image with the build in configuration can be deployed directly on 
kubernetes.
-Some sample deployments that can be used are found under 
[deployments](https://github.com/apache/yunikorn-k8shim/tree/master/deployments/scheduler)
 directory.
-For the deployment that uses a config map you need to set up the ConfigMap in 
kubernetes.
-How to deploy the scheduler with a ConfigMap is explained in the [scheduler 
configuration deployment](developer_guide/deployment.md) document.
+This will generate images for the scheduler, scheduler plugin, and admission
+controller.
 
-The image build command will first build the integrated executable and then 
create the docker image.
-If you want to use pre-built images based on a release, please check the 
[Docker Hub repo](https://hub.docker.com/r/apache/yunikorn).
+The images created can be deployed directly on Kubernetes.
+Some sample deployments that can be used are found under the
+[deployments/scheduler](https://github.com/apache/yunikorn-k8shim/tree/master/deployments/scheduler)
+directory of the `yunikorn-k8shim` repository. Alternatively, the Helm charts
+located within the
+[helm-charts](https://github.com/apache/yunikorn-release/tree/master/helm-charts)
+directory of the `yunikorn-release` repository may be used. These match what
+is used for release builds.
 
-The default image tags are not suitable for deployments to an accessible 
repository as it uses a hardcoded user and would push to Docker Hub with proper 
credentials.
-You *must* update the `TAG` variable in the `Makefile` to push to an 
accessible repository.
-When you update the image tag be aware that the deployment examples given will 
also need to be updated to reflect the same change.
+The configuration of YuniKorn can be customized via a ConfigMap as explained
+in the
+[scheduler configuration deployment](developer_guide/deployment.md) document.
 
-### Inspect the docker image
+The `make image` build command will first build the integrated executables and
+then create the Docker images. If you want to use pre-built images based on an
+official release, please check the
+[Docker Hub repo](https://hub.docker.com/r/apache/yunikorn).
 
-The docker image built from previous step has embedded some important build 
info in image's metadata. You can retrieve
-these info with docker `inspect` command.
+The default image tags are not suitable for deployments to a private
+repository as these would attempt to push to Docker Hub without proper
+credentials. You *must* update the `REGISTRY` variable in the `Makefile` to
+push to an accessible repository. When you update the image tag be aware that
+the deployment examples given will also need to be updated to reflect the same
+change.
 
-```
+### Inspect Docker images
+
+The Docker images built in the previous step have some important build info
+embedded in the image metadata. You can retrieve this information with the
+`docker inspect` command:
+
+```shell script
 docker inspect apache/yunikorn:scheduler-amd64-latest
+docker inspect apache/yunikorn:scheduler-plugin-amd64-latest
+docker inspect apache/yunikorn:admission-controller-amd64-latest
 ```
 
-The `amd64` tag is dependent on your host architecture (i.e. for Intel it 
would be `amd64` and for Mac M1, it would be `arm64v8`).
+The `amd64` tag is dependent on your host architecture (e.g. for Intel it would
+be `amd64` and for Mac M1, it would be `arm64`).
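The tag suffix can be derived from `uname -m`; a minimal shell sketch (the mapping below is an assumption based only on the two variants mentioned above):

```shell
# Map the architecture reported by uname -m to the YuniKorn image tag suffix.
# amd64/arm64 are the variants named above; anything else passes through.
arch="$(uname -m)"
case "$arch" in
  x86_64)        suffix="amd64" ;;
  arm64|aarch64) suffix="arm64" ;;
  *)             suffix="$arch" ;;
esac
echo "apache/yunikorn:scheduler-${suffix}-latest"
```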
 
-This info includes git revisions (last commit SHA) for each component, to help 
you understand which version of the source code
-was shipped by this image. They are listed as docker image `labels`, such as
+This info includes git revisions (last commit SHA) for each component, to help
+you understand which version of the source code was shipped by this image. They
+are listed as docker image `labels`, such as
 
-```
+```json
 "Labels": {
     "BuildTimeStamp": "2019-07-16T23:08:06+0800",
     "Version": "0.1",
@@ -105,32 +116,53 @@ was shipped by this docker image `labels`, such as
 
 ### Dependencies
 
-The dependencies in the projects are managed using [go 
modules](https://blog.golang.org/using-go-modules).
-Go Modules require at least Go version 1.11 to be installed on the development 
system.
-
-If you want to modify one of the projects locally and build with your local 
dependencies you will need to change the module file. 
-Changing dependencies uses mod `replace` directives as explained in the 
[Update dependencies](#updating-dependencies).
-
-The YuniKorn project has four repositories three of those repositories have a 
dependency at the go level.
-These dependencies are part of the go modules and point to the github 
repositories.
-During the development cycle it can be required to break the dependency on the 
committed version from github.
-This requires making changes in the module file to allow loading a local copy 
or a forked copy from a different repository.  
+The dependencies in the projects are managed using
+[go modules](https://blog.golang.org/using-go-modules).
+
+If you want to modify one of the projects locally and build with your local
+dependencies you will need to change the module file. Changing dependencies
+requires using `go.mod` `replace` directives as explained in the
+[Update dependencies](#updating-dependencies) section.
+
+The YuniKorn project has four code repositories:
+  - [yunikorn-scheduler-interface](https://github.com/apache/yunikorn-scheduler-interface)
+    (protobuf interface between core and shim)
+  - [yunikorn-core](https://github.com/apache/yunikorn-core)
+    (core scheduler logic)
+  - [yunikorn-k8shim](https://github.com/apache/yunikorn-k8shim)
+    (Kubernetes-specific shim)
+  - [yunikorn-web](https://github.com/apache/yunikorn-web)
+    (YuniKorn Web UI)
+
+Each of these repositories is a Go module and there are dependencies between
+them. During the development cycle it can be required to break the dependency
+on the committed version from GitHub. This requires making changes in the
+module file to allow loading a local copy or a forked copy from a different
+repository.
+
+Additionally, there are two auxiliary repositories:
+  - [yunikorn-release](https://github.com/apache/yunikorn-release)
+    (release management scripts and official Helm charts)
+  - [yunikorn-site](https://github.com/apache/yunikorn-site)
+    (source of the yunikorn.apache.org web site)
 
 #### Affected repositories
 The following dependencies exist between the repositories:
 
-| repository| depends on |
+| Repository| Depends on |
 | --- | --- |
 | yunikorn-core | yunikorn-scheduler-interface | 
 | yunikorn-k8shim | yunikorn-scheduler-interface, yunikorn-core |
 | yunikorn-scheduler-interface | none |
-| yunikorn-web | yunikorn-core |
+| yunikorn-web | none |
 
-The `yunikorn-web` repository has no direct go dependency on the other 
repositories. However any change to the `yunikorn-core` webservices can affect 
the web interface. 
+The `yunikorn-web` repository has no direct go dependency on the other
+repositories. However any change to the `yunikorn-core` web services can affect
+the web interface. 
 
 #### Making local changes
 
-To make sure that the local changes will not break other parts of the build 
you should run:
+To make sure that the local changes will not break other parts of the
+build you should run:
 - A full build `make` (build target depends on the repository)
 - A full unit test run `make test`
 
@@ -138,41 +170,59 @@ Any test failures should be fixed before proceeding.
 
 #### Updating dependencies
 
-The simplest way is to use the `replace` directive in the module file. The 
`replace` directive allows you to override the import path with a new (local) 
path.
-There is no need to change any of the imports in the source code. The change 
must be made in the `go.mod` file of the repository that has the dependency. 
+The simplest way is to use the `replace` directive in the module file.
+The `replace` directive allows you to override the import path with a new
+(local) path. There is no need to change any of the imports in the source code.
+The change must be made in the `go.mod` file of the repository that has the
+dependency. 
 
 Using `replace` to use a forked dependency, such as:
 ```
 replace github.com/apache/yunikorn-core => example.com/some/forked-yunikorn
 ```
 
-There is no requirement to fork and create a new repository. If you do not 
have a repository you can use a local checked out copy too. 
+There is no requirement to fork and create a new repository. If you do not have
+a repository you can use a local checked out copy too. 
+
+Using `replace` to use a local directory as a dependency:
 ```
 replace github.com/apache/yunikorn-core => /User/example/local/checked-out-yunikorn
 ```
-and for the same dependency using a relative path:
+
+For the same dependency using a relative path:
 ```
 replace github.com/apache/yunikorn-core => ../checked-out-yunikorn
 ```
-Note: if the `replace` directive is using a local filesystem path, then the 
target must have the `go.mod` file at that location.
+Note: if the `replace` directive is using a local filesystem path, then the
+target must have a `go.mod` file at that location.
 
-Further details on the modules' wiki: [When should I use the 'replace' 
directive?](https://github.com/golang/go/wiki/Modules#when-should-i-use-the-replace-directive).
+Further details can be found on the Go Wiki:
+[When should I use the 'replace' directive?](https://github.com/golang/go/wiki/Modules#when-should-i-use-the-replace-directive)
 
-## Build the web UI
+## Build the Web UI
 
-Example deployments reference the [YuniKorn web 
UI](https://github.com/apache/yunikorn-web). 
-The `yunikorn-web` project has specific requirements for the build. Follow the 
steps in the 
[README](https://github.com/apache/yunikorn-web/blob/master/README.md) to 
prepare a development environment and build the web UI. However, the scheduler 
is fully functional without the web UI. 
+Example deployments reference the
+[YuniKorn Web UI](https://github.com/apache/yunikorn-web). The `yunikorn-web`
+project has specific requirements for the build. Follow the steps in the
+[README](https://github.com/apache/yunikorn-web/blob/master/README.md) to
+prepare
+a development environment and build the Web UI. However, the scheduler is fully
+functional without the Web UI.
 
-## Locally run the integrated scheduler
+## Run YuniKorn locally
 
-When you have a local development environment setup you can run the scheduler 
in your local Kubernetes environment.
-This has been tested in a desktop enviornment with Docker Desktop, Minikube, 
and Kind. See the [environment setup guide](developer_guide/env_setup.md) for 
further details.
+When you have a local development environment set up you can run the scheduler
+in your local Kubernetes environment. This has been tested in a desktop
+environment with Docker Desktop, Minikube, and Kind. See the
+[Dev Environment Setup](developer_guide/env_setup.md) guide for further
+details.
 
-```
+To run a local instance of the scheduler:
+
+```shell script
 make run
 ```
-It will connect with the kubernetes cluster using the users configured 
configuration located in `$HOME/.kube/config`.
+
+This will launch a local scheduler and connect to the Kubernetes cluster
+referenced in your `KUBECONFIG` or `$HOME/.kube/config`.
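The lookup order described above can be sketched in shell; this mirrors standard Kubernetes client tooling behavior (`KUBECONFIG` wins, otherwise the default path) and is an illustration, not YuniKorn code:

```shell
# Resolve the kubeconfig the way standard Kubernetes client tooling does:
# the KUBECONFIG environment variable wins, otherwise the default path is used.
config="${KUBECONFIG:-$HOME/.kube/config}"
echo "Using kubeconfig: $config"
```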
 
 To run YuniKorn in Kubernetes scheduler plugin mode instead, execute:
 
@@ -180,13 +230,15 @@ To run YuniKorn in Kubernetes scheduler plugin mode instead, execute:
 make run_plugin
 ```
 
-You can also use the same approach to run the scheduler locally but connecting 
to a remote kubernetes cluster,
-as long as the `$HOME/.kube/config` file is pointing to that remote cluster.
-
+You can also use the same approach to run the scheduler locally but connect
+to a remote Kubernetes cluster, as long as the `$HOME/.kube/config` file
+points to that remote cluster.
 
-## Verify external interface changes with e2e tests
+## Run end-to-end tests
 
-Yunikorn has an external REST interface which is validated by end-to-end 
tests. However, the tests exist in the k8shim repository.
-Whenever a change is made to the external interface, make sure that it is 
validated by running e2e tests or adjust the test cases accordingly.
+In addition to the unit tests for each project, YuniKorn contains many e2e
+(end-to-end) tests in the `yunikorn-k8shim` repository which validate the
+behavior of the scheduler on a running Kubernetes cluster.
 
-How to run the tests locally is described 
[here](https://github.com/apache/yunikorn-k8shim/blob/master/test/e2e/README.md).
+How to run the tests locally is described
+[here](https://github.com/apache/yunikorn-k8shim/blob/master/test/e2e/README.md).
diff --git a/docs/developer_guide/env_setup.md b/docs/developer_guide/env_setup.md
index 98327f0ee9..fd2aa6b3a0 100644
--- a/docs/developer_guide/env_setup.md
+++ b/docs/developer_guide/env_setup.md
@@ -22,169 +22,278 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-There are several ways to setup a local development environment for 
Kubernetes, the three most common ones are `Minikube` 
([docs](https://kubernetes.io/docs/setup/minikube/)), `docker-desktop` and 
`kind` ([kind](https://kind.sigs.k8s.io/))
-`Minikube` provisions a local Kubernetes cluster on several Virtual Machines 
(via VirtualBox or something similar). `docker-desktop` on the other hand, sets 
up Kubernetes cluster in docker containers.  `kind` provides lightweight 
Kubernetes clusters for Windows, Linux and Mac.  
+There are several ways to set up a local development environment for Kubernetes.
+The three most common ones are **Minikube**
+([docs](https://kubernetes.io/docs/setup/minikube/)),
+**Docker Desktop** and **Kind** ([docs](https://kind.sigs.k8s.io/)).
+**Minikube** provisions a local Kubernetes cluster on several Virtual Machines
+(via VirtualBox or something similar).
+
+**Docker Desktop**, on the other hand, sets up a Kubernetes cluster using a
+local Docker installation.
+
+**Kind** provides lightweight Kubernetes clusters for Windows, Linux and Mac
+using an existing Docker installation.
 
 ## Local Kubernetes cluster using Docker Desktop
 
 In this tutorial, we will base all the installs on Docker Desktop.
-Even in this case we can use a lightweight 
[minikube](#local-kubernetes-cluster-with-minikube) setup which gives the same 
functionality with less impact.
+Even in this case we can use a lightweight
+[minikube](#local-kubernetes-cluster-with-minikube) setup which gives the same
+functionality with less impact.
 
 ### Installation
 
-Download and install 
[Docker-Desktop](https://www.docker.com/products/docker-desktop) on your 
laptop. Latest version has an embedded version of Kubernetes so no additional 
install is needed.
-Just simply follow the instruction 
[here](https://docs.docker.com/docker-for-mac/#kubernetes) to get Kubernetes up 
and running within docker-desktop.
+Download and install
+[Docker Desktop](https://www.docker.com/products/docker-desktop).
+Newer Docker versions have an embedded version of Kubernetes so no additional
+installation is needed. Follow the instructions
+[here](https://docs.docker.com/docker-for-mac/#kubernetes)
+to get Kubernetes up and running within Docker Desktop.
+Alternatively, a Kind cluster may be created (see instructions
+[here](https://kind.sigs.k8s.io/docs/user/quick-start/#creating-a-cluster)).
 
-Once Kubernetes is started in docker desktop, you should see something similar 
below:
+Once Kubernetes is started in Docker Desktop, you should see something similar
+to this:
 
 ![Kubernetes in Docker Desktop](./../assets/docker-desktop.png)
 
 This means that:
+
 1. Kubernetes is running.
-2. the command line tool `kubctl` is installed in the `/usr/local/bin` 
directory.
-3. the Kubernetes context is set to `docker-desktop`.
+2. The command line tool `kubectl` is installed in the `/usr/local/bin` directory.
+3. The Kubernetes context is set to `docker-desktop`.
 
 ### Deploy and access dashboard
 
-After setting up the local Kubernetes you need to deploy the dashboard using 
the following steps: 
-1. follow the instructions in [Kubernetes dashboard 
doc](https://github.com/kubernetes/dashboard) to deploy the dashboard.
-2. start the Kubernetes proxy in the background from a terminal to get access 
on the dashboard on the local host:   
+Optionally, after setting up Kubernetes you may wish to deploy the Kubernetes
+Dashboard Web UI. The dashboard may be deployed using the following steps:
+
+1. Follow the instructions [here](https://github.com/kubernetes/dashboard) to deploy the dashboard.
+2. Start the Kubernetes proxy in the background from a terminal to access the dashboard on the local host:
     ```shell script
     kubectl proxy &
     ```
-3. access the dashboard at the following URL: [clickable 
link](http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login)
+3. Access the dashboard [here](http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login).
 
 ### Access local Kubernetes cluster
 
-The dashboard as deployed in the previous step requires a token or config to 
sign in. Here we use the token to sign in. The token is generated automatically 
and can be retrieved from the system.
+The dashboard as deployed in the previous step requires a token or config to
+sign in. Here we use the token to sign in. The token is generated
+automatically and can be retrieved from the system.
 
-1. retrieve the name of the dashboard token:
+1. Retrieve the name of the dashboard token:
     ```shell script
     kubectl -n kube-system get secret | grep kubernetes-dashboard-token
     ```
-2. retrieve the content of the token, note that the token name ends with a 
random 5 character code and needs to be replaced with the result of step 1. As 
an example:  
+2. Retrieve the content of the token. Note that the token name ends with a
+   random 5-character code and needs to be replaced with the result of step 1.
+   As an example:
     ```shell script
     kubectl -n kube-system describe secret kubernetes-dashboard-token-tf6n8
     ```
-3. copy the token value which is part of the `Data` section with the tag 
`token`.
-4. select the **Token** option in the dashboard web UI:<br/>
+3. Copy the token value which is part of the `Data` section with the tag `token`.
+4. Select the **Token** option in the dashboard web UI:<br/>
     ![Token Access in dashboard](./../assets/dashboard_token_select.png)
-5. paste the token value into the input box and sign in:<br/>
+5. Paste the token value into the input box and sign in:<br/>
     ![Token Access in dashboard](./../assets/dashboard_secret.png)
 
 ## Local Kubernetes cluster with Minikube
-Minikube can be added to an existing Docker Desktop install. Minikube can 
either use the pre-installed hypervisor or use a hypervisor of choice. These 
instructions use [HyperKit](https://github.com/moby/hyperkit) which is embedded 
in Docker Desktop.   
-
-If you want to use a different hypervisor then HyperKit make sure that you 
follow the generic minikube install instructions. Do not forget to install the 
correct driver for the chosen hypervisor if required.
-The basic instructions are provided in the [minikube 
install](https://kubernetes.io/docs/tasks/tools/install-minikube/) instructions.
-
-Check hypervisor Docker Desktop should have already installed HyperKit. In a 
terminal run: `hyperkit` to confirm. Any response other than `hyperkit: command 
not found` confirms that HyperKit is installed and on the path. If it is not 
found you can choose a different hypervisor or fix the Docker Desktop install.
+Minikube can be added to an existing Docker Desktop install. Minikube can
+either use the pre-installed hypervisor or use a hypervisor of your choice.
+These instructions use [HyperKit](https://github.com/moby/hyperkit) which is
+embedded in Docker Desktop.
+
+If you want to use a different hypervisor than HyperKit make sure that you
+follow the generic minikube install instructions. Do not forget to install
+the correct driver for the chosen hypervisor if required. The minikube
+installation instructions can be found
+[here](https://kubernetes.io/docs/tasks/tools/install-minikube/).
+
+Docker Desktop should have already installed HyperKit. To verify this, open a
+terminal and run: `hyperkit`. Any response other than
+`hyperkit: command not found` confirms that HyperKit is installed and on
+the path. If it is not found you can choose a different hypervisor or
+fix the Docker Desktop install.
 
 ### Installing Minikube
-1. install minikube, you can either use brew or directly via these steps: 
+1. Install minikube, either via `brew` or directly via these steps: 
     ```shell script
    curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64
     chmod +x minikube
     sudo mv minikube /usr/local/bin
     ```
-2. install HyperKit driver (required), you can either use brew or directly via 
these steps:
+2. Install the HyperKit driver (required). You can either use `brew` or directly via these steps:
     ```shell script
    curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-hyperkit
    sudo install -o root -g wheel -m 4755 docker-machine-driver-hyperkit /usr/local/bin/
     ```
-3. update the minikube config to default to the HyperKit install `minikube 
config set vm-driver hyperkit`
-4. change docker desktop to use minikube for Kubernetes:<br/>
+3. Update the minikube configuration to default to using HyperKit:
+   ```shell script
+   minikube config set vm-driver hyperkit
+   ```
+4. Change Docker Desktop to use minikube for Kubernetes:<br/>
    ![Kubernetes in Docker Desktop: minikube setting](./../assets/docker-dektop-minikube.png)
 
 ### Deploy and access the cluster
 After the installation is done you can start a new cluster.
-1. start the minikube cluster: `minikube start --kubernetes-version v1.24.7`
-2. start the minikube dashboard: `minikube dashboard &`
+1. Start the minikube cluster:
+   ```shell script
+   minikube start --kubernetes-version v1.24.7
+   ```
+2. Start the minikube dashboard:
+   ```shell script
+   minikube dashboard &
+   ```
 
 ### Build impact
-When you create images make sure that the build is run after pointing it to 
the right environment. 
-Without setting the enviromnent minikube might not find the docker images when 
deploying the scheduler.
-1. make sure minikube is started
-2. in the terminal where you wll run the build execute: `eval $(minikube 
docker-env)`
-3. run the image build from the yunikorn-k8shim repository root: `make image`
-4. deploy the scheduler as per the normal instructions.
+When you create images make sure that the build is run after pointing it to
+the correct cluster. Without setting the environment minikube might not find
+the docker images when deploying the scheduler.
+
+1. Make sure minikube is started.
+2. In the terminal where you will run the build, execute:
+   ```shell script
+   eval $(minikube docker-env)
+   ```
+3. Run the image build from the yunikorn-k8shim repository root:
+   ```shell script
+   make image
+   ```
+4. Deploy the scheduler as per the normal instructions.
 
 ## Local Kubernetes Cluster with Kind
 
-Kind (Kubernetes in Docker) is a lightweight tool for running lightweight 
Kubernetes environments.  It is very easy to test different Kubernetes versions 
with kind.  You can just select the kind image you want.
+Kind (Kubernetes in Docker) is a lightweight tool for running local
+Kubernetes environments. It is very easy to test different Kubernetes versions
+with Kind by specifying the version during cluster setup.
 
 ### Installation
 
-If you have go installed, you can run `go install sigs.k8s.io/kind@latest`.
+If you have Go installed, you can run:
+```shell script
+go install sigs.k8s.io/kind@latest
+```
 
-Other ways can be found on the Kind 
[website](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)
+Other installation methods can be found on the Kind
+[website](https://kind.sigs.k8s.io/docs/user/quick-start/#installation).
 
-To use Kind with Kubernetes 1.25, you will need to use [email protected] or greater.  
The release of kind does allow for particular versions of Kubernetes and you 
can get that information from the Kind release notes.
+To use Kind with Kubernetes 1.25 or later, you will need to use [email protected] or
+later. Kind releases also support running other versions of Kubernetes.
+The list of supported versions may be found in the Kind release notes.
 
 ### Using Kind
 
-To test a new version of Kubernetes, you can pull a corresponding image from 
kind's repo.
+To test a new version of Kubernetes, you can pull a corresponding image from
+kind's repository.
 
-Creating a v1.24.7 Kubernetes Cluster: `kind create cluster --name test 
--image kindest/node:v1.24.7`
+For example, to create a cluster running Kubernetes 1.26.6:
+```shell script
+kind create cluster --name test --image kindest/node:v1.26.6
+```
 
-Deleting a kind cluster: `kind delete cluster --name test`
+Kind will download the appropriate image and launch a new cluster named
+`test`. The active Kubernetes cluster will also be changed to `test`.
+
+To delete the kind cluster:
+```shell script
+kind delete cluster --name test
+```
 
 ### Loading your images
 
-In order to use a local image, you have to load your images into kind's 
registry.  If you run `make image`, you could use the following command to load 
your kind image.  This assumes AMD64 architecture.
+In order to use a local image, you have to load your images into kind's
+registry. If you run `make image`, you can use the following commands to
+load your images into kind. This assumes AMD64 architecture.
 
-The scheduler, web-ui and admission-controller examples are below: 
-scheduler:
-`kind load docker-image apache/yunikorn:scheduler-amd64-latest`
+The scheduler, web-ui and admission-controller examples are below:
 
-web: 
-`kind load docker-image apache/yunikorn:web-amd64-latest`
+```shell script
+kind load docker-image apache/yunikorn:scheduler-amd64-latest
+kind load docker-image apache/yunikorn:web-amd64-latest
+kind load docker-image apache/yunikorn:admission-amd64-latest
+```
 
-admission-controller:
-`kind load docker-image apache/yunikorn:admission-amd64-latest`
+If running on an ARM system, replace `amd64` with `arm64` above.
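The three load commands can also be scripted in one loop. A dry-run sketch that only prints each command (remove the leading `echo` to execute them; the `test` cluster name is carried over from the earlier `kind create cluster` example):

```shell
# Dry run: print the kind load command for each locally built image.
# Remove the leading `echo` to actually load the images into the cluster.
for img in scheduler web admission; do
  echo kind load docker-image "apache/yunikorn:${img}-amd64-latest" --name test
done
```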
 
 ## Debug code locally
 
-Note, this instruction requires you have GoLand IDE for development.
-
-In GoLand, go to yunikorn-k8shim project. Then click "Run" -> "Debug..." -> 
"Edit Configuration..." to get the pop-up configuration window.
-Note, you need to click "+" to create a new profile if the `Go Build` option 
is not available at the first time.
-
-![Debug Configuration](./../assets/goland_debug.jpg)
-
-The highlighted fields are the configurations you need to add. These include:
-
-- Run Kind: package
-- Package path: point to the path of `pkg/shim` package
-- Working directory: point to the path of the `conf` directory, this is where 
the program loads configuration file from
-- Program arguments: specify the arguments to run the program, such as 
`-kubeConfig=/path/to/.kube/config -interval=1s -clusterId=mycluster 
-clusterVersion=0.1 -name=yunikorn -policyGroup=queues -logEncoding=console 
-logLevel=-1`.
-Note, you need to replace `/path/to/.kube/config` with the local path to the 
kubeconfig file. And if you want to change or add more options, you can run 
`_output/bin/k8s-yunikorn-scheduler -h` to find out.
-
-Once the changes are done, click "Apply", then "Debug". You will need to set 
proper breakpoints in order to debug the program.
+The scheduler may be run locally for debugging. This example assumes
+you have installed the GoLand IDE for development.
+
+In GoLand, open the `yunikorn-k8shim` project. Then click "Run" ->
+"Debug..." -> "Edit Configuration..." to open the configuration
+window. Note that you need to click "+" to create a new profile if the
+`Go Build` option is not available the first time.
+
+![Debug Configuration](./../assets/goland_debug.png)
+
+Set the following values in the dialog (as shown):
+
+- Run Kind: Package
+- Package path: `github.com/apache/yunikorn-k8shim/pkg/cmd/shim`
+- Working directory: Project base directory (`yunikorn-k8shim`)
+- Program arguments: Empty
+- Environment: If `KUBECONFIG` is not set globally, ensure it is set here.
+  Additionally, you may want to set `NAMESPACE=yunikorn`, as otherwise
+  YuniKorn will look for the `yunikorn-configs` ConfigMap under the
+  `default` Kubernetes namespace.
+
+Once the changes are done, click "Apply", then "Debug". You will need to
+set proper breakpoints in order to debug the program.
+
+## Debug the scheduler plugin
+
+The scheduler may also be run in plugin mode. In this mode, the YuniKorn
+scheduler is built on top of the default scheduler and runs as a
+plugin (rather than completely standalone). Functionally, it performs the
+same tasks, but relies on the upstream Kubernetes scheduler codebase for
+common functionality.
+
+The run configuration for the scheduler in plugin mode is as follows:
+
+- Run Kind: Package
+- Package path: `github.com/apache/yunikorn-k8shim/pkg/cmd/schedulerplugin`
+- Working directory: Project base directory (`yunikorn-k8shim`)
+- Program arguments:
+  ```
+  --bind-address=0.0.0.0
+  --leader-elect=false
+  --config=conf/scheduler-config-local.yaml
+  -v=2
+  ```
+- Environment: If `KUBECONFIG` is not set globally, ensure it is set here.
+  Additionally, you may want to set `NAMESPACE=yunikorn`, as otherwise
+  YuniKorn will look for the `yunikorn-configs` ConfigMap under the
+  `default` Kubernetes namespace.
+
+Additionally, before running for the first time, run `make init` from a
+terminal in the root of the `yunikorn-k8shim` repository. This will
+generate the contents of `conf/scheduler-config-local.yaml`, which is
+required.
 
 ## Access remote Kubernetes cluster
 
 This setup assumes you have already installed a remote Kubernetes cluster. 
-For a generic view on how to access a multiple cluster and integrate it follow 
the [accessing multiple 
clusters](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)
 documentation from Kubernetes.
+For a general overview of how to access multiple clusters and integrate
+them, follow the [accessing multiple
clusters](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)
+documentation from Kubernetes.
 
 Or follow these simplified steps:
-1. get the Kubernetes `config` file from remote cluster, copy it to the local 
machine and give it a unique name i.e. `config-remote`
-2. save the `KUBECONFIG` environment variable (if set)
+1. Get the Kubernetes `config` file from the remote cluster, copy it to the
+   local machine and give it a unique name, e.g. `config-remote`
+2. Save the `KUBECONFIG` environment variable (if set)
     ```shell script
     export KUBECONFIG_SAVED=$KUBECONFIG
     ```
-3. add the new file to the environment variable
+3. Add the new file to the environment variable
     ```shell script
     export KUBECONFIG=$KUBECONFIG:config-remote
     ``` 
-4. run the command `kubectl config view` to check that both configs can be 
accessed
-5. switch context using `kubectl config use-context my-remote-cluster`
-6. confirm that the current context is now switched to the remote cluster 
config:
+4. Run the command `kubectl config view` to check that both configs can be 
accessed
+5. Switch context using `kubectl config use-context remote-cluster`
+6. Confirm that the current context is now switched to the remote cluster 
config:
     ```text
     kubectl config get-contexts
-    CURRENT   NAME                   CLUSTER                      AUTHINFO     
        NAMESPACE
-              docker-for-desktop     docker-for-desktop-cluster   
docker-for-desktop
-    *         my-remote-cluster      kubernetes                   
kubernetes-admin
+    CURRENT NAME           CLUSTER                AUTHINFO
+            docker-desktop docker-desktop-cluster docker-for-desktop
+    *       remote-cluster kubernetes             kubernetes-admin
     ```
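
When you are done working against the remote cluster, the variable saved
in step 2 can be restored. A small sketch, assuming `KUBECONFIG_SAVED`
was exported earlier:

```shell script
# (step 2 stored the original value: export KUBECONFIG_SAVED=$KUBECONFIG)
# Restore it once work on the remote cluster is finished:
export KUBECONFIG="$KUBECONFIG_SAVED"
unset KUBECONFIG_SAVED
```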
 
-More docs can be found 
[here](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/)
  
+More documentation can be found
+[here](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/).
diff --git a/docs/developer_guide/openshift_development.md 
b/docs/developer_guide/openshift_development.md
index 8d21171e24..dcdcd530cb 100644
--- a/docs/developer_guide/openshift_development.md
+++ b/docs/developer_guide/openshift_development.md
@@ -109,22 +109,29 @@ The following steps assume you have a running CRC cluster 
in your laptop. Note t
    Note that if you manually pushed the Docker image to the 
`default-route-openshift-image-registry.apps-crc.testing` docker registry 
directly you need to have valid certs to access it. 
    On OpenShift there's service for this: 
`image-registry.openshift-image-registry.svc`, which is easier to use.
 
-   For example, if you want to override all of the three Docker images you 
should use the following configs:
+   For example, if you want to override all of the Docker images you should 
use the following configs:
    ```yaml
    image:
      repository: 
image-registry.openshift-image-registry.svc:5000/yunikorn/yunikorn
      tag: scheduler-latest
      pullPolicy: Always
-   
-   admission_controller_image:
+
+   pluginImage:
      repository: 
image-registry.openshift-image-registry.svc:5000/yunikorn/yunikorn
-     tag: admission-latest
+     tag: scheduler-plugin-latest
      pullPolicy: Always
    
-   web_image:
-     repository: 
image-registry.openshift-image-registry.svc:5000/yunikorn/yunikorn-web
-     tag: latest
-     pullPolicy: Always
+   admissionController:
+     image:
+       repository: 
image-registry.openshift-image-registry.svc:5000/yunikorn/yunikorn
+       tag: admission-latest
+       pullPolicy: Always
+   
+   web:
+     image:
+       repository: 
image-registry.openshift-image-registry.svc:5000/yunikorn/yunikorn-web
+       tag: latest
+       pullPolicy: Always
    ``` 
 
    You can find it in the yunikorn-release repo's helm chart directory.

