This is an automated email from the ASF dual-hosted git repository.
zihaoxiang pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/dolphinscheduler.git
The following commit(s) were added to refs/heads/dev by this push:
new 8d6e9eecfc docs: fix spelling (#15996)
8d6e9eecfc is described below
commit 8d6e9eecfcbda09470b74a02fa02a64154045f6d
Author: John Bampton <[email protected]>
AuthorDate: Wed May 15 11:41:02 2024 +1000
docs: fix spelling (#15996)
---
docs/docs/en/guide/resource/configuration.md | 4 ++--
docs/docs/en/guide/start/docker.md | 2 +-
docs/docs/en/guide/task/datafactory.md | 6 +++---
docs/docs/en/guide/task/kubernetes.md | 2 +-
docs/docs/en/guide/task/mlflow.md | 2 +-
docs/docs/zh/guide/task/datafactory.md | 6 +++---
docs/docs/zh/guide/task/mlflow.md | 2 +-
7 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/docs/docs/en/guide/resource/configuration.md b/docs/docs/en/guide/resource/configuration.md
index 67c68a22c2..6bee9e5a67 100644
--- a/docs/docs/en/guide/resource/configuration.md
+++ b/docs/docs/en/guide/resource/configuration.md
@@ -2,8 +2,8 @@
- You could use `Resource Center` to upload text files, UDFs and other task-related files.
- You could configure `Resource Center` to use distributed file system like [Hadoop](https://hadoop.apache.org/docs/r2.7.0/) (2.6+), [MinIO](https://github.com/minio/minio) cluster or remote storage products like [AWS S3](https://aws.amazon.com/s3/), [Alibaba Cloud OSS](https://www.aliyun.com/product/oss), [Huawei Cloud OBS](https://support.huaweicloud.com/obs/index.html) etc.
-- You could configure `Resource Center` to use local file system. If you deploy `DolphinScheduler` in `Standalone` mode, you could configure it to use local file system for `Resouce Center` without the need of an external `HDFS` system or `S3`.
-- Furthermore, if you deploy `DolphinScheduler` in `Cluster` mode, you could use [S3FS-FUSE](https://github.com/s3fs-fuse/s3fs-fuse) to mount `S3` or [JINDO-FUSE](https://help.aliyun.com/document_detail/187410.html) to mount `OSS` to your machines and use the local file system for `Resouce Center`. In this way, you could operate remote files as if on your local machines.
+- You could configure `Resource Center` to use local file system. If you deploy `DolphinScheduler` in `Standalone` mode, you could configure it to use local file system for `Resource Center` without the need of an external `HDFS` system or `S3`.
+- Furthermore, if you deploy `DolphinScheduler` in `Cluster` mode, you could use [S3FS-FUSE](https://github.com/s3fs-fuse/s3fs-fuse) to mount `S3` or [JINDO-FUSE](https://help.aliyun.com/document_detail/187410.html) to mount `OSS` to your machines and use the local file system for `Resource Center`. In this way, you could operate remote files as if on your local machines.
## Use Local File System
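For reference, a minimal sketch of the S3FS-FUSE mount the corrected paragraphs describe; the bucket name, mount point, and credential values below are placeholders, not part of this commit:

```shell
# Store the bucket credentials for s3fs (placeholder keys)
echo "ACCESS_KEY_ID:SECRET_ACCESS_KEY" > ${HOME}/.passwd-s3fs
chmod 600 ${HOME}/.passwd-s3fs

# Mount a hypothetical bucket so Resource Center can use the local path /mnt/dolphinscheduler
s3fs my-bucket /mnt/dolphinscheduler -o passwd_file=${HOME}/.passwd-s3fs -o url=https://s3.amazonaws.com
```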
diff --git a/docs/docs/en/guide/start/docker.md b/docs/docs/en/guide/start/docker.md
index b98d0572ae..f29bcd5c75 100644
--- a/docs/docs/en/guide/start/docker.md
+++ b/docs/docs/en/guide/start/docker.md
@@ -128,7 +128,7 @@ and use `admin` and `dolphinscheduler123` as default username and password in th

> Note: If you start the services by the way [using exists PostgreSQL ZooKeeper](#using-exists-postgresql-zookeeper), and
-> strating with multiple machine, you should change URL domain from `localhost` to IP or hostname the api server running.
+> starting with multiple machine, you should change URL domain from `localhost` to IP or hostname the api server running.
## Change Environment Variable
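To illustrate the corrected note: when the services are spread over several machines, the UI URL must point at the host actually running the api server. A sketch with a placeholder IP; port `12345` and path `/dolphinscheduler/ui` are the usual defaults but may differ in your deployment:

```shell
# Single machine: http://localhost:12345/dolphinscheduler/ui
# Multiple machines: swap localhost for the api server's IP or hostname
curl -I http://192.168.1.100:12345/dolphinscheduler/ui
```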
diff --git a/docs/docs/en/guide/task/datafactory.md b/docs/docs/en/guide/task/datafactory.md
index 1936db81f1..67d00c1227 100644
--- a/docs/docs/en/guide/task/datafactory.md
+++ b/docs/docs/en/guide/task/datafactory.md
@@ -19,11 +19,11 @@ DolphinScheduler DataFactory functions:
### Application Permission Setting
-First, visit the `Subcription` page and choose `Access control (IAM)`, then click `Add role assignment` to the authorization page.
-
+First, visit the `Subscription` page and choose `Access control (IAM)`, then click `Add role assignment` to the authorization page.
+
After that, select `Contributor` role which satisfy functions calls in data factory. Then click `Members` page, and click `Select members`.
Search application name or application `Object ID` to assign `Contributor` role to application.
-
+
## Configurations
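The role assignment the corrected steps walk through can equally be scripted; a hedged Azure CLI sketch, where the object id and subscription id are placeholders:

```shell
# Grant the application's service principal the Contributor role on the subscription
az role assignment create \
  --assignee "<application-object-id>" \
  --role "Contributor" \
  --scope "/subscriptions/<subscription-id>"
```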
diff --git a/docs/docs/en/guide/task/kubernetes.md b/docs/docs/en/guide/task/kubernetes.md
index 332669a60c..871a020294 100644
--- a/docs/docs/en/guide/task/kubernetes.md
+++ b/docs/docs/en/guide/task/kubernetes.md
@@ -26,7 +26,7 @@ K8S task type used to execute a batch task. In this task, the worker submits the
| Command | The container execution command (yaml-style array), for example: ["printenv"] |
| Args | The args of execution command (yaml-style array), for example: ["HOSTNAME", "KUBERNETES_PORT"] |
| Custom label | The customized labels for k8s Job. |
-| Node selector | The label selectors for running k8s pod. Different value in value set should be seperated by comma, for example: `value1,value2`. You can refer to https://kubernetes.io/docs/reference/kubernetes-api/common-definitions/node-selector-requirement/ for configuration of different operators. |
+| Node selector | The label selectors for running k8s pod. Different value in value set should be separated by comma, for example: `value1,value2`. You can refer to https://kubernetes.io/docs/reference/kubernetes-api/common-definitions/node-selector-requirement/ for configuration of different operators. |
| Custom parameter | It is a local user-defined parameter for K8S task, these params will pass to container as environment variables. |
## Task Example
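For the `Node selector` row above, a small sketch of how matching node labels can be inspected and applied with kubectl; the node name and label are hypothetical:

```shell
# List node labels to pick selector key/value pairs
kubectl get nodes --show-labels

# Label a node so a task selecting disktype=ssd can be scheduled onto it
kubectl label nodes worker-node-1 disktype=ssd
```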
diff --git a/docs/docs/en/guide/task/mlflow.md b/docs/docs/en/guide/task/mlflow.md
index a14ac12483..d500e4cbf6 100644
--- a/docs/docs/en/guide/task/mlflow.md
+++ b/docs/docs/en/guide/task/mlflow.md
@@ -148,7 +148,7 @@ After this, you can visit the MLflow service (`http://localhost:5000`) page to v
### Preset Algorithm Repository Configuration
-If you can't access github, you can modify the following fields in the `commom.properties` configuration file to replace the github address with an accessible address.
+If you can't access github, you can modify the following fields in the `common.properties` configuration file to replace the github address with an accessible address.
```yaml
# mlflow task plugin preset repository
diff --git a/docs/docs/zh/guide/task/datafactory.md b/docs/docs/zh/guide/task/datafactory.md
index 0fa3375bc1..0822f60bec 100644
--- a/docs/docs/zh/guide/task/datafactory.md
+++ b/docs/docs/zh/guide/task/datafactory.md
@@ -19,10 +19,10 @@ DolphinScheduler DataFactory 组件的功能:
### 应用权限设置
-首先打开当前`Subcription`页面,点击`Access control (IAM)`,再点击`Add role assignment`进入授权页面。
-
+首先打开当前`Subscription`页面,点击`Access control (IAM)`,再点击`Add role assignment`进入授权页面。
+
首先选择`Contributor`角色足够满足调用数据工厂。然后选择`Members`页面,再选择`Select members`,检索APP名称或APP的`Object ID`并添加,从给指定APP添加权限.
-
+
## 环境配置
diff --git a/docs/docs/zh/guide/task/mlflow.md b/docs/docs/zh/guide/task/mlflow.md
index 3c384ce221..12d2bc5089 100644
--- a/docs/docs/zh/guide/task/mlflow.md
+++ b/docs/docs/zh/guide/task/mlflow.md
@@ -139,7 +139,7 @@ mlflow server -h 0.0.0.0 -p 5000 --serve-artifacts --backend-store-uri sqlite://
### 内置算法仓库配置
-如果遇到github无法访问的情况,可以修改`commom.properties`配置文件的以下字段,将github地址替换能访问的地址。
+如果遇到github无法访问的情况,可以修改`common.properties`配置文件的以下字段,将github地址替换能访问的地址。
```yaml
# mlflow task plugin preset repository