This is an automated email from the ASF dual-hosted git repository.

chengshiwen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/dolphinscheduler-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 406cbec  Fix some description error in docker deployment (#363)
406cbec is described below

commit 406cbec5489d8bc8fa7ffd96e34cbd2eb30cb24c
Author: Shiwen Cheng <[email protected]>
AuthorDate: Fri May 14 17:23:58 2021 +0800

    Fix some description error in docker deployment (#363)
---
 docs/en-us/1.3.6/user_doc/docker-deployment.md | 12 ++++++++----
 docs/zh-cn/1.3.6/user_doc/docker-deployment.md | 16 ++++++++++------
 2 files changed, 18 insertions(+), 10 deletions(-)

diff --git a/docs/en-us/1.3.6/user_doc/docker-deployment.md b/docs/en-us/1.3.6/user_doc/docker-deployment.md
index 4a1df11..17b8a6d 100644
--- a/docs/en-us/1.3.6/user_doc/docker-deployment.md
+++ b/docs/en-us/1.3.6/user_doc/docker-deployment.md
@@ -536,7 +536,7 @@ Take Spark 2.4.7 as an example:
 3. Copy the Spark 2.4.7 release binary into Docker container
 
 ```bash
-docker cp spark-2.4.7-bin-hadoop2.7.tgz dolphinscheduler-worker:/opt/soft
+docker cp spark-2.4.7-bin-hadoop2.7.tgz docker-swarm_dolphinscheduler-worker_1:/opt/soft
 ```
 
 Because the volume `dolphinscheduler-shared-local` is mounted on `/opt/soft`, all files in `/opt/soft` will not be lost
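
As a quick sanity check (a minimal sketch; it assumes the default compose project name `docker-swarm`, which is where the `docker-swarm_` prefix in the container name comes from), you can confirm the worker container's name and that the tarball landed in the shared volume:

```bash
# Under compose, container names follow <project>_<service>_<index>
docker ps --format '{{.Names}}' | grep dolphinscheduler-worker

# Verify the copied tarball is visible in the shared /opt/soft volume
docker exec docker-swarm_dolphinscheduler-worker_1 ls -lh /opt/soft
```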
@@ -544,7 +544,7 @@ Because the volume `dolphinscheduler-shared-local` is mounted on `/opt/soft`, al
 4. Attach the container and ensure that `SPARK_HOME2` exists
 
 ```bash
-docker exec -it dolphinscheduler-worker bash
+docker exec -it docker-swarm_dolphinscheduler-worker_1 bash
 cd /opt/soft
 tar zxf spark-2.4.7-bin-hadoop2.7.tgz
 rm -f spark-2.4.7-bin-hadoop2.7.tgz
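
The attach-and-unpack step can also be scripted non-interactively; the sketch below chains the same commands from the doc (including the `ln -s ... spark2` link and the version check that follow in the surrounding context) through a single `docker exec`:

```bash
# Same unpack steps as above, without an interactive shell;
# $SPARK_HOME2 expands inside the container thanks to the single quotes
docker exec docker-swarm_dolphinscheduler-worker_1 bash -c '
  cd /opt/soft &&
  tar zxf spark-2.4.7-bin-hadoop2.7.tgz &&
  rm -f spark-2.4.7-bin-hadoop2.7.tgz &&
  ln -s spark-2.4.7-bin-hadoop2.7 spark2 &&
  $SPARK_HOME2/bin/spark-submit --version
'
```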
@@ -592,13 +592,13 @@ Take Spark 3.1.1 as an example:
 3. Copy the Spark 3.1.1 release binary into Docker container
 
 ```bash
-docker cp spark-3.1.1-bin-hadoop2.7.tgz dolphinscheduler-worker:/opt/soft
+docker cp spark-3.1.1-bin-hadoop2.7.tgz docker-swarm_dolphinscheduler-worker_1:/opt/soft
 ```
 
 4. Attach the container and ensure that `SPARK_HOME2` exists
 
 ```bash
-docker exec -it dolphinscheduler-worker bash
+docker exec -it docker-swarm_dolphinscheduler-worker_1 bash
 cd /opt/soft
 tar zxf spark-3.1.1-bin-hadoop2.7.tgz
 rm -f spark-3.1.1-bin-hadoop2.7.tgz
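
To smoke-test the upgrade without going through a workflow task, a one-liner like the following mirrors the `SparkPi` check referenced just below (only a sketch: the examples jar path is an assumption based on the standard Spark 3.1.1 release layout):

```bash
# Run SparkPi locally inside the worker; expect output like
# "Pi is roughly 3.14..."
docker exec docker-swarm_dolphinscheduler-worker_1 bash -c \
  '$SPARK_HOME2/bin/spark-submit --master local \
     --class org.apache.spark.examples.SparkPi \
     $SPARK_HOME2/examples/jars/spark-examples_2.12-3.1.1.jar'
```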
@@ -618,6 +618,8 @@ Check whether the task log contains the output like `Pi is roughly 3.146015`
 
 ### How to support shared storage between Master, Worker and Api server?
 
+> **Note**: If deployed on a single machine by `docker-compose`, steps 1 and 2 can be skipped; instead, execute a command like `docker cp hadoop-3.2.2.tar.gz docker-swarm_dolphinscheduler-worker_1:/opt/soft` to put Hadoop into the shared directory `/opt/soft` in the container
+
 For example, Master, Worker and Api server may use Hadoop at the same time
 
 1. Modify the volume `dolphinscheduler-shared-local` to support nfs in `docker-compose.yml`
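
For reference, the change step 1 asks for amounts to declaring an NFS-backed named volume. The equivalent `docker volume create` invocation below is only an illustration; the server address and export path are placeholders:

```bash
# Placeholder NFS server 192.168.1.100 exporting /export/dolphinscheduler
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.100,rw \
  --opt device=:/export/dolphinscheduler \
  dolphinscheduler-shared-local
```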
@@ -639,6 +641,8 @@ volumes:
 
 ### How to support local file resource storage instead of HDFS and S3?
 
+> **Note**: If deployed on a single machine by `docker-compose`, step 2 can be skipped
+
 1. Modify the following environment variables in `config.env.sh`:
 
 ```
diff --git a/docs/zh-cn/1.3.6/user_doc/docker-deployment.md b/docs/zh-cn/1.3.6/user_doc/docker-deployment.md
index b837ee3..70a7de0 100644
--- a/docs/zh-cn/1.3.6/user_doc/docker-deployment.md
+++ b/docs/zh-cn/1.3.6/user_doc/docker-deployment.md
@@ -536,7 +536,7 @@ docker build -t apache/dolphinscheduler:python3 .
 3. Copy the Spark 2.4.7 release binary into Docker container
 
 ```bash
-docker cp spark-2.4.7-bin-hadoop2.7.tgz dolphinscheduler-worker:/opt/soft
+docker cp spark-2.4.7-bin-hadoop2.7.tgz docker-swarm_dolphinscheduler-worker_1:/opt/soft
 ```
 
 Because the volume `dolphinscheduler-shared-local` is mounted on `/opt/soft`, all files in `/opt/soft` will not be lost
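
To see where that shared volume actually lives on the host, `docker volume inspect` works; the volume name prefix below is an assumption tied to the default `docker-swarm` compose project name:

```bash
# Show the driver and host mountpoint backing /opt/soft
docker volume inspect docker-swarm_dolphinscheduler-shared-local
```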
@@ -544,11 +544,11 @@ docker cp spark-2.4.7-bin-hadoop2.7.tgz dolphinscheduler-worker:/opt/soft
 4. Attach the container and ensure that `SPARK_HOME2` exists
 
 ```bash
-docker exec -it dolphinscheduler-worker bash
+docker exec -it docker-swarm_dolphinscheduler-worker_1 bash
 cd /opt/soft
 tar zxf spark-2.4.7-bin-hadoop2.7.tgz
 rm -f spark-2.4.7-bin-hadoop2.7.tgz
-ln -s spark-2.4.7-bin-hadoop2.7 spark2 # or just mv
+ln -s spark-2.4.7-bin-hadoop2.7 spark2 # or mv
 $SPARK_HOME2/bin/spark-submit --version
 ```
 
@@ -592,17 +592,17 @@ Spark on YARN (deploy mode `cluster` or `client`) requires Hadoop support
 3. Copy the Spark 3.1.1 release binary into Docker container
 
 ```bash
-docker cp spark-3.1.1-bin-hadoop2.7.tgz dolphinscheduler-worker:/opt/soft
+docker cp spark-3.1.1-bin-hadoop2.7.tgz docker-swarm_dolphinscheduler-worker_1:/opt/soft
 ```
 
 4. Attach the container and ensure that `SPARK_HOME2` exists
 
 ```bash
-docker exec -it dolphinscheduler-worker bash
+docker exec -it docker-swarm_dolphinscheduler-worker_1 bash
 cd /opt/soft
 tar zxf spark-3.1.1-bin-hadoop2.7.tgz
 rm -f spark-3.1.1-bin-hadoop2.7.tgz
-ln -s spark-3.1.1-bin-hadoop2.7 spark2 # or just mv
+ln -s spark-3.1.1-bin-hadoop2.7 spark2 # or mv
 $SPARK_HOME2/bin/spark-submit --version
 ```
 
@@ -618,6 +618,8 @@ $SPARK_HOME2/bin/spark-submit --class org.apache.spark.examples.SparkPi $SPARK_H
 
 ### How to support shared storage between Master, Worker and Api server?
 
+> **Note**: If deployed on a single machine by `docker-compose`, steps 1 and 2 can be skipped; instead, execute a command like `docker cp hadoop-3.2.2.tar.gz docker-swarm_dolphinscheduler-worker_1:/opt/soft` to put Hadoop into the shared directory `/opt/soft` in the container
+
 For example, Master, Worker and Api server may use Hadoop at the same time
 
 1. Modify the volume `dolphinscheduler-shared-local` to support nfs in `docker-compose.yml`
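
Continuing the note above: once `docker cp` has placed the Hadoop tarball in the shared directory, the unpack step could look like the sketch below (the file name is taken from the note; the exact layout you want under `/opt/soft` is an assumption):

```bash
# Unpack Hadoop into the shared /opt/soft directory and drop the tarball
docker exec docker-swarm_dolphinscheduler-worker_1 bash -c \
  'cd /opt/soft && tar zxf hadoop-3.2.2.tar.gz && rm -f hadoop-3.2.2.tar.gz'
```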
@@ -639,6 +641,8 @@ volumes:
 
 ### How to support local file resource storage instead of HDFS and S3?
 
+> **Note**: If deployed on a single machine by `docker-compose`, step 2 can be skipped
+
 1. Modify the following environment variables in `config.env.sh`:
 
 ```
