This is an automated email from the ASF dual-hosted git repository.
benjobs pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/incubator-streampark-website.git
The following commit(s) were added to refs/heads/dev by this push:
new d14556e minor improvement
d14556e is described below
commit d14556e40ae67bdbf5ed12cd9f093b256e868fc3
Author: benjobs <[email protected]>
AuthorDate: Thu Oct 5 00:26:20 2023 +0800
minor improvement
---
community/submit_guide/submit-code.md | 16 +++----
docs/user-guide/4-dockerDeployment.md | 48 +++++++++++--------
.../current/user-guide/4-dockerDeployment.md | 56 ++++++++++++----------
3 files changed, 67 insertions(+), 53 deletions(-)
diff --git a/community/submit_guide/submit-code.md b/community/submit_guide/submit-code.md
index e692635..003b08a 100644
--- a/community/submit_guide/submit-code.md
+++ b/community/submit_guide/submit-code.md
@@ -32,21 +32,21 @@ sidebar_position: 2
* Clone your forked repository to your local machine
- ```shell
+```shell
git clone [email protected]:<your-github-id>/incubator-streampark.git
- ```
+```
* Add the remote repository address and name it upstream
- ```shell
- git remote add upstream [email protected]:apache/incubator-streampark.git
- ```
+```shell
+git remote add upstream [email protected]:apache/incubator-streampark.git
+```
* View the remote repositories
- ```shell
- git remote -v
- ```
+```shell
+git remote -v
+```
> At this point, there will be two remotes: origin (your own repository) and upstream (the upstream Apache repository)
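Once both remotes are set up, a typical way to keep your fork current is to rebase on upstream (a minimal sketch; the branch name dev is an assumption and may differ in your workflow):

```shell
# Fetch the latest changes from the Apache repository
git fetch upstream
# Rebase your local branch on top of upstream's dev branch
git checkout dev
git rebase upstream/dev
```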
diff --git a/docs/user-guide/4-dockerDeployment.md b/docs/user-guide/4-dockerDeployment.md
index 141270b..18aad30 100644
--- a/docs/user-guide/4-dockerDeployment.md
+++ b/docs/user-guide/4-dockerDeployment.md
@@ -5,25 +5,28 @@ sidebar_position: 4
---
This tutorial walks through deploying StreamPark with Docker.
+
## Prepare
Docker 1.13.1+
Docker Compose 1.28.0+
-### Installing docker
+
+### 1. Install docker
To start the service with docker, you need to install
[docker](https://www.docker.com/) first
-### Installing docker-compose
+### 2. Install docker-compose
To start the service with docker-compose, you need to install
[docker-compose](https://docs.docker.com/compose/install/) first
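Once both are installed, you can quickly confirm that the versions meet the requirements listed above:

```shell
# Both should print versions at or above the minimums in Prepare
docker --version
docker-compose --version
```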
-## Rapid StreamPark Deployment
-### StreamPark deployment based on h2 and docker-compose
+## StreamPark Deployment
+
+### 1. StreamPark deployment based on h2 and docker-compose
This method is suitable for beginners to learn and get familiar with the features. The configuration resets after the container restarts; below, you can configure MySQL or PgSQL for persistence.
-#### Deployment
+#### 2. Deployment
-```html
+```shell
wget https://raw.githubusercontent.com/apache/incubator-streampark/dev/deploy/docker/docker-compose.yaml
wget https://raw.githubusercontent.com/apache/incubator-streampark/dev/deploy/docker/.env
docker-compose up -d
@@ -31,25 +34,25 @@ docker-compose up -d
Once the service has started, StreamPark can be accessed at http://localhost:10000, and the bundled Flink at http://localhost:8081. Opening the StreamPark link redirects you to the login page; the default user and password are admin and streampark, respectively. To learn more about day-to-day operation, refer to the quick start in the user manual.
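To confirm that both endpoints are reachable, a quick check like the following may help (a sketch; the ports assume the defaults in docker-compose.yaml):

```shell
# StreamPark console
curl -sSf -o /dev/null http://localhost:10000 && echo "StreamPark is up"
# Bundled Flink Web UI
curl -sSf -o /dev/null http://localhost:8081 && echo "Flink is up"
```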
-#### Configure flink home
+#### 3. Configure flink home

-#### Configure flink-session cluster
+#### 4. Configure flink-session cluster

Note: when configuring the flink-session cluster address, the IP is not localhost but the host network IP, which can be obtained with ifconfig
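For example, on a typical Linux host the network IP can be read like this (a sketch; the interface name eth0 is an assumption and may differ on your machine):

```shell
# First IPv4 address of the host (Linux)
hostname -I | awk '{print $1}'
# Or inspect a specific interface with ifconfig
ifconfig eth0 | grep 'inet '
```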
-#### Submit a task
+#### 5. Submit a Flink job

-### Use existing Mysql services
-This approach is suitable for enterprise production, where you can quickly deploy strempark based on docker and associate it with an online database
+##### Use existing Mysql services
+This approach is suitable for enterprise production, where you can quickly deploy StreamPark based on docker and associate it with an online database
Note: the deployment variants are driven by the .env configuration file; make sure there is one and only one .env file in the directory
-```html
+```shell
wget https://raw.githubusercontent.com/apache/incubator-streampark/dev/deploy/docker/docker-compose.yaml
wget https://raw.githubusercontent.com/apache/incubator-streampark/dev/deploy/docker/mysql/.env
vim .env
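# Sanity check: exactly one .env file should be listed here
ls -A | grep '^\.env'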
@@ -59,7 +62,7 @@ First, you need to create the "streampark" database in MySQL, and then manually run the schema and data SQL for the corresponding data source.
After that, modify the corresponding connection information.
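For example, creating the database and loading the shipped SQL could look like this (a sketch; the script paths are assumptions, check your StreamPark distribution for the actual locations):

```shell
# Create the streampark database
mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS streampark DEFAULT CHARACTER SET utf8mb4;"
# Run the schema and data scripts for the mysql data source (paths are illustrative)
mysql -u root -p streampark < schema/mysql-schema.sql
mysql -u root -p streampark < data/mysql-data.sql
```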
-```html
+```shell
SPRING_PROFILES_ACTIVE=mysql
SPRING_DATASOURCE_URL=jdbc:mysql://localhost:3306/streampark?useSSL=false&useUnicode=true&characterEncoding=UTF-8&allowPublicKeyRetrieval=false&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=GMT%2B8
SPRING_DATASOURCE_USERNAME=root
@@ -69,20 +72,23 @@ SPRING_DATASOURCE_PASSWORD=streampark
```
```shell
docker-compose up -d
```
-### Use existing Pgsql services
-```html
+##### Use existing Pgsql services
+
+```shell
wget https://raw.githubusercontent.com/apache/incubator-streampark/dev/deploy/docker/docker-compose.yaml
wget https://raw.githubusercontent.com/apache/incubator-streampark/dev/deploy/docker/pgsql/.env
vim .env
```
Modify the corresponding connection information
-```html
+
+```shell
SPRING_PROFILES_ACTIVE=pgsql
SPRING_DATASOURCE_URL=jdbc:postgresql://localhost:5432/streampark?stringtype=unspecified
SPRING_DATASOURCE_USERNAME=postgres
SPRING_DATASOURCE_PASSWORD=streampark
```
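If the streampark database does not exist on the PostgreSQL side yet, it can be created first (a sketch; host, port, and user mirror the settings above):

```shell
# Create the target database as the postgres user
createdb -h localhost -p 5432 -U postgres streampark
```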
-```
+
+```shell
docker-compose up -d
```
@@ -93,14 +99,14 @@ cd incubator-streampark/deploy/docker
vim docker-compose.yaml
```
-```html
+```shell
build:
context: ../..
dockerfile: deploy/docker/console/Dockerfile
# image: ${HUB}:${TAG}
```
-```
+```shell
docker-compose up -d
```
@@ -177,7 +183,7 @@ volumes:
Finally, execute the start command:
-```sh
+```shell
cd deploy/docker
docker-compose up -d
```
@@ -190,7 +196,7 @@ You can use `docker ps` to check if the installation was successful. If the foll
In the previous `env` file, `HADOOP_HOME` was declared, pointing to the directory `/streampark/hadoop`. Therefore, you need to upload the `/etc/hadoop` directory from the Hadoop installation package to `/streampark/hadoop`. The commands are as follows:
-```sh
+```shell
## Upload Hadoop resources (copy the entire etc directory)
docker cp etc streampark-docker_streampark-console_1:/streampark/hadoop
## Enter the container
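docker exec -it streampark-docker_streampark-console_1 /bin/bash  # assumed completion of this step
# Once inside, a quick way to verify the upload:
ls /streampark/hadoop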
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/4-dockerDeployment.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/4-dockerDeployment.md
index 0962b32..97cd57c 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/4-dockerDeployment.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/4-dockerDeployment.md
@@ -4,53 +4,57 @@ title: 'Docker Quick Start Tutorial'
sidebar_position: 4
---
-This tutorial uses Docker to complete the deployment of StreamPark.
+This tutorial uses Docker to deploy StreamPark.
+
## Prerequisites
Docker 1.13.1+
Docker Compose 1.28.0+
-### Install docker
+### 1. Install docker
To start the service with docker, you need to install [docker](https://www.docker.com/) first
-### Install docker-compose
+### 2. Install docker-compose
To start the service with docker-compose, you need to install [docker-compose](https://docs.docker.com/compose/install/) first
-## Rapid StreamPark Deployment
+## Deploy StreamPark
-### Deploy StreamPark based on h2 and docker-compose
+### 1. Deploy StreamPark with h2 and docker-compose
This method is suitable for getting started and exploring the features; the configuration is lost after the container restarts. MySQL or PgSQL can be configured below for persistence
-#### Deployment
-```sh
+#### 2. Deployment
+
+```shell
wget https://raw.githubusercontent.com/apache/incubator-streampark/dev/deploy/docker/docker-compose.yaml
wget https://raw.githubusercontent.com/apache/incubator-streampark/dev/deploy/docker/.env
docker-compose up -d
```
Once the service has started, StreamPark can be accessed at http://localhost:10000, and Flink at http://localhost:8081. Opening the StreamPark link redirects you to the login page; the default user and password are admin and streampark, respectively. See the quick start in the user manual for more.

+
This deployment automatically starts a flink-session cluster for running Flink jobs, and also mounts the local docker service and ~/.kube so that jobs can be submitted in k8s mode
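To see these mounts on a running container, something like the following can help (a sketch; the container name is an assumption based on the compose project name):

```shell
# List the volume mounts of the console container
docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{println}}{{end}}' \
  streampark-docker_streampark-console_1
```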
-#### Configure flink home
+#### 3. Configure flink home

-#### Configure the session cluster
+#### 4. Configure the session cluster

Note: when configuring the flink-session cluster address, the IP is not localhost but the host network IP, which can be obtained with ifconfig
-#### Submit a job
+#### 5. Submit a Flink job

-### Use an existing Mysql service
-This approach is suitable for enterprise production: you can quickly deploy strempark based on docker and connect it to an online database
+#### Use an existing Mysql service
+This approach is suitable for enterprise production: you can quickly deploy StreamPark based on docker and connect it to an online database
Note: the deployment variants are driven by the .env configuration file; make sure there is one and only one .env file in the directory
-```sh
+
+```shell
wget https://raw.githubusercontent.com/apache/incubator-streampark/dev/deploy/docker/docker-compose.yaml
wget https://raw.githubusercontent.com/apache/incubator-streampark/dev/deploy/docker/mysql/.env
vim .env
@@ -59,41 +63,45 @@ vim .env
You need to create the streampark database in MySQL first, then manually run the SQL for the corresponding data source under schema and data
Then modify the corresponding connection information
-```sh
+
+```shell
SPRING_PROFILES_ACTIVE=mysql
SPRING_DATASOURCE_URL=jdbc:mysql://localhost:3306/streampark?useSSL=false&useUnicode=true&characterEncoding=UTF-8&allowPublicKeyRetrieval=false&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=GMT%2B8
SPRING_DATASOURCE_USERNAME=root
SPRING_DATASOURCE_PASSWORD=streampark
```
-```sh
+```shell
docker-compose up -d
```
### Use an existing Pgsql service
-```html
+
+```shell
wget https://raw.githubusercontent.com/apache/incubator-streampark/dev/deploy/docker/docker-compose.yaml
wget https://raw.githubusercontent.com/apache/incubator-streampark/dev/deploy/docker/pgsql/.env
vim .env
```
+
Modify the corresponding connection information
-```sh
+```shell
SPRING_PROFILES_ACTIVE=pgsql
SPRING_DATASOURCE_URL=jdbc:postgresql://localhost:5432/streampark?stringtype=unspecified
SPRING_DATASOURCE_USERNAME=postgres
SPRING_DATASOURCE_PASSWORD=streampark
```
-```sh
+```shell
docker-compose up -d
```
## Deploy StreamPark by building the image from source
-```sh
+
+```shell
git clone https://github.com/apache/incubator-streampark.git
cd incubator-streampark/deploy/docker
vim docker-compose.yaml
```
-```sh
+```shell
build:
context: ../..
dockerfile: deploy/docker/Dockerfile
@@ -101,7 +109,7 @@ vim docker-compose.yaml
```

-```sh
+```shell
cd ../..
./build.sh
```
@@ -180,7 +188,7 @@ volumes:
Finally, execute the start command:
-```sh
+```shell
cd deploy/docker
docker-compose up -d
```
@@ -191,9 +199,9 @@ docker-compose up -d
## Upload the configuration into the container
-In the previous env file, HADOOP_HOME was declared, pointing to the directory "/streampark/hadoop", so you need to upload /etc/hadoop from the hadoop installation package to the /streampark/hadoop directory, as follows:
+In the previous env file, HADOOP_HOME was declared, pointing to the directory `/streampark/hadoop`, so you need to upload `/etc/hadoop` from the hadoop installation package to the `/streampark/hadoop` directory, as follows:
-```sh
+```shell
## Upload hadoop resources (copy the entire etc directory)
docker cp etc streampark-docker_streampark-console_1:/streampark/hadoop
## Enter the container