This is an automated email from the ASF dual-hosted git repository.

tyrantlucifer pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/incubator-seatunnel.git


The following commit(s) were added to refs/heads/dev by this push:
     new d57b7f043 [Feature][Docs] Update usage docs (#4030)
d57b7f043 is described below

commit d57b7f04397e75f76fd0e0554f0c735d32da8278
Author: Tyrantlucifer <[email protected]>
AuthorDate: Thu Feb 2 10:21:24 2023 +0800

    [Feature][Docs] Update usage docs (#4030)
    
    * [Feature][Docs] Update docs
    
    * [Feature][Docs] Update docs
    
    * [Feature][Docs] Fix docs
    
    * [Feature][Docs] Update quick-start-flink
---
 docs/en/command/usage.mdx                     | 236 +++++++++++---------------
 docs/en/start-v2/locally/quick-start-flink.md |  14 +-
 docs/en/start-v2/locally/quick-start-spark.md |  18 +-
 3 files changed, 123 insertions(+), 145 deletions(-)

diff --git a/docs/en/command/usage.mdx b/docs/en/command/usage.mdx
index 9cb529ee6..d5797e06a 100644
--- a/docs/en/command/usage.mdx
+++ b/docs/en/command/usage.mdx
@@ -7,39 +7,39 @@ import TabItem from '@theme/TabItem';
 
 <Tabs
     groupId="engine-type"
-    defaultValue="spark"
+    defaultValue="spark2"
     values={[
-        {label: 'Spark', value: 'spark'},
-        {label: 'Flink', value: 'flink'},
-        {label: 'Spark V2', value: 'spark V2'},
-        {label: 'Flink V2', value: 'flink V2'},
+        {label: 'Spark 2', value: 'spark2'},
+        {label: 'Spark 3', value: 'spark3'},
+        {label: 'Flink 13 14', value: 'flink13'},
+        {label: 'Flink 15 16', value: 'flink15'},
     ]}>
-<TabItem value="spark">
+<TabItem value="spark2">
 
 ```bash
-bin/start-seatunnel-spark.sh
+bin/start-seatunnel-spark-2-connector-v2.sh
 ```
 
 </TabItem>
-<TabItem value="flink">
+<TabItem value="spark3">
 
 ```bash
-bin/start-seatunnel-flink.sh  
+bin/start-seatunnel-spark-3-connector-v2.sh
 ```
 
 </TabItem>
-<TabItem value="spark V2">
+<TabItem value="flink13">
 
-    ```bash
-    bin/start-seatunnel-spark-connector-v2.sh
-    ```
+```bash
+bin/start-seatunnel-flink-13-connector-v2.sh
+```
 
 </TabItem>
-<TabItem value="flink V2">
+<TabItem value="flink15">
 
-    ```bash
-    bin/start-seatunnel-flink-connector-v2.sh
-    ```
+```bash
+bin/start-seatunnel-flink-15-connector-v2.sh
+```
 
 </TabItem>
 </Tabs>
@@ -49,169 +49,127 @@ bin/start-seatunnel-flink.sh
 
 <Tabs
     groupId="engine-type"
-    defaultValue="spark"
+    defaultValue="spark2"
     values={[
-        {label: 'Spark', value: 'spark'},
-        {label: 'Flink', value: 'flink'},
-        {label: 'Spark V2', value: 'spark V2'},
-        {label: 'Flink V2', value: 'flink V2'},
+        {label: 'Spark 2', value: 'spark2'},
+        {label: 'Spark 3', value: 'spark3'},
+        {label: 'Flink 13 14', value: 'flink13'},
+        {label: 'Flink 15 16', value: 'flink15'},
     ]}>
-<TabItem value="spark">
+<TabItem value="spark2">
 
 ```bash
-bin/start-seatunnel-spark.sh \
-    -c config-path \
-    -m master \
-    -e deploy-mode \
-    -i city=beijing
+Usage: start-seatunnel-spark-2-connector-v2.sh [options]
+  Options:
+    --check           Whether check config (default: false)
+    -c, --config      Config file
+    -e, --deploy-mode Spark deploy mode, support [cluster, client] (default: 
+                      client) 
+    -h, --help        Show the usage message
+    -m, --master      Spark master, support [spark://host:port, 
+                      mesos://host:port, yarn, k8s://https://host:port, 
+                      local], default local[*] (default: local[*])
+    -n, --name        SeaTunnel job name (default: SeaTunnel)
+    -i, --variable    Variable substitution, such as -i city=beijing, or -i 
+                      date=20190318 (default: [])
 ```
 
-- Use `-m` or `--master` to specify the cluster manager
-
-- Use `-e` or `--deploy-mode` to specify the deployment mode
-
 </TabItem>
-<TabItem value="spark V2">
-
-    ```bash
-    bin/start-seatunnel-spark-connector-v2.sh \
-    -c config-path \
-    -m master \
-    -e deploy-mode \
-    -i city=beijing \
-    -n spark-test
-    ```
-
-    - Use `-m` or `--master` to specify the cluster manager
+<TabItem value="spark3">
 
-    - Use `-e` or `--deploy-mode` to specify the deployment mode
-
-    - Use `-n` or `--name` to specify the app name
+```bash
+Usage: start-seatunnel-spark-3-connector-v2.sh [options]
+  Options:
+    --check           Whether check config (default: false)
+    -c, --config      Config file
+    -e, --deploy-mode Spark deploy mode, support [cluster, client] (default: 
+                      client) 
+    -h, --help        Show the usage message
+    -m, --master      Spark master, support [spark://host:port, 
+                      mesos://host:port, yarn, k8s://https://host:port, 
+                      local], default local[*] (default: local[*])
+    -n, --name        SeaTunnel job name (default: SeaTunnel)
+    -i, --variable    Variable substitution, such as -i city=beijing, or -i 
+                      date=20190318 (default: [])
+```
 
 </TabItem>
-<TabItem value="flink">
+<TabItem value="flink13">
 
 ```bash
-bin/start-seatunnel-flink.sh \
-    -c config-path \
-    -i key=value \
-    -r run-application \
-    [other params]
+Usage: start-seatunnel-flink-13-connector-v2.sh [options]
+  Options:
+    --check            Whether check config (default: false)
+    -c, --config       Config file
+    -e, --deploy-mode  Flink job deploy mode, support [run, run-application] 
+                       (default: run)
+    -h, --help         Show the usage message
+    --master, --target Flink job submitted target master, support [local, 
+                       remote, yarn-session, yarn-per-job, kubernetes-session, 
+                       yarn-application, kubernetes-application]
+    -n, --name         SeaTunnel job name (default: SeaTunnel)
+    -i, --variable     Variable substitution, such as -i city=beijing, or -i 
+                       date=20190318 (default: [])
 ```
 
-- Use `-r` or `--run-mode` to specify the flink job run mode, you can use `run-application` or `run` (default value)
-
 </TabItem>
-<TabItem value="flink V2">
+<TabItem value="flink15">
 
-    ```bash
-    bin/start-seatunnel-flink-connector-v2.sh \
-    -c config-path \
-    -i key=value \
-    -r run-application \
-    -n flink-test \
-    [other params]
-    ```
-
-    - Use `-r` or `--run-mode` to specify the flink job run mode, you can use `run-application` or `run` (default value)
-
-    - Use `-n` or `--name` to specify the app name
+```bash
+Usage: start-seatunnel-flink-15-connector-v2.sh [options]
+  Options:
+    --check            Whether check config (default: false)
+    -c, --config       Config file
+    -e, --deploy-mode  Flink job deploy mode, support [run, run-application] 
+                       (default: run)
+    -h, --help         Show the usage message
+    --master, --target Flink job submitted target master, support [local, 
+                       remote, yarn-session, yarn-per-job, kubernetes-session, 
+                       yarn-application, kubernetes-application]
+    -n, --name         SeaTunnel job name (default: SeaTunnel)
+    -i, --variable     Variable substitution, such as -i city=beijing, or -i 
+                       date=20190318 (default: [])
+```
 
 </TabItem>
 </Tabs>
 
-- Use `-c` or `--config` to specify the path of the configuration file
-
-- Use `-i` or `--variable` to specify the variables in the configuration file, you can configure multiple
-
 ## Example
 
 <Tabs
     groupId="engine-type"
-    defaultValue="spark"
+    defaultValue="spark2"
     values={[
-        {label: 'Spark', value: 'spark'},
-        {label: 'Flink', value: 'flink'},
+        {label: 'Spark 2', value: 'spark2'},
+        {label: 'Spark 3', value: 'spark3'},
+        {label: 'Flink 13 14', value: 'flink13'},
+        {label: 'Flink 15 16', value: 'flink15'},
     ]}>
-<TabItem value="spark">
+<TabItem value="spark2">
 
 ```bash
-# Yarn client mode
-./bin/start-seatunnel-spark.sh \
-    --master yarn \
-    --deploy-mode client \
-    --config ./config/application.conf
-
-# Yarn cluster mode
-./bin/start-seatunnel-spark.sh \
-    --master yarn \
-    --deploy-mode cluster \
-    --config ./config/application.conf
+bin/start-seatunnel-spark-2-connector-v2.sh --config config/v2.batch.config.template -m local -e client
 ```
 
 </TabItem>
-<TabItem value="flink">
-
-```bash
-env {
-    execution.parallelism = 1
-}
-
-source {
-    FakeSourceStream {
-        result_table_name = "fake"
-        field_name = "name,age"
-    }
-}
-
-transform {
-    sql {
-        sql = "select name,age from fake where name='"${my_name}"'"
-    }
-}
-
-sink {
-    ConsoleSink {}
-}
-```
-
-**Run**
-
-```bash
-bin/start-seatunnel-flink.sh \
-    -c config-path \
-    -i my_name=kid-xiong
-```
-
-This designation will replace `"${my_name}"` in the configuration file with `kid-xiong`
-
-> All the configurations in the `env` section will be applied to Flink dynamic parameters with the format of `-D`, such as `-Dexecution.parallelism=1` .
-
-> For the rest of the parameters, refer to the original flink parameters. Check the flink parameter method: `bin/flink run -h` . The parameters can be added as needed. For example, `-m yarn-cluster` is specified as `on yarn` mode.
+<TabItem value="spark3">
 
 ```bash
-bin/flink run -h
+bin/start-seatunnel-spark-3-connector-v2.sh --config config/v2.batch.config.template -m local -e client
 ```
 
-For example:
-
-* `-p 2` specifies that the job parallelism is `2`
+</TabItem>
+<TabItem value="flink13">
 
 ```bash
-bin/start-seatunnel-flink.sh \
-    -p 2 \
-    -c config-path
+bin/start-seatunnel-flink-13-connector-v2.sh --config config/v2.batch.config.template
 ```
 
-* Configurable parameters of `flink yarn-cluster`
-
-For example: `-m yarn-cluster -ynm seatunnel` specifies that the job is running on `yarn`, and the name of `yarn WebUI` is `seatunnel`
+</TabItem>
+<TabItem value="flink15">
 
 ```bash
-bin/start-seatunnel-flink.sh \
-    -m yarn-cluster \
-    -ynm seatunnel \
-    -c config-path
+bin/start-seatunnel-flink-15-connector-v2.sh --config config/v2.batch.config.template
 ```
 
 </TabItem>
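The `-i`/`--variable` flag documented above replaces `${key}` placeholders in the job config (e.g. `-i city=beijing`). A minimal sketch of the effect, assuming a hypothetical one-line config snippet; SeaTunnel performs this substitution internally, and `sed` here only mimics it for illustration:

```shell
# Hypothetical config snippet containing a ${my_name} placeholder.
cat > /tmp/seatunnel_demo.conf <<'EOF'
sql = "select name, age from fake where name = '${my_name}'"
EOF

# Roughly what passing: -i my_name=kid-xiong  does to the config
# (sed stands in for SeaTunnel's internal substitution).
sed 's/${my_name}/kid-xiong/g' /tmp/seatunnel_demo.conf
# prints: sql = "select name, age from fake where name = 'kid-xiong'"
```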
diff --git a/docs/en/start-v2/locally/quick-start-flink.md b/docs/en/start-v2/locally/quick-start-flink.md
index 3cc78a8db..d054418e9 100644
--- a/docs/en/start-v2/locally/quick-start-flink.md
+++ b/docs/en/start-v2/locally/quick-start-flink.md
@@ -10,7 +10,7 @@ Before starting, make sure you have downloaded and deployed SeaTunnel as describ
 
 ## Step 2: Deployment And Config Flink
 
-Please [download Flink](https://flink.apache.org/downloads.html) first(**required version >= 1.12.0 and version < 1.14.x **). For more information you could see [Getting Started: standalone](https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/resource-providers/standalone/overview/)
+Please [download Flink](https://flink.apache.org/downloads.html) first (**required version >= 1.12.0**). For more information, see [Getting Started: standalone](https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/resource-providers/standalone/overview/)
 
 **Configure SeaTunnel**: Change the setting in `config/seatunnel-env.sh`; it is based on the path your engine is installed at [deployment](deployment.md).
 Change `FLINK_HOME` to the Flink deployment dir.
@@ -52,10 +52,18 @@ More information about config please check [config concept](../../concept/config
 
 You can start the application with the following commands
 
+For Flink versions `1.12.x` through `1.14.x`:
+
 ```shell
 cd "apache-seatunnel-incubating-${version}"
-./bin/start-seatunnel-flink-connector-v2.sh --config ./config/seatunnel.streaming.conf.template
+./bin/start-seatunnel-flink-13-connector-v2.sh --config ./config/seatunnel.streaming.conf.template
+```
+
+For Flink versions `1.15.x` through `1.16.x`:
 
+```shell
+cd "apache-seatunnel-incubating-${version}"
+./bin/start-seatunnel-flink-15-connector-v2.sh --config ./config/seatunnel.streaming.conf.template
 ```
 
 **See The Output**: When you run the command, you could see its output in your console. You can think this
@@ -89,4 +97,4 @@ row=16 : SGZCr, 94186144
 For now, you have taken a quick look at SeaTunnel with Flink; see [connector](/docs/category/connector-v2) to find all sources and sinks SeaTunnel supports, or see [SeaTunnel With Flink](../../other-engine/flink.md) if you want to know more about running SeaTunnel with Flink.
 
-SeaTunnel have an own engine named SeaTunnel Engine and SeaTunnel Engine is the default engine of SeaTunnel. You can follow [Quick Start](quick-start-seatunnel-engine.md) to configure and run a data synchronization job.
+SeaTunnel has its own engine named `Zeta`, and `Zeta` is the default engine of SeaTunnel. You can follow [Quick Start](quick-start-seatunnel-engine.md) to configure and run a data synchronization job.
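The quick start above picks the launcher script by Flink version. A small shell sketch of that choice; the `FLINK_VERSION` value and detection are illustrative assumptions, while the version-to-script mapping follows the doc:

```shell
# Illustrative: map the local Flink version to the matching SeaTunnel launcher.
# FLINK_VERSION is a hypothetical example; detect yours via `flink --version`.
FLINK_VERSION="1.15.2"
case "$FLINK_VERSION" in
    1.12.*|1.13.*|1.14.*) SCRIPT="start-seatunnel-flink-13-connector-v2.sh" ;;
    1.15.*|1.16.*)        SCRIPT="start-seatunnel-flink-15-connector-v2.sh" ;;
    *) echo "No SeaTunnel launcher for Flink $FLINK_VERSION" >&2; exit 1 ;;
esac
echo "$SCRIPT"   # prints start-seatunnel-flink-15-connector-v2.sh
```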
diff --git a/docs/en/start-v2/locally/quick-start-spark.md b/docs/en/start-v2/locally/quick-start-spark.md
index 576b284a9..a52e4c058 100644
--- a/docs/en/start-v2/locally/quick-start-spark.md
+++ b/docs/en/start-v2/locally/quick-start-spark.md
@@ -10,7 +10,7 @@ Before starting, make sure you have downloaded and deployed SeaTunnel as describ
 
 ## Step 2: Deployment And Config Spark
 
-Please [download Spark](https://spark.apache.org/downloads.html) first(**required version >= 2 and version < 3.x **). For more information you could
+Please [download Spark](https://spark.apache.org/downloads.html) first (**required version >= 2.4.0**). For more information you could
 see [Getting Started: standalone](https://spark.apache.org/docs/latest/spark-standalone.html#installing-spark-standalone-to-a-cluster)
 
 **Configure SeaTunnel**: Change the setting in `config/seatunnel-env.sh`; it is based on the path your engine is installed at [deployment](deployment.md).
@@ -53,9 +53,21 @@ More information about config please check [config concept](../../concept/config
 
 You can start the application with the following commands
 
+For Spark `2.4.x`:
+
+```bash
+cd "apache-seatunnel-incubating-${version}"
+./bin/start-seatunnel-spark-2-connector-v2.sh \
+--master local[4] \
+--deploy-mode client \
+--config ./config/seatunnel.streaming.conf.template
+```
+
+For Spark `3.x.x`:
+
 ```shell
 cd "apache-seatunnel-incubating-${version}"
-./bin/start-seatunnel-spark-connector-v2.sh \
+./bin/start-seatunnel-spark-3-connector-v2.sh \
 --master local[4] \
 --deploy-mode client \
 --config ./config/seatunnel.streaming.conf.template
@@ -92,4 +104,4 @@ row=16 : SGZCr, 94186144
 For now, you have taken a quick look at SeaTunnel with Spark; see [connector](/docs/category/connector-v2) to find all sources and sinks SeaTunnel supports, or see [SeaTunnel With Spark](../../other-engine/spark.md) if you want to know more about running SeaTunnel with Spark.
 
-SeaTunnel have an own engine named SeaTunnel Engine and SeaTunnel Engine is the default engine of SeaTunnel. You can follow [Quick Start](quick-start-seatunnel-engine.md) to configure and run a data synchronization job.
+SeaTunnel has its own engine named `Zeta`, and `Zeta` is the default engine of SeaTunnel. You can follow [Quick Start](quick-start-seatunnel-engine.md) to configure and run a data synchronization job.
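The usage text earlier in this commit shows `-i` taking values like `city=beijing` and `date=20190318`, and the quick start passes `--master`, `--deploy-mode`, and `--config` explicitly. A sketch composing one submit command from those documented flags; the values are examples, and assembling the command in a variable is only for inspection, not anything SeaTunnel-specific:

```shell
# Illustrative: compose a submit command from the documented flags, then inspect it.
CMD="./bin/start-seatunnel-spark-2-connector-v2.sh"
CMD="$CMD --master local[4] --deploy-mode client"
CMD="$CMD --config ./config/seatunnel.streaming.conf.template"
CMD="$CMD -i city=beijing -i date=20190318"   # -i may be repeated, one per variable
echo "$CMD"
```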
