This is an automated email from the ASF dual-hosted git repository.

gaojun2048 pushed a commit to branch add_st_engine_to_start-v2
in repository https://gitbox.apache.org/repos/asf/incubator-seatunnel-website.git

commit 5867255a8321fabd61f8e7625d786f3aeeeeb798
Author: gaojun <[email protected]>
AuthorDate: Mon Oct 31 18:44:06 2022 +0800

    add st engine to start v2
---
 .../version-2.2.0-beta/start-v2/kubernetes.mdx     |  10 +-
 .../version-2.2.0-beta/start-v2/local.mdx          |  82 +++++---------
 .../version-2.2.0-beta/start/kubernetes.mdx        |  20 ++--
 versioned_docs/version-2.2.0-beta/start/local.mdx  | 122 +++------------------
 4 files changed, 55 insertions(+), 179 deletions(-)

diff --git a/versioned_docs/version-2.2.0-beta/start-v2/kubernetes.mdx b/versioned_docs/version-2.2.0-beta/start-v2/kubernetes.mdx
index e5a0260b52..88e008a7d2 100644
--- a/versioned_docs/version-2.2.0-beta/start-v2/kubernetes.mdx
+++ b/versioned_docs/version-2.2.0-beta/start-v2/kubernetes.mdx
@@ -42,7 +42,7 @@ To run the image with SeaTunnel, first create a `Dockerfile`:
 ```Dockerfile
 FROM flink:1.13
 
-ENV SEATUNNEL_VERSION="2.2.0-beta"
+ENV SEATUNNEL_VERSION="2.3.0-beta"
 ENV SEATUNNEL_HOME = "/opt/seatunnel"
 
 RUN mkdir -p $SEATUNNEL_HOME
@@ -57,13 +57,13 @@ RUN rm -rf $SEATUNNEL_HOME/connectors/seatunnel
 
 Then run the following commands to build the image:
 ```bash
-docker build -t seatunnel:2.2.0-beta-flink-1.13 -f Dockerfile .
+docker build -t seatunnel:2.3.0-beta-flink-1.13 -f Dockerfile .
 ```
-Image `seatunnel:2.2.0-beta-flink-1.13` need to be present in the host (minikube) so that the deployment can take place.
+Image `seatunnel:2.3.0-beta-flink-1.13` needs to be present on the host (minikube) so that the deployment can take place.
 
 Load image to minikube via:
 ```bash
-minikube image load seatunnel:2.2.0-beta-flink-1.13
+minikube image load seatunnel:2.3.0-beta-flink-1.13
 ```
 
 </TabItem>
@@ -168,7 +168,7 @@ metadata:
   namespace: default
   name: seatunnel-flink-streaming-example
 spec:
-  image: seatunnel:2.2.0-beta-flink-1.13
+  image: seatunnel:2.3.0-beta-flink-1.13
   flinkVersion: v1_14
   flinkConfiguration:
     taskmanager.numberOfTaskSlots: "2"
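As a quick sanity check of the tag convention used in the hunks above (this is not part of the commit, just an illustrative sketch), the image tag can be composed from the SeaTunnel and Flink versions:

```shell
# Hypothetical helper: compose the image tag using the naming
# convention seatunnel:<seatunnel-version>-flink-<flink-version>
# seen in the docker build / minikube load commands above.
SEATUNNEL_VERSION="2.3.0-beta"
FLINK_VERSION="1.13"
IMAGE_TAG="seatunnel:${SEATUNNEL_VERSION}-flink-${FLINK_VERSION}"
echo "${IMAGE_TAG}"
# prints: seatunnel:2.3.0-beta-flink-1.13
```

Keeping the tag in one variable helps avoid the mismatched-version edits this commit is fixing.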
diff --git a/versioned_docs/version-2.2.0-beta/start-v2/local.mdx b/versioned_docs/version-2.2.0-beta/start-v2/local.mdx
index 26745d6794..f0bfbf827e 100644
--- a/versioned_docs/version-2.2.0-beta/start-v2/local.mdx
+++ b/versioned_docs/version-2.2.0-beta/start-v2/local.mdx
@@ -27,20 +27,20 @@ package `seatunnel-<version>-bin.tar.gz`
 Or you can download it by terminal
 
 ```shell
-export version="2.2.0-beta"
+export version="2.3.0-beta"
 wget "https://archive.apache.org/dist/incubator/seatunnel/${version}/apache-seatunnel-incubating-${version}-bin.tar.gz";
 tar -xzvf "apache-seatunnel-incubating-${version}-bin.tar.gz"
 ```
 <!-- TODO: We should add example module as quick start which is no need for install Spark or Flink -->
 
 ## Step 3: Install connectors plugin
-Since 2.2.0-beta, the binary package does not provide connector dependencies by default, so when using it for the first time, we need to execute the following command to install the connector: (Of course, you can also manually download the connector from [Apache Maven Repository](https://repo. maven.apache.org/maven2/org/apache/seatunnel/ to download, then manually move to the seatunnel subdirectory under the connectors directory).
+Since 2.3.0-beta, the binary package does not provide connector dependencies by default, so when using it for the first time, you need to execute the following command to install the connectors. (Alternatively, you can manually download the connectors from the [Apache Maven Repository](https://repo.maven.apache.org/maven2/org/apache/seatunnel/) and move them to the seatunnel subdirectory under the connectors directory.)
 ```bash
-sh bin/install_plugin.sh 2.2.0-beta
+sh bin/install_plugin.sh 2.3.0-beta
 ```
-If you need to specify the version of the connector, take 2.2.0-beta as an example, we need to execute
+If you need to specify the connector version, taking 2.3.0-beta as an example, execute
 ```bash
-sh bin/install_plugin.sh 2.2.0-beta
+sh bin/install_plugin.sh 2.3.0-beta
 ```
 Usually we don't need all the connector plugins, so you can specify the plugins you need by configuring `config/plugin_config`, for example, you only need the `connector-console` plugin, then you can modify plugin.properties as
 ```plugin_config
@@ -79,27 +79,24 @@ If you want to install the V2 connector plugin manually, you only need to downlo
 
 ## Step 4: Configure SeaTunnel Application
 
+### Spark or Flink
+
 **Configure SeaTunnel**: Change the setting in `config/seatunnel-env.sh`, it is base on the path your engine install at [prepare step two](#prepare).
 Change `SPARK_HOME` if you using Spark as your engine, or change `FLINK_HOME` if you're using Flink.
 
+### SeaTunnel Engine
+
+SeaTunnel Engine is the default engine for SeaTunnel; no additional configuration is needed.
+
+### Add Job Config File to define a job
+
 Edit `config/seatunnel.streaming.conf.template`, which determines the way and logic of data input, processing, and output after seatunnel is started.
 The following is an example of the configuration file, which is the same as the example application mentioned above.
 
 ```hocon
 env {
-  # You can set flink configuration here
   execution.parallelism = 1
-  job.mode = "STREAMING"
-  #execution.checkpoint.interval = 10000
-  #execution.checkpoint.data-uri = "hdfs://localhost:9000/checkpoint"
-
-
-  # For Spark
-  #spark.app.name = "SeaTunnel"
-  #spark.executor.instances = 2
-  #spark.executor.cores = 1
-  #spark.executor.memory = "1g"
-  #spark.master = local
+  job.mode = "BATCH"
 }
 
 source {
@@ -116,9 +113,7 @@ source {
 }
 
 transform {
-    sql {
-      sql = "select name,age from fake"
-    }
+
 }
 
 sink {
@@ -139,6 +134,7 @@ You could start the application by the following commands
   values={[
     {label: 'Spark', value: 'spark'},
     {label: 'Flink', value: 'flink'},
+    {label: 'SeaTunnel Engine', value: 'SeaTunnel Engine'},
   ]}>
 <TabItem value="spark">
 
@@ -159,6 +155,16 @@ cd "apache-seatunnel-incubating-${version}"
 --config ./config/seatunnel.streaming.conf.template
 ```
 
+</TabItem>
+
+<TabItem value="SeaTunnel Engine">
+
+```shell
+cd "apache-seatunnel-incubating-${version}"
+./bin/seatunnel.sh \
+--config ./config/seatunnel.streaming.conf.template -e local
+```
+
 </TabItem>
 </Tabs>
 
@@ -190,42 +196,6 @@ row=16 : SGZCr, 94186144
 
 If use Flink, The content printed in the TaskManager Stdout log of `flink WebUI`.
 
-## Explore More Build-in Examples
-
-Our local quick start is using one of the build-in example in directory `config`, and we provider more than one out-of-box
-example you could and feel free to have a try and make your hands dirty. All you have to do is change the started command
-option value in [running application](#run-seaTunnel-application) to the configuration you want to run, we use batch
-template in `config` as examples:
-
-<Tabs
-    groupId="engine-type"
-    defaultValue="spark"
-    values={[
-        {label: 'Spark', value: 'spark'},
-        {label: 'Flink', value: 'flink'},
-    ]}>
-<TabItem value="spark">
-
-```shell
-cd "apache-seatunnel-incubating-${version}"
-./bin/start-seatunnel-spark-connector-v2.sh \
---master local[4] \
---deploy-mode client \
---config ./config/spark.batch.conf.template
-```
-
-</TabItem>
-<TabItem value="flink">
-
-```shell
-cd "apache-seatunnel-incubating-${version}"
-./bin/start-seatunnel-flink-connector-v2.sh \
---config ./config/flink.batch.conf.template
-```
-
-</TabItem>
-</Tabs>
-
 ## What's More
 
 For now, you are already take a quick look about SeaTunnel, you could see [connector](/category/connector) to find all
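Pieced together from the hunks above, the new default template for SeaTunnel Engine is a minimal batch job. The `source` and `sink` bodies are elided context in this diff, so the connector names below are assumptions based on V2 connector naming, not text from the commit:

```hocon
env {
  execution.parallelism = 1
  job.mode = "BATCH"
}

source {
  # Assumed V2 fake source; the actual body is elided context in the hunk above
  FakeSource {
    result_table_name = "fake"
  }
}

transform {
  # The commit removes the sql transform, leaving this block empty
}

sink {
  # Assumed V2 console sink
  Console {}
}
```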
diff --git a/versioned_docs/version-2.2.0-beta/start/kubernetes.mdx b/versioned_docs/version-2.2.0-beta/start/kubernetes.mdx
index 5c4b021317..740901da70 100644
--- a/versioned_docs/version-2.2.0-beta/start/kubernetes.mdx
+++ b/versioned_docs/version-2.2.0-beta/start/kubernetes.mdx
@@ -13,13 +13,13 @@ This section provides a quick guide to using SeaTunnel with Kubernetes.
 
 We assume that you have a local installations of the following:
 
-- [Docker](https://docs.docker.com/)
-- [Kubernetes](https://kubernetes.io/)
-- [Helm](https://helm.sh/docs/intro/quickstart/)
+- [docker](https://docs.docker.com/)
+- [kubernetes](https://kubernetes.io/)
+- [helm](https://helm.sh/docs/intro/quickstart/)
 
 So that the `kubectl` and `helm` commands are available on your local system.
 
-For Kubernetes [minikube](https://minikube.sigs.k8s.io/docs/start/) is our choice, at the time of writing this we are using version v1.23.3. You can start a cluster with the following command:
+For kubernetes [minikube](https://minikube.sigs.k8s.io/docs/start/) is our choice, at the time of writing this we are using version v1.23.3. You can start a cluster with the following command:
 
 ```bash
 minikube start --kubernetes-version=v1.23.3
@@ -42,7 +42,7 @@ To run the image with SeaTunnel, first create a `Dockerfile`:
 ```Dockerfile
 FROM flink:1.13
 
-ENV SEATUNNEL_VERSION="2.2.0-beta"
+ENV SEATUNNEL_VERSION="2.1.2"
 ENV SEATUNNEL_HOME = "/opt/seatunnel"
 
 RUN mkdir -p $SEATUNNEL_HOME
@@ -52,18 +52,18 @@ RUN tar -xzvf apache-seatunnel-incubating-${SEATUNNEL_VERSION}-bin.tar.gz
 
 RUN cp -r apache-seatunnel-incubating-${SEATUNNEL_VERSION}/* $SEATUNNEL_HOME/
 RUN rm -rf apache-seatunnel-incubating-${SEATUNNEL_VERSION}*
-RUN rm -rf $SEATUNNEL_HOME/connectors/flink
+RUN rm -rf $SEATUNNEL_HOME/connectors/spark
 ```
 
 Then run the following commands to build the image:
 ```bash
-docker build -t seatunnel:2.2.0-beta-flink-1.13 -f Dockerfile .
+docker build -t seatunnel:2.1.2-flink-1.13 -f Dockerfile .
 ```
-Image `seatunnel:2.2.0-beta-flink-1.13` need to be present in the host (minikube) so that the deployment can take place.
+Image `seatunnel:2.1.2-flink-1.13` needs to be present on the host (minikube) so that the deployment can take place.
 
 Load image to minikube via:
 ```bash
-minikube image load seatunnel:2.2.0-beta-flink-1.13
+minikube image load seatunnel:2.1.2-flink-1.13
 ```
 
 </TabItem>
@@ -168,7 +168,7 @@ metadata:
   namespace: default
   name: seatunnel-flink-streaming-example
 spec:
-  image: seatunnel:2.2.0-beta-flink-1.13
+  image: seatunnel:2.1.2-flink-1.13
   flinkVersion: v1_14
   flinkConfiguration:
     taskmanager.numberOfTaskSlots: "2"
diff --git a/versioned_docs/version-2.2.0-beta/start/local.mdx b/versioned_docs/version-2.2.0-beta/start/local.mdx
index 0f4001883a..cfc9a1d3ff 100644
--- a/versioned_docs/version-2.2.0-beta/start/local.mdx
+++ b/versioned_docs/version-2.2.0-beta/start/local.mdx
@@ -7,9 +7,7 @@ import TabItem from '@theme/TabItem';
 
 # Set Up with Locally
 
-> Let's take an application that randomly generates data in memory, processes it through SQL, and finally outputs it to the console as an example.
-
-## Step 1: Prepare the environment
+## Prepare
 
 Before you getting start the local run, you need to make sure you already have installed the following software which SeaTunnel required:
 
@@ -19,7 +17,7 @@ Before you getting start the local run, you need to make sure you already have i
  see [Getting Started: standalone](https://spark.apache.org/docs/latest/spark-standalone.html#installing-spark-standalone-to-a-cluster)
  * Flink: Please [download Flink](https://flink.apache.org/downloads.html) first(**required version >= 1.12.0 and version < 1.14.x **). For more information you could see [Getting Started: standalone](https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/resource-providers/standalone/overview/)
 
-## Step 2: Download SeaTunnel
+## Installation
 
 Enter the [seatunnel download page](https://seatunnel.apache.org/download) and 
download the latest version of distribute
 package `seatunnel-<version>-bin.tar.gz`
@@ -27,127 +25,35 @@ package `seatunnel-<version>-bin.tar.gz`
 Or you can download it by terminal
 
 ```shell
-export version="2.2.0-beta"
+export version="2.1.0"
 wget "https://archive.apache.org/dist/incubator/seatunnel/${version}/apache-seatunnel-incubating-${version}-bin.tar.gz";
 tar -xzvf "apache-seatunnel-incubating-${version}-bin.tar.gz"
 ```
 <!-- TODO: We should add example module as quick start which is no need for install Spark or Flink -->
 
-## Step 3: Install connectors plugin
-Since 2.2.0-beta, the binary package does not provide connector dependencies by default, so when using it for the first time, we need to execute the following command to install the connector: (Of course, you can also manually download the connector from [Apache Maven Repository](https://repo. maven.apache.org/maven2/org/apache/seatunnel/ to download, then manually move to the corresponding subdirectory of the connectors directory, for example, flink plugins should be placed in the flink [...]
+## Install connectors plugin
+Since 2.3.0-beta, the binary package does not provide connector dependencies by default, so when using it for the first time, you need to execute the following command to install the connectors. (Alternatively, you can manually download the connectors from the [Apache Maven Repository](https://repo.maven.apache.org/maven2/org/apache/seatunnel/) and move them to the connectors directory.)
 ```bash
-sh bin/install_plugin.sh 2.2.0-beta
+sh bin/install_plugin.sh
 ```
-If you need to specify the version of the connector, take 2.2.0-beta as an example, we need to execute
+If you need to specify the connector version, taking 2.3.0-beta as an example, execute
 ```bash
-sh bin/install_plugin.sh 2.2.0-beta
-```
-
-Usually we don't need all the connector plugins, so you can specify the plugins you need by configuring `config/plugin_config`, for example, you only need the `flink-console` plugin, then you can modify plugin.properties as
-```plugin_config
---flink-connectors--
-seatunnel-connector-flink-console
---end--
-```
-
-If we want our sample application to work properly, we need to add the following plugins
-<Tabs
-    groupId="engine-type"
-    defaultValue="spark"
-    values={[
-        {label: 'Spark', value: 'spark'},
-        {label: 'Flink', value: 'flink'},
-    ]}>
-<TabItem value="spark">
-
-```plugin_config
---spark-connectors--
-seatunnel-connector-spark-fake
-seatunnel-connector-spark-console
---end--
+sh bin/install_plugin.sh 2.3.0-beta
 ```
-
-</TabItem>
-<TabItem value="flink">
-
+Usually we don't need all the connector plugins, so you can specify the plugins you need by configuring `config/plugin_config`; for example, if you only need the `flink-assert` plugin, you can modify `config/plugin_config` as
 ```plugin_config
 --flink-connectors--
-seatunnel-connector-flink-fake
-seatunnel-connector-flink-console
+seatunnel-connector-flink-assert
 --end--
 ```
 
-</TabItem>
-</Tabs>
-
-You can find all supported connectors and corresponding plugin_config configuration names under `${SEATUNNEL_HOME}/connectors/plugins-mapping.properties`.
-
-:::tip
-
-If you want to install the connector plugin by manually downloading the connector, you need to pay special attention to the following
-
-:::
-
-The connectors directory contains the following subdirectories, if they do not exist, you need to create them manually
-
-```
-flink
-flink-sql
-seatunnel
-spark
-```
-
-If you want to manually install the connector plugin of the flink engine, you need to download the connector plugin of the flink engine you need, and then put them in the flink directory. Similarly, if you want to manually install the connector plugin of the spark engine, you need to download the connector plugin of the spark engine you need, and then put them in the spark directory
-
-## Step 4: Configure SeaTunnel Application
+## Run SeaTunnel Application
 
 **Configure SeaTunnel**: Change the setting in `config/seatunnel-env.sh`, it is base on the path your engine install at [prepare step two](#prepare).
 Change `SPARK_HOME` if you using Spark as your engine, or change `FLINK_HOME` if you're using Flink.
 
-Edit `config/flink(spark).streaming.conf.template`, which determines the way and logic of data input, processing, and output after seatunnel is started.
-The following is an example of the configuration file, which is the same as the example application mentioned above.
-
-```hocon
-######
-###### This config file is a demonstration of streaming processing in SeaTunnel config
-######
-
-env {
-  # You can set flink configuration here
-  execution.parallelism = 1
-
-  # For Spark
-  #spark.app.name = "SeaTunnel"
-  #spark.executor.instances = 2
-  #spark.executor.cores = 1
-  #spark.executor.memory = "1g"
-  #spark.master = local
-}
-
-source {
-    FakeSourceStream {
-      result_table_name = "fake"
-      field_name = "name,age"
-    }
-}
-
-transform {
-    sql {
-      sql = "select name,age from fake"
-    }
-}
-
-sink {
-  ConsoleSink {}
-}
-
-```
-
-More information about config please check [config concept](../concept/config)
-
-## Step 5: Run SeaTunnel Application
-
-You could start the application by the following commands
+**Run Application with Build-in Configure**: We already provide an out-of-box configuration in the directory `config` which
+you could find when you extract the tarball. You could start the application by the following commands
 
 <Tabs
   groupId="engine-type"
@@ -218,7 +124,7 @@ topLevel, 20
 
 ## Explore More Build-in Examples
 
-Our local quick start is using one of the build-in example in directory `config`, and we provider more than one out-of-box
+Our local quick start is using one of the build-in example in directory `config`, and we provide more than one out-of-box
 example you could and feel free to have a try and make your hands dirty. All you have to do is change the started command
 option value in [running application](#run-seaTunnel-application) to the configuration you want to run, we use batch
 template in `config` as examples:

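For reference, the `wget` commands in the hunks above all follow one fixed archive naming pattern; a small sketch of how the tarball name is derived for this file's version (the archive layout is assumed stable, and this snippet is illustrative rather than part of the commit):

```shell
# Derive the tarball name used by the wget command in the
# start/local.mdx hunk above (version 2.1.0 in that file).
version="2.1.0"
tarball="apache-seatunnel-incubating-${version}-bin.tar.gz"
echo "${tarball}"
# prints: apache-seatunnel-incubating-2.1.0-bin.tar.gz
```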