This is an automated email from the ASF dual-hosted git repository.
kirs pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/incubator-seatunnel.git
The following commit(s) were added to refs/heads/dev by this push:
new d28d4d318 [Doc] add seatunnel engine to start-v2 (#3244)
d28d4d318 is described below
commit d28d4d318cfdf11839b411f540e4af18ed908abd
Author: Eric <[email protected]>
AuthorDate: Mon Nov 7 18:38:52 2022 +0800
[Doc] add seatunnel engine to start-v2 (#3244)
* add seatunnel engine to start-v2
Co-authored-by: Hisoka <[email protected]>
---
docs/en/start-v2/docker.md | 8 ++
docs/en/start-v2/kubernetes.mdx | 270 ++++++++++++++++++++++++++++++++++++++
docs/en/start-v2/local.mdx | 202 ++++++++++++++++++++++++++++
docs/en/start/local.mdx | 6 +-
docs/sidebars.js | 284 +++++++++++++++++++++-------------------
5 files changed, 632 insertions(+), 138 deletions(-)
diff --git a/docs/en/start-v2/docker.md b/docs/en/start-v2/docker.md
new file mode 100644
index 000000000..2553b9977
--- /dev/null
+++ b/docs/en/start-v2/docker.md
@@ -0,0 +1,8 @@
+---
+sidebar_position: 3
+---
+
+# Set Up with Docker
+
+<!-- TODO -->
+WIP
\ No newline at end of file
diff --git a/docs/en/start-v2/kubernetes.mdx b/docs/en/start-v2/kubernetes.mdx
new file mode 100644
index 000000000..88e008a7d
--- /dev/null
+++ b/docs/en/start-v2/kubernetes.mdx
@@ -0,0 +1,270 @@
+---
+sidebar_position: 4
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Set Up with Kubernetes
+
+This section provides a quick guide to using SeaTunnel with Kubernetes.
+
+## Prerequisites
+
+We assume that you have local installations of the following:
+
+- [docker](https://docs.docker.com/)
+- [kubernetes](https://kubernetes.io/)
+- [helm](https://helm.sh/docs/intro/quickstart/)
+
+This ensures that the `kubectl` and `helm` commands are available on your local system.
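+
+To quickly confirm that these commands are available (an optional sanity check):
+
+```bash
+command -v docker kubectl helm
+```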
+
+For Kubernetes, [minikube](https://minikube.sigs.k8s.io/docs/start/) is our choice; at the time of writing we are using version v1.23.3. You can start a cluster with the following command:
+
+```bash
+minikube start --kubernetes-version=v1.23.3
+```
+
+## Installation
+
+### SeaTunnel docker image
+
+To run the image with SeaTunnel, first create a `Dockerfile`:
+
+<Tabs
+ groupId="engine-type"
+ defaultValue="flink"
+ values={[
+ {label: 'Flink', value: 'flink'},
+ ]}>
+<TabItem value="flink">
+
+```Dockerfile
+FROM flink:1.13
+
+ENV SEATUNNEL_VERSION="2.3.0-beta"
+ENV SEATUNNEL_HOME="/opt/seatunnel"
+
+RUN mkdir -p $SEATUNNEL_HOME
+
+RUN wget https://archive.apache.org/dist/incubator/seatunnel/${SEATUNNEL_VERSION}/apache-seatunnel-incubating-${SEATUNNEL_VERSION}-bin.tar.gz
+RUN tar -xzvf apache-seatunnel-incubating-${SEATUNNEL_VERSION}-bin.tar.gz
+
+RUN cp -r apache-seatunnel-incubating-${SEATUNNEL_VERSION}/* $SEATUNNEL_HOME/
+RUN rm -rf apache-seatunnel-incubating-${SEATUNNEL_VERSION}*
+RUN rm -rf $SEATUNNEL_HOME/connectors/seatunnel
+```
+
+Then run the following commands to build the image:
+```bash
+docker build -t seatunnel:2.3.0-beta-flink-1.13 -f Dockerfile .
+```
+The image `seatunnel:2.3.0-beta-flink-1.13` needs to be present on the host (minikube) so that the deployment can take place.
+
+Load image to minikube via:
+```bash
+minikube image load seatunnel:2.3.0-beta-flink-1.13
+```
+
+</TabItem>
+</Tabs>
+
+### Deploying the operator
+
+<Tabs
+ groupId="engine-type"
+ defaultValue="flink"
+ values={[
+ {label: 'Flink', value: 'flink'},
+ ]}>
+<TabItem value="flink">
+
+The steps below provide a quick walk-through on setting up the Flink Kubernetes Operator.
+
+Install the certificate manager on your Kubernetes cluster to enable adding the webhook component (only needed once per Kubernetes cluster):
+
+```bash
+kubectl create -f https://github.com/jetstack/cert-manager/releases/download/v1.7.1/cert-manager.yaml
+```
+Now you can deploy the latest stable Flink Kubernetes Operator version using the included Helm chart:
+
+```bash
+helm repo add flink-operator-repo https://downloads.apache.org/flink/flink-kubernetes-operator-0.1.0/
+
+helm install flink-kubernetes-operator flink-operator-repo/flink-kubernetes-operator
+```
+
+You may verify your installation via `kubectl`:
+
+```bash
+kubectl get pods
+NAME                                         READY   STATUS    RESTARTS      AGE
+flink-kubernetes-operator-5f466b8549-mgchb   1/1     Running   3 (23h ago)   16d
+```
+
+</TabItem>
+</Tabs>
+
+## Run SeaTunnel Application
+
+**Run Application**: SeaTunnel already provides out-of-the-box [configurations](https://github.com/apache/incubator-seatunnel/tree/dev/config).
+
+<Tabs
+ groupId="engine-type"
+ defaultValue="flink"
+ values={[
+ {label: 'Flink', value: 'flink'},
+ ]}>
+<TabItem value="flink">
+
+In this guide we are going to use [flink.streaming.conf](https://github.com/apache/incubator-seatunnel/blob/dev/config/flink.streaming.conf.template):
+
+ ```conf
+env {
+ execution.parallelism = 1
+}
+
+source {
+ FakeSourceStream {
+ result_table_name = "fake"
+ field_name = "name,age"
+ }
+}
+
+transform {
+ sql {
+ sql = "select name,age from fake"
+ }
+}
+
+sink {
+ ConsoleSink {}
+}
+ ```
+
+This configuration needs to be present when we deploy the application (SeaTunnel) to the Flink cluster (on Kubernetes). We also need to configure a Pod to use a PersistentVolume for storage.
+- Create `/mnt/data` on your Node. Open a shell to the single Node in your cluster. How you open a shell depends on how you set up your cluster; in our case we are using Minikube, so you can open a shell to your Node by entering `minikube ssh`. In your shell on that Node, create a `/mnt/data` directory:
+```bash
+minikube ssh
+
+# This assumes that your Node uses "sudo" to run commands
+# as the superuser
+sudo mkdir /mnt/data
+```
+- Copy application (SeaTunnel) configuration files to your Node.
+```bash
+minikube cp flink.streaming.conf /mnt/data/flink.streaming.conf
+```
+
+Once the Flink Kubernetes Operator is running, as shown in the previous steps, you are ready to submit a Flink (SeaTunnel) job:
+- Create `seatunnel-flink.yaml` FlinkDeployment manifest:
+```yaml
+apiVersion: flink.apache.org/v1alpha1
+kind: FlinkDeployment
+metadata:
+ namespace: default
+ name: seatunnel-flink-streaming-example
+spec:
+ image: seatunnel:2.3.0-beta-flink-1.13
+ flinkVersion: v1_14
+ flinkConfiguration:
+ taskmanager.numberOfTaskSlots: "2"
+ serviceAccount: flink
+ jobManager:
+ replicas: 1
+ resource:
+ memory: "2048m"
+ cpu: 1
+ taskManager:
+ resource:
+ memory: "2048m"
+ cpu: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: flink-main-container
+ volumeMounts:
+ - mountPath: /data
+ name: config-volume
+ volumes:
+ - name: config-volume
+ hostPath:
+ path: "/mnt/data"
+ type: Directory
+
+ job:
+ jarURI: local:///opt/seatunnel/lib/seatunnel-flink-starter.jar
+ entryClass: org.apache.seatunnel.core.starter.flink.SeatunnelFlink
+ args: ["--config", "/data/flink.streaming.conf"]
+ parallelism: 2
+ upgradeMode: stateless
+
+```
+- Run the example application:
+```bash
+kubectl apply -f seatunnel-flink.yaml
+```
+</TabItem>
+</Tabs>
+
+**See The Output**
+
+<Tabs
+ groupId="engine-type"
+ defaultValue="flink"
+ values={[
+ {label: 'Flink', value: 'flink'},
+ ]}>
+<TabItem value="flink">
+
+You may follow the logs of your job; after a successful startup (which can take on the order of a minute in a fresh environment, seconds afterwards) you can run:
+
+```bash
+kubectl logs -f deploy/seatunnel-flink-streaming-example
+```
+
+To expose the Flink Dashboard you may add a port-forward rule:
+```bash
+kubectl port-forward svc/seatunnel-flink-streaming-example-rest 8081
+```
+Now the Flink Dashboard is accessible at [localhost:8081](http://localhost:8081).
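+
+If you prefer the command line, you can confirm the dashboard is reachable through its REST API (assuming the port-forward above is active; `/overview` is a standard Flink REST endpoint):
+
+```bash
+curl -s http://localhost:8081/overview
+```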
+
+Or launch `minikube dashboard` for a web-based Kubernetes user interface.
+
+The content printed in the TaskManager Stdout log:
+```bash
+kubectl logs \
+-l 'app in (seatunnel-flink-streaming-example), component in (taskmanager)' \
+--tail=-1 \
+-f
+```
+looks like the following (your content may differ, since we use `FakeSourceStream` to automatically generate random stream data):
+
+```shell
++I[Kid Xiong, 1650316786086]
++I[Ricky Huo, 1650316787089]
++I[Ricky Huo, 1650316788089]
++I[Ricky Huo, 1650316789090]
++I[Kid Xiong, 1650316790090]
++I[Kid Xiong, 1650316791091]
++I[Kid Xiong, 1650316792092]
+```
+
+To stop your job and delete your FlinkDeployment you can simply:
+
+```bash
+kubectl delete -f seatunnel-flink.yaml
+```
+</TabItem>
+</Tabs>
+
+
+Happy SeaTunneling!
+
+## What's More
+
+For now, you have taken a quick look at SeaTunnel. See [connector](/category/connector) to find all sources and sinks SeaTunnel supports.
+Or see [deployment](../deployment.mdx) if you want to submit your application to another kind of engine cluster.
diff --git a/docs/en/start-v2/local.mdx b/docs/en/start-v2/local.mdx
new file mode 100644
index 000000000..082b3a8d0
--- /dev/null
+++ b/docs/en/start-v2/local.mdx
@@ -0,0 +1,202 @@
+---
+sidebar_position: 2
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Set Up Locally
+
+> Let's take an application that randomly generates data in memory, processes it through SQL, and finally outputs it to the console as an example.
+
+## Step 1: Prepare the environment
+
+Before you get started with the local run, you need to make sure you have already installed the following software, which SeaTunnel requires:
+
+* [Java](https://www.java.com/en/download/) (Java 8 or 11; other versions greater than Java 8 should theoretically work as well) installed, with `JAVA_HOME` set.
+* An engine: choose and download one of the engines below, as you prefer. You can see more information about [why we need an engine in SeaTunnel](../faq.md#why-i-should-install-computing-engine-like-spark-or-flink)
+* Spark: please [download Spark](https://spark.apache.org/downloads.html) first (**required version >= 2 and < 3.x**). For more information, see [Getting Started: standalone](https://spark.apache.org/docs/latest/spark-standalone.html#installing-spark-standalone-to-a-cluster)
+* Flink: please [download Flink](https://flink.apache.org/downloads.html) first (**required version >= 1.12.0 and < 1.14.x**). For more information, see [Getting Started: standalone](https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/resource-providers/standalone/overview/)
+
+## Step 2: Download SeaTunnel
+
+Enter the [SeaTunnel download page](https://seatunnel.apache.org/download) and download the latest version of the distribution package `seatunnel-<version>-bin.tar.gz`.
+
+Or you can download it from the terminal:
+
+```shell
+export version="2.3.0-beta"
+wget "https://archive.apache.org/dist/incubator/seatunnel/${version}/apache-seatunnel-incubating-${version}-bin.tar.gz"
+tar -xzvf "apache-seatunnel-incubating-${version}-bin.tar.gz"
+```
+<!-- TODO: We should add example module as quick start which is no need for
install Spark or Flink -->
+
+## Step 3: Install connector plugins
+Since 2.2.0-beta, the binary package does not provide connector dependencies by default, so when using it for the first time, you need to execute the following command to install the connectors. (Of course, you can also manually download the connectors from the [Apache Maven Repository](https://repo.maven.apache.org/maven2/org/apache/seatunnel/), then manually move them to the `seatunnel` subdirectory under the `connectors` directory.)
+```bash
+sh bin/install_plugin.sh
+```
+If you need to specify the version of the connectors, taking 2.3.0-beta as an example, execute:
+```bash
+sh bin/install_plugin.sh 2.3.0-beta
+```
+Usually you don't need all the connector plugins, so you can specify the plugins you need in `config/plugin_config`. For example, if you only need the `connector-console` plugin, you can modify `plugin_config` as:
+```plugin_config
+--seatunnel-connectors--
+connector-console
+--end--
+```
+If we want our sample application to work properly, we need to add the following plugins:
+
+```plugin_config
+--seatunnel-connectors--
+connector-fake
+connector-console
+--end--
+```
+
+You can find all supported connectors and the corresponding plugin_config configuration names under `${SEATUNNEL_HOME}/connectors/plugins-mapping.properties`.
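+
+For example, to look up the plugin name for the console connector you can search the mapping file (a quick illustration; the exact key names depend on your SeaTunnel version):
+
+```bash
+grep -i console "${SEATUNNEL_HOME}/connectors/plugins-mapping.properties"
+```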
+
+:::tip
+
+If you want to install the connector plugins by manually downloading the connectors, you need to pay special attention to the following.
+
+The connectors directory contains the following subdirectories; if they do not exist, you need to create them manually:
+
+```
+flink
+flink-sql
+seatunnel
+spark
+```
+
+If you want to install the V2 connector plugins manually, you only need to download the V2 connector plugins you need and put them in the `seatunnel` directory.
+
+:::
+
+## Step 4: Configure SeaTunnel Application
+
+### Spark or Flink
+
+**Configure SeaTunnel**: Change the settings in `config/seatunnel-env.sh`; they are based on the path your engine was installed to in [Step 1](#step-1-prepare-the-environment).
+Change `SPARK_HOME` if you're using Spark as your engine, or change `FLINK_HOME` if you're using Flink.
+
+### SeaTunnel Engine
+
+SeaTunnel Engine is the default engine for SeaTunnel; you do not need to do any additional configuration.
+
+### Add Job Config File to define a job
+
+Edit `config/seatunnel.streaming.conf.template`, which determines the way and logic of data input, processing, and output after SeaTunnel is started.
+The following is an example of the configuration file, which is the same as the example application mentioned above.
+
+```hocon
+env {
+ execution.parallelism = 1
+ job.mode = "BATCH"
+}
+
+source {
+ FakeSource {
+ result_table_name = "fake"
+ row.num = 16
+ schema = {
+ fields {
+ name = "string"
+ age = "int"
+ }
+ }
+ }
+}
+
+transform {
+
+}
+
+sink {
+ Console {}
+}
+
+```
+
+For more information about config, please check [config concept](../concept/config)
+
+## Step 5: Run SeaTunnel Application
+
+You can start the application with the following commands:
+
+<Tabs
+ groupId="engine-type"
+ defaultValue="spark"
+ values={[
+ {label: 'Spark', value: 'spark'},
+ {label: 'Flink', value: 'flink'},
+ {label: 'SeaTunnel Engine', value: 'SeaTunnel Engine'},
+ ]}>
+
+<TabItem value="spark">
+
+ ```shell
+ cd "apache-seatunnel-incubating-${version}"
+ ./bin/start-seatunnel-spark-connector-v2.sh \
+ --master local[4] \
+ --deploy-mode client \
+ --config ./config/seatunnel.streaming.conf.template
+ ```
+</TabItem>
+
+<TabItem value="flink">
+
+ ```shell
+ cd "apache-seatunnel-incubating-${version}"
+ ./bin/start-seatunnel-flink-connector-v2.sh \
+ --config ./config/seatunnel.streaming.conf.template
+ ```
+
+</TabItem>
+
+<TabItem value="SeaTunnel Engine">
+
+ ```shell
+ cd "apache-seatunnel-incubating-${version}"
+ ./bin/seatunnel.sh \
+ --config ./config/seatunnel.streaming.conf.template -e local
+ ```
+</TabItem>
+
+</Tabs>
+
+**See The Output**: When you run the command, you can see its output in your console or in the Flink/Spark UI. You can treat this as a sign of whether the command ran successfully or not.
+
+The SeaTunnel console will print some logs as below:
+
+```shell
+fields : name, age
+types : STRING, INT
+row=1 : elWaB, 1984352560
+row=2 : uAtnp, 762961563
+row=3 : TQEIB, 2042675010
+row=4 : DcFjo, 593971283
+row=5 : SenEb, 2099913608
+row=6 : DHjkg, 1928005856
+row=7 : eScCM, 526029657
+row=8 : sgOeE, 600878991
+row=9 : gwdvw, 1951126920
+row=10 : nSiKE, 488708928
+row=11 : xubpl, 1420202810
+row=12 : rHZqb, 331185742
+row=13 : rciGD, 1112878259
+row=14 : qLhdI, 1457046294
+row=15 : ZTkRx, 1240668386
+row=16 : SGZCr, 94186144
+```
+
+If you use Flink, the content is printed in the TaskManager stdout log of the Flink WebUI.
+
+## What's More
+
+For now, you have taken a quick look at SeaTunnel. See [connector](/category/connector) to find all sources and sinks SeaTunnel supports. Or see [deployment](../deployment.mdx) if you want to submit your application to another kind of engine cluster.
diff --git a/docs/en/start/local.mdx b/docs/en/start/local.mdx
index 243aa9233..7b0de23f8 100644
--- a/docs/en/start/local.mdx
+++ b/docs/en/start/local.mdx
@@ -32,13 +32,13 @@ tar -xzvf
"apache-seatunnel-incubating-${version}-bin.tar.gz"
<!-- TODO: We should add example module as quick start which is no need for
install Spark or Flink -->
## Install connectors plugin
-Since 2.2.0-beta, the binary package does not provide connector dependencies
by default, so when using it for the first time, we need to execute the
following command to install the connector: (Of course, you can also manually
download the connector from [Apache Maven
Repository](https://repo.maven.apache.org/maven2/org/apache/seatunnel/ to
download, then manually move to the connectors directory).
+Since 2.3.0-beta, the binary package does not provide connector dependencies by default, so when using it for the first time, we need to execute the following command to install the connector. (Of course, you can also manually download the connector from the [Apache Maven Repository](https://repo.maven.apache.org/maven2/org/apache/seatunnel/), then manually move it to the connectors directory.)
```bash
sh bin/install-plugin.sh
```
-If you need to specify the version of the connector, take 2.2.0-beta as an
example, we need to execute
+If you need to specify the version of the connector, take 2.3.0-beta as an
example, we need to execute
```bash
-sh bin/install-plugin.sh 2.2.0-beta
+sh bin/install_plugin.sh 2.3.0-beta
```
Usually we don't need all the connector plugins, so you can specify the
plugins you need by configuring `config/plugin_config`, for example, you only
need the `flink-assert` plugin, then you can modify plugin.properties as
```plugin_config
diff --git a/docs/sidebars.js b/docs/sidebars.js
index 734e35539..c7da60b3f 100644
--- a/docs/sidebars.js
+++ b/docs/sidebars.js
@@ -44,173 +44,187 @@ const sidebars = {
],
*/
- docs: [
+ "docs": [
{
- type: 'category',
- label: 'Introduction',
- items: [
- 'intro/about',
- 'intro/why',
- 'intro/history',
- ],
+ "type": "category",
+ "label": "Introduction",
+ "items": [
+ "intro/about",
+ "intro/why",
+ "intro/history"
+ ]
},
{
- type: 'category',
- label: 'Quick Start',
- link: {
- type: 'generated-index',
- title: 'Quick Start for SeaTunnel',
- description: 'In this section, you could learn how to get up
and running Apache SeaTunnel in both locally or in Docker environment.',
- slug: '/category/start',
- keywords: ['start'],
- image: '/img/favicon.ico',
+ "type": "category",
+ "label": "Quick Start",
+ "link": {
+ "type": "generated-index",
+ "title": "Quick Start for SeaTunnel",
+ "description": "In this section, you could learn how to get up
and running Apache SeaTunnel in both locally or in Docker environment.",
+ "slug": "/category/start",
+ "keywords": ["start"],
+ "image": "/img/favicon.ico"
},
- items: [
- 'start/local',
- 'start/docker',
- 'start/kubernetes'
- ],
+ "items": [
+ "start/local",
+ "start/docker",
+ "start/kubernetes"
+ ]
},
{
- type: 'category',
- label: 'Concept',
- items: [
- 'concept/config',
- 'concept/connector-v2-features',
- ],
+ "type": "category",
+ "label": "Quick Start - V2",
+ "link": {
+ "type": "generated-index",
+ "title": "Quick Start(V2) for SeaTunnel",
+ "description": "In this section, you could learn how to get up
and running Apache SeaTunnel in both locally or in Docker environment.",
+ "slug": "/category/start-v2",
+ "keywords": ["start"],
+ "image": "/img/favicon.ico"
+ },
+ "items": [
+ "start-v2/local",
+ "start-v2/docker",
+ "start-v2/kubernetes"
+ ]
},
- 'Connector-v2-release-state',
{
- type: 'category',
- label: 'Connector-V2',
- items: [
+ "type": "category",
+ "label": "Concept",
+ "items": [
+ "concept/config",
+ "concept/connector-v2-features"
+ ]
+ },
+ {
+ "type": "category",
+ "label": "Connector",
+ "items": [
{
- type: 'category',
- label: 'Sink',
- link: {
- type: 'generated-index',
- title: 'Sink-V2 of SeaTunnel',
- description: 'List all Sink supported Apache SeaTunnel
for now.',
- // Should remove the `v2` suffix when we migrate all
sink to v2 and delete the old one
- slug: '/category/sink-v2',
- keywords: ['sink'],
- image: '/img/favicon.ico',
+ "type": "category",
+ "label": "Source",
+ "link": {
+ "type": "generated-index",
+ "title": "Source of SeaTunnel",
+ "description": "List all source supported Apache
SeaTunnel for now.",
+ "slug": "/category/source",
+ "keywords": ["source"],
+ "image": "/img/favicon.ico"
},
- items: [
+ "items": [
{
- type: 'autogenerated',
- dirName: 'connector-v2/sink',
- },
- ],
+ "type": "autogenerated",
+ "dirName": "connector/source"
+ }
+ ]
},
{
- type: 'category',
- label: 'Source',
- link: {
- type: 'generated-index',
- title: 'Source-V2 of SeaTunnel',
- description: 'List all source supported Apache
SeaTunnel for now.',
- // Should remove the `v2` suffix when we migrate all
sink to v2 and delete the old one
- slug: '/category/source-v2',
- keywords: ['source'],
- image: '/img/favicon.ico',
+ "type": "category",
+ "label": "Sink",
+ "link": {
+ "type": "generated-index",
+ "title": "Sink of SeaTunnel",
+ "description": "List all sink supported Apache
SeaTunnel for now.",
+ "slug": "/category/sink",
+ "keywords": ["sink"],
+ "image": "/img/favicon.ico"
},
- items: [
+ "items": [
{
- type: 'autogenerated',
- dirName: 'connector-v2/source',
- },
- ],
-
+ "type": "autogenerated",
+ "dirName": "connector/sink"
+ }
+ ]
},
- ],
- },
- {
- type: 'category',
- label: 'Connector',
- items: [
{
- type: 'category',
- label: 'Source',
- link: {
- type: 'generated-index',
- title: 'Source of SeaTunnel',
- description: 'List all source supported Apache
SeaTunnel for now.',
- slug: '/category/source',
- keywords: ['source'],
- image: '/img/favicon.ico',
+ "type": "category",
+ "label": "flink-sql",
+ "link": {
+ "type": "generated-index",
+ "title": "Flink-sql of SeaTunnel",
+ "description": "List all flink-sql supported Apache
SeaTunnel for now.",
+ "slug": "/category/flink-sql",
+ "keywords": ["flink-sql"],
+ "image": "/img/favicon.ico"
},
- items: [
+ "items": [
{
- type: 'autogenerated',
- dirName: 'connector/source',
- },
- ],
- },
+ "type": "autogenerated",
+ "dirName": "connector/flink-sql"
+ }
+ ]
+ }
+ ]
+ },
+ "Connector-v2-release-state",
+ {
+ "type": "category",
+ "label": "Connector-V2",
+ "items": [
{
- type: 'category',
- label: 'Sink',
- link: {
- type: 'generated-index',
- title: 'Sink of SeaTunnel',
- description: 'List all sink supported Apache SeaTunnel
for now.',
- slug: '/category/sink',
- keywords: ['sink'],
- image: '/img/favicon.ico',
+ "type": "category",
+ "label": "Source",
+ "link": {
+ "type": "generated-index",
+ "title": "Source(V2) of SeaTunnel",
+ "description": "List all source(v2) supported Apache
SeaTunnel for now.",
+ "slug": "/category/source-v2",
+ "keywords": ["source"],
+ "image": "/img/favicon.ico"
},
- items: [
+ "items": [
{
- type: 'autogenerated',
- dirName: 'connector/sink',
- },
- ],
+ "type": "autogenerated",
+ "dirName": "connector-v2/source"
+ }
+ ]
},
{
- type: 'category',
- label: 'flink-sql',
- link: {
- type: 'generated-index',
- title: 'Flink-sql of SeaTunnel',
- description: 'List all flink-sql supported Apache
SeaTunnel for now.',
- slug: '/category/flink-sql',
- keywords: ['flink-sql'],
- image: '/img/favicon.ico',
+ "type": "category",
+ "label": "Sink",
+ "link": {
+ "type": "generated-index",
+ "title": "Sink(V2) of SeaTunnel",
+ "description": "List all sink(v2) supported Apache
SeaTunnel for now.",
+ "slug": "/category/sink-v2",
+ "keywords": ["sink"],
+ "image": "/img/favicon.ico"
},
- items: [
+ "items": [
{
- type: 'autogenerated',
- dirName: 'connector/flink-sql',
- },
- ],
- },
- ],
+ "type": "autogenerated",
+ "dirName": "connector-v2/sink"
+ }
+ ]
+ }
+ ]
},
{
- type: 'category',
- label: 'Transform',
- link: {
- type: 'generated-index',
- title: 'Transform of SeaTunnel',
- description: 'List all transform supported Apache SeaTunnel
for now.',
- slug: '/category/transform',
- keywords: ['transform'],
- image: '/img/favicon.ico',
+ "type": "category",
+ "label": "Transform",
+ "link": {
+ "type": "generated-index",
+ "title": "Transform of SeaTunnel",
+ "description": "List all transform supported Apache SeaTunnel
for now.",
+ "slug": "/category/transform",
+ "keywords": ["transform"],
+ "image": "/img/favicon.ico"
},
- items: [
+ "items": [
{
- type: 'autogenerated',
- dirName: 'transform',
- },
- ],
+ "type": "autogenerated",
+ "dirName": "transform"
+ }
+ ]
},
{
- type: 'category',
- label: 'Command',
- items: [
- 'command/usage',
- ],
+ "type": "category",
+ "label": "Command",
+ "items": [
+ "command/usage"
+ ]
},
- 'deployment',
+ "deployment",
{
type: 'category',
label: 'Contribution',
@@ -221,7 +235,7 @@ const sidebars = {
'contribution/contribute-transform-v2-guide',
],
},
- 'faq',
+ "faq"
]
};