This is an automated email from the ASF dual-hosted git repository.
tyrantlucifer pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/incubator-seatunnel.git
The following commit(s) were added to refs/heads/dev by this push:
new 89aab1a6e [Improve][Doc] Add key features in connector documents
(#2625)
89aab1a6e is described below
commit 89aab1a6e3622344a7f7656eb74430ca8f34474b
Author: Eric <[email protected]>
AuthorDate: Sat Sep 3 22:03:31 2022 +0800
[Improve][Doc] Add key features in connector documents (#2625)
* Add key features in connector documents
* Add key features to a new file and link it to all connector documents
* Add key features to a new file and link it to all connector documents
* Add more information to connector-v2-features.md
* fix review problem
* fix review problem
---
docs/en/concept/connector-v2-features.md | 65 ++++++++++++++++++++++++++
docs/en/connector-v2/sink/Assert.md | 5 ++
docs/en/connector-v2/sink/Clickhouse.md | 10 +++-
docs/en/connector-v2/sink/ClickhouseFile.md | 5 ++
docs/en/connector-v2/sink/Datahub.md | 5 ++
docs/en/connector-v2/sink/Elasticsearch.md | 5 ++
docs/en/connector-v2/{source => sink}/Email.md | 5 ++
docs/en/connector-v2/sink/Enterprise-WeChat.md | 4 ++
docs/en/connector-v2/sink/Feishu.md | 5 ++
docs/en/connector-v2/sink/FtpFile.md | 5 ++
docs/en/connector-v2/sink/Greenplum.md | 5 ++
docs/en/connector-v2/sink/HdfsFile.md | 16 ++++++-
docs/en/connector-v2/sink/Hive.md | 12 +++++
docs/en/connector-v2/sink/Http.md | 7 ++-
docs/en/connector-v2/sink/IoTDB.md | 11 ++++-
docs/en/connector-v2/sink/Jdbc.md | 9 ++++
docs/en/connector-v2/sink/Kudu.md | 5 ++
docs/en/connector-v2/sink/LocalFile.md | 16 ++++++-
docs/en/connector-v2/sink/Neo4j.md | 5 ++
docs/en/connector-v2/sink/Phoenix.md | 5 ++
docs/en/connector-v2/sink/Socket.md | 4 ++
docs/en/connector-v2/sink/dingtalk.md | 5 ++
docs/en/connector-v2/source/Clickhouse.md | 14 +++++-
docs/en/connector-v2/source/FakeSource.md | 9 ++++
docs/en/connector-v2/source/Greenplum.md | 12 +++++
docs/en/connector-v2/source/HdfsFile.md | 15 ++++++
docs/en/connector-v2/source/Http.md | 11 ++++-
docs/en/connector-v2/source/Hudi.md | 12 +++++
docs/en/connector-v2/source/IoTDB.md | 14 +++++-
docs/en/connector-v2/source/Jdbc.md | 14 +++++-
docs/en/connector-v2/source/Kudu.md | 11 ++++-
docs/en/connector-v2/source/LocalFile.md | 15 ++++++
docs/en/connector-v2/source/OssFile.md | 16 +++++++
docs/en/connector-v2/source/Phoenix.md | 12 +++++
docs/en/connector-v2/source/Redis.md | 11 ++++-
docs/en/connector-v2/source/Socket.md | 11 ++++-
docs/en/connector-v2/source/pulsar.md | 11 ++++-
37 files changed, 389 insertions(+), 13 deletions(-)
diff --git a/docs/en/concept/connector-v2-features.md
b/docs/en/concept/connector-v2-features.md
new file mode 100644
index 000000000..d400722fa
--- /dev/null
+++ b/docs/en/concept/connector-v2-features.md
@@ -0,0 +1,65 @@
+# Intro To Connector V2 Features
+
+## Differences Between Connector V2 And Connector V1
+
+Since https://github.com/apache/incubator-seatunnel/issues/1608, we have added the Connector V2 features.
+Connector V2 is a connector defined based on the SeaTunnel Connector API interface. Unlike Connector V1, Connector V2 supports the following features.
+
+* **Multi Engine Support** The SeaTunnel Connector API is an engine-independent API. Connectors developed against this API can run in multiple engines. Currently Flink and Spark are supported, and we will support other engines in the future.
+* **Multi Engine Version Support** Decoupling the connector from the engine through a translation layer solves the problem that most connectors must modify their code in order to support a new version of the underlying engine.
+* **Unified Batch And Stream** Connector V2 can perform batch processing or streaming processing. We do not need to develop separate connectors for batch and stream.
+* **Multiplexing JDBC/Log connections** Connector V2 supports JDBC resource reuse and sharing database log parsing.
+
+## Source Connector Features
+
+Source connectors have some common core features, and each source connector
supports them to varying degrees.
+
+### exactly-once
+
+If each piece of data in the data source is sent downstream by the source only once, we consider this source connector to support exactly-once.
+
+In SeaTunnel, we can save the read **Split** and its **offset** (the position of the read data in the split at that time, such as line number, byte size, offset, etc.) as a **StateSnapshot** at checkpoint time. If the task is restarted, we will get the last **StateSnapshot**,
+locate the **Split** and **offset** that were read last time, and continue to send data downstream.
+
+For example `File`, `Kafka`.
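
The checkpoint/restore cycle described above can be sketched in plain Python (an illustrative sketch only, not the SeaTunnel API; the class and field names are invented for the example):

```python
# Illustrative sketch: saving a split and its offset as a state snapshot lets
# a restarted source resume where it left off, so no record is sent twice.

class FileSplitReader:
    """Reads one 'split' (here: a list of records) and tracks its offset."""

    def __init__(self, split, offset=0):
        self.split = split        # the data assigned to this reader
        self.offset = offset      # position of the next unread record

    def snapshot_state(self):
        # Saved at checkpoint time; enough to relocate the split and offset.
        return {"split": self.split, "offset": self.offset}

    def read_next(self):
        record = self.split[self.offset]
        self.offset += 1
        return record

# First run: read two records, take a checkpoint, then "crash".
reader = FileSplitReader(["a", "b", "c", "d"])
sent = [reader.read_next(), reader.read_next()]
checkpoint = reader.snapshot_state()

# Restart: restore from the snapshot and continue; nothing is re-sent.
restored = FileSplitReader(checkpoint["split"], checkpoint["offset"])
while restored.offset < len(restored.split):
    sent.append(restored.read_next())

print(sent)  # each record was sent downstream exactly once
```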
+
+### schema projection
+
+If the source connector supports reading only selected columns, redefining column order, or defining the data format read through `schema` params, we consider it to support schema projection.
+
+For example, `JDBCSource` can use SQL to define the read columns, and `KafkaSource` can use `schema` params to define the read schema.
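
The effect can be sketched as follows (illustrative Python only; the row data and field names are invented for the example):

```python
# Illustrative sketch: schema projection means the reader emits only the
# requested columns, in the requested order.

rows = [
    {"id": 1, "name": "seatunnel", "age": 3},
    {"id": 2, "name": "flink", "age": 9},
]

schema = ["name", "id"]  # select a subset of columns and redefine their order

projected = [{field: row[field] for field in schema} for row in rows]
print(projected)  # [{'name': 'seatunnel', 'id': 1}, {'name': 'flink', 'id': 2}]
```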
+
+### batch
+
+Batch job mode: the data read is bounded and the job will stop when all data has been read.
+
+### stream
+
+Streaming job mode: the data read is unbounded and the job never stops.
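
The difference between the two modes can be sketched as two read loops (illustrative Python only, not the SeaTunnel API):

```python
# Illustrative sketch: a bounded (batch) read stops when the data is
# exhausted, while an unbounded (stream) read keeps polling for new data.

def read_bounded(source):
    out = []
    for record in source:      # finite: the loop ends when all data is read
        out.append(record)
    return out                 # the job can now stop

def read_unbounded(poll, max_polls):
    # A real streaming job would loop forever; max_polls keeps the sketch finite.
    out = []
    for _ in range(max_polls):
        out.extend(poll())     # unbounded: new data may always arrive
    return out

print(read_bounded([1, 2, 3]))          # [1, 2, 3] -> the job finishes
print(read_unbounded(lambda: [0], 2))   # [0, 0] -> would keep going otherwise
```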
+
+### parallelism
+
+A parallelism source connector supports the `parallelism` config option; each degree of parallelism creates a task to read the data.
+In a **parallelism source connector**, the source is split into multiple splits, and then the enumerator allocates the splits to the SourceReaders for processing.
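
The enumerator-to-reader allocation can be sketched as follows (illustrative Python only, not the SeaTunnel API; the split-size and round-robin policy are invented for the example):

```python
# Illustrative sketch: with parallelism N, the enumerator divides the source
# into splits and assigns them to N readers (round-robin here).

def enumerate_splits(total_rows, split_size):
    # Cut the source into contiguous ranges of at most split_size rows.
    return [range(i, min(i + split_size, total_rows))
            for i in range(0, total_rows, split_size)]

def assign(splits, parallelism):
    readers = [[] for _ in range(parallelism)]
    for i, split in enumerate(splits):
        readers[i % parallelism].append(split)  # each parallel task gets splits
    return readers

splits = enumerate_splits(total_rows=10, split_size=3)   # 4 splits
readers = assign(splits, parallelism=2)
print([len(r) for r in readers])  # [2, 2]: work spread across the two tasks
```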
+
+### support user-defined split
+
+Users can configure the split rule.
+
+## Sink Connector Features
+
+Sink connectors have some common core features, and each sink connector
supports them to varying degrees.
+
+### exactly-once
+
+When any piece of data flows into a distributed system, if the system processes it exactly once throughout the whole processing flow and the processing results are correct, the system is considered to meet exactly-once consistency.
+
+A sink connector supports exactly-once if any piece of data is written into the target only once. There are generally two ways to achieve this:
+
+* The target database supports key deduplication. For example `MySQL`, `Kudu`.
+* The target supports **XA Transactions** (a transaction that can be used across sessions: even if the program that created the transaction has ended, a newly started program only needs to know the ID of the last transaction to resubmit or roll back the transaction). We can then use **Two-phase Commit** to ensure **exactly-once**. For example `File`, `MySQL`.
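
The two-phase commit idea can be sketched as follows (illustrative Python only, not the SeaTunnel API; the transaction-id scheme is invented for the example):

```python
# Illustrative sketch: data is first written under a transaction id (prepare);
# it becomes visible only on commit. After a restart, resubmitting the last
# transaction id is safe because commit is idempotent.

class TwoPhaseSink:
    def __init__(self):
        self.pending = {}     # txn_id -> buffered records (not yet visible)
        self.committed = {}   # txn_id -> records visible in the target

    def prepare(self, txn_id, records):
        self.pending[txn_id] = list(records)

    def commit(self, txn_id):
        # Idempotent: committing an already-committed txn changes nothing,
        # so replaying the last txn id after a crash cannot duplicate data.
        if txn_id in self.pending:
            self.committed[txn_id] = self.pending.pop(txn_id)

sink = TwoPhaseSink()
sink.prepare("txn-1", ["a", "b"])
sink.commit("txn-1")
sink.commit("txn-1")   # replay after restart: no duplicates
print(sink.committed)  # {'txn-1': ['a', 'b']}
```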
+
+### schema projection
+
+If a sink connector supports configuring the written fields and their types, or redefining the column order, we consider it to support schema projection.
\ No newline at end of file
diff --git a/docs/en/connector-v2/sink/Assert.md
b/docs/en/connector-v2/sink/Assert.md
index 9e5c49acf..5a1612126 100644
--- a/docs/en/connector-v2/sink/Assert.md
+++ b/docs/en/connector-v2/sink/Assert.md
@@ -6,6 +6,11 @@
A flink sink plugin which can assert illegal data by user defined rules
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [x] [schema projection](../../concept/connector-v2-features.md)
+
## Options
| name | type | required | default value |
diff --git a/docs/en/connector-v2/sink/Clickhouse.md
b/docs/en/connector-v2/sink/Clickhouse.md
index ff1e89989..02ac2246e 100644
--- a/docs/en/connector-v2/sink/Clickhouse.md
+++ b/docs/en/connector-v2/sink/Clickhouse.md
@@ -4,7 +4,15 @@
## Description
-Used to write data to Clickhouse. Supports Batch and Streaming mode.
+Used to write data to Clickhouse.
+
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+
+The Clickhouse sink plugin can achieve exactly-once by implementing idempotent writing, and it needs to cooperate with `AggregatingMergeTree` and other engines that support deduplication.
+
+- [ ] [schema projection](../../concept/connector-v2-features.md)
:::tip
diff --git a/docs/en/connector-v2/sink/ClickhouseFile.md
b/docs/en/connector-v2/sink/ClickhouseFile.md
index f1c6e3024..90e196c92 100644
--- a/docs/en/connector-v2/sink/ClickhouseFile.md
+++ b/docs/en/connector-v2/sink/ClickhouseFile.md
@@ -8,6 +8,11 @@ Generate the clickhouse data file with the clickhouse-local
program, and then se
server, also call bulk load. This connector only support clickhouse table
which engine is 'Distributed'.And `internal_replication` option
should be `true`. Supports Batch and Streaming mode.
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
:::tip
Write data to Clickhouse can also be done using JDBC
diff --git a/docs/en/connector-v2/sink/Datahub.md
b/docs/en/connector-v2/sink/Datahub.md
index 292944cd5..800c2a54b 100644
--- a/docs/en/connector-v2/sink/Datahub.md
+++ b/docs/en/connector-v2/sink/Datahub.md
@@ -6,6 +6,11 @@
A sink plugin which use send message to datahub
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
## Options
| name | type | required | default value |
diff --git a/docs/en/connector-v2/sink/Elasticsearch.md
b/docs/en/connector-v2/sink/Elasticsearch.md
index c8fbb551e..0d743c799 100644
--- a/docs/en/connector-v2/sink/Elasticsearch.md
+++ b/docs/en/connector-v2/sink/Elasticsearch.md
@@ -4,6 +4,11 @@
Output data to `Elasticsearch`.
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
:::tip
Engine Supported
diff --git a/docs/en/connector-v2/source/Email.md
b/docs/en/connector-v2/sink/Email.md
similarity index 92%
rename from docs/en/connector-v2/source/Email.md
rename to docs/en/connector-v2/sink/Email.md
index fdd137117..cc74cf495 100644
--- a/docs/en/connector-v2/source/Email.md
+++ b/docs/en/connector-v2/sink/Email.md
@@ -8,6 +8,11 @@ Send the data as a file to email.
The tested email version is 1.5.6.
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
## Options
| name | type | required | default value |
diff --git a/docs/en/connector-v2/sink/Enterprise-WeChat.md
b/docs/en/connector-v2/sink/Enterprise-WeChat.md
index 303648212..28ec03059 100644
--- a/docs/en/connector-v2/sink/Enterprise-WeChat.md
+++ b/docs/en/connector-v2/sink/Enterprise-WeChat.md
@@ -13,6 +13,10 @@ A sink plugin which use Enterprise WeChat robot send message
> ```
**Tips: WeChat sink only support `string` webhook and the data from source
will be treated as body content in web hook.**
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
## Options
diff --git a/docs/en/connector-v2/sink/Feishu.md
b/docs/en/connector-v2/sink/Feishu.md
index 5359cc588..311a5d7fe 100644
--- a/docs/en/connector-v2/sink/Feishu.md
+++ b/docs/en/connector-v2/sink/Feishu.md
@@ -10,6 +10,11 @@ Used to launch feishu web hooks using data.
**Tips: Feishu sink only support `post json` webhook and the data from source
will be treated as body content in web hook.**
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
## Options
| name | type | required | default value |
diff --git a/docs/en/connector-v2/sink/FtpFile.md
b/docs/en/connector-v2/sink/FtpFile.md
index 0384671c3..009d1bd61 100644
--- a/docs/en/connector-v2/sink/FtpFile.md
+++ b/docs/en/connector-v2/sink/FtpFile.md
@@ -6,7 +6,12 @@
Output data to Ftp .
+## Key features
+- [x] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
+## Options
| name | type | required | default value
|
|----------------------------------|---------|----------|-----------------------------------------------------------|
diff --git a/docs/en/connector-v2/sink/Greenplum.md
b/docs/en/connector-v2/sink/Greenplum.md
index 9317e5c62..91af690d5 100644
--- a/docs/en/connector-v2/sink/Greenplum.md
+++ b/docs/en/connector-v2/sink/Greenplum.md
@@ -6,6 +6,11 @@
Write data to Greenplum using [Jdbc connector](Jdbc.md).
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
:::tip
Not support exactly-once semantics (XA transaction is not yet supported in
Greenplum database).
diff --git a/docs/en/connector-v2/sink/HdfsFile.md
b/docs/en/connector-v2/sink/HdfsFile.md
index aabc00ab4..e2e3b7561 100644
--- a/docs/en/connector-v2/sink/HdfsFile.md
+++ b/docs/en/connector-v2/sink/HdfsFile.md
@@ -4,7 +4,21 @@
## Description
-Output data to hdfs file. Support bounded and unbounded job.
+Output data to hdfs file.
+
+## Key features
+
+- [x] [exactly-once](../../concept/connector-v2-features.md)
+
+By default, we use two-phase commit (2PC) to ensure `exactly-once`.
+
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+- [x] file format
+ - [x] text
+ - [x] csv
+ - [x] parquet
+ - [x] orc
+ - [x] json
## Options
diff --git a/docs/en/connector-v2/sink/Hive.md
b/docs/en/connector-v2/sink/Hive.md
index b5ae8edc4..56b49ad7b 100644
--- a/docs/en/connector-v2/sink/Hive.md
+++ b/docs/en/connector-v2/sink/Hive.md
@@ -8,6 +8,18 @@ Write data to Hive.
In order to use this connector, You must ensure your spark/flink cluster
already integrated hive. The tested hive version is 2.3.9.
+## Key features
+
+- [x] [exactly-once](../../concept/connector-v2-features.md)
+
+By default, we use two-phase commit (2PC) to ensure `exactly-once`.
+
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+- [x] file format
+ - [x] text
+ - [x] parquet
+ - [x] orc
+
## Options
| name | type | required | default value
|
diff --git a/docs/en/connector-v2/sink/Http.md
b/docs/en/connector-v2/sink/Http.md
index c871b9c16..8f4ab2572 100644
--- a/docs/en/connector-v2/sink/Http.md
+++ b/docs/en/connector-v2/sink/Http.md
@@ -4,12 +4,17 @@
## Description
-Used to launch web hooks using data. Both support streaming and batch mode.
+Used to launch web hooks using data.
> For example, if the data from upstream is [`age: 12, name: tyrantlucifer`],
> the body content is the following: `{"age": 12, "name": "tyrantlucifer"}`
**Tips: Http sink only support `post json` webhook and the data from source
will be treated as body content in web hook.**
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
## Options
| name | type | required | default value |
diff --git a/docs/en/connector-v2/sink/IoTDB.md
b/docs/en/connector-v2/sink/IoTDB.md
index 3ea624fd5..31389c03f 100644
--- a/docs/en/connector-v2/sink/IoTDB.md
+++ b/docs/en/connector-v2/sink/IoTDB.md
@@ -4,7 +4,16 @@
## Description
-Used to write data to IoTDB. Supports Batch and Streaming mode.
+Used to write data to IoTDB.
+
+## Key features
+
+- [x] [exactly-once](../../concept/connector-v2-features.md)
+
+IoTDB supports the `exactly-once` feature through idempotent writing. If two
pieces of data have
+the same `key` and `timestamp`, the new data will overwrite the old one.
+
+- [ ] [schema projection](../../concept/connector-v2-features.md)
:::tip
diff --git a/docs/en/connector-v2/sink/Jdbc.md
b/docs/en/connector-v2/sink/Jdbc.md
index e5063c61b..f8c883cec 100644
--- a/docs/en/connector-v2/sink/Jdbc.md
+++ b/docs/en/connector-v2/sink/Jdbc.md
@@ -4,6 +4,15 @@
## Description
Write data through jdbc. Support Batch mode and Streaming mode, support
concurrent writing, support exactly-once semantics (using XA transaction
guarantee).
+
+## Key features
+
+- [x] [exactly-once](../../concept/connector-v2-features.md)
+
+`XA transactions` are used to ensure `exactly-once`, so `exactly-once` is only supported for databases that support `XA transactions`. You can set `is_exactly_once=true` to enable it.
+
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
## Options
| name | type | required | default value |
diff --git a/docs/en/connector-v2/sink/Kudu.md
b/docs/en/connector-v2/sink/Kudu.md
index 9a67831da..ae08b3afa 100644
--- a/docs/en/connector-v2/sink/Kudu.md
+++ b/docs/en/connector-v2/sink/Kudu.md
@@ -8,6 +8,11 @@ Write data to Kudu.
The tested kudu version is 1.11.1.
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
## Options
| name | type | required | default value |
diff --git a/docs/en/connector-v2/sink/LocalFile.md
b/docs/en/connector-v2/sink/LocalFile.md
index 12d5fd54e..0df942e37 100644
--- a/docs/en/connector-v2/sink/LocalFile.md
+++ b/docs/en/connector-v2/sink/LocalFile.md
@@ -4,7 +4,21 @@
## Description
-Output data to local file. Support bounded and unbounded job.
+Output data to local file.
+
+## Key features
+
+- [x] [exactly-once](../../concept/connector-v2-features.md)
+
+By default, we use two-phase commit (2PC) to ensure `exactly-once`.
+
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+- [x] file format
+ - [x] text
+ - [x] csv
+ - [x] parquet
+ - [x] orc
+ - [x] json
## Options
diff --git a/docs/en/connector-v2/sink/Neo4j.md
b/docs/en/connector-v2/sink/Neo4j.md
index 4ab8017fe..519212b01 100644
--- a/docs/en/connector-v2/sink/Neo4j.md
+++ b/docs/en/connector-v2/sink/Neo4j.md
@@ -8,6 +8,11 @@ Write data to Neo4j.
`neo4j-java-driver` version 4.4.9
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
## Options
| name | type | required | default value |
diff --git a/docs/en/connector-v2/sink/Phoenix.md
b/docs/en/connector-v2/sink/Phoenix.md
index 746c54d31..f7383daea 100644
--- a/docs/en/connector-v2/sink/Phoenix.md
+++ b/docs/en/connector-v2/sink/Phoenix.md
@@ -12,6 +12,11 @@ Two ways of connecting Phoenix with Java JDBC. One is to
connect to zookeeper th
> Tips: Not support exactly-once semantics (XA transaction is not yet
> supported in Phoenix).
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
## Options
### driver [string]
diff --git a/docs/en/connector-v2/sink/Socket.md
b/docs/en/connector-v2/sink/Socket.md
index 7339f7b01..498cfa99d 100644
--- a/docs/en/connector-v2/sink/Socket.md
+++ b/docs/en/connector-v2/sink/Socket.md
@@ -7,6 +7,10 @@
Used to send data to Socket Server. Both support streaming and batch mode.
> For example, if the data from upstream is [`age: 12, name: jared`], the
> content send to socket server is the following: `{"name":"jared","age":17}`
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
## Options
diff --git a/docs/en/connector-v2/sink/dingtalk.md
b/docs/en/connector-v2/sink/dingtalk.md
index 6fe0e2a43..e949ae2bc 100644
--- a/docs/en/connector-v2/sink/dingtalk.md
+++ b/docs/en/connector-v2/sink/dingtalk.md
@@ -6,6 +6,11 @@
A sink plugin which use DingTalk robot send message
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
## Options
| name | type | required | default value |
diff --git a/docs/en/connector-v2/source/Clickhouse.md
b/docs/en/connector-v2/source/Clickhouse.md
index 7e761c0ee..e73c621b2 100644
--- a/docs/en/connector-v2/source/Clickhouse.md
+++ b/docs/en/connector-v2/source/Clickhouse.md
@@ -4,7 +4,19 @@
## Description
-Used to read data from Clickhouse. Currently, only supports Batch mode.
+Used to read data from Clickhouse.
+
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [ ] [stream](../../concept/connector-v2-features.md)
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [x] [schema projection](../../concept/connector-v2-features.md)
+
+Supports SQL queries and can achieve the projection effect.
+
+- [ ] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
:::tip
diff --git a/docs/en/connector-v2/source/FakeSource.md
b/docs/en/connector-v2/source/FakeSource.md
index 9c4bf4ffd..3c66ce679 100644
--- a/docs/en/connector-v2/source/FakeSource.md
+++ b/docs/en/connector-v2/source/FakeSource.md
@@ -7,6 +7,15 @@
The FakeSource is a virtual data source, which randomly generates the number
of rows according to the data structure of the user-defined schema,
just for testing, such as type conversion and feature testing
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [x] [stream](../../concept/connector-v2-features.md)
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [x] [schema projection](../../concept/connector-v2-features.md)
+- [ ] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
+
## Options
| name | type | required | default value |
diff --git a/docs/en/connector-v2/source/Greenplum.md
b/docs/en/connector-v2/source/Greenplum.md
index cd140549b..fad156c24 100644
--- a/docs/en/connector-v2/source/Greenplum.md
+++ b/docs/en/connector-v2/source/Greenplum.md
@@ -6,6 +6,18 @@
Read Greenplum data through [Jdbc connector](Jdbc.md).
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [ ] [stream](../../concept/connector-v2-features.md)
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [x] [schema projection](../../concept/connector-v2-features.md)
+
+Supports SQL queries and can achieve the projection effect.
+
+- [x] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
+
:::tip
Optional jdbc drivers:
diff --git a/docs/en/connector-v2/source/HdfsFile.md
b/docs/en/connector-v2/source/HdfsFile.md
index 00bbe5fdd..e6b3fc90c 100644
--- a/docs/en/connector-v2/source/HdfsFile.md
+++ b/docs/en/connector-v2/source/HdfsFile.md
@@ -6,6 +6,21 @@
Read data from hdfs file system.
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [ ] [stream](../../concept/connector-v2-features.md)
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [x] [schema projection](../../concept/connector-v2-features.md)
+- [x] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
+- [x] file format
+ - [x] text
+ - [x] csv
+ - [x] parquet
+ - [x] orc
+ - [x] json
+
## Options
| name | type | required | default value |
diff --git a/docs/en/connector-v2/source/Http.md
b/docs/en/connector-v2/source/Http.md
index 507cf64f1..0fbbc43e1 100644
--- a/docs/en/connector-v2/source/Http.md
+++ b/docs/en/connector-v2/source/Http.md
@@ -4,7 +4,16 @@
## Description
-Used to read data from Http. Both support streaming and batch mode.
+Used to read data from Http.
+
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [x] [stream](../../concept/connector-v2-features.md)
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [x] [schema projection](../../concept/connector-v2-features.md)
+- [ ] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
## Options
diff --git a/docs/en/connector-v2/source/Hudi.md
b/docs/en/connector-v2/source/Hudi.md
index 2fb9f0604..7eae78720 100644
--- a/docs/en/connector-v2/source/Hudi.md
+++ b/docs/en/connector-v2/source/Hudi.md
@@ -8,6 +8,18 @@ Used to read data from Hudi. Currently, only supports hudi cow
table and Snapsho
In order to use this connector, You must ensure your spark/flink cluster
already integrated hive. The tested hive version is 2.3.9.
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+
+Currently, only supports hudi cow table and Snapshot Query with Batch Mode
+
+- [ ] [stream](../../concept/connector-v2-features.md)
+- [x] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+- [x] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
+
## Options
| name | type | required | default value |
diff --git a/docs/en/connector-v2/source/IoTDB.md
b/docs/en/connector-v2/source/IoTDB.md
index cd241e420..01a3487a3 100644
--- a/docs/en/connector-v2/source/IoTDB.md
+++ b/docs/en/connector-v2/source/IoTDB.md
@@ -4,7 +4,19 @@
## Description
-Read external data source data through IoTDB. Currently supports Batch mode.
+Read external data source data through IoTDB.
+
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [ ] [stream](../../concept/connector-v2-features.md)
+- [x] [exactly-once](../../concept/connector-v2-features.md)
+- [x] [schema projection](../../concept/connector-v2-features.md)
+
+Supports SQL queries and can achieve the projection effect.
+
+- [x] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
## Options
diff --git a/docs/en/connector-v2/source/Jdbc.md
b/docs/en/connector-v2/source/Jdbc.md
index 18c075d2c..5f1e47ac9 100644
--- a/docs/en/connector-v2/source/Jdbc.md
+++ b/docs/en/connector-v2/source/Jdbc.md
@@ -4,7 +4,19 @@
## Description
-Read external data source data through JDBC. Currently supports mysql and
Postgres databases, and supports Batch mode.
+Read external data source data through JDBC.
+
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [ ] [stream](../../concept/connector-v2-features.md)
+- [x] [exactly-once](../../concept/connector-v2-features.md)
+- [x] [schema projection](../../concept/connector-v2-features.md)
+
+Supports SQL queries and can achieve the projection effect.
+
+- [x] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
## Options
diff --git a/docs/en/connector-v2/source/Kudu.md
b/docs/en/connector-v2/source/Kudu.md
index 3cb6cff76..22ff42623 100644
--- a/docs/en/connector-v2/source/Kudu.md
+++ b/docs/en/connector-v2/source/Kudu.md
@@ -4,10 +4,19 @@
## Description
-Used to read data from Kudu. Currently, only supports Query with Batch Mode.
+Used to read data from Kudu.
The tested kudu version is 1.11.1.
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [ ] [stream](../../concept/connector-v2-features.md)
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+- [ ] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
+
## Options
| name | type | required | default value |
diff --git a/docs/en/connector-v2/source/LocalFile.md
b/docs/en/connector-v2/source/LocalFile.md
index e6ac6142f..1067f2079 100644
--- a/docs/en/connector-v2/source/LocalFile.md
+++ b/docs/en/connector-v2/source/LocalFile.md
@@ -6,6 +6,21 @@
Read data from local file system.
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [ ] [stream](../../concept/connector-v2-features.md)
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [x] [schema projection](../../concept/connector-v2-features.md)
+- [x] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
+- [x] file format
+ - [x] text
+ - [x] csv
+ - [x] parquet
+ - [x] orc
+ - [x] json
+
## Options
| name | type | required | default value |
diff --git a/docs/en/connector-v2/source/OssFile.md
b/docs/en/connector-v2/source/OssFile.md
index e81914f54..b5eda6dd2 100644
--- a/docs/en/connector-v2/source/OssFile.md
+++ b/docs/en/connector-v2/source/OssFile.md
@@ -9,6 +9,22 @@ Read data from aliyun oss file system.
> Tips: We made some trade-offs in order to support more file types, so we
> used the HDFS protocol for internal access to OSS and this connector need
> some hadoop dependencies.
> It's only support hadoop version **2.9.X+**.
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [ ] [stream](../../concept/connector-v2-features.md)
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [x] [schema projection](../../concept/connector-v2-features.md)
+- [x] file format
+ - [x] text
+ - [x] csv
+ - [x] parquet
+ - [x] orc
+ - [x] json
+
+- [x] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
+
## Options
| name | type | required | default value |
diff --git a/docs/en/connector-v2/source/Phoenix.md
b/docs/en/connector-v2/source/Phoenix.md
index 9d68f70ce..a82196ea3 100644
--- a/docs/en/connector-v2/source/Phoenix.md
+++ b/docs/en/connector-v2/source/Phoenix.md
@@ -10,6 +10,18 @@ Two ways of connecting Phoenix with Java JDBC. One is to
connect to zookeeper th
> Tips: By default, the (thin) driver jar is used. If you want to use the
> (thick) driver or other versions of Phoenix (thin) driver, you need to
> recompile the jdbc connector module
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [x] [stream](../../concept/connector-v2-features.md)
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [x] [schema projection](../../concept/connector-v2-features.md)
+
+Supports SQL queries and can achieve the projection effect.
+
+- [ ] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
+
## Options
### driver [string]
diff --git a/docs/en/connector-v2/source/Redis.md
b/docs/en/connector-v2/source/Redis.md
index 62f4abd93..dfb1b4340 100644
--- a/docs/en/connector-v2/source/Redis.md
+++ b/docs/en/connector-v2/source/Redis.md
@@ -4,7 +4,16 @@
## Description
-Used to read data from Redis. Only support batch mode.
+Used to read data from Redis.
+
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [ ] [stream](../../concept/connector-v2-features.md)
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [x] [schema projection](../../concept/connector-v2-features.md)
+- [ ] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
## Options
diff --git a/docs/en/connector-v2/source/Socket.md
b/docs/en/connector-v2/source/Socket.md
index b9b0a2540..84a2b487e 100644
--- a/docs/en/connector-v2/source/Socket.md
+++ b/docs/en/connector-v2/source/Socket.md
@@ -4,7 +4,16 @@
## Description
-Used to read data from Socket. Both support streaming and batch mode.
+Used to read data from Socket.
+
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [x] [stream](../../concept/connector-v2-features.md)
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+- [ ] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
## Options
diff --git a/docs/en/connector-v2/source/pulsar.md
b/docs/en/connector-v2/source/pulsar.md
index b028dd361..02c42a260 100644
--- a/docs/en/connector-v2/source/pulsar.md
+++ b/docs/en/connector-v2/source/pulsar.md
@@ -4,7 +4,16 @@
## Description
-Source connector for Apache Pulsar. It can support both off-line and real-time
jobs.
+Source connector for Apache Pulsar.
+
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [x] [stream](../../concept/connector-v2-features.md)
+- [x] [exactly-once](../../concept/connector-v2-features.md)
+- [x] [schema projection](../../concept/connector-v2-features.md)
+- [x] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
## Options