This is an automated email from the ASF dual-hosted git repository.

fanjia pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/seatunnel.git


The following commit(s) were added to refs/heads/dev by this push:
     new 7c62b61eed [Improve][Connector-V2][doc]Modify some document title specifications (#6237)
7c62b61eed is described below

commit 7c62b61eed8a87422058ed70a79e6dcf3a950b31
Author: ZhilinLi <[email protected]>
AuthorDate: Thu Jan 18 15:04:52 2024 +0800

    [Improve][Connector-V2][doc]Modify some document title specifications (#6237)
---
 docs/en/connector-v2/formats/avro.md               |  2 +-
 docs/en/connector-v2/formats/canal-json.md         |  4 +-
 .../formats/cdc-compatible-debezium-json.md        |  6 +-
 docs/en/connector-v2/formats/debezium-json.md      |  6 +-
 .../formats/kafka-compatible-kafkaconnect-json.md  |  4 +-
 docs/en/connector-v2/formats/ogg-json.md           |  2 +-
 docs/en/connector-v2/sink/AmazonDynamoDB.md        |  4 +-
 docs/en/connector-v2/sink/Assert.md                | 64 +++++++++++-----------
 docs/en/connector-v2/sink/Clickhouse.md            |  2 +-
 docs/en/connector-v2/sink/ClickhouseFile.md        |  2 +-
 docs/en/connector-v2/sink/Console.md               |  2 +-
 docs/en/connector-v2/sink/CosFile.md               |  4 +-
 docs/en/connector-v2/sink/DB2.md                   |  2 +-
 docs/en/connector-v2/sink/Doris.md                 | 20 +++----
 docs/en/connector-v2/sink/Feishu.md                |  2 +-
 docs/en/connector-v2/sink/FtpFile.md               |  2 +-
 docs/en/connector-v2/sink/Greenplum.md             |  2 +-
 docs/en/connector-v2/sink/IoTDB.md                 |  5 +-
 docs/en/connector-v2/sink/Jdbc.md                  |  4 +-
 docs/en/connector-v2/sink/Kingbase.md              |  2 +-
 docs/en/connector-v2/sink/Kudu.md                  |  4 +-
 docs/en/connector-v2/sink/LocalFile.md             |  4 +-
 docs/en/connector-v2/sink/MongoDB.md               |  3 +-
 docs/en/connector-v2/sink/Mysql.md                 |  2 +-
 docs/en/connector-v2/sink/Oracle.md                |  2 +-
 docs/en/connector-v2/sink/OssFile.md               |  6 +-
 docs/en/connector-v2/sink/OssJindoFile.md          |  2 +-
 docs/en/connector-v2/sink/PostgreSql.md            |  2 +-
 docs/en/connector-v2/sink/RocketMQ.md              |  2 +-
 docs/en/connector-v2/sink/Snowflake.md             | 46 ++++++++--------
 docs/en/connector-v2/sink/SqlServer.md             |  6 +-
 docs/en/connector-v2/sink/Vertica.md               |  2 +-
 docs/en/connector-v2/source/Clickhouse.md          |  2 +-
 docs/en/connector-v2/source/DB2.md                 |  2 +-
 docs/en/connector-v2/source/FakeSource.md          | 24 ++++++--
 docs/en/connector-v2/source/Hive-jdbc.md           |  2 +-
 docs/en/connector-v2/source/Hudi.md                |  2 +-
 docs/en/connector-v2/source/IoTDB.md               |  2 +-
 docs/en/connector-v2/source/Kudu.md                | 23 ++++----
 docs/en/connector-v2/source/MongoDB-CDC.md         |  4 +-
 docs/en/connector-v2/source/MySQL-CDC.md           |  4 +-
 docs/en/connector-v2/source/Mysql.md               |  2 +-
 docs/en/connector-v2/source/Oracle.md              |  4 +-
 docs/en/connector-v2/source/PostgreSQL.md          |  2 +-
 docs/en/connector-v2/source/RocketMQ.md            |  2 +-
 docs/en/connector-v2/source/SftpFile.md            |  2 +-
 docs/en/connector-v2/source/SqlServer-CDC.md       |  2 +-
 docs/en/connector-v2/source/Vertica.md             |  2 +-
 48 files changed, 153 insertions(+), 150 deletions(-)

diff --git a/docs/en/connector-v2/formats/avro.md b/docs/en/connector-v2/formats/avro.md
index b9ee961daf..638657b345 100644
--- a/docs/en/connector-v2/formats/avro.md
+++ b/docs/en/connector-v2/formats/avro.md
@@ -2,7 +2,7 @@
 
 Avro is very popular in streaming data pipeline. Now seatunnel supports Avro format in kafka connector.
 
-# How to use Avro format
+# How To Use
 
 ## Kafka uses example
 
diff --git a/docs/en/connector-v2/formats/canal-json.md b/docs/en/connector-v2/formats/canal-json.md
index 9412e1c5f2..1697a8c618 100644
--- a/docs/en/connector-v2/formats/canal-json.md
+++ b/docs/en/connector-v2/formats/canal-json.md
@@ -15,14 +15,14 @@ SeaTunnel also supports to encode the INSERT/UPDATE/DELETE messages in SeaTunnel
 
 # Format Options
 
-| option | default | required | Description |
+| Option | Default | Required | Description |
 |--------------------------------|---------|----------|-------------|
 | format | (none) | yes | Specify what format to use, here should be 'canal_json'. |
 | canal_json.ignore-parse-errors | false | no | Skip fields and rows with parse errors instead of failing. Fields are set to null in case of errors. |
 | canal_json.database.include | (none) | no | An optional regular expression to only read the specific databases changelog rows by regular matching the "database" meta field in the Canal record. The pattern string is compatible with Java's Pattern. |
 | canal_json.table.include | (none) | no | An optional regular expression to only read the specific tables changelog rows by regular matching the "table" meta field in the Canal record. The pattern string is compatible with Java's Pattern. |
 
-# How to use Canal format
+# How to use
 
 ## Kafka uses example
 
diff --git a/docs/en/connector-v2/formats/cdc-compatible-debezium-json.md b/docs/en/connector-v2/formats/cdc-compatible-debezium-json.md
index 86683090f6..b35501a62a 100644
--- a/docs/en/connector-v2/formats/cdc-compatible-debezium-json.md
+++ b/docs/en/connector-v2/formats/cdc-compatible-debezium-json.md
@@ -1,12 +1,12 @@
-# CDC compatible debezium-json
+# CDC Compatible Debezium-json
 
 SeaTunnel supports to interpret cdc record as Debezium-JSON messages publish to mq(kafka) system.
 
 This is useful in many cases to leverage this feature, such as compatible with the debezium ecosystem.
 
-# How to use
+# How To Use
 
-## MySQL-CDC output to Kafka
+## MySQL-CDC Sink Kafka
 
 ```bash
 env {
diff --git a/docs/en/connector-v2/formats/debezium-json.md b/docs/en/connector-v2/formats/debezium-json.md
index 73813d2a83..a01e6c70d6 100644
--- a/docs/en/connector-v2/formats/debezium-json.md
+++ b/docs/en/connector-v2/formats/debezium-json.md
@@ -15,14 +15,14 @@ Seatunnel also supports to encode the INSERT/UPDATE/DELETE messages in Seatunnel
 
 # Format Options
 
-| option | default | required | Description |
+| Option | Default | Required | Description |
 |-----------------------------------|---------|----------|-------------|
 | format | (none) | yes | Specify what format to use, here should be 'debezium_json'. |
 | debezium-json.ignore-parse-errors | false | no | Skip fields and rows with parse errors instead of failing. Fields are set to null in case of errors. |
 
-# How to use Debezium format
+# How To Use
 
-## Kafka uses example
+## Kafka Uses example
 
 Debezium provides a unified format for changelog, here is a simple example for an update operation captured from a MySQL products table:
 
diff --git a/docs/en/connector-v2/formats/kafka-compatible-kafkaconnect-json.md b/docs/en/connector-v2/formats/kafka-compatible-kafkaconnect-json.md
index af5e23d426..def638367c 100644
--- a/docs/en/connector-v2/formats/kafka-compatible-kafkaconnect-json.md
+++ b/docs/en/connector-v2/formats/kafka-compatible-kafkaconnect-json.md
@@ -2,9 +2,9 @@
 
 Seatunnel connector kafka supports parsing data extracted through kafka connect source, especially data extracted from kafka connect jdbc and kafka connect debezium
 
-# How to use
+# How To Use
 
-## Kafka output to mysql
+## Kafka Sink Mysql
 
 ```bash
 env {
diff --git a/docs/en/connector-v2/formats/ogg-json.md b/docs/en/connector-v2/formats/ogg-json.md
index e01817cec9..629edde72e 100644
--- a/docs/en/connector-v2/formats/ogg-json.md
+++ b/docs/en/connector-v2/formats/ogg-json.md
@@ -13,7 +13,7 @@ Seatunnel also supports to encode the INSERT/UPDATE/DELETE messages in Seatunnel
 
 # Format Options
 
-| option | default | required | Description |
+| Option | Default | Required | Description |
 |------------------------------|---------|----------|-------------|
 | format | (none) | yes | Specify what format to use, here should be '-json'. |
 | ogg_json.ignore-parse-errors | false | no | Skip fields and rows with parse errors instead of failing. Fields are set to null in case of errors. |
diff --git a/docs/en/connector-v2/sink/AmazonDynamoDB.md b/docs/en/connector-v2/sink/AmazonDynamoDB.md
index 6e880fb4af..63211077c7 100644
--- a/docs/en/connector-v2/sink/AmazonDynamoDB.md
+++ b/docs/en/connector-v2/sink/AmazonDynamoDB.md
@@ -6,13 +6,13 @@
 
 Write data to Amazon DynamoDB
 
-## Key features
+## Key Features
 
 - [ ] [exactly-once](../../concept/connector-v2-features.md)
 
 ## Options
 
-|       name        |  type  | required | default value |
+|       Name        |  Type  | Required | Default value |
 |-------------------|--------|----------|---------------|
 | url               | string | yes      | -             |
 | region            | string | yes      | -             |
diff --git a/docs/en/connector-v2/sink/Assert.md b/docs/en/connector-v2/sink/Assert.md
index dff2657eaf..8257ff8f65 100644
--- a/docs/en/connector-v2/sink/Assert.md
+++ b/docs/en/connector-v2/sink/Assert.md
@@ -6,43 +6,43 @@
 
 A flink sink plugin which can assert illegal data by user defined rules
 
-## Key features
+## Key Features
 
 - [ ] [exactly-once](../../concept/connector-v2-features.md)
 
 ## Options
 
-| name | type | required | default value |
-|------|------|----------|---------------|
-| rules | ConfigMap | yes | - |
-| rules.field_rules | string | yes | - |
-| rules.field_rules.field_name | string | yes | - |
-| rules.field_rules.field_type | string | no | - |
-| rules.field_rules.field_value | ConfigList | no | - |
-| rules.field_rules.field_value.rule_type | string | no | - |
-| rules.field_rules.field_value.rule_value | double | no | - |
-| rules.row_rules | string | yes | - |
-| rules.row_rules.rule_type | string | no | - |
-| rules.row_rules.rule_value | string | no | - |
-| rules.catalog_table_rule | ConfigMap | no | - |
-| rules.catalog_table_rule.primary_key_rule | ConfigMap | no | - |
-| rules.catalog_table_rule.primary_key_rule.primary_key_name | string | no | - |
-| rules.catalog_table_rule.primary_key_rule.primary_key_columns | list | no | - |
-| rules.catalog_table_rule.constraint_key_rule | ConfigList | no | - |
-| rules.catalog_table_rule.constraint_key_rule.constraint_key_name | string | no | - |
-| rules.catalog_table_rule.constraint_key_rule.constraint_key_type | string | no | - |
-| rules.catalog_table_rule.constraint_key_rule.constraint_key_columns | ConfigList | no | - |
-| rules.catalog_table_rule.constraint_key_rule.constraint_key_columns.constraint_key_column_name | string | no | - |
-| rules.catalog_table_rule.constraint_key_rule.constraint_key_columns.constraint_key_sort_type | string | no | - |
-| rules.catalog_table_rule.column_rule | ConfigList | no | - |
-| rules.catalog_table_rule.column_rule.name | string | no | - |
-| rules.catalog_table_rule.column_rule.type | string | no | - |
-| rules.catalog_table_rule.column_rule.column_length | int | no | - |
-| rules.catalog_table_rule.column_rule.nullable | boolean | no | - |
-| rules.catalog_table_rule.column_rule.default_value | string | no | - |
-| rules.catalog_table_rule.column_rule.comment | comment | no | - |
-| rules.table-names | list | no | - |
-| common-options | | no | - |
+| Name | Type | Required | Default |
+|------|------|----------|---------|
+| rules | ConfigMap | yes | - |
+| rules.field_rules | string | yes | - |
+| rules.field_rules.field_name | string | yes | - |
+| rules.field_rules.field_type | string | no | - |
+| rules.field_rules.field_value | ConfigList | no | - |
+| rules.field_rules.field_value.rule_type | string | no | - |
+| rules.field_rules.field_value.rule_value | double | no | - |
+| rules.row_rules | string | yes | - |
+| rules.row_rules.rule_type | string | no | - |
+| rules.row_rules.rule_value | string | no | - |
+| rules.catalog_table_rule | ConfigMap | no | - |
+| rules.catalog_table_rule.primary_key_rule | ConfigMap | no | - |
+| rules.catalog_table_rule.primary_key_rule.primary_key_name | string | no | - |
+| rules.catalog_table_rule.primary_key_rule.primary_key_columns | list | no | - |
+| rules.catalog_table_rule.constraint_key_rule | ConfigList | no | - |
+| rules.catalog_table_rule.constraint_key_rule.constraint_key_name | string | no | - |
+| rules.catalog_table_rule.constraint_key_rule.constraint_key_type | string | no | - |
+| rules.catalog_table_rule.constraint_key_rule.constraint_key_columns | ConfigList | no | - |
+| rules.catalog_table_rule.constraint_key_rule.constraint_key_columns.constraint_key_column_name | string | no | - |
+| rules.catalog_table_rule.constraint_key_rule.constraint_key_columns.constraint_key_sort_type | string | no | - |
+| rules.catalog_table_rule.column_rule | ConfigList | no | - |
+| rules.catalog_table_rule.column_rule.name | string | no | - |
+| rules.catalog_table_rule.column_rule.type | string | no | - |
+| rules.catalog_table_rule.column_rule.column_length | int | no | - |
+| rules.catalog_table_rule.column_rule.nullable | boolean | no | - |
+| rules.catalog_table_rule.column_rule.default_value | string | no | - |
+| rules.catalog_table_rule.column_rule.comment | comment | no | - |
+| rules.table-names | list | no | - |
+| common-options | | no | - |
 
 ### rules [ConfigMap]
 
diff --git a/docs/en/connector-v2/sink/Clickhouse.md b/docs/en/connector-v2/sink/Clickhouse.md
index 2b2b55e1a6..3798e2baae 100644
--- a/docs/en/connector-v2/sink/Clickhouse.md
+++ b/docs/en/connector-v2/sink/Clickhouse.md
@@ -30,7 +30,7 @@ They can be downloaded via install-plugin.sh or from the Maven central repositor
 
 ## Data Type Mapping
 
-| SeaTunnel Data type | Clickhouse Data type |
+| SeaTunnel Data Type | Clickhouse Data Type |
 |---------------------|----------------------|
 | STRING | String / Int128 / UInt128 / Int256 / UInt256 / Point / Ring / Polygon MultiPolygon |
 | INT | Int8 / UInt8 / Int16 / UInt16 / Int32 |
diff --git a/docs/en/connector-v2/sink/ClickhouseFile.md b/docs/en/connector-v2/sink/ClickhouseFile.md
index cf53ce8b3d..ebafbc0162 100644
--- a/docs/en/connector-v2/sink/ClickhouseFile.md
+++ b/docs/en/connector-v2/sink/ClickhouseFile.md
@@ -20,7 +20,7 @@ Write data to Clickhouse can also be done using JDBC
 
 ## Options
 
-| name | type | required | default value |
+| Name | Type | Required | Default |
 |------------------------|---------|----------|---------|
 | host | string | yes | - |
 | database | string | yes | - |
diff --git a/docs/en/connector-v2/sink/Console.md b/docs/en/connector-v2/sink/Console.md
index f23d6d9240..5d83c81026 100644
--- a/docs/en/connector-v2/sink/Console.md
+++ b/docs/en/connector-v2/sink/Console.md
@@ -18,7 +18,7 @@ Used to send data to Console. Both support streaming and batch mode.
 
 > For example, if the data from upstream is [`age: 12, name: jared`], the 
 > content send to console is the following: `{"name":"jared","age":17}`
 
-## Key features
+## Key Features
 
 - [ ] [exactly-once](../../concept/connector-v2-features.md)
 
diff --git a/docs/en/connector-v2/sink/CosFile.md b/docs/en/connector-v2/sink/CosFile.md
index 0535401734..f0d6517a05 100644
--- a/docs/en/connector-v2/sink/CosFile.md
+++ b/docs/en/connector-v2/sink/CosFile.md
@@ -16,7 +16,7 @@ To use this connector you need put hadoop-cos-{hadoop.version}-{version}.jar and
 
 :::
 
-## Key features
+## Key Features
 
 - [x] [exactly-once](../../concept/connector-v2-features.md)
 
@@ -32,7 +32,7 @@ By default, we use 2PC commit to ensure `exactly-once`
 
 ## Options
 
-| name | type | required | default value | remarks |
+| Name | Type | Required | Default | Description |
 |----------------------------------|---------|----------|---------|-------------|
 | path | string | yes | - | |
 | tmp_path | string | no | /tmp/seatunnel | The result file will write to a tmp path first and then use `mv` to submit tmp dir to target dir. Need a COS dir. |
diff --git a/docs/en/connector-v2/sink/DB2.md b/docs/en/connector-v2/sink/DB2.md
index 583dd0021d..72baba8703 100644
--- a/docs/en/connector-v2/sink/DB2.md
+++ b/docs/en/connector-v2/sink/DB2.md
@@ -34,7 +34,7 @@ semantics (using XA transaction guarantee).
 
 ## Data Type Mapping
 
-| DB2 Data type | SeaTunnel Data type |
+| DB2 Data Type | SeaTunnel Data Type |
 |---------------|---------------------|
 | BOOLEAN | BOOLEAN |
 | SMALLINT | SHORT |
diff --git a/docs/en/connector-v2/sink/Doris.md b/docs/en/connector-v2/sink/Doris.md
index a485eaf8c7..620e9e8fa5 100644
--- a/docs/en/connector-v2/sink/Doris.md
+++ b/docs/en/connector-v2/sink/Doris.md
@@ -2,6 +2,12 @@
 
 > Doris sink connector
 
+## Support Doris Version
+
+- exactly-once & cdc supported  `Doris version is >= 1.1.x`
+- Array data type supported  `Doris version is >= 1.2.x`
+- Map data type will be support in `Doris version is 2.x`
+
 ## Support Those Engines
 
 > Spark<br/>
@@ -18,18 +24,6 @@
 Used to send data to Doris. Both support streaming and batch mode.
 The internal implementation of Doris sink connector is cached and imported by stream load in batches.
 
-## Supported DataSource Info
-
-:::tip
-
-Version Supported
-
-* exactly-once & cdc supported  `Doris version is >= 1.1.x`
-* Array data type supported  `Doris version is >= 1.2.x`
-* Map data type will be support in `Doris version is 2.x`
-
-:::
-
 ## Sink Options
 
 | Name | Type | Required | Default | Description |
@@ -120,7 +114,7 @@ You can use the following placeholders
 
 ## Data Type Mapping
 
-| Doris Data type |           SeaTunnel Data type           |
+| Doris Data Type |           SeaTunnel Data Type           |
 |-----------------|-----------------------------------------|
 | BOOLEAN         | BOOLEAN                                 |
 | TINYINT         | TINYINT                                 |
diff --git a/docs/en/connector-v2/sink/Feishu.md b/docs/en/connector-v2/sink/Feishu.md
index 5573086db3..b965d8413f 100644
--- a/docs/en/connector-v2/sink/Feishu.md
+++ b/docs/en/connector-v2/sink/Feishu.md
@@ -23,7 +23,7 @@ Used to launch Feishu web hooks using data.
 
 ## Data Type Mapping
 
-|     Seatunnel Data type     | Feishu Data type |
+|     Seatunnel Data Type     | Feishu Data Type |
 |-----------------------------|------------------|
 | ROW<br/>MAP                 | Json             |
 | NULL                        | null             |
diff --git a/docs/en/connector-v2/sink/FtpFile.md b/docs/en/connector-v2/sink/FtpFile.md
index 3233fc3c6d..cdc3512485 100644
--- a/docs/en/connector-v2/sink/FtpFile.md
+++ b/docs/en/connector-v2/sink/FtpFile.md
@@ -30,7 +30,7 @@ By default, we use 2PC commit to ensure `exactly-once`
 
 ## Options
 
-| name | type | required | default value | remarks |
+| Name | Type | Required | Default | Description |
 |----------------------------------|---------|----------|---------|-------------|
 | host | string | yes | - | |
 | port | int | yes | - | |
diff --git a/docs/en/connector-v2/sink/Greenplum.md b/docs/en/connector-v2/sink/Greenplum.md
index acddeb9763..6d4622b437 100644
--- a/docs/en/connector-v2/sink/Greenplum.md
+++ b/docs/en/connector-v2/sink/Greenplum.md
@@ -6,7 +6,7 @@
 
 Write data to Greenplum using [Jdbc connector](Jdbc.md).
 
-## Key features
+## Key Features
 
 - [ ] [exactly-once](../../concept/connector-v2-features.md)
 
diff --git a/docs/en/connector-v2/sink/IoTDB.md b/docs/en/connector-v2/sink/IoTDB.md
index ebf1a9e38f..8ace6724cb 100644
--- a/docs/en/connector-v2/sink/IoTDB.md
+++ b/docs/en/connector-v2/sink/IoTDB.md
@@ -35,7 +35,7 @@ There is a conflict of thrift version between IoTDB and Spark.Therefore, you nee
 
 ## Data Type Mapping
 
-| IotDB Data type | SeaTunnel Data type |
+| IotDB Data Type | SeaTunnel Data Type |
 |-----------------|---------------------|
 | BOOLEAN         | BOOLEAN             |
 | INT32           | TINYINT             |
@@ -98,9 +98,6 @@ source {
     }
   }
 }
-
-...
-
 ```
 
 Upstream SeaTunnelRow data format is the following:
diff --git a/docs/en/connector-v2/sink/Jdbc.md b/docs/en/connector-v2/sink/Jdbc.md
index 4e4a8b704e..ef7458014a 100644
--- a/docs/en/connector-v2/sink/Jdbc.md
+++ b/docs/en/connector-v2/sink/Jdbc.md
@@ -15,7 +15,7 @@ e.g. If you use MySQL, should download and copy `mysql-connector-java-xxx.jar` t
 
 :::
 
-## Key features
+## Key Features
 
 - [x] [exactly-once](../../concept/connector-v2-features.md)
 
@@ -26,7 +26,7 @@ support `Xa transactions`. You can set `is_exactly_once=true` to enable it.
 
 ## Options
 
-| name | type | required | default value |
+| Name | Type | Required | Default |
 |-------------------------------------------|---------|----------|---------|
 | url | String | Yes | - |
 | driver | String | Yes | - |
diff --git a/docs/en/connector-v2/sink/Kingbase.md b/docs/en/connector-v2/sink/Kingbase.md
index c2204d0209..361ca9a728 100644
--- a/docs/en/connector-v2/sink/Kingbase.md
+++ b/docs/en/connector-v2/sink/Kingbase.md
@@ -36,7 +36,7 @@
 
 ## Data Type Mapping
 
-| Kingbase Data type | SeaTunnel Data type |
+| Kingbase Data Type | SeaTunnel Data Type |
 |--------------------|---------------------|
 | BOOL | BOOLEAN |
 | INT2 | SHORT |
diff --git a/docs/en/connector-v2/sink/Kudu.md b/docs/en/connector-v2/sink/Kudu.md
index 08518d7c72..aa43a72522 100644
--- a/docs/en/connector-v2/sink/Kudu.md
+++ b/docs/en/connector-v2/sink/Kudu.md
@@ -12,14 +12,14 @@
 > Flink<br/>
 > SeaTunnel Zeta<br/>
 
-## Key features
+## Key Features
 
 - [ ] [exactly-once](../../concept/connector-v2-features.md)
 - [x] [cdc](../../concept/connector-v2-features.md)
 
 ## Data Type Mapping
 
-| SeaTunnel Data type |      kudu Data type      |
+| SeaTunnel Data Type |      Kudu Data Type      |
 |---------------------|--------------------------|
 | BOOLEAN             | BOOL                     |
 | INT                 | INT8<br/>INT16<br/>INT32 |
diff --git a/docs/en/connector-v2/sink/LocalFile.md b/docs/en/connector-v2/sink/LocalFile.md
index e9d7985051..2f88f0fe72 100644
--- a/docs/en/connector-v2/sink/LocalFile.md
+++ b/docs/en/connector-v2/sink/LocalFile.md
@@ -14,7 +14,7 @@ If you use SeaTunnel Engine, It automatically integrated the hadoop jar when you
 
 :::
 
-## Key features
+## Key Features
 
 - [x] [exactly-once](../../concept/connector-v2-features.md)
 
@@ -30,7 +30,7 @@ By default, we use 2PC commit to ensure `exactly-once`
 
 ## Options
 
-|               name               |  type   | required |               
default value                |                                              
remarks                                              |
+|               Name               |  Type   | Required |                  
Default                   |                                            
Description                                            |
 
|----------------------------------|---------|----------|--------------------------------------------|---------------------------------------------------------------------------------------------------|
 | path                             | string  | yes      | -                    
                      |                                                         
                                          |
 | tmp_path                         | string  | no       | /tmp/seatunnel       
                      | The result file will write to a tmp path first and then 
use `mv` to submit tmp dir to target dir. |
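
As a sketch, the two options shown in this hunk combine into a LocalFile sink block like the following (the paths are illustrative placeholders):

```hocon
sink {
  LocalFile {
    # `path` is required; `tmp_path` is optional and defaults to /tmp/seatunnel
    path = "/tmp/seatunnel/output"
    tmp_path = "/tmp/seatunnel/tmp"
  }
}
```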
diff --git a/docs/en/connector-v2/sink/MongoDB.md b/docs/en/connector-v2/sink/MongoDB.md
index 31dc46743a..e1cfd34eba 100644
--- a/docs/en/connector-v2/sink/MongoDB.md
+++ b/docs/en/connector-v2/sink/MongoDB.md
@@ -8,8 +8,7 @@
 > Flink<br/>
 > SeaTunnel Zeta<br/>
 
-Key Features
-------------
+## Key Features
 
 - [x] [exactly-once](../../concept/connector-v2-features.md)
 - [x] [cdc](../../concept/connector-v2-features.md)
diff --git a/docs/en/connector-v2/sink/Mysql.md b/docs/en/connector-v2/sink/Mysql.md
index 10dd1c526d..ab18ca2dc3 100644
--- a/docs/en/connector-v2/sink/Mysql.md
+++ b/docs/en/connector-v2/sink/Mysql.md
@@ -38,7 +38,7 @@ semantics (using XA transaction guarantee).
 
 ## Data Type Mapping
 
-|                                                          Mysql Data type     
                                                     |                          
                                       SeaTunnel Data type                      
                                           |
+|                                                          Mysql Data Type     
                                                     |                          
                                       SeaTunnel Data Type                      
                                           |
 
|-----------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------|
 | BIT(1)<br/>INT UNSIGNED                                                      
                                                     | BOOLEAN                  
                                                                                
                                           |
 | TINYINT<br/>TINYINT UNSIGNED<br/>SMALLINT<br/>SMALLINT 
UNSIGNED<br/>MEDIUMINT<br/>MEDIUMINT UNSIGNED<br/>INT<br/>INTEGER<br/>YEAR | 
INT                                                                             
                                                                    |
diff --git a/docs/en/connector-v2/sink/Oracle.md b/docs/en/connector-v2/sink/Oracle.md
index e99b9ba89d..0d2b7ab504 100644
--- a/docs/en/connector-v2/sink/Oracle.md
+++ b/docs/en/connector-v2/sink/Oracle.md
@@ -35,7 +35,7 @@ semantics (using XA transaction guarantee).
 
 ## Data Type Mapping
 
-|                                   Oracle Data type                           
        | SeaTunnel Data type |
+|                                   Oracle Data Type                           
        | SeaTunnel Data Type |
 
|--------------------------------------------------------------------------------------|---------------------|
 | INTEGER                                                                      
        | INT                 |
 | FLOAT                                                                        
        | DECIMAL(38, 18)     |
diff --git a/docs/en/connector-v2/sink/OssFile.md b/docs/en/connector-v2/sink/OssFile.md
index f9e817ba56..7cbab4347d 100644
--- a/docs/en/connector-v2/sink/OssFile.md
+++ b/docs/en/connector-v2/sink/OssFile.md
@@ -39,7 +39,7 @@ If write to `csv`, `text` file type, All column will be string.
 
 ### Orc File Type
 
-| SeaTunnel Data type  |     Orc Data type     |
+| SeaTunnel Data Type  |     Orc Data Type     |
 |----------------------|-----------------------|
 | STRING               | STRING                |
 | BOOLEAN              | BOOLEAN               |
@@ -61,7 +61,7 @@ If write to `csv`, `text` file type, All column will be string.
 
 ### Parquet File Type
 
-| SeaTunnel Data type  |   Parquet Data type   |
+| SeaTunnel Data Type  |   Parquet Data Type   |
 |----------------------|-----------------------|
 | STRING               | STRING                |
 | BOOLEAN              | BOOLEAN               |
@@ -83,7 +83,7 @@ If write to `csv`, `text` file type, All column will be string.
 
 ## Options
 
-|               name               |  type   | required |               
default value                |                                                  
    remarks                                                      |
+|               Name               |  Type   | Required |                  
Default                   |                                                    
Description                                                    |
 
|----------------------------------|---------|----------|--------------------------------------------|-------------------------------------------------------------------------------------------------------------------|
 | path                             | string  | yes      | The oss path to 
write file in.             |                                                    
                                                               |
 | tmp_path                         | string  | no       | /tmp/seatunnel       
                      | The result file will write to a tmp path first and then 
use `mv` to submit tmp dir to target dir. Need a OSS dir. |
diff --git a/docs/en/connector-v2/sink/OssJindoFile.md b/docs/en/connector-v2/sink/OssJindoFile.md
index eb4e81a8fb..40441ea83e 100644
--- a/docs/en/connector-v2/sink/OssJindoFile.md
+++ b/docs/en/connector-v2/sink/OssJindoFile.md
@@ -36,7 +36,7 @@ By default, we use 2PC commit to ensure `exactly-once`
 
 ## Options
 
-|               name               |  type   | required |               
default value                |                                                  
    remarks                                                      |
+|               Name               |  Type   | Required |                  
Default                   |                                                    
Description                                                    |
 
|----------------------------------|---------|----------|--------------------------------------------|-------------------------------------------------------------------------------------------------------------------|
 | path                             | string  | yes      | -                    
                      |                                                         
                                                          |
 | tmp_path                         | string  | no       | /tmp/seatunnel       
                      | The result file will write to a tmp path first and then 
use `mv` to submit tmp dir to target dir. Need a OSS dir. |
diff --git a/docs/en/connector-v2/sink/PostgreSql.md b/docs/en/connector-v2/sink/PostgreSql.md
index 0868d64dc5..3e056376bd 100644
--- a/docs/en/connector-v2/sink/PostgreSql.md
+++ b/docs/en/connector-v2/sink/PostgreSql.md
@@ -36,7 +36,7 @@ semantics (using XA transaction guarantee).
 
 ## Data Type Mapping
 
-|                                       PostgreSQL Data type                   
                    |                                                           
   SeaTunnel Data type                                                          
     |
+|                                       PostgreSQL Data Type                   
                    |                                                           
   SeaTunnel Data Type                                                          
     |
 
|--------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------|
 | BOOL<br/>                                                                    
                    | BOOLEAN                                                   
                                                                                
     |
 | _BOOL<br/>                                                                   
                    | ARRAY&LT;BOOLEAN&GT;                                      
                                                                                
     |
diff --git a/docs/en/connector-v2/sink/RocketMQ.md b/docs/en/connector-v2/sink/RocketMQ.md
index 60ccf49c4c..a31534ec26 100644
--- a/docs/en/connector-v2/sink/RocketMQ.md
+++ b/docs/en/connector-v2/sink/RocketMQ.md
@@ -12,7 +12,7 @@
 > Flink<br/>
 > SeaTunnel Zeta<br/>
 
-## Key features
+## Key Features
 
 - [x] [exactly-once](../../concept/connector-v2-features.md)
 
diff --git a/docs/en/connector-v2/sink/Snowflake.md b/docs/en/connector-v2/sink/Snowflake.md
index 62f9bd86ea..b6da5f6ed2 100644
--- a/docs/en/connector-v2/sink/Snowflake.md
+++ b/docs/en/connector-v2/sink/Snowflake.md
@@ -8,7 +8,7 @@
 > Flink<br/>
 > SeaTunnel Zeta<br/>
 
-## Key features
+## Key Features
 
 - [ ] [exactly-once](../../concept/connector-v2-features.md)
 - [x] [cdc](../../concept/connector-v2-features.md)
@@ -19,7 +19,7 @@ Write data through jdbc. Support Batch mode and Streaming mode, support concurre
 
 ## Supported DataSource list
 
-| datasource |                    supported versions                    |      
            driver                   |                          url             
              |                                    maven                        
            |
+| Datasource |                    Supported Versions                    |      
            Driver                   |                          Url             
              |                                    Maven                        
            |
 
|------------|----------------------------------------------------------|-------------------------------------------|--------------------------------------------------------|-----------------------------------------------------------------------------|
 | snowflake  | Different dependency version has different driver class. | 
net.snowflake.client.jdbc.SnowflakeDriver | 
jdbc:snowflake://<account_name>.snowflakecomputing.com | 
[Download](https://mvnrepository.com/artifact/net.snowflake/snowflake-jdbc) |
 
@@ -30,7 +30,7 @@ Write data through jdbc. Support Batch mode and Streaming mode, support concurre
 
 ## Data Type Mapping
 
-|                             Snowflake Data type                             
| SeaTunnel Data type |
+|                             Snowflake Data Type                             
| SeaTunnel Data Type |
 
|-----------------------------------------------------------------------------|---------------------|
 | BOOLEAN                                                                     
| BOOLEAN             |
 | TINYINT<br/>SMALLINT<br/>BYTEINT<br/>                                       
| SHORT_TYPE          |
@@ -48,26 +48,26 @@ Write data through jdbc. Support Batch mode and Streaming mode, support concurre
 
 ## Options
 
-|                   name                    |  type   | required | default 
value |                                                                         
                                         description                            
                                                                                
       |
-|-------------------------------------------|---------|----------|---------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| url                                       | String  | Yes      | -           
  | The URL of the JDBC connection. Refer to a case: 
jdbc:snowflake://<account_name>.snowflakecomputing.com                          
                                                                                
                              |
-| driver                                    | String  | Yes      | -           
  | The jdbc class name used to connect to the remote data source,<br/> if you 
use Snowflake the value is `net.snowflake.client.jdbc.SnowflakeDriver`.         
                                                                                
    |
-| user                                      | String  | No       | -           
  | Connection instance user name                                               
                                                                                
                                                                                
   |
-| password                                  | String  | No       | -           
  | Connection instance password                                                
                                                                                
                                                                                
   |
-| query                                     | String  | No       | -           
  | Use this sql write upstream input datas to database. e.g `INSERT 
...`,`query` have the higher priority                                           
                                                                                
              |
-| database                                  | String  | No       | -           
  | Use this `database` and `table-name` auto-generate sql and receive upstream 
input datas write to database.<br/>This option is mutually exclusive with 
`query` and has a higher priority.                                              
         |
-| table                                     | String  | No       | -           
  | Use database and this table-name auto-generate sql and receive upstream 
input datas write to database.<br/>This option is mutually exclusive with 
`query` and has a higher priority.                                              
             |
-| primary_keys                              | Array   | No       | -           
  | This option is used to support operations such as `insert`, `delete`, and 
`update` when automatically generate sql.                                       
                                                                                
     |
-| support_upsert_by_query_primary_key_exist | Boolean | No       | false       
  | Choose to use INSERT sql, UPDATE sql to process update events(INSERT, 
UPDATE_AFTER) based on query primary key exists. This configuration is only 
used when database unsupport upsert syntax. **Note**: that this method has low 
performance   |
-| connection_check_timeout_sec              | Int     | No       | 30          
  | The time in seconds to wait for the database operation used to validate the 
connection to complete.                                                         
                                                                                
   |
-| max_retries                               | Int     | No       | 0           
  | The number of retries to submit failed (executeBatch)                       
                                                                                
                                                                                
   |
-| batch_size                                | Int     | No       | 1000        
  | For batch writing, when the number of buffered records reaches the number 
of `batch_size` or the time reaches `checkpoint.interval`<br/>, the data will 
be flushed into the database                                                    
       |
-| max_commit_attempts                       | Int     | No       | 3           
  | The number of retries for transaction commit failures                       
                                                                                
                                                                                
   |
-| transaction_timeout_sec                   | Int     | No       | -1          
  | The timeout after the transaction is opened, the default is -1 (never 
timeout). Note that setting the timeout may affect<br/>exactly-once semantics   
                                                                                
         |
-| auto_commit                               | Boolean | No       | true        
  | Automatic transaction commit is enabled by default                          
                                                                                
                                                                                
   |
-| properties                                | Map     | No       | -           
  | Additional connection configuration parameters,when properties and URL have 
the same parameters, the priority is determined by the <br/>specific 
implementation of the driver. For example, in MySQL, properties take precedence 
over the URL. |
-| common-options                            |         | No       | -           
  | Sink plugin common parameters, please refer to [Sink Common 
Options](common-options.md) for details                                         
                                                                                
                   |
-| enable_upsert                             | Boolean | No       | true        
  | Enable upsert by primary_keys exist, If the task has no key duplicate data, 
setting this parameter to `false` can speed up data import                      
                                                                                
   |
+|                   Name                    |  Type   | Required | Default |   
                                                                                
                               Description                                      
                                                                             |
+|-------------------------------------------|---------|----------|---------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| url                                       | String  | Yes      | -       | 
The URL of the JDBC connection. Refer to a case: 
jdbc:snowflake://<account_name>.snowflakecomputing.com                          
                                                                                
                              |
+| driver                                    | String  | Yes      | -       | 
The jdbc class name used to connect to the remote data source,<br/> if you use 
Snowflake the value is `net.snowflake.client.jdbc.SnowflakeDriver`.             
                                                                                
|
+| user                                      | String  | No       | -       | 
Connection instance user name                                                   
                                                                                
                                                                               |
+| password                                  | String  | No       | -       | 
Connection instance password                                                    
                                                                                
                                                                               |
+| query                                     | String  | No       | -       | 
Use this sql write upstream input datas to database. e.g `INSERT ...`,`query` 
have the higher priority                                                        
                                                                                
 |
+| database                                  | String  | No       | -       | 
Use this `database` and `table-name` auto-generate sql and receive upstream 
input datas write to database.<br/>This option is mutually exclusive with 
`query` and has a higher priority.                                              
         |
+| table                                     | String  | No       | -       | 
Use database and this table-name auto-generate sql and receive upstream input 
datas write to database.<br/>This option is mutually exclusive with `query` and 
has a higher priority.                                                          
 |
+| primary_keys                              | Array   | No       | -       | 
This option is used to support operations such as `insert`, `delete`, and 
`update` when automatically generate sql.                                       
                                                                                
     |
+| support_upsert_by_query_primary_key_exist | Boolean | No       | false   | 
Choose to use INSERT sql, UPDATE sql to process update events(INSERT, 
UPDATE_AFTER) based on query primary key exists. This configuration is only 
used when database unsupport upsert syntax. **Note**: that this method has low 
performance   |
+| connection_check_timeout_sec              | Int     | No       | 30      | 
The time in seconds to wait for the database operation used to validate the 
connection to complete.                                                         
                                                                                
   |
+| max_retries                               | Int     | No       | 0       | 
The number of retries to submit failed (executeBatch)                           
                                                                                
                                                                               |
+| batch_size                                | Int     | No       | 1000    | 
For batch writing, when the number of buffered records reaches the number of 
`batch_size` or the time reaches `checkpoint.interval`<br/>, the data will be 
flushed into the database                                                       
    |
+| max_commit_attempts                       | Int     | No       | 3       | 
The number of retries for transaction commit failures                           
                                                                                
                                                                               |
+| transaction_timeout_sec                   | Int     | No       | -1      | 
The timeout after the transaction is opened, the default is -1 (never timeout). 
Note that setting the timeout may affect<br/>exactly-once semantics             
                                                                               |
+| auto_commit                               | Boolean | No       | true    | 
Automatic transaction commit is enabled by default                              
                                                                                
                                                                               |
+| properties                                | Map     | No       | -       | 
Additional connection configuration parameters,when properties and URL have the 
same parameters, the priority is determined by the <br/>specific implementation 
of the driver. For example, in MySQL, properties take precedence over the URL. |
+| common-options                            |         | No       | -       | 
Sink plugin common parameters, please refer to [Sink Common 
Options](common-options.md) for details                                         
                                                                                
                   |
+| enable_upsert                             | Boolean | No       | true    | 
Enable upsert by primary_keys exist, If the task has no key duplicate data, 
setting this parameter to `false` can speed up data import                      
                                                                                
   |
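
A minimal sketch tying these options together (Snowflake is reached through the Jdbc sink plugin; the account name, credentials, and table identifiers below are hypothetical placeholders, not values from this commit):

```hocon
sink {
  Jdbc {
    # Hypothetical Snowflake connection values for illustration
    url = "jdbc:snowflake://myaccount.snowflakecomputing.com"
    driver = "net.snowflake.client.jdbc.SnowflakeDriver"
    user = "seatunnel_user"
    password = "******"
    database = "MY_DB"
    table = "MY_TABLE"
    batch_size = 1000
  }
}
```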
 
 ## tips
 
diff --git a/docs/en/connector-v2/sink/SqlServer.md b/docs/en/connector-v2/sink/SqlServer.md
index 761af9c1ee..72ad7ff29f 100644
--- a/docs/en/connector-v2/sink/SqlServer.md
+++ b/docs/en/connector-v2/sink/SqlServer.md
@@ -12,7 +12,7 @@
 > Flink<br/>
 > Seatunnel Zeta<br/>
 
-## Key features
+## Key Features
 
 - [x] [exactly-once](../../concept/connector-v2-features.md)
 - [x] [cdc](../../concept/connector-v2-features.md)
@@ -27,7 +27,7 @@ semantics (using XA transaction guarantee).
 
 ## Supported DataSource Info
 
-| datasource |   supported versions    |                    driver             
       |               url               |                                      
 maven                                       |
+| Datasource |   Supported Versions    |                    Driver             
       |               Url               |                                      
 Maven                                       |
 
|------------|-------------------------|----------------------------------------------|---------------------------------|-----------------------------------------------------------------------------------|
 | SQL Server | support version >= 2008 | 
com.microsoft.sqlserver.jdbc.SQLServerDriver | jdbc:sqlserver://localhost:1433 
| 
[Download](https://mvnrepository.com/artifact/com.microsoft.sqlserver/mssql-jdbc)
 |
 
@@ -38,7 +38,7 @@ semantics (using XA transaction guarantee).
 
 ## Data Type Mapping
 
-|                       SQLserver Data type                       |            
                                                        Seatunnel Data type     
                                                               |
+|                       SQLserver Data Type                       |            
                                                        SeaTunnel Data Type     
                                                               |
 
|-----------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------|
 | BIT                                                             | BOOLEAN    
                                                                                
                                                               |
 | TINYINT<br/>SMALLINT                                            | SHORT      
                                                                                
                                                               |
diff --git a/docs/en/connector-v2/sink/Vertica.md b/docs/en/connector-v2/sink/Vertica.md
index f8c6c4a746..620e8c0457 100644
--- a/docs/en/connector-v2/sink/Vertica.md
+++ b/docs/en/connector-v2/sink/Vertica.md
@@ -34,7 +34,7 @@ semantics (using XA transaction guarantee).
 
 ## Data Type Mapping
 
-|                                                         Vertica Data type    
                                                     |                          
                                       SeaTunnel Data type                      
                                           |
+|                                                         Vertica Data Type    
                                                     |                          
                                       SeaTunnel Data Type                      
                                           |
 
|-----------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------|
 | BIT(1)<br/>INT UNSIGNED                                                      
                                                     | BOOLEAN                  
                                                                                
                                           |
 | TINYINT<br/>TINYINT UNSIGNED<br/>SMALLINT<br/>SMALLINT 
UNSIGNED<br/>MEDIUMINT<br/>MEDIUMINT UNSIGNED<br/>INT<br/>INTEGER<br/>YEAR | 
INT                                                                             
                                                                    |
diff --git a/docs/en/connector-v2/source/Clickhouse.md b/docs/en/connector-v2/source/Clickhouse.md
index 284b7b14cb..c23b25e92e 100644
--- a/docs/en/connector-v2/source/Clickhouse.md
+++ b/docs/en/connector-v2/source/Clickhouse.md
@@ -34,7 +34,7 @@ They can be downloaded via install-plugin.sh or from the Maven central repositor
 
 ## Data Type Mapping
 
-|                                                             Clickhouse Data 
type                                                              | SeaTunnel 
Data type |
+|                                                             Clickhouse Data 
Type                                                              | SeaTunnel 
Data Type |
 
|-----------------------------------------------------------------------------------------------------------------------------------------------|---------------------|
 | String / Int128 / UInt128 / Int256 / UInt256 / Point / Ring / Polygon MultiPolygon | STRING |
 | Int8 / UInt8 / Int16 / UInt16 / Int32 | INT |
diff --git a/docs/en/connector-v2/source/DB2.md b/docs/en/connector-v2/source/DB2.md
index 0c512588ac..a5a6992845 100644
--- a/docs/en/connector-v2/source/DB2.md
+++ b/docs/en/connector-v2/source/DB2.md
@@ -36,7 +36,7 @@ Read external data source data through JDBC.
 
 ## Data Type Mapping
 
-| DB2 Data type | SeaTunnel Data type |
+| DB2 Data Type | SeaTunnel Data Type |
 |---|---|---|
 | BOOLEAN | BOOLEAN |
 | SMALLINT | SHORT |
diff --git a/docs/en/connector-v2/source/FakeSource.md b/docs/en/connector-v2/source/FakeSource.md
index dff5e61bfa..43cc8dc671 100644
--- a/docs/en/connector-v2/source/FakeSource.md
+++ b/docs/en/connector-v2/source/FakeSource.md
@@ -2,6 +2,12 @@
 
 > FakeSource connector
 
+## Support Those Engines
+
+> Spark<br/>
+> Flink<br/>
+> SeaTunnel Zeta<br/>
+
 ## Description
 
 The FakeSource is a virtual data source, which randomly generates the number of rows according to the data structure of the user-defined schema,
@@ -371,14 +377,20 @@ rows = [
 
 ### Options `table-names` Case
 
-```agsl
-FakeSource {
-    table-names = ["test.table1", "test.table2"]
+```hocon
+
+source {
+  # This is a example source plugin **only for test and demonstrate the feature source plugin**
+  FakeSource {
+    table-names = ["test.table1", "test.table2", "test.table3"]
+    parallelism = 1
     schema = {
-        table = "database.schema.table"
-        ...
+      fields {
+        name = "string"
+        age = "int"
+      }
     }
-    ...
+  }
 }
 ```
 
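For context on the `table-names` hunk above: the options shown in the `+` lines come straight from the diff, while the `env` and `Console` sink blocks below are assumptions added to sketch what a complete, runnable job around that snippet might look like.

```hocon
env {
  # Assumed settings, not part of the diff above
  parallelism = 1
  job.mode = "BATCH"
}

source {
  FakeSource {
    # Options as shown in the updated doc example
    table-names = ["test.table1", "test.table2", "test.table3"]
    parallelism = 1
    schema = {
      fields {
        name = "string"
        age = "int"
      }
    }
  }
}

sink {
  # Console sink assumed here purely to make the sketch complete
  Console {}
}
```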
diff --git a/docs/en/connector-v2/source/Hive-jdbc.md b/docs/en/connector-v2/source/Hive-jdbc.md
index b301ea02f5..e30db04d32 100644
--- a/docs/en/connector-v2/source/Hive-jdbc.md
+++ b/docs/en/connector-v2/source/Hive-jdbc.md
@@ -41,7 +41,7 @@ Read external data source data through JDBC.
 
 ## Data Type Mapping
 
-| Hive Data type | SeaTunnel Data type |
+| Hive Data Type | SeaTunnel Data Type |
 |---|---|
 | BOOLEAN | BOOLEAN |
 | TINYINT<br/> SMALLINT | SHORT |
diff --git a/docs/en/connector-v2/source/Hudi.md b/docs/en/connector-v2/source/Hudi.md
index 46a2815b5c..353142a8e4 100644
--- a/docs/en/connector-v2/source/Hudi.md
+++ b/docs/en/connector-v2/source/Hudi.md
@@ -33,7 +33,7 @@ In order to use this connector, You must ensure your spark/flink cluster already
 
 ## Data Type Mapping
 
-| Hudi Data type | Seatunnel Data type |
+| Hudi Data Type | Seatunnel Data Type |
 |----------------|---------------------|
 | ALL TYPE       | STRING              |
 
diff --git a/docs/en/connector-v2/source/IoTDB.md b/docs/en/connector-v2/source/IoTDB.md
index 1dda73e59c..7969f366f9 100644
--- a/docs/en/connector-v2/source/IoTDB.md
+++ b/docs/en/connector-v2/source/IoTDB.md
@@ -38,7 +38,7 @@ There is a conflict of thrift version between IoTDB and Spark.Therefore, you nee
 
 ## Data Type Mapping
 
-| IotDB Data type | SeaTunnel Data type |
+| IotDB Data Type | SeaTunnel Data Type |
 |-----------------|---------------------|
 | BOOLEAN         | BOOLEAN             |
 | INT32           | TINYINT             |
diff --git a/docs/en/connector-v2/source/Kudu.md b/docs/en/connector-v2/source/Kudu.md
index ac836b970a..4d834e5e2d 100644
--- a/docs/en/connector-v2/source/Kudu.md
+++ b/docs/en/connector-v2/source/Kudu.md
@@ -28,7 +28,7 @@ The tested kudu version is 1.11.1.
 
 ## Data Type Mapping
 
-|      kudu Data type      | SeaTunnel Data type |
+|      kudu Data Type      | SeaTunnel Data Type |
 |--------------------------|---------------------|
 | BOOL                     | BOOLEAN             |
 | INT8<br/>INT16<br/>INT32 | INT                 |
@@ -75,14 +75,14 @@ env {
 
 source {
   # This is a example source plugin **only for test and demonstrate the feature source plugin**
-  kudu{
-   kudu_masters = "kudu-master:7051"
-   table_name = "kudu_source_table"
-   result_table_name = "kudu"
-   enable_kerberos = true
-   kerberos_principal = "[email protected]"
-   kerberos_keytab = "xx.keytab"
-}
+  kudu {
+    kudu_masters = "kudu-master:7051"
+    table_name = "kudu_source_table"
+    result_table_name = "kudu"
+    enable_kerberos = true
+    kerberos_principal = "[email protected]"
+    kerberos_keytab = "xx.keytab"
+  }
 }
 
 transform {
@@ -93,14 +93,15 @@ sink {
     source_table_name = "kudu"
   }
 
-   kudu{
+  kudu {
     source_table_name = "kudu"
     kudu_masters = "kudu-master:7051"
     table_name = "kudu_sink_table"
     enable_kerberos = true
     kerberos_principal = "[email protected]"
     kerberos_keytab = "xx.keytab"
- }
+  }
+}
 ```
 
 ### Multiple Table
diff --git a/docs/en/connector-v2/source/MongoDB-CDC.md b/docs/en/connector-v2/source/MongoDB-CDC.md
index 14e240f50a..a7bd980b6d 100644
--- a/docs/en/connector-v2/source/MongoDB-CDC.md
+++ b/docs/en/connector-v2/source/MongoDB-CDC.md
@@ -75,7 +75,7 @@ db.createUser(
 
 The following table lists the field data type mapping from MongoDB BSON type to Seatunnel data type.
 
-| MongoDB BSON type | Seatunnel Data type |
+| MongoDB BSON Type | SeaTunnel Data Type |
 |-------------------|---------------------|
 | ObjectId          | STRING              |
 | String            | STRING              |
@@ -92,7 +92,7 @@ The following table lists the field data type mapping from MongoDB BSON type to
 
 For specific types in MongoDB, we use Extended JSON format to map them to Seatunnel STRING type.
 
-| MongoDB BSON type | Seatunnel STRING |
+| MongoDB BSON type | SeaTunnel STRING |
 |---|---|
 | Symbol | {"_value": {"$symbol": "12"}} |
 | RegularExpression | {"_value": {"$regularExpression": {"pattern": "^9$", "options": "i"}}} |
diff --git a/docs/en/connector-v2/source/MySQL-CDC.md b/docs/en/connector-v2/source/MySQL-CDC.md
index bc562213a2..499830f7fa 100644
--- a/docs/en/connector-v2/source/MySQL-CDC.md
+++ b/docs/en/connector-v2/source/MySQL-CDC.md
@@ -55,7 +55,7 @@ mysql> GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIE
 mysql> FLUSH PRIVILEGES;
 ```
 
-### Enabling the MySQL binlog
+### Enabling the MySQL Binlog
 
 You must enable binary logging for MySQL replication. The binary logs record transaction updates for replication tools to propagate changes.
 
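On the SeaTunnel side, the binlog prerequisite above feeds a MySQL-CDC source block. As a hedged sketch only — the connection values below are placeholders, and the option names are taken from the MySQL-CDC doc this diff touches, not guaranteed by the diff itself:

```hocon
source {
  MySQL-CDC {
    # All values here are illustrative placeholders
    base-url = "jdbc:mysql://localhost:3306/testdb"
    username = "st_user"
    password = "seatunnel"
    table-names = ["testdb.table1"]
  }
}
```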
@@ -127,7 +127,7 @@ When an initial consistent snapshot is made for large databases, your establishe
 
 ## Data Type Mapping
 
-| Mysql Data type | SeaTunnel Data type |
+| Mysql Data Type | SeaTunnel Data Type |
 |---|---|
 | BIT(1)<br/>TINYINT(1) | BOOLEAN |
 | TINYINT | TINYINT |
diff --git a/docs/en/connector-v2/source/Mysql.md b/docs/en/connector-v2/source/Mysql.md
index e1fe6a3e8e..216d3874bf 100644
--- a/docs/en/connector-v2/source/Mysql.md
+++ b/docs/en/connector-v2/source/Mysql.md
@@ -41,7 +41,7 @@ Read external data source data through JDBC.
 
 ## Data Type Mapping
 
-| Mysql Data type | SeaTunnel Data type |
+| Mysql Data Type | SeaTunnel Data Type |
 |---|---|
 | BIT(1)<br/>INT UNSIGNED | BOOLEAN |
 | TINYINT<br/>TINYINT UNSIGNED<br/>SMALLINT<br/>SMALLINT UNSIGNED<br/>MEDIUMINT<br/>MEDIUMINT UNSIGNED<br/>INT<br/>INTEGER<br/>YEAR | INT |
diff --git a/docs/en/connector-v2/source/Oracle.md b/docs/en/connector-v2/source/Oracle.md
index 46d1761967..eec999fbcf 100644
--- a/docs/en/connector-v2/source/Oracle.md
+++ b/docs/en/connector-v2/source/Oracle.md
@@ -25,7 +25,7 @@ Read external data source data through JDBC.
 
 ## Supported DataSource Info
 
-| Datasource | Supported versions | Driver | Url | Maven |
+| Datasource | Supported Versions | Driver | Url | Maven |
 |---|---|---|---|---|
 | Oracle | Different dependency version has different driver class. | oracle.jdbc.OracleDriver | jdbc:oracle:thin:@datasource01:1523:xe | https://mvnrepository.com/artifact/com.oracle.database.jdbc/ojdbc8 |
 
@@ -37,7 +37,7 @@ Read external data source data through JDBC.
 
 ## Data Type Mapping
 
-| Oracle Data type | SeaTunnel Data type |
+| Oracle Data Type | SeaTunnel Data Type |
 |---|---|
 | INTEGER | INT |
 | FLOAT | DECIMAL(38, 18) |
diff --git a/docs/en/connector-v2/source/PostgreSQL.md b/docs/en/connector-v2/source/PostgreSQL.md
index e991f22c1f..34dcd5ec10 100644
--- a/docs/en/connector-v2/source/PostgreSQL.md
+++ b/docs/en/connector-v2/source/PostgreSQL.md
@@ -25,7 +25,7 @@ Read external data source data through JDBC.
 
 ## Supported DataSource Info
 
-| Datasource | Supported versions | Driver | Url | Maven |
+| Datasource | Supported Versions | Driver | Url | Maven |
 |---|---|---|---|---|
 | PostgreSQL | Different dependency version has different driver class. | org.postgresql.Driver | jdbc:postgresql://localhost:5432/test | [Download](https://mvnrepository.com/artifact/org.postgresql/postgresql) |
 | PostgreSQL | If you want to manipulate the GEOMETRY type in PostgreSQL. | org.postgresql.Driver | jdbc:postgresql://localhost:5432/test | [Download](https://mvnrepository.com/artifact/net.postgis/postgis-jdbc) |
diff --git a/docs/en/connector-v2/source/RocketMQ.md b/docs/en/connector-v2/source/RocketMQ.md
index 4e903dc900..d496a259bd 100644
--- a/docs/en/connector-v2/source/RocketMQ.md
+++ b/docs/en/connector-v2/source/RocketMQ.md
@@ -12,7 +12,7 @@
 > Flink<br/>
 > SeaTunnel Zeta<br/>
 
-## Key features
+## Key Features
 
 - [x] [batch](../../concept/connector-v2-features.md)
 - [x] [stream](../../concept/connector-v2-features.md)
diff --git a/docs/en/connector-v2/source/SftpFile.md b/docs/en/connector-v2/source/SftpFile.md
index 05b3bc4f38..b6096606a2 100644
--- a/docs/en/connector-v2/source/SftpFile.md
+++ b/docs/en/connector-v2/source/SftpFile.md
@@ -8,7 +8,7 @@
 > Flink<br/>
 > SeaTunnel Zeta<br/>
 
-## Key features
+## Key Features
 
 - [x] [batch](../../concept/connector-v2-features.md)
 - [ ] [stream](../../concept/connector-v2-features.md)
diff --git a/docs/en/connector-v2/source/SqlServer-CDC.md b/docs/en/connector-v2/source/SqlServer-CDC.md
index 62b788ac15..02cc4c21ac 100644
--- a/docs/en/connector-v2/source/SqlServer-CDC.md
+++ b/docs/en/connector-v2/source/SqlServer-CDC.md
@@ -37,7 +37,7 @@ Please download and put SqlServer driver in `${SEATUNNEL_HOME}/lib/` dir. For ex
 
 ## Data Type Mapping
 
-| SQLserver Data type | SeaTunnel Data type |
+| SQLserver Data Type | SeaTunnel Data Type |
 |---|---|
 | CHAR<br/>VARCHAR<br/>NCHAR<br/>NVARCHAR<br/>STRUCT<br/>CLOB<br/>LONGVARCHAR<br/>LONGNVARCHAR<br/> | STRING |
 | BLOB | BYTES |
diff --git a/docs/en/connector-v2/source/Vertica.md b/docs/en/connector-v2/source/Vertica.md
index 9e945356ff..c78625dab0 100644
--- a/docs/en/connector-v2/source/Vertica.md
+++ b/docs/en/connector-v2/source/Vertica.md
@@ -36,7 +36,7 @@ Read external data source data through JDBC.
 
 ## Data Type Mapping
 
-| Vertical Data type | SeaTunnel Data type |
+| Vertical Data Type | SeaTunnel Data Type |
 |---|---|
 | BIT | BOOLEAN |
 | TINYINT<br/>TINYINT UNSIGNED<br/>SMALLINT<br/>SMALLINT UNSIGNED<br/>MEDIUMINT<br/>MEDIUMINT UNSIGNED<br/>INT<br/>INTEGER<br/>YEAR | INT |
