This is an automated email from the ASF dual-hosted git repository.

fanjia pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/seatunnel.git


The following commit(s) were added to refs/heads/dev by this push:
     new 16a430e909 [Doc] Improve doc of driver put path
16a430e909 is described below

commit 16a430e909e87c6c71144238f9f24df4f845d5cf
Author: Eric <[email protected]>
AuthorDate: Fri Jan 19 10:06:35 2024 +0800

    [Doc] Improve doc of driver put path
---
 docs/en/connector-v2/sink/DB2.md             | 25 +++++++++++++----------
 docs/en/connector-v2/sink/IoTDB.md           | 20 +++++++++++++------
 docs/en/connector-v2/sink/Jdbc.md            | 10 ++++++----
 docs/en/connector-v2/sink/Mysql.md           | 25 +++++++++++++----------
 docs/en/connector-v2/sink/Oracle.md          | 20 ++++++++++++++-----
 docs/en/connector-v2/sink/PostgreSql.md      | 20 ++++++++++++++-----
 docs/en/connector-v2/sink/SqlServer.md       | 20 ++++++++++++++-----
 docs/en/connector-v2/sink/Vertica.md         | 20 ++++++++++++++-----
 docs/en/connector-v2/source/DB2.md           | 18 +++++++++++++----
 docs/en/connector-v2/source/IoTDB.md         | 30 ++++++++++++++++++----------
 docs/en/connector-v2/source/Jdbc.md          | 10 ++++++++++
 docs/en/connector-v2/source/MySQL-CDC.md     | 20 ++++++++++++-------
 docs/en/connector-v2/source/Mysql.md         | 18 +++++++++++++----
 docs/en/connector-v2/source/Oracle-CDC.md    | 12 ++++++++++-
 docs/en/connector-v2/source/Oracle.md        | 20 +++++++++++++++----
 docs/en/connector-v2/source/Postgre-CDC.md   | 10 +++++++++-
 docs/en/connector-v2/source/PostgreSQL.md    | 10 ++++++++++
 docs/en/connector-v2/source/SqlServer-CDC.md | 10 +++++++++-
 docs/en/connector-v2/source/SqlServer.md     | 10 ++++++++++
 docs/en/connector-v2/source/Vertica.md       | 23 ++++++++++++---------
 20 files changed, 260 insertions(+), 91 deletions(-)

diff --git a/docs/en/connector-v2/sink/DB2.md b/docs/en/connector-v2/sink/DB2.md
index 72baba8703..5c3de37306 100644
--- a/docs/en/connector-v2/sink/DB2.md
+++ b/docs/en/connector-v2/sink/DB2.md
@@ -8,6 +8,21 @@
 > Flink<br/>
 > SeaTunnel Zeta<br/>
 
+## Description
+
+Write data through jdbc. Support Batch mode and Streaming mode, support concurrent writing, support exactly-once
+semantics (using XA transaction guarantee).
+
+## Using Dependency
+
+### For Spark/Flink Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/com.ibm.db2.jcc/db2jcc) has been placed in directory `${SEATUNNEL_HOME}/plugins/`.
+
+### For SeaTunnel Zeta Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/com.ibm.db2.jcc/db2jcc) has been placed in directory `${SEATUNNEL_HOME}/lib/`.
+
 ## Key Features
 
 - [x] [exactly-once](../../concept/connector-v2-features.md)
@@ -16,22 +31,12 @@
 > Use `Xa transactions` to ensure `exactly-once`. So only support `exactly-once` for the database which is
 > support `Xa transactions`. You can set `is_exactly_once=true` to enable it.
 
-## Description
-
-Write data through jdbc. Support Batch mode and Streaming mode, support concurrent writing, support exactly-once
-semantics (using XA transaction guarantee).
-
 ## Supported DataSource Info
 
 | Datasource |                    Supported Versions                    |             Driver             |                Url                |                                 Maven                                 |
 |------------|----------------------------------------------------------|--------------------------------|-----------------------------------|-----------------------------------------------------------------------|
 | DB2        | Different dependency version has different driver class. | com.ibm.db2.jdbc.app.DB2Driver | jdbc:db2://127.0.0.1:50000/dbname | [Download](https://mvnrepository.com/artifact/com.ibm.db2.jcc/db2jcc) |
 
-## Database Dependency
-
-> Please download the support list corresponding to 'Maven' and copy it to the '$SEATNUNNEL_HOME/plugins/jdbc/lib/' working directory<br/>
-> For example DB2 datasource: cp db2-connector-java-xxx.jar $SEATNUNNEL_HOME/plugins/jdbc/lib/
-
 ## Data Type Mapping
 
 |                                            DB2 Data Type                                             | SeaTunnel Data Type |
diff --git a/docs/en/connector-v2/sink/IoTDB.md b/docs/en/connector-v2/sink/IoTDB.md
index 8ace6724cb..9cbcd68b8a 100644
--- a/docs/en/connector-v2/sink/IoTDB.md
+++ b/docs/en/connector-v2/sink/IoTDB.md
@@ -8,6 +8,20 @@
 > Flink<br/>
 > SeaTunnel Zeta<br/>
 
+## Description
+
+Used to write data to IoTDB.
+
+## Using Dependency
+
+### For Spark/Flink Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/org.apache.iotdb/iotdb-jdbc) has been placed in directory `${SEATUNNEL_HOME}/plugins/`.
+
+### For SeaTunnel Zeta Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/org.apache.iotdb/iotdb-jdbc) has been placed in directory `${SEATUNNEL_HOME}/lib/`.
+
 ## Key Features
 
 - [x] [exactly-once](../../concept/connector-v2-features.md)
@@ -15,10 +29,6 @@
 IoTDB supports the `exactly-once` feature through idempotent writing. If two pieces of data have
 the same `key` and `timestamp`, the new data will overwrite the old one.
 
-## Description
-
-Used to write data to IoTDB.
-
 :::tip
 
 There is a conflict of thrift version between IoTDB and Spark.Therefore, you need to execute `rm -f $SPARK_HOME/jars/libthrift*` and `cp $IOTDB_HOME/lib/libthrift* $SPARK_HOME/jars/` to resolve it.
@@ -31,8 +41,6 @@ There is a conflict of thrift version between IoTDB and Spark.Therefore, you nee
 |------------|--------------------|----------------|
 | IoTDB      | `>= 0.13.0`        | localhost:6667 |
 
-## Database Dependency
-
 ## Data Type Mapping
 
 | IotDB Data Type | SeaTunnel Data Type |
diff --git a/docs/en/connector-v2/sink/Jdbc.md b/docs/en/connector-v2/sink/Jdbc.md
index ef7458014a..bfe49277ea 100644
--- a/docs/en/connector-v2/sink/Jdbc.md
+++ b/docs/en/connector-v2/sink/Jdbc.md
@@ -7,13 +7,15 @@
 Write data through jdbc. Support Batch mode and Streaming mode, support concurrent writing, support exactly-once
 semantics (using XA transaction guarantee).
 
-:::tip
+## Using Dependency
 
-Warn: for license compliance, you have to provide database driver yourself, copy to `$SEATNUNNEL_HOME/lib/` directory in order to make them work.
+### For Spark/Flink Engine
 
-e.g. If you use MySQL, should download and copy `mysql-connector-java-xxx.jar` to `$SEATNUNNEL_HOME/lib/`. For Spark/Flink, you should also copy it to `$SPARK_HOME/jars/` or `$FLINK_HOME/lib/`.
+> 1. You need to ensure that the jdbc driver jar package has been placed in directory `${SEATUNNEL_HOME}/plugins/`.
 
-:::
+### For SeaTunnel Zeta Engine
+
+> 1. You need to ensure that the jdbc driver jar package has been placed in directory `${SEATUNNEL_HOME}/lib/`.
 
 ## Key Features
 
diff --git a/docs/en/connector-v2/sink/Mysql.md b/docs/en/connector-v2/sink/Mysql.md
index ab18ca2dc3..b6c3fd4a9b 100644
--- a/docs/en/connector-v2/sink/Mysql.md
+++ b/docs/en/connector-v2/sink/Mysql.md
@@ -12,6 +12,21 @@
 > Flink<br/>
 > SeaTunnel Zeta<br/>
 
+## Description
+
+Write data through jdbc. Support Batch mode and Streaming mode, support concurrent writing, support exactly-once
+semantics (using XA transaction guarantee).
+
+## Using Dependency
+
+### For Spark/Flink Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/mysql/mysql-connector-java) has been placed in directory `${SEATUNNEL_HOME}/plugins/`.
+
+### For SeaTunnel Zeta Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/mysql/mysql-connector-java) has been placed in directory `${SEATUNNEL_HOME}/lib/`.
+
 ## Key Features
 
 - [x] [exactly-once](../../concept/connector-v2-features.md)
@@ -20,22 +35,12 @@
 > Use `Xa transactions` to ensure `exactly-once`. So only support `exactly-once` for the database which is
 > support `Xa transactions`. You can set `is_exactly_once=true` to enable it.
 
-## Description
-
-Write data through jdbc. Support Batch mode and Streaming mode, support concurrent writing, support exactly-once
-semantics (using XA transaction guarantee).
-
 ## Supported DataSource Info
 
 | Datasource |                    Supported Versions                    |          Driver          |                  Url                  |                                   Maven                                   |
 |------------|----------------------------------------------------------|--------------------------|---------------------------------------|---------------------------------------------------------------------------|
 | Mysql      | Different dependency version has different driver class. | com.mysql.cj.jdbc.Driver | jdbc:mysql://localhost:3306:3306/test | [Download](https://mvnrepository.com/artifact/mysql/mysql-connector-java) |
 
-## Database Dependency
-
-> Please download the support list corresponding to 'Maven' and copy it to the '$SEATNUNNEL_HOME/plugins/jdbc/lib/' working directory<br/>
-> For example Mysql datasource: cp mysql-connector-java-xxx.jar $SEATNUNNEL_HOME/plugins/jdbc/lib/
-
 ## Data Type Mapping
 
 |                                                          Mysql Data Type                                                          |                                                                SeaTunnel Data Type                                                                |
diff --git a/docs/en/connector-v2/sink/Oracle.md b/docs/en/connector-v2/sink/Oracle.md
index 0d2b7ab504..f250f552bd 100644
--- a/docs/en/connector-v2/sink/Oracle.md
+++ b/docs/en/connector-v2/sink/Oracle.md
@@ -8,6 +8,21 @@
 > Flink<br/>
 > SeaTunnel Zeta<br/>
 
+## Description
+
+Write data through jdbc. Support Batch mode and Streaming mode, support concurrent writing, support exactly-once
+semantics (using XA transaction guarantee).
+
+## Using Dependency
+
+### For Spark/Flink Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/com.oracle.database.jdbc/ojdbc8) has been placed in directory `${SEATUNNEL_HOME}/plugins/`.
+
+### For SeaTunnel Zeta Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/com.oracle.database.jdbc/ojdbc8) has been placed in directory `${SEATUNNEL_HOME}/lib/`.
+
 ## Key Features
 
 - [x] [exactly-once](../../concept/connector-v2-features.md)
@@ -16,11 +31,6 @@
 > Use `Xa transactions` to ensure `exactly-once`. So only support `exactly-once` for the database which is
 > support `Xa transactions`. You can set `is_exactly_once=true` to enable it.
 
-## Description
-
-Write data through jdbc. Support Batch mode and Streaming mode, support concurrent writing, support exactly-once
-semantics (using XA transaction guarantee).
-
 ## Supported DataSource Info
 
 | Datasource |                    Supported Versions                    |           Driver          |                  Url                   |                               Maven                                |
diff --git a/docs/en/connector-v2/sink/PostgreSql.md b/docs/en/connector-v2/sink/PostgreSql.md
index 3e056376bd..a750755e31 100644
--- a/docs/en/connector-v2/sink/PostgreSql.md
+++ b/docs/en/connector-v2/sink/PostgreSql.md
@@ -8,6 +8,21 @@
 > Flink<br/>
 > SeaTunnel Zeta<br/>
 
+## Description
+
+Write data through jdbc. Support Batch mode and Streaming mode, support concurrent writing, support exactly-once
+semantics (using XA transaction guarantee).
+
+## Using Dependency
+
+### For Spark/Flink Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/org.postgresql/postgresql) has been placed in directory `${SEATUNNEL_HOME}/plugins/`.
+
+### For SeaTunnel Zeta Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/org.postgresql/postgresql) has been placed in directory `${SEATUNNEL_HOME}/lib/`.
+
 ## Key Features
 
 - [x] [exactly-once](../../concept/connector-v2-features.md)
@@ -16,11 +31,6 @@
 > Use `Xa transactions` to ensure `exactly-once`. So only support `exactly-once` for the database which is
 > support `Xa transactions`. You can set `is_exactly_once=true` to enable it.
 
-## Description
-
-Write data through jdbc. Support Batch mode and Streaming mode, support concurrent writing, support exactly-once
-semantics (using XA transaction guarantee).
-
 ## Supported DataSource Info
 
 | Datasource |                     Supported Versions                     |         Driver         |                  Url                  |                                   Maven                                   |
diff --git a/docs/en/connector-v2/sink/SqlServer.md b/docs/en/connector-v2/sink/SqlServer.md
index 72ad7ff29f..1a50a01d6a 100644
--- a/docs/en/connector-v2/sink/SqlServer.md
+++ b/docs/en/connector-v2/sink/SqlServer.md
@@ -12,6 +12,21 @@
 > Flink<br/>
 > Seatunnel Zeta<br/>
 
+## Description
+
+Write data through jdbc. Support Batch mode and Streaming mode, support concurrent writing, support exactly-once
+semantics (using XA transaction guarantee).
+
+## Using Dependency
+
+### For Spark/Flink Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/com.microsoft.sqlserver/mssql-jdbc) has been placed in directory `${SEATUNNEL_HOME}/plugins/`.
+
+### For SeaTunnel Zeta Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/com.microsoft.sqlserver/mssql-jdbc) has been placed in directory `${SEATUNNEL_HOME}/lib/`.
+
 ## Key Features
 
 - [x] [exactly-once](../../concept/connector-v2-features.md)
@@ -20,11 +35,6 @@
 > Use `Xa transactions` to ensure `exactly-once`. So only support `exactly-once` for the database which is
 > support `Xa transactions`. You can set `is_exactly_once=true` to enable it.
 
-## Description
-
-Write data through jdbc. Support Batch mode and Streaming mode, support concurrent writing, support exactly-once
-semantics (using XA transaction guarantee).
-
 ## Supported DataSource Info
 
 | Datasource |   Supported Versions    |                    Driver                    |               Url               |                                       Maven                                       |
diff --git a/docs/en/connector-v2/sink/Vertica.md b/docs/en/connector-v2/sink/Vertica.md
index 620e8c0457..dc302c5d7d 100644
--- a/docs/en/connector-v2/sink/Vertica.md
+++ b/docs/en/connector-v2/sink/Vertica.md
@@ -8,6 +8,21 @@
 > Flink<br/>
 > SeaTunnel Zeta<br/>
 
+## Description
+
+Write data through jdbc. Support Batch mode and Streaming mode, support concurrent writing, support exactly-once
+semantics (using XA transaction guarantee).
+
+## Using Dependency
+
+### For Spark/Flink Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://www.vertica.com/download/vertica/client-drivers/) has been placed in directory `${SEATUNNEL_HOME}/plugins/`.
+
+### For SeaTunnel Zeta Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://www.vertica.com/download/vertica/client-drivers/) has been placed in directory `${SEATUNNEL_HOME}/lib/`.
+
 ## Key Features
 
 - [x] [exactly-once](../../concept/connector-v2-features.md)
@@ -16,11 +31,6 @@
 > Use `Xa transactions` to ensure `exactly-once`. So only support `exactly-once` for the database which is
 > support `Xa transactions`. You can set `is_exactly_once=true` to enable it.
 
-## Description
-
-Write data through jdbc. Support Batch mode and Streaming mode, support concurrent writing, support exactly-once
-semantics (using XA transaction guarantee).
-
 ## Supported DataSource Info
 
 | Datasource |                    Supported Versions                    |         Driver          |                  Url                  |                               Maven                                 |
diff --git a/docs/en/connector-v2/source/DB2.md b/docs/en/connector-v2/source/DB2.md
index a5a6992845..0d2df826a0 100644
--- a/docs/en/connector-v2/source/DB2.md
+++ b/docs/en/connector-v2/source/DB2.md
@@ -8,6 +8,20 @@
 > Flink<br/>
 > SeaTunnel Zeta<br/>
 
+## Description
+
+Read external data source data through JDBC.
+
+## Using Dependency
+
+### For Spark/Flink Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/com.ibm.db2.jcc/db2jcc) has been placed in directory `${SEATUNNEL_HOME}/plugins/`.
+
+### For SeaTunnel Zeta Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/com.ibm.db2.jcc/db2jcc) has been placed in directory `${SEATUNNEL_HOME}/lib/`.
+
 ## Key Features
 
 - [x] [batch](../../concept/connector-v2-features.md)
@@ -19,10 +33,6 @@
 
 > supports query SQL and can achieve projection effect.
 
-## Description
-
-Read external data source data through JDBC.
-
 ## Supported DataSource Info
 
 | Datasource |                    Supported versions                    |            Driver             |                Url                |                                Maven                                 |
diff --git a/docs/en/connector-v2/source/IoTDB.md b/docs/en/connector-v2/source/IoTDB.md
index 7969f366f9..ee9f04cb7a 100644
--- a/docs/en/connector-v2/source/IoTDB.md
+++ b/docs/en/connector-v2/source/IoTDB.md
@@ -8,6 +8,26 @@
 > Flink<br/>
 > SeaTunnel Zeta<br/>
 
+## Description
+
+Read external data source data through IoTDB.
+
+:::tip
+
+There is a conflict of thrift version between IoTDB and Spark.Therefore, you need to execute `rm -f $SPARK_HOME/jars/libthrift*` and `cp $IOTDB_HOME/lib/libthrift* $SPARK_HOME/jars/` to resolve it.
+
+:::
+
+## Using Dependency
+
+### For Spark/Flink Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/org.apache.iotdb/iotdb-jdbc) has been placed in directory `${SEATUNNEL_HOME}/plugins/`.
+
+### For SeaTunnel Zeta Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/org.apache.iotdb/iotdb-jdbc) has been placed in directory `${SEATUNNEL_HOME}/lib/`.
+
 ## Key features
 
 - [x] [batch](../../concept/connector-v2-features.md)
@@ -20,16 +40,6 @@ supports query SQL and can achieve projection effect.
 - [x] [parallelism](../../concept/connector-v2-features.md)
 - [ ] [support user-defined split](../../concept/connector-v2-features.md)
 
-## Description
-
-Read external data source data through IoTDB.
-
-:::tip
-
-There is a conflict of thrift version between IoTDB and Spark.Therefore, you need to execute `rm -f $SPARK_HOME/jars/libthrift*` and `cp $IOTDB_HOME/lib/libthrift* $SPARK_HOME/jars/` to resolve it.
-
-:::
-
 ## Supported DataSource Info
 
 | Datasource | Supported Versions |      Url       |
diff --git a/docs/en/connector-v2/source/Jdbc.md b/docs/en/connector-v2/source/Jdbc.md
index 38ed1f6512..09c3ab636d 100644
--- a/docs/en/connector-v2/source/Jdbc.md
+++ b/docs/en/connector-v2/source/Jdbc.md
@@ -14,6 +14,16 @@ e.g. If you use MySQL, should download and copy `mysql-connector-java-xxx.jar` t
 
 :::
 
+## Using Dependency
+
+### For Spark/Flink Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/mysql/mysql-connector-java) has been placed in directory `${SEATUNNEL_HOME}/plugins/`.
+
+### For SeaTunnel Zeta Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/mysql/mysql-connector-java) has been placed in directory `${SEATUNNEL_HOME}/lib/`.
+
 ## Key features
 
 - [x] [batch](../../concept/connector-v2-features.md)
diff --git a/docs/en/connector-v2/source/MySQL-CDC.md b/docs/en/connector-v2/source/MySQL-CDC.md
index 850ffb3d58..8037b162cb 100644
--- a/docs/en/connector-v2/source/MySQL-CDC.md
+++ b/docs/en/connector-v2/source/MySQL-CDC.md
@@ -7,6 +7,11 @@
 > SeaTunnel Zeta<br/>
 > Flink <br/>
 
+## Description
+
+The MySQL CDC connector allows for reading snapshot data and incremental data from MySQL database. This document
+describes how to set up the MySQL CDC connector to run SQL queries against MySQL databases.
+
 ## Key features
 
 - [ ] [batch](../../concept/connector-v2-features.md)
@@ -16,22 +21,23 @@
 - [x] [parallelism](../../concept/connector-v2-features.md)
 - [x] [support user-defined split](../../concept/connector-v2-features.md)
 
-## Description
-
-The MySQL CDC connector allows for reading snapshot data and incremental data from MySQL database. This document
-describes how to set up the MySQL CDC connector to run SQL queries against MySQL databases.
-
 ## Supported DataSource Info
 
 | Datasource |                                                               Supported versions                                                                |          Driver          |               Url                |                                Maven                                 |
 |------------|-------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------|----------------------------------|----------------------------------------------------------------------|
 | MySQL      | <li> [MySQL](https://dev.mysql.com/doc): 5.6, 5.7, 8.0.x </li><li> [RDS MySQL](https://www.aliyun.com/product/rds/mysql): 5.6, 5.7, 8.0.x </li> | com.mysql.cj.jdbc.Driver | jdbc:mysql://localhost:3306/test | https://mvnrepository.com/artifact/mysql/mysql-connector-java/8.0.28 |
 
-## Database Dependency
+## Using Dependency
 
 ### Install Jdbc Driver
 
-Please download and put mysql driver in `${SEATUNNEL_HOME}/lib/` dir. For example: cp mysql-connector-java-xxx.jar `$SEATNUNNEL_HOME/lib/`
+#### For Flink Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/mysql/mysql-connector-java) has been placed in directory `${SEATUNNEL_HOME}/plugins/`.
+
+#### For SeaTunnel Zeta Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/mysql/mysql-connector-java) has been placed in directory `${SEATUNNEL_HOME}/lib/`.
 
 ### Creating MySQL user
 
diff --git a/docs/en/connector-v2/source/Mysql.md b/docs/en/connector-v2/source/Mysql.md
index 216d3874bf..82e1e38da7 100644
--- a/docs/en/connector-v2/source/Mysql.md
+++ b/docs/en/connector-v2/source/Mysql.md
@@ -2,6 +2,10 @@
 
 > JDBC Mysql Source Connector
 
+## Description
+
+Read external data source data through JDBC.
+
 ## Support Mysql Version
 
 - 5.5/5.6/5.7/8.0
@@ -12,6 +16,16 @@
 > Flink<br/>
 > SeaTunnel Zeta<br/>
 
+## Using Dependency
+
+### For Spark/Flink Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/mysql/mysql-connector-java) has been placed in directory `${SEATUNNEL_HOME}/plugins/`.
+
+### For SeaTunnel Zeta Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/mysql/mysql-connector-java) has been placed in directory `${SEATUNNEL_HOME}/lib/`.
+
 ## Key Features
 
 - [x] [batch](../../concept/connector-v2-features.md)
@@ -24,10 +38,6 @@
 
 > supports query SQL and can achieve projection effect.
 
-## Description
-
-Read external data source data through JDBC.
-
 ## Supported DataSource Info
 
 | Datasource |                    Supported versions                    |         Driver          |                  Url                  |                                  Maven                                   |
diff --git a/docs/en/connector-v2/source/Oracle-CDC.md b/docs/en/connector-v2/source/Oracle-CDC.md
index 51375f3492..22a824a0c2 100644
--- a/docs/en/connector-v2/source/Oracle-CDC.md
+++ b/docs/en/connector-v2/source/Oracle-CDC.md
@@ -31,7 +31,17 @@ describes how to set up the Oracle CDC connector to run SQL queries against Orac
 
 ### Install Jdbc Driver
 
-Please download and put oracle driver in `${SEATUNNEL_HOME}/lib/` dir. For example: cp ojdbc8-xxxxxx.jar `$SEATNUNNEL_HOME/lib/`
+#### For Spark/Flink Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/com.oracle.database.jdbc/ojdbc8) has been placed in directory `${SEATUNNEL_HOME}/plugins/`.
+> 2. To support the i18n character set, copy the `orai18n.jar` to the `$SEATNUNNEL_HOME/plugins/` directory.
+
+#### For SeaTunnel Zeta Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/com.oracle.database.jdbc/ojdbc8) has been placed in directory `${SEATUNNEL_HOME}/lib/`.
+> 2. To support the i18n character set, copy the `orai18n.jar` to the `$SEATNUNNEL_HOME/lib/` directory.
+
+### Enable Oracle Logminer
 
 > To enable Oracle CDC (Change Data Capture) using Logminer in Seatunnel, which is a built-in tool provided by Oracle, follow the steps below:
 
diff --git a/docs/en/connector-v2/source/Oracle.md b/docs/en/connector-v2/source/Oracle.md
index eec999fbcf..ba44142a31 100644
--- a/docs/en/connector-v2/source/Oracle.md
+++ b/docs/en/connector-v2/source/Oracle.md
@@ -2,12 +2,28 @@
 
 > JDBC Oracle Source Connector
 
+## Description
+
+Read external data source data through JDBC.
+
 ## Support Those Engines
 
 > Spark<br/>
 > Flink<br/>
 > SeaTunnel Zeta<br/>
 
+## Using Dependency
+
+### For Spark/Flink Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/com.oracle.database.jdbc/ojdbc8) has been placed in directory `${SEATUNNEL_HOME}/plugins/`.
+> 2. To support the i18n character set, copy the `orai18n.jar` to the `$SEATNUNNEL_HOME/plugins/` directory.
+
+### For SeaTunnel Zeta Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/com.oracle.database.jdbc/ojdbc8) has been placed in directory `${SEATUNNEL_HOME}/lib/`.
+> 2. To support the i18n character set, copy the `orai18n.jar` to the `$SEATNUNNEL_HOME/lib/` directory.
+
 ## Key Features
 
 - [x] [batch](../../concept/connector-v2-features.md)
@@ -19,10 +35,6 @@
 
 > supports query SQL and can achieve projection effect.
 
-## Description
-
-Read external data source data through JDBC.
-
 ## Supported DataSource Info
 
 | Datasource |                    Supported Versions                    |         Driver          |                  Url                   |                             Maven                                |
diff --git a/docs/en/connector-v2/source/Postgre-CDC.md b/docs/en/connector-v2/source/Postgre-CDC.md
index a9df4d0d08..6b65a4a7cb 100644
--- a/docs/en/connector-v2/source/Postgre-CDC.md
+++ b/docs/en/connector-v2/source/Postgre-CDC.md
@@ -28,10 +28,18 @@ describes how to set up the Postgre CDC connector to run SQL queries against Pos
 | PostgreSQL | Different dependency version has different driver class.   | org.postgresql.Driver | jdbc:postgresql://localhost:5432/test | [Download](https://mvnrepository.com/artifact/org.postgresql/postgresql) |
 | PostgreSQL | If you want to manipulate the GEOMETRY type in PostgreSQL. | org.postgresql.Driver | jdbc:postgresql://localhost:5432/test | [Download](https://mvnrepository.com/artifact/net.postgis/postgis-jdbc)  |
 
-## Database Dependency
+## Using Dependency
 
 ### Install Jdbc Driver
 
+#### For Spark/Flink Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/org.postgresql/postgresql) has been placed in directory `${SEATUNNEL_HOME}/plugins/`.
+
+#### For SeaTunnel Zeta Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/org.postgresql/postgresql) has been placed in directory `${SEATUNNEL_HOME}/lib/`.
+
 Please download and put Postgre driver in `${SEATUNNEL_HOME}/lib/` dir. For example: cp postgresql-xxx.jar `$SEATNUNNEL_HOME/lib/`
 
 > Here are the steps to enable CDC (Change Data Capture) in PostgreSQL:
diff --git a/docs/en/connector-v2/source/PostgreSQL.md b/docs/en/connector-v2/source/PostgreSQL.md
index 34dcd5ec10..ab5436107e 100644
--- a/docs/en/connector-v2/source/PostgreSQL.md
+++ b/docs/en/connector-v2/source/PostgreSQL.md
@@ -8,6 +8,16 @@
 > Flink<br/>
 > SeaTunnel Zeta<br/>
 
+## Using Dependency
+
+### For Spark/Flink Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/org.postgresql/postgresql) has been placed in directory `${SEATUNNEL_HOME}/plugins/`.
+
+### For SeaTunnel Zeta Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/org.postgresql/postgresql) has been placed in directory `${SEATUNNEL_HOME}/lib/`.
+
 ## Key Features
 
 - [x] [batch](../../concept/connector-v2-features.md)
diff --git a/docs/en/connector-v2/source/SqlServer-CDC.md b/docs/en/connector-v2/source/SqlServer-CDC.md
index 1c932c28f5..f893677890 100644
--- a/docs/en/connector-v2/source/SqlServer-CDC.md
+++ b/docs/en/connector-v2/source/SqlServer-CDC.md
@@ -31,9 +31,17 @@ describes how to setup the Sql Server CDC connector to run SQL queries against S
 |------------|---------------------------------------------------------------|----------------------------------------------|---------------------------------------------------------------|-----------------------------------------------------------------------|
 | SqlServer  | <li> server:2019 (Or later version for information only)</li> | com.microsoft.sqlserver.jdbc.SQLServerDriver | jdbc:sqlserver://localhost:1433;databaseName=column_type_test | https://mvnrepository.com/artifact/com.microsoft.sqlserver/mssql-jdbc |
 
+## Using Dependency
+
 ### Install Jdbc Driver
 
-Please download and put SqlServer driver in `${SEATUNNEL_HOME}/lib/` dir. For example: cp mssql-jdbc-xxx.jar `$SEATNUNNEL_HOME/lib/`
+#### For Spark/Flink Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/com.microsoft.sqlserver/mssql-jdbc) has been placed in directory `${SEATUNNEL_HOME}/plugins/`.
+
+#### For SeaTunnel Zeta Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/com.microsoft.sqlserver/mssql-jdbc) has been placed in directory `${SEATUNNEL_HOME}/lib/`.
 
 ## Data Type Mapping
 
diff --git a/docs/en/connector-v2/source/SqlServer.md b/docs/en/connector-v2/source/SqlServer.md
index 0cc6a0dacd..df73612396 100644
--- a/docs/en/connector-v2/source/SqlServer.md
+++ b/docs/en/connector-v2/source/SqlServer.md
@@ -12,6 +12,16 @@
 > Flink <br/>
 > Seatunnel Zeta <br/>
 
+## Using Dependency
+
+### For Spark/Flink Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/com.microsoft.sqlserver/mssql-jdbc) has been placed in directory `${SEATUNNEL_HOME}/plugins/`.
+
+### For SeaTunnel Zeta Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://mvnrepository.com/artifact/com.microsoft.sqlserver/mssql-jdbc) has been placed in directory `${SEATUNNEL_HOME}/lib/`.
+
 ## Key Features
 
 - [x] [batch](../../concept/connector-v2-features.md)
diff --git a/docs/en/connector-v2/source/Vertica.md b/docs/en/connector-v2/source/Vertica.md
index c78625dab0..1d8a83faa1 100644
--- a/docs/en/connector-v2/source/Vertica.md
+++ b/docs/en/connector-v2/source/Vertica.md
@@ -2,12 +2,26 @@
 
 > JDBC Vertica Source Connector
 
+## Description
+
+Read external data source data through JDBC.
+
 ## Support Those Engines
 
 > Spark<br/>
 > Flink<br/>
 > SeaTunnel Zeta<br/>
 
+## Using Dependency
+
+### For Spark/Flink Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://www.vertica.com/download/vertica/client-drivers/) has been placed in directory `${SEATUNNEL_HOME}/plugins/`.
+
+### For SeaTunnel Zeta Engine
+
+> 1. You need to ensure that the [jdbc driver jar package](https://www.vertica.com/download/vertica/client-drivers/) has been placed in directory `${SEATUNNEL_HOME}/lib/`.
+
 ## Key Features
 
 - [x] [batch](../../concept/connector-v2-features.md)
@@ -19,21 +33,12 @@
 
 > supports query SQL and can achieve projection effect.
 
-## Description
-
-Read external data source data through JDBC.
-
 ## Supported DataSource Info
 
 | Datasource |                    Supported versions                    |         Driver          |                  Url                  |                               Maven                                |
 |------------|----------------------------------------------------------|-------------------------|---------------------------------------|----------------------------------------------------------------------|
 | Vertica    | Different dependency version has different driver class. | com.vertica.jdbc.Driver | jdbc:vertica://localhost:5433/vertica | [Download](https://www.vertica.com/download/vertica/client-drivers/) |
 
-## Database Dependency
-
-> Please download the support list corresponding to 'Maven' and copy it to the '$SEATNUNNEL_HOME/plugins/jdbc/lib/' working directory<br/>
-> For example Vertica datasource: cp vertica-jdbc-xxx.jar $SEATNUNNEL_HOME/plugins/jdbc/lib/
-
 ## Data Type Mapping
 
 |                                                        Vertical Data Type                                                         |                                                                SeaTunnel Data Type                                                                |

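Taken together, the doc changes in this commit converge on one placement rule: Spark/Flink engines load connector driver jars from `${SEATUNNEL_HOME}/plugins/`, while the SeaTunnel Zeta engine loads them from `${SEATUNNEL_HOME}/lib/`. A minimal shell sketch of that rule follows; the `SEATUNNEL_HOME` default, the jar name, and the `touch` standing in for a real driver download are illustrative assumptions, not part of the commit:

```shell
# Sketch of the driver-placement rule described in the updated docs.
# Assumptions: SEATUNNEL_HOME location and the driver jar name are illustrative;
# `touch` simulates having downloaded the real jar from Maven Central.
SEATUNNEL_HOME="${SEATUNNEL_HOME:-/tmp/seatunnel-demo}"
DRIVER_JAR="mysql-connector-java-8.0.28.jar"

touch "${DRIVER_JAR}"   # stand-in for the downloaded driver jar

mkdir -p "${SEATUNNEL_HOME}/plugins" "${SEATUNNEL_HOME}/lib"
cp "${DRIVER_JAR}" "${SEATUNNEL_HOME}/plugins/"   # picked up by the Spark/Flink engines
cp "${DRIVER_JAR}" "${SEATUNNEL_HOME}/lib/"       # picked up by the SeaTunnel Zeta engine
```

Placing the jar in both directories is harmless if you run more than one engine from the same installation.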