This is an automated email from the ASF dual-hosted git repository.

wanghailin pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/seatunnel.git


The following commit(s) were added to refs/heads/dev by this push:
     new 7704e4f4b2 [Doc] [JDBC Oracle] Add JDBC Oracle Documentation (#5239)
7704e4f4b2 is described below

commit 7704e4f4b224da365954cd246aabd524c8e206c0
Author: ic4y <[email protected]>
AuthorDate: Tue Aug 8 10:59:54 2023 +0800

    [Doc] [JDBC Oracle] Add JDBC Oracle Documentation (#5239)
---
 docs/en/connector-v2/sink/Oracle.md   | 191 ++++++++++++++++++++++++++++++++++
 docs/en/connector-v2/source/Oracle.md | 154 +++++++++++++++++++++++++++
 2 files changed, 345 insertions(+)

diff --git a/docs/en/connector-v2/sink/Oracle.md b/docs/en/connector-v2/sink/Oracle.md
new file mode 100644
index 0000000000..feda00b815
--- /dev/null
+++ b/docs/en/connector-v2/sink/Oracle.md
@@ -0,0 +1,191 @@
+# Oracle
+
+> JDBC Oracle Sink Connector
+
+## Support Those Engines
+
+> Spark<br/>
+> Flink<br/>
+> SeaTunnel Zeta<br/>
+
+## Key Features
+
+- [x] [exactly-once](../../concept/connector-v2-features.md)
+- [x] [cdc](../../concept/connector-v2-features.md)
+
+> Use `Xa transactions` to ensure `exactly-once`. Exactly-once is therefore only supported for databases that support `Xa transactions`. You can set `is_exactly_once=true` to enable it.
+
+## Description
+
+Write data through JDBC. Supports batch mode and streaming mode, supports concurrent writing, and supports exactly-once semantics (using XA transaction guarantee).
+
+## Supported DataSource Info
+
+| Datasource | Supported Versions | Driver | Url | Maven |
+|------------|--------------------|--------|-----|-------|
+| Oracle | Different dependency version has different driver class. | oracle.jdbc.OracleDriver | jdbc:oracle:thin:@datasource01:1523:xe | https://mvnrepository.com/artifact/com.oracle.database.jdbc/ojdbc8 |
+
+## Database Dependency
+
+> Please download the driver jar corresponding to the 'Maven' link and copy it to the '$SEATUNNEL_HOME/plugins/jdbc/lib/' working directory<br/>
+> For example, for the Oracle datasource: cp ojdbc8-xxxxxx.jar $SEATUNNEL_HOME/lib/<br/>
+> To support the i18n character set, copy the orai18n.jar to the $SEATUNNEL_HOME/lib/ directory.
+
+## Data Type Mapping
+
+| Oracle Data type | SeaTunnel Data type |
+|------------------|---------------------|
+| INTEGER | INT |
+| FLOAT | DECIMAL(38, 18) |
+| NUMBER(precision <= 9, scale == 0) | INT |
+| NUMBER(9 < precision <= 18, scale == 0) | BIGINT |
+| NUMBER(18 < precision, scale == 0) | DECIMAL(38, 0) |
+| NUMBER(scale != 0) | DECIMAL(38, 18) |
+| BINARY_DOUBLE | DOUBLE |
+| BINARY_FLOAT<br/>REAL | FLOAT |
+| CHAR<br/>NCHAR<br/>NVARCHAR2<br/>VARCHAR2<br/>LONG<br/>ROWID<br/>NCLOB<br/>CLOB<br/> | STRING |
+| DATE | DATE |
+| TIMESTAMP<br/>TIMESTAMP WITH LOCAL TIME ZONE | TIMESTAMP |
+| BLOB<br/>RAW<br/>LONG RAW<br/>BFILE | BYTES |
+
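The NUMBER rows in the table above encode a precision/scale decision rule. As an editorial illustration only (this is not SeaTunnel source code), the rule can be sketched as:

```python
# Illustrative sketch: how the NUMBER mapping rules in the table above
# resolve an Oracle column's precision/scale to a SeaTunnel type.
def map_oracle_number(precision: int, scale: int) -> str:
    """Map an Oracle NUMBER(precision, scale) column to a SeaTunnel type."""
    if scale != 0:
        # Any non-zero scale falls back to a wide decimal
        return "DECIMAL(38, 18)"
    if precision <= 9:
        return "INT"
    if precision <= 18:
        return "BIGINT"
    return "DECIMAL(38, 0)"

print(map_oracle_number(9, 0))    # INT
print(map_oracle_number(18, 0))   # BIGINT
print(map_oracle_number(19, 0))   # DECIMAL(38, 0)
print(map_oracle_number(10, 2))   # DECIMAL(38, 18)
```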
+## Options
+
+| Name | Type | Required | Default | Description |
+|------|------|----------|---------|-------------|
+| url | String | Yes | - | The URL of the JDBC connection. Example: jdbc:oracle:thin:@datasource01:1523:xe |
+| driver | String | Yes | - | The jdbc class name used to connect to the remote data source,<br/> if you use Oracle the value is `oracle.jdbc.OracleDriver`. |
+| user | String | No | - | Connection instance user name |
+| password | String | No | - | Connection instance password |
+| query | String | No | - | Use this SQL to write upstream input data to the database, e.g. `INSERT ...`. `query` has a higher priority |
+| database | String | No | - | Use this `database` and `table-name` to auto-generate SQL and write upstream input data to the database.<br/>This option is mutually exclusive with `query` and has a higher priority. |
+| table | String | No | - | Use `database` and this `table-name` to auto-generate SQL and write upstream input data to the database.<br/>This option is mutually exclusive with `query` and has a higher priority. |
+| primary_keys | Array | No | - | This option is used to support operations such as `insert`, `delete`, and `update` when SQL is automatically generated. |
+| support_upsert_by_query_primary_key_exist | Boolean | No | false | Choose to use INSERT SQL or UPDATE SQL to process update events (INSERT, UPDATE_AFTER) based on whether the query primary key exists. This configuration is only used when the database does not support upsert syntax. **Note**: this method has low performance |
+| connection_check_timeout_sec | Int | No | 30 | The time in seconds to wait for the database operation used to validate the connection to complete. |
+| max_retries | Int | No | 0 | The number of retries to submit a failed request (executeBatch) |
+| batch_size | Int | No | 1000 | For batch writing, when the number of buffered records reaches `batch_size` or the time reaches `batch_interval_ms`,<br/> the data will be flushed into the database |
+| batch_interval_ms | Int | No | 1000 | For batch writing, when the number of buffered records reaches `batch_size` or the time reaches `batch_interval_ms`, the data will be flushed into the database |
+| is_exactly_once | Boolean | No | false | Whether to enable exactly-once semantics, which will use Xa transactions. If enabled, you need to<br/>set `xa_data_source_class_name`. |
+| generate_sink_sql | Boolean | No | false | Generate SQL statements based on the database table you want to write to |
+| xa_data_source_class_name | String | No | - | The XA data source class name of the database driver, for example, for Oracle it is `oracle.jdbc.xa.client.OracleXADataSource`;<br/>please refer to the appendix for other data sources |
+| max_commit_attempts | Int | No | 3 | The number of retries for transaction commit failures |
+| transaction_timeout_sec | Int | No | -1 | The timeout after the transaction is opened; the default is -1 (never time out). Note that setting the timeout may affect<br/>exactly-once semantics |
+| auto_commit | Boolean | No | true | Automatic transaction commit is enabled by default |
+| common-options |  | No | - | Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details |
+
+### Tips
+
+> If partition_column is not set, it will run with single concurrency; if partition_column is set, it will be executed in parallel according to the concurrency of tasks.
+
+## Task Example
+
+### Simple:
+
+> This example defines a SeaTunnel synchronization task that automatically generates data through FakeSource and sends it to the JDBC sink. FakeSource generates a total of 16 rows of data (row.num=16), with each row having two fields, name (string type) and age (int type). The final target table, TEST.TEST_TABLE, will also contain 16 rows of data. Before running this job, you need to create the TEST.TEST_TABLE table in your Oracle database. And if you have not yet installed and deployed Sea [...]
+
+```
+# Defining the runtime environment
+env {
+  # You can set flink configuration here
+  execution.parallelism = 1
+  job.mode = "BATCH"
+}
+
+source {
+  FakeSource {
+    parallelism = 1
+    result_table_name = "fake"
+    row.num = 16
+    schema = {
+      fields {
+        name = "string"
+        age = "int"
+      }
+    }
+  }
+  # If you would like to get more information about how to configure seatunnel and see full list of source plugins,
+  # please go to https://seatunnel.apache.org/docs/category/source-v2
+}
+
+transform {
+  # If you would like to get more information about how to configure seatunnel and see full list of transform plugins,
+  # please go to https://seatunnel.apache.org/docs/category/transform-v2
+}
+
+sink {
+    jdbc {
+        url = "jdbc:oracle:thin:@datasource01:1523:xe"
+        driver = "oracle.jdbc.OracleDriver"
+        user = root
+        password = 123456
+        query = "INSERT INTO TEST.TEST_TABLE(NAME,AGE) VALUES(?,?)"
+     }
+  # If you would like to get more information about how to configure seatunnel and see full list of sink plugins,
+  # please go to https://seatunnel.apache.org/docs/category/sink-v2
+}
+```
+
+### Generate Sink SQL
+
+> This example does not require writing complex SQL statements; you can configure the database name and table name to automatically generate the insert statements for you
+
+```
+sink {
+    Jdbc {
+        url = "jdbc:oracle:thin:@datasource01:1523:xe"
+        driver = "oracle.jdbc.OracleDriver"
+        user = root
+        password = 123456
+        
+        generate_sink_sql = true
+        database = XE
+        table = "TEST.TEST_TABLE"
+    }
+}
+```
+
+### Exactly-once:
+
+> For scenarios that require accurate writes, we guarantee exactly-once semantics
+
+```
+sink {
+    jdbc {
+        url = "jdbc:oracle:thin:@datasource01:1523:xe"
+        driver = "oracle.jdbc.OracleDriver"
+    
+        max_retries = 0
+        user = root
+        password = 123456
+        query = "INSERT INTO TEST.TEST_TABLE(NAME,AGE) VALUES(?,?)"
+    
+        is_exactly_once = "true"
+    
+        xa_data_source_class_name = "oracle.jdbc.xa.client.OracleXADataSource"
+    }
+}
+```
+
+### CDC(Change Data Capture) Event
+
+> CDC change data is also supported. In this case, you need to configure database, table and primary_keys.
+
+```
+sink {
+    jdbc {
+        url = "jdbc:oracle:thin:@datasource01:1523:xe"
+        driver = "oracle.jdbc.OracleDriver"
+        user = root
+        password = 123456
+        
+        generate_sink_sql = true
+        # You need to configure both database and table
+        database = XE
+        table = "TEST.TEST_TABLE"
+        primary_keys = ["ID"]
+    }
+}
+```
+
diff --git a/docs/en/connector-v2/source/Oracle.md b/docs/en/connector-v2/source/Oracle.md
new file mode 100644
index 0000000000..0bf39ee398
--- /dev/null
+++ b/docs/en/connector-v2/source/Oracle.md
@@ -0,0 +1,154 @@
+# Oracle
+
+> JDBC Oracle Source Connector
+
+## Support Those Engines
+
+> Spark<br/>
+> Flink<br/>
+> SeaTunnel Zeta<br/>
+
+## Key Features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [ ] [stream](../../concept/connector-v2-features.md)
+- [x] [exactly-once](../../concept/connector-v2-features.md)
+- [x] [column projection](../../concept/connector-v2-features.md)
+- [x] [parallelism](../../concept/connector-v2-features.md)
+- [x] [support user-defined split](../../concept/connector-v2-features.md)
+
+> supports query SQL and can achieve projection effect.
+
+## Description
+
+Read external data source data through JDBC.
+
+## Supported DataSource Info
+
+| Datasource | Supported versions | Driver | Url | Maven |
+|------------|--------------------|--------|-----|-------|
+| Oracle | Different dependency version has different driver class. | oracle.jdbc.OracleDriver | jdbc:oracle:thin:@datasource01:1523:xe | https://mvnrepository.com/artifact/com.oracle.database.jdbc/ojdbc8 |
+
+## Database Dependency
+
+> Please download the driver jar corresponding to the 'Maven' link and copy it to the '$SEATUNNEL_HOME/plugins/jdbc/lib/' working directory<br/>
+> For example, for the Oracle datasource: cp ojdbc8-xxxxxx.jar $SEATUNNEL_HOME/lib/<br/>
+> To support the i18n character set, copy the orai18n.jar to the $SEATUNNEL_HOME/lib/ directory.
+
+## Data Type Mapping
+
+| Oracle Data type | SeaTunnel Data type |
+|------------------|---------------------|
+| INTEGER | INT |
+| FLOAT | DECIMAL(38, 18) |
+| NUMBER(precision <= 9, scale == 0) | INT |
+| NUMBER(9 < precision <= 18, scale == 0) | BIGINT |
+| NUMBER(18 < precision, scale == 0) | DECIMAL(38, 0) |
+| NUMBER(scale != 0) | DECIMAL(38, 18) |
+| BINARY_DOUBLE | DOUBLE |
+| BINARY_FLOAT<br/>REAL | FLOAT |
+| CHAR<br/>NCHAR<br/>NVARCHAR2<br/>VARCHAR2<br/>LONG<br/>ROWID<br/>NCLOB<br/>CLOB<br/> | STRING |
+| DATE | DATE |
+| TIMESTAMP<br/>TIMESTAMP WITH LOCAL TIME ZONE | TIMESTAMP |
+| BLOB<br/>RAW<br/>LONG RAW<br/>BFILE | BYTES |
+
+## Source Options
+
+| Name | Type | Required | Default | Description |
+|------|------|----------|---------|-------------|
+| url | String | Yes | - | The URL of the JDBC connection. Example: jdbc:oracle:thin:@datasource01:1523:xe |
+| driver | String | Yes | - | The jdbc class name used to connect to the remote data source,<br/> if you use Oracle the value is `oracle.jdbc.OracleDriver`. |
+| user | String | No | - | Connection instance user name |
+| password | String | No | - | Connection instance password |
+| query | String | Yes | - | Query statement |
+| connection_check_timeout_sec | Int | No | 30 | The time in seconds to wait for the database operation used to validate the connection to complete |
+| partition_column | String | No | - | The column name used to partition for parallelism; only numeric-type primary key columns are supported, and only one column can be configured. |
+| partition_lower_bound | Long | No | - | The partition_column min value for the scan; if not set, SeaTunnel will query the database to get the min value. |
+| partition_upper_bound | Long | No | - | The partition_column max value for the scan; if not set, SeaTunnel will query the database to get the max value. |
+| partition_num | Int | No | job parallelism | The number of partitions; only positive integers are supported. The default value is the job parallelism |
+| fetch_size | Int | No | 0 | For queries that return a large number of objects, you can configure<br/> the row fetch size used in the query to improve performance by<br/> reducing the number of database hits required to satisfy the selection criteria.<br/> Zero means use the JDBC default value. |
+| common-options |  | No | - | Source plugin common parameters, please refer to [Source Common Options](common-options.md) for details |
+
+### Tips
+
+> If partition_column is not set, it will run with single concurrency; if partition_column is set, it will be executed in parallel according to the concurrency of tasks.
+
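As a sketch of the idea behind the tip above (this is not SeaTunnel's actual split logic, which may differ in boundary handling), dividing the partition range among parallel readers looks roughly like:

```python
# Editorial illustration: split [partition_lower_bound, partition_upper_bound]
# on partition_column into partition_num shards for parallel reads.
def split_partitions(lower: int, upper: int, num: int) -> list[tuple[int, int]]:
    """Return `num` inclusive (start, end) ranges covering [lower, upper]."""
    total = upper - lower + 1
    size, remainder = divmod(total, num)
    ranges, start = [], lower
    for i in range(num):
        # Early shards absorb the remainder so all rows are covered exactly once
        end = start + size - 1 + (1 if i < remainder else 0)
        ranges.append((start, end))
        start = end + 1
    return ranges

# e.g. ID from 1 to 500 split across 10 parallel readers
shards = split_partitions(1, 500, 10)
print(shards[0], shards[-1])  # (1, 50) (451, 500)
```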
+## Task Example
+
+### Simple:
+
+> This example queries the TEST_TABLE table in your database with a single parallelism and queries all of its fields. You can also specify which fields to query for final output to the console.
+
+```
+# Defining the runtime environment
+env {
+  # You can set flink configuration here
+  execution.parallelism = 2
+  job.mode = "BATCH"
+}
+source{
+    Jdbc {
+        url = "jdbc:oracle:thin:@datasource01:1523:xe"
+        driver = "oracle.jdbc.OracleDriver"
+        user = "root"
+        password = "123456"
+        query = "SELECT * FROM TEST_TABLE"
+    }
+}
+
+transform {
+    # If you would like to get more information about how to configure seatunnel and see full list of transform plugins,
+    # please go to https://seatunnel.apache.org/docs/transform-v2/sql
+}
+
+sink {
+    Console {}
+}
+```
+
+### Parallel:
+
+> Read your query table in parallel with the shard field and shard data you configured. You can do this if you want to read the whole table
+
+```
+source {
+    Jdbc {
+        url = "jdbc:oracle:thin:@datasource01:1523:xe"
+        driver = "oracle.jdbc.OracleDriver"
+        connection_check_timeout_sec = 100
+        user = "root"
+        password = "123456"
+        # Define query logic as required
+        query = "SELECT * FROM TEST_TABLE"
+        # Parallel sharding reads fields
+        partition_column = "ID"
+        # Number of fragments
+        partition_num = 10
+    }
+}
+```
+
+### Parallel Boundary:
+
+> It is more efficient to read your data source within the upper and lower boundaries you configured for the query
+
+```
+source {
+    Jdbc {
+        url = "jdbc:oracle:thin:@datasource01:1523:xe"
+        driver = "oracle.jdbc.OracleDriver"
+        connection_check_timeout_sec = 100
+        user = "root"
+        password = "123456"
+        # Define query logic as required
+        query = "SELECT * FROM TEST_TABLE"
+        partition_column = "ID"
+        # Read start boundary
+        partition_lower_bound = 1
+        # Read end boundary
+        partition_upper_bound = 500
+        partition_num = 10
+    }
+}
+```
+
