gong commented on code in PR #392:
URL:
https://github.com/apache/incubator-inlong-website/pull/392#discussion_r895445689
##########
docs/data_node/load_node/greenplum.md:
##########
@@ -1,4 +1,116 @@
---
title: Greenplum
sidebar_position: 9
----
\ No newline at end of file
+---
+
+## Greenplum Load Node
+
+The `Greenplum Load Node` supports writing data to a Greenplum database. This document describes how to set up the Greenplum Load Node to run SQL queries against a Greenplum database.
+
+## Supported Version
+
+| Load Node | Driver | Group Id | Artifact Id | JAR |
+|--------------------------|--------|----------|-------------|-----|
+| [Greenplum](./greenplum.md) | PostgreSQL | org.postgresql | postgresql | [Download](https://jdbc.postgresql.org/download.html) |
+
+## Dependencies
+
+To set up the `Greenplum Load Node`, the following provides dependency information both for projects using a build automation tool (such as Maven or SBT) and for the SQL Client with the Sort Connectors JAR bundle.
+
+### Maven dependency
+
+```xml
+<dependency>
+ <groupId>org.apache.inlong</groupId>
+ <artifactId>sort-connector-jdbc</artifactId>
+ <!-- Choose the version that suits your application -->
+ <version>inlong_version</version>
+</dependency>
+```
+
+## How to create a Greenplum Load Node
+
+### Usage for SQL API
+
+```sql
+
+-- MySQL extract node
+CREATE TABLE `mysql_extract_table`(
+ PRIMARY KEY (`id`) NOT ENFORCED,
+ `id` BIGINT,
+ `name` STRING,
+ `age` INT
+) WITH (
+ 'connector' = 'mysql-cdc-inlong',
+ 'hostname' = 'localhost',
+ 'port' = '3306',
+ 'username' = 'inlong',
+ 'password' = 'inlong',
+ 'database-name' = 'read',
+ 'table-name' = 'user'
+);
+
+-- Greenplum load node
+CREATE TABLE `greenplum_load_table`(
+ PRIMARY KEY (`id`) NOT ENFORCED,
+ `id` BIGINT,
+ `name` STRING,
+ `age` INT
+) WITH (
+ 'connector' = 'jdbc-inlong',
+ 'url' = 'jdbc:postgresql://localhost:5432/write',
+ 'dialect-impl' = 'org.apache.inlong.sort.jdbc.dialect.GreenplumDialect',
+ 'username' = 'inlong',
+ 'password' = 'inlong',
+ 'table-name' = 'public.user'
+);
+
+-- write data into Greenplum
+INSERT INTO greenplum_load_table
+SELECT id, name, age FROM mysql_extract_table;
+
+```
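
Because the load table declares a primary key, the JDBC sink writes in upsert mode: a later row with an existing `id` replaces the earlier version instead of appending a duplicate. A minimal Python sketch of that keyed-upsert behavior (illustrative only, not InLong or Flink code; the `upsert` helper and in-memory `store` are hypothetical stand-ins for the database table):

```python
# Illustrative sketch of upsert semantics keyed on the PRIMARY KEY column.
# This only models how repeated writes with the same key replace the
# earlier row rather than appending a duplicate; it is not InLong code.

def upsert(table: dict, row: dict, key: str = "id") -> None:
    """Insert the row, or replace the existing row with the same key."""
    table[row[key]] = row

store = {}
upsert(store, {"id": 1, "name": "alice", "age": 30})
upsert(store, {"id": 1, "name": "alice", "age": 31})  # same key: replaces
upsert(store, {"id": 2, "name": "bob", "age": 25})

assert len(store) == 2       # two distinct keys, not three rows
assert store[1]["age"] == 31  # latest version of id=1 wins
```

This is also why re-running the `INSERT INTO` job against the same source data does not duplicate rows in the Greenplum table.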
+
+### Usage for InLong Dashboard
+
+TODO: It will be supported in the future.
+
+### Usage for InLong Manager Client
+
+TODO: It will be supported in the future.
+
+## Greenplum Load Node Options
+
+| Option | Required | Default | Type | Description |
+|---------|----------|---------|------|------------|
+| connector | required | (none) | String | Specify which connector to use; here it should be 'jdbc-inlong'. |
+| url | required | (none) | String | The JDBC database URL. |
+| dialect-impl | required | (none) | String | The dialect implementation class; here `org.apache.inlong.sort.jdbc.dialect.GreenplumDialect`. |
+| table-name | required | (none) | String | The name of the JDBC table to connect to. |
+| driver | optional | (none) | String | The class name of the JDBC driver used to connect to this URL. If not set, it is automatically derived from the URL. |
+| username | optional | (none) | String | The JDBC user name. 'username' and 'password' must both be specified if either is specified. |
+| password | optional | (none) | String | The JDBC password. |
+| connection.max-retry-timeout | optional | 60s | Duration | Maximum timeout between retries. The timeout should be in second granularity and should not be smaller than 1 second. |
+| sink.buffer-flush.max-rows | optional | 100 | Integer | The maximum number of buffered records before a flush. Can be set to zero to disable it. |
+| sink.buffer-flush.interval | optional | 1s | Duration | The flush interval; once this much time has elapsed, asynchronous threads flush the buffered data. Can be set to '0' to disable it. Note that 'sink.buffer-flush.max-rows' can be set to '0' while the flush interval is set, allowing for completely asynchronous processing of buffered actions. |
+| sink.max-retries | optional | 3 | Integer | The maximum number of retries if writing records to the database fails. |
+| sink.parallelism | optional | (none) | Integer | Defines the parallelism of the JDBC sink operator. By default, the parallelism is determined by the framework, using the same parallelism as the upstream chained operator. |
+
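
The two buffering options interact: a flush is triggered either when the buffer reaches `sink.buffer-flush.max-rows` rows or when `sink.buffer-flush.interval` elapses, whichever comes first. A hedged Python sketch of that dual trigger (a toy model only, not the actual sink implementation; the `BufferedSink` class and its `tick` method are hypothetical):

```python
import time

class BufferedSink:
    """Toy model of the two flush triggers: max buffered rows or elapsed interval.
    Illustrative only; the real JDBC sink flushes from an asynchronous thread."""

    def __init__(self, max_rows: int = 100, interval_s: float = 1.0):
        self.max_rows = max_rows
        self.interval_s = interval_s
        self.buffer = []
        self.flushed = []  # batches that have been "written" to the database
        self.last_flush = time.monotonic()

    def write(self, row) -> None:
        self.buffer.append(row)
        if len(self.buffer) >= self.max_rows:  # size trigger
            self.flush()

    def tick(self) -> None:
        # In the real sink, a background thread checks this on a timer.
        if self.buffer and time.monotonic() - self.last_flush >= self.interval_s:
            self.flush()  # time trigger

    def flush(self) -> None:
        if self.buffer:
            self.flushed.append(list(self.buffer))  # batch write would go here
            self.buffer.clear()
        self.last_flush = time.monotonic()

sink = BufferedSink(max_rows=3, interval_s=0.05)
for i in range(7):
    sink.write(i)        # rows 0-2 and 3-5 flush when max_rows is reached
time.sleep(0.06)
sink.tick()              # the interval trigger flushes the remaining row
assert sink.flushed == [[0, 1, 2], [3, 4, 5], [6]]
```

Setting `sink.buffer-flush.max-rows` to 0 in this model would leave only the time trigger active, which matches the note in the table above about fully asynchronous processing.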
+## Data Type Mapping
+
+| Greenplum type | Flink SQL type |
+|-----------------|----------------|
+| | TINYINT |
Review Comment:
Yes. Data types are not a one-to-one correspondence.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]