wuchong commented on a change in pull request #9802: [FLINK-13361][documentation] Add documentation for JDBC connector for Table API & SQL
URL: https://github.com/apache/flink/pull/9802#discussion_r332878876
 
 

 ##########
 File path: docs/dev/table/connect.md
 ##########
 @@ -1075,6 +1075,88 @@ CREATE TABLE MyUserTable (
 
 {% top %}
 
+### JDBC Connector
+
+<span class="label label-primary">Source: Batch</span>
+<span class="label label-primary">Sink: Batch</span>
+<span class="label label-primary">Sink: Streaming Append Mode</span>
+<span class="label label-primary">Sink: Streaming Upsert Mode</span>
+<span class="label label-primary">Temporal Join: Sync Mode</span>
+
+The JDBC connector allows for reading data from and writing data into any relational database that provides a JDBC driver.
+
+The connector can operate in [upsert mode](#update-modes) for exchanging 
UPSERT/DELETE messages with the external system using a [key defined by the 
query](./streaming/dynamic_tables.html#table-to-stream-conversion).
+
+For append-only queries, the connector can also operate in [append 
mode](#update-modes) for exchanging only INSERT messages with the external 
system.
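+
+For example, a query with a grouping aggregation produces an update stream whose grouping key serves as the upsert key, whereas a plain projection is append-only. A minimal sketch (the table and column names here are only illustrative):
+
+{% highlight sql %}
+-- append mode: a plain projection emits INSERT messages only
+INSERT INTO jdbc_sink
+SELECT user_id, user_name FROM source_table;
+
+-- upsert mode: the GROUP BY key (user_id) is the key defined by the query
+INSERT INTO jdbc_sink_counts
+SELECT user_id, COUNT(*) FROM source_table GROUP BY user_id;
+{% endhighlight %}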
+
+To use this connector, add the following dependency to your project:
+
+{% highlight xml %}
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-connector-jdbc{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version }}</version>
+</dependency>
+{% endhighlight %}
+
+A JDBC driver library must also be specified. For example, to access a MySQL database, add the following dependency to your project:
+
+{% highlight xml %}
+<dependency>
+    <groupId>mysql</groupId>
+    <artifactId>mysql-connector-java</artifactId>
+    <version>8.0.17</version>
+</dependency>
+{% endhighlight %}
+
+**Library support:** Currently, only MySQL, Derby, and PostgreSQL are supported.
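+
+For reference, the driver classes shipped with these databases are listed below. Check the driver's documentation for the exact class name of your version; the driver jar must also be available on the classpath at runtime.
+
+| Database   | Driver class                            |
+| :--------- | :-------------------------------------- |
+| MySQL      | `com.mysql.cj.jdbc.Driver`              |
+| Derby      | `org.apache.derby.jdbc.EmbeddedDriver`  |
+| PostgreSQL | `org.postgresql.Driver`                 |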
+
+The connector can be defined as follows:
+
+<div data-lang="DDL" markdown="1">
+{% highlight sql %}
+CREATE TABLE MyUserTable (
+  ...
+) WITH (
+  'connector.type' = 'jdbc', -- required: specify that this table uses the jdbc connector
+  
+  'connector.url' = 'jdbc:derby:memory:upsert', -- required: the JDBC database URL
+  
+  'connector.table' = 'jdbc_table_name',  -- required: the name of the table in the database
+  
+  'connector.driver' = 'driver', -- optional: the class name of the JDBC driver to use; if not set, it is derived automatically from the URL.
+
+  'connector.username' = 'name', -- optional: user name and password for the database connection
+  'connector.password' = 'password',
+  
+  -- scan options, optional, used when reading from the table
+  'connector.read.partition.column' = 'column_name', -- optional, name of the 
column used for partitioning the input.
 
 Review comment:
   I think we can take Spark's documentation as a reference here; the current wording is still confusing.
   
   > These options must all be specified if any of them is specified. In 
addition, numPartitions must be specified. They describe how to partition the 
table when reading in parallel from multiple workers. partitionColumn must be a 
numeric, date, or timestamp column from the table in question. Notice that 
lowerBound and upperBound are just used to decide the partition stride, not for 
filtering the rows in table. So all rows in the table will be partitioned and 
returned. This option applies only to reading.
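   
   To make the semantics concrete, something like the following could be shown (a sketch; I'm assuming the option keys follow the `connector.read.partition.*` pattern used in this PR):
   
   ```sql
   -- all four options must be specified together; the bounds only decide the
   -- stride of each partition and do not filter any rows
   'connector.read.partition.column' = 'id',        -- a numeric, date, or timestamp column
   'connector.read.partition.lower-bound' = '1',    -- smallest value of the first partition
   'connector.read.partition.upper-bound' = '1000', -- largest value of the last partition
   'connector.read.partition.num' = '10'            -- number of partitions read in parallel
   ```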

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
