ASF GitHub Bot commented on FLINK-6281:

Github user fhueske commented on a diff in the pull request:

    --- Diff: docs/dev/table/sourceSinks.md ---
    @@ -202,7 +202,38 @@ val csvTableSource = CsvTableSource
     Provided TableSinks
    +### JDBCAppendTableSink
    +<code>JDBCAppendTableSink</code> writes rows to a database through a 
JDBC driver. The sink supports append-only data; it does not support 
retractions or upserts from Flink's perspective. However, you can customize 
the query with <code>REPLACE</code> or <code>INSERT OVERWRITE</code> to 
implement upserts inside the database.
    +To use the JDBC sink, add the JDBC connector dependency 
(<code>flink-jdbc</code>) to your project. Then create the sink as follows:
    +<div class="codetabs" markdown="1">
    +<div data-lang="java" markdown="1">
    +{% highlight java %}
    +JDBCAppendTableSink sink = JDBCAppendTableSink.builder()
    +  .setDrivername("org.apache.derby.jdbc.EmbeddedDriver")
    +  .setDBUrl("jdbc:derby:memory:ebookshop")
    +  .setQuery("INSERT INTO books (id) VALUES (?)")
    +  .setFieldTypes(new TypeInformation<?>[] {INT_TYPE_INFO})
    --- End diff ---
    change to `setParameterTypes()` if we rename the method.
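    
    For reference, a complete snippet after that rename could look like the 
sketch below. It also illustrates the <code>REPLACE</code>-based upsert 
mentioned in the prose. This is a sketch only: it assumes the rename to 
`setParameterTypes()` actually happens, and the MySQL driver, URL, and 
`books` table are hypothetical (`REPLACE INTO` is MySQL-specific).
    
    ```java
    import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
    import org.apache.flink.api.common.typeinfo.TypeInformation;
    import org.apache.flink.api.java.io.jdbc.JDBCAppendTableSink;
    
    // Sketch only: assumes setFieldTypes() is renamed to setParameterTypes().
    JDBCAppendTableSink sink = JDBCAppendTableSink.builder()
      .setDrivername("com.mysql.jdbc.Driver")            // hypothetical driver
      .setDBUrl("jdbc:mysql://localhost:3306/ebookshop") // hypothetical database
      // REPLACE INTO overwrites rows with the same primary key, so the
      // database performs the upsert even though the sink is append-only.
      .setQuery("REPLACE INTO books (id, title) VALUES (?, ?)")
      .setParameterTypes(
        BasicTypeInfo.INT_TYPE_INFO,
        BasicTypeInfo.STRING_TYPE_INFO)
      .build();
    ```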

> Create TableSink for JDBC
> -------------------------
>                 Key: FLINK-6281
>                 URL: https://issues.apache.org/jira/browse/FLINK-6281
>             Project: Flink
>          Issue Type: Improvement
>          Components: Table API & SQL
>            Reporter: Haohui Mai
>            Assignee: Haohui Mai
> It would be nice to integrate the table APIs with the JDBC connectors so that 
> the rows in the tables can be directly pushed into JDBC.
