[ https://issues.apache.org/jira/browse/FLINK-6281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978358#comment-15978358 ]

ASF GitHub Bot commented on FLINK-6281:
---------------------------------------

Github user fhueske commented on the issue:

    https://github.com/apache/flink/pull/3712
  
    Hi @haohui, I think a JdbcTableSink would be a great feature! 
    
    However, there is a big issue with wrapping the `JdbcOutputFormat`: 
OutputFormats are not integrated with Flink's checkpointing mechanism. The 
`JdbcOutputFormat` buffers rows and writes them out in batches. Records that 
arrived before the last checkpoint but are still sitting in that buffer will 
be lost in case of a failure, because they will not be replayed.
    
    The JdbcTableSink should be integrated with Flink's checkpointing 
mechanism. In a nutshell, it should buffer records and commit them to the 
database when a checkpoint is taken. I think we need to think a bit more about 
a proper design for this feature. @zentol and @aljoscha might have some 
thoughts on this as well, as they are more familiar with the implementation of 
checkpoint-aware sinks.
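
    To make the idea concrete, here is a minimal conceptual sketch in plain 
Java (no Flink dependencies; class and method names are illustrative, not 
Flink's actual sink API) of the buffer-and-commit-on-checkpoint pattern: 
records accumulate in a buffer and are flushed as one batch only when a 
checkpoint is taken, so a replay after failure cannot lose buffered records.

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual sketch of a checkpoint-aware sink. In a real Flink sink,
// invoke() would be driven by the stream and snapshotState() by the
// checkpoint barrier; here both are plain methods for illustration.
public class CheckpointBufferingSink {
    private final List<String> buffer = new ArrayList<>();
    private final List<String> committed = new ArrayList<>();

    // Called for every incoming record; nothing reaches the database yet.
    public void invoke(String record) {
        buffer.add(record);
    }

    // Called when a checkpoint is taken: flush the buffer in one batch.
    // The committed list stands in for a batched JDBC commit.
    public void snapshotState() {
        committed.addAll(buffer);
        buffer.clear();
    }

    public int pendingCount() {
        return buffer.size();
    }

    public List<String> committedRecords() {
        return committed;
    }
}
```

    Records that arrive between checkpoints stay in the buffer; on recovery, 
Flink replays everything after the last completed checkpoint, so nothing 
committed is duplicated and nothing buffered is silently dropped.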
    
    What do you think?


> Create TableSink for JDBC
> -------------------------
>
>                 Key: FLINK-6281
>                 URL: https://issues.apache.org/jira/browse/FLINK-6281
>             Project: Flink
>          Issue Type: Improvement
>            Reporter: Haohui Mai
>            Assignee: Haohui Mai
>
> It would be nice to integrate the table APIs with the JDBC connectors so that 
> the rows in the tables can be directly pushed into JDBC.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
