[ https://issues.apache.org/jira/browse/FLINK-16497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17126373#comment-17126373 ]

Danny Chen commented on FLINK-16497:
------------------------------------

I think that, as a popular streaming engine, Flink should treat good throughput 
and performance as first-class concerns. Most client tools have a default flush 
strategy (either a buffer size or an interval) [1][2]. We should follow that as well.

I would suggest a default flush size of 100 rows and a default flush interval of 
1s; this performs well in production, and in local tests 1s is also an acceptable 
latency.
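Concretely, the suggested defaults would correspond to the following option values (shown only to illustrate this proposal; they are not the connector's current defaults):

{code}
'connector.write.flush.max-rows' = '100', -- proposed default
'connector.write.flush.interval' = '1s',  -- proposed default
{code}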

[1] https://kafka.apache.org/22/documentation.html#producerconfigs
[2] https://github.com/searchbox-io/Jest
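The size-or-interval flush strategy suggested above can be sketched as follows. This is a minimal, self-contained illustration, not Flink's actual JDBC sink code; the class name {{BufferedJdbcSink}} and its API are hypothetical:

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

// Hypothetical sketch (not Flink's JdbcOutputFormat): buffer rows and flush
// when either the buffered row count reaches flushMaxRows or the flush
// interval elapses, mirroring the proposed defaults (100 rows / 1s).
public class BufferedJdbcSink<T> implements AutoCloseable {
    private final int flushMaxRows;              // e.g. 100 (proposed default)
    private final List<T> buffer = new ArrayList<>();
    private final Consumer<List<T>> flushAction; // e.g. a JDBC batch execute
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public BufferedJdbcSink(int flushMaxRows, long flushIntervalMillis,
                            Consumer<List<T>> flushAction) {
        this.flushMaxRows = flushMaxRows;
        this.flushAction = flushAction;
        // Interval-based flush: even a slow trickle of rows reaches the
        // database within one interval (e.g. 1s), avoiding the surprise of
        // no output when the row threshold is never hit.
        scheduler.scheduleAtFixedRate(this::flush, flushIntervalMillis,
                flushIntervalMillis, TimeUnit.MILLISECONDS);
    }

    public synchronized void write(T row) {
        buffer.add(row);
        if (buffer.size() >= flushMaxRows) {
            flush(); // size-based flush
        }
    }

    public synchronized void flush() {
        if (!buffer.isEmpty()) {
            flushAction.accept(new ArrayList<>(buffer));
            buffer.clear();
        }
    }

    @Override
    public void close() {
        scheduler.shutdown();
        flush(); // drain remaining rows on close
    }
}
{code}

Either trigger alone is insufficient: size-only flushing stalls on low-traffic streams, and interval-only flushing caps throughput on high-traffic ones, which is why both defaults together make the sink work out of the box.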

> Improve default flush strategy for JDBC sink to make it work out-of-box
> -----------------------------------------------------------------------
>
>                 Key: FLINK-16497
>                 URL: https://issues.apache.org/jira/browse/FLINK-16497
>             Project: Flink
>          Issue Type: Improvement
>          Components: Connectors / JDBC, Table SQL / Ecosystem
>            Reporter: Jark Wu
>            Priority: Critical
>             Fix For: 1.11.0
>
>
> Currently, JDBC sink provides 2 flush options:
> {code}
> 'connector.write.flush.max-rows' = '5000', -- default is 5000
> 'connector.write.flush.interval' = '2s', -- no default value
> {code}
> That means that if the flush interval is not set, the buffered output rows may 
> not be flushed to the database for a long time. That is surprising behavior, 
> because no results are output by default. 
> So I propose a default flush interval of '1s' for the JDBC sink, or a default 
> flush size of 1 row. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
