[
https://issues.apache.org/jira/browse/FLINK-14442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16955698#comment-16955698
]
Jark Wu commented on FLINK-14442:
---------------------------------
Hi [~javadevmtl], yes, it will flush on whichever of the two triggers fires
first: the batch size or the interval.
This is supported from 1.9.0, see {{connector.write.flush.max-rows}} and
{{connector.write.flush.interval}} in the 1.9 docs:
https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/connect.html#jdbc-connector.
Sorry, I linked to the snapshot-version documentation before.
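For reference, the two flush options go into the connector properties of the sink table. A minimal sketch using SQL DDL (the table name, schema, and JDBC URL below are made up for illustration; only the {{connector.*}} keys come from the 1.9 docs linked above):

{code:sql}
CREATE TABLE my_jdbc_sink (
  id BIGINT,
  name VARCHAR
) WITH (
  'connector.type' = 'jdbc',
  'connector.url' = 'jdbc:mysql://localhost:3306/mydb',
  'connector.table' = 'users',
  -- flush when 100 rows are buffered ...
  'connector.write.flush.max-rows' = '100',
  -- ... or after 2 seconds, whichever comes first
  'connector.write.flush.interval' = '2s'
);
{code}

With this configuration, 99 buffered rows would still be committed once the 2-second interval elapses.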
> Add time based interval execution to JDBC connectors.
> -----------------------------------------------------
>
> Key: FLINK-14442
> URL: https://issues.apache.org/jira/browse/FLINK-14442
> Project: Flink
> Issue Type: Improvement
> Components: Connectors / JDBC
> Affects Versions: 1.8.0, 1.8.1, 1.8.2, 1.9.0
> Reporter: None none
> Priority: Minor
>
> Hi, currently the JDBC sink/output only supports batch interval execution.
> For data to be streamed/committed to the JDBC database we need to wait for
> the batch interval to be filled up.
> For example, if you set a batch size of 100 but only receive 99 records, then
> no data will be committed to the database.
> The JDBC connector should perhaps also have a time-based interval so that
> data is eventually pushed to the database.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)