[
https://issues.apache.org/jira/browse/FLINK-19522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jark Wu closed FLINK-19522.
---------------------------
Fix Version/s: 1.12.0
Resolution: Fixed
Implemented in master: 84a602547ffd80cdba99fb7fec5fc7640f3962ee
> Add ability to set auto commit on jdbc driver from Table/SQL API
> ----------------------------------------------------------------
>
> Key: FLINK-19522
> URL: https://issues.apache.org/jira/browse/FLINK-19522
> Project: Flink
> Issue Type: Improvement
> Components: Connectors / JDBC, Table SQL / Ecosystem
> Affects Versions: 1.11.2
> Reporter: Dylan Forciea
> Assignee: Dylan Forciea
> Priority: Major
> Labels: pull-request-available
> Fix For: 1.12.0
>
> Attachments: Screen Shot 2020-10-01 at 5.03.24 PM.png, Screen Shot
> 2020-10-01 at 5.03.31 PM.png
>
>
> When I tried to stream data from Postgres via the JDBC source connector in
> the SQL API, it loaded the entire table into memory before streaming began.
> This is because the Postgres JDBC driver requires the autoCommit flag to be
> set to false for result streaming to take place. FLINK-12198 provided the
> means to do this with the JDBCInputSource, but that option was not exposed
> in the SQL connector. This option should be added.
> To reproduce, create a very large table and try to read it in with the SQL
> API. You will see a large spike in memory usage and no data streaming, and
> then all of the records will arrive at once. I will attach a couple of
> graphs from before and after I patched the code myself to disable
> auto-commit.
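> The fix exposes this as a connector option in the table DDL. A minimal
> sketch of the resulting usage (assuming the {{scan.auto-commit}} option
> name introduced by this change in 1.12; the table name, columns, and
> connection URL are illustrative):
> {code:sql}
> CREATE TABLE large_table (
>   id BIGINT,
>   payload STRING
> ) WITH (
>   'connector' = 'jdbc',
>   'url' = 'jdbc:postgresql://localhost:5432/mydb',
>   'table-name' = 'large_table',
>   -- Postgres streams results only when auto-commit is disabled,
>   -- so the fetch size can take effect instead of buffering the table
>   'scan.auto-commit' = 'false',
>   'scan.fetch-size' = '1000'
> );
> {code}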
--
This message was sent by Atlassian Jira
(v8.3.4#803005)