[
https://issues.apache.org/jira/browse/FLINK-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Fabian Hueske closed FLINK-7221.
--------------------------------
    Resolution: Fixed
Fix Version/s: 1.3.3
               1.4.0
Fixed for 1.3.3 with cd4c2b59026d252b73027e639c9f023c9459dd5a
Fixed for 1.4.0 with 6ea9dbb00d076754d13e05288aa13d7e946d6567
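The gist of the change (a simplified sketch, not the literal diff of the commits above): a failure of the final {{executeBatch()}} in {{close()}} is now rethrown so the job fails, instead of being logged and swallowed. {{upload}} is the {{PreparedStatement}} holding the batch; {{batchCount}} and {{LOG}} are the format's fields.
{code:java}
@Override
public void close() {
    if (upload != null) {
        try {
            if (batchCount > 0) {
                upload.executeBatch(); // flush the remaining partial batch
            }
        } catch (SQLException e) {
            // Rethrow instead of logging, so the failure surfaces to the job.
            throw new RuntimeException("Execution of JDBC statement failed.", e);
        } finally {
            try {
                upload.close();
            } catch (SQLException e) {
                LOG.info("JDBC statement could not be closed: " + e.getMessage());
            }
        }
    }
}
{code}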
> JDBCOutputFormat swallows errors on last batch
> ----------------------------------------------
>
> Key: FLINK-7221
> URL: https://issues.apache.org/jira/browse/FLINK-7221
> Project: Flink
> Issue Type: Bug
> Components: Batch Connectors and Input/Output Formats
> Affects Versions: 1.3.1
> Environment: Java 1.8.0_131, PostgreSQL driver 42.1.3
> Reporter: Ken Geis
> Assignee: Fabian Hueske
> Fix For: 1.4.0, 1.3.3
>
>
> I have a data set with ~17000 rows that I was trying to write to a PostgreSQL
> table that I did not (yet) have permission on. No data was loaded, and Flink
> did not report any problem outputting the data set. The only indication I
> found of my problem was in the PostgreSQL log.
> With the default parallelism (8) and the default batch interval (5000), each
> parallel subtask buffered only ~2000 rows (~17000 / 8), so the batch
> threshold was never reached and the batch was never executed in
> {{JDBCOutputFormat.writeRecord(..)}}. {{JDBCOutputFormat.close()}} makes a
> final call to {{upload.executeBatch()}} to flush the remaining rows, but if
> that call fails, the error is logged at INFO level and not rethrown, so the
> job completes as if nothing went wrong. A sketch of the pattern follows below.
> If I decrease the batch interval to 100 or 1000, the batches are executed
> from {{writeRecord(..)}} and the error is properly reported.
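> A simplified sketch of the problematic pattern (not the exact Flink source;
> {{upload}} is the {{PreparedStatement}}, {{batchCount}}/{{batchInterval}} the
> format's fields):
> {code:java}
> @Override
> public void writeRecord(Row row) throws IOException {
>     try {
>         // set the row's fields on the PreparedStatement, then:
>         upload.addBatch();
>         batchCount++;
>         if (batchCount >= batchInterval) {
>             upload.executeBatch(); // a failure here is rethrown and reported
>             batchCount = 0;
>         }
>     } catch (SQLException e) {
>         throw new IOException("Writing record to JDBC failed.", e);
>     }
> }
>
> @Override
> public void close() {
>     try {
>         if (upload != null) {
>             upload.executeBatch(); // flushes the final, partial batch
>         }
>     } catch (SQLException e) {
>         // The failure of the last batch is only logged at INFO level,
>         // so the job still finishes "successfully" with no rows written.
>         LOG.info("JDBC statement could not be executed: " + e.getMessage());
>     }
> }
> {code}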
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)