[ https://issues.apache.org/jira/browse/FLINK-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tzu-Li (Gordon) Tai closed FLINK-7221.
--------------------------------------
       Resolution: Fixed
    Fix Version/s:     (was: 1.3.3)
                   1.3.4

> JDBCOutputFormat swallows errors on last batch
> ----------------------------------------------
>
>                 Key: FLINK-7221
>                 URL: https://issues.apache.org/jira/browse/FLINK-7221
>             Project: Flink
>          Issue Type: Bug
>          Components: Batch Connectors and Input/Output Formats
>    Affects Versions: 1.3.1
>         Environment: Java 1.8.0_131, PostgreSQL driver 42.1.3
>            Reporter: Ken Geis
>            Assignee: Fabian Hueske
>            Priority: Major
>             Fix For: 1.3.4, 1.4.0
>
>
> I have a data set of ~17000 rows that I was trying to write to a PostgreSQL 
> table on which I did not (yet) have permission. No data was loaded, yet Flink 
> did not report any problem writing the data set. The only indication of the 
> failure was in the PostgreSQL log.
> With the default parallelism (8) and the default batch interval (5000), each 
> subtask's batch held only ~2000 rows (~17000 rows / 8 subtasks), which never 
> reached the 5000-row threshold, so the batch was never executed in 
> {{JDBCOutputFormat.writeRecord(..)}}. {{JDBCOutputFormat.close()}} makes a 
> final call to {{upload.executeBatch()}} to flush the remaining rows, but if 
> that call fails, the exception is logged at INFO level and not rethrown, so 
> the job appears to succeed.
> If I decrease the batch interval to 100 or 1000, then the error is properly 
> reported.
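> A minimal sketch of the swallowed-exception pattern described above (the 
> class, field, and log statement below are illustrative assumptions, not the 
> actual Flink source):
> {code:java}
> import java.io.IOException;
> import java.sql.PreparedStatement;
> import java.sql.SQLException;
>
> // Hypothetical reduction of the reported bug: the final executeBatch()
> // in close() can fail, but the caller never sees the exception.
> class SwallowingJdbcOutputFormat {
>
>     private PreparedStatement upload;
>
>     public void close() throws IOException {
>         try {
>             if (upload != null) {
>                 // Flush the last partial batch (< batchInterval rows).
>                 upload.executeBatch();
>                 upload.close();
>             }
>         } catch (SQLException e) {
>             // Bug: the failure is only logged at INFO and swallowed,
>             // so the job finishes as if all rows were written.
>             System.out.println("INFO: close failed: " + e.getMessage());
>             // A fix would propagate the error instead, e.g.:
>             // throw new IOException("Error closing JDBCOutputFormat.", e);
>         }
>     }
> }
> {code}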



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
