[ 
https://issues.apache.org/jira/browse/FLINK-24728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng updated FLINK-24728:
--------------------------------
    Affects Version/s:     (was: 1.13.4)
                           (was: 1.14.1)
                           (was: 1.15.0)
                           (was: 1.12.6)
                           (was: 1.11.5)
                       1.11.4
                       1.14.0
                       1.12.5
                       1.13.3

> Batch SQL file sink forgets to close the output stream
> ------------------------------------------------------
>
>                 Key: FLINK-24728
>                 URL: https://issues.apache.org/jira/browse/FLINK-24728
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / Runtime
>    Affects Versions: 1.11.4, 1.14.0, 1.12.5, 1.13.3
>            Reporter: Caizhi Weng
>            Assignee: Caizhi Weng
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 1.15.0, 1.14.1, 1.13.4
>
>
> I tried to write a large Avro file to HDFS and discovered that the file
> size displayed in HDFS is extremely small, while copying that file to the
> local file system yields the correct size. If we create another Flink job
> that reads that Avro file from HDFS, the job finishes without outputting
> any records, because the file size Flink gets from HDFS is that tiny
> displayed size.
> This is because the output format created in
> {{FileSystemTableSink#createBulkWriterOutputFormat}} only finishes the
> {{BulkWriter}}. According to the Javadoc of {{BulkWriter#finish}}, bulk
> writers must not close the output stream; closing it is left to the
> framework.
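>
> The fix, sketched below, is for the framework-side code that owns the
> stream to close it explicitly after finishing the writer. The class and
> constructor here are hypothetical illustrations, not the actual Flink
> source; only {{BulkWriter}} and {{FSDataOutputStream}} are real Flink
> APIs.
> {code:java}
> import java.io.IOException;
>
> import org.apache.flink.api.common.serialization.BulkWriter;
> import org.apache.flink.core.fs.FSDataOutputStream;
>
> // Hypothetical holder playing the role of the bulk-writer output format.
> class BulkWriterOutputFormatSketch<T> {
>
>     private final BulkWriter<T> writer;
>     // Keep a reference to the stream so it can be closed explicitly;
>     // per its Javadoc, BulkWriter#finish must not close it.
>     private final FSDataOutputStream stream;
>
>     BulkWriterOutputFormatSketch(BulkWriter<T> writer,
>                                  FSDataOutputStream stream) {
>         this.writer = writer;
>         this.stream = stream;
>     }
>
>     public void close() throws IOException {
>         // Flushes the writer's internal buffers and writes any trailer,
>         // but leaves the underlying stream open.
>         writer.finish();
>         // The step the original output format was missing: without this,
>         // the last HDFS block is never completed and the file length
>         // HDFS reports stays small.
>         stream.close();
>     }
> }
> {code}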



