Github user zentol commented on a diff in the pull request:

    https://github.com/apache/flink/pull/5563#discussion_r170606282
  
    --- Diff: flink-connectors/flink-connector-filesystem/src/main/java/org/apache/flink/streaming/connectors/fs/AvroKeyValueSinkWriter.java ---
    @@ -158,7 +158,7 @@ public void open(FileSystem fs, Path path) throws IOException {
     
        @Override
        public void close() throws IOException {
    -           super.close(); //the order is important since super.close flushes inside
    +           flush(); // super.close() also does a flush
                if (keyValueWriter != null) {
    --- End diff ---
    
    For this issue that would be enough.
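
    For reference, with that change `close()` would look roughly like this (a sketch only; the `else` branch is an assumption about how the underlying stream gets closed when the Avro key/value writer was never created):
    
    ```
    @Override
    public void close() throws IOException {
        flush(); // super.close() also does a flush
        if (keyValueWriter != null) {
            keyValueWriter.close();
        } else {
            // assumption: close the underlying stream directly if the
            // Avro key/value writer was never created
            super.close();
        }
    }
    ```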
    
    However, we can still leak streams if `open()` throws an exception after `super.open()` has returned, because at that point the stream is already open. With the `BucketingSink` this leaks the stream:
    
    ```
    // openNewPartFile
    bucketState.writer.open(fs, inProgressPath); // if this throws, the stream may already be open
    bucketState.isWriterOpen = true;
    ```
    
    ```
    // closeCurrentPartFile
    if (bucketState.isWriterOpen) { // a partially opened writer is never closed here
        bucketState.writer.close();
        bucketState.isWriterOpen = false;
    }
    ```
    
    So we should modify `open()` to close the stream properly if the setup after `super.open()` fails.
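
    A minimal sketch of how that could look, assuming the base class exposes the open stream via something like `getStream()`; the helper `createKeyValueWriter()` is a hypothetical stand-in for the schema parsing and Avro writer creation that can still throw after the stream is open:
    
    ```
    @Override
    public void open(FileSystem fs, Path path) throws IOException {
        super.open(fs, path); // opens the underlying stream
        try {
            // stand-in for the setup that can still fail at this point
            keyValueWriter = createKeyValueWriter(getStream());
        } catch (IOException | RuntimeException e) {
            // close the stream opened by super.open() so it does not leak
            try {
                super.close();
            } catch (IOException suppressed) {
                e.addSuppressed(suppressed);
            }
            throw e;
        }
    }
    ```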

