doki23 commented on code in PR #2951:
URL: https://github.com/apache/parquet-java/pull/2951#discussion_r1677098229


##########
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileWriter.java:
##########
@@ -1658,11 +1659,15 @@ public void end(Map<String, String> extraMetaData) throws IOException {
 
   @Override
   public void close() throws IOException {
+    if (closed) {
+      return;
+    }
     try (PositionOutputStream temp = out) {
       temp.flush();
       if (crcAllocator != null) {
         crcAllocator.close();
       }
+      closed = true;

Review Comment:
   > it's rare for people to actually retry failed close operation
   
   I agree with you, it's rare. But if someone does retry, it will not work as 
expected -- the file still is not closed -- which is not user friendly. And if 
people never retry, it doesn't matter where we set the flag, right? It also 
leads to another problem: the outer finally block will throw an exception that 
suppresses the original exception.
   
   edit
   --------------
   I see that `InternalParquetRecordWriter` also sets `closed` to `true` in 
the finally block... So it's fine to follow the same pattern.
   ```java
     public void close() throws IOException, InterruptedException {
       if (!closed) {
         try {
           ......
         } finally {
         AutoCloseables.uncheckedClose(columnStore, pageStore, bloomFilterWriteStore, parquetFileWriter);
           closed = true;
         }
       }
     }
   ```
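
   To illustrate the point being made (this is a hypothetical minimal sketch, 
not the actual `ParquetFileWriter` or `InternalParquetRecordWriter` code): 
setting `closed` in a finally block makes `close()` idempotent even when the 
first attempt fails, so a caller's outer try-with-resources or finally block 
does not re-run the close logic and throw a second exception that suppresses 
the original one.
   ```java
   import java.io.Closeable;
   import java.io.IOException;

   // Hypothetical example class, for illustration only.
   class IdempotentWriter implements Closeable {
     private boolean closed = false;
     int closeAttempts = 0; // counter for demonstration purposes

     @Override
     public void close() throws IOException {
       if (closed) {
         return; // already closed, or a close already failed: no-op
       }
       try {
         closeAttempts++;
         // ... flush buffers and release resources here ...
       } finally {
         // Mark closed even if the body above threw, mirroring the
         // pattern the comment describes in InternalParquetRecordWriter.
         closed = true;
       }
     }
   }
   ```
   With this shape, a second `close()` call (whether a deliberate retry or an 
outer finally block) runs the close body at most once and cannot raise a new 
exception that suppresses the first one.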



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
