viirya commented on code in PR #211:
URL: https://github.com/apache/arrow-datafusion-comet/pull/211#discussion_r1535675511


##########
common/src/main/scala/org/apache/spark/sql/comet/execution/shuffle/ArrowReaderIterator.scala:
##########
@@ -36,6 +36,13 @@ class ArrowReaderIterator(channel: ReadableByteChannel) extends Iterator[Columna
       return true
     }
 
+    // Release the previous batch.
+    // If it is not released, when closing the reader, arrow library will complain about
+    // memory leak.
+    if (currentBatch != null) {
+      currentBatch.close()
+    }
+

Review Comment:
   > This sounds like a data corruption problem. If the just loaded batch is closed/released, the just loaded ColumnarBatch would be corrupted? But it seems like that the CI passes without any issue previously.
   > 
   > When working on #206, I also found out it might be inconvenient to use Arrow Java's memory API. It requires extra caution to allocate and release ArrowBuf correctly.
   
   Due to https://github.com/apache/arrow-datafusion-comet/pull/211#discussion_r1535661988, this issue was not exposed before.
   
   I feel that the Arrow Java API is hard to use and somewhat counter-intuitive, especially compared with arrow-rs.
   



-- 
This is an automated message from the Apache Git Service.