advancedxy commented on code in PR #211:
URL: https://github.com/apache/arrow-datafusion-comet/pull/211#discussion_r1535163844


##########
common/src/main/scala/org/apache/spark/sql/comet/execution/shuffle/ArrowReaderIterator.scala:
##########
@@ -36,6 +36,13 @@ class ArrowReaderIterator(channel: ReadableByteChannel) extends Iterator[Columna
       return true
     }
 
+    // Release the previous batch.
+    // If it is not released, the Arrow library will complain about a memory
+    // leak when the reader is closed.
+    if (currentBatch != null) {
+      currentBatch.close()
+    }
+

Review Comment:
   > Because ArrowStreamReader loads data into same vectors of root internally. After loading next batch, close will release the just loaded batch instead of previous batch.
   
   This sounds like a data corruption problem: if the just-loaded batch is closed/released, wouldn't the just-loaded ColumnarBatch be corrupted? Yet the CI previously passed without any issue.
   
   When working on #206, I also found that Arrow Java's memory API can be inconvenient to use: it requires extra care to allocate and release ArrowBuf correctly.
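   To make the leak mechanics concrete, here is a minimal, hypothetical Java sketch of the pitfall. It does NOT use the real Arrow Java API; `Buffer` and `SharedRootReader` are invented stand-ins that only model the relevant behavior: like `ArrowStreamReader` with its `VectorSchemaRoot`, the reader reuses one slot for every batch, so its `close()` can release only the most recently loaded batch. The caller must therefore release each previous batch before loading the next, which is exactly what the diff above adds.

   ```java
   import java.util.ArrayList;
   import java.util.List;

   // Hypothetical, simplified model of a ref-counted batch buffer.
   class Buffer {
       private boolean released = false;
       void release() { released = true; }
       boolean leaked() { return !released; }
   }

   // Mimics ArrowStreamReader: every loadNextBatch() reuses the same root,
   // so close() can only release whatever batch is currently loaded.
   class SharedRootReader {
       final List<Buffer> allocated = new ArrayList<>();
       private Buffer loaded;

       Buffer loadNextBatch() {
           loaded = new Buffer();          // previous batch is NOT released here
           allocated.add(loaded);
           return loaded;
       }

       void close() {                      // releases only the latest batch
           if (loaded != null) loaded.release();
       }

       long leakCount() {
           return allocated.stream().filter(Buffer::leaked).count();
       }
   }

   public class Demo {
       public static void main(String[] args) {
           // Wrong: rely on reader.close() alone -- earlier batches leak.
           SharedRootReader r1 = new SharedRootReader();
           for (int i = 0; i < 3; i++) r1.loadNextBatch();
           r1.close();
           System.out.println("without per-batch close: " + r1.leakCount() + " leaked");
           // prints: without per-batch close: 2 leaked

           // Right: release the previous batch before loading the next,
           // mirroring the fix in ArrowReaderIterator.hasNext.
           SharedRootReader r2 = new SharedRootReader();
           Buffer current = null;
           for (int i = 0; i < 3; i++) {
               if (current != null) current.release();
               current = r2.loadNextBatch();
           }
           r2.close();
           System.out.println("with per-batch close: " + r2.leakCount() + " leaked");
           // prints: with per-batch close: 0 leaked
       }
   }
   ```

   The sketch also shows why closing the old handle *after* `loadNextBatch()` would be wrong in the real API: since the slot is shared, a late close would release the freshly loaded data instead of the previous batch.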



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
