shangxinli commented on code in PR #982:
URL: https://github.com/apache/parquet-mr/pull/982#discussion_r950883464


##########
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/CodecFactory.java:
##########
@@ -109,7 +110,17 @@ public BytesInput decompress(BytesInput bytes, int uncompressedSize) throws IOException
           decompressor.reset();
         }
         InputStream is = codec.createInputStream(bytes.toInputStream(), decompressor);
-        decompressed = BytesInput.from(is, uncompressedSize);
+
+        // We need to explicitly close the ZstdDecompressorStream here to release the resources it
+        // holds, avoiding the off-heap memory fragmentation issue described in
+        // https://issues.apache.org/jira/browse/PARQUET-2160. This loads the decompressed bytes
+        // into the heap a little earlier. Since the problem only occurs in the ZSTD codec, the
+        // eager copy is only made for ZSTD streams.
+        if (codec instanceof ZstandardCodec) {
+          decompressed = BytesInput.copy(BytesInput.from(is, uncompressedSize));
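
For context, a minimal sketch of what the complete branch could look like with the eager copy plus an explicit close; the is.close() call and the else branch are inferred from the removed line and the code comment above, not quoted verbatim from the PR:

    InputStream is = codec.createInputStream(bytes.toInputStream(), decompressor);
    if (codec instanceof ZstandardCodec) {
      // Materialize the decompressed bytes on the heap now so the
      // ZstdDecompressorStream and its native buffers can be released
      // immediately (PARQUET-2160).
      decompressed = BytesInput.copy(BytesInput.from(is, uncompressedSize));
      is.close();
    } else {
      // Other codecs keep the lazy, stream-backed BytesInput.
      decompressed = BytesInput.from(is, uncompressedSize);
    }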

Review Comment:
   I understand from the discussion in the Jira that BytesInput.copy() just loads the bytes onto the heap earlier rather than adding extra allocation overall. Still, can we get a benchmark covering heap/GC behavior (heap size, GC time, etc.)? I just want to make sure we don't fix one problem only to introduce another.

   Other than that, treating ZSTD specially is probably OK, since the code comments explaining it are pretty decent.
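
   Something along these lines would do; this is only a sketch of the shape of benchmark I mean (the class name and parameters are made up, and it isolates BytesInput.copy() over in-memory data rather than driving the real ZSTD codec). Running it with JMH's gc profiler (-prof gc) reports allocation rate and GC time alongside the latency numbers:

      import java.io.ByteArrayInputStream;
      import java.util.concurrent.TimeUnit;
      import org.apache.parquet.bytes.BytesInput;
      import org.openjdk.jmh.annotations.*;

      @State(Scope.Benchmark)
      @BenchmarkMode(Mode.AverageTime)
      @OutputTimeUnit(TimeUnit.MICROSECONDS)
      @Fork(1)
      @Warmup(iterations = 3)
      @Measurement(iterations = 5)
      public class BytesInputCopyBenchmark {

        @Param({"65536", "1048576", "8388608"}) // 64 KiB, 1 MiB, 8 MiB pages
        public int size;

        private byte[] data;

        @Setup
        public void setUp() {
          data = new byte[size];
          new java.util.Random(42).nextBytes(data);
        }

        @Benchmark
        public byte[] lazyFrom() throws Exception {
          // Old behavior: BytesInput wraps the stream and reads on demand.
          return BytesInput.from(new ByteArrayInputStream(data), size).toByteArray();
        }

        @Benchmark
        public byte[] eagerCopy() throws Exception {
          // New ZSTD behavior: copy() materializes the bytes immediately.
          return BytesInput.copy(BytesInput.from(new ByteArrayInputStream(data), size))
              .toByteArray();
        }
      }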


