[
https://issues.apache.org/jira/browse/HADOOP-12007?focusedWorklogId=792946&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792946
]
ASF GitHub Bot logged work on HADOOP-12007:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 19/Jul/22 22:29
Start Date: 19/Jul/22 22:29
Worklog Time Spent: 10m
Work Description: kevins-29 commented on code in PR #4585:
URL: https://github.com/apache/hadoop/pull/4585#discussion_r925006154
##########
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodecPool.java:
##########
@@ -189,4 +198,54 @@ public void testDecompressorNotReturnSameInstance() {
CodecPool.returnDecompressor(decompressor);
}
}
+
+  @Test(timeout = 10000)
+  public void testDoNotPoolCompressorNotUseableAfterReturn() throws IOException {
+
+    final GzipCodec gzipCodec = new GzipCodec();
+    gzipCodec.setConf(new Configuration());
+
+    // BuiltInGzipCompressor is an explicit example of a Compressor with the @DoNotPool annotation
+    final Compressor compressor = new BuiltInGzipCompressor(new Configuration());
+    CodecPool.returnCompressor(compressor);
+
+    try (CompressionOutputStream outputStream =
+        gzipCodec.createOutputStream(new ByteArrayOutputStream(), compressor)) {
+      outputStream.write(1);
+      fail("Compressor from Codec with @DoNotPool should not be useable after returning to CodecPool");
+    } catch (NullPointerException exception) {
Review Comment:
Unfortunately I couldn't find another way to test that the underlying
Compressor/Decompressor has been closed. There is `finished`, but that is set
by `reset()` and has different semantics.
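For reference, a minimal sketch (not part of the patch) of how this NPE-based
check could be factored into a test helper. It assumes the imports already
present in TestCodecPool (org.junit.Assert.fail, the
org.apache.hadoop.io.compress classes, java.io.ByteArrayOutputStream); the
helper name assertCompressorEndedAfterReturn is hypothetical:

    // Hypothetical helper (illustration only, not in this patch): asserts that a
    // compressor handed back through CodecPool.returnCompressor on a @DoNotPool
    // codec can no longer be used.
    private static void assertCompressorEndedAfterReturn(CompressionCodec codec,
        Compressor compressor) throws IOException {
      try (CompressionOutputStream out =
          codec.createOutputStream(new ByteArrayOutputStream(), compressor)) {
        out.write(1);
        fail("Compressor should not be usable after CodecPool.returnCompressor");
      } catch (NullPointerException expected) {
        // Expected: returning a @DoNotPool compressor calls end(), releasing the
        // underlying resources, so any further write fails.
      }
    }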
##########
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodecPool.java:
##########
@@ -189,4 +198,54 @@ public void testDecompressorNotReturnSameInstance() {
CodecPool.returnDecompressor(decompressor);
}
}
+
+  @Test(timeout = 10000)
+  public void testDoNotPoolCompressorNotUseableAfterReturn() throws IOException {
+
+    final GzipCodec gzipCodec = new GzipCodec();
+    gzipCodec.setConf(new Configuration());
+
+    // BuiltInGzipCompressor is an explicit example of a Compressor with the @DoNotPool annotation
+    final Compressor compressor = new BuiltInGzipCompressor(new Configuration());
+    CodecPool.returnCompressor(compressor);
+
+    try (CompressionOutputStream outputStream =
+        gzipCodec.createOutputStream(new ByteArrayOutputStream(), compressor)) {
+      outputStream.write(1);
Review Comment:
Thank you.
Issue Time Tracking
-------------------
Worklog Id: (was: 792946)
Time Spent: 1h 10m (was: 1h)
> GzipCodec native CodecPool leaks memory
> ---------------------------------------
>
> Key: HADOOP-12007
> URL: https://issues.apache.org/jira/browse/HADOOP-12007
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.7.0
> Reporter: Yejun Yang
> Priority: Major
> Labels: pull-request-available
> Time Spent: 1h 10m
> Remaining Estimate: 0h
>
> org/apache/hadoop/io/compress/GzipCodec.java calls
> CompressionCodec.Util.createOutputStreamWithCodecPool to use CodecPool. But
> compressor objects are actually never returned to the pool, which causes a
> memory leak.
> HADOOP-10591 uses CompressionOutputStream.close() to return the Compressor
> object to the pool. But CompressionCodec.Util.createOutputStreamWithCodecPool
> actually returns a CompressorStream, which overrides close().
> This causes CodecPool.returnCompressor to never be called. In my log file I
> can see lots of "Got brand-new compressor [.gz]" but no "Got recycled
> compressor".
--
This message was sent by Atlassian Jira
(v8.20.10#820010)