[
https://issues.apache.org/jira/browse/HADOOP-12007?focusedWorklogId=792947&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792947
]
ASF GitHub Bot logged work on HADOOP-12007:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 19/Jul/22 22:29
Start Date: 19/Jul/22 22:29
Worklog Time Spent: 10m
Work Description: kevins-29 commented on code in PR #4585:
URL: https://github.com/apache/hadoop/pull/4585#discussion_r925006321
##########
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodecPool.java:
##########
@@ -189,4 +198,54 @@ public void testDecompressorNotReturnSameInstance() {
      CodecPool.returnDecompressor(decompressor);
    }
  }
+
+  @Test(timeout = 10000)
+  public void testDoNotPoolCompressorNotUseableAfterReturn() throws IOException {
+
+    final GzipCodec gzipCodec = new GzipCodec();
+    gzipCodec.setConf(new Configuration());
+
+    // BuiltInGzipCompressor is an explicit example of a Compressor with the @DoNotPool annotation
+    final Compressor compressor = new BuiltInGzipCompressor(new Configuration());
+    CodecPool.returnCompressor(compressor);
+
+    try (CompressionOutputStream outputStream =
+        gzipCodec.createOutputStream(new ByteArrayOutputStream(), compressor)) {
+      outputStream.write(1);
+      fail("Compressor from Codec with @DoNotPool should not be useable after returning to CodecPool");
+    } catch (NullPointerException exception) {
+      Assert.assertEquals("Deflater has been closed", exception.getMessage());
+    }
+  }
+
+  @Test(timeout = 10000)
+  public void testDoNotPoolDecompressorNotUseableAfterReturn() throws IOException {
+
+    final GzipCodec gzipCodec = new GzipCodec();
+    gzipCodec.setConf(new Configuration());
+
+    final Random random = new Random();
+    final byte[] bytes = new byte[1024];
+    random.nextBytes(bytes);
+
+    ByteArrayOutputStream baos = new ByteArrayOutputStream();
+    try (OutputStream outputStream = gzipCodec.createOutputStream(baos)) {
+      outputStream.write(bytes);
+    }
+
+    final byte[] gzipBytes = baos.toByteArray();
+    final ByteArrayInputStream bais = new ByteArrayInputStream(gzipBytes);
+
+    // BuiltInGzipDecompressor is an explicit example of a Decompressor with the @DoNotPool annotation
+    final Decompressor decompressor = new BuiltInGzipDecompressor();
+    CodecPool.returnDecompressor(decompressor);
+
+    try (CompressionInputStream inputStream =
Review Comment:
Thank you
Issue Time Tracking
-------------------
Worklog Id: (was: 792947)
Time Spent: 1h 20m (was: 1h 10m)
> GzipCodec native CodecPool leaks memory
> ---------------------------------------
>
> Key: HADOOP-12007
> URL: https://issues.apache.org/jira/browse/HADOOP-12007
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.7.0
> Reporter: Yejun Yang
> Priority: Major
> Labels: pull-request-available
> Time Spent: 1h 20m
> Remaining Estimate: 0h
>
> org/apache/hadoop/io/compress/GzipCodec.java calls
> CompressionCodec.Util.createOutputStreamWithCodecPool to use the CodecPool, but
> compressor objects are never actually returned to the pool, which causes a
> memory leak.
> HADOOP-10591 uses CompressionOutputStream.close() to return the Compressor
> object to the pool, but CompressionCodec.Util.createOutputStreamWithCodecPool
> actually returns a CompressorStream, which overrides close().
> As a result, CodecPool.returnCompressor is never called. In my log file I can
> see lots of "Got brand-new compressor [.gz]" entries but no "Got recycled
> compressor" entries.
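
Not part of the patch: a minimal, self-contained sketch of the borrow/return lifecycle described above, using the public CodecPool and GzipCodec APIs (the class name CodecPoolLifecycleSketch is hypothetical, for illustration only). The explicit CodecPool.returnCompressor call in the finally block is the step the report says never runs when the stream is created through Util.createOutputStreamWithCodecPool:

import java.io.ByteArrayOutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CodecPool;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.io.compress.Compressor;
import org.apache.hadoop.io.compress.GzipCodec;

// Hypothetical illustration class, not taken from the patch.
public class CodecPoolLifecycleSketch {
  public static void main(String[] args) throws Exception {
    GzipCodec codec = new GzipCodec();
    codec.setConf(new Configuration());

    // Borrow a compressor from the shared pool (may be null if the codec
    // has no poolable compressor type in this environment).
    Compressor compressor = CodecPool.getCompressor(codec);
    try (CompressionOutputStream out =
        codec.createOutputStream(new ByteArrayOutputStream(), compressor)) {
      out.write(new byte[] { 1, 2, 3 });
      out.finish();
    } finally {
      // Explicitly hand the compressor back to the pool. The report above is
      // that this step never happens when the stream comes from
      // CompressionCodec.Util.createOutputStreamWithCodecPool, because the
      // CompressorStream it returns overrides close().
      if (compressor != null) {
        CodecPool.returnCompressor(compressor);
      }
    }
  }
}

When the return step does run, the "Got recycled compressor" log line mentioned in the report should start appearing alongside "Got brand-new compressor [.gz]".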