[
https://issues.apache.org/jira/browse/HADOOP-12007?focusedWorklogId=793120&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-793120
]
ASF GitHub Bot logged work on HADOOP-12007:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 20/Jul/22 09:26
Start Date: 20/Jul/22 09:26
Worklog Time Spent: 10m
Work Description: kevins-29 commented on code in PR #4585:
URL: https://github.com/apache/hadoop/pull/4585#discussion_r925383498
##########
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodecPool.java:
##########
@@ -17,20 +17,22 @@
*/
package org.apache.hadoop.io.compress;
-import static org.junit.Assert.assertEquals;
-
-import java.util.concurrent.Callable;
-import java.util.concurrent.ExecutorService;
-import java.util.concurrent.Executors;
-import java.util.concurrent.LinkedBlockingDeque;
-import java.util.concurrent.TimeUnit;
-
import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.zlib.BuiltInGzipCompressor;
+import org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor;
+import org.apache.hadoop.test.LambdaTestUtils;
import org.junit.Before;
import org.junit.Test;
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.OutputStream;
import java.util.HashSet;
+import java.util.Random;
import java.util.Set;
+import java.util.concurrent.*;
Review Comment:
Apologies, I didn't notice that the imports had been optimised.
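For reference, a hedged sketch of what restoring explicit imports could look like; it simply mirrors the java.util.concurrent imports that the diff above removed, assuming those are the classes the test still uses.
{code:java}
// Expanding the wildcard import back to the specific classes,
// matching the imports present before the IDE optimisation.
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.TimeUnit;
{code}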
Issue Time Tracking
-------------------
Worklog Id: (was: 793120)
Time Spent: 2h (was: 1h 50m)
> GzipCodec native CodecPool leaks memory
> ---------------------------------------
>
> Key: HADOOP-12007
> URL: https://issues.apache.org/jira/browse/HADOOP-12007
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.7.0
> Reporter: Yejun Yang
> Priority: Major
> Labels: pull-request-available
> Time Spent: 2h
> Remaining Estimate: 0h
>
> org/apache/hadoop/io/compress/GzipCodec.java calls
> CompressionCodec.Util.createOutputStreamWithCodecPool to use CodecPool, but
> compressor objects are never actually returned to the pool, which causes a
> memory leak.
> HADOOP-10591 uses CompressionOutputStream.close() to return the Compressor
> object to the pool, but CompressionCodec.Util.createOutputStreamWithCodecPool
> actually returns a CompressorStream, which overrides close().
> As a result, CodecPool.returnCompressor is never called. In my log file I can
> see lots of "Got brand-new compressor [.gz]" but no "Got recycled
> compressor".