[ 
https://issues.apache.org/jira/browse/FLUME-2416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14045542#comment-14045542
 ] 

Hudson commented on FLUME-2416:
-------------------------------

SUCCESS: Integrated in flume-trunk #643 (See 
[https://builds.apache.org/job/flume-trunk/643/])
FLUME-2416: Use CodecPool in compressed stream to prevent leak of direct 
buffers (jarcec: 
http://git-wip-us.apache.org/repos/asf/flume/repo?p=flume.git&a=commit&h=9940dcbfefbe1248f65aa83f2f84e352ce022041)
* 
flume-ng-sinks/flume-hdfs-sink/src/main/java/org/apache/flume/sink/hdfs/HDFSCompressedDataStream.java


> Use CodecPool in compressed stream to prevent leak of direct buffers
> --------------------------------------------------------------------
>
>                 Key: FLUME-2416
>                 URL: https://issues.apache.org/jira/browse/FLUME-2416
>             Project: Flume
>          Issue Type: Bug
>            Reporter: Hari Shreedharan
>            Assignee: Hari Shreedharan
>             Fix For: v1.6.0
>
>         Attachments: FLUME-2416.patch
>
>
> Even though they may no longer be referenced, Java only cleans up direct
> buffers on a full GC. If there is enough heap available, a full GC may never
> run and these buffers are leaked. Hadoop keeps creating new compressors
> instead of using the CodecPool, causing a leak - which is a bug in itself
> and is being addressed by HADOOP-10591.
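
For reference, the pattern the fix applies in HDFSCompressedDataStream is,
roughly, to borrow a Compressor from Hadoop's CodecPool when the stream is
opened and return it on close, instead of letting the codec allocate a new
compressor (and its direct buffers) for every stream. The sketch below uses
the real Hadoop CodecPool / CompressionCodec APIs, but the class and method
names (PooledCompressedWriter, open/write/close) are illustrative and not
taken from the actual patch:

    import java.io.IOException;
    import java.io.OutputStream;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.CodecPool;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.CompressionOutputStream;
    import org.apache.hadoop.io.compress.Compressor;
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.util.ReflectionUtils;

    // Illustrative writer that pools compressors instead of creating one
    // per stream (hypothetical class, not part of the Flume patch).
    public class PooledCompressedWriter {

      private final CompressionCodec codec;
      private Compressor compressor;            // borrowed from CodecPool
      private CompressionOutputStream cmpOut;

      public PooledCompressedWriter(Configuration conf) {
        this.codec = ReflectionUtils.newInstance(GzipCodec.class, conf);
      }

      public void open(OutputStream rawOut) throws IOException {
        // Borrow a compressor from the pool rather than letting the codec
        // allocate a fresh one (and its direct buffers) for this stream.
        compressor = CodecPool.getCompressor(codec);
        cmpOut = codec.createOutputStream(rawOut, compressor);
      }

      public void write(byte[] data) throws IOException {
        cmpOut.write(data);
      }

      public void close() throws IOException {
        try {
          cmpOut.finish();
          cmpOut.close();
        } finally {
          // Return the compressor so its direct buffers are reused,
          // instead of waiting for a full GC to reclaim them.
          if (compressor != null) {
            CodecPool.returnCompressor(compressor);
            compressor = null;
          }
        }
      }
    }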



--
This message was sent by Atlassian JIRA
(v6.2#6252)
