[
https://issues.apache.org/jira/browse/HADOOP-10591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14064390#comment-14064390
]
Colin Patrick McCabe commented on HADOOP-10591:
-----------------------------------------------
Test failures are unrelated. TestIPC is timing out while resolving a hostname
(looks like a Jenkins problem), and the symlink tests have been failing for
other patches as well.
> Compression codecs must use pooled direct buffers or deallocate direct
> buffers when the stream is closed
> -----------------------------------------------------------------------------------------------------
>
> Key: HADOOP-10591
> URL: https://issues.apache.org/jira/browse/HADOOP-10591
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.2.0
> Reporter: Hari Shreedharan
> Assignee: Colin Patrick McCabe
> Attachments: HADOOP-10591.001.patch, HADOOP-10591.002.patch
>
>
> Currently, direct buffers allocated by compression codecs like Gzip (which
> allocates 2 direct buffers per instance) are not deallocated when the stream
> is closed. In long-running processes that create a huge number of files,
> these direct buffers are left hanging until a full GC, which may or may not
> happen in a reasonable amount of time - especially if the process does not
> use much heap.
> Either these buffers should be pooled, or they should be deallocated when
> the stream is closed.
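The pooling approach described above can be sketched roughly as follows. This is a hypothetical, simplified illustration for this discussion, not the code from the attached patches: buffers returned on stream close are kept in a queue and handed back out on the next allocation, so direct memory is reused instead of waiting on a full GC.

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch of a pool for fixed-size direct buffers.
// A codec stream would call take() when it opens and release() when it closes.
public class DirectBufferPool {
    private final ConcurrentLinkedQueue<ByteBuffer> pool =
        new ConcurrentLinkedQueue<>();
    private final int bufferSize;

    public DirectBufferPool(int bufferSize) {
        this.bufferSize = bufferSize;
    }

    // Reuse a pooled buffer if one is available; otherwise allocate a new
    // direct buffer of the configured size.
    public ByteBuffer take() {
        ByteBuffer buf = pool.poll();
        if (buf == null) {
            buf = ByteBuffer.allocateDirect(bufferSize);
        }
        buf.clear();
        return buf;
    }

    // Return a buffer to the pool on stream close instead of dropping the
    // reference and leaving the direct memory pinned until a full GC.
    public void release(ByteBuffer buf) {
        pool.offer(buf);
    }
}
```

A stream wrapping this pool would call release() from its close() method, which bounds the number of live direct buffers by the number of concurrently open streams rather than the total number of files ever opened.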
--
This message was sent by Atlassian JIRA
(v6.2#6252)