[ https://issues.apache.org/jira/browse/HBASE-5387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13206012#comment-13206012 ]

Mikhail Bautin commented on HBASE-5387:
---------------------------------------

@Ted: I think this addresses the root cause of the TestHFileBlock and 
TestForceCacheImportantBlocks failures. As you suspected, Hadoop QA was 
pointing to a real bug in HBase. However, I think we have had this issue for a 
while (even in HFile v1); it was only exposed when I increased the volume of 
IO within a single unit test. I will add a ulimit setting to our internal 
test runs so that we catch memory leaks like this in the future.
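
To make the failure mode concrete: the snippet below is a minimal, 
hypothetical reproduction, not HBase code (the class name and loop are 
illustrative). Each iteration allocates a GZIPOutputStream and, with it, a 
native zlib context; if the stream is never closed, that context is released 
only when the finalizer eventually runs, so native memory grows with the 
volume of IO:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.util.zip.GZIPOutputStream;

    // Hypothetical repro, not HBase code: each new GZIPOutputStream
    // allocates a native zlib context. Without close(), the context is
    // freed only when the finalizer eventually runs.
    public class GzipLeakDemo {
      public static void main(String[] args) throws IOException {
        byte[] block = new byte[1024];
        for (int i = 0; i < 100000; i++) {
          GZIPOutputStream gz =
              new GZIPOutputStream(new ByteArrayOutputStream());
          gz.write(block);
          gz.finish();  // flush the gzip stream, but deliberately skip
                        // close(), mimicking the leaky code path
        }
      }
    }

Run under a bounded virtual-memory limit (e.g. ulimit -v, as suggested above 
for internal test runs), this fails fast instead of flaking much later.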

> Reuse compression streams in HFileBlock.Writer
> ----------------------------------------------
>
>                 Key: HBASE-5387
>                 URL: https://issues.apache.org/jira/browse/HBASE-5387
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Mikhail Bautin
>            Assignee: Mikhail Bautin
>         Attachments: Fix-deflater-leak-2012-02-10_18_48_45.patch
>
>
> We need to reuse compression streams in HFileBlock.Writer instead of 
> allocating them every time. The motivation is that when using Java's built-in 
> implementation of Gzip, we allocate a new GZIPOutputStream object and an 
> associated native data structure every time we create a compression stream. 
> The native data structure is only deallocated in the finalizer. This is one 
> suspected cause of recent TestHFileBlock failures on Hadoop QA: 
> https://builds.apache.org/job/HBase-TRUNK/2658/testReport/org.apache.hadoop.hbase.io.hfile/TestHFileBlock/testPreviousOffset_1_/.
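
For illustration, a minimal sketch of the reuse pattern, using the JDK's 
java.util.zip classes directly. The class and method names are illustrative; 
the attached patch operates on HBase's compression codec streams, so treat 
this only as a sketch of the reuse idea:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.util.zip.Deflater;
    import java.util.zip.DeflaterOutputStream;

    // Illustrative sketch, not the attached patch: one Deflater (one
    // native zlib context) is reused across all blocks written by this
    // writer, instead of a new GZIPOutputStream per block.
    public class ReusableBlockCompressor {
      private final Deflater deflater = new Deflater();

      public byte[] compressBlock(byte[] uncompressed) throws IOException {
        deflater.reset();  // recycle the shared native context
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        DeflaterOutputStream dos = new DeflaterOutputStream(baos, deflater);
        dos.write(uncompressed);
        dos.finish();  // complete the stream without end()-ing the Deflater
        return baos.toByteArray();
      }

      public void close() {
        deflater.end();  // release the native memory deterministically
      }
    }

Note that a raw Deflater emits zlib-format output rather than the gzip 
container, and that close() on a DeflaterOutputStream constructed with an 
explicit Deflater does not end() it, which is what makes the reuse safe.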

