[ https://issues.apache.org/jira/browse/HBASE-11042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lars Hofhansl resolved HBASE-11042.
-----------------------------------

      Resolution: Fixed
        Assignee: Lars Hofhansl
    Hadoop Flags: Reviewed

Alright. Committed to 0.94. Thanks [~stack].

> TestForceCacheImportantBlocks OOMs occasionally in 0.94
> -------------------------------------------------------
>
>                 Key: HBASE-11042
>                 URL: https://issues.apache.org/jira/browse/HBASE-11042
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Lars Hofhansl
>            Assignee: Lars Hofhansl
>             Fix For: 0.94.19
>
>         Attachments: 11042-0.94.txt
>
>
> This trace:
> {code}
> Caused by: java.lang.OutOfMemoryError
>       at java.util.zip.Deflater.init(Native Method)
>       at java.util.zip.Deflater.<init>(Deflater.java:169)
>       at java.util.zip.GZIPOutputStream.<init>(GZIPOutputStream.java:91)
>       at java.util.zip.GZIPOutputStream.<init>(GZIPOutputStream.java:110)
>       at org.apache.hadoop.hbase.io.hfile.ReusableStreamGzipCodec$ReusableGzipOutputStream$ResetableGZIPOutputStream.<init>(ReusableStreamGzipCodec.java:79)
>       at org.apache.hadoop.hbase.io.hfile.ReusableStreamGzipCodec$ReusableGzipOutputStream.<init>(ReusableStreamGzipCodec.java:90)
>       at org.apache.hadoop.hbase.io.hfile.ReusableStreamGzipCodec.createOutputStream(ReusableStreamGzipCodec.java:130)
>       at org.apache.hadoop.io.compress.GzipCodec.createOutputStream(GzipCodec.java:101)
>       at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.createPlainCompressionStream(Compression.java:299)
>       at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.createCompressionStream(Compression.java:283)
>       at org.apache.hadoop.hbase.io.hfile.HFileWriterV1.getCompressingStream(HFileWriterV1.java:207)
>       at org.apache.hadoop.hbase.io.hfile.HFileWriterV1.close(HFileWriterV1.java:356)
>       at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.close(StoreFile.java:1330)
>       at org.apache.hadoop.hbase.regionserver.Store.internalFlushCache(Store.java:913)
> {code}
> Note that this is caused specifically by HFileWriterV1 when using 
> compression. It looks like the compression resources are not released.
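>
> For context, here is a minimal sketch of the leak pattern behind a trace 
> like this (class name is made up). Each Deflater holds native zlib memory 
> that is freed promptly only by end(); whether this actually OOMs depends 
> on heap size and GC timing:
> {code}
> import java.util.zip.Deflater;
>
> public class DeflaterLeakDemo {
>   public static void main(String[] args) {
>     // Deflater.init() allocates native (off-heap) zlib buffers. Until end()
>     // is called they are reclaimed only when the finalizer eventually runs,
>     // so allocating many Deflaters without calling end() can exhaust native
>     // memory and surface as an OutOfMemoryError thrown from Deflater.init().
>     for (int i = 0; i < 1000000; i++) {
>       Deflater d = new Deflater();
>       // Missing: d.end();  -- releasing promptly avoids the buildup.
>     }
>   }
> }
> {code}
>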
> Not sure it's worth fixing HFileWriterV1 itself at this point. The test can 
> be fixed either by not using compression (why are we using compression here 
> anyway?) or by not testing HFileV1.
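>
> If we go the no-compression route, a hypothetical sketch of the setup 
> change, assuming the test builds its column family via HColumnDescriptor 
> (class and family names here are illustrative, not the test's actual code):
> {code}
> import org.apache.hadoop.hbase.HColumnDescriptor;
> import org.apache.hadoop.hbase.io.hfile.Compression;
>
> public class TestSetupSketch {
>   static HColumnDescriptor familyWithoutCompression() {
>     // "cf" is illustrative; use whatever family the test actually creates.
>     HColumnDescriptor hcd = new HColumnDescriptor("cf");
>     // NONE instead of GZ, so HFileWriterV1 never allocates a native Deflater.
>     hcd.setCompressionType(Compression.Algorithm.NONE);
>     return hcd;
>   }
> }
> {code}
>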
> [~stack], it seems you know the code in HFileWriterV1. Do you want to have a 
> look? Maybe there is a quick fix there.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
