Does anybody know what may have caused the recent OOM failures in 
TestHFileBlock.testConcurrentReading[1]?


This is the exception:


Caused by: java.lang.OutOfMemoryError
    at java.util.zip.Inflater.init(Native Method)
    at java.util.zip.Inflater.<init>(Inflater.java:83)
    at org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor.<init>(BuiltInGzipDecompressor.java:45)
    at org.apache.hadoop.io.compress.GzipCodec.createDecompressor(GzipCodec.java:136)
    at org.apache.hadoop.io.compress.CodecPool.getDecompressor(CodecPool.java:127)
    at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.getDecompressor(Compression.java:290)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.decompress(HFileBlock.java:1397)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1830)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1643)
    at org.apache.hadoop.hbase.io.hfile.TestHFileBlock$BlockReaderThread.call(TestHFileBlock.java:639)
    at org.apache.hadoop.hbase.io.hfile.TestHFileBlock$BlockReaderThread.call(TestHFileBlock.java:603)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138) 


Here's the latest test run with that failure: 
https://builds.apache.org/job/HBase-0.94/635/

It looks like a new Decompressor is being created for every single block. 
Looking at the code, that seems to be by design when BuiltInGzipDecompressor 
is used. It seems somewhat inefficient, though.
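For context on why this shows up as an OutOfMemoryError in Inflater.init: 
each java.util.zip.Inflater allocates native zlib state outside the Java 
heap, which is only reclaimed promptly if end() is called. Here is a small 
standalone sketch (not HBase code) of the deflate/inflate round trip with 
explicit release; if end() were skipped for every block, native memory 
could be exhausted before the GC runs finalizers, which would match the 
stack trace above.

```java
import java.util.Arrays;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class InflaterLifecycle {
    // Round-trips data through Deflater/Inflater, explicitly releasing the
    // native zlib state with end(). Inflater.init (seen in the OOM stack
    // trace) allocates that native state; creating a fresh Inflater per
    // block under heavy concurrency, without prompt release, can exhaust
    // native memory even while the Java heap has plenty of room.
    public static byte[] roundTrip(byte[] input) throws DataFormatException {
        Deflater deflater = new Deflater();
        byte[] compressed = new byte[input.length * 2 + 64];
        int clen;
        try {
            deflater.setInput(input);
            deflater.finish();
            clen = deflater.deflate(compressed);
        } finally {
            deflater.end(); // free native zlib state immediately
        }

        Inflater inflater = new Inflater();
        byte[] out = new byte[input.length];
        try {
            inflater.setInput(compressed, 0, clen);
            int n = inflater.inflate(out);
            return Arrays.copyOf(out, n);
        } finally {
            inflater.end(); // same for the decompressor side
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] data = "hello hfile block".getBytes("UTF-8");
        byte[] back = roundTrip(data);
        System.out.println(new String(back, "UTF-8").equals("hello hfile block"));
    }
}
```

Pooling via CodecPool is meant to amortize exactly this cost, so a codec 
path that bypasses reuse per block would explain both the inefficiency and, 
under enough concurrent readers, the native-memory OOM.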


I initially thought this was caused by HBASE-7336, but that turned out not 
to be the case (the OOMs still occurred with that change reverted).

If anybody knows anything about this, please let me know. It might also just be 
an environment issue.


Thanks.


-- Lars
