hfile doesn't recycle decompressors
-----------------------------------

                 Key: HBASE-1293
                 URL: https://issues.apache.org/jira/browse/HBASE-1293
             Project: Hadoop HBase
          Issue Type: Bug
    Affects Versions: 0.20.0
         Environment: - all -
            Reporter: ryan rawson
             Fix For: 0.20.0


The compression codec support in Hadoop has the concept of recycling 
compressors and decompressors. This is because a compression codec uses 
"direct buffers", which live outside the regular JVM heap.  Under heavy 
concurrent load there is a risk that we run out of that direct-buffer 
space in the JVM.

HFile does not call algorithm.returnDecompressor or algorithm.returnCompressor, 
so the (de)compressors it obtains are never recycled.  We should fix that.
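
For illustration, here is a rough sketch of the borrow/return pattern using 
Hadoop's CodecPool directly.  The readCompressedBlock helper and its signature 
are invented for this example and do not match HFile's actual read path; the 
point is the try/finally shape, and the same applies on the write side with 
getCompressor/returnCompressor.

    import java.io.IOException;
    import java.io.InputStream;

    import org.apache.hadoop.io.IOUtils;
    import org.apache.hadoop.io.compress.CodecPool;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.CompressionInputStream;
    import org.apache.hadoop.io.compress.Decompressor;

    public class DecompressorRecyclingSketch {

      // Hypothetical helper: decompress one block, borrowing the Decompressor
      // from the shared CodecPool and handing it back when done so its direct
      // buffers are reused instead of accumulating.
      static byte[] readCompressedBlock(CompressionCodec codec, InputStream raw,
          int uncompressedLen) throws IOException {
        Decompressor decompressor = CodecPool.getDecompressor(codec);
        try {
          CompressionInputStream in = codec.createInputStream(raw, decompressor);
          byte[] block = new byte[uncompressedLen];
          IOUtils.readFully(in, block, 0, uncompressedLen);
          return block;
        } finally {
          if (decompressor != null) {
            // The missing step: return the decompressor to the pool rather
            // than dropping it (and its direct buffers) on the floor.
            CodecPool.returnDecompressor(decompressor);
          }
        }
      }
    }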


I found this bug via OOM crashes under JDK 1.7. It appears to be partially due 
to the size of my cluster (200 GB, 800 regions, 19 servers) and partially due to 
weaknesses in JVM 1.7.
