[ https://issues.apache.org/jira/browse/HBASE-1410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12708585#action_12708585 ]

stack commented on HBASE-1410:
------------------------------

In 0.19.x, it was the indices that consumed heap.  HFile indices are smaller, but we 
could run into the same issue, only this time there is no skip-entries recourse.

What are you thinking, Andrew?  Changing the compaction algorithm so we only do a fixed 
amount at a time?  (I thought we did this already, but apparently we do not.)
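
To make the "fixed amount at a time" idea concrete, here is a minimal sketch of what a 
bounded selection could look like.  The knob names (maxFilesPerCompaction, 
maxCompactionBytes) are made up for illustration, not actual HBase configuration keys:

    import java.util.ArrayList;
    import java.util.List;

    // Sketch: cap how much a single compaction pass takes on, rather than
    // compacting every flush file at once.  Knob names are hypothetical.
    public class BoundedCompactionSelector {

      private final int maxFilesPerCompaction;
      private final long maxCompactionBytes;

      public BoundedCompactionSelector(int maxFiles, long maxBytes) {
        this.maxFilesPerCompaction = maxFiles;
        this.maxCompactionBytes = maxBytes;
      }

      /** Pick the oldest candidates until either limit is hit; leave the rest for later. */
      public List<Long> select(List<Long> candidateFileSizesOldestFirst) {
        List<Long> picked = new ArrayList<Long>();
        long total = 0;
        for (Long size : candidateFileSizesOldestFirst) {
          if (picked.size() >= maxFilesPerCompaction
              || total + size > maxCompactionBytes) {
            break;  // leave the remainder for a later compaction pass
          }
          picked.add(size);
          total += size;
        }
        return picked;
      }
    }

The point is just that each pass touches a bounded set of files, so repeated passes 
eventually work through the backlog without ever needing it all in heap at once.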

> compactions are not memory efficient 
> -------------------------------------
>
>                 Key: HBASE-1410
>                 URL: https://issues.apache.org/jira/browse/HBASE-1410
>             Project: Hadoop HBase
>          Issue Type: Improvement
>            Reporter: Andrew Purtell
>             Fix For: 0.20.0
>
>
> Compactions read a lot of data into the heap. Prior to HBASE-1058 or 
> successor issues, it was possible to stack up hundreds if not thousands of 
> flushes in a store. Eventually, when compaction is possible, no HRS has enough 
> heap to commit to the compaction process, and they all OOME as the region in 
> question is (re)deployed. HBASE-1058 is not the ideal solution. This issue 
> suggests creating a memory-efficient compaction process which can scale. 
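
A minimal sketch of the memory-efficient direction the issue asks for, assuming 
simplified stand-in types (plain string keys and a Source wrapper rather than the real 
HBase scanner and KeyValue classes): merge the already-sorted per-file scanners through 
a priority queue so the heap holds roughly one entry per open file, and write output as 
it is produced instead of buffering whole flush files.

    import java.util.Comparator;
    import java.util.Iterator;
    import java.util.List;
    import java.util.PriorityQueue;

    // Sketch of a streaming k-way merge: memory is bounded by one current
    // entry per source file, not by the total size of the files.
    public class StreamingCompactor {

      // One sorted stream per store file; tracks its current (smallest) key.
      static final class Source {
        final Iterator<String> it;
        String current;
        Source(Iterator<String> it) { this.it = it; advance(); }
        void advance() { current = it.hasNext() ? it.next() : null; }
      }

      public static void compact(List<Iterator<String>> scanners, List<String> out) {
        PriorityQueue<Source> heap = new PriorityQueue<Source>(
            Math.max(1, scanners.size()),
            new Comparator<Source>() {
              public int compare(Source a, Source b) {
                return a.current.compareTo(b.current);
              }
            });
        for (Iterator<String> s : scanners) {
          Source src = new Source(s);
          if (src.current != null) {
            heap.add(src);
          }
        }
        // Repeatedly emit the globally smallest key and refill from its source.
        while (!heap.isEmpty()) {
          Source src = heap.poll();
          out.add(src.current);
          src.advance();
          if (src.current != null) {
            heap.add(src);
          }
        }
      }
    }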

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
