[ https://issues.apache.org/jira/browse/HBASE-14978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15057869#comment-15057869 ]

Anoop Sam John commented on HBASE-14978:
----------------------------------------

I understand your point that parts of the cache (I mean some buckets) stay 
blocked for a longer time.
But my worry is that the patch treats every off-heap Cell as if it came from a 
new block. That will kill all the perf improvements we made in HBASE-11425.
In the patch we increment the block size for each cell by the capacity of its 
underlying value buffer. For off-heap Cells coming from the L2 cache, that 
buffer is the bucket buffer created in ByteBufferArray, which is 4 MB. So 
whatever size limit we set, off-heap cells from the L2 cache will hit it almost 
immediately, and we will end up making many more RPCs!
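For illustration only, here is a minimal sketch (not the patch code; the 100 MB 
cap and the class name are made up) of why charging each off-heap cell with its 
backing buffer's capacity exhausts any per-response block budget after just a 
handful of cells:

import java.nio.ByteBuffer;

// Minimal sketch, not the patch code: shows what happens if the per-response
// block budget is charged with the *capacity* of each off-heap cell's backing
// buffer. The cap value and class name are hypothetical.
public class BlockBudgetSketch {
    // hypothetical per-RPC cap on retained block bytes
    static final long MAX_BLOCK_BYTES_PER_RESPONSE = 100L * 1024 * 1024;
    // bucket size used by ByteBufferArray for the off-heap L2 cache
    static final int BUCKET_SIZE = 4 * 1024 * 1024;

    public static void main(String[] args) {
        // One 4 MB bucket stands in for the shared backing buffer of many off-heap cells.
        ByteBuffer bucket = ByteBuffer.allocateDirect(BUCKET_SIZE);

        long charged = 0;
        int cellsBeforeLimit = 0;
        // Each cell is charged bucket.capacity() even though the cells all share
        // the same bucket and are only a few bytes each.
        while (charged + bucket.capacity() <= MAX_BLOCK_BYTES_PER_RESPONSE) {
            charged += bucket.capacity();
            cellsBeforeLimit++;
        }
        // Prints 25: with 4 MB charged per cell, only 25 cells fit under a 100 MB
        // cap, no matter how small the cells actually are, so the server has to
        // break the result into many more RPCs.
        System.out.println("cells per RPC before the limit trips: " + cellsBeforeLimit);
    }
}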

> Don't allow Multi to retain too many blocks
> -------------------------------------------
>
>                 Key: HBASE-14978
>                 URL: https://issues.apache.org/jira/browse/HBASE-14978
>             Project: HBase
>          Issue Type: Improvement
>    Affects Versions: 2.0.0, 1.2.0, 1.3.0
>            Reporter: Elliott Clark
>            Assignee: Elliott Clark
>            Priority: Critical
>         Attachments: HBASE-14978-v1.patch, HBASE-14978-v2.patch, 
> HBASE-14978.patch
>
>
> Scans and Multis have limits on the total size of the cells that can be 
> returned. However, if those cells do not all point at the same blocks, the 
> KeyValues can keep alive a lot more data than their own size.
> Take the following example:
> A multi with a list of 10000 gets to a fat row, where each column comes back 
> from a different block and each column is small (32 bytes or so).
> The total cell size will be 32 * 10000 = ~320 KB. However, if each block is 
> 128 KB, the total retained heap will be well over a gigabyte (10000 x 128 KB 
> is roughly 1.2 GB, before any per-block overhead).
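A back-of-the-envelope sketch of the numbers in that example (the 10000 gets, 
32-byte cells, and 128 KB blocks come from the description; the class name is 
illustrative):

// Sketch of the retained-heap blow-up in the example above.
public class RetainedHeapSketch {
    public static void main(String[] args) {
        long cells = 10_000;                 // gets in the multi, one cell each
        long cellSize = 32;                  // bytes actually returned per cell
        long blockSize = 128L * 1024;        // HFile block each cell keeps alive

        long returnedBytes = cells * cellSize;   // what the size limit accounts for
        long retainedBytes = cells * blockSize;  // what the response actually pins
                                                 // when every cell sits in a distinct block

        // ~312 KB returned vs ~1250 MB retained
        System.out.printf("returned ~%d KB, retained ~%d MB%n",
                returnedBytes / 1024, retainedBytes / (1024 * 1024));
    }
}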



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
