[ https://issues.apache.org/jira/browse/HBASE-17757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15900867#comment-15900867 ]

Anoop Sam John commented on HBASE-17757:
----------------------------------------

How about making the block size limit a hard limit rather than a soft one? I 
mean, don't allow crossing the limit at all. We have a 1K size adjustment in the 
BC bucket sizes to accommodate this possible overgrow plus the header/checksum... 
(Do we strip these extra bytes like the header etc. now while caching to BC? I 
keep forgetting this.. [[email protected]] )
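
To make the proposal concrete, here is a minimal, self-contained sketch of the 
hard-limit idea; all names here are illustrative, not the actual HFile writer 
code. The point is that the size check happens *before* the append, so a block 
can never overgrow the limit:

    import java.util.ArrayList;
    import java.util.List;

    public class HardLimitBlockWriter {
        private final int blockSizeLimit;
        private final List<byte[]> currentBlock = new ArrayList<>();
        private int currentBlockSize = 0;

        public HardLimitBlockWriter(int blockSizeLimit) {
            this.blockSizeLimit = blockSizeLimit;
        }

        public void append(byte[] cell) {
            // Hard limit: check *before* appending. A soft limit would
            // append first and only then notice the block is oversized.
            if (currentBlockSize > 0
                && currentBlockSize + cell.length > blockSizeLimit) {
                finishBlock();
            }
            currentBlock.add(cell);
            currentBlockSize += cell.length;
        }

        private void finishBlock() {
            // Flush the block while it is still within the limit.
            System.out.println("finished block of " + currentBlockSize + " bytes");
            currentBlock.clear();
            currentBlockSize = 0;
        }
    }

With a hard limit, the 1K slack in the bucket sizes would only need to cover 
the header/checksum bytes, not an arbitrary overgrow.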

> Unify blocksize after encoding to decrease memory fragment 
> -----------------------------------------------------------
>
>                 Key: HBASE-17757
>                 URL: https://issues.apache.org/jira/browse/HBASE-17757
>             Project: HBase
>          Issue Type: New Feature
>            Reporter: Allan Yang
>            Assignee: Allan Yang
>         Attachments: HBASE-17757.patch
>
>
> Usually, we store the encoded (uncompressed) block in the BlockCache/BucketCache. 
> Although we set the blocksize, the actual block size varies after encoding. Varied 
> block sizes cause memory fragmentation, which eventually results in more full GCs. 
> To relieve the memory fragmentation, this issue adjusts the encoded block to a 
> unified size.
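
For reference, a minimal sketch of the unified-size idea described above, 
assuming the adjustment is simply rounding each encoded block's buffer up to a 
fixed quantum and padding the tail; PAD_QUANTUM and unify are illustrative 
names, not taken from the attached patch:

    import java.nio.ByteBuffer;

    public final class BlockSizeUnifier {
        // Pad every encoded block up to a multiple of this quantum so the
        // cache only ever allocates a handful of distinct buffer sizes.
        private static final int PAD_QUANTUM = 4 * 1024;

        public static ByteBuffer unify(ByteBuffer encoded) {
            int payload = encoded.remaining();
            int unified = ((payload + PAD_QUANTUM - 1) / PAD_QUANTUM) * PAD_QUANTUM;
            ByteBuffer out = ByteBuffer.allocate(unified); // unified-size allocation
            out.put(encoded.duplicate());                  // copy payload; tail stays zeroed
            out.position(0).limit(payload);                // readers see only the real payload
            return out;
        }
    }

The allocations the cache sees are then drawn from a small set of sizes, which 
is what reduces fragmentation, at the cost of a few padding bytes per block.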


