[
https://issues.apache.org/jira/browse/HBASE-17757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15923381#comment-15923381
]
Allan Yang commented on HBASE-17757:
------------------------------------
{quote}
blockWriter.encodedBlockSizeWritten() >= encodedBlockSizeLimit ||
blockWriter.blockSizeWritten() >= hFileContext.getBlocksize()
so only the old way of block size determines the block boundary.
{quote}
So, if users set the blocksize to 64KB and the default ratio is 1, they
will get blocks whose size is 64KB AFTER encoding. But before this patch, if
users set the blocksize to 64KB, they would get blocks of 64KB BEFORE
encoding. That is the difference.
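The difference can be sketched as follows (a minimal illustration, not the actual HFileWriterImpl code from the patch; the class, constants, and method names here are assumed for the sake of the example):

```java
// Minimal sketch of the two boundary checks being compared; names and
// structure are assumed for illustration, not taken from the HBase patch.
public class BlockBoundarySketch {
    static final int BLOCK_SIZE = 64 * 1024;  // configured blocksize (64KB)
    static final double RATIO = 1.0;          // hypothetical encoding-ratio setting

    // Before the patch: finish the block once the RAW (pre-encoding) bytes
    // reach the configured blocksize. If encoding halves the data, the
    // cached encoded block ends up around 32KB, and sizes vary per block.
    static boolean shouldFinishBlockOld(int rawBytesWritten) {
        return rawBytesWritten >= BLOCK_SIZE;
    }

    // With the patch and ratio = 1: finish the block once the ENCODED bytes
    // reach blocksize * ratio, so cached blocks are ~64KB AFTER encoding.
    static boolean shouldFinishBlockNew(int encodedBytesWritten) {
        return encodedBytesWritten >= (int) (BLOCK_SIZE * RATIO);
    }

    public static void main(String[] args) {
        // Suppose encoding shrinks the data to half its size.
        int raw = 64 * 1024, encoded = raw / 2;
        System.out.println(shouldFinishBlockOld(raw));       // true: cut at 64KB raw
        System.out.println(shouldFinishBlockNew(encoded));   // false: only 32KB encoded
        System.out.println(shouldFinishBlockNew(64 * 1024)); // true: 64KB encoded
    }
}
```

With the old check, the size of the block that actually lands in the cache depends on the encoding efficiency of its data; with the encoded-size check, cached blocks converge to one size, which is what relieves the fragmentation.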
> Unify blocksize after encoding to decrease memory fragment
> -----------------------------------------------------------
>
> Key: HBASE-17757
> URL: https://issues.apache.org/jira/browse/HBASE-17757
> Project: HBase
> Issue Type: New Feature
> Reporter: Allan Yang
> Assignee: Allan Yang
> Attachments: HBASE-17757.patch
>
>
> Usually, we store encoded (uncompressed) blocks in the BlockCache/BucketCache.
> Though we have set the blocksize, the actual block size varies after encoding.
> Varied block sizes cause memory fragmentation, which eventually results in more
> full GCs. In order to relieve the memory fragmentation, this issue adjusts the
> encoded blocks to a unified size.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)