[
https://issues.apache.org/jira/browse/HBASE-15248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15753974#comment-15753974
]
Anoop Sam John commented on HBASE-15248:
----------------------------------------
The BucketCache (BC) has buckets and slots of different sizes. These sizes
track the possible HFile block sizes: by default we consider 4K, 8K ... 512K.
But we add an extra 1KB to each bucket size, and so to each slot, so a 4K slot
is actually 5K. Those extra bytes are taken up by the current block's header
and the next block's header.
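To make that sizing concrete, here is a minimal sketch of the idea; the class
name, the block-size list, and the 1KB headroom constant are written out for
illustration and are not the actual BucketAllocator code:

// Minimal sketch (not the actual HBase code): bucket/slot sizes are the
// expected HFile block sizes plus 1KB of headroom for the current block's
// header and the next block's header.
public class BucketSizeSketch {
  // Assumed set of HFile block sizes the cache is tuned for (4K ... 512K).
  private static final int[] BLOCK_SIZES = {
      4 * 1024, 8 * 1024, 16 * 1024, 32 * 1024,
      64 * 1024, 128 * 1024, 256 * 1024, 512 * 1024 };

  // 1KB of extra room per slot, as described above.
  private static final int HEADER_HEADROOM = 1024;

  static int[] bucketSizes() {
    int[] sizes = new int[BLOCK_SIZES.length];
    for (int i = 0; i < BLOCK_SIZES.length; i++) {
      sizes[i] = BLOCK_SIZES[i] + HEADER_HEADROOM; // e.g. 4K becomes 5K
    }
    return sizes;
  }

  public static void main(String[] args) {
    for (int s : bucketSizes()) {
      System.out.println(s); // 5120, 9216, ...
    }
  }
}

With that 1KB of headroom, a block whose on-disk size is a little over its
nominal 4K still fits in one slot.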
So right now you have removed the eager fetch of the next block's header,
right? [[email protected]]..
We will still need the current block's header, though?
I am trying to solve the issues under the parent so that I can make
HBASE-17204 happen.
> BLOCKSIZE 4k should result in 4096 bytes on disk; i.e. fit inside a
> BucketCache 'block' of 4k
> ---------------------------------------------------------------------------------------------
>
> Key: HBASE-15248
> URL: https://issues.apache.org/jira/browse/HBASE-15248
> Project: HBase
> Issue Type: Sub-task
> Components: BucketCache
> Reporter: stack
>
> Chatting w/ a gentleman named Daniel Pol who is messing w/ bucketcache, he
> wants blocks to be the size specified in the configuration and no bigger. His
> hardware setup fetches pages of 4k, and so a block that has 4k of payload
> but then has a header and the header of the next block (which helps figure
> out what's next when scanning) ends up being 4203 bytes or something, and
> this then translates into two seeks per block fetch (a worked sketch of this
> arithmetic follows the quoted description).
> This issue is about what it would take to stay inside our configured size
> boundary writing out blocks.
> If that is not possible, give back a better signal on what to do so you can
> fit inside a particular constraint.
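Below is a minimal sketch of the arithmetic behind the quoted description. The
33-byte header size is an assumption for illustration, not the actual HFile
constant (the issue quotes roughly 4203 bytes, so the real overhead differs),
but the conclusion is the same: anything bigger than one 4K page touches two
pages.

// Minimal sketch of the page-alignment problem described above.
// Header size is an illustrative assumption, not a real HFile constant.
public class BlockPageSketch {
  static final int PAGE_SIZE = 4 * 1024; // hardware fetch granularity
  static final int PAYLOAD = 4 * 1024;   // configured BLOCKSIZE
  static final int HEADER = 33;          // assumed per-block header size

  public static void main(String[] args) {
    // on-disk block = its own header + payload + the next block's header
    int onDisk = HEADER + PAYLOAD + HEADER;
    // assuming the block starts on a page boundary; a block larger than one
    // page always spans at least two pages, hence two seeks per block fetch
    int pagesTouched = (onDisk + PAGE_SIZE - 1) / PAGE_SIZE;
    System.out.println("on-disk size: " + onDisk + " bytes"); // > 4096
    System.out.println("4K pages touched: " + pagesTouched);  // 2
  }
}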
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)