[
https://issues.apache.org/jira/browse/HBASE-15248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
stack updated HBASE-15248:
--------------------------
Summary: BLOCKSIZE 4k should result in 4096 bytes on disk; i.e. fit inside
a BucketCache 'block' of 4k (was: One block, one seek: a.k.a BLOCKSIZE 4k
should result in 4096 bytes on disk)
> BLOCKSIZE 4k should result in 4096 bytes on disk; i.e. fit inside a
> BucketCache 'block' of 4k
> ---------------------------------------------------------------------------------------------
>
> Key: HBASE-15248
> URL: https://issues.apache.org/jira/browse/HBASE-15248
> Project: HBase
> Issue Type: Sub-task
> Components: BucketCache
> Reporter: stack
>
> Chatting w/ a gentleman named Daniel Pol who is messing w/ bucketcache: he
> wants blocks to be the size specified in the configuration and no bigger. His
> hardware setup fetches pages of 4k, so a block that has 4k of payload
> but then also a header plus the header of the next block (which helps figure
> what's next when scanning) ends up being 4203 bytes or something, and this
> then translates into two seeks per block fetch.
> This issue is about what it would take to stay inside our configured size
> boundary when writing out blocks.
> If that is not possible, we should give back a better signal on what to do so
> you can fit inside a particular constraint.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)