[ https://issues.apache.org/jira/browse/HBASE-15248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15898738#comment-15898738 ]

Hudson commented on HBASE-15248:
--------------------------------

SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #2626 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2626/])
Javadoc on what BLOCKSIZE means (related to HBASE-15248) (stack: rev 
4beae9a56e2d5d2992d017ba876d18f78415e067)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
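
For context, BLOCKSIZE is a per-column-family setting. A minimal sketch of
setting it, using the hbase-client API of this era
(HColumnDescriptor#setBlocksize); the table and family names here are
hypothetical:

    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;

    public class BlocksizeExample {
        public static void main(String[] args) {
            HTableDescriptor table = new HTableDescriptor(TableName.valueOf("t1"));
            HColumnDescriptor family = new HColumnDescriptor("f1");
            // BLOCKSIZE caps the payload at which the writer starts a new
            // HFile block; per this issue, the block header is written on
            // top of that, so the bytes on disk can exceed 4096.
            family.setBlocksize(4 * 1024);
            table.addFamily(family);
        }
    }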


> BLOCKSIZE 4k should result in 4096 bytes on disk; i.e. fit inside a 
> BucketCache 'block' of 4k
> ---------------------------------------------------------------------------------------------
>
>                 Key: HBASE-15248
>                 URL: https://issues.apache.org/jira/browse/HBASE-15248
>             Project: HBase
>          Issue Type: Sub-task
>          Components: BucketCache
>            Reporter: stack
>
> Chatting w/ a gentleman named Daniel Pol who is messing w/ bucketcache: he 
> wants blocks to be the size specified in the configuration and no bigger. His 
> hardware setup fetches pages of 4k, so a block that has 4k of payload but 
> then carries its own header plus the header of the next block (which helps 
> figure what's next when scanning) ends up being 4203 bytes or something, and 
> this then translates into two seeks per block fetch.
> This issue is about what it would take to stay inside our configured size 
> boundary when writing out blocks.
> If that's not possible, give back a better signal on what to do so you could 
> fit inside a particular constraint.
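
A back-of-the-envelope sketch of the arithmetic above, in Java. It assumes
the 33-byte HFileBlock header of HFile v2+ with checksums enabled
(HConstants.HFILEBLOCK_HEADER_SIZE); the exact on-disk overhead varies with
checksum chunks and encoding, hence the "4203 bytes or something" in the
report. The class and method names are made up for illustration:

    public class BlockPageSpan {
        static final int PAGE = 4096;   // hardware fetch unit per the report
        static final int HEADER = 33;   // HFileBlock header size w/ checksums

        // Pages touched by a page-aligned read of `onDiskBytes` bytes.
        static int pagesTouched(int onDiskBytes) {
            return (onDiskBytes + PAGE - 1) / PAGE;  // ceiling division
        }

        public static void main(String[] args) {
            int payload = 4 * 1024;  // one BLOCKSIZE worth of cell data
            // A block read also pulls the next block's header so the scanner
            // knows what comes next: fetch = header + payload + next header.
            int fetch = HEADER + payload + HEADER;
            System.out.println(fetch + " bytes -> "
                + pagesTouched(fetch) + " page fetches");
            // Prints: 4162 bytes -> 2 page fetches. To stay inside one 4k
            // page, the writer would have to close the block early enough
            // that headers + payload <= 4096.
        }
    }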



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
