[ https://issues.apache.org/jira/browse/HBASE-15241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
stack updated HBASE-15241:
--------------------------
Summary: Blockcache hits hbase.ui.blockcache.by.file.max limit and is
silent that it will load no more blocks (was: Blockcache only loads 100k
blocks from a file)
> Blockcache hits hbase.ui.blockcache.by.file.max limit and is silent that it
> will load no more blocks
> -----------------------------------------------------------------------------------------------------
>
> Key: HBASE-15241
> URL: https://issues.apache.org/jira/browse/HBASE-15241
> Project: HBase
> Issue Type: Sub-task
> Components: BucketCache
> Reporter: stack
>
> We can only load 100k blocks from a file. If you have 256G of SSD, and blocks
> are 4k in size to align with SSD block reads, and you want it all in cache,
> the 100k limit gets in the way: at 4k per block, 100k blocks comes to only
> ~400M per file (the 100k may be an absolute limit... checking; in the UI I
> see 100k only). There is a configuration, hbase.ui.blockcache.by.file.max,
> which lets you up the per-file limit. This helps.
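> Below is a minimal sketch of the arithmetic behind the scenario above, plus
> the workaround of raising the limit. It is not from the issue itself: the
> programmatic setInt call and the 2,000,000 value are illustrative assumptions
> (the setting would normally go in hbase-site.xml), and it assumes the
> standard Hadoop/HBase Configuration API.
>
> // Sketch only; numbers mirror the 256G SSD / 4k block scenario above.
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
>
> public class BlockCachePerFileLimit {
>   public static void main(String[] args) {
>     long cacheBytes = 256L << 30;      // 256G of SSD-backed cache
>     long blockBytes = 4L << 10;        // 4k blocks, aligned to SSD reads
>     long blocksToFill = cacheBytes / blockBytes; // ~67M blocks to fill it all
>     long perFileCap = 100_000;         // default per-file cap being hit
>     System.out.println("blocks to fill cache: " + blocksToFill
>         + ", max bytes counted per file: " + perFileCap * blockBytes); // ~400M
>
>     // Workaround: raise the per-file limit (value here is illustrative).
>     Configuration conf = HBaseConfiguration.create();
>     conf.setInt("hbase.ui.blockcache.by.file.max", 2_000_000);
>   }
> }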
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)