Hi,

I'm working on a latency-sensitive, random-read application on HBase. It seems 
that the block cache is designed for sequential reads. I use the default 64k 
block size, which is much bigger than my rows (about 10k after GZ compression).
I assume the block cache stores compressed data, so one block can hold about 6 
rows; but under random reads maybe only 1 of those rows is ever accessed, and 
5/6 of the cache space is wasted.
Is there a better way of caching for random reads? Lowering the block size to 
32k or even 16k might be an option.
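
In case it helps frame the question, this is roughly what I had in mind for the 
block size change (a sketch only, assuming the HBase 2.x client API; "my_table" 
and "cf" are placeholders for my actual table and column family):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.util.Bytes;

public class LowerBlockSize {
    public static void main(String[] args) throws Exception {
        // Placeholder table and column family names.
        TableName table = TableName.valueOf("my_table");
        byte[] family = Bytes.toBytes("cf");

        try (Connection conn =
                 ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
            // Rebuild the column family descriptor with a 16 KB block size
            // instead of the 64 KB default. The new size only applies to
            // HFiles written afterwards (flushes/compactions).
            ColumnFamilyDescriptor cfd = ColumnFamilyDescriptorBuilder
                    .newBuilder(family)
                    .setBlocksize(16 * 1024)
                    .build();
            admin.modifyColumnFamily(table, cfd);
        }
    }
}

(The same change should also be possible from the HBase shell with an alter on 
the column family's BLOCKSIZE attribute.) Is that the right direction, or is 
there a better-suited caching setup for this kind of workload?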

Thanks!
