Hi,

We have an HBase 1.3.1 cluster running on AWS EMR. Our BucketCache is
configured for 400 GB on a set of attached EBS volumes, and all column
families are marked in-memory in their schemas with IN_MEMORY => 'true'
(except for one column family we only ever write to, for which we set
BLOCKCACHE => 'false').
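
For reference, this is roughly how those two settings map to the 1.3.1
Java API (the family names here are made-up placeholders, not our real
schema):

    import org.apache.hadoop.hbase.HColumnDescriptor;

    public class FamilyCacheSettings {
        public static void main(String[] args) {
            // Hypothetical family names, just to illustrate the two settings.
            HColumnDescriptor readFamily = new HColumnDescriptor("cf_read");
            readFamily.setInMemory(true);                // IN_MEMORY => 'true'

            HColumnDescriptor writeOnlyFamily = new HColumnDescriptor("cf_write_only");
            writeOnlyFamily.setBlockCacheEnabled(false); // BLOCKCACHE => 'false'

            System.out.println(readFamily);
            System.out.println(writeOnlyFamily);
        }
    }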

Even though all column families are marked in-memory, we still have the
following ratios set:

"hbase.bucketcache.memory.factor": "0.8",
"hbase.bucketcache.single.factor": "0.1",
"hbase.bucketcache.multi.factor": "0.1"
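
If I understand the factors correctly, they just partition the configured
capacity into three areas, so with our 400 GB cache the budgets would work
out as in this sketch (my arithmetic, assuming each factor is a straight
fraction of total capacity):

    public class BucketCacheBudget {
        public static void main(String[] args) {
            long capacity = 400L * 1024 * 1024 * 1024; // 400 GB BucketCache

            // Assuming each factor is a fraction of total capacity:
            long memoryArea = (long) (capacity * 0.8); // in-memory blocks, ~320 GB
            long singleArea = (long) (capacity * 0.1); // single-access blocks, ~40 GB
            long multiArea  = (long) (capacity * 0.1); // multi-access blocks, ~40 GB

            System.out.printf("memory=%d single=%d multi=%d%n",
                memoryArea, singleArea, multiArea);
        }
    }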

Currently the bucket cache shows evictions even though it has plenty of
free space, and I am trying to understand why we get any evictions at all.
We do have minor compactions going on, but we have not
set hbase.rs.evictblocksonclose to any value, and from looking at the code
it defaults to false. The total bucket cache size is nowhere near any of
the limits above; in fact, on some long-running servers where we stopped
traffic, the cache size went down to 0, which makes me think something is
evicting blocks from the bucket cache in the background.
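
For what it's worth, the default can be checked the same way the code
reads it; a minimal sketch (assuming an hbase-site.xml on the classpath,
or none at all):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class EvictOnCloseCheck {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // CacheConfig reads this key with a default of false, so an
            // unset value should not cause evictions on store file close.
            boolean evictOnClose =
                conf.getBoolean("hbase.rs.evictblocksonclose", false);
            System.out.println("hbase.rs.evictblocksonclose = " + evictOnClose);
        }
    }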

You can see a screenshot from one of the RegionServer L2 stats UI pages at
https://imgur.com/a/2ZUSv . Another thing I find interesting on that page
is that it shows a non-zero evicted-block count but says Evictions: 0.

Any help understanding this would be appreciated.

----
Saad
