[GitHub] [hbase] virajjasani commented on a change in pull request #3215: HBASE-25698 Fixing IllegalReferenceCountException when using TinyLfuBlockCache
virajjasani commented on a change in pull request #3215: URL: https://github.com/apache/hbase/pull/3215#discussion_r654285359

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/TinyLfuBlockCache.java

```diff
@@ -171,8 +177,10 @@ public Cacheable getBlock(BlockCacheKey cacheKey,
     if ((value != null) && caching) {
       if ((value instanceof HFileBlock) && ((HFileBlock) value).isSharedMem()) {
         value = HFileBlock.deepCloneOnHeap((HFileBlock) value);
+        cacheBlockUtil(cacheKey, value, true);
```

Review comment:

> U can do the deepclone in asReferencedHeapBlock() only based on isSharedMem right? retain() call is anyways needed

LRUBlockCache does not perform block.retain() if the block is cloned:

```java
 * 1. if cache the cloned heap block, its refCnt is a totally new one, it's easy to handle;
 * 2. if cache the original heap block, we're sure that it won't be tracked in ByteBuffAllocator's
 *    reservoir, if both RPC and LRUBlockCache release the block, then it can be garbage collected
 *    by JVM, so need a retain here.
```

```java
  private Cacheable asReferencedHeapBlock(Cacheable buf) {
    if (buf instanceof HFileBlock) {
      HFileBlock blk = ((HFileBlock) buf);
      if (blk.isSharedMem()) {
        return HFileBlock.deepCloneOnHeap(blk);
      }
    }
    // The block will be referenced by this LRUBlockCache, so should increase its refCnt here.
    return buf.retain();
  }
```

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/TinyLfuBlockCache.java (same hunk as above)

Review comment: @anoopsjohn This is simplified now.

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/TinyLfuBlockCache.java

```diff
@@ -188,21 +196,58 @@ public void cacheBlock(BlockCacheKey cacheKey, Cacheable value, boolean inMemory
   @Override
   public void cacheBlock(BlockCacheKey key, Cacheable value) {
+    cacheBlockUtil(key, value, false);
+  }
+
+  private void cacheBlockUtil(BlockCacheKey key, Cacheable value, boolean deepClonedOnHeap) {
```

Review comment: Done

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
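The clone-vs-retain rule quoted above can be sketched with toy stand-ins (the `Block`, `retain()`, and `deepCloneOnHeap()` here are simplified mock-ups, not the real HBase `HFileBlock`/`Cacheable` types): a shared-memory block gets a deep clone whose refCnt starts fresh at 1, while an exclusive heap block is retained so the cache holds its own reference.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy stand-in for a reference-counted block; only illustrates the rule above.
class Block {
    final AtomicInteger refCnt = new AtomicInteger(1);
    final boolean sharedMem;

    Block(boolean sharedMem) { this.sharedMem = sharedMem; }

    Block retain() {
        int prev = refCnt.getAndIncrement();
        if (prev <= 0) {                  // mirrors netty's IllegalReferenceCountException
            refCnt.decrementAndGet();
            throw new IllegalStateException("refCnt: " + prev);
        }
        return this;
    }

    // A deep clone starts with a brand-new refCnt of 1 and is never shared memory.
    Block deepCloneOnHeap() { return new Block(false); }
}

public class AsReferencedHeapBlockSketch {
    // Mirrors the shape of LruBlockCache#asReferencedHeapBlock quoted above.
    static Block asReferencedHeapBlock(Block buf) {
        if (buf.sharedMem) {
            return buf.deepCloneOnHeap(); // case 1: cloned block, fresh refCnt, no retain
        }
        return buf.retain();              // case 2: cache takes its own reference
    }

    public static void main(String[] args) {
        Block shared = new Block(true);
        Block cachedClone = asReferencedHeapBlock(shared);
        System.out.println(shared.refCnt.get() + " " + cachedClone.refCnt.get()); // 1 1

        Block exclusive = new Block(false);
        asReferencedHeapBlock(exclusive);
        System.out.println(exclusive.refCnt.get()); // 2
    }
}
```

The point of the quoted comment shows up in the counts: caching the clone leaves the original's refCnt untouched, whereas caching the exclusive block bumps it so that both the RPC path and the cache must release before the block can be reclaimed.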
virajjasani commented on a change in pull request #3215: URL: https://github.com/apache/hbase/pull/3215#discussion_r650515508

## File path: hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java

```diff
@@ -890,4 +892,60 @@ public void testDBEShipped() throws IOException {
       writer.close();
     }
   }
+
+  /**
+   * Test case for CombinedBlockCache with TinyLfu as L1 cache
+   */
+  @Test
+  public void testReaderWithTinyLfuCombinedBlockCache() throws Exception {
+    testReaderCombinedCache(true);
+  }
+
+  /**
+   * Test case for CombinedBlockCache with AdaptiveLRU as L1 cache
+   */
+  @Test
+  public void testReaderWithAdaptiveLruCombinedBlockCache() throws Exception {
+    testReaderCombinedCache(false);
+  }
+
+  private void testReaderCombinedCache(final boolean isTinyLfu) throws Exception {
+    int bufCount = 1024;
+    int blockSize = 64 * 1024;
+    ByteBuffAllocator alloc = initAllocator(true, bufCount, blockSize, 0);
+    fillByteBuffAllocator(alloc, bufCount);
+    Path storeFilePath = writeStoreFile();
+    // Open the file reader with CombinedBlockCache
+    BlockCache combined = initCombinedBlockCache(isTinyLfu ? "TinyLfu" : "AdaptiveLRU");
+    conf.setBoolean(EVICT_BLOCKS_ON_CLOSE_KEY, true);
+    CacheConfig cacheConfig = new CacheConfig(conf, null, combined, alloc);
+    HFile.Reader reader = HFile.createReader(fs, storeFilePath, cacheConfig, true, conf);
+    long offset = 0;
+    while (offset < reader.getTrailer().getLoadOnOpenDataOffset()) {
+      BlockCacheKey key = new BlockCacheKey(storeFilePath.getName(), offset);
+      HFileBlock block = reader.readBlock(offset, -1, true, true, false, true, null, null);
+      offset += block.getOnDiskSizeWithHeader();
+      // Read the cached block.
+      Cacheable cachedBlock = combined.getBlock(key, false, false, true);
+      try {
+        Assert.assertNotNull(cachedBlock);
+        Assert.assertTrue(cachedBlock instanceof HFileBlock);
+        HFileBlock hfb = (HFileBlock) cachedBlock;
+        // Data block will be cached in BucketCache, so it should be an off-heap block.
+        if (hfb.getBlockType().isData()) {
+          Assert.assertTrue(hfb.isSharedMem());
+        } else if (!isTinyLfu) {
+          Assert.assertFalse(hfb.isSharedMem());
+        }
+      } finally {
+        cachedBlock.release();
+      }
+      block.release(); // return the ByteBuffer back to the allocator.
+    }
+    reader.close();
+    combined.shutdown();
+    Assert.assertEquals(bufCount, alloc.getFreeBufferCount());
+    alloc.clean();
```

Review comment: Done
virajjasani commented on a change in pull request #3215: URL: https://github.com/apache/hbase/pull/3215#discussion_r648542799

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/TinyLfuBlockCache.java

```diff
@@ -158,7 +158,13 @@ public boolean containsBlock(BlockCacheKey cacheKey) {
   @Override
   public Cacheable getBlock(BlockCacheKey cacheKey, boolean caching, boolean repeat,
       boolean updateCacheMetrics) {
-    Cacheable value = cache.getIfPresent(cacheKey);
+    Cacheable value = cache.asMap().computeIfPresent(cacheKey, (blockCacheKey, cacheable) -> {
+      // It will be referenced by RPC path, so increase here. NOTICE: Must do the retain inside
+      // this block. because if retain outside the map#computeIfPresent, the evictBlock may remove
+      // the block and release, then we're retaining a block with refCnt=0 which is disallowed.
+      cacheable.retain();
+      return cacheable;
+    });
```

Review comment: @saintstack @anoopsjohn @ben-manes How about this one? I have yet to benchmark this and perform chaos testing with it, but before I do, I just wanted to see if you are aligned with this rough patch.

```diff
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
index 3e5ba1d19c..bb2b394ccd 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
@@ -39,6 +39,7 @@ import org.apache.hadoop.hbase.io.HeapSize;
 import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
 import org.apache.hadoop.hbase.util.ClassSize;
 import org.apache.hadoop.util.StringUtils;
+import org.apache.hbase.thirdparty.io.netty.util.IllegalReferenceCountException;
 import org.apache.yetus.audience.InterfaceAudience;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -510,14 +511,15 @@ public class LruBlockCache implements FirstLevelBlockCache {
   @Override
   public Cacheable getBlock(BlockCacheKey cacheKey, boolean caching, boolean repeat,
       boolean updateCacheMetrics) {
-    LruCachedBlock cb = map.computeIfPresent(cacheKey, (key, val) -> {
-      // It will be referenced by RPC path, so increase here. NOTICE: Must do the retain inside
-      // this block. because if retain outside the map#computeIfPresent, the evictBlock may remove
-      // the block and release, then we're retaining a block with refCnt=0 which is disallowed.
-      // see HBASE-22422.
-      val.getBuffer().retain();
-      return val;
-    });
+    LruCachedBlock cb = map.get(cacheKey);
+    if (cb != null) {
+      try {
+        cb.getBuffer().retain();
+      } catch (IllegalReferenceCountException e) {
+        // map.remove(cacheKey); ==> not required here
+        cb = null;
+      }
+    }
     if (cb == null) {
       if (!repeat && updateCacheMetrics) {
         stats.miss(caching, cacheKey.isPrimary(), cacheKey.getBlockType());
```

And this perf improvement is to be followed by all L1 caches, something we can take up as a follow-up task.
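The lock-free read path the diff above proposes can be sketched with a toy `RefCounted` standing in for the netty-style refCnt on `LruCachedBlock` (the names here are illustrative, not HBase API): the hit path does a plain `get()` plus an optimistic `retain()`, and a `retain()` that races with evict-and-release is simply downgraded to a miss.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Toy stand-in for a netty-style reference count: retain() on a fully
// released object fails, which is the signal the read path catches.
class RefCounted {
    private final AtomicInteger refCnt = new AtomicInteger(1);

    void retain() {
        while (true) {
            int cur = refCnt.get();
            if (cur == 0) throw new IllegalStateException("refCnt: 0"); // already released
            if (refCnt.compareAndSet(cur, cur + 1)) return;
        }
    }

    boolean release() { return refCnt.decrementAndGet() == 0; }
    int refCnt() { return refCnt.get(); }
}

public class OptimisticGetSketch {
    private final ConcurrentHashMap<String, RefCounted> map = new ConcurrentHashMap<>();

    void put(String key, RefCounted value) { map.put(key, value); }

    // Eviction path: drop the mapping and release the cache's own reference.
    void evict(String key) {
        RefCounted v = map.remove(key);
        if (v != null) v.release();
    }

    // Read path: no bin lock; a racing evict+release simply surfaces as a miss.
    RefCounted getBlock(String key) {
        RefCounted v = map.get(key);
        if (v != null) {
            try {
                v.retain();
            } catch (IllegalStateException e) {
                v = null; // released after our get(); treat as a cache miss
            }
        }
        return v;
    }

    public static void main(String[] args) {
        OptimisticGetSketch cache = new OptimisticGetSketch();
        cache.put("k", new RefCounted());
        RefCounted hit = cache.getBlock("k");
        System.out.println(hit.refCnt());        // 2: cache ref + RPC ref
        cache.evict("k");                        // cache releases its ref
        hit.release();                           // RPC releases; refCnt reaches 0
        System.out.println(cache.getBlock("k")); // null: miss after eviction
    }
}
```

This also shows why the `map.remove(cacheKey)` in the catch block is unnecessary, as noted above: by the time `retain()` fails, the evictor has already removed the mapping.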
virajjasani commented on a change in pull request #3215: URL: https://github.com/apache/hbase/pull/3215#discussion_r648564135

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/TinyLfuBlockCache.java

```diff
@@ -158,7 +158,13 @@ public boolean containsBlock(BlockCacheKey cacheKey) {
   @Override
   public Cacheable getBlock(BlockCacheKey cacheKey, boolean caching, boolean repeat,
       boolean updateCacheMetrics) {
-    Cacheable value = cache.getIfPresent(cacheKey);
+    Cacheable value = cache.asMap().computeIfPresent(cacheKey, (blockCacheKey, cacheable) -> {
+      // It will be referenced by RPC path, so increase here. NOTICE: Must do the retain inside
+      // this block. because if retain outside the map#computeIfPresent, the evictBlock may remove
+      // the block and release, then we're retaining a block with refCnt=0 which is disallowed.
+      cacheable.retain();
+      return cacheable;
+    });
```

Review comment: Sounds good, `map.remove(cacheKey, cb)` too should not be required in this case. Thanks @ben-manes
virajjasani commented on a change in pull request #3215: URL: https://github.com/apache/hbase/pull/3215#discussion_r638493210

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/TinyLfuBlockCache.java

```diff
@@ -158,7 +158,13 @@ public boolean containsBlock(BlockCacheKey cacheKey) {
   @Override
   public Cacheable getBlock(BlockCacheKey cacheKey, boolean caching, boolean repeat,
       boolean updateCacheMetrics) {
-    Cacheable value = cache.getIfPresent(cacheKey);
+    Cacheable value = cache.asMap().computeIfPresent(cacheKey, (blockCacheKey, cacheable) -> {
+      // It will be referenced by RPC path, so increase here. NOTICE: Must do the retain inside
+      // this block. because if retain outside the map#computeIfPresent, the evictBlock may remove
+      // the block and release, then we're retaining a block with refCnt=0 which is disallowed.
+      cacheable.retain();
+      return cacheable;
+    });
```

Review comment: I am sure many of the perf regressions for read requests reported on the dev/user mailing lists for HBase 2 (compared to HBase 1) might be related to using CHM#computeIfPresent for every on-heap and off-heap cache hit. Yes, refCount makes the code look better, but on the other hand we have perf issues. I believe we should think about this and see if we really need netty-based refCount, or at least continue using CHM#get and ride over `IllegalReferenceCountException` by swallowing it and evicting the block (I believe that's what @ben-manes's suggestion is). And the final decision should be applicable to all L1 cache strategies: SLRU, TinyLfu, AdaptiveLRU. Otherwise BlockCache will have clear perf issues in HBase 2 vs HBase 1.
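The per-hit overhead this comment worries about can be felt with a plain `ConcurrentHashMap`. This is only a rough single-threaded illustration, not a benchmark: absolute numbers vary by JVM and hardware, and under concurrent readers the bin locking inside `computeIfPresent` costs proportionally more.

```java
import java.util.concurrent.ConcurrentHashMap;

public class ReadPathCostSketch {
    public static void main(String[] args) {
        ConcurrentHashMap<Integer, int[]> map = new ConcurrentHashMap<>();
        for (int i = 0; i < 1024; i++) {
            map.put(i, new int[] { 0 });
        }

        final int iters = 2_000_000;

        // Hit path A: lock-free get(), as in the original read path.
        long t0 = System.nanoTime();
        for (int i = 0; i < iters; i++) {
            map.get(i & 1023);
        }
        long getNs = System.nanoTime() - t0;

        // Hit path B: computeIfPresent(), which locks the key's bin on every hit.
        t0 = System.nanoTime();
        for (int i = 0; i < iters; i++) {
            map.computeIfPresent(i & 1023, (k, v) -> { v[0]++; return v; });
        }
        long cipNs = System.nanoTime() - t0;

        System.out.printf("get: %d ms, computeIfPresent: %d ms%n",
            getNs / 1_000_000, cipNs / 1_000_000);
    }
}
```

Whether that delta matters enough to give up the safety of retaining under the bin lock is exactly the trade-off being debated here.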
virajjasani commented on a change in pull request #3215: URL: https://github.com/apache/hbase/pull/3215#discussion_r638125789

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/TinyLfuBlockCache.java

```diff
@@ -158,7 +158,13 @@ public boolean containsBlock(BlockCacheKey cacheKey) {
   @Override
   public Cacheable getBlock(BlockCacheKey cacheKey, boolean caching, boolean repeat,
       boolean updateCacheMetrics) {
-    Cacheable value = cache.getIfPresent(cacheKey);
+    Cacheable value = cache.asMap().computeIfPresent(cacheKey, (blockCacheKey, cacheable) -> {
+      // It will be referenced by RPC path, so increase here. NOTICE: Must do the retain inside
+      // this block. because if retain outside the map#computeIfPresent, the evictBlock may remove
+      // the block and release, then we're retaining a block with refCnt=0 which is disallowed.
+      cacheable.retain();
+      return cacheable;
+    });
```

Review comment: Hmm, yeah, locks would slow us down. On the other hand, based on the discussion on HBASE-22422, it seems computeIfPresent (locking) is necessary to prevent concurrency issues between #retain and #release. Based on @openinx's comment [here](https://issues.apache.org/jira/browse/HBASE-22422?focusedCommentId=16848024&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16848024), I am wondering if the sawtooth graph of QPS is a similar concurrency issue that is not resolved yet. @saintstack Any suggestions? Have you been using the off-heap read path with LRU recently?
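Why computeIfPresent closes the retain/release race: for a given key, `ConcurrentHashMap` serializes the remapping function against a concurrent `remove()` on the same key, so eviction cannot release the block between the lookup and the retain. A small demonstration with plain JDK types (strings stand in for blocks; the sleep just widens the window) records the order in which the two sides run:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

public class ComputeIfPresentOrderingSketch {
    public static void main(String[] args) throws Exception {
        ConcurrentHashMap<String, String> map = new ConcurrentHashMap<>();
        map.put("key", "block");
        List<String> order = Collections.synchronizedList(new ArrayList<>());
        CountDownLatch insideCompute = new CountDownLatch(1);

        // RPC read path: the "retain" happens under the key's bin lock.
        Thread reader = new Thread(() -> map.computeIfPresent("key", (k, v) -> {
            insideCompute.countDown();                 // let the evictor start
            try { Thread.sleep(200); } catch (InterruptedException ignored) { }
            order.add("retain");                       // stands in for value.retain()
            return v;
        }));
        reader.start();

        // Eviction path: remove() on the same key must wait for the compute to finish.
        insideCompute.await();
        map.remove("key");
        order.add("evict+release");                    // stands in for block.release()
        reader.join();

        System.out.println(order); // [retain, evict+release]
    }
}
```

With a plain `get()` followed by a retain outside the lock, the same interleaving could instead release the block before the retain, i.e. retaining a block whose refCnt already hit 0, which is the `IllegalReferenceCountException` this PR is fixing.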
virajjasani commented on a change in pull request #3215: URL: https://github.com/apache/hbase/pull/3215#discussion_r636421078

## File path: hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java

```diff
@@ -890,4 +892,60 @@ public void testDBEShipped() throws IOException {
       writer.close();
     }
   }
+
+  /**
+   * Test case for CombinedBlockCache with TinyLfu as L1 cache
+   */
+  @Test
+  public void testReaderWithTinyLfuCombinedBlockCache() throws Exception {
+    testReaderCombinedCache(true);
+  }
+
+  /**
+   * Test case for CombinedBlockCache with AdaptiveLRU as L1 cache
+   */
+  @Test
+  public void testReaderWithAdaptiveLruCombinedBlockCache() throws Exception {
+    testReaderCombinedCache(false);
+  }
+
+  private void testReaderCombinedCache(final boolean isTinyLfu) throws Exception {
```

Review comment: Sounds good. Will do this in next iteration.

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/TinyLfuBlockCache.java

```diff
@@ -171,8 +177,10 @@ public Cacheable getBlock(BlockCacheKey cacheKey,
     if ((value != null) && caching) {
       if ((value instanceof HFileBlock) && ((HFileBlock) value).isSharedMem()) {
         value = HFileBlock.deepCloneOnHeap((HFileBlock) value);
+        cacheBlockUtil(cacheKey, value, true);
```

Review comment: Without this if/else, we would perform `deepCloneOnHeap()` twice, right? Just above this line, we do the deep clone, and then if we just call `cacheBlock()` it will again perform `deepCloneOnHeap()` because the same condition as above holds true, i.e.

```java
if ((value instanceof HFileBlock) && ((HFileBlock) value).isSharedMem())
```

## File path: hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java

```diff
+        // Data block will be cached in BucketCache, so it should be an off-heap block.
+        if (hfb.getBlockType().isData()) {
+          Assert.assertTrue(hfb.isSharedMem());
+        } else if (!isTinyLfu) {
+          Assert.assertFalse(hfb.isSharedMem());
```

Review comment: Yes, that's what it looks like. I got to know it from this test. Shall we continue having this check? Or do you think the different treatment of non-data blocks is an issue?

## File path: hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java

```diff
+  private void testReaderCombinedCache(final boolean isTinyLfu) throws Exception {
+    int bufCount = 1024;
+    int blockSize = 64 * 1024;
+    ByteBuffAllocator alloc = initAllocator(true, bufCount, blockSize, 0);
+    fillByteBuffAllocator(alloc,
```
virajjasani commented on a change in pull request #3215: URL: https://github.com/apache/hbase/pull/3215#discussion_r624966797

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/TinyLfuBlockCache.java

```diff
@@ -196,13 +202,17 @@ public void cacheBlock(BlockCacheKey key, Cacheable value) {
           key.getHfileName(), key.getOffset(), value.heapSize(), DEFAULT_MAX_BLOCK_SIZE));
       }
     } else {
+      value.retain();
```

Review comment: We can make use of `isSharedMem()` similar to how LRU does it?

```java
  private Cacheable asReferencedHeapBlock(Cacheable buf) {
    if (buf instanceof HFileBlock) {
      HFileBlock blk = ((HFileBlock) buf);
      if (blk.isSharedMem()) {
        return HFileBlock.deepCloneOnHeap(blk);
      }
    }
    // The block will be referenced by this LRUBlockCache, so should increase its refCnt here.
    return buf.retain();
  }
```

And instead of directly retaining the value here, we can call this method. That seems like the only thing we are missing?
virajjasani commented on a change in pull request #3215: URL: https://github.com/apache/hbase/pull/3215#discussion_r624699556

## File path: hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java

```diff
@@ -890,4 +892,89 @@ public void testDBEShipped() throws IOException {
       writer.close();
     }
   }
+
+  /**
+   * Test case for CombinedBlockCache with TinyLfu as L1 cache
+   */
+  @Test
+  public void testReaderWithTinyLfuCombinedBlockCache() throws Exception {
```

Review comment: Done