virajjasani commented on a change in pull request #3215:
URL: https://github.com/apache/hbase/pull/3215#discussion_r636421078
##########
File path:
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java
##########
@@ -890,4 +892,60 @@ public void testDBEShipped() throws IOException {
writer.close();
}
}
+
+ /**
+ * Test case for CombinedBlockCache with TinyLfu as L1 cache
+ */
+ @Test
+ public void testReaderWithTinyLfuCombinedBlockCache() throws Exception {
+ testReaderCombinedCache(true);
+ }
+
+ /**
+ * Test case for CombinedBlockCache with AdaptiveLRU as L1 cache
+ */
+ @Test
+ public void testReaderWithAdaptiveLruCombinedBlockCache() throws Exception {
+ testReaderCombinedCache(false);
+ }
+
+ private void testReaderCombinedCache(final boolean isTinyLfu) throws Exception {
Review comment:
Sounds good. Will do this in next iteration.
##########
File path:
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/TinyLfuBlockCache.java
##########
@@ -171,8 +177,10 @@ public Cacheable getBlock(BlockCacheKey cacheKey,
if ((value != null) && caching) {
if ((value instanceof HFileBlock) && ((HFileBlock) value).isSharedMem()) {
value = HFileBlock.deepCloneOnHeap((HFileBlock) value);
+ cacheBlockUtil(cacheKey, value, true);
Review comment:
Without this if/else, we would perform `deepCloneOnHeap()` twice, right?
Just above this line we already do the deep clone, and if we then called
`cacheBlock()` alone, it would perform `deepCloneOnHeap()` again because the
same condition as above holds true, i.e.
```
if ((value instanceof HFileBlock) && ((HFileBlock) value).isSharedMem())
```
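To make the double-clone concern concrete, here is a minimal, self-contained sketch. All names in it (`Block`, `cacheBlock`, `cacheBlockUtil`, the clone counter) are illustrative stand-ins, not the actual HBase API, and it assumes, as the comment describes, that the value handed to the plain caching path would still satisfy the shared-memory check, so an unguarded call clones a second time:

```java
public class DoubleCloneSketch {
  // Counts how many deep clones are performed.
  static int cloneCount = 0;

  static class Block {
    final boolean sharedMem;
    Block(boolean sharedMem) { this.sharedMem = sharedMem; }
    // Assumption mirroring the review comment: the clone still reports
    // shared memory, so it would pass the caching path's check again.
    Block deepCloneOnHeap() { cloneCount++; return new Block(sharedMem); }
  }

  // Plain caching path: deep-clones any block that looks shared.
  static Block cacheBlock(Block value) {
    return value.sharedMem ? value.deepCloneOnHeap() : value;
  }

  // Guarded path: the caller signals it already cloned, so cache as-is.
  static Block cacheBlockUtil(Block value, boolean alreadyCloned) {
    return alreadyCloned ? value : cacheBlock(value);
  }

  public static void main(String[] args) {
    Block cloned = new Block(true).deepCloneOnHeap(); // first clone
    cacheBlockUtil(cloned, true);                     // guarded: no second clone
    System.out.println("clones after guarded path: " + cloneCount);   // prints 1

    cloneCount = 0;
    cloned = new Block(true).deepCloneOnHeap();       // first clone
    cacheBlock(cloned);                               // unguarded: clones again
    System.out.println("clones after unguarded path: " + cloneCount); // prints 2
  }
}
```

The guard simply carries the "already cloned" fact across the call boundary so the shared-memory check is not re-applied to a block that was just copied on-heap.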
##########
File path:
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java
##########
@@ -890,4 +892,60 @@ public void testDBEShipped() throws IOException {
writer.close();
}
}
+
+ /**
+ * Test case for CombinedBlockCache with TinyLfu as L1 cache
+ */
+ @Test
+ public void testReaderWithTinyLfuCombinedBlockCache() throws Exception {
+ testReaderCombinedCache(true);
+ }
+
+ /**
+ * Test case for CombinedBlockCache with AdaptiveLRU as L1 cache
+ */
+ @Test
+ public void testReaderWithAdaptiveLruCombinedBlockCache() throws Exception {
+ testReaderCombinedCache(false);
+ }
+
+ private void testReaderCombinedCache(final boolean isTinyLfu) throws Exception {
+ int bufCount = 1024;
+ int blockSize = 64 * 1024;
+ ByteBuffAllocator alloc = initAllocator(true, bufCount, blockSize, 0);
+ fillByteBuffAllocator(alloc, bufCount);
+ Path storeFilePath = writeStoreFile();
+ // Open the file reader with CombinedBlockCache
+ BlockCache combined = initCombinedBlockCache(isTinyLfu ? "TinyLfu" : "AdaptiveLRU");
+ conf.setBoolean(EVICT_BLOCKS_ON_CLOSE_KEY, true);
+ CacheConfig cacheConfig = new CacheConfig(conf, null, combined, alloc);
+ HFile.Reader reader = HFile.createReader(fs, storeFilePath, cacheConfig, true, conf);
+ long offset = 0;
+ while (offset < reader.getTrailer().getLoadOnOpenDataOffset()) {
+ BlockCacheKey key = new BlockCacheKey(storeFilePath.getName(), offset);
+ HFileBlock block = reader.readBlock(offset, -1, true, true, false, true, null, null);
+ offset += block.getOnDiskSizeWithHeader();
+ // Read the cached block.
+ Cacheable cachedBlock = combined.getBlock(key, false, false, true);
+ try {
+ Assert.assertNotNull(cachedBlock);
+ Assert.assertTrue(cachedBlock instanceof HFileBlock);
+ HFileBlock hfb = (HFileBlock) cachedBlock;
+ // Data block will be cached in BucketCache, so it should be an off-heap block.
+ if (hfb.getBlockType().isData()) {
+ Assert.assertTrue(hfb.isSharedMem());
+ } else if (!isTinyLfu) {
+ Assert.assertFalse(hfb.isSharedMem());
Review comment:
Yes, that's what it looks like; I only learned of it from this test. Shall we
keep this check, or do you think the different treatment of non-data blocks is
an issue?
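For reference, the expectation the test encodes can be written down as a small decision table. This is a sketch of the test's assertions only; the class and method names are illustrative, not HBase API. Data blocks land in the off-heap L2 BucketCache and stay shared, non-data blocks land in the on-heap L1 and are exclusive, except that for the TinyLfu L1 the test deliberately asserts nothing about non-data blocks:

```java
import java.util.Optional;

public class BlockPlacementSketch {
  /**
   * What the test asserts about isSharedMem() for a cached block.
   * Optional.empty() means the test makes no claim
   * (the TinyLfu-L1 non-data case).
   */
  static Optional<Boolean> expectedSharedMem(boolean isDataBlock, boolean isTinyLfu) {
    if (isDataBlock) {
      // Data blocks are cached in the BucketCache (L2), hence off-heap/shared.
      return Optional.of(true);
    }
    if (!isTinyLfu) {
      // Non-data blocks sit in the on-heap L1, hence exclusive memory.
      return Optional.of(false);
    }
    // TinyLfu L1: the test leaves non-data blocks unchecked.
    return Optional.empty();
  }

  public static void main(String[] args) {
    System.out.println("data, any L1      -> " + expectedSharedMem(true, true));
    System.out.println("non-data, LRU L1  -> " + expectedSharedMem(false, false));
    System.out.println("non-data, TinyLfu -> " + expectedSharedMem(false, true));
  }
}
```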
##########
File path:
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java
##########
@@ -890,4 +892,60 @@ public void testDBEShipped() throws IOException {
writer.close();
}
}
+
+ /**
+ * Test case for CombinedBlockCache with TinyLfu as L1 cache
+ */
+ @Test
+ public void testReaderWithTinyLfuCombinedBlockCache() throws Exception {
+ testReaderCombinedCache(true);
+ }
+
+ /**
+ * Test case for CombinedBlockCache with AdaptiveLRU as L1 cache
+ */
+ @Test
+ public void testReaderWithAdaptiveLruCombinedBlockCache() throws Exception {
+ testReaderCombinedCache(false);
+ }
+
+ private void testReaderCombinedCache(final boolean isTinyLfu) throws Exception {
+ int bufCount = 1024;
+ int blockSize = 64 * 1024;
+ ByteBuffAllocator alloc = initAllocator(true, bufCount, blockSize, 0);
+ fillByteBuffAllocator(alloc, bufCount);
+ Path storeFilePath = writeStoreFile();
+ // Open the file reader with CombinedBlockCache
+ BlockCache combined = initCombinedBlockCache(isTinyLfu ? "TinyLfu" : "AdaptiveLRU");
+ conf.setBoolean(EVICT_BLOCKS_ON_CLOSE_KEY, true);
+ CacheConfig cacheConfig = new CacheConfig(conf, null, combined, alloc);
+ HFile.Reader reader = HFile.createReader(fs, storeFilePath, cacheConfig, true, conf);
+ long offset = 0;
+ while (offset < reader.getTrailer().getLoadOnOpenDataOffset()) {
+ BlockCacheKey key = new BlockCacheKey(storeFilePath.getName(), offset);
+ HFileBlock block = reader.readBlock(offset, -1, true, true, false, true, null, null);
+ offset += block.getOnDiskSizeWithHeader();
+ // Read the cached block.
+ Cacheable cachedBlock = combined.getBlock(key, false, false, true);
+ try {
+ Assert.assertNotNull(cachedBlock);
+ Assert.assertTrue(cachedBlock instanceof HFileBlock);
+ HFileBlock hfb = (HFileBlock) cachedBlock;
+ // Data block will be cached in BucketCache, so it should be an off-heap block.
+ if (hfb.getBlockType().isData()) {
+ Assert.assertTrue(hfb.isSharedMem());
+ } else if (!isTinyLfu) {
+ Assert.assertFalse(hfb.isSharedMem());
+ }
+ } finally {
+ cachedBlock.release();
+ }
+ block.release(); // return the ByteBuffer back to the allocator.
+ }
+ reader.close();
+ combined.shutdown();
+ Assert.assertEquals(bufCount, alloc.getFreeBufferCount());
+ alloc.clean();
Review comment:
Sure, let me take it up.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]