Apache9 commented on code in PR #5906:
URL: https://github.com/apache/hbase/pull/5906#discussion_r1636576239


##########
hbase-server/src/main/java/org/apache/hadoop/hbase/io/HalfStoreFileReader.java:
##########
@@ -349,4 +353,42 @@ public long getFilterEntries() {
     // Estimate the number of entries as half the original file; this may be wildly inaccurate.
     return super.getFilterEntries() / 2;
   }
+
+  /**
+   * Overrides the close method to handle cache evictions for the referred file. If evictOnClose is
+   * true, we will seek to the block containing the splitCell and evict all blocks from offset 0 up
+   * to that block offset if this is a bottom half reader, or from the split block offset up to the
+   * end of the file if this is a top half reader.
+   * @param evictOnClose true if it should evict the file blocks from the cache.
+   */
+  @Override
+  public void close(boolean evictOnClose) throws IOException {
+    if (closed.compareAndSet(false, true)) {
+      if (evictOnClose && StoreFileInfo.isReference(this.reader.getPath())) {

Review Comment:
   Do we really need to check isReference here? We only use HalfStoreFileReader when the store file is a reference file, right?
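For context on the eviction ranges the quoted javadoc describes, here is a minimal sketch of the offset arithmetic. The `EvictionRange` record and `rangeFor` method are hypothetical stand-ins for illustration only; the actual HBase code evicts via cached block keys rather than returning a raw range:

```java
// Sketch of the eviction-range logic described in the close() javadoc:
// a bottom-half reader evicts blocks in [0, splitBlockOffset); a top-half
// reader evicts blocks from splitBlockOffset to the end of the file.
// EvictionRange and rangeFor are hypothetical names, not HBase API.
public class EvictionRangeSketch {
  record EvictionRange(long startOffset, long endOffset) {}

  static EvictionRange rangeFor(boolean bottom, long splitBlockOffset, long fileSize) {
    return bottom
      ? new EvictionRange(0, splitBlockOffset)         // bottom half: [0, split block)
      : new EvictionRange(splitBlockOffset, fileSize); // top half: [split block, EOF]
  }

  public static void main(String[] args) {
    EvictionRange bottom = rangeFor(true, 4096, 65536);
    EvictionRange top = rangeFor(false, 4096, 65536);
    System.out.println(bottom.startOffset() + "-" + bottom.endOffset()); // 0-4096
    System.out.println(top.startOffset() + "-" + top.endOffset());       // 4096-65536
  }
}
```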



##########
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheUtil.java:
##########
@@ -244,6 +248,45 @@ public static int getMaxCachedBlocksByFile(Configuration conf) {
     return conf == null ? DEFAULT_MAX : conf.getInt("hbase.ui.blockcache.by.file.max", DEFAULT_MAX);
   }
 
+  /**
+   * Similar to HFileBlock.Writer.getBlockForCaching(), creates an HFileBlock instance without

Review Comment:
   Could we share some common code between these two methods? Otherwise, how can we keep them aligned?
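One common way to address this kind of duplication (a sketch with simplified placeholder types, not the actual HFileBlock signature): extract the shared construction into a single factory method that both `HFileBlock.Writer.getBlockForCaching()` and the new `BlockCacheUtil` method delegate to, so the two call sites cannot drift apart:

```java
// Sketch of sharing construction logic between two call sites.
// BlockForCaching, BlockFactorySketch, and the method names are hypothetical
// stand-ins for the real HFileBlock types; the point is the single shared factory.
public class BlockFactorySketch {
  record BlockForCaching(long offset, int uncompressedSize, boolean cacheOnWrite) {}

  // Single source of truth that both callers delegate to.
  static BlockForCaching createBlockForCaching(long offset, int uncompressedSize,
      boolean cacheOnWrite) {
    return new BlockForCaching(offset, uncompressedSize, cacheOnWrite);
  }

  // Stand-in for HFileBlock.Writer.getBlockForCaching()
  static BlockForCaching writerPath(long offset, int size) {
    return createBlockForCaching(offset, size, true);
  }

  // Stand-in for the new BlockCacheUtil helper
  static BlockForCaching cacheUtilPath(long offset, int size) {
    return createBlockForCaching(offset, size, false);
  }

  public static void main(String[] args) {
    System.out.println(writerPath(0, 1024));
    System.out.println(cacheUtilPath(0, 1024));
  }
}
```

With this shape, any future change to how a block-for-caching is built happens in one place, which is one answer to the "how can we keep them aligned" question.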



##########
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CombinedBlockCache.java:
##########
@@ -109,8 +109,6 @@ public Cacheable getBlock(BlockCacheKey cacheKey, boolean caching, boolean repea
         }
       } else {
         if (existInL1) {
-          LOG.warn("Cache key {} had block type {}, but was found in L1 cache.", cacheKey,

Review Comment:
   Why remove this warn log?



##########
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFilePreadReader.java:
##########
@@ -45,7 +45,7 @@ public HFilePreadReader(ReaderContext context, HFileInfo fileInfo, CacheConfig c
     });
 
     // Prefetch file blocks upon open if requested
-    if (cacheConf.shouldPrefetchOnOpen() && cacheIfCompactionsOff() && shouldCache.booleanValue()) {
+    if (cacheConf.shouldPrefetchOnOpen() && shouldCache.booleanValue()) {

Review Comment:
   Why remove cacheIfCompactionsOff?



##########
hbase-server/src/main/java/org/apache/hadoop/hbase/io/HalfStoreFileReader.java:
##########
@@ -349,4 +353,42 @@ public long getFilterEntries() {
     // Estimate the number of entries as half the original file; this may be wildly inaccurate.
     return super.getFilterEntries() / 2;
   }
+
+  /**
+   * Overrides the close method to handle cache evictions for the referred file. If evictOnClose is
+   * true, we will seek to the block containing the splitCell and evict all blocks from offset 0 up
+   * to that block offset if this is a bottom half reader, or from the split block offset up to the
+   * end of the file if this is a top half reader.
+   * @param evictOnClose true if it should evict the file blocks from the cache.
+   */
+  @Override
+  public void close(boolean evictOnClose) throws IOException {
+    if (closed.compareAndSet(false, true)) {
+      if (evictOnClose && StoreFileInfo.isReference(this.reader.getPath())) {
+        final HFileReaderImpl.HFileScannerImpl s =

Review Comment:
   Usually it is not a good idea to use the implementation class directly at the upper layer. Why do we need to use HFileScannerImpl here?
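A sketch of the alternative this comment points toward: declare the scanner against the interface type so the caller never names the implementation class. The `Scanner`/`ScannerImpl` types here are simplified stand-ins for `HFileScanner` and `HFileReaderImpl.HFileScannerImpl`, whose real surfaces are much larger:

```java
// Sketch of depending on an interface rather than an implementation class.
// Scanner and ScannerImpl are simplified stand-ins, not the HBase API.
public class InterfaceSketch {
  interface Scanner {
    boolean seekTo(long offset);
    long currentBlockOffset();
  }

  // Implementation detail the caller should not reference directly.
  static class ScannerImpl implements Scanner {
    private long current = -1;
    public boolean seekTo(long offset) { current = offset; return true; }
    public long currentBlockOffset() { return current; }
  }

  public static void main(String[] args) {
    // The caller holds only the interface type; which impl backs it is
    // an internal choice that can change without touching this code.
    Scanner s = new ScannerImpl();
    s.seekTo(4096);
    System.out.println(s.currentBlockOffset()); // 4096
  }
}
```

If the upper layer genuinely needs something only the implementation exposes, that is usually a sign the missing operation should be promoted onto the interface instead.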



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
