sanjeet006py commented on code in PR #7136:
URL: https://github.com/apache/hbase/pull/7136#discussion_r2185159074


##########
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java:
##########
@@ -1114,6 +1115,15 @@ private HFileBlock getCachedBlock(BlockCacheKey cacheKey, boolean cacheBlock, bo
             compressedBlock.release();
           }
         }
+        boolean isScanMetricsEnabled = ThreadLocalServerSideScanMetrics.isScanMetricsEnabled();
+        if (isScanMetricsEnabled) {
+          int cachedBlockBytesRead = cachedBlock.getOnDiskSizeWithHeader();
+          // Account for the header size of the next block if it exists
+          if (cachedBlock.getNextBlockOnDiskSize() > 0) {
+            cachedBlockBytesRead += cachedBlock.headerSize();
+          }
+          ThreadLocalServerSideScanMetrics.addBytesReadFromBlockCache(cachedBlockBytesRead);
+        }

Review Comment:
   Actually, I intentionally placed the code so that those blocks are also counted. I don't have a specific reason for the current approach; the intent was to capture all bytes read, whether useful or not. But I do see value in your suggestion: if we don't count invalid blocks, then whenever such invalid blocks are encountered we will see a drop in the `bytesReadFromBlockcache` metric, which won't happen with the current approach.
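
   For illustration only, here is a minimal, self-contained sketch of the accounting the diff performs: credit the cached block's on-disk size (with header), plus the next block's header when one was read ahead. This uses no HBase classes; the method name and the assumed header size of 33 bytes (checksum-era `HFileBlock` header) are stand-ins, not the actual API.

   ```java
   public class CachedBlockBytesSketch {
     // Assumed HFileBlock header size; illustrative only.
     static final int HEADER_SIZE = 33;

     /**
      * Bytes to credit toward bytesReadFromBlockCache for one cached block.
      * Mirrors the diff: a positive nextBlockOnDiskSize means the next
      * block's header was also fetched while reading this block.
      */
     static int cachedBlockBytesRead(int onDiskSizeWithHeader, int nextBlockOnDiskSize) {
       int bytes = onDiskSizeWithHeader;
       if (nextBlockOnDiskSize > 0) {
         bytes += HEADER_SIZE;
       }
       return bytes;
     }

     public static void main(String[] args) {
       // Mid-file block: next block exists, so its header is counted too.
       System.out.println(cachedBlockBytesRead(65569, 65569));
       // Last block in the file: nothing read ahead (-1 sentinel).
       System.out.println(cachedBlockBytesRead(65569, -1));
     }
   }
   ```

   Under this sketch, skipping invalid blocks would simply mean never calling the accumulator for them, which is where the metric drop described above would come from.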



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to