ramkrish86 commented on a change in pull request #257: HBASE-22463 Some paths in HFileScannerImpl did not consider block#release which will exhaust the ByteBuffAllocator
URL: https://github.com/apache/hbase/pull/257#discussion_r288941586
 
 

 ##########
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java
 ##########
 @@ -520,20 +520,18 @@ public HFileScannerImpl(final HFile.Reader reader, final boolean cacheBlocks,
     }
 
     void updateCurrBlockRef(HFileBlock block) {
-      if (block != null && this.curBlock != null &&
-          block.getOffset() == this.curBlock.getOffset()) {
+      if (block != null && curBlock != null && block.getOffset() == curBlock.getOffset()) {
         return;
       }
-      // We don't have to keep ref to EXCLUSIVE type of block
-      if (this.curBlock != null && this.curBlock.usesSharedMemory()) {
 
 Review comment:
   So, as part of this JIRA you have already removed the MemoryType, so even the bucket cache engines like FileIOEngine drop the memory type, and all blocks now go through the release() path by being added to prevBlocks and then getting released. The follow-up PR #268 will remove the need to add to prevBlocks, since that addition becomes a no-op. So now both the bucket cache and reads from HDFS will create onheap/offheap blocks, and the block itself determines which type it is.
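   
   For clarity, a minimal sketch of the "queue in prevBlocks, release later" pattern described above. This is my own illustration, not the actual HFileScannerImpl code; the names BlockRefTrackerSketch, BlockLike and returnBlocks are hypothetical stand-ins for the real HBase classes and methods.
   
       // Illustrative sketch only; all names here are assumptions, not HBase code.
       import java.util.ArrayDeque;
       import java.util.Deque;
   
       class BlockRefTrackerSketch {
   
         // Minimal stand-in for the real HFileBlock: every block, whether it came from
         // the bucket cache or straight from HDFS, exposes an offset and a release()
         // call that returns its buffers to the ByteBuffAllocator.
         interface BlockLike {
           long getOffset();
           boolean release();
         }
   
         // Blocks we have finished scanning but not yet released.
         private final Deque<BlockLike> prevBlocks = new ArrayDeque<>();
         private BlockLike curBlock;
   
         void updateCurrBlockRef(BlockLike block) {
           if (block != null && curBlock != null && block.getOffset() == curBlock.getOffset()) {
             return;
           }
           // With the MemoryType distinction gone, every previous block is queued the
           // same way, so each one gets released exactly once.
           if (curBlock != null) {
             prevBlocks.add(curBlock);
           }
           curBlock = block;
         }
   
         void returnBlocks() {
           // Called on close/seek reset: hand every tracked block back to the allocator.
           while (!prevBlocks.isEmpty()) {
             prevBlocks.poll().release();
           }
           if (curBlock != null) {
             curBlock.release();
             curBlock = null;
           }
         }
       }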

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]

