sunhelly commented on a change in pull request #2699:
URL: https://github.com/apache/hbase/pull/2699#discussion_r532180863



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileInfo.java
##########
@@ -359,24 +360,32 @@ public void initMetaAndIndex(HFile.Reader reader) throws IOException {
     // Initialize an block iterator, and parse load-on-open blocks in the following.
     blockIter = blockReader.blockRange(trailer.getLoadOnOpenDataOffset(),
        context.getFileSize() - trailer.getTrailerSize());
-    // Data index. We also read statistics about the block index written after
-    // the root level.
-    this.dataIndexReader = new HFileBlockIndex
-        .CellBasedKeyBlockIndexReader(trailer.createComparator(), trailer.getNumDataIndexLevels());
-    dataIndexReader.readMultiLevelIndexRoot(blockIter.nextBlockWithBlockType(BlockType.ROOT_INDEX),
-        trailer.getDataIndexCount());
-    reader.setDataBlockIndexReader(dataIndexReader);
-    // Meta index.
-    this.metaIndexReader = new HFileBlockIndex.ByteArrayKeyBlockIndexReader(1);
-    metaIndexReader.readRootIndex(blockIter.nextBlockWithBlockType(BlockType.ROOT_INDEX),
+    try {
+      // Data index. We also read statistics about the block index written after
+      // the root level.
+      this.dataIndexReader =
+        new HFileBlockIndex.CellBasedKeyBlockIndexReader(trailer.createComparator(), trailer.getNumDataIndexLevels());
+      dataIndexReader
+        .readMultiLevelIndexRoot(blockIter.nextBlockWithBlockType(BlockType.ROOT_INDEX), trailer.getDataIndexCount());
+      reader.setDataBlockIndexReader(dataIndexReader);
+      // Meta index.
+      this.metaIndexReader = new HFileBlockIndex.ByteArrayKeyBlockIndexReader(1);
+      metaIndexReader.readRootIndex(blockIter.nextBlockWithBlockType(BlockType.ROOT_INDEX),
         trailer.getMetaIndexCount());
-    reader.setMetaBlockIndexReader(metaIndexReader);
-    loadMetaInfo(blockIter, hfileContext);
-    reader.setDataBlockEncoder(HFileDataBlockEncoderImpl.createFromFileInfo(this));
-    // Load-On-Open info
-    HFileBlock b;
-    while ((b = blockIter.nextBlock()) != null) {
-      loadOnOpenBlocks.add(b);
+      reader.setMetaBlockIndexReader(metaIndexReader);
+      loadMetaInfo(blockIter, hfileContext);
+      reader.setDataBlockEncoder(HFileDataBlockEncoderImpl.createFromFileInfo(this));
+      // Load-On-Open info
+      HFileBlock b;
+      while ((b = blockIter.nextBlock()) != null) {
+        loadOnOpenBlocks.add(b);
+      }
+    } catch (Throwable t) {
+      IOUtils.closeQuietly(context.getInputStreamWrapper());
+      throw new CorruptHFileException("Problem reading data index and meta index from file "
+        + context.getFilePath(), t);
+    } finally {
+      context.getInputStreamWrapper().unbuffer();

Review comment:
       Thanks for reviewing, @Apache9.
   
   `IOUtils.closeQuietly` just calls `stream.close()`; it does not set the stream reference to null. `stream = null` is always assigned explicitly where needed, so I think `context.getInputStreamWrapper().unbuffer()` will not throw an NPE here. The `unbuffer()` method only closes the `blockReader` when it is not null, so `unbuffer` will not throw an NPE either.
   I wrote a UT that creates a context and calls `IOUtils.closeQuietly` followed by `context.getInputStreamWrapper().unbuffer()`, and no exceptions occurred.
   The `try...catch...finally` code here is the same as that in `HFile.createReader`.
   If it still seems a bit inelegant, would adding a `context.getInputStreamWrapper() != null` check be helpful?
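
   The close-then-unbuffer pattern being discussed can be sketched in isolation. This is a minimal, self-contained illustration, not HBase code: `StreamWrapper` is a hypothetical stand-in for the real input-stream wrapper, and the catch block swallows rather than rethrowing a `CorruptHFileException` so the sketch runs cleanly.

   ```java
   import java.io.Closeable;
   import java.io.IOException;

   // Hypothetical stand-in for the real stream wrapper; it only models the
   // two behaviors relevant to the NPE question.
   class StreamWrapper implements Closeable {
     boolean closed = false;
     boolean unbuffered = false;

     @Override
     public void close() {
       // Like IOUtils.closeQuietly(stream): the stream is closed, but the
       // reference callers hold is NOT set to null.
       closed = true;
     }

     public void unbuffer() {
       // Guarded internally (the real method only touches its block reader
       // when non-null), so calling this after close() does not NPE.
       unbuffered = true;
     }
   }

   public class UnbufferAfterCloseSketch {
     public static void main(String[] args) {
       StreamWrapper wrapper = new StreamWrapper();
       try {
         throw new IOException("simulated corrupt index");
       } catch (Throwable t) {
         // Real code would rethrow here, wrapped in CorruptHFileException.
         wrapper.close();
       } finally {
         if (wrapper != null) {   // the extra null check suggested above
           wrapper.unbuffer();    // safe even after close()
         }
       }
     }
   }
   ```

   Running the sketch, `unbuffer()` completes after `close()` without a `NullPointerException`, which mirrors the UT result described above.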
   




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

