liaoyuxiangqin created HDFS-13246:
-------------------------------------

             Summary: FileInputStream redundant closes in readReplicasFromCache 
                 Key: HDFS-13246
                 URL: https://issues.apache.org/jira/browse/HDFS-13246
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: datanode
    Affects Versions: 3.2.0
            Reporter: liaoyuxiangqin


While reading readReplicasFromCache() in the datanode's BlockPoolSlice class, I found that the following code closes the FileInputStream redundantly. The IOUtils.closeStream(inputStream) call in the finally block already guarantees that inputStream is closed correctly on every path, so the explicit inputStream.close() can be removed. Thanks.

{code:java|title=BlockPoolSlice.java|borderStyle=solid}
FileInputStream inputStream = null;
    try {
      inputStream = fileIoProvider.getFileInputStream(volume, replicaFile);
      BlockListAsLongs blocksList =
          BlockListAsLongs.readFrom(inputStream, maxDataLength);
      if (blocksList == null) {
        return false;
      }

      for (BlockReportReplica replica : blocksList) {
        switch (replica.getState()) {
        case FINALIZED:
          addReplicaToReplicasMap(replica, tmpReplicaMap, lazyWriteReplicaMap, true);
          break;
        case RUR:
        case RBW:
        case RWR:
          addReplicaToReplicasMap(replica, tmpReplicaMap, lazyWriteReplicaMap, false);
          break;
        default:
          break;
        }
      }
      // Redundant: the finally block below already closes inputStream
      // via IOUtils.closeStream(), so this call can be removed.
      inputStream.close();

      // Now it is safe to add the replica into volumeMap
      // In case of any exception during parsing this cache file, fall back
      // to scan all the files on disk.
      for (Iterator<ReplicaInfo> iter =
          tmpReplicaMap.replicas(bpid).iterator(); iter.hasNext(); ) {
        ReplicaInfo info = iter.next();
        // We use a lightweight GSet to store replicaInfo, we need to remove
        // it from one GSet before adding to another.
        iter.remove();
        volumeMap.add(bpid, info);
      }
      LOG.info("Successfully read replica from cache file : "
          + replicaFile.getPath());
      return true;
    } catch (Exception e) {
      // Any exception we need to revert back to read from disk
      // Log the error and return false
      LOG.info("Exception occurred while reading the replicas cache file: "
          + replicaFile.getPath(), e);
      return false;
    }
    finally {
      if (!fileIoProvider.delete(volume, replicaFile)) {
        LOG.info("Failed to delete replica cache file: " +
            replicaFile.getPath());
      }

      // close the inputStream
      IOUtils.closeStream(inputStream);
    }

{code}
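To illustrate the point outside of Hadoop, here is a minimal, self-contained sketch of the same pattern. The names (readCache, closeQuietly) are hypothetical stand-ins: closeQuietly plays the role of IOUtils.closeStream (null-safe, swallows the close exception). Because the finally block closes the stream on both the success and the exception path, the in-body close adds nothing.

{code:java|title=CloseOnce.java|borderStyle=solid}
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class CloseOnce {
  // Hypothetical reader mirroring the readReplicasFromCache pattern.
  static boolean readCache(byte[] data) {
    InputStream in = null;
    try {
      in = new ByteArrayInputStream(data);
      while (in.read() != -1) {
        // consume the cached data
      }
      // no explicit in.close() here -- finally handles every path
      return true;
    } catch (IOException e) {
      return false;
    } finally {
      closeQuietly(in); // analogous to IOUtils.closeStream(inputStream)
    }
  }

  // Minimal stand-in for IOUtils.closeStream: null-safe, ignores close errors.
  static void closeQuietly(InputStream in) {
    if (in != null) {
      try {
        in.close();
      } catch (IOException ignored) {
      }
    }
  }

  public static void main(String[] args) {
    System.out.println(readCache(new byte[] {1, 2, 3}));
  }
}
{code}

Alternatively, the whole null-check-and-finally dance could be replaced with try-with-resources, which closes the stream automatically at the end of the try block.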



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
