[jira] [Updated] (HDFS-13246) FileInputStream redundant closes in readReplicasFromCache

2018-03-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HDFS-13246:
--
Fix Version/s: 3.1.0

> FileInputStream redundant closes in readReplicasFromCache 
> --
>
> Key: HDFS-13246
> URL: https://issues.apache.org/jira/browse/HDFS-13246
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: liaoyuxiangqin
>Assignee: liaoyuxiangqin
>Priority: Minor
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HDFS-13246.001.patch
>
>
> While reading readReplicasFromCache() in the datanode's BlockPoolSlice class,
> I noticed that the following code closes the FileInputStream redundantly.
> The IOUtils.closeStream(inputStream) call in the finally block already
> guarantees that the inputStream is closed correctly, so the explicit
> inputStream.close() can be removed. Thanks.
>  
> {code:java|title=BlockPoolSlice.java|borderStyle=solid}
> FileInputStream inputStream = null;
> try {
>   inputStream = fileIoProvider.getFileInputStream(volume, replicaFile);
>   BlockListAsLongs blocksList =
>       BlockListAsLongs.readFrom(inputStream, maxDataLength);
>   if (blocksList == null) {
>     return false;
>   }
>   for (BlockReportReplica replica : blocksList) {
>     switch (replica.getState()) {
>     case FINALIZED:
>       addReplicaToReplicasMap(replica, tmpReplicaMap,
>           lazyWriteReplicaMap, true);
>       break;
>     case RUR:
>     case RBW:
>     case RWR:
>       addReplicaToReplicasMap(replica, tmpReplicaMap,
>           lazyWriteReplicaMap, false);
>       break;
>     default:
>       break;
>     }
>   }
>   inputStream.close();
>   // Now it is safe to add the replica into volumeMap
>   // In case of any exception during parsing this cache file, fall back
>   // to scan all the files on disk.
>   for (Iterator<ReplicaInfo> iter =
>       tmpReplicaMap.replicas(bpid).iterator(); iter.hasNext(); ) {
>     ReplicaInfo info = iter.next();
>     // We use a lightweight GSet to store replicaInfo, we need to remove
>     // it from one GSet before adding to another.
>     iter.remove();
>     volumeMap.add(bpid, info);
>   }
>   LOG.info("Successfully read replica from cache file : "
>       + replicaFile.getPath());
>   return true;
> } catch (Exception e) {
>   // Any exception we need to revert back to read from disk
>   // Log the error and return false
>   LOG.info("Exception occurred while reading the replicas cache file: "
>       + replicaFile.getPath(), e);
>   return false;
> } finally {
>   if (!fileIoProvider.delete(volume, replicaFile)) {
>     LOG.info("Failed to delete replica cache file: " +
>         replicaFile.getPath());
>   }
>   // close the inputStream
>   IOUtils.closeStream(inputStream);
> }
> {code}
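> For illustration only, a minimal sketch of the same structure with the
> explicit close removed (this is just the intent of the change, not the
> attached HDFS-13246.001.patch; the replica-map handling is elided):
> {code:java}
> FileInputStream inputStream = null;
> try {
>   inputStream = fileIoProvider.getFileInputStream(volume, replicaFile);
>   BlockListAsLongs blocksList =
>       BlockListAsLongs.readFrom(inputStream, maxDataLength);
>   if (blocksList == null) {
>     return false;
>   }
>   // ... populate tmpReplicaMap and volumeMap exactly as above ...
>   // No explicit inputStream.close() here: the finally block below runs on
>   // both the success and the failure path and closes the stream.
>   LOG.info("Successfully read replica from cache file : "
>       + replicaFile.getPath());
>   return true;
> } catch (Exception e) {
>   LOG.info("Exception occurred while reading the replicas cache file: "
>       + replicaFile.getPath(), e);
>   return false;
> } finally {
>   if (!fileIoProvider.delete(volume, replicaFile)) {
>     LOG.info("Failed to delete replica cache file: " +
>         replicaFile.getPath());
>   }
>   // Hadoop's IOUtils.closeStream() is null-safe and ignores IOException,
>   // so the stream is closed here whether or not the read succeeded.
>   IOUtils.closeStream(inputStream);
> }
> {code}
> Since closing an already-closed FileInputStream is a no-op, removing the
> in-line close does not change behavior; it only avoids the duplicate call.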






[jira] [Updated] (HDFS-13246) FileInputStream redundant closes in readReplicasFromCache

2018-03-14 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13246:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks for the contribution, [~liaoyuxiangqin], and thanks
to Rushabh for the review.

> FileInputStream redundant closes in readReplicasFromCache 
> --
>
> Key: HDFS-13246
> URL: https://issues.apache.org/jira/browse/HDFS-13246
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: liaoyuxiangqin
>Assignee: liaoyuxiangqin
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HDFS-13246.001.patch
>






[jira] [Updated] (HDFS-13246) FileInputStream redundant closes in readReplicasFromCache

2018-03-14 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13246:
-
Priority: Minor  (was: Major)

> FileInputStream redundant closes in readReplicasFromCache 
> --
>
> Key: HDFS-13246
> URL: https://issues.apache.org/jira/browse/HDFS-13246
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: liaoyuxiangqin
>Assignee: liaoyuxiangqin
>Priority: Minor
> Attachments: HDFS-13246.001.patch
>






[jira] [Updated] (HDFS-13246) FileInputStream redundant closes in readReplicasFromCache

2018-03-07 Thread liaoyuxiangqin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liaoyuxiangqin updated HDFS-13246:
--
Attachment: HDFS-13246.001.patch

> FileInputStream redundant closes in readReplicasFromCache 
> --
>
> Key: HDFS-13246
> URL: https://issues.apache.org/jira/browse/HDFS-13246
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: liaoyuxiangqin
>Priority: Major
> Attachments: HDFS-13246.001.patch
>






[jira] [Updated] (HDFS-13246) FileInputStream redundant closes in readReplicasFromCache

2018-03-07 Thread liaoyuxiangqin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liaoyuxiangqin updated HDFS-13246:
--
Status: Patch Available  (was: Open)

> FileInputStream redundant closes in readReplicasFromCache 
> --
>
> Key: HDFS-13246
> URL: https://issues.apache.org/jira/browse/HDFS-13246
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: liaoyuxiangqin
>Priority: Major
> Attachments: HDFS-13246.001.patch
>


