[ https://issues.apache.org/jira/browse/HDFS-11182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15762486#comment-15762486 ]

ASF GitHub Bot commented on HDFS-11182:
---------------------------------------

Github user xiaoyuyao commented on a diff in the pull request:

    https://github.com/apache/hadoop/pull/168#discussion_r93132992
  
    --- Diff: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java ---
    @@ -235,23 +233,14 @@ public void run() {
        * Use {@link checkDirsLock} to allow only one instance of checkDirs() call.
        *
        * @return list of all the failed volumes.
    +   * @param failedVolumes
        */
    -  Set<StorageLocation> checkDirs() {
    +  void handleVolumeFailures(Set<FsVolumeSpi> failedVolumes) {
         try (AutoCloseableLock lock = checkDirsLock.acquire()) {
    -      Set<StorageLocation> failedLocations = null;
    -      // Make a copy of volumes for performing modification 
    -      final List<FsVolumeImpl> volumeList = getVolumes();
     
    -      for(Iterator<FsVolumeImpl> i = volumeList.iterator(); i.hasNext(); ) {
    -        final FsVolumeImpl fsv = i.next();
    +      for(FsVolumeSpi vol : failedVolumes) {
    +        FsVolumeImpl fsv = (FsVolumeImpl) vol;
    --- End diff ---
    
    Thanks for the explanation.  Looks good to me. 
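The diff above guards the volume-failure handling with a lock acquired in a try-with-resources block (`try (AutoCloseableLock lock = checkDirsLock.acquire())`). As a minimal illustrative sketch of that pattern, here is a re-implementation of an auto-closeable lock wrapper; the class and method names mirror the diff but this is not Hadoop's actual `org.apache.hadoop.util.AutoCloseableLock` source.

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the AutoCloseableLock pattern used in the diff: wrapping a
// ReentrantLock so that a critical section can be guarded by
// try-with-resources, which releases the lock on every exit path.
class AutoCloseableLock implements AutoCloseable {
    private final ReentrantLock lock = new ReentrantLock();

    // Acquire the lock and return this object so try-with-resources
    // can release it automatically.
    AutoCloseableLock acquire() {
        lock.lock();
        return this;
    }

    @Override
    public void close() {
        lock.unlock();
    }

    // Exposed for illustration/testing only.
    boolean isLocked() {
        return lock.isLocked();
    }
}
```

This keeps lock release correct even if the guarded block throws, which is why only one `checkDirs()` (now `handleVolumeFailures()`) instance can run at a time.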


> Update DataNode to use DatasetVolumeChecker
> -------------------------------------------
>
>                 Key: HDFS-11182
>                 URL: https://issues.apache.org/jira/browse/HDFS-11182
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode
>            Reporter: Arpit Agarwal
>            Assignee: Arpit Agarwal
>
> Update DataNode to use the DatasetVolumeChecker class introduced in 
> HDFS-11149 to parallelize disk checks.
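The issue description refers to parallelizing disk checks via DatasetVolumeChecker (HDFS-11149). As a hedged, self-contained sketch of that idea, and not the actual Hadoop implementation, the following checks each volume concurrently and collects the failed ones into a set; `ParallelVolumeCheck`, `checkAll`, and `isHealthy` are hypothetical names for illustration.

```java
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.CompletableFuture;

// Illustrative sketch only: run a health check per volume in parallel
// and return the set of volumes whose check failed, in the spirit of
// DatasetVolumeChecker (HDFS-11149).
class ParallelVolumeCheck {
    static Set<String> checkAll(List<String> volumes) {
        // Launch one asynchronous check per volume.
        Map<String, CompletableFuture<Boolean>> results = new LinkedHashMap<>();
        for (String vol : volumes) {
            results.put(vol, CompletableFuture.supplyAsync(() -> isHealthy(vol)));
        }
        // Wait for all checks and collect the failures.
        Set<String> failed = new HashSet<>();
        results.forEach((vol, future) -> {
            if (!future.join()) {
                failed.add(vol);
            }
        });
        return failed;
    }

    // Stand-in health check for the sketch: a real checker would probe
    // the disk; here a volume path containing "bad" counts as failed.
    static boolean isHealthy(String volume) {
        return !volume.contains("bad");
    }
}
```

The set of failed volumes produced this way is what a method like the diff's `handleVolumeFailures(Set<FsVolumeSpi> failedVolumes)` would then consume under the lock.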



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
