[ https://issues.apache.org/jira/browse/HDFS-11182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15762358#comment-15762358 ]

ASF GitHub Bot commented on HDFS-11182:
---------------------------------------

Github user xiaoyuyao commented on a diff in the pull request:

    https://github.com/apache/hadoop/pull/168#discussion_r93123146
  
    --- Diff: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java ---
    @@ -235,23 +233,14 @@ public void run() {
        * Use {@link checkDirsLock} to allow only one instance of checkDirs() call.
        *
        * @return list of all the failed volumes.
    +   * @param failedVolumes
        */
    -  Set<StorageLocation> checkDirs() {
    +  void handleVolumeFailures(Set<FsVolumeSpi> failedVolumes) {
         try (AutoCloseableLock lock = checkDirsLock.acquire()) {
    -      Set<StorageLocation> failedLocations = null;
    -      // Make a copy of volumes for performing modification 
    -      final List<FsVolumeImpl> volumeList = getVolumes();
     
    -      for(Iterator<FsVolumeImpl> i = volumeList.iterator(); i.hasNext(); ) {
    -        final FsVolumeImpl fsv = i.next();
    +      for(FsVolumeSpi vol : failedVolumes) {
    +        FsVolumeImpl fsv = (FsVolumeImpl) vol;
    --- End diff --
    
    Is the cast from FsVolumeSpi to FsVolumeImpl safe? Can we add some logging here in case the cast fails?
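    
    For illustration, a defensive version of that loop might look like the
    sketch below. This is only a sketch, assuming the class's existing LOG
    field is in scope; the rest of the failure handling is elided.
    
        for (FsVolumeSpi vol : failedVolumes) {
          if (!(vol instanceof FsVolumeImpl)) {
            // Log and skip instead of letting a ClassCastException abort the loop.
            LOG.warn("Skipping volume " + vol + ": expected FsVolumeImpl but got "
                + vol.getClass().getName());
            continue;
          }
          FsVolumeImpl fsv = (FsVolumeImpl) vol;
          // ... existing handling of the failed volume fsv ...
        }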


> Update DataNode to use DatasetVolumeChecker
> -------------------------------------------
>
>                 Key: HDFS-11182
>                 URL: https://issues.apache.org/jira/browse/HDFS-11182
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode
>            Reporter: Arpit Agarwal
>            Assignee: Arpit Agarwal
>
> Update DataNode to use the DatasetVolumeChecker class introduced in 
> HDFS-11149 to parallelize disk checks.
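
As background, the kind of parallel disk check this sub-task describes can be
sketched roughly as below. This is an illustrative, self-contained sketch only,
not the actual DatasetVolumeChecker API; the class and helper names in it
(VolumeCheckSketch, checkVolume) are hypothetical.

    import java.util.*;
    import java.util.concurrent.*;

    // Illustrative only: run one health check per volume concurrently and
    // collect the volumes whose check reported failure.
    public class VolumeCheckSketch {

      // Hypothetical stand-in for a real disk probe (e.g. writing a small file
      // on the volume and reading it back).
      static boolean checkVolume(String volume) {
        return !volume.contains("bad");
      }

      public static void main(String[] args) throws Exception {
        List<String> volumes = Arrays.asList("/data/1", "/data/2", "/data/bad3");
        ExecutorService pool = Executors.newFixedThreadPool(volumes.size());

        // Submit all checks up front so they run in parallel.
        Map<String, Future<Boolean>> results = new LinkedHashMap<>();
        for (String v : volumes) {
          results.put(v, pool.submit(() -> checkVolume(v)));
        }

        // Wait for each result and remember the failed volumes.
        Set<String> failed = new HashSet<>();
        for (Map.Entry<String, Future<Boolean>> e : results.entrySet()) {
          if (!e.getValue().get()) {
            failed.add(e.getKey());
          }
        }
        pool.shutdown();
        System.out.println("Failed volumes: " + failed);
      }
    }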



