[ https://issues.apache.org/jira/browse/HDFS-11182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15763150#comment-15763150 ]

ASF GitHub Bot commented on HDFS-11182:
---------------------------------------

Github user arp7 commented on a diff in the pull request:

    https://github.com/apache/hadoop/pull/168#discussion_r93168439
  
    --- Diff: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java ---
    @@ -2051,14 +2044,13 @@ public void shutdown() {
        * Check if there is a disk failure asynchronously and if so, handle the error
        */
       public void checkDiskErrorAsync() {
    -    synchronized(checkDiskErrorMutex) {
    -      checkDiskErrorFlag = true;
    -      if(checkDiskErrorThread == null) {
    -        startCheckDiskErrorThread();
    -        checkDiskErrorThread.start();
    -        LOG.info("Starting CheckDiskError Thread");
    -      }
    -    }
    +    volumeChecker.checkAllVolumesAsync(
    +        data, (healthyVolumes, failedVolumes) -> {
    +          LOG.info("checkDiskErrorAsync callback got {} failed volumes: {}",
    +              failedVolumes.size(), failedVolumes);
    +          lastDiskErrorCheck = Time.monotonicNow();
    --- End diff --
    
    The DataNode does not maintain a timer object right now. It is only passed to DatasetVolumeChecker during construction for unit testability of that class.
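
    (For illustration only, not the actual patch: a minimal, self-contained sketch of the injection pattern described above. A checker takes a monotonic clock at construction so a unit test can substitute a deterministic fake; the Clock, SimpleVolumeChecker and FakeClock names below are hypothetical stand-ins, not Hadoop classes.)

        import java.util.concurrent.atomic.AtomicLong;

        /** Hypothetical stand-in for a monotonic time source (cf. Time.monotonicNow()). */
        interface Clock {
          long monotonicNow();
        }

        /** Simplified checker that records when its last check completed, via the injected clock. */
        class SimpleVolumeChecker {
          private final Clock clock;
          private volatile long lastCheck = -1;

          SimpleVolumeChecker(Clock clock) {   // clock injected at construction, like the Timer
            this.clock = clock;
          }

          void onCheckComplete() {
            lastCheck = clock.monotonicNow();  // no direct dependency on wall-clock time
          }

          long getLastCheck() {
            return lastCheck;
          }
        }

        /** Fake clock that a unit test can advance deterministically. */
        class FakeClock implements Clock {
          private final AtomicLong now = new AtomicLong(0);
          @Override
          public long monotonicNow() { return now.get(); }
          void advance(long millis) { now.addAndGet(millis); }
        }

        public class TimerInjectionSketch {
          public static void main(String[] args) {
            FakeClock clock = new FakeClock();
            SimpleVolumeChecker checker = new SimpleVolumeChecker(clock);
            clock.advance(5000);
            checker.onCheckComplete();
            System.out.println("lastCheck = " + checker.getLastCheck());  // prints 5000
          }
        }

    With this shape, a test can assert on the recorded check time without sleeping or depending on real elapsed time.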


> Update DataNode to use DatasetVolumeChecker
> -------------------------------------------
>
>                 Key: HDFS-11182
>                 URL: https://issues.apache.org/jira/browse/HDFS-11182
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode
>            Reporter: Arpit Agarwal
>            Assignee: Arpit Agarwal
>
> Update DataNode to use the DatasetVolumeChecker class introduced in 
> HDFS-11149 to parallelize disk checks.
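
As a rough, hypothetical sketch of the "parallelize disk checks" idea (not the DatasetVolumeChecker implementation from HDFS-11149): probe every volume concurrently and hand the healthy/failed sets to a callback once all probes finish. The Volume class and the checkAllVolumesAsync signature below are illustrative only.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;
    import java.util.Set;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.function.BiConsumer;

    public class ParallelCheckSketch {

      /** Stand-in for a storage volume; isHealthy() simulates the actual disk probe. */
      static final class Volume {
        final String path;
        final boolean healthy;
        Volume(String path, boolean healthy) { this.path = path; this.healthy = healthy; }
        boolean isHealthy() { return healthy; }
        @Override public String toString() { return path; }
      }

      /** Probe every volume concurrently, then hand (healthy, failed) sets to the callback. */
      static void checkAllVolumesAsync(List<Volume> volumes,
                                       BiConsumer<Set<Volume>, Set<Volume>> callback,
                                       ExecutorService pool) {
        Set<Volume> healthy = ConcurrentHashMap.newKeySet();
        Set<Volume> failed = ConcurrentHashMap.newKeySet();
        List<CompletableFuture<Void>> checks = new ArrayList<>();
        for (Volume v : volumes) {
          checks.add(CompletableFuture.runAsync(
              () -> (v.isHealthy() ? healthy : failed).add(v), pool));  // one probe per volume
        }
        // Invoke the callback exactly once, after all probes have completed.
        CompletableFuture.allOf(checks.toArray(new CompletableFuture[0]))
            .thenRun(() -> callback.accept(healthy, failed));
      }

      public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Volume> vols = Arrays.asList(
            new Volume("/data/1", true), new Volume("/data/2", false));
        checkAllVolumesAsync(vols, (healthyVolumes, failedVolumes) ->
            System.out.println("callback got " + failedVolumes.size()
                + " failed volumes: " + failedVolumes), pool);
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
      }
    }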


