[ 
https://issues.apache.org/jira/browse/HDFS-15945?focusedWorklogId=577286&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-577286
 ]

ASF GitHub Bot logged work on HDFS-15945:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 06/Apr/21 02:55
            Start Date: 06/Apr/21 02:55
    Worklog Time Spent: 10m 
      Work Description: tasanuma commented on a change in pull request #2854:
URL: https://github.com/apache/hadoop/pull/2854#discussion_r607453382



##########
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
##########
@@ -4584,8 +4584,14 @@ void processExtraRedundancyBlocksOnInService(
    */
   boolean isNodeHealthyForDecommissionOrMaintenance(DatanodeDescriptor node) {
     if (!node.checkBlockReportReceived()) {
-      LOG.info("Node {} hasn't sent its first block report.", node);
-      return false;
+      if (node.getCapacity() == 0 && node.getNumBlocks() == 0) {

Review comment:
       Thanks for your comment, @virajjasani.
   
   > But is it possible to have 0 numBlocks but capacity > 0 under any 
circumstances?
   
   Yes, when we add a new datanode, it usually has 0 numBlocks and capacity > 0.
   
   > Should we handle it if that is possible?
   
   Oh, after thinking about it, the capacity doesn't matter; it may be 
considered safe to decommission as long as numBlocks is 0.
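
   The conclusion above could be sketched as follows. This is a hypothetical, 
standalone simplification for illustration only: `NodeStats` and 
`isHealthyForDecommission` are made-up names standing in for the real 
`DatanodeDescriptor`/`BlockManager` types, not the actual Hadoop API.

```java
// Hypothetical sketch of the health check discussed above: a node that has
// not yet sent its first block report is still treated as healthy for
// decommission when it holds no blocks, regardless of reported capacity.
public class DecommissionCheckSketch {
  // Stand-in for the fields the real check reads from DatanodeDescriptor.
  static final class NodeStats {
    final long capacity;
    final int numBlocks;
    final boolean blockReportReceived;

    NodeStats(long capacity, int numBlocks, boolean blockReportReceived) {
      this.capacity = capacity;
      this.numBlocks = numBlocks;
      this.blockReportReceived = blockReportReceived;
    }
  }

  static boolean isHealthyForDecommission(NodeStats node) {
    if (!node.blockReportReceived) {
      // No first block report yet: safe to decommission only if there is
      // nothing to replicate, i.e. the node holds zero blocks.
      return node.numBlocks == 0;
    }
    // (The real method also inspects failed storage volumes, omitted here.)
    return true;
  }
}
```

   With this logic, the new-datanode case (numBlocks == 0, capacity > 0) 
decommissions immediately, while a node that holds blocks but has not 
reported still blocks the decommission as before.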
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 577286)
    Time Spent: 1h  (was: 50m)

> DataNodes with zero capacity and zero blocks should be decommissioned 
> immediately
> ---------------------------------------------------------------------------------
>
>                 Key: HDFS-15945
>                 URL: https://issues.apache.org/jira/browse/HDFS-15945
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Takanobu Asanuma
>            Assignee: Takanobu Asanuma
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> When there is a storage problem, for example, DataNode capacity and block 
> count sometimes become zero.
>  When we tried to decommission those DataNodes, we ran into an issue that the 
> decommission did not complete because the NameNode had not received their 
> first block report.
> {noformat}
> INFO  blockmanagement.DatanodeAdminManager 
> (DatanodeAdminManager.java:startDecommission(183)) - Starting decommission of 
> 127.0.0.1:58343 
> [DISK]DS-a29de094-2b19-4834-8318-76cda3bd86bf:NORMAL:127.0.0.1:58343 with 0 
> blocks
> INFO  blockmanagement.BlockManager 
> (BlockManager.java:isNodeHealthyForDecommissionOrMaintenance(4587)) - Node 
> 127.0.0.1:58343 hasn't sent its first block report.
> INFO  blockmanagement.DatanodeAdminDefaultMonitor 
> (DatanodeAdminDefaultMonitor.java:check(258)) - Node 127.0.0.1:58343 isn't 
> healthy. It needs to replicate 0 more blocks. Decommission In Progress is 
> still in progress.
> {noformat}
> To make matters worse, even if we stopped these DataNodes afterward, they 
> remained in a dead & decommissioning state until the NameNode restarted.
> I think those DataNodes should be decommissioned immediately even if the 
> NameNode hasn't received the first block report.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
