[ https://issues.apache.org/jira/browse/HDFS-11285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15806306#comment-15806306 ]

Andrew Wang commented on HDFS-11285:
------------------------------------

Thanks for the diagram, that's something we should put as a code comment in 
DecommissionManager :)

Like you say, it looks like we don't have a 5->6 transition. 
BlockManager#isNodeHealthyForDecommissionOrMaintenance requires the node to be 
alive, and will actually log a WARN with the procedure you've been running 
(5->4->6). So it seems, at least, that this behavior is intentional.
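For reference, the recommission-then-decommission cycle from the issue looks roughly like this on the command line. This is a sketch, not an endorsed procedure: the exclude-file path and hostname are placeholders, and the actual file location is whatever {{dfs.hosts.exclude}} points at in your hdfs-site.xml. It assumes a live NameNode, so it can't be run outside a cluster.

```shell
# Placeholder path; use the file configured via dfs.hosts.exclude
EXCLUDE_FILE=/etc/hadoop/conf/dfs.exclude

# Steps 1-2: retire the dead node from the exclude list, then refresh
sed -i '/dead-node.example.com/d' "$EXCLUDE_FILE"
hdfs dfsadmin -refreshNodes

# Steps 3-4: re-add the node and refresh again; after this the node
# reportedly lands in (Dead, DECOMMISSIONED) instead of staying stuck
# in (Dead, DECOMMISSION_INPROGRESS)
echo 'dead-node.example.com' >> "$EXCLUDE_FILE"
hdfs dfsadmin -refreshNodes
```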

I'm betting, though, that the straggler blocks on the DN are caused by 
open-for-write files, and I'd prefer to solve that problem rather than add a 
5->6 transition. Could you run {{hdfs fsck}} with {{-openforwrite}} and 
{{-files -blocks -locations}} to confirm? Also check the NN logs, since we 
should be printing information about which blocks are preventing 
decommissioning.
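Concretely, the check could look like the following (the path is a placeholder; point it at the directories that still have blocks on the decommissioning node, and note these commands need a running NameNode):

```shell
# List files still open for write under the path; open files keep
# their last block under-replicated and can stall decommissioning
hdfs fsck /path/to/data -openforwrite

# Show per-file block lists and replica locations to identify which
# blocks still reference the decommissioning DataNode
hdfs fsck /path/to/data -files -blocks -locations
```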

> Dead DataNodes keep a long time in (Dead, DECOMMISSION_INPROGRESS), and never 
> transition to (Dead, DECOMMISSIONED)
> ------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-11285
>                 URL: https://issues.apache.org/jira/browse/HDFS-11285
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.7.1
>            Reporter: Lantao Jin
>         Attachments: DecomStatus.png
>
>
> We have seen the use case of decommissioning DataNodes that are already dead 
> or unresponsive and are not expected to rejoin the cluster. In a large 
> cluster, we found more than 100 nodes that were dead and decommissioning, 
> while their {panel} Under replicated blocks {panel} and {panel} Blocks with 
> no live replicas {panel} were all ZERO. This was actually fixed in 
> [HDFS-7374|https://issues.apache.org/jira/browse/HDFS-7374]. After that, we 
> could run refreshNodes twice to eliminate this case. But it seems this patch 
> was lost in the refactor 
> [HDFS-7411|https://issues.apache.org/jira/browse/HDFS-7411]. We are using a 
> Hadoop version based on 2.7.1, and only the operations below can transition 
> the status from {panel} Dead, DECOMMISSION_INPROGRESS {panel} to 
> {panel} Dead, DECOMMISSIONED {panel}:
> # Retire it from hdfs-exclude
> # refreshNodes
> # Re-add it to hdfs-exclude
> # refreshNodes
> So, why was this code removed in the refactored DecommissionManager?
> {code:java}
> if (!node.isAlive) {
>   LOG.info("Dead node " + node + " is decommissioned immediately.");
>   node.setDecommissioned();
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
