[
https://issues.apache.org/jira/browse/HDFS-11285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15907420#comment-15907420
]
Lantao Jin commented on HDFS-11285:
-----------------------------------
Hi [~andrew.wang], I found many log entries like the following in the NameNode:
{code}
2017-03-13 03:59:52,620 INFO org.apache.hadoop.hdfs.server.blockmanagement.DecommissionManager: Block: blk_13651215184_1113964818077{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=2, replicas=[ReplicaUC[[DISK]DS-a74fff1e-dc86-4e60-8e69-9c9023a7fd3c:NORMAL:10.115.21.54:50010|RBW]]}, Expected Replicas: 3, live replicas: 0, corrupt replicas: 0, decommissioned replicas: 0, decommissioning replicas: 1, excess replicas: 0, Is Open File: true, Datanodes having this block: 10.115.21.54:50010 , Current Datanode: 10.115.21.54:50010, Is current datanode decommissioning: true
{code}
*But the block blk_13651215184 can't be found on that node 10.115.21.54.*
Notice the UCState=COMMITTED: it keeps the node in DECOMMISSION_INPROGRESS, so
decommissioning never completes.
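For illustration, here is a minimal, self-contained sketch of the situation above (the class and method names are hypothetical assumptions, not the real DecommissionManager internals): a COMMITTED block on an open file whose only replica sits on the decommissioning node can never be counted as sufficiently replicated, so the node stays in DECOMMISSION_INPROGRESS indefinitely.
{code:java}
// Illustrative sketch only; names below are hypothetical, not HDFS internals.
public class CommittedBlockSketch {

  enum BlockUCState { COMPLETE, COMMITTED, UNDER_CONSTRUCTION }

  static class TrackedBlock {
    final BlockUCState ucState;
    final int liveReplicas;
    final int expectedReplicas;

    TrackedBlock(BlockUCState ucState, int liveReplicas, int expectedReplicas) {
      this.ucState = ucState;
      this.liveReplicas = liveReplicas;
      this.expectedReplicas = expectedReplicas;
    }

    // A decommissioning node is only released once every block it holds is
    // sufficiently replicated elsewhere. A COMMITTED block with zero live
    // replicas never satisfies this, so the check stays false forever.
    boolean sufficientlyReplicated() {
      return ucState == BlockUCState.COMPLETE && liveReplicas >= expectedReplicas;
    }
  }

  public static void main(String[] args) {
    // Mirrors the log line above: UCState=COMMITTED, live replicas 0, expected 3.
    TrackedBlock blk = new TrackedBlock(BlockUCState.COMMITTED, 0, 3);
    System.out.println("Node may leave DECOMMISSION_INPROGRESS: "
        + blk.sufficientlyReplicated()); // prints false, and always will
  }
}
{code}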
> Dead DataNodes stay in (Dead, DECOMMISSION_INPROGRESS) for a long time and never
> transition to (Dead, DECOMMISSIONED)
> ------------------------------------------------------------------------------------------------------------------
>
> Key: HDFS-11285
> URL: https://issues.apache.org/jira/browse/HDFS-11285
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 2.7.1
> Reporter: Lantao Jin
> Attachments: DecomStatus.png
>
>
> We have seen the use case of decommissioning DataNodes that are already dead
> or unresponsive and not expected to rejoin the cluster. In a large cluster we
> found more than 100 nodes that were dead and decommissioning, while their
> {{Under replicated blocks}} and {{Blocks with no live replicas}} counts were
> all ZERO. This was actually fixed in
> [HDFS-7374|https://issues.apache.org/jira/browse/HDFS-7374]; after that,
> running refreshNodes twice could eliminate this case. But it seems the patch
> was lost in the [HDFS-7411|https://issues.apache.org/jira/browse/HDFS-7411]
> refactor. We are using a Hadoop version based on 2.7.1, and only the
> following operations can transition the status from {{Dead,
> DECOMMISSION_INPROGRESS}} to {{Dead, DECOMMISSIONED}}:
> # Remove it from the hdfs-exclude file
> # refreshNodes
> # Re-add it to the hdfs-exclude file
> # refreshNodes
> So, why was this code removed in the refactored DecommissionManager?
> {code:java}
> if (!node.isAlive) {
>   LOG.info("Dead node " + node + " is decommissioned immediately.");
>   node.setDecommissioned();
> }
> {code}
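> As a rough illustration only (the types and method names below are hypothetical, not the actual refactored DecommissionManager code), a minimal sketch of the dead-node short-circuit that the removed lines provided: an already-dead node in DECOMMISSION_INPROGRESS is marked DECOMMISSIONED immediately instead of waiting for block checks that can never succeed.
> {code:java}
> // Hypothetical sketch only; not the real HDFS classes.
> class NodeState {
>   boolean alive;
>   boolean decommissionInProgress;
>   boolean decommissioned;
>
>   void setDecommissioned() {
>     decommissionInProgress = false;
>     decommissioned = true;
>   }
> }
>
> class MonitorSketch {
>   void check(NodeState node) {
>     if (node.decommissionInProgress && !node.alive) {
>       // Dead node: none of its blocks can ever be confirmed as replicated
>       // from it, so transition immediately instead of leaving the node in
>       // (Dead, DECOMMISSION_INPROGRESS) forever.
>       node.setDecommissioned();
>       return;
>     }
>     // ...otherwise scan the node's blocks for sufficient replication...
>   }
> }
> {code}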