[
https://issues.apache.org/jira/browse/HDFS-11790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16022321#comment-16022321
]
Ming Ma commented on HDFS-11790:
--------------------------------
Thanks [~manojg] for reporting this. Hmm, the existing code should take care of
this. I wonder if it is due to some corner case where the following functions
don't skip maintenance nodes properly:
* BlockManager#createLocatedBlock should skip IN_MAINTENANCE nodes.
* BlockManager#chooseSourceDatanodes should skip nodes counted as
MAINTENANCE_NOT_FOR_READ, i.e. IN_MAINTENANCE nodes.
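
The intended filtering could be sketched roughly as below. This is a simplified, hypothetical illustration, not the real BlockManager code: DatanodeInfo, AdminStates, and chooseSourceDatanodes here are stand-ins modeling only the admin-state check, on the assumption that a node whose admin state is IN_MAINTENANCE must never be picked as a re-replication source (its DataNode process may be stopped).

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the maintenance-state filtering discussed above.
// These types are simplified stand-ins, not the actual HDFS classes.
public class MaintenanceFilterSketch {

    enum AdminStates { NORMAL, ENTERING_MAINTENANCE, IN_MAINTENANCE }

    static class DatanodeInfo {
        final String name;
        final AdminStates adminState;

        DatanodeInfo(String name, AdminStates adminState) {
            this.name = name;
            this.adminState = adminState;
        }

        // An IN_MAINTENANCE node may have its DataNode process stopped,
        // so it must not be used to serve reads.
        boolean isMaintenanceNotForRead() {
            return adminState == AdminStates.IN_MAINTENANCE;
        }
    }

    // Mirrors the intent of BlockManager#chooseSourceDatanodes: when picking
    // source replicas for re-replication, skip nodes that cannot be read from.
    static List<DatanodeInfo> chooseSourceDatanodes(List<DatanodeInfo> replicas) {
        List<DatanodeInfo> sources = new ArrayList<>();
        for (DatanodeInfo dn : replicas) {
            if (dn.isMaintenanceNotForRead()) {
                continue; // never choose a stopped maintenance node as a source
            }
            sources.add(dn);
        }
        return sources;
    }

    public static void main(String[] args) {
        List<DatanodeInfo> replicas = new ArrayList<>();
        replicas.add(new DatanodeInfo("DN1", AdminStates.IN_MAINTENANCE));
        replicas.add(new DatanodeInfo("DN2", AdminStates.NORMAL));
        // DN1 is filtered out; only DN2 remains as a valid source.
        for (DatanodeInfo dn : chooseSourceDatanodes(replicas)) {
            System.out.println(dn.name);
        }
    }
}
```

If a bug lets an IN_MAINTENANCE replica through this filter, the replication work is scheduled against a dead process and sits until PendingReplicationMonitor times it out, which matches the symptom described in the issue below.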
> Decommissioning of a DataNode after MaintenanceState takes a very long time
> to complete
> ---------------------------------------------------------------------------------------
>
> Key: HDFS-11790
> URL: https://issues.apache.org/jira/browse/HDFS-11790
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs
> Affects Versions: 3.0.0-alpha1
> Reporter: Manoj Govindassamy
> Assignee: Manoj Govindassamy
> Attachments: HDFS-11790-test.01.patch
>
>
> *Problem:*
> When a DataNode is requested for Decommissioning after it successfully
> transitioned to MaintenanceState (HDFS-7877), the decommissioning state
> transition is stuck for a long time even for very small number of blocks in
> the cluster.
> *Details:*
> * A DataNode DN1 was requested for MaintenanceState and it successfully
> transitioned from the ENTERING_MAINTENANCE state to the IN_MAINTENANCE state,
> as there was sufficient replication for all its blocks.
> * As DN1 was now in maintenance state, the DataNode process was stopped on
> DN1. Later, the same DN1 was requested for Decommissioning.
> * As part of Decommissioning, all the blocks residing on DN1 were requested
> to be re-replicated to other DataNodes, so that DN1 could transition from
> ENTERING_DECOMMISSION to DECOMMISSIONED.
> * But re-replication for a few blocks was stuck for a long time. Eventually
> it completed.
> * Digging into the code and logs, we found that the IN_MAINTENANCE DN1 was
> chosen as a source datanode for re-replication of a few of the blocks. Since
> the DataNode process on DN1 was already stopped, the re-replication was stuck
> for a long time.
> * Eventually PendingReplicationMonitor timed out, and re-replication was
> re-scheduled for the timed-out blocks. But during the re-replication as well,
> the IN_MAINT DN1 was chosen as a source datanode for a few of the blocks,
> leading to a timeout again. This cycle repeated a few times until all blocks
> got re-replicated.
> * By design, IN_MAINT datanodes should not be chosen for any read or write
> operations.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]