[ 
https://issues.apache.org/jira/browse/HDFS-11790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11790:
--------------------------------------
    Description: 
*Problem:*
When a DataNode is requested for Decommissioning after it has successfully 
transitioned to MaintenanceState (HDFS-7877), the decommissioning state 
transition gets stuck for a long time, even with a very small number of blocks 
in the cluster. 

*Details:*
* A DataNode DN1 was requested for MaintenanceState and successfully 
transitioned from the ENTERING_MAINTENANCE state to the IN_MAINTENANCE state, 
as there was sufficient replication for all its blocks.
* Since DN1 was now in maintenance state, the DataNode process was stopped on 
DN1. Later, the same DN1 was requested for Decommissioning. 
* As part of Decommissioning, all the blocks residing on DN1 were requested to 
be re-replicated to other DataNodes, so that DN1 could transition from 
ENTERING_DECOMMISSION to DECOMMISSIONED. 
* But re-replication for a few blocks was stuck for a long time before it 
eventually completed.
* Digging into the code and logs, we found that the IN_MAINTENANCE DN1 was 
chosen as a source datanode for re-replication of a few of the blocks. Since 
the DataNode process on DN1 was already stopped, the re-replication was stuck 
for a long time.
* Eventually PendingReplicationMonitor timed out, and re-replication was 
re-scheduled for those timed-out blocks. During the re-replication attempt, the 
IN_MAINT DN1 was again chosen as a source datanode for a few of the blocks, 
leading to another timeout. This cycle repeated a few times until all blocks 
got re-replicated.
* By design, IN_MAINT datanodes should not be chosen for any read or write 
operations.  
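The failure mode above can be illustrated with a minimal, self-contained sketch 
(this is not actual HDFS code; the class and method names below are 
hypothetical): when source selection for re-replication filters out 
IN_MAINTENANCE replicas, a stopped maintenance node like DN1 is never asked to 
serve block copies, so the transfer does not stall waiting for 
PendingReplicationMonitor to time out.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified admin states, mirroring the states named in this report.
enum AdminState { NORMAL, ENTERING_MAINTENANCE, IN_MAINTENANCE, DECOMMISSION_INPROGRESS }

class Node {
    final String name;
    final AdminState state;
    Node(String name, AdminState state) { this.name = name; this.state = state; }
}

public class SourceSelection {
    // Hypothetical selector: pick re-replication source datanodes from the
    // replicas of a block, skipping IN_MAINTENANCE nodes whose DataNode
    // process may be stopped and therefore cannot serve the block.
    static List<Node> chooseSources(List<Node> replicas) {
        List<Node> sources = new ArrayList<>();
        for (Node n : replicas) {
            if (n.state == AdminState.IN_MAINTENANCE) {
                continue; // choosing this node would stall until the pending-replication timeout
            }
            sources.add(n);
        }
        return sources;
    }

    public static void main(String[] args) {
        List<Node> replicas = List.of(
            new Node("DN1", AdminState.IN_MAINTENANCE),
            new Node("DN2", AdminState.NORMAL),
            new Node("DN3", AdminState.NORMAL));
        for (Node n : chooseSources(replicas)) {
            System.out.println(n.name); // DN1 is skipped; only DN2 and DN3 print
        }
    }
}
```

Without the IN_MAINTENANCE filter, DN1 would be a candidate source on every 
retry, reproducing the repeated timeout loop described above.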



> Decommissioning of a DataNode after MaintenanceState takes a very long time 
> to complete
> ---------------------------------------------------------------------------------------
>
>                 Key: HDFS-11790
>                 URL: https://issues.apache.org/jira/browse/HDFS-11790
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs
>    Affects Versions: 3.0.0-alpha1
>            Reporter: Manoj Govindassamy
>            Assignee: Manoj Govindassamy
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
