[
https://issues.apache.org/jira/browse/HDFS-11499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15898571#comment-15898571
]
Lukas Majercak commented on HDFS-11499:
---------------------------------------
[~manojg], yes, TestDecommission#testDecommissionWithOpenFileAndDatanodeFailing()
waits for all three DNs to be decommissioned, and the log you showed covers
just one of them. I would suggest increasing the timeout to 360sec. The test
finishes in ~30 seconds on my machine, similar to
TestDecommission#testDeadNodeCountAfterNamenodeRestart, which already has a
360sec timeout.
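For illustration, a minimal sketch of what the timeout bump could look like,
assuming the test uses JUnit's standard timeout parameter the same way
testDeadNodeCountAfterNamenodeRestart does (the test body is elided; this is
not the actual TestDecommission code):
{code:java}
// Sketch only: assumes JUnit 4's timeout parameter, mirroring the 360sec
// timeout already used by testDeadNodeCountAfterNamenodeRestart.
@Test(timeout = 360000)
public void testDecommissionWithOpenFileAndDatanodeFailing() throws Exception {
  // ... existing test logic: write an open file, decommission the three DNs
  // hosting its last block, fail one DN, and wait for all three to reach
  // DECOMMISSIONED ...
}
{code}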
> Decommissioning stuck because of failing recovery
> -------------------------------------------------
>
> Key: HDFS-11499
> URL: https://issues.apache.org/jira/browse/HDFS-11499
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs, namenode
> Affects Versions: 2.7.1, 2.7.2, 2.7.3, 3.0.0-alpha2
> Reporter: Lukas Majercak
> Assignee: Lukas Majercak
> Labels: blockmanagement, decommission, recovery
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11499.02.patch, HDFS-11499.patch
>
>
> Block recovery will fail to finalize the file if the locations of the last,
> incomplete block are being decommissioned. Conversely, decommissioning will
> be stuck waiting for the last block to be completed.
> {code}
> org.apache.hadoop.ipc.RemoteException(java.lang.IllegalStateException):
> Failed to finalize INodeFile testRecoveryFile since blocks[255] is
> non-complete, where blocks=[blk_1073741825_1001, blk_1073741826_1002...
> {code}
> The fix is to count replicas on decommissioning nodes when completing the
> last block in BlockManager.commitOrCompleteLastBlock, since we know that the
> DecommissionManager will not decommission a node that still has
> under-construction (UC) blocks.
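To make the quoted fix concrete, here is a rough sketch of the idea only, not
the committed HDFS-11499 patch: the helper method name below is hypothetical,
while NumberReplicas and countNodes are existing BlockManager facilities. The
point is that replicas on decommissioning nodes are counted alongside live
ones when deciding whether the last block can be completed.
{code:java}
// Illustrative sketch, not the actual patch. hasEnoughReplicasToComplete is
// a hypothetical helper; NumberReplicas/countNodes are the existing
// BlockManager pieces for tallying replica states.
private boolean hasEnoughReplicasToComplete(BlockInfo lastBlock, int minReplication) {
  NumberReplicas num = countNodes(lastBlock);
  // Replicas on decommissioning nodes are still readable, and the
  // DecommissionManager will not finish decommissioning a node that holds
  // an under-construction block, so counting them here breaks the deadlock
  // between completing the last block and finishing decommissioning.
  return num.liveReplicas() + num.decommissioning() >= minReplication;
}
{code}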