Yongjun Zhang created HDFS-7235:
-----------------------------------
Summary: Can not decommission DN which has invalid block due to bad disk
Key: HDFS-7235
URL: https://issues.apache.org/jira/browse/HDFS-7235
Project: Hadoop HDFS
Issue Type: Bug
Components: datanode, namenode
Affects Versions: 2.6.0
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
When decommissioning a DN, the process hangs.
What happens is that when the NN chooses a source DN from which to replicate the
data on the to-be-decommissioned DN to other DNs, it favors the
to-be-decommissioned DN itself as the transfer source (see BlockManager.java).
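For context, that preference works roughly as in the simplified sketch below. This is not the actual code: the names chooseSourceNode and DnState are hypothetical stand-ins for the relevant parts of BlockManager#chooseSourceDatanode and DatanodeDescriptor.
{code}
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in for the relevant bits of DatanodeDescriptor.
interface DnState {
  boolean isDecommissioned();
  boolean isDecommissionInProgress();
}

class SourceChooser {
  /**
   * Simplified sketch: among the DNs holding a replica, a DN that is
   * decommission-in-progress is favored as the transfer source (it no
   * longer serves clients, so copying from it is cheap). When that DN's
   * replica is actually unreadable, this preference keeps re-selecting
   * the same bad source.
   */
  static DnState chooseSourceNode(List<DnState> replicas) {
    DnState srcNode = null;
    for (DnState node : replicas) {
      if (node.isDecommissioned()) {
        continue; // never use a fully decommissioned node
      }
      if (node.isDecommissionInProgress() || srcNode == null) {
        srcNode = node; // favor the decommissioning DN itself
      }
    }
    return srcNode;
  }

  public static void main(String[] args) {
    DnState decommissioning = dn(false, true);
    DnState healthy = dn(false, false);
    DnState chosen = chooseSourceNode(Arrays.asList(healthy, decommissioning));
    System.out.println(chosen == decommissioning); // true: favored as source
  }

  static DnState dn(boolean decommissioned, boolean inProgress) {
    return new DnState() {
      public boolean isDecommissioned() { return decommissioned; }
      public boolean isDecommissionInProgress() { return inProgress; }
    };
  }
}
{code}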
However, because of the bad disk, the DN detects the source block to be
transferred as an invalid block, using the following logic in FsDatasetImpl.java:
{code}
/** Does the block exist and have the given state? */
private boolean isValid(final ExtendedBlock b, final ReplicaState state) {
  final ReplicaInfo replicaInfo = volumeMap.get(b.getBlockPoolId(),
      b.getLocalBlock());
  return replicaInfo != null
      && replicaInfo.getState() == state
      && replicaInfo.getBlockFile().exists();
}
{code}
This method returns false (i.e., it detects an invalid block) because the block
file doesn't exist in this case.
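To make the failure mode concrete: all three conjuncts must hold, and with a bad disk the in-memory ReplicaInfo is still present in volumeMap while the backing file is gone, so only the last conjunct fails. A minimal, self-contained illustration (the path and boolean values below are hypothetical):
{code}
import java.io.File;

public class InvalidReplicaDemo {
  public static void main(String[] args) {
    boolean replicaInMap = true;  // volumeMap lookup still succeeds
    boolean stateMatches = true;  // e.g. the replica is FINALIZED
    // Hypothetical block file path; the file was lost with the bad disk.
    File blockFile = new File("/data/1/dfs/dn/current/blk_1073741825");
    boolean valid = replicaInMap && stateMatches && blockFile.exists();
    System.out.println("isValid -> " + valid); // prints "isValid -> false"
  }
}
{code}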
The key issue we found here is that after the DN detects an invalid block for the
above reason, it doesn't report the invalid block back to the NN. Thus the NN
doesn't know that the block is corrupt, and keeps sending the data transfer
request to the same to-be-decommissioned DN again and again. This causes an
infinite loop, so the decommission process hangs.
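To illustrate the loop, here is a toy model only; every name below is hypothetical and none of it is actual HDFS code. Because the DN never reports the corrupt replica, the NN's view never changes and the same source is chosen on every pass:
{code}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/** Toy model of the hang; all names are hypothetical, not HDFS code. */
public class DecommissionLoopDemo {
  public static void main(String[] args) {
    // dn1 is decommission-in-progress; its replica file is gone (bad disk).
    Map<String, Boolean> replicaReadable = new HashMap<>();
    replicaReadable.put("dn1", false);
    replicaReadable.put("dn2", true);

    // The NN's knowledge of corrupt replicas; the DN never updates it.
    Set<String> knownCorrupt = new HashSet<>();

    for (int attempt = 1; attempt <= 5; attempt++) { // unbounded in the real bug
      // The NN favors the decommissioning DN unless it knows it is corrupt.
      String src = knownCorrupt.contains("dn1") ? "dn2" : "dn1";
      if (replicaReadable.get(src)) {
        System.out.println("replicated from " + src + "; decommission proceeds");
        return;
      }
      // The DN detects the invalid replica but does NOT report it back,
      // so knownCorrupt stays empty and the same src is chosen next time.
      System.out.println("attempt " + attempt + ": transfer from " + src
          + " failed silently");
    }
    System.out.println("never replicated -> decommission hangs");
  }
}
{code}
If the DN reported the invalid replica back (in the toy model, adding "dn1" to knownCorrupt), the NN would pick dn2 on the next pass and the decommission would complete.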
Thanks [~qwertymaniac] for reporting the issue and for the initial analysis.