[
https://issues.apache.org/jira/browse/HDFS-7235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14177792#comment-14177792
]
Colin Patrick McCabe commented on HDFS-7235:
--------------------------------------------
{code}
boolean needToReportBadBlock = false;
synchronized(data) {
  ReplicaInfo replicaInfo = (ReplicaInfo) data.getReplica(
      block.getBlockPoolId(), block.getBlockId());
  needToReportBadBlock = (replicaInfo != null
      && replicaInfo.getState() == ReplicaState.FINALIZED
      && !replicaInfo.getBlockFile().exists());
}
if (needToReportBadBlock) {
  // Report back to NN bad block caused by non-existent block file.
  reportBadBlock(bpos, block, "Can't replicate block " + block
      + " because the block file doesn't exist");
} else {
  String errStr = "Can't send invalid block " + block;
  LOG.info(errStr);
  bpos.trySendErrorReport(DatanodeProtocol.INVALID_BLOCK, errStr);
}
{code}
We shouldn't log a message saying that "the block file doesn't exist" when the
block file exists but the replica is not finalized.
I also don't see why we need to call {{FsDatasetSpi#getLength}}, since we
already have access to the replica length here.
I would suggest having your synchronized section set a string named
{{replicaProblem}}. If the string is still null at the end, there is no
problem; otherwise, the problem description is contained in
{{replicaProblem}}. That way you can check existence, replica state, and
length all at once.
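A minimal, self-contained sketch of that pattern (the HDFS types are replaced
with stand-ins here, the helper name {{describeReplicaProblem}} is purely
illustrative, and the length check is omitted for brevity):

{code}
// Sketch only: ReplicaState below is a stand-in for the real HDFS enum, and
// describeReplicaProblem is a hypothetical helper, not existing HDFS code.
public class ReplicaProblemSketch {
  enum ReplicaState { FINALIZED, RBW, TEMPORARY }

  // Returns null when the replica is usable; otherwise returns a description
  // of the problem. Existence, state, and block-file presence are checked in
  // one pass, the way the synchronized(data) section would do it.
  static String describeReplicaProblem(boolean replicaExists,
      ReplicaState state, boolean blockFileExists) {
    if (!replicaExists) {
      return "replica does not exist";
    }
    if (state != ReplicaState.FINALIZED) {
      return "replica is in state " + state + " instead of FINALIZED";
    }
    if (!blockFileExists) {
      return "block file does not exist";
    }
    return null;  // no problem found
  }

  public static void main(String[] args) {
    // A non-null replicaProblem would be passed to reportBadBlock;
    // null means the replica can be used as a replication source.
    String replicaProblem =
        describeReplicaProblem(true, ReplicaState.FINALIZED, false);
    System.out.println(replicaProblem);  // block file does not exist
  }
}
{code}

This keeps the synchronized section small and gives the NN one precise error
string per failure mode, instead of a single catch-all message.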
bq. BTW, about the WATCH-OUT, I was just thinking that someone could add
another condition in {{FsDatasetImpl#isValidBlock}} that makes the method
return false. But that's remote and probably won't happen.
We don't even need to call {{isValidBlock}}. {{getReplica}} gives you all the
info you need. Please take out this call, since it's unnecessary.
> Can not decommission DN which has invalid block due to bad disk
> ---------------------------------------------------------------
>
> Key: HDFS-7235
> URL: https://issues.apache.org/jira/browse/HDFS-7235
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode, namenode
> Affects Versions: 2.6.0
> Reporter: Yongjun Zhang
> Assignee: Yongjun Zhang
> Attachments: HDFS-7235.001.patch, HDFS-7235.002.patch,
> HDFS-7235.003.patch
>
>
> When decommissioning a DN, the process hangs.
> What happens is, when the NN chooses a replica as a source for replicating data on
> the to-be-decommissioned DN to other DNs, it favors choosing the
> to-be-decommissioned DN itself as the source of the transfer (see BlockManager.java).
> However, because of the bad disk, the DN detects the source block to be
> transferred as an invalid block, via the following logic in FsDatasetImpl.java:
> {code}
>   /** Does the block exist and have the given state? */
>   private boolean isValid(final ExtendedBlock b, final ReplicaState state) {
>     final ReplicaInfo replicaInfo = volumeMap.get(b.getBlockPoolId(),
>         b.getLocalBlock());
>     return replicaInfo != null
>         && replicaInfo.getState() == state
>         && replicaInfo.getBlockFile().exists();
>   }
> {code}
> The reason this method returns false (detecting an invalid block) is that
> the block file doesn't exist, due to the bad disk in this case.
> The key issue we found here is that after the DN detects an invalid block for the
> above reason, it doesn't report the invalid block back to the NN, so the NN doesn't
> know that the block is corrupted and keeps sending the data transfer request
> to the same to-be-decommissioned DN, again and again. This causes an infinite
> loop, so the decommission process hangs.
> Thanks [~qwertymaniac] for reporting the issue and initial analysis.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)