[
https://issues.apache.org/jira/browse/HDFS-1371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12905992#action_12905992
]
Tsz Wo (Nicholas), SZE commented on HDFS-1371:
----------------------------------------------
> I think that readers not checking the corrupt flag is done on purpose, and I
> like it.
I agree that the behavior is okay. It would probably be better to add a warning
message when LocatedBlock.corrupt == true, or the client could report a
"non-corrupted block" back to the NN after it successfully reads the block.
If this was really done on purpose rather than being a bug, do we have any
documentation for this "feature"? I have not found any javadoc mentioning it.
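A minimal sketch of the client-side warning suggested above. The LocatedBlock class here is a simplified stand-in for org.apache.hadoop.hdfs.protocol.LocatedBlock (only the corrupt flag is modeled), and warnIfFlaggedCorrupt is a hypothetical helper, not an existing DFSClient method:

```java
public class CorruptFlagCheck {
    // Simplified stand-in for the real LocatedBlock; only the field
    // relevant to this discussion is modeled.
    static class LocatedBlock {
        private final boolean corrupt;
        LocatedBlock(boolean corrupt) { this.corrupt = corrupt; }
        boolean isCorrupt() { return corrupt; }
    }

    // Returns a warning string when the NN flagged the block corrupt but the
    // client nevertheless read it successfully; null otherwise. A real fix
    // might instead report the healthy replica back to the NN here.
    static String warnIfFlaggedCorrupt(LocatedBlock blk, boolean readSucceeded) {
        if (blk.isCorrupt() && readSucceeded) {
            return "WARN: block flagged corrupt by NameNode but read "
                 + "successfully; replica may be healthy";
        }
        return null;
    }

    public static void main(String[] args) {
        // Block flagged corrupt, yet readable: emit the warning.
        System.out.println(warnIfFlaggedCorrupt(new LocatedBlock(true), true));
        // Normal case: no warning.
        System.out.println(warnIfFlaggedCorrupt(new LocatedBlock(false), true));
    }
}
```

This keeps the current read path unchanged (the client still serves the data) while surfacing the inconsistency the reporter observed.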
> One bad node can incorrectly flag many files as corrupt
> -------------------------------------------------------
>
> Key: HDFS-1371
> URL: https://issues.apache.org/jira/browse/HDFS-1371
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs client, name-node
> Affects Versions: 0.20.1
> Environment: yahoo internal version
> [knogu...@gwgd4003 ~]$ hadoop version
> Hadoop 0.20.104.3.1007030707
> Reporter: Koji Noguchi
>
> On our cluster, 12 files were reported as corrupt by fsck even though the
> replicas on the datanodes were healthy.
> It turns out that all the replicas (12 files x 3 replicas per file) had been
> reported as corrupt by a single node.
> Surprisingly, these files were still readable/accessible from the dfsclient
> (-get/-cat) without any problems.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.