[https://issues.apache.org/jira/browse/HDFS-1371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12905949#action_12905949]
Todd Lipcon commented on HDFS-1371:
-----------------------------------
Just to clarify: when you say "reported corrupt," you mean reported from client nodes, right?
> One bad node can incorrectly flag many files as corrupt
> -------------------------------------------------------
>
> Key: HDFS-1371
> URL: https://issues.apache.org/jira/browse/HDFS-1371
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs client, name-node
> Affects Versions: 0.20.1
> Environment: yahoo internal version
> [knogu...@gwgd4003 ~]$ hadoop version
> Hadoop 0.20.104.3.1007030707
> Reporter: Koji Noguchi
>
> On our cluster, 12 files were reported as corrupt by fsck even though the
> replicas on the datanodes were healthy.
> It turns out that all of the replicas (12 files x 3 replicas per file) had
> been reported as corrupt by a single node.
> Surprisingly, these files were still readable/accessible from dfsclient
> (-get/-cat) without any problems.