[ https://issues.apache.org/jira/browse/HDFS-1371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12905962#action_12905962 ]
Tsz Wo (Nicholas), SZE commented on HDFS-1371:
----------------------------------------------
> Is this actually apache hadoop 0.20 or 0.21 or trunk or what?
I have not checked the code in detail. I believe this problem exists in all of
Apache Hadoop 0.20, 0.21, and trunk, and even in some earlier versions.
> One bad node can incorrectly flag many files as corrupt
> -------------------------------------------------------
>
> Key: HDFS-1371
> URL: https://issues.apache.org/jira/browse/HDFS-1371
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs client, name-node
> Affects Versions: 0.20.1
> Environment: yahoo internal version
> [knogu...@gwgd4003 ~]$ hadoop version
> Hadoop 0.20.104.3.1007030707
> Reporter: Koji Noguchi
>
> On our cluster, 12 files were reported as corrupt by fsck even though the
> replicas on the datanodes were healthy.
> It turns out that all of the replicas (12 files x 3 replicas per file) were
> reported as corrupt by a single node.
> Surprisingly, these files were still readable/accessible from dfsclient
> (-get/-cat) without any problems.