[ https://issues.apache.org/jira/browse/HDFS-1371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jitendra Nath Pandey updated HDFS-1371:
---------------------------------------

        Fix Version/s: 0.23.0
    Affects Version/s: 0.23.0
         Hadoop Flags: [Reviewed]
               Status: Patch Available  (was: Open)

> One bad node can incorrectly flag many files as corrupt
> -------------------------------------------------------
>
>                 Key: HDFS-1371
>                 URL: https://issues.apache.org/jira/browse/HDFS-1371
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs client, name-node
>    Affects Versions: 0.20.1, 0.23.0
>         Environment: yahoo internal version 
> [knoguchi@gwgd4003 ~]$ hadoop version
> Hadoop 0.20.104.3.1007030707
>            Reporter: Koji Noguchi
>            Assignee: Tanping Wang
>             Fix For: 0.23.0
>
>         Attachments: HDFS-1371.04252011.patch, HDFS-1371.0503.patch, 
> HDFS-1371.0513.patch, HDFS-1371.0515.patch, HDFS-1371.0517.2.patch, 
> HDFS-1371.0517.patch
>
>
> On our cluster, 12 files were reported as corrupt by fsck even though the replicas on the datanodes were healthy.
> It turns out that all of the replicas (12 files x 3 replicas per file) had been reported as corrupt by a single node.
> Surprisingly, these files were still readable/accessible through the dfsclient (-get/-cat) without any problems.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
