[ https://issues.apache.org/jira/browse/HADOOP-855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12464389 ]

Hadoop QA commented on HADOOP-855:
----------------------------------

+1, because 
http://issues.apache.org/jira/secure/attachment/12348878/hadoop-855-7.patch 
applied and successfully tested against trunk revision r495045.

> HDFS should repair corrupted files
> ----------------------------------
>
>                 Key: HADOOP-855
>                 URL: https://issues.apache.org/jira/browse/HADOOP-855
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>            Reporter: Wendy Chien
>         Assigned To: Wendy Chien
>         Attachments: hadoop-855-5.patch, hadoop-855-7.patch
>
>
> While reading, if we discover a mismatch between a block and its checksum, we want 
> to report it back to the namenode so the corrupted block or crc can be deleted.
> To implement this, we need to do the following:
> DFSInputStream
> 1. move DFSInputStream out of DFSClient
> 2. add a member variable to keep track of the current datanode (the chosen node)
> DistributedFileSystem
> 1. change the reportChecksumFailure parameter crc from int to FSInputStream 
> (needed to be able to delete it). 
> 2. determine the specific block and datanode from the DFSInputStream passed to 
> reportChecksumFailure  
> 3. call the namenode to delete the block/crc via DFSClient
> ClientProtocol
> 1. add a method to ask the namenode to delete certain blocks on a specific datanode.
> Namenode
> 1. add the ability to delete certain blocks on a specific datanode (see the sketch below)
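
To make the intended control flow concrete, here is a minimal, self-contained Java
sketch of the client-to-namenode reporting path described above. Every name in it
(ClientProtocolSketch, reportBadBlock, ChecksumRepairSketch, onChecksumFailure) is a
placeholder invented for illustration; the actual classes and method signatures are
those introduced by hadoop-855-7.patch, not this sketch.

// Hypothetical sketch only: identifiers are placeholders, not the ones
// used in the attached patch.

/** Client-to-namenode RPC surface, reduced to the piece this issue adds. */
interface ClientProtocolSketch {
    /**
     * Ask the namenode to delete a specific replica: the block that failed
     * its checksum on the given datanode. The namenode can then re-replicate
     * the block from a healthy copy elsewhere.
     */
    void reportBadBlock(long blockId, String datanodeId) throws java.io.IOException;
}

/** Reader-side flow: on a checksum mismatch, report the offending replica. */
class ChecksumRepairSketch {
    private final ClientProtocolSketch namenode;

    ChecksumRepairSketch(ClientProtocolSketch namenode) {
        this.namenode = namenode;
    }

    /**
     * Invoked when the file system's reportChecksumFailure is called.
     * The input stream (not shown) remembers which block it was reading and
     * from which datanode, so the failure can be pinned to one replica.
     */
    void onChecksumFailure(long blockId, String datanodeId) {
        try {
            namenode.reportBadBlock(blockId, datanodeId);
        } catch (java.io.IOException e) {
            // Reporting is best effort; the read itself has already failed
            // over to another replica (or surfaced an error) by this point.
        }
    }
}

The key design point in the description is that the datanode must be identified along
with the block: the same block may be healthy on other datanodes, so only the one
corrupted replica (and its crc) should be deleted.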


        
