[ http://issues.apache.org/jira/browse/HADOOP-731?page=comments#action_12451394 ]
Hairong Kuang commented on HADOOP-731:
--------------------------------------

I feel that a patch to http://issues.apache.org/jira/browse/HADOOP-698 should 
also fix this problem.

> Sometimes when a dfs file is accessed and one copy has a checksum error the 
> I/O command fails, even if another copy is alright.
> -------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-731
>                 URL: http://issues.apache.org/jira/browse/HADOOP-731
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.7.2
>            Reporter: Dick King
>
> For a particular file [alas, the file no longer exists -- I had to move on], both
>     $dfs -cp foo bar
> and
>     $dfs -get foo local
> failed on a checksum error. The dfs browser's download function retrieved the
> file, so either that function doesn't verify checksums or, more likely, it
> fetched a different copy.
> When a checksum fails on one copy of a redundantly stored file, I would prefer
> that dfs try a different copy, mark the bad one as not existing [which should
> eventually induce a fresh copy being made from one of the good copies], and
> let the call keep working and deliver its bytes.
> Ideally, if all copies have checksum errors but a good copy can be pieced
> together from them, I would like that to be done.
> -dk
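
For illustration only, here is a minimal Java sketch of the replica-retry
behavior the report above asks for. The Replica interface, readBlock method,
and corruptReplicas set are invented names for this sketch, not Hadoop's
actual client code; it assumes a CRC32 per block recorded at write time.

    import java.io.IOException;
    import java.util.Deque;
    import java.util.HashSet;
    import java.util.Set;
    import java.util.zip.CRC32;

    public class ReplicaRetryRead {

        /** Hypothetical view of one stored copy of a block (not a real dfs API). */
        interface Replica {
            String id();
            byte[] read() throws IOException;
            long storedChecksum() throws IOException; // CRC32 recorded at write time
        }

        /** Copies found to be bad; stands in for reporting them to the namenode. */
        private final Set<String> corruptReplicas = new HashSet<>();

        /**
         * Returns the first copy whose recomputed CRC32 matches the stored one.
         * A bad copy is marked and skipped; the read only fails if every copy
         * has a checksum error or is unreadable.
         */
        public byte[] readBlock(Deque<Replica> replicas) throws IOException {
            IOException last = null;
            while (!replicas.isEmpty()) {
                Replica r = replicas.poll();
                try {
                    byte[] data = r.read();
                    CRC32 crc = new CRC32();
                    crc.update(data, 0, data.length);
                    if (crc.getValue() == r.storedChecksum()) {
                        return data;             // good copy: the call keeps working
                    }
                    corruptReplicas.add(r.id()); // "mark the bad one"
                } catch (IOException e) {
                    last = e;                    // unreadable copy: try the next one
                }
            }
            throw new IOException("all replicas failed checksum verification", last);
        }
    }

In a real fix the marking step would report the corrupt replica to the
namenode so it can schedule re-replication from one of the good copies.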
