[ https://issues.apache.org/jira/browse/HDFS-1111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12861141#action_12861141 ]

Rodrigo Schmidt commented on HDFS-1111:
---------------------------------------

Andrew, I would propose the following to address your points:

1) Make it clear whether the list is complete or whether there might be other 
corrupt files not listed.
2) The "may be" must be taken out. It's known for sure that the listed files 
ARE corrupt at the moment of the call.
3) Whenever the list is incomplete, the output should suggest running a 
complete fsck, which is the guaranteed way to list all the corrupt files. (A 
rough sketch of this output logic follows.)

Is that fine with you?

> getCorruptFiles() should give some hint that the list is not complete
> ---------------------------------------------------------------------
>
>                 Key: HDFS-1111
>                 URL: https://issues.apache.org/jira/browse/HDFS-1111
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: Rodrigo Schmidt
>            Assignee: Rodrigo Schmidt
>
> The list of corrupt files returned by the namenode doesn't say anything if 
> the number of corrupt files is larger than the call output limit (which 
> means the list is not complete). There should be a way to hint 
> incompleteness to clients.
> A simple hack would be to add an extra entry to the returned array with the 
> value null. Clients could interpret this as a sign that there are other 
> corrupt files in the system (see the sketch after this description).
> We should also do some rephrasing of the fsck output to make it more 
> confident when the list is complete and less confident when the list is 
> known to be incomplete.
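
As an illustration of the null-entry hack described in the report, the 
encoding and client-side decoding might look like this (a sketch only; 
getCorruptFiles() here is a local stand-in with an invented cap, not the 
actual namenode API):

    import java.util.Arrays;

    // Illustrative encoding/decoding of the proposed null sentinel.
    // The method and the limit below are hypothetical stand-ins.
    public class CorruptFilesSentinel {
      static final int LIMIT = 3; // invented output cap for demonstration

      // Stand-in for the namenode call: truncates the list and appends
      // null when more corrupt files exist than the cap allows.
      static String[] getCorruptFiles(String[] allCorrupt) {
        if (allCorrupt.length <= LIMIT) {
          return allCorrupt;
        }
        String[] out = Arrays.copyOf(allCorrupt, LIMIT + 1);
        out[LIMIT] = null; // sentinel: the list is incomplete
        return out;
      }

      public static void main(String[] args) {
        String[] files = {"/a", "/b", "/c", "/d", "/e"};
        String[] result = getCorruptFiles(files);
        boolean complete =
            result.length == 0 || result[result.length - 1] != null;
        int listed = complete ? result.length : result.length - 1;
        System.out.println(listed + " corrupt file(s) listed; complete="
            + complete);
        if (!complete) {
          System.out.println("Run a complete fsck to get the full list.");
        }
      }
    }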

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
