[ 
https://issues.apache.org/jira/browse/HDFS-1111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12863282#action_12863282
 ] 

Rodrigo Schmidt commented on HDFS-1111:
---------------------------------------

getCorruptFiles() is currently an RPC and I think it's a good idea to keep it 
like that, even though the Fsck usage doesn't require it (the RaidNode will 
eventually call this to recover files automatically). I haven't checked, but I 
thought the RPC protocol we use passes parameters by value. That is, they are 
not passed back from the server. I imagine that passing parameters by copy 
(as you proposed) would make RPC calls unnecessarily expensive in the general 
case.

> getCorruptFiles() should give some hint that the list is not complete
> ---------------------------------------------------------------------
>
>                 Key: HDFS-1111
>                 URL: https://issues.apache.org/jira/browse/HDFS-1111
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: Rodrigo Schmidt
>            Assignee: Rodrigo Schmidt
>
> The list of corrupt files returned by the namenode doesn't say anything if 
> the number of corrupted files is larger than the call's output limit (which 
> means the list is not complete). There should be a way to hint incompleteness 
> to clients.
> A simple hack would be to add an extra entry to the array returned with the 
> value null. Clients could interpret this as a sign that there are other 
> corrupt files in the system.
> We should also do some rephrasing of the fsck output to make it more 
> confident when the list is known to be complete and less confident when the 
> list is known to be incomplete.
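
The sentinel-null idea above could be handled on the client side along these 
lines. This is a minimal sketch only; the class and method names 
(CorruptFilesClient, isTruncated, stripSentinel) are hypothetical and not part 
of any actual patch:

```java
import java.util.Arrays;

public class CorruptFilesClient {

    // Hypothetical convention: a trailing null entry in the reply
    // signals that the server hit its output limit and the list
    // of corrupt files is incomplete.
    static boolean isTruncated(String[] corruptFiles) {
        return corruptFiles.length > 0
            && corruptFiles[corruptFiles.length - 1] == null;
    }

    // Return the real file names, dropping the null sentinel if present.
    static String[] stripSentinel(String[] corruptFiles) {
        return isTruncated(corruptFiles)
            ? Arrays.copyOf(corruptFiles, corruptFiles.length - 1)
            : corruptFiles;
    }

    public static void main(String[] args) {
        // Simulated reply in which the server appended the sentinel.
        String[] reply = {"/a", "/b", null};
        System.out.println(isTruncated(reply));           // true
        System.out.println(stripSentinel(reply).length);  // 2
    }
}
```

A caller such as fsck could then print a qualified message ("at least N 
corrupt files") whenever isTruncated() returns true.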

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.