[
https://issues.apache.org/jira/browse/HDFS-2263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13176244#comment-13176244
]
Todd Lipcon commented on HDFS-2263:
-----------------------------------
Haven't looked at the patch, but in general we should only report "corrupt" when
we have a verifiable case of a bad checksum. Other "generic errors" out of
OP_READ_BLOCK shouldn't trigger a bad-block report, for the reason Harsh
mentioned, even on the "final retry" -- e.g., the client may have gotten
partitioned from the DNs but not the NN. In that case we don't want it
reporting bad blocks everywhere.
> Make DFSClient report bad blocks more quickly
> ---------------------------------------------
>
> Key: HDFS-2263
> URL: https://issues.apache.org/jira/browse/HDFS-2263
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs client
> Affects Versions: 0.20.2
> Reporter: Aaron T. Myers
> Assignee: Harsh J
> Attachments: HDFS-2263.patch
>
>
> In certain circumstances the DFSClient may detect a block as bad without
> promptly reporting it to the NN.
> If, when reading a file, a client finds a block with an invalid checksum, it
> immediately reports the bad block to the NN. But if, when serving up a block,
> a DN finds the block is truncated, it reports this to the client, and the
> client merely adds that DN to its list of dead nodes and moves on to trying
> another DN, without reporting anything to the NN.
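The asymmetry described above can be sketched roughly as follows. This is a simplified illustration, not the actual DFSClient code: the class, method, and field names are hypothetical, and the real client reports corruption to the NameNode via RPC rather than a local list.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the two read-error paths described in the issue.
public class ReadErrorPaths {
    // Stand-in for the checksum-mismatch exception the client raises.
    static class ChecksumException extends IOException {}

    // Blocks the client has reported to the NN as corrupt.
    final List<String> reportedBadBlocks = new ArrayList<>();
    // DNs the client will skip for the rest of this read.
    final List<String> deadNodes = new ArrayList<>();

    void handleReadFailure(String block, String datanode, IOException e) {
        if (e instanceof ChecksumException) {
            // Verified corruption: report the bad block to the NN immediately.
            reportedBadBlocks.add(block);
        } else {
            // Generic error (e.g. a truncated replica reported by the DN):
            // just mark the DN dead and retry elsewhere -- nothing reaches
            // the NN, which is the gap this issue addresses.
            deadNodes.add(datanode);
        }
    }
}
```

Per the comment above, the fix should widen the first branch only for verifiable corruption; routing generic failures there would cause spurious corrupt-block reports during, say, a network partition.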