[
https://issues.apache.org/jira/browse/HDFS-378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12800592#action_12800592
]
stack commented on HDFS-378:
----------------------------
Tracking failures by DN, rather than by block or (as is currently done) globally
with no regard for DN or block, makes sense given the exposition above. Keeping
track by DN rather than by block should also be a good deal easier. The map of
DNs to their past failures sounds great, especially the bit where DNs that are
not in the failure Map are prioritized. I like it.
The 'improvement' sounds grand too, but something to do later?
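To make sure I'm reading the proposal right, here's a rough sketch of the
bookkeeping I have in mind; none of this is actual DFSClient code, and the
names (BlockReadFailureTracker, recordFailure, chooseOrder) are made up for
illustration. It keeps the per-block dead-node map from the summary quoted
below alongside a per-DN failure history, and uses the latter to put DNs with
no recorded failures at the front of the line:
{code}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

/**
 * Rough sketch only, not actual DFSClient code: failure bookkeeping scoped
 * per block (the per-block "deadnode" map from the summary below) plus a
 * per-DN failure count used to prefer DNs with no recorded failures.
 */
public class BlockReadFailureTracker {
  /** Block id -> DNs that have already failed for that block. */
  private final Map<String, Set<String>> deadNodesByBlock = new HashMap<>();
  /** DN -> count of past failures, across all blocks. */
  private final Map<String, Integer> failuresByDatanode = new HashMap<>();

  /** Record that reading blockId from datanode dn failed. */
  public void recordFailure(String blockId, String dn) {
    deadNodesByBlock.computeIfAbsent(blockId, b -> new HashSet<>()).add(dn);
    failuresByDatanode.merge(dn, 1, Integer::sum);
  }

  /** Forget the per-block dead list so those DNs get retried (existing semantics). */
  public void resetBlock(String blockId) {
    deadNodesByBlock.remove(blockId);
  }

  /**
   * Drop DNs already marked dead for this block, then order the rest so
   * DNs with no recorded failures anywhere come first.
   */
  public List<String> chooseOrder(String blockId, List<String> candidates) {
    Set<String> dead = deadNodesByBlock.getOrDefault(blockId, new HashSet<>());
    List<String> usable = new ArrayList<>();
    for (String dn : candidates) {
      if (!dead.contains(dn)) {
        usable.add(dn);
      }
    }
    usable.sort(Comparator.comparingInt(
        (String dn) -> failuresByDatanode.getOrDefault(dn, 0)));
    return usable;
  }
}
{code}
Resetting a block's dead list keeps the existing retry semantics, while the
per-DN counts persist so past offenders still sort to the back.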
> DFSClient should track failures by block rather than globally
> -------------------------------------------------------------
>
> Key: HDFS-378
> URL: https://issues.apache.org/jira/browse/HDFS-378
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Chris Douglas
>
> Rather than tracking the total number of times DFSInputStream failed to
> locate a datanode for a particular block, such failures and the list of
> datanodes involved should be scoped to individual blocks. In particular, the
> "deadnode" list should be a map of blocks to a list of failed nodes, the
> latter reset and the nodes retried per the existing semantics.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.