[
https://issues.apache.org/jira/browse/HDFS-630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12764380#action_12764380
]
Cosmin Lehene commented on HDFS-630:
------------------------------------
I'll try to submit the patch for trunk, including unit tests. This fix is
important for HBase to run correctly in the face of datanode failures
(http://issues.apache.org/jira/browse/HBASE-1876), so we'll probably have to
maintain the patch for 0.20.x as well.
> In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific
> datanodes when locating the next block.
> -------------------------------------------------------------------------------------------------------------------
>
> Key: HDFS-630
> URL: https://issues.apache.org/jira/browse/HDFS-630
> Project: Hadoop HDFS
> Issue Type: New Feature
> Components: hdfs client
> Affects Versions: 0.20.1, 0.21.0
> Reporter: Ruyue Ma
> Assignee: Ruyue Ma
> Priority: Minor
> Fix For: 0.21.0
>
> Attachments: HDFS-630.patch
>
>
> created from hdfs-200.
> If during a write, the dfsclient sees that a block replica location for a
> newly allocated block is not connectable, it re-requests the NN to get a
> fresh set of replica locations for the block. It tries this
> dfs.client.block.write.retries times (default 3), sleeping 6 seconds between
> each retry (see DFSClient.nextBlockOutputStream).
> This works well on a reasonably sized cluster, but if you have only a few
> datanodes in the cluster, every retry may pick the same dead datanode and
> the above logic bails out.
> Our solution: when requesting block locations from the namenode, the client
> passes along the datanodes to exclude. The list of dead datanodes is scoped
> to a single block allocation.
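The retry-with-exclusion idea described above can be sketched roughly as follows. This is a self-contained simulation, not the actual DFSClient code: the class and method names (pickDatanode, allocateBlock, MAX_RETRIES) and the string-based node representation are all illustrative stand-ins for the real namenode RPC and DatanodeInfo types.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ExcludeDeadNodes {
    // Mirrors dfs.client.block.write.retries (default 3).
    static final int MAX_RETRIES = 3;

    // Stand-in for the namenode's block placement: return the first
    // candidate datanode that the client has not excluded.
    static String pickDatanode(List<String> cluster, List<String> excluded) {
        for (String dn : cluster) {
            if (!excluded.contains(dn)) {
                return dn;
            }
        }
        return null; // no eligible datanode left
    }

    // Stand-in for the client's nextBlockOutputStream retry loop.
    // The excluded list is created fresh here, so it is scoped to a
    // single block allocation, as the proposal describes.
    static String allocateBlock(List<String> cluster, List<String> deadNodes) {
        List<String> excluded = new ArrayList<>();
        for (int retry = 0; retry < MAX_RETRIES; retry++) {
            String dn = pickDatanode(cluster, excluded);
            if (dn == null) {
                break; // cluster exhausted
            }
            if (!deadNodes.contains(dn)) {
                return dn; // connection succeeded
            }
            // Connection failed: tell the namenode on the next request.
            excluded.add(dn);
        }
        return null; // all retries hit dead nodes
    }

    public static void main(String[] args) {
        List<String> cluster = Arrays.asList("dn1", "dn2", "dn3");
        List<String> dead = Arrays.asList("dn1");
        // Without exclusion, a small cluster could return dn1 on every
        // retry; with it, the second attempt picks a live node.
        System.out.println(allocateBlock(cluster, dead)); // prints dn2
    }
}
```

The point of the sketch is the `excluded` list: without it, each retry asks the namenode the same question and may get the same dead node back, which is exactly the failure mode on small clusters.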
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.