[
https://issues.apache.org/jira/browse/HDFS-767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12794929#action_12794929
]
Hudson commented on HDFS-767:
-----------------------------
Integrated in Hadoop-Hdfs-trunk-Commit #158 (See
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/158/])
An improved retry policy when the DFSClient is unable to fetch a
block from the datanode. (Ning Zhang via dhruba)
> Job failure due to BlockMissingException
> ----------------------------------------
>
> Key: HDFS-767
> URL: https://issues.apache.org/jira/browse/HDFS-767
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Ning Zhang
> Assignee: Ning Zhang
> Fix For: 0.22.0
>
> Attachments: HDFS-767.patch, HDFS-767_2.patch, HDFS-767_3.patch,
> HDFS-767_4.txt
>
>
> If a block is requested by too many mappers/reducers (say, 3000) at the same
> time, a BlockMissingException is thrown because the request count exceeds the
> upper limit (I think 256 by default) on the number of threads serving the
> same block concurrently. The DFSClient will catch that exception and retry 3
> times, waiting 3 seconds before each attempt. Since the wait time is a fixed
> value, many clients retry at about the same time and a large portion of them
> fail again. After 3 retries, only about 256*4 = 1024 clients have fetched the
> block. If the number of clients is larger than that, the job will fail.
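The core idea behind the fix is to replace the fixed 3-second wait with a randomized, growing backoff so that retries from thousands of clients spread out over time instead of stampeding the datanode together. Below is a minimal sketch of that technique; the class and method names are hypothetical illustrations, not Hadoop's actual implementation:

```java
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical sketch of randomized backoff: each retry attempt waits for
// a base interval that grows with the attempt number, plus random jitter,
// so concurrent clients desynchronize instead of retrying in lockstep.
public class RandomizedBackoff {
    static final long BASE_WAIT_MS = 3000; // the original fixed wait

    // Wait time for the given retry attempt (0-based). The window grows
    // linearly with the attempt number; the jitter is uniform in [0, window),
    // so the result lies in [window, 2 * window).
    static long waitTimeMs(int attempt) {
        long window = BASE_WAIT_MS * (attempt + 1);
        return window + ThreadLocalRandom.current().nextLong(window);
    }

    public static void main(String[] args) {
        for (int attempt = 0; attempt < 3; attempt++) {
            System.out.println("attempt " + attempt
                    + ": wait " + waitTimeMs(attempt) + " ms");
        }
    }
}
```

With this scheme, clients that failed on the same attempt wake up at different moments, so each retry round admits a fresh batch of readers rather than replaying the original collision.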
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.