[
https://issues.apache.org/jira/browse/HDFS-767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12780765#action_12780765
]
Raghu Angadi commented on HDFS-767:
-----------------------------------
I wasn't aware of a limit on the number of accessors for a single block. Does
anyone know the reason behind such a restriction?
> Job failure due to BlockMissingException
> ----------------------------------------
>
> Key: HDFS-767
> URL: https://issues.apache.org/jira/browse/HDFS-767
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Ning Zhang
> Assignee: Ning Zhang
> Attachments: HDFS-767.patch
>
>
> If a block is requested by too many mappers/reducers (say, 3000) at the same
> time, a BlockMissingException is thrown because the request exceeds the upper
> limit (I think 256 by default) on the number of threads accessing the same
> block at the same time. The DFSClient will catch that exception and retry up to
> 3 times, waiting 3 seconds before each retry. Since the wait time is a fixed
> value, a lot of clients retry at about the same time and a large portion of them
> get another failure. After 3 retries, only about 256*4 = 1024 clients have been
> able to read the block. If the number of clients is larger than that, the job will fail.
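For context on the retry behavior described above: with a fixed 3-second wait, every client that fails in one round wakes up and retries at roughly the same moment, so each round admits at most about 256 new readers. The usual mitigation for this kind of synchronized retry is to add randomness (jitter) to the wait. The sketch below is a minimal, hypothetical illustration of randomized exponential backoff around a block read; it is not the DFSClient code or the attached HDFS-767.patch, and the names BackoffReadSketch and fetchBlock() are made up for illustration.

{code:java}
import java.io.IOException;
import java.util.concurrent.ThreadLocalRandom;

/**
 * Minimal sketch of randomized exponential backoff around a block read.
 * fetchBlock() is a hypothetical stand-in for the real DFSClient read path.
 */
public class BackoffReadSketch {

    static final int MAX_ATTEMPTS = 4;      // 1 initial attempt + 3 retries, as in the description
    static final long BASE_WAIT_MS = 3000L; // 3-second base wait, as in the description

    // Stand-in for the real read: fails ~75% of the time to mimic an overloaded datanode.
    static byte[] fetchBlock(long blockId) throws IOException {
        if (ThreadLocalRandom.current().nextInt(4) != 0) {
            throw new IOException("block " + blockId + " temporarily unavailable");
        }
        return new byte[0];
    }

    static byte[] readWithBackoff(long blockId) throws IOException, InterruptedException {
        IOException last = null;
        for (int attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
            try {
                return fetchBlock(blockId);
            } catch (IOException e) {
                last = e;
                if (attempt + 1 < MAX_ATTEMPTS) {
                    // Exponential backoff plus random jitter, so clients that failed
                    // together do not all come back at exactly the same instant.
                    long backoff = BASE_WAIT_MS * (1L << attempt);
                    long jitter = ThreadLocalRandom.current().nextLong(backoff);
                    Thread.sleep(backoff + jitter);
                }
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        readWithBackoff(42L);
    }
}
{code}

Because each client draws a different random jitter, the retries spread out over the backoff window instead of arriving in lock step, so far fewer clients hit the per-block accessor limit in any given second.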
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.