[
https://issues.apache.org/jira/browse/HDFS-767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12790415#action_12790415
]
dhruba borthakur commented on HDFS-767:
---------------------------------------
Steve: can we leave this patch to continue to use Random()? If most JVMs'
implementations of Random() are already seeded from the machine's MAC address,
disk, etc. (as Todd points out), then we can depend on that. In fact, there are
many places in the HDFS code that use a Random() object, and changing it in one
place might not matter much.
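As a quick sanity check of the assumption that no-arg Random() instances are
seeded differently, here is a minimal sketch (not HDFS code; the class name is
hypothetical). Since Java 5, the no-arg constructor mixes System.nanoTime()
with a per-instance uniquifier, so even instances created at the same instant
diverge:

```java
import java.util.Random;

public class RandomSeedCheck {
    public static void main(String[] args) {
        // Two no-arg Random instances created back-to-back: the default
        // seeding makes their streams diverge, so colliding retry waits
        // across DFSClient instances are very unlikely.
        Random a = new Random();
        Random b = new Random();
        System.out.println(a.nextLong() == b.nextLong()); // almost certainly false
    }
}
```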
> Job failure due to BlockMissingException
> ----------------------------------------
>
> Key: HDFS-767
> URL: https://issues.apache.org/jira/browse/HDFS-767
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Ning Zhang
> Assignee: Ning Zhang
> Attachments: HDFS-767.patch
>
>
> If a block is requested by too many mappers/reducers (say, 3000) at the same
> time, a BlockMissingException is thrown because the request exceeds the upper
> limit (I think 256 by default) on the number of threads accessing the same
> block at the same time. The DFSClient will catch that exception and retry 3
> times, waiting 3 seconds between attempts. Since the wait time is a fixed
> value, many clients will retry at about the same time and a large portion of
> them will hit another failure. After 3 retries, only about 256*4 = 1024
> clients have gotten the block. If the number of clients is greater than that,
> the job will fail.
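The synchronized-retry problem described above is commonly avoided with
randomized exponential backoff: each client waits a base delay that grows per
attempt, plus random jitter, so retries spread out instead of arriving in
waves. A minimal Java sketch under that assumption (class and method names are
hypothetical, not the actual DFSClient implementation):

```java
import java.util.Random;

public class JitteredRetry {
    // The no-arg constructor self-seeds per instance, so clients jitter
    // independently even when started at the same moment.
    private static final Random RAND = new Random();

    // Wait in milliseconds before retry `attempt` (0-based): a base delay
    // that doubles each attempt, plus up to 100% uniform random jitter.
    static long backoffMillis(int attempt, long baseMillis) {
        long delay = baseMillis << attempt;                // exponential growth
        return delay + (long) (RAND.nextDouble() * delay); // add jitter
    }

    public static void main(String[] args) {
        // With a 3-second base (the fixed wait mentioned above), successive
        // attempts wait roughly 3-6s, 6-12s, then 12-24s.
        for (int attempt = 0; attempt < 3; attempt++) {
            System.out.println("attempt " + attempt + ": wait "
                + backoffMillis(attempt, 3000) + " ms");
        }
    }
}
```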
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.