[
https://issues.apache.org/jira/browse/HADOOP-2639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12563249#action_12563249
]
Devaraj Das commented on HADOOP-2639:
-------------------------------------
I still don't clearly see why we should not consider all *running tasks* when
we refer to runningReduceTasks/runningMapTasks. Even for the above failureRate
calculation, we should consider _tasks_, no? Why should we not consider
speculative tasks when it comes to reporting fetch failures?
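
For context, a minimal sketch of the kind of failure-rate check being debated,
assuming hypothetical names (fetchFailureNotifications, runningReduceTasks) and
an assumed 50% threshold; the actual logic is in HADOOP-2639.patch:

    // Hypothetical illustration only, not the code from HADOOP-2639.patch.
    class FetchFailureSketch {
        // Assumed threshold: rerun the map once half the reducers report
        // a fetch failure for its output.
        static final double MAX_FAILURE_FRACTION = 0.5;

        // Number of reducers that reported a fetch failure for one map's output.
        int fetchFailureNotifications;

        // If runningReduceTasks also counts speculative attempts, the computed
        // rate is diluted and a lost node is detected later; counting only
        // distinct running tasks detects it sooner. That is the trade-off in
        // the question above.
        boolean shouldRerunMap(int runningReduceTasks) {
            double failureRate =
                (double) fetchFailureNotifications / runningReduceTasks;
            return failureRate >= MAX_FAILURE_FRACTION;
        }
    }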
> Reducers stuck in shuffle
> -------------------------
>
> Key: HADOOP-2639
> URL: https://issues.apache.org/jira/browse/HADOOP-2639
> Project: Hadoop Core
> Issue Type: Bug
> Components: mapred
> Reporter: Amareshwari Sri Ramadasu
> Assignee: Amar Kamat
> Priority: Blocker
> Fix For: 0.16.0
>
> Attachments: HADOOP-2639.patch
>
>
> I started a sort benchmark on 500 nodes; it has 40000 maps and 900 reducers.
> There are 11 reducers stuck in the shuffle at 33% progress. I could see that
> one node, which had run 80 maps, was down, and all of these reducers are
> trying to fetch map output from that node.