[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14693807#comment-14693807
 ] 

Chris Douglas commented on MAPREDUCE-5817:
------------------------------------------

bq. The current patch skips re-running mappers only if all reducers are 
complete. So I don't think reducers will fail beyond that point? Did I 
understand your question right?

I see; sorry, I hadn't read the rest of the JIRA carefully. That's a fairly 
narrow window, isn't it? We may not need an extra state if we kill all 
running maps when the last reducer completes. The condition this adds 
prevents new maps from being scheduled while cleanup/commit code is running.
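Something like this rough sketch, perhaps ({{onReduceCompletion()}} is a 
made-up hook, not existing code; {{tasks}}/{{eventHandler}} are JobImpl's 
existing fields):

{code:java}
// Rough sketch only, not a proposed patch: kill any still-running maps once
// the last reducer finishes, instead of introducing a new job state.
// onReduceCompletion() is a hypothetical hook inside JobImpl.
private void onReduceCompletion() {
  if (allReducersComplete()) {
    for (Task task : tasks.values()) {
      if (task.getType() == TaskType.MAP && !task.isFinished()) {
        // No reducer is left to consume this map's output; stop it.
        eventHandler.handle(
            new TaskEvent(task.getID(), TaskEventType.T_KILL));
      }
    }
  }
}
{code}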

Minor: could {{allReducersComplete()}} call {{getCompletedReduces()}}?
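E.g. (sketch only, assuming JobImpl's existing {{numReduceTasks}} field; not 
the actual patch):

{code:java}
// Delegate to the existing counter instead of re-deriving the count.
private boolean allReducersComplete() {
  // A job with no reducers is trivially "all reducers complete".
  return numReduceTasks == 0
      || numReduceTasks == getCompletedReduces();
}
{code}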

+1 on the patch

> mappers get rescheduled on node transition even after all reducers are 
> completed
> --------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-5817
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5817
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: applicationmaster
>    Affects Versions: 2.3.0
>            Reporter: Sangjin Lee
>            Assignee: Sangjin Lee
>         Attachments: MAPREDUCE-5817.001.patch, mapreduce-5817.patch
>
>
> We're seeing a behavior where a job keeps running long after all reducers 
> have finished. We found that the job was rescheduling and running a number 
> of mappers beyond the point of reducer completion. In one situation, the 
> job ran for some 9 more hours after all reducers completed!
> This happens because whenever a node transition (to an unusable state) 
> arrives at the app master, it unconditionally reschedules all mappers that 
> already ran on that node.
> Therefore, any node transition has the potential to extend the job's 
> running time. Once this window opens, another node transition can prolong 
> it, and in theory this can happen indefinitely.
> If there is instability in the node pool (nodes going unhealthy, etc.) for 
> some duration, any big job is severely vulnerable to this problem.
> Once all reducers have completed, JobImpl.actOnUnusableNode() should not 
> reschedule mapper tasks: the mapper outputs are no longer needed, so the 
> rescheduled tasks' output would not be consumed anyway.
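For reference, a minimal sketch of the skip described in the report (the 
signature matches {{JobImpl.actOnUnusableNode()}}, but the body is a 
simplified assumption, not the committed patch):

{code:java}
// Simplified sketch: bail out of the reschedule path once every reducer
// has finished.
private void actOnUnusableNode(NodeId nodeId, NodeState nodeState) {
  if (allReducersComplete()) {
    // No reducer is left to fetch map output from this node, so
    // re-running the maps that ran here would be pure waste.
    return;
  }
  // ... existing logic: mark succeeded map attempts that ran on nodeId
  // for rescheduling, since live reducers may still need their output ...
}
{code}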



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
