[ https://issues.apache.org/jira/browse/MAPREDUCE-5844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14033114#comment-14033114 ]

Karthik Kambatla commented on MAPREDUCE-5844:
---------------------------------------------

Thanks for updating the patch, Maysam.

A few comments:
# Unfortunately, RMContainerAllocator and RMContainerRequestor are not 
annotated as @Private classes. So, all the fields/methods that are made 
accessible should carry a @Private annotation in addition to the 
@VisibleForTesting annotation (see the annotation sketch after this list).
# By moving TestRMContainerAllocator into the same package as the above two 
classes, we can limit the visibility to package-private instead of public. Can 
you please check whether that is straightforward?
# Can we combine the following two statements into one? (A combined form is 
sketched after this list.)
{code}
    allocationDelayThresholdMs = conf.getInt(
        MRJobConfig.MR_JOB_REDUCER_PREEMPT_DELAY_SEC,
        MRJobConfig.DEFAULT_MR_JOB_REDUCER_PREEMPT_DELAY_SEC);
    allocationDelayThresholdMs *= 1000; //sec -> ms
{code}
# Nit: Rename setMapResourceReqt and setReduceResourceReqt to end in Request 
instead of Reqt?
# Nit: In the tests, can we use a smaller sleep time? Also, instead of sleeping 
for an extra second, can we sleep for the exact time and then check in a loop 
with a much smaller sleep whether the reducer gets preempted? YARN/MR should 
use a Clock so tests don't have to actually sleep for that long (see the Clock 
sketch after this list).
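
For comment 1, each widened member would carry both annotations, along the 
lines of this sketch (the field shown is hypothetical, not from the patch):
{code}
import org.apache.hadoop.classification.InterfaceAudience.Private;
import com.google.common.annotations.VisibleForTesting;

  // Hypothetical example of a field widened for tests:
  @Private
  @VisibleForTesting
  int mapResourceRequest;
{code}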
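
For comment 3, the combined statement could read something like this (a sketch 
of the suggestion, not the committed change):
{code}
    // Read the delay in seconds and convert to ms in one statement:
    allocationDelayThresholdMs = 1000 * conf.getInt(
        MRJobConfig.MR_JOB_REDUCER_PREEMPT_DELAY_SEC,
        MRJobConfig.DEFAULT_MR_JOB_REDUCER_PREEMPT_DELAY_SEC);
{code}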
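
For the last nit, a Clock-based test could advance time explicitly instead of 
sleeping. A minimal sketch, assuming a hypothetical ManualClock helper 
(org.apache.hadoop.yarn.util.Clock is the existing interface):
{code}
import org.apache.hadoop.yarn.util.Clock;

// Hypothetical test clock: the test advances time explicitly,
// so crossing the preemption delay needs no real sleeping.
class ManualClock implements Clock {
  private long time = 0;

  @Override
  public long getTime() {
    return time;
  }

  void advance(long ms) {
    time += ms;
  }
}
{code}
The allocator would read the current time from an injected Clock, and the test 
would call advance(allocationDelayThresholdMs) and then assert that the 
reducer gets preempted, with no real sleep at all.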

> Reducer Preemption is too aggressive
> ------------------------------------
>
>                 Key: MAPREDUCE-5844
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5844
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>            Reporter: Maysam Yabandeh
>            Assignee: Maysam Yabandeh
>         Attachments: MAPREDUCE-5844.patch, MAPREDUCE-5844.patch
>
>
> We observed cases where reducer preemption makes the job finish much later, 
> and the preemption does not seem to be necessary: after preemption, both the 
> preempted reducer and the mapper are assigned immediately, meaning there was 
> already enough space for the mapper.
> The logic for triggering preemption is in 
> RMContainerAllocator::preemptReducesIfNeeded.
> Preemption is triggered if the following condition holds:
> {code}
> headroom +  am * |m| + pr * |r| < mapResourceRequest
> {code} 
> where am is the number of assigned mappers, |m| is the mapper size, pr is 
> the number of reducers being preempted, and |r| is the reducer size.
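> For illustration (hypothetical numbers): with headroom always 0 (see below), 
> am = 0 assigned mappers, and pr = 0 reducers being preempted, the left-hand 
> side is 0, so any pending mapResourceRequest > 0 triggers preemption, even 
> if the queue actually has free space for the mapper.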
> The original idea apparently was that if the headroom is not big enough for 
> the new mapper requests, reducers should be preempted. This would work if 
> the job were alone in the cluster. Once queues are involved, the headroom 
> calculation becomes more complicated and would require a separate headroom 
> calculation per queue/job.
> As a result, the headroom variable is effectively given up on: *headroom is 
> always set to 0*. This means preemption becomes very aggressive, without 
> considering whether there is enough space for the mappers.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
