[
https://issues.apache.org/jira/browse/YARN-3054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15191538#comment-15191538
]
Karthik Kambatla commented on YARN-3054:
----------------------------------------
Thanks [~peng.zhang]. I understand it now.
The elegant way of handling this would be to have a preemption priority or even
a preemption cost per container, which is different from the priority that is
used for allocation. That is a larger conversation to be had. Let us move this
out of this umbrella and look at it for both schedulers together.
That said, I would expect MapReduce to realize that pending mappers are blocked
by reducers waiting on their output and to resolve this itself. MAPREDUCE-6302
and related work attempt to fix this, so you shouldn't see issues with job
completion itself.
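To make the distinction concrete, a per-container preemption cost decoupled from allocation priority could look roughly like the sketch below. All class, field, and method names here are hypothetical illustrations, not actual YARN APIs:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical model: each container carries a preemption cost that is
// independent of the priority the scheduler used to allocate it.
class ContainerInfo {
    final String id;
    final int allocationPriority; // used only for placement decisions
    final long startTime;         // later containers are cheaper to give up
    final int preemptionCost;     // e.g. high for a reducer holding shuffle data

    ContainerInfo(String id, int allocationPriority, long startTime, int preemptionCost) {
        this.id = id;
        this.allocationPriority = allocationPriority;
        this.startTime = startTime;
        this.preemptionCost = preemptionCost;
    }
}

class PreemptionOrder {
    // Preempt the cheapest container first; break ties by preempting the most
    // recently started one, so long-running containers form a stable subset
    // that is never chosen while cheaper victims exist.
    static final Comparator<ContainerInfo> CHEAPEST_FIRST =
        Comparator.comparingInt((ContainerInfo c) -> c.preemptionCost)
                  .thenComparingLong(c -> -c.startTime); // latest start first

    static ContainerInfo pickVictim(List<ContainerInfo> running) {
        return running.stream().min(CHEAPEST_FIRST).orElse(null);
    }
}
```

Because the victim ordering never consults allocation priority, a nearly finished high-cost container is not repeatedly preempted just because the scheduling policy's comparator happens to rank it last.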
> Preempt policy in FairScheduler may cause mapreduce job never finish
> --------------------------------------------------------------------
>
> Key: YARN-3054
> URL: https://issues.apache.org/jira/browse/YARN-3054
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: fairscheduler
> Affects Versions: 2.6.0
> Reporter: Peng Zhang
>
> The preemption policy is currently tied to the scheduling policy. Using the
> scheduling policy's comparator to find preemption candidates cannot guarantee
> that some subset of containers is never preempted, so tasks may be preempted
> repeatedly before they finish and the job makes no progress.
> I think preemption in YARN should give the assurances below:
> 1. MapReduce jobs can get additional resources when others are idle;
> 2. MapReduce jobs for one user in one queue can still progress with their min
> share when others preempt resources back.
> Maybe always preempting the latest app and container would achieve this?
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)