[ https://issues.apache.org/jira/browse/YARN-4390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15258653#comment-15258653 ]

Wangda Tan commented on YARN-4390:
----------------------------------

[~kasha],

Thanks for the comment.

There are two different readers of SchedulerNode: one is the scheduler itself, and the other is the PreemptionPolicy.

Since all changes to SchedulerNode require the scheduler's synchronized lock, all 
readers inside the scheduler see a consistent value.
The PreemptionPolicy can read an inconsistent value in some cases, but that gets 
corrected on subsequent runs, because the PreemptionPolicy doesn't kill containers immediately.
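
To illustrate the point, here is a minimal sketch (with hypothetical class and method names, not the actual YARN code) of why readers holding the scheduler's synchronized lock always see a consistent value, while a lock-free reader like the preemption policy may see a stale one:

{code:java}
// Minimal sketch with hypothetical names; not the real SchedulerNode.
public class SchedulerNodeSketch {
  private long availableMB = 8192; // intentionally not volatile

  // The scheduler mutates state and reads it under the same lock,
  // so in-scheduler readers always observe a consistent value.
  public synchronized void allocateContainer(long mb) {
    availableMB -= mb;
  }

  public synchronized long getAvailableMBUnderLock() {
    return availableMB;
  }

  // A preemption-policy thread reading without the lock may see a
  // stale value. That is tolerable here: the policy only marks
  // candidates and re-evaluates on its next run before any
  // container is actually killed.
  public long getAvailableMBWithoutLock() {
    return availableMB;
  }
}
{code}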

I will also run tests to see whether it causes a performance issue without the 
volatile changes.


> Do surgical preemption based on reserved container in CapacityScheduler
> -----------------------------------------------------------------------
>
>                 Key: YARN-4390
>                 URL: https://issues.apache.org/jira/browse/YARN-4390
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: capacity scheduler
>    Affects Versions: 3.0.0, 2.8.0, 2.7.3
>            Reporter: Eric Payne
>            Assignee: Wangda Tan
>         Attachments: YARN-4390-design.1.pdf, YARN-4390-test-results.pdf, 
> YARN-4390.1.patch, YARN-4390.2.patch, YARN-4390.3.branch-2.patch, 
> YARN-4390.3.patch, YARN-4390.4.patch, YARN-4390.5.patch, YARN-4390.6.patch
>
>
> There are multiple reasons why preemption could unnecessarily preempt 
> containers. One is that an app could be requesting a large container (say 
> 8-GB), and the preemption monitor could conceivably preempt multiple 
> containers (say eight 1-GB containers) in order to fill the large container 
> request. These smaller containers would then be rejected by the requesting AM 
> and potentially given right back to the preempted app.


