[
https://issues.apache.org/jira/browse/YARN-4390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15258018#comment-15258018
]
Karthik Kambatla commented on YARN-4390:
----------------------------------------
Looked only at the SchedulerNode changes, based on the comment from YARN-4808.
I am not sure trading the lock for volatile Resource fields in SchedulerNode is
okay. Don't we want the updates to be atomic? Is it okay for a reader to see an
inconsistent value - memory updated, but not CPU? Also, do we know for a fact
that holding a lock on SchedulerNode is causing performance issues?
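
To make the atomicity concern concrete, here is a minimal, self-contained sketch
(hypothetical field names, not the actual SchedulerNode code): volatile keeps
each individual write visible, but a reader can still observe a torn
memory/vcores pair, whereas the existing lock keeps the pair consistent.

{code:java}
// Minimal sketch of the concern; field names are hypothetical, not the actual
// SchedulerNode members. Volatile guarantees visibility of each individual
// write, but the memory/vcores pair is still updated as two separate writes.
public class NodeResourceSketch {

  private volatile long unallocatedMemoryMB = 8192;
  private volatile int unallocatedVCores = 8;

  // Volatile-only writer: a reader can run between the two statements and
  // observe memory already decremented while vcores still has the old value.
  // (Each "-=" is also a non-atomic read-modify-write on its own.)
  public void allocate(long memoryMB, int vCores) {
    unallocatedMemoryMB -= memoryMB;
    unallocatedVCores -= vCores;
  }

  public String snapshot() {
    return unallocatedMemoryMB + " MB / " + unallocatedVCores + " vcores";
  }

  // Locked variant: readers and writers always see a consistent pair, at the
  // cost of contending on the SchedulerNode monitor.
  public synchronized void allocateLocked(long memoryMB, int vCores) {
    unallocatedMemoryMB -= memoryMB;
    unallocatedVCores -= vCores;
  }

  public synchronized String snapshotLocked() {
    return unallocatedMemoryMB + " MB / " + unallocatedVCores + " vcores";
  }
}
{code}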
> Do surgical preemption based on reserved container in CapacityScheduler
> -----------------------------------------------------------------------
>
> Key: YARN-4390
> URL: https://issues.apache.org/jira/browse/YARN-4390
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: capacity scheduler
> Affects Versions: 3.0.0, 2.8.0, 2.7.3
> Reporter: Eric Payne
> Assignee: Wangda Tan
> Attachments: YARN-4390-design.1.pdf, YARN-4390-test-results.pdf,
> YARN-4390.1.patch, YARN-4390.2.patch, YARN-4390.3.branch-2.patch,
> YARN-4390.3.patch, YARN-4390.4.patch, YARN-4390.5.patch, YARN-4390.6.patch
>
>
> There are multiple reasons why the preemption monitor could unnecessarily
> preempt containers. One is that an app could be requesting a large container
> (say 8 GB), and the preemption monitor could conceivably preempt multiple
> smaller containers (say eight 1-GB containers) in order to fill the large
> container request. These smaller containers would then be rejected by the
> requesting AM and potentially given right back to the preempted app.
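
One way to read the quoted scenario, with made-up numbers (an illustration only,
not the preemption-monitor code): the aggregate freed resource matches the
request, but none of the freed pieces is usable as the single 8-GB container the
AM asked for, which is what motivates preempting surgically around the reserved
container.

{code:java}
// Hypothetical illustration of the quoted scenario (numbers made up); not the
// CapacityScheduler preemption code. Freeing 8 GB in aggregate as 1-GB pieces
// does not produce the single 8-GB allocation the AM asked for.
public class PreemptionMismatchSketch {
  public static void main(String[] args) {
    int requestedMB = 8 * 1024;                      // one 8-GB container
    int[] freedMB = {1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024};

    int totalFreed = 0;
    int largestPiece = 0;
    for (int mb : freedMB) {
      totalFreed += mb;
      largestPiece = Math.max(largestPiece, mb);
    }

    System.out.println("total freed   = " + totalFreed + " MB");
    System.out.println("largest piece = " + largestPiece + " MB");
    System.out.println("fills the 8-GB request? " + (largestPiece >= requestedMB));
  }
}
{code}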
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)