[ https://issues.apache.org/jira/browse/YARN-4390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15253884#comment-15253884 ]

Eric Payne commented on YARN-4390:
----------------------------------

{quote}
And since it uses a R/W lock, the write lock will be acquired only on node 
add/move or node resource update. So in most cases, nobody acquires the write 
lock. I agree to cache the node list inside PCPP if we do see performance issues.
{quote}
[~leftnoteasy], yes, that is a very good point. I was not thinking about 
{{ClusterNodeTracker#getNodes}} using the read lock, which, of course, allows 
multiple concurrent readers. After thinking about it more, I don't think this 
will put much strain on the RM.
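
To double-check my understanding, here is a minimal sketch of the locking 
pattern described above (a hypothetical class, not the actual 
{{ClusterNodeTracker}} code): reads like {{getNodes}} take the shared read 
lock, so the preemption monitor polling the node list only contends with the 
rare node add/remove/resource-update writes.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Minimal sketch of the R/W-lock pattern; the real ClusterNodeTracker
// differs in detail.
public class NodeTrackerSketch<N> {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private final List<N> nodes = new ArrayList<>();

  // Read path: many callers (e.g. the preemption monitor) can hold the
  // read lock concurrently, so frequent getNodes() calls stay cheap.
  public List<N> getNodes() {
    lock.readLock().lock();
    try {
      return new ArrayList<>(nodes); // defensive copy
    } finally {
      lock.readLock().unlock();
    }
  }

  // Write path: only node add/remove/resource-update takes the write
  // lock, which is rare relative to reads.
  public void addNode(N node) {
    lock.writeLock().lock();
    try {
      nodes.add(node);
    } finally {
      lock.writeLock().unlock();
    }
  }
}
{code}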

I still want to experiment with the patch a little more.

> Consider container request size during CS preemption
> ----------------------------------------------------
>
>                 Key: YARN-4390
>                 URL: https://issues.apache.org/jira/browse/YARN-4390
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: capacity scheduler
>    Affects Versions: 3.0.0, 2.8.0, 2.7.3
>            Reporter: Eric Payne
>            Assignee: Wangda Tan
>         Attachments: YARN-4390-design.1.pdf, YARN-4390-test-results.pdf, 
> YARN-4390.1.patch, YARN-4390.2.patch, YARN-4390.3.branch-2.patch, 
> YARN-4390.3.patch, YARN-4390.4.patch
>
>
> There are multiple reasons why preemption could unnecessarily preempt 
> containers. One is that an app could be requesting a large container (say 
> 8 GB), and the preemption monitor could conceivably preempt multiple smaller 
> containers (say eight 1 GB containers) in order to fill the large container 
> request. These smaller containers would then be rejected by the requesting AM 
> and potentially given right back to the preempted app (see the sketch below).
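
For illustration, a minimal sketch of the size check this issue asks for 
(hypothetical names, not the actual CapacityScheduler preemption code): 
containers freed by preemption can only satisfy a pending large request if 
enough capacity is freed on a single node, so eight 1 GB containers scattered 
across the cluster do not help an 8 GB ask.

{code:java}
import java.util.Map;

// Illustrative only: checks whether the candidate preemptions, grouped by
// the node they run on, would let some node host one container of
// requestMb megabytes.
public class PreemptionSizeCheck {

  static boolean wouldSatisfy(long requestMb,
                              Map<String, Long> freeMbByNode,
                              Map<String, Long> candidateMbByNode) {
    for (Map.Entry<String, Long> e : candidateMbByNode.entrySet()) {
      long free = freeMbByNode.getOrDefault(e.getKey(), 0L);
      if (free + e.getValue() >= requestMb) {
        return true; // this node could host the large container
      }
    }
    // Preempting many small containers spread across nodes never gets
    // here for a large requestMb unless one node accumulates enough.
    return false;
  }
}
{code}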



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
