[ https://issues.apache.org/jira/browse/YARN-6956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16125041#comment-16125041 ]

Steven Rand edited comment on YARN-6956 at 8/13/17 8:30 PM:
------------------------------------------------------------

Thanks for the clarifications. All three of those suggestions make sense to me.

I've attached a patch for considering a configurable number of RRs. It seems 
simplest to me to create separate JIRAs for prioritizing the RR(s) to check and 
honoring delay scheduling in preemption -- does that seem reasonable?

EDIT: A couple of questions I had about the patch:

* I don't have a good sense of how to pick the default number of RRs to look 
at, and the choice of 10 for {{MIN_RESOURCE_REQUESTS_FOR_PREEMPTION_DEFAULT}} 
was fairly arbitrary. I'm happy to change it to something more reasonable if 
someone has better intuition there.
* If adding a new configuration point as in the patch makes sense, where should 
I document it? My guess is {{yarn-default.xml}}, but I'm not completely sure.
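
For illustration, here is a loose, self-contained sketch of the idea behind the 
patch. None of these names ({{Request}}, {{candidateRequests}}) come from the 
actual YARN code; they are hypothetical stand-ins. Instead of breaking out of 
the loop at the first {{ResourceRequest}} that covers the starvation, the loop 
keeps collecting candidates until a configurable minimum is reached:

```java
import java.util.ArrayList;
import java.util.List;

public class PreemptionSketch {
    // Hypothetical stand-in for YARN's ResourceRequest: a memory amount tied
    // to a resource name ("*" for any node, a rack, or a specific host).
    record Request(String resourceName, int memoryMb) {}

    // Collect up to minRequests candidates that could address the starvation,
    // rather than returning after the first match.
    static List<Request> candidateRequests(List<Request> all, int starvedMb,
                                           int minRequests) {
        List<Request> picked = new ArrayList<>();
        for (Request rr : all) {
            if (rr.memoryMb() >= starvedMb) {
                picked.add(rr);
                if (picked.size() >= minRequests) {
                    break; // stop only once enough candidates are gathered
                }
            }
        }
        return picked;
    }

    public static void main(String[] args) {
        List<Request> all = List.of(
                new Request("node1.example.com", 4096), // node-local
                new Request("/rack1", 4096),            // rack-local
                new Request("*", 4096));                // relaxed
        System.out.println(candidateRequests(all, 2048, 1).size()); // prints 1
        System.out.println(candidateRequests(all, 2048, 3).size()); // prints 3
    }
}
```

With a minimum of 1 this degrades to the current behavior; a larger value keeps 
rack-local and relaxed ({{\*}}) requests in play as preemption candidates.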



> preemption may only consider resource requests for one node
> -----------------------------------------------------------
>
>                 Key: YARN-6956
>                 URL: https://issues.apache.org/jira/browse/YARN-6956
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: fairscheduler
>    Affects Versions: 2.9.0, 3.0.0-beta1
>         Environment: CDH 5.11.0
>            Reporter: Steven Rand
>            Assignee: Steven Rand
>         Attachments: YARN-6956.001.patch
>
>
> I'm observing the following series of events on a CDH 5.11.0 cluster, which 
> seem to be possible after YARN-6163:
> 1. An application is considered to be starved, so {{FSPreemptionThread}} 
> calls {{identifyContainersToPreempt}}, and that calls 
> {{FSAppAttempt#getStarvedResourceRequests}} to get a list of 
> {{ResourceRequest}} instances that are enough to address the app's starvation.
> 2. The first {{ResourceRequest}} that {{getStarvedResourceRequests}} sees is 
> enough to address the app's starvation, so we break out of the loop over 
> {{appSchedulingInfo.getAllResourceRequests()}} after only one iteration: 
> https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java#L1180.
>  We return only this one {{ResourceRequest}} back to the 
> {{identifyContainersToPreempt}} method.
> 3. It turns out that this particular {{ResourceRequest}} happens to have a 
> value for {{getResourceName}} that identifies a specific node in the cluster. 
> This causes preemption to only consider containers on that node, and not the 
> rest of the cluster.
> [~kasha], does that make sense? I'm happy to submit a patch if I'm 
> understanding the problem correctly.
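
To make the failure mode described above concrete, here is a hypothetical, 
self-contained sketch (not the actual {{FSPreemptionThread}} code; the names 
are invented) of why a single node-local {{ResourceRequest}} restricts the 
candidate set to one node:

```java
import java.util.List;
import java.util.stream.Collectors;

public class NodeLocalPreemption {
    // Hypothetical stand-in for a running container: just the host it runs on.
    record Container(String host) {}

    // If the one selected request names a specific node, only containers on
    // that node are preemption candidates; a relaxed "*" request makes the
    // whole cluster eligible.
    static List<Container> candidates(List<Container> running,
                                      String resourceName) {
        if ("*".equals(resourceName)) {
            return running; // relaxed request: any node may be preempted from
        }
        return running.stream()
                .filter(c -> c.host().equals(resourceName))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Container> running = List.of(
                new Container("node1"), new Container("node2"),
                new Container("node3"));
        System.out.println(candidates(running, "node1").size()); // prints 1
        System.out.println(candidates(running, "*").size());     // prints 3
    }
}
```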



