[ https://issues.apache.org/jira/browse/YARN-5605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15471261#comment-15471261 ]
ASF GitHub Bot commented on YARN-5605:
--------------------------------------
Github user templedf commented on a diff in the pull request:
https://github.com/apache/hadoop/pull/124#discussion_r77868830
--- Diff: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSLeafQueue.java ---
@@ -316,26 +377,12 @@ public Resource assignContainer(FSSchedulerNode node) {
return assigned;
}
- // Apps that have resource demands.
- TreeSet<FSAppAttempt> pendingForResourceApps =
- new TreeSet<FSAppAttempt>(policy.getComparator());
- readLock.lock();
- try {
- for (FSAppAttempt app : runnableApps) {
- Resource pending = app.getAppAttemptResourceUsage().getPending();
- if (!pending.equals(Resources.none())) {
- pendingForResourceApps.add(app);
- }
- }
- } finally {
- readLock.unlock();
- }
- for (FSAppAttempt sched : pendingForResourceApps) {
+ for (FSAppAttempt sched : fetchAppsWithDemand()) {
if (SchedulerAppUtils.isPlaceBlacklisted(sched, node, LOG)) {
continue;
--- End diff ---
It would be nice to get rid of this _continue_ by negating the check and wrapping the next few lines in the _if_.
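For illustration, the two pieces involved might look roughly like the following inside FSLeafQueue. The helper body is inferred from the deleted lines above; the assignContainer/break logic after the blacklist check is not shown in this hunk, so it is an assumption rather than the patch's actual code.

    // Sketch only: helper body inferred from the deleted lines in the hunk above.
    private TreeSet<FSAppAttempt> fetchAppsWithDemand() {
      TreeSet<FSAppAttempt> pendingForResourceApps =
          new TreeSet<FSAppAttempt>(policy.getComparator());
      readLock.lock();
      try {
        for (FSAppAttempt app : runnableApps) {
          Resource pending = app.getAppAttemptResourceUsage().getPending();
          if (!pending.equals(Resources.none())) {
            pendingForResourceApps.add(app);
          }
        }
      } finally {
        readLock.unlock();
      }
      return pendingForResourceApps;
    }

    // Loop shape the comment suggests: negate the blacklist check and keep the
    // body inside the if instead of bailing out with continue. The assignment
    // and break below are assumed, since they fall outside the quoted hunk.
    for (FSAppAttempt sched : fetchAppsWithDemand()) {
      if (!SchedulerAppUtils.isPlaceBlacklisted(sched, node, LOG)) {
        assigned = sched.assignContainer(node);
        if (!assigned.equals(Resources.none())) {
          break;
        }
      }
    }

Inverting the check keeps the per-app logic inside a single if and drops the early continue.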
> Preempt containers (all on one node) to meet the requirement of starved
> applications
> ------------------------------------------------------------------------------------
>
> Key: YARN-5605
> URL: https://issues.apache.org/jira/browse/YARN-5605
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: fairscheduler
> Reporter: Karthik Kambatla
> Assignee: Karthik Kambatla
> Attachments: yarn-5605-1.patch
>
>
> Required items:
> # Identify starved applications
> # Identify a node that has enough containers from applications over their
> fairshare.
> # Preempt those containers
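A very rough sketch of how these three required items could fit together follows; it is not based on yarn-5605-1.patch, and every class and method name below is made up for the illustration.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    // Toy sketch of the three required items; all names are hypothetical.
    public class PreemptionSketch {

      // Minimal stand-ins for scheduler state.
      static class App {
        int fairShare;   // resources the app is entitled to
        int usage;       // resources currently allocated to the app
        int pending;     // resources the app is still asking for
      }

      static class Container {
        App owner;       // app the container belongs to
        int resource;    // size of the container
      }

      // Item 1: an app is starved if it still has demand while below its fair share.
      static boolean isStarved(App app) {
        return app.pending > 0 && app.usage < app.fairShare;
      }

      // Items 2 and 3: find a single node whose containers from over-fair-share
      // apps cover the starved app's unmet demand, and return them for preemption.
      static List<Container> containersToPreempt(App starved,
          Map<String, List<Container>> containersByNode) {
        int needed = Math.min(starved.pending, starved.fairShare - starved.usage);
        for (List<Container> onNode : containersByNode.values()) {
          List<Container> candidates = new ArrayList<>();
          int freed = 0;
          for (Container c : onNode) {
            if (c.owner.usage > c.owner.fairShare) {  // owner is over its fair share
              candidates.add(c);
              freed += c.resource;
              if (freed >= needed) {
                return candidates;                    // all candidates on one node
              }
            }
          }
        }
        return new ArrayList<>();                     // no single node suffices
      }
    }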