[ https://issues.apache.org/jira/browse/YARN-5864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15803713#comment-15803713 ]
Hadoop QA commented on YARN-5864:
---------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 13s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 16 new or modified test files. |
| 0 | mvndep | 0m 12s | Maven dependency ordering for branch |
| +1 | mvninstall | 12m 28s | trunk passed |
| +1 | compile | 4m 55s | trunk passed |
| +1 | checkstyle | 1m 3s | trunk passed |
| +1 | mvnsite | 3m 26s | trunk passed |
| +1 | mvneclipse | 0m 52s | trunk passed |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn |
| +1 | findbugs | 1m 13s | trunk passed |
| +1 | javadoc | 2m 6s | trunk passed |
| 0 | mvndep | 0m 9s | Maven dependency ordering for patch |
| +1 | mvninstall | 3m 2s | the patch passed |
| +1 | compile | 4m 46s | the patch passed |
| +1 | javac | 4m 46s | the patch passed |
| -0 | checkstyle | 1m 8s | hadoop-yarn-project/hadoop-yarn: The patch generated 147 new + 1573 unchanged - 21 fixed = 1720 total (was 1594) |
| +1 | mvnsite | 3m 29s | the patch passed |
| +1 | mvneclipse | 0m 44s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn |
| -1 | findbugs | 1m 15s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| -1 | javadoc | 1m 23s | hadoop-yarn in the patch failed. |
| -1 | javadoc | 0m 22s | hadoop-yarn-server-resourcemanager in the patch failed. |
| -1 | unit | 21m 53s | hadoop-yarn in the patch failed. |
| -1 | unit | 39m 38s | hadoop-yarn-server-resourcemanager in the patch failed. |
| -1 | asflicense | 0m 30s | The patch generated 2 ASF License warnings. |
| | | 112m 6s | |

|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager |
| | Inconsistent synchronization of org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.policy.PriorityUtilizationQueueOrderingPolicy.queues; locked 50% of time. Unsynchronized access at PriorityUtilizationQueueOrderingPolicy.java:[line 162] |
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
| | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesSchedulerActivities |
| | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation |
| | hadoop.yarn.server.timeline.webapp.TestTimelineWebServices |
| | hadoop.yarn.server.resourcemanager.TestRMRestart |
| | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesSchedulerActivities |
| | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5864 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12845936/YARN-5864.001.patch |
| Optional Tests | asflicense findbugs xml compile javac javadoc mvninstall mvnsite unit checkstyle |
| uname | Linux e1a3748a1856 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4a659ff |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/14582/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt |
| findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/14582/artifact/patchprocess/new-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.html |
| javadoc | https://builds.apache.org/job/PreCommit-YARN-Build/14582/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn.txt |
| javadoc | https://builds.apache.org/job/PreCommit-YARN-Build/14582/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt |
| unit | https://builds.apache.org/job/PreCommit-YARN-Build/14582/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn.txt |
| unit | https://builds.apache.org/job/PreCommit-YARN-Build/14582/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/14582/testReport/ |
| asflicense | https://builds.apache.org/job/PreCommit-YARN-Build/14582/artifact/patchprocess/patch-asflicense-problems.txt |
| modules | C: hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/14582/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> YARN Capacity Scheduler - Queue Priorities
> ------------------------------------------
>
>                 Key: YARN-5864
>                 URL: https://issues.apache.org/jira/browse/YARN-5864
>             Project: Hadoop YARN
>          Issue Type: New Feature
>            Reporter: Wangda Tan
>            Assignee: Wangda Tan
>        Attachments: YARN-5864.001.patch, YARN-5864.poc-0.patch, YARN-CapacityScheduler-Queue-Priorities-design-v1.pdf
>
> Currently, the Capacity Scheduler at every parent-queue level uses the relative used-capacities of the child queues to decide which queue gets the next available resource first.
> For example:
> - Q1 & Q2 are child queues under queueA
> - Q1 has 20% of configured capacity and 5% used-capacity
> - Q2 has 80% of configured capacity and 8% used-capacity
> In this situation, the relative used-capacities are calculated as below:
> - Relative used-capacity of Q1 is 5/20 = 0.25
> - Relative used-capacity of Q2 is 8/80 = 0.10
> In the above example, per today’s Capacity Scheduler algorithm, Q2 is selected by the scheduler first to receive the next available resource (see the sketch after this description).
> Simply ordering queues according to relative used-capacities can sometimes cause problems, because scarce resources could be assigned to less-important apps first:
> # Latency sensitivity: This can be a problem for latency-sensitive applications, where waiting until the ‘other’ queue gets full is not going to cut it. The delay in scheduling directly reflects in the response times of these applications.
> # Resource fragmentation for large-container apps: Today’s algorithm also causes issues for applications that need very large containers. It is possible that existing queues are all within their resource guarantees, but their current allocation distribution on each node may be such that an application which needs a large container simply cannot fit on those nodes.
> Services:
> # The above problem (2) gets worse with long-running applications. With short-running apps, previous containers may eventually finish and make enough space for the apps with large containers. But with long-running services in the cluster, the large-container application may never get resources on any nodes, even if its demands are not yet met.
> # Long-running services are sometimes more picky w.r.t. placement than normal batch apps.
> For example, for a long-running service in a separate queue (say queue=service), during peak hours it may want to launch instances on 50% of the cluster nodes. On each node, it may want to launch a large container, say 200G memory per container.
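To make the relative used-capacity ordering described in the example above concrete, here is a minimal, self-contained Java sketch of that comparison. The ChildQueue class, its field names, and the relativeUsedCapacity() helper are illustrative stand-ins, not the actual Capacity Scheduler classes.

{code:java}
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Illustrative stand-in for a child queue; not the real Capacity Scheduler API.
class ChildQueue {
    final String name;
    final double configuredCapacity; // fraction of the parent, e.g. 0.20 for 20%
    final double usedCapacity;       // fraction of the parent currently in use, e.g. 0.05

    ChildQueue(String name, double configuredCapacity, double usedCapacity) {
        this.name = name;
        this.configuredCapacity = configuredCapacity;
        this.usedCapacity = usedCapacity;
    }

    // Relative used-capacity = used / configured, as in the example above.
    double relativeUsedCapacity() {
        return usedCapacity / configuredCapacity;
    }
}

public class RelativeUsedCapacityOrdering {
    public static void main(String[] args) {
        List<ChildQueue> children = Arrays.asList(
                new ChildQueue("Q1", 0.20, 0.05),  // 0.05 / 0.20 = 0.25
                new ChildQueue("Q2", 0.80, 0.08)); // 0.08 / 0.80 = 0.10

        // The behaviour the issue describes: the queue with the lowest
        // relative used-capacity is offered the next available resource first.
        children.stream()
                .sorted(Comparator.comparingDouble(ChildQueue::relativeUsedCapacity))
                .forEach(q -> System.out.printf("%s -> %.2f%n", q.name, q.relativeUsedCapacity()));
        // Prints Q2 (0.10) before Q1 (0.25), i.e. Q2 is scheduled first.
    }
}
{code}

Running the sketch prints Q2 before Q1, matching the worked example: Q2, with the lower relative used-capacity, is offered the next available resource first regardless of how the queues rank in importance.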