[
https://issues.apache.org/jira/browse/YARN-4140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14939395#comment-14939395
]
Hadoop QA commented on YARN-4140:
---------------------------------
\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch | 17m 36s | Pre-patch trunk compilation is healthy. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 1 new or modified test file. |
| {color:green}+1{color} | javac | 8m 14s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc | 10m 8s | There were no new javadoc warning messages. |
| {color:red}-1{color} | release audit | 0m 16s | The applied patch generated 1 release audit warning. |
| {color:green}+1{color} | checkstyle | 0m 50s | There were no new checkstyle issues. |
| {color:green}+1{color} | whitespace | 0m 0s | The patch has no lines that end in whitespace. |
| {color:green}+1{color} | install | 1m 30s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 33s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs | 1m 29s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | yarn tests | 59m 59s | Tests passed in hadoop-yarn-server-resourcemanager. |
| | | 100m 38s | Total. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12764536/0013-YARN-4140.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 5db371f |
| Release Audit | https://builds.apache.org/job/PreCommit-YARN-Build/9317/artifact/patchprocess/patchReleaseAuditProblems.txt |
| hadoop-yarn-server-resourcemanager test log | https://builds.apache.org/job/PreCommit-YARN-Build/9317/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/9317/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/9317/console |
This message was automatically generated.
> RM container allocation delayed incase of app submitted to Nodelabel partition
> ------------------------------------------------------------------------------
>
> Key: YARN-4140
> URL: https://issues.apache.org/jira/browse/YARN-4140
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: api, client, resourcemanager
> Reporter: Bibin A Chundatt
> Assignee: Bibin A Chundatt
> Attachments: 0001-YARN-4140.patch, 0002-YARN-4140.patch,
> 0003-YARN-4140.patch, 0004-YARN-4140.patch, 0005-YARN-4140.patch,
> 0006-YARN-4140.patch, 0007-YARN-4140.patch, 0008-YARN-4140.patch,
> 0009-YARN-4140.patch, 0010-YARN-4140.patch, 0011-YARN-4140.patch,
> 0012-YARN-4140.patch, 0013-YARN-4140.patch
>
>
> While running an application on a Nodelabel partition, I found that application execution is delayed by 5-10 minutes for 500 containers.
> The cluster had 3 machines in total; 2 of them were in the same partition, and the app was submitted to that partition.
> After enabling debug logging I was able to find the following:
> # From the AM, the container ask is for OFF_SWITCH.
> # The RM is allocating all containers as NODE_LOCAL, as shown in the logs below.
> # Since I had about 500 containers, it took about 6 minutes to allocate the first map after the AM allocation.
> # Tested with about 1K maps using a PI job: it took 17 minutes to allocate the next container after the AM allocation.
> Only once all 500 container allocations have been recorded as NODE_LOCAL is the next container allocation done as OFF_SWITCH.
> {code}
> 2015-09-09 15:21:58,954 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt: showRequests: application=application_1441791998224_0001 request={Priority: 20, Capability: <memory:512, vCores:1>, # Containers: 500, Location: /default-rack, Relax Locality: true, Node Label Expression: }
> 2015-09-09 15:21:58,954 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt: showRequests: application=application_1441791998224_0001 request={Priority: 20, Capability: <memory:512, vCores:1>, # Containers: 500, Location: *, Relax Locality: true, Node Label Expression: 3}
> 2015-09-09 15:21:58,954 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt: showRequests: application=application_1441791998224_0001 request={Priority: 20, Capability: <memory:512, vCores:1>, # Containers: 500, Location: host-10-19-92-143, Relax Locality: true, Node Label Expression: }
> 2015-09-09 15:21:58,954 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt: showRequests: application=application_1441791998224_0001 request={Priority: 20, Capability: <memory:512, vCores:1>, # Containers: 500, Location: host-10-19-92-117, Relax Locality: true, Node Label Expression: }
> 2015-09-09 15:21:58,954 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Assigned to queue: root.b.b1 stats: b1: capacity=1.0, absoluteCapacity=0.5, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=1 --> <memory:0, vCores:0>, NODE_LOCAL
> {code}
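>
> Note that only the ANY ({{*}}) request above carries the node label expression ({{3}}); the rack-level and node-level requests have an empty label. As a minimal sketch of how an AM produces this request shape (assuming Hadoop 2.7-era APIs; the class and method names are illustrative only, not part of the patch):
> {code}
> import java.util.Arrays;
> import java.util.List;
> import org.apache.hadoop.yarn.api.records.Priority;
> import org.apache.hadoop.yarn.api.records.Resource;
> import org.apache.hadoop.yarn.api.records.ResourceRequest;
>
> public class LabelRequestSketch {
>   public static List<ResourceRequest> buildRequests() {
>     Priority pri = Priority.newInstance(20);
>     Resource cap = Resource.newInstance(512, 1); // <memory:512, vCores:1>
>
>     // ANY ("*") request: the only one carrying the label expression ("3").
>     ResourceRequest any = ResourceRequest.newInstance(
>         pri, ResourceRequest.ANY, cap, 500, true, "3");
>
>     // Rack- and node-level requests: label expression left empty, as in the log.
>     ResourceRequest rack = ResourceRequest.newInstance(
>         pri, "/default-rack", cap, 500, true, "");
>     ResourceRequest node = ResourceRequest.newInstance(
>         pri, "host-10-19-92-143", cap, 500, true, "");
>
>     return Arrays.asList(any, rack, node);
>   }
> }
> {code}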
>
> {code}
> 2015-09-09 14:35:45,467 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Assigned to queue: root.b.b1 stats: b1: capacity=1.0, absoluteCapacity=0.5, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=1 --> <memory:0, vCores:0>, NODE_LOCAL
> 2015-09-09 14:35:45,831 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Assigned to queue: root.b.b1 stats: b1: capacity=1.0, absoluteCapacity=0.5, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=1 --> <memory:0, vCores:0>, NODE_LOCAL
> 2015-09-09 14:35:46,469 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Assigned to queue: root.b.b1 stats: b1: capacity=1.0, absoluteCapacity=0.5, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=1 --> <memory:0, vCores:0>, NODE_LOCAL
> 2015-09-09 14:35:46,832 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Assigned to queue: root.b.b1 stats: b1: capacity=1.0, absoluteCapacity=0.5, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=1 --> <memory:0, vCores:0>, NODE_LOCAL
> {code}
> {code}
> dsperf@host-127:/opt/bibin/dsperf/HAINSTALL/install/hadoop/resourcemanager/logs1> cat hadoop-dsperf-resourcemanager-host-127.log | grep "NODE_LOCAL" | grep "root.b.b1" | wc -l
> 500
> {code}
>
> (Consumes about 6 minutes)
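> Rough arithmetic: in the second excerpt, consecutive NODE_LOCAL assignments arrive about 0.4-0.6 s apart (seemingly one per scheduling opportunity), so 500 of them before the first OFF_SWITCH allocation works out to roughly 200-300 s, the same order as the ~6 minutes observed.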
>
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)