[
https://issues.apache.org/jira/browse/YARN-4140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14804452#comment-14804452
]
Hadoop QA commented on YARN-4140:
---------------------------------
\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch | 17m 10s | Pre-patch trunk compilation is healthy. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 1 new or modified test files. |
| {color:green}+1{color} | javac | 8m 1s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc | 10m 27s | There were no new javadoc warning messages. |
| {color:green}+1{color} | release audit | 0m 24s | The applied patch does not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle | 0m 53s | There were no new checkstyle issues. |
| {color:red}-1{color} | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install | 1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 34s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs | 1m 32s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | yarn tests | 55m 41s | Tests failed in hadoop-yarn-server-resourcemanager. |
| | | 96m 19s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation |
| | hadoop.yarn.server.resourcemanager.scheduler.fifo.TestFifoScheduler |
| | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler |
| | hadoop.yarn.server.resourcemanager.TestClientRMService |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12757131/0005-YARN-4140.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 3f82f58 |
| whitespace | https://builds.apache.org/job/PreCommit-YARN-Build/9196/artifact/patchprocess/whitespace.txt |
| hadoop-yarn-server-resourcemanager test log | https://builds.apache.org/job/PreCommit-YARN-Build/9196/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/9196/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/9196/console |
This message was automatically generated.
> RM container allocation delayed in case of app submitted to Nodelabel partition
> ------------------------------------------------------------------------------
>
> Key: YARN-4140
> URL: https://issues.apache.org/jira/browse/YARN-4140
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: api, client, resourcemanager
> Reporter: Bibin A Chundatt
> Assignee: Bibin A Chundatt
> Attachments: 0001-YARN-4140.patch, 0002-YARN-4140.patch,
> 0003-YARN-4140.patch, 0004-YARN-4140.patch, 0005-YARN-4140.patch
>
>
> While trying to run an application on a Nodelabel partition, I found that application execution is delayed by 5–10 minutes for 500 containers. The cluster had 3 machines in total; 2 of them were in the same partition, and the app was submitted to that partition.
> After enabling debug logging I was able to observe the following:
> # From the AM, the container ask is for OFF_SWITCH.
> # The RM allocates all containers as NODE_LOCAL, as shown in the logs below.
> # Since there were about 500 containers, it took about 6 minutes to allocate the 1st map container after the AM allocation.
> # Testing with about 1K maps using the Pi job, it took 17 minutes to allocate the next container after the AM allocation.
> Only once all 500 container allocations have been recorded as NODE_LOCAL is the next container allocated as OFF_SWITCH (see the sketch after the first log excerpt below).
> {code}
> 2015-09-09 15:21:58,954 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt: showRequests: application=application_1441791998224_0001 request={Priority: 20, Capability: <memory:512, vCores:1>, # Containers: 500, Location: /default-rack, Relax Locality: true, Node Label Expression: }
> 2015-09-09 15:21:58,954 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt: showRequests: application=application_1441791998224_0001 request={Priority: 20, Capability: <memory:512, vCores:1>, # Containers: 500, Location: *, Relax Locality: true, Node Label Expression: 3}
> 2015-09-09 15:21:58,954 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt: showRequests: application=application_1441791998224_0001 request={Priority: 20, Capability: <memory:512, vCores:1>, # Containers: 500, Location: host-10-19-92-143, Relax Locality: true, Node Label Expression: }
> 2015-09-09 15:21:58,954 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt: showRequests: application=application_1441791998224_0001 request={Priority: 20, Capability: <memory:512, vCores:1>, # Containers: 500, Location: host-10-19-92-117, Relax Locality: true, Node Label Expression: }
> 2015-09-09 15:21:58,954 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Assigned to queue: root.b.b1 stats: b1: capacity=1.0, absoluteCapacity=0.5, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=1 --> <memory:0, vCores:0>, NODE_LOCAL
> {code}
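> Note that in the showRequests dump above, only the ANY ("*") request carries a Node Label Expression ("3"); the node-local and rack-local asks have an empty label, yet every assignment is recorded as NODE_LOCAL. For context, here is a minimal, hypothetical sketch of the delay-scheduling bookkeeping a capacity-style scheduler applies per heartbeat; the names are illustrative only, not the actual YARN code or this patch:
> {code}
> // Hypothetical sketch of per-request locality relaxation ("delay
> // scheduling"): the allowed locality widens only after the request has
> // been skipped on enough node heartbeats, and any successful assignment
> // resets the counter. Names are illustrative, not actual YARN classes.
> enum NodeType { NODE_LOCAL, RACK_LOCAL, OFF_SWITCH }
>
> class LocalityTracker {
>   private long missedOpportunities = 0;
>   private final int nodeLocalityDelay; // cf. yarn.scheduler.capacity.node-locality-delay
>   private final int numClusterNodes;
>
>   LocalityTracker(int nodeLocalityDelay, int numClusterNodes) {
>     this.nodeLocalityDelay = nodeLocalityDelay;
>     this.numClusterNodes = numClusterNodes;
>   }
>
>   /** Widest locality at which this request may be satisfied right now. */
>   NodeType allowedLocality() {
>     if (missedOpportunities >= (long) nodeLocalityDelay * numClusterNodes) {
>       return NodeType.OFF_SWITCH; // fully relaxed
>     }
>     if (missedOpportunities >= nodeLocalityDelay) {
>       return NodeType.RACK_LOCAL;
>     }
>     return NodeType.NODE_LOCAL;
>   }
>
>   void onSkippedHeartbeat() { missedOpportunities++; }   // request not satisfied
>   void onAssignment()       { missedOpportunities = 0; } // reset on success
> }
> {code}
> Under bookkeeping like this, asks that should go straight to OFF_SWITCH (as the AM requested) instead keep paying the node/rack locality delay, which lines up with the 500 consecutive NODE_LOCAL assignments seen before the first OFF_SWITCH one.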
>
> {code}
> 2015-09-09 14:35:45,467 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Assigned to queue: root.b.b1 stats: b1: capacity=1.0, absoluteCapacity=0.5, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=1 --> <memory:0, vCores:0>, NODE_LOCAL
> 2015-09-09 14:35:45,831 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Assigned to queue: root.b.b1 stats: b1: capacity=1.0, absoluteCapacity=0.5, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=1 --> <memory:0, vCores:0>, NODE_LOCAL
> 2015-09-09 14:35:46,469 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Assigned to queue: root.b.b1 stats: b1: capacity=1.0, absoluteCapacity=0.5, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=1 --> <memory:0, vCores:0>, NODE_LOCAL
> 2015-09-09 14:35:46,832 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Assigned to queue: root.b.b1 stats: b1: capacity=1.0, absoluteCapacity=0.5, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=1 --> <memory:0, vCores:0>, NODE_LOCAL
> {code}
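> The timestamps above also account for the observed latency: assignments land roughly every 0.4–0.6 seconds, consistent with one assignment per node heartbeat from the two nodes in the partition, so 500 of them take on the order of 500 x 0.7 s ≈ 350 s, matching the ~6 minutes measured below.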
> {code}
> dsperf@host-127:/opt/bibin/dsperf/HAINSTALL/install/hadoop/resourcemanager/logs1> cat hadoop-dsperf-resourcemanager-host-127.log | grep "NODE_LOCAL" | grep "root.b.b1" | wc -l
> 500
> {code}
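> The count matches the ask exactly: all 500 requested containers are first placed via the NODE_LOCAL path before any OFF_SWITCH assignment appears.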
>
> (Consumes about 6 minutes)
>
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)