[jira] [Commented] (YARN-7612) Add Processor Framework for Rich Placement Constraints

2017-12-29 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16306663#comment-16306663
 ] 

Wangda Tan commented on YARN-7612:
--

And also, this makes Processor enabled by default: 

{code}
540   public static final boolean DEFAULT_RM_PLACEMENT_CONSTRAINTS_ENABLED 
= true;
{code}
It should be false, correct? [~asuresh].

> Add Processor Framework for Rich Placement Constraints
> --
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Fix For: YARN-6592
>
> Attachments: YARN-7612-YARN-6592.001.patch, 
> YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, 
> YARN-7612-YARN-6592.004.patch, YARN-7612-YARN-6592.005.patch, 
> YARN-7612-YARN-6592.006.patch, YARN-7612-YARN-6592.007.patch, 
> YARN-7612-YARN-6592.008.patch, YARN-7612-YARN-6592.009.patch, 
> YARN-7612-YARN-6592.010.patch, YARN-7612-YARN-6592.011.patch, 
> YARN-7612-YARN-6592.012.patch, YARN-7612-v2.wip.patch, YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a Planning algorithm framework to 
> handle placement constraints and scheduling requests from an app and places 
> them on nodes.
> The actual planning algorithm(s) will be handled in a YARN-7613.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6599) Support rich placement constraints in scheduler

2017-12-29 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16306657#comment-16306657
 ] 

genericqa commented on YARN-6599:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
34s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
25s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
23s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
46s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
15s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-6592 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
42s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
15s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 24s{color} | {color:orange} root: The patch generated 44 new + 1219 
unchanged - 7 fixed = 1263 total (was 1226) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
22s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
9s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 57s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 31s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 58s{color} 
| {color:red} hadoop-mapreduce-client-app in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
57s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
33s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}197m 37s{color} | 
{color:black} {color} |
\\
\\
|| 

[jira] [Commented] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2017-12-29 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16306649#comment-16306649
 ] 

Arun Suresh commented on YARN-7682:
---

bq. What about the following: given that the external API is there in the 
PlacementConstraints, what if we let users specify random cmin and then fail 
(ie, canAssign=false) if there are not already that many containers in the 
target. 
Agreed - I actually like that idea. Since our rejected scheduling requests, we 
introduced in YARN-7612 can also be leveraged here:
if source != target, then that implies that the target allocation tags HAVE to 
be on the node before the request with the source tags arrives - which means 
the app should send the requests with the target allocation tags in one 
allocate() call - and then wait for the containers to be placed - then only can 
it send the requests with the different source tags in a subsequent allocate() 
call.
If the app sends the requests with source tags != target tags in the first 
allocate() call, the Processor will reject those requests and the AM will be 
force to resend the requests. The subsequent allocate() call should not fail - 
since by then the target tags would be on the node.


> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682-YARN-6592.001.patch, YARN-7682.wip.patch
>
>
> As per discussion in YARN-7613. Lets expose {{canAssign}} method in the 
> PlacementConstraintManager that takes a sourceTags, applicationId, 
> SchedulerNode and AllocationTagsManager and returns true if constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations. We want this api to be called for single allocations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7670) Modifications to the ResourceScheduler to support SchedulingRequests

2017-12-29 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16306638#comment-16306638
 ] 

Wangda Tan commented on YARN-7670:
--

[~asuresh], 

bq. YARN-6592 branch is up todate - I just rebased it with trunk and force 
pushed - am guessing thats what you meant ?
Oh actually what I meant is merging the two commits, one is for the main change 
and the other one is addendum fix. It's not necessary to commit addendum patch 
to branch. 

bq. So, my intention was that it would be a generic API - not something only 
the Processor can use. My understanding from our previous discussions were that 
the Scheduler itself is split into 2 phases: First is the Proposal phase and 
second the Commit phase. I think the API introduced here is a formalization of 
the the Commit phase - no ?
It is true that we discussed this and it should not be a separate API, however, 
the problem is current logic assumes "don't check pending resource". And the 
only module needs this behavior is placement processor. Unless checking pending 
resource is supported by the API, we can change other logic inside 
CapacityScheduler to use the new added API.

> Modifications to the ResourceScheduler to support SchedulingRequests
> 
>
> Key: YARN-7670
> URL: https://issues.apache.org/jira/browse/YARN-7670
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Fix For: YARN-6592
>
> Attachments: YARN-7670-YARN-6592.001.patch, 
> YARN-7670-YARN-6592.002.patch, YARN-7670-YARN-6592.003.patch, 
> YARN-7670-YARN-6592.addendum.patch
>
>
> As per discussions in YARN-7612. This JIRA tracks the changes to the 
> ResourceScheduler interface and implementation in CapacityScheduler to 
> support SchedulingRequests



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7670) Modifications to the ResourceScheduler to support SchedulingRequests

2017-12-29 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16306636#comment-16306636
 ] 

Arun Suresh commented on YARN-7670:
---

bq. Could you help to merge the commits using git hard push since we're in 
branch.
YARN-6592 branch is up todate - I just rebased it with trunk and force pushed - 
am guessing thats what you meant ?

bq. And also, the new added method in ResourceScheduler should not be used by a 
module other than placement processor
So, my intention was that it would be a generic API - not something only the 
Processor can use. My understanding from our previous discussions were that the 
Scheduler itself is split into 2 phases: First is the Proposal phase and second 
the Commit phase. I think the API introduced here is a formalization of the the 
Commit phase - no ?
Given the de-coupling, I feel we should expose the later phase as a proper 
public API to be used by future features.

> Modifications to the ResourceScheduler to support SchedulingRequests
> 
>
> Key: YARN-7670
> URL: https://issues.apache.org/jira/browse/YARN-7670
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Fix For: YARN-6592
>
> Attachments: YARN-7670-YARN-6592.001.patch, 
> YARN-7670-YARN-6592.002.patch, YARN-7670-YARN-6592.003.patch, 
> YARN-7670-YARN-6592.addendum.patch
>
>
> As per discussions in YARN-7612. This JIRA tracks the changes to the 
> ResourceScheduler interface and implementation in CapacityScheduler to 
> support SchedulingRequests



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6599) Support rich placement constraints in scheduler

2017-12-29 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6599:
-
Attachment: YARN-6599-YARN-6592.wip.002.patch

Attached wip ver.2 patch to see what breaks. This patch should include all 
major logics except tests.

> Support rich placement constraints in scheduler
> ---
>
> Key: YARN-6599
> URL: https://issues.apache.org/jira/browse/YARN-6599
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-6599-YARN-6592.wip.002.patch, 
> YARN-6599.poc.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7346) Fix compilation errors against hbase2 alpha release

2017-12-29 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16306611#comment-16306611
 ] 

Ted Yu commented on YARN-7346:
--

bq. Unless HBase releases beta-1

You can find maven artifacts for beta-1 RC here:
https://repository.apache.org/content/groups/staging/org/apache/hbase/hbase-client/2.0.0-beta-1/

> Fix compilation errors against hbase2 alpha release
> ---
>
> Key: YARN-7346
> URL: https://issues.apache.org/jira/browse/YARN-7346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Vrushali C
> Attachments: YARN-7346.00.patch, YARN-7346.prelim1.patch, 
> YARN-7346.prelim2.patch, YARN-7581.prelim.patch
>
>
> When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, 
> I got the following errors:
> https://pastebin.com/Ms4jYEVB
> This issue is to fix the compilation errors.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7555) Support multiple resource types in YARN native services

2017-12-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16306600#comment-16306600
 ] 

Hudson commented on YARN-7555:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #13428 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13428/])
YARN-7555. Support multiple resource types in YARN native services. (wangda: 
rev 7467e8fe5a95230986fed9d748769304af3f2b61)
* (edit) hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/Resource.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ServiceScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/TestServiceAM.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/resources/org/apache/hadoop/yarn/service/conf/examples/app.json
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/dev-support/findbugs-exclude.xml
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/ResourceInformation.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/conf/TestAppJsonResolve.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/component/Component.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/main/resources/definition/YARN-Simplified-V1-API-Layer-For-Services.yaml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/MockServiceAM.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/YarnServiceAPI.md


> Support multiple resource types in YARN native services
> ---
>
> Key: YARN-7555
> URL: https://issues.apache.org/jira/browse/YARN-7555
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-7555.003.patch, YARN-7555.004.patch, 
> YARN-7555.005.patch, YARN-7555.006.patch, YARN-7555.007.patch, 
> YARN-7555.wip-001.patch
>
>
> We need to support specifying multiple resource type in addition to 
> memory/cpu in YARN native services



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7666) Introduce scheduler specific environment variable support in ASC for better scheduling placement configurations

2017-12-29 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16306599#comment-16306599
 ] 

Wangda Tan commented on YARN-7666:
--

Thanks [~sunilg] for working on this JIRA.
In general, the patch looks good since it can let the client select scheduling 
policies without modifying any application code. 

Some suggestions:
1) It's better to move 

{code} 
66@InterfaceAudience.Private
67public static final String ENV_APPLICATION_PLACEMENT_TYPE_CLASS =
68"APPLICATION_PLACEMENT_TYPE_CLASS";
69  
70@InterfaceAudience.Private
71public static final Class
72DEFAULT_APPLICATION_PLACEMENT_TYPE_CLASS = 
LocalityAppPlacementAllocator.class;
{code} 

To a new class such as ApplicationSchdudlingConfig. (which can locate inside 
rm.scheduler.common) package temporarily instead of exposing to end user. 

2) Regarding the naming of the field. Instead of naming it {{*environment}}, I 
would prefer to call it properties or hints to avoid people think it is a 
process environment which will be passed to AM process. 

For comment from [~asuresh], it makes sense to let the client notify AM about 
whatever required. A simple way of doing this (without changing YARN framework 
code) is to either set hints to AM's environment or Java launch properties. To 
make it generic, we can also include a copy of ApplicationSubmissionContext to 
RegisterAMResponse or a new data structure just for scheduling hints. In any 
case, I would suggest to move these changes to a separate JIRA.



> Introduce scheduler specific environment variable support in ASC for better 
> scheduling placement configurations
> ---
>
> Key: YARN-7666
> URL: https://issues.apache.org/jira/browse/YARN-7666
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-7666.001.patch, YARN-7666.002.patch
>
>
> Introduce a scheduler specific key-value map to hold env variables in ASC.
> And also convert AppPlacementAllocator initialization to each app based on 
> policy configured at each app.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7448) [API] Add SchedulingRequest to the AllocateRequest

2017-12-29 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7448:
-
Fix Version/s: (was: 3.1.0)
   YARN-6592

> [API] Add SchedulingRequest to the AllocateRequest
> --
>
> Key: YARN-7448
> URL: https://issues.apache.org/jira/browse/YARN-7448
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Fix For: YARN-6592
>
> Attachments: YARN-7448-YARN-6592.001.patch, 
> YARN-7448-YARN-6592.002.patch, YARN-7448-YARN-6592.003.patch, 
> YARN-7448-YARN-6592.004.patch, YARN-7448-YARN-6592.005.patch, 
> YARN-7448-YARN-6592.006.patch, YARN-7448-YARN-6592.007.patch, 
> YARN-7448-YARN-6592.008.patch, YARN-7448-YARN-6592.009.patch
>
>
> YARN-6594 introduces the {{SchedulingRequest}}. This JIRA tracks the 
> inclusion of the SchedulingRequest into the AllocateRequest.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7555) Support multiple resource types in YARN native services

2017-12-29 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16306594#comment-16306594
 ] 

genericqa commented on YARN-7555:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m  
3s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
36s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
59s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 58s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 7 new + 49 unchanged - 0 fixed = 56 total (was 49) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m 58s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
49s{color} | {color:green} hadoop-yarn-services in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
50s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | 

[jira] [Updated] (YARN-6596) Introduce Placement Constraint Manager module

2017-12-29 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6596:
-
Fix Version/s: (was: 3.1.0)
   YARN-6592

> Introduce Placement Constraint Manager module
> -
>
> Key: YARN-6596
> URL: https://issues.apache.org/jira/browse/YARN-6596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: YARN-6592
>
> Attachments: YARN-6596-YARN-6592.001.patch, 
> YARN-6596-YARN-6592.002.patch, YARN-6596-YARN-6592.003.patch
>
>
> This RM module will be responsible for storing placement constraints, 
> allocation tags, and node attributes.
> It will be used when determining the placement of SchedulingRequests with 
> constraints.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7653) Rack cardinality support for AllocationTagsManager

2017-12-29 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7653:
-
Fix Version/s: (was: 3.1.0)
   YARN-6592

> Rack cardinality support for AllocationTagsManager
> --
>
> Key: YARN-7653
> URL: https://issues.apache.org/jira/browse/YARN-7653
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
> Fix For: YARN-6592
>
> Attachments: YARN-7653-YARN-6592.001.patch, 
> YARN-7653-YARN-6592.002.patch, YARN-7653-YARN-6592.003.patch
>
>
> AllocationTagsManager currently supports node and cluster-wide tag 
> cardinality retrieval.
> If we want to support arbitrary node-groups/scopes for our placement 
> constraints TagsManager should be extended to provide such functionality.
> As a first step we need to support RACK scope cardinality retrieval (as 
> defined in our API).
> i.e. how many "spark" containers are currently running on "RACK-1"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7653) Rack cardinality support for AllocationTagsManager

2017-12-29 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16306593#comment-16306593
 ] 

Wangda Tan commented on YARN-7653:
--

Thanks [~pgaref] for working on the JIRA, it looks like the newly added logics 
only take care of rack cardinality. Please let me know if I missed any 
discussions or logic in the patch. Instead of hardcoded for the rack name, I 
think we should support generic node group concept. Otherwise, we have to redo 
all the changes in the patch to support generic node group. 

I just updated the title of the JIRA to better reflect its changes, and I 
suggest to change logic to support generic node group cardinality before the 
decay of code architecture.

Thoughts?

> Rack cardinality support for AllocationTagsManager
> --
>
> Key: YARN-7653
> URL: https://issues.apache.org/jira/browse/YARN-7653
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
> Fix For: YARN-6592
>
> Attachments: YARN-7653-YARN-6592.001.patch, 
> YARN-7653-YARN-6592.002.patch, YARN-7653-YARN-6592.003.patch
>
>
> AllocationTagsManager currently supports node and cluster-wide tag 
> cardinality retrieval.
> If we want to support arbitrary node-groups/scopes for our placement 
> constraints TagsManager should be extended to provide such functionality.
> As a first step we need to support RACK scope cardinality retrieval (as 
> defined in our API).
> i.e. how many "spark" containers are currently running on "RACK-1"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7653) Rack cardinality support for AllocationTagsManager

2017-12-29 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7653:
-
Summary: Rack cardinality support for AllocationTagsManager  (was: Node 
group support for AllocationTagsManager)

> Rack cardinality support for AllocationTagsManager
> --
>
> Key: YARN-7653
> URL: https://issues.apache.org/jira/browse/YARN-7653
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
> Fix For: 3.1.0
>
> Attachments: YARN-7653-YARN-6592.001.patch, 
> YARN-7653-YARN-6592.002.patch, YARN-7653-YARN-6592.003.patch
>
>
> AllocationTagsManager currently supports node and cluster-wide tag 
> cardinality retrieval.
> If we want to support arbitrary node-groups/scopes for our placement 
> constraints TagsManager should be extended to provide such functionality.
> As a first step we need to support RACK scope cardinality retrieval (as 
> defined in our API).
> i.e. how many "spark" containers are currently running on "RACK-1"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7612) Add Processor Framework for Rich Placement Constraints

2017-12-29 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16306588#comment-16306588
 ] 

Wangda Tan commented on YARN-7612:
--

Thanks [~asuresh] for working on the JIRA, I think for all added fields to 
YarnConfiguration should be marked as {{@private}}. Could you take care of this 
in the next JIRA? 

> Add Processor Framework for Rich Placement Constraints
> --
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Fix For: YARN-6592
>
> Attachments: YARN-7612-YARN-6592.001.patch, 
> YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, 
> YARN-7612-YARN-6592.004.patch, YARN-7612-YARN-6592.005.patch, 
> YARN-7612-YARN-6592.006.patch, YARN-7612-YARN-6592.007.patch, 
> YARN-7612-YARN-6592.008.patch, YARN-7612-YARN-6592.009.patch, 
> YARN-7612-YARN-6592.010.patch, YARN-7612-YARN-6592.011.patch, 
> YARN-7612-YARN-6592.012.patch, YARN-7612-v2.wip.patch, YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a Planning algorithm framework to 
> handle placement constraints and scheduling requests from an app and places 
> them on nodes.
> The actual planning algorithm(s) will be handled in a YARN-7613.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7670) Modifications to the ResourceScheduler to support SchedulingRequests

2017-12-29 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16306587#comment-16306587
 ] 

Wangda Tan commented on YARN-7670:
--

Thanks [~asuresh] for working on the JIRA. Could you help to merge the commits 
using git hard push since we're in branch.

And also, the new added method in ResourceScheduler should not be used by a 
module other than placement processor. It will be better to file a new JIRA (or 
take care in the next patch) to update the java docs. 

Thoughts?

> Modifications to the ResourceScheduler to support SchedulingRequests
> 
>
> Key: YARN-7670
> URL: https://issues.apache.org/jira/browse/YARN-7670
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Fix For: YARN-6592
>
> Attachments: YARN-7670-YARN-6592.001.patch, 
> YARN-7670-YARN-6592.002.patch, YARN-7670-YARN-6592.003.patch, 
> YARN-7670-YARN-6592.addendum.patch
>
>
> As per discussions in YARN-7612. This JIRA tracks the changes to the 
> ResourceScheduler interface and implementation in CapacityScheduler to 
> support SchedulingRequests



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7612) Add Processor Framework for Rich Placement Constraints

2017-12-29 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7612:
-
Fix Version/s: (was: 3.1.0)
   YARN-6592

> Add Processor Framework for Rich Placement Constraints
> --
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Fix For: YARN-6592
>
> Attachments: YARN-7612-YARN-6592.001.patch, 
> YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, 
> YARN-7612-YARN-6592.004.patch, YARN-7612-YARN-6592.005.patch, 
> YARN-7612-YARN-6592.006.patch, YARN-7612-YARN-6592.007.patch, 
> YARN-7612-YARN-6592.008.patch, YARN-7612-YARN-6592.009.patch, 
> YARN-7612-YARN-6592.010.patch, YARN-7612-YARN-6592.011.patch, 
> YARN-7612-YARN-6592.012.patch, YARN-7612-v2.wip.patch, YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a Planning algorithm framework to 
> handle placement constraints and scheduling requests from an app and places 
> them on nodes.
> The actual planning algorithm(s) will be handled in a YARN-7613.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7670) Modifications to the ResourceScheduler to support SchedulingRequests

2017-12-29 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7670:
-
Fix Version/s: (was: 3.1.0)
   YARN-6592

> Modifications to the ResourceScheduler to support SchedulingRequests
> 
>
> Key: YARN-7670
> URL: https://issues.apache.org/jira/browse/YARN-7670
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Fix For: YARN-6592
>
> Attachments: YARN-7670-YARN-6592.001.patch, 
> YARN-7670-YARN-6592.002.patch, YARN-7670-YARN-6592.003.patch, 
> YARN-7670-YARN-6592.addendum.patch
>
>
> As per discussions in YARN-7612. This JIRA tracks the changes to the 
> ResourceScheduler interface and implementation in CapacityScheduler to 
> support SchedulingRequests



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7669) API and interface modifications for placement constraint processor

2017-12-29 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7669:
-
Fix Version/s: (was: 3.1.0)
   YARN-6592

> API and interface modifications for placement constraint processor
> --
>
> Key: YARN-7669
> URL: https://issues.apache.org/jira/browse/YARN-7669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Fix For: YARN-6592
>
> Attachments: YARN-7669-YARN-6592.001.patch, 
> YARN-7669-YARN-6592.002.patch, YARN-7669-YARN-6592.003.patch, 
> YARN-7669-YARN-6592.004.patch, YARN-7669-YARN-6592.005.patch
>
>
> As per discussions in YARN-7612. This JIRA will introduce the generic 
> interfaces which will be implemented in YARN-7612



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7687) ContainerLogAppender Improvements

2017-12-29 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16306571#comment-16306571
 ] 

Miklos Szegedi commented on YARN-7687:
--

Thank you for the patch [~belugabehr].
Please fix the outstanding checkstyle issue.
{code}
49  isBuffered = (maxEvents > 0);
{code}
I am not sure why this is needed, it adds to the memory footprint. You could 
just check for {{maxEvents}} in {{append}} mentioning in a comment that a 
positive value means that the data is buffered.
{code}
50if (maxEvents > 0) {  
51  tail = new LinkedList();  
52}
{code}
Can you tell me the reason why this check was removed? The lazy creation did 
actually save some memory if no buffering is used.

> ContainerLogAppender Improvements
> -
>
> Key: YARN-7687
> URL: https://issues.apache.org/jira/browse/YARN-7687
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Priority: Trivial
> Attachments: YARN-7687.1.patch
>
>
> * Use Array-backed collection instead of LinkedList
> * Ignore calls to {{close()}} after the initial call
> * Clear the queue after {{close}} is called to let garbage collection do its 
> magic on the items inside of it
> * Fix int-to-long conversion issue (overflow)
> * Remove superfluous white space



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7688) Miscellaneous Improvements To ProcfsBasedProcessTree

2017-12-29 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16306550#comment-16306550
 ] 

Miklos Szegedi commented on YARN-7688:
--

Thank you for the patch [~belugabehr].
Please fix the outstanding checkstyle issue.
{code}
227 if (!"1".equals(pID)) {
232   if ("1".equals(ppid)) {
{code}
I actually liked the original order better.
{code}
165 LOG.info("ProcessTree: " + p);
{code}
How about {{LOG.info("ProcessTree: ", p);}}?


> Miscellaneous Improvements To ProcfsBasedProcessTree
> 
>
> Key: YARN-7688
> URL: https://issues.apache.org/jira/browse/YARN-7688
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Priority: Minor
> Attachments: YARN-7688.1.patch, YARN-7688.2.patch, YARN-7688.3.patch
>
>
> * Use ArrayDeque for performance instead of LinkedList
> * Use more Apache Commons routines to replace existing implementations
> * Remove superfluous code guards around DEBUG statements
> * Remove superfluous annotations in the tests
> * Other small improvements



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7580) ContainersMonitorImpl logged message lacks detail when exceeding memory limits

2017-12-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16306549#comment-16306549
 ] 

Hudson commented on YARN-7580:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13426 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13426/])
YARN-7580. ContainersMonitorImpl logged message lacks detail when (szegedim: 
rev b82049b4f0065b76c3eb590d57eb5bf0ebc2f204)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/TestContainersMonitor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java


> ContainersMonitorImpl logged message lacks detail when exceeding memory limits
> --
>
> Key: YARN-7580
> URL: https://issues.apache.org/jira/browse/YARN-7580
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.1.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
> Fix For: 3.1.0
>
> Attachments: YARN-7580.001.patch, YARN-7580.002.patch
>
>
> Currently in the RM logs container memory usage for a container that exceeds 
> the memory limit is reported like this:
> {code}
> 2016-06-14 09:15:36,694 INFO [AsyncDispatcher event handler] 
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics 
> report from attempt_1464251583966_0932_r_000876_0: Container 
> [pid=134938,containerID=container_1464251583966_0932_01_002237] is running 
> beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory 
> used; 1.9 GB of 2.1 GB virtual memory used. Killing container.
> {code}
> Two enhancements as part of this jira:
> - make it clearer which limit we exceed
> - show exactly how much we exceeded the limit by



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7580) ContainersMonitorImpl logged message lacks detail when exceeding memory limits

2017-12-29 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16306540#comment-16306540
 ] 

Miklos Szegedi commented on YARN-7580:
--

Thank you for the contribution [~wilfreds]!

> ContainersMonitorImpl logged message lacks detail when exceeding memory limits
> --
>
> Key: YARN-7580
> URL: https://issues.apache.org/jira/browse/YARN-7580
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.1.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
> Fix For: 3.1.0
>
> Attachments: YARN-7580.001.patch, YARN-7580.002.patch
>
>
> Currently in the RM logs container memory usage for a container that exceeds 
> the memory limit is reported like this:
> {code}
> 2016-06-14 09:15:36,694 INFO [AsyncDispatcher event handler] 
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics 
> report from attempt_1464251583966_0932_r_000876_0: Container 
> [pid=134938,containerID=container_1464251583966_0932_01_002237] is running 
> beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory 
> used; 1.9 GB of 2.1 GB virtual memory used. Killing container.
> {code}
> Two enhancements as part of this jira:
> - make it clearer which limit we exceed
> - show exactly how much we exceeded the limit by



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7689) TestRMContainerAllocator fails after YARN-6124

2017-12-29 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16306521#comment-16306521
 ] 

Miklos Szegedi commented on YARN-7689:
--

Thank you for the patch [~wilfreds].
{code}
// If re-init is called before init the manager is null and the RM
// will crash, this happens in a number of tests.
{code}
I would say "Protect against uninitialized scheduling monitor manager"
In general I would try to update the test to call the init. The reason is that 
this check may hide important race conditions in the future and will lead the 
code crash later or miss monitoring due to an uninitialized monitor manager.

> TestRMContainerAllocator fails after YARN-6124
> --
>
> Key: YARN-7689
> URL: https://issues.apache.org/jira/browse/YARN-7689
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 3.1.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
> Attachments: YARN-7689.001.patch
>
>
> After the change that was made for YARN-6124 multiple tests in the 
> TestRMContainerAllocator from MapReduce fail with the following NPE:
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.reinitialize(AbstractYarnScheduler.java:1437)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.FifoScheduler.reinitialize(FifoScheduler.java:320)
>   at 
> org.apache.hadoop.mapreduce.v2.app.rm.TestRMContainerAllocator$ExcessReduceContainerAllocateScheduler.(TestRMContainerAllocator.java:1808)
>   at 
> org.apache.hadoop.mapreduce.v2.app.rm.TestRMContainerAllocator$MyResourceManager2.createScheduler(TestRMContainerAllocator.java:970)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:659)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1133)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:316)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.serviceInit(MockRM.java:1334)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.(MockRM.java:162)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.(MockRM.java:141)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.(MockRM.java:137)
>   at 
> org.apache.hadoop.mapreduce.v2.app.rm.TestRMContainerAllocator$MyResourceManager.(TestRMContainerAllocator.java:928)
> {code}
> In the test we just call reinitiaize on a scheduler and never call init.
> The stop of the service is guarded and so should the start and the re-init.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7580) ContainersMonitorImpl logged message lacks detail when exceeding memory limits

2017-12-29 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16306509#comment-16306509
 ] 

Miklos Szegedi commented on YARN-7580:
--

+1. I will commit this shortly.

> ContainersMonitorImpl logged message lacks detail when exceeding memory limits
> --
>
> Key: YARN-7580
> URL: https://issues.apache.org/jira/browse/YARN-7580
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.1.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
> Attachments: YARN-7580.001.patch, YARN-7580.002.patch
>
>
> Currently in the RM logs container memory usage for a container that exceeds 
> the memory limit is reported like this:
> {code}
> 2016-06-14 09:15:36,694 INFO [AsyncDispatcher event handler] 
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics 
> report from attempt_1464251583966_0932_r_000876_0: Container 
> [pid=134938,containerID=container_1464251583966_0932_01_002237] is running 
> beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory 
> used; 1.9 GB of 2.1 GB virtual memory used. Killing container.
> {code}
> Two enhancements as part of this jira:
> - make it clearer which limit we exceed
> - show exactly how much we exceeded the limit by



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7688) Miscellaneous Improvements To ProcfsBasedProcessTree

2017-12-29 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16306433#comment-16306433
 ] 

genericqa commented on YARN-7688:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 22s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common: The patch generated 1 new + 
101 unchanged - 2 fixed = 102 total (was 103) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
6s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7688 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12904025/YARN-7688.3.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 255fb0261dd3 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a55884c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19056/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19056/testReport/ |
| Max. process+thread count | 407 (vs. ulimit of 5000) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | 

[jira] [Commented] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2017-12-29 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16306416#comment-16306416
 ] 

Konstantinos Karanasos commented on YARN-7682:
--

What about the following: given that the external API is there in the 
PlacementConstraints, what if we let users specify random cmin and then fail 
(ie, canAssign=false) if there are not already that many containers in the 
target. That would not be gang scheduling and it allows us to not differentiate 
between source!=target or source==target cases, which means we keep it simpler 
for the first version. 
Then instead of restricting the API, we can restrict such use cases when we add 
the constraint validation later on (for now we just assume non gang semantics). 
I agree that extending canAssign for multiple containers should not be done for 
the first version. 

> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682-YARN-6592.001.patch, YARN-7682.wip.patch
>
>
> As per discussion in YARN-7613. Lets expose {{canAssign}} method in the 
> PlacementConstraintManager that takes a sourceTags, applicationId, 
> SchedulerNode and AllocationTagsManager and returns true if constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations. We want this api to be called for single allocations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7687) ContainerLogAppender Improvements

2017-12-29 Thread BELUGA BEHR (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16306380#comment-16306380
 ] 

BELUGA BEHR commented on YARN-7687:
---

[~miklos.szeg...@cloudera.com] Since you are entertaining some of my patches... 
:)

> ContainerLogAppender Improvements
> -
>
> Key: YARN-7687
> URL: https://issues.apache.org/jira/browse/YARN-7687
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Priority: Trivial
> Attachments: YARN-7687.1.patch
>
>
> * Use Array-backed collection instead of LinkedList
> * Ignore calls to {{close()}} after the initial call
> * Clear the queue after {{close}} is called to let garbage collection do its 
> magic on the items inside of it
> * Fix int-to-long conversion issue (overflow)
> * Remove superfluous white space



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7688) Miscellaneous Improvements To ProcfsBasedProcessTree

2017-12-29 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated YARN-7688:
--
Attachment: YARN-7688.3.patch

> Miscellaneous Improvements To ProcfsBasedProcessTree
> 
>
> Key: YARN-7688
> URL: https://issues.apache.org/jira/browse/YARN-7688
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Priority: Minor
> Attachments: YARN-7688.1.patch, YARN-7688.2.patch, YARN-7688.3.patch
>
>
> * Use ArrayDeque for performance instead of LinkedList
> * Use more Apache Commons routines to replace existing implementations
> * Remove superfluous code guards around DEBUG statements
> * Remove superfluous annotations in the tests
> * Other small improvements



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7684) The Total Memory and VCores display in the yarn UI is not correct with labeled node

2017-12-29 Thread Zhao Yi Ming (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16306359#comment-16306359
 ] 

Zhao Yi Ming commented on YARN-7684:


[~sunilg] 
Thanks for your update! You are correct, when we view total resource per label, 
the cluster resource is not correct.

And please ignore my comment about the issue happened on version > 2.7.3. This 
is my mistake to understand.  Sorry for the wrong comments!

 I noticed the [YARN-4484|https://issues.apache.org/jira/browse/YARN-4484] fix 
target release is 2.8.0, we did not try the 2.8.0 release, we just found the 
problem on 2.7.1 and 2.7.3. So next we will try your fix on  2.8.0. Any results 
I will update here. Thanks!



> The Total Memory  and VCores display in the yarn UI is not correct with 
> labeled node
> 
>
> Key: YARN-7684
> URL: https://issues.apache.org/jira/browse/YARN-7684
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Affects Versions: 2.7.3
>Reporter: Zhao Yi Ming
>Assignee: Zhao Yi Ming
> Fix For: 2.7.3
>
> Attachments: YARN-7684-branch-2.7.3.001.patch, yarn_issue.pdf
>
>
> The Total Memory  and VCores display in the yarn UI is not correct with 
> labeled node
> recreate steps:
> 1. should have a hadoop cluster
> 2. enabled the yarn Node Labels feature
> 3. create a label eg: yarn rmadmin -addToClusterNodeLabels "test"
> 4. add a node into the label eg: yarn rmadmin -replaceLabelsOnNode 
> "zhaoyim02.com=test"
> 5. then go to the yarn UI http://:8088/cluster/nodes



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7692) Resource Manager goes down when a user not included in a priority acl submits a job

2017-12-29 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16306253#comment-16306253
 ] 

Rohith Sharma K S commented on YARN-7692:
-

+1 lgtm! pending jenkins

> Resource Manager goes down when a user not included in a priority acl submits 
> a job
> ---
>
> Key: YARN-7692
> URL: https://issues.apache.org/jira/browse/YARN-7692
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.9.0, 2.8.3, 3.0.0
>Reporter: Charan Hebri
>Assignee: Sunil G
>Priority: Blocker
> Attachments: YARN-7692.001.patch
>
>
> Test scenario
> --
> 1. A cluster is created, no ACLs are included
> 2. Submit jobs with an existing user say 'user_a'
> 3. Enable ACLs and create a priority ACL entry via the property 
> yarn.scheduler.capacity.priority-acls. Do not include the user, 'user_a' in 
> this ACL.
> 4. Submit a job with the 'user_a'
> The observed behavior in this case is that the job is rejected as 'user_a' 
> does not have the permission to run the job which is expected behavior. But 
> Resource Manager also goes down when it tries to recover previous 
> applications and fails to recover them.
> Below is the exception seen,
> {noformat}
> 2017-12-27 10:52:30,064 INFO  conf.Configuration 
> (Configuration.java:getConfResourceAsInputStream(2659)) - found resource 
> yarn-site.xml at file:/etc/hadoop/3.0.0.0-636/0/yarn-site.xml
> 2017-12-27 10:52:30,065 INFO  scheduler.AbstractYarnScheduler 
> (AbstractYarnScheduler.java:setClusterMaxPriority(911)) - Updated the cluste 
> max priority to maxClusterLevelAppPriority = 10
> 2017-12-27 10:52:30,066 INFO  resourcemanager.ResourceManager 
> (ResourceManager.java:transitionToActive(1177)) - Transitioning to active 
> state
> 2017-12-27 10:52:30,097 INFO  resourcemanager.ResourceManager 
> (ResourceManager.java:serviceStart(765)) - Recovery started
> 2017-12-27 10:52:30,102 INFO  recovery.RMStateStore 
> (RMStateStore.java:checkVersion(747)) - Loaded RM state version info 1.5
> 2017-12-27 10:52:30,375 INFO  security.RMDelegationTokenSecretManager 
> (RMDelegationTokenSecretManager.java:recover(196)) - recovering 
> RMDelegationTokenSecretManager.
> 2017-12-27 10:52:30,380 INFO  resourcemanager.RMAppManager 
> (RMAppManager.java:recover(561)) - Recovering 51 applications
> 2017-12-27 10:52:30,432 INFO  resourcemanager.RMAppManager 
> (RMAppManager.java:recover(571)) - Successfully recovered 0 out of 51 
> applications
> 2017-12-27 10:52:30,432 ERROR resourcemanager.ResourceManager 
> (ResourceManager.java:serviceStart(776)) - Failed to load/recover state
> org.apache.hadoop.yarn.exceptions.YarnException: 
> org.apache.hadoop.security.AccessControlException: User hrt_qa (auth:SIMPLE) 
> does not have permission to submit/update application_1514268754125_0001 for 0
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.checkAndGetApplicationPriority(CapacityScheduler.java:2348)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:396)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recoverApplication(RMAppManager.java:358)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recover(RMAppManager.java:567)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.recover(ResourceManager.java:1390)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:771)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:1143)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1183)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1179)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToActive(ResourceManager.java:1179)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:320)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ActiveStandbyElectorBasedElectorService.becomeActive(ActiveStandbyElectorBasedElectorService.java:144)
> at 
> 

[jira] [Assigned] (YARN-7291) Better input parsing for resource in allocation file

2017-12-29 Thread Szilard Nemeth (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth reassigned YARN-7291:


Assignee: Szilard Nemeth

> Better input parsing for resource in allocation file
> 
>
> Key: YARN-7291
> URL: https://issues.apache.org/jira/browse/YARN-7291
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Szilard Nemeth
>Priority: Minor
>  Labels: newbie
>
> When you set max/min share for queues in fair scheduler allocation file,  
> "1024 mb, 2 4 vcores" is parsed the same as "1024 mb, 4 vcores" without any 
> issue, the same to "50% memory, 50% 100%cpu" which is parsed the same as "50% 
> memory, 100%cpu". That causes confusing. We should fix it. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7692) Resource Manager goes down when a user not included in a priority acl submits a job

2017-12-29 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16306174#comment-16306174
 ] 

genericqa commented on YARN-7692:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 32m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 17m 
27s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
13s{color} | {color:red} hadoop-yarn-server-resourcemanager in trunk failed. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-yarn-server-resourcemanager in trunk failed. 
{color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
14s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 15s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 0 unchanged - 26 fixed = 0 total (was 26) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 122 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
5s{color} | {color:red} The patch 1152 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
1m  3s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 17s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:blue}0{color} | {color:blue} asflicense {color} | {color:blue}  0m 
16s{color} | {color:blue} ASF License check generated no output? {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7692 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12904001/YARN-7692.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 14b82cb16006 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | 

[jira] [Commented] (YARN-6918) Remove acls after queue delete to avoid memory leak

2017-12-29 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16306162#comment-16306162
 ] 

Sunil G commented on YARN-6918:
---

Yes. Lets move forward here.
It makes sense to have {{removePermission}} to clean up acls post queue delete. 
(by using new queue management feature)

Few comments
# Why removePermission needs ugi as second param?
# Pls remove {{import java.util.*;}} and add necessary imports from util

> Remove acls after queue delete to avoid memory leak
> ---
>
> Key: YARN-6918
> URL: https://issues.apache.org/jira/browse/YARN-6918
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-6918.001.patch
>
>
> Acl for deleted queue need to removed from allAcls to avoid leak 
> (Priority,YarnAuthorizer)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7684) The Total Memory and VCores display in the yarn UI is not correct with labeled node

2017-12-29 Thread Zhao Yi Ming (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao Yi Ming updated YARN-7684:
---
Labels:   (was: patch)

> The Total Memory  and VCores display in the yarn UI is not correct with 
> labeled node
> 
>
> Key: YARN-7684
> URL: https://issues.apache.org/jira/browse/YARN-7684
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Affects Versions: 2.7.3
>Reporter: Zhao Yi Ming
>Assignee: Zhao Yi Ming
> Fix For: 2.7.3
>
> Attachments: YARN-7684-branch-2.7.3.001.patch, yarn_issue.pdf
>
>
> The Total Memory  and VCores display in the yarn UI is not correct with 
> labeled node
> recreate steps:
> 1. should have a hadoop cluster
> 2. enabled the yarn Node Labels feature
> 3. create a label eg: yarn rmadmin -addToClusterNodeLabels "test"
> 4. add a node into the label eg: yarn rmadmin -replaceLabelsOnNode 
> "zhaoyim02.com=test"
> 5. then go to the yarn UI http://:8088/cluster/nodes



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7684) The Total Memory and VCores display in the yarn UI is not correct with labeled node

2017-12-29 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16306156#comment-16306156
 ] 

Sunil G commented on YARN-7684:
---

Yes. Using cluster resource might be solving this temporary.
However we have partitions and hence cluster resource is not correct when we 
view total resource per label.

{{ this.totalMB = availableMB + allocatedMB;}}
For root queue, ideally we should be able to sum up to parent root level. For 
label, I think we have not correctly calculating queue metrics. YARN-4484 has 
handled this case, however if its not working now, we need to see how it got 
broken.

> The Total Memory  and VCores display in the yarn UI is not correct with 
> labeled node
> 
>
> Key: YARN-7684
> URL: https://issues.apache.org/jira/browse/YARN-7684
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Affects Versions: 2.7.3
>Reporter: Zhao Yi Ming
>Assignee: Zhao Yi Ming
>  Labels: patch
> Fix For: 2.7.3
>
> Attachments: YARN-7684-branch-2.7.3.001.patch, yarn_issue.pdf
>
>
> The Total Memory  and VCores display in the yarn UI is not correct with 
> labeled node
> recreate steps:
> 1. should have a hadoop cluster
> 2. enabled the yarn Node Labels feature
> 3. create a label eg: yarn rmadmin -addToClusterNodeLabels "test"
> 4. add a node into the label eg: yarn rmadmin -replaceLabelsOnNode 
> "zhaoyim02.com=test"
> 5. then go to the yarn UI http://:8088/cluster/nodes



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7692) Resource Manager goes down when a user not included in a priority acl submits a job

2017-12-29 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7692:
--
Attachment: YARN-7692.001.patch

Attaching patch to avoid checking ACLs during recovery.

[~rohithsharma], please help to review.

> Resource Manager goes down when a user not included in a priority acl submits 
> a job
> ---
>
> Key: YARN-7692
> URL: https://issues.apache.org/jira/browse/YARN-7692
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.9.0, 2.8.3, 3.0.0
>Reporter: Charan Hebri
>Assignee: Sunil G
>Priority: Blocker
> Attachments: YARN-7692.001.patch
>
>
> Test scenario
> --
> 1. A cluster is created, no ACLs are included
> 2. Submit jobs with an existing user say 'user_a'
> 3. Enable ACLs and create a priority ACL entry via the property 
> yarn.scheduler.capacity.priority-acls. Do not include the user, 'user_a' in 
> this ACL.
> 4. Submit a job with the 'user_a'
> The observed behavior in this case is that the job is rejected as 'user_a' 
> does not have the permission to run the job which is expected behavior. But 
> Resource Manager also goes down when it tries to recover previous 
> applications and fails to recover them.
> Below is the exception seen,
> {noformat}
> 2017-12-27 10:52:30,064 INFO  conf.Configuration 
> (Configuration.java:getConfResourceAsInputStream(2659)) - found resource 
> yarn-site.xml at file:/etc/hadoop/3.0.0.0-636/0/yarn-site.xml
> 2017-12-27 10:52:30,065 INFO  scheduler.AbstractYarnScheduler 
> (AbstractYarnScheduler.java:setClusterMaxPriority(911)) - Updated the cluste 
> max priority to maxClusterLevelAppPriority = 10
> 2017-12-27 10:52:30,066 INFO  resourcemanager.ResourceManager 
> (ResourceManager.java:transitionToActive(1177)) - Transitioning to active 
> state
> 2017-12-27 10:52:30,097 INFO  resourcemanager.ResourceManager 
> (ResourceManager.java:serviceStart(765)) - Recovery started
> 2017-12-27 10:52:30,102 INFO  recovery.RMStateStore 
> (RMStateStore.java:checkVersion(747)) - Loaded RM state version info 1.5
> 2017-12-27 10:52:30,375 INFO  security.RMDelegationTokenSecretManager 
> (RMDelegationTokenSecretManager.java:recover(196)) - recovering 
> RMDelegationTokenSecretManager.
> 2017-12-27 10:52:30,380 INFO  resourcemanager.RMAppManager 
> (RMAppManager.java:recover(561)) - Recovering 51 applications
> 2017-12-27 10:52:30,432 INFO  resourcemanager.RMAppManager 
> (RMAppManager.java:recover(571)) - Successfully recovered 0 out of 51 
> applications
> 2017-12-27 10:52:30,432 ERROR resourcemanager.ResourceManager 
> (ResourceManager.java:serviceStart(776)) - Failed to load/recover state
> org.apache.hadoop.yarn.exceptions.YarnException: 
> org.apache.hadoop.security.AccessControlException: User hrt_qa (auth:SIMPLE) 
> does not have permission to submit/update application_1514268754125_0001 for 0
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.checkAndGetApplicationPriority(CapacityScheduler.java:2348)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:396)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recoverApplication(RMAppManager.java:358)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recover(RMAppManager.java:567)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.recover(ResourceManager.java:1390)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:771)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:1143)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1183)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1179)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToActive(ResourceManager.java:1179)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:320)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ActiveStandbyElectorBasedElectorService.becomeActive(ActiveStandbyElectorBasedElectorService.java:144)
> at 
> 

[jira] [Commented] (YARN-7684) The Total Memory and VCores display in the yarn UI is not correct with labeled node

2017-12-29 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16306077#comment-16306077
 ] 

genericqa commented on YARN-7684:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  7m 
50s{color} | {color:red} Docker failed to build yetus/hadoop:c420dfe. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7684 |
| GITHUB PR | https://github.com/apache/hadoop/pull/320 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19054/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> The Total Memory  and VCores display in the yarn UI is not correct with 
> labeled node
> 
>
> Key: YARN-7684
> URL: https://issues.apache.org/jira/browse/YARN-7684
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Affects Versions: 2.7.3
>Reporter: Zhao Yi Ming
>Assignee: Zhao Yi Ming
>  Labels: patch
> Fix For: 2.7.3
>
> Attachments: YARN-7684-branch-2.7.3.001.patch, yarn_issue.pdf
>
>
> The Total Memory  and VCores display in the yarn UI is not correct with 
> labeled node
> recreate steps:
> 1. should have a hadoop cluster
> 2. enabled the yarn Node Labels feature
> 3. create a label eg: yarn rmadmin -addToClusterNodeLabels "test"
> 4. add a node into the label eg: yarn rmadmin -replaceLabelsOnNode 
> "zhaoyim02.com=test"
> 5. then go to the yarn UI http://:8088/cluster/nodes



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7690) expose reserved memory/Vcores of nodemanager at webUI

2017-12-29 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16306072#comment-16306072
 ] 

genericqa commented on YARN-7690:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 35s{color} 
| {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 1 new + 13 unchanged - 0 fixed = 14 total (was 13) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 3 new + 20 unchanged - 0 fixed = 23 total (was 20) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 16s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesNodes |
|   | hadoop.yarn.server.resourcemanager.webapp.TestNodesPage |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7690 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12903982/YARN-7690.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9ea35f94cf09 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5bf7e59 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| javac |