[jira] [Commented] (YARN-5927) BaseContainerManagerTest::waitForNMContainerState timeout accounting is not accurate

2017-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027683#comment-16027683
 ] 

Hadoop QA commented on YARN-5927:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
42s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
39s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-5927 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12851399/YARN-5927.03.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 67adc2d236a7 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 89bb8bf |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/16037/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16037/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16037/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> BaseContainerManagerTest::waitForNMContainerState timeout accounting is not 
> accurate
> 
>
> Key: YARN-5927
> 

[jira] [Commented] (YARN-5927) BaseContainerManagerTest::waitForNMContainerState timeout accounting is not accurate

2017-05-27 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027677#comment-16027677
 ] 

Kai Sasaki commented on YARN-5927:
--

[~ka...@cloudera.com] I updated accordingly. Could you take a look when you 
have time?

> BaseContainerManagerTest::waitForNMContainerState timeout accounting is not 
> accurate
> 
>
> Key: YARN-5927
> URL: https://issues.apache.org/jira/browse/YARN-5927
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Kai Sasaki
>Priority: Trivial
>  Labels: newbie
> Attachments: YARN-5917.01.patch, YARN-5917.02.patch, 
> YARN-5927.03.patch
>
>
> See below: timeoutSecs is incremented twice per iteration, and we sleep right 
> away, before even checking the observed value for the first time.
> {code}
> do {
>   Thread.sleep(2000);
>  ...
>   timeoutSecs += 2;
> } while (!finalStates.contains(currentState)
> && timeoutSecs++ < timeOutMax);
> {code}
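For comparison, here is a minimal sketch of a loop that keeps the accounting consistent: check the state before sleeping, and add 2 seconds exactly once per 2-second sleep. This is only a sketch, not the committed patch; ContainerState and currentState() below are stand-ins for the test's NM state lookup.

{code}
import java.util.EnumSet;

// Sketch only, not the committed patch: check the state before sleeping and
// count 2 seconds exactly once per 2-second sleep. ContainerState and
// currentState() are assumed stand-ins for the test's NM state lookup.
public class WaitLoopSketch {
  enum ContainerState { RUNNING, DONE }

  static ContainerState currentState(long startMillis) {
    // Pretend the container reaches DONE after roughly 4 seconds.
    return System.currentTimeMillis() - startMillis > 4000
        ? ContainerState.DONE : ContainerState.RUNNING;
  }

  public static void main(String[] args) throws InterruptedException {
    EnumSet<ContainerState> finalStates = EnumSet.of(ContainerState.DONE);
    int timeOutMax = 10;                         // seconds
    int timeoutSecs = 0;
    long start = System.currentTimeMillis();
    ContainerState state = currentState(start);  // check before the first sleep
    while (!finalStates.contains(state) && timeoutSecs < timeOutMax) {
      Thread.sleep(2000);
      timeoutSecs += 2;                          // single increment per iteration
      state = currentState(start);
    }
    System.out.println("Reached " + state + " after ~" + timeoutSecs + "s");
  }
}
{code}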



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4925) ContainerRequest in AMRMClient, application should be able to specify nodes/racks together with nodeLabelExpression

2017-05-27 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027623#comment-16027623
 ] 

Jonathan Hung commented on YARN-4925:
-

Thanks [~bibinchundatt]. I am planning on working on this as well.

It seems the test failures are possibly related to YARN-5208, which depends on 
HADOOP-12954. I created YARN-6662 and HADOOP-14463 for these issues, respectively.

> ContainerRequest in AMRMClient, application should be able to specify 
> nodes/racks together with nodeLabelExpression
> ---
>
> Key: YARN-4925
> URL: https://issues.apache.org/jira/browse/YARN-4925
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: release-blocker
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: 0001-YARN-4925.patch, 0002-YARN-4925.patch, 
> YARN-4925-branch-2.7.001.patch
>
>
> Currently, with node labels, AMRMClient cannot specify node labels together 
> with node/rack requests. For applications like Spark, NODE_LOCAL requests 
> cannot be made with a label expression.
> As per the check in {{AMRMClientImpl#checkNodeLabelExpression}}:
> {noformat}
> // Don't allow specify node label against ANY request
> if ((containerRequest.getRacks() != null && 
> (!containerRequest.getRacks().isEmpty()))
> || 
> (containerRequest.getNodes() != null && 
> (!containerRequest.getNodes().isEmpty()))) {
>   throw new InvalidContainerRequestException(
>   "Cannot specify node label with rack and node");
> }
> {noformat}
> In {{AppSchedulingInfo#updateResourceRequests}} we reset the labels to that 
> of the OFF-SWITCH request. 
> The above check is not required for the ContainerRequest ask. /cc [~wangda], 
> thank you for confirming.
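To make the limitation concrete, here is a hedged sketch of the kind of request an AM would like to make; the 6-argument ContainerRequest constructor taking a node-label expression, the host name, and the label value are assumptions for illustration. Under the check quoted above, such a request is currently rejected with InvalidContainerRequestException because nodes are set together with a label.

{code}
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient;

// Sketch only: a NODE_LOCAL ask combined with a node label expression.
public class NodeLocalLabelRequestSketch {
  public static AMRMClient.ContainerRequest build() {
    return new AMRMClient.ContainerRequest(
        Resource.newInstance(1024, 1),           // 1 GB, 1 vcore
        new String[] {"host1.example.com"},      // NODE_LOCAL ask (hypothetical host)
        null,                                    // no rack constraint
        Priority.newInstance(1),
        true,                                    // relaxLocality
        "labelX");                               // node label expression (hypothetical)
  }
}
{code}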



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6662) Port YARN-5208 to branch-2.8, branch-2.7

2017-05-27 Thread Jonathan Hung (JIRA)
Jonathan Hung created YARN-6662:
---

 Summary: Port YARN-5208 to branch-2.8, branch-2.7
 Key: YARN-6662
 URL: https://issues.apache.org/jira/browse/YARN-6662
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jonathan Hung
Assignee: Jonathan Hung






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6661) Too much CLEANUP event hang ApplicationMasterLauncher thread pool

2017-05-27 Thread JackZhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JackZhou updated YARN-6661:
---
Issue Type: Bug  (was: Improvement)

> Too much CLEANUP event hang ApplicationMasterLauncher thread pool
> -
>
> Key: YARN-6661
> URL: https://issues.apache.org/jira/browse/YARN-6661
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.2
> Environment: hadoop 2.7.2 
>Reporter: JackZhou
> Fix For: 2.9.0
>
>
> Someone else has already reported a similar problem and fixed it; see 
> https://issues.apache.org/jira/browse/YARN-3809 for details.
> But I think that fix does not solve the problem completely. Below is the 
> problem I encountered:
> There are about 1000 nodes in my Hadoop cluster, and I submitted about 1800 
> apps.
> I failed over my active RM, and the RM recovered all 1800 apps.
> When an application is recovered, the RM waits for its AM container to 
> register itself. But there is a bug in my AM (introduced intentionally), so 
> it never registers.
> So the RM waits about 10 minutes for the AM to expire and then sends a 
> CLEANUP event to the ApplicationMasterLauncher thread pool. Because there 
> are about 1800 apps, this ties up the ApplicationMasterLauncher thread pool 
> for a long time. I have already applied the patch 
> (https://issues.apache.org/jira/secure/attachment/12740804/YARN-3809.03.patch),
>  so a single CLEANUP event blocks a thread for 10 * 20 = 200s. But I have 
> 1800 apps, so each thread is blocked for 1800 / 50 * 200s = 7200s, about 2 
> hours.
> Because the AM has not registered itself within 10 minutes, the RM retries 
> and creates a new application attempt.
> The new attempt is allocated a container by the RM and sends a LAUNCH event 
> to the ApplicationMasterLauncher thread pool.
> Because the 1800 CLEANUP events keep the 50 threads busy for about 2 hours, 
> the attempt cannot start its AM container within 10 minutes.
> It then expires as well and sends another CLEANUP event to the 
> ApplicationMasterLauncher thread pool.
> As you can see, none of my applications can actually run.
> Each of them has 5 application attempts, as follows, and each keeps retrying:
> appattempt_1495786030132_4000_05
> appattempt_1495786030132_4000_04
> appattempt_1495786030132_4000_03
> appattempt_1495786030132_4000_02
> appattempt_1495786030132_4000_01
> So all of my apps hung for several hours, and none of them could really run.
> I think this is a bug! We could treat CLEANUP and LAUNCH as different events 
> and use a separate thread (or some other mechanism) to handle LAUNCH events.
> Sorry, my English is poor; I am not sure I have described this clearly.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6661) Too much CLEANUP event hang ApplicationMasterLauncher thread pool

2017-05-27 Thread JackZhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JackZhou updated YARN-6661:
---
Description: 
Someone else has already reported a similar problem and fixed it; see 
https://issues.apache.org/jira/browse/YARN-3809 for details.
But I think that fix does not solve the problem completely. Below is the 
problem I encountered:
There are about 1000 nodes in my Hadoop cluster, and I submitted about 1800 
apps.
I failed over my active RM, and the RM recovered all 1800 apps.
When an application is recovered, the RM waits for its AM container to 
register itself. But there is a bug in my AM (introduced intentionally), so it 
never registers.
So the RM waits about 10 minutes for the AM to expire and then sends a CLEANUP 
event to the ApplicationMasterLauncher thread pool. Because there are about 
1800 apps, this ties up the ApplicationMasterLauncher thread pool for a long 
time. I have already applied the patch 
(https://issues.apache.org/jira/secure/attachment/12740804/YARN-3809.03.patch),
 so a single CLEANUP event blocks a thread for 10 * 20 = 200s. But I have 1800 
apps, so each thread is blocked for 1800 / 50 * 200s = 7200s, about 2 hours.
Because the AM has not registered itself within 10 minutes, the RM retries and 
creates a new application attempt.
The new attempt is allocated a container by the RM and sends a LAUNCH event to 
the ApplicationMasterLauncher thread pool.
Because the 1800 CLEANUP events keep the 50 threads busy for about 2 hours, 
the attempt cannot start its AM container within 10 minutes.
It then expires as well and sends another CLEANUP event to the 
ApplicationMasterLauncher thread pool.
As you can see, none of my applications can actually run.
Each of them has 5 application attempts, as follows, and each keeps retrying:
appattempt_1495786030132_4000_05
appattempt_1495786030132_4000_04
appattempt_1495786030132_4000_03
appattempt_1495786030132_4000_02
appattempt_1495786030132_4000_01
So all of my apps hung for several hours, and none of them could really run.
I think this is a bug! We could treat CLEANUP and LAUNCH as different events 
and use a separate thread (or some other mechanism) to handle LAUNCH events.
Sorry, my English is poor; I am not sure I have described this clearly.

> Too much CLEANUP event hang ApplicationMasterLauncher thread pool
> -
>
> Key: YARN-6661
> URL: https://issues.apache.org/jira/browse/YARN-6661
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.2
> Environment: hadoop 2.7.2 
>Reporter: JackZhou
> Fix For: 2.9.0
>
>
> Someone else has already reported a similar problem and fixed it; see 
> https://issues.apache.org/jira/browse/YARN-3809 for details.
> But I think that fix does not solve the problem completely. Below is the 
> problem I encountered:
> There are about 1000 nodes in my Hadoop cluster, and I submitted about 1800 
> apps.
> I failed over my active RM, and the RM recovered all 1800 apps.
> When an application is recovered, the RM waits for its AM container to 
> register itself. But there is a bug in my AM (introduced intentionally), so 
> it never registers.
> So the RM waits about 10 minutes for the AM to expire and then sends a 
> CLEANUP event to the ApplicationMasterLauncher thread pool. Because there 
> are about 1800 apps, this ties up the ApplicationMasterLauncher thread pool 
> for a long time. I have already applied the patch 
> (https://issues.apache.org/jira/secure/attachment/12740804/YARN-3809.03.patch),
>  so a single CLEANUP event blocks a thread for 10 * 20 = 200s. But I have 
> 1800 apps, so each thread is blocked for 1800 / 50 * 200s = 7200s, about 2 
> hours.
> Because the AM has not registered itself within 10 minutes, the RM retries 
> and creates a new application attempt.
> The new attempt is allocated a container by the RM and sends a LAUNCH event 
> to the ApplicationMasterLauncher thread pool.
> Because the 1800 CLEANUP events keep the 50 threads busy for about 2 hours, 
> the attempt cannot start its AM container within 10 minutes.
> It then expires as well and sends another CLEANUP event to the 
> ApplicationMasterLauncher thread pool.
> As you can see, none of my applications can actually run.
> Each of them has 5 application attempts, as follows, and each keeps retrying:
> appattempt_1495786030132_4000_05
> appattempt_1495786030132_4000_04
> appattempt_1495786030132_4000_03
> appattempt_1495786030132_4000_02
> appattempt_1495786030132_4000_01
> So all of my apps hung for several hours, and none of them could really run.
> I think this is a bug! We could treat CLEANUP and LAUNCH as different events 
> and use a separate thread (or some other mechanism) to handle LAUNCH events.

[jira] [Commented] (YARN-6661) Too much CLEANUP event hang ApplicationMasterLauncher thread pool

2017-05-27 Thread JackZhou (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027391#comment-16027391
 ] 

JackZhou commented on YARN-6661:


Someone else has already reported a similar problem and fixed it; see 
https://issues.apache.org/jira/browse/YARN-3809 for details.

But I think that fix does not solve the problem completely. Below is the 
problem I encountered:
There are about 1000 nodes in my Hadoop cluster, and I submitted about 1800 
apps.
I failed over my active RM, and the RM recovered all 1800 apps.
When an application is recovered, the RM waits for its AM container to 
register itself. But there is a bug in my AM (introduced intentionally), so it 
never registers.

So the RM waits about 10 minutes for the AM to expire and then sends a CLEANUP 
event to the ApplicationMasterLauncher thread pool. Because there are about 
1800 apps, this ties up the ApplicationMasterLauncher thread pool for a long 
time. I have already applied the patch 
(https://issues.apache.org/jira/secure/attachment/12740804/YARN-3809.03.patch),
 so a single CLEANUP event blocks a thread for 10 * 20 = 200s. But I have 1800 
apps, so each thread is blocked for 1800 / 50 * 200s = 7200s, about 2 hours.

Because the AM has not registered itself within 10 minutes, the RM retries and 
creates a new application attempt.
The new attempt is allocated a container by the RM and sends a LAUNCH event to 
the ApplicationMasterLauncher thread pool.
Because the 1800 CLEANUP events keep the 50 threads busy for about 2 hours, 
the attempt cannot start its AM container within 10 minutes.
It then expires as well and sends another CLEANUP event to the 
ApplicationMasterLauncher thread pool.

As you can see, none of my applications can actually run.
Each of them has 5 application attempts, as follows, and each keeps retrying:
appattempt_1495786030132_4000_05
appattempt_1495786030132_4000_04
appattempt_1495786030132_4000_03
appattempt_1495786030132_4000_02
appattempt_1495786030132_4000_01

So all of my apps hung for several hours, and none of them could really run. 
I think this is a bug! We could treat CLEANUP and LAUNCH as different events 
and use a separate thread (or some other mechanism) to handle LAUNCH events.

Sorry, my English is poor; I am not sure I have described this clearly.
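To illustrate the proposal in the previous paragraph, here is a minimal sketch, assuming it is enough to route the two event types to separate executors. The class and names below are purely illustrative; they are not the actual ApplicationMasterLauncher code.

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative only: give CLEANUP its own pool so a backlog of slow cleanups
// cannot starve LAUNCH events for new application attempts.
public class SplitLauncherPoolsSketch {
  enum EventType { LAUNCH, CLEANUP }   // mirrors the event types discussed above

  private final ExecutorService launchPool = Executors.newFixedThreadPool(50);
  private final ExecutorService cleanupPool = Executors.newFixedThreadPool(50);

  public void handle(EventType type, Runnable work) {
    if (type == EventType.LAUNCH) {
      launchPool.submit(work);         // launches are never queued behind cleanups
    } else {
      cleanupPool.submit(work);
    }
  }
}
{code}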


> Too much CLEANUP event hang ApplicationMasterLauncher thread pool
> -
>
> Key: YARN-6661
> URL: https://issues.apache.org/jira/browse/YARN-6661
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.2
> Environment: hadoop 2.7.2 
>Reporter: JackZhou
> Fix For: 2.9.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6661) Too much CLEANUP event hang ApplicationMasterLauncher thread pool

2017-05-27 Thread JackZhou (JIRA)
JackZhou created YARN-6661:
--

 Summary: Too much CLEANUP event hang ApplicationMasterLauncher 
thread pool
 Key: YARN-6661
 URL: https://issues.apache.org/jira/browse/YARN-6661
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: fairscheduler
Affects Versions: 2.7.2
 Environment: hadoop 2.7.2 
Reporter: JackZhou
 Fix For: 2.9.0






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6111) Rumen input does't work in SLS

2017-05-27 Thread YuJie Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027309#comment-16027309
 ] 

YuJie Huang commented on YARN-6111:
---

Ok. Thank you very much!

> Rumen input does't work in SLS
> --
>
> Key: YARN-6111
> URL: https://issues.apache.org/jira/browse/YARN-6111
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Affects Versions: 2.6.0, 2.7.3, 3.0.0-alpha2
> Environment: ubuntu14.0.4 os
>Reporter: YuJie Huang
>Assignee: Yufei Gu
>  Labels: test
> Fix For: 3.0.0-alpha4
>
> Attachments: YARN-6111.001.patch
>
>
> Hi guys,
> I am trying to learn how to use SLS.
> I would like to get the file realtimetrack.json, but it only contains "[]" 
> at the end of a simulation. This is the command I use to run the instance:
> HADOOP_HOME $ bin/slsrun.sh --input-rumen=sample-data/2jobsmin-rumen-jh.json 
> --output-dir=sample-data 
> All other files, including metrics, appear to be properly populated. I can 
> also trace it on the web at http://localhost:10001/simulate
> Can someone help?
> Thanks



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3666) Federation Intercepting and propagating AM-RM communications (part one: home RM only)

2017-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027272#comment-16027272
 ] 

Hadoop QA commented on YARN-3666:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
11s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} YARN-2915 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
56s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in YARN-2915 has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 17s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 4 new + 17 unchanged - 0 fixed = 21 total (was 17) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
3s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 1 new + 5 unchanged - 0 fixed = 6 total (was 5) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
12s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
|  |  Redundant nullcheck of 
org.apache.hadoop.yarn.server.nodemanager.amrmproxy.FederationInterceptor.amRegistrationResponse
 which is known to be null in 
org.apache.hadoop.yarn.server.nodemanager.amrmproxy.FederationInterceptor.registerApplicationMaster(RegisterApplicationMasterRequest)
  Redundant null check at FederationInterceptor.java:is known to be null in 
org.apache.hadoop.yarn.server.nodemanager.amrmproxy.FederationInterceptor.registerApplicationMaster(RegisterApplicationMasterRequest)
  Redundant null check at FederationInterceptor.java:[line 190] |
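For context, a minimal illustration of the pattern this FindBugs category reports, i.e. a null check on a value that is statically known to be null. This is purely illustrative and is not the FederationInterceptor code.

{code}
// Illustrative only: the local is assigned null on the line above the check,
// so FindBugs can prove the comparison is redundant.
public class RedundantNullCheckExample {
  public String example() {
    String response = null;      // statically known to be null here
    if (response == null) {      // FindBugs: redundant null check of known-null value
      response = "registered";
    }
    return response;
  }
}
{code}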
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-3666 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12870186/YARN-3666-YARN-2915.v6.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c9616e557cad 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality |