[jira] [Updated] (YARN-7592) yarn.federation.failover.enabled missing in yarn-default.xml

2018-09-05 Thread Bibin A Chundatt (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-7592:
---
Attachment: IssueReproduce.patch

> yarn.federation.failover.enabled missing in yarn-default.xml
> 
>
> Key: YARN-7592
> URL: https://issues.apache.org/jira/browse/YARN-7592
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0-beta1
>Reporter: Gera Shegalov
>Priority: Major
> Attachments: IssueReproduce.patch
>
>
> yarn.federation.failover.enabled should be documented in yarn-default.xml. I 
> am also not sure why it should be true by default and force the HA retry 
> policy in {{RMProxy#createRMProxy}}
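For illustration, a possible yarn-default.xml entry for this property (a sketch; the description wording is mine, not from the source tree):
{code}
<property>
  <description>
    Whether the federation-aware failover/retry handling discussed in this
    issue is enabled. Currently defaults to true.
  </description>
  <name>yarn.federation.failover.enabled</name>
  <value>true</value>
</property>
{code}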



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7592) yarn.federation.failover.enabled missing in yarn-default.xml

2018-09-05 Thread Bibin A Chundatt (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-7592:
---
Attachment: (was: IssueReproduce.patch)

> yarn.federation.failover.enabled missing in yarn-default.xml
> 
>
> Key: YARN-7592
> URL: https://issues.apache.org/jira/browse/YARN-7592
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0-beta1
>Reporter: Gera Shegalov
>Priority: Major
>
> yarn.federation.failover.enabled should be documented in yarn-default.xml. I 
> am also not sure why it should be true by default and force the HA retry 
> policy in {{RMProxy#createRMProxy}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7592) yarn.federation.failover.enabled missing in yarn-default.xml

2018-09-05 Thread Bibin A Chundatt (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-7592:
---
Attachment: IssueReproduce.patch

> yarn.federation.failover.enabled missing in yarn-default.xml
> 
>
> Key: YARN-7592
> URL: https://issues.apache.org/jira/browse/YARN-7592
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0-beta1
>Reporter: Gera Shegalov
>Priority: Major
> Attachments: IssueReproduce.patch
>
>
> yarn.federation.failover.enabled should be documented in yarn-default.xml. I 
> am also not sure why it should be true by default and force the HA retry 
> policy in {{RMProxy#createRMProxy}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7592) yarn.federation.failover.enabled missing in yarn-default.xml

2018-09-05 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16605292#comment-16605292
 ] 

Bibin A Chundatt commented on YARN-7592:


Thank you [~subru] for the comment.

The issue is in NodeManager registration: the NodeManager is not able to start.
{code}
2018-09-06 11:09:16,276 INFO  [main] service.AbstractService 
(AbstractService.java:noteFailure(267)) - Service 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl failed in state 
STARTED
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
java.lang.ArrayIndexOutOfBoundsException: 0
at 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:263)
at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at 
org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at 
org.apache.hadoop.yarn.server.TestFederationCluster.testNonHANodeManagerRegistration(TestFederationCluster.java:53)
...
Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
at 
org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider.init(ConfiguredRMFailoverProxyProvider.java:62)
at 
org.apache.hadoop.yarn.client.RMProxy.createRMFailoverProxyProvider(RMProxy.java:174)
at 
org.apache.hadoop.yarn.client.RMProxy.newProxyInstance(RMProxy.java:129)
at org.apache.hadoop.yarn.client.RMProxy.createRMProxy(RMProxy.java:121)
at 
org.apache.hadoop.yarn.server.api.ServerRMProxy.createRMProxy(ServerRMProxy.java:74)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.getRMClient(NodeStatusUpdaterImpl.java:346)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:256)
... 26 more
{code}
Attaching a patch to reproduce the issue.

Currently I haven't added the FederationInterceptor to the NodeManager configuration.

> yarn.federation.failover.enabled missing in yarn-default.xml
> 
>
> Key: YARN-7592
> URL: https://issues.apache.org/jira/browse/YARN-7592
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0-beta1
>Reporter: Gera Shegalov
>Priority: Major
>
> yarn.federation.failover.enabled should be documented in yarn-default.xml. I 
> am also not sure why it should be true by default and force the HA retry 
> policy in {{RMProxy#createRMProxy}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8745) Misplaced the TestRMWebServicesFairScheduler.java file.

2018-09-05 Thread Y. SREENIVASULU REDDY (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16605276#comment-16605276
 ] 

Y. SREENIVASULU REDDY commented on YARN-8745:
-

[~bibinchundatt]
OK, I will address all those points and provide a patch.

> Misplaced the TestRMWebServicesFairScheduler.java file.
> ---
>
> Key: YARN-8745
> URL: https://issues.apache.org/jira/browse/YARN-8745
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, test
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Y. SREENIVASULU REDDY
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-8745.001.patch
>
>
> The TestRMWebServicesFairScheduler.java file exists in
> {noformat}
> hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesFairScheduler.java
> {noformat}
> But the package structure is
> {noformat}
> package org.apache.hadoop.yarn.server.resourcemanager.webapp.fairscheduler;
> {noformat}
> so the file should be moved to the proper package.
> This issue was triggered from YARN-7451.
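For reference, the location implied by that package declaration would be:
{noformat}
hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/fairscheduler/TestRMWebServicesFairScheduler.java
{noformat}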



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8747) ui2 page loading failed due to js error under some time zone configuration

2018-09-05 Thread collinma (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16605151#comment-16605151
 ] 

collinma commented on YARN-8747:


More information for anyone who wants to reproduce: the browser is Chrome 
(simplified Chinese) and the OS is Windows 7 (simplified Chinese).

> ui2 page loading failed due to js error under some time zone configuration
> --
>
> Key: YARN-8747
> URL: https://issues.apache.org/jira/browse/YARN-8747
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Affects Versions: 3.1.1
>Reporter: collinma
>Priority: Blocker
> Attachments: image-2018-09-05-18-54-03-991.png
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> We deployed Hadoop 3.1.1 on CentOS 7.2 servers whose time zone is configured 
> as GMT+8; the web browser time zone is GMT+8 too. The YARN UI page failed to 
> load due to a JS error:
>  
> !image-2018-09-05-18-54-03-991.png!
> The moment-timezone JS component raised that error. This has been fixed in 
> moment-timezone v0.5.1 
> ([see|https://github.com/moment/moment-timezone/issues/294]). We need to 
> update the moment-timezone version accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5597) YARN Federation improvements

2018-09-05 Thread Subru Krishnan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-5597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16605094#comment-16605094
 ] 

Subru Krishnan commented on YARN-5597:
--

[~bibinchundatt], we use an RDBMS (SQL) for the Federation store and ZK for the 
RM store because 1) there's no leader election in Federation, and 2) we only 
store metadata, for which a DB performs great and which is not what ZK is 
intended for (IMHO, ZK has been abused/misused a lot).

That said, [~elgoiri] has a deployment with ZK for both Federation and RM 
stores, so he should be able to guide you.

> YARN Federation improvements
> 
>
> Key: YARN-5597
> URL: https://issues.apache.org/jira/browse/YARN-5597
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
>Priority: Major
>
> This umbrella JIRA tracks set of improvements over the YARN Federation MVP 
> (YARN-2915)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7592) yarn.federation.failover.enabled missing in yarn-default.xml

2018-09-05 Thread Subru Krishnan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16605092#comment-16605092
 ] 

Subru Krishnan commented on YARN-7592:
--

[~bibinchundatt]/[~jira.shegalov], I have tested multiple times with a similar 
setup (for the 2.9 release) and never faced any issues.

FYI, FEDERATION_FAILOVER_ENABLED is automatically set by 
{{FederationProxyProviderUtil}} if HA is enabled, as you can see 
[here|https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/failover/FederationProxyProviderUtil.java#L128].
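A minimal sketch of that behavior as described (not the literal source of {{FederationProxyProviderUtil}}):
{code}
// When RM HA is enabled, the util transparently turns on federation failover
// on the client-side configuration before the RM proxy is created.
if (HAUtil.isHAEnabled(conf)) {
  conf.setBoolean(YarnConfiguration.FEDERATION_FAILOVER_ENABLED, true);
}
{code}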

> yarn.federation.failover.enabled missing in yarn-default.xml
> 
>
> Key: YARN-7592
> URL: https://issues.apache.org/jira/browse/YARN-7592
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0-beta1
>Reporter: Gera Shegalov
>Priority: Major
>
> yarn.federation.failover.enabled should be documented in yarn-default.xml. I 
> am also not sure why it should be true by default and force the HA retry 
> policy in {{RMProxy#createRMProxy}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8730) TestRMWebServiceAppsNodelabel#testAppsRunning fails

2018-09-05 Thread Jason Lowe (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16605013#comment-16605013
 ] 

Jason Lowe commented on YARN-8730:
--

Thanks for posting the test-patch results!  I agree the test failures and ASF 
warnings are unrelated.  Committing this.

> TestRMWebServiceAppsNodelabel#testAppsRunning fails
> ---
>
> Key: YARN-8730
> URL: https://issues.apache.org/jira/browse/YARN-8730
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.8.4
>Reporter: Jason Lowe
>Assignee: Eric Payne
>Priority: Major
> Attachments: YARN-8730.001.branch-2.8.patch
>
>
> TestRMWebServiceAppsNodelabel is failing in branch-2.8:
> {noformat}
> Running 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServiceAppsNodelabel
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.473 sec <<< 
> FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServiceAppsNodelabel
> testAppsRunning(org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServiceAppsNodelabel)
>   Time elapsed: 6.708 sec  <<< FAILURE!
> org.junit.ComparisonFailure: partition amused 
> expected:<{"[]memory":1024,"vCores...> but 
> was:<{"[res":{"memory":1024,"memorySize":1024,"virtualCores":1},"]memory":1024,"vCores...>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServiceAppsNodelabel.verifyResource(TestRMWebServiceAppsNodelabel.java:222)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServiceAppsNodelabel.testAppsRunning(TestRMWebServiceAppsNodelabel.java:205)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8706) DelayedProcessKiller is executed for Docker containers even though docker stop sends a KILL signal after the specified grace period

2018-09-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16604930#comment-16604930
 ] 

Hadoop QA commented on YARN-8706:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
49s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
33s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | YARN-8706 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938524/YARN-8706.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6cae489c93dc 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9af96d4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21772/testReport/ |
| Max. process+thread count | 302 (vs. ulimit of 1) |
| 

[jira] [Commented] (YARN-8659) RMWebServices returns only RUNNING apps when filtered with queue

2018-09-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16604911#comment-16604911
 ] 

Hadoop QA commented on YARN-8659:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 32s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 8 new + 44 unchanged - 4 fixed = 52 total (was 48) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 50s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}126m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestApplicationMasterService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | YARN-8659 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938519/YARN-8659.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ffa8d4321477 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9af96d4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/21771/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 

[jira] [Commented] (YARN-6456) Allow administrators to set a single ContainerRuntime for all containers

2018-09-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16604874#comment-16604874
 ] 

Hadoop QA commented on YARN-6456:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
44s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
11s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 
24s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | YARN-6456 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938517/YARN-6456.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux be38b7a1531d 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (YARN-8706) DelayedProcessKiller is executed for Docker containers even though docker stop sends a KILL signal after the specified grace period

2018-09-05 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-8706:

Attachment: YARN-8706.004.patch

> DelayedProcessKiller is executed for Docker containers even though docker 
> stop sends a KILL signal after the specified grace period
> ---
>
> Key: YARN-8706
> URL: https://issues.apache.org/jira/browse/YARN-8706
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>  Labels: docker
> Attachments: YARN-8706.001.patch, YARN-8706.002.patch, 
> YARN-8706.003.patch, YARN-8706.004.patch
>
>
> {{DockerStopCommand}} adds a grace period of 10 seconds.
> 10 seconds is also the default grace period used by docker stop:
>  [https://docs.docker.com/engine/reference/commandline/stop/]
> From the docker stop documentation:
> {quote}the main process inside the container will receive {{SIGTERM}}, and 
> after a grace period, {{SIGKILL}}.
> {quote}
> There is a {{DelayedProcessKiller}} in {{ContainerExecutor}} which executes 
> for all containers after a delay when {{sleepDelayBeforeSigKill>0}}. By 
> default this is set to {{250 milliseconds}}, so irrespective of the 
> container type, it always gets executed.
>  
> For a docker container, {{docker stop}} takes care of sending a {{SIGKILL}} 
> after the grace period:
> - when sleepDelayBeforeSigKill > 10 seconds, there is no point in 
> executing DelayedProcessKiller
> - when sleepDelayBeforeSigKill < 1 second, the grace period should be 
> the smallest value, which is 1 second, because we force a kill 
> after 250 ms anyway
>  
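A minimal sketch of the grace-period logic proposed above (the setter name is hypothetical, not the actual patch):
{code}
// Derive the docker stop grace period from the NM's SIGKILL delay, floored at
// 1 second, since docker stop itself sends SIGKILL once the period expires.
long sleepDelayMs = conf.getLong(
    YarnConfiguration.NM_SLEEP_DELAY_BEFORE_SIGKILL_MS, 250L);
int gracePeriodSecs = Math.max(1, (int) (sleepDelayMs / 1000));
dockerStopCommand.setGracePeriod(gracePeriodSecs); // hypothetical setter
{code}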



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8706) DelayedProcessKiller is executed for Docker containers even though docker stop sends a KILL signal after the specified grace period

2018-09-05 Thread Chandni Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16604800#comment-16604800
 ] 

Chandni Singh commented on YARN-8706:
-

I deprecated {{DockerStopCommand}}, which is why there are more deprecation 
warnings.
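For context, a sketch of the change in its assumed form (not the exact diff):
{code}
// Deprecating the class makes javac emit a deprecation warning at every
// remaining use site, which accounts for the new warnings in the QA run.
@Deprecated
public class DockerStopCommand extends DockerCommand {
  // ...
}
{code}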

> DelayedProcessKiller is executed for Docker containers even though docker 
> stop sends a KILL signal after the specified grace period
> ---
>
> Key: YARN-8706
> URL: https://issues.apache.org/jira/browse/YARN-8706
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>  Labels: docker
> Attachments: YARN-8706.001.patch, YARN-8706.002.patch, 
> YARN-8706.003.patch
>
>
> {{DockerStopCommand}} adds a grace period of 10 seconds.
> 10 seconds is also the default grace period used by docker stop:
>  [https://docs.docker.com/engine/reference/commandline/stop/]
> From the docker stop documentation:
> {quote}the main process inside the container will receive {{SIGTERM}}, and 
> after a grace period, {{SIGKILL}}.
> {quote}
> There is a {{DelayedProcessKiller}} in {{ContainerExecutor}} which executes 
> for all containers after a delay when {{sleepDelayBeforeSigKill>0}}. By 
> default this is set to {{250 milliseconds}}, so irrespective of the 
> container type, it always gets executed.
>  
> For a docker container, {{docker stop}} takes care of sending a {{SIGKILL}} 
> after the grace period:
> - when sleepDelayBeforeSigKill > 10 seconds, there is no point in 
> executing DelayedProcessKiller
> - when sleepDelayBeforeSigKill < 1 second, the grace period should be 
> the smallest value, which is 1 second, because we force a kill 
> after 250 ms anyway
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8706) DelayedProcessKiller is executed for Docker containers even though docker stop sends a KILL signal after the specified grace period

2018-09-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16604796#comment-16604796
 ] 

Hadoop QA commented on YARN-8706:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
8s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  7m  8s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn generated 6 new + 108 unchanged - 
0 fixed = 114 total (was 108) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 
32s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}105m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | YARN-8706 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938505/YARN-8706.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b6f01d6270ba 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e780556 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| javac | 

[jira] [Commented] (YARN-7592) yarn.federation.failover.enabled missing in yarn-default.xml

2018-09-05 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16604773#comment-16604773
 ] 

Bibin A Chundatt commented on YARN-7592:


[~jira.shegalov]

The following is my understanding, based on the discussion in YARN-8434.

As per this 
[comment|https://issues.apache.org/jira/browse/YARN-8434?focusedCommentId=16539415&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16539415]
 from [~subru], FederationRMFailoverProxyProvider is internally set for 
connection retry handling.

IIUC, the federation check in {{RMProxy#createRMProxy}} is not required. Also, 
the following code in {{RMProxy#newProxyInstance}} seems to have an issue:

{code}
  private static <T> T newProxyInstance(final YarnConfiguration conf,
      final Class<T> protocol, RMProxy<T> instance, RetryPolicy retryPolicy)
      throws IOException {
    if (HAUtil.isHAEnabled(conf) || HAUtil.isFederationEnabled(conf)) {
      RMFailoverProxyProvider<T> provider =
          instance.createRMFailoverProxyProvider(conf, protocol);
      return (T) RetryProxy.create(protocol, provider, retryPolicy);
    } else {
      InetSocketAddress rmAddress = instance.getRMAddress(conf, protocol);
      LOG.info("Connecting to ResourceManager at " + rmAddress);
      T proxy = instance.getProxy(conf, protocol, rmAddress);
      return (T) RetryProxy.create(protocol, proxy, retryPolicy);
    }
  }
{code}

Topology: Router + 1 RM (non-HA) + 2 NMs with Federation enabled.
{{ConfiguredRMFailoverProxyProvider}} gets initialized as the failover provider 
for the ServerProxy and fails to connect to the RM. Exception at:
{code}
    this.rmServiceIds = rmIds.toArray(new String[rmIds.size()]);
    conf.set(YarnConfiguration.RM_HA_ID, rmServiceIds[currentProxyIndex]);
{code}
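A minimal sketch of my reading of the failure (assuming {{rmIds}} comes from {{HAUtil#getRMHAIds}}): with Federation enabled but HA disabled, the rm-ids list is empty, so the index lookup throws.
{code}
// yarn.resourcemanager.ha.rm-ids is unset in a non-HA setup, so rmIds is empty
Collection<String> rmIds = HAUtil.getRMHAIds(conf);
String[] rmServiceIds = rmIds.toArray(new String[rmIds.size()]); // length 0
conf.set(YarnConfiguration.RM_HA_ID,
    rmServiceIds[currentProxyIndex]); // ArrayIndexOutOfBoundsException: 0
{code}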

cc: [~subru]





> yarn.federation.failover.enabled missing in yarn-default.xml
> 
>
> Key: YARN-7592
> URL: https://issues.apache.org/jira/browse/YARN-7592
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0-beta1
>Reporter: Gera Shegalov
>Priority: Major
>
> yarn.federation.failover.enabled should be documented in yarn-default.xml. I 
> am also not sure why it should be true by default and force the HA retry 
> policy in {{RMProxy#createRMProxy}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8659) RMWebServices returns only RUNNING apps when filtered with queue

2018-09-05 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16604755#comment-16604755
 ] 

Szilard Nemeth edited comment on YARN-8659 at 9/5/18 6:23 PM:
--

Hi [~Prabhu Joseph]!
I found the root cause of this bug and was able to reproduce it with 2 test 
cases.
When {{ClientRMService#getApplications}} is invoked, it first checks whether 
the user filters for queues. If so, it iterates over the specified queues and 
retrieves the apps bound to each queue from the scheduler. Then, as a last step, 
a tricky iterator is set up that iterates over the collected application 
attempt IDs (since we can have multiple queues and each queue can have many 
apps associated with it, it's a list of lists).
See the iterator here: 
https://github.com/apache/hadoop/blob/9af96d4ed4b6f80d3ca53a2b003d2ef768650dd4/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java#L834-L859

What is essentially broken is the code that gets the application attempt IDs 
from the scheduler: 
https://github.com/apache/hadoop/blob/9af96d4ed4b6f80d3ca53a2b003d2ef768650dd4/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java#L829
The scheduler only returns the scheduled applications, not the finished ones. 
This means that whatever is specified for the application state parameter, the 
code only returns applications that are currently executing.

Let's go back to what you described above: 
1. Just the RUNNING apps are returned if any queue is specified, because the 
call to {{scheduler.getAppsInQueue(queue)}} only returns apps that are 
executing.
2. No applications are returned if the queue parameter is specified and the 
state parameter is set to FINISHED. 
As described above, this is faulty even if you don't specify a state parameter 
at all, as the call to {{scheduler.getAppsInQueue(queue)}} only returns apps 
that are executing, but not the other ones.

So basically, the solution is to remove the tricky iterator and simply iterate 
over the apps retrieved from RMContext.
This should work, as the current code also gets the applications from that 
collection: 
https://github.com/apache/hadoop/blob/9af96d4ed4b6f80d3ca53a2b003d2ef768650dd4/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java#L850

We should keep API compatibility in mind, though. 
With the current implementation, apps are only returned for a queue if they are 
executing.
With the code changes in my patch, if the user specifies the queue filter, the 
endpoint returns apps regardless of their state.

If we think about the apps endpoint as a set of filter parameters applied to 
applications, it seems more logical to return the apps bound to a queue, 
regardless of their state, when the only filter is the queue filter.
If the user wants the apps that are executing and bound to a queue, they should 
specify both the queue and the state parameters.

[~templedf], [~leftnoteasy], [~haibochen]: could you please share your opinions 
on what's more important: keeping API compatibility or fixing this bug?

Thanks!
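A minimal sketch of that direction (names are mine, not the patch):
{code}
// Filter over all apps known to RMContext instead of calling
// scheduler.getAppsInQueue(queue), so that apps in terminal states
// are matched by the queue filter as well.
for (RMApp app : rmContext.getRMApps().values()) {
  if (!queues.isEmpty() && !queues.contains(app.getQueue())) {
    continue; // queue filter
  }
  // apply the remaining filters (user, states, ...) and collect the report
}
{code}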





was (Author: snemeth):
Hi [~Prabhu Joseph]!
I found the root cause of this bug and was able to reproduce it with 2 test 
cases.
When {{ClientRMService#getApplications}} is invoked, it first checks whether 
the user filters for queues. If so, it iterates over the specified queues and 
retrieves the apps bound to each queue from the scheduler. Then, as a last step, 
a tricky iterator is set up that iterates over the collected application 
attempt IDs (since we can have multiple queues and each queue can have many 
apps associated with it, it's a list of lists).
See the iterator here: 
https://github.com/apache/hadoop/blob/9af96d4ed4b6f80d3ca53a2b003d2ef768650dd4/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java#L834-L859

What is essentially broken is the code that gets the application attempt IDs 
from the scheduler: 
https://github.com/apache/hadoop/blob/9af96d4ed4b6f80d3ca53a2b003d2ef768650dd4/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java#L829
The scheduler only returns the scheduled applications, not the finished ones. 
This means that whatever is specified for the application state parameter, the 
code only returns applications that are currently executing.


[jira] [Updated] (YARN-8659) RMWebServices returns only RUNNING apps when filtered with queue

2018-09-05 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-8659:
-
Attachment: YARN-8659.001.patch

> RMWebServices returns only RUNNING apps when filtered with queue
> 
>
> Key: YARN-8659
> URL: https://issues.apache.org/jira/browse/YARN-8659
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.7.3
>Reporter: Prabhu Joseph
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: Screen Shot 2018-08-13 at 8.01.29 PM.png, Screen Shot 
> 2018-08-13 at 8.01.52 PM.png, YARN-8659.001.patch
>
>
> RMWebServices returns only RUNNING apps when filtered with queue, and returns 
> an empty app list
> when filtered with both the FINISHED state and queue.
> http://pjoseph-script-llap3.openstacklocal:8088/ws/v1/cluster/apps?queue=default
> http://pjoseph-script-llap3.openstacklocal:8088/ws/v1/cluster/apps?states=FINISHED&queue=default



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8659) RMWebServices returns only RUNNING apps when filtered with queue

2018-09-05 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16604755#comment-16604755
 ] 

Szilard Nemeth commented on YARN-8659:
--

Hi [~Prabhu Joseph]!
I found the root cause of this bug and was able to reproduce it with 2 test 
cases.
When {{ClientRMService#getApplications}} is invoked, it first checks whether 
the user filters for queues. If so, it iterates over the specified queues and 
retrieves the apps bound to each queue from the scheduler. Then, as a last step, 
a tricky iterator is set up that iterates over the collected application 
attempt IDs (since we can have multiple queues and each queue can have many 
apps associated with it, it's a list of lists).
See the iterator here: 
https://github.com/apache/hadoop/blob/9af96d4ed4b6f80d3ca53a2b003d2ef768650dd4/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java#L834-L859

What is essentially broken is the code that gets the application attempt IDs 
from the scheduler: 
https://github.com/apache/hadoop/blob/9af96d4ed4b6f80d3ca53a2b003d2ef768650dd4/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java#L829
The scheduler only returns the scheduled applications, not the finished ones. 
This means that whatever is specified for the application state parameter, the 
code only returns applications that are currently executing.

Let's go back to what you described above: 
1. Just the RUNNING apps are returned if any queue is specified, because the 
call to {{scheduler.getAppsInQueue(queue)}} only returns apps that are 
executing.
2. No applications are returned if the queue parameter is specified and the 
state parameter is set to FINISHED. 
As described above, this is faulty even if you don't specify a state parameter 
at all, as the call to {{scheduler.getAppsInQueue(queue)}} only returns apps 
that are executing, but not the other ones.

So basically, the solution is to remove the tricky iterator and simply iterate 
over the apps retrieved from RMContext.
This should work, as the current code also gets the applications from that 
collection: 
https://github.com/apache/hadoop/blob/9af96d4ed4b6f80d3ca53a2b003d2ef768650dd4/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java#L850

We should keep API compatibility in mind, though. 
With the current implementation, apps are only returned for a queue if they are 
executing.
With the code changes in my patch, if the user specifies the queue filter, the 
endpoint returns apps regardless of their state.

If we think about the apps endpoint as a set of filter parameters applied to 
applications, it seems more logical to return the apps bound to a queue, 
regardless of their state, when the only filter is the queue filter.
If the user wants the apps that are executing and bound to a queue, they should 
specify both the queue and the state parameters.





> RMWebServices returns only RUNNING apps when filtered with queue
> 
>
> Key: YARN-8659
> URL: https://issues.apache.org/jira/browse/YARN-8659
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.7.3
>Reporter: Prabhu Joseph
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: Screen Shot 2018-08-13 at 8.01.29 PM.png, Screen Shot 
> 2018-08-13 at 8.01.52 PM.png
>
>
> RMWebServices returns only RUNNING apps when filtered with queue, and returns 
> an empty app list
> when filtered with both the FINISHED state and queue.
> http://pjoseph-script-llap3.openstacklocal:8088/ws/v1/cluster/apps?queue=default
> http://pjoseph-script-llap3.openstacklocal:8088/ws/v1/cluster/apps?states=FINISHED&queue=default



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8706) DelayedProcessKiller is executed for Docker containers even though docker stop sends a KILL signal after the specified grace period

2018-09-05 Thread Eric Badger (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16604745#comment-16604745
 ] 

Eric Badger commented on YARN-8706:
---

Thanks for the update, [~csingh]. +1 (non-binding) pending Hadoop QA

> DelayedProcessKiller is executed for Docker containers even though docker 
> stop sends a KILL signal after the specified grace period
> ---
>
> Key: YARN-8706
> URL: https://issues.apache.org/jira/browse/YARN-8706
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>  Labels: docker
> Attachments: YARN-8706.001.patch, YARN-8706.002.patch, 
> YARN-8706.003.patch
>
>
> {{DockerStopCommand}} adds a grace period of 10 seconds.
> 10 seconds is also the default grace period used by docker stop:
>  [https://docs.docker.com/engine/reference/commandline/stop/]
> From the docker stop documentation:
> {quote}the main process inside the container will receive {{SIGTERM}}, and 
> after a grace period, {{SIGKILL}}.
> {quote}
> There is a {{DelayedProcessKiller}} in {{ContainerExecutor}} which executes 
> for all containers after a delay when {{sleepDelayBeforeSigKill>0}}. By 
> default this is set to {{250 milliseconds}}, so irrespective of the 
> container type, it always gets executed.
>  
> For a docker container, {{docker stop}} takes care of sending a {{SIGKILL}} 
> after the grace period:
> - when sleepDelayBeforeSigKill > 10 seconds, there is no point in 
> executing DelayedProcessKiller
> - when sleepDelayBeforeSigKill < 1 second, the grace period should be 
> the smallest value, which is 1 second, because we force a kill 
> after 250 ms anyway
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8699) Add Yarnclient#yarnclusterMetrics API implementation in router

2018-09-05 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16604732#comment-16604732
 ] 

Giovanni Matteo Fumarola commented on YARN-8699:


Thanks [~bibinchundatt] for working on this.
NIT: Typo in {{getClusterMetirsResponse}}

> Add Yarnclient#yarnclusterMetrics API implementation in router
> --
>
> Key: YARN-8699
> URL: https://issues.apache.org/jira/browse/YARN-8699
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8699.001.patch, YARN-8699.002.patch, 
> YARN-8699.003.patch, YARN-8699.004.patch
>
>
> Implement YarnclusterMetrics API in FederationClientInterceptor



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6456) Allow administrators to set a single ContainerRuntime for all containers

2018-09-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16604715#comment-16604715
 ] 

Hadoop QA commented on YARN-6456:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 26s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 3 new + 441 unchanged - 0 fixed = 444 total (was 441) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
48s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
25s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 
41s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | YARN-6456 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938495/YARN-6456.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux b0c665f354ea 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 

[jira] [Commented] (YARN-8745) Misplaced the TestRMWebServicesFairScheduler.java file.

2018-09-05 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604710#comment-16604710
 ] 

Bibin A Chundatt commented on YARN-8745:


Thank you [~sreenivasulureddy]

The license headers of the files newly added in YARN-7451 also seem different.
Could you fix those headers too?

> Misplaced the TestRMWebServicesFairScheduler.java file.
> ---
>
> Key: YARN-8745
> URL: https://issues.apache.org/jira/browse/YARN-8745
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, test
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Y. SREENIVASULU REDDY
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-8745.001.patch
>
>
> The TestRMWebServicesFairScheduler.java file exists in
> {noformat}
> hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesFairScheduler.java
> {noformat}
> But the package declaration is 
> {noformat}
> package org.apache.hadoop.yarn.server.resourcemanager.webapp.fairscheduler;
> {noformat}
> so the file is being moved to the proper package.
> The YARN-7451 issue was triggered from this one.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8569) Create an interface to provide cluster information to application

2018-09-05 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604686#comment-16604686
 ] 

Eric Yang commented on YARN-8569:
-

The YARN localizer only supports tarballs, archives, and individual files; it 
does not support a directory containing files. This causes Docker to mount the 
path to a specific file instead of a directory of files. When patch 1 and 
patch 5 are merged together, we get a conflict from attempting to double-mount 
the same subdirectory:

{code}
{
    "Type": "bind",
    "Source": "/tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1536164978190_0001/container_1536164978190_0001_01_05/sysfs",
    "Destination": "/hadoop/yarn/sysfs",
    "Mode": "ro",
    "RW": false,
    "Propagation": "rprivate"
},
{
    "Type": "bind",
    "Source": "/tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1536164978190_0001/filecache/10/service.json",
    "Destination": "/hadoop/yarn/sysfs/service.json",
    "Mode": "ro",
    "RW": false,
    "Propagation": "rprivate"
},
{code}

Docker would error out at this point. Therefore, I am going to generate an 
archive or tarball with service.json in it, let the localizer decompress it, 
and mount the resulting directory. The follow-up update logic will locate the 
localized directory and replace the information inside it. This is why, in the 
next patch, service.json is first compressed into a tarball.
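
To make the intended flow concrete, here is a minimal sketch (assuming a POSIX 
{{tar}} binary on the PATH; all paths are illustrative) of packaging 
service.json so the localizer can expand it as an ARCHIVE resource and Docker 
can bind-mount the resulting directory:

{code:java}
import java.io.IOException;

public class SysfsArchiveSketch {
  public static void main(String[] args)
      throws IOException, InterruptedException {
    // Package service.json into a tarball. The localizer can then be asked to
    // decompress this archive, yielding a directory (not a single file) that
    // docker can bind-mount read-only into the container.
    Process tar = new ProcessBuilder(
        "tar", "-czf", "/tmp/sysfs.tar.gz", "-C", "/tmp/sysfs", "service.json")
        .inheritIO().start();
    if (tar.waitFor() != 0) {
      throw new IOException("failed to create sysfs archive");
    }
  }
}
{code}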

> Create an interface to provide cluster information to application
> -
>
> Key: YARN-8569
> URL: https://issues.apache.org/jira/browse/YARN-8569
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8569 YARN sysfs interface to provide cluster 
> information to application.pdf, YARN-8569.001.patch, YARN-8569.002.patch, 
> YARN-8569.003.patch, YARN-8569.004.patch, YARN-8569.005.patch
>
>
> Some programs require container hostnames to be known for the application to 
> run.  For example, distributed TensorFlow requires a launch_command that 
> looks like:
> {code}
> # On ps0.example.com:
> $ python trainer.py \
>  --ps_hosts=ps0.example.com:,ps1.example.com: \
>  --worker_hosts=worker0.example.com:,worker1.example.com: \
>  --job_name=ps --task_index=0
> # On ps1.example.com:
> $ python trainer.py \
>  --ps_hosts=ps0.example.com:,ps1.example.com: \
>  --worker_hosts=worker0.example.com:,worker1.example.com: \
>  --job_name=ps --task_index=1
> # On worker0.example.com:
> $ python trainer.py \
>  --ps_hosts=ps0.example.com:,ps1.example.com: \
>  --worker_hosts=worker0.example.com:,worker1.example.com: \
>  --job_name=worker --task_index=0
> # On worker1.example.com:
> $ python trainer.py \
>  --ps_hosts=ps0.example.com:,ps1.example.com: \
>  --worker_hosts=worker0.example.com:,worker1.example.com: \
>  --job_name=worker --task_index=1
> {code}
> This is a bit cumbersome to orchestrate via Distributed Shell or the YARN 
> services launch_command.  In addition, the dynamic parameters do not work 
> with the YARN flex command.  This is the classic pain point for application 
> developers attempting to automate system environment settings as parameters 
> to the end-user application.
> It would be great if the YARN Docker integration could provide a simple 
> option to expose the hostnames of the YARN service via a mounted file.  The 
> file content gets updated when a flex command is performed.  This allows 
> application developers to consume system environment settings via a standard 
> interface.  It is like /proc/devices for Linux, but for Hadoop.  This may 
> involve updating a file in the distributed cache and allowing the file to be 
> mounted via container-executor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8706) DelayedProcessKiller is executed for Docker containers even though docker stop sends a KILL signal after the specified grace period

2018-09-05 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-8706:

Attachment: YARN-8706.003.patch

> DelayedProcessKiller is executed for Docker containers even though docker 
> stop sends a KILL signal after the specified grace period
> ---
>
> Key: YARN-8706
> URL: https://issues.apache.org/jira/browse/YARN-8706
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>  Labels: docker
> Attachments: YARN-8706.001.patch, YARN-8706.002.patch, 
> YARN-8706.003.patch
>
>
> {{DockerStopCommand}} adds a grace period of 10 seconds.
> 10 seconds is also the default grace time used by docker stop
>  [https://docs.docker.com/engine/reference/commandline/stop/]
> Documentation of docker stop:
> {quote}the main process inside the container will receive {{SIGTERM}}, and 
> after a grace period, {{SIGKILL}}.
> {quote}
> There is a {{DelayedProcessKiller}} in {{ContainerExecutor}} which executes 
> for all containers after a delay when {{sleepDelayBeforeSigKill>0}}. By 
> default this is set to {{250 milliseconds}}, so irrespective of the 
> container type it will always get executed.
>  
> For a docker container, {{docker stop}} takes care of sending a {{SIGKILL}} 
> after the grace period:
> - when sleepDelayBeforeSigKill > 10 seconds, there is no point in executing 
> DelayedProcessKiller
> - when sleepDelayBeforeSigKill < 1 second, the grace period should be the 
> smallest value, which is 1 second, because we are forcing a kill after 
> 250 ms anyway
>  
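
A sketch of the clamping behavior argued for above (the class, method, and 
constant names here are assumptions for illustration, not the actual patch):

{code:java}
public final class DockerStopGraceSketch {
  // docker stop's own default grace period, per the docs linked above.
  private static final long DOCKER_DEFAULT_GRACE_SEC = 10;

  /**
   * Derives the docker stop grace period (whole seconds, minimum 1) from
   * the configured sleepDelayBeforeSigKill value, given in milliseconds.
   */
  static long gracePeriodSeconds(long sleepDelayBeforeSigKillMs) {
    // Below 1s the smallest useful grace is 1s; at or above docker's own
    // default, the DelayedProcessKiller adds nothing.
    return Math.max(1,
        Math.min(DOCKER_DEFAULT_GRACE_SEC, sleepDelayBeforeSigKillMs / 1000));
  }

  private DockerStopGraceSketch() {
  }
}
{code}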



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8730) TestRMWebServiceAppsNodelabel#testAppsRunning fails

2018-09-05 Thread Eric Payne (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604634#comment-16604634
 ] 

Eric Payne commented on YARN-8730:
--

-1 overall
|Vote|Subsystem|Runtime|Comment|
|0|findbugs|0m 1s|Findbugs executables are not available.|
|+1|@author|0m 0s|The patch does not contain any @author tags.|
|-1|test4tests|0m 0s|The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch.|
|+1|mvninstall|7m 23s|branch-2.8 passed|
|+1|compile|0m 38s|branch-2.8 passed|
|+1|checkstyle|0m 23s|branch-2.8 passed|
|+1|mvnsite|0m 47s|branch-2.8 passed|
|+1|mvneclipse|0m 23s|branch-2.8 passed|
|+1|javadoc|0m 25s|branch-2.8 passed|
|+1|mvninstall|0m 39s|the patch passed|
|+1|compile|0m 36s|the patch passed|
|+1|javac|0m 36s|the patch passed|
|+1|checkstyle|0m 
18s|hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 3 unchanged - 1 fixed = 3 total (was 4)|
|+1|mvnsite|0m 41s|the patch passed|
|+1|mvneclipse|0m 22s|the patch passed|
|+1|whitespace|0m 0s|The patch has no whitespace issues.|
|+1|javadoc|0m 20s|the patch passed|
|-1|unit|87m 18s|hadoop-yarn-server-resourcemanager in the patch failed.|
|-1|asflicense|0m 19s|The patch generated 1 ASF License warnings.|
| | |101m 12s| |
|Reason|Tests|
|Failed junit 
tests|hadoop.yarn.server.resourcemanager.scheduler.fair.TestContinuousScheduling|
| 
|hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesDelegationTokenAuthentication|
| |hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesDelegationTokens|
||Subsystem||Report/Notes||
|Optional Tests|asflicense compile javac javadoc mvninstall mvnsite unit 
findbugs checkstyle|
|uname|Linux disbeliefchief.corp.ne1.yahoo.com 
3.10.0-693.el7.YAHOO.20170801.5.x86_64 #1 SMP Tue Aug 1 22:57:35 UTC 2017 
x86_64 x86_64 x86_64 GNU/Linux|
|Build tool|maven|
|Personality|/home/ericp/hadoop/source/YARN-8730/branch-2.8/patchprocess/yetus-0.3.0/lib/precommit/personality/hadoop.sh|
|git revision|branch-2.8 / c7c5d73|
|Default Java|1.8.0_141|
|unit|/tmp/yetus-13905.5547/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt|
|unit test 
logs|/tmp/yetus-13905.5547/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt|
|asflicense|/tmp/yetus-13905.5547/patch-asflicense-problems.txt|
|modules|C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager|

- Unit tests: {{TestRMWebServicesDelegationTokenAuthentication}} and 
{{TestRMWebServicesDelegationTokens}} are both failing for branch-2.8 without 
the changes in this patch. I can't get {{TestContinuousScheduling}} to fail 
when run individually.
- ASF warning is complaining about 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/dependency-reduced-pom.xml
- No unit test was added. This patch fixes {{TestRMWebServiceAppsNodelabel}}

> TestRMWebServiceAppsNodelabel#testAppsRunning fails
> ---
>
> Key: YARN-8730
> URL: https://issues.apache.org/jira/browse/YARN-8730
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.8.4
>Reporter: Jason Lowe
>Assignee: Eric Payne
>Priority: Major
> Attachments: YARN-8730.001.branch-2.8.patch
>
>
> TestRMWebServiceAppsNodelabel is failing in branch-2.8:
> {noformat}
> Running 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServiceAppsNodelabel
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.473 sec <<< 
> FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServiceAppsNodelabel
> testAppsRunning(org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServiceAppsNodelabel)
>   Time elapsed: 6.708 sec  <<< FAILURE!
> org.junit.ComparisonFailure: partition amused 
> expected:<{"[]memory":1024,"vCores...> but 
> was:<{"[res":{"memory":1024,"memorySize":1024,"virtualCores":1},"]memory":1024,"vCores...>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServiceAppsNodelabel.verifyResource(TestRMWebServiceAppsNodelabel.java:222)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServiceAppsNodelabel.testAppsRunning(TestRMWebServiceAppsNodelabel.java:205)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: 

[jira] [Commented] (YARN-8638) Allow linux container runtimes to be pluggable

2018-09-05 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604405#comment-16604405
 ] 

Hudson commented on YARN-8638:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14880 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14880/])
YARN-8638. Allow linux container runtimes to be pluggable. Contributed (skumpf: 
rev dffb7bfe6cd2292162f08ec0bded736bc5194c3f)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/TestDelegatingLinuxContainerRuntime.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DelegatingLinuxContainerRuntime.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DefaultLinuxContainerRuntime.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/JavaSandboxLinuxContainerRuntime.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/LinuxContainerRuntime.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/MockLinuxContainerRuntime.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java


> Allow linux container runtimes to be pluggable
> --
>
> Key: YARN-8638
> URL: https://issues.apache.org/jira/browse/YARN-8638
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.2.0
>Reporter: Craig Condit
>Assignee: Craig Condit
>Priority: Minor
> Fix For: 3.2.0, 3.1.2
>
> Attachments: YARN-8638.001.patch, YARN-8638.002.patch, 
> YARN-8638.003.patch, YARN-8638.004.patch
>
>
> YARN currently supports three different Linux container runtimes (default, 
> docker, and javasandbox). However, it would be relatively straightforward to 
> support arbitrary runtime implementations. This would enable easier 
> experimentation with new and emerging runtime technologies (runc, containerd, 
> etc.) without requiring a rebuild and redeployment of Hadoop. 
> This could be accomplished via a simple configuration change:
> {code:xml}
> <property>
>   <name>yarn.nodemanager.runtime.linux.allowed-runtimes</name>
>   <value>default,docker,experimental</value>
> </property>
>
> <property>
>   <name>yarn.nodemanager.runtime.linux.experimental.class</name>
>   <value>com.somecompany.yarn.runtime.ExperimentalLinuxContainerRuntime</value>
> </property>
> {code}
>  
> In this example, {{yarn.nodemanager.runtime.linux.allowed-runtimes}} would 
> now allow arbitrary values. Additionally, 
> {{yarn.nodemanager.runtime.linux.\{RUNTIME_KEY}.class}} would indicate the 
> {{LinuxContainerRuntime}} implementation to instantiate. A no-argument 
> constructor should be sufficient, as {{LinuxContainerRuntime}} already 
> provides an {{initialize()}} method.
> {{DockerLinuxContainerRuntime.isDockerContainerRequested(Map<String, String> 
> env)}} and {{JavaSandboxLinuxContainerRuntime.isSandboxContainerRequested()}} 
> could be generalized to {{isRuntimeRequested(Map<String, String> env)}} and 
> added to the {{LinuxContainerRuntime}} interface. This would allow 
> {{DelegatingLinuxContainerRuntime}} to select an appropriate runtime based on 
> whether that runtime claimed ownership of the current container execution.
> For backwards compatibility, the existing values (default,docker,javasandbox) 
> would continue to be supported as-is. Under the current logic, the evaluation 
> order is javasandbox, docker, default (with default being chosen if no other 
> candidates are available). Under the new evaluation logic, pluggable runtimes 
> would be evaluated after docker and before default, in the order in which 
> they are defined in the allowed-runtimes list. This will change no behavior 
> on current clusters (as there would be no pluggable runtimes defined), and 
> preserves behavior with respect to ordering of existing runtimes.
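
As a concrete illustration of the proposal, here is a hedged sketch of a 
pluggable runtime claiming ownership of a container. The interface shape 
({{isRuntimeRequested}}, a no-arg constructor plus {{initialize()}}) follows 
the description above; the committed interface may differ.

{code:java}
import java.util.Map;
import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.LinuxContainerRuntime;

// Declared abstract so the launch/signal/reap methods can be omitted from
// this sketch; a real runtime would implement the full interface.
public abstract class ExperimentalLinuxContainerRuntime
    implements LinuxContainerRuntime {

  @Override
  public boolean isRuntimeRequested(Map<String, String> env) {
    // Claim the container only when the submitter explicitly asked for this
    // runtime via the runtime-type environment variable.
    return "experimental".equals(env.get("YARN_CONTAINER_RUNTIME_TYPE"));
  }
}
{code}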



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: 

[jira] [Commented] (YARN-8638) Allow linux container runtimes to be pluggable

2018-09-05 Thread Shane Kumpf (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604384#comment-16604384
 ] 

Shane Kumpf commented on YARN-8638:
---

I opened HADOOP-15721 and YARN-8748 to discuss disabling/fixing the two 
pre-commit warnings encountered here.

> Allow linux container runtimes to be pluggable
> --
>
> Key: YARN-8638
> URL: https://issues.apache.org/jira/browse/YARN-8638
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.2.0
>Reporter: Craig Condit
>Assignee: Craig Condit
>Priority: Minor
> Fix For: 3.2.0, 3.1.2
>
> Attachments: YARN-8638.001.patch, YARN-8638.002.patch, 
> YARN-8638.003.patch, YARN-8638.004.patch
>
>
> YARN currently supports three different Linux container runtimes (default, 
> docker, and javasandbox). However, it would be relatively straightforward to 
> support arbitrary runtime implementations. This would enable easier 
> experimentation with new and emerging runtime technologies (runc, containerd, 
> etc.) without requiring a rebuild and redeployment of Hadoop. 
> This could be accomplished via a simple configuration change:
> {code:xml}
> <property>
>   <name>yarn.nodemanager.runtime.linux.allowed-runtimes</name>
>   <value>default,docker,experimental</value>
> </property>
>
> <property>
>   <name>yarn.nodemanager.runtime.linux.experimental.class</name>
>   <value>com.somecompany.yarn.runtime.ExperimentalLinuxContainerRuntime</value>
> </property>
> {code}
>  
> In this example, {{yarn.nodemanager.runtime.linux.allowed-runtimes}} would 
> now allow arbitrary values. Additionally, 
> {{yarn.nodemanager.runtime.linux.\{RUNTIME_KEY}.class}} would indicate the 
> {{LinuxContainerRuntime}} implementation to instantiate. A no-argument 
> constructor should be sufficient, as {{LinuxContainerRuntime}} already 
> provides an {{initialize()}} method.
> {{DockerLinuxContainerRuntime.isDockerContainerRequested(Map<String, String> 
> env)}} and {{JavaSandboxLinuxContainerRuntime.isSandboxContainerRequested()}} 
> could be generalized to {{isRuntimeRequested(Map<String, String> env)}} and 
> added to the {{LinuxContainerRuntime}} interface. This would allow 
> {{DelegatingLinuxContainerRuntime}} to select an appropriate runtime based on 
> whether that runtime claimed ownership of the current container execution.
> For backwards compatibility, the existing values (default,docker,javasandbox) 
> would continue to be supported as-is. Under the current logic, the evaluation 
> order is javasandbox, docker, default (with default being chosen if no other 
> candidates are available). Under the new evaluation logic, pluggable runtimes 
> would be evaluated after docker and before default, in the order in which 
> they are defined in the allowed-runtimes list. This will change no behavior 
> on current clusters (as there would be no pluggable runtimes defined), and 
> preserves behavior with respect to ordering of existing runtimes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7825) Maintain constant horizontal application info bar for all pages

2018-09-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604382#comment-16604382
 ] 

Hadoop QA commented on YARN-7825:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
31m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | YARN-7825 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938458/YARN-7825.001.patch |
| Optional Tests |  dupname  asflicense  shadedclient  |
| uname | Linux 61fce8ad025b 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c7403a4 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 312 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21766/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Maintain constant horizontal application info bar for all pages
> ---
>
> Key: YARN-7825
> URL: https://issues.apache.org/jira/browse/YARN-7825
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Yesha Vora
>Assignee: Akhil PB
>Priority: Major
> Attachments: Screen Shot 2018-04-10 at 11.06.27 AM.png, Screen Shot 
> 2018-04-10 at 11.06.40 AM.png, Screen Shot 2018-04-10 at 11.07.07 AM.png, 
> Screen Shot 2018-04-10 at 11.07.29 AM.png, Screen Shot 2018-04-10 at 11.15.27 
> AM.png, YARN-7825.001.patch
>
>
> Steps:
> 1) enable Ats v2
> 2) Start Yarn service application ( Httpd )
> 3) Fix horizontal info bar for below pages.
>  * component page
>  * Component Instance info page 
>  * Application attempt Info 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8745) Misplaced the TestRMWebServicesFairScheduler.java file.

2018-09-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604381#comment-16604381
 ] 

Hadoop QA commented on YARN-8745:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 70m 
24s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}126m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | YARN-8745 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938441/YARN-8745.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3dee0ff96c61 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 85c3fe3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21765/testReport/ |
| Max. process+thread count | 941 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21765/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Misplaced the 

[jira] [Created] (YARN-8748) Javadoc warnings within the nodemanager package

2018-09-05 Thread Shane Kumpf (JIRA)
Shane Kumpf created YARN-8748:
-

 Summary: Javadoc warnings within the nodemanager package
 Key: YARN-8748
 URL: https://issues.apache.org/jira/browse/YARN-8748
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.2.0
Reporter: Shane Kumpf


There are a number of javadoc warnings in trunk in classes under the 
nodemanager package. These should be addressed or suppressed.
{code:java}
[WARNING] Javadoc Warnings
[WARNING] 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java:93:
 warning - Tag @see: reference not found: 
ContainerLaunch.ShellScriptBuilder#listDebugInformation
[WARNING] 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/JavaSandboxLinuxContainerRuntime.java:118:
 warning - YarnConfiguration#YARN_CONTAINER_SANDBOX (referenced by @value tag) 
is an unknown reference.
[WARNING] 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/JavaSandboxLinuxContainerRuntime.java:118:
 warning - YarnConfiguration#YARN_CONTAINER_SANDBOX_FILE_PERMISSIONS 
(referenced by @value tag) is an unknown reference.
[WARNING] 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/JavaSandboxLinuxContainerRuntime.java:118:
 warning - YarnConfiguration#YARN_CONTAINER_SANDBOX_POLICY (referenced by 
@value tag) is an unknown reference.
[WARNING] 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/JavaSandboxLinuxContainerRuntime.java:118:
 warning - YarnConfiguration#YARN_CONTAINER_SANDBOX_WHITELIST_GROUP (referenced 
by @value tag) is an unknown reference.
[WARNING] 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/JavaSandboxLinuxContainerRuntime.java:118:
 warning - YarnConfiguration#YARN_CONTAINER_SANDBOX_POLICY_GROUP_PREFIX 
(referenced by @value tag) is an unknown reference.
[WARNING] 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/JavaSandboxLinuxContainerRuntime.java:211:
 warning - YarnConfiguration#YARN_CONTAINER_SANDBOX_WHITELIST_GROUP (referenced 
by @value tag) is an unknown reference.
[WARNING] 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/JavaSandboxLinuxContainerRuntime.java:211:
 warning - NMContainerPolicyUtils#SECURITY_FLAG (referenced by @value tag) is 
an unknown reference.
[WARNING] 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TrafficControlBandwidthHandlerImpl.java:248:
 warning - @return tag has no arguments.
{code}
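
For reference, the usual fix for these {{@value}} warnings is to make the 
referenced constant resolvable from the doc comment, e.g. by fully qualifying 
it. A sketch under that assumption (the eventual patch may instead suppress 
the warnings):

{code:java}
/**
 * Sketch of the fix: fully qualify the constant so the javadoc tool can
 * resolve it, e.g.
 * {@value org.apache.hadoop.yarn.conf.YarnConfiguration#YARN_CONTAINER_SANDBOX},
 * instead of the short form that produced the warnings above.
 */
public final class JavadocValueFixSketch {
  private JavadocValueFixSketch() {
  }
}
{code}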



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8638) Allow linux container runtimes to be pluggable

2018-09-05 Thread Shane Kumpf (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604366#comment-16604366
 ] 

Shane Kumpf commented on YARN-8638:
---

Thanks for the contribution, [~ccondit-target] and thank you all for the 
reviews and discussion. I committed this to trunk and branch-3.1.

> Allow linux container runtimes to be pluggable
> --
>
> Key: YARN-8638
> URL: https://issues.apache.org/jira/browse/YARN-8638
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.2.0
>Reporter: Craig Condit
>Assignee: Craig Condit
>Priority: Minor
> Fix For: 3.2.0, 3.1.2
>
> Attachments: YARN-8638.001.patch, YARN-8638.002.patch, 
> YARN-8638.003.patch, YARN-8638.004.patch
>
>
> YARN currently supports three different Linux container runtimes (default, 
> docker, and javasandbox). However, it would be relatively straightforward to 
> support arbitrary runtime implementations. This would enable easier 
> experimentation with new and emerging runtime technologies (runc, containerd, 
> etc.) without requiring a rebuild and redeployment of Hadoop. 
> This could be accomplished via a simple configuration change:
> {code:xml}
> <property>
>   <name>yarn.nodemanager.runtime.linux.allowed-runtimes</name>
>   <value>default,docker,experimental</value>
> </property>
>
> <property>
>   <name>yarn.nodemanager.runtime.linux.experimental.class</name>
>   <value>com.somecompany.yarn.runtime.ExperimentalLinuxContainerRuntime</value>
> </property>
> {code}
>  
> In this example, {{yarn.nodemanager.runtime.linux.allowed-runtimes}} would 
> now allow arbitrary values. Additionally, 
> {{yarn.nodemanager.runtime.linux.\{RUNTIME_KEY}.class}} would indicate the 
> {{LinuxContainerRuntime}} implementation to instantiate. A no-argument 
> constructor should be sufficient, as {{LinuxContainerRuntime}} already 
> provides an {{initialize()}} method.
> {{DockerLinuxContainerRuntime.isDockerContainerRequested(Map<String, String> 
> env)}} and {{JavaSandboxLinuxContainerRuntime.isSandboxContainerRequested()}} 
> could be generalized to {{isRuntimeRequested(Map<String, String> env)}} and 
> added to the {{LinuxContainerRuntime}} interface. This would allow 
> {{DelegatingLinuxContainerRuntime}} to select an appropriate runtime based on 
> whether that runtime claimed ownership of the current container execution.
> For backwards compatibility, the existing values (default,docker,javasandbox) 
> would continue to be supported as-is. Under the current logic, the evaluation 
> order is javasandbox, docker, default (with default being chosen if no other 
> candidates are available). Under the new evaluation logic, pluggable runtimes 
> would be evaluated after docker and before default, in the order in which 
> they are defined in the allowed-runtimes list. This will change no behavior 
> on current clusters (as there would be no pluggable runtimes defined), and 
> preserves behavior with respect to ordering of existing runtimes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8747) ui2 page loading failed due to js error under some time zone configuration

2018-09-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604325#comment-16604325
 ] 

ASF GitHub Bot commented on YARN-8747:
--

GitHub user collinmazb opened a pull request:

https://github.com/apache/hadoop/pull/411

YARN-8747: update moment-timezone version to 0.5.1



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/collinmazb/hadoop trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/411.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #411


commit 4a7b5f45b63547d58502bcc005ece105f676a5d2
Author: collinma 
Date:   2018-09-05T12:05:29Z

YARN-8747: update moment-timezone version to 0.5.1




> ui2 page loading failed due to js error under some time zone configuration
> --
>
> Key: YARN-8747
> URL: https://issues.apache.org/jira/browse/YARN-8747
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Affects Versions: 3.1.1
>Reporter: collinma
>Priority: Blocker
> Attachments: image-2018-09-05-18-54-03-991.png
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> We deployed Hadoop 3.1.1 on CentOS 7.2 servers whose timezone is configured 
> as GMT+8; the web browser's time zone is GMT+8 too. The YARN UI page failed 
> to load due to a js error:
>  
> !image-2018-09-05-18-54-03-991.png!
> The moment-timezone js component raised that error. This has been fixed in 
> moment-timezone v0.5.1 
> ([see|https://github.com/moment/moment-timezone/issues/294]). We need to 
> update the moment-timezone version accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7825) Maintain constant horizontal application info bar for all pages

2018-09-05 Thread Akhil PB (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB reassigned YARN-7825:
--

Assignee: Akhil PB

> Maintain constant horizontal application info bar for all pages
> ---
>
> Key: YARN-7825
> URL: https://issues.apache.org/jira/browse/YARN-7825
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Yesha Vora
>Assignee: Akhil PB
>Priority: Major
> Attachments: Screen Shot 2018-04-10 at 11.06.27 AM.png, Screen Shot 
> 2018-04-10 at 11.06.40 AM.png, Screen Shot 2018-04-10 at 11.07.07 AM.png, 
> Screen Shot 2018-04-10 at 11.07.29 AM.png, Screen Shot 2018-04-10 at 11.15.27 
> AM.png
>
>
> Steps:
> 1) enable Ats v2
> 2) Start Yarn service application ( Httpd )
> 3) Fix horizontal info bar for below pages.
>  * component page
>  * Component Instance info page 
>  * Application attempt Info 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8727) NPE in RouterRMAdminService while stopping service.

2018-09-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604299#comment-16604299
 ] 

Hadoop QA commented on YARN-8727:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
37s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | YARN-8727 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12937586/YARN-8727.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 92a3ffe6e09f 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 85c3fe3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21764/testReport/ |
| Max. process+thread count | 676 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21764/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


[jira] [Updated] (YARN-8745) Misplaced the TestRMWebServicesFairScheduler.java file.

2018-09-05 Thread Y. SREENIVASULU REDDY (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Y. SREENIVASULU REDDY updated YARN-8745:

Fix Version/s: 3.2.0

> Misplaced the TestRMWebServicesFairScheduler.java file.
> ---
>
> Key: YARN-8745
> URL: https://issues.apache.org/jira/browse/YARN-8745
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, test
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Y. SREENIVASULU REDDY
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-8745.001.patch
>
>
> The TestRMWebServicesFairScheduler.java file exists in
> {noformat}
> hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesFairScheduler.java
> {noformat}
> But the package declaration is 
> {noformat}
> package org.apache.hadoop.yarn.server.resourcemanager.webapp.fairscheduler;
> {noformat}
> so the file is being moved to the proper package.
> The YARN-7451 issue was triggered from this one.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8747) ui2 page loading failed due to js error under some time zone configuration

2018-09-05 Thread collinma (JIRA)
collinma created YARN-8747:
--

 Summary: ui2 page loading failed due to js error under some time 
zone configuration
 Key: YARN-8747
 URL: https://issues.apache.org/jira/browse/YARN-8747
 Project: Hadoop YARN
  Issue Type: Bug
  Components: webapp
Affects Versions: 3.1.1
Reporter: collinma
 Attachments: image-2018-09-05-18-54-03-991.png

We deployed Hadoop 3.1.1 on CentOS 7.2 servers whose timezone is configured as 
GMT+8; the web browser's time zone is GMT+8 too. The YARN UI page failed to 
load due to a js error:

!image-2018-09-05-18-54-03-991.png!

The moment-timezone js component raised that error. This has been fixed in 
moment-timezone v0.5.1 
([see|https://github.com/moment/moment-timezone/issues/294]). We need to 
update the moment-timezone version accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8746) ui2 overview doesn't display GPU usage info when using Fairscheduler

2018-09-05 Thread collinma (JIRA)
collinma created YARN-8746:
--

 Summary: ui2 overview doesn't display GPU usage info when using 
Fairscheduler 
 Key: YARN-8746
 URL: https://issues.apache.org/jira/browse/YARN-8746
 Project: Hadoop YARN
  Issue Type: Bug
  Components: fairscheduler
Affects Versions: 3.1.1
Reporter: collinma


When using the fair scheduler, GPU-related information isn't displayed because 
the "metrics" API doesn't return any GPU-related usage information (YARN was 
run with GPUs enabled per [this 
guide|https://hadoop.apache.org/docs/r3.1.1/hadoop-yarn/hadoop-yarn-site/UsingGpus.html]).
The Hadoop version is 3.1.1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8743) capacity scheduler doesn't set node label to reserved container

2018-09-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604201#comment-16604201
 ] 

Hadoop QA commented on YARN-8743:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 71m 
34s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}125m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | YARN-8743 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938417/YARN-8743.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d9c1767b887e 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 85c3fe3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21763/testReport/ |
| Max. process+thread count | 928 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21763/console |
| Powered by | Apache Yetus 0.8.0 

[jira] [Commented] (YARN-8745) Misplaced the TestRMWebServicesFairScheduler.java file.

2018-09-05 Thread Y. SREENIVASULU REDDY (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604187#comment-16604187
 ] 

Y. SREENIVASULU REDDY commented on YARN-8745:
-

Attached a patch for this issue; please review.

> Misplaced the TestRMWebServicesFairScheduler.java file.
> ---
>
> Key: YARN-8745
> URL: https://issues.apache.org/jira/browse/YARN-8745
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, test
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Y. SREENIVASULU REDDY
>Priority: Major
> Attachments: YARN-8745.001.patch
>
>
> The TestRMWebServicesFairScheduler.java file exists in
> {noformat}
> hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesFairScheduler.java
> {noformat}
> But the package declaration is 
> {noformat}
> package org.apache.hadoop.yarn.server.resourcemanager.webapp.fairscheduler;
> {noformat}
> so the file is being moved to the proper package.
> The YARN-7451 issue was triggered from this one.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8745) Misplaced the TestRMWebServicesFairScheduler.java file.

2018-09-05 Thread Y. SREENIVASULU REDDY (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Y. SREENIVASULU REDDY updated YARN-8745:

Attachment: YARN-8745.001.patch

> Misplaced the TestRMWebServicesFairScheduler.java file.
> ---
>
> Key: YARN-8745
> URL: https://issues.apache.org/jira/browse/YARN-8745
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, test
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Y. SREENIVASULU REDDY
>Priority: Major
> Attachments: YARN-8745.001.patch
>
>
> The TestRMWebServicesFairScheduler.java file exists in
> {noformat}
> hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesFairScheduler.java
> {noformat}
> but its package declaration is
> {noformat}
> package org.apache.hadoop.yarn.server.resourcemanager.webapp.fairscheduler;
> {noformat}
> so the file should be moved to the proper package.
> This issue was triggered by YARN-7451.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8745) Misplaced the TestRMWebServicesFairScheduler.java file.

2018-09-05 Thread Y. SREENIVASULU REDDY (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Y. SREENIVASULU REDDY updated YARN-8745:

Description: 
The TestRMWebServicesFairScheduler.java file exists in
{noformat}
hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesFairScheduler.java
{noformat}
but its package declaration is
{noformat}
package org.apache.hadoop.yarn.server.resourcemanager.webapp.fairscheduler;
{noformat}

so the file should be moved to the proper package.

This issue was triggered by YARN-7451.

  was:
The TestRMWebServicesFairScheduler.java file exists in
{noformat}
hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesFairScheduler.java
{noformat}
but its package declaration is
{noformat}
package org.apache.hadoop.yarn.server.resourcemanager.webapp.fairscheduler;
{noformat}

so the file should be moved to the proper package.


> Misplaced the TestRMWebServicesFairScheduler.java file.
> ---
>
> Key: YARN-8745
> URL: https://issues.apache.org/jira/browse/YARN-8745
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, test
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Y. SREENIVASULU REDDY
>Priority: Major
>
> The TestRMWebServicesFairScheduler.java file exists in
> {noformat}
> hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesFairScheduler.java
> {noformat}
> but its package declaration is
> {noformat}
> package org.apache.hadoop.yarn.server.resourcemanager.webapp.fairscheduler;
> {noformat}
> so the file should be moved to the proper package.
> This issue was triggered by YARN-7451.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8745) Misplaced the TestRMWebServicesFairScheduler.java file.

2018-09-05 Thread Y. SREENIVASULU REDDY (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Y. SREENIVASULU REDDY updated YARN-8745:

External issue ID:   (was: YARN-7451 issue triggered from this ID.)

> Misplaced the TestRMWebServicesFairScheduler.java file.
> ---
>
> Key: YARN-8745
> URL: https://issues.apache.org/jira/browse/YARN-8745
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, test
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Y. SREENIVASULU REDDY
>Priority: Major
>
> The TestRMWebServicesFairScheduler.java file exists in
> {noformat}
> hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesFairScheduler.java
> {noformat}
> but its package declaration is
> {noformat}
> package org.apache.hadoop.yarn.server.resourcemanager.webapp.fairscheduler;
> {noformat}
> so the file should be moved to the proper package.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8745) Misplaced the TestRMWebServicesFairScheduler.java file.

2018-09-05 Thread Y. SREENIVASULU REDDY (JIRA)
Y. SREENIVASULU REDDY created YARN-8745:
---

 Summary: Misplaced the TestRMWebServicesFairScheduler.java file.
 Key: YARN-8745
 URL: https://issues.apache.org/jira/browse/YARN-8745
 Project: Hadoop YARN
  Issue Type: Bug
  Components: fairscheduler, test
Reporter: Y. SREENIVASULU REDDY
Assignee: Y. SREENIVASULU REDDY


The TestRMWebServicesFairScheduler.java file exists in
{noformat}
hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesFairScheduler.java
{noformat}
but its package declaration is
{noformat}
package org.apache.hadoop.yarn.server.resourcemanager.webapp.fairscheduler;
{noformat}

so the file should be moved to the proper package.
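A package/path mismatch like this is mechanical to detect. Below is a minimal, standalone sketch (illustrative only, not part of any attached patch; the class name PackagePathCheck is invented) that reads a source file's package declaration and compares it with the package implied by the file's location under the source root:

{code:java}
import java.io.IOException;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PackagePathCheck {
  public static void main(String[] args) throws IOException {
    // Source root and file location taken from the issue description.
    Path srcRoot = Paths.get("src/test/java");
    Path file = srcRoot.resolve(
        "org/apache/hadoop/yarn/server/resourcemanager/webapp/"
            + "TestRMWebServicesFairScheduler.java");

    // Extract the declared package from the source text.
    String source = new String(Files.readAllBytes(file));
    Matcher m = Pattern
        .compile("^\\s*package\\s+([\\w.]+)\\s*;", Pattern.MULTILINE)
        .matcher(source);
    if (!m.find()) {
      System.out.println("No package declaration found");
      return;
    }
    String declared = m.group(1);

    // Derive the package implied by the directory layout.
    String implied = srcRoot.relativize(file.getParent()).toString()
        .replace(FileSystems.getDefault().getSeparator(), ".");

    System.out.println(declared.equals(implied)
        ? "OK: package matches path"
        : "Mismatch: declared '" + declared
            + "' but path implies '" + implied + "'");
  }
}
{code}

Run against the path above, this would report that the declared package ends in {{.webapp.fairscheduler}} while the path implies {{.webapp}}, which is exactly the mismatch fixed by moving the file.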



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8743) capacity scheduler doesn't set node label to reserved container

2018-09-05 Thread Hu Ziqian (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604018#comment-16604018
 ] 

Hu Ziqian commented on YARN-8743:
-

Fixed the checkstyle issues in 002.patch.

> capacity scheduler doesn't set node label to reserved container
> ---
>
> Key: YARN-8743
> URL: https://issues.apache.org/jira/browse/YARN-8743
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, scheduler preemption
>Reporter: Hu Ziqian
>Assignee: Hu Ziqian
>Priority: Major
> Attachments: YARN-8743.001.patch, YARN-8743.002.patch
>
>
> The capacity scheduler doesn't set the node label when creating a reserved
> container's RMContainerImpl, so when that container is allocated, LeafQueue
> treats it as an ignorePartitionExclusivityRMContainer.
> This causes preemption to fail: when preemption happens, the preemption
> policy tries to preempt the reserved container, but LeafQueue never removes
> it from ignorePartitionExclusivityRMContainers. In our cluster, we found
> that the preemption policy would always try to preempt the reserved
> container and actually preempt nothing.
> We set the node label information on the reserved container's
> RMContainerImpl and redid our test; preemption then performed as expected.
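To make the failure mode concrete, here is a minimal, hypothetical model of the bookkeeping described above. The ReservedContainer and LeafQueueModel classes are simplified stand-ins, not the real RMContainerImpl or LeafQueue internals:

{code:java}
import java.util.ArrayList;
import java.util.List;

public class ReservedLabelDemo {

  // Simplified stand-in for a reserved container's state.
  static class ReservedContainer {
    final String id;
    final String nodeLabel; // "" models the default (no-label) partition
    ReservedContainer(String id, String nodeLabel) {
      this.id = id;
      this.nodeLabel = nodeLabel == null ? "" : nodeLabel;
    }
  }

  // Simplified stand-in for the queue-side bookkeeping.
  static class LeafQueueModel {
    final List<ReservedContainer> ignorePartitionExclusivityContainers =
        new ArrayList<>();

    void track(ReservedContainer c, String nodePartition) {
      // A container with an empty label on a labeled node is treated as
      // "ignoring partition exclusivity". If the reservation never gets
      // the request's label, it lands in this list and stays there.
      if (c.nodeLabel.isEmpty() && !nodePartition.isEmpty()) {
        ignorePartitionExclusivityContainers.add(c);
      }
    }
  }

  public static void main(String[] args) {
    LeafQueueModel queue = new LeafQueueModel();
    // Bug: label not propagated when the reservation is created.
    queue.track(new ReservedContainer("container_1", null), "gpu");
    // Fix: propagate the request's label, so the container is classified
    // as a normal container on the "gpu" partition.
    queue.track(new ReservedContainer("container_2", "gpu"), "gpu");
    System.out.println("misclassified reservations: "
        + queue.ignorePartitionExclusivityContainers.size()); // prints 1
  }
}
{code}

In this model, the reservation created without a label is the one that stays in the ignore-partition-exclusivity list, mirroring how the unlabeled reserved container remains a preemption candidate that is never actually preempted.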



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8743) capacity scheduler doesn't set node label to reserved container

2018-09-05 Thread Hu Ziqian (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hu Ziqian updated YARN-8743:

Attachment: YARN-8743.002.patch

> capacity scheduler doesn't set node label to reserved container
> ---
>
> Key: YARN-8743
> URL: https://issues.apache.org/jira/browse/YARN-8743
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, scheduler preemption
>Reporter: Hu Ziqian
>Assignee: Hu Ziqian
>Priority: Major
> Attachments: YARN-8743.001.patch, YARN-8743.002.patch
>
>
> The capacity scheduler doesn't set the node label when creating a reserved
> container's RMContainerImpl, so when that container is allocated, LeafQueue
> treats it as an ignorePartitionExclusivityRMContainer.
> This causes preemption to fail: when preemption happens, the preemption
> policy tries to preempt the reserved container, but LeafQueue never removes
> it from ignorePartitionExclusivityRMContainers. In our cluster, we found
> that the preemption policy would always try to preempt the reserved
> container and actually preempt nothing.
> We set the node label information on the reserved container's
> RMContainerImpl and redid our test; preemption then performed as expected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8743) capacity scheduler doesn't set node label to reserved container

2018-09-05 Thread Hu Ziqian (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604016#comment-16604016
 ] 

Hu Ziqian commented on YARN-8743:
-

I checked the failed UT: it is a timeout in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.TestSchedulingWithAllocationRequestId.testMultipleAllocationRequestDiffPriority,
 and it passes on my laptop. How can I fix it on Hadoop QA?

> capacity scheduler doesn't set node label to reserved container
> ---
>
> Key: YARN-8743
> URL: https://issues.apache.org/jira/browse/YARN-8743
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, scheduler preemption
>Reporter: Hu Ziqian
>Assignee: Hu Ziqian
>Priority: Major
> Attachments: YARN-8743.001.patch
>
>
> The capacity scheduler doesn't set the node label when creating a reserved
> container's RMContainerImpl, so when that container is allocated, LeafQueue
> treats it as an ignorePartitionExclusivityRMContainer.
> This causes preemption to fail: when preemption happens, the preemption
> policy tries to preempt the reserved container, but LeafQueue never removes
> it from ignorePartitionExclusivityRMContainers. In our cluster, we found
> that the preemption policy would always try to preempt the reserved
> container and actually preempt nothing.
> We set the node label information on the reserved container's
> RMContainerImpl and redid our test; preemption then performed as expected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org