[jira] [Updated] (YARN-7248) NM returns new SCHEDULED container status to older clients

2017-09-27 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7248:
--
Target Version/s: 2.9.0, 3.1.0  (was: 2.9.0)

> NM returns new SCHEDULED container status to older clients
> --
>
> Key: YARN-7248
> URL: https://issues.apache.org/jira/browse/YARN-7248
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Jason Lowe
>Assignee: Arun Suresh
>Priority: Blocker
> Attachments: YARN-7248.001.patch, YARN-7248.002.patch, 
> YARN-7248.003.patch
>
>
> YARN-4597 added a new SCHEDULED container state, and that state is returned to 
> clients when the container is localizing, etc.  However, the client may be 
> running on an older software version that does not have the new SCHEDULED 
> state, which could lead the client to crash on the unexpected container state 
> value or to make incorrect assumptions, e.g., that any state != NEW and 
> != RUNNING must be COMPLETED, which was true in the older version.
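For illustration, the fix presumably needs some kind of compatibility shim on the NM side; a minimal sketch of that idea, with a hypothetical helper and mapping choice (the actual patch may map states differently):

{code}
// Hypothetical shim: map container states unknown to pre-SCHEDULED clients
// onto states they already understand before returning the container status.
public static ContainerState toOldClientState(ContainerState state) {
  switch (state) {
    case NEW:
    case RUNNING:
    case COMPLETE:
      return state;      // states every older client already knows
    case SCHEDULED:      // introduced by YARN-4597
    default:
      // A SCHEDULED (e.g., still localizing) container has not started
      // running yet, so NEW is the closest pre-existing state.
      return ContainerState.NEW;
  }
}
{code}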






[jira] [Commented] (YARN-6570) No logs were found for running application, running container

2017-09-27 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183683#comment-16183683
 ] 

Junping Du commented on YARN-6570:
--

Sorry for the late reply, as I have been on vacation recently, and thanks for 
the nice catch, [~jlowe]! I will upload a new patch after I come back.

> No logs were found for running application, running container
> -
>
> Key: YARN-6570
> URL: https://issues.apache.org/jira/browse/YARN-6570
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Junping Du
>Priority: Critical
> Attachments: YARN-6570-branch-2.8.001.patch, 
> YARN-6570-branch-2.8.002.patch, YARN-6570.poc.patch, YARN-6570-v2.patch, 
> YARN-6570-v3.patch
>
>
> 1. Obtain the running containers for a running application from the 
> following CLI:
>  yarn container -list appattempt
> 2. Could not fetch logs:
> {code}
> Can not find any log file matching the pattern: ALL for the container
> {code}






[jira] [Commented] (YARN-7260) yarn.router.pipeline.cache-max-size is missing in yarn-default.xml

2017-09-27 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183659#comment-16183659
 ] 

Rohith Sharma K S commented on YARN-7260:
-

bq. The test is failing because yarn-default.xml has 
yarn.router.clientrm.cache-max-size but that doesn't appear in YarnConfiguration
Thanks [~jlowe] for the investigation. As part of the test improvement, I 
think it would be good to print all of the missing configurations in the 
assert message. When I checked the assert message, I was not able to tell 
which configurations were missing!
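For example, something along these lines in the test (a sketch; variable names assumed, not the actual TestConfigurationFieldsBase code):

{code}
// Sketch: list the missing property names in the assert message rather than
// only the count, so the failure is self-explanatory.
Set<String> missing = new TreeSet<>(xmlPropertyNames);    // from yarn-default.xml
missing.removeAll(configurationFieldNames);               // from YarnConfiguration
assertTrue("yarn-default.xml has " + missing.size()
    + " properties missing in class "
    + YarnConfiguration.class.getName() + ": " + missing,
    missing.isEmpty());
{code}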

> yarn.router.pipeline.cache-max-size is missing in yarn-default.xml
> --
>
> Key: YARN-7260
> URL: https://issues.apache.org/jira/browse/YARN-7260
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Rohith Sharma K S
>Assignee: Jason Lowe
> Attachments: YARN-7260-branch-2.001.patch
>
>
> In branch-2 TestYarnConfigurationFields fails
> {code}
> Running org.apache.hadoop.yarn.api.records.TestURL Tests run: 1, Failures: 0, 
> Errors: 0, Skipped: 0, Time elapsed: 0.278 sec - in 
> org.apache.hadoop.yarn.api.records.TestURL Running 
> org.apache.hadoop.yarn.conf.TestYarnConfigurationFields Tests run: 4, 
> Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 0.539 sec <<< FAILURE! - in 
> org.apache.hadoop.yarn.conf.TestYarnConfigurationFields 
> testCompareXmlAgainstConfigurationClass(org.apache.hadoop.yarn.conf.TestYarnConfigurationFields)
>  Time elapsed: 0.296 sec <<< FAILURE! java.lang.AssertionError: 
> yarn-default.xml has 1 properties missing in class 
> org.apache.hadoop.yarn.conf.YarnConfiguration at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> org.apache.hadoop.conf.TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass(TestConfigurationFieldsBase.java:588)
>  
> {code}






[jira] [Commented] (YARN-7248) NM returns new SCHEDULED container status to older clients

2017-09-27 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183654#comment-16183654
 ] 

Junping Du commented on YARN-7248:
--

I think we should fix this in 3.0-beta as well, given that rolling upgrade 
from 2.x is the goal. CC [~andrew.wang].

> NM returns new SCHEDULED container status to older clients
> --
>
> Key: YARN-7248
> URL: https://issues.apache.org/jira/browse/YARN-7248
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Jason Lowe
>Assignee: Arun Suresh
>Priority: Blocker
> Attachments: YARN-7248.001.patch, YARN-7248.002.patch, 
> YARN-7248.003.patch
>
>
> YARN-4597 added a new SCHEDULED container state, and that state is returned to 
> clients when the container is localizing, etc.  However, the client may be 
> running on an older software version that does not have the new SCHEDULED 
> state, which could lead the client to crash on the unexpected container state 
> value or to make incorrect assumptions, e.g., that any state != NEW and 
> != RUNNING must be COMPLETED, which was true in the older version.






[jira] [Updated] (YARN-7248) NM returns new SCHEDULED container status to older clients

2017-09-27 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-7248:
-
Target Version/s: 2.9.0  (was: 2.9.0, 3.0.0-beta1)

> NM returns new SCHEDULED container status to older clients
> --
>
> Key: YARN-7248
> URL: https://issues.apache.org/jira/browse/YARN-7248
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Jason Lowe
>Assignee: Arun Suresh
>Priority: Blocker
> Attachments: YARN-7248.001.patch, YARN-7248.002.patch, 
> YARN-7248.003.patch
>
>
> YARN-4597 added a new SCHEDULED container state, and that state is returned to 
> clients when the container is localizing, etc.  However, the client may be 
> running on an older software version that does not have the new SCHEDULED 
> state, which could lead the client to crash on the unexpected container state 
> value or to make incorrect assumptions, e.g., that any state != NEW and 
> != RUNNING must be COMPLETED, which was true in the older version.






[jira] [Updated] (YARN-7248) NM returns new SCHEDULED container status to older clients

2017-09-27 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-7248:
-
Target Version/s: 2.9.0, 3.0.0-beta1  (was: 2.9.0)

> NM returns new SCHEDULED container status to older clients
> --
>
> Key: YARN-7248
> URL: https://issues.apache.org/jira/browse/YARN-7248
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Jason Lowe
>Assignee: Arun Suresh
>Priority: Blocker
> Attachments: YARN-7248.001.patch, YARN-7248.002.patch, 
> YARN-7248.003.patch
>
>
> YARN-4597 added a new SCHEDULED container state, and that state is returned to 
> clients when the container is localizing, etc.  However, the client may be 
> running on an older software version that does not have the new SCHEDULED 
> state, which could lead the client to crash on the unexpected container state 
> value or to make incorrect assumptions, e.g., that any state != NEW and 
> != RUNNING must be COMPLETED, which was true in the older version.






[jira] [Commented] (YARN-4090) Make Collections.sort() more efficient by caching resource usage

2017-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183644#comment-16183644
 ] 

Hadoop QA commented on YARN-4090:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m  8s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 86m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSLeafQueue |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-4090 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12889408/YARN-4090.009.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a4ae70d780b5 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 28c4957 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17675/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17675/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Commented] (YARN-7262) Add a hierarchy into the ZKRMStateStore for delegation token znodes to prevent jute buffer overflow

2017-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183638#comment-16183638
 ] 

Hadoop QA commented on YARN-7262:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  6s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 10 new + 276 unchanged - 0 fixed = 286 total (was 276) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
3s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 47m 36s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}140m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7262 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12889399/YARN-7262.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 

[jira] [Updated] (YARN-7252) Removing queue then failing over results in exception

2017-09-27 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-7252:

Fix Version/s: YARN-5734

> Removing queue then failing over results in exception
> -
>
> Key: YARN-7252
> URL: https://issues.apache.org/jira/browse/YARN-7252
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Critical
> Fix For: YARN-5734
>
> Attachments: YARN-7252-YARN-5734.001.patch, 
> YARN-7252-YARN-5734.002.patch
>
>
> Scenario: rm1 and rm2, starting configuration with root.default, root.a. rm1 
> is active. First, put root.a into STOPPED state, then remove it. Then put rm1 
> in standby and rm2 in active. Here's the exception: {noformat}Operation 
> failed: Error on refreshAll during transition to Active
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:315)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:4460)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
> Caused by: org.apache.hadoop.ha.ServiceFailedException: RefreshAll operation 
> failed
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshAll(AdminService.java:747)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:307)
>   ... 10 more
> Caused by: java.io.IOException: Failed to re-init queues : root.a is deleted 
> from the new capacity scheduler configuration, but the queue is not yet in 
> stopped state. Current State : RUNNING
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:436)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshQueues(AdminService.java:405)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshAll(AdminService.java:736)
>   ... 11 more
> Caused by: java.io.IOException: root.a is deleted from the new capacity 
> scheduler configuration, but the queue is not yet in stopped state. Current 
> State : RUNNING
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager.validateQueueHierarchy(CapacitySchedulerQueueManager.java:312)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager.reinitializeQueues(CapacitySchedulerQueueManager.java:174)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitializeQueues(CapacityScheduler.java:648)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:432)
>   ... 13 more{noformat}
> It seems rm2 does not think root.a was STOPPED, so when it cannot find root.a 
> and sees that it has been deleted, it throws an exception.
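For reference, the check in {{validateQueueHierarchy}} implied by the message above is presumably something like this (a sketch, not the actual code):

{code}
// Sketch of the validation implied by the stack trace: a queue missing from
// the new configuration may only be removed if the old queue was STOPPED.
if (newQueue == null && oldQueue.getState() != QueueState.STOPPED) {
  throw new IOException(oldQueue.getQueuePath()
      + " is deleted from the new capacity scheduler configuration, but the"
      + " queue is not yet in stopped state. Current State : "
      + oldQueue.getState());
}
{code}

This matches the report above: rm2 evaluates the deletion against a view in which root.a is still RUNNING.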






[jira] [Updated] (YARN-7251) Misc changes to YARN-5734

2017-09-27 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-7251:

Attachment: YARN-7251-YARN-5734.006-final.patch

> Misc changes to YARN-5734
> -
>
> Key: YARN-7251
> URL: https://issues.apache.org/jira/browse/YARN-7251
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Fix For: YARN-5734
>
> Attachments: YARN-7251-YARN-5734.001.patch, 
> YARN-7251-YARN-5734.002.patch, YARN-7251-YARN-5734.003-v2.patch, 
> YARN-7251-YARN-5734.004-v2.patch, YARN-7251-YARN-5734.005-v2.patch, 
> YARN-7251-YARN-5734.006-final.patch
>
>
> Documentation/style changes to YARN-5734 before merge.






[jira] [Updated] (YARN-4090) Make Collections.sort() more efficient by caching resource usage

2017-09-27 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-4090:
---
Affects Version/s: 2.8.1
   3.0.0-alpha3

> Make Collections.sort() more efficient by caching resource usage
> 
>
> Key: YARN-4090
> URL: https://issues.apache.org/jira/browse/YARN-4090
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Xianyin Xin
>Assignee: Yufei Gu
> Attachments: sampling1.jpg, sampling2.jpg, YARN-4090.001.patch, 
> YARN-4090.002.patch, YARN-4090.003.patch, YARN-4090.004.patch, 
> YARN-4090.005.patch, YARN-4090.006.patch, YARN-4090.007.patch, 
> YARN-4090.008.patch, YARN-4090.009.patch, YARN-4090-preview.patch, 
> YARN-4090-TestResult.pdf
>
>
> Collections.sort() consumes too much time in a scheduling round.
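Roughly, the idea is to snapshot each schedulable's resource usage once per sort instead of recomputing it inside the comparator on every comparison; a sketch of that idea with assumed names (not the actual patch):

{code}
// Sketch: cache each app's resource usage once, then sort with a comparator
// that reads the cached values instead of recomputing (and locking) usage.
for (FSAppAttempt app : runnableApps) {
  app.snapshotResourceUsage();   // hypothetical caching hook
}
Collections.sort(runnableApps, policy.getComparator());
{code}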






[jira] [Updated] (YARN-4090) Make Collections.sort() more efficient by caching resource usage

2017-09-27 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-4090:
---
Attachment: YARN-4090.009.patch

Thanks, [~templedf]. Uploaded the patch v9 for your comments.

> Make Collections.sort() more efficient by caching resource usage
> 
>
> Key: YARN-4090
> URL: https://issues.apache.org/jira/browse/YARN-4090
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Xianyin Xin
>Assignee: Yufei Gu
> Attachments: sampling1.jpg, sampling2.jpg, YARN-4090.001.patch, 
> YARN-4090.002.patch, YARN-4090.003.patch, YARN-4090.004.patch, 
> YARN-4090.005.patch, YARN-4090.006.patch, YARN-4090.007.patch, 
> YARN-4090.008.patch, YARN-4090.009.patch, YARN-4090-preview.patch, 
> YARN-4090-TestResult.pdf
>
>
> Collections.sort() consumes too much time in a scheduling round.






[jira] [Commented] (YARN-6550) Capture launch_container.sh logs

2017-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183578#comment-16183578
 ] 

Hadoop QA commented on YARN-6550:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
49s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
38s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 48s{color} 
| {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 1 new + 24 unchanged - 1 fixed = 25 total (was 25) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 31s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 29 new + 162 unchanged - 0 fixed = 191 total (was 162) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m 33s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.nodemanager.TestNodeManagerReboot |
|   | hadoop.yarn.server.nodemanager.containermanager.TestContainerManager |
|   | hadoop.yarn.server.nodemanager.TestNodeStatusUpdater |
|   | 
hadoop.yarn.server.nodemanager.containermanager.scheduler.TestContainerSchedulerQueuing
 |
|   | hadoop.yarn.server.nodemanager.TestNodeStatusUpdaterForLabels |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown |
|   | org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerResync |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:eaf5c66 |
| JIRA Issue | YARN-6550 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12889396/YARN-6550.branch-2.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5398bf427a08 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / c143708 |
| Default Java | 1.7.0_151 |
| findbugs | v3.0.0 |
| javac | 

[jira] [Commented] (YARN-7207) Cache the local host name when getting application list in RM

2017-09-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183572#comment-16183572
 ] 

Allen Wittenauer commented on YARN-7207:


Actually, let me expand on that a bit, because we're running directly into 
"better practices" in a space where many may not understand the details.

A process requests resolution of a name/IP that is associated with the 
machine the process is running on (localhost, whatever hostname() returns, 
etc.).  That resolution should go through the local cache (nscd, sssd, 
lookupd, whatever).  That cache should be configured such that it resolves 
through files (e.g., /etc/hosts) first and then through DNS.  /etc/hosts 
SHOULD have all known names and IPs for the local machine, eliminating the 
need for any DNS lookup.

A misconfigured machine, either by not having a cache or by having the cache 
misconfigured, will ask DNS or some other naming service first.  This will 
*definitely* impact system performance.  But it's also a misconfiguration; 
it won't just impact YARN but pretty much every single process on the box.  
Need to write to syslog?  Yup, gonna ask DNS.
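For reference, that resolution order corresponds to the classic setup below (illustrative values):

{noformat}
# /etc/nsswitch.conf -- consult local files before DNS
hosts: files dns

# /etc/hosts -- all known names and IPs for the local machine
127.0.0.1    localhost
192.0.2.10   node-1.example.com node-1
{noformat}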

> Cache the local host name when getting application list in RM
> -
>
> Key: YARN-7207
> URL: https://issues.apache.org/jira/browse/YARN-7207
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-7207.001.patch, YARN-7207.002.patch
>
>
> {{getLocalHostName()}} is invoked when generating the report for each 
> application, which means it is called 1000 times per {{getApplications()}} 
> call if there are 1000 apps in the RM. Some users hit a performance issue 
> when {{getLocalHostName()}} is slow under certain network environments.






[jira] [Commented] (YARN-7207) Cache the local host name when getting application list in RM

2017-09-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183566#comment-16183566
 ] 

Allen Wittenauer commented on YARN-7207:


bq. Single call of getLocalHost is pretty slow due to some DNS issue

DNS calls for localhost shouldn't happen on a properly configured machine.

> Cache the local host name when getting application list in RM
> -
>
> Key: YARN-7207
> URL: https://issues.apache.org/jira/browse/YARN-7207
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-7207.001.patch, YARN-7207.002.patch
>
>
> {{getLocalHostName()}} is invoked when generating the report for each 
> application, which means it is called 1000 times per {{getApplications()}} 
> call if there are 1000 apps in the RM. Some users hit a performance issue 
> when {{getLocalHostName()}} is slow under certain network environments.






[jira] [Commented] (YARN-7207) Cache the local host name when getting application list in RM

2017-09-27 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183554#comment-16183554
 ] 

Yufei Gu commented on YARN-7207:


There are different symptoms. A single call of getLocalHost can be pretty slow 
due to DNS issues, which may cause problems in different components, not only 
in the RM. I don't think this change will hide anything; it is still 
observable in that situation. One case we observed is that the performance of 
a single call is fine and you won't notice the slowness at all, but it gets 
significantly slower when you call it thousands of times from multiple 
threads. This change helps a lot in that case. Yes, this patch hides the 
symptom in that case; to solve it, I filed YARN-7263.
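The caching itself is simple; a minimal sketch of the approach (field and method names assumed, not the actual patch):

{code}
// Sketch: resolve the local host name once and reuse it for every app
// report, instead of calling InetAddress.getLocalHost() per application.
private static volatile String cachedHostName;

static String getLocalHostName() throws UnknownHostException {
  String name = cachedHostName;
  if (name == null) {
    // Benign race: at worst a few threads resolve concurrently once.
    name = InetAddress.getLocalHost().getHostName();
    cachedHostName = name;
  }
  return name;
}
{code}

YARN-7263 would then address the underlying slow resolution rather than the per-call cost.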

> Cache the local host name when getting application list in RM
> -
>
> Key: YARN-7207
> URL: https://issues.apache.org/jira/browse/YARN-7207
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-7207.001.patch, YARN-7207.002.patch
>
>
> {{getLocalHostName()}} is invoked when generating the report for each 
> application, which means it is called 1000 times per {{getApplications()}} 
> call if there are 1000 apps in the RM. Some users hit a performance issue 
> when {{getLocalHostName()}} is slow under certain network environments.






[jira] [Commented] (YARN-7252) Removing queue then failing over results in exception

2017-09-27 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183536#comment-16183536
 ] 

Wangda Tan commented on YARN-7252:
--

+1, thanks [~jhung]!

> Removing queue then failing over results in exception
> -
>
> Key: YARN-7252
> URL: https://issues.apache.org/jira/browse/YARN-7252
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Critical
> Attachments: YARN-7252-YARN-5734.001.patch, 
> YARN-7252-YARN-5734.002.patch
>
>
> Scenario: rm1 and rm2, starting configuration with root.default, root.a. rm1 
> is active. First, put root.a into STOPPED state, then remove it. Then put rm1 
> in standby and rm2 in active. Here's the exception: {noformat}Operation 
> failed: Error on refreshAll during transition to Active
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:315)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:4460)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
> Caused by: org.apache.hadoop.ha.ServiceFailedException: RefreshAll operation 
> failed
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshAll(AdminService.java:747)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:307)
>   ... 10 more
> Caused by: java.io.IOException: Failed to re-init queues : root.a is deleted 
> from the new capacity scheduler configuration, but the queue is not yet in 
> stopped state. Current State : RUNNING
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:436)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshQueues(AdminService.java:405)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshAll(AdminService.java:736)
>   ... 11 more
> Caused by: java.io.IOException: root.a is deleted from the new capacity 
> scheduler configuration, but the queue is not yet in stopped state. Current 
> State : RUNNING
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager.validateQueueHierarchy(CapacitySchedulerQueueManager.java:312)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager.reinitializeQueues(CapacitySchedulerQueueManager.java:174)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitializeQueues(CapacityScheduler.java:648)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:432)
>   ... 13 more{noformat}
> It seems rm2 does not think root.a was STOPPED, so when it cannot find root.a 
> and sees that it has been deleted, it throws an exception.






[jira] [Updated] (YARN-7262) Add a hierarchy into the ZKRMStateStore for delegation token znodes to prevent jute buffer overflow

2017-09-27 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-7262:

Attachment: YARN-7262.001.patch

The patch adds the ability to configure a hierarchy like that in YARN-2962.  I 
generalized and reused code from YARN-2962 when possible; otherwise, I tried to 
mirror the YARN-2962 code.  There are two big differences:
# The app znodes in YARN-2962 had children (for app attempts), which we don't 
have to worry about here because delegation token znodes don't have children.
# YARN-2962 adds an extra level named "HIERARCHIES" that doesn't seem to be 
necessary.  The token znode path is already quite long, so I omitted that.  The 
layout looks like this:
{noformat}
 * |--- RM_DT_SECRET_MANAGER_ROOT
 *|- RM_DT_SEQUENTIAL_NUMBER_ZNODE_NAME
 *|- RM_DELEGATION_TOKENS_ROOT_ZNODE_NAME
 *|   |- 1
 *|   |  |- (#TokenId barring last character)
 *|   |  |   |- (#Last character of TokenId)
 *|   |  
 *|   |- 2
 *|   |  |- (#TokenId barring last 2 characters)
 *|   |  |   |- (#Last 2 characters of TokenId)
 *|   |  
 *|   |- 3
 *|   |  |- (#TokenId barring last 3 characters)
 *|   |  |   |- (#Last 3 characters of TokenId)
 *|   |  
 *|   |- 4
 *|   |  |- (#TokenId barring last 4 characters)
 *|   |  |   |- (#Last 4 characters of TokenId)
 *|   |  
 *|   |- Token_1
 *|   |- Token_2
 *|   
{noformat}
YARN-2962 had "HIERARCHIES" next to "Token_#" with "1", "2", "3", and "4" under 
it.  Here, we just put "1", "2", "3", and "4" next to "Token_#".

Some more useful info about the patch:
- The default behavior is to use a flat layout, like before.  
{{yarn.resourcemanager.zk-delegation-token-node.split-index}} can be set to 
{{1}}, {{2}}, {{3}}, or {{4}} to split on the last 1, 2, 3, or 4 digits of the 
token sequence number, or left at {{0}} (the default) to keep the flat layout.
- Token sequence numbers start at {{0}} and have a variable width, unlike 
Application IDs, which have a width of 4, so when naming their znodes the code 
pads them to at least 4 digits.  For example, {{RMDelegationToken_5}} becomes 
{{RMDelegationToken_0005}}.  This ensures that the index splitting works 
correctly.  The exception is the flat layout, where we keep the names as 
before.
- When looking for a delegation token znode, it will first try the current 
value of {{yarn.resourcemanager.zk-delegation-token-node.split-index}}, but it 
will fall back to checking the other possible znode paths in case the token 
was created when {{yarn.resourcemanager.zk-delegation-token-node.split-index}} 
was set to a different value.  This ensures we don't lose any tokens when 
{{yarn.resourcemanager.zk-delegation-token-node.split-index}} changes.
- I haven't had a chance to try it out in an actual cluster yet, but there are 
unit tests that show it working correctly.  In the meantime, we can still start 
reviews.
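
To make the split concrete, here is a sketch of how a token's znode path could be derived for a given split index, following the padding and layout described above (helper name assumed):

{code}
// Sketch: derive the znode path for RMDelegationToken_<seq> under the
// configured split index. splitIndex == 0 keeps the old flat layout.
static String tokenZNodePath(String root, int seq, int splitIndex) {
  if (splitIndex == 0) {
    return root + "/RMDelegationToken_" + seq;   // flat layout, unpadded
  }
  // Pad the sequence number to at least 4 digits so the split is stable.
  String name = "RMDelegationToken_" + String.format("%04d", seq);
  int cut = name.length() - splitIndex;
  return root + "/" + splitIndex + "/"
      + name.substring(0, cut) + "/" + name.substring(cut);
}
// e.g. tokenZNodePath(root, 5, 1) -> <root>/1/RMDelegationToken_000/5
{code}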

> Add a hierarchy into the ZKRMStateStore for delegation token znodes to 
> prevent jute buffer overflow
> ---
>
> Key: YARN-7262
> URL: https://issues.apache.org/jira/browse/YARN-7262
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-7262.001.patch
>
>
> We've seen users run into a problem where the RM is storing so many 
> delegation tokens in the {{ZKRMStateStore}} that the _listing_ of those 
> znodes is larger than the jute buffer. This is fine during normal operation, 
> but becomes a problem on a failover, because the RM will try to read in all 
> of the token znodes (i.e., call {{getChildren}} on the parent znode). This is 
> particularly bad because everything appears to be okay, but then if a 
> failover occurs you end up with no active RMs.
> There was a similar problem with the YARN application data, which was fixed 
> in YARN-2962 by adding a (configurable) hierarchy of znodes so the RM could 
> pull subchildren without overflowing the jute buffer (though it's off by 
> default).
> We should add a hierarchy similar to that of YARN-2962, but for the 
> delegation token znodes.






[jira] [Commented] (YARN-7251) Misc changes to YARN-5734

2017-09-27 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183535#comment-16183535
 ] 

Wangda Tan commented on YARN-7251:
--

+1 to the addendum patch, thanks [~jhung]!

> Misc changes to YARN-5734
> -
>
> Key: YARN-7251
> URL: https://issues.apache.org/jira/browse/YARN-7251
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Fix For: YARN-5734
>
> Attachments: YARN-7251-YARN-5734.001.patch, 
> YARN-7251-YARN-5734.002.patch, YARN-7251-YARN-5734.003-v2.patch, 
> YARN-7251-YARN-5734.004-v2.patch, YARN-7251-YARN-5734.005-v2.patch
>
>
> Documentation/style changes to YARN-5734 before merge.






[jira] [Commented] (YARN-7207) Cache the local host name when getting application list in RM

2017-09-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183527#comment-16183527
 ] 

Allen Wittenauer commented on YARN-7207:


If resolving the local hostname is slow, then that's a symptom of a 
misconfigured host, e.g., putting dns before files in nsswitch.  Are we 
actually helping the user by hiding it?

> Cache the local host name when getting application list in RM
> -
>
> Key: YARN-7207
> URL: https://issues.apache.org/jira/browse/YARN-7207
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-7207.001.patch, YARN-7207.002.patch
>
>
> {{getLocalHostName()}} is invoked when generating the report for each 
> application, which means it is called 1000 times per {{getApplications()}} 
> call if there are 1000 apps in the RM. Some users hit a performance issue 
> when {{getLocalHostName()}} is slow under certain network environments.






[jira] [Updated] (YARN-6550) Capture launch_container.sh logs

2017-09-27 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-6550:
---
Attachment: YARN-6550.branch-2.001.patch

Attached patch for branch-2

> Capture launch_container.sh logs
> 
>
> Key: YARN-6550
> URL: https://issues.apache.org/jira/browse/YARN-6550
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
> Attachments: YARN-6550.002.patch, YARN-6550.003.patch, 
> YARN-6550.005.patch, YARN-6550.006.patch, YARN-6550.007.patch, 
> YARN-6550.008.patch, YARN-6550.009.patch, YARN-6550.010.patch, 
> YARN-6550.011.patch, YARN-6550.011.patch, YARN-6550.012.patch, 
> YARN-6550.branch-2.001.patch, YARN-6550.patch
>
>
> launch_container.sh, which is generated by the NM, will do a bunch of things 
> (like creating links, etc.) while launching a process. No logs are captured 
> until {{exec}} is called. We need to capture all failures of 
> launch_container.sh for easier troubleshooting.






[jira] [Commented] (YARN-6625) yarn application -list returns a tracking URL for AM that doesn't work in secured and HA environment

2017-09-27 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183507#comment-16183507
 ] 

Wangda Tan commented on YARN-6625:
--

Thanks [~yufeigu]!

> yarn application -list returns a tracking URL for AM that doesn't work in 
> secured and HA environment
> 
>
> Key: YARN-6625
> URL: https://issues.apache.org/jira/browse/YARN-6625
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: amrmproxy
>Affects Versions: 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6625.001.patch, YARN-6625.002.patch, 
> YARN-6625.003.patch, YARN-6625.004.patch
>
>
> The tracking URL given at the command line should work whether the cluster 
> is secured or not. The tracking URLs look like http://node-2.abc.com:47014, 
> and the AM web server is supposed to redirect to an RM address like 
> http://node-1.abc.com:8088/proxy/application_1494544954891_0002/, but it 
> fails to do so because the connection is rejected when the AM talks to the 
> RM admin service to get the HA status.






[jira] [Commented] (YARN-7248) NM returns new SCHEDULED container status to older clients

2017-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183506#comment-16183506
 ] 

Hadoop QA commented on YARN-7248:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
59s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 288 unchanged - 3 fixed = 288 total (was 291) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
40s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m  
2s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 28s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}163m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands |
|   | 

[jira] [Commented] (YARN-7009) TestNMClient.testNMClientNoCleanupOnStop is flaky by design

2017-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183504#comment-16183504
 ] 

Hadoop QA commented on YARN-7009:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
14s{color} | {color:green} root: The patch generated 0 new + 278 unchanged - 2 
fixed = 278 total (was 280) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
31s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m  
0s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 26s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 27m 
53s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m 44s{color} 
| {color:red} hadoop-sls in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}182m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.scheduler.TestContainerSchedulerQueuing
 |
|   | hadoop.yarn.sls.TestReservationSystemInvariants |
|   | hadoop.yarn.sls.TestSLSRunner |
| 

[jira] [Commented] (YARN-6550) Capture launch_container.sh logs

2017-09-27 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183501#comment-16183501
 ] 

Suma Shivaprasad commented on YARN-6550:


This patch was tested internally with runs of mapreduce and distributed shell apps 
to validate container launches without any issues

> Capture launch_container.sh logs
> 
>
> Key: YARN-6550
> URL: https://issues.apache.org/jira/browse/YARN-6550
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
> Attachments: YARN-6550.002.patch, YARN-6550.003.patch, 
> YARN-6550.005.patch, YARN-6550.006.patch, YARN-6550.007.patch, 
> YARN-6550.008.patch, YARN-6550.009.patch, YARN-6550.010.patch, 
> YARN-6550.011.patch, YARN-6550.011.patch, YARN-6550.012.patch, YARN-6550.patch
>
>
> launch_container.sh, which is generated by the NM, will do a bunch of things (like 
> creating links, etc.) while launching a process. No logs are captured until {{exec}} is 
> called. We need to capture all failures of launch_container.sh for easier 
> troubleshooting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6550) Capture launch_container.sh logs

2017-09-27 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183501#comment-16183501
 ] 

Suma Shivaprasad edited comment on YARN-6550 at 9/28/17 12:41 AM:
--

This patch was tested internally with runs of mapreduce, distributed shell apps 
to validate container launches without any issues


was (Author: suma.shivaprasad):
This patch was tested internally with runs of mapreduce, distributed shell apps 
to valudate container launches without any issues

> Capture launch_container.sh logs
> 
>
> Key: YARN-6550
> URL: https://issues.apache.org/jira/browse/YARN-6550
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
> Attachments: YARN-6550.002.patch, YARN-6550.003.patch, 
> YARN-6550.005.patch, YARN-6550.006.patch, YARN-6550.007.patch, 
> YARN-6550.008.patch, YARN-6550.009.patch, YARN-6550.010.patch, 
> YARN-6550.011.patch, YARN-6550.011.patch, YARN-6550.012.patch, YARN-6550.patch
>
>
> launch_container.sh, which is generated by the NM, will do a bunch of things (like 
> creating links, etc.) while launching a process. No logs are captured until {{exec}} is 
> called. We need to capture all failures of launch_container.sh for easier 
> troubleshooting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7191) Improve yarn-service documentation

2017-09-27 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183498#comment-16183498
 ] 

Billie Rinaldi commented on YARN-7191:
--

+1 for patch 04. I will commit with --whitespace=fix.

> Improve yarn-service documentation
> --
>
> Key: YARN-7191
> URL: https://issues.apache.org/jira/browse/YARN-7191
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-7191.yarn-native-services.01.patch, 
> YARN-7191.yarn-native-services.02.patch, 
> YARN-7191.yarn-native-services.03.patch, 
> YARN-7191.yarn-native-services.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2497) Changes for fair scheduler to support allocate resource respect labels

2017-09-27 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183476#comment-16183476
 ] 

Wangda Tan commented on YARN-2497:
--

[~templedf],

I found that the latest patch changed the reports of ApplicationReport, which is a 
behavior change; it is not exactly the same as you said:
bq. CS is doing what you described

Could you double check?

> Changes for fair scheduler to support allocate resource respect labels
> --
>
> Key: YARN-2497
> URL: https://issues.apache.org/jira/browse/YARN-2497
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Wangda Tan
>Assignee: Daniel Templeton
> Attachments: YARN-2497.001.patch, YARN-2497.002.patch, 
> YARN-2497.003.patch, YARN-2497.004.patch, YARN-2497.005.patch, 
> YARN-2497.006.patch, YARN-2497.007.patch, YARN-2499.WIP01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6509) Add a size threshold beyond which yarn logs will require a force option

2017-09-27 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183474#comment-16183474
 ] 

Wangda Tan commented on YARN-6509:
--

[~xgong],  

Thanks for the patch. Several comments regarding the CLI options: 
- The existing unit of the log size limit is bytes; do you think we should use "MB" as 
the minimum unit so the user doesn't need a calculator to get the value? If so, it's 
better to rename "size_limit" to "size_limit_mb".
- We may not need a separate ignore_size_limit; letting the user specify -1 for 
size_limit to disable the size limit should be good enough. 

For help message: 
{code}
opts.addOption(SIZE_LIMIT_OPTION, true, "Use this option to limit "
    + "the size of the total logs which could be fetched. "
    + "By default, the value is 10G.");
opts.addOption(IGNORE_SIZE_LIMIT_OPTION, false,
    "Use this option to ignore the total log size limit. By default, "
    + "we only allow to fetch at most 10G logs. If the total log size is "
    + "larger than 10G, the CLI would fail. The user could specify this "
    + "option to ignore the size limit and fetch all logs.");
{code}

- "10G" should not be hard coded, it's better to use LOG_SIZE_LIMIT_DEFAULT. 
- Also "we only allow to fetch at most 10G logs" should ref to size_limit.

> Add a size threshold beyond which yarn logs will require a force option
> ---
>
> Key: YARN-6509
> URL: https://issues.apache.org/jira/browse/YARN-6509
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Xuan Gong
> Fix For: 2.9.0
>
> Attachments: YARN-6509.1.patch, YARN-6509.2.patch, YARN-6509.3.patch, 
> YARN-6509.4.patch
>
>
> An accidental fetch for a long-running application can lead to a scenario in which 
> the large log size fills up a disk.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7250) Update Shared cache client api to use URLs

2017-09-27 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183463#comment-16183463
 ] 

Chris Trezzo commented on YARN-7250:


Thank you [~vrushalic] for the review! I will wait until tomorrow to commit, 
just in case there are any other comments. Otherwise, I plan to commit to 
trunk, branch-3.0 and branch-2.

> Update Shared cache client api to use URLs
> --
>
> Key: YARN-7250
> URL: https://issues.apache.org/jira/browse/YARN-7250
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
>Priority: Minor
> Attachments: YARN-7250-trunk-001.patch
>
>
> We should make the SharedCacheClient use api more consistent with other YARN 
> api methods. We can do this by doing two things:
> # Update the SharedCacheClient#use api so that it returns a URL instead of a 
> Path. Currently yarn developers have to convert the path to a URL when 
> creating a LocalResources. It would be much smoother if they could just use a 
> URL passed to them by the shared cache client.
> # Remove the portion of the client that deals with fragments as this is not 
> consistent with the rest of YARN. This functionality is bleeding in from the 
> MapReduce layer, which uses fragments to keep track of destination file 
> names. YARN's api does not use fragments. Instead the ContainerLaunchContext 
> expects a Map<String, LocalResource> localResources, where the strings are 
> the destination file names. We should let the YARN application handle 
> destination file names however it wants instead of pushing this into the 
> shared cache api. Additionally, fragments are a clunky way to handle this.
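
To make the intended flow concrete, a hedged sketch of how a caller could build its localResources map from a URL handed back by a URL-returning use() (the resource name, size, and timestamp here are placeholders a real caller would supply):

{code}
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.yarn.api.records.LocalResource;
import org.apache.hadoop.yarn.api.records.LocalResourceType;
import org.apache.hadoop.yarn.api.records.LocalResourceVisibility;
import org.apache.hadoop.yarn.api.records.URL;

public class SharedCacheUsageSketch {
  // 'url' stands in for the value a URL-returning SharedCacheClient#use would return.
  static Map<String, LocalResource> localResourcesFor(URL url, long size, long timestamp) {
    Map<String, LocalResource> localResources = new HashMap<>();
    LocalResource resource = LocalResource.newInstance(
        url, LocalResourceType.FILE, LocalResourceVisibility.PUBLIC, size, timestamp);
    // The application picks the destination file name (the map key) itself,
    // instead of encoding it in a URI fragment.
    localResources.put("job.jar", resource);
    return localResources;
  }
}
{code}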



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7250) Update Shared cache client api to use URLs

2017-09-27 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183461#comment-16183461
 ] 

Vrushali C edited comment on YARN-7250 at 9/27/17 11:44 PM:


+1 
Patch LGTM with the following understanding:

- The interface itself is marked Unstable so it should be okay to change it.
- Any documentation updates to be handled in YARN-2960
- Purely a client side change, the protobuf for the rpc isn't changing
- For MapReduce to use this, work is in progress at MAPREDUCE-5951


was (Author: vrushalic):
+1 
Patch LGTM with the following understanding:

- The interface itself is marked Unstable so it should be okay to change it.
- Any documentation updates to be handled in 
https://issues.apache.org/jira/browse/YARN-2960
- Purely a client side change, the protobuf for the rpc isn't changing
- For MapReduce to use this, work is in progress at MAPREDUCE-5951

> Update Shared cache client api to use URLs
> --
>
> Key: YARN-7250
> URL: https://issues.apache.org/jira/browse/YARN-7250
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
>Priority: Minor
> Attachments: YARN-7250-trunk-001.patch
>
>
> We should make the SharedCacheClient use api more consistent with other YARN 
> api methods. We can do this by doing two things:
> # Update the SharedCacheClient#use api so that it returns a URL instead of a 
> Path. Currently yarn developers have to convert the path to a URL when 
> creating a LocalResources. It would be much smoother if they could just use a 
> URL passed to them by the shared cache client.
> # Remove the portion of the client that deals with fragments as this is not 
> consistent with the rest of YARN. This functionality is bleeding in from the 
> MapReduce layer, which uses fragments to keep track of destination file 
> names. YARN's api does not use fragments. Instead the ContainerLaunchContext 
> expects a Map<String, LocalResource> localResources, where the strings are 
> the destination file names. We should let the YARN application handle 
> destination file names however it wants instead of pushing this into the 
> shared cache api. Additionally, fragments are a clunky way to handle this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7250) Update Shared cache client api to use URLs

2017-09-27 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183461#comment-16183461
 ] 

Vrushali C commented on YARN-7250:
--

+1 
Patch LGTM with the following understanding:

- The interface itself is marked Unstable so it should be okay to change it.
- Any documentation updates to be handled in 
https://issues.apache.org/jira/browse/YARN-2960
- Purely a client side change, the protobuf for the rpc isn't changing
- For MapReduce to use this, work is in progress at MAPREDUCE-5951

> Update Shared cache client api to use URLs
> --
>
> Key: YARN-7250
> URL: https://issues.apache.org/jira/browse/YARN-7250
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
>Priority: Minor
> Attachments: YARN-7250-trunk-001.patch
>
>
> We should make the SharedCacheClient use api more consistent with other YARN 
> api methods. We can do this by doing two things:
> # Update the SharedCacheClient#use api so that it returns a URL instead of a 
> Path. Currently yarn developers have to convert the path to a URL when 
> creating a LocalResources. It would be much smoother if they could just use a 
> URL passed to them by the shared cache client.
> # Remove the portion of the client that deals with fragments as this is not 
> consistent with the rest of YARN. This functionality is bleeding in from the 
> MapReduce layer, which uses fragments to keep track of destination file 
> names. YARN's api does not use fragments. Instead the ContainerLaunchContext 
> expects a Map<String, LocalResource> localResources, where the strings are 
> the destination file names. We should let the YARN application handle 
> destination file names however it wants instead of pushing this into the 
> shared cache api. Additionally, fragments are a clunky way to handle this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7250) Update Shared cache client api to use URLs

2017-09-27 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183443#comment-16183443
 ] 

Chris Trezzo commented on YARN-7250:


Also to clarify, this is a client-side-only change. The protobuf/rpc between 
the client and the SCM stays the same.

> Update Shared cache client api to use URLs
> --
>
> Key: YARN-7250
> URL: https://issues.apache.org/jira/browse/YARN-7250
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
>Priority: Minor
> Attachments: YARN-7250-trunk-001.patch
>
>
> We should make the SharedCacheClient use api more consistent with other YARN 
> api methods. We can do this by doing two things:
> # Update the SharedCacheClient#use api so that it returns a URL instead of a 
> Path. Currently yarn developers have to convert the path to a URL when 
> creating a LocalResources. It would be much smoother if they could just use a 
> URL passed to them by the shared cache client.
> # Remove the portion of the client that deals with fragments as this is not 
> consistent with the rest of YARN. This functionality is bleeding in from the 
> MapReduce layer, which uses fragments to keep track of destination file 
> names. YARN's api does not use fragments. Instead the ContainerLaunchContext 
> expects a Map<String, LocalResource> localResources, where the strings are 
> the destination file names. We should let the YARN application handle 
> destination file names however it wants instead of pushing this into the 
> shared cache api. Additionally, fragments are a clunky way to handle this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7257) AggregatedLogsBlock reports a bad 'end' value as a bad 'start' value

2017-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183407#comment-16183407
 ] 

Hudson commented on YARN-7257:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12987 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12987/])
YARN-7257. AggregatedLogsBlock reports a bad 'end' value as a bad (xgong: rev 
28c4957fccebe2d7e63ec9fe9af58313b4f21d4f)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/log/AggregatedLogsBlock.java


> AggregatedLogsBlock reports a bad 'end' value as a bad 'start' value
> 
>
> Key: YARN-7257
> URL: https://issues.apache.org/jira/browse/YARN-7257
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: log-aggregation
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Fix For: 2.9.0, 3.0.0-beta1, 3.1.0
>
> Attachments: MAPREDUCE-6969.001.patch
>
>
> TestHSWebApp has been failing recently:
> {noformat}
> Running org.apache.hadoop.mapreduce.v2.hs.webapp.TestHSWebApp
> Tests run: 17, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 5.57 sec <<< 
> FAILURE! - in org.apache.hadoop.mapreduce.v2.hs.webapp.TestHSWebApp
> testLogsViewBadStartEnd(org.apache.hadoop.mapreduce.v2.hs.webapp.TestHSWebApp)
>   Time elapsed: 0.076 sec  <<< FAILURE!
> org.mockito.exceptions.verification.junit.ArgumentsAreDifferent: 
> Argument(s) are different! Wanted:
> printWriter.write(
> "Invalid log end value: bar"
> );
> -> at 
> org.apache.hadoop.mapreduce.v2.hs.webapp.TestHSWebApp.testLogsViewBadStartEnd(TestHSWebApp.java:261)
> Actual invocation has different arguments:
> printWriter.write(
> " "http://www.w3.org/TR/html4/strict.dtd;>"
> );
> -> at 
> org.apache.hadoop.yarn.webapp.view.TextView.echoWithoutEscapeHtml(TextView.java:62)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.webapp.TestHSWebApp.testLogsViewBadStartEnd(TestHSWebApp.java:261)
> {noformat}
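
For context, a minimal sketch of the verification pattern behind the "Wanted" vs "Actual invocation" report above (the writer here is any Mockito mock of the block's PrintWriter; names are illustrative):

{code}
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import java.io.PrintWriter;

public class VerifyWriteSketch {
  public static void main(String[] args) {
    PrintWriter writer = mock(PrintWriter.class);
    writer.write("Invalid log end value: bar"); // what the fixed block should emit
    // Fails with ArgumentsAreDifferent if write() was called with anything else,
    // e.g. the page doctype, as in the report above.
    verify(writer).write("Invalid log end value: bar");
  }
}
{code}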



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7191) Improve yarn-service documentation

2017-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183393#comment-16183393
 ] 

Hadoop QA commented on YARN-7191:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} yarn-native-services Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
54s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
27m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 21 line(s) that end in whitespace. Use 
git apply --whitespace=fix <<patch_file>>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7191 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12889374/YARN-7191.yarn-native-services.04.patch
 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 4848fa13d747 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / 3f7a50d |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/17672/artifact/patchprocess/whitespace-eol.txt
 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17672/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Improve yarn-service documentation
> --
>
> Key: YARN-7191
> URL: https://issues.apache.org/jira/browse/YARN-7191
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-7191.yarn-native-services.01.patch, 
> YARN-7191.yarn-native-services.02.patch, 
> YARN-7191.yarn-native-services.03.patch, 
> YARN-7191.yarn-native-services.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7257) AggregatedLogsBlock reports a bad 'end' value as a bad 'start' value

2017-09-27 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183388#comment-16183388
 ] 

Xuan Gong commented on YARN-7257:
-

+1. Thanks for the fix, Jason

> AggregatedLogsBlock reports a bad 'end' value as a bad 'start' value
> 
>
> Key: YARN-7257
> URL: https://issues.apache.org/jira/browse/YARN-7257
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: log-aggregation
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Attachments: MAPREDUCE-6969.001.patch
>
>
> TestHSWebApp has been failing recently:
> {noformat}
> Running org.apache.hadoop.mapreduce.v2.hs.webapp.TestHSWebApp
> Tests run: 17, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 5.57 sec <<< 
> FAILURE! - in org.apache.hadoop.mapreduce.v2.hs.webapp.TestHSWebApp
> testLogsViewBadStartEnd(org.apache.hadoop.mapreduce.v2.hs.webapp.TestHSWebApp)
>   Time elapsed: 0.076 sec  <<< FAILURE!
> org.mockito.exceptions.verification.junit.ArgumentsAreDifferent: 
> Argument(s) are different! Wanted:
> printWriter.write(
> "Invalid log end value: bar"
> );
> -> at 
> org.apache.hadoop.mapreduce.v2.hs.webapp.TestHSWebApp.testLogsViewBadStartEnd(TestHSWebApp.java:261)
> Actual invocation has different arguments:
> printWriter.write(
> " "http://www.w3.org/TR/html4/strict.dtd;>"
> );
> -> at 
> org.apache.hadoop.yarn.webapp.view.TextView.echoWithoutEscapeHtml(TextView.java:62)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.webapp.TestHSWebApp.testLogsViewBadStartEnd(TestHSWebApp.java:261)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2497) Changes for fair scheduler to support allocate resource respect labels

2017-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183386#comment-16183386
 ] 

Hadoop QA commented on YARN-2497:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 21 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  8s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 6 new + 1002 unchanged - 29 fixed = 1008 total (was 1031) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 32s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
37s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m 58s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
14s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}122m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit 

[jira] [Commented] (YARN-6626) Embed REST API service into RM

2017-09-27 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183343#comment-16183343
 ] 

Jian He commented on YARN-6626:
---

Thanks, Eric. Looks like a few things can be removed in the patch:
The constructor:
{code}
public ApiServer() {
  super();
}
{code}
This comment:
// don't inject, always take appBaseRoot from RM.
And the "hadoop-yarn-server-common" dependency added in the pom.xml, since the 
class no longer inherits from the WebServices class.

> Embed REST API service into RM
> --
>
> Key: YARN-6626
> URL: https://issues.apache.org/jira/browse/YARN-6626
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Eric Yang
> Fix For: yarn-native-services
>
> Attachments: YARN-6626.yarn-native-services.001.patch, 
> YARN-6626.yarn-native-services.002.patch, 
> YARN-6626.yarn-native-services.003.patch, 
> YARN-6626.yarn-native-services.004.patch, 
> YARN-6626.yarn-native-services.005.patch, 
> YARN-6626.yarn-native-services.006.patch, 
> YARN-6626.yarn-native-services.007.patch, 
> YARN-6626.yarn-native-services.008.patch, 
> YARN-6626.yarn-native-services.009.patch
>
>
> As of now the deployment model of the Native Services REST API service is 
> standalone. There are several cross-cutting solutions that can be inherited 
> for free (kerberos, HA, ACLs, trusted proxy support, etc.) by the REST API 
> service if it is embedded into the RM process. In fact we can expose the REST 
> API via the same port as RM UI (8088 default). The URI path 
> /services/v1/applications will distinguish the REST API calls from other RM 
> APIs.
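
As a usage illustration of the embedded endpoint (the host name is a placeholder; only the default port 8088 and the URI path come from the description above):

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ServicesApiClientSketch {
  public static void main(String[] args) throws Exception {
    // "rm-host" is assumed; the REST API shares the RM UI port per the description.
    URL url = new URL("http://rm-host:8088/services/v1/applications");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("GET");
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream()))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line); // service definitions as JSON
      }
    }
  }
}
{code}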



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7248) NM returns new SCHEDULED container status to older clients

2017-09-27 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7248:
--
Attachment: YARN-7248.003.patch

Thanks for the review, Jason. Updated the patch (.003) with the changes.

> NM returns new SCHEDULED container status to older clients
> --
>
> Key: YARN-7248
> URL: https://issues.apache.org/jira/browse/YARN-7248
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Jason Lowe
>Assignee: Arun Suresh
>Priority: Blocker
> Attachments: YARN-7248.001.patch, YARN-7248.002.patch, 
> YARN-7248.003.patch
>
>
> YARN-4597 added a new SCHEDULED container state and that state is returned to 
> clients when the container is localizing, etc.  However the client may be 
> running on an older software version that does not have the new SCHEDULED 
> state, which could lead the client to crash on the unexpected container state 
> value or make incorrect assumptions, like any state != NEW and != RUNNING must 
> be COMPLETED, which was true in the older version.
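
One plausible shape of the compatibility handling the description calls for, shown purely as a hedged sketch (the actual patch introduces a separate ContainerSubState instead, per the review later in this thread, and may map states differently):

{code}
public class StateCompatSketch {
  // Old clients only know NEW, RUNNING, COMPLETE.
  enum OldContainerState { NEW, RUNNING, COMPLETE }
  enum NewContainerState { NEW, SCHEDULED, RUNNING, COMPLETE }

  static OldContainerState forOldClient(NewContainerState s) {
    switch (s) {
      case SCHEDULED:
        // Assumption for illustration: SCHEDULED is pre-launch, so report the
        // closest pre-existing state rather than an enum value old clients lack.
        return OldContainerState.NEW;
      case RUNNING:
        return OldContainerState.RUNNING;
      case COMPLETE:
        return OldContainerState.COMPLETE;
      default:
        return OldContainerState.NEW;
    }
  }
}
{code}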



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6871) Add additional deSelects params in RMWebServices#getAppReport

2017-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183337#comment-16183337
 ] 

Hadoop QA commented on YARN-6871:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
11s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 49 unchanged - 1 fixed = 49 total (was 50) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m 21s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesApps |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:eaf5c66 |
| JIRA Issue | YARN-6871 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12889361/YARN-6871-branch-2.v1.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ff0eb02d9ef9 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / c570dda |
| Default Java | 1.7.0_151 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17669/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17669/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17669/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.




[jira] [Commented] (YARN-7191) Improve yarn-service documentation

2017-09-27 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183335#comment-16183335
 ] 

Jian He commented on YARN-7191:
---

updated the doc again.

> Improve yarn-service documentation
> --
>
> Key: YARN-7191
> URL: https://issues.apache.org/jira/browse/YARN-7191
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-7191.yarn-native-services.01.patch, 
> YARN-7191.yarn-native-services.02.patch, 
> YARN-7191.yarn-native-services.03.patch, 
> YARN-7191.yarn-native-services.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7191) Improve yarn-service documentation

2017-09-27 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7191:
--
Attachment: YARN-7191.yarn-native-services.04.patch

> Improve yarn-service documentation
> --
>
> Key: YARN-7191
> URL: https://issues.apache.org/jira/browse/YARN-7191
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-7191.yarn-native-services.01.patch, 
> YARN-7191.yarn-native-services.02.patch, 
> YARN-7191.yarn-native-services.03.patch, 
> YARN-7191.yarn-native-services.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7207) Cache the local host name when getting application list in RM

2017-09-27 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183314#comment-16183314
 ] 

Yufei Gu commented on YARN-7207:


Filed YARN-7263 for #2. 

> Cache the local host name when getting application list in RM
> -
>
> Key: YARN-7207
> URL: https://issues.apache.org/jira/browse/YARN-7207
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-7207.001.patch, YARN-7207.002.patch
>
>
> {{getLocalHostName()}} is invoked for generating the report for each 
> application, which means it is called 1000 times for each 
> {{getApplications()}} if there are 1000 apps in the RM. Some users hit a 
> performance issue when {{getLocalHostName()}} is slow under some network environments.
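
A minimal sketch of the caching idea (illustrative only; the actual YARN-7207 patch may cache and refresh differently):

{code}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class CachedHostNameSketch {
  // Resolved once, then reused for every application report.
  private static volatile String cachedHostName;

  static String getLocalHostName() throws UnknownHostException {
    String name = cachedHostName;
    if (name == null) {
      // Only the first caller pays the resolution cost, instead of
      // one lookup per application in getApplications().
      name = InetAddress.getLocalHost().getCanonicalHostName();
      cachedHostName = name;
    }
    return name;
  }
}
{code}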



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7009) TestNMClient.testNMClientNoCleanupOnStop is flaky by design

2017-09-27 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-7009:
-
Attachment: YARN-7009.007.patch

> TestNMClient.testNMClientNoCleanupOnStop is flaky by design
> ---
>
> Key: YARN-7009
> URL: https://issues.apache.org/jira/browse/YARN-7009
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7009.000.patch, YARN-7009.001.patch, 
> YARN-7009.002.patch, YARN-7009.003.patch, YARN-7009.004.patch, 
> YARN-7009.005.patch, YARN-7009.006.patch, YARN-7009.007.patch
>
>
> The sleeps to wait for a transition to reinit and then back to running are not 
> long enough; they can miss the reinit event.
> {code}
> java.lang.AssertionError: Exception is not expected: 
> org.apache.hadoop.yarn.exceptions.YarnException: Cannot perform RE_INIT on 
> [container_1502735389852_0001_01_01]. Current state is [REINITIALIZING, 
> isReInitializing=true].
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.preReInitializeOrLocalizeCheck(ContainerManagerImpl.java:1772)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.reInitializeContainer(ContainerManagerImpl.java:1697)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.reInitializeContainer(ContainerManagerImpl.java:1668)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ContainerManagementProtocolPBServiceImpl.reInitializeContainer(ContainerManagementProtocolPBServiceImpl.java:214)
>   at 
> org.apache.hadoop.yarn.proto.ContainerManagementProtocol$ContainerManagementProtocolService$2.callBlockingMethod(ContainerManagementProtocol.java:237)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestNMClient.testReInitializeContainer(TestNMClient.java:567)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestNMClient.testContainerManagement(TestNMClient.java:405)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestNMClient.testNMClientNoCleanupOnStop(TestNMClient.java:214)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Cannot perform 
> RE_INIT on [container_1502735389852_0001_01_01]. Current state is 
> [REINITIALIZING, isReInitializing=true].
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.preReInitializeOrLocalizeCheck(ContainerManagerImpl.java:1772)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.reInitializeContainer(ContainerManagerImpl.java:1697)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.reInitializeContainer(ContainerManagerImpl.java:1668)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ContainerManagementProtocolPBServiceImpl.reInitializeContainer(ContainerManagementProtocolPBServiceImpl.java:214)
>   at 
> org.apache.hadoop.yarn.proto.ContainerManagementProtocol$ContainerManagementProtocolService$2.callBlockingMethod(ContainerManagementProtocol.java:237)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>   at 

[jira] [Updated] (YARN-7263) Check host name resolution performance when resource manager starts up

2017-09-27 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7263:
---
Description: According to YARN-7207, host name resolution could be slow in 
some environments, which affects RM performance in different ways. It would be 
nice to check that when the RM starts up and place a warning message in the 
logs if the performance is not ideal.   (was: Host name resolution could be slow in 
some environment, which affects RM performance in different ways. It would be 
nice to check that when RM starts up and place a warning message into the logs 
if the performance is not ideal. )

> Check host name resolution performance when resource manager starts up
> --
>
> Key: YARN-7263
> URL: https://issues.apache.org/jira/browse/YARN-7263
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>
> According to YARN-7207, host name resolution could be slow in some 
> environments, which affects RM performance in different ways. It would be nice 
> to check that when the RM starts up and place a warning message in the logs if 
> the performance is not ideal. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7263) Check host name resolution performance when resource manager starts up

2017-09-27 Thread Yufei Gu (JIRA)
Yufei Gu created YARN-7263:
--

 Summary: Check host name resolution performance when resource 
manager starts up
 Key: YARN-7263
 URL: https://issues.apache.org/jira/browse/YARN-7263
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 3.1.0
Reporter: Yufei Gu
Assignee: Yufei Gu


Host name resolution could be slow in some environments, which affects RM 
performance in different ways. It would be nice to check that when the RM starts 
up and place a warning message in the logs if the performance is not ideal. 
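
A hedged sketch of such a startup check (the threshold and log wording are assumptions; a real implementation would likely make the threshold configurable):

{code}
import java.net.InetAddress;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class HostNameResolutionCheckSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(HostNameResolutionCheckSketch.class);
  // Assumed threshold for "not ideal" resolution latency.
  private static final long WARN_THRESHOLD_MS = 100;

  static void checkResolutionPerformance() {
    long start = System.nanoTime();
    try {
      InetAddress.getLocalHost().getCanonicalHostName();
    } catch (Exception e) {
      LOG.warn("Local host name resolution failed", e);
      return;
    }
    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
    if (elapsedMs > WARN_THRESHOLD_MS) {
      LOG.warn("Local host name resolution took {} ms; this may slow down "
          + "RM operations such as getApplications()", elapsedMs);
    }
  }
}
{code}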



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6625) yarn application -list returns a tracking URL for AM that doesn't work in secured and HA environment

2017-09-27 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183308#comment-16183308
 ] 

Yufei Gu commented on YARN-6625:


[~wangda], not much, as far as I know. It may take time to rebase, though.

> yarn application -list returns a tracking URL for AM that doesn't work in 
> secured and HA environment
> 
>
> Key: YARN-6625
> URL: https://issues.apache.org/jira/browse/YARN-6625
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: amrmproxy
>Affects Versions: 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6625.001.patch, YARN-6625.002.patch, 
> YARN-6625.003.patch, YARN-6625.004.patch
>
>
> The tracking URL given at the command line should work whether the cluster is 
> secured or not. The tracking URLs look like http://node-2.abc.com:47014, and the 
> AM web server is supposed to redirect to an RM address like 
> http://node-1.abc.com:8088/proxy/application_1494544954891_0002/, but it fails 
> to do that because the connection is rejected when the AM talks to the RM admin 
> service to get the HA status.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7009) TestNMClient.testNMClientNoCleanupOnStop is flaky by design

2017-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183278#comment-16183278
 ] 

Hadoop QA commented on YARN-7009:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  7s{color} | {color:orange} root: The patch generated 2 new + 278 unchanged 
- 2 fixed = 280 total (was 280) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
44s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
46s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m  
9s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m  3s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 45s{color} 
| {color:red} hadoop-sls in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}155m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.api.impl.TestNMClient |
|   | hadoop.yarn.sls.TestReservationSystemInvariants |
|   | hadoop.yarn.sls.TestSLSRunner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | 

[jira] [Commented] (YARN-7248) NM returns new SCHEDULED container status to older clients

2017-09-27 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183279#comment-16183279
 ] 

Jason Lowe commented on YARN-7248:
--

Thanks for updating the patch!  I believe the unit test failures are 
pre-existing.

EXTRA_STATE_INFO is no longer needed in ContainerStatePBImpl.

CONTAINER_SUB_STATE_PREFIX should be final.

convertFromProtoFormat(ContainerSubStateProto) should not blindly call String 
replace since that could inadvertently replace _any_ occurrence in the string 
when we clearly only want the first part of the string replaced.  It would be 
more correct and much more efficient to use 
e.name().substring(CONTAINER_SUB_STATE_PREFIX.length()) instead.
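For illustration, a minimal sketch of that suggestion, using only the names already referenced above (the surrounding method shape is assumed):

{code}
// Sketch only: strip the leading prefix once (e.g. CSS_SCHEDULED -> SCHEDULED)
// rather than replacing every occurrence of it in the enum name.
private static ContainerSubState convertFromProtoFormat(
    ContainerSubStateProto e) {
  return ContainerSubState.valueOf(
      e.name().substring(CONTAINER_SUB_STATE_PREFIX.length()));
}
{code}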

Nit: If there are going to be comments in the proto files and the 
ContainerSubState enum on how internal states are mapped, then there should be 
a comment in getContainerSubState reminding others to keep those comments in 
sync with that method.

Nit: Would be nice to fix the indentation nits identified in checkstyle so the 
new getContainerSubState formatting is consistent with the existing 
getContainerState method.

> NM returns new SCHEDULED container status to older clients
> --
>
> Key: YARN-7248
> URL: https://issues.apache.org/jira/browse/YARN-7248
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Jason Lowe
>Assignee: Arun Suresh
>Priority: Blocker
> Attachments: YARN-7248.001.patch, YARN-7248.002.patch
>
>
> YARN-4597 added a new SCHEDULED container state and that state is returned to 
> clients when the container is localizing, etc.  However the client may be 
> running on an older software version that does not have the new SCHEDULED 
> state which could lead the client to crash on the unexpected container state 
> value or make incorrect assumptions like any state != NEW and != RUNNING must 
> be COMPLETED which was true in the older version.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6625) yarn application -list returns a tracking URL for AM that doesn't work in secured and HA environment

2017-09-27 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183271#comment-16183271
 ] 

Wangda Tan commented on YARN-6625:
--

[~yufeigu] / [~rkanter], 

Thanks for working on this patch. We recently saw this issue in our environment 
as well; are there any concerns about pulling this fix into branch-2/branch-2.8?

> yarn application -list returns a tracking URL for AM that doesn't work in 
> secured and HA environment
> 
>
> Key: YARN-6625
> URL: https://issues.apache.org/jira/browse/YARN-6625
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: amrmproxy
>Affects Versions: 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6625.001.patch, YARN-6625.002.patch, 
> YARN-6625.003.patch, YARN-6625.004.patch
>
>
> The tracking URL given at the command line should work whether the cluster is 
> secured or not. The tracking URLs are like http://node-2.abc.com:47014, and 
> the AM web server is supposed to redirect it to an RM address like 
> http://node-1.abc.com:8088/proxy/application_1494544954891_0002/, but it 
> fails to do that because the connection is rejected when the AM talks to the 
> RM admin service to get the HA status.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7262) Add a hierarchy into the ZKRMStateStore for delegation token znodes to prevent jute buffer overflow

2017-09-27 Thread Robert Kanter (JIRA)
Robert Kanter created YARN-7262:
---

 Summary: Add a hierarchy into the ZKRMStateStore for delegation 
token znodes to prevent jute buffer overflow
 Key: YARN-7262
 URL: https://issues.apache.org/jira/browse/YARN-7262
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Robert Kanter
Assignee: Robert Kanter


We've seen users who are running into a problem where the RM is storing so many 
delegation tokens in the {{ZKRMStateStore}} that the _listing_ of those znodes 
is larger than the jute buffer. This is fine during normal operation, but 
becomes a problem on a failover because the RM will try to read in all of the 
token znodes (i.e. call {{getChildren}} on the parent znode).  This is 
particularly bad because everything appears to be okay until a failover occurs, 
at which point you end up with no active RMs.

There was a similar problem with the YARN application data that was fixed in 
YARN-2962 by adding a (configurable) hierarchy of znodes so the RM could pull 
subchildren without overflowing the jute buffer (though it's off by default).
We should add a hierarchy similar to that of YARN-2962, but for the delegation 
token znodes.
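To make the idea concrete, here is a hypothetical sketch of a YARN-2962-style split for token znodes; the method name, node-name format and split rule are illustrative only, not the eventual patch:

{code}
// Hypothetical: bucket token znodes by the last splitIndex digits of the
// sequence number so getChildren() on any one parent returns a bounded list,
// e.g. with splitIndex=2, RMDelegationToken_1234 -> .../RMDelegationToken_12/34.
static String tokenZNodePath(String rootPath, int sequenceNumber, int splitIndex) {
  String nodeName = "RMDelegationToken_" + sequenceNumber;
  if (splitIndex <= 0 || splitIndex >= nodeName.length()) {
    return rootPath + "/" + nodeName;  // flat layout, i.e. today's behavior
  }
  int split = nodeName.length() - splitIndex;
  return rootPath + "/" + nodeName.substring(0, split)
      + "/" + nodeName.substring(split);
}
{code}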



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6871) Add additional deSelects params in RMWebServices#getAppReport

2017-09-27 Thread Tanuj Nayak (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tanuj Nayak updated YARN-6871:
--
Attachment: YARN-6871-branch-2.v1.patch

My apologies; attached the branch-2 patch with corrected naming.

> Add additional deSelects params in RMWebServices#getAppReport
> -
>
> Key: YARN-6871
> URL: https://issues.apache.org/jira/browse/YARN-6871
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: resourcemanager, router
>Reporter: Giovanni Matteo Fumarola
>Assignee: Tanuj Nayak
> Attachments: YARN-6871.002.patch, YARN-6871.003.patch, 
> YARN-6871.004.patch, YARN-6871.005.patch, YARN-6871.006.patch, 
> YARN-6871.007.patch, YARN-6871.008.patch, YARN-6871.009.patch, 
> YARN-6871-branch-2.v1.patch, YARN-6871.proto.patch
>
>
> This jira tracks the effort to add additional deSelect params to the 
> GetAppReport to make it lighter and faster.
> With the current one we are facing scalability issues.
> E.g. with ~500 applications running, the AppReport can reach up to 300MB in 
> size due to the {{ResourceRequest}} in the {{AppInfo}}.
> The YARN RM will return the result faster, use fewer compute cycles to create 
> the report, and improve both the RM's and the client's performance.
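For illustration, a lighter report could then be requested roughly like this (the deSelects parameter already exists for the apps listing; the exact value set for GetAppReport is an assumption here, and <rm-address> is a placeholder):

{code}
GET http://<rm-address>:8088/ws/v1/cluster/apps/{appid}?deSelects=resourceRequests
{code}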



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6871) Add additional deSelects params in RMWebServices#getAppReport

2017-09-27 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183230#comment-16183230
 ] 

Giovanni Matteo Fumarola commented on YARN-6871:


Thanks [~tanujnay] for the patch. However, you have to rename (v9) to 
YARN-6871-branch-2.v1.patch since this patch will be applied to branch-2.

> Add additional deSelects params in RMWebServices#getAppReport
> -
>
> Key: YARN-6871
> URL: https://issues.apache.org/jira/browse/YARN-6871
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: resourcemanager, router
>Reporter: Giovanni Matteo Fumarola
>Assignee: Tanuj Nayak
> Attachments: YARN-6871.002.patch, YARN-6871.003.patch, 
> YARN-6871.004.patch, YARN-6871.005.patch, YARN-6871.006.patch, 
> YARN-6871.007.patch, YARN-6871.008.patch, YARN-6871.009.patch, 
> YARN-6871.proto.patch
>
>
> This jira tracks the effort to add additional deSelect params to the 
> GetAppReport to make it lighter and faster.
> With the current one we are facing scalability issues.
> E.g. with ~500 applications running, the AppReport can reach up to 300MB in 
> size due to the {{ResourceRequest}} in the {{AppInfo}}.
> The YARN RM will return the result faster, use fewer compute cycles to create 
> the report, and improve both the RM's and the client's performance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7190) Ensure only NM classpath in 2.x gets TSv2 related hbase jars, not the user classpath

2017-09-27 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183228#comment-16183228
 ] 

Jason Lowe commented on YARN-7190:
--

My personal preference would be to remove any new jar we know is only required 
by the servers, even for trunk.  Otherwise clients and user tasks can end up 
relying on those jars being there, and we break them when we try to remove them 
later.  I hope we can all agree that the fewer jars added to the user classpath 
as part of the ATSv2 effort (or any other effort, really), the better.


> Ensure only NM classpath in 2.x gets TSv2 related hbase jars, not the user 
> classpath
> 
>
> Key: YARN-7190
> URL: https://issues.apache.org/jira/browse/YARN-7190
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>Assignee: Varun Saxena
>
> [~jlowe] had a good observation about the user classpath getting extra jars 
> in Hadoop 2.x brought in with TSv2.  If users start picking up Hadoop 2.x's 
> version of HBase jars instead of the ones they shipped with their job, it 
> could be a problem.
> So when TSv2 is to be used in 2.x, the HBase-related jars should go onto 
> only the NM classpath, not the user classpath.
> Here is a list of some jars
> {code}
> commons-csv-1.0.jar
> commons-el-1.0.jar
> commons-httpclient-3.1.jar
> disruptor-3.3.0.jar
> findbugs-annotations-1.3.9-1.jar
> hbase-annotations-1.2.6.jar
> hbase-client-1.2.6.jar
> hbase-common-1.2.6.jar
> hbase-hadoop2-compat-1.2.6.jar
> hbase-hadoop-compat-1.2.6.jar
> hbase-prefix-tree-1.2.6.jar
> hbase-procedure-1.2.6.jar
> hbase-protocol-1.2.6.jar
> hbase-server-1.2.6.jar
> htrace-core-3.1.0-incubating.jar
> jamon-runtime-2.4.1.jar
> jasper-compiler-5.5.23.jar
> jasper-runtime-5.5.23.jar
> jcodings-1.0.8.jar
> joni-2.1.2.jar
> jsp-2.1-6.1.14.jar
> jsp-api-2.1-6.1.14.jar
> jsr311-api-1.1.1.jar
> metrics-core-2.2.0.jar
> servlet-api-2.5-6.1.14.jar
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6871) Add additional deSelects params in RMWebServices#getAppReport

2017-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183208#comment-16183208
 ] 

Hadoop QA commented on YARN-6871:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  4m 44s{color} 
| {color:red} YARN-6871 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6871 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12889356/YARN-6871.009.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17667/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add additional deSelects params in RMWebServices#getAppReport
> -
>
> Key: YARN-6871
> URL: https://issues.apache.org/jira/browse/YARN-6871
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: resourcemanager, router
>Reporter: Giovanni Matteo Fumarola
>Assignee: Tanuj Nayak
> Attachments: YARN-6871.002.patch, YARN-6871.003.patch, 
> YARN-6871.004.patch, YARN-6871.005.patch, YARN-6871.006.patch, 
> YARN-6871.007.patch, YARN-6871.008.patch, YARN-6871.009.patch, 
> YARN-6871.proto.patch
>
>
> This jira tracks the effort to add additional deSelect params to the 
> GetAppReport to make it lighter and faster.
> With the current one we are facing scalability issues.
> E.g. with ~500 applications running, the AppReport can reach up to 300MB in 
> size due to the {{ResourceRequest}} in the {{AppInfo}}.
> The YARN RM will return the result faster, use fewer compute cycles to create 
> the report, and improve both the RM's and the client's performance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2497) Changes for fair scheduler to support allocate resource respect labels

2017-09-27 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-2497:
---
Attachment: YARN-2497.007.patch

CS is doing what you described, but FS is seeing 
{{NODE_LABEL_EXPRESSION_NOT_SET}} when it's deciding what to do with an app.  
This patch restores the original value but splits the methods on {{RMApp}} so 
that FS sees a real label while the original method returns a label for 
display.  I don't think there's any need to adjust the names; I adjusted the 
javadoc instead.
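To illustrate the split (the second method name below is purely illustrative, not necessarily the one in the patch):

{code}
// Illustrative only: one accessor keeps the raw value for scheduler logic,
// the other resolves NODE_LABEL_EXPRESSION_NOT_SET into something displayable.
public interface RMApp {
  /** Raw AM node label expression; may be NODE_LABEL_EXPRESSION_NOT_SET. */
  String getAmNodeLabelExpression();

  /** AM node label expression suitable for display in the UI and reports. */
  String getDisplayableAmNodeLabelExpression();
}
{code}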

> Changes for fair scheduler to support allocate resource respect labels
> --
>
> Key: YARN-2497
> URL: https://issues.apache.org/jira/browse/YARN-2497
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Wangda Tan
>Assignee: Daniel Templeton
> Attachments: YARN-2497.001.patch, YARN-2497.002.patch, 
> YARN-2497.003.patch, YARN-2497.004.patch, YARN-2497.005.patch, 
> YARN-2497.006.patch, YARN-2497.007.patch, YARN-2499.WIP01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6871) Add additional deSelects params in RMWebServices#getAppReport

2017-09-27 Thread Tanuj Nayak (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tanuj Nayak updated YARN-6871:
--
Attachment: YARN-6871.009.patch

> Add additional deSelects params in RMWebServices#getAppReport
> -
>
> Key: YARN-6871
> URL: https://issues.apache.org/jira/browse/YARN-6871
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: resourcemanager, router
>Reporter: Giovanni Matteo Fumarola
>Assignee: Tanuj Nayak
> Attachments: YARN-6871.002.patch, YARN-6871.003.patch, 
> YARN-6871.004.patch, YARN-6871.005.patch, YARN-6871.006.patch, 
> YARN-6871.007.patch, YARN-6871.008.patch, YARN-6871.009.patch, 
> YARN-6871.proto.patch
>
>
> This jira tracks the effort to add additional deSelect params to the 
> GetAppReport to make it lighter and faster.
> With the current one we are facing scalability issues.
> E.g. with ~500 applications running, the AppReport can reach up to 300MB in 
> size due to the {{ResourceRequest}} in the {{AppInfo}}.
> The YARN RM will return the result faster, use fewer compute cycles to create 
> the report, and improve both the RM's and the client's performance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7241) Merge YARN-5734 to trunk/branch-2

2017-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183194#comment-16183194
 ] 

Hadoop QA commented on YARN-7241:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m  
8s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m  
9s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 12s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 14 new + 590 unchanged - 1 fixed = 604 total (was 591) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
24s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
10s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 21s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
47s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 

[jira] [Commented] (YARN-6626) Embed REST API service into RM

2017-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183184#comment-16183184
 ] 

Hadoop QA commented on YARN-6626:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
53s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} yarn-native-services Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
55s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
42s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
23s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
26s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
yarn-native-services has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
37s{color} | {color:green} yarn-native-services passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  3s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 243 unchanged - 3 fixed = 244 total (was 246) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
49s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
31s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m  7s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} 

[jira] [Updated] (YARN-7216) Missing ability to list configuration vs status

2017-09-27 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7216:

Description: 
API Server has /ws/v1/services/{service_name}.  This REST end point returns a 
Services object which contains both configuration and status.  When status or 
macro-based parameters change in the Services object, it can confuse UI code 
that is making configuration changes.  The suggestion is to preserve a copy of 
the configuration object independent of the status object.  This gives the UI 
the ability to change the service configuration and push configuration updates.

Similar to Ambari, it might provide better information if we had the following 
separate REST end points:

{code}
 /ws/v1/services/[service_name]/spec
 /ws/v1/services/[service_name]/status
{code}


  was:
API Server has /ws/v1/services/{service_name}.  This REST end point returns a 
Services object which contains both configuration and status.  When status or 
macro-based parameters change in the Services object, it can confuse UI code 
that is making configuration changes.  The suggestion is to preserve a copy of 
the configuration object independent of the status object.  This gives the UI 
the ability to change the service configuration and push configuration updates.

Similar to Ambari, it might provide better information if we had the following 
separate REST end points:

{code}
 /ws/v1/services/[service_name]/config
 /ws/v1/services/[service_name]/status
{code}



> Missing ability to list configuration vs status
> ---
>
> Key: YARN-7216
> URL: https://issues.apache.org/jira/browse/YARN-7216
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, applications
>Reporter: Eric Yang
>Assignee: Eric Yang
>
> API Server has /ws/v1/services/{service_name}.  This REST end point returns a 
> Services object which contains both configuration and status.  When status or 
> macro-based parameters change in the Services object, it can confuse UI code 
> that is making configuration changes.  The suggestion is to preserve a copy of 
> the configuration object independent of the status object.  This gives the UI 
> the ability to change the service configuration and push configuration updates.
> Similar to Ambari, it might provide better information if we had the 
> following separate REST end points:
> {code}
>  /ws/v1/services/[service_name]/spec
>  /ws/v1/services/[service_name]/status
> {code}
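As a rough illustration of the proposed split, a hedged JAX-RS sketch; the resource class, return types and backing lookups are assumptions, not the API server's actual code:

{code}
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/ws/v1/services")
public class ServiceSpecStatusResource {

  // Only the as-submitted configuration, untouched by runtime state.
  @GET
  @Path("/{service_name}/spec")
  @Produces(MediaType.APPLICATION_JSON)
  public String getSpec(@PathParam("service_name") String name) {
    return loadPersistedSpec(name);
  }

  // Only the live status, so macro expansion cannot leak back into the
  // configuration a UI reads and resubmits.
  @GET
  @Path("/{service_name}/status")
  @Produces(MediaType.APPLICATION_JSON)
  public String getStatus(@PathParam("service_name") String name) {
    return queryLiveStatus(name);
  }

  private String loadPersistedSpec(String name) { return "{}"; }  // stub
  private String queryLiveStatus(String name) { return "{}"; }    // stub
}
{code}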



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2497) Changes for fair scheduler to support allocate resource respect labels

2017-09-27 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183134#comment-16183134
 ] 

Wangda Tan commented on YARN-2497:
--

[~templedf], could you check my comment above: 
https://issues.apache.org/jira/browse/YARN-2497?focusedCommentId=16181231=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16181231?

> Changes for fair scheduler to support allocate resource respect labels
> --
>
> Key: YARN-2497
> URL: https://issues.apache.org/jira/browse/YARN-2497
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Wangda Tan
>Assignee: Daniel Templeton
> Attachments: YARN-2497.001.patch, YARN-2497.002.patch, 
> YARN-2497.003.patch, YARN-2497.004.patch, YARN-2497.005.patch, 
> YARN-2497.006.patch, YARN-2499.WIP01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7190) Ensure only NM classpath in 2.x gets TSv2 related hbase jars, not the user classpath

2017-09-27 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183131#comment-16183131
 ] 

Varun Saxena commented on YARN-7190:


Before updating a patch, I thought of taking an opinion from others. For 
backward compatibility's sake, on branch-2 I am planning to move only the 
HBase-specific jars and their dependencies to a separate folder 
(share/hadoop/yarn/serverlib). We can do the same for trunk/branch-3.0 as 
well, even though backward compatibility is not required there. Thoughts?

*Before changes, set of jars in lib folder is as under:*
{noformat}
varun@e7450:~/Projects/hadoop/hadoop-dist/target/hadoop-3.1.0-SNAPSHOT/share/hadoop/yarn$
 ll lib
total 21088
-rw-rw-r-- 1 varun varun    4467 Sep 28 00:36 aopalliance-1.0.jar
-rw-rw-r-- 1 varun varun   34827 Sep 28 00:36 commons-csv-1.0.jar
-rw-rw-r-- 1 varun varun  112341 Sep 28 00:36 commons-el-1.0.jar
-rw-rw-r-- 1 varun varun  305001 Sep 28 00:36 commons-httpclient-3.1.jar
-rw-rw-r-- 1 varun varun  988514 Sep 28 00:36 commons-math-2.2.jar
-rw-rw-r-- 1 varun varun   79576 Sep 28 00:36 disruptor-3.3.0.jar
-rw-rw-r-- 1 varun varun 1726527 Sep 28 00:36 ehcache-3.3.1.jar
-rw-rw-r-- 1 varun varun   15322 Sep 28 00:36 findbugs-annotations-1.3.9-1.jar
-rw-rw-r-- 1 varun varun  387689 Sep 28 00:36 fst-2.50.jar
-rw-rw-r-- 1 varun varun   55236 Sep 28 00:36 
geronimo-jcache_1.0_spec-1.0-alpha-1.jar
-rw-rw-r-- 1 varun varun  668235 Sep 28 00:36 guice-4.0.jar
-rw-rw-r-- 1 varun varun   76983 Sep 28 00:36 guice-servlet-4.0.jar
-rw-rw-r-- 1 varun varun   20865 Sep 28 00:36 hbase-annotations-1.2.6.jar
-rw-rw-r-- 1 varun varun 1306445 Sep 28 00:36 hbase-client-1.2.6.jar
-rw-rw-r-- 1 varun varun  582558 Sep 28 00:36 hbase-common-1.2.6.jar
-rw-rw-r-- 1 varun varun  100739 Sep 28 00:36 hbase-hadoop2-compat-1.2.6.jar
-rw-rw-r-- 1 varun varun   36991 Sep 28 00:36 hbase-hadoop-compat-1.2.6.jar
-rw-rw-r-- 1 varun varun  102056 Sep 28 00:36 hbase-prefix-tree-1.2.6.jar
-rw-rw-r-- 1 varun varun  123671 Sep 28 00:36 hbase-procedure-1.2.6.jar
-rw-rw-r-- 1 varun varun 4378437 Sep 28 00:36 hbase-protocol-1.2.6.jar
-rw-rw-r-- 1 varun varun 4184325 Sep 28 00:36 hbase-server-1.2.6.jar
-rw-rw-r-- 1 varun varun  134308 Sep 28 00:36 HikariCP-java7-2.4.12.jar
-rw-rw-r-- 1 varun varun 1475955 Sep 28 00:36 htrace-core-3.1.0-incubating.jar
-rw-rw-r-- 1 varun varun   29947 Sep 28 00:36 jackson-jaxrs-base-2.7.8.jar
-rw-rw-r-- 1 varun varun   16776 Sep 28 00:36 
jackson-jaxrs-json-provider-2.7.8.jar
-rw-rw-r-- 1 varun varun   34578 Sep 28 00:36 
jackson-module-jaxb-annotations-2.7.8.jar
-rw-rw-r-- 1 varun varun   24543 Sep 28 00:36 jamon-runtime-2.4.1.jar
-rw-rw-r-- 1 varun varun  408133 Sep 28 00:36 jasper-compiler-5.5.23.jar
-rw-rw-r-- 1 varun varun   76844 Sep 28 00:36 jasper-runtime-5.5.23.jar
-rw-rw-r-- 1 varun varun   58487 Sep 28 00:36 java-util-1.9.0.jar
-rw-rw-r-- 1 varun varun    2497 Sep 28 00:36 javax.inject-1.jar
-rw-rw-r-- 1 varun varun 1291164 Sep 28 00:36 jcodings-1.0.8.jar
-rw-rw-r-- 1 varun varun  134021 Sep 28 00:36 jersey-client-1.19.jar
-rw-rw-r-- 1 varun varun   16151 Sep 28 00:36 jersey-guice-1.19.jar
-rw-rw-r-- 1 varun varun  187292 Sep 28 00:36 joni-2.1.2.jar
-rw-rw-r-- 1 varun varun   75232 Sep 28 00:36 json-io-2.5.1.jar
-rw-rw-r-- 1 varun varun 1024680 Sep 28 00:36 jsp-2.1-6.1.14.jar
-rw-rw-r-- 1 varun varun  134910 Sep 28 00:36 jsp-api-2.1-6.1.14.jar
-rw-rw-r-- 1 varun varun   82123 Sep 28 00:36 metrics-core-2.2.0.jar
-rw-rw-r-- 1 varun varun   85449 Sep 28 00:36 metrics-core-3.0.1.jar
-rw-rw-r-- 1 varun varun  792442 Sep 28 00:36 mssql-jdbc-6.2.1.jre7.jar
-rw-rw-r-- 1 varun varun  132368 Sep 28 00:36 servlet-api-2.5-6.1.14.jar
{noformat}

*After change, jars under lib and serverlib folders would be as under 
(excluding hbase-server jar and its dependencies):*
{noformat}
varun@e7450:~/Projects/hadoop/hadoop-dist/target/hadoop-3.1.0-SNAPSHOT/share/hadoop/yarn$
 ll *lib
lib:
total 4268
-rw-rw-r-- 1 varun varun    4467 Sep 28 00:29 aopalliance-1.0.jar
-rw-rw-r-- 1 varun varun   34827 Sep 28 00:29 commons-csv-1.0.jar
-rw-rw-r-- 1 varun varun 1726527 Sep 28 00:29 ehcache-3.3.1.jar
-rw-rw-r-- 1 varun varun  387689 Sep 28 00:29 fst-2.50.jar
-rw-rw-r-- 1 varun varun   55236 Sep 28 00:29 
geronimo-jcache_1.0_spec-1.0-alpha-1.jar
-rw-rw-r-- 1 varun varun  668235 Sep 28 00:29 guice-4.0.jar
-rw-rw-r-- 1 varun varun   76983 Sep 28 00:29 guice-servlet-4.0.jar
-rw-rw-r-- 1 varun varun  134308 Sep 28 00:29 HikariCP-java7-2.4.12.jar
-rw-rw-r-- 1 varun varun   29947 Sep 28 00:29 jackson-jaxrs-base-2.7.8.jar
-rw-rw-r-- 1 varun varun   16776 Sep 28 00:29 
jackson-jaxrs-json-provider-2.7.8.jar
-rw-rw-r-- 1 varun varun   34578 Sep 28 00:29 
jackson-module-jaxb-annotations-2.7.8.jar
-rw-rw-r-- 1 varun varun   58487 Sep 28 00:29 java-util-1.9.0.jar
-rw-rw-r-- 1 varun varun    2497 Sep 28 00:29 javax.inject-1.jar
-rw-rw-r-- 1 varun varun  134021 Sep 28 00:29 jersey-client-1.19.jar
-rw-rw-r-- 1 varun varun 

[jira] [Updated] (YARN-6059) Update paused container state in the NM state store

2017-09-27 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-6059:
-
Fix Version/s: (was: 3.0.0)
   3.1.0

> Update paused container state in the NM state store
> ---
>
> Key: YARN-6059
> URL: https://issues.apache.org/jira/browse/YARN-6059
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Hitesh Sharma
>Assignee: Hitesh Sharma
>Priority: Blocker
> Fix For: 2.9.0, 3.1.0
>
> Attachments: YARN-5216-YARN-6059.001.patch, 
> YARN-6059-YARN-5972.001.patch, YARN-6059-YARN-5972.002.patch, 
> YARN-6059-YARN-5972.003.patch, YARN-6059-YARN-5972.004.patch, 
> YARN-6059-YARN-5972.005.patch, YARN-6059-YARN-5972.006.patch, 
> YARN-6059-YARN-5972.007.patch, YARN-6059-YARN-5972.008.patch, 
> YARN-6059-YARN-5972.009.patch, YARN-6059-YARN-5972.010.patch, 
> YARN-6059-YARN-5972.011.patch, YARN-6059-YARN-5972.012.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5216) Expose configurable preemption policy for OPPORTUNISTIC containers running on the NM

2017-09-27 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-5216:
-
Fix Version/s: 3.1.0
   2.9.0

> Expose configurable preemption policy for OPPORTUNISTIC containers running on 
> the NM
> 
>
> Key: YARN-5216
> URL: https://issues.apache.org/jira/browse/YARN-5216
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: distributed-scheduling
>Reporter: Arun Suresh
>Assignee: Hitesh Sharma
>  Labels: oct16-hard
> Fix For: 2.9.0, 3.1.0
>
> Attachments: YARN5216.001.patch, yarn5216.002.patch, 
> YARN-5216-YARN-5972.001.patch, YARN-5216-YARN-5972.002.patch, 
> YARN-5216-YARN-5972.003.patch, YARN-5216-YARN-5972.004.patch, 
> YARN-5216-YARN-5972.005.patch, YARN-5216-YARN-5972.006.patch, 
> YARN-5216-YARN-5972.007.patch
>
>
> Currently, the default action taken by the QueuingContainerManager, 
> introduced in YARN-2883, when a GUARANTEED Container is scheduled on an NM 
> with OPPORTUNISTIC containers using up resources, is to KILL the running 
> OPPORTUNISTIC containers.
> This JIRA proposes to expose a configurable hook to allow the NM to take a 
> different action.
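A hedged sketch of what such a hook could look like; the interface and the PAUSE alternative are illustrative (PAUSE is plausible given the related PAUSED-container work, but the actual hook may differ):

{code}
// Illustrative only: a pluggable policy the NM could consult instead of
// unconditionally killing OPPORTUNISTIC containers.
public interface OpportunisticPreemptionPolicy {
  enum Action { KILL, PAUSE }

  /**
   * Decide what to do with a running OPPORTUNISTIC container when a
   * GUARANTEED container needs its resources.
   */
  Action preemptionAction(String opportunisticContainerId);
}
{code}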



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5292) NM Container lifecycle and state transitions to support for PAUSED container state.

2017-09-27 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-5292:
-
Fix Version/s: 3.1.0
   2.9.0

> NM Container lifecycle and state transitions to support for PAUSED container 
> state.
> ---
>
> Key: YARN-5292
> URL: https://issues.apache.org/jira/browse/YARN-5292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Hitesh Sharma
>Assignee: Hitesh Sharma
> Fix For: 2.9.0, 3.1.0
>
> Attachments: YARN-5292.001.patch, YARN-5292.002.patch, 
> YARN-5292.003.patch, YARN-5292.004.patch, YARN-5292.005.patch, 
> YARN-5292.006.patch
>
>
> This JIRA addresses the NM Container and state machine and lifecycle changes 
> needed  to support pausing.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7261) Add debug message in class FSDownload for better download latency monitoring

2017-09-27 Thread Yufei Gu (JIRA)
Yufei Gu created YARN-7261:
--

 Summary: Add debug message in class FSDownload for better download 
latency monitoring
 Key: YARN-7261
 URL: https://issues.apache.org/jira/browse/YARN-7261
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Reporter: Yufei Gu
Assignee: Yufei Gu
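Since the description is empty, a hypothetical illustration of the kind of timing message the summary suggests; FSDownload's actual fields and call sites may differ:

{code}
// Hypothetical helper: wrap the copy with a monotonic timer and log at DEBUG.
private Path timedCopy(Path source, Path destDir) throws IOException {
  long start = Time.monotonicNow();
  try {
    return copy(source, destDir);  // the actual download
  } finally {
    if (LOG.isDebugEnabled()) {
      LOG.debug("Downloading " + source + " to " + destDir + " took "
          + (Time.monotonicNow() - start) + " ms");
    }
  }
}
{code}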






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7253) Shared Cache Manager daemon command listed as admin subcmd in yarn script

2017-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183082#comment-16183082
 ] 

Hudson commented on YARN-7253:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12986 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12986/])
YARN-7253. Shared Cache Manager daemon command listed as admin subcmd in 
(ctrezzo: rev c87db8d154ab2501e786b4f1669b205759ece5c3)
* (edit) hadoop-yarn-project/hadoop-yarn/bin/yarn


> Shared Cache Manager daemon command listed as admin subcmd in yarn script
> -
>
> Key: YARN-7253
> URL: https://issues.apache.org/jira/browse/YARN-7253
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1, 3.0.0-alpha4
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
>Priority: Trivial
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-7253-trunk-001.patch
>
>
> Currently the command to start the shared cache manager daemon is listed as 
> an admin command in the yarn script usage:
> {noformat}
>   SUBCOMMAND is one of:
> Admin Commands:
> daemonlog    get/set the log level for each daemon
> node prints node report(s)
> rmadmin  admin tools
> scmadmin SharedCacheManager admin tools
> sharedcachemanager   run the SharedCacheManager daemon
> {noformat}
> It should be a daemon command.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7009) TestNMClient.testNMClientNoCleanupOnStop is flaky by design

2017-09-27 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-7009:
-
Attachment: YARN-7009.006.patch

> TestNMClient.testNMClientNoCleanupOnStop is flaky by design
> ---
>
> Key: YARN-7009
> URL: https://issues.apache.org/jira/browse/YARN-7009
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7009.000.patch, YARN-7009.001.patch, 
> YARN-7009.002.patch, YARN-7009.003.patch, YARN-7009.004.patch, 
> YARN-7009.005.patch, YARN-7009.006.patch
>
>
> The sleeps that wait for a transition to reinit and then back to running are 
> not long enough; they can miss the reinit event.
> {code}
> java.lang.AssertionError: Exception is not expected: 
> org.apache.hadoop.yarn.exceptions.YarnException: Cannot perform RE_INIT on 
> [container_1502735389852_0001_01_01]. Current state is [REINITIALIZING, 
> isReInitializing=true].
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.preReInitializeOrLocalizeCheck(ContainerManagerImpl.java:1772)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.reInitializeContainer(ContainerManagerImpl.java:1697)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.reInitializeContainer(ContainerManagerImpl.java:1668)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ContainerManagementProtocolPBServiceImpl.reInitializeContainer(ContainerManagementProtocolPBServiceImpl.java:214)
>   at 
> org.apache.hadoop.yarn.proto.ContainerManagementProtocol$ContainerManagementProtocolService$2.callBlockingMethod(ContainerManagementProtocol.java:237)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestNMClient.testReInitializeContainer(TestNMClient.java:567)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestNMClient.testContainerManagement(TestNMClient.java:405)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestNMClient.testNMClientNoCleanupOnStop(TestNMClient.java:214)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Cannot perform 
> RE_INIT on [container_1502735389852_0001_01_01]. Current state is 
> [REINITIALIZING, isReInitializing=true].
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.preReInitializeOrLocalizeCheck(ContainerManagerImpl.java:1772)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.reInitializeContainer(ContainerManagerImpl.java:1697)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.reInitializeContainer(ContainerManagerImpl.java:1668)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ContainerManagementProtocolPBServiceImpl.reInitializeContainer(ContainerManagementProtocolPBServiceImpl.java:214)
>   at 
> org.apache.hadoop.yarn.proto.ContainerManagementProtocol$ContainerManagementProtocolService$2.callBlockingMethod(ContainerManagementProtocol.java:237)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>   at 

[jira] [Commented] (YARN-7260) yarn.router.pipeline.cache-max-size is missing in yarn-default.xml

2017-09-27 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183071#comment-16183071
 ] 

Giovanni Matteo Fumarola commented on YARN-7260:


Thanks [~jlowe] for taking care of it. I just checked yarn-default.xml, and 
that one was the only property missing.
I am double-checking all the other files we edited during the merge to 
branch-2. I will open new jira(s) for any missing part(s).

> yarn.router.pipeline.cache-max-size is missing in yarn-default.xml
> --
>
> Key: YARN-7260
> URL: https://issues.apache.org/jira/browse/YARN-7260
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Rohith Sharma K S
>Assignee: Jason Lowe
> Attachments: YARN-7260-branch-2.001.patch
>
>
> In branch-2 TestYarnConfigurationFields fails
> {code}
> Running org.apache.hadoop.yarn.api.records.TestURL Tests run: 1, Failures: 0, 
> Errors: 0, Skipped: 0, Time elapsed: 0.278 sec - in 
> org.apache.hadoop.yarn.api.records.TestURL Running 
> org.apache.hadoop.yarn.conf.TestYarnConfigurationFields Tests run: 4, 
> Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 0.539 sec <<< FAILURE! - in 
> org.apache.hadoop.yarn.conf.TestYarnConfigurationFields 
> testCompareXmlAgainstConfigurationClass(org.apache.hadoop.yarn.conf.TestYarnConfigurationFields)
>  Time elapsed: 0.296 sec <<< FAILURE! java.lang.AssertionError: 
> yarn-default.xml has 1 properties missing in class 
> org.apache.hadoop.yarn.conf.YarnConfiguration at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> org.apache.hadoop.conf.TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass(TestConfigurationFieldsBase.java:588)
>  
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2497) Changes for fair scheduler to support allocate resource respect labels

2017-09-27 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183070#comment-16183070
 ] 

Daniel Templeton commented on YARN-2497:


The opportunistic containers test that failed in the last run succeeds for me 
locally, so I'm guessing it's flaky.

> Changes for fair scheduler to support allocate resource respect labels
> --
>
> Key: YARN-2497
> URL: https://issues.apache.org/jira/browse/YARN-2497
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Wangda Tan
>Assignee: Daniel Templeton
> Attachments: YARN-2497.001.patch, YARN-2497.002.patch, 
> YARN-2497.003.patch, YARN-2497.004.patch, YARN-2497.005.patch, 
> YARN-2497.006.patch, YARN-2499.WIP01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2497) Changes for fair scheduler to support allocate resource respect labels

2017-09-27 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183056#comment-16183056
 ] 

Daniel Templeton commented on YARN-2497:


Remaining checkstyle issues are bogus.  Looks like one of the test failures may 
be valid.  In any case, I think we're close enough to start with reviews.

> Changes for fair scheduler to support allocate resource respect labels
> --
>
> Key: YARN-2497
> URL: https://issues.apache.org/jira/browse/YARN-2497
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Wangda Tan
>Assignee: Daniel Templeton
> Attachments: YARN-2497.001.patch, YARN-2497.002.patch, 
> YARN-2497.003.patch, YARN-2497.004.patch, YARN-2497.005.patch, 
> YARN-2497.006.patch, YARN-2499.WIP01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6623) Add support to turn off launching privileged containers in the container-executor

2017-09-27 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183050#comment-16183050
 ] 

Andrew Wang commented on YARN-6623:
---

Folks, this is currently marked as the last blocker for beta1. I asked Miklos 
about this offline, and he explained to me that:

* We've documented the Docker feature as experimental
* It's off by default
* This isn't a regression from an earlier 3.0.0 alpha release

Given this, I'd like to drop this off the blocker list and proceed with the 
beta1 release.

Alternatively, if there's some quick hack we could do to unblock the release, 
I'm all ears.

> Add support to turn off launching privileged containers in the 
> container-executor
> -
>
> Key: YARN-6623
> URL: https://issues.apache.org/jira/browse/YARN-6623
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>Priority: Blocker
> Attachments: YARN-6623.001.patch, YARN-6623.002.patch, 
> YARN-6623.003.patch, YARN-6623.004.patch, YARN-6623.005.patch, 
> YARN-6623.006.patch, YARN-6623.007.patch, YARN-6623.008.patch, 
> YARN-6623.009.patch, YARN-6623.010.patch, YARN-6623.011.patch, 
> YARN-6623.012.patch, YARN-6623.013.patch
>
>
> Currently, launching privileged containers is controlled by the NM. We should 
> add a flag to the container-executor.cfg allowing admins to disable launching 
> privileged containers at the container-executor level.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7248) NM returns new SCHEDULED container status to older clients

2017-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183043#comment-16183043
 ] 

Hadoop QA commented on YARN-7248:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 54s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 31 new + 290 unchanged - 0 fixed = 321 total (was 290) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 37s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
41s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
22s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m 57s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}139m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7248 |
| JIRA Patch URL | 

[jira] [Commented] (YARN-2497) Changes for fair scheduler to support allocate resource respect labels

2017-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183033#comment-16183033
 ] 

Hadoop QA commented on YARN-2497:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 19 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 13s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 6 new + 975 unchanged - 29 fixed = 981 total (was 1004) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 54s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
42s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 48m 10s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}134m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit 

[jira] [Commented] (YARN-7259) add rolling policy to LogAggregationIndexedFileController

2017-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183024#comment-16183024
 ] 

Hadoop QA commented on YARN-7259:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  3m 
45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common: The patch generated 21 new 
+ 8 unchanged - 0 fixed = 29 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
21s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
35s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
|  |  
org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.parseCheckSumFiles(List)
 invokes inefficient new Long(long) constructor; use Long.valueOf(long) instead 
 At LogAggregationIndexedFileController.java:constructor; use 
Long.valueOf(long) instead  At LogAggregationIndexedFileController.java:[line 
712] |
|  |  Result of integer multiplication cast to long in 
org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.getRollOverLogMaxSize(Configuration)
  At LogAggregationIndexedFileController.java:to long in 
org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.getRollOverLogMaxSize(Configuration)
  At LogAggregationIndexedFileController.java:[line 1100] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7259 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12889322/YARN-7259.1.patch |
| Optional Tests |  asflicense  compile  
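
For readers of the report: the two FindBugs warnings above boil down to a boxing anti-pattern and an int-overflow hazard. A minimal sketch of the fixes being asked for (class and method names here are illustrative, not the actual LogAggregationIndexedFileController code):

{code}
public class FindbugsFixSketch {
  // "invokes inefficient new Long(long) constructor": prefer the cached
  // boxing path over allocating a fresh Long on every call.
  static Long parseCheckSum(long parsed) {
    return Long.valueOf(parsed);           // instead of: new Long(parsed)
  }

  // "Result of integer multiplication cast to long": promote an operand to
  // long *before* multiplying so the product cannot overflow as an int.
  static long rollOverLogMaxSize(int sizeInMb) {
    return (long) sizeInMb * 1024 * 1024;  // not: (long) (sizeInMb * 1024 * 1024)
  }
}
{code}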

[jira] [Commented] (YARN-7251) Misc changes to YARN-5734

2017-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183009#comment-16183009
 ] 

Hadoop QA commented on YARN-7251:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-5734 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
26s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
15s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
11s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} YARN-5734 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
12s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  0s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 365 unchanged - 0 fixed = 367 total (was 365) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 15s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
35s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m 56s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| 

[jira] [Commented] (YARN-7244) ShuffleHandler is not aware of disks that are added

2017-09-27 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183000#comment-16183000
 ] 

Jason Lowe commented on YARN-7244:
--

bq. The only potential issue I see is that, once a set of dirs is pulled from 
LocalDirAllocator#ctx.localDirs, these dirs are validated again only when 
another getLocalPathForWrite/Read is invoked, so there could be a window where 
we get stale dirs.

I wouldn't worry too much about that window.  Think of the much larger window a 
container gets, since it is only told once, on startup, what the list of valid 
dirs is.  I think we're fine as long as aux services are notified fairly soon 
after a disk fails.  It doesn't have to be instantaneous nor atomic.  We could 
make a pull API where the aux service essentially calls the NM's 
LocalDirsHandlerService directly to get a path to read or a path to write; then 
the aux service doesn't even have to manage the directories itself if all it 
cares about is finding a place to write or read.
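
To make the pull model concrete, here is a rough sketch of what such a callback could look like; the interface and method names below are hypothetical, modeled on the existing LocalDirsHandlerService methods rather than an actual YARN API:

{code}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.fs.Path;

// Hypothetical pull-style callback handed to an aux service on init.
public interface LocalDirsProvider {
  /** Local dirs currently healthy for reading existing data. */
  List<String> getLocalDirsForRead();
  /** Local dirs currently healthy for writing new data. */
  List<String> getLocalDirsForWrite();
  /** Allocate a writable path, re-validating disks on each call. */
  Path getLocalPathForWrite(String pathStr, long estimatedSize)
      throws IOException;
}
{code}

With something like that in place, a shuffle request handler would ask the provider on every request instead of consulting a dir list cached at NM startup.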




> ShuffleHandler is not aware of disks that are added
> ---
>
> Key: YARN-7244
> URL: https://issues.apache.org/jira/browse/YARN-7244
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
> Attachments: YARN-7244.001.patch, YARN-7244.002.patch
>
>
> The ShuffleHandler permanently remembers the list of "good" disks on NM 
> startup. If disks later are added to the node then map tasks will start using 
> them but the ShuffleHandler will not be aware of them. The end result is that 
> the data cannot be shuffled from the node leading to fetch failures and 
> re-runs of the map tasks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6626) Embed REST API service into RM

2017-09-27 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182918#comment-16182918
 ] 

Eric Yang commented on YARN-6626:
-

[~gsaha] RM hardcodes the Jersey initialization using Guice injection in 
RMWebApp instead of using web.xml to tell Jersey which REST package to scan.  I 
did not find a good place in the RM code to express the Jersey initialization 
more cleanly.
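
For anyone following along, the general shape of package-based Jersey wiring through Guice (jersey-guice, Jersey 1.x) looks roughly like the sketch below; the servlet path and resource package name are illustrative, not the actual RM wiring:

{code}
import java.util.HashMap;
import java.util.Map;

import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.servlet.GuiceServletContextListener;
import com.sun.jersey.guice.JerseyServletModule;
import com.sun.jersey.guice.spi.container.servlet.GuiceContainer;

public class RestApiGuiceConfig extends GuiceServletContextListener {
  @Override
  protected Injector getInjector() {
    return Guice.createInjector(new JerseyServletModule() {
      @Override
      protected void configureServlets() {
        Map<String, String> params = new HashMap<>();
        // Tell Jersey which package to scan for @Path REST resources.
        params.put("com.sun.jersey.config.property.packages",
            "org.example.rest");  // illustrative package name
        serve("/services/v1/*").with(GuiceContainer.class, params);
      }
    });
  }
}
{code}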

> Embed REST API service into RM
> --
>
> Key: YARN-6626
> URL: https://issues.apache.org/jira/browse/YARN-6626
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Eric Yang
> Fix For: yarn-native-services
>
> Attachments: YARN-6626.yarn-native-services.001.patch, 
> YARN-6626.yarn-native-services.002.patch, 
> YARN-6626.yarn-native-services.003.patch, 
> YARN-6626.yarn-native-services.004.patch, 
> YARN-6626.yarn-native-services.005.patch, 
> YARN-6626.yarn-native-services.006.patch, 
> YARN-6626.yarn-native-services.007.patch, 
> YARN-6626.yarn-native-services.008.patch, 
> YARN-6626.yarn-native-services.009.patch
>
>
> As of now the deployment model of the Native Services REST API service is 
> standalone. There are several cross-cutting solutions that can be inherited 
> for free (kerberos, HA, ACLs, trusted proxy support, etc.) by the REST API 
> service if it is embedded into the RM process. In fact we can expose the REST 
> API via the same port as RM UI (8088 default). The URI path 
> /services/v1/applications will distinguish the REST API calls from other RM 
> APIs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6626) Embed REST API service into RM

2017-09-27 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-6626:

Attachment: YARN-6626.yarn-native-services.009.patch

- Removed "extends WebService" from ApiServer class type. 
- Added comment about avoid cyclic dependency using reflection.
- Added white space between if condition.

> Embed REST API service into RM
> --
>
> Key: YARN-6626
> URL: https://issues.apache.org/jira/browse/YARN-6626
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Eric Yang
> Fix For: yarn-native-services
>
> Attachments: YARN-6626.yarn-native-services.001.patch, 
> YARN-6626.yarn-native-services.002.patch, 
> YARN-6626.yarn-native-services.003.patch, 
> YARN-6626.yarn-native-services.004.patch, 
> YARN-6626.yarn-native-services.005.patch, 
> YARN-6626.yarn-native-services.006.patch, 
> YARN-6626.yarn-native-services.007.patch, 
> YARN-6626.yarn-native-services.008.patch, 
> YARN-6626.yarn-native-services.009.patch
>
>
> As of now the deployment model of the Native Services REST API service is 
> standalone. There are several cross-cutting solutions that can be inherited 
> for free (kerberos, HA, ACLs, trusted proxy support, etc.) by the REST API 
> service if it is embedded into the RM process. In fact we can expose the REST 
> API via the same port as RM UI (8088 default). The URI path 
> /services/v1/applications will distinguish the REST API calls from other RM 
> APIs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6182) [YARN-3368] Fix alignment issues and missing information in queue pages

2017-09-27 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182908#comment-16182908
 ] 

Sunil G edited comment on YARN-6182 at 9/27/17 5:05 PM:


+1 Committing tomorrow if no objections.


was (Author: sunilg):
+1 Comitting tomorrow if no objections.

> [YARN-3368] Fix alignment issues and missing information in queue pages
> ---
>
> Key: YARN-6182
> URL: https://issues.apache.org/jira/browse/YARN-6182
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-6182.001.patch, YARN-6182.002.patch, 
> YARN-6182.003.patch, YARN-6182.004.patch
>
>
> This patch fixes the following issues in the Queues page:
> # Queue Capacities: Absolute Max Capacity should be aligned better.
> # Queue Information: State shows up empty.
> # Queues tab becomes inactive while hovering over a queue.
> # Capacity values are now fixed to two decimal places.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6182) [YARN-3368] Fix alignment issues and missing information in queue pages

2017-09-27 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182908#comment-16182908
 ] 

Sunil G commented on YARN-6182:
---

+1 Committing tomorrow if no objections.

> [YARN-3368] Fix alignment issues and missing information in queue pages
> ---
>
> Key: YARN-6182
> URL: https://issues.apache.org/jira/browse/YARN-6182
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-6182.001.patch, YARN-6182.002.patch, 
> YARN-6182.003.patch, YARN-6182.004.patch
>
>
> This patch fixes the following issues in the Queues page:
> # Queue Capacities: Absolute Max Capacity should be aligned better.
> # Queue Information: State shows up empty.
> # Queues tab becomes inactive while hovering over a queue.
> # Capacity values are now fixed to two decimal places.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6373) [YARN-3368] Improvements in cluster-overview page in YARN-UI

2017-09-27 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182906#comment-16182906
 ] 

Sunil G commented on YARN-6373:
---

[~GergelyNovak], the changes seem fine. Could you please rebase and summarize 
the major changes made in this patch? If it looks fine, I'll commit it 
tomorrow.

> [YARN-3368] Improvements in cluster-overview page in YARN-UI
> 
>
> Key: YARN-6373
> URL: https://issues.apache.org/jira/browse/YARN-6373
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Gergely Novák
> Attachments: YARN-6373.001.patch, YARN-6373.002.patch, 
> YARN-6373.003.patch
>
>
> # Make appId and queueName clickable to navigate to their respective pages.
> # Flow layout for panels in the cluster-overview page.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7202) End-to-end UT for api-server

2017-09-27 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182886#comment-16182886
 ] 

Eric Yang edited comment on YARN-7202 at 9/27/17 5:02 PM:
--

[~jianhe] The example you provided shows how ApiServer should behave and what 
is expected from ServiceClient.  If ServiceClient throws YarnException for 
NOT_FOUND, and ApiServer does not capture that YarnException and treat it as 
NOT_FOUND, then we look at the reason for the discrepancy.  We know 
ServiceClient will throw YarnException for various errors, including NOT_FOUND. 
 However, ServiceClient does not provide enough detail for ApiServer to tell 
NOT_FOUND apart from INTERNAL_SERVER_ERROR just by looking at the 
YarnException.  Hence, the mocked version of ServiceClient verifies how 
ApiServer behaves under all the conditions that ApiServer covers.  The 
ambiguity does not come from ApiServer.  If both the mocked ServiceClient and 
ApiServer agree on the behavior, the ambiguity must originate from 
ServiceClient.  This verification confirms that the unit test for ApiServer is 
done correctly and that the problem resides in ServiceClient.  Unit tests help 
to peel off layer after layer of ambiguity, leaving us with a good suite of 
unit test cases that can be repeated and reproduced.

ServiceClient code is inherited from Apache Slider.  There are too many 
integration classes with external dependencies.  It won't be easy to write an 
integration test as a unit test case in one task.  I would propose writing the 
unit test cases in separate JIRAs as we continue to trim down the framework and 
pay down the technical debt inherited from Apache Slider.


was (Author: eyang):
[~jianhe] The example you provided shows how ApiServer should behave and what 
is expected from ServiceClient.  If ServiceClient throws YarnException for 
NOT_FOUND, and ApiServer does not capture that YarnException and treat it as 
NOT_FOUND, then we look at the reason for the discrepancy.  We know 
ServiceClient will throw YarnException for various errors, including NOT_FOUND. 
 However, ServiceClient does not provide enough detail for ApiServer to tell 
NOT_FOUND apart from INTERNAL_SERVER_ERROR just by looking at the 
YarnException.  Hence, the mocked version of ServiceClient verifies how 
ApiServer behaves under all the conditions that ApiServer covers.  The 
ambiguity does not come from ApiServer.  If both the mocked ServiceClient and 
ApiServer are behaving correctly, the ambiguity must originate from 
ServiceClient.  This verification confirms that the unit test for ApiServer is 
done correctly and that the problem resides in ServiceClient.  If we peel off 
layer after layer of ambiguity, then we have a good suite of unit test cases 
that can be repeated and reproduced.

ServiceClient code is inherited from Apache Slider.  There are too many 
integration classes with external dependencies.  It won't be easy to write 
unit test cases in one task.  I would propose writing the unit test cases in 
separate JIRAs as we continue to trim down the framework and pay down the 
technical debt inherited from Apache Slider.

> End-to-end UT for api-server
> 
>
> Key: YARN-7202
> URL: https://issues.apache.org/jira/browse/YARN-7202
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
> Attachments: YARN-7202.yarn-native-services.001.patch, 
> YARN-7202.yarn-native-services.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7259) add rolling policy to LogAggregationIndexedFileController

2017-09-27 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-7259:

Attachment: YARN-7259.1.patch

> add rolling policy to LogAggregationIndexedFileController
> -
>
> Key: YARN-7259
> URL: https://issues.apache.org/jira/browse/YARN-7259
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-7259.1.patch
>
>
> We would roll over the log files based on size. This only happens when 
> partial log aggregation is enabled.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7202) End-to-end UT for api-server

2017-09-27 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182886#comment-16182886
 ] 

Eric Yang commented on YARN-7202:
-

[~jianhe] The example you provided shows how ApiServer should behave and what 
is expected from ServiceClient.  If ServiceClient throws YarnException for 
NOT_FOUND, and ApiServer does not capture that YarnException and treat it as 
NOT_FOUND, then we look at the reason for the discrepancy.  We know 
ServiceClient will throw YarnException for various errors, including NOT_FOUND. 
 However, ServiceClient does not provide enough detail for ApiServer to tell 
NOT_FOUND apart from INTERNAL_SERVER_ERROR just by looking at the 
YarnException.  Hence, the mocked version of ServiceClient verifies how 
ApiServer behaves under all the conditions that ApiServer covers.  The 
ambiguity does not come from ApiServer.  If both the mocked ServiceClient and 
ApiServer are behaving correctly, the ambiguity must originate from 
ServiceClient.  This verification confirms that the unit test for ApiServer is 
done correctly and that the problem resides in ServiceClient.  If we peel off 
layer after layer of ambiguity, then we have a good suite of unit test cases 
that can be repeated and reproduced.

ServiceClient code is inherited from Apache Slider.  There are too many 
integration classes with external dependencies.  It won't be easy to write 
unit test cases in one task.  I would propose writing the unit test cases in 
separate JIRAs as we continue to trim down the framework and pay down the 
technical debt inherited from Apache Slider.
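
As an illustration of the approach described above, a minimal sketch of such a test; the ApiServer wiring and the ServiceClient/ApiServer method signatures are assumed for the sketch, not taken from the attached patches:

{code}
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import javax.ws.rs.core.Response;
import org.apache.hadoop.yarn.exceptions.YarnException;
import org.junit.Test;

public class TestApiServerSketch {
  @Test
  public void unknownServiceMapsToNotFound() throws Exception {
    // Mocked ServiceClient: only a YarnException comes back, with no
    // structured error code, which is exactly the ambiguity discussed.
    ServiceClient client = mock(ServiceClient.class);
    when(client.getStatus("no-such-service"))
        .thenThrow(new YarnException("Service no-such-service not found"));

    ApiServer apiServer = new ApiServer(client);  // assumed constructor
    Response resp = apiServer.getService("no-such-service");
    assertEquals(Response.Status.NOT_FOUND.getStatusCode(), resp.getStatus());
  }
}
{code}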

> End-to-end UT for api-server
> 
>
> Key: YARN-7202
> URL: https://issues.apache.org/jira/browse/YARN-7202
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
> Attachments: YARN-7202.yarn-native-services.001.patch, 
> YARN-7202.yarn-native-services.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7207) Cache the local host name when getting application list in RM

2017-09-27 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182871#comment-16182871
 ] 

Robert Kanter edited comment on YARN-7207 at 9/27/17 4:47 PM:
--

+1 LGTM

Please file a new JIRA for #2 then.


was (Author: rkanter):
+1 LGTM

> Cache the local host name when getting application list in RM
> -
>
> Key: YARN-7207
> URL: https://issues.apache.org/jira/browse/YARN-7207
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-7207.001.patch, YARN-7207.002.patch
>
>
> {{getLocalHostName()}} is invoked while generating the report for each 
> application, which means it is called 1000 times per {{getApplications()}} 
> call if there are 1000 apps in the RM. Some users have hit performance 
> issues when {{getLocalHostName()}} is slow in certain network environments.
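
A minimal sketch of the caching idea behind the patch (names are illustrative; the real change wires the cached value into the report generation rather than a static holder):

{code}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class HostNameCache {
  private static volatile String cachedHostName;

  /** Resolve the local host name once and reuse it for every app report. */
  public static String getLocalHostName() throws UnknownHostException {
    if (cachedHostName == null) {
      cachedHostName = InetAddress.getLocalHost().getCanonicalHostName();
    }
    return cachedHostName;
  }
}
{code}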



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7207) Cache the local host name when getting application list in RM

2017-09-27 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182871#comment-16182871
 ] 

Robert Kanter commented on YARN-7207:
-

+1 LGTM

> Cache the local host name when getting application list in RM
> -
>
> Key: YARN-7207
> URL: https://issues.apache.org/jira/browse/YARN-7207
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-7207.001.patch, YARN-7207.002.patch
>
>
> {{getLocalHostName()}} is invoked while generating the report for each 
> application, which means it is called 1000 times per {{getApplications()}} 
> call if there are 1000 apps in the RM. Some users have hit performance 
> issues when {{getLocalHostName()}} is slow in certain network environments.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7205) Log improvements for the ResourceUtils

2017-09-27 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G reassigned YARN-7205:
-

Assignee: Sunil G

> Log improvements for the ResourceUtils
> --
>
> Key: YARN-7205
> URL: https://issues.apache.org/jira/browse/YARN-7205
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Jian He
>Assignee: Sunil G
>
> I've seen the logs below printed on the service client console after the 
> merge; can these be moved to debug level? cc [~sunilg], [~leftnoteasy]
> {code}
> 17/09/15 10:26:32 INFO conf.Configuration: resource-types.xml not found
> 17/09/15 10:26:32 INFO resource.ResourceUtils: Unable to find 
> 'resource-types.xml'. Falling back to memory and vcores as resources.
> 17/09/15 10:26:32 INFO resource.ResourceUtils: Adding resource type - name = 
> memory-mb, units = Mi, type = COUNTABLE
> 17/09/15 10:26:32 INFO resource.ResourceUtils: Adding resource type - name = 
> vcores, units = , type = COUNTABLE
> {code}
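
A sketch of the requested demotion to debug level (class name and log wiring are illustrative, using the commons-logging API the message format above suggests):

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class ResourceUtilsLogSketch {
  private static final Log LOG =
      LogFactory.getLog(ResourceUtilsLogSketch.class);

  static void logResourceType(String name, String units, String type) {
    if (LOG.isDebugEnabled()) {  // demote from info to debug
      LOG.debug("Adding resource type - name = " + name + ", units = "
          + units + ", type = " + type);
    }
  }
}
{code}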



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7260) yarn.router.pipeline.cache-max-size is missing in yarn-default.xml

2017-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182820#comment-16182820
 ] 

Hadoop QA commented on YARN-7260:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  8m 
29s{color} | {color:red} Docker failed to build yetus/hadoop:eaf5c66. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7260 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12889289/YARN-7260-branch-2.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17663/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> yarn.router.pipeline.cache-max-size is missing in yarn-default.xml
> --
>
> Key: YARN-7260
> URL: https://issues.apache.org/jira/browse/YARN-7260
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Rohith Sharma K S
>Assignee: Jason Lowe
> Attachments: YARN-7260-branch-2.001.patch
>
>
> In branch-2 TestYarnConfigurationFields fails
> {code}
> Running org.apache.hadoop.yarn.api.records.TestURL
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.278 sec - in org.apache.hadoop.yarn.api.records.TestURL
> Running org.apache.hadoop.yarn.conf.TestYarnConfigurationFields
> Tests run: 4, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 0.539 sec <<< FAILURE! - in org.apache.hadoop.yarn.conf.TestYarnConfigurationFields
> testCompareXmlAgainstConfigurationClass(org.apache.hadoop.yarn.conf.TestYarnConfigurationFields)  Time elapsed: 0.296 sec  <<< FAILURE!
> java.lang.AssertionError: yarn-default.xml has 1 properties missing in class org.apache.hadoop.yarn.conf.YarnConfiguration
> 	at org.junit.Assert.fail(Assert.java:88)
> 	at org.junit.Assert.assertTrue(Assert.java:41)
> 	at org.apache.hadoop.conf.TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass(TestConfigurationFieldsBase.java:588)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7244) ShuffleHandler is not aware of disks that are added

2017-09-27 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182720#comment-16182720
 ] 

Sunil G commented on YARN-7244:
---

Thanks [~jlowe] for adding more clarity on this.

The 'pull' model may be better and could work for all such cases. As Jason 
suggested, if apps could learn the latest dirs from 
{{getLocalDirsForRead/Write}}, the shuffle handler would always have a list of 
valid dirs. The only potential issue I see is that, once a set of dirs is 
pulled from {{LocalDirAllocator#ctx.localDirs}}, these dirs are validated again 
only when another getLocalPathForWrite/Read is invoked, so there could be a 
window where we get stale dirs. If a new API 
{{LocalDirAllocator#getLocalDirsForRead}} could call {{confChanged}}, then I 
think it would be the source of truth for localDirs at a given point in time.

bq.Do you think, we can improve this to skip as default behavior itself
Currently in this patch you avoid the disk validation check when shouldFilter 
is false. To add more context, maybe we could skip this check here, provided 
the ShuffleHandler already has valid dirs from the earlier API.

> ShuffleHandler is not aware of disks that are added
> ---
>
> Key: YARN-7244
> URL: https://issues.apache.org/jira/browse/YARN-7244
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
> Attachments: YARN-7244.001.patch, YARN-7244.002.patch
>
>
> The ShuffleHandler permanently remembers the list of "good" disks on NM 
> startup. If disks later are added to the node then map tasks will start using 
> them but the ShuffleHandler will not be aware of them. The end result is that 
> the data cannot be shuffled from the node leading to fetch failures and 
> re-runs of the map tasks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7244) ShuffleHandler is not aware of disks that are added

2017-09-27 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182678#comment-16182678
 ] 

Jason Lowe commented on YARN-7244:
--

Thanks for the patch!

The core issue here is that the NM is handing out directories to tasks that the 
shuffle manager is unaware of.  This filtering-or-not approach doesn't 
completely solve the issue, since the ShuffleHandler will still attempt to 
visit disks that the NM has already determined are bad.  That could cause 
performance problems if the ShuffleHandler tries to read a particularly 
problematic disk over and over as it searches for outputs to shuffle for every 
shuffle request.

It would be more ideal if the NM could convey to aux services what directories 
are in use.  Then the ShuffleHandler and NM would be in sync with respect to 
what disks should or should not be used.

bq. Another way to handle this would have been to change the AuxiliaryServices 
to pass the NMContext or the LocalDirAllocator from the NM.

That would be nice, as there are probably other things in the NMContext that 
aux services may want to know about.  However we could always go with a much 
more direct route.  We could add an API to AuxiliaryService that sets a 
callback object the service can use to retrieve the current list of paths that 
are good for reading or writing, or we could add an API to AuxiliaryService 
that the NM calls to update the service with the list of paths good for 
reading and writing (i.e. either a 'pull' or a 'push' model for exposing the 
current good directories to aux services).

The 'pull' model requires an interface or abstract class in yarn-api that 
defines the API aux services can call to retrieve the directories, and we would 
put the actual implementation of that interface in yarn-server-nodemanager.  
Ideally the interface would look a lot like the existing getLocalDirsForRead(), 
getLocalDirsForWrite(), etc. of the LocalDirsHandlerService so it's an easy 
pass-through to implement on the nodemanager side.

The 'push' model requires adding a listener interface to 
LocalDirsHandlerService so we know when a disk is added or removed and can 
call back into each aux service to update it on the current list of dirs for 
reading and writing.

Haven't had a lot of time to figure out which would be more ideal in practice 
in terms of ease-of-use and performance, but I think I'd rather see the aux 
services be more in sync with the rest of the NM wrt. local dirs being actively 
used.
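
As a sketch of the push model's listener (hypothetical names, not an existing YARN interface):

{code}
import java.util.List;

// Registered with LocalDirsHandlerService; invoked whenever the set of
// healthy local dirs changes, e.g. a disk goes bad or is re-added.
public interface DirsChangeListener {
  void onDirsChanged(List<String> dirsForRead, List<String> dirsForWrite);
}
{code}

Each aux service would register one of these at start and refresh its cached dir list from the callback.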


> ShuffleHandler is not aware of disks that are added
> ---
>
> Key: YARN-7244
> URL: https://issues.apache.org/jira/browse/YARN-7244
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
> Attachments: YARN-7244.001.patch, YARN-7244.002.patch
>
>
> The ShuffleHandler permanently remembers the list of "good" disks on NM 
> startup. If disks later are added to the node then map tasks will start using 
> them but the ShuffleHandler will not be aware of them. The end result is that 
> the data cannot be shuffled from the node leading to fetch failures and 
> re-runs of the map tasks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6871) Add additional deSelects params in RMWebServices#getAppReport

2017-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182659#comment-16182659
 ] 

Hudson commented on YARN-6871:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12985 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12985/])
YARN-6871. Add additional deSelects params in (sunilg: rev 
8facf1f976d7e12a846f12baabf54be1b7a49f9d)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesApps.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/DeSelectFields.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/AppInfo.java


> Add additional deSelects params in RMWebServices#getAppReport
> -
>
> Key: YARN-6871
> URL: https://issues.apache.org/jira/browse/YARN-6871
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: resourcemanager, router
>Reporter: Giovanni Matteo Fumarola
>Assignee: Tanuj Nayak
> Attachments: YARN-6871.002.patch, YARN-6871.003.patch, 
> YARN-6871.004.patch, YARN-6871.005.patch, YARN-6871.006.patch, 
> YARN-6871.007.patch, YARN-6871.008.patch, YARN-6871.proto.patch
>
>
> This jira tracks the effort to add additional deSelect params to the 
> GetAppReport to make it lighter and faster.
> With the current one we are facing scalability issues.
> E.g. with ~500 applications running, the AppReport can reach up to 300MB in 
> size due to the {{ResourceRequest}} in the {{AppInfo}}.
> The YARN RM will return the new result faster, use fewer compute cycles to 
> create the report, and improve both RM and client performance.
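
For context, the request shape this enables looks roughly like the following; {{resourceRequests}} is the pre-existing DeSelectFields value, and the RM address is a placeholder:

{code}
GET http://<rm-address>:8088/ws/v1/cluster/apps?deSelects=resourceRequests
{code}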



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7241) Merge YARN-5734 to trunk/branch-2

2017-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182658#comment-16182658
 ] 

Hadoop QA commented on YARN-7241:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
14s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 10s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 14 new + 591 unchanged - 1 fixed = 605 total (was 592) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
24s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
12s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 35s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 

[jira] [Updated] (YARN-7260) yarn.router.pipeline.cache-max-size is missing in yarn-default.xml

2017-09-27 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-7260:
-
Attachment: YARN-7260-branch-2.001.patch

Attaching a patch that fixes the missed rename in yarn-default.xml along with 
the new property description.  Pinging [~giovanni.fumarola] and [~subru] to 
review and see if there are other things that were missed as part of the merge 
to branch-2.

> yarn.router.pipeline.cache-max-size is missing in yarn-default.xml
> --
>
> Key: YARN-7260
> URL: https://issues.apache.org/jira/browse/YARN-7260
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Rohith Sharma K S
>Assignee: Jason Lowe
> Attachments: YARN-7260-branch-2.001.patch
>
>
> In branch-2 TestYarnConfigurationFields fails
> {code}
> Running org.apache.hadoop.yarn.api.records.TestURL
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.278 sec - in org.apache.hadoop.yarn.api.records.TestURL
> Running org.apache.hadoop.yarn.conf.TestYarnConfigurationFields
> Tests run: 4, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 0.539 sec <<< FAILURE! - in org.apache.hadoop.yarn.conf.TestYarnConfigurationFields
> testCompareXmlAgainstConfigurationClass(org.apache.hadoop.yarn.conf.TestYarnConfigurationFields)  Time elapsed: 0.296 sec  <<< FAILURE!
> java.lang.AssertionError: yarn-default.xml has 1 properties missing in class org.apache.hadoop.yarn.conf.YarnConfiguration
> 	at org.junit.Assert.fail(Assert.java:88)
> 	at org.junit.Assert.assertTrue(Assert.java:41)
> 	at org.apache.hadoop.conf.TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass(TestConfigurationFieldsBase.java:588)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7260) yarn.router.pipeline.cache-max-size is missing in yarn-default.xml

2017-09-27 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-7260:
-
Affects Version/s: 2.9.0
 Target Version/s: 2.9.0
  Summary: yarn.router.pipeline.cache-max-size is missing in 
yarn-default.xml  (was: yarn.router,.)

> yarn.router.pipeline.cache-max-size is missing in yarn-default.xml
> --
>
> Key: YARN-7260
> URL: https://issues.apache.org/jira/browse/YARN-7260
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Rohith Sharma K S
>Assignee: Jason Lowe
>
> In branch-2 TestYarnConfigurationFields fails
> {code}
> Running org.apache.hadoop.yarn.api.records.TestURL
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.278 sec - in org.apache.hadoop.yarn.api.records.TestURL
> Running org.apache.hadoop.yarn.conf.TestYarnConfigurationFields
> Tests run: 4, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 0.539 sec <<< FAILURE! - in org.apache.hadoop.yarn.conf.TestYarnConfigurationFields
> testCompareXmlAgainstConfigurationClass(org.apache.hadoop.yarn.conf.TestYarnConfigurationFields)  Time elapsed: 0.296 sec  <<< FAILURE!
> java.lang.AssertionError: yarn-default.xml has 1 properties missing in class org.apache.hadoop.yarn.conf.YarnConfiguration
> 	at org.junit.Assert.fail(Assert.java:88)
> 	at org.junit.Assert.assertTrue(Assert.java:41)
> 	at org.apache.hadoop.conf.TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass(TestConfigurationFieldsBase.java:588)
> {code}






[jira] [Assigned] (YARN-7260) yarn.router,.

2017-09-27 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe reassigned YARN-7260:


Assignee: Jason Lowe
 Summary: yarn.router,.  (was: TestYarnConfigurationFields fails in 
branch-2)

The test is failing because yarn-default.xml has 
yarn.router.clientrm.cache-max-size but that doesn't appear in 
YarnConfiguration.  Looks like this was a botched cherry-pick from YARN-5413 
into branch-2.  The original commit in trunk or the YARN-2915 branch changed 
yarn.router.clientrm.cache-max-size to yarn.router.pipeline.cache-max-size in 
both YarnConfiguration and yarn-default, but the yarn-default change was lost 
when ported to branch-2.
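
For context, here is a minimal sketch of the kind of XML-vs-class check {{TestConfigurationFieldsBase}} performs; this illustrates the idea, not the actual test code (the real test filters constants and exceptions more carefully):

{code:java}
// Minimal sketch (not the real TestConfigurationFieldsBase): every property
// key in yarn-default.xml should equal the value of some public static
// String constant declared on YarnConfiguration.
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class XmlVsClassCheck {
  public static void main(String[] args) throws Exception {
    // Collect the values of all public static String fields on the class.
    Set<String> declared = new HashSet<>();
    for (Field f : YarnConfiguration.class.getFields()) {
      if (f.getType() == String.class && Modifier.isStatic(f.getModifiers())) {
        declared.add((String) f.get(null));
      }
    }
    // Load only yarn-default.xml and report keys with no matching constant.
    Configuration conf = new Configuration(false);  // skip core defaults
    conf.addResource("yarn-default.xml");
    for (Map.Entry<String, String> e : conf) {
      if (!declared.contains(e.getKey())) {
        // With the botched cherry-pick, this reports
        // yarn.router.clientrm.cache-max-size.
        System.out.println("missing in YarnConfiguration: " + e.getKey());
      }
    }
  }
}
{code}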


> yarn.router,.
> -
>
> Key: YARN-7260
> URL: https://issues.apache.org/jira/browse/YARN-7260
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Jason Lowe
>
> In branch-2 TestYarnConfigurationFields fails
> {code}
> Running org.apache.hadoop.yarn.api.records.TestURL
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.278 sec - in org.apache.hadoop.yarn.api.records.TestURL
> Running org.apache.hadoop.yarn.conf.TestYarnConfigurationFields
> Tests run: 4, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 0.539 sec <<< FAILURE! - in org.apache.hadoop.yarn.conf.TestYarnConfigurationFields
> testCompareXmlAgainstConfigurationClass(org.apache.hadoop.yarn.conf.TestYarnConfigurationFields)  Time elapsed: 0.296 sec  <<< FAILURE!
> java.lang.AssertionError: yarn-default.xml has 1 properties missing in class org.apache.hadoop.yarn.conf.YarnConfiguration
> 	at org.junit.Assert.fail(Assert.java:88)
> 	at org.junit.Assert.assertTrue(Assert.java:41)
> 	at org.apache.hadoop.conf.TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass(TestConfigurationFieldsBase.java:588)
> {code}






[jira] [Updated] (YARN-7248) NM returns new SCHEDULED container status to older clients

2017-09-27 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7248:
--
Attachment: YARN-7248.002.patch

Updating patch (v002) per discussion.

> NM returns new SCHEDULED container status to older clients
> --
>
> Key: YARN-7248
> URL: https://issues.apache.org/jira/browse/YARN-7248
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Jason Lowe
>Assignee: Arun Suresh
>Priority: Blocker
> Attachments: YARN-7248.001.patch, YARN-7248.002.patch
>
>
> YARN-4597 added a new SCHEDULED container state and that state is returned to 
> clients when the container is localizing, etc.  However the client may be 
> running on an older software version that does not have the new SCHEDULED 
> state which could lead the client to crash on the unexpected container state 
> value or make incorrect assumptions like any state != NEW and != RUNNING must 
> be COMPLETED which was true in the older version.
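
For illustration, the backward-compatibility fix being discussed amounts to something like the sketch below when the NM builds container statuses for a client. How the NM detects an old client, and whether SCHEDULED should map to NEW or RUNNING, are assumptions here, not necessarily what the attached patches do:

{code:java}
import org.apache.hadoop.yarn.api.records.ContainerState;

public class ContainerStateCompat {
  // Hedged sketch: downgrade the post-YARN-4597 SCHEDULED state for clients
  // on releases that predate it. Mapping SCHEDULED to NEW is an assumption
  // chosen because old clients treat anything != NEW/RUNNING as COMPLETED.
  public static ContainerState toClientState(ContainerState actual,
      boolean clientKnowsScheduled) {
    if (!clientKnowsScheduled && actual == ContainerState.SCHEDULED) {
      return ContainerState.NEW;
    }
    return actual;
  }
}
{code}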






[jira] [Commented] (YARN-7248) NM returns new SCHEDULED container status to older clients

2017-09-27 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182572#comment-16182572
 ] 

Arun Suresh commented on YARN-7248:
---

Thanks for the reviews, [~leftnoteasy] and [~jlowe].
I agree with Jason about the efficiency argument; I will update the patch shortly.

> NM returns new SCHEDULED container status to older clients
> --
>
> Key: YARN-7248
> URL: https://issues.apache.org/jira/browse/YARN-7248
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Jason Lowe
>Assignee: Arun Suresh
>Priority: Blocker
> Attachments: YARN-7248.001.patch
>
>
> YARN-4597 added a new SCHEDULED container state and that state is returned to 
> clients when the container is localizing, etc.  However the client may be 
> running on an older software version that does not have the new SCHEDULED 
> state which could lead the client to crash on the unexpected container state 
> value or make incorrect assumptions like any state != NEW and != RUNNING must 
> be COMPLETED which was true in the older version.






[jira] [Commented] (YARN-7237) Cleanup usages of ResourceProfiles

2017-09-27 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182515#comment-16182515
 ] 

Daniel Templeton commented on YARN-7237:


My comments:

# Rather than creating our own minimum and maximum profiles based on the usual minimums and maximums, I'd rather get rid of the minimum and maximum profiles completely and go back to the original code for enforcing minimums and maximums (see the sketch after this list).
# Unused import in {{ResourceProfilesManagerImpl}}.
# {{testResourceProfiles()}} looks like it tests more than just the minimum and maximum profiles.  Are you sure you should remove it?
# {{TestUtils.createMockResourceProfileManager()}} appears unused.
# Both the new {{TestUtils}} methods could use some javadoc.
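
A rough sketch of what point 1 suggests, using the {{ResourceUtils}} accessors mentioned in the issue description; the helper name and exact call sites are assumptions, not patch code:

{code:java}
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.resource.Resources;

public class MinMaxEnforcement {
  // Clamp a request into [min, max] component-wise instead of routing it
  // through synthetic "minimum"/"maximum" profiles; min and max would come
  // from ResourceUtils#getResourceTypesMinimum/MaximumAllocation, which read
  // resource-types.xml and yarn-site.xml.
  public static Resource clamp(Resource requested, Resource min, Resource max) {
    return Resources.componentwiseMin(
        Resources.componentwiseMax(requested, min), max);
  }
}
{code}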

> Cleanup usages of ResourceProfiles
> --
>
> Key: YARN-7237
> URL: https://issues.apache.org/jira/browse/YARN-7237
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-7237.001.patch, YARN-7237.002.patch, 
> YARN-7237.003.patch
>
>
> While doing tests, there are a couple of issues:
> 1) {{ProfileCapability#getProfileCapabilityOverride}} overwrites whatever is 
> specified in resource-profiles.json whenever the value is >= 0, which differs 
> from the javadocs of {{ProfileCapability}}:
> bq. For example, if you have a resource profile "small" that maps to <4096M, 
> 2 cores, 1 gpu> and you set the capability override to <8192M, 0 cores, 0 
> gpu>, then the actual resource allocation on the ResourceManager will be 
> <8192M, 2 cores, 1 gpu>
> To me, the correct behavior should be to overwrite only when the value is > 0. 
> The reason is that by default a resource value is set to 0. For example, 
> assume we have a profile {{"a" = (mem=3, vcore=5, res_1=7)}} and create a 
> capability override (capability = new Resource(8)). The final result should 
> be (mem=8, vcore=5, res_1=7), instead of (mem=8, vcore=0, res_1=0); a sketch 
> of this merge rule follows below.
> 2) ResourceProfileManager now loads the minimum/maximum profile from a config 
> file (resource-profiles.json). To me this is not correct, because the 
> minimum/maximum allocation for each resource type is already specified in 
> {{resource-types.xml}}. We should always use 
> {{ResourceUtils#getResourceTypesMinimum/MaximumAllocation}} to read them from 
> resource-types.xml and yarn-site.xml. These values will be added to the 
> profiles so clients can get these configs.






[jira] [Commented] (YARN-7251) Misc changes to YARN-5734

2017-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182487#comment-16182487
 ] 

Hadoop QA commented on YARN-7251:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} |
|| || || || {color:brown} YARN-5734 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 46s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 4s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 3s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 19s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 51s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 31s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 58s{color} | {color:green} YARN-5734 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 2s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 2 new + 365 unchanged - 0 fixed = 367 total (was 365) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 3s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 37s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 36s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m 37s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. {color} |