[jira] [Commented] (YARN-5988) RM unable to start in secure setup

2017-01-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15797400#comment-15797400
 ] 

Hudson commented on YARN-5988:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11071 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11071/])
YARN-5988. RM unable to start in secure setup. Contributed by Ajith S. 
(rohithsharmaks: rev e49e0a6e37f4a32535d7d4a07015fbf9eb33c74a)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMAdminService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/AdminService.java


> RM unable to start in secure setup
> --
>
> Key: YARN-5988
> URL: https://issues.apache.org/jira/browse/YARN-5988
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5988.01.patch, YARN-5988.02.patch, 
> YARN-5988.03.patch, YARN-5988.04.patch, YARN-5988.05.patch
>
>
> When CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION=true
> RM is unable to start






[jira] [Updated] (YARN-6009) RM fails to start during an upgrade - Failed to load/recover state (YarnException: Invalid application timeout, value=0 for type=LIFETIME)

2017-01-03 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-6009:

Attachment: YARN-6009.01.patch

Updated the patch so that timeout values are not validated during recovery.
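
A minimal, self-contained sketch of the idea (illustrative only; the actual change is in RMAppManager/RMServerUtils and the names below are assumptions): timeout validation applies only to newly submitted applications, while applications replayed from the state store are accepted even with legacy values such as 0 for LIFETIME.

{code}
import java.util.Map;

// Illustrative sketch only -- not the YARN-6009 patch itself.
class TimeoutValidationSketch {
  static void maybeValidate(Map<String, Long> timeouts, boolean isRecovery) {
    if (isRecovery) {
      // Apps persisted by an older RM may carry value=0 for LIFETIME;
      // they must still be recoverable, so skip validation here.
      return;
    }
    for (Map.Entry<String, Long> e : timeouts.entrySet()) {
      if (e.getValue() <= 0) {
        throw new IllegalArgumentException("Invalid application timeout, value="
            + e.getValue() + " for type=" + e.getKey());
      }
    }
  }
}
{code}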

> RM fails to start during an upgrade - Failed to load/recover state 
> (YarnException: Invalid application timeout, value=0 for type=LIFETIME)
> --
>
> Key: YARN-6009
> URL: https://issues.apache.org/jira/browse/YARN-6009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Gour Saha
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-6009.01.patch
>
>
> ResourceManager fails to start during an upgrade with the following 
> exceptions - 
> Exception 1:
> {color:red}
> {code}
> 2016-12-09 14:57:23,508 INFO  capacity.CapacityScheduler 
> (CapacityScheduler.java:initScheduler(328)) - Initialized CapacityScheduler 
> with calculator=class 
> org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator, 
> minimumAllocation=<>, 
> maximumAllocation=<>, asynchronousScheduling=false, 
> asyncScheduleInterval=5ms
> 2016-12-09 14:57:23,509 WARN  ha.ActiveStandbyElector 
> (ActiveStandbyElector.java:becomeActive(863)) - Exception handling the 
> winning of election
> org.apache.hadoop.ha.ServiceFailedException: RM could not transition to Active
> at 
> org.apache.hadoop.yarn.server.resourcemanager.EmbeddedElectorService.becomeActive(EmbeddedElectorService.java:129)
> at 
> org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:859)
> at 
> org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:463)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:611)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
> Caused by: org.apache.hadoop.ha.ServiceFailedException: Error when 
> transitioning to Active mode
> at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:318)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.EmbeddedElectorService.becomeActive(EmbeddedElectorService.java:127)
> ... 4 more
> Caused by: org.apache.hadoop.service.ServiceStateException: 
> org.apache.hadoop.yarn.exceptions.YarnException: Invalid application timeout, 
> value=0 for type=LIFETIME
> at 
> org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:204)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:991)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1032)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1028)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToActive(ResourceManager.java:1028)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:313)
> ... 5 more
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Invalid 
> application timeout, value=0 for type=LIFETIME
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMServerUtils.validateApplicationTimeouts(RMServerUtils.java:305)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:365)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recoverApplication(RMAppManager.java:330)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recover(RMAppManager.java:463)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.recover(ResourceManager.java:1184)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:594)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> ... 13 more
> {code}
> {color}
> Exception 2:
> {color:red}
> {code}
> 2016-12-09 14:57:26,162 INFO  rmapp.RMAppImpl (RMAppImpl.java:handle(790)) - 
> application_1477927786494_0008 State change from NEW to FINISHED
> 2016-12-09 14:57:26,162 ERROR 

[jira] [Commented] (YARN-3866) AM-RM protocol changes to support container resizing

2017-01-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15797285#comment-15797285
 ] 

Hadoop QA commented on YARN-3866:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  7m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
36s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
0s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
13s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
3s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
54s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
12s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 57s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn-jdk1.8.0_111 with JDK v1.8.0_111 
generated 18 new + 39 unchanged - 0 fixed = 57 total (was 39) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  2m 15s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn-jdk1.7.0_121 with JDK v1.7.0_121 
generated 20 new + 48 unchanged - 0 fixed = 68 total (was 48) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 34s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 32 new + 102 unchanged - 0 fixed = 134 total (was 102) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-yarn-api in the patch passed with JDK 
v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
24s{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.7.0_121. {color} |
| {color:red}-1{color} | {color:red} unit 

[jira] [Commented] (YARN-3866) AM-RM protocol changes to support container resizing

2017-01-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15797280#comment-15797280
 ] 

Hadoop QA commented on YARN-3866:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 
27s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
16s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
1s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
21s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
57s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
5s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 57s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn-jdk1.8.0_111 with JDK v1.8.0_111 
generated 18 new + 39 unchanged - 0 fixed = 57 total (was 39) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  2m 21s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn-jdk1.7.0_121 with JDK v1.7.0_121 
generated 20 new + 48 unchanged - 0 fixed = 68 total (was 48) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 32 new + 102 unchanged - 0 fixed = 134 total (was 102) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-yarn-api in the patch passed with JDK 
v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
25s{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.7.0_121. {color} |
| {color:red}-1{color} | {color:red} unit 

[jira] [Commented] (YARN-6050) AMs can't be scheduled on racks or nodes

2017-01-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15797268#comment-15797268
 ] 

Hadoop QA commented on YARN-6050:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 15 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 10m 30s{color} 
| {color:red} root generated 7 new + 690 unchanged - 0 fixed = 697 total (was 
690) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  8s{color} | {color:orange} root: The patch generated 10 new + 1651 
unchanged - 5 fixed = 1661 total (was 1656) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
44s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
50s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 41m 
49s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}108m 
35s{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
49s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}245m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6050 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845466/YARN-6050.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 6de943e21e1e 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8fadd69 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| 

[jira] [Comment Edited] (YARN-5899) A small fix for printing debug info inside function canAssignToThisQueue()

2017-01-03 Thread Ying Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15797093#comment-15797093
 ] 

Ying Zhang edited comment on YARN-5899 at 1/4/17 4:04 AM:
--

This patch only modifies debug information, so no new test case is needed. I've 
enabled DEBUG mode and tested it manually. The failed test case 
TestRMRestart.testFinishedAppRemovalAfterRMRestart is a known issue tracked by 
YARN-5548.
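
For context, the usual shape of this kind of fix is sketched below. This is a generic, hypothetical illustration of guarding and correcting a DEBUG log statement; the variable names are made up and this is not the content of the attached patch.

{code}
// Hypothetical illustration only -- not the YARN-5899 patch.
if (LOG.isDebugEnabled()) {
  // Guard the string concatenation and print the values the check actually
  // used, so the DEBUG output reflects the decision made in canAssignToThisQueue().
  LOG.debug("canAssignToThisQueue: queue=" + getQueueName()
      + ", used=" + usedResources
      + ", limit=" + currentLimit
      + ", cluster=" + clusterResource);
}
{code}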


was (Author: ying zhang):
Failed test case TestRMRestart.testFinishedAppRemovalAfterRMRestart is known 
and tracked by YARN-5548.

> A small fix for printing debug info inside function canAssignToThisQueue()
> --
>
> Key: YARN-5899
> URL: https://issues.apache.org/jira/browse/YARN-5899
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 3.0.0-alpha1
>Reporter: Ying Zhang
>Assignee: Ying Zhang
>Priority: Trivial
> Attachments: YARN-5899.001.patch, YARN-5899.002.patch
>
>
> A small fix inside function canAssignToThisQueue() for printing DEBUG info. 
> Please see patch attached.






[jira] [Commented] (YARN-5899) A small fix for printing debug info inside function canAssignToThisQueue()

2017-01-03 Thread Ying Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15797093#comment-15797093
 ] 

Ying Zhang commented on YARN-5899:
--

Failed test case TestRMRestart.testFinishedAppRemovalAfterRMRestart is known 
and tracked by YARN-5548.

> A small fix for printing debug info inside function canAssignToThisQueue()
> --
>
> Key: YARN-5899
> URL: https://issues.apache.org/jira/browse/YARN-5899
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 3.0.0-alpha1
>Reporter: Ying Zhang
>Assignee: Ying Zhang
>Priority: Trivial
> Attachments: YARN-5899.001.patch, YARN-5899.002.patch
>
>
> A small fix inside function canAssignToThisQueue() for printing DEBUG info. 
> Please see patch attached.






[jira] [Comment Edited] (YARN-6009) RM fails to start during an upgrade - Failed to load/recover state (YarnException: Invalid application timeout, value=0 for type=LIFETIME)

2017-01-03 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15797057#comment-15797057
 ] 

Daniel Templeton edited comment on YARN-6009 at 1/4/17 3:44 AM:


[~gsaha], ignoring an application that failed to recover should not be 
something that happens quietly by default.  There are lots of scenarios where 
that behavior could cause problems.  I agree, though, that it should be 
possible to start up nonetheless.  YARN-6035 would give admins an explicit 
option to force a startup.  Also see the discussion on YARN-6031.


was (Author: templedf):
[~gsaha], ignoring an application that failed to recover should not be 
something that happens quietly by default.  There are lots of scenarios where 
that behavior could cause problems.  I agree, though, that it should be 
possible to startup nonetheless.  YARN-6035 would give admins an explicit 
option to get a forced startup.

> RM fails to start during an upgrade - Failed to load/recover state 
> (YarnException: Invalid application timeout, value=0 for type=LIFETIME)
> --
>
> Key: YARN-6009
> URL: https://issues.apache.org/jira/browse/YARN-6009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Gour Saha
>Assignee: Rohith Sharma K S
>Priority: Critical
>
> ResourceManager fails to start during an upgrade with the following 
> exceptions - 
> Exception 1:
> {color:red}
> {code}
> 2016-12-09 14:57:23,508 INFO  capacity.CapacityScheduler 
> (CapacityScheduler.java:initScheduler(328)) - Initialized CapacityScheduler 
> with calculator=class 
> org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator, 
> minimumAllocation=<>, 
> maximumAllocation=<>, asynchronousScheduling=false, 
> asyncScheduleInterval=5ms
> 2016-12-09 14:57:23,509 WARN  ha.ActiveStandbyElector 
> (ActiveStandbyElector.java:becomeActive(863)) - Exception handling the 
> winning of election
> org.apache.hadoop.ha.ServiceFailedException: RM could not transition to Active
> at 
> org.apache.hadoop.yarn.server.resourcemanager.EmbeddedElectorService.becomeActive(EmbeddedElectorService.java:129)
> at 
> org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:859)
> at 
> org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:463)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:611)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
> Caused by: org.apache.hadoop.ha.ServiceFailedException: Error when 
> transitioning to Active mode
> at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:318)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.EmbeddedElectorService.becomeActive(EmbeddedElectorService.java:127)
> ... 4 more
> Caused by: org.apache.hadoop.service.ServiceStateException: 
> org.apache.hadoop.yarn.exceptions.YarnException: Invalid application timeout, 
> value=0 for type=LIFETIME
> at 
> org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:204)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:991)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1032)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1028)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToActive(ResourceManager.java:1028)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:313)
> ... 5 more
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Invalid 
> application timeout, value=0 for type=LIFETIME
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMServerUtils.validateApplicationTimeouts(RMServerUtils.java:305)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:365)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recoverApplication(RMAppManager.java:330)
> at 
> 

[jira] [Commented] (YARN-5585) [Atsv2] Reader side changes for entity prefix and support for pagination via additional filters

2017-01-03 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15797031#comment-15797031
 ] 

Varun Saxena commented on YARN-5585:


Yes, this sentence should be added to the javadoc, but for fromIdPrefix, right?

> [Atsv2] Reader side changes for entity prefix and support for pagination via 
> additional filters
> ---
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
>  Labels: yarn-5355-merge-blocker
> Attachments: 0001-YARN-5585.patch, YARN-5585-YARN-5355.0001.patch, 
> YARN-5585-YARN-5355.0002.patch, YARN-5585-YARN-5355.0003.patch, 
> YARN-5585-YARN-5355.0004.patch, YARN-5585-YARN-5355.0005.patch, 
> YARN-5585-workaround.patch, YARN-5585.v0.patch
>
>
> The TimelineReader REST APIs provide a lot of filters for retrieving 
> applications. Along with those, it would be good to add a new filter, i.e. fromId, 
> so that entities can be retrieved after the given fromId. 
> Current behavior: the default limit is set to 100. If there are 1000 entities, 
> then the REST call gives the first/last 100 entities. How do we retrieve the next 
> set of 100 entities, i.e. 101 to 200 OR 900 to 801?
> Example: if applications are stored in the database as app-1, app-2, ... app-10, 
> then *getApps?limit=5* gives app-1 to app-5, but there is no way to retrieve the 
> next 5 apps. 
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> Since ATS targets storage of a large number of entities, it is a very common 
> use case to get the next set of entities using fromId rather than querying all 
> the entities. This is very useful for pagination in the web UI.
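
A hedged usage sketch of the pagination described above (the reader host, port, endpoint path, and parameter names are assumptions based on the proposal, not the final API): the client passes the last id it has seen as fromId to fetch the next page.

{code}
import java.net.HttpURLConnection;
import java.net.URL;

// Illustrative sketch only; endpoint and parameter names are assumptions.
public class FromIdPaginationSketch {
  static int fetch(String url) throws Exception {
    HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
    return conn.getResponseCode(); // a real client would read and parse the JSON body
  }

  public static void main(String[] args) throws Exception {
    String base = "http://timeline-reader.example.com:8188/ws/v2/timeline/apps";
    fetch(base + "?limit=5");               // first page: app-1 .. app-5
    fetch(base + "?limit=5&fromId=app-5");  // next page:  app-6 .. app-10
  }
}
{code}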






[jira] [Commented] (YARN-5988) RM unable to start in secure setup

2017-01-03 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15797011#comment-15797011
 ] 

Naganarasimha G R commented on YARN-5988:
-

Thanks for the clarifications, [~rohithsharma]. I think you can go ahead and do 
the honors!

> RM unable to start in secure setup
> --
>
> Key: YARN-5988
> URL: https://issues.apache.org/jira/browse/YARN-5988
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Blocker
> Attachments: YARN-5988.01.patch, YARN-5988.02.patch, 
> YARN-5988.03.patch, YARN-5988.04.patch, YARN-5988.05.patch
>
>
> When CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION=true
> RM is unable to start






[jira] [Commented] (YARN-5899) A small fix for printing debug info inside function canAssignToThisQueue()

2017-01-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15797005#comment-15797005
 ] 

Hadoop QA commented on YARN-5899:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 40m 44s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5899 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845468/YARN-5899.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b2ab3f90ec0b 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8fadd69 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/14545/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14545/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14545/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> A small fix for printing debug info inside function canAssignToThisQueue()
> 

[jira] [Commented] (YARN-5988) RM unable to start in secure setup

2017-01-03 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15796985#comment-15796985
 ] 

Rohith Sharma K S commented on YARN-5988:
-

Thanks for triggering Jenkins manually. I have made two changes. Primarily, I 
increased the base port to 1500; this was the cause of all the failures. The RPC 
server does not start on a port below 1024 unless it runs as a privileged user, 
so every RPC server start failed with a permission-denied exception. Secondly, I 
added RM_HA_ID. Without this configuration the RM auto-detects it and the test 
still passes, but I added it to keep the configuration in line with a production 
cluster.
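
A minimal sketch of the kind of test configuration being described (illustrative only; the exact keys and values in the committed patch may differ):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Illustrative sketch of the test setup described above, not the committed patch.
public class SecureRmTestConfSketch {
  static Configuration buildConf() {
    Configuration conf = new Configuration();
    conf.setBoolean(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION, true);
    // Use a base port above 1024: binding privileged ports fails with
    // "permission denied" unless the process runs as a privileged user.
    conf.set(YarnConfiguration.RM_ADDRESS, "0.0.0.0:1500");
    // Pin the HA id explicitly so the test mirrors a production HA setup
    // instead of relying on automatic detection.
    conf.setBoolean(YarnConfiguration.RM_HA_ENABLED, true);
    conf.set(YarnConfiguration.RM_HA_ID, "rm1");
    return conf;
  }
}
{code}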

> RM unable to start in secure setup
> --
>
> Key: YARN-5988
> URL: https://issues.apache.org/jira/browse/YARN-5988
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Blocker
> Attachments: YARN-5988.01.patch, YARN-5988.02.patch, 
> YARN-5988.03.patch, YARN-5988.04.patch, YARN-5988.05.patch
>
>
> When CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION=true
> RM is unable to start






[jira] [Commented] (YARN-5988) RM unable to start in secure setup

2017-01-03 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15796977#comment-15796977
 ] 

Rohith Sharma K S commented on YARN-5988:
-

I will commit the patch if there are no more objections.

> RM unable to start in secure setup
> --
>
> Key: YARN-5988
> URL: https://issues.apache.org/jira/browse/YARN-5988
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Blocker
> Attachments: YARN-5988.01.patch, YARN-5988.02.patch, 
> YARN-5988.03.patch, YARN-5988.04.patch, YARN-5988.05.patch
>
>
> When CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION=true
> RM is unable to start






[jira] [Commented] (YARN-3866) AM-RM protocol changes to support container resizing

2017-01-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15796972#comment-15796972
 ] 

Hadoop QA commented on YARN-3866:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
56s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
9s{color} | {color:green} branch-2 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
24s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
3s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
59s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
9s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} branch-2 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 57s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn-jdk1.8.0_111 with JDK v1.8.0_111 
generated 18 new + 41 unchanged - 0 fixed = 59 total (was 41) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  2m 28s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn-jdk1.7.0_121 with JDK v1.7.0_121 
generated 20 new + 50 unchanged - 0 fixed = 70 total (was 50) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 33 new + 103 unchanged - 0 fixed = 136 total (was 103) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-yarn-api in the patch passed with JDK 
v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
28s{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.7.0_121. {color} |
| {color:red}-1{color} | {color:red} unit {color} | 

[jira] [Commented] (YARN-6022) Revert changes of AbstractResourceRequest

2017-01-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15796904#comment-15796904
 ] 

Hadoop QA commented on YARN-6022:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 46s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 4 new + 147 unchanged - 1 fixed = 151 total (was 148) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
31s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 1 new + 913 unchanged - 0 fixed = 914 total (was 913) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 40m 43s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6022 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845460/YARN-6022.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6da7fada7a3a 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8fadd69 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 

[jira] [Commented] (YARN-6028) Add document for container metrics

2017-01-03 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15796901#comment-15796901
 ] 

Weiwei Yang commented on YARN-6028:
---

Resubmitted the patch; Jenkins is happy now. Hi [~templedf], how does the v2 
patch look? Does my previous comment 
[here|https://issues.apache.org/jira/browse/YARN-6028?focusedCommentId=15786733&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15786733]
 make sense to you? Please let me know, thank you.

> Add document for container metrics
> --
>
> Key: YARN-6028
> URL: https://issues.apache.org/jira/browse/YARN-6028
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: documentation, nodemanager
>Affects Versions: 2.7.3
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-6028.01.patch, YARN-6028.02.patch
>
>
> YARN-3022 exposed container metrics from the node manager, but the 
> documentation is missing in Metrics.md.






[jira] [Comment Edited] (YARN-5899) A small fix for printing debug info inside function canAssignToThisQueue()

2017-01-03 Thread Ying Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15796889#comment-15796889
 ] 

Ying Zhang edited comment on YARN-5899 at 1/4/17 2:07 AM:
--

Not sure why there wasn't Jenkins result. Delete and re-attach 
YARN-5899.002.patch to try to start Jenkins again.


was (Author: ying zhang):
Not sure why there wasn't Jenkins results. Delete and re-attach 
YARN-5899.002.patch to try to start Jenkins again.

> A small fix for printing debug info inside function canAssignToThisQueue()
> --
>
> Key: YARN-5899
> URL: https://issues.apache.org/jira/browse/YARN-5899
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 3.0.0-alpha1
>Reporter: Ying Zhang
>Assignee: Ying Zhang
>Priority: Trivial
> Attachments: YARN-5899.001.patch, YARN-5899.002.patch
>
>
> A small fix inside function canAssignToThisQueue() for printing DEBUG info. 
> Please see patch attached.






[jira] [Commented] (YARN-5899) A small fix for printing debug info inside function canAssignToThisQueue()

2017-01-03 Thread Ying Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15796889#comment-15796889
 ] 

Ying Zhang commented on YARN-5899:
--

Not sure why there wasn't Jenkins results. Delete and re-attach 
YARN-5899.002.patch to try to start Jenkins again.

> A small fix for printing debug info inside function canAssignToThisQueue()
> --
>
> Key: YARN-5899
> URL: https://issues.apache.org/jira/browse/YARN-5899
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 3.0.0-alpha1
>Reporter: Ying Zhang
>Assignee: Ying Zhang
>Priority: Trivial
> Attachments: YARN-5899.001.patch, YARN-5899.002.patch
>
>
> A small fix inside function canAssignToThisQueue() for printing DEBUG info. 
> Please see patch attached.






[jira] [Updated] (YARN-5899) A small fix for printing debug info inside function canAssignToThisQueue()

2017-01-03 Thread Ying Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ying Zhang updated YARN-5899:
-
Attachment: YARN-5899.002.patch

> A small fix for printing debug info inside function canAssignToThisQueue()
> --
>
> Key: YARN-5899
> URL: https://issues.apache.org/jira/browse/YARN-5899
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 3.0.0-alpha1
>Reporter: Ying Zhang
>Assignee: Ying Zhang
>Priority: Trivial
> Attachments: YARN-5899.001.patch, YARN-5899.002.patch
>
>
> A small fix inside function canAssignToThisQueue() for printing DEBUG info. 
> Please see patch attached.






[jira] [Updated] (YARN-5899) A small fix for printing debug info inside function canAssignToThisQueue()

2017-01-03 Thread Ying Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ying Zhang updated YARN-5899:
-
Attachment: (was: YARN-5899.002.patch)

> A small fix for printing debug info inside function canAssignToThisQueue()
> --
>
> Key: YARN-5899
> URL: https://issues.apache.org/jira/browse/YARN-5899
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 3.0.0-alpha1
>Reporter: Ying Zhang
>Assignee: Ying Zhang
>Priority: Trivial
> Attachments: YARN-5899.001.patch, YARN-5899.002.patch
>
>
> A small fix inside function canAssignToThisQueue() for printing DEBUG info. 
> Please see patch attached.






[jira] [Commented] (YARN-6009) RM fails to start during an upgrade - Failed to load/recover state (YarnException: Invalid application timeout, value=0 for type=LIFETIME)

2017-01-03 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15796868#comment-15796868
 ] 

Gour Saha commented on YARN-6009:
-

[~rohithsharma] I understand that, but I am a little worried here. No matter 
what the issue with a particular app's entry in the state store may be, it should 
not block the RM from starting. Note that this is not just limited to the lifetime 
property. We can log appropriate messages for the problematic apps (and maybe 
even update the app diagnostics) and move on with a graceful start of the RM. The 
app owners can later work on the individual problematic apps, but at least the 
cluster will be up and running, ready to serve new apps.
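
A minimal sketch of the behavior being suggested (illustrative only, not an actual RMAppManager change; the names follow the stack trace above): recovery of each application is wrapped so that a failure is logged rather than aborting the RM's transition to Active.

{code}
// Illustrative sketch only -- not an actual RMAppManager change.
for (ApplicationStateData appState : state.getApplicationState().values()) {
  try {
    recoverApplication(appState, state);
  } catch (Exception e) {
    // Log (and optionally record in the app's diagnostics) instead of failing
    // the whole RM startup; the app owner can handle the problematic app later.
    LOG.warn("Skipping recovery of "
        + appState.getApplicationSubmissionContext().getApplicationId()
        + ": " + e.getMessage(), e);
  }
}
{code}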

> RM fails to start during an upgrade - Failed to load/recover state 
> (YarnException: Invalid application timeout, value=0 for type=LIFETIME)
> --
>
> Key: YARN-6009
> URL: https://issues.apache.org/jira/browse/YARN-6009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Gour Saha
>Assignee: Rohith Sharma K S
>Priority: Critical
>
> ResourceManager fails to start during an upgrade with the following 
> exceptions - 
> Exception 1:
> {color:red}
> {code}
> 2016-12-09 14:57:23,508 INFO  capacity.CapacityScheduler 
> (CapacityScheduler.java:initScheduler(328)) - Initialized CapacityScheduler 
> with calculator=class 
> org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator, 
> minimumAllocation=<>, 
> maximumAllocation=<>, asynchronousScheduling=false, 
> asyncScheduleInterval=5ms
> 2016-12-09 14:57:23,509 WARN  ha.ActiveStandbyElector 
> (ActiveStandbyElector.java:becomeActive(863)) - Exception handling the 
> winning of election
> org.apache.hadoop.ha.ServiceFailedException: RM could not transition to Active
> at 
> org.apache.hadoop.yarn.server.resourcemanager.EmbeddedElectorService.becomeActive(EmbeddedElectorService.java:129)
> at 
> org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:859)
> at 
> org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:463)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:611)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
> Caused by: org.apache.hadoop.ha.ServiceFailedException: Error when 
> transitioning to Active mode
> at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:318)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.EmbeddedElectorService.becomeActive(EmbeddedElectorService.java:127)
> ... 4 more
> Caused by: org.apache.hadoop.service.ServiceStateException: 
> org.apache.hadoop.yarn.exceptions.YarnException: Invalid application timeout, 
> value=0 for type=LIFETIME
> at 
> org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:204)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:991)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1032)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1028)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToActive(ResourceManager.java:1028)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:313)
> ... 5 more
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Invalid 
> application timeout, value=0 for type=LIFETIME
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMServerUtils.validateApplicationTimeouts(RMServerUtils.java:305)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:365)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recoverApplication(RMAppManager.java:330)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recover(RMAppManager.java:463)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.recover(ResourceManager.java:1184)
> at 
> 

[jira] [Commented] (YARN-4899) Queue metrics of SLS capacity scheduler only activated after app submit to the queue

2017-01-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796862#comment-15796862
 ] 

Hadoop QA commented on YARN-4899:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
1s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-4899 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845461/YARN-4899.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d7474bea8f0b 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8fadd69 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14543/testReport/ |
| modules | C: hadoop-tools/hadoop-sls U: hadoop-tools/hadoop-sls |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14543/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Queue metrics of SLS capacity scheduler only activated after app submit to 
> the queue
> 
>
> Key: YARN-4899
> URL: https://issues.apache.org/jira/browse/YARN-4899
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>  Labels: oct16-medium
> Attachments: YARN-4899.1.patch, 

[jira] [Updated] (YARN-6050) AMs can't be scheduled on racks or nodes

2017-01-03 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-6050:

Attachment: YARN-6050.002.patch

Thanks [~leftnoteasy], that will be helpful.  I've created YARN-6051 to track 
that.

The 002 patch makes the protobuf changes discussed above.

> AMs can't be scheduled on racks or nodes
> 
>
> Key: YARN-6050
> URL: https://issues.apache.org/jira/browse/YARN-6050
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-6050.001.patch, YARN-6050.002.patch
>
>
> Yarn itself supports rack/node aware scheduling for AMs; however, there 
> currently are two problems:
> # To specify hard or soft rack/node requests, you have to specify more than 
> one {{ResourceRequest}}.  For example, if you want to schedule an AM only on 
> "rackA", you have to create two {{ResourceRequest}}, like this:
> {code}
> ResourceRequest.newInstance(PRIORITY, ANY, CAPABILITY, NUM_CONTAINERS, false);
> ResourceRequest.newInstance(PRIORITY, "rackA", CAPABILITY, NUM_CONTAINERS, 
> true);
> {code}
> The problem is that the Yarn API doesn't actually allow you to specify more 
> than one {{ResourceRequest}} in the {{ApplicationSubmissionContext}}.  The 
> current behavior is to either build one from {{getResource}} or directly from 
> {{getAMContainerResourceRequest}}, depending on if 
> {{getAMContainerResourceRequest}} is null or not.  We'll need to add a third 
> method, say {{getAMContainerResourceRequests}}, which takes a list of 
> {{ResourceRequest}} so that clients can specify the multiple resource 
> requests.
> # There are some places where things are hardcoded to overwrite what the 
> client specifies.  These are pretty straightforward to fix.
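To illustrate the list-based approach argued for above, here is a hedged sketch 
built with the existing {{ResourceRequest.newInstance}} factory; the proposed 
{{getAMContainerResourceRequests}}/{{setAMContainerResourceRequests}} pair would 
accept a list like the one returned here. The priority and capability values are 
placeholders, not recommendations.
{code}
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

// Sketch of the two-request pattern from the description: a hard rack
// constraint is expressed with relaxLocality=false on the ANY request plus a
// relaxable request on "rackA".
public class AmPlacementSketch {
  public static List<ResourceRequest> amRequestsForRackA() {
    Priority priority = Priority.newInstance(0);          // placeholder values
    Resource capability = Resource.newInstance(1024, 1);
    int numContainers = 1;
    return Arrays.asList(
        ResourceRequest.newInstance(priority, ResourceRequest.ANY,
            capability, numContainers, false),
        ResourceRequest.newInstance(priority, "rackA",
            capability, numContainers, true));
  }
}
{code}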



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6051) Create CS test for YARN-6050

2017-01-03 Thread Robert Kanter (JIRA)
Robert Kanter created YARN-6051:
---

 Summary: Create CS test for YARN-6050
 Key: YARN-6051
 URL: https://issues.apache.org/jira/browse/YARN-6051
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Robert Kanter
Assignee: Wangda Tan






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6028) Add document for container metrics

2017-01-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796851#comment-15796851
 ] 

Hadoop QA commented on YARN-6028:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6028 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845463/YARN-6028.02.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 807a0b17803f 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8fadd69 |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14542/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add document for container metrics
> --
>
> Key: YARN-6028
> URL: https://issues.apache.org/jira/browse/YARN-6028
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: documentation, nodemanager
>Affects Versions: 2.7.3
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-6028.01.patch, YARN-6028.02.patch
>
>
> YARN-3022 exposed container metrics from node manager, but document is 
> missing in Metrics.md.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4899) Queue metrics of SLS capacity scheduler only activated after app submit to the queue

2017-01-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796845#comment-15796845
 ] 

Hadoop QA commented on YARN-4899:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
56s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-4899 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845461/YARN-4899.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 579ba8b08566 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8fadd69 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14541/testReport/ |
| modules | C: hadoop-tools/hadoop-sls U: hadoop-tools/hadoop-sls |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14541/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Queue metrics of SLS capacity scheduler only activated after app submit to 
> the queue
> 
>
> Key: YARN-4899
> URL: https://issues.apache.org/jira/browse/YARN-4899
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>  Labels: oct16-medium
> Attachments: YARN-4899.1.patch, 

[jira] [Commented] (YARN-5988) RM unable to start in secure setup

2017-01-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796830#comment-15796830
 ] 

Hadoop QA commented on YARN-5988:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 21s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 57 unchanged - 0 fixed = 58 total (was 57) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 20s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5988 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845345/YARN-5988.05.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3ffe399f79c2 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8fadd69 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/14538/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/14538/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14538/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14538/console |
| 

[jira] [Commented] (YARN-3866) AM-RM protocol changes to support container resizing

2017-01-03 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796822#comment-15796822
 ] 

Junping Du commented on YARN-3866:
--

bq. I forgot to make changes to proto file which suggest by Arun Suresh. (move 
updateContainer to 16).
My bad too. I forgot to double-check it. I believe addendum 4 doesn't have this 
issue any more. Will wait for the Jenkins results...

> AM-RM protocol changes to support container resizing
> 
>
> Key: YARN-3866
> URL: https://issues.apache.org/jira/browse/YARN-3866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: MENG DING
>Assignee: MENG DING
>Priority: Blocker
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: YARN-3866-YARN-1197.4.patch, 
> YARN-3866-branch-2.8.addendum.1.patch, YARN-3866-branch-2.8.addendum.1.patch, 
> YARN-3866-branch-2.8.addendum.2.patch, YARN-3866-branch-2.8.addendum.3.patch, 
> YARN-3866-branch-2.8.addendum.4.patch, YARN-3866-branch-2.addendum.4.patch, 
> YARN-3866.1.patch, YARN-3866.2.patch, YARN-3866.3.patch
>
>
> YARN-1447 and YARN-1448 are outdated. 
> This ticket deals with AM-RM Protocol changes to support container resize 
> according to the latest design in YARN-1197.
> 1) Add increase/decrease requests in AllocateRequest
> 2) Get approved increase/decrease requests from RM in AllocateResponse
> 3) Add relevant test cases



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6028) Add document for container metrics

2017-01-03 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-6028:
--
Attachment: YARN-6028.02.patch

> Add document for container metrics
> --
>
> Key: YARN-6028
> URL: https://issues.apache.org/jira/browse/YARN-6028
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: documentation, nodemanager
>Affects Versions: 2.7.3
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-6028.01.patch, YARN-6028.02.patch
>
>
> YARN-3022 exposed container metrics from node manager, but document is 
> missing in Metrics.md.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6028) Add document for container metrics

2017-01-03 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-6028:
--
Attachment: (was: YARN-6028.03.patch)

> Add document for container metrics
> --
>
> Key: YARN-6028
> URL: https://issues.apache.org/jira/browse/YARN-6028
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: documentation, nodemanager
>Affects Versions: 2.7.3
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-6028.01.patch
>
>
> YARN-3022 exposed container metrics from node manager, but document is 
> missing in Metrics.md.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6028) Add document for container metrics

2017-01-03 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-6028:
--
Attachment: (was: YARN-6028.02.patch)

> Add document for container metrics
> --
>
> Key: YARN-6028
> URL: https://issues.apache.org/jira/browse/YARN-6028
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: documentation, nodemanager
>Affects Versions: 2.7.3
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-6028.01.patch
>
>
> YARN-3022 exposed container metrics from node manager, but document is 
> missing in Metrics.md.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2902) Killing a container that is localizing can orphan resources in the DOWNLOADING state

2017-01-03 Thread Wilfred Spiegelenburg (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796805#comment-15796805
 ] 

Wilfred Spiegelenburg commented on YARN-2902:
-

I was looking at a problem around removing non-existing directories in the LCE 
(native code) and saw that there was a difference in behaviour between trunk 
and branch-2.
In trunk we "ignore" a non-existing directory in {{delete_as_user()}} when the 
stat returns {{ENOENT}}; we do not do that in branch-2. I tracked it back to 
the backport of this jira into branch-2. The native code part of the change was 
not ported back into branch-2.

The question is: was that on purpose, or was it an oversight? If it was an 
oversight, does this require a new jira to backport the native code, or does it 
get handled as an addendum to this one? It has been a while since this jira was 
closed and the fix has shipped in a number of releases.
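For readers less familiar with the behaviour difference being discussed, a rough 
Java analogue (this is not the native container-executor code, only an 
illustration of "ignore a missing path on delete" versus "fail on a missing 
path"):
{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// The trunk-like behaviour corresponds to deleteIfExists (missing path is not
// an error); the branch-2-like behaviour corresponds to delete, which fails
// with NoSuchFileException when the path does not exist.
public class DeleteSemanticsSketch {
  public static void main(String[] args) throws IOException {
    Path missing = Paths.get("/tmp/does-not-exist-" + System.nanoTime());
    boolean removed = Files.deleteIfExists(missing);
    System.out.println("deleteIfExists returned " + removed);
    try {
      Files.delete(missing);
    } catch (IOException e) {
      System.out.println("delete failed: " + e);
    }
  }
}
{code}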

> Killing a container that is localizing can orphan resources in the 
> DOWNLOADING state
> 
>
> Key: YARN-2902
> URL: https://issues.apache.org/jira/browse/YARN-2902
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.5.0
>Reporter: Jason Lowe
>Assignee: Varun Saxena
> Fix For: 2.8.0, 2.7.2, 2.6.4, 3.0.0-alpha1
>
> Attachments: YARN-2902-branch-2.6.01.patch, YARN-2902.002.patch, 
> YARN-2902.03.patch, YARN-2902.04.patch, YARN-2902.05.patch, 
> YARN-2902.06.patch, YARN-2902.07.patch, YARN-2902.08.patch, 
> YARN-2902.09.patch, YARN-2902.10.patch, YARN-2902.11.patch, YARN-2902.patch
>
>
> If a container is in the process of localizing when it is stopped/killed, then 
> resources are left in the DOWNLOADING state.  If no other container comes 
> along and requests these resources, they linger around with no reference 
> counts but aren't cleaned up during normal cache cleanup scans, since cleanup 
> will never delete resources in the DOWNLOADING state even if their reference 
> count is zero.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4899) Queue metrics of SLS capacity scheduler only activated after app submit to the queue

2017-01-03 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796806#comment-15796806
 ] 

Wangda Tan commented on YARN-4899:
--

Thanks for the review from [~mshen] and the patch uploaded by [~jhung]; I missed 
this JIRA previously.

Latest patch LGTM, will commit later.

> Queue metrics of SLS capacity scheduler only activated after app submit to 
> the queue
> 
>
> Key: YARN-4899
> URL: https://issues.apache.org/jira/browse/YARN-4899
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>  Labels: oct16-medium
> Attachments: YARN-4899.1.patch, YARN-4899.2.patch
>
>
> We should start recording queue metrics since cluster start.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4899) Queue metrics of SLS capacity scheduler only activated after app submit to the queue

2017-01-03 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796798#comment-15796798
 ] 

Jonathan Hung commented on YARN-4899:
-

Uploaded patch addressing checkstyle and review.

> Queue metrics of SLS capacity scheduler only activated after app submit to 
> the queue
> 
>
> Key: YARN-4899
> URL: https://issues.apache.org/jira/browse/YARN-4899
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>  Labels: oct16-medium
> Attachments: YARN-4899.1.patch, YARN-4899.2.patch
>
>
> We should start recording queue metrics since cluster start.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4899) Queue metrics of SLS capacity scheduler only activated after app submit to the queue

2017-01-03 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-4899:

Attachment: YARN-4899.2.patch

> Queue metrics of SLS capacity scheduler only activated after app submit to 
> the queue
> 
>
> Key: YARN-4899
> URL: https://issues.apache.org/jira/browse/YARN-4899
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>  Labels: oct16-medium
> Attachments: YARN-4899.1.patch, YARN-4899.2.patch
>
>
> We should start recording queue metrics since cluster start.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5988) RM unable to start in secure setup

2017-01-03 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796742#comment-15796742
 ] 

Naganarasimha G R commented on YARN-5988:
-

Manually triggered the Jenkins build.
[~rohithsharma], offline Ajith had informed me that he had already tried the 
option you tried in the patch and that it too was failing; let's see whether 
Jenkins has any issue!

> RM unable to start in secure setup
> --
>
> Key: YARN-5988
> URL: https://issues.apache.org/jira/browse/YARN-5988
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Blocker
> Attachments: YARN-5988.01.patch, YARN-5988.02.patch, 
> YARN-5988.03.patch, YARN-5988.04.patch, YARN-5988.05.patch
>
>
> When CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION=true
> RM is unable to start



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6022) Revert changes of AbstractResourceRequest

2017-01-03 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6022:
-
Attachment: YARN-6022.003.patch

Thanks for the review, [~templedf]; the attached 003 patch addresses your comment.

> Revert changes of AbstractResourceRequest
> -
>
> Key: YARN-6022
> URL: https://issues.apache.org/jira/browse/YARN-6022
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-6022.001.patch, YARN-6022.002.patch, 
> YARN-6022.003.patch
>
>
> YARN-5774 added AbstractResourceRequest to make internal scheduler changes 
> easier, but this is not a correct approach: for example, with this change, we 
> need to make AbstractResourceRequest public/stable, and end users can 
> use it like:
> {code}
> AbstractResourceRequest request = ...
> request.setCapability(...)
> {code}
> But AbstractResourceRequest should not be visible to applications at all. 
> We need to revert it from branch-2.8 / branch-2 / trunk. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6022) Revert changes of AbstractResourceRequest

2017-01-03 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796729#comment-15796729
 ] 

Daniel Templeton commented on YARN-6022:


Looks generally good.  Thanks, [~leftnoteasy].  My only comment is that 
{{AbstractYarnScheduler.getNormalizeResource()}} should be 
{{AbstractYarnScheduler.getNormalizedResource()}}.

> Revert changes of AbstractResourceRequest
> -
>
> Key: YARN-6022
> URL: https://issues.apache.org/jira/browse/YARN-6022
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-6022.001.patch, YARN-6022.002.patch
>
>
> YARN-5774 added AbstractResourceRequest to make internal scheduler changes 
> easier, but this is not a correct approach: for example, with this change, we 
> need to make AbstractResourceRequest public/stable, and end users can 
> use it like:
> {code}
> AbstractResourceRequest request = ...
> request.setCapability(...)
> {code}
> But AbstractResourceRequest should not be visible to applications at all. 
> We need to revert it from branch-2.8 / branch-2 / trunk. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6022) Revert changes of AbstractResourceRequest

2017-01-03 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6022:
-
Attachment: YARN-6022.002.patch

Just forgot one thing: UpdateContainerRequest should be public/unstable. That 
is another problem created by YARN-5744, which forces the common fields of 
ResourceRequest/UpdateContainerRequest to have identical visibility. (002)
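For illustration, a hedged sketch of the audience/stability annotations under 
discussion, using a stand-in class rather than the actual UpdateContainerRequest 
source:
{code}
import org.apache.hadoop.classification.InterfaceAudience.Public;
import org.apache.hadoop.classification.InterfaceStability.Unstable;

// Stand-in class only: shows the "visible to applications, but the API may
// still change" combination being proposed here.
@Public
@Unstable
public abstract class UpdateContainerRequestSketch {
  public abstract int getContainerVersion();

  public abstract void setContainerVersion(int containerVersion);
}
{code}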

> Revert changes of AbstractResourceRequest
> -
>
> Key: YARN-6022
> URL: https://issues.apache.org/jira/browse/YARN-6022
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-6022.001.patch, YARN-6022.002.patch
>
>
> YARN-5774 added AbstractResourceRequest to make internal scheduler changes 
> easier, but this is not a correct approach: for example, with this change, we 
> need to make AbstractResourceRequest public/stable, and end users can 
> use it like:
> {code}
> AbstractResourceRequest request = ...
> request.setCapability(...)
> {code}
> But AbstractResourceRequest should not be visible to applications at all. 
> We need to revert it from branch-2.8 / branch-2 / trunk. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6022) Revert changes of AbstractResourceRequest

2017-01-03 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6022:
-
Attachment: YARN-6022.001.patch

Attached ver.1 patch for review.

> Revert changes of AbstractResourceRequest
> -
>
> Key: YARN-6022
> URL: https://issues.apache.org/jira/browse/YARN-6022
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-6022.001.patch
>
>
> YARN-5774 added AbstractResourceRequest to make internal scheduler changes 
> easier, but this is not a correct approach: for example, with this change, we 
> need to make AbstractResourceRequest public/stable, and end users can 
> use it like:
> {code}
> AbstractResourceRequest request = ...
> request.setCapability(...)
> {code}
> But AbstractResourceRequest should not be visible to applications at all. 
> We need to revert it from branch-2.8 / branch-2 / trunk. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6041) Opportunistic containers : Combined patch for branch-2

2017-01-03 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796705#comment-15796705
 ] 

Arun Suresh commented on YARN-6041:
---

Thanks for taking a look, [~kasha].

bq. Assuming MRJobConfig.MR_NUM_OPPORTUNISTIC_MAPS_PER_100 was not part of any 
release. Can you confirm the same?
I confirm this hasn't seen the light of day in any previous release :)

bq. Assuming the incompatible proto changes to yarn_server_common_protos and 
yarn_server_common_service_protos are okay because the previous versions didn't 
make it to a release. Can you confirm the same?
Ditto.

bq. Nit: For scheduler-related configs, should we use yarn.scheduler.* like 
other scheduler configs? And, yarn.nodemanager.* for all node-specific 
opportunistic container configs? Making sure the configs are consistent and 
simple is a good exercise before we include these in a release.
Hmm... I guess you are referring to YARN-5646. It looks like that patch was 
modified to incorporate your suggestions on the parameter names. Also, since 
this doesn't affect the scheduler per se, the RM-side changes are all 
yarn.resourcemanager.* and the NodeManager ones are yarn.nodemanager.*.

Good catch on the method names; will update the patch shortly.

> Opportunistic containers : Combined patch for branch-2 
> ---
>
> Key: YARN-6041
> URL: https://issues.apache.org/jira/browse/YARN-6041
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-6041-branch-2.001.patch, 
> YARN-6041-branch-2.002.patch
>
>
> This is a combined patch targeting branch-2 of the following JIRAs which have 
> already been committed to trunk :
> YARN-5938. Refactoring OpportunisticContainerAllocator to use 
> SchedulerRequestKey instead of Priority and other misc fixes
> YARN-5646. Add documentation and update config parameter names for scheduling 
> of OPPORTUNISTIC containers.
> YARN-5982. Simplify opportunistic container parameters and metrics.
> YARN-5918. Handle Opportunistic scheduling allocate request failure when NM 
> is lost.
> YARN-4597. Introduce ContainerScheduler and a SCHEDULED state to NodeManager 
> container lifecycle.
> YARN-5823. Update NMTokens in case of requests with only opportunistic 
> containers.
> YARN-5377. Fix 
> TestQueuingContainerManager.testKillMultipleOpportunisticContainers.
> YARN-2995. Enhance UI to show cluster resource utilization of various 
> container Execution types.
> YARN-5799. Fix Opportunistic Allocation to set the correct value of Node Http 
> Address.
> YARN-5486. Update OpportunisticContainerAllocatorAMService::allocate method 
> to handle OPPORTUNISTIC container requests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6015) AsyncDispatcher thread name can be set to improve debugging

2017-01-03 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796686#comment-15796686
 ] 

Naganarasimha G R commented on YARN-6015:
-

Thanks [~ajithshetty] for the patch.
I feel we can replace {{"RM Metrics dispatcher"}} with {{"RM Timeline 
dispatcher"}}, and I agree with [~templedf] that {{setDispatcherThreadName}} is 
not required, as the name can be set in the constructor wherever required.
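A minimal sketch of the constructor-based naming idea, with simplified stand-in 
code rather than the real AsyncDispatcher internals:
{code}
// Illustrative stand-in, not the actual AsyncDispatcher: the dispatcher name
// is fixed at construction time and applied to the event-handling thread, so
// each running instance (e.g. "RM Timeline dispatcher") is distinguishable in
// thread dumps.
public class NamedDispatcherSketch {
  private final String dispatcherName;

  public NamedDispatcherSketch(String dispatcherName) {
    this.dispatcherName = dispatcherName;
  }

  Thread createEventHandlingThread(Runnable eventLoop) {
    Thread t = new Thread(eventLoop);
    t.setName(dispatcherName);
    return t;
  }

  public static void main(String[] args) throws InterruptedException {
    NamedDispatcherSketch d =
        new NamedDispatcherSketch("RM Timeline dispatcher");
    Thread t = d.createEventHandlingThread(
        () -> System.out.println("running in " + Thread.currentThread().getName()));
    t.start();
    t.join();
  }
}
{code}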


> AsyncDispatcher thread name can be set to improve debugging
> 
>
> Key: YARN-6015
> URL: https://issues.apache.org/jira/browse/YARN-6015
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Ajith S
>Assignee: Ajith S
> Attachments: YARN-6015.01.patch
>
>
> Currently all the running instances of AsyncDispatcher have the same thread 
> name. To improve debugging, we can have an option to set the thread name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6041) Opportunistic containers : Combined patch for branch-2

2017-01-03 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796676#comment-15796676
 ] 

Karthik Kambatla commented on YARN-6041:


Thanks for putting this up on JIRA, [~asuresh]. 

I skimmed through the patch, paying attention to compatibility and user-facing 
configs. The patch looks mostly good. 
# Assuming MRJobConfig.MR_NUM_OPPORTUNISTIC_MAPS_PER_100 was not part of any 
release. Can you confirm the same? 
# Should the corresponding methods in RMContainerAllocator also be renamed to 
use percentage instead of per_100? Also, should the config name be PERCENT 
instead of PERCENTAGE to be consistent with other configs in MR? 
# Nit: For scheduler-related configs, should we use yarn.scheduler.* like other 
scheduler configs? And, yarn.nodemanager.* for all node-specific opportunistic 
container configs? Making sure the configs are consistent and simple is a good 
exercise before we include these in a release. 
# Assuming the incompatible proto changes to yarn_server_common_protos and 
yarn_server_common_service_protos are okay because the previous versions didn't 
make it to a release. Can you confirm the same? 

> Opportunistic containers : Combined patch for branch-2 
> ---
>
> Key: YARN-6041
> URL: https://issues.apache.org/jira/browse/YARN-6041
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-6041-branch-2.001.patch, 
> YARN-6041-branch-2.002.patch
>
>
> This is a combined patch targeting branch-2 of the following JIRAs which have 
> already been committed to trunk :
> YARN-5938. Refactoring OpportunisticContainerAllocator to use 
> SchedulerRequestKey instead of Priority and other misc fixes
> YARN-5646. Add documentation and update config parameter names for scheduling 
> of OPPORTUNISTIC containers.
> YARN-5982. Simplify opportunistic container parameters and metrics.
> YARN-5918. Handle Opportunistic scheduling allocate request failure when NM 
> is lost.
> YARN-4597. Introduce ContainerScheduler and a SCHEDULED state to NodeManager 
> container lifecycle.
> YARN-5823. Update NMTokens in case of requests with only opportunistic 
> containers.
> YARN-5377. Fix 
> TestQueuingContainerManager.testKillMultipleOpportunisticContainers.
> YARN-2995. Enhance UI to show cluster resource utilization of various 
> container Execution types.
> YARN-5799. Fix Opportunistic Allocation to set the correct value of Node Http 
> Address.
> YARN-5486. Update OpportunisticContainerAllocatorAMService::allocate method 
> to handle OPPORTUNISTIC container requests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6029) CapacityScheduler deadlock when ParentQueue#getQueueUserAclInfo is called by one thread and LeafQueue#assignContainers is releasing excessive reserved container by another

2017-01-03 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6029:
-
Summary: CapacityScheduler deadlock when ParentQueue#getQueueUserAclInfo is 
called by one thread and LeafQueue#assignContainers is releasing excessive 
reserved container by another thread  (was: CapacityScheduler deadlock when 
ParentQueue#getQueueUserAclInfo is called by Thread_A at the moment that 
Thread_B calls LeafQueue#assignContainers to release a reserved container)

> CapacityScheduler deadlock when ParentQueue#getQueueUserAclInfo is called by 
> one thread and LeafQueue#assignContainers is releasing excessive reserved 
> container by another thread
> --
>
> Key: YARN-6029
> URL: https://issues.apache.org/jira/browse/YARN-6029
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 2.8.0
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Critical
> Attachments: YARN-6029-branch-2.8.001.patch, YARN-6029.001.patch, 
> YARN-6029.002.patch, deadlock.jstack
>
>
> When ParentQueue#getQueueUserAclInfo is called (e.g. a client calls 
> YarnClient#getQueueAclsInfo) just at the moment that 
> LeafQueue#assignContainers is called and before it notifies the parent queue 
> to release resources (it should release a reserved container), the 
> ResourceManager can deadlock. I found this problem in our testing environment 
> for Hadoop 2.8.
> Reproducing the deadlock in chronological order:
> * 1. Thread A (ResourceManager Event Processor) calls synchronized 
> LeafQueue#assignContainers (it gets the LeafQueue instance lock of queue 
> root.a)
> * 2. Thread B (IPC Server handler) calls synchronized 
> ParentQueue#getQueueUserAclInfo (it gets the ParentQueue instance lock of 
> queue root), iterates over the children queue ACLs and is blocked when 
> calling synchronized LeafQueue#getQueueUserAclInfo (the LeafQueue instance 
> lock of queue root.a is held by Thread A)
> * 3. Thread A wants to inform the parent queue that a container is being 
> completed and is blocked when invoking the synchronized 
> ParentQueue#internalReleaseResource method (the ParentQueue instance lock of 
> queue root is held by Thread B)
> I think the synchronized modifier of LeafQueue#getQueueUserAclInfo can be 
> removed to solve this problem, since this method appears not to affect fields 
> of the LeafQueue instance.
> Attaching a patch with a UT for review.
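A compact sketch of the lock-ordering inversion described above and of the 
proposed fix, using stand-in classes rather than the real CapacityScheduler 
queue code:
{code}
// Stand-in classes only; this mirrors the lock pattern, not the real queues.
class ParentQueueSketch {
  final LeafQueueSketch child = new LeafQueueSketch(this);

  // Thread B path: holds the parent lock, then needs the leaf lock.
  synchronized void getQueueUserAclInfo() {
    child.getLeafQueueUserAclInfo();
  }

  // Needed by Thread A while it already holds the leaf lock.
  synchronized void internalReleaseResource() { }
}

class LeafQueueSketch {
  private final ParentQueueSketch parent;

  LeafQueueSketch(ParentQueueSketch parent) {
    this.parent = parent;
  }

  // Thread A path: holds the leaf lock, then needs the parent lock.
  synchronized void assignContainers() {
    parent.internalReleaseResource();
  }

  // Proposed fix: no "synchronized" here, because the method only reads state,
  // so Thread B never blocks on the leaf lock and the cycle is broken.
  void getLeafQueueUserAclInfo() { }
}
{code}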



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4899) Queue metrics of SLS capacity scheduler only activated after app submit to the queue

2017-01-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1579#comment-1579
 ] 

Hadoop QA commented on YARN-4899:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-tools/hadoop-sls: The patch generated 5 
new + 100 unchanged - 0 fixed = 105 total (was 100) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
53s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-4899 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12803092/YARN-4899.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 33df6bd90953 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8fadd69 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/14533/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-sls.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14533/testReport/ |
| modules | C: hadoop-tools/hadoop-sls U: hadoop-tools/hadoop-sls |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14533/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Queue metrics of SLS capacity scheduler only activated after app submit to 
> the queue
> 
>
> Key: YARN-4899
> URL: https://issues.apache.org/jira/browse/YARN-4899
> Project: Hadoop YARN

[jira] [Commented] (YARN-6025) Fix synchronization issues of AbstractYarnScheduler#nodeUpdate and its implementations

2017-01-03 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796658#comment-15796658
 ] 

Naganarasimha G R commented on YARN-6025:
-

Thanks for the review and commit, [~leftnoteasy], and for the additional review 
from [~sunilg].

> Fix synchronization issues of AbstractYarnScheduler#nodeUpdate and its 
> implementations
> --
>
> Key: YARN-6025
> URL: https://issues.apache.org/jira/browse/YARN-6025
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, scheduler
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-6025.002.patch, YARN-6025.01.patch
>
>
> YARN-3139 optimizes locking by introducing ReentrantReadWriteLock to remove 
> synchronized locks, but there still seem to be some synchronized locks in 
> AbstractYarnScheduler and its hierarchy because of the non-availability of 
> read/write locks in the Fifo scheduler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5714) ContainerExecutor does not order environment map

2017-01-03 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796644#comment-15796644
 ] 

Daniel Templeton commented on YARN-5714:


Coming back to [~Naganarasimha]'s original comment, wouldn't it be simpler and 
more effective to use a linked hash map to preserve the order given by the 
client?  Sorry.  I know that's the last comment you want to hear after six 
rounds on the patch. :)
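
To illustrate the suggestion above with a minimal, self-contained sketch (plain Java, 
not NodeManager code; the paths and values are made up): a {{LinkedHashMap}} replays 
the client's insertion order, while a plain {{HashMap}} iterates in an unspecified, 
hash-based order, which is how a variable can end up dumped before the variable it 
references.

{code}
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class EnvOrderSketch {

  private static void declare(Map<String, String> env) {
    // The client declares the dependency first, then the variable that references it.
    env.put("HADOOP_COMMON_HOME", "/opt/hadoop");
    env.put("LD_LIBRARY_PATH", "$HADOOP_COMMON_HOME/lib/native");
  }

  public static void main(String[] args) {
    Map<String, String> hashed = new HashMap<>();
    Map<String, String> ordered = new LinkedHashMap<>();
    declare(hashed);
    declare(ordered);

    // HashMap iteration order is unspecified, so the dependent variable may come first;
    // LinkedHashMap always replays insertion order, keeping the dependency ahead of it.
    System.out.println("HashMap order:       " + hashed.keySet());
    System.out.println("LinkedHashMap order: " + ordered.keySet());
  }
}
{code}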

> ContainerExecutor does not order environment map
> 
>
> Key: YARN-5714
> URL: https://issues.apache.org/jira/browse/YARN-5714
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.4.1, 2.5.2, 2.7.3, 2.6.4, 3.0.0-alpha1
> Environment: all (linux and windows alike)
>Reporter: Remi Catherinot
>Assignee: Remi Catherinot
>Priority: Trivial
>  Labels: oct16-medium
> Attachments: YARN-5714.001.patch, YARN-5714.002.patch, 
> YARN-5714.003.patch, YARN-5714.004.patch, YARN-5714.005.patch, 
> YARN-5714.006.patch
>
>   Original Estimate: 120h
>  Remaining Estimate: 120h
>
> When dumping the launch container script, environment variables are dumped in 
> the order used internally by the map implementation (hash based). It does not 
> take into account that some env variables may refer to each other, so some 
> variables must be declared before the ones that reference them.
> In my case, I ended up with LD_LIBRARY_PATH, which depends on 
> HADOOP_COMMON_HOME, being dumped before HADOOP_COMMON_HOME. It therefore had a 
> wrong value, so native libraries weren't loaded; jobs were running, but not at 
> their best efficiency. This is just one use case falling into that bug, but 
> I'm sure others may happen as well.
> I already have a patch running in my production environment; I estimate about 
> 5 days for packaging the patch in the right fashion for JIRA plus doing my 
> best to add tests.
> Note: the patch is not OS aware with a default empty implementation. I will 
> only implement the Unix version in a first release. I'm not used to Windows 
> env variable syntax, so it will take me more time/research for it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3866) AM-RM protocol changes to support container resizing

2017-01-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796642#comment-15796642
 ] 

Hadoop QA commented on YARN-3866:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  0m 
49s{color} | {color:red} Docker failed to build yetus/hadoop:b59b8b7. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-3866 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845452/YARN-3866-branch-2.addendum.4.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14535/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> AM-RM protocol changes to support container resizing
> 
>
> Key: YARN-3866
> URL: https://issues.apache.org/jira/browse/YARN-3866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: MENG DING
>Assignee: MENG DING
>Priority: Blocker
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: YARN-3866-YARN-1197.4.patch, 
> YARN-3866-branch-2.8.addendum.1.patch, YARN-3866-branch-2.8.addendum.1.patch, 
> YARN-3866-branch-2.8.addendum.2.patch, YARN-3866-branch-2.8.addendum.3.patch, 
> YARN-3866-branch-2.8.addendum.4.patch, YARN-3866-branch-2.addendum.4.patch, 
> YARN-3866.1.patch, YARN-3866.2.patch, YARN-3866.3.patch
>
>
> YARN-1447 and YARN-1448 are outdated. 
> This ticket deals with AM-RM Protocol changes to support container resize 
> according to the latest design in YARN-1197.
> 1) Add increase/decrease requests in AllocateRequest
> 2) Get approved increase/decrease requests from RM in AllocateResponse
> 3) Add relevant test cases



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3866) AM-RM protocol changes to support container resizing

2017-01-03 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-3866:
-
Attachment: YARN-3866-branch-2.8.addendum.4.patch

I forgot to make the change to the proto file suggested by [~asuresh] (move 
updateContainer to 16).

Attached a ver.4 patch for branch-2.8 as well (the ver.4 addendum patch for 
branch-2 already included the change).

> AM-RM protocol changes to support container resizing
> 
>
> Key: YARN-3866
> URL: https://issues.apache.org/jira/browse/YARN-3866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: MENG DING
>Assignee: MENG DING
>Priority: Blocker
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: YARN-3866-YARN-1197.4.patch, 
> YARN-3866-branch-2.8.addendum.1.patch, YARN-3866-branch-2.8.addendum.1.patch, 
> YARN-3866-branch-2.8.addendum.2.patch, YARN-3866-branch-2.8.addendum.3.patch, 
> YARN-3866-branch-2.8.addendum.4.patch, YARN-3866-branch-2.addendum.4.patch, 
> YARN-3866.1.patch, YARN-3866.2.patch, YARN-3866.3.patch
>
>
> YARN-1447 and YARN-1448 are outdated. 
> This ticket deals with AM-RM Protocol changes to support container resize 
> according to the latest design in YARN-1197.
> 1) Add increase/decrease requests in AllocateRequest
> 2) Get approved increase/decrease requests from RM in AllocateResponse
> 3) Add relevant test cases



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6029) CapacityScheduler deadlock when ParentQueue#getQueueUserAclInfo is called by Thread_A at the moment that Thread_B calls LeafQueue#assignContainers to release a reserved

2017-01-03 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796631#comment-15796631
 ] 

Wangda Tan commented on YARN-6029:
--

Committing ...

> CapacityScheduler deadlock when ParentQueue#getQueueUserAclInfo is called by 
> Thread_A at the moment that Thread_B calls LeafQueue#assignContainers to 
> release a reserved container
> --
>
> Key: YARN-6029
> URL: https://issues.apache.org/jira/browse/YARN-6029
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 2.8.0
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Critical
> Attachments: YARN-6029-branch-2.8.001.patch, YARN-6029.001.patch, 
> YARN-6029.002.patch, deadlock.jstack
>
>
> When ParentQueue#getQueueUserAclInfo is called (e.g. a client calls 
> YarnClient#getQueueAclsInfo) at the moment that LeafQueue#assignContainers is 
> called and before the parent queue is notified to release resources (it should 
> release a reserved container), the ResourceManager can deadlock. I found this 
> problem in our testing environment for Hadoop 2.8.
> Reproducing the deadlock in chronological order (a schematic sketch follows 
> after this description):
> * 1. Thread A (ResourceManager Event Processor) calls synchronized 
> LeafQueue#assignContainers (acquires the LeafQueue instance lock of queue root.a)
> * 2. Thread B (IPC Server handler) calls synchronized 
> ParentQueue#getQueueUserAclInfo (acquires the ParentQueue instance lock of queue 
> root), iterates over the children queue ACLs and is blocked when calling 
> synchronized LeafQueue#getQueueUserAclInfo (the LeafQueue instance lock of 
> queue root.a is held by Thread A)
> * 3. Thread A wants to inform the parent queue that a container is being 
> completed and is blocked when invoking the synchronized 
> ParentQueue#internalReleaseResource method (the ParentQueue instance lock of 
> queue root is held by Thread B)
> I think the synchronized modifier of LeafQueue#getQueueUserAclInfo can be 
> removed to solve this problem, since this method does not appear to modify 
> fields of the LeafQueue instance.
> Attaching a patch with a UT for review.
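
For readers, a minimal, self-contained Java sketch of the inverted lock order above 
(hypothetical classes, not the real CapacityScheduler code): Thread A takes the leaf 
lock and then needs the parent lock, while Thread B takes the parent lock and then 
needs the leaf lock.

{code}
public class QueueDeadlockSketch {

  static class ParentQueue {
    synchronized void getQueueUserAclInfo(LeafQueue leaf) {
      leaf.getQueueUserAclInfo();        // needs the leaf lock while holding the parent lock
    }
    synchronized void internalReleaseResource() { }
  }

  static class LeafQueue {
    private final ParentQueue parent;
    LeafQueue(ParentQueue parent) { this.parent = parent; }
    synchronized void assignContainers() {
      parent.internalReleaseResource();  // needs the parent lock while holding the leaf lock
    }
    synchronized void getQueueUserAclInfo() { }
  }

  public static void main(String[] args) {
    ParentQueue root = new ParentQueue();
    LeafQueue rootA = new LeafQueue(root);
    new Thread(rootA::assignContainers, "Thread-A").start();               // leaf -> parent
    new Thread(() -> root.getQueueUserAclInfo(rootA), "Thread-B").start(); // parent -> leaf
    // With unlucky timing each thread blocks on the intrinsic lock the other one holds,
    // which is exactly the cycle visible in the attached jstack.
  }
}
{code}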



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3866) AM-RM protocol changes to support container resizing

2017-01-03 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-3866:
-
Attachment: YARN-3866-branch-2.addendum.4.patch

Attached fix for branch-2.

> AM-RM protocol changes to support container resizing
> 
>
> Key: YARN-3866
> URL: https://issues.apache.org/jira/browse/YARN-3866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: MENG DING
>Assignee: MENG DING
>Priority: Blocker
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: YARN-3866-YARN-1197.4.patch, 
> YARN-3866-branch-2.8.addendum.1.patch, YARN-3866-branch-2.8.addendum.1.patch, 
> YARN-3866-branch-2.8.addendum.2.patch, YARN-3866-branch-2.8.addendum.3.patch, 
> YARN-3866-branch-2.addendum.4.patch, YARN-3866.1.patch, YARN-3866.2.patch, 
> YARN-3866.3.patch
>
>
> YARN-1447 and YARN-1448 are outdated. 
> This ticket deals with AM-RM Protocol changes to support container resize 
> according to the latest design in YARN-1197.
> 1) Add increase/decrease requests in AllocateRequest
> 2) Get approved increase/decrease requests from RM in AllocateResponse
> 3) Add relevant test cases



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6022) Revert changes of AbstractResourceRequest

2017-01-03 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan reassigned YARN-6022:


Assignee: Wangda Tan

> Revert changes of AbstractResourceRequest
> -
>
> Key: YARN-6022
> URL: https://issues.apache.org/jira/browse/YARN-6022
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
>
> YARN-5774 added AbstractResourceRequest to make internal scheduler changes 
> easier. This is not a correct approach: for example, with this change we need 
> to make AbstractResourceRequest public/stable, and end users can use it like:
> {code}
> AbstractResourceRequest request = ...
> request.setCapability(...)
> {code}
> But AbstractResourceRequest should not be visible to applications at all. 
> We need to revert it from branch-2.8 / branch-2 / trunk. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6022) Revert changes of AbstractResourceRequest

2017-01-03 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796608#comment-15796608
 ] 

Wangda Tan commented on YARN-6022:
--

bq. Why wouldn't the user write the same thing as before:
Because it is a public/stable interface, how do you stop users from doing that?

bq. I don't think it's the end of days .. 
bq. It's a little bit of an odd API change
bq. I'm fine with implementing that change as long as we can do it before it 
gets in the way of beta1.
Fine, if you think this solution is fine and the previous one is odd, I can fix it.

Assigning to myself. 

> Revert changes of AbstractResourceRequest
> -
>
> Key: YARN-6022
> URL: https://issues.apache.org/jira/browse/YARN-6022
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Priority: Blocker
>
> YARN-5774 added AbstractResourceRequest to make internal scheduler changes 
> easier. This is not a correct approach: for example, with this change we need 
> to make AbstractResourceRequest public/stable, and end users can use it like:
> {code}
> AbstractResourceRequest request = ...
> request.setCapability(...)
> {code}
> But AbstractResourceRequest should not be visible to applications at all. 
> We need to revert it from branch-2.8 / branch-2 / trunk. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6029) CapacityScheduler deadlock when ParentQueue#getQueueUserAclInfo is called by Thread_A at the moment that Thread_B calls LeafQueue#assignContainers to release a reserved

2017-01-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796606#comment-15796606
 ] 

Hadoop QA commented on YARN-6029:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
45s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 4 new + 52 unchanged - 2 fixed = 56 total (was 54) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 57s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}180m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_111 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
| JDK v1.7.0_121 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5af2af1 |
| JIRA Issue | YARN-6029 |
| JIRA Patch URL | 

[jira] [Commented] (YARN-6022) Revert changes of AbstractResourceRequest

2017-01-03 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796601#comment-15796601
 ] 

Daniel Templeton commented on YARN-6022:


{quote}According to API definition, user should write code like:
{code}AbstractResourceRequest r = new ResourceRequest(...);
r.setCapability(...){code}{quote}

I don't quite follow that.  Why wouldn't the user write the same thing as 
before:

{code}ResourceRequest r = new ResourceRequest(...);
r.setCapability(...){code}

?

I agree that internal changes should not cause external API changes, but this 
change doesn't change the API in any meaningful way.  No user code will break.  
Nothing will change about the way users write code.  The only change is that 
the compatibility accessors move from the top section of the _Method Summary_ 
section in the {{ResourceRequest}} and {{UpdateContainerRequest}} javadocs into 
the _Methods inherited from..._ part of the _Method Summary_ section.  I did a 
little digging and was unable to find any wisdom on the interwebs that labels a 
change like this as a breaking change.  (The most complete doc I found was 
https://wiki.eclipse.org/Evolving_Java-based_APIs_2.)

It's a little bit of an odd API change, but I don't think it's the end of days. 
 [~leftnoteasy]'s suggestion on YARN-5774 to change the normalize API to accept 
a {{Resource}} instead of a request sounds like a cleaner solution.  I'm fine 
with implementing that change as long as we can do it before it gets in the way 
of beta1.
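
To make that concrete, an illustrative-only sketch (invented class names, not the real 
YARN API): pulling an accessor up into an abstract parent leaves existing call sites 
untouched; only the javadoc placement of the method moves.

{code}
// Before the refactor, setCapability was declared directly on ConcreteRequest.
abstract class AbstractRequest {
  private String capability;
  public void setCapability(String capability) { this.capability = capability; }
  public String getCapability() { return capability; }
}

class ConcreteRequest extends AbstractRequest {
  // No accessors declared here any more; they are inherited from AbstractRequest.
}

class ExistingClientCode {
  void submit() {
    // Exactly what callers wrote before still compiles and behaves the same.
    ConcreteRequest r = new ConcreteRequest();
    r.setCapability("4096 MB, 2 vcores");
  }
}
{code}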

> Revert changes of AbstractResourceRequest
> -
>
> Key: YARN-6022
> URL: https://issues.apache.org/jira/browse/YARN-6022
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Priority: Blocker
>
> YARN-5774 added AbstractResourceRequest to make internal scheduler changes 
> easier. This is not a correct approach: for example, with this change we need 
> to make AbstractResourceRequest public/stable, and end users can use it like:
> {code}
> AbstractResourceRequest request = ...
> request.setCapability(...)
> {code}
> But AbstractResourceRequest should not be visible to applications at all. 
> We need to revert it from branch-2.8 / branch-2 / trunk. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6050) AMs can't be scheduled on racks or nodes

2017-01-03 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796565#comment-15796565
 ] 

Wangda Tan commented on YARN-6050:
--

Thanks [~rkanter], I can provide a separate CS test once this patch goes in. 

> AMs can't be scheduled on racks or nodes
> 
>
> Key: YARN-6050
> URL: https://issues.apache.org/jira/browse/YARN-6050
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-6050.001.patch
>
>
> Yarn itself supports rack/node aware scheduling for AMs; however, there 
> currently are two problems:
> # To specify hard or soft rack/node requests, you have to specify more than 
> one {{ResourceRequest}}.  For example, if you want to schedule an AM only on 
> "rackA", you have to create two {{ResourceRequest}}, like this:
> {code}
> ResourceRequest.newInstance(PRIORITY, ANY, CAPABILITY, NUM_CONTAINERS, false);
> ResourceRequest.newInstance(PRIORITY, "rackA", CAPABILITY, NUM_CONTAINERS, 
> true);
> {code}
> The problem is that the Yarn API doesn't actually allow you to specify more 
> than one {{ResourceRequest}} in the {{ApplicationSubmissionContext}}.  The 
> current behavior is to either build one from {{getResource}} or directly from 
> {{getAMContainerResourceRequest}}, depending on if 
> {{getAMContainerResourceRequest}} is null or not.  We'll need to add a third 
> method, say {{getAMContainerResourceRequests}}, which takes a list of 
> {{ResourceRequest}} so that clients can specify the multiple resource 
> requests.
> # There are some places where things are hardcoded to overwrite what the 
> client specifies.  These are pretty straightforward to fix.
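
As an aside, a hedged sketch of how a client might hand both requests to the proposed 
list-based API; {{setAMContainerResourceRequests}} and the submission-context variable 
are assumptions based on the description above (the real method name and signature are 
up to the patch), while the {{ResourceRequest}} factory calls mirror the example given 
there.

{code}
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

public class AmPlacementSketch {
  public static void main(String[] args) {
    Priority priority = Priority.newInstance(0);
    Resource capability = Resource.newInstance(1024, 1);

    // The same pair of requests as in the description: relaxed at ANY, pinned to rackA.
    List<ResourceRequest> amRequests = Arrays.asList(
        ResourceRequest.newInstance(priority, ResourceRequest.ANY, capability, 1, false),
        ResourceRequest.newInstance(priority, "rackA", capability, 1, true));

    // Hypothetical new setter on ApplicationSubmissionContext accepting the whole list:
    // submissionContext.setAMContainerResourceRequests(amRequests);
    System.out.println(amRequests);
  }
}
{code}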



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3955) Support for priority ACLs in CapacityScheduler

2017-01-03 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796563#comment-15796563
 ] 

Wangda Tan commented on YARN-3955:
--

Thanks [~sunilg], some additional comments:

1) Common logic of checkAccess / getDefaultPriority can be merged further: both 
can get approvedPriority first.
2) As I commented above, are the changes to capacity-scheduler.xml related to the 
patch? I cannot find which module uses {{acl_access_priority}} in the 
configuration. If not, could you add a correct default value?

3) CapacityScheduler:
- updateApplicationPriority should hold the write lock?
- similarly, checkAndGetApplicationPriority should hold the read lock? (see the 
sketch after these comments)
- checkAndGetApplicationPriority: when an app's priority is set to a negative 
value, I think we should use 0 instead of max. Thoughts?

4) AppPriorityACLsMgr:
- addPrioirityACLs, should we do "replace" instead of "add" to acl groups? If 
it is not intentional, could you add a test to make sure update of acls works? 
(like change from \[1,2,3\] to \[1,3,4\])
- getPriorityPerUserACL -> getMappedPriorityAclForUGI. 

5) As I mentioned before, remove readlock of LQ#getPriorityAcls, final should 
be enough.

6) YarnScheduler: why does the newly added method have SettableFuture in its 
parameters? It doesn't look very clean ...
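
A generic sketch of the locking pattern behind comment 3 (hypothetical class, not 
CapacityScheduler code): the mutating method takes the write lock, the read-only 
lookup takes the read lock, and a negative priority is clamped to 0 rather than max.

{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;

class ApplicationPrioritySketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private int priority = 0;

  void updateApplicationPriority(int newPriority) {
    lock.writeLock().lock();
    try {
      priority = Math.max(0, newPriority);  // clamp a negative priority to 0
    } finally {
      lock.writeLock().unlock();
    }
  }

  int checkAndGetApplicationPriority() {
    lock.readLock().lock();
    try {
      return priority;
    } finally {
      lock.readLock().unlock();
    }
  }
}
{code}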


> Support for priority ACLs in CapacityScheduler
> --
>
> Key: YARN-3955
> URL: https://issues.apache.org/jira/browse/YARN-3955
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: ApplicationPriority-ACL.pdf, 
> ApplicationPriority-ACLs-v2.pdf, YARN-3955.0001.patch, YARN-3955.0002.patch, 
> YARN-3955.0003.patch, YARN-3955.0004.patch, YARN-3955.0005.patch, 
> YARN-3955.0006.patch, YARN-3955.0007.patch, YARN-3955.v0.patch, 
> YARN-3955.v1.patch, YARN-3955.wip1.patch
>
>
> Support will be added for User-level access permission to use different 
> application-priorities. This is to avoid situations where all users try 
> running max priority in the cluster and thus degrading the value of 
> priorities.
> Access Control Lists can be set per priority level within each queue. Below 
> is an example configuration that can be added in capacity scheduler 
> configuration
> file for each Queue level.
> yarn.scheduler.capacity.root...acl=user1,user2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5923) Unable to access logs for a running application if YARN_ACL_ENABLE is enabled

2017-01-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796558#comment-15796558
 ] 

Hudson commented on YARN-5923:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11068 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11068/])
YARN-5923. Unable to access logs for a running application if (jdu: rev 
8fadd69047143c9c389cc09ca24100b5f90f79d2)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/WebServer.java


> Unable to access logs for a running application if YARN_ACL_ENABLE is enabled
> -
>
> Key: YARN-5923
> URL: https://issues.apache.org/jira/browse/YARN-5923
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.9.0
>
> Attachments: YARN-5923.1.patch, YARN-5923.2.patch, YARN-5923.3.patch
>
>
> 2016-11-07 23:20:41,423 WARN  webapp.GenericExceptionHandler 
> (GenericExceptionHandler.java:toResponse(98)) - INTERNAL_SERVER_ERROR
> javax.ws.rs.WebApplicationException: 
> org.apache.hadoop.yarn.exceptions.YarnException: User [dr.who] is not 
> authorized to view the logs for application application_1478218837976_0068
> at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices.getContainerLogsInfo(NMWebServices.java:226)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:886)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:834)
> at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebAppFilter.doFilter(NMWebAppFilter.java:72)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:795)
> at 
> com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163)
> at 
> com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58)
> at 
> com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118)
> at 
> com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.security.http.XFrameOptionsFilter.doFilter(XFrameOptionsFilter.java:57)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> 

[jira] [Commented] (YARN-6025) Fix synchronization issues of AbstractYarnScheduler#nodeUpdate and its implementations

2017-01-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796557#comment-15796557
 ] 

Hudson commented on YARN-6025:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11068 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11068/])
YARN-6025. Fix synchronization issues of (wangda: rev 
f69a107aeccc68ca1085a7be8093d36b2f45eaa1)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java


> Fix synchronization issues of AbstractYarnScheduler#nodeUpdate and its 
> implementations
> --
>
> Key: YARN-6025
> URL: https://issues.apache.org/jira/browse/YARN-6025
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, scheduler
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-6025.002.patch, YARN-6025.01.patch
>
>
> YARN-3139 optimized the locks by introducing ReentrantReadWriteLock to remove 
> synchronized locks, but there still seem to be some synchronized locks in 
> AbstractYarnScheduler and its hierarchy because read/write locks are not 
> available in the Fifo scheduler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6050) AMs can't be scheduled on racks or nodes

2017-01-03 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796493#comment-15796493
 ] 

Robert Kanter commented on YARN-6050:
-

That's good to know about the protobufs.  I'll work on those changes.

As for a CS test, I looked into that before, but I found that part of the code 
pretty confusing, and most things seemed to be mocked, which makes it harder to 
verify that the AM container got assigned to the correct Node like in the FS 
tests I added.  I'll take another crack at this, but if you have any 
suggestions or know of a test that does something similar, please let me know.

> AMs can't be scheduled on racks or nodes
> 
>
> Key: YARN-6050
> URL: https://issues.apache.org/jira/browse/YARN-6050
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-6050.001.patch
>
>
> Yarn itself supports rack/node aware scheduling for AMs; however, there 
> currently are two problems:
> # To specify hard or soft rack/node requests, you have to specify more than 
> one {{ResourceRequest}}.  For example, if you want to schedule an AM only on 
> "rackA", you have to create two {{ResourceRequest}}, like this:
> {code}
> ResourceRequest.newInstance(PRIORITY, ANY, CAPABILITY, NUM_CONTAINERS, false);
> ResourceRequest.newInstance(PRIORITY, "rackA", CAPABILITY, NUM_CONTAINERS, 
> true);
> {code}
> The problem is that the Yarn API doesn't actually allow you to specify more 
> than one {{ResourceRequest}} in the {{ApplicationSubmissionContext}}.  The 
> current behavior is to either build one from {{getResource}} or directly from 
> {{getAMContainerResourceRequest}}, depending on if 
> {{getAMContainerResourceRequest}} is null or not.  We'll need to add a third 
> method, say {{getAMContainerResourceRequests}}, which takes a list of 
> {{ResourceRequest}} so that clients can specify the multiple resource 
> requests.
> # There are some places where things are hardcoded to overwrite what the 
> client specifies.  These are pretty straightforward to fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3866) AM-RM protocol changes to support container resizing

2017-01-03 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796470#comment-15796470
 ] 

Wangda Tan commented on YARN-3866:
--

Thanks [~djp] for reviewing this!

Once you have committed the patch to 2.8, please keep this open and keep it as a 
blocker for 2.9.0 / 3.0.0; I need to add an addendum patch to make sure 
2.9.0/3.0.0 is compatible with older releases.

> AM-RM protocol changes to support container resizing
> 
>
> Key: YARN-3866
> URL: https://issues.apache.org/jira/browse/YARN-3866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: MENG DING
>Assignee: MENG DING
>Priority: Blocker
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: YARN-3866-YARN-1197.4.patch, 
> YARN-3866-branch-2.8.addendum.1.patch, YARN-3866-branch-2.8.addendum.1.patch, 
> YARN-3866-branch-2.8.addendum.2.patch, YARN-3866-branch-2.8.addendum.3.patch, 
> YARN-3866.1.patch, YARN-3866.2.patch, YARN-3866.3.patch
>
>
> YARN-1447 and YARN-1448 are outdated. 
> This ticket deals with AM-RM Protocol changes to support container resize 
> according to the latest design in YARN-1197.
> 1) Add increase/decrease requests in AllocateRequest
> 2) Get approved increase/decrease requests from RM in AllocateResponse
> 3) Add relevant test cases



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5923) Unable to access logs for a running application if YARN_ACL_ENABLE is enabled

2017-01-03 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796472#comment-15796472
 ] 

Junping Du commented on YARN-5923:
--

Thanks [~xgong] for incorporating my comments. Latest patch LGTM. +1. 
Committing it now.

> Unable to access logs for a running application if YARN_ACL_ENABLE is enabled
> -
>
> Key: YARN-5923
> URL: https://issues.apache.org/jira/browse/YARN-5923
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5923.1.patch, YARN-5923.2.patch, YARN-5923.3.patch
>
>
> 2016-11-07 23:20:41,423 WARN  webapp.GenericExceptionHandler 
> (GenericExceptionHandler.java:toResponse(98)) - INTERNAL_SERVER_ERROR
> javax.ws.rs.WebApplicationException: 
> org.apache.hadoop.yarn.exceptions.YarnException: User [dr.who] is not 
> authorized to view the logs for application application_1478218837976_0068
> at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices.getContainerLogsInfo(NMWebServices.java:226)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:886)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:834)
> at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebAppFilter.doFilter(NMWebAppFilter.java:72)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:795)
> at 
> com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163)
> at 
> com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58)
> at 
> com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118)
> at 
> com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.security.http.XFrameOptionsFilter.doFilter(XFrameOptionsFilter.java:57)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1294)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> 

[jira] [Comment Edited] (YARN-3866) AM-RM protocol changes to support container resizing

2017-01-03 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796464#comment-15796464
 ] 

Junping Du edited comment on YARN-3866 at 1/3/17 10:54 PM:
---

The failed unit tests shouldn't be related to the patch. Most of the failures are 
commonly seen; the only one I hadn't noticed before is 
TestCapacitySchedulerSurgicalPreemption, which is tracked in YARN-5973. I 
verified locally whether these unit test failures also occur without the patch. 
Here are the results:
| Tests | Without Patch | With Patch  |
| TestCapacitySchedulerSurgicalPreemption   | FAILED| FAILED  |
| TestAMAuthorization   | FAILED| FAILED  |
| TestFairScheduler | SUCCEED   | SUCCEED |
| TestClientRMTokens| FAILED| FAILED  |
| TestAMRMProxy | SUCCEED   | SUCCEED |
| TestGetGroups | FAILED| FAILED  | 


From the JACC report, we can see that the previously incompatible API issues 
involved in YARN-3866 are resolved. The javac warnings are just annoying because 
we are adding back deprecated APIs for better compatibility.
I am planning to go ahead and commit the latest addendum patch before the end of 
today. Please comment if you have any concerns.


was (Author: djp):
The failed unit tests shouldn't be related to the patch. Most of the failures are 
commonly seen; the only one I hadn't noticed before is 
TestCapacitySchedulerSurgicalPreemption, which is tracked in YARN-5973. I 
verified locally whether these unit test failures also occur without the patch. 
Here are the results:
| Tests | Without Patch | With Patch  |
| - |:-:| ---:|
| TestCapacitySchedulerSurgicalPreemption   | FAILED| FAILED  |
| TestAMAuthorization   | FAILED| FAILED  |
| TestFairScheduler | SUCCEED   | SUCCEED |
| TestClientRMTokens| FAILED| FAILED  |
| TestAMRMProxy | SUCCEED   | SUCCEED |
| TestGetGroups | FAILED| FAILED  | 


From the JACC report, we can see that the previously incompatible API issues 
involved in YARN-3866 are resolved. The javac warnings are just annoying because 
we are adding back deprecated APIs for better compatibility.
I am planning to go ahead and commit the latest addendum patch before the end of 
today. Please comment if you have any concerns.

> AM-RM protocol changes to support container resizing
> 
>
> Key: YARN-3866
> URL: https://issues.apache.org/jira/browse/YARN-3866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: MENG DING
>Assignee: MENG DING
>Priority: Blocker
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: YARN-3866-YARN-1197.4.patch, 
> YARN-3866-branch-2.8.addendum.1.patch, YARN-3866-branch-2.8.addendum.1.patch, 
> YARN-3866-branch-2.8.addendum.2.patch, YARN-3866-branch-2.8.addendum.3.patch, 
> YARN-3866.1.patch, YARN-3866.2.patch, YARN-3866.3.patch
>
>
> YARN-1447 and YARN-1448 are outdated. 
> This ticket deals with AM-RM Protocol changes to support container resize 
> according to the latest design in YARN-1197.
> 1) Add increase/decrease requests in AllocateRequest
> 2) Get approved increase/decrease requests from RM in AllocateResponse
> 3) Add relevant test cases



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3866) AM-RM protocol changes to support container resizing

2017-01-03 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796464#comment-15796464
 ] 

Junping Du commented on YARN-3866:
--

The failed unit tests shouldn't be related to the patch. Most of the failures are 
commonly seen; the only one I hadn't noticed before is 
TestCapacitySchedulerSurgicalPreemption, which is tracked in YARN-5973. I 
verified locally whether these unit test failures also occur without the patch. 
Here are the results:
| Tests | Without Patch | With Patch  |
| - |:-:| ---:|
| TestCapacitySchedulerSurgicalPreemption   | FAILED| FAILED  |
| TestAMAuthorization   | FAILED| FAILED  |
| TestFairScheduler | SUCCEED   | SUCCEED |
| TestClientRMTokens| FAILED| FAILED  |
| TestAMRMProxy | SUCCEED   | SUCCEED |
| TestGetGroups | FAILED| FAILED  | 


From the JACC report, we can see that the previously incompatible API issues 
involved in YARN-3866 are resolved. The javac warnings are just annoying because 
we are adding back deprecated APIs for better compatibility.
I am planning to go ahead and commit the latest addendum patch before the end of 
today. Please comment if you have any concerns.

> AM-RM protocol changes to support container resizing
> 
>
> Key: YARN-3866
> URL: https://issues.apache.org/jira/browse/YARN-3866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: MENG DING
>Assignee: MENG DING
>Priority: Blocker
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: YARN-3866-YARN-1197.4.patch, 
> YARN-3866-branch-2.8.addendum.1.patch, YARN-3866-branch-2.8.addendum.1.patch, 
> YARN-3866-branch-2.8.addendum.2.patch, YARN-3866-branch-2.8.addendum.3.patch, 
> YARN-3866.1.patch, YARN-3866.2.patch, YARN-3866.3.patch
>
>
> YARN-1447 and YARN-1448 are outdated. 
> This ticket deals with AM-RM Protocol changes to support container resize 
> according to the latest design in YARN-1197.
> 1) Add increase/decrease requests in AllocateRequest
> 2) Get approved increase/decrease requests from RM in AllocateResponse
> 3) Add relevant test cases



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6025) Fix synchronization issues of AbstractYarnScheduler#nodeUpdate and its implementations

2017-01-03 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6025:
-
Summary: Fix synchronization issues of AbstractYarnScheduler#nodeUpdate and 
its implementations  (was: Issues in synchronization in AbstractYarnScheduler & 
its hierarchy)

> Fix synchronization issues of AbstractYarnScheduler#nodeUpdate and its 
> implementations
> --
>
> Key: YARN-6025
> URL: https://issues.apache.org/jira/browse/YARN-6025
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, scheduler
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: YARN-6025.002.patch, YARN-6025.01.patch
>
>
> YARN-3139 optimized the locks by introducing ReentrantReadWriteLock to remove 
> synchronized locks, but there still seem to be some synchronized locks in 
> AbstractYarnScheduler and its hierarchy because read/write locks are not 
> available in the Fifo scheduler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5923) Unable to access logs for a running application if YARN_ACL_ENABLE is enabled

2017-01-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796434#comment-15796434
 ] 

Hadoop QA commented on YARN-5923:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
53s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5923 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845437/YARN-5923.3.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3a73c20550f7 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ebdd2e0 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14532/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14532/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Unable to access logs for a running application if YARN_ACL_ENABLE is enabled
> -
>
> Key: YARN-5923
> URL: https://issues.apache.org/jira/browse/YARN-5923
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>   

[jira] [Commented] (YARN-6025) Issues in synchronization in AbstractYarnScheduler & its hierarchy

2017-01-03 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796428#comment-15796428
 ] 

Wangda Tan commented on YARN-6025:
--

The test failure is not related; it is tracked by YARN-5548. Committing it now.

> Issues in synchronization in AbstractYarnScheduler & its hierarchy
> --
>
> Key: YARN-6025
> URL: https://issues.apache.org/jira/browse/YARN-6025
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, scheduler
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: YARN-6025.002.patch, YARN-6025.01.patch
>
>
> YARN-3139 optimized the locks by introducing ReentrantReadWriteLock to remove 
> synchronized locks, but there still seem to be some synchronized locks in 
> AbstractYarnScheduler and its hierarchy because read/write locks are not 
> available in the Fifo scheduler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6050) AMs can't be scheduled on racks or nodes

2017-01-03 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796422#comment-15796422
 ] 

Wangda Tan commented on YARN-6050:
--

[~rkanter],

Makes sense to me. Reconsidering this, overloading the semantics of 
ResourceRequest is not a good idea.

In the 001 patch we have two fields for the AM request inside the ASC, and it 
added a new field to the proto as well. I think we can improve this further:

1. It's better to deprecate the original get/set single-ResourceRequest API, to 
force applications to move to the new API. We can also avoid having our 
internal scheduler implementation use the field.

2. We don't have to add a new field to the proto; according to the PB 
compatibility doc: 
https://developers.google.com/protocol-buffers/docs/proto#backwards-compatibility,
changing from optional to repeated is compatible:
bq. optional is compatible with repeated. Given serialized data of a repeated 
field as input, clients that expect this field to be optional will take the 
last input value if it's a primitive type field or merge all input elements if 
it's a message type field.

We can add small checks to ResourceRequestPBImpl to handle this. The rest of the 
logic generally looks good to me, and it would be good to add a CS test as well. 
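
A hedged sketch of the kind of "small check" mentioned in point 2 (generic 
PBImpl-style code, not the actual patch): once the underlying proto field is 
repeated, the old single-value getter can stay compatible by returning the last 
element, mirroring the optional/repeated rule quoted above from the protobuf docs.

{code}
import java.util.List;

class AmRequestCompatSketch<T> {
  private List<T> requests;  // backed by the now-repeated proto field

  void setAMContainerResourceRequests(List<T> requests) {
    this.requests = requests;
  }

  // Old optional-style accessor kept for source compatibility.
  T getAMContainerResourceRequest() {
    if (requests == null || requests.isEmpty()) {
      return null;
    }
    return requests.get(requests.size() - 1);  // "take the last input value"
  }
}
{code}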

> AMs can't be scheduled on racks or nodes
> 
>
> Key: YARN-6050
> URL: https://issues.apache.org/jira/browse/YARN-6050
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-6050.001.patch
>
>
> Yarn itself supports rack/node aware scheduling for AMs; however, there 
> currently are two problems:
> # To specify hard or soft rack/node requests, you have to specify more than 
> one {{ResourceRequest}}.  For example, if you want to schedule an AM only on 
> "rackA", you have to create two {{ResourceRequest}}, like this:
> {code}
> ResourceRequest.newInstance(PRIORITY, ANY, CAPABILITY, NUM_CONTAINERS, false);
> ResourceRequest.newInstance(PRIORITY, "rackA", CAPABILITY, NUM_CONTAINERS, 
> true);
> {code}
> The problem is that the Yarn API doesn't actually allow you to specify more 
> than one {{ResourceRequest}} in the {{ApplicationSubmissionContext}}.  The 
> current behavior is to either build one from {{getResource}} or directly from 
> {{getAMContainerResourceRequest}}, depending on if 
> {{getAMContainerResourceRequest}} is null or not.  We'll need to add a third 
> method, say {{getAMContainerResourceRequests}}, which takes a list of 
> {{ResourceRequest}} so that clients can specify the multiple resource 
> requests.
> # There are some places where things are hardcoded to overwrite what the 
> client specifies.  These are pretty straightforward to fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6050) AMs can't be scheduled on racks or nodes

2017-01-03 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796378#comment-15796378
 ] 

Robert Kanter commented on YARN-6050:
-

The API is already complicated enough.  If we overload a {{ResourceRequest}} to 
carry this functionality when used by the client, I'm concerned that's going to 
make things even more confusing for clients.  I believe container requests also 
work this way, so this approach is also more consistent with how those work.

> AMs can't be scheduled on racks or nodes
> 
>
> Key: YARN-6050
> URL: https://issues.apache.org/jira/browse/YARN-6050
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-6050.001.patch
>
>
> Yarn itself supports rack/node aware scheduling for AMs; however, there 
> currently are two problems:
> # To specify hard or soft rack/node requests, you have to specify more than 
> one {{ResourceRequest}}.  For example, if you want to schedule an AM only on 
> "rackA", you have to create two {{ResourceRequest}}, like this:
> {code}
> ResourceRequest.newInstance(PRIORITY, ANY, CAPABILITY, NUM_CONTAINERS, false);
> ResourceRequest.newInstance(PRIORITY, "rackA", CAPABILITY, NUM_CONTAINERS, 
> true);
> {code}
> The problem is that the Yarn API doesn't actually allow you to specify more 
> than one {{ResourceRequest}} in the {{ApplicationSubmissionContext}}.  The 
> current behavior is to either build one from {{getResource}} or directly from 
> {{getAMContainerResourceRequest}}, depending on if 
> {{getAMContainerResourceRequest}} is null or not.  We'll need to add a third 
> method, say {{getAMContainerResourceRequests}}, which takes a list of 
> {{ResourceRequest}} so that clients can specify the multiple resource 
> requests.
> # There are some places where things are hardcoded to overwrite what the 
> client specifies.  These are pretty straightforward to fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5959) RM changes to support change of container ExecutionType

2017-01-03 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796368#comment-15796368
 ] 

Wangda Tan commented on YARN-5959:
--

Thanks [~asuresh] for updating the patch.

I think the latest patch is much better than the previous one, but I found we 
can merge the logic of execution-type update and resizing even more, and we can 
make the call flow of allocation clearer:

*High level comments:*

1) Can we unify the call flow of OpportunisticContainerAllocatorAMService 
(OCAAMS) and the normal AMS?
Currently, OCAAMS gets the SchedulerApplicationAttempt and then directly updates 
the request in ContainerUpdateContext, whereas increase/decrease requests are 
passed to Scheduler.allocate and set through 
AppSchedulingInfo#updateResourceRequests.

I think the two could be merged: since we have a unified field of 
UpdateContainerRequest inside AllocateRequest, and we call 
RMServerUtils#validateAndSplitUpdateResourceRequests from 
ApplicationMasterService, we should be able to split 
increase/decrease/demote/promote requests. Increase / promote requests would 
then be converted to normal resource requests and added to AppSchedulingInfo (via 
Scheduler#allocate).

Inside AppSchedulingInfo, it could use ContainerUpdateContext to maintain all 
container-update-related requests.

2) Similarly, it's better to unify the call flow for fetching the container 
update result as well:

Currently it calls appAttempt.pullContainersWithUpdatedExecType; I think it 
should be merged into FiCaSchedulerApp#getAllocation. And actually, we should 
remove increasedContainers/decreasedContainers, and instead have a 
{{List}} to capture all kinds of 
container updates.

*Summary of my opinions about the call flow:*

a. Sending an update-container request to the scheduler:
For increase/decrease of a container: ApplicationMasterService calls 
scheduler#allocate (same as today).
For promotion/demotion of a container: 
OCAAMS splits the request into two parts:
- opportunistic container allocation requests go to 
OpportunisticContainerAllocator (same as today)
- UpdateContainerRequests (promotion and demotion) go to 
ApplicationMasterService 

Minor refactoring may be required for Scheduler#allocate's parameters, since we 
can merge all container-update requests into a single one, like 
{{List}}.

b. Getting the update-container response from the scheduler: 
For increase/decrease/promotion/demotion containers alike, they will be set on 
the {{Allocation}} by SchedulerApplicationAttempt implementations.

*Other suggestions*
3) Can we avoid changing AbstractResourceRequest (assuming we may remove it in 
YARN-6022)? Adding an {{updatedContainerId}} to {{ResourceRequest}} is very 
confusing to applications, because they may set 
ResourceRequest#updatedContainerId to update a container, but it won't work.
I believe we already have enough context (like the ContainerId inside 
SchedulerRequestKey); could you reconsider this part? 

4) ContainerUpdateType: it's better to have separate promotion / demotion types 
instead of a single "update-execution-type".

Thoughts?

> RM changes to support change of container ExecutionType
> ---
>
> Key: YARN-5959
> URL: https://issues.apache.org/jira/browse/YARN-5959
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5959-YARN-5085.001.patch, 
> YARN-5959-YARN-5085.002.patch, YARN-5959.combined.001.patch, 
> YARN-5959.wip.002.patch, YARN-5959.wip.003.patch, YARN-5959.wip.patch
>
>
> RM side changes to allow an AM to ask for change of ExecutionType.
> Currently, there are two cases:
> # *Promotion* : OPPORTUNISTIC to GUARANTEED.
> # *Demotion* : GUARANTEED to OPPORTUNISTIC.
> This is similar in YARN-1197 which allows for change in Container resources. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5923) Unable to access logs for a running application if YARN_ACL_ENABLE is enabled

2017-01-03 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796337#comment-15796337
 ] 

Xuan Gong commented on YARN-5923:
-

Thanks for the review, [~djp].
Attached a new patch addressing the comments.

> Unable to access logs for a running application if YARN_ACL_ENABLE is enabled
> -
>
> Key: YARN-5923
> URL: https://issues.apache.org/jira/browse/YARN-5923
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5923.1.patch, YARN-5923.2.patch, YARN-5923.3.patch
>
>
> 2016-11-07 23:20:41,423 WARN  webapp.GenericExceptionHandler 
> (GenericExceptionHandler.java:toResponse(98)) - INTERNAL_SERVER_ERROR
> javax.ws.rs.WebApplicationException: 
> org.apache.hadoop.yarn.exceptions.YarnException: User [dr.who] is not 
> authorized to view the logs for application application_1478218837976_0068
> at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices.getContainerLogsInfo(NMWebServices.java:226)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:886)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:834)
> at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebAppFilter.doFilter(NMWebAppFilter.java:72)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:795)
> at 
> com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163)
> at 
> com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58)
> at 
> com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118)
> at 
> com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.security.http.XFrameOptionsFilter.doFilter(XFrameOptionsFilter.java:57)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1294)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> 

[jira] [Updated] (YARN-5923) Unable to access logs for a running application if YARN_ACL_ENABLE is enabled

2017-01-03 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-5923:

Attachment: YARN-5923.3.patch

> Unable to access logs for a running application if YARN_ACL_ENABLE is enabled
> -
>
> Key: YARN-5923
> URL: https://issues.apache.org/jira/browse/YARN-5923
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5923.1.patch, YARN-5923.2.patch, YARN-5923.3.patch
>
>
> 2016-11-07 23:20:41,423 WARN  webapp.GenericExceptionHandler 
> (GenericExceptionHandler.java:toResponse(98)) - INTERNAL_SERVER_ERROR
> javax.ws.rs.WebApplicationException: 
> org.apache.hadoop.yarn.exceptions.YarnException: User [dr.who] is not 
> authorized to view the logs for application application_1478218837976_0068
> at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices.getContainerLogsInfo(NMWebServices.java:226)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:886)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:834)
> at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebAppFilter.doFilter(NMWebAppFilter.java:72)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:795)
> at 
> com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163)
> at 
> com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58)
> at 
> com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118)
> at 
> com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.security.http.XFrameOptionsFilter.doFilter(XFrameOptionsFilter.java:57)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1294)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> at 
> 

[jira] [Commented] (YARN-6025) Issues in synchronization in AbstractYarnScheduler & its hierarchy

2017-01-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796306#comment-15796306
 ] 

Hadoop QA commented on YARN-6025:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 38m 41s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6025 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845244/YARN-6025.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2e8598881bf8 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 591fb15 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/14531/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14531/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14531/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Issues in synchronization in AbstractYarnScheduler & its hierarchy
> 

[jira] [Commented] (YARN-5714) ContainerExecutor does not order environment map

2017-01-03 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796191#comment-15796191
 ] 

Miklos Szegedi commented on YARN-5714:
--

The change looks good to me. (non-binding) Thank you, [~rcatherinot]!


> ContainerExecutor does not order environment map
> 
>
> Key: YARN-5714
> URL: https://issues.apache.org/jira/browse/YARN-5714
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.4.1, 2.5.2, 2.7.3, 2.6.4, 3.0.0-alpha1
> Environment: all (linux and windows alike)
>Reporter: Remi Catherinot
>Assignee: Remi Catherinot
>Priority: Trivial
>  Labels: oct16-medium
> Attachments: YARN-5714.001.patch, YARN-5714.002.patch, 
> YARN-5714.003.patch, YARN-5714.004.patch, YARN-5714.005.patch, 
> YARN-5714.006.patch
>
>   Original Estimate: 120h
>  Remaining Estimate: 120h
>
> when dumping the launch container script, environment variables are dumped 
> based on the order internally used by the map implementation (hash based). It 
> does not take into consideration that some env variables may refer to each 
> other, and so some env variables must be declared before those 
> referencing them.
> In my case, i ended up having LD_LIBRARY_PATH which was depending on 
> HADOOP_COMMON_HOME being dumped before HADOOP_COMMON_HOME. Thus it had a 
> wrong value and so native libraries weren't loaded. jobs were running but not 
> at their best efficiency. This is just a use case falling into that bug, but 
> i'm sure others may happen as well.
> I already have a patch running in my production environment, i just estimate 
> to 5 days for packaging the patch in the right fashion for JIRA + try my best 
> to add tests.
> Note : the patch is not OS aware with a default empty implementation. I will 
> only implement the unix version on a 1st release. I'm not used to windows env 
> variables syntax so it will take me more time/research for it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6022) Revert changes of AbstractResourceRequest

2017-01-03 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796153#comment-15796153
 ] 

Wangda Tan commented on YARN-6022:
--

[~kasha] / [~asuresh], 

Missed your last comment somehow; apologies for the late reply. Since this is 
not included in branch-2.8, we should have enough time to change it.

I want to revert the change for two reasons:
1) It makes a stable API inherit from an unstable API, which is very confusing. 
Even if we don't want users to use AbstractRR, it is unavoidable. 
According to the API definition, a user could write code like:
{code}
AbstractResourceRequest r = new ResourceRequest(...);
r.setCapability(...);
{code}

Apparently we don't want users to do things like this, because AbstractRR could 
change. 

2) Most importantly, it is not necessary. YARN-5774 only requires that the scheduler 
return a normalized resource instead of updating the resource inside a given resource 
request. To me, we should avoid changing a user-facing API because of an internal 
implementation detail; this change is completely avoidable. I suggest keeping the 
original API. 
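
To illustrate point 2, here is a minimal sketch of returning a normalized copy of the 
request instead of mutating it. The helper name and its parameters are assumptions for 
illustration, not the actual YARN-5774 code; {{Resources#normalize}} is used here only 
as one plausible normalization step.
{code}
// Sketch only: normalize the ask into a new ResourceRequest so the caller's
// object is left untouched; rc/min/max/increment are whatever the scheduler
// already uses for normalization.
ResourceRequest normalizeAsk(ResourceRequest ask, ResourceCalculator rc,
    Resource min, Resource max, Resource increment) {
  Resource normalized = Resources.normalize(rc, ask.getCapability(), min, max, increment);
  return ResourceRequest.newInstance(ask.getPriority(), ask.getResourceName(),
      normalized, ask.getNumContainers(), ask.getRelaxLocality());
}
{code}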

Thoughts?

> Revert changes of AbstractResourceRequest
> -
>
> Key: YARN-6022
> URL: https://issues.apache.org/jira/browse/YARN-6022
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Priority: Blocker
>
> YARN-5774 added AbstractResourceRequest to make easier internal scheduler 
> change, this is not a correct approach: For example, with this change, we 
> need to make AbstractResourceRequest to be public/stable. And end users can 
> use it like:
> {code}
> AbstractResourceRequest request = ...
> request.setCapability(...)
> {code}
> But AbstractResourceRequest should not be visible by application at all. 
> We need to revert it from branch-2.8 / branch-2 / trunk. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5529) Create new DiskValidator class with metrics

2017-01-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796146#comment-15796146
 ] 

Hudson commented on YARN-5529:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11064 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11064/])
YARN-5529. Create new DiskValidator class with metrics (yufeigu via (rkanter: 
rev 591fb159444037bf4cb651aa1228914f5d71e1bf)
* (add) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestReadWriteDiskValidator.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ReadWriteDiskValidatorMetrics.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/impl/MetricsRecords.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableQuantiles.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/DiskValidatorFactory.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ReadWriteDiskValidator.java


> Create new DiskValidator class with metrics
> ---
>
> Key: YARN-5529
> URL: https://issues.apache.org/jira/browse/YARN-5529
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Ray Chiang
>Assignee: Yufei Gu
>  Labels: supportability
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5529.001.patch, YARN-5529.002.patch, 
> YARN-5529.003.patch
>
>
> With really large clusters, the basic DiskValidator isn't sufficient for some 
> of the less common types of disk failures.
> Look at a new DiskValidator that could do one or more of the following:
> - Add new tests to find more problems
> - Add new metrics to at least characterize problems that we haven't predicted



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6050) AMs can't be scheduled on racks or nodes

2017-01-03 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796139#comment-15796139
 ] 

Wangda Tan commented on YARN-6050:
--

Thanks [~rkanter], this will be very useful!

To make fewer changes to the API and internal implementations, I think an 
alternative approach is to expand the AM request on the server side.

For example, if the AM request.resourceName is set to "/rack", it will be 
expanded to \[/rack, *\], and if the AM request.resourceName is set to "host", 
it will be expanded to \[host, /rack, *\].

If request.relaxLocality is set to false, we will carry it into the expanded result as 
well: for example, if the AM request is "host1, relaxLocality=false", the expanded 
result will be: \[(host1, relaxLocality=true), (/rack, relaxLocality=false), 
(*, relaxLocality=false)\]

With this change, we should only need to update ApplicationMasterService to 
expand the request, and all other logic should stay the same.
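
As a rough illustration of the expansion idea (a sketch only, not an actual patch; the 
method names {{expandAMRequest}}/{{copyWith}} are hypothetical, and the rack lookup via 
{{RackResolver}} assumes {{RackResolver.init(conf)}} has been called):
{code}
// Sketch only: expand a single AM ResourceRequest into node/rack/ANY requests
// on the server side. The level the AM asked for keeps relaxLocality=true,
// while broader levels inherit the original flag, matching the examples above.
private List<ResourceRequest> expandAMRequest(ResourceRequest am) {
  List<ResourceRequest> expanded = new ArrayList<>();
  String name = am.getResourceName();
  boolean relax = am.getRelaxLocality();
  if (ResourceRequest.ANY.equals(name)) {
    expanded.add(am);                                  // nothing to expand
    return expanded;
  }
  expanded.add(copyWith(am, name, true));              // the requested node or rack
  if (!name.startsWith("/")) {                         // node-level request: add its rack
    String rack = RackResolver.resolve(name).getNetworkLocation();
    expanded.add(copyWith(am, rack, relax));
  }
  expanded.add(copyWith(am, ResourceRequest.ANY, relax));
  return expanded;
}

private ResourceRequest copyWith(ResourceRequest am, String name, boolean relax) {
  return ResourceRequest.newInstance(am.getPriority(), name,
      am.getCapability(), am.getNumContainers(), relax);
}
{code}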

Thoughts?

> AMs can't be scheduled on racks or nodes
> 
>
> Key: YARN-6050
> URL: https://issues.apache.org/jira/browse/YARN-6050
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-6050.001.patch
>
>
> Yarn itself supports rack/node aware scheduling for AMs; however, there 
> currently are two problems:
> # To specify hard or soft rack/node requests, you have to specify more than 
> one {{ResourceRequest}}.  For example, if you want to schedule an AM only on 
> "rackA", you have to create two {{ResourceRequest}}, like this:
> {code}
> ResourceRequest.newInstance(PRIORITY, ANY, CAPABILITY, NUM_CONTAINERS, false);
> ResourceRequest.newInstance(PRIORITY, "rackA", CAPABILITY, NUM_CONTAINERS, 
> true);
> {code}
> The problem is that the Yarn API doesn't actually allow you to specify more 
> than one {{ResourceRequest}} in the {{ApplicationSubmissionContext}}.  The 
> current behavior is to either build one from {{getResource}} or directly from 
> {{getAMContainerResourceRequest}}, depending on if 
> {{getAMContainerResourceRequest}} is null or not.  We'll need to add a third 
> method, say {{getAMContainerResourceRequests}}, which takes a list of 
> {{ResourceRequest}} so that clients can specify the multiple resource 
> requests.
> # There are some places where things are hardcoded to overwrite what the 
> client specifies.  These are pretty straightforward to fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5923) Unable to access logs for a running application if YARN_ACL_ENABLE is enabled

2017-01-03 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796121#comment-15796121
 ] 

Junping Du commented on YARN-5923:
--

Agree that it could be hard to have a unit test here. 
The patch looks good overall. Some minor comments:
1. We can break out of the loop early once {{hasHadoopAuthFilterInitializer = 
true;}}, given we are only interested in adding the target when there is no default 
initializer.
2. Declare target as a List instead of an ArrayList, and maybe 
rename it to targets given it could hold multiple items.
The rest looks fine to me.

> Unable to access logs for a running application if YARN_ACL_ENABLE is enabled
> -
>
> Key: YARN-5923
> URL: https://issues.apache.org/jira/browse/YARN-5923
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5923.1.patch, YARN-5923.2.patch
>
>
> 2016-11-07 23:20:41,423 WARN  webapp.GenericExceptionHandler 
> (GenericExceptionHandler.java:toResponse(98)) - INTERNAL_SERVER_ERROR
> javax.ws.rs.WebApplicationException: 
> org.apache.hadoop.yarn.exceptions.YarnException: User [dr.who] is not 
> authorized to view the logs for application application_1478218837976_0068
> at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices.getContainerLogsInfo(NMWebServices.java:226)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:886)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:834)
> at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebAppFilter.doFilter(NMWebAppFilter.java:72)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:795)
> at 
> com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163)
> at 
> com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58)
> at 
> com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118)
> at 
> com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.security.http.XFrameOptionsFilter.doFilter(XFrameOptionsFilter.java:57)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1294)
> at 
> 

[jira] [Commented] (YARN-5529) Create new DiskValidator class with metrics

2017-01-03 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796088#comment-15796088
 ] 

Yufei Gu commented on YARN-5529:


Great! Thanks [~rkanter] for the review and commit.

> Create new DiskValidator class with metrics
> ---
>
> Key: YARN-5529
> URL: https://issues.apache.org/jira/browse/YARN-5529
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Ray Chiang
>Assignee: Yufei Gu
>  Labels: supportability
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5529.001.patch, YARN-5529.002.patch, 
> YARN-5529.003.patch
>
>
> With really large clusters, the basic DiskValidator isn't sufficient for some 
> of the less common types of disk failures.
> Look at a new DiskValidator that could do one or more of the following:
> - Add new tests to find more problems
> - Add new metrics to at least characterize problems that we haven't predicted



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5529) Create new DiskValidator class with metrics

2017-01-03 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15796064#comment-15796064
 ] 

Robert Kanter commented on YARN-5529:
-

+1 LGTM

> Create new DiskValidator class with metrics
> ---
>
> Key: YARN-5529
> URL: https://issues.apache.org/jira/browse/YARN-5529
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Ray Chiang
>Assignee: Yufei Gu
>  Labels: supportability
> Attachments: YARN-5529.001.patch, YARN-5529.002.patch, 
> YARN-5529.003.patch
>
>
> With really large clusters, the basic DiskValidator isn't sufficient for some 
> of the less common types of disk failures.
> Look at a new DiskValidator that could do one or more of the following:
> - Add new tests to find more problems
> - Add new metrics to at least characterize problems that we haven't predicted



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5529) Create new DiskValidator class with metrics

2017-01-03 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15765570#comment-15765570
 ] 

Robert Kanter edited comment on YARN-5529 at 1/3/17 8:01 PM:
-

A few things:
# Please look into the failed tests
# In {{ReadWriteDiskValidator}}, we create a file using 
{{Files.createTempFile}} and delete it near the end.  If some kind of exception 
happens (either because we're throwing a {{DiskErrorException}} or a more 
generic {{IOException}} from some problem), we won't end up deleting the file.  
If this happens a lot, we can start filling up the disk with junk over time.  
We should put the delete call in a {{finally}} (a sketch follows after this list).
# Typo: {{throw new DiskErrorException("Data in file has bee corrupted.");}} It 
should be {{been}}.
# In {{TestReadWriteDiskValidator#testReadWriteDiskValidator}}, the {{100}} 
used in a few places could be made a constant, or at least a variable
# In {{TestReadWriteDiskValidator#testCheckFailures}}, the {{try-catch}} blocks 
where we're expecting a failure should {{fail}} if a {{DiskErrorException}} is 
not thrown, and should do some basic checking in the {{catch}} statement.  e.g.
{code:java}
try {
  readWriteDiskValidator.checkStatus(testDir);
  fail("some message");
} catch (DiskErrorException e) {
  some_basic_check(e);
}
{code}
# In {{TestReadWriteDiskValidator#testCheckFailures}} when we set the 
permissions to {{000}}, might that cause a problem when trying to clean it up 
(i.e. {{mvn clean}})?  
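
To illustrate point 2, a minimal sketch of the {{finally}} pattern; the file name, the 
validation steps, and {{dir}} (the directory under test) are placeholders, not the 
actual {{ReadWriteDiskValidator}} code:
{code:java}
// Sketch only: make sure the probe file is deleted even when validation throws.
Path probe = Files.createTempFile(dir.toPath(), "disk-validate", ".tmp");
try {
  // write to and read back the probe file here; throw DiskErrorException on mismatch
} finally {
  Files.deleteIfExists(probe);   // runs whether or not an exception was thrown
}
{code}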


was (Author: rkanter):
A few things:
# Please look into the failed tests
# In {{ReadWriteDiskValidator}}, we create a file using 
{{Files.createTempFile}} and delete it near the end.  If some kind of exception 
happens (either because we're throwing a {{DiskErrorException}} or a more 
generic {{IOException from some problem), we won't end up deleting the file.  
If this happens a lot, we can start filling up the disk with junk over time.  
We should put the delete call in a {{finally}}.
# Typo: {{throw new DiskErrorException("Data in file has bee corrupted.");}} It 
should be {{been}}.
# In {{TestReadWriteDiskValidator#testReadWriteDiskValidator}}, the {{100}} 
used in a few places could be made a constant, or at least a variable
# In {{TestReadWriteDiskValidator#testCheckFailures}}, the {{try-catch}} blocks 
where we're expecting a failure should {{fail}} if a {{DiskErrorException}} is 
not thrown, and should do some basic checking in the {{catch}} statement.  e.g.
{code:java}
try {
  readWriteDiskValidator.checkStatus(testDir);
  fail("some message");
} catch (DiskErrorException e) {
  some_basic_check(e);
}
{code}
# In {{TestReadWriteDiskValidator#testCheckFailures}} when we set the 
permissions to {{000}}, might that cause a problem when trying to clean it up 
(i.e. {{mvn clean}})?  

> Create new DiskValidator class with metrics
> ---
>
> Key: YARN-5529
> URL: https://issues.apache.org/jira/browse/YARN-5529
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Ray Chiang
>Assignee: Yufei Gu
>  Labels: supportability
> Attachments: YARN-5529.001.patch, YARN-5529.002.patch, 
> YARN-5529.003.patch
>
>
> With really large clusters, the basic DiskValidator isn't sufficient for some 
> of the less common types of disk failures.
> Look at a new DiskValidator that could do one or more of the following:
> - Add new tests to find more problems
> - Add new metrics to at least characterize problems that we haven't predicted



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6015) AsyncDispatcher thread name can be set to improved debugging

2017-01-03 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15795909#comment-15795909
 ] 

Daniel Templeton commented on YARN-6015:


Thanks, [~ajithshetty].  A few comments:

* Looks like {{setDispatcherThreadName()}} is unused and is superseded by the 
constructor
* {{   * The thread name for dispatcher}} should end with a period
* {{AsyncDispatcher(String)}} needs a summary in the javadoc
* In {{   * @param dispatcherName Name of the dispatcher thread}}, "Name" 
shouldn't be capitalized


> AsyncDispatcher thread name can be set to improved debugging
> 
>
> Key: YARN-6015
> URL: https://issues.apache.org/jira/browse/YARN-6015
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Ajith S
>Assignee: Ajith S
> Attachments: YARN-6015.01.patch
>
>
> Currently all the running instances of AsyncDispatcher have the same thread 
> name. To improve debugging, we can have an option to set the thread name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5986) Add support for container specific properties to the ContainerLaunchContext

2017-01-03 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15795882#comment-15795882
 ] 

Miklos Szegedi commented on YARN-5986:
--

Thank you, [~vvasudev]! I updated the title.

> Add support for container specific properties to the ContainerLaunchContext
> ---
>
> Key: YARN-5986
> URL: https://issues.apache.org/jira/browse/YARN-5986
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>
> Currently ContainerLaunchContext is defined as
> message ContainerLaunchContextProto {
>   repeated StringLocalResourceMapProto localResources = 1;
>   optional bytes tokens = 2;
>   repeated StringBytesMapProto service_data = 3;
>   repeated StringStringMapProto environment = 4;
>   repeated string command = 5;
>   repeated ApplicationACLMapProto application_ACLs = 6;
>   optional ContainerRetryContextProto container_retry_context = 7;
> }
> It would be nice to have an additional parameter "configuration" to support 
> cases like YARN-5600, where we want to pass a parameter to Yarn and not the 
> application or container.
>   repeated StringStringMapProto configuration = 8;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5986) Add support for container specific properties to the ContainerLaunchContext

2017-01-03 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-5986:
-
Summary: Add support for container specific properties to the 
ContainerLaunchContext  (was: Adding YARN configuration entries to 
ContainerLaunchContext)

> Add support for container specific properties to the ContainerLaunchContext
> ---
>
> Key: YARN-5986
> URL: https://issues.apache.org/jira/browse/YARN-5986
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>
> Currently ContainerLaunchContext is defined as
> message ContainerLaunchContextProto {
>   repeated StringLocalResourceMapProto localResources = 1;
>   optional bytes tokens = 2;
>   repeated StringBytesMapProto service_data = 3;
>   repeated StringStringMapProto environment = 4;
>   repeated string command = 5;
>   repeated ApplicationACLMapProto application_ACLs = 6;
>   optional ContainerRetryContextProto container_retry_context = 7;
> }
> It would be nice to have an additional parameter "configuration" to support 
> cases like YARN-5600, where we want to pass a parameter to Yarn and not the 
> application or container.
>   repeated StringStringMapProto configuration = 8;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6012) Remove node label (removeFromClusterNodeLabels) document is missing

2017-01-03 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15795866#comment-15795866
 ] 

Daniel Templeton commented on YARN-6012:


Thanks, [~Ying Zhang], for the patch.  I can see that the new docs are styled 
after the add/modify docs, but it would be really nice to have docs that are 
reader-friendly.

For example, instead of "Remove cluster node labels: Executing {{yarn rmadmin 
-removeFromClusterNodeLabels ""}} (label splitted by ",") 
to remove node labels.", it would be nice to be a bit more readable, such as 
"To remove one or more node labels, execute the following command: {{yarn 
rmadmin -removeFromClusterNodeLabels "[,,...]"}}.  The command 
argument should be a comma-separated list of node labels to remove."

The rest of the bullets in the new docs can be made similarly more friendly by 
making them text instead of terse bullets.

> Remove node label (removeFromClusterNodeLabels) document is missing
> ---
>
> Key: YARN-6012
> URL: https://issues.apache.org/jira/browse/YARN-6012
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha1
>Reporter: Weiwei Yang
>Assignee: Ying Zhang
>  Labels: doc, nodelabel
> Attachments: YARN-6012.001.patch
>
>
> Add corresponding documentation for
> {code}
> yarn rmadmin -removeFromClusterNodeLabels "x,y"
> {code}
> in yarn node labels doc page.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5585) [Atsv2] Reader side changes for entity prefix and support for pagination via additional filters

2017-01-03 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15795865#comment-15795865
 ] 

Sangjin Lee commented on YARN-5585:
---

Thanks for the clarification. Yes, I agree that the conclusion should be that 
the fromIdPrefix should be mandatory in both cases.

> [Atsv2] Reader side changes for entity prefix and support for pagination via 
> additional filters
> ---
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
>  Labels: yarn-5355-merge-blocker
> Attachments: 0001-YARN-5585.patch, YARN-5585-YARN-5355.0001.patch, 
> YARN-5585-YARN-5355.0002.patch, YARN-5585-YARN-5355.0003.patch, 
> YARN-5585-YARN-5355.0004.patch, YARN-5585-YARN-5355.0005.patch, 
> YARN-5585-workaround.patch, YARN-5585.v0.patch
>
>
> The TimelineReader REST APIs provide a lot of filters to retrieve 
> applications. Along with those, it would be good to add a new filter, i.e. fromId, 
> so that entities can be retrieved after the fromId. 
> Current behavior: the default limit is set to 100. If there are 1000 entities, 
> the REST call gives the first/last 100 entities. How do we retrieve the next set of 100 
> entities, i.e. 101 to 200 OR 900 to 801?
> Example: if applications are stored in the database as app-1, app-2 ... app-10, 
> *getApps?limit=5* gives app-1 to app-5, but there is no way to retrieve the 
> next 5 apps. 
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> Since ATS is targeting storage of a large number of entities, it is a very common 
> use case to get the next set of entities using fromId rather than querying all 
> the entities. This is very useful for pagination in the web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6027) Support fromId for flows/flowrun apps

2017-01-03 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15795862#comment-15795862
 ] 

Sangjin Lee commented on YARN-6027:
---

Catching up on this discussion after the break.

I am +1 on adding pagination support for flow runs and apps.

Regarding flows (flow activity), I think [~varun_saxena] explained well how 
that table is supposed to be used and served. The intent is that the flow activity 
table is strongly organized by *dates*; i.e., it is stored as daily activities. 
Therefore, a natural way to segment and paginate it would be to use date-range 
filters.

Please note that "flows" and "flow runs" are different. Flows are closer to the 
application that drives runs. Flow runs are *actualized instances* of those 
flows. For example, if you run an MR sleep job every hour, there would be 
*one* flow that says "Sleep Job", and there would be 24 flow runs for a given 
day that belong to that flow.

The flow activity table surfaces all the flows (not the flow runs) that had 
activity on a given day, and that should be a good landing point for users, not 
unlike the current RM page where it shows the latest active YARN applications. 
In the above sleep job case, it should show only *one entry* that says "Sleep 
Job". You can drill down (i.e. obtain the list of flow runs from that) to get 
at the 24 flow runs.

bq. Landing page should be list of all flows.

I think we had this discussion in the past but I forget the JIRA id. Given the 
current structure of the data, it would not be very feasible. Also, for any 
sufficiently old cluster, you can easily imagine how big (and slow) this result 
can be. Even if we introduced pagination, you'd be looking at flows that start 
with "A" on this landing page. You'd need to move many pages to get to your 
flows. Again, like the RM landing page, IMO *recency* is the key to the 
usefulness of this UI, and that's why we organized the flow activity that way. 
That way most users would find their flows within the first few pages (at most) 
of the data. Hope that helps.

cc [~jrottinghuis] [~vrushalic]

> Support fromId for flows/flowrun apps
> -
>
> Key: YARN-6027
> URL: https://issues.apache.org/jira/browse/YARN-6027
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
>
> In YARN-5585, fromId is supported for retrieving entities. We need a similar 
> filter for flows/flowRun apps and for flow run and flow as well. 
> Along with supporting fromId, this JIRA should also discuss the following points:
> * Should we throw an exception for entities/entity retrieval if duplicates are 
> found?
> * TimelineEntity:
> ** Should the equals method also check idPrefix?
> ** Is idPrefix part of the identifiers?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5585) [Atsv2] Reader side changes for entity prefix and support for pagination via additional filters

2017-01-03 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15795796#comment-15795796
 ] 

Rohith Sharma K S commented on YARN-5585:
-

So apart from the proposed Javadoc, the additional change is to add a sentence about fromId, 
right? *fromIdPrefix is mandatory even in the case where the entity id prefix is not 
used, and should be set to 0*

> [Atsv2] Reader side changes for entity prefix and support for pagination via 
> additional filters
> ---
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
>  Labels: yarn-5355-merge-blocker
> Attachments: 0001-YARN-5585.patch, YARN-5585-YARN-5355.0001.patch, 
> YARN-5585-YARN-5355.0002.patch, YARN-5585-YARN-5355.0003.patch, 
> YARN-5585-YARN-5355.0004.patch, YARN-5585-YARN-5355.0005.patch, 
> YARN-5585-workaround.patch, YARN-5585.v0.patch
>
>
> The TimelineReader REST APIs provide a lot of filters to retrieve 
> applications. Along with those, it would be good to add a new filter, i.e. fromId, 
> so that entities can be retrieved after the fromId. 
> Current behavior: the default limit is set to 100. If there are 1000 entities, 
> the REST call gives the first/last 100 entities. How do we retrieve the next set of 100 
> entities, i.e. 101 to 200 OR 900 to 801?
> Example: if applications are stored in the database as app-1, app-2 ... app-10, 
> *getApps?limit=5* gives app-1 to app-5, but there is no way to retrieve the 
> next 5 apps. 
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> Since ATS is targeting storage of a large number of entities, it is a very common 
> use case to get the next set of entities using fromId rather than querying all 
> the entities. This is very useful for pagination in the web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5830) Avoid preempting AM containers

2017-01-03 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15795778#comment-15795778
 ] 

Karthik Kambatla commented on YARN-5830:


Discussed with Yufei offline. 

I have a couple of high-level concerns:
# When identifying a node and containers to preempt, the patch differentiates 
between a set with AM containers and a set without. However, it doesn't look 
into the number of AM containers. 
# The method {{identifyContainersToPreempt}} is too long and might benefit from 
more modularity. 

One approach we discussed was to add a new method, 
{{identifyContainersToPreemptOnNode}}, that works on a single node and takes 
the maximum number of AM containers allowed to be preempted.
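
A rough sketch of what such a per-node helper could look like. The method name comes 
from the comment above, but the signature and the simplified selection logic are 
assumptions for illustration, not the actual patch; in particular, the resource 
bookkeeping (stopping once enough has been freed) is omitted.
{code}
// Sketch only: prefer non-AM containers on the node, and never select more
// than maxAMContainers AM containers for preemption.
private List<RMContainer> identifyContainersToPreemptOnNode(
    List<RMContainer> runningOnNode, int maxAMContainers) {
  List<RMContainer> toPreempt = new ArrayList<>();
  for (RMContainer c : runningOnNode) {          // non-AM containers first
    if (!c.isAMContainer()) {
      toPreempt.add(c);
    }
  }
  int amPreempted = 0;
  for (RMContainer c : runningOnNode) {          // then AM containers, capped
    if (c.isAMContainer() && amPreempted < maxAMContainers) {
      toPreempt.add(c);
      amPreempted++;
    }
  }
  return toPreempt;
}
{code}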

> Avoid preempting AM containers
> --
>
> Key: YARN-5830
> URL: https://issues.apache.org/jira/browse/YARN-5830
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Karthik Kambatla
>Assignee: Yufei Gu
> Attachments: YARN-5830.001.patch, YARN-5830.002.patch
>
>
> While considering containers for preemption, avoid AM containers unless 
> absolutely necessary. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5987) NM configured command to collect heap dump of preempted container

2017-01-03 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15795781#comment-15795781
 ] 

Miklos Szegedi commented on YARN-5987:
--

Thank you [~templedf] for the info! YARN-2261 is about adding a cleanup 
container for an entire application, whereas this jira is about adding a cleanup 
script for every container.
I read the design there, and it raises two points to discuss. One is whether we 
want to run the cleanup callback in a container, and the other is whether we 
want to do retries.
1. If we used a separate container for the callback, it might fail due to 
resource constraints, which would prevent collecting a useful dump file. It has 
to run while the original container is alive. One container accessing another 
would also raise container isolation concerns, I think.
2. If we do not run the callback in a container, it cannot be preempted, so I 
think we do not need any retry logic either. We can reconsider this if a usage 
pattern other than collecting a dump comes up in the future. However, the 
callback itself can implement retry logic in the current implementation, if 
necessary (a minimal sketch of such an invocation follows below).
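To make the idea concrete, here is a minimal sketch of how the node manager could invoke a configured debug command before killing a container. The configuration key name and the placeholder substitution are assumptions for illustration only, not the property defined by the patch.

{code}
import java.io.IOException;

class DebugCommandSketch {
  // Assumed key name, for illustration only.
  static final String DEBUG_CMD_KEY =
      "yarn.nodemanager.container-kill.debug-command";

  static void runDebugCommand(String configuredCmd, String containerId, long pid)
      throws IOException, InterruptedException {
    if (configuredCmd == null || configuredCmd.isEmpty()) {
      return; // nothing configured, proceed straight to the kill
    }
    // Substitute simple placeholders so the script knows which process to dump.
    String cmd = configuredCmd
        .replace("{containerId}", containerId)
        .replace("{pid}", Long.toString(pid));
    // Runs outside any container, as discussed above, so it cannot itself be
    // preempted; no retry logic is attempted here.
    Process p = new ProcessBuilder("/bin/sh", "-c", cmd).inheritIO().start();
    p.waitFor(); // the container is killed only after the command returns
  }
}
{code}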

> NM configured command to collect heap dump of preempted container
> -
>
> Key: YARN-5987
> URL: https://issues.apache.org/jira/browse/YARN-5987
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-5987.000.patch, YARN-5987.001.patch
>
>
> The node manager can kill a container, if it exceeds the assigned memory 
> limits. It would be nice to have a configuration entry to set up a command 
> that can collect additional debug information, if needed. The collected 
> information can be used for root cause analysis.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6046) Documentation correction in YarnApplicationSecurity

2017-01-03 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15795779#comment-15795779
 ] 

Daniel Templeton commented on YARN-6046:


[~bibinchundatt], thanks for filing the issue.  With a quick read I don't see 
what's wrong with the last paragraph you pointed out.  What's your concern 
there?

> Documentation correction in YarnApplicationSecurity
> ---
>
> Key: YARN-6046
> URL: https://issues.apache.org/jira/browse/YARN-6046
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Daniel Templeton
>Priority: Trivial
>  Labels: newbie
>
> A few documentation corrections are required in 
> hadoop-yarn/hadoop-yarn-site/YarnApplicationSecurity.html
> {code}
> 1. Suring AM startup, log in to Kerberos.
> {code}
> {code}
> Don’t. Rely on the lifespan of the 
> {code}
> {code}
> renewed automatically; the AM pushes out 
> {code}
> {code}
> In an insecure cluster, the application will run as the identity of the 
> account of the node manager, typically something such as yarn or mapred. By 
> default, the application will access HDFS as that user, with a different home 
> directory, and with a different user identified in audit logs and on file 
> system owner attributes.
> {code}
> Need to reframe sentence.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6046) Documentation correction in YarnApplicationSecurity

2017-01-03 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton reassigned YARN-6046:
--

Assignee: Daniel Templeton

> Documentation correction in YarnApplicationSecurity
> ---
>
> Key: YARN-6046
> URL: https://issues.apache.org/jira/browse/YARN-6046
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Daniel Templeton
>Priority: Trivial
>  Labels: newbie
>
> A few documentation corrections are required in 
> hadoop-yarn/hadoop-yarn-site/YarnApplicationSecurity.html
> {code}
> 1. Suring AM startup, log in to Kerberos.
> {code}
> {code}
> Don’t. Rely on the lifespan of the 
> {code}
> {code}
> renewed automatically; the AM pushes out 
> {code}
> {code}
> In an insecure cluster, the application will run as the identity of the 
> account of the node manager, typically something such as yarn or mapred. By 
> default, the application will access HDFS as that user, with a different home 
> directory, and with a different user identified in audit logs and on file 
> system owner attributes.
> {code}
> Need to reframe sentence.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6050) AMs can't be scheduled on racks or nodes

2017-01-03 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-6050:

Attachment: YARN-6050.001.patch

> AMs can't be scheduled on racks or nodes
> 
>
> Key: YARN-6050
> URL: https://issues.apache.org/jira/browse/YARN-6050
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-6050.001.patch
>
>
> Yarn itself supports rack/node aware scheduling for AMs; however, there 
> currently are two problems:
> # To specify hard or soft rack/node requests, you have to specify more than 
> one {{ResourceRequest}}.  For example, if you want to schedule an AM only on 
> "rackA", you have to create two {{ResourceRequest}}, like this:
> {code}
> ResourceRequest.newInstance(PRIORITY, ANY, CAPABILITY, NUM_CONTAINERS, false);
> ResourceRequest.newInstance(PRIORITY, "rackA", CAPABILITY, NUM_CONTAINERS, 
> true);
> {code}
> The problem is that the Yarn API doesn't actually allow you to specify more 
> than one {{ResourceRequest}} in the {{ApplicationSubmissionContext}}.  The 
> current behavior is to either build one from {{getResource}} or directly from 
> {{getAMContainerResourceRequest}}, depending on whether 
> {{getAMContainerResourceRequest}} is null.  We'll need to add a third 
> method, say {{getAMContainerResourceRequests}}, which takes a list of 
> {{ResourceRequest}} so that clients can specify multiple resource 
> requests.
> # There are some places where things are hardcoded to overwrite what the 
> client specifies.  These are pretty straightforward to fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5585) [Atsv2] Reader side changes for entity prefix and support for pagination via additional filters

2017-01-03 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15795729#comment-15795729
 ] 

Rohith Sharma K S commented on YARN-5585:
-

bq. It is regarding the case where entity id prefix is not used for certain 
types of entities. In that case, what do we say users should specify to 
accomplish pagination? Should they do fromIdPrefix = 0 (default) and the 
fromId, or should they not specify fromIdPrefix at all?
There are 2 cases (a query sketch follows after this list). 
# When idPrefix is incremental for entities of a given entityType : Here it is 
known that entities are sorted using the id prefix. In this case, fromId is not 
strictly required to achieve pagination; fromIdPrefix by itself is sufficient. 
# When idPrefix is the same for all entities of a given entityType (let's say 0) : 
Here, since idPrefix is the same across the entities, entities are sorted by the 
back-end storage; in HBase, that is lexicographical order. We still need to 
achieve pagination, so fromIdPrefix alone is not sufficient and fromId is also 
required. So, instead of allowing the user to provide only fromId, it is better 
to enforce providing both fromIdPrefix and fromId. Note that we can NOT assume a 
default value of 0 for fromIdPrefix when it is null, because idPrefix is 
controlled by the user.
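To make the two cases concrete, the pagination queries would look roughly like this (only a sketch of the proposed query parameters; the reader endpoint path is elided):

{code}
# Case 1: idPrefix is incremental, so fromIdPrefix alone is enough.
GET .../entities?limit=100
GET .../entities?limit=100&fromIdPrefix=<idPrefix of the last entity on page 1>

# Case 2: idPrefix is the same for all entities (e.g. 0), so both filters are needed.
GET .../entities?limit=100&fromIdPrefix=0&fromId=<id of the last entity on page 1>
{code}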


> [Atsv2] Reader side changes for entity prefix and support for pagination via 
> additional filters
> ---
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
>  Labels: yarn-5355-merge-blocker
> Attachments: 0001-YARN-5585.patch, YARN-5585-YARN-5355.0001.patch, 
> YARN-5585-YARN-5355.0002.patch, YARN-5585-YARN-5355.0003.patch, 
> YARN-5585-YARN-5355.0004.patch, YARN-5585-YARN-5355.0005.patch, 
> YARN-5585-workaround.patch, YARN-5585.v0.patch
>
>
> TimelineReader REST APIs provide a lot of filters to retrieve 
> applications. Along with those, it would be good to add a new filter, i.e. fromId, 
> so that entities can be retrieved after the fromId. 
> Current Behavior : The default limit is set to 100. If there are 1000 entities, 
> the REST call gives the first/last 100 entities. How do we retrieve the next set of 100 
> entities, i.e. 101 to 200 OR 900 to 801?
> Example : If applications are stored in the database as app-1, app-2 ... app-10,
> *getApps?limit=5* gives app-1 to app-5. But there is no way to retrieve the next 
> 5 apps. 
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> Since ATS is targeting storage of a large number of entities, it is a very common 
> use case to get the next set of entities using fromId rather than querying all 
> the entities. This is very useful for pagination in the web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6050) AMs can't be scheduled on racks or nodes

2017-01-03 Thread Robert Kanter (JIRA)
Robert Kanter created YARN-6050:
---

 Summary: AMs can't be scheduled on racks or nodes
 Key: YARN-6050
 URL: https://issues.apache.org/jira/browse/YARN-6050
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.9.0, 3.0.0-alpha2
Reporter: Robert Kanter
Assignee: Robert Kanter


Yarn itself supports rack/node aware scheduling for AMs; however, there 
currently are two problems:
# To specify hard or soft rack/node requests, you have to specify more than one 
{{ResourceRequest}}.  For example, if you want to schedule an AM only on 
"rackA", you have to create two {{ResourceRequest}}, like this:
{code}
ResourceRequest.newInstance(PRIORITY, ANY, CAPABILITY, NUM_CONTAINERS, false);
ResourceRequest.newInstance(PRIORITY, "rackA", CAPABILITY, NUM_CONTAINERS, 
true);
{code}
The problem is that the Yarn API doesn't actually allow you to specify more 
than one {{ResourceRequest}} in the {{ApplicationSubmissionContext}}.  The 
current behavior is to either build one from {{getResource}} or directly from 
{{getAMContainerResourceRequest}}, depending on whether 
{{getAMContainerResourceRequest}} is null.  We'll need to add a third 
method, say {{getAMContainerResourceRequests}}, which takes a list of 
{{ResourceRequest}} so that clients can specify multiple resource requests (a 
usage sketch follows below).
# There are some places where things are hardcoded to overwrite what the client 
specifies.  These are pretty straightforward to fix.
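For illustration, a client using the proposed list-based API might build the AM requests along these lines. This is a sketch only: {{setAMContainerResourceRequests}} is assumed here as the setter counterpart of the proposed getter and is not existing API.

{code}
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

class AmPlacementSketch {
  static void requestAmOnRackA(ApplicationSubmissionContext ctx) {
    Priority priority = Priority.newInstance(0);
    Resource capability = Resource.newInstance(1024, 1); // 1 GB, 1 vcore for the AM
    // A relaxed ANY request so the scheduler cannot fall back to arbitrary nodes,
    // plus a strict rack-level request that may actually be satisfied.
    List<ResourceRequest> amRequests = Arrays.asList(
        ResourceRequest.newInstance(priority, ResourceRequest.ANY, capability, 1, false),
        ResourceRequest.newInstance(priority, "rackA", capability, 1, true));
    ctx.setAMContainerResourceRequests(amRequests); // proposed API, assumed here
  }
}
{code}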





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5585) [Atsv2] Reader side changes for entity prefix and support for pagination via additional filters

2017-01-03 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15795718#comment-15795718
 ] 

Sangjin Lee commented on YARN-5585:
---

Great! Can we then clarify that point (fromIdPrefix is mandatory even in the 
case the entity id prefix is not used and should be set to 0)? Thanks!

> [Atsv2] Reader side changes for entity prefix and support for pagination via 
> additional filters
> ---
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
>  Labels: yarn-5355-merge-blocker
> Attachments: 0001-YARN-5585.patch, YARN-5585-YARN-5355.0001.patch, 
> YARN-5585-YARN-5355.0002.patch, YARN-5585-YARN-5355.0003.patch, 
> YARN-5585-YARN-5355.0004.patch, YARN-5585-YARN-5355.0005.patch, 
> YARN-5585-workaround.patch, YARN-5585.v0.patch
>
>
> TimelineReader REST APIs provide a lot of filters to retrieve 
> applications. Along with those, it would be good to add a new filter, i.e. fromId, 
> so that entities can be retrieved after the fromId. 
> Current Behavior : The default limit is set to 100. If there are 1000 entities, 
> the REST call gives the first/last 100 entities. How do we retrieve the next set of 100 
> entities, i.e. 101 to 200 OR 900 to 801?
> Example : If applications are stored in the database as app-1, app-2 ... app-10,
> *getApps?limit=5* gives app-1 to app-5. But there is no way to retrieve the next 
> 5 apps. 
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> Since ATS is targeting storage of a large number of entities, it is a very common 
> use case to get the next set of entities using fromId rather than querying all 
> the entities. This is very useful for pagination in the web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5585) [Atsv2] Reader side changes for entity prefix and support for pagination via additional filters

2017-01-03 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15795685#comment-15795685
 ] 

Sangjin Lee commented on YARN-5585:
---

Regarding whether to use the entity id prefix in {{TimelineEntity.equals()}}, I 
think the only implication is when we add parsed entities to a set. If we don't 
include the entity id prefix in {{equals()}} and there were 2 entities with the 
same id (but different id prefix somehow), whichever was returned later would 
not be added to the set.

Since this is an uncommon case which we've been discussing in another context, 
I'm +0 on not introducing the entity id prefix to {{TimelineEntity.equals()}}.
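The set behavior described above can be reproduced with a small, self-contained sketch; {{SimpleEntity}} here is only a stand-in for {{TimelineEntity}}, with {{equals()}} and {{hashCode()}} deliberately ignoring the id prefix:

{code}
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

class SimpleEntity {
  final String type;
  final String id;
  final long idPrefix;

  SimpleEntity(String type, String id, long idPrefix) {
    this.type = type;
    this.id = id;
    this.idPrefix = idPrefix;
  }

  // equals()/hashCode() use only type and id, ignoring the id prefix.
  @Override
  public boolean equals(Object o) {
    if (!(o instanceof SimpleEntity)) {
      return false;
    }
    SimpleEntity e = (SimpleEntity) o;
    return type.equals(e.type) && id.equals(e.id);
  }

  @Override
  public int hashCode() {
    return Objects.hash(type, id);
  }

  public static void main(String[] args) {
    Set<SimpleEntity> entities = new HashSet<>();
    entities.add(new SimpleEntity("YARN_CONTAINER", "c_1", 10));
    // Same id, different prefix: the later entity is not added, the earlier is kept.
    entities.add(new SimpleEntity("YARN_CONTAINER", "c_1", 20));
    System.out.println(entities.size()); // prints 1
  }
}
{code}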

> [Atsv2] Reader side changes for entity prefix and support for pagination via 
> additional filters
> ---
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
>  Labels: yarn-5355-merge-blocker
> Attachments: 0001-YARN-5585.patch, YARN-5585-YARN-5355.0001.patch, 
> YARN-5585-YARN-5355.0002.patch, YARN-5585-YARN-5355.0003.patch, 
> YARN-5585-YARN-5355.0004.patch, YARN-5585-YARN-5355.0005.patch, 
> YARN-5585-workaround.patch, YARN-5585.v0.patch
>
>
> TimelineReader REST APIs provide a lot of filters to retrieve 
> applications. Along with those, it would be good to add a new filter, i.e. fromId, 
> so that entities can be retrieved after the fromId. 
> Current Behavior : The default limit is set to 100. If there are 1000 entities, 
> the REST call gives the first/last 100 entities. How do we retrieve the next set of 100 
> entities, i.e. 101 to 200 OR 900 to 801?
> Example : If applications are stored in the database as app-1, app-2 ... app-10,
> *getApps?limit=5* gives app-1 to app-5. But there is no way to retrieve the next 
> 5 apps. 
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> Since ATS is targeting storage of a large number of entities, it is a very common 
> use case to get the next set of entities using fromId rather than querying all 
> the entities. This is very useful for pagination in the web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5585) [Atsv2] Reader side changes for entity prefix and support for pagination via additional filters

2017-01-03 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15795682#comment-15795682
 ] 

Varun Saxena edited comment on YARN-5585 at 1/3/17 6:06 PM:


bq. Perhaps you are implying that fromIdPrefix should still be set in that case?
bq. I think this needs to be clarified. IMO, we should keep the behavior of 
requiring fromIdPrefix in all cases, but in those cases where the entity id 
prefix is not used, we should require users to specify 0 (default) as the 
fromIdPrefix. Otherwise, to allow fromId only would be more confusing.
Yes, fromId is ignored if fromIdPrefix is not set. And in cases where the id prefix 
is not used for an entity type, the user needs to explicitly send 0 in 
fromIdPrefix. We could also allow not specifying fromIdPrefix (and only using 
fromId) in such cases while assuming internally that it is 0, but as you said, 
that may be confusing. As long as we clearly document the behavior, I think it 
should be fine.
I agree with you that the point about users having to send 0 needs to be 
clarified in the javadoc as well, in addition to the javadoc proposed above.


was (Author: varun_saxena):
bq. Perhaps you are implying that fromIdPrefix should still be set in that case?
bq. I think this needs to be clarified. IMO, we should keep the behavior of 
requiring fromIdPrefix in all cases, but in those cases where the entity id 
prefix is not used, we should require users to specify 0 (default) as the 
fromIdPrefix. Otherwise, to allow fromId only would be more confusing.
Yes, fromId is ignored is fromIdPrefix is not set. And in cases where 
fromIdPrefix is not used for an entity type, user needs to explicitly send 0. 
We can also allow not specifying fromIdPrefix(and only using fromId) in such 
cases while assuming internally that its 0. But as you said it may be 
confusing. Till we clearly document the behavior I think it should be fine.
I agree with you that the point about users having to send 0 needs to be 
clarified as well in the javadoc in addition to the javadoc proposed above.

> [Atsv2] Reader side changes for entity prefix and support for pagination via 
> additional filters
> ---
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
>  Labels: yarn-5355-merge-blocker
> Attachments: 0001-YARN-5585.patch, YARN-5585-YARN-5355.0001.patch, 
> YARN-5585-YARN-5355.0002.patch, YARN-5585-YARN-5355.0003.patch, 
> YARN-5585-YARN-5355.0004.patch, YARN-5585-YARN-5355.0005.patch, 
> YARN-5585-workaround.patch, YARN-5585.v0.patch
>
>
> TimelineReader REST APIs provide a lot of filters to retrieve 
> applications. Along with those, it would be good to add a new filter, i.e. fromId, 
> so that entities can be retrieved after the fromId. 
> Current Behavior : The default limit is set to 100. If there are 1000 entities, 
> the REST call gives the first/last 100 entities. How do we retrieve the next set of 100 
> entities, i.e. 101 to 200 OR 900 to 801?
> Example : If applications are stored in the database as app-1, app-2 ... app-10,
> *getApps?limit=5* gives app-1 to app-5. But there is no way to retrieve the next 
> 5 apps. 
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> Since ATS is targeting storage of a large number of entities, it is a very common 
> use case to get the next set of entities using fromId rather than querying all 
> the entities. This is very useful for pagination in the web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5585) [Atsv2] Reader side changes for entity prefix and support for pagination via additional filters

2017-01-03 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15795682#comment-15795682
 ] 

Varun Saxena commented on YARN-5585:


bq. Perhaps you are implying that fromIdPrefix should still be set in that case?
bq. I think this needs to be clarified. IMO, we should keep the behavior of 
requiring fromIdPrefix in all cases, but in those cases where the entity id 
prefix is not used, we should require users to specify 0 (default) as the 
fromIdPrefix. Otherwise, to allow fromId only would be more confusing.
Yes, fromId is ignored if fromIdPrefix is not set. And in cases where 
fromIdPrefix is not used for an entity type, the user needs to explicitly send 0. 
We could also allow not specifying fromIdPrefix (and only using fromId) in such 
cases while assuming internally that it is 0, but as you said, that may be 
confusing. As long as we clearly document the behavior, I think it should be fine.
I agree with you that the point about users having to send 0 needs to be 
clarified in the javadoc as well, in addition to the javadoc proposed above.

> [Atsv2] Reader side changes for entity prefix and support for pagination via 
> additional filters
> ---
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
>  Labels: yarn-5355-merge-blocker
> Attachments: 0001-YARN-5585.patch, YARN-5585-YARN-5355.0001.patch, 
> YARN-5585-YARN-5355.0002.patch, YARN-5585-YARN-5355.0003.patch, 
> YARN-5585-YARN-5355.0004.patch, YARN-5585-YARN-5355.0005.patch, 
> YARN-5585-workaround.patch, YARN-5585.v0.patch
>
>
> TimelineReader REST APIs provide a lot of filters to retrieve 
> applications. Along with those, it would be good to add a new filter, i.e. fromId, 
> so that entities can be retrieved after the fromId. 
> Current Behavior : The default limit is set to 100. If there are 1000 entities, 
> the REST call gives the first/last 100 entities. How do we retrieve the next set of 100 
> entities, i.e. 101 to 200 OR 900 to 801?
> Example : If applications are stored in the database as app-1, app-2 ... app-10,
> *getApps?limit=5* gives app-1 to app-5. But there is no way to retrieve the next 
> 5 apps. 
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> Since ATS is targeting storage of a large number of entities, it is a very common 
> use case to get the next set of entities using fromId rather than querying all 
> the entities. This is very useful for pagination in the web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6022) Revert changes of AbstractResourceRequest

2017-01-03 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15795677#comment-15795677
 ] 

Arun Suresh commented on YARN-6022:
---

[~leftnoteasy], [~kasha], do you guys want to just move this to @Public 
@Unstable now (thereby unblocking the release) and maybe refactor it later?
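For reference, the interim change being suggested would amount to something like the sketch below, assuming the existing Hadoop audience/stability annotations; the class body is elided and unchanged.

{code}
import org.apache.hadoop.classification.InterfaceAudience.Public;
import org.apache.hadoop.classification.InterfaceStability.Unstable;

// Sketch: Public + Unstable signals the class is visible but may still change or
// be removed, which unblocks the release without freezing the abstraction.
@Public
@Unstable
public abstract class AbstractResourceRequest {
  // ... existing fields and methods unchanged ...
}
{code}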

> Revert changes of AbstractResourceRequest
> -
>
> Key: YARN-6022
> URL: https://issues.apache.org/jira/browse/YARN-6022
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Priority: Blocker
>
> YARN-5774 added AbstractResourceRequest to make internal scheduler changes 
> easier, but this is not a correct approach: for example, with this change, we 
> would need to make AbstractResourceRequest public/stable, and end users could 
> use it like:
> {code}
> AbstractResourceRequest request = ...
> request.setCapability(...)
> {code}
> But AbstractResourceRequest should not be visible to applications at all. 
> We need to revert it from branch-2.8 / branch-2 / trunk. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


