[jira] [Commented] (YARN-6108) Improve AHS webservice to accept NM address as a parameter to get container logs

2017-02-02, Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15851195#comment-15851195
 ] 

Hadoop QA commented on YARN-6108:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 44s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 36 unchanged - 0 fixed = 38 total (was 36) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
27s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
24s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m  2s{color} 
| {color:red} hadoop-yarn-server-applicationhistoryservice in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.timeline.webapp.TestTimelineWebServices |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6108 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12850776/YARN-6108.5.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b304e270dbe3 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0914fcc |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 

[jira] [Updated] (YARN-5889) Improve user-limit calculation in capacity scheduler

2017-02-02, Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5889:
--
Attachment: YARN-5889.0010.patch

Yes, [~leftnoteasy], that makes sense to me. Uploading a patch that addresses 
the review comments. Thank you very much.

> Improve user-limit calculation in capacity scheduler
> 
>
> Key: YARN-5889
> URL: https://issues.apache.org/jira/browse/YARN-5889
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-5889.0001.patch, 
> YARN-5889.0001.suggested.patchnotes, YARN-5889.0002.patch, 
> YARN-5889.0003.patch, YARN-5889.0004.patch, YARN-5889.0005.patch, 
> YARN-5889.0006.patch, YARN-5889.0007.patch, YARN-5889.0008.patch, 
> YARN-5889.0009.patch, YARN-5889.0010.patch, YARN-5889.v0.patch, 
> YARN-5889.v1.patch, YARN-5889.v2.patch
>
>
> Currently, the user limit is computed during every heartbeat allocation 
> cycle while holding a write lock. To improve performance, this ticket 
> focuses on moving the user-limit calculation out of the heartbeat 
> allocation flow.
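A minimal sketch of the caching idea, with hypothetical names (this is not the
actual YARN-5889 patch): the hot allocation path reads a precomputed limit
under a read lock, and the limit is recomputed under a write lock only when
queue usage or the active-user set changes.

{code}
// Hypothetical sketch of precomputing the user limit outside the
// per-heartbeat allocation path; names are illustrative, not from the patch.
import java.util.concurrent.locks.ReentrantReadWriteLock;

class CachedUserLimit {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private long cachedUserLimitMb;

  // Hot path, called on every heartbeat allocation: read lock only.
  long getUserLimitMb() {
    lock.readLock().lock();
    try {
      return cachedUserLimitMb;
    } finally {
      lock.readLock().unlock();
    }
  }

  // Cold path, called only when usage or the active-user count changes.
  void recompute(long queueCapacityMb, int activeUsers) {
    lock.writeLock().lock();
    try {
      cachedUserLimitMb = queueCapacityMb / Math.max(activeUsers, 1);
    } finally {
      lock.writeLock().unlock();
    }
  }
}
{code}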






[jira] [Commented] (YARN-6108) Improve AHS webservice to accept NM address as a parameter to get container logs

2017-02-02, Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15851132#comment-15851132
 ] 

Xuan Gong commented on YARN-6108:
-

Fix the javadoc warning.

> Improve AHS webservice to accept NM address as a parameter to get container 
> logs
> 
>
> Key: YARN-6108
> URL: https://issues.apache.org/jira/browse/YARN-6108
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6108.1.patch, YARN-6108.2.patch, YARN-6108.3.patch, 
> YARN-6108.4.patch, YARN-6108.5.patch, YARN-6108.branch-2.v1.patch
>
>
> Currently, to get the container logs of a running application, we need to 
> get the NM web address from AHS, which requires enabling 
> yarn.timeline-service.generic-application-history.save-non-am-container-meta-info
>  for non-AM containers. But most of the time this configuration is disabled 
> for ATS performance reasons, and then it is impossible to get the logs of a 
> non-AM container in a running application.
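As a rough sketch of what this improvement enables, the snippet below passes
the NM address straight to the AHS log endpoint; the endpoint path and the
nm.id query parameter name are assumptions for illustration, not the final
YARN-6108 API.

{code}
// Sketch only: the /logs path and the nm.id query parameter are assumed
// here to illustrate the idea, not quoted from the YARN-6108 patch.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FetchContainerLog {
  public static void main(String[] args) throws Exception {
    String ahs = "http://ahs-host:8188";                          // assumed AHS address
    String container = "container_1486000000000_0001_01_000002";  // example id
    String nmAddress = "nm-host:8042";                            // NM passed explicitly

    URI uri = URI.create(ahs + "/ws/v1/applicationhistory/containers/"
        + container + "/logs?nm.id=" + nmAddress);
    HttpResponse<String> resp = HttpClient.newHttpClient().send(
        HttpRequest.newBuilder(uri).GET().build(),
        HttpResponse.BodyHandlers.ofString());
    System.out.println(resp.body());
  }
}
{code}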






[jira] [Updated] (YARN-6108) Improve AHS webservice to accept NM address as a parameter to get container logs

2017-02-02, Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-6108:

Attachment: YARN-6108.5.patch

> Improve AHS webservice to accept NM address as a parameter to get container 
> logs
> 
>
> Key: YARN-6108
> URL: https://issues.apache.org/jira/browse/YARN-6108
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6108.1.patch, YARN-6108.2.patch, YARN-6108.3.patch, 
> YARN-6108.4.patch, YARN-6108.5.patch, YARN-6108.branch-2.v1.patch
>
>
> Currently, to get the container logs of a running application, we need to 
> get the NM web address from AHS, which requires enabling 
> yarn.timeline-service.generic-application-history.save-non-am-container-meta-info
>  for non-AM containers. But most of the time this configuration is disabled 
> for ATS performance reasons, and then it is impossible to get the logs of a 
> non-AM container in a running application.






[jira] [Commented] (YARN-6050) AMs can't be scheduled on racks or nodes

2017-02-02, Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15851082#comment-15851082
 ] 

Hadoop QA commented on YARN-6050:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 15 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 13m 
26s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 13m 26s{color} 
| {color:red} root generated 1 new + 700 unchanged - 0 fixed = 701 total (was 
700) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  7s{color} | {color:orange} root: The patch generated 6 new + 1657 unchanged 
- 6 fixed = 1663 total (was 1663) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
41s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 42m 17s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}109m  
8s{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
49s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}255m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6050 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12850728/YARN-6050.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 9113763947c3 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 

[jira] [Commented] (YARN-5665) Documentation does not mention package name requirement for yarn.resourcemanager.scheduler.class

2017-02-02, Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15851047#comment-15851047
 ] 

Miklos Szegedi commented on YARN-5665:
--

Thank you, [~yufeigu]. I see "Please use a full class patch"; did you mean "path"?


> Documentation does not mention package name requirement for 
> yarn.resourcemanager.scheduler.class
> 
>
> Key: YARN-5665
> URL: https://issues.apache.org/jira/browse/YARN-5665
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha1
>Reporter: Miklos Szegedi
>Assignee: Yufei Gu
>Priority: Trivial
>  Labels: doc, newbie
> Attachments: YARN-5665.001.patch
>
>
> http://hadoop.apache.org/docs/r3.0.0-alpha1/hadoop-project-dist/hadoop-common/ClusterSetup.html
>  refers to FairScheduler when it documents the setting 
> yarn.resourcemanager.scheduler.class. What it forgets to mention is that the 
> user has to specify the fully qualified class name, such as 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler; 
> otherwise the system throws java.lang.ClassNotFoundException: FairScheduler. 
> It would be nice if the documentation specified the fully qualified class 
> name, so that the user does not need to look it up.
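For reference, the corrected setting as it would appear in yarn-site.xml (the
property name and scheduler class below are the standard ones):

{code}
<!-- yarn-site.xml: the value must be the fully qualified class name,
     or the RM fails with java.lang.ClassNotFoundException: FairScheduler. -->
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>
{code}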






[jira] [Commented] (YARN-5951) Changes to allow CapacityScheduler to use configuration store

2017-02-02, Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850970#comment-15850970
 ] 

Hadoop QA commented on YARN-5951:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 3s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 305 unchanged - 2 fixed = 307 total (was 307) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 42m 39s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestResourceTrackerService |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5951 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12850733/YARN-5951-YARN-5734.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c2de9a4e7c7f 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5734 / 11e44bd |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/14817/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/14817/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14817/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Commented] (YARN-5665) Documentation does not mention package name requirement for yarn.resourcemanager.scheduler.class

2017-02-02, Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850916#comment-15850916
 ] 

Hadoop QA commented on YARN-5665:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5665 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12850737/YARN-5665.001.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 3ee53875fd13 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0914fcc |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14819/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Documentation does not mention package name requirement for 
> yarn.resourcemanager.scheduler.class
> 
>
> Key: YARN-5665
> URL: https://issues.apache.org/jira/browse/YARN-5665
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha1
>Reporter: Miklos Szegedi
>Assignee: Yufei Gu
>Priority: Trivial
>  Labels: doc, newbie
> Attachments: YARN-5665.001.patch
>
>
> http://hadoop.apache.org/docs/r3.0.0-alpha1/hadoop-project-dist/hadoop-common/ClusterSetup.html
>  refers to FairScheduler when it documents the setting 
> yarn.resourcemanager.scheduler.class. What it forgets to mention is that the 
> user has to specify the fully qualified class name, such as 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler; 
> otherwise the system throws java.lang.ClassNotFoundException: FairScheduler. 
> It would be nice if the documentation specified the fully qualified class 
> name, so that the user does not need to look it up.






[jira] [Commented] (YARN-5556) CapacityScheduler: Support deleting queues without requiring a RM restart

2017-02-02, Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850912#comment-15850912
 ] 

Wangda Tan commented on YARN-5556:
--

[~Naganarasimha], thanks for confirming this! 

> CapacityScheduler: Support deleting queues without requiring a RM restart
> -
>
> Key: YARN-5556
> URL: https://issues.apache.org/jira/browse/YARN-5556
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Xuan Gong
>Assignee: Naganarasimha G R
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5556.v1.001.patch, YARN-5556.v1.002.patch, 
> YARN-5556.v1.003.patch, YARN-5556.v1.004.patch, YARN-5556.v2.005.patch, 
> YARN-5556.v2.006.patch
>
>
> Today, we can add or modify queues without restarting the RM, via a CS 
> refresh. But to delete a queue, we have to restart the ResourceManager. We 
> could support deleting queues without requiring an RM restart.
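As a sketch of how a refresh-driven deletion could look from the operator's
side (the queue names below are hypothetical, and the exact workflow is
defined by the patch, not here): stop the queue, refresh, then drop it from
the parent's child list and refresh again with yarn rmadmin -refreshQueues.

{code}
<!-- capacity-scheduler.xml sketch with hypothetical queues root.a and root.b.
     Step 1: stop root.b, then run: yarn rmadmin -refreshQueues -->
<property>
  <name>yarn.scheduler.capacity.root.b.state</name>
  <value>STOPPED</value>
</property>
<!-- Step 2: once root.b is stopped and drained, remove it from the child
     list (previously "a,b") and refresh again. -->
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>a</value>
</property>
{code}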






[jira] [Commented] (YARN-6050) AMs can't be scheduled on racks or nodes

2017-02-02, Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850910#comment-15850910
 ] 

Wangda Tan commented on YARN-6050:
--

bq. This patch doesn't add hard locality though (or change how that works). 
That's still governed by the relaxLocality booleans in the ResourceRequest 
objects that the client passes along.
This is true, but the AM blacklisting feature (added by YARN-2005) may prevent 
the scheduler from allocating a container for a failed AM, since the host the 
AM asks for may be blacklisted by the scheduler.

To me it is better to handle this case together with this patch.

> AMs can't be scheduled on racks or nodes
> 
>
> Key: YARN-6050
> URL: https://issues.apache.org/jira/browse/YARN-6050
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-6050.001.patch, YARN-6050.002.patch, 
> YARN-6050.003.patch, YARN-6050.004.patch, YARN-6050.005.patch, 
> YARN-6050.006.patch
>
>
> Yarn itself supports rack/node aware scheduling for AMs; however, there are 
> currently two problems:
> # To specify hard or soft rack/node requests, you have to specify more than 
> one {{ResourceRequest}}.  For example, if you want to schedule an AM only on 
> "rackA", you have to create two {{ResourceRequest}}s, like this:
> {code}
> ResourceRequest.newInstance(PRIORITY, ANY, CAPABILITY, NUM_CONTAINERS, false);
> ResourceRequest.newInstance(PRIORITY, "rackA", CAPABILITY, NUM_CONTAINERS, 
> true);
> {code}
> The problem is that the Yarn API doesn't actually allow you to specify more 
> than one {{ResourceRequest}} in the {{ApplicationSubmissionContext}}.  The 
> current behavior is to build one either from {{getResource}} or directly from 
> {{getAMContainerResourceRequest}}, depending on whether 
> {{getAMContainerResourceRequest}} is null.  We'll need to add a third 
> method, say {{getAMContainerResourceRequests}}, which takes a list of 
> {{ResourceRequest}}s so that clients can specify multiple resource requests 
> (a sketch follows below).
> # There are some places where things are hardcoded to overwrite what the 
> client specifies.  These are pretty straightforward to fix.
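A sketch of what client code could look like once the list-valued accessor
proposed above exists; the setter name mirrors the proposed
{{getAMContainerResourceRequests}} and is an assumption here, as are the
priority and capability values.

{code}
// Sketch: builds the rack-only AM request pair from the description and hands
// it to the list-valued setter this ticket proposes (name not final here).
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

public class AmPlacement {
  static void requestAmOnRackA(ApplicationSubmissionContext ctx) {
    Priority pri = Priority.newInstance(0);
    Resource cap = Resource.newInstance(1024, 1); // assumed AM capability
    List<ResourceRequest> amReqs = Arrays.asList(
        // relaxLocality=false at ANY forbids falling back off-rack...
        ResourceRequest.newInstance(pri, ResourceRequest.ANY, cap, 1, false),
        // ...while the rack-level request permits rackA itself.
        ResourceRequest.newInstance(pri, "rackA", cap, 1, true));
    ctx.setAMContainerResourceRequests(amReqs); // proposed list-valued API
  }
}
{code}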






[jira] [Commented] (YARN-4090) Make Collections.sort() more efficient in FSParentQueue.java

2017-02-02, Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850911#comment-15850911
 ] 

Hadoop QA commented on YARN-4090:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 19s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 19 unchanged - 0 fixed = 20 total (was 19) {color} 
|
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 18s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-4090 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845162/YARN-4090.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8b261bf98c0f 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0914fcc |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/14818/artifact/patchprocess/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-YARN-Build/14818/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| javac | 

[jira] [Commented] (YARN-6125) The application attempt's diagnostic message should have a maximum size

2017-02-02, Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850903#comment-15850903
 ] 

Hadoop QA commented on YARN-6125:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
26s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m 36s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority |
|   | hadoop.yarn.server.resourcemanager.TestApplicationMasterService |
|   | hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter |
|   | hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA |
|   | hadoop.yarn.server.resourcemanager.TestApplicationMasterLauncher |
|   | hadoop.yarn.server.resourcemanager.rmapp.TestApplicationLifetimeMonitor |
|   | hadoop.yarn.server.resourcemanager.TestWorkPreservingRMRestart |
|   | hadoop.yarn.server.resourcemanager.TestResourceTrackerService |
|   | hadoop.yarn.server.resourcemanager.TestContainerResourceUsage |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler |
|   | hadoop.yarn.server.resourcemanager.TestDecommissioningNodesWatcher |
|   | 

[jira] [Commented] (YARN-6108) Improve AHS webservice to accept NM address as a parameter to get container logs

2017-02-02, Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850900#comment-15850900
 ] 

Hadoop QA commented on YARN-6108:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 44s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 35 unchanged - 0 fixed = 36 total (was 35) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
6s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common 
generated 3 new + 4579 unchanged - 0 fixed = 4582 total (was 4579) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
37s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
33s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m  5s{color} 
| {color:red} hadoop-yarn-server-applicationhistoryservice in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.timeline.webapp.TestTimelineWebServices |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6108 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12850723/YARN-6108.4.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8d58739576b5 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 

[jira] [Updated] (YARN-5665) Documentation does not mention package name requirement for yarn.resourcemanager.scheduler.class

2017-02-02, Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5665:
---
Attachment: YARN-5665.001.patch

> Documentation does not mention package name requirement for 
> yarn.resourcemanager.scheduler.class
> 
>
> Key: YARN-5665
> URL: https://issues.apache.org/jira/browse/YARN-5665
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha1
>Reporter: Miklos Szegedi
>Assignee: Yufei Gu
>Priority: Trivial
>  Labels: doc, newbie
> Attachments: YARN-5665.001.patch
>
>
> http://hadoop.apache.org/docs/r3.0.0-alpha1/hadoop-project-dist/hadoop-common/ClusterSetup.html
>  refers to FairScheduler when it documents the setting 
> yarn.resourcemanager.scheduler.class. What it forgets to mention is that the 
> user has to specify the fully qualified class name, such as 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler; 
> otherwise the system throws java.lang.ClassNotFoundException: FairScheduler. 
> It would be nice if the documentation specified the fully qualified class 
> name, so that the user does not need to look it up.






[jira] [Assigned] (YARN-5665) Documentation does not mention package name requirement for yarn.resourcemanager.scheduler.class

2017-02-02, Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu reassigned YARN-5665:
--

Assignee: Yufei Gu

> Documentation does not mention package name requirement for 
> yarn.resourcemanager.scheduler.class
> 
>
> Key: YARN-5665
> URL: https://issues.apache.org/jira/browse/YARN-5665
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha1
>Reporter: Miklos Szegedi
>Assignee: Yufei Gu
>Priority: Trivial
>  Labels: doc, newbie
> Attachments: YARN-5665.001.patch
>
>
> http://hadoop.apache.org/docs/r3.0.0-alpha1/hadoop-project-dist/hadoop-common/ClusterSetup.html
>  refers to FairScheduler when it documents the setting 
> yarn.resourcemanager.scheduler.class. What it forgets to mention is that the 
> user has to specify the fully qualified class name, such as 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler; 
> otherwise the system throws java.lang.ClassNotFoundException: FairScheduler. 
> It would be nice if the documentation specified the fully qualified class 
> name, so that the user does not need to look it up.






[jira] [Commented] (YARN-4090) Make Collections.sort() more efficient in FSParentQueue.java

2017-02-02, Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850854#comment-15850854
 ] 

Yufei Gu commented on YARN-4090:


Submitted the patch to let Hadoop QA kick in.
Hi [~zsl2007], are you still working on this? 

> Make Collections.sort() more efficient in FSParentQueue.java
> 
>
> Key: YARN-4090
> URL: https://issues.apache.org/jira/browse/YARN-4090
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Xianyin Xin
>Assignee: zhangshilong
> Attachments: sampling1.jpg, sampling2.jpg, YARN-4090.001.patch, 
> YARN-4090.002.patch, YARN-4090.003.patch, YARN-4090.004.patch, 
> YARN-4090-preview.patch, YARN-4090-TestResult.pdf
>
>
> Collections.sort() consumes too much time in a scheduling round.






[jira] [Updated] (YARN-5951) Changes to allow CapacityScheduler to use configuration store

2017-02-02, Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-5951:

Attachment: YARN-5951-YARN-5734.004.patch

> Changes to allow CapacityScheduler to use configuration store
> -
>
> Key: YARN-5951
> URL: https://issues.apache.org/jira/browse/YARN-5951
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-5951-YARN-5734.001.patch, 
> YARN-5951-YARN-5734.002.patch, YARN-5951-YARN-5734.003.patch, 
> YARN-5951-YARN-5734.004.patch
>
>
> EDIT: changing this ticket. Found that the CapacityStoreConfigurationProvider 
> is not necessary, since we can just grab a Configuration object from 
> StoreConfigurationProvider with type "SCHEDULER" and create a 
> CapacitySchedulerConfiguration from it.
> This ticket will track changes needed for integrating other components to be 
> used by the capacity scheduler.
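Roughly, the simplification in the description reads like the sketch below;
the {{StoreConfigurationProvider}} interface and its method are illustrative
stand-ins paraphrased from the description (YARN-5734 branch code), and only
{{CapacitySchedulerConfiguration}} is the stock class.

{code}
// Hypothetical sketch paraphrasing the description; the provider interface is
// a stand-in for the YARN-5734 branch class, not its real signature.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration;

interface StoreConfigurationProvider {          // stand-in for the branch class
  Configuration getConfiguration(String type);  // e.g. type = "SCHEDULER"
}

class SchedulerConfLoader {
  static CapacitySchedulerConfiguration load(StoreConfigurationProvider p) {
    // Grab the scheduler-typed Configuration and wrap it directly, so no
    // separate CapacityStoreConfigurationProvider is needed.
    return new CapacitySchedulerConfiguration(p.getConfiguration("SCHEDULER"));
  }
}
{code}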






[jira] [Commented] (YARN-5951) Changes to allow CapacityScheduler to use configuration store

2017-02-02, Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850836#comment-15850836
 ] 

Jonathan Hung commented on YARN-5951:
-

The TestRMRestart test failure seems related to YARN-5548.
Addressed the TestCapacityScheduler failure in the 004 patch. Basically, we 
need to init the CS before reinitializing it, to make sure the CS conf 
provider is initialized.

Addressed the checkstyle issues. Two of them flag unused imports, but those 
imports are used in javadoc links, so the imports will be kept.

Addressed the license issues.

> Changes to allow CapacityScheduler to use configuration store
> -
>
> Key: YARN-5951
> URL: https://issues.apache.org/jira/browse/YARN-5951
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-5951-YARN-5734.001.patch, 
> YARN-5951-YARN-5734.002.patch, YARN-5951-YARN-5734.003.patch, 
> YARN-5951-YARN-5734.004.patch
>
>
> EDIT: changing this ticket. Found that the CapacityStoreConfigurationProvider 
> is not necessary, since we can just grab a Configuration object from 
> StoreConfigurationProvider with type "SCHEDULER" and create a 
> CapacitySchedulerConfiguration from it.
> This ticket will track changes needed for integrating other components to be 
> used by the capacity scheduler.






[jira] [Assigned] (YARN-4691) Cache resource usage at FSLeafQueue level

2017-02-02, Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu reassigned YARN-4691:
--

Assignee: Yufei Gu

> Cache resource usage at FSLeafQueue level
> -
>
> Key: YARN-4691
> URL: https://issues.apache.org/jira/browse/YARN-4691
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Yufei Gu
>
> As part of the fair share assignment, fair scheduler needs to sort queues to 
> decide which queue is furthest away from its fair share. During the sorting, 
> the comparator needs to get the Resource usage of each queue.
> The parent queue will aggregate the resource usage from leaf queues. The leaf 
> queue will aggregate the resource usage from all apps in the queue.
> {noformat}
> FSLeafQueue.java
>   @Override
>   public Resource getResourceUsage() {
> Resource usage = Resources.createResource(0);
> readLock.lock();
> try {
>   for (FSAppAttempt app : runnableApps) {
> Resources.addTo(usage, app.getResourceUsage());
>   }
>   for (FSAppAttempt app : nonRunnableApps) {
> Resources.addTo(usage, app.getResourceUsage());
>   }
> } finally {
>   readLock.unlock();
> }
> return usage;
>   }
> {noformat}
> Each time fair scheduler tries to assign a container, it needs to sort all 
> queues. Thus the number of Resources.addTo operations will be 
> (number_of_queues) * lg(number_of_queues) *  number_of_apps_per_queue, or 
> number_of_apps_on_the_cluster * lg(number_of_queues).
> One way to solve this is to cache the resource usage at FSLeafQueue level. 
> Each time fair scheduler updates FSAppAttempt's resource usage, it will 
> update FSLeafQueue resource usage. This will greatly reduce the overall 
> number of Resources.addTo operations.
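A minimal sketch of the caching idea described above (illustrative only, not
the actual patch): keep a running total on the leaf queue and adjust it
whenever an app's usage changes, so {{getResourceUsage()}} becomes O(1).

{code}
// Illustrative sketch of the YARN-4691 caching idea, not the actual patch.
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.resource.Resources;

class CachedLeafQueueUsage {
  // Running total, adjusted incrementally instead of re-aggregated per sort.
  private final Resource cachedUsage = Resources.createResource(0);

  // Called whenever an app in this queue gains resources.
  synchronized void appUsageIncreased(Resource delta) {
    Resources.addTo(cachedUsage, delta);
  }

  // Called whenever an app in this queue releases resources.
  synchronized void appUsageDecreased(Resource delta) {
    Resources.subtractFrom(cachedUsage, delta);
  }

  // O(1): no iteration over runnable/non-runnable apps on every comparison.
  synchronized Resource getResourceUsage() {
    return Resources.clone(cachedUsage);
  }
}
{code}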






[jira] [Updated] (YARN-6050) AMs can't be scheduled on racks or nodes

2017-02-02, Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-6050:

Attachment: YARN-6050.006.patch

The 006 patch is rebased on the latest trunk.  I had to make some very minor 
changes, but nothing worth noting.

[~leftnoteasy], please take a look.

> AMs can't be scheduled on racks or nodes
> 
>
> Key: YARN-6050
> URL: https://issues.apache.org/jira/browse/YARN-6050
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-6050.001.patch, YARN-6050.002.patch, 
> YARN-6050.003.patch, YARN-6050.004.patch, YARN-6050.005.patch, 
> YARN-6050.006.patch
>
>
> Yarn itself supports rack/node aware scheduling for AMs; however, there are 
> currently two problems:
> # To specify hard or soft rack/node requests, you have to specify more than 
> one {{ResourceRequest}}.  For example, if you want to schedule an AM only on 
> "rackA", you have to create two {{ResourceRequest}}s, like this:
> {code}
> ResourceRequest.newInstance(PRIORITY, ANY, CAPABILITY, NUM_CONTAINERS, false);
> ResourceRequest.newInstance(PRIORITY, "rackA", CAPABILITY, NUM_CONTAINERS, 
> true);
> {code}
> The problem is that the Yarn API doesn't actually allow you to specify more 
> than one {{ResourceRequest}} in the {{ApplicationSubmissionContext}}.  The 
> current behavior is to build one either from {{getResource}} or directly from 
> {{getAMContainerResourceRequest}}, depending on whether 
> {{getAMContainerResourceRequest}} is null.  We'll need to add a third 
> method, say {{getAMContainerResourceRequests}}, which takes a list of 
> {{ResourceRequest}}s so that clients can specify multiple resource requests.
> # There are some places where things are hardcoded to overwrite what the 
> client specifies.  These are pretty straightforward to fix.






[jira] [Commented] (YARN-4675) Reorganize TimeClientImpl into TimeClientV1Impl and TimeClientV2Impl

2017-02-02, Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850794#comment-15850794
 ] 

Varun Saxena commented on YARN-4675:


The findbugs warning and the TestTimelineClientV2Impl test failure are related 
to the patch. We need to set the timeline service version to 2.0 explicitly in 
the test.
Many of the checkstyle issues can be fixed as well. Can you fix them?
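Presumably the test fix amounts to something like this sketch (the exact
setup is in the patch): pin the timeline service version to 2.0 in the test
configuration.

{code}
// Sketch: pin the timeline service to v2 explicitly so the v2 client code
// path is exercised; TIMELINE_SERVICE_VERSION is the standard float config.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

class TimelineV2TestConf {
  static Configuration create() {
    Configuration conf = new YarnConfiguration();
    conf.setBoolean(YarnConfiguration.TIMELINE_SERVICE_ENABLED, true);
    conf.setFloat(YarnConfiguration.TIMELINE_SERVICE_VERSION, 2.0f);
    return conf;
  }
}
{code}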

> Reorganize TimeClientImpl into TimeClientV1Impl and TimeClientV2Impl
> 
>
> Key: YARN-4675
> URL: https://issues.apache.org/jira/browse/YARN-4675
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>  Labels: YARN-5355, yarn-5355-merge-blocker
> Attachments: YARN-4675.v2.002.patch, YARN-4675.v2.003.patch, 
> YARN-4675.v2.004.patch, YARN-4675.v2.005.patch, YARN-4675.v2.006.patch, 
> YARN-4675-YARN-2928.v1.001.patch
>
>
> We need to reorganize TimeClientImpl into TimeClientV1Impl, TimeClientV2Impl, 
> and, if required, a base class, so that it is clear which part of the code 
> belongs to which version, making the code easier to maintain.






[jira] [Reopened] (YARN-5271) ATS client doesn't work with Jersey 2 on the classpath

2017-02-02, Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu reopened YARN-5271:
-

> ATS client doesn't work with Jersey 2 on the classpath
> --
>
> Key: YARN-5271
> URL: https://issues.apache.org/jira/browse/YARN-5271
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client, timelineserver
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Weiwei Yang
>  Labels: oct16-medium
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: YARN-5271.01.patch, YARN-5271.02.patch, 
> YARN-5271.branch-2.01.patch, YARN-5271-branch-2.8.01.patch
>
>
> see SPARK-15343: once Jersey 2 is on the classpath, you can't instantiate a 
> timeline client, *even if the server is an ATS1.5 server and publishing is 
> via the FS*






[jira] [Commented] (YARN-5271) ATS client doesn't work with Jersey 2 on the classpath

2017-02-02 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15850784#comment-15850784
 ] 

Li Lu commented on YARN-5271:
-

Quick note: are we catching an {{Error}} here and disabling the timeline service 
based on it? Catching {{Error}}s seems inadequate per the Java API doc:
bq. An Error is a subclass of Throwable that indicates serious problems that a 
reasonable application should not try to catch. Most such errors are abnormal 
conditions. 
(https://docs.oracle.com/javase/7/docs/api/java/lang/Error.html)
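
The pattern under discussion, sketched; the catch site and fallback here are 
illustrative assumptions based on this thread, not the actual code:

{code}
// Sketch of the anti-pattern: swallowing an Error (e.g. a NoClassDefFoundError
// from the Jersey 1/2 clash) in order to disable the timeline client. Per the
// javadoc quoted above, such Errors should generally not be caught.
TimelineClient client = null;
try {
  client = TimelineClient.createTimelineClient();
} catch (NoClassDefFoundError e) {
  // Falling back silently here can hide a serious classpath problem.
}
{code}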

Reopening this JIRA for more investigation. 

> ATS client doesn't work with Jersey 2 on the classpath
> --
>
> Key: YARN-5271
> URL: https://issues.apache.org/jira/browse/YARN-5271
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client, timelineserver
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Weiwei Yang
>  Labels: oct16-medium
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: YARN-5271.01.patch, YARN-5271.02.patch, 
> YARN-5271.branch-2.01.patch, YARN-5271-branch-2.8.01.patch
>
>
> see SPARK-15343: once Jersey 2 is on the classpath, you can't instantiate a 
> timeline client, *even if the server is an ATS1.5 server and publishing is 
> via the FS*






[jira] [Updated] (YARN-6108) Improve AHS webservice to accept NM address as a parameter to get container logs

2017-02-02 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-6108:

Attachment: YARN-6108.4.patch

Fixed the checkstyle issue.

> Improve AHS webservice to accept NM address as a parameter to get container 
> logs
> 
>
> Key: YARN-6108
> URL: https://issues.apache.org/jira/browse/YARN-6108
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6108.1.patch, YARN-6108.2.patch, YARN-6108.3.patch, 
> YARN-6108.4.patch, YARN-6108.branch-2.v1.patch
>
>
> Currently, if we want to get container logs for a running application, we need 
> to get the NM web address from AHS, which requires enabling 
> yarn.timeline-service.generic-application-history.save-non-am-container-meta-info
>  for non-AM containers. But most of the time we disable this configuration 
> for ATS performance reasons. In that case, it is impossible for 
> us to get the logs for non-AM containers in a running application.
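
For illustration, the configuration in question can be set programmatically; the 
property name below is taken verbatim from the description above:

{code}
// When this flag is false (the common setting, for ATS performance), AHS has
// no NM web address for non-AM containers, hence this improvement.
Configuration conf = new YarnConfiguration();
conf.setBoolean(
    "yarn.timeline-service.generic-application-history"
        + ".save-non-am-container-meta-info",
    false);
{code}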






[jira] [Updated] (YARN-6125) The application attempt's diagnostic message should have a maximum size

2017-02-02 Thread Andras Piros (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Piros updated YARN-6125:
---
Attachment: YARN-6125.003.patch

> The application attempt's diagnostic message should have a maximum size
> ---
>
> Key: YARN-6125
> URL: https://issues.apache.org/jira/browse/YARN-6125
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.7.0
>Reporter: Daniel Templeton
>Assignee: Andras Piros
>Priority: Critical
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6125.000.patch, YARN-6125.001.patch, 
> YARN-6125.002.patch, YARN-6125.003.patch
>
>
> We've found through experience that the diagnostic message can grow 
> unbounded.  I've seen attempts that have diagnostic messages over 1MB.  Since 
> the message is stored in the state store, it's a bad idea to allow the 
> message to grow unbounded.  Instead, there should be a property that sets a 
> maximum size on the message.
> I suspect that some of the ZK state store issues we've seen in the past were 
> due to the size of the diagnostic messages and not to the size of the 
> classpath, as is the current prevailing opinion.
> An open question is how best to prune the message once it grows too large.  
> Should we
> # truncate the tail,
> # truncate the head,
> # truncate the middle,
> # add another property to make the behavior selectable, or
> # none of the above?






[jira] [Updated] (YARN-5946) Create YarnConfigurationStore interface and InMemoryConfigurationStore class

2017-02-02 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-5946:

Summary: Create YarnConfigurationStore interface and 
InMemoryConfigurationStore class  (was: Create YarnConfigurationStore class)

> Create YarnConfigurationStore interface and InMemoryConfigurationStore class
> 
>
> Key: YARN-5946
> URL: https://issues.apache.org/jira/browse/YARN-5946
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-5946.001.patch, YARN-5946-YARN-5734.002.patch
>
>
> This class provides the interface to persist YARN configurations in a backing 
> store.
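
For the shape being discussed, a minimal sketch of such an interface; the method 
names are illustrative assumptions, not necessarily the patch's API:

{code}
import java.io.IOException;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;

// Hedged sketch: persist configuration updates and retrieve the current
// scheduler configuration from a backing store. Method names are assumed.
public interface YarnConfigurationStore {
  void initialize(Configuration conf) throws IOException;

  // Persist a batch of key/value updates to the backing store.
  void logMutation(Map<String, String> updates) throws IOException;

  // Retrieve the currently persisted scheduler configuration.
  Configuration retrieve() throws IOException;
}
{code}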






[jira] [Comment Edited] (YARN-6125) The application attempt's diagnostic message should have a maximum size

2017-02-02 Thread Andras Piros (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15850732#comment-15850732
 ] 

Andras Piros edited comment on YARN-6125 at 2/2/17 11:35 PM:
-

* {{ExpectedException.none()}} as a factory method, and the general usage of 
{{ExpectedException}}, are described 
[*here*|https://github.com/junit-team/junit4/wiki/exception-testing#expectedexception-rule].
 Basically, if you don't expect any {{Exception}}s to be thrown, you don't call 
{{expect()}}, {{expectMessage()}}, or {{expectCause()}}. Yes, you can always 
test exceptional behavior with the [*try-catch 
idiom*|https://github.com/junit-team/junit4/wiki/exception-testing#trycatch-idiom]
 instead of {{ExpectedException}} (see the sketch after this list)
* got rid of the {{Lists.newLinkedList()}} call
* got rid of that {{final}} variable
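
A minimal JUnit 4 sketch of both styles; {{BoundedBuffer}} is a hypothetical 
class under test:

{code}
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExpectedException;

public class BoundedBufferTest {
  @Rule
  public ExpectedException expected = ExpectedException.none();

  @Test
  public void rejectsNegativeLimit() {
    // Declare the expectations before the exercising call.
    expected.expect(IllegalArgumentException.class);
    expected.expectMessage("limit");
    new BoundedBuffer(-1);
  }

  @Test
  public void acceptsPositiveLimit() {
    // No expect()/expectMessage()/expectCause() calls:
    // the rule then expects no exception at all.
    new BoundedBuffer(64);
  }
}
{code}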

Posting the patch addressing those issues but not yet the truncate behavior.

Talked to [~templedf] and [~yufeigu] offline about the truncate characteristics; 
here is one possible solution (a rough code sketch follows the example 
scenarios below):
# when a message is being appended and it doesn't fit into the buffer, trim the 
message body to the buffer size (minus header prefixes), preserving its head, and 
add a prefix header telling users it's been partially truncated
# when a message is being appended and it fits into the buffer:
## past messages are deleted in FIFO order
## for all the ones deleted we put a common prefix header stating there have 
been messages left out
## for the last one truncated we trim the message body and add a prefix header 
stating part of its tail has been left out

Some example scenarios, considering four consecutive {{append()}} calls with 
a different message each:
# nothing truncated:
{noformat}
message1 this is a very long message that fits into the buffer
message2 this is a lot shorter than the previous one
message3 this is even shorter
message4 the shortest one
{noformat}
# only the very last message has been partially truncated:
{noformat}
message1 this is a very long message that fits into the buffer
message2 this is a lot shorter than the previous one
message3 this is even shorter
message4 this has been one that did not fit totally 
into the buffer...
{noformat}
# only the very first message has been partially truncated:
{noformat}
message1 this has been one that did not fit totally 
into the buffer...
message2 this is a very long message that fits into the buffer
message3 this is a lot shorter than the previous one
message4 the shortest one
{noformat}
# the first several ones have been deleted, the previous one has been truncated 
partially:
{noformat}

message3 this has been one that did not fit totally 
into the buffer...
message4 the shortest one
{noformat}
# all the previous ones have been deleted, plus the last one has been truncated:
{noformat}

message4 this has been one that did not fit totally 
into the buffer...
{noformat}
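
A rough, hedged sketch of the append semantics proposed above; the class name, 
header text, and size accounting are illustrative only:

{code}
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch, not the patch: evict the oldest messages in FIFO order
// when a new message fits; otherwise keep the new message's head and flag it.
class BoundedDiagnostics {
  private final int limit;
  private final Deque<String> messages = new ArrayDeque<>();
  private int size = 0;

  BoundedDiagnostics(int limit) { this.limit = limit; }

  void append(String msg) {
    // Delete past messages in FIFO order until the new message fits.
    while (!messages.isEmpty() && size + msg.length() > limit) {
      size -= messages.removeFirst().length();
    }
    if (msg.length() > limit) {
      // Even an empty buffer cannot hold it: trim the body to the buffer
      // size minus the header prefix, preserving the message's head.
      String header = "[truncated] ";
      msg = header + msg.substring(0, limit - header.length());
    }
    messages.addLast(msg);
    size += msg.length();
  }
}
{code}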

[~jlowe], [~kasha], [~vvasudev] what is your opinion about the proposal?


was (Author: andras.piros):
* {{ExpectedException.none()}} as factory method, and {{ExpectedException}} 
general usage description can be found 
[*here*|https://github.com/junit-team/junit4/wiki/exception-testing#expectedexception-rule].
 Basically if you don't expect any {{Exception}} s to be thrown, you don't call 
{{expect()}}, {{expectMessage()}} or {{expectCause()}}. Yes, you can always 
test exceptional behavior w/ [*try-catch 
idiom*|https://github.com/junit-team/junit4/wiki/exception-testing#trycatch-idiom]
 instead of {{ExpectedException}}
* got rid of {{Lists.newLinkedList()}} call
* got rid of that {{final}} variable

Posting the patch addressing those issues but not yet the truncate behavior.

Talked to [~templedf] and [~yufeigu] offline about truncate characteristics, 
here is one possible solution:
# when a message is being appended, and it doesn't fit into the buffer, trim 
message body to buffer size (minus header prefixes) preserving its head and add 
a prefix header telling users it's been partially truncated
# when a message is being appended, and it fits into the buffer:
## past messages are truncated in FIFO order
## for all the ones truncated fully we put a common prefix header stating there 
have been messages left out
## for the last one truncated partially we trim the message body and add a 
prefix header stating it's been partially truncated

Some example scenarios:
# nothing truncated:
{noformat}
message1 this is a very long message that fits into the buffer
message2 this is a lot shorter than the previous one
message3 the shortest one
{noformat}
# only the very last message has been partially truncated:
{noformat}
message1 this is a very long message that fits into the buffer
message2 this is a lot shorter than the previous one
message3 the shortest one
message4 this has been one that did not fit totally 
into the buffer...
{noformat}
# only the very first message has been partially truncated:
{noformat}
message1 this has been one 

[jira] [Commented] (YARN-6125) The application attempt's diagnostic message should have a maximum size

2017-02-02 Thread Andras Piros (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15850732#comment-15850732
 ] 

Andras Piros commented on YARN-6125:


* {{ExpectedException.none()}} as a factory method, and the general usage of 
{{ExpectedException}}, are described 
[*here*|https://github.com/junit-team/junit4/wiki/exception-testing#expectedexception-rule].
 Basically, if you don't expect any {{Exception}}s to be thrown, you don't call 
{{expect()}}, {{expectMessage()}}, or {{expectCause()}}. Yes, you can always 
test exceptional behavior with the [*try-catch 
idiom*|https://github.com/junit-team/junit4/wiki/exception-testing#trycatch-idiom]
 instead of {{ExpectedException}}
* got rid of the {{Lists.newLinkedList()}} call
* got rid of that {{final}} variable

Posting the patch addressing those issues but not yet the truncate behavior.

Talked to [~templedf] and [~yufeigu] offline about the truncate characteristics; 
here is one possible solution:
# when a message is being appended and it doesn't fit into the buffer, trim the 
message body to the buffer size (minus header prefixes), preserving its head, and 
add a prefix header telling users it's been partially truncated
# when a message is being appended and it fits into the buffer:
## past messages are truncated in FIFO order
## for all the ones truncated fully we put a common prefix header stating there 
have been messages left out
## for the last one truncated partially we trim the message body and add a 
prefix header stating it's been partially truncated

Some example scenarios:
# nothing truncated:
{noformat}
message1 this is a very long message that fits into the buffer
message2 this is a lot shorter than the previous one
message3 the shortest one
{noformat}
# only the very last message has been partially truncated:
{noformat}
message1 this is a very long message that fits into the buffer
message2 this is a lot shorter than the previous one
message3 the shortest one
message4 this has been one that did not fit totally 
into the buffer...
{noformat}
# only the very first message has been partially truncated:
{noformat}
message1 this has been one that did not fit totally 
into the buffer...
message2 this is a very long message that fits into the buffer
message3 this is a lot shorter than the previous one
message4 the shortest one
{noformat}
# the first several ones have been deleted, the previous one has been truncated 
partially:
{noformat}

message3 this has been one that did not fit totally 
into the buffer...
message4 the shortest one
{noformat}
# all the previous ones have been deleted, plus the last one has been truncated:
{noformat}

message4 this has been one that did not fit totally 
into the buffer...
{noformat}

[~jlowe], [~kasha], [~vvasudev] what is your opinion about the proposal?

> The application attempt's diagnostic message should have a maximum size
> ---
>
> Key: YARN-6125
> URL: https://issues.apache.org/jira/browse/YARN-6125
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.7.0
>Reporter: Daniel Templeton
>Assignee: Andras Piros
>Priority: Critical
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6125.000.patch, YARN-6125.001.patch, 
> YARN-6125.002.patch
>
>
> We've found through experience that the diagnostic message can grow 
> unbounded.  I've seen attempts that have diagnostic messages over 1MB.  Since 
> the message is stored in the state store, it's a bad idea to allow the 
> message to grow unbounded.  Instead, there should be a property that sets a 
> maximum size on the message.
> I suspect that some of the ZK state store issues we've seen in the past were 
> due to the size of the diagnostic messages and not to the size of the 
> classpath, as is the current prevailing opinion.
> An open question is how best to prune the message once it grows too large.  
> Should we
> # truncate the tail,
> # truncate the head,
> # truncate the middle,
> # add another property to make the behavior selectable, or
> # none of the above?






[jira] [Commented] (YARN-6108) Improve AHS webservice to accept NM address as a parameter to get container logs

2017-02-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15850691#comment-15850691
 ] 

Hadoop QA commented on YARN-6108:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 56s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 3 new + 35 unchanged - 0 fixed = 38 total (was 35) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
2s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
21s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m 27s{color} 
| {color:red} hadoop-yarn-server-applicationhistoryservice in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.timeline.webapp.TestTimelineWebServices |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6108 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12850687/YARN-6108.3.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 41ea75748c2f 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0914fcc |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 

[jira] [Commented] (YARN-4675) Reorganize TimeClientImpl into TimeClientV1Impl and TimeClientV2Impl

2017-02-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15850686#comment-15850686
 ] 

Hadoop QA commented on YARN-4675:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
58s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
20s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 48s{color} | {color:orange} root: The patch generated 20 new + 744 unchanged 
- 10 fixed = 764 total (was 754) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  3m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
16s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common 
generated 0 new + 4575 unchanged - 4 fixed = 4575 total (was 4579) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} hadoop-yarn-server-tests in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-yarn-applications-distributedshell in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 37s{color} 
| {color:red} hadoop-yarn-common in the patch failed. {color} |
| 

[jira] [Created] (YARN-6139) There are no docs for file localization

2017-02-02 Thread Daniel Templeton (JIRA)
Daniel Templeton created YARN-6139:
--

 Summary: There are no docs for file localization
 Key: YARN-6139
 URL: https://issues.apache.org/jira/browse/YARN-6139
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.8.0
Reporter: Daniel Templeton


File localization is a major part of YARN and how it runs applications.  The 
localization process is completely undocumented aside from the 
{{o.a.h.filecache.DistributedCache}} (MR1) API docs.






[jira] [Commented] (YARN-6125) The application attempt's diagnostic message should have a maximum size

2017-02-02 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15850577#comment-15850577
 ] 

Daniel Templeton commented on YARN-6125:


bq. the pom.xml dependency reordering was necessary in order to get 
ExpectedException working.

What is the net effect of declaring that no exception is expected?  Is it any 
different from not declaring anything?

bq. Lists.newLinkedList() comes from Guava, which I like.

The issue isn't Guava.  The issue is that it was created to replace {{new 
LinkedList()}} (in this case), but as of Java 7 there's the diamond 
operator.  If you read the javadoc for {{newLinkedList()}}, it says not to use 
it with Java 7 or later.
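
For reference, the two equivalent forms:

{code}
// Pre-Java 7: Guava's factory method infers the type parameter.
List<String> a = Lists.newLinkedList();
// Java 7+: the diamond operator makes the factory method unnecessary.
List<String> b = new LinkedList<>();
{code}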

bq. I like final stuff (I know I'm weird), please specify where I should make 
variables / fields non-final

I was thinking specifically of {code}final int inputLength = 
csq.length();{code}  {{inputLength}} is only ever used as a parameter, so 
declaring it final is kinda overkill.

While you're working on the patch, another architectural concern has occurred 
to me.  Assume I have a 64k limit.  I append a message that's 65,530 bytes 
long.  I then append a message that's 10 bytes long.  The current 
implementation will drop the first message completely.  Seems like it might be 
better to only drop the first line of the first message,  just enough to trim 
it down.  A partial message may be better than no message.  Granted, it's 
probably not hugely useful to only have the tail of a message, but from a 
user's perspective it's less unexpected to get a partial message than to drop a 
large message entirely.  One could make the same case for just cutting it down 
to the exact length instead of minding line boundaries...  [~jlowe], [~kasha], 
[~vvasudev], [~yufeigu], any thoughts?

Oh, one more thing...  Can we add a header that shows the content was trimmed?  
If we're not just clipping to the buffer size, it would be useful to have a way 
to notify the user that the message is not complete.  In most cases it will be, 
so we want to be clear when it isn't.

> The application attempt's diagnostic message should have a maximum size
> ---
>
> Key: YARN-6125
> URL: https://issues.apache.org/jira/browse/YARN-6125
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.7.0
>Reporter: Daniel Templeton
>Assignee: Andras Piros
>Priority: Critical
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6125.000.patch, YARN-6125.001.patch, 
> YARN-6125.002.patch
>
>
> We've found through experience that the diagnostic message can grow 
> unbounded.  I've seen attempts that have diagnostic messages over 1MB.  Since 
> the message is stored in the state store, it's a bad idea to allow the 
> message to grow unbounded.  Instead, there should be a property that sets a 
> maximum size on the message.
> I suspect that some of the ZK state store issues we've seen in the past were 
> due to the size of the diagnostic messages and not to the size of the 
> classpath, as is the current prevailing opinion.
> An open question is how best to prune the message once it grows too large.  
> Should we
> # truncate the tail,
> # truncate the head,
> # truncate the middle,
> # add another property to make the behavior selectable, or
> # none of the above?






[jira] [Updated] (YARN-6113) re-direct NM Web Service to get container logs for finished applications

2017-02-02 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-6113:

Attachment: YARN-6113.trunk.v2.patch

> re-direct NM Web Service to get container logs for finished applications
> 
>
> Key: YARN-6113
> URL: https://issues.apache.org/jira/browse/YARN-6113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6113.branch-2.v1.patch, 
> YARN-6113.branch-2.v2.patch, YARN-6113.trunk.v2.patch
>
>
> In the NM web UI, when we try to get container logs for a finished 
> application, it redirects to the log server based on the configuration 
> yarn.log.server.url. We should do a similar thing for the NM web service.






[jira] [Updated] (YARN-6113) re-direct NM Web Service to get container logs for finished applications

2017-02-02 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-6113:

Attachment: YARN-6113.branch-2.v2.patch

> re-direct NM Web Service to get container logs for finished applications
> 
>
> Key: YARN-6113
> URL: https://issues.apache.org/jira/browse/YARN-6113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6113.branch-2.v1.patch, YARN-6113.branch-2.v2.patch
>
>
> In the NM web UI, when we try to get container logs for a finished 
> application, it redirects to the log server based on the configuration 
> yarn.log.server.url. We should do a similar thing for the NM web service.






[jira] [Commented] (YARN-5951) Changes to allow CapacityScheduler to use configuration store

2017-02-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15850551#comment-15850551
 ] 

Hadoop QA commented on YARN-5951:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
50s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 6 new + 117 unchanged - 1 fixed = 123 total (was 118) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 36s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
54s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5951 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12850676/YARN-5951-YARN-5734.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux fea4a8e4501e 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5734 / 11e44bd |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/14812/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/14812/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14812/testReport/ |
| asflicense | 

[jira] [Commented] (YARN-6108) Improve AHS webservice to accept NM address as a parameter to get container logs

2017-02-02 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15850504#comment-15850504
 ] 

Xuan Gong commented on YARN-6108:
-

Thanks for the review, [~djp].

Uploaded a new patch to address all the comments and fix the checkstyle issues.

> Improve AHS webservice to accept NM address as a parameter to get container 
> logs
> 
>
> Key: YARN-6108
> URL: https://issues.apache.org/jira/browse/YARN-6108
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6108.1.patch, YARN-6108.2.patch, YARN-6108.3.patch, 
> YARN-6108.branch-2.v1.patch
>
>
> Currently, if we want to get container logs for a running application, we need 
> to get the NM web address from AHS, which requires enabling 
> yarn.timeline-service.generic-application-history.save-non-am-container-meta-info
>  for non-AM containers. But most of the time we disable this configuration 
> for ATS performance reasons. In that case, it is impossible for 
> us to get the logs for non-AM containers in a running application.






[jira] [Updated] (YARN-6108) Improve AHS webservice to accept NM address as a parameter to get container logs

2017-02-02 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-6108:

Attachment: YARN-6108.3.patch

> Improve AHS webservice to accept NM address as a parameter to get container 
> logs
> 
>
> Key: YARN-6108
> URL: https://issues.apache.org/jira/browse/YARN-6108
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6108.1.patch, YARN-6108.2.patch, YARN-6108.3.patch, 
> YARN-6108.branch-2.v1.patch
>
>
> Currently, if we want to get container logs for a running application, we need 
> to get the NM web address from AHS, which requires enabling 
> yarn.timeline-service.generic-application-history.save-non-am-container-meta-info
>  for non-AM containers. But most of the time we disable this configuration 
> for ATS performance reasons. In that case, it is impossible for 
> us to get the logs for non-AM containers in a running application.






[jira] [Commented] (YARN-5556) CapacityScheduler: Support deleting queues without requiring a RM restart

2017-02-02 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15850467#comment-15850467
 ] 

Naganarasimha G R commented on YARN-5556:
-

Oops, I missed seeing this! Thanks for informing me, [~wangda]. I tried running 
it in a local build multiple times (the specific method and all), but it seems 
like it's not failing. Maybe we can wait one more time and, if required, raise a 
JIRA so that I can add more logs to capture the failure.


> CapacityScheduler: Support deleting queues without requiring a RM restart
> -
>
> Key: YARN-5556
> URL: https://issues.apache.org/jira/browse/YARN-5556
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Xuan Gong
>Assignee: Naganarasimha G R
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5556.v1.001.patch, YARN-5556.v1.002.patch, 
> YARN-5556.v1.003.patch, YARN-5556.v1.004.patch, YARN-5556.v2.005.patch, 
> YARN-5556.v2.006.patch
>
>
> Today, we can add or modify queues without restarting the RM, via a CS 
> refresh. But to delete a queue, we have to restart the ResourceManager. We 
> should support deleting queues without requiring an RM restart.






[jira] [Commented] (YARN-5703) ReservationAgents are not correctly configured

2017-02-02 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15850445#comment-15850445
 ] 

Naganarasimha G R commented on YARN-5703:
-

Thanks [~maniraj...@gmail.com] for working on the patch. Please find my 
comments: 
* AbstractReservationSystem, ln no 452: I think we should rather use {{agentClass 
= Configuration.getClass(String, Class, Class)}} and then call 
{{agentClass.newInstance()}} here (see the sketch after this list), and avoid 
{{ReflectionUtils.newInstance(Class, Configuration)}}, which caches 
constructors if not present and seems to have been used mainly by MR earlier.

* ReservationAgent, ln no 80: {{init(ReservationSchedulerConfiguration conf)}} 
should take only {{Configuration}}, not *ReservationSchedulerConfiguration*, 
which is its subclass.

* AlignedPlannerWithGreedy, ln no 38: it need not extend Configurable; I would 
suggest only having {{ReservationAgent.init}}. Configurable does not seem to 
suit here anyway.

* AlignedPlannerWithGreedy, ln no 38: {{yarnConf}} would not require a local 
variable if we plan as above.
* IterativePlanner.java, ln no 429: why do we require 
{{init(ReservationSchedulerConfiguration conf)}}?
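
A sketch of the suggested instantiation; the configuration key and default class 
are illustrative assumptions, and exception handling is omitted:

{code}
// Load the agent class from configuration and instantiate it directly,
// instead of going through ReflectionUtils.newInstance().
Class<? extends ReservationAgent> agentClass = conf.getClass(
    "yarn.resourcemanager.reservation-system.agent", // assumed key name
    AlignedPlannerWithGreedy.class, ReservationAgent.class);
ReservationAgent agent = agentClass.newInstance();
agent.init(conf);
{code}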

> ReservationAgents are not correctly configured
> --
>
> Key: YARN-5703
> URL: https://issues.apache.org/jira/browse/YARN-5703
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Sean Po
>Assignee: Manikandan R
> Attachments: YARN-5703.001.patch, YARN-5703.002.patch
>
>
> In AbstractReservationSystem, the method that instantiates a ReservationAgent 
> does not properly initialize it with the appropriate configuration because it 
> expects the ReservationAgent to implement Configurable.






[jira] [Commented] (YARN-5951) Changes to allow CapacityScheduler to use configuration store

2017-02-02 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15850384#comment-15850384
 ] 

Jonathan Hung commented on YARN-5951:
-

Thanks [~leftnoteasy], uploaded 003 patch.

> Changes to allow CapacityScheduler to use configuration store
> -
>
> Key: YARN-5951
> URL: https://issues.apache.org/jira/browse/YARN-5951
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-5951-YARN-5734.001.patch, 
> YARN-5951-YARN-5734.002.patch, YARN-5951-YARN-5734.003.patch
>
>
> EDIT: changing this ticket. Found that the CapacityStoreConfigurationProvider 
> is not necessary, since we can just grab a Configuration object from 
> StoreConfigurationProvider with type "SCHEDULER" and create a 
> CapacitySchedulerConfiguration from it.
> This ticket will track changes needed for integrating other components to be 
> used by the capacity scheduler.






[jira] [Updated] (YARN-5951) Changes to allow CapacityScheduler to use configuration store

2017-02-02 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-5951:

Attachment: YARN-5951-YARN-5734.003.patch

> Changes to allow CapacityScheduler to use configuration store
> -
>
> Key: YARN-5951
> URL: https://issues.apache.org/jira/browse/YARN-5951
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-5951-YARN-5734.001.patch, 
> YARN-5951-YARN-5734.002.patch, YARN-5951-YARN-5734.003.patch
>
>
> EDIT: changing this ticket. Found that the CapacityStoreConfigurationProvider 
> is not necessary, since we can just grab a Configuration object from 
> StoreConfigurationProvider with type "SCHEDULER" and create a 
> CapacitySchedulerConfiguration from it.
> This ticket will track changes needed for integrating other components to be 
> used by the capacity scheduler.






[jira] [Updated] (YARN-4675) Reorganize TimeClientImpl into TimeClientV1Impl and TimeClientV2Impl

2017-02-02 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-4675:

Attachment: YARN-4675.v2.006.patch

Thanks for the detailed review comments, [~varun_saxena]. Please find the 
updated patch addressing the comments.
bq. In TimelineClientImpl#serviceInit, we check if timeline service version is 
v2 and if not throw IOException. Here we can simply call 
YarnConfiguration#timelineServiceV2Enabled to make the check.
If we check in this fashion, we need to use the negation "!", and the negated 
check will also return true when {{timelineServiceEnabled(conf)}} fails inside 
the timelineServiceV2Enabled check.
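
A sketch of the check being discussed; the method name comes from the quoted 
comment, while the exception message is illustrative:

{code}
// Negated form: this fires both when the timeline service is disabled
// entirely (timelineServiceEnabled(conf) is false) and when its version
// is not 2.x, which is the behavior pointed out above.
if (!YarnConfiguration.timelineServiceV2Enabled(conf)) {
  throw new IOException("Timeline service v2 is not enabled");
}
{code}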

bq. However, we should still make a check for timeline service v2 flag in 
TimelineV2ClientImpl and v1 flag in TimelineClientImpl (it can be initialized 
with false in constructor), just in case init is not called.
IMO there is no need for extra checks, as the client is bound to fail anyway if 
the service is not initialized.

bq. are we planning to move around classes in this patch itself as per Li's 
comment so the impl constructors are not called directly?
As discussed later in the meeting, this may not be required as part of this patch.


> Reorganize TimeClientImpl into TimeClientV1Impl and TimeClientV2Impl
> 
>
> Key: YARN-4675
> URL: https://issues.apache.org/jira/browse/YARN-4675
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>  Labels: YARN-5355, yarn-5355-merge-blocker
> Attachments: YARN-4675.v2.002.patch, YARN-4675.v2.003.patch, 
> YARN-4675.v2.004.patch, YARN-4675.v2.005.patch, YARN-4675.v2.006.patch, 
> YARN-4675-YARN-2928.v1.001.patch
>
>
> We need to reorganize TimeClientImpl into TimeClientV1Impl, 
> TimeClientV2Impl and, if required, a base class, so that it's clear which part 
> of the code belongs to which version and is thus better maintainable.






[jira] [Commented] (YARN-5951) Changes to allow CapacityScheduler to use configuration store

2017-02-02 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15850363#comment-15850363
 ] 

Wangda Tan commented on YARN-5951:
--

Thanks [~jhung] for updating the patch.

Overall patch looks good. 

A few comments for the latest patch: 
1) I suggest removing 
{{DERBY_CS_CONF_PROVIDER}}/{{StoreBasedCSConfigurationProvider}}/{{DerbyConfigurationStore}}/{{YarnConfigurationStore}}
 from the patch; it is better to add them when we add the first store-based 
implementation.
2) Like other configs, it is better to add a DEFAULT_CS_CONF_PROVIDER and point 
it to FILE_CS_CONF_PROVIDER.

> Changes to allow CapacityScheduler to use configuration store
> -
>
> Key: YARN-5951
> URL: https://issues.apache.org/jira/browse/YARN-5951
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-5951-YARN-5734.001.patch, 
> YARN-5951-YARN-5734.002.patch
>
>
> EDIT: changing this ticket. Found that the CapacityStoreConfigurationProvider 
> is not necessary, since we can just grab a Configuration object from 
> StoreConfigurationProvider with type "SCHEDULER" and create a 
> CapacitySchedulerConfiguration from it.
> This ticket will track changes needed for integrating other components to be 
> used by the capacity scheduler.






[jira] [Commented] (YARN-5889) Improve user-limit calculation in capacity scheduler

2017-02-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15850352#comment-15850352
 ] 

Hadoop QA commented on YARN-5889:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 23 new + 1025 unchanged - 18 fixed = 1048 total (was 1043) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 39m 
44s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5889 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12850622/YARN-5889.0009.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1ea4dce72a65 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0914fcc |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/14810/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14810/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14810/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Improve user-limit calculation in capacity scheduler
> 
>
> Key: YARN-5889
>   

[jira] [Commented] (YARN-5889) Improve user-limit calculation in capacity scheduler

2017-02-02 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15850294#comment-15850294
 ] 

Wangda Tan commented on YARN-5889:
--

Hi Sunil,

bq. I am slightly confused here. I think we might need null check. I ll help to 
share detailed view for that. ...
I think the latest patch does the correct thing, except for one thing:

Instead of doing:

{code}
Map> userMap = (isActive)
    ? preComputedActiveUserLimit
    : preComputedAllUserLimit;

return !userMap.containsKey(nodePartition)
    || (getLocalVersionOfUsersState(nodePartition, schedulingMode,
        isActive) != latestVersionOfUserCount);
{code}

I think it should be enough to do:

{code}
return getLocalVersionOfUsersState(nodePartition, schedulingMode,
    isActive) != latestVersionOfUserCount;
{code}

The reason is that getLocalVersionOfUsersState will always return -1 when 
userMap doesn't contain nodePartition,
and {{reComputeUserLimits}} will insert the map and return.

Is my understanding correct?

bq. userLimitNeedsRecompute or getLatestVersionOfUsersState are not 
writeLock-protected ...

{{userLimitNeedsRecompute}} is now writeLock-protected (I think you don't need 
the AtomicLong any more, but it's fine to keep it as-is). And usage of 
latestVersionOfUsersState is always under the read/write lock, so 
latestVersionOfUsersState.get() always returns an up-to-date value that will 
not change during the calculation (it is also fine if it changes during usage; 
our goal is just to invalidate the local cache). A minimal sketch of this 
pattern is below.
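
To illustrate, a simplified, hypothetical rendering of the version-check 
pattern (field and method names mirror the patch under discussion; the class 
and lock handling here are illustrative, not the actual implementation):

{code}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class UserLimitCache {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  // Bumped whenever user state changes (user added/removed, queue refreshed).
  private final AtomicLong latestVersionOfUsersState = new AtomicLong(0);
  // Per-partition version the cached user-limit was last computed against.
  private final Map<String, Long> localVersionOfUsersState = new HashMap<>();

  void userLimitNeedsRecompute() {
    lock.writeLock().lock();
    try {
      latestVersionOfUsersState.incrementAndGet();
    } finally {
      lock.writeLock().unlock();
    }
  }

  // Returns -1 when the partition has never been computed, so the very first
  // call reports "recompute needed" without an explicit null/containsKey check.
  long getLocalVersionOfUsersState(String nodePartition) {
    return localVersionOfUsersState.getOrDefault(nodePartition, -1L);
  }

  boolean isRecomputeNeeded(String nodePartition) {
    return getLocalVersionOfUsersState(nodePartition)
        != latestVersionOfUsersState.get();
  }
}
{code}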

And one additional minor comment:
1) computeUserLimit:
- Remove {{User user}} from the parameter list, and instead of calling 
lQueue.getUser, it's better to call this.getUser().

+1 to the latest patch beyond the above comments; I think the patch is ready 
to go once Jenkins gives a +1.

> Improve user-limit calculation in capacity scheduler
> 
>
> Key: YARN-5889
> URL: https://issues.apache.org/jira/browse/YARN-5889
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-5889.0001.patch, 
> YARN-5889.0001.suggested.patchnotes, YARN-5889.0002.patch, 
> YARN-5889.0003.patch, YARN-5889.0004.patch, YARN-5889.0005.patch, 
> YARN-5889.0006.patch, YARN-5889.0007.patch, YARN-5889.0008.patch, 
> YARN-5889.0009.patch, YARN-5889.v0.patch, YARN-5889.v1.patch, 
> YARN-5889.v2.patch
>
>
> Currently user-limit is computed during every heartbeat allocation cycle 
> with a write lock. To improve performance, this ticket is focusing on moving 
> user-limit calculation out of the heartbeat allocation flow.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6125) The application attempt's diagnostic message should have a maximum size

2017-02-02 Thread Andras Piros (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848968#comment-15848968
 ] 

Andras Piros edited comment on YARN-6125 at 2/2/17 6:16 PM:


[~templedf] thanks for the review!

My thoughts on the comments:
# done
# done
# the {{pom.xml}} dependency reordering was necessary in order to get 
{{ExpectedException}} working. There are other parts of {{hadoop}} that employ 
the same thing (notably, 
{{hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-nodemanager/pom.xml}}),
 essentially to make sure the correct Hamcrest version is on the classpath. One 
day switching to Mockito 2.x [*should solve this 
problem*|https://github.com/mockito/mockito/issues/324]
# {{Lists.newLinkedList()}} comes from Guava, which I like. But never mind, 
changed it to use {{new LinkedList<>()}}
# done
# done
# done
# I like {{final}} stuff (I know I'm weird), please specify where I should make 
variables / fields non-{{final}}
# WIP
# done
# done
# done
# done
# done


was (Author: andras.piros):
[~templedf] thanks for the review!

My thoughts on the comments:
# done
# done
# the {{pom.xml}} dependency reordering was necessary in order to get 
{{ExpectedException}} working. There are other parts of {{hadoop}} that employ 
the same thing (notably, 
{{hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-nodemanager/pom.xml}}),
 essentially to make sure the correct Hamcrest version is on the classpath. One 
day switching to Mockito 2.x [*should solve this 
problem*|https://github.com/mockito/mockito/issues/324]
# {{Lists.newLinkedList()}} comes from Guava, which I like. But nevermind, 
changed to use {{new LinkedList<>()}}
# done
# done
# done
# I like {{final}} stuff (I know I'm weird), please specify where I should make 
variables / fields non-{{final}}
# WIP
# WIP
# done
# done
# done
# done

> The application attempt's diagnostic message should have a maximum size
> ---
>
> Key: YARN-6125
> URL: https://issues.apache.org/jira/browse/YARN-6125
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.7.0
>Reporter: Daniel Templeton
>Assignee: Andras Piros
>Priority: Critical
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6125.000.patch, YARN-6125.001.patch, 
> YARN-6125.002.patch
>
>
> We've found through experience that the diagnostic message can grow 
> unbounded.  I've seen attempts that have diagnostic messages over 1MB.  Since 
> the message is stored in the state store, it's a bad idea to allow the 
> message to grow unbounded.  Instead, there should be a property that sets a 
> maximum size on the message.
> I suspect that some of the ZK state store issues we've seen in the past were 
> due to the size of the diagnostic messages and not to the size of the 
> classpath, as is the current prevailing opinion.
> An open question is how best to prune the message once it grows too large.  
> Should we
> # truncate the tail,
> # truncate the head,
> # truncate the middle,
> # add another property to make the behavior selectable, or
> # none of the above?
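
For illustration, a minimal sketch of option 1 (truncate the tail), assuming a 
hypothetical size cap; this is only a sketch of the idea, not a committed 
implementation:

{code}
// Illustrative only: cap the diagnostics at a fixed number of characters
// and silently drop anything past the limit (option 1, truncate the tail).
class BoundedDiagnostics {
  private final int limit; // e.g. 64 * 1024 characters
  private final StringBuilder buffer = new StringBuilder();

  BoundedDiagnostics(int limit) {
    this.limit = limit;
  }

  void append(String message) {
    int remaining = limit - buffer.length();
    if (remaining <= 0) {
      return; // cap reached: new text is discarded
    }
    buffer.append(message, 0, Math.min(message.length(), remaining));
  }

  @Override
  public String toString() {
    return buffer.toString();
  }
}
{code}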



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3269) Yarn.nodemanager.remote-app-log-dir could not be configured to fully qualified path

2017-02-02 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated YARN-3269:

Fix Version/s: 2.7.4

> Yarn.nodemanager.remote-app-log-dir could not be configured to fully 
> qualified path
> ---
>
> Key: YARN-3269
> URL: https://issues.apache.org/jira/browse/YARN-3269
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1
>
> Attachments: YARN-3269.1.patch, YARN-3269.2.patch
>
>
> Log aggregation currently is always relative to the default file system, not 
> an arbitrary file system identified by URI. So we can't put an arbitrary 
> fully-qualified URI into yarn.nodemanager.remote-app-log-dir.
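
For illustration, a minimal sketch of the underlying distinction, with generic 
names of our own (not the actual patch code): resolving the file system from 
the configured path itself honors a fully qualified URI, while asking for the 
default file system does not.

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class RemoteLogDirResolver {
  static FileSystem resolve(Configuration conf, String remoteAppLogDir)
      throws IOException {
    Path remoteRootLogDir = new Path(remoteAppLogDir);
    // FileSystem.get(conf) would always return the default FS -- the
    // limitation described above. Resolving from the path honors a fully
    // qualified URI such as hdfs://other-cluster:8020/logs.
    return remoteRootLogDir.getFileSystem(conf);
  }
}
{code}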



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3269) Yarn.nodemanager.remote-app-log-dir could not be configured to fully qualified path

2017-02-02 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15850240#comment-15850240
 ] 

Zhe Zhang commented on YARN-3269:
-

Thanks [~xgong] for the fix. I just cherry-picked it to branch-2.7.

> Yarn.nodemanager.remote-app-log-dir could not be configured to fully 
> qualified path
> ---
>
> Key: YARN-3269
> URL: https://issues.apache.org/jira/browse/YARN-3269
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: YARN-3269.1.patch, YARN-3269.2.patch
>
>
> Log aggregation currently is always relative to the default file system, not 
> an arbitrary file system identified by URI. So we can't put an arbitrary 
> fully-qualified URI into yarn.nodemanager.remote-app-log-dir.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3571) AM does not re-blacklist NMs after ignoring-blacklist event happens?

2017-02-02 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15850161#comment-15850161
 ] 

Manikandan R commented on YARN-3571:


I am interested in this issue. Can I work on it?

> AM does not re-blacklist NMs after ignoring-blacklist event happens?
> 
>
> Key: YARN-3571
> URL: https://issues.apache.org/jira/browse/YARN-3571
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager, resourcemanager
>Affects Versions: 2.5.1
>Reporter: Hao Zhu
>
> Detailed analysis is in item "3 Will AM re-blacklist NMs after 
> ignoring-blacklist event happens?" at the link below:
> http://www.openkb.info/2015/05/when-will-application-master-blacklist.html
> The current behavior is: if that NodeManager has ever been blacklisted 
> before, it will not be blacklisted again after ignore-blacklist happens; 
> otherwise, it will be blacklisted.
> However, I think the right behavior should be: the AM can re-blacklist NMs 
> even after ignoring-blacklist happens once.
>  The code logic is in function containerFailedOnHost(String hostName) of 
> RMContainerRequestor.java:
> {code}
> protected void containerFailedOnHost(String hostName) {
>   if (!nodeBlacklistingEnabled) {
>     return;
>   }
>   if (blacklistedNodes.contains(hostName)) {
>     if (LOG.isDebugEnabled()) {
>       LOG.debug("Host " + hostName + " is already blacklisted.");
>     }
>     return; //already blacklisted
> {code}
> The reason for the above behavior is in item 2 above: when ignoring-blacklist 
> happens, it only asks the RM to clear "blacklistAdditions"; however, it does 
> not clear the "blacklistedNodes" variable.
> This behavior may cause the whole job/application to fail if the previously 
> blacklisted NM is released after the ignoring-blacklist event happens.
> Imagine a serial murderer being released from prison just because the prison 
> is 33% full, and, horribly, he/she will never be put in prison again. Only 
> new murderers will be put in prison.
> Example to prove:
> Test 1:
> One node(h4) has issue, other 3 nodes are healthy.
> The job failed with below AM logs:
> {code}
> [root@h1 container_1430425729977_0006_01_01]# egrep -i 'failures on 
> node|blacklist|FATAL' syslog
> 2015-05-02 18:38:41,246 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 
> nodeBlacklistingEnabled:true
> 2015-05-02 18:38:41,246 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 
> blacklistDisablePercent is 1
> 2015-05-02 18:39:07,249 FATAL [IPC Server handler 3 on 41696] 
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: 
> attempt_1430425729977_0006_m_02_0 - exited : java.io.IOException: Spill 
> failed
> 2015-05-02 18:39:07,297 INFO [Thread-49] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 1 failures on 
> node h4.poc.com
> 2015-05-02 18:39:07,950 FATAL [IPC Server handler 16 on 41696] 
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: 
> attempt_1430425729977_0006_m_08_0 - exited : java.io.IOException: Spill 
> failed
> 2015-05-02 18:39:07,954 INFO [Thread-49] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 2 failures on 
> node h4.poc.com
> 2015-05-02 18:39:08,148 FATAL [IPC Server handler 17 on 41696] 
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: 
> attempt_1430425729977_0006_m_07_0 - exited : java.io.IOException: Spill 
> failed
> 2015-05-02 18:39:08,152 INFO [Thread-49] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 3 failures on 
> node h4.poc.com
> 2015-05-02 18:39:08,152 INFO [Thread-49] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Blacklisted host 
> h4.poc.com
> 2015-05-02 18:39:08,561 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the 
> blacklist for application_1430425729977_0006: blacklistAdditions=1 
> blacklistRemovals=0
> 2015-05-02 18:39:08,561 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Ignore 
> blacklisting set to true. Known: 4, Blacklisted: 1, 25%
> 2015-05-02 18:39:09,563 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the 
> blacklist for application_1430425729977_0006: blacklistAdditions=0 
> blacklistRemovals=1
> 2015-05-02 18:39:32,912 FATAL [IPC Server handler 19 on 41696] 
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: 
> attempt_1430425729977_0006_m_02_1 - exited : java.io.IOException: Spill 
> failed
> 2015-05-02 18:39:35,076 FATAL [IPC Server handler 1 on 41696] 
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: 
> attempt_1430425729977_0006_m_09_0 - exited : java.io.IOException: Spill 
> failed
> 2015-05-02 18:39:35,133 FATAL [IPC Server 

[jira] [Commented] (YARN-5179) Issue of CPU usage of containers

2017-02-02 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15850154#comment-15850154
 ] 

Manikandan R commented on YARN-5179:


I am interested in working on this. Can I take it forward?

> Issue of CPU usage of containers
> 
>
> Key: YARN-5179
> URL: https://issues.apache.org/jira/browse/YARN-5179
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.0
> Environment: Both on Windows and Linux
>Reporter: Zhongkai Mi
>
> // Multiply by 1000 to avoid losing data when converting to int
> int milliVcoresUsed = (int) (cpuUsageTotalCoresPercentage * 1000
>     * maxVCoresAllottedForContainers / nodeCpuPercentageForYARN);
> This formula will not report the right vcore-based CPU usage if vcores != 
> physical cores.
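
To make the concern concrete, a worked example with hypothetical numbers (a 
node with 4 physical cores exposing 8 vcores, all CPU given to YARN):

{code}
public class MilliVcoresExample {
  public static void main(String[] args) {
    float cpuUsageTotalCoresPercentage = 25f; // one physical core fully busy
    int maxVCoresAllottedForContainers = 8;   // configured vcores on the node
    int nodeCpuPercentageForYARN = 100;       // all CPU available to YARN

    int milliVcoresUsed = (int) (cpuUsageTotalCoresPercentage * 1000
        * maxVCoresAllottedForContainers / nodeCpuPercentageForYARN);

    // Prints 2000: one busy physical core is reported as 2.0 vcores, i.e.
    // the value is scaled by the vcore-to-physical-core ratio (8/4) rather
    // than reflecting physical usage directly -- the mismatch noted above.
    System.out.println(milliVcoresUsed);
  }
}
{code}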



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6013) ApplicationMasterProtocolPBClientImpl.allocate fails with EOFException when RPC privacy is enabled

2017-02-02 Thread Steven Rand (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15850106#comment-15850106
 ] 

Steven Rand commented on YARN-6013:
---

[~djp], I'm wondering whether you have any opinions here, since you've been 
working on the 2.8.0 release. I could be wrong, of course, but I'm concerned 
that this is a non-trivial regression from 2.7.3, and I think it'd be great if 
we could fix this (or determine that I'm just doing something wrong) before 
2.8.0 is released.

> ApplicationMasterProtocolPBClientImpl.allocate fails with EOFException when 
> RPC privacy is enabled
> --
>
> Key: YARN-6013
> URL: https://issues.apache.org/jira/browse/YARN-6013
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client, yarn
>Affects Versions: 2.8.0
>Reporter: Steven Rand
>Priority: Critical
> Attachments: YARN-6013-branch-2.8.0.001.patch, yarn-rm-log.txt
>
>
> When privacy is enabled for RPC (hadoop.rpc.protection = privacy), 
> {{ApplicationMasterProtocolPBClientImpl.allocate}} sometimes (but not always) 
> fails with an EOFException. I've reproduced this with Spark 2.0.2 built 
> against latest branch-2.8 and with a simple distcp job on latest branch-2.8.
> Steps to reproduce using distcp:
> 1. Set hadoop.rpc.protection equal to privacy
> 2. Write data to HDFS. I did this with Spark as follows: 
> {code}
> sc.parallelize(1 to (5*1024*1024)).map(k => Seq(k, 
> org.apache.commons.lang.RandomStringUtils.random(1024, 
> "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWxyZ0123456789")).mkString("|")).toDF().repartition(100).write.parquet("hdfs:///tmp/testData")
> {code}
> 3. Attempt to distcp that data to another location in HDFS. For example:
> {code}
> hadoop distcp -Dmapreduce.framework.name=yarn hdfs:///tmp/testData 
> hdfs:///tmp/testDataCopy
> {code}
> I observed this error in the ApplicationMaster's syslog:
> {code}
> 2016-12-19 19:13:50,097 INFO [eventHandlingThread] 
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer 
> setup for JobId: job_1482189777425_0004, File: 
> hdfs://:8020/tmp/hadoop-yarn/staging//.staging/job_1482189777425_0004/job_1482189777425_0004_1.jhist
> 2016-12-19 19:13:51,004 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before 
> Scheduling: PendingReds:0 ScheduledMaps:4 ScheduledReds:0 AssignedMaps:0 
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0 
> HostLocal:0 RackLocal:0
> 2016-12-19 19:13:51,031 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() 
> for application_1482189777425_0004: ask=1 release= 0 newContainers=0 
> finishedContainers=0 resourcelimit= knownNMs=3
> 2016-12-19 19:13:52,043 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.io.retry.RetryInvocationHandler: Exception while invoking 
> ApplicationMasterProtocolPBClientImpl.allocate over null. Retrying after 
> sleeping for 3ms.
> java.io.EOFException: End of File Exception between local host is: 
> "/"; destination host is: "":8030; 
> : java.io.EOFException; For more details see:  
> http://wiki.apache.org/hadoop/EOFException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>   at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:801)
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1486)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1428)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1338)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>   at com.sun.proxy.$Proxy80.allocate(Unknown Source)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:77)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> 

[jira] [Commented] (YARN-6108) Improve AHS webservice to accept NM address as a parameter to get container logs

2017-02-02 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15849870#comment-15849870
 ] 

Junping Du commented on YARN-6108:
--

Thanks [~xgong] for addressing my above comments. I did a quick pass over the 
latest patch and have several comments so far:

1. Shall we move YarnWebServiceUtils into the webapp.util package, just like 
WebAppUtils?

2. YarnWebServiceParams.NM_NODENAME seems a bit redundant; maybe rename it to 
NM_ID or NM_NODEID?

3.
{noformat} 
+  String resURI = JOINER.join(getAbsoluteNMWebAddress(nodeHttpAddress),
+  NM_DOWNLOAD_URI_STR, uri);
{noformat}
It sounds like nodeHttpAddress can still be null. If so, 
getAbsoluteNMWebAddress() will throw an NPE. It is better to handle the null 
case explicitly here, e.g. something like the sketch below.
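
A hypothetical sketch of what explicit handling could look like ({{JOINER}}, 
{{getAbsoluteNMWebAddress}} and {{NM_DOWNLOAD_URI_STR}} are the names from the 
snippet above; the exception choice is illustrative):

{code}
String buildRedirectUri(String nodeHttpAddress, String uri) {
  if (nodeHttpAddress == null || nodeHttpAddress.isEmpty()) {
    // Fail with a clear message instead of an NPE deep inside
    // getAbsoluteNMWebAddress().
    throw new IllegalArgumentException(
        "Cannot redirect to the NM: node HTTP address is unknown.");
  }
  return JOINER.join(getAbsoluteNMWebAddress(nodeHttpAddress),
      NM_DOWNLOAD_URI_STR, uri);
}
{code}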


> Improve AHS webservice to accept NM address as a parameter to get container 
> logs
> 
>
> Key: YARN-6108
> URL: https://issues.apache.org/jira/browse/YARN-6108
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6108.1.patch, YARN-6108.2.patch, 
> YARN-6108.branch-2.v1.patch
>
>
> Currently, if we want to get container logs for a running application, we 
> need to get the NM web address from AHS, which requires enabling 
> yarn.timeline-service.generic-application-history.save-non-am-container-meta-info
>  for non-AM containers. But most of the time, we will disable this 
> configuration for ATS performance purposes. In this case, it is impossible 
> for us to get the logs for non-AM containers in a running application.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5889) Improve user-limit calculation in capacity scheduler

2017-02-02 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5889:
--
Attachment: YARN-5889.0009.patch

Thanks [~leftnoteasy] for helping to review the patch thoroughly. Uploading a 
new patch.

bq. 4) isRecomputeNeeded:
I am slightly confused here. I think we might need a null check. I'll help 
share a detailed view of that.

Assume that there is no precomputed user-limit at the start, when the RM is 
started or the queue is refreshed. So all caches will be empty, and we'll do 
our first computation when a container request comes.
So in this case, userLimitPerSchedulingMode will be null, and we'll do a 
recompute, after which userLimitPerSchedulingMode will have some entries. So a 
null check is needed for the very first scenario. I can check whether this 
check can be done outside or not. Am I missing something here? Please help to 
share your view.

bq. And also, we don't need latestVersionOfUserCount, instead we should call 
latestVersionOfUsersState.get().
userLimitNeedsRecompute and getLatestVersionOfUsersState are not 
writeLock-protected. Hence, in getComputedResourceLimitForAll/ActiveUsers, it 
is possible that latestVersionOfUsersState changes within the writeLock block 
while we are operating. Since we save a snapshot of latestVersionOfUserCount 
and use it to update the local cache (per partition and scheduling mode), even 
if some other thread changes the real latestVersionOfUsersState, the cache 
will still be invalidated correctly. Please share your thoughts.
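
To illustrate the save-then-compare idea (a hypothetical sketch; {{Resource}}, 
{{reComputeUserLimits}} and the version fields stand in for the real patch 
code):

{code}
Resource computeAndCacheUserLimit(String partition) {
  long snapshot = latestVersionOfUsersState.get(); // save the version first
  Resource limit = reComputeUserLimits(partition); // may take a while
  localVersionOfUsersState.put(partition, snapshot);
  // If another thread bumped latestVersionOfUsersState meanwhile, the cached
  // snapshot is already stale, so the next isRecomputeNeeded(partition) call
  // returns true and the limit is recomputed.
  return limit;
}
{code}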

> Improve user-limit calculation in capacity scheduler
> 
>
> Key: YARN-5889
> URL: https://issues.apache.org/jira/browse/YARN-5889
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-5889.0001.patch, 
> YARN-5889.0001.suggested.patchnotes, YARN-5889.0002.patch, 
> YARN-5889.0003.patch, YARN-5889.0004.patch, YARN-5889.0005.patch, 
> YARN-5889.0006.patch, YARN-5889.0007.patch, YARN-5889.0008.patch, 
> YARN-5889.0009.patch, YARN-5889.v0.patch, YARN-5889.v1.patch, 
> YARN-5889.v2.patch
>
>
> Currently user-limit is computed during every heartbeat allocation cycle 
> with a write lock. To improve performance, this ticket is focusing on moving 
> user-limit calculation out of the heartbeat allocation flow.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6108) Improve AHS webservice to accept NM address as a parameter to get container logs

2017-02-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15849770#comment-15849770
 ] 

Hadoop QA commented on YARN-6108:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 47s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 9 new + 36 unchanged - 0 fixed = 45 total (was 36) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
43s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
45s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m  5s{color} 
| {color:red} hadoop-yarn-server-applicationhistoryservice in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.timeline.webapp.TestTimelineWebServices |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6108 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12850487/YARN-6108.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ba56637e1e5f 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 327c998 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 

[jira] [Commented] (YARN-6100) improve YARN webservice to output aggregated container logs

2017-02-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15849694#comment-15849694
 ] 

Hudson commented on YARN-6100:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11199 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11199/])
YARN-6100. Improve YARN webservice to output aggregated container logs. 
(junping_du: rev 327c9980aafce52cc02d2b8885fc4e9f628ab23c)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/NMWebServices.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSWebServices.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/LogToolUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TestAHSWebServices.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServices.java


> improve YARN webservice to output aggregated container logs
> ---
>
> Key: YARN-6100
> URL: https://issues.apache.org/jira/browse/YARN-6100
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: YARN-6100.1.patch, YARN-6100.2.patch, 
> YARN-6100.branch-2.v1.patch, YARN-6100.branch-2.v3.patch, 
> YARN-6100.branch-2.v4.patch, YARN-6100.trunk.2.patch, 
> YARN-6100.trunk.v1.patch, YARN-6100.trunk.v3.patch, YARN-6100.trunk.v4.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6138) AMRMClientAsync does not surface preemption message

2017-02-02 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15849679#comment-15849679
 ] 

Sunil G commented on YARN-6138:
---

Thanks [~mradmila] for sharing the patch. A few general comments:

1) Please follow the patch naming convention described at 
https://wiki.apache.org/hadoop/HowToContribute#Naming_your_patch
This will help the patch get compiled against the respective branch. Also, if 
it's a general issue, you can provide a patch against trunk first.

2) Your patch has the same file changes 3 times. Please upload a correctly 
formatted patch; git commands could help here.

3) I am not very sure why a callback method needs to be exposed for the 
preemption message. Doesn't it need to be part of container-completed 
handling, etc.? Could you please add some more details here on why this change 
is needed? (See the sketch below for where the message currently surfaces.)
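
For context, an illustrative sketch of where the preemption information lives 
today: it is carried on the raw AllocateResponse, which the AMRMClientAsync 
callbacks do not surface. Shown with the synchronous AMRMClient for clarity; 
the class and handling are hypothetical.

{code}
import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
import org.apache.hadoop.yarn.api.records.PreemptionMessage;
import org.apache.hadoop.yarn.client.api.AMRMClient;

class PreemptionPoller {
  static void pollOnce(AMRMClient<AMRMClient.ContainerRequest> amRmClient)
      throws Exception {
    AllocateResponse response = amRmClient.allocate(0.1f);
    PreemptionMessage preemption = response.getPreemptionMessage();
    if (preemption != null) {
      // An AM could proactively release the resources named in the contract
      // here; an AMRMClientAsync user never sees this message today, which is
      // the gap this JIRA describes.
      System.out.println("Preemption requested: " + preemption);
    }
  }
}
{code}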

> AMRMClientAsync does not surface preemption message
> ---
>
> Key: YARN-6138
> URL: https://issues.apache.org/jira/browse/YARN-6138
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.7.2
>Reporter: Marko Radmilac
>Priority: Minor
> Attachments: amrmpatch.txt
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> While AMRMClientAsync does pass on updated nodes and other messages from the 
> AMRM channel, it does not pass information about preemption.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6108) Improve AHS webservice to accept NM address as a parameter to get container logs

2017-02-02 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15849677#comment-15849677
 ] 

Junping Du commented on YARN-6108:
--

YARN-6100 is committed; submitting the patch for this JIRA.

> Improve AHS webservice to accept NM address as a parameter to get container 
> logs
> 
>
> Key: YARN-6108
> URL: https://issues.apache.org/jira/browse/YARN-6108
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6108.1.patch, YARN-6108.2.patch, 
> YARN-6108.branch-2.v1.patch
>
>
> Currently, if we want to get container logs for a running application, we 
> need to get the NM web address from AHS, which requires enabling 
> yarn.timeline-service.generic-application-history.save-non-am-container-meta-info
>  for non-AM containers. But most of the time, we will disable this 
> configuration for ATS performance purposes. In this case, it is impossible 
> for us to get the logs for non-AM containers in a running application.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6099) Improve webservice to list aggregated log files

2017-02-02 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-6099:
-
Fix Version/s: (was: 3.0.0-alpha2)
   3.0.0-alpha3

> Improve webservice to list aggregated log files
> ---
>
> Key: YARN-6099
> URL: https://issues.apache.org/jira/browse/YARN-6099
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: YARN-6099.1.patch, YARN-6099.branch-2.v2.patch, 
> YARN-6099.branch-2.v3.patch, YARN-6099.trunk.1.patch, 
> YARN-6099.trunk.2.patch, YARN-6099.trunk.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6100) improve YARN webservice to output aggregated container logs

2017-02-02 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-6100:
-
Fix Version/s: 3.0.0-alpha3

> improve YARN webservice to output aggregated container logs
> ---
>
> Key: YARN-6100
> URL: https://issues.apache.org/jira/browse/YARN-6100
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: YARN-6100.1.patch, YARN-6100.2.patch, 
> YARN-6100.branch-2.v1.patch, YARN-6100.branch-2.v3.patch, 
> YARN-6100.branch-2.v4.patch, YARN-6100.trunk.2.patch, 
> YARN-6100.trunk.v1.patch, YARN-6100.trunk.v3.patch, YARN-6100.trunk.v4.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6061) Add a customized uncaughtexceptionhandler for critical threads in RM

2017-02-02 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6061:
---
Attachment: YARN-6061.007.patch

> Add a customized uncaughtexceptionhandler for critical threads in RM
> 
>
> Key: YARN-6061
> URL: https://issues.apache.org/jira/browse/YARN-6061
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6061.001.patch, YARN-6061.002.patch, 
> YARN-6061.003.patch, YARN-6061.004.patch, YARN-6061.005.patch, 
> YARN-6061.006.patch, YARN-6061.007.patch
>
>
> There are several threads in the fair scheduler. A thread will quit when 
> there is a runtime exception inside it. We should bring down the RM when 
> that happens; otherwise, there may be some weird behavior in the RM.
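
A minimal sketch of the idea (class name and handling are hypothetical, not 
the actual patch): a handler that fails fast when a critical thread dies.

{code}
import org.apache.hadoop.util.ExitUtil;

class RMCriticalThreadUncaughtExceptionHandler
    implements Thread.UncaughtExceptionHandler {
  @Override
  public void uncaughtException(Thread t, Throwable e) {
    // A silent thread death leaves the RM half-working; terminating lets HA
    // failover or a supervisor restart take over instead.
    ExitUtil.terminate(1,
        "Critical thread " + t.getName() + " died unexpectedly: " + e);
  }
}

// Hypothetical usage on a critical scheduler thread:
//   Thread updater = new Thread(updateRunnable, "FairSchedulerUpdateThread");
//   updater.setUncaughtExceptionHandler(
//       new RMCriticalThreadUncaughtExceptionHandler());
//   updater.start();
{code}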



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org