[jira] [Commented] (YARN-6686) Support for adding and removing queue mappings

2017-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16069481#comment-16069481
 ] 

Hadoop QA commented on YARN-6686:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  3m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
18s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 19 new + 3 unchanged - 0 fixed = 22 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 39m  
0s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6686 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875164/YARN-6686-YARN-5734.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6f1cf67f6fff 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5734 / a48d475 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16282/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/16282/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16282/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16282/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Support for adding and removing queue mappings
> --
>
> Key: YARN-6686
> URL: 

[jira] [Commented] (YARN-6694) Add certain envs to the default yarn.nodemanager.env-whitelist

2017-06-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16069465#comment-16069465
 ] 

Hudson commented on YARN-6694:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11958 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11958/])
YARN-6694. Add certain envs to the default yarn.nodemanager.env-whitelist 
(xgong: rev 3be2659f83965a312d1095f03b7a95c7781c10af)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml


> Add certain envs to the default yarn.nodemanager.env-whitelist
> --
>
> Key: YARN-6694
> URL: https://issues.apache.org/jira/browse/YARN-6694
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6694.1.patch
>
>
> Certain envs can be added to the yarn.nodemanager.env-whitelist,
> such as: HADOOP_HOME,PATH,LANG,TZ 
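
For context, a whitelisted variable is inherited from the NodeManager's own 
environment when the container has not set it itself. A minimal sketch of that 
pass-through semantic (helper names are invented for illustration; this is not 
the actual NodeManager code):

{code:java}
import java.util.HashMap;
import java.util.Map;

public class EnvWhitelistSketch {
  // Copy whitelisted vars from the NM's environment into the container's
  // environment, unless the application already set its own value.
  static Map<String, String> applyWhitelist(String whitelist,
      Map<String, String> nmEnv, Map<String, String> containerEnv) {
    Map<String, String> result = new HashMap<>(containerEnv);
    for (String name : whitelist.split(",")) {
      name = name.trim();
      if (!result.containsKey(name) && nmEnv.containsKey(name)) {
        result.put(name, nmEnv.get(name));
      }
    }
    return result;
  }

  public static void main(String[] args) {
    Map<String, String> nmEnv = new HashMap<>();
    nmEnv.put("HADOOP_HOME", "/opt/hadoop");
    nmEnv.put("TZ", "UTC");
    Map<String, String> appEnv = new HashMap<>();
    appEnv.put("TZ", "PST8PDT");  // the application's own value wins
    System.out.println(applyWhitelist("HADOOP_HOME,PATH,LANG,TZ", nmEnv, appEnv));
    // -> {TZ=PST8PDT, HADOOP_HOME=/opt/hadoop}
  }
}
{code}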






[jira] [Commented] (YARN-6752) Display reserved resources in web UI per application

2017-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16069459#comment-16069459
 ] 

Hadoop QA commented on YARN-6752:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 2 new + 
102 unchanged - 0 fixed = 104 total (was 102) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m 50s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6752 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875163/YARN-6752.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8b263ca9809c 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / af2773f |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16281/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16281/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16281/testReport/ |
| modules | C: 

[jira] [Commented] (YARN-6742) Minor mistakes in "The YARN Service Registry" docs

2017-06-29 Thread Yeliang Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16069428#comment-16069428
 ] 

Yeliang Cang commented on YARN-6742:


ok! [~shaneku...@gmail.com] I will review the whole document and submit a patch 
later. Thank you for the reply!

> Minor mistakes in "The YARN Service Registry" docs
> --
>
> Key: YARN-6742
> URL: https://issues.apache.org/jira/browse/YARN-6742
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha3
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Trivial
> Attachments: YARN-6742-001.patch
>
>
> There are minor mistakes in The YARN Service Registry docs.






[jira] [Commented] (YARN-6694) Add certain envs to the default yarn.nodemanager.env-whitelist

2017-06-29 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16069404#comment-16069404
 ] 

Xuan Gong commented on YARN-6694:
-

Committed into trunk. Thanks, Jian

> Add certain envs to the default yarn.nodemanager.env-whitelist
> --
>
> Key: YARN-6694
> URL: https://issues.apache.org/jira/browse/YARN-6694
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6694.1.patch
>
>
> Certain envs can be added to the yarn.nodemanager.env-whitelist,
> such as: HADOOP_HOME,PATH,LANG,TZ 






[jira] [Commented] (YARN-6694) Add certain envs to the default yarn.nodemanager.env-whitelist

2017-06-29 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16069399#comment-16069399
 ] 

Xuan Gong commented on YARN-6694:
-

+1. Checking this in

> Add certain envs to the default yarn.nodemanager.env-whitelist
> --
>
> Key: YARN-6694
> URL: https://issues.apache.org/jira/browse/YARN-6694
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6694.1.patch
>
>
> Certain envs can be added to the yarn.nodemanager.env-whitelist,
> such as: HADOOP_HOME,PATH,LANG,TZ 






[jira] [Commented] (YARN-6385) Fix warnings caused by TestFileSystemApplicationHistoryStore

2017-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16069384#comment-16069384
 ] 

Hadoop QA commented on YARN-6385:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice:
 The patch generated 0 new + 7 unchanged - 2 fixed = 7 total (was 9) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
1s{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6385 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860278/YARN-6385.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2b1f2d622162 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / af2773f |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16276/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16276/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix warnings caused by TestFileSystemApplicationHistoryStore
> 
>
> Key: YARN-6385
> URL: https://issues.apache.org/jira/browse/YARN-6385
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: YARN-6385.001.patch
>
>
> There are two warnings generated in {{TestFileSystemApplicationHistoryStore}}.
> {code}
> 

[jira] [Commented] (YARN-6426) Compress ZK YARN keys to scale up (especially AppStateData)

2017-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16069376#comment-16069376
 ] 

Hadoop QA commented on YARN-6426:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} YARN-6426 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6426 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12861550/zkcompression.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16280/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Compress ZK YARN keys to scale up (especially AppStateData)
> --
>
> Key: YARN-6426
> URL: https://issues.apache.org/jira/browse/YARN-6426
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.0.0-alpha2
>Reporter: Roni Burd
>Assignee: Roni Burd
>  Labels: patch
> Attachments: zkcompression.patch
>
>
> ZK today stores the protobuf files uncompressed. This is not an issue except 
> that if a customer job has thousands of files, AppStateData will store the 
> user context as a string with multiple URLs and it is easy to get to 1MB or 
> more. 
> This can put unnecessary strain on ZK and make the process slow. 
> The proposal is to simply compress protobufs before sending them to ZK.
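
A minimal sketch of the proposed approach, assuming java.util.zip GZIP and the 
plain ZooKeeper client API (the attached zkcompression.patch may structure this 
differently):

{code:java}
import java.io.ByteArrayOutputStream;
import java.util.zip.GZIPOutputStream;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ZkCompressionSketch {
  static byte[] gzip(byte[] raw) throws Exception {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
      gz.write(raw);
    }
    return bos.toByteArray();
  }

  // Compress the serialized AppStateData before writing the znode, keeping
  // large application states well under ZK's default 1MB node limit.
  static void storeAppState(ZooKeeper zk, String path, byte[] protoBytes)
      throws Exception {
    zk.create(path, gzip(protoBytes), ZooDefs.Ids.OPEN_ACL_UNSAFE,
        CreateMode.PERSISTENT);
  }
}
{code}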






[jira] [Commented] (YARN-3232) Some application states are not necessarily exposed to users

2017-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16069375#comment-16069375
 ] 

Hadoop QA commented on YARN-3232:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} YARN-3232 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-3232 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12823005/YARN-3232.v2.01.patch 
|
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16279/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Some application states are not necessarily exposed to users
> 
>
> Key: YARN-3232
> URL: https://issues.apache.org/jira/browse/YARN-3232
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Jian He
>Assignee: Varun Saxena
> Attachments: YARN-3232.002.patch, YARN-3232.01.patch, 
> YARN-3232.02.patch, YARN-3232.v2.01.patch
>
>
> Application NEW_SAVING and SUBMITTED states are not necessarily exposed to 
> users, as they are mostly internal to the system, transient, and not user-facing. 
> We may deprecate these two states and remove them from the web UI.






[jira] [Commented] (YARN-6686) Support for adding and removing queue mappings

2017-06-29 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16069373#comment-16069373
 ] 

Jonathan Hung commented on YARN-6686:
-

Here's an initial patch which supports adding to and removing from a global 
configuration CSV (e.g. yarn.scheduler.capacity.queue-mappings).

It's just an initial patch; there's still stuff missing, for example:
# adding the same functionality for queue configurations
# unit tests
but I wanted to post this early to see if there were any serious objections 
to it.

Basically, in addition to
{noformat}
[XML tags stripped by the mail archive: a wrapper element containing key/value
 updates]
  ...
{noformat}
to specify updates to global configurations, you can also do:
{noformat}
[XML tags stripped by the mail archive: the same wrapper with separate add and
 remove elements]
  ...
  ...
{noformat}
to specify configs you want to add to or remove from the CSV collection for 
configuration keys. For example,
{noformat}
[XML tags stripped: an "add" request containing this entry]
  yarn.scheduler.capacity.queue-mappings
  u:user1:default
{noformat}
then
{noformat}
[XML tags stripped: a "remove" request containing this entry]
  yarn.scheduler.capacity.queue-mappings
  u:user2:default
{noformat}
would add the user1->default queue mapping, then remove the user2->default 
mapping. So if you started with {{u:user2:default,u:user3:default}} as your 
mappings, you would end up with {{u:user3:default,u:user1:default}} in the end.

Any comments are welcome!
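
To make the add/remove semantics above concrete, here is a hedged sketch of 
merging add/remove sets into a CSV-valued property such as 
yarn.scheduler.capacity.queue-mappings (method names are invented; the actual 
patch may differ):

{code:java}
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.LinkedHashSet;

public class QueueMappingCsvSketch {
  // Apply remove/add sets to a CSV value, preserving the order of surviving
  // entries and appending newly added ones at the end.
  static String merge(String csv, Collection<String> add,
      Collection<String> remove) {
    LinkedHashSet<String> entries = new LinkedHashSet<>();
    if (csv != null && !csv.isEmpty()) {
      entries.addAll(Arrays.asList(csv.split(",")));
    }
    entries.removeAll(remove);
    entries.addAll(add);
    return String.join(",", entries);
  }

  public static void main(String[] args) {
    // Mirrors the two requests above: add u:user1:default, then remove
    // u:user2:default, starting from u:user2:default,u:user3:default.
    String afterAdd = merge("u:user2:default,u:user3:default",
        Arrays.asList("u:user1:default"), Collections.<String>emptyList());
    String afterRemove = merge(afterAdd,
        Collections.<String>emptyList(), Arrays.asList("u:user2:default"));
    System.out.println(afterRemove);  // u:user3:default,u:user1:default
  }
}
{code}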

> Support for adding and removing queue mappings
> --
>
> Key: YARN-6686
> URL: https://issues.apache.org/jira/browse/YARN-6686
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-6686-YARN-5734.001.patch
>
>
> Right now the capacity scheduler uses UserGroupMappingPlacementRule to determine 
> queue mappings. This rule stores mappings in 
> {{yarn.scheduler.capacity.queue-mappings}}. For users with a large number of 
> mappings, adding or removing queue mappings becomes infeasible.
> Need to come up with a way to add/remove individual mappings, for any/all 
> different configured placement rules.






[jira] [Commented] (YARN-5683) Support specifying storage type for per-application local dirs

2017-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16069372#comment-16069372
 ] 

Hadoop QA commented on YARN-5683:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} YARN-5683 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-5683 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832871/YARN-5683-3.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16278/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Support specifying storage type for per-application local dirs
> --
>
> Key: YARN-5683
> URL: https://issues.apache.org/jira/browse/YARN-5683
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha2
>Reporter: Tao Yang
>Assignee: Tao Yang
>  Labels: oct16-hard
> Attachments: flow_diagram_for_MapReduce-2.png, 
> flow_diagram_for_MapReduce.png, YARN-5683-1.patch, YARN-5683-2.patch, 
> YARN-5683-3.patch
>
>
> h3.  Introduction
> * Some applications of various frameworks (Flink, Spark, MapReduce, etc.) 
> that use local storage (checkpoint, shuffle, etc.) might require high IO 
> performance. It's useful to allocate local directories to high performance 
> storage media for these applications on heterogeneous clusters.
> * YARN does not distinguish different storage types and hence applications 
> cannot selectively use storage media with different performance 
> characteristics. Adding awareness of storage media can allow YARN to make 
> better decisions about the placement of local directories.
> h3.  Approach
> * NodeManager will distinguish storage types for local directories.
> ** The yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs configurations 
> should allow the cluster administrator to optionally specify the storage type 
> for each local directory. Example: 
> [SSD]/disk1/nm-local-dir,/disk2/nm-local-dir,/disk3/nm-local-dir (equivalent to 
> [SSD]/disk1/nm-local-dir,[DISK]/disk2/nm-local-dir,[DISK]/disk3/nm-local-dir)
> ** StorageType defines the DISK/SSD storage types and takes DISK as the default 
> storage type. 
> ** StorageLocation separates storage type and directory path; it is used by 
> LocalDirAllocator to track the types of local dirs, with DISK as the default 
> storage type.
> ** The getLocalPathForWrite method of LocalDirAllocator will prefer a 
> local directory of the specified storage type, and will fall back to ignoring 
> storage type if the requirement cannot be satisfied.
> ** Support for container-related local/log directories in ContainerLaunch. 
> All application frameworks can set the environment variables 
> (LOCAL_STORAGE_TYPE and LOG_STORAGE_TYPE) to specify the desired storage 
> type of local/log directories, and can choose not to launch the container on 
> fallback via these environment variables (ENSURE_LOCAL_STORAGE_TYPE and 
> ENSURE_LOG_STORAGE_TYPE).
> * Allow a specified storage type for various frameworks (take MapReduce as an 
> example)
> ** New configurations should allow the application administrator to 
> optionally specify the storage type of local/log directories and the fallback 
> strategy (MapReduce configurations: mapreduce.job.local-storage-type, 
> mapreduce.job.log-storage-type, mapreduce.job.ensure-local-storage-type and 
> mapreduce.job.ensure-log-storage-type).
> ** Support for container work directories. Set the environment variables, 
> including LOCAL_STORAGE_TYPE and LOG_STORAGE_TYPE, according to the configurations 
> above for ContainerLaunchContext and ApplicationSubmissionContext. (MapReduce 
> should update YARNRunner and TaskAttemptImpl.)
> ** Add a storage type prefix to the request path to support other local 
> directories of frameworks (such as shuffle directories for MapReduce). 
> (MapReduce should update YarnOutputFiles, MROutputFiles and YarnChild to 
> support output/work directories.)
> ** Flow diagram for MapReduce framework
> !flow_diagram_for_MapReduce-2.png!
> h3.  Further Discussion
> * The requirement of storage type for local/log directories may not be 
> satisfied on heterogeneous clusters. To achieve a global optimum, the scheduler 
> should be aware of and manage disk resources. 
> [YARN-2139|https://issues.apache.org/jira/browse/YARN-2139] is close to that 
> but seems not to support multiple storage 
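
A hedged sketch of the [SSD]-prefix parsing proposed above for 
yarn.nodemanager.local-dirs (enum and method names are illustrative, not the 
patch's actual StorageLocation API):

{code:java}
public class StorageLocationSketch {
  enum StorageType { DISK, SSD }  // DISK is the default, per the proposal

  // Parse one configured dir such as "[SSD]/disk1/nm-local-dir"; entries
  // without a bracketed prefix default to DISK.
  static void parseAndPrint(String dir) {
    StorageType type = StorageType.DISK;
    String path = dir;
    if (dir.startsWith("[")) {
      int end = dir.indexOf(']');
      type = StorageType.valueOf(dir.substring(1, end).toUpperCase());
      path = dir.substring(end + 1);
    }
    System.out.println(type + " -> " + path);
  }

  public static void main(String[] args) {
    String localDirs =
        "[SSD]/disk1/nm-local-dir,/disk2/nm-local-dir,/disk3/nm-local-dir";
    for (String dir : localDirs.split(",")) {
      parseAndPrint(dir);  // SSD -> /disk1/..., then DISK for the other two
    }
  }
}
{code}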

[jira] [Commented] (YARN-5995) Add RMStateStore metrics to monitor all RMStateStoreEventTypeTransition performance

2017-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16069371#comment-16069371
 ] 

Hadoop QA commented on YARN-5995:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} YARN-5995 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-5995 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12861538/YARN-5995.0004.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16277/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add RMStateStore metrics to monitor all RMStateStoreEventTypeTransition 
> performance
> ---
>
> Key: YARN-5995
> URL: https://issues.apache.org/jira/browse/YARN-5995
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: metrics, resourcemanager
>Affects Versions: 2.7.1
> Environment: CentOS7.2 Hadoop-2.7.1 
>Reporter: zhangyubiao
>Assignee: zhangyubiao
>  Labels: patch
> Attachments: YARN-5995.0001.patch, YARN-5995.0002.patch, 
> YARN-5995.0003.patch, YARN-5995.0004.patch, YARN-5995.patch
>
>
> Add RMStateStore metrics to monitor all RMStateStoreEventTypeTransition 
> performance
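
A minimal sketch of what such a metric could look like with Hadoop's metrics2 
library (names are illustrative; the attached patches may organize this 
differently):

{code:java}
import org.apache.hadoop.metrics2.lib.MetricsRegistry;
import org.apache.hadoop.metrics2.lib.MutableRate;

public class RMStateStoreMetricsSketch {
  private final MetricsRegistry registry = new MetricsRegistry("RMStateStore");
  // A rate metric tracks the count and average time of the sampled operation.
  private final MutableRate storeAppTransition =
      registry.newRate("StoreAppTransition");

  // Time one state-store transition and record the elapsed milliseconds.
  void runStoreAppTransition(Runnable transition) {
    long start = System.nanoTime();
    transition.run();
    storeAppTransition.add((System.nanoTime() - start) / 1000000);
  }
}
{code}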






[jira] [Updated] (YARN-6686) Support for adding and removing queue mappings

2017-06-29 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-6686:

Attachment: YARN-6686-YARN-5734.001.patch

> Support for adding and removing queue mappings
> --
>
> Key: YARN-6686
> URL: https://issues.apache.org/jira/browse/YARN-6686
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-6686-YARN-5734.001.patch
>
>
> Right now the capacity scheduler uses UserGroupMappingPlacementRule to determine 
> queue mappings. This rule stores mappings in 
> {{yarn.scheduler.capacity.queue-mappings}}. For users with a large number of 
> mappings, adding or removing queue mappings becomes infeasible.
> Need to come up with a way to add/remove individual mappings, for any/all 
> different configured placement rules.






[jira] [Commented] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-06-29 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16069351#comment-16069351
 ] 

Sunil G commented on YARN-2113:
---

+1 on branch-2 and branch-2.7 patches. I will wait for jenkins to come in for 
branch-2 also.

> Add cross-user preemption within CapacityScheduler's leaf-queue
> ---
>
> Key: YARN-2113
> URL: https://issues.apache.org/jira/browse/YARN-2113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil G
> Fix For: 3.0.0-alpha4
>
> Attachments: IntraQueue Preemption-Impact Analysis.pdf, 
> TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt,
>  YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch, 
> YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch, 
> YARN-2113.0007.patch, YARN-2113.0008.patch, YARN-2113.0009.patch, 
> YARN-2113.0010.patch, YARN-2113.0011.patch, YARN-2113.0012.patch, 
> YARN-2113.0013.patch, YARN-2113.0014.patch, YARN-2113.0015.patch, 
> YARN-2113.0016.patch, YARN-2113.0017.patch, YARN-2113.0018.patch, 
> YARN-2113.0019.patch, YARN-2113.apply.onto.0012.ericp.patch, 
> YARN-2113.branch-2.0019.patch, YARN-2113.branch-2.0020.patch, 
> YARN-2113.branch-2.8.0019.patch, YARN-2113.branch-2.8.0020.patch, YARN-2113 
> Intra-QueuePreemption Behavior.pdf, YARN-2113.v0.patch
>
>
> Preemption today only works across queues and moves around resources across 
> queues per demand and usage. We should also have user-level preemption within 
> a queue, to balance capacity across users in a predictable manner.






[jira] [Commented] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16069315#comment-16069315
 ] 

Hadoop QA commented on YARN-2113:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 
49s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
14s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
14s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
29s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
44s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_131 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  2m 33s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn-jdk1.8.0_131 with JDK v1.8.0_131 
generated 1 new + 59 unchanged - 1 fixed = 60 total (was 60) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 9 new + 156 unchanged - 0 fixed = 165 total (was 156) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
28s{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.7.0_131. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 47s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}215m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_131 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.scheduler.TestAppSchedulingInfo |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
| JDK v1.7.0_131 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | 

[jira] [Commented] (YARN-6322) Disable queue refresh when configuration mutation is enabled

2017-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16069294#comment-16069294
 ] 

Hadoop QA commented on YARN-6322:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 1s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 40m  1s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6322 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875158/YARN-6322-YARN-5734.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0430e424a718 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5734 / a48d475 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16275/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16275/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16275/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Disable queue refresh when configuration mutation is enabled
> 
>
> Key: YARN-6322
> URL: https://issues.apache.org/jira/browse/YARN-6322
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>  

[jira] [Updated] (YARN-6752) Display reserved resources in web UI per application

2017-06-29 Thread Abdullah Yousufi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdullah Yousufi updated YARN-6752:
---
Attachment: YARN-6752.002.patch

> Display reserved resources in web UI per application
> 
>
> Key: YARN-6752
> URL: https://issues.apache.org/jira/browse/YARN-6752
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
> Attachments: YARN-6752.001.patch, YARN-6752.002.patch
>
>
> Show the number of reserved memory and vcores for each application






[jira] [Updated] (YARN-6622) Document Docker work as experimental

2017-06-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-6622:
--
Target Version/s: 2.8.1, 3.0.0-beta1  (was: 2.8.1, 3.0.0-alpha4)

> Document Docker work as experimental
> 
>
> Key: YARN-6622
> URL: https://issues.apache.org/jira/browse/YARN-6622
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: documentation
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-6622.001.patch
>
>
> We should update the Docker support documentation calling out the Docker work 
> as experimental.






[jira] [Updated] (YARN-6426) Compress ZK YARN keys to scale up (especially AppStateData)

2017-06-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-6426:
--
Target Version/s: 2.8.1, 2.9.0, 3.0.0-beta1  (was: 2.9.0, 2.8.1, 
3.0.0-alpha4)

> Compress ZK YARN keys to scale up (especially AppStateData)
> --
>
> Key: YARN-6426
> URL: https://issues.apache.org/jira/browse/YARN-6426
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.0.0-alpha2
>Reporter: Roni Burd
>Assignee: Roni Burd
>  Labels: patch
> Attachments: zkcompression.patch
>
>
> ZK today stores the protobuf files uncompressed. This is not an issue except 
> that if a customer job has thousands of files, AppStateData will store the 
> user context as a string with multiple URLs and it is easy to get to 1MB or 
> more. 
> This can put unnecessary strain on ZK and make the process slow. 
> The proposal is to simply compress protobufs before sending them to ZK






[jira] [Updated] (YARN-6127) Add support for work preserving NM restart when AMRMProxy is enabled

2017-06-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-6127:
--
Fix Version/s: 2.9
   3.0.0-alpha4

> Add support for work preserving NM restart when AMRMProxy is enabled
> 
>
> Key: YARN-6127
> URL: https://issues.apache.org/jira/browse/YARN-6127
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: amrmproxy, nodemanager
>Reporter: Subru Krishnan
>Assignee: Botong Huang
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: YARN-6127-branch-2.v1.patch, YARN-6127.v1.patch, 
> YARN-6127.v2.patch, YARN-6127.v3.patch, YARN-6127.v4.patch
>
>
> YARN-1336 added the ability to restart NM without losing any running 
> containers. In a Federated YARN environment, there's additional state in the 
> {{AMRMProxy}} to allow for spanning across multiple sub-clusters, so we need 
> to enhance {{AMRMProxy}} to support work-preserving restart.






[jira] [Updated] (YARN-6127) Add support for work preserving NM restart when AMRMProxy is enabled

2017-06-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-6127:
--
Fix Version/s: (was: 2.9)
   2.9.0

> Add support for work preserving NM restart when AMRMProxy is enabled
> 
>
> Key: YARN-6127
> URL: https://issues.apache.org/jira/browse/YARN-6127
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: amrmproxy, nodemanager
>Reporter: Subru Krishnan
>Assignee: Botong Huang
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: YARN-6127-branch-2.v1.patch, YARN-6127.v1.patch, 
> YARN-6127.v2.patch, YARN-6127.v3.patch, YARN-6127.v4.patch
>
>
> YARN-1336 added the ability to restart NM without losing any running 
> containers. In a Federated YARN environment, there's additional state in the 
> {{AMRMProxy}} to allow for spanning across multiple sub-clusters, so we need 
> to enhance {{AMRMProxy}} to support work-preserving restart.






[jira] [Commented] (YARN-6751) Display reserved resources in web UI per queue

2017-06-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16069260#comment-16069260
 ] 

Hudson commented on YARN-6751:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11955 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11955/])
YARN-6751. Display reserved resources in web UI per queue (Contributed by 
Abdullah Yousufi) (templedf: rev ec975197799417a1d5727dedc395fe6c15c30eb2)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSQueue.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/FairSchedulerQueueInfo.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/FairSchedulerPage.java


> Display reserved resources in web UI per queue
> --
>
> Key: YARN-6751
> URL: https://issues.apache.org/jira/browse/YARN-6751
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, webapp
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: YARN-6751.001.patch
>
>
> Show the number of reserved memory and vcores in each queue block






[jira] [Commented] (YARN-6594) [API] Introduce SchedulingRequest object

2017-06-29 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16069248#comment-16069248
 ] 

Konstantinos Karanasos commented on YARN-6594:
--

Thanks [~sunilg] -- yep, I will make sure I add tests/examples of how to use 
the new API.

> [API] Introduce SchedulingRequest object
> 
>
> Key: YARN-6594
> URL: https://issues.apache.org/jira/browse/YARN-6594
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6594.001.patch
>
>
> This JIRA introduces a new SchedulingRequest object.
> It will be part of the {{AllocateRequest}} and will be used to define sizing 
> (e.g., number of allocations, size of allocations) and placement constraints 
> for allocations.
> Applications can use either this new object (when rich placement constraints 
> are required) or the existing {{ResourceRequest}} object.
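
Since the object is only being introduced by the attached patch, the following 
is a purely illustrative sketch of the sizing-plus-constraints payload described 
above (field and class names are hypothetical, not the patch's actual API):

{code:java}
public class SchedulingRequestSketch {
  static final class SchedulingRequest {
    final int numAllocations;         // sizing: how many allocations
    final long memoryMb;              // sizing: size of each allocation
    final int vcores;
    final String placementConstraint; // e.g. an anti-affinity expression

    SchedulingRequest(int numAllocations, long memoryMb, int vcores,
        String placementConstraint) {
      this.numAllocations = numAllocations;
      this.memoryMb = memoryMb;
      this.vcores = vcores;
      this.placementConstraint = placementConstraint;
    }
  }

  public static void main(String[] args) {
    // Four 2GB/2-vcore allocations, spread across racks; such a request
    // would be carried in AllocateRequest alongside ResourceRequests.
    SchedulingRequest req =
        new SchedulingRequest(4, 2048, 2, "spread across racks");
    System.out.println(req.numAllocations + " x " + req.memoryMb + "MB");
  }
}
{code}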






[jira] [Updated] (YARN-6751) Display reserved resources in web UI per queue

2017-06-29 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-6751:
---
Component/s: webapp

> Display reserved resources in web UI per queue
> --
>
> Key: YARN-6751
> URL: https://issues.apache.org/jira/browse/YARN-6751
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, webapp
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
> Attachments: YARN-6751.001.patch
>
>
> Show the number of reserved memory and vcores in each queue block






[jira] [Updated] (YARN-6751) Display reserved resources in web UI per queue

2017-06-29 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-6751:
---
Target Version/s: 2.9.0, 3.0.0-alpha4  (was: 3.0.0-alpha4)

> Display reserved resources in web UI per queue
> --
>
> Key: YARN-6751
> URL: https://issues.apache.org/jira/browse/YARN-6751
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
> Attachments: YARN-6751.001.patch
>
>
> Show the number of reserved memory and vcores in each queue block






[jira] [Updated] (YARN-6751) Display reserved resources in web UI per queue

2017-06-29 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-6751:
---
Affects Version/s: 2.8.1
   3.0.0-alpha3

> Display reserved resources in web UI per queue
> --
>
> Key: YARN-6751
> URL: https://issues.apache.org/jira/browse/YARN-6751
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
> Attachments: YARN-6751.001.patch
>
>
> Show the number of reserved memory and vcores in each queue block






[jira] [Commented] (YARN-6322) Disable queue refresh when configuration mutation is enabled

2017-06-29 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16069219#comment-16069219
 ] 

Jonathan Hung commented on YARN-6322:
-

002 fixes checkstyle

> Disable queue refresh when configuration mutation is enabled
> 
>
> Key: YARN-6322
> URL: https://issues.apache.org/jira/browse/YARN-6322
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-6322-YARN-5734.001.patch, 
> YARN-6322-YARN-5734.002.patch
>
>
> When configuration mutation is enabled, the configuration store is the source 
> of truth. Calling {{-refreshQueues}} won't work as intended, so we should 
> just disable this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6322) Disable queue refresh when configuration mutation is enabled

2017-06-29 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-6322:

Attachment: YARN-6322-YARN-5734.002.patch

> Disable queue refresh when configuration mutation is enabled
> 
>
> Key: YARN-6322
> URL: https://issues.apache.org/jira/browse/YARN-6322
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-6322-YARN-5734.001.patch, 
> YARN-6322-YARN-5734.002.patch
>
>
> When configuration mutation is enabled, the configuration store is the source 
> of truth. Calling {{-refreshQueues}} won't work as intended, so we should 
> just disable this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6752) Display reserved resources in web UI per application

2017-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16069217#comment-16069217
 ] 

Hadoop QA commented on YARN-6752:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 2 new + 
83 unchanged - 0 fixed = 85 total (was 83) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m 28s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesApps |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6752 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875126/YARN-6752.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ea642cb68c0e 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 441378e |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16274/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16274/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 

[jira] [Commented] (YARN-6322) Disable queue refresh when configuration mutation is enabled

2017-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16069213#comment-16069213
 ] 

Hadoop QA commented on YARN-6322:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
49s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
27s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 1 new + 853 unchanged - 0 fixed = 854 total (was 853) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 51m 51s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.monitor.TestSchedulingMonitor |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6322 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875136/YARN-6322-YARN-5734.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f7024dd05c73 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5734 / a48d475 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/16273/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16273/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16273/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16273/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (YARN-6694) Add certain envs to the default yarn.nodemanager.env-whitelist

2017-06-29 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16069188#comment-16069188
 ] 

Jian He commented on YARN-6694:
---

Yeah, because certain downstream projects like Hive depend on these envs.

> Add certain envs to the default yarn.nodemanager.env-whitelist
> --
>
> Key: YARN-6694
> URL: https://issues.apache.org/jira/browse/YARN-6694
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6694.1.patch
>
>
> Certain envs can be added in the  yarn.nodemanager.env-whitelist
> such as: HADOOP_HOME,PATH,LANG,TZ 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6751) Display reserved resources in web UI per queue

2017-06-29 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16069185#comment-16069185
 ] 

Daniel Templeton commented on YARN-6751:


Patch looks good to me.  I tested it locally, and it works as expected.  The 
unit test failures are either unrelated or not reproducible locally.  +1

> Display reserved resources in web UI per queue
> --
>
> Key: YARN-6751
> URL: https://issues.apache.org/jira/browse/YARN-6751
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
> Attachments: YARN-6751.001.patch
>
>
> Show the number of reserved memory and vcores in each queue block



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6754) Fair scheduler docs should explain meaning of weight=0 for a queue

2017-06-29 Thread Daniel Templeton (JIRA)
Daniel Templeton created YARN-6754:
--

 Summary: Fair scheduler docs should explain meaning of weight=0 
for a queue
 Key: YARN-6754
 URL: https://issues.apache.org/jira/browse/YARN-6754
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: docs
Affects Versions: 3.0.0-alpha3, 2.8.1
Reporter: Daniel Templeton






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6753) Expose more ContainerImpl states from NM in ContainerStateProto

2017-06-29 Thread Roni Burd (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16069164#comment-16069164
 ] 

Roni Burd commented on YARN-6753:
-

I was wondering how people feel about exposing some of the internals to allow 
debugging tools and other AMs to get insight into the NM. I understand that 
there can be reluctance to take on a code dependency, but by using protobuf the 
dependency should be weak. There are other internals I would like to expose 
over time as well, such as the working directory.

[~wangda] [~jianhe], any thoughts? Do you have any recommendation on what would 
be a "kosher" way of doing this?
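
For illustration, a rough sketch of what surfacing a few more states might look 
like on the Java side; the state names and their mapping from the internal 
ContainerImpl state machine are assumptions for discussion, not the final API:
{code}
// Hypothetical sketch only: the public enum today roughly mirrors
// ContainerStateProto (C_NEW / C_RUNNING / C_COMPLETE); the proposal would
// additionally surface some NM-internal ContainerImpl states.
public enum ContainerState {
  NEW,
  LOCALIZING,           // proposed: container resources are being localized
  LOCALIZED,            // proposed: localization done, not yet launched
  RUNNING,
  EXITED_WITH_FAILURE,  // proposed: exited with a non-zero status
  COMPLETE
}
{code}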


> Expose more ContainerImpl states from NM in ContainerStateProto 
> 
>
> Key: YARN-6753
> URL: https://issues.apache.org/jira/browse/YARN-6753
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Roni Burd
>Priority: Minor
>
> The current NM protobuf definition exposes a subset of the NM internal state 
> via ContainerStateProto.
> We are currently building tools that can make use of more fine-grained 
> states such as LOCALIZING, LOCALIZED, EXIT_WITH_FAILURES, etc.
> The proposal is to add more internal states in the API.
> I'm not sure if this is considered an incompatible change or not.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5067) Support specifying resources for AM containers in SLS

2017-06-29 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16069160#comment-16069160
 ] 

Haibo Chen commented on YARN-5067:
--

Thanks [~yufeigu] for the update! One minor question: why are we making the 
LOGGER in AMSimulator and MRAMSimulator non-static now? Otherwise, the patch 
looks good to me.
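
(For context on the trade-off: a static logger is created once and is always 
tagged with the declaring class, while a non-static logger initialized from 
{{getClass()}} is tagged with the runtime subclass. A minimal sketch, assuming 
slf4j-style logging; this is background, not the patch itself:)
{code}
// Static: one shared logger, always tagged "AMSimulator".
private static final Logger LOG =
    LoggerFactory.getLogger(AMSimulator.class);

// Non-static: per-instance logger tagged with the runtime subclass,
// e.g. "MRAMSimulator" for an MRAMSimulator instance.
private final Logger log = LoggerFactory.getLogger(getClass());
{code}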

> Support specifying resources for AM containers in SLS
> -
>
> Key: YARN-5067
> URL: https://issues.apache.org/jira/browse/YARN-5067
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Yufei Gu
> Attachments: YARN-5067.001.patch, YARN-5067.002.patch, 
> YARN-5067.003.patch
>
>
> Now resource of application masters in SLS is hardcoded to mem=1024 vcores=1.
> We should be able to specify AM resources from trace input file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6742) Minor mistakes in "The YARN Service Registry" docs

2017-06-29 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16069147#comment-16069147
 ] 

Shane Kumpf commented on YARN-6742:
---

Thanks for the patch, [~Cyl]. 

While the patch looks good to me, a cursory check of this documentation shows a 
couple of other minor grammatical errors ("but is not sufficient for our 
purposes since it It does not allow" and "also be able register certificates", 
for example). Would you mind giving the whole document a review in an attempt 
to catch what we can in one commit?

> Minor mistakes in "The YARN Service Registry" docs
> --
>
> Key: YARN-6742
> URL: https://issues.apache.org/jira/browse/YARN-6742
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha3
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Trivial
> Attachments: YARN-6742-001.patch
>
>
> There are minor mistakes in The YARN Service Registry docs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6694) Add certain envs to the default yarn.nodemanager.env-whitelist

2017-06-29 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16069119#comment-16069119
 ] 

Shane Kumpf commented on YARN-6694:
---

Thanks for the patch, [~jianhe]. Can you help me understand the use case here 
and why we want to make this the default (versus having administrators add 
these for their cluster as needed)?

> Add certain envs to the default yarn.nodemanager.env-whitelist
> --
>
> Key: YARN-6694
> URL: https://issues.apache.org/jira/browse/YARN-6694
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6694.1.patch
>
>
> Certain envs can be added in the  yarn.nodemanager.env-whitelist
> such as: HADOOP_HOME,PATH,LANG,TZ 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4605) Spelling mistake in the help message of "yarn applicationattempt" command

2017-06-29 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-4605:
---
Component/s: (was: yarn)

> Spelling mistake in the help message of "yarn applicationattempt" command
> -
>
> Key: YARN-4605
> URL: https://issues.apache.org/jira/browse/YARN-4605
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client
>Affects Versions: 2.4.0
>Reporter: Manjunath Ballur
>Assignee: Weiwei Yang
>Priority: Trivial
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: YARN-4605.001.patch, YARN-4605.002.patch, 
> YARN-4605.003.patch
>
>
> Using YARN CLI, when the user types "yarn applicationattempt", the help 
> message for the "applicationattempt" command is shown. 
> Here, the following line has a spelling mistake. "application" is misspelled 
> as "aplication":
> -listList application attempts for aplication.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6322) Disable queue refresh when configuration mutation is enabled

2017-06-29 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16069091#comment-16069091
 ] 

Jonathan Hung commented on YARN-6322:
-

Attached a patch to disable -refreshQueues if the scheduler is a 
MutableConfScheduler and its configuration is mutable.

[~xgong] and [~leftnoteasy], do you mind taking a look? Thanks!
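
A minimal sketch of the guard being described, for reviewers skimming the 
thread; the exact method names below are assumptions, not the patch itself:
{code}
// Hypothetical check in the admin -refreshQueues path: when the scheduler's
// configuration is mutable, the configuration store is the source of truth,
// so the refresh is rejected.
if (scheduler instanceof MutableConfScheduler
    && ((MutableConfScheduler) scheduler).isConfigurationMutable()) {
  throw new IOException("-refreshQueues is not supported when scheduler "
      + "configuration mutation is enabled; update the configuration store "
      + "instead.");
}
{code}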

> Disable queue refresh when configuration mutation is enabled
> 
>
> Key: YARN-6322
> URL: https://issues.apache.org/jira/browse/YARN-6322
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-6322-YARN-5734.001.patch
>
>
> When configuration mutation is enabled, the configuration store is the source 
> of truth. Calling {{-refreshQueues}} won't work as intended, so we should 
> just disable this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6322) Disable queue refresh when configuration mutation is enabled

2017-06-29 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-6322:

Attachment: YARN-6322-YARN-5734.001.patch

> Disable queue refresh when configuration mutation is enabled
> 
>
> Key: YARN-6322
> URL: https://issues.apache.org/jira/browse/YARN-6322
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-6322-YARN-5734.001.patch
>
>
> When configuration mutation is enabled, the configuration store is the source 
> of truth. Calling {{-refreshQueues}} won't work as intended, so we should 
> just disable this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6752) Display reserved resources in web UI per application

2017-06-29 Thread Abdullah Yousufi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdullah Yousufi updated YARN-6752:
---
Attachment: YARN-6752.001.patch

> Display reserved resources in web UI per application
> 
>
> Key: YARN-6752
> URL: https://issues.apache.org/jira/browse/YARN-6752
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
> Attachments: YARN-6752.001.patch
>
>
> Show the number of reserved memory and vcores for each application



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-06-29 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-2113:
-
Attachment: YARN-2113.branch-2.8.0020.patch
YARN-2113.branch-2.0020.patch

Thanks [~sunilg]. Sorry about the debug comment. I have removed it from both 
branch-2 and branch-2.8 patches and have uploaded new versions.

> Add cross-user preemption within CapacityScheduler's leaf-queue
> ---
>
> Key: YARN-2113
> URL: https://issues.apache.org/jira/browse/YARN-2113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil G
> Fix For: 3.0.0-alpha4
>
> Attachments: IntraQueue Preemption-Impact Analysis.pdf, 
> TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt,
>  YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch, 
> YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch, 
> YARN-2113.0007.patch, YARN-2113.0008.patch, YARN-2113.0009.patch, 
> YARN-2113.0010.patch, YARN-2113.0011.patch, YARN-2113.0012.patch, 
> YARN-2113.0013.patch, YARN-2113.0014.patch, YARN-2113.0015.patch, 
> YARN-2113.0016.patch, YARN-2113.0017.patch, YARN-2113.0018.patch, 
> YARN-2113.0019.patch, YARN-2113.apply.onto.0012.ericp.patch, 
> YARN-2113.branch-2.0019.patch, YARN-2113.branch-2.0020.patch, 
> YARN-2113.branch-2.8.0019.patch, YARN-2113.branch-2.8.0020.patch, YARN-2113 
> Intra-QueuePreemption Behavior.pdf, YARN-2113.v0.patch
>
>
> Preemption today only works across queues and moves around resources across 
> queues per demand and usage. We should also have user-level preemption within 
> a queue, to balance capacity across users in a predictable manner.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6594) [API] Introduce SchedulingRequest object

2017-06-29 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16069020#comment-16069020
 ] 

Sunil G commented on YARN-6594:
---

Thanks for the good work here, folks!
I am starting to look at this now.

[~kkaranasos] At a high level, it would be really great if we could have some 
examples showing how to use these constructs for a few real use cases. 

> [API] Introduce SchedulingRequest object
> 
>
> Key: YARN-6594
> URL: https://issues.apache.org/jira/browse/YARN-6594
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6594.001.patch
>
>
> This JIRA introduces a new SchedulingRequest object.
> It will be part of the {{AllocateRequest}} and will be used to define sizing 
> (e.g., number of allocations, size of allocations) and placement constraints 
> for allocations.
> Applications can use either this new object (when rich placement constraints 
> are required) or the existing {{ResourceRequest}} object.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6746) SchedulerUtils.checkResourceRequestMatchingNodePartition() is dead code

2017-06-29 Thread Deepti Sawhney (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16069018#comment-16069018
 ] 

Deepti Sawhney commented on YARN-6746:
--

Hello Daniel,

I have removed the method. Kindly advise on further directions regarding
Java and Maven.  I would like to build the code before moving it to Jenkins.

Regards,
Deepti.

On Wed, Jun 28, 2017 at 3:31 PM, Daniel Templeton (JIRA) wrote:

> Strictly speaking, you should build the project to make sure that nothing
> broke.  To do that, you'll have to install Java and Maven.  If you want to
> go there, let me know, and I'll point you to some more detailed
> instructions.  The pre-commit job run by Jenkins will do that for you,
> though, so you can just leave it to Jenkins if you prefer.
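
For reference, the usual commands for that workflow look something like this 
(standard Hadoop Maven invocations, not instructions specific to this patch):
{code}
# from the Hadoop source root: compile everything, skipping tests
mvn clean install -DskipTests

# then, inside hadoop-yarn-server-resourcemanager: run the related tests
mvn test -Dtest=TestSchedulerUtils
{code}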



> SchedulerUtils.checkResourceRequestMatchingNodePartition() is dead code
> ---
>
> Key: YARN-6746
> URL: https://issues.apache.org/jira/browse/YARN-6746
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Daniel Templeton
>Assignee: Deepti Sawhney
>Priority: Minor
>  Labels: newbie
>
> The function is unused.  It also appears to be broken.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6428) Queue AM limit is not honored in CS always

2017-06-29 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16069011#comment-16069011
 ] 

Sunil G commented on YARN-6428:
---

Thanks [~naganarasimha...@apache.org] and [~bibinchundatt] for the 
clarifications.

If we use a very big long value, it could overflow once we multiply by 10^6. We 
moved from int to long to support the negative rounding cases, so I am not keen 
on leaving a loophole there either. Instead, could we use BigDecimal to set the 
precision? I can see that we are using BigInteger in a few places, so it may be 
fine. I will wait for [~leftnoteasy] here.
My alternate proposal is something like this:
{code}
// Assuming "by" is the double value being rounded (RoundingMode is
// java.math.RoundingMode):
BigDecimal bd = new BigDecimal(by).setScale(2, RoundingMode.HALF_EVEN);
by = bd.doubleValue();
{code}
I will wait for more comments too at this point.
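
To make the overflow concern above concrete, a tiny self-contained illustration 
(the numbers are made up for demonstration):
{code}
import java.math.BigDecimal;
import java.math.RoundingMode;

public class OverflowDemo {
  public static void main(String[] args) {
    long big = Long.MAX_VALUE / 1000;      // a very large long value
    System.out.println(big * 1_000_000);   // silently wraps around (overflow)

    // BigDecimal sidesteps the multiplication entirely and makes the
    // precision explicit instead:
    double by = new BigDecimal(big)
        .setScale(2, RoundingMode.HALF_EVEN).doubleValue();
    System.out.println(by);                // well-defined, no overflow
  }
}
{code}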

> Queue AM limit is not honored  in CS always
> ---
>
> Key: YARN-6428
> URL: https://issues.apache.org/jira/browse/YARN-6428
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-6428.0001.patch, YARN-6428.0002.patch
>
>
> Steps to reproduce
> 
> Setup cluster with 40 GB and 40 vcores with 4 Node managers with 10 GB each.
> Configure 100% to default queue as capacity and max am limit as 10 %
> Minimum scheduler memory and vcore as 512,1
> *Expected* 
> AM limit 4096 and 4 vcores
> *Actual*
> AM limit 4096+512 and 4+1 vcore



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6752) Display reserved resources in web UI per application

2017-06-29 Thread Abdullah Yousufi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdullah Yousufi updated YARN-6752:
---
Summary: Display reserved resources in web UI per application  (was: 
Display reserved resources in web UI per application for fair scheduler)

> Display reserved resources in web UI per application
> 
>
> Key: YARN-6752
> URL: https://issues.apache.org/jira/browse/YARN-6752
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
>
> Show the number of reserved memory and vcores for each application



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6594) [API] Introduce SchedulingRequest object

2017-06-29 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068989#comment-16068989
 ] 

Jian He commented on YARN-6594:
---

Sounds good. If SchedulingRequestBuilder has the same kind of method for taking 
the resource amount and the number of containers, then it would be equivalent.
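
In other words, something like the following hypothetical builder shape (purely 
illustrative; the names and signatures were still under discussion in this 
JIRA):
{code}
// Hypothetical: the builder accepts the per-allocation resource and the
// number of allocations directly and constructs the ResourceSizing itself.
SchedulingRequest req = SchedulingRequest.newBuilder()
    .allocationRequestId(1)
    .resourceSizing(Resource.newInstance(1024, 1), 5)  // 5 x <1 GB, 1 vcore>
    .build();
{code}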

> [API] Introduce SchedulingRequest object
> 
>
> Key: YARN-6594
> URL: https://issues.apache.org/jira/browse/YARN-6594
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6594.001.patch
>
>
> This JIRA introduces a new SchedulingRequest object.
> It will be part of the {{AllocateRequest}} and will be used to define sizing 
> (e.g., number of allocations, size of allocations) and placement constraints 
> for allocations.
> Applications can use either this new object (when rich placement constraints 
> are required) or the existing {{ResourceRequest}} object.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6753) Expose more ContainerImpl states from NM in ContainerStateProto

2017-06-29 Thread Roni Burd (JIRA)
Roni Burd created YARN-6753:
---

 Summary: Expose more ContainerImpl states from NM in 
ContainerStateProto 
 Key: YARN-6753
 URL: https://issues.apache.org/jira/browse/YARN-6753
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Reporter: Roni Burd
Priority: Minor


The current NM protobuf definition exposes a subset of the NM internal state 
via ContainerStateProto.

We are currently building tools that can make use of more fine-grained states 
such as LOCALIZING, LOCALIZED, EXIT_WITH_FAILURES, etc.

The proposal is to add more internal states in the API.

I'm not sure if this is considered an incompatible change or not.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6751) Display reserved resources in web UI per queue

2017-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068967#comment-16068967
 ] 

Hadoop QA commented on YARN-6751:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
18s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 2 new + 857 unchanged - 0 fixed = 859 total (was 857) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 44s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisher |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6751 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875097/YARN-6751.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8ab4a4d3adf8 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5a75f73 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/16270/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16270/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16270/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16270/console |

[jira] [Comment Edited] (YARN-4161) Capacity Scheduler : Assign single or multiple containers per heart beat driven by configuration

2017-06-29 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068963#comment-16068963
 ] 

Sunil G edited comment on YARN-4161 at 6/29/17 8:51 PM:


bq.Yes. so you ok with existing two config fields?
Yes, we can do two configs. Any other option with a single config param may 
confuse users more.

bq.Yes, now checking is little confused. I refactored to more clearer.
Makes sense.

bq.For now, I more prefer to do it in scheduler-level
Let's do this at the scheduler level for now. I think we can extend it to the 
queue level a little later when it's really needed; otherwise it may complicate 
things a little more.


was (Author: sunilg):
bq,Yes. so you ok with existing two config fields?
Yes. We can do two configs. Any other options with one config param may confuse 
user more.

bq.Yes, now checking is little confused. I refactored to more clearer.
Makes sene

bq.For now, I more prefer to do it in scheduler-level
Lets do this at scheduler level for now. I think we can extend to queue level a 
little later when its really needed. Else it may complicate a little more.

> Capacity Scheduler : Assign single or multiple containers per heart beat 
> driven by configuration
> 
>
> Key: YARN-4161
> URL: https://issues.apache.org/jira/browse/YARN-4161
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacity scheduler
>Reporter: Mayank Bansal
>Assignee: Mayank Bansal
>  Labels: oct16-medium
> Attachments: YARN-4161.patch, YARN-4161.patch.1
>
>
> Capacity Scheduler right now schedules multiple containers per heartbeat if 
> there are more resources available on the node.
> This approach works fine; however, in some cases it does not distribute the 
> load across the cluster, so cluster throughput suffers. I am adding a 
> feature, driven by configuration, by which we can control the number of 
> containers assigned per heartbeat.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4161) Capacity Scheduler : Assign single or multiple containers per heart beat driven by configuration

2017-06-29 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068963#comment-16068963
 ] 

Sunil G commented on YARN-4161:
---

bq.Yes. so you ok with existing two config fields?
Yes, we can do two configs. Any other option with a single config param may 
confuse users more.

bq.Yes, now checking is little confused. I refactored to more clearer.
Makes sense.

bq.For now, I more prefer to do it in scheduler-level
Let's do this at the scheduler level for now. I think we can extend it to the 
queue level a little later when it's really needed; otherwise it may complicate 
things a little more.
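
As a sketch of the scheduler-level shape being discussed (both property names 
below are placeholders, since the final names were still being settled in this 
JIRA; the fair scheduler's existing assignmultiple/max.assign pair is the 
precedent):
{code}
// Hypothetical reading of the two scheduler-level config fields:
boolean assignMultiple = conf.getBoolean(
    "yarn.scheduler.capacity.assign-multiple.enabled", false);
int maxAssignPerHeartbeat = conf.getInt(
    "yarn.scheduler.capacity.max-assign-per-heartbeat", -1);  // -1 = no limit
{code}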

> Capacity Scheduler : Assign single or multiple containers per heart beat 
> driven by configuration
> 
>
> Key: YARN-4161
> URL: https://issues.apache.org/jira/browse/YARN-4161
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacity scheduler
>Reporter: Mayank Bansal
>Assignee: Mayank Bansal
>  Labels: oct16-medium
> Attachments: YARN-4161.patch, YARN-4161.patch.1
>
>
> Capacity Scheduler right now schedules multiple containers per heartbeat if 
> there are more resources available on the node.
> This approach works fine; however, in some cases it does not distribute the 
> load across the cluster, so cluster throughput suffers. I am adding a 
> feature, driven by configuration, by which we can control the number of 
> containers assigned per heartbeat.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6694) Add certain envs to the default yarn.nodemanager.env-whitelist

2017-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068961#comment-16068961
 ] 

Hadoop QA commented on YARN-6694:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
22s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6694 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875104/YARN-6694.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 83a717b88568 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5a75f73 |
| Default Java | 1.8.0_131 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16271/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16271/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add certain envs to the default yarn.nodemanager.env-whitelist
> --
>
> Key: YARN-6694
> URL: https://issues.apache.org/jira/browse/YARN-6694
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6694.1.patch
>
>
> Certain envs can be added in the  yarn.nodemanager.env-whitelist
> such as: HADOOP_HOME,PATH,LANG,TZ 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-06-29 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068943#comment-16068943
 ] 

Sunil G commented on YARN-2113:
---

Thanks [~eepayne] for the effort. Really appreciate it.
I am good with the branch-2 patch. Tested locally with a basic setup; looks 
good.

One minor nit in the branch-2 patch: I can see a commented-out code snippet 
starting with {{EEP1:}}. Could you please remove it and provide a new patch, so 
I can go ahead and commit it to branch-2?

One more question: I can see a branch-2.8 patch as well. Does that need to be 
rebased too? Thank you.

> Add cross-user preemption within CapacityScheduler's leaf-queue
> ---
>
> Key: YARN-2113
> URL: https://issues.apache.org/jira/browse/YARN-2113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil G
> Fix For: 3.0.0-alpha4
>
> Attachments: IntraQueue Preemption-Impact Analysis.pdf, 
> TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt,
>  YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch, 
> YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch, 
> YARN-2113.0007.patch, YARN-2113.0008.patch, YARN-2113.0009.patch, 
> YARN-2113.0010.patch, YARN-2113.0011.patch, YARN-2113.0012.patch, 
> YARN-2113.0013.patch, YARN-2113.0014.patch, YARN-2113.0015.patch, 
> YARN-2113.0016.patch, YARN-2113.0017.patch, YARN-2113.0018.patch, 
> YARN-2113.0019.patch, YARN-2113.apply.onto.0012.ericp.patch, 
> YARN-2113.branch-2.0019.patch, YARN-2113.branch-2.8.0019.patch, YARN-2113 
> Intra-QueuePreemption Behavior.pdf, YARN-2113.v0.patch
>
>
> Preemption today only works across queues and moves around resources across 
> queues per demand and usage. We should also have user-level preemption within 
> a queue, to balance capacity across users in a predictable manner.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6694) Add certain envs to the default yarn.nodemanager.env-whitelist

2017-06-29 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068926#comment-16068926
 ] 

Jian He commented on YARN-6694:
---

Attached a simple patch that adds the aforementioned properties to the default 
yarn.nodemanager.env-whitelist.
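
For context, the whitelist is a plain comma-separated property that the NM 
consults when building a container's environment. A minimal sketch of how it is 
read, assuming the pre-patch default list; the four names at the end are the 
proposed additions:
{code}
Configuration conf = new YarnConfiguration();
String whitelist = conf.get(
    YarnConfiguration.NM_ENV_WHITELIST,  // "yarn.nodemanager.env-whitelist"
    "JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,"
        + "CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,"
        + "HADOOP_HOME,PATH,LANG,TZ");   // <- additions proposed here
for (String env : whitelist.split(",")) {
  // forward the NM's value of 'env' into the container's environment
}
{code}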

> Add certain envs to the default yarn.nodemanager.env-whitelist
> --
>
> Key: YARN-6694
> URL: https://issues.apache.org/jira/browse/YARN-6694
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6694.1.patch
>
>
> Certain envs can be added in the  yarn.nodemanager.env-whitelist
> such as: HADOOP_HOME,PATH,LANG,TZ 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6694) Add certain envs to the default yarn.nodemanager.env-whitelist

2017-06-29 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6694:
--
Attachment: YARN-6694.1.patch

> Add certain envs to the default yarn.nodemanager.env-whitelist
> --
>
> Key: YARN-6694
> URL: https://issues.apache.org/jira/browse/YARN-6694
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6694.1.patch
>
>
> Certain envs can be added in the  yarn.nodemanager.env-whitelist
> such as: HADOOP_HOME,PATH,LANG,TZ 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6752) Display reserved resources in web UI per application for fair scheduler

2017-06-29 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-6752:
--
Description: Show the number of reserved memory and vcores for each 
application

> Display reserved resources in web UI per application for fair scheduler
> ---
>
> Key: YARN-6752
> URL: https://issues.apache.org/jira/browse/YARN-6752
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
>
> Show the number of reserved memory and vcores for each application



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6752) Display reserved resources in web UI per application for fair scheduler

2017-06-29 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-6752:
--
Environment: (was: Show the number of reserved memory and vcores for 
each application)

> Display reserved resources in web UI per application for fair scheduler
> ---
>
> Key: YARN-6752
> URL: https://issues.apache.org/jira/browse/YARN-6752
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6731) Add ability to export scheduler configuration XML

2017-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068916#comment-16068916
 ] 

Hadoop QA commented on YARN-6731:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
20s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m  5s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 86m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerNodeLabelUpdate
 |
|   | hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6731 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875093/YARN-6731-YARN-5734.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ecb98b5eb767 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5734 / a48d475 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16269/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16269/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16269/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add ability to export scheduler configuration XML
> -
>
> Key: YARN-6731
> URL: https://issues.apache.org/jira/browse/YARN-6731

[jira] [Commented] (YARN-6594) [API] Introduce SchedulingRequest object

2017-06-29 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068874#comment-16068874
 ] 

Konstantinos Karanasos commented on YARN-6594:
--

Hi [~jianhe], glad to hear you will be using the new API.
Re: your question on nesting the ResourceSizing object, I did it this way to 
align with what we discussed offline with 
[~asuresh]/[~curino]/[~chris.douglas]/[~subru]/[~vinodkv]/[~leftnoteasy].
Essentially, it is cleaner to separate the sizing object. We may also need to 
add more fields to it later, which would otherwise make the SchedulingRequest 
object too bulky.
To make it easier for applications, as you point out, I will make sure the 
next version adds a constructor to SchedulingRequest that automatically 
creates the ResourceSizing object, given a number of allocations and the 
resources (see the sketch below).
Hope it makes sense.
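For illustration, a minimal sketch of what such a convenience constructor 
could look like; the field set and signature are assumptions based on this 
discussion, not the committed API:
{code}
// Hedged sketch -- fields and signature are assumptions, not the committed
// API. Applications pass a plain (numAllocations, resources) pair and the
// nested ResourceSizing is built internally.
public SchedulingRequest(Priority priority, int numAllocations,
    Resource resources) {
  this.priority = priority;
  this.resourceSizing = ResourceSizing.newInstance(numAllocations, resources);
}
{code}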


> [API] Introduce SchedulingRequest object
> 
>
> Key: YARN-6594
> URL: https://issues.apache.org/jira/browse/YARN-6594
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6594.001.patch
>
>
> This JIRA introduces a new SchedulingRequest object.
> It will be part of the {{AllocateRequest}} and will be used to define sizing 
> (e.g., number of allocations, size of allocations) and placement constraints 
> for allocations.
> Applications can use either this new object (when rich placement constraints 
> are required) or the existing {{ResourceRequest}} object.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6752) Display reserved resources in web UI per application for fair scheduler

2017-06-29 Thread Abdullah Yousufi (JIRA)
Abdullah Yousufi created YARN-6752:
--

 Summary: Display reserved resources in web UI per application for 
fair scheduler
 Key: YARN-6752
 URL: https://issues.apache.org/jira/browse/YARN-6752
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: fairscheduler
 Environment: Show the amount of reserved memory and the number of reserved 
vcores for each application
Reporter: Abdullah Yousufi
Assignee: Abdullah Yousufi






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6610) DominantResourceCalculator.getResourceAsValue() dominant param is no longer appropriate

2017-06-29 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068865#comment-16068865
 ] 

Daniel Templeton commented on YARN-6610:


After a long discussion with [~kasha], I think I'm now convinced that there's 
no correctness issue with using DRF in a multi-resource configuration.  There 
is, of course, the performance issue created by increasing the number of 
dimensions in the resource comparisons, but that's an issue to be resolved for 
resource types in general.  I think the posted patch should be fine.
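For illustration, a minimal sketch of how the comparison generalizes once 
there are _n_ resources: compute each usage vector's shares and compare them 
from most to least dominant. The helpers below are an exposition aid, not the 
posted patch:
{code}
// Illustrative only -- not the YARN-6610 patch. With n resource dimensions,
// rank each usage vector by its shares instead of picking a single
// dominant/subordinate pair.
static double[] sortedShares(long[] used, long[] clusterTotal) {
  double[] shares = new double[used.length];
  for (int i = 0; i < used.length; i++) {
    shares[i] = clusterTotal[i] == 0 ? 0.0 : (double) used[i] / clusterTotal[i];
  }
  java.util.Arrays.sort(shares);  // ascending; the last entry is dominant
  return shares;
}

static int compareShares(double[] lhs, double[] rhs) {
  for (int i = lhs.length - 1; i >= 0; i--) {  // most dominant share first
    int cmp = Double.compare(lhs[i], rhs[i]);
    if (cmp != 0) {
      return cmp;
    }
  }
  return 0;
}
{code}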

> DominantResourceCalculator.getResourceAsValue() dominant param is no longer 
> appropriate
> ---
>
> Key: YARN-6610
> URL: https://issues.apache.org/jira/browse/YARN-6610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-6610.001.patch
>
>
> The {{dominant}} param assumes there are only two resources, i.e. true means 
> to compare the dominant, and false means to compare the subordinate.  Now 
> that there are _n_ resources, this parameter no longer makes sense.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6751) Display reserved resources in web UI per queue

2017-06-29 Thread Abdullah Yousufi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdullah Yousufi updated YARN-6751:
---
Attachment: YARN-6751.001.patch

> Display reserved resources in web UI per queue
> --
>
> Key: YARN-6751
> URL: https://issues.apache.org/jira/browse/YARN-6751
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
> Attachments: YARN-6751.001.patch
>
>
> Show the amount of reserved memory and the number of reserved vcores in each 
> queue block



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6409) RM does not blacklist node for AM launch failures

2017-06-29 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068806#comment-16068806
 ] 

Haibo Chen commented on YARN-6409:
--

Thanks, Ray, for the suggestion. That sounds good to me. [~jlowe] [~djp] Do 
you have experience with this issue? We'd appreciate your input on which 
approach is preferred. We have seen this issue with our customers fairly often.

> RM does not blacklist node for AM launch failures
> -
>
> Key: YARN-6409
> URL: https://issues.apache.org/jira/browse/YARN-6409
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha2
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-6409.00.patch, YARN-6409.01.patch, 
> YARN-6409.02.patch, YARN-6409.03.patch
>
>
> Currently, node blacklisting upon AM failures only handles failures that 
> happen after the AM container is launched (see 
> RMAppAttemptImpl.shouldCountTowardsNodeBlacklisting()).  However, AM launch 
> can also fail if the NM where the AM container is allocated goes 
> unresponsive.  Because this case is not handled, the scheduler may continue 
> to allocate AM containers on that same NM for subsequent app attempts. 
> {code}
> Application application_1478721503753_0870 failed 2 times due to Error 
> launching appattempt_1478721503753_0870_02. Got exception: 
> java.io.IOException: Failed on local exception: java.io.IOException: 
> java.net.SocketTimeoutException: 6 millis timeout while waiting for 
> channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
> local=/17.111.179.113:46702 remote=*.me.com/17.111.178.125:8041]; Host 
> Details : local host is: "*.me.com/17.111.179.113"; destination host is: 
> "*.me.com":8041; 
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772) 
> at org.apache.hadoop.ipc.Client.call(Client.java:1475) 
> at org.apache.hadoop.ipc.Client.call(Client.java:1408) 
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
>  
> at com.sun.proxy.$Proxy86.startContainers(Unknown Source) 
> at 
> org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
>  
> at sun.reflect.GeneratedMethodAccessor155.invoke(Unknown Source) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:497) 
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
>  
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
>  
> at com.sun.proxy.$Proxy87.startContainers(Unknown Source) 
> at 
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:120)
>  
> at 
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:256)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  
> at java.lang.Thread.run(Thread.java:745) 
> Caused by: java.io.IOException: java.net.SocketTimeoutException: 6 millis 
> timeout while waiting for channel to be ready for read. ch : 
> java.nio.channels.SocketChannel[connected local=/17.111.179.113:46702 
> remote=*.me.com/17.111.178.125:8041] 
> at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:687) 
> at java.security.AccessController.doPrivileged(Native Method) 
> at javax.security.auth.Subject.doAs(Subject.java:422) 
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>  
> at 
> org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:650)
>  
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:738) 
> at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375) 
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1524) 
> at org.apache.hadoop.ipc.Client.call(Client.java:1447) 
> ... 15 more 
> Caused by: java.net.SocketTimeoutException: 6 millis timeout while 
> waiting for channel to be ready for read. ch : 
> java.nio.channels.SocketChannel[connected local=/17.111.179.113:46702 
> remote=*.me.com/17.111.178.125:8041] 
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164) 
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
> at java.io.FilterInputStream.read(FilterInputStream.java:133) 
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
> at java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
> at java.io.DataInputStream.readInt(DataInputStream.java:387) 

[jira] [Commented] (YARN-6409) RM does not blacklist node for AM launch failures

2017-06-29 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068796#comment-16068796
 ] 

Ray Chiang commented on YARN-6409:
--

What about making this a configuration setting? It seems like this shows up 
more on larger clusters (higher chance of a network outage, more nodes to 
deal with, etc.).
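For illustration, a minimal sketch of such a configuration gate; the property 
name and default below are assumptions, not part of any posted patch:
{code}
// Hypothetical knob (org.apache.hadoop.conf.Configuration) -- the key name
// and default are assumptions for discussion.
public static final String AM_LAUNCH_FAILURE_BLACKLISTING_ENABLED =
    "yarn.resourcemanager.am.launch-failure-blacklisting.enabled";
public static final boolean
    DEFAULT_AM_LAUNCH_FAILURE_BLACKLISTING_ENABLED = false;

// Sketch: consulted from shouldCountTowardsNodeBlacklisting() so that AM
// launch failures (e.g. an unresponsive NM) can also blacklist the node.
static boolean blacklistOnAmLaunchFailure(Configuration conf) {
  return conf.getBoolean(AM_LAUNCH_FAILURE_BLACKLISTING_ENABLED,
      DEFAULT_AM_LAUNCH_FAILURE_BLACKLISTING_ENABLED);
}
{code}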

> RM does not blacklist node for AM launch failures
> -
>
> Key: YARN-6409
> URL: https://issues.apache.org/jira/browse/YARN-6409
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha2
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-6409.00.patch, YARN-6409.01.patch, 
> YARN-6409.02.patch, YARN-6409.03.patch
>
>
> Currently, node blacklisting upon AM failures only handles failures that 
> happen after the AM container is launched (see 
> RMAppAttemptImpl.shouldCountTowardsNodeBlacklisting()).  However, AM launch 
> can also fail if the NM where the AM container is allocated goes 
> unresponsive.  Because this case is not handled, the scheduler may continue 
> to allocate AM containers on that same NM for subsequent app attempts. 
> {code}
> Application application_1478721503753_0870 failed 2 times due to Error 
> launching appattempt_1478721503753_0870_02. Got exception: 
> java.io.IOException: Failed on local exception: java.io.IOException: 
> java.net.SocketTimeoutException: 6 millis timeout while waiting for 
> channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
> local=/17.111.179.113:46702 remote=*.me.com/17.111.178.125:8041]; Host 
> Details : local host is: "*.me.com/17.111.179.113"; destination host is: 
> "*.me.com":8041; 
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772) 
> at org.apache.hadoop.ipc.Client.call(Client.java:1475) 
> at org.apache.hadoop.ipc.Client.call(Client.java:1408) 
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
>  
> at com.sun.proxy.$Proxy86.startContainers(Unknown Source) 
> at 
> org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
>  
> at sun.reflect.GeneratedMethodAccessor155.invoke(Unknown Source) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:497) 
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
>  
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
>  
> at com.sun.proxy.$Proxy87.startContainers(Unknown Source) 
> at 
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:120)
>  
> at 
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:256)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  
> at java.lang.Thread.run(Thread.java:745) 
> Caused by: java.io.IOException: java.net.SocketTimeoutException: 6 millis 
> timeout while waiting for channel to be ready for read. ch : 
> java.nio.channels.SocketChannel[connected local=/17.111.179.113:46702 
> remote=*.me.com/17.111.178.125:8041] 
> at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:687) 
> at java.security.AccessController.doPrivileged(Native Method) 
> at javax.security.auth.Subject.doAs(Subject.java:422) 
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>  
> at 
> org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:650)
>  
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:738) 
> at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375) 
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1524) 
> at org.apache.hadoop.ipc.Client.call(Client.java:1447) 
> ... 15 more 
> Caused by: java.net.SocketTimeoutException: 6 millis timeout while 
> waiting for channel to be ready for read. ch : 
> java.nio.channels.SocketChannel[connected local=/17.111.179.113:46702 
> remote=*.me.com/17.111.178.125:8041] 
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164) 
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
> at java.io.FilterInputStream.read(FilterInputStream.java:133) 
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
> at java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
> at java.io.DataInputStream.readInt(DataInputStream.java:387) 

[jira] [Commented] (YARN-6731) Add ability to export scheduler configuration XML

2017-06-29 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068769#comment-16068769
 ] 

Jonathan Hung commented on YARN-6731:
-

002 fixes checkstyle.

> Add ability to export scheduler configuration XML
> -
>
> Key: YARN-6731
> URL: https://issues.apache.org/jira/browse/YARN-6731
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-6731-YARN-5734.001.patch, 
> YARN-6731-YARN-5734.002.patch
>
>
> This is useful for debugging/cluster migration/peace of mind.
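For illustration, the core of such an export can lean on the standard 
{{Configuration.writeXml}} API; a minimal sketch, with the understanding that 
the endpoint plumbing in the actual patch may differ:
{code}
import java.io.IOException;
import java.io.OutputStream;
import org.apache.hadoop.conf.Configuration;

// Minimal sketch: dump a scheduler Configuration as XML. The method
// writeXml(OutputStream) is standard Hadoop API; exposing this through an
// RM endpoint is the part the patch adds.
public final class SchedulerConfExport {
  public static void export(Configuration schedulerConf, OutputStream out)
      throws IOException {
    schedulerConf.writeXml(out);
  }
}
{code}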



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6731) Add ability to export scheduler configuration XML

2017-06-29 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-6731:

Attachment: YARN-6731-YARN-5734.002.patch

> Add ability to export scheduler configuration XML
> -
>
> Key: YARN-6731
> URL: https://issues.apache.org/jira/browse/YARN-6731
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-6731-YARN-5734.001.patch, 
> YARN-6731-YARN-5734.002.patch
>
>
> This is useful for debugging/cluster migration/peace of mind.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6748) Expose scheduling policy for each queue in FairScheduler

2017-06-29 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068619#comment-16068619
 ] 

Yufei Gu edited comment on YARN-6748 at 6/29/17 6:29 PM:
-

YARN-5929 adds the queue scheduling policy to JMX; you can also get it from 
the RM REST API. It may be a good idea to add the scheduling policy to the 
WebUI. cc [~dan...@cloudera.com] [~ayousufi].


was (Author: yufeigu):
YARN-5929 adds queue scheduling policy to jmx. You can get it from web service 
as well. It may be a good idea to add scheduling policy to WebUI. cc 
[~dan...@cloudera.com] [~ayousufi].

> Expose scheduling policy for each queue in FairScheduler
> 
>
> Key: YARN-6748
> URL: https://issues.apache.org/jira/browse/YARN-6748
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Akira Ajisaka
>
> The scheduling policy for FairScheduler cannot be obtained via the CLI, the 
> WebUI, or metrics, so we cannot verify that the configured policy has taken 
> effect.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6751) Display reserved resources in web UI per queue

2017-06-29 Thread Abdullah Yousufi (JIRA)
Abdullah Yousufi created YARN-6751:
--

 Summary: Display reserved resources in web UI per queue
 Key: YARN-6751
 URL: https://issues.apache.org/jira/browse/YARN-6751
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: fairscheduler
Reporter: Abdullah Yousufi
Assignee: Abdullah Yousufi


Show the amount of reserved memory and the number of reserved vcores in each 
queue block



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6748) Expose scheduling policy for each queue in FairScheduler

2017-06-29 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068619#comment-16068619
 ] 

Yufei Gu commented on YARN-6748:


YARN-5929 adds the queue scheduling policy to JMX; you can also get it from 
the RM web service. It may be a good idea to add the scheduling policy to the 
WebUI (see the sketch below). cc [~dan...@cloudera.com] [~ayousufi].
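For illustration, a minimal sketch that reads the scheduler info (including 
the per-queue schedulingPolicy field from YARN-5929) over the RM REST 
endpoint {{/ws/v1/cluster/scheduler}}; the RM address is a placeholder:
{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Minimal sketch: print the scheduler info JSON from the RM REST API.
// "rm-host:8088" is a placeholder for the RM web address.
public final class SchedulerInfoFetch {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://rm-host:8088/ws/v1/cluster/scheduler");
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(url.openStream(), StandardCharsets.UTF_8))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line);  // JSON includes schedulingPolicy per queue
      }
    }
  }
}
{code}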

> Expose scheduling policy for each queue in FairScheduler
> 
>
> Key: YARN-6748
> URL: https://issues.apache.org/jira/browse/YARN-6748
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Akira Ajisaka
>
> The scheduling policy for FairScheduler cannot be obtained via the CLI, the 
> WebUI, or metrics, so we cannot verify that the configured policy has taken 
> effect.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6749) TestAppSchedulingInfo.testPriorityAccounting fails consistently

2017-06-29 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-6749:

Affects Version/s: (was: 2.9)

> TestAppSchedulingInfo.testPriorityAccounting fails consistently
> ---
>
> Key: YARN-6749
> URL: https://issues.apache.org/jira/browse/YARN-6749
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Eric Badger
>Assignee: Naganarasimha G R
> Attachments: YARN-6749-branch-2.8.001.patch
>
>
> Broken by YARN-6467



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6749) TestAppSchedulingInfo.testPriorityAccounting fails consistently

2017-06-29 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-6749:

Attachment: YARN-6749-branch-2.8.001.patch

Thanks, [~ebadger], for pointing this out. My bad, I should have taken the 
extra step of verifying after the confirmation. Anyway, uploading patches for 
the branches where it failed.

> TestAppSchedulingInfo.testPriorityAccounting fails consistently
> ---
>
> Key: YARN-6749
> URL: https://issues.apache.org/jira/browse/YARN-6749
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.2, 2.9
>Reporter: Eric Badger
>Assignee: Naganarasimha G R
> Attachments: YARN-6749-branch-2.8.001.patch
>
>
> Broken by YARN-6467



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6750) Add a configuration to cap how much a NM can be overallocated

2017-06-29 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6750:
-
Issue Type: Sub-task  (was: Bug)
Parent: YARN-1011

> Add a configuration to cap how much a NM can be overallocated
> -
>
> Key: YARN-6750
> URL: https://issues.apache.org/jira/browse/YARN-6750
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6750) Add a configuration to cap how much a NM can be overallocated

2017-06-29 Thread Haibo Chen (JIRA)
Haibo Chen created YARN-6750:


 Summary: Add a configuration to cap how much a NM can be 
overallocated
 Key: YARN-6750
 URL: https://issues.apache.org/jira/browse/YARN-6750
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Reporter: Haibo Chen
Assignee: Haibo Chen






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4161) Capacity Scheduler : Assign single or multiple containers per heart beat driven by configuration

2017-06-29 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068542#comment-16068542
 ] 

Wei Yan commented on YARN-4161:
---

{quote}
I got your point. I was trying to point to an existing configuration in CS 
named yarn.scheduler.capacity.per-node-heartbeat.maximum-offswitch-assignments. 
I thought of keeping this syntax, and it seems of a similar nature to me.
{quote}
I see. In that case, I agree with changing to the same syntax as the 
offswitch one.

{quote}
Thanks for clarifying. I understood the reason behind that. On another note, 
maxAssign=0 is meaningless, correct? So a value less than 0 could be treated 
as infinite, with a boolean variable to switch the feature on/off.
{quote}
Yes. So are you OK with the existing two config fields?

{quote}
I think assignMultiple should not impact the existing 
"maximum-offswitch-assignments" feature. I can see that the current patch will 
skip it when assignMultiple is configured as false.
{quote}
Yes, the current check is a little confusing. I refactored it to be clearer:
{code}
  private boolean canAllocateMore(CSAssignment assignment, int offswitchCount,
      int assignedContainers) {
    // Current assignment shouldn't be empty
    if (assignment == null
        || Resources.equals(assignment.getResource(), Resources.none())) {
      return false;
    }

    // offswitch assignment should be under threshold
    if (offswitchCount >= offswitchPerHeartbeatLimit) {
      return false;
    }

    // assignMultiple should be ON, and assignedContainers should be under
    // threshold
    return !(!assignMultiple
        || (maxAllocationPerNode != -1
            && maxAllocationPerNode <= assignedContainers));
  }
{code}

{quote}
Could we rename assignMultiple to something like 
assign-multiple-containers-per-heartbeat?
{quote}
As mentioned above, I'll rename the configs to follow CS's offswitch 
convention (see the sketch below).
{quote}
Are you planning to consider this per queue as well? Or only at the CS level 
for now?
{quote}
For now, I'd prefer to do it at the scheduler level, since this feature works 
at the cluster level to balance the workload across NMs. I don't have a 
concrete case that needs queue-level control. Do you have cases where a 
per-queue config would be better?
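For illustration, how the two config fields might be set once renamed to 
follow the offswitch convention; the key names below are assumptions pending 
the renamed patch:
{code}
// Hypothetical key names following the existing
// yarn.scheduler.capacity.per-node-heartbeat.* convention -- assumptions
// pending the renamed patch (org.apache.hadoop.conf.Configuration).
Configuration conf = new Configuration();
conf.setBoolean(
    "yarn.scheduler.capacity.per-node-heartbeat.multiple-assignments-enabled",
    true);
// Per the discussion above, a negative value could mean "no limit".
conf.setInt(
    "yarn.scheduler.capacity.per-node-heartbeat.maximum-container-assignments",
    10);
{code}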


> Capacity Scheduler : Assign single or multiple containers per heart beat 
> driven by configuration
> 
>
> Key: YARN-4161
> URL: https://issues.apache.org/jira/browse/YARN-4161
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacity scheduler
>Reporter: Mayank Bansal
>Assignee: Mayank Bansal
>  Labels: oct16-medium
> Attachments: YARN-4161.patch, YARN-4161.patch.1
>
>
> Capacity Scheduler right now schedules multiple containers per heartbeat if 
> more resources are available on the node.
> This approach works fine; however, in some cases it does not distribute the 
> load across the cluster, so cluster throughput suffers. I am adding a 
> feature, driven by configuration, so that we can control the number of 
> containers assigned per heartbeat.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6749) TestAppSchedulingInfo.testPriorityAccounting fails consistently

2017-06-29 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R reassigned YARN-6749:
---

Assignee: Naganarasimha G R

> TestAppSchedulingInfo.testPriorityAccounting fails consistently
> ---
>
> Key: YARN-6749
> URL: https://issues.apache.org/jira/browse/YARN-6749
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.2, 2.9
>Reporter: Eric Badger
>Assignee: Naganarasimha G R
>
> Broken by YARN-6467



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6749) TestAppSchedulingInfo.testPriorityAccounting fails consistently

2017-06-29 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-6749:

Affects Version/s: 2.9
   2.8.2

> TestAppSchedulingInfo.testPriorityAccounting fails consistently
> ---
>
> Key: YARN-6749
> URL: https://issues.apache.org/jira/browse/YARN-6749
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.2, 2.9
>Reporter: Eric Badger
>Assignee: Naganarasimha G R
>
> Broken by YARN-6467



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6749) TestAppSchedulingInfo.testPriorityAccounting fails consistently

2017-06-29 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068465#comment-16068465
 ] 

Eric Badger commented on YARN-6749:
---

[~Naganarasimha], [~maniraj...@gmail.com], could you take a look at this? It 
looks like this showed up in the precommit builds but was previously deemed 
not to be an issue with the patch.

> TestAppSchedulingInfo.testPriorityAccounting fails consistently
> ---
>
> Key: YARN-6749
> URL: https://issues.apache.org/jira/browse/YARN-6749
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>
> Broken by YARN-6467



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6749) TestAppSchedulingInfo.testPriorityAccounting fails consistently

2017-06-29 Thread Eric Badger (JIRA)
Eric Badger created YARN-6749:
-

 Summary: TestAppSchedulingInfo.testPriorityAccounting fails 
consistently
 Key: YARN-6749
 URL: https://issues.apache.org/jira/browse/YARN-6749
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Eric Badger


Broken by YARN-6467



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5670) Add support for Docker image clean up

2017-06-29 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf reassigned YARN-5670:
-

Assignee: Shane Kumpf  (was: luhuichun)

> Add support for Docker image clean up
> -
>
> Key: YARN-5670
> URL: https://issues.apache.org/jira/browse/YARN-5670
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Shane Kumpf
>
> Regarding Docker image localization, we also need a way to clean up 
> old/stale Docker images to save storage space. We may extend the deletion 
> service to utilize "docker rmi" (image removal) to do this.
> This is related to YARN-3854 and may depend on its implementation. Please 
> refer to YARN-3854 for Docker image localization details.
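For illustration, a minimal sketch of such a deletion-service extension 
reduced to a bare process call; the helper name and error handling are 
assumptions:
{code}
// Illustrative only: remove a stale image via "docker rmi" and surface the
// exit code to the deletion service.
static int removeImage(String image)
    throws java.io.IOException, InterruptedException {
  Process p = new ProcessBuilder("docker", "rmi", image)
      .redirectErrorStream(true)
      .start();
  return p.waitFor();  // non-zero exit means removal failed (e.g. image in use)
}
{code}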



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-3854) Add localization support for docker images

2017-06-29 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf reassigned YARN-3854:
-

Assignee: Shane Kumpf  (was: luhuichun)

> Add localization support for docker images
> --
>
> Key: YARN-3854
> URL: https://issues.apache.org/jira/browse/YARN-3854
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: Shane Kumpf
> Attachments: YARN-3854-branch-2.8.001.patch, 
> YARN-3854_Localization_support_for_Docker_image_v1.pdf, 
> YARN-3854_Localization_support_for_Docker_image_v2.pdf, 
> YARN-3854_Localization_support_for_Docker_image_v3.pdf
>
>
> We need the ability to localize docker images when those images aren't 
> already available locally. There are various approaches that could be used 
> here, with different trade-offs/issues: image archives on HDFS + docker 
> load, docker pull during the localization phase, or (automatic) docker pull 
> during the run/launch phase. 
> We also need the ability to clean up old/stale, unused images. 
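For illustration, the "docker pull during the localization phase" option 
reduced to a bare process call; credentials, retries, and the localizer 
integration are omitted, and the helper itself is an assumption:
{code}
// Illustrative only: pull an image at localization time and surface the
// exit code. Real integration would handle credentials, retries, and
// cleanup of old/stale images.
static int pullImage(String image)
    throws java.io.IOException, InterruptedException {
  Process p = new ProcessBuilder("docker", "pull", image)
      .redirectErrorStream(true)
      .start();
  return p.waitFor();  // non-zero exit means the pull failed
}
{code}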



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3854) Add localization support for docker images

2017-06-29 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068234#comment-16068234
 ] 

Shane Kumpf commented on YARN-3854:
---

Thanks, [~luhuichun]!

> Add localization support for docker images
> --
>
> Key: YARN-3854
> URL: https://issues.apache.org/jira/browse/YARN-3854
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: luhuichun
> Attachments: YARN-3854-branch-2.8.001.patch, 
> YARN-3854_Localization_support_for_Docker_image_v1.pdf, 
> YARN-3854_Localization_support_for_Docker_image_v2.pdf, 
> YARN-3854_Localization_support_for_Docker_image_v3.pdf
>
>
> We need the ability to localize docker images when those images aren't 
> already available locally. There are various approaches that could be used 
> here, with different trade-offs/issues: image archives on HDFS + docker 
> load, docker pull during the localization phase, or (automatic) docker pull 
> during the run/launch phase. 
> We also need the ability to clean up old/stale, unused images. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5534) Allow whitelisted volume mounts

2017-06-29 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068231#comment-16068231
 ] 

Shane Kumpf commented on YARN-5534:
---

Thanks, [~luhuichun]!

> Allow whitelisted volume mounts 
> 
>
> Key: YARN-5534
> URL: https://issues.apache.org/jira/browse/YARN-5534
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: luhuichun
>Assignee: luhuichun
> Attachments: YARN-5534.001.patch, YARN-5534.002.patch
>
>
> 1. Introduction 
> Mounting files or directories from the host is one way of passing 
> configuration and other information into a docker container. 
> We could allow the user to set a list of mounts in the environment of 
> ContainerLaunchContext (e.g. /dir1:/targetdir1,/dir2:/targetdir2). 
> These would be mounted read-only to the specified target locations. This has 
> been resolved in YARN-4595.
> 2. Problem Definition
> But mounting arbitrary volumes into a Docker container can be a security 
> risk.
> 3. Possible Solutions
> One approach to provide safe mounts is to allow the cluster administrator to 
> configure a set of parent directories as white-listed mount directories.
> Add a property named yarn.nodemanager.volume-mounts.white-list; when the 
> container executor does mount checking, only the allowed directories or 
> their sub-directories can be mounted. 
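For illustration, the proposed white-list check reduced to a path-prefix 
test; the property name comes from the proposal above, while the helper 
itself is an assumption:
{code}
import java.nio.file.Path;
import java.nio.file.Paths;

// Illustrative check for the proposed yarn.nodemanager.volume-mounts.white-list
// property: a requested mount is allowed only if it is one of the white-listed
// parent directories or a sub-directory of one.
public final class MountWhitelist {
  public static boolean isAllowed(String requested, String[] whitelist) {
    Path req = Paths.get(requested).normalize();
    for (String parent : whitelist) {
      if (req.startsWith(Paths.get(parent).normalize())) {
        return true;
      }
    }
    return false;
  }
}
{code}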



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5534) Allow whitelisted volume mounts

2017-06-29 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf reassigned YARN-5534:
-

Assignee: Shane Kumpf  (was: luhuichun)

> Allow whitelisted volume mounts 
> 
>
> Key: YARN-5534
> URL: https://issues.apache.org/jira/browse/YARN-5534
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: luhuichun
>Assignee: Shane Kumpf
> Attachments: YARN-5534.001.patch, YARN-5534.002.patch
>
>
> 1. Introduction 
> Mounting files or directories from the host is one way of passing 
> configuration and other information into a docker container. 
> We could allow the user to set a list of mounts in the environment of 
> ContainerLaunchContext (e.g. /dir1:/targetdir1,/dir2:/targetdir2). 
> These would be mounted read-only to the specified target locations. This has 
> been resolved in YARN-4595.
> 2. Problem Definition
> But mounting arbitrary volumes into a Docker container can be a security 
> risk.
> 3. Possible Solutions
> One approach to provide safe mounts is to allow the cluster administrator to 
> configure a set of parent directories as white-listed mount directories.
> Add a property named yarn.nodemanager.volume-mounts.white-list; when the 
> container executor does mount checking, only the allowed directories or 
> their sub-directories can be mounted. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6748) Expose scheduling policy for each queue in FairScheduler

2017-06-29 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created YARN-6748:
---

 Summary: Expose scheduling policy for each queue in FairScheduler
 Key: YARN-6748
 URL: https://issues.apache.org/jira/browse/YARN-6748
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: fairscheduler
Reporter: Akira Ajisaka


The scheduling policy for FairScheduler cannot be obtained via the CLI, the 
WebUI, or metrics, so we cannot verify that the configured policy has taken 
effect.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5534) Allow whitelisted volume mounts

2017-06-29 Thread luhuichun (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068018#comment-16068018
 ] 

luhuichun commented on YARN-5534:
-

[~shaneku...@gmail.com] OK, that works for me.

> Allow whitelisted volume mounts 
> 
>
> Key: YARN-5534
> URL: https://issues.apache.org/jira/browse/YARN-5534
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: luhuichun
>Assignee: luhuichun
> Attachments: YARN-5534.001.patch, YARN-5534.002.patch
>
>
> 1. Introduction 
> Mounting files or directories from the host is one way of passing 
> configuration and other information into a docker container. 
> We could allow the user to set a list of mounts in the environment of 
> ContainerLaunchContext (e.g. /dir1:/targetdir1,/dir2:/targetdir2). 
> These would be mounted read-only to the specified target locations. This has 
> been resolved in YARN-4595.
> 2. Problem Definition
> But mounting arbitrary volumes into a Docker container can be a security 
> risk.
> 3. Possible Solutions
> One approach to provide safe mounts is to allow the cluster administrator to 
> configure a set of parent directories as white-listed mount directories.
> Add a property named yarn.nodemanager.volume-mounts.white-list; when the 
> container executor does mount checking, only the allowed directories or 
> their sub-directories can be mounted. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3854) Add localization support for docker images

2017-06-29 Thread luhuichun (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068015#comment-16068015
 ] 

luhuichun commented on YARN-3854:
-

[~shaneku...@gmail.com] Hi Shane, I'm OK with the transfer.

> Add localization support for docker images
> --
>
> Key: YARN-3854
> URL: https://issues.apache.org/jira/browse/YARN-3854
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: luhuichun
> Attachments: YARN-3854-branch-2.8.001.patch, 
> YARN-3854_Localization_support_for_Docker_image_v1.pdf, 
> YARN-3854_Localization_support_for_Docker_image_v2.pdf, 
> YARN-3854_Localization_support_for_Docker_image_v3.pdf
>
>
> We need the ability to localize docker images when those images aren't 
> already available locally. There are various approaches that could be used 
> here, with different trade-offs/issues: image archives on HDFS + docker 
> load, docker pull during the localization phase, or (automatic) docker pull 
> during the run/launch phase. 
> We also need the ability to clean up old/stale, unused images. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org