[jira] [Commented] (YARN-7116) CapacityScheduler Web UI: Queue's AM usage is always show on per-user's AM usage.

2017-08-28 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144745#comment-16144745
 ] 

Bibin A Chundatt commented on YARN-7116:


Thank you [~leftnoteasy] for the patch. IIUC, the AM limit shown is now the 
partition's AM limit. The AM limit could also be changed. Thoughts?

> CapacityScheduler Web UI: Queue's AM usage is always show on per-user's AM 
> usage.
> -
>
> Key: YARN-7116
> URL: https://issues.apache.org/jira/browse/YARN-7116
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, webapp
>Affects Versions: 2.9.0, 2.8.1, 3.0.0-alpha4
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7116.001.patch
>
>
> On CapacityScheduler's web UI, the AM usage of different users belonging to the 
> same queue always shows the queue's AM usage. 
> The root cause is in CapacitySchedulerPage: 
> {code}
> tbody.tr().td(userInfo.getUsername())
> .td(userInfo.getUserResourceLimit().toString())
> .td(resourcesUsed.toString())
> .td(resourceUsages.getAMLimit().toString())
> .td(amUsed.toString())
> .td(Integer.toString(userInfo.getNumActiveApplications()))
> .td(Integer.toString(userInfo.getNumPendingApplications()))._();
> {code}
> Instead of amUsed.toString(), it should use userInfo.getAmUsed().
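A minimal sketch of the suggested one-line change (assuming {{UserInfo}} exposes the per-user AM usage via {{getAmUsed()}}, as the description above suggests; an illustration, not the actual YARN-7116 patch):
{code}
// Sketch: render the per-user AM usage from UserInfo instead of the
// queue-level amUsed value (accessor name taken from the description above).
tbody.tr().td(userInfo.getUsername())
    .td(userInfo.getUserResourceLimit().toString())
    .td(resourcesUsed.toString())
    .td(resourceUsages.getAMLimit().toString())
    .td(userInfo.getAmUsed().toString())   // was: amUsed.toString()
    .td(Integer.toString(userInfo.getNumActiveApplications()))
    .td(Integer.toString(userInfo.getNumPendingApplications()))._();
{code}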






[jira] [Commented] (YARN-7113) Clean up packaging and dependencies for yarn-native-services

2017-08-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144736#comment-16144736
 ] 

Hadoop QA commented on YARN-7113:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} yarn-native-services Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 0s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
18s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 5s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 
25s{color} | {color:green} yarn-native-services passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-assemblies hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
40s{color} | {color:green} yarn-native-services passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
21s{color} | {color:green} root: The patch generated 0 new + 106 unchanged - 
107 fixed = 106 total (was 213) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 13m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
24s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
10s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
8s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-assemblies hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 14s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}150m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7113 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884165/YARN-7113-yarn-native-services.02.patch
 |
| Optional 

[jira] [Updated] (YARN-7088) Fix application start time and add submit time to UIs

2017-08-28 Thread Abdullah Yousufi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdullah Yousufi updated YARN-7088:
---
Attachment: YARN-7088.004.patch

> Fix application start time and add submit time to UIs
> -
>
> Key: YARN-7088
> URL: https://issues.apache.org/jira/browse/YARN-7088
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
> Attachments: YARN-7088.001.patch, YARN-7088.002.patch, 
> YARN-7088.003.patch, YARN-7088.004.patch
>
>
> Currently, the start time in the old and new UIs actually shows the app 
> submission time. There should be two separate fields, one for the app's 
> submission time and one for its start time, as well as the elapsed pending 
> time between the two.
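For illustration only (field names and values are hypothetical, not tied to the patch), the relationship among the three values described above:
{code}
// Hypothetical sketch of the three timestamps the description distinguishes.
long submitTimeMs     = 1503950000000L;              // when the app was submitted
long startTimeMs      = 1503950042000L;              // when the app actually started
long elapsedPendingMs = startTimeMs - submitTimeMs;  // pending time between the two
{code}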






[jira] [Commented] (YARN-7010) Federation: routing REST invocations transparently to multiple RMs (part 2 - getApps)

2017-08-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144685#comment-16144685
 ] 

Hadoop QA commented on YARN-7010:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 251 unchanged - 14 fixed = 251 total (was 265) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
58s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 47m  4s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
11s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}117m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
 |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7010 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884160/YARN-7010.v5.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux db166408c634 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| 

[jira] [Commented] (YARN-7088) Fix application start time and add submit time to UIs

2017-08-28 Thread Abdullah Yousufi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144674#comment-16144674
 ] 

Abdullah Yousufi commented on YARN-7088:


Thanks [~dan...@cloudera.com], I'll take a look at the checkstyle issues and 
upload a patch that resolves your earlier comments as well.

> Fix application start time and add submit time to UIs
> -
>
> Key: YARN-7088
> URL: https://issues.apache.org/jira/browse/YARN-7088
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
> Attachments: YARN-7088.001.patch, YARN-7088.002.patch, 
> YARN-7088.003.patch
>
>
> Currently, the start time in the old and new UIs actually shows the app 
> submission time. There should be two separate fields, one for the app's 
> submission time and one for its start time, as well as the elapsed pending 
> time between the two.






[jira] [Commented] (YARN-7088) Fix application start time and add submit time to UIs

2017-08-28 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144661#comment-16144661
 ] 

Daniel Templeton commented on YARN-7088:


The unit test failure is unrelated.  The checkstyle issues aren't directly your 
fault, but it would be swell if you could clean them up.  I'll take a closer 
look at the patch when I get a chance.

> Fix application start time and add submit time to UIs
> -
>
> Key: YARN-7088
> URL: https://issues.apache.org/jira/browse/YARN-7088
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
> Attachments: YARN-7088.001.patch, YARN-7088.002.patch, 
> YARN-7088.003.patch
>
>
> Currently, the start time in the old and new UIs actually shows the app 
> submission time. There should be two separate fields, one for the app's 
> submission time and one for its start time, as well as the elapsed pending 
> time between the two.






[jira] [Commented] (YARN-7113) Clean up packaging and dependencies for yarn-native-services

2017-08-28 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144656#comment-16144656
 ] 

Billie Rinaldi commented on YARN-7113:
--

I also noticed that commons-lang is no longer necessary for 
hadoop-yarn-services-api.

> Clean up packaging and dependencies for yarn-native-services
> 
>
> Key: YARN-7113
> URL: https://issues.apache.org/jira/browse/YARN-7113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: diff.patch, YARN-7113-yarn-native-services.01.patch, 
> YARN-7113-yarn-native-services.02.patch
>
>
> Since the yarn native services code has been greatly simplified, I think we 
> no longer need a separate lib directory for services. A dependency cleanup is 
> needed to address unused declared dependencies and used undeclared 
> dependencies in the new modules. We should also address NOTICE changes needed 
> for the 3 new dependencies that are being added, jcommander, snakeyaml, and 
> swagger-annotations.






[jira] [Updated] (YARN-7113) Clean up packaging and dependencies for yarn-native-services

2017-08-28 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-7113:
-
Attachment: YARN-7113-yarn-native-services.02.patch

> Clean up packaging and dependencies for yarn-native-services
> 
>
> Key: YARN-7113
> URL: https://issues.apache.org/jira/browse/YARN-7113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: diff.patch, YARN-7113-yarn-native-services.01.patch, 
> YARN-7113-yarn-native-services.02.patch
>
>
> Since the yarn native services code has been greatly simplified, I think we 
> no longer need a separate lib directory for services. A dependency cleanup is 
> needed to address unused declared dependencies and used undeclared 
> dependencies in the new modules. We should also address NOTICE changes needed 
> for the 3 new dependencies that are being added, jcommander, snakeyaml, and 
> swagger-annotations.






[jira] [Commented] (YARN-7076) yarn application -list -appTypes is not working

2017-08-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144648#comment-16144648
 ] 

Hudson commented on YARN-7076:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12257 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12257/])
YARN-7076. yarn application -list -appTypes is not working. Contributed 
(junping_du: rev 312b1fd9da2781da97f8c76fe1262c4d99b9c37f)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServices.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestClientRMService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java


> yarn application -list -appTypes  is not working
> -
>
> Key: YARN-7076
> URL: https://issues.apache.org/jira/browse/YARN-7076
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.2
>
> Attachments: YARN-7076.01.patch, YARN-7076.02.patch
>
>
> yarn application -list -appTypes  is not working
> Looks like it's because the ApplicationCLI passes in the appType as uppercase, 
> but ClientRMService#getApplications is case sensitive, so if a user submits an 
> app with a lowercase appType, it won't work
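For illustration, a minimal sketch of case-insensitive appType filtering along the lines the description implies (class and method names are hypothetical, not the actual YARN-7076 patch):
{code}
import java.util.Locale;
import java.util.Set;
import java.util.stream.Collectors;

final class AppTypeFilter {
  // Returns true when the app's type matches any requested type, ignoring case.
  static boolean matchesType(Set<String> requestedTypes, String appType) {
    if (requestedTypes == null || requestedTypes.isEmpty()) {
      return true; // no appTypes filter requested
    }
    Set<String> lowered = requestedTypes.stream()
        .map(t -> t.toLowerCase(Locale.ENGLISH))
        .collect(Collectors.toSet());
    return lowered.contains(appType.toLowerCase(Locale.ENGLISH));
  }
}
{code}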






[jira] [Commented] (YARN-7113) Clean up packaging and dependencies for yarn-native-services

2017-08-28 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144640#comment-16144640
 ] 

Billie Rinaldi commented on YARN-7113:
--

Thanks for taking a look, [~jianhe]!

bq. I tried some testing, and looks like the apiserver is still not started, 
because "src/main/resources/webapps/services-rest-api/app" is not named 
properly, and it throws Exception when trying to find the file
I'm not sure what you mean. Patch 01 renames 
src/main/resources/webapps/services-rest-api/app to 
src/main/resources/webapps/api-server/app, and I was able to start the apiserver 
and use it to launch services. What error did you get when you tried to run it?

bq. some dependency may also be not needed
I ran the dependency analysis again and discovered that zookeeper is no longer 
needed (since we are using curator instead). However, we still seem to be using 
commons-compress, commons-configuration2, and commons-logging:
{noformat}
import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream;
import org.apache.commons.configuration2.SubsetConfiguration;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
{noformat}

It looks like only YarnRegistryViewForProviders and TestYarnNativeServices are 
using commons-logging instead of slf4j, so I will switch those two over and 
remove the commons-logging dependency.
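For reference, the commons-logging to slf4j switch mentioned above is typically a drop-in change along these lines (a sketch only; the {{reportRegistration}} method is illustrative, and this is not the actual patch):
{code}
// Before: Apache commons-logging
//   import org.apache.commons.logging.Log;
//   import org.apache.commons.logging.LogFactory;
//   private static final Log LOG = LogFactory.getLog(YarnRegistryViewForProviders.class);

// After: SLF4J
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class YarnRegistryViewForProviders {
  private static final Logger LOG =
      LoggerFactory.getLogger(YarnRegistryViewForProviders.class);

  void reportRegistration(String path) {
    LOG.info("Registered service at {}", path); // parameterized logging, no string concat
  }
}
{code}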

> Clean up packaging and dependencies for yarn-native-services
> 
>
> Key: YARN-7113
> URL: https://issues.apache.org/jira/browse/YARN-7113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: diff.patch, YARN-7113-yarn-native-services.01.patch
>
>
> Since the yarn native services code has been greatly simplified, I think we 
> no longer need a separate lib directory for services. A dependency cleanup is 
> needed to address unused declared dependencies and used undeclared 
> dependencies in the new modules. We should also address NOTICE changes needed 
> for the 3 new dependencies that are being added, jcommander, snakeyaml, and 
> swagger-annotations.






[jira] [Commented] (YARN-7076) yarn application -list -appTypes is not working

2017-08-28 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144635#comment-16144635
 ] 

Junping Du commented on YARN-7076:
--

BTW, personally, I don't think this is an incompatibility issue: case-sensitive 
matching never worked for the CLI or REST API, and ClientRMService is a private 
API, so it seems nothing gets broken here by the patch. So I didn't put the 
incompatible tag on this JIRA.

> yarn application -list -appTypes  is not working
> -
>
> Key: YARN-7076
> URL: https://issues.apache.org/jira/browse/YARN-7076
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.2
>
> Attachments: YARN-7076.01.patch, YARN-7076.02.patch
>
>
> yarn application -list -appTypes  is not working
> Looks like it's because the ApplicationCLI passes in the appType as uppercase, 
> but ClientRMService#getApplications is case sensitive, so if a user submits an 
> app with a lowercase appType, it won't work






[jira] [Updated] (YARN-7076) yarn application -list -appTypes is not working

2017-08-28 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-7076:
-
Fix Version/s: 2.8.2
   3.0.0-beta1
   2.9.0

> yarn application -list -appTypes  is not working
> -
>
> Key: YARN-7076
> URL: https://issues.apache.org/jira/browse/YARN-7076
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.2
>
> Attachments: YARN-7076.01.patch, YARN-7076.02.patch
>
>
> yarn application -list -appTypes  is not working
> Looks like it's because the ApplicationCLI passes in the appType as uppercase, 
> but ClientRMService#getApplications is case sensitive, so if a user submits an 
> app with a lowercase appType, it won't work






[jira] [Commented] (YARN-7088) Fix application start time and add submit time to UIs

2017-08-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144628#comment-16144628
 ] 

Hadoop QA commented on YARN-7088:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
53s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 13m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 16s{color} | {color:orange} root: The patch generated 18 new + 971 unchanged 
- 4 fixed = 989 total (was 975) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
39s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
59s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
53s{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the 
patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 14s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 
49s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
15s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit 

[jira] [Commented] (YARN-7066) Add ability to specify volumes to mount for DockerContainerRuntime

2017-08-28 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144627#comment-16144627
 ] 

Eric Yang commented on YARN-7066:
-

[~ebadger] Yes, I agree.

[~shaneku...@gmail.com] I think this is a better solution than a predefined white 
list.  The majority of Docker images have arbitrarily defined paths for storing 
stateful data, and a predefined white list will not cover all of them.  Hence, 
using user-defined volumes is a superior solution to YARN-5534.  Given that 
YARN-4266 is applied to govern the security of the Unix process owner, mounting 
would not open a security hole.

YARN-6623 seems like a very big patch for toggling privileged mode on/off.  It 
looks like an attempt to shift Java logic into C code.  Since the C code runs 
with root privileges, it would be better to keep the privileged code simple to 
reduce the risk of security holes.  I can wait for YARN-6623 to be completed and 
then update this JIRA to use the new code.

> Add ability to specify volumes to mount for DockerContainerRuntime
> --
>
> Key: YARN-7066
> URL: https://issues.apache.org/jira/browse/YARN-7066
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.0.0-beta1
>Reporter: Eric Yang
> Attachments: YARN-7066.001.patch
>
>
> The Yarnfile describes the environment, Docker image, and configuration template 
> for launching Docker containers in YARN.  It would be nice to have the ability to 
> specify the volumes to mount.  This can be used in combination with 
> AMBARI-21748 to mount HDFS as data directories into Docker containers.






[jira] [Commented] (YARN-7011) yarn-daemon.sh is not respecting --config option

2017-08-28 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144625#comment-16144625
 ] 

Junping Du commented on YARN-7011:
--

I tried all 4 ways of specifying the config in Hadoop 2.7.3, 2.8.1, and 3.0.0-beta:
1. export HADOOP_CONF_DIR=/tmp/hadoop-config-1
2. In hadoop-env.sh: "export HADOOP_CONF_DIR=/tmp/hadoop-config-2"
3. Pass --config /tmp/hadoop-config-3 on the command line.
4. Create a conf directory in HADOOP_HOME where bin/hadoop exists.

Here are the results:

||Scenario||Apache 2.7.3 result||Apache 2.8.1 result||Apache 3 result||
|Only (1)|work|work|work|
|Only (2)|work|work|work|
|Only (3)|work|work|work|
|Only (4)|work|work|work|
|Both (1) and (2)|1 effective|1 effective|1 effective|
|Both (2) and (3)|3 effective|3 effective|3 effective|
|Both (1) and (3)|3 effective|3 effective|3 effective|
|All (1), (2), (3)|3 effective|3 effective| 3 effective|
|Any of (1) / (2) / (3) with (4)|(1)(3) is higher than (4), (2) is 
lowest|(1)(3) is higher than (4), (2) is lowest|(1)(3) is higher than (4), (2) 
is lowest|
|All (1), (2), (3), (4)|(3)|(3)|(3)|

Based on the results above, I think this shouldn't be an issue for the Hadoop 3 beta.

> yarn-daemon.sh is not respecting --config option
> 
>
> Key: YARN-7011
> URL: https://issues.apache.org/jira/browse/YARN-7011
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Priority: Trivial
>
> Steps to reproduce:
> 1. Copy the conf to a temporary location /tmp/Conf
> 2. Modify anything in yarn-site.xml under /tmp/Conf/, e.g. give an invalid RM 
> address
> 3. Restart the ResourceManager using yarn-daemon.sh with --config /tmp/Conf
> 4. --config is not respected, as the changes made in /tmp/Conf/yarn-site.xml 
> are not picked up when restarting the RM






[jira] [Commented] (YARN-7071) Add vcores and number of containers in web UI v2 node heat map

2017-08-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144624#comment-16144624
 ] 

Hadoop QA commented on YARN-7071:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  1m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7071 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884159/YARN-7071.001.patch |
| Optional Tests |  asflicense  |
| uname | Linux ca227e684984 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a1e3f84 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17171/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add vcores and number of containers in web UI v2 node heat map
> --
>
> Key: YARN-7071
> URL: https://issues.apache.org/jira/browse/YARN-7071
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-ui-v2
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
> Attachments: YARN-7071.001.patch
>
>
> Currently, the node heat map displays memory usage per node. This change 
> would add a dropdown to view cpu vcores or number of containers as well.






[jira] [Updated] (YARN-7010) Federation: routing REST invocations transparently to multiple RMs (part 2 - getApps)

2017-08-28 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-7010:
---
Attachment: YARN-7010.v5.patch

> Federation: routing REST invocations transparently to multiple RMs (part 2 - 
> getApps)
> -
>
> Key: YARN-7010
> URL: https://issues.apache.org/jira/browse/YARN-7010
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-7010.v0.patch, YARN-7010.v1.patch, 
> YARN-7010.v2.patch, YARN-7010.v3.patch, YARN-7010.v4.patch, YARN-7010.v5.patch
>
>







[jira] [Updated] (YARN-7071) Add vcores and number of containers in web UI v2 node heat map

2017-08-28 Thread Abdullah Yousufi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdullah Yousufi updated YARN-7071:
---
Attachment: YARN-7071.001.patch

> Add vcores and number of containers in web UI v2 node heat map
> --
>
> Key: YARN-7071
> URL: https://issues.apache.org/jira/browse/YARN-7071
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-ui-v2
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
> Attachments: YARN-7071.001.patch
>
>
> Currently, the node heat map displays memory usage per node. This change 
> would add a dropdown to view cpu vcores or number of containers as well.






[jira] [Commented] (YARN-7076) yarn application -list -appTypes is not working

2017-08-28 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144602#comment-16144602
 ] 

Junping Du commented on YARN-7076:
--

Patch LGTM. +1. Committing it now.

> yarn application -list -appTypes  is not working
> -
>
> Key: YARN-7076
> URL: https://issues.apache.org/jira/browse/YARN-7076
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>Priority: Blocker
> Attachments: YARN-7076.01.patch, YARN-7076.02.patch
>
>
> yarn application -list -appTypes  is not working
> Looks like it's because the ApplicationCLI passes in the appType as uppercase, 
> but ClientRMService#getApplications is case sensitive, so if a user submits an 
> app with a lowercase appType, it won't work






[jira] [Commented] (YARN-7113) Clean up packaging and dependencies for yarn-native-services

2017-08-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144595#comment-16144595
 ] 

Hadoop QA commented on YARN-7113:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} YARN-7113 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7113 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884157/diff.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17169/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Clean up packaging and dependencies for yarn-native-services
> 
>
> Key: YARN-7113
> URL: https://issues.apache.org/jira/browse/YARN-7113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: diff.patch, YARN-7113-yarn-native-services.01.patch
>
>
> Since the yarn native services code has been greatly simplified, I think we 
> no longer need a separate lib directory for services. A dependency cleanup is 
> needed to address unused declared dependencies and used undeclared 
> dependencies in the new modules. We should also address NOTICE changes needed 
> for the 3 new dependencies that are being added, jcommander, snakeyaml, and 
> swagger-annotations.






[jira] [Commented] (YARN-7116) CapacityScheduler Web UI: Queue's AM usage is always show on per-user's AM usage.

2017-08-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144594#comment-16144594
 ] 

Hadoop QA commented on YARN-7116:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m 49s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7116 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884152/YARN-7116.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a5d046886a2b 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a1e3f84 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17168/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17168/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17168/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> CapacityScheduler Web UI: Queue's AM usage is 

[jira] [Updated] (YARN-7113) Clean up packaging and dependencies for yarn-native-services

2017-08-28 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7113:
--
Attachment: diff.patch

> Clean up packaging and dependencies for yarn-native-services
> 
>
> Key: YARN-7113
> URL: https://issues.apache.org/jira/browse/YARN-7113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: diff.patch, YARN-7113-yarn-native-services.01.patch
>
>
> Since the yarn native services code has been greatly simplified, I think we 
> no longer need a separate lib directory for services. A dependency cleanup is 
> needed to address unused declared dependencies and used undeclared 
> dependencies in the new modules. We should also address NOTICE changes needed 
> for the 3 new dependencies that are being added, jcommander, snakeyaml, and 
> swagger-annotations.






[jira] [Commented] (YARN-7113) Clean up packaging and dependencies for yarn-native-services

2017-08-28 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144593#comment-16144593
 ] 

Jian He commented on YARN-7113:
---

Thanks for the patch, Billie!  Looks good to me overall.
I tried some testing, and it looks like the apiserver is still not started, 
because "src/main/resources/webapps/services-rest-api/app" is not named 
properly, and it throws an Exception when trying to find the file.
Some dependencies may also not be needed.
I attached a diff patch here.  Please check it.

> Clean up packaging and dependencies for yarn-native-services
> 
>
> Key: YARN-7113
> URL: https://issues.apache.org/jira/browse/YARN-7113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-7113-yarn-native-services.01.patch
>
>
> Since the yarn native services code has been greatly simplified, I think we 
> no longer need a separate lib directory for services. A dependency cleanup is 
> needed to address unused declared dependencies and used undeclared 
> dependencies in the new modules. We should also address NOTICE changes needed 
> for the 3 new dependencies that are being added, jcommander, snakeyaml, and 
> swagger-annotations.






[jira] [Commented] (YARN-7117) Capacity Scheduler: Support Auto Creation of Leaf Queues While Doing Queue Mapping

2017-08-28 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144581#comment-16144581
 ] 

Wangda Tan commented on YARN-7117:
--

+ [~jlowe]/[~asuresh]/[~jhung]/[~Naganarasimha], could you share your thoughts 
when you get a chance?

> Capacity Scheduler: Support Auto Creation of Leaf Queues While Doing Queue 
> Mapping
> --
>
> Key: YARN-7117
> URL: https://issues.apache.org/jira/browse/YARN-7117
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacity scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>
> Currently, Capacity Scheduler doesn't support auto-creation of queues when 
> doing queue mapping. We see more and more use cases with complex queue 
> mapping policies configured to handle application-to-queue mapping. 
> The most common use case of CapacityScheduler queue mapping is to create one 
> queue for each user/group. However, updating {{capacity-scheduler.xml}} and 
> running {{RMAdmin:refreshQueues}} needs to be done whenever a new user/group 
> onboards. One option to solve the problem is to automatically create queues 
> when a new user/group arrives.






[jira] [Commented] (YARN-7117) Capacity Scheduler: Support Auto Creation of Leaf Queues While Doing Queue Mapping

2017-08-28 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144580#comment-16144580
 ] 

Wangda Tan commented on YARN-7117:
--

Discussed with [~clayb]/[~sunilg]/[~vinodkv] offline (thanks Clay for sharing 
internal use cases). Here is our initial proposal, to gather more thoughts: 
- A parent queue can be marked to allow auto-creation of leaf queues 
(such as {{prefix..auto-queue-creation.enabled}}, default is off). 
Such a parent queue is allowed to have no sub-queues specified.
- A minimum resource could be specified for automatically created queues 
(such as 
{{prefix..auto-queue-creation.subqueue-minimum-resource}}). After 
YARN-5881, absolute resources can be specified for auto-created queues.
- CS treats automatically created queues no differently from normal queues, which 
means the scheduler will use the existing logic for preemption / fairness allocation 
/ queue-ordering / user-limit, etc. for auto-created queues. 
- The ACL of a created queue should be determined by policy. For example, if we 
expect to create a different queue for each user, the admin may set 
{{prefix..auto-queue-creation.admin-acl-policy=user-name-equals-to-queue-name}}.
- The auto-create-queue flag can be specified in the queue-mapping policy; the 
default is off.

A related issue (maybe better discussed on a separate JIRA) is that it's 
possible for queues to be created but not actively used, so we could allow 
guaranteed resources to be overcommitted. (For example, a parent queue with 100G 
guaranteed memory and 200 sub-queues created under the parent, each with 1G 
guaranteed memory, but most of the sub-queues not being used.)

To solve this, the scheduler may need to maintain a list of 
{{#active-leaf-queues}} under each parent (an active leaf queue is a leaf 
queue with at least one app not in a final state). The parent queue's guaranteed 
resource will be checked and enforced when a leaf queue's state changes to 
active. Application submission will be rejected if 
{{Σ(leafQueue.guaranteed) over leafQueue ∈ \{active-leaf-queues\} > 
parent.guaranteed}}.
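A schematic illustration of that admission check (all names are hypothetical and resources are simplified to a single numeric value; this is not an actual CapacityScheduler API):
{code}
// Sketch only: reject activating a leaf queue when the sum of guaranteed
// resources of active leaf queues would exceed the parent's guarantee.
final class AutoCreatedQueueAdmission {

  static boolean canActivate(long parentGuaranteed,
                             java.util.List<Long> activeLeafGuarantees,
                             long newLeafGuaranteed) {
    long activeSum = 0;
    for (long guaranteed : activeLeafGuarantees) {
      activeSum += guaranteed;          // Σ(leafQueue.guaranteed) over active leaves
    }
    return activeSum + newLeafGuaranteed <= parentGuaranteed;
  }
}
{code}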

> Capacity Scheduler: Support Auto Creation of Leaf Queues While Doing Queue 
> Mapping
> --
>
> Key: YARN-7117
> URL: https://issues.apache.org/jira/browse/YARN-7117
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacity scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>
> Currently, Capacity Scheduler doesn't support auto-creation of queues when 
> doing queue mapping. We see more and more use cases with complex queue 
> mapping policies configured to handle application-to-queue mapping. 
> The most common use case of CapacityScheduler queue mapping is to create one 
> queue for each user/group. However, updating {{capacity-scheduler.xml}} and 
> running {{RMAdmin:refreshQueues}} needs to be done whenever a new user/group 
> onboards. One option to solve the problem is to automatically create queues 
> when a new user/group arrives.






[jira] [Created] (YARN-7117) Capacity Scheduler: Support Auto Creation of Leaf Queues While Doing Queue Mapping

2017-08-28 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-7117:


 Summary: Capacity Scheduler: Support Auto Creation of Leaf Queues 
While Doing Queue Mapping
 Key: YARN-7117
 URL: https://issues.apache.org/jira/browse/YARN-7117
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: capacity scheduler
Reporter: Wangda Tan
Assignee: Wangda Tan


Currently, Capacity Scheduler doesn't support auto-creation of queues when doing 
queue mapping. We see more and more use cases with complex queue mapping 
policies configured to handle application-to-queue mapping. 

The most common use case of CapacityScheduler queue mapping is to create one 
queue for each user/group. However, updating {{capacity-scheduler.xml}} and 
running {{RMAdmin:refreshQueues}} needs to be done whenever a new user/group 
onboards. One option to solve the problem is to automatically create queues when 
a new user/group arrives.






[jira] [Commented] (YARN-7010) Federation: routing REST invocations transparently to multiple RMs (part 2 - getApps)

2017-08-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144574#comment-16144574
 ] 

Hadoop QA commented on YARN-7010:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 251 unchanged - 14 fixed = 251 total (was 265) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
50s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m 51s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
24s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}108m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
 |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7010 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884142/YARN-7010.v4.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 41f8d9299cc7 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 51881a8 |
| Default Java | 1.8.0_144 |
| findbugs | 

[jira] [Commented] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-08-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144573#comment-16144573
 ] 

ASF GitHub Bot commented on YARN-2162:
--

Github user templedf commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/261#discussion_r135665251
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
 ---
@@ -290,26 +290,15 @@ Resource getMinResources(String queue) {
 
   /**
   * Get the maximum resource allocation for the given queue. If the max in not
-   * set, return the larger of the min and the default max.
+   * set, return the default max.
   *
   * @param queue the target queue's name
   * @return the max allocation on this queue
   */
-  @VisibleForTesting
-  Resource getMaxResources(String queue) {
-    Resource maxQueueResource = maxQueueResources.get(queue);
-    if (maxQueueResource == null) {
-      Resource minQueueResource = minQueueResources.get(queue);
-      if (minQueueResource != null &&
-          Resources.greaterThan(RESOURCE_CALCULATOR, Resources.unbounded(),
-          minQueueResource, queueMaxResourcesDefault)) {
-        return minQueueResource;
-      } else {
-        return queueMaxResourcesDefault;
-      }
-    } else {
-      return maxQueueResource;
-    }
+  @VisibleForTesting ConfigurableResource getMaxResources(String queue) {
+    ConfigurableResource maxQueueResource = maxQueueResources.get(queue);
+    return maxQueueResource == null ?
+        queueMaxResourcesDefault : maxQueueResource;
--- End diff --

I'm not a fan of the ternary operator unless it really makes things 
clearer.  I don't see the point here.
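
For illustration, the same return written without the ternary would read:

{code:java}
ConfigurableResource maxQueueResource = maxQueueResources.get(queue);
if (maxQueueResource == null) {
  return queueMaxResourcesDefault;
}
return maxQueueResource;
{code}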



> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: YARN-2162.001.patch, YARN-2162.002.patch, 
> YARN-2162.003.patch
>
>
> minResources and maxResources in fair scheduler configs are expressed in 
> terms of absolute numbers X mb, Y vcores. 
> As a result, when we expand or shrink our hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is pretty 
> inconvenient.
> We can circumvent this problem if we can optionally configure these 
> properties in terms of percentage of cluster capacity. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-08-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144568#comment-16144568
 ] 

ASF GitHub Bot commented on YARN-2162:
--

Github user templedf commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/261#discussion_r135659338
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/ConfigurableResource.java
 ---
@@ -0,0 +1,62 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.server.resourcemanager.resource.ResourceType;
+
+import java.util.HashMap;
+
+/**
+ * A {@link ConfigurableResource} object represents an entity that is used 
to
+ * configure resources, such as maximum resources of a queue. It can be
+ * percentage of cluster resources or an absolute value.
+ */
+@Private
+@Unstable
+public class ConfigurableResource {
+  private final Resource resource;
+  private final double[] percentages;
+
+  public ConfigurableResource(double[] percentages) {
+    this.percentages = percentages;
+    this.resource = null;
+  }
+
+  public ConfigurableResource(Resource resource) {
+    this.percentages = null;
+    this.resource = resource;
+  }
+
+  public Resource getResource(Resource clusterResource) {
+    if (percentages != null && clusterResource != null) {
+      long memory = (long) (clusterResource.getMemorySize() *
+          percentages[0]);
--- End diff --

That's directly in line with resource types.


> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: YARN-2162.001.patch, YARN-2162.002.patch, 
> YARN-2162.003.patch
>
>
> minResources and maxResources in fair scheduler configs are expressed in 
> terms of absolute numbers X mb, Y vcores. 
> As a result, when we expand or shrink our hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is pretty 
> inconvenient.
> We can circumvent this problem if we can optionally configure these 
> properties in terms of percentage of cluster capacity. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-08-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144571#comment-16144571
 ] 

ASF GitHub Bot commented on YARN-2162:
--

Github user templedf commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/261#discussion_r135665725
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerConfiguration.java
 ---
@@ -287,27 +289,69 @@ public float getReservableNodes() {
* 
* @throws AllocationConfigurationException
*/
-  public static Resource parseResourceConfigValue(String val)
+  public static ConfigurableResource parseResourceConfigValue(String val)
   throws AllocationConfigurationException {
+ConfigurableResource configurableResource;
 try {
   val = StringUtils.toLowerCase(val);
-  int memory = findResource(val, "mb");
-  int vcores = findResource(val, "vcores");
-  return BuilderUtils.newResource(memory, vcores);
+  if (val.contains("%")) {
+configurableResource = new ConfigurableResource(
+getResourcePercentage(val));
+  } else {
+int memory = findResource(val, "mb");
+int vcores = findResource(val, "vcores");
+configurableResource = new ConfigurableResource(
+BuilderUtils.newResource(memory, vcores));
+  }
 } catch (AllocationConfigurationException ex) {
   throw ex;
 } catch (Exception ex) {
   throw new AllocationConfigurationException(
   "Error reading resource config", ex);
 }
+return configurableResource;
+  }
+
+  private static double[] getResourcePercentage(
+      String val) throws AllocationConfigurationException {
+    double[] resourcePercentage = new double[ResourceType.values().length];
+    String[] strings = val.split(",");
+    if (strings.length == 1) {
+      double percentage = findPercentage(strings[0], "");
+      for (int i = 0 ; i < ResourceType.values().length ; i++) {
+        resourcePercentage[i] = percentage/100;
+      }
+    } else {
+      double memPercentage = findPercentage(val, "memory");
+      double vcorePercentage = findPercentage(val, "cpu");
+      resourcePercentage[0] = memPercentage/100;
--- End diff --

Agreed.  This seems needlessly convoluted.
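
One possible flattening, purely illustrative and reusing the patch's own 
{{findPercentage}} helper (it assumes the two resource types handled 
elsewhere in the patch, memory and cpu):

{code:java}
private static double[] getResourcePercentage(String val)
    throws AllocationConfigurationException {
  // A single token such as "50%" applies to every resource type; otherwise
  // expect explicit "memory=X%, cpu=Y%" style values.
  double mem, cpu;
  if (val.split(",").length == 1) {
    mem = cpu = findPercentage(val, "");
  } else {
    mem = findPercentage(val, "memory");
    cpu = findPercentage(val, "cpu");
  }
  return new double[] {mem / 100.0, cpu / 100.0};
}
{code}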


> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: YARN-2162.001.patch, YARN-2162.002.patch, 
> YARN-2162.003.patch
>
>
> minResources and maxResources in fair scheduler configs are expressed in 
> terms of absolute numbers X mb, Y vcores. 
> As a result, when we expand or shrink our hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is pretty 
> inconvenient.
> We can circumvent this problem if we can optionally configure these 
> properties in terms of percentage of cluster capacity. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-08-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144569#comment-16144569
 ] 

ASF GitHub Bot commented on YARN-2162:
--

Github user templedf commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/261#discussion_r135665452
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/ConfigurableResource.java
 ---
@@ -0,0 +1,62 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.server.resourcemanager.resource.ResourceType;
+
+import java.util.HashMap;
+
+/**
+ * A {@link ConfigurableResource} object represents an entity that is used 
to
+ * configure resources, such as maximum resources of a queue. It can be
+ * percentage of cluster resources or an absolute value.
+ */
+@Private
+@Unstable
+public class ConfigurableResource {
+  private final Resource resource;
+  private final double[] percentages;
+
+  public ConfigurableResource(double[] percentages) {
+    this.percentages = percentages;
+    this.resource = null;
+  }
+
+  public ConfigurableResource(Resource resource) {
+    this.percentages = null;
+    this.resource = resource;
+  }
+
+  public Resource getResource(Resource clusterResource) {
+    if (percentages != null && clusterResource != null) {
+      long memory = (long) (clusterResource.getMemorySize() *
+          percentages[0]);
+      int vcore = (int) (clusterResource.getVirtualCores() *
+          percentages[1]);
+      return Resource.newInstance(memory, vcore);
+    } else {
+      return resource;
+    }
+  }
+
+  public Resource getResource() {
+    return getResource(null);
--- End diff --

This also seems kinda pointless and a bit brittle.  Just return resource.


> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: YARN-2162.001.patch, YARN-2162.002.patch, 
> YARN-2162.003.patch
>
>
> minResources and maxResources in fair scheduler configs are expressed in 
> terms of absolute numbers X mb, Y vcores. 
> As a result, when we expand or shrink our hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is pretty 
> inconvenient.
> We can circumvent this problem if we can optionally configure these 
> properties in terms of percentage of cluster capacity. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-08-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144570#comment-16144570
 ] 

ASF GitHub Bot commented on YARN-2162:
--

Github user templedf commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/261#discussion_r135665524
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSQueue.java
 ---
@@ -158,26 +158,37 @@ public Resource getMinShare() {
     return minShare;
   }
 
-  public void setMaxShare(Resource maxShare){
+  public void setMaxShare(ConfigurableResource maxShare){
     this.maxShare = maxShare;
   }
 
+  @Override
+  public Resource getMaxShare() {
+    Resource maxResource =
+        maxShare.getResource(scheduler.getClusterResource());
+
+    // Set max resource to min resource if min resource is greater than max
+    // resource
+    if (Resources.greaterThan(scheduler.getResourceCalculator(),
+        scheduler.getClusterResource(), minShare, maxResource)) {
--- End diff --

Yeah, you should avoid greaterThan() (et al) if you can.
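
If the intent is simply to never let the configured max fall below the min 
share, a componentwise clamp (assuming {{Resources.componentwiseMax}} is 
acceptable here; its semantics differ slightly from a calculator-based 
comparison) would avoid greaterThan() entirely:

{code:java}
// Clamp each dimension of the configured max to at least the min share.
return Resources.componentwiseMax(minShare,
    maxShare.getResource(scheduler.getClusterResource()));
{code}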


> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: YARN-2162.001.patch, YARN-2162.002.patch, 
> YARN-2162.003.patch
>
>
> minResources and maxResources in fair scheduler configs are expressed in 
> terms of absolute numbers X mb, Y vcores. 
> As a result, when we expand or shrink our hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is pretty 
> inconvenient.
> We can circumvent this problem if we can optionally configure these 
> properties in terms of percentage of cluster capacity. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-08-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144572#comment-16144572
 ] 

ASF GitHub Bot commented on YARN-2162:
--

Github user templedf commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/261#discussion_r135665361
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/ConfigurableResource.java
 ---
@@ -0,0 +1,62 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.server.resourcemanager.resource.ResourceType;
+
+import java.util.HashMap;
+
+/**
+ * A {@link ConfigurableResource} object represents an entity that is used 
to
--- End diff --

No point linking to yourself.  Just make it a {@code} instead.
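
i.e. the first javadoc line would simply become:

{code}
 * A {@code ConfigurableResource} object represents an entity that is used to
 * configure resources, such as maximum resources of a queue.
{code}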


> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: YARN-2162.001.patch, YARN-2162.002.patch, 
> YARN-2162.003.patch
>
>
> minResources and maxResources in fair scheduler configs are expressed in 
> terms of absolute numbers X mb, Y vcores. 
> As a result, when we expand or shrink our hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is pretty 
> inconvenient.
> We can circumvent this problem if we can optionally configure these 
> properties in terms of percentage of cluster capacity. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7010) Federation: routing REST invocations transparently to multiple RMs (part 2 - getApps)

2017-08-28 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144438#comment-16144438
 ] 

Giovanni Matteo Fumarola edited comment on YARN-7010 at 8/29/17 12:00 AM:
--

Thanks [~curino].
1. Done.
2. I will update in the next iteration.
3. It is the intended behavior. I want to measure how much time the router 
spends on each call to the YARN RM. 
4. I will update in the next iteration.
5. Done.
6. Done.
7. Done.
8. Done.


was (Author: giovanni.fumarola):
Thanks [~curino].
1. Done.
2. The two methods with the tag in AppInfo are called only from test classes 
and mocks.
3. It is the intended behavior. I want to measure how much time the router 
spends on each call to the YARN RM. 
4. In my local tests this structure is optimal; the other solutions would 
require additional data structures. I cannot merge as I go, since I have to 
scan the whole list to figure out whether there is an AM or not. If we merge 
before finding the AM and the policy is to discard incomplete results, we end 
up doing more iterations than needed.
5. Done.
6. Done.
7. Done.
8. Done.

> Federation: routing REST invocations transparently to multiple RMs (part 2 - 
> getApps)
> -
>
> Key: YARN-7010
> URL: https://issues.apache.org/jira/browse/YARN-7010
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-7010.v0.patch, YARN-7010.v1.patch, 
> YARN-7010.v2.patch, YARN-7010.v3.patch, YARN-7010.v4.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7098) LocalizerRunner should immediately send heartbeat response LocalizerStatus.DIE when the Container transitions from LOCALIZING to KILLING

2017-08-28 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144551#comment-16144551
 ] 

Arun Suresh commented on YARN-7098:
---

Thanks for the patch [~brookz]. 
I think we would have to consider the localization scope as well, though. In 
the case of private localizers, 
{{LocalizerResourceRequestEvent::getVisibility()}} can be PRIVATE or 
APPLICATION. If it is application scope, the resource can technically be used 
by other containers of the same app, in which case we should probably not 
call endContainerLocalization.
[~jianhe], thoughts?
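
A rough illustration of the kind of guard being suggested (sketch only; the 
call into the tracker is hypothetical):

{code:java}
// Only end container localization for PRIVATE resources; APPLICATION-scoped
// resources may still be needed by other containers of the same app.
if (event.getVisibility() == LocalResourceVisibility.PRIVATE) {
  localizerTracker.endContainerLocalization(localizerId);  // hypothetical call
}
{code}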

> LocalizerRunner should immediately send heartbeat response 
> LocalizerStatus.DIE when the Container transitions from LOCALIZING to KILLING
> 
>
> Key: YARN-7098
> URL: https://issues.apache.org/jira/browse/YARN-7098
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Brook Zhou
>Assignee: Brook Zhou
>Priority: Minor
> Attachments: YARN-7098.patch
>
>
> Currently, the following can happen:
> 1. ContainerLocalizer heartbeats to ResourceLocalizationService.
> 2. LocalizerTracker.processHeartbeat verifies that there is a LocalizerRunner 
> for the localizerId (containerId). Goes into {code:java}return 
> localizer.processHeartbeat(status.getResources());{code}
> 3. Container receives kill event, goes from LOCALIZING -> KILLING. The 
> LocalizerRunner is removed from LocalizerTracker, since the privLocalizers 
> lock is now free.
> 4. Since check (2) passed, LocalizerRunner sends heartbeat response with 
> LocalizerStatus.LIVE and the next file to download.
> What should happen here is that (4) sends a LocalizerStatus.DIE, since (3) 
> happened before the heartbeat response in (4). This saves the container from 
> potentially downloading an extra resource due to the one extra LIVE heartbeat 
> which will end up being deleted anyway.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7116) CapacityScheduler Web UI: Queue's AM usage is always show on per-user's AM usage.

2017-08-28 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7116:
-
Component/s: webapp
 capacity scheduler

> CapacityScheduler Web UI: Queue's AM usage is always show on per-user's AM 
> usage.
> -
>
> Key: YARN-7116
> URL: https://issues.apache.org/jira/browse/YARN-7116
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, webapp
>Affects Versions: 2.9.0, 2.8.1, 3.0.0-alpha4
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7116.001.patch
>
>
> On CapacityScheduler's web UI, AM usage of different users belong to the same 
> queue always shows queue's AM usage. 
> The root cause is: under CapacitySchedulerPage. 
> {code}
> tbody.tr().td(userInfo.getUsername())
> .td(userInfo.getUserResourceLimit().toString())
> .td(resourcesUsed.toString())
> .td(resourceUsages.getAMLimit().toString())
> .td(amUsed.toString())
> .td(Integer.toString(userInfo.getNumActiveApplications()))
> .td(Integer.toString(userInfo.getNumPendingApplications()))._();
> {code}
> Instead of amUsed.toString(), it should use userInfo.getAmUsed().



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7115) Move BoundedAppender to org.hadoop.yarn.util package

2017-08-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144526#comment-16144526
 ] 

Hadoop QA commented on YARN-7115:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
12s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 49s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
30s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}136m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | hadoop.yarn.server.resourcemanager.monitor.TestSchedulingMonitor |
|   | hadoop.yarn.server.resourcemanager.rmapp.TestApplicationLifetimeMonitor |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7115 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884123/YARN-7115.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux fe0f992a1283 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 51881a8 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17163/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17163/testReport/ |
| asflicense | 

[jira] [Commented] (YARN-7116) CapacityScheduler Web UI: Queue's AM usage is always show on per-user's AM usage.

2017-08-28 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144524#comment-16144524
 ] 

Wangda Tan commented on YARN-7116:
--

[~sunilg], could you help review this patch? This should be very 
straightforward :)

> CapacityScheduler Web UI: Queue's AM usage is always show on per-user's AM 
> usage.
> -
>
> Key: YARN-7116
> URL: https://issues.apache.org/jira/browse/YARN-7116
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 2.8.1, 3.0.0-alpha4
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7116.001.patch
>
>
> On CapacityScheduler's web UI, AM usage of different users belong to the same 
> queue always shows queue's AM usage. 
> The root cause is: under CapacitySchedulerPage. 
> {code}
> tbody.tr().td(userInfo.getUsername())
> .td(userInfo.getUserResourceLimit().toString())
> .td(resourcesUsed.toString())
> .td(resourceUsages.getAMLimit().toString())
> .td(amUsed.toString())
> .td(Integer.toString(userInfo.getNumActiveApplications()))
> .td(Integer.toString(userInfo.getNumPendingApplications()))._();
> {code}
> Instead of amUsed.toString(), it should use userInfo.getAmUsed().



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7116) CapacityScheduler Web UI: Queue's AM usage is always show on per-user's AM usage.

2017-08-28 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7116:
-
Attachment: YARN-7116.001.patch

Attached ver.001 patch.

> CapacityScheduler Web UI: Queue's AM usage is always show on per-user's AM 
> usage.
> -
>
> Key: YARN-7116
> URL: https://issues.apache.org/jira/browse/YARN-7116
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 2.8.1, 3.0.0-alpha4
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7116.001.patch
>
>
> On CapacityScheduler's web UI, AM usage of different users belong to the same 
> queue always shows queue's AM usage. 
> The root cause is: under CapacitySchedulerPage. 
> {code}
> tbody.tr().td(userInfo.getUsername())
> .td(userInfo.getUserResourceLimit().toString())
> .td(resourcesUsed.toString())
> .td(resourceUsages.getAMLimit().toString())
> .td(amUsed.toString())
> .td(Integer.toString(userInfo.getNumActiveApplications()))
> .td(Integer.toString(userInfo.getNumPendingApplications()))._();
> {code}
> Instead of amUsed.toString(), it should use userInfo.getAmUsed().



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7116) CapacityScheduler Web UI: Queue's AM usage is always show on per-user's AM usage.

2017-08-28 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144509#comment-16144509
 ] 

Wangda Tan commented on YARN-7116:
--

Credit to Steven Brennan for reporting and reproducing the issue. 

> CapacityScheduler Web UI: Queue's AM usage is always show on per-user's AM 
> usage.
> -
>
> Key: YARN-7116
> URL: https://issues.apache.org/jira/browse/YARN-7116
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 2.8.1, 3.0.0-alpha4
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>
> On CapacityScheduler's web UI, AM usage of different users belong to the same 
> queue always shows queue's AM usage. 
> The root cause is: under CapacitySchedulerPage. 
> {code}
> tbody.tr().td(userInfo.getUsername())
> .td(userInfo.getUserResourceLimit().toString())
> .td(resourcesUsed.toString())
> .td(resourceUsages.getAMLimit().toString())
> .td(amUsed.toString())
> .td(Integer.toString(userInfo.getNumActiveApplications()))
> .td(Integer.toString(userInfo.getNumPendingApplications()))._();
> {code}
> Instead of amUsed.toString(), it should use userInfo.getAmUsed().



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7116) CapacityScheduler Web UI: Queue's AM usage is always show on per-user's AM usage.

2017-08-28 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-7116:


 Summary: CapacityScheduler Web UI: Queue's AM usage is always show 
on per-user's AM usage.
 Key: YARN-7116
 URL: https://issues.apache.org/jira/browse/YARN-7116
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0-alpha4, 2.8.1, 2.9.0
Reporter: Wangda Tan
Assignee: Wangda Tan


On CapacityScheduler's web UI, AM usage of different users belong to the same 
queue always shows queue's AM usage. 

The root cause is: under CapacitySchedulerPage. 

{code}
tbody.tr().td(userInfo.getUsername())
.td(userInfo.getUserResourceLimit().toString())
.td(resourcesUsed.toString())
.td(resourceUsages.getAMLimit().toString())
.td(amUsed.toString())
.td(Integer.toString(userInfo.getNumActiveApplications()))
.td(Integer.toString(userInfo.getNumPendingApplications()))._();
{code}

Instead of amUsed.toString(), it should use userInfo.getAmUsed().
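
i.e., using the accessor named above, the per-user AM-used cell would become 
something like:

{code}
.td(userInfo.getAmUsed().toString())
{code}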



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7098) LocalizerRunner should immediately send heartbeat response LocalizerStatus.DIE when the Container transitions from LOCALIZING to KILLING

2017-08-28 Thread Brook Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brook Zhou updated YARN-7098:
-
Attachment: YARN-7098.patch

> LocalizerRunner should immediately send heartbeat response 
> LocalizerStatus.DIE when the Container transitions from LOCALIZING to KILLING
> 
>
> Key: YARN-7098
> URL: https://issues.apache.org/jira/browse/YARN-7098
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Brook Zhou
>Assignee: Brook Zhou
>Priority: Minor
> Attachments: YARN-7098.patch
>
>
> Currently, the following can happen:
> 1. ContainerLocalizer heartbeats to ResourceLocalizationService.
> 2. LocalizerTracker.processHeartbeat verifies that there is a LocalizerRunner 
> for the localizerId (containerId). Goes into {code:java}return 
> localizer.processHeartbeat(status.getResources());{code}
> 3. Container receives kill event, goes from LOCALIZING -> KILLING. The 
> LocalizerRunner is removed from LocalizerTracker, since the privLocalizers 
> lock is now free.
> 4. Since check (2) passed, LocalizerRunner sends heartbeat response with 
> LocalizerStatus.LIVE and the next file to download.
> What should happen here is that (4) sends a LocalizerStatus.DIE, since (3) 
> happened before the heartbeat response in (4). This saves the container from 
> potentially downloading an extra resource due to the one extra LIVE heartbeat 
> which will end up being deleted anyway.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7098) LocalizerRunner should immediately send heartbeat response LocalizerStatus.DIE when the Container transitions from LOCALIZING to KILLING

2017-08-28 Thread Brook Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brook Zhou updated YARN-7098:
-
Description: 
Currently, the following can happen:

1. ContainerLocalizer heartbeats to ResourceLocalizationService.
2. LocalizerTracker.processHeartbeat verifies that there is a LocalizerRunner 
for the localizerId (containerId). Goes into {code:java}return 
localizer.processHeartbeat(status.getResources());{code}
3. Container receives kill event, goes from LOCALIZING -> KILLING. The 
LocalizerRunner is removed from LocalizerTracker, since the privLocalizers lock 
is now free.
4. Since check (2) passed, LocalizerRunner sends heartbeat response with 
LocalizerStatus.LIVE and the next file to download.

What should happen here is that (4) sends a LocalizerStatus.DIE, since (3) 
happened before the heartbeat response in (4). This saves the container from 
potentially downloading an extra resource due to the one extra LIVE heartbeat 
which will end up being deleted anyway.

  was:
Currently, the following can happen:

1. ContainerLocalizer heartbeats to ResourceLocalizationService.
2. LocalizerTracker.processHeartbeat verifies that there is a LocalizerRunner 
for the localizerId (containerId). Starts executing 
3. Container receives kill event, goes from LOCALIZING -> KILLING. The 
LocalizerRunner is removed from LocalizerTracker.
4. Since check (2) passed, LocalizerRunner sends heartbeat response with 
LocalizerStatus.LIVE and the next file to download.

What should happen here is that (4) sends a LocalizerStatus.DIE, since (3) 
happened before the heartbeat response in (4). This saves the container from 
potentially downloading an extra resource which will end up being deleted 
anyway.


> LocalizerRunner should immediately send heartbeat response 
> LocalizerStatus.DIE when the Container transitions from LOCALIZING to KILLING
> 
>
> Key: YARN-7098
> URL: https://issues.apache.org/jira/browse/YARN-7098
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Brook Zhou
>Assignee: Brook Zhou
>Priority: Minor
>
> Currently, the following can happen:
> 1. ContainerLocalizer heartbeats to ResourceLocalizationService.
> 2. LocalizerTracker.processHeartbeat verifies that there is a LocalizerRunner 
> for the localizerId (containerId). Goes into {code:java}return 
> localizer.processHeartbeat(status.getResources());{code}
> 3. Container receives kill event, goes from LOCALIZING -> KILLING. The 
> LocalizerRunner is removed from LocalizerTracker, since the privLocalizers 
> lock is now free.
> 4. Since check (2) passed, LocalizerRunner sends heartbeat response with 
> LocalizerStatus.LIVE and the next file to download.
> What should happen here is that (4) sends a LocalizerStatus.DIE, since (3) 
> happened before the heartbeat response in (4). This saves the container from 
> potentially downloading an extra resource due to the one extra LIVE heartbeat 
> which will end up being deleted anyway.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7098) LocalizerRunner should immediately send heartbeat response LocalizerStatus.DIE when the Container transitions from LOCALIZING to KILLING

2017-08-28 Thread Brook Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brook Zhou updated YARN-7098:
-
Description: 
Currently, the following can happen:

1. ContainerLocalizer heartbeats to ResourceLocalizationService.
2. LocalizerTracker.processHeartbeat verifies that there is a LocalizerRunner 
for the localizerId (containerId). Starts executing 
3. Container receives kill event, goes from LOCALIZING -> KILLING. The 
LocalizerRunner is removed from LocalizerTracker.
4. Since check (2) passed, LocalizerRunner sends heartbeat response with 
LocalizerStatus.LIVE and the next file to download.

What should happen here is that (4) sends a LocalizerStatus.DIE, since (3) 
happened before the heartbeat response in (4). This saves the container from 
potentially downloading an extra resource which will end up being deleted 
anyway.

  was:
Currently, the following can happen:

1. ContainerLocalizer heartbeats to ResourceLocalizationService.
2. LocalizerTracker.processHeartbeat verifies that there is a LocalizerRunner 
for the localizerId (containerId).
3. Container receives kill event, goes from LOCALIZING -> KILLING. The 
LocalizerRunner is not removed from LocalizerTracker due to locking.
4. Since check (2) passed, LocalizerRunner sends heartbeat response with 
LocalizerStatus.LIVE and the next file to download.

What should happen here is that (4) sends a LocalizerStatus.DIE, since (3) 
happened before the heartbeat response in (4). This saves the container from 
potentially downloading an extra resource which will end up being deleted 
anyway.


> LocalizerRunner should immediately send heartbeat response 
> LocalizerStatus.DIE when the Container transitions from LOCALIZING to KILLING
> 
>
> Key: YARN-7098
> URL: https://issues.apache.org/jira/browse/YARN-7098
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Brook Zhou
>Assignee: Brook Zhou
>Priority: Minor
>
> Currently, the following can happen:
> 1. ContainerLocalizer heartbeats to ResourceLocalizationService.
> 2. LocalizerTracker.processHeartbeat verifies that there is a LocalizerRunner 
> for the localizerId (containerId). Starts executing 
> 3. Container receives kill event, goes from LOCALIZING -> KILLING. The 
> LocalizerRunner is removed from LocalizerTracker.
> 4. Since check (2) passed, LocalizerRunner sends heartbeat response with 
> LocalizerStatus.LIVE and the next file to download.
> What should happen here is that (4) sends a LocalizerStatus.DIE, since (3) 
> happened before the heartbeat response in (4). This saves the container from 
> potentially downloading an extra resource which will end up being deleted 
> anyway.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7076) yarn application -list -appTypes is not working

2017-08-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144493#comment-16144493
 ] 

Hadoop QA commented on YARN-7076:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 343 unchanged - 4 fixed = 343 total (was 347) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m 29s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7076 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884134/YARN-7076.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8d8ae2ece1df 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 51881a8 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17165/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17165/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17165/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> yarn application -list -appTypes  is not working
> -
>
> 

[jira] [Updated] (YARN-7098) LocalizerRunner should immediately send heartbeat response LocalizerStatus.DIE when the Container transitions from LOCALIZING to KILLING

2017-08-28 Thread Brook Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brook Zhou updated YARN-7098:
-
Description: 
Currently, the following can happen:

1. ContainerLocalizer heartbeats to ResourceLocalizationService.
2. LocalizerTracker.processHeartbeat verifies that there is a LocalizerRunner 
for the localizerId (containerId).
3. Container receives kill event, goes from LOCALIZING -> KILLING. The 
LocalizerRunner is not removed from LocalizerTracker due to locking.
4. Since check (2) passed, LocalizerRunner sends heartbeat response with 
LocalizerStatus.LIVE and the next file to download.

What should happen here is that (4) sends a LocalizerStatus.DIE, since (3) 
happened before the heartbeat response in (4). This saves the container from 
potentially downloading an extra resource which will end up being deleted 
anyway.

  was:
Currently, the following can happen:

1. ContainerLocalizer heartbeats to ResourceLocalizationService.
2. LocalizerTracker.processHeartbeat verifies that there is a LocalizerRunner 
for the localizerId (containerId).
3. Container receives kill event, goes from LOCALIZING -> KILLING. The 
LocalizerRunner for the localizerId is removed from LocalizerTracker.
4. Since check (2) passed, LocalizerRunner sends heartbeat response with 
LocalizerStatus.LIVE and the next file to download.

What should happen here is that (4) sends a LocalizerStatus.DIE, since (3) 
happened before the heartbeat response in (4). This saves the container from 
potentially downloading an extra resource which will end up being deleted 
anyway.


> LocalizerRunner should immediately send heartbeat response 
> LocalizerStatus.DIE when the Container transitions from LOCALIZING to KILLING
> 
>
> Key: YARN-7098
> URL: https://issues.apache.org/jira/browse/YARN-7098
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Brook Zhou
>Assignee: Brook Zhou
>Priority: Minor
>
> Currently, the following can happen:
> 1. ContainerLocalizer heartbeats to ResourceLocalizationService.
> 2. LocalizerTracker.processHeartbeat verifies that there is a LocalizerRunner 
> for the localizerId (containerId).
> 3. Container receives kill event, goes from LOCALIZING -> KILLING. The 
> LocalizerRunner is not removed from LocalizerTracker due to locking.
> 4. Since check (2) passed, LocalizerRunner sends heartbeat response with 
> LocalizerStatus.LIVE and the next file to download.
> What should happen here is that (4) sends a LocalizerStatus.DIE, since (3) 
> happened before the heartbeat response in (4). This saves the container from 
> potentially downloading an extra resource which will end up being deleted 
> anyway.
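For illustration, a minimal sketch of the heartbeat check described above; the class and method names (LocalizerHeartbeatResponse, LocalizerAction, processHeartbeat) are simplified stand-ins, not the actual ResourceLocalizationService API:

{code}
import java.util.Collections;
import java.util.List;

enum ContainerState { LOCALIZING, KILLING, DONE }
enum LocalizerAction { LIVE, DIE }

class LocalizerHeartbeatResponse {
  final LocalizerAction action;
  final List<String> resourcesToDownload;
  LocalizerHeartbeatResponse(LocalizerAction action, List<String> resources) {
    this.action = action;
    this.resourcesToDownload = resources;
  }
}

class LocalizerRunnerSketch {
  private volatile ContainerState containerState = ContainerState.LOCALIZING;
  private final List<String> pendingResources;

  LocalizerRunnerSketch(List<String> pendingResources) {
    this.pendingResources = pendingResources;
  }

  void onContainerKill() {
    containerState = ContainerState.KILLING;
  }

  // Check the container state before handing out the next resource, so a
  // localizer that heartbeats after the container moved to KILLING is told
  // to DIE instead of downloading one more resource.
  LocalizerHeartbeatResponse processHeartbeat() {
    if (containerState != ContainerState.LOCALIZING) {
      return new LocalizerHeartbeatResponse(
          LocalizerAction.DIE, Collections.emptyList());
    }
    List<String> next = pendingResources.isEmpty()
        ? Collections.emptyList()
        : Collections.singletonList(pendingResources.remove(0));
    return new LocalizerHeartbeatResponse(LocalizerAction.LIVE, next);
  }
}
{code}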



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6756) ContainerRequest#executionTypeRequest causes NPE

2017-08-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144480#comment-16144480
 ] 

Hadoop QA commented on YARN-6756:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client: The patch generated 2 new + 
10 unchanged - 0 fixed = 12 total (was 10) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 
19s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6756 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884136/YARN-6756.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 60e19670e49e 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 51881a8 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/17166/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17166/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17166/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ContainerRequest#executionTypeRequest causes NPE
> 
>
> Key: YARN-6756
> URL: https://issues.apache.org/jira/browse/YARN-6756
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>Priority: Critical
> 

[jira] [Commented] (YARN-7115) Move BoundedAppender to org.hadoop.yarn.util pacakge

2017-08-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144479#comment-16144479
 ] 

Hadoop QA commented on YARN-7115:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 45s{color} 
| {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 48m 33s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
32s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}108m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.util.TestBoundedAppender |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands |
|   | 
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA |
|   | org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7115 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884118/YARN-7115.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8cd0d1d508c7 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 51881a8 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17162/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
 |
| unit | 

[jira] [Commented] (YARN-6756) ContainerRequest#executionTypeRequest causes NPE

2017-08-28 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144477#comment-16144477
 ] 

Arun Suresh commented on YARN-6756:
---

Makes sense.
+1 pending Jenkins. Thanks.

> ContainerRequest#executionTypeRequest causes NPE
> 
>
> Key: YARN-6756
> URL: https://issues.apache.org/jira/browse/YARN-6756
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>Priority: Critical
> Attachments: YARN-6756.01.patch, YARN-6756.02.patch
>
>
> ContainerRequest#executionTypeRequest is initialized as null, which could
> cause the "execTypeReq.getExecutionType" call below to throw an NPE unconditionally.
> {code}
>   ResourceRequestInfo addResourceRequest(Long allocationRequestId,
>   Priority priority, String resourceName, ExecutionTypeRequest 
> execTypeReq,
>   Resource capability, T req, boolean relaxLocality,
>   String labelExpression) {
> ResourceRequestInfo resourceRequestInfo = get(priority, resourceName,
> execTypeReq.getExecutionType(), capability);
> {code}
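For illustration, a standalone null-safe sketch of the guard implied by the description; ExecutionType and ExecutionTypeRequestStandIn here are simplified stand-ins for the YARN records, and defaulting to GUARANTEED is an assumption:

{code}
enum ExecutionType { GUARANTEED, OPPORTUNISTIC }

class ExecutionTypeRequestStandIn {
  private final ExecutionType type;
  ExecutionTypeRequestStandIn(ExecutionType type) { this.type = type; }
  ExecutionType getExecutionType() { return type; }
}

class ExecutionTypeGuardSketch {
  // Fall back to GUARANTEED when the caller never set an execution type,
  // instead of dereferencing a null request and throwing an NPE.
  static ExecutionType resolveExecutionType(ExecutionTypeRequestStandIn execTypeReq) {
    return execTypeReq == null
        ? ExecutionType.GUARANTEED
        : execTypeReq.getExecutionType();
  }

  public static void main(String[] args) {
    System.out.println(resolveExecutionType(null)); // GUARANTEED
    System.out.println(resolveExecutionType(
        new ExecutionTypeRequestStandIn(ExecutionType.OPPORTUNISTIC)));
  }
}
{code}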



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-08-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144470#comment-16144470
 ] 

ASF GitHub Bot commented on YARN-2162:
--

Github user kambatla commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/261#discussion_r135650317
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSQueue.java
 ---
@@ -158,26 +158,37 @@ public Resource getMinShare() {
 return minShare;
   }
 
-  public void setMaxShare(Resource maxShare){
+  public void setMaxShare(ConfigurableResource maxShare){
 this.maxShare = maxShare;
   }
 
+  @Override
+  public Resource getMaxShare() {
+Resource maxResource = 
maxShare.getResource(scheduler.getClusterResource());
+
+// Set max resource to min resource if min resource is greater than max
+// resource
+if(Resources.greaterThan(scheduler.getResourceCalculator(),
+scheduler.getClusterResource(), minShare, maxResource)) {
--- End diff --

This should likely be componentWiseMax. See Daniel's patch on YARN-6964.
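For illustration, a minimal sketch of the component-wise maximum the comment refers to, using a simplified stand-in Resource type rather than the actual Resources utility:

{code}
// Simplified stand-in for the YARN Resource type (memory + vcores only).
class ResourceSketch {
  final long memoryMb;
  final int vcores;

  ResourceSketch(long memoryMb, int vcores) {
    this.memoryMb = memoryMb;
    this.vcores = vcores;
  }

  // Component-wise maximum: every dimension of the result is at least as
  // large as the corresponding dimension of either input. Clamping the
  // configured max share with componentwiseMax(min, max) guarantees the
  // max is never below the min in any single dimension.
  static ResourceSketch componentwiseMax(ResourceSketch a, ResourceSketch b) {
    return new ResourceSketch(
        Math.max(a.memoryMb, b.memoryMb),
        Math.max(a.vcores, b.vcores));
  }
}
{code}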


> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: YARN-2162.001.patch, YARN-2162.002.patch, 
> YARN-2162.003.patch
>
>
> minResources and maxResources in fair scheduler configs are expressed in 
> terms of absolute numbers X mb, Y vcores. 
> As a result, when we expand or shrink our hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is pretty 
> inconvenient.
> We can circumvent this problem if we can optionally configure these 
> properties in terms of percentage of cluster capacity. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-08-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144472#comment-16144472
 ] 

ASF GitHub Bot commented on YARN-2162:
--

Github user kambatla commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/261#discussion_r135650575
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerConfiguration.java
 ---
@@ -287,27 +289,69 @@ public float getReservableNodes() {
* 
* @throws AllocationConfigurationException
*/
-  public static Resource parseResourceConfigValue(String val)
+  public static ConfigurableResource parseResourceConfigValue(String val)
   throws AllocationConfigurationException {
+ConfigurableResource configurableResource;
 try {
   val = StringUtils.toLowerCase(val);
-  int memory = findResource(val, "mb");
-  int vcores = findResource(val, "vcores");
-  return BuilderUtils.newResource(memory, vcores);
+  if (val.contains("%")) {
+configurableResource = new ConfigurableResource(
+getResourcePercentage(val));
+  } else {
+int memory = findResource(val, "mb");
+int vcores = findResource(val, "vcores");
+configurableResource = new ConfigurableResource(
+BuilderUtils.newResource(memory, vcores));
+  }
 } catch (AllocationConfigurationException ex) {
   throw ex;
 } catch (Exception ex) {
   throw new AllocationConfigurationException(
   "Error reading resource config", ex);
 }
+return configurableResource;
+  }
+
+  private static double[] getResourcePercentage(
+  String val) throws AllocationConfigurationException {
+double[] resourcePercentage = new double[ResourceType.values().length];
+String[] strings = val.split(",");
+if (strings.length == 1) {
+  double percentage = findPercentage(strings[0], "");
+  for (int i = 0 ; i < ResourceType.values().length ; i++) {
+resourcePercentage[i] = percentage/100;
+  }
+} else {
+  double memPercentage = findPercentage(val, "memory");
+  double vcorePercentage = findPercentage(val, "cpu");
+  resourcePercentage[0] = memPercentage/100;
--- End diff --

With ResourceTypes coming in, I feel this has to be handled better.  Loop 
over and use that index instead? 
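For illustration, a sketch of the suggested loop over the resource types, with a stand-in ResourceTypeSketch enum rather than the actual class:

{code}
// Stand-in for the fair scheduler's ResourceType enum.
enum ResourceTypeSketch { MEMORY, CPU }

class PercentageParserSketch {
  // Fill the percentage array by iterating over the resource types and using
  // each type's ordinal as the index, instead of hard-coding slots 0 and 1.
  static double[] toPercentages(double memPercentage, double vcorePercentage) {
    double[] resourcePercentage = new double[ResourceTypeSketch.values().length];
    for (ResourceTypeSketch type : ResourceTypeSketch.values()) {
      double pct = (type == ResourceTypeSketch.MEMORY)
          ? memPercentage : vcorePercentage;
      resourcePercentage[type.ordinal()] = pct / 100.0;
    }
    return resourcePercentage;
  }
}
{code}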


> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: YARN-2162.001.patch, YARN-2162.002.patch, 
> YARN-2162.003.patch
>
>
> minResources and maxResources in fair scheduler configs are expressed in 
> terms of absolute numbers X mb, Y vcores. 
> As a result, when we expand or shrink our hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is pretty 
> inconvenient.
> We can circumvent this problem if we can optionally configure these 
> properties in terms of percentage of cluster capacity. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-08-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144469#comment-16144469
 ] 

ASF GitHub Bot commented on YARN-2162:
--

Github user kambatla commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/261#discussion_r135649586
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/ConfigurableResource.java
 ---
@@ -0,0 +1,62 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.server.resourcemanager.resource.ResourceType;
+
+import java.util.HashMap;
+
+/**
+ * A {@link ConfigurableResource} object represents an entity that is used 
to
+ * configure resources, such as maximum resources of a queue. It can be
+ * percentage of cluster resources or an absolute value.
+ */
+@Private
+@Unstable
+public class ConfigurableResource {
+  private final Resource resource;
+  private final double[] percentages;
+
+  public ConfigurableResource(double[] percentages) {
+this.percentages = percentages;
+this.resource = null;
+  }
+
+  public ConfigurableResource(Resource resource) {
+this.percentages = null;
+this.resource = resource;
+  }
+
+  public Resource getResource(Resource clusterResource) {
+if (percentages != null && clusterResource != null) {
+  long memory = (long) (clusterResource.getMemorySize() * 
percentages[0]);
--- End diff --

Is this in line with the ResourceTypes work? Do we want someone from that 
world, say Daniel, to look at this patch? 


> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: YARN-2162.001.patch, YARN-2162.002.patch, 
> YARN-2162.003.patch
>
>
> minResources and maxResources in fair scheduler configs are expressed in 
> terms of absolute numbers X mb, Y vcores. 
> As a result, when we expand or shrink our hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is pretty 
> inconvenient.
> We can circumvent this problem if we can optionally configure these 
> properties in terms of percentage of cluster capacity. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7072) Add a new log aggregation file format controller

2017-08-28 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144468#comment-16144468
 ] 

Wangda Tan commented on YARN-7072:
--

[~xgong],

bq. Right now, we do not have a filesystem API to check whether append is supported. So I am adding a temporary configuration for this, and will throw a runtime exception if the user sets this configuration to false while using LogAggregationIndexFileFormat.
I would prefer not to add a config. How about doing a filesystem test when LogAggregationService starts? Just create a file, append a few bytes, and fail fast if the append fails.
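For illustration, a minimal sketch of such a probe using the FileSystem create/append API; the probe path and the place it would run are assumptions, not the actual LogAggregationService code:

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class AppendSupportProbe {
  // Create a small file, try to append to it, and fail fast if the underlying
  // filesystem rejects the append. Intended to run once at service start.
  static void verifyAppendSupported(Configuration conf, Path probeFile)
      throws IOException {
    FileSystem fs = probeFile.getFileSystem(conf);
    try {
      try (FSDataOutputStream out = fs.create(probeFile, true)) {
        out.writeBytes("probe");
      }
      try (FSDataOutputStream out = fs.append(probeFile)) {
        out.writeBytes("append");
      } catch (UnsupportedOperationException | IOException e) {
        throw new IOException("Filesystem does not support append; the "
            + "indexed log aggregation format cannot be used", e);
      }
    } finally {
      fs.delete(probeFile, false);
    }
  }
}
{code}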

bq. I think this should be fine. The seek operation is not that expensive.
Since this is just an internal implementation detail, we can revisit it later. 

bq. No, dummyBytes is not a separator. In the current implementation we do not have a separator. By adding the dummyBytes, I just want to reset the cursor to the end of the file for appending the new logs.
I suggest adding a separator in this patch (a hard-coded, randomly generated 128-bit string) if it is a trivial change. This might be important for future file formats; updating the implementation is relatively simple, but updating the format is painful.

Will do another review later.

> Add a new log aggregation file format controller
> 
>
> Key: YARN-7072
> URL: https://issues.apache.org/jira/browse/YARN-7072
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-7072-trunk.001.patch, YARN-7072.trunk.002.patch, 
> YARN-7072-trunk.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7072) Add a new log aggregation file format controller

2017-08-28 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-7072:

Attachment: YARN-7072-trunk.003.patch

> Add a new log aggregation file format controller
> 
>
> Key: YARN-7072
> URL: https://issues.apache.org/jira/browse/YARN-7072
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-7072-trunk.001.patch, YARN-7072.trunk.002.patch, 
> YARN-7072-trunk.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7072) Add a new log aggregation file format controller

2017-08-28 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144454#comment-16144454
 ] 

Xuan Gong commented on YARN-7072:
-

Thanks for the review, [~leftnoteasy]

bq. Please check appendable when LogAggregationIndexFileFormat is chosen.

Right now, we do not have a filesystem API to check whether append is supported. So I am adding a temporary configuration for this, and will throw a runtime exception if the user sets this configuration to false while using LogAggregationIndexFileFormat.

bq. Suggest to put all TFile controller related implementation to 
...filecontroller.tfile, and all Indexed controller impl to 
...filecontroller.ifile (or some better name)

Done

bq. IndexedFileAggregatedLogsBlock is a little bit lengthy, suggest to move it 
to a separate file and break down render() method.

Done

bq. It looks like getFilteredFiles could be getAllChecksumFiles, since suffix never accepts input other than CHECK_SUM_FILE_SUFFIX.

Makes sense. Fixed it.

bq. Is it possible that there are more than two checksum files? Could we check it inside getFilteredFiles and throw an exception when we find that?

I do not think so. Before we create a checksum file, we make sure that no checksum file already exists. If one does, we read the existing checksum file instead of creating a new one. There should be only one checksum file for each NM.
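For illustration, a sketch of the sanity check discussed above; the suffix value and method name are assumptions, not the patch's actual constants:

{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class ChecksumFileCheckSketch {
  // Hypothetical suffix; the real constant lives in the patch.
  static final String CHECK_SUM_FILE_SUFFIX = ".checksum";

  // Collect the checksum files under the NM's aggregated log directory and
  // fail loudly if more than one is found, since only one is ever expected.
  static List<FileStatus> getAllChecksumFiles(FileSystem fs, Path logDir)
      throws IOException {
    List<FileStatus> result = new ArrayList<>();
    for (FileStatus status : fs.listStatus(logDir)) {
      if (status.getPath().getName().endsWith(CHECK_SUM_FILE_SUFFIX)) {
        result.add(status);
      }
    }
    if (result.size() > 1) {
      throw new IOException("Expected at most one checksum file in " + logDir
          + " but found " + result.size());
    }
    return result;
  }
}
{code}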

bq. LogAggregationFileController#createPrintStream should use LogCLIHelper#createPrintStream instead.

Yes, moved createPrintStream to a Utils class and used it for both LogAggregationFileController and LogCLIHelper.

bq. Why is the sort needed? Could that possibly produce a different sequence of file content stored in the log file (e.g. in the serialized file we have container1_stdout, container3_stderr, container2_stdout), which could lead to unnecessary seek operations?

Removed.

bq. Output-format-related logic should be common and shared by all controller implementations.

Makes sense. Fixed.

bq. It looks like loadIndexedLogsMeta seeks twice. Is it possible to read the last x MB (say, 64 MB) of data directly, assuming that in most cases the total size of the file meta is less than x MB, so we don't have to seek twice? The seek operation could be expensive. A ByteArrayInputStream could be used to read from cached memory.

I think this should be fine. The seek operation is not that expensive.

bq. When an IOException is caught, it's better to log the full stack trace to the log file instead of only the message.

bq. For c, I think we should read the meta information from the original file as well when the checksum file does not exist (when we are doing rolling aggregation and the last aggregation succeeded, since we delete the checksum file every time after aggregation succeeds; see postWrite).

Yes, in some cases we do need to read the meta from the original file, but we will not do it every time.

bq. Path remoteLogFile, should be final.

Done

bq. IIUC, the dummyBytes is a separator so we know what the last successful write was. If so, a simple "\n" is probably not enough.

No, dummyBytes is not a separator. In the current implementation we do not have a separator. By adding the dummyBytes, I just want to reset the cursor to the end of the file for appending the new logs.

bq. Renames: fsDataOutputStream => checksumFileOutputStream, fsDataInputStream => checksumFileInputStream.

Done

> Add a new log aggregation file format controller
> 
>
> Key: YARN-7072
> URL: https://issues.apache.org/jira/browse/YARN-7072
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-7072-trunk.001.patch, YARN-7072.trunk.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7010) Federation: routing REST invocations transparently to multiple RMs (part 2 - getApps)

2017-08-28 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-7010:
---
Attachment: YARN-7010.v4.patch

> Federation: routing REST invocations transparently to multiple RMs (part 2 - 
> getApps)
> -
>
> Key: YARN-7010
> URL: https://issues.apache.org/jira/browse/YARN-7010
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-7010.v0.patch, YARN-7010.v1.patch, 
> YARN-7010.v2.patch, YARN-7010.v3.patch, YARN-7010.v4.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6161) YARN support for port allocation

2017-08-28 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144453#comment-16144453
 ] 

Wangda Tan commented on YARN-6161:
--

Is this duplicated by YARN-7079?

> YARN support for port allocation
> 
>
> Key: YARN-6161
> URL: https://issues.apache.org/jira/browse/YARN-6161
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
> Fix For: yarn-native-services
>
>
> Since there is no agent code in YARN native services, we need another 
> mechanism for allocating ports to containers. This is not necessary when 
> running Docker containers, but it will become important when an agent-less 
> docker-less provider is introduced.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7010) Federation: routing REST invocations transparently to multiple RMs (part 2 - getApps)

2017-08-28 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144438#comment-16144438
 ] 

Giovanni Matteo Fumarola commented on YARN-7010:


Thanks [~curino].
1. Done.
2. The two methods with that tag in AppInfo are called only from test classes and mocks.
3. This is the intended behavior: I want to measure, for each call to the YARN RM, how much time the Router spends.
4. In my local tests this structure is optimal; the other solutions would require additional data structures. I cannot merge as I go, since I have to scan the whole list to figure out whether there is an AM or not. If we merge before we find the AM and the policy is to discard incomplete results, we end up doing more iterations than needed.
5. Done.
6. Done.
7. Done.
8. Done.

> Federation: routing REST invocations transparently to multiple RMs (part 2 - 
> getApps)
> -
>
> Key: YARN-7010
> URL: https://issues.apache.org/jira/browse/YARN-7010
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-7010.v0.patch, YARN-7010.v1.patch, 
> YARN-7010.v2.patch, YARN-7010.v3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7112) TestAMRMProxy is failing with invalid request

2017-08-28 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144429#comment-16144429
 ] 

Wangda Tan commented on YARN-7112:
--

Committed to branch-2.8 as well. Thanks [~jlowe] for the quick update.

> TestAMRMProxy is failing with invalid request
> -
>
> Key: YARN-7112
> URL: https://issues.apache.org/jira/browse/YARN-7112
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-beta1, 2.8.2
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.2
>
> Attachments: YARN-7112.001.patch, YARN-7112-branch-2.8.001.patch
>
>
> The testAMRMProxyE2E and testAMRMProxyTokenRenewal tests in TestAMRMProxy are 
> failing:
> {noformat}
> org.apache.hadoop.yarn.exceptions.InvalidApplicationMasterRequestException: 
> Invalid responseId in AllocateRequest from application attempt: 
> appattempt_1503933047334_0001_01, expect responseId to be 0, but get 1
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7112) TestAMRMProxy is failing with invalid request

2017-08-28 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7112:
-
Fix Version/s: 2.8.2

> TestAMRMProxy is failing with invalid request
> -
>
> Key: YARN-7112
> URL: https://issues.apache.org/jira/browse/YARN-7112
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-beta1, 2.8.2
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.2
>
> Attachments: YARN-7112.001.patch, YARN-7112-branch-2.8.001.patch
>
>
> The testAMRMProxyE2E and testAMRMProxyTokenRenewal tests in TestAMRMProxy are 
> failing:
> {noformat}
> org.apache.hadoop.yarn.exceptions.InvalidApplicationMasterRequestException: 
> Invalid responseId in AllocateRequest from application attempt: 
> appattempt_1503933047334_0001_01, expect responseId to be 0, but get 1
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7113) Clean up packaging and dependencies for yarn-native-services

2017-08-28 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-7113:
-
Attachment: YARN-7113-yarn-native-services.01.patch

> Clean up packaging and dependencies for yarn-native-services
> 
>
> Key: YARN-7113
> URL: https://issues.apache.org/jira/browse/YARN-7113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-7113-yarn-native-services.01.patch
>
>
> Since the yarn native services code has been greatly simplified, I think we 
> no longer need a separate lib directory for services. A dependency cleanup is 
> needed to address unused declared dependencies and used undeclared 
> dependencies in the new modules. We should also address NOTICE changes needed 
> for the 3 new dependencies that are being added, jcommander, snakeyaml, and 
> swagger-annotations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7083) Log aggregation deletes/renames while file is open

2017-08-28 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144420#comment-16144420
 ] 

Jason Lowe commented on YARN-7083:
--

I'm OK with fixing this in 2.8.x and filing a followup JIRA. If that followup isn't going to be fixed for a while, it may make more sense to revert YARN-6876 until it's ready to address the issue, to avoid shipping this bug in 3.0.0-beta1 or 2.9.0.

> Log aggregation deletes/renames while file is open
> --
>
> Key: YARN-7083
> URL: https://issues.apache.org/jira/browse/YARN-7083
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.2
>Reporter: Daryn Sharp
>Assignee: Jason Lowe
>Priority: Critical
> Attachments: YARN-7083.001.patch
>
>
> YARN-6288 changes the log aggregation writer to be an autoclosable.  
> Unfortunately the try-with-resources block for the writer will either rename 
> or delete the log while open.
> Assuming the NM's behavior is correct, deleting open files only results in 
> ominous WARNs in the nodemanager log and increases the rate of logging in the 
> NN when the implicit try-with-resource close fails.  These red herrings 
> complicate debugging efforts.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7076) yarn application -list -appTypes is not working

2017-08-28 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144412#comment-16144412
 ] 

Junping Du commented on YARN-7076:
--

Thanks [~jianhe] for updating the patch. The 02 patch LGTM. +1 based on the Jenkins report.

> yarn application -list -appTypes  is not working
> -
>
> Key: YARN-7076
> URL: https://issues.apache.org/jira/browse/YARN-7076
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>Priority: Blocker
> Attachments: YARN-7076.01.patch, YARN-7076.02.patch
>
>
> yarn application -list -appTypes  is not working
> Looks like it's because the ApplicationCLI passes in the appType as uppercase,
> but ClientRMService#getApplications is case-sensitive, so if a user submits an
> app with a lowercase appType, it won't work
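For illustration, a standalone sketch of the case-insensitive match the description implies; the real fix lives in ApplicationCLI/ClientRMService, so the names here are illustrative:

{code}
import java.util.Locale;
import java.util.Set;
import java.util.stream.Collectors;

class AppTypeFilterSketch {
  // Lower-case both the requested types and the application's type so that
  // "MAPREDUCE" and "mapreduce" match the same applications.
  static boolean matches(Set<String> requestedTypes, String appType) {
    if (requestedTypes == null || requestedTypes.isEmpty()) {
      return true; // no filter means every application matches
    }
    Set<String> lowered = requestedTypes.stream()
        .map(t -> t.toLowerCase(Locale.ENGLISH))
        .collect(Collectors.toSet());
    return lowered.contains(appType.toLowerCase(Locale.ENGLISH));
  }
}
{code}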



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6756) ContainerRequest#executionTypeRequest causes NPE

2017-08-28 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144410#comment-16144410
 ] 

Jian He commented on YARN-6756:
---

[~asuresh], UT added.
There's one more issue: relaxLocality was true by default for the old ContainerRequest constructors, but with the builder it defaults to false.
I changed it to true; does that look OK?

> ContainerRequest#executionTypeRequest causes NPE
> 
>
> Key: YARN-6756
> URL: https://issues.apache.org/jira/browse/YARN-6756
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>Priority: Critical
> Attachments: YARN-6756.01.patch, YARN-6756.02.patch
>
>
> ContainerRequest#executionTypeRequest is initialized as null, which could
> cause the "execTypeReq.getExecutionType" call below to throw an NPE unconditionally.
> {code}
>   ResourceRequestInfo addResourceRequest(Long allocationRequestId,
>   Priority priority, String resourceName, ExecutionTypeRequest 
> execTypeReq,
>   Resource capability, T req, boolean relaxLocality,
>   String labelExpression) {
> ResourceRequestInfo resourceRequestInfo = get(priority, resourceName,
> execTypeReq.getExecutionType(), capability);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6756) ContainerRequest#executionTypeRequest causes NPE

2017-08-28 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144410#comment-16144410
 ] 

Jian He edited comment on YARN-6756 at 8/28/17 9:53 PM:


[~asuresh], UT added.
There's one more issue: relaxLocality was true by default for the old ContainerRequest constructors, but with the builder it defaults to false.
I changed it to true; does that look OK?


was (Author: jianhe):
[~asuresh], ut added.
there's one more issue, the relaxLocality was by default is true for the old 
ContainerRequest constructors, but with the builder it is by default false
I changed it to true, looks ok ?

> ContainerRequest#executionTypeRequest causes NPE
> 
>
> Key: YARN-6756
> URL: https://issues.apache.org/jira/browse/YARN-6756
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>Priority: Critical
> Attachments: YARN-6756.01.patch, YARN-6756.02.patch
>
>
> ContainerRequest#executionTypeRequest is initialized as null, which could
> cause the "execTypeReq.getExecutionType" call below to throw an NPE unconditionally.
> {code}
>   ResourceRequestInfo addResourceRequest(Long allocationRequestId,
>   Priority priority, String resourceName, ExecutionTypeRequest 
> execTypeReq,
>   Resource capability, T req, boolean relaxLocality,
>   String labelExpression) {
> ResourceRequestInfo resourceRequestInfo = get(priority, resourceName,
> execTypeReq.getExecutionType(), capability);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6756) ContainerRequest#executionTypeRequest causes NPE

2017-08-28 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6756:
--
Attachment: YARN-6756.02.patch

> ContainerRequest#executionTypeRequest causes NPE
> 
>
> Key: YARN-6756
> URL: https://issues.apache.org/jira/browse/YARN-6756
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>Priority: Critical
> Attachments: YARN-6756.01.patch, YARN-6756.02.patch
>
>
> ContainerRequest#executionTypeRequest is initialized as null, which could
> cause the "execTypeReq.getExecutionType" call below to throw an NPE unconditionally.
> {code}
>   ResourceRequestInfo addResourceRequest(Long allocationRequestId,
>   Priority priority, String resourceName, ExecutionTypeRequest 
> execTypeReq,
>   Resource capability, T req, boolean relaxLocality,
>   String labelExpression) {
> ResourceRequestInfo resourceRequestInfo = get(priority, resourceName,
> execTypeReq.getExecutionType(), capability);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6876) Create an abstract log writer for extendability

2017-08-28 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144402#comment-16144402
 ] 

Jason Lowe commented on YARN-6876:
--

Sorry for the late comment, but I was just looking at the code in trunk and saw a bunch of close calls changed to quiet forms. It seems to me this could silently swallow errors encountered during close, which would be a bad thing. I saw that this was brought up earlier in the review, but I didn't see it addressed in the comments.
bq. I found we are replacing close or closeStream with closeQuietly. Any special reason for this change? Is that related to this refactoring effort?
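For illustration, a standalone contrast between a quiet close and a propagating close; this is not the actual log aggregation code:

{code}
import java.io.Closeable;
import java.io.IOException;

class QuietCloseSketch {
  // The quiet form: an IOException thrown while closing (for example a
  // failed flush of buffered data) is dropped, so the caller never learns
  // that the write may be incomplete.
  static void closeQuietly(Closeable c) {
    try {
      if (c != null) {
        c.close();
      }
    } catch (IOException ignored) {
      // swallowed
    }
  }

  // The propagating form: a failure during close surfaces to the caller,
  // which can then fail the aggregation instead of silently losing data.
  static void closeAndPropagate(Closeable c) throws IOException {
    if (c != null) {
      c.close();
    }
  }
}
{code}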


> Create an abstract log writer for extendability
> ---
>
> Key: YARN-6876
> URL: https://issues.apache.org/jira/browse/YARN-6876
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-6876-branch-2.001.patch, YARN-6876-trunk.001.patch, 
> YARN-6876-trunk.002.patch, YARN-6876-trunk.003.patch, 
> YARN-6876-trunk.004.patch, YARN-6876-trunk.005.patch, 
> YARN-6876-trunk.006.patch
>
>
> Currently, TFile log writer is used to aggregate log in YARN. We need to add 
> an abstract layer, and pick up the correct log writer based on the 
> configuration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7037) Optimize data transfer with zero-copy approach for containerlogs REST API in NMWebServices

2017-08-28 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144398#comment-16144398
 ] 

Junping Du commented on YARN-7037:
--

bq. LogToolUtils#outputContainerLog is used for both the local log, which can be optimized with a FileInputStream, and the aggregated log, which can't, because it's transferred via a DataInputStream from a remote host.
I see. That makes sense to me.

+1 on the latest patch. Will commit it tomorrow if there are no further comments from others.

> Optimize data transfer with zero-copy approach for containerlogs REST API in 
> NMWebServices
> --
>
> Key: YARN-7037
> URL: https://issues.apache.org/jira/browse/YARN-7037
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.8.0
>Reporter: Tao Yang
>Assignee: Tao Yang
> Attachments: YARN-7037.001.patch, YARN-7037.branch-2.8.001.patch
>
>
> Split this improvement from YARN-6259.
> It's useful to read container logs more efficiently. With the zero-copy approach,
> the data transfer pipeline (disk --> read buffer --> NM buffer --> socket buffer)
> can be optimized to the pipeline (disk --> read buffer --> socket buffer).
> In my local test, the time cost of copying a 256 MB file with zero-copy was
> reduced from 12 seconds to 2.5 seconds.
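For illustration, a minimal sketch of the zero-copy idea using NIO's FileChannel.transferTo; the actual patch works against the NM web layer, so this only demonstrates the transfer primitive:

{code}
import java.io.IOException;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

class ZeroCopySketch {
  // transferTo lets the kernel move bytes from the page cache toward the
  // target channel without copying them through a user-space buffer.
  static void sendFile(Path logFile, WritableByteChannel target)
      throws IOException {
    try (FileChannel in = FileChannel.open(logFile, StandardOpenOption.READ)) {
      long position = 0;
      long size = in.size();
      while (position < size) {
        position += in.transferTo(position, size - position, target);
      }
    }
  }

  public static void main(String[] args) throws IOException {
    sendFile(Paths.get(args[0]), Channels.newChannel(System.out));
  }
}
{code}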



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6756) ContainerRequest#executionTypeRequest causes NPE

2017-08-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144397#comment-16144397
 ] 

Hadoop QA commented on YARN-6756:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 
16s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6756 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884120/YARN-6756.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 97c7d1acb07a 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 51881a8 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17164/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17164/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ContainerRequest#executionTypeRequest causes NPE
> 
>
> Key: YARN-6756
> URL: https://issues.apache.org/jira/browse/YARN-6756
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>Priority: Critical
> Attachments: YARN-6756.01.patch
>
>
> ContainerRequest#executionTypeRequest is initialized as null, which could 
> cause below " execTypeReq.getExecutionType" 

[jira] [Updated] (YARN-7076) yarn application -list -appTypes is not working

2017-08-28 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7076:
--
Attachment: YARN-7076.02.patch

> yarn application -list -appTypes  is not working
> -
>
> Key: YARN-7076
> URL: https://issues.apache.org/jira/browse/YARN-7076
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>Priority: Blocker
> Attachments: YARN-7076.01.patch, YARN-7076.02.patch
>
>
> yarn application -list -appTypes  is not working
> Looks like it's because the ApplicationCLI pass in the appType as uppercase, 
> but ClientRMService#getApplications is case sensitive, so if user submits an 
> app with lowercase appType, it wont work



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4511) Common scheduler changes supporting scheduler-specific implementations

2017-08-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144371#comment-16144371
 ] 

Hadoop QA commented on YARN-4511:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-1011 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  5m  
4s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
46s{color} | {color:green} YARN-1011 passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  4m 
57s{color} | {color:red} root in YARN-1011 failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
43s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} YARN-1011 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 13m 23s{color} 
| {color:red} root generated 539 new + 778 unchanged - 0 fixed = 1317 total 
(was 778) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 48s{color} | {color:orange} root: The patch generated 11 new + 559 unchanged 
- 0 fixed = 570 total (was 559) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
12s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
28s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 1 new + 348 unchanged - 0 fixed = 349 total (was 348) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 39s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 51s{color} 
| {color:red} hadoop-sls in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Increment of volatile field 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode.numGuaranteedContainers
 in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode.guaranteedContainerAllocated(RMContainer,
 boolean)  At SchedulerNode.java:in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode.guaranteedContainerAllocated(RMContainer,
 boolean)  At SchedulerNode.java:[line 207] |
|  |  Increment of volatile field 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode.numGuaranteedContainers
 in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode.guaranteedContainerReleased(RMContainer,
 boolean)  At SchedulerNode.java:in 

[jira] [Commented] (YARN-6894) RM Apps API returns only active apps when query parameter queue used

2017-08-28 Thread Grant Sohn (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144365#comment-16144365
 ] 

Grant Sohn commented on YARN-6894:
--

+1 (non-binding)

> RM Apps API returns only active apps when query parameter queue used
> 
>
> Key: YARN-6894
> URL: https://issues.apache.org/jira/browse/YARN-6894
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, restapi
>Reporter: Grant Sohn
>Assignee: Gergely Novák
>Priority: Minor
> Attachments: YARN-6894.001.patch, YARN-6894.002.patch
>
>
> If you run RM's Cluster Applications API with no query parameters, you get a 
> list of apps.
> If you run RM's Cluster Applications API with any query parameters other than 
> "queue" you get the list of apps with the parameter filters being applied.
> However, when you use the "queue" query parameter, you only see the 
> applications that are active in the cluster (NEW, NEW_SAVING, SUBMITTED, 
> ACCEPTED, RUNNING).  This behavior is inconsistent with the API.  If there is 
> a sound reason behind this, it should be documented; it seems like there 
> might be, as the mapred queue CLI behaves similarly.
> http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Applications_API



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7112) TestAMRMProxy is failing with invalid request

2017-08-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144362#comment-16144362
 ] 

Hadoop QA commented on YARN-7112:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.8 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
47s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m  2s{color} 
| {color:red} hadoop-yarn-client in the patch failed with JDK v1.7.0_131. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_144 Failed junit tests | hadoop.yarn.client.TestGetGroups |
|   | hadoop.yarn.client.api.impl.TestYarnClient |
| JDK v1.7.0_131 Failed junit tests | hadoop.yarn.client.TestGetGroups |
|   | hadoop.yarn.client.api.impl.TestYarnClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:d946387 |
| JIRA Issue | YARN-7112 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884117/YARN-7112-branch-2.8.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6e0f46bf63b9 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git 

[jira] [Resolved] (YARN-7110) NodeManager always crash for spark shuffle service out of memory

2017-08-28 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved YARN-7110.
--
Resolution: Duplicate

> NodeManager always crash for spark shuffle service out of memory
> 
>
> Key: YARN-7110
> URL: https://issues.apache.org/jira/browse/YARN-7110
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: YunFan Zhou
>Priority: Critical
> Attachments: screenshot-1.png
>
>
> The NM often crashes due to the Spark shuffle service; I saw many error log 
> messages before the NM crashed:
> {noformat}
> 2017-08-28 16:14:20,521 ERROR 
> org.apache.spark.network.server.TransportRequestHandler: Error sending result 
> ChunkFetchSuccess{streamChunkId=StreamChunkId{streamId=79124460, 
> chunkIndex=0}, 
> buffer=FileSegmentManagedBuffer{file=/data11/hadoopdata/nodemanager/local/usercache/map_loc/appcache/application_1502793246072_2171283/blockmgr-11e2d625-8db1-477c-9365-4f6d0a7d1c48/10/shuffle_0_6_0.data,
>  offset=27063401500, length=64785602}} to /10.93.91.17:18958; closing 
> connection
> java.io.IOException: Broken pipe
> at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
> at 
> sun.nio.ch.FileChannelImpl.transferToDirectlyInternal(FileChannelImpl.java:428)
> at 
> sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:493)
> at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:608)
> at 
> org.apache.spark.network.buffer.LazyFileRegion.transferTo(LazyFileRegion.java:96)
> at 
> org.apache.spark.network.protocol.MessageWithHeader.transferTo(MessageWithHeader.java:92)
> at 
> io.netty.channel.socket.nio.NioSocketChannel.doWriteFileRegion(NioSocketChannel.java:254)
> at 
> io.netty.channel.nio.AbstractNioByteChannel.doWrite(AbstractNioByteChannel.java:237)
> at 
> io.netty.channel.socket.nio.NioSocketChannel.doWrite(NioSocketChannel.java:281)
> at 
> io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:761)
> at 
> io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.forceFlush(AbstractNioChannel.java:317)
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:519)
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
> at 
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
> at java.lang.Thread.run(Thread.java:745)
> 2017-08-28 16:14:20,523 ERROR 
> org.apache.spark.network.server.TransportRequestHandler: Error sending result 
> RpcResponse{requestId=7652091066050104512, 
> body=NioManagedBuffer{buf=java.nio.HeapByteBuffer[pos=0 lim=13 cap=13]}} to 
> /10.93.91.17:18958; closing connection
> {noformat}
> Finally, there are too many *Finalizer* objects in the *NM* process, which 
> causes an OOM.
> !screenshot-1.png!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7110) NodeManager always crash for spark shuffle service out of memory

2017-08-28 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144360#comment-16144360
 ] 

Jason Lowe commented on YARN-7110:
--

Looks like Varun was driving that effort but may be busy with other work.  Feel 
free to ping him on that JIRA for the current status.  It makes more sense to 
keep the discussion there where there's already earlier discussion, draft of a 
proposed design, and many people watching that ticket.  Splitting the 
discussion across that ticket and here does not make sense.  Closing this as a 
duplicate.

As for the urgent need, we ran into something similar and fixed the spark 
shuffle handler.  That's the urgent fix you need today.  Migrating it out of 
the NM to a separate process doesn't really solve this particular issue.  If 
the spark shuffle handler's memory is going to explode, it just changes what 
explodes with it.  It would be nice if it just destroyed the spark handler 
instead of the NM process, but a cluster running mostly Spark is still hosed if 
none of the shuffle handlers are running.  The NM supports a work-preserving 
restart, so you could also consider placing your NMs under supervision so they 
are restarted if they crash.  When doing this you will probably want to set 
yarn.nodemanager.recovery.supervised=true to inform the NM that it can rely on 
something to restart it in a timely manner if it goes down due to an error.  
Not as preferable as fixing the problem in the spark shuffle handler directly, 
but it is an option to help your situation in the short term.
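
For reference, a minimal yarn-site.xml sketch of that setup (the property names 
are the standard ones from yarn-default.xml; the recovery directory value is 
just an example):
{noformat}
<property>
  <name>yarn.nodemanager.recovery.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.nodemanager.recovery.dir</name>
  <value>/var/lib/hadoop-yarn/nm-recovery</value>
</property>
<property>
  <name>yarn.nodemanager.recovery.supervised</name>
  <value>true</value>
</property>
{noformat}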


> NodeManager always crash for spark shuffle service out of memory
> 
>
> Key: YARN-7110
> URL: https://issues.apache.org/jira/browse/YARN-7110
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: YunFan Zhou
>Priority: Critical
> Attachments: screenshot-1.png
>
>
> The NM often crashes due to the Spark shuffle service; I saw many error log 
> messages before the NM crashed:
> {noformat}
> 2017-08-28 16:14:20,521 ERROR 
> org.apache.spark.network.server.TransportRequestHandler: Error sending result 
> ChunkFetchSuccess{streamChunkId=StreamChunkId{streamId=79124460, 
> chunkIndex=0}, 
> buffer=FileSegmentManagedBuffer{file=/data11/hadoopdata/nodemanager/local/usercache/map_loc/appcache/application_1502793246072_2171283/blockmgr-11e2d625-8db1-477c-9365-4f6d0a7d1c48/10/shuffle_0_6_0.data,
>  offset=27063401500, length=64785602}} to /10.93.91.17:18958; closing 
> connection
> java.io.IOException: Broken pipe
> at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
> at 
> sun.nio.ch.FileChannelImpl.transferToDirectlyInternal(FileChannelImpl.java:428)
> at 
> sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:493)
> at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:608)
> at 
> org.apache.spark.network.buffer.LazyFileRegion.transferTo(LazyFileRegion.java:96)
> at 
> org.apache.spark.network.protocol.MessageWithHeader.transferTo(MessageWithHeader.java:92)
> at 
> io.netty.channel.socket.nio.NioSocketChannel.doWriteFileRegion(NioSocketChannel.java:254)
> at 
> io.netty.channel.nio.AbstractNioByteChannel.doWrite(AbstractNioByteChannel.java:237)
> at 
> io.netty.channel.socket.nio.NioSocketChannel.doWrite(NioSocketChannel.java:281)
> at 
> io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:761)
> at 
> io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.forceFlush(AbstractNioChannel.java:317)
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:519)
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
> at 
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
> at java.lang.Thread.run(Thread.java:745)
> 2017-08-28 16:14:20,523 ERROR 
> org.apache.spark.network.server.TransportRequestHandler: Error sending result 
> RpcResponse{requestId=7652091066050104512, 
> body=NioManagedBuffer{buf=java.nio.HeapByteBuffer[pos=0 lim=13 cap=13]}} to 
> /10.93.91.17:18958; closing connection
> {noformat}
> Finally, there are too many *Finalizer* objects in the *NM* process, which 
> causes an OOM.
> !screenshot-1.png!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6756) ContainerRequest#executionTypeRequest causes NPE

2017-08-28 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144342#comment-16144342
 ] 

Arun Suresh commented on YARN-6756:
---

Thanks [~jianhe], straightforward patch. LGTM.
Is it possible to put in a small test case?

> ContainerRequest#executionTypeRequest causes NPE
> 
>
> Key: YARN-6756
> URL: https://issues.apache.org/jira/browse/YARN-6756
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>Priority: Critical
> Attachments: YARN-6756.01.patch
>
>
> ContainerRequest#executionTypeRequest is initialized as null, which could 
> cause the "execTypeReq.getExecutionType" call below to unconditionally throw an NPE.
> {code}
>   ResourceRequestInfo addResourceRequest(Long allocationRequestId,
>   Priority priority, String resourceName, ExecutionTypeRequest 
> execTypeReq,
>   Resource capability, T req, boolean relaxLocality,
>   String labelExpression) {
> ResourceRequestInfo resourceRequestInfo = get(priority, resourceName,
> execTypeReq.getExecutionType(), capability);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7010) Federation: routing REST invocations transparently to multiple RMs (part 2 - getApps)

2017-08-28 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144338#comment-16144338
 ] 

Carlo Curino commented on YARN-7010:


Thanks [~giovanni.fumarola] for the updated version. The patch generally looks 
good, but I have a few more comments:
# Please leave APP_NAME at its current value, as this will also be used in 
non-federated settings.
# Double-check the "@VisibleForTesting" tags, as I think some might be misplaced.
# Is {{routerMetrics.succeededMultipleAppsRetrieved(stopTime - startTime);}} 
supposed to be in the retrieval loop? That measures each retrieval separately. 
Is this what you want, or should it measure latency once for the overall 
{{getApps}} call?
# In {{mergeAppsInfo}} I think you could combine all the loops if you use one or 
two supporting HashMaps... you can merge everything you find on your 
way (a single scan, O(n)) while you are searching for the AM. When you find the 
AM you can do a deeper merge (or a swap) so that all the non-partitioned 
information is reflected correctly in the return value. If at the end you didn't 
find an AM, you can filter the entry out (based on partial or not). This would 
likely make the method faster (you currently do basically 3 full scans of the 
results, and you can bring it down to 1 or 2) and a bit easier to read (see the 
rough sketch after this list).
# In {{MockDefaultRequestInterceptorREST}}, where you do 
{{appInfo.setAMHostHttpAddress("I am the AM");}}, use a fake but correctly 
structured HTTP address, in case someone later adds validation for it, so 
they don't have to fix your code here. 
# Same as above for {{TestRouterWebServiceUtil}}.
# {{testMerge4DifferentApps}} should check that the parameters (at least a few) 
are preserved.
# Consider refactoring {{testMergeAppsFinished}} and {{testMergeAppsRunning}}; 
you have lots of repeated code which could be factored out.
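
A rough, hypothetical sketch of the single-scan idea from point 4 (the {{Report}} 
class and its fields are illustrative stand-ins, not the actual {{AppInfo}} API):
{code}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Hypothetical sketch of a single-scan merge keyed by application id. */
public class SingleScanMergeSketch {

  /** Stand-in for one sub-cluster's view of an application. */
  static class Report {
    final String appId;
    final boolean amView;   // true for the home sub-cluster that runs the AM
    long memorySeconds;     // one example of an additive, partitioned metric

    Report(String appId, boolean amView, long memorySeconds) {
      this.appId = appId;
      this.amView = amView;
      this.memorySeconds = memorySeconds;
    }
  }

  static List<Report> merge(List<Report> reports) {
    Map<String, Report> byId = new HashMap<>();
    for (Report r : reports) {
      Report kept = byId.get(r.appId);
      if (kept == null) {
        byId.put(r.appId, r);                   // first time we see this app
      } else if (r.amView) {
        r.memorySeconds += kept.memorySeconds;  // deeper merge: AM view wins
        byId.put(r.appId, r);                   // swap in the AM-side record
      } else {
        kept.memorySeconds += r.memorySeconds;  // fold the partial record in
      }
    }
    // A final pass could drop entries that never saw an AM view, if desired.
    return new ArrayList<>(byId.values());
  }
}
{code}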


> Federation: routing REST invocations transparently to multiple RMs (part 2 - 
> getApps)
> -
>
> Key: YARN-7010
> URL: https://issues.apache.org/jira/browse/YARN-7010
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-7010.v0.patch, YARN-7010.v1.patch, 
> YARN-7010.v2.patch, YARN-7010.v3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7115) Move BoundedAppender to org.hadoop.yarn.util pacakge

2017-08-28 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7115:
--
Attachment: YARN-7115.02.patch

> Move BoundedAppender to org.hadoop.yarn.util pacakge 
> -
>
> Key: YARN-7115
> URL: https://issues.apache.org/jira/browse/YARN-7115
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-7115.01.patch, YARN-7115.02.patch
>
>
> BoundedAppender is a useful util class which can be present in the util 
> package



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7115) Move BoundedAppender to org.hadoop.yarn.util pacakge

2017-08-28 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144337#comment-16144337
 ] 

Jian He commented on YARN-7115:
---

v2 marks the class as public/unstable 

> Move BoundedAppender to org.hadoop.yarn.util pacakge 
> -
>
> Key: YARN-7115
> URL: https://issues.apache.org/jira/browse/YARN-7115
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-7115.01.patch, YARN-7115.02.patch
>
>
> BoundedAppender is a useful util class which can be present in the util 
> package



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6756) ContainerRequest#executionTypeRequest causes NPE

2017-08-28 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144333#comment-16144333
 ] 

Jian He commented on YARN-6756:
---

[~asuresh], can you help check this ?

> ContainerRequest#executionTypeRequest causes NPE
> 
>
> Key: YARN-6756
> URL: https://issues.apache.org/jira/browse/YARN-6756
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>Priority: Critical
> Attachments: YARN-6756.01.patch
>
>
> ContainerRequest#executionTypeRequest is initialized as null, which could 
> cause the "execTypeReq.getExecutionType" call below to unconditionally throw an NPE.
> {code}
>   ResourceRequestInfo addResourceRequest(Long allocationRequestId,
>   Priority priority, String resourceName, ExecutionTypeRequest 
> execTypeReq,
>   Resource capability, T req, boolean relaxLocality,
>   String labelExpression) {
> ResourceRequestInfo resourceRequestInfo = get(priority, resourceName,
> execTypeReq.getExecutionType(), capability);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5816) TestDelegationTokenRenewer#testCancelWithMultipleAppSubmissions is still flakey

2017-08-28 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144334#comment-16144334
 ] 

Robert Kanter commented on YARN-5816:
-

Test failure is unrelated - YARN-7044.

> TestDelegationTokenRenewer#testCancelWithMultipleAppSubmissions is still 
> flakey
> ---
>
> Key: YARN-5816
> URL: https://issues.apache.org/jira/browse/YARN-5816
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, test
>Reporter: Daniel Templeton
>Assignee: Robert Kanter
>Priority: Minor
> Attachments: 
> org.apache.hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer-output.txt,
>  
> org.apache.hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer.txt,
>  YARN-5816.001.patch
>
>
> Even after YARN-5057, 
> TestDelegationTokenRenewer#testCancelWithMultipleAppSubmissions is still 
> flakey:
> {noformat}
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.796 sec <<< 
> FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer
> testCancelWithMultipleAppSubmissions(org.apache.hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer)
>   Time elapsed: 2.307 sec  <<< FAILURE!
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer.testCancelWithMultipleAppSubmissions(TestDelegationTokenRenewer.java:1260)
> {noformat}
> Note that it's the same error as YARN-5057, but on a different line.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6756) ContainerRequest#executionTypeRequest causes NPE

2017-08-28 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh reassigned YARN-6756:
-

Assignee: Jian He

> ContainerRequest#executionTypeRequest causes NPE
> 
>
> Key: YARN-6756
> URL: https://issues.apache.org/jira/browse/YARN-6756
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>Priority: Critical
> Attachments: YARN-6756.01.patch
>
>
> ContainerRequest#executionTypeRequest is initialized as null, which could 
> cause the "execTypeReq.getExecutionType" call below to unconditionally throw an NPE.
> {code}
>   ResourceRequestInfo addResourceRequest(Long allocationRequestId,
>   Priority priority, String resourceName, ExecutionTypeRequest 
> execTypeReq,
>   Resource capability, T req, boolean relaxLocality,
>   String labelExpression) {
> ResourceRequestInfo resourceRequestInfo = get(priority, resourceName,
> execTypeReq.getExecutionType(), capability);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6756) ContainerRequest#executionTypeRequest causes NPE

2017-08-28 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144327#comment-16144327
 ] 

Jian He commented on YARN-6756:
---

A simple patch to fix the NPE.
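
A minimal sketch of the kind of guard such a fix could add inside 
{{addResourceRequest}} (illustrative only, not the actual contents of the patch):
{code}
// Hypothetical guard: default the execution-type request when the caller
// passed null, so getExecutionType() below can no longer throw an NPE.
if (execTypeReq == null) {
  execTypeReq = ExecutionTypeRequest.newInstance();
}
ResourceRequestInfo resourceRequestInfo = get(priority, resourceName,
    execTypeReq.getExecutionType(), capability);
{code}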

> ContainerRequest#executionTypeRequest causes NPE
> 
>
> Key: YARN-6756
> URL: https://issues.apache.org/jira/browse/YARN-6756
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Priority: Critical
> Attachments: YARN-6756.01.patch
>
>
> ContainerRequest#executionTypeRequest is initialized as null, which could 
> cause the "execTypeReq.getExecutionType" call below to unconditionally throw an NPE.
> {code}
>   ResourceRequestInfo addResourceRequest(Long allocationRequestId,
>   Priority priority, String resourceName, ExecutionTypeRequest 
> execTypeReq,
>   Resource capability, T req, boolean relaxLocality,
>   String labelExpression) {
> ResourceRequestInfo resourceRequestInfo = get(priority, resourceName,
> execTypeReq.getExecutionType(), capability);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6756) ContainerRequest#executionTypeRequest causes NPE

2017-08-28 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6756:
--
Attachment: YARN-6756.01.patch

> ContainerRequest#executionTypeRequest causes NPE
> 
>
> Key: YARN-6756
> URL: https://issues.apache.org/jira/browse/YARN-6756
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Priority: Critical
> Attachments: YARN-6756.01.patch
>
>
> ContainerRequest#executionTypeRequest is initialized as null, which could 
> cause the "execTypeReq.getExecutionType" call below to unconditionally throw an NPE.
> {code}
>   ResourceRequestInfo addResourceRequest(Long allocationRequestId,
>   Priority priority, String resourceName, ExecutionTypeRequest 
> execTypeReq,
>   Resource capability, T req, boolean relaxLocality,
>   String labelExpression) {
> ResourceRequestInfo resourceRequestInfo = get(priority, resourceName,
> execTypeReq.getExecutionType(), capability);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7083) Log aggregation deletes/renames while file is open

2017-08-28 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144317#comment-16144317
 ] 

Junping Du edited comment on YARN-7083 at 8/28/17 8:48 PM:
---

Trunk code has changed significantly since YARN-6876, so the patch here doesn't 
apply any more. From my quick look, the issue (not closing the file before the 
rename/delete) is still there, but the fix is not straightforward because the 
writer is hidden behind the different formats. I would suggest committing the 
patch here to branch-2.8 and branch-2.8.2 only and creating a separate JIRA to 
track trunk/branch-2. 
[~jlowe], what do you think?


was (Author: djp):
Trunk code has changed significantly since YARN-6877, so the patch here doesn't 
apply any more. From my quick look, the issue (not closing the file before the 
rename/delete) is still there, but the fix is not straightforward because the 
writer is hidden behind the different formats. I would suggest committing the 
patch here to branch-2.8 and branch-2.8.2 only and creating a separate JIRA to 
track trunk/branch-2. 
[~jlowe], what do you think?

> Log aggregation deletes/renames while file is open
> --
>
> Key: YARN-7083
> URL: https://issues.apache.org/jira/browse/YARN-7083
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.2
>Reporter: Daryn Sharp
>Assignee: Jason Lowe
>Priority: Critical
> Attachments: YARN-7083.001.patch
>
>
> YARN-6288 changes the log aggregation writer to be an autoclosable.  
> Unfortunately the try-with-resources block for the writer will either rename 
> or delete the log while open.
> Assuming the NM's behavior is correct, deleting open files only results in 
> ominous WARNs in the nodemanager log and increases the rate of logging in the 
> NN when the implicit try-with-resource close fails.  These red herrings 
> complicate debugging efforts.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7115) Move BoundedAppender to org.hadoop.yarn.util pacakge

2017-08-28 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7115:
--
Attachment: YARN-7115.01.patch

> Move BoundedAppender to org.hadoop.yarn.util pacakge 
> -
>
> Key: YARN-7115
> URL: https://issues.apache.org/jira/browse/YARN-7115
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-7115.01.patch
>
>
> BoundedAppender is a useful util class which can be present in the util 
> package



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7083) Log aggregation deletes/renames while file is open

2017-08-28 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144317#comment-16144317
 ] 

Junping Du commented on YARN-7083:
--

Trunk code has changed significantly since YARN-6877, so the patch here doesn't 
apply any more. From my quick look, the issue (not closing the file before the 
rename/delete) is still there, but the fix is not straightforward because the 
writer is hidden behind the different formats. I would suggest committing the 
patch here to branch-2.8 and branch-2.8.2 only and creating a separate JIRA to 
track trunk/branch-2. 
[~jlowe], what do you think?
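
For illustration, the general pattern at issue, reduced to plain java.nio (a 
hypothetical stand-alone example, not the actual log aggregation code): the 
writer opened by the try-with-resources block has to be closed before the file 
is renamed or deleted, otherwise the implicit close afterwards operates on a 
path that is already gone.
{code}
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class CloseBeforeRenameSketch {
  public static void main(String[] args) throws IOException {
    Path tmp = Paths.get("app.log.tmp");
    Path done = Paths.get("app.log");

    // Correct ordering: let try-with-resources close the writer first...
    try (BufferedWriter writer =
        Files.newBufferedWriter(tmp, StandardCharsets.UTF_8)) {
      writer.write("aggregated container logs\n");
    } // ...the writer is closed here...
    // ...and only then rename (or delete) the file, outside the block.
    Files.move(tmp, done, StandardCopyOption.REPLACE_EXISTING);
  }
}
{code}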

> Log aggregation deletes/renames while file is open
> --
>
> Key: YARN-7083
> URL: https://issues.apache.org/jira/browse/YARN-7083
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.2
>Reporter: Daryn Sharp
>Assignee: Jason Lowe
>Priority: Critical
> Attachments: YARN-7083.001.patch
>
>
> YARN-6288 changes the log aggregation writer to be an autoclosable.  
> Unfortunately the try-with-resources block for the writer will either rename 
> or delete the log while open.
> Assuming the NM's behavior is correct, deleting open files only results in 
> ominous WARNs in the nodemanager log and increases the rate of logging in the 
> NN when the implicit try-with-resource close fails.  These red herrings 
> complicate debugging efforts.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7115) Move BoundedAppender to org.hadoop.yarn.util pacakge

2017-08-28 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144316#comment-16144316
 ] 

Jian He commented on YARN-7115:
---

[~templedf], can you help check ?

> Move BoundedAppender to org.hadoop.yarn.util pacakge 
> -
>
> Key: YARN-7115
> URL: https://issues.apache.org/jira/browse/YARN-7115
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-7115.01.patch
>
>
> BoundedAppender is a useful util class which can be present in the util 
> package



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7115) Move BoundedAppender to org.hadoop.yarn.util pacakge

2017-08-28 Thread Jian He (JIRA)
Jian He created YARN-7115:
-

 Summary: Move BoundedAppender to org.hadoop.yarn.util pacakge 
 Key: YARN-7115
 URL: https://issues.apache.org/jira/browse/YARN-7115
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jian He
Assignee: Jian He


BoundedAppender is a useful util class which can be present in the util package



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5816) TestDelegationTokenRenewer#testCancelWithMultipleAppSubmissions is still flakey

2017-08-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144303#comment-16144303
 ] 

Hadoop QA commented on YARN-5816:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m 43s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-5816 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884101/YARN-5816.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f8f43994dc79 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 51881a8 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17156/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17156/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17156/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestDelegationTokenRenewer#testCancelWithMultipleAppSubmissions is still 
> flakey
> ---
>
> Key: YARN-5816
> URL: 

[jira] [Updated] (YARN-7112) TestAMRMProxy is failing with invalid request

2017-08-28 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-7112:
-
Attachment: YARN-7112-branch-2.8.001.patch

Thanks for the review and commit, Wangda!  Here's the patch for branch-2.8.


> TestAMRMProxy is failing with invalid request
> -
>
> Key: YARN-7112
> URL: https://issues.apache.org/jira/browse/YARN-7112
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-beta1, 2.8.2
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-7112.001.patch, YARN-7112-branch-2.8.001.patch
>
>
> The testAMRMProxyE2E and testAMRMProxyTokenRenewal tests in TestAMRMProxy are 
> failing:
> {noformat}
> org.apache.hadoop.yarn.exceptions.InvalidApplicationMasterRequestException: 
> Invalid responseId in AllocateRequest from application attempt: 
> appattempt_1503933047334_0001_01, expect responseId to be 0, but get 1
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6161) YARN support for port allocation

2017-08-28 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144292#comment-16144292
 ] 

Eric Yang commented on YARN-6161:
-

The current design supports host network, bridge network and fixed-cidr.  The 
possible combinations of port allocations are:

| Network Type | Port allocation method | YARN resource tracking |
| host | Random port | No action, OS handles port allocation |
| host | Fixed port | YARN tracks port assignments |
| bridge | Random port | No action, OS handles port allocation |
| bridge | Fixed port | YARN maps and tracks port assignments |
| cidr | Random port | No action, OS handles port allocation |
| cidr | Fixed port | YARN tracks port assignments |

Here "bridge" means the container has a private network address but its ports are 
exposed to the outside world on the host network, and "cidr" means the container 
is issued an IP from the same subnet as the host network.

Bridge with a random port is likely an unsupported configuration: because the 
application binds the port late, Docker never becomes aware that the port exists. 
This feature requires tracking port usage by containers and mapping port 
redirection for the bridged network.

> YARN support for port allocation
> 
>
> Key: YARN-6161
> URL: https://issues.apache.org/jira/browse/YARN-6161
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
> Fix For: yarn-native-services
>
>
> Since there is no agent code in YARN native services, we need another 
> mechanism for allocating ports to containers. This is not necessary when 
> running Docker containers, but it will become important when an agent-less 
> docker-less provider is introduced.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7066) Add ability to specify volumes to mount for DockerContainerRuntime

2017-08-28 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144290#comment-16144290
 ] 

Shane Kumpf commented on YARN-7066:
---

[~eyang] thanks for the patch. 

This seems to duplicate what we plan to accomplish with YARN-5534. Would you 
agree?

There is also ongoing work in YARN-6623 that will change the way the docker 
commands and the mount whitelists are defined, so I'm hesitant to introduce 
mount-related changes until that is in. 

> Add ability to specify volumes to mount for DockerContainerRuntime
> --
>
> Key: YARN-7066
> URL: https://issues.apache.org/jira/browse/YARN-7066
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.0.0-beta1
>Reporter: Eric Yang
> Attachments: YARN-7066.001.patch
>
>
> Yarnfile describes the environment, docker image, and configuration template for 
> launching docker containers in YARN.  It would be nice to have the ability to 
> specify the volumes to mount.  This can be used in combination with 
> AMBARI-21748 to mount HDFS as data directories into docker containers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7034) DefaultLinuxContainerRuntime and DockerLinuxContainerRuntime sends client environment variables to container-executor

2017-08-28 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144277#comment-16144277
 ] 

Robert Kanter commented on YARN-7034:
-

We generally trust the yarn user, and the cluster admin who set up YARN, so I 
think it should be okay to add the {{yarn.nodemanager.admin-env}} list in.  The 
end user has no control over that config.  If we don't allow the admin-env, we 
may make things difficult for some setups where there is a legit reason to set an 
env var; otherwise, the environment is always empty and there is no way to change 
that. 
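
For reference, {{yarn.nodemanager.admin-env}} is a comma-separated list of 
KEY=VALUE pairs in yarn-site.xml; the value shown below is the stock default 
from yarn-default.xml, included here just as an illustration:
{noformat}
<property>
  <name>yarn.nodemanager.admin-env</name>
  <value>MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX</value>
</property>
{noformat}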
 

> DefaultLinuxContainerRuntime and DockerLinuxContainerRuntime sends client 
> environment variables to container-executor
> -
>
> Key: YARN-7034
> URL: https://issues.apache.org/jira/browse/YARN-7034
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Critical
> Attachments: YARN-7034.000.patch, YARN-7034.001.patch
>
>
> This behavior is unnecessary since nothing from the environment is used right 
> now. One option is to whitelist these variables before passing them. Are there 
> any known use cases that would justify this?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7088) Fix application start time and add submit time to UIs

2017-08-28 Thread Abdullah Yousufi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdullah Yousufi updated YARN-7088:
---
Attachment: YARN-7088.003.patch

> Fix application start time and add submit time to UIs
> -
>
> Key: YARN-7088
> URL: https://issues.apache.org/jira/browse/YARN-7088
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
> Attachments: YARN-7088.001.patch, YARN-7088.002.patch, 
> YARN-7088.003.patch
>
>
> Currently, the start time in the old and new UI actually shows the app 
> submission time. There should actually be two different fields; one for the 
> app's submission and one for its start, as well as the elapsed pending time 
> between the two.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7066) Add ability to specify volumes to mount for DockerContainerRuntime

2017-08-28 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144274#comment-16144274
 ] 

Eric Badger commented on YARN-7066:
---

Is this a dup of YARN-6919? If it is, I'm fine closing that JIRA and keeping 
this one, since there are comments here

> Add ability to specify volumes to mount for DockerContainerRuntime
> --
>
> Key: YARN-7066
> URL: https://issues.apache.org/jira/browse/YARN-7066
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.0.0-beta1
>Reporter: Eric Yang
> Attachments: YARN-7066.001.patch
>
>
> Yarnfile describes the environment, docker image, and configuration template for 
> launching docker containers in YARN.  It would be nice to have the ability to 
> specify the volumes to mount.  This can be used in combination with 
> AMBARI-21748 to mount HDFS as data directories into docker containers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7088) Fix application start time and add submit time to UIs

2017-08-28 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144270#comment-16144270
 ] 

Daniel Templeton commented on YARN-7088:


I know you're working on getting the patch working still, but here are a couple 
of initial comments:

# The {{ApplicationReport.getLaunchTime()}} javadocs should be verbose about 
what a launch time is compared with what a start time is.  It should probably 
also explain that the current state of affairs is because of compatibility of 
the web services endpoints.
# In the protos you can't change previously assigned numbers.  Put the launch 
time at the end.
# I see both "launch" time and "launched" time.  You should probably pick one.

> Fix application start time and add submit time to UIs
> -
>
> Key: YARN-7088
> URL: https://issues.apache.org/jira/browse/YARN-7088
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
> Attachments: YARN-7088.001.patch, YARN-7088.002.patch
>
>
> Currently, the start time in the old and new UI actually shows the app 
> submission time. There should actually be two different fields; one for the 
> app's submission and one for its start, as well as the elapsed pending time 
> between the two.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7076) yarn application -list -appTypes is not working

2017-08-28 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144268#comment-16144268
 ] 

Junping Du commented on YARN-7076:
--

Thanks [~jianhe] for delivering a quick fix. The patch looks generally OK to me. 
A minor issue: we had better add a UT to verify that ClientRMService is now 
case-insensitive. A simple way is to add something similar to the code below to 
TestClientRMService.java, but using an all upper-case and/or all lower-case 
variant of "matchType".

{noformat}
Set appTypes = new HashSet();
appTypes.add("matchType");

getAllAppsRequest = GetApplicationsRequest.newInstance(appTypes);
getAllApplicationsResponse =
    rmService.getApplications(getAllAppsRequest);
Assert.assertEquals(1,
    getAllApplicationsResponse.getApplicationList().size());
{noformat}
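
For instance, a hypothetical variant of the snippet above (same 
TestClientRMService setup, assuming an app was submitted with appType 
"matchType") that exercises the case-insensitive path:
{code}
// Query with a differently-cased type; once matching is case-insensitive the
// same single application should come back.
Set<String> upperCaseTypes = new HashSet<String>();
upperCaseTypes.add("MATCHTYPE");

GetApplicationsRequest upperCaseRequest =
    GetApplicationsRequest.newInstance(upperCaseTypes);
Assert.assertEquals(1,
    rmService.getApplications(upperCaseRequest).getApplicationList().size());
{code}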

> yarn application -list -appTypes  is not working
> -
>
> Key: YARN-7076
> URL: https://issues.apache.org/jira/browse/YARN-7076
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>Priority: Blocker
> Attachments: YARN-7076.01.patch
>
>
> yarn application -list -appTypes  is not working
> Looks like it's because the ApplicationCLI passes in the appType as uppercase, 
> but ClientRMService#getApplications is case-sensitive, so if a user submits an 
> app with a lowercase appType, it won't work.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7076) yarn application -list -appTypes is not working

2017-08-28 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du reassigned YARN-7076:


Assignee: Jian He

> yarn application -list -appTypes  is not working
> -
>
> Key: YARN-7076
> URL: https://issues.apache.org/jira/browse/YARN-7076
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>Priority: Blocker
> Attachments: YARN-7076.01.patch
>
>
> yarn application -list -appTypes  is not working
> Looks like it's because the ApplicationCLI passes in the appType as uppercase, 
> but ClientRMService#getApplications is case-sensitive, so if a user submits an 
> app with a lowercase appType, it won't work.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7076) yarn application -list -appTypes is not working

2017-08-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144261#comment-16144261
 ] 

Hadoop QA commented on YARN-7076:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 343 unchanged - 4 fixed = 343 total (was 347) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 20s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands |
|   | 
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA |
|   | org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7076 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884093/YARN-7076.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 15a3506d879e 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 51881a8 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17155/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17155/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
