[jira] [Commented] (YARN-7275) NM Statestore cleanup for Container updates

2017-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16201473#comment-16201473
 ] 

Hadoop QA commented on YARN-7275:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 10 new + 288 unchanged - 0 fixed = 298 total (was 288) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
0s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
19s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 2 new + 103 unchanged - 0 fixed = 105 total (was 103) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 45s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
|  |  new org.apache.hadoop.yarn.exceptions.YarnException(String) not thrown in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.scheduler.ContainerScheduler.recoverActiveContainer(Container,
 NMStateStoreService$RecoveredContainerStatus)  At 
ContainerScheduler.java:org.apache.hadoop.yarn.server.nodemanager.containermanager.scheduler.ContainerScheduler.recoverActiveContainer(Container,
 NMStateStoreService$RecoveredContainerStatus)  At 
ContainerScheduler.java:[line 243] |
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.scheduler.TestContainerSchedulerQueuing
 |
|   | hadoop.yarn.server.nodemanager.scheduler.TestDistributedScheduler |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:3d04c00 |
| JIRA Issue | YARN-7275 |
| JIRA Patch URL | 
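
The FindBugs warning above flags an exception that is constructed but never 
thrown. A minimal sketch of the pattern and the obvious fix, assuming 
illustrative logic (the actual recoverActiveContainer code at line 243 may 
differ):

{code}
import org.apache.hadoop.yarn.exceptions.YarnException;

class RecoverySketch {
  // Flagged pattern: "new YarnException(...);" as a bare statement is a
  // no-op, because the created exception is silently discarded.
  void recoverActiveContainer(boolean recognizedStatus) throws YarnException {
    if (!recognizedStatus) {
      // Fix: actually throw the exception instead of only constructing it.
      throw new YarnException("Unexpected recovered container status");
    }
  }
}
{code}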

[jira] [Commented] (YARN-7198) Add jsvc support for RegistryDNS

2017-10-11 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16201458#comment-16201458
 ] 

Jian He commented on YARN-7198:
---

Thanks, I'll check these. 

> Add jsvc support for RegistryDNS
> 
>
> Key: YARN-7198
> URL: https://issues.apache.org/jira/browse/YARN-7198
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Critical
> Attachments: YARN-7198-yarn-native-services.01.patch, 
> YARN-7198-yarn-native-services.02.patch, 
> YARN-7198-yarn-native-services.03.patch, 
> YARN-7198-yarn-native-services.04.patch, 
> YARN-7198-yarn-native-services.05.patch
>
>
> RegistryDNS should have jsvc support and be managed through the shell 
> scripts, rather than being started manually. See original comments on 
> YARN-7191.






[jira] [Commented] (YARN-7202) Add UT for api-server

2017-10-11 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16201456#comment-16201456
 ] 

Jian He commented on YARN-7202:
---

lgtm overall, minor comments:
How about renaming setApiServer to setServiceClient?
For the pom.xml changes, I think some of those won't be required given that the 
test has been moved away?



> Add UT for api-server
> -
>
> Key: YARN-7202
> URL: https://issues.apache.org/jira/browse/YARN-7202
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Eric Yang
> Attachments: YARN-7202.yarn-native-services.001.patch, 
> YARN-7202.yarn-native-services.002.patch, 
> YARN-7202.yarn-native-services.003.patch, 
> YARN-7202.yarn-native-services.004.patch, 
> YARN-7202.yarn-native-services.005.patch, 
> YARN-7202.yarn-native-services.006.patch, 
> YARN-7202.yarn-native-services.007.patch, 
> YARN-7202.yarn-native-services.008.patch, 
> YARN-7202.yarn-native-services.011.patch, 
> YARN-7202.yarn-native-services.012.patch, 
> YARN-7202.yarn-native-services.013.patch
>
>







[jira] [Comment Edited] (YARN-7202) Add UT for api-server

2017-10-11 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16201456#comment-16201456
 ] 

Jian He edited comment on YARN-7202 at 10/12/17 5:18 AM:
-

lgtm overall, minor comments:
How about renaming ApiServer#setApiServer to setServiceClient?
For the pom.xml changes, I think some of those won't be required given that the 
test has been moved away?




was (Author: jianhe):
lgtm overall, minor comments:
How about renaming setApiServer to setServiceClient?
For the pom.xml changes, I think some of those won't be required given that the 
test has been moved away?



> Add UT for api-server
> -
>
> Key: YARN-7202
> URL: https://issues.apache.org/jira/browse/YARN-7202
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Eric Yang
> Attachments: YARN-7202.yarn-native-services.001.patch, 
> YARN-7202.yarn-native-services.002.patch, 
> YARN-7202.yarn-native-services.003.patch, 
> YARN-7202.yarn-native-services.004.patch, 
> YARN-7202.yarn-native-services.005.patch, 
> YARN-7202.yarn-native-services.006.patch, 
> YARN-7202.yarn-native-services.007.patch, 
> YARN-7202.yarn-native-services.008.patch, 
> YARN-7202.yarn-native-services.011.patch, 
> YARN-7202.yarn-native-services.012.patch, 
> YARN-7202.yarn-native-services.013.patch
>
>







[jira] [Commented] (YARN-7198) Add jsvc support for RegistryDNS

2017-10-11 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16201450#comment-16201450
 ] 

Allen Wittenauer commented on YARN-7198:


I'm still fighting to get this running, but a few things already:

a) please link "YARN Registry" in the beginning of the document to the YARN 
registry documentation.
b) let's fix the YARN registry documentation to explicitly say that a separate 
zookeeper instance is required.  (or, if it's not, then something is missing in 
the docs there)
c) the zk quorum info in the registrydns docs contradicts what is in the YARN 
registry documentation. This clearly needs to be rectified.

I'll play with this more tomorrow, since my calendar cleared up.

> Add jsvc support for RegistryDNS
> 
>
> Key: YARN-7198
> URL: https://issues.apache.org/jira/browse/YARN-7198
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Critical
> Attachments: YARN-7198-yarn-native-services.01.patch, 
> YARN-7198-yarn-native-services.02.patch, 
> YARN-7198-yarn-native-services.03.patch, 
> YARN-7198-yarn-native-services.04.patch, 
> YARN-7198-yarn-native-services.05.patch
>
>
> RegistryDNS should have jsvc support and be managed through the shell 
> scripts, rather than being started manually. See original comments on 
> YARN-7191.






[jira] [Updated] (YARN-7170) Investigate bower dependencies for YARN UI v2

2017-10-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated YARN-7170:
---
Attachment: YARN-7170.002.patch

-02:
* upgrade frontend-maven-plugin
* also move its version definition to the proper location in the maven repo

> Investigate bower dependencies for YARN UI v2
> -
>
> Key: YARN-7170
> URL: https://issues.apache.org/jira/browse/YARN-7170
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Critical
> Attachments: YARN-7170.001.patch, YARN-7170.002.patch
>
>
> [INFO] bower ember#2.2.0   progress Receiving
> objects:  50% (38449/75444), 722.46 MiB | 3.30 MiB/s
> ...
> [INFO] bower ember#2.2.0   progress Receiving
> objects:  99% (75017/75444), 1.56 GiB | 3.31 MiB/s
> Investigate the dependencies to reduce the download size and improve 
> compilation speed.
> cc/ [~Sreenath] and [~akhilpb]






[jira] [Commented] (YARN-6492) Generate queue metrics for each partition

2017-10-11 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16201432#comment-16201432
 ] 

Manikandan R commented on YARN-6492:


Sorry for the delay. I had offline discussions with [~Naganarasimha] and 
[~sunilg] regarding the structure and a POC patch to deliver it. The POC has 
taken shape, and I have attached the JMX output for further discussion. The 
attached output has 1. PartitionQueueMetrics and 2. QueueMetrics.

1. For the POC, we had two partitions, x and y, so the output has 
PartitionQueueMetrics for both partitions. Under each PartitionQueueMetrics, 
QueueMetrics for each queue and UserMetrics for each user are available.

2. We have retained the existing QueueMetrics for backward compatibility, 
which is also captured in the output for better understanding.

[~jlowe] [~jhung] [~sunilg] Thoughts?
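
A minimal sketch of the hierarchy described above, assuming hypothetical class 
and field names (the actual POC patch may structure this differently): one 
metrics object per (partition, queue) pair, with per-user metrics nested 
underneath.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class PartitionQueueMetricsSketch {
  // partition -> queue -> per-queue metrics; all names are illustrative.
  private final Map<String, Map<String, QueueMetricsSketch>> byPartition =
      new ConcurrentHashMap<>();

  QueueMetricsSketch forQueue(String partition, String queue) {
    return byPartition
        .computeIfAbsent(partition, p -> new ConcurrentHashMap<>())
        .computeIfAbsent(queue, q -> new QueueMetricsSketch());
  }
}

class QueueMetricsSketch {
  // per-user metrics nested under each queue, mirroring the attached JMX output
  final Map<String, Long> allocatedContainersByUser = new ConcurrentHashMap<>();
}
{code}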

> Generate queue metrics for each partition
> -
>
> Key: YARN-6492
> URL: https://issues.apache.org/jira/browse/YARN-6492
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler
>Reporter: Jonathan Hung
>Assignee: Manikandan R
> Attachments: YARN-6492.001.patch, partition_metrics.txt
>
>
> We are interested in having queue metrics for all partitions. Right now each 
> queue has one QueueMetrics object, which captures metrics either in the 
> default partition or across all partitions. (After YARN-6467 it will be in 
> the default partition.)
> But having per-partition metrics would be very useful.






[jira] [Updated] (YARN-6492) Generate queue metrics for each partition

2017-10-11 Thread Manikandan R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikandan R updated YARN-6492:
---
Attachment: partition_metrics.txt

> Generate queue metrics for each partition
> -
>
> Key: YARN-6492
> URL: https://issues.apache.org/jira/browse/YARN-6492
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler
>Reporter: Jonathan Hung
>Assignee: Manikandan R
> Attachments: YARN-6492.001.patch, partition_metrics.txt
>
>
> We are interested in having queue metrics for all partitions. Right now each 
> queue has one QueueMetrics object, which captures metrics either in the 
> default partition or across all partitions. (After YARN-6467 it will be in 
> the default partition.)
> But having per-partition metrics would be very useful.






[jira] [Commented] (YARN-6608) Backport all SLS improvements from trunk to branch-2

2017-10-11 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16201437#comment-16201437
 ] 

Wangda Tan commented on YARN-6608:
--

[~curino], my pleasure.

1) For test case:

Most test cases failed with the error:
{code}
org/eclipse/jetty/server/Handler : Unsupported major.minor version 52.0
{code}

This is caused by the jetty 9.3+ dependencies added to SLS 
(https://dev.eclipse.org/mhonarc/lists/jetty-announce/msg00080.html), which 
only support JDK 8, while branch-2 must support both JDK 7 and JDK 8. (Class 
file version 52.0 corresponds to Java 8, so jetty 9.3+ classes cannot be 
loaded on a JDK 7 runtime.)

I reverted this part of the change, and after that all test cases pass in my 
local environment except:

TestDebugOverflowUserlimit

This test looks problematic; it failed with the error:
{code}
...
java.io.FileNotFoundException: File src/test/resources/overflow.json does not 
exist
{code}
{{TestDebugOverflowUserlimit}} does not exist in trunk; was this file added by 
mistake? I removed the file from the patch. [~curino], any suggestions 
here?

2) For javac warning:

Hopefully this patch fixes the problem.

3) For shellcheck: 

The warning exists in trunk as well. Filed YARN-7318 to track the problem 
separately.

In general, the unit tests are not a problem since I can run them locally 
without any issue. If javac turns green, I suggest committing this patch and 
fixing the shellcheck warnings in the separate JIRA.

Attached ver.6 patch. 

> Backport all SLS improvements from trunk to branch-2
> 
>
> Key: YARN-6608
> URL: https://issues.apache.org/jira/browse/YARN-6608
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.0
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-6608-branch-2.v0.patch, 
> YARN-6608-branch-2.v1.patch, YARN-6608-branch-2.v2.patch, 
> YARN-6608-branch-2.v3.patch, YARN-6608-branch-2.v4.patch, 
> YARN-6608-branch-2.v5.patch, YARN-6608-branch-2.v6.patch
>
>
> The SLS has received lots of attention in trunk, but only some of it made it 
> back to branch-2. This patch is a "raw" fork-lift of the trunk development 
> from hadoop-tools/hadoop-sls.






[jira] [Updated] (YARN-6608) Backport all SLS improvements from trunk to branch-2

2017-10-11 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6608:
-
Attachment: YARN-6608-branch-2.v6.patch

> Backport all SLS improvements from trunk to branch-2
> 
>
> Key: YARN-6608
> URL: https://issues.apache.org/jira/browse/YARN-6608
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.0
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-6608-branch-2.v0.patch, 
> YARN-6608-branch-2.v1.patch, YARN-6608-branch-2.v2.patch, 
> YARN-6608-branch-2.v3.patch, YARN-6608-branch-2.v4.patch, 
> YARN-6608-branch-2.v5.patch, YARN-6608-branch-2.v6.patch
>
>
> The SLS has received lots of attention in trunk, but only some of it made it 
> back to branch-2. This patch is a "raw" fork-lift of the trunk development 
> from hadoop-tools/hadoop-sls.






[jira] [Updated] (YARN-7275) NM Statestore cleanup for Container updates

2017-10-11 Thread kartheek muthyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kartheek muthyala updated YARN-7275:

Attachment: YARN-7275.003.patch

Sorry [~asuresh], I couldn't reply to you earlier on this. Thank you for the 
quick review. 
I have updated the patch to address your comments. Let me know if this version 
is okay. 

> NM Statestore cleanup for Container updates
> ---
>
> Key: YARN-7275
> URL: https://issues.apache.org/jira/browse/YARN-7275
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: kartheek muthyala
>Priority: Blocker
> Attachments: YARN-7275.001.patch, YARN-7275.002.patch, 
> YARN-7275.003.patch
>
>
> Currently, only resource updates are recorded in the NM state store; we need 
> to add ExecutionType updates as well.
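
A minimal sketch of what persisting such an update could look like, assuming 
an illustrative LevelDB key layout (the key names and helper below are 
hypothetical, not the actual NMLeveldbStateStoreService schema or the 
YARN-7275 patch):

{code}
import java.nio.charset.StandardCharsets;
import org.iq80.leveldb.DB;

class ContainerUpdateStoreSketch {
  private final DB db;

  ContainerUpdateStoreSketch(DB db) {
    this.db = db;
  }

  // Persist both halves of a container update so that recovery after an NM
  // restart can restore the ExecutionType along with the resource.
  void storeContainerUpdate(String containerId, String executionType,
      long memoryMB, int vcores) {
    put("containers/" + containerId + "/executionType", executionType);
    put("containers/" + containerId + "/resource", memoryMB + ":" + vcores);
  }

  private void put(String key, String value) {
    db.put(key.getBytes(StandardCharsets.UTF_8),
        value.getBytes(StandardCharsets.UTF_8));
  }
}
{code}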






[jira] [Commented] (YARN-7317) Fix overallocation resulted from ceiling in LocalityMulticastAMRMProxyPolicy

2017-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16201414#comment-16201414
 ] 

Hadoop QA commented on YARN-7317:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 57s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
59s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:3d04c00 |
| JIRA Issue | YARN-7317 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12891618/YARN-7317.v4.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3444723e0989 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 075358e |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17878/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17878/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix overallocation resulted from ceiling in LocalityMulticastAMRMProxyPolicy
> 
>
> Key: YARN-7317
> 

[jira] [Created] (YARN-7318) Fix shell check warnings of SLS.

2017-10-11 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-7318:


 Summary: Fix shell check warnings of SLS.
 Key: YARN-7318
 URL: https://issues.apache.org/jira/browse/YARN-7318
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Wangda Tan


Warnings like: 
{code}
hadoop-tools/hadoop-sls/src/main/bin/rumen2sls.sh:75:77: warning: args is 
referenced but not assigned. [SC2154]
hadoop-tools/hadoop-sls/src/main/bin/slsrun.sh:113:61: warning: args is 
referenced but not assigned. [SC2154]
{code}






[jira] [Updated] (YARN-7317) Fix overallocation resulted from ceiling in LocalityMulticastAMRMProxyPolicy

2017-10-11 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-7317:
---
Attachment: YARN-7317.v4.patch

> Fix overallocation resulted from ceiling in LocalityMulticastAMRMProxyPolicy
> 
>
> Key: YARN-7317
> URL: https://issues.apache.org/jira/browse/YARN-7317
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-7317.v1.patch, YARN-7317.v2.patch, 
> YARN-7317.v3.patch, YARN-7317.v4.patch
>
>
> When LocalityMulticastAMRMProxyPolicy splits up the ANY requests across 
> different subclusters, we are doing Ceil(N * weight), leading to overall 
> overallocation. It is better to do Floor(N * weight) for each subcluster and 
> then assign the residue randomly according to the weights, so that the total 
> number of containers we ask for across all subclusters sums to exactly N.
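
A minimal sketch of the floor-plus-residue scheme described above 
(illustrative only, not the actual LocalityMulticastAMRMProxyPolicy 
implementation; names are hypothetical):

{code}
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Random;

class AnyRequestSplitSketch {
  static Map<String, Integer> split(int n, Map<String, Double> weights,
      Random rng) {
    Map<String, Integer> shares = new LinkedHashMap<>();
    int assigned = 0;
    for (Map.Entry<String, Double> e : weights.entrySet()) {
      // Floor instead of ceiling, so the sum never exceeds n.
      int base = (int) Math.floor(n * e.getValue());
      shares.put(e.getKey(), base);
      assigned += base;
    }
    double totalWeight =
        weights.values().stream().mapToDouble(Double::doubleValue).sum();
    for (int i = assigned; i < n; i++) {
      // Hand out each leftover container by a weighted random draw.
      double draw = rng.nextDouble() * totalWeight;
      String chosen = null;
      for (Map.Entry<String, Double> e : weights.entrySet()) {
        draw -= e.getValue();
        if (draw <= 0) {
          chosen = e.getKey();
          break;
        }
      }
      if (chosen == null) { // floating-point edge case
        chosen = weights.keySet().iterator().next();
      }
      shares.merge(chosen, 1, Integer::sum);
    }
    return shares;
  }
}
{code}

For example, with N = 10 and weights {A: 0.45, B: 0.35, C: 0.2}, the floors 
give 4 + 3 + 2 = 9 containers and the single leftover container is assigned by 
one weighted draw, so the total is exactly 10; ceiling would have asked for 
5 + 4 + 2 = 11.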






[jira] [Commented] (YARN-7244) ShuffleHandler is not aware of disks that are added

2017-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16201392#comment-16201392
 ] 

Hadoop QA commented on YARN-7244:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 12s{color} | {color:orange} root: The patch generated 16 new + 325 unchanged 
- 1 fixed = 341 total (was 326) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 46s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 35s{color} 
| {color:red} hadoop-mapreduce-client-shuffle in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.scheduler.TestDistributedScheduler |
|   | hadoop.mapred.TestShuffleHandler |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:3d04c00 |
| JIRA Issue | YARN-7244 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12891601/YARN-7244.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0188628fbd85 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (YARN-7317) Fix overallocation resulted from ceiling in LocalityMulticastAMRMProxyPolicy

2017-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16201362#comment-16201362
 ] 

Hadoop QA commented on YARN-7317:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common: 
The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 18s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
4s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:3d04c00 |
| JIRA Issue | YARN-7317 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12891609/YARN-7317.v3.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 03528431db34 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 075358e |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/17877/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17877/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17877/console 

[jira] [Updated] (YARN-6523) Newly retrieved security Tokens are sent as part of each heartbeat to each node from RM which is not desirable in large cluster

2017-10-11 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-6523:

Summary: Newly retrieved security Tokens are sent as part of each heartbeat 
to each node from RM which is not desirable in large cluster  (was: RM requires 
large memory in sending out security tokens as part of Node Heartbeat in large 
cluster)

> Newly retrieved security Tokens are sent as part of each heartbeat to each 
> node from RM which is not desirable in large cluster
> ---
>
> Key: YARN-6523
> URL: https://issues.apache.org/jira/browse/YARN-6523
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Naganarasimha G R
>Assignee: Manikandan R
>
> Currently, as part of the heartbeat response, the RM sends every 
> application's tokens even though not all applications are active on the 
> node. On top of that, NodeHeartbeatResponsePBImpl converts the tokens for 
> each app into a SystemCredentialsForAppsProto, so for each node and each 
> heartbeat too many SystemCredentialsForAppsProto objects get created.
> We hit an OOM while testing 2000 concurrent apps on a 500-node cluster with 
> 8GB of RAM configured for the RM.
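
A minimal sketch of the direction the new summary points at (all names 
hypothetical; this is not the actual RM code or the eventual fix): track a 
sequence number per batch of newly retrieved tokens and send each node only 
the entries it has not yet seen, instead of every application's tokens on 
every heartbeat.

{code}
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

class CredentialDeltaSketch {
  // sequence number -> (appId -> serialized tokens) for that retrieval batch
  private final TreeMap<Long, Map<String, byte[]>> tokensBySeq = new TreeMap<>();

  // Called whenever the RM retrieves fresh tokens for some applications.
  synchronized void recordNewTokens(long seq, Map<String, byte[]> appTokens) {
    tokensBySeq.put(seq, appTokens);
  }

  // Heartbeat path: return only tokens newer than what this node acknowledged.
  synchronized Map<String, byte[]> tokensNewerThan(long lastSeenSeq) {
    Map<String, byte[]> delta = new HashMap<>();
    tokensBySeq.tailMap(lastSeenSeq, false).values().forEach(delta::putAll);
    return delta;
  }
}
{code}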






[jira] [Comment Edited] (YARN-6856) Support CLI for Node Attributes Mapping

2017-10-11 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16201354#comment-16201354
 ] 

Naganarasimha G R edited comment on YARN-6856 at 10/12/17 2:30 AM:
---

[~sunilg], if you have cycles, can you please take a look at the latest patch?


was (Author: naganarasimha):
[~sunilg], I

> Support CLI for Node Attributes Mapping
> ---
>
> Key: YARN-6856
> URL: https://issues.apache.org/jira/browse/YARN-6856
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: YARN-6856-YARN-3409.001.patch, 
> YARN-6856-YARN-3409.002.patch, YARN-6856-yarn-3409.003.patch, 
> YARN-6856-yarn-3409.004.patch
>
>
> This focuses on the new CLI for the mapping of Node Attributes






[jira] [Commented] (YARN-6856) Support CLI for Node Attributes Mapping

2017-10-11 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16201354#comment-16201354
 ] 

Naganarasimha G R commented on YARN-6856:
-

[~sunilg], I

> Support CLI for Node Attributes Mapping
> ---
>
> Key: YARN-6856
> URL: https://issues.apache.org/jira/browse/YARN-6856
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: YARN-6856-YARN-3409.001.patch, 
> YARN-6856-YARN-3409.002.patch, YARN-6856-yarn-3409.003.patch, 
> YARN-6856-yarn-3409.004.patch
>
>
> This focuses on the new CLI for the mapping of Node Attributes






[jira] [Commented] (YARN-6608) Backport all SLS improvements from trunk to branch-2

2017-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16201351#comment-16201351
 ] 

Hadoop QA commented on YARN-6608:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
54s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 22 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
54s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
30s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
20s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
52s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
10s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
20s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
18s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  7m 30s{color} 
| {color:red} root generated 1 new + 1443 unchanged - 5 fixed = 1444 total (was 
1448) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 50s{color} | {color:orange} root: The patch generated 34 new + 161 unchanged 
- 235 fixed = 195 total (was 396) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
34s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m  
0s{color} | {color:red} The patch generated 2 new + 2 unchanged - 22 fixed = 4 
total (was 24) {color} |
| {color:orange}-0{color} | {color:orange} shelldocs {color} | {color:orange}  
0m 10s{color} | {color:orange} The patch generated 16 new + 47 unchanged - 0 
fixed = 63 total (was 47) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 2 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 10s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 55m 
14s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-rumen in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 51s{color} 
| {color:red} hadoop-sls in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}161m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
|   | 

[jira] [Commented] (YARN-7202) Add UT for api-server

2017-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16201327#comment-16201327
 ] 

Hadoop QA commented on YARN-7202:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} yarn-native-services Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
33s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
42s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 2s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} yarn-native-services passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 4s{color} | {color:green} root: The patch generated 0 new + 9 unchanged - 4 
fixed = 9 total (was 13) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
23s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  

[jira] [Updated] (YARN-7317) Fix overallocation resulted from ceiling in LocalityMulticastAMRMProxyPolicy

2017-10-11 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-7317:
---
Attachment: YARN-7317.v3.patch

> Fix overallocation resulted from ceiling in LocalityMulticastAMRMProxyPolicy
> 
>
> Key: YARN-7317
> URL: https://issues.apache.org/jira/browse/YARN-7317
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-7317.v1.patch, YARN-7317.v2.patch, 
> YARN-7317.v3.patch
>
>
> When LocalityMulticastAMRMProxyPolicy splits up the ANY requests across 
> different subclusters, we are doing Ceil(N * weight), leading to overall 
> overallocation. It is better to do Floor(N * weight) for each subcluster and 
> then assign the residue randomly according to the weights, so that the total 
> number of containers we ask for across all subclusters sums to exactly N.






[jira] [Commented] (YARN-7317) Fix overallocation resulted from ceiling in LocalityMulticastAMRMProxyPolicy

2017-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16201306#comment-16201306
 ] 

Hadoop QA commented on YARN-7317:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common: 
The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
17s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common 
generated 4 new + 30 unchanged - 0 fixed = 34 total (was 30) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
2s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:3d04c00 |
| JIRA Issue | YARN-7317 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12891598/YARN-7317.v2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 79292fc49b56 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 075358e |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/17875/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/17875/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt
 |
|  Test Results | 

[jira] [Updated] (YARN-7244) ShuffleHandler is not aware of disks that are added

2017-10-11 Thread Kuhu Shukla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuhu Shukla updated YARN-7244:
--
Attachment: YARN-7244.004.patch

Rebasing patch on trunk.

> ShuffleHandler is not aware of disks that are added
> ---
>
> Key: YARN-7244
> URL: https://issues.apache.org/jira/browse/YARN-7244
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
> Attachments: YARN-7244.001.patch, YARN-7244.002.patch, 
> YARN-7244.003.patch, YARN-7244.004.patch
>
>
> The ShuffleHandler permanently remembers the list of "good" disks on NM 
> startup. If disks are later added to the node, map tasks will start using 
> them, but the ShuffleHandler will not be aware of them. The end result is 
> that the data cannot be shuffled from the node, leading to fetch failures 
> and re-runs of the map tasks.
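A minimal sketch of one possible direction (hedged; not the actual
ShuffleHandler fix): re-read the configured local dirs on demand instead of
caching them once at NM startup, so newly added disks become visible. The
class and method names are hypothetical; only the
{{yarn.nodemanager.local-dirs}} property is real.
{code}
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.conf.Configuration;

public class ShuffleDirs {
  /**
   * Hypothetical helper: re-read the configured local dirs on each lookup
   * instead of remembering the "good" list once at startup, so disks added
   * later are picked up.
   */
  public static List<String> currentLocalDirs(Configuration conf) {
    return Arrays.asList(
        conf.getTrimmedStrings("yarn.nodemanager.local-dirs"));
  }
}
{code}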



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6457) Allow custom SSL configuration to be supplied in WebApps

2017-10-11 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201287#comment-16201287
 ] 

Robert Kanter commented on YARN-6457:
-

Sorry for raising the alarm - we're still looking into this, but it's looking 
like a configuration issue on our end, so I think we're fine here.

> Allow custom SSL configuration to be supplied in WebApps
> 
>
> Key: YARN-6457
> URL: https://issues.apache.org/jira/browse/YARN-6457
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp, yarn
>Reporter: Sanjay M Pujare
>Assignee: Sanjay M Pujare
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha4, 2.8.2
>
> Attachments: YARN-6457.00.patch, YARN-6457.01.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> Currently a custom SSL store cannot be passed on to WebApps, which forces 
> the embedded web server to use the default keystore set up in ssl-server.xml 
> for the whole Hadoop cluster. There are cases where a Hadoop app needs to 
> use its own/custom keystore.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7169) Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)

2017-10-11 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201278#comment-16201278
 ] 

Sunil G commented on YARN-7169:
---

[~vrushalic], for the UI we need maven 3.3 as a minimum version. We use 
{{frontend-maven-plugin}} and that works only with maven 3.3. Could we bump up 
the maven version for branch-2.9? Could we backport HADOOP-14285 to this 
branch?

> Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)
> --
>
> Key: YARN-7169
> URL: https://issues.apache.org/jira/browse/YARN-7169
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>Priority: Critical
> Attachments: YARN-7169-YARN-5355_branch2.0001.patch, 
> YARN-7169-YARN-5355_branch2.0002.patch, ui_commits(1)
>
>
> JIRA to track the backport of the new yarn-ui onto branch2. Right now it is 
> being added into Timeline Service v2's branch2, which is YARN-5355_branch2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7317) Fix overallocation resulted from ceiling in LocalityMulticastAMRMProxyPolicy

2017-10-11 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-7317:
---
Attachment: YARN-7317.v2.patch

Thanks [~curino] for the review, v2 patch uploaded. 

> Fix overallocation resulted from ceiling in LocalityMulticastAMRMProxyPolicy
> 
>
> Key: YARN-7317
> URL: https://issues.apache.org/jira/browse/YARN-7317
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-7317.v1.patch, YARN-7317.v2.patch
>
>
> When LocalityMulticastAMRMProxyPolicy splits up the ANY requests across 
> different subclusters, we are doing Ceil(N * weight), leading to overall 
> overallocation. It is better to do Floor(N * weight) for each subcluster and 
> then assign the residue randomly according to the weights, so that the total 
> number of containers we ask for across all subclusters sums up to N.
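A minimal, self-contained sketch of the Floor-plus-weighted-residue split
described above (hypothetical names, not the actual
LocalityMulticastAMRMProxyPolicy code; weights are assumed to sum to 1):
{code}
import java.util.Random;

public class WeightedSplit {
  /**
   * Floor(N * weight) per subcluster, then hand out the remaining
   * containers one at a time, picking a subcluster at random in
   * proportion to its weight, so the totals sum to exactly N.
   */
  public static int[] split(int totalN, double[] weights, Random rng) {
    int[] alloc = new int[weights.length];
    int assigned = 0;
    for (int i = 0; i < weights.length; i++) {
      alloc[i] = (int) Math.floor(totalN * weights[i]);
      assigned += alloc[i];
    }
    // Distribute the residue one container at a time, weighted randomly.
    for (int r = assigned; r < totalN; r++) {
      double pick = rng.nextDouble();
      int idx = weights.length - 1; // fallback for floating-point rounding
      double cum = 0;
      for (int i = 0; i < weights.length; i++) {
        cum += weights[i];
        if (pick < cum) {
          idx = i;
          break;
        }
      }
      alloc[idx]++;
    }
    return alloc;
  }
}
{code}
For example, with N = 10 and weights {0.45, 0.35, 0.2}, Ceil asks for
5 + 4 + 2 = 11 containers, while Floor-plus-residue asks for exactly 10.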



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4511) Common scheduler changes supporting scheduler-specific implementations

2017-10-11 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201275#comment-16201275
 ] 

Wangda Tan commented on YARN-4511:
--

Thanks Haibo, apologies for my late response; I was busy with other tasks.

Regarding the {{allocationInThisHeartbeat}} discussion: the related JIRA is 
YARN-5139, which, in short, splits scheduler allocation into two separate 
phases:
Phase #1: the scheduler looks at existing scheduler state (queue/node/app, 
etc.) and makes an allocation proposal (which container to allocate on which 
node). This can be done in multiple threads.
Phase #2: another thread (currently a single thread) looks at the allocation 
proposals and tries to accept or reject them.
In the context of YARN-5139, we cannot assume an allocation proposal will be 
accepted. I'm not sure how this impacts your approach.
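A minimal sketch of that two-phase flow (hypothetical names, not the actual
YARN-5139 code): many proposer threads read state and enqueue proposals,
while a single committer thread re-checks state and may reject a proposal,
which is why acceptance cannot be assumed.
{code}
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class TwoPhaseAllocatorSketch {
  static class Proposal {
    final String nodeId;
    final int memoryMb;
    Proposal(String nodeId, int memoryMb) {
      this.nodeId = nodeId;
      this.memoryMb = memoryMb;
    }
  }

  private final BlockingQueue<Proposal> proposals =
      new LinkedBlockingQueue<>();

  // Phase #1: may run in multiple threads; reads scheduler state and
  // emits a proposal without committing anything.
  public void propose(String nodeId, int memoryMb) {
    proposals.offer(new Proposal(nodeId, memoryMb));
  }

  // Phase #2: a single committer thread re-validates each proposal
  // against current state and accepts or rejects it.
  public void commitLoop(Map<String, Integer> freeMbByNode)
      throws InterruptedException {
    while (true) {
      Proposal p = proposals.take();
      int free = freeMbByNode.getOrDefault(p.nodeId, 0);
      if (free >= p.memoryMb) {
        freeMbByNode.put(p.nodeId, free - p.memoryMb); // accept
      } // else reject: the proposal is simply dropped
    }
  }
}
{code}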

To your proposal:
bq. we'd do allocation of guaranteed containers first followed by opportunistic 
containers. We need to consider the just-allocated-yet-to-launch guaranteed 
containers to project how much resource we have left to allocate opportunistic 
containers.
I'm still not quite sure how it works: just-allocated-yet-to-launch 
guaranteed containers could be allocated in different heartbeats, correct? It 
is possible that an AM acquires a guaranteed container and waits for several 
minutes to launch it; I'm not sure if recording the total allocated in a 
single node update event is enough.

bq. I only try to preserve the containerLaunched flag. Can you be more specific 
about what you're referring to in the patch?
I'm talking about the method below in SchedulerNode (it seems to have been 
renamed in the latest patch):
{code}
/**
 * Inform the node that a container has launched.
 * @param containerId ID of the launched container
 */
public synchronized void containerStarted(ContainerId containerId) {
  ContainerInfo info = launchedContainers.get(containerId);
  if (info != null) {
    info.launchedOnNode = true;
  }
}
{code}
I'm not sure why we need a separate launchedOnNode flag, because we already 
have the launchedContainers map.

bq. There is a jira open to consolidate with Resource Profiles (YARN-6690). Is 
that a good place to do the work to accommodate other resources?
I'm fine with moving this to a separate JIRA, but we need to do this before 
the release; otherwise it is going to be very hard to modify the defined 
protos in a future release.

I'm not sure if I'm asking too much: could you include a summary of the 
workflow of this patch and how schedulers will use it? I found there are lots 
of changes (especially inside SchedulerNode), but I cannot see the full 
picture of how the scheduler will use them. A workflow description would help 
reviews a lot.


> Common scheduler changes supporting scheduler-specific implementations
> --
>
> Key: YARN-4511
> URL: https://issues.apache.org/jira/browse/YARN-4511
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Haibo Chen
> Attachments: YARN-4511-YARN-1011.00.patch, 
> YARN-4511-YARN-1011.01.patch, YARN-4511-YARN-1011.02.patch, 
> YARN-4511-YARN-1011.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7269) Tracking URL in the app state does not get redirected to ApplicationMaster for Running applications

2017-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201274#comment-16201274
 ] 

Hadoop QA commented on YARN-7269:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 59s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-yarn-server-web-proxy in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:3d04c00 |
| JIRA Issue | YARN-7269 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12891590/YARN-7269.addendum.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6b90d0fee5be 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 075358e |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17871/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy 
U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy 
|
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17871/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Tracking URL in the app state does not get redirected to ApplicationMaster 
> for Running applications
> 

[jira] [Commented] (YARN-6608) Backport all SLS improvements from trunk to branch-2

2017-10-11 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201272#comment-16201272
 ] 

Carlo Curino commented on YARN-6608:


Awesome! [~wangda], thanks for the help; hopefully we can make it by the 2.9 
release cut.

> Backport all SLS improvements from trunk to branch-2
> 
>
> Key: YARN-6608
> URL: https://issues.apache.org/jira/browse/YARN-6608
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.0
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-6608-branch-2.v0.patch, 
> YARN-6608-branch-2.v1.patch, YARN-6608-branch-2.v2.patch, 
> YARN-6608-branch-2.v3.patch, YARN-6608-branch-2.v4.patch, 
> YARN-6608-branch-2.v5.patch
>
>
> The SLS has received lots of attention in trunk, but only some of it made it 
> back to branch-2. This patch is a "raw" fork-lift of the trunk development 
> from hadoop-tools/hadoop-sls.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6546) SLS is slow while loading 10k queues

2017-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201259#comment-16201259
 ] 

Hadoop QA commented on YARN-6546:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} hadoop-tools/hadoop-sls: The patch generated 4 
new + 26 unchanged - 0 fixed = 30 total (was 26) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
5s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:3d04c00 |
| JIRA Issue | YARN-6546 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12891585/YARN-6546.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 861b9717f49b 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 075358e |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/17867/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-sls.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17867/testReport/ |
| modules | C: hadoop-tools/hadoop-sls U: hadoop-tools/hadoop-sls |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17867/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Updated] (YARN-7202) Add UT for api-server

2017-10-11 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7202:

Attachment: YARN-7202.yarn-native-services.013.patch

Removed the integration test until the updateService API is improved 
(YARN-7217).

> Add UT for api-server
> -
>
> Key: YARN-7202
> URL: https://issues.apache.org/jira/browse/YARN-7202
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Eric Yang
> Attachments: YARN-7202.yarn-native-services.001.patch, 
> YARN-7202.yarn-native-services.002.patch, 
> YARN-7202.yarn-native-services.003.patch, 
> YARN-7202.yarn-native-services.004.patch, 
> YARN-7202.yarn-native-services.005.patch, 
> YARN-7202.yarn-native-services.006.patch, 
> YARN-7202.yarn-native-services.007.patch, 
> YARN-7202.yarn-native-services.008.patch, 
> YARN-7202.yarn-native-services.011.patch, 
> YARN-7202.yarn-native-services.012.patch, 
> YARN-7202.yarn-native-services.013.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7269) Tracking URL in the app state does not get redirected to ApplicationMaster for Running applications

2017-10-11 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201248#comment-16201248
 ] 

Jian He commented on YARN-7269:
---

lgtm

> Tracking URL in the app state does not get redirected to ApplicationMaster 
> for Running applications
> ---
>
> Key: YARN-7269
> URL: https://issues.apache.org/jira/browse/YARN-7269
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Tan, Wangda
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-7269.001.patch, YARN-7269.002.patch, 
> YARN-7269.003.patch, YARN-7269.addendum.001.patch, 
> YARN-7269.addendum.002.patch
>
>
> Tracking URL in the app state does not get redirected to ApplicationMaster 
> for Running applications. It gives following exception
> {code}
>  org.mortbay.log: /ws/v1/mapreduce/info
> javax.servlet.ServletException: Could not determine the proxy server for 
> redirection
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.findRedirectUrl(AmIpFilter.java:199)
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:141)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1426)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
>   at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>   at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
>   at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>   at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>   at org.mortbay.jetty.Server.handle(Server.java:326)
>   at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
>   at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
>   at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
>   at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
>   at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
>   at 
> org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
>   at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5926) clean up registry code for java 7/8

2017-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201246#comment-16201246
 ] 

Hadoop QA commented on YARN-5926:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} YARN-5926 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-5926 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12840064/YARN-5926-001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17873/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> clean up registry code for java 7/8
> ---
>
> Key: YARN-5926
> URL: https://issues.apache.org/jira/browse/YARN-5926
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: YARN-5926-001.patch
>
>
> Clean up the registry code to stop the java 7/8 warnings



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7291) Better input parsing for resource in allocation file

2017-10-11 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7291:
---
Labels: newbie  (was: )

> Better input parsing for resource in allocation file
> 
>
> Key: YARN-7291
> URL: https://issues.apache.org/jira/browse/YARN-7291
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Priority: Minor
>  Labels: newbie
>
> When you set max/min shares for queues in the fair scheduler allocation 
> file, "1024 mb, 2 4 vcores" is parsed the same as "1024 mb, 4 vcores" 
> without any error; likewise, "50% memory, 50% 100%cpu" is parsed the same as 
> "50% memory, 100%cpu". That is confusing. We should fix it.
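A minimal sketch of stricter parsing (a hypothetical helper, not the actual 
FairScheduler parser): anchoring the pattern so trailing or duplicated tokens 
are rejected instead of silently ignored.
{code}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StrictAllocationParser {
  // Anchored: exactly one value per resource, nothing extra in between.
  private static final Pattern ABSOLUTE =
      Pattern.compile("^\\s*(\\d+)\\s*mb\\s*,\\s*(\\d+)\\s*vcores\\s*$");

  /** Returns {memoryMb, vcores}; throws on "1024 mb, 2 4 vcores". */
  public static int[] parse(String value) {
    Matcher m = ABSOLUTE.matcher(value);
    if (!m.matches()) {
      throw new IllegalArgumentException(
          "Malformed resource string: " + value);
    }
    return new int[] {
        Integer.parseInt(m.group(1)), Integer.parseInt(m.group(2))};
  }
}
{code}
With this, "1024 mb, 4 vcores" parses cleanly, while "1024 mb, 2 4 vcores" 
fails fast instead of being silently accepted.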



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4858) start-yarn and stop-yarn scripts to support timeline and sharedcachemanager

2017-10-11 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201231#comment-16201231
 ] 

Subru Krishnan commented on YARN-4858:
--

Pushing it out from 2.9.0 due to lack of recent activity. Feel free to revert 
if required.

> start-yarn and stop-yarn scripts to support timeline and sharedcachemanager
> ---
>
> Key: YARN-4858
> URL: https://issues.apache.org/jira/browse/YARN-4858
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: oct16-easy
> Attachments: YARN-4858-001.patch, YARN-4858-branch-2.001.patch
>
>
> The start-yarn and stop-yarn scripts don't have any (even commented-out) 
> support for the timeline server and sharedcachemanager.
> Proposed:
> * bash and cmd start-yarn scripts have commented-out start actions
> * stop-yarn scripts stop the servers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7170) Investigate bower dependencies for YARN UI v2

2017-10-11 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201243#comment-16201243
 ] 

Subru Krishnan edited comment on YARN-7170 at 10/11/17 11:57 PM:
-

[~sunilg]/[~akhilpb], do you intend to get this for 2.9.0 as it looks relevant?


was (Author: subru):
@sunil G/[~akhilpb], do you intend to get this for 2.9.0 as it looks relevant?

> Investigate bower dependencies for YARN UI v2
> -
>
> Key: YARN-7170
> URL: https://issues.apache.org/jira/browse/YARN-7170
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Critical
> Attachments: YARN-7170.001.patch
>
>
> [INFO] bower ember#2.2.0   progress Receiving
> objects:  50% (38449/75444), 722.46 MiB | 3.30 MiB/s
> ...
> [INFO] bower ember#2.2.0   progress Receiving
> objects:  99% (75017/75444), 1.56 GiB | 3.31 MiB/s
> Investigate the dependencies to reduce the download size and speed up 
> compilation.
> cc/ [~Sreenath] and [~akhilpb]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7170) Investigate bower dependencies for YARN UI v2

2017-10-11 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201243#comment-16201243
 ] 

Subru Krishnan commented on YARN-7170:
--

@sunil G/[~akhilpb], do you intend to get this for 2.9.0 as it looks relevant?

> Investigate bower dependencies for YARN UI v2
> -
>
> Key: YARN-7170
> URL: https://issues.apache.org/jira/browse/YARN-7170
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Critical
> Attachments: YARN-7170.001.patch
>
>
> [INFO] bower ember#2.2.0   progress Receiving
> objects:  50% (38449/75444), 722.46 MiB | 3.30 MiB/s
> ...
> [INFO] bower ember#2.2.0   progress Receiving
> objects:  99% (75017/75444), 1.56 GiB | 3.31 MiB/s
> Investigate the dependencies to reduce the download size and speed up 
> compilation.
> cc/ [~Sreenath] and [~akhilpb]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3861) Add fav icon to YARN & MR daemons web UI

2017-10-11 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201228#comment-16201228
 ] 

Subru Krishnan commented on YARN-3861:
--

Pushing it out from 2.9.0 due to lack of recent activity. Feel free to revert 
if required.

> Add fav icon to YARN & MR daemons web UI
> 
>
> Key: YARN-3861
> URL: https://issues.apache.org/jira/browse/YARN-3861
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp
>Reporter: Devaraj K
>Assignee: Devaraj K
>  Labels: oct16-easy
> Attachments: RM UI in Chrome-With Patch.png, RM UI in Chrome-Without 
> Patch.png, RM UI in IE-With Patch.png, RM UI in IE-Without Patch.png.png, 
> YARN-3861.patch, hadoop-fav-transparent.png, hadoop-fav.png
>
>
> Add fav icon image to all YARN & MR daemons web UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2748) Upload logs in the sub-folders under the local log dir when aggregating logs

2017-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201242#comment-16201242
 ] 

Hadoop QA commented on YARN-2748:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} YARN-2748 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-2748 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12731620/YARN-2748.04.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17872/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Upload logs in the sub-folders under the local log dir when aggregating logs
> 
>
> Key: YARN-2748
> URL: https://issues.apache.org/jira/browse/YARN-2748
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: log-aggregation
>Affects Versions: 2.6.0
>Reporter: Zhijie Shen
>Assignee: Varun Saxena
> Attachments: YARN-2748.001.patch, YARN-2748.002.patch, 
> YARN-2748.03.patch, YARN-2748.04.patch
>
>
> YARN-2734 has a temporary fix that skips sub-folders to avoid an exception. 
> Ideally, if the app creates a sub-folder and puts its rolling logs there, we 
> need to upload those logs as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3625) RollingLevelDBTimelineStore Incorrectly Forbids Related Entity in Same Put

2017-10-11 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-3625:
-
Target Version/s: 3.1.0  (was: 2.9.0)

> RollingLevelDBTimelineStore Incorrectly Forbids Related Entity in Same Put
> --
>
> Key: YARN-3625
> URL: https://issues.apache.org/jira/browse/YARN-3625
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>  Labels: oct16-medium
> Attachments: YARN-3625.1.patch, YARN-3625.2.patch
>
>
> RollingLevelDBTimelineStore batches all entities in the same put to improve 
> performance. However, this causes an error when relating to an entity in the 
> same put.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3625) RollingLevelDBTimelineStore Incorrectly Forbids Related Entity in Same Put

2017-10-11 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201226#comment-16201226
 ] 

Subru Krishnan commented on YARN-3625:
--

Pushing it out from 2.9.0 due to lack of recent activity. Feel free to revert 
if required.

> RollingLevelDBTimelineStore Incorrectly Forbids Related Entity in Same Put
> --
>
> Key: YARN-3625
> URL: https://issues.apache.org/jira/browse/YARN-3625
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>  Labels: oct16-medium
> Attachments: YARN-3625.1.patch, YARN-3625.2.patch
>
>
> RollingLevelDBTimelineStore batches all entities in the same put to improve 
> performance. However, this causes an error when relating to an entity in the 
> same put.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5926) clean up registry code for java 7/8

2017-10-11 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201237#comment-16201237
 ] 

Subru Krishnan commented on YARN-5926:
--

Pushing it out from 2.9.0 due to lack of recent activity. Feel free to revert 
if required.

> clean up registry code for java 7/8
> ---
>
> Key: YARN-5926
> URL: https://issues.apache.org/jira/browse/YARN-5926
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: YARN-5926-001.patch
>
>
> Clean up the registry code to stop the java 7/8 warnings



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4988) Limit filter in ApplicationBaseProtocol#getApplications should return latest applications

2017-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201236#comment-16201236
 ] 

Hadoop QA commented on YARN-4988:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} YARN-4988 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-4988 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12819325/YARN-4988-wip.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17870/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Limit filter in ApplicationBaseProtocol#getApplications should return latest 
> applications
> -
>
> Key: YARN-4988
> URL: https://issues.apache.org/jira/browse/YARN-4988
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: oct16-medium
> Attachments: YARN-4988-wip.patch
>
>
> Whenever the limit filter is used to get application reports via 
> ApplicationBaseProtocol#getApplications, the applications retrieved are not 
> the latest; they are effectively random, based on the hashcode. 
> The reason for this is that the RM maintains the apps in a map whose 
> iteration order depends on the application id hashcode. So if there are 10 
> applications, app-1 to app-10, and the limit is 5, one would expect 
> applications app-6 to app-10 to be retrieved. But currently whatever 5 apps 
> come first in the map are returned, so the retrieved applications are a 
> random 5. 
> I think limit should retrieve the latest applications only.
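A minimal sketch of the proposed behavior (hypothetical types; the real code 
would work on the RM's application map): sort by start time descending before 
applying the limit, so the newest applications are returned.
{code}
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class LatestAppsLimitSketch {
  static class AppReport {
    final String appId;
    final long startTime;
    AppReport(String appId, long startTime) {
      this.appId = appId;
      this.startTime = startTime;
    }
  }

  /** Newest-first ordering applied before the limit, instead of map order. */
  static List<AppReport> latest(Map<String, AppReport> apps, long limit) {
    return apps.values().stream()
        .sorted(Comparator.comparingLong((AppReport a) -> a.startTime)
            .reversed())
        .limit(limit)
        .collect(Collectors.toList());
  }
}
{code}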



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5926) clean up registry code for java 7/8

2017-10-11 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5926:
-
Target Version/s: 3.0.0  (was: 2.9.0)

> clean up registry code for java 7/8
> ---
>
> Key: YARN-5926
> URL: https://issues.apache.org/jira/browse/YARN-5926
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: YARN-5926-001.patch
>
>
> Clean up the registry code to stop the java 7/8 warnings



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4988) Limit filter in ApplicationBaseProtocol#getApplications should return latest applications

2017-10-11 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201234#comment-16201234
 ] 

Subru Krishnan commented on YARN-4988:
--

[~rohithsharma], this looks relevant, so are you still targeting 2.9.0? I can 
help with reviews if required.

> Limit filter in ApplicationBaseProtocol#getApplications should return latest 
> applications
> -
>
> Key: YARN-4988
> URL: https://issues.apache.org/jira/browse/YARN-4988
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: oct16-medium
> Attachments: YARN-4988-wip.patch
>
>
> Whenever the limit filter is used to get application reports via 
> ApplicationBaseProtocol#getApplications, the applications retrieved are not 
> the latest; they are effectively random, based on the hashcode. 
> The reason for this is that the RM maintains the apps in a map whose 
> iteration order depends on the application id hashcode. So if there are 10 
> applications, app-1 to app-10, and the limit is 5, one would expect 
> applications app-6 to app-10 to be retrieved. But currently whatever 5 apps 
> come first in the map are returned, so the retrieved applications are a 
> random 5. 
> I think limit should retrieve the latest applications only.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3861) Add fav icon to YARN & MR daemons web UI

2017-10-11 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-3861:
-
Target Version/s: 3.0.0  (was: 3.1.0)

> Add fav icon to YARN & MR daemons web UI
> 
>
> Key: YARN-3861
> URL: https://issues.apache.org/jira/browse/YARN-3861
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp
>Reporter: Devaraj K
>Assignee: Devaraj K
>  Labels: oct16-easy
> Attachments: RM UI in Chrome-With Patch.png, RM UI in Chrome-Without 
> Patch.png, RM UI in IE-With Patch.png, RM UI in IE-Without Patch.png.png, 
> YARN-3861.patch, hadoop-fav-transparent.png, hadoop-fav.png
>
>
> Add fav icon image to all YARN & MR daemons web UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4858) start-yarn and stop-yarn scripts to support timeline and sharedcachemanager

2017-10-11 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-4858:
-
Target Version/s: 2.9.1  (was: 2.9.0)

> start-yarn and stop-yarn scripts to support timeline and sharedcachemanager
> ---
>
> Key: YARN-4858
> URL: https://issues.apache.org/jira/browse/YARN-4858
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: oct16-easy
> Attachments: YARN-4858-001.patch, YARN-4858-branch-2.001.patch
>
>
> The start-yarn and stop-yarn scripts don't have any (even commented-out) 
> support for the timeline server and sharedcachemanager.
> Proposed:
> * bash and cmd start-yarn scripts have commented-out start actions
> * stop-yarn scripts stop the servers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7269) Tracking URL in the app state does not get redirected to ApplicationMaster for Running applications

2017-10-11 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7269:
-
Attachment: YARN-7269.addendum.002.patch

Attached ver.2 addendum patch, which fixes the unit test failures.

> Tracking URL in the app state does not get redirected to ApplicationMaster 
> for Running applications
> ---
>
> Key: YARN-7269
> URL: https://issues.apache.org/jira/browse/YARN-7269
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Tan, Wangda
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-7269.001.patch, YARN-7269.002.patch, 
> YARN-7269.003.patch, YARN-7269.addendum.001.patch, 
> YARN-7269.addendum.002.patch
>
>
> Tracking URL in the app state does not get redirected to ApplicationMaster 
> for Running applications. It gives following exception
> {code}
>  org.mortbay.log: /ws/v1/mapreduce/info
> javax.servlet.ServletException: Could not determine the proxy server for 
> redirection
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.findRedirectUrl(AmIpFilter.java:199)
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:141)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1426)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
>   at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>   at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
>   at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>   at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>   at org.mortbay.jetty.Server.handle(Server.java:326)
>   at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
>   at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
>   at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
>   at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
>   at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
>   at 
> org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
>   at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4721) RM to try to auth with HDFS on startup, retry with max diagnostics on failure

2017-10-11 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201229#comment-16201229
 ] 

Subru Krishnan commented on YARN-4721:
--

Pushing it out from 2.9.0 due to lack of recent activity. Feel free to revert 
if required.

> RM to try to auth with HDFS on startup, retry with max diagnostics on failure
> -
>
> Key: YARN-4721
> URL: https://issues.apache.org/jira/browse/YARN-4721
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager, security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>  Labels: oct16-medium
> Attachments: HADOOP-12289-002.patch, HADOOP-12289-003.patch, 
> HADOOP-12889-001.patch
>
>
> If the RM can't auth with HDFS, this can first surface during job submission, 
> which can cause confusion about what's wrong and whose credentials are 
> playing up.
> Instead, the RM could try to talk to HDFS on launch; {{ls /}} should suffice. 
> If it can't auth, it can then tell UGI to log more and retry.
> I don't know what the policy should be if the RM can't auth to HDFS at this 
> point. Certainly it can't currently accept work. But should it fail fast or 
> keep going in the hope that the problem is in the KDC or NN and will fix 
> itself without an RM restart?
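A minimal sketch of the startup probe suggested above, using the standard 
FileSystem API (the retry-with-extra-diagnostics part is omitted):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsAuthProbe {
  /**
   * The equivalent of "ls /" against the default filesystem. A failure
   * here surfaces credential problems at RM startup rather than at the
   * first job submission.
   */
  public static void probe(Configuration conf) throws Exception {
    FileSystem fs = FileSystem.get(conf);
    fs.listStatus(new Path("/")); // throws if authentication fails
  }
}
{code}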



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4721) RM to try to auth with HDFS on startup, retry with max diagnostics on failure

2017-10-11 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-4721:
-
Target Version/s: 3.1.0  (was: 2.9.0)

> RM to try to auth with HDFS on startup, retry with max diagnostics on failure
> -
>
> Key: YARN-4721
> URL: https://issues.apache.org/jira/browse/YARN-4721
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager, security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>  Labels: oct16-medium
> Attachments: HADOOP-12289-002.patch, HADOOP-12289-003.patch, 
> HADOOP-12889-001.patch
>
>
> If the RM can't auth with HDFS, this can first surface during job submission, 
> which can cause confusion about what's wrong and whose credentials are 
> playing up.
> Instead, the RM could try to talk to HDFS on launch; {{ls /}} should suffice. 
> If it can't auth, it can then tell UGI to log more and retry.
> I don't know what the policy should be if the RM can't auth to HDFS at this 
> point. Certainly it can't currently accept work. But should it fail fast or 
> keep going in the hope that the problem is in the KDC or NN and will fix 
> itself without an RM restart?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3861) Add fav icon to YARN & MR daemons web UI

2017-10-11 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-3861:
-
Target Version/s: 3.1.0  (was: 2.9.0)

> Add fav icon to YARN & MR daemons web UI
> 
>
> Key: YARN-3861
> URL: https://issues.apache.org/jira/browse/YARN-3861
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp
>Reporter: Devaraj K
>Assignee: Devaraj K
>  Labels: oct16-easy
> Attachments: RM UI in Chrome-With Patch.png, RM UI in Chrome-Without 
> Patch.png, RM UI in IE-With Patch.png, RM UI in IE-Without Patch.png.png, 
> YARN-3861.patch, hadoop-fav-transparent.png, hadoop-fav.png
>
>
> Add fav icon image to all YARN & MR daemons web UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3514) Active directory usernames like domain\login cause YARN failures

2017-10-11 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-3514:
-
Target Version/s: 3.1.0  (was: 2.9.0)

> Active directory usernames like domain\login cause YARN failures
> 
>
> Key: YARN-3514
> URL: https://issues.apache.org/jira/browse/YARN-3514
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.2.0
> Environment: CentOS6
>Reporter: john lilley
>Priority: Minor
>  Labels: oct16-easy
> Attachments: YARN-3514.001.patch, YARN-3514.002.patch
>
>
> We have a 2.2.0 (Cloudera 5.3) cluster running on CentOS6 that is 
> Kerberos-enabled and uses an external AD domain controller for the KDC.  We 
> are able to authenticate, browse HDFS, etc.  However, YARN fails during 
> localization because it seems to get confused by the presence of a \ 
> character in the local user name.
> Our AD authentication on the nodes goes through sssd and is configured to 
> map AD users onto the form domain\username.  For example, our test user has a 
> Kerberos principal of hadoopu...@domain.com and that maps onto a CentOS user 
> "domain\hadoopuser".  We have no problem validating that user with PAM, 
> logging in as that user, su-ing to that user, etc.
> However, when we attempt to run a YARN application master, the localization 
> step fails when setting up the local cache directory for the AM.  The error 
> that comes out of the RM logs:
> 2015-04-17 12:47:09 INFO net.redpoint.yarnapp.Client[0]: monitorApplication: 
> ApplicationReport: appId=1, state=FAILED, progress=0.0, finalStatus=FAILED, 
> diagnostics='Application application_1429295486450_0001 failed 1 times due to 
> AM Container for appattempt_1429295486450_0001_01 exited with  exitCode: 
> -1000 due to: Application application_1429295486450_0001 initialization 
> failed (exitCode=255) with output: main : command provided 0
> main : user is DOMAIN\hadoopuser
> main : requested yarn user is domain\hadoopuser
> org.apache.hadoop.util.DiskChecker$DiskErrorException: Cannot create 
> directory: 
> /data/yarn/nm/usercache/domain%5Chadoopuser/appcache/application_1429295486450_0001/filecache/10
> at 
> org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:105)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.download(ContainerLocalizer.java:199)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:241)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.main(ContainerLocalizer.java:347)
> .Failing this attempt.. Failing the application.'
> However, when we look on the node launching the AM, we see this:
> [root@rpb-cdh-kerb-2 ~]# cd /data/yarn/nm/usercache
> [root@rpb-cdh-kerb-2 usercache]# ls -l
> drwxr-s--- 4 DOMAIN\hadoopuser yarn 4096 Apr 17 12:10 domain\hadoopuser
> There appears to be different treatment of the \ character in different 
> places.  Something creates the directory as "domain\hadoopuser" but something 
> else later attempts to use it as "domain%5Chadoopuser".  I’m not sure where 
> or why the URL escaping converts the \ to %5C, or why this is not consistent.
> I should also mention, for the sake of completeness, our auth_to_local rule 
> is set up to map u...@domain.com to domain\user:
> RULE:[1:$1@$0](^.*@DOMAIN\.COM$)s/^(.*)@DOMAIN\.COM$/domain\\$1/g
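The %5C in the failing path is standard URL percent-encoding of the 
backslash; a two-line illustration using the plain JDK (unrelated to the 
actual YARN code paths):
{code}
import java.net.URLEncoder;

public class EscapeDemo {
  public static void main(String[] args) throws Exception {
    // Percent-encoding turns '\' into "%5C", matching the mismatch between
    // the on-disk "domain\hadoopuser" and the requested "domain%5Chadoopuser".
    System.out.println(URLEncoder.encode("domain\\hadoopuser", "UTF-8"));
    // prints: domain%5Chadoopuser
  }
}
{code}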



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3514) Active directory usernames like domain\login cause YARN failures

2017-10-11 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201218#comment-16201218
 ] 

Subru Krishnan commented on YARN-3514:
--

Pushing it out from 2.9.0 due to lack of recent activity. Feel free to revert 
if required.

> Active directory usernames like domain\login cause YARN failures
> 
>
> Key: YARN-3514
> URL: https://issues.apache.org/jira/browse/YARN-3514
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.2.0
> Environment: CentOS6
>Reporter: john lilley
>Priority: Minor
>  Labels: oct16-easy
> Attachments: YARN-3514.001.patch, YARN-3514.002.patch
>
>
> We have a 2.2.0 (Cloudera 5.3) cluster running on CentOS6 that is 
> Kerberos-enabled and uses an external AD domain controller for the KDC.  We 
> are able to authenticate, browse HDFS, etc.  However, YARN fails during 
> localization because it seems to get confused by the presence of a \ 
> character in the local user name.
> Our AD authentication on the nodes goes through sssd and is configured to 
> map AD users onto the form domain\username.  For example, our test user has a 
> Kerberos principal of hadoopu...@domain.com and that maps onto a CentOS user 
> "domain\hadoopuser".  We have no problem validating that user with PAM, 
> logging in as that user, su-ing to that user, etc.
> However, when we attempt to run a YARN application master, the localization 
> step fails when setting up the local cache directory for the AM.  The error 
> that comes out of the RM logs:
> 2015-04-17 12:47:09 INFO net.redpoint.yarnapp.Client[0]: monitorApplication: 
> ApplicationReport: appId=1, state=FAILED, progress=0.0, finalStatus=FAILED, 
> diagnostics='Application application_1429295486450_0001 failed 1 times due to 
> AM Container for appattempt_1429295486450_0001_01 exited with  exitCode: 
> -1000 due to: Application application_1429295486450_0001 initialization 
> failed (exitCode=255) with output: main : command provided 0
> main : user is DOMAIN\hadoopuser
> main : requested yarn user is domain\hadoopuser
> org.apache.hadoop.util.DiskChecker$DiskErrorException: Cannot create 
> directory: 
> /data/yarn/nm/usercache/domain%5Chadoopuser/appcache/application_1429295486450_0001/filecache/10
> at 
> org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:105)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.download(ContainerLocalizer.java:199)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:241)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.main(ContainerLocalizer.java:347)
> .Failing this attempt.. Failing the application.'
> However, when we look on the node launching the AM, we see this:
> [root@rpb-cdh-kerb-2 ~]# cd /data/yarn/nm/usercache
> [root@rpb-cdh-kerb-2 usercache]# ls -l
> drwxr-s--- 4 DOMAIN\hadoopuser yarn 4096 Apr 17 12:10 domain\hadoopuser
> There appears to be different treatment of the \ character in different 
> places.  Something creates the directory as "domain\hadoopuser" but something 
> else later attempts to use it as "domain%5Chadoopuser". I’m not sure where 
> or why the URL escaping converts the \ to %5C, or why this is not consistent.
> I should also mention, for the sake of completeness, our auth_to_local rule 
> is set up to map u...@domain.com to domain\user:
> RULE:[1:$1@$0](^.*@DOMAIN\.COM$)s/^(.*)@DOMAIN\.COM$/domain\\$1/g






[jira] [Commented] (YARN-2748) Upload logs in the sub-folders under the local log dir when aggregating logs

2017-10-11 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201217#comment-16201217
 ] 

Subru Krishnan commented on YARN-2748:
--

Pushing it out from 2.9.0 due to lack of recent activity. Feel free to revert 
if required.

> Upload logs in the sub-folders under the local log dir when aggregating logs
> 
>
> Key: YARN-2748
> URL: https://issues.apache.org/jira/browse/YARN-2748
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: log-aggregation
>Affects Versions: 2.6.0
>Reporter: Zhijie Shen
>Assignee: Varun Saxena
> Attachments: YARN-2748.001.patch, YARN-2748.002.patch, 
> YARN-2748.03.patch, YARN-2748.04.patch
>
>
> YARN-2734 has a temporary fix to skip sub-folders to avoid an exception. 
> Ideally, if the app is creating a sub-folder and putting its rolling logs 
> there, we need to upload these logs as well.
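
A minimal sketch of that idea with plain {{java.nio.file}} (illustrative, not 
the actual aggregation code): walk the container log directory recursively 
instead of listing only its top level, so rolling logs in sub-folders are 
picked up too.

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class SubFolderLogScan {
  public static void main(String[] args) throws IOException {
    Path containerLogDir = Paths.get(args[0]);
    // Files.walk visits sub-folders as well, unlike a flat directory listing
    try (Stream<Path> files = Files.walk(containerLogDir)) {
      files.filter(Files::isRegularFile)
           .forEach(f -> System.out.println("would upload: " + f));
    }
  }
}
{code}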






[jira] [Updated] (YARN-2748) Upload logs in the sub-folders under the local log dir when aggregating logs

2017-10-11 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-2748:
-
Target Version/s: 3.1.0  (was: 2.9.0)

> Upload logs in the sub-folders under the local log dir when aggregating logs
> 
>
> Key: YARN-2748
> URL: https://issues.apache.org/jira/browse/YARN-2748
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: log-aggregation
>Affects Versions: 2.6.0
>Reporter: Zhijie Shen
>Assignee: Varun Saxena
> Attachments: YARN-2748.001.patch, YARN-2748.002.patch, 
> YARN-2748.03.patch, YARN-2748.04.patch
>
>
> YARN-2734 has a temporary fix to skip sub-folders to avoid an exception. 
> Ideally, if the app is creating a sub-folder and putting its rolling logs 
> there, we need to upload these logs as well.






[jira] [Updated] (YARN-2681) Support bandwidth enforcement for containers while reading from HDFS

2017-10-11 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-2681:
-
Target Version/s: 3.1.0  (was: 2.9.0)

> Support bandwidth enforcement for containers while reading from HDFS
> 
>
> Key: YARN-2681
> URL: https://issues.apache.org/jira/browse/YARN-2681
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager
>Affects Versions: 2.5.1
> Environment: Linux
>Reporter: Nam H. Do
> Attachments: Traffic Control Design.png, YARN-2681.001.patch, 
> YARN-2681.002.patch, YARN-2681.003.patch, YARN-2681.004.patch, 
> YARN-2681.005.patch, YARN-2681.patch
>
>
> To read/write data from HDFS, applications establish TCP/IP connections with 
> the datanode. HDFS reads can be controlled by setting up the Linux Traffic 
> Control (TC) subsystem on the data node to place filters on the appropriate 
> connections.
> The current cgroups net_cls concept cannot be applied on the node where the 
> container is launched, nor on the data node, since:
> -   TC handles outgoing bandwidth only, so it cannot be set on the container 
> node (an HDFS read is incoming data for the container)
> -   Since the HDFS data node is handled by only one process, it is not 
> possible to use net_cls to separate connections from different containers to 
> the datanode.
> Tasks:
> 1) Extend the Resource model to define a bandwidth enforcement rate
> 2) Monitor the TCP/IP connections established by the container handling 
> process and its child processes
> 3) Set Linux Traffic Control rules on the data node based on address:port 
> pairs in order to enforce the bandwidth of outgoing data
> Concept: http://www.hit.bme.hu/~do/papers/EnforcementDesign.pdf
> Implementation: 
> http://www.hit.bme.hu/~dohoai/documents/HdfsTrafficControl.pdf
> http://www.hit.bme.hu/~dohoai/documents/HdfsTrafficControl_UML_diagram.png
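
As a rough illustration of task 3, a hedged sketch of the kind of tc(8) filter 
a datanode could install for a single container connection; the device, 
handles, address, and port below are made up, and class 1:10 is assumed to 
already carry the enforced rate.

{code}
import java.io.IOException;

public class TcFilterSketch {
  public static void main(String[] args)
      throws IOException, InterruptedException {
    // classify outgoing packets toward one container's address:port pair into
    // an existing rate-limited class
    String[] cmd = {
        "tc", "filter", "add", "dev", "eth0", "parent", "1:", "protocol", "ip",
        "u32", "match", "ip", "dst", "10.0.0.21/32",
        "match", "ip", "dport", "35012", "0xffff",
        "flowid", "1:10"
    };
    new ProcessBuilder(cmd).inheritIO().start().waitFor();
  }
}
{code}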






[jira] [Commented] (YARN-2681) Support bandwidth enforcement for containers while reading from HDFS

2017-10-11 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201215#comment-16201215
 ] 

Subru Krishnan commented on YARN-2681:
--

Pushing it out from 2.9.0 due to lack of recent activity & apparent complexity. 
Feel free to revert if required.

> Support bandwidth enforcement for containers while reading from HDFS
> 
>
> Key: YARN-2681
> URL: https://issues.apache.org/jira/browse/YARN-2681
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager
>Affects Versions: 2.5.1
> Environment: Linux
>Reporter: Nam H. Do
> Attachments: Traffic Control Design.png, YARN-2681.001.patch, 
> YARN-2681.002.patch, YARN-2681.003.patch, YARN-2681.004.patch, 
> YARN-2681.005.patch, YARN-2681.patch
>
>
> To read/write data from HDFS, applications establish TCP/IP connections with 
> the datanode. HDFS reads can be controlled by setting up the Linux Traffic 
> Control (TC) subsystem on the data node to place filters on the appropriate 
> connections.
> The current cgroups net_cls concept cannot be applied on the node where the 
> container is launched, nor on the data node, since:
> -   TC handles outgoing bandwidth only, so it cannot be set on the container 
> node (an HDFS read is incoming data for the container)
> -   Since the HDFS data node is handled by only one process, it is not 
> possible to use net_cls to separate connections from different containers to 
> the datanode.
> Tasks:
> 1) Extend the Resource model to define a bandwidth enforcement rate
> 2) Monitor the TCP/IP connections established by the container handling 
> process and its child processes
> 3) Set Linux Traffic Control rules on the data node based on address:port 
> pairs in order to enforce the bandwidth of outgoing data
> Concept: http://www.hit.bme.hu/~do/papers/EnforcementDesign.pdf
> Implementation: 
> http://www.hit.bme.hu/~dohoai/documents/HdfsTrafficControl.pdf
> http://www.hit.bme.hu/~dohoai/documents/HdfsTrafficControl_UML_diagram.png






[jira] [Commented] (YARN-1564) add workflow YARN services

2017-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201212#comment-16201212
 ] 

Hadoop QA commented on YARN-1564:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} YARN-1564 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-1564 |
| GITHUB PR | https://github.com/apache/hadoop/pull/65 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17868/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> add workflow YARN services
> --
>
> Key: YARN-1564
> URL: https://issues.apache.org/jira/browse/YARN-1564
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: api, nodemanager, resourcemanager
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: oct16-hard
> Attachments: YARN-1564-001.patch, YARN-1564-002.patch, 
> YARN-1564-003.patch
>
>   Original Estimate: 24h
>  Time Spent: 48h
>  Remaining Estimate: 0h
>
> I've been using some alternative composite services to help build workflows 
> of process execution in a YARN AM.
> They and their tests could be moved into YARN for use by others - this would 
> make it easier to build aggregate services in an AM
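
For context, a minimal sketch of the building block involved, using the 
existing {{org.apache.hadoop.service.CompositeService}}; the child services 
here are illustrative, not the classes from the patch.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.service.AbstractService;
import org.apache.hadoop.service.CompositeService;

public class WorkflowSketch extends CompositeService {
  // trivial child service that just reports when it starts
  static class Step extends AbstractService {
    Step(String name) { super(name); }
    @Override
    protected void serviceStart() {
      System.out.println(getName() + " started");
    }
  }

  public WorkflowSketch() {
    super("WorkflowSketch");
    // children are inited/started in registration order, stopped in reverse
    addService(new Step("localize"));
    addService(new Step("launch"));
  }

  public static void main(String[] args) {
    WorkflowSketch workflow = new WorkflowSketch();
    workflow.init(new Configuration());
    workflow.start();
    workflow.stop();
  }
}
{code}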






[jira] [Commented] (YARN-2031) YARN Proxy model doesn't support REST APIs in AMs

2017-10-11 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201213#comment-16201213
 ] 

Subru Krishnan commented on YARN-2031:
--

Pushing it out from 2.9.0 due to lack of recent activity. Feel free to revert 
if required.

> YARN Proxy model doesn't support REST APIs in AMs
> -
>
> Key: YARN-2031
> URL: https://issues.apache.org/jira/browse/YARN-2031
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: YARN-2031-002.patch, YARN-2031-003.patch, 
> YARN-2031-004.patch, YARN-2031-005.patch, YARN-2031.patch.001
>
>
> AMs can't support REST APIs because
> # the AM filter redirects all requests to the proxy with a 302 response (not 
> 307)
> # the proxy doesn't forward PUT/POST/DELETE verbs
> Either the AM filter needs to return 307 and the proxy needs to forward the 
> verbs, or the AM filter should not filter the REST part of the web site
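
For illustration: on a 302, most HTTP clients retry with GET and drop the 
request body, while a 307 preserves both the verb and the body. A hedged sketch 
of the 307 branch (illustrative, not the actual AM filter code):

{code}
import javax.servlet.http.HttpServletResponse;

public final class RestRedirect {
  // send the redirect as 307 so REST clients keep the HTTP verb and body
  static void redirectPreservingVerb(HttpServletResponse resp, String proxyUrl) {
    resp.setStatus(HttpServletResponse.SC_TEMPORARY_REDIRECT); // 307
    resp.setHeader("Location", proxyUrl);
  }
}
{code}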






[jira] [Updated] (YARN-2031) YARN Proxy model doesn't support REST APIs in AMs

2017-10-11 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-2031:
-
Target Version/s: 3.1.0  (was: 2.9.0)

> YARN Proxy model doesn't support REST APIs in AMs
> -
>
> Key: YARN-2031
> URL: https://issues.apache.org/jira/browse/YARN-2031
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: YARN-2031-002.patch, YARN-2031-003.patch, 
> YARN-2031-004.patch, YARN-2031-005.patch, YARN-2031.patch.001
>
>
> AMs can't support REST APIs because
> # the AM filter redirects all requests to the proxy with a 302 response (not 
> 307)
> # the proxy doesn't forward PUT/POST/DELETE verbs
> Either the AM filter needs to return 307 and the proxy needs to forward the 
> verbs, or the AM filter should not filter the REST part of the web site






[jira] [Updated] (YARN-1564) add workflow YARN services

2017-10-11 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-1564:
-
Target Version/s: 3.1.0  (was: 2.9.0)

> add workflow YARN services
> --
>
> Key: YARN-1564
> URL: https://issues.apache.org/jira/browse/YARN-1564
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: api, nodemanager, resourcemanager
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: oct16-hard
> Attachments: YARN-1564-001.patch, YARN-1564-002.patch, 
> YARN-1564-003.patch
>
>   Original Estimate: 24h
>  Time Spent: 48h
>  Remaining Estimate: 0h
>
> I've been using some alternative composite services to help build workflows 
> of process execution in a YARN AM.
> They and their tests could be moved into YARN for use by others - this would 
> make it easier to build aggregate services in an AM






[jira] [Commented] (YARN-574) PrivateLocalizer does not support parallel resource download via ContainerLocalizer

2017-10-11 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201200#comment-16201200
 ] 

Subru Krishnan commented on YARN-574:
-

Pushing it out from 2.9.0 due to lack of recent activity. Feel free to revert 
if required.

> PrivateLocalizer does not support parallel resource download via 
> ContainerLocalizer
> ---
>
> Key: YARN-574
> URL: https://issues.apache.org/jira/browse/YARN-574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.6.0, 2.8.0, 2.7.1
>Reporter: Omkar Vinit Joshi
>Assignee: Ajith S
> Attachments: YARN-574.03.patch, YARN-574.04.patch, YARN-574.05.patch, 
> YARN-574.1.patch, YARN-574.2.patch
>
>
> At present, private resources will be downloaded in parallel only if multiple 
> containers request the same resource; otherwise downloads are serial. 
> The protocol between PrivateLocalizer and ContainerLocalizer supports 
> multiple downloads; however, this is not used, and only one resource is sent 
> for download at a time.
> I think we can increase/ensure parallelism (even for a single container 
> requesting a resource) for private/application resources by making multiple 
> downloads per ContainerLocalizer.
> Total Parallelism before
> = number of threads allotted for PublicLocalizer [public resource] + number 
> of containers [private and application resource]
> Total Parallelism after
> = number of threads allotted for PublicLocalizer [public resource] + number 
> of containers * max downloads per container [private and application resource]
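
For example, with 4 PublicLocalizer threads, 10 containers, and 4 downloads per 
container, total parallelism would rise from 14 to 44. A minimal sketch of the 
per-container idea (illustrative, not the PrivateLocalizer/ContainerLocalizer 
protocol code; the pool size stands in for "max downloads per container"):

{code}
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PerContainerDownloads {
  public static void main(String[] args) throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(4);
    List<String> resources = Arrays.asList("app.jar", "conf.xml", "dict.bin");
    // one container's resources download concurrently instead of serially
    for (String resource : resources) {
      pool.submit(() -> System.out.println("downloading " + resource));
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.MINUTES);
  }
}
{code}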






[jira] [Commented] (YARN-1564) add workflow YARN services

2017-10-11 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201203#comment-16201203
 ] 

Subru Krishnan commented on YARN-1564:
--

Pushing it out from 2.9.0 due to apparent complexity & lack of recent activity. 
Feel free to revert if required.

> add workflow YARN services
> --
>
> Key: YARN-1564
> URL: https://issues.apache.org/jira/browse/YARN-1564
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: api, nodemanager, resourcemanager
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: oct16-hard
> Attachments: YARN-1564-001.patch, YARN-1564-002.patch, 
> YARN-1564-003.patch
>
>   Original Estimate: 24h
>  Time Spent: 48h
>  Remaining Estimate: 0h
>
> I've been using some alternative composite services to help build workflows 
> of process execution in a YARN AM.
> They and their tests could be moved into YARN for use by others - this would 
> make it easier to build aggregate services in an AM






[jira] [Updated] (YARN-6608) Backport all SLS improvements from trunk to branch-2

2017-10-11 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6608:
-
Attachment: YARN-6608-branch-2.v5.patch

Attached ver.5 patch; fixed compilation issues and ASF warnings. The failed 
unit tests may not be related; we need a clean Jenkins run to identify real 
unit test issues.

> Backport all SLS improvements from trunk to branch-2
> 
>
> Key: YARN-6608
> URL: https://issues.apache.org/jira/browse/YARN-6608
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.0
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-6608-branch-2.v0.patch, 
> YARN-6608-branch-2.v1.patch, YARN-6608-branch-2.v2.patch, 
> YARN-6608-branch-2.v3.patch, YARN-6608-branch-2.v4.patch, 
> YARN-6608-branch-2.v5.patch
>
>
> The SLS has received lots of attention in trunk, but only some of it made it 
> back to branch-2. This patch is a "raw" fork-lift of the trunk development 
> from hadoop-tools/hadoop-sls.






[jira] [Updated] (YARN-574) PrivateLocalizer does not support parallel resource download via ContainerLocalizer

2017-10-11 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-574:

Target Version/s: 3.1.0  (was: 2.9.0)

> PrivateLocalizer does not support parallel resource download via 
> ContainerLocalizer
> ---
>
> Key: YARN-574
> URL: https://issues.apache.org/jira/browse/YARN-574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.6.0, 2.8.0, 2.7.1
>Reporter: Omkar Vinit Joshi
>Assignee: Ajith S
> Attachments: YARN-574.03.patch, YARN-574.04.patch, YARN-574.05.patch, 
> YARN-574.1.patch, YARN-574.2.patch
>
>
> At present, private resources will be downloaded in parallel only if multiple 
> containers request the same resource; otherwise downloads are serial. 
> The protocol between PrivateLocalizer and ContainerLocalizer supports 
> multiple downloads; however, this is not used, and only one resource is sent 
> for download at a time.
> I think we can increase/ensure parallelism (even for a single container 
> requesting a resource) for private/application resources by making multiple 
> downloads per ContainerLocalizer.
> Total Parallelism before
> = number of threads allotted for PublicLocalizer [public resource] + number 
> of containers [private and application resource]
> Total Parallelism after
> = number of threads allotted for PublicLocalizer [public resource] + number 
> of containers * max downloads per container [private and application resource]






[jira] [Commented] (YARN-7205) Log improvements for the ResourceUtils

2017-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201193#comment-16201193
 ] 

Hudson commented on YARN-7205:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13076 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13076/])
YARN-7205. Log improvements for the ResourceUtils. (Sunil G via wangda) 
(wangda: rev 8bcc49e6771ca75f012211e27870a421b19233e7)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceTypeInfo.java


> Log improvements for the ResourceUtils
> --
>
> Key: YARN-7205
> URL: https://issues.apache.org/jira/browse/YARN-7205
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Jian He
>Assignee: Sunil G
> Fix For: 3.1.0
>
> Attachments: YARN-7205.001.patch, YARN-7205.002.patch, 
> YARN-7205.003.patch, YARN-7205.004.patch
>
>
> I've seen the logs below printed at the service client console after the 
> merge; can these be moved to debug-level logs? cc [~sunilg], [~leftnoteasy]
> {code}
> 17/09/15 10:26:32 INFO conf.Configuration: resource-types.xml not found
> 17/09/15 10:26:32 INFO resource.ResourceUtils: Unable to find 
> 'resource-types.xml'. Falling back to memory and vcores as resources.
> 17/09/15 10:26:32 INFO resource.ResourceUtils: Adding resource type - name = 
> memory-mb, units = Mi, type = COUNTABLE
> 17/09/15 10:26:32 INFO resource.ResourceUtils: Adding resource type - name = 
> vcores, units = , type = COUNTABLE
> {code}
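
A minimal sketch of the requested change (slf4j shown for illustration; the 
actual logger and call sites may differ):

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class ResourceTypeLogging {
  private static final Logger LOG =
      LoggerFactory.getLogger(ResourceTypeLogging.class);

  // demoted from info to debug so service client consoles stay quiet
  static void logResourceType(String name, String units, String type) {
    LOG.debug("Adding resource type - name = {}, units = {}, type = {}",
        name, units, type);
  }
}
{code}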






[jira] [Commented] (YARN-7244) ShuffleHandler is not aware of disks that are added

2017-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201163#comment-16201163
 ] 

Hadoop QA commented on YARN-7244:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  9s{color} 
| {color:red} YARN-7244 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7244 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12891577/YARN-7244.003.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17866/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ShuffleHandler is not aware of disks that are added
> ---
>
> Key: YARN-7244
> URL: https://issues.apache.org/jira/browse/YARN-7244
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
> Attachments: YARN-7244.001.patch, YARN-7244.002.patch, 
> YARN-7244.003.patch
>
>
> The ShuffleHandler permanently remembers the list of "good" disks on NM 
> startup. If disks are later added to the node, then map tasks will start 
> using them, but the ShuffleHandler will not be aware of them. The end result 
> is that the data cannot be shuffled from the node, leading to fetch failures 
> and re-runs of the map tasks.






[jira] [Updated] (YARN-6546) SLS is slow while loading 10k queues

2017-10-11 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6546:
---
Attachment: YARN-6546.002.patch

Thanks for the review, [~miklos.szeg...@cloudera.com]. Uploaded patch v2 
addressing your comments. In patch v2:
- SLS tracks a queue only when there is an app in it.
- Removed {{untrackQueue()}} since it was broken and unused.

> SLS is slow while loading 10k queues
> 
>
> Key: YARN-6546
> URL: https://issues.apache.org/jira/browse/YARN-6546
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Affects Versions: 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: Desktop.png, YARN-6546.001.patch, YARN-6546.002.patch
>
>
> It takes a long time (more than 10 minutes) to load 10k queues in SLS. The 
> problem appears to be in {{com.codahale.metrics.CsvReporter}}, based on the 
> profiler results. SLS creates 14 .csv files for each leaf queue and updates 
> them constantly during execution. It is not necessary to log information for 
> inactive queues.
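
A hedged sketch of the direction the patch takes (illustrative names, not the 
SLS code): register a queue's metrics only once the queue becomes active, so 
{{CsvReporter}} never creates files for idle queues.

{code}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import com.codahale.metrics.Gauge;
import com.codahale.metrics.MetricRegistry;

public class LazyQueueMetrics {
  private final MetricRegistry registry = new MetricRegistry();
  private final Set<String> tracked = ConcurrentHashMap.newKeySet();

  // called when an app is added to a queue; only the first call registers gauges
  void trackQueue(String queue) {
    if (tracked.add(queue)) {
      registry.register("variable.queue." + queue + ".pending.apps",
          (Gauge<Integer>) () -> pendingApps(queue));
    }
  }

  int pendingApps(String queue) {
    return 0; // placeholder for the real scheduler lookup
  }
}
{code}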






[jira] [Commented] (YARN-7169) Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)

2017-10-11 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201137#comment-16201137
 ] 

Vrushali C commented on YARN-7169:
--

Hmm, I see this build error on jenkins:
{code}
[ERROR] Failed to execute goal 
com.github.eirslett:frontend-maven-plugin:1.2:install-node-and-yarn (install 
node and yarn) on project hadoop-yarn-ui: The plugin 
com.github.eirslett:frontend-maven-plugin:1.2 requires Maven version 3.1.0 -> 
[Help 1]
{code}

This builds on my laptop, but I have all the npm/node.js/ember stuff 
installed.

[~sunil.gov...@gmail.com] Would you know how to invoke the build for the UI in 
jenkins?



> Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)
> --
>
> Key: YARN-7169
> URL: https://issues.apache.org/jira/browse/YARN-7169
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>Priority: Critical
> Attachments: YARN-7169-YARN-5355_branch2.0001.patch, 
> YARN-7169-YARN-5355_branch2.0002.patch, ui_commits(1)
>
>
> Jira to track the backport of the new yarn-ui onto branch2. Right now it is 
> being added to Timeline Service v2's branch2, which is YARN-5355_branch2.






[jira] [Commented] (YARN-7224) Support GPU isolation for docker container

2017-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201127#comment-16201127
 ] 

Hadoop QA commented on YARN-7224:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
54s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
46s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  3m 
22s{color} | {color:red} hadoop-yarn in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  9m 41s{color} | 
{color:red} hadoop-yarn-project_hadoop-yarn generated 5 new + 0 unchanged - 0 
fixed = 5 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 52s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 31 new + 267 unchanged - 4 fixed = 298 total (was 271) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 10s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
56s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m 
41s{color} | {color:red} hadoop-yarn-project_hadoop-yarn generated 1 new + 5400 
unchanged - 0 fixed = 5401 total (was 5400) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
18s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 1 new + 103 unchanged - 0 fixed = 104 total (was 103) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m 35s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 34s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 56s{color} 
| 

[jira] [Commented] (YARN-7169) Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)

2017-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201125#comment-16201125
 ] 

Hadoop QA commented on YARN-7169:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} YARN-5355_branch2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
55s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
23s{color} | {color:green} YARN-5355_branch2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
20s{color} | {color:green} YARN-5355_branch2 passed with JDK v1.8.0_144 {color} 
|
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
48s{color} | {color:green} YARN-5355_branch2 passed with JDK v1.7.0_151 {color} 
|
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
44s{color} | {color:green} YARN-5355_branch2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m  
6s{color} | {color:green} YARN-5355_branch2 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-assemblies hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
14s{color} | {color:green} YARN-5355_branch2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
56s{color} | {color:green} YARN-5355_branch2 passed with JDK v1.8.0_144 {color} 
|
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
22s{color} | {color:green} YARN-5355_branch2 passed with JDK v1.7.0_151 {color} 
|
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
34s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  5m  
9s{color} | {color:red} root in the patch failed with JDK v1.8.0_144. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  5m  9s{color} 
| {color:red} root in the patch failed with JDK v1.8.0_144. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  4m 
59s{color} | {color:red} root in the patch failed with JDK v1.7.0_151. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  4m 59s{color} 
| {color:red} root in the patch failed with JDK v1.7.0_151. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 50s{color} | {color:orange} root: The patch generated 3 new + 285 unchanged 
- 0 fixed = 288 total (was 285) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
9s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-assemblies hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | 

[jira] [Commented] (YARN-7286) Add support for docker to have no capabilities

2017-10-11 Thread Sidharta Seethana (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201121#comment-16201121
 ] 

Sidharta Seethana commented on YARN-7286:
-

Agreed, we shouldn't be changing that behavior at this point. I am ok with the 
"NONE" approach. With respect to mixing "NONE" and other capabilities, how 
about moving capability configuration handling to the initialization function 
in {{DockerLinuxContainerRuntime}} and throwing an exception there in case they 
are mixed? It should probably have been in that function in the first place - 
all other config handling is in that function.
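
A hedged sketch of that validation with illustrative names (not the actual 
{{DockerLinuxContainerRuntime}} code):

{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

final class CapabilityConfig {
  // reject configurations that mix the NONE sentinel with real capabilities
  static Set<String> parse(String[] configured) {
    Set<String> caps = configured == null
        ? new HashSet<String>()
        : new HashSet<String>(Arrays.asList(configured));
    if (caps.contains("NONE") && caps.size() > 1) {
      throw new IllegalArgumentException(
          "NONE cannot be combined with other docker capabilities: " + caps);
    }
    return caps;
  }
}
{code}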

> Add support for docker to have no capabilities
> --
>
> Key: YARN-7286
> URL: https://issues.apache.org/jira/browse/YARN-7286
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: YARN-7286.001.patch, YARN-7286.002.patch, 
> YARN-7286.003.patch
>
>
> Support for controlling capabilities was introduced in YARN-4258. However, it 
> does not allow for the capabilities list to be NULL, since {{getStrings()}} 
> will treat an empty value the same as it treats an unset property. So, a NULL 
> list will actually give the default capabilities list.
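
A small sketch of the ambiguity described above, using Hadoop's 
{{Configuration}}:

{code}
import org.apache.hadoop.conf.Configuration;

public class GetStringsDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    conf.set("docker.capabilities", "");
    // an empty value is indistinguishable from an unset key: both yield null,
    // which is why an explicit "NONE" sentinel is needed
    System.out.println(conf.getStrings("docker.capabilities")); // null
    System.out.println(conf.getStrings("never.set.key"));       // null
  }
}
{code}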






[jira] [Updated] (YARN-7244) ShuffleHandler is not aware of disks that are added

2017-10-11 Thread Kuhu Shukla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuhu Shukla updated YARN-7244:
--
Attachment: YARN-7244.003.patch

Updated the patch to be closer to the design Jason mentioned earlier. It adds 
a new path handler that is passed from the ContainerManager -> AuxServices -> 
AuxiliaryService -> ShuffleHandler.
I'd appreciate any comments on the approach/patch. Thanks a lot!
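
A hedged sketch of the chaining described above, with illustrative names (not 
the actual patch):

{code}
import java.util.List;

// a callback handed down ContainerManager -> AuxServices -> AuxiliaryService
// so the ShuffleHandler always sees the current good local dirs
interface LocalDirsChangedListener {
  void onLocalDirsChanged(List<String> goodLocalDirs);
}
{code}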

> ShuffleHandler is not aware of disks that are added
> ---
>
> Key: YARN-7244
> URL: https://issues.apache.org/jira/browse/YARN-7244
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
> Attachments: YARN-7244.001.patch, YARN-7244.002.patch, 
> YARN-7244.003.patch
>
>
> The ShuffleHandler permanently remembers the list of "good" disks on NM 
> startup. If disks later are added to the node then map tasks will start using 
> them but the ShuffleHandler will not be aware of them. The end result is that 
> the data cannot be shuffled from the node leading to fetch failures and 
> re-runs of the map tasks.






[jira] [Commented] (YARN-7307) Revisit resource-types.xml loading behaviors

2017-10-11 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201076#comment-16201076
 ] 

Wangda Tan commented on YARN-7307:
--

Also, instead of adding new dynamic resources fields to ResourceUtils, can we 
leverage {{initializeResourcesFromResourceInformationMap}} and directly update 
the local resource types information after the {{List}} is received from the RM?

If you do think the client should be able to opt out of loading 
resource-types.xml, then instead of adding a new option to yarn-site.xml, we 
can add an API to ResourceUtils that opts in to loading resource-types.xml (by 
default it won't be loaded), and we can update the ResourceManager/NodeManager 
to be the only callers that opt in.
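
A hedged sketch of that opt-in shape, with illustrative names (no such method 
exists in {{ResourceUtils}} today):

{code}
public final class ResourceTypesLoading {
  private static volatile boolean loadFromClasspath = false;

  // daemons such as the RM/NM call this once at startup; plain clients never
  // do, so by default resource-types.xml is not read from the classpath
  public static void optInToResourceTypesFile() {
    loadFromClasspath = true;
  }

  static boolean shouldLoadResourceTypesFile() {
    return loadFromClasspath;
  }

  private ResourceTypesLoading() {}
}
{code}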

> Revisit resource-types.xml loading behaviors
> 
>
> Key: YARN-7307
> URL: https://issues.apache.org/jira/browse/YARN-7307
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Sunil G
>Priority: Blocker
> Attachments: YARN-7307.001.patch
>
>
> The existing feature requires that every client have a resource-types.xml in 
> order to use multiple resource types; should we allow the client/AM to update 
> supported resource types via YARN APIs?






[jira] [Commented] (YARN-7317) Fix overallocation resulted from ceiling in LocalityMulticastAMRMProxyPolicy

2017-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201072#comment-16201072
 ] 

Hadoop QA commented on YARN-7317:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common: 
The patch generated 6 new + 0 unchanged - 0 fixed = 6 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
1s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
20s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common 
generated 6 new + 30 unchanged - 0 fixed = 36 total (was 30) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m  6s{color} 
| {color:red} hadoop-yarn-server-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common
 |
|  |  
org.apache.hadoop.yarn.server.federation.policies.FederationPolicyUtils.rand 
isn't final but should be  At FederationPolicyUtils.java:be  At 
FederationPolicyUtils.java:[line 51] |
| Failed junit tests | 
hadoop.yarn.server.federation.policies.router.TestWeightedRandomRouterPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:3d04c00 |
| JIRA Issue | YARN-7317 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12891573/YARN-7317.v1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1cca8c348cec 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (YARN-7307) Revisit resource-types.xml loading behaviors

2017-10-11 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201070#comment-16201070
 ] 

Wangda Tan commented on YARN-7307:
--

[~sunilg], 

Thanks for working on the patch; several questions/comments:

1) Do you think it is a valid use case for a client to need to maintain its 
own resource-types.xml (different from the RM's version)?

2) Is there any issue with the client loading a local resource-types.xml and 
overwriting it once it receives the resource types info from the RM?

To me, the answers to 1)/2) are both no. I don't think the client should keep 
a different version, and I don't see any issue if the client loads resource 
types first and overwrites them with the responses from the RM. So I think we 
can avoid adding the separate config 
{{load.resource-types.config-file.for-client}} and always load 
{{resource-types.xml}} from the classpath.

And should we add resourceTypesInfo to RegisterApplicationMasterResponse? Once 
we have that, we can change AMRMClient to automatically update local resource 
types after registering with the RM.

> Revisit resource-types.xml loading behaviors
> 
>
> Key: YARN-7307
> URL: https://issues.apache.org/jira/browse/YARN-7307
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Sunil G
>Priority: Blocker
> Attachments: YARN-7307.001.patch
>
>
> The existing feature requires that every client have a resource-types.xml in 
> order to use multiple resource types; should we allow the client/AM to update 
> supported resource types via YARN APIs?






[jira] [Commented] (YARN-7317) Fix overallocation resulted from ceiling in LocalityMulticastAMRMProxyPolicy

2017-10-11 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201024#comment-16201024
 ] 

Carlo Curino commented on YARN-7317:


Thanks [~botong] for the improvement. The patch generally looks good; a couple 
of small issues:
# Fix the patch to apply to trunk.
# Line 423: you can mark it as {{@VisibleForTesting}} to signal that the 
method is protected instead of private for testing purposes only.
# 444: Non --> No
# Add a test for {{FederationPolicyUtils}}; you can lift it from 
{{TestWeightedRandomRouterPolicy.testClusterChosenWithRightProbability}}.

> Fix overallocation resulted from ceiling in LocalityMulticastAMRMProxyPolicy
> 
>
> Key: YARN-7317
> URL: https://issues.apache.org/jira/browse/YARN-7317
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-7317.v1.patch
>
>
> When LocalityMulticastAMRMProxyPolicy is splitting up the ANY requests into 
> different subclusters, we are doing Ceil(N * weight), leading to 
> overallocation overall. It is better to do Floor(N * weight) for each 
> subcluster and then assign the residue randomly according to the weights, so 
> that the total number of containers we ask for from all subclusters sums up 
> to N.
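
For example, with N=10 and weights {0.35, 0.35, 0.30}, Ceil gives 4+4+3 = 11 
containers (one too many), while Floor gives 3+3+3 = 9 plus one residue 
container assigned by weight. A minimal sketch of that scheme (illustrative, 
not the actual policy code; the weights are assumed to sum to 1):

{code}
import java.util.Random;

public class WeightedSplit {
  static int[] split(int n, double[] weights, Random rnd) {
    int[] alloc = new int[weights.length];
    int assigned = 0;
    for (int i = 0; i < weights.length; i++) {
      alloc[i] = (int) Math.floor(n * weights[i]);
      assigned += alloc[i];
    }
    // hand out the residue randomly, in proportion to the weights
    for (; assigned < n; assigned++) {
      int chosen = weights.length - 1; // guard against floating-point drift
      double p = rnd.nextDouble(), cum = 0;
      for (int i = 0; i < weights.length; i++) {
        cum += weights[i];
        if (p <= cum) { chosen = i; break; }
      }
      alloc[chosen]++;
    }
    return alloc;
  }

  public static void main(String[] args) {
    int[] a = split(10, new double[] {0.35, 0.35, 0.30}, new Random());
    System.out.println(a[0] + " " + a[1] + " " + a[2]); // sums to exactly 10
  }
}
{code}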






[jira] [Commented] (YARN-6744) Recover component information on YARN native services AM restart

2017-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201020#comment-16201020
 ] 

Hadoop QA commented on YARN-6744:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} yarn-native-services Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 2s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} yarn-native-services passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core:
 The patch generated 2 new + 58 unchanged - 11 fixed = 60 total (was 69) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 56s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
13s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-6744 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12891566/YARN-6744-yarn-native-services.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 054b0ba38e2a 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / 4993b8a |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Comment Edited] (YARN-7127) Merge yarn-native-service branch into trunk

2017-10-11 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200989#comment-16200989
 ] 

Gour Saha edited comment on YARN-7127 at 10/11/17 9:28 PM:
---

I totally agree with [~jianhe]'s comment above. It is not as simple as listing 
those 4 basic operations and saying that the entire service subcommand needs to 
be merged into the application subcommand. As Jian explained above, there has 
to be a differentiator. Additionally, several things need to be thought 
through, including how to roll "mapred job", "hadoop jar", Tez AM, Spark AM, 
and several other specialized apps out there into "yarn application" as well.

[~eyang], your suggestions are very valid, but they seem more like a larger 
umbrella effort - expanding "yarn application" to provide unified support for 
all disparate apps to roll into it.


was (Author: gsaha):
I totally agree with [~jianhe]'s comment above. It is not as simple as listing 
those 4 basic commands and saying that the entire service subcommand needs to 
be merged into the application subcommand. As Jian explained above, there has 
to be a differentiator. Additionally, several things need to be thought 
through, including how to roll "mapred job", "hadoop jar", Tez AM, Spark AM, 
and several other specialized apps out there into "yarn application" as well.

[~eyang], your suggestions are very valid, but they seem more like a larger 
umbrella effort - expanding "yarn application" to provide unified support for 
all disparate apps to roll into it.

> Merge yarn-native-service branch into trunk
> ---
>
> Key: YARN-7127
> URL: https://issues.apache.org/jira/browse/YARN-7127
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-7127.01.patch, YARN-7127.02.patch, 
> YARN-7127.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7127) Merge yarn-native-service branch into trunk

2017-10-11 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200989#comment-16200989
 ] 

Gour Saha commented on YARN-7127:
-

I totally agree with [~jianhe]'s comment above. It is not as simple as listing 
those 4 basic commands and saying that the entire service subcommand needs to 
be merged into the application subcommand. As Jian explained above, there has 
to be a differentiator. Additionally, several things need to be thought 
through, including how to roll "mapred job", "hadoop jar", Tez AM, Spark AM, 
and several other specialized apps out there into "yarn application" as well.

[~eyang], your suggestions are very valid, but they seem more like a larger 
umbrella effort - expanding "yarn application" to provide unified support for 
all disparate apps to roll into it.

> Merge yarn-native-service branch into trunk
> ---
>
> Key: YARN-7127
> URL: https://issues.apache.org/jira/browse/YARN-7127
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-7127.01.patch, YARN-7127.02.patch, 
> YARN-7127.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7286) Add support for docker to have no capabilities

2017-10-11 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200987#comment-16200987
 ] 

Jason Lowe commented on YARN-7286:
--

I suspect the behavior difference stems from the different handling of 
properties loaded from site XML files vs. programmatically set properties.  The 
former are treated as resources while the latter are set in the property and 
overlay collections directly.  When loading resources it ignores null values 
for properties unless setAllowNullValueProperties is set, which nothing but 
test code does.  I don't think it's safe to change that behavior at this point 
since it's always behaved that way.  Configuration is one of those "everybody 
uses it and has learned to live with its bugs/quirks" things.  Changing its 
existing behavior is likely to break some downstream project, e.g. if we were 
to make final fields _really_ final even if programmatically set.
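
For illustration, here is a minimal sketch of the distinction (hypothetical key 
and resource names; {{setAllowNullValueProperties}} is the test-only hook 
mentioned above):

{code}
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration(false);

// programmatically set: stored directly in the property and overlay collections
conf.set("test.key", "value");

// resource load: null-valued properties in the XML are silently skipped during
// parsing, unless the test-only flag is enabled first
conf.setAllowNullValueProperties(true);
conf.addResource("my-site.xml"); // hypothetical resource with a null-valued entry
{code}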


> Add support for docker to have no capabilities
> --
>
> Key: YARN-7286
> URL: https://issues.apache.org/jira/browse/YARN-7286
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: YARN-7286.001.patch, YARN-7286.002.patch, 
> YARN-7286.003.patch
>
>
> Support for controlling capabilities was introduced in YARN-4258. However, it 
> does not allow for the capabilities list to be NULL, since {{getStrings()}} 
> will treat an empty value the same as it treats an unset property. So, a NULL 
> list will actually give the default capabilities list.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-4122) Add support for GPU as a resource

2017-10-11 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan resolved YARN-4122.
--
Resolution: Duplicate

This is duplicated by YARN-6620, closing as dup.

> Add support for GPU as a resource
> -
>
> Key: YARN-4122
> URL: https://issues.apache.org/jira/browse/YARN-4122
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jun Gong
>Assignee: Jun Gong
> Attachments: GPUAsAResourceDesign.pdf
>
>
> Use [cgroups 
> devices|https://www.kernel.org/doc/Documentation/cgroups/devices.txt] to 
> isolate GPUs for containers. For docker containers, we could use 'docker run 
> --device=...'.
> Reference: [SLURM Resources isolation through 
> cgroups|http://slurm.schedmd.com/slurm_ug_2011/SLURM_UserGroup2011_cgroups.pdf].
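
For context, device isolation with the cgroups v1 devices controller comes down 
to writing rules like the one below (a minimal sketch with a hypothetical 
cgroup path; NVIDIA GPUs conventionally register char-device major number 195):

{code}
import java.nio.file.Files;
import java.nio.file.Paths;

// deny the container access to all NVIDIA GPU devices (read/write/mknod)
Files.write(
    Paths.get("/sys/fs/cgroup/devices/hadoop-yarn/container_42/devices.deny"),
    "c 195:* rwm".getBytes());
{code}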



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4599) Set OOM control for memory cgroups

2017-10-11 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-4599:
--
Issue Type: Sub-task  (was: Bug)
Parent: YARN-1747

> Set OOM control for memory cgroups
> --
>
> Key: YARN-4599
> URL: https://issues.apache.org/jira/browse/YARN-4599
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: sandflee
>  Labels: oct16-medium
> Attachments: YARN-4599.sandflee.patch, yarn-4599-not-so-useful.patch
>
>
> YARN-1856 adds memory cgroups enforcement support. We should also explicitly 
> set OOM control so that containers are not killed as soon as they go over 
> their usage. Today, one could set the swappiness to control this, but 
> clusters with swap turned off exist.
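
For reference, under cgroups v1 the per-cgroup OOM killer can be disabled 
through {{memory.oom_control}}; tasks that exceed the limit are then paused 
rather than killed. A minimal sketch, assuming a hypothetical container cgroup 
path:

{code}
import java.nio.file.Files;
import java.nio.file.Paths;

// writing "1" disables the kernel OOM killer for this memory cgroup (cgroup v1)
Files.write(
    Paths.get("/sys/fs/cgroup/memory/hadoop-yarn/container_42/memory.oom_control"),
    "1".getBytes());
{code}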



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7317) Fix overallocation resulted from ceiling in LocalityMulticastAMRMProxyPolicy

2017-10-11 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-7317:
---
Attachment: YARN-7317.v1.patch

> Fix overallocation resulted from ceiling in LocalityMulticastAMRMProxyPolicy
> 
>
> Key: YARN-7317
> URL: https://issues.apache.org/jira/browse/YARN-7317
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-7317.v1.patch
>
>
> When LocalityMulticastAMRMProxyPolicy is splitting up the ANY requests into 
> different subclusters, we are doing Ceil(N * weight), leading to 
> overallocation overall. It is better to do Floor(N * weight) for each 
> subcluster and then assign the residue randomly according to the weights, so 
> that the total number of containers we ask for across all subclusters sums 
> up to N.
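
A minimal sketch of the floor-plus-residue split described above (hypothetical 
method name, assuming the weights sum to 1):

{code}
import java.util.Random;

static int[] splitAnyRequest(int n, double[] weights) {
  int[] shares = new int[weights.length];
  int assigned = 0;
  for (int i = 0; i < weights.length; i++) {
    shares[i] = (int) Math.floor(n * weights[i]); // floor, not ceil
    assigned += shares[i];
  }
  // hand out the residue one container at a time, randomly by weight,
  // so the shares sum to exactly n with no overallocation
  Random rand = new Random();
  for (int left = n - assigned; left > 0; left--) {
    int pick = weights.length - 1; // fallback for floating-point drift
    double r = rand.nextDouble(), cum = 0;
    for (int i = 0; i < weights.length; i++) {
      cum += weights[i];
      if (r <= cum) { pick = i; break; }
    }
    shares[pick]++;
  }
  return shares;
}
{code}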



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6668) Use cgroup to get container resource utilization

2017-10-11 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200971#comment-16200971
 ] 

Vinod Kumar Vavilapalli commented on YARN-6668:
---

Haven't looked into the patch, but how does this relate to YARN-4943? Does this 
JIRA duplicate YARN-4943 or do we need more work to facilitate YARN-4943?

> Use cgroup to get container resource utilization
> 
>
> Key: YARN-6668
> URL: https://issues.apache.org/jira/browse/YARN-6668
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Miklos Szegedi
> Attachments: YARN-6668.000.patch, YARN-6668.001.patch, 
> YARN-6668.002.patch, YARN-6668.003.patch, YARN-6668.004.patch, 
> YARN-6668.005.patch, YARN-6668.006.patch, YARN-6668.007.patch, 
> YARN-6668.008.patch, YARN-6668.009.patch
>
>
> Container Monitor relies on the proc file system to get container resource 
> utilization, which is not as efficient as reading cgroup accounting. When 
> cgroups are enabled, the NM should read cgroup stats instead.
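
For context, cgroup accounting exposes these counters as flat files, so 
sampling a container costs a couple of file reads instead of a walk over every 
pid under /proc. A minimal sketch, assuming cgroups v1 and a hypothetical 
container cgroup:

{code}
import java.nio.file.Files;
import java.nio.file.Paths;

String group = "hadoop-yarn/container_42"; // hypothetical container cgroup

// cumulative CPU time consumed by the cgroup, in nanoseconds
long cpuNanos = Long.parseLong(Files.readAllLines(
    Paths.get("/sys/fs/cgroup/cpuacct", group, "cpuacct.usage")).get(0).trim());

// current memory usage of the cgroup, in bytes
long memBytes = Long.parseLong(Files.readAllLines(
    Paths.get("/sys/fs/cgroup/memory", group, "memory.usage_in_bytes")).get(0).trim());
{code}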



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7127) Merge yarn-native-service branch into trunk

2017-10-11 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200955#comment-16200955
 ] 

Jian He commented on YARN-7127:
---

[~eyang], this is just one of the reasons. There are more reasons from a 
design point of view. 

Let's look at what the 'yarn application' subcommand stands for today:
It's the command to interact with the *ResourceManager*, which does listing / 
updating of application *metadata* from YARN's point of view. Although it's 
called application, it's *NOT* a command specific to the app (i.e. the AM).

However, 'yarn service' is the command to interact with the *service 
framework*, i.e. the special AM we wrote.
E.g. MapReduce is a customized AM on YARN; it has its own *mapred* command to 
interact with its own AM, which only makes sense to itself, like "mapred 
distcp". Would it make sense to merge the 'distcp' subcommand into the 'yarn 
application' command?
 
Similarly, the service framework is a special AM. It has its own semantics and 
use cases, e.g. flexing the component count or upgrading a component. The 
component is a concept specific to services, not to generic YARN apps. If we 
merge it with the generic "application" command, what would 'component' mean 
for other apps like MR? There will be many other use cases that only make 
sense for this service framework. Hence, this is the concept separation. 


So your previous comment below is not true; beyond launching/shutdown, there 
is a whole bunch of other concepts/operations only applicable to this service 
framework. 
bq.  The only distinction is the launching and shutdown of services may be 
different from batch jobs.

=

Now coming to the feasibility of implementing the approach: even if the 
service commands are merged into the application subcommand, how are we going 
to differentiate a generic app from a service app in the CLI? E.g. 'yarn 
application -status' gets the *metadata* from YARN, but the "yarn service 
status" command is supposed to get the status of the service AM. Are we going 
to add an option, say "-type service"? Ultimately, you still end up having the 
separation; it cannot be avoided. 

> Merge yarn-native-service branch into trunk
> ---
>
> Key: YARN-7127
> URL: https://issues.apache.org/jira/browse/YARN-7127
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-7127.01.patch, YARN-7127.02.patch, 
> YARN-7127.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6744) Recover component information on YARN native services AM restart

2017-10-11 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-6744:
-
Attachment: YARN-6744-yarn-native-services.004.patch

Fix unit test failure.

> Recover component information on YARN native services AM restart
> 
>
> Key: YARN-6744
> URL: https://issues.apache.org/jira/browse/YARN-6744
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-6744-yarn-native-services.001.patch, 
> YARN-6744-yarn-native-services.002.patch, 
> YARN-6744-yarn-native-services.003.patch, 
> YARN-6744-yarn-native-services.004.patch
>
>
> The new RoleInstance#Container constructor does not populate all the 
> information needed for a RoleInstance. This is the constructor used when 
> recovering running containers in AppState#addRestartedContainer. We will have 
> to figure out a way to determine this information for a running container.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7286) Add support for docker to have no capabilities

2017-10-11 Thread Sidharta Seethana (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200904#comment-16200904
 ] 

Sidharta Seethana edited comment on YARN-7286 at 10/11/17 8:30 PM:
---

The {{DEFAULT_NM_DOCKER_CONTAINER_CAPABILITIES}} list was based on the 
capabilities docker enabled by default at the time. This default list of 
capabilities is meant to stay consistent even if docker changes this list over 
time. 

{quote}
Well, yes, but that's at the discretion of the admin. If they want to give the 
user 0 capabilities, then they should be able to. The question is what the best 
way to do that is. If I were to look at yarn-site.xml and see an empty 
<value></value> for the capabilities, I would implicitly think there are no 
capabilities given, 
since this is an empty list. However, this would actually give the default list 
of capabilities.
{quote}

This is a bit surprising - is this the behavior expected from 
{{Configuration.setStrings(key, "")}} as well? The behavior I see is this: 

{code}
final YarnConfiguration conf = new YarnConfiguration();
// set test.key1 to an explicitly empty value; leave test.key2 unset
conf.setStrings("test.key1", "");
// the explicitly empty value yields null, not the supplied defaults
Assert.assertTrue(conf.getStrings("test.key1", "val1", "val2") == null);
// the unset key falls back to the supplied defaults
Assert.assertEquals(2, conf.getStrings("test.key2", "val1", "val2").length);
{code}




was (Author: sidharta-s):
The {{DEFAULT_NM_DOCKER_CONTAINER_CAPABILITIES}} list was based on the 
capabilities docker enabled by default at the time. This default list of 
capabilities is meant to stay consistent even if docker changes this list over 
time. 

{quote}
Well, yes, but that's at the discretion of the admin. If they want to give the 
user 0 capabilities, then they should be able to. The question is what the best 
way to do that is. If I were to look at yarn-site.xml and see an empty 
<value></value> for the capabilities, I would implicitly think there are no 
capabilities given, 
since this is an empty list. However, this would actually give the default list 
of capabilities.
{quote}

This is bit that is surprising - is this the behavior expected from 
{{Configuration.setStrings(key, "")}} as well ? The behavior I see is this : 

{code}
final YarnConfiguration conf = new YarnConfiguration();
// set test.key1 to an explicitly empty value; leave test.key2 unset
conf.setStrings("test.key1", "");
// the explicitly empty value yields null, not the supplied defaults
Assert.assertTrue(conf.getStrings("test.key1", "val1", "val2") == null);
// the unset key falls back to the supplied defaults
Assert.assertEquals(2, conf.getStrings("test.key2", "val1", "val2").length);
{code}



> Add support for docker to have no capabilities
> --
>
> Key: YARN-7286
> URL: https://issues.apache.org/jira/browse/YARN-7286
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: YARN-7286.001.patch, YARN-7286.002.patch, 
> YARN-7286.003.patch
>
>
> Support for controlling capabilities was introduced in YARN-4258. However, it 
> does not allow for the capabilities list to be NULL, since {{getStrings()}} 
> will treat an empty value the same as it treats an unset property. So, a NULL 
> list will actually give the default capabilities list.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7286) Add support for docker to have no capabilities

2017-10-11 Thread Sidharta Seethana (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200904#comment-16200904
 ] 

Sidharta Seethana commented on YARN-7286:
-

The {{DEFAULT_NM_DOCKER_CONTAINER_CAPABILITIES}} list was based on the 
capabilities docker enabled by default at the time. This default list of 
capabilities is meant to stay consistent even if docker changes this list over 
time. 

{quote}
Well, yes, but that's at the discretion of the admin. If they want to give the 
user 0 capabilities, then they should be able to. The question is what the best 
way to do that is. If I were to look at yarn-site.xml and see an empty 
<value></value> for the capabilities, I would implicitly think there are no 
capabilities given, 
since this is an empty list. However, this would actually give the default list 
of capabilities.
{quote}

This is a bit surprising - is this the behavior expected from 
{{Configuration.setStrings(key, "")}} as well? The behavior I see is this: 

{code}
final YarnConfiguration conf = new YarnConfiguration();
// set test.key1 to an explicitly empty value; leave test.key2 unset
conf.setStrings("test.key1", "");
// the explicitly empty value yields null, not the supplied defaults
Assert.assertTrue(conf.getStrings("test.key1", "val1", "val2") == null);
// the unset key falls back to the supplied defaults
Assert.assertEquals(2, conf.getStrings("test.key2", "val1", "val2").length);
{code}



> Add support for docker to have no capabilities
> --
>
> Key: YARN-7286
> URL: https://issues.apache.org/jira/browse/YARN-7286
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: YARN-7286.001.patch, YARN-7286.002.patch, 
> YARN-7286.003.patch
>
>
> Support for controlling capabilities was introduced in YARN-4258. However, it 
> does not allow for the capabilities list to be NULL, since {{getStrings()}} 
> will treat an empty value the same as it treats an unset property. So, a NULL 
> list will actually give the default capabilities list.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6940) FairScheduler: Enable Container update CodePaths and container resize testcase

2017-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200882#comment-16200882
 ] 

Hadoop QA commented on YARN-6940:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  7m 
56s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
6s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
13s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 56s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 78 unchanged - 0 fixed = 79 total (was 78) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 20s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 30m 46s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}160m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter |
|   | hadoop.yarn.server.resourcemanager.TestApplicationMasterService |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppAttempt |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler |
|   | hadoop.yarn.server.resourcemanager.TestWorkPreservingRMRestart |
| Timed out junit tests | org.apache.hadoop.yarn.client.api.impl.TestAMRMClient 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:3d04c00 |
| JIRA Issue | YARN-6940 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880236/YARN-6940.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  

[jira] [Commented] (YARN-6620) Add support in NodeManager to isolate GPU devices by using CGroups

2017-10-11 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200864#comment-16200864
 ] 

Wangda Tan commented on YARN-6620:
--

Thanks [~sunilg] for committing the patch, thanks [~devaraj.k]/[~tangzhankun] 
for reviewing the patch and thanks [~hex108] for offline suggestions.

> Add support in NodeManager to isolate GPU devices by using CGroups
> --
>
> Key: YARN-6620
> URL: https://issues.apache.org/jira/browse/YARN-6620
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: 3.1.0
>
> Attachments: YARN-6620.001.patch, YARN-6620.002.patch, 
> YARN-6620.003.patch, YARN-6620.004.patch, YARN-6620.005.patch, 
> YARN-6620.006-WIP.patch, YARN-6620.007.patch, YARN-6620.008.patch, 
> YARN-6620.009.patch, YARN-6620.010.patch, YARN-6620.011.patch, 
> YARN-6620.012.patch, YARN-6620.013.patch, YARN-6620.014.patch, 
> YARN-6620.015.patch, YARN-6620.016.patch, YARN-6620.017.patch
>
>
> This JIRA plan to add support of:
> 1) GPU configuration for NodeManagers
> 2) Isolation in CGroups. (Java side).
> 3) NM restart and recovery allocated GPU devices



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7169) Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)

2017-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200805#comment-16200805
 ] 

Hadoop QA commented on YARN-7169:
-

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-YARN-Build/17862/console in case of 
problems.


> Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)
> --
>
> Key: YARN-7169
> URL: https://issues.apache.org/jira/browse/YARN-7169
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>Priority: Critical
> Attachments: YARN-7169-YARN-5355_branch2.0001.patch, 
> YARN-7169-YARN-5355_branch2.0002.patch, ui_commits(1)
>
>
> Jira to track the backport of the new yarn-ui onto branch2. Right now adding 
> into Timeline Service v2's branch2 which is YARN-5355_branch2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7317) Fix overallocation resulted from ceiling in LocalityMulticastAMRMProxyPolicy

2017-10-11 Thread Botong Huang (JIRA)
Botong Huang created YARN-7317:
--

 Summary: Fix overallocation resulted from ceiling in 
LocalityMulticastAMRMProxyPolicy
 Key: YARN-7317
 URL: https://issues.apache.org/jira/browse/YARN-7317
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Botong Huang
Assignee: Botong Huang
Priority: Minor


When LocalityMulticastAMRMProxyPolicy is splitting up the ANY requests into 
different subclusters, we are doing Ceil(N * weight), leading to overallocation 
overall. It is better to do Floor(N * weight) for each subcluster and then 
assign the residue randomly according to the weights, so that the total number 
of containers we ask for across all subclusters sums up to N. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7169) Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)

2017-10-11 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-7169:
-
Attachment: YARN-7169-YARN-5355_branch2.0002.patch

Uploading YARN-7169-YARN-5355_branch2.0002.patch after rebasing 

> Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)
> --
>
> Key: YARN-7169
> URL: https://issues.apache.org/jira/browse/YARN-7169
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>Priority: Critical
> Attachments: YARN-7169-YARN-5355_branch2.0001.patch, 
> YARN-7169-YARN-5355_branch2.0002.patch, ui_commits(1)
>
>
> Jira to track the backport of the new yarn-ui onto branch2. Right now adding 
> into Timeline Service v2's branch2 which is YARN-5355_branch2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7169) Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)

2017-10-11 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-7169:
-
Attachment: ui_commits(1)

Attaching the list of commits that I have backported in the file "ui_commits(1)". 

> Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)
> --
>
> Key: YARN-7169
> URL: https://issues.apache.org/jira/browse/YARN-7169
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>Priority: Critical
> Attachments: YARN-7169-YARN-5355_branch2.0001.patch, ui_commits(1)
>
>
> Jira to track the backport of the new yarn-ui onto branch2. Right now adding 
> into Timeline Service v2's branch2 which is YARN-5355_branch2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6620) Add support in NodeManager to isolate GPU devices by using CGroups

2017-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200723#comment-16200723
 ] 

Hudson commented on YARN-6620:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #13073 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13073/])
YARN-6620. Add support in NodeManager to isolate GPU devices by using (sunilg: 
rev fa5cfc68f37c78b6cf26ce13247b9ff34da806cd)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestLinuxContainerExecutor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestLinuxContainerExecutorWithMocks.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/Context.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestDefaultContainerExecutor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/privileged/PrivilegedOperation.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/dao/gpu/TestGpuDeviceInformationParser.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeStatusUpdater.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/TestResourcePluginManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/ResourcePlugin.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/gpu/GpuNodeResourceUpdateHandler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TestResourceHandlerModule.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/dao/gpu/PerGpuMemoryUsage.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/dao/gpu/PerGpuDeviceInformation.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/BaseAMRMProxyTest.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceInformation.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/ResourceHandlerChain.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/dao/gpu/GpuDeviceInformation.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/TestContainersMonitorResourceChange.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
* (add) 

[jira] [Updated] (YARN-7169) Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)

2017-10-11 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-7169:
-
Attachment: YARN-7190-YARN-5355_branch2.01.patch

Uploading patch 002 now. I have backported all the commits that 
[~sunil.gov...@gmail.com] had shared with me offline.

The code builds and all unit tests in yarn-ui seem to pass.

I am now going to test it locally and then will try it out on a machine. 
Uploading a patch to see what Jenkins says. 

> Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)
> --
>
> Key: YARN-7169
> URL: https://issues.apache.org/jira/browse/YARN-7169
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>Priority: Critical
> Attachments: YARN-7169-YARN-5355_branch2.0001.patch, 
> YARN-7190-YARN-5355_branch2.01.patch
>
>
> Jira to track the backport of the new yarn-ui onto branch2. Right now adding 
> into Timeline Service v2's branch2 which is YARN-5355_branch2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7169) Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)

2017-10-11 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-7169:
-
Attachment: (was: YARN-7190-YARN-5355_branch2.01.patch)

> Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)
> --
>
> Key: YARN-7169
> URL: https://issues.apache.org/jira/browse/YARN-7169
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>Priority: Critical
> Attachments: YARN-7169-YARN-5355_branch2.0001.patch
>
>
> Jira to track the backport of the new yarn-ui onto branch2. Right now adding 
> into Timeline Service v2's branch2 which is YARN-5355_branch2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7190) Ensure only NM classpath in 2.x gets TSv2 related hbase jars, not the user classpath

2017-10-11 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200714#comment-16200714
 ] 

Vrushali C commented on YARN-7190:
--

I applied the patch file. I had to ignore the yarn.cmd changes due to 
Control-M characters. 

The rest of the patch applied fine. I ran TestTimelineReaderWebServices several 
times; it did not time out for me. 

> Ensure only NM classpath in 2.x gets TSv2 related hbase jars, not the user 
> classpath
> 
>
> Key: YARN-7190
> URL: https://issues.apache.org/jira/browse/YARN-7190
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>Assignee: Varun Saxena
> Attachments: YARN-7190-YARN-5355_branch2.01.patch
>
>
> [~jlowe] had a good observation about the user classpath getting extra jars 
> in hadoop 2.x brought in with TSv2.  If users start picking up Hadoop 2.x's 
> version of HBase jars instead of the ones they shipped with their job, it 
> could be a problem.
> So when TSv2 is to be used in 2.x, the hbase related jars should come into 
> only the NM classpath, not the user classpath.
> Here is a list of some jars
> {code}
> commons-csv-1.0.jar
> commons-el-1.0.jar
> commons-httpclient-3.1.jar
> disruptor-3.3.0.jar
> findbugs-annotations-1.3.9-1.jar
> hbase-annotations-1.2.6.jar
> hbase-client-1.2.6.jar
> hbase-common-1.2.6.jar
> hbase-hadoop2-compat-1.2.6.jar
> hbase-hadoop-compat-1.2.6.jar
> hbase-prefix-tree-1.2.6.jar
> hbase-procedure-1.2.6.jar
> hbase-protocol-1.2.6.jar
> hbase-server-1.2.6.jar
> htrace-core-3.1.0-incubating.jar
> jamon-runtime-2.4.1.jar
> jasper-compiler-5.5.23.jar
> jasper-runtime-5.5.23.jar
> jcodings-1.0.8.jar
> joni-2.1.2.jar
> jsp-2.1-6.1.14.jar
> jsp-api-2.1-6.1.14.jar
> jsr311-api-1.1.1.jar
> metrics-core-2.2.0.jar
> servlet-api-2.5-6.1.14.jar
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


