[jira] [Updated] (YARN-7050) Post cleanup after YARN-6903

2017-08-21 Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7050:
--
Attachment: YARN-7050.yarn-native-services.06.patch

> Post cleanup after YARN-6903
> 
>
> Key: YARN-7050
> URL: https://issues.apache.org/jira/browse/YARN-7050
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-7050.yarn-native-services.01.patch, 
> YARN-7050.yarn-native-services.02.patch, 
> YARN-7050.yarn-native-services.03.patch, 
> YARN-7050.yarn-native-services.04.patch, 
> YARN-7050.yarn-native-services.05.patch, 
> YARN-7050.yarn-native-services.06.patch
>
>
> This jira removes some old code, moves dependency classes to the new 
> package, and makes some other side changes.






[jira] [Updated] (YARN-7050) Post cleanup after YARN-6903

2017-08-21 Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7050:
--
Attachment: (was: YARN-7050.yarn-native-services.06.patch)

> Post cleanup after YARN-6903
> 
>
> Key: YARN-7050
> URL: https://issues.apache.org/jira/browse/YARN-7050
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-7050.yarn-native-services.01.patch, 
> YARN-7050.yarn-native-services.02.patch, 
> YARN-7050.yarn-native-services.03.patch, 
> YARN-7050.yarn-native-services.04.patch, 
> YARN-7050.yarn-native-services.05.patch, 
> YARN-7050.yarn-native-services.06.patch
>
>
> This jira removes some old code, moves dependency classes to the new 
> package, and makes some other side changes.






[jira] [Updated] (YARN-7050) Post cleanup after YARN-6903

2017-08-21 Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7050:
--
Attachment: YARN-7050.yarn-native-services.06.patch

The latest patch adds some logic changes in ServiceClient.


> Post cleanup after YARN-6903
> 
>
> Key: YARN-7050
> URL: https://issues.apache.org/jira/browse/YARN-7050
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-7050.yarn-native-services.01.patch, 
> YARN-7050.yarn-native-services.02.patch, 
> YARN-7050.yarn-native-services.03.patch, 
> YARN-7050.yarn-native-services.04.patch, 
> YARN-7050.yarn-native-services.05.patch, 
> YARN-7050.yarn-native-services.06.patch
>
>
> This jira removes some old code, moves dependency classes to the new 
> package, and makes some other side changes.






[jira] [Commented] (YARN-5603) Metrics for Federation StateStore

2017-08-21 Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16136333#comment-16136333
 ] 

Arun Suresh commented on YARN-5603:
---

Apologies for the delay in reviews, [~ellenfkh].
+1. Thanks for the update.
Will commit this shortly.


> Metrics for Federation StateStore
> -
>
> Key: YARN-5603
> URL: https://issues.apache.org/jira/browse/YARN-5603
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Subru Krishnan
>Assignee: Ellen Hui
> Attachments: 0001-Add-Federation-Client-metrics.patch, 
> YARN-5603.001.patch, YARN-5603.002.patch, YARN-5603.003.patch
>
>
> This JIRA proposes adding metrics for the Federation StateStore.
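
For readers following along, below is a minimal, hedged sketch of what such a
metrics source might look like using Hadoop's metrics2 library. The class
name, metric names, and context are assumptions for illustration, not the
committed YARN-5603 patch.

{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;
import org.apache.hadoop.metrics2.lib.MutableRate;

// Hypothetical sketch of a Federation StateStore client metrics source.
// All names are illustrative, not the committed implementation.
@Metrics(about = "Metrics for Federation StateStore", context = "fedr")
public final class FederationStateStoreClientMetrics {

  @Metric("Number of failed StateStore calls")
  private MutableCounterLong failedCalls;

  @Metric("Latency of successful StateStore calls")
  private MutableRate successfulCallLatency;

  private static FederationStateStoreClientMetrics instance;

  private FederationStateStoreClientMetrics() {
  }

  // Register the source once with the default metrics system; registration
  // instantiates the @Metric fields via reflection.
  public static synchronized FederationStateStoreClientMetrics get() {
    if (instance == null) {
      instance = DefaultMetricsSystem.instance()
          .register(new FederationStateStoreClientMetrics());
    }
    return instance;
  }

  public void succeeded(long durationMs) {
    successfulCallLatency.add(durationMs);
  }

  public void failed() {
    failedCalls.incr();
  }
}
{code}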






[jira] [Resolved] (YARN-3053) [Security] Review and implement authentication in ATS v.2

2017-08-21 Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C resolved YARN-3053.
--
  Resolution: Fixed
Hadoop Flags: Reviewed
Target Version/s: YARN-5355


Resolving this parent jira since all jiras that were incorporated under it are 
now resolved. The related YARN-6699 jira will be handled as part of YARN-7055.


> [Security] Review and implement authentication in ATS v.2
> -
>
> Key: YARN-3053
> URL: https://issues.apache.org/jira/browse/YARN-3053
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>  Labels: YARN-5355, yarn-5355-merge-blocker
> Fix For: YARN-5355
>
> Attachments: ATSv2Authentication(draft).pdf, 
> ATSv2Authentication.v01.pdf, ATSv2Authentication.v02.pdf
>
>
> Per the design in YARN-2928, we want to evaluate and review the system for 
> security and ensure it is properly secured.
> This includes proper authentication, token management, access control, and 
> any other relevant security aspects.






[jira] [Updated] (YARN-6933) ResourceUtils.DISALLOWED_NAMES and ResourceUtils.checkMandatoryResources() are duplicating work

2017-08-21 Manikandan R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikandan R updated YARN-6933:
---
Attachment: YARN-6933-YARN-3926.006.patch

> ResourceUtils.DISALLOWED_NAMES and ResourceUtils.checkMandatoryResources() 
> are duplicating work
> ---
>
> Key: YARN-6933
> URL: https://issues.apache.org/jira/browse/YARN-6933
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Manikandan R
>  Labels: newbie++
> Attachments: YARN-6933-YARN-3926.001.patch, 
> YARN-6933-YARN-3926.002.patch, YARN-6933-YARN-3926.003.patch, 
> YARN-6933-YARN-3926.004.patch, YARN-6933-YARN-3926.005.patch, 
> YARN-6933-YARN-3926.006.patch
>
>
> Both are used to check that the mandatory resources were not redefined.  Only 
> one check is needed.  I would recommend dropping {{DISALLOWED_RESOURCES}}.
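
To make the recommendation concrete, here is a minimal, self-contained sketch
of what a single consolidated check could look like; the class and method
names are assumptions for illustration, not the actual ResourceUtils code.

{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative sketch: one pass over the mandatory resource names suffices to
// reject redefinitions, so a separate DISALLOWED_NAMES set becomes redundant.
public final class MandatoryResourceCheck {
  // memory-mb and vcores are always defined by YARN itself.
  private static final Set<String> MANDATORY =
      new HashSet<>(Arrays.asList("memory-mb", "vcores"));

  public static void checkMandatoryResources(Map<String, ?> userDefined) {
    for (String name : MANDATORY) {
      if (userDefined.containsKey(name)) {
        throw new IllegalArgumentException(
            "Cannot redefine mandatory resource: " + name);
      }
    }
  }
}
{code}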






[jira] [Commented] (YARN-6933) ResourceUtils.DISALLOWED_NAMES and ResourceUtils.checkMandatoryResources() are duplicating work

2017-08-21 Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16136309#comment-16136309
 ] 

Manikandan R commented on YARN-6933:


Rebasing patch.

> ResourceUtils.DISALLOWED_NAMES and ResourceUtils.checkMandatoryResources() 
> are duplicating work
> ---
>
> Key: YARN-6933
> URL: https://issues.apache.org/jira/browse/YARN-6933
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Manikandan R
>  Labels: newbie++
> Attachments: YARN-6933-YARN-3926.001.patch, 
> YARN-6933-YARN-3926.002.patch, YARN-6933-YARN-3926.003.patch, 
> YARN-6933-YARN-3926.004.patch, YARN-6933-YARN-3926.005.patch, 
> YARN-6933-YARN-3926.006.patch
>
>
> Both are used to check that the mandatory resources were not redefined.  Only 
> one check is needed.  I would recommend dropping {{DISALLOWED_RESOURCES}}.






[jira] [Updated] (YARN-7068) Documentation updates for branch2 Timeline Service V2

2017-08-21 Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-7068:
-
Parent Issue: YARN-7055  (was: YARN-7063)

> Documentation updates for branch2 Timeline Service V2
> -
>
> Key: YARN-7068
> URL: https://issues.apache.org/jira/browse/YARN-7068
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>
> The documentation at TimelineServiceV2.md needs to be updated to reflect the 
> updates made since alpha1 on YARN-5355_branch2.
> There are some differences between the trunk-based YARN-5355 branch and the 
> branch2-based YARN-5355_branch2 branch. Filing this jira to ensure the 
> documentation is consistent between these branches. Patches would be based on 
> the updates in YARN-6047 while keeping the branch2 relevancy in context.






[jira] [Updated] (YARN-6523) RM requires large memory in sending out security tokens as part of Node Heartbeat in large cluster

2017-08-21 Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-6523:

Issue Type: Improvement  (was: Bug)

> RM requires large memory in sending out security tokens as part of Node 
> Heartbeat in large cluster
> --
>
> Key: YARN-6523
> URL: https://issues.apache.org/jira/browse/YARN-6523
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Critical
>
> Currently, as part of the heartbeat response, the RM sets all applications' 
> tokens even though not all applications may be active on the node. On top of 
> that, NodeHeartbeatResponsePBImpl converts the tokens for each app into a 
> SystemCredentialsForAppsProto, so for each node and each heartbeat too many 
> SystemCredentialsForAppsProto objects were getting created.
> We hit an OOM while testing with 2000 concurrent apps on a 500-node cluster 
> with 8 GB of RAM configured for the RM.
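
A simplified sketch of the allocation pattern being described: with A apps and
N nodes, each heartbeat round builds on the order of A * N proto objects. The
toProto() helpers and the builder usage below are placeholders, not the actual
NodeHeartbeatResponsePBImpl code.

{code}
// Simplified fragment; toProto() is a hypothetical conversion helper.
static NodeHeartbeatResponseProto buildResponse(
    Map<ApplicationId, ByteBuffer> systemCredentials) {
  NodeHeartbeatResponseProto.Builder builder =
      NodeHeartbeatResponseProto.newBuilder();
  for (Map.Entry<ApplicationId, ByteBuffer> e : systemCredentials.entrySet()) {
    // One fresh proto per app, rebuilt for every node on every heartbeat.
    builder.addSystemCredentialsForApps(
        SystemCredentialsForAppsProto.newBuilder()
            .setAppId(toProto(e.getKey()))
            .setCredentialsForApp(toProto(e.getValue()))
            .build());
  }
  return builder.build();
}
// One possible mitigation (an assumption, not the committed fix): convert the
// credentials to protos once when they change, cache them, and reuse the
// cached protos in every node's response, or send only the tokens of apps
// actually active on that node.
{code}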






[jira] [Commented] (YARN-6523) RM requires large memory in sending out security tokens as part of Node Heartbeat in large cluster

2017-08-21 Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16136281#comment-16136281
 ] 

Naganarasimha G R commented on YARN-6523:
-

Thanks [~rchiang] for taking a look at this. As per the discussion above, this 
is more of a good-to-have improvement than a bug, and it is not high priority, 
so I am planning to get it into the next version. I have therefore removed the 
target version and changed the type to improvement.

> RM requires large memory in sending out security tokens as part of Node 
> Heartbeat in large cluster
> --
>
> Key: YARN-6523
> URL: https://issues.apache.org/jira/browse/YARN-6523
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: RM
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Critical
>
> Currently, as part of the heartbeat response, the RM sets all applications' 
> tokens even though not all applications may be active on the node. On top of 
> that, NodeHeartbeatResponsePBImpl converts the tokens for each app into a 
> SystemCredentialsForAppsProto, so for each node and each heartbeat too many 
> SystemCredentialsForAppsProto objects were getting created.
> We hit an OOM while testing with 2000 concurrent apps on a 500-node cluster 
> with 8 GB of RAM configured for the RM.






[jira] [Updated] (YARN-6523) RM requires large memory in sending out security tokens as part of Node Heartbeat in large cluster

2017-08-21 Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-6523:

Target Version/s: 2.9.0  (was: 2.9.0, 3.0.0-beta1)

> RM requires large memory in sending out security tokens as part of Node 
> Heartbeat in large cluster
> --
>
> Key: YARN-6523
> URL: https://issues.apache.org/jira/browse/YARN-6523
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: RM
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Critical
>
> Currently, as part of the heartbeat response, the RM sets all applications' 
> tokens even though not all applications may be active on the node. On top of 
> that, NodeHeartbeatResponsePBImpl converts the tokens for each app into a 
> SystemCredentialsForAppsProto, so for each node and each heartbeat too many 
> SystemCredentialsForAppsProto objects were getting created.
> We hit an OOM while testing with 2000 concurrent apps on a 500-node cluster 
> with 8 GB of RAM configured for the RM.






[jira] [Updated] (YARN-7067) Optimize resource type info shown in UI

2017-08-21 Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7067:
-
Attachment: YARN-7067.YARN-3926.001.patch

> Optimize resource type info shown in UI
> ---
>
> Key: YARN-7067
> URL: https://issues.apache.org/jira/browse/YARN-7067
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-7067.001.patch, YARN-7067.YARN-3926.001.patch
>
>
> Existing resource type info is shown as:
> {code}
> [name: memory-mb, units: Mi, type: COUNTABLE, value: 1024, minimum 
> allocation: 1024, maximum allocation: 8192, name: vcores, units: , type: 
> COUNTABLE, value: 1, minimum allocation: 1, maximum allocation: 4, name: 
> resource1, units: G, type: COUNTABLE, value: 0, minimum allocation: 0, 
> maximum allocation: 9223372036854775807]  
> {code}
> We need to optimize this a little bit.
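
As one concrete possibility (purely a sketch, not the attached patch), each
resource type could be rendered on its own line. The ResourceTypeLine holder
below is an illustrative stand-in for YARN's actual resource type objects.

{code}
import java.util.List;

// Hypothetical formatter: one resource type per line instead of the flat
// bracket list shown above.
public final class ResourceTypeFormatter {
  public static final class ResourceTypeLine {
    final String name, units, type;
    final long value, minAllocation, maxAllocation;
    public ResourceTypeLine(String name, String units, String type,
        long value, long minAllocation, long maxAllocation) {
      this.name = name; this.units = units; this.type = type;
      this.value = value; this.minAllocation = minAllocation;
      this.maxAllocation = maxAllocation;
    }
  }

  // e.g. "memory-mb (Mi, COUNTABLE): value=1024, min=1024, max=8192"
  public static String format(List<ResourceTypeLine> types) {
    StringBuilder sb = new StringBuilder();
    for (ResourceTypeLine t : types) {
      sb.append(String.format("%s (%s, %s): value=%d, min=%d, max=%d%n",
          t.name, t.units, t.type, t.value, t.minAllocation, t.maxAllocation));
    }
    return sb.toString();
  }
}
{code}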






[jira] [Commented] (YARN-7009) TestNMClient.testNMClientNoCleanupOnStop is flaky by design

2017-08-21 Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16136227#comment-16136227
 ] 

Hadoop QA commented on YARN-7009:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
49s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 107 unchanged - 4 fixed = 107 total (was 111) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
37s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
54s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 
41s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7009 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883029/YARN-7009.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5ab49c7c436b 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b6bfb2f |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/17052/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
|  Test Results | 

[jira] [Updated] (YARN-7068) Documentation updates for branch2 Timeline Service V2

2017-08-21 Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-7068:
-
Issue Type: Sub-task  (was: Bug)
Parent: YARN-7063

> Documentation updates for branch2 Timeline Service V2
> -
>
> Key: YARN-7068
> URL: https://issues.apache.org/jira/browse/YARN-7068
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>
> The documentation at TimelineServiceV2.md needs to be updated to reflect the 
> updates made since alpha1 on YARN-5355_branch2.
> There are some differences between the trunk-based YARN-5355 branch and the 
> branch2-based YARN-5355_branch2 branch. Filing this jira to ensure the 
> documentation is consistent between these branches. Patches would be based on 
> the updates in YARN-6047 while keeping the branch2 relevancy in context.






[jira] [Created] (YARN-7068) Documentation updates for branch2 Timeline Service V2

2017-08-21 Vrushali C (JIRA)
Vrushali C created YARN-7068:


 Summary: Documentation updates for branch2 Timeline Service V2
 Key: YARN-7068
 URL: https://issues.apache.org/jira/browse/YARN-7068
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Vrushali C



The documentation at TimelineServiceV2.md needs to be updated to reflect the 
updates made since alpha1 on YARN-5355_branch2.

There are some differences between the trunk-based YARN-5355 branch and the 
branch2-based YARN-5355_branch2 branch. Filing this jira to ensure the 
documentation is consistent between these branches. Patches would be based on 
the updates in YARN-6047 while keeping the branch2 relevancy in context.






[jira] [Commented] (YARN-6047) Documentation updates for TimelineService v2

2017-08-21 Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16136219#comment-16136219
 ] 

Vrushali C commented on YARN-6047:
--

Patches 005 and 004 do not apply cleanly to YARN-5355_branch2. Figuring out 
what the difference is.

> Documentation updates for TimelineService v2
> 
>
> Key: YARN-6047
> URL: https://issues.apache.org/jira/browse/YARN-6047
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation, timelineserver
>Reporter: Varun Saxena
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
> Fix For: YARN-5355
>
> Attachments: TimelineServiceV2.html, YARN-6047-YARN-5355.0005.patch, 
> YARN-6047-YARN-5355.001.patch, YARN-6047-YARN-5355.002.patch, 
> YARN-6047-YARN-5355.003.patch, YARN-6047-YARN-5355.004.patch
>
>







[jira] [Commented] (YARN-6047) Documentation updates for TimelineService v2

2017-08-21 Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16136215#comment-16136215
 ] 

Vrushali C commented on YARN-6047:
--

Committed to YARN-5355 as part of 
https://github.com/apache/hadoop/commit/2288f5e6d022f50b7ad1179d3a8208e57bf67d39

> Documentation updates for TimelineService v2
> 
>
> Key: YARN-6047
> URL: https://issues.apache.org/jira/browse/YARN-6047
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation, timelineserver
>Reporter: Varun Saxena
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
> Fix For: YARN-5355
>
> Attachments: TimelineServiceV2.html, YARN-6047-YARN-5355.0005.patch, 
> YARN-6047-YARN-5355.001.patch, YARN-6047-YARN-5355.002.patch, 
> YARN-6047-YARN-5355.003.patch, YARN-6047-YARN-5355.004.patch
>
>







[jira] [Commented] (YARN-6047) Documentation updates for TimelineService v2

2017-08-21 Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16136209#comment-16136209
 ] 

Vrushali C commented on YARN-6047:
--

Committing patch 005 now 

> Documentation updates for TimelineService v2
> 
>
> Key: YARN-6047
> URL: https://issues.apache.org/jira/browse/YARN-6047
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation, timelineserver
>Reporter: Varun Saxena
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
> Fix For: YARN-5355
>
> Attachments: TimelineServiceV2.html, YARN-6047-YARN-5355.0005.patch, 
> YARN-6047-YARN-5355.001.patch, YARN-6047-YARN-5355.002.patch, 
> YARN-6047-YARN-5355.003.patch, YARN-6047-YARN-5355.004.patch
>
>







[jira] [Commented] (YARN-3661) Basic Federation UI

2017-08-21 Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16136197#comment-16136197
 ] 

Hadoop QA commented on YARN-3661:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 10s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router: 
The patch generated 8 new + 0 unchanged - 0 fixed = 8 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
58s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-3661 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883032/YARN-3661-002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9644f74df369 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b6bfb2f |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/17053/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17053/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17053/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Basic Federation UI 
> 
>
> Key: YARN-3661
> URL: https://issues.apache.org/jira/browse/YARN-3661
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, 

[jira] [Commented] (YARN-7053) Move curator transaction support to ZKCuratorManager

2017-08-21 Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16136190#comment-16136190
 ] 

Hadoop QA commented on YARN-7053:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 13s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m 42s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
|   | hadoop.security.TestKDiag |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7053 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883003/YARN-7053.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6bfcc1ed3de3 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b6bfb2f |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17046/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit | 

[jira] [Commented] (YARN-7067) Optimize resource type info shown in UI

2017-08-21 Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16136187#comment-16136187
 ] 

Hadoop QA commented on YARN-7067:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} YARN-7067 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7067 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883031/YARN-7067.001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17054/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Optimize resource type info shown in UI
> ---
>
> Key: YARN-7067
> URL: https://issues.apache.org/jira/browse/YARN-7067
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-7067.001.patch
>
>
> Existing resource type info is shown as:
> {code}
> [name: memory-mb, units: Mi, type: COUNTABLE, value: 1024, minimum 
> allocation: 1024, maximum allocation: 8192, name: vcores, units: , type: 
> COUNTABLE, value: 1, minimum allocation: 1, maximum allocation: 4, name: 
> resource1, units: G, type: COUNTABLE, value: 0, minimum allocation: 0, 
> maximum allocation: 9223372036854775807]  
> {code}
> We need to optimize this a little bit.






[jira] [Commented] (YARN-7066) Add ability to specify volumes to mount for DockerContainerRuntime

2017-08-21 Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16136185#comment-16136185
 ] 

Eric Yang commented on YARN-7066:
-

[~miklos.szeg...@cloudera.com] Correct.  Updated title accordingly.

> Add ability to specify volumes to mount for DockerContainerRuntime
> --
>
> Key: YARN-7066
> URL: https://issues.apache.org/jira/browse/YARN-7066
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn-native-services
>Affects Versions: 3.0.0-beta1
>Reporter: Eric Yang
>
> The Yarnfile describes the environment, docker image, and configuration 
> template for launching docker containers in YARN. It would be nice to have 
> the ability to specify the volumes to mount. This can be used in combination 
> with AMBARI-21748 to mount HDFS as data directories into docker containers.
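
To make the request concrete, here is a hypothetical sketch of how a client
might declare mounts through the container environment when building a launch
context. The DOCKER_MOUNTS variable name and the src:dst:mode format are
assumptions for illustration, since this jira is asking for exactly this
capability.

{code}
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;

// Hypothetical sketch of the requested capability, not an existing API.
public final class DockerMountExample {
  public static void addDockerMounts(ContainerLaunchContext ctx) {
    Map<String, String> env = new HashMap<>(ctx.getEnvironment());
    env.put("YARN_CONTAINER_RUNTIME_TYPE", "docker");
    env.put("YARN_CONTAINER_RUNTIME_DOCKER_IMAGE", "centos:7");
    // Requested feature: mount an HDFS-backed data directory into the
    // container; the variable name and format here are assumptions.
    env.put("YARN_CONTAINER_RUNTIME_DOCKER_MOUNTS", "/hdfs-data/app1:/data:rw");
    ctx.setEnvironment(env);
  }
}
{code}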






[jira] [Comment Edited] (YARN-7066) Add ability to specify volumes to mount for DockerContainerRuntime

2017-08-21 Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16136185#comment-16136185
 ] 

Eric Yang edited comment on YARN-7066 at 8/22/17 2:47 AM:
--

[~miklos.szeg...@cloudera.com] Correct.  Updated title accordingly.  Thanks


was (Author: eyang):
[~miklos.szeg...@cloudera.com] Correct.  Updated title accordingly.

> Add ability to specify volumes to mount for DockerContainerRuntime
> --
>
> Key: YARN-7066
> URL: https://issues.apache.org/jira/browse/YARN-7066
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn-native-services
>Affects Versions: 3.0.0-beta1
>Reporter: Eric Yang
>
> The Yarnfile describes the environment, docker image, and configuration 
> template for launching docker containers in YARN. It would be nice to have 
> the ability to specify the volumes to mount. This can be used in combination 
> with AMBARI-21748 to mount HDFS as data directories into docker containers.






[jira] [Updated] (YARN-7066) Add ability to specify volumes to mount for DockerContainerRuntime

2017-08-21 Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7066:

Summary: Add ability to specify volumes to mount for DockerContainerRuntime 
 (was: Add ability to specify volumes to mount for DockerContainerExecutor)

> Add ability to specify volumes to mount for DockerContainerRuntime
> --
>
> Key: YARN-7066
> URL: https://issues.apache.org/jira/browse/YARN-7066
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn-native-services
>Affects Versions: 3.0.0-beta1
>Reporter: Eric Yang
>
> The Yarnfile describes the environment, docker image, and configuration 
> template for launching docker containers in YARN. It would be nice to have 
> the ability to specify the volumes to mount. This can be used in combination 
> with AMBARI-21748 to mount HDFS as data directories into docker containers.






[jira] [Commented] (YARN-7050) Post cleanup after YARN-6903

2017-08-21 Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16136159#comment-16136159
 ] 

Hadoop QA commented on YARN-7050:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 118 new or modified 
test files. {color} |
|| || || || {color:brown} yarn-native-services Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
59s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
59s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
22s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
28s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} yarn-native-services passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
49s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 in yarn-native-services has 7 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-yarn-slider in yarn-native-services failed. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-yarn-slider-core in yarn-native-services 
failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  5m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
27s{color} | {color:green} hadoop-yarn-project_hadoop-yarn generated 0 new + 
109 unchanged - 27 fixed = 109 total (was 136) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 11s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 712 new + 418 unchanged - 3218 fixed = 1130 total (was 3636) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} pylint {color} | {color:green}  0m  
1s{color} | {color:green} The patch generated 0 new + 0 unchanged - 110 fixed = 
0 total (was 110) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 84 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 generated 0 new + 2 unchanged - 5 fixed = 2 total (was 7) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
24s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:red}-1{color} | 

[jira] [Commented] (YARN-6911) Graph application-level resource utilization in Web UI v2

2017-08-21 Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16136155#comment-16136155
 ] 

Hadoop QA commented on YARN-6911:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  1m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6911 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883022/YARN-6911.001.patch |
| Optional Tests |  asflicense  |
| uname | Linux 2fa83fa43911 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b6bfb2f |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17050/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Graph application-level resource utilization in Web UI v2
> -
>
> Key: YARN-6911
> URL: https://issues.apache.org/jira/browse/YARN-6911
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn-ui-v2
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
> Attachments: Resource Utilization Graph Mock Up.png, 
> YARN-6911.001.patch
>
>
> It would be useful to have a visualization of the resource utilization 
> (memory, CPU, etc.) per application using the ATSv2 time series data. A rough 
> mock-up is attached.






[jira] [Commented] (YARN-7064) Use cgroup to get container resource utilization

2017-08-21 Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16136151#comment-16136151
 ] 

Hadoop QA commented on YARN-7064:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
40s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 0 new + 2 unchanged - 2 fixed = 2 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
46s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7064 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883009/YARN-7064.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 098ab3376d2b 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b6bfb2f |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/17049/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17049/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17049/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Use cgroup to get container resource utilization
> 
>
> Key: YARN-7064
> 

[jira] [Commented] (YARN-7049) FSAppAttempt preemption related fields have confusing names

2017-08-21 Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16136138#comment-16136138
 ] 

Hadoop QA commented on YARN-7049:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 31s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7049 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883005/YARN-7049.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8cd969074bc1 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b6bfb2f |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17047/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17047/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17047/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> FSAppAttempt preemption related fields have confusing names
> ---
>
> 

[jira] [Updated] (YARN-3661) Basic Federation UI

2017-08-21 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated YARN-3661:
--
Attachment: YARN-3661-002.patch

> Basic Federation UI 
> 
>
> Key: YARN-3661
> URL: https://issues.apache.org/jira/browse/YARN-3661
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Íñigo Goiri
> Attachments: YARN-3661-000.patch, YARN-3661-001.patch, 
> YARN-3661-002.patch
>
>
> The UIs provided by each RM provide a correct "local" view of what is 
> running in its sub-cluster. In the context of federation, we need new 
> UIs that can track load, jobs, and users across sub-clusters.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7067) Optimize resource type info shown in UI

2017-08-21 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7067:
-
Attachment: YARN-7067.001.patch

Attached ver.1 patch; now the format looks like:
{code}

[jira] [Updated] (YARN-7009) TestNMClient.testNMClientNoCleanupOnStop is flaky by design

2017-08-21 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-7009:
-
Attachment: YARN-7009.002.patch

> TestNMClient.testNMClientNoCleanupOnStop is flaky by design
> ---
>
> Key: YARN-7009
> URL: https://issues.apache.org/jira/browse/YARN-7009
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7009.000.patch, YARN-7009.001.patch, 
> YARN-7009.002.patch
>
>
> The sleeps that wait for a transition to reinit and then back to running are 
> not long enough; they can miss the reinit event.
> {code}
> java.lang.AssertionError: Exception is not expected: 
> org.apache.hadoop.yarn.exceptions.YarnException: Cannot perform RE_INIT on 
> [container_1502735389852_0001_01_01]. Current state is [REINITIALIZING, 
> isReInitializing=true].
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.preReInitializeOrLocalizeCheck(ContainerManagerImpl.java:1772)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.reInitializeContainer(ContainerManagerImpl.java:1697)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.reInitializeContainer(ContainerManagerImpl.java:1668)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ContainerManagementProtocolPBServiceImpl.reInitializeContainer(ContainerManagementProtocolPBServiceImpl.java:214)
>   at 
> org.apache.hadoop.yarn.proto.ContainerManagementProtocol$ContainerManagementProtocolService$2.callBlockingMethod(ContainerManagementProtocol.java:237)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestNMClient.testReInitializeContainer(TestNMClient.java:567)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestNMClient.testContainerManagement(TestNMClient.java:405)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestNMClient.testNMClientNoCleanupOnStop(TestNMClient.java:214)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Cannot perform 
> RE_INIT on [container_1502735389852_0001_01_01]. Current state is 
> [REINITIALIZING, isReInitializing=true].
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.preReInitializeOrLocalizeCheck(ContainerManagerImpl.java:1772)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.reInitializeContainer(ContainerManagerImpl.java:1697)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.reInitializeContainer(ContainerManagerImpl.java:1668)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ContainerManagementProtocolPBServiceImpl.reInitializeContainer(ContainerManagementProtocolPBServiceImpl.java:214)
>   at 
> org.apache.hadoop.yarn.proto.ContainerManagementProtocol$ContainerManagementProtocolService$2.callBlockingMethod(ContainerManagementProtocol.java:237)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>   at java.security.AccessController.doPrivileged(Native 
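The description above pins the flakiness on fixed-length sleeps. As an
illustration only (not the attached patch), a bounded poll avoids racing the
REINITIALIZING window; {{ContainerProbe}} here is a made-up stand-in for
however the test observes container state, and only GenericTestUtils.waitFor
is existing Hadoop test API:

{code:java}
import java.util.concurrent.TimeoutException;

import org.apache.hadoop.test.GenericTestUtils;

public class ReInitWaitSketch {
  /** Hypothetical probe; the real test would ask the NM for container state. */
  interface ContainerProbe {
    boolean isReInitializing();
  }

  static void waitUntilReInitDone(ContainerProbe probe)
      throws TimeoutException, InterruptedException {
    // Poll every 100 ms and fail after 10 s, instead of a fixed sleep that
    // can return while the container is still REINITIALIZING.
    GenericTestUtils.waitFor(() -> !probe.isReInitializing(), 100, 10_000);
  }
}
{code}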

[jira] [Updated] (YARN-7062) yarn job args changed by ContainerLaunch.expandEnvironment(), such as "{{" and "}}"

2017-08-21 Thread Lingfeng Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lingfeng Su updated YARN-7062:
--
Issue Type: Bug  (was: Improvement)

> yarn job args changed by ContainerLaunch.expandEnvironment(), such as "{{" 
> and "}}"
> ---
>
> Key: YARN-7062
> URL: https://issues.apache.org/jira/browse/YARN-7062
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: applications
>Reporter: Lingfeng Su
>Assignee: Lingfeng Su
>
> I passed a JSON string like "{User: Billy, {Age: 18}}" to the main method 
> args when running Spark jobs on YARN.
> It was changed by ContainerLaunch.expandEnvironment() to "{User: Billy, {Age: 
> 18" ("}}" was replaced with "").
> I found the final arg in launch_container.sh of the YARN container as:
>  --arg '{User: Billy, {Age: 18'
> {code:java}
> exec /bin/bash -c 
> "LD_LIBRARY_PATH="$HADOOP_COMMON_HOME/../../../CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/lib/native:$LD_LIBRARY_PATH"
>  $JAVA_HOME/bin/java -server -Xmx1024m -Djava.io.tmpdir=$PWD/tmp 
> -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01
>  org.apache.spark.deploy.yarn.ApplicationMaster --class 
> 'org.apache.spark.examples.sql.hive.HiveFromSpark' --jar 
> file:/opt/spark-submit/spark_sql_test-1.0.jar --arg '{User: Billy, {Age: 18' 
> --properties-file $PWD/__spark_conf__/__spark_conf__.properties 1> 
> /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stdout
>  2> 
> /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stderr"
> {code}
> We could make some improvements to 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.expandEnvironment():
>  
> {code:java}
> @VisibleForTesting
> public static String expandEnvironment(String var,
>     Path containerLogDir) {
>   var = var.replace(ApplicationConstants.LOG_DIR_EXPANSION_VAR,
>       containerLogDir.toString());
>   var = var.replace(ApplicationConstants.CLASS_PATH_SEPARATOR,
>       File.pathSeparator);
>   // replace parameter expansion marker. e.g. {{VAR}} on Windows is replaced
>   // as %VAR% and on Linux replaced as "$VAR"
>   if (Shell.WINDOWS) {
>     var = var.replaceAll("(\\{\\{)|(\\}\\})", "%");
>   } else {
>     var = var.replace(ApplicationConstants.PARAMETER_EXPANSION_LEFT, "$");
>     var = var.replace(ApplicationConstants.PARAMETER_EXPANSION_RIGHT, "");
>   }
>   return var;
> }
> {code}
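For reference, a standalone repro of the non-Windows branch above, using the
literal "{{" / "}}" values behind PARAMETER_EXPANSION_LEFT/RIGHT, applied to
the argument from the report:

{code:java}
public class ExpandDemo {
  public static void main(String[] args) {
    String arg = "{User: Billy, {Age: 18}}";
    // Mirrors the Linux branch of expandEnvironment(): "{{" -> "$", "}}" -> "".
    String expanded = arg.replace("{{", "$").replace("}}", "");
    // Prints {User: Billy, {Age: 18  -- the trailing "}}" has been eaten.
    System.out.println(expanded);
  }
}
{code}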



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7013) merge related work for YARN-3926 branch

2017-08-21 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7013:
-
Attachment: YARN-7013.003.patch

Attached ver.3 patch; it includes the latest changes.

> merge related work for YARN-3926 branch
> ---
>
> Key: YARN-7013
> URL: https://issues.apache.org/jira/browse/YARN-7013
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-7013.001.patch, YARN-7013.002.patch, 
> YARN-7013.003.patch
>
>
> To run Jenkins for the whole branch.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6127) Add support for work preserving NM restart when AMRMProxy is enabled

2017-08-21 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136099#comment-16136099
 ] 

Karthik Kambatla commented on YARN-6127:


Given this breaks rolling upgrades, are we sure we want to include it in 
branch-2 and 2.9.0? 

> Add support for work preserving NM restart when AMRMProxy is enabled
> 
>
> Key: YARN-6127
> URL: https://issues.apache.org/jira/browse/YARN-6127
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: amrmproxy, nodemanager
>Reporter: Subru Krishnan
>Assignee: Botong Huang
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: YARN-6127-branch-2.v1.patch, YARN-6127.v1.patch, 
> YARN-6127.v2.patch, YARN-6127.v3.patch, YARN-6127.v4.patch
>
>
> YARN-1336 added the ability to restart the NM without losing any running 
> containers. In a Federated YARN environment, there's additional state in the 
> {{AMRMProxy}} to allow for spanning across multiple sub-clusters, so we need 
> to enhance {{AMRMProxy}} to support work-preserving restart.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7009) TestNMClient.testNMClientNoCleanupOnStop is flaky by design

2017-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136096#comment-16136096
 ] 

Hadoop QA commented on YARN-7009:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
55s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 53s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 108 unchanged - 4 fixed = 109 total (was 112) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 39s{color} 
| {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m  
4s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 
53s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.api.impl.TestTimelineClientV2Impl |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7009 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882995/YARN-7009.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux bad24e9ad7b9 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b6bfb2f |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| findbugs | 

[jira] [Commented] (YARN-6047) Documentation updates for TimelineService v2

2017-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136094#comment-16136094
 ] 

Hadoop QA commented on YARN-6047:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} YARN-5355 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
46s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} YARN-5355 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6047 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883002/YARN-6047-YARN-5355.0005.patch
 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 34f2cc76ab62 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5355 / d337336 |
| modules | C: hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site 
U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17045/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Documentation updates for TimelineService v2
> 
>
> Key: YARN-6047
> URL: https://issues.apache.org/jira/browse/YARN-6047
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation, timelineserver
>Reporter: Varun Saxena
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
> Fix For: YARN-5355
>
> Attachments: TimelineServiceV2.html, YARN-6047-YARN-5355.0005.patch, 
> YARN-6047-YARN-5355.001.patch, YARN-6047-YARN-5355.002.patch, 
> YARN-6047-YARN-5355.003.patch, YARN-6047-YARN-5355.004.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6911) Graph application-level resource utilization in Web UI v2

2017-08-21 Thread Abdullah Yousufi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdullah Yousufi updated YARN-6911:
---
Attachment: YARN-6911.001.patch

> Graph application-level resource utilization in Web UI v2
> -
>
> Key: YARN-6911
> URL: https://issues.apache.org/jira/browse/YARN-6911
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn-ui-v2
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
> Attachments: Resource Utilization Graph Mock Up.png, 
> YARN-6911.001.patch
>
>
> It would be useful to have a visualization of the resource utilization 
> (memory, CPU, etc.) per application using the ATSv2 time-series data. A rough 
> mock-up is attached.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7067) Optimize resource type info shown in UI

2017-08-21 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-7067:


 Summary: Optimize resource type info shown in UI
 Key: YARN-7067
 URL: https://issues.apache.org/jira/browse/YARN-7067
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Wangda Tan
Assignee: Wangda Tan
Priority: Critical


Existing resource type info is shown as:
{code}
[name: memory-mb, units: Mi, type: COUNTABLE, value: 1024, minimum allocation: 
1024, maximum allocation: 8192, name: vcores, units: , type: COUNTABLE, value: 
1, minimum allocation: 1, maximum allocation: 4, name: resource1, units: G, 
type: COUNTABLE, value: 0, minimum allocation: 0, maximum allocation: 
9223372036854775807]
{code}

We need to optimize this a little bit.
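One possible direction, purely as an illustration (not the attached ver.1
patch): split the flat dump into one line per resource type before rendering,
e.g.:

{code:java}
public class ResourceInfoFormatSketch {
  // Break the flat "[name: ..., name: ...]" dump at each new "name:" entry.
  static String prettify(String flat) {
    return flat.replaceAll("^\\[|\\]$", "")
               .replaceAll(",\\s*name:", "\nname:");
  }

  public static void main(String[] args) {
    System.out.println(prettify(
        "[name: memory-mb, units: Mi, type: COUNTABLE, value: 1024, "
            + "minimum allocation: 1024, maximum allocation: 8192, "
            + "name: vcores, units: , type: COUNTABLE, value: 1, "
            + "minimum allocation: 1, maximum allocation: 4]"));
  }
}
{code}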



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3661) Basic Federation UI

2017-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136079#comment-16136079
 ] 

Hadoop QA commented on YARN-3661:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
11s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 10s{color} | {color:orange} root: The patch generated 23 new + 0 unchanged - 
0 fixed = 23 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-yarn-server-router in the patch failed. {color} 
|
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
54s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 21s{color} 
| {color:red} hadoop-yarn-server-router in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.router.webapp.TestRouterWebServicesREST |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-3661 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882988/YARN-3661-001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 370c624998f6 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b6bfb2f |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/17043/artifact/patchprocess/diff-checkstyle-root.txt
 |
| javadoc | 

[jira] [Commented] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-08-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136024#comment-16136024
 ] 

ASF GitHub Bot commented on YARN-2162:
--

Github user flyrain commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/261#discussion_r134355605
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/ResourceConfiguration.java
 ---
@@ -0,0 +1,71 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.server.resourcemanager.resource.ResourceType;
+
+import java.util.HashMap;
+
+/**
+ * A ResourceConfiguration represents an entity that is used to configure
+ * resources, such as the maximum resources of a queue. It can be a
+ * percentage of cluster resources or an absolute value.
+ */
+@Private
+@Unstable
+public class ResourceConfiguration {
+  private final boolean isPercentage;
+  private final Resource resource;
+  private final HashMap<ResourceType, Double> percentages;
+
+  public boolean isPercentage() {
--- End diff --

We need to check the state of the object sometimes. Without a boolean, we 
would have to rely on the value of the absolute resource and/or the 
percentage, which is not a nice approach; e.g., what if the absolute resource 
is set to null?
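For illustration, the kind of consumer-side branch this enables; names like
getResource() are assumed accessors on the ResourceConfiguration class from
the diff above, not committed API:

{code:java}
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.resource.Resources;

class MaxResourceSketch {
  // fraction would come from the configured percentages; passed in here.
  static Resource effectiveMax(ResourceConfiguration cfg, Resource cluster,
      double fraction) {
    if (cfg.isPercentage()) {
      // Percentage config: derive the cap from the live cluster size.
      return Resources.multiply(cluster, fraction);
    }
    // Absolute config: use the Resource as given.
    return cfg.getResource();
  }
}
{code}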


> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: YARN-2162.001.patch, YARN-2162.002.patch
>
>
> minResources and maxResources in Fair Scheduler configs are expressed in 
> terms of absolute numbers (X mb, Y vcores). 
> As a result, when we expand or shrink our Hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is pretty 
> inconvenient.
> We could circumvent this problem if we could optionally configure these 
> properties as a percentage of cluster capacity. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-08-21 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136023#comment-16136023
 ] 

Wangda Tan commented on YARN-2162:
--

Just curious, since we're working on YARN-5881: what is your plan for handling 
queues in the cluster that have a mixed configuration of absolute values and 
percentages? Our plan is to not support this for now (see the design of 
YARN-5881).

cc: [~sunilg]

> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: YARN-2162.001.patch, YARN-2162.002.patch
>
>
> minResources and maxResources in Fair Scheduler configs are expressed in 
> terms of absolute numbers (X mb, Y vcores). 
> As a result, when we expand or shrink our Hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is pretty 
> inconvenient.
> We could circumvent this problem if we could optionally configure these 
> properties as a percentage of cluster capacity. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7053) Move curator transaction support to ZKCuratorManager

2017-08-21 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136021#comment-16136021
 ] 

Subru Krishnan commented on YARN-7053:
--

[~jhung], thanks for adding the test. Can you also add a *setData* and a 
*delete* in the same test so that we cover all the transaction types?

{quote}Did you want to change the put logic as part of this ticket?{quote}

Yes, if {{ZookeeperFederationStateStore::put}} is straightforward to update. If 
not, I am fine with doing it separately in the future.
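Roughly what the requested coverage could look like against the raw Curator
2.x fluent API (illustrative only; the ZKCuratorManager wrapper under review
may expose this differently, and the path/data below are made up):

{code:java}
import org.apache.curator.framework.CuratorFramework;

class TxnCoverageSketch {
  static void createSetDataDelete(CuratorFramework client) throws Exception {
    // One ZooKeeper multi-op covering create, setData, and delete.
    client.inTransaction()
        .create().forPath("/test/node", "v1".getBytes()).and()
        .setData().forPath("/test/node", "v2".getBytes()).and()
        .delete().forPath("/test/node").and()
        .commit();
  }
}
{code}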

> Move curator transaction support to ZKCuratorManager
> 
>
> Key: YARN-7053
> URL: https://issues.apache.org/jira/browse/YARN-7053
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-7053.001.patch, YARN-7053.002.patch, 
> YARN-7053.003.patch
>
>
> HADOOP-14741 moves Curator functionality to ZKCuratorManager. ZKRMStateStore 
> has some Curator transaction support that can be reused, so this can be 
> moved to ZKCuratorManager as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-08-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136019#comment-16136019
 ] 

ASF GitHub Bot commented on YARN-2162:
--

Github user flyrain commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/261#discussion_r134355210
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/ResourceConfiguration.java
 ---
@@ -0,0 +1,71 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.server.resourcemanager.resource.ResourceType;
+
+import java.util.HashMap;
+
+/**
+ * A ResourceConfiguration represents an entity that is used to configure
+ * resources, such as the maximum resources of a queue. It can be a
+ * percentage of cluster resources or an absolute value.
+ */
+@Private
+@Unstable
+public class ResourceConfiguration {
+  private final boolean isPercentage;
+  private final Resource resource;
+  private final HashMap<ResourceType, Double> percentages;
--- End diff --

Will change to an array in next patch.
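A sketch of that array shape, assuming it stays keyed by the existing
ResourceType enum (illustrative, not the next patch):

{code:java}
import org.apache.hadoop.yarn.server.resourcemanager.resource.ResourceType;

class PercentagesAsArraySketch {
  // Indexed by enum ordinal instead of boxing Doubles into a HashMap.
  final double[] percentages = new double[ResourceType.values().length];
  {
    percentages[ResourceType.MEMORY.ordinal()] = 0.4; // e.g. 40% memory
    percentages[ResourceType.CPU.ordinal()] = 0.3;    // e.g. 30% vcores
  }
}
{code}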


> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: YARN-2162.001.patch, YARN-2162.002.patch
>
>
> minResources and maxResources in Fair Scheduler configs are expressed in 
> terms of absolute numbers (X mb, Y vcores). 
> As a result, when we expand or shrink our Hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is pretty 
> inconvenient.
> We could circumvent this problem if we could optionally configure these 
> properties as a percentage of cluster capacity. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7066) Add ability to specify volumes to mount for DockerContainerExecutor

2017-08-21 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136018#comment-16136018
 ] 

Miklos Szegedi commented on YARN-7066:
--

[~eyang], do you mean DockerContainerRuntime? DockerContainerExecutor was 
deprecated.

> Add ability to specify volumes to mount for DockerContainerExecutor
> ---
>
> Key: YARN-7066
> URL: https://issues.apache.org/jira/browse/YARN-7066
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn-native-services
>Affects Versions: 3.0.0-beta1
>Reporter: Eric Yang
>
> The Yarnfile describes the environment, Docker image, and configuration 
> template for launching Docker containers in YARN. It would be nice to have 
> the ability to specify the volumes to mount. This can be used in combination 
> with AMBARI-21748 to mount HDFS as data directories in Docker containers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6251) Add support for async Container release to prevent deadlock during container updates

2017-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136008#comment-16136008
 ] 

Hadoop QA commented on YARN-6251:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 3 new + 271 unchanged - 2 fixed = 274 total (was 273) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m 10s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestOpportunisticContainerAllocatorAMService 
|
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6251 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882981/YARN-6251.007.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 38b4e63f3110 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b6bfb2f |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/17041/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17041/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17041/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Commented] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-08-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136003#comment-16136003
 ] 

ASF GitHub Bot commented on YARN-2162:
--

Github user flyrain commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/261#discussion_r134353432
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/ResourceConfiguration.java
 ---
@@ -0,0 +1,71 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.server.resourcemanager.resource.ResourceType;
+
+import java.util.HashMap;
+
+/**
+ * A ResourceConfiguration represents an entity that is used to configure
+ * resources, such as the maximum resources of a queue. It can be a
+ * percentage of cluster resources or an absolute value.
+ */
+@Private
+@Unstable
+public class ResourceConfiguration {
+  private final boolean isPercentage;
--- End diff --

I prefer a boolean. We need to check the state of the object sometimes. 
Without a boolean, we would have to rely on the value of the absolute resource 
and/or the percentage, which is not a nice approach; e.g., what if the 
absolute resource is set to null. 


> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: YARN-2162.001.patch, YARN-2162.002.patch
>
>
> minResources and maxResources in Fair Scheduler configs are expressed in 
> terms of absolute numbers (X mb, Y vcores). 
> As a result, when we expand or shrink our Hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is pretty 
> inconvenient.
> We could circumvent this problem if we could optionally configure these 
> properties as a percentage of cluster capacity. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7064) Use cgroup to get container resource utilization

2017-08-21 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-7064:
-
Attachment: YARN-7064.001.patch

> Use cgroup to get container resource utilization
> 
>
> Key: YARN-7064
> URL: https://issues.apache.org/jira/browse/YARN-7064
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7064.000.patch, YARN-7064.001.patch
>
>
> This is an addendum to YARN-6668. What happens is that that JIRA always wants 
> to rebase patches against YARN-1011 instead of trunk.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-08-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135998#comment-16135998
 ] 

ASF GitHub Bot commented on YARN-2162:
--

Github user flyrain commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/261#discussion_r134353082
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/ResourceConfiguration.java
 ---
@@ -0,0 +1,71 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.server.resourcemanager.resource.ResourceType;
+
+import java.util.HashMap;
+
+/**
+ * A ResourceConfiguration represents an entity that is used to configure
+ * resources, such as the maximum resources of a queue. It can be a
+ * percentage of cluster resources or an absolute value.
+ */
+@Private
+@Unstable
+public class ResourceConfiguration {
--- End diff --

Will change to ConfigurableResource in next patch.


> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: YARN-2162.001.patch, YARN-2162.002.patch
>
>
> minResources and maxResources in Fair Scheduler configs are expressed in 
> terms of absolute numbers (X mb, Y vcores). 
> As a result, when we expand or shrink our Hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is pretty 
> inconvenient.
> We could circumvent this problem if we could optionally configure these 
> properties as a percentage of cluster capacity. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-08-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135990#comment-16135990
 ] 

ASF GitHub Bot commented on YARN-2162:
--

Github user flyrain commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/261#discussion_r134351964
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerConfiguration.java
 ---
@@ -287,27 +289,69 @@ public float getReservableNodes() {
    *
    * @throws AllocationConfigurationException
    */
-  public static Resource parseResourceConfigValue(String val)
+  public static ResourceConfiguration parseResourceConfigValue(String val)
       throws AllocationConfigurationException {
+    ResourceConfiguration resourceConfiguration;
     try {
       val = StringUtils.toLowerCase(val);
-      int memory = findResource(val, "mb");
-      int vcores = findResource(val, "vcores");
-      return BuilderUtils.newResource(memory, vcores);
+      if (val.contains("%")) {
+        resourceConfiguration = new ResourceConfiguration(
+            getResourcePercentage(val));
+      } else {
+        int memory = findResource(val, "mb");
+        int vcores = findResource(val, "vcores");
+        resourceConfiguration = new ResourceConfiguration(
+            BuilderUtils.newResource(memory, vcores));
+      }
     } catch (AllocationConfigurationException ex) {
       throw ex;
     } catch (Exception ex) {
       throw new AllocationConfigurationException(
           "Error reading resource config", ex);
     }
+    return resourceConfiguration;
+  }
+
+  private static HashMap<ResourceType, Double> getResourcePercentage(
+      String val) throws AllocationConfigurationException {
+    HashMap<ResourceType, Double> resourcePercentage = new HashMap<>();
+    String[] strings = val.split(",");
+    if (strings.length == 1) {
+      double percentage = findPercentage(strings[0], "");
+      for (ResourceType resourceType : ResourceType.values()) {
+        resourcePercentage.put(resourceType, percentage / 100);
+      }
+    } else {
+      double memPercentage = findPercentage(val, "memory");
+      double vcorePercentage = findPercentage(val, "vcore");
--- End diff --

Will do in next patch.
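
To make the two accepted forms concrete, a hedged usage sketch based only on
the hunk above (an illustration, not part of the patch):

{code}
// Absolute form, matching the pre-patch behavior: fixed memory and vcores.
ResourceConfiguration absolute =
    FairSchedulerConfiguration.parseResourceConfigValue("4096 mb, 4 vcores");
// absolute.isPercentage() == false

// Percentage form: a single value is applied to every ResourceType,
// stored as 0.5 per the division by 100 in getResourcePercentage.
ResourceConfiguration relative =
    FairSchedulerConfiguration.parseResourceConfigValue("50%");
// relative.isPercentage() == true
{code}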


> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: YARN-2162.001.patch, YARN-2162.002.patch
>
>
> minResources and maxResources in fair scheduler configs are expressed in 
> terms of absolute numbers X mb, Y vcores. 
> As a result, when we expand or shrink our hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is pretty 
> inconvenient.
> We can circumvent this problem if we can optionally configure these 
> properties in terms of percentage of cluster capacity. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7050) Post cleanup after YARN-6903

2017-08-21 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7050:
--
Attachment: YARN-7050.yarn-native-services.05.patch

> Post cleanup after YARN-6903
> 
>
> Key: YARN-7050
> URL: https://issues.apache.org/jira/browse/YARN-7050
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-7050.yarn-native-services.01.patch, 
> YARN-7050.yarn-native-services.02.patch, 
> YARN-7050.yarn-native-services.03.patch, 
> YARN-7050.yarn-native-services.04.patch, 
> YARN-7050.yarn-native-services.05.patch
>
>
> This jira tries to remove some old code, and moves dependency classes to the 
> new package, and also some other side changes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-08-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135986#comment-16135986
 ] 

ASF GitHub Bot commented on YARN-2162:
--

Github user flyrain commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/261#discussion_r134350951
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
 ---
@@ -605,13 +606,15 @@ private void loadQueue(String parentName, Element 
element,
 
 queueAcls.put(queueName, acls);
 resAcls.put(queueName, racls);
-        if (maxQueueResources.containsKey(queueName) &&
-            minQueueResources.containsKey(queueName)
-            && !Resources.fitsIn(minQueueResources.get(queueName),
-            maxQueueResources.get(queueName))) {
+        ResourceConfiguration maxResourceConf =
+            maxQueueResources.get(queueName);
+        if (maxResourceConf != null &&
+            !maxResourceConf.isPercentage() &&
--- End diff --

Add the warning in method FSQueue#getMaxShare()
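
A rough sketch of where that warning could live; {{maxResourceConf}},
{{getResource()}}, and {{getMinShare()}} are assumptions for illustration, not
the actual patch:

{code}
// FSQueue (hypothetical): a percentage-based max can only be validated against
// the live cluster resource, hence checking at read time rather than at
// allocation-file load time.
public Resource getMaxShare() {
  Resource max = maxResourceConf.getResource(scheduler.getClusterResource());
  if (!Resources.fitsIn(getMinShare(), max)) {
    LOG.warn("Queue " + getName()
        + ": minResources is larger than maxResources");
  }
  return max;
}
{code}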


> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: YARN-2162.001.patch, YARN-2162.002.patch
>
>
> minResources and maxResources in fair scheduler configs are expressed in 
> terms of absolute numbers X mb, Y vcores. 
> As a result, when we expand or shrink our hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is pretty 
> inconvenient.
> We can circumvent this problem if we can optionally configure these 
> properties in terms of percentage of cluster capacity. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7064) Use cgroup to get container resource utilization

2017-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135983#comment-16135983
 ] 

Hadoop QA commented on YARN-7064:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
50s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 18s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 1 new + 2 unchanged - 2 fixed = 3 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 51s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestCompareResourceCalculators
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7064 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882977/YARN-7064.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1b9ea0db85ca 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b6bfb2f |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/17042/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/17042/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17042/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17042/testReport/ |
| 

[jira] [Updated] (YARN-7049) FSAppAttempt preemption related fields have confusing names

2017-08-21 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-7049:
---
Attachment: YARN-7049.002.patch

The v2 patch fixes the checkstyle issue. No tests, since there is no change in 
behavior. The test failure is a known, unrelated one.

> FSAppAttempt preemption related fields have confusing names
> ---
>
> Key: YARN-7049
> URL: https://issues.apache.org/jira/browse/YARN-7049
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.8.1
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: YARN-7049.001.patch, YARN-7049.002.patch
>
>
> FSAppAttempt fields tracking containers/resources queued for preemption can 
> use better names



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7053) Move curator transaction support to ZKCuratorManager

2017-08-21 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135960#comment-16135960
 ] 

Jonathan Hung commented on YARN-7053:
-

Thanks [~subru] for the review. Added a ZKCuratorManager test.

Regarding ZookeeperFederationStateStore transactions, I didn't see any existing 
transactions in this class that could be refactored, and the only place where 
adding transactions seemed applicable was the {{put(String znode, byte[] data, 
boolean update)}} method. Did you want to change the {{put}} logic as part of 
this ticket?
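
For context, the two-step pattern in question, written with plain Curator 
calls (an illustration only; {{curator}} is an assumed CuratorFramework):

{code}
// put(znode, data, update): overwrite the znode if it exists, else create it.
// The open question above is whether these two steps should become a single
// transaction.
if (update) {
  curator.setData().forPath(znode, data);
} else {
  curator.create().forPath(znode, data);
}
{code}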

> Move curator transaction support to ZKCuratorManager
> 
>
> Key: YARN-7053
> URL: https://issues.apache.org/jira/browse/YARN-7053
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-7053.001.patch, YARN-7053.002.patch, 
> YARN-7053.003.patch
>
>
> HADOOP-14741 moves curator functionality to ZKCuratorManager. ZKRMStateStore 
> has some curator transaction support which can be reused, so this can be 
> moved to ZKCuratorManager as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7066) Add ability to specify volumes to mount for DockerContainerExecutor

2017-08-21 Thread Eric Yang (JIRA)
Eric Yang created YARN-7066:
---

 Summary: Add ability to specify volumes to mount for 
DockerContainerExecutor
 Key: YARN-7066
 URL: https://issues.apache.org/jira/browse/YARN-7066
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: yarn-native-services
Affects Versions: 3.0.0-beta1
Reporter: Eric Yang


A Yarnfile describes the environment, docker image, and configuration template 
for launching docker containers in YARN.  It would be nice to have the ability 
to specify the volumes to mount.  This can be used in combination with 
AMBARI-21748 to mount HDFS as data directories into docker containers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7053) Move curator transaction support to ZKCuratorManager

2017-08-21 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-7053:

Attachment: YARN-7053.003.patch

> Move curator transaction support to ZKCuratorManager
> 
>
> Key: YARN-7053
> URL: https://issues.apache.org/jira/browse/YARN-7053
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-7053.001.patch, YARN-7053.002.patch, 
> YARN-7053.003.patch
>
>
> HADOOP-14741 moves curator functionality to ZKCuratorManager. ZKRMStateStore 
> has some curator transaction support which can be reused, so this can be 
> moved to ZKCuratorManager as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6523) RM requires large memory in sending out security tokens as part of Node Heartbeat in large cluster

2017-08-21 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135951#comment-16135951
 ] 

Ray Chiang commented on YARN-6523:
--

Looks like there hasn't been much movement on this recently.  [~Naganarasimha], 
we're about a month away from beta1.  In case there isn't a second beta, what 
is the likelihood we can get this one done?

> RM requires large memory in sending out security tokens as part of Node 
> Heartbeat in large cluster
> --
>
> Key: YARN-6523
> URL: https://issues.apache.org/jira/browse/YARN-6523
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: RM
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Critical
>
> Currently, as part of the heartbeat response, the RM sets all applications' 
> tokens even though not all applications may be active on the node. On top of 
> that, NodeHeartbeatResponsePBImpl converts the tokens for each app into a 
> SystemCredentialsForAppsProto. Hence, for each node and each heartbeat, too 
> many SystemCredentialsForAppsProto objects get created.
> We hit an OOM while testing 2000 concurrent apps on a 500-node cluster with 
> 8GB of RAM configured for the RM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6736) Consider writing to both ats v1 & v2 from RM for smoother upgrades

2017-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135950#comment-16135950
 ] 

Hadoop QA commented on YARN-6736:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-5355 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
50s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
11s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
49s{color} | {color:green} YARN-5355 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
6s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in 
YARN-5355 has 2 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
8s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in YARN-5355 has 8 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
37s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client in 
YARN-5355 has 2 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
7s{color} | {color:green} YARN-5355 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 18s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 11 new + 337 unchanged - 0 fixed = 348 total (was 337) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
26s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 96 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
32s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
46s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 42m 27s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 31s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
1s{color} | {color:green} hadoop-yarn-applications-distributedshell in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}141m 35s{color} | 
{color:black} 

[jira] [Updated] (YARN-6047) Documentation updates for TimelineService v2

2017-08-21 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6047:
-
Attachment: YARN-6047-YARN-5355.0005.patch


Thanks [~rohithsharma] and [~varun_saxena]. I have updated the patch with a few 
changes and am uploading 005. 


> Documentation updates for TimelineService v2
> 
>
> Key: YARN-6047
> URL: https://issues.apache.org/jira/browse/YARN-6047
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation, timelineserver
>Reporter: Varun Saxena
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
> Fix For: YARN-5355
>
> Attachments: TimelineServiceV2.html, YARN-6047-YARN-5355.0005.patch, 
> YARN-6047-YARN-5355.001.patch, YARN-6047-YARN-5355.002.patch, 
> YARN-6047-YARN-5355.003.patch, YARN-6047-YARN-5355.004.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4227) FairScheduler: RM quits processing expired container from a removed node

2017-08-21 Thread Steven Rand (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135938#comment-16135938
 ] 

Steven Rand commented on YARN-4227:
---

I'm seeing a similar issue on what's roughly branch-2 (CDH 5.11.0), with the 
error being:

{code}
2017-06-27 16:32:39,381 ERROR 
org.apache.hadoop.yarn.YarnUncaughtExceptionHandler: Thread Thread[Preemption 
Timer,5,main] threw an Exception.
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.completedContainer(FairScheduler.java:687)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSPreemptionThread$PreemptContainersTask.run(FSPreemptionThread.java:230)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
{code}

This error, which causes the FSPreemptionThread to die and thereby crashes the 
RM, seems to be correlated with NodeManagers being marked unhealthy due to lack 
of local disk space during large shuffles. I haven't confirmed it, but 
presumably the unhealthy nodes are removed while we're waiting for the lock and 
no longer exist when we call {{releaseContainer}}.
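
For illustration, the kind of guard that would avoid the crash, assuming the 
node really is gone by release time (a sketch, not the committed fix; 
{{getFSSchedulerNode}} is an assumed lookup):

{code}
// Inside FairScheduler#completedContainer (hypothetical): tolerate the node
// having been removed between container selection and release.
FSSchedulerNode node = getFSSchedulerNode(
    rmContainer.getContainer().getNodeId());
if (node == null) {
  LOG.info("Container " + rmContainer.getContainerId()
      + " completed on a node that no longer exists; skipping release");
  return;
}
{code}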

I'm curious as to whether others are seeing this as well on recent versions, in 
which case maybe this is worth reopening?

> FairScheduler: RM quits processing expired container from a removed node
> 
>
> Key: YARN-4227
> URL: https://issues.apache.org/jira/browse/YARN-4227
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.3.0, 2.5.0, 2.7.1
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Critical
> Attachments: YARN-4227.2.patch, YARN-4227.3.patch, YARN-4227.4.patch, 
> YARN-4227.patch
>
>
> Under some circumstances the node is removed before an expired container 
> event is processed causing the RM to exit:
> {code}
> 2015-10-04 21:14:01,063 INFO 
> org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: 
> Expired:container_1436927988321_1307950_01_12 Timed out after 600 secs
> 2015-10-04 21:14:01,063 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_1436927988321_1307950_01_12 Container Transitioned from 
> ACQUIRED to EXPIRED
> 2015-10-04 21:14:01,063 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSSchedulerApp: 
> Completed container: container_1436927988321_1307950_01_12 in state: 
> EXPIRED event:EXPIRE
> 2015-10-04 21:14:01,063 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=system_op   
>OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS  
> APPID=application_1436927988321_1307950 
> CONTAINERID=container_1436927988321_1307950_01_12
> 2015-10-04 21:14:01,063 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type CONTAINER_EXPIRED to the scheduler
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.completedContainer(FairScheduler.java:849)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1273)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:122)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:585)
>   at java.lang.Thread.run(Thread.java:745)
> 2015-10-04 21:14:01,063 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Exiting, bbye..
> {code}
> The stack trace is from 2.3.0 but the same issue has been observed in 2.5.0 
> and 2.6.0 by different customers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2416) InvalidStateTransitonException in ResourceManager if AMLauncher does not receive response for startContainers() call in time

2017-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135935#comment-16135935
 ] 

Hadoop QA commented on YARN-2416:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 197 unchanged - 1 fixed = 197 total (was 198) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m 15s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-2416 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882969/YARN-2416.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 091941edb67d 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b6bfb2f |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17040/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17040/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17040/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> InvalidStateTransitonException in ResourceManager if AMLauncher does not 
> receive response for 

[jira] [Commented] (YARN-3661) Basic Federation UI

2017-08-21 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135928#comment-16135928
 ] 

Giovanni Matteo Fumarola commented on YARN-3661:


Thanks [~elgoiri].
A few additional comments about the edited classes:
* Why do we need the changes in {hadoop-common-project/hadoop-common/pom.xml}?
* In {{Router}} we don't need to create {{getFederationFacade}}, since we can 
use the singleton instance directly everywhere.
* In {{RouterWebServiceUtil}} I have already created {{invokeRMWebService}}; you 
just need to adapt your calls to the new method.

> Basic Federation UI 
> 
>
> Key: YARN-3661
> URL: https://issues.apache.org/jira/browse/YARN-3661
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Íñigo Goiri
> Attachments: YARN-3661-000.patch, YARN-3661-001.patch
>
>
> The UIs provided by each RM, provide a correct "local" view of what is 
> running in a sub-cluster. In the context of federation we need new 
> UIs that can track load, jobs, users across sub-clusters.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-3661) Basic Federation UI

2017-08-21 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135928#comment-16135928
 ] 

Giovanni Matteo Fumarola edited comment on YARN-3661 at 8/21/17 10:21 PM:
--

Thanks [~elgoiri].
A few additional comments about the edited classes:
* Why do we need the changes in {{hadoop-common-project/hadoop-common/pom.xml}}?
* In {{Router}} we don't need to create {{getFederationFacade}}, since we can 
use the singleton instance directly everywhere.
* In {{RouterWebServiceUtil}} I have already created {{invokeRMWebService}}; you 
just need to adapt your calls to the new method.


was (Author: giovanni.fumarola):
Thanks [~elgoiri].
A few additional comments about the edited classes:
* Why do we need the changes in {hadoop-common-project/hadoop-common/pom.xml}?
* In {{Router}} we don't need to create {{getFederationFacade}}, since we can 
use the singleton instance directly everywhere.
* In {{RouterWebServiceUtil}} I have already created {{invokeRMWebService}}; you 
just need to adapt your calls to the new method.

> Basic Federation UI 
> 
>
> Key: YARN-3661
> URL: https://issues.apache.org/jira/browse/YARN-3661
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Íñigo Goiri
> Attachments: YARN-3661-000.patch, YARN-3661-001.patch
>
>
> The UIs provided by each RM, provide a correct "local" view of what is 
> running in a sub-cluster. In the context of federation we need new 
> UIs that can track load, jobs, users across sub-clusters.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6876) Create an abstract log writer for extendability

2017-08-21 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135926#comment-16135926
 ] 

Junping Du commented on YARN-6876:
--

Thanks [~xgong] for uploading the patch. Sorry for coming in late on this. I 
assume we have sorted out all design issues in the umbrella JIRA, so I quickly 
walked through the refactoring patch; I have a few comments:

It sounds like we always add the TFile format, no matter what is set in the 
YARN configuration. I understand we want this for compatibility with older 
versions of log files, especially for rolling upgrades, but it means there is 
no way to get rid of the TFile format in the future, because we bind it in at 
the code level. Wouldn't it be better to make TFile the default format (put its 
value in yarn-default.xml) but allow users to remove it explicitly?

Also, if multiple values are specified in "log-aggregation.file-formats", 
which format takes effect first when writing? I think it should be the first 
non-TFile format, shouldn't it? If so, we should document that somewhere.

It sounds like for each file format we need to add four additional configs in 
yarn-site.xml: log-aggregation.file-formats, 
log-aggregation.file-format.%s.class, log-aggregation.%s.remote-app-log-dir, 
and log-aggregation.%s.remote-app-log-dir-suffix. Can we figure out some way 
to simplify this? At least the last two configurations do not seem necessary. 
(An illustration of these four settings follows below.)
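
A hedged illustration of the four per-format settings listed above; the 
"indexed" format name and its controller class are hypothetical placeholders 
substituted for the %s, not taken from the patch:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Illustration only: "indexed" and org.example.IndexedLogAggregationFileFormat
// are placeholders standing in for a real format name and controller class.
Configuration conf = new YarnConfiguration();
conf.set("log-aggregation.file-formats", "indexed,tfile");
conf.set("log-aggregation.file-format.indexed.class",
    "org.example.IndexedLogAggregationFileFormat");
conf.set("log-aggregation.indexed.remote-app-log-dir", "/app-logs-indexed");
conf.set("log-aggregation.indexed.remote-app-log-dir-suffix", "logs");
{code}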

LogAggregationFileFormat.java actually performs the hands-on actions on log 
files, like write, createDir, checkExists, etc. Maybe we can rename it to 
LogAggregationFileController.java instead? Also, *FormatContext and 
*FormatFactory could be renamed to *ControllerContext and *ControllerFactory.

I noticed we are replacing close or closeStream with closeQuietly. Any special 
reason for this change? Is it related to this refactoring effort?
{noformat}
-IOUtils.closeStream(this.fsDataOStream);
+IOUtils.closeQuietly(this.fsDataOStream);
{noformat}

{code}
+  @VisibleForTesting
+  public void verifyRemoteLogDir(LogAggregationFileFormat fileFormat) {
+fileFormat.verifyAndCreateRemoteLogDir();
+  }
+
+  @VisibleForTesting
+  public void createAppDir(LogAggregationFileFormat fileFormat,
+  String user, ApplicationId appId, UserGroupInformation userUgi) {
+fileFormat.createAppDir(user, appId, userUgi);
+  }
{code}
These methods seem unnecessary. Even for tests, we can get the fileFormat 
(fileController) and use it to do these operations directly.

It sounds like the unit test failures are unrelated. Can you confirm that, and 
check whether we have existing JIRAs tracking all of these failures?

> Create an abstract log writer for extendability
> ---
>
> Key: YARN-6876
> URL: https://issues.apache.org/jira/browse/YARN-6876
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6876-branch-2.001.patch, YARN-6876-trunk.001.patch, 
> YARN-6876-trunk.002.patch, YARN-6876-trunk.003.patch, 
> YARN-6876-trunk.004.patch
>
>
> Currently, TFile log writer is used to aggregate log in YARN. We need to add 
> an abstract layer, and pick up the correct log writer based on the 
> configuration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7010) Federation: routing REST invocations transparently to multiple RMs (part 2 - getApps)

2017-08-21 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135893#comment-16135893
 ] 

Carlo Curino commented on YARN-7010:


Thanks [~giovanni.fumarola] for the patch. A few concerns below.
# The behavior on missing sub-clusters is not very clear. For one, I don't 
think we can use a UAM size of 1 as a discriminator for Federation vs. non-UAM; 
more critically, we should define more precisely what to do in the case of a 
missing sub-cluster (both when it is the home RM of an app and when it isn't).
# {{AppsInfo.addAll(AppsInfo)}} would maybe be better as 
{{AppsInfo.addAll(List<AppInfo>)}}; see the sketch after this list.
# What is the size of the threadpool?
# The {{O(n)}} notation is nice, but specify what n is, or it is not very 
useful.
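
A tiny sketch of the signature suggested in point 2 ({{apps}} is an assumed 
name for the backing list inside {{AppsInfo}}):

{code}
public void addAll(List<AppInfo> appsToAdd) {
  apps.addAll(appsToAdd);
}
{code}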


> Federation: routing REST invocations transparently to multiple RMs (part 2 - 
> getApps)
> -
>
> Key: YARN-7010
> URL: https://issues.apache.org/jira/browse/YARN-7010
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-7010.v0.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7009) TestNMClient.testNMClientNoCleanupOnStop is flaky by design

2017-08-21 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-7009:
-
Attachment: YARN-7009.001.patch

Fixing checkstyle and findbugs.

> TestNMClient.testNMClientNoCleanupOnStop is flaky by design
> ---
>
> Key: YARN-7009
> URL: https://issues.apache.org/jira/browse/YARN-7009
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7009.000.patch, YARN-7009.001.patch
>
>
> The sleeps that wait for a transition to reinit and then back to running are 
> not long enough; they can miss the reinit event.
> {code}
> java.lang.AssertionError: Exception is not expected: 
> org.apache.hadoop.yarn.exceptions.YarnException: Cannot perform RE_INIT on 
> [container_1502735389852_0001_01_01]. Current state is [REINITIALIZING, 
> isReInitializing=true].
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.preReInitializeOrLocalizeCheck(ContainerManagerImpl.java:1772)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.reInitializeContainer(ContainerManagerImpl.java:1697)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.reInitializeContainer(ContainerManagerImpl.java:1668)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ContainerManagementProtocolPBServiceImpl.reInitializeContainer(ContainerManagementProtocolPBServiceImpl.java:214)
>   at 
> org.apache.hadoop.yarn.proto.ContainerManagementProtocol$ContainerManagementProtocolService$2.callBlockingMethod(ContainerManagementProtocol.java:237)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestNMClient.testReInitializeContainer(TestNMClient.java:567)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestNMClient.testContainerManagement(TestNMClient.java:405)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestNMClient.testNMClientNoCleanupOnStop(TestNMClient.java:214)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Cannot perform 
> RE_INIT on [container_1502735389852_0001_01_01]. Current state is 
> [REINITIALIZING, isReInitializing=true].
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.preReInitializeOrLocalizeCheck(ContainerManagerImpl.java:1772)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.reInitializeContainer(ContainerManagerImpl.java:1697)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.reInitializeContainer(ContainerManagerImpl.java:1668)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ContainerManagementProtocolPBServiceImpl.reInitializeContainer(ContainerManagementProtocolPBServiceImpl.java:214)
>   at 
> org.apache.hadoop.yarn.proto.ContainerManagementProtocol$ContainerManagementProtocolService$2.callBlockingMethod(ContainerManagementProtocol.java:237)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>   at 

[jira] [Updated] (YARN-3661) Basic Federation UI

2017-08-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-3661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated YARN-3661:
--
Attachment: YARN-3661-001.patch

Tackled some comments from [~giovanni.fumarola].

> Basic Federation UI 
> 
>
> Key: YARN-3661
> URL: https://issues.apache.org/jira/browse/YARN-3661
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Íñigo Goiri
> Attachments: YARN-3661-000.patch, YARN-3661-001.patch
>
>
> The UIs provided by each RM, provide a correct "local" view of what is 
> running in a sub-cluster. In the context of federation we need new 
> UIs that can track load, jobs, users across sub-clusters.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7065) [RM UI] App status not getting updated in "All application" page

2017-08-21 Thread Yesha Vora (JIRA)
Yesha Vora created YARN-7065:


 Summary: [RM UI] App status not getting updated in "All 
application" page
 Key: YARN-7065
 URL: https://issues.apache.org/jira/browse/YARN-7065
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Yesha Vora


Scenario:
1) Run a long-running Spark application
2) Fail over the RM and NN randomly
3) Validate the app state in YARN

The Spark applications have finished, and the YARN CLI returns the correct 
status of the application.
{code}
[hrt_qa@xxx hadoopqe]$ yarn application -status application_1503203977699_0014
17/08/21 16:56:10 INFO client.AHSProxy: Connecting to Application History 
server at host1 xxx.xx.xx.x:10200
17/08/21 16:56:10 INFO client.RequestHedgingRMFailoverProxyProvider: Looking 
for the active RM in [rm1, rm2]...
17/08/21 16:56:10 INFO client.RequestHedgingRMFailoverProxyProvider: Found 
active RM [rm1]
Application Report : 
Application-Id : application_1503203977699_0014
Application-Name : 
org.apache.spark.sql.execution.datasources.hbase.examples.LRJobForDataSources
Application-Type : SPARK
User : hrt_qa
Queue : default
Application Priority : null
Start-Time : 1503215983532
Finish-Time : 1503250203806
Progress : 0%
State : FAILED
Final-State : FAILED
Tracking-URL : 
https://host1:8090/cluster/app/application_1503203977699_0014
RPC Port : -1
AM Host : N/A
Aggregate Resource Allocation : 174722793 MB-seconds, 170603 
vcore-seconds
Log Aggregation Status : SUCCEEDED
Diagnostics : Application application_1503203977699_0014 failed 20 
times due to AM Container for appattempt_1503203977699_0014_20 exited with  
exitCode: 1
For more detailed output, check the application tracking page: 
https://host1:8090/cluster/app/application_1503203977699_0014 Then click on 
links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_e04_1503203977699_0014_20_01
Exit code: 1
Stack trace: 
org.apache.hadoop.yarn.server.nodemanager.containermanager.runtime.ContainerExecutionException:
 Launch container failed
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DefaultLinuxContainerRuntime.launchContainer(DefaultLinuxContainerRuntime.java:109)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DelegatingLinuxContainerRuntime.launchContainer(DelegatingLinuxContainerRuntime.java:89)
at 
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:392)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:317)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:83)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

Shell output: main : command provided 1
main : run as user is hrt_qa
main : requested yarn user is hrt_qa
Getting exit code file...
Creating script paths...
Writing pid file...
Writing to tmp file 
/grid/0/hadoop/yarn/local/nmPrivate/application_1503203977699_0014/container_e04_1503203977699_0014_20_01/container_e04_1503203977699_0014_20_01.pid.tmp
Writing to cgroup task files...
Creating local dirs...
Launching container...
Getting exit code file...
Creating script paths...


Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
Unmanaged Application : false
Application Node Label Expression : 
AM container Node Label Expression : {code}

However, the RM UI "All Applications" page still shows the application in the 
RUNNING state:
https://host1:8090/cluster
Clicking the application id 
(https://host1:8090/cluster/app/application_1503203977699_0014) redirects to 
the application page, which shows the correct application state, FAILED.

The app status is not being updated on the YARN All Applications page. 




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6251) Add support for async Container release to prevent deadlock during container updates

2017-08-21 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6251:
--
Attachment: (was: YARN-6251.007.patch)

> Add support for async Container release to prevent deadlock during container 
> updates
> 
>
> Key: YARN-6251
> URL: https://issues.apache.org/jira/browse/YARN-6251
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-6251.001.patch, YARN-6251.002.patch, 
> YARN-6251.003.patch, YARN-6251.004.patch, YARN-6251.005.patch, 
> YARN-6251.006.patch, YARN-6251.007.patch
>
>
> Opening to track a locking issue that was uncovered when running a custom SLS 
> AMSimulator.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6251) Add support for async Container release to prevent deadlock during container updates

2017-08-21 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6251:
--
Attachment: YARN-6251.007.patch

Had attached the wrong patch earlier. Removed and re-attached.

> Add support for async Container release to prevent deadlock during container 
> updates
> 
>
> Key: YARN-6251
> URL: https://issues.apache.org/jira/browse/YARN-6251
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-6251.001.patch, YARN-6251.002.patch, 
> YARN-6251.003.patch, YARN-6251.004.patch, YARN-6251.005.patch, 
> YARN-6251.006.patch, YARN-6251.007.patch
>
>
> Opening to track a locking issue that was uncovered when running a custom SLS 
> AMSimulator.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6668) Use cgroup to get container resource utilization

2017-08-21 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135842#comment-16135842
 ] 

Miklos Szegedi commented on YARN-6668:
--

Opened YARN-7064 to continue with this patch.

> Use cgroup to get container resource utilization
> 
>
> Key: YARN-6668
> URL: https://issues.apache.org/jira/browse/YARN-6668
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Miklos Szegedi
> Attachments: YARN-6668.000.patch, YARN-6668.001.patch, 
> YARN-6668.002.patch, YARN-6668.003.patch, YARN-6668.004.patch, 
> YARN-6668.005.patch, YARN-6668.006.patch, YARN-6668.007.patch, 
> YARN-6668.008.patch, YARN-6668.009.patch
>
>
> Container Monitor relies on proc file system to get container resource 
> utilization, which is not as efficient as reading cgroup accounting. We 
> should in NM, when cgroup is enabled, read cgroup stats instead. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6251) Add support for async Container release to prevent deadlock during container updates

2017-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135837#comment-16135837
 ] 

Hadoop QA commented on YARN-6251:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 13s{color} 
| {color:red} YARN-6251 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6251 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882970/YARN-6251.007.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17039/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add support for async Container release to prevent deadlock during container 
> updates
> 
>
> Key: YARN-6251
> URL: https://issues.apache.org/jira/browse/YARN-6251
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-6251.001.patch, YARN-6251.002.patch, 
> YARN-6251.003.patch, YARN-6251.004.patch, YARN-6251.005.patch, 
> YARN-6251.006.patch, YARN-6251.007.patch
>
>
> Opening to track a locking issue that was uncovered when running a custom SLS 
> AMSimulator.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7064) Use cgroup to get container resource utilization

2017-08-21 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-7064:
-
Attachment: YARN-7064.000.patch

[~haibochen], please review this patch.

> Use cgroup to get container resource utilization
> 
>
> Key: YARN-7064
> URL: https://issues.apache.org/jira/browse/YARN-7064
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7064.000.patch
>
>
> This is an addendum to YARN-6668; the problem is that that JIRA always 
> rebases submitted patches against YARN-1011 instead of trunk.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7064) Use cgroup to get container resource utilization

2017-08-21 Thread Miklos Szegedi (JIRA)
Miklos Szegedi created YARN-7064:


 Summary: Use cgroup to get container resource utilization
 Key: YARN-7064
 URL: https://issues.apache.org/jira/browse/YARN-7064
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Miklos Szegedi
Assignee: Miklos Szegedi


This is an addendum to YARN-6668. What happens is that precommit builds on 
that jira always apply patches against YARN-1011 instead of trunk.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6384) Add configuration to set max cpu usage when strict-resource-usage is false with cgroups

2017-08-21 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135811#comment-16135811
 ] 

Miklos Szegedi commented on YARN-6384:
--

Please rebase your patch against the latest trunk.

> Add configuration to set max cpu usage when strict-resource-usage is false 
> with cgroups
> --
>
> Key: YARN-6384
> URL: https://issues.apache.org/jira/browse/YARN-6384
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: dengkai
> Attachments: YARN-6384-0.patch
>
>
> When using cgroups on YARN, if 
> yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage is 
> false, a user may get much more CPU time than expected based on the vcores. 
> There should be an upper limit even when resource usage is not strict, e.g. a 
> percentage by which a user can exceed what the vcores promise. I think this 
> is important in a shared cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6384) Add configuration to set max cpu usage when strict-resource-usage is false with cgroups

2017-08-21 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135808#comment-16135808
 ] 

Miklos Szegedi commented on YARN-6384:
--

Thank you for the patch [~lolee_k], I have a few comments.
{code}
if (strictResourceUsageMode) {
  limits = getOverallLimits(containerCPU);
} else {
  limits = getOverallLimits(containerCPU *
      maxResourceUsagePercentInNonstrictMode);
}
{code}
We need to maintain backward compatibility. Strict resource usage classically 
means that the limit is enforced by a CPU quota instead of CPU shares, so your 
change should follow this rule as well. I would suggest setting your percentage 
to 100% by default and applying it in all cases when strict is set, but only if 
strict is set.
Also, CgroupsLCEResourcesHandler.java is deprecated. Please add your change to 
CGroupsCpuResourceHandlerImpl as well.
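
For illustration, a minimal sketch of the backward-compatible default suggested 
above (the names CpuLimitSketch, maxCpuUsagePercent and effectiveContainerCpu 
are hypothetical, not from the patch): with the percentage defaulting to 100, 
existing behavior is unchanged.
{code:java}
public class CpuLimitSketch {
  // Hypothetical config default: 100% keeps today's behavior unchanged.
  static final int DEFAULT_MAX_CPU_USAGE_PERCENT = 100;

  // Scale the container's CPU allocation by the configured percentage
  // before the cgroup quota is computed from it.
  static int effectiveContainerCpu(int containerCpu, int maxCpuUsagePercent) {
    return Math.max(1, containerCpu * maxCpuUsagePercent / 100);
  }

  public static void main(String[] args) {
    System.out.println(effectiveContainerCpu(4, DEFAULT_MAX_CPU_USAGE_PERCENT)); // 4
    System.out.println(effectiveContainerCpu(4, 150)); // 6, i.e. 50% headroom
  }
}
{code}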


> Add configuration to set max cpu usage when strict-resource-usage is false 
> with cgroups
> --
>
> Key: YARN-6384
> URL: https://issues.apache.org/jira/browse/YARN-6384
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: dengkai
> Attachments: YARN-6384-0.patch
>
>
> When using cgroups on YARN, if 
> yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage is 
> false, a user may get much more CPU time than expected based on the vcores. 
> There should be an upper limit even when resource usage is not strict, e.g. a 
> percentage by which a user can exceed what the vcores promise. I think this 
> is important in a shared cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6668) Use cgroup to get container resource utilization

2017-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135798#comment-16135798
 ] 

Hadoop QA commented on YARN-6668:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} YARN-6668 does not apply to YARN-1011. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6668 |
| GITHUB PR | https://github.com/apache/hadoop/pull/241 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17038/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Use cgroup to get container resource utilization
> 
>
> Key: YARN-6668
> URL: https://issues.apache.org/jira/browse/YARN-6668
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Miklos Szegedi
> Attachments: YARN-6668.000.patch, YARN-6668.001.patch, 
> YARN-6668.002.patch, YARN-6668.003.patch, YARN-6668.004.patch, 
> YARN-6668.005.patch, YARN-6668.006.patch, YARN-6668.007.patch, 
> YARN-6668.008.patch, YARN-6668.009.patch
>
>
> Container Monitor relies on the proc file system to get container resource 
> utilization, which is not as efficient as reading cgroup accounting. When 
> cgroups are enabled, the NM should read cgroup stats instead.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5603) Metrics for Federation StateStore

2017-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135795#comment-16135795
 ] 

Hadoop QA commented on YARN-5603:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
54s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-5603 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882957/YARN-5603.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 271d25f9c007 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 736ceab |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17037/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17037/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Metrics for Federation StateStore
> -
>
> Key: YARN-5603
> URL: https://issues.apache.org/jira/browse/YARN-5603
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Subru Krishnan
>Assignee: Ellen Hui
> Attachments: 0001-Add-Federation-Client-metrics.patch, 
> YARN-5603.001.patch, YARN-5603.002.patch, YARN-5603.003.patch
>
>
> This JIRA proposes addition of metrics for Federation StateStore



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (YARN-4266) Allow whitelisted users to disable user re-mapping/squashing when launching docker containers

2017-08-21 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135750#comment-16135750
 ] 

Eric Yang edited comment on YARN-4266 at 8/21/17 8:40 PM:
--

[~ebadger] I think the idea is correct to use -u=$(id -u $(whoami)):$(id -g 
$(whoami)) to evaluate the running user inside the container.  Could you provide 
an updated patch that fixes the style check?
The proposal by Eric Badger is the right approach to managing the process owner 
inside the container.  However, the proposed change doesn't seem to be related 
to the title of this JIRA.


was (Author: eyang):
[~ebadger] I think the idea is correct to use -u=$(id -u $(whoami)):$(id -g 
$(whoami)) to evaluate the running user inside the container.  Could you provide 
an updated patch that fixes the style check?

> Allow whitelisted users to disable user re-mapping/squashing when launching 
> docker containers
> -
>
> Key: YARN-4266
> URL: https://issues.apache.org/jira/browse/YARN-4266
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: luhuichun
> Attachments: YARN-4266.001.patch, YARN-4266.001.patch, 
> YARN-4266.002.patch, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping.pdf, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v2.pdf, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v3.pdf, 
> YARN-4266-branch-2.8.001.patch
>
>
> Docker provides a mechanism (the --user switch) that enables us to specify 
> the user the container processes should run as. We use this mechanism today 
> when launching docker containers. In non-secure mode, we run the docker 
> container based on 
> `yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user` and in 
> secure mode, as the submitting user. However, this mechanism breaks down with 
> a large number of 'pre-created' images which don't necessarily have the users 
> available within the image. Examples of such images include shared images 
> that need to be used by multiple users. We need a way in which we can allow a 
> pre-defined set of users to run containers based on existing images, without 
> using the --user switch. There are some implications of disabling this user 
> squashing that we'll need to work through: log aggregation, artifact 
> deletion, etc.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6251) Add support for async Container release to prevent deadlock during container updates

2017-08-21 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6251:
--
Attachment: YARN-6251.007.patch

Rebasing the latest patch against trunk.

> Add support for async Container release to prevent deadlock during container 
> updates
> 
>
> Key: YARN-6251
> URL: https://issues.apache.org/jira/browse/YARN-6251
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-6251.001.patch, YARN-6251.002.patch, 
> YARN-6251.003.patch, YARN-6251.004.patch, YARN-6251.005.patch, 
> YARN-6251.006.patch, YARN-6251.007.patch
>
>
> Opening to track a locking issue that was uncovered when running a custom SLS 
> AMSimulator.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2416) InvalidStateTransitonException in ResourceManager if AMLauncher does not receive response for startContainers() call in time

2017-08-21 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated YARN-2416:
--
Attachment: YARN-2416.003.patch

Good catch, [~jlowe]. Updated the patch to include these race conditions as 
well.

> InvalidStateTransitonException in ResourceManager if AMLauncher does not 
> receive response for startContainers() call in time
> 
>
> Key: YARN-2416
> URL: https://issues.apache.org/jira/browse/YARN-2416
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Jian Fang
>Assignee: Jonathan Eagles
>Priority: Critical
> Attachments: YARN-2416.001.patch, YARN-2416.002.patch, 
> YARN-2416.003.patch
>
>
> AMLauncher calls startContainers(allRequests) to launch a container for 
> application master. Normally, the call comes back immediately so that the 
> RMAppAttempt changes its state from ALLOCATED to LAUNCHED. 
> However, we have observed that in some cases the RPC call came back very late 
> even though the AM container was already started. Because the RMAppAttempt was 
> stuck in the ALLOCATED state, once the resource manager received the REGISTERED 
> event from the application master, it threw InvalidStateTransitonException as 
> follows.
> 2014-07-05 08:59:05,021 ERROR 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: 
> Can't handle this event at current state
> org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
> REGISTERED at ALLOCATED
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:652)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:106)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:752)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:733)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
> at java.lang.Thread.run(Thread.java:744)
> For subsequent STATUS_UPDATE and CONTAINER_ALLOCATED events for this job, 
> resource manager kept throwing InvalidStateTransitonException.
> 2014-07-05 08:59:06,152 ERROR 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: 
> Can't handle this event at current state
> org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
> STATUS_UPDATE at ALLOCATED
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:652)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:106)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:752)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:733)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
> at java.lang.Thread.run(Thread.java:744)
> 2014-07-05 08:59:07,779 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_1404549222428_0001_02_02 Container Transitioned from NEW to
>  ALLOCATED
> 2014-07-05 08:59:07,779 ERROR 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: 
> Can't handle this event at current state
> org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
> CONTAINER_ALLOCATED at ALLOCATED
> at 
> 

[jira] [Updated] (YARN-6955) Handle concurrent register AM requests in FederationInterceptor

2017-08-21 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6955:
-
Parent Issue: YARN-2915  (was: YARN-5597)

> Handle concurrent register AM requests in FederationInterceptor
> ---
>
> Key: YARN-6955
> URL: https://issues.apache.org/jira/browse/YARN-6955
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6955.v1.patch, YARN-6955.v2.patch
>
>
> The timeout between the AM and AMRMProxy is shorter than the timeout plus 
> failover between FederationInterceptor (AMRMProxy) and the RM. When the first 
> register thread in FI is blocked because of an RM failover, the AM can time 
> out and resend the register call, leading to two outstanding register calls 
> inside FI. Eventually, when the RM comes back up, one thread succeeds in 
> registering and the other gets an "application already registered" exception. 
> FI should swallow the exception and return success back to the AM in both 
> threads.
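
A minimal sketch of the swallow-and-return pattern described above; 
RegisterGuard and its types are generic stand-ins, not the actual 
FederationInterceptor code:
{code:java}
import java.util.concurrent.Callable;

class RegisterGuard<T> {
  private volatile T lastSuccess;

  // Run the register RPC; if a concurrent register already succeeded,
  // swallow the "already registered" failure and return the cached response.
  T register(Callable<T> registerRpc) throws Exception {
    try {
      lastSuccess = registerRpc.call();
    } catch (IllegalStateException alreadyRegistered) {
      if (lastSuccess == null) {
        throw alreadyRegistered; // neither thread actually registered
      }
    }
    return lastSuccess;
  }
}
{code}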



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7062) yarn job args changed by ContainerLaunch.expandEnvironment(), such as "{{" and "}}"

2017-08-21 Thread Lingfeng Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lingfeng Su updated YARN-7062:
--
Priority: Major  (was: Minor)

> yarn job args changed by ContainerLaunch.expandEnvironment(), such as "{{" 
> and "}}"
> ---
>
> Key: YARN-7062
> URL: https://issues.apache.org/jira/browse/YARN-7062
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: applications
>Reporter: Lingfeng Su
>Assignee: Lingfeng Su
>
> I passed a JSON string like "{User: Billy, {Age: 18}}" to the main method 
> args when running Spark jobs on YARN.
> It was changed by ContainerLaunch.expandEnvironment() to "{User: Billy, {Age: 
> 18" ("}}" replaced with "").
> I found the final arg in launch_container.sh of the YARN container:
>  --arg '{User: Billy, {Age: 18'
> {code:java}
> exec /bin/bash -c 
> "LD_LIBRARY_PATH="$HADOOP_COMMON_HOME/../../../CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/lib/native:$LD_LIBRARY_PATH"
>  $JAVA_HOME/bin/java -server -Xmx1024m -Djava.io.tmpdir=$PWD/tmp 
> -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01
>  org.apache.spark.deploy.yarn.ApplicationMaster --class 
> 'org.apache.spark.examples.sql.hive.HiveFromSpark' --jar 
> file:/opt/spark-submit/spark_sql_test-1.0.jar --arg '{User: Billy, {Age: 18' 
> --properties-file $PWD/__spark_conf__/__spark_conf__.properties 1> 
> /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stdout
>  2> 
> /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stderr"
> {code}
> We could make some improvements to 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.expandEnvironment():
>  
> {code:java}
> @VisibleForTesting
>   public static String expandEnvironment(String var,
>   Path containerLogDir) {
> var = var.replace(ApplicationConstants.LOG_DIR_EXPANSION_VAR,
>   containerLogDir.toString());
> var =  var.replace(ApplicationConstants.CLASS_PATH_SEPARATOR,
>   File.pathSeparator);
> // replace parameter expansion marker. e.g. {{VAR}} on Windows is replaced
> // as %VAR% and on Linux replaced as "$VAR"
> if (Shell.WINDOWS) {
>   var = var.replaceAll("(\\{\\{)|(\\}\\})", "%");
> } else {
>   var = var.replace(ApplicationConstants.PARAMETER_EXPANSION_LEFT, "$");
>   var = var.replace(ApplicationConstants.PARAMETER_EXPANSION_RIGHT, "");
> }
> return var;
>   }
> {code}
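
To make the reported corruption concrete, here is a small standalone sketch 
mirroring the Linux branch of expandEnvironment() above; the literals stand in 
for ApplicationConstants.PARAMETER_EXPANSION_LEFT/RIGHT:
{code:java}
public class ExpandDemo {
  // Mirrors the Linux branch of ContainerLaunch.expandEnvironment().
  static String expand(String var) {
    return var.replace("{{", "$").replace("}}", "");
  }

  public static void main(String[] args) {
    // Intended use: expansion markers around environment variables.
    System.out.println(expand("{{JAVA_HOME}}/bin/java"));
    // prints: $JAVA_HOME/bin/java

    // Side effect reported in this JIRA: "}}" inside ordinary args is stripped.
    System.out.println(expand("{User: Billy, {Age: 18}}"));
    // prints: {User: Billy, {Age: 18
  }
}
{code}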



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4266) Allow whitelisted users to disable user re-mapping/squashing when launching docker containers

2017-08-21 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135750#comment-16135750
 ] 

Eric Yang commented on YARN-4266:
-

[~ebadger] I think the idea is correct to use -u=$(id -u $(whoami)):$(id -g 
$(whoami)) to evaluate the running user inside the container.  Could you provide 
an updated patch that fixes the style check?

> Allow whitelisted users to disable user re-mapping/squashing when launching 
> docker containers
> -
>
> Key: YARN-4266
> URL: https://issues.apache.org/jira/browse/YARN-4266
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: luhuichun
> Attachments: YARN-4266.001.patch, YARN-4266.001.patch, 
> YARN-4266.002.patch, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping.pdf, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v2.pdf, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v3.pdf, 
> YARN-4266-branch-2.8.001.patch
>
>
> Docker provides a mechanism (the --user switch) that enables us to specify 
> the user the container processes should run as. We use this mechanism today 
> when launching docker containers. In non-secure mode, we run the docker 
> container based on 
> `yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user` and in 
> secure mode, as the submitting user. However, this mechanism breaks down with 
> a large number of 'pre-created' images which don't necessarily have the users 
> available within the image. Examples of such images include shared images 
> that need to be used by multiple users. We need a way in which we can allow a 
> pre-defined set of users to run containers based on existing images, without 
> using the --user switch. There are some implications of disabling this user 
> squashing that we'll need to work through: log aggregation, artifact 
> deletion, etc.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7053) Move curator transaction support to ZKCuratorManager

2017-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135747#comment-16135747
 ] 

Hadoop QA commented on YARN-7053:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m  7s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m 13s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}127m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7053 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882929/YARN-7053.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3c722b55dc47 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 913760c |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17034/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit | 

[jira] [Commented] (YARN-2416) InvalidStateTransitonException in ResourceManager if AMLauncher does not receive response for startContainers() call in time

2017-08-21 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135721#comment-16135721
 ] 

Jason Lowe commented on YARN-2416:
--

Thanks for the patch!

In general the patch looks good, but arguably there are some race conditions 
that are still missed.  If we're going to let the AM register before we see the 
launched event, then we could theoretically see the launched event in states 
besides RUNNING as well, such as FINISHING, FAILED, etc., if the AM is very 
short-lived.  The LAUNCHED event needs to be ignored in those states as well.

Otherwise the patch looks good.
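
A small sketch of the guard being asked for, using an enum stand-in rather than 
the actual RMAppAttemptImpl transition table:
{code:java}
import java.util.EnumSet;

public class LaunchedEventGuard {
  enum AttemptState { ALLOCATED, LAUNCHED, RUNNING, FINISHING, FAILED, KILLED }

  // States in which a late LAUNCHED event should be a no-op rather than
  // trigger an InvalidStateTransitonException.
  static final EnumSet<AttemptState> IGNORE_LAUNCHED =
      EnumSet.of(AttemptState.RUNNING, AttemptState.FINISHING,
          AttemptState.FAILED, AttemptState.KILLED);

  static AttemptState onLaunched(AttemptState current) {
    if (current == AttemptState.ALLOCATED) {
      return AttemptState.LAUNCHED;   // the normal transition
    }
    if (IGNORE_LAUNCHED.contains(current)) {
      return current;                 // late event: ignore
    }
    throw new IllegalStateException("Invalid event: LAUNCHED at " + current);
  }

  public static void main(String[] args) {
    System.out.println(onLaunched(AttemptState.ALLOCATED)); // LAUNCHED
    System.out.println(onLaunched(AttemptState.FAILED));    // FAILED (ignored)
  }
}
{code}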

> InvalidStateTransitonException in ResourceManager if AMLauncher does not 
> receive response for startContainers() call in time
> 
>
> Key: YARN-2416
> URL: https://issues.apache.org/jira/browse/YARN-2416
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Jian Fang
>Assignee: Jonathan Eagles
>Priority: Critical
> Attachments: YARN-2416.001.patch, YARN-2416.002.patch
>
>
> AMLauncher calls startContainers(allRequests) to launch a container for 
> application master. Normally, the call comes back immediately so that the 
> RMAppAttempt changes its state from ALLOCATED to LAUNCHED. 
> However, we have observed that in some cases the RPC call came back very late 
> even though the AM container was already started. Because the RMAppAttempt was 
> stuck in the ALLOCATED state, once the resource manager received the REGISTERED 
> event from the application master, it threw InvalidStateTransitonException as 
> follows.
> 2014-07-05 08:59:05,021 ERROR 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: 
> Can't handle this event at current state
> org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
> REGISTERED at ALLOCATED
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:652)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:106)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:752)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:733)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
> at java.lang.Thread.run(Thread.java:744)
> For subsequent STATUS_UPDATE and CONTAINER_ALLOCATED events for this job, 
> resource manager kept throwing InvalidStateTransitonException.
> 2014-07-05 08:59:06,152 ERROR 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: 
> Can't handle this event at current state
> org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
> STATUS_UPDATE at ALLOCATED
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:652)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:106)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:752)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:733)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
> at java.lang.Thread.run(Thread.java:744)
> 2014-07-05 08:59:07,779 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_1404549222428_0001_02_02 Container Transitioned from NEW to
>  ALLOCATED
> 2014-07-05 

[jira] [Updated] (YARN-6668) Use cgroup to get container resource utilization

2017-08-21 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-6668:
-
Attachment: YARN-6668.009.patch

Thank you for the reviews [~haibochen]. I attached a test that is more like a 
functional test, comparing the results of the procfs- and cgroups-based 
resource calculators. In general the CPU numbers match well; we have some 
differences in the memory calculations, though. The checks in the code show the 
details.
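
For reference, reading CPU usage from cgroups amounts to a single file read 
instead of walking the process tree under /proc. A minimal sketch, assuming a 
cgroup v1 mount and a hypothetical container group path:
{code:java}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CgroupCpuReader {
  public static void main(String[] args) throws Exception {
    // cpuacct.usage holds the group's cumulative CPU time in nanoseconds.
    Path usage = Paths.get(
        "/sys/fs/cgroup/cpuacct/hadoop-yarn/container_1/cpuacct.usage");
    long nanos = Long.parseLong(Files.readAllLines(usage).get(0).trim());
    System.out.println("CPU time used: " + (nanos / 1_000_000) + " ms");
  }
}
{code}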

> Use cgroup to get container resource utilization
> 
>
> Key: YARN-6668
> URL: https://issues.apache.org/jira/browse/YARN-6668
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Miklos Szegedi
> Attachments: YARN-6668.000.patch, YARN-6668.001.patch, 
> YARN-6668.002.patch, YARN-6668.003.patch, YARN-6668.004.patch, 
> YARN-6668.005.patch, YARN-6668.006.patch, YARN-6668.007.patch, 
> YARN-6668.008.patch, YARN-6668.009.patch
>
>
> Container Monitor relies on the proc file system to get container resource 
> utilization, which is not as efficient as reading cgroup accounting. When 
> cgroups are enabled, the NM should read cgroup stats instead.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5603) Metrics for Federation StateStore

2017-08-21 Thread Ellen Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ellen Hui updated YARN-5603:

Attachment: YARN-5603.003.patch

One more checkstyle fix.

> Metrics for Federation StateStore
> -
>
> Key: YARN-5603
> URL: https://issues.apache.org/jira/browse/YARN-5603
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Subru Krishnan
>Assignee: Ellen Hui
> Attachments: 0001-Add-Federation-Client-metrics.patch, 
> YARN-5603.001.patch, YARN-5603.002.patch, YARN-5603.003.patch
>
>
> This JIRA proposes addition of metrics for Federation StateStore



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-08-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135652#comment-16135652
 ] 

ASF GitHub Bot commented on YARN-2162:
--

Github user flyrain commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/261#discussion_r134311049
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
 ---
@@ -495,17 +496,17 @@ private void loadQueue(String parentName, Element 
element,
   Element field = (Element) fieldNode;
   if ("minResources".equals(field.getTagName())) {
 String text = ((Text)field.getFirstChild()).getData().trim();
-Resource val =
+ResourceConfiguration val =
--- End diff --

The major usage of min share is sorting schedulables and preemption, and I 
didn't see many cons. On the other side, the need for min share as a 
percentage is limited: it shouldn't be a common case that a job/queue tells 
the RM that, no matter how much resource there is, it needs at least 10% of 
it. Making min share a percentage isn't necessary either, so we could just 
leave it alone.


> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: YARN-2162.001.patch, YARN-2162.002.patch
>
>
> minResources and maxResources in fair scheduler configs are expressed in 
> terms of absolute numbers X mb, Y vcores. 
> As a result, when we expand or shrink our hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is pretty 
> inconvenient.
> We can circumvent this problem if we can optionally configure these 
> properties in terms of percentage of cluster capacity. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6736) Consider writing to both ats v1 & v2 from RM for smoother upgrades

2017-08-21 Thread Aaron Gresch (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Gresch updated YARN-6736:
---
Attachment: YARN-6736-YARN-5355.003.patch

> Consider writing to both ats v1 & v2 from RM for smoother upgrades
> --
>
> Key: YARN-6736
> URL: https://issues.apache.org/jira/browse/YARN-6736
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Aaron Gresch
> Attachments: YARN-6736-YARN-5355.001.patch, 
> YARN-6736-YARN-5355.002.patch, YARN-6736-YARN-5355.003.patch
>
>
> When the cluster is being upgraded from ATS v1 to v2, it may be good to have a 
> brief time period during which the RM writes to both ATS v1 and v2. This will 
> help frameworks like Tez migrate more smoothly.
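
A minimal sketch of the dual-write idea; TimelinePublisher is an illustrative 
interface, not the RM's actual publisher classes:
{code:java}
interface TimelinePublisher {
  void publish(String event);
}

class DualTimelinePublisher implements TimelinePublisher {
  private final TimelinePublisher v1;
  private final TimelinePublisher v2;

  DualTimelinePublisher(TimelinePublisher v1, TimelinePublisher v2) {
    this.v1 = v1;
    this.v2 = v2;
  }

  @Override
  public void publish(String event) {
    v1.publish(event); // keep legacy ATS v1 consumers working
    v2.publish(event); // populate ATS v2 in parallel during the upgrade window
  }
}
{code}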



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5603) Metrics for Federation StateStore

2017-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135645#comment-16135645
 ] 

Hadoop QA commented on YARN-5603:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common: 
The patch generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
51s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-5603 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882947/YARN-5603.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 455b5d6f5a53 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 736ceab |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/17035/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17035/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17035/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Metrics for Federation StateStore
> -
>
> Key: YARN-5603
> URL: https://issues.apache.org/jira/browse/YARN-5603
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: 

[jira] [Commented] (YARN-7053) Move curator transaction support to ZKCuratorManager

2017-08-21 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135621#comment-16135621
 ] 

Subru Krishnan commented on YARN-7053:
--

Thanks [~jhung] for working on this. The patch looks good; a couple of minor 
comments:
* Is there any test to move or add?
* Can you update {{ZookeeperFederationStateStore}} to also use the refactored 
transactions (see the sketch below)?
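
For context, the consolidated Curator transaction support would be used roughly 
as in the following sketch. It is written against the Curator 4.x transaction 
API with a placeholder connect string and paths, so treat the details as 
assumptions rather than the actual ZKCuratorManager code:
{code:java}
import java.util.List;

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.api.transaction.CuratorOp;
import org.apache.curator.framework.api.transaction.CuratorTransactionResult;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class CuratorTxnSketch {
  public static void main(String[] args) throws Exception {
    CuratorFramework client = CuratorFrameworkFactory.newClient(
        "localhost:2181", new ExponentialBackoffRetry(1000, 3));
    client.start();

    // Build the individual operations, then commit them atomically;
    // either both znodes change or neither does.
    CuratorOp create = client.transactionOp().create()
        .forPath("/statestore/app1", "data".getBytes());
    CuratorOp setData = client.transactionOp().setData()
        .forPath("/statestore/version", "2".getBytes()); // assumes the znode exists
    List<CuratorTransactionResult> results =
        client.transaction().forOperations(create, setData);

    for (CuratorTransactionResult result : results) {
      System.out.println(result.getForPath() + " -> " + result.getType());
    }
    client.close();
  }
}
{code}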

> Move curator transaction support to ZKCuratorManager
> 
>
> Key: YARN-7053
> URL: https://issues.apache.org/jira/browse/YARN-7053
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-7053.001.patch, YARN-7053.002.patch
>
>
> HADOOP-14741 moves curator functionality to ZKCuratorManager. ZKRMStateStore 
> has some curator transaction support which can be reused, so this can be 
> moved to ZKCuratorManager as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6979) Add flag to notify all types of container updates to NM via NodeHeartbeatResponse

2017-08-21 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135597#comment-16135597
 ] 

Wangda Tan commented on YARN-6979:
--

Reference: 
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html#Wire_compatibility

> Add flag to notify all types of container updates to NM via 
> NodeHeartbeatResponse
> -
>
> Key: YARN-6979
> URL: https://issues.apache.org/jira/browse/YARN-6979
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: kartheek muthyala
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-6979.001.patch, YARN-6979.002.patch, 
> YARN-6979.003.patch, YARN-6979.addendum-001.patch
>
>
> Currently, only the Container Resource decrease command is sent to the NM via 
> NodeHeartbeat response. This JIRA proposes to add a flag in the RM to allow 
> ALL container updates (increase, decrease, promote and demote) to be 
> initiated via node HB.
> The AM is still free to use the ContainerManagementProtocol's 
> {{updateContainer}} API in cases where, for instance, the Node HB frequency is 
> very low and the AM needs to update the container as soon as possible. In 
> these situations, if the Node HB arrives before the updateContainer API call, 
> the call would error out due to a version mismatch, and the AM is required to 
> handle it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5603) Metrics for Federation StateStore

2017-08-21 Thread Ellen Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ellen Hui updated YARN-5603:

Attachment: YARN-5603.002.patch

Checkstyle fixes

> Metrics for Federation StateStore
> -
>
> Key: YARN-5603
> URL: https://issues.apache.org/jira/browse/YARN-5603
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Subru Krishnan
>Assignee: Ellen Hui
> Attachments: 0001-Add-Federation-Client-metrics.patch, 
> YARN-5603.001.patch, YARN-5603.002.patch
>
>
> This JIRA proposes addition of metrics for Federation StateStore



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6979) Add flag to notify all types of container updates to NM via NodeHeartbeatResponse

2017-08-21 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135590#comment-16135590
 ] 

Wangda Tan commented on YARN-6979:
--

[~asuresh], 

It looks like this patch removed two public APIs: 

{code}
getContainersToDecrease();
addAllContainersToDecrease(...);
{code}
This may cause problems when the RM/NM versions mismatch: the 
containers-to-decrease fields in PB are not updated by a new RM, or won't be 
read by a new NM. This is a compatibility issue to me.

I think we may not need to backport this to 2.8, but I think we should fix it 
in 2.9 and 3.0. 

> Add flag to notify all types of container updates to NM via 
> NodeHeartbeatResponse
> -
>
> Key: YARN-6979
> URL: https://issues.apache.org/jira/browse/YARN-6979
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: kartheek muthyala
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-6979.001.patch, YARN-6979.002.patch, 
> YARN-6979.003.patch, YARN-6979.addendum-001.patch
>
>
> Currently, only the Container Resource decrease command is sent to the NM via 
> NodeHeartbeat response. This JIRA proposes to add a flag in the RM to allow 
> ALL container updates (increase, decrease, promote and demote) to be 
> initiated via node HB.
> The AM is still free to use the ContainerManagementProtocol's 
> {{updateContainer}} API in cases where, for instance, the Node HB frequency is 
> very low and the AM needs to update the container as soon as possible. In 
> these situations, if the Node HB arrives before the updateContainer API call, 
> the call would error out due to a version mismatch, and the AM is required to 
> handle it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6979) Add flag to notify all types of container updates to NM via NodeHeartbeatResponse

2017-08-21 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135582#comment-16135582
 ] 

Arun Suresh edited comment on YARN-6979 at 8/21/17 6:40 PM:


[~leftnoteasy], so with respect to protobuf changes, we haven't removed 
anything, only added new fields. And with regard to the Java API, again, we 
haven't removed any of the old APIs - we've just marked them as deprecated, and 
internally route them through the new API.
Also, we want to target these for 2.9.x and 3.x. Let me know if you require 
them for 2.8.x and we can dig further into compatibility issues before we 
cherry-pick it there.


was (Author: asuresh):
[~leftnoteasy], so with respect to protobuf changes, we haven't removed 
anything. And with regard to the Java API, we haven't removed any of the old 
APIs - we've just marked them as deprecated, and internally route them through 
the new API.
Also, we want to target these for 2.9.x and 3.x. Let me know if you require 
them for 2.8.x and we can dig further into compatibility issues before we 
cherry-pick it there.

> Add flag to notify all types of container updates to NM via 
> NodeHeartbeatResponse
> -
>
> Key: YARN-6979
> URL: https://issues.apache.org/jira/browse/YARN-6979
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: kartheek muthyala
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-6979.001.patch, YARN-6979.002.patch, 
> YARN-6979.003.patch, YARN-6979.addendum-001.patch
>
>
> Currently, only the Container Resource decrease command is sent to the NM via 
> NodeHeartbeat response. This JIRA proposes to add a flag in the RM to allow 
> ALL container updates (increase, decrease, promote and demote) to be 
> initiated via node HB.
> The AM is still free to use the ContainerManagementProtocol's 
> {{updateContainer}} API in cases where, for instance, the Node HB frequency is 
> very low and the AM needs to update the container as soon as possible. In 
> these situations, if the Node HB arrives before the updateContainer API call, 
> the call would error out due to a version mismatch, and the AM is required to 
> handle it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6979) Add flag to notify all types of container updates to NM via NodeHeartbeatResponse

2017-08-21 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135582#comment-16135582
 ] 

Arun Suresh commented on YARN-6979:
---

[~leftnoteasy], so with respect to protobuf changes, we haven't removed 
anything. And with regard to the Java API, we haven't removed any of the old 
APIs - we've just marked them as deprecated, and internally route them through 
the new API.
Also, we want to target these for 2.9.x and 3.x. Let me know if you require 
them for 2.8.x and we can dig further into compatibility issues before we 
cherry-pick it there.
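
The deprecate-and-route pattern described above reads roughly like the sketch 
below; the type and method names are illustrative stand-ins, not the actual 
protocol records:
{code:java}
import java.util.ArrayList;
import java.util.List;

class HeartbeatResponseSketch {
  enum UpdateType { INCREASE, DECREASE, PROMOTE, DEMOTE }

  static class Update {
    final String containerId;
    final UpdateType type;

    Update(String containerId, UpdateType type) {
      this.containerId = containerId;
      this.type = type;
    }
  }

  private final List<Update> containersToUpdate = new ArrayList<>();

  // New, more general API covering all four update types.
  List<Update> getContainersToUpdate() {
    return containersToUpdate;
  }

  // Old accessor kept for compatibility; internally routed through the
  // new API instead of being removed.
  @Deprecated
  List<Update> getContainersToDecrease() {
    List<Update> decreases = new ArrayList<>();
    for (Update update : getContainersToUpdate()) {
      if (update.type == UpdateType.DECREASE) {
        decreases.add(update);
      }
    }
    return decreases;
  }
}
{code}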

> Add flag to notify all types of container updates to NM via 
> NodeHeartbeatResponse
> -
>
> Key: YARN-6979
> URL: https://issues.apache.org/jira/browse/YARN-6979
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: kartheek muthyala
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-6979.001.patch, YARN-6979.002.patch, 
> YARN-6979.003.patch, YARN-6979.addendum-001.patch
>
>
> Currently, only the Container Resource decrease command is sent to the NM via 
> NodeHeartbeat response. This JIRA proposes to add a flag in the RM to allow 
> ALL container updates (increase, decrease, promote and demote) to be 
> initiated via node HB.
> The AM is still free to use the ContainerManagementProtocol's 
> {{updateContainer}} API in cases where, for instance, the Node HB frequency is 
> very low and the AM needs to update the container as soon as possible. In 
> these situations, if the Node HB arrives before the updateContainer API call, 
> the call would error out due to a version mismatch, and the AM is required to 
> handle it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-08-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135565#comment-16135565
 ] 

ASF GitHub Bot commented on YARN-2162:
--

Github user flyrain commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/261#discussion_r134299321
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
 ---
@@ -293,26 +293,15 @@ Resource getMinResources(String queue) {
 
   /**
   * Get the maximum resource allocation for the given queue. If the max is not
-   * set, return the larger of the min and the default max.
+   * set, return the default max.
--- End diff --

I changed it since this is not the right place to do that; I moved this 
behavior to FSQueue#getMaxShare. The reason is that allocation file reloads and 
cluster resource changes are two independent events, so it is not right to 
calculate against the cluster resource while refreshing the allocation file.
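
A sketch of why the percentage has to be resolved lazily (resolveMaxShare is a 
hypothetical helper; the real patch uses ResourceConfiguration and 
FSQueue#getMaxShare): the configured value is kept as-is and evaluated against 
the cluster size at read time.
{code:java}
public class MaxShareSketch {
  // Resolve a configured max share that may be absolute ("4096") or a
  // percentage ("50%") against the cluster size known *now*, not at
  // allocation-file reload time.
  static long resolveMaxShare(String configured, long clusterTotal) {
    String value = configured.trim();
    if (value.endsWith("%")) {
      double pct = Double.parseDouble(value.substring(0, value.length() - 1));
      return (long) (clusterTotal * pct / 100.0);
    }
    return Long.parseLong(value);
  }

  public static void main(String[] args) {
    System.out.println(resolveMaxShare("50%", 100_000));  // 50000
    System.out.println(resolveMaxShare("4096", 100_000)); // 4096
  }
}
{code}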


> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: YARN-2162.001.patch, YARN-2162.002.patch
>
>
> minResources and maxResources in fair scheduler configs are expressed in 
> terms of absolute numbers X mb, Y vcores. 
> As a result, when we expand or shrink our hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is pretty 
> inconvenient.
> We can circumvent this problem if we can optionally configure these 
> properties in terms of percentage of cluster capacity. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


