[jira] [Commented] (YARN-5598) [YARN-3368] Fix create-release to be able to generate bits for the new yarn-ui

2016-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15451258#comment-15451258
 ] 

Hadoop QA commented on YARN-5598:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 3m 38s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 5s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 23s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
48s {color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 0s 
{color} | {color:green} YARN-3368 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
12s {color} | {color:green} The patch generated 0 new + 74 unchanged - 47 fixed 
= 74 total (was 121) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 52s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:f62df43 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826325/YARN-5598-YARN-3368.001.patch
 |
| JIRA Issue | YARN-5598 |
| Optional Tests |  asflicense  shellcheck  shelldocs  mvnsite  unit  |
| uname | Linux 89e43542034a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3368 / 608309d |
| shellcheck | v0.4.4 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12965/testReport/ |
| modules | C:  U:  |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12965/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Fix create-release to be able to generate bits for the new yarn-ui
> --
>
> Key: YARN-5598
> URL: https://issues.apache.org/jira/browse/YARN-5598
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn, yarn-ui-v2
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5598-YARN-3368.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5598) [YARN-3368] Fix create-release to be able to generate bits for the new yarn-ui

2016-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15451230#comment-15451230
 ] 

Hadoop QA commented on YARN-5598:
-

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-YARN-Build/12965/console in case of 
problems.


> [YARN-3368] Fix create-release to be able to generate bits for the new yarn-ui
> --
>
> Key: YARN-5598
> URL: https://issues.apache.org/jira/browse/YARN-5598
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn, yarn-ui-v2
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5598-YARN-3368.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5598) [YARN-3368] Fix create-release to be able to generate bits for the new yarn-ui

2016-08-30 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15451160#comment-15451160
 ] 

Wangda Tan commented on YARN-5598:
--

Note: this patch removed the stale file create-release.sh.

> [YARN-3368] Fix create-release to be able to generate bits for the new yarn-ui
> --
>
> Key: YARN-5598
> URL: https://issues.apache.org/jira/browse/YARN-5598
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn, yarn-ui-v2
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5598-YARN-3368.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5598) [YARN-3368] Fix create-release to be able to generate bits for the new yarn-ui

2016-08-30 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5598:
-
Attachment: YARN-5598-YARN-3368.001.patch

Attached ver.1 patch for review; tested it locally following 
https://wiki.apache.org/hadoop/HowToRelease:

bq. ./dev-support/bin/create-release --docker --docker-cache

Verified that the generated binary artifact contains the UI dependencies, and 
also tried deploying it locally.

> [YARN-3368] Fix create-release to be able to generate bits for the new yarn-ui
> --
>
> Key: YARN-5598
> URL: https://issues.apache.org/jira/browse/YARN-5598
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn, yarn-ui-v2
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5598-YARN-3368.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5221) Expose UpdateResourceRequest API to allow AM to request for change in container properties

2016-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15451154#comment-15451154
 ] 

Hadoop QA commented on YARN-5221:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 24m 7s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 29 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 19s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
28s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 57s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 16s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
9s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 52s 
{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
41s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 9m 3s 
{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 29s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 49s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 7m 38s {color} | 
{color:red} root-jdk1.7.0_111 with JDK v1.7.0_111 generated 1 new + 16 
unchanged - 1 fixed = 17 total (was 17) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 38s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 2m 5s 
{color} | {color:red} root: The patch generated 10 new + 1808 unchanged - 76 
fixed = 1818 total (was 1884) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdk1.8.0_101 with JDK 
v1.8.0_101 generated 0 new + 123 unchanged - 2 fixed = 123 total (was 125) 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_101. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed with JDK 
v1.8.0_101. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed with 
JDK v1.8.0_101. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | 

[jira] [Commented] (YARN-5576) Core change to localize resource while container is running

2016-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15451127#comment-15451127
 ] 

Hadoop QA commented on YARN-5576:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 23s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 10 new + 689 unchanged - 14 fixed = 699 total (was 703) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 57s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 58s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826312/YARN-5576.2.patch |
| JIRA Issue | YARN-5576 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a63a43d89c8e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 20ae1fa |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12964/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/12964/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12964/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12964/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Core change to localize resource while 

[jira] [Commented] (YARN-5596) TestDockerContainerRuntime fails on OS X

2016-08-30 Thread Sidharta Seethana (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1545#comment-1545
 ] 

Sidharta Seethana commented on YARN-5596:
-

Thanks, [~templedf]! 

[~vvasudev], could you please take a look? Thanks

> TestDockerContainerRuntime fails on OS X
> 
>
> Key: YARN-5596
> URL: https://issues.apache.org/jira/browse/YARN-5596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, yarn
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
>Priority: Minor
> Attachments: YARN-5596.001.patch, YARN-5596.002.patch
>
>
> /sys/fs/cgroup doesn't exist on Mac OS X, and the tests fail because of this.
> {code}
> Failed tests:
>   TestDockerContainerRuntime.testContainerLaunchWithCustomNetworks:456 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testContainerLaunchWithNetworkingDefaults:401 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testDockerContainerLaunch:297 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
> Tests run: 19, Failures: 3, Errors: 0, Skipped: 0
> {code}
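
A minimal sketch of one way the mismatch could be avoided (an assumption for 
illustration; the attached patches are the actual fix): only include the 
cgroups volume when the host really has /sys/fs/cgroup, so the expected 
docker run arguments match on both Linux and OS X.

{code}
import java.io.File;

public class CgroupsMountSketch {
  public static void main(String[] args) {
    // Probe the host instead of hard-coding the Linux-only cgroups path.
    boolean cgroupsAvailable = new File("/sys/fs/cgroup").isDirectory();
    String volumeArg =
        cgroupsAvailable ? "-v /sys/fs/cgroup:/sys/fs/cgroup:ro " : "";
    System.out.println("docker run ... " + volumeArg + "...");
  }
}
{code}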



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5601) Make the RM epoch base value configurable

2016-08-30 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15451110#comment-15451110
 ] 

Jian He commented on YARN-5601:
---

[~subru], could you please explain what the typical practice is to set this 
epoch number in a federated cluster?

> Make the RM epoch base value configurable
> -
>
> Key: YARN-5601
> URL: https://issues.apache.org/jira/browse/YARN-5601
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-5601-YARN-2915-v1.patch
>
>
> Currently the epoch always starts from zero. This can cause container ids to 
> conflict for an application that concurrently spans multiple RMs under 
> Federation. This JIRA proposes to make the RM epoch base value configurable, 
> which will allow us to avoid conflicts by setting a different value for each 
> RM.
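
A minimal sketch of the idea, assuming a hypothetical property name (the real 
key comes from the attached patch):

{code}
import org.apache.hadoop.conf.Configuration;

public class EpochBaseSketch {
  // Hypothetical key for illustration only; not taken from the patch.
  static final String EPOCH_BASE = "yarn.resourcemanager.epoch.base";

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setLong(EPOCH_BASE, 1_000_000L); // e.g. a disjoint base for RM #1
    long storedEpoch = 0L;                // stand-in for the state-store counter
    // Container ids embed the epoch, so disjoint bases keep ids unique per RM.
    long epoch = conf.getLong(EPOCH_BASE, 0L) + storedEpoch;
    System.out.println("epoch = " + epoch);
  }
}
{code}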



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5576) Core change to localize resource while container is running

2016-08-30 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-5576:
--
Attachment: YARN-5576.2.patch

Uploaded a new patch that addresses the review comments.

> Core change to localize resource while container is running
> ---
>
> Key: YARN-5576
> URL: https://issues.apache.org/jira/browse/YARN-5576
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5576.1.patch, YARN-5576.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5428) Allow for specifying the docker client configuration directory

2016-08-30 Thread Zhankun Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15451008#comment-15451008
 ] 

Zhankun Tang commented on YARN-5428:


[~shaneku...@gmail.com], thanks for the proposal. 

Just one question about 4.2 (Distribute Application Owner Supplied Docker Client 
Configuration with the Application): how can we ensure that the config.json 
localization happens before the Docker image pull in the same localization 
phase (coordinating with YARN-3854)? I guess Marathon and K8s don't need to 
consider this because they assume docker pull will be invoked implicitly.

If I remember correctly, we could in theory prioritize the docker.tar.gz 
download, because all LocalResources are downloaded one by one per 
ContainerLocalizer. But any other ideas on this?

> Allow for specifying the docker client configuration directory
> --
>
> Key: YARN-5428
> URL: https://issues.apache.org/jira/browse/YARN-5428
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-5428.001.patch, YARN-5428.002.patch, 
> YARN-5428.003.patch, YARN-5428.004.patch, 
> YARN-5428Allowforspecifyingthedockerclientconfigurationdirectory.pdf
>
>
> The docker client allows for specifying a configuration directory that 
> contains the docker client's configuration. It is common to store "docker 
> login" credentials in this config to avoid the need to docker login on each 
> cluster member. 
> By default the docker client config is $HOME/.docker/config.json on Linux. 
> However, this does not work with the current container executor user 
> switching, and it may also be desirable to centralize this configuration 
> beyond a single user's home directory.
> Note that the command line arg is for the configuration directory, NOT the 
> configuration file.
> This change will be needed to allow YARN to automatically pull images at 
> localization time or within the container executor.
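
For reference, a sketch of how a launch command could pass the directory to 
the docker CLI ({{--config}} is docker's real global flag; the path and 
variable names here are illustrative assumptions, not from the patch):

{code}
import java.util.Arrays;
import java.util.List;

public class DockerConfigDirSketch {
  public static void main(String[] args) {
    // Assumed, centrally managed client config dir containing config.json.
    String dockerConfigDir = "/etc/hadoop/docker-client";
    List<String> pullCmd = Arrays.asList(
        "docker", "--config", dockerConfigDir, "pull", "library/openjdk:8");
    System.out.println(String.join(" ", pullCmd));
  }
}
{code}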



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5605) Preempt containers (all on one node) to meet the requirement of starved applications

2016-08-30 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5605:
---
Attachment: yarn-5605-1.patch

The attached patch does the things listed in the description and also removes 
the old preemption code.

> Preempt containers (all on one node) to meet the requirement of starved 
> applications
> 
>
> Key: YARN-5605
> URL: https://issues.apache.org/jira/browse/YARN-5605
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: yarn-5605-1.patch
>
>
> Required items:
> # Identify starved applications
> # Identify a node that has enough containers from applications over their 
> fairshare.
> # Preempt those containers



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5605) Preempt containers (all on one node) to meet the requirement of starved applications

2016-08-30 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created YARN-5605:
--

 Summary: Preempt containers (all on one node) to meet the 
requirement of starved applications
 Key: YARN-5605
 URL: https://issues.apache.org/jira/browse/YARN-5605
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla


Required items:
# Identify starved applications
# Identify a node that has enough containers from applications over their 
fairshare (see the sketch after this list).
# Preempt those containers
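
A sketch of what item 2 might look like (an illustration under assumptions, 
not the attached patch; {{preemptableContainersOn}} is a hypothetical helper, 
and the accessor names follow current trunk): find a single node whose 
preemptable containers, together with its free space, cover the starved 
request.

{code}
// Fragment in the style of the FairScheduler code; assumes the surrounding
// scheduler context. Returns the containers to preempt, all on one node.
for (FSSchedulerNode node : nodes) {
  Resource freeable = Resources.clone(node.getUnallocatedResource());
  List<RMContainer> victims = new ArrayList<>();
  for (RMContainer c : preemptableContainersOn(node)) { // hypothetical helper
    if (Resources.fitsIn(request, freeable)) {
      break; // already enough room for the starved request
    }
    victims.add(c);
    Resources.addTo(freeable, c.getContainer().getResource());
  }
  if (Resources.fitsIn(request, freeable)) {
    return victims; // preempting these frees a large-enough slot on one node
  }
}
return Collections.emptyList(); // no single node can satisfy the request
{code}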



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3054) Preempt policy in FairScheduler may cause mapreduce job never finish

2016-08-30 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-3054:
---
Issue Type: Bug  (was: Sub-task)
Parent: (was: YARN-4752)

> Preempt policy in FairScheduler may cause mapreduce job never finish
> 
>
> Key: YARN-3054
> URL: https://issues.apache.org/jira/browse/YARN-3054
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.6.0
>Reporter: Peng Zhang
>  Labels: fs-preemption-bugs
>
> Preemption policy is currently tied to the schedule policy. Using the 
> schedule policy's comparator to find preemption candidates cannot guarantee 
> that some subset of containers is never preempted, so tasks may be preempted 
> repeatedly before they finish and the job cannot make any progress.
> I think preemption in YARN should give the following assurances:
> 1. Mapreduce jobs can get additional resources when others are idle;
> 2. Mapreduce jobs for one user in one queue can still make progress with 
> their min share when others preempt resources back.
> Maybe always preempting the latest app and container would achieve this?
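
A sketch of that last idea (an illustration only, not an adopted fix): 
container ids grow monotonically, so ordering candidates by descending id 
prefers the newest containers and spares long-running tasks that are close to 
finishing.

{code}
// Fragment assuming FSAppAttempt context (getLiveContainers() as in the
// existing code); sorts preemption candidates newest-first by container id.
List<RMContainer> candidates = new ArrayList<>(getLiveContainers());
candidates.sort(Comparator.comparing(RMContainer::getContainerId).reversed());
RMContainer toBePreempted =
    candidates.isEmpty() ? null : candidates.get(0);
{code}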



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3121) FairScheduler preemption metrics

2016-08-30 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-3121:
---
Issue Type: Bug  (was: Sub-task)
Parent: (was: YARN-4752)

> FairScheduler preemption metrics
> 
>
> Key: YARN-3121
> URL: https://issues.apache.org/jira/browse/YARN-3121
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
>  Labels: fs-preemption-bugs
> Attachments: yARN-3121.prelim.patch, yARN-3121.prelim.patch
>
>
> Add FSQueueMetrics for preemption-related information.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2154) FairScheduler: Improve preemption to preempt only those containers that would satisfy the incoming request

2016-08-30 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-2154:
---
Issue Type: Bug  (was: Sub-task)
Parent: (was: YARN-4752)

> FairScheduler: Improve preemption to preempt only those containers that would 
> satisfy the incoming request
> --
>
> Key: YARN-2154
> URL: https://issues.apache.org/jira/browse/YARN-2154
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.4.0
>Reporter: Karthik Kambatla
>Assignee: Arun Suresh
>Priority: Critical
>  Labels: fs-preemption-bugs
> Attachments: YARN-2154.1.patch
>
>
> Today, FairScheduler uses a spray-gun approach to preemption. Instead, it 
> should only preempt resources that would satisfy the incoming request. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3902) Fair scheduler preempts ApplicationMaster

2016-08-30 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-3902:
---
Issue Type: Bug  (was: Sub-task)
Parent: (was: YARN-4752)

> Fair scheduler preempts ApplicationMaster
> -
>
> Key: YARN-3902
> URL: https://issues.apache.org/jira/browse/YARN-3902
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.3.0
> Environment: 3.16.0-0.bpo.4-amd64 #1 SMP Debian 3.16.7-ckt2-1~bpo70+1 
> (2014-12-08) x86_64
>Reporter: He Tianyi
>Assignee: He Tianyi
>  Labels: fs-preemption-bugs
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> YARN-2022 fixed the similar issue for CapacityScheduler.
> However, FairScheduler still suffers from it, preempting the AM while other 
> normal containers are running.
> I think we should take the same approach and avoid preempting the AM unless 
> no container other than the AM is running.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4133) Containers to be preempted leak in FairScheduler preemption logic.

2016-08-30 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-4133:
---
Issue Type: Bug  (was: Sub-task)
Parent: (was: YARN-4752)

> Containers to be preempted leak in FairScheduler preemption logic.
> --
>
> Key: YARN-4133
> URL: https://issues.apache.org/jira/browse/YARN-4133
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: zhihai xu
>Assignee: zhihai xu
>  Labels: fs-preemption-bugs
> Attachments: YARN-4133.000.patch
>
>
> Containers to be preempted leak in the FairScheduler preemption logic. This 
> can cause preemption to be missed because containers are wrongly removed 
> from {{warnedContainers}}. The problem is in {{preemptResources}}:
> There are two issues that can cause containers to be wrongly removed from 
> {{warnedContainers}}:
> First, the container state {{RMContainerState.ACQUIRED}} is missing from the 
> condition check:
> {code}
> (container.getState() == RMContainerState.RUNNING ||
>   container.getState() == RMContainerState.ALLOCATED)
> {code}
> Second, if {{isResourceGreaterThanNone(toPreempt)}} returns false, we 
> shouldn't remove the container from {{warnedContainers}}. We should only 
> remove a container from {{warnedContainers}} if it is not in state 
> {{RMContainerState.RUNNING}}, {{RMContainerState.ALLOCATED}}, or 
> {{RMContainerState.ACQUIRED}}.
> {code}
>   if ((container.getState() == RMContainerState.RUNNING ||
>   container.getState() == RMContainerState.ALLOCATED) &&
>   isResourceGreaterThanNone(toPreempt)) {
> warnOrKillContainer(container);
> Resources.subtractFrom(toPreempt, 
> container.getContainer().getResource());
>   } else {
> warnedIter.remove();
>   }
> {code}
> Also, once containers are wrongly removed from {{warnedContainers}}, they 
> will never be preempted, because they are already in 
> {{FSAppAttempt#preemptionMap}} and {{FSAppAttempt#preemptContainer}} won't 
> return containers that are in {{FSAppAttempt#preemptionMap}}.
> {code}
>   public RMContainer preemptContainer() {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("App " + getName() + " is going to preempt a running " +
>   "container");
> }
> RMContainer toBePreempted = null;
> for (RMContainer container : getLiveContainers()) {
>   if (!getPreemptionContainers().contains(container) &&
>   (toBePreempted == null ||
>   comparator.compare(toBePreempted, container) > 0)) {
> toBePreempted = container;
>   }
> }
> return toBePreempted;
>   }
> {code}
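
A sketch of the loop with both fixes the description calls for (an 
illustration of the suggestion, not the attached patch):

{code}
// Fragment assuming the preemptResources() context quoted above. A container
// is only dropped from warnedContainers once it has left all live states;
// running out of toPreempt alone no longer removes it.
while (warnedIter.hasNext()) {
  RMContainer container = warnedIter.next();
  boolean live = container.getState() == RMContainerState.RUNNING
      || container.getState() == RMContainerState.ALLOCATED
      || container.getState() == RMContainerState.ACQUIRED; // added state
  if (live && isResourceGreaterThanNone(toPreempt)) {
    warnOrKillContainer(container);
    Resources.subtractFrom(toPreempt,
        container.getContainer().getResource());
  } else if (!live) {
    warnedIter.remove(); // remove only containers that are truly gone
  }
}
{code}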



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4120) FSAppAttempt.getResourceUsage() should not take preemptedResource into account

2016-08-30 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-4120:
---
Issue Type: Bug  (was: Sub-task)
Parent: (was: YARN-4752)

> FSAppAttempt.getResourceUsage() should not take preemptedResource into account
> --
>
> Key: YARN-4120
> URL: https://issues.apache.org/jira/browse/YARN-4120
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Reporter: Xianyin Xin
>  Labels: fs-preemption-bugs
>
> When computing resource usage for Schedulables, the following code is 
> involved, {{FSAppAttempt.getResourceUsage}}:
> {code}
> public Resource getResourceUsage() {
>   return Resources.subtract(getCurrentConsumption(), getPreemptedResources());
> }
> {code}
> and this value is aggregated into FSLeafQueues and FSParentQueues. In my 
> opinion, taking {{preemptedResource}} into account here is not reasonable, 
> for two main reasons:
> # it is something in the future, i.e., even though these resources are 
> marked as preempted, they are currently used by the app, and they will be 
> subtracted from {{currentConsumption}} once the preemption finishes. It's 
> not reasonable to account for them ahead of time.
> # there's another problem here; consider the following case:
> {code}
>          root
>         /    \
>     queue1    queue2
>      /    \
> queue1.3  queue1.4
> {code}
> suppose queue1.3 needs resources and can preempt them from queue1.4; the 
> preemption happens in the interior of queue1. But when computing the 
> resource usage of queue1, {{queue1.resourceUsage = its_current_resource_usage 
> - preemption}} according to the current code, which is unfair to queue2 when 
> allocating resources.
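
A sketch of the change the description argues for (the proposed direction, 
not a committed patch): report actual current consumption and subtract 
preempted resources only once preemption has actually completed.

{code}
// In FSAppAttempt: usage reflects what the app really holds right now;
// getCurrentConsumption() already drops resources when preemption finishes.
public Resource getResourceUsage() {
  return getCurrentConsumption();
}
{code}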



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3405) FairScheduler's preemption cannot happen between sibling in some case

2016-08-30 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-3405:
---
Issue Type: Bug  (was: Sub-task)
Parent: (was: YARN-4752)

> FairScheduler's preemption cannot happen between sibling in some case
> -
>
> Key: YARN-3405
> URL: https://issues.apache.org/jira/browse/YARN-3405
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.0
>Reporter: Peng Zhang
>Assignee: Peng Zhang
>Priority: Critical
>  Labels: fs-preemption-bugs
> Attachments: YARN-3405.01.patch, YARN-3405.02.patch
>
>
> Queue hierarchy described as below:
> {noformat}
>           root
>          /    \
>    queue-1    queue-2
>     /     \
> queue-1-1  queue-1-2
> {noformat}
> Assume the cluster resource is 100.
> # queue-1-1 and queue-2 each have an app; each gets 50 usage and 50 
> fairshare.
> # When queue-1-2 becomes active, it causes a new preemption request for a 
> fairshare of 25.
> # When preempting from root, it is possible that the preemption candidate 
> found is queue-2. If so, preemptContainerPreCheck for queue-2 returns false 
> because queue-2 is exactly at its fairshare.
> # So queue-1-2 ends up waiting for queue-1-1 itself to release resources.
> What I expect here is that queue-1-2 preempts from queue-1-1.
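
A sketch of the expected selection (an assumption about the direction of a 
fix, not the attached patches): when a child is starved, look for an 
over-fairshare sibling under the same parent before falling back to the whole 
tree.

{code}
// Fragment assuming FairScheduler context; calc/clusterResource as elsewhere
// in the scheduler. In the example this picks queue-1-1, never queue-2.
FSQueue candidate = null;
for (FSQueue sibling : starved.getParent().getChildQueues()) {
  if (sibling != starved
      && Resources.greaterThan(calc, clusterResource,
          sibling.getResourceUsage(), sibling.getFairShare())) {
    candidate = sibling;
    break;
  }
}
{code}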



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4333) Fair scheduler should support preemption within a leaf queue

2016-08-30 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-4333:
---
Issue Type: Bug  (was: Sub-task)
Parent: (was: YARN-4752)

> Fair scheduler should support preemption within a leaf queue
> 
>
> Key: YARN-4333
> URL: https://issues.apache.org/jira/browse/YARN-4333
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.6.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>  Labels: fs-preemption-bugs
> Attachments: YARN-4333.001.patch, YARN-4333.002.patch, 
> YARN-4333.003.patch
>
>
> Now each app in the fair scheduler is allocated its fairshare; however, the 
> fairshare is not guaranteed even if fairSharePreemption is enabled.
> Consider: 
> 1. When the cluster is idle, we submit app1 to queueA, and it takes the 
> maxResource of queueA.
> 2. Then the cluster becomes busy, but app1 does not release any resource, so 
> queueA's resource usage is over its fairshare.
> 3. Then we submit app2 (maybe with higher priority) to queueA. app2 now has 
> its own fairshare but cannot obtain any resource, since queueA is still over 
> its fairshare and no more resources will be assigned to it. Preemption is 
> not triggered in this case either.
> So we should allow preemption within a queue when an app is starved for its 
> fairshare.
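
A sketch of the within-queue trigger this asks for (shape assumed for 
illustration; {{preemptWithinQueue}} is a hypothetical helper, not from the 
attached patches):

{code}
// Fragment assuming FairScheduler context. Starvation is judged per app
// against the app's own fair share, independent of the queue-level check
// that currently blocks this case.
boolean appStarved = Resources.lessThan(calc, clusterResource,
    app.getResourceUsage(), app.getFairShare());
if (appStarved) {
  preemptWithinQueue(queueA, app); // hypothetical helper
}
{code}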



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3405) FairScheduler's preemption cannot happen between sibling in some case

2016-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450959#comment-15450959
 ] 

Hadoop QA commented on YARN-3405:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 6s {color} 
| {color:red} YARN-3405 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12727554/YARN-3405.02.patch |
| JIRA Issue | YARN-3405 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12961/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> FairScheduler's preemption cannot happen between sibling in some case
> -
>
> Key: YARN-3405
> URL: https://issues.apache.org/jira/browse/YARN-3405
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.7.0
>Reporter: Peng Zhang
>Assignee: Peng Zhang
>Priority: Critical
>  Labels: fs-preemption-bugs
> Attachments: YARN-3405.01.patch, YARN-3405.02.patch
>
>
> Queue hierarchy described as below:
> {noformat}
>           root
>          /    \
>    queue-1    queue-2
>     /     \
> queue-1-1  queue-1-2
> {noformat}
> Assume the cluster resource is 100.
> # queue-1-1 and queue-2 each have an app; each gets 50 usage and 50 
> fairshare.
> # When queue-1-2 becomes active, it causes a new preemption request for a 
> fairshare of 25.
> # When preempting from root, it is possible that the preemption candidate 
> found is queue-2. If so, preemptContainerPreCheck for queue-2 returns false 
> because queue-2 is exactly at its fairshare.
> # So queue-1-2 ends up waiting for queue-1-1 itself to release resources.
> What I expect here is that queue-1-2 preempts from queue-1-1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4133) Containers to be preempted leak in FairScheduler preemption logic.

2016-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450961#comment-15450961
 ] 

Hadoop QA commented on YARN-4133:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} YARN-4133 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12754810/YARN-4133.000.patch |
| JIRA Issue | YARN-4133 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12962/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Containers to be preempted leak in FairScheduler preemption logic.
> --
>
> Key: YARN-4133
> URL: https://issues.apache.org/jira/browse/YARN-4133
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: zhihai xu
>Assignee: zhihai xu
>  Labels: fs-preemption-bugs
> Attachments: YARN-4133.000.patch
>
>
> Containers to be preempted leak in the FairScheduler preemption logic. This 
> can cause preemption to be missed because containers are wrongly removed 
> from {{warnedContainers}}. The problem is in {{preemptResources}}:
> There are two issues that can cause containers to be wrongly removed from 
> {{warnedContainers}}:
> First, the container state {{RMContainerState.ACQUIRED}} is missing from the 
> condition check:
> {code}
> (container.getState() == RMContainerState.RUNNING ||
>   container.getState() == RMContainerState.ALLOCATED)
> {code}
> Second, if {{isResourceGreaterThanNone(toPreempt)}} returns false, we 
> shouldn't remove the container from {{warnedContainers}}. We should only 
> remove a container from {{warnedContainers}} if it is not in state 
> {{RMContainerState.RUNNING}}, {{RMContainerState.ALLOCATED}}, or 
> {{RMContainerState.ACQUIRED}}.
> {code}
>   if ((container.getState() == RMContainerState.RUNNING ||
>   container.getState() == RMContainerState.ALLOCATED) &&
>   isResourceGreaterThanNone(toPreempt)) {
> warnOrKillContainer(container);
> Resources.subtractFrom(toPreempt, 
> container.getContainer().getResource());
>   } else {
> warnedIter.remove();
>   }
> {code}
> Also, once containers are wrongly removed from {{warnedContainers}}, they 
> will never be preempted, because they are already in 
> {{FSAppAttempt#preemptionMap}} and {{FSAppAttempt#preemptContainer}} won't 
> return containers that are in {{FSAppAttempt#preemptionMap}}.
> {code}
>   public RMContainer preemptContainer() {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("App " + getName() + " is going to preempt a running " +
>   "container");
> }
> RMContainer toBePreempted = null;
> for (RMContainer container : getLiveContainers()) {
>   if (!getPreemptionContainers().contains(container) &&
>   (toBePreempted == null ||
>   comparator.compare(toBePreempted, container) > 0)) {
> toBePreempted = container;
>   }
> }
> return toBePreempted;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4333) Fair scheduler should support preemption within a leaf queue

2016-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450962#comment-15450962
 ] 

Hadoop QA commented on YARN-4333:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 6s {color} 
| {color:red} YARN-4333 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12786260/YARN-4333.003.patch |
| JIRA Issue | YARN-4333 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12963/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Fair scheduler should support preemption within a leaf queue
> 
>
> Key: YARN-4333
> URL: https://issues.apache.org/jira/browse/YARN-4333
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.6.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>  Labels: fs-preemption-bugs
> Attachments: YARN-4333.001.patch, YARN-4333.002.patch, 
> YARN-4333.003.patch
>
>
> Now each app in the fair scheduler is allocated its fairshare; however, the 
> fairshare is not guaranteed even if fairSharePreemption is enabled.
> Consider: 
> 1. When the cluster is idle, we submit app1 to queueA, and it takes the 
> maxResource of queueA.
> 2. Then the cluster becomes busy, but app1 does not release any resource, so 
> queueA's resource usage is over its fairshare.
> 3. Then we submit app2 (maybe with higher priority) to queueA. app2 now has 
> its own fairshare but cannot obtain any resource, since queueA is still over 
> its fairshare and no more resources will be assigned to it. Preemption is 
> not triggered in this case either.
> So we should allow preemption within a queue when an app is starved for its 
> fairshare.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-3997) An Application requesting multiple core containers can't preempt running application made of single core containers

2016-08-30 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla resolved YARN-3997.

Resolution: Duplicate

> An Application requesting multiple core containers can't preempt running 
> application made of single core containers
> ---
>
> Key: YARN-3997
> URL: https://issues.apache.org/jira/browse/YARN-3997
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.1
> Environment: Ubuntu 14.04, Hadoop 2.7.1, Physical Machines
>Reporter: Dan Shechter
>Assignee: Arun Suresh
>Priority: Critical
>
> When our cluster is configured with preemption and is fully loaded with an 
> application consuming 1-core containers, it will not kill off these 
> containers when a new application kicks in requesting containers with a size 
> > 1, for example 4-core containers.
> When the "second" application uses 1-core containers as well, preemption 
> proceeds as planned and everything works properly.
> My assumption is that the fair scheduler, while recognizing it needs to kill 
> off some containers to make room for the new application, fails to find a 
> SINGLE container satisfying the request for a 4-core container (since all 
> existing containers are 1-core containers), and isn't "smart" enough to 
> realize it needs to kill off 4 single-core containers (in this case) on a 
> single node for the new application to be able to proceed.
> The exhibited effect is that the new application hangs indefinitely and 
> never gets the resources it requires.
> This can easily be replicated with any YARN application.
> Our "go-to" scenario in this case is running pyspark with 1-core executors 
> (containers) while trying to launch the h2o.ai framework, which INSISTS on 
> having at least 4 cores per container.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3997) An Application requesting multiple core containers can't preempt running application made of single core containers

2016-08-30 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-3997:
---
Issue Type: Bug  (was: Sub-task)
Parent: (was: YARN-4752)

> An Application requesting multiple core containers can't preempt running 
> application made of single core containers
> ---
>
> Key: YARN-3997
> URL: https://issues.apache.org/jira/browse/YARN-3997
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.1
> Environment: Ubuntu 14.04, Hadoop 2.7.1, Physical Machines
>Reporter: Dan Shechter
>Assignee: Arun Suresh
>Priority: Critical
>
> When our cluster is configured with preemption and is fully loaded with an 
> application consuming 1-core containers, it will not kill off these 
> containers when a new application kicks in requesting containers with a size 
> > 1, for example 4-core containers.
> When the "second" application uses 1-core containers as well, preemption 
> proceeds as planned and everything works properly.
> My assumption is that the fair scheduler, while recognizing it needs to kill 
> off some containers to make room for the new application, fails to find a 
> SINGLE container satisfying the request for a 4-core container (since all 
> existing containers are 1-core containers), and isn't "smart" enough to 
> realize it needs to kill off 4 single-core containers (in this case) on a 
> single node for the new application to be able to proceed.
> The exhibited effect is that the new application hangs indefinitely and 
> never gets the resources it requires.
> This can easily be replicated with any YARN application.
> Our "go-to" scenario in this case is running pyspark with 1-core executors 
> (containers) while trying to launch the h2o.ai framework, which INSISTS on 
> having at least 4 cores per container.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-2457) FairScheduler: Handle preemption to help starved parent queues

2016-08-30 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla resolved YARN-2457.

Resolution: Duplicate

> FairScheduler: Handle preemption to help starved parent queues
> --
>
> Key: YARN-2457
> URL: https://issues.apache.org/jira/browse/YARN-2457
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 2.5.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>
> YARN-2395/YARN-2394 add preemption timeout and threshold per queue, but don't 
> check for parent queue starvation. 
> We need to check that. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4855) Should check if node exists when replace nodelabels

2016-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450868#comment-15450868
 ] 

Hadoop QA commented on YARN-4855:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 14s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client: The 
patch generated 20 new + 98 unchanged - 2 fixed = 118 total (was 100) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 57s 
{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 17s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826299/YARN-4855.005.patch |
| JIRA Issue | YARN-4855 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ab510c90a71e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 20ae1fa |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12960/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12960/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12960/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Should check if node exists when replace nodelabels
> ---
>
> Key: YARN-4855
> URL: https://issues.apache.org/jira/browse/YARN-4855
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Tao 

[jira] [Commented] (YARN-4855) Should check if node exists when replace nodelabels

2016-08-30 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450811#comment-15450811
 ] 

Tao Jie commented on YARN-4855:
---

Rebased the patch. [~Naganarasimha], [~sunilg], [~leftnoteasy], could you 
review it? Thanks in advance!

> Should check if node exists when replace nodelabels
> ---
>
> Key: YARN-4855
> URL: https://issues.apache.org/jira/browse/YARN-4855
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: YARN-4855.001.patch, YARN-4855.002.patch, 
> YARN-4855.003.patch, YARN-4855.004.patch, YARN-4855.005.patch
>
>
> Today when we add nodelabels to nodes, it succeeds without any message even 
> if the nodes are not existing NodeManagers in the cluster.
> It could be like this:
> When we use *yarn rmadmin -replaceLabelsOnNode --fail-on-unknown-nodes 
> "node1=label1"*, it would be denied if the node is unknown.






[jira] [Updated] (YARN-4855) Should check if node exists when replace nodelabels

2016-08-30 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated YARN-4855:
--
Attachment: YARN-4855.005.patch

> Should check if node exists when replace nodelabels
> ---
>
> Key: YARN-4855
> URL: https://issues.apache.org/jira/browse/YARN-4855
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: YARN-4855.001.patch, YARN-4855.002.patch, 
> YARN-4855.003.patch, YARN-4855.004.patch, YARN-4855.005.patch
>
>
> Today when we add nodelabels to nodes, it succeeds without any message even 
> if the nodes are not existing NodeManagers in the cluster.
> It could be like this:
> When we use *yarn rmadmin -replaceLabelsOnNode --fail-on-unknown-nodes 
> "node1=label1"*, it would be denied if the node is unknown.






[jira] [Commented] (YARN-5323) Policies APIs (for Router and AMRMProxy policies)

2016-08-30 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450734#comment-15450734
 ] 

Subru Krishnan commented on YARN-5323:
--

Sure [~leftnoteasy], I'll take a look. Thanks.

> Policies APIs (for Router and AMRMProxy policies)
> -
>
> Key: YARN-5323
> URL: https://issues.apache.org/jira/browse/YARN-5323
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5323-YARN-2915.05.patch, 
> YARN-5323-YARN-2915.06.patch, YARN-5323.01.patch, YARN-5323.02.patch, 
> YARN-5323.03.patch, YARN-5323.04.patch
>
>
> This JIRA tracks APIs for the policies that will guide the Router and 
> AMRMProxy decisions on where to forward job submission/query requests as 
> well as ResourceRequests.






[jira] [Assigned] (YARN-5599) Post AM launcher artifacts to ATS

2016-08-30 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S reassigned YARN-5599:
---

Assignee: Rohith Sharma K S

> Post AM launcher artifacts to ATS
> -
>
> Key: YARN-5599
> URL: https://issues.apache.org/jira/browse/YARN-5599
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Rohith Sharma K S
>
> To aid in debugging launch failures, it would be valuable to have an 
> application's launch script and logs posted to ATS.  Because the 
> application's command line may contain private credentials or other secure 
> information, access to the data in ATS should be restricted to the job owner, 
> including the at-rest data.
> Along with making the data available through ATS, the configuration parameter 
> introduced in YARN-5549 and the log line that it guards should be removed.






[jira] [Commented] (YARN-5123) SQL based RM state store

2016-08-30 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450682#comment-15450682
 ] 

Giovanni Matteo Fumarola commented on YARN-5123:


Any update on this?

I have a quick question: why PostgreSQL and not HSQLDB?

> SQL based RM state store
> 
>
> Key: YARN-5123
> URL: https://issues.apache.org/jira/browse/YARN-5123
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Lavkesh Lahngir
>Assignee: Lavkesh Lahngir
> Attachments: 0001-SQL-Based-RM-state-store-trunk.patch, High 
> Availability In YARN Resource Manager using SQL Based StateStore.pdf, 
> sqlstatestore.patch
>
>
> In our setup, the ZooKeeper-based RM state store didn't work. We ended up 
> implementing our own SQL-based state store. Here is a patch, if anybody else 
> wants to use it. 






[jira] [Updated] (YARN-5602) Utils for Federation State and Policy Store

2016-08-30 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-5602:
---
Attachment: YARN-5602-YARN-2915.v1.patch

> Utils for Federation State and Policy Store
> ---
>
> Key: YARN-5602
> URL: https://issues.apache.org/jira/browse/YARN-5602
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-5602-YARN-2915.v1.patch
>
>
> This JIRA tracks the creation of utils for Federation State and Policy Store 
> such as Error Codes, Exceptions...






[jira] [Commented] (YARN-5264) Use FSQueue to store queue-specific information

2016-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450556#comment-15450556
 ] 

Hadoop QA commented on YARN-5264:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
57s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 21s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 6 new + 346 unchanged - 4 fixed = 352 total (was 350) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 34m 10s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 47s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826268/YARN-5264.005.patch |
| JIRA Issue | YARN-5264 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c1dea1416d1a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d6d9cff |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12957/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12957/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12957/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Use FSQueue to store queue-specific information
> ---
>
> Key: YARN-5264
> URL: 

[jira] [Updated] (YARN-5567) Fix script exit code checking in NodeHealthScriptRunner#reportHealthStatus

2016-08-30 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5567:
---
Fix Version/s: (was: 2.8.1)
   3.0.0-alpha1

> Fix script exit code checking in NodeHealthScriptRunner#reportHealthStatus
> --
>
> Key: YARN-5567
> URL: https://issues.apache.org/jira/browse/YARN-5567
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 3.0.0-alpha1
>
> Attachments: YARN-5567.001.patch
>
>
> In case of FAILED_WITH_EXIT_CODE, health status should be false.
> {code}
>   case FAILED_WITH_EXIT_CODE:
> setHealthStatus(true, "", now);
> break;
> {code}
> should be 
> {code}
>   case FAILED_WITH_EXIT_CODE:
> setHealthStatus(false, "", now);
> break;
> {code}






[jira] [Created] (YARN-5604) Add versioning for FederationStateStore

2016-08-30 Thread Subru Krishnan (JIRA)
Subru Krishnan created YARN-5604:


 Summary: Add versioning for FederationStateStore
 Key: YARN-5604
 URL: https://issues.apache.org/jira/browse/YARN-5604
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Subru Krishnan
Assignee: Giovanni Matteo Fumarola


Currently we don't have versioning (null version) for the 
FederationStateStore. This JIRA proposes adding versioning support, which is 
needed to support upgrades.
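As a rough illustration, a hedged sketch of the startup check such versioning typically enables, mirroring the pattern other YARN state stores use; the accessor names here are assumptions, not a committed API:

{code}
// Hypothetical startup check: compare the persisted schema version with
// the version this build expects, stamping it on first use.
Version loaded = stateStore.loadVersion();         // assumed accessor
Version current = stateStore.getCurrentVersion();  // assumed accessor
if (loaded == null) {
  stateStore.storeVersion(current);                // first use: record it
} else if (!loaded.equals(current)) {
  throw new IOException("Incompatible FederationStateStore version "
      + loaded + ", expected " + current);
}
{code}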






[jira] [Created] (YARN-5603) Metrics for Federation entities like StateStore/Router/AMRMProxy

2016-08-30 Thread Subru Krishnan (JIRA)
Subru Krishnan created YARN-5603:


 Summary: Metrics for Federation entities like 
StateStore/Router/AMRMProxy
 Key: YARN-5603
 URL: https://issues.apache.org/jira/browse/YARN-5603
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Subru Krishnan
Assignee: Giovanni Matteo Fumarola


This JIRA proposes the addition of metrics for Federation entities such as 
the StateStore/Router/AMRMProxy, etc.






[jira] [Commented] (YARN-5601) Make the RM epoch base value configurable

2016-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450524#comment-15450524
 ] 

Hadoop QA commented on YARN-5601:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 0s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
19s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 25s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
44s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
34s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
5s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 44s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 44s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 2 
new + 424 unchanged - 1 fixed = 426 total (was 425) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 6s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 38m 12s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 19s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Inconsistent synchronization of 
org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.baseEpoch; 
locked 80% of time  Unsynchronized access at RMStateStore.java:80% of time  
Unsynchronized access at RMStateStore.java:[line 670] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826260/YARN-5601-YARN-2915-v1.patch
 |
| JIRA Issue | YARN-5601 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8adc278264bd 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | 

[jira] [Updated] (YARN-5221) Expose UpdateResourceRequest API to allow AM to request for change in container properties

2016-08-30 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5221:
--
Attachment: YARN-5221-branch-2-v1.patch

Attaching a patch to kick off Jenkins on branch-2.

> Expose UpdateResourceRequest API to allow AM to request for change in 
> container properties
> --
>
> Key: YARN-5221
> URL: https://issues.apache.org/jira/browse/YARN-5221
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5221-branch-2-v1.patch, YARN-5221.001.patch, 
> YARN-5221.002.patch, YARN-5221.003.patch, YARN-5221.004.patch, 
> YARN-5221.005.patch, YARN-5221.006.patch, YARN-5221.007.patch, 
> YARN-5221.008.patch, YARN-5221.009.patch, YARN-5221.010.patch, 
> YARN-5221.011.patch, YARN-5221.012.patch, YARN-5221.013.patch
>
>
> YARN-1197 introduced APIs to allow an AM to request an increase or decrease 
> of container resources after the initial allocation.
> YARN-5085 proposes to allow an AM to request a change of container 
> ExecutionType.
> This JIRA proposes to unify both of the above into an Update Container API.
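For illustration, a hedged sketch of how an AM might build such an update; the class names match those added by this JIRA, but treat the exact factory signature as an assumption:

{code}
// Illustrative AM-side request: grow a running container to 4 GB / 4 vcores.
UpdateContainerRequest update = UpdateContainerRequest.newInstance(
    container.getVersion(),                 // container version the AM holds
    container.getId(),
    ContainerUpdateType.INCREASE_RESOURCE,  // unified update type
    Resource.newInstance(4096, 4),          // target capability
    null);                                  // no ExecutionType change
allocateRequest.setUpdateRequests(Collections.singletonList(update));
{code}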






[jira] [Updated] (YARN-5602) Utils for Federation State and Policy Store

2016-08-30 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-5602:
---
Description: This JIRA tracks the creation of utils for Federation State 
and Policy Store such as Error Codes, Exceptions...

> Utils for Federation State and Policy Store
> ---
>
> Key: YARN-5602
> URL: https://issues.apache.org/jira/browse/YARN-5602
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>
> This JIRA tracks the creation of utils for Federation State and Policy Store 
> such as Error Codes, Exceptions...






[jira] [Created] (YARN-5602) Utils for Federation State and Policy Store

2016-08-30 Thread Giovanni Matteo Fumarola (JIRA)
Giovanni Matteo Fumarola created YARN-5602:
--

 Summary: Utils for Federation State and Policy Store
 Key: YARN-5602
 URL: https://issues.apache.org/jira/browse/YARN-5602
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Giovanni Matteo Fumarola
Assignee: Giovanni Matteo Fumarola









[jira] [Commented] (YARN-5567) Fix script exit code checking in NodeHealthScriptRunner#reportHealthStatus

2016-08-30 Thread Wilfred Spiegelenburg (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450469#comment-15450469
 ] 

Wilfred Spiegelenburg commented on YARN-5567:
-

I would be OK with the change in trunk. It does make the behaviour clearer.
We do need the additional changes to the comments and the javadoc on top of 
what we have now.

> Fix script exit code checking in NodeHealthScriptRunner#reportHealthStatus
> --
>
> Key: YARN-5567
> URL: https://issues.apache.org/jira/browse/YARN-5567
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 2.8.1
>
> Attachments: YARN-5567.001.patch
>
>
> In case of FAILED_WITH_EXIT_CODE, health status should be false.
> {code}
>   case FAILED_WITH_EXIT_CODE:
> setHealthStatus(true, "", now);
> break;
> {code}
> should be 
> {code}
>   case FAILED_WITH_EXIT_CODE:
> setHealthStatus(false, "", now);
> break;
> {code}






[jira] [Commented] (YARN-5221) Expose UpdateResourceRequest API to allow AM to request for change in container properties

2016-08-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450455#comment-15450455
 ] 

Hudson commented on YARN-5221:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10377 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10377/])
YARN-5221. Expose UpdateResourceRequest API to allow AM to request for (arun 
suresh: rev d6d9cff21b7b6141ed88359652cf22e8973c0661)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/UpdateContainerRequestPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/recovery/NMNullStateStoreService.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/rm/TestRMContainerAllocator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/scheduler/OpportunisticContainerAllocator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/recovery/NMStateStoreService.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/UpdateContainerRequest.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/AMRMClientImpl.java
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/SLSCapacityScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/NMContainerStatusPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestContainerResizing.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ProtoUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/AMRMClientAsync.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/BaseContainerManagerTest.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/proto/test_token.proto
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ApplicationMasterService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_service_protos.proto
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerUpdateType.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeStatusUpdater.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/ResourceSchedulerWrapper.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/proto/yarn_security_token.proto
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/AMRMClientAsyncImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/ContainerTokenIdentifier.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/proto/yarn_server_common_service_protos.proto
* (edit) 

[jira] [Commented] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-08-30 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450450#comment-15450450
 ] 

Li Lu commented on YARN-5585:
-

OK, let me make sure I fully understand it here: the ultimate use case is 
pagination. To support this, our APIs need not only count limits 
but also the start position of the returned values. One thing to note is that 
in ATS v2, entities may not be stored in their creation-time order (although 
they quite possibly are). Therefore this is slightly different from the fromId 
in ATS v1, where it will "retrieve entities earlier than and including the 
specified ID". I'm fine with the proposal to add this feature, but we may want 
to point out to users the difference between the two versions of the timeline 
APIs. Am I missing something here? 

> [Atsv2] Add a new filter fromId in REST endpoints
> -
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>
> The TimelineReader REST APIs provide a lot of filters to retrieve 
> applications. Along with those, it would be good to add a new filter, i.e. 
> fromId, so that entities can be retrieved after the fromId. 
> Example: if applications are stored in the database as app-1, app-2, ... 
> app-10, *getApps?limit=5* gives app-1 to app-5, but retrieving the next 5 
> apps is difficult.
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> This is very useful for pagination in the web UI.
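A hedged sketch of how a web UI could page through results with the proposed filter; the endpoint path and helper functions are illustrative assumptions:

{code}
// Hypothetical client-side pagination over the proposed fromId filter.
String base = "http://reader-host:8188/ws/v2/timeline/apps"; // assumed path
String fromId = null;
while (true) {
  String url = base + "?limit=5"
      + (fromId == null ? "" : "&fromId=" + fromId);
  List<String> page = fetchAppIds(url);   // assumed helper: GET + parse ids
  if (page.isEmpty()) {
    break;                                // no more results
  }
  render(page);                           // assumed helper
  fromId = page.get(page.size() - 1);     // next page starts after last id
}
{code}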






[jira] [Commented] (YARN-5323) Policies APIs (for Router and AMRMProxy policies)

2016-08-30 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450441#comment-15450441
 ] 

Wangda Tan commented on YARN-5323:
--

Thanks [~curino],

Thanks for the update; the patch looks good from my perspective. I think it's 
better to let [~subru] commit it since he has more context.

As for the implementation of the policies in YARN-5324/YARN-5325, I may not 
have bandwidth in the short term. [~subru], could you take care of the patch 
review for the two implementation patches?

Thanks,

> Policies APIs (for Router and AMRMProxy policies)
> -
>
> Key: YARN-5323
> URL: https://issues.apache.org/jira/browse/YARN-5323
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5323-YARN-2915.05.patch, 
> YARN-5323-YARN-2915.06.patch, YARN-5323.01.patch, YARN-5323.02.patch, 
> YARN-5323.03.patch, YARN-5323.04.patch
>
>
> This JIRA tracks APIs for the policies that will guide the Router and 
> AMRMProxy decisions on where to forward job submission/query requests as 
> well as ResourceRequests.






[jira] [Commented] (YARN-5323) Policies APIs (for Router and AMRMProxy policies)

2016-08-30 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450436#comment-15450436
 ] 

Giovanni Matteo Fumarola commented on YARN-5323:


LGTM.

Minor fix:
1) AMRMProxyFederationPolicy -> FederationAMRMProxyPolicy;
2) RouterFederationPolicy -> FederationRouterPolicy;
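To make the proposed names concrete, a rough sketch of what the two interfaces might look like; the method shapes are assumptions, not the actual patch:

{code}
// Hypothetical shape of the two policy APIs under the proposed names.
public interface FederationRouterPolicy {
  /** Pick the home sub-cluster for a new application submission. */
  SubClusterId getHomeSubcluster(ApplicationSubmissionContext appContext)
      throws YarnException;
}

public interface FederationAMRMProxyPolicy {
  /** Split the AM's ResourceRequests across the known sub-clusters. */
  Map<SubClusterId, List<ResourceRequest>> splitResourceRequests(
      List<ResourceRequest> requests) throws YarnException;
}
{code}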


> Policies APIs (for Router and AMRMProxy policies)
> -
>
> Key: YARN-5323
> URL: https://issues.apache.org/jira/browse/YARN-5323
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5323-YARN-2915.05.patch, 
> YARN-5323-YARN-2915.06.patch, YARN-5323.01.patch, YARN-5323.02.patch, 
> YARN-5323.03.patch, YARN-5323.04.patch
>
>
> This JIRA tracks APIs for the policies that will guide the Router and 
> AMRMProxy decisions on where to forward job submission/query requests as 
> well as ResourceRequests.






[jira] [Updated] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-08-30 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-5585:

Issue Type: Sub-task  (was: Bug)
Parent: YARN-5355

> [Atsv2] Add a new filter fromId in REST endpoints
> -
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>
> The TimelineReader REST APIs provide a lot of filters to retrieve 
> applications. Along with those, it would be good to add a new filter, i.e. 
> fromId, so that entities can be retrieved after the fromId. 
> Example: if applications are stored in the database as app-1, app-2, ... 
> app-10, *getApps?limit=5* gives app-1 to app-5, but retrieving the next 5 
> apps is difficult.
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> This is very useful for pagination in the web UI.






[jira] [Commented] (YARN-5323) Policies APIs (for Router and AMRMProxy policies)

2016-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450433#comment-15450433
 ] 

Hadoop QA commented on YARN-5323:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
56s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
48s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common: 
The patch generated 9 new + 0 unchanged - 0 fixed = 9 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
55s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 15s 
{color} | {color:red} hadoop-yarn-server-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 34s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 47s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826263/YARN-5323-YARN-2915.06.patch
 |
| JIRA Issue | YARN-5323 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7277211f4d91 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / c77269d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12956/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/12956/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12956/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common |
| Console output | 

[jira] [Commented] (YARN-5264) Use FSQueue to store queue-specific information

2016-08-30 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450429#comment-15450429
 ] 

Yufei Gu commented on YARN-5264:


Thanks for [~templedf]'s latest comment. I think we commented concurrently. 
Nice catch on the unnecessary new empty line. I've uploaded patch 005 to fix 
it and some style issues.

> Use FSQueue to store queue-specific information
> ---
>
> Key: YARN-5264
> URL: https://issues.apache.org/jira/browse/YARN-5264
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5264.001.patch, YARN-5264.002.patch, 
> YARN-5264.003.patch, YARN-5264.004.patch, YARN-5264.005.patch
>
>
> Use FSQueue to store queue-specific information instead of querying 
> AllocationConfiguration. 
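As a rough illustration of the direction, a hedged sketch of caching one per-queue value on the FSQueue at (re)initialization; the field and method names are assumptions, not the actual patch:

{code}
// Hypothetical: copy a queue setting out of AllocationConfiguration once,
// then serve reads from the FSQueue itself.
abstract class FSQueue {
  private Resource maxShare;

  void reinit(AllocationConfiguration allocConf) {
    // pulled once per reload instead of on every scheduler decision
    this.maxShare = allocConf.getMaxResources(getName()); // assumed getter
  }

  Resource getMaxShare() {
    return maxShare;
  }

  abstract String getName();
}
{code}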






[jira] [Updated] (YARN-5264) Use FSQueue to store queue-specific information

2016-08-30 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5264:
---
Attachment: YARN-5264.005.patch

> Use FSQueue to store queue-specific information
> ---
>
> Key: YARN-5264
> URL: https://issues.apache.org/jira/browse/YARN-5264
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5264.001.patch, YARN-5264.002.patch, 
> YARN-5264.003.patch, YARN-5264.004.patch, YARN-5264.005.patch
>
>
> Use FSQueue to store queue-specific information instead of querying 
> AllocationConfiguration. 






[jira] [Commented] (YARN-679) add an entry point that can start any Yarn service

2016-08-30 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450404#comment-15450404
 ] 

Daniel Templeton commented on YARN-679:
---

Continuing on...

* I don't see the point of having a {{shutdown()}} method in 
{{ServiceShutdownHook}} that is separate from the {{run()}} method.
* {{ServiceLaunchException}} appears to be the only exception you've defined 
that doesn't implement your cause exit code promotion.
* This hyphen is spurious: {code}   * to build the formatted exception -in the 
ENGLISH locale.{code}
* This line {code}   * It is still be available as a parameter for the 
format.{code} should be {code}   * It will also be used as a parameter for the 
format.{code}
* In {{InterruptEscalator}} here: {code}ServiceLauncher owner = 
ownerRef.get();
if (owner != null) {
  sb.append(", owner= ").append(owner.toString());
}{code} it would be nice to also include some note that the owner has 
already been reaped if it has.  Conveying information through omission is 
seldom clear.
* In {{InterruptEscalator.interrupted()}}, if the service has already been 
interrupted, it will log the message as a warning twice.
* In {{InterruptEscalator.interrupted()}} here: {code}  //wait for that 
thread to finish
  try {
thread.join(shutdownTimeMillis);
  } catch (InterruptedException ignored) {
//ignored
  }
  forcedShutdownTimedOut = !shutdown.getServiceWasShutdown();
  if (forcedShutdownTimedOut) {
LOG.warn("Service did not shut down in time");
  }{code} if the shutdown is interrupted, you will erroneously log that the 
service did not shut down in time.
* In {{InterruptEscalator.interrupted()}} here: {code}  /**
   * Previous interrupt handlers. These are not queried.
   */
  private final List interruptHandlers = new ArrayList<>(2);{code} 
the comment doesn't make any sense to me based on the code.  I also don't see 
any provision made to deal with multiple handlers per signal.  Maybe the list 
should be a map so you can complain if the handler is already set?

More later...
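On the timeout-vs-interrupt point above, a hedged sketch of one way to avoid the misleading warning; variable names follow the quoted snippet:

{code}
// Hypothetical fix: only report a timeout when the join actually ran to
// its deadline rather than being interrupted.
boolean joinInterrupted = false;
try {
  thread.join(shutdownTimeMillis);
} catch (InterruptedException e) {
  joinInterrupted = true;
  Thread.currentThread().interrupt();   // preserve the interrupt status
}
forcedShutdownTimedOut =
    !joinInterrupted && !shutdown.getServiceWasShutdown();
if (forcedShutdownTimedOut) {
  LOG.warn("Service did not shut down in time");
}
{code}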

> add an entry point that can start any Yarn service
> --
>
> Key: YARN-679
> URL: https://issues.apache.org/jira/browse/YARN-679
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: api
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: YARN-679-001.patch, YARN-679-002.patch, 
> YARN-679-002.patch, YARN-679-003.patch, YARN-679-004.patch, 
> YARN-679-005.patch, YARN-679-006.patch, YARN-679-007.patch, 
> YARN-679-008.patch, YARN-679-009.patch, YARN-679-010.patch, 
> YARN-679-011.patch, org.apache.hadoop.servic...mon 3.0.0-SNAPSHOT API).pdf
>
>  Time Spent: 72h
>  Remaining Estimate: 0h
>
> There's no need to write separate .main classes for every Yarn service, given 
> that the startup mechanism should be identical: create, init, start, wait for 
> stopped, with an interrupt handler to trigger a clean shutdown on a Ctrl-C 
> interrupt.
> Provide one that takes any classname and a list of config files/options.
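A minimal sketch of such an entry point against the org.apache.hadoop.service.Service lifecycle; this illustrates the idea only and is not the patch's actual ServiceLauncher:

{code}
// Hypothetical generic launcher: instantiate a Service by class name,
// run the standard lifecycle, and block until it stops.
public static void main(String[] args) throws Exception {
  Configuration conf = new Configuration();
  for (int i = 1; i < args.length; i++) {
    conf.addResource(new Path(args[i]));   // additional config files
  }
  Service service = (Service) conf.getClassByName(args[0]).newInstance();
  Runtime.getRuntime().addShutdownHook(
      new Thread(service::stop));          // clean shutdown on Ctrl-C
  service.init(conf);
  service.start();
  service.waitForServiceToStop(0);         // block until stopped
}
{code}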






[jira] [Comment Edited] (YARN-5264) Use FSQueue to store queue-specific information

2016-08-30 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450391#comment-15450391
 ] 

Yufei Gu edited comment on YARN-5264 at 8/30/16 10:54 PM:
--

Hi [~templedf], thanks a lot for the review. I agree with you that storing data 
twice is bad. I wouldn't say it's impossible to consolidate them, but it's too 
complicated to do in this JIRA:
1. There are multiple places that initialize a queue. We need to consolidate 
them or refactor somehow.
2. There are multiple types of queues, for example, "root" and "root.default", 
"configured queues" and "ad hoc queues". The initialization behavior of each 
should be different.
3. A number of test cases built on the current approach need to be 
investigated and refactored.

In this JIRA, I've limited access to these maps in 
{{AllocationConfiguration}} to itself and its test case, so the 
duplicate data can only be used for initialization. That should be OK for the 
current JIRA, and we can completely remove the duplicate data in a followup 
JIRA.


was (Author: yufeigu):
Hi [~templedf], thanks a lot for the review. I agree with you that storing data 
twice is bad. I wouldn't say it's impossible to consolidate them, but it's too 
complicated to do in this JIRA:
1. There are multiple places that initialize a queue. We need to consolidate 
them or refactor somehow.
2. There are multiple types of queues, for example, "root" and "root.default", 
"configured queues" and "ad hoc queues". The initialization behavior of each 
should be different.
3. A number of test cases built on the current approach need to be 
investigated and refactored.

In this JIRA, I've limited access to these maps in 
{{AllocationConfiguration}} to itself and its test case, so the 
duplicate data can only be used for initialization. That should be OK for the 
current JIRA, and we can completely remove the duplicate data in a followup 
JIRA?

> Use FSQueue to store queue-specific information
> ---
>
> Key: YARN-5264
> URL: https://issues.apache.org/jira/browse/YARN-5264
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5264.001.patch, YARN-5264.002.patch, 
> YARN-5264.003.patch, YARN-5264.004.patch
>
>
> Use FSQueue to store queue-specific information instead of querying 
> AllocationConfiguration. 






[jira] [Commented] (YARN-5221) Expose UpdateResourceRequest API to allow AM to request for change in container properties

2016-08-30 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450395#comment-15450395
 ] 

Arun Suresh commented on YARN-5221:
---

Thanks for the reviews, [~leftnoteasy] and [~subru].
Committed this to trunk... Will post a patch for branch-2 shortly and wait for 
Jenkins before committing that and closing this JIRA.

> Expose UpdateResourceRequest API to allow AM to request for change in 
> container properties
> --
>
> Key: YARN-5221
> URL: https://issues.apache.org/jira/browse/YARN-5221
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5221.001.patch, YARN-5221.002.patch, 
> YARN-5221.003.patch, YARN-5221.004.patch, YARN-5221.005.patch, 
> YARN-5221.006.patch, YARN-5221.007.patch, YARN-5221.008.patch, 
> YARN-5221.009.patch, YARN-5221.010.patch, YARN-5221.011.patch, 
> YARN-5221.012.patch, YARN-5221.013.patch
>
>
> YARN-1197 introduced APIs to allow an AM to request an increase or decrease 
> of container resources after the initial allocation.
> YARN-5085 proposes to allow an AM to request a change of container 
> ExecutionType.
> This JIRA proposes to unify both of the above into an Update Container API.






[jira] [Commented] (YARN-5264) Use FSQueue to store queue-specific information

2016-08-30 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450391#comment-15450391
 ] 

Yufei Gu commented on YARN-5264:


Hi [~templedf], thanks a lot for the review. I agree with you that storing data 
twice is bad. I wouldn't say it's impossible to consolidate them, but it's too 
complicated to do in this JIRA:
1. There are multiple places that initialize a queue. We need to consolidate 
them or refactor somehow.
2. There are multiple types of queues, for example, "root" and "root.default", 
"configured queues" and "ad hoc queues". The initialization behavior of each 
should be different.
3. A number of test cases built on the current approach need to be 
investigated and refactored.

In this JIRA, I've limited access to these maps in 
{{AllocationConfiguration}} to itself and its test case, so the 
duplicate data can only be used for initialization. That should be OK for the 
current JIRA, and we can completely remove the duplicate data in a followup 
JIRA?

> Use FSQueue to store queue-specific information
> ---
>
> Key: YARN-5264
> URL: https://issues.apache.org/jira/browse/YARN-5264
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5264.001.patch, YARN-5264.002.patch, 
> YARN-5264.003.patch, YARN-5264.004.patch
>
>
> Use FSQueue to store queue-specific information instead of querying 
> AllocationConfiguration. 






[jira] [Commented] (YARN-5323) Policies APIs (for Router and AMRMProxy policies)

2016-08-30 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450388#comment-15450388
 ] 

Carlo Curino commented on YARN-5323:


[~wangda], I addressed 2 and 3 above... I use FederationPolicyManager for (2), 
as FederationPoliciesConfigurator clashes 
a bit with FederationPolicyConfigurator. I also tried to address the 
checkstyle/javadoc issues (let's see what Jenkins thinks). 

[~subru] and [~giovanni.fumarola], if you have time, take a look as well. You 
have lots of context and are working on some of 
the code that will use it. (It is OK to also have "improvement"-type 
comments for V2 of federation; we can track them in 
sub-JIRAs of YARN-5597 as appropriate.)

Thanks,
Carlo

> Policies APIs (for Router and AMRMProxy policies)
> -
>
> Key: YARN-5323
> URL: https://issues.apache.org/jira/browse/YARN-5323
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5323-YARN-2915.05.patch, 
> YARN-5323-YARN-2915.06.patch, YARN-5323.01.patch, YARN-5323.02.patch, 
> YARN-5323.03.patch, YARN-5323.04.patch
>
>
> This JIRA tracks APIs for the policies that will guide the Router and 
> AMRMProxy decisions on where to forward job submission/query requests as 
> well as ResourceRequests.






[jira] [Commented] (YARN-5555) Scheduler UI: "% of Queue" is inaccurate if leaf queue is hierarchically nested.

2016-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450390#comment-15450390
 ] 

Hadoop QA commented on YARN-:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 46s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 24s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 434 unchanged - 0 fixed = 436 total (was 434) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 38m 3s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 6s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826249/YARN-5555.001.patch |
| JIRA Issue | YARN-5555 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c6993de44735 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 9dcbdbd |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12952/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12952/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12952/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Scheduler UI: "% of Queue" is inaccurate if leaf queue is hierarchically 
> nested.
> 
>
>   

[jira] [Updated] (YARN-5323) Policies APIs (for Router and AMRMProxy policies)

2016-08-30 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-5323:
---
Attachment: YARN-5323-YARN-2915.06.patch

> Policies APIs (for Router and AMRMProxy policies)
> -
>
> Key: YARN-5323
> URL: https://issues.apache.org/jira/browse/YARN-5323
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5323-YARN-2915.05.patch, 
> YARN-5323-YARN-2915.06.patch, YARN-5323.01.patch, YARN-5323.02.patch, 
> YARN-5323.03.patch, YARN-5323.04.patch
>
>
> This JIRA tracks the APIs for the policies that will guide the Router and 
> AMRMProxy decisions on where to forward job submission/query requests as 
> well as ResourceRequests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5264) Use FSQueue to store queue-specific information

2016-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450368#comment-15450368
 ] 

Hadoop QA commented on YARN-5264:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
57s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 21s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 8 new + 347 unchanged - 4 fixed = 355 total (was 351) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 34m 37s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 9s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826248/YARN-5264.004.patch |
| JIRA Issue | YARN-5264 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e9cb59536316 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 9dcbdbd |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12954/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12954/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12954/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Use FSQueue to store queue-specific information
> ---
>
> Key: YARN-5264
> URL: https://issues.apache.org/jira/browse/YARN-5264

[jira] [Commented] (YARN-4220) [Storage implementation] Support getEntities with only Application id but no userId

2016-08-30 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450357#comment-15450357
 ] 

Li Lu commented on YARN-4220:
-

Folks, sorry for the delay... I think I may need some time to catch up with my 
own proposed fix, but if someone already has a fix in mind, please feel free 
to reassign the JIRA. I can help with reviewing if needed. Thanks! 

> [Storage implementation] Support getEntities with only Application id but no 
> userId
> ---
>
> Key: YARN-4220
> URL: https://issues.apache.org/jira/browse/YARN-4220
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Minor
>  Labels: YARN-5355
>
> Currently we enforce that the flow and flowrun ids are non-null on 
> {{getEntities}}. We can actually query the appToFlow table to figure out an 
> application's flow id and flowrun id if they are missing. This will simplify 
> normal queries. 
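
A hedged sketch of the fallback the description suggests; the lookup and 
context types below are placeholders, not the actual ATSv2 classes:

{code}
// Hypothetical sketch of the suggested fallback: resolve missing flow
// context from an appToFlow-style lookup before running the query.
class FlowContext {
  final String flowId;
  final long flowRunId;
  FlowContext(String flowId, long flowRunId) {
    this.flowId = flowId;
    this.flowRunId = flowRunId;
  }
}

interface AppToFlowLookup {
  /** Returns the flow context recorded for the application, or null. */
  FlowContext lookup(String appId);
}

class EntityQueryHelper {
  static FlowContext resolveFlowContext(String appId, String flowId,
      Long flowRunId, AppToFlowLookup appToFlow) {
    if (flowId != null && flowRunId != null) {
      return new FlowContext(flowId, flowRunId);  // caller supplied both
    }
    FlowContext fromTable = appToFlow.lookup(appId);  // fall back to the table
    if (fromTable == null) {
      throw new IllegalArgumentException("Unknown application: " + appId);
    }
    return fromTable;
  }
}
{code}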



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5601) Make the RM epoch base value configurable

2016-08-30 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5601:
-
Attachment: YARN-5601-YARN-2915-v1.patch

A simple patch that reads the base epoch value from the config, with updated 
tests to validate it.
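
For illustration, reading a configurable epoch base could look like the 
sketch below; the property name is an assumption, not necessarily the key the 
patch uses:

{code}
// Hedged sketch: seed the RM epoch from a configurable base instead of 0.
// The property name below is illustrative; check the patch for the real key.
import org.apache.hadoop.conf.Configuration;

class EpochBaseExample {
  static final String RM_EPOCH_BASE = "yarn.resourcemanager.epoch"; // assumed

  static long initialEpoch(Configuration conf) {
    // Each RM in a federation would be configured with a disjoint base,
    // e.g. 0, 1000000, 2000000, so container ids cannot collide.
    return conf.getLong(RM_EPOCH_BASE, 0L);
  }
}
{code}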

> Make the RM epoch base value configurable
> -
>
> Key: YARN-5601
> URL: https://issues.apache.org/jira/browse/YARN-5601
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-5601-YARN-2915-v1.patch
>
>
> Currently the epoch always starts from zero. This can cause container ids to 
> conflict for an application under Federation that spans multiple RMs 
> concurrently. This JIRA proposes to make the RM epoch base value configurable 
> which will allow us to avoid conflicts by setting different values for each 
> RM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5264) Use FSQueue to store queue-specific information

2016-08-30 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450332#comment-15450332
 ] 

Daniel Templeton commented on YARN-5264:


Per our offline discussion, this patch is a step forward in a refactoring of 
the fair scheduler data structures.  Future work will address the data 
duplication.

With that perspective in mind, I'm OK with the duplication of the data.  At 
least it's done cleanly.

My only gripe with the patch is that you introduced a spurious newline in 
{{TestFSLeafQueue}} so that there are two consecutive newlines.

> Use FSQueue to store queue-specific information
> ---
>
> Key: YARN-5264
> URL: https://issues.apache.org/jira/browse/YARN-5264
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5264.001.patch, YARN-5264.002.patch, 
> YARN-5264.003.patch, YARN-5264.004.patch
>
>
> Use FSQueue to store queue-specific information instead of querying 
> AllocationConfiguration. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5601) Make the RM epoch base value configurable

2016-08-30 Thread Subru Krishnan (JIRA)
Subru Krishnan created YARN-5601:


 Summary: Make the RM epoch base value configurable
 Key: YARN-5601
 URL: https://issues.apache.org/jira/browse/YARN-5601
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Subru Krishnan
Assignee: Subru Krishnan


Currently the epoch always starts from zero. This can cause container ids to 
conflict for an application under Federation that spans multiple RMs 
concurrently. This JIRA proposes to make the RM epoch base value configurable 
which will allow us to avoid conflicts by setting different values for each RM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5561) [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and entities via REST

2016-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450303#comment-15450303
 ] 

Hadoop QA commented on YARN-5561:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 13s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice:
 The patch generated 12 new + 19 unchanged - 0 fixed = 31 total (was 19) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 9 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 14s 
{color} | {color:red} hadoop-yarn-server-timelineservice in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 49s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 57s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826160/YARN-5561.patch |
| JIRA Issue | YARN-5561 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5dabe98111ff 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 9dcbdbd |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12953/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/12953/artifact/patchprocess/whitespace-eol.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/12953/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12953/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice

[jira] [Commented] (YARN-5549) AMLauncher.createAMContainerLaunchContext() should not log the command to be launched indiscriminately

2016-08-30 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450299#comment-15450299
 ] 

Daniel Templeton commented on YARN-5549:


I just created YARN-5599 and YARN-5600 to help with debugging launch failures 
in a post-YARN-5549 world.  [~vinodkv], please correct the description of 
YARN-5599 to better reflect the reality of ATS if I'm off a bit.

> AMLauncher.createAMContainerLaunchContext() should not log the command to be 
> launched indiscriminately
> --
>
> Key: YARN-5549
> URL: https://issues.apache.org/jira/browse/YARN-5549
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-5549.001.patch, YARN-5549.002.patch, 
> YARN-5549.003.patch, YARN-5549.004.patch
>
>
> The command could contain sensitive information, such as keystore passwords, 
> AWS credentials, or other secrets.  Instead of logging it at INFO, we should 
> log it at DEBUG and include a property to disable logging it entirely.  
> Logging it to a different logger would also be viable and may create a 
> smaller administrative footprint.
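
One hedged way to implement that guard (the property name is illustrative, 
not necessarily what the committed patch uses):

{code}
// Hypothetical sketch: gate the launch-command log line behind DEBUG level
// and an opt-in property. Names are illustrative, not the committed ones.
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;

class AMCommandLogging {
  private static final Log LOG = LogFactory.getLog(AMCommandLogging.class);
  // Assumed property name, for illustration only.
  static final String LOG_AM_COMMAND = "yarn.resourcemanager.am.log-command";

  static void maybeLogCommand(Configuration conf, String command) {
    if (LOG.isDebugEnabled() && conf.getBoolean(LOG_AM_COMMAND, false)) {
      LOG.debug("Command to launch container: " + command);
    }
  }
}
{code}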



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5599) Post AM launcher artifacts to ATS

2016-08-30 Thread Daniel Templeton (JIRA)
Daniel Templeton created YARN-5599:
--

 Summary: Post AM launcher artifacts to ATS
 Key: YARN-5599
 URL: https://issues.apache.org/jira/browse/YARN-5599
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Daniel Templeton


To aid in debugging launch failures, it would be valuable to have an 
application's launch script and logs posted to ATS.  Because the application's 
command line may contain private credentials or other secure information, 
access to the data in ATS should be restricted to the job owner, including the 
at-rest data.

Along with making the data available through ATS, the configuration parameter 
introduced in YARN-5549 and the log line that it guards should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5561) [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and entities via REST

2016-08-30 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450294#comment-15450294
 ] 

Li Lu commented on YARN-5561:
-

Thanks [~rohithsharma]! I took a deeper look into the patch. The overall 
approach LGTM. One concern I have is: should we expose the semantics of 
"containers" and "appattempts" at the timeline reader API level? To me, the 
timeline API is independent of the actual use cases: it merely provides 
concrete ways to retrieve timeline entities. Similar to the past AHS APIs, 
maybe we'd like to abstract a separate layer of APIs for YARN to read data 
from the timeline service? Of course we can integrate the new "AHS REST API" 
layer in the timeline reader server, but code-wise should we separate the new 
APIs from TimelineReaderWebService? 

BTW, I set this JIRA to Patch Available so that Jenkins can find the patch 
and test it. If you want it otherwise, please feel free to change it back. 
Thanks! 

> [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and 
> entities via REST
> ---
>
> Key: YARN-5561
> URL: https://issues.apache.org/jira/browse/YARN-5561
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-5561.patch, YARN-5561.v0.patch
>
>
> The ATSv2 model lacks REST APIs for retrieving {{list-of-all-apps}}, 
> {{list-of-all-app-attempts}} and {{list-of-all-containers-per-attempt}}. It 
> is also necessary to know about all the entities in an application. These 
> URLs are needed for the web UI.
> The new REST URLs would be (a hedged client sketch follows the list):
> # GET {{/ws/v2/timeline/apps}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/appattempts}}.
> # GET 
> {{/ws/v2/timeline/apps/\{app-id\}/appattempts/\{attempt-id\}/containers}}
> # GET {{/ws/v2/timeline/apps/\{app id\}/entities}} should display list of 
> entities that can be queried.  
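
A hedged client-side sketch of hitting these proposed endpoints; the reader 
host/port and application id below are assumptions for illustration, not 
fixed values:

{code}
// Hypothetical usage sketch: GET the proposed endpoints with plain JDK HTTP.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

class TimelineReaderClientSketch {
  public static void main(String[] args) throws Exception {
    String base = "http://localhost:8188"; // assumed timeline reader address
    String appId = "application_1472500000000_0001"; // placeholder id
    for (String path : new String[] {
        "/ws/v2/timeline/apps",
        "/ws/v2/timeline/apps/" + appId + "/appattempts",
        "/ws/v2/timeline/apps/" + appId + "/entities"}) {
      HttpURLConnection conn =
          (HttpURLConnection) new URL(base + path).openConnection();
      conn.setRequestMethod("GET");
      try (BufferedReader in = new BufferedReader(
          new InputStreamReader(conn.getInputStream()))) {
        System.out.println(path + " -> HTTP " + conn.getResponseCode());
        in.lines().forEach(System.out::println);
      }
    }
  }
}
{code}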



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5600) Add a parameter to ContainerLaunchContext to emulate yarn.nodemanager.delete.debug-delay-sec on a per-application basis

2016-08-30 Thread Daniel Templeton (JIRA)
Daniel Templeton created YARN-5600:
--

 Summary: Add a parameter to ContainerLaunchContext to emulate 
yarn.nodemanager.delete.debug-delay-sec on a per-application basis
 Key: YARN-5600
 URL: https://issues.apache.org/jira/browse/YARN-5600
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Reporter: Daniel Templeton
Assignee: Daniel Templeton


To make debugging application launch failures simpler, I'd like to add a 
parameter to the CLC to allow an application owner to request delayed deletion 
of the application's launch artifacts.

This JIRA solves largely the same problem as YARN-5599, but for cases where ATS 
is not in use, e.g. branch-2.
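
For context, the existing cluster-wide knob can be set as in the sketch 
below; the proposed CLC parameter would be a per-application analogue of it 
and does not exist yet:

{code}
// Sketch of the existing cluster-wide debug-delay knob that YARN-5600
// proposes to emulate per application. This only shows today's global
// NodeManager configuration, not the proposed CLC-level parameter.
import org.apache.hadoop.conf.Configuration;

class DebugDelayExample {
  static Configuration withDebugDelay(Configuration conf, long seconds) {
    // Existing NM property: keep launch artifacts around after exit.
    conf.setLong("yarn.nodemanager.delete.debug-delay-sec", seconds);
    return conf;
  }
}
{code}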



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5264) Use FSQueue to store queue-specific information

2016-08-30 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5264:
---
Attachment: YARN-5264.004.patch

> Use FSQueue to store queue-specific information
> ---
>
> Key: YARN-5264
> URL: https://issues.apache.org/jira/browse/YARN-5264
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5264.001.patch, YARN-5264.002.patch, 
> YARN-5264.003.patch, YARN-5264.004.patch
>
>
> Use FSQueue to store queue-specific information instead of querying 
> AllocationConfiguration. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5555) Scheduler UI: "% of Queue" is inaccurate if leaf queue is hierarchically nested.

2016-08-30 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-5555:
-
Attachment: YARN-5555.001.patch

> Scheduler UI: "% of Queue" is inaccurate if leaf queue is hierarchically 
> nested.
> 
>
> Key: YARN-5555
> URL: https://issues.apache.org/jira/browse/YARN-5555
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Minor
> Attachments: PctOfQueueIsInaccurate.jpg, YARN-5555.001.patch
>
>
> If a leaf queue is hierarchically nested (e.g., {{root.a.a1}}, 
> {{root.a.a2}}), the values in the "*% of Queue*" column in the apps section 
> of the Scheduler UI are calculated as if the leaf queue ({{a1}}) were a 
> direct child of {{root}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5598) [YARN-3368] Fix create-release to be able to generate bits for the new yarn-ui

2016-08-30 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-5598:


 Summary: [YARN-3368] Fix create-release to be able to generate 
bits for the new yarn-ui
 Key: YARN-5598
 URL: https://issues.apache.org/jira/browse/YARN-5598
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: yarn-ui-v2, yarn
Reporter: Wangda Tan
Assignee: Wangda Tan






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4849) [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix licenses.

2016-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450216#comment-15450216
 ] 

Hadoop QA commented on YARN-4849:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 5s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 41s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
12s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 3m 5s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:f62df43 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826243/YARN-4849-YARN-3368.rat-fix-08302016.1.patch
 |
| JIRA Issue | YARN-4849 |
| Optional Tests |  asflicense  shellcheck  shelldocs  |
| uname | Linux cd1bd92cfe41 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3368 / 608309d |
| shellcheck | v0.4.4 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12951/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix 
> licenses.
> ---
>
> Key: YARN-4849
> URL: https://issues.apache.org/jira/browse/YARN-4849
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: YARN-3368
>
> Attachments: YARN-4849-YARN-3368.1.patch, 
> YARN-4849-YARN-3368.2.patch, YARN-4849-YARN-3368.3.patch, 
> YARN-4849-YARN-3368.4.patch, YARN-4849-YARN-3368.5.patch, 
> YARN-4849-YARN-3368.6.patch, YARN-4849-YARN-3368.7.patch, 
> YARN-4849-YARN-3368.8.patch, YARN-4849-YARN-3368.addendum.1.patch, 
> YARN-4849-YARN-3368.addendum.2.patch, YARN-4849-YARN-3368.addendum.3.patch, 
> YARN-4849-YARN-3368.doc-fix-08172016.1.patch, 
> YARN-4849-YARN-3368.doc-fix-08232016.1.patch, 
> YARN-4849-YARN-3368.license-fix-08172016.1.patch, 
> YARN-4849-YARN-3368.license-fix-08232016.1.patch, 
> YARN-4849-YARN-3368.rat-fix-08302016.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4849) [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix licenses.

2016-08-30 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4849:
-
Attachment: YARN-4849-YARN-3368.rat-fix-08302016.1.patch

Attached an addendum fix for the rat warnings.

> [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix 
> licenses.
> ---
>
> Key: YARN-4849
> URL: https://issues.apache.org/jira/browse/YARN-4849
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: YARN-3368
>
> Attachments: YARN-4849-YARN-3368.1.patch, 
> YARN-4849-YARN-3368.2.patch, YARN-4849-YARN-3368.3.patch, 
> YARN-4849-YARN-3368.4.patch, YARN-4849-YARN-3368.5.patch, 
> YARN-4849-YARN-3368.6.patch, YARN-4849-YARN-3368.7.patch, 
> YARN-4849-YARN-3368.8.patch, YARN-4849-YARN-3368.addendum.1.patch, 
> YARN-4849-YARN-3368.addendum.2.patch, YARN-4849-YARN-3368.addendum.3.patch, 
> YARN-4849-YARN-3368.doc-fix-08172016.1.patch, 
> YARN-4849-YARN-3368.doc-fix-08232016.1.patch, 
> YARN-4849-YARN-3368.license-fix-08172016.1.patch, 
> YARN-4849-YARN-3368.license-fix-08232016.1.patch, 
> YARN-4849-YARN-3368.rat-fix-08302016.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-4849) [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix licenses.

2016-08-30 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan reopened YARN-4849:
--

> [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix 
> licenses.
> ---
>
> Key: YARN-4849
> URL: https://issues.apache.org/jira/browse/YARN-4849
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: YARN-3368
>
> Attachments: YARN-4849-YARN-3368.1.patch, 
> YARN-4849-YARN-3368.2.patch, YARN-4849-YARN-3368.3.patch, 
> YARN-4849-YARN-3368.4.patch, YARN-4849-YARN-3368.5.patch, 
> YARN-4849-YARN-3368.6.patch, YARN-4849-YARN-3368.7.patch, 
> YARN-4849-YARN-3368.8.patch, YARN-4849-YARN-3368.addendum.1.patch, 
> YARN-4849-YARN-3368.addendum.2.patch, YARN-4849-YARN-3368.addendum.3.patch, 
> YARN-4849-YARN-3368.doc-fix-08172016.1.patch, 
> YARN-4849-YARN-3368.doc-fix-08232016.1.patch, 
> YARN-4849-YARN-3368.license-fix-08172016.1.patch, 
> YARN-4849-YARN-3368.license-fix-08232016.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4876) [Phase 1] Decoupled Init / Destroy of Containers from Start / Stop

2016-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450152#comment-15450152
 ] 

Hadoop QA commented on YARN-4876:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 16 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 33s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 10s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 8m 29s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 8m 29s {color} 
| {color:red} root generated 1 new + 708 unchanged - 0 fixed = 709 total (was 
708) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 50s 
{color} | {color:red} root: The patch generated 157 new + 1259 unchanged - 10 
fixed = 1416 total (was 1269) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
44s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 88 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 53s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 18s 
{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api generated 
9 new + 125 unchanged - 0 fixed = 134 total (was 125) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 17s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 5 new + 242 unchanged - 0 fixed = 247 total (was 242) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 15s 
{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client 
generated 4 new + 157 unchanged - 0 fixed = 161 total (was 157) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 26s {color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 20s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 28s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 8s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 37m 46s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 

[jira] [Commented] (YARN-5567) Fix script exit code checking in NodeHealthScriptRunner#reportHealthStatus

2016-08-30 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450114#comment-15450114
 ] 

Ray Chiang commented on YARN-5567:
--

So, that's two votes for trunk only.  [~wilfreds] or [~Naganarasimha], are both 
of you okay with that?

> Fix script exit code checking in NodeHealthScriptRunner#reportHealthStatus
> --
>
> Key: YARN-5567
> URL: https://issues.apache.org/jira/browse/YARN-5567
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 2.8.1
>
> Attachments: YARN-5567.001.patch
>
>
> In case of FAILED_WITH_EXIT_CODE, health status should be false.
> {code}
>   case FAILED_WITH_EXIT_CODE:
> setHealthStatus(true, "", now);
> break;
> {code}
> should be 
> {code}
>   case FAILED_WITH_EXIT_CODE:
> setHealthStatus(false, "", now);
> break;
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5596) TestDockerContainerRuntime fails on OS X

2016-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450074#comment-15450074
 ] 

Hadoop QA commented on YARN-5596:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 17s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 15s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826235/YARN-5596.002.patch |
| JIRA Issue | YARN-5596 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2314f86b0bbd 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c4ee691 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12950/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12950/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> TestDockerContainerRuntime fails on OS X
> 
>
> Key: YARN-5596
> URL: https://issues.apache.org/jira/browse/YARN-5596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, yarn
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
>Priority: Minor
> Attachments: YARN-5596.001.patch, YARN-5596.002.patch
>
>
> /sys/fs/cgroup doesn't exist on Mac OS X. And the tests seem to fail because 
> of this. 

[jira] [Commented] (YARN-5566) client-side NM graceful decom doesn't trigger when jobs finish

2016-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450025#comment-15450025
 ] 

Hadoop QA commented on YARN-5566:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 46s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 38m 24s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 38s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826224/YARN-5566.004.patch |
| JIRA Issue | YARN-5566 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e73f2848e9f0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c4ee691 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12948/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12948/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> client-side NM graceful decom doesn't trigger when jobs finish
> --
>
> Key: YARN-5566
> URL: https://issues.apache.org/jira/browse/YARN-5566
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.8.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-5566.001.patch, YARN-5566.002.patch, 
> 

[jira] [Updated] (YARN-3658) Federation "Capacity Allocation" across sub-cluster

2016-08-30 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-3658:
-
Parent Issue: YARN-5597  (was: YARN-2915)

> Federation "Capacity Allocation" across sub-cluster
> ---
>
> Key: YARN-3658
> URL: https://issues.apache.org/jira/browse/YARN-3658
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>
> This JIRA will track mechanisms to map federation level capacity allocations 
> to sub-cluster level ones. (Possibly via reservation mechanisms).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3660) Federation Global Policy Generator (load balancing)

2016-08-30 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-3660:
-
Parent Issue: YARN-5597  (was: YARN-2915)

> Federation Global Policy Generator (load balancing)
> ---
>
> Key: YARN-3660
> URL: https://issues.apache.org/jira/browse/YARN-3660
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Carlo Curino
>Assignee: Subru Krishnan
>
> In a federated environment, local impairments of one sub-cluster might 
> unfairly affect users/queues that are mapped to that sub-cluster. A 
> centralized component (GPG) runs out-of-band and edits the policies governing 
> how users/queues are allocated to sub-clusters. This allows us to enforce 
> global invariants (by dynamically updating locally-enforced invariants).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5597) YARN Federation phase 2

2016-08-30 Thread Subru Krishnan (JIRA)
Subru Krishnan created YARN-5597:


 Summary: YARN Federation phase 2
 Key: YARN-5597
 URL: https://issues.apache.org/jira/browse/YARN-5597
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Subru Krishnan


This umbrella JIRA tracks a set of improvements over the YARN Federation MVP 
(YARN-2915).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3657) Federation maintenance mechanisms (simple CLI and command propagation)

2016-08-30 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-3657:
-
Parent Issue: YARN-5597  (was: YARN-2915)

> Federation maintenance mechanisms (simple CLI and command propagation)
> --
>
> Key: YARN-3657
> URL: https://issues.apache.org/jira/browse/YARN-3657
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>
> The maintenance mechanisms provided by the RM are not sufficient in a 
> federated environment. In this JIRA we track a few extensions 
> (more to come later) to allow basic maintenance mechanisms (and command 
> propagation) for the federated components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5596) TestDockerContainerRuntime fails on OS X

2016-08-30 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15449995#comment-15449995
 ] 

Daniel Templeton commented on YARN-5596:


+1 (non-binding).  Thanks, [~sidharta-s]!

> TestDockerContainerRuntime fails on OS X
> 
>
> Key: YARN-5596
> URL: https://issues.apache.org/jira/browse/YARN-5596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, yarn
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
>Priority: Minor
> Attachments: YARN-5596.001.patch, YARN-5596.002.patch
>
>
> /sys/fs/cgroup doesn't exist on Mac OS X. And the tests seem to fail because 
> of this. 
> {code}
> Failed tests:
>   TestDockerContainerRuntime.testContainerLaunchWithCustomNetworks:456 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testContainerLaunchWithNetworkingDefaults:401 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testDockerContainerLaunch:297 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
> Tests run: 19, Failures: 3, Errors: 0, Skipped: 0
> {code}
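
One generic way to guard such platform-dependent expectations in a JUnit test 
is shown below, purely as a hedged illustration; per the later comments, the 
actual patch reportedly takes a different route (changing visibility of the 
mount source):

{code}
// Hedged sketch: skip (rather than fail) cgroup-path assertions on hosts
// where /sys/fs/cgroup is absent, e.g. OS X. This is a generic JUnit 4
// pattern, not the fix YARN-5596 actually took.
import java.io.File;
import org.junit.Assume;
import org.junit.Test;

public class CgroupPathAssumptionTest {
  @Test
  public void testCgroupMountOnlyWherePresent() {
    Assume.assumeTrue("No /sys/fs/cgroup on this platform",
        new File("/sys/fs/cgroup").isDirectory());
    // ... assertions that rely on the cgroup mount would go here ...
  }
}
{code}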



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5596) TestDockerContainerRuntime fails on OS X

2016-08-30 Thread Sidharta Seethana (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sidharta Seethana updated YARN-5596:

Attachment: YARN-5596.002.patch

Uploaded a new patch with the proposed change. 

> TestDockerContainerRuntime fails on OS X
> 
>
> Key: YARN-5596
> URL: https://issues.apache.org/jira/browse/YARN-5596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, yarn
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
>Priority: Minor
> Attachments: YARN-5596.001.patch, YARN-5596.002.patch
>
>
> /sys/fs/cgroup doesn't exist on Mac OS X. And the tests seem to fail because 
> of this. 
> {code}
> Failed tests:
>   TestDockerContainerRuntime.testContainerLaunchWithCustomNetworks:456 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testContainerLaunchWithNetworkingDefaults:401 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testDockerContainerLaunch:297 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
> Tests run: 19, Failures: 3, Errors: 0, Skipped: 0
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5388) MAPREDUCE-6719 requires changes to DockerContainerExecutor

2016-08-30 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15449978#comment-15449978
 ] 

Daniel Templeton commented on YARN-5388:


[~sidharta-s], any comments?

> MAPREDUCE-6719 requires changes to DockerContainerExecutor
> --
>
> Key: YARN-5388
> URL: https://issues.apache.org/jira/browse/YARN-5388
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Fix For: 2.9.0
>
> Attachments: YARN-5388.001.patch, YARN-5388.002.patch, 
> YARN-5388.branch-2.001.patch
>
>
> Because {{DockerContainerExecutor}} overrides the {{writeLaunchEnv()}} 
> method, it must also have the wildcard processing logic from 
> YARN-4958/YARN-5373 added to it. Without it, the use of -libjars will fail 
> unless wildcarding is disabled.
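
For readers who haven't followed YARN-4958/YARN-5373, here is a minimal sketch 
of the kind of classpath wildcard expansion being referenced; the class and 
method names are illustrative only, not the actual ContainerExecutor API:

{code}
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: expand a trailing "/*" classpath entry into the
// jars it matches, mirroring the idea behind the YARN-4958/YARN-5373
// wildcard handling that a writeLaunchEnv() override must not bypass.
class WildcardExpansionSketch {
  static List<String> expandWildcardEntry(String entry) {
    List<String> expanded = new ArrayList<>();
    if (entry.endsWith("/*")) {
      File dir = new File(entry.substring(0, entry.length() - 2));
      File[] jars = dir.listFiles((d, name) -> name.endsWith(".jar"));
      if (jars != null) {
        for (File jar : jars) {
          expanded.add(jar.getAbsolutePath());
        }
      }
    } else {
      // Non-wildcard entries pass through unchanged.
      expanded.add(entry);
    }
    return expanded;
  }
}
{code}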



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5567) Fix script exit code checking in NodeHealthScriptRunner#reportHealthStatus

2016-08-30 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15449979#comment-15449979
 ] 

Yufei Gu commented on YARN-5567:


Thanks [~wilfreds] for pointing that out. Nice catch. My bad for missing that 
part of the Javadoc. Thanks [~Naganarasimha] and [~rchiang] for your comments. 
I would prefer to revert it in branch-2.8 and branch-2 for compatibility 
reasons, and keep it in trunk with updated documentation. Thanks [~rchiang] for 
filing the follow-up JIRA.


> Fix script exit code checking in NodeHealthScriptRunner#reportHealthStatus
> --
>
> Key: YARN-5567
> URL: https://issues.apache.org/jira/browse/YARN-5567
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 2.8.1
>
> Attachments: YARN-5567.001.patch
>
>
> In case of FAILED_WITH_EXIT_CODE, health status should be false.
> {code}
>   case FAILED_WITH_EXIT_CODE:
> setHealthStatus(true, "", now);
> break;
> {code}
> should be 
> {code}
>   case FAILED_WITH_EXIT_CODE:
> setHealthStatus(false, "", now);
> break;
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5596) TestDockerContainerRuntime fails on OS X

2016-08-30 Thread Sidharta Seethana (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15449971#comment-15449971
 ] 

Sidharta Seethana commented on YARN-5596:
-

Thanks, [~templedf]. I'll upload a new patch with the visibility changed. 

> TestDockerContainerRuntime fails on OS X
> 
>
> Key: YARN-5596
> URL: https://issues.apache.org/jira/browse/YARN-5596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, yarn
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
>Priority: Minor
> Attachments: YARN-5596.001.patch
>
>
> /sys/fs/cgroup doesn't exist on Mac OS X. And the tests seem to fail because 
> of this. 
> {code}
> Failed tests:
>   TestDockerContainerRuntime.testContainerLaunchWithCustomNetworks:456 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testContainerLaunchWithNetworkingDefaults:401 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testDockerContainerLaunch:297 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
> Tests run: 19, Failures: 3, Errors: 0, Skipped: 0
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5596) TestDockerContainerRuntime fails on OS X

2016-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15449968#comment-15449968
 ] 

Hadoop QA commented on YARN-5596:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 1 new + 17 unchanged - 0 fixed = 18 total (was 17) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 14s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 20s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826226/YARN-5596.001.patch |
| JIRA Issue | YARN-5596 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6c2eadf7a19f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c4ee691 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12949/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12949/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12949/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> TestDockerContainerRuntime fails on OS X
> 
>
> Key: YARN-5596
> URL: https://issues.apache.org/jira/browse/YARN-5596
> 

[jira] [Commented] (YARN-5596) TestDockerContainerRuntime fails on OS X

2016-08-30 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15449935#comment-15449935
 ] 

Daniel Templeton commented on YARN-5596:


Patch looks good.  Does the path constant need to be public, or would it be 
better to make it default visibility?
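
To make the trade-off concrete, here is a minimal sketch assuming a 
package-private constant shared between the runtime and its test; all names 
are hypothetical and not taken from the patch:

{code}
// Hypothetical names throughout. Default (package-private) visibility keeps
// the constant out of the public API while still letting a test in the same
// package build its expectations from it.
class ContainerRuntimeSketch {
  static final String CGROUPS_ROOT_DIRECTORY = "/sys/fs/cgroup";
}

class ContainerRuntimeSketchTest {
  // The expected docker volume argument is derived from the constant rather
  // than hard-coding a path that does not exist on OS X.
  String expectedVolumeArg() {
    return "-v " + ContainerRuntimeSketch.CGROUPS_ROOT_DIRECTORY + ":"
        + ContainerRuntimeSketch.CGROUPS_ROOT_DIRECTORY + ":ro";
  }
}
{code}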

> TestDockerContainerRuntime fails on OS X
> 
>
> Key: YARN-5596
> URL: https://issues.apache.org/jira/browse/YARN-5596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, yarn
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
>Priority: Minor
> Attachments: YARN-5596.001.patch
>
>
> /sys/fs/cgroup doesn't exist on Mac OS X. And the tests seem to fail because 
> of this. 
> {code}
> Failed tests:
>   TestDockerContainerRuntime.testContainerLaunchWithCustomNetworks:456 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testContainerLaunchWithNetworkingDefaults:401 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testDockerContainerLaunch:297 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
> Tests run: 19, Failures: 3, Errors: 0, Skipped: 0
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3981) support timeline clients not associated with an application

2016-08-30 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15449912#comment-15449912
 ] 

Li Lu commented on YARN-3981:
-

Thanks [~rohithsharma]! 

bq. As part of the NM daemon, start a new service similar to 
TimeLineWriterWebService. The idea is that the NM reports all these collector 
addresses to the RM. Introduce a new API in ClientRMService to get a collector 
address. The address is handed out by the RM at random (this can be decided 
later) and is used by the timeline client. TimelineClient exposes a new 
constructor with a flowName, so system properties can be written at flow level.

Actually, this looks somewhat similar to the current collector discovery 
mechanism, where the NM reports app-level collector information to the RM, and 
the RM distributes that information to all containers. 

The difference is that we need to explicitly decide where and when to launch 
the collectors. The RM can decide where to launch them, but as of now all 
collectors are tied to a concrete application's life-cycle (they are launched 
as aux-services). Could we launch collectors as separate processes for this 
use case? 

One concern is that this would increase the load on the RM again. I'm not sure 
whether that would be a problem on busy clusters with many client connections. 
However, it is definitely better than launching a central server daemon to 
handle all client requests (which would fall back to the old ATS v1 
architecture). 

For storing the entities posted from such clients, can we put them in the 
entity table but leave the unknown fields empty? Would that be a concern for 
the storage API's semantics? 
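
For concreteness, a purely hypothetical sketch of what a flow-scoped client 
could look like; none of these names exist in the current TimelineClient API:

{code}
import java.util.HashMap;
import java.util.Map;

// Hypothetical: a timeline client bound to a flow context instead of an
// application context, so entities can be written at flow level without an
// application id.
class FlowScopedTimelineClientSketch {
  private final String clusterId;
  private final String user;
  private final String flowName;

  FlowScopedTimelineClientSketch(String clusterId, String user,
      String flowName) {
    this.clusterId = clusterId;
    this.user = user;
    this.flowName = flowName;
  }

  // Would post to a collector address discovered via the RM; where and when
  // such collectors get launched is exactly what this thread is debating.
  Map<String, String> buildFlowContext() {
    Map<String, String> context = new HashMap<>();
    context.put("cluster", clusterId);
    context.put("user", user);
    context.put("flow", flowName);
    return context;
  }
}
{code}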

> support timeline clients not associated with an application
> ---
>
> Key: YARN-3981
> URL: https://issues.apache.org/jira/browse/YARN-3981
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Rohith Sharma K S
>  Labels: YARN-5355
>
> In the current v.2 design, all timeline writes must belong in a 
> flow/application context (cluster + user + flow + flow run + application).
> But there are use cases that require writing data outside the context of an 
> application. One such example is a higher level client (e.g. tez client or 
> hive/oozie/cascading client) writing flow-level data that spans multiple 
> applications. We need to find a way to support them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5596) TestDockerContainerRuntime fails on OS X

2016-08-30 Thread Sidharta Seethana (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sidharta Seethana updated YARN-5596:

Attachment: YARN-5596.001.patch

Uploading a patch that fixes the tests. 

> TestDockerContainerRuntime fails on OS X
> 
>
> Key: YARN-5596
> URL: https://issues.apache.org/jira/browse/YARN-5596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, yarn
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
>Priority: Minor
> Attachments: YARN-5596.001.patch
>
>
> /sys/fs/cgroup doesn't exist on Mac OS X. And the tests seem to fail because 
> of this. 
> {code}
> Failed tests:
>   TestDockerContainerRuntime.testContainerLaunchWithCustomNetworks:456 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testContainerLaunchWithNetworkingDefaults:401 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testDockerContainerLaunch:297 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
> Tests run: 19, Failures: 3, Errors: 0, Skipped: 0
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5566) client-side NM graceful decom doesn't trigger when jobs finish

2016-08-30 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-5566:

Attachment: YARN-5566.004.patch

The 004 patch addresses [~kasha]'s feedback.  
(I assume by TestRMNodeTransitions#testGracefulDecommissionWithApp you meant 
TestRMNodeTransitions#testDecommissioningUnhealthy)

> client-side NM graceful decom doesn't trigger when jobs finish
> --
>
> Key: YARN-5566
> URL: https://issues.apache.org/jira/browse/YARN-5566
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.8.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-5566.001.patch, YARN-5566.002.patch, 
> YARN-5566.003.patch, YARN-5566.004.patch
>
>
> I was testing the client-side NM graceful decommission and noticed that it 
> was always waiting for the timeout, even if all jobs running on that node (or 
> even the cluster) had already finished.
> For example:
> # JobA is running with at least one container on NodeA
> # User runs client-side decom on NodeA at 5:00am with a timeout of 3 hours 
> --> NodeA enters DECOMMISSIONING state
> # JobA finishes at 6:00am and there are no other jobs running on NodeA
> # User's client reaches the timeout at 8:00am, and forcibly decommissions 
> NodeA
> NodeA should have decommissioned at 6:00am.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5566) client-side NM graceful decom doesn't trigger when jobs finish

2016-08-30 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15449870#comment-15449870
 ] 

Robert Kanter edited comment on YARN-5566 at 8/30/16 7:15 PM:
--

Thanks [~kasha] for the review.  The 004 patch addresses your feedback.  
(I assume by TestRMNodeTransitions#testGracefulDecommissionWithApp you meant 
TestRMNodeTransitions#testDecommissioningUnhealthy)


was (Author: rkanter):
The 004 patch addresses [~kasha]'s feedback.  
(I assume by TestRMNodeTransitions#testGracefulDecommissionWithApp you meant 
TestRMNodeTransitions#testDecommissioningUnhealthy)

> client-side NM graceful decom doesn't trigger when jobs finish
> --
>
> Key: YARN-5566
> URL: https://issues.apache.org/jira/browse/YARN-5566
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.8.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-5566.001.patch, YARN-5566.002.patch, 
> YARN-5566.003.patch, YARN-5566.004.patch
>
>
> I was testing the client-side NM graceful decommission and noticed that it 
> was always waiting for the timeout, even if all jobs running on that node (or 
> even the cluster) had already finished.
> For example:
> # JobA is running with at least one container on NodeA
> # User runs client-side decom on NodeA at 5:00am with a timeout of 3 hours 
> --> NodeA enters DECOMMISSIONING state
> # JobA finishes at 6:00am and there are no other jobs running on NodeA
> # User's client reaches the timeout at 8:00am, and forcibly decommissions 
> NodeA
> NodeA should have decommissioned at 6:00am.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-3665) Federation subcluster membership mechanisms

2016-08-30 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan resolved YARN-3665.
--
  Resolution: Implemented
Hadoop Flags: Reviewed

Closing this, as YARN-3671 covers this as well.

> Federation subcluster membership mechanisms
> ---
>
> Key: YARN-3665
> URL: https://issues.apache.org/jira/browse/YARN-3665
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
>
> The member YARN RMs continuously heartbeat to the state store to stay alive 
> and publish their current capability/load information. This JIRA tracks these 
> mechanisms.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3671) Integrate Federation services with ResourceManager

2016-08-30 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15449846#comment-15449846
 ] 

Subru Krishnan commented on YARN-3671:
--

Thanks [~jianhe] for the thoughtful review.

> Integrate Federation services with ResourceManager
> --
>
> Key: YARN-3671
> URL: https://issues.apache.org/jira/browse/YARN-3671
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Fix For: YARN-2915
>
> Attachments: YARN-3671-YARN-2915-v1.patch, 
> YARN-3671-YARN-2915-v2.patch, YARN-3671-YARN-2915-v3.patch, 
> YARN-3671-YARN-2915-v4.patch, YARN-3671-YARN-2915-v5.patch
>
>
> This JIRA proposes adding the ability to turn on Federation services (e.g., 
> the StateStore and the cluster membership heartbeat) in the RM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5596) TestDockerContainerRuntime fails on OS X

2016-08-30 Thread Sidharta Seethana (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sidharta Seethana updated YARN-5596:

Summary: TestDockerContainerRuntime fails on OS X  (was: 
TestDockerContainerRuntime fails on the mac)

> TestDockerContainerRuntime fails on OS X
> 
>
> Key: YARN-5596
> URL: https://issues.apache.org/jira/browse/YARN-5596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, yarn
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
>Priority: Minor
>
> /sys/fs/cgroup doesn't exist on Mac OS X. And the tests seem to fail because 
> of this. 
> {code}
> Failed tests:
>   TestDockerContainerRuntime.testContainerLaunchWithCustomNetworks:456 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testContainerLaunchWithNetworkingDefaults:401 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testDockerContainerLaunch:297 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
> Tests run: 19, Failures: 3, Errors: 0, Skipped: 0
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5596) TestDockerContainerRuntime fails on the mac

2016-08-30 Thread Sidharta Seethana (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sidharta Seethana updated YARN-5596:

Description: 
/sys/fs/cgroup doesn't exist on Mac OS X. And the tests seem to fail because of 
this. 

{code}
Failed tests:
  TestDockerContainerRuntime.testContainerLaunchWithCustomNetworks:456 
expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
/]test_container_local...> but was:<...ET_BIND_SERVICE -v 
/[]test_container_local...>
  TestDockerContainerRuntime.testContainerLaunchWithNetworkingDefaults:401 
expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
/]test_container_local...> but was:<...ET_BIND_SERVICE -v 
/[]test_container_local...>
  TestDockerContainerRuntime.testDockerContainerLaunch:297 
expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
/]test_container_local...> but was:<...ET_BIND_SERVICE -v 
/[]test_container_local...>

Tests run: 19, Failures: 3, Errors: 0, Skipped: 0
{code}

  was:
/sys/fs/cgroup doesn't exist on the Mac. And the tests seem to fail because of 
this. 

{code}
Failed tests:
  TestDockerContainerRuntime.testContainerLaunchWithCustomNetworks:456 
expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
/]test_container_local...> but was:<...ET_BIND_SERVICE -v 
/[]test_container_local...>
  TestDockerContainerRuntime.testContainerLaunchWithNetworkingDefaults:401 
expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
/]test_container_local...> but was:<...ET_BIND_SERVICE -v 
/[]test_container_local...>
  TestDockerContainerRuntime.testDockerContainerLaunch:297 
expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
/]test_container_local...> but was:<...ET_BIND_SERVICE -v 
/[]test_container_local...>

Tests run: 19, Failures: 3, Errors: 0, Skipped: 0
{code}


> TestDockerContainerRuntime fails on the mac
> ---
>
> Key: YARN-5596
> URL: https://issues.apache.org/jira/browse/YARN-5596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, yarn
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
>Priority: Minor
>
> /sys/fs/cgroup doesn't exist on Mac OS X. And the tests seem to fail because 
> of this. 
> {code}
> Failed tests:
>   TestDockerContainerRuntime.testContainerLaunchWithCustomNetworks:456 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testContainerLaunchWithNetworkingDefaults:401 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testDockerContainerLaunch:297 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
> Tests run: 19, Failures: 3, Errors: 0, Skipped: 0
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5566) client-side NM graceful decom doesn't trigger when jobs finish

2016-08-30 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15449826#comment-15449826
 ] 

Karthik Kambatla commented on YARN-5566:


Patch makes sense to me. Minor comments:
# RMNodeImpl: Nit - the comments in the added code don't add much information. 
We should either remove them or add more detail so that they actually convey 
something. 
{noformat}
// no running (and keeping alive) app on this node, get it
// decommissioned.
{noformat}
# TestResourceTrackerService: The second heartbeat from node1 does not need to 
indicate running containers. 
# TestRMNodeTransitions#testGracefulDecommissionWithApp: When creating the 
NodeStatus, we don't need to specify a ContainerStatus when creating the new 
ArrayList.

> client-side NM graceful decom doesn't trigger when jobs finish
> --
>
> Key: YARN-5566
> URL: https://issues.apache.org/jira/browse/YARN-5566
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.8.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-5566.001.patch, YARN-5566.002.patch, 
> YARN-5566.003.patch
>
>
> I was testing the client-side NM graceful decommission and noticed that it 
> was always waiting for the timeout, even if all jobs running on that node (or 
> even the cluster) had already finished.
> For example:
> # JobA is running with at least one container on NodeA
> # User runs client-side decom on NodeA at 5:00am with a timeout of 3 hours 
> --> NodeA enters DECOMMISSIONING state
> # JobA finishes at 6:00am and there are no other jobs running on NodeA
> # User's client reaches the timeout at 8:00am, and forcibly decommissions 
> NodeA
> NodeA should have decommissioned at 6:00am.
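
A minimal sketch of the intended behavior, assuming the check happens on each 
node status update; this is illustrative logic only, not the actual RMNodeImpl 
code:

{code}
import java.util.List;

// Illustrative only: on each status update from a DECOMMISSIONING node,
// finish decommissioning early when no applications remain on the node.
class GracefulDecomSketch {
  enum NodeState { RUNNING, DECOMMISSIONING, DECOMMISSIONED }

  static NodeState onStatusUpdate(NodeState current, List<String> runningApps) {
    if (current == NodeState.DECOMMISSIONING && runningApps.isEmpty()) {
      // No running (or keep-alive) apps remain, so there is no reason to
      // wait for the client-side timeout before decommissioning.
      return NodeState.DECOMMISSIONED;
    }
    return current;
  }
}
{code}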



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5596) TestDockerContainerRuntime fails on the mac

2016-08-30 Thread Sidharta Seethana (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15449822#comment-15449822
 ] 

Sidharta Seethana commented on YARN-5596:
-

I'll take a look at this. Patch coming up.

> TestDockerContainerRuntime fails on the mac
> ---
>
> Key: YARN-5596
> URL: https://issues.apache.org/jira/browse/YARN-5596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, yarn
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
>Priority: Minor
>
> /sys/fs/cgroup doesn't exist on the Mac. And the tests seem to fail because 
> of this. 
> {code}
> Failed tests:
>   TestDockerContainerRuntime.testContainerLaunchWithCustomNetworks:456 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testContainerLaunchWithNetworkingDefaults:401 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testDockerContainerLaunch:297 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
> Tests run: 19, Failures: 3, Errors: 0, Skipped: 0
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5596) TestDockerContainerRuntime fails on the mac

2016-08-30 Thread Sidharta Seethana (JIRA)
Sidharta Seethana created YARN-5596:
---

 Summary: TestDockerContainerRuntime fails on the mac
 Key: YARN-5596
 URL: https://issues.apache.org/jira/browse/YARN-5596
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager, yarn
Reporter: Sidharta Seethana
Assignee: Sidharta Seethana
Priority: Minor


/sys/fs/cgroup doesn't exist on the Mac. And the tests seem to fail because of 
this. 

{code}
Failed tests:
  TestDockerContainerRuntime.testContainerLaunchWithCustomNetworks:456 
expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
/]test_container_local...> but was:<...ET_BIND_SERVICE -v 
/[]test_container_local...>
  TestDockerContainerRuntime.testContainerLaunchWithNetworkingDefaults:401 
expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
/]test_container_local...> but was:<...ET_BIND_SERVICE -v 
/[]test_container_local...>
  TestDockerContainerRuntime.testDockerContainerLaunch:297 
expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
/]test_container_local...> but was:<...ET_BIND_SERVICE -v 
/[]test_container_local...>

Tests run: 19, Failures: 3, Errors: 0, Skipped: 0
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5428) Allow for specifying the docker client configuration directory

2016-08-30 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-5428:
--
Attachment: 
YARN-5428Allowforspecifyingthedockerclientconfigurationdirectory.pdf

> Allow for specifying the docker client configuration directory
> --
>
> Key: YARN-5428
> URL: https://issues.apache.org/jira/browse/YARN-5428
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-5428.001.patch, YARN-5428.002.patch, 
> YARN-5428.003.patch, YARN-5428.004.patch, 
> YARN-5428Allowforspecifyingthedockerclientconfigurationdirectory.pdf
>
>
> The docker client allows for specifying a configuration directory that 
> contains the docker client's configuration. It is common to store "docker 
> login" credentials in this config to avoid the need to run docker login on 
> each cluster member. 
> By default, the docker client config is $HOME/.docker/config.json on Linux. 
> However, this does not work with the current container executor user 
> switching, and it may also be desirable to centralize this configuration 
> beyond a single user's home directory.
> Note that the command line arg is for the configuration directory, NOT the 
> configuration file.
> This change will be needed to allow YARN to automatically pull images at 
> localization time or within the container executor.
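
To illustrate the directory-vs-file point: docker's global {{--config}} option 
takes the directory that holds config.json, not the file itself. A small 
sketch of assembling such a command follows; the paths are examples only:

{code}
import java.util.ArrayList;
import java.util.List;

// Sketch: assemble a docker pull command that points --config at a
// centralized client-config directory containing config.json.
class DockerConfigSketch {
  static List<String> buildPullCommand(String configDir, String image) {
    List<String> cmd = new ArrayList<>();
    cmd.add("docker");
    cmd.add("--config");
    cmd.add(configDir); // e.g. "/etc/hadoop/docker-client" (example path)
    cmd.add("pull");
    cmd.add(image);
    return cmd;
  }
}
{code}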



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


