[jira] [Commented] (YARN-5989) hadoop build to allow hadoop version property to be explicitly set (YARN test)

2016-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15737409#comment-15737409
 ] 

Hadoop QA commented on YARN-5989:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
55s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 19m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  7m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 10m 
17s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 16m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  6m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
27s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
9s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
50s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m 18s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
25s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m  6s{color} 
| {color:red} hadoop-yarn-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
37s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-yarn-server-web-proxy in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m 17s{color} 
| {color:red} hadoop-yarn-server-applicationhistoryservice in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
52s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 42m 27s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 39s{color} 
| {color:red} hadoop-yarn-server-tests in the patch failed. {color} |
| {color:green}+1{color} | 

[jira] [Commented] (YARN-3866) AM-RM protocol changes to support container resizing

2016-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15737328#comment-15737328
 ] 

Hadoop QA commented on YARN-3866:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
25s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
49s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
12s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
59s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
58s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 46s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn-jdk1.8.0_111 with JDK v1.8.0_111 
generated 11 new + 39 unchanged - 0 fixed = 50 total (was 39) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  2m 11s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn-jdk1.7.0_121 with JDK v1.7.0_121 
generated 13 new + 48 unchanged - 0 fixed = 61 total (was 48) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 28 new + 102 unchanged - 0 fixed = 130 total (was 102) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
16s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdk1.8.0_111 with JDK 
v1.8.0_111 generated 19 new + 149 unchanged - 0 fixed = 168 total (was 149) 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
19s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdk1.7.0_121 with JDK 
v1.7.0_121 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-yarn-api in the patch passed with JDK 
v1.7.0_121. {color} |
| 

[jira] [Updated] (YARN-5216) Expose configurable preemption policy for OPPORTUNISTIC containers running on the NM

2016-12-09 Thread Hitesh Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Sharma updated YARN-5216:

Attachment: YARN-5216-YARN-5972.001.patch

Posting a patch based on YARN-5972.

> Expose configurable preemption policy for OPPORTUNISTIC containers running on 
> the NM
> 
>
> Key: YARN-5216
> URL: https://issues.apache.org/jira/browse/YARN-5216
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: distributed-scheduling
>Reporter: Arun Suresh
>Assignee: Hitesh Sharma
>  Labels: oct16-hard
> Attachments: YARN-5216-YARN-5972.001.patch, YARN5216.001.patch, 
> yarn5216.002.patch
>
>
> Currently, the default action taken by the QueuingContainerManager, 
> introduced in YARN-2883, when a GUARANTEED Container is scheduled on an NM 
> with OPPORTUNISTIC containers using up resources, is to KILL the running 
> OPPORTUNISTIC containers.
> This JIRA proposes to expose a configurable hook to allow the NM to take a 
> different action.
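
A rough sketch of the kind of hook being proposed (interface and method names
are hypothetical, for illustration only; the actual API is whatever the
attached patch defines):

{code}
import java.util.Collection;
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.Resource;

// Hypothetical pluggable policy: given the resources an arriving GUARANTEED
// container needs, choose which running OPPORTUNISTIC containers to reclaim.
// The default behavior described above simply kills enough of them; a custom
// policy could instead pause or checkpoint.
interface OpportunisticContainersPreemptionPolicy {
  Collection<ContainerId> selectContainersToReclaim(
      Resource needed, Collection<ContainerId> runningOpportunistic);
}
{code}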



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5761) Separate QueueManager from Scheduler

2016-12-09 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5761:
--
Fix Version/s: 3.0.0-alpha2
   2.9.0

> Separate QueueManager from Scheduler
> 
>
> Key: YARN-5761
> URL: https://issues.apache.org/jira/browse/YARN-5761
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>  Labels: oct16-medium
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5761.1.patch, YARN-5761.1.rebase.patch, 
> YARN-5761.2.patch, YARN-5761.3.patch, YARN-5761.4.patch, YARN-5761.5.patch, 
> YARN-5761.6.patch, YARN-5761.7.patch, YARN-5761.7.patch, YARN-5761.8.patch
>
>
> Currently, in the scheduler code, we are doing both queue management and 
> scheduling work. We'd better separate the queue management out of the 
> scheduler logic. In that case, it would be much easier and safer to extend.
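
A minimal sketch of the separation being proposed (interface and method names
are illustrative, not the patch's actual API):

{code}
import org.apache.hadoop.conf.Configuration;

// Hypothetical shape of the extracted queue-management layer: the scheduler
// delegates queue lookup and lifecycle here instead of handling them inline.
interface QueueManager<Q> {
  Q getQueue(String queueName);                 // lookup by name
  void initializeQueues(Configuration conf);    // build the queue hierarchy
  void reinitializeQueues(Configuration conf);  // refresh on config change
}
{code}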



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5963) Spelling errors in logging and exceptions for node manager, client, web-proxy, common, and app history code

2016-12-09 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5963:
--
Fix Version/s: 3.0.0-alpha2
   2.9.0

> Spelling errors in logging and exceptions for node manager, client, 
> web-proxy, common, and app history code
> ---
>
> Key: YARN-5963
> URL: https://issues.apache.org/jira/browse/YARN-5963
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client, nodemanager
>Reporter: Grant Sohn
>Assignee: Grant Sohn
>Priority: Trivial
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5963.1.patch
>
>
> A set of spelling errors in the exceptions and logging messages.
> Examples:
> accessable -> accessible
> occured -> occurred
> autorized -> authorized



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5989) hadoop build to allow hadoop version property to be explicitly set (YARN test)

2016-12-09 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated YARN-5989:

Attachment: YARN-5989-testing.patch

Edited all the YARN modules so that all the YARN tests run. The precommit job 
only picks up the tests of the modules that the patch modified.

> hadoop build to allow hadoop version property to be explicitly set (YARN test)
> --
>
> Key: YARN-5989
> URL: https://issues.apache.org/jira/browse/YARN-5989
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13582-003.patch, YARN-5989-testing.patch
>
>
> Test the patch for HADOOP-13582 against YARN, as it has effects there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4126) RM should not issue delegation tokens in unsecure mode

2016-12-09 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736967#comment-15736967
 ] 

Jian He commented on YARN-4126:
---

bq. Assuming that Oozie wants to keep supporting 2.x releases there must be an 
Oozie side fix,
IIUC, since this is already reverted from branch-2, Oozie no longer requires a 
fix to support the 2.x releases.

To support 3.x, which has this change, Oozie needs a fix.

My point is that the old behavior is a bug -- the logic does not match its 
exception. The exception says the delegation-token operation can only be done 
in a kerberos env, but the logic returns true in an unsecure env. Isn't this 
self-contradicting?
{code}
if (!isAllowedDelegationTokenOp()) {
  throw new IOException(
      "Delegation Token can be cancelled only with kerberos authentication");
}
{code}
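
For context, the guard under discussion looks roughly like the sketch below (a 
reconstruction for illustration, not the verbatim ClientRMService source): with 
security enabled it requires Kerberos-family authentication, while with 
security disabled it short-circuits to true, which is exactly the mismatch with 
the exception message.

{code}
import java.io.IOException;
import java.util.EnumSet;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.UserGroupInformation.AuthenticationMethod;

private boolean isAllowedDelegationTokenOp() throws IOException {
  if (UserGroupInformation.isSecurityEnabled()) {
    return EnumSet.of(AuthenticationMethod.KERBEROS,
            AuthenticationMethod.KERBEROS_SSL,
            AuthenticationMethod.CERTIFICATE)
        .contains(UserGroupInformation.getCurrentUser()
            .getRealAuthenticationMethod());
  }
  // Insecure mode: the operation is allowed, contradicting the
  // Kerberos-only wording of the exception above.
  return true;
}
{code}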
My preference is to keep this in 3.x so that future apps on YARN do not repeat 
the same mistake as Oozie, and Oozie should fix this to support the 3.x line.
On the other hand, if other folks think it's more important to keep 3.x 
compatible and with minimal surprise, I'm also ok to revert it.

> RM should not issue delegation tokens in unsecure mode
> --
>
> Key: YARN-4126
> URL: https://issues.apache.org/jira/browse/YARN-4126
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Bibin A Chundatt
> Fix For: 3.0.0-alpha1
>
> Attachments: 0001-YARN-4126.patch, 0002-YARN-4126.patch, 
> 0003-YARN-4126.patch, 0004-YARN-4126.patch, 0005-YARN-4126.patch, 
> 0006-YARN-4126.patch
>
>
> ClientRMService#getDelegationToken is currently  returning a delegation token 
> in insecure mode. We should not return the token if it's in insecure mode. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2009) CapacityScheduler: Add intra-queue preemption for app priority support

2016-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736964#comment-15736964
 ] 

Hadoop QA commented on YARN-2009:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
56s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 17s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 21 new + 155 unchanged - 33 fixed = 176 total (was 188) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 7 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
18s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_111
 with JDK v1.8.0_111 generated 0 new + 974 unchanged - 1 fixed = 974 total (was 
975) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 40s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}165m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Inconsistent synchronization of 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.orderingPolicy;
 

[jira] [Commented] (YARN-4457) Cleanup unchecked types for EventHandler

2016-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736955#comment-15736955
 ] 

Hudson commented on YARN-4457:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10983 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10983/])
YARN-4457. Cleanup unchecked types for EventHandler (templedf via (rkanter: rev 
4b149a1e7781a52c2979fa3d367e4bfb1c4ccfe7)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/metrics/AbstractSystemMetricsPublisher.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeStatusUpdater.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMHA.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestKillAMPreemptionPolicy.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestAppLogAggregatorImpl.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapred/TestTaskAttemptFinishingMonitor.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRuntimeEstimators.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/NMLivelinessMonitor.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MockAppContext.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainerLaunch.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapred/TestTaskAttemptListenerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/TestFifoScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryCopyService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/AMLivelinessMonitor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/event/AsyncDispatcher.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/event/InlineDispatcher.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapred/TestLocalContainerLauncher.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestCheckpointPreemptionPolicy.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistory.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/speculate/DefaultSpeculator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/TestNodesListManager.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/local/TestLocalContainerAllocator.java
* (edit) 

[jira] [Updated] (YARN-4457) Cleanup unchecked types for EventHandler

2016-12-09 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-4457:

Fix Version/s: 3.0.0-alpha2

> Cleanup unchecked types for EventHandler
> 
>
> Key: YARN-4457
> URL: https://issues.apache.org/jira/browse/YARN-4457
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>  Labels: oct16-easy
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-4457.001.patch, YARN-4457.002.patch, 
> YARN-4457.003.patch, YARN-4457.004.patch, YARN-4457.005.patch, 
> YARN-4457.006.patch
>
>
> The EventHandler class is often used in an untyped context resulting in a 
> bunch of warnings about unchecked usage.  The culprit is the 
> {{Dispatcher.getHandler()}} method.  Fixing the typing on the method to 
> return {{EventHandler<Event>}} instead of the raw {{EventHandler}} clears up 
> the errors and doesn't introduce any incompatible changes.  In the case that 
> some code does:
> {code}
> EventHandler h = dispatcher.getHandler();
> {code}
> it will still work and will issue a compiler warning about raw types.  There 
> are, however, no instances of this issue in the current source base.
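
To illustrate the compatibility claim (a sketch; {{SomeEvent}} and the 
surrounding wiring are placeholders, not part of the patch; the types come from 
org.apache.hadoop.yarn.event):

{code}
// With the typed signature, generic call sites compile without unchecked
// warnings, and a leftover raw-typed call site still compiles -- it just
// gets a raw-types warning instead of an unchecked one.
EventHandler<Event> typed = dispatcher.getHandler();  // no warning
typed.handle(new SomeEvent());                        // SomeEvent extends Event

EventHandler raw = dispatcher.getHandler();           // raw-types warning only
{code}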



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4457) Cleanup unchecked types for EventHandler

2016-12-09 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736929#comment-15736929
 ] 

Robert Kanter commented on YARN-4457:
-

+1

> Cleanup unchecked types for EventHandler
> 
>
> Key: YARN-4457
> URL: https://issues.apache.org/jira/browse/YARN-4457
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>  Labels: oct16-easy
> Attachments: YARN-4457.001.patch, YARN-4457.002.patch, 
> YARN-4457.003.patch, YARN-4457.004.patch, YARN-4457.005.patch, 
> YARN-4457.006.patch
>
>
> The EventHandler class is often used in an untyped context resulting in a 
> bunch of warnings about unchecked usage.  The culprit is the 
> {{Dispatcher.getHandler()}} method.  Fixing the typing on the method to 
> return {{EventHandler<Event>}} instead of the raw {{EventHandler}} clears up 
> the errors and doesn't introduce any incompatible changes.  In the case that 
> some code does:
> {code}
> EventHandler h = dispatcher.getHandler();
> {code}
> it will still work and will issue a compiler warning about raw types.  There 
> are, however, no instances of this issue in the current source base.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5536) Multiple format support (JSON, etc.) for exclude node file in NM graceful decommission with timeout

2016-12-09 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5536:
--
Priority: Major  (was: Blocker)

I'm going to downgrade this to Major, since it sounds like this additional 
config support can be added compatibly.

> Multiple format support (JSON, etc.) for exclude node file in NM graceful 
> decommission with timeout
> ---
>
> Key: YARN-5536
> URL: https://issues.apache.org/jira/browse/YARN-5536
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: graceful
>Reporter: Junping Du
>
> Per discussion in YARN-4676, we agree that multiple formats (other than XML) 
> should be supported to decommission nodes with timeout values.
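
For instance, a JSON exclude file might look like the sketch below (a 
hypothetical shape for illustration only; the actual schema is still to be 
decided on this JIRA):

{code}
{
  "hosts": [
    {"host": "nm1.example.com", "timeout": 3600},
    {"host": "nm2.example.com"}
  ]
}
{code}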



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5709) Cleanup leader election configs and pluggability

2016-12-09 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5709:
--
Target Version/s: 2.8.0, 3.0.0-alpha2  (was: 2.8.0)

> Cleanup leader election configs and pluggability
> 
>
> Key: YARN-5709
> URL: https://issues.apache.org/jira/browse/YARN-5709
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Blocker
> Attachments: yarn-5709-wip.2.patch, yarn-5709.1.patch, 
> yarn-5709.2.patch, yarn-5709.3.patch, yarn-5709.4.patch
>
>
> While reviewing YARN-5677 and YARN-5694, I noticed we could make the 
> curator-based election code cleaner. It is nicer to get this fixed in 2.8 
> before we ship it, but this can be done at a later time as well. 
> # By EmbeddedElector, we meant it was running as part of the RM daemon. Since 
> the Curator-based elector is also running embedded, I feel the code should be 
> checking for {{!curatorBased}} instead of {{isEmbeddedElector}}
> # {{LeaderElectorService}} should probably be named 
> {{CuratorBasedEmbeddedElectorService}} or some such.
> # The code that initializes the elector should be at the same place 
> irrespective of whether it is curator-based or not. 
> # We seem to be caching the CuratorFramework instance in RM. It makes more 
> sense for it to be in RMContext. If others are okay with it, we might even be 
> better off having a {{RMContext#getCurator()}} method to lazily create the 
> curator framework and then cache it (see the sketch below). 
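
A minimal sketch of the lazily-created, cached CuratorFramework from item 4 
(field and method names are illustrative, not the committed API):

{code}
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

// Hypothetical holder for the shared curator client; in the proposal this
// would live in RMContext behind RMContext#getCurator().
class CuratorHolder {
  private final String zkHostPort;   // e.g. "zk1:2181,zk2:2181"
  private CuratorFramework curator;  // created on first use, then cached

  CuratorHolder(String zkHostPort) {
    this.zkHostPort = zkHostPort;
  }

  synchronized CuratorFramework getCurator() {
    if (curator == null) {
      curator = CuratorFrameworkFactory.newClient(
          zkHostPort, new ExponentialBackoffRetry(1000, 3));
      curator.start();
    }
    return curator;
  }
}
{code}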



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5709) Cleanup leader election configs and pluggability

2016-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736918#comment-15736918
 ] 

Hudson commented on YARN-5709:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10982 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10982/])
YARN-5709. Cleanup leader election configs and pluggability. Contributed 
(jianhe: rev a6410a542e59acd9827457df4a257a843f785c29)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/AdminService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestLeaderElectorService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMHA.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ClusterInfo.java
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/LeaderElectorService.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ActiveStandbyElectorBasedElectorService.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/EmbeddedElector.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/CuratorBasedElectorService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/RMHATestBase.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/EmbeddedElectorService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContext.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebApp.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMEmbeddedElector.java


> Cleanup leader election configs and pluggability
> 
>
> Key: YARN-5709
> URL: https://issues.apache.org/jira/browse/YARN-5709
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Blocker
> Attachments: yarn-5709-wip.2.patch, yarn-5709.1.patch, 
> yarn-5709.2.patch, yarn-5709.3.patch, yarn-5709.4.patch
>
>
> While reviewing YARN-5677 and YARN-5694, I noticed we could make the 
> curator-based election code cleaner. It is nicer to get this fixed in 2.8 
> before we ship it, but this can be done at a later time as well. 
> # By EmbeddedElector, we meant it was running as part of the RM daemon. Since 
> the Curator-based elector is also running embedded, I feel the code should be 
> checking for {{!curatorBased}} instead of {{isEmbeddedElector}}
> # {{LeaderElectorService}} should probably be named 
> {{CuratorBasedEmbeddedElectorService}} or some such.
> # The code that initializes the elector should be at the same place 
> irrespective of whether it is curator-based or not. 
> # We seem to be caching the CuratorFramework instance in RM. It makes more 
> sense for it to be in RMContext. If others are okay with it, we might even be 
> better off having a {{RMContext#getCurator()}} method to lazily create the 
> curator framework 

[jira] [Commented] (YARN-4126) RM should not issue delegation tokens in unsecure mode

2016-12-09 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736904#comment-15736904
 ] 

Andrew Wang commented on YARN-4126:
---

I'd like to revert this change while discussion continues (using YARN-5882 as a 
release notes vehicle). Assuming that Oozie wants to keep supporting 2.x 
releases, there must be an Oozie-side fix, and keeping the 3.x behavior in line 
with 2.x is a conservative choice.

> RM should not issue delegation tokens in unsecure mode
> --
>
> Key: YARN-4126
> URL: https://issues.apache.org/jira/browse/YARN-4126
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Bibin A Chundatt
> Fix For: 3.0.0-alpha1
>
> Attachments: 0001-YARN-4126.patch, 0002-YARN-4126.patch, 
> 0003-YARN-4126.patch, 0004-YARN-4126.patch, 0005-YARN-4126.patch, 
> 0006-YARN-4126.patch
>
>
> ClientRMService#getDelegationToken is currently  returning a delegation token 
> in insecure mode. We should not return the token if it's in insecure mode. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5975) Remove the agent - slider AM ssl related code

2016-12-09 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736894#comment-15736894
 ] 

Jian He commented on YARN-5975:
---

Since they are not needed at this point, and this code introduced a lot of 
other findbugs warnings that I need to resolve, I intend to remove it.  The 
code can be ported back over if it's needed in the future.

> Remove the agent - slider AM ssl related code
> -
>
> Key: YARN-5975
> URL: https://issues.apache.org/jira/browse/YARN-5975
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5975-yarn-native-services.01.patch
>
>
> Now that the agent doesn't exist, this piece of code is not needed



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4457) Cleanup unchecked types for EventHandler

2016-12-09 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736895#comment-15736895
 ] 

Robert Kanter commented on YARN-4457:
-

Nevermind.  I was using master instead of trunk \*facepalm\*.

> Cleanup unchecked types for EventHandler
> 
>
> Key: YARN-4457
> URL: https://issues.apache.org/jira/browse/YARN-4457
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>  Labels: oct16-easy
> Attachments: YARN-4457.001.patch, YARN-4457.002.patch, 
> YARN-4457.003.patch, YARN-4457.004.patch, YARN-4457.005.patch, 
> YARN-4457.006.patch
>
>
> The EventHandler class is often used in an untyped context resulting in a 
> bunch of warnings about unchecked usage.  The culprit is the 
> {{Dispatcher.getHandler()}} method.  Fixing the typing on the method to 
> return {{EventHandler<Event>}} instead of the raw {{EventHandler}} clears up 
> the errors and doesn't introduce any incompatible changes.  In the case that 
> some code does:
> {code}
> EventHandler h = dispatcher.getHandler();
> {code}
> it will still work and will issue a compiler warning about raw types.  There 
> are, however, no instances of this issue in the current source base.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-4457) Cleanup unchecked types for EventHandler

2016-12-09 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736888#comment-15736888
 ] 

Robert Kanter edited comment on YARN-4457 at 12/10/16 1:07 AM:
---

Looks like it's already too late.  I can't get it to apply cleanly with 
{{patch}} or {{git apply}}.  We may have to coordinate better on this one.


was (Author: rkanter):
Looks like it's already too late.  I can't get it to apply cleanly with 
{{patch}} or {{git apply}}.

> Cleanup unchecked types for EventHandler
> 
>
> Key: YARN-4457
> URL: https://issues.apache.org/jira/browse/YARN-4457
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>  Labels: oct16-easy
> Attachments: YARN-4457.001.patch, YARN-4457.002.patch, 
> YARN-4457.003.patch, YARN-4457.004.patch, YARN-4457.005.patch, 
> YARN-4457.006.patch
>
>
> The EventHandler class is often used in an untyped context resulting in a 
> bunch of warnings about unchecked usage.  The culprit is the 
> {{Dispatcher.getHandler()}} method.  Fixing the typing on the method to 
> return {{EventHandler<Event>}} instead of the raw {{EventHandler}} clears up 
> the errors and doesn't introduce any incompatible changes.  In the case that 
> some code does:
> {code}
> EventHandler h = dispatcher.getHandler();
> {code}
> it will still work and will issue a compiler warning about raw types.  There 
> are, however, no instances of this issue in the current source base.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5709) Cleanup leader election configs and pluggability

2016-12-09 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736886#comment-15736886
 ] 

Jian He commented on YARN-5709:
---

Committed to branch-2 and trunk.

Unfortunately, there seem to be quite a few conflicts with branch-2.8; it's 
probably missing jiras like YARN-5677.
[~kasha], should we backport them?

> Cleanup leader election configs and pluggability
> 
>
> Key: YARN-5709
> URL: https://issues.apache.org/jira/browse/YARN-5709
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Blocker
> Attachments: yarn-5709-wip.2.patch, yarn-5709.1.patch, 
> yarn-5709.2.patch, yarn-5709.3.patch, yarn-5709.4.patch
>
>
> While reviewing YARN-5677 and YARN-5694, I noticed we could make the 
> curator-based election code cleaner. It is nicer to get this fixed in 2.8 
> before we ship it, but this can be done at a later time as well. 
> # By EmbeddedElector, we meant it was running as part of the RM daemon. Since 
> the Curator-based elector is also running embedded, I feel the code should be 
> checking for {{!curatorBased}} instead of {{isEmbeddedElector}}
> # {{LeaderElectorService}} should probably be named 
> {{CuratorBasedEmbeddedElectorService}} or some such.
> # The code that initializes the elector should be at the same place 
> irrespective of whether it is curator-based or not. 
> # We seem to be caching the CuratorFramework instance in RM. It makes more 
> sense for it to be in RMContext. If others are okay with it, we might even be 
> better off having a {{RMContext#getCurator()}} method to lazily create the 
> curator framework and then cache it. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4457) Cleanup unchecked types for EventHandler

2016-12-09 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736888#comment-15736888
 ] 

Robert Kanter commented on YARN-4457:
-

Looks like it's already too late.  I can't get it to apply cleanly with 
{{patch}} or {{git apply}}.

> Cleanup unchecked types for EventHandler
> 
>
> Key: YARN-4457
> URL: https://issues.apache.org/jira/browse/YARN-4457
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>  Labels: oct16-easy
> Attachments: YARN-4457.001.patch, YARN-4457.002.patch, 
> YARN-4457.003.patch, YARN-4457.004.patch, YARN-4457.005.patch, 
> YARN-4457.006.patch
>
>
> The EventHandler class is often used in an untyped context resulting in a 
> bunch of warnings about unchecked usage.  The culprit is the 
> {{Dispatcher.getHandler()}} method.  Fixing the typing on the method to 
> return {{EventHandler<Event>}} instead of the raw {{EventHandler}} clears up 
> the errors and doesn't introduce any incompatible changes.  In the case that 
> some code does:
> {code}
> EventHandler h = dispatcher.getHandler();
> {code}
> it will still work and will issue a compiler warning about raw types.  There 
> are, however, no instances of this issue in the current source base.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5667) Move HBase backend code in ATS v2 into its separate module

2016-12-09 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736883#comment-15736883
 ] 

Andrew Wang commented on YARN-5667:
---

Ping [~haibochen], any update on this JIRA?

> Move HBase backend code in ATS v2  into its separate module
> ---
>
> Key: YARN-5667
> URL: https://issues.apache.org/jira/browse/YARN-5667
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Blocker
> Attachments: New module structure.png, part1.yarn5667.prelim.patch, 
> part2.yarn5667.prelim.patch, part3.yarn5667.prelim.patch, 
> part4.yarn5667.prelim.patch, part5.yarn5667.prelim.patch, 
> pt1.yarn5667.001.patch, pt2.yarn5667.001.patch, pt3.yarn5667.001.patch, 
> pt4.yarn5667.001.patch, pt5.yarn5667.001.patch, pt6.yarn5667.001.patch, 
> pt9.yarn5667.001.patch, yarn5667-001.tar.gz
>
>
> The HBase backend code currently lives along with the core ATS v2 code in 
> hadoop-yarn-server-timelineservice module. Because Resource Manager depends 
> on hadoop-yarn-server-timelineservice, an unnecessary dependency of the RM 
> module on HBase modules is introduced (HBase backend is pluggable, so we do 
> not need to directly pull in HBase jars). 
> In our internal effort to try ATS v2 with HBase 2.0, which depends on Hadoop 
> 3, we encountered a circular dependency during our builds between HBase 2.0 
> and Hadoop 3 artifacts:
> {code}
> [hadoop-mapreduce-client-common, hadoop-yarn-client, 
> hadoop-yarn-server-resourcemanager, hadoop-yarn-server-timelineservice, 
> hbase-server, hbase-prefix-tree, hbase-hadoop2-compat, 
> hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-common]
> {code}
> This jira proposes we move all HBase-backend-related code from 
> hadoop-yarn-server-timelineservice into its own module (possible name is 
> yarn-server-timelineservice-storage) so that core RM modules do not depend on 
> HBase modules any more.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5982) Simplify opportunistic container parameters and metrics

2016-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736872#comment-15736872
 ] 

Hudson commented on YARN-5982:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10981 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10981/])
YARN-5982. Simplify opportunistic container parameters and metrics. (arun 
suresh: rev b0aace21b1ef3436ba9d516186208fee9a9ceef2)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/ContainerScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/metrics/NodeManagerMetrics.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/NodesPage.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerNode.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/NodeInfo.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesNodes.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/OpportunisticContainerAllocatorAMService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java


> Simplify opportunistic container parameters and metrics
> ---
>
> Key: YARN-5982
> URL: https://issues.apache.org/jira/browse/YARN-5982
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5982.001.patch, YARN-5982.002.patch, 
> YARN-5982.003.patch
>
>
> This JIRA removes some of the parameters that are related to opportunistic 
> containers (e.g., min/max memory/cpu). Instead, we will be using the 
> parameters already used by guaranteed containers.
> The goal is to reduce the number of parameters that need to be used by the 
> user.
> We also fix a small issue related to the container metrics (opportunistic 
> memory reported in GB in Web UI, although it was in MB).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5925) Extract hbase-backend-exclusive utility methods from TimelineStorageUtil

2016-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736833#comment-15736833
 ] 

Hudson commented on YARN-5925:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10980 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10980/])
YARN-5925. Extract hbase-backend-exclusive utility methods from (sjlee: rev 
55f5886ea24671ff3db87a64aaba2e76b3355455)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowActivityRowKey.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowRunColumn.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/common/TestRowKeys.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowActivityColumnPrefix.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowRunCoprocessor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/TestHBaseStorageFlowRun.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/TestHBaseStorageFlowRunCompaction.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/common/AppIdKeyConverter.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowRunColumnPrefix.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/common/TimelineStorageUtils.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/common/HBaseTimelineStorageUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWebServicesHBaseStorage.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowScanner.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/TestHBaseStorageFlowActivity.java


> Extract hbase-backend-exclusive utility methods from TimelineStorageUtil
> 
>
> Key: YARN-5925
> URL: https://issues.apache.org/jira/browse/YARN-5925
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5925-YARN-5355.01.patch, 
> YARN-5925-YARN-5355.02.patch, YARN-5925-YARN-5355.03.patch, 
> YARN-5925-YARN-5355.04.patch, YARN-5925.01.patch, YARN-5925.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5982) Simplify opportunistic container parameters and metrics

2016-12-09 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5982:
--
Summary: Simplify opportunistic container parameters and metrics  (was: 
Simplifying opportunistic container parameters and metrics)

> Simplify opportunistic container parameters and metrics
> ---
>
> Key: YARN-5982
> URL: https://issues.apache.org/jira/browse/YARN-5982
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5982.001.patch, YARN-5982.002.patch, 
> YARN-5982.003.patch
>
>
> This JIRA removes some of the parameters that are related to opportunistic 
> containers (e.g., min/max memory/cpu). Instead, we will be using the 
> parameters already used by guaranteed containers.
> The goal is to reduce the number of parameters that need to be used by the 
> user.
> We also fix a small issue related to the container metrics (opportunistic 
> memory reported in GB in Web UI, although it was in MB).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5982) Simplifying opportunistic container parameters and metrics

2016-12-09 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736829#comment-15736829
 ] 

Arun Suresh commented on YARN-5982:
---

Committing this to trunk shortly. Thanks for working on this, [~kkaranasos].

> Simplifying opportunistic container parameters and metrics
> --
>
> Key: YARN-5982
> URL: https://issues.apache.org/jira/browse/YARN-5982
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5982.001.patch, YARN-5982.002.patch, 
> YARN-5982.003.patch
>
>
> This JIRA removes some of the parameters that are related to opportunistic 
> containers (e.g., min/max memory/cpu). Instead, we will be using the 
> parameters already used by guaranteed containers.
> The goal is to reduce the number of parameters that need to be used by the 
> user.
> We also fix a small issue related to the container metrics (opportunistic 
> memory reported in GB in Web UI, although it was in MB).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5709) Cleanup leader election configs and pluggability

2016-12-09 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736827#comment-15736827
 ] 

Jian He commented on YARN-5709:
---

+1, committing 

> Cleanup leader election configs and pluggability
> 
>
> Key: YARN-5709
> URL: https://issues.apache.org/jira/browse/YARN-5709
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Blocker
> Attachments: yarn-5709-wip.2.patch, yarn-5709.1.patch, 
> yarn-5709.2.patch, yarn-5709.3.patch, yarn-5709.4.patch
>
>
> While reviewing YARN-5677 and YARN-5694, I noticed we could make the 
> curator-based election code cleaner. It is nicer to get this fixed in 2.8 
> before we ship it, but this can be done at a later time as well. 
> # By EmbeddedElector, we meant it was running as part of the RM daemon. Since 
> the Curator-based elector is also running embedded, I feel the code should be 
> checking for {{!curatorBased}} instead of {{isEmbeddedElector}}
> # {{LeaderElectorService}} should probably be named 
> {{CuratorBasedEmbeddedElectorService}} or some such.
> # The code that initializes the elector should be at the same place 
> irrespective of whether it is curator-based or not. 
> # We seem to be caching the CuratorFramework instance in RM. It makes more 
> sense for it to be in RMContext. If others are okay with it, we might even be 
> better off having a {{RMContext#getCurator()}} method to lazily create the 
> curator framework and then cache it (sketched below). 
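
As a rough sketch of point 4 (not the actual patch), a lazily created and 
cached curator client could look like this; the class and field names are 
illustrative:

{code}
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

// Hedged sketch of a lazily created, cached curator client.
class CuratorHolder {
  private final String zkHostPort;        // e.g. "zk1:2181,zk2:2181"
  private volatile CuratorFramework curator;

  CuratorHolder(String zkHostPort) {
    this.zkHostPort = zkHostPort;
  }

  CuratorFramework getCurator() {
    if (curator == null) {
      synchronized (this) {
        if (curator == null) {
          // Create the client on first use, then hand back the cached one.
          CuratorFramework client = CuratorFrameworkFactory.newClient(
              zkHostPort, new ExponentialBackoffRetry(1000, 3));
          client.start();
          curator = client;
        }
      }
    }
    return curator;
  }
}
{code}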



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5982) Simplifying opportunistic container parameters and metrics

2016-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736805#comment-15736805
 ] 

Hadoop QA commented on YARN-5982:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
37s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
20s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 39m 
38s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5982 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842624/YARN-5982.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 2a7a30e71072 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5bd7dec |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14246/testReport/ |
| 

[jira] [Commented] (YARN-5849) Automatically create YARN control group for pre-mounted cgroups

2016-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736794#comment-15736794
 ] 

Hadoop QA commented on YARN-5849:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 25 unchanged - 24 fixed = 25 total (was 49) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 0 new + 230 unchanged - 5 fixed = 230 total (was 235) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
24s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m  
0s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
12s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m  

[jira] [Commented] (YARN-5910) Support for multi-cluster delegation tokens

2016-12-09 Thread Clay B. (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736764#comment-15736764
 ] 

Clay B. commented on YARN-5910:
---

This conversation has been very educational for me; thank you! I am still 
concerned that if we do not use Kerberos, the requesting user will have no way 
to renew tokens as themselves. If we cannot authenticate as the user, won't we 
be unable to work when the administrators of the two clusters are different 
(and thus do not share the same {{yarn}} user setup -- e.g. two different 
Kerberos principals)? Can we find a solution to that issue here as well (or at 
least ensure that this issue doesn't preclude it)?

I really like the idea that the (human) client is responsible for specifying 
the resources needed: in a highly federated Hadoop environment, one 
administration group may not even know of all clusters, and this allows for 
more agile cross-cluster usage.

I see there are two issues here I was hoping to solve:
1. A remote cluster's services are needed (e.g. as a data source to this job)
2. A remote cluster does not trust this cluster's YARN principal

[~jlowe] brings up some good questions and points which hit this well:
{quote}I'm not sure distributing the keytab is going to be considered a 
reasonable thing to do in some setups. Part of the point of getting a token is 
to avoid needing to ship a keytab everywhere. Once we have a keytab, is there a 
need to have a token?{quote}

If the YARN principals of the two clusters are different but the user is 
entitled to services on both, is there another way around this issue? Further, 
while many shops may have the Kerberos tooling to avoid shipping keytabs, some 
are heavily HBase-dependent (e.g. long-running query services) or 
streaming-centric (jobs last longer than the maximal token refresh period) and 
thus have to use keytabs today.

{quote}There's also the problem of needing to renew the token while the AM is 
waiting to get scheduled if the cluster is really busy. If the AM isn't running 
it can't renew the token.{quote}

I would expect the remote-cluster resources not to be central to operating the 
job; e.g. we would use the local cluster for HDFS and YARN but might want to 
access a remote cluster's YARN. If the AM can request tokens (i.e. with a 
keytab or a proxy Kerberos credential refreshed by the RM), then we can request 
new tokens when the job is scheduled, even if it was hung up longer than the 
renewal time; further, any exploit of custom configuration would then run as 
the user rather than as a privileged process.

Regardless, are there many clusters today where the scheduling time is longer 
than the renewal time of a delegation token? (By default that would be one 
seventh of the job's maximal total runtime -- longer than a day.)

{quote}My preference is to have the token be as self-descriptive as we can 
possibly get. Doing the ApplicationSubmissionContext thing could work for the 
HA case, but I could see this being a potentially non-trivial payload the RM 
has to bear for each app (configs can get quite large). It'd rather avoid 
adding that to the context for this purpose if we can do so, but if the token 
cannot be self-descriptive in all cases then we may not have much other choice 
that I can see.{quote}

I agree this seems to be the sanest idea for how to get the configuration in; 
we could also perhaps extend the various delegation token types to optionally 
include this payload. Then the RM would only pay the price when it is needed 
for an off-cluster request.
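
For concreteness, here is a minimal sketch (under the current APIs, not part 
of any patch here) of how a client can gather delegation tokens from several 
HDFS clusters up front; the renewer and paths are illustrative:

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.Credentials;

// Sketch: gather HDFS delegation tokens for every cluster a job touches, so
// they can be shipped with the job at submission time. The renewer is
// typically the local RM principal.
class MultiClusterTokenFetcher {
  static Credentials collect(Configuration conf, String renewer,
      Path... clusterPaths) throws IOException {
    Credentials creds = new Credentials();
    for (Path p : clusterPaths) {
      FileSystem fs = p.getFileSystem(conf);
      // Fetches a token from this filesystem (skipping services already
      // present in creds) and stores it in creds.
      fs.addDelegationTokens(renewer, creds);
    }
    return creds;
  }
}
{code}

A distcp-style client would call this for both {{hdfs://LOCALCLUSTER}} and 
{{hdfs://REMOTECLUSTER}}; the open question above is what happens when the 
remote cluster does not trust the local renewer principal.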

> Support for multi-cluster delegation tokens
> ---
>
> Key: YARN-5910
> URL: https://issues.apache.org/jira/browse/YARN-5910
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: security
>Reporter: Clay B.
>Priority: Minor
>
> As an administrator running many secure (kerberized) clusters, some which 
> have peer clusters managed by other teams, I am looking for a way to run jobs 
> which may require services running on other clusters. Particular cases where 
> this rears itself are running something as core as a distcp between two 
> kerberized clusters (e.g. {{hadoop --config /home/user292/conf/ distcp 
> hdfs://LOCALCLUSTER/user/user292/test.out 
> hdfs://REMOTECLUSTER/user/user292/test.out.result}}).
> Thanks to YARN-3021, one can run for a while, but if the delegation token for 
> the remote cluster needs renewal the job will fail[1]. One can pre-configure 
> their {{hdfs-site.xml}} loaded by the YARN RM to know of all possible HDFSes 
> available but that requires coordination that is not always feasible, 
> especially as a cluster's peers grow into the tens of clusters or across 
> management teams. Ideally, one could have core systems 

[jira] [Commented] (YARN-5925) Extract hbase-backend-exclusive utility methods from TimelineStorageUtil

2016-12-09 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736745#comment-15736745
 ] 

Sangjin Lee commented on YARN-5925:
---

LGTM. Committing shortly.

> Extract hbase-backend-exclusive utility methods from TimelineStorageUtil
> 
>
> Key: YARN-5925
> URL: https://issues.apache.org/jira/browse/YARN-5925
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-5925-YARN-5355.01.patch, 
> YARN-5925-YARN-5355.02.patch, YARN-5925-YARN-5355.03.patch, 
> YARN-5925-YARN-5355.04.patch, YARN-5925.01.patch, YARN-5925.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5709) Cleanup leader election configs and pluggability

2016-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736696#comment-15736696
 ] 

Hadoop QA commented on YARN-5709:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  5m 54s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn generated 2 new + 37 unchanged - 
0 fixed = 39 total (was 37) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 58s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 3 new + 387 unchanged - 9 fixed = 390 total (was 396) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 47m 
16s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5709 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842620/yarn-5709.4.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cc705f51bf3f 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5bd7dec |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/14244/artifact/patchprocess/diff-compile-javac-hadoop-yarn-project_hadoop-yarn.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/14244/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14244/testReport/ |
| modules | C: 

[jira] [Commented] (YARN-5709) Cleanup leader election configs and pluggability

2016-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736679#comment-15736679
 ] 

Hadoop QA commented on YARN-5709:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
55s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  4m 38s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn generated 2 new + 37 unchanged - 
0 fixed = 39 total (was 37) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 49s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 3 new + 386 unchanged - 9 fixed = 389 total (was 395) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 43s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5709 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842620/yarn-5709.4.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0f9f8e8cc52d 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5bd7dec |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/14245/artifact/patchprocess/diff-compile-javac-hadoop-yarn-project_hadoop-yarn.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/14245/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
| unit | 

[jira] [Commented] (YARN-1042) add ability to specify affinity/anti-affinity in container requests

2016-12-09 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736672#comment-15736672
 ] 

Konstantinos Karanasos commented on YARN-1042:
--

[~leftnoteasy], I think you are right that exposing min/max cardinality might 
be complicated for the user...

So, you are suggesting to expose to the user explicit affinity and 
anti-affinity constraints, but interpret them under the hood as more complex 
unified constraints?
I think it is a good approach. This way we keep it simple for the end user, and 
we create the hooks for more complicated cardinality and tag constraints as 
future work.

> add ability to specify affinity/anti-affinity in container requests
> ---
>
> Key: YARN-1042
> URL: https://issues.apache.org/jira/browse/YARN-1042
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Steve Loughran
>Assignee: Wangda Tan
> Attachments: YARN-1042-demo.patch, YARN-1042-design-doc.pdf, 
> YARN-1042-global-scheduling.poc.1.patch, YARN-1042.001.patch, 
> YARN-1042.002.patch
>
>
> container requests to the AM should be able to request anti-affinity to 
> ensure that things like Region Servers don't come up in the same failure 
> zones. 
> Similarly, you may want to specify affinity to the same host or rack 
> without specifying which specific host/rack. Example: bringing up a small 
> giraph cluster in a large YARN cluster would benefit from having the 
> processes in the same rack purely for bandwidth reasons.
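
For context, the closest an AM gets today is plain locality: pinning a request 
to a named rack with relaxLocality disabled. A hedged sketch (rack name 
illustrative), which still falls short of true affinity/anti-affinity:

{code}
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;

// A poor man's affinity with today's API: request containers strictly on
// one rack. Anti-affinity has no direct equivalent; that is this JIRA.
class RackPinnedRequest {
  static ContainerRequest newRequest() {
    Resource capability = Resource.newInstance(1024, 1);  // 1 GB, 1 vcore
    return new ContainerRequest(
        capability,
        null,                          // no specific hosts
        new String[] {"/rack-42"},     // stay on this rack...
        Priority.newInstance(1),
        false);                        // ...strictly: do not relax locality
  }
}
{code}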



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4752) FairScheduler should preempt for a ResourceRequest and all preempted containers should be on the same node

2016-12-09 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-4752:
---
Summary: FairScheduler should preempt for a ResourceRequest and all 
preempted containers should be on the same node  (was: [Umbrella] 
FairScheduler: Improve preemption)

> FairScheduler should preempt for a ResourceRequest and all preempted 
> containers should be on the same node
> --
>
> Key: YARN-4752
> URL: https://issues.apache.org/jira/browse/YARN-4752
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: YARN-4752.FairSchedulerPreemptionOverhaul.pdf, 
> yarn-4752-1.patch, yarn-4752.2.patch, yarn-4752.3.patch, yarn-4752.4.patch, 
> yarn-4752.4.patch
>
>
> A number of issues have been reported with respect to preemption in 
> FairScheduler along the lines of:
> # FairScheduler preempts resources from nodes even if the resultant free 
> resources cannot fit the incoming request.
> # Preemption doesn't preempt from sibling queues
> # Preemption doesn't preempt from sibling apps under the same queue that is 
> over its fairshare
> # ...
> Filing this umbrella JIRA to group all the issues together and think of a 
> comprehensive solution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5849) Automatically create YARN control group for pre-mounted cgroups

2016-12-09 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-5849:
-
Attachment: YARN-5849.008.patch

> Automatically create YARN control group for pre-mounted cgroups
> ---
>
> Key: YARN-5849
> URL: https://issues.apache.org/jira/browse/YARN-5849
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Minor
> Attachments: YARN-5849.000.patch, YARN-5849.001.patch, 
> YARN-5849.002.patch, YARN-5849.003.patch, YARN-5849.004.patch, 
> YARN-5849.005.patch, YARN-5849.006.patch, YARN-5849.007.patch, 
> YARN-5849.008.patch
>
>
> Yarn can be launched with linux-container-executor.cgroups.mount set to 
> false. It will search for the cgroup mount paths set up by the administrator 
> by parsing the /etc/mtab file. You can also specify 
> resource.percentage-physical-cpu-limit to limit the CPU resources assigned to 
> containers.
> linux-container-executor.cgroups.hierarchy is the root of the settings of all 
> YARN containers. If this is specified but not created YARN will fail at 
> startup:
> Caused by: java.io.FileNotFoundException: 
> /cgroups/cpu/hadoop-yarn/cpu.cfs_period_us (Permission denied)
> org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler.updateCgroup(CgroupsLCEResourcesHandler.java:263)
> This JIRA is about automatically creating YARN control group in the case 
> above. It reduces the cost of administration.
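
For reference, a hedged sketch of the setup described above, assuming the 
standard YarnConfiguration keys; the hierarchy path and CPU limit values are 
illustrative:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Sketch of the NM-side settings for pre-mounted cgroups.
class PremountedCgroupsConf {
  static Configuration build() {
    Configuration conf = new YarnConfiguration();
    // Do not mount cgroups; discover the admin's pre-mounted paths instead.
    conf.setBoolean(YarnConfiguration.NM_LINUX_CONTAINER_CGROUPS_MOUNT, false);
    // Root cgroup under which YARN creates per-container groups.
    conf.set(YarnConfiguration.NM_LINUX_CONTAINER_CGROUPS_HIERARCHY,
        "/hadoop-yarn");
    // Cap the CPU handed to containers.
    conf.setInt(
        YarnConfiguration.NM_RESOURCE_PERCENTAGE_PHYSICAL_CPU_LIMIT, 80);
    return conf;
  }
}
{code}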



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-4752) [Umbrella] FairScheduler: Improve preemption

2016-12-09 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla reassigned YARN-4752:
--

Assignee: Karthik Kambatla

> [Umbrella] FairScheduler: Improve preemption
> 
>
> Key: YARN-4752
> URL: https://issues.apache.org/jira/browse/YARN-4752
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: YARN-4752.FairSchedulerPreemptionOverhaul.pdf, 
> yarn-4752-1.patch, yarn-4752.2.patch, yarn-4752.3.patch, yarn-4752.4.patch, 
> yarn-4752.4.patch
>
>
> A number of issues have been reported with respect to preemption in 
> FairScheduler along the lines of:
> # FairScheduler preempts resources from nodes even if the resultant free 
> resources cannot fit the incoming request.
> # Preemption doesn't preempt from sibling queues
> # Preemption doesn't preempt from sibling apps under the same queue that is 
> over its fairshare
> # ...
> Filing this umbrella JIRA to group all the issues together and think of a 
> comprehensive solution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5849) Automatically create YARN control group for pre-mounted cgroups

2016-12-09 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736662#comment-15736662
 ] 

Miklos Szegedi commented on YARN-5849:
--

Thank you, [~bibinchundatt], for the review! I addressed the three comments 
you had in the next patch.
In terms of umask, it should not be an issue in this case, since the node 
manager creates and uses the directories itself; the cgroup directory is not 
passed to the container.
This raises an interesting question though. We probably want to change 
File.mkdir() to limit the permissions to "rwx------" on all YARN cgroup 
directories regardless of umask. But since this affects the existing 
createCGroup method as well, I would address it in another JIRA.
{code}
  // Create the cgroup directory with owner-only (rwx------) permissions.
  Files.createDirectory(path, PosixFilePermissions
      .asFileAttribute(PosixFilePermissions.fromString("rwx------")));
{code}


> Automatically create YARN control group for pre-mounted cgroups
> ---
>
> Key: YARN-5849
> URL: https://issues.apache.org/jira/browse/YARN-5849
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Minor
> Attachments: YARN-5849.000.patch, YARN-5849.001.patch, 
> YARN-5849.002.patch, YARN-5849.003.patch, YARN-5849.004.patch, 
> YARN-5849.005.patch, YARN-5849.006.patch, YARN-5849.007.patch
>
>
> Yarn can be launched with linux-container-executor.cgroups.mount set to 
> false. It will search for the cgroup mount paths set up by the administrator 
> by parsing the /etc/mtab file. You can also specify 
> resource.percentage-physical-cpu-limit to limit the CPU resources assigned to 
> containers.
> linux-container-executor.cgroups.hierarchy is the root of the settings of all 
> YARN containers. If this is specified but not created YARN will fail at 
> startup:
> Caused by: java.io.FileNotFoundException: 
> /cgroups/cpu/hadoop-yarn/cpu.cfs_period_us (Permission denied)
> org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler.updateCgroup(CgroupsLCEResourcesHandler.java:263)
> This JIRA is about automatically creating YARN control group in the case 
> above. It reduces the cost of administration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5959) RM changes to support change of container ExecutionType

2016-12-09 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736650#comment-15736650
 ] 

Wangda Tan commented on YARN-5959:
--

Looked at the overall changes and discussed with [~asuresh] offline.

A couple of things we discussed:
1) The current approach in this patch is that any normal container request 
with a higher priority is allocated first. However, this is debatable: if we 
give the available resource of a node to a different normal container, it is 
possible that the running opportunistic container will be killed and its work 
wasted.
2) Instead of creating a hard-locality container request, can we wrap the 
ResourceRequest to add some additional information identifying it as a 
container increase / promotion (see the sketch below)? This could be useful to:
- Track pending container change requests.
- Cancel / update pending container change requests.
- Make the backend implementation of container increase and promotion the same.

If this sounds like a plan, I can also help with the refactoring patch / 
reviews. 
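
For illustration only, the wrapping idea in point 2 could look something like 
the following; none of these names are the actual patch API:

{code}
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

// Sketch: tag a ResourceRequest with why it was issued, so pending
// increase/promotion requests can be tracked, cancelled, or updated.
class TaggedResourceRequest {
  enum ChangeType { NONE, INCREASE, PROMOTE }

  final ResourceRequest request;
  final ChangeType changeType;
  final ContainerId targetContainer;  // container being increased/promoted

  TaggedResourceRequest(ResourceRequest request, ChangeType changeType,
      ContainerId targetContainer) {
    this.request = request;
    this.changeType = changeType;
    this.targetContainer = targetContainer;
  }
}
{code}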

> RM changes to support change of container ExecutionType
> ---
>
> Key: YARN-5959
> URL: https://issues.apache.org/jira/browse/YARN-5959
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5959.combined.001.patch, YARN-5959.wip.002.patch, 
> YARN-5959.wip.patch
>
>
> RM side changes to allow an AM to ask for change of ExecutionType.
> Currently, there are two cases:
> # *Promotion* : OPPORTUNISTIC to GUARANTEED.
> # *Demotion* : GUARANTEED to OPPORTUNISTIC.
> This is similar to YARN-1197, which allows for change of container resources. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5986) Adding YARN configuration entries to ContainerLaunchContext

2016-12-09 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736578#comment-15736578
 ] 

Sangjin Lee commented on YARN-5986:
---

Thanks for the clarification. If it is used only rarely and for a specific few 
entries (the few being small), I don't have an objection.

> Adding YARN configuration entries to ContainerLaunchContext
> ---
>
> Key: YARN-5986
> URL: https://issues.apache.org/jira/browse/YARN-5986
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>
> Currently ContainerLaunchContext is defined as:
> {code}
> message ContainerLaunchContextProto {
>   repeated StringLocalResourceMapProto localResources = 1;
>   optional bytes tokens = 2;
>   repeated StringBytesMapProto service_data = 3;
>   repeated StringStringMapProto environment = 4;
>   repeated string command = 5;
>   repeated ApplicationACLMapProto application_ACLs = 6;
>   optional ContainerRetryContextProto container_retry_context = 7;
> }
> {code}
> It would be nice to have an additional parameter "configuration" to support 
> cases like YARN-5600, where we want to pass a parameter to YARN and not the 
> application or container:
> {code}
>   repeated StringStringMapProto configuration = 8;
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2009) CapacityScheduler: Add intra-queue preemption for app priority support

2016-12-09 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-2009:
-
Attachment: YARN-2009.branch-2.8.001.patch

> CapacityScheduler: Add intra-queue preemption for app priority support
> --
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
>  Labels: oct16-medium
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch, 
> YARN-2009.0003.patch, YARN-2009.0004.patch, YARN-2009.0005.patch, 
> YARN-2009.0006.patch, YARN-2009.0007.patch, YARN-2009.0008.patch, 
> YARN-2009.0009.patch, YARN-2009.0010.patch, YARN-2009.0011.patch, 
> YARN-2009.0012.patch, YARN-2009.0013.patch, YARN-2009.0014.patch, 
> YARN-2009.0015.patch, YARN-2009.0016.patch, YARN-2009.branch-2.8.001.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-2009) CapacityScheduler: Add intra-queue preemption for app priority support

2016-12-09 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne reopened YARN-2009:
--

I'm backporting this to 2.8.

Reopening so that I can put it back into Patch Available, which will kick off 
a build.

> CapacityScheduler: Add intra-queue preemption for app priority support
> --
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
>  Labels: oct16-medium
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch, 
> YARN-2009.0003.patch, YARN-2009.0004.patch, YARN-2009.0005.patch, 
> YARN-2009.0006.patch, YARN-2009.0007.patch, YARN-2009.0008.patch, 
> YARN-2009.0009.patch, YARN-2009.0010.patch, YARN-2009.0011.patch, 
> YARN-2009.0012.patch, YARN-2009.0013.patch, YARN-2009.0014.patch, 
> YARN-2009.0015.patch, YARN-2009.0016.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5893) Experiment with using PriorityQueue for storing starved applications

2016-12-09 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5893:
---
Parent Issue: YARN-5990  (was: YARN-4752)

> Experiment with using PriorityQueue for storing starved applications
> 
>
> Key: YARN-5893
> URL: https://issues.apache.org/jira/browse/YARN-5893
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>
> FSStarvedApps currently uses a BlockingQueue to store starved applications. 
> This leads to storing the apps in the order they were processed rather than 
> by their priorities. Using a PriorityQueue would ensure we preempt for the 
> highest-priority apps first. 
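
A minimal sketch of the idea, assuming starved apps can be compared by an 
integer priority; the types below are placeholders, not the real FairScheduler 
classes:

{code}
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

// Hand out the highest-priority starved app first instead of FIFO order.
class StarvedApp {
  final String name;
  final int priority;  // higher value = more urgent

  StarvedApp(String name, int priority) {
    this.name = name;
    this.priority = priority;
  }
}

class StarvedApps {
  private final PriorityBlockingQueue<StarvedApp> queue =
      new PriorityBlockingQueue<>(11,
          Comparator.comparingInt((StarvedApp a) -> a.priority).reversed());

  void add(StarvedApp app) {
    queue.offer(app);
  }

  StarvedApp takeHighestPriority() throws InterruptedException {
    return queue.take();  // blocks until an app is available
  }
}
{code}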



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5832) Preemption should consider multiple resource-requests of a starved app as needed

2016-12-09 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5832:
---
Parent Issue: YARN-5990  (was: YARN-4752)

> Preemption should consider multiple resource-requests of a starved app as 
> needed
> 
>
> Key: YARN-5832
> URL: https://issues.apache.org/jira/browse/YARN-5832
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Karthik Kambatla
>
> When identifying containers to preempt for a starved application, the new 
> implementation considers only the highest priority resource request. If we 
> can't find candidate containers for that request, we might want to consider 
> other requests. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5829) FS preemption should reserve a node before considering containers on it for preemption

2016-12-09 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5829:
---
Parent Issue: YARN-5990  (was: YARN-4752)

> FS preemption should reserve a node before considering containers on it for 
> preemption
> --
>
> Key: YARN-5829
> URL: https://issues.apache.org/jira/browse/YARN-5829
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>
> FS preemption evaluates nodes for preemption, and subsequently preempts 
> identified containers. If this node is not reserved for a specific 
> application, any other application could be allocated resources on this node. 
> Reserving the node for the starved application before preempting containers 
> would help avoid this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5829) FS preemption should reserve a node before considering containers on it for preemption

2016-12-09 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla reassigned YARN-5829:
--

Assignee: Karthik Kambatla

> FS preemption should reserve a node before considering containers on it for 
> preemption
> --
>
> Key: YARN-5829
> URL: https://issues.apache.org/jira/browse/YARN-5829
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>
> FS preemption evaluates nodes for preemption, and subsequently preempts 
> identified containers. If this node is not reserved for a specific 
> application, any other application could be allocated resources on this node. 
> Reserving the node for the starved application before preempting containers 
> would help avoid this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5831) Propagate allowPreemptionFrom flag all the way down to the app

2016-12-09 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5831:
---
Parent Issue: YARN-5990  (was: YARN-4752)

> Propagate allowPreemptionFrom flag all the way down to the app
> --
>
> Key: YARN-5831
> URL: https://issues.apache.org/jira/browse/YARN-5831
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Karthik Kambatla
>
> FairScheduler allows disallowing preemption from a queue. When checking if 
> preemption for an application is allowed, the new preemption code recurses 
> all the way to the root queue to check this flag. 
> Propagating this information all the way to the app will be more efficient. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5982) Simplifying opportunistic container parameters and metrics

2016-12-09 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5982:
-
Attachment: YARN-5982.003.patch

Fixing remaining issues, including a checkstyle problem and a failing test. The 
remaining failing tests are unrelated (to be fixed as part of HADOOP-13852).

> Simplifying opportunistic container parameters and metrics
> --
>
> Key: YARN-5982
> URL: https://issues.apache.org/jira/browse/YARN-5982
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5982.001.patch, YARN-5982.002.patch, 
> YARN-5982.003.patch
>
>
> This JIRA removes some of the parameters that are related to opportunistic 
> containers (e.g., min/max memory/cpu). Instead, we will be using the 
> parameters already used by guaranteed containers.
> The goal is to reduce the number of parameters that need to be used by the 
> user.
> We also fix a small issue related to the container metrics (opportunistic 
> memory reported in GB in Web UI, although it was in MB).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5830) Avoid preempting AM containers

2016-12-09 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5830:
---
Parent Issue: YARN-5990  (was: YARN-4752)

> Avoid preempting AM containers
> --
>
> Key: YARN-5830
> URL: https://issues.apache.org/jira/browse/YARN-5830
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Karthik Kambatla
>
> While considering containers for preemption, avoid AM containers unless they 
> are the only container for the app. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5824) Verify app starvation under custom preemption thresholds and timeouts

2016-12-09 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5824:
---
Parent Issue: YARN-5990  (was: YARN-4752)

> Verify app starvation under custom preemption thresholds and timeouts
> -
>
> Key: YARN-5824
> URL: https://issues.apache.org/jira/browse/YARN-5824
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Karthik Kambatla
>
> YARN-5783 adds basic tests to verify applications are identified to be 
> starved. This JIRA is to add more advanced tests for different values of 
> preemption thresholds and timeouts. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5798) Handle FSPreemptionThread crashing due to a RuntimeException

2016-12-09 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5798:
---
Parent Issue: YARN-5990  (was: YARN-4752)

> Handle FSPreemptionThread crashing due to a RuntimeException
> 
>
> Key: YARN-5798
> URL: https://issues.apache.org/jira/browse/YARN-5798
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Yufei Gu
>
> YARN-5605 added an FSPreemptionThread. The run() method catches 
> InterruptedException. If this were to run into a RuntimeException, the 
> preemption thread would crash. We should probably fail the RM itself (or 
> transition to standby) when this happens. 
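For illustration, a minimal sketch of the suggested behavior. This is a hedged sketch only: starvedApps() and failRM() are hypothetical names standing in for the real FSPreemptionThread internals and whatever fatal-error hook the RM exposes.

{code}
@Override
public void run() {
  while (!Thread.currentThread().isInterrupted()) {
    try {
      // Hypothetical accessor: block until some app is marked starved.
      FSAppAttempt starvedApp = starvedApps().take();
      preemptContainers(starvedApp);
    } catch (InterruptedException ie) {
      // Preserve the interrupt status and let the loop exit.
      Thread.currentThread().interrupt();
    } catch (RuntimeException e) {
      // Today the thread would simply die here. Instead, surface a fatal
      // error so the RM can shut down or transition to standby.
      failRM("FSPreemptionThread crashed", e);
    }
  }
}
{code}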



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5990) [Umbrella] Follow-up (phase-2) FairScheduler preemption improvements

2016-12-09 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created YARN-5990:
--

 Summary: [Umbrella] Follow-up (phase-2) FairScheduler preemption 
improvements
 Key: YARN-5990
 URL: https://issues.apache.org/jira/browse/YARN-5990
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Karthik Kambatla


YARN-4752 overhauled FS preemption. This JIRA is to track follow-up minor 
improvements discussed in the design doc on YARN-4752. 

This is created primarily to allow for better release notes around YARN-4752.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5709) Cleanup leader election configs and pluggability

2016-12-09 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5709:
---
Attachment: yarn-5709.4.patch

We can't directly call the method on EmbeddedElector as it can be null; that 
was the reason I let it be. You are right, though, that AdminService is not the 
place for that method. It was hard to pick between RMContext and the RM itself 
given the callers, but I ended up picking RMContext, as it corresponds to the 
kind of RM state information we have traditionally kept in RMContext.

Regarding deprecating the config for embedded, let us follow up on the other 
JIRA [~templedf] is working on. 

> Cleanup leader election configs and pluggability
> 
>
> Key: YARN-5709
> URL: https://issues.apache.org/jira/browse/YARN-5709
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Blocker
> Attachments: yarn-5709-wip.2.patch, yarn-5709.1.patch, 
> yarn-5709.2.patch, yarn-5709.3.patch, yarn-5709.4.patch
>
>
> While reviewing YARN-5677 and YARN-5694, I noticed we could make the 
> curator-based election code cleaner. It is nicer to get this fixed in 2.8 
> before we ship it, but this can be done at a later time as well. 
> # By EmbeddedElector, we meant it was running as part of the RM daemon. Since 
> the Curator-based elector is also running embedded, I feel the code should be 
> checking for {{!curatorBased}} instead of {{isEmbeddedElector}}.
> # {{LeaderElectorService}} should probably be named 
> {{CuratorBasedEmbeddedElectorService}} or some such.
> # The code that initializes the elector should be at the same place 
> irrespective of whether it is curator-based or not. 
> # We seem to be caching the CuratorFramework instance in the RM. It makes more 
> sense for it to be in RMContext. If others are okay with it, we might even be 
> better off having an {{RMContext#getCurator()}} method that lazily creates the 
> curator framework and then caches it.
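For illustration, a minimal sketch of that lazy getter, assuming a zkHostPort field and Curator's stock retry policy (this is not the actual RMContext code):

{code}
private CuratorFramework curator; // created on first use, then cached

public synchronized CuratorFramework getCurator() {
  if (curator == null) {
    curator = CuratorFrameworkFactory.newClient(
        zkHostPort, new ExponentialBackoffRetry(1000, 3));
    curator.start();
  }
  return curator;
}
{code}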



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5257) Fix all Bad Practices

2016-12-09 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5257:
---
Description: 
The following code contains potential problems:
{code}
Unreleased Resource: Streams  TopCLI.java:738
Unreleased Resource: Streams  Graph.java:189
Unreleased Resource: Streams  CgroupsLCEResourcesHandler.java:291
Unreleased Resource: Streams  UnmanagedAMLauncher.java:195
Unreleased Resource: Streams  CGroupsHandlerImpl.java:319
Unreleased Resource: Streams  TrafficController.java:629
Null Dereference  ApplicationImpl.java:465
Null Dereference  VisualizeStateMachine.java:52
Null Dereference  ContainerImpl.java:1089
Null Dereference  QueueManager.java:219
Null Dereference  QueueManager.java:232
Null Dereference  ResourceLocalizationService.java:1016
Null Dereference  ResourceLocalizationService.java:1023
Null Dereference  ResourceLocalizationService.java:1040
Null Dereference  ResourceLocalizationService.java:1052
Null Dereference  ProcfsBasedProcessTree.java:802
Null Dereference  TimelineClientImpl.java:639
Null Dereference  LocalizedResource.java:206
{code}

  was:
The following code contains potential problems:
{code}
Unreleased Resource: Streams  TopCLI.java:738
Unreleased Resource: Streams  Graph.java:189
Unreleased Resource: Streams  CgroupsLCEResourcesHandler.java:291
Unreleased Resource: Streams  UnmanagedAMLauncher.java:195
Unreleased Resource: Streams  CGroupsHandlerImpl.java:319
Unreleased Resource: Streams  TrafficController.java:629
Portability Flaw: Locale Dependent Comparison  TimelineWebServices.java:421
Null Dereference  ApplicationImpl.java:465
Null Dereference  VisualizeStateMachine.java:52
Null Dereference  ContainerImpl.java:1089
Null Dereference  QueueManager.java:219
Null Dereference  QueueManager.java:232
Null Dereference  ResourceLocalizationService.java:1016
Null Dereference  ResourceLocalizationService.java:1023
Null Dereference  ResourceLocalizationService.java:1040
Null Dereference  ResourceLocalizationService.java:1052
Null Dereference  ProcfsBasedProcessTree.java:802
Null Dereference  TimelineClientImpl.java:639
Null Dereference  LocalizedResource.java:206
{code}


> Fix all Bad Practices
> -
>
> Key: YARN-5257
> URL: https://issues.apache.org/jira/browse/YARN-5257
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>
> The following code contains potential problems:
> {code}
> Unreleased Resource: Streams  TopCLI.java:738
> Unreleased Resource: Streams  Graph.java:189
> Unreleased Resource: Streams  CgroupsLCEResourcesHandler.java:291
> Unreleased Resource: Streams  UnmanagedAMLauncher.java:195
> Unreleased Resource: Streams  CGroupsHandlerImpl.java:319
> Unreleased Resource: Streams  TrafficController.java:629
> Null Dereference  ApplicationImpl.java:465
> Null Dereference  VisualizeStateMachine.java:52
> Null Dereference  ContainerImpl.java:1089
> Null Dereference  QueueManager.java:219
> Null Dereference  QueueManager.java:232
> Null Dereference  ResourceLocalizationService.java:1016
> Null Dereference  ResourceLocalizationService.java:1023
> Null Dereference  ResourceLocalizationService.java:1040
> Null Dereference  ResourceLocalizationService.java:1052
> Null Dereference  ProcfsBasedProcessTree.java:802
> Null Dereference  TimelineClientImpl.java:639
> Null Dereference  LocalizedResource.java:206
> {code}
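For reference, the two recurring finding types have standard Java fixes; a generic sketch (the file name, queue map, and submit() call are made up for illustration):

{code}
// Unreleased Resource: try-with-resources closes the stream on every path,
// including exceptions.
try (FileInputStream in = new FileInputStream("example.txt")) {
  // ... read from in ...
}

// Null Dereference: check the lookup result before dereferencing it.
Queue queue = queues.get(queueName);
if (queue == null) {
  throw new IllegalArgumentException("Unknown queue: " + queueName);
}
queue.submit(app);
{code}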



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5257) Fix all Bad Practices

2016-12-09 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu reassigned YARN-5257:
--

Assignee: Yufei Gu  (was: Daniel Templeton)

> Fix all Bad Practices
> -
>
> Key: YARN-5257
> URL: https://issues.apache.org/jira/browse/YARN-5257
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>
> The following code contains potential problems:
> {code}
> Unreleased Resource: Streams  TopCLI.java:738
> Unreleased Resource: Streams  Graph.java:189
> Unreleased Resource: Streams  CgroupsLCEResourcesHandler.java:291
> Unreleased Resource: Streams  UnmanagedAMLauncher.java:195
> Unreleased Resource: Streams  CGroupsHandlerImpl.java:319
> Unreleased Resource: Streams  TrafficController.java:629
> Portability Flaw: Locale Dependent Comparison TimelineWebServices.java:421
> Null Dereference  ApplicationImpl.java:465
> Null Dereference  VisualizeStateMachine.java:52
> Null Dereference  ContainerImpl.java:1089
> Null Dereference  QueueManager.java:219
> Null Dereference  QueueManager.java:232
> Null Dereference  ResourceLocalizationService.java:1016
> Null Dereference  ResourceLocalizationService.java:1023
> Null Dereference  ResourceLocalizationService.java:1040
> Null Dereference  ResourceLocalizationService.java:1052
> Null Dereference  ProcfsBasedProcessTree.java:802
> Null Dereference  TimelineClientImpl.java:639
> Null Dereference  LocalizedResource.java:206
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3866) AM-RM protocol changes to support container resizing

2016-12-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-3866:
-
Attachment: YARN-3866-branch-2.8.addendum.1.patch

Looks like Jenkins used the wrong branch for the run; reattaching the patch.

> AM-RM protocol changes to support container resizing
> 
>
> Key: YARN-3866
> URL: https://issues.apache.org/jira/browse/YARN-3866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: MENG DING
>Assignee: MENG DING
>Priority: Blocker
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: YARN-3866-YARN-1197.4.patch, 
> YARN-3866-branch-2.8.addendum.1.patch, YARN-3866-branch-2.8.addendum.1.patch, 
> YARN-3866.1.patch, YARN-3866.2.patch, YARN-3866.3.patch
>
>
> YARN-1447 and YARN-1448 are outdated. 
> This ticket deals with AM-RM Protocol changes to support container resize 
> according to the latest design in YARN-1197.
> 1) Add increase/decrease requests in AllocateRequest
> 2) Get approved increase/decrease requests from RM in AllocateResponse
> 3) Add relevant test cases



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4266) Allow whitelisted users to disable user re-mapping/squashing when launching docker containers

2016-12-09 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736448#comment-15736448
 ] 

Daniel Templeton commented on YARN-4266:


I don't have an issue taking the usermod path, assuming that we're only 
changing the UID on the home dir and that we clearly document what we do and 
what to do if you want to opt out. Taking the usermod path now does not 
preclude continued work to improve the process, such as trying to make the 
Docker logs approach work. It would be bad if we ended up exposing more than 
one way to solve the problem to the user, but I see no issue with evolving the 
implementation if we find a better solution. End users shouldn't care if a 
later implementation reduces the constraints or improves the process.

That said, I think it behooves us to at least investigate whether we can make 
the docker logs approach work.  I know we've done some initial research, but 
have we gone far enough to know for sure how viable it is?

> Allow whitelisted users to disable user re-mapping/squashing when launching 
> docker containers
> -
>
> Key: YARN-4266
> URL: https://issues.apache.org/jira/browse/YARN-4266
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: Zhankun Tang
> Attachments: YARN-4266-branch-2.8.001.patch, YARN-4266.001.patch, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping.pdf, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v2.pdf, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v3.pdf
>
>
> Docker provides a mechanism (the --user switch) that enables us to specify 
> the user the container processes should run as. We use this mechanism today 
> when launching docker containers. In non-secure mode, we run the docker 
> container based on 
> `yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user` and in 
> secure mode, as the submitting user. However, this mechanism breaks down with 
> a large number of 'pre-created' images which don't necessarily have the users 
> available within the image. Examples of such images include shared images 
> that need to be used by multiple users. We need a way in which we can allow a 
> pre-defined set of users to run containers based on existing images, without 
> using the --user switch. There are some implications of disabling this user 
> squashing that we'll need to work through: log aggregation, artifact 
> deletion, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5257) Fix all Bad Practices

2016-12-09 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5257:
---
Assignee: Daniel Templeton  (was: Yufei Gu)

> Fix all Bad Practices
> -
>
> Key: YARN-5257
> URL: https://issues.apache.org/jira/browse/YARN-5257
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Yufei Gu
>Assignee: Daniel Templeton
>
> The following code contains potential problems:
> {code}
> Unreleased Resource: Streams  TopCLI.java:738
> Unreleased Resource: Streams  Graph.java:189
> Unreleased Resource: Streams  CgroupsLCEResourcesHandler.java:291
> Unreleased Resource: Streams  UnmanagedAMLauncher.java:195
> Unreleased Resource: Streams  CGroupsHandlerImpl.java:319
> Unreleased Resource: Streams  TrafficController.java:629
> Portability Flaw: Locale Dependent Comparison TimelineWebServices.java:421
> Null Dereference  ApplicationImpl.java:465
> Null Dereference  VisualizeStateMachine.java:52
> Null Dereference  ContainerImpl.java:1089
> Null Dereference  QueueManager.java:219
> Null Dereference  QueueManager.java:232
> Null Dereference  ResourceLocalizationService.java:1016
> Null Dereference  ResourceLocalizationService.java:1023
> Null Dereference  ResourceLocalizationService.java:1040
> Null Dereference  ResourceLocalizationService.java:1052
> Null Dereference  ProcfsBasedProcessTree.java:802
> Null Dereference  TimelineClientImpl.java:639
> Null Dereference  LocalizedResource.java:206
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5257) Fix all Bad Practices

2016-12-09 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5257:
---
Description: 
The following code contains potential problems:
{code}
Unreleased Resource: Streams  TopCLI.java:738
Unreleased Resource: Streams  Graph.java:189
Unreleased Resource: Streams  CgroupsLCEResourcesHandler.java:291
Unreleased Resource: Streams  UnmanagedAMLauncher.java:195
Unreleased Resource: Streams  CGroupsHandlerImpl.java:319
Unreleased Resource: Streams  TrafficController.java:629
Portability Flaw: Locale Dependent Comparison  TimelineWebServices.java:421
Null Dereference  ApplicationImpl.java:465
Null Dereference  VisualizeStateMachine.java:52
Null Dereference  ContainerImpl.java:1089
Null Dereference  QueueManager.java:219
Null Dereference  QueueManager.java:232
Null Dereference  ResourceLocalizationService.java:1016
Null Dereference  ResourceLocalizationService.java:1023
Null Dereference  ResourceLocalizationService.java:1040
Null Dereference  ResourceLocalizationService.java:1052
Null Dereference  ProcfsBasedProcessTree.java:802
Null Dereference  TimelineClientImpl.java:639
Null Dereference  LocalizedResource.java:206
{code}

  was:
The following code contains potential problems:
{code}
Unreleased Resource: Streams  TopCLI.java:738
Unreleased Resource: Streams  Graph.java:189
Unreleased Resource: Streams  CgroupsLCEResourcesHandler.java:291
Unreleased Resource: Streams  UnmanagedAMLauncher.java:195
Unreleased Resource: Streams  CGroupsHandlerImpl.java:319
Unreleased Resource: Streams  TrafficController.java:629
Portability Flaw: Locale Dependent Comparison  TimelineWebServices.java:421
Null Dereference  ApplicationImpl.java:465
Null Dereference  VisualizeStateMachine.java:52
Null Dereference  ContainerImpl.java:1089
Null Dereference  QueueManager.java:219
Null Dereference  QueueManager.java:232
Null Dereference  ResourceLocalizationService.java:1016
Null Dereference  ResourceLocalizationService.java:1023
Null Dereference  ResourceLocalizationService.java:1040
Null Dereference  ResourceLocalizationService.java:1052
Null Dereference  ProcfsBasedProcessTree.java:802
Null Dereference  TimelineClientImpl.java:639
Null Dereference  LocalizedResource.java:206
Code Correctness: Double-Checked Locking  ResourceHandlerModule.java:142
Code Correctness: Double-Checked Locking  RMPolicyProvider.java:51
{code}
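The two "Double-Checked Locking" findings dropped from the description above refer to a well-known Java pitfall: without a volatile field, double-checked lazy initialization is broken under the Java memory model. A minimal sketch of the safe idiom, with a hypothetical Singleton class:

{code}
private static volatile Singleton instance; // volatile is what makes DCL safe

static Singleton getInstance() {
  Singleton result = instance; // one volatile read on the fast path
  if (result == null) {
    synchronized (Singleton.class) {
      result = instance;
      if (result == null) {
        instance = result = new Singleton();
      }
    }
  }
  return result;
}
{code}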


> Fix all Bad Practices
> -
>
> Key: YARN-5257
> URL: https://issues.apache.org/jira/browse/YARN-5257
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>
> The following code contains potential problems:
> {code}
> Unreleased Resource: Streams  TopCLI.java:738
> Unreleased Resource: Streams  Graph.java:189
> Unreleased Resource: Streams  CgroupsLCEResourcesHandler.java:291
> Unreleased Resource: Streams  UnmanagedAMLauncher.java:195
> Unreleased Resource: Streams  CGroupsHandlerImpl.java:319
> Unreleased Resource: Streams  TrafficController.java:629
> Portability Flaw: Locale Dependent Comparison TimelineWebServices.java:421
> Null Dereference  ApplicationImpl.java:465
> Null Dereference  VisualizeStateMachine.java:52
> Null Dereference  ContainerImpl.java:1089
> Null Dereference  QueueManager.java:219
> Null Dereference  QueueManager.java:232
> Null Dereference  ResourceLocalizationService.java:1016
> Null Dereference  ResourceLocalizationService.java:1023
> Null Dereference  ResourceLocalizationService.java:1040
> Null Dereference  ResourceLocalizationService.java:1052
> Null Dereference  ProcfsBasedProcessTree.java:802
> Null Dereference  TimelineClientImpl.java:639
> Null Dereference  LocalizedResource.java:206
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3409) Add constraint node labels

2016-12-09 Thread Lei Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736246#comment-15736246
 ] 

Lei Guo commented on YARN-3409:
---

So far, can we all agree on the following?
- A Boolean-type constraint is definitely needed for different scenarios
- Pre-defined constraint types cannot handle all scenarios
- Late-binding and String-type constraints may introduce a performance penalty

If we all agree, there is a need for different constraint types. Though we may 
want to simplify the design/implementation at the beginning, we still need to 
set up the framework to support different types first; otherwise it will be 
difficult to add a type attribute later. From my view, if we just do the 
Boolean type, it's almost the same as the current partition label. 


> Add constraint node labels
> --
>
> Key: YARN-3409
> URL: https://issues.apache.org/jira/browse/YARN-3409
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Wangda Tan
>Assignee: Naganarasimha G R
> Attachments: Constraint-Node-Labels-Requirements-Design-doc_v1.pdf, 
> YARN-3409.WIP.001.patch
>
>
> Specifying only one label for each node (in other words, partitioning a 
> cluster) is a way to determine how the resources of a specific set of nodes 
> can be shared by a group of entities (like teams, departments, etc.). 
> Partitions of a cluster have the following characteristics:
> - The cluster is divided into several disjoint sub-clusters.
> - ACLs/priority can apply to a partition (e.g., only the market team has 
> priority to use the partition).
> - Capacity percentages can apply to a partition (the market team has a 40% 
> minimum capacity and the dev team has a 60% minimum capacity of the 
> partition).
> Constraints are orthogonal to partitions; they describe attributes of a 
> node's hardware/software, purely for affinity. Some examples of constraints:
> - glibc version
> - JDK version
> - Type of CPU (x86_64/i686)
> - Type of OS (windows, linux, etc.)
> With this, an application can ask for resources that satisfy (glibc.version 
> >= 2.20 && JDK.version >= 8u20 && x86_64).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3866) AM-RM protocol changes to support container resizing

2016-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736114#comment-15736114
 ] 

Hadoop QA commented on YARN-3866:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 10s{color} 
| {color:red} YARN-3866 does not apply to YARN-1197. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-3866 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12745125/YARN-3866-YARN-1197.4.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14243/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> AM-RM protocol changes to support container resizing
> 
>
> Key: YARN-3866
> URL: https://issues.apache.org/jira/browse/YARN-3866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: MENG DING
>Assignee: MENG DING
>Priority: Blocker
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: YARN-3866-YARN-1197.4.patch, 
> YARN-3866-branch-2.8.addendum.1.patch, YARN-3866.1.patch, YARN-3866.2.patch, 
> YARN-3866.3.patch
>
>
> YARN-1447 and YARN-1448 are outdated. 
> This ticket deals with AM-RM Protocol changes to support container resize 
> according to the latest design in YARN-1197.
> 1) Add increase/decrease requests in AllocateRequest
> 2) Get approved increase/decrease requests from RM in AllocateResponse
> 3) Add relevant test cases



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4390) Do surgical preemption based on reserved container in CapacityScheduler

2016-12-09 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736088#comment-15736088
 ] 

Wangda Tan commented on YARN-4390:
--

+1, thanks [~eepayne]

> Do surgical preemption based on reserved container in CapacityScheduler
> ---
>
> Key: YARN-4390
> URL: https://issues.apache.org/jira/browse/YARN-4390
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha1
>Reporter: Eric Payne
>Assignee: Wangda Tan
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: QueueNotHittingMax.jpg, YARN-4390-design.1.pdf, 
> YARN-4390-test-results.pdf, YARN-4390.1.patch, YARN-4390.2.patch, 
> YARN-4390.3.branch-2.patch, YARN-4390.3.patch, YARN-4390.4.patch, 
> YARN-4390.5.patch, YARN-4390.6.patch, YARN-4390.7.patch, YARN-4390.8.patch, 
> YARN-4390.branch-2.8.001.patch, YARN-4390.branch-2.8.002.patch, 
> YARN-4390.branch-2.8.003.patch
>
>
> There are multiple reasons why preemption could unnecessarily preempt 
> containers. One is that an app could be requesting a large container (say 
> 8-GB), and the preemption monitor could conceivably preempt multiple 
> containers (say 8, 1-GB containers) in order to fill the large container 
> request. These smaller containers would then be rejected by the requesting AM 
> and potentially given right back to the preempted app.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-1042) add ability to specify affinity/anti-affinity in container requests

2016-12-09 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736084#comment-15736084
 ] 

Wangda Tan commented on YARN-1042:
--

Thanks [~kkaranasos] for these suggestions.

I agree with merging the scheduler implementations that support affinity and 
anti-affinity; we should not have duplicated code for the two.

However, personally I'm not in favor of min-cardinality / max-cardinality, for 
two reasons:
1) Compared to affinity-to / anti-affinity-to, it is not straightforward 
enough for a normal YARN user. (Of course, it is easy for a PhD :p)
2) We have to trade off syntax flexibility against feature availability. 
Users will be happy if YARN allows them to specify as many constraints as they 
want, but they will soon be frustrated if we cannot deliver in time. For 
example, a user can ask to allocate min=10/max=10 MPI tasks per host, but in 
reality that could be quite hard to satisfy in a busy cluster. It is also going 
to be hard for YARN to optimize such constraints, for example, deciding how to 
preempt containers to satisfy min=max=10 cardinality.

I have a simpler suggestion that handles the most likely use cases as a first 
step:
- Extend the existing ResourceRequest by adding a new 
{{placementConstraintExpression}}
- Simple anti-affinity (min=max=1)
- Simple affinity (min=2, max=infinity)
- Within an app and between apps
- Once tags are allowed to be specified in ResourceRequest, we can support 
(anti-)affinity for tags

The API could look like:
- {{affinity-to app=app_1234_0002}}
- {{anti-affinity-to app=app_1234_0002}}
- {{anti-affinity-to tag="Hbase-master"}}
- {{anti-affinity-to app=app_1234_0002,tag="HBase-master"}}

We can think about richer syntax (like affinity to a rack / the cluster, etc.) 
when doing YARN-4902.
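As a purely hypothetical illustration of the proposal (setPlacementConstraintExpression does not exist; only the ResourceRequest factory calls below are real API), the expression could ride along on the request like this:

{code}
ResourceRequest req = ResourceRequest.newInstance(
    Priority.newInstance(1), ResourceRequest.ANY,
    Resource.newInstance(1024, 1), 1);
// Hypothetical new setter carrying the proposed expression:
req.setPlacementConstraintExpression("anti-affinity-to tag=\"HBase-master\"");
{code}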

> add ability to specify affinity/anti-affinity in container requests
> ---
>
> Key: YARN-1042
> URL: https://issues.apache.org/jira/browse/YARN-1042
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Steve Loughran
>Assignee: Wangda Tan
> Attachments: YARN-1042-demo.patch, YARN-1042-design-doc.pdf, 
> YARN-1042-global-scheduling.poc.1.patch, YARN-1042.001.patch, 
> YARN-1042.002.patch
>
>
> container requests to the AM should be able to request anti-affinity to 
> ensure that things like Region Servers don't come up on the same failure 
> zones. 
> Similarly, you may be able to want to specify affinity to same host or rack 
> without specifying which specific host/rack. Example: bringing up a small 
> giraph cluster in a large YARN cluster would benefit from having the 
> processes in the same rack purely for bandwidth reasons.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4390) Do surgical preemption based on reserved container in CapacityScheduler

2016-12-09 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736080#comment-15736080
 ] 

Eric Payne commented on YARN-4390:
--

{quote}
bq. the patch LGTM except the javac warning: getMemory() should be replaced by 
getMemorySize().
Thanks Wangda Tan. I've addressed the getMemorySize() issues and attached new 
patch.
{quote}
After upmerging, I will go ahead and push this to branch-2.8.

> Do surgical preemption based on reserved container in CapacityScheduler
> ---
>
> Key: YARN-4390
> URL: https://issues.apache.org/jira/browse/YARN-4390
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha1
>Reporter: Eric Payne
>Assignee: Wangda Tan
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: QueueNotHittingMax.jpg, YARN-4390-design.1.pdf, 
> YARN-4390-test-results.pdf, YARN-4390.1.patch, YARN-4390.2.patch, 
> YARN-4390.3.branch-2.patch, YARN-4390.3.patch, YARN-4390.4.patch, 
> YARN-4390.5.patch, YARN-4390.6.patch, YARN-4390.7.patch, YARN-4390.8.patch, 
> YARN-4390.branch-2.8.001.patch, YARN-4390.branch-2.8.002.patch, 
> YARN-4390.branch-2.8.003.patch
>
>
> There are multiple reasons why preemption could unnecessarily preempt 
> containers. One is that an app could be requesting a large container (say 
> 8-GB), and the preemption monitor could conceivably preempt multiple 
> containers (say 8, 1-GB containers) in order to fill the large container 
> request. These smaller containers would then be rejected by the requesting AM 
> and potentially given right back to the preempted app.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3866) AM-RM protocol changes to support container resizing

2016-12-09 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736065#comment-15736065
 ] 

Arun Suresh commented on YARN-3866:
---

Thanks for working on this, [~leftnoteasy].

In yarn_service_protos.proto, instead of incrementing the index numbers of 
*am_rm_token* and *application_priority*, you can probably just change the 
index of *updated_container* to 16 and leave the other two as they are. That 
way, the branch-2 and trunk patches will not conflict with the 
*collector_addr* field, which is 14.

+1 otherwise.

> AM-RM protocol changes to support container resizing
> 
>
> Key: YARN-3866
> URL: https://issues.apache.org/jira/browse/YARN-3866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: MENG DING
>Assignee: MENG DING
>Priority: Blocker
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: YARN-3866-YARN-1197.4.patch, 
> YARN-3866-branch-2.8.addendum.1.patch, YARN-3866.1.patch, YARN-3866.2.patch, 
> YARN-3866.3.patch
>
>
> YARN-1447 and YARN-1448 are outdated. 
> This ticket deals with AM-RM Protocol changes to support container resize 
> according to the latest design in YARN-1197.
> 1) Add increase/decrease requests in AllocateRequest
> 2) Get approved increase/decrease requests from RM in AllocateResponse
> 3) Add relevant test cases



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5953) Create CLI for changing YARN configurations

2016-12-09 Thread Ye Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ye Zhou reassigned YARN-5953:
-

Assignee: Ye Zhou

> Create CLI for changing YARN configurations
> ---
>
> Key: YARN-5953
> URL: https://issues.apache.org/jira/browse/YARN-5953
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Ye Zhou
>
> Based on the design in YARN-5734.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5734) OrgQueue for easy CapacityScheduler queue configuration management

2016-12-09 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736005#comment-15736005
 ] 

Wangda Tan commented on YARN-5734:
--

bq. If the reinitialization fails (i.e. scheduler.reinitialize(X+1)), then we 
will need to call scheduler.reinitialize(X). In this case we need to call 
reinitialize twice. Is this acceptable? 
If everything works as expected, a reinitialize failure will not change the 
queue hierarchy. If there are any cases in which the queue structure still 
gets updated when reinitialize fails, queue configs could be left in a limbo 
state; we need to fix such cases separately. 

bq. I think we will still need some sort of PluggablePolicy,... 
Makes sense.

bq. Not sure if this is what you meant ..
I'm not sure what the interface design is, but I think the logic you described 
is roughly the same as what I have in mind. We can check the detailed logic 
while reviewing the patch.

bq. I am thinking we can add a scheduler specific ConfigurationProvider option 
in yarn-site.xml
Instead of specifying a ConfigurationProvider, I think it might be easier for 
the end user to specify a config like 
{{...scheduler.dynamic-queue-config.enabled}}. We can then use a different 
ConfigurationProvider implementation depending on the value of 
dynamic-queue-config.enabled.
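A minimal sketch of that selection logic; the property name and both provider classes are placeholders for whatever the patch ends up defining (only Configuration#getBoolean is real Hadoop API):

{code}
boolean dynamicEnabled = conf.getBoolean(
    "yarn.scheduler.capacity.dynamic-queue-config.enabled", false);
ConfigurationProvider provider = dynamicEnabled
    ? new StoreBackedConfigurationProvider()  // mutable, updated via REST
    : new FileBasedConfigurationProvider();   // classic capacity-scheduler.xml
{code}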

bq. Not sure what you mean by loading configuration file from xml while setting 
the cluster, can you elaborate on that? Do you mean if store is enabled and the 
admin wants to wipe it and load a new conf from a file into the store? Do we 
plan on supporting that?
If we allow initializing the store-based config from capacity-scheduler.xml, 
this is not required.




> OrgQueue for easy CapacityScheduler queue configuration management
> --
>
> Key: YARN-5734
> URL: https://issues.apache.org/jira/browse/YARN-5734
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Min Shen
>Assignee: Min Shen
> Attachments: OrgQueue_API-Based_Config_Management_v1.pdf, 
> OrgQueue_Design_v0.pdf
>
>
> The current xml based configuration mechanism in CapacityScheduler makes it 
> very inconvenient to apply any changes to the queue configurations. We saw 2 
> main drawbacks in the file based configuration mechanism:
> # This makes it very inconvenient to automate queue configuration updates. 
> For example, in our cluster setup, we leverage the queue mapping feature from 
> YARN-2411 to route users to their dedicated organization queues. It could be 
> extremely cumbersome to keep updating the config file to manage the very 
> dynamic mapping between users to organizations.
> # Even if a user has admin permission on one specific queue, that user is 
> unable to make any queue configuration changes to resize the subqueues, 
> change queue ACLs, or create new queues. All these operations need to be 
> performed in a centralized manner by the cluster administrators.
> With these current limitations, we realized the need for a more flexible 
> configuration mechanism that allows queue configurations to be stored and 
> managed more dynamically. We developed the feature internally at LinkedIn 
> which introduces the concept of MutableConfigurationProvider. What it 
> essentially does is to provide a set of configuration mutation APIs that 
> allows queue configurations to be updated externally with a set of REST APIs. 
> When performing the queue configuration changes, the queue ACLs will be 
> honored, which means only queue administrators can make configuration changes 
> to a given queue. MutableConfigurationProvider is implemented as a pluggable 
> interface, and we have one implementation of this interface which is based on 
> Derby embedded database.
> This feature has been deployed on LinkedIn's Hadoop cluster for a year now, 
> and has gone through several iterations of gathering feedback from users 
> and improving accordingly. With this feature, cluster administrators are able 
> to automate lots of the queue configuration management tasks, such as setting 
> the queue capacities to adjust cluster resources between queues based on 
> established resource consumption patterns, or managing updates to the 
> user-to-queue mappings. We have attached our design documentation to this 
> ticket and would like to receive feedback from the community regarding how 
> to best integrate it with the latest version of YARN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3866) AM-RM protocol changes to support container resizing

2016-12-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-3866:
-
Attachment: YARN-3866-branch-2.8.addendum.1.patch

Attached addendum.1 patch to fix the issue.

We need two fixes added to branch-2/trunk as well:

1) In AllocateResponseProto, we need to reserve id=10/11 for the two fields.
2) In AllocateRequest#newInstance, the signature of the removed stable method 
conflicts with the new method:
{code}
  public static AllocateRequest newInstance(int responseID, float appProgress,
      List<ResourceRequest> resourceAsk,
      List<ContainerId> containersToBeReleased,
      ResourceBlacklistRequest resourceBlacklistRequest,
      List<UpdateContainerRequest> updateRequests)
{code}

So I had to switch the last two parameters to avoid the conflict.
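For illustration, swapping the last two parameters gives the two overloads distinct erasures, since one now ends in (..., List, ResourceBlacklistRequest) instead of (..., ResourceBlacklistRequest, List). A sketch of the idea, not necessarily the exact committed signature:

{code}
  public static AllocateRequest newInstance(int responseID, float appProgress,
      List<ResourceRequest> resourceAsk,
      List<ContainerId> containersToBeReleased,
      List<UpdateContainerRequest> updateRequests,
      ResourceBlacklistRequest resourceBlacklistRequest)
{code}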

[~asuresh], could you check these two places? You may have already deployed 
YARN from branch-2.

Thanks,

> AM-RM protocol changes to support container resizing
> 
>
> Key: YARN-3866
> URL: https://issues.apache.org/jira/browse/YARN-3866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: MENG DING
>Assignee: MENG DING
>Priority: Blocker
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: YARN-3866-YARN-1197.4.patch, 
> YARN-3866-branch-2.8.addendum.1.patch, YARN-3866.1.patch, YARN-3866.2.patch, 
> YARN-3866.3.patch
>
>
> YARN-1447 and YARN-1448 are outdated. 
> This ticket deals with AM-RM Protocol changes to support container resize 
> according to the latest design in YARN-1197.
> 1) Add increase/decrease requests in AllocateRequest
> 2) Get approved increase/decrease requests from RM in AllocateResponse
> 3) Add relevant test cases



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5986) Adding YARN configuration entries to ContainerLaunchContext

2016-12-09 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735826#comment-15735826
 ] 

Miklos Szegedi commented on YARN-5986:
--

Thank you for the comment, [~sjlee0]! No, I am talking only about entries for 
the node manager about specific features with application scope. As of today, 
YARN-5600 is the only candidate, and maybe YARN-5987. We can probably make a 
rule that these entries only need to be specified to enable rare features; the 
defaults apply in most cases, which keeps the network bandwidth cost down.

> Adding YARN configuration entries to ContainerLaunchContext
> ---
>
> Key: YARN-5986
> URL: https://issues.apache.org/jira/browse/YARN-5986
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>
> Currently ContainerLaunchContext is defined as
> message ContainerLaunchContextProto {
>   repeated StringLocalResourceMapProto localResources = 1;
>   optional bytes tokens = 2;
>   repeated StringBytesMapProto service_data = 3;
>   repeated StringStringMapProto environment = 4;
>   repeated string command = 5;
>   repeated ApplicationACLMapProto application_ACLs = 6;
>   optional ContainerRetryContextProto container_retry_context = 7;
> }
> It would be nice to have an additional parameter "configuration" to support 
> cases like YARN-5600, where we want to pass a parameter to YARN and not to 
> the application or container.
>   repeated StringStringMapProto configuration = 8;
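As a hypothetical usage sketch of the proposed field (setConfiguration is not a real ContainerLaunchContext method, and the property name is made up; the newInstance call itself is real API, with the other arguments assumed to be in scope):

{code}
Map<String, String> yarnParams = new HashMap<>();
yarnParams.put("yarn.nodemanager.example-feature.enabled", "true");

ContainerLaunchContext clc = ContainerLaunchContext.newInstance(
    localResources, environment, commands, serviceData, tokens, acls);
clc.setConfiguration(yarnParams); // hypothetical new API for the new field
{code}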



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5304) Ship single node HBase config option with single startup command

2016-12-09 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735772#comment-15735772
 ] 

Vrushali C commented on YARN-5304:
--

Thanks Joep, I will take a look at this along with YARN-5980, and might close 
that one as a duplicate depending on what I see. I will most likely start on 
this next week, but will aim to close it out as soon as possible. 

> Ship single node HBase config option with single startup command
> 
>
> Key: YARN-5304
> URL: https://issues.apache.org/jira/browse/YARN-5304
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Joep Rottinghuis
>  Labels: YARN-5355
>
> For small to medium Hadoop deployments we should make it dead-simple to use 
> the timeline service v2. We should have a single command to launch and stop 
> the timeline service back-end for the default HBase implementation.
> A default config with all the recommended settings should be packaged, so 
> that a single command launches all the needed daemons (on the RM node).
> A timeline admin command may be needed, perhaps with an init subcommand; or 
> perhaps the timeline service can even auto-detect the setup and create 
> tables, deploy the needed coprocessors, etc.
> The overall purpose is to ensure nobody needs to be an HBase expert to get 
> this going. Cluster operators with HBase experience can still choose their 
> own, more sophisticated deployment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5970) Validate application update timeout request parameters

2016-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735679#comment-15735679
 ] 

Hadoop QA commented on YARN-5970:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
44s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
0s{color} | {color:green} branch-2 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
19s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
31s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
38s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} branch-2 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
49s{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.7.0_121. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m 19s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}140m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_121 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.TestRMAdminService |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  

[jira] [Comment Edited] (YARN-5292) NM Container lifecycle and state transitions to support for PAUSED container state.

2016-12-09 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15731526#comment-15731526
 ] 

Arun Suresh edited comment on YARN-5292 at 12/9/16 3:46 PM:


Hi [~kasha], the design doc and the patch allow an opportunistic container to 
be paused and resumed. The actual implementation of the pluggable interface 
will come as part of [YARN-5216]; there, via yarn-site.xml, you can specify 
the preemption policy for an opportunistic container when a guaranteed 
container is waiting to run (i.e. pause or kill). Currently only the NM 
scheduler can raise the PAUSE/RESUME events, and there is no support for doing 
so in the container management protocol.

One of the questions on the table is whether there is a need to extend 
PAUSE/RESUME support to guaranteed containers and have the AMs initiate it. 
Would love to hear some thoughts on that, and about any use cases that could 
benefit from it.


was (Author: hrsharma):
Hi [~kasha], the design doc and the patch allow an opportunistic container to 
be paused and resumed. The actual implementation of the pluggable interface 
will come as part of [YARN-5196]; there, via yarn-site.xml, you can specify 
the preemption policy for an opportunistic container when a guaranteed 
container is waiting to run (i.e. pause or kill). Currently only the NM 
scheduler can raise the PAUSE/RESUME events, and there is no support for doing 
so in the container management protocol.

One of the questions on the table is whether there is a need to extend 
PAUSE/RESUME support to guaranteed containers and have the AMs initiate it. 
Would love to hear some thoughts on that, and about any use cases that could 
benefit from it.

> NM Container lifecycle and state transitions to support for PAUSED container 
> state.
> ---
>
> Key: YARN-5292
> URL: https://issues.apache.org/jira/browse/YARN-5292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Hitesh Sharma
>Assignee: Hitesh Sharma
> Attachments: YARN-5292.001.patch, YARN-5292.002.patch, 
> YARN-5292.003.patch, YARN-5292.004.patch, YARN-5292.005.patch
>
>
> This JIRA addresses the NM container state machine and lifecycle changes 
> needed to support pausing.
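For illustration, a pause/resume arc wired through YARN's StateMachineFactory builder might look like the excerpt below; the PAUSED state, the event types, and the transition classes are hypothetical names from the proposal, not committed code:

{code}
// Excerpt from a hypothetical StateMachineFactory builder chain:
.addTransition(ContainerState.RUNNING, ContainerState.PAUSED,
    ContainerEventType.PAUSE_CONTAINER, new PauseContainerTransition())
.addTransition(ContainerState.PAUSED, ContainerState.RUNNING,
    ContainerEventType.RESUME_CONTAINER, new ResumeContainerTransition())
{code}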



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5292) NM Container lifecycle and state transitions to support for PAUSED container state.

2016-12-09 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735648#comment-15735648
 ] 

Arun Suresh edited comment on YARN-5292 at 12/9/16 3:52 PM:


Committed to the YARN-5972 branch.
Thanks for the patch, [~hrsharma].


was (Author: asuresh):
Committed to YARN-5972.
Thanks for the patch [~hrsharma]..

> NM Container lifecycle and state transitions to support for PAUSED container 
> state.
> ---
>
> Key: YARN-5292
> URL: https://issues.apache.org/jira/browse/YARN-5292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Hitesh Sharma
>Assignee: Hitesh Sharma
> Attachments: YARN-5292.001.patch, YARN-5292.002.patch, 
> YARN-5292.003.patch, YARN-5292.004.patch, YARN-5292.005.patch
>
>
> This JIRA addresses the NM container state machine and lifecycle changes 
> needed to support pausing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5292) NM Container lifecycle and state transitions to support for PAUSED container state.

2016-12-09 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735638#comment-15735638
 ] 

Arun Suresh commented on YARN-5292:
---

Committing this to the YARN-5972 branch shortly.
Regarding [~subru]'s comment on the applicability to guaranteed containers, we 
can continue the discussion in YARN-5972 and tackle any required changes in 
subsequent JIRAs. In the meantime, YARN-5216 will ensure that pausing is used 
only as an alternative to killing opportunistic containers.


> NM Container lifecycle and state transitions to support for PAUSED container 
> state.
> ---
>
> Key: YARN-5292
> URL: https://issues.apache.org/jira/browse/YARN-5292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Hitesh Sharma
>Assignee: Hitesh Sharma
> Attachments: YARN-5292.001.patch, YARN-5292.002.patch, 
> YARN-5292.003.patch, YARN-5292.004.patch, YARN-5292.005.patch
>
>
> This JIRA addresses the NM container state machine and lifecycle changes 
> needed to support pausing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2962) ZKRMStateStore: Limit the number of znodes under a znode

2016-12-09 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735583#comment-15735583
 ] 

Varun Saxena commented on YARN-2962:


Sorry, I have been busy lately. Give me a week and I will update it.

> ZKRMStateStore: Limit the number of znodes under a znode
> 
>
> Key: YARN-2962
> URL: https://issues.apache.org/jira/browse/YARN-2962
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Varun Saxena
>Priority: Critical
> Attachments: YARN-2962.01.patch, YARN-2962.04.patch, 
> YARN-2962.05.patch, YARN-2962.2.patch, YARN-2962.3.patch
>
>
> We ran into this issue where we were hitting the default ZK server message 
> size configs, primarily because the message had too many znodes, even though 
> individually they were all small.
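One common way to cap the fan-out, sketched generically; the paths and the split rule are illustrative only, not the actual ZKRMStateStore layout:

{code}
// Bucket application znodes by the trailing digits of the app id, so no
// single parent znode accumulates every child.
String appId = "application_1480000000000_12345";
String bucket = appId.substring(appId.length() - 2);            // "45"
String znodePath = "/rmstore/RMAppRoot/" + bucket + "/" + appId;
// e.g. /rmstore/RMAppRoot/45/application_1480000000000_12345
{code}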



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5989) hadoop build to allow hadoop version property to be explicitly set (YARN test)

2016-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735484#comment-15735484
 ] 

Hadoop QA commented on YARN-5989:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
14s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 13s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
34s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5989 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842558/HADOOP-13582-003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 420e3410b2d3 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 80b8023 |
| Default Java | 1.8.0_111 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/14242/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14242/testReport/ |
| modules | C: hadoop-project hadoop-common-project/hadoop-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14242/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> hadoop build to allow hadoop version property to be explicitly set (YARN test)
> 

[jira] [Updated] (YARN-5989) hadoop build to allow hadoop version property to be explicitly set (YARN test)

2016-12-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated YARN-5989:
-
Attachment: HADOOP-13582-003.patch

> hadoop build to allow hadoop version property to be explicitly set (YARN test)
> --
>
> Key: YARN-5989
> URL: https://issues.apache.org/jira/browse/YARN-5989
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13582-003.patch
>
>
> Test the patch for HADOOP-13852 against YARN, as it has effects there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5989) hadoop build to allow hadoop version property to be explicitly set (YARN test)

2016-12-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated YARN-5989:
-
Priority: Minor  (was: Major)

> hadoop build to allow hadoop version property to be explicitly set (YARN test)
> --
>
> Key: YARN-5989
> URL: https://issues.apache.org/jira/browse/YARN-5989
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13582-003.patch
>
>
> Test the patch for HADOOP-13852 against YARN, as it has effects there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5970) Validate application update timeout request parameters

2016-12-09 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-5970:

Attachment: YARN-5970-branch-2.0001.patch

Uploaded the branch-2 patch.

> Validate application update timeout request parameters
> --
>
> Key: YARN-5970
> URL: https://issues.apache.org/jira/browse/YARN-5970
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-5970-branch-2.0001.patch, YARN-5970.0.patch
>
>
> AppTimeoutInfo is exposed to REST clients. The request parameters of this 
> object should be validated before they are passed on to ClientRMService.
> This also handles a couple of minor issues in the REST services.
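
To make the intent concrete, a hedged sketch of the kind of validation meant 
here; the class and parameter names are assumptions for illustration, and the 
real checks would live in the REST layer in front of ClientRMService.

{code}
import java.text.ParseException;
import java.text.SimpleDateFormat;

// Illustrative only: reject malformed timeout-update requests before they
// are handed to ClientRMService.
public class AppTimeoutValidator {
  public static void validate(String timeoutType, String expiryTime) {
    if (timeoutType == null || timeoutType.isEmpty()) {
      throw new IllegalArgumentException("Timeout type must be specified.");
    }
    if (expiryTime == null || expiryTime.isEmpty()) {
      throw new IllegalArgumentException("Expiry time must be specified.");
    }
    try {
      // Assumes the expiry is supplied as an ISO8601 UTC timestamp.
      new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'").parse(expiryTime);
    } catch (ParseException e) {
      throw new IllegalArgumentException(
          "Expiry time is not a valid ISO8601 timestamp: " + expiryTime, e);
    }
  }

  public static void main(String[] args) {
    validate("LIFETIME", "2016-12-09T10:00:00Z"); // passes
  }
}
{code}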



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5989) hadoop build to allow hadoop version property to be explicitly set (YARN test)

2016-12-09 Thread Steve Loughran (JIRA)
Steve Loughran created YARN-5989:


 Summary: hadoop build to allow hadoop version property to be 
explicitly set (YARN test)
 Key: YARN-5989
 URL: https://issues.apache.org/jira/browse/YARN-5989
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0-alpha2
Reporter: Steve Loughran
Assignee: Steve Loughran


Test the patch for HADOOP-13852 against YARN, as it has effects there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5988) RM unable to start in secure setup

2016-12-09 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735059#comment-15735059
 ] 

Rohith Sharma K S commented on YARN-5988:
-

Thanks, Ajith, for the patch. The approach looks fine to me.
Some comments on the patch:
# The below code is not required, since the refresh is already done in the 
serviceStart of the services.
{noformat}
try {
  refreshActiveServicesAcls();
} catch (YarnException e) {
  LOG.error("RefreshAll failed so firing fatal event", e);
  rmContext
      .getDispatcher()
      .getEventHandler()
      .handle(new RMFatalEvent(
          RMFatalEventType.TRANSITION_TO_ACTIVE_FAILED, e));
  throw new ServiceFailedException(
      "Error on refreshAll during transition to Active", e);
}
{noformat}
# Add a test case.

> RM unable to start in secure setup
> --
>
> Key: YARN-5988
> URL: https://issues.apache.org/jira/browse/YARN-5988
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Blocker
> Attachments: YARN-5988.01.patch
>
>
> When CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION=true
> RM is unable to start
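
For context, the constant in question resolves to the 
hadoop.security.authorization key; a minimal sketch of flipping that switch 
programmatically (the surrounding class is illustrative, the key and constant 
are real):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeysPublic;

// Illustrative only: equivalent to setting hadoop.security.authorization=true
// in core-site.xml, which enables the service-ACL refresh path that fails here.
public class AuthorizationSwitch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setBoolean(
        CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION, true);
    System.out.println("service-level authorization: " + conf.getBoolean(
        CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION, false));
  }
}
{code}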



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5988) RM unable to start in secure setup

2016-12-09 Thread Ajith S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajith S updated YARN-5988:
--
Attachment: YARN-5988.01.patch

Attaching the initial patch; I will add a test case soon.

> RM unable to start in secure setup
> --
>
> Key: YARN-5988
> URL: https://issues.apache.org/jira/browse/YARN-5988
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Blocker
> Attachments: YARN-5988.01.patch
>
>
> When CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION=true
> RM is unable to start



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5912) [YARN-3368] Fix breadcrumb issues in yarn-node page in new YARN UI

2016-12-09 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15734876#comment-15734876
 ] 

Sunil G commented on YARN-5912:
---

A similar issue is observed on the Application page as well. Please fix that here too.

> [YARN-3368] Fix breadcrumb issues in yarn-node page in new YARN UI
> --
>
> Key: YARN-5912
> URL: https://issues.apache.org/jira/browse/YARN-5912
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Minor
> Attachments: YARN-5912.001.patch
>
>
> Fix breadcrumb issues in yarn-node-app and yarn-node-container pages in new 
> YARN UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5988) RM unable to start in secure setup

2016-12-09 Thread Ajith S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15734863#comment-15734863
 ] 

Ajith S commented on YARN-5988:
---

{code}
java.lang.NullPointerException
    at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.refreshServiceAcls(ClientRMService.java:1271)
    at org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshServiceAcls(AdminService.java:557)
    at org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshAll(AdminService.java:653)
    at org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:315)
    at org.apache.hadoop.yarn.server.resourcemanager.EmbeddedElectorService.becomeActive(EmbeddedElectorService.java:126)
    at org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:832)
    at org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:422)
    at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:723)
    at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:595)
{code}

> RM unable to start in secure setup
> --
>
> Key: YARN-5988
> URL: https://issues.apache.org/jira/browse/YARN-5988
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Blocker
>
> When CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION=true
> RM is unable to start



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5988) RM unable to start in secure setup

2016-12-09 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15734825#comment-15734825
 ] 

Naganarasimha G R commented on YARN-5988:
-

Updated the target versions, as this only impacts versions after YARN-5333.

> RM unable to start in secure setup
> --
>
> Key: YARN-5988
> URL: https://issues.apache.org/jira/browse/YARN-5988
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Blocker
>
> When CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION=true
> RM is unable to start



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5988) RM unable to start in secure setup

2016-12-09 Thread Ajith S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajith S updated YARN-5988:
--
Target Version/s: 2.9.0, 3.0.0-alpha2  (was: 2.8.0, 2.9.0, 2.7.4, 
3.0.0-alpha2, 2.6.6)

> RM unable to start in secure setup
> --
>
> Key: YARN-5988
> URL: https://issues.apache.org/jira/browse/YARN-5988
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Blocker
>
> When CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION=true
> RM is unable to start



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5988) RM unable to start in secure setup

2016-12-09 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5988:
---
Affects Version/s: 2.9.0
                   3.0.0-alpha1

> RM unable to start in secure setup
> --
>
> Key: YARN-5988
> URL: https://issues.apache.org/jira/browse/YARN-5988
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Blocker
>
> When CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION=true
> RM is unable to start



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5988) RM unable to start in secure setup

2016-12-09 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15734808#comment-15734808
 ] 

Naganarasimha G R commented on YARN-5988:
-

Thanks for raising the issue, [~ajithshetty].
As discussed offline, we need to segregate the calls related to 
ClientRMService, ApplicationMasterService, and ResourceTrackerService into a 
separate method, and during the transition we need to call only 
{{refreshServiceAcls(conf, policyProvider)}}, as sketched below.
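
A minimal sketch of the suggested segregation, as it might look inside 
AdminService; the exact method shape and signatures are assumptions, not the 
final patch:

{code}
// Illustrative only: keep the per-service ACL refresh in one method so that
// transitionToActive can invoke just this step instead of the full
// refreshAll().
private void refreshActiveServicesAcls(Configuration conf,
    PolicyProvider policyProvider) {
  rmContext.getClientRMService().refreshServiceAcls(conf, policyProvider);
  rmContext.getApplicationMasterService()
      .refreshServiceAcls(conf, policyProvider);
  rmContext.getResourceTrackerService()
      .refreshServiceAcls(conf, policyProvider);
}
{code}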


> RM unable to start in secure setup
> --
>
> Key: YARN-5988
> URL: https://issues.apache.org/jira/browse/YARN-5988
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Blocker
>
> When CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION=true
> RM is unable to start



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5988) RM unable to start in secure setup

2016-12-09 Thread Ajith S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajith S updated YARN-5988:
--
Priority: Blocker  (was: Major)

> RM unable to start in secure setup
> --
>
> Key: YARN-5988
> URL: https://issues.apache.org/jira/browse/YARN-5988
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Blocker
>
> When CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION=true
> RM is unable to start



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5988) RM unable to start in secure setup

2016-12-09 Thread Ajith S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15734801#comment-15734801
 ] 

Ajith S commented on YARN-5988:
---

{code}
2016-12-09 14:54:27,028 ERROR [Thread-788] resourcemanager.AdminService (AdminService.java:refreshAll(733)) - Failure on refresh
java.lang.NullPointerException
    at org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshServiceAcls(AdminService.java:591)
    at org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshServiceAcls(AdminService.java:581)
    at org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshAll(AdminService.java:729)
    at org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:326)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler.testRefreshQueuesWhenRMHA(TestFairScheduler.java:4888)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
2016-12-09 14:54:27,029 ERROR [Thread-788] resourcemanager.AdminService (AdminService.java:transitionToActive(328)) - RefreshAll failed so firing fatal event
org.apache.hadoop.ha.ServiceFailedException
    at org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshAll(AdminService.java:734)
    at org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:326)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler.testRefreshQueuesWhenRMHA(TestFairScheduler.java:4888)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
{code}

> RM unable to start in secure setup
> --
>
> Key: YARN-5988
> URL: https://issues.apache.org/jira/browse/YARN-5988
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Ajith S
>Assignee: Ajith S
>
> When CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION=true
> RM is unable to start



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5988) RM unable to start in secure setup

2016-12-09 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-5988:

Target Version/s: 2.8.0, 2.9.0, 2.7.4, 3.0.0-alpha2, 2.6.6

> RM unable to start in secure setup
> --
>
> Key: YARN-5988
> URL: https://issues.apache.org/jira/browse/YARN-5988
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Ajith S
>Assignee: Ajith S
>
> When CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION=true
> RM is unable to start



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5988) RM unable to start in secure setup

2016-12-09 Thread Ajith S (JIRA)
Ajith S created YARN-5988:
-

 Summary: RM unable to start in secure setup
 Key: YARN-5988
 URL: https://issues.apache.org/jira/browse/YARN-5988
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Ajith S
Assignee: Ajith S


When CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION=true
RM is unable to start



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5983) [Umbrella] Support for FPGA as a Resource in YARN

2016-12-09 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated YARN-5983:

Description: 
As various big data workload running on YARN, CPU will no longer scale 
eventually and heterogeneous systems will become more important. ML/DL is a 
rising star in recent years, applications focused on these areas have to 
utilize GPU or FPGA to boost performance. Also, hardware vendors such as Intel 
also invest in such hardware. It is most likely that FPGA will become popular 
in data centers like CPU in the near future.

So YARN as a resource managing and scheduling system, would be great to evolve 
to support this. This JIRA proposes FPGA to be a first-class citizen. The 
changes roughly includes:
1. FPGA resource detection and heartbeat
2. Scheduler changes
3. FPGA related preparation and isolation before launch container
We know that YARN-3926 is trying to extend current resource model. But still we 
can leave some FPGA related discussion here

  was:
As various big data workload running on YARN, CPU no longer scale eventually 
and heterogeneous systems become more important. And ML/DL is a rising star in 
recent years, applications focused on these areas have to utilize GPU or FPGA 
to boost performance. Also, hardware vendors such as Intel also invest in such 
hardware. It is most likely that FPGA will become popular in data centers like 
CPU in the near future.

So YARN as a resource manager should evolve to support this. This JIRA proposes 
FPGA to be first-class citizen. The changes roughly includes:
1. FPGA resource detection and heartbeat
2. Scheduler changes
3. FPGA related preparation and isolation before launch container
We know that YARN-3926 is trying to extend current resource model. But still we 
can leave some FPGA related discussion here


> [Umbrella] Support for FPGA as a Resource in YARN
> -
>
> Key: YARN-5983
> URL: https://issues.apache.org/jira/browse/YARN-5983
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>
> As various big data workloads run on YARN, CPU will eventually no longer 
> scale and heterogeneous systems will become more important. ML/DL has been 
> a rising star in recent years, and applications focused on these areas have 
> to utilize GPUs or FPGAs to boost performance. Hardware vendors such as 
> Intel are also investing in such hardware. It is most likely that FPGAs 
> will become as popular in data centers as CPUs in the near future.
> So it would be great for YARN, as a resource managing and scheduling 
> system, to evolve to support this. This JIRA proposes making FPGA a 
> first-class citizen. The changes roughly include:
> 1. FPGA resource detection and heartbeat
> 2. Scheduler changes
> 3. FPGA-related preparation and isolation before launching containers
> We know that YARN-3926 is trying to extend the current resource model, but 
> we can still keep some FPGA-related discussion here.
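
Purely as a strawman for item 1 in the list above, a stub of the shape a 
node-side FPGA discovery hook might take; every name here is an assumption, 
since no patch exists yet:

{code}
// Illustrative stub only: a real implementation would probe the vendor SDK,
// PCI ids, or sysfs, and the count would feed into what the NodeManager
// heartbeats to the ResourceManager.
public class FpgaDiscoverer {
  public int discoverDevices() {
    return 0; // stub: report no devices
  }

  public static void main(String[] args) {
    System.out.println("FPGAs detected: "
        + new FpgaDiscoverer().discoverDevices());
  }
}
{code}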



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org