[jira] [Updated] (YARN-5892) Capacity Scheduler: Support user-specific minimum user limit percent

2017-06-12 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-5892:
-
Attachment: YARN-5892.015.patch

[~sunilg] / [~leftnoteasy] / [~jlowe], I'm uploading a new patch. I forgot to 
make the NormalizeDown modifications to the calculations for all users; in the 
previous patch, they were applied only to the active users' calculations.

> Capacity Scheduler: Support user-specific minimum user limit percent
> 
>
> Key: YARN-5892
> URL: https://issues.apache.org/jira/browse/YARN-5892
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: Active users highlighted.jpg, YARN-5892.001.patch, 
> YARN-5892.002.patch, YARN-5892.003.patch, YARN-5892.004.patch, 
> YARN-5892.005.patch, YARN-5892.006.patch, YARN-5892.007.patch, 
> YARN-5892.008.patch, YARN-5892.009.patch, YARN-5892.010.patch, 
> YARN-5892.012.patch, YARN-5892.013.patch, YARN-5892.014.patch, 
> YARN-5892.015.patch
>
>
> Currently, in the capacity scheduler, the {{minimum-user-limit-percent}} 
> property is per queue. A cluster admin should be able to set the minimum user 
> limit percent on a per-user basis within the queue.
> This functionality is needed so that when intra-queue preemption is enabled 
> (YARN-4945 / YARN-2113), some users can be deemed more important than 
> others, and resources from those VIP users will be less likely to be preempted.
> For example, if the {{getstuffdone}} queue has a MULP of 25 percent, but user 
> {{jane}} is a power user of queue {{getstuffdone}} and needs to be guaranteed 
> 75 percent, the properties for {{getstuffdone}} and {{jane}} would look like 
> this:
> {code}
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.minimum-user-limit-percent</name>
>     <value>25</value>
>   </property>
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.jane.minimum-user-limit-percent</name>
>     <value>75</value>
>   </property>
> {code}
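The lookup semantics implied by the example can be sketched as follows (a hypothetical illustration with made-up class and method names, not the actual CapacityScheduler code): a user-specific property, when present, overrides the queue-wide default.

```java
import java.util.HashMap;
import java.util.Map;

public class MulpLookup {
    // Hypothetical flat view of capacity-scheduler.xml properties.
    static Map<String, Integer> props = new HashMap<>();

    // Per-user minimum-user-limit-percent, falling back to the queue-wide
    // value when no per-user override is configured (100 if neither is set).
    static int minUserLimitPercent(String queue, String user) {
        String prefix = "yarn.scheduler.capacity." + queue + ".";
        Integer perUser = props.get(prefix + user + ".minimum-user-limit-percent");
        if (perUser != null) {
            return perUser;
        }
        return props.getOrDefault(prefix + "minimum-user-limit-percent", 100);
    }
}
```

With the two properties from the example set, {{jane}} resolves to 75 while any other user of {{getstuffdone}} resolves to the queue default of 25.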



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6681) Eliminate double-copy of child queues in canAssignToThisQueue

2017-06-12 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047387#comment-16047387
 ] 

Naganarasimha G R commented on YARN-6681:
-

Hi [~daryn], 
Any opinion on our previous comments ?

> Eliminate double-copy of child queues in canAssignToThisQueue
> -
>
> Key: YARN-6681
> URL: https://issues.apache.org/jira/browse/YARN-6681
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: YARN-6681.2.branch-2.8.patch, 
> YARN-6681.2.branch-2.patch, YARN-6681.2.trunk.patch, 
> YARN-6681.branch-2.8.patch, YARN-6681.branch-2.patch, YARN-6681.trunk.patch
>
>
> 20% of the time in {{AbstractCSQueue#canAssignToThisQueue}} is spent 
> performing two duplications of a treemap of child queues into a list: once to 
> test for null, and a second time to see if it's empty. Eliminating the 
> duplicate copies reduces the overhead to 2%.
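The pattern being removed can be illustrated with a hedged sketch (field and method names here are illustrative, not the actual Hadoop code): copying a sorted map's values into a fresh list just to test for null and emptiness duplicates the collection twice per call, while querying the map directly is constant time and copy-free.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

public class QueueCheck {
    // Hypothetical stand-in for the TreeMap of child queues.
    static TreeMap<String, Object> childQueues = new TreeMap<>();

    // Before: each call copies the map's values into a new list twice.
    static boolean hasChildrenCopying() {
        List<Object> copy1 = new ArrayList<>(childQueues.values()); // copy #1
        if (copy1 == null) {      // a constructor result is never null,
            return false;         // so this copy is pure overhead
        }
        List<Object> copy2 = new ArrayList<>(childQueues.values()); // copy #2
        return !copy2.isEmpty();
    }

    // After: query the map directly, no copies.
    static boolean hasChildrenDirect() {
        return !childQueues.isEmpty();
    }
}
```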






[jira] [Commented] (YARN-6691) Update YARN daemon startup/shutdown scripts to include Router service

2017-06-12 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047350#comment-16047350
 ] 

Allen Wittenauer commented on YARN-6691:


You can't use /usr/bin/patch on a git binary patch file. smart-apply-patch does 
work, however, which is why Apache Yetus didn't have a problem with it:

{code}
MacBook-Pro-4:hadoop aw$ dev-support/bin/smart-apply-patch YARN-6691
Processing: YARN-6691
YARN-6691 patch is being downloaded at Mon Jun 12 19:32:43 PDT 2017 from
  
https://issues.apache.org/jira/secure/attachment/12872113/YARN-6691-YARN-2915.v4.patch
 -> Downloaded
Applying the patch:
Mon Jun 12 19:32:44 PDT 2017
cd /Users/aw/shared-vmware/hadoop
git apply --binary -v --stat --apply -p0 /tmp/yetus-20960.31585/patch
Applied patch hadoop-yarn-project/hadoop-yarn/bin/yarn cleanly.
Applied patch hadoop-yarn-project/hadoop-yarn/bin/yarn.cmd cleanly.
Applied patch hadoop-yarn-project/hadoop-yarn/conf/yarn-env.sh cleanly.
 hadoop-yarn-project/hadoop-yarn/bin/yarn |5 +
 hadoop-yarn-project/hadoop-yarn/bin/yarn.cmd |   13 -
 hadoop-yarn-project/hadoop-yarn/conf/yarn-env.sh |   12 
 3 files changed, 29 insertions(+), 1 deletion(-)
{code}

> Update YARN daemon startup/shutdown scripts to include Router service
> -
>
> Key: YARN-6691
> URL: https://issues.apache.org/jira/browse/YARN-6691
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-6691-YARN-2915.v1.patch, 
> YARN-6691-YARN-2915.v2.patch, YARN-6691-YARN-2915.v3.patch, 
> YARN-6691-YARN-2915.v4.patch
>
>
> YARN-5410 introduces a new YARN service, the Router. This JIRA proposes to 
> update the YARN daemon startup/shutdown scripts to include the Router service.






[jira] [Commented] (YARN-6175) FairScheduler: Negative vcore for resource needed to preempt

2017-06-12 Thread Aaron Dossett (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047260#comment-16047260
 ] 

Aaron Dossett commented on YARN-6175:
-

[~yufeigu] I'm running CDH 5.10.0. I double-checked the Cloudera release notes, 
and CDH 5.10.1 does include this fix, so I'll look to upgrade. Thanks!

> FairScheduler: Negative vcore for resource needed to preempt
> 
>
> Key: YARN-6175
> URL: https://issues.apache.org/jira/browse/YARN-6175
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 2.8.0
>
> Attachments: YARN-6175.001.patch, YARN-6175.branch-2.8.002.patch, 
> YARN-6175.branch-2.8.003.patch
>
>
> Both old preemption code (2.8 and before) and new preemption code could have 
> negative vcores while calculating resources needed to preempt.
> For old preemption, you can find the following messages in RM logs:
> {code}
> Should preempt  
> {code}
> The related code is in method {{resourceDeficit()}}. 
> For new preemption code, there are no messages in RM logs, the related code 
> is in method {{fairShareStarvation()}}. 
> The negative value isn't only a display issue; it may also cause necessary 
> preemption to be missed.
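A simplified illustration of the failure mode (made-up numbers and types; the real code works on Hadoop {{Resource}} objects): component-wise subtraction of current usage from a share yields a negative vcore count whenever usage already exceeds the share on that dimension, so each component should be clamped at zero before being treated as "resources needed to preempt".

```java
public class Deficit {
    // Simplified stand-in for YARN's Resource: {memory MB, vcores}.
    static int[] unclamped(int shareMem, int shareVcores, int usedMem, int usedVcores) {
        return new int[] { shareMem - usedMem, shareVcores - usedVcores };
    }

    // Clamping each component at zero: being over the share on one
    // dimension means "no deficit there", not a negative preemption need.
    static int[] clamped(int shareMem, int shareVcores, int usedMem, int usedVcores) {
        return new int[] { Math.max(0, shareMem - usedMem),
                           Math.max(0, shareVcores - usedVcores) };
    }
}
```

For example, a share of <4096 MB, 2 vcores> with usage of <1024 MB, 10 vcores> gives an unclamped deficit of <3072 MB, -8 vcores>; the clamped version reports <3072 MB, 0 vcores>.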






[jira] [Commented] (YARN-3659) Federation Router (hiding multiple RMs for ApplicationClientProtocol)

2017-06-12 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047198#comment-16047198
 ] 

Giovanni Matteo Fumarola commented on YARN-3659:


Attached a draft. There are multiple edited files under the package 
org.apache.hadoop.yarn.server.federation.policies.router due to the introduction 
of a blacklist concept. Should we create a new JIRA?

> Federation Router (hiding multiple RMs for ApplicationClientProtocol)
> -
>
> Key: YARN-3659
> URL: https://issues.apache.org/jira/browse/YARN-3659
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: client, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-3659.pdf, YARN-3659-YARN-2915.draft.patch
>
>
> This JIRA tracks the design/implementation of the layer for routing 
> ApplicationClientProtocol requests to the appropriate RM(s) in a federated 
> YARN cluster.






[jira] [Updated] (YARN-3659) Federation Router (hiding multiple RMs for ApplicationClientProtocol)

2017-06-12 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-3659:
---
Attachment: YARN-3659-YARN-2915.draft.patch

> Federation Router (hiding multiple RMs for ApplicationClientProtocol)
> -
>
> Key: YARN-3659
> URL: https://issues.apache.org/jira/browse/YARN-3659
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: client, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-3659.pdf, YARN-3659-YARN-2915.draft.patch
>
>
> This JIRA tracks the design/implementation of the layer for routing 
> ApplicationClientProtocol requests to the appropriate RM(s) in a federated 
> YARN cluster.






[jira] [Comment Edited] (YARN-6691) Update YARN daemon startup/shutdown scripts to include Router service

2017-06-12 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047135#comment-16047135
 ] 

Arun Suresh edited comment on YARN-6691 at 6/12/17 10:12 PM:
-

[~giovanni.fumarola], I was just trying to apply the patch; it looks like there 
are some line-ending issues.
{code}
$ patch -p1 -i YARN-6691-YARN-2915.v4.patch --dry-run
checking file hadoop-yarn/bin/yarn
checking file hadoop-yarn/bin/yarn.cmd
Hunk #1 FAILED at 130 (different line endings).
Hunk #2 FAILED at 151 (different line endings).
Hunk #3 FAILED at 248 (different line endings).
Hunk #4 FAILED at 317 (different line endings).
4 out of 4 hunks FAILED
checking file hadoop-yarn/conf/yarn-env.sh
{code}
Also, can you please recreate the patch against the hadoop root dir?
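One generic way to confirm such a line-ending mismatch before applying a patch (an illustrative sketch, not part of the Hadoop or Yetus tooling): count CRLF sequences in the patch file; a patch that fails hunks with "different line endings" typically carries CRLF endings while the target tree uses plain LF, or vice versa.

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class CrlfCheck {
    // Count CRLF ("\r\n") sequences in a file. A nonzero count on the patch
    // combined with a zero count on the target files (or the reverse)
    // explains "different line endings" failures from /usr/bin/patch.
    static long crlfCount(Path p) throws Exception {
        byte[] b = Files.readAllBytes(p);
        long n = 0;
        for (int i = 1; i < b.length; i++) {
            if (b[i - 1] == '\r' && b[i] == '\n') {
                n++;
            }
        }
        return n;
    }
}
```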


was (Author: asuresh):
[~giovanni.fumarola], I was just trying to apply the patch; it looks like there 
are some line-ending issues.
Also, can you please recreate the patch against the hadoop root dir?

> Update YARN daemon startup/shutdown scripts to include Router service
> -
>
> Key: YARN-6691
> URL: https://issues.apache.org/jira/browse/YARN-6691
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-6691-YARN-2915.v1.patch, 
> YARN-6691-YARN-2915.v2.patch, YARN-6691-YARN-2915.v3.patch, 
> YARN-6691-YARN-2915.v4.patch
>
>
> YARN-5410 introduces a new YARN service, the Router. This JIRA proposes to 
> update the YARN daemon startup/shutdown scripts to include the Router service.






[jira] [Commented] (YARN-6691) Update YARN daemon startup/shutdown scripts to include Router service

2017-06-12 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047135#comment-16047135
 ] 

Arun Suresh commented on YARN-6691:
---

[~giovanni.fumarola], I was just trying to apply the patch; it looks like there 
are some line-ending issues.
Also, can you please recreate the patch against the hadoop root dir?

> Update YARN daemon startup/shutdown scripts to include Router service
> -
>
> Key: YARN-6691
> URL: https://issues.apache.org/jira/browse/YARN-6691
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-6691-YARN-2915.v1.patch, 
> YARN-6691-YARN-2915.v2.patch, YARN-6691-YARN-2915.v3.patch, 
> YARN-6691-YARN-2915.v4.patch
>
>
> YARN-5410 introduces a new YARN service, the Router. This JIRA proposes to 
> update the YARN daemon startup/shutdown scripts to include the Router service.






[jira] [Commented] (YARN-6175) FairScheduler: Negative vcore for resource needed to preempt

2017-06-12 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047129#comment-16047129
 ] 

Yufei Gu commented on YARN-6175:


[~dossett], which version are you using? Are you using the fair scheduler?

> FairScheduler: Negative vcore for resource needed to preempt
> 
>
> Key: YARN-6175
> URL: https://issues.apache.org/jira/browse/YARN-6175
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 2.8.0
>
> Attachments: YARN-6175.001.patch, YARN-6175.branch-2.8.002.patch, 
> YARN-6175.branch-2.8.003.patch
>
>
> Both old preemption code (2.8 and before) and new preemption code could have 
> negative vcores while calculating resources needed to preempt.
> For old preemption, you can find the following messages in RM logs:
> {code}
> Should preempt  
> {code}
> The related code is in method {{resourceDeficit()}}. 
> For new preemption code, there are no messages in RM logs, the related code 
> is in method {{fairShareStarvation()}}. 
> The negative value isn't only a display issue; it may also cause necessary 
> preemption to be missed.






[jira] [Commented] (YARN-6175) FairScheduler: Negative vcore for resource needed to preempt

2017-06-12 Thread Aaron Dossett (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047126#comment-16047126
 ] 

Aaron Dossett commented on YARN-6175:
-

I am seeing log messages like this (note the vcores value of -8):

{code:java}
2017-06-12 21:52:12,937 INFO 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: 
Should preempt  res for queue 
root.production.scheduled_jobs: resDueToMinShare = , 
resDueToFairShare = 
{code}

Is that likely an instance of this bug?

> FairScheduler: Negative vcore for resource needed to preempt
> 
>
> Key: YARN-6175
> URL: https://issues.apache.org/jira/browse/YARN-6175
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 2.8.0
>
> Attachments: YARN-6175.001.patch, YARN-6175.branch-2.8.002.patch, 
> YARN-6175.branch-2.8.003.patch
>
>
> Both old preemption code (2.8 and before) and new preemption code could have 
> negative vcores while calculating resources needed to preempt.
> For old preemption, you can find the following messages in RM logs:
> {code}
> Should preempt  
> {code}
> The related code is in method {{resourceDeficit()}}. 
> For new preemption code, there are no messages in RM logs, the related code 
> is in method {{fairShareStarvation()}}. 
> The negative value isn't only a display issue; it may also cause necessary 
> preemption to be missed.






[jira] [Commented] (YARN-6691) Update YARN daemon startup/shutdown scripts to include Router service

2017-06-12 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047125#comment-16047125
 ] 

Arun Suresh commented on YARN-6691:
---

+1
Will commit this shortly. Thanks for the review [~aw]

> Update YARN daemon startup/shutdown scripts to include Router service
> -
>
> Key: YARN-6691
> URL: https://issues.apache.org/jira/browse/YARN-6691
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-6691-YARN-2915.v1.patch, 
> YARN-6691-YARN-2915.v2.patch, YARN-6691-YARN-2915.v3.patch, 
> YARN-6691-YARN-2915.v4.patch
>
>
> YARN-5410 introduces a new YARN service, the Router. This JIRA proposes to 
> update the YARN daemon startup/shutdown scripts to include the Router service.






[jira] [Assigned] (YARN-5146) [YARN-3368] Supports Fair Scheduler in new YARN UI

2017-06-12 Thread Abdullah Yousufi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdullah Yousufi reassigned YARN-5146:
--

Assignee: Abdullah Yousufi  (was: Yufei Gu)

> [YARN-3368] Supports Fair Scheduler in new YARN UI
> --
>
> Key: YARN-5146
> URL: https://issues.apache.org/jira/browse/YARN-5146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Abdullah Yousufi
>
> The current implementation in branch YARN-3368 only supports the capacity 
> scheduler; we want to make it support the fair scheduler.






[jira] [Commented] (YARN-6702) Zk connection leak during activeService fail if embedded elector is not curator

2017-06-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16046976#comment-16046976
 ] 

Hadoop QA commented on YARN-6702:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 54s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6702 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872760/YARN-6702.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6732c205d3a2 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 86368cc |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16181/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16181/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16181/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Zk connection leak during activeService fail if embedded elector is not 
> curator
> ---
>
> Key: YARN-6702
> URL: https://issues.apache.org/jira/browse/YARN-6702
> Project: Hadoop YARN
>  Issue Type: Bug
>

[jira] [Updated] (YARN-6702) Zk connection leak during activeService fail if embedded elector is not curator

2017-06-12 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-6702:

Attachment: YARN-6702.01.patch

Attached a patch that closes the curator client if it was created by ZKRMStateStore.
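A hedged sketch of the approach the comment describes, with illustrative names (the actual patch operates on a CuratorFramework inside ZKRMStateStore): record whether the store started its own client, and on close shut down only a client it owns, so a shared RM-level client is never closed and a self-created one never leaks.

```java
public class CuratorOwnership {
    Object curatorFramework;     // stand-in for a Curator CuratorFramework
    boolean createdOwnCurator;   // assumption: ownership flag added by the patch

    // Mirrors the quoted initInternal(): reuse the RM-level client when
    // present, otherwise create one and remember that we own it.
    void initInternal(Object rmCurator) {
        curatorFramework = rmCurator;
        if (curatorFramework == null) {
            curatorFramework = new Object();  // createAndStartCurator(conf)
            createdOwnCurator = true;
        }
    }

    // On close/reinitialize, shut down only a self-created client so no
    // ZK connection leaks and the shared RM client stays usable.
    void closeInternal() {
        if (createdOwnCurator && curatorFramework != null) {
            curatorFramework = null;          // stand-in for close()
            createdOwnCurator = false;
        }
    }
}
```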

> Zk connection leak during activeService fail if embedded elector is not 
> curator
> ---
>
> Key: YARN-6702
> URL: https://issues.apache.org/jira/browse/YARN-6702
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha3
>Reporter: Bibin A Chundatt
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-6702.01.patch
>
>
> In {{ResourceManager#transitionToActive}}, if startActiveServices fails, the 
> active services are reinitialized.
> {code}
> this.rmLoginUGI.doAs(new PrivilegedExceptionAction<Void>() {
>   @Override
>   public Void run() throws Exception {
> try {
>   startActiveServices();
>   return null;
> } catch (Exception e) {
>   reinitialize(true);
>   throw e;
> }
>   }
> });
> {code}
> {{ZKRMStateStore#initInternal}} will create another ZK connection.
> {code}
> curatorFramework = resourceManager.getCurator();
> if (curatorFramework == null) {
>   curatorFramework = resourceManager.createAndStartCurator(conf);
> }
> {code}
> {quote}
> secureuser@vm1:/opt/hadoop/release/hadoop/sbin> netstat -aen | grep 2181
> tcp0  0 192.168.56.101:49222192.168.56.103:2181 
> ESTABLISHED 1004   31984  
> tcp0  0 192.168.56.101:46016192.168.56.103:2181 
> ESTABLISHED 1004   26120  
> tcp0  0 192.168.56.101:50918192.168.56.103:2181 
> ESTABLISHED 1004   34761  
> tcp0  0 192.168.56.101:49598192.168.56.103:2181 
> ESTABLISHED 1004   32483  
> tcp0  0 192.168.56.101:49472192.168.56.103:2181 
> ESTABLISHED 1004   32364  
> tcp0  0 192.168.56.101:50708192.168.56.103:2181 
> ESTABLISHED 1004   34310  
> {quote}






[jira] [Assigned] (YARN-6702) Zk connection leak during activeService fail if embedded elector is not curator

2017-06-12 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S reassigned YARN-6702:
---

Assignee: Rohith Sharma K S

> Zk connection leak during activeService fail if embedded elector is not 
> curator
> ---
>
> Key: YARN-6702
> URL: https://issues.apache.org/jira/browse/YARN-6702
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha3
>Reporter: Bibin A Chundatt
>Assignee: Rohith Sharma K S
>Priority: Critical
>
> In {{ResourceManager#transitionToActive}}, if startActiveServices fails, the 
> active services are reinitialized.
> {code}
> this.rmLoginUGI.doAs(new PrivilegedExceptionAction<Void>() {
>   @Override
>   public Void run() throws Exception {
> try {
>   startActiveServices();
>   return null;
> } catch (Exception e) {
>   reinitialize(true);
>   throw e;
> }
>   }
> });
> {code}
> {{ZKRMStateStore#initInternal}} will create another ZK connection.
> {code}
> curatorFramework = resourceManager.getCurator();
> if (curatorFramework == null) {
>   curatorFramework = resourceManager.createAndStartCurator(conf);
> }
> {code}
> {quote}
> secureuser@vm1:/opt/hadoop/release/hadoop/sbin> netstat -aen | grep 2181
> tcp0  0 192.168.56.101:49222192.168.56.103:2181 
> ESTABLISHED 1004   31984  
> tcp0  0 192.168.56.101:46016192.168.56.103:2181 
> ESTABLISHED 1004   26120  
> tcp0  0 192.168.56.101:50918192.168.56.103:2181 
> ESTABLISHED 1004   34761  
> tcp0  0 192.168.56.101:49598192.168.56.103:2181 
> ESTABLISHED 1004   32483  
> tcp0  0 192.168.56.101:49472192.168.56.103:2181 
> ESTABLISHED 1004   32364  
> tcp0  0 192.168.56.101:50708192.168.56.103:2181 
> ESTABLISHED 1004   34310  
> {quote}






[jira] [Commented] (YARN-6670) Add separate NM overallocation thresholds for cpu and memory

2017-06-12 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16046882#comment-16046882
 ] 

Haibo Chen commented on YARN-6670:
--

IIUC, this effectively turns the overallocation threshold from being based on a 
utilization snapshot to being based on utilization during a time period. This, 
however, can be difficult to implement IMO. To get an accurate picture of CPU 
and memory utilization over time, NMs need to check the usage very frequently, 
which can be very expensive. In our first phase, we rely on prompt NM 
preemption of OPPORTUNISTIC containers and tuning of the overallocation 
threshold to handle such cases.

> Add separate NM overallocation thresholds for cpu and memory
> 
>
> Key: YARN-6670
> URL: https://issues.apache.org/jira/browse/YARN-6670
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-6670-YARN-1011.00.patch, 
> YARN-6670-YARN-1011.01.patch, YARN-6670-YARN-1011.02.patch, 
> YARN-6670-YARN-1011.03.patch, YARN-6670-YARN-1011.04.patch, 
> YARN-6670-YARN-1011.05.patch
>
>







[jira] [Comment Edited] (YARN-6670) Add separate NM overallocation thresholds for cpu and memory

2017-06-12 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16046882#comment-16046882
 ] 

Haibo Chen edited comment on YARN-6670 at 6/12/17 6:12 PM:
---

IIUC, this effectively turns the overallocation threshold from being based on a 
utilization snapshot to being based on utilization during a time period. This, 
however, can be difficult to implement IMO. To get an accurate picture of CPU 
and memory utilization over time, NMs need to check the usage very frequently, 
which can be very expensive. In our first phase, we rely on prompt NM 
preemption of OPPORTUNISTIC containers and tuning of the overallocation 
threshold to handle such cases. We could certainly revisit this if we often 
see spiky usage.


was (Author: haibochen):
IIUC, this effectively turns the overallocation threshold from being based on a 
utilization snapshot to being based on utilization during a time period. This, 
however, can be difficult to implement IMO. To get an accurate picture of CPU 
and memory utilization over time, NMs need to check the usage very frequently, 
which can be very expensive. In our first phase, we rely on prompt NM 
preemption of OPPORTUNISTIC containers and tuning of the overallocation 
threshold to handle such cases.

> Add separate NM overallocation thresholds for cpu and memory
> 
>
> Key: YARN-6670
> URL: https://issues.apache.org/jira/browse/YARN-6670
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-6670-YARN-1011.00.patch, 
> YARN-6670-YARN-1011.01.patch, YARN-6670-YARN-1011.02.patch, 
> YARN-6670-YARN-1011.03.patch, YARN-6670-YARN-1011.04.patch, 
> YARN-6670-YARN-1011.05.patch
>
>







[jira] [Commented] (YARN-5648) [ATSv2 Security] Client side changes for authentication

2017-06-12 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16046844#comment-16046844
 ] 

Jian He commented on YARN-5648:
---

I see. Are the UI failures related?

> [ATSv2 Security] Client side changes for authentication
> ---
>
> Key: YARN-5648
> URL: https://issues.apache.org/jira/browse/YARN-5648
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-5648-YARN-5355.02.patch, 
> YARN-5648-YARN-5355.03.patch, YARN-5648-YARN-5355.wip.01.patch
>
>







[jira] [Commented] (YARN-6670) Add separate NM overallocation thresholds for cpu and memory

2017-06-12 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16046813#comment-16046813
 ] 

Miklos Szegedi commented on YARN-6670:
--

It applies to both CPU and memory. Let's say we have a guaranteed workload that 
runs in batches. After each batch it releases some memory, waits for the 
network for 0.5s, and then continues with high memory and CPU usage. Without a 
time period, we would allocate opportunistic containers when this happens and 
preempt them shortly afterwards. Instead, if CPU and memory stay below the 
threshold for, say, more than 2 seconds, we would start allocating 
opportunistic containers.
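
The windowed check described above could be sketched as follows. This is a minimal illustration of the idea, not the actual YARN-1011 patch API; the class and method names are hypothetical:

```java
/**
 * Sketch: permit overallocation only after node utilization has stayed
 * below the threshold for a full window (e.g. 2000 ms). Any spike above
 * the threshold resets the window. Hypothetical names, not YARN code.
 */
public class OverallocationGate {
  private final double threshold;   // e.g. 0.6 == 60% utilization
  private final long windowMillis;  // e.g. 2000 ms
  private long belowSince = -1;     // time when usage first dipped below

  public OverallocationGate(double threshold, long windowMillis) {
    this.threshold = threshold;
    this.windowMillis = windowMillis;
  }

  /** Feed one utilization sample; returns true once overallocation is allowed. */
  public boolean sample(double utilization, long nowMillis) {
    if (utilization >= threshold) {
      belowSince = -1;              // spike resets the window
      return false;
    }
    if (belowSince < 0) {
      belowSince = nowMillis;       // start of a below-threshold stretch
    }
    return nowMillis - belowSince >= windowMillis;
  }
}
```

With a 2-second window, the batch workload above (0.5s dips between batches) would never open the gate, so no opportunistic containers would be launched and then immediately preempted.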

> Add separate NM overallocation thresholds for cpu and memory
> 
>
> Key: YARN-6670
> URL: https://issues.apache.org/jira/browse/YARN-6670
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-6670-YARN-1011.00.patch, 
> YARN-6670-YARN-1011.01.patch, YARN-6670-YARN-1011.02.patch, 
> YARN-6670-YARN-1011.03.patch, YARN-6670-YARN-1011.04.patch, 
> YARN-6670-YARN-1011.05.patch
>
>







[jira] [Commented] (YARN-6703) RM startup failure with old state store due to version mismatch

2017-06-12 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16046558#comment-16046558
 ] 

Varun Saxena commented on YARN-6703:


Thanks [~bibinchundatt] for finding the issue and committing it.
Thanks [~templedf] for reviews.

> RM startup failure with old state store due to version mismatch
> ---
>
> Key: YARN-6703
> URL: https://issues.apache.org/jira/browse/YARN-6703
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha3
>Reporter: Bibin A Chundatt
>Assignee: Varun Saxena
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: YARN-6703.01.patch, YARN-6703.02.patch
>
>
> Currently, as per the following code, the VERSION is marked as 2:
> {code}
> protected static final Version CURRENT_VERSION_INFO = Version
> .newInstance(2, 0);
> {code}
> RMStateStore#checkVersion will fail and the RM will not be able to start with 
> an old state store. We can keep the version as 1.3.
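
The failure described above comes down to a major-version compatibility rule: a state store written by an older release is loadable only when the stored major version matches the current one, so bumping 1.x to 2.0 rejects every existing store. A minimal sketch of that rule, with hypothetical names rather than the actual RMStateStore code:

```java
/**
 * Sketch of an RMStateStore#checkVersion-style compatibility rule:
 * minor-version bumps (1.2 -> 1.3) remain loadable, major-version
 * bumps (1.x -> 2.0) are rejected. Hypothetical names, not YARN code.
 */
public class VersionCheck {
  public static boolean isCompatible(int currentMajor, int storedMajor) {
    // Only the major version gates loading; the minor version may differ.
    return currentMajor == storedMajor;
  }
}
```

Under this rule, keeping CURRENT_VERSION_INFO at 1.3 lets an RM restart against a store written by a 1.2 release, while moving to 2.0 would not.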






[jira] [Commented] (YARN-6703) RM startup failure with old state store due to version mismatch

2017-06-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16046535#comment-16046535
 ] 

Hudson commented on YARN-6703:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11856 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11856/])
YARN-6703. RM startup failure with old state store due to version 
(bibinchundatt: rev d64c842743da0d9d91c46985a9fd7350ea14b204)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java


> RM startup failure with old state store due to version mismatch
> ---
>
> Key: YARN-6703
> URL: https://issues.apache.org/jira/browse/YARN-6703
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha3
>Reporter: Bibin A Chundatt
>Assignee: Varun Saxena
>Priority: Critical
> Attachments: YARN-6703.01.patch, YARN-6703.02.patch
>
>
> Currently, as per the following code, the VERSION is marked as 2:
> {code}
> protected static final Version CURRENT_VERSION_INFO = Version
> .newInstance(2, 0);
> {code}
> RMStateStore#checkVersion will fail and the RM will not be able to start with 
> an old state store. We can keep the version as 1.3.






[jira] [Updated] (YARN-5121) fix some container-executor portability issues

2017-06-12 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated YARN-5121:

Attachment: YARN-6698-branch-2.7-01.patch

Attaching the patch for branch-2.7. Thanks.

> fix some container-executor portability issues
> --
>
> Key: YARN-5121
> URL: https://issues.apache.org/jira/browse/YARN-5121
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager, security
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
>  Labels: security
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1
>
> Attachments: YARN-5121.00.patch, YARN-5121.01.patch, 
> YARN-5121.02.patch, YARN-5121.03.patch, YARN-5121.04.patch, 
> YARN-5121.06.patch, YARN-5121.07.patch, YARN-5121.08.patch, 
> YARN-6698-branch-2.7-01.patch
>
>
> container-executor has some issues that are preventing it from even compiling 
> on the OS X Jenkins instance.  Let's fix those.  While we're there, let's 
> also try to take care of some of the other portability problems that have 
> crept in over the years, since it used to work great on Solaris but now 
> doesn't.






[jira] [Commented] (YARN-6698) Backport YARN-5121 to branch-2.7

2017-06-12 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16046284#comment-16046284
 ] 

Akira Ajisaka commented on YARN-6698:
-

Thanks [~chris.douglas] and [~shv]!

> Backport YARN-5121 to branch-2.7
> 
>
> Key: YARN-6698
> URL: https://issues.apache.org/jira/browse/YARN-6698
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: YARN-6698-branch-2.7-01.patch, 
> YARN-6698-branch-2.7-test.patch
>
>



