[jira] [Created] (YARN-6589) Recover all resources when NM restart

2017-05-11 Thread Yang Wang (JIRA)
Yang Wang created YARN-6589:
---

 Summary: Recover all resources when NM restart
 Key: YARN-6589
 URL: https://issues.apache.org/jira/browse/YARN-6589
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Yang Wang


When the NM restarts, containers are recovered. However, only the memory and vcores 
in the capability are restored; all resource types need to be recovered.
{code:title=ContainerImpl.java}
  // resource capability had been updated before NM was down
  this.resource = Resource.newInstance(recoveredCapability.getMemorySize(),
      recoveredCapability.getVirtualCores());
{code}

It should instead be:

{code:title=ContainerImpl.java}
  // resource capability had been updated before NM was down
  // need to recover all resource types, not only memory and vcores
  this.resource = Resources.clone(recoveredCapability);
{code}






[jira] [Commented] (YARN-6447) Provide container sandbox policies for groups

2017-05-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007518#comment-16007518
 ] 

Hadoop QA commented on YARN-6447:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
2s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
31s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6447 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867685/YARN-6447.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6754684ecc99 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0d5c8ed |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/15911/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15911/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 

[jira] [Commented] (YARN-6584) Some license modification in codes

2017-05-11 Thread Yeliang Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007509#comment-16007509
 ] 

Yeliang Cang commented on YARN-6584:


This patch only contains license modifications and does not change any unit tests or 
source code. The findbugs and unit test failures are unrelated!

> Some license modification in codes
> --
>
> Key: YARN-6584
> URL: https://issues.apache.org/jira/browse/YARN-6584
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Trivial
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-6584-001.patch
>
>
> The license header in some Java files is not the same as in others. Submitting a 
> patch to fix this!






[jira] [Commented] (YARN-6588) Add native-service AM log4j properties in classpath

2017-05-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007490#comment-16007490
 ] 

Hadoop QA commented on YARN-6588:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
46s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
35s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
26s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
51s{color} | {color:green} hadoop-yarn-slider-core in the patch passed. {color} 
|
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6588 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867687/YARN-6588.yarn-native-services.01.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 537c453bb133 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 
16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / 1778733 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15912/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15912/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add native-service AM log4j properties in classpath
> ---
>
> Key: YARN-6588
> URL: https://issues.apache.org/jira/browse/YARN-6588
> 

[jira] [Commented] (YARN-6587) Refactor of ResourceManager#startWebApp in a Util class

2017-05-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007484#comment-16007484
 ] 

Hadoop QA commented on YARN-6587:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 36 unchanged - 8 fixed = 38 total (was 44) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 2 new + 874 unchanged - 0 fixed = 876 total (was 874) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 40m  
4s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6587 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867680/YARN-6587.v1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ece6e7e1e66b 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0d5c8ed |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15910/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/15910/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15910/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Commented] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-05-11 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007472#comment-16007472
 ] 

Sunil G commented on YARN-2113:
---

Thanks [~eepayne].
I will also test and confirm.

> Add cross-user preemption within CapacityScheduler's leaf-queue
> ---
>
> Key: YARN-2113
> URL: https://issues.apache.org/jira/browse/YARN-2113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil G
> Attachments: IntraQueue Preemption-Impact Analysis.pdf, 
> TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt,
>  YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch, 
> YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch, 
> YARN-2113.0007.patch, YARN-2113.0008.patch, YARN-2113.0009.patch, 
> YARN-2113.0010.patch, YARN-2113.0011.patch, YARN-2113.0012.patch, 
> YARN-2113.0013.patch, YARN-2113.0014.patch, YARN-2113.0015.patch, 
> YARN-2113.0016.patch, YARN-2113.apply.onto.0012.ericp.patch, YARN-2113 
> Intra-QueuePreemption Behavior.pdf, YARN-2113.v0.patch
>
>
> Preemption today only works across queues and moves around resources across 
> queues per demand and usage. We should also have user-level preemption within 
> a queue, to balance capacity across users in a predictable manner.






[jira] [Commented] (YARN-5328) InMemoryPlan enhancements required to support recurring reservations in the YARN ReservationSystem

2017-05-11 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007469#comment-16007469
 ] 

Carlo Curino commented on YARN-5328:


Thanks [~subru].
I was testing this further by extending {{TestNoOverCommitPolicy}} to include 
periodic reservations as well, as part of YARN-5330.

I think we are missing the following in {{InMemoryPlan.getAvailableResourceOverTime(..)}}:
{code}
  // remove periodic component
  netAvailable = RLESparseResourceAllocation.merge(resCalc,
      Resources.clone(totalCapacity), netAvailable, periodicRle,
      RLEOperator.subtractTestNonNegative, start, end);
{code}

I will test this further.

 

> InMemoryPlan enhancements required to support recurring reservations in the 
> YARN ReservationSystem
> --
>
> Key: YARN-5328
> URL: https://issues.apache.org/jira/browse/YARN-5328
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-5328-v1.patch, YARN-5328-v2.patch
>
>
> YARN-5326 proposes adding native support for recurring reservations in the 
> YARN ReservationSystem. This JIRA is a sub-task to track the changes required 
> in InMemoryPlan to accomplish it. Please refer to the design doc in the 
> parent JIRA for details.






[jira] [Commented] (YARN-6577) Useless interface and implementation class

2017-05-11 Thread ZhangBing Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007467#comment-16007467
 ] 

ZhangBing Lin commented on YARN-6577:
-

[~ajisakaa], could you please take a quick look?

> Useless interface and implementation class
> --
>
> Key: YARN-6577
> URL: https://issues.apache.org/jira/browse/YARN-6577
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.3, 3.0.0-alpha2
>Reporter: ZhangBing Lin
>Assignee: ZhangBing Lin
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-6577.001.patch
>
>
> As of 2.7.3 and 3.0.0-alpha2, the ContainerLocalization interface and the 
> ContainerLocalizationImpl implementation class are unused, so I recommend 
> removing this useless interface and its implementation class.






[jira] [Updated] (YARN-6588) Add native-service AM log4j properties in classpath

2017-05-11 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6588:
--
Attachment: YARN-6588.yarn-native-services.01.patch

> Add native-service AM log4j properties in classpath
> ---
>
> Key: YARN-6588
> URL: https://issues.apache.org/jira/browse/YARN-6588
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6588.yarn-native-services.01.patch
>
>







[jira] [Created] (YARN-6588) Add native-service AM log4j properties in classpath

2017-05-11 Thread Jian He (JIRA)
Jian He created YARN-6588:
-

 Summary: Add native-service AM log4j properties in classpath
 Key: YARN-6588
 URL: https://issues.apache.org/jira/browse/YARN-6588
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He









[jira] [Commented] (YARN-5328) InMemoryPlan enhancements required to support recurring reservations in the YARN ReservationSystem

2017-05-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007424#comment-16007424
 ] 

Hadoop QA commented on YARN-5328:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m  
8s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 3 new + 280 unchanged - 37 fixed = 283 total (was 317) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 40m 35s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-5328 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867659/YARN-5328-v2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d36bfca31b34 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0d5c8ed |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15909/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
| whitespace | 

[jira] [Updated] (YARN-6447) Provide container sandbox policies for groups

2017-05-11 Thread Greg Phillips (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Phillips updated YARN-6447:

Attachment: YARN-6447.003.patch

Checkstyle issues resolved.

> Provide container sandbox policies for groups 
> --
>
> Key: YARN-6447
> URL: https://issues.apache.org/jira/browse/YARN-6447
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, yarn
>Affects Versions: 3.0.0-alpha3
>Reporter: Greg Phillips
>Assignee: Greg Phillips
>Priority: Minor
> Attachments: YARN-6447.001.patch, YARN-6447.002.patch, 
> YARN-6447.003.patch
>
>
> Currently the container sandbox feature 
> ([YARN-5280|https://issues.apache.org/jira/browse/YARN-5280]) allows YARN 
> administrators to use one Java Security Manager policy file to limit the 
> permissions granted to YARN containers.  It would be useful to allow for 
> different policy files to be used based on groups.
> For example, an administrator may want to ensure that standard users who write 
> applications for the MapReduce or Tez frameworks are not allowed to open 
> arbitrary network connections within their data processing code. Users who 
> design ETL pipelines, however, may need to open sockets to extract data from 
> external sources. By assigning these sets of users to different groups and 
> setting specific policies for each group, you can assert fine-grained control 
> over the permissions granted to each Java-based container across a YARN cluster.
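For illustration, a group-specific policy under this feature could grant socket 
access only to the group that needs it. The following is a hypothetical sketch in 
standard Java Security Manager policy syntax; the file name and the grouping are 
assumptions, not a file from any of the attached patches.

{code:title=etl-group.policy (hypothetical)}
// Policy applied only to containers of users in the "etl" group, which needs to
// pull data from external sources; other groups would get a policy without this grant.
grant {
  permission java.net.SocketPermission "*", "connect,resolve";
};
{code}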






[jira] [Commented] (YARN-6587) Refactor of ResourceManager#startWebApp in a Util class

2017-05-11 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007401#comment-16007401
 ] 

Giovanni Matteo Fumarola commented on YARN-6587:


[~leftnoteasy], [~curino], can you please take a look at this JIRA?

> Refactor of ResourceManager#startWebApp in a Util class
> ---
>
> Key: YARN-6587
> URL: https://issues.apache.org/jira/browse/YARN-6587
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-6587.v1.patch
>
>
> This JIRA tracks refactoring ResourceManager#startWebApp into a util class, 
> since the Router in YARN-5412 has to implement the same logic for filtering and 
> authentication.






[jira] [Updated] (YARN-6587) Refactor of ResourceManager#startWebApp in a Util class

2017-05-11 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-6587:
---
Attachment: YARN-6587.v1.patch

> Refactor of ResourceManager#startWebApp in a Util class
> ---
>
> Key: YARN-6587
> URL: https://issues.apache.org/jira/browse/YARN-6587
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-6587.v1.patch
>
>
> This JIRA tracks refactoring ResourceManager#startWebApp into a util class, 
> since the Router in YARN-5412 has to implement the same logic for filtering and 
> authentication.






[jira] [Created] (YARN-6587) Refactor of ResourceManager#startWebApp in a Util class

2017-05-11 Thread Giovanni Matteo Fumarola (JIRA)
Giovanni Matteo Fumarola created YARN-6587:
--

 Summary: Refactor of ResourceManager#startWebApp in a Util class
 Key: YARN-6587
 URL: https://issues.apache.org/jira/browse/YARN-6587
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Giovanni Matteo Fumarola
Assignee: Giovanni Matteo Fumarola


This JIRA tracks refactoring ResourceManager#startWebApp into a util class, since 
the Router in YARN-5412 has to implement the same logic for filtering and 
authentication.






[jira] [Created] (YARN-6586) YARN to facilitate HTTPS in AM web server

2017-05-11 Thread Haibo Chen (JIRA)
Haibo Chen created YARN-6586:


 Summary: YARN to facilitate HTTPS in AM web server
 Key: YARN-6586
 URL: https://issues.apache.org/jira/browse/YARN-6586
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: yarn
Affects Versions: 3.0.0-alpha2
Reporter: Haibo Chen
Assignee: Haibo Chen


The MR AM today does not support HTTPS in its web server, so the traffic between 
RMWebProxy and the MR AM is in clear text.

MR cannot easily achieve this, mainly because MR AMs are untrusted by YARN. A 
potential solution purely within MR, similar to what Spark has implemented, is 
to let users provide their own keystore file when they enable HTTPS for an MR 
job; the file is then uploaded to the distributed cache and localized for the MR 
AM container. The configuration users would need to do is complex.

More importantly, in typical deployments web browsers go through RMWebProxy to 
indirectly access the MR AM web server. To support HTTPS on the MR AM, 
RMWebProxy would therefore need to trust the user-provided keystore, which is 
problematic.

Alternatively, we can add an endpoint in the NM web server that acts as a proxy 
between the AM web server and RMWebProxy. RMWebProxy, when configured to do so, 
will send requests over HTTPS to the NM on which the AM is running, and the NM 
can then communicate with the local AM web server over HTTP. This adds one hop 
between RMWebProxy and the AM, but both MR and Spark could use such a solution.
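A rough sketch of what such an NM-side endpoint could look like is below. This is 
purely illustrative: the servlet name, the AM address lookup, and the port are 
hypothetical and not part of any patch; the NM's own web server is assumed to 
terminate the HTTPS connection from RMWebProxy.

{code:title=AmProxyServlet.java (illustrative sketch)}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/** Forwards a request received over HTTPS to the co-located AM over plain HTTP. */
public class AmProxyServlet extends HttpServlet {
  @Override
  protected void doGet(HttpServletRequest req, HttpServletResponse resp)
      throws IOException {
    String path = (req.getPathInfo() == null) ? "/" : req.getPathInfo();
    // Hypothetical: the local AM host:port would be looked up from NM state.
    URL amUrl = new URL("http://localhost:12345" + path);
    HttpURLConnection conn = (HttpURLConnection) amUrl.openConnection();
    resp.setStatus(conn.getResponseCode());
    byte[] buf = new byte[8192];
    int n;
    try (InputStream in = conn.getInputStream();
         OutputStream out = resp.getOutputStream()) {
      while ((n = in.read(buf)) != -1) {
        out.write(buf, 0, n);           // stream the AM response back unchanged
      }
    }
    // Error responses and header forwarding are omitted; this only shows the extra hop.
  }
}
{code}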









[jira] [Commented] (YARN-5543) ResourceManager SchedulingMonitor could potentially terminate the preemption checker thread

2017-05-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007331#comment-16007331
 ] 

Hadoop QA commented on YARN-5543:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 
31s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
10s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 3 new + 27 unchanged - 1 fixed = 30 total (was 28) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2517 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
45s{color} | {color:red} The patch has 72 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 49s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}139m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_131 Failed junit tests | 
hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
| JDK v1.7.0_121 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | 

[jira] [Commented] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-05-11 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007294#comment-16007294
 ] 

Eric Payne commented on YARN-2113:
--

Hi [~sunilg]. There's something strange about the way preemption works when 
PRIORITY_FIRST is set.

If I have 2 or more node managers in my cluster, a large amount of oscillation 
occurs. The number of preempted containers keeps increasing and never stops. 
This does not occur when USERLIMIT_FIRST is set. Before today, I had not tested 
any of these patches with 2 or more node managers, so I wondered if this was a 
recent development. I recompiled with 0013 and 0014, and they have the same 
behavior, so this is not unique to 0016.

> Add cross-user preemption within CapacityScheduler's leaf-queue
> ---
>
> Key: YARN-2113
> URL: https://issues.apache.org/jira/browse/YARN-2113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil G
> Attachments: IntraQueue Preemption-Impact Analysis.pdf, 
> TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt,
>  YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch, 
> YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch, 
> YARN-2113.0007.patch, YARN-2113.0008.patch, YARN-2113.0009.patch, 
> YARN-2113.0010.patch, YARN-2113.0011.patch, YARN-2113.0012.patch, 
> YARN-2113.0013.patch, YARN-2113.0014.patch, YARN-2113.0015.patch, 
> YARN-2113.0016.patch, YARN-2113.apply.onto.0012.ericp.patch, YARN-2113 
> Intra-QueuePreemption Behavior.pdf, YARN-2113.v0.patch
>
>
> Preemption today only works across queues and moves around resources across 
> queues per demand and usage. We should also have user-level preemption within 
> a queue, to balance capacity across users in a predictable manner.






[jira] [Commented] (YARN-4002) make ResourceTrackerService.nodeHeartbeat more concurrent

2017-05-11 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007289#comment-16007289
 ] 

Jason Lowe commented on YARN-4002:
--

FYI we're finding this change to be quite expensive on large clusters.  See 
HADOOP-14412.

> make ResourceTrackerService.nodeHeartbeat more concurrent
> -
>
> Key: YARN-4002
> URL: https://issues.apache.org/jira/browse/YARN-4002
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Hong Zhiguo
>Assignee: Hong Zhiguo
>Priority: Critical
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: 0001-YARN-4002.patch, YARN-4002-lockless-read.patch, 
> YARN-4002-rwlock.patch, YARN-4002-rwlock-v2.patch, YARN-4002-rwlock-v2.patch, 
> YARN-4002-rwlock-v3.patch, YARN-4002-rwlock-v3-rebase.patch, 
> YARN-4002-rwlock-v4.patch, YARN-4002-rwlock-v5.patch, 
> YARN-4002-rwlock-v6.patch, YARN-4002-v0.patch
>
>
> We have multiple RPC threads to handle NodeHeartbeatRequest from NMs. By 
> design, the method ResourceTrackerService.nodeHeartbeat should be concurrent 
> enough to scale for large clusters.
> But we have a "BIG" lock in NodesListManager.isValidNode which I think is 
> unnecessary.
> First, the fields "includes" and "excludes" of HostsFileReader are only 
> updated on "refresh nodes". All RPC threads handling node heartbeats are 
> only readers, so a RWLock could be used to allow concurrent access by the RPC 
> threads.
> Second, since the fields "includes" and "excludes" of HostsFileReader are 
> always updated by "reference assignment", which is atomic in Java, the 
> reader-side lock could just be skipped.
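To make the first suggestion concrete, here is a minimal, self-contained sketch of 
the read-write-lock pattern described above. The class and field names are 
illustrative only, not the actual NodesListManager/HostsFileReader code: heartbeat 
handlers take only the shared read lock, while "refresh nodes" takes the exclusive 
write lock to swap in the new sets.

{code:title=HostsView.java (illustrative sketch)}
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class HostsView {
  private final ReadWriteLock lock = new ReentrantReadWriteLock();
  // Replaced wholesale on refresh; never mutated in place.
  private Set<String> includes = Collections.emptySet();
  private Set<String> excludes = Collections.emptySet();

  /** Called by many RPC heartbeat threads concurrently (shared read lock). */
  public boolean isValidNode(String host) {
    lock.readLock().lock();
    try {
      return (includes.isEmpty() || includes.contains(host))
          && !excludes.contains(host);
    } finally {
      lock.readLock().unlock();
    }
  }

  /** Called only on "refresh nodes" (exclusive write lock). */
  public void refresh(Set<String> newIncludes, Set<String> newExcludes) {
    lock.writeLock().lock();
    try {
      includes = newIncludes;
      excludes = newExcludes;
    } finally {
      lock.writeLock().unlock();
    }
  }
}
{code}

The second suggestion, skipping the reader-side lock entirely, would additionally 
rely on marking the two fields {{volatile}} so that the reference assignments made 
during refresh are visible to the heartbeat threads.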






[jira] [Commented] (YARN-6380) FSAppAttempt keeps redundant copy of the queue

2017-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007271#comment-16007271
 ] 

Hudson commented on YARN-6380:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11726 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11726/])
YARN-6380. FSAppAttempt keeps redundant copy of the queue (templedf: rev 
90cb5b4635b7a0849912458afad052f569131a59)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java


> FSAppAttempt keeps redundant copy of the queue
> --
>
> Key: YARN-6380
> URL: https://issues.apache.org/jira/browse/YARN-6380
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha2
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: YARN-6380.001.patch, YARN-6380.002.patch, 
> YARN-6380.003.patch, YARN-6380.004.patch, YARN-6380.005.patch
>
>
> The {{FSAppAttempt}} class defines its own {{fsQueue}} variable that is a 
> second copy of the {{SchedulerApplicationAttempt}}'s {{queue}} variable.  
> Aside from being redundant, it's also a bug, because when moving 
> applications, we only update the {{SchedulerApplicationAttempt}}'s {{queue}}, 
> not the {{FSAppAttempt}}'s {{fsQueue}}.
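A minimal sketch of the problem shape follows; the class bodies and the method name 
are illustrative, not the real implementations. When only the base-class reference 
is updated on an application move, any code still reading the subclass copy sees a 
stale queue.

{code:title=illustrative sketch}
class Queue { }                            // placeholder for the scheduler queue type

class SchedulerApplicationAttempt {
  protected Queue queue;                   // the authoritative queue reference
}

class FSAppAttempt extends SchedulerApplicationAttempt {
  private Queue fsQueue;                   // redundant second copy of the same reference

  void moveApplication(Queue newQueue) {
    this.queue = newQueue;                 // updated when the app is moved...
    // ...but fsQueue is not, so anything reading fsQueue now sees the old queue
  }
}
{code}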






[jira] [Updated] (YARN-5328) InMemoryPlan enhancements required to support recurring reservations in the YARN ReservationSystem

2017-05-11 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5328:
-
Attachment: YARN-5328-v2.patch

Thanks [~curino] for the review. Attaching an updated patch (v2) that addresses 
your comments and fixes the Yetus warnings.

> InMemoryPlan enhancements required to support recurring reservations in the 
> YARN ReservationSystem
> --
>
> Key: YARN-5328
> URL: https://issues.apache.org/jira/browse/YARN-5328
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-5328-v1.patch, YARN-5328-v2.patch
>
>
> YARN-5326 proposes adding native support for recurring reservations in the 
> YARN ReservationSystem. This JIRA is a sub-task to track the changes required 
> in InMemoryPlan to accomplish it. Please refer to the design doc in the 
> parent JIRA for details.






[jira] [Commented] (YARN-5543) ResourceManager SchedulingMonitor could potentially terminate the preemption checker thread

2017-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007199#comment-16007199
 ] 

Hudson commented on YARN-5543:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11725 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11725/])
YARN-5543. ResourceManager SchedulingMonitor could potentially terminate (shv: 
rev 2ada100da7cfe12946e43da2929bd80c2a8bd833)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/TestSchedulingMonitor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/SchedulingMonitor.java


> ResourceManager SchedulingMonitor could potentially terminate the preemption 
> checker thread
> ---
>
> Key: YARN-5543
> URL: https://issues.apache.org/jira/browse/YARN-5543
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler, resourcemanager
>Affects Versions: 2.7.0, 2.6.1
>Reporter: Min Shen
>Assignee: Min Shen
>  Labels: oct16-medium
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha3, 2.8.2
>
> Attachments: YARN-5543.001.patch, YARN-5543.002.patch, 
> YARN-5543.003.patch, YARN-5543.004.patch, YARN-5543-branch-2.7.001.patch, 
> YARN-5543-branch-2.7.002.patch
>
>
> In SchedulingMonitor.java, when the service starts, it starts a checker 
> thread to perform Capacity Scheduler's preemption. However, the 
> implementation of this checker thread has the following issue:
> {code}
> while (!stopped && !Thread.currentThread().isInterrupted()) {
> 
> try {
>   Thread.sleep(monitorInterval);
> } catch (InterruptedException e) {
>   
>   break;
> }
> }
> {code}
> The above code snippet will terminate the checker thread whenever it is 
> interrupted. 
> We noticed in our cluster that this could lead to CapacityScheduler's 
> preemption disabled unexpectedly due to the checker thread getting terminated.
> We propose to use ScheduledExecutorService to improve the robustness of this 
> part of the code to ensure the liveness of CapacityScheduler's preemption 
> functionality.
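A minimal sketch of the proposed direction is below. This is an assumption about 
the shape of the fix, not the committed patch, and the class and method names are 
illustrative: a ScheduledExecutorService re-invokes the policy on a fixed interval, 
and catching Throwable inside the task prevents a single interrupt or exception 
from silently suppressing all future runs.

{code:title=illustrative sketch}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PreemptionCheckerDriver {
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();

  public void start(Runnable policyCheck, long monitorIntervalMs) {
    scheduler.scheduleAtFixedRate(() -> {
      try {
        policyCheck.run();            // run one preemption-policy check
      } catch (Throwable t) {
        // log and swallow: an uncaught exception would suppress subsequent runs
      }
    }, 0, monitorIntervalMs, TimeUnit.MILLISECONDS);
  }

  public void stop() {
    scheduler.shutdownNow();          // stopping is now an explicit operation
  }
}
{code}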






[jira] [Comment Edited] (YARN-5328) InMemoryPlan enhancements required to support recurring reservations in the YARN ReservationSystem

2017-05-11 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007184#comment-16007184
 ] 

Carlo Curino edited comment on YARN-5328 at 5/11/17 9:03 PM:
-

Thanks [~subru] for the contribution. 

Reviewing now. The patch looks generally good; there are some format fixes 
(correct, though not strictly needed... I am OK with them).
One thing I think is not currently correct is the handling of {{resCount}}: you 
should assign earliestStartTime and endTime for the periodic case. The 
earliestStartTime should be the absolute start time of the periodic reservation, 
while the endTime should be Long.MAX_VALUE.

Also, in {{RLESparseResourceAllocation}} you should not remove the last value to 
zero, as it helps us disambiguate an RLE whose last non-zero value is sustained 
forever (but trimmed by the caller) from an RLE that has a non-zero value only up 
to the last point.

Other than that, the patch is +1 pending fixes for the unit-test and findbugs 
issues Yetus is complaining about.
 


was (Author: curino):
Thanks [~subru] for the contribution. 

Reviewing now. The patch looks generally good; there are some format fixes 
(correct, though not strictly needed... I am OK with them).
One thing I think is not currently correct is the handling of {{resCount}}: you 
should assign earliestStartTime and endTime for the periodic case. The 
earliestStartTime should be the absolute start time of the periodic reservation, 
while the endTime should be Long.MAX_VALUE.

Other than that, the patch is +1 pending fixes for the unit-test and findbugs 
issues Yetus is complaining about.
 

> InMemoryPlan enhancements required to support recurring reservations in the 
> YARN ReservationSystem
> --
>
> Key: YARN-5328
> URL: https://issues.apache.org/jira/browse/YARN-5328
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-5328-v1.patch
>
>
> YARN-5326 proposes adding native support for recurring reservations in the 
> YARN ReservationSystem. This JIRA is a sub-task to track the changes required 
> in InMemoryPlan to accomplish it. Please refer to the design doc in the 
> parent JIRA for details.






[jira] [Commented] (YARN-5328) InMemoryPlan enhancements required to support recurring reservations in the YARN ReservationSystem

2017-05-11 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007184#comment-16007184
 ] 

Carlo Curino commented on YARN-5328:


Thanks [~subru] for the contribution. 

Reviewing now. The patch looks generally good; there are some format fixes 
(correct, though not strictly needed... I am OK with them).
One thing I think is not currently correct is the handling of {{resCount}}: you 
should assign earliestStartTime and endTime for the periodic case. The 
earliestStartTime should be the absolute start time of the periodic reservation, 
while the endTime should be Long.MAX_VALUE.

Other than that, the patch is +1 pending fixes for the unit-test and findbugs 
issues Yetus is complaining about.
 

> InMemoryPlan enhancements required to support recurring reservations in the 
> YARN ReservationSystem
> --
>
> Key: YARN-5328
> URL: https://issues.apache.org/jira/browse/YARN-5328
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-5328-v1.patch
>
>
> YARN-5326 proposes adding native support for recurring reservations in the 
> YARN ReservationSystem. This JIRA is a sub-task to track the changes required 
> in InMemoryPlan to accomplish it. Please refer to the design doc in the 
> parent JIRA for details.






[jira] [Updated] (YARN-5543) ResourceManager SchedulingMonitor could potentially terminate the preemption checker thread

2017-05-11 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated YARN-5543:
--
Labels: oct16-medium  (was: oct16-medium release-blocker)

> ResourceManager SchedulingMonitor could potentially terminate the preemption 
> checker thread
> ---
>
> Key: YARN-5543
> URL: https://issues.apache.org/jira/browse/YARN-5543
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler, resourcemanager
>Affects Versions: 2.7.0, 2.6.1
>Reporter: Min Shen
>Assignee: Min Shen
>  Labels: oct16-medium
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha3, 2.8.2
>
> Attachments: YARN-5543.001.patch, YARN-5543.002.patch, 
> YARN-5543.003.patch, YARN-5543.004.patch, YARN-5543-branch-2.7.001.patch, 
> YARN-5543-branch-2.7.002.patch
>
>
> In SchedulingMonitor.java, when the service starts, it starts a checker 
> thread to perform Capacity Scheduler's preemption. However, the 
> implementation of this checker thread has the following issue:
> {code}
> while (!stopped && !Thread.currentThread().isInterrupted()) {
> 
> try {
>   Thread.sleep(monitorInterval);
> } catch (InterruptedException e) {
>   
>   break;
> }
> }
> {code}
> The above code snippet will terminate the checker thread whenever it is 
> interrupted. 
> We noticed in our cluster that this could lead to CapacityScheduler's 
> preemption disabled unexpectedly due to the checker thread getting terminated.
> We propose to use ScheduledExecutorService to improve the robustness of this 
> part of the code to ensure the liveness of CapacityScheduler's preemption 
> functionality.






[jira] [Commented] (YARN-5543) ResourceManager SchedulingMonitor could potentially terminate the preemption checker thread

2017-05-11 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007133#comment-16007133
 ] 

Jonathan Hung commented on YARN-5543:
-

Uploaded YARN-5543.004 and YARN-5543-branch-2.7.002 to fix this.

> ResourceManager SchedulingMonitor could potentially terminate the preemption 
> checker thread
> ---
>
> Key: YARN-5543
> URL: https://issues.apache.org/jira/browse/YARN-5543
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler, resourcemanager
>Affects Versions: 2.7.0, 2.6.1
>Reporter: Min Shen
>Assignee: Min Shen
>  Labels: oct16-medium, release-blocker
> Attachments: YARN-5543.001.patch, YARN-5543.002.patch, 
> YARN-5543.003.patch, YARN-5543.004.patch, YARN-5543-branch-2.7.001.patch, 
> YARN-5543-branch-2.7.002.patch
>
>
> In SchedulingMonitor.java, when the service starts, it starts a checker 
> thread to perform Capacity Scheduler's preemption. However, the 
> implementation of this checker thread has the following issue:
> {code}
> while (!stopped && !Thread.currentThread().isInterrupted()) {
> 
> try {
>   Thread.sleep(monitorInterval);
> } catch (InterruptedException e) {
>   
>   break;
> }
> }
> {code}
> The above code snippet will terminate the checker thread whenever it is 
> interrupted. 
> We noticed in our cluster that this could lead to CapacityScheduler's 
> preemption being disabled unexpectedly due to the checker thread getting terminated.
> We propose to use ScheduledExecutorService to improve the robustness of this 
> part of the code to ensure the liveness of CapacityScheduler's preemption 
> functionality.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5543) ResourceManager SchedulingMonitor could potentially terminate the preemption checker thread

2017-05-11 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-5543:

Attachment: YARN-5543-branch-2.7.002.patch

> ResourceManager SchedulingMonitor could potentially terminate the preemption 
> checker thread
> ---
>
> Key: YARN-5543
> URL: https://issues.apache.org/jira/browse/YARN-5543
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler, resourcemanager
>Affects Versions: 2.7.0, 2.6.1
>Reporter: Min Shen
>Assignee: Min Shen
>  Labels: oct16-medium, release-blocker
> Attachments: YARN-5543.001.patch, YARN-5543.002.patch, 
> YARN-5543.003.patch, YARN-5543.004.patch, YARN-5543-branch-2.7.001.patch, 
> YARN-5543-branch-2.7.002.patch
>
>
> In SchedulingMonitor.java, when the service starts, it starts a checker 
> thread to perform Capacity Scheduler's preemption. However, the 
> implementation of this checker thread has the following issue:
> {code}
> while (!stopped && !Thread.currentThread().isInterrupted()) {
> 
> try {
>   Thread.sleep(monitorInterval);
> } catch (InterruptedException e) {
>   
>   break;
> }
> }
> {code}
> The above code snippet will terminate the checker thread whenever it is 
> interrupted. 
> We noticed in our cluster that this could lead to CapacityScheduler's 
> preemption being disabled unexpectedly due to the checker thread getting terminated.
> We propose to use ScheduledExecutorService to improve the robustness of this 
> part of the code to ensure the liveness of CapacityScheduler's preemption 
> functionality.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5543) ResourceManager SchedulingMonitor could potentially terminate the preemption checker thread

2017-05-11 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-5543:

Attachment: YARN-5543.004.patch

> ResourceManager SchedulingMonitor could potentially terminate the preemption 
> checker thread
> ---
>
> Key: YARN-5543
> URL: https://issues.apache.org/jira/browse/YARN-5543
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler, resourcemanager
>Affects Versions: 2.7.0, 2.6.1
>Reporter: Min Shen
>Assignee: Min Shen
>  Labels: oct16-medium, release-blocker
> Attachments: YARN-5543.001.patch, YARN-5543.002.patch, 
> YARN-5543.003.patch, YARN-5543.004.patch, YARN-5543-branch-2.7.001.patch
>
>
> In SchedulingMonitor.java, when the service starts, it starts a checker 
> thread to perform Capacity Scheduler's preemption. However, the 
> implementation of this checker thread has the following issue:
> {code}
> while (!stopped && !Thread.currentThread().isInterrupted()) {
> 
> try {
>   Thread.sleep(monitorInterval);
> } catch (InterruptedException e) {
>   
>   break;
> }
> }
> {code}
> The above code snippet will terminate the checker thread whenever it is 
> interrupted. 
> We noticed in our cluster that this could lead to CapacityScheduler's 
> preemption being disabled unexpectedly due to the checker thread getting terminated.
> We propose to use ScheduledExecutorService to improve the robustness of this 
> part of the code to ensure the liveness of CapacityScheduler's preemption 
> functionality.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5543) ResourceManager SchedulingMonitor could potentially terminate the preemption checker thread

2017-05-11 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007117#comment-16007117
 ] 

Konstantin Shvachko commented on YARN-5543:
---

In the new test, could you guys make sure that the {{rm}} and {{monitor}} objects 
are closed? That avoids the warning "Resource leak: 'monitor' is never closed".
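
For illustration, a generic sketch of the close-on-exit pattern being asked for, 
using a hypothetical AutoCloseable stand-in rather than the actual MockRM and 
SchedulingMonitor test wiring:

{code:title=ClosePatternSketch.java}
public class ClosePatternSketch {
  // Hypothetical resource standing in for rm/monitor in the test.
  static final class FakeMonitor implements AutoCloseable {
    void runChecks() { System.out.println("checking..."); }
    @Override public void close() { System.out.println("closed"); }
  }

  public static void main(String[] args) throws Exception {
    // try-with-resources guarantees close() runs, which is what silences
    // the "Resource leak: 'monitor' is never closed" warning
    try (FakeMonitor monitor = new FakeMonitor()) {
      monitor.runChecks();
    }
  }
}
{code}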

> ResourceManager SchedulingMonitor could potentially terminate the preemption 
> checker thread
> ---
>
> Key: YARN-5543
> URL: https://issues.apache.org/jira/browse/YARN-5543
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler, resourcemanager
>Affects Versions: 2.7.0, 2.6.1
>Reporter: Min Shen
>Assignee: Min Shen
>  Labels: oct16-medium, release-blocker
> Attachments: YARN-5543.001.patch, YARN-5543.002.patch, 
> YARN-5543.003.patch, YARN-5543-branch-2.7.001.patch
>
>
> In SchedulingMonitor.java, when the service starts, it starts a checker 
> thread to perform Capacity Scheduler's preemption. However, the 
> implementation of this checker thread has the following issue:
> {code}
> while (!stopped && !Thread.currentThread().isInterrupted()) {
> 
> try {
>   Thread.sleep(monitorInterval);
> } catch (InterruptedException e) {
>   
>   break;
> }
> }
> {code}
> The above code snippet will terminate the checker thread whenever it is 
> interrupted. 
> We noticed in our cluster that this could lead to CapacityScheduler's 
> preemption being disabled unexpectedly due to the checker thread getting terminated.
> We propose to use ScheduledExecutorService to improve the robustness of this 
> part of the code to ensure the liveness of CapacityScheduler's preemption 
> functionality.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6585) RM fails to start when upgrading from 2.7 to 2.8 for clusters with node labels.

2017-05-11 Thread Nathan Roberts (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007016#comment-16007016
 ] 

Nathan Roberts edited comment on YARN-6585 at 5/11/17 7:13 PM:
---

YARN-6143 changed AddToClusterNodeLabelsRequestProto such that field 1 is now 
referred to as deprecatedNodeLabels. FileSystemNodeLabelsStore is referencing 
field 2 (nodeLabels), which will not be present in a 2.7 labelStore.

CC: [~leftnoteasy] [~sunilg]



was (Author: nroberts):
YARN-6143 changed AddToClusterNodeLabelsRequestProto such that field 1 is now 
referred to as deprecatedNodeLabels. FileSystemNodeLabelsStore is referencing 
field 2 (nodeLabels) which will not be present in a 2.7 labelStore.

CC: [~leftnoteasy]


> RM fails to start when upgrading from 2.7 to 2.8 for clusters with node 
> labels.
> ---
>
> Key: YARN-6585
> URL: https://issues.apache.org/jira/browse/YARN-6585
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Payne
>Priority: Blocker
>
> {noformat}
> Caused by: java.io.IOException: Not all labels being replaced contained by 
> known label collections, please check, new labels=[abc]
> at 
> org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager.checkReplaceLabelsOnNode(CommonNodeLabelsManager.java:718)
> at 
> org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager.replaceLabelsOnNode(CommonNodeLabelsManager.java:737)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager.replaceLabelsOnNode(RMNodeLabelsManager.java:189)
> at 
> org.apache.hadoop.yarn.nodelabels.FileSystemNodeLabelsStore.loadFromMirror(FileSystemNodeLabelsStore.java:181)
> at 
> org.apache.hadoop.yarn.nodelabels.FileSystemNodeLabelsStore.recover(FileSystemNodeLabelsStore.java:208)
> at 
> org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager.initNodeLabelStore(CommonNodeLabelsManager.java:251)
> at 
> org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager.serviceStart(CommonNodeLabelsManager.java:265)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> ... 13 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6585) RM fails to start when upgrading from 2.7 to 2.8 for clusters with node labels.

2017-05-11 Thread Nathan Roberts (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007016#comment-16007016
 ] 

Nathan Roberts commented on YARN-6585:
--

YARN-6143 changed AddToClusterNodeLabelsRequestProto such that field 1 is now 
referred to as deprecatedNodeLabels. FileSystemNodeLabelsStore is referencing 
field 2 (nodeLabels), which will not be present in a 2.7 labelStore.

CC: [~leftnoteasy]


> RM fails to start when upgrading from 2.7 to 2.8 for clusters with node 
> labels.
> ---
>
> Key: YARN-6585
> URL: https://issues.apache.org/jira/browse/YARN-6585
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Payne
>Priority: Blocker
>
> {noformat}
> Caused by: java.io.IOException: Not all labels being replaced contained by 
> known label collections, please check, new labels=[abc]
> at 
> org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager.checkReplaceLabelsOnNode(CommonNodeLabelsManager.java:718)
> at 
> org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager.replaceLabelsOnNode(CommonNodeLabelsManager.java:737)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager.replaceLabelsOnNode(RMNodeLabelsManager.java:189)
> at 
> org.apache.hadoop.yarn.nodelabels.FileSystemNodeLabelsStore.loadFromMirror(FileSystemNodeLabelsStore.java:181)
> at 
> org.apache.hadoop.yarn.nodelabels.FileSystemNodeLabelsStore.recover(FileSystemNodeLabelsStore.java:208)
> at 
> org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager.initNodeLabelStore(CommonNodeLabelsManager.java:251)
> at 
> org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager.serviceStart(CommonNodeLabelsManager.java:265)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> ... 13 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5892) Capacity Scheduler: Support user-specific minimum user limit percent

2017-05-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007009#comment-16007009
 ] 

Hadoop QA commented on YARN-5892:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 2s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
8s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  0s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 19 new + 682 unchanged - 1 fixed = 701 total (was 683) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
27s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 4 new + 874 unchanged - 0 fixed = 878 total (was 874) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
26s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 39m 
25s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
14s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}104m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-5892 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867610/YARN-5892.013.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 23522cb60157 

[jira] [Updated] (YARN-6585) RM fails to start when upgrading from 2.7 to 2.8 for clusters with node labels.

2017-05-11 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-6585:
-
Target Version/s: 2.8.1
Priority: Blocker  (was: Major)

> RM fails to start when upgrading from 2.7 to 2.8 for clusters with node 
> labels.
> ---
>
> Key: YARN-6585
> URL: https://issues.apache.org/jira/browse/YARN-6585
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Payne
>Priority: Blocker
>
> {noformat}
> Caused by: java.io.IOException: Not all labels being replaced contained by 
> known label collections, please check, new labels=[abc]
> at 
> org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager.checkReplaceLabelsOnNode(CommonNodeLabelsManager.java:718)
> at 
> org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager.replaceLabelsOnNode(CommonNodeLabelsManager.java:737)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager.replaceLabelsOnNode(RMNodeLabelsManager.java:189)
> at 
> org.apache.hadoop.yarn.nodelabels.FileSystemNodeLabelsStore.loadFromMirror(FileSystemNodeLabelsStore.java:181)
> at 
> org.apache.hadoop.yarn.nodelabels.FileSystemNodeLabelsStore.recover(FileSystemNodeLabelsStore.java:208)
> at 
> org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager.initNodeLabelStore(CommonNodeLabelsManager.java:251)
> at 
> org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager.serviceStart(CommonNodeLabelsManager.java:265)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> ... 13 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6585) RM fails to start when upgrading from 2.7 to 2.8 for clusters with node labels.

2017-05-11 Thread Eric Payne (JIRA)
Eric Payne created YARN-6585:


 Summary: RM fails to start when upgrading from 2.7 to 2.8 for 
clusters with node labels.
 Key: YARN-6585
 URL: https://issues.apache.org/jira/browse/YARN-6585
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Eric Payne


{noformat}
Caused by: java.io.IOException: Not all labels being replaced contained by 
known label collections, please check, new labels=[abc]
at 
org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager.checkReplaceLabelsOnNode(CommonNodeLabelsManager.java:718)
at 
org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager.replaceLabelsOnNode(CommonNodeLabelsManager.java:737)
at 
org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager.replaceLabelsOnNode(RMNodeLabelsManager.java:189)
at 
org.apache.hadoop.yarn.nodelabels.FileSystemNodeLabelsStore.loadFromMirror(FileSystemNodeLabelsStore.java:181)
at 
org.apache.hadoop.yarn.nodelabels.FileSystemNodeLabelsStore.recover(FileSystemNodeLabelsStore.java:208)
at 
org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager.initNodeLabelStore(CommonNodeLabelsManager.java:251)
at 
org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager.serviceStart(CommonNodeLabelsManager.java:265)
at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
... 13 more
{noformat}




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5841) Report only local collectors on node upon resync with RM after RM fails over

2017-05-11 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006885#comment-16006885
 ] 

Haibo Chen commented on YARN-5841:
--

Closing this as not a problem. Please reopen if you disagree.

> Report only local collectors on node upon resync with RM after RM fails over
> 
>
> Key: YARN-5841
> URL: https://issues.apache.org/jira/browse/YARN-5841
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Haibo Chen
>
> As per discussion on YARN-3359, we can potentially optimize reporting of 
> collectors to RM after RM fails over.
> Currently the NM would report all the collectors known to itself in the HB after 
> resync with the RM. This would mean many NMs may report a very similar set 
> of collector infos in the first NM HB on reconnection. 
> This JIRA is to explore how to optimize this flow and, if possible, fix it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-5841) Report only local collectors on node upon resync with RM after RM fails over

2017-05-11 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen resolved YARN-5841.
--
Resolution: Not A Problem

> Report only local collectors on node upon resync with RM after RM fails over
> 
>
> Key: YARN-5841
> URL: https://issues.apache.org/jira/browse/YARN-5841
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Haibo Chen
>
> As per discussion on YARN-3359, we can potentially optimize reporting of 
> collectors to RM after RM fails over.
> Currently the NM would report all the collectors known to itself in the HB after 
> resync with the RM. This would mean many NMs may report a very similar set 
> of collector infos in the first NM HB on reconnection. 
> This JIRA is to explore how to optimize this flow and, if possible, fix it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5892) Capacity Scheduler: Support user-specific minimum user limit percent

2017-05-11 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-5892:
-
Attachment: YARN-5892.013.patch

Attaching patch YARN-5892.013.patch. The previous patch had a rounding error.

> Capacity Scheduler: Support user-specific minimum user limit percent
> 
>
> Key: YARN-5892
> URL: https://issues.apache.org/jira/browse/YARN-5892
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: Active users highlighted.jpg, YARN-5892.001.patch, 
> YARN-5892.002.patch, YARN-5892.003.patch, YARN-5892.004.patch, 
> YARN-5892.005.patch, YARN-5892.006.patch, YARN-5892.007.patch, 
> YARN-5892.008.patch, YARN-5892.009.patch, YARN-5892.010.patch, 
> YARN-5892.012.patch, YARN-5892.013.patch
>
>
> Currently, in the capacity scheduler, the {{minimum-user-limit-percent}} 
> property is per queue. A cluster admin should be able to set the minimum user 
> limit percent on a per-user basis within the queue.
> This functionality is needed so that when intra-queue preemption is enabled 
> (YARN-4945 / YARN-2113), some users can be deemed as more important than 
> other users, and resources from VIP users won't be as likely to be preempted.
> For example, if the {{getstuffdone}} queue has a MULP of 25 percent, but user 
> {{jane}} is a power user of queue {{getstuffdone}} and needs to be guaranteed 
> 75 percent, the properties for {{getstuffdone}} and {{jane}} would look like 
> this:
> {code}
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.minimum-user-limit-percent</name>
>     <value>25</value>
>   </property>
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.jane.minimum-user-limit-percent</name>
>     <value>75</value>
>   </property>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6582) FSAppAttempt demand can be updated atomically in updateDemand()

2017-05-11 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006781#comment-16006781
 ] 

Karthik Kambatla commented on YARN-6582:


The patch doesn't change the behavior, hence no tests. 

> FSAppAttempt demand can be updated atomically in updateDemand()
> ---
>
> Key: YARN-6582
> URL: https://issues.apache.org/jira/browse/YARN-6582
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: YARN-6582.001.patch
>
>
> FSAppAttempt#updateDemand first sets demand to 0, and then adds up all the 
> outstanding requests. Instead, we could use another variable tmpDemand to 
> build the new value and atomically replace the demand.
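
For illustration, a minimal sketch of the difference; the real FSAppAttempt works 
with Resource objects, simplified here to a long, and names like tmpDemand follow 
the description above:

{code:title=DemandUpdateSketch.java}
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class DemandUpdateSketch {
  private final List<Long> outstandingRequests = new CopyOnWriteArrayList<>();
  private volatile long demand; // readers should see either the old or the new total

  /** Current pattern: zero first, then accumulate -- a reader can observe 0. */
  void updateDemandRacy() {
    demand = 0;
    for (long ask : outstandingRequests) {
      demand += ask;
    }
  }

  /** Proposed pattern: build into tmpDemand, then publish with a single write. */
  void updateDemandAtomic() {
    long tmpDemand = 0;
    for (long ask : outstandingRequests) {
      tmpDemand += ask;
    }
    demand = tmpDemand;
  }
}
{code}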



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6584) Some license modification in codes

2017-05-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006707#comment-16006707
 ] 

Hadoop QA commented on YARN-6584:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  4m 
 6s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-common-project/hadoop-auth in trunk has 1 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
25s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
28s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
46s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
15s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  4m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 12m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-maven-plugins in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
56s{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
59s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
24s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
58s{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
53s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
28s{color} | {color:green} 

[jira] [Commented] (YARN-6366) Refactor the NodeManager DeletionService to support additional DeletionTask types.

2017-05-11 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006699#comment-16006699
 ] 

Varun Vasudev commented on YARN-6366:
-

bq. Regarding #4) Moving the conversion methods into ProtoUtils creates a 
circular dependency between hadoop-common and the nodemanager. Would a 
consolidated NMProtoUtils within the nodemanager module suffice?

Yep. Makes sense.

bq. Regarding #5) If you take a look at the diff for that test, the existing 
test passes an empty Path array. I've tried to keep the test functionally the 
same.

Got it. No need to change it then.

> Refactor the NodeManager DeletionService to support additional DeletionTask 
> types.
> --
>
> Key: YARN-6366
> URL: https://issues.apache.org/jira/browse/YARN-6366
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager, yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-6366.001.patch, YARN-6366.002.patch, 
> YARN-6366.003.patch, YARN-6366.004.patch, YARN-6366.005.patch, 
> YARN-6366.006.patch
>
>
> The NodeManager's DeletionService only supports file-based DeletionTasks. 
> This makes sense, as files (and directories) have been the primary concern for 
> cleanup to date. With the addition of the Docker container runtime, additional 
> types of DeletionTask are likely to be required, such as deletion of Docker 
> containers and images. See YARN-5366 and YARN-5670. This issue is to refactor 
> the DeletionService to support additional DeletionTask types.
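
For illustration, a rough sketch of the refactoring direction; the class names and 
constructors below are illustrative only, not the actual patch:

{code:title=DeletionTaskSketch.java}
import java.util.List;

// Common base type so the DeletionService can run more than one kind of cleanup.
abstract class DeletionTask implements Runnable {
  protected final String user;
  protected DeletionTask(String user) { this.user = user; }
  @Override
  public abstract void run(); // performs the actual deletion work
}

// Existing behaviour: delete files and directories on local disks.
class FileDeletionTask extends DeletionTask {
  private final List<String> paths;
  FileDeletionTask(String user, List<String> paths) { super(user); this.paths = paths; }
  @Override
  public void run() { paths.forEach(p -> System.out.println("delete path " + p)); }
}

// New kind of task enabled by the refactoring: clean up Docker containers/images.
class DockerDeletionTask extends DeletionTask {
  private final String containerId;
  DockerDeletionTask(String user, String containerId) { super(user); this.containerId = containerId; }
  @Override
  public void run() { System.out.println("remove docker container " + containerId); }
}
{code}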



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6246) Identifying starved apps does not need the scheduler writelock

2017-05-11 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006676#comment-16006676
 ] 

Daniel Templeton commented on YARN-6246:


Thanks for taking a closer look.  I agree that the impact of the race on 
{{lastTimeAtMinShare}} is pretty minor right now, but I don't like the idea of 
knowingly creating even a trivial race condition.  Can we make 
{{lastTimeAtMinShare}} atomic?
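
For illustration, a minimal sketch of that suggestion; the field name comes from the 
discussion above, while the surrounding methods and timeout handling are assumed:

{code:title=StarvationTimerSketch.java}
import java.util.concurrent.atomic.AtomicLong;

public class StarvationTimerSketch {
  // AtomicLong gives the starvation check a consistent read even when the
  // scheduler thread updates the timestamp concurrently.
  private final AtomicLong lastTimeAtMinShare = new AtomicLong(System.currentTimeMillis());

  void markAtMinShare(long now) {
    lastTimeAtMinShare.set(now); // writer side (scheduler update)
  }

  boolean starvedForMinShare(long now, long minSharePreemptionTimeoutMs) {
    return now - lastTimeAtMinShare.get() > minSharePreemptionTimeoutMs; // reader side
  }
}
{code}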

> Identifying starved apps does not need the scheduler writelock
> --
>
> Key: YARN-6246
> URL: https://issues.apache.org/jira/browse/YARN-6246
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: YARN-6246.001.patch, YARN-6246.002.patch, 
> YARN-6246.003.patch
>
>
> Currently, the starvation checks are done holding the scheduler writelock. We 
> are probably better of doing this outside. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6366) Refactor the NodeManager DeletionService to support additional DeletionTask types.

2017-05-11 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006564#comment-16006564
 ] 

Shane Kumpf commented on YARN-6366:
---

Thanks for the review, [~vvasudev]!

I will put together a patch to address the bulk of the above.

Regarding #4) Moving the conversion methods into ProtoUtils creates a circular 
dependency between hadoop-common and the nodemanager. Would a consolidated 
NMProtoUtils within the nodemanager module suffice?

Regarding #5) If you take a look at the diff for that test, the existing test 
passes an empty Path array. I've tried to keep the test functionally the same.
{code}
-FileDeletionTask dependentDeletionTask =
-del.createFileDeletionTask(null, userDirPath, new Path[] {});
+FileDeletionTask dependentDeletionTask = new FileDeletionTask(del, null,
+userDirPath, new ArrayList<>());
{code}

> Refactor the NodeManager DeletionService to support additional DeletionTask 
> types.
> --
>
> Key: YARN-6366
> URL: https://issues.apache.org/jira/browse/YARN-6366
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager, yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-6366.001.patch, YARN-6366.002.patch, 
> YARN-6366.003.patch, YARN-6366.004.patch, YARN-6366.005.patch, 
> YARN-6366.006.patch
>
>
> The NodeManager's DeletionService only supports file-based DeletionTasks. 
> This makes sense, as files (and directories) have been the primary concern for 
> cleanup to date. With the addition of the Docker container runtime, additional 
> types of DeletionTask are likely to be required, such as deletion of Docker 
> containers and images. See YARN-5366 and YARN-5670. This issue is to refactor 
> the DeletionService to support additional DeletionTask types.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3137) CapacityScheduler.checkAccess unnecessarily grabs the scheduler lock

2017-05-11 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006467#comment-16006467
 ] 

Jason Lowe commented on YARN-3137:
--

Seeing this lock contention quite a bit in 2.8, and I guess I'm still a little 
confused why we need a full CapacityScheduler lock to do this.  Even if we have 
a lock, we're either checking just before or just after some kind of hierarchy 
change.  Queue lookup is not by path but simply a concurrent hashmap lookup, so 
that's not an issue.   Either the queue is there or it's not.  Then we can 
invoke checkAccess on the queue itself.  The queue knows its path in the 
hierarchy, and again either we're checking before the queue is moved or after.  
As long as we lock the specific queue when trying to check the ACL we should be 
fine, but maybe I'm missing something.

Can someone elaborate the scenario where moving the checkAccess lock from the 
highly-contended CapacityScheduler lock to a queue-specific lock causes 
problems when the hierarchy changes?
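
For illustration, a rough sketch of the flow described above, with hypothetical 
queue and ACL types; the point is that the lookup itself needs no scheduler-wide 
lock, only the per-queue check does:

{code:title=CheckAccessSketch.java}
import java.util.concurrent.ConcurrentHashMap;

public class CheckAccessSketch {
  interface Queue {
    // assumed to synchronize internally on the queue (or its ACLs), not the scheduler
    boolean hasAccess(String user);
  }

  private final ConcurrentHashMap<String, Queue> queues = new ConcurrentHashMap<>();

  boolean checkAccess(String user, String queueName) {
    Queue queue = queues.get(queueName); // concurrent-map lookup, no global lock
    if (queue == null) {
      return false; // queue not found: deny, as the existing null check does
    }
    return queue.hasAccess(user); // per-queue check, before or after any queue move
  }
}
{code}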

> CapacityScheduler.checkAccess unnecessarily grabs the scheduler lock
> 
>
> Key: YARN-3137
> URL: https://issues.apache.org/jira/browse/YARN-3137
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.5.0
>Reporter: Jason Lowe
>Assignee: Varun Saxena
>
> The queues are stored in a ConcurrentHashMap and the code already checks for 
> a null queue.  At best we need to lock individual queues when processing the 
> access check, but I don't see why we need to grab the highly-contested 
> scheduler lock to look up which queue to use for the hasAccess call.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3981) offline collector: support timeline clients not associated with an application

2017-05-11 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006278#comment-16006278
 ] 

Rohith Sharma K S commented on YARN-3981:
-

Thanks, [~vrushalic], for skimming through the doc. 

bq. Do I understand it correctly that flow collectors will run on each node 
that runs an NM in the cluster?
No. The plan is to start one flow collector service that starts only one 
container. One flow collector service serves all offline timeline clients. 
However, the number of flow collectors is admin configurable, so there can be N 
collector services. Our TimelineClient will be able to discover all N collector 
services and make use of only one at a time. 

bq. How much traffic do we think might come in? Would it be similar to app 
table writes? If not, is there a possibility we can run this on head node of 
the cluster like where RM or NNs run? Not on the same node as RM but a node 
similar to RM, so that it's "outside" the cluster. We have fairly big sized 
clusters and having each node run a collector may not be optimal.
As of today, the traffic is much lower than for app collectors. For example, 
whenever Hive executes a query, the query details are published to ATSv2. But we 
cannot commit to a traffic estimate, since it is hard to predict. 

bq. aggregation is not relevant I think for a flow collector. Or do we want to 
support it? If not, we don't need to mention it under challenges, it is a non 
issue.
Yep, aggregation is not relevant. I will update the doc. Btw, is it possible to 
support aggregation at the flow-run level? 

> offline collector: support timeline clients not associated with an application
> --
>
> Key: YARN-3981
> URL: https://issues.apache.org/jira/browse/YARN-3981
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Rohith Sharma K S
>  Labels: YARN-5355
> Attachments: YARN-3981- offline-collector-draft.pdf
>
>
> In the current v.2 design, all timeline writes must belong in a 
> flow/application context (cluster + user + flow + flow run + application).
> But there are use cases that require writing data outside the context of an 
> application. One such example is a higher level client (e.g. tez client or 
> hive/oozie/cascading client) writing flow-level data that spans multiple 
> applications. We need to find a way to support them.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6584) Some license modification in codes

2017-05-11 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated YARN-6584:
---
Attachment: YARN-6584-001.patch

> Some license modification in codes
> --
>
> Key: YARN-6584
> URL: https://issues.apache.org/jira/browse/YARN-6584
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Trivial
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-6584-001.patch
>
>
> The license headers in some Java files are not the same as in others. Submitting 
> a patch to fix this.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6584) Some license modification in codes

2017-05-11 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated YARN-6584:
---
Attachment: (was: YARN-6584-001.patch)

> Some license modification in codes
> --
>
> Key: YARN-6584
> URL: https://issues.apache.org/jira/browse/YARN-6584
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Trivial
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-6584-001.patch
>
>
> The license headers in some Java files are not the same as in others. Submitting 
> a patch to fix this.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6584) Some license modification in codes

2017-05-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006197#comment-16006197
 ] 

Hadoop QA commented on YARN-6584:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  4m 
14s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-common-project/hadoop-auth in trunk has 1 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
29s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
40s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
55s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
28s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  4m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 12m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-maven-plugins in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
55s{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
54s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
31s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 50s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
1s{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
55s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
11s{color} | {color:green} 

[jira] [Commented] (YARN-6266) Extend the resource class to support ports management

2017-05-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006107#comment-16006107
 ] 

Hadoop QA commented on YARN-6266:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
30s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
3s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
50s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
1s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 13m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 13m 45s{color} 
| {color:red} root generated 10 new + 787 unchanged - 0 fixed = 797 total (was 
787) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  0s{color} | {color:orange} root: The patch generated 172 new + 528 
unchanged - 1 fixed = 700 total (was 529) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 67 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
2s{color} | {color:red} The patch 9 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
17s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
generated 5 new + 1 unchanged - 0 fixed = 6 total (was 1) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
59s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
5s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 1 new + 5 unchanged - 0 fixed = 6 total (was 5) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api 
generated 2 new + 123 unchanged - 0 fixed = 125 total (was 123) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
37s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 40s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | 

[jira] [Updated] (YARN-6580) Incorrect logger for FairSharePolicy

2017-05-11 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6580:
---
Summary: Incorrect logger for FairSharePolicy  (was: Incorrect LOG for 
FairSharePolicy)

> Incorrect logger for FairSharePolicy
> 
>
> Key: YARN-6580
> URL: https://issues.apache.org/jira/browse/YARN-6580
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Yufei Gu
>Priority: Minor
>  Labels: newbie++
>
> {code}
> public class FairSharePolicy extends SchedulingPolicy {
>   private static final Log LOG = LogFactory.getLog(FifoPolicy.class);
> {code}
> should be {{LogFactory.getLog(FairSharePolicy.class);}}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-6581) Function ContainersMonitorImpl.MonitoringThread#run() is too long

2017-05-11 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu resolved YARN-6581.

Resolution: Duplicate

> Function ContainersMonitorImpl.MonitoringThread#run() is too long 
> --
>
> Key: YARN-6581
> URL: https://issues.apache.org/jira/browse/YARN-6581
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>  Labels: newbie
>
> It is almost 200 lines long, which makes it hard to read and maintain. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6581) Function ContainersMonitorImpl.MonitoringThread#run() is too long

2017-05-11 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6581:
---
Summary: Function ContainersMonitorImpl.MonitoringThread#run() is too long  
 (was: Function length of MonitoringThread#run() is too long )

> Function ContainersMonitorImpl.MonitoringThread#run() is too long 
> --
>
> Key: YARN-6581
> URL: https://issues.apache.org/jira/browse/YARN-6581
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>  Labels: newbie
>
> It is almost 200 lines long, which makes it hard to read and maintain. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org