[jira] [Comment Edited] (YARN-6266) Extend the resource class to support ports management

2017-05-10, jialei weng (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16005920#comment-16005920
 ] 

jialei weng edited comment on YARN-6266 at 5/11/17 5:46 AM:


It is not the same as anti-affinity scheduling; it just brings one way to 
manage ports as a resource in YARN.


was (Author: wjlei):
It is not the same as anti-affinity scheduling; it just brings one way to 
manage ports as a resource.

> Extend the resource class to support ports management
> -
>
> Key: YARN-6266
> URL: https://issues.apache.org/jira/browse/YARN-6266
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: jialei weng
> Attachments: YARN-6266.001.patch
>
>
> Just like vcores and memory, ports are an important resource for a job to 
> allocate. We should add port management logic to YARN. It would let users 
> place two jobs (with the same port requirement) on different machines. 
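
A minimal sketch of the idea, assuming a hypothetical PortsResource helper 
(names are illustrative only, not the API of YARN-6266.001.patch): each node 
tracks which ports are taken, and the scheduler only places a container whose 
requested ports are all free, so two jobs demanding the same port necessarily 
land on different machines.

{code:java}
import java.util.BitSet;

/** Hypothetical per-node bookkeeping for ports as a schedulable resource. */
public class PortsResource {
  // One bit per port; a set bit means the port is already allocated.
  private final BitSet allocated = new BitSet(65536);

  /** Returns true only if every requested port is still free on this node. */
  public synchronized boolean fits(int[] requestedPorts) {
    for (int p : requestedPorts) {
      if (allocated.get(p)) {
        return false;
      }
    }
    return true;
  }

  /** Marks the ports as in use; callers must check fits() first. */
  public synchronized void allocate(int[] requestedPorts) {
    for (int p : requestedPorts) {
      allocated.set(p);
    }
  }

  /** Frees the ports when the container completes. */
  public synchronized void release(int[] requestedPorts) {
    for (int p : requestedPorts) {
      allocated.clear(p);
    }
  }
}
{code}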






[jira] [Commented] (YARN-6266) Extend the resource class to support ports management

2017-05-10, jialei weng (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16005920#comment-16005920
 ] 

jialei weng commented on YARN-6266:
---

It is not the same as anti-affinity scheduling; it just brings one way to 
manage ports as a resource.

> Extend the resource class to support ports management
> -
>
> Key: YARN-6266
> URL: https://issues.apache.org/jira/browse/YARN-6266
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: jialei weng
> Attachments: YARN-6266.001.patch
>
>
> Just like vcores and memory, ports are an important resource for a job to 
> allocate. We should add port management logic to YARN. It would let users 
> place two jobs (with the same port requirement) on different machines. 






[jira] [Assigned] (YARN-6266) Extend the resource class to support ports management

2017-05-10, jialei weng (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jialei weng reassigned YARN-6266:
-

Assignee: jialei weng

> Extend the resource class to support ports management
> -
>
> Key: YARN-6266
> URL: https://issues.apache.org/jira/browse/YARN-6266
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: jialei weng
>Assignee: jialei weng
> Attachments: YARN-6266.001.patch
>
>
> Just like vcores and memory, ports are an important resource for a job to 
> allocate. We should add port management logic to YARN. It would let users 
> place two jobs (with the same port requirement) on different machines. 






[jira] [Assigned] (YARN-6266) Extend the resource class to support ports management

2017-05-10, jialei weng (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jialei weng reassigned YARN-6266:
-

Assignee: (was: jialei weng)

> Extend the resource class to support ports management
> -
>
> Key: YARN-6266
> URL: https://issues.apache.org/jira/browse/YARN-6266
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: jialei weng
> Attachments: YARN-6266.001.patch
>
>
> Just like vcores and memory, ports are an important resource for a job to 
> allocate. We should add port management logic to YARN. It would let users 
> place two jobs (with the same port requirement) on different machines. 






[jira] [Updated] (YARN-6266) Extend the resource class to support ports management

2017-05-10, jialei weng (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jialei weng updated YARN-6266:
--
Attachment: YARN-6266.001.patch

> Extend the resource class to support ports management
> -
>
> Key: YARN-6266
> URL: https://issues.apache.org/jira/browse/YARN-6266
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: jialei weng
> Attachments: YARN-6266.001.patch
>
>
> Just like vcores and memory, ports are an important resource for a job to 
> allocate. We should add port management logic to YARN. It would let users 
> place two jobs (with the same port requirement) on different machines. 






[jira] [Updated] (YARN-6584) Some license modification in codes

2017-05-10, Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated YARN-6584:
---
Description: The license header in some Java files is not the same as in 
others. Submitting a patch to fix this!

> Some license modification in codes
> --
>
> Key: YARN-6584
> URL: https://issues.apache.org/jira/browse/YARN-6584
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Trivial
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-6584-001.patch
>
>
> The license header in some Java files is not the same as in others. 
> Submitting a patch to fix this!
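
For reference, a sketch of the standard ASF license header that Hadoop Java 
sources carry; the patch would presumably align the inconsistent files with 
this form:

{code:java}
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
{code}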






[jira] [Updated] (YARN-6584) Some license modification in codes

2017-05-10, Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated YARN-6584:
---
Attachment: YARN-6584-001.patch

Submit a patch!

> Some license modification in codes
> --
>
> Key: YARN-6584
> URL: https://issues.apache.org/jira/browse/YARN-6584
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Trivial
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-6584-001.patch
>
>







[jira] [Created] (YARN-6584) Some license modification in codes

2017-05-10, Yeliang Cang (JIRA)
Yeliang Cang created YARN-6584:
--

 Summary: Some license modification in codes
 Key: YARN-6584
 URL: https://issues.apache.org/jira/browse/YARN-6584
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0-alpha2
Reporter: Yeliang Cang
Assignee: Yeliang Cang
Priority: Trivial
 Fix For: 3.0.0-alpha2









[jira] [Commented] (YARN-5328) InMemoryPlan enhancements required to support recurring reservations in the YARN ReservationSystem

2017-05-10, Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16005841#comment-16005841
 ] 

Hadoop QA commented on YARN-5328:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 53s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 26 new + 275 unchanged - 16 fixed = 301 total (was 291) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
14s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 38m 39s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Suspicious comparison of Long references in 
org.apache.hadoop.yarn.server.resourcemanager.reservation.RLESparseResourceAllocation.getMinimumCapacityInInterval(ReservationInterval)
  At RLESparseResourceAllocation.java:in 
org.apache.hadoop.yarn.server.resourcemanager.reservation.RLESparseResourceAllocation.getMinimumCapacityInInterval(ReservationInterval)
  At RLESparseResourceAllocation.java:[line 581] |
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.reservation.TestNoOverCommitPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-5328 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867485/YARN-5328-v1.patch |
| 

[jira] [Comment Edited] (YARN-6566) add a property for a hadoop job to identified the full hive HQL script text in hadoop web view in multi-users environment

2017-05-10, liuzhenhua (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16004315#comment-16004315
 ] 

liuzhenhua edited comment on YARN-6566 at 5/11/17 3:37 AM:
---

[~wilfreds] Thanks for your advice. I think you are right, but sometimes these 
requirements really do exist. Could you add the feature and make it 
configurable for Hive HQL or Spark SQL on YARN? It would be convenient for a 
cluster administrator tuning other users' HQL. If not, we must add the feature 
to our cluster manually again every time we upgrade Hadoop.



was (Author: liberty1789):
[~wilfreds] Thanks for your advice. I think you are right, but sometimes these 
requirements really do exist. Could you add the feature and make it 
configurable for Hive HQL or Spark SQL on YARN? It would be convenient for a 
cluster administrator tuning other users' HQL. If not, we must add the feature 
to our cluster manually again every time we upgrade Hadoop.


> add a property for a hadoop job to identified the full hive HQL script text 
> in hadoop web view in multi-users environment
> -
>
> Key: YARN-6566
> URL: https://issues.apache.org/jira/browse/YARN-6566
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: client, resourcemanager, webapp, yarn
>Affects Versions: 2.6.0
> Environment: centos 6.4 64bit
>Reporter: liuzhenhua
>  Labels: patch
> Fix For: 2.6.0
>
> Attachments: application-page.bmp, applications-page.bmp, 
> YARN-6566.1.patch, YARN-6566.2.patch, YARN-6566.3.patch, YARN-6566.4.patch, 
> YARN-6566.5.patch, YARN-6566.6.patch, YARN-6566.7.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> When I tune Hive HQL in a multi-user environment, I cannot get the full SQL 
> text in the Hadoop web view, so it is difficult to tune the SQL. When I tried 
> to put the SQL text in the Hadoop job's jobname property, I realized it would 
> damage the structure of the Hadoop applications web view, so I added a 
> property named "jobdescription" to the Hadoop job. When a Hive HQL is 
> submitted, the full HQL text is assigned to that property, so I can identify 
> the HQL conveniently. 
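
A minimal sketch of the client side, assuming the "jobdescription" property 
named in this description (JobDescriptionExample and the sample HQL string are 
illustrative; the attached patches may wire this differently):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class JobDescriptionExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Attach the full HQL text to the job so the web view can identify it,
    // instead of overloading the jobname property.
    String hql = "SELECT dt, COUNT(*) FROM logs GROUP BY dt";
    conf.set("jobdescription", hql);
    System.out.println(conf.get("jobdescription"));
  }
}
{code}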






[jira] [Commented] (YARN-6583) Hadoop-sls failed to start because of premature state of RM

2017-05-10, ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16005829#comment-16005829
 ] 

ASF GitHub Bot commented on YARN-6583:
--

GitHub user scutojr opened a pull request:

https://github.com/apache/hadoop/pull/222

YARN-6583 Hadoop-sls failed to start because of premature state of RM



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/scutojr/hadoop sls

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/222.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #222


commit 70d996ae4cd482aacfa8cdc0a4330e4433911bc1
Author: Jayce Au 
Date:   2017-05-10T14:04:31Z

YARN-6583 Hadoop-sls failed to start because of premature state of RM




> Hadoop-sls failed to start because of premature state of RM
> ---
>
> Key: YARN-6583
> URL: https://issues.apache.org/jira/browse/YARN-6583
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler-load-simulator
>Affects Versions: 2.6.0
>Reporter: JayceAu
>  Labels: easyfix
>
> During startup of SLS, after startRM() in SLSRunner.start(), 
> BaseContainerTokenSecretManager has not yet generated its own internal key, 
> or the key is not yet visible to the other thread, so NM registration fails 
> with the following exception. Finally, the whole SLS process crashes.
> {noformat}
> Exception in thread "main" java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.security.BaseContainerTokenSecretManager.getCurrentKey(BaseContainerTokenSecretManager.java:81)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService.registerNodeManager(ResourceTrackerService.java:300)
> at 
> org.apache.hadoop.yarn.sls.nodemanager.NMSimulator.init(NMSimulator.java:105)
> at org.apache.hadoop.yarn.sls.SLSRunner.startNM(SLSRunner.java:202)
> at org.apache.hadoop.yarn.sls.SLSRunner.start(SLSRunner.java:143)
> at org.apache.hadoop.yarn.sls.SLSRunner.main(SLSRunner.java:528)
> 17/05/11 10:21:06 INFO resourcemanager.ResourceManager: Recovery started
> 17/05/11 10:21:06 INFO recovery.ZKRMStateStore: Watcher event type: None with 
> state:SyncConnected for path:null for Service 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore in state 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: STARTED
> {noformat}






[jira] [Created] (YARN-6583) Hadoop-sls failed to start because of premature state of RM

2017-05-10, JayceAu (JIRA)
JayceAu created YARN-6583:
-

 Summary: Hadoop-sls failed to start because of premature state of 
RM
 Key: YARN-6583
 URL: https://issues.apache.org/jira/browse/YARN-6583
 Project: Hadoop YARN
  Issue Type: Bug
  Components: scheduler-load-simulator
Affects Versions: 2.6.0
Reporter: JayceAu


During startup of SLS, after startRM() in SLSRunner.start(), 
BaseContainerTokenSecretManager has not yet generated its own internal key, or 
the key is not yet visible to the other thread, so NM registration fails with 
the following exception. Finally, the whole SLS process crashes.

{noformat}
Exception in thread "main" java.lang.NullPointerException
at 
org.apache.hadoop.yarn.server.security.BaseContainerTokenSecretManager.getCurrentKey(BaseContainerTokenSecretManager.java:81)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService.registerNodeManager(ResourceTrackerService.java:300)
at 
org.apache.hadoop.yarn.sls.nodemanager.NMSimulator.init(NMSimulator.java:105)
at org.apache.hadoop.yarn.sls.SLSRunner.startNM(SLSRunner.java:202)
at org.apache.hadoop.yarn.sls.SLSRunner.start(SLSRunner.java:143)
at org.apache.hadoop.yarn.sls.SLSRunner.main(SLSRunner.java:528)
17/05/11 10:21:06 INFO resourcemanager.ResourceManager: Recovery started
17/05/11 10:21:06 INFO recovery.ZKRMStateStore: Watcher event type: None with 
state:SyncConnected for path:null for Service 
org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore in state 
org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: STARTED
{noformat}
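
A minimal sketch of one possible fix, polling until the RM's secret manager 
has rolled its first master key before the simulated NMs register. 
waitForRMReady is a hypothetical fragment for SLSRunner (usual resourcemanager 
imports assumed); pull request #222 may take a different approach.

{code:java}
// Sketch: getCurrentKey() throws the NPE shown above until the first
// master key exists, so treat the NPE as "not ready yet" and retry.
private void waitForRMReady(ResourceManager rm) throws InterruptedException {
  while (true) {
    try {
      if (rm.getRMContext().getContainerTokenSecretManager()
            .getCurrentKey() != null) {
        return; // key generated; NM registration is now safe
      }
    } catch (NullPointerException notReadyYet) {
      // fall through and retry
    }
    Thread.sleep(100);
  }
}
{code}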






[jira] [Commented] (YARN-6280) Add a query parameter in ResourceManager Cluster Applications REST API to control whether or not returns ResourceRequest

2017-05-10, Lantao Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16005803#comment-16005803
 ] 

Lantao Jin commented on YARN-6280:
--

The unit tests that timed out all pass on my side:
{quote}
Running org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.137 sec - in 
org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands

Running 
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.708 sec - in 
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA

Running 
org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.601 sec - in 
org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA

Running org.apache.hadoop.yarn.server.resourcemanager.TestRMHAForNodeLabels
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.372 sec - in 
org.apache.hadoop.yarn.server.resourcemanager.TestRMHAForNodeLabels
{quote}

> Add a query parameter in ResourceManager Cluster Applications REST API to 
> control whether or not returns ResourceRequest
> 
>
> Key: YARN-6280
> URL: https://issues.apache.org/jira/browse/YARN-6280
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager, restapi
>Affects Versions: 2.7.3
>Reporter: Lantao Jin
>Assignee: Lantao Jin
> Attachments: YARN-6280.001.patch, YARN-6280.002.patch, 
> YARN-6280.003.patch, YARN-6280.004.patch, YARN-6280.005.patch, 
> YARN-6280.006.patch, YARN-6280.007.patch, YARN-6280.008.patch, 
> YARN-6280.009.patch
>
>
> Beginning with v2.7, the ResourceManager Cluster Applications REST API 
> returns the ResourceRequest list. It is a very large structure in AppInfo.
> As a test, we used the URI below to query only 2 results:
> http://<rm address:port>/ws/v1/cluster/apps?states=running,accepted&limit=2
> The results are very different:
> ||Hadoop version||Total Characters||Total Words||Total Lines||Size||
> |2.4.1|1192|42|42|1.2 KB|
> |2.7.1|1222179|48740|48735|1.21 MB|
> Most RESTful API requesters don't know about this after upgrading, and their 
> old queries may cost the ResourceManager more GC time and make it slower. 
> Even those who do know about it have no way to reduce the impact on the 
> ResourceManager other than slowing down their query frequency.
> The patch adds a query parameter, "showResourceRequests", to help requesters 
> who don't need this information reduce the overhead. For compatibility of 
> the interface, the default value is true if the parameter is not set, so the 
> behaviour is the same as now.
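
A minimal sketch of the default-true handling described above (AppsQueryParams 
and the method name are illustrative; the actual patch may differ):

{code:java}
public final class AppsQueryParams {
  /** Absent parameter => true, preserving pre-patch behaviour. */
  static boolean showResourceRequests(String queryParam) {
    return queryParam == null || Boolean.parseBoolean(queryParam);
  }
}
{code}

A request such as /ws/v1/cluster/apps?states=running&showResourceRequests=false 
would then skip serializing the heavy ResourceRequest list in each AppInfo.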






[jira] [Updated] (YARN-5328) InMemoryPlan enhancements required to support recurring reservations in the YARN ReservationSystem

2017-05-10, Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5328:
-
Attachment: YARN-5328-v1.patch

Attaching a first attempt at the patch.

> InMemoryPlan enhancements required to support recurring reservations in the 
> YARN ReservationSystem
> --
>
> Key: YARN-5328
> URL: https://issues.apache.org/jira/browse/YARN-5328
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-5328-v1.patch
>
>
> YARN-5326 proposes adding native support for recurring reservations in the 
> YARN ReservationSystem. This JIRA is a sub-task to track the changes required 
> in InMemoryPlan to accomplish it. Please refer to the design doc in the 
> parent JIRA for details.






[jira] [Commented] (YARN-6582) FSAppAttempt demand can be updated atomically in updateDemand()

2017-05-10, Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16005792#comment-16005792
 ] 

Hadoop QA commented on YARN-6582:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 41m 
12s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6582 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867475/YARN-6582.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux fc70daf4025e 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / eed7314 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15901/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15901/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> FSAppAttempt demand can be updated atomically in updateDemand()
> ---
>
> Key: YARN-6582
> URL: https://issues.apache.org/jira/browse/YARN-6582
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla

[jira] [Commented] (YARN-3981) offline collector: support timeline clients not associated with an application

2017-05-10, Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16005775#comment-16005775
 ] 

Vrushali C commented on YARN-3981:
--

Thanks for the design draft Rohith. I think I have some preliminary questions, 
more like discussion. 

- Do I understand it correctly that flow collectors will run on each node that 
runs an NM in the cluster? 
- How much traffic do we think might come in? Would it be similar to app table 
writes? If not, is there a possibility we can run this on head node of the 
cluster like where RM or NNs run? Not on the same node as RM but a node similar 
to RM, so that it's "outside" the cluster. We have fairly big sized clusters 
and having each node run a collector may not be optimal. 
- Aggregation, I think, is not relevant for a flow collector. Or do we want to 
support it? If not, we don't need to mention it under challenges; it is a 
non-issue.
- We surely want to think about optimizing connections to HBase.

Perhaps I will have more as I think over this further. 

> offline collector: support timeline clients not associated with an application
> --
>
> Key: YARN-3981
> URL: https://issues.apache.org/jira/browse/YARN-3981
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Rohith Sharma K S
>  Labels: YARN-5355
> Attachments: YARN-3981- offline-collector-draft.pdf
>
>
> In the current v.2 design, all timeline writes must belong in a 
> flow/application context (cluster + user + flow + flow run + application).
> But there are use cases that require writing data outside the context of an 
> application. One such example is a higher level client (e.g. tez client or 
> hive/oozie/cascading client) writing flow-level data that spans multiple 
> applications. We need to find a way to support them.






[jira] [Commented] (YARN-6571) Fix JavaDoc issues in SchedulingPolicy

2017-05-10, Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16005759#comment-16005759
 ] 

Weiwei Yang commented on YARN-6571:
---

Thank you [~templedf] for the help.

> Fix JavaDoc issues in SchedulingPolicy
> --
>
> Key: YARN-6571
> URL: https://issues.apache.org/jira/browse/YARN-6571
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Weiwei Yang
>Priority: Trivial
>  Labels: newbie
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: YARN-6571.001.patch, YARN-6571.002.patch, 
> YARN-6571.003.patch
>
>
> There are several javadoc issues:
> * Class JavaDoc is missing.
> * {{getInstance()}} is missing {{@return}} and {{@param}} tags.
> * {{parse()}} is missing {{@return}} tag and description for {{@throws}} tag.
> * {{checkIfUsageOverFairShare()}} is missing a period at the end of the first 
> sentence.
> * {{getHeadroom()}} should use {{@code}} instead of HTML code tags.
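
A hedged illustration of the kind of fixes the list asks for, on a stand-in 
class (ExamplePolicy is not the real SchedulingPolicy code):

{code:java}
/**
 * Example of the requested Javadoc style for a policy class.
 */
public abstract class ExamplePolicy {
  /**
   * Returns an instance of the given policy class.
   *
   * @param clazz the policy class to instantiate
   * @return an instance of the requested policy
   * @throws IllegalArgumentException if the class cannot be instantiated
   */
  public static ExamplePolicy getInstance(Class<? extends ExamplePolicy> clazz) {
    try {
      return clazz.newInstance();
    } catch (ReflectiveOperationException e) {
      throw new IllegalArgumentException("Cannot instantiate " + clazz, e);
    }
  }
}
{code}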






[jira] [Updated] (YARN-6582) FSAppAttempt demand can be updated atomically in updateDemand()

2017-05-10, Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-6582:
---
Attachment: YARN-6582.001.patch

Uploading a patch to improve this:
# Use a new variable tmpDemand to build the value of demand.
# The app's schedulerKeys are in a ConcurrentSkipList, so there is no need for 
a lock.
# Assign tmpDemand to demand in one go. This is safe: a previous read of 
demand may return a stale value, but that is expected, and no locks are held 
on demand.
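
A condensed sketch of the pattern (the real FSAppAttempt#updateDemand walks 
the scheduler keys; getSchedulerKeys and getPendingAsk are stand-ins here):

{code:java}
// Build the new value into a local, then publish it with a single write.
// Readers may briefly observe the previous demand, which is acceptable.
private volatile Resource demand = Resources.createResource(0);

void updateDemand() {
  Resource tmpDemand = Resources.createResource(0);
  for (SchedulerRequestKey key : getSchedulerKeys()) { // ConcurrentSkipList-backed
    Resources.addTo(tmpDemand, getPendingAsk(key));
  }
  demand = tmpDemand; // single reference assignment; no lock on demand needed
}
{code}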

> FSAppAttempt demand can be updated atomically in updateDemand()
> ---
>
> Key: YARN-6582
> URL: https://issues.apache.org/jira/browse/YARN-6582
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: YARN-6582.001.patch
>
>
> FSAppAttempt#updateDemand first sets demand to 0, and then adds up all the 
> outstanding requests. Instead, we could use another variable tmpDemand to 
> build the new value and atomically replace the demand.






[jira] [Commented] (YARN-6246) Identifying starved apps does not need the scheduler writelock

2017-05-10, Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16005727#comment-16005727
 ] 

Karthik Kambatla commented on YARN-6246:


Carefully looked through the locking. dumpStateInternal could race with the 
lastTimeAtMinShare value, but that shouldn't really be a problem. 

While investigating this, I realized FSAppAttempt#updateDemand (called with the 
scheduler writelock) can be improved. Filed YARN-6582 for it. 

> Identifying starved apps does not need the scheduler writelock
> --
>
> Key: YARN-6246
> URL: https://issues.apache.org/jira/browse/YARN-6246
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: YARN-6246.001.patch, YARN-6246.002.patch, 
> YARN-6246.003.patch
>
>
> Currently, the starvation checks are done holding the scheduler writelock. We 
> are probably better of doing this outside. 






[jira] [Created] (YARN-6582) FSAppAttempt demand can be updated atomically in updateDemand()

2017-05-10, Karthik Kambatla (JIRA)
Karthik Kambatla created YARN-6582:
--

 Summary: FSAppAttempt demand can be updated atomically in 
updateDemand()
 Key: YARN-6582
 URL: https://issues.apache.org/jira/browse/YARN-6582
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla


FSAppAttempt#updateDemand first sets demand to 0, and then adds up all the 
outstanding requests. Instead, we could use another variable tmpDemand to build 
the new value and atomically replace the demand.






[jira] [Commented] (YARN-6577) Useless interface and implementation class

2017-05-10, ZhangBing Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16005716#comment-16005716
 ] 

ZhangBing Lin commented on YARN-6577:
-

[~rohithsharma], can you please take a quick look?

> Useless interface and implementation class
> --
>
> Key: YARN-6577
> URL: https://issues.apache.org/jira/browse/YARN-6577
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.3, 3.0.0-alpha2
>Reporter: ZhangBing Lin
>Assignee: ZhangBing Lin
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-6577.001.patch
>
>
> As of 2.7.3 and 3.0.0-alpha2, the ContainerLocalization interface and the 
> ContainerLocalizationImpl implementation class are of no use; I recommend 
> removing this unused interface and its implementation class.






[jira] [Commented] (YARN-6486) FairScheduler: Deprecate continuous scheduling in 2.9

2017-05-10, Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16005704#comment-16005704
 ] 

Karthik Kambatla commented on YARN-6486:


The non-heartbeat-driven approach allows certain optimizations. However, in 
our experience, the current implementation doesn't scale well: it continuously 
simulates node heartbeats and exposes lock-contention issues in the scheduler. 
We have been recommending turning it off for large clusters.

Global scheduling is likely the better approach that is not driven by 
heartbeats.

I am open to approaches with less lock contention.

> FairScheduler: Deprecate continuous scheduling in 2.9
> -
>
> Key: YARN-6486
> URL: https://issues.apache.org/jira/browse/YARN-6486
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>
> Mark continuous scheduling as deprecated in 2.9 and remove the code in 3.0. 
> Removing continuous scheduling from the code will be tracked in a separate 
> JIRA.






[jira] [Commented] (YARN-6484) [Documentation] Documenting the YARN Federation feature

2017-05-10, Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16005679#comment-16005679
 ] 

Carlo Curino commented on YARN-6484:


Thanks [~botong] for the review. I have updated the patch to address your 
feedback. [~subru], can you check it out as well and confirm whether it is ok 
to commit to branch YARN-2915?

> [Documentation] Documenting the YARN Federation feature
> ---
>
> Key: YARN-6484
> URL: https://issues.apache.org/jira/browse/YARN-6484
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Subru Krishnan
>Assignee: Carlo Curino
> Attachments: YARN-6484-YARN-2915.v0.patch, 
> YARN-6484-YARN-2915.v1.patch, YARN-6484-YARN-2915.v2.patch
>
>
> We should document the high level design and configuration to enable YARN 
> Federation






[jira] [Updated] (YARN-6484) [Documentation] Documenting the YARN Federation feature

2017-05-10, Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-6484:
---
Attachment: YARN-6484-YARN-2915.v2.patch

> [Documentation] Documenting the YARN Federation feature
> ---
>
> Key: YARN-6484
> URL: https://issues.apache.org/jira/browse/YARN-6484
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Subru Krishnan
>Assignee: Carlo Curino
> Attachments: YARN-6484-YARN-2915.v0.patch, 
> YARN-6484-YARN-2915.v1.patch, YARN-6484-YARN-2915.v2.patch
>
>
> We should document the high level design and configuration to enable YARN 
> Federation






[jira] [Created] (YARN-6581) Function length of MonitoringThread#run() is too long

2017-05-10, Yufei Gu (JIRA)
Yufei Gu created YARN-6581:
--

 Summary: Function length of MonitoringThread#run() is too long 
 Key: YARN-6581
 URL: https://issues.apache.org/jira/browse/YARN-6581
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Reporter: Yufei Gu


It is almost 200 lines long, which makes it hard to read and maintain.






[jira] [Assigned] (YARN-6581) Function length of MonitoringThread#run() is too long

2017-05-10, Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu reassigned YARN-6581:
--

Assignee: Yufei Gu

> Function length of MonitoringThread#run() is too long 
> --
>
> Key: YARN-6581
> URL: https://issues.apache.org/jira/browse/YARN-6581
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>  Labels: newbie
>
> It is almost 200 lines long, which makes it hard to read and maintain.






[jira] [Commented] (YARN-6457) Allow custom SSL configuration to be supplied in WebApps

2017-05-10, Vlad Rozov (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16005616#comment-16005616
 ] 

Vlad Rozov commented on YARN-6457:
--

Never mind, I see it now. WebAppsUtils creates the Configuration with 
loadDefaults set to false, so it will not load settings from the default 
resources, including yarn-site.xml.
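
For context, a small demonstration of that Configuration behaviour 
(LoadDefaultsDemo is just an illustration of the standard Hadoop API):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class LoadDefaultsDemo {
  public static void main(String[] args) {
    // loadDefaults=false: no default resources (core-default.xml,
    // core-site.xml, ...) are read, so lookups return null.
    Configuration bare = new Configuration(false);
    System.out.println(bare.get("fs.defaultFS")); // null

    // The usual constructor loads the default resources.
    Configuration full = new Configuration();
    System.out.println(full.get("fs.defaultFS")); // e.g. file:///
  }
}
{code}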

> Allow custom SSL configuration to be supplied in WebApps
> 
>
> Key: YARN-6457
> URL: https://issues.apache.org/jira/browse/YARN-6457
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp, yarn
>Reporter: Sanjay M Pujare
>Assignee: Sanjay M Pujare
> Fix For: 2.9.0, 2.7.4, 2.8.1, 3.0.0-alpha3
>
> Attachments: YARN-6457.00.patch, YARN-6457.01.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> Currently a custom SSL store cannot be passed on to WebApps which forces the 
> embedded web-server to use the default keystore set up in ssl-server.xml for 
> the whole Hadoop cluster. There are cases where the Hadoop app needs to use 
> its own/custom keystore.






[jira] [Commented] (YARN-6380) FSAppAttempt keeps redundant copy of the queue

2017-05-10, Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16005592#comment-16005592
 ] 

Karthik Kambatla commented on YARN-6380:


I am torn between following the style guide in spirit vs letter. I would think 
spirit, but our scripts all check for letter. :) 

That said, the condition would be that much more complicated if split further. 
+1 on the latest patch. 

> FSAppAttempt keeps redundant copy of the queue
> --
>
> Key: YARN-6380
> URL: https://issues.apache.org/jira/browse/YARN-6380
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha2
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: YARN-6380.001.patch, YARN-6380.002.patch, 
> YARN-6380.003.patch, YARN-6380.004.patch, YARN-6380.005.patch
>
>
> The {{FSAppAttempt}} class defines its own {{fsQueue}} variable that is a 
> second copy of the {{SchedulerApplicationAttempt}}'s {{queue}} variable.  
> Aside from being redundant, it's also a bug, because when moving 
> applications, we only update the {{SchedulerApplicationAttempt}}'s {{queue}}, 
> not the {{FSAppAttempt}}'s {{fsQueue}}.






[jira] [Commented] (YARN-5949) Add pluggable configuration policy interface as a component of MutableCSConfigurationProvider

2017-05-10, Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16005586#comment-16005586
 ] 

Hadoop QA commented on YARN-5949:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
14s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
23s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
55s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
10s{color} | {color:green} YARN-5734 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
6s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in 
YARN-5734 has 2 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
8s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in YARN-5734 has 8 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} YARN-5734 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
30s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 44m 
26s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-5949 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867417/YARN-5949-YARN-5734.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 7eee1d2426af 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 
16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 

[jira] [Commented] (YARN-5531) UnmanagedAM pool manager for federating application across clusters

2017-05-10, Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16005569#comment-16005569
 ] 

Karthik Kambatla commented on YARN-5531:


Thanks for working on this, [~botong]. I took a close look at the new files and 
skimmed through the remaining. Comments:
# Is yarn-server the best place for these? In the future, don't we want other 
clients to use this UAMPool? If we do change it to a different package, we need 
to think about the Visibility and Stability annotations. 
# UnmanagedAMPoolManager:
## The create methods seem to expect an AppAttemptId from the user. Is that 
reasonable? Should it be the other way round, where we give the user the 
AppAttemptId for the newly created app?
## What are the benefits of using maps keyed by Strings passed by the user? 
Why not just use ApplicationAttemptId? The create methods could just return 
the app attempt.
## Nit: In serviceStart, when creating maps, no need to specify the types 
starting Java 7.
## In serviceStart and serviceStop, shouldn't we call the equivalent super 
methods right at the end? Otherwise, the state machine would transition the 
service to INITED or STOPPED even if it is not fully in that state.
## serviceStop
### I see the code tries to parallelize killing AMs. Is this necessary? How bad 
is sequential killing of apps? 
### Nit: ExecutionCompletionService doesn't need the type in the creation. 
### Why do we need the lock on the uamMap? 
### Nit: Style choice. Where possible, I like to avoid nesting. The isEmpty 
check is only for the logging; can we avoid nesting the for loop inside it?
### If we fail to kill the application, is catching the exception enough? Is 
there merit to retrying? Should we capture this state and throw an exception 
past this loop?
## createUAM should be annotated @VisibleForTesting
## Nit: allocateAsync: Don't see the need for variable uam. 
## finishAM
### Nit: Don't see the need for variable uam. 
### Don't we need to handle the case where the app is still registered? Retry?
# UnmanagedApplicationManager
## Should this class be called UnmanagedApplicationMaster?
## Constructor: Don't need to specify type when creating LinkedBlockingQueue
## UnmanagedAMLauncher
### It is not clear to me that this needs to be a separate inner class, beyond 
grouping the methods that create an AM.
### submitAndGetAppId doesn't seem to really get app id? 
### Why not use YarnClient? I understand this UAM pool is currently in 
yarn-server, but once we move this out, it should be easier. 
### Would it be possible to have a single monitor method?
### Isn't one second too long a wait in monitor* methods?
## UnmanagedAMIdentifier can be private, so can be its methods. 
## CallbackHandlerThread
### Can the combination of requestQueue and CallbackHandlerThread be achieved 
using a dispatcher? 
### Should this thread be named HeartbeatHandlerThread or 
AMRequestHandlerThread? The thread is processing requests.
### We seem to throw RuntimeExceptions. Should these be YarnExceptions instead?
### Since the thread can crash, it would be nicer to implement an 
UncaughtExceptionHandler for this thread (see the sketch after this list).
## finishApplicationMaster
### Can the two {{if (rmProxy == null)}} checks be merged into one?
### Should the {{rmProxy.finishApplicationMaster}} be in a loop? Or, is one 
check and re-register enough?
## allocateAsync
### Is it okay to ignore the InterruptedException?
### The warning on UAM not being launched/registered seems unnecessary.
### Should the {{rmProxy == null && registerRequest == null}} check be first 
before we even queue this request?
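
On the UncaughtExceptionHandler point above, a minimal sketch of the 
suggestion (processRequestQueue and LOG are hypothetical stand-ins for the 
real CallbackHandlerThread body):

{code:java}
// Surface crashes of the request-handling thread instead of losing them.
Thread handler = new Thread(() -> processRequestQueue());
handler.setName("AMRequestHandlerThread");
handler.setUncaughtExceptionHandler((t, e) ->
    LOG.error("Thread " + t.getName() + " died unexpectedly", e));
handler.start();
{code}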



> UnmanagedAM pool manager for federating application across clusters
> ---
>
> Key: YARN-5531
> URL: https://issues.apache.org/jira/browse/YARN-5531
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Botong Huang
> Attachments: YARN-5531-YARN-2915.v10.patch, 
> YARN-5531-YARN-2915.v1.patch, YARN-5531-YARN-2915.v2.patch, 
> YARN-5531-YARN-2915.v3.patch, YARN-5531-YARN-2915.v4.patch, 
> YARN-5531-YARN-2915.v5.patch, YARN-5531-YARN-2915.v6.patch, 
> YARN-5531-YARN-2915.v7.patch, YARN-5531-YARN-2915.v8.patch, 
> YARN-5531-YARN-2915.v9.patch
>
>
> One of the main tenets the YARN Federation is to *transparently* scale 
> applications across multiple clusters. This is achieved by running UAMs on 
> behalf of the application on other clusters. This JIRA tracks the addition of 
> a UnmanagedAM pool manager for federating application across clusters which 
> will be used the FederationInterceptor (YARN-3666) which is part of the 
> AMRMProxy pipeline introduced in YARN-2884.




[jira] [Commented] (YARN-6533) Race condition in writing service record to registry in yarn native services

2017-05-10, Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16005564#comment-16005564
 ] 

Hadoop QA commented on YARN-6533:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
22s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core:
 The patch generated 1 new + 145 unchanged - 1 fixed = 146 total (was 146) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
39s{color} | {color:green} hadoop-yarn-slider-core in the patch passed. {color} 
|
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6533 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867432/YARN-6533-yarn-native-services.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3c52071850fa 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 
16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / 3c9f707 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15899/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn-slider-core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15899/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 |
| Console output | 

[jira] [Commented] (YARN-6473) Create ReservationInvariantChecker to validate ReservationSystem + Scheduler operations

2017-05-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005566#comment-16005566
 ] 

Hudson commented on YARN-6473:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11720 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11720/])
YARN-6473. Create ReservationInvariantChecker to validate ReservationSystem + 
Scheduler operations (carlo curino: rev 
5cb6e3e082ed9edbdb7c46d27daa049a4712e82b)
* (add) 
hadoop-tools/hadoop-sls/src/test/java/org/apache/hadoop/yarn/sls/TestReservationSystemInvariants.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/invariants/ReservationInvariantsChecker.java


> Create ReservationInvariantChecker to validate ReservationSystem + Scheduler 
> operations
> ---
>
> Key: YARN-6473
> URL: https://issues.apache.org/jira/browse/YARN-6473
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-6473.v0.patch, YARN-6473.v1.patch, 
> YARN-6473.v2.patch
>
>
> This JIRA tracks an application of YARN-6451 ideas to the ReservationSystem. 
> It is particularly useful for creating integration tests, or for test 
> clusters, where we can continuously (and possibly expensively) check that the 
> ReservationSystem + Scheduler are operating as expected.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6571) Fix JavaDoc issues in SchedulingPolicy

2017-05-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005565#comment-16005565
 ] 

Hudson commented on YARN-6571:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11720 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11720/])
YARN-6571. Fix JavaDoc issues in SchedulingPolicy (Contributed by Weiwei Yang) 
(templedf: rev e7654c4a1f3599a2032a2d02186af12124c23f7d)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/SchedulingPolicy.java


> Fix JavaDoc issues in SchedulingPolicy
> --
>
> Key: YARN-6571
> URL: https://issues.apache.org/jira/browse/YARN-6571
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Weiwei Yang
>Priority: Trivial
>  Labels: newbie
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: YARN-6571.001.patch, YARN-6571.002.patch, 
> YARN-6571.003.patch
>
>
> There are several javadoc issues:
> * Class JavaDoc is missing.
> * {{getInstance()}} is missing {{@return}} and {{@param}} tags.
> * {{parse()}} is missing {{@return}} tag and description for {{@throws}} tag.
> * {{checkIfUsageOverFairShare()}} is missing a period at the end of the first 
> sentence.
> * {{getHeadroom()}} should use {code}{@code}{code} instead of {{}} tags.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6535) Program needs to exit when SLS finishes.

2017-05-10 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005544#comment-16005544
 ] 

Robert Kanter commented on YARN-6535:
-

LGTM +1

[~leftnoteasy], any other comments?

> Program needs to exit when SLS finishes. 
> -
>
> Key: YARN-6535
> URL: https://issues.apache.org/jira/browse/YARN-6535
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler-load-simulator
>Affects Versions: 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6535.001.patch, YARN-6535.002.patch, 
> YARN-6535.003.patch
>
>
> The program needs to exit when SLS finishes, except in unit tests.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6473) Create ReservationInvariantChecker to validate ReservationSystem + Scheduler operations

2017-05-10 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005526#comment-16005526
 ] 

Carlo Curino edited comment on YARN-6473 at 5/10/17 9:59 PM:
-

Thanks [~subru] for the review and +1. I committed this to trunk (the backport 
to branch-2 has lots of dependencies; we should discuss with [~leftnoteasy] 
whether it could/should be backported to branch-2).


was (Author: curino):
I committed this to trunk (the backport to branch-2 has lots of dependencies; 
we should discuss with [~leftnoteasy] whether it could/should be backported to 
branch-2).

> Create ReservationInvariantChecker to validate ReservationSystem + Scheduler 
> operations
> ---
>
> Key: YARN-6473
> URL: https://issues.apache.org/jira/browse/YARN-6473
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-6473.v0.patch, YARN-6473.v1.patch, 
> YARN-6473.v2.patch
>
>
> This JIRA tracks an application of YARN-6451 ideas to the ReservationSystem. 
> It is particularly useful for creating integration tests, or for test 
> clusters, where we can continuously (and possibly expensively) check that the 
> ReservationSystem + Scheduler are operating as expected.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6473) Create ReservationInvariantChecker to validate ReservationSystem + Scheduler operations

2017-05-10 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005526#comment-16005526
 ] 

Carlo Curino commented on YARN-6473:


I committed this to trunk (the backport to branch-2 has lots of dependencies; 
we should discuss with [~leftnoteasy] whether it could/should be backported to 
branch-2).

> Create ReservationInvariantChecker to validate ReservationSystem + Scheduler 
> operations
> ---
>
> Key: YARN-6473
> URL: https://issues.apache.org/jira/browse/YARN-6473
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-6473.v0.patch, YARN-6473.v1.patch, 
> YARN-6473.v2.patch
>
>
> This JIRA tracks an application of YARN-6451 ideas to the ReservationSystem. 
> It is particularly useful for creating integration tests, or for test 
> clusters, where we can continuously (and possibly expensively) check that the 
> ReservationSystem + Scheduler are operating as expected.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6409) RM does not blacklist node for AM launch failures

2017-05-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005507#comment-16005507
 ] 

Hadoop QA commented on YARN-6409:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 3 new + 156 unchanged - 0 fixed = 159 total (was 156) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 20s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | 
hadoop.yarn.server.resourcemanager.rmapp.attempt.TestRMAppAttemptTransitions |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867418/YARN-6409.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b154299c946a 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ad1e3e4 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15898/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/15898/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15898/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Commented] (YARN-6473) Create ReservationInvariantChecker to validate ReservationSystem + Scheduler operations

2017-05-10 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005490#comment-16005490
 ] 

Carlo Curino commented on YARN-6473:


Hi [~subru], the test failure is unrelated; it is due to rumen input format 
parsing (discussion ongoing in YARN-6111). I will commit to trunk/branch-2.

> Create ReservationInvariantChecker to validate ReservationSystem + Scheduler 
> operations
> ---
>
> Key: YARN-6473
> URL: https://issues.apache.org/jira/browse/YARN-6473
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-6473.v0.patch, YARN-6473.v1.patch, 
> YARN-6473.v2.patch
>
>
> This JIRA tracks an application of YARN-6451 ideas to the ReservationSystem. 
> It is particularly useful for creating integration tests, or for test 
> clusters, where we can continuously (and possibly expensively) check that the 
> ReservationSystem + Scheduler are operating as expected.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6111) Rumen input doesn't work in SLS

2017-05-10 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005483#comment-16005483
 ] 

Carlo Curino commented on YARN-6111:


I observed the same in YARN-6473. [~wangda], is anyone using the rumen format? 
Shall we omit it from the TestSLSRunner parameterized test? If it is not used 
by people (to the point that parsing is busted), we might just retire it.

> Rumen input doesn't work in SLS
> --
>
> Key: YARN-6111
> URL: https://issues.apache.org/jira/browse/YARN-6111
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Affects Versions: 2.6.0, 2.7.3, 3.0.0-alpha2
> Environment: ubuntu14.0.4 os
>Reporter: YuJie Huang
>  Labels: test
>
> Hi guys,
> I am trying to learn the use of SLS.
> I would like to get the file realtimetrack.json, but it only contains "[]" 
> at the end of a simulation. This is the command I use to run the instance:
> HADOOP_HOME $ bin/slsrun.sh --input-rumen=sample-data/2jobsmin-rumen-jh.json 
> --output-dir=sample-data 
> All other files, including metrics, appear to be properly populated. I can 
> also trace it on the web at http://localhost:10001/simulate
> Can someone help?
> Thanks



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6533) Race condition in writing service record to registry in yarn native services

2017-05-10 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-6533:
-
Attachment: YARN-6533-yarn-native-services.003.patch

> Race condition in writing service record to registry in yarn native services
> 
>
> Key: YARN-6533
> URL: https://issues.apache.org/jira/browse/YARN-6533
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Attachments: YARN-6533-yarn-native-services.001.patch, 
> YARN-6533-yarn-native-services.002.patch, 
> YARN-6533-yarn-native-services.003.patch
>
>
> The ServiceRecord is written twice, once when the container is initially 
> registered and again in the Docker provider once the IP has been obtained for 
> the container. These occur asynchronously, so the more important record (the 
> one with the IP) can be overwritten by the initial record. Only one record 
> needs to be written, so we can stop writing the initial record when the 
> Docker provider is being used.
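
The quoted description is the crux of the fix. A rough sketch of the 
single-write approach it describes, assuming the stock hadoop-registry 
{{bind}} API (the surrounding method and parameter names are illustrative 
only, not the actual provider code):
{code}
import java.io.IOException;

import org.apache.hadoop.registry.client.api.BindFlags;
import org.apache.hadoop.registry.client.api.RegistryOperations;
import org.apache.hadoop.registry.client.types.ServiceRecord;

final class SingleWriteSketch {
  static void registerContainer(boolean dockerProviderInUse,
      RegistryOperations registry, String recordPath, ServiceRecord record)
      throws IOException {
    if (dockerProviderInUse) {
      // The Docker provider writes the record itself once the container IP is
      // known; writing here would race with, and could clobber, that record.
      return;
    }
    // Non-Docker providers never learn an IP later, so the initial record is
    // the only one and can be written immediately.
    registry.bind(recordPath, record, BindFlags.OVERWRITE);
  }
}
{code}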



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6473) Create ReservationInvariantChecker to validate ReservationSystem + Scheduler operations

2017-05-10 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005467#comment-16005467
 ] 

Subru Krishnan commented on YARN-6473:
--

Thanks [~curino] for the patch, this is a nice start. +1 from my side, pending 
the Yetus warning fixes (test). 

> Create ReservationInvariantChecker to validate ReservationSystem + Scheduler 
> operations
> ---
>
> Key: YARN-6473
> URL: https://issues.apache.org/jira/browse/YARN-6473
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-6473.v0.patch, YARN-6473.v1.patch, 
> YARN-6473.v2.patch
>
>
> This JIRA tracks an application of YARN-6451 ideas to the ReservationSystem. 
> It is particularly useful for creating integration tests, or for test 
> clusters, where we can continuously (and possibly expensively) check that the 
> ReservationSystem + Scheduler are operating as expected.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6571) Fix JavaDoc issues in SchedulingPolicy

2017-05-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005403#comment-16005403
 ] 

Hadoop QA commented on YARN-6571:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 3 unchanged - 5 fixed = 3 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 874 unchanged - 3 fixed = 874 total (was 877) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 41m  5s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6571 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867401/YARN-6571.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ae8c5c37a71c 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ad1e3e4 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/15896/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15896/testReport/ |
| modules | C: 

[jira] [Commented] (YARN-6380) FSAppAttempt keeps redundant copy of the queue

2017-05-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005394#comment-16005394
 ] 

Hadoop QA commented on YARN-6380:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 52s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6380 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867380/YARN-6380.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e64b568f6a57 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ad1e3e4 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15895/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/15895/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15895/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Commented] (YARN-6457) Allow custom SSL configuration to be supplied in WebApps

2017-05-10 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005377#comment-16005377
 ] 

Haibo Chen commented on YARN-6457:
--

Yes, only configuration properties that are specified in ssl-server.xml and 
marked as final cannot be overridden.

> Allow custom SSL configuration to be supplied in WebApps
> 
>
> Key: YARN-6457
> URL: https://issues.apache.org/jira/browse/YARN-6457
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp, yarn
>Reporter: Sanjay M Pujare
>Assignee: Sanjay M Pujare
> Fix For: 2.9.0, 2.7.4, 2.8.1, 3.0.0-alpha3
>
> Attachments: YARN-6457.00.patch, YARN-6457.01.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> Currently a custom SSL store cannot be passed on to WebApps which forces the 
> embedded web-server to use the default keystore set up in ssl-server.xml for 
> the whole Hadoop cluster. There are cases where the Hadoop app needs to use 
> its own/custom keystore.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6409) RM does not blacklist node for AM launch failures

2017-05-10 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005365#comment-16005365
 ] 

Haibo Chen commented on YARN-6409:
--

Going three levels down so that the failures indicated in the above stack 
trace can be captured.
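
A minimal sketch of that cause-chain walk, as an illustration of the approach 
rather than the patch code itself:
{code}
import java.net.SocketTimeoutException;

final class LaunchFailureSketch {
  private static final int MAX_CAUSE_DEPTH = 3;

  /** True if a SocketTimeoutException appears within the first three levels. */
  static boolean isTimeoutFailure(Throwable failure) {
    Throwable t = failure;
    for (int depth = 0; t != null && depth < MAX_CAUSE_DEPTH; depth++) {
      if (t instanceof SocketTimeoutException) {
        return true;
      }
      t = t.getCause(); // walk one level down the cause chain
    }
    return false;
  }
}
{code}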

> RM does not blacklist node for AM launch failures
> -
>
> Key: YARN-6409
> URL: https://issues.apache.org/jira/browse/YARN-6409
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha2
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-6409.00.patch, YARN-6409.01.patch, 
> YARN-6409.02.patch
>
>
> Currently, node blacklisting upon AM failures only handles failures that 
> happen after AM container is launched (see 
> RMAppAttemptImpl.shouldCountTowardsNodeBlacklisting()).  However, AM launch 
> can also fail if the NM, where the AM container is allocated, goes 
> unresponsive.  Because it is not handled, scheduler may continue to allocate 
> AM containers on that same NM for the following app attempts. 
> {code}
> Application application_1478721503753_0870 failed 2 times due to Error 
> launching appattempt_1478721503753_0870_02. Got exception: 
> java.io.IOException: Failed on local exception: java.io.IOException: 
> java.net.SocketTimeoutException: 6 millis timeout while waiting for 
> channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
> local=/17.111.179.113:46702 remote=*.me.com/17.111.178.125:8041]; Host 
> Details : local host is: "*.me.com/17.111.179.113"; destination host is: 
> "*.me.com":8041; 
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772) 
> at org.apache.hadoop.ipc.Client.call(Client.java:1475) 
> at org.apache.hadoop.ipc.Client.call(Client.java:1408) 
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
>  
> at com.sun.proxy.$Proxy86.startContainers(Unknown Source) 
> at 
> org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
>  
> at sun.reflect.GeneratedMethodAccessor155.invoke(Unknown Source) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:497) 
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
>  
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
>  
> at com.sun.proxy.$Proxy87.startContainers(Unknown Source) 
> at 
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:120)
>  
> at 
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:256)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  
> at java.lang.Thread.run(Thread.java:745) 
> Caused by: java.io.IOException: java.net.SocketTimeoutException: 6 millis 
> timeout while waiting for channel to be ready for read. ch : 
> java.nio.channels.SocketChannel[connected local=/17.111.179.113:46702 
> remote=*.me.com/17.111.178.125:8041] 
> at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:687) 
> at java.security.AccessController.doPrivileged(Native Method) 
> at javax.security.auth.Subject.doAs(Subject.java:422) 
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>  
> at 
> org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:650)
>  
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:738) 
> at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375) 
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1524) 
> at org.apache.hadoop.ipc.Client.call(Client.java:1447) 
> ... 15 more 
> Caused by: java.net.SocketTimeoutException: 6 millis timeout while 
> waiting for channel to be ready for read. ch : 
> java.nio.channels.SocketChannel[connected local=/17.111.179.113:46702 
> remote=*.me.com/17.111.178.125:8041] 
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164) 
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
> at java.io.FilterInputStream.read(FilterInputStream.java:133) 
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
> at java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
> at java.io.DataInputStream.readInt(DataInputStream.java:387) 
> at 
> org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:367) 
> 

[jira] [Updated] (YARN-6409) RM does not blacklist node for AM launch failures

2017-05-10 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6409:
-
Attachment: YARN-6409.02.patch

Updated the patch to only blacklist a node upon AM launch failure due to 
SocketTimeoutException (up to three levels down in the cause chain)

> RM does not blacklist node for AM launch failures
> -
>
> Key: YARN-6409
> URL: https://issues.apache.org/jira/browse/YARN-6409
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha2
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-6409.00.patch, YARN-6409.01.patch, 
> YARN-6409.02.patch
>
>
> Currently, node blacklisting upon AM failures only handles failures that 
> happen after AM container is launched (see 
> RMAppAttemptImpl.shouldCountTowardsNodeBlacklisting()).  However, AM launch 
> can also fail if the NM, where the AM container is allocated, goes 
> unresponsive.  Because it is not handled, scheduler may continue to allocate 
> AM containers on that same NM for the following app attempts. 
> {code}
> Application application_1478721503753_0870 failed 2 times due to Error 
> launching appattempt_1478721503753_0870_02. Got exception: 
> java.io.IOException: Failed on local exception: java.io.IOException: 
> java.net.SocketTimeoutException: 6 millis timeout while waiting for 
> channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
> local=/17.111.179.113:46702 remote=*.me.com/17.111.178.125:8041]; Host 
> Details : local host is: "*.me.com/17.111.179.113"; destination host is: 
> "*.me.com":8041; 
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772) 
> at org.apache.hadoop.ipc.Client.call(Client.java:1475) 
> at org.apache.hadoop.ipc.Client.call(Client.java:1408) 
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
>  
> at com.sun.proxy.$Proxy86.startContainers(Unknown Source) 
> at 
> org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
>  
> at sun.reflect.GeneratedMethodAccessor155.invoke(Unknown Source) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:497) 
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
>  
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
>  
> at com.sun.proxy.$Proxy87.startContainers(Unknown Source) 
> at 
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:120)
>  
> at 
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:256)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  
> at java.lang.Thread.run(Thread.java:745) 
> Caused by: java.io.IOException: java.net.SocketTimeoutException: 6 millis 
> timeout while waiting for channel to be ready for read. ch : 
> java.nio.channels.SocketChannel[connected local=/17.111.179.113:46702 
> remote=*.me.com/17.111.178.125:8041] 
> at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:687) 
> at java.security.AccessController.doPrivileged(Native Method) 
> at javax.security.auth.Subject.doAs(Subject.java:422) 
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>  
> at 
> org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:650)
>  
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:738) 
> at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375) 
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1524) 
> at org.apache.hadoop.ipc.Client.call(Client.java:1447) 
> ... 15 more 
> Caused by: java.net.SocketTimeoutException: 6 millis timeout while 
> waiting for channel to be ready for read. ch : 
> java.nio.channels.SocketChannel[connected local=/17.111.179.113:46702 
> remote=*.me.com/17.111.178.125:8041] 
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164) 
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
> at java.io.FilterInputStream.read(FilterInputStream.java:133) 
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
> at java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
> at java.io.DataInputStream.readInt(DataInputStream.java:387) 
> at 
> 

[jira] [Commented] (YARN-5949) Add pluggable configuration policy interface as a component of MutableCSConfigurationProvider

2017-05-10 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005358#comment-16005358
 ] 

Jonathan Hung commented on YARN-5949:
-

Patch 004 fixes the checkstyle, javadoc, and whitespace issues, and the 
TestYarnConfigurationFields unit test.

> Add pluggable configuration policy interface as a component of 
> MutableCSConfigurationProvider
> -
>
> Key: YARN-5949
> URL: https://issues.apache.org/jira/browse/YARN-5949
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-5949-YARN-5734.001.patch, 
> YARN-5949-YARN-5734.002.patch, YARN-5949-YARN-5734.003.patch, 
> YARN-5949-YARN-5734.004.patch
>
>
> This will allow different policies to customize how/if configuration changes 
> should be applied (for example, a policy might restrict whether a 
> configuration change by a certain user is allowed). This will be enforced by 
> the MutableCSConfigurationProvider.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5949) Add pluggable configuration policy interface as a component of MutableCSConfigurationProvider

2017-05-10 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-5949:

Attachment: YARN-5949-YARN-5734.004.patch

> Add pluggable configuration policy interface as a component of 
> MutableCSConfigurationProvider
> -
>
> Key: YARN-5949
> URL: https://issues.apache.org/jira/browse/YARN-5949
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-5949-YARN-5734.001.patch, 
> YARN-5949-YARN-5734.002.patch, YARN-5949-YARN-5734.003.patch, 
> YARN-5949-YARN-5734.004.patch
>
>
> This will allow different policies to customize how/if configuration changes 
> should be applied (for example, a policy might restrict whether a 
> configuration change by a certain user is allowed). This will be enforced by 
> the MutableCSConfigurationProvider.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6533) Race condition in writing service record to registry in yarn native services

2017-05-10 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005346#comment-16005346
 ] 

Jian He commented on YARN-6533:
---

Yeah, sounds good to me, we can leave it unencoded. Otherwise the code will 
leave readers wondering why it is encoded in one place but not in another.

> Race condition in writing service record to registry in yarn native services
> 
>
> Key: YARN-6533
> URL: https://issues.apache.org/jira/browse/YARN-6533
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Attachments: YARN-6533-yarn-native-services.001.patch, 
> YARN-6533-yarn-native-services.002.patch
>
>
> The ServiceRecord is written twice, once when the container is initially 
> registered and again in the Docker provider once the IP has been obtained for 
> the container. These occur asynchronously, so the more important record (the 
> one with the IP) can be overwritten by the initial record. Only one record 
> needs to be written, so we can stop writing the initial record when the 
> Docker provider is being used.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6457) Allow custom SSL configuration to be supplied in WebApps

2017-05-10 Thread Vlad Rozov (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005307#comment-16005307
 ] 

Vlad Rozov commented on YARN-6457:
--

What if "ssl.server.truststore.location" is set in yarn-site.xml and is not 
specified in ssl-server.xml? Should the setting in yarn-site.xml still be in 
effect?

> Allow custom SSL configuration to be supplied in WebApps
> 
>
> Key: YARN-6457
> URL: https://issues.apache.org/jira/browse/YARN-6457
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp, yarn
>Reporter: Sanjay M Pujare
>Assignee: Sanjay M Pujare
> Fix For: 2.9.0, 2.7.4, 2.8.1, 3.0.0-alpha3
>
> Attachments: YARN-6457.00.patch, YARN-6457.01.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> Currently a custom SSL store cannot be passed on to WebApps which forces the 
> embedded web-server to use the default keystore set up in ssl-server.xml for 
> the whole Hadoop cluster. There are cases where the Hadoop app needs to use 
> its own/custom keystore.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6457) Allow custom SSL configuration to be supplied in WebApps

2017-05-10 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005297#comment-16005297
 ] 

Haibo Chen commented on YARN-6457:
--

I think yes. Prior to this patch, all SSL configuration parameters needed to 
be specified in ssl-server.xml; that is, ssl-server configurations were 
effectively final. There may be cluster setups that rely on that fact. 
Allowing custom SSL configuration should not alter that behavior.
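
For readers following the precedence discussion, a small sketch of how 
resource ordering behaves with Hadoop's {{Configuration}} class: later 
resources override earlier ones, unless a property is marked {{final}} in the 
earlier file.
{code}
import org.apache.hadoop.conf.Configuration;

public class SslConfPrecedenceSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    // Resources are applied in order: a property set in yarn-site.xml is
    // overridden by the same property in ssl-server.xml, unless its earlier
    // definition carries <final>true</final>.
    conf.addResource("yarn-site.xml");
    conf.addResource("ssl-server.xml");
    System.out.println(conf.get("ssl.server.truststore.location"));
  }
}
{code}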

> Allow custom SSL configuration to be supplied in WebApps
> 
>
> Key: YARN-6457
> URL: https://issues.apache.org/jira/browse/YARN-6457
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp, yarn
>Reporter: Sanjay M Pujare
>Assignee: Sanjay M Pujare
> Fix For: 2.9.0, 2.7.4, 2.8.1, 3.0.0-alpha3
>
> Attachments: YARN-6457.00.patch, YARN-6457.01.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> Currently a custom SSL store cannot be passed on to WebApps which forces the 
> embedded web-server to use the default keystore set up in ssl-server.xml for 
> the whole Hadoop cluster. There are cases where the Hadoop app needs to use 
> its own/custom keystore.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6580) Incorrect LOG for FairSharePolicy

2017-05-10 Thread Yufei Gu (JIRA)
Yufei Gu created YARN-6580:
--

 Summary: Incorrect LOG for FairSharePolicy
 Key: YARN-6580
 URL: https://issues.apache.org/jira/browse/YARN-6580
 Project: Hadoop YARN
  Issue Type: Bug
  Components: fairscheduler
Affects Versions: 3.0.0-alpha2, 2.8.0
Reporter: Yufei Gu
Priority: Minor


{code}
public class FairSharePolicy extends SchedulingPolicy {
  private static final Log LOG = LogFactory.getLog(FifoPolicy.class);
{code}
should be {{LogFactory.getLog(FairSharePolicy.class);}}
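
For clarity, the corrected declaration simply names the enclosing class (using 
the same commons-logging {{Log}}/{{LogFactory}} the class already imports):
{code}
public class FairSharePolicy extends SchedulingPolicy {
  // Log against FairSharePolicy itself, not FifoPolicy, so that log output is
  // attributed to the correct class.
  private static final Log LOG = LogFactory.getLog(FairSharePolicy.class);
{code}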



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6571) Fix JavaDoc issues in SchedulingPolicy

2017-05-10 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005205#comment-16005205
 ] 

Daniel Templeton edited comment on YARN-6571 at 5/10/17 6:50 PM:
-

LGTM.  +1 on the latest patch, pending a clean bill of health from Jenkins.


was (Author: templedf):
LGTM.  +1 on the latest patch.

> Fix JavaDoc issues in SchedulingPolicy
> --
>
> Key: YARN-6571
> URL: https://issues.apache.org/jira/browse/YARN-6571
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Weiwei Yang
>Priority: Trivial
>  Labels: newbie
> Attachments: YARN-6571.001.patch, YARN-6571.002.patch, 
> YARN-6571.003.patch
>
>
> There are several javadoc issues:
> * Class JavaDoc is missing.
> * {{getInstance()}} is missing {{@return}} and {{@param}} tags.
> * {{parse()}} is missing {{@return}} tag and description for {{@throws}} tag.
> * {{checkIfUsageOverFairShare()}} is missing a period at the end of the first 
> sentence.
> * {{getHeadroom()}} should use {code}{@code}{code} instead of {{}} tags.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6571) Fix JavaDoc issues in SchedulingPolicy

2017-05-10 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005205#comment-16005205
 ] 

Daniel Templeton commented on YARN-6571:


LGTM.  +1 on the latest patch.

> Fix JavaDoc issues in SchedulingPolicy
> --
>
> Key: YARN-6571
> URL: https://issues.apache.org/jira/browse/YARN-6571
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Weiwei Yang
>Priority: Trivial
>  Labels: newbie
> Attachments: YARN-6571.001.patch, YARN-6571.002.patch, 
> YARN-6571.003.patch
>
>
> There are several javadoc issues:
> * Class JavaDoc is missing.
> * {{getInstance()}} is missing {{@return}} and {{@param}} tags.
> * {{parse()}} is missing {{@return}} tag and description for {{@throws}} tag.
> * {{checkIfUsageOverFairShare()}} is missing a period at the end of the first 
> sentence.
> * {{getHeadroom()}} should use {code}{@code}{code} instead of {{}} tags.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6457) Allow custom SSL configuration to be supplied in WebApps

2017-05-10 Thread Vlad Rozov (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005201#comment-16005201
 ] 

Vlad Rozov commented on YARN-6457:
--

Another question: as far as I can see, all SSL configuration parameters may be 
set in yarn-site.xml or any other configuration file that YARN reads by 
default, and then be overwritten (if not declared as final) by settings in 
ssl-server.xml. Is that the desired behavior?

> Allow custom SSL configuration to be supplied in WebApps
> 
>
> Key: YARN-6457
> URL: https://issues.apache.org/jira/browse/YARN-6457
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp, yarn
>Reporter: Sanjay M Pujare
>Assignee: Sanjay M Pujare
> Fix For: 2.9.0, 2.7.4, 2.8.1, 3.0.0-alpha3
>
> Attachments: YARN-6457.00.patch, YARN-6457.01.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> Currently a custom SSL store cannot be passed on to WebApps which forces the 
> embedded web-server to use the default keystore set up in ssl-server.xml for 
> the whole Hadoop cluster. There are cases where the Hadoop app needs to use 
> its own/custom keystore.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6571) Fix JavaDoc issues in SchedulingPolicy

2017-05-10 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005197#comment-16005197
 ] 

Weiwei Yang commented on YARN-6571:
---

Hi [~templedf]

Thank you so much for the detailed note, this is good. I used to write just a 
simple sentence or two for class docs, but one like this is really helpful. I 
only modified it a bit (mostly formatting) and uploaded it in the v3 patch. 
Again, thank you for providing your ideas and thoughtful explanation. I 
appreciate it.

> Fix JavaDoc issues in SchedulingPolicy
> --
>
> Key: YARN-6571
> URL: https://issues.apache.org/jira/browse/YARN-6571
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Weiwei Yang
>Priority: Trivial
>  Labels: newbie
> Attachments: YARN-6571.001.patch, YARN-6571.002.patch, 
> YARN-6571.003.patch
>
>
> There are several javadoc issues:
> * Class JavaDoc is missing.
> * {{getInstance()}} is missing {{@return}} and {{@param}} tags.
> * {{parse()}} is missing {{@return}} tag and description for {{@throws}} tag.
> * {{checkIfUsageOverFairShare()}} is missing a period at the end of the first 
> sentence.
> * {{getHeadroom()}} should use {code}{@code}{code} instead of {{}} tags.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6571) Fix JavaDoc issues in SchedulingPolicy

2017-05-10 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-6571:
--
Attachment: YARN-6571.003.patch

> Fix JavaDoc issues in SchedulingPolicy
> --
>
> Key: YARN-6571
> URL: https://issues.apache.org/jira/browse/YARN-6571
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Weiwei Yang
>Priority: Trivial
>  Labels: newbie
> Attachments: YARN-6571.001.patch, YARN-6571.002.patch, 
> YARN-6571.003.patch
>
>
> There are several javadoc issues:
> * Class JavaDoc is missing.
> * {{getInstance()}} is missing {{@return}} and {{@param}} tags.
> * {{parse()}} is missing {{@return}} tag and description for {{@throws}} tag.
> * {{checkIfUsageOverFairShare()}} is missing a period at the end of the first 
> sentence.
> * {{getHeadroom()}} should use {code}{@code}{code} instead of {{<code>}} tags.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6484) [Documentation] Documenting the YARN Federation feature

2017-05-10 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005146#comment-16005146
 ] 

Botong Huang edited comment on YARN-6484 at 5/10/17 6:32 PM:
-

Thanks [~curino] for the patch. The description looks great to me. 

Here are some minor things I found; please kindly fix: 
* number of active applications -> number of active applications, *number of 
active containers* ?
* (10-100k) nodes -> (10-100k nodes)
* One more enter before "This design is structurally"
* Federation is being design as -> (one more enter) Federation is *designed* as
* on any nodes cluster -> on any nodes in the large cluster
* sub-clusters (this -> sub-cluster (this 
* Furthermore separate ... : this sentence is repeated in the *Note* 
afterwards, consider delete?
* sub-cluster a job -> sub-clusters a job
* At any one time, a job ...: consider move this paragraph into AMRMProxy? 
* all cluster operations, : remove extra enter afterwards
* policies run -> policies that run
* also modified -> also modified by the NM when launching the AM
* both clusters -> relevant sub-clusters
* allocations is done -> allocations are done
* Whether failover across sub-clusters -> Whether to retry considering RM 
failover within each sub-cluster
* yarn.resourcemanager.cluster-id: this is also required in all NMs
* If this optional address is: format (extra spaces) after this
* Recent analysis of failure modes suggests that we should also maintain an 
explicit mapping between the notion of an “external App id” and the “internal 
App id”. This would allow us to hide some class of local failures (e.g., one RM 
is not reachable and we need to resubmit with a new app id) --- We 
didn't implement this part; for this case, we will likely use a new attempt 
number with the same app id. 


was (Author: botong):
Thanks [~curino] for the patch. The description looks great to me. 

Here are some minor things I found; please kindly fix: 
* number of active applications -> number of active applications, *number of 
active containers* ?
* (10-100k) nodes -> (10-100k nodes)
* One more enter before "This design is structurally"
* Federation is being design as -> (one more enter) Federation is *designed* as
* on any nodes cluster -> on any nodes in the large cluster
* sub-clusters (this -> sub-cluster (this 
* Furthermore separate ... : this sentence is repeated in the *Note* 
afterwards, consider delete?
* sub-cluster a job -> sub-clusters a job
* At any one time, a job ...: consider move this paragraph into AMRMProxy? 
* all cluster operations, : remove extra enter afterwards
* policies run -> policies that run
* also modified -> also modified by the NM when launching the AM
* both clusters -> relevant sub-clusters
* allocations is done -> allocations are done
* Whether failover across sub-clusters -> Whether failover within each sub-cluster
* yarn.resourcemanager.cluster-id: this is also required in all NMs
* If this optional address is: format (extra spaces) after this
* Recent analysis of failure modes suggests that we should also maintain an 
explicit mapping between the notion of an “external App id” and the “internal 
App id”. This would allow us to hide some class of local failures (e.g., one RM 
is not reachable and we need to resubmit with a new app id) --- We 
didn't implement this part; for this case, we will likely use a new attempt 
number with the same app id. 

> [Documentation] Documenting the YARN Federation feature
> ---
>
> Key: YARN-6484
> URL: https://issues.apache.org/jira/browse/YARN-6484
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Subru Krishnan
>Assignee: Carlo Curino
> Attachments: YARN-6484-YARN-2915.v0.patch, 
> YARN-6484-YARN-2915.v1.patch
>
>
> We should document the high level design and configuration to enable YARN 
> Federation



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6475) Fix some long function checkstyle issues

2017-05-10 Thread Soumabrata Chakraborty (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005152#comment-16005152
 ] 

Soumabrata Chakraborty commented on YARN-6475:
--

Thanks [~templedf] and [~miklos.szeg...@cloudera.com]
I have closed the PR manually.

> Fix some long function checkstyle issues
> 
>
> Key: YARN-6475
> URL: https://issues.apache.org/jira/browse/YARN-6475
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Soumabrata Chakraborty
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6475.001.patch
>
>
> I am talking about these two:
> {code}
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java:441:
>   @Override:3: Method length is 176 lines (max allowed is 150). [MethodLength]
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java:159:
>   @Override:3: Method length is 158 lines (max allowed is 150). [MethodLength]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6475) Fix some long function checkstyle issues

2017-05-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005151#comment-16005151
 ] 

ASF GitHub Bot commented on YARN-6475:
--

Github user soumabrata-chakraborty closed the pull request at:

https://github.com/apache/hadoop/pull/218


> Fix some long function checkstyle issues
> 
>
> Key: YARN-6475
> URL: https://issues.apache.org/jira/browse/YARN-6475
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Soumabrata Chakraborty
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6475.001.patch
>
>
> I am talking about these two:
> {code}
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java:441:
>   @Override:3: Method length is 176 lines (max allowed is 150). [MethodLength]
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java:159:
>   @Override:3: Method length is 158 lines (max allowed is 150). [MethodLength]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6484) [Documentation] Documenting the YARN Federation feature

2017-05-10 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005146#comment-16005146
 ] 

Botong Huang commented on YARN-6484:


Thanks [~curino] for the patch. The description looks great to me. 

Here are some minor things I found; please kindly fix: 
* number of active applications -> number of active applications, *number of 
active containers* ?
* (10-100k) nodes -> (10-100k nodes)
* One more enter before "This design is structurally"
* Federation is being design as -> (one more enter) Federation is *designed* as
* on any nodes cluster -> on any nodes in the large cluster
* sub-clusters (this -> sub-cluster (this 
* Furthermore separate ... : this sentence is repeated in the *Note* 
afterwards, consider delete?
* sub-cluster a job -> sub-clusters a job
* At any one time, a job ...: consider move this paragraph into AMRMProxy? 
* all cluster operations, : remove extra enter afterwards
* policies run -> policies that run
* also modified -> also modified by the NM when launching the AM
* both clusters -> relevant sub-clusters
* allocations is done -> allocations are done
* Whether failover across sub-clusters -> Whether failover within each sub-cluster
* yarn.resourcemanager.cluster-id: this is also required in all NMs
* If this optional address is: format (extra spaces) after this
* Recent analysis of failure modes suggests that we should also maintain an 
explicit mapping between the notion of an “external App id” and the “internal 
App id”. This would allow us to hide some class of local failures (e.g., one RM 
is not reachable and we need to resubmit with a new app id) --- We 
didn't implement this part; for this case, we will likely use a new attempt 
number with the same app id. 

> [Documentation] Documenting the YARN Federation feature
> ---
>
> Key: YARN-6484
> URL: https://issues.apache.org/jira/browse/YARN-6484
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Subru Krishnan
>Assignee: Carlo Curino
> Attachments: YARN-6484-YARN-2915.v0.patch, 
> YARN-6484-YARN-2915.v1.patch
>
>
> We should document the high level design and configuration to enable YARN 
> Federation



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6475) Fix some long function checkstyle issues

2017-05-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005150#comment-16005150
 ] 

ASF GitHub Bot commented on YARN-6475:
--

Github user soumabrata-chakraborty commented on the issue:

https://github.com/apache/hadoop/pull/218
  
@templedf committed to trunk 

Message:
SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11717 (See 
https://builds.apache.org/job/Hadoop-trunk-Commit/11717/)

Closing PR


> Fix some long function checkstyle issues
> 
>
> Key: YARN-6475
> URL: https://issues.apache.org/jira/browse/YARN-6475
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Soumabrata Chakraborty
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6475.001.patch
>
>
> I am talking about these two:
> {code}
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java:441:
>   @Override:3: Method length is 176 lines (max allowed is 150). [MethodLength]
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java:159:
>   @Override:3: Method length is 158 lines (max allowed is 150). [MethodLength]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6475) Fix some long function checkstyle issues

2017-05-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005132#comment-16005132
 ] 

Hudson commented on YARN-6475:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11717 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11717/])
YARN-6475. Fix some long function checkstyle issues (Contributed by Soumabrata 
Chakraborty) (templedf: rev 74a61438ca01e2191b54000af73b654a2d0b8253)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java


> Fix some long function checkstyle issues
> 
>
> Key: YARN-6475
> URL: https://issues.apache.org/jira/browse/YARN-6475
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Soumabrata Chakraborty
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6475.001.patch
>
>
> I am talking about these two:
> {code}
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java:441:
>   @Override:3: Method length is 176 lines (max allowed is 150). [MethodLength]
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java:159:
>   @Override:3: Method length is 158 lines (max allowed is 150). [MethodLength]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (YARN-6475) Fix some long function checkstyle issues

2017-05-10 Thread Soumabrata Chakraborty (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Soumabrata Chakraborty updated YARN-6475:
-
Comment: was deleted

(was: Hi [~templedf],  [~miklos.szeg...@cloudera.com]

With reference to the auto-generated comments from Hadoop QA above:
1. There are no new or modified tests since the patch does not change current 
behavior of the code in any way -- it just refactors code and makes it more 
readable.  In fact, we rely on the existing tests passing to ensure that the 
refactoring has not broken any functionality.
2. The extant findbugs warnings were not the focus of my patch.  I can give them 
a shot if you feel they should be handled together with this JIRA.

Please advise)

> Fix some long function checkstyle issues
> 
>
> Key: YARN-6475
> URL: https://issues.apache.org/jira/browse/YARN-6475
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Soumabrata Chakraborty
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6475.001.patch
>
>
> I am talking about these two:
> {code}
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java:441:
>   @Override:3: Method length is 176 lines (max allowed is 150). [MethodLength]
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java:159:
>   @Override:3: Method length is 158 lines (max allowed is 150). [MethodLength]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6475) Fix some long function checkstyle issues

2017-05-10 Thread Soumabrata Chakraborty (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005128#comment-16005128
 ] 

Soumabrata Chakraborty commented on YARN-6475:
--

Hi [~templedf],  [~miklos.szeg...@cloudera.com]

With reference to the auto-generated comments from Hadoop QA above:
1. There are no new or modified tests since the patch does not change current 
behavior of the code in any way -- it just refactors code and makes it more 
readable.  In fact, we rely on the existing tests passing to ensure that the 
refactoring has not broken any functionality.
2. The extant findbugs warnings were not the focus of my patch.  I can give them 
a shot if you feel they should be handled together with this JIRA.

Please advise
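
For context, a generic before/after sketch of the kind of refactoring the patch 
performs (hypothetical names, not the actual YARN code):

{code}
// Splitting one long method into focused helpers keeps each under
// checkstyle's 150-line MethodLength limit without changing behavior.
public class LaunchContainerExample {
  public int launchContainer() {
    prepareEnvironment();
    writeLaunchScript();
    return executeAndWait();
  }

  private void prepareEnvironment() { /* ~50 lines in the real code */ }
  private void writeLaunchScript()  { /* ~50 lines in the real code */ }
  private int executeAndWait()      { return 0; /* ~50 lines in the real code */ }
}
{code}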

> Fix some long function checkstyle issues
> 
>
> Key: YARN-6475
> URL: https://issues.apache.org/jira/browse/YARN-6475
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Soumabrata Chakraborty
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6475.001.patch
>
>
> I am talking about these two:
> {code}
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java:441:
>   @Override:3: Method length is 176 lines (max allowed is 150). [MethodLength]
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java:159:
>   @Override:3: Method length is 158 lines (max allowed is 150). [MethodLength]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3545) Investigate the concurrency issue with the map of timeline collector

2017-05-10 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005125#comment-16005125
 ] 

Haibo Chen commented on YARN-3545:
--

Thanks, Vrushali. Given that the access pattern on TimelineCollectorManager 
leads to little contention, I agree with you guys that this is a low-priority 
one.

> Investigate the concurrency issue with the map of timeline collector
> 
>
> Key: YARN-3545
> URL: https://issues.apache.org/jira/browse/YARN-3545
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhijie Shen
>Assignee: Li Lu
>  Labels: YARN-5355, oct16-medium
> Attachments: YARN-3545-YARN-2928.000.patch
>
>
> See the discussion in YARN-3390 for details. Let's continue the discussion 
> here.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6457) Allow custom SSL configuration to be supplied in WebApps

2017-05-10 Thread Vlad Rozov (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005108#comment-16005108
 ] 

Vlad Rozov commented on YARN-6457:
--

[~haibochen] Please see YARN-6579.

> Allow custom SSL configuration to be supplied in WebApps
> 
>
> Key: YARN-6457
> URL: https://issues.apache.org/jira/browse/YARN-6457
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp, yarn
>Reporter: Sanjay M Pujare
>Assignee: Sanjay M Pujare
> Fix For: 2.9.0, 2.7.4, 2.8.1, 3.0.0-alpha3
>
> Attachments: YARN-6457.00.patch, YARN-6457.01.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> Currently a custom SSL store cannot be passed on to WebApps which forces the 
> embedded web-server to use the default keystore set up in ssl-server.xml for 
> the whole Hadoop cluster. There are cases where the Hadoop app needs to use 
> its own/custom keystore.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6579) Yarn and HDFS configuration should use SSLFactory.class

2017-05-10 Thread Vlad Rozov (JIRA)
Vlad Rozov created YARN-6579:


 Summary: Yarn and HDFS configuration should use SSLFactory.class
 Key: YARN-6579
 URL: https://issues.apache.org/jira/browse/YARN-6579
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: security, webapp, yarn
Reporter: Vlad Rozov


There are multiple configuration parameters provided by SSLFactory that are 
duplicated by YarnConfiguration and other classes. It would be good to unify how 
Hadoop SSL endpoints are configured by using SSLFactory.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6571) Fix JavaDoc issues in SchedulingPolicy

2017-05-10 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005082#comment-16005082
 ] 

Daniel Templeton commented on YARN-6571:


Alrighty then! Let's work on the text.  The {{SchedulingPolicy}} is used by the 
fair scheduler mainly to determine what a queue's fair share and steady fair 
share should be, as well as to calculate available headroom.  Every queue has 
one, including parents and children.  The policy for a child queue must be 
compatible with the policy of the parent queue; there are some combinations 
that aren't allowed.  See {{isChildPolicyAllowed()}}.  The policy for a queue 
is specified by setting {{schedulingPolicy}} in the fair scheduler 
configuration file.  If a child queue doesn't specify a policy, it inherits the 
parent's policy.  The default policy is {{FairSharePolicy}}.
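
A minimal sketch of the behavior described above, assuming the 
{{SchedulingPolicy}} API as it stands on trunk (illustrative only, not part of 
the patch):

{code}
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationConfigurationException;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.SchedulingPolicy;

public class PolicyCompatibilityCheck {
  public static void main(String[] args) throws AllocationConfigurationException {
    // Policies are resolved by name ("fair", "fifo", "drf"), matching the
    // schedulingPolicy setting in the fair scheduler configuration file.
    SchedulingPolicy parent = SchedulingPolicy.parse("drf");
    SchedulingPolicy child  = SchedulingPolicy.parse("fair");

    // The compatibility rule mentioned above: a DRF parent may have a
    // fair-share child, but a fair-share parent may not have a DRF child.
    System.out.println(parent.isChildPolicyAllowed(child));  // true
  }
}
{code}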

> Fix JavaDoc issues in SchedulingPolicy
> --
>
> Key: YARN-6571
> URL: https://issues.apache.org/jira/browse/YARN-6571
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Weiwei Yang
>Priority: Trivial
>  Labels: newbie
> Attachments: YARN-6571.001.patch, YARN-6571.002.patch
>
>
> There are several javadoc issues:
> * Class JavaDoc is missing.
> * {{getInstance()}} is missing {{@return}} and {{@param}} tags.
> * {{parse()}} is missing {{@return}} tag and description for {{@throws}} tag.
> * {{checkIfUsageOverFairShare()}} is missing a period at the end of the first 
> sentence.
> * {{getHeadroom()}} should use {code}{@code}{code} instead of {{<code>}} tags.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6486) FairScheduler: Deprecate continuous scheduling in 2.9

2017-05-10 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005078#comment-16005078
 ] 

Arun Suresh commented on YARN-6486:
---

Any reason why you would want to deprecate this?

I think we should unify Async Scheduling (from the Capacity Scheduler) and 
Continuous Scheduling, since they look pretty similar. At least the triggering 
thread can be made a common RM service.

There are some interesting scheduling performance improvements that can be done 
using a non-HB-driven approach, coupled with the time-based locality relaxation 
that I've been experimenting with, both of which are offered by the Fair 
Scheduler. It would be sad to see this deprecated as a feature.

Thoughts? (cc: [~ka...@cloudera.com], [~subru], [~leftnoteasy], [~curino])

> FairScheduler: Deprecate continuous scheduling in 2.9
> -
>
> Key: YARN-6486
> URL: https://issues.apache.org/jira/browse/YARN-6486
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>
> Mark continuous scheduling as deprecated in 2.9 and remove the code in 3.0. 
> Removing continuous scheduling from the code will be logged as a separate jira



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6475) Fix some long function checkstyle issues

2017-05-10 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005064#comment-16005064
 ] 

Daniel Templeton commented on YARN-6475:


LGTM +1

> Fix some long function checkstyle issues
> 
>
> Key: YARN-6475
> URL: https://issues.apache.org/jira/browse/YARN-6475
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Soumabrata Chakraborty
>Priority: Trivial
>  Labels: newbie
> Attachments: YARN-6475.001.patch
>
>
> I am talking about these two:
> {code}
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java:441:
>   @Override:3: Method length is 176 lines (max allowed is 150). [MethodLength]
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java:159:
>   @Override:3: Method length is 158 lines (max allowed is 150). [MethodLength]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6571) Fix JavaDoc issues in SchedulingPolicy

2017-05-10 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005062#comment-16005062
 ] 

Weiwei Yang edited comment on YARN-6571 at 5/10/17 5:21 PM:


Hi [~templedf]

I appreciate that, but I am not new to Yarn. Do you think the doc I added for 
this class is not appropriate? I can do a better job than that if you let me 
know why. Thank you.


was (Author: cheersyang):
Hi [~templedf]

I am not new to Yarn. Do you think the doc I added for this class is not 
appropriate? I can do a better job than that if you let me know why. Thank you.

> Fix JavaDoc issues in SchedulingPolicy
> --
>
> Key: YARN-6571
> URL: https://issues.apache.org/jira/browse/YARN-6571
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Weiwei Yang
>Priority: Trivial
>  Labels: newbie
> Attachments: YARN-6571.001.patch, YARN-6571.002.patch
>
>
> There are several javadoc issues:
> * Class JavaDoc is missing.
> * {{getInstance()}} is missing {{@return}} and {{@param}} tags.
> * {{parse()}} is missing {{@return}} tag and description for {{@throws}} tag.
> * {{checkIfUsageOverFairShare()}} is missing a period at the end of the first 
> sentence.
> * {{getHeadroom()}} should use {code}{@code}{code} instead of {{<code>}} tags.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6571) Fix JavaDoc issues in SchedulingPolicy

2017-05-10 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005062#comment-16005062
 ] 

Weiwei Yang commented on YARN-6571:
---

Hi [~templedf]

I am not new to Yarn. Do you think the doc I added for this class is not 
appropriate? I can do a better job than that if you let me know why. Thank you.

> Fix JavaDoc issues in SchedulingPolicy
> --
>
> Key: YARN-6571
> URL: https://issues.apache.org/jira/browse/YARN-6571
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Weiwei Yang
>Priority: Trivial
>  Labels: newbie
> Attachments: YARN-6571.001.patch, YARN-6571.002.patch
>
>
> There are several javadoc issues:
> * Class JavaDoc is missing.
> * {{getInstance()}} is missing {{@return}} and {{@param}} tags.
> * {{parse()}} is missing {{@return}} tag and description for {{@throws}} tag.
> * {{checkIfUsageOverFairShare()}} is missing a period at the end of the first 
> sentence.
> * {{getHeadroom()}} should use {code}{@code}{code} instead of {{<code>}} tags.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6457) Allow custom SSL configuration to be supplied in WebApps

2017-05-10 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005053#comment-16005053
 ] 

Haibo Chen commented on YARN-6457:
--

Thanks [~vrozov] for pointing out SSLFactory. I have noticed that 
SSLFactory.SSL_SERVER_CONF_DEFAULT and the like have been duplicated, both in 
YARN and HDFS. I did not suggest it, with the intent of keeping this patch 
small. Can you file a jira so that we can take a comprehensive look at fixing 
the duplication across all Hadoop components?

> Allow custom SSL configuration to be supplied in WebApps
> 
>
> Key: YARN-6457
> URL: https://issues.apache.org/jira/browse/YARN-6457
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp, yarn
>Reporter: Sanjay M Pujare
>Assignee: Sanjay M Pujare
> Fix For: 2.9.0, 2.7.4, 2.8.1, 3.0.0-alpha3
>
> Attachments: YARN-6457.00.patch, YARN-6457.01.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> Currently a custom SSL store cannot be passed on to WebApps which forces the 
> embedded web-server to use the default keystore set up in ssl-server.xml for 
> the whole Hadoop cluster. There are cases where the Hadoop app needs to use 
> its own/custom keystore.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6571) Fix JavaDoc issues in SchedulingPolicy

2017-05-10 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005042#comment-16005042
 ] 

Daniel Templeton commented on YARN-6571:


Thanks for the update, [~cheersyang].  It occurred to me after I posted my 
comments that adding the class docs requires a bit more understanding of the 
inner workings of YARN than is fair for a newbie JIRA.  Let's do this: I'll 
commit your first patch and then file a new JIRA to add the class javadoc.  
With that in mind, +1 to patch 1.

> Fix JavaDoc issues in SchedulingPolicy
> --
>
> Key: YARN-6571
> URL: https://issues.apache.org/jira/browse/YARN-6571
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Weiwei Yang
>Priority: Trivial
>  Labels: newbie
> Attachments: YARN-6571.001.patch, YARN-6571.002.patch
>
>
> There are several javadoc issues:
> * Class JavaDoc is missing.
> * {{getInstance()}} is missing {{@return}} and {{@param}} tags.
> * {{parse()}} is missing {{@return}} tag and description for {{@throws}} tag.
> * {{checkIfUsageOverFairShare()}} is missing a period at the end of the first 
> sentence.
> * {{getHeadroom()}} should use {code}{@code}{code} instead of {{<code>}} tags.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6380) FSAppAttempt keeps redundant copy of the queue

2017-05-10 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-6380:
---
Attachment: YARN-6380.005.patch

Addressed the unused import.  I don't think reducing the 81-char line to 80 
chars is worth the cost in readability.

> FSAppAttempt keeps redundant copy of the queue
> --
>
> Key: YARN-6380
> URL: https://issues.apache.org/jira/browse/YARN-6380
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha2
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: YARN-6380.001.patch, YARN-6380.002.patch, 
> YARN-6380.003.patch, YARN-6380.004.patch, YARN-6380.005.patch
>
>
> The {{FSAppAttempt}} class defines its own {{fsQueue}} variable that is a 
> second copy of the {{SchedulerApplicationAttempt}}'s {{queue}} variable.  
> Aside from being redundant, it's also a bug, because when moving 
> applications, we only update the {{SchedulerApplicationAttempt}}'s {{queue}}, 
> not the {{FSAppAttempt}}'s {{fsQueue}}.
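
A distilled, hypothetical illustration of the field-shadowing pattern described 
above (not the actual YARN classes):

{code}
public class QueueShadowDemo {
  static class Attempt {
    protected Object queue;
    void move(Object newQueue) { queue = newQueue; }
  }

  static class FSAttempt extends Attempt {
    private final Object fsQueue;            // redundant second copy
    FSAttempt(Object q) { queue = q; fsQueue = q; }
    Object getQueue() { return fsQueue; }    // stale after move()
  }

  public static void main(String[] args) {
    FSAttempt app = new FSAttempt("root.a");
    app.move("root.b");                  // updates only the superclass field
    System.out.println(app.queue);       // root.b
    System.out.println(app.getQueue());  // root.a -- the stale copy
  }
}
{code}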



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6280) Add a query parameter in ResourceManager Cluster Applications REST API to control whether or not returns ResourceRequest

2017-05-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004989#comment-16004989
 ] 

Hadoop QA commented on YARN-6280:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 9 new + 110 unchanged - 2 fixed = 119 total (was 112) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 47m 13s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands |
|   | 
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA |
|   | org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 
|
|   | org.apache.hadoop.yarn.server.resourcemanager.TestRMHAForNodeLabels |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6280 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867369/YARN-6280.009.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 48c28faf8ae5 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1e71fe8 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15894/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/15894/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15894/testReport/ |
| modules 

[jira] [Commented] (YARN-6457) Allow custom SSL configuration to be supplied in WebApps

2017-05-10 Thread Vlad Rozov (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004981#comment-16004981
 ] 

Vlad Rozov commented on YARN-6457:
--

What is the reason that Yarn/WebApps do not use 
SSLFactory.SSL_SERVER_CONF_DEFAULT and other conf settings like 
SSLFactory.SSL_SERVER_CONF_KEY ("hadoop.ssl.server.conf"), and instead hardcode 
the SSL conf to "ssl-server.xml"? According to the SSLFactory javadoc, "it is 
used to configure HTTPS in Hadoop HTTP based endpoints, client and server". In 
addition, in trunk, HttpServer2.Builder provides a setSSLConf() method that may 
simplify the WebAppsUtils implementation.
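
A rough sketch of the suggestion, assuming the SSLFactory constants named above 
(illustrative only):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.ssl.SSLFactory;

public class SslServerConf {
  // Resolve the SSL server resource through SSLFactory's constants instead
  // of hardcoding "ssl-server.xml".
  public static Configuration loadSslServerConf(Configuration conf) {
    String resource = conf.get(SSLFactory.SSL_SERVER_CONF_KEY,
        SSLFactory.SSL_SERVER_CONF_DEFAULT);
    Configuration sslConf = new Configuration(false);
    sslConf.addResource(resource);
    return sslConf;
  }
}
{code}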

> Allow custom SSL configuration to be supplied in WebApps
> 
>
> Key: YARN-6457
> URL: https://issues.apache.org/jira/browse/YARN-6457
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp, yarn
>Reporter: Sanjay M Pujare
>Assignee: Sanjay M Pujare
> Fix For: 2.9.0, 2.7.4, 2.8.1, 3.0.0-alpha3
>
> Attachments: YARN-6457.00.patch, YARN-6457.01.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> Currently a custom SSL store cannot be passed on to WebApps which forces the 
> embedded web-server to use the default keystore set up in ssl-server.xml for 
> the whole Hadoop cluster. There are cases where the Hadoop app needs to use 
> its own/custom keystore.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6577) Useless interface and implementation class

2017-05-10 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S reassigned YARN-6577:
---

Assignee: ZhangBing Lin

[~linzhangbing], I added you to the YARN contributor list. Feel free to assign 
JIRAs to yourself and contribute more. I appreciate your interest :-)

> Useless interface and implementation class
> --
>
> Key: YARN-6577
> URL: https://issues.apache.org/jira/browse/YARN-6577
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.3, 3.0.0-alpha2
>Reporter: ZhangBing Lin
>Assignee: ZhangBing Lin
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-6577.001.patch
>
>
> As of 2.7.3 and 3.0.0-alpha2, the ContainerLocalization interface and the 
> ContainerLocalizationImpl implementation class are of no use; I recommend 
> removing this useless interface and implementation class



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3981) offline collector: support timeline clients not associated with an application

2017-05-10 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-3981:

Attachment: YARN-3981- offline-collector-draft.pdf

Updated draft of design doc. Please feel free to post your comments.

> offline collector: support timeline clients not associated with an application
> --
>
> Key: YARN-3981
> URL: https://issues.apache.org/jira/browse/YARN-3981
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Rohith Sharma K S
>  Labels: YARN-5355
> Attachments: YARN-3981- offline-collector-draft.pdf
>
>
> In the current v.2 design, all timeline writes must belong in a 
> flow/application context (cluster + user + flow + flow run + application).
> But there are use cases that require writing data outside the context of an 
> application. One such example is a higher level client (e.g. tez client or 
> hive/oozie/cascading client) writing flow-level data that spans multiple 
> applications. We need to find a way to support them.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6544) Add Null check RegistryDNS service while parsing registry records

2017-05-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004856#comment-16004856
 ] 

Hadoop QA commented on YARN-6544:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
16s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
19s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6544 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867358/YARN-6544-yarn-native-services.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1f20e263e19c 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 
16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / 3c9f707 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15893/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15893/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add Null check RegistryDNS service while parsing registry records
> -
>
> Key: YARN-6544
> URL: https://issues.apache.org/jira/browse/YARN-6544
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: yarn-native-services
>Reporter: Karam Singh
>Assignee: 

[jira] [Updated] (YARN-6280) Add a query parameter in ResourceManager Cluster Applications REST API to control whether or not returns ResourceRequest

2017-05-10 Thread Lantao Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lantao Jin updated YARN-6280:
-
Attachment: YARN-6280.009.patch

> Add a query parameter in ResourceManager Cluster Applications REST API to 
> control whether or not returns ResourceRequest
> 
>
> Key: YARN-6280
> URL: https://issues.apache.org/jira/browse/YARN-6280
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager, restapi
>Affects Versions: 2.7.3
>Reporter: Lantao Jin
>Assignee: Lantao Jin
> Attachments: YARN-6280.001.patch, YARN-6280.002.patch, 
> YARN-6280.003.patch, YARN-6280.004.patch, YARN-6280.005.patch, 
> YARN-6280.006.patch, YARN-6280.007.patch, YARN-6280.008.patch, 
> YARN-6280.009.patch
>
>
> Beginning with v2.7, the ResourceManager Cluster Applications REST API returns a 
> ResourceRequest list. It's a very large structure in AppInfo.
> As a test, we use the below URI to query only 2 results:
> http://<rm address:port>/ws/v1/cluster/apps?states=running,accepted&limit=2
> The results are very different:
> ||Hadoop version|Total Character|Total Word|Total Lines|Size||
> |2.4.1|1192|  42| 42| 1.2 KB|
> |2.7.1|1222179|   48740|  48735|  1.21 MB|
> Most RESTful API requesters don't know about this after upgrading, and their 
> old queries may cause the ResourceManager more GC overhead and make it slower. 
> Even if they know this, they have no way to reduce the impact on the 
> ResourceManager except slowing down their query frequency.
> The patch adds a query parameter "showResourceRequests" to help requesters who 
> don't need this information reduce the overhead. In consideration of interface 
> compatibility, the default value is true if they don't set the parameter, so 
> the behaviour is the same as now.
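
For illustration, a sketch of how a requester might opt out once the parameter 
is available; the RM host/port are hypothetical, and the parameter name is taken 
from the description above:

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class AppsQuery {
  public static void main(String[] args) throws Exception {
    // showResourceRequests=false skips the large ResourceRequest payload.
    URL url = new URL("http://rm-host:8088/ws/v1/cluster/apps"
        + "?states=running,accepted&limit=2&showResourceRequests=false");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("GET");
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream()))) {
      in.lines().forEach(System.out::println);
    }
  }
}
{code}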



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6533) Race condition in writing service record to registry in yarn native services

2017-05-10 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004772#comment-16004772
 ] 

Billie Rinaldi commented on YARN-6533:
--

I am not sure about the purpose of the YARN_ID. I saw in 
SelectByYarnPersistence and RMRegistryOperationsService that sometimes it is 
used as a selection criterion for deleting service records. It looks like the 
appId should not be encoded. I think the reason the containerId is encoded is 
that it's used as a path in the registry, while the appId is not.

That said, I was worried that the YARN_ID was being used for deleting container 
service records, but it does not look like that is true. It might only be used 
for deleting application service records. Then my only concern would be the 
fact that the YARN_ID for the container record was different in the initial 
registration service record vs. the update service record. This would no longer 
be the case since we are removing the initial record. So maybe we could leave 
the container YARN_ID unencoded? What do you think, [~jianhe]?

> Race condition in writing service record to registry in yarn native services
> 
>
> Key: YARN-6533
> URL: https://issues.apache.org/jira/browse/YARN-6533
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Attachments: YARN-6533-yarn-native-services.001.patch, 
> YARN-6533-yarn-native-services.002.patch
>
>
> The ServiceRecord is written twice, once when the container is initially 
> registered and again in the Docker provider once the IP has been obtained for 
> the container. These occur asynchronously, so the more important record (the 
> one with the IP) can be overwritten by the initial record. Only one record 
> needs to be written, so we can stop writing the initial record when the 
> Docker provider is being used.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6544) Add Null check RegistryDNS service while parsing registry records

2017-05-10 Thread Karam Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karam Singh updated YARN-6544:
--
Attachment: YARN-6544-yarn-native-services.004.patch

Fixing whitespace issues

> Add Null check RegistryDNS service while parsing registry records
> -
>
> Key: YARN-6544
> URL: https://issues.apache.org/jira/browse/YARN-6544
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: yarn-native-services
>Reporter: Karam Singh
>Assignee: Karam Singh
> Fix For: yarn-native-services
>
> Attachments: YARN-6544-yarn-native-services.001.patch, 
> YARN-6544-yarn-native-services.002.patch, 
> YARN-6544-yarn-native-services.002.patch, 
> YARN-6544-yarn-native-services.003.patch, 
> YARN-6544-yarn-native-services.004.patch
>
>
> Add a null check to the RegistryDNS service while parsing registry records for 
> the Yarn persistence attribute. 
> As of now it assumes that the yarn registry record always contains the yarn 
> persistence attribute, which is not the case



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6578) Return container resource utilization from NM ContainerStatus call

2017-05-10 Thread Yang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004749#comment-16004749
 ] 

Yang Wang commented on YARN-6578:
-

[~Naganarasimha], I have uploaded a WIP patch.

> Return container resource utilization from NM ContainerStatus call
> --
>
> Key: YARN-6578
> URL: https://issues.apache.org/jira/browse/YARN-6578
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Yang Wang
> Attachments: YARN-6578.001.patch
>
>
> When the ApplicationMaster wants to change (increase/decrease) the resources of 
> an allocated container, resource utilization is an important reference 
> indicator for decision making.  So, when the AM calls 
> NMClient.getContainerStatus, resource utilization needs to be returned.
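
A sketch of the AM-side call site this feature targets, using the existing 
NMClient API; the utilization accessor itself is the proposed addition and does 
not exist yet:

{code}
import java.io.IOException;
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.ContainerStatus;
import org.apache.hadoop.yarn.api.records.NodeId;
import org.apache.hadoop.yarn.client.api.NMClient;
import org.apache.hadoop.yarn.exceptions.YarnException;

public class UtilizationCheck {
  static void inspect(NMClient nmClient, ContainerId id, NodeId node)
      throws YarnException, IOException {
    ContainerStatus status = nmClient.getContainerStatus(id, node);
    System.out.println(status.getState());
    // With the proposed change, a (hypothetical) status.getUtilization()
    // would inform container increase/decrease decisions here.
  }
}
{code}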



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6578) Return container resource utilization from NM ContainerStatus call

2017-05-10 Thread Yang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang updated YARN-6578:

Attachment: YARN-6578.001.patch

> Return container resource utilization from NM ContainerStatus call
> --
>
> Key: YARN-6578
> URL: https://issues.apache.org/jira/browse/YARN-6578
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Yang Wang
> Attachments: YARN-6578.001.patch
>
>
> When the ApplicationMaster wants to change (increase/decrease) the resources of 
> an allocated container, resource utilization is an important reference 
> indicator for decision making.  So, when the AM calls 
> NMClient.getContainerStatus, resource utilization needs to be returned.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6552) Increase YARN test timeouts from 1 second to 10 seconds

2017-05-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004735#comment-16004735
 ] 

Hudson commented on YARN-6552:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11714 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11714/])
YARN-6552. Increase YARN test timeouts from 1 second to 10 seconds. (jlowe: rev 
6099deebcb0ea23adc0f0c62db749c8f528869e9)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/TestEmptyQueues.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestLogsCLI.java


> Increase YARN test timeouts from 1 second to 10 seconds
> ---
>
> Key: YARN-6552
> URL: https://issues.apache.org/jira/browse/YARN-6552
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Fix For: 2.9.0, 3.0.0-alpha3, 2.8.2
>
> Attachments: YARN-6552.001.patch
>
>
> 1-second test timeouts are susceptible to failure on overloaded or otherwise 
> slow machines.
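
For illustration, the kind of change this patch makes (the test name below is 
made up; the real patch edits TestEmptyQueues and TestLogsCLI):

{code}
// Before: prone to spurious failures on a slow or overloaded machine.
@Test(timeout = 1000)
public void testSomething() throws Exception { ... }

// After: same test, ten times the headroom.
@Test(timeout = 10000)
public void testSomething() throws Exception { ... }
{code}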



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6552) Increase YARN test timeouts from 1 second to 10 seconds

2017-05-10 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004683#comment-16004683
 ] 

Jason Lowe commented on YARN-6552:
--

Test failures are unrelated.  TestAMRMClient failure is tracked by YARN-6272.  
TestRMRestart failure is tracked by YARN-5548.  TestDelegationTokenRenewer 
failure is tracked by YARN-5816.

+1 lgtm.  Committing this.

> Increase YARN test timeouts from 1 second to 10 seconds
> ---
>
> Key: YARN-6552
> URL: https://issues.apache.org/jira/browse/YARN-6552
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: YARN-6552.001.patch
>
>
> 1-second test timeouts are susceptible to failure on overloaded or otherwise 
> slow machines.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-05-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004682#comment-16004682
 ] 

Hadoop QA commented on YARN-2113:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 22 new + 179 unchanged - 3 fixed = 201 total (was 182) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m  4s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore |
|   | 
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-2113 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867319/YARN-2113.0016.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5a4e9085c913 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2ba9903 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15892/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/15892/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test 

[jira] [Commented] (YARN-6568) A queue which runs a long time job couldn't acquire any container for long time.

2017-05-10 Thread zhengchenyu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004604#comment-16004604
 ] 

zhengchenyu commented on YARN-6568:
---

I solved this problem by setting a timeout for each queue. 
In my first version, the timeout of every queue was the same. I think it is 
necessary to set a different timeout per queue. That way, we can accelerate 
specific queues! 
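
A rough sketch of that idea, with entirely hypothetical names (this is not 
FairScheduler's actual code):

{code}
// Hypothetical sketch: queues starved past their configured timeout jump to
// the front before the normal FairSharePolicy ordering is applied.
long now = clock.getTime();
queues.sort((a, b) -> {
  boolean aStarved = now - a.getLastContainerAssignedTime() > a.getQueueTimeoutMs();
  boolean bStarved = now - b.getLastContainerAssignedTime() > b.getQueueTimeoutMs();
  if (aStarved != bStarved) {
    return aStarved ? -1 : 1;  // starved queues first
  }
  return fairSharePolicy.getComparator().compare(a, b);  // usual fair-share order
});
{code}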

> A queue which runs a long time job couldn't acquire any container for long 
> time.
> 
>
> Key: YARN-6568
> URL: https://issues.apache.org/jira/browse/YARN-6568
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.1
> Environment: CentOS 7.1
>Reporter: zhengchenyu
> Fix For: 2.7.4
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> In our cluster, we found some applications couldn't acquire any container for a 
> long time. (Note: we use FairSharePolicy and FairScheduler.)
> First, I found some unreasonable configuration: we set minRes=maxRes, so some 
> applications kept pending for a long time, and we killed some large applications 
> to work around this. Then we changed this configuration and the problem was 
> relieved, but not completely solved. In our cluster, I found that applications 
> in some queues which request few containers keep pending for a long time. 
> I simulated this in a test cluster: I submitted a DistributedShell application, 
> which runs many loop applications, to queueA, then submitted my own YARN 
> application, which requests and releases containers constantly, to queueB. At 
> that point, any applications submitted to queueA keep pending!
> We know this is a problem of FairSharePolicy: it considers the request of the 
> queue, so after the queues are sorted, queues which have few requests are 
> ordered last all the time.
> We know that once the AM container is launched the request will increase, but 
> FairSharePolicy can't distinguish which request is the AM request. I think that 
> if the AM container is assigned, the problem is solved. 
> My colleagues and I discussed this problem. We recommend setting a timeout per 
> queue, i.e. the length of time during which a queue has not been assigned any 
> container. On timeout, we move that queue to the first place in the queue list. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-05-10 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-2113:
--
Attachment: YARN-2113.0016.patch

The last patch had a compilation issue. Attaching a new one.

> Add cross-user preemption within CapacityScheduler's leaf-queue
> ---
>
> Key: YARN-2113
> URL: https://issues.apache.org/jira/browse/YARN-2113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil G
> Attachments: IntraQueue Preemption-Impact Analysis.pdf, 
> TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt,
>  YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch, 
> YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch, 
> YARN-2113.0007.patch, YARN-2113.0008.patch, YARN-2113.0009.patch, 
> YARN-2113.0010.patch, YARN-2113.0011.patch, YARN-2113.0012.patch, 
> YARN-2113.0013.patch, YARN-2113.0014.patch, YARN-2113.0015.patch, 
> YARN-2113.0016.patch, YARN-2113.apply.onto.0012.ericp.patch, YARN-2113 
> Intra-QueuePreemption Behavior.pdf, YARN-2113.v0.patch
>
>
> Preemption today only works across queues and moves around resources across 
> queues per demand and usage. We should also have user-level preemption within 
> a queue, to balance capacity across users in a predictable manner.
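
As a rough illustration of what intra-queue, cross-user preemption has to 
compute (the helper names here are assumptions; the policy in the attached 
patches is considerably more involved):

{code}
// Hypothetical sketch: preempt from users running above their user limit.
Resource userLimit = computeUserLimit(leafQueue);  // assumed helper
for (User user : leafQueue.getUsers()) {
  Resource over = Resources.subtract(user.getUsed(), userLimit);
  if (Resources.greaterThan(rc, clusterResource, over, Resources.none())) {
    // Pick this user's lowest-priority, most recently started containers
    // until the user is back at its limit.
    markContainersForPreemption(user, over);  // assumed helper
  }
}
{code}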



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-05-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004526#comment-16004526
 ] 

Hadoop QA commented on YARN-2113:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
35s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 33s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 22 new + 179 unchanged - 3 fixed = 201 total (was 182) 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
42s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 33s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-2113 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867308/YARN-2113.0015.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2f5800f4fecc 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2ba9903 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/15891/artifact/patchprocess/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-YARN-Build/15891/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/15891/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| checkstyle | 

[jira] [Updated] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-05-10 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-2113:
--
Attachment: YARN-2113.0015.patch

Thanks [~eepayne] and [~leftnoteasy].
Attaching a patch addressing the latest comments.

> Add cross-user preemption within CapacityScheduler's leaf-queue
> ---
>
> Key: YARN-2113
> URL: https://issues.apache.org/jira/browse/YARN-2113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil G
> Attachments: IntraQueue Preemption-Impact Analysis.pdf, 
> TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt,
>  YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch, 
> YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch, 
> YARN-2113.0007.patch, YARN-2113.0008.patch, YARN-2113.0009.patch, 
> YARN-2113.0010.patch, YARN-2113.0011.patch, YARN-2113.0012.patch, 
> YARN-2113.0013.patch, YARN-2113.0014.patch, YARN-2113.0015.patch, 
> YARN-2113.apply.onto.0012.ericp.patch, YARN-2113 Intra-QueuePreemption 
> Behavior.pdf, YARN-2113.v0.patch
>
>
> Preemption today only works across queues and moves around resources across 
> queues per demand and usage. We should also have user-level preemption within 
> a queue, to balance capacity across users in a predictable manner.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6471) Support to add min/max resource configuration for a queue

2017-05-10 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-6471:
--
Attachment: YARN-6471.004.patch

Uploading a new patch that handles more error conditions.

Handled the scenarios below:
# Capacity can now be configured either by percentage or by absolute resource 
values; both cannot be configured together.
# Fixed a few more validations for when the max resource is not configured.

ToDo:
# more test cases

cc/[~leftnoteasy]

> Support to add min/max resource configuration for a queue
> -
>
> Key: YARN-6471
> URL: https://issues.apache.org/jira/browse/YARN-6471
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-6471.001.patch, YARN-6471.002.patch, 
> YARN-6471.003.patch, YARN-6471.004.patch
>
>
> This jira will track the new configurations which are needed to configure min 
> resource and max resource of various resource types in a queue.
> For eg: 
> {noformat}
> yarn.scheduler.capacity.root.default.memory.min-resource
> yarn.scheduler.capacity.root.default.memory.max-resource
> yarn.scheduler.capacity.root.default.vcores.min-resource
> yarn.scheduler.capacity.root.default.vcores.max-resource
> {noformat}
> Uploading a patch soon
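
For illustration, the proposed keys could be set like this (values are examples 
only, and the property names may still change as the patch evolves):

{code}
// Illustrative only: keys follow the description above; values are examples.
Configuration conf = new Configuration();
conf.set("yarn.scheduler.capacity.root.default.memory.min-resource", "4096");
conf.set("yarn.scheduler.capacity.root.default.memory.max-resource", "16384");
conf.set("yarn.scheduler.capacity.root.default.vcores.min-resource", "4");
conf.set("yarn.scheduler.capacity.root.default.vcores.max-resource", "16");
{code}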



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6578) Return container resource utilization from NM ContainerStatus call

2017-05-10 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004447#comment-16004447
 ] 

Naganarasimha G R commented on YARN-6578:
-

Hi [~fly_in_gis], it seems you have a WIP patch; if possible, can you share it?

> Return container resource utilization from NM ContainerStatus call
> --
>
> Key: YARN-6578
> URL: https://issues.apache.org/jira/browse/YARN-6578
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Yang Wang
>
> When the ApplicationMaster wants to change (increase/decrease) the resources of 
> an allocated container, resource utilization is an important reference indicator 
> for decision making. So, when the AM calls NMClient.getContainerStatus, resource 
> utilization needs to be returned.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6578) Return container resource utilization from NM ContainerStatus call

2017-05-10 Thread Yang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004415#comment-16004415
 ] 

Yang Wang commented on YARN-6578:
-

[~Naganarasimha], thanks for your reply.
I plan to get the usage from ContainerMetrics and return it in ContainerStatus.
If the worry is that this will make the NM heartbeat bigger, we could set the 
utilization only in the response of NMClient.getContainerStatus.

{code}
ContainerImpl.cloneAndGetContainerStatus()
...
  // Metrics may be null if container metrics are disabled or not yet
  // registered for this container.
  ContainerMetrics metrics =
      ContainerMetrics.getContainerMetrics(this.containerId);
  if (metrics != null) {
    // Mean physical memory (MB) and CPU usage (%) over the last stats
    // interval; the virtual-memory slot is left as 0.
    status.setUtilization(ResourceUtilization
        .newInstance((int) metrics.pMemMBsStat.lastStat().mean(), 0,
            (float) metrics.cpuCoreUsagePercent.lastStat().mean()));
  } else {
    // No metrics available: report zero utilization rather than null.
    status.setUtilization(ResourceUtilization.newInstance(0, 0, 0));
  }
...
{code}
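
On the AM side, usage would then look roughly like this (a sketch assuming the 
patch adds a getUtilization() accessor to ContainerStatus; allocatedMemMB is a 
hypothetical local variable):

{code}
// Sketch: poll the NM for the container's status and read the proposed field.
ContainerStatus status = nmClient.getContainerStatus(containerId, nodeId);
ResourceUtilization util = status.getUtilization();  // added by this patch
if (util.getPhysicalMemory() < allocatedMemMB / 2) {
  // Plenty of observed headroom; the AM could request a decrease here.
}
{code}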

> Return container resource utilization from NM ContainerStatus call
> --
>
> Key: YARN-6578
> URL: https://issues.apache.org/jira/browse/YARN-6578
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Yang Wang
>
> When the ApplicationMaster wants to change (increase/decrease) the resources of 
> an allocated container, resource utilization is an important reference indicator 
> for decision making. So, when the AM calls NMClient.getContainerStatus, resource 
> utilization needs to be returned.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6366) Refactor the NodeManager DeletionService to support additional DeletionTask types.

2017-05-10 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004377#comment-16004377
 ] 

Varun Vasudev commented on YARN-6366:
-

Thanks for the patch [~shaneku...@gmail.com]! Some minor things to clean up -

1)
Should we add the 'user' field to the DeletionTask base class instead of 
keeping it in the FileDeletionTask class?

2)
{code}
if (proto.getTaskType() != null) {
{code}
Can you add a check for {code} proto.hasTaskType() {code} and then check if 
it's null? (See the sketch after this list.)

3)
{code}
int taskId = proto.getId(); 
{code}
Shouldn't this be in the base converter? It seems to be a common piece of code 
that every task type will have to call.

4)
You don't need the ProtoToFileDeletionTaskConverter and 
DelegatingProtoToDeletionTaskConverter classes. Please move the convert 
functions to the ProtoUtils class as static functions.

5)
{code}
+FileDeletionTask dependentDeletionTask = new FileDeletionTask(del, null,
+userDirPath, new ArrayList<>());
{code}
Why create a new ArrayList here? You've used "null" in other places.

6)
The new tests you've added need to be renamed to match the naming convention. 
Invoked test functions need to be renamed to start with "test" (like 
testGetUser)

Thanks!
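
On point 2, a minimal sketch of the suggested guard (the surrounding proto 
class and converter body are assumed):

{code}
// Sketch for review point 2: with protobuf, probe optional fields via hasX()
// before getX(), since getX() can return a default value rather than null.
if (proto.hasTaskType() && proto.getTaskType() != null) {
  // ... safe to convert the task type and build the DeletionTask here ...
}
{code}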

> Refactor the NodeManager DeletionService to support additional DeletionTask 
> types.
> --
>
> Key: YARN-6366
> URL: https://issues.apache.org/jira/browse/YARN-6366
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager, yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-6366.001.patch, YARN-6366.002.patch, 
> YARN-6366.003.patch, YARN-6366.004.patch, YARN-6366.005.patch, 
> YARN-6366.006.patch
>
>
> The NodeManager's DeletionService only supports file-based DeletionTasks. This 
> makes sense, as files (and directories) have been the primary concern for 
> clean-up to date. With the addition of the Docker container runtime, additional 
> types of DeletionTask are likely to be required, such as deletion of Docker 
> containers and images. See YARN-5366 and YARN-5670. This issue is to refactor 
> the DeletionService to support additional DeletionTasks.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6182) [YARN-3368] Fix alignment issues and missing information in queue pages

2017-05-10 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-6182:
---
Description: 
This patch fixes the following issues:

In the Queues page:
# Queue Capacities: Absolute Max Capacity should be aligned better.
# Queue Information: State is coming up empty.
# The queue tree graph is taking too much space. We should reduce both the 
vertical and horizontal spacing.
# The Queues tab becomes inactive while hovering on a queue.
# Fix the capacity display to two decimal places.


  was:
This patch fixes following issues:

In Queues page:
# Queue Capacities: Absolute Max Capacity should be aligned better.
# Queue Information: State is coming empty
# The queue tree graph is taking too much space. We should reduce both the 
vertical and horizontal spacing.
# Queues tab becomes inactive while hovering on the queue.



> [YARN-3368] Fix alignment issues and missing information in queue pages
> ---
>
> Key: YARN-6182
> URL: https://issues.apache.org/jira/browse/YARN-6182
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-6182.001.patch, YARN-6182.002.patch, 
> YARN-6182.003.patch
>
>
> This patch fixes the following issues:
> In the Queues page:
> # Queue Capacities: Absolute Max Capacity should be aligned better.
> # Queue Information: State is coming up empty.
> # The queue tree graph is taking too much space. We should reduce both the 
> vertical and horizontal spacing.
> # The Queues tab becomes inactive while hovering on a queue.
> # Fix the capacity display to two decimal places.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6182) [YARN-3368] Fix alignment issues and missing information in queue pages

2017-05-10 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-6182:
---
Summary: [YARN-3368] Fix alignment issues and missing information in queue 
pages  (was: [YARN-3368] Fix alignment issues and missing information in Queue 
pages)

> [YARN-3368] Fix alignment issues and missing information in queue pages
> ---
>
> Key: YARN-6182
> URL: https://issues.apache.org/jira/browse/YARN-6182
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-6182.001.patch, YARN-6182.002.patch, 
> YARN-6182.003.patch
>
>
> This patch fixes the following issues:
> In the Queues page:
> # Queue Capacities: Absolute Max Capacity should be aligned better.
> # Queue Information: State is coming up empty.
> # The queue tree graph is taking too much space. We should reduce both the 
> vertical and horizontal spacing.
> # The Queues tab becomes inactive while hovering on a queue.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6577) Useless interface and implementation class

2017-05-10 Thread ZhangBing Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004331#comment-16004331
 ] 

ZhangBing Lin commented on YARN-6577:
-

Hi [~rohithsharma],
I couldn't change the Assignee to myself when submitting YARN-6577. Please 
add me as a contributor for Apache Hadoop. My account is ZhangBing Lin.
Thanks,
ZhangBing Lin

> Useless interface and implementation class
> --
>
> Key: YARN-6577
> URL: https://issues.apache.org/jira/browse/YARN-6577
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.3, 3.0.0-alpha2
>Reporter: ZhangBing Lin
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-6577.001.patch
>
>
> As of 2.7.3 and 3.0.0-alpha2, the ContainerLocalization interface and the 
> ContainerLocalizationImpl implementation class are of no use. I recommend 
> removing this useless interface and its implementation class.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


