[jira] [Commented] (YARN-8006) Make Hbase-2 profile as default for YARN-7055 branch

2018-03-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396581#comment-16396581
 ] 

genericqa commented on YARN-8006:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} YARN-8006 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  5m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
21s{color} | {color:green} YARN-8006 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
47s{color} | {color:green} YARN-8006 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} YARN-8006 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
63m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} YARN-8006 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
9s{color} | {color:red} hadoop-yarn-server-timelineservice-hbase-tests in the 
patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
11s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 11s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m  
8s{color} | {color:red} hadoop-yarn-server-timelineservice-hbase-tests in the 
patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  0m 
15s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m  
9s{color} | {color:red} hadoop-yarn-server-timelineservice-hbase-tests in the 
patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
10s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m  9s{color} 
| {color:red} hadoop-yarn-server-timelineservice-hbase-tests in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-server in 
the patch passed. {color} |
| {color:blue}0{color} | {color:blue} asflicense {color} | {color:blue}  0m 
12s{color} | {color:blue} ASF License check generated no output? {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | YARN-8006 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914215/YARN-8006-YARN-8006.01.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux ccaabf245329 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-8006 / 9a082fb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| mvninstall | 

[jira] [Commented] (YARN-8022) ResourceManager UI cluster/app/ page fails to render

2018-03-12 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396564#comment-16396564
 ] 

Rohith Sharma K S commented on YARN-8022:
-

bq. Is there {color:#ff}any{color} behavior change comparing this to trunk 
few days ago (before I revert these patches)? 
Yes, there is. Before the revert of HADOOP-14077, if callerUgi was null then an 
AuthenticationException was thrown. With this patch, we continue to retrieve the 
app report without ugi#doAs. 
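
For illustration, the difference amounts to roughly the following sketch (simplified; callerUGI/getApplicationReport are stand-ins for this explanation, not the exact AppBlock code):
{code:java}
// Sketch only: after the patch the null-callerUGI case falls back to a direct fetch
// instead of throwing AuthenticationException as it did before the HADOOP-14077 revert.
ApplicationReport appReport;
if (callerUGI == null) {
  appReport = getApplicationReport(appId);
} else {
  appReport = callerUGI.doAs(
      (PrivilegedExceptionAction<ApplicationReport>) () -> getApplicationReport(appId));
}
{code}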

bq. Is there any behavior change comparing this to logics originally before 
HADOOP-14077 get committed?
*NO*. The behavior with this patch should be the same as the state before HADOOP-14077 was committed.

It looks like the confusion is because, in earlier comments, I said we need to revert 
the AppBlock code. Actually *NO*, we should *NOT* revert the AppBlock modifications 
after Owen's commit. Sorry for the confusion. 

YARN-7163 made some modifications to the AppBlock class which got lost after Owen's 
commit, and that is what caused the NPE. This patch brings back those modifications 
without any behavioral changes.



> ResourceManager UI cluster/app/ page fails to render
> 
>
> Key: YARN-8022
> URL: https://issues.apache.org/jira/browse/YARN-8022
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Blocker
> Attachments: Screen Shot 2018-03-12 at 1.45.05 PM.png, 
> YARN-8022.001.patch, YARN-8022.002.patch
>
>
> The page displays the message "Failed to read the attempts of the application"
>  
> The following stack trace is observed in RM log.
> org.apache.hadoop.yarn.server.webapp.AppBlock: Failed to read the attempts of 
> the application application_1520597233415_0002.
> java.lang.NullPointerException
>  at org.apache.hadoop.yarn.server.webapp.AppBlock$3.run(AppBlock.java:283)
>  at org.apache.hadoop.yarn.server.webapp.AppBlock$3.run(AppBlock.java:280)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>  at org.apache.hadoop.yarn.server.webapp.AppBlock.render(AppBlock.java:279)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RMAppBlock.render(RMAppBlock.java:71)
>  at org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>  at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>  at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>  at org.apache.hadoop.yarn.webapp.view.HtmlPage$Page.subView(HtmlPage.java:49)
>  at 
> org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl$EImp._v(HamletImpl.java:117)
>  at org.apache.hadoop.yarn.webapp.hamlet2.Hamlet$TD.__(Hamlet.java:848)
>  at 
> org.apache.hadoop.yarn.webapp.view.TwoColumnLayout.render(TwoColumnLayout.java:71)
>  at org.apache.hadoop.yarn.webapp.view.HtmlPage.render(HtmlPage.java:82)
>  at org.apache.hadoop.yarn.webapp.Controller.render(Controller.java:212)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RmController.app(RmController.java:54)






[jira] [Commented] (YARN-7975) Add an optional arg to yarn cluster -list-node-labels to list nodes collection partitioned by labels

2018-03-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396559#comment-16396559
 ] 

genericqa commented on YARN-7975:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client: The patch generated 24 new 
+ 35 unchanged - 1 fixed = 59 total (was 36) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
40s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 28m  
6s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
|  |  org.apache.hadoop.yarn.client.cli.ClusterCLI.printClusterNodeLabelsMap() 
makes inefficient use of keySet iterator instead of entrySet iterator  At 
ClusterCLI.java:keySet iterator instead of entrySet iterator  At 
ClusterCLI.java:[line 155] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | YARN-7975 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914212/YARN-7975_1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 93866783a3f5 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0355ec2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Commented] (YARN-5627) [Atsv2] Support streaming reader API to fetch entities

2018-03-12 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396547#comment-16396547
 ] 

Rohith Sharma K S commented on YARN-5627:
-

As an alternative to this JIRA, we added support for FROM_ID in all the REST queries, 
which helps with pagination. See YARN-6027, YARN-5585, YARN-6064, and YARN-6047. 
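
A rough sketch of the client-side pattern this enables (illustrative only: the endpoint path, the {{fromid}}/{{limit}} parameter names as written here, and the helper methods are assumptions, not lifted from those JIRAs):
{code:java}
// Page through a large entity list by carrying the id of the last entity forward.
String base = "http://timelinereader.example.com:8188/ws/v2/timeline/clusters/c1"
    + "/apps/application_1520597233415_0002/entities/YARN_CONTAINER?limit=100";
String fromId = null;
while (true) {
  String url = (fromId == null) ? base : base + "&fromid=" + fromId;
  List<TimelineEntity> page = fetchEntities(url);   // hypothetical HTTP + JSON helper
  process(page);                                    // hypothetical consumer
  if (page.size() < 100) {
    break;                                          // short page: no more entities
  }
  fromId = page.get(page.size() - 1).getId();
}
{code}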

> [Atsv2] Support streaming reader API to fetch entities
> --
>
> Key: YARN-5627
> URL: https://issues.apache.org/jira/browse/YARN-5627
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
>
> There is no limit on the size of a TimelineEntity object; it can vary from 
> KBs to MBs. While reading a list of entities, the TimelineReader could 
> potentially run into an OOM situation depending on the entity size and limit. 
> The proposal is to support a streaming API to read the entity list. 






[jira] [Updated] (YARN-8006) Make Hbase-2 profile as default for YARN-7055 branch

2018-03-12 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-8006:
-
Attachment: YARN-8006-YARN-8006.01.patch

> Make Hbase-2 profile as default for YARN-7055 branch
> 
>
> Key: YARN-8006
> URL: https://issues.apache.org/jira/browse/YARN-8006
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8006-YARN-7055.001.patch, 
> YARN-8006-YARN-8006.00.patch, YARN-8006-YARN-8006.01.patch
>
>
> In the last weekly call, folks discussed that we should have a separate branch with 
> the hbase-2 profile as the default. Trunk's default profile is hbase-1, which runs 
> all the tests under the hbase-1 profile, but the tests are not run for the hbase-2 
> profile.
> As per the discussion, let's keep the YARN-7055 branch with the hbase-2 profile as 
> the default. Any server-side patches can be applied to this branch as well, which 
> runs the tests for the hbase-2 profile. 






[jira] [Commented] (YARN-5627) [Atsv2] Support streaming reader API to fetch entities

2018-03-12 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396520#comment-16396520
 ] 

Haibo Chen commented on YARN-5627:
--

Yes, max N entities at a time is what MR needs. I was indeed able to retrieve 
only the relevant fields to reduce the data transfer, but filters do not help 
when all tasks or task attempts need to be retrieved, so the data size will 
still be large for big jobs. Server-side pagination will turn the one-time data 
transfer into multiple requests that are made only when necessary.

> [Atsv2] Support streaming reader API to fetch entities
> --
>
> Key: YARN-5627
> URL: https://issues.apache.org/jira/browse/YARN-5627
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
>
> There is no limit on the size of a TimelineEntity object; it can vary from 
> KBs to MBs. While reading a list of entities, the TimelineReader could 
> potentially run into an OOM situation depending on the entity size and limit. 
> The proposal is to support a streaming API to read the entity list. 






[jira] [Commented] (YARN-7975) Add an optional arg to yarn cluster -list-node-labels to list nodes collection partitioned by labels

2018-03-12 Thread Shen Yinjie (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396517#comment-16396517
 ] 

Shen Yinjie commented on YARN-7975:
---

[~sunilg], thanks for the reply. I updated the patch with a UT.

> Add an optional arg to yarn cluster -list-node-labels to list nodes 
> collection partitioned by labels
> 
>
> Key: YARN-7975
> URL: https://issues.apache.org/jira/browse/YARN-7975
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Attachments: YARN-7975.patch, YARN-7975_1.patch
>
>
> We have "yarn cluster -lnl" to print all node-label info, but it's not 
> enough; we should be able to list the collection of nodes partitioned by 
> labels, especially in a large cluster.
> So I propose to add an optional argument "-nodes" for "yarn cluster -lnl" 
> to achieve this.
> e.g.
> [yarn@docker1 ~]$ yarn cluster -lnl -nodes
> Node Labels Num: 3
>               Labels                                               Nodes
>  

[jira] [Updated] (YARN-7975) Add an optional arg to yarn cluster -list-node-labels to list nodes collection partitioned by labels

2018-03-12 Thread Shen Yinjie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shen Yinjie updated YARN-7975:
--
Attachment: YARN-7975_1.patch

> Add an optional arg to yarn cluster -list-node-labels to list nodes 
> collection partitioned by labels
> 
>
> Key: YARN-7975
> URL: https://issues.apache.org/jira/browse/YARN-7975
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Attachments: YARN-7975.patch, YARN-7975_1.patch
>
>
> We have "yarn cluster -lnl" to print all node-label info, but it's not 
> enough; we should be able to list the collection of nodes partitioned by 
> labels, especially in a large cluster.
> So I propose to add an optional argument "-nodes" for "yarn cluster -lnl" 
> to achieve this.
> e.g.
> [yarn@docker1 ~]$ yarn cluster -lnl -nodes
> Node Labels Num: 3
>               Labels                                               Nodes
>  

[jira] [Updated] (YARN-8016) Refine PlacementRule interface and add a app-name queue mapping rule as an example

2018-03-12 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-8016:
-
Description: 
After YARN-3635/YARN-6689, PlacementRule became a common interface which can 
be used by the scheduler and can be dynamically updated by the scheduler 
according to configs. Some work remains: 
- There's no way to initialize a PlacementRule.
- There's no example of a PlacementRule other than the user-group mapping one.

This JIRA is targeted at refining the PlacementRule interface and adding another 
PlacementRule example.
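
As a purely illustrative sketch of what such an additional rule might look like (the class name appears in the review comments below; the method signatures and the initialize hook are assumptions, since refining the interface is exactly what this JIRA is about):
{code:java}
// Hypothetical app-name placement rule; signatures are illustrative, not the committed API.
public class AppNameMappingPlacementRule extends PlacementRule {
  private final Map<String, String> appNameToQueue = new HashMap<>();

  // Proposed initialization hook: load app-name -> queue mappings from configuration.
  public void initialize(Map<String, String> mappings) {
    appNameToQueue.putAll(mappings);
  }

  @Override
  public ApplicationPlacementContext getPlacementForApp(
      ApplicationSubmissionContext asc, String user) {
    String queue = appNameToQueue.get(asc.getApplicationName());
    return queue == null ? null : new ApplicationPlacementContext(queue);
  }
}
{code}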

  was:
Currently in Capacity Scheduler, queue mappings are hard-coded to 
UserGroupMappingPlacementRule.

We need to expose a general framework to dynamically create various 
queue-mapping placement rules by reading the queue mapping rule property from 
capacity-scheduler.xml.


> Refine PlacementRule interface and add a app-name queue mapping rule as an 
> example
> --
>
> Key: YARN-8016
> URL: https://issues.apache.org/jira/browse/YARN-8016
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Zian Chen
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-8016.001.patch
>
>
> After YARN-3635/YARN-6689, PlacementRule became a common interface which can 
> be used by the scheduler and can be dynamically updated by the scheduler 
> according to configs. Some work remains: 
> - There's no way to initialize a PlacementRule.
> - There's no example of a PlacementRule other than the user-group mapping one.
> This JIRA is targeted at refining the PlacementRule interface and adding 
> another PlacementRule example.






[jira] [Commented] (YARN-7657) Queue Mapping could provide options to provide 'user' specific auto-created queues under a specified group parent queue

2018-03-12 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396495#comment-16396495
 ] 

Suma Shivaprasad commented on YARN-7657:


[~leftnoteasy] Thanks! The UT failure is unrelated to this patch; it is in 
TestResourceTrackerService:

Failures: [ERROR] 
TestResourceTrackerService.testNodeRemovalGracefully:1608->testNodeRemovalUtilLost:1877
 There should be no Lost NMs! expected:<2> but was:<0> [INFO]

 

 

> Queue Mapping could provide options to provide 'user' specific auto-created 
> queues under a specified group parent queue
> ---
>
> Key: YARN-7657
> URL: https://issues.apache.org/jira/browse/YARN-7657
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7657.1.patch, YARN-7657.2.patch, YARN-7657.3.patch, 
> YARN-7657.4.patch
>
>
> The current queue mapping only provides %user as an option for 'user'-specific 
> queues, as u:%user:%user. We could also support %user with a group, as 
> 'g:marketing-group:marketing.%user', so that user-specific queues can be 
> automatically created under a group queue in this case.
> cc [~leftnoteasy]
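
For illustration, the existing mapping plus the proposed group-based form could be expressed roughly like this (a sketch: {{yarn.scheduler.capacity.queue-mappings}} is the existing CS property referenced elsewhere in this thread, while the g:...:marketing.%user entry is only the syntax proposed here, and the group/queue names are examples):
{code:java}
// Sketch only: existing user mapping plus the proposed group-based user queues.
Configuration conf = new Configuration();
conf.set("yarn.scheduler.capacity.queue-mappings",
    "u:%user:%user,g:marketing-group:marketing.%user");
{code}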






[jira] [Updated] (YARN-8016) Refine PlacementRule interface and add a app-name queue mapping rule as an example

2018-03-12 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-8016:
-
Summary: Refine PlacementRule interface and add a app-name queue mapping 
rule as an example  (was: Provide a common interface for queues mapping rules)

> Refine PlacementRule interface and add a app-name queue mapping rule as an 
> example
> --
>
> Key: YARN-8016
> URL: https://issues.apache.org/jira/browse/YARN-8016
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Zian Chen
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-8016.001.patch
>
>
> Currently in Capacity Scheduler, queue mappings are hard-coded to 
> UserGroupMappingPlacementRule.
> We need to expose a general framework to dynamically create various 
> queue-mapping placement rules by reading the queue mapping rule property from 
> capacity-scheduler.xml.






[jira] [Commented] (YARN-8016) Provide a common interface for queues mapping rules

2018-03-12 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396488#comment-16396488
 ] 

Wangda Tan commented on YARN-8016:
--

Thanks [~yufeigu] for looking at this JIRA.
I discussed this with Sandy a long time ago on YARN-3635 when introducing the common 
scheduler queue-mapping interface. It has slightly different semantics compared to 
the Fair Scheduler's queue mapping; please see my comment 
https://issues.apache.org/jira/browse/YARN-3635?focusedCommentId=14630139=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14630139
 for more details. I also listed the work that needs to be done to use this new API in 
FS.

Please let me know if you have any other comments.

> Provide a common interface for queues mapping rules
> ---
>
> Key: YARN-8016
> URL: https://issues.apache.org/jira/browse/YARN-8016
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Zian Chen
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-8016.001.patch
>
>
> Currently in Capacity Scheduler, queue mappings are hard-coded to 
> UserGroupMappingPlacementRule.
> We need to expose a general framework to dynamically create various 
> queue-mapping placement rules by reading the queue mapping rule property from 
> capacity-scheduler.xml.






[jira] [Commented] (YARN-7657) Queue Mapping could provide options to provide 'user' specific auto-created queues under a specified group parent queue

2018-03-12 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396484#comment-16396484
 ] 

Wangda Tan commented on YARN-7657:
--

Thanks [~suma.shivaprasad], the latest patch looks good. Please confirm that the UT 
failure is not related. Will commit tomorrow if there are no objections.

> Queue Mapping could provide options to provide 'user' specific auto-created 
> queues under a specified group parent queue
> ---
>
> Key: YARN-7657
> URL: https://issues.apache.org/jira/browse/YARN-7657
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7657.1.patch, YARN-7657.2.patch, YARN-7657.3.patch, 
> YARN-7657.4.patch
>
>
> The current queue mapping only provides %user as an option for 'user'-specific 
> queues, as u:%user:%user. We could also support %user with a group, as 
> 'g:marketing-group:marketing.%user', so that user-specific queues can be 
> automatically created under a group queue in this case.
> cc [~leftnoteasy]






[jira] [Commented] (YARN-8022) ResourceManager UI cluster/app/ page fails to render

2018-03-12 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396481#comment-16396481
 ] 

Wangda Tan commented on YARN-8022:
--

Thanks [~tarunparimi], 

A couple of questions for [~rohithsharma], since I don't have a full understanding of 
this part of the code.
1) Is there {color:#FF}any{color} behavior change comparing this to trunk from a 
few days ago (before I reverted these patches)? 
2) Is there {color:#FF}any{color} behavior change comparing this to the logic 
originally in place before HADOOP-14077 was committed? (Like 2.6.x). 

 

> ResourceManager UI cluster/app/ page fails to render
> 
>
> Key: YARN-8022
> URL: https://issues.apache.org/jira/browse/YARN-8022
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Blocker
> Attachments: Screen Shot 2018-03-12 at 1.45.05 PM.png, 
> YARN-8022.001.patch, YARN-8022.002.patch
>
>
> The page displays the message "Failed to read the attempts of the application"
>  
> The following stack trace is observed in RM log.
> org.apache.hadoop.yarn.server.webapp.AppBlock: Failed to read the attempts of 
> the application application_1520597233415_0002.
> java.lang.NullPointerException
>  at org.apache.hadoop.yarn.server.webapp.AppBlock$3.run(AppBlock.java:283)
>  at org.apache.hadoop.yarn.server.webapp.AppBlock$3.run(AppBlock.java:280)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>  at org.apache.hadoop.yarn.server.webapp.AppBlock.render(AppBlock.java:279)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RMAppBlock.render(RMAppBlock.java:71)
>  at org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>  at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>  at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>  at org.apache.hadoop.yarn.webapp.view.HtmlPage$Page.subView(HtmlPage.java:49)
>  at 
> org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl$EImp._v(HamletImpl.java:117)
>  at org.apache.hadoop.yarn.webapp.hamlet2.Hamlet$TD.__(Hamlet.java:848)
>  at 
> org.apache.hadoop.yarn.webapp.view.TwoColumnLayout.render(TwoColumnLayout.java:71)
>  at org.apache.hadoop.yarn.webapp.view.HtmlPage.render(HtmlPage.java:82)
>  at org.apache.hadoop.yarn.webapp.Controller.render(Controller.java:212)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RmController.app(RmController.java:54)






[jira] [Comment Edited] (YARN-8016) Provide a common interface for queues mapping rules

2018-03-12 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396474#comment-16396474
 ] 

Wangda Tan edited comment on YARN-8016 at 3/13/18 3:11 AM:
---

Thanks [~Zian Chen] for working on the patch.

1) updatePlacementRules, why add readLock? It is already under writelock 
protection.

2) why check this:
{code:java}
  if (null != rule.getQueueMappingLists()) {
placementRules.add(rule);
  }
{code}
Instead of
{code:java}
  if (null != rule) {
placementRules.add(rule);
  }
{code}
And I'm not sure why List becomes a public interface, I 
think {{getPlacementForApp}} should be enough for mapping, no?

3) IIUC, CapacitySchedulerConfiguration#getQueueMappingEntity is created for 
code reuse, but it's better not to place under CapacitySchedulerConfiguration. 
Maybe place it to a class like QueuePlacementRuleUtils under 
{{...resourcemanager.placement}}?

4) Similarly, QueueMappingEntity is not a {{must-have}} field for a general 
PlacementRule; I suggest moving it to a separate class instead of keeping it as an 
inner class of PlacementRule.

5)
{code:java}
public UserGroupMappingPlacementRule(){}
{code}
Is not necessary.

6) {{validateParentQueue}} rename it to something like 
{{validateQueueMappingUnderParentQueue}} and place it under 
{{QueuePlacementRuleUtils}}?

7) AppNameMappingPlacementRule:
 - Add a more detailed explanation of the purpose and configs of this class to the 
Javadocs?
 - Why does {{getPlacementForApp}} return empty?
 - Basic test cases for this class?
 - {{QUEUE_MAPPING_NAME}}: maybe set it to {{app-name}} for short?

8) 
{{TestCapacitySchedulerAutoCreatedQueueBase#testUpdatePlacementRulesFactory}} 
should not belong to {{TestCapacitySchedulerAutoCreatedQueueBase}}; the meaning 
of {{...Base}} is that it contains non-static util functions and doesn't have 
test cases. If you really want to reuse some of the methods, I suggest extending 
the class and renaming it to {{TestCapacitySchedulerQueueMappingBase}}. These 
test cases should be added to a class like 
{{TestCapacitySchedulerQueueMappingFactory}}. The following test cases can be 
considered:
 - Set up a chain of placement rules (I don't see any test case that includes > 1 
placement rule).

9) {{updatePlacementRules}}
 - Add an {{app-name}} rule to the switch .. case?
 - For the behavior of the following statement:
{code:java}
if (placementRuleStrs.isEmpty()) {
  PlacementRule ugRule = getUserGroupMappingPlacementRule();
  if (null != ugRule) {
placementRules.add(ugRule);
  }
} else {
  // ...
}
{code}
I think we should add getUserGroupMappingPlacementRule in any case; otherwise 
{{yarn.scheduler.capacity.queue-mappings}} will be invalidated when 
{{YarnConfiguration.QUEUE_PLACEMENT_RULES}} has a non-empty value, which is a 
behavior change.
 Instead of this, I propose (see the sketch below):
 a. Check that each added rule class is unique (no two rules with the same fully 
qualified class name can be added). 
 b. If UserGroupMappingPlacementRule is absent, add it to the tail of the list 
(and print a log message).
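
A minimal sketch of what (b) could look like (illustrative names only, not actual CapacityScheduler code):
{code:java}
// Rough sketch of proposal (b): ensure UserGroupMappingPlacementRule is always present.
boolean hasUserGroupRule = placementRules.stream()
    .anyMatch(r -> r instanceof UserGroupMappingPlacementRule);
if (!hasUserGroupRule) {
  PlacementRule ugRule = getUserGroupMappingPlacementRule();
  if (ugRule != null) {
    LOG.info("UserGroupMappingPlacementRule not configured explicitly; appending it");
    placementRules.add(ugRule);
  }
}
{code}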


was (Author: leftnoteasy):
1) updatePlacementRules, why add readLock? It is already under writelock 
protection.

2) why check this: 
{code}
  if (null != rule.getQueueMappingLists()) {
placementRules.add(rule);
  }
{code} 
Instead of 
{code}
  if (null != rule) {
placementRules.add(rule);
  }
{code}
And I'm not sure why List becomes a public interface, I 
think {{getPlacementForApp}} should be enough for mapping, no? 

3) IIUC, CapacitySchedulerConfiguration#getQueueMappingEntity is created for 
code reuse, but it's better not to place under CapacitySchedulerConfiguration. 
Maybe place it to a class like QueuePlacementRuleUtils under 
{{...resourcemanager.placement}}? 

4) Similarily, QueueMappingEntity is not a {{must-to-have}} field for a general 
PlacementRule, suggest to move it to a separate class instead of as a inner 
class of PlacementRule.

5)
{code}
public UserGroupMappingPlacementRule(){}
{code}
Is not necessary. 

6) {{validateParentQueue}} rename it to something like 
{{validateQueueMappingUnderParentQueue}} and place it under 
{{QueuePlacementRuleUtils}}?

7) AppNameMappingPlacementRule:
- Add a more detailed explanations about purpose, configs of this class to 
Javadocs? 
- Why {{getPlacementForApp}} returns empty?
- Basic test cases for this class?
- {{QUEUE_MAPPING_NAME}} maybe set it to {{app-name}} for short?

8) 
{{TestCapacitySchedulerAutoCreatedQueueBase#testUpdatePlacementRulesFactory}} 
should not belong to the {{TestCapacitySchedulerAutoCreatedQueueBase}}, the 
meaning of {{...Base}} is it contains non-static util functions and doesn't 
have have test cases. If you really want to reuse some of the methods, I 
suggest to extend the class and rename it to 
{{TestCapacitySchedulerQueueMappingBase}}. These test cases should be added to 
a class like 

[jira] [Commented] (YARN-8016) Provide a common interface for queues mapping rules

2018-03-12 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396474#comment-16396474
 ] 

Wangda Tan commented on YARN-8016:
--

1) updatePlacementRules, why add readLock? It is already under writelock 
protection.

2) why check this: 
{code}
  if (null != rule.getQueueMappingLists()) {
placementRules.add(rule);
  }
{code} 
Instead of 
{code}
  if (null != rule) {
placementRules.add(rule);
  }
{code}
And I'm not sure why List becomes a public interface, I 
think {{getPlacementForApp}} should be enough for mapping, no? 

3) IIUC, CapacitySchedulerConfiguration#getQueueMappingEntity is created for 
code reuse, but it's better not to place under CapacitySchedulerConfiguration. 
Maybe place it to a class like QueuePlacementRuleUtils under 
{{...resourcemanager.placement}}? 

4) Similarly, QueueMappingEntity is not a {{must-have}} field for a general 
PlacementRule; I suggest moving it to a separate class instead of keeping it as an 
inner class of PlacementRule.

5)
{code}
public UserGroupMappingPlacementRule(){}
{code}
Is not necessary. 

6) {{validateParentQueue}} rename it to something like 
{{validateQueueMappingUnderParentQueue}} and place it under 
{{QueuePlacementRuleUtils}}?

7) AppNameMappingPlacementRule:
- Add a more detailed explanations about purpose, configs of this class to 
Javadocs? 
- Why {{getPlacementForApp}} returns empty?
- Basic test cases for this class?
- {{QUEUE_MAPPING_NAME}} maybe set it to {{app-name}} for short?

8) 
{{TestCapacitySchedulerAutoCreatedQueueBase#testUpdatePlacementRulesFactory}} 
should not belong to {{TestCapacitySchedulerAutoCreatedQueueBase}}; the meaning 
of {{...Base}} is that it contains non-static util functions and doesn't have 
test cases. If you really want to reuse some of the methods, I suggest extending 
the class and renaming it to {{TestCapacitySchedulerQueueMappingBase}}. These 
test cases should be added to a class like 
{{TestCapacitySchedulerQueueMappingFactory}}. The following test cases can be 
considered: 
- Set up a chain of placement rules (I don't see any test case that includes > 1 
placement rule). 

9) {{updatePlacementRules}}
- Add a {{app-name}} rule to switch .. case?
- For behavior of following statement:
{code}
if (placementRuleStrs.isEmpty()) {
  PlacementRule ugRule = getUserGroupMappingPlacementRule();
  if (null != ugRule) {
placementRules.add(ugRule);
  }
} else {
  // ...
}
{code} 
I think we should add getUserGroupMappingPlacementRule in any case; otherwise 
{{yarn.scheduler.capacity.queue-mappings}} will be invalidated when 
{{YarnConfiguration.QUEUE_PLACEMENT_RULES}} has a non-empty value, which is a 
behavior change.
Instead of this, I propose:
a. Check that each added rule class is unique (no two rules with the same fully 
qualified class name can be added). 
b. If UserGroupMappingPlacementRule is absent, add it to the tail of the list 
(and print a log message).

> Provide a common interface for queues mapping rules
> ---
>
> Key: YARN-8016
> URL: https://issues.apache.org/jira/browse/YARN-8016
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Zian Chen
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-8016.001.patch
>
>
> Currently in Capacity Scheduler, queue mappings are hard-coded to 
> UserGroupMappingPlacementRule.
> We need to expose a general framework to dynamically create various 
> queue-mapping placement rules by reading the queue mapping rule property from 
> capacity-scheduler.xml.






[jira] [Commented] (YARN-5627) [Atsv2] Support streaming reader API to fetch entities

2018-03-12 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396371#comment-16396371
 ] 

Vrushali C commented on YARN-5627:
--

Do you mean pagination? 

Pagination could mean returning max N entities at a time, or max N bytes of 
entities at a time. Also, I can't recollect whether we already do, but we should 
support filters on entities. For example, not all fields in an entity may be 
relevant to a user; we should allow retrieving only some fields. 


> [Atsv2] Support streaming reader API to fetch entities
> --
>
> Key: YARN-5627
> URL: https://issues.apache.org/jira/browse/YARN-5627
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
>
> There is no limit on the size of a TimelineEntity object; it can vary from 
> KBs to MBs. While reading a list of entities, the TimelineReader could 
> potentially run into an OOM situation depending on the entity size and limit. 
> The proposal is to support a streaming API to read the entity list. 






[jira] [Commented] (YARN-3409) Support Node Attribute functionality

2018-03-12 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396339#comment-16396339
 ] 

Chris Douglas commented on YARN-3409:
-

bq. Me and Sunil G tried to delete it but permissions were not there so were 
trying to get that done with Jian he and Others and in the mean while you 
helped us out. Delete of a branch could not be done by all ?
I don't/shouldn't have any special privileges. Probably a change to the set of 
protected branches between when you tried and today.

> Support Node Attribute functionality
> 
>
> Key: YARN-3409
> URL: https://issues.apache.org/jira/browse/YARN-3409
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: api, client, RM
>Reporter: Wangda Tan
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: 3409-apiChanges_v2.pdf (4).pdf, 
> Constraint-Node-Labels-Requirements-Design-doc_v1.pdf, YARN-3409.WIP.001.patch
>
>
> Specifying only one label for each node (in other words, partitioning a cluster) is 
> a way to determine how the resources of a particular set of nodes can be shared by a 
> group of entities (like teams, departments, etc.). Partitions of a cluster 
> have the following characteristics:
> - The cluster is divided into several disjoint sub-clusters.
> - ACL/priority can apply to a partition (only the marketing team has 
> priority to use the partition).
> - Percentages of capacity can apply to a partition (the marketing team has 40% 
> minimum capacity and the dev team has 60% minimum capacity of the partition).
> Attributes are orthogonal to partitions; they describe features of a node's 
> hardware/software just for affinity. Some examples of attributes:
> - glibc version
> - JDK version
> - Type of CPU (x86_64/i686)
> - Type of OS (windows, linux, etc.)
> With this, an application can ask for resources satisfying (glibc.version >= 
> 2.20 && JDK.version >= 8u20 && x86_64).






[jira] [Commented] (YARN-8027) Setting hostname of docker container breaks for --net=host in docker 1.13

2018-03-12 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396334#comment-16396334
 ] 

Billie Rinaldi commented on YARN-8027:
--

We should look into whether it is a bug in that version of Docker. I see a 
couple of tickets regarding adding support for setting hostname when net=host, 
which would indicate that is a valid setting. I have not dug far enough to 
determine which versions are supposed to support it.

> Setting hostname of docker container breaks for --net=host in docker 1.13
> -
>
> Key: YARN-8027
> URL: https://issues.apache.org/jira/browse/YARN-8027
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
>
> In DockerLinuxContainerRuntime:launchContainer, we are adding the --hostname 
> argument to the docker run command to set the hostname in the container to 
> something like:  ctr-e84-1520889172376-0001-01-01.
> This does not work when combined with the --net=host command line option in 
> Docker 1.13.1: it causes multiple failures when clients try to resolve the 
> hostname and fail.
> We hadn't seen this before because we were using Docker 1.12.6, which seems 
> to ignore --hostname when you are using --net=host.






[jira] [Commented] (YARN-8010) add config in FederationRMFailoverProxy to not bypass facade cache when failing over

2018-03-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396292#comment-16396292
 ] 

genericqa commented on YARN-8010:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
59s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
24s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
59s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
51s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
32s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
21s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 29m 
11s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}132m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | 

[jira] [Commented] (YARN-3409) Support Node Attribute functionality

2018-03-12 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396280#comment-16396280
 ] 

Naganarasimha G R commented on YARN-3409:
-

Thanks [~chris.douglas]. It was not completely an accidental push; the branch was 
improperly created, so we created a proper branch, but we had been unable to delete 
the old one for a long while. [~sunilg] and I tried to delete it, but we did not have 
the permissions, so we were trying to get that done with Jian He and others, and in 
the meanwhile you helped us out. Can't a branch be deleted by everyone?

> Support Node Attribute functionality
> 
>
> Key: YARN-3409
> URL: https://issues.apache.org/jira/browse/YARN-3409
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: api, client, RM
>Reporter: Wangda Tan
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: 3409-apiChanges_v2.pdf (4).pdf, 
> Constraint-Node-Labels-Requirements-Design-doc_v1.pdf, YARN-3409.WIP.001.patch
>
>
> Specifying only one label for each node (in other words, partitioning a cluster) is 
> a way to determine how the resources of a particular set of nodes can be shared by a 
> group of entities (like teams, departments, etc.). Partitions of a cluster 
> have the following characteristics:
> - The cluster is divided into several disjoint sub-clusters.
> - ACL/priority can apply to a partition (only the marketing team has 
> priority to use the partition).
> - Percentages of capacity can apply to a partition (the marketing team has 40% 
> minimum capacity and the dev team has 60% minimum capacity of the partition).
> Attributes are orthogonal to partitions; they describe features of a node's 
> hardware/software just for affinity. Some examples of attributes:
> - glibc version
> - JDK version
> - Type of CPU (x86_64/i686)
> - Type of OS (windows, linux, etc.)
> With this, an application can ask for resources satisfying (glibc.version >= 
> 2.20 && JDK.version >= 8u20 && x86_64).






[jira] [Commented] (YARN-8024) LOG in class MaxRunningAppsEnforcer is initialized with a faulty class FairScheduler

2018-03-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396262#comment-16396262
 ] 

Hudson commented on YARN-8024:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13817 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13817/])
YARN-8024. LOG in class MaxRunningAppsEnforcer is initialized with a (yufei: 
rev ff31d8aefa0490ccf1d44fe8a738fdc002aa712c)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/MaxRunningAppsEnforcer.java


> LOG in class MaxRunningAppsEnforcer is initialized with a faulty class 
> FairScheduler 
> -
>
> Key: YARN-8024
> URL: https://issues.apache.org/jira/browse/YARN-8024
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Sen Zhao
>Priority: Major
>  Labels: newbie++
> Fix For: 3.2.0
>
> Attachments: YARN-8024.001.patch
>
>
> It should be initialized with class MaxRunningAppsEnforcer. 
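
For context, the fix described above amounts to a one-line change along these lines (a sketch; whether the class uses commons-logging or slf4j, and the exact field modifiers, are assumptions here):
{code:java}
// Before (the bug): the logger was bound to the wrong class.
// private static final Log LOG = LogFactory.getLog(FairScheduler.class);

// After: bind the logger to the class that owns it.
private static final Log LOG = LogFactory.getLog(MaxRunningAppsEnforcer.class);
{code}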






[jira] [Updated] (YARN-8024) LOG in class MaxRunningAppsEnforcer is initialized with a faulty class FairScheduler

2018-03-12 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-8024:
---
Fix Version/s: (was: 3.1.0)
   3.2.0

> LOG in class MaxRunningAppsEnforcer is initialized with a faulty class 
> FairScheduler 
> -
>
> Key: YARN-8024
> URL: https://issues.apache.org/jira/browse/YARN-8024
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Sen Zhao
>Priority: Major
>  Labels: newbie++
> Fix For: 3.2.0
>
> Attachments: YARN-8024.001.patch
>
>
> It should be initialized with class MaxRunningAppsEnforcer. 






[jira] [Commented] (YARN-8024) LOG in class MaxRunningAppsEnforcer is initialized with a faulty class FairScheduler

2018-03-12 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396247#comment-16396247
 ] 

Yufei Gu commented on YARN-8024:


+1. Committed to trunk. Thanks [~Sen Zhao] for working on this.

> LOG in class MaxRunningAppsEnforcer is initialized with a faulty class 
> FairScheduler 
> -
>
> Key: YARN-8024
> URL: https://issues.apache.org/jira/browse/YARN-8024
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Sen Zhao
>Priority: Major
>  Labels: newbie++
> Fix For: 3.1.0
>
> Attachments: YARN-8024.001.patch
>
>
> It should be initialized with class MaxRunningAppsEnforcer. 






[jira] [Commented] (YARN-5627) [Atsv2] Support streaming reader API to fetch entities

2018-03-12 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396234#comment-16396234
 ] 

Haibo Chen commented on YARN-5627:
--

This came up in the next-gen JHS with ATSv2 exploration that I was doing. It is 
necessary to support extremely large jobs (500,000 tasks and above).

> [Atsv2] Support streaming reader API to fetch entities
> --
>
> Key: YARN-5627
> URL: https://issues.apache.org/jira/browse/YARN-5627
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
>
> There is no limit on the size of a TimelineEntity object; it can vary from 
> KBs to MBs. While reading a list of entities, the TimelineReader could 
> potentially run into an OOM situation depending on the entity size and limit. 
> The proposal is to support a streaming API to read the entity list. 






[jira] [Commented] (YARN-6058) Support for listing all applications i.e /apps

2018-03-12 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396232#comment-16396232
 ] 

Haibo Chen commented on YARN-6058:
--

This showed up while I was exploring replacing the JHS with python scripts (a 
YARN UI2-like thing) + ATSv2. There is currently no good story in ATSv2 for 
client retrieval of all jobs (flow -> flow run -> apps does not work without a 
user id).

I think in MR's use case, a limit param on the query would be good enough. The 
application id, in YARN's case, contains the cluster timestamp plus an app 
counter that increments with each submission. We may be able to leverage that 
information to support time-range queries as well.
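As a concrete illustration of the application id structure mentioned above (a small aside using the public ApplicationId API, not part of any patch here):

{code}
// A YARN application id embeds the RM cluster start timestamp and a counter
// that increments per submitted application, so a reader could prune
// candidates for a time-range query before touching the backend.
ApplicationId appId =
    ApplicationId.fromString("application_1520032931921_0001");
long clusterStartMillis = appId.getClusterTimestamp(); // 1520032931921
int appSequenceNumber = appId.getId();                 // 1
{code}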

> Support for listing all applications i.e /apps
> --
>
> Key: YARN-6058
> URL: https://issues.apache.org/jira/browse/YARN-6058
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
>
> The primary use case for /apps is that many execution engines run on top of 
> YARN, for example Tez and MR. These engines have their own UIs which list the 
> specific types of entities they publish, e.g. DAG entities. 
> However, these UIs are not aware of the userName, flowName, or applicationId 
> under which their applications were submitted.
> Currently, without knowing the user, flowName, and applicationId, a caller 
> cannot retrieve any entities. 
> By supporting /apps with filters, a user can list the applications of a given 
> ApplicationType. Those applications can then be used to retrieve 
> engine-specific entities such as DAGs. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7999) Docker launch fails when user private filecache directory is missing

2018-03-12 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396223#comment-16396223
 ] 

Eric Yang edited comment on YARN-7999 at 3/12/18 11:12 PM:
---

[~jlowe] I am getting this error:

{code}
Exception message: Invalid docker rw mount 
'/usr/local/hadoop-3.2.0-SNAPSHOT/logs/userlogs/application_1520895272530_0001/container_1520895272530_0001_01_05:/usr/local/hadoop-3.2.0-SNAPSHOT/logs/userlogs/application_1520895272530_0001/container_1520895272530_0001_01_05',
 
realpath=/usr/local/hadoop-3.2.0-SNAPSHOT/logs/userlogs/application_1520895272530_0001/container_1520895272530_0001_01_05
Error constructing docker command, docker error code=14, error message='Invalid 
docker read-write mount'

Shell output: main : command provided 4
main : run as user is hbase
main : requested yarn user is hbase
Creating script paths...
Creating local dirs...


[2018-03-12 22:57:31.027]Diagnostic message from attempt 0 : [2018-03-12 
22:57:31.027]
[2018-03-12 22:57:31.027]Container exited with a non-zero exit code 29. 
{code}

The container logging directory is not available when docker tries to bind 
mount it.  I also found something interesting: if docker on one of the cluster 
nodes is not working properly, the container is first attempted on that faulty 
node, and the logging directory is initialized there.  When the same attempt is 
then started on other nodes, the logging directory is not initialized on those 
nodes, which leads to the failure.


was (Author: eyang):
[~jlowe] I am getting this error:

{code}
Exception message: Invalid docker rw mount 
'/usr/local/hadoop-3.2.0-SNAPSHOT/logs/userlogs/application_1520895272530_0001/container_1520895272530_0001_01_05:/usr/local/hadoop-3.2.0-SNAPSHOT/logs/userlogs/application_1520895272530_0001/container_1520895272530_0001_01_05',
 
realpath=/usr/local/hadoop-3.2.0-SNAPSHOT/logs/userlogs/application_1520895272530_0001/container_1520895272530_0001_01_05
Error constructing docker command, docker error code=14, error message='Invalid 
docker read-write mount'

Shell output: main : command provided 4
main : run as user is hbase
main : requested yarn user is hbase
Creating script paths...
Creating local dirs...


[2018-03-12 22:57:31.027]Diagnostic message from attempt 0 : [2018-03-12 
22:57:31.027]
[2018-03-12 22:57:31.027]Container exited with a non-zero exit code 29. 
{code}

The container logging directory is not available when docker tries to bind 
mount the logging directory.  I also found something interesting that if one of 
the cluster node's docker is not working properly.  The container attempt on 
the faulty node, and initialized logging directory on the faulty node.  When 
the same attempt is started on other nodes, it does not initialize logging 
directory on other node which leads to the failure.

> Docker launch fails when user private filecache directory is missing
> 
>
> Key: YARN-7999
> URL: https://issues.apache.org/jira/browse/YARN-7999
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Jason Lowe
>Priority: Major
> Attachments: YARN-7999.001.patch, YARN-7999.002.patch
>
>
> Docker container is failing to launch in trunk.  The root cause is:
> {code}
> [COMPINSTANCE sleeper-1 : container_1520032931921_0001_01_20]: 
> [2018-03-02 23:26:09.196]Exception from container-launch.
> Container id: container_1520032931921_0001_01_20
> Exit code: 29
> Exception message: image: hadoop/centos:latest is trusted in hadoop registry.
> Could not determine real path of mount 
> '/tmp/hadoop-yarn/nm-local-dir/usercache/hbase/filecache'
> Could not determine real path of mount 
> '/tmp/hadoop-yarn/nm-local-dir/usercache/hbase/filecache'
> Invalid docker mount 
> '/tmp/hadoop-yarn/nm-local-dir/usercache/hbase/filecache:/tmp/hadoop-yarn/nm-local-dir/usercache/hbase/filecache',
>  realpath=/tmp/hadoop-yarn/nm-local-dir/usercache/hbase/filecache
> Error constructing docker command, docker error code=12, error 
> message='Invalid docker mount'
> Shell output: main : command provided 4
> main : run as user is hbase
> main : requested yarn user is hbase
> Creating script paths...
> Creating local dirs...
> [2018-03-02 23:26:09.240]Diagnostic message from attempt 0 : [2018-03-02 
> 23:26:09.240]
> [2018-03-02 23:26:09.240]Container exited with a non-zero exit code 29.
> [2018-03-02 23:26:39.278]Could not find 
> nmPrivate/application_1520032931921_0001/container_1520032931921_0001_01_20//container_1520032931921_0001_01_20.pid
>  in any of the directories
> [COMPONENT sleeper]: Failed 11 times, exceeded the limit - 10. Shutting down 
> now...
> {code}
> The filecache cannot be mounted because it doesn't exist.

[jira] [Commented] (YARN-7999) Docker launch fails when user private filecache directory is missing

2018-03-12 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396223#comment-16396223
 ] 

Eric Yang commented on YARN-7999:
-

[~jlowe] I am getting this error:

{code}
Exception message: Invalid docker rw mount 
'/usr/local/hadoop-3.2.0-SNAPSHOT/logs/userlogs/application_1520895272530_0001/container_1520895272530_0001_01_05:/usr/local/hadoop-3.2.0-SNAPSHOT/logs/userlogs/application_1520895272530_0001/container_1520895272530_0001_01_05',
 
realpath=/usr/local/hadoop-3.2.0-SNAPSHOT/logs/userlogs/application_1520895272530_0001/container_1520895272530_0001_01_05
Error constructing docker command, docker error code=14, error message='Invalid 
docker read-write mount'

Shell output: main : command provided 4
main : run as user is hbase
main : requested yarn user is hbase
Creating script paths...
Creating local dirs...


[2018-03-12 22:57:31.027]Diagnostic message from attempt 0 : [2018-03-12 
22:57:31.027]
[2018-03-12 22:57:31.027]Container exited with a non-zero exit code 29. 
{code}

The container logging directory is not available when docker tries to bind 
mount it.  I also found something interesting: if docker on one of the cluster 
nodes is not working properly, the container is first attempted on that faulty 
node, and the logging directory is initialized there.  When the same attempt is 
then started on other nodes, the logging directory is not initialized on those 
nodes, which leads to the failure.
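A minimal sketch of the kind of guard this points at (purely illustrative; the directory layout and names are assumptions, and this is not the attached patch):

{code}
// Illustrative only: make sure the per-container log directory exists on
// this node before it is handed to docker as a read-write bind mount;
// otherwise mount validation fails with docker error code 14 as shown above.
static void ensureContainerLogDir(java.nio.file.Path nmLogDir,
    String appId, String containerId) throws java.io.IOException {
  java.nio.file.Path logDir = nmLogDir.resolve(appId).resolve(containerId);
  if (java.nio.file.Files.notExists(logDir)) {
    java.nio.file.Files.createDirectories(logDir);
  }
}
{code}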

> Docker launch fails when user private filecache directory is missing
> 
>
> Key: YARN-7999
> URL: https://issues.apache.org/jira/browse/YARN-7999
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Jason Lowe
>Priority: Major
> Attachments: YARN-7999.001.patch, YARN-7999.002.patch
>
>
> Docker container is failing to launch in trunk.  The root cause is:
> {code}
> [COMPINSTANCE sleeper-1 : container_1520032931921_0001_01_20]: 
> [2018-03-02 23:26:09.196]Exception from container-launch.
> Container id: container_1520032931921_0001_01_20
> Exit code: 29
> Exception message: image: hadoop/centos:latest is trusted in hadoop registry.
> Could not determine real path of mount 
> '/tmp/hadoop-yarn/nm-local-dir/usercache/hbase/filecache'
> Could not determine real path of mount 
> '/tmp/hadoop-yarn/nm-local-dir/usercache/hbase/filecache'
> Invalid docker mount 
> '/tmp/hadoop-yarn/nm-local-dir/usercache/hbase/filecache:/tmp/hadoop-yarn/nm-local-dir/usercache/hbase/filecache',
>  realpath=/tmp/hadoop-yarn/nm-local-dir/usercache/hbase/filecache
> Error constructing docker command, docker error code=12, error 
> message='Invalid docker mount'
> Shell output: main : command provided 4
> main : run as user is hbase
> main : requested yarn user is hbase
> Creating script paths...
> Creating local dirs...
> [2018-03-02 23:26:09.240]Diagnostic message from attempt 0 : [2018-03-02 
> 23:26:09.240]
> [2018-03-02 23:26:09.240]Container exited with a non-zero exit code 29.
> [2018-03-02 23:26:39.278]Could not find 
> nmPrivate/application_1520032931921_0001/container_1520032931921_0001_01_20//container_1520032931921_0001_01_20.pid
>  in any of the directories
> [COMPONENT sleeper]: Failed 11 times, exceeded the limit - 10. Shutting down 
> now...
> {code}
> The filecache cannot be mounted because it doesn't exist.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7657) Queue Mapping could provide options to provide 'user' specific auto-created queues under a specified group parent queue

2018-03-12 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396220#comment-16396220
 ] 

Suma Shivaprasad commented on YARN-7657:


[~leftnoteasy] Thanks for the review. Currently we do not support 
g:marketing-group:%user. Supporting this case seems valid, as you mentioned: 
queue mapping for users from specific groups, instead of just u:%user:%user. I 
have added this in the latest patch, along with end-to-end tests covering 
normal queue mapping as well as the auto-created queue behaviour under a parent 
queue.

> Queue Mapping could provide options to provide 'user' specific auto-created 
> queues under a specified group parent queue
> ---
>
> Key: YARN-7657
> URL: https://issues.apache.org/jira/browse/YARN-7657
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7657.1.patch, YARN-7657.2.patch, YARN-7657.3.patch, 
> YARN-7657.4.patch
>
>
> Current Queue-Mapping only provides %user as an option for 'user'-specific 
> queues, as u:%user:%user. We could also support %user combined with a group, 
> as 'g:marketing-group:marketing.%user', so that user-specific queues are 
> automatically created under a group queue in this case.
> cc [~leftnoteasy]
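For readers following along, the mapping forms under discussion would land in the existing yarn.scheduler.capacity.queue-mappings property; a small illustrative snippet (the g: form with %user is the extension proposed in this JIRA, not something that works today):

{code}
// conf is a Hadoop Configuration / CapacitySchedulerConfiguration instance.
// Existing form: map each user to a leaf queue named after that user.
conf.set("yarn.scheduler.capacity.queue-mappings", "u:%user:%user");

// Proposed form (illustrative): users in marketing-group get auto-created
// per-user queues under the "marketing" parent queue.
conf.set("yarn.scheduler.capacity.queue-mappings",
    "g:marketing-group:marketing.%user");
{code}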



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7657) Queue Mapping could provide options to provide 'user' specific auto-created queues under a specified group parent queue

2018-03-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396217#comment-16396217
 ] 

genericqa commented on YARN-7657:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 34s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 22 new + 309 unchanged - 0 fixed = 331 total (was 309) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 54s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}117m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestResourceTrackerService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | YARN-7657 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914119/YARN-7657.4.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b6eeceb87825 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ddb67ca |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19962/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 

[jira] [Commented] (YARN-8027) Setting hostname of docker container breaks for --net=host in docker 1.13

2018-03-12 Thread Jim Brennan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396115#comment-16396115
 ] 

Jim Brennan commented on YARN-8027:
---

This code was added by [YARN-6804].

[~billie.rinaldi], [~jianh], I don't think we should be setting --hostname when 
--net=host.  Do you agree?


> Setting hostname of docker container breaks for --net=host in docker 1.13
> -
>
> Key: YARN-8027
> URL: https://issues.apache.org/jira/browse/YARN-8027
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
>
> In DockerLinuxContainerRuntime:launchContainer, we are adding the --hostname 
> argument to the docker run command to set the hostname in the container to 
> something like:  ctr-e84-1520889172376-0001-01-01.
> This does not work when combined with the --net=host command line option in 
> Docker 1.13.1.  It causes multiple failures because clients that try to 
> resolve that hostname fail.
> We haven't seen this before because we were using docker 1.12.6, which seems 
> to ignore --hostname when you are using --net=host.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8027) Setting hostname of docker container breaks for --net=host in docker 1.13

2018-03-12 Thread Jim Brennan (JIRA)
Jim Brennan created YARN-8027:
-

 Summary: Setting hostname of docker container breaks for 
--net=host in docker 1.13
 Key: YARN-8027
 URL: https://issues.apache.org/jira/browse/YARN-8027
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn
Affects Versions: 3.0.0
Reporter: Jim Brennan
Assignee: Jim Brennan


In DockerLinuxContainerRuntime:launchContainer, we are adding the --hostname 
argument to the docker run command to set the hostname in the container to 
something like:  ctr-e84-1520889172376-0001-01-01.

This does not work when combined with the --net=host command line option in 
Docker 1.13.1.  It causes multiple failures because clients that try to resolve 
that hostname fail.

We haven't seen this before because we were using docker 1.12.6, which seems to 
ignore --hostname when you are using --net=host.
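A sketch of the behaviour this suggests (illustrative only; the method and variable names are assumptions, not the eventual fix):

{code}
// Illustrative only: skip --hostname when the container runs on the host
// network, since docker 1.13 no longer ignores the combination and clients
// then fail to resolve the generated container hostname.
if (!"host".equals(network)) {
  runCommand.setHostname(containerHostname);  // e.g. ctr-e84-...-01-000001
}
{code}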



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8010) add config in FederationRMFailoverProxy to not bypass facade cache when failing over

2018-03-12 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-8010:
---
Attachment: YARN-8010.v2.patch

> add config in FederationRMFailoverProxy to not bypass facade cache when 
> failing over
> 
>
> Key: YARN-8010
> URL: https://issues.apache.org/jira/browse/YARN-8010
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-8010.v1.patch, YARN-8010.v1.patch, 
> YARN-8010.v2.patch
>
>
> Today when the YarnRM is failing over, the FederationRMFailoverProxy running 
> in AMRMProxy performs the failover, tries to get the latest subcluster info 
> from FederationStateStore, and then retries connecting to the latest YarnRM 
> master. When calling getSubCluster() on FederationStateStoreFacade, it 
> bypasses the cache with a flush flag. When the YarnRM is failing over, every 
> AM heartbeat thread creates a different thread inside FederationInterceptor, 
> each of which keeps performing failover several times. This leads to a big 
> spike of getSubCluster calls to FederationStateStore. 
> Depending on the cluster setup (e.g. putting a VIP before all YarnRMs), a 
> YarnRM master/slave change might not result in an RM address change. In other 
> cases, a small delay in getting the latest subcluster information may be 
> acceptable. This patch thus adds a config option, so that it is possible to 
> ask the FederationRMFailoverProxy not to flush the cache when calling 
> getSubCluster(). 
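A rough sketch of what such an option could look like (the config key and exact call shape are assumptions for illustration, not the attached patch):

{code}
// Illustrative only: make the cache-bypassing flush on failover configurable
// so a burst of failovers does not turn into a burst of state store reads.
boolean flushCacheOnFailover = conf.getBoolean(
    "yarn.federation.failover.flush-subcluster-cache", true); // hypothetical key
SubClusterInfo target =
    facade.getSubCluster(subClusterId, flushCacheOnFailover); // flush flag per the description
{code}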



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4606) CapacityScheduler: applications could get starved because computation of #activeUsers considers pending apps

2018-03-12 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396094#comment-16396094
 ] 

Eric Payne commented on YARN-4606:
--

bq. resources are not assigned to the second app when they should be
I'm unsure about the appropriate way to fix this. My original thinking was that 
we could do something similar to the following:
{code:title=AppSchedulingInfo#updatePendingResources}
if (Not Waiting For AM Container
    || (Queue Used AM Resources < Queue Max AM Resources)) {
  abstractUsersManager.activateApplication(user, applicationId);
}
{code}

However, I'm not sure of the best way to get the values for a queue's {{Used 
AM Resources}} and {{Max AM Resources}} from this context. Those may be 
capacity-scheduler-specific values.

> CapacityScheduler: applications could get starved because computation of 
> #activeUsers considers pending apps 
> -
>
> Key: YARN-4606
> URL: https://issues.apache.org/jira/browse/YARN-4606
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler
>Affects Versions: 2.8.0, 2.7.1
>Reporter: Karam Singh
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-4606.1.poc.patch, YARN-4606.POC.patch
>
>
> Currently, if all applications belonging to the same user in a LeafQueue are 
> pending (caused by max-am-percent, etc.), ActiveUsersManager still considers 
> that user an active user. This can lead to starvation of active applications, 
> for example:
> - App1 (belongs to user1)/app2 (belongs to user2) are active; app3 (belongs 
> to user3)/app4 (belongs to user4) are pending
> - ActiveUsersManager returns #active-users=4
> - However, only two users (user1/user2) are able to allocate new resources, 
> so the computed user-limit-resource can be lower than expected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7657) Queue Mapping could provide options to provide 'user' specific auto-created queues under a specified group parent queue

2018-03-12 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395895#comment-16395895
 ] 

Suma Shivaprasad commented on YARN-7657:


[~leftnoteasy] Thanks for the review. Currently we do not support 
g:marketing-group:%user. Supporting this case seems valid, as you mentioned: 
queue mapping for users from specific groups, instead of just u:%user:%user. I 
have added this in the latest patch, along with end-to-end tests covering 
normal queue mapping as well as the auto-created queue behaviour under a parent 
queue.

> Queue Mapping could provide options to provide 'user' specific auto-created 
> queues under a specified group parent queue
> ---
>
> Key: YARN-7657
> URL: https://issues.apache.org/jira/browse/YARN-7657
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7657.1.patch, YARN-7657.2.patch, YARN-7657.3.patch, 
> YARN-7657.4.patch
>
>
> Current Queue-Mapping only provides %user as an option for 'user'-specific 
> queues, as u:%user:%user. We could also support %user combined with a group, 
> as 'g:marketing-group:marketing.%user', so that user-specific queues are 
> automatically created under a group queue in this case.
> cc [~leftnoteasy]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8002) Support NOT_SELF and ALL namespace types for allocation tag

2018-03-12 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395878#comment-16395878
 ] 

Wangda Tan edited comment on YARN-8002 at 3/12/18 9:00 PM:
---

Thanks [~cheersyang],
bq. The approach in this patch adds the flexibility to make it easy to get 
cardinality on a set of application IDs regardless of what namespace it is. The 
extra computation will only happen when we need to compute PC against app-tag 
(or other namespaces that eval to a set of app IDs), not every time. And with 
the extra cost you mentioned (which I don't think is very bad). What you 
suggested saves this cost, but it will be hard to support not-self, app-ids (a 
set of IDs) or possibly app-regex etc. I'd like to hold off re-implementing 
this until everyone agrees we are not going to support those.
I completely understand the benefits here; I think we all agree that we should 
only aggregate results when necessary (for app-ids, not-self, etc.). I'm fine 
with keeping the logic here, but I just want to make sure that we don't do any 
unnecessary aggregation for the single-app case.
More specifically, the following call needs to be avoided:
{code}
TargetApplications ta = new TargetApplications(currentAppId,
    atm.getAllApplicationIds());
namespace.evaluate(ta);
{code}

For {{AllocationTags}}, I'm fine with keeping the name, but I suggest moving it 
to the {{scheduler.constraint}} package. I don't think it belongs in the API 
package.

{{AllocationTags#fromScope}}: First, I don't suggest using the "scope" name 
since we already have it inside {{TargetConstraint}}. I suggest giving this 
explicit names, such as 
{code}
createSingleAppAllocationTags(ApplicationId, Set tags)
createGlobalAllocationTags(Set tags)
// In the future, we can add
createAppsAllocationTags(...) 
createAppLabelAllocationTags(...)
{code}
This makes the code more readable, with fewer implicit rules in the parameters.

Similarly, for {{AllocationTagNamespace}}: I saw it is used by some api 
packages, but from what I can see, the only code usage is toString. We should 
move it to {{scheduler.constraint}} as well.
Instead of doing this, I would prefer to add a simple Java enum to the api 
package (such as AllocationTagTypes or AllocationTagNamespaceTypes) and use it 
in the AllocationTagNamespaces class.


was (Author: leftnoteasy):
Thanks [~cheersyang],
bq. The approach in this patch adds the flexibility to let it easy to get 
cardinality on a set of application IDs regardless what namespace it is. The 
extra computation will only happen when we need to compute PC against app-tag 
(or other namespace that evals to a set of app IDs), not every time. And with 
the extra cost you mentioned (which I don't think it is very bad). What you 
suggested saves this cost, but it will be hard to support not-self, app-ids(a 
set of IDs) or possibly app-regex etc. I'd like to hold on re-implementing this 
until every one agrees we are not going to support those.
I completely understand benefits here, I think we all agree that we should only 
aggregate results when necessary (for app-ids, not-self, etc.). I'm fine with 
keep the logics here, but here I just want to make sure that we don't do any 
unnecessary aggregations for single.
More specifically, following call need to be avoided:
{code}
TargetApplications ta = new TargetApplications(currentAppId,
atm.getAllApplicationIds());
namespace.evaluate(ta);
{code}

For {{AllocationTags}}, I'm fine with keep the name, but I suggest to move it 
to {{scheduler.constraint}} package. I think it should not belong to API 
package.

{{AllocationTags#fromScope}}: First I don't suggest to use the "scope" name 
since we already have it inside {{TargetConstraint}}. I suggest to make 
explicit names to this. Such as 
{code}
createSingleAppAllocationTags(ApplicationId, Set tags)
createGlobalAllocationTags(Set tags).
In the future, we can add
createAppsAllocationTags(...) 
createAppLabelAllocationTags(...)
{code}
This makes code more readable and less implicit rules in configs.

Similarly,
For {{AllocationTagNamespace}}, I saw it is used by some api packages, from 
what I can see, the only code usage is toString. We should move it to 
{{scheduler.constraint}} as well.
Instead of doing this, I would prefer to add a simple Java enum to api package 
(such as AllocationTagTypes or AllocationTagNamespaceTypes), and use it in the 
AllocationTagNamespaces class.

> Support NOT_SELF and ALL namespace types for allocation tag
> ---
>
> Key: YARN-8002
> URL: https://issues.apache.org/jira/browse/YARN-8002
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-8002.001.patch, 

[jira] [Updated] (YARN-7657) Queue Mapping could provide options to provide 'user' specific auto-created queues under a specified group parent queue

2018-03-12 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7657:
---
Attachment: YARN-7657.4.patch

> Queue Mapping could provide options to provide 'user' specific auto-created 
> queues under a specified group parent queue
> ---
>
> Key: YARN-7657
> URL: https://issues.apache.org/jira/browse/YARN-7657
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7657.1.patch, YARN-7657.2.patch, YARN-7657.3.patch, 
> YARN-7657.4.patch
>
>
> Current Queue-Mapping only provides %user as an option for 'user'-specific 
> queues, as u:%user:%user. We could also support %user combined with a group, 
> as 'g:marketing-group:marketing.%user', so that user-specific queues are 
> automatically created under a group queue in this case.
> cc [~leftnoteasy]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8002) Support NOT_SELF and ALL namespace types for allocation tag

2018-03-12 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395878#comment-16395878
 ] 

Wangda Tan commented on YARN-8002:
--

Thanks [~cheersyang],
bq. The approach in this patch adds the flexibility to make it easy to get 
cardinality on a set of application IDs regardless of what namespace it is. The 
extra computation will only happen when we need to compute PC against app-tag 
(or other namespaces that eval to a set of app IDs), not every time. And with 
the extra cost you mentioned (which I don't think is very bad). What you 
suggested saves this cost, but it will be hard to support not-self, app-ids (a 
set of IDs) or possibly app-regex etc. I'd like to hold off re-implementing 
this until everyone agrees we are not going to support those.
I completely understand the benefits here; I think we all agree that we should 
only aggregate results when necessary (for app-ids, not-self, etc.). I'm fine 
with keeping the logic here, but I just want to make sure that we don't do any 
unnecessary aggregation for the single-app case.
More specifically, the following call needs to be avoided:
{code}
TargetApplications ta = new TargetApplications(currentAppId,
    atm.getAllApplicationIds());
namespace.evaluate(ta);
{code}

For {{AllocationTags}}, I'm fine with keeping the name, but I suggest moving it 
to the {{scheduler.constraint}} package. I don't think it belongs in the API 
package.

{{AllocationTags#fromScope}}: First, I don't suggest using the "scope" name 
since we already have it inside {{TargetConstraint}}. I suggest giving this 
explicit names, such as 
{code}
createSingleAppAllocationTags(ApplicationId, Set tags)
createGlobalAllocationTags(Set tags)
// In the future, we can add
createAppsAllocationTags(...) 
createAppLabelAllocationTags(...)
{code}
This makes the code more readable, with fewer implicit rules in the configs.

Similarly, for {{AllocationTagNamespace}}: I saw it is used by some api 
packages, but from what I can see, the only code usage is toString. We should 
move it to {{scheduler.constraint}} as well.
Instead of doing this, I would prefer to add a simple Java enum to the api 
package (such as AllocationTagTypes or AllocationTagNamespaceTypes) and use it 
in the AllocationTagNamespaces class.

> Support NOT_SELF and ALL namespace types for allocation tag
> ---
>
> Key: YARN-8002
> URL: https://issues.apache.org/jira/browse/YARN-8002
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-8002.001.patch, YARN-8002.002.patch
>
>
> This is a continuation task after YARN-7972. YARN-7972 adds support for 
> specifying tags with the namespaces SELF and APP_ID, like the following:
>  * self/
>  * app-id//
> This task is to track the work to support 2 of the remaining namespace types, 
> *NOT_SELF* & *ALL* (we'll support app-label later):
>  * not-self/
>  * all/
> This will require a bit of refactoring in {{AllocationTagsManager}}, as it 
> needs to do proper aggregation on tags for multiple apps.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5015) Support sliding window retry capability for container restart

2018-03-12 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395819#comment-16395819
 ] 

Wangda Tan commented on YARN-5015:
--

+1, thanks [~csingh], I will commit the patch by tomorrow if no objections. 

> Support sliding window retry capability for container restart 
> --
>
> Key: YARN-5015
> URL: https://issues.apache.org/jira/browse/YARN-5015
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Chandni Singh
>Priority: Major
>  Labels: oct16-medium
> Attachments: YARN-5015.01.patch, YARN-5015.02.patch, 
> YARN-5015.03.patch, YARN-5015.04.patch, YARN-5015.05.patch, 
> YARN-5015.06.patch, YARN-5015.07.patch, YARN-5015.08.patch
>
>
> We support a sliding window retry policy for AM restarts (introduced in 
> YARN-611). A similar sliding window retry policy is needed for container 
> restarts.
> With this change, we can introduce a common class, SlidingWindowRetryPolicy 
> (suggested by [~vvasudev] in the comments), and integrate it into container 
> restart. 
> In a subsequent jira, we can modify the AM code to use 
> SlidingWindowRetryPolicy, which will unify the AM and container restart code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8026) FairScheduler queue ACLs not implemented for application actions

2018-03-12 Thread Tristan Stevens (JIRA)
Tristan Stevens created YARN-8026:
-

 Summary: FairScheduler queue ACLs not implemented for application 
actions
 Key: YARN-8026
 URL: https://issues.apache.org/jira/browse/YARN-8026
 Project: Hadoop YARN
  Issue Type: Bug
  Components: fairscheduler
Reporter: Tristan Stevens


The mapred-site.xml options mapreduce.job.acl-modify-job and 
mapreduce.job.acl-view-job both specify that queue ACLs should apply for read 
and modify operations on a job; however, according to 
org.apache.hadoop.yarn.server.security.ApplicationACLsManager.java, this 
feature has not been implemented.

This is very important; otherwise it is difficult to manage a cluster with a 
complicated queue hierarchy without either putting everyone in the admin ACL, 
generating many support tickets, or asking people to remember to set 
mapreduce.job.acl-modify-job and mapreduce.job.acl-view-job.

Extract from mapred-default.xml:
bq.  Irrespective of this ACL configuration, (a) job-owner, (b) the user who 
started the cluster, (c) members of an admin configured supergroup configured 
via mapreduce.cluster.permissions.supergroup and *(d) queue administrators of 
the queue to which this job was submitted* to configured via 
acl-administer-jobs for the specific queue in mapred-queues.xml can do all the 
view operations on a job. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3409) Support Node Attribute functionality

2018-03-12 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395767#comment-16395767
 ] 

Chris Douglas commented on YARN-3409:
-

Deleted the {{yarn-3409}} branch, because it collides with {{YARN-3409}} on 
case-insensitive systems. The former looked like an accidental push.

> Support Node Attribute functionality
> 
>
> Key: YARN-3409
> URL: https://issues.apache.org/jira/browse/YARN-3409
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: api, client, RM
>Reporter: Wangda Tan
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: 3409-apiChanges_v2.pdf (4).pdf, 
> Constraint-Node-Labels-Requirements-Design-doc_v1.pdf, YARN-3409.WIP.001.patch
>
>
> Specifying only one label for each node (in other words, partitioning a 
> cluster) is a way to determine how the resources of a specific set of nodes 
> can be shared by a group of entities (like teams, departments, etc.). 
> Partitions of a cluster have the following characteristics:
> - The cluster is divided into several disjoint sub clusters.
> - ACL/priority can apply on a partition (only the marketing team has 
> priority to use the partition).
> - Percentage of capacities can apply on a partition (the Market team has 40% 
> minimum capacity and the Dev team has 60% minimum capacity of the partition).
> Attributes are orthogonal to partitions; they describe features of a node's 
> hardware/software just for affinity. Some examples of attributes:
> - glibc version
> - JDK version
> - Type of CPU (x86_64/i686)
> - Type of OS (windows, linux, etc.)
> With this, an application will be able to ask for resources that have 
> (glibc.version >= 2.20 && JDK.version >= 8u20 && x86_64).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7987) Docker container name(--name) needs to be DNS friendly for DNS resolution to work in user defined networks.

2018-03-12 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395680#comment-16395680
 ] 

Suma Shivaprasad commented on YARN-7987:


Thanks [~shaneku...@gmail.com]. We can close this issue and go with YARN-7994 
for DNS resolution for now.

> Docker container name(--name) needs to be DNS friendly for DNS resolution to 
> work in user defined networks. 
> 
>
> Key: YARN-7987
> URL: https://issues.apache.org/jira/browse/YARN-7987
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
>
> User-defined networks like overlays support DNS resolution through Docker's 
> embedded DNS, which needs the container name (the --name parameter value in 
> docker run) to be an FQDN for container names to be resolved - please refer 
> to the documentation at 
> [https://docs.docker.com/v17.09/engine/userguide/networking/configure-dns/]
> However, YARN sets the container name to the container's id, which is not 
> DNS-friendly (eg: container_e26_1519402686002_0035_01_03) and is not an FQDN. 
> The proposal is to set an FQDN (eg: 
> ctr-e26-1519402686002-0035-01-03.domain-name) as the docker container's 
> name so that containers can communicate with each other via hostnames in 
> user-defined networks like overlays, bridges, etc. The domain name will be 
> picked up from the YARN DNS registry configuration 
> (hadoop.registry.dns.domain-name).


--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-7987) Docker container name(--name) needs to be DNS friendly for DNS resolution to work in user defined networks.

2018-03-12 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad resolved YARN-7987.

Resolution: Won't Fix

> Docker container name(--name) needs to be DNS friendly for DNS resolution to 
> work in user defined networks. 
> 
>
> Key: YARN-7987
> URL: https://issues.apache.org/jira/browse/YARN-7987
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
>
> User-defined networks like overlays support DNS resolution through Docker's 
> embedded DNS, which needs the container name (the --name parameter value in 
> docker run) to be an FQDN for container names to be resolved - please refer 
> to the documentation at 
> [https://docs.docker.com/v17.09/engine/userguide/networking/configure-dns/]
> However, YARN sets the container name to the container's id, which is not 
> DNS-friendly (eg: container_e26_1519402686002_0035_01_03) and is not an FQDN. 
> The proposal is to set an FQDN (eg: 
> ctr-e26-1519402686002-0035-01-03.domain-name) as the docker container's 
> name so that containers can communicate with each other via hostnames in 
> user-defined networks like overlays, bridges, etc. The domain name will be 
> picked up from the YARN DNS registry configuration 
> (hadoop.registry.dns.domain-name).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8017) Validate the application ID has been persisted to the service definition prior to use

2018-03-12 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-8017:
-
Priority: Critical  (was: Major)

> Validate the application ID has been persisted to the service definition 
> prior to use
> -
>
> Key: YARN-8017
> URL: https://issues.apache.org/jira/browse/YARN-8017
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Priority: Critical
>
> The service definition is persisted to disk prior to launching the 
> application. Once the application is launched, the service definition is 
> updated to include the application ID. If submit fails, the application ID is 
> never added to the previously persisted service definition.
> When this occurs, attempting to stop or destroy the application results in 
> an NPE while trying to get the application ID from the service definition, 
> making it impossible to clean up.
> {code:java}
> 2018-03-02 18:28:05,512 INFO 
> org.apache.hadoop.yarn.service.utils.ServiceApiUtil: Loading service 
> definition from 
> hdfs://y7001.yns.hortonworks.com:8020/user/hadoopuser/.yarn/services/skumpfcents/skumpfcents.json
> 2018-03-02 18:28:05,525 WARN 
> org.apache.hadoop.yarn.webapp.GenericExceptionHandler: INTERNAL_SERVER_ERROR
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.api.records.ApplicationId.fromString(ApplicationId.java:111)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.getAppId(ServiceClient.java:1106)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionStop(ServiceClient.java:363)
>   at 
> org.apache.hadoop.yarn.service.webapp.ApiServer$4.run(ApiServer.java:251)
>   at 
> org.apache.hadoop.yarn.service.webapp.ApiServer$4.run(ApiServer.java:243)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422){code}
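One way to see the requested validation, as a hypothetical sketch (the accessor names are assumptions, and this is not an attached patch):

{code}
// Illustrative only: fail with an actionable error instead of an NPE when
// the persisted service definition never received an application id
// (i.e. the original submit failed before the id was written back).
String appIdStr = persistedService.getId();   // accessor name is illustrative
if (appIdStr == null || appIdStr.isEmpty()) {
  throw new YarnException("Service " + persistedService.getName()
      + " has no application id recorded; nothing to stop/destroy cleanly.");
}
ApplicationId appId = ApplicationId.fromString(appIdStr);
{code}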



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8024) LOG in class MaxRunningAppsEnforcer is initialized with a faulty class FairScheduler

2018-03-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395598#comment-16395598
 ] 

genericqa commented on YARN-8024:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 9 unchanged - 0 fixed = 10 total (was 9) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 21s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 66m 
58s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | YARN-8024 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913983/YARN-8024.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5583d7d42326 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / dd05871 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19960/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19960/testReport/ |
| Max. process+thread count | 824 (vs. ulimit 

[jira] [Commented] (YARN-7581) HBase filters are not constructed correctly in ATSv2

2018-03-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395511#comment-16395511
 ] 

genericqa commented on YARN-7581:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-client in 
the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | YARN-7581 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914084/YARN-7581.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 13f26d204bb4 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / dd05871 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19961/testReport/ |
| Max. process+thread count | 407 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 |
| Console 

[jira] [Commented] (YARN-8024) LOG in class MaxRunningAppsEnforcer is initialized with a faulty class FairScheduler

2018-03-12 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395510#comment-16395510
 ] 

Yufei Gu commented on YARN-8024:


LGTM

> LOG in class MaxRunningAppsEnforcer is initialized with a faulty class 
> FairScheduler 
> -
>
> Key: YARN-8024
> URL: https://issues.apache.org/jira/browse/YARN-8024
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Sen Zhao
>Priority: Major
>  Labels: newbie++
> Attachments: YARN-8024.001.patch
>
>
> It should be initialized with class MaxRunningAppsEnforcer. 
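> 
> For readers unfamiliar with the issue, a minimal sketch of the kind of one-line change the summary describes is below. Whether the class uses slf4j (as assumed here) or another logging facade, and the surrounding field layout, are assumptions rather than details taken from the attached patch.
> {code:java}
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
> 
> public class MaxRunningAppsEnforcer {
>   // Reported problem: the logger was created for FairScheduler, so every
>   // message from this class is attributed to the wrong class name in the logs.
>   //   private static final Logger LOG = LoggerFactory.getLogger(FairScheduler.class);
> 
>   // What the summary asks for: initialize the logger with the owning class.
>   private static final Logger LOG =
>       LoggerFactory.getLogger(MaxRunningAppsEnforcer.class);
> }
> {code}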



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8016) Provide a common interface for queues mapping rules

2018-03-12 Thread Zian Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395469#comment-16395469
 ] 

Zian Chen commented on YARN-8016:
-

I quickly checked the failed test case. The failure is that the container memory 
allocation size is not what the test expects, which is unrelated to this patch. 
Any ideas on this?

> Provide a common interface for queues mapping rules
> ---
>
> Key: YARN-8016
> URL: https://issues.apache.org/jira/browse/YARN-8016
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Zian Chen
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-8016.001.patch
>
>
> Currently in Capacity Scheduler, queue mappings are hard coded to 
> UserGroupMappingPlacementRule.
> We need to expose a general framework to dynamically create various queue 
> mapping placement rules by reading the queue mapping rule property from 
> capacity-scheduler.xml. A hedged sketch of one possible shape for such an 
> interface follows below.
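> 
> The interface and method names in this sketch are illustrative assumptions only and are not taken from the attached patch:
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> 
> /**
>  * Illustrative only: a pluggable queue mapping rule that Capacity Scheduler
>  * could instantiate from a property in capacity-scheduler.xml instead of
>  * hard coding UserGroupMappingPlacementRule.
>  */
> public interface QueueMappingRule {
> 
>   /** Read this rule's own settings from the scheduler configuration. */
>   void initialize(Configuration conf);
> 
>   /**
>    * Return the target queue for the given user and application name,
>    * or null if this rule does not apply and the next rule should be tried.
>    */
>   String getQueueForApplication(String user, String applicationName);
> }
> {code}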



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7581) HBase filters are not constructed correctly in ATSv2

2018-03-12 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395444#comment-16395444
 ] 

Haibo Chen commented on YARN-7581:
--

Uploaded the 02 patch to address the checkstyle issue as well.

> HBase filters are not constructed correctly in ATSv2
> 
>
> Key: YARN-7581
> URL: https://issues.apache.org/jira/browse/YARN-7581
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2
>Affects Versions: 3.0.0-beta1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7581.00.patch, YARN-7581.01.patch, 
> YARN-7581.02.patch
>
>
> Post YARN-7346,
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesConfigFilters() and 
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesMetricFilters() 
> started to fail when hbase.profile is set to 2.0.
> *Error Message*
>  [ERROR] Failures:
>  [ERROR] 
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesConfigFilters:1266 
> expected:<2> but was:<0>
>  [ERROR] 
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesMetricFilters:1523 
> expected:<1> but was:<0>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8024) LOG in class MaxRunningAppsEnforcer is initialized with a faulty class FairScheduler

2018-03-12 Thread Sen Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395443#comment-16395443
 ] 

Sen Zhao commented on YARN-8024:


I just submitted a patch. Please review it. Thanks, [~yufeigu]

> LOG in class MaxRunningAppsEnforcer is initialized with a faulty class 
> FairScheduler 
> -
>
> Key: YARN-8024
> URL: https://issues.apache.org/jira/browse/YARN-8024
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Sen Zhao
>Priority: Major
>  Labels: newbie++
> Attachments: YARN-8024.001.patch
>
>
> It should be initialized with class MaxRunningAppsEnforcer. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7581) HBase filters are not constructed correctly in ATSv2

2018-03-12 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7581:
-
Attachment: YARN-7581.02.patch

> HBase filters are not constructed correctly in ATSv2
> 
>
> Key: YARN-7581
> URL: https://issues.apache.org/jira/browse/YARN-7581
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2
>Affects Versions: 3.0.0-beta1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7581.00.patch, YARN-7581.01.patch, 
> YARN-7581.02.patch
>
>
> Post YARN-7346,
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesConfigFilters() and 
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesMetricFilters() 
> started to fail when hbase.profile is set to 2.0.
> *Error Message*
>  [ERROR] Failures:
>  [ERROR] 
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesConfigFilters:1266 
> expected:<2> but was:<0>
>  [ERROR] 
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesMetricFilters:1523 
> expected:<1> but was:<0>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7581) HBase filters are not constructed correctly in ATSv2

2018-03-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395432#comment-16395432
 ] 

genericqa commented on YARN-7581:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 10s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client:
 The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-client in 
the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | YARN-7581 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914074/YARN-7581.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2a9963029e15 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / dd05871 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19959/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client.txt
 |
|  Test Results | 

[jira] [Commented] (YARN-8006) Make Hbase-2 profile as default for YARN-7055 branch

2018-03-12 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395396#comment-16395396
 ] 

Haibo Chen commented on YARN-8006:
--

{quote}Do you think it is happening because timelineservice-hbase-client 
compiled on hbase-1.2.6 and trying to run against hbase-2.0-beta1?
{quote}
From the NoSuchMethodError, this is possible. Do you know how we can get the 
Jenkins command (i.e. make a clone of the preCommit-yarn job and add debug 
messages to the maven command that the job executes)?

> Make Hbase-2 profile as default for YARN-7055 branch
> 
>
> Key: YARN-8006
> URL: https://issues.apache.org/jira/browse/YARN-8006
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8006-YARN-7055.001.patch, 
> YARN-8006-YARN-8006.00.patch
>
>
> In the last weekly call, folks discussed that we should have a separate branch 
> with hbase-2 as the default profile. The trunk default profile is hbase-1, 
> which runs all the tests under the hbase-1 profile, but tests are not running 
> for the hbase-2 profile.
> As per the discussion, let's keep the YARN-7055 branch with hbase-2 as the 
> default profile. Any server-side patches can be applied to this branch as 
> well, which runs the tests for the hbase-2 profile. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7581) HBase filters are not constructed correctly in ATSv2

2018-03-12 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395380#comment-16395380
 ] 

Haibo Chen commented on YARN-7581:
--

Patch updated to enforce byte[]-to-String encoding with UTF-8.
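As a minimal illustration of what enforcing the charset means (not the actual patch code): decoding with the platform-default constructor depends on the JVM's default charset, while an explicit UTF-8 decode is deterministic across platforms.
{code:java}
import java.nio.charset.StandardCharsets;

public class Utf8EncodingExample {
  public static void main(String[] args) {
    // Non-ASCII payload to make the charset difference visible.
    byte[] raw = "metric\u00e9filter".getBytes(StandardCharsets.UTF_8);

    // Decoding with the platform default charset: result depends on the JVM.
    String platformDefault = new String(raw);

    // Decoding explicitly with UTF-8: identical result on every platform.
    String utf8 = new String(raw, StandardCharsets.UTF_8);

    // Prints false when the JVM default charset is not UTF-8.
    System.out.println(platformDefault.equals(utf8));
  }
}
{code}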

> HBase filters are not constructed correctly in ATSv2
> 
>
> Key: YARN-7581
> URL: https://issues.apache.org/jira/browse/YARN-7581
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2
>Affects Versions: 3.0.0-beta1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7581.00.patch, YARN-7581.01.patch
>
>
> Post YARN-7346,
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesConfigFilters() and 
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesMetricFilters() 
> started to fail when hbase.profile is set to 2.0.
> *Error Message*
>  [ERROR] Failures:
>  [ERROR] 
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesConfigFilters:1266 
> expected:<2> but was:<0>
>  [ERROR] 
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesMetricFilters:1523 
> expected:<1> but was:<0>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-8024) LOG in class MaxRunningAppsEnforcer is initialized with a faulty class FairScheduler

2018-03-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-8024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák reassigned YARN-8024:
---

Assignee: Sen Zhao

> LOG in class MaxRunningAppsEnforcer is initialized with a faulty class 
> FairScheduler 
> -
>
> Key: YARN-8024
> URL: https://issues.apache.org/jira/browse/YARN-8024
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Sen Zhao
>Priority: Major
>  Labels: newbie++
> Attachments: YARN-8024.001.patch
>
>
> It should be initialized with class MaxRunningAppsEnforcer. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7581) HBase filters are not constructed correctly in ATSv2

2018-03-12 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7581:
-
Attachment: YARN-7581.01.patch

> HBase filters are not constructed correctly in ATSv2
> 
>
> Key: YARN-7581
> URL: https://issues.apache.org/jira/browse/YARN-7581
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2
>Affects Versions: 3.0.0-beta1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7581.00.patch, YARN-7581.01.patch
>
>
> Post YARN-7346,
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesConfigFilters() and 
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesMetricFilters() 
> started to fail when hbase.profile is set to 2.0.
> *Error Message*
>  [ERROR] Failures:
>  [ERROR] 
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesConfigFilters:1266 
> expected:<2> but was:<0>
>  [ERROR] 
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesMetricFilters:1523 
> expected:<1> but was:<0>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8025) UsersManangers#getComputedResourceLimitForActiveUsers throws NPE due to preComputedActiveUserLimit is empty

2018-03-12 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-8025:
-
Description: 
UsersManangers#getComputedResourceLimitForActiveUsers throws NPE when I run SLS.
 *preComputedActiveUserLimit* is never populated with any element in the code.
{code:java}
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.UsersManager.getComputedResourceLimitForActiveUsers(UsersManager.java:511)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.getResourceLimitForActiveUsers(LeafQueue.java:1576)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.computeUserLimitAndSetHeadroom(LeafQueue.java:1517)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainers(LeafQueue.java:1190)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainersToChildQueues(ParentQueue.java:824)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainers(ParentQueue.java:630)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateOrReserveNewContainers(CapacityScheduler.java:1834)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateOrReserveNewContainers(CapacityScheduler.java:1802)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersOnMultiNodes(CapacityScheduler.java:1925)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1946)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.scheduleBasedOnNodeLabels(CapacityScheduler.java:732)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler$AsyncScheduleThread.run(CapacityScheduler.java:774)
{code}
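
As a minimal, self-contained illustration of how an empty pre-computed map produces this NullPointerException (the field and key types below are stand-ins, not the real UsersManager members):
{code:java}
import java.util.HashMap;
import java.util.Map;

public class EmptyPrecomputedLimitExample {
  // Stand-in for preComputedActiveUserLimit: nothing ever puts entries into it.
  private final Map<String, Map<String, Long>> preComputedActiveUserLimit =
      new HashMap<>();

  long getComputedResourceLimitForActiveUsers(String partition, String mode) {
    // The outer get() returns null for an empty map, and the chained get()
    // then throws NullPointerException -- the failure reported above.
    return preComputedActiveUserLimit.get(partition).get(mode);
  }

  public static void main(String[] args) {
    new EmptyPrecomputedLimitExample()
        .getComputedResourceLimitForActiveUsers("default", "RESPECT_EXCLUSIVITY");
  }
}
{code}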

> UsersManangers#getComputedResourceLimitForActiveUsers throws NPE due to 
> preComputedActiveUserLimit is empty
> ---
>
> Key: YARN-8025
> URL: https://issues.apache.org/jira/browse/YARN-8025
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Jiandan Yang 
>Priority: Major
>
> UsersManangers#getComputedResourceLimitForActiveUsers throws NPE when I run 
> SLS.
>  *preComputedActiveUserLimit* is never populated with any element in the code.
> {code:java}
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.UsersManager.getComputedResourceLimitForActiveUsers(UsersManager.java:511)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.getResourceLimitForActiveUsers(LeafQueue.java:1576)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.computeUserLimitAndSetHeadroom(LeafQueue.java:1517)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainers(LeafQueue.java:1190)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainersToChildQueues(ParentQueue.java:824)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainers(ParentQueue.java:630)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateOrReserveNewContainers(CapacityScheduler.java:1834)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateOrReserveNewContainers(CapacityScheduler.java:1802)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersOnMultiNodes(CapacityScheduler.java:1925)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1946)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.scheduleBasedOnNodeLabels(CapacityScheduler.java:732)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler$AsyncScheduleThread.run(CapacityScheduler.java:774)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8025) UsersManangers#getComputedResourceLimitForActiveUsers throws NPE due to preComputedActiveUserLimit is empty

2018-03-12 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-8025:
-
Environment: (was: 
UsersManangers#getComputedResourceLimitForActiveUsers throws NPE  when I run 
SLS.
*preComputedActiveUserLimit* is not put any element in the code.

{code:java}
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.UsersManager.getComputedResourceLimitForActiveUsers(UsersManager.java:511)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.getResourceLimitForActiveUsers(LeafQueue.java:1576)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.computeUserLimitAndSetHeadroom(LeafQueue.java:1517)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainers(LeafQueue.java:1190)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainersToChildQueues(ParentQueue.java:824)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainers(ParentQueue.java:630)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateOrReserveNewContainers(CapacityScheduler.java:1834)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateOrReserveNewContainers(CapacityScheduler.java:1802)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersOnMultiNodes(CapacityScheduler.java:1925)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1946)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.scheduleBasedOnNodeLabels(CapacityScheduler.java:732)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler$AsyncScheduleThread.run(CapacityScheduler.java:774)
{code}
)

> UsersManangers#getComputedResourceLimitForActiveUsers throws NPE due to 
> preComputedActiveUserLimit is empty
> ---
>
> Key: YARN-8025
> URL: https://issues.apache.org/jira/browse/YARN-8025
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Jiandan Yang 
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5150) [YARN-3368] Add a sunburst chart view for queues/applications resource usage to new YARN UI

2018-03-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/YARN-5150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395249#comment-16395249
 ] 

Gergely Novák commented on YARN-5150:
-

In patch #3:
 - moved the sunburst chart to the Queues tab
 - refactored the code for the tree/sunburst view to use a common codebase
 - added query parameters (view type and sunburst chart type)
 - fixed some partition-related issues
 - added a new screenshot


> [YARN-3368] Add a sunburst chart view for queues/applications resource usage 
> to new YARN UI
> ---
>
> Key: YARN-5150
> URL: https://issues.apache.org/jira/browse/YARN-5150
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Gergely Novák
>Priority: Major
> Attachments: Screen Shot 2017-11-17 at 12.05.28.png, Screen Shot 
> 2018-03-12 at 14.47.27.png, YARN-5150.001.patch, YARN-5150.002.patch, 
> YARN-5150.003.patch
>
>
> An example of a sunburst chart: https://bl.ocks.org/kerryrodden/7090426.
> If we can introduce it to the YARN UI, admins can easily get an understanding of 
> relative resource usages and configured capacities for queues/applications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5150) [YARN-3368] Add a sunburst chart view for queues/applications resource usage to new YARN UI

2018-03-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-5150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák updated YARN-5150:

Attachment: Screen Shot 2018-03-12 at 14.47.27.png

> [YARN-3368] Add a sunburst chart view for queues/applications resource usage 
> to new YARN UI
> ---
>
> Key: YARN-5150
> URL: https://issues.apache.org/jira/browse/YARN-5150
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Gergely Novák
>Priority: Major
> Attachments: Screen Shot 2017-11-17 at 12.05.28.png, Screen Shot 
> 2018-03-12 at 14.47.27.png, YARN-5150.001.patch, YARN-5150.002.patch, 
> YARN-5150.003.patch
>
>
> An example of a sunburst chart: https://bl.ocks.org/kerryrodden/7090426.
> If we can introduce it to the YARN UI, admins can easily get an understanding of 
> relative resource usages and configured capacities for queues/applications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5150) [YARN-3368] Add a sunburst chart view for queues/applications resource usage to new YARN UI

2018-03-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395209#comment-16395209
 ] 

genericqa commented on YARN-5150:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
24m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | YARN-5150 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914041/YARN-5150.003.patch |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux 59a8cd4f41f4 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e1f5251 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 420 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19957/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Add a sunburst chart view for queues/applications resource usage 
> to new YARN UI
> ---
>
> Key: YARN-5150
> URL: https://issues.apache.org/jira/browse/YARN-5150
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Gergely Novák
>Priority: Major
> Attachments: Screen Shot 2017-11-17 at 12.05.28.png, 
> YARN-5150.001.patch, YARN-5150.002.patch, YARN-5150.003.patch
>
>
> An example of a sunburst chart: https://bl.ocks.org/kerryrodden/7090426.
> If we can introduce it to the YARN UI, admins can easily get an understanding of 
> relative resource usages and configured capacities for queues/applications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5150) [YARN-3368] Add a sunburst chart view for queues/applications resource usage to new YARN UI

2018-03-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-5150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák updated YARN-5150:

Attachment: YARN-5150.003.patch

> [YARN-3368] Add a sunburst chart view for queues/applications resource usage 
> to new YARN UI
> ---
>
> Key: YARN-5150
> URL: https://issues.apache.org/jira/browse/YARN-5150
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Gergely Novák
>Priority: Major
> Attachments: Screen Shot 2017-11-17 at 12.05.28.png, 
> YARN-5150.001.patch, YARN-5150.002.patch, YARN-5150.003.patch
>
>
> An example of a sunburst chart: https://bl.ocks.org/kerryrodden/7090426.
> If we can introduce it to the YARN UI, admins can easily get an understanding of 
> relative resource usages and configured capacities for queues/applications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5150) [YARN-3368] Add a sunburst chart view for queues/applications resource usage to new YARN UI

2018-03-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-5150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák updated YARN-5150:

Attachment: YARN-5150.003.patch

> [YARN-3368] Add a sunburst chart view for queues/applications resource usage 
> to new YARN UI
> ---
>
> Key: YARN-5150
> URL: https://issues.apache.org/jira/browse/YARN-5150
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Gergely Novák
>Priority: Major
> Attachments: Screen Shot 2017-11-17 at 12.05.28.png, 
> YARN-5150.001.patch, YARN-5150.002.patch
>
>
> An example of a sunburst chart: https://bl.ocks.org/kerryrodden/7090426.
> If we can introduce it to the YARN UI, admins can easily get an understanding of 
> relative resource usages and configured capacities for queues/applications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5150) [YARN-3368] Add a sunburst chart view for queues/applications resource usage to new YARN UI

2018-03-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-5150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák updated YARN-5150:

Attachment: YARN-5150.003.wip.patch

> [YARN-3368] Add a sunburst chart view for queues/applications resource usage 
> to new YARN UI
> ---
>
> Key: YARN-5150
> URL: https://issues.apache.org/jira/browse/YARN-5150
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Gergely Novák
>Priority: Major
> Attachments: Screen Shot 2017-11-17 at 12.05.28.png, 
> YARN-5150.001.patch, YARN-5150.002.patch
>
>
> An example of a sunburst chart: https://bl.ocks.org/kerryrodden/7090426.
> If we can introduce it to the YARN UI, admins can easily get an understanding of 
> relative resource usages and configured capacities for queues/applications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5150) [YARN-3368] Add a sunburst chart view for queues/applications resource usage to new YARN UI

2018-03-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-5150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák updated YARN-5150:

Attachment: (was: YARN-5150.003.wip.patch)

> [YARN-3368] Add a sunburst chart view for queues/applications resource usage 
> to new YARN UI
> ---
>
> Key: YARN-5150
> URL: https://issues.apache.org/jira/browse/YARN-5150
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Gergely Novák
>Priority: Major
> Attachments: Screen Shot 2017-11-17 at 12.05.28.png, 
> YARN-5150.001.patch, YARN-5150.002.patch
>
>
> An example of a sunburst chart: https://bl.ocks.org/kerryrodden/7090426.
> If we can introduce it to the YARN UI, admins can easily get an understanding of 
> relative resource usages and configured capacities for queues/applications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5150) [YARN-3368] Add a sunburst chart view for queues/applications resource usage to new YARN UI

2018-03-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-5150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák updated YARN-5150:

Attachment: (was: YARN-5150.003.patch)

> [YARN-3368] Add a sunburst chart view for queues/applications resource usage 
> to new YARN UI
> ---
>
> Key: YARN-5150
> URL: https://issues.apache.org/jira/browse/YARN-5150
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Gergely Novák
>Priority: Major
> Attachments: Screen Shot 2017-11-17 at 12.05.28.png, 
> YARN-5150.001.patch, YARN-5150.002.patch
>
>
> An example of a sunburst chart: https://bl.ocks.org/kerryrodden/7090426.
> If we can introduce it to the YARN UI, admins can easily get an understanding of 
> relative resource usages and configured capacities for queues/applications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5150) [YARN-3368] Add a sunburst chart view for queues/applications resource usage to new YARN UI

2018-03-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-5150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák updated YARN-5150:

Attachment: (was: YARN-5150.003.patch)

> [YARN-3368] Add a sunburst chart view for queues/applications resource usage 
> to new YARN UI
> ---
>
> Key: YARN-5150
> URL: https://issues.apache.org/jira/browse/YARN-5150
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Gergely Novák
>Priority: Major
> Attachments: Screen Shot 2017-11-17 at 12.05.28.png, 
> YARN-5150.001.patch, YARN-5150.002.patch
>
>
> An example of a sunburst chart: https://bl.ocks.org/kerryrodden/7090426.
> If we can introduce it to the YARN UI, admins can easily get an understanding of 
> relative resource usages and configured capacities for queues/applications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5150) [YARN-3368] Add a sunburst chart view for queues/applications resource usage to new YARN UI

2018-03-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-5150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák updated YARN-5150:

Attachment: YARN-5150.003.patch

> [YARN-3368] Add a sunburst chart view for queues/applications resource usage 
> to new YARN UI
> ---
>
> Key: YARN-5150
> URL: https://issues.apache.org/jira/browse/YARN-5150
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Gergely Novák
>Priority: Major
> Attachments: Screen Shot 2017-11-17 at 12.05.28.png, 
> YARN-5150.001.patch, YARN-5150.002.patch, YARN-5150.003.patch
>
>
> An example of a sunburst chart: https://bl.ocks.org/kerryrodden/7090426.
> If we can introduce it to the YARN UI, admins can easily get an understanding of 
> relative resource usages and configured capacities for queues/applications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8022) ResourceManager UI cluster/app/ page fails to render

2018-03-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395088#comment-16395088
 ] 

genericqa commented on YARN-8022:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
48s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common: 
The patch generated 1 new + 20 unchanged - 18 fixed = 21 total (was 38) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
5s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | YARN-8022 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913998/YARN-8022.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d03b58bc0d00 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e1f5251 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 

[jira] [Updated] (YARN-8022) ResourceManager UI cluster/app/ page fails to render

2018-03-12 Thread Tarun Parimi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tarun Parimi updated YARN-8022:
---
Attachment: YARN-8022.002.patch

> ResourceManager UI cluster/app/ page fails to render
> 
>
> Key: YARN-8022
> URL: https://issues.apache.org/jira/browse/YARN-8022
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Blocker
> Attachments: Screen Shot 2018-03-12 at 1.45.05 PM.png, 
> YARN-8022.001.patch, YARN-8022.002.patch
>
>
> The page displays the message "Failed to read the attempts of the application"
>  
> The following stack trace is observed in RM log.
> org.apache.hadoop.yarn.server.webapp.AppBlock: Failed to read the attempts of 
> the application application_1520597233415_0002.
> java.lang.NullPointerException
>  at org.apache.hadoop.yarn.server.webapp.AppBlock$3.run(AppBlock.java:283)
>  at org.apache.hadoop.yarn.server.webapp.AppBlock$3.run(AppBlock.java:280)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>  at org.apache.hadoop.yarn.server.webapp.AppBlock.render(AppBlock.java:279)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RMAppBlock.render(RMAppBlock.java:71)
>  at org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>  at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>  at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>  at org.apache.hadoop.yarn.webapp.view.HtmlPage$Page.subView(HtmlPage.java:49)
>  at 
> org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl$EImp._v(HamletImpl.java:117)
>  at org.apache.hadoop.yarn.webapp.hamlet2.Hamlet$TD.__(Hamlet.java:848)
>  at 
> org.apache.hadoop.yarn.webapp.view.TwoColumnLayout.render(TwoColumnLayout.java:71)
>  at org.apache.hadoop.yarn.webapp.view.HtmlPage.render(HtmlPage.java:82)
>  at org.apache.hadoop.yarn.webapp.Controller.render(Controller.java:212)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RmController.app(RmController.java:54)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6688) Add client interface to know default queue for user

2018-03-12 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt reassigned YARN-6688:
--

Assignee: (was: Bibin A Chundatt)

> Add client interface to know default queue for user
> ---
>
> Key: YARN-6688
> URL: https://issues.apache.org/jira/browse/YARN-6688
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Bibin A Chundatt
>Priority: Major
>
> Currently a user gets to know the queue placement only once the application is 
> accepted. Provide an option for the client/user to know the default queue for a user.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8022) ResourceManager UI cluster/app/ page fails to render

2018-03-12 Thread Tarun Parimi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395060#comment-16395060
 ] 

Tarun Parimi commented on YARN-8022:


Thanks for the clarification, [~rohithsharma]. Attached a patch as per your 
suggestions.

> ResourceManager UI cluster/app/ page fails to render
> 
>
> Key: YARN-8022
> URL: https://issues.apache.org/jira/browse/YARN-8022
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Blocker
> Attachments: Screen Shot 2018-03-12 at 1.45.05 PM.png, 
> YARN-8022.001.patch, YARN-8022.002.patch
>
>
> The page displays the message "Failed to read the attempts of the application"
>  
> The following stack trace is observed in RM log.
> org.apache.hadoop.yarn.server.webapp.AppBlock: Failed to read the attempts of 
> the application application_1520597233415_0002.
> java.lang.NullPointerException
>  at org.apache.hadoop.yarn.server.webapp.AppBlock$3.run(AppBlock.java:283)
>  at org.apache.hadoop.yarn.server.webapp.AppBlock$3.run(AppBlock.java:280)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>  at org.apache.hadoop.yarn.server.webapp.AppBlock.render(AppBlock.java:279)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RMAppBlock.render(RMAppBlock.java:71)
>  at org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>  at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>  at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>  at org.apache.hadoop.yarn.webapp.view.HtmlPage$Page.subView(HtmlPage.java:49)
>  at 
> org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl$EImp._v(HamletImpl.java:117)
>  at org.apache.hadoop.yarn.webapp.hamlet2.Hamlet$TD.__(Hamlet.java:848)
>  at 
> org.apache.hadoop.yarn.webapp.view.TwoColumnLayout.render(TwoColumnLayout.java:71)
>  at org.apache.hadoop.yarn.webapp.view.HtmlPage.render(HtmlPage.java:82)
>  at org.apache.hadoop.yarn.webapp.Controller.render(Controller.java:212)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RmController.app(RmController.java:54)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8022) ResourceManager UI cluster/app/ page fails to render

2018-03-12 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-8022:
--
Affects Version/s: (was: 3.2.0)
   (was: 3.1.0)
 Target Version/s: 3.1.0

> ResourceManager UI cluster/app/ page fails to render
> 
>
> Key: YARN-8022
> URL: https://issues.apache.org/jira/browse/YARN-8022
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Blocker
> Attachments: Screen Shot 2018-03-12 at 1.45.05 PM.png, 
> YARN-8022.001.patch
>
>
> The page displays the message "Failed to read the attempts of the application"
>  
> The following stack trace is observed in RM log.
> org.apache.hadoop.yarn.server.webapp.AppBlock: Failed to read the attempts of 
> the application application_1520597233415_0002.
> java.lang.NullPointerException
>  at org.apache.hadoop.yarn.server.webapp.AppBlock$3.run(AppBlock.java:283)
>  at org.apache.hadoop.yarn.server.webapp.AppBlock$3.run(AppBlock.java:280)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>  at org.apache.hadoop.yarn.server.webapp.AppBlock.render(AppBlock.java:279)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RMAppBlock.render(RMAppBlock.java:71)
>  at org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>  at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>  at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>  at org.apache.hadoop.yarn.webapp.view.HtmlPage$Page.subView(HtmlPage.java:49)
>  at 
> org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl$EImp._v(HamletImpl.java:117)
>  at org.apache.hadoop.yarn.webapp.hamlet2.Hamlet$TD.__(Hamlet.java:848)
>  at 
> org.apache.hadoop.yarn.webapp.view.TwoColumnLayout.render(TwoColumnLayout.java:71)
>  at org.apache.hadoop.yarn.webapp.view.HtmlPage.render(HtmlPage.java:82)
>  at org.apache.hadoop.yarn.webapp.Controller.render(Controller.java:212)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RmController.app(RmController.java:54)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8022) ResourceManager UI cluster/app/ page fails to render

2018-03-12 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395029#comment-16395029
 ] 

Rohith Sharma K S commented on YARN-8022:
-

However, in a secure cluster an unauthorized user would go into the else part, 
since callerUGI will NOT be null there; an unauthorized user is blocked in 
AuthenticationFilter itself. callerUGI can be null only in a non-secure cluster, 
so we can proceed with this modification. 
cc : [~leftnoteasy] [~sunilg]
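
For context, a hedged, self-contained sketch of the branch under discussion (the callerUGI == null shortcut) is below; the helper method and return type are stand-ins, not code copied from AppBlock.
{code:java}
import java.security.PrivilegedExceptionAction;
import java.util.Collections;
import java.util.List;
import org.apache.hadoop.security.UserGroupInformation;

public class CallerUgiBranchSketch {
  // Hypothetical stand-in for AppBlock#getApplicationAttemptsReport(request).
  static List<String> getApplicationAttemptsReport() {
    return Collections.singletonList("appattempt_1520597233415_0002_000001");
  }

  static List<String> fetchAttempts(UserGroupInformation callerUGI) throws Exception {
    if (callerUGI == null) {
      // Non-secure cluster: there is no remote user, so fetch directly.
      return getApplicationAttemptsReport();
    }
    // Secure cluster: keep the doAs path so the access check still applies.
    return callerUGI.doAs(
        (PrivilegedExceptionAction<List<String>>)
            CallerUgiBranchSketch::getApplicationAttemptsReport);
  }

  public static void main(String[] args) throws Exception {
    System.out.println(fetchAttempts(null));
  }
}
{code}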

> ResourceManager UI cluster/app/ page fails to render
> 
>
> Key: YARN-8022
> URL: https://issues.apache.org/jira/browse/YARN-8022
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Affects Versions: 3.1.0, 3.2.0
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Blocker
> Attachments: Screen Shot 2018-03-12 at 1.45.05 PM.png, 
> YARN-8022.001.patch
>
>
> The page displays the message "Failed to read the attempts of the application"
>  
> The following stack trace is observed in RM log.
> org.apache.hadoop.yarn.server.webapp.AppBlock: Failed to read the attempts of 
> the application application_1520597233415_0002.
> java.lang.NullPointerException
>  at org.apache.hadoop.yarn.server.webapp.AppBlock$3.run(AppBlock.java:283)
>  at org.apache.hadoop.yarn.server.webapp.AppBlock$3.run(AppBlock.java:280)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>  at org.apache.hadoop.yarn.server.webapp.AppBlock.render(AppBlock.java:279)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RMAppBlock.render(RMAppBlock.java:71)
>  at org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>  at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>  at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>  at org.apache.hadoop.yarn.webapp.view.HtmlPage$Page.subView(HtmlPage.java:49)
>  at 
> org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl$EImp._v(HamletImpl.java:117)
>  at org.apache.hadoop.yarn.webapp.hamlet2.Hamlet$TD.__(Hamlet.java:848)
>  at 
> org.apache.hadoop.yarn.webapp.view.TwoColumnLayout.render(TwoColumnLayout.java:71)
>  at org.apache.hadoop.yarn.webapp.view.HtmlPage.render(HtmlPage.java:82)
>  at org.apache.hadoop.yarn.webapp.Controller.render(Controller.java:212)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RmController.app(RmController.java:54)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8025) UsersManangers#getComputedResourceLimitForActiveUsers throws NPE due to preComputedActiveUserLimit is empty

2018-03-12 Thread Jiandan Yang (JIRA)
Jiandan Yang  created YARN-8025:
---

 Summary: UsersManangers#getComputedResourceLimitForActiveUsers 
throws NPE due to preComputedActiveUserLimit is empty
 Key: YARN-8025
 URL: https://issues.apache.org/jira/browse/YARN-8025
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn
 Environment: UsersManangers#getComputedResourceLimitForActiveUsers 
throws NPE when I run SLS.
*preComputedActiveUserLimit* is never populated with any element in the code.

{code:java}
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.UsersManager.getComputedResourceLimitForActiveUsers(UsersManager.java:511)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.getResourceLimitForActiveUsers(LeafQueue.java:1576)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.computeUserLimitAndSetHeadroom(LeafQueue.java:1517)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainers(LeafQueue.java:1190)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainersToChildQueues(ParentQueue.java:824)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainers(ParentQueue.java:630)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateOrReserveNewContainers(CapacityScheduler.java:1834)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateOrReserveNewContainers(CapacityScheduler.java:1802)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersOnMultiNodes(CapacityScheduler.java:1925)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1946)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.scheduleBasedOnNodeLabels(CapacityScheduler.java:732)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler$AsyncScheduleThread.run(CapacityScheduler.java:774)
{code}

Reporter: Jiandan Yang 






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8022) ResourceManager UI cluster/app/ page fails to render

2018-03-12 Thread Tarun Parimi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395015#comment-16395015
 ] 

Tarun Parimi commented on YARN-8022:


YARN-6991 depends on running {code}getApplicationAttemptsReport(request){code} 
as the callerUGI to validate whether the kill button needs to be displayed. If we do 
{code}
 if (callerUGI == null) {
+attempts = getApplicationAttemptsReport(request);
+  } else {
{code}

then, I guess, the kill button will be displayed even for an unauthorized user? 
Correct me if I am wrong.
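
For context, a rough sketch of the pattern being discussed (paraphrased with assumed names, a fragment rather than the exact AppBlock code): the attempts are fetched inside callerUGI.doAs so that ACL checks run as the remote user, and fetching them directly when callerUGI is null bypasses that check.

{code:java}
// Sketch only; a fragment, not the actual AppBlock code.
Collection<ApplicationAttemptReport> attempts;
if (callerUGI == null) {
  // Runs as the server user: no per-user ACL check, so UI elements gated on
  // this result (e.g. the kill button) could be shown to unauthorized users.
  attempts = getApplicationAttemptsReport(request);
} else {
  // Runs as the remote user: ACLs are enforced inside the call.
  attempts = callerUGI.doAs(
      new PrivilegedExceptionAction<Collection<ApplicationAttemptReport>>() {
        @Override
        public Collection<ApplicationAttemptReport> run() throws Exception {
          return getApplicationAttemptsReport(request);
        }
      });
}
{code}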

> ResourceManager UI cluster/app/ page fails to render
> 
>
> Key: YARN-8022
> URL: https://issues.apache.org/jira/browse/YARN-8022
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Affects Versions: 3.1.0, 3.2.0
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Blocker
> Attachments: Screen Shot 2018-03-12 at 1.45.05 PM.png, 
> YARN-8022.001.patch
>
>
> The page displays the message "Failed to read the attempts of the application"
>  
> The following stack trace is observed in RM log.
> org.apache.hadoop.yarn.server.webapp.AppBlock: Failed to read the attempts of 
> the application application_1520597233415_0002.
> java.lang.NullPointerException
>  at org.apache.hadoop.yarn.server.webapp.AppBlock$3.run(AppBlock.java:283)
>  at org.apache.hadoop.yarn.server.webapp.AppBlock$3.run(AppBlock.java:280)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>  at org.apache.hadoop.yarn.server.webapp.AppBlock.render(AppBlock.java:279)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RMAppBlock.render(RMAppBlock.java:71)
>  at org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>  at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>  at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>  at org.apache.hadoop.yarn.webapp.view.HtmlPage$Page.subView(HtmlPage.java:49)
>  at 
> org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl$EImp._v(HamletImpl.java:117)
>  at org.apache.hadoop.yarn.webapp.hamlet2.Hamlet$TD.__(Hamlet.java:848)
>  at 
> org.apache.hadoop.yarn.webapp.view.TwoColumnLayout.render(TwoColumnLayout.java:71)
>  at org.apache.hadoop.yarn.webapp.view.HtmlPage.render(HtmlPage.java:82)
>  at org.apache.hadoop.yarn.webapp.Controller.render(Controller.java:212)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RmController.app(RmController.java:54)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7820) Fix the currentAppAttemptId error in AHS when an application is running

2018-03-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395010#comment-16395010
 ] 

genericqa commented on YARN-7820:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 45s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 2 new + 
99 unchanged - 0 fixed = 101 total (was 99) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
39s{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 67m 
17s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | YARN-7820 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913979/YARN-7820.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6ae2319cf246 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e1f5251 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Comment Edited] (YARN-8022) ResourceManager UI cluster/app/ page fails to render

2018-03-12 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395006#comment-16395006
 ] 

Rohith Sharma K S edited comment on YARN-8022 at 3/12/18 9:56 AM:
--

I would suggest the following changes. 
# Revert AppBlock as this patch is doing. 
# Additionally, at line no. 145, could you add the code below? Get the app attempt report 
directly if the caller UGI is null.
{code}
+  if (callerUGI == null) {
+attempts = getApplicationAttemptsReport(request);
+  } else {
{code}


was (Author: rohithsharma):
I would suggest the following changes. 
# Revert AppBlock as this patch is doing. 
# Additionally, at line no. 155, could you add the code below? Get the app attempt report 
directly if the caller UGI is null.
{code}
+  if (callerUGI == null) {
+attempts = getApplicationAttemptsReport(request);
+  } else {
{code}

> ResourceManager UI cluster/app/ page fails to render
> 
>
> Key: YARN-8022
> URL: https://issues.apache.org/jira/browse/YARN-8022
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Affects Versions: 3.1.0, 3.2.0
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Blocker
> Attachments: Screen Shot 2018-03-12 at 1.45.05 PM.png, 
> YARN-8022.001.patch
>
>
> The page displays the message "Failed to read the attempts of the application"
>  
> The following stack trace is observed in RM log.
> org.apache.hadoop.yarn.server.webapp.AppBlock: Failed to read the attempts of 
> the application application_1520597233415_0002.
> java.lang.NullPointerException
>  at org.apache.hadoop.yarn.server.webapp.AppBlock$3.run(AppBlock.java:283)
>  at org.apache.hadoop.yarn.server.webapp.AppBlock$3.run(AppBlock.java:280)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>  at org.apache.hadoop.yarn.server.webapp.AppBlock.render(AppBlock.java:279)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RMAppBlock.render(RMAppBlock.java:71)
>  at org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>  at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>  at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>  at org.apache.hadoop.yarn.webapp.view.HtmlPage$Page.subView(HtmlPage.java:49)
>  at 
> org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl$EImp._v(HamletImpl.java:117)
>  at org.apache.hadoop.yarn.webapp.hamlet2.Hamlet$TD.__(Hamlet.java:848)
>  at 
> org.apache.hadoop.yarn.webapp.view.TwoColumnLayout.render(TwoColumnLayout.java:71)
>  at org.apache.hadoop.yarn.webapp.view.HtmlPage.render(HtmlPage.java:82)
>  at org.apache.hadoop.yarn.webapp.Controller.render(Controller.java:212)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RmController.app(RmController.java:54)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8022) ResourceManager UI cluster/app/ page fails to render

2018-03-12 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395006#comment-16395006
 ] 

Rohith Sharma K S commented on YARN-8022:
-

I would suggest the following changes. 
# Revert AppBlock as this patch is doing. 
# Additionally, at line no. 155, could you add the code below? Get the app attempt report 
directly if the caller UGI is null.
{code}
+  if (callerUGI == null) {
+attempts = getApplicationAttemptsReport(request);
+  } else {
{code}

> ResourceManager UI cluster/app/ page fails to render
> 
>
> Key: YARN-8022
> URL: https://issues.apache.org/jira/browse/YARN-8022
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Affects Versions: 3.1.0, 3.2.0
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Blocker
> Attachments: Screen Shot 2018-03-12 at 1.45.05 PM.png, 
> YARN-8022.001.patch
>
>
> The page displays the message "Failed to read the attempts of the application"
>  
> The following stack trace is observed in RM log.
> org.apache.hadoop.yarn.server.webapp.AppBlock: Failed to read the attempts of 
> the application application_1520597233415_0002.
> java.lang.NullPointerException
>  at org.apache.hadoop.yarn.server.webapp.AppBlock$3.run(AppBlock.java:283)
>  at org.apache.hadoop.yarn.server.webapp.AppBlock$3.run(AppBlock.java:280)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>  at org.apache.hadoop.yarn.server.webapp.AppBlock.render(AppBlock.java:279)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RMAppBlock.render(RMAppBlock.java:71)
>  at org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>  at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>  at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>  at org.apache.hadoop.yarn.webapp.view.HtmlPage$Page.subView(HtmlPage.java:49)
>  at 
> org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl$EImp._v(HamletImpl.java:117)
>  at org.apache.hadoop.yarn.webapp.hamlet2.Hamlet$TD.__(Hamlet.java:848)
>  at 
> org.apache.hadoop.yarn.webapp.view.TwoColumnLayout.render(TwoColumnLayout.java:71)
>  at org.apache.hadoop.yarn.webapp.view.HtmlPage.render(HtmlPage.java:82)
>  at org.apache.hadoop.yarn.webapp.Controller.render(Controller.java:212)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RmController.app(RmController.java:54)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7925) Some NPE errors caused a display errors when setting node labels

2018-03-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16394982#comment-16394982
 ] 

genericqa commented on YARN-7925:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 35 unchanged - 0 fixed = 36 total (was 35) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 19s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | YARN-7925 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913977/YARN-7925.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b3566697bc91 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 
21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e1f5251 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19952/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 

[jira] [Commented] (YARN-8022) ResourceManager UI cluster/app/ page fails to render

2018-03-12 Thread Tarun Parimi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16394977#comment-16394977
 ] 

Tarun Parimi commented on YARN-8022:


[~rohithsharma] It looks like the callerUGI null check before the revert was handled 
by 
{code:java}
if (callerUGI == null) {
  throw new AuthenticationException(
  "Failed to get user name from request");
}{code}
which is not present in this patch. So even before the revert, if callerUGI is 
null, the page would not be rendered, but the NPE would be avoided. 

Also, if we are going to throw this exception, the further null check of 
callerUGI below is not needed. Can we remove that?
{code:java}
if (callerUGI == null) {
  containerReport =
  getContainerReport(request);
} else {
  containerReport = callerUGI.doAs(
  new PrivilegedExceptionAction() {
{code}
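
In other words (a hedged sketch, assuming the early check shown above is restored), once the AuthenticationException is thrown for a null callerUGI, the later branch can collapse to a single doAs call with no second null check:

{code:java}
// Sketch only; assumes callerUGI was already verified non-null by the early check.
containerReport = callerUGI.doAs(
    new PrivilegedExceptionAction<ContainerReport>() {
      @Override
      public ContainerReport run() throws Exception {
        return getContainerReport(request);
      }
    });
{code}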
 

> ResourceManager UI cluster/app/ page fails to render
> 
>
> Key: YARN-8022
> URL: https://issues.apache.org/jira/browse/YARN-8022
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Affects Versions: 3.1.0, 3.2.0
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Blocker
> Attachments: Screen Shot 2018-03-12 at 1.45.05 PM.png, 
> YARN-8022.001.patch
>
>
> The page displays the message "Failed to read the attempts of the application"
>  
> The following stack trace is observed in RM log.
> org.apache.hadoop.yarn.server.webapp.AppBlock: Failed to read the attempts of 
> the application application_1520597233415_0002.
> java.lang.NullPointerException
>  at org.apache.hadoop.yarn.server.webapp.AppBlock$3.run(AppBlock.java:283)
>  at org.apache.hadoop.yarn.server.webapp.AppBlock$3.run(AppBlock.java:280)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>  at org.apache.hadoop.yarn.server.webapp.AppBlock.render(AppBlock.java:279)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RMAppBlock.render(RMAppBlock.java:71)
>  at org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>  at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>  at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>  at org.apache.hadoop.yarn.webapp.view.HtmlPage$Page.subView(HtmlPage.java:49)
>  at 
> org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl$EImp._v(HamletImpl.java:117)
>  at org.apache.hadoop.yarn.webapp.hamlet2.Hamlet$TD.__(Hamlet.java:848)
>  at 
> org.apache.hadoop.yarn.webapp.view.TwoColumnLayout.render(TwoColumnLayout.java:71)
>  at org.apache.hadoop.yarn.webapp.view.HtmlPage.render(HtmlPage.java:82)
>  at org.apache.hadoop.yarn.webapp.Controller.render(Controller.java:212)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RmController.app(RmController.java:54)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7581) HBase filters are not constructed correctly in ATSv2

2018-03-12 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16394967#comment-16394967
 ] 

Rohith Sharma K S commented on YARN-7581:
-

[~haibochen] would you update the patch fixing findbugs? 

> HBase filters are not constructed correctly in ATSv2
> 
>
> Key: YARN-7581
> URL: https://issues.apache.org/jira/browse/YARN-7581
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2
>Affects Versions: 3.0.0-beta1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7581.00.patch
>
>
> Post YARN-7346,
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesConfigFilters() and 
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesMetricFilters() 
> start to fail when hbase.profile is set to 2.0)
> *Error Message*
>  [ERROR] Failures:
>  [ERROR] 
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesConfigFilters:1266 
> expected:<2> but was:<0>
>  [ERROR] 
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesMetricFilters:1523 
> expected:<1> but was:<0>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8024) LOG in class MaxRunningAppsEnforcer is initialized with a faulty class FairScheduler

2018-03-12 Thread Sen Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sen Zhao updated YARN-8024:
---
Attachment: YARN-8024.001.patch

> LOG in class MaxRunningAppsEnforcer is initialized with a faulty class 
> FairScheduler 
> -
>
> Key: YARN-8024
> URL: https://issues.apache.org/jira/browse/YARN-8024
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Reporter: Yufei Gu
>Priority: Major
>  Labels: newbie++
> Attachments: YARN-8024.001.patch
>
>
> It should be initialized with class MaxRunningAppsEnforcer. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8022) ResourceManager UI cluster/app/ page fails to render

2018-03-12 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16394936#comment-16394936
 ] 

Rohith Sharma K S commented on YARN-8022:
-

It seems that even after the revert it fails with an NPE because a null 
callerUGI is not handled.
{code}
2018-03-12 13:36:10,860 ERROR org.apache.hadoop.yarn.server.webapp.AppBlock: 
Failed to read the attempts of the application application_1520833852015_0001.
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.server.webapp.AppBlock.render(AppBlock.java:145)
at 
org.apache.hadoop.yarn.server.resourcemanager.webapp.RMAppBlock.render(RMAppBlock.java:71)
{code}

Many issue fixes have gone into AppBlock.java. Please check all these JIRAs 
before uploading a new patch.
 !Screen Shot 2018-03-12 at 1.45.05 PM.png! 

> ResourceManager UI cluster/app/ page fails to render
> 
>
> Key: YARN-8022
> URL: https://issues.apache.org/jira/browse/YARN-8022
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Affects Versions: 3.1.0, 3.2.0
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Blocker
> Attachments: Screen Shot 2018-03-12 at 1.45.05 PM.png, 
> YARN-8022.001.patch
>
>
> The page displays the message "Failed to read the attempts of the application"
>  
> The following stack trace is observed in RM log.
> org.apache.hadoop.yarn.server.webapp.AppBlock: Failed to read the attempts of 
> the application application_1520597233415_0002.
> java.lang.NullPointerException
>  at org.apache.hadoop.yarn.server.webapp.AppBlock$3.run(AppBlock.java:283)
>  at org.apache.hadoop.yarn.server.webapp.AppBlock$3.run(AppBlock.java:280)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>  at org.apache.hadoop.yarn.server.webapp.AppBlock.render(AppBlock.java:279)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RMAppBlock.render(RMAppBlock.java:71)
>  at org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>  at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>  at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>  at org.apache.hadoop.yarn.webapp.view.HtmlPage$Page.subView(HtmlPage.java:49)
>  at 
> org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl$EImp._v(HamletImpl.java:117)
>  at org.apache.hadoop.yarn.webapp.hamlet2.Hamlet$TD.__(Hamlet.java:848)
>  at 
> org.apache.hadoop.yarn.webapp.view.TwoColumnLayout.render(TwoColumnLayout.java:71)
>  at org.apache.hadoop.yarn.webapp.view.HtmlPage.render(HtmlPage.java:82)
>  at org.apache.hadoop.yarn.webapp.Controller.render(Controller.java:212)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RmController.app(RmController.java:54)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8022) ResourceManager UI cluster/app/ page fails to render

2018-03-12 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-8022:

Attachment: Screen Shot 2018-03-12 at 1.45.05 PM.png

> ResourceManager UI cluster/app/ page fails to render
> 
>
> Key: YARN-8022
> URL: https://issues.apache.org/jira/browse/YARN-8022
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Affects Versions: 3.1.0, 3.2.0
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Blocker
> Attachments: Screen Shot 2018-03-12 at 1.45.05 PM.png, 
> YARN-8022.001.patch
>
>
> The page displays the message "Failed to read the attempts of the application"
>  
> The following stack trace is observed in RM log.
> org.apache.hadoop.yarn.server.webapp.AppBlock: Failed to read the attempts of 
> the application application_1520597233415_0002.
> java.lang.NullPointerException
>  at org.apache.hadoop.yarn.server.webapp.AppBlock$3.run(AppBlock.java:283)
>  at org.apache.hadoop.yarn.server.webapp.AppBlock$3.run(AppBlock.java:280)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>  at org.apache.hadoop.yarn.server.webapp.AppBlock.render(AppBlock.java:279)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RMAppBlock.render(RMAppBlock.java:71)
>  at org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>  at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>  at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>  at org.apache.hadoop.yarn.webapp.view.HtmlPage$Page.subView(HtmlPage.java:49)
>  at 
> org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl$EImp._v(HamletImpl.java:117)
>  at org.apache.hadoop.yarn.webapp.hamlet2.Hamlet$TD.__(Hamlet.java:848)
>  at 
> org.apache.hadoop.yarn.webapp.view.TwoColumnLayout.render(TwoColumnLayout.java:71)
>  at org.apache.hadoop.yarn.webapp.view.HtmlPage.render(HtmlPage.java:82)
>  at org.apache.hadoop.yarn.webapp.Controller.render(Controller.java:212)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RmController.app(RmController.java:54)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7820) Fix the currentAppAttemptId error in AHS when an application is running

2018-03-12 Thread Jinjiang Ling (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinjiang Ling updated YARN-7820:

Attachment: YARN-7820.003.patch

> Fix the currentAppAttemptId error in AHS when an application is running
> ---
>
> Key: YARN-7820
> URL: https://issues.apache.org/jira/browse/YARN-7820
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Reporter: Jinjiang Ling
>Assignee: Jinjiang Ling
>Priority: Major
> Attachments: YARN-7820.001.patch, YARN-7820.003.patch, 
> YARN-7820.003.patch, YARN-7820.003.patch, image-2018-01-26-14-35-09-796.png
>
>
> When I use the REST API of the AHS to get a running app's latest attempt 
> id, it always returns an invalid id like 
> *appattempt_1516873125047_0013_{color:#FF}-01{color}*. 
> But when the app is finished, the RM pushes a finished event which 
> contains the latest attempt id to the TimelineServer, so the id transitions 
> to a correct one at the end of the application. 
> I think that while the app is running this value should already be correct, so 
> I add the latest attempt id to the other info of the app's entity when the app 
> transitions to the RUNNING state. Then the AHS will use this value to set the 
> currentAppAttemptId.
>  
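
For readers unfamiliar with the approach, a rough idea of what publishing the latest attempt id in the entity's "other info" could look like with the ATSv1 TimelineEntity API; the otherInfo key below is a made-up placeholder, not necessarily what the patch uses.

{code:java}
// Illustrative sketch; the otherInfo key is a placeholder, not the patch's actual key.
import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;

public class CurrentAttemptPublisherSketch {
  static void publishLatestAttempt(TimelineEntity appEntity,
      ApplicationAttemptId latestAttemptId) {
    // When the app transitions to RUNNING, record its latest attempt id so the
    // AHS can report a valid currentAppAttemptId while the app is still running.
    appEntity.addOtherInfo("YARN_APPLICATION_LATEST_APP_ATTEMPT",
        latestAttemptId.toString());
  }
}
{code}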



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7925) Some NPE errors caused a display errors when setting node labels

2018-03-12 Thread Jinjiang Ling (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinjiang Ling updated YARN-7925:

Attachment: YARN-7925.003.patch

> Some NPE errors caused a display errors when setting node labels
> 
>
> Key: YARN-7925
> URL: https://issues.apache.org/jira/browse/YARN-7925
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Jinjiang Ling
>Assignee: Jinjiang Ling
>Priority: Blocker
> Attachments: DisplayError.png, YARN-7925.001.patch, 
> YARN-7925.002.patch, YARN-7925.003.patch
>
>
> I'm trying to use node labels with the latest Hadoop (3.1.0-SNAPSHOT). But 
> when I add a new node label and attach a NodeManager to it, it sometimes 
> causes a display error.
> !DisplayError.png|width=573,height=188!
> Then I found that *when no queues can access the label*, this error 
> happens.
> After checking the log, I find some NPE errors.
> {quote}..
>  Caused by: java.lang.NullPointerException
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceInfo.toString(ResourceInfo.java:73)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:160)
>  ..
> {quote}
>  
>  
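
Purely as an illustration of the kind of guard that avoids such an NPE (the real ResourceInfo fields and the actual fix may differ), a null-safe toString sketch:

{code:java}
// Hypothetical sketch; the field name and the fallback string are placeholders.
import org.apache.hadoop.yarn.api.records.Resource;

public class ResourceInfoSketch {
  // May be null when no queue can access the partition/label.
  private final Resource resource;

  public ResourceInfoSketch(Resource resource) {
    this.resource = resource;
  }

  @Override
  public String toString() {
    // Guarding against a null resource avoids the NPE seen while rendering the page.
    return resource == null ? "<memory:0, vCores:0>" : resource.toString();
  }
}
{code}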



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-8023) REST API doesn't show new application

2018-03-12 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S resolved YARN-8023.
-
Resolution: Invalid

> REST API doesn't show new application
> -
>
> Key: YARN-8023
> URL: https://issues.apache.org/jira/browse/YARN-8023
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 2.7.3
> Environment: Release label:emr-5.5.0
> Hadoop distribution:Amazon 2.7.3
> Applications:Spark 2.1.0, Hive 2.1.1, Hue 3.12.0
>Reporter: Airton Sampaio de Sobral
>Priority: Major
> Attachments: Screen Shot 2018-03-10 at 5.46.13 PM.png
>
>
> After killing an application using the HADOOP UI, and creating a new one with 
> the same configuration, the brand new one doesn't appear on the API route:
> /api/v1/applications?status=running
> I've tried the applications path as well (/api/v1/applications) without 
> success.
> This happens randomly and it seems that after a really long time, the new 
> instance appears on the API.
> On the UI the new application instance appears, and it's working fine. (Print 
> screen on attachment).
> On the API it shows the last instance of the application as if it is running, 
> but it has been dead for an hour!
> {code:java}
> /api/v1/applications?status=running
> {
> "id" : "application_1511385973584_0087",
> "name" : "AdActionPaymentKafkaToJDBC",
> "attempts" : [ {
> "attemptId" : "1",
> "startTime" : "2018-01-16T19:08:32.275GMT",
> "endTime" : "1969-12-31T23:59:59.999GMT",
> "lastUpdated" : "2018-01-16T19:08:34.016GMT",
> "duration" : 0,
> "sparkUser" : "hadoop",
> "completed" : false,
> "endTimeEpoch" : -1,
> "startTimeEpoch" : 1516129712275,
> "lastUpdatedEpoch" : 1516129714016
> }
> {code}
>  
> Update:
> After two hours, the application appeared on the API response:
>  
> {code:java}
> {
> "id" : "application_1511385973584_0154",
> "name" : "AdActionPaymentKafkaToJDBC",
> "attempts" : [ {
> "attemptId" : "1",
> "startTime" : "2018-03-10T21:08:30.557GMT",
> "endTime" : "1969-12-31T23:59:59.999GMT",
> "lastUpdated" : "2018-03-10T21:08:32.310GMT",
> "duration" : 0,
> "sparkUser" : "hadoop",
> "completed" : false,
> "endTimeEpoch" : -1,
> "startTimeEpoch" : 1520716110557,
> "lastUpdatedEpoch" : 1520716112310
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8023) REST API doesn't show new application

2018-03-12 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16394870#comment-16394870
 ] 

Rohith Sharma K S commented on YARN-8023:
-

The given REST endpoints and response are not part of the RM daemon. They appear 
to be specific to a particular vendor, so you may need to contact them 
directly.

I am closing this as invalid. Please reopen if you feel it is an issue in YARN.

> REST API doesn't show new application
> -
>
> Key: YARN-8023
> URL: https://issues.apache.org/jira/browse/YARN-8023
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 2.7.3
> Environment: Release label:emr-5.5.0
> Hadoop distribution:Amazon 2.7.3
> Applications:Spark 2.1.0, Hive 2.1.1, Hue 3.12.0
>Reporter: Airton Sampaio de Sobral
>Priority: Major
> Attachments: Screen Shot 2018-03-10 at 5.46.13 PM.png
>
>
> After killing an application using the HADOOP UI, and creating a new one with 
> the same configuration, the brand new one doesn't appear on the API route:
> /api/v1/applications?status=running
> I've tried the applications path as well (/api/v1/applications) without 
> success.
> This happens randomly and it seems that after a really long time, the new 
> instance appears on the API.
> On the UI the new application instance appears, and it's working fine. (Print 
> screen on attachment).
> On the API it shows the last instance of the application as if it is running, 
> but it has been dead for an hour!
> {code:java}
> /api/v1/applications?status=running
> {
> "id" : "application_1511385973584_0087",
> "name" : "AdActionPaymentKafkaToJDBC",
> "attempts" : [ {
> "attemptId" : "1",
> "startTime" : "2018-01-16T19:08:32.275GMT",
> "endTime" : "1969-12-31T23:59:59.999GMT",
> "lastUpdated" : "2018-01-16T19:08:34.016GMT",
> "duration" : 0,
> "sparkUser" : "hadoop",
> "completed" : false,
> "endTimeEpoch" : -1,
> "startTimeEpoch" : 1516129712275,
> "lastUpdatedEpoch" : 1516129714016
> }
> {code}
>  
> Update:
> After two hours, the application appeared on the API response:
>  
> {code:java}
> {
> "id" : "application_1511385973584_0154",
> "name" : "AdActionPaymentKafkaToJDBC",
> "attempts" : [ {
> "attemptId" : "1",
> "startTime" : "2018-03-10T21:08:30.557GMT",
> "endTime" : "1969-12-31T23:59:59.999GMT",
> "lastUpdated" : "2018-03-10T21:08:32.310GMT",
> "duration" : 0,
> "sparkUser" : "hadoop",
> "completed" : false,
> "endTimeEpoch" : -1,
> "startTimeEpoch" : 1520716110557,
> "lastUpdatedEpoch" : 1520716112310
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7864) YARN Federation document has error. spelling mistakes.

2018-03-12 Thread Yiran Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16394858#comment-16394858
 ] 

Yiran Wu commented on YARN-7864:


Could someone review this issue, please?

> YARN Federation document has error. spelling mistakes.
> --
>
> Key: YARN-7864
> URL: https://issues.apache.org/jira/browse/YARN-7864
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: docs
>Affects Versions: 2.9.0, 3.0.0, 2.9.1
> Environment: 3.0.0
>Reporter: Yiran Wu
>Priority: Major
> Attachments: YARN-7864.001.patch, image-2018-01-31-19-01-12-739.png
>
>
> YARN Federation document has error. spelling mistakes.
> yarn.resourcemanger.scheduler.address -> 
> yarn.resourcemanager.scheduler.address
>  
> !image-2018-01-31-19-01-12-739.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8006) Make Hbase-2 profile as default for YARN-7055 branch

2018-03-12 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16394857#comment-16394857
 ] 

Rohith Sharma K S commented on YARN-8006:
-

Going through the HBase code base, TableDescriptor is an interface and 
HTableDescriptor implements TableDescriptor. Note that HTableDescriptor is 
deprecated, but it should still work for backward compatibility. 
{code}
/**
 * HTableDescriptor contains the details about an HBase table  such as the 
descriptors of
 * all the column families, is the table a catalog table,  hbase:meta 
,
 * if the table is read only, the maximum size of the memstore,
 * when the region split should occur, coprocessors associated with it etc...
 * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0.
 * Use {@link TableDescriptorBuilder} to build {@link 
HTableDescriptor}.
 */
@Deprecated
@InterfaceAudience.Public
public class HTableDescriptor implements TableDescriptor, 
Comparable<HTableDescriptor> {
{code}
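
For reference, a minimal sketch (my own, against the HBase 2.x client API, not taken from the patch) of the deprecated HTableDescriptor path next to the TableDescriptorBuilder path the @deprecated note points to; the table and column family names are arbitrary:

{code:java}
// Minimal sketch against the HBase 2.x client API; names are arbitrary examples.
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class DescriptorSketch {
  // Deprecated in HBase 2.0 but still works for backward compatibility.
  static TableDescriptor deprecatedStyle() {
    HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("ts_entity"));
    desc.addFamily(new HColumnDescriptor("i"));
    return desc;
  }

  // Replacement recommended by the @deprecated note: TableDescriptorBuilder.
  static TableDescriptor builderStyle() {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("ts_entity"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("i"))
        .build();
  }
}
{code}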

> Make Hbase-2 profile as default for YARN-7055 branch
> 
>
> Key: YARN-8006
> URL: https://issues.apache.org/jira/browse/YARN-8006
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8006-YARN-7055.001.patch, 
> YARN-8006-YARN-8006.00.patch
>
>
> In the last weekly call, folks discussed that we should have a separate branch 
> with hbase-2 as the default profile. Trunk's default profile is hbase-1, which 
> runs all the tests under the hbase-1 profile, but tests are not running for the 
> hbase-2 profile.
> As per the discussion, let's keep the YARN-7055 branch with the hbase-2 profile 
> as the default. Any server-side patches can be given to this branch as well, 
> which will run the tests for the hbase-2 profile. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7773) YARN Federation used Mysql as state store throw exception, Unknown column 'homeSubCluster' in 'field list'

2018-03-12 Thread Yiran Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16394853#comment-16394853
 ] 

Yiran Wu commented on YARN-7773:


Could someone review this issue, please?

> YARN Federation used Mysql as state store throw exception, Unknown column 
> 'homeSubCluster' in 'field list'
> --
>
> Key: YARN-7773
> URL: https://issues.apache.org/jira/browse/YARN-7773
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 2.9.0, 3.0.0-alpha1, 3.0.0-alpha2, 3.0.0-beta1, 
> 3.0.0-alpha4, 3.0.0-alpha3, 3.0.0
> Environment: Hadoop 3.0.0
>Reporter: Yiran Wu
>Priority: Blocker
>  Labels: patch
> Attachments: YARN-7773.001.patch
>
>
> An error occurred when YARN Federation used MySQL as the state store. The 
> reason, I found, was that the field used to create the 
> applicationsHomeSubCluster table was 'subClusterId' while the stored procedure 
> used 'homeSubCluster'. I fixed this problem.
>  
> submitApplication appIdapplication_1516277664083_0014 try #0 on SubCluster 
> cluster1 , queue: root.bdp_federation
>  [2018-01-18T23:25:29.325+08:00] [ERROR] 
> store.impl.SQLFederationStateStore.logAndThrowRetriableException(FederationStateStoreUtils.java
>  158) [IPC Server handler 44 on 8050] : Unable to insert the newly generated 
> application application_1516277664083_0014
>  com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown column 
> 'homeSubCluster' in 'field list'
>  at sun.reflect.GeneratedConstructorAccessor15.newInstance(Unknown Source)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>  at com.mysql.jdbc.Util.handleNewInstance(Util.java:425)
>  at com.mysql.jdbc.Util.getInstance(Util.java:408)
>  at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:944)
>  at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3973)
>  at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3909)
>  at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2527)
>  at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2680)
>  at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2484)
>  at 
> com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1858)
>  at 
> com.mysql.jdbc.PreparedStatement.executeUpdateInternal(PreparedStatement.java:2079)
>  at 
> com.mysql.jdbc.PreparedStatement.executeUpdateInternal(PreparedStatement.java:2013)
>  at 
> com.mysql.jdbc.PreparedStatement.executeLargeUpdate(PreparedStatement.java:5104)
>  at 
> com.mysql.jdbc.CallableStatement.executeLargeUpdate(CallableStatement.java:2418)
>  at com.mysql.jdbc.CallableStatement.executeUpdate(CallableStatement.java:887)
>  at 
> com.zaxxer.hikari.pool.ProxyPreparedStatement.executeUpdate(ProxyPreparedStatement.java:61)
>  at 
> com.zaxxer.hikari.pool.HikariProxyCallableStatement.executeUpdate(HikariProxyCallableStatement.java)
>  at 
> org.apache.hadoop.yarn.server.federation.store.impl.SQLFederationStateStore.addApplicationHomeSubCluster(SQLFederationStateStore.java:547)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>  at com.sun.proxy.$Proxy31.addApplicationHomeSubCluster(Unknown Source)
>  at 
> org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade.addApplicationHomeSubCluster(FederationStateStoreFacade.java:345)
>  at 
> org.apache.hadoop.yarn.server.router.clientrm.JDFederationClientInterceptor.submitApplication(JDFederationClientInterceptor.java:334)
>  at 
> org.apache.hadoop.yarn.server.router.clientrm.RouterClientRMService.submitApplication(RouterClientRMService.java:196)
>  at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:218)
>  at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:419)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2076)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2072)
>  

[jira] [Commented] (YARN-8006) Make Hbase-2 profile as default for YARN-7055 branch

2018-03-12 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16394836#comment-16394836
 ] 

Rohith Sharma K S commented on YARN-8006:
-

{quote}But I am not sure how this goes undetected in YARN-7346 when I ran the 
test with -Dhbase.profile=2.0.
{quote}
However, on my local machine these tests don't fail. 

> Make Hbase-2 profile as default for YARN-7055 branch
> 
>
> Key: YARN-8006
> URL: https://issues.apache.org/jira/browse/YARN-8006
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8006-YARN-7055.001.patch, 
> YARN-8006-YARN-8006.00.patch
>
>
> In the last weekly call, folks discussed that we should have a separate branch 
> with hbase-2 as the default profile. Trunk's default profile is hbase-1, which 
> runs all the tests under the hbase-1 profile, but tests are not running for the 
> hbase-2 profile.
> As per the discussion, let's keep the YARN-7055 branch with the hbase-2 profile 
> as the default. Any server-side patches can be given to this branch as well, 
> which will run the tests for the hbase-2 profile. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org