[jira] [Commented] (YARN-6261) YARN queue mapping fails for users with no group

2017-03-01 Thread Pierre Villard (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15891737#comment-15891737
 ] 

Pierre Villard commented on YARN-6261:
--

Yeah, agreed [~templedf]. Initially I was looking at throwing a specific 
exception (or not catching the {{ExecutionException}} in {{getGroups()}}), but 
that would indeed be quite impactful, and I'm not sure how this is usually 
handled in the Hadoop project. An option could be to add a new {{getGroups()}} 
method with correct exception handling and mark the existing one as deprecated. 
Anyway, I added a log message at warn level as you suggested; this is the 
easiest way to solve the issue right now.
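
For illustration, here is a minimal sketch of that deprecate-and-add idea. It 
is not Hadoop's actual {{Groups}} class; the class name, the 
{{getGroupsChecked}} method, and the map-based cache stand-in are all 
hypothetical:

{code:java}
import java.io.IOException;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch, not Hadoop's Groups class: add a checked variant and
// deprecate the variant that swallows lookup failures.
class GroupsSketch {
  // Stand-in for the real Guava-backed group cache.
  private final Map<String, List<String>> cache = new ConcurrentHashMap<>();

  /** New method: propagates lookup failures to the caller. */
  public List<String> getGroupsChecked(String user) throws IOException {
    List<String> groups = cache.get(user);
    if (groups == null || groups.isEmpty()) {
      throw new IOException("No groups found for user " + user);
    }
    return groups;
  }

  /** @deprecated Swallows lookup failures; use {@link #getGroupsChecked}. */
  @Deprecated
  public List<String> getGroups(String user) {
    try {
      return getGroupsChecked(user);
    } catch (IOException e) {
      // Matches the warn-level log added by the patch.
      System.err.println("WARN: no groups available for user " + user);
      return Collections.emptyList();
    }
  }
}
{code}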

> YARN queue mapping fails for users with no group
> 
>
> Key: YARN-6261
> URL: https://issues.apache.org/jira/browse/YARN-6261
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>
> *Issue:* 
> Since Hadoop group mapping can be overridden (to get groups from an AD for 
> example), it is possible to be in a situation where a user does not have any 
> group (because the user is not in the AD but only defined locally):
> {noformat}
> $ hdfs groups zeppelin
> zeppelin:
> {noformat}
> In this case, if the YARN Queue Mapping is configured and contains at least 
> one mapping of {{MappingType.GROUP}}, it won't be possible to get a queue for 
> the job submitted by such a user and the job won't be submitted at all.
> *Expected result:* 
> In case a user does not have any group and no mapping is defined for this 
> user, the default queue should be assigned regardless of the queue mapping 
> definition.
> *Workaround:* 
> A workaround is to define a queue mapping of {{MappingType.USER}} for the 
> given user before defining any mapping of {{MappingType.GROUP}}.
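
For example, using the capacity scheduler's 
{{yarn.scheduler.capacity.queue-mappings}} property (the {{analysts}} group and 
{{analytics}} queue are illustrative), the workaround puts the {{u:}} rule 
ahead of the {{g:}} rule so it matches first:

{code}
<property>
  <name>yarn.scheduler.capacity.queue-mappings</name>
  <!-- the u: rule for zeppelin comes first, so it wins over the g: rule -->
  <value>u:zeppelin:default,g:analysts:analytics</value>
</property>
{code}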






[jira] [Commented] (YARN-6027) Support fromid(offset) filter for /flows API

2017-03-01 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15891719#comment-15891719
 ] 

Varun Saxena commented on YARN-6027:


[~sjlee0], thanks!
While cherry-picking there was a conflict, and I encountered the same 
compilation issue. In fact, I fixed it as well.
But it seems we have to explicitly call {{git add}} for the resolution to be 
picked up; I wasn't aware of that. I just saw the changes were still lying 
around in my branch.
Sorry for the trouble.
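
For reference, the step that is easy to miss when resolving a cherry-pick 
conflict (the commit hash and file path are illustrative):

{noformat}
$ git cherry-pick abc1234               # stops on a conflict
# ...edit the conflicting file to resolve it...
$ git add path/to/ConflictedFile.java   # required: stages the resolution
$ git cherry-pick --continue
{noformat}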

> Support fromid(offset) filter for /flows API
> 
>
> Key: YARN-6027
> URL: https://issues.apache.org/jira/browse/YARN-6027
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
> Fix For: YARN-5355, YARN-5355-branch-2
>
> Attachments: YARN-6027-YARN-5355.0001.patch, 
> YARN-6027-YARN-5355.0002.patch, YARN-6027-YARN-5355.0003.patch, 
> YARN-6027-YARN-5355.0004.patch, YARN-6027-YARN-5355.0005.patch, 
> YARN-6027-YARN-5355.0006.patch, YARN-6027-YARN-5355.0007.patch, 
> YARN-6027-YARN-5355.0008.patch, YARN-6027-YARN-5355-branch-2.01.patch
>
>
> In YARN-5585, fromId is supported for retrieving entities. We need a similar 
> filter for flow-run apps, flow runs, and flows as well. 
> Along with supporting fromId, this JIRA should also discuss the following points
> * Should we throw an exception for entities/entity retrieval if duplicates are 
> found?
> * TimelineEntity:
> ** Should the equals method also check idPrefix?
> ** Is idPrefix part of the identifiers?






[jira] [Updated] (YARN-6196) [YARN-3368] Invalid information in Node pages and improve Resource Donut chart with better label

2017-03-01 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-6196:
---
Description: 
In nodes page:
# Change 'Nodes Table' label to 'Information'
# Show Health Report as N/A if not available
# When there are 0 nodes in cluster, nodes page breaks.
# Heatmap - Hovering on the box shows the info but hovering on the hostname 
text inside it doesn’t.

In node page:
# Node Health Report missing
# NodeManager Start Time shows Invalid Date
# Reverse colors in the 'Resource - Memory' and 'Resource - VCores' donut charts
# Convert Resource Memory into GB/TB
# Diagnostics is empty in Container Information

  was:
In nodes page:
# Change 'Nodes Table' label to 'Information'
# Show Health Report as N/A if not available
# When there are 0 nodes in cluster, nodes page breaks.

In node page:
# Node Health Report missing
# NodeManager Start Time shows Invalid Date
# Reverse colors in the 'Resource - Memory' and 'Resource - VCores' donut charts
# Convert Resource Memory into GB/TB
# Diagnostics is empty in Container Information


> [YARN-3368] Invalid information in Node pages and improve Resource Donut 
> chart with better label
> 
>
> Key: YARN-6196
> URL: https://issues.apache.org/jira/browse/YARN-6196
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Akhil PB
>Assignee: Akhil PB
>
> In nodes page:
> # Change 'Nodes Table' label to 'Information'
> # Show Health Report as N/A if not available
> # When there are 0 nodes in cluster, nodes page breaks.
> # Heatmap - Hovering on the box shows the info but hovering on the hostname 
> text inside it doesn’t.
> In node page:
> # Node Health Report missing
> # NodeManager Start Time shows Invalid Date
> # Reverse colors in the 'Resource - Memory' and 'Resource - VCores' donut 
> charts
> # Convert Resource Memory into GB/TB
> # Diagnostics is empty in Container Information






[jira] [Commented] (YARN-5892) Capacity Scheduler: Support user-specific minimum user limit percent

2017-03-01 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15891712#comment-15891712
 ] 

Wangda Tan commented on YARN-5892:
--

bq. In my mind, overriding queue's MULP with user-specific MULP is equivalent 
to adding weights to special users, and would be implemented in a similar way. 

Yeah, it is similar, and actually it's inspired by what you mentioned above: 
weight=x means x unit users. Thus we would have only a single, identical MULP 
in the queue but different per-user weights, which in my mind is easier to 
understand than overriding MULP.

bq. If I understand correctly, you are saying that the weighted approach gives 
more flexibility for future features like user quota, weighted user fair share, 
etc. Is that correct?

Yes, because we may not want to keep MULP in the future, but we can keep using 
per-user weights.
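
For illustration, the weight-based configuration might look like this; the 
{{user-settings}} property name is hypothetical, pending this discussion:

{code}
<property>
  <!-- Hypothetical property name: jane counts as 3 "unit users" when
       computing shares against the single, queue-wide MULP -->
  <name>yarn.scheduler.capacity.root.getstuffdone.user-settings.jane.weight</name>
  <value>3</value>
</property>
{code}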

> Capacity Scheduler: Support user-specific minimum user limit percent
> 
>
> Key: YARN-5892
> URL: https://issues.apache.org/jira/browse/YARN-5892
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: YARN-5892.001.patch, YARN-5892.002.patch
>
>
> Currently, in the capacity scheduler, the {{minimum-user-limit-percent}} 
> property is per queue. A cluster admin should be able to set the minimum user 
> limit percent on a per-user basis within the queue.
> This functionality is needed so that when intra-queue preemption is enabled 
> (YARN-4945 / YARN-2113), some users can be deemed as more important than 
> other users, and resources from VIP users won't be as likely to be preempted.
> For example, if the {{getstuffdone}} queue has a MULP of 25 percent, but user 
> {{jane}} is a power user of queue {{getstuffdone}} and needs to be guaranteed 
> 75 percent, the properties for {{getstuffdone}} and {{jane}} would look like 
> this:
> {code}
> <property>
>   <name>yarn.scheduler.capacity.root.getstuffdone.minimum-user-limit-percent</name>
>   <value>25</value>
> </property>
> <property>
>   <name>yarn.scheduler.capacity.root.getstuffdone.jane.minimum-user-limit-percent</name>
>   <value>75</value>
> </property>
> {code}






[jira] [Updated] (YARN-6196) [YARN-3368] Invalid information in Node pages and improve Resource Donut chart with better label

2017-03-01 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-6196:
---
Description: 
In nodes page:
# Change 'Nodes Table' label to 'Information'
# Show Health Report as N/A if not available
# When there are 0 nodes in cluster, nodes page breaks.

In node page:
# Node Health Report missing
# NodeManager Start Time shows Invalid Date
# Reverse colors in the 'Resource - Memory' and 'Resource - VCores' donut charts
# Convert Resource Memory into GB/TB
# Diagnostics is empty in Container Information

  was:
In nodes page:
# Change 'Nodes Table' label to 'Information'
# Change 'VCores Avail' label to 'VCores Available'
# Show Health Report as N/A if not available
# Change 'Mem Avail' to 'Mem Available'
# When there are 0 nodes in cluster, nodes page breaks.

In node page:
# Node Health Report missing
# NodeManager Start Time shows Invalid Date
# Reverse colors in the 'Resource - Memory' and 'Resource - VCores' donut charts
# Convert Resource Memory into GB/TB
# Diagnostics is empty in Container Information


> [YARN-3368] Invalid information in Node pages and improve Resource Donut 
> chart with better label
> 
>
> Key: YARN-6196
> URL: https://issues.apache.org/jira/browse/YARN-6196
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Akhil PB
>Assignee: Akhil PB
>
> In nodes page:
> # Change 'Nodes Table' label to 'Information'
> # Show Health Report as N/A if not available
> # When there are 0 nodes in cluster, nodes page breaks.
> In node page:
> # Node Health Report missing
> # NodeManager Start Time shows Invalid Date
> # Reverse colors in the 'Resource - Memory' and 'Resource - VCores' donut 
> charts
> # Convert Resource Memory into GB/TB
> # Diagnostics is empty in Container Information






[jira] [Commented] (YARN-6207) Move application can fail when attempt add event is delayed

2017-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15891635#comment-15891635
 ] 

Hadoop QA commented on YARN-6207:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 10 new + 313 unchanged - 0 fixed = 323 total (was 313) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 39m 
33s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6207 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855432/YARN-6207.007.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 789ce9691fa7 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6f6dfe0 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15131/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15131/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15131/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Move application can fail when attempt add event is delayed
> 
>
> Key: YARN-6207
> 

[jira] [Commented] (YARN-6264) Resource comparison should depend on policy

2017-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15891607#comment-15891607
 ] 

Hadoop QA commented on YARN-6264:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 40m 35s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6264 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855535/YARN-6264.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6b4864935ce9 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6f6dfe0 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/15130/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15130/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15130/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Resource comparison should depend on policy
> 
>
> Key: YARN-6264
> URL: https://issues.apache.org/jira/browse/YARN-6264
> Project: Hadoop YARN

[jira] [Commented] (YARN-6027) Support fromid(offset) filter for /flows API

2017-03-01 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15891589#comment-15891589
 ] 

Rohith Sharma K S commented on YARN-6027:
-

Thanks [~sjlee0] and [~varun_saxena] for reviewing and committing it :-)

> Support fromid(offset) filter for /flows API
> 
>
> Key: YARN-6027
> URL: https://issues.apache.org/jira/browse/YARN-6027
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
> Fix For: YARN-5355, YARN-5355-branch-2
>
> Attachments: YARN-6027-YARN-5355.0001.patch, 
> YARN-6027-YARN-5355.0002.patch, YARN-6027-YARN-5355.0003.patch, 
> YARN-6027-YARN-5355.0004.patch, YARN-6027-YARN-5355.0005.patch, 
> YARN-6027-YARN-5355.0006.patch, YARN-6027-YARN-5355.0007.patch, 
> YARN-6027-YARN-5355.0008.patch, YARN-6027-YARN-5355-branch-2.01.patch
>
>
> In YARN-5585, fromId is supported for retrieving entities. We need a similar 
> filter for flow-run apps, flow runs, and flows as well. 
> Along with supporting fromId, this JIRA should also discuss the following points
> * Should we throw an exception for entities/entity retrieval if duplicates are 
> found?
> * TimelineEntity:
> ** Should the equals method also check idPrefix?
> ** Is idPrefix part of the identifiers?






[jira] [Commented] (YARN-6264) Resource comparison should depend on policy

2017-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15891560#comment-15891560
 ] 

Hadoop QA commented on YARN-6264:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 42m  
9s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6264 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855523/YARN-6264.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0706734b940d 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6f6dfe0 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15128/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15128/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Resource comparison should depend on policy
> 
>
> Key: YARN-6264
> URL: https://issues.apache.org/jira/browse/YARN-6264
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6264.001.patch, YARN-6264.002.patch
>
>
> In method {{canRunAppAM()}}, we should use a policy-aware resource comparison 
> 

[jira] [Created] (YARN-6266) Extend the resource class to support ports management

2017-03-01 Thread jialei weng (JIRA)
jialei weng created YARN-6266:
-

 Summary: Extend the resource class to support ports management
 Key: YARN-6266
 URL: https://issues.apache.org/jira/browse/YARN-6266
 Project: Hadoop YARN
  Issue Type: New Feature
Reporter: jialei weng


Just like vcores and memory, ports are an important resource for jobs to 
allocate. We should add port management logic to YARN, so that two jobs with 
the same port requirement can be placed on different machines. 
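
Purely as an illustration of the idea (not a YARN API; all names are 
hypothetical), a per-node port pool could look like this:

{code:java}
import java.util.BitSet;

// Hypothetical sketch: a per-node pool of allocatable ports. A scheduler
// would consult it before placing a container that requests a fixed port.
class PortPool {
  private final BitSet inUse = new BitSet(65536);

  /** Returns true if the port was free on this node and is now reserved. */
  synchronized boolean allocate(int port) {
    if (inUse.get(port)) {
      return false;  // port taken; the scheduler should try another node
    }
    inUse.set(port);
    return true;
  }

  synchronized void release(int port) {
    inUse.clear(port);
  }
}
{code}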






[jira] [Updated] (YARN-6264) Resource comparison should depend on policy

2017-03-01 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6264:
---
Attachment: YARN-6264.002.patch

> Resource comparison should depend on policy
> 
>
> Key: YARN-6264
> URL: https://issues.apache.org/jira/browse/YARN-6264
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6264.001.patch, YARN-6264.002.patch
>
>
> In method {{canRunAppAM()}}, we should use a policy-aware resource comparison 
> instead of {{Resources.fitsIn()}} to determine whether the queue has enough 
> resources for the AM. 






[jira] [Commented] (YARN-6255) Refactor yarn-native-services framework

2017-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15891534#comment-15891534
 ] 

Hadoop QA commented on YARN-6255:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
23s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
31s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} yarn-native-services passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 42s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications 
generated 5 new + 24 unchanged - 10 fixed = 29 total (was 34) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications: The patch generated 
115 new + 1287 unchanged - 388 fixed = 1402 total (was 1675) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
46s{color} | {color:green} hadoop-yarn-slider-core in the patch passed. {color} 
|
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
17s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6255 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855526/YARN-6255.yarn-native-services.02.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  xml  |
| uname | Linux 5d7d266fd6b0 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (YARN-6264) Resource comparison should depend on policy

2017-03-01 Thread DjvuLee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15891509#comment-15891509
 ] 

DjvuLee commented on YARN-6264:
---

Can you offer more reasons?

> Resource comparison should depend on policy
> 
>
> Key: YARN-6264
> URL: https://issues.apache.org/jira/browse/YARN-6264
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6264.001.patch
>
>
> In method {{canRunAppAM()}}, we should use a policy-aware resource comparison 
> instead of {{Resources.fitsIn()}} to determine whether the queue has enough 
> resources for the AM. 






[jira] [Updated] (YARN-6255) Refactor yarn-native-services framework

2017-03-01 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6255:
--
Attachment: YARN-6255.yarn-native-services.02.patch

> Refactor yarn-native-services framework 
> 
>
> Key: YARN-6255
> URL: https://issues.apache.org/jira/browse/YARN-6255
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6255.yarn-native-services.01.patch, 
> YARN-6255.yarn-native-services.02.patch
>
>
> YARN-4692 provides a good abstraction of services on YARN. We could use this 
> as a building block in the yarn-native-services framework code base as well.






[jira] [Updated] (YARN-6264) Resource comparison should depend on policy

2017-03-01 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6264:
---
Attachment: YARN-6264.001.patch

> Resource comparison should depend on policy
> 
>
> Key: YARN-6264
> URL: https://issues.apache.org/jira/browse/YARN-6264
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6264.001.patch
>
>
> In method {{canRunAppAM()}}, we should use a policy-aware resource comparison 
> instead of {{Resources.fitsIn()}} to determine whether the queue has enough 
> resources for the AM. 






[jira] [Commented] (YARN-5280) Allow YARN containers to run with Java Security Manager

2017-03-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15891486#comment-15891486
 ] 

Hudson commented on YARN-5280:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11328 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11328/])
YARN-5280. Allow YARN containers to run with Java Security Manager (rkanter: 
rev 6f6dfe0202249c129b36edfd145a2224140139cc)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/LinuxContainerRuntimeConstants.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/JavaSandboxLinuxContainerRuntime.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/TestJavaSandboxLinuxContainerRuntime.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DelegatingLinuxContainerRuntime.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/executor/ContainerPrepareContext.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/resources/java.policy
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java


> Allow YARN containers to run with Java Security Manager
> ---
>
> Key: YARN-5280
> URL: https://issues.apache.org/jira/browse/YARN-5280
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager, yarn
>Affects Versions: 2.6.4
>Reporter: Greg Phillips
>Assignee: Greg Phillips
>Priority: Minor
>  Labels: oct16-medium
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-5280.001.patch, YARN-5280.002.patch, 
> YARN-5280.003.patch, YARN-5280.004.patch, YARN-5280.005.patch, 
> YARN-5280.006.patch, YARN-5280.007.patch, YARN-5280.008.patch, 
> YARN-5280.patch, YARNContainerSandbox.pdf
>
>
> YARN applications have the ability to perform privileged actions which have 
> the potential to add instability into the cluster. The Java Security Manager 
> can be used to prevent users from running privileged actions while still 
> allowing their core data processing use cases. 
> Introduce a YARN flag which will allow a Hadoop administrator to enable the 
> Java Security Manager for user code, while still providing complete 
> permissions to core Hadoop libraries.






[jira] [Commented] (YARN-6263) NMTokenSecretManagerInRM.createAndGetNMToken is not thread safe

2017-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15891469#comment-15891469
 ] 

Hadoop QA commented on YARN-6263:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
1s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 40m 51s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.TestResourceTrackerService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6263 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855500/YARN-6263.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 87bf2606da50 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 899d5c4 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/15127/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15127/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15127/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> 

[jira] [Commented] (YARN-6259) Support pagination and optimize data transfer with zero-copy approach for containerlogs REST API in NMWebServices

2017-03-01 Thread Tao Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15891423#comment-15891423
 ] 

Tao Yang commented on YARN-6259:


Hi, [~rohithsharma]. Thank you for looking into this issue.
{quote}
I am not sure about how use cases will be served
{quote}
One common use case is to request the last part of a log and then easily skip 
to another part while diagnosing a problem; compared with loading the entire 
log, this can save a lot of time. We have an external system that tracks apps 
and shows their container logs, and most of the logs are very large, so a 
pagination function is needed; the newly added containerlogs-info REST API is 
a part of it.

{quote}
Instead of adding new LogInfo file, there is ContainerLogInfo file which can be 
used for pageSize and pageIndex.
{quote}
ContainerLogInfo does not seem to exist in branch-2.8; perhaps it's in a later version?

> Support pagination and optimize data transfer with zero-copy approach for 
> containerlogs REST API in NMWebServices
> -
>
> Key: YARN-6259
> URL: https://issues.apache.org/jira/browse/YARN-6259
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.8.1
>Reporter: Tao Yang
>Assignee: Tao Yang
> Attachments: YARN-6259.001.patch
>
>
> Currently the containerlogs REST API in NMWebServices reads and sends the 
> entire content of container logs. Most container logs are large, so it's 
> useful to support pagination.
> * Add pagesize and pageindex parameters for containerlogs REST API
> {code}
> URL: http:///ws/v1/node/containerlogs//
> QueryParams:
>   pagesize - max bytes of one page, default 1 MB
>   pageindex - index of the required page, default 0; can be negative (set -1 
> to get the last page's content)
> {code}
> * Add a containerlogs-info REST API, since sometimes we need to know the 
> totalSize/pageSize/pageCount info of a log 
> {code}
> URL: 
> http:///ws/v1/node/containerlogs-info//
> QueryParams:
>   pagesize - max bytes of one page, default 1 MB
> Response example:
>   {"logInfo":{"totalSize":2497280,"pageSize":1048576,"pageCount":3}}
> {code}
> Moreover, the data transfer pipeline (disk --> read buffer --> NM buffer --> 
> socket buffer) can be optimized to a pipeline (disk --> read buffer --> socket 
> buffer) with a zero-copy approach.
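
As a minimal sketch of the zero-copy idea (illustrative code, not the 
NMWebServices implementation): {{FileChannel.transferTo()}} lets the kernel 
move file bytes toward the socket without staging them in a user-space buffer.

{code:java}
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

class ZeroCopySketch {
  /** Sends one "page" of the log file to the channel; returns bytes sent. */
  static long sendPage(String logPath, long offset, long pageSize,
                       WritableByteChannel out) throws IOException {
    try (FileChannel in = FileChannel.open(Paths.get(logPath),
        StandardOpenOption.READ)) {
      long remaining = Math.min(pageSize, in.size() - offset);
      long sent = 0;
      while (sent < remaining) {
        // transferTo may send fewer bytes than requested, hence the loop.
        sent += in.transferTo(offset + sent, remaining - sent, out);
      }
      return sent;
    }
  }
}
{code}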






[jira] [Commented] (YARN-6050) AMs can't be scheduled on racks or nodes

2017-03-01 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15891420#comment-15891420
 ] 

Robert Kanter commented on YARN-6050:
-

[~leftnoteasy], I've been working on fixing this, and here's the code I 
currently have:
{code:java}
  public static int getApplicableNodeCountForAM(RMContext rmContext,
 Configuration conf, List<ResourceRequest> amReqs) {
Set<NodeId> nodesForReqs = new HashSet<>();
for (ResourceRequest amReq : amReqs) {
  if (amReq.getRelaxLocality() &&
  !amReq.getResourceName().equals(ResourceRequest.ANY)) {
nodesForReqs.addAll(
rmContext.getScheduler().getClusterNodeIdsByResourceName(
amReq.getResourceName()));
  }
}

if (YarnConfiguration.areNodeLabelsEnabled(conf)) {
  RMNodeLabelsManager labelManager = rmContext.getNodeLabelManager();
  String amNodeLabelExpression = amReqs.get(0).getNodeLabelExpression();
  amNodeLabelExpression = (amNodeLabelExpression == null
  || amNodeLabelExpression.trim().isEmpty())
  ? RMNodeLabelsManager.NO_LABEL : amNodeLabelExpression;
  Map<String, Set<NodeId>> labelsToNodes =
  labelManager.getLabelsToNodes(
  Collections.singleton(amNodeLabelExpression));
  if (labelsToNodes.containsKey(amNodeLabelExpression)) {
Set<NodeId> nodesForLabels = labelsToNodes.get(amNodeLabelExpression);
if (nodesForReqs.isEmpty()) {
  return nodesForLabels.size();
}
return Sets.intersection(nodesForLabels, nodesForReqs).size();
  }
}

if (nodesForReqs.isEmpty()) {
  return rmContext.getScheduler().getNumClusterNodes();
}
return nodesForReqs.size();
  }
{code}
Basically, I'm getting {{NodeId}}'s for each of the resource requests and for 
the node labels, and then finding the ones that satisfy both.  The problem is 
that {{getLabelsToNodes}} returns _all_ {{NodeId}}'s, instead of just the 
active ones.  For example, if {{nodeA:1234}} has label "label1", then 
{{getLabelsToNodes("label1")}} returns {{nodeA:1234}} and {{nodeA:0}}.  In this 
case, we don't want {{nodeA:0}}, and in fact, 
{{getActiveNMCountPerLabel("label1")}} only returns {{1}}, not {{2}}.  Looking 
at {{RMNodeLabel}}, it only keeps track of the count for the active nodes for 
the label, and not the {{NodeId}}'s themselves.  Do you know if there was any 
reason for that?  If not, I could refactor {{RMNodeLabel}} to keep track of the 
active {{NodeId}} and that would solve my problem.
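
For what it's worth, the refactoring I have in mind is roughly the following 
sketch; the class and method names are hypothetical, not the current 
{{RMNodeLabel}} API:

{code:java}
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: keep the active NodeIds per label instead of only a
// count, so callers can intersect them with the NodeIds from the requests.
class ActiveNodesForLabel<NodeId> {
  private final Set<NodeId> activeNodes = new HashSet<>();

  synchronized void nodeActivated(NodeId node)   { activeNodes.add(node); }
  synchronized void nodeDeactivated(NodeId node) { activeNodes.remove(node); }

  /** The active-NM count falls out of the set, so both views stay consistent. */
  synchronized int getActiveNMCount() {
    return activeNodes.size();
  }

  synchronized Set<NodeId> getActiveNodeIds() {
    return Collections.unmodifiableSet(new HashSet<>(activeNodes));
  }
}
{code}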

> AMs can't be scheduled on racks or nodes
> 
>
> Key: YARN-6050
> URL: https://issues.apache.org/jira/browse/YARN-6050
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-6050.001.patch, YARN-6050.002.patch, 
> YARN-6050.003.patch, YARN-6050.004.patch, YARN-6050.005.patch, 
> YARN-6050.006.patch, YARN-6050.007.patch, YARN-6050.008.patch
>
>
> Yarn itself supports rack/node aware scheduling for AMs; however, there 
> currently are two problems:
> # To specify hard or soft rack/node requests, you have to specify more than 
> one {{ResourceRequest}}.  For example, if you want to schedule an AM only on 
> "rackA", you have to create two {{ResourceRequest}}, like this:
> {code}
> ResourceRequest.newInstance(PRIORITY, ANY, CAPABILITY, NUM_CONTAINERS, false);
> ResourceRequest.newInstance(PRIORITY, "rackA", CAPABILITY, NUM_CONTAINERS, 
> true);
> {code}
> The problem is that the Yarn API doesn't actually allow you to specify more 
> than one {{ResourceRequest}} in the {{ApplicationSubmissionContext}}.  The 
> current behavior is to either build one from {{getResource}} or directly from 
> {{getAMContainerResourceRequest}}, depending on if 
> {{getAMContainerResourceRequest}} is null or not.  We'll need to add a third 
> method, say {{getAMContainerResourceRequests}}, which takes a list of 
> {{ResourceRequest}} so that clients can specify the multiple resource 
> requests.
> # There are some places where things are hardcoded to overwrite what the 
> client specifies.  These are pretty straightforward to fix.






[jira] [Comment Edited] (YARN-6254) Provide a mechanism to whitelist the RM REST API clients

2017-03-01 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15891400#comment-15891400
 ] 

Allen Wittenauer edited comment on YARN-6254 at 3/2/17 1:17 AM:


Hosts should be able to be controlled via the Service Level Authorization hook. 
 See 
https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/ServiceLevelAuth.html
 for more info.

EDIT:

Never mind.  I forgot that no one expanded that to the web APIs. :(


was (Author: aw):
Hosts should be able to be controlled via the Service Level Authorization hook. 
 See 
https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/ServiceLevelAuth.html
 for more info.

> Provide a mechanism to whitelist the RM REST API clients
> 
>
> Key: YARN-6254
> URL: https://issues.apache.org/jira/browse/YARN-6254
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: resourcemanager
>Affects Versions: 2.7.1
>Reporter: Aroop Maliakkal
>
> Currently the RM REST APIs are open to everyone. Can we provide a whitelist 
> feature so that we can control from which hosts, and at what frequency, 
> clients can hit the RM REST APIs?
> Thanks,
> /Aroop






[jira] [Commented] (YARN-6254) Provide a mechanism to whitelist the RM REST API clients

2017-03-01 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15891400#comment-15891400
 ] 

Allen Wittenauer commented on YARN-6254:


Hosts should be able to be controlled via the Service Level Authorization hook. 
 See 
https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/ServiceLevelAuth.html
 for more info.

> Provide a mechanism to whitelist the RM REST API clients
> 
>
> Key: YARN-6254
> URL: https://issues.apache.org/jira/browse/YARN-6254
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: resourcemanager
>Affects Versions: 2.7.1
>Reporter: Aroop Maliakkal
>
> Currently the RM REST APIs are open to everyone. Can we provide a whitelist 
> feature so that we can control from which hosts, and at what frequency, 
> clients can hit the RM REST APIs?
> Thanks,
> /Aroop






[jira] [Created] (YARN-6265) yarn.resourcemanager.fail-fast is used inconsistently

2017-03-01 Thread Daniel Templeton (JIRA)
Daniel Templeton created YARN-6265:
--

 Summary: yarn.resourcemanager.fail-fast is used inconsistently
 Key: YARN-6265
 URL: https://issues.apache.org/jira/browse/YARN-6265
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.8.0
Reporter: Daniel Templeton


In the capacity scheduler, the property controls whether an app submitted to a 
missing or bad queue should be killed.  In the state store, the property 
controls whether a state store op failure should cause the RM to exit in 
non-HA mode.  Those are two very different things, and they should be separated.
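
A sketch of the separation being proposed; both property names below are 
hypothetical:

{code}
<property>
  <!-- Hypothetical: would only govern apps submitted to a missing/bad queue -->
  <name>yarn.scheduler.capacity.fail-on-invalid-queue</name>
  <value>false</value>
</property>
<property>
  <!-- Hypothetical: would only govern state store failures in non-HA mode -->
  <name>yarn.resourcemanager.state-store.fail-fast</name>
  <value>true</value>
</property>
{code}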






[jira] [Created] (YARN-6264) Resource comparison should depend on policy

2017-03-01 Thread Yufei Gu (JIRA)
Yufei Gu created YARN-6264:
--

 Summary: Resource comparison should depend on policy
 Key: YARN-6264
 URL: https://issues.apache.org/jira/browse/YARN-6264
 Project: Hadoop YARN
  Issue Type: Bug
  Components: fairscheduler
Reporter: Yufei Gu
Assignee: Yufei Gu


In {{canRunAppAM()}}, we should use a policy-aware resource comparison instead 
of {{Resources.fitsIn()}} to determine whether the queue has enough resources 
for the AM. 
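
A sketch of the intended shape (only {{Resources.fitsIn()}} is the real current 
call; the policy hook named below is hypothetical):
{code}
// Current (real): component-wise comparison that ignores the queue's policy.
boolean canRun = Resources.fitsIn(amResourceRequest, queueAvailableResource);

// Proposed shape (hypothetical method name): delegate to the queue's
// scheduling policy, so a policy that only schedules on memory does not
// reject an AM over, say, a vcores mismatch.
boolean canRun2 = queue.getPolicy()
    .checkIfFits(amResourceRequest, queueAvailableResource);
{code}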



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3742) YARN RM will shut down if ZKClient creation times out

2017-03-01 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891385#comment-15891385
 ] 

Daniel Templeton commented on YARN-3742:


This is actually the same issue as HADOOP-10584, though the scope is a little 
different.  For some reason, there's a lot of code that wants to shoot the RM 
instead of dropping it into standby.  My proposed changes for HADOOP-10584 
would get rid of some of that code, but there's more inside YARN.  I think a 
good approach here would be to change the {{RMFatalEvent}} handler to 
transition to standby as the default reaction, with shutdown as a special case 
for certain types of failures.  Independent of HADOOP-10584, that's probably a 
good thing to do.  I'll see if I can throw a patch together quickly.
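
Roughly this shape, with illustrative event-type and method names (not the 
final patch):
{code}
// Sketch of the proposed default-to-standby handling -- illustrative only.
@Override
public void handle(RMFatalEvent event) {
  switch (event.getType()) {
    case CRITICAL_INTERNAL_ERROR:  // hypothetical "truly fatal" case
      ExitUtil.terminate(1, "RM fatal event: " + event);
      break;
    default:
      // Default reaction: step down and let the other RM take over.
      transitionToStandby();
  }
}
{code}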

> YARN RM  will shut down if ZKClient creation times out 
> ---
>
> Key: YARN-3742
> URL: https://issues.apache.org/jira/browse/YARN-3742
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Daniel Templeton
>
> The RM goes down showing the following stacktrace if the ZK client connection 
> fails to be created. We should not exit but transition to standby, stop doing 
> things, and let the other RM take over.
> {code}
> 2015-04-19 01:22:20,513  FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Received a 
> org.apache.hadoop.yarn.server.resourcemanager.RMFatalEvent of type 
> STATE_STORE_OP_FAILED. Cause:
> java.io.IOException: Wait for ZKClient creation timed out
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithCheck(ZKRMStateStore.java:1066)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithRetries(ZKRMStateStore.java:1090)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.existsWithRetries(ZKRMStateStore.java:996)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.updateApplicationStateInternal(ZKRMStateStore.java:643)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$UpdateAppTransition.transition(RMStateStore.java:162)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$UpdateAppTransition.transition(RMStateStore.java:147)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:362)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.handleStoreEvent(RMStateStore.java:806)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:879)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:874)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6263) NMTokenSecretManagerInRM.createAndGetNMToken is not thread safe

2017-03-01 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6263:
-
Attachment: YARN-6263.01.patch

> NMTokenSecretManagerInRM.createAndGetNMToken is not thread safe
> ---
>
> Key: YARN-6263
> URL: https://issues.apache.org/jira/browse/YARN-6263
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-alpha2
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-6263.01.patch
>
>
> NMTokenSecretManagerInRM.createAndGetNMToken modifies values of a 
> ConcurrentHashMap, which are of type HashSet, but it only acquires the read 
> lock.
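
The hazard in miniature (field names are illustrative of the pattern, not the 
actual class):
{code}
// A ConcurrentHashMap only protects its own structure; mutating a *value*
// under the read lock still races with other threads touching the same set.
ReadWriteLock rwLock = new ReentrantReadWriteLock();
Map<ApplicationAttemptId, Set<NodeId>> appNodeKeys = new ConcurrentHashMap<>();

rwLock.readLock().lock();                     // buggy: shared access only
try {
  appNodeKeys.get(attemptId).add(nodeId);     // value mutation -> data race
} finally {
  rwLock.readLock().unlock();
}

// Fix sketch: take the write lock (or synchronize on the set) for mutations.
rwLock.writeLock().lock();
try {
  appNodeKeys.get(attemptId).add(nodeId);
} finally {
  rwLock.writeLock().unlock();
}
{code}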



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Moved] (YARN-6263) NMTokenSecretManagerInRM.createAndGetNMToken is not thread safe

2017-03-01 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen moved MAPREDUCE-6853 to YARN-6263:
-

Affects Version/s: (was: 3.0.0-alpha2)
   3.0.0-alpha2
 Target Version/s: 3.0.0-alpha3  (was: 3.0.0-alpha3)
  Component/s: (was: yarn)
   yarn
  Key: YARN-6263  (was: MAPREDUCE-6853)
  Project: Hadoop YARN  (was: Hadoop Map/Reduce)

> NMTokenSecretManagerInRM.createAndGetNMToken is not thread safe
> ---
>
> Key: YARN-6263
> URL: https://issues.apache.org/jira/browse/YARN-6263
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-alpha2
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>
> NMTokenSecretManagerInRM.createAndGetNMToken modifies values of a 
> ConcurrentHashMap, which are of type HashSet, but it only acquires the read 
> lock.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6262) org.apache.hadoop.security.Groups should include YARN in the privacy scope

2017-03-01 Thread Daniel Templeton (JIRA)
Daniel Templeton created YARN-6262:
--

 Summary: org.apache.hadoop.security.Groups should include YARN in 
the privacy scope
 Key: YARN-6262
 URL: https://issues.apache.org/jira/browse/YARN-6262
 Project: Hadoop YARN
  Issue Type: Bug
  Components: security
Affects Versions: 2.8.0
Reporter: Daniel Templeton
Priority: Minor


{{Groups}} is currently declared as 
{{@InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce"})}}, but it's also 
used from YARN.
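
Presumably a one-line fix along these lines:
{code}
// Sketch of the proposed change in org.apache.hadoop.security.Groups:
@InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce", "YARN"})
public class Groups {
  ...
}
{code}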



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6261) YARN queue mapping fails for users with no group

2017-03-01 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891360#comment-15891360
 ] 

Daniel Templeton commented on YARN-6261:


Thanks for the patch, [~pvillard].  Unfortunately, the {{getGroups()}} method 
makes no distinction between a user with no groups and a failure retrieving a 
user from the cache.  I think the cheapest way to deal with that is to add a 
log message that explains why the group mapping is not being applied.  Seems 
like it should be warn or info, probably warn.  It's an undesirable scenario 
that should be surfaced no matter why it fails.  The better solution would be 
to update the API to throw different exceptions, but that could be a pretty big 
change.
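
Something like this for the cheap option (the surrounding context is 
illustrative; the real code is the queue-mapping path in the capacity 
scheduler):
{code}
// Sketch only -- illustrative placement-rule context.
List<String> userGroups = Collections.emptyList();
try {
  userGroups = groups.getGroups(user);
} catch (IOException e) {
  // With the current API we cannot distinguish "user has no groups"
  // from a lookup failure, so warn and fall back to the default queue.
  LOG.warn("Failed to resolve groups for user " + user
      + "; GROUP queue mappings will not be applied.", e);
}
{code}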

> YARN queue mapping fails for users with no group
> 
>
> Key: YARN-6261
> URL: https://issues.apache.org/jira/browse/YARN-6261
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>
> *Issue:* 
> Since Hadoop group mapping can be overridden (to get groups from an AD for 
> example), it is possible to be in a situation where a user does not have any 
> group (because the user is not in the AD but only defined locally):
> {noformat}
> $ hdfs groups zeppelin
> zeppelin:
> {noformat}
> In this case, if the YARN Queue Mapping is configured and contains at least 
> one mapping of {{MappingType.GROUP}}, it won't be possible to get a queue for 
> the job submitted by such a user and the job won't be submitted at all.
> *Expected result:* 
> In case a user does not have any group and no mapping is defined for this 
> user, the default queue should be assigned whatever the queue mapping 
> definition is.
> *Workaround:* 
> A workaround is to define a group mapping of {{MappingType.USER}} for the 
> given user before defining any mapping of {{MappingType.GROUP}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6261) YARN queue mapping fails for users with no group

2017-03-01 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton reassigned YARN-6261:
--

Assignee: Pierre Villard  (was: Daniel Templeton)

> YARN queue mapping fails for users with no group
> 
>
> Key: YARN-6261
> URL: https://issues.apache.org/jira/browse/YARN-6261
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>
> *Issue:* 
> Since Hadoop group mapping can be overridden (to get groups from an AD for 
> example), it is possible to be in a situation where a user does not have any 
> group (because the user is not in the AD but only defined locally):
> {noformat}
> $ hdfs groups zeppelin
> zeppelin:
> {noformat}
> In this case, if the YARN Queue Mapping is configured and contains at least 
> one mapping of {{MappingType.GROUP}}, it won't be possible to get a queue for 
> the job submitted by such a user and the job won't be submitted at all.
> *Expected result:* 
> In case a user does not have any group and no mapping is defined for this 
> user, the default queue should be assigned whatever the queue mapping 
> definition is.
> *Workaround:* 
> A workaround is to define a group mapping of {{MappingType.USER}} for the 
> given user before defining any mapping of {{MappingType.GROUP}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6261) YARN queue mapping fails for users with no group

2017-03-01 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton reassigned YARN-6261:
--

Assignee: Daniel Templeton

> YARN queue mapping fails for users with no group
> 
>
> Key: YARN-6261
> URL: https://issues.apache.org/jira/browse/YARN-6261
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Pierre Villard
>Assignee: Daniel Templeton
>
> *Issue:* 
> Since Hadoop group mapping can be overridden (to get groups from an AD for 
> example), it is possible to be in a situation where a user does not have any 
> group (because the user is not in the AD but only defined locally):
> {noformat}
> $ hdfs groups zeppelin
> zeppelin:
> {noformat}
> In this case, if the YARN Queue Mapping is configured and contains at least 
> one mapping of {{MappingType.GROUP}}, it won't be possible to get a queue for 
> the job submitted by such a user and the job won't be submitted at all.
> *Expected result:* 
> In case a user does not have any group and no mapping is defined for this 
> user, the default queue should be assigned whatever the queue mapping 
> definition is.
> *Workaround:* 
> A workaround is to define a group mapping of {{MappingType.USER}} for the 
> given user before defining any mapping of {{MappingType.GROUP}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2962) ZKRMStateStore: Limit the number of znodes under a znode

2017-03-01 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891302#comment-15891302
 ] 

Daniel Templeton commented on YARN-2962:


Sweet!  Thanks for the rebase, [~varun_saxena].  It's been a while, so I'm 
starting over with a fresh review. :)  Lots of minor points, but no major 
issues.

# In {{ZKRMStateStore.loadRMAppState()}}, I think {{leafNodePath}} should be 
{{parentNodePath}} to be clearer: {{String leafNodePath = getNodePath(appRoot, 
childNodeName);}}
# In {{ZKRMStateStore.loadRMAppState()}}, the final _if_ in the _for_ shouldn't 
be performed if {{splitIndex}} is 0:
{code}
if (splitIndex != appIdNodeSplitIndex && !appNodeFound) {
  // If no loaded app exists for a particular split index and the split
  // index for which apps are being loaded is not the one configured, then
  // we do not need to keep track of this hierarchy for storing/updating/
  // removing app/app attempt znodes.
  rmAppRootHierarchies.remove(splitIndex);
}
{code}
It doesn't hurt anything, though.  Maybe best to just add a comment that says 
it's OK to remove something that doesn't exist?
# In {{ZKRMStateStore.loadApplicationAttemptState()}}, the {{if 
(LOG.isDebugEnabled())}} is superfluous.  The arg to the log call doesn't cost 
anything to create.
# {{ZKRMStateStore.checkRemoveParentAppNode()}} is missing the description for 
the {{@throws}} tag.  Same in both {{getLeafAppIdNodePath()}} methods.
# In {{ZKRMStateStore.checkRemoveParentAppNode()}}, the last log line isn't 
wrapped in an _if_ like all the others.
# In {{ZKRMStateStore.getLeafAppIdNodePath()}}, I'd prefer that we didn't do 
assignments to the parameters.
# In {{ZKRMStateStore.getLeafAppIdNodePath()}}, the log line isn't wrapped in 
an _if_ like all the others.
# This is maybe a bit pedantic, but shouldn't the exception in 
{{ZKRMStateStore.storeApplicationAttemptStateInternal()}} be a 
{{YarnException}} instead of a {{YarnRuntimeException}}?  Unchecked exceptions 
should be unexpected.  Failing to store an app in ZK is an obvious possibility.
# Seems like the new logic in 
{{ZKRMStateStore.storeApplicationAttemptStateInternal()}} and 
{{ZKRMStateStore.updateApplicationAttemptStateInternal()}} could be refactored 
into a shared helper method.  You could also use it from 
{{removeApplicationAttemptInternal()}}, {{removeApplicationStateInternal()}}, 
and {{removeApplication()}}.
# In {{ZKRMStateStore.getSplitAppNodeParent()}}, can we add a comment to 
explain why we subtract 1 from (length - split index)?
# Instead of {{TestZKRMStateStore.createPath()}}, can we use a Guava {{Joiner}}?
# {{appId}} isn't used in {{TestZKRMStateStore.verifyLoadedAttempt()}}
# Super minor, but in {{TestZKRMStateStore.testAppNodeSplit()}}, it would be 
nice to visually separate the app2 code from the app1 code:
{code}
// Store attempt associated with app1.
Token appAttemptToken1 =
    generateAMRMToken(attemptId1, appTokenMgr);
SecretKey clientTokenKey1 =
    clientToAMTokenMgr.createMasterKey(attemptId1);
ContainerId containerId1 =
    ConverterUtils.toContainerId("container_1352994193343_0001_01_01");
storeAttempt(store, attemptId1, containerId1.toString(), appAttemptToken1,
    clientTokenKey1, dispatcher);
String appAttemptIdStr2 = "appattempt_1352994193343_0001_02";
ApplicationAttemptId attemptId2 =
    ConverterUtils.toApplicationAttemptId(appAttemptIdStr2);

// Store attempt associated with app2.
Token appAttemptToken2 =
    generateAMRMToken(attemptId2, appTokenMgr);
SecretKey clientTokenKey2 =
    clientToAMTokenMgr.createMasterKey(attemptId2);
Credentials attemptCred = new Credentials();
attemptCred.addSecretKey(RMStateStore.AM_CLIENT_TOKEN_MASTER_KEY_NAME,
    clientTokenKey2.getEncoded());
ContainerId containerId2 =
    ConverterUtils.toContainerId("container_1352994193343_0001_02_01");
storeAttempt(store, attemptId2, containerId2.toString(), appAttemptToken2,
    clientTokenKey2, dispatcher);
{code}
Note that the last two statements in the app1 section are actually for app2.
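
On the {{Joiner}} point, I mean something like this (the constants are 
illustrative):
{code}
import com.google.common.base.Joiner;

String path = Joiner.on("/").join(ROOT_ZNODE_NAME, RM_APP_ROOT, appIdStr);
{code}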


> ZKRMStateStore: Limit the number of znodes under a znode
> 
>
> Key: YARN-2962
> URL: https://issues.apache.org/jira/browse/YARN-2962
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Varun Saxena
>Priority: Critical
> Attachments: YARN-2962.006.patch, YARN-2962.007.patch, 
> YARN-2962.01.patch, YARN-2962.04.patch, YARN-2962.05.patch, 
> YARN-2962.2.patch, YARN-2962.3.patch
>
>
> We ran into this issue where we were hitting the default ZK server message 
> size configs, primarily because 

[jira] [Commented] (YARN-6027) Support fromid(offset) filter for /flows API

2017-03-01 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891269#comment-15891269
 ] 

Sangjin Lee commented on YARN-6027:
---

Committing the addendum patch to YARN-5355-branch-2.

> Support fromid(offset) filter for /flows API
> 
>
> Key: YARN-6027
> URL: https://issues.apache.org/jira/browse/YARN-6027
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
> Fix For: YARN-5355, YARN-5355-branch-2
>
> Attachments: YARN-6027-YARN-5355.0001.patch, 
> YARN-6027-YARN-5355.0002.patch, YARN-6027-YARN-5355.0003.patch, 
> YARN-6027-YARN-5355.0004.patch, YARN-6027-YARN-5355.0005.patch, 
> YARN-6027-YARN-5355.0006.patch, YARN-6027-YARN-5355.0007.patch, 
> YARN-6027-YARN-5355.0008.patch, YARN-6027-YARN-5355-branch-2.01.patch
>
>
> In YARN-5585 , fromId is supported for retrieving entities. We need similar 
> filter for flows/flowRun apps and flow run and flow as well. 
> Along with supporting fromId, this JIRA should also discuss following points
> * Should we throw an exception for entities/entity retrieval if duplicates 
> found?
> * TimelieEntity :
> ** Should equals method also check for idPrefix?
> ** Does idPrefix is part of identifiers?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6027) Support fromid(offset) filter for /flows API

2017-03-01 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated YARN-6027:
--
Attachment: YARN-6027-YARN-5355-branch-2.01.patch

The addendum patch for branch-2. This restores the original version of the code 
that was refactored. The tests pass.
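
For the record, the failure being fixed is the usual Jersey 1.x vs JAX-RS 2 
mismatch: {{ClientResponse.getStatusInfo()}} does not exist in the older client 
API on branch-2. The restored code presumably sticks to the Jersey 1.x 
accessor, e.g.:
{code}
// Jersey 1.x-compatible status check (sketch; assumes this is the style
// the restored code uses):
ClientResponse resp = client.resource(uri).get(ClientResponse.class);
assertEquals(ClientResponse.Status.OK, resp.getClientResponseStatus());
{code}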

> Support fromid(offset) filter for /flows API
> 
>
> Key: YARN-6027
> URL: https://issues.apache.org/jira/browse/YARN-6027
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
> Fix For: YARN-5355, YARN-5355-branch-2
>
> Attachments: YARN-6027-YARN-5355.0001.patch, 
> YARN-6027-YARN-5355.0002.patch, YARN-6027-YARN-5355.0003.patch, 
> YARN-6027-YARN-5355.0004.patch, YARN-6027-YARN-5355.0005.patch, 
> YARN-6027-YARN-5355.0006.patch, YARN-6027-YARN-5355.0007.patch, 
> YARN-6027-YARN-5355.0008.patch, YARN-6027-YARN-5355-branch-2.01.patch
>
>
> In YARN-5585 , fromId is supported for retrieving entities. We need similar 
> filter for flows/flowRun apps and flow run and flow as well. 
> Along with supporting fromId, this JIRA should also discuss following points
> * Should we throw an exception for entities/entity retrieval if duplicates 
> found?
> * TimelieEntity :
> ** Should equals method also check for idPrefix?
> ** Does idPrefix is part of identifiers?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5892) Capacity Scheduler: Support user-specific minimum user limit percent

2017-03-01 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891256#comment-15891256
 ] 

Eric Payne commented on YARN-5892:
--

{quote}
So I preferred to keep the semantics closer to the existing ones. I propose to 
introduce per-user weights instead of overriding the MULP: the scheduler will 
continue to assign MULP% shares to each "unit user", but different users can 
have different weights that adjust their quota as a multiple of a "unit user's" 
share. Also, user weights can be used independently of MULP, because in the 
future we may want to replace the user-limit concept with different ones (like 
setting a quota for each user, giving weighted fair shares to users, etc.).
{quote}
Thanks [~leftnoteasy] for your review.

In my mind, overriding the queue's MULP with a user-specific MULP is equivalent 
to adding weights for specific users, and would be implemented in a similar 
way. If I understand correctly, you are saying that the weighted approach gives 
more flexibility for future features like user quotas, weighted user fair 
share, etc. Is that correct?
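
For concreteness (my own arithmetic, not from either proposal): a user's 
guarantee under the weight scheme works out to weight x MULP, so the example 
from the description maps over directly:
{noformat}
MULP = 25%          => each "unit user" is guaranteed 25% of the queue
weight(jane) = 3    => jane counts as 3 unit users: 3 x 25% = 75%,
                       i.e. the same as a user-specific MULP of 75%
{noformat}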

> Capacity Scheduler: Support user-specific minimum user limit percent
> 
>
> Key: YARN-5892
> URL: https://issues.apache.org/jira/browse/YARN-5892
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: YARN-5892.001.patch, YARN-5892.002.patch
>
>
> Currently, in the capacity scheduler, the {{minimum-user-limit-percent}} 
> property is per queue. A cluster admin should be able to set the minimum user 
> limit percent on a per-user basis within the queue.
> This functionality is needed so that when intra-queue preemption is enabled 
> (YARN-4945 / YARN-2113), some users can be deemed as more important than 
> other users, and resources from VIP users won't be as likely to be preempted.
> For example, if the {{getstuffdone}} queue has a MULP of 25 percent, but user 
> {{jane}} is a power user of queue {{getstuffdone}} and needs to be guaranteed 
> 75 percent, the properties for {{getstuffdone}} and {{jane}} would look like 
> this:
> {code}
> <property>
>   <name>yarn.scheduler.capacity.root.getstuffdone.minimum-user-limit-percent</name>
>   <value>25</value>
> </property>
> <property>
>   <name>yarn.scheduler.capacity.root.getstuffdone.jane.minimum-user-limit-percent</name>
>   <value>75</value>
> </property>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5280) Allow YARN containers to run with Java Security Manager

2017-03-01 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891240#comment-15891240
 ] 

Robert Kanter commented on YARN-5280:
-

[~gphillips], you're right: that was {{TestContainerManagerSecurity}} being 
flaky.

+1

Will commit soon

> Allow YARN containers to run with Java Security Manager
> ---
>
> Key: YARN-5280
> URL: https://issues.apache.org/jira/browse/YARN-5280
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager, yarn
>Affects Versions: 2.6.4
>Reporter: Greg Phillips
>Assignee: Greg Phillips
>Priority: Minor
>  Labels: oct16-medium
> Attachments: YARN-5280.001.patch, YARN-5280.002.patch, 
> YARN-5280.003.patch, YARN-5280.004.patch, YARN-5280.005.patch, 
> YARN-5280.006.patch, YARN-5280.007.patch, YARN-5280.008.patch, 
> YARN-5280.patch, YARNContainerSandbox.pdf
>
>
> YARN applications have the ability to perform privileged actions which have 
> the potential to add instability into the cluster. The Java Security Manager 
> can be used to prevent users from running privileged actions while still 
> allowing their core data processing use cases. 
> Introduce a YARN flag which will allow a Hadoop administrator to enable the 
> Java Security Manager for user code, while still providing complete 
> permissions to core Hadoop libraries.
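
For readers unfamiliar with the mechanism, that trusted/untrusted split is 
typically expressed in standard Java policy-file syntax; a minimal sketch 
(paths are illustrative, and this is not the policy the patch generates):
{code}
// Illustrative java.policy sketch, not the grammar produced by the patch.
grant codeBase "file:${hadoop.home.dir}/share/hadoop/-" {
  permission java.security.AllPermission;        // trusted core Hadoop jars
};
grant codeBase "file:${container.work.dir}/-" {
  // user code: data-processing essentials only; no exec, exit, etc.
  permission java.io.FilePermission "<<ALL FILES>>", "read,write";
  permission java.net.SocketPermission "*", "connect,resolve";
};
{code}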



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6027) Support fromid(offset) filter for /flows API

2017-03-01 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891235#comment-15891235
 ] 

Sangjin Lee edited comment on YARN-6027 at 3/1/17 10:48 PM:


The YARN-5355-branch-2 branch is failing compilation at 
{{AbstractTimelineReaderHBaseTestBase.java}}:
{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
(default-testCompile) on project 
hadoop-yarn-server-timelineservice-hbase-tests: Compilation failure: 
Compilation failure:
[ERROR] 
/Users/sjlee/git/hadoop-ats/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/AbstractTimelineReaderHBaseTestBase.java:[90,9]
 method does not override or implement a method from a supertype
[ERROR] 
/Users/sjlee/git/hadoop-ats/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/AbstractTimelineReaderHBaseTestBase.java:[122,29]
 cannot find symbol
[ERROR] symbol:   method getStatusInfo()
[ERROR] location: variable resp of type com.sun.jersey.api.client.ClientResponse
[ERROR] 
/Users/sjlee/git/hadoop-ats/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/AbstractTimelineReaderHBaseTestBase.java:[126,34]
 cannot find symbol
[ERROR] symbol:   method getStatusInfo()
[ERROR] location: variable resp of type com.sun.jersey.api.client.ClientResponse
[ERROR] 
/Users/sjlee/git/hadoop-ats/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/AbstractTimelineReaderHBaseTestBase.java:[140,13]
 cannot find symbol
[ERROR] symbol:   method getStatusInfo()
[ERROR] location: variable resp of type com.sun.jersey.api.client.ClientResponse
{noformat}

I'll post an addendum patch for the branch-2 commit.


was (Author: sjlee0):
The YARN-5355-branch-2 branch is failing compilation at 
{{AbstractTimelineReaderHBaseTestBase.java}}:
{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
(default-testCompile) on project 
hadoop-yarn-server-timelineservice-hbase-tests: Compilation failure: 
Compilation failure:
[ERROR] 
/Users/sjlee/git/hadoop-ats/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/AbstractTimelineReaderHBaseTestBase.java:[90,9]
 method does not override or implement a method from a supertype
[ERROR] 
/Users/sjlee/git/hadoop-ats/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/AbstractTimelineReaderHBaseTestBase.java:[122,29]
 cannot find symbol
[ERROR] symbol:   method getStatusInfo()
[ERROR] location: variable resp of type com.sun.jersey.api.client.ClientResponse
[ERROR] 
/Users/sjlee/git/hadoop-ats/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/AbstractTimelineReaderHBaseTestBase.java:[126,34]
 cannot find symbol
[ERROR] symbol:   method getStatusInfo()
[ERROR] location: variable resp of type com.sun.jersey.api.client.ClientResponse
[ERROR] 
/Users/sjlee/git/hadoop-ats/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/AbstractTimelineReaderHBaseTestBase.java:[140,13]
 cannot find symbol
[ERROR] symbol:   method getStatusInfo()
[ERROR] location: variable resp of type com.sun.jersey.api.client.ClientResponse
{noformat}

I'll file a small JIRA to fix it.

> Support fromid(offset) filter for /flows API
> 
>
> Key: YARN-6027
> URL: https://issues.apache.org/jira/browse/YARN-6027
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
> Fix For: YARN-5355, YARN-5355-branch-2
>
> Attachments: YARN-6027-YARN-5355.0001.patch, 
> YARN-6027-YARN-5355.0002.patch, YARN-6027-YARN-5355.0003.patch, 
> YARN-6027-YARN-5355.0004.patch, YARN-6027-YARN-5355.0005.patch, 
> YARN-6027-YARN-5355.0006.patch, YARN-6027-YARN-5355.0007.patch, 
> YARN-6027-YARN-5355.0008.patch
>
>
> In YARN-5585 , fromId is supported for retrieving entities. We need similar 
> filter for flows/flowRun apps and flow run and flow as 

[jira] [Commented] (YARN-6027) Support fromid(offset) filter for /flows API

2017-03-01 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891235#comment-15891235
 ] 

Sangjin Lee commented on YARN-6027:
---

The YARN-5355-branch-2 branch is failing compilation at 
{{AbstractTimelineReaderHBaseTestBase.java}}:
{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
(default-testCompile) on project 
hadoop-yarn-server-timelineservice-hbase-tests: Compilation failure: 
Compilation failure:
[ERROR] 
/Users/sjlee/git/hadoop-ats/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/AbstractTimelineReaderHBaseTestBase.java:[90,9]
 method does not override or implement a method from a supertype
[ERROR] 
/Users/sjlee/git/hadoop-ats/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/AbstractTimelineReaderHBaseTestBase.java:[122,29]
 cannot find symbol
[ERROR] symbol:   method getStatusInfo()
[ERROR] location: variable resp of type com.sun.jersey.api.client.ClientResponse
[ERROR] 
/Users/sjlee/git/hadoop-ats/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/AbstractTimelineReaderHBaseTestBase.java:[126,34]
 cannot find symbol
[ERROR] symbol:   method getStatusInfo()
[ERROR] location: variable resp of type com.sun.jersey.api.client.ClientResponse
[ERROR] 
/Users/sjlee/git/hadoop-ats/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/AbstractTimelineReaderHBaseTestBase.java:[140,13]
 cannot find symbol
[ERROR] symbol:   method getStatusInfo()
[ERROR] location: variable resp of type com.sun.jersey.api.client.ClientResponse
{noformat}

I'll file a small JIRA to fix it.

> Support fromid(offset) filter for /flows API
> 
>
> Key: YARN-6027
> URL: https://issues.apache.org/jira/browse/YARN-6027
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
> Fix For: YARN-5355, YARN-5355-branch-2
>
> Attachments: YARN-6027-YARN-5355.0001.patch, 
> YARN-6027-YARN-5355.0002.patch, YARN-6027-YARN-5355.0003.patch, 
> YARN-6027-YARN-5355.0004.patch, YARN-6027-YARN-5355.0005.patch, 
> YARN-6027-YARN-5355.0006.patch, YARN-6027-YARN-5355.0007.patch, 
> YARN-6027-YARN-5355.0008.patch
>
>
> In YARN-5585 , fromId is supported for retrieving entities. We need similar 
> filter for flows/flowRun apps and flow run and flow as well. 
> Along with supporting fromId, this JIRA should also discuss following points
> * Should we throw an exception for entities/entity retrieval if duplicates 
> found?
> * TimelieEntity :
> ** Should equals method also check for idPrefix?
> ** Does idPrefix is part of identifiers?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6261) YARN queue mapping fails for users with no group

2017-03-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891202#comment-15891202
 ] 

ASF GitHub Bot commented on YARN-6261:
--

GitHub user pvillard31 opened a pull request:

https://github.com/apache/hadoop/pull/198

YARN-6261 - Catch user with no group when getting queue from mapping …

…definition

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pvillard31/hadoop YARN-6261

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/198.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #198


commit 59f19be39e4fbf2b59151ac205d274b26716e478
Author: Pierre Villard 
Date:   2017-03-01T22:25:58Z

YARN-6261 - Catch user with no group when getting queue from mapping 
definition




> YARN queue mapping fails for users with no group
> 
>
> Key: YARN-6261
> URL: https://issues.apache.org/jira/browse/YARN-6261
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Pierre Villard
>
> *Issue:* 
> Since Hadoop group mapping can be overridden (to get groups from an AD for 
> example), it is possible to be in a situation where a user does not have any 
> group (because the user is not in the AD but only defined locally):
> {noformat}
> $ hdfs groups zeppelin
> zeppelin:
> {noformat}
> In this case, if the YARN Queue Mapping is configured and contains at least 
> one mapping of {{MappingType.GROUP}}, it won't be possible to get a queue for 
> the job submitted by such a user and the job won't be submitted at all.
> *Expected result:* 
> In case a user does not have any group and no mapping is defined for this 
> user, the default queue should be assigned whatever the queue mapping 
> definition is.
> *Workaround:* 
> A workaround is to define a group mapping of {{MappingType.USER}} for the 
> given user before defining any mapping of {{MappingType.GROUP}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6261) YARN queue mapping fails for users with no group

2017-03-01 Thread Pierre Villard (JIRA)
Pierre Villard created YARN-6261:


 Summary: YARN queue mapping fails for users with no group
 Key: YARN-6261
 URL: https://issues.apache.org/jira/browse/YARN-6261
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Pierre Villard


*Issue:* 
Since Hadoop group mapping can be overridden (to get groups from an AD for 
example), it is possible to be in a situation where a user does not have any 
group (because the user is not in the AD but only defined locally):
{noformat}
$ hdfs groups zeppelin
zeppelin:
{noformat}

In this case, if the YARN Queue Mapping is configured and contains at least one 
mapping of {{MappingType.GROUP}}, it won't be possible to get a queue for the 
job submitted by such a user and the job won't be submitted at all.

*Expected result:* 
In case a user does not have any group and no mapping is defined for this user, 
the default queue should be assigned whatever the queue mapping definition is.

*Workaround:* 
A workaround is to define a group mapping of {{MappingType.USER}} for the given 
user before defining any mapping of {{MappingType.GROUP}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6255) Refactor yarn-native-services framework

2017-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891140#comment-15891140
 ] 

Hadoop QA commented on YARN-6255:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 3s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
29s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} yarn-native-services passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 33s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications 
generated 5 new + 24 unchanged - 10 fixed = 29 total (was 34) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 31s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications: The patch generated 
108 new + 1273 unchanged - 386 fixed = 1381 total (was 1659) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
4s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 generated 9 new + 0 unchanged - 0 fixed = 9 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-yarn-slider-core in the patch passed. {color} 
|
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 14s{color} 
| {color:red} hadoop-yarn-services-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 |
|  |  Dead store to clusterOperations in 
org.apache.slider.client.SliderClient.actionDiagnosticIntelligent(ActionDiagnosticArgs)
  At 
SliderClient.java:org.apache.slider.client.SliderClient.actionDiagnosticIntelligent(ActionDiagnosticArgs)

[jira] [Issue Comment Deleted] (YARN-6255) Refactor yarn-native-services framework

2017-03-01 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6255:
--
Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} YARN-6255 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6255 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855279/YARN-6255.01-yarn-native-servies.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15116/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

)

> Refactor yarn-native-services framework 
> 
>
> Key: YARN-6255
> URL: https://issues.apache.org/jira/browse/YARN-6255
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6255.yarn-native-services.01.patch
>
>
> YARN-4692 provides a good abstraction of services on YARN. We could use this 
> as a building block in yarn-native-services framework code base as well.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6255) Refactor yarn-native-services framework

2017-03-01 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6255:
--
Attachment: YARN-6255.yarn-native-services.01.patch

> Refactor yarn-native-services framework 
> 
>
> Key: YARN-6255
> URL: https://issues.apache.org/jira/browse/YARN-6255
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6255.yarn-native-services.01.patch
>
>
> YARN-4692 provides a good abstraction of services on YARN. We could use this 
> as a building block in yarn-native-services framework code base as well.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6255) Refactor yarn-native-services framework

2017-03-01 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6255:
--
Attachment: (was: YARN-6255.01-yarn-native-servies.patch)

> Refactor yarn-native-services framework 
> 
>
> Key: YARN-6255
> URL: https://issues.apache.org/jira/browse/YARN-6255
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6255.yarn-native-services.01.patch
>
>
> YARN-4692 provides a good abstraction of services on YARN. We could use this 
> as a building block in yarn-native-services framework code base as well.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6232) Update resource usage and preempted resource calculations to take into account all resource types

2017-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891044#comment-15891044
 ] 

Hadoop QA commented on YARN-6232:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 13 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
19s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
18s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
16s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
 4s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
0s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
7s{color} | {color:green} YARN-3926 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  5m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 44s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 9 new + 943 unchanged - 24 fixed = 952 total (was 967) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
24s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m  1s{color} 
| {color:red} hadoop-yarn-server-applicationhistoryservice in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 42m  6s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
34s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}132m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.timeline.webapp.TestTimelineWebServices |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6232 |
| JIRA Patch URL | 

[jira] [Commented] (YARN-6199) Support for listing flows with filter userid

2017-03-01 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890933#comment-15890933
 ] 

Varun Saxena commented on YARN-6199:


bq. The response contains all the users' flow activity data.
What I actually meant was: when you use the userid filter for what I presume is 
a web UI use case, will you be using it together with the daterange filter? 
Otherwise we will have a full table scan; daterange limits the scan to some 
degree, depending on the range specified.
I was not suggesting using the existing daterange filter to retrieve 
user-specific records.
As you said, userX may have run a flow a week ago. I hope the use case is not 
something along the lines of "retrieve the last 10 flows executed by userX", 
because some user may have run flows even a year ago, and we cannot be doing a 
humongous scan just for the default view for that user in the UI.

I hope the UI can be designed so that it shows data based on a predetermined 
date range (say, current day, last 3 days, etc.).
An option can also be provided to pick a date, so that records for other date 
ranges can be queried.
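
Concretely, the UI would issue something like this against the reader, where 
{{daterange}} is the existing flows filter and {{user}} is the filter proposed 
here (not yet implemented):
{noformat}
GET /ws/v2/timeline/flows?daterange=20170223-20170301&user=userX
{noformat}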

We can add a user column to the flow activity table to restrict the rows 
retrieved from the backend.

Whether it's a merge blocker or not depends entirely on how important it is for 
you. :)

> Support for listing flows with filter userid
> 
>
> Key: YARN-6199
> URL: https://issues.apache.org/jira/browse/YARN-6199
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
>
> Currently the */flows* API retrieves flow entities for all users by default. 
> We need to support a user filter, i.e. */flows?user=rohith*. This is a 
> critical filter in a secured environment. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6232) Update resource usage and preempted resource calculations to take into account all resource types

2017-03-01 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890890#comment-15890890
 ] 

Wangda Tan commented on YARN-6232:
--

[~vvasudev], 

Thanks for the update, +1 from my side.

bq. Hmm. I figured it's easier to keep as much of the existing code as 
possible. Wangda Tan - what do you think?
If there's no race condition, I would prefer to move this to a separate patch 
when we start perf tests. 

> Update resource usage and preempted resource calculations to take into 
> account all resource types
> -
>
> Key: YARN-6232
> URL: https://issues.apache.org/jira/browse/YARN-6232
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-6232-YARN-3926.001.patch, 
> YARN-6232-YARN-3926.002.patch, YARN-6232-YARN-3926.003.patch
>
>
> The chargeback calculations that take place on the RM should be updated to 
> take all resource types into account.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6207) Move application can fail when attempt add event is delayed

2017-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890888#comment-15890888
 ] 

Hadoop QA commented on YARN-6207:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 40s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 15 new + 313 unchanged - 0 fixed = 328 total (was 313) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 22s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
51s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
|   | hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStorePerf |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6207 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855422/YARN-6207.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux bb3ba1939ecc 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 82ef9ac |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15122/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/15122/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 

[jira] [Resolved] (YARN-5068) Expose scheduler queue to application master

2017-03-01 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli resolved YARN-5068.
---
Resolution: Duplicate

Closing this accurately as a dup of YARN-1623.

> Expose scheduler queue to application master
> 
>
> Key: YARN-5068
> URL: https://issues.apache.org/jira/browse/YARN-5068
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Harish Jaiprakash
>Assignee: Harish Jaiprakash
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: MAPREDUCE-6692.patch, YARN-5068.1.patch, 
> YARN-5068.2.patch, YARN-5068-branch-2.1.patch
>
>
> The AM needs to know the queue name in which it was launched.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-5068) Expose scheduler queue to application master

2017-03-01 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli reopened YARN-5068:
---

> Expose scheduler queue to application master
> 
>
> Key: YARN-5068
> URL: https://issues.apache.org/jira/browse/YARN-5068
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Harish Jaiprakash
>Assignee: Harish Jaiprakash
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: MAPREDUCE-6692.patch, YARN-5068.1.patch, 
> YARN-5068.2.patch, YARN-5068-branch-2.1.patch
>
>
> The AM needs to know the queue name in which it was launched.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6207) Move application can fail when attempt add event is delayed

2017-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890802#comment-15890802
 ] 

Hadoop QA commented on YARN-6207:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  8m 
49s{color} | {color:red} Docker failed to build yetus/hadoop:a9ad5d6. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6207 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855432/YARN-6207.007.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15123/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Move application can fail when attempt add event is delayed
> 
>
> Key: YARN-6207
> URL: https://issues.apache.org/jira/browse/YARN-6207
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-6207.001.patch, YARN-6207.002.patch, 
> YARN-6207.003.patch, YARN-6207.004.patch, YARN-6207.005.patch, 
> YARN-6207.006.patch, YARN-6207.007.patch
>
>
> *Steps to reproduce*
> 1. Submit application and delay attempt add to Scheduler
> (Simulate using debug at EventDispatcher for SchedulerEventDispatcher)
> 2. Call move application to destination queue.
> {noformat}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.preValidateMoveApplication(CapacityScheduler.java:2086)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.moveApplicationAcrossQueue(RMAppManager.java:669)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.moveApplicationAcrossQueues(ClientRMService.java:1231)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.moveApplicationAcrossQueues(ApplicationClientProtocolPBServiceImpl.java:388)
>   at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:537)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:522)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:867)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:813)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1892)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2659)
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1483)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1429)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1339)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:115)
>   at com.sun.proxy.$Proxy7.moveApplicationAcrossQueues(Unknown Source)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.moveApplicationAcrossQueues(ApplicationClientProtocolPBClientImpl.java:398)
>   ... 16 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6207) Move application can fail when attempt add event is delayed

2017-03-01 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-6207:
---
Attachment: YARN-6207.007.patch

> Move application can fail when attempt add event is delayed
> 
>
> Key: YARN-6207
> URL: https://issues.apache.org/jira/browse/YARN-6207
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-6207.001.patch, YARN-6207.002.patch, 
> YARN-6207.003.patch, YARN-6207.004.patch, YARN-6207.005.patch, 
> YARN-6207.006.patch, YARN-6207.007.patch
>
>
> *Steps to reproduce*
> 1. Submit application and delay attempt add to Scheduler
> (Simulate using debug at EventDispatcher for SchedulerEventDispatcher)
> 2. Call move application to destination queue.
> {noformat}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.preValidateMoveApplication(CapacityScheduler.java:2086)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.moveApplicationAcrossQueue(RMAppManager.java:669)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.moveApplicationAcrossQueues(ClientRMService.java:1231)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.moveApplicationAcrossQueues(ApplicationClientProtocolPBServiceImpl.java:388)
>   at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:537)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:522)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:867)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:813)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1892)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2659)
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1483)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1429)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1339)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:115)
>   at com.sun.proxy.$Proxy7.moveApplicationAcrossQueues(Unknown Source)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.moveApplicationAcrossQueues(ApplicationClientProtocolPBClientImpl.java:398)
>   ... 16 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6232) Update resource usage and preempted resource calculations to take into account all resource types

2017-03-01 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated YARN-6232:

Attachment: YARN-6232-YARN-3926.003.patch

Thanks for the review [~leftnoteasy] and [~sunilg]!

Uploaded a new patch with the following fixes -
bq. 1. In getResourceSecondsString, usage of getOrDefault could be avoided as 
we may need to back port to branch-2. Similarly in ApplicationAttemptStateData 
as well
Fixed.
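
For context on the backport concern: {{Map#getOrDefault}} is a Java 8 API, 
while branch-2 still builds against Java 7, so the usual backport-safe 
replacement is a plain {{get()}} plus a null check. A minimal sketch, with 
illustrative map/key names rather than the actual patch code:
{code}
// Backport-safe alternative to map.getOrDefault(key, 0L) on Java 7:
// call get() and substitute the default when the key is absent.
Long value = resourceSecondsMap.get("memory-mb"); // hypothetical names
long memorySeconds = (value == null) ? 0L : value;
{code}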

bq. RMAppManager#createAppSummary has a typo. .add("resourceSeonds", 
Fixed.

bq. 1. In RMAppAttemptMetrics, instead of using private Map 
resourceUsageMap = new HashMap<>();, could we use ConcurrentHashMap itself.?
Hmm. I figured it's easier to keep as much of the existing code as 
possible. [~leftnoteasy] - what do you think?

bq. @Deprecated methods can be removed from following unstable classes:
Fixed.

bq. Changes of ResourceInfo: are they compatible? I'm fine with these changes 
if they're compatible changes.
Yes. They are compatible changes.

bq. ApplicationResourceUsageMapProto: Is it better to rename it to 
StringLongMapProto
Fixed.

> Update resource usage and preempted resource calculations to take into 
> account all resource types
> -
>
> Key: YARN-6232
> URL: https://issues.apache.org/jira/browse/YARN-6232
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-6232-YARN-3926.001.patch, 
> YARN-6232-YARN-3926.002.patch, YARN-6232-YARN-3926.003.patch
>
>
> The chargeback calculations that take place on the RM should be updated to 
> take all resource types into account.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6248) Killing an app with pending container requests leaves the user in UsersManager

2017-03-01 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890688#comment-15890688
 ] 

Sunil G commented on YARN-6248:
---

Yes, looks fine. If there are no other major concerns, I could help commit the 
same tomorrow.

> Killing an app with pending container requests leaves the user in UsersManager
> --
>
> Key: YARN-6248
> URL: https://issues.apache.org/jira/browse/YARN-6248
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha3
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: User Left Over.jpg, YARN-6248.001.patch
>
>
> If an app is still asking for resources when it is killed, the user is left 
> in the UsersManager structure and shows up on the GUI.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6207) Move application can fail when attempt add event is delayed

2017-03-01 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-6207:
---
Attachment: YARN-6207.006.patch

[~rohithsharma]
Thank you for the review comments. Uploading a patch after addressing them.

> Move application can fail when attempt add event is delayed
> 
>
> Key: YARN-6207
> URL: https://issues.apache.org/jira/browse/YARN-6207
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-6207.001.patch, YARN-6207.002.patch, 
> YARN-6207.003.patch, YARN-6207.004.patch, YARN-6207.005.patch, 
> YARN-6207.006.patch
>
>
> *Steps to reproduce*
> 1. Submit application and delay attempt add to Scheduler
> (Simulate using debug at EventDispatcher for SchedulerEventDispatcher)
> 2. Call move application to destination queue.
> {noformat}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.preValidateMoveApplication(CapacityScheduler.java:2086)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.moveApplicationAcrossQueue(RMAppManager.java:669)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.moveApplicationAcrossQueues(ClientRMService.java:1231)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.moveApplicationAcrossQueues(ApplicationClientProtocolPBServiceImpl.java:388)
>   at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:537)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:522)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:867)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:813)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1892)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2659)
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1483)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1429)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1339)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:115)
>   at com.sun.proxy.$Proxy7.moveApplicationAcrossQueues(Unknown Source)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.moveApplicationAcrossQueues(ApplicationClientProtocolPBClientImpl.java:398)
>   ... 16 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5179) Issue of CPU usage of containers

2017-03-01 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15873308#comment-15873308
 ] 

Manikandan R edited comment on YARN-5179 at 3/1/17 5:36 PM:


[~asuresh]

(edited with more details based on investigation)

Based on my understanding, the difference between 
resourceCalculatorPlugin.getNumProcessors() and maxVCoresAllottedForContainers 
causes this millivcores calculation issue. 
resourceCalculatorPlugin.getNumProcessors() simply returns the logical 
processor count. If a node has 4 logical CPUs, then getNumProcessors returns 4.

When yarn.nodemanager.resource.cpu-vcores is -1,

maxVCoresAllottedForContainers takes its value based on the 
yarn.nodemanager.resource.count-logical-processors-as-cores property. If 
yarn.nodemanager.resource.count-logical-processors-as-cores is true, then 
maxVCoresAllottedForContainers is equal to 
resourceCalculatorPlugin.getNumProcessors(); otherwise, it differs. I am 
assuming yarn.nodemanager.resource.detect-hardware-capabilities has been 
enabled and yarn.nodemanager.resource.percentage-physical-cpu-limit is 100 in 
this case.

When yarn.nodemanager.resource.cpu-vcores is not equal to -1,

maxVCoresAllottedForContainers simply holds the value of the 
yarn.nodemanager.resource.cpu-vcores property. Since there is no validation in 
place, it can be any arbitrary number (for example, 100).

Unlike the memory overflow limit, containers currently won't get killed when 
their CPU usage exceeds the allocation; a container can use all the logical 
cores available on the node. Given this situation, I don't think using 
maxVCoresAllottedForContainers for the millivcores calculation is correct, as 
it can report either lower or higher CPU usage than the actual usage when 
maxVCoresAllottedForContainers != resourceCalculatorPlugin.getNumProcessors().

Approach 1: Can we use the actual vcores being used?

Instead of 

{code}float cpuUsageTotalCoresPercentage = cpuUsagePercentPerCore / 
resourceCalculatorPlugin.getNumProcessors();
int milliVcoresUsed = (int) (cpuUsageTotalCoresPercentage * 1000 * 
maxVCoresAllottedForContainers /nodeCpuPercentageForYARN);{code}

Can we use this?

{code}float cpuUsageTotalCoresPercentage = cpuUsagePercentPerCore / 
resourceCalculatorPlugin.getNumVcoresUsed();
int milliVcoresUsed = (int) (cpuUsageTotalCoresPercentage * 1000 * 
resourceCalculatorPlugin.getNumVcoresUsed()/nodeCpuPercentageForYARN);{code}

Approach 2:

Instead of 

{code}float cpuUsageTotalCoresPercentage = cpuUsagePercentPerCore / 
resourceCalculatorPlugin.getNumProcessors();
int milliVcoresUsed = (int) (cpuUsageTotalCoresPercentage * 1000 * 
maxVCoresAllottedForContainers /nodeCpuPercentageForYARN);{code}

Can we use this?

{code}float cpuUsageTotalCoresPercentage = cpuUsagePercentPerCore / 
resourceCalculatorPlugin.getNumProcessors();
int milliVcoresUsed = (int) ((int) (cpuUsageTotalCoresPercentage * 
resourceCalculatorPlugin.getNumProcessors() * 1000)/100.0f);{code}
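
To make the discrepancy concrete, here is a quick numeric sketch (illustrative 
numbers only, not actual NM code), assuming a node with 4 logical processors, 
yarn.nodemanager.resource.cpu-vcores set to 8, and the physical-cpu-limit at 
100:
{code}
float cpuUsagePercentPerCore = 50f;     // container uses half of one core
int numProcessors = 4;                  // resourceCalculatorPlugin.getNumProcessors()
int maxVCoresAllottedForContainers = 8; // yarn.nodemanager.resource.cpu-vcores
int nodeCpuPercentageForYARN = 100;

// Existing formula: scales usage by the configured vcores.
float cpuUsageTotalCoresPercentage = cpuUsagePercentPerCore / numProcessors; // 12.5
int milliVcoresUsed = (int) (cpuUsageTotalCoresPercentage * 1000
    * maxVCoresAllottedForContainers / nodeCpuPercentageForYARN);            // 1000

// If maxVCoresAllottedForContainers equaled getNumProcessors() (4), the same
// usage would report 500 millivcores, i.e. half the value above.
{code}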

Please go through these above options and let me know your suggestions.


was (Author: maniraj...@gmail.com):
[~asuresh]

Based on my understanding, the difference between 
resourceCalculatorPlugin.getNumProcessors() and maxVCoresAllottedForContainers 
causes this millivcores calculation issue. 
resourceCalculatorPlugin.getNumProcessors() simply returns the logical 
processor count, whereas maxVCoresAllottedForContainers takes its value based 
on the yarn.nodemanager.resource.count-logical-processors-as-cores property. 
If the above property is true, then maxVCoresAllottedForContainers is equal to 
resourceCalculatorPlugin.getNumProcessors(); otherwise, it differs. I am 
assuming yarn.nodemanager.resource.detect-hardware-capabilities has been 
enabled in this case.

Can we use the actual vcores being used?

Instead of 

{code}float cpuUsageTotalCoresPercentage = cpuUsagePercentPerCore / 
resourceCalculatorPlugin.getNumProcessors();
int milliVcoresUsed = (int) (cpuUsageTotalCoresPercentage * 1000 * 
maxVCoresAllottedForContainers /nodeCpuPercentageForYARN);{code}

Can we use this?

{code}float cpuUsageTotalCoresPercentage = cpuUsagePercentPerCore / 
resourceCalculatorPlugin.getNumVcoresUsed();
int milliVcoresUsed = (int) (cpuUsageTotalCoresPercentage * 1000 * 
resourceCalculatorPlugin.getNumVcoresUsed()/nodeCpuPercentageForYARN);{code}

Please provide your thoughts.

> Issue of CPU usage of containers
> 
>
> Key: YARN-5179
> URL: https://issues.apache.org/jira/browse/YARN-5179
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.0
> Environment: Both on Windows and Linux
>Reporter: Zhongkai Mi
>
> // Multiply by 1000 to avoid losing data when converting to int 
>int milliVcoresUsed = (int) (cpuUsageTotalCoresPercentage * 1000 
>   * maxVCoresAllottedForContainers /nodeCpuPercentageForYARN); 
> This formula will not 

[jira] [Commented] (YARN-6027) Support fromid(offset) filter for /flows API

2017-03-01 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890484#comment-15890484
 ] 

Sangjin Lee commented on YARN-6027:
---

+1. Thanks [~rohithsharma]!

> Support fromid(offset) filter for /flows API
> 
>
> Key: YARN-6027
> URL: https://issues.apache.org/jira/browse/YARN-6027
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6027-YARN-5355.0001.patch, 
> YARN-6027-YARN-5355.0002.patch, YARN-6027-YARN-5355.0003.patch, 
> YARN-6027-YARN-5355.0004.patch, YARN-6027-YARN-5355.0005.patch, 
> YARN-6027-YARN-5355.0006.patch, YARN-6027-YARN-5355.0007.patch, 
> YARN-6027-YARN-5355.0008.patch
>
>
> In YARN-5585, fromId is supported for retrieving entities. We need a similar 
> filter for flows/flowRun apps and flow run and flow as well. 
> Along with supporting fromId, this JIRA should also discuss the following points:
> * Should we throw an exception for entities/entity retrieval if duplicates 
> are found?
> * TimelineEntity:
> ** Should the equals method also check for idPrefix?
> ** Is idPrefix part of identifiers?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6027) Support fromid(offset) filter for /flows API

2017-03-01 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890390#comment-15890390
 ] 

Varun Saxena commented on YARN-6027:


Thanks [~rohithsharma] for the patch.
The latest patch looks good to me. I will wait for Sangjin to review it once 
before proceeding to commit. I will handle the whitespace myself.


> Support fromid(offset) filter for /flows API
> 
>
> Key: YARN-6027
> URL: https://issues.apache.org/jira/browse/YARN-6027
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6027-YARN-5355.0001.patch, 
> YARN-6027-YARN-5355.0002.patch, YARN-6027-YARN-5355.0003.patch, 
> YARN-6027-YARN-5355.0004.patch, YARN-6027-YARN-5355.0005.patch, 
> YARN-6027-YARN-5355.0006.patch, YARN-6027-YARN-5355.0007.patch, 
> YARN-6027-YARN-5355.0008.patch
>
>
> In YARN-5585, fromId is supported for retrieving entities. We need a similar 
> filter for flows/flowRun apps and flow run and flow as well. 
> Along with supporting fromId, this JIRA should also discuss the following points:
> * Should we throw an exception for entities/entity retrieval if duplicates 
> are found?
> * TimelineEntity:
> ** Should the equals method also check for idPrefix?
> ** Is idPrefix part of identifiers?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6248) Killing an app with pending container requests leaves the user in UsersManager

2017-03-01 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890386#comment-15890386
 ] 

Eric Payne commented on YARN-6248:
--

The above unit tests 
({{TestLeaderElectorService,TestDelegationTokenRenewer,TestFairSchedulerPreemption}})
 are passing for me. I also ran all of the unit tests under the capacity 
scheduler.

> Killing an app with pending container requests leaves the user in UsersManager
> --
>
> Key: YARN-6248
> URL: https://issues.apache.org/jira/browse/YARN-6248
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha3
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: User Left Over.jpg, YARN-6248.001.patch
>
>
> If an app is still asking for resources when it is killed, the user is left 
> in the UsersManager structure and shows up on the GUI.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6245) Add FinalResource object to reduce overhead of Resource class instancing

2017-03-01 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890378#comment-15890378
 ] 

Karthik Kambatla commented on YARN-6245:


[~leftnoteasy] - I didn't understand your question about immutable_add. In 
FairScheduler, there is a lot of {{Resources.addTo}}. This can be performed on 
a {{Resource}} and the corresponding getter can return 
{{Resource.getObservableCopy}}.

YARN-3926 needs to be incorporated carefully. I haven't looked at the code 
there, but will we still be using Resource? If yes, we will have to implement 
all the methods in Resource. 

> Add FinalResource object to reduce overhead of Resource class instancing
> 
>
> Key: YARN-6245
> URL: https://issues.apache.org/jira/browse/YARN-6245
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
> Attachments: observable-resource.patch, 
> YARN-6245.preliminary-staled.1.patch
>
>
> There's a lot of Resource object creation in the YARN scheduler; since the 
> Resource object is backed by protobuf, creating such objects is expensive and 
> becomes a bottleneck.
> To address the problem, we can introduce a FinalResource (is it better to 
> call it ImmutableResource?) object, which is not backed by PBImpl. We can use 
> this object in frequent invocation paths in the scheduler.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6245) Add FinalResource object to reduce overhead of Resource class instancing

2017-03-01 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-6245:
---
Attachment: observable-resource.patch

Here is the approach I had in mind. Use case: app, leaf-queue and parent-queue 
store a bunch of stats; in the FairScheduler, there are fair share, demand, 
usage, etc. that are accessed often. By returning an observable copy, we don't 
have to create full copies or hold locks.

I ran multiple cases in a loop and we seem to save 20-30% time depending on 
whether this is called on a new resource.
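
A minimal sketch of the shape I have in mind (class and method names are 
hypothetical, not the attached patch):
{code}
// Hypothetical sketch: a lightweight read-only view over a mutable Resource,
// not backed by PBImpl. Readers observe live values without full copies or
// locks; there are no setters, so the view cannot mutate the resource.
public final class ObservableResource {
  private final Resource delegate; // the mutable resource being observed

  public ObservableResource(Resource delegate) {
    this.delegate = delegate;
  }

  public long getMemorySize() {
    return delegate.getMemorySize();
  }

  public int getVirtualCores() {
    return delegate.getVirtualCores();
  }
}
{code}
Scheduler code would keep mutating the underlying {{Resource}} (e.g. via 
{{Resources.addTo}}) and hand this view to readers.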

> Add FinalResource object to reduce overhead of Resource class instancing
> 
>
> Key: YARN-6245
> URL: https://issues.apache.org/jira/browse/YARN-6245
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
> Attachments: observable-resource.patch, 
> YARN-6245.preliminary-staled.1.patch
>
>
> There's a lot of Resource object creation in the YARN scheduler; since the 
> Resource object is backed by protobuf, creating such objects is expensive and 
> becomes a bottleneck.
> To address the problem, we can introduce a FinalResource (is it better to 
> call it ImmutableResource?) object, which is not backed by PBImpl. We can use 
> this object in frequent invocation paths in the scheduler.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5496) Make Node Heatmap Chart categories clickable

2017-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890244#comment-15890244
 ] 

Hadoop QA commented on YARN-5496:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  0m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5496 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855370/YARN-5496.002.patch |
| Optional Tests |  asflicense  |
| uname | Linux a129111f207b 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 82ef9ac |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15121/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Make Node Heatmap Chart categories clickable
> 
>
> Key: YARN-5496
> URL: https://issues.apache.org/jira/browse/YARN-5496
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yesha Vora
>Assignee: Gergely Novák
> Attachments: YARN-5496.001.patch, YARN-5496.002.patch
>
>
> Make Node Heatmap Chart categories clickable. 
> This heatmap chart has a few categories like 10% used, 30% used, etc.
> These tags should be clickable. If a user clicks on the 10% used tag, it 
> should show hosts with 10% usage. This can be a useful feature for clusters 
> with 1000s of nodes.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5496) Make Node Heatmap Chart categories clickable

2017-03-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/YARN-5496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890237#comment-15890237
 ] 

Gergely Novák commented on YARN-5496:
-

Implemented [~sunilg]'s 1st and 3rd suggestions in patch #2; still waiting for 
[~leftnoteasy]'s opinion about the 2nd.

> Make Node Heatmap Chart categories clickable
> 
>
> Key: YARN-5496
> URL: https://issues.apache.org/jira/browse/YARN-5496
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yesha Vora
>Assignee: Gergely Novák
> Attachments: YARN-5496.001.patch, YARN-5496.002.patch
>
>
> Make Node Heatmap Chart categories clickable. 
> This heatmap chart has a few categories like 10% used, 30% used, etc.
> These tags should be clickable. If a user clicks on the 10% used tag, it 
> should show hosts with 10% usage. This can be a useful feature for clusters 
> with 1000s of nodes.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5496) Make Node Heatmap Chart categories clickable

2017-03-01 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-5496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák updated YARN-5496:

Attachment: YARN-5496.002.patch

> Make Node Heatmap Chart categories clickable
> 
>
> Key: YARN-5496
> URL: https://issues.apache.org/jira/browse/YARN-5496
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yesha Vora
>Assignee: Gergely Novák
> Attachments: YARN-5496.001.patch, YARN-5496.002.patch
>
>
> Make Node Heatmap Chart categories clickable. 
> This heatmap chart has a few categories like 10% used, 30% used, etc.
> These tags should be clickable. If a user clicks on the 10% used tag, it 
> should show hosts with 10% usage. This can be a useful feature for clusters 
> with 1000s of nodes.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6027) Support fromid(offset) filter for /flows API

2017-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890173#comment-15890173
 ] 

Hadoop QA commented on YARN-6027:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
43s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
40s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
25s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 in YARN-5355 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: 
The patch generated 0 new + 28 unchanged - 1 fixed = 28 total (was 29) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
44s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in 
the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-6027 |
| JIRA Patch URL | 

[jira] [Commented] (YARN-6259) Support pagination and optimize data transfer with zero-copy approach for containerlogs REST API in NMWebServices

2017-03-01 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890159#comment-15890159
 ] 

Rohith Sharma K S commented on YARN-6259:
-

I am not sure how the use cases will be served, but I skimmed through the 
patch.
bq. Add containerlogs-info REST API since sometimes we need to know the 
totalSize/pageSize/pageCount info of log
Instead of adding a new LogInfo file, the existing ContainerLogInfo file can be 
used for pageSize and pageIndex.

> Support pagination and optimize data transfer with zero-copy approach for 
> containerlogs REST API in NMWebServices
> -
>
> Key: YARN-6259
> URL: https://issues.apache.org/jira/browse/YARN-6259
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.8.1
>Reporter: Tao Yang
>Assignee: Tao Yang
> Attachments: YARN-6259.001.patch
>
>
> Currently the containerlogs REST API in NMWebServices reads and sends the 
> entire content of container logs. Most container logs are large, so it's 
> useful to support pagination.
> * Add pagesize and pageindex parameters for containerlogs REST API
> {code}
> URL: http:///ws/v1/node/containerlogs//
> QueryParams:
>   pagesize - max bytes of one page, default 1MB
>   pageindex - index of required page, default 0, can be negative (set -1 to 
> get the last page content)
> {code}
> * Add containerlogs-info REST API since sometimes we need to know the 
> totalSize/pageSize/pageCount info of log 
> {code}
> URL: 
> http:///ws/v1/node/containerlogs-info//
> QueryParams:
>   pagesize - max bytes of one page, default 1MB
> Response example:
>   {"logInfo":{"totalSize":2497280,"pageSize":1048576,"pageCount":3}}
> {code}
> Moreover, the data transfer pipeline (disk --> read buffer --> NM buffer --> 
> socket buffer) can be optimized to pipeline(disk --> read buffer --> socket 
> buffer) with zero-copy approach.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6027) Support fromid(offset) filter for /flows API

2017-03-01 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-6027:

Attachment: YARN-6027-YARN-5355.0008.patch

Updated the patch addressing previous comments and checkstyle errors. Hope the 
build will be green!! :-)

> Support fromid(offset) filter for /flows API
> 
>
> Key: YARN-6027
> URL: https://issues.apache.org/jira/browse/YARN-6027
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6027-YARN-5355.0001.patch, 
> YARN-6027-YARN-5355.0002.patch, YARN-6027-YARN-5355.0003.patch, 
> YARN-6027-YARN-5355.0004.patch, YARN-6027-YARN-5355.0005.patch, 
> YARN-6027-YARN-5355.0006.patch, YARN-6027-YARN-5355.0007.patch, 
> YARN-6027-YARN-5355.0008.patch
>
>
> In YARN-5585, fromId is supported for retrieving entities. We need a similar 
> filter for flows/flowRun apps and flow run and flow as well. 
> Along with supporting fromId, this JIRA should also discuss the following points:
> * Should we throw an exception for entities/entity retrieval if duplicates 
> are found?
> * TimelineEntity:
> ** Should the equals method also check for idPrefix?
> ** Is idPrefix part of identifiers?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6027) Support fromid(offset) filter for /flows API

2017-03-01 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890099#comment-15890099
 ] 

Varun Saxena commented on YARN-6027:


Yes.

> Support fromid(offset) filter for /flows API
> 
>
> Key: YARN-6027
> URL: https://issues.apache.org/jira/browse/YARN-6027
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6027-YARN-5355.0001.patch, 
> YARN-6027-YARN-5355.0002.patch, YARN-6027-YARN-5355.0003.patch, 
> YARN-6027-YARN-5355.0004.patch, YARN-6027-YARN-5355.0005.patch, 
> YARN-6027-YARN-5355.0006.patch, YARN-6027-YARN-5355.0007.patch
>
>
> In YARN-5585, fromId is supported for retrieving entities. We need a similar 
> filter for flows/flowRun apps and flow run and flow as well. 
> Along with supporting fromId, this JIRA should also discuss the following points:
> * Should we throw an exception for entities/entity retrieval if duplicates 
> are found?
> * TimelineEntity:
> ** Should the equals method also check for idPrefix?
> ** Is idPrefix part of identifiers?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6027) Support fromid(offset) filter for /flows API

2017-03-01 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890096#comment-15890096
 ] 

Rohith Sharma K S commented on YARN-6027:
-

From the above, basically 2 points to be considered, right?
# Do not change FlowActivityRowKeyConverter to static and let's keep it as is.
# Modify the Javadoc as suggested.


> Support fromid(offset) filter for /flows API
> 
>
> Key: YARN-6027
> URL: https://issues.apache.org/jira/browse/YARN-6027
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6027-YARN-5355.0001.patch, 
> YARN-6027-YARN-5355.0002.patch, YARN-6027-YARN-5355.0003.patch, 
> YARN-6027-YARN-5355.0004.patch, YARN-6027-YARN-5355.0005.patch, 
> YARN-6027-YARN-5355.0006.patch, YARN-6027-YARN-5355.0007.patch
>
>
> In YARN-5585, fromId is supported for retrieving entities. We need a similar 
> filter for flows/flowRun apps and flow run and flow as well. 
> Along with supporting fromId, this JIRA should also discuss the following points:
> * Should we throw an exception for entities/entity retrieval if duplicates 
> are found?
> * TimelineEntity:
> ** Should the equals method also check for idPrefix?
> ** Is idPrefix part of identifiers?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6260) Findbugs warning in YARN-5355 branch

2017-03-01 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-6260:
---
Description: 
{noformat}
Bug type SE_BAD_FIELD 
In class 
org.apache.hadoop.yarn.server.timelineservice.storage.entity.EntityColumnPrefix
Field 
org.apache.hadoop.yarn.server.timelineservice.storage.entity.EntityColumnPrefix.column
Actual type 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnHelper
In EntityColumnPrefix.java
{noformat}
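
For readers unfamiliar with this findbugs category: SE_BAD_FIELD flags a 
non-transient field of a non-serializable type inside a Serializable class. If 
EntityColumnPrefix is an enum (enums are implicitly Serializable), that would 
explain the warning on its ColumnHelper field. A hypothetical illustration, 
not the actual timeline service code:
{code}
// A class that does not implement Serializable...
class ColumnHelperLike { }

// ...held as a non-transient field of an enum (implicitly Serializable)
// triggers SE_BAD_FIELD.
enum ColumnPrefixLike {
  INSTANCE;
  private final ColumnHelperLike column = new ColumnHelperLike();
}
{code}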

> Findbugs warning in YARN-5355 branch
> 
>
> Key: YARN-6260
> URL: https://issues.apache.org/jira/browse/YARN-6260
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Priority: Minor
>
> {noformat}
> Bug type SE_BAD_FIELD 
> In class 
> org.apache.hadoop.yarn.server.timelineservice.storage.entity.EntityColumnPrefix
> Field 
> org.apache.hadoop.yarn.server.timelineservice.storage.entity.EntityColumnPrefix.column
> Actual type 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnHelper
> In EntityColumnPrefix.java
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6260) Findbugs warning in YARN-5355 branch

2017-03-01 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-6260:
---
Summary: Findbugs warning in YARN-5355 branch  (was: Findbugs warning on 
YARN-5355 branch)

> Findbugs warning in YARN-5355 branch
> 
>
> Key: YARN-6260
> URL: https://issues.apache.org/jira/browse/YARN-6260
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6260) Findbugs warning on YARN-5355 branch

2017-03-01 Thread Varun Saxena (JIRA)
Varun Saxena created YARN-6260:
--

 Summary: Findbugs warning on YARN-5355 branch
 Key: YARN-6260
 URL: https://issues.apache.org/jira/browse/YARN-6260
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Varun Saxena
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6027) Support fromid(offset) filter for /flows API

2017-03-01 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890033#comment-15890033
 ] 

Varun Saxena commented on YARN-6027:


Thanks [~rohithsharma] for the patch.
You have turned the FlowActivityRowKeyConverter object inside FlowActivityRowKey 
into a static field, which would mean one instance per class. We made a 
conscious decision in YARN-5170 to eliminate singletons, so I would not change 
this here. In the decode flow, I understand that we will create key converter 
objects multiple times (i.e. equivalent to the number of rows returned), which 
means more objects are created and have to be garbage collected. But key 
converters are lightweight objects, so it may not have much impact. Also, we do 
that (create multiple times) for row key objects as well, which are slightly 
more heavyweight. We could probably avoid creating row key converters by 
creating a row key instance in the entity reader(s) and reusing it. Anyway, we 
do not know whether this causes any big issue as of now.
So let us not change it in this JIRA. If you think this needs to be changed, we 
can raise a new JIRA and discuss the pros and cons.

For the Javadoc over KeyConverterToString, I would rather say "Interface which 
has to be implemented for encoding and decoding row keys or column qualifiers 
as string.". Over the encode and decode methods, simply mention that they 
encode/decode the key as a string (or also mention column qualifiers). 
For the javadoc over decode in the same interface, I would say "Decode row key 
from string to a key of type T." instead of to an object.
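
In other words, the javadoc I have in mind would sit over an interface shaped 
roughly like this (a sketch of the wording; the method names are for 
illustration and the actual signatures are in the patch):
{code}
/**
 * Interface which has to be implemented for encoding and decoding row keys
 * or column qualifiers as string.
 */
public interface KeyConverterToString<T> {
  /** Encodes the key as a string. */
  String encodeAsString(T key);

  /** Decodes a row key from string to a key of type T. */
  T decodeFromString(String encodedKey);
}
{code}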

I was wondering if we could use the {{@value}} tag to fetch the default 
separator char instead of hardcoding ! in the comment over getRowKeyAsString, 
but then we hardcode this at multiple places. So let's leave it as it is.

I think the patch is good to go once checkstyle issues and comments above (bar 
the last comment) are fixed.

I will raise a separate JIRA for the findbugs issue. Not sure why that's coming up.

> Support fromid(offset) filter for /flows API
> 
>
> Key: YARN-6027
> URL: https://issues.apache.org/jira/browse/YARN-6027
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6027-YARN-5355.0001.patch, 
> YARN-6027-YARN-5355.0002.patch, YARN-6027-YARN-5355.0003.patch, 
> YARN-6027-YARN-5355.0004.patch, YARN-6027-YARN-5355.0005.patch, 
> YARN-6027-YARN-5355.0006.patch, YARN-6027-YARN-5355.0007.patch
>
>
> In YARN-5585, fromId is supported for retrieving entities. We need a similar 
> filter for flows/flowRun apps and flow run and flow as well. 
> Along with supporting fromId, this JIRA should also discuss the following points:
> * Should we throw an exception for entities/entity retrieval if duplicates 
> are found?
> * TimelineEntity:
> ** Should the equals method also check for idPrefix?
> ** Is idPrefix part of identifiers?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6259) Support pagination and optimize data transfer with zero-copy approach for containerlogs REST API in NMWebServices

2017-03-01 Thread Tao Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated YARN-6259:
---
Attachment: YARN-6259.001.patch

Attaching a patch for review.

> Support pagination and optimize data transfer with zero-copy approach for 
> containerlogs REST API in NMWebServices
> -
>
> Key: YARN-6259
> URL: https://issues.apache.org/jira/browse/YARN-6259
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.8.1
>Reporter: Tao Yang
>Assignee: Tao Yang
> Attachments: YARN-6259.001.patch
>
>
> Currently the containerlogs REST API in NMWebServices reads and sends the 
> entire content of container logs. Most container logs are large, so it's 
> useful to support pagination.
> * Add pagesize and pageindex parameters for containerlogs REST API
> {code}
> URL: http:///ws/v1/node/containerlogs//
> QueryParams:
>   pagesize - max bytes of one page, default 1MB
>   pageindex - index of required page, default 0, can be negative (set -1 to 
> get the last page content)
> {code}
> * Add containerlogs-info REST API since sometimes we need to know the 
> totalSize/pageSize/pageCount info of log 
> {code}
> URL: 
> http:///ws/v1/node/containerlogs-info//
> QueryParams:
>   pagesize - max bytes of one page, default 1MB
> Response example:
>   {"logInfo":{"totalSize":2497280,"pageSize":1048576,"pageCount":3}}
> {code}
> Moreover, the data transfer pipeline (disk --> read buffer --> NM buffer --> 
> socket buffer) can be optimized to pipeline(disk --> read buffer --> socket 
> buffer) with zero-copy approach.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6259) Support pagination and optimize data transfer with zero-copy approach for containerlogs REST API in NMWebServices

2017-03-01 Thread Tao Yang (JIRA)
Tao Yang created YARN-6259:
--

 Summary: Support pagination and optimize data transfer with 
zero-copy approach for containerlogs REST API in NMWebServices
 Key: YARN-6259
 URL: https://issues.apache.org/jira/browse/YARN-6259
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Affects Versions: 2.8.1
Reporter: Tao Yang
Assignee: Tao Yang


Currently containerlogs REST API in NMWebServices will read and send the entire 
content of container logs. Most of container logs are large and it's useful to 
support pagination.
* Add pagesize and pageindex parameters for containerlogs REST API
{code}
URL: http:///ws/v1/node/containerlogs//
QueryParams:
  pagesize - max bytes of one page, default 1MB
  pageindex - index of the requested page, default 0; can be negative (-1 
returns the last page's content)
{code}
* Add a containerlogs-info REST API, since we sometimes need to know the 
totalSize/pageSize/pageCount info of a log (see the sketch after this list)
{code}
URL: 
http:///ws/v1/node/containerlogs-info//
QueryParams:
  pagesize - max bytes of one page, default 1MB
Response example:
  {"logInfo":{"totalSize":2497280,"pageSize":1048576,"pageCount":3}}
{code}
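
As a sanity check on the parameters above, here is a sketch of how 
pagesize/pageindex might map to a byte range, including the negative-index 
case (the exact arithmetic is an assumption for illustration, not necessarily 
what the patch implements):
{code}
// Illustrative mapping from (pagesize, pageindex) to a byte range.
class LogPagination {
  // pageIndex -1 selects the last page; other negatives count from the end.
  static long[] pageToByteRange(long totalSize, long pageSize, long pageIndex) {
    long pageCount = (totalSize + pageSize - 1) / pageSize; // ceiling division
    if (pageIndex < 0) {
      pageIndex += pageCount; // e.g. -1 becomes the last page
    }
    if (pageIndex < 0 || pageIndex >= pageCount) {
      throw new IllegalArgumentException("page index out of range");
    }
    long start = pageIndex * pageSize;
    long length = Math.min(pageSize, totalSize - start);
    return new long[] {start, length};
  }
}
{code}
With the numbers from the response example above (totalSize 2497280, pageSize 
1048576) this gives pageCount 3, and pageindex -1 selects the final 
400128-byte page.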

Moreover, the data transfer pipeline (disk --> read buffer --> NM buffer --> 
socket buffer) can be optimized to a shorter pipeline (disk --> read buffer --> 
socket buffer) with a zero-copy approach.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6182) [YARN-3368] Fix alignment issues and missing information in Queue pages

2017-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15889945#comment-15889945
 ] 

Hadoop QA commented on YARN-6182:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  0m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6182 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855339/YARN-6182.003.patch |
| Optional Tests |  asflicense  |
| uname | Linux 96653972be9e 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 82ef9ac |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15119/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Fix alignment issues and missing information in Queue pages
> ---
>
> Key: YARN-6182
> URL: https://issues.apache.org/jira/browse/YARN-6182
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-6182.001.patch, YARN-6182.002.patch, 
> YARN-6182.003.patch
>
>
> This patch fixes the following issues:
> In the Queues page:
> # Queue Capacities: Absolute Max Capacity should be aligned better.
> # Queue Information: State comes up empty
> # The queue tree graph is taking too much space. We should reduce both the 
> vertical and horizontal spacing.
> # The Queues tab becomes inactive while hovering over a queue.
> In the application list page and per-application page:
> # Change the left nav label to 'Applications'
> # Convert the labels 'Master Node' and 'Master Node Expression' to headings



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6258) The node sites don't work with local (CORS) setup for new UI

2017-03-01 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-6258:
--
Summary: The node sites don't work with local (CORS) setup for new UI  
(was: The node sites don't work with local (CORS) setup)

> The node sites don't work with local (CORS) setup for new UI
> 
>
> Key: YARN-6258
> URL: https://issues.apache.org/jira/browse/YARN-6258
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gergely Novák
>Assignee: Gergely Novák
> Attachments: YARN-6258.001.patch
>
>
> If a CORS proxy is configured for development purposes, all the yarn-node sites 
> (yarn-node, yarn-node-apps, yarn-node-containers) throw an error:
> {noformat}
> Error: Adapter operation failed
> at ember$data$lib$adapters$errors$$AdapterError.EmberError 
> (ember.debug.js:15860)
> at ember$data$lib$adapters$errors$$AdapterError (errors.js:19)
> at Class.handleResponse (rest-adapter.js:677)
> at Class.hash.error (rest-adapter.js:757)
> at fire (jquery.js:3099)
> at Object.fireWith [as rejectWith] (jquery.js:3211)
> at done (jquery.js:8266)
> at XMLHttpRequest.<anonymous> (jquery.js:8605)
> {noformat}
> This might be caused by a bad request URL: 
> "http://localhost:1337/{color:red}/{color}192.168.0.104:8042/ws/v1/node".



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6258) The node sites don't work with local (CORS) setup

2017-03-01 Thread Gergely Novák (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák updated YARN-6258:

Attachment: YARN-6258.001.patch

> The node sites don't work with local (CORS) setup
> -
>
> Key: YARN-6258
> URL: https://issues.apache.org/jira/browse/YARN-6258
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gergely Novák
>Assignee: Gergely Novák
> Attachments: YARN-6258.001.patch
>
>
> If a CORS proxy is configured for development purposes, all the yarn-node sites 
> (yarn-node, yarn-node-apps, yarn-node-containers) throw an error:
> {noformat}
> Error: Adapter operation failed
> at ember$data$lib$adapters$errors$$AdapterError.EmberError 
> (ember.debug.js:15860)
> at ember$data$lib$adapters$errors$$AdapterError (errors.js:19)
> at Class.handleResponse (rest-adapter.js:677)
> at Class.hash.error (rest-adapter.js:757)
> at fire (jquery.js:3099)
> at Object.fireWith [as rejectWith] (jquery.js:3211)
> at done (jquery.js:8266)
> at XMLHttpRequest.<anonymous> (jquery.js:8605)
> {noformat}
> This might be caused by a bad request URL: 
> "http://localhost:1337/{color:red}/{color}192.168.0.104:8042/ws/v1/node".



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6258) The node sites don't work with local (CORS) setup

2017-03-01 Thread Gergely Novák (JIRA)
Gergely Novák created YARN-6258:
---

 Summary: The node sites don't work with local (CORS) setup
 Key: YARN-6258
 URL: https://issues.apache.org/jira/browse/YARN-6258
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Gergely Novák
Assignee: Gergely Novák


If a CORS proxy is configured for development purposes, all the yarn-node sites 
(yarn-node, yarn-node-apps, yarn-node-containers) throw an error:
{noformat}
Error: Adapter operation failed
at ember$data$lib$adapters$errors$$AdapterError.EmberError 
(ember.debug.js:15860)
at ember$data$lib$adapters$errors$$AdapterError (errors.js:19)
at Class.handleResponse (rest-adapter.js:677)
at Class.hash.error (rest-adapter.js:757)
at fire (jquery.js:3099)
at Object.fireWith [as rejectWith] (jquery.js:3211)
at done (jquery.js:8266)
at XMLHttpRequest.<anonymous> (jquery.js:8605)
{noformat}

This might be caused by a bad request URL: 
"http://localhost:1337/{color:red}/{color}192.168.0.104:8042/ws/v1/node".



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6182) [YARN-3368] Fix alignment issues and missing information in Queue pages

2017-03-01 Thread Akhil PB (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15889920#comment-15889920
 ] 

Akhil PB commented on YARN-6182:


v3 patch

> [YARN-3368] Fix alignment issues and missing information in Queue pages
> ---
>
> Key: YARN-6182
> URL: https://issues.apache.org/jira/browse/YARN-6182
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-6182.001.patch, YARN-6182.002.patch, 
> YARN-6182.003.patch
>
>
> This patch fixes the following issues:
> In the Queues page:
> # Queue Capacities: Absolute Max Capacity should be aligned better.
> # Queue Information: State comes up empty
> # The queue tree graph is taking too much space. We should reduce both the 
> vertical and horizontal spacing.
> # The Queues tab becomes inactive while hovering over a queue.
> In the application list page and per-application page:
> # Change the left nav label to 'Applications'
> # Convert the labels 'Master Node' and 'Master Node Expression' to headings



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6182) [YARN-3368] Fix alignment issues and missing information in Queue pages

2017-03-01 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-6182:
---
Attachment: YARN-6182.003.patch

> [YARN-3368] Fix alignment issues and missing information in Queue pages
> ---
>
> Key: YARN-6182
> URL: https://issues.apache.org/jira/browse/YARN-6182
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-6182.001.patch, YARN-6182.002.patch, 
> YARN-6182.003.patch
>
>
> This patch fixes the following issues:
> In the Queues page:
> # Queue Capacities: Absolute Max Capacity should be aligned better.
> # Queue Information: State comes up empty
> # The queue tree graph is taking too much space. We should reduce both the 
> vertical and horizontal spacing.
> # The Queues tab becomes inactive while hovering over a queue.
> In the application list page and per-application page:
> # Change the left nav label to 'Applications'
> # Convert the labels 'Master Node' and 'Master Node Expression' to headings



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6182) [YARN-3368] Fix alignment issues and missing information in Queue pages

2017-03-01 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-6182:
---
Description: 
This patch fixes the following issues:

In the Queues page:
# Queue Capacities: Absolute Max Capacity should be aligned better.
# Queue Information: State comes up empty
# The queue tree graph is taking too much space. We should reduce both the 
vertical and horizontal spacing.
# The Queues tab becomes inactive while hovering over a queue.

In the application list page and per-application page:
# Change the left nav label to 'Applications'
# Convert the labels 'Master Node' and 'Master Node Expression' to headings

  was:
In the Queues page:
# Queue Capacities: Absolute Max Capacity should be aligned better.
# Queue Information: State comes up empty
# The queue tree graph is taking too much space. We should reduce both the 
vertical and horizontal spacing.

In the application list page and per-application page:
# Change the left nav label to 'Applications'
# Convert the labels 'Master Node' and 'Master Node Expression' to headings


> [YARN-3368] Fix alignment issues and missing information in Queue pages
> ---
>
> Key: YARN-6182
> URL: https://issues.apache.org/jira/browse/YARN-6182
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-6182.001.patch, YARN-6182.002.patch
>
>
> This patch fixes the following issues:
> In the Queues page:
> # Queue Capacities: Absolute Max Capacity should be aligned better.
> # Queue Information: State comes up empty
> # The queue tree graph is taking too much space. We should reduce both the 
> vertical and horizontal spacing.
> # The Queues tab becomes inactive while hovering over a queue.
> In the application list page and per-application page:
> # Change the left nav label to 'Applications'
> # Convert the labels 'Master Node' and 'Master Node Expression' to headings



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6257) CapacityScheduler REST API produces incorrect JSON - JSON object operationsInfo contains duplicate key

2017-03-01 Thread Tao Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated YARN-6257:
---
Description: 
In the response string of the CapacityScheduler REST API, 
scheduler/schedulerInfo/health/operationsInfo is a JSON object with the 
duplicate key 'entry':
{code}
"operationsInfo":{
  
"entry":{"key":"last-preemption","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}},
  
"entry":{"key":"last-reservation","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}},
  
"entry":{"key":"last-allocation","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}},
  
"entry":{"key":"last-release","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}}
}
{code}

To solve this problem, I propose converting the type of the operationsInfo field 
in the CapacitySchedulerHealthInfo class from a Map to a List.

After converting to a List, the operationsInfo string will be:
{code}
"operationInfos":[
  
{"operation":"last-allocation","nodeId":"N/A","containerId":"N/A","queue":"N/A"},
  {"operation":"last-release","nodeId":"N/A","containerId":"N/A","queue":"N/A"},
  
{"operation":"last-preemption","nodeId":"N/A","containerId":"N/A","queue":"N/A"},
  
{"operation":"last-reservation","nodeId":"N/A","containerId":"N/A","queue":"N/A"}
]
{code}

  was:
In the response string of the CapacityScheduler REST API, 
scheduler/schedulerInfo/health/operationsInfo is a JSON object with the 
duplicate key 'entry':
{code}
"operationsInfo":{
  
"entry":{"key":"last-preemption","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}},
  
"entry":{"key":"last-reservation","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}},
  
"entry":{"key":"last-allocation","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}},
  
"entry":{"key":"last-release","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}}
}
{code}

To solve this problem, I propose converting the type of the operationsInfo field 
in the CapacitySchedulerHealthInfo class from a Map to a List.


> CapacityScheduler REST API produces incorrect JSON - JSON object 
> operationsInfo contains duplicate key
> --
>
> Key: YARN-6257
> URL: https://issues.apache.org/jira/browse/YARN-6257
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 2.8.1
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Minor
> Attachments: YARN-6257.001.patch
>
>
> In the response string of the CapacityScheduler REST API, 
> scheduler/schedulerInfo/health/operationsInfo is a JSON object with the 
> duplicate key 'entry':
> {code}
> "operationsInfo":{
>   
> "entry":{"key":"last-preemption","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}},
>   
> "entry":{"key":"last-reservation","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}},
>   
> "entry":{"key":"last-allocation","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}},
>   
> "entry":{"key":"last-release","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}}
> }
> {code}
> To solve this problem, I propose converting the type of the operationsInfo 
> field in the CapacitySchedulerHealthInfo class from a Map to a List (a JAXB 
> sketch follows below).
> After converting to a List, the operationsInfo string will be:
> {code}
> "operationInfos":[
>   
> {"operation":"last-allocation","nodeId":"N/A","containerId":"N/A","queue":"N/A"},
>   
> {"operation":"last-release","nodeId":"N/A","containerId":"N/A","queue":"N/A"},
>   
> {"operation":"last-preemption","nodeId":"N/A","containerId":"N/A","queue":"N/A"},
>   
> {"operation":"last-reservation","nodeId":"N/A","containerId":"N/A","queue":"N/A"}
> ]
> {code}
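
For context, a minimal sketch of the proposed List shape under JAXB (the bean 
names are hypothetical beyond the field names visible in the JSON above; this 
is not the actual CapacitySchedulerHealthInfo code). A JAXB-mapped Map marshals 
as repeated "entry" elements, which the JSON view renders as duplicate keys, 
while a List of named beans marshals as a well-formed array:
{code}
import java.util.ArrayList;
import java.util.List;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;

// Hypothetical beans for illustration only.
@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
class OperationInfo {
  String operation;   // e.g. "last-allocation"
  String nodeId;
  String containerId;
  String queue;
}

@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
class HealthInfo {
  // A List marshals as a JSON array of unique, well-formed objects instead of
  // a Map's repeated "entry" keys.
  List<OperationInfo> operationInfos = new ArrayList<>();
}
{code}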



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6257) CapacityScheduler REST API produces incorrect JSON - JSON object operationsInfo contains duplicate key

2017-03-01 Thread Tao Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15889796#comment-15889796
 ] 

Tao Yang commented on YARN-6257:


Hi, [~sunilg]. Thank you for looking into the issue.
{quote}
It will break compatibility with the previous version, since the REST response 
will change from a map to a list, correct?
{quote}
Correct. The response will be different, but I think the impact would be small: 
this field is only exposed through the REST API, and the previous version's 
output cannot be consumed anyway because of the malformed JSON. Thoughts?

> CapacityScheduler REST API produces incorrect JSON - JSON object 
> operationsInfo contains duplicate key
> --
>
> Key: YARN-6257
> URL: https://issues.apache.org/jira/browse/YARN-6257
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 2.8.1
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Minor
> Attachments: YARN-6257.001.patch
>
>
> In the response string of the CapacityScheduler REST API, 
> scheduler/schedulerInfo/health/operationsInfo is a JSON object with the 
> duplicate key 'entry':
> {code}
> "operationsInfo":{
>   
> "entry":{"key":"last-preemption","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}},
>   
> "entry":{"key":"last-reservation","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}},
>   
> "entry":{"key":"last-allocation","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}},
>   
> "entry":{"key":"last-release","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}}
> }
> {code}
> To solve this problem, I propose converting the type of the operationsInfo 
> field in the CapacitySchedulerHealthInfo class from a Map to a List.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6257) CapacityScheduler REST API produces incorrect JSON - JSON object operationsInfo contains duplicate key

2017-03-01 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15889743#comment-15889743
 ] 

Sunil G commented on YARN-6257:
---

It will break compatibility with the previous version, since the REST response 
will change from a map to a list, correct?

> CapacityScheduler REST API produces incorrect JSON - JSON object 
> operationsInfo contains duplicate key
> --
>
> Key: YARN-6257
> URL: https://issues.apache.org/jira/browse/YARN-6257
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 2.8.1
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Minor
> Attachments: YARN-6257.001.patch
>
>
> In the response string of the CapacityScheduler REST API, 
> scheduler/schedulerInfo/health/operationsInfo is a JSON object with the 
> duplicate key 'entry':
> {code}
> "operationsInfo":{
>   
> "entry":{"key":"last-preemption","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}},
>   
> "entry":{"key":"last-reservation","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}},
>   
> "entry":{"key":"last-allocation","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}},
>   
> "entry":{"key":"last-release","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}}
> }
> {code}
> To solve this problem, I propose converting the type of the operationsInfo 
> field in the CapacitySchedulerHealthInfo class from a Map to a List.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org