[jira] [Updated] (YARN-5554) MoveApplicationAcrossQueues does not check user permission on the target queue

2016-10-04 Thread Wilfred Spiegelenburg (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wilfred Spiegelenburg updated YARN-5554:

Attachment: YARN-5554.9.patch

Updated the text in the messages; it does make sense to include it not just in 
the message from the queue manager. Does the message look OK, [~bibinchundatt]?

> MoveApplicationAcrossQueues does not check user permission on the target queue
> --
>
> Key: YARN-5554
> URL: https://issues.apache.org/jira/browse/YARN-5554
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Haibo Chen
>Assignee: Wilfred Spiegelenburg
> Attachments: YARN-5554.2.patch, YARN-5554.3.patch, YARN-5554.4.patch, 
> YARN-5554.5.patch, YARN-5554.6.patch, YARN-5554.7.patch, YARN-5554.8.patch, 
> YARN-5554.9.patch
>
>
> moveApplicationAcrossQueues operation currently does not check user 
> permission on the target queue. This incorrectly allows one user to move 
> his/her own applications to a queue that the user has no access to



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4911) Bad placement policy in FairScheduler causes the RM to crash

2016-10-04 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated YARN-4911:
-
Attachment: YARN-4911.003.patch

> Bad placement policy in FairScheduler causes the RM to crash
> 
>
> Key: YARN-4911
> URL: https://issues.apache.org/jira/browse/YARN-4911
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>  Labels: supportability
> Attachments: YARN-4911.001.patch, YARN-4911.002.patch, 
> YARN-4911.003.patch
>
>
> When you have a fair-scheduler.xml with the rule:
>   
> 
>   
> and the queue okay1 doesn't exist, the following exception occurs in the RM:
> 2016-04-01 16:56:33,383 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type APP_ADDED to the scheduler
> java.lang.IllegalStateException: Should have applied a rule before reaching 
> here
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.QueuePlacementPolicy.assignAppToQueue(QueuePlacementPolicy.java:173)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.assignToQueue(FairScheduler.java:728)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.addApplication(FairScheduler.java:634)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1224)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:112)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:691)
> at java.lang.Thread.run(Thread.java:745)
> which causes the RM to crash.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5554) MoveApplicationAcrossQueues does not check user permission on the target queue

2016-10-04 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547699#comment-15547699
 ] 

Bibin A Chundatt commented on YARN-5554:


Thank you [~wilfreds] for the patch.
In the audit logger and remote exception, can we add a "queue doesn't exist" 
message too?
{noformat}
+  RMAuditLogger.logFailure(callerUGI.getShortUserName(),
+  AuditConstants.MOVE_APP_REQUEST,
+  "User doesn't have permissions to move application to queue "
+  + targetQueue, "ClientRMService",
+  AuditConstants.UNAUTHORIZED_USER, applicationId);
+  throw RPCUtil.getRemoteException(new AccessControlException("User "
+  + callerUGI.getShortUserName()
+  + " doesn't have permissions to move application to queue "
+  + targetQueue + " on " + applicationId));
{noformat}
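
A minimal sketch (not part of the attached patch) of how the missing-queue case 
could be surfaced through the same audit-log and remote-exception paths, reusing 
the identifiers from the snippet above; the exact message wording is only an 
assumption:
{code}
// Hypothetical sketch: report "target queue does not exist" via the same
// RMAuditLogger / RPCUtil calls used for the permission failure above.
String msg = "Target queue " + targetQueue + " does not exist";
RMAuditLogger.logFailure(callerUGI.getShortUserName(),
    AuditConstants.MOVE_APP_REQUEST, msg, "ClientRMService",
    AuditConstants.UNAUTHORIZED_USER, applicationId);
throw RPCUtil.getRemoteException(
    new AccessControlException("Move of " + applicationId + " failed: " + msg));
{code}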

> MoveApplicationAcrossQueues does not check user permission on the target queue
> --
>
> Key: YARN-5554
> URL: https://issues.apache.org/jira/browse/YARN-5554
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Haibo Chen
>Assignee: Wilfred Spiegelenburg
> Attachments: YARN-5554.2.patch, YARN-5554.3.patch, YARN-5554.4.patch, 
> YARN-5554.5.patch, YARN-5554.6.patch, YARN-5554.7.patch, YARN-5554.8.patch
>
>
> moveApplicationAcrossQueues operation currently does not check user 
> permission on the target queue. This incorrectly allows one user to move 
> his/her own applications to a queue that the user has no access to



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5554) MoveApplicationAcrossQueues does not check user permission on the target queue

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547586#comment-15547586
 ] 

Hadoop QA commented on YARN-5554:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 46s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 77 unchanged - 3 fixed = 77 total (was 80) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 44m 17s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 23s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831676/YARN-5554.8.patch |
| JIRA Issue | YARN-5554 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 323ca0d0bd1d 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 31f8da2 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13286/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13286/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> MoveApplicationAcrossQueues does not check user permission on the target queue
> --
>
> Key: YARN-5554
> URL: https://issues.apache.org/jira/browse/YARN-5554
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects 

[jira] [Updated] (YARN-5554) MoveApplicationAcrossQueues does not check user permission on the target queue

2016-10-04 Thread Wilfred Spiegelenburg (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wilfred Spiegelenburg updated YARN-5554:

Attachment: YARN-5554.8.patch

Thanks [~kasha]. I changed the return value to just false instead of throwing, 
and updated all the code that relies on it.
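
A minimal sketch of the kind of change described, with a simplified, hypothetical 
signature for QueueACLsManager#checkAccess; the real patch may differ:
{code}
// Sketch only: when the target queue cannot be found, log it and return false
// instead of throwing, so callers treat it like any other failed ACL check.
public boolean checkAccess(UserGroupInformation callerUGI, QueueACL acl,
    RMApp app, String targetQueue) {
  Queue queue = scheduler.getQueue(targetQueue);   // hypothetical lookup
  if (queue == null) {
    LOG.debug("Queue " + targetQueue + " does not exist, denying access");
    return false;
  }
  return queue.hasAccess(acl, callerUGI);
}
{code}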

> MoveApplicationAcrossQueues does not check user permission on the target queue
> --
>
> Key: YARN-5554
> URL: https://issues.apache.org/jira/browse/YARN-5554
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Haibo Chen
>Assignee: Wilfred Spiegelenburg
> Attachments: YARN-5554.2.patch, YARN-5554.3.patch, YARN-5554.4.patch, 
> YARN-5554.5.patch, YARN-5554.6.patch, YARN-5554.7.patch, YARN-5554.8.patch
>
>
> moveApplicationAcrossQueues operation currently does not check user 
> permission on the target queue. This incorrectly allows one user to move 
> his/her own applications to a queue that the user has no access to



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5659) getPathFromYarnURL should use standard methods

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547382#comment-15547382
 ] 

Hadoop QA commented on YARN-5659:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 25s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 59s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831674/YARN-5659.05.patch |
| JIRA Issue | YARN-5659 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9636eb52862d 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 31f8da2 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13285/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13285/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> getPathFromYarnURL should use standard methods
> --
>
> Key: YARN-5659
> URL: https://issues.apache.org/jira/browse/YARN-5659
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: YARN-5659.01.patch, YARN-5659.02.patch, 
> YARN-5659.03.patch, YARN-5659.04.patch, YARN-5659.04.patch, 
> YARN-5659.05.patch, YARN-5659.05.patch, YARN-5659.patch
>
>
> getPathFromYarnURL does some string shenanigans where  standard ctors should 
> suffice.
> 

[jira] [Updated] (YARN-5659) getPathFromYarnURL should use standard methods

2016-10-04 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated YARN-5659:
---
Attachment: YARN-5659.05.patch

> getPathFromYarnURL should use standard methods
> --
>
> Key: YARN-5659
> URL: https://issues.apache.org/jira/browse/YARN-5659
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: YARN-5659.01.patch, YARN-5659.02.patch, 
> YARN-5659.03.patch, YARN-5659.04.patch, YARN-5659.04.patch, 
> YARN-5659.05.patch, YARN-5659.05.patch, YARN-5659.patch
>
>
> getPathFromYarnURL does some string shenanigans where  standard ctors should 
> suffice.
> There are also bugs in it e.g. passing an empty scheme to the URI ctor is 
> invalid, null should be used. 
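
As an illustration of the description above, a minimal sketch (not the actual 
patch) of a standard-constructor-based conversion that passes null rather than an 
empty string for an absent scheme or host; it assumes the usual getters on the 
YARN URL record:
{code}
import java.net.URI;
import java.net.URISyntaxException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.yarn.api.records.URL;

public final class UrlToPathSketch {
  // Sketch only: build the URI with the multi-argument constructor instead of
  // string concatenation, using null for missing components.
  public static Path toPath(URL url) throws URISyntaxException {
    String scheme = isEmpty(url.getScheme()) ? null : url.getScheme();
    String host = isEmpty(url.getHost()) ? null : url.getHost();
    return new Path(
        new URI(scheme, null, host, url.getPort(), url.getFile(), null, null));
  }

  private static boolean isEmpty(String s) {
    return s == null || s.isEmpty();
  }
}
{code}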



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5659) getPathFromYarnURL should use standard methods

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547331#comment-15547331
 ] 

Hadoop QA commented on YARN-5659:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 9s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api: The 
patch generated 1 new + 3 unchanged - 0 fixed = 4 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 53s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831672/YARN-5659.05.patch |
| JIRA Issue | YARN-5659 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 263651747df4 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 31f8da2 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13284/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/13284/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13284/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13284/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> getPathFromYarnURL should use standard methods
> --
>
> Key: YARN-5659
> URL: https://issues.apache.org/jira/browse/YARN-5659
> 

[jira] [Updated] (YARN-5659) getPathFromYarnURL should use standard methods

2016-10-04 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated YARN-5659:
---
Attachment: YARN-5659.05.patch

Added the annotations.
Please feel free to change the patch wrt whitespace, annotations, method names, 
and other such stuff that is easier to change on commit than to have 
back-and-forth on the jira.

> getPathFromYarnURL should use standard methods
> --
>
> Key: YARN-5659
> URL: https://issues.apache.org/jira/browse/YARN-5659
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: YARN-5659.01.patch, YARN-5659.02.patch, 
> YARN-5659.03.patch, YARN-5659.04.patch, YARN-5659.04.patch, 
> YARN-5659.05.patch, YARN-5659.patch
>
>
> getPathFromYarnURL does some string shenanigans where  standard ctors should 
> suffice.
> There are also bugs in it e.g. passing an empty scheme to the URI ctor is 
> invalid, null should be used. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5667) Move HBase backend code in ATS v2 into its separate module

2016-10-04 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547279#comment-15547279
 ] 

Haibo Chen commented on YARN-5667:
--

Here is a description of what each part does:
part1: remove direct references to HBaseTimelineWriter/Reader from the core ATS 
classes that YARN will need.
part2: extract the methods that are used only by the HBase implementation from 
TimelineStorageUtil into a new util class, HBaseTimelineStorageUtil.
part3: move all HBase-related code into a new module that the 
hadoop-yarn-server-timelineservice module depends on only at runtime.
part4: fix issues introduced in the hbase-test module.
part5: update the ATS v2 documentation.

> Move HBase backend code in ATS v2  into its separate module
> ---
>
> Key: YARN-5667
> URL: https://issues.apache.org/jira/browse/YARN-5667
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: part1.yarn5667.prelim.patch, 
> part2.yarn5667.prelim.patch, part3.yarn5667.prelim.patch, 
> part4.yarn5667.prelim.patch, part5.yarn5667.prelim.patch
>
>
> The HBase backend code currently lives along with the core ATS v2 code in 
> hadoop-yarn-server-timelineservice module. Because Resource Manager depends 
> on hadoop-yarn-server-timelineservice, an unnecessary dependency of the RM 
> module on HBase modules is introduced (HBase backend is pluggable, so we do 
> not need to directly pull in HBase jars). 
> In our internal effort to try ATS v2 with HBase 2.0 which depends on Hadoop 
> 3, we encountered a circular dependency during our builds between HBase2.0 
> and Hadoop3 artifacts.
> {code}
> hadoop-mapreduce-client-common, hadoop-yarn-client, 
> hadoop-yarn-server-resourcemanager, hadoop-yarn-server-timelineservice, 
> hbase-server, hbase-prefix-tree, hbase-hadoop2-compat, 
> hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-common]
> {code}
> This jira proposes we move all HBase-backend-related code from 
> hadoop-yarn-server-timelineservice into its own module (possible name is 
> yarn-server-timelineservice-storage) so that core RM modules do not depend on 
> HBase modules any more.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3139) Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler

2016-10-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547212#comment-15547212
 ] 

Hudson commented on YARN-3139:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10542 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10542/])
YARN-3139. Improve locks in (jianhe: rev 
31f8da22d0b8d2dcce5fbc8e45d832f40acf056f)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java


> Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler
> --
>
> Key: YARN-3139
> URL: https://issues.apache.org/jira/browse/YARN-3139
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-3139.0.patch, YARN-3139.1.patch, YARN-3139.2.patch, 
> YARN-3139.3.patch, YARN-3139.4.patch, YARN-3139.5.patch, YARN-3139.6.patch, 
> YARN-3139.7.patch
>
>
> Enhance locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler, as 
> mentioned in YARN-3091, a possible solution is using read/write lock. Other 
> fine-graind locks for specific purposes / bugs should be addressed in 
> separated tickets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5139) [Umbrella] Move YARN scheduler towards global scheduler

2016-10-04 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5139:
-
Attachment: wip-5.YARN-5139.patch

Attached the ver.5 WIP patch which I used to run the test. It has a few TODO 
items; I will update the patch in the next few days to make it ready for review.

> [Umbrella] Move YARN scheduler towards global scheduler
> ---
>
> Key: YARN-5139
> URL: https://issues.apache.org/jira/browse/YARN-5139
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: Explanantions of Global Scheduling (YARN-5139) 
> Implementation.pdf, YARN-5139-Concurrent-scheduling-performance-report.pdf, 
> YARN-5139-Global-Schedulingd-esign-and-implementation-notes-v2.pdf, 
> YARN-5139-Global-Schedulingd-esign-and-implementation-notes.pdf, 
> wip-1.YARN-5139.patch, wip-2.YARN-5139.patch, wip-3.YARN-5139.patch, 
> wip-4.YARN-5139.patch, wip-5.YARN-5139.patch
>
>
> Existing YARN scheduler is based on node heartbeat. This can lead to 
> sub-optimal decisions because scheduler can only look at one node at the time 
> when scheduling resources.
> Pseudo code of existing scheduling logic looks like:
> {code}
> for node in allNodes:
>Go to parentQueue
>   Go to leafQueue
> for application in leafQueue.applications:
>for resource-request in application.resource-requests
>   try to schedule on node
> {code}
> Considering future complex resource placement requirements, such as node 
> constraints (give me "a && b || c") or anti-affinity (do not allocate HBase 
> regionservers and Storm workers on the same host), we may need to consider 
> moving YARN scheduler towards global scheduling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5139) [Umbrella] Move YARN scheduler towards global scheduler

2016-10-04 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5139:
-
Attachment: YARN-5139-Concurrent-scheduling-performance-report.pdf

We're glad to share the results of our recent performance tests of concurrent 
scheduling based on the new global scheduling framework.

We used SLS (the scheduler load simulator) to simulate a 2.4 PB memory cluster 
with 20K nodes and 4k-12k applications running in parallel.

The new concurrent scheduling achieves up to *6.25X* throughput (average 
#containers allocated per second) compared to the original async scheduling of 
the Capacity Scheduler.

For more details, please refer to the attached 
{{YARN-5139-Concurrent-scheduling-performance-report.pdf}}; the code is in 
{{wip-5.YARN-5139.patch}}.

Thanks [~vinodkv]/[~gtCarrera9] for lots of valuable offline suggestions.

+ People who might be interested in this: 
[~jlowe]/[~curino]/[~kasha]/[~asuresh]/[~kkaranasos]/[~subru].

> [Umbrella] Move YARN scheduler towards global scheduler
> ---
>
> Key: YARN-5139
> URL: https://issues.apache.org/jira/browse/YARN-5139
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: Explanantions of Global Scheduling (YARN-5139) 
> Implementation.pdf, YARN-5139-Concurrent-scheduling-performance-report.pdf, 
> YARN-5139-Global-Schedulingd-esign-and-implementation-notes-v2.pdf, 
> YARN-5139-Global-Schedulingd-esign-and-implementation-notes.pdf, 
> wip-1.YARN-5139.patch, wip-2.YARN-5139.patch, wip-3.YARN-5139.patch, 
> wip-4.YARN-5139.patch
>
>
> Existing YARN scheduler is based on node heartbeat. This can lead to 
> sub-optimal decisions because scheduler can only look at one node at the time 
> when scheduling resources.
> Pseudo code of existing scheduling logic looks like:
> {code}
> for node in allNodes:
>Go to parentQueue
>   Go to leafQueue
> for application in leafQueue.applications:
>for resource-request in application.resource-requests
>   try to schedule on node
> {code}
> Considering future complex resource placement requirements, such as node 
> constraints (give me "a && b || c") or anti-affinity (do not allocate HBase 
> regionservers and Storm workers on the same host), we may need to consider 
> moving YARN scheduler towards global scheduling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5667) Move HBase backend code in ATS v2 into its separate module

2016-10-04 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547147#comment-15547147
 ] 

Haibo Chen commented on YARN-5667:
--

I have run all ATS v2 unit tests, and deployed this onto a pseudo-distributed 
cluster to run an example MapReduce job, to verify that this change does not 
break functionality. [~vrushalic], [~sjlee0], do you have any suggestions on how 
to further test this change?

However, there is still one piece missing here: copying the HBase jars into the 
share/hadoop/yarn/lib directory at packaging time. For some reason, the 
Hadoop-specific Maven assembly (I believe it is hadoop-yarn-dist.xml) no longer 
detects the HBase dependency in the new module, which was previously in the 
timelineservice module. [~sjlee0], what am I missing?

> Move HBase backend code in ATS v2  into its separate module
> ---
>
> Key: YARN-5667
> URL: https://issues.apache.org/jira/browse/YARN-5667
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: part1.yarn5667.prelim.patch, 
> part2.yarn5667.prelim.patch, part3.yarn5667.prelim.patch, 
> part4.yarn5667.prelim.patch, part5.yarn5667.prelim.patch
>
>
> The HBase backend code currently lives along with the core ATS v2 code in 
> hadoop-yarn-server-timelineservice module. Because Resource Manager depends 
> on hadoop-yarn-server-timelineservice, an unnecessary dependency of the RM 
> module on HBase modules is introduced (HBase backend is pluggable, so we do 
> not need to directly pull in HBase jars). 
> In our internal effort to try ATS v2 with HBase 2.0 which depends on Hadoop 
> 3, we encountered a circular dependency during our builds between HBase2.0 
> and Hadoop3 artifacts.
> {code}
> hadoop-mapreduce-client-common, hadoop-yarn-client, 
> hadoop-yarn-server-resourcemanager, hadoop-yarn-server-timelineservice, 
> hbase-server, hbase-prefix-tree, hbase-hadoop2-compat, 
> hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-common]
> {code}
> This jira proposes we move all HBase-backend-related code from 
> hadoop-yarn-server-timelineservice into its own module (possible name is 
> yarn-server-timelineservice-storage) so that core RM modules do not depend on 
> HBase modules any more.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5554) MoveApplicationAcrossQueues does not check user permission on the target queue

2016-10-04 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547136#comment-15547136
 ] 

Karthik Kambatla commented on YARN-5554:


Thanks for reporting and working on this, [~wilfreds]. 

Main comment on the patch: QueueACLsManager#checkAccess: Instead of throwing an 
IOException, can we log the fact that the queue does not exist and return false?


> MoveApplicationAcrossQueues does not check user permission on the target queue
> --
>
> Key: YARN-5554
> URL: https://issues.apache.org/jira/browse/YARN-5554
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Haibo Chen
>Assignee: Wilfred Spiegelenburg
> Attachments: YARN-5554.2.patch, YARN-5554.3.patch, YARN-5554.4.patch, 
> YARN-5554.5.patch, YARN-5554.6.patch, YARN-5554.7.patch
>
>
> moveApplicationAcrossQueues operation currently does not check user 
> permission on the target queue. This incorrectly allows one user to move 
> his/her own applications to a queue that the user has no access to



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5667) Move HBase backend code in ATS v2 into its separate module

2016-10-04 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-5667:
-
Attachment: part5.yarn5667.prelim.patch
part4.yarn5667.prelim.patch
part3.yarn5667.prelim.patch
part2.yarn5667.prelim.patch
part1.yarn5667.prelim.patch

Uploading a preliminary patch for reviews. The patch is broken down into 
multiple parts, each of which depends on the previous one.

> Move HBase backend code in ATS v2  into its separate module
> ---
>
> Key: YARN-5667
> URL: https://issues.apache.org/jira/browse/YARN-5667
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: part1.yarn5667.prelim.patch, 
> part2.yarn5667.prelim.patch, part3.yarn5667.prelim.patch, 
> part4.yarn5667.prelim.patch, part5.yarn5667.prelim.patch
>
>
> The HBase backend code currently lives along with the core ATS v2 code in 
> hadoop-yarn-server-timelineservice module. Because Resource Manager depends 
> on hadoop-yarn-server-timelineservice, an unnecessary dependency of the RM 
> module on HBase modules is introduced (HBase backend is pluggable, so we do 
> not need to directly pull in HBase jars). 
> In our internal effort to try ATS v2 with HBase 2.0 which depends on Hadoop 
> 3, we encountered a circular dependency during our builds between HBase2.0 
> and Hadoop3 artifacts.
> {code}
> hadoop-mapreduce-client-common, hadoop-yarn-client, 
> hadoop-yarn-server-resourcemanager, hadoop-yarn-server-timelineservice, 
> hbase-server, hbase-prefix-tree, hbase-hadoop2-compat, 
> hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-common]
> {code}
> This jira proposes we move all HBase-backend-related code from 
> hadoop-yarn-server-timelineservice into its own module (possible name is 
> yarn-server-timelineservice-storage) so that core RM modules do not depend on 
> HBase modules any more.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5710) Fix inconsistent naming in class ResourceRequest

2016-10-04 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547079#comment-15547079
 ] 

Yufei Gu commented on YARN-5710:


No need to add unit test.

> Fix inconsistent naming in class ResourceRequest
> 
>
> Key: YARN-5710
> URL: https://issues.apache.org/jira/browse/YARN-5710
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Trivial
> Attachments: YARN-5710.001.patch, YARN-5710.002.patch
>
>
>  "node", "machine" and "host" are the same thing with different name in this 
> context. Consolidate them to "node". 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5710) Fix inconsistent naming in class ResourceRequest

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547063#comment-15547063
 ] 

Hadoop QA commented on YARN-5710:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 21s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 30s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831645/YARN-5710.002.patch |
| JIRA Issue | YARN-5710 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a0811dff0277 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 44f48ee |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13283/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13283/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Fix inconsistent naming in class ResourceRequest
> 
>
> Key: YARN-5710
> URL: https://issues.apache.org/jira/browse/YARN-5710
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Trivial
> Attachments: YARN-5710.001.patch, YARN-5710.002.patch
>
>
>  "node", "machine" and "host" are the same 

[jira] [Commented] (YARN-5710) Fix inconsistent naming in class ResourceRequest

2016-10-04 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547003#comment-15547003
 ] 

Yufei Gu commented on YARN-5710:


Since "host" is used in many place, we think "host" should be fine, and change 
"machine" to "host" in patch 002. 

> Fix inconsistent naming in class ResourceRequest
> 
>
> Key: YARN-5710
> URL: https://issues.apache.org/jira/browse/YARN-5710
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Trivial
> Attachments: YARN-5710.001.patch, YARN-5710.002.patch
>
>
>  "node", "machine" and "host" are the same thing with different name in this 
> context. Consolidate them to "node". 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5710) Fix inconsistent naming in class ResourceRequest

2016-10-04 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5710:
---
Attachment: YARN-5710.002.patch

> Fix inconsistent naming in class ResourceRequest
> 
>
> Key: YARN-5710
> URL: https://issues.apache.org/jira/browse/YARN-5710
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Trivial
> Attachments: YARN-5710.001.patch, YARN-5710.002.patch
>
>
>  "node", "machine" and "host" are the same thing with different name in this 
> context. Consolidate them to "node". 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5710) Fix inconsistent naming in class ResourceRequest

2016-10-04 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546956#comment-15546956
 ] 

Yufei Gu commented on YARN-5710:


It is OK to leave the style issue alone and not add any unit test since the 
patch doesn't change any logic. 

> Fix inconsistent naming in class ResourceRequest
> 
>
> Key: YARN-5710
> URL: https://issues.apache.org/jira/browse/YARN-5710
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Trivial
> Attachments: YARN-5710.001.patch
>
>
>  "node", "machine" and "host" are the same thing with different name in this 
> context. Consolidate them to "node". 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3877) YarnClientImpl.submitApplication swallows exceptions

2016-10-04 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546975#comment-15546975
 ] 

Zhe Zhang commented on YARN-3877:
-

Hi Vinod, I'm considering this patch for branch-2.7. Any reason it was moved 
out of 2.7.2? Compatibility concern? Thanks.

> YarnClientImpl.submitApplication swallows exceptions
> 
>
> Key: YARN-3877
> URL: https://issues.apache.org/jira/browse/YARN-3877
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: client
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Varun Saxena
>Priority: Minor
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-3877.01.patch, YARN-3877.02.patch, 
> YARN-3877.03.patch, YARN-3877.04.patch
>
>
> When {{YarnClientImpl.submitApplication}} spins waiting for the application 
> to be accepted, any interruption during its Sleep() calls are logged and 
> swallowed.
> this makes it hard to interrupt the thread during shutdown. Really it should 
> throw some form of exception and let the caller deal with it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4998) Minor cleanup to UGI use in AdminService

2016-10-04 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546958#comment-15546958
 ] 

Karthik Kambatla commented on YARN-4998:


Trivial change, +1. Unfortunately though, the patch does not apply any more.

[~templedf] - mind revving it and submitting a patch for a Jenkins run? 

> Minor cleanup to UGI use in AdminService
> 
>
> Key: YARN-4998
> URL: https://issues.apache.org/jira/browse/YARN-4998
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Trivial
> Attachments: YARN-4998.001.patch
>
>
> Instead of calling {{UserGroupInformation.getCurrentUser()}} over and over, 
> we should just use the stored {{daemonUser}}.
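
A minimal sketch of the cleanup the description suggests, assuming a daemonUser 
field initialized once in AdminService#serviceInit (names follow the description; 
the shape of the real code may differ):
{code}
// Sketch only: capture the daemon user once at service init and reuse it,
// instead of calling UserGroupInformation.getCurrentUser() repeatedly.
private UserGroupInformation daemonUser;

@Override
protected void serviceInit(Configuration conf) throws Exception {
  daemonUser = UserGroupInformation.getCurrentUser();
  super.serviceInit(conf);
}

// Later checks can then use daemonUser (e.g. daemonUser.getShortUserName())
// directly rather than looking up the current user each time.
{code}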



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5388) MAPREDUCE-6719 requires changes to DockerContainerExecutor

2016-10-04 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546941#comment-15546941
 ] 

Karthik Kambatla commented on YARN-5388:


+1 for the trunk patch. [~sidharta-s], [~vvasudev] - would really appreciate a 
quick nod from you too. 

branch-2 patch: Shouldn't the deprecation be a Java annotation, as below, instead 
of only being part of the javadoc as in the current patch? Also, we should add 
more detail on the reasons for the deprecation and suggest alternatives. 
{code}
@Deprecated
public class DockerContainerExecutor extends ContainerExecutor {
{code}
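
For the branch-2 patch, a hedged sketch of how the annotation could be paired 
with a javadoc {{@deprecated}} tag that records the reason and an alternative 
(the wording and the suggested replacement are only examples):
{code}
/**
 * Docker-based container executor.
 *
 * @deprecated Example wording only: explain why this executor is deprecated
 * and point users at a supported alternative, e.g. the Docker support in
 * {@link LinuxContainerExecutor}, instead of leaving the annotation bare.
 */
@Deprecated
public class DockerContainerExecutor extends ContainerExecutor {
{code}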

> MAPREDUCE-6719 requires changes to DockerContainerExecutor
> --
>
> Key: YARN-5388
> URL: https://issues.apache.org/jira/browse/YARN-5388
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Fix For: 2.9.0
>
> Attachments: YARN-5388.001.patch, YARN-5388.002.patch, 
> YARN-5388.branch-2.001.patch
>
>
> Because the {{DockerContainerExecuter}} overrides the {{writeLaunchEnv()}} 
> method, it must also have the wildcard processing logic from 
> YARN-4958/YARN-5373 added to it.  Without it, the use of -libjars will fail 
> unless wildcarding is disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5710) Fix inconsistent naming in class ResourceRequest

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546907#comment-15546907
 ] 

Hadoop QA commented on YARN-5710:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api: The 
patch generated 1 new + 16 unchanged - 1 fixed = 17 total (was 17) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 28s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 3s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831629/YARN-5710.001.patch |
| JIRA Issue | YARN-5710 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux bb90a2f1f490 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 44f48ee |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13282/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13282/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13282/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Fix inconsistent naming in class ResourceRequest
> 
>
> Key: YARN-5710
> URL: https://issues.apache.org/jira/browse/YARN-5710
> Project: Hadoop 

[jira] [Updated] (YARN-5710) Fix inconsistent naming in class ResourceRequest

2016-10-04 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5710:
---
Description:  "node", "machine" and "host" are the same thing with different names in this context. Consolidate them to "node".  (was: Consolidate "node", "machine", "host" to "node". )

> Fix inconsistent naming in class ResourceRequest
> 
>
> Key: YARN-5710
> URL: https://issues.apache.org/jira/browse/YARN-5710
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Trivial
> Attachments: YARN-5710.001.patch
>
>
>  "node", "machine" and "host" are the same thing with different name in this 
> context. Consolidate them to "node". 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5700) testAMRestartNotLostContainerCompleteMsg times out intermittently in 2.8

2016-10-04 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546852#comment-15546852
 ] 

Eric Badger commented on YARN-5700:
---

Looks like there are 2 bugs here. 

1) TestAMRestart uses {{waitForState()}} to wait for the completed container. 
However, this checks the liveContainers list, and once the container is 
completed it is quickly removed from that list. I think we can instead use 
{{waitForContainerToComplete()}} to check the last set of finished 
containers. Something like this:

{noformat}
-rm1.waitForState(nm1, containerId2, RMContainerState.RUNNING);
+NMContainerStatus completedContainer =
+TestRMRestart.createNMContainerStatus(am1.getApplicationAttemptId(), 2,
+ContainerState.COMPLETE);
+rm1.waitForContainerToComplete(app1.getCurrentAppAttempt(), completedContainer);
{noformat}

2) YARN-4807 changed {{waitForState()}} in MockRM.java so that it quietly 
returns false on failure instead of throwing an exception. In 2.8 and below, 
the code would call an {{assertNotNull()}} on the container to make sure that 
it wasn't null and throw an exception if it was. Since 2.9+ quietly returns 
false instead of throwing an exception, the test waits for the timeout and then 
continues with the test once {{waitForState()}} returns (even though it 
returned false). We could fix the test to check for a false return value, but 
there are most likely other tests that also depend on {{waitForState()}} 
throwing an exception on failure instead of checking the return value. So I 
would think that it'd be better to put the {{assertNotNull()}} back in. 
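
For illustration, here is a rough, standalone sketch of the difference (this is not the actual MockRM code; {{ContainerLookup}} and the method names below are made up):

{code}
// Standalone sketch only -- not MockRM. It contrasts a wait that silently
// returns false on timeout with one that fails the test via assertNotNull().
import static org.junit.Assert.assertNotNull;

public class WaitForStateSketch {

  /** Stand-in for "look up the RMContainer for a container id". */
  interface ContainerLookup {
    Object getContainer();
  }

  // Post-YARN-4807 style: the caller must remember to check the boolean.
  static boolean waitQuietly(ContainerLookup lookup, long timeoutMs)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      if (lookup.getContainer() != null) {
        return true;
      }
      Thread.sleep(100);
    }
    return false;   // silent failure; the test keeps running after the timeout
  }

  // Pre-2.9 style: the assertion turns a timeout into an immediate test failure.
  static void waitOrFail(ContainerLookup lookup, long timeoutMs)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    Object container = lookup.getContainer();
    while (container == null && System.currentTimeMillis() < deadline) {
      Thread.sleep(100);
      container = lookup.getContainer();
    }
    assertNotNull("Container never showed up before the timeout", container);
  }
}
{code}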

[~kasha], [~yufeigu] (reporter/assignee from YARN-4807), what do you think 
about adding the {{assertNotNull()}} back into {{waitForState()}}?

> testAMRestartNotLostContainerCompleteMsg times out intermittently in 2.8
> 
>
> Key: YARN-5700
> URL: https://issues.apache.org/jira/browse/YARN-5700
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
>
> {noformat}
> java.lang.Exception: test timed out after 3 milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.waitForState(MockRM.java:301)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.waitForState(MockRM.java:286)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.waitForState(MockRM.java:281)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart.testAMRestartNotLostContainerCompleteMsg(TestAMRestart.java:774)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2995) Enhance UI to show cluster resource utilization of various container types

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546851#comment-15546851
 ] 

Hadoop QA commented on YARN-2995:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
8s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 16s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 53s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
0s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 6s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 32s 
{color} | {color:red} root: The patch generated 12 new + 226 unchanged - 1 
fixed = 238 total (was 227) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 27s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 14s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 35m 42s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 0s 
{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m 32s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.webapp.TestRMWebApp |
|   | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesNodes |
|   | hadoop.yarn.server.resourcemanager.webapp.TestNodesPage |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830805/YARN-2995.001.patch |
| JIRA Issue | YARN-2995 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux f0e0b7457c64 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | 

[jira] [Updated] (YARN-5710) Fix inconsistent naming in class ResourceRequest

2016-10-04 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5710:
---
Attachment: YARN-5710.001.patch

> Fix inconsistent naming in class ResourceRequest
> 
>
> Key: YARN-5710
> URL: https://issues.apache.org/jira/browse/YARN-5710
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Trivial
> Attachments: YARN-5710.001.patch
>
>
> Consolidate "node", "machine", "host" to "node". 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5710) Fix inconsistent naming in class ResourceRequest

2016-10-04 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5710:
---
Description: Consolidate "node", "machine", "host" to "node". 

> Fix inconsistent naming in class ResourceRequest
> 
>
> Key: YARN-5710
> URL: https://issues.apache.org/jira/browse/YARN-5710
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Trivial
>
> Consolidate "node", "machine", "host" to "node". 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5710) Fix inconsistent naming in class ResourceRequest

2016-10-04 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5710:
---
Summary: Fix inconsistent naming in class ResourceRequest  (was: Fix naming 
issue in class ResourceRequest)

> Fix inconsistent naming in class ResourceRequest
> 
>
> Key: YARN-5710
> URL: https://issues.apache.org/jira/browse/YARN-5710
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Trivial
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5710) Fix naming issue in class ResourceRequest

2016-10-04 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5710:
---
Summary: Fix naming issue in class ResourceRequest  (was: Fix doc issue in 
class ResourceRequest)

> Fix naming issue in class ResourceRequest
> -
>
> Key: YARN-5710
> URL: https://issues.apache.org/jira/browse/YARN-5710
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Trivial
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5710) Fix doc issue in class ResourceRequest

2016-10-04 Thread Yufei Gu (JIRA)
Yufei Gu created YARN-5710:
--

 Summary: Fix doc issue in class ResourceRequest
 Key: YARN-5710
 URL: https://issues.apache.org/jira/browse/YARN-5710
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn
Reporter: Yufei Gu
Assignee: Yufei Gu
Priority: Trivial






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-10-04 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546650#comment-15546650
 ] 

Varun Saxena commented on YARN-5585:


bq. I purposefully used VARIABLE_SIZE because the prefix can be empty bytes
That's correct. Sorry, I had missed it.

bq. Given your point 5 is valid, the id_prefix needs to be stored in a column and 
given back to the user while reading. Basically the intention is that the user 
can provide fromEntityPrefix as a filter.
When fromEntityPrefix is given as a query param, we will construct the row key 
using it. We do not necessarily need a column. We can use Result#getRow() and 
EntityRowKey#parseRowKey in parseEntity to fetch the prefix, like below.
{code}
EntityRowKey rowKey = EntityRowKey.parseRowKey(result.getRow());
entity.setIdPrefix(rowKey.getEntityIdPrefix());
{code}

bq. After fetching 2 rows, the user knows the prefix is 2, and gives 
fromEntityPrefix as 2 to retrieve the next batch. Then the reader does not need 
to scan rows from the beginning; it can start scanning directly at row keys 
prefixed with 2. The stop row needs to be calculated at the entityType level, 
i.e. up to prefix 4.
Ok, got it.
But do we need to copy over code from HBase, i.e. 
Scan#calculateTheClosestNextRowKeyForPrefix, for it? What we can do is as follows:
{code}
// get the bytes for the stop row
entityRowKeyPrefix = new EntityRowKeyPrefix(context.getClusterId(),
    context.getUserId(), context.getFlowName(), context.getFlowRunId(),
    context.getAppId(), context.getEntityType());

// set the stop row: bump the trailing separator byte so the scan stops at the
// end of this entity type
byte[] stopRow = entityRowKeyPrefix.getRowKeyPrefix();
stopRow[stopRow.length - 1] = (byte) 0xFF;
scan.setStopRow(stopRow);
{code}
This is because getRowKeyPrefix will give a byte array ending with 
Separator#QUALIFIERS, i.e. "!", which is 0x21 in hex. QUALIFIERS will never end 
in a byte equivalent of 0xFF, so we can safely set the last byte to 0xFF and use 
the result as the stop row.
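
A small, self-contained illustration of why that works (plain byte arrays and an unsigned lexicographic compare, which is the ordering HBase uses for row keys; no HBase classes involved):

{code}
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class StopRowSketch {
  public static void main(String[] args) {
    // A row key prefix ending with the '!' separator (0x21).
    byte[] prefix =
        "cluster!user!flow!1!app!entitytype!".getBytes(StandardCharsets.UTF_8);

    // Stop row: same bytes, but the trailing separator bumped to 0xFF.
    byte[] stopRow = Arrays.copyOf(prefix, prefix.length);
    stopRow[stopRow.length - 1] = (byte) 0xFF;

    byte[] row =
        "cluster!user!flow!1!app!entitytype!2!someId".getBytes(StandardCharsets.UTF_8);

    // Any row starting with the prefix sorts at/after the prefix and before
    // the stop row, so [prefix, stopRow) bounds the scan to this entity type.
    System.out.println(compareUnsigned(row, prefix) >= 0);  // true
    System.out.println(compareUnsigned(row, stopRow) < 0);  // true
  }

  static int compareUnsigned(byte[] a, byte[] b) {
    for (int i = 0; i < Math.min(a.length, b.length); i++) {
      int diff = (a[i] & 0xFF) - (b[i] & 0xFF);
      if (diff != 0) {
        return diff;
      }
    }
    return a.length - b.length;
  }
}
{code}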



> [Atsv2] Add a new filter fromId in REST endpoints
> -
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: 0001-YARN-5585.patch, YARN-5585-workaround.patch, 
> YARN-5585.v0.patch
>
>
> The TimelineReader REST APIs provide a lot of filters to retrieve 
> applications. Along with those, it would be good to add a new filter, fromId, 
> so that entities can be retrieved after the fromId. 
> Current behavior: the default limit is set to 100. If there are 1000 entities, 
> the REST call gives the first/last 100 entities. How do we retrieve the next 
> set of 100 entities, i.e. 101 to 200 or 900 to 801?
> Example: if applications app-1, app-2, ... app-10 are stored in the database, 
> *getApps?limit=5* gives app-1 to app-5, but there is no way to retrieve the 
> next 5 apps. 
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> Since ATS is targeting storage of a large number of entities, it is a very 
> common use case to get the next set of entities using fromId rather than 
> querying all the entities. This is very useful for pagination in the web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5491) Random Failure TestCapacityScheduler#testCSQueueBlocked

2016-10-04 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-5491:
-
Fix Version/s: (was: 2.9.0)
   2.8.0

Thanks, [~bibinchundatt]!  I committed this to branch-2.8 as well.

> Random Failure TestCapacityScheduler#testCSQueueBlocked
> ---
>
> Key: YARN-5491
> URL: https://issues.apache.org/jira/browse/YARN-5491
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: Failure-TestCapacityScheduler-output.txt, 
> Sucess-TestCapacityScheduler-output.txt, YARN-5491.0001.patch
>
>
> Random testcase failure in trunk for 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler.testCSQueueBlocked
> https://builds.apache.org/job/PreCommit-YARN-Build/12694/testReport/org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity/TestCapacityScheduler/testCSQueueBlocked/
> {noformat}
> java.lang.AssertionError: B Used Resource should be 12 GB expected:<12288> 
> but was:<11264>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler.testCSQueueBlocked(TestCapacityScheduler.java:3667)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4329) Allow fetching exact reason as to why a submitted app is in ACCEPTED state in Fair Scheduler

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546482#comment-15546482
 ] 

Hadoop QA commented on YARN-4329:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
54s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 80 unchanged - 1 fixed = 80 total (was 81) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 935 unchanged - 3 fixed = 935 total (was 938) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 38m 22s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 39s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831595/YARN-4329.003.patch |
| JIRA Issue | YARN-4329 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3d9263a81fc8 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 88b9444 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13280/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13280/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Allow fetching exact reason as to why a submitted app is in ACCEPTED state in 
> Fair Scheduler
> 
>
> Key: YARN-4329
> URL: 

[jira] [Commented] (YARN-5694) ZKRMStateStore should only start its verification thread when in HA failover is not embedded

2016-10-04 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546413#comment-15546413
 ] 

Karthik Kambatla commented on YARN-5694:


We should probably check for isAutoFailoverEnabledAndEmbedded?

I filed YARN-5709 to clean up surrounding code. Please feel free to pick that 
up. 

> ZKRMStateStore should only start its verification thread when in HA failover 
> is not embedded
> 
>
> Key: YARN-5694
> URL: https://issues.apache.org/jira/browse/YARN-5694
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: YARN-5694.001.patch, YARN-5694.branch-2.7.001.patch
>
>
> There are two cases.  In branch-2.7, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is always started, even when 
> using embedded or Curator failover.  In branch-2.8, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is only started when HA is 
> disabled, which makes no sense.  Based on the JIRA that introduced that 
> change (YARN-4559), I believe the intent was to start it only when embedded 
> failover is disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5709) Cleanup Curator-based leader election code

2016-10-04 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created YARN-5709:
--

 Summary: Cleanup Curator-based leader election code
 Key: YARN-5709
 URL: https://issues.apache.org/jira/browse/YARN-5709
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 2.8.0
Reporter: Karthik Kambatla
Priority: Critical


While reviewing YARN-5677 and YARN-5694, I noticed we could make the 
curator-based election code cleaner. It would be nicer to get this fixed in 2.8 
before we ship it, but this can be done at a later time as well. 
# By EmbeddedElector, we meant it was running as part of the RM daemon. Since 
the Curator-based elector is also running embedded, I feel the code should be 
checking for {{!curatorBased}} instead of {{isEmbeddedElector}}
# {{LeaderElectorService}} should probably be named 
{{CuratorBasedEmbeddedElectorService}} or some such.
# The code that initializes the elector should be at the same place 
irrespective of whether it is curator-based or not. 
# We seem to be caching the CuratorFramework instance in the RM. It makes more 
sense for it to be in RMContext. If others are okay with it, we might even be 
better off having an {{RMContext#getCurator()}} method to lazily create the 
curator framework and then cache it (a rough sketch follows below). 
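
A rough sketch of the lazy getter in point 4 could look like the following (the class and field names are placeholders for illustration, not the actual RMContext code):

{code}
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class CuratorHolderSketch {
  private final String zkConnectString;       // e.g. the RM's configured ZK quorum
  private volatile CuratorFramework curator;  // lazily created, then cached

  public CuratorHolderSketch(String zkConnectString) {
    this.zkConnectString = zkConnectString;
  }

  /** Lazily create and cache a single CuratorFramework instance. */
  public CuratorFramework getCurator() {
    CuratorFramework local = curator;
    if (local == null) {
      synchronized (this) {
        local = curator;
        if (local == null) {
          local = CuratorFrameworkFactory.newClient(
              zkConnectString, new ExponentialBackoffRetry(1000, 3));
          local.start();
          curator = local;
        }
      }
    }
    return local;
  }
}
{code}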



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4329) Allow fetching exact reason as to why a submitted app is in ACCEPTED state in Fair Scheduler

2016-10-04 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546352#comment-15546352
 ] 

Yufei Gu commented on YARN-4329:


Thanks [~templedf] for the review. The new patch addresses the comments.
Thanks [~Naganarasimha] for the review. Nice catch on the node skipping. I've 
uploaded a new patch for that. There are several reasons a node can get 
skipped, for example when the node does not have enough resources for the 
container. The new patch covers this. Since the AM resource request doesn't 
have a locality constraint, there is no need to check that. 
For your other comments:
1. The comment is deleted.
2. Formatted.
3. Done.
4. Kind of out of scope; we can open another JIRA for it. 


> Allow fetching exact reason as to why a submitted app is in ACCEPTED state in 
> Fair Scheduler
> 
>
> Key: YARN-4329
> URL: https://issues.apache.org/jira/browse/YARN-4329
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler, resourcemanager
>Reporter: Naganarasimha G R
>Assignee: Yufei Gu
> Attachments: YARN-4329.001.patch, YARN-4329.002.patch, 
> YARN-4329.003.patch
>
>
> Similar to YARN-3946, it would be useful to capture possible reason why the 
> Application is in accepted state in FairScheduler



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5561) [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and entities via REST

2016-10-04 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546350#comment-15546350
 ] 

Li Lu commented on YARN-5561:
-

Seems like we're reaching broad agreement on the general proposal of this JIRA. 
[~rohithsharma], would you please update the patch so that we can move forward 
with the rest of the process? Thanks! 

> [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and 
> entities via REST
> ---
>
> Key: YARN-5561
> URL: https://issues.apache.org/jira/browse/YARN-5561
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-5561.02.patch, YARN-5561.patch, YARN-5561.v0.patch
>
>
> The ATSv2 model lacks retrieval of {{list-of-all-apps}}, 
> {{list-of-all-app-attempts}} and {{list-of-all-containers-per-attempt}} via 
> REST APIs. It is also required to know about all the entities in an 
> application.
> These URLs are pretty much required for the web UI.
> The new REST URLs would be 
> # GET {{/ws/v2/timeline/apps}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/appattempts}}.
> # GET 
> {{/ws/v2/timeline/apps/\{app-id\}/appattempts/\{attempt-id\}/containers}}
> # GET {{/ws/v2/timeline/apps/\{app id\}/entities}} should display list of 
> entities that can be queried.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5699) Retrospect container entity fields which are publishing in events info fields.

2016-10-04 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546344#comment-15546344
 ] 

Li Lu commented on YARN-5699:
-

I think the major concern is that when we put useful fields into the event info 
field instead of the entity info field, we lose the ability to easily query 
them. For example, a user may want to list all containers with a specific exit 
code, or all containers that finished within a time window. That said, I think 
the key point is to put data related to the whole entity into the entity's info 
field. Ideally we should only put quite lightweight data into the event's info 
fields (since they're not easy to query), but if we share a lot of the code 
path we may want to either replicate that data in the entity info or refactor 
the code.
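
A rough sketch of that split (it assumes the ATSv2 {{TimelineEntity}}/{{TimelineEvent}} methods named below; the entity type and key strings are placeholders, not the real constants):

{code}
import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEvent;

public class ContainerEntitySketch {
  static TimelineEntity buildContainerEntity(String containerId, int exitStatus,
      long createdTime, long finishedTime) {
    TimelineEntity entity = new TimelineEntity();
    entity.setType("YARN_CONTAINER");
    entity.setId(containerId);

    // Entity-level info: entity-wide facts that should be easy to filter on,
    // e.g. "all containers with exit status 137".
    entity.addInfo("EXIT_STATUS", exitStatus);
    entity.addInfo("CREATED_TIME", createdTime);
    entity.addInfo("FINISHED_TIME", finishedTime);

    // Event-level info: kept lightweight, tied to the single finish event.
    TimelineEvent finished = new TimelineEvent();
    finished.setId("CONTAINER_FINISHED");
    finished.setTimestamp(finishedTime);
    entity.addEvent(finished);

    return entity;
  }
}
{code}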

> Retrospect container entity fields which are publishing in events info fields.
> --
>
> Key: YARN-5699
> URL: https://issues.apache.org/jira/browse/YARN-5699
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>
> Currently, all the container information is published in 2 places: some of 
> it is in the entity info (top level) and some is in the event info. 
> For containers, some of the event info should be published at the container 
> info level, for example: container exit status, container state, created time, 
> finished time. This is general container information required for the 
> container report, so it is better to publish it in the top-level info field. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4329) Allow fetching exact reason as to why a submitted app is in ACCEPTED state in Fair Scheduler

2016-10-04 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-4329:
---
Attachment: YARN-4329.003.patch

> Allow fetching exact reason as to why a submitted app is in ACCEPTED state in 
> Fair Scheduler
> 
>
> Key: YARN-4329
> URL: https://issues.apache.org/jira/browse/YARN-4329
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler, resourcemanager
>Reporter: Naganarasimha G R
>Assignee: Yufei Gu
> Attachments: YARN-4329.001.patch, YARN-4329.002.patch, 
> YARN-4329.003.patch
>
>
> Similar to YARN-3946, it would be useful to capture possible reason why the 
> Application is in accepted state in FairScheduler



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5677) RM can be in active-active state for an extended period

2016-10-04 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546317#comment-15546317
 ] 

Karthik Kambatla commented on YARN-5677:


A meaningful implementation of {{enterNeutralMode}} makes a lot of sense. Sorry 
for not filing a JIRA for the TODO I added years ago.

The patch here makes sense. My one concern is with letting the outstanding task 
run even after canceling the timer, especially when canceled as part of 
becomeActive. 
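
For reference, plain {{java.util.Timer#cancel()}} prevents future executions but does not stop a task that is already running; a tiny standalone illustration of that behavior (not the actual elector code):

{code}
import java.util.Timer;
import java.util.TimerTask;

public class TimerCancelSketch {
  public static void main(String[] args) throws InterruptedException {
    Timer timer = new Timer("retry-timer");
    timer.schedule(new TimerTask() {
      @Override
      public void run() {
        System.out.println("task started");
        try {
          Thread.sleep(2000);  // simulate a long-running retry attempt
        } catch (InterruptedException ignored) {
        }
        System.out.println("task still finished after the timer was cancelled");
      }
    }, 0);

    Thread.sleep(500);  // let the task start
    timer.cancel();     // no new executions, but the in-flight task keeps running
  }
}
{code}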

[~templedf] - in an offline conversation, you mentioned running into issues 
with the VerifyActiveStatusThread being stuck on transition to standby. Is the 
plan to fix that too in this JIRA? Or, to take care of it as a follow-up? 


> RM can be in active-active state for an extended period
> ---
>
> Key: YARN-5677
> URL: https://issues.apache.org/jira/browse/YARN-5677
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-5677.001.patch
>
>
> In trunk, there is no maximum number of retries that I see.  It appears the 
> connection will be retried forever, with the active never figuring out it's 
> no longer active.  In my testing, the active-active state lasted almost 2 
> hours with no sign of stopping before I killed it.  The solution appears to 
> be to cap the number of retries or amount of time spent retrying.
> This issue is significant because of the asynchronous nature of job 
> submission.  If the active doesn't know it's not active, it will buffer up 
> job submissions until it finally realizes it has become the standby. Then it 
> will fail all the job submissions in bulk. In high-volume workflows, that 
> behavior can create huge mass job failures.
> This issue is also important because the node managers will not fail over to 
> the new active until the old active realizes it's the standby.  Workloads 
> submitted after the old active loses contact with ZK will therefore fail to 
> be executed regardless of which RM the clients contact.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5707) Add manager class for resource profiles

2016-10-04 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546274#comment-15546274
 ] 

Arun Suresh commented on YARN-5707:
---

Thanks for the patch [~vvasudev]. It looks mostly good.

A couple of comments:
* I think you can remove the default constructor {{ResourceProfilesManagerImpl()}}.
* The profiles map can be final and initialized to a ConcurrentHashMap; that 
way, you might not need to synchronize all the methods.
* I feel {{getResourceProfiles()}} should not return null... that would place 
the burden of checking for null on the client. It would be better to return an 
empty map.
* Rather than explicitly asking the manager to reload, maybe it would be nice 
to have a reloading thread that monitors a given file for changes. We do this 
in the {{KMSACLs}} class.
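
A minimal sketch of the second and third points (placeholder class and method names, not the actual YARN-5707 classes):

{code}
import java.util.Collections;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ResourceProfilesStoreSketch {
  // final + concurrent map: readers don't need method-level synchronization
  private final Map<String, Map<String, Long>> profiles = new ConcurrentHashMap<>();

  public void addProfile(String name, Map<String, Long> resources) {
    profiles.put(name, resources);
  }

  // Never return null: hand back an unmodifiable view so callers skip null checks.
  public Map<String, Map<String, Long>> getResourceProfiles() {
    return Collections.unmodifiableMap(profiles);
  }

  public Map<String, Long> getProfile(String name) {
    return profiles.getOrDefault(name, Collections.emptyMap());
  }
}
{code}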

> Add manager class for resource profiles
> ---
>
> Key: YARN-5707
> URL: https://issues.apache.org/jira/browse/YARN-5707
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-5707-YARN-3926.001.patch, 
> YARN-5707-YARN-3926.002.patch
>
>
> Add a class that manages the resource profiles that are available for 
> applications to use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5707) Add manager class for resource profiles

2016-10-04 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546275#comment-15546275
 ] 

Wangda Tan commented on YARN-5707:
--

[~vvasudev],

You can check the top-level pom.xml as an example.

{code}
  
<plugin>
  <groupId>org.apache.rat</groupId>
  <artifactId>apache-rat-plugin</artifactId>
  <configuration>
    <excludes>
      <exclude>.gitattributes</exclude>
      <exclude>.gitignore</exclude>
      <exclude>.git/**</exclude>
      <exclude>.idea/**</exclude>
      <exclude>**/build/**</exclude>
      <exclude>**/patchprocess/**</exclude>
    </excludes>
  </configuration>
</plugin>
{code}

> Add manager class for resource profiles
> ---
>
> Key: YARN-5707
> URL: https://issues.apache.org/jira/browse/YARN-5707
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-5707-YARN-3926.001.patch, 
> YARN-5707-YARN-3926.002.patch
>
>
> Add a class that manages the resource profiles that are available for 
> applications to use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5587) Add support for resource profiles

2016-10-04 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated YARN-5587:

Attachment: YARN-5587-YARN-3926.004.patch

> Add support for resource profiles
> -
>
> Key: YARN-5587
> URL: https://issues.apache.org/jira/browse/YARN-5587
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-5587-YARN-3926.001.patch, 
> YARN-5587-YARN-3926.002.patch, YARN-5587-YARN-3926.003.patch, 
> YARN-5587-YARN-3926.004.patch
>
>
> Add support for resource profiles on the RM side to allow users to use 
> shorthands to specify resource requirements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5707) Add manager class for resource profiles

2016-10-04 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546260#comment-15546260
 ] 

Varun Vasudev commented on YARN-5707:
-

Thanks for the review [~leftnoteasy]!

bq. Suggest to add @Public/@Unstable to all new fields added to 
YarnConfiguration

Will do.

bq. could you add the patch which will use these new interfaces?

You can look at the latest patches on YARN-5708 and YARN-5587. YARN-5708 adds 
the APIs and protobuf implementations. The latest patch on 
YARN-5587 (YARN-5587-YARN-3926.004.patch) actually uses all the APIs.

bq. And ASF warnings need to be fixed/excluded.

Do you know how to do this? The warnings are generated by the json files added 
for testing - I'm not sure how to add them to an exclude list.


> Add manager class for resource profiles
> ---
>
> Key: YARN-5707
> URL: https://issues.apache.org/jira/browse/YARN-5707
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-5707-YARN-3926.001.patch, 
> YARN-5707-YARN-3926.002.patch
>
>
> Add a class that manages the resource profiles that are available for 
> applications to use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5587) Add support for resource profiles

2016-10-04 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated YARN-5587:

Attachment: (was: YARN-5587-YARN-3926.003.patch)

> Add support for resource profiles
> -
>
> Key: YARN-5587
> URL: https://issues.apache.org/jira/browse/YARN-5587
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-5587-YARN-3926.001.patch, 
> YARN-5587-YARN-3926.002.patch, YARN-5587-YARN-3926.003.patch, 
> YARN-5587-YARN-3926.004.patch
>
>
> Add support for resource profiles on the RM side to allow users to use 
> shorthands to specify resource requirements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5587) Add support for resource profiles

2016-10-04 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated YARN-5587:

Attachment: YARN-5587-YARN-3926.003.patch

> Add support for resource profiles
> -
>
> Key: YARN-5587
> URL: https://issues.apache.org/jira/browse/YARN-5587
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-5587-YARN-3926.001.patch, 
> YARN-5587-YARN-3926.002.patch, YARN-5587-YARN-3926.003.patch, 
> YARN-5587-YARN-3926.003.patch
>
>
> Add support for resource profiles on the RM side to allow users to use 
> shorthands to specify resource requirements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5707) Add manager class for resource profiles

2016-10-04 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546234#comment-15546234
 ] 

Wangda Tan commented on YARN-5707:
--

Thanks [~vvasudev],

I suggest adding {{@Public/@Unstable}} to all new fields added to 
YarnConfiguration, since this is a new feature and we can update it later. 
Also, could you add the patch which will use these new interfaces? That would 
help us understand how this patch will be used.

And ASF warnings need to be fixed/excluded.


> Add manager class for resource profiles
> ---
>
> Key: YARN-5707
> URL: https://issues.apache.org/jira/browse/YARN-5707
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-5707-YARN-3926.001.patch, 
> YARN-5707-YARN-3926.002.patch
>
>
> Add a class that manages the resource profiles that are available for 
> applications to use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5698) [YARN-3368] Launch new YARN UI under hadoop web app port

2016-10-04 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546217#comment-15546217
 ] 

Wangda Tan commented on YARN-5698:
--

Thanks [~sunilg] for working on the patch.

A few comments & suggestions:
1) Would it be better to rename yarn.resourcemanager.webapp.ui2.enable to 
yarn.webapp.ui2.enable, since the UI is not only for the resourcemanager?
2) When the RM runs on the local host, we still need to run {{corsproxy}}. Do 
you know what we can do to fix that?
3) A couple of renames: 
- URL endpoint: /newUI to /ui2 (consistent with the config key)
- {{public WebApp start(WebApp webapp, WebAppContext context)}}: rename context 
to ui2Context

Beyond this, the patch looks good. If you have some bandwidth, could you test 
the patch in a distributed environment?

> [YARN-3368] Launch new YARN UI under hadoop web app port
> 
>
> Key: YARN-5698
> URL: https://issues.apache.org/jira/browse/YARN-5698
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-5698-YARN-3368.0001.patch
>
>
> As discussed in YARN-5145, it will be better to launch new web ui as a new 
> webapp under same old port.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5698) [YARN-3368] Launch new YARN UI under hadoop web app port

2016-10-04 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546201#comment-15546201
 ] 

Sunil G commented on YARN-5698:
---

Thanks Kai Sasaki for the comments. It makes sense to me to change the name. 
I'll also wait for Wangda to review before uploading the next patch. 

> [YARN-3368] Launch new YARN UI under hadoop web app port
> 
>
> Key: YARN-5698
> URL: https://issues.apache.org/jira/browse/YARN-5698
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-5698-YARN-3368.0001.patch
>
>
> As discussed in YARN-5145, it will be better to launch new web ui as a new 
> webapp under same old port.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-10-04 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546179#comment-15546179
 ] 

Rohith Sharma K S commented on YARN-5585:
-

Thanks Varun for the quick review. 

bq. Intention behind having ID_PREFIX in EntityColumn ? According to me, we 
need not store prefix in the column. Is it because we want to read it back and 
send it to client ?
Given your point 5 is valid, the id_prefix needs to be stored in a column and 
given back to the user while reading. Basically the intention is that the user 
can provide fromEntityPrefix as a filter. 


bq. No need of GenericEntityReader#calculateTheClosestNextRowKeyForPrefix. 
Scan#setRowPrefixFilter will do it for you. We should call it the same way as 
was done previously.
This is an optimization while scanning rows. It lets the reader seek directly 
to the required row key and start scanning from there. Say the row keys are 
stored in the order below, the limit is 2, and the prefix is unknown; then 
scanning starts from the beginning of the row keys. After fetching 2 rows, the 
user knows the prefix is 2, and gives fromEntityPrefix as 2 to retrieve the next 
batch. Then the reader does not need to scan rows from the beginning; it can 
start scanning directly at row keys prefixed with 2. The stop row needs to be 
calculated at the entityType level, i.e. up to prefix 4.
{code}
cluster!user!flow!flowrun!app!entitytype!1!{entityid}
cluster!user!flow!flowrun!app!entitytype!2!{entityid}
cluster!user!flow!flowrun!app!entitytype!3!{entityid}
cluster!user!flow!flowrun!app!entitytype!4!{entityid}
{code}
bq. As entity ID prefix is a long, EntityRowKeyConverter#SEGMENT_SIZES should 
have new segment as Bytes.SIZEOF_LONG. It is currently given as VARIABLE_SIZE. 
Same change in TestRowKeys.
I purposefully used VARIABLE_SIZE because the prefix can also be empty bytes 
when no prefix is specified. If we use Bytes.SIZEOF_LONG, then decoding always 
expects that there are some bytes for the prefix, but that is not always the 
case. When no prefix is specified, we do not want to use a default value that 
takes extra bytes of storage. 

bq. We will have to change Get to Scan with a SingleColumnValueFilter 
accordingly.
This is an open point in the attached patch; I will look into the feasibility 
of using the same REST endpoint for prefix-supported entities. 

> [Atsv2] Add a new filter fromId in REST endpoints
> -
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: 0001-YARN-5585.patch, YARN-5585-workaround.patch, 
> YARN-5585.v0.patch
>
>
> The TimelineReader REST APIs provide a lot of filters to retrieve 
> applications. Along with those, it would be good to add a new filter, fromId, 
> so that entities can be retrieved after the fromId. 
> Current behavior: the default limit is set to 100. If there are 1000 entities, 
> the REST call gives the first/last 100 entities. How do we retrieve the next 
> set of 100 entities, i.e. 101 to 200 or 900 to 801?
> Example: if applications app-1, app-2, ... app-10 are stored in the database, 
> *getApps?limit=5* gives app-1 to app-5, but there is no way to retrieve the 
> next 5 apps. 
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> Since ATS is targeting storage of a large number of entities, it is a very 
> common use case to get the next set of entities using fromId rather than 
> querying all the entities. This is very useful for pagination in the web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-10-04 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-2009:
--
Attachment: YARN-2009.0005.patch

Thanks [~eepayne] for pointing out this corner case.

Uploading a new patch.

> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch, 
> YARN-2009.0003.patch, YARN-2009.0004.patch, YARN-2009.0005.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5707) Add manager class for resource profiles

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546067#comment-15546067
 ] 

Hadoop QA commented on YARN-5707:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 3m 4s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
57s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 25s 
{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
42s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 40s 
{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
44s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 1s 
{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} YARN-3926 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 22s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 38s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 9 
new + 214 unchanged - 1 fixed = 223 total (was 215) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 25s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 20s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 36m 49s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 24s 
{color} | {color:red} The patch generated 5 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 51s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831561/YARN-5707-YARN-3926.002.patch
 |
| JIRA Issue | YARN-5707 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 4e66da1b537f 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3926 / 0bc6696 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13279/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test 

[jira] [Commented] (YARN-5707) Add manager class for resource profiles

2016-10-04 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546001#comment-15546001
 ] 

Varun Vasudev commented on YARN-5707:
-

[~leftnoteasy], [~asuresh] - can you please take a look? This is the first of 2 
patches to reduce the review size for YARN-5587. Thanks!

> Add manager class for resource profiles
> ---
>
> Key: YARN-5707
> URL: https://issues.apache.org/jira/browse/YARN-5707
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-5707-YARN-3926.001.patch, 
> YARN-5707-YARN-3926.002.patch
>
>
> Add a class that manages the resource profiles that are available for 
> applications to use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4597) Add SCHEDULE to NM container lifecycle

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545896#comment-15545896
 ] 

Hadoop QA commented on YARN-4597:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 43s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 17s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 2m 17s {color} 
| {color:red} hadoop-yarn-project_hadoop-yarn generated 3 new + 32 unchanged - 
3 fixed = 35 total (was 35) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 42s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 28 
new + 480 unchanged - 9 fixed = 508 total (was 489) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 50s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 0 new + 239 unchanged - 1 fixed = 239 total (was 240) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} hadoop-yarn-server-tests in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 12m 15s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 4m 25s {color} 
| {color:red} hadoop-yarn-server-tests in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 52s {color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 58s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 

[jira] [Updated] (YARN-5707) Add manager class for resource profiles

2016-10-04 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated YARN-5707:

Attachment: YARN-5707-YARN-3926.002.patch

Uploaded a patch to fix findbugs.

> Add manager class for resource profiles
> ---
>
> Key: YARN-5707
> URL: https://issues.apache.org/jira/browse/YARN-5707
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-5707-YARN-3926.001.patch, 
> YARN-5707-YARN-3926.002.patch
>
>
> Add a class that manages the resource profiles that are available for 
> applications to use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-10-04 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545818#comment-15545818
 ] 

Varun Saxena edited comment on YARN-5585 at 10/4/16 4:06 PM:
-

Thanks [~rohithsharma] for the patch. A few comments.

# What is the intention behind having ID_PREFIX in EntityColumn? In my view, we 
need not store the prefix in the column. Is it because we want to read it back 
and send it to the client?
# No need for GenericEntityReader#calculateTheClosestNextRowKeyForPrefix; 
Scan#setRowPrefixFilter will do it for you. We should call it the same way as 
was done previously.
# As the entity ID prefix is a long, EntityRowKeyConverter#SEGMENT_SIZES should 
have the new segment as Bytes.SIZEOF_LONG. It is currently given as VARIABLE_SIZE. 
Same change in TestRowKeys.
# In EntityRowKeyConverter#encode, there is no need to invert the entity id prefix; 
we will take the prefix as-is. The sender can publish the entity with an inverted 
prefix if he wants contents in descending order (say). We can probably add 
something to TimelineUtils to invert it, if required, which clients can then use.
# In GenericEntityReader#parseEntity we should fetch the id prefix from the result 
set and setIdPrefix on the TimelineEntity to be returned. This will be useful for 
clients when they want to set fromPrefix (useful in the Tez UI use case).
# The Javadoc in TimelineReader should be changed. It currently says entities would 
be sorted by created time, which is no longer true.
{code}
   * @return A set of TimelineEntity instances of the given entity
   *type in the given context scope which matches the given predicates
   *ordered by created time, descending. Each entity will only contain the
   *metadata(id, type and created time) plus the given fields to retrieve.
{code}
# We should also update documentation to reflect id prefix.





was (Author: varun_saxena):
Thanks [~rohithsharma] for the patch. A few comments.

# What is the intention behind having ID_PREFIX in EntityColumn? In my view, we 
need not store the prefix in the column. Is it because we want to read it back 
and send it to the client?
# No need for GenericEntityReader#calculateTheClosestNextRowKeyForPrefix; 
Scan#setRowPrefixFilter will do it for you. We should call it the same way as 
was done previously.
# As the entity ID prefix is a long, EntityRowKeyConverter#SEGMENT_SIZES should 
have the new segment as Bytes.SIZEOF_LONG. Same change in TestRowKeys.
# In EntityRowKeyConverter#encode, there is no need to invert the entity id prefix; 
we will take the prefix as-is. The sender can publish the entity with an inverted 
prefix if he wants contents in descending order (say). We can probably add 
something to TimelineUtils to invert it, if required, which clients can then use.
# In GenericEntityReader#parseEntity we should fetch the id prefix from the result 
set and setIdPrefix on the TimelineEntity to be returned. This will be useful for 
clients when they want to set fromPrefix (useful in the Tez UI use case).
# The Javadoc in TimelineReader should be changed. It currently says entities would 
be sorted by created time, which is no longer true.
{code}
   * @return A set of TimelineEntity instances of the given entity
   *type in the given context scope which matches the given predicates
   *ordered by created time, descending. Each entity will only contain the
   *metadata(id, type and created time) plus the given fields to retrieve.
{code}
# We should also update documentation to reflect id prefix.




> [Atsv2] Add a new filter fromId in REST endpoints
> -
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: 0001-YARN-5585.patch, YARN-5585-workaround.patch, 
> YARN-5585.v0.patch
>
>
> TimelineReader REST APIs provide a lot of filters to retrieve the 
> applications. Along with those, it would be good to add a new filter, i.e. 
> fromId, so that entities can be retrieved after the fromId. 
> Current Behavior: The default limit is set to 100. If there are 1000 entities 
> then the REST call gives the first/last 100 entities. How to retrieve the next 
> set of 100 entities, i.e. 101 to 200 OR 900 to 801?
> Example: If applications are stored in a database as app-1, app-2 ... app-10,
> *getApps?limit=5* gives app-1 to app-5. But to retrieve the next 5 apps, there 
> is no way to achieve this. 
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> Since ATS is targeting storage of a large number of entities, it is a very 
> common use case to get the next set of entities using fromId rather than 
> querying all the entities. This is very useful for pagination in a web UI.

[jira] [Commented] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-10-04 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545835#comment-15545835
 ] 

Varun Saxena commented on YARN-5585:


Moreover, changes need to be made to GenericEntityReader#getResult as well. 
I assume that will be done once we decide on the REST APIs, because we need to 
handle two cases and hence have two different REST endpoints for them: one where 
the user queries an entity type that does not have a prefix, and another where 
the entity type is stored with a prefix but the user may or may not supply it. 
We will have to change the Get to a Scan with a SingleColumnValueFilter accordingly.
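
To make the second case concrete, a minimal sketch of replacing the point Get 
with a Scan plus a SingleColumnValueFilter when the caller does not supply the 
entity id prefix. The table and column names here are illustrative placeholders, 
not the real ATSv2 schema constants:

{code}
// Illustrative only: the table name and the "i"/"id" column names are
// placeholders, not the real ATSv2 schema constants.
import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class PrefixUnknownLookupSketch {
  ResultScanner scanByEntityId(Connection conn, byte[] contextRowPrefix,
      String entityId) throws IOException {
    // Scan every row under the context prefix (user/flow/app/type) and keep
    // only rows whose stored "id" column equals the requested entity id.
    Scan scan = new Scan();
    scan.setRowPrefixFilter(contextRowPrefix);
    SingleColumnValueFilter idFilter = new SingleColumnValueFilter(
        Bytes.toBytes("i"), Bytes.toBytes("id"),
        CompareFilter.CompareOp.EQUAL, Bytes.toBytes(entityId));
    idFilter.setFilterIfMissing(true); // skip rows without the id column
    scan.setFilter(idFilter);
    Table table = conn.getTable(TableName.valueOf("timelineservice.entity"));
    return table.getScanner(scan);
  }
}
{code}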

> [Atsv2] Add a new filter fromId in REST endpoints
> -
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: 0001-YARN-5585.patch, YARN-5585-workaround.patch, 
> YARN-5585.v0.patch
>
>
> TimelineReader REST APIs provide a lot of filters to retrieve the 
> applications. Along with those, it would be good to add a new filter, i.e. 
> fromId, so that entities can be retrieved after the fromId. 
> Current Behavior: The default limit is set to 100. If there are 1000 entities 
> then the REST call gives the first/last 100 entities. How to retrieve the next 
> set of 100 entities, i.e. 101 to 200 OR 900 to 801?
> Example: If applications are stored in a database as app-1, app-2 ... app-10,
> *getApps?limit=5* gives app-1 to app-5. But to retrieve the next 5 apps, there 
> is no way to achieve this. 
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> Since ATS is targeting storage of a large number of entities, it is a very 
> common use case to get the next set of entities using fromId rather than 
> querying all the entities. This is very useful for pagination in a web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5707) Add manager class for resource profiles

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545823#comment-15545823
 ] 

Hadoop QA commented on YARN-5707:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 2s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
10s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 19s 
{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
45s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 52s 
{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
46s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
18s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s 
{color} | {color:green} YARN-3926 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 39s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 41s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 8 
new + 214 unchanged - 1 fixed = 222 total (was 215) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 4s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 17s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 37m 15s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 18s 
{color} | {color:red} The patch generated 5 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 24s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Inconsistent synchronization of 
org.apache.hadoop.yarn.server.resourcemanager.resource.ResourceProfilesManagerImpl.profiles;
 locked 60% of time. Unsynchronized access at 
ResourceProfilesManagerImpl.java:[line 151] |
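
For context on this FindBugs category, a hypothetical illustration (not the 
actual ResourceProfilesManagerImpl code) of the pattern that produces 
"inconsistent synchronization" and the usual fix:

{code}
// Hypothetical example only. The map is written under the object lock but
// read without it, so FindBugs reports the field as locked only part of the
// time. Making every access synchronized (or using a ConcurrentHashMap)
// clears the warning.
import java.util.HashMap;
import java.util.Map;

class ProfilesHolder {
  private final Map<String, Integer> profiles = new HashMap<>();

  synchronized void addProfile(String name, int memoryMb) { // locked write
    profiles.put(name, memoryMb);
  }

  // Unsynchronized read: this is what triggers the warning. Declaring the
  // method synchronized as well makes the locking consistent.
  Integer getProfile(String name) {
    return profiles.get(name);
  }
}
{code}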
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831547/YARN-5707-YARN-3926.001.patch
 |
| JIRA Issue | YARN-5707 |
| Optional Tests |  asflicense  

[jira] [Commented] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-10-04 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545818#comment-15545818
 ] 

Varun Saxena commented on YARN-5585:


Thanks [~rohithsharma] for the patch. A few comments.

# What is the intention behind having ID_PREFIX in EntityColumn? In my view, we 
need not store the prefix in the column. Is it because we want to read it back 
and send it to the client?
# No need for GenericEntityReader#calculateTheClosestNextRowKeyForPrefix; 
Scan#setRowPrefixFilter will do it for you. We should call it the same way as 
was done previously.
# As the entity ID prefix is a long, EntityRowKeyConverter#SEGMENT_SIZES should 
have the new segment as Bytes.SIZEOF_LONG. Same change in TestRowKeys.
# In EntityRowKeyConverter#encode, there is no need to invert the entity id prefix; 
we will take the prefix as-is. The sender can publish the entity with an inverted 
prefix if he wants contents in descending order (say). We can probably add 
something to TimelineUtils to invert it, if required, which clients can then use.
# In GenericEntityReader#parseEntity we should fetch the id prefix from the result 
set and setIdPrefix on the TimelineEntity to be returned. This will be useful for 
clients when they want to set fromPrefix (useful in the Tez UI use case).
# The Javadoc in TimelineReader should be changed. It currently says entities would 
be sorted by created time, which is no longer true.
{code}
   * @return A set of TimelineEntity instances of the given entity
   *type in the given context scope which matches the given predicates
   *ordered by created time, descending. Each entity will only contain the
   *metadata(id, type and created time) plus the given fields to retrieve.
{code}
# We should also update documentation to reflect id prefix.
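
To make point 2 above concrete, a minimal sketch of the prefix scan using the 
plain HBase client API; the row-key bytes are assumed to come from the entity 
row-key encoder, and the per-row parsing is elided:

{code}
// Sketch only: rowKeyPrefix is assumed to be produced by the entity row-key
// encoder for the query context; entity parsing is left as a comment.
import java.io.IOException;

import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

class PrefixScanSketch {
  void scanPrefix(Table entityTable, byte[] rowKeyPrefix) throws IOException {
    Scan scan = new Scan();
    // setRowPrefixFilter derives both the start row and the exclusive stop
    // row from the prefix, so no hand-rolled "closest next row key" helper
    // is needed.
    scan.setRowPrefixFilter(rowKeyPrefix);
    try (ResultScanner scanner = entityTable.getScanner(scan)) {
      for (Result result : scanner) {
        // parse one TimelineEntity per row here
      }
    }
  }
}
{code}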




> [Atsv2] Add a new filter fromId in REST endpoints
> -
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: 0001-YARN-5585.patch, YARN-5585-workaround.patch, 
> YARN-5585.v0.patch
>
>
> TimelineReader REST APIs provide a lot of filters to retrieve the 
> applications. Along with those, it would be good to add a new filter, i.e. 
> fromId, so that entities can be retrieved after the fromId. 
> Current Behavior: The default limit is set to 100. If there are 1000 entities 
> then the REST call gives the first/last 100 entities. How to retrieve the next 
> set of 100 entities, i.e. 101 to 200 OR 900 to 801?
> Example: If applications are stored in a database as app-1, app-2 ... app-10,
> *getApps?limit=5* gives app-1 to app-5. But to retrieve the next 5 apps, there 
> is no way to achieve this. 
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> Since ATS is targeting storage of a large number of entities, it is a very 
> common use case to get the next set of entities using fromId rather than 
> querying all the entities. This is very useful for pagination in a web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5682) [YARN-3368] Fix maven build to keep all generated or downloaded files in target folder

2016-10-04 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5682:
--
Fix Version/s: YARN-3368

> [YARN-3368] Fix maven build to keep all generated or downloaded files in 
> target folder
> --
>
> Key: YARN-5682
> URL: https://issues.apache.org/jira/browse/YARN-5682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: YARN-3368
>
> Attachments: YARN-5682-YARN-3368.001.patch, 
> YARN-5682-YARN-3368.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5541) Handling of opportunistic containers in the NM

2016-10-04 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545639#comment-15545639
 ] 

Arun Suresh commented on YARN-5541:
---

Just added an initial patch for YARN-4597, which should simplify some code paths 
in the NM (we can decommission the QueuingContainerManager).

> Handling of opportunistic containers in the NM
> --
>
> Key: YARN-5541
> URL: https://issues.apache.org/jira/browse/YARN-5541
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Konstantinos Karanasos
>
> I am creating this JIRA in order to group all tasks related to the management 
> of opportunistic containers in the NMs, such as the queuing of containers, 
> the pausing of containers and the prioritization of queued containers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4597) Add SCHEDULE to NM container lifecycle

2016-10-04 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-4597:
--
Attachment: YARN-4597.001.patch

Attaching initial patch.

* Introduced a {{ContainerScheduler}}: the scheduler can decide when a container 
can be launched, and the decision can be policy based. Currently it checks a flag 
to see if queuing is enabled; if so, it performs the role of the 
{{QueuingContainerManagerImpl}}.
* Removed the {{QueuingContainerManagerImpl}}: as described above, it is no longer 
required. Most of the logic has moved to the scheduler.
* There is no need for a {{QueuingContext}} anymore.
* {{TestQueuingContainerManager}} has been moved to 
{{TestContainerSchedulerQueuing}}
* The {{LOCALIZED}} container state has been renamed to {{SCHEDULED}}

Since a lot of code paths have been simplified, a lot of code changes are 
actually deletions.

Do take a look. (cc: [~kasha], [~chris.douglas], [~vvasudev], [~jianhe], 
[~kkaranasos], [~subru])
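
A purely illustrative sketch of the launch-or-queue decision described above; 
the class and method names are hypothetical and not taken from the attached patch:

{code}
// Hypothetical sketch of the scheduling decision, not the patch itself.
import java.util.ArrayDeque;
import java.util.Queue;

class ContainerSchedulerSketch {
  private final boolean queuingEnabled;
  private final Queue<String> queuedContainers = new ArrayDeque<>();

  ContainerSchedulerSketch(boolean queuingEnabled) {
    this.queuingEnabled = queuingEnabled;
  }

  /** Decide whether a newly SCHEDULED container starts now or waits. */
  void onContainerScheduled(String containerId, boolean resourcesAvailable) {
    if (resourcesAvailable) {
      launch(containerId);
    } else if (queuingEnabled) {
      // Plays the role the QueuingContainerManagerImpl used to play: hold
      // the container until capacity frees up.
      queuedContainers.add(containerId);
    } else {
      reject(containerId);
    }
  }

  /** When a running container finishes, try to start the next queued one. */
  void onContainerFinished(boolean resourcesAvailable) {
    if (resourcesAvailable && !queuedContainers.isEmpty()) {
      launch(queuedContainers.poll());
    }
  }

  private void launch(String containerId) { /* hand off to the launcher */ }
  private void reject(String containerId) { /* signal back to the AM/RM */ }
}
{code}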



> Add SCHEDULE to NM container lifecycle
> --
>
> Key: YARN-4597
> URL: https://issues.apache.org/jira/browse/YARN-4597
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Chris Douglas
>Assignee: Arun Suresh
> Attachments: YARN-4597.001.patch
>
>
> Currently, the NM immediately launches containers after resource 
> localization. Several features could be more cleanly implemented if the NM 
> included a separate stage for reserving resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545552#comment-15545552
 ] 

Hadoop QA commented on YARN-5585:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
0s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 20s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 37s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 5 
new + 24 unchanged - 0 fixed = 29 total (was 24) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
44s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 14s 
{color} | {color:red} hadoop-yarn-server-timelineservice in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 44s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 15s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831536/0001-YARN-5585.patch |
| JIRA Issue | YARN-5585 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 73ab6fc84a67 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ef7f06f |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13276/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/13276/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13276/testReport/ |
| modules | C: 

[jira] [Commented] (YARN-5148) [YARN-3368] Add page to new YARN UI to view server side configurations/logs/JVM-metrics

2016-10-04 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545548#comment-15545548
 ] 

Kai Sasaki commented on YARN-5148:
--

[~sunilg] I updated the patch to categorize the configuration labels. Could you 
review it when you get a chance?

> [YARN-3368] Add page to new YARN UI to view server side 
> configurations/logs/JVM-metrics
> ---
>
> Key: YARN-5148
> URL: https://issues.apache.org/jira/browse/YARN-5148
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Kai Sasaki
> Attachments: Screen Shot 2016-09-11 at 23.28.31.png, Screen Shot 
> 2016-09-13 at 22.27.00.png, YARN-5148-YARN-3368.01.patch, 
> YARN-5148-YARN-3368.02.patch, YARN-5148-YARN-3368.03.patch, yarn-conf.png, 
> yarn-tools.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5698) [YARN-3368] Launch new YARN UI under hadoop web app port

2016-10-04 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545542#comment-15545542
 ] 

Kai Sasaki commented on YARN-5698:
--

[~sunilg] Thanks for updating!
I confirmed the path works as expected. I think the servlet name {{newUI}} could be 
{{uiV2}} or {{v2}} to allow for future UI versions.
What do you think?


> [YARN-3368] Launch new YARN UI under hadoop web app port
> 
>
> Key: YARN-5698
> URL: https://issues.apache.org/jira/browse/YARN-5698
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-5698-YARN-3368.0001.patch
>
>
> As discussed in YARN-5145, it will be better to launch new web ui as a new 
> webapp under same old port.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5708) Implement APIs to get resource profiles from the RM

2016-10-04 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated YARN-5708:

Attachment: YARN-5708-YARN-3926.001.patch

> Implement APIs to get resource profiles from the RM
> ---
>
> Key: YARN-5708
> URL: https://issues.apache.org/jira/browse/YARN-5708
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: client
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-5708-YARN-3926.001.patch
>
>
> Implement a set of APIs to get the available resource profiles from the RM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5707) Add manager class for resource profiles

2016-10-04 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated YARN-5707:

Attachment: YARN-5707-YARN-3926.001.patch

Fixed filename and branch.

> Add manager class for resource profiles
> ---
>
> Key: YARN-5707
> URL: https://issues.apache.org/jira/browse/YARN-5707
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-5707-YARN-3926.001.patch
>
>
> Add a class that manages the resource profiles that are available for 
> applications to use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5707) Add manager class for resource profiles

2016-10-04 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated YARN-5707:

Attachment: (was: YARN-5607-YARN-3927.001.patch)

> Add manager class for resource profiles
> ---
>
> Key: YARN-5707
> URL: https://issues.apache.org/jira/browse/YARN-5707
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-5707-YARN-3926.001.patch
>
>
> Add a class that manages the resource profiles that are available for 
> applications to use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5707) Add manager class for resource profiles

2016-10-04 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated YARN-5707:

Attachment: YARN-5607-YARN-3927.001.patch

Patch with resource profiles manager attached.

> Add manager class for resource profiles
> ---
>
> Key: YARN-5707
> URL: https://issues.apache.org/jira/browse/YARN-5707
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-5607-YARN-3927.001.patch
>
>
> Add a class that manages the resource profiles that are available for 
> applications to use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5708) Implement APIs to get resource profiles from the RM

2016-10-04 Thread Varun Vasudev (JIRA)
Varun Vasudev created YARN-5708:
---

 Summary: Implement APIs to get resource profiles from the RM
 Key: YARN-5708
 URL: https://issues.apache.org/jira/browse/YARN-5708
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Reporter: Varun Vasudev
Assignee: Varun Vasudev


Implement a set of APIs to get the available resource profiles from the RM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5707) Add manager class for resource profiles

2016-10-04 Thread Varun Vasudev (JIRA)
Varun Vasudev created YARN-5707:
---

 Summary: Add manager class for resource profiles
 Key: YARN-5707
 URL: https://issues.apache.org/jira/browse/YARN-5707
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Varun Vasudev
Assignee: Varun Vasudev


Add a class that manages the resource profiles that are available for 
applications to use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-10-04 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-5585:

Attachment: 0001-YARN-5585.patch

Updating the patch with the following changes for early review.
# Added *idPrefix* to the TimelineEntity object.

Collector implementation:
# id_prefix is published as part of the row key and as a column to HBase.

Reader implementation:
# entityPrefixId is part of TimelineReaderContext.
# Along with entityPrefixId, fromId is also supported. The difference is that 
entityPrefixId is applied at the storage level, while fromId is applied on the 
reader side.
# Removed the sortedKey variable.
# Used LinkedHashSet instead of TreeSet while adding entities to the Set.
# startRow and stopRow are calculated dynamically based on whether entityPrefixId 
exists.

Pending tasks:
# Need to add query params for entityPrefixId and fromId in the REST endpoints.
# For single entity retrieval, need to use a SingleColumnValueFilter.
# Need to test in a real cluster.
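
As a usage sketch of the proposed pagination, assuming an illustrative reader URL 
and placeholder response handling (the final REST contract is still one of the 
pending tasks above), a client could page through entities like this:

{code}
// Usage sketch only: the endpoint host, port and JSON handling are
// placeholders; limit/fromId are the query parameters proposed in this JIRA.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FromIdPagingSketch {
  public static void main(String[] args) throws Exception {
    HttpClient client = HttpClient.newHttpClient();
    String base = "http://timeline-reader.example.com:8188/ws/v2/timeline/apps?limit=5";
    String fromId = null;
    for (int page = 0; page < 3; page++) {
      String url = (fromId == null) ? base : base + "&fromId=" + fromId;
      HttpResponse<String> response = client.send(
          HttpRequest.newBuilder(URI.create(url)).GET().build(),
          HttpResponse.BodyHandlers.ofString());
      System.out.println(response.body());
      // In real code: parse the JSON, remember the id of the last entity in
      // this page, and pass it back as fromId to fetch the next page.
      fromId = "app-" + ((page + 1) * 5); // placeholder for the parsed id
    }
  }
}
{code}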

> [Atsv2] Add a new filter fromId in REST endpoints
> -
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: 0001-YARN-5585.patch, YARN-5585-workaround.patch, 
> YARN-5585.v0.patch
>
>
> TimelineReader REST APIs provide a lot of filters to retrieve the 
> applications. Along with those, it would be good to add a new filter, i.e. 
> fromId, so that entities can be retrieved after the fromId. 
> Current Behavior: The default limit is set to 100. If there are 1000 entities 
> then the REST call gives the first/last 100 entities. How to retrieve the next 
> set of 100 entities, i.e. 101 to 200 OR 900 to 801?
> Example: If applications are stored in a database as app-1, app-2 ... app-10,
> *getApps?limit=5* gives app-1 to app-5. But to retrieve the next 5 apps, there 
> is no way to achieve this. 
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> Since ATS is targeting storage of a large number of entities, it is a very 
> common use case to get the next set of entities using fromId rather than 
> querying all the entities. This is very useful for pagination in a web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5706) Fail to launch SLSRunner due to NPE

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545439#comment-15545439
 ] 

Hadoop QA commented on YARN-5706:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 4s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
12s {color} | {color:green} The patch generated 0 new + 74 unchanged - 1 fixed 
= 74 total (was 75) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 16s 
{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 9m 49s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831527/YARN-5706.01.patch |
| JIRA Issue | YARN-5706 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux 0d5c69c00722 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ef7f06f |
| shellcheck | v0.4.4 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13275/testReport/ |
| modules | C: hadoop-tools/hadoop-sls U: hadoop-tools/hadoop-sls |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13275/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Fail to launch SLSRunner due to NPE
> ---
>
> Key: YARN-5706
> URL: https://issues.apache.org/jira/browse/YARN-5706
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha2
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: YARN-5706.01.patch
>
>
> {code}
> java.lang.NullPointerException
>   at org.apache.hadoop.yarn.sls.web.SLSWebApp.<init>(SLSWebApp.java:88)
>   at 
> org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.initMetrics(SLSCapacityScheduler.java:459)
>   at 
> org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.setConf(SLSCapacityScheduler.java:153)
>   at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
> {code}
> CLASSPATH for html resource is not configured properly.
> {code}
> DEBUG: Injecting share/hadoop/tools/sls/html into CLASSPATH
> DEBUG: Rejected CLASSPATH: share/hadoop/tools/sls/html (does not exist)
> {code}
> This issue can be reproduced when doing according to the documentation 
> instruction.
> http://hadoop.apache.org/docs/current/hadoop-sls/SchedulerLoadSimulator.html
> {code}
> $ cd $HADOOP_ROOT/share/hadoop/tools/sls
> $ bin/slsrun.sh
>   --input-rumen |--input-sls=
>   --output-dir= [--nodes=]
> [--track-jobs=] [--print-simulation]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5706) Fail to launch SLSRunner due to NPE

2016-10-04 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated YARN-5706:
-
Description: 
{code}
java.lang.NullPointerException
at org.apache.hadoop.yarn.sls.web.SLSWebApp.<init>(SLSWebApp.java:88)
at 
org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.initMetrics(SLSCapacityScheduler.java:459)
at 
org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.setConf(SLSCapacityScheduler.java:153)
at 
org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
{code}

CLASSPATH for html resource is not configured properly.
{code}
DEBUG: Injecting share/hadoop/tools/sls/html into CLASSPATH
DEBUG: Rejected CLASSPATH: share/hadoop/tools/sls/html (does not exist)
{code}

This issue can be reproduced when doing according to the documentation 
instruction.
http://hadoop.apache.org/docs/current/hadoop-sls/SchedulerLoadSimulator.html

{code}
$ cd $HADOOP_ROOT/share/hadoop/tools/sls
$ bin/slsrun.sh
  --input-rumen |--input-sls=
  --output-dir= [--nodes=]
[--track-jobs=] [--print-simulation]
{code}

  was:
{code}
java.lang.NullPointerException
at org.apache.hadoop.yarn.sls.web.SLSWebApp.<init>(SLSWebApp.java:88)
at 
org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.initMetrics(SLSCapacityScheduler.java:459)
at 
org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.setConf(SLSCapacityScheduler.java:153)
at 
org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
{code}

CLASSPATH for html resource is not configured properly.
{code}
DEBUG: Injecting share/hadoop/tools/sls/html into CLASSPATH
DEBUG: Rejected CLASSPATH: share/hadoop/tools/sls/html (does not exist)
{code}

This issue can be reproduced when doing according the documentation instruction.
http://hadoop.apache.org/docs/current/hadoop-sls/SchedulerLoadSimulator.html

{code}
$ cd $HADOOP_ROOT/share/hadoop/tools/sls
$ bin/slsrun.sh
  --input-rumen |--input-sls=
  --output-dir= [--nodes=]
[--track-jobs=] [--print-simulation]
{code}


> Fail to launch SLSRunner due to NPE
> ---
>
> Key: YARN-5706
> URL: https://issues.apache.org/jira/browse/YARN-5706
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha2
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: YARN-5706.01.patch
>
>
> {code}
> java.lang.NullPointerException
>   at org.apache.hadoop.yarn.sls.web.SLSWebApp.<init>(SLSWebApp.java:88)
>   at 
> org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.initMetrics(SLSCapacityScheduler.java:459)
>   at 
> org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.setConf(SLSCapacityScheduler.java:153)
>   at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
> {code}
> CLASSPATH for html resource is not configured properly.
> {code}
> DEBUG: Injecting share/hadoop/tools/sls/html into CLASSPATH
> DEBUG: Rejected CLASSPATH: share/hadoop/tools/sls/html (does not exist)
> {code}
> This issue can be reproduced when doing according to the documentation 
> instruction.
> http://hadoop.apache.org/docs/current/hadoop-sls/SchedulerLoadSimulator.html
> {code}
> $ cd $HADOOP_ROOT/share/hadoop/tools/sls
> $ bin/slsrun.sh
>   --input-rumen |--input-sls=
>   --output-dir= [--nodes=]
> [--track-jobs=] [--print-simulation]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5706) Fail to launch SLSRunner due to NPE

2016-10-04 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated YARN-5706:
-
Description: 
{code}
java.lang.NullPointerException
at org.apache.hadoop.yarn.sls.web.SLSWebApp.<init>(SLSWebApp.java:88)
at 
org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.initMetrics(SLSCapacityScheduler.java:459)
at 
org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.setConf(SLSCapacityScheduler.java:153)
at 
org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
{code}

CLASSPATH for html resource is not configured properly.
{code}
DEBUG: Injecting share/hadoop/tools/sls/html into CLASSPATH
DEBUG: Rejected CLASSPATH: share/hadoop/tools/sls/html (does not exist)
{code}

This issue can be reproduced when doing according the documentation instruction.
http://hadoop.apache.org/docs/current/hadoop-sls/SchedulerLoadSimulator.html

{code}
$ cd $HADOOP_ROOT/share/hadoop/tools/sls
$ bin/slsrun.sh
  --input-rumen |--input-sls=
  --output-dir= [--nodes=]
[--track-jobs=] [--print-simulation]
{code}

  was:
{code}
java.lang.NullPointerException
at org.apache.hadoop.yarn.sls.web.SLSWebApp.<init>(SLSWebApp.java:88)
at 
org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.initMetrics(SLSCapacityScheduler.java:459)
at 
org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.setConf(SLSCapacityScheduler.java:153)
at 
org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
{code}

CLASSPATH for html resource is not configured properly.
{code}
DEBUG: Injecting share/hadoop/tools/sls/html into CLASSPATH
DEBUG: Rejected CLASSPATH: share/hadoop/tools/sls/html (does not exist)
{code}


> Fail to launch SLSRunner due to NPE
> ---
>
> Key: YARN-5706
> URL: https://issues.apache.org/jira/browse/YARN-5706
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha2
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: YARN-5706.01.patch
>
>
> {code}
> java.lang.NullPointerException
>   at org.apache.hadoop.yarn.sls.web.SLSWebApp.<init>(SLSWebApp.java:88)
>   at 
> org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.initMetrics(SLSCapacityScheduler.java:459)
>   at 
> org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.setConf(SLSCapacityScheduler.java:153)
>   at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
> {code}
> CLASSPATH for html resource is not configured properly.
> {code}
> DEBUG: Injecting share/hadoop/tools/sls/html into CLASSPATH
> DEBUG: Rejected CLASSPATH: share/hadoop/tools/sls/html (does not exist)
> {code}
> This issue can be reproduced when doing according the documentation 
> instruction.
> http://hadoop.apache.org/docs/current/hadoop-sls/SchedulerLoadSimulator.html
> {code}
> $ cd $HADOOP_ROOT/share/hadoop/tools/sls
> $ bin/slsrun.sh
>   --input-rumen |--input-sls=
>   --output-dir= [--nodes=]
> [--track-jobs=] [--print-simulation]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5706) Fail to launch SLSRunner due to NPE

2016-10-04 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated YARN-5706:
-
Attachment: YARN-5706.01.patch

> Fail to launch SLSRunner due to NPE
> ---
>
> Key: YARN-5706
> URL: https://issues.apache.org/jira/browse/YARN-5706
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha2
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: YARN-5706.01.patch
>
>
> {code}
> java.lang.NullPointerException
>   at org.apache.hadoop.yarn.sls.web.SLSWebApp.<init>(SLSWebApp.java:88)
>   at 
> org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.initMetrics(SLSCapacityScheduler.java:459)
>   at 
> org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.setConf(SLSCapacityScheduler.java:153)
>   at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
> {code}
> CLASSPATH for html resource is not configured properly.
> {code}
> DEBUG: Injecting share/hadoop/tools/sls/html into CLASSPATH
> DEBUG: Rejected CLASSPATH: share/hadoop/tools/sls/html (does not exist)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5706) Fail to launch SLSRunner due to NPE

2016-10-04 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated YARN-5706:
-
Description: 
{code}
java.lang.NullPointerException
at org.apache.hadoop.yarn.sls.web.SLSWebApp.<init>(SLSWebApp.java:88)
at 
org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.initMetrics(SLSCapacityScheduler.java:459)
at 
org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.setConf(SLSCapacityScheduler.java:153)
at 
org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
{code}

CLASSPATH for html resource is not configured properly.
{code}
DEBUG: Injecting share/hadoop/tools/sls/html into CLASSPATH
DEBUG: Rejected CLASSPATH: share/hadoop/tools/sls/html (does not exist)
{code}

  was:
CLASSPATH for html resource is not configured properly.
{code}
java.lang.NullPointerException
at org.apache.hadoop.yarn.sls.web.SLSWebApp.<init>(SLSWebApp.java:88)
at 
org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.initMetrics(SLSCapacityScheduler.java:459)
at 
org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.setConf(SLSCapacityScheduler.java:153)
at 
org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
{code}


> Fail to launch SLSRunner due to NPE
> ---
>
> Key: YARN-5706
> URL: https://issues.apache.org/jira/browse/YARN-5706
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha2
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>
> {code}
> java.lang.NullPointerException
>   at org.apache.hadoop.yarn.sls.web.SLSWebApp.<init>(SLSWebApp.java:88)
>   at 
> org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.initMetrics(SLSCapacityScheduler.java:459)
>   at 
> org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.setConf(SLSCapacityScheduler.java:153)
>   at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
> {code}
> The CLASSPATH for the html resources is not configured properly.
> {code}
> DEBUG: Injecting share/hadoop/tools/sls/html into CLASSPATH
> DEBUG: Rejected CLASSPATH: share/hadoop/tools/sls/html (does not exist)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5706) Fail to launch SLSRunner due to NPE

2016-10-04 Thread Kai Sasaki (JIRA)
Kai Sasaki created YARN-5706:


 Summary: Fail to launch SLSRunner due to NPE
 Key: YARN-5706
 URL: https://issues.apache.org/jira/browse/YARN-5706
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 3.0.0-alpha2
Reporter: Kai Sasaki
Assignee: Kai Sasaki


The CLASSPATH for the html resources is not configured properly.
{code}
java.lang.NullPointerException
at org.apache.hadoop.yarn.sls.web.SLSWebApp.<init>(SLSWebApp.java:88)
at 
org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.initMetrics(SLSCapacityScheduler.java:459)
at 
org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.setConf(SLSCapacityScheduler.java:153)
at 
org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5705) [YARN-3368] Add support for Timeline V2 to new web UI

2016-10-04 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-5705:
---
Attachment: YARN-5705.001.patch

> [YARN-3368] Add support for Timeline V2 to new web UI
> -
>
> Key: YARN-5705
> URL: https://issues.apache.org/jira/browse/YARN-5705
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Assignee: Akhil PB
> Attachments: YARN-5705.001.patch
>
>
> Integrate timeline v2 into YARN-3368. This is a clone of YARN-4097.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5705) [YARN-3368] Add support for Timeline V2 to new web UI

2016-10-04 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5705:
--
Summary: [YARN-3368] Add support for Timeline V2 to new web UI  (was: Add 
support for Timeline V2 to new web UI)

> [YARN-3368] Add support for Timeline V2 to new web UI
> -
>
> Key: YARN-5705
> URL: https://issues.apache.org/jira/browse/YARN-5705
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Assignee: Akhil PB
>
> Integrate timeline v2 into YARN-3368. This is a clone of YARN-4097.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5705) Add support for Timeline V2 to new web UI

2016-10-04 Thread Sunil G (JIRA)
Sunil G created YARN-5705:
-

 Summary: Add support for Timeline V2 to new web UI
 Key: YARN-5705
 URL: https://issues.apache.org/jira/browse/YARN-5705
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Sunil G
Assignee: Akhil PB


Integrate timeline v2 into YARN-3368. This is a clone of YARN-4097.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5682) [YARN-3368] Fix maven build to keep all generated or downloaded files in target folder

2016-10-04 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15544736#comment-15544736
 ] 

Sunil G commented on YARN-5682:
---

No tests are needed for these pom file changes, so the test failures / lack of 
new tests can be accepted.


> [YARN-3368] Fix maven build to keep all generated or downloaded files in 
> target folder
> --
>
> Key: YARN-5682
> URL: https://issues.apache.org/jira/browse/YARN-5682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5682-YARN-3368.001.patch, 
> YARN-5682-YARN-3368.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5682) [YARN-3368] Fix maven build to keep all generated or downloaded files in target folder

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15544546#comment-15544546
 ] 

Hadoop QA commented on YARN-5682:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 4m 45s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 3m 54s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
2s {color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 16s 
{color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 8s 
{color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
41s {color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s 
{color} | {color:green} YARN-3368 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 28s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 20s {color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 8s 
{color} | {color:green} hadoop-yarn-ui in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 42s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9baccb9 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831466/YARN-5682-YARN-3368.002.patch
 |
| JIRA Issue | YARN-5682 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 024b0f8b74ec 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3368 / 9ef8291 |
| Default Java | 1.8.0_101 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13274/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/13274/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13274/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13274/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Fix maven 

[jira] [Commented] (YARN-5682) [YARN-3368] Fix maven build to keep all generated or downloaded files in target folder

2016-10-04 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15544527#comment-15544527
 ] 

Sunil G commented on YARN-5682:
---

Thanks [~leftnoteasy] for the latest patch. The build seems fine for me.
{noformat}
[INFO] Apache Hadoop YARN UI .. SUCCESS [ 34.506 s]
{noformat}

+1

I will commit the patch later today if there are no objections.

> [YARN-3368] Fix maven build to keep all generated or downloaded files in 
> target folder
> --
>
> Key: YARN-5682
> URL: https://issues.apache.org/jira/browse/YARN-5682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5682-YARN-3368.001.patch, 
> YARN-5682-YARN-3368.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5704) Provide config knobs to control enabling/disabling new/work in progress features in container-executor

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15544499#comment-15544499
 ] 

Hadoop QA commented on YARN-5704:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 9s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 20s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831468/YARN-5704.001.patch |
| JIRA Issue | YARN-5704 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux fe50de5dffa5 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f61e3d1 |
| Default Java | 1.8.0_101 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13273/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13273/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Provide config knobs to control enabling/disabling new/work in progress 
> features in container-executor
> --
>
> Key: YARN-5704
> URL: https://issues.apache.org/jira/browse/YARN-5704
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: yarn
>Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Attachments: YARN-5704.001.patch
>
>
> Provide a mechanism to enable/disable Docker and TC (Traffic Control) 
> functionality at the container-executor level.
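
As a rough sketch of how such knobs might look on a node (the key names and the container-executor.cfg location below are illustrative assumptions, not taken from the attached patch):

{code}
# Illustrative container-executor.cfg entries (names are assumptions):
#   feature.docker.enabled=1
#   feature.tc.enabled=1
# check whether any feature knobs are set on this node:
$ grep -E '^feature\.' "$HADOOP_CONF_DIR/container-executor.cfg" \
    || echo "no feature knobs set; work-in-progress features stay disabled"
{code}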



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5599) Publish AM launch command to ATS

2016-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15544458#comment-15544458
 ] 

Hadoop QA commented on YARN-5599:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 52s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
35s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 9s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 21s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
45s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 6s 
{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
58s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
13s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 32s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 48s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 12s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 40s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 2 
new + 231 unchanged - 2 fixed = 233 total (was 233) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK 
v1.8.0_101. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 6s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_101. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed with JDK 
v1.8.0_101. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 38m 36s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed 
with JDK v1.8.0_101. {color} |
|