[jira] [Commented] (YARN-5375) invoke MockRM#drainEvents implicitly in MockRM methods to reduce test failures

2016-11-06 Thread sandflee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15643429#comment-15643429
 ] 

sandflee commented on YARN-5375:


Updated the patch to fix TestFairScheduler, adding drainEvents after a node is 
registered. I did not use rm.dispatcher.handle(), since most TestFairScheduler 
tests do not use that approach.
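
For context, a simplified sketch of the idea behind this change is below. It is 
illustrative only, not the actual patch: it assumes a MockRM-style wait helper 
where drainEvents() flushes both the RM dispatcher and the scheduler event queue 
before each state check, so tests do not assert on a state while events are 
still queued.

{code}
// Hypothetical sketch only -- not the committed patch. Assumes this sits in a
// MockRM-like class whose drainEvents() drains both the RM dispatcher and the
// scheduler event queue.
public void waitForState(ApplicationId appId, RMAppState finalState)
    throws InterruptedException {
  RMApp app = getRMContext().getRMApps().get(appId);
  int waitedMsec = 0;
  while (!finalState.equals(app.getState()) && waitedMsec < 80 * 1000) {
    drainEvents();       // implicitly drain pending events before re-checking
    Thread.sleep(100);
    waitedMsec += 100;
  }
  Assert.assertEquals("App state is not correct", finalState, app.getState());
}
{code}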

> invoke MockRM#drainEvents implicitly in MockRM methods to reduce test failures
> --
>
> Key: YARN-5375
> URL: https://issues.apache.org/jira/browse/YARN-5375
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: sandflee
>Assignee: sandflee
>  Labels: oct16-medium
> Attachments: YARN-5375.01.patch, YARN-5375.03.patch, 
> YARN-5375.04.patch, YARN-5375.05.patch, YARN-5375.06.patch, 
> YARN-5375.07-drain-statestore.patch, YARN-5375.07-sync-statestore.patch, 
> YARN-5375.08.patch, YARN-5375.09.patch
>
>
> We have seen many test failures where an RMApp/RMAppAttempt reaches some state but 
> some events are not yet processed in the RM event queue or the scheduler event 
> queue, causing the test to fail. It seems we could implicitly invoke drainEvents 
> (which should also drain scheduler events) in some MockRM methods like waitForState.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5375) invoke MockRM#drainEvents implicitly in MockRM methods to reduce test failures

2016-11-06 Thread sandflee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sandflee updated YARN-5375:
---
Attachment: YARN-5375.09.patch

> invoke MockRM#drainEvents implicitly in MockRM methods to reduce test failures
> --
>
> Key: YARN-5375
> URL: https://issues.apache.org/jira/browse/YARN-5375
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: sandflee
>Assignee: sandflee
>  Labels: oct16-medium
> Attachments: YARN-5375.01.patch, YARN-5375.03.patch, 
> YARN-5375.04.patch, YARN-5375.05.patch, YARN-5375.06.patch, 
> YARN-5375.07-drain-statestore.patch, YARN-5375.07-sync-statestore.patch, 
> YARN-5375.08.patch, YARN-5375.09.patch
>
>
> We have seen many test failures where an RMApp/RMAppAttempt reaches some state but 
> some events are not yet processed in the RM event queue or the scheduler event 
> queue, causing the test to fail. It seems we could implicitly invoke drainEvents 
> (which should also drain scheduler events) in some MockRM methods like waitForState.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5843) Documentation wrong for entityType/events rest end point

2016-11-06 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15643396#comment-15643396
 ] 

Varun Saxena commented on YARN-5843:


Thanks [~bibinchundatt] for raising the issue.

This should be fixed. Additionally, we can mention that *entityId* is a 
mandatory query parameter, because without it no events will be returned.
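
To make the corrected parameter names concrete, an illustrative request is 
sketched below. The host, port, entity type, and IDs are placeholders, not 
values from this issue; both *entityId* and *eventType* accept comma-separated 
lists.

{code}
// Illustrative only: querying the Timeline v1 events endpoint with the
// corrected parameter names. Host/port, entity type, and IDs are placeholders.
String base = "http://timelinehost:8188/ws/v1/timeline";
String entityType = "SOME_ENTITY_TYPE";
String url = base + "/" + entityType + "/events"
    // entityId is effectively mandatory: without it no events are returned
    + "?entityId=entity_1,entity_2"
    // eventType is optional and also accepts a comma-separated list
    + "&eventType=EVENT_A,EVENT_B";
{code}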

> Documentation wrong for entityType/events rest end point
> 
>
> Key: YARN-5843
> URL: https://issues.apache.org/jira/browse/YARN-5843
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Minor
>
> http(s)://<timeline server http(s) address:port>/ws/v1/timeline/{entityType}/events
> {noformat}
> entityIds - The entity IDs to retrieve events for.
> limit - A limit on the number of events to return for each entity. If null, 
> defaults to 100 events per entity.
> windowStart - If not null, retrieves only events later than the given time 
> (exclusive)
> windowEnd - If not null, retrieves only events earlier than the given time 
> (inclusive)
> eventTypes - Restricts the events returned to the given types. If null, 
> events of all types will be returned.
> {noformat}
> The parameters should be:
> *entityId*
> *eventType*
> Mention that comma-separated *entityId* and *entityType* values can be given for 
> multiple arguments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5820) yarn node CLI help should be clearer

2016-11-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15643397#comment-15643397
 ] 

Hadoop QA commented on YARN-5820:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client: The patch generated 2 new + 
138 unchanged - 0 fixed = 140 total (was 138) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 56s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.yarn.client.api.impl.TestOpportunisticContainerAllocation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | YARN-5820 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837701/YARN-5820.04.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2c385e2aa970 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ca33bdd |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13803/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13803/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13803/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13803/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> yarn node CLI help should be clearer
> 

[jira] [Assigned] (YARN-5843) Documentation wrong for entityType/events rest end point

2016-11-06 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt reassigned YARN-5843:
--

Assignee: Bibin A Chundatt

> Documentation wrong for entityType/events rest end point
> 
>
> Key: YARN-5843
> URL: https://issues.apache.org/jira/browse/YARN-5843
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Minor
>
> http(s)://<timeline server http(s) address:port>/ws/v1/timeline/{entityType}/events
> {noformat}
> entityIds - The entity IDs to retrieve events for.
> limit - A limit on the number of events to return for each entity. If null, 
> defaults to 100 events per entity.
> windowStart - If not null, retrieves only events later than the given time 
> (exclusive)
> windowEnd - If not null, retrieves only events earlier than the given time 
> (inclusive)
> eventTypes - Restricts the events returned to the given types. If null, 
> events of all types will be returned.
> {noformat}
> The parameters should be:
> *entityId*
> *eventType*
> Mention that comma-separated *entityId* and *entityType* values can be given for 
> multiple arguments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5843) Documentation wrong for entityType/events rest end point

2016-11-06 Thread Bibin A Chundatt (JIRA)
Bibin A Chundatt created YARN-5843:
--

 Summary: Documentation wrong for entityType/events rest end point
 Key: YARN-5843
 URL: https://issues.apache.org/jira/browse/YARN-5843
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Bibin A Chundatt
Priority: Minor


http(s)://<timeline server http(s) address:port>/ws/v1/timeline/{entityType}/events

{noformat}
entityIds - The entity IDs to retrieve events for.
limit - A limit on the number of events to return for each entity. If null, 
defaults to 100 events per entity.
windowStart - If not null, retrieves only events later than the given time 
(exclusive)
windowEnd - If not null, retrieves only events earlier than the given time 
(inclusive)
eventTypes - Restricts the events returned to the given types. If null, events 
of all types will be returned.
{noformat}

The parameters should be:
*entityId*
*eventType*
Mention that comma-separated *entityId* and *entityType* values can be given for 
multiple arguments.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5820) yarn node CLI help should be clearer

2016-11-06 Thread Ajith S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15643334#comment-15643334
 ] 

Ajith S commented on YARN-5820:
---

Thanks [~sunilg] and [~Naganarasimha] for your comments
I have updated the patch based on comments. Please review

> yarn node CLI help should be clearer
> 
>
> Key: YARN-5820
> URL: https://issues.apache.org/jira/browse/YARN-5820
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client
>Affects Versions: 2.6.0
>Reporter: Grant Sohn
>Assignee: Ajith S
>Priority: Trivial
> Attachments: YARN-5820.01.patch, YARN-5820.02.patch, 
> YARN-5820.03.patch, YARN-5820.04.patch
>
>
> Current message is:
> {noformat}
> usage: node
>  -all               Works with -list to list all nodes.
>  -list              List all running nodes. Supports optional use of
>                     -states to filter nodes based on node state, all -all
>                     to list all nodes.
>  -states <States>   Works with -list to filter nodes based on input
>                     comma-separated list of node states.
>  -status <NodeId>   Prints the status report of the node.
> {noformat}
> It should be either this:
> {noformat}
> usage: yarn node [-list [-states <States>|-all] | -status <NodeId>]
>  -all               Works with -list to list all nodes.
>  -list              List all running nodes. Supports optional use of
>                     -states to filter nodes based on node state, all -all
>                     to list all nodes.
>  -states <States>   Works with -list to filter nodes based on input
>                     comma-separated list of node states.
>  -status <NodeId>   Prints the status report of the node.
> {noformat}
> or that.
> {noformat}
> usage: yarn node -list [-states <States>|-all]
>        yarn node -status <NodeId>
>  -all               Works with -list to list all nodes.
>  -list              List all running nodes. Supports optional use of
>                     -states to filter nodes based on node state, all -all
>                     to list all nodes.
>  -states <States>   Works with -list to filter nodes based on input
>                     comma-separated list of node states.
>  -status <NodeId>   Prints the status report of the node.
> {noformat}
> The latter is the least ambiguous.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5820) yarn node CLI help should be clearer

2016-11-06 Thread Ajith S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajith S updated YARN-5820:
--
Attachment: YARN-5820.04.patch

> yarn node CLI help should be clearer
> 
>
> Key: YARN-5820
> URL: https://issues.apache.org/jira/browse/YARN-5820
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client
>Affects Versions: 2.6.0
>Reporter: Grant Sohn
>Assignee: Ajith S
>Priority: Trivial
> Attachments: YARN-5820.01.patch, YARN-5820.02.patch, 
> YARN-5820.03.patch, YARN-5820.04.patch
>
>
> Current message is:
> {noformat}
> usage: node
>  -all               Works with -list to list all nodes.
>  -list              List all running nodes. Supports optional use of
>                     -states to filter nodes based on node state, all -all
>                     to list all nodes.
>  -states <States>   Works with -list to filter nodes based on input
>                     comma-separated list of node states.
>  -status <NodeId>   Prints the status report of the node.
> {noformat}
> It should be either this:
> {noformat}
> usage: yarn node [-list [-states <States>|-all] | -status <NodeId>]
>  -all               Works with -list to list all nodes.
>  -list              List all running nodes. Supports optional use of
>                     -states to filter nodes based on node state, all -all
>                     to list all nodes.
>  -states <States>   Works with -list to filter nodes based on input
>                     comma-separated list of node states.
>  -status <NodeId>   Prints the status report of the node.
> {noformat}
> or that.
> {noformat}
> usage: yarn node -list [-states <States>|-all]
>        yarn node -status <NodeId>
>  -all               Works with -list to list all nodes.
>  -list              List all running nodes. Supports optional use of
>                     -states to filter nodes based on node state, all -all
>                     to list all nodes.
>  -states <States>   Works with -list to filter nodes based on input
>                     comma-separated list of node states.
>  -status <NodeId>   Prints the status report of the node.
> {noformat}
> The latter is the least ambiguous.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5820) yarn node CLI help should be clearer

2016-11-06 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15643314#comment-15643314
 ] 

Naganarasimha G R commented on YARN-5820:
-

Thanks [~ajithshetty] for the patch.
IMO, instead of {{"usage: node"}} we can have {{"usage: yarn node"}} like other 
commands. I agree with Sunil's first comment, but for the second point I feel 
"-help" is not required, as it is not captured for other commands and this text 
is itself the content of the help.

> yarn node CLI help should be clearer
> 
>
> Key: YARN-5820
> URL: https://issues.apache.org/jira/browse/YARN-5820
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client
>Affects Versions: 2.6.0
>Reporter: Grant Sohn
>Assignee: Ajith S
>Priority: Trivial
> Attachments: YARN-5820.01.patch, YARN-5820.02.patch, 
> YARN-5820.03.patch
>
>
> Current message is:
> {noformat}
> usage: node
>  -all               Works with -list to list all nodes.
>  -list              List all running nodes. Supports optional use of
>                     -states to filter nodes based on node state, all -all
>                     to list all nodes.
>  -states <States>   Works with -list to filter nodes based on input
>                     comma-separated list of node states.
>  -status <NodeId>   Prints the status report of the node.
> {noformat}
> It should be either this:
> {noformat}
> usage: yarn node [-list [-states <States>|-all] | -status <NodeId>]
>  -all               Works with -list to list all nodes.
>  -list              List all running nodes. Supports optional use of
>                     -states to filter nodes based on node state, all -all
>                     to list all nodes.
>  -states <States>   Works with -list to filter nodes based on input
>                     comma-separated list of node states.
>  -status <NodeId>   Prints the status report of the node.
> {noformat}
> or that.
> {noformat}
> usage: yarn node -list [-states <States>|-all]
>        yarn node -status <NodeId>
>  -all               Works with -list to list all nodes.
>  -list              List all running nodes. Supports optional use of
>                     -states to filter nodes based on node state, all -all
>                     to list all nodes.
>  -states <States>   Works with -list to filter nodes based on input
>                     comma-separated list of node states.
>  -status <NodeId>   Prints the status report of the node.
> {noformat}
> The latter is the least ambiguous.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5545) App submit failure on queue with label when default queue partition capacity is zero

2016-11-06 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15643280#comment-15643280
 ] 

Sunil G commented on YARN-5545:
---

Hi [~naganarasimha...@apache.org] and [~bibinchundatt]

I also have a similar opinion to [~naganarasimha...@apache.org]. We are 
introducing a lot of *instanceof* checks, which is not so clean.

A couple of options I see:
- We could have this check inside {{CS#addApplication}}, and raise an 
APP_REJECTED event back if the limit is hit.
- As suggested by Naga, we could also add an interface in {{YarnScheduler}}, 
create a dummy implementation in {{AbstractYarnScheduler}}, and then have the 
checks in CS as mentioned in the patch (a rough sketch follows below).

I feel option 1 is slightly simpler if we can achieve the same result. Thoughts?
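
For illustration only, here is a rough sketch of what option 2 could look like. 
The names are hypothetical stand-ins, not the real Hadoop interfaces: a 
submission-check hook with a permissive default that a capacity-scheduler-style 
implementation would override.

{code}
// Hypothetical sketch of option 2 -- illustrative names, not the actual API.
interface AppSubmissionCheck {
  /** Throw if the target queue/partition cannot accept the application. */
  void verifyCanSubmit(String queueName, String nodeLabelExpression)
      throws Exception;
}

// Permissive default, analogous to a dummy implementation in AbstractYarnScheduler;
// a CapacityScheduler-style implementation would override this with real checks.
class DefaultAppSubmissionCheck implements AppSubmissionCheck {
  @Override
  public void verifyCanSubmit(String queueName, String nodeLabelExpression) {
    // accept everything by default
  }
}
{code}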

> App submit failure on queue with label when default queue partition capacity 
> is zero
> 
>
> Key: YARN-5545
> URL: https://issues.apache.org/jira/browse/YARN-5545
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Attachments: YARN-5545.0001.patch, YARN-5545.0002.patch, 
> YARN-5545.0003.patch, YARN-5545.0005.patch, YARN-5545.004.patch, 
> capacity-scheduler.xml
>
>
> Configure capacity scheduler 
> yarn.scheduler.capacity.root.default.capacity=0
> yarn.scheduler.capacity.root.queue1.accessible-node-labels.labelx.capacity=50
> yarn.scheduler.capacity.root.default.accessible-node-labels.labelx.capacity=50
> Submit application as below
> ./yarn jar 
> ../share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha2-SNAPSHOT-tests.jar
>  sleep -Dmapreduce.job.node-label-expression=labelx 
> -Dmapreduce.job.queuename=default -m 1 -r 1 -mt 1000 -rt 1
> {noformat}
> 2016-08-21 18:21:31,375 INFO mapreduce.JobSubmitter: Cleaning up the staging 
> area /tmp/hadoop-yarn/staging/root/.staging/job_1471670113386_0001
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed 
> to submit application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:316)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:255)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1344)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1790)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1341)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1362)
>   at org.apache.hadoop.mapreduce.SleepJob.run(SleepJob.java:273)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.mapreduce.SleepJob.main(SleepJob.java:194)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
>   at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
>   at 
> org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:136)
>   at 
> org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:144)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit 
> application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at 
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:286)
>   at 
> 

[jira] [Commented] (YARN-5820) yarn node CLI help should be clearer

2016-11-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15643255#comment-15643255
 ] 

Hadoop QA commented on YARN-5820:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client: The patch generated 1 new + 
138 unchanged - 0 fixed = 139 total (was 138) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m  2s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.yarn.client.api.impl.TestOpportunisticContainerAllocation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | YARN-5820 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837698/YARN-5820.03.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f17deeea7043 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ca33bdd |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13801/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13801/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13801/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13801/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> yarn node CLI help should be clearer
> 

[jira] [Commented] (YARN-5545) App submit failure on queue with label when default queue partition capacity is zero

2016-11-06 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15643248#comment-15643248
 ] 

Bibin A Chundatt commented on YARN-5545:


[~Naganarasimha]/[~sunilg]

We can add an interface in {{YarnScheduler}} to check whether an app can be 
submitted, so that each scheduler can implement it as per its needs.
Probably the access check in {{RMAppManager}} can be moved to the same place.

> App submit failure on queue with label when default queue partition capacity 
> is zero
> 
>
> Key: YARN-5545
> URL: https://issues.apache.org/jira/browse/YARN-5545
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Attachments: YARN-5545.0001.patch, YARN-5545.0002.patch, 
> YARN-5545.0003.patch, YARN-5545.0005.patch, YARN-5545.004.patch, 
> capacity-scheduler.xml
>
>
> Configure capacity scheduler 
> yarn.scheduler.capacity.root.default.capacity=0
> yarn.scheduler.capacity.root.queue1.accessible-node-labels.labelx.capacity=50
> yarn.scheduler.capacity.root.default.accessible-node-labels.labelx.capacity=50
> Submit application as below
> ./yarn jar 
> ../share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha2-SNAPSHOT-tests.jar
>  sleep -Dmapreduce.job.node-label-expression=labelx 
> -Dmapreduce.job.queuename=default -m 1 -r 1 -mt 1000 -rt 1
> {noformat}
> 2016-08-21 18:21:31,375 INFO mapreduce.JobSubmitter: Cleaning up the staging 
> area /tmp/hadoop-yarn/staging/root/.staging/job_1471670113386_0001
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed 
> to submit application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:316)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:255)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1344)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1790)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1341)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1362)
>   at org.apache.hadoop.mapreduce.SleepJob.run(SleepJob.java:273)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.mapreduce.SleepJob.main(SleepJob.java:194)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
>   at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
>   at 
> org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:136)
>   at 
> org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:144)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit 
> application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at 
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:286)
>   at 
> org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:296)
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:301)
>   ... 25 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: 

[jira] [Commented] (YARN-5705) [YARN-3368] Add support for Timeline V2 to new web UI

2016-11-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15643240#comment-15643240
 ] 

Hadoop QA commented on YARN-5705:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  1m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:183a5e9 |
| JIRA Issue | YARN-5705 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837700/YARN-5705-YARN-3368.004.patch
 |
| Optional Tests |  asflicense  |
| uname | Linux c7438244fc64 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3368 / 40ebdb1 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13802/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Add support for Timeline V2 to new web UI
> -
>
> Key: YARN-5705
> URL: https://issues.apache.org/jira/browse/YARN-5705
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Assignee: Akhil PB
>  Labels: oct16-hard
> Attachments: YARN-5705-YARN-3368.001.patch, 
> YARN-5705-YARN-3368.002.patch, YARN-5705-YARN-3368.003.patch, 
> YARN-5705-YARN-3368.004.patch, YARN-5705.001.patch, YARN-5705.002.patch, 
> YARN-5705.003.patch, YARN-5705.004.patch, YARN-5705.005.patch, 
> YARN-5705.006.patch, YARN-5705.007.patch, YARN-5705.008.patch, 
> YARN-5705.009.patch, YARN-5705.010.patch, YARN-5705.011.patch, 
> YARN-5705.012.patch, YARN-5705.013.patch
>
>
> Integrate timeline v2 to YARN-3368. This is a clone JIRA for YARN-4097



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5705) [YARN-3368] Add support for Timeline V2 to new web UI

2016-11-06 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-5705:
---
Attachment: YARN-5705-YARN-3368.004.patch

> [YARN-3368] Add support for Timeline V2 to new web UI
> -
>
> Key: YARN-5705
> URL: https://issues.apache.org/jira/browse/YARN-5705
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Assignee: Akhil PB
>  Labels: oct16-hard
> Attachments: YARN-5705-YARN-3368.001.patch, 
> YARN-5705-YARN-3368.002.patch, YARN-5705-YARN-3368.003.patch, 
> YARN-5705-YARN-3368.004.patch, YARN-5705.001.patch, YARN-5705.002.patch, 
> YARN-5705.003.patch, YARN-5705.004.patch, YARN-5705.005.patch, 
> YARN-5705.006.patch, YARN-5705.007.patch, YARN-5705.008.patch, 
> YARN-5705.009.patch, YARN-5705.010.patch, YARN-5705.011.patch, 
> YARN-5705.012.patch, YARN-5705.013.patch
>
>
> Integrate timeline v2 to YARN-3368. This is a clone JIRA for YARN-4097



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5820) yarn node CLI help should be clearer

2016-11-06 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15643221#comment-15643221
 ] 

Sunil G commented on YARN-5820:
---

[~ajithshetty]

Thanks for the patch. A few nits:
- In the code below,
{noformat}
pw.println(
    "usage: node [-list [-states <States>|-showDetails|-all] |-status <NodeId>]");
{noformat}
the sub-options of {{node -list}} could be displayed in alphabetical order with 
*-all* at the start (currently all options in the help are displayed 
alphabetically); a possible reordering is sketched below.
- I am not sure whether to also show *-help* there. Clearly we are showing -help 
below. There are no strong arguments for that, but more thoughts are welcome.
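
Purely as an illustration of the first nit (this is not the actual patch text, 
and the <States>/<NodeId> placeholders are assumed), the usage line might become:

{code}
// Hypothetical reordering only: -all first, remaining -list sub-options alphabetical.
pw.println(
    "usage: node [-list [-all|-showDetails|-states <States>] |-status <NodeId>]");
{code}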

> yarn node CLI help should be clearer
> 
>
> Key: YARN-5820
> URL: https://issues.apache.org/jira/browse/YARN-5820
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client
>Affects Versions: 2.6.0
>Reporter: Grant Sohn
>Assignee: Ajith S
>Priority: Trivial
> Attachments: YARN-5820.01.patch, YARN-5820.02.patch, 
> YARN-5820.03.patch
>
>
> Current message is:
> {noformat}
> usage: node
>  -all               Works with -list to list all nodes.
>  -list              List all running nodes. Supports optional use of
>                     -states to filter nodes based on node state, all -all
>                     to list all nodes.
>  -states <States>   Works with -list to filter nodes based on input
>                     comma-separated list of node states.
>  -status <NodeId>   Prints the status report of the node.
> {noformat}
> It should be either this:
> {noformat}
> usage: yarn node [-list [-states <States>|-all] | -status <NodeId>]
>  -all               Works with -list to list all nodes.
>  -list              List all running nodes. Supports optional use of
>                     -states to filter nodes based on node state, all -all
>                     to list all nodes.
>  -states <States>   Works with -list to filter nodes based on input
>                     comma-separated list of node states.
>  -status <NodeId>   Prints the status report of the node.
> {noformat}
> or that.
> {noformat}
> usage: yarn node -list [-states <States>|-all]
>        yarn node -status <NodeId>
>  -all               Works with -list to list all nodes.
>  -list              List all running nodes. Supports optional use of
>                     -states to filter nodes based on node state, all -all
>                     to list all nodes.
>  -states <States>   Works with -list to filter nodes based on input
>                     comma-separated list of node states.
>  -status <NodeId>   Prints the status report of the node.
> {noformat}
> The latter is the least ambiguous.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5820) yarn node CLI help should be clearer

2016-11-06 Thread Ajith S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajith S updated YARN-5820:
--
Attachment: YARN-5820.03.patch

Fixed test case failures. Please review

> yarn node CLI help should be clearer
> 
>
> Key: YARN-5820
> URL: https://issues.apache.org/jira/browse/YARN-5820
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client
>Affects Versions: 2.6.0
>Reporter: Grant Sohn
>Assignee: Ajith S
>Priority: Trivial
> Attachments: YARN-5820.01.patch, YARN-5820.02.patch, 
> YARN-5820.03.patch
>
>
> Current message is:
> {noformat}
> usage: node
>  -all               Works with -list to list all nodes.
>  -list              List all running nodes. Supports optional use of
>                     -states to filter nodes based on node state, all -all
>                     to list all nodes.
>  -states <States>   Works with -list to filter nodes based on input
>                     comma-separated list of node states.
>  -status <NodeId>   Prints the status report of the node.
> {noformat}
> It should be either this:
> {noformat}
> usage: yarn node [-list [-states <States>|-all] | -status <NodeId>]
>  -all               Works with -list to list all nodes.
>  -list              List all running nodes. Supports optional use of
>                     -states to filter nodes based on node state, all -all
>                     to list all nodes.
>  -states <States>   Works with -list to filter nodes based on input
>                     comma-separated list of node states.
>  -status <NodeId>   Prints the status report of the node.
> {noformat}
> or that.
> {noformat}
> usage: yarn node -list [-states <States>|-all]
>        yarn node -status <NodeId>
>  -all               Works with -list to list all nodes.
>  -list              List all running nodes. Supports optional use of
>                     -states to filter nodes based on node state, all -all
>                     to list all nodes.
>  -states <States>   Works with -list to filter nodes based on input
>                     comma-separated list of node states.
>  -status <NodeId>   Prints the status report of the node.
> {noformat}
> The latter is the least ambiguous.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5713) Update jackson from 1.9.13 to 2.x in hadoop-yarn

2016-11-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15643142#comment-15643142
 ] 

Hadoop QA commented on YARN-5713:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
39s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
13s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
50s{color} | {color:green} root: The patch generated 0 new + 131 unchanged - 1 
fixed = 131 total (was 132) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  3m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
7s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-yarn-server-timeline-pluginstorage in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry 
generated 0 new + 48 unchanged - 4 fixed = 48 total (was 52) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-project in the patch passed. {color} |

[jira] [Commented] (YARN-5368) memory leak at timeline server

2016-11-06 Thread Wataru Yukawa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15643017#comment-15643017
 ] 

Wataru Yukawa commented on YARN-5368:
-

> Did you get any workaround for this?
I restarted the timeline server.

> memory leak at timeline server
> --
>
> Key: YARN-5368
> URL: https://issues.apache.org/jira/browse/YARN-5368
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 2.7.1
> Environment: HDP2.4
> CentOS 6.7
> jdk1.8.0_72
>Reporter: Wataru Yukawa
>
> The memory usage of the timeline server machine increases gradually.
> https://gyazo.com/952dad96c77ae053bae2e4d8c8ab0572
> Please check the trend since April.
> According to my investigation, the timeline server used about 25GB.
> top command result
> {code}
> 90577 yarn  20   0 28.4g  25g  12m S  0.0 40.1   5162:53 
> /usr/java/jdk1.8.0_72/bin/java -Dproc_timelineserver -Xmx1024m 
> -Dhdp.version=2.4.0.0-169 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn 
> -Dyarn.log.dir=/var/log/hadoop-yarn/yarn ...
> {code}
> ps command result
> {code}
> $ ps ww 90577
>  90577 ?Sl   5162:53 /usr/java/jdk1.8.0_72/bin/java 
> -Dproc_timelineserver -Xmx1024m -Dhdp.version=2.4.0.0-169 
> -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn 
> -Dyarn.log.dir=/var/log/hadoop-yarn/yarn 
> -Dhadoop.log.file=yarn-yarn-timelineserver-myhost.log 
> -Dyarn.log.file=yarn-yarn-timelineserver-myhost.log -Dyarn.home.dir= 
> -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA 
> -Dyarn.root.logger=INFO,EWMA,RFA 
> -Djava.library.path=:/usr/hdp/2.4.0.0-169/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.0.0-169/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.4.0.0-169/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.0.0-169/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir
>  -Dyarn.policy.file=hadoop-policy.xml 
> -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir 
> -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn 
> -Dyarn.log.dir=/var/log/hadoop-yarn/yarn 
> -Dhadoop.log.file=yarn-yarn-timelineserver-myhost.log 
> -Dyarn.log.file=yarn-yarn-timelineserver-myhost.log 
> -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-timelineserver 
> -Dhadoop.home.dir=/usr/hdp/2.4.0.0-169/hadoop 
> -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA 
> -Djava.library.path=:/usr/hdp/2.4.0.0-169/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.0.0-169/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.4.0.0-169/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.0.0-169/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir
>  -classpath 
> /usr/hdp/2.4.0.0-169/hadoop/conf:/usr/hdp/2.4.0.0-169/hadoop/conf:/usr/hdp/2.4.0.0-169/hadoop/conf:/usr/hdp/2.4.0.0-169/hadoop/lib/*:/usr/hdp/2.4.0.0-169/hadoop/.//*:/usr/hdp/2.4.0.0-169/hadoop-hdfs/./:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/*:/usr/hdp/2.4.0.0-169/hadoop-hdfs/.//*:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/*:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//*:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/*:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//*::/usr/hdp/2.4.0.0-169/tez/*:/usr/hdp/2.4.0.0-169/tez/lib/*:/usr/hdp/2.4.0.0-169/tez/conf:/usr/hdp/2.4.0.0-169/tez/*:/usr/hdp/2.4.0.0-169/tez/lib/*:/usr/hdp/2.4.0.0-169/tez/conf:/usr/hdp/current/hadoop-yarn-timelineserver/.//*:/usr/hdp/current/hadoop-yarn-timelineserver/lib/*:/usr/hdp/2.4.0.0-169/hadoop/conf/timelineserver-config/log4j.properties
>  
> org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer
> {code}
>  
> Although I set -Xmx1024m, the actual memory usage is 25GB.
> After I restart the timeline server, the memory usage of the timeline server 
> machine decreases.
> https://gyazo.com/130600c17a7d41df8606727a859ae7e3
> Now timelineserver uses less than 1GB memory.
> top command result
> {code}
>  6163 yarn  20   0 3959m 783m  46m S  0.3  1.2   3:37.60 
> /usr/java/jdk1.8.0_72/bin/java -Dproc_timelineserver -Xmx1024m 
> -Dhdp.version=2.4.0.0-169 ...
> {code}
> I suspect a memory leak in the timeline server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5453) FairScheduler#update may skip update demand resource of child queue/app if current demand reached maxResource

2016-11-06 Thread sandflee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642979#comment-15642979
 ] 

sandflee commented on YARN-5453:


thanks [~kasha], patch updated.

> FairScheduler#update may skip update demand resource of child queue/app if 
> current demand reached maxResource
> -
>
> Key: YARN-5453
> URL: https://issues.apache.org/jira/browse/YARN-5453
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Reporter: sandflee
>Assignee: sandflee
>  Labels: oct16-easy
> Attachments: YARN-5453.01.patch, YARN-5453.02.patch, 
> YARN-5453.03.patch, YARN-5453.04.patch, YARN-5453.05.patch
>
>
> {code}
>   demand = Resources.createResource(0);
>   for (FSQueue childQueue : childQueues) {
> childQueue.updateDemand();
> Resource toAdd = childQueue.getDemand();
> demand = Resources.add(demand, toAdd);
> demand = Resources.componentwiseMin(demand, maxRes);
> if (Resources.equals(demand, maxRes)) {
>   break;
> }
>   }
> {code}
> If one single queue's demand resource exceeds maxRes, the other queues' demand 
> resources will not be updated.
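
A minimal sketch of the kind of fix the description implies (assumed, not 
necessarily the committed patch): update every child's demand and cap only the 
parent's aggregate at maxRes, instead of breaking out of the loop early.

{code}
// Assumed sketch, not the exact patch: call updateDemand() on every child so no
// child's demand is skipped, and cap only the parent's aggregated demand.
demand = Resources.createResource(0);
for (FSQueue childQueue : childQueues) {
  childQueue.updateDemand();               // no early break: every child is updated
  demand = Resources.add(demand, childQueue.getDemand());
}
demand = Resources.componentwiseMin(demand, maxRes);  // cap the aggregate afterwards
{code}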



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5453) FairScheduler#update may skip update demand resource of child queue/app if current demand reached maxResource

2016-11-06 Thread sandflee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sandflee updated YARN-5453:
---
Attachment: YARN-5453.05.patch

> FairScheduler#update may skip update demand resource of child queue/app if 
> current demand reached maxResource
> -
>
> Key: YARN-5453
> URL: https://issues.apache.org/jira/browse/YARN-5453
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Reporter: sandflee
>Assignee: sandflee
>  Labels: oct16-easy
> Attachments: YARN-5453.01.patch, YARN-5453.02.patch, 
> YARN-5453.03.patch, YARN-5453.04.patch, YARN-5453.05.patch
>
>
> {code}
>   demand = Resources.createResource(0);
>   for (FSQueue childQueue : childQueues) {
> childQueue.updateDemand();
> Resource toAdd = childQueue.getDemand();
> demand = Resources.add(demand, toAdd);
> demand = Resources.componentwiseMin(demand, maxRes);
> if (Resources.equals(demand, maxRes)) {
>   break;
> }
>   }
> {code}
> If one single queue's demand resource exceeds maxRes, the other queues' demand 
> resources will not be updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5713) Update jackson from 1.9.13 to 2.x in hadoop-yarn

2016-11-06 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated YARN-5713:

Attachment: YARN-5713.04.patch

Thanks [~ste...@apache.org] for reviewing. Attaching v4 patch:
* Set jackson-module-jaxb-annotations and jackson-jaxrs-json-provider versions 
in hadoop-project pom.
* Fixed checkstyle warning.

> Update jackson from 1.9.13 to 2.x in hadoop-yarn
> 
>
> Key: YARN-5713
> URL: https://issues.apache.org/jira/browse/YARN-5713
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: build, timelineserver
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>  Labels: oct16-medium
> Attachments: HADOOP-13677.01.patch, HADOOP-13677.02.patch, 
> YARN-5713.03.patch, YARN-5713.04.patch
>
>
> Sub-task of HADOOP-13332.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-5840) Yarn queues not being tracked correctly by Yarn Timeline

2016-11-06 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang resolved YARN-5840.
---
   Resolution: Duplicate
Fix Version/s: 3.0.0-alpha1
   2.8.0

>  Yarn queues not being tracked correctly by Yarn Timeline
> -
>
> Key: YARN-5840
> URL: https://issues.apache.org/jira/browse/YARN-5840
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: ramtin
>Assignee: Weiwei Yang
> Fix For: 2.8.0, 3.0.0-alpha1
>
>
> When YARN sub-queues are created and users/groups are mapped to them, the YARN 
> client seems to capture the correct queue for a job when it runs, but if you go 
> to the YARN Timeline Server to see these jobs, they all get tagged to the 
> "default" queue.
> This makes it hard to map cluster consumption to the different departments the 
> users belong to.






[jira] [Commented] (YARN-5840) Yarn queues not being tracked correctly by Yarn Timeline

2016-11-06 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642949#comment-15642949
 ] 

Weiwei Yang commented on YARN-5840:
---

This should have been fixed by YARN-4044; marking this as a duplicate.

>  Yarn queues not being tracked correctly by Yarn Timeline
> -
>
> Key: YARN-5840
> URL: https://issues.apache.org/jira/browse/YARN-5840
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: ramtin
>Assignee: Weiwei Yang
> Fix For: 2.8.0, 3.0.0-alpha1
>
>
> When Yarn sub-queues are created and users/groups are mapped to these 
> sub-queues, the Yarn client captures the correct queue for a running job, but 
> the Yarn Timeline Server shows all of these jobs tagged to the "default" queue.
> This makes it hard to map cluster consumption to the different departments 
> that the users belong to.






[jira] [Commented] (YARN-5368) memory leak at timeline server

2016-11-06 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642909#comment-15642909
 ] 

Brahma Reddy Battula commented on YARN-5368:


Forgot to update here. Upon investigation, there was an internal change which 
caused the leveldb leak. :(

> memory leak at timeline server
> --
>
> Key: YARN-5368
> URL: https://issues.apache.org/jira/browse/YARN-5368
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 2.7.1
> Environment: HDP2.4
> CentOS 6.7
> jdk1.8.0_72
>Reporter: Wataru Yukawa
>
> Memory usage of the timeline server machine increases gradually.
> https://gyazo.com/952dad96c77ae053bae2e4d8c8ab0572
> Please check the trend since April.
> According to my investigation, the timeline server used about 25GB.
> top command result
> {code}
> 90577 yarn  20   0 28.4g  25g  12m S  0.0 40.1   5162:53 
> /usr/java/jdk1.8.0_72/bin/java -Dproc_timelineserver -Xmx1024m 
> -Dhdp.version=2.4.0.0-169 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn 
> -Dyarn.log.dir=/var/log/hadoop-yarn/yarn ...
> {code}
> ps command result
> {code}
> $ ps ww 90577
>  90577 ?Sl   5162:53 /usr/java/jdk1.8.0_72/bin/java 
> -Dproc_timelineserver -Xmx1024m -Dhdp.version=2.4.0.0-169 
> -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn 
> -Dyarn.log.dir=/var/log/hadoop-yarn/yarn 
> -Dhadoop.log.file=yarn-yarn-timelineserver-myhost.log 
> -Dyarn.log.file=yarn-yarn-timelineserver-myhost.log -Dyarn.home.dir= 
> -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA 
> -Dyarn.root.logger=INFO,EWMA,RFA 
> -Djava.library.path=:/usr/hdp/2.4.0.0-169/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.0.0-169/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.4.0.0-169/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.0.0-169/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir
>  -Dyarn.policy.file=hadoop-policy.xml 
> -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir 
> -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn 
> -Dyarn.log.dir=/var/log/hadoop-yarn/yarn 
> -Dhadoop.log.file=yarn-yarn-timelineserver-myhost.log 
> -Dyarn.log.file=yarn-yarn-timelineserver-myhost.log 
> -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-timelineserver 
> -Dhadoop.home.dir=/usr/hdp/2.4.0.0-169/hadoop 
> -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA 
> -Djava.library.path=:/usr/hdp/2.4.0.0-169/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.0.0-169/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.4.0.0-169/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.0.0-169/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir
>  -classpath 
> /usr/hdp/2.4.0.0-169/hadoop/conf:/usr/hdp/2.4.0.0-169/hadoop/conf:/usr/hdp/2.4.0.0-169/hadoop/conf:/usr/hdp/2.4.0.0-169/hadoop/lib/*:/usr/hdp/2.4.0.0-169/hadoop/.//*:/usr/hdp/2.4.0.0-169/hadoop-hdfs/./:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/*:/usr/hdp/2.4.0.0-169/hadoop-hdfs/.//*:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/*:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//*:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/*:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//*::/usr/hdp/2.4.0.0-169/tez/*:/usr/hdp/2.4.0.0-169/tez/lib/*:/usr/hdp/2.4.0.0-169/tez/conf:/usr/hdp/2.4.0.0-169/tez/*:/usr/hdp/2.4.0.0-169/tez/lib/*:/usr/hdp/2.4.0.0-169/tez/conf:/usr/hdp/current/hadoop-yarn-timelineserver/.//*:/usr/hdp/current/hadoop-yarn-timelineserver/lib/*:/usr/hdp/2.4.0.0-169/hadoop/conf/timelineserver-config/log4j.properties
>  
> org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer
> {code}
>  
> Although I set -Xmx1024m, actual memory usage is 25GB.
> After I restart the timeline server, memory usage of the timeline server 
> machine decreases.
> https://gyazo.com/130600c17a7d41df8606727a859ae7e3
> Now the timeline server uses less than 1GB of memory.
> top command result
> {code}
>  6163 yarn  20   0 3959m 783m  46m S  0.3  1.2   3:37.60 
> /usr/java/jdk1.8.0_72/bin/java -Dproc_timelineserver -Xmx1024m 
> -Dhdp.version=2.4.0.0-169 ...
> {code}
> I suspect a memory leak in the timeline server.






[jira] [Assigned] (YARN-2255) YARN Audit logging not added to log4j.properties

2016-11-06 Thread Ying Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ying Zhang reassigned YARN-2255:


Assignee: Ying Zhang  (was: Varun Saxena)

> YARN Audit logging not added to log4j.properties
> 
>
> Key: YARN-2255
> URL: https://issues.apache.org/jira/browse/YARN-2255
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Varun Saxena
>Assignee: Ying Zhang
>
> The log4j.properties file which is part of the hadoop package doesn't have 
> YARN audit logging tied to it. This leads to audit logs getting generated in 
> the normal log files. Audit logs should be generated in a separate log file.
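A hedged illustration of the kind of separation being asked for: route the RM audit logger to its own appender in log4j.properties. The property and appender names below follow the pattern used for the HDFS audit logger and are assumptions for illustration, not the configuration that was eventually committed.

{code}
# Sketch only -- names are illustrative, modelled on the hdfs.audit.logger pattern.
rm.audit.logger=INFO,RMAUDIT
log4j.logger.org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger=${rm.audit.logger}
log4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger=false
log4j.appender.RMAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.RMAUDIT.File=${hadoop.log.dir}/rm-audit.log
log4j.appender.RMAUDIT.MaxFileSize=256MB
log4j.appender.RMAUDIT.MaxBackupIndex=20
log4j.appender.RMAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RMAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
{code}

With {{additivity}} set to false, audit events would land only in the dedicated file instead of the normal daemon log.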






[jira] [Commented] (YARN-5720) Update document for "rmadmin -replaceLabelOnNode"

2016-11-06 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642739#comment-15642739
 ] 

Naganarasimha G R commented on YARN-5720:
-

[~Tao Jie], 
YARN-4884 is the one which adds it; I have commented there. If possible, I will 
get that merged first and then apply this!

> Update document for "rmadmin -replaceLabelOnNode"
> -
>
> Key: YARN-5720
> URL: https://issues.apache.org/jira/browse/YARN-5720
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: YARN-5720-branch-2.8.patch, YARN-5720.001.patch, 
> YARN-5720.002.patch, YarnCommands.png, nodeLabel.png
>
>
> As mentioned in YARN-4855, the document should be updated since the commands 
> have changed.






[jira] [Commented] (YARN-4884) Fix missing documentation about rmadmin command regarding node labels

2016-11-06 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642737#comment-15642737
 ] 

Naganarasimha G R commented on YARN-4884:
-

Hi [~vvasudev], [~kaisasak],
It seems we need to apply this patch to 2.8 too. Is there any particular reason 
not to do it? Shall I go ahead and apply it, or is any rebase required?

> Fix missing documentation about rmadmin command regarding node labels
> -
>
> Key: YARN-4884
> URL: https://issues.apache.org/jira/browse/YARN-4884
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: YARN-4884.01.patch
>
>
> There is no documentation about node labels in the rmadmin section, such as 
> {{-addToClusterNodeLabels}} and {{-removeFromClusterNodeLabels}}.
> In addition, the commands inherited from the HAAdmin command are also missing. 
> They are available with rmadmin when {{yarn.resourcemanager.ha.enabled}} is set.






[jira] [Commented] (YARN-4498) Application level node labels stats to be available in REST

2016-11-06 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642708#comment-15642708
 ] 

Naganarasimha G R commented on YARN-4498:
-

[~rohithsharma], I will commit it today if there are no further comments on the 
addendum patches.

> Application level node labels stats to be available in REST
> ---
>
> Key: YARN-4498
> URL: https://issues.apache.org/jira/browse/YARN-4498
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, client, resourcemanager
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: 0001-YARN-4498.patch, YARN-4498.0002.patch, 
> YARN-4498.0003.patch, YARN-4498.0004.patch, YARN-4498.addendum.001.patch, 
> YARN-4498.branch-2.8.0001.patch, YARN-4498.branch-2.8.addendum.001.patch, 
> apps.xml
>
>
> Currently, node label stats per application are not available through REST, 
> such as the labels currently used by all live containers, total stats of 
> containers per label for the app, etc.
> CLI and web UI scenarios will be handled separately.






[jira] [Comment Edited] (YARN-5545) App submit failure on queue with label when default queue partition capacity is zero

2016-11-06 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642691#comment-15642691
 ] 

Naganarasimha G R edited comment on YARN-5545 at 11/7/16 12:36 AM:
---

[~sunilg] & [~bibinchundatt],
I was wondering whether a type cast would be the right approach, or whether to 
introduce an API in YarnScheduler to validate whether an application can be 
accepted, or even better to do it in CapacityScheduler.addApplication, which 
calls leafQueue.submitApplication (which currently does the queue-level 
validation for max apps)? In the future there can be similar checks for other 
schedulers too, and it is not good to have scheduler-specific checks in the 
main RM flow.


was (Author: naganarasimha):
[~sunilg] & [~bibinchundatt],
Was wondering whether to type cast would be the right approach or to introduce 
an api in YarnScheduler to validate whether application can be accepted ? As in 
future there can be similar checks for other schedulers too.
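Purely as an illustration of the scheduler-agnostic alternative discussed in the comment above: an additional pre-submission hook on the YarnScheduler interface could look roughly like the sketch below. The method name and signature are hypothetical, not an existing API; existing methods and imports are omitted.

{code}
// Hypothetical sketch -- not an existing method on YarnScheduler.
// Instead of type-casting to CapacityScheduler in the RM submission path,
// each scheduler implementation would apply its own checks behind this hook.
public interface YarnScheduler {

  // ... existing YarnScheduler methods omitted ...

  /**
   * Throws if the application cannot be accepted, e.g. when the target
   * CapacityScheduler leaf queue has an effective max-applications of 0
   * for the requested partition.
   */
  void validateApplicationSubmission(ApplicationId applicationId,
      String queueName, String user) throws YarnException;
}
{code}

The RM would then call this hook for whichever scheduler is configured, keeping the scheduler-specific logic out of the common submission flow.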

> App submit failure on queue with label when default queue partition capacity 
> is zero
> 
>
> Key: YARN-5545
> URL: https://issues.apache.org/jira/browse/YARN-5545
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Attachments: YARN-5545.0001.patch, YARN-5545.0002.patch, 
> YARN-5545.0003.patch, YARN-5545.0005.patch, YARN-5545.004.patch, 
> capacity-scheduler.xml
>
>
> Configure capacity scheduler 
> yarn.scheduler.capacity.root.default.capacity=0
> yarn.scheduler.capacity.root.queue1.accessible-node-labels.labelx.capacity=50
> yarn.scheduler.capacity.root.default.accessible-node-labels.labelx.capacity=50
> Submit application as below
> ./yarn jar 
> ../share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha2-SNAPSHOT-tests.jar
>  sleep -Dmapreduce.job.node-label-expression=labelx 
> -Dmapreduce.job.queuename=default -m 1 -r 1 -mt 1000 -rt 1
> {noformat}
> 2016-08-21 18:21:31,375 INFO mapreduce.JobSubmitter: Cleaning up the staging 
> area /tmp/hadoop-yarn/staging/root/.staging/job_1471670113386_0001
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed 
> to submit application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:316)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:255)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1344)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1790)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1341)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1362)
>   at org.apache.hadoop.mapreduce.SleepJob.run(SleepJob.java:273)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.mapreduce.SleepJob.main(SleepJob.java:194)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
>   at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
>   at 
> org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:136)
>   at 
> org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:144)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit 
> application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> 

[jira] [Commented] (YARN-5765) LinuxContainerExecutor creates appcache and its subdirectories with wrong group owner.

2016-11-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642693#comment-15642693
 ] 

Hadoop QA commented on YARN-5765:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  4m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 44s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | YARN-5765 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837669/YARN-5765.001.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 462b8b244822 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 59bc84a |
| Default Java | 1.8.0_101 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13799/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13799/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13799/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> LinuxContainerExecutor creates appcache and its subdirectories with wrong 
> group owner.
> --
>
> Key: YARN-5765
> URL: https://issues.apache.org/jira/browse/YARN-5765
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Naganarasimha G R
>Priority: Blocker
> Attachments: YARN-5765.001.patch
>
>
> LinuxContainerExecutor creates usercache/\{userId\}/appcache/\{appId\} with 
> wrong group owner, causing Log aggregation and ShuffleHandler to fail because 
> node manager process does not have permission to read the files under the 
> directory.
> This can be 

[jira] [Commented] (YARN-5545) App submit failure on queue with label when default queue partition capacity is zero

2016-11-06 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642691#comment-15642691
 ] 

Naganarasimha G R commented on YARN-5545:
-

[~sunilg] & [~bibinchundatt],
I was wondering whether a type cast would be the right approach or whether to 
introduce an API in YarnScheduler to validate whether an application can be 
accepted? In the future there can be similar checks for other schedulers too.

> App submit failure on queue with label when default queue partition capacity 
> is zero
> 
>
> Key: YARN-5545
> URL: https://issues.apache.org/jira/browse/YARN-5545
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Attachments: YARN-5545.0001.patch, YARN-5545.0002.patch, 
> YARN-5545.0003.patch, YARN-5545.0005.patch, YARN-5545.004.patch, 
> capacity-scheduler.xml
>
>
> Configure capacity scheduler 
> yarn.scheduler.capacity.root.default.capacity=0
> yarn.scheduler.capacity.root.queue1.accessible-node-labels.labelx.capacity=50
> yarn.scheduler.capacity.root.default.accessible-node-labels.labelx.capacity=50
> Submit application as below
> ./yarn jar 
> ../share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha2-SNAPSHOT-tests.jar
>  sleep -Dmapreduce.job.node-label-expression=labelx 
> -Dmapreduce.job.queuename=default -m 1 -r 1 -mt 1000 -rt 1
> {noformat}
> 2016-08-21 18:21:31,375 INFO mapreduce.JobSubmitter: Cleaning up the staging 
> area /tmp/hadoop-yarn/staging/root/.staging/job_1471670113386_0001
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed 
> to submit application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:316)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:255)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1344)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1790)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1341)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1362)
>   at org.apache.hadoop.mapreduce.SleepJob.run(SleepJob.java:273)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.mapreduce.SleepJob.main(SleepJob.java:194)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
>   at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
>   at 
> org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:136)
>   at 
> org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:144)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit 
> application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at 
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:286)
>   at 
> org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:296)
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:301)
>   ... 25 more
> {noformat}




[jira] [Updated] (YARN-5765) LinuxContainerExecutor creates appcache and its subdirectories with wrong group owner.

2016-11-06 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-5765:

Attachment: YARN-5765.001.patch

Thanks [~haibochen],
Sorry for the delay, and feel free to give comments at any point of time.
What I meant was not to just add *"umask(0027)"* but rather to revert the 
solution of YARN-5287 and add *"umask(0027)"* to solve the original issue for 
which the chmod was introduced.
I have attached the patch, and it would be helpful if you can test it.
I have not added a test case as YARN-5287 has already added one.

> LinuxContainerExecutor creates appcache and its subdirectories with wrong 
> group owner.
> --
>
> Key: YARN-5765
> URL: https://issues.apache.org/jira/browse/YARN-5765
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Naganarasimha G R
>Priority: Blocker
> Attachments: YARN-5765.001.patch
>
>
> LinuxContainerExecutor creates usercache/\{userId\}/appcache/\{appId\} with 
> wrong group owner, causing Log aggregation and ShuffleHandler to fail because 
> node manager process does not have permission to read the files under the 
> directory.
> This can be easily reproduced by enabling LCE and submitting a MR example job 
> as a user that does not belong to the same group that NM process belongs to. 






[jira] [Commented] (YARN-5504) [YARN-3368] Fix YARN UI build pom.xml

2016-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642512#comment-15642512
 ] 

Hudson commented on YARN-5504:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10778 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10778/])
YARN-5504. [YARN-3368] Fix YARN UI build pom.xml (Sreenath Somarajapuram 
(wangda: rev 7005580752bf9346b57f80f1e73ccd5737ae11a5)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/ember-cli-build.js
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/package.json


> [YARN-3368] Fix YARN UI build pom.xml
> -
>
> Key: YARN-5504
> URL: https://issues.apache.org/jira/browse/YARN-5504
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
> Attachments: YARN-5504-YARN-3368-0001.patch, 
> YARN-5504-YARN-3368-0002.patch
>
>
> - Disable tests as we don't have UTs.
> - Disable lint & hint as they are not followed by the current codebase, and 
> are throwing build errors.
> - Disable clearing of the UI package on building, so that network access is 
> required only in the first build.
> - Remove duplicate bower installs.
> -Change the default packaging.type to 'war' as our UI is a Web application- - 
> Will keep it in the profile
> -Final war should just contain the end result of the build and not all files-
> [~wangda] [~vinodkv] [~sunilg] please share your thoughts.






[jira] [Commented] (YARN-5490) [YARN-3368] Fix various alignment issues and broken breadcrumb link in Node page

2016-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642516#comment-15642516
 ] 

Hudson commented on YARN-5490:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10778 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10778/])
YARN-5490. [YARN-3368] Fix various alignment issues and broken (wangda: rev 
64c7cda7e5ac0d3cf4018273809b215cf4181a6a)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-nodes.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-nodes.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-node-containers.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-node.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-nodes-heatmap.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-node-apps.js


> [YARN-3368] Fix various alignment issues and broken breadcrumb link in Node 
> page 
> -
>
> Key: YARN-5490
> URL: https://issues.apache.org/jira/browse/YARN-5490
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Assignee: Akhil PB
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5490-YARN-3368.001.patch
>
>
> There are a few alignment issues on the nodes page.
> The breadcrumb view is not showing the node table when the Nodes option is 
> clicked.






[jira] [Commented] (YARN-5500) 'Master node' link under application tab is broken

2016-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642506#comment-15642506
 ] 

Hudson commented on YARN-5500:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10778 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10778/])
YARN-5500. [YARN-3368] 'Master node' link under application tab is (wangda: 
rev bc273c43ae9959fae224026447d61b519a4b5da4)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-app.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app.hbs


> 'Master node' link under application tab is broken
> --
>
> Key: YARN-5500
> URL: https://issues.apache.org/jira/browse/YARN-5500
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sumana Sathish
>Assignee: Akhil PB
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5500-YARN-3368.001.patch
>
>
> Steps to reproduce:
> * Click on the running application portion on the donut under "Cluster 
> resource usage by applications"
> * Under App Master Info, there is a link provided for "Master Node". 
> The link is broken. It doesn't redirect to any page.






[jira] [Commented] (YARN-5509) Build error due to preparing 3.0.0-alpha2 deployment

2016-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642513#comment-15642513
 ] 

Hudson commented on YARN-5509:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10778 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10778/])
YARN-5509. Build error due to preparing 3.0.0-alpha2 deployment. (Kai (wangda: 
rev b61e60fb92fab9887483f57caac2341ba0490963)
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml


> Build error due to preparing 3.0.0-alpha2 deployment
> 
>
> Key: YARN-5509
> URL: https://issues.apache.org/jira/browse/YARN-5509
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5509-YARN-3368.002.patch, 
> YARN-5509-YARN-3368.01.patch
>
>
> Since trunk is now prepared for 
> [3.0.0-alpha2-SNAPSHOT|https://github.com/apache/hadoop/commit/da456ffd625db93cc16d7daf809b85f24f0d7e0a],
>  the hadoop-yarn package version should also be updated to refer to it.






[jira] [Commented] (YARN-5804) New UI2 is not able to launch with jetty 9 upgrade post HADOOP-10075

2016-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642515#comment-15642515
 ] 

Hudson commented on YARN-5804:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10778 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10778/])
YARN-5804. New UI2 is not able to launch with jetty 9 upgrade post (wangda: rev 
c00b5d1e51b3f495893921dd804085bba66235e0)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApps.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java


> New UI2 is not able to launch with jetty 9 upgrade post HADOOP-10075
> 
>
> Key: YARN-5804
> URL: https://issues.apache.org/jira/browse/YARN-5804
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sunil G
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5804-YARN-3368.0001.patch
>
>
> Post HADOOP-10075, a few compilation errors popped up. This jira is to track 
> these problems.






[jira] [Commented] (YARN-5785) [YARN-3368] Accessing applications and containers list from Node page is throwing few exceptions in console

2016-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642507#comment-15642507
 ] 

Hudson commented on YARN-5785:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10778 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10778/])
YARN-5785. [YARN-3368] Accessing applications and containers list from (wangda: 
rev 013ff07bc695ec7c7386e7cb59873776a1eb1ea5)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-node.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/config/default-config.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-node-app.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-node-container.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-node-container.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-node-app.js


> [YARN-3368] Accessing applications and containers list from Node page is 
> throwing few exceptions in console
> ---
>
> Key: YARN-5785
> URL: https://issues.apache.org/jira/browse/YARN-5785
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Akhil PB
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5785-YARN-3368.001.patch, 
> YARN-5785-YARN-3368.002.patch
>
>
> On the node page, the "List of Applications" and "List of Containers" links 
> are causing a few error logs in the console.






[jira] [Commented] (YARN-5741) [YARN-3368] Update UI2 documentation for new UI2 path

2016-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642505#comment-15642505
 ] 

Hudson commented on YARN-5741:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10778 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10778/])
YARN-5741. [YARN-3368] Update UI2 documentation for new UI2 path (Kai (wangda: 
rev cb77e3eb409dde1fc05cd19345ebba74296f0579)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnUI2.md


> [YARN-3368] Update UI2 documentation for new UI2 path
> -
>
> Key: YARN-5741
> URL: https://issues.apache.org/jira/browse/YARN-5741
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5741-YARN-3368.01.patch, 
> YARN-5741-YARN-3368.02.patch
>
>
> This is a followup of YARN-5698.






[jira] [Commented] (YARN-4515) [YARN-3368] Support hosting web UI framework inside YARN RM

2016-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642508#comment-15642508
 ] 

Hudson commented on YARN-4515:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10778 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10778/])
YARN-4515. [YARN-3368] Support hosting web UI framework inside YARN RM. 
(wangda: rev c85cc3b56ebc63010fb22eb1bfa5849d591f4bcc)
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-node-container.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/config/environment.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/helpers/log-files-comma.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/node-menu-panel.hbs
* (edit) LICENSE.txt
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/app-table.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/error.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/package.json
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-container-log.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-node-container.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-apps.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-app-attempt.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node-app.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node-container.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/timeline-view.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-app.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/helpers/node-menu.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-nodes.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/helpers/node-link.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app-attempt.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-node-app.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-node-app.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node-apps.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/helpers/node-name.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/container-table.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/helpers/node-name-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/app-attempt-table.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/timeline-view.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-app-attempt.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-app-attempt.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app-attempt.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-rm-node.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-node.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-container.js
* (edit) 

[jira] [Commented] (YARN-4514) [YARN-3368] Cleanup hardcoded configurations, such as RM/ATS addresses

2016-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642511#comment-15642511
 ] 

Hudson commented on YARN-4514:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10778 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10778/])
YARN-4514. [YARN-3368] Cleanup hardcoded configurations, such as RM/ATS 
(wangda: rev dea4a296e558a11ba72e64344e8e34dcfba8598d)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-container-log.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/services/hosts.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/package.json
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-node-app.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/services/env-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/cluster-metric.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-app.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/ember-cli-build.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-node.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-app-attempt.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-container.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/bower.json
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-node-container.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/index.html
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/initializers/hosts.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/app.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/config.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/initializers/env.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/initializers/jquery-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-queue.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/config/default-config.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/cluster-info.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/services/hosts-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/abstract.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/initializers/hosts-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-rm-node.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/config/configs.env
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/initializers/env-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/services/env.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/config/environment.js


> [YARN-3368] Cleanup hardcoded configurations, such as RM/ATS addresses
> --
>
> Key: YARN-4514
> URL: https://issues.apache.org/jira/browse/YARN-4514
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Sunil G
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-4514-YARN-3368.1.patch, 
> YARN-4514-YARN-3368.2.patch, YARN-4514-YARN-3368.3.patch, 
> YARN-4514-YARN-3368.4.patch, YARN-4514-YARN-3368.5.patch, 
> YARN-4514-YARN-3368.6.patch, YARN-4514-YARN-3368.7.patch, 
> YARN-4514-YARN-3368.8.patch
>
>
> We have several configurations that are hard-coded, for example the RM/ATS 
> addresses; we should make them configurable.






[jira] [Commented] (YARN-3368) [Umbrella] YARN web UI: Next generation

2016-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642503#comment-15642503
 ] 

Hudson commented on YARN-3368:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10778 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10778/])
YARN-4733. [YARN-3368] Initial commit of new YARN web UI. (wangda) (wangda: rev 
53e661f68e29f639ff94cdccd51b7a13015dc534)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/helpers/start-app.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/components/base-chart-component.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/components/item-selector.js
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/models/.gitkeep
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/adapters/yarn-app-test.js
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/routes/.gitkeep
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/components/queue-configuration-table.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/yarn-queue.hbs
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/utils/converter.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/utils/converter-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/components/container-table.js
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/.travis.yml
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/models/yarn-queue.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/components/container-table.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/controllers/yarn-apps-test.js
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/helpers/resolver.js
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/routes/yarn-apps.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app-attempt.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/controllers/yarn-queues-test.js
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/utils/sorter.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/models/yarn-container.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/controllers/yarn-queue.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/sort_both.png
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/components/timeline-view.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/components/timeline-view.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/routes/yarn-apps-test.js
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/package.json
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/favicon.ico
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/adapters/yarn-queue.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/sort_desc.png
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app.js
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/models/yarn-app.js
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/vendor/.gitkeep
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-container.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/cluster-info.js
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/styles/app.css
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/.watchmanconfig
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/.bowerrc
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/models/cluster-metric.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-queue.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/models/yarn-app-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/adapters/yarn-app-attempt.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/adapters/cluster-info.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/yarn-app-attempt.hbs
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/.jshintrc
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/sort_asc.png
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/helpers/.gitkeep
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/routes/yarn-queues/queues-selector.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/models/cluster-info.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/components/simple-table.js
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/config/environment.js
* (add) 

[jira] [Commented] (YARN-5488) Applications table overflows beyond the page boundary

2016-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642504#comment-15642504
 ] 

Hudson commented on YARN-5488:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10778 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10778/])
YARN-5488. [YARN-3368] Applications table overflows beyond the page (wangda: 
rev 23b0287f62d5d026b2d31132579b043b23eb270c)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/app.css


> Applications table overflows beyond the page boundary
> -
>
> Key: YARN-5488
> URL: https://issues.apache.org/jira/browse/YARN-5488
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Harish Jaiprakash
>Assignee: Harish Jaiprakash
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5488-YARN-3368.01.patch, YARN-5488.01.patch
>
>
> The table in the Applications tab overflows beyond the page boundary and 
> makes the UI look broken.






[jira] [Commented] (YARN-4733) [YARN-3368] Initial commit of new YARN web UI

2016-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642509#comment-15642509
 ] 

Hudson commented on YARN-4733:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10778 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10778/])
YARN-4733. [YARN-3368] Initial commit of new YARN web UI. (wangda) (wangda: rev 
53e661f68e29f639ff94cdccd51b7a13015dc534)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/helpers/start-app.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/components/base-chart-component.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/components/item-selector.js
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/models/.gitkeep
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/adapters/yarn-app-test.js
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/routes/.gitkeep
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/components/queue-configuration-table.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/yarn-queue.hbs
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/utils/converter.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/utils/converter-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/components/container-table.js
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/.travis.yml
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/models/yarn-queue.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/components/container-table.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/controllers/yarn-apps-test.js
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/helpers/resolver.js
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/routes/yarn-apps.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app-attempt.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/controllers/yarn-queues-test.js
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/utils/sorter.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/models/yarn-container.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/controllers/yarn-queue.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/sort_both.png
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/components/timeline-view.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/components/timeline-view.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/routes/yarn-apps-test.js
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/package.json
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/favicon.ico
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/adapters/yarn-queue.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/sort_desc.png
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app.js
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/models/yarn-app.js
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/vendor/.gitkeep
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-container.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/cluster-info.js
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/styles/app.css
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/.watchmanconfig
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/.bowerrc
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/models/cluster-metric.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-queue.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/models/yarn-app-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/adapters/yarn-app-attempt.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/adapters/cluster-info.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/yarn-app-attempt.hbs
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/.jshintrc
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/sort_asc.png
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/helpers/.gitkeep
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/routes/yarn-queues/queues-selector.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/models/cluster-info.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/components/simple-table.js
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/config/environment.js
* (add) 

[jira] [Commented] (YARN-5503) [YARN-3368] Add missing hidden files in webapp folder for deployment

2016-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642510#comment-15642510
 ] 

Hudson commented on YARN-5503:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10778 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10778/])
YARN-5503. [YARN-3368] Add missing hidden files in webapp folder for (wangda: 
rev f6574d9ff6940ffc526bef2efae403df0efb2195)
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.jshintrc
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.bowerrc
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.ember-cli
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.watchmanconfig
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.editorconfig
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml


> [YARN-3368] Add missing hidden files in webapp folder for deployment
> 
>
> Key: YARN-5503
> URL: https://issues.apache.org/jira/browse/YARN-5503
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
> Attachments: YARN-5503-YARN-3368-0001.patch, 
> YARN-5503-YARN-3368-0001.patch, YARN-5503-YARN-3368-0002.patch, 
> YARN-5503-YARN-3368-0003.patch, YARN-5503-YARN-3368-0004.patch, 
> YARN-5503-YARN-3368.0005.patch, YARN-5503-YARN-3368.0006.patch
>
>
> - It might be good to have a readme file with the basic instructions.
> - Change the package type to war, as ours is a web application.
> - Just noticed that the hidden files that must be present in the base 
> directory of the ember app are missing. Most of them are used for configuration, 
> and when they are missing the default values would be used by ember.
> -- They include - .bowerrc, .editorconfig, .ember-cli, .gitignore, .jshintrc, 
> .travis.yml, .watchmanconfig (a sketch of typical contents follows below)
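
For readers unfamiliar with these dotfiles: as a hedged illustration only (not the committed contents), a typical ember-cli .bowerrc looks like the JSON below; .watchmanconfig is similar, usually just listing build-output directories (e.g. tmp and dist) for watchman to ignore.

{code}
{
  "directory": "bower_components",
  "analytics": false
}
{code}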



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4517) [YARN-3368] Add nodes page

2016-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642514#comment-15642514
 ] 

Hudson commented on YARN-4517:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10778 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10778/])
YARN-4517. Add nodes page and fix bunch of license issues. (Varun Saxena 
(wangda: rev 0a5f6520713fbd00a8c8ae563be8abaa2b8c868b)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/models/yarn-node-container-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/routes/yarn-container-log.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/serializers/yarn-node-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/adapters/yarn-node-app-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/adapters/yarn-app-attempt.js
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/routes/yarn-apps.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-node-container.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/models/yarn-container-log.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/serializers/yarn-container-log-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/adapters/cluster-metric.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/adapters/yarn-container-log-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/adapters/yarn-node-container.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/routes/yarn-node-container-test.js
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/helpers/divide.js
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/utils/converter.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/adapters/yarn-node-container-test.js
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/router.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/yarn-node.hbs
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/config.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/routes/yarn-node-containers-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/routes/yarn-node-apps-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/routes/yarn-nodes-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/utils/converter-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/models/yarn-rm-node-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/utils/sorter-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/adapters/yarn-node-app.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/components/simple-table.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/yarn-node-app.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/adapters/yarn-container.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/adapters/yarn-rm-node-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/routes/yarn-node-app.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/adapters/yarn-queue.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-container-log.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-rm-node.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/yarn-container-log.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/adapters/yarn-rm-node.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/routes/yarn-node-container.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/adapters/cluster-info.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/routes/yarn-node-app-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/serializers/yarn-node-app-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/yarn-node-containers.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/adapters/yarn-node-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-node-app.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/serializers/yarn-node-container-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/yarn-node-apps.hbs
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/bower.json
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/routes/yarn-node-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/yarn-node-container.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/models/yarn-node-app-test.js
* (edit) 

[jira] [Commented] (YARN-5497) Use different color for Undefined and Succeeded for Final State in applications page

2016-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642487#comment-15642487
 ] 

Hudson commented on YARN-5497:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10778 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10778/])
YARN-5497. [YARN-3368] Use different color for Undefined and Succeeded (wangda: 
rev 12ddbbc61d0c90ac4c0057fd7b151eaa778123c8)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app.js


> Use different color for Undefined and Succeeded for Final State in 
> applications page
> 
>
> Key: YARN-5497
> URL: https://issues.apache.org/jira/browse/YARN-5497
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yesha Vora
>Assignee: Akhil PB
>Priority: Trivial
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5497-YARN-3368.001.patch
>
>
> When an application is in the Running state, the Final Status value is set to "Undefined".
> When an application has succeeded, the Final Status value is set to "SUCCEEDED".
> The YARN UI uses the same green color for both of the above final statuses.
> It would be good to have a different color for each final status value.
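
A minimal sketch of the kind of change this implies in the yarn-app model (the property names below are assumptions, not the exact code in yarn-app.js): map each final status to its own color instead of reusing green.

{code}
// Illustrative only: one color per final status.
import Ember from 'ember';

export default Ember.Object.extend({
  finalStatus: null,

  finalStatusColor: Ember.computed('finalStatus', function () {
    switch (this.get('finalStatus')) {
      case 'SUCCEEDED': return 'green';
      case 'FAILED':    return 'red';
      case 'KILLED':    return 'orange';
      default:          return 'grey';   // e.g. UNDEFINED while the app is still running
    }
  })
});
{code}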



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5161) [YARN-3368] Add Apache Hadoop logo in YarnUI home page

2016-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642498#comment-15642498
 ] 

Hudson commented on YARN-5161:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10778 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10778/])
YARN-5161. [YARN-3368] Add Apache Hadoop logo in YarnUI home page. (Kai 
(wangda: rev 35f08122e23b8ee48abeb04bcc5cb7b7b907db35)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/application.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/app.css
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/assets/images/hadoop_logo.png


> [YARN-3368] Add Apache Hadoop logo in YarnUI home page
> --
>
> Key: YARN-5161
> URL: https://issues.apache.org/jira/browse/YARN-5161
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp
>Reporter: Sunil G
>Assignee: Kai Sasaki
> Attachments: Screen Shot 2016-05-31 at 21.22.30.png, Screen Shot 
> 2016-06-11 at 12.33.39.png, Screen Shot 2016-06-20 at 23.15.05.png, 
> YARN-5161-YARN-3368.03.patch, YARN-5161-YARN-3368.04.patch, 
> YARN-5161-YARN-3368.05.patch, YARN-5161.01.patch, YARN-5161.02.patch, 
> apache_logo.png, hadoop_logo.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4849) [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix licenses.

2016-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642495#comment-15642495
 ] 

Hudson commented on YARN-4849:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10778 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10778/])
YARN-4849. [YARN-3368] cleanup code base, integrate web UI related build 
(wangda: rev 266784b84915c1b14dcd17fd0f648f66355d5322)
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/routes/yarn-node-containers.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/application.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/queue-view.js
* (edit) .gitignore
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/routes/yarn-node-app.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/assets/images/datatables/sort_both.png
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/app.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/test-helper.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/utils/converter-test.js
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/helpers/node-link.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/models/yarn-node-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-rm-node.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-queue.js
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-container-log.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/application.js
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-node-app.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/app.css
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/yarn-app-attempt.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/queue-navigator.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/assets/images/datatables/sort_desc.png
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/utils/converter.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/adapters/yarn-container-log-test.js
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/models/yarn-node-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/donut-chart.js
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/routes/application.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/routes/yarn-node-containers-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-container-log.js
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/yarn-node-app.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app-attempt.js
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/components/item-selector.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/routes/yarn-node-app-test.js
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app.js
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/serializers/yarn-node-container-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-node-container.js
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/routes/yarn-node-container-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/assets/images/datatables/sort_desc_disabled.png
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/models/yarn-rm-node.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-app.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/index.html
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/routes/yarn-queue.js
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/controllers/.gitkeep
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/components/queue-configuration-table.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-queue.js
* (edit) hadoop-yarn-project/hadoop-yarn/pom.xml
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/models/yarn-node-container-test.js
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/cluster-metric.js
* (add) 

[jira] [Commented] (YARN-5583) [YARN-3368] Fix wrong paths in .gitignore

2016-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642501#comment-15642501
 ] 

Hudson commented on YARN-5583:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10778 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10778/])
YARN-5583. [YARN-3368] Fix wrong paths in .gitignore (Sreenath (wangda: rev 
0f8f0ac6d1a1f62766084b6f382b76a67742)
* (edit) .gitignore


> [YARN-3368] Fix wrong paths in .gitignore
> -
>
> Key: YARN-5583
> URL: https://issues.apache.org/jira/browse/YARN-5583
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
> Attachments: YARN-5583-YARN-3368-0001.patch
>
>
> The npm-debug.log and testem.log paths are specified incorrectly.
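
As a purely illustrative sketch (the real entries should be checked against the committed .gitignore), the intent is that these log files be ignored under the yarn-ui module's webapp directory, along the lines of:

{code}
# hypothetical .gitignore entries -- actual paths depend on the yarn-ui module layout
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/npm-debug.log*
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/testem.log
{code}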



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5183) [YARN-3368] Support for responsive navbar when window is resized

2016-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642497#comment-15642497
 ] 

Hudson commented on YARN-5183:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10778 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10778/])
YARN-5183. [YARN-3368] Support for responsive navbar when window is (wangda: 
rev 14aa58c5942d3178444043a038298776f2b03e0e)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/ember-cli-build.js


> [YARN-3368] Support for responsive navbar when window is resized
> 
>
> Key: YARN-5183
> URL: https://issues.apache.org/jira/browse/YARN-5183
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Fix For: 3.0.0-alpha2
>
> Attachments: Screen Shot 2016-05-31 at 22.41.35.png, 
> YARN-5183-YARN-3368.02.patch, YARN-5183-YARN-3368.1.patch, 
> YARN-5183-YARN-3368.3.patch, YARN-5183.01.patch
>
>
> The responsive navbar currently does not work, even though the navbar icon is shown.
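
For context, responsive (collapsing) navbars generally need Bootstrap's JavaScript bundled into the build; below is a hedged sketch of an ember-cli-build.js doing that. The imported bower path is an assumption, not necessarily what this patch adds.

{code}
// Hedged sketch of ember-cli-build.js; the imported path is an assumption.
/* eslint-env node */
var EmberApp = require('ember-cli/lib/broccoli/ember-app');

module.exports = function (defaults) {
  var app = new EmberApp(defaults, {});
  // Bundle Bootstrap's JS so the collapsed navbar toggle actually works.
  app.import('bower_components/bootstrap/dist/js/bootstrap.min.js');
  return app.toTree();
};
{code}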



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5779) [YARN-3368] Document limits/notes of the new YARN UI

2016-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642493#comment-15642493
 ] 

Hudson commented on YARN-5779:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10778 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10778/])
YARN-5779. [YARN-3368] Document limits/notes of the new YARN UI (Wangda 
(wangda: rev 825de90b96d7daaaf3636fde4e0152e9f4cfe3c4)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnUI2.md
YARN-5779. [YARN-3368] Addendum patch to document limits/notes of the (wangda: 
rev fad392a22f1007e6b6e7f6af55051ebd912a6e4a)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnUI2.md


> [YARN-3368] Document limits/notes of the new YARN UI
> 
>
> Key: YARN-5779
> URL: https://issues.apache.org/jira/browse/YARN-5779
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5779-YARN-3368.01.patch, 
> YARN-5779-YARN-3368.02.patch, YARN-5779-YARN-3368.addendum.1.patch
>
>
> For example, we don't guarantee that it is able to run in a security-enabled 
> environment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5321) [YARN-3368] Add resource usage for application by node managers

2016-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642499#comment-15642499
 ] 

Hudson commented on YARN-5321:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10778 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10778/])
YARN-5321. [YARN-3368] Add resource usage for application by node (wangda: rev 
8f584a561edd4b119eb36b074ff74a4088279d76)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-node.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-nodes-heatmap.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-apps/apps.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/donut-chart.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app-attempt.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-nodes.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/queue-configuration-table.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node-apps.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-apps.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/bar-chart.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-node-apps.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/controllers/yarn-app-attempts-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/package.json
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/queue-usage-donut-chart.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node-container.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/app.css
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/utils/mock.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queues.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-app.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/utils/color-utils.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/queue-view.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-queue.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/controllers/yarn-nodes-heatmap-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-services.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-app-attempts.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/routes/yarn-queues-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/base-chart-component.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-app-attempts.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/queue-navigator.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-apps/services.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/controllers/yarn-node-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps/services.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-node-containers.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/controllers/yarn-node-containers-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-nodes/table.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/controllers/yarn-queues-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/timeline-view.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-nodes.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queue-apps.js
* (add) 

[jira] [Commented] (YARN-3334) [Event Producers] NM TimelineClient container metrics posting to new timeline service.

2016-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642489#comment-15642489
 ] 

Hudson commented on YARN-3334:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10778 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10778/])
YARN-3334. [YARN-3368] Introduce REFRESH button in various UI pages (wangda: 
rev 561839cb5f37436d1fdd011b3c4de5f1c1879b29)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/donut-chart.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-node.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/breadcrumb-bar.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-node-app.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/app-usage-donut-chart.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-node-container.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-nodes.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node-containers.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-app.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/cluster-overview.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-nodes.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app-attempts.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-node-app.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-container-log.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue-apps.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-container-log.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-app-attempt.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-app-attempts.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node-apps.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queue.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app-attempt.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node-container.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/breadcrumb-bar.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-apps.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/app.css
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/abstract.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-node-apps.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node-app.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/queue-usage-donut-chart.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/bar-chart.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queues.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/integration/components/breadcrumb-bar-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/cluster-overview.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-container-log.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queue-apps.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/controllers/yarn-container-log-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-node-containers.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/controllers/yarn-node-app-test.js
* (edit) 

[jira] [Commented] (YARN-5772) Replace old Hadoop logo with new one

2016-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642488#comment-15642488
 ] 

Hudson commented on YARN-5772:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10778 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10778/])
YARN-5772. [YARN-3368] Replace old Hadoop logo with new one (Akhil P B (wangda: 
rev e93f900b5544b4c7fc1b5279baff81ded6be56f5)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/assets/images/hadoop_logo.png


> Replace old Hadoop logo with new one
> 
>
> Key: YARN-5772
> URL: https://issues.apache.org/jira/browse/YARN-5772
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Affects Versions: YARN-3368
>Reporter: Akira Ajisaka
>Assignee: Akhil PB
> Attachments: YARN-5772-YARN-3368.0001.patch, ui2-with-newlogo.png
>
>
> YARN-5161 added the Apache Hadoop logo to the UI, but the logo is outdated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5698) [YARN-3368] Launch new YARN UI under hadoop web app port

2016-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642491#comment-15642491
 ] 

Hudson commented on YARN-5698:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10778 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10778/])
YARN-5698. [YARN-3368] Launch new YARN UI under hadoop web app port. (wangda: 
rev 3de0da2a7659db268d630cb8c4ad1d1c4b8398a2)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/config/default-config.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApps.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml


> [YARN-3368] Launch new YARN UI under hadoop web app port
> 
>
> Key: YARN-5698
> URL: https://issues.apache.org/jira/browse/YARN-5698
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Assignee: Sunil G
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5698-YARN-3368.0001.patch, 
> YARN-5698-YARN-3368.0002.patch, YARN-5698-YARN-3368.0003.patch
>
>
> As discussed in YARN-5145, it would be better to launch the new web UI as a new 
> webapp under the same old port.
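
For operators, a hedged example of how this typically surfaces in configuration: the change adds a switch in yarn-default.xml (the property name below is assumed to be yarn.webapp.ui2.enable; verify against the committed yarn-default.xml) so that the new UI is served from the existing RM web app port, typically at http://<rm-address>:8088/ui2.

{code}
<!-- Hedged sketch for yarn-site.xml; property name assumed, check yarn-default.xml -->
<property>
  <name>yarn.webapp.ui2.enable</name>
  <value>true</value>
</property>
{code}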



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5682) [YARN-3368] Fix maven build to keep all generated or downloaded files in target folder

2016-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642502#comment-15642502
 ] 

Hudson commented on YARN-5682:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10778 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10778/])
YARN-5682. [YARN-3368] Fix maven build to keep all generated or (wangda: rev 
98b2ad7208eaadc205afd96cc002e5650128da4d)
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
* (edit) hadoop-yarn-project/hadoop-yarn/pom.xml


> [YARN-3368] Fix maven build to keep all generated or downloaded files in 
> target folder
> --
>
> Key: YARN-5682
> URL: https://issues.apache.org/jira/browse/YARN-5682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5682-YARN-3368.001.patch, 
> YARN-5682-YARN-3368.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5598) [YARN-3368] Fix create-release to be able to generate bits for the new yarn-ui

2016-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642490#comment-15642490
 ] 

Hudson commented on YARN-5598:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10778 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10778/])
YARN-5598. [YARN-3368] Fix create-release to be able to generate bits (wangda: 
rev e8096911acbcb067f3ecb9d054ee02ecbbe666e4)
* (edit) dev-support/docker/Dockerfile
* (edit) dev-support/bin/create-release
* (delete) dev-support/create-release.sh


> [YARN-3368] Fix create-release to be able to generate bits for the new yarn-ui
> --
>
> Key: YARN-5598
> URL: https://issues.apache.org/jira/browse/YARN-5598
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn, yarn-ui-v2
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5598-YARN-3368.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5145) [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR

2016-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642500#comment-15642500
 ] 

Hudson commented on YARN-5145:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10778 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10778/])
YARN-5145. [YARN-3368] Move new YARN UI configuration to (wangda: rev 
d8be7667597b468865e3d888cb3663c1d78823ec)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/initializers/loader-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/initializers/loader.js


> [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR
> -
>
> Key: YARN-5145
> URL: https://issues.apache.org/jira/browse/YARN-5145
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Sunil G
> Fix For: 3.0.0-alpha2
>
> Attachments: 0001-YARN-5145-Run-NewUI-WithOldPort-POC.patch, 
> YARN-5145-YARN-3368.01.patch, YARN-5145-YARN-3368.02.patch, 
> YARN-5145-YARN-3368.03.patch, YARN-5145-YARN-3368.04.patch, 
> YARN-5145-YARN-3368.05.patch, YARN-5145-YARN-3368.06.patch, 
> newUIInOldRMWebServer.png
>
>
> The existing YARN UI configuration lives under the Hadoop package directory 
> $HADOOP_PREFIX/share/hadoop/yarn/webapps/; we should move it to 
> $HADOOP_CONF_DIR like other configurations.
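
A rough sketch of the general approach (loading UI settings from an externally served config file at application boot instead of baking them into the webapp directory). This is not the actual loader.js from the patch; the config URL and key names are assumptions.

{code}
// Hedged sketch: an Ember initializer that pulls configuration at startup.
import Ember from 'ember';

function loadConfigs(application) {
  application.deferReadiness();                // hold app boot until config arrives
  Ember.$.getJSON('conf/yarn-web-ui.json')     // e.g. a file copied from $HADOOP_CONF_DIR
    .done(function (conf) {
      window.ENV = window.ENV || {};
      window.ENV.hosts = conf.hosts;           // e.g. RM / timeline server addresses
    })
    .always(function () {
      application.advanceReadiness();          // continue booting either way
    });
}

export default {
  name: 'loader',
  initialize: loadConfigs
};
{code}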



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5019) [YARN-3368] Change urls in new YARN ui from camel casing to hyphens

2016-11-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642492#comment-15642492
 ] 

Hudson commented on YARN-5019:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10778 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10778/])
YARN-5019. [YARN-3368] Change urls in new YARN ui from camel casing to (wangda: 
rev 2c4f164e16fbcf72c05e147532ba6c41c36fdda9)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/helpers/node-link.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/router.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node-container.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/helpers/node-menu.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-node-containers.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queue.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/application.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node-apps.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node-containers.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-node-app.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queues/index.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-container-log.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node-app.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-app.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-node-container.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app-attempt.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-app-attempt.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-node-apps.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-node.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-nodes.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-container-log.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/index.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/helpers/log-files-comma.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queues/queues-selector.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/app-table.hbs


> [YARN-3368] Change urls in new YARN ui from camel casing to hyphens
> ---
>
> Key: YARN-5019
> URL: https://issues.apache.org/jira/browse/YARN-5019
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Varun Vasudev
>Assignee: Sunil G
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5019-YARN-3368.1.patch
>
>
> There are a couple of reasons we should recommend avoiding camel casing in 
> URLs (a router sketch follows below):
> 1. Some web servers are case insensitive.
> 2. Google suggests using hyphens - 
> https://support.google.com/webmasters/answer/76329
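
To make the convention concrete, here is a minimal Ember router sketch using hyphenated route names; the route names are illustrative only and not the exact set defined in the yarn-ui router.js.

{code}
// Illustrative sketch only -- hyphenated (not camelCased) route names.
import Ember from 'ember';
import config from './config/environment';

const Router = Ember.Router.extend({
  location: config.locationType
});

Router.map(function () {
  this.route('yarn-apps');           // e.g. instead of 'yarnApps'
  this.route('yarn-node-apps');      // e.g. instead of 'yarnNodeApps'
  this.route('yarn-container-log');  // e.g. instead of 'yarnContainerLog'
});

export default Router;
{code}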



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5804) New UI2 is not able to launch with jetty 9 upgrade post HADOOP-10075

2016-11-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5804:
-
Fix Version/s: (was: YARN-3368)
   3.0.0-alpha2

> New UI2 is not able to launch with jetty 9 upgrade post HADOOP-10075
> 
>
> Key: YARN-5804
> URL: https://issues.apache.org/jira/browse/YARN-5804
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sunil G
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5804-YARN-3368.0001.patch
>
>
> Post HADOOP-10075, a few compilation errors popped up. This jira is to track 
> these problems.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5785) [YARN-3368] Accessing applications and containers list from Node page is throwing few exceptions in console

2016-11-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5785:
-
Fix Version/s: (was: YARN-3368)
   3.0.0-alpha2

> [YARN-3368] Accessing applications and containers list from Node page is 
> throwing few exceptions in console
> ---
>
> Key: YARN-5785
> URL: https://issues.apache.org/jira/browse/YARN-5785
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Akhil PB
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5785-YARN-3368.001.patch, 
> YARN-5785-YARN-3368.002.patch
>
>
> On the node page, the "List of Applications" and "List of Containers" links are 
> causing a few error logs in the console.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5741) [YARN-3368] Update UI2 documentation for new UI2 path

2016-11-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5741:
-
Fix Version/s: (was: YARN-3368)
   3.0.0-alpha2

> [YARN-3368] Update UI2 documentation for new UI2 path
> -
>
> Key: YARN-5741
> URL: https://issues.apache.org/jira/browse/YARN-5741
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5741-YARN-3368.01.patch, 
> YARN-5741-YARN-3368.02.patch
>
>
> This is a followup of YARN-5698.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5500) 'Master node' link under application tab is broken

2016-11-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5500:
-
Fix Version/s: (was: YARN-3368)
   3.0.0-alpha2

> 'Master node' link under application tab is broken
> --
>
> Key: YARN-5500
> URL: https://issues.apache.org/jira/browse/YARN-5500
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sumana Sathish
>Assignee: Akhil PB
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5500-YARN-3368.001.patch
>
>
> Steps to reproduce:
> * Click on the running application portion on the donut under "Cluster 
> resource usage by applications"
> * Under App Master Info, there is a link provided for "Master Node". 
> The link is broken. It doesn't redirect to any page.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5698) [YARN-3368] Launch new YARN UI under hadoop web app port

2016-11-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5698:
-
Fix Version/s: (was: YARN-3368)
   3.0.0-alpha2

> [YARN-3368] Launch new YARN UI under hadoop web app port
> 
>
> Key: YARN-5698
> URL: https://issues.apache.org/jira/browse/YARN-5698
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Assignee: Sunil G
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5698-YARN-3368.0001.patch, 
> YARN-5698-YARN-3368.0002.patch, YARN-5698-YARN-3368.0003.patch
>
>
> As discussed in YARN-5145, it would be better to launch the new web UI as a new 
> webapp under the same old port.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5682) [YARN-3368] Fix maven build to keep all generated or downloaded files in target folder

2016-11-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5682:
-
Fix Version/s: (was: YARN-3368)
   3.0.0-alpha2

> [YARN-3368] Fix maven build to keep all generated or downloaded files in 
> target folder
> --
>
> Key: YARN-5682
> URL: https://issues.apache.org/jira/browse/YARN-5682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5682-YARN-3368.001.patch, 
> YARN-5682-YARN-3368.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5598) [YARN-3368] Fix create-release to be able to generate bits for the new yarn-ui

2016-11-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5598:
-
Fix Version/s: (was: YARN-3368)
   3.0.0-alpha2

> [YARN-3368] Fix create-release to be able to generate bits for the new yarn-ui
> --
>
> Key: YARN-5598
> URL: https://issues.apache.org/jira/browse/YARN-5598
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn, yarn-ui-v2
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5598-YARN-3368.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5509) Build error due to preparing 3.0.0-alpha2 deployment

2016-11-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5509:
-
Fix Version/s: (was: YARN-3368)
   3.0.0-alpha2

> Build error due to preparing 3.0.0-alpha2 deployment
> 
>
> Key: YARN-5509
> URL: https://issues.apache.org/jira/browse/YARN-5509
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5509-YARN-3368.002.patch, 
> YARN-5509-YARN-3368.01.patch
>
>
> Since trunk is now prepared for 
> [3.0.0-alpha2-SNAPSHOT|https://github.com/apache/hadoop/commit/da456ffd625db93cc16d7daf809b85f24f0d7e0a],
>  the hadoop-yarn package version should also be updated to refer to it.
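
In practice this means the yarn-ui build descriptors must reference the new trunk version; a hedged illustration follows (the exact parent coordinates should be taken from the committed pom.xml).

{code}
<!-- Hedged sketch of the version alignment in a module pom.xml -->
<parent>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-yarn</artifactId>
  <version>3.0.0-alpha2-SNAPSHOT</version>
</parent>
{code}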



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5779) [YARN-3368] Document limits/notes of the new YARN UI

2016-11-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5779:
-
Fix Version/s: (was: YARN-3368)
   3.0.0-alpha2

> [YARN-3368] Document limits/notes of the new YARN UI
> 
>
> Key: YARN-5779
> URL: https://issues.apache.org/jira/browse/YARN-5779
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5779-YARN-3368.01.patch, 
> YARN-5779-YARN-3368.02.patch, YARN-5779-YARN-3368.addendum.1.patch
>
>
> For example, we don't guarantee that it is able to run in a security-enabled 
> environment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5490) [YARN-3368] Fix various alignment issues and broken breadcrumb link in Node page

2016-11-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5490:
-
Fix Version/s: (was: YARN-3368)
   3.0.0-alpha2

> [YARN-3368] Fix various alignment issues and broken breadcrumb link in Node 
> page 
> -
>
> Key: YARN-5490
> URL: https://issues.apache.org/jira/browse/YARN-5490
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Assignee: Akhil PB
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5490-YARN-3368.001.patch
>
>
> There are a few alignment issues on the nodes page.
> The breadcrumb view does not show the node table when the Nodes option is clicked.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5488) Applications table overflows beyond the page boundary

2016-11-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5488:
-
Fix Version/s: (was: YARN-3368)
   3.0.0-alpha2

> Applications table overflows beyond the page boundary
> -
>
> Key: YARN-5488
> URL: https://issues.apache.org/jira/browse/YARN-5488
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Harish Jaiprakash
>Assignee: Harish Jaiprakash
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5488-YARN-3368.01.patch, YARN-5488.01.patch
>
>
> The table in the Applications tab overflows beyond the page boundary and makes 
> the UI look broken.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5344) [YARN-3368] Generic UI improvements

2016-11-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5344:
-
Fix Version/s: (was: YARN-3368)
   3.0.0-alpha2

> [YARN-3368] Generic UI improvements
> ---
>
> Key: YARN-5344
> URL: https://issues.apache.org/jira/browse/YARN-5344
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
> Fix For: 3.0.0-alpha2
>
>
> - Add breadcrumbs to all pages
> - Define a vertical space (to the left) for displaying sub-pages



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5497) Use different color for Undefined and Succeeded for Final State in applications page

2016-11-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5497:
-
Fix Version/s: (was: YARN-3368)
   3.0.0-alpha2

> Use different color for Undefined and Succeeded for Final State in 
> applications page
> 
>
> Key: YARN-5497
> URL: https://issues.apache.org/jira/browse/YARN-5497
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yesha Vora
>Assignee: Akhil PB
>Priority: Trivial
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5497-YARN-3368.001.patch
>
>
> When an application is in the Running state, the Final Status value is set to "Undefined".
> When an application has succeeded, the Final Status value is set to "SUCCEEDED".
> The YARN UI uses the same green color for both of the above final statuses.
> It would be good to have a different color for each final status value.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5145) [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR

2016-11-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5145:
-
Fix Version/s: (was: YARN-3368)
   3.0.0-alpha2

> [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR
> -
>
> Key: YARN-5145
> URL: https://issues.apache.org/jira/browse/YARN-5145
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Sunil G
> Fix For: 3.0.0-alpha2
>
> Attachments: 0001-YARN-5145-Run-NewUI-WithOldPort-POC.patch, 
> YARN-5145-YARN-3368.01.patch, YARN-5145-YARN-3368.02.patch, 
> YARN-5145-YARN-3368.03.patch, YARN-5145-YARN-3368.04.patch, 
> YARN-5145-YARN-3368.05.patch, YARN-5145-YARN-3368.06.patch, 
> newUIInOldRMWebServer.png
>
>
> Existing YARN UI configuration is under Hadoop package's directory: 
> $HADOOP_PREFIX/share/hadoop/yarn/webapps/, we should move it to 
> $HADOOP_CONF_DIR like other configurations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5183) [YARN-3368] Support for responsive navbar when window is resized

2016-11-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5183:
-
Fix Version/s: (was: YARN-3368)
   3.0.0-alpha2

> [YARN-3368] Support for responsive navbar when window is resized
> 
>
> Key: YARN-5183
> URL: https://issues.apache.org/jira/browse/YARN-5183
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Fix For: 3.0.0-alpha2
>
> Attachments: Screen Shot 2016-05-31 at 22.41.35.png, 
> YARN-5183-YARN-3368.02.patch, YARN-5183-YARN-3368.1.patch, 
> YARN-5183-YARN-3368.3.patch, YARN-5183.01.patch
>
>
> Responsive navbar currently does not work, even though the navbar icon is 
> shown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5334) [YARN-3368] Introduce REFRESH button in various UI pages

2016-11-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5334:
-
Fix Version/s: (was: YARN-3368)
   3.0.0-alpha2

> [YARN-3368] Introduce REFRESH button in various UI pages
> 
>
> Key: YARN-5334
> URL: https://issues.apache.org/jira/browse/YARN-5334
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp
>Reporter: Sunil G
>Assignee: Sreenath Somarajapuram
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5334-YARN-3368-0001.patch, 
> YARN-5334-YARN-3368-0002.patch
>
>
> It will be better if we have a common Refresh button in all pages to get the 
> latest information in all tables such as apps/nodes/queue etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5019) [YARN-3368] Change urls in new YARN ui from camel casing to hyphens

2016-11-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5019:
-
Fix Version/s: (was: YARN-3368)
   3.0.0-alpha2

> [YARN-3368] Change urls in new YARN ui from camel casing to hyphens
> ---
>
> Key: YARN-5019
> URL: https://issues.apache.org/jira/browse/YARN-5019
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Varun Vasudev
>Assignee: Sunil G
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5019-YARN-3368.1.patch
>
>
> There are a couple of reasons we should recommend avoiding camel casing in 
> urls -
> 1. Some web servers are case insensitive
> 2. Google suggests using hyphens - 
> https://support.google.com/webmasters/answer/76329



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5000) [YARN-3368] App attempt page is not loading when timeline server is not started

2016-11-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5000:
-
Fix Version/s: (was: YARN-3368)
   3.0.0-alpha2

> [YARN-3368] App attempt page is not loading when timeline server is not 
> started
> ---
>
> Key: YARN-5000
> URL: https://issues.apache.org/jira/browse/YARN-5000
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Assignee: Sunil G
> Fix For: 3.0.0-alpha2
>
> Attachments: 0001-YARN-5000.patch, 
> AppFinishedAndNoTimelineServer.png, AppRunningAndNoTimelineServer.png, 
> AppRunningAndNoTimelineServer_v2.png, YARN-5000-YARN-3368.1.patch, 
> YARN-5000-YARN-3368.2.patch, YARN-5000-YARN-3368.3.patch, 
> YARN-5000-YARN-3368.4.patch, YARN-5000-YARN-3368.5.patch, screenshot-1.png
>
>
> If the timeline server is not started, the app attempt page does not load.
> In the new web UI, the yarnContainer route is tightly coupled with both the RM 
> and the Timeline server, and if either server is not up, the page will not 
> load. If the timeline server is not up, container information from the RM 
> should be displayed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5038) [YARN-3368] Application and Container pages shows wrong values when RM is stopped

2016-11-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5038:
-
Fix Version/s: (was: YARN-3368)
   3.0.0-alpha2

> [YARN-3368] Application and Container pages shows wrong values when RM is 
> stopped
> -
>
> Key: YARN-5038
> URL: https://issues.apache.org/jira/browse/YARN-5038
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Assignee: Sunil G
> Fix For: 3.0.0-alpha2
>
>
> A few minor issues to fix.
> - In the Applications page, "Running Container" is shown as -1 when the app is 
> finished.
> - In the container page, "Finished Time" shows 1970 as the date by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4515) [YARN-3368] Support hosting web UI framework inside YARN RM

2016-11-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4515:
-
Fix Version/s: (was: YARN-3368)
   3.0.0-alpha2

> [YARN-3368] Support hosting web UI framework inside YARN RM
> ---
>
> Key: YARN-4515
> URL: https://issues.apache.org/jira/browse/YARN-4515
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Sunil G
> Fix For: 3.0.0-alpha2
>
> Attachments: 0001-YARN-4515.patch, YARN-4515-YARN-3368.1.patch, 
> YARN-4515-YARN-3368.2.patch, YARN-4515-YARN-3368.3.patch, 
> YARN-4515-YARN-3368.4.patch, YARN-4515-YARN-3368.5.patch, 
> YARN-4515-YARN-3368.6.patch, YARN-4515-YARN-3368.7.patch, 
> preliminary-YARN-4515-host_rm_web_ui_v2.patch
>
>
> Currently it can be only launched outside of YARN, we should make it runnable 
> inside YARN for easier testing and we should have a configuration to 
> enable/disable it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4517) [YARN-3368] Add nodes page

2016-11-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4517:
-
Fix Version/s: (was: YARN-3368)
   3.0.0-alpha2

> [YARN-3368] Add nodes page
> --
>
> Key: YARN-4517
> URL: https://issues.apache.org/jira/browse/YARN-4517
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Wangda Tan
>Assignee: Varun Saxena
>  Labels: webui
> Fix For: 3.0.0-alpha2
>
> Attachments: (21-Feb-2016)yarn-ui-screenshots.zip, 
> Screenshot_after_4709.png, Screenshot_after_4709_1.png, 
> YARN-4517-YARN-3368.01.patch, YARN-4517-YARN-3368.02.patch
>
>
> We need nodes page added to next generation web UI, similar to existing 
> RM/nodes page.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4849) [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix licenses.

2016-11-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4849:
-
Fix Version/s: (was: YARN-3368)
   3.0.0-alpha2

> [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix 
> licenses.
> ---
>
> Key: YARN-4849
> URL: https://issues.apache.org/jira/browse/YARN-4849
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-4849-YARN-3368.1.patch, 
> YARN-4849-YARN-3368.2.patch, YARN-4849-YARN-3368.3.patch, 
> YARN-4849-YARN-3368.4.patch, YARN-4849-YARN-3368.5.patch, 
> YARN-4849-YARN-3368.6.patch, YARN-4849-YARN-3368.7.patch, 
> YARN-4849-YARN-3368.8.patch, YARN-4849-YARN-3368.addendum.1.patch, 
> YARN-4849-YARN-3368.addendum.2.patch, YARN-4849-YARN-3368.addendum.3.patch, 
> YARN-4849-YARN-3368.addendum.4.patch, YARN-4849-YARN-3368.addendum.5.patch, 
> YARN-4849-YARN-3368.doc-fix-08172016.1.patch, 
> YARN-4849-YARN-3368.doc-fix-08232016.1.patch, 
> YARN-4849-YARN-3368.javadoc-fix-09082016.1.patch, 
> YARN-4849-YARN-3368.javadoc-fix-09082016.2.patch, 
> YARN-4849-YARN-3368.javadoc-fix-09082016.3.patch, 
> YARN-4849-YARN-3368.license-fix-08172016.1.patch, 
> YARN-4849-YARN-3368.license-fix-08232016.1.patch, 
> YARN-4849-YARN-3368.rat-fix-08302016.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4733) [YARN-3368] Initial commit of new YARN web UI

2016-11-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4733:
-
Fix Version/s: (was: YARN-3368)
   3.0.0-alpha2

> [YARN-3368] Initial commit of new YARN web UI
> -
>
> Key: YARN-4733
> URL: https://issues.apache.org/jira/browse/YARN-4733
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: 3.0.0-alpha2
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4514) [YARN-3368] Cleanup hardcoded configurations, such as RM/ATS addresses

2016-11-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4514:
-
Fix Version/s: (was: YARN-3368)
   3.0.0-alpha2

> [YARN-3368] Cleanup hardcoded configurations, such as RM/ATS addresses
> --
>
> Key: YARN-4514
> URL: https://issues.apache.org/jira/browse/YARN-4514
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Sunil G
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-4514-YARN-3368.1.patch, 
> YARN-4514-YARN-3368.2.patch, YARN-4514-YARN-3368.3.patch, 
> YARN-4514-YARN-3368.4.patch, YARN-4514-YARN-3368.5.patch, 
> YARN-4514-YARN-3368.6.patch, YARN-4514-YARN-3368.7.patch, 
> YARN-4514-YARN-3368.8.patch
>
>
> We have several configurations that are hard-coded, for example RM/ATS 
> addresses; we should make them configurable. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4355) NPE while processing localizer heartbeat

2016-11-06 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642368#comment-15642368
 ] 

Varun Saxena commented on YARN-4355:


bq. Also, can't we pass the same tracker instance to getPathForLocalization 
from LocalizerRunner.processHeartbeat?
That is what is being done. Previously the tracker was fetched inside 
getPathForLocalization. Do you mean something else?
{code}
1119      LocalResourcesTracker tracker = getLocalResourcesTracker(
1120          next.getVisibility(), user, applicationId);
1121      if (tracker != null) {
1122        ResourceLocalizationSpec resource =
1123            NodeManagerBuilderUtils.newResourceLocalizationSpec(next,
1124                getPathForLocalization(next, tracker));
1125        rsrcs.add(resource);
1126      }
{code}

bq. If cleanup has happened, are there chances of having pending 
LocalizerResourceRequestEvent in the LocalizerRunner?
It can happen, even if rarely. This section of code isn't really synchronized, 
so event processing for cleaning up container resources and destroying 
application resources can happen before the localizer heartbeat is fully 
processed. The localizer is only told to DIE if it cannot be found in the list 
of localizers, which is removed when the container is cleaned up. But it is 
possible that heartbeat processing carries on because it finds the localizer, 
yet later does not find the tracker, since application resources are destroyed 
before the heartbeat is fully processed and the corresponding sections are not 
guarded by a lock. Evidence of this is the NPE reported in this JIRA, which came 
from a real cluster: it occurred while the NM was shutting down, so all the apps 
on the NM were being cleaned up as well. So yes, there can be pending events, 
and in any case having a null check is not a bad thing to do.
Why did I not synchronize this section of code as the solution? Because the 
possibility of this race happening is very rare.
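To make the race window concrete, here is a minimal, self-contained sketch of 
the pattern; the class and member names are hypothetical and only model the 
heartbeat/cleanup flow at a high level, not the actual NM classes:
{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical model: a heartbeat-handling thread looks up the app's resource
// tracker while a cleanup thread may remove it concurrently.
public class TrackerRaceSketch {
  private final Map<String, Object> trackers = new ConcurrentHashMap<>();

  // Heartbeat path: the tracker can be removed by the cleanup thread at any
  // time, so a null check on the lookup result avoids the NPE.
  Object pathForLocalization(String appId) {
    Object tracker = trackers.get(appId);
    if (tracker == null) {
      return null; // app resources already destroyed; skip this resource
    }
    return tracker;
  }

  // Cleanup path (e.g. app finished or NM shutting down): removes the tracker.
  void destroyApplicationResources(String appId) {
    trackers.remove(appId);
  }
}
{code}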

> NPE while processing localizer heartbeat
> 
>
> Key: YARN-4355
> URL: https://issues.apache.org/jira/browse/YARN-4355
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Varun Saxena
> Attachments: YARN-4355.01.patch, YARN-4355.02.patch, 
> YARN-4355.03.patch, YARN-4355.04.patch
>
>
> While analyzing YARN-4354 I noticed a nodemanager was getting NPEs while 
> processing a private localizer heartbeat.  I think there's a race where we 
> can cleanup resources for an application and therefore remove the app local 
> resource tracker just as we are trying to handle the localizer heartbeat.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4330) MiniYARNCluster is showing multiple Failed to instantiate default resource calculator warning messages.

2016-11-06 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642169#comment-15642169
 ] 

Varun Saxena commented on YARN-4330:


Thanks [~Naganarasimha] for the review.

bq. do we need this additional check monitoringInterval > 0;
This was added because I am considering node monitoring as disabled if the 
interval is <= 0. I did not want to introduce a new config for a test-only 
change, so I just carried the same principle forward to container monitoring, 
because a monitoring interval of <= 0 does not make any sense. I do not have a 
strong opinion on this though.
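For reference, a minimal sketch of the combined guard being discussed, assuming 
an {{org.apache.hadoop.conf.Configuration}} instance; the interval key and the 
defaults shown are for illustration only, not the actual patch:
{code}
// Hedged sketch: treat container monitoring as enabled only when the boolean
// switch is on and the polling interval is positive.
static boolean isContainersMonitorEnabled(Configuration conf) {
  long monitoringInterval = conf.getLong(
      "yarn.nodemanager.container-monitor.interval-ms", 3000L);
  return conf.getBoolean("yarn.nodemanager.container-monitor.enabled", true)
      && monitoringInterval > 0;
}
{code}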

bq. NodeManagerHardwareUtils.java, ln no 40: name "isEnabled" can be modified 
to isHardwareDetectionEnabled?
Ok, can do so.

> MiniYARNCluster is showing multiple  Failed to instantiate default resource 
> calculator warning messages.
> 
>
> Key: YARN-4330
> URL: https://issues.apache.org/jira/browse/YARN-4330
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test, yarn
>Affects Versions: 2.8.0
> Environment: OSX, JUnit
>Reporter: Steve Loughran
>Assignee: Varun Saxena
>Priority: Blocker
>  Labels: oct16-hard
> Attachments: YARN-4330.002.patch, YARN-4330.01.patch
>
>
> Whenever I try to start a MiniYARNCluster on Branch-2 (commit #0b61cca), I 
> see multiple stack traces warning me that a resource calculator plugin could 
> not be created
> {code}
> (ResourceCalculatorPlugin.java:getResourceCalculatorPlugin(184)) - 
> java.lang.UnsupportedOperationException: Could not determine OS: Failed to 
> instantiate default resource calculator.
> java.lang.UnsupportedOperationException: Could not determine OS
> {code}
> This is a minicluster. It doesn't need resource calculation. It certainly 
> doesn't need test logs being cluttered with even more stack traces which will 
> only generate false alarms about tests failing. 
> There needs to be a way to turn this off, and the minicluster should have it 
> that way by default.
> Being ruthless and marking as a blocker, because its a fairly major 
> regression for anyone testing with the minicluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5599) Publish AM launch command to ATS

2016-11-06 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642161#comment-15642161
 ] 

Varun Saxena commented on YARN-5599:


Updated the patch used for YARN-5355-branch-2

> Publish AM launch command to ATS
> 
>
> Key: YARN-5599
> URL: https://issues.apache.org/jira/browse/YARN-5599
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Rohith Sharma K S
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: 0001-YARN-5599.patch, 0002-YARN-5599.patch, 
> 0003-YARN-5599.patch, YARN-5599-YARN-5355-branch-2.01.patch, 
> YARN-5599-branch-2.patch, YARN-5599-branch-2.patch
>
>
> To aid in debugging launch failures, it would be valuable to have an 
> application's launch script and logs posted to ATS.  Because the 
> application's command line may contain private credentials or other secure 
> information, access to the data in ATS should be restricted to the job owner, 
> including the at-rest data.
> Along with making the data available through ATS, the configuration parameter 
> introduced in YARN-5549 and the log line that it guards should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5599) Publish AM launch command to ATS

2016-11-06 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5599:
---
Attachment: YARN-5599-YARN-5355-branch-2.01.patch

> Publish AM launch command to ATS
> 
>
> Key: YARN-5599
> URL: https://issues.apache.org/jira/browse/YARN-5599
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Rohith Sharma K S
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: 0001-YARN-5599.patch, 0002-YARN-5599.patch, 
> 0003-YARN-5599.patch, YARN-5599-YARN-5355-branch-2.01.patch, 
> YARN-5599-branch-2.patch, YARN-5599-branch-2.patch
>
>
> To aid in debugging launch failures, it would be valuable to have an 
> application's launch script and logs posted to ATS.  Because the 
> application's command line may contain private credentials or other secure 
> information, access to the data in ATS should be restricted to the job owner, 
> including the at-rest data.
> Along with making the data available through ATS, the configuration parameter 
> introduced in YARN-5549 and the log line that it guards should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5599) Publish AM launch command to ATS

2016-11-06 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5599:
---
Attachment: (was: YARN-5599-YARN-5355-branch-2.01.patch)

> Publish AM launch command to ATS
> 
>
> Key: YARN-5599
> URL: https://issues.apache.org/jira/browse/YARN-5599
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Rohith Sharma K S
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: 0001-YARN-5599.patch, 0002-YARN-5599.patch, 
> 0003-YARN-5599.patch, YARN-5599-branch-2.patch, YARN-5599-branch-2.patch
>
>
> To aid in debugging launch failures, it would be valuable to have an 
> application's launch script and logs posted to ATS.  Because the 
> application's command line may contain private credentials or other secure 
> information, access to the data in ATS should be restricted to the job owner, 
> including the at-rest data.
> Along with making the data available through ATS, the configuration parameter 
> introduced in YARN-5549 and the log line that it guards should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3359) Recover collector list when RM fails over

2016-11-06 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642146#comment-15642146
 ] 

Varun Saxena commented on YARN-3359:


Thanks [~gtCarrera9] for your contribution, and thanks [~sjlee0] and 
[~vrushalic] for the reviews.

> Recover collector list when RM fails over
> -
>
> Key: YARN-3359
> URL: https://issues.apache.org/jira/browse/YARN-3359
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Junping Du
>Assignee: Li Lu
>  Labels: YARN-5355, oct16-medium
> Fix For: YARN-5355
>
> Attachments: YARN-3359-YARN-5355.001.patch, 
> YARN-3359-YARN-5355.002.patch, YARN-3359-YARN-5355.003.patch, 
> YARN-3359-YARN-5355.004.patch, YARN-3359-YARN-5638.patch
>
>
> Per discussion in YARN-3039, split the recover work from RMStateStore in a 
> separated JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4330) MiniYARNCluster is showing multiple Failed to instantiate default resource calculator warning messages.

2016-11-06 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642084#comment-15642084
 ] 

Naganarasimha G R commented on YARN-4330:
-

Hi [~varun_saxena], 
The tests that failed on Jenkins pass for me locally, and overall the patch 
looks fine except for the following nits:
# ContainersMonitorImpl.java, ln no 186: do we need this additional check 
{{monitoringInterval > 0;}}? There is already a boolean configuration 
(*"yarn.nodemanager.container-monitor.enabled"*) for it, so I feel these checks 
are not required.
# NodeManagerHardwareUtils.java, ln no 40: name *"isEnabled"* can be modified 
to {{isHardwareDetectionEnabled}}?

The other changes seem to be reorganization, so I am OK with them.


> MiniYARNCluster is showing multiple  Failed to instantiate default resource 
> calculator warning messages.
> 
>
> Key: YARN-4330
> URL: https://issues.apache.org/jira/browse/YARN-4330
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test, yarn
>Affects Versions: 2.8.0
> Environment: OSX, JUnit
>Reporter: Steve Loughran
>Assignee: Varun Saxena
>Priority: Blocker
>  Labels: oct16-hard
> Attachments: YARN-4330.002.patch, YARN-4330.01.patch
>
>
> Whenever I try to start a MiniYARNCluster on Branch-2 (commit #0b61cca), I 
> see multiple stack traces warning me that a resource calculator plugin could 
> not be created
> {code}
> (ResourceCalculatorPlugin.java:getResourceCalculatorPlugin(184)) - 
> java.lang.UnsupportedOperationException: Could not determine OS: Failed to 
> instantiate default resource calculator.
> java.lang.UnsupportedOperationException: Could not determine OS
> {code}
> This is a minicluster. It doesn't need resource calculation. It certainly 
> doesn't need test logs being cluttered with even more stack traces which will 
> only generate false alarms about tests failing. 
> There needs to be a way to turn this off, and the minicluster should have it 
> that way by default.
> Being ruthless and marking as a blocker, because its a fairly major 
> regression for anyone testing with the minicluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5545) App submit failure on queue with label when default queue partition capacity is zero

2016-11-06 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15642028#comment-15642028
 ] 

Naganarasimha G R commented on YARN-5545:
-

[~bibinchundatt], 
bq. IIUC the finished application never gets to the scheduler; the transition 
from NEW to FINISHED will be complete. But pending cases might cause a 
problem. ...  If we add the check in the current location, we have the 
additional benefit of not creating apps and attempts when not necessary.
I could not follow you completely, but additionally adding a {{!isRecovery}} 
check as you mentioned should be sufficient. I had not seen this argument 
earlier and only wanted to point out that a finished app also goes through this 
call and then moves to the *FINISHED* state, so the recovery flow would fail.
[~sunilg], any other comments on the latest patch? If the above-mentioned issue 
is fixed, would it be sufficient to proceed?


> App submit failure on queue with label when default queue partition capacity 
> is zero
> 
>
> Key: YARN-5545
> URL: https://issues.apache.org/jira/browse/YARN-5545
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Attachments: YARN-5545.0001.patch, YARN-5545.0002.patch, 
> YARN-5545.0003.patch, YARN-5545.0005.patch, YARN-5545.004.patch, 
> capacity-scheduler.xml
>
>
> Configure capacity scheduler 
> yarn.scheduler.capacity.root.default.capacity=0
> yarn.scheduler.capacity.root.queue1.accessible-node-labels.labelx.capacity=50
> yarn.scheduler.capacity.root.default.accessible-node-labels.labelx.capacity=50
> Submit application as below
> ./yarn jar 
> ../share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha2-SNAPSHOT-tests.jar
>  sleep -Dmapreduce.job.node-label-expression=labelx 
> -Dmapreduce.job.queuename=default -m 1 -r 1 -mt 1000 -rt 1
> {noformat}
> 2016-08-21 18:21:31,375 INFO mapreduce.JobSubmitter: Cleaning up the staging 
> area /tmp/hadoop-yarn/staging/root/.staging/job_1471670113386_0001
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed 
> to submit application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:316)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:255)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1344)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1790)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1341)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1362)
>   at org.apache.hadoop.mapreduce.SleepJob.run(SleepJob.java:273)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.mapreduce.SleepJob.main(SleepJob.java:194)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
>   at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
>   at 
> org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:136)
>   at 
> org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:144)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit 
> application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at 
> 

[jira] [Commented] (YARN-5545) App submit failure on queue with label when default queue partition capacity is zero

2016-11-06 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15641999#comment-15641999
 ] 

Bibin A Chundatt commented on YARN-5545:


Thank you [~naganarasimha...@apache.org] for the comments. IIUC the finished 
application never gets to the scheduler; the transition from NEW to FINISHED 
will be complete. But pending cases might cause a problem.
Adding handling for isRecovery should be enough. If we add the check in the 
current location, we have the additional benefit of not creating apps and 
attempts when not necessary.
{code}
// Check system level max application limit is reached
if (!isRecovery && scheduler instanceof CapacityScheduler) {
  if (((CapacityScheduler) scheduler).isSystemAppsLimitReached()) {
String message =
"Cluster level application limit reached,rejecting application";
throw new YarnException(message);
  }
}
{code}

> App submit failure on queue with label when default queue partition capacity 
> is zero
> 
>
> Key: YARN-5545
> URL: https://issues.apache.org/jira/browse/YARN-5545
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Attachments: YARN-5545.0001.patch, YARN-5545.0002.patch, 
> YARN-5545.0003.patch, YARN-5545.0005.patch, YARN-5545.004.patch, 
> capacity-scheduler.xml
>
>
> Configure capacity scheduler 
> yarn.scheduler.capacity.root.default.capacity=0
> yarn.scheduler.capacity.root.queue1.accessible-node-labels.labelx.capacity=50
> yarn.scheduler.capacity.root.default.accessible-node-labels.labelx.capacity=50
> Submit application as below
> ./yarn jar 
> ../share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha2-SNAPSHOT-tests.jar
>  sleep -Dmapreduce.job.node-label-expression=labelx 
> -Dmapreduce.job.queuename=default -m 1 -r 1 -mt 1000 -rt 1
> {noformat}
> 2016-08-21 18:21:31,375 INFO mapreduce.JobSubmitter: Cleaning up the staging 
> area /tmp/hadoop-yarn/staging/root/.staging/job_1471670113386_0001
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed 
> to submit application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:316)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:255)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1344)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1790)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1341)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1362)
>   at org.apache.hadoop.mapreduce.SleepJob.run(SleepJob.java:273)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.mapreduce.SleepJob.main(SleepJob.java:194)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
>   at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
>   at 
> org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:136)
>   at 
> org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:144)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit 
> application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at 
> 

[jira] [Commented] (YARN-5840) Yarn queues not being tracked correctly by Yarn Timeline

2016-11-06 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15641945#comment-15641945
 ] 

Naganarasimha G R commented on YARN-5840:
-

Hi [~cheersyang] & [~ramtinb],
My guess is that this has already been solved in the trunk/2.7.3 code. Could 
you check against it once?



>  Yarn queues not being tracked correctly by Yarn Timeline
> -
>
> Key: YARN-5840
> URL: https://issues.apache.org/jira/browse/YARN-5840
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: ramtin
>Assignee: Weiwei Yang
>
> By creating Yarn sub-queues and mapping users/groups to these sub-queues, the 
> Yarn client seems to capture the correct queue for a job when it runs, but if 
> you go to the Yarn Timeline Server to see these jobs, they all get tagged to 
> the "default" queue.
> This makes it hard to easily map the cluster consumption by the different 
> departments to which the users belong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5545) App submit failure on queue with label when default queue partition capacity is zero

2016-11-06 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15641943#comment-15641943
 ] 

Naganarasimha G R commented on YARN-5545:
-

Thanks for the patch [~bibinchundatt],
{{createAndPopulateNewRMApp}} is used in the recovery flow and will be called 
for finished apps too, so I think this would not be the right location for the 
check. Other than that, the rest of the patch is fine!


> App submit failure on queue with label when default queue partition capacity 
> is zero
> 
>
> Key: YARN-5545
> URL: https://issues.apache.org/jira/browse/YARN-5545
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Attachments: YARN-5545.0001.patch, YARN-5545.0002.patch, 
> YARN-5545.0003.patch, YARN-5545.0005.patch, YARN-5545.004.patch, 
> capacity-scheduler.xml
>
>
> Configure capacity scheduler 
> yarn.scheduler.capacity.root.default.capacity=0
> yarn.scheduler.capacity.root.queue1.accessible-node-labels.labelx.capacity=50
> yarn.scheduler.capacity.root.default.accessible-node-labels.labelx.capacity=50
> Submit application as below
> ./yarn jar 
> ../share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha2-SNAPSHOT-tests.jar
>  sleep -Dmapreduce.job.node-label-expression=labelx 
> -Dmapreduce.job.queuename=default -m 1 -r 1 -mt 1000 -rt 1
> {noformat}
> 2016-08-21 18:21:31,375 INFO mapreduce.JobSubmitter: Cleaning up the staging 
> area /tmp/hadoop-yarn/staging/root/.staging/job_1471670113386_0001
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed 
> to submit application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:316)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:255)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1344)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1790)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1341)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1362)
>   at org.apache.hadoop.mapreduce.SleepJob.run(SleepJob.java:273)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.mapreduce.SleepJob.main(SleepJob.java:194)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
>   at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
>   at 
> org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:136)
>   at 
> org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:144)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit 
> application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at 
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:286)
>   at 
> org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:296)
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:301)
>   ... 25 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional 

[jira] [Commented] (YARN-4355) NPE while processing localizer heartbeat

2016-11-06 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15641889#comment-15641889
 ] 

Naganarasimha G R commented on YARN-4355:
-

Thanks [~varun_saxena] for the patch,
I have some doubts about how the race is avoided @ ResourceLocalizationService, 
ln no 1119.
#  If cleanup has happened, are there chances of having pending 
{{LocalizerResourceRequestEvent}} in the *LocalizerRunner*?
#  Also, can't we pass the same tracker instance to {{getPathForLocalization}} 
from LocalizerRunner.processHeartbeat?

A few trivial nits:
# TestResourceLocalizationService, ln no 1543: no need to create a new array 
list when we are using {{Arrays.asList(req1, req2)}} (see the sketch below).
# TestResourceLocalizationService, ln no 1581: maybe we can join this line with 
the line below, as it doesn't exceed 80 chars.
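For nit 1, a minimal illustration (assuming the test's existing {{req1}} and 
{{req2}} variables):
{code}
// Arrays.asList already returns a List (fixed-size), so wrapping it in a new
// ArrayList is unnecessary when the list is not modified afterwards.
List<LocalizerResourceRequestEvent> events = Arrays.asList(req1, req2);
{code}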

> NPE while processing localizer heartbeat
> 
>
> Key: YARN-4355
> URL: https://issues.apache.org/jira/browse/YARN-4355
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Varun Saxena
> Attachments: YARN-4355.01.patch, YARN-4355.02.patch, 
> YARN-4355.03.patch, YARN-4355.04.patch
>
>
> While analyzing YARN-4354 I noticed a nodemanager was getting NPEs while 
> processing a private localizer heartbeat.  I think there's a race where we 
> can cleanup resources for an application and therefore remove the app local 
> resource tracker just as we are trying to handle the localizer heartbeat.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5599) Publish AM launch command to ATS

2016-11-06 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15641865#comment-15641865
 ] 

Varun Saxena commented on YARN-5599:


Backported this fix to YARN-5355-branch-2

> Publish AM launch command to ATS
> 
>
> Key: YARN-5599
> URL: https://issues.apache.org/jira/browse/YARN-5599
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Rohith Sharma K S
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: 0001-YARN-5599.patch, 0002-YARN-5599.patch, 
> 0003-YARN-5599.patch, YARN-5599-YARN-5355-branch-2.01.patch, 
> YARN-5599-branch-2.patch, YARN-5599-branch-2.patch
>
>
> To aid in debugging launch failures, it would be valuable to have an 
> application's launch script and logs posted to ATS.  Because the 
> application's command line may contain private credentials or other secure 
> information, access to the data in ATS should be restricted to the job owner, 
> including the at-rest data.
> Along with making the data available through ATS, the configuration parameter 
> introduced in YARN-5549 and the log line that it guards should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5599) Publish AM launch command to ATS

2016-11-06 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5599:
---
Attachment: YARN-5599-YARN-5355-branch-2.01.patch

> Publish AM launch command to ATS
> 
>
> Key: YARN-5599
> URL: https://issues.apache.org/jira/browse/YARN-5599
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Rohith Sharma K S
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: 0001-YARN-5599.patch, 0002-YARN-5599.patch, 
> 0003-YARN-5599.patch, YARN-5599-YARN-5355-branch-2.01.patch, 
> YARN-5599-branch-2.patch, YARN-5599-branch-2.patch
>
>
> To aid in debugging launch failures, it would be valuable to have an 
> application's launch script and logs posted to ATS.  Because the 
> application's command line may contain private credentials or other secure 
> information, access to the data in ATS should be restricted to the job owner, 
> including the at-rest data.
> Along with making the data available through ATS, the configuration parameter 
> introduced in YARN-5549 and the log line that it guards should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5545) App submit failure on queue with label when default queue partition capacity is zero

2016-11-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15641546#comment-15641546
 ] 

Hadoop QA commented on YARN-5545:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 38m 
34s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5545 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837644/YARN-5545.0005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3eeeb13a7305 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d8bab3d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13798/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13798/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> App submit failure on queue with label when default queue partition capacity 
> is zero
> 
>
> Key: YARN-5545
> URL: https://issues.apache.org/jira/browse/YARN-5545
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> 

[jira] [Updated] (YARN-5561) [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and entities via REST

2016-11-06 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5561:
---
Fix Version/s: 3.0.0-alpha2

> [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and 
> entities via REST
> ---
>
> Key: YARN-5561
> URL: https://issues.apache.org/jira/browse/YARN-5561
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Fix For: 3.0.0-alpha2, YARN-5355
>
> Attachments: 0001-YARN-5561.YARN-5355.patch, YARN-5561.02.patch, 
> YARN-5561.03.patch, YARN-5561.04.patch, YARN-5561.05.patch, YARN-5561.patch, 
> YARN-5561.v0.patch
>
>
> The ATSv2 model lacks retrieval of {{list-of-all-apps}}, 
> {{list-of-all-app-attempts}} and {{list-of-all-containers-per-attempt}} via 
> REST APIs. It is also required to know about all the entities in an 
> application.
> These URLs are very much required for the Web UI.
> New REST URL would be 
> # GET {{/ws/v2/timeline/apps}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/appattempts}}.
> # GET 
> {{/ws/v2/timeline/apps/\{app-id\}/appattempts/\{attempt-id\}/containers}}
> # GET {{/ws/v2/timeline/apps/\{app id\}/entities}} should display list of 
> entities that can be queried.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


