[jira] [Updated] (YARN-5200) Improve yarn logs to get Container List

2016-06-20 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-5200:

Attachment: YARN-5200.5.patch

> Improve yarn logs to get Container List
> ---
>
> Key: YARN-5200
> URL: https://issues.apache.org/jira/browse/YARN-5200
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5200.1.patch, YARN-5200.2.patch, YARN-5200.3.patch, 
> YARN-5200.4.patch, YARN-5200.5.patch
>
>







[jira] [Commented] (YARN-5200) Improve yarn logs to get Container List

2016-06-20 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341150#comment-15341150
 ] 

Xuan Gong commented on YARN-5200:
-

Command:
{code}
yarn logs --applicationId application_1466487508854_0001 
-show_container_log_info
{code}

The output would be
{code}
Container: container_1466487508854_0001_01_03 on 
xuanmacbook-pro.home_59821_1466487562000
Log Upload Time:1466487561998
===
LogType:directory.info
LogLength:2280
LogType:launch_container.sh
LogLength:5360
LogType:stderr
LogLength:70
LogType:stdout
LogLength:0
LogType:syslog
LogLength:3354
LogType:syslog.shuffle
LogLength:3207


Container: container_1466487508854_0001_01_02 on 
xuanmacbook-pro.home_59821_1466487562000
Log Upload Time:1466487561998
===
LogType:directory.info
LogLength:2280
LogType:launch_container.sh
LogLength:5169
LogType:stderr
LogLength:70
LogType:stdout
LogLength:0
LogType:syslog
LogLength:3987


Container: container_1466487508854_0001_01_01 on 
xuanmacbook-pro.home_59821_1466487562000
Log Upload Time:1466487561998
===
LogType:directory.info
LogLength:2684
LogType:launch_container.sh
LogLength:5344
LogType:stderr
LogLength:1780
LogType:stdout
LogLength:0
LogType:syslog
LogLength:34492
{code}

> Improve yarn logs to get Container List
> ---
>
> Key: YARN-5200
> URL: https://issues.apache.org/jira/browse/YARN-5200
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5200.1.patch, YARN-5200.2.patch, YARN-5200.3.patch, 
> YARN-5200.4.patch
>
>







[jira] [Commented] (YARN-5200) Improve yarn logs to get Container List

2016-06-20 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341148#comment-15341148
 ] 

Xuan Gong commented on YARN-5200:
-

{code}
yarn logs --applicationId application_1466487508854_0001 
-show_application_log_info
{code}

the output for this command line will be
{code}
Application State: Completed.
container_1466487508854_0001_01_03 on 
xuanmacbook-pro.home_59821_1466487562000
container_1466487508854_0001_01_02 on 
xuanmacbook-pro.home_59821_1466487562000
container_1466487508854_0001_01_01 on 
xuanmacbook-pro.home_59821_1466487562000
{code}

> Improve yarn logs to get Container List
> ---
>
> Key: YARN-5200
> URL: https://issues.apache.org/jira/browse/YARN-5200
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5200.1.patch, YARN-5200.2.patch, YARN-5200.3.patch, 
> YARN-5200.4.patch
>
>







[jira] [Commented] (YARN-5197) RM leaks containers if running container disappears from node update

2016-06-20 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341066#comment-15341066
 ] 

Rohith Sharma K S commented on YARN-5197:
-

Thanks [~jlowe] for providing the branch patches. The Hadoop QA failures are 
unrelated to the patch. I will go ahead and commit the patch to the branches.

> RM leaks containers if running container disappears from node update
> 
>
> Key: YARN-5197
> URL: https://issues.apache.org/jira/browse/YARN-5197
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2, 2.6.4
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Attachments: YARN-5197-branch-2.7.003.patch, 
> YARN-5197-branch-2.8.003.patch, YARN-5197.001.patch, YARN-5197.002.patch, 
> YARN-5197.003.patch
>
>
> Once a node reports a container running in a status update, the corresponding 
> RMNodeImpl will track the container in its launchedContainers map.  If the 
> node somehow misses sending the completed container status to the RM and the 
> container simply disappears from subsequent heartbeats, the container will 
> leak in launchedContainers forever and the container completion event will 
> not be sent to the scheduler.
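For illustration only, here is a minimal sketch of the bookkeeping the description 
implies, using simplified stand-in types (the class and method names below are 
hypothetical, not the real RMNodeImpl/RMContainer APIs): containers that were 
tracked as launched but vanish from a status update are returned so completion 
events can still be sent to the scheduler.

{code}
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Simplified stand-in for the per-node bookkeeping described above.
public class NodeContainerTracker {
  // Containers the RM believes are still running on this node.
  private final Set<String> launchedContainers = new HashSet<String>();

  // Process one node status update: containers reported as RUNNING are
  // (re)tracked; tracked containers that vanished from the report are
  // returned so the caller can synthesize completion events instead of
  // leaking them forever.
  public Set<String> onStatusUpdate(Map<String, String> reportedStates) {
    Set<String> vanished = new HashSet<String>(launchedContainers);
    for (Map.Entry<String, String> e : reportedStates.entrySet()) {
      vanished.remove(e.getKey());
      if ("RUNNING".equals(e.getValue())) {
        launchedContainers.add(e.getKey());
      } else {
        launchedContainers.remove(e.getKey()); // completed and reported normally
      }
    }
    launchedContainers.removeAll(vanished);    // stop tracking the missing ones
    return vanished;                           // caller emits completion events
  }
}
{code}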






[jira] [Commented] (YARN-5171) Extend DistributedSchedulerProtocol to notify RM of containers allocated by the Node

2016-06-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341043#comment-15341043
 ] 

Hadoop QA commented on YARN-5171:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 39s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 48s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 13s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
5s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 59s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 35s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 15 
new + 193 unchanged - 20 fixed = 208 total (was 213) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch 6 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 10s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 12s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 36m 48s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 22s {color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m 24s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.api.impl.TestYarnClient |
|   | hadoop.yarn.client.cli.TestLogsCLI |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812078/YARN-5171.008.patch |
| JIRA Issue | YARN-5171 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 7b71d91d0819 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 

[jira] [Updated] (YARN-5262) Optimize sending RMNodeFinishedContainersPulledByAMEvent for every AM heartbeat

2016-06-20 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-5262:

Target Version/s: 2.9.0

> Optimize sending RMNodeFinishedContainersPulledByAMEvent for every AM 
> heartbeat
> ---
>
> Key: YARN-5262
> URL: https://issues.apache.org/jira/browse/YARN-5262
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5262.patch, 0002-YARN-5262.patch
>
>
> It is observed that RM triggers an one event for every 
> ApplicationMaster#allocate request in the following trace. This is not 
> necessarily required and it can be optimized such that send only if any 
> containers are there to acknowledge to NodeManager. 
> {code}
>   RMAppAttemptImpl.sendFinishedContainersToNM() line: 1871
>   RMAppAttemptImpl.pullJustFinishedContainers() line: 805 
>   ApplicationMasterService.allocate(AllocateRequest) line: 567
> {code}
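For illustration, a minimal sketch of the optimization being proposed, with 
hypothetical names standing in for the real RM event classes: the per-node 
acknowledgement event is only dispatched when there actually are finished 
containers to report back to the NodeManager.

{code}
import java.util.List;
import java.util.Map;

// Illustrative guard only; names do not match the real RM classes.
class FinishedContainerAcker {
  interface EventDispatcher {
    void sendFinishedContainersPulledByAM(String nodeId, List<String> containerIds);
  }

  private final EventDispatcher dispatcher;

  FinishedContainerAcker(EventDispatcher dispatcher) {
    this.dispatcher = dispatcher;
  }

  // Called on every AM allocate heartbeat.
  void ackFinishedContainers(Map<String, List<String>> finishedPerNode) {
    for (Map.Entry<String, List<String>> e : finishedPerNode.entrySet()) {
      List<String> containers = e.getValue();
      // Skip the event entirely when there is nothing to acknowledge,
      // instead of firing one event per heartbeat unconditionally.
      if (containers == null || containers.isEmpty()) {
        continue;
      }
      dispatcher.sendFinishedContainersPulledByAM(e.getKey(), containers);
    }
  }
}
{code}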






[jira] [Commented] (YARN-5242) Update DominantResourceCalculator to consider all resource types in calculations

2016-06-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340958#comment-15340958
 ] 

Hadoop QA commented on YARN-5242:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 26s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
39s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 13s 
{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
39s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s 
{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 4s 
{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s 
{color} | {color:green} YARN-3926 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 15s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 33s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812073/YARN-5242-YARN-3926.002.patch
 |
| JIRA Issue | YARN-5242 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3878fd5110d9 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3926 / 7a0d5db |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12085/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12085/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |



[jira] [Commented] (YARN-5171) Extend DistributedSchedulerProtocol to notify RM of containers allocated by the Node

2016-06-20 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340950#comment-15340950
 ] 

Inigo Goiri commented on YARN-5171:
---

[~subru], agreed on the naming; "external" can be confused with YARN-5215.
We need to agree on a name, as this patch has a couple of variables that 
reference it.
What about "distributed"? We also need to tweak {{ResourceUsage}} to do proper 
accounting.

> Extend DistributedSchedulerProtocol to notify RM of containers allocated by 
> the Node
> 
>
> Key: YARN-5171
> URL: https://issues.apache.org/jira/browse/YARN-5171
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Inigo Goiri
> Attachments: YARN-5171.000.patch, YARN-5171.001.patch, 
> YARN-5171.002.patch, YARN-5171.003.patch, YARN-5171.004.patch, 
> YARN-5171.005.patch, YARN-5171.006.patch, YARN-5171.007.patch, 
> YARN-5171.008.patch
>
>
> Currently, the RM does not know about Containers allocated by the 
> OpportunisticContainerAllocator on the NM. This JIRA proposes to extend the 
> Distributed Scheduler request interceptor and the protocol to notify the RM 
> of new containers as and when they are allocated at the NM. The 
> {{RMContainer}} should also be extended to expose the {{ExecutionType}} of 
> the container.






[jira] [Updated] (YARN-5171) Extend DistributedSchedulerProtocol to notify RM of containers allocated by the Node

2016-06-20 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated YARN-5171:
--
Attachment: YARN-5171.008.patch

Adding one simple unit test and tweaking the names again.

> Extend DistributedSchedulerProtocol to notify RM of containers allocated by 
> the Node
> 
>
> Key: YARN-5171
> URL: https://issues.apache.org/jira/browse/YARN-5171
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Inigo Goiri
> Attachments: YARN-5171.000.patch, YARN-5171.001.patch, 
> YARN-5171.002.patch, YARN-5171.003.patch, YARN-5171.004.patch, 
> YARN-5171.005.patch, YARN-5171.006.patch, YARN-5171.007.patch, 
> YARN-5171.008.patch
>
>
> Currently, the RM does not know about Containers allocated by the 
> OpportunisticContainerAllocator on the NM. This JIRA proposes to extend the 
> Distributed Scheduler request interceptor and the protocol to notify the RM 
> of new containers as and when they are allocated at the NM. The 
> {{RMContainer}} should also be extended to expose the {{ExecutionType}} of 
> the container.






[jira] [Updated] (YARN-5224) Logs for a completed container are not available in the yarn logs output for a live application

2016-06-20 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-5224:
--
Labels:   (was: incompatible)

> Logs for a completed container are not available in the yarn logs output for 
> a live application
> ---
>
> Key: YARN-5224
> URL: https://issues.apache.org/jira/browse/YARN-5224
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Siddharth Seth
>Assignee: Xuan Gong
> Attachments: YARN-5224.1.patch, YARN-5224.2.patch, YARN-5224.3.patch, 
> YARN-5224.4.patch, YARN-5224.5.patch
>
>
> This affects 'short' jobs like MapReduce and Tez more than long running apps.
> Related: YARN-5193 (but that only covers long running apps)






[jira] [Updated] (YARN-5242) Update DominantResourceCalculator to consider all resource types in calculations

2016-06-20 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated YARN-5242:

Attachment: YARN-5242-YARN-3926.002.patch

Uploaded a patch to fix DominantResourceCalculator as well as the failing unit 
tests.

> Update DominantResourceCalculator to consider all resource types in 
> calculations
> 
>
> Key: YARN-5242
> URL: https://issues.apache.org/jira/browse/YARN-5242
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-5242-YARN-3926.001.patch, 
> YARN-5242-YARN-3926.002.patch
>
>
> The fitsIn function in the DominantResourceCalculator only looks at memory 
> and cpu. It should be modified to use all available resource types.
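A minimal sketch of a generalized fitsIn, assuming a simple map-based resource 
representation instead of the real Resource class on the YARN-3926 branch: the 
check iterates over every resource type rather than hard-coding memory and 
vcores.

{code}
import java.util.Map;

// Illustrative only: a map of resource name -> amount stands in for Resource.
final class ResourceFit {
  private ResourceFit() { }

  // Returns true iff 'smaller' fits inside 'bigger' for every resource type.
  static boolean fitsIn(Map<String, Long> smaller, Map<String, Long> bigger) {
    for (Map.Entry<String, Long> e : smaller.entrySet()) {
      long available = bigger.getOrDefault(e.getKey(), 0L);
      if (e.getValue() > available) {
        return false;  // any single overflowing resource type is enough to fail
      }
    }
    return true;
  }
}
{code}

With this shape, a request whose gpu amount exceeds what is available fails the 
check even when memory and vcores fit.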






[jira] [Commented] (YARN-5171) Extend DistributedSchedulerProtocol to notify RM of containers allocated by the Node

2016-06-20 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340875#comment-15340875
 ] 

Subru Krishnan commented on YARN-5171:
--

I had an offline discussion with [~jianhe] and he agreed that option (2) is the 
best one.

I would suggest keeping the method names {{addRMContainer}} and 
{{removeRMContainer}}, as "external container" seems confusing given the 
discussions going on in YARN-5215. You could open another JIRA to clean up the 
code to use the new methods instead of directly updating {{liveContainers}}.
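For illustration, a sketch of the encapsulation being suggested, with 
hypothetical field and method names: callers go through addRMContainer and 
removeRMContainer instead of mutating the liveContainers map directly, so 
accounting stays in one place.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified stand-in for a scheduler application attempt.
class AppAttemptContainers {
  private final Map<String, Object> liveContainers =
      new ConcurrentHashMap<String, Object>();

  // Single entry point for tracking a container allocated to this attempt;
  // resource-usage accounting would be updated here as well.
  void addRMContainer(String containerId, Object rmContainer) {
    liveContainers.put(containerId, rmContainer);
  }

  // Single exit point, kept symmetric with addRMContainer().
  Object removeRMContainer(String containerId) {
    return liveContainers.remove(containerId);
  }
}
{code}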

> Extend DistributedSchedulerProtocol to notify RM of containers allocated by 
> the Node
> 
>
> Key: YARN-5171
> URL: https://issues.apache.org/jira/browse/YARN-5171
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Inigo Goiri
> Attachments: YARN-5171.000.patch, YARN-5171.001.patch, 
> YARN-5171.002.patch, YARN-5171.003.patch, YARN-5171.004.patch, 
> YARN-5171.005.patch, YARN-5171.006.patch, YARN-5171.007.patch
>
>
> Currently, the RM does not know about Containers allocated by the 
> OpportunisticContainerAllocator on the NM. This JIRA proposes to extend the 
> Distributed Scheduler request interceptor and the protocol to notify the RM 
> of new containers as and when they are allocated at the NM. The 
> {{RMContainer}} should also be extended to expose the {{ExecutionType}} of 
> the container.






[jira] [Commented] (YARN-5224) Logs for a completed container are not available in the yarn logs output for a live application

2016-06-20 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340847#comment-15340847
 ] 

Vinod Kumar Vavilapalli commented on YARN-5224:
---

bq. Marking this as incompatible since the patch includes RESTful API's 
endpoint change
[~ozawa], the patch is not deleting or renaming the API; it adds a new API and 
leaves the old one behind for deprecation. It's a compatible change.
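As a sketch of the compatibility pattern being described (not the actual patch; 
the paths and method names below are hypothetical): the new endpoint is added 
alongside the old one, and the old one simply delegates until it is removed 
after a deprecation period.

{code}
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

// Hypothetical resource illustrating the add-new-keep-old pattern.
@Path("/ws/v1/node")
public class ContainerLogsResource {

  // New endpoint.
  @GET
  @Path("/containers/{containerid}/logs")
  public String getContainerLogsInfo(@PathParam("containerid") String containerId) {
    return "log metadata for " + containerId;  // real impl returns structured info
  }

  // Old endpoint kept for compatibility; it delegates and is deprecated,
  // not removed, so existing clients keep working.
  @Deprecated
  @GET
  @Path("/containerlogs/{containerid}")
  public String getContainerLogsInfoLegacy(
      @PathParam("containerid") String containerId) {
    return getContainerLogsInfo(containerId);
  }
}
{code}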

> Logs for a completed container are not available in the yarn logs output for 
> a live application
> ---
>
> Key: YARN-5224
> URL: https://issues.apache.org/jira/browse/YARN-5224
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Siddharth Seth
>Assignee: Xuan Gong
>  Labels: incompatible
> Attachments: YARN-5224.1.patch, YARN-5224.2.patch, YARN-5224.3.patch, 
> YARN-5224.4.patch, YARN-5224.5.patch
>
>
> This affects 'short' jobs like MapReduce and Tez more than long running apps.
> Related: YARN-5193 (but that only covers long running apps)






[jira] [Commented] (YARN-5224) Logs for a completed container are not available in the yarn logs output for a live application

2016-06-20 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340841#comment-15340841
 ] 

Tsuyoshi Ozawa commented on YARN-5224:
--

Marking this as incompatible since the patch includes RESTful API's endpoint 
change

> Logs for a completed container are not available in the yarn logs output for 
> a live application
> ---
>
> Key: YARN-5224
> URL: https://issues.apache.org/jira/browse/YARN-5224
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Siddharth Seth
>Assignee: Xuan Gong
>  Labels: incompatible
> Attachments: YARN-5224.1.patch, YARN-5224.2.patch, YARN-5224.3.patch, 
> YARN-5224.4.patch, YARN-5224.5.patch
>
>
> This affects 'short' jobs like MapReduce and Tez more than long running apps.
> Related: YARN-5193 (but that only covers long running apps)






[jira] [Updated] (YARN-5224) Logs for a completed container are not available in the yarn logs output for a live application

2016-06-20 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated YARN-5224:
-
Labels: incompatible  (was: )

> Logs for a completed container are not available in the yarn logs output for 
> a live application
> ---
>
> Key: YARN-5224
> URL: https://issues.apache.org/jira/browse/YARN-5224
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Siddharth Seth
>Assignee: Xuan Gong
>  Labels: incompatible
> Attachments: YARN-5224.1.patch, YARN-5224.2.patch, YARN-5224.3.patch, 
> YARN-5224.4.patch, YARN-5224.5.patch
>
>
> This affects 'short' jobs like MapReduce and Tez more than long running apps.
> Related: YARN-5193 (but that only covers long running apps)






[jira] [Commented] (YARN-5224) Logs for a completed container are not available in the yarn logs output for a live application

2016-06-20 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340787#comment-15340787
 ] 

Varun Vasudev commented on YARN-5224:
-

The API rename makes sense.

> Logs for a completed container are not available in the yarn logs output for 
> a live application
> ---
>
> Key: YARN-5224
> URL: https://issues.apache.org/jira/browse/YARN-5224
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Siddharth Seth
>Assignee: Xuan Gong
> Attachments: YARN-5224.1.patch, YARN-5224.2.patch, YARN-5224.3.patch, 
> YARN-5224.4.patch, YARN-5224.5.patch
>
>
> This affects 'short' jobs like MapReduce and Tez more than long running apps.
> Related: YARN-5193 (but that only covers long running apps)






[jira] [Commented] (YARN-5200) Improve yarn logs to get Container List

2016-06-20 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340785#comment-15340785
 ] 

Varun Vasudev commented on YARN-5200:
-

[~xgong] - can you please take a look at the unit tests? In addition, 
# Instead of printing {code} "The state of this applicaiton: " + 
request.getAppId()
++ " is " + (request.isAppFinished() ? "Completed." : "Running."); 
{code} please print  "Application State: Completed/Running";
# Rename show_application_info to show_application_log_info and rename the 
functions similarly - my apologies for asking you to rename this again.
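A minimal sketch of the message change requested in item 1, with a plain boolean 
standing in for the real request object's finished flag:

{code}
// Illustrative only: the short state line requested above.
class AppStateMessage {
  static String applicationStateLine(boolean appFinished) {
    return "Application State: " + (appFinished ? "Completed." : "Running.");
  }
}
{code}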

> Improve yarn logs to get Container List
> ---
>
> Key: YARN-5200
> URL: https://issues.apache.org/jira/browse/YARN-5200
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5200.1.patch, YARN-5200.2.patch, YARN-5200.3.patch, 
> YARN-5200.4.patch
>
>







[jira] [Commented] (YARN-5197) RM leaks containers if running container disappears from node update

2016-06-20 Thread sandflee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340742#comment-15340742
 ] 

sandflee commented on YARN-5197:


Hi [~jlowe], is it possible for container info to disappear from a node update? 
The NM only removes containers once the AM acks the container-complete message. 
Correct me if I missed something, thanks!

> RM leaks containers if running container disappears from node update
> 
>
> Key: YARN-5197
> URL: https://issues.apache.org/jira/browse/YARN-5197
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2, 2.6.4
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Attachments: YARN-5197-branch-2.7.003.patch, 
> YARN-5197-branch-2.8.003.patch, YARN-5197.001.patch, YARN-5197.002.patch, 
> YARN-5197.003.patch
>
>
> Once a node reports a container running in a status update, the corresponding 
> RMNodeImpl will track the container in its launchedContainers map.  If the 
> node somehow misses sending the completed container status to the RM and the 
> container simply disappears from subsequent heartbeats, the container will 
> leak in launchedContainers forever and the container completion event will 
> not be sent to the scheduler.






[jira] [Created] (YARN-5275) Timeline application page cannot be loaded when no application submitted/running on the cluster after HADOOP-9613

2016-06-20 Thread Tsuyoshi Ozawa (JIRA)
Tsuyoshi Ozawa created YARN-5275:


 Summary: Timeline application page cannot be loaded when no 
application submitted/running on the cluster after HADOOP-9613
 Key: YARN-5275
 URL: https://issues.apache.org/jira/browse/YARN-5275
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0-alpha1
Reporter: Tsuyoshi Ozawa
Priority: Critical


After HADOOP-9613, the Timeline Web UI has a problem reported by [~leftnoteasy] 
and [~sunilg]:

{quote}
when no application submitted/running on the cluster, applications page cannot 
be loaded. 
{quote}

We should investigate the reason and fix it.






[jira] [Assigned] (YARN-5216) Expose configurable preemption policy for OPPORTUNISTIC containers running on the NM

2016-06-20 Thread Hitesh Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Sharma reassigned YARN-5216:
---

Assignee: Hitesh Sharma  (was: Arun Suresh)

> Expose configurable preemption policy for OPPORTUNISTIC containers running on 
> the NM
> 
>
> Key: YARN-5216
> URL: https://issues.apache.org/jira/browse/YARN-5216
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Hitesh Sharma
>
> Currently, the default action taken by the QueuingContainerManager, 
> introduced in YARN-2883, when a GUARANTEED Container is scheduled on an NM 
> with OPPORTUNISTIC containers using up resources, is to KILL the running 
> OPPORTUNISTIC containers.
> This JIRA proposes to expose a configurable hook to allow the NM to take a 
> different action.
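For illustration, one possible shape of such a configurable hook; the interface 
and class names below are hypothetical, and the real design is still open:

{code}
import java.util.Collections;
import java.util.List;

// Hypothetical hook: decide what to do with running OPPORTUNISTIC containers
// when a GUARANTEED container needs to start on the node.
interface OpportunisticPreemptionPolicy {
  List<String> selectVictims(List<String> runningOpportunistic, long neededMemoryMb);
}

// Default behaviour described above: kill opportunistic containers (simplified).
class KillOpportunisticContainersPolicy implements OpportunisticPreemptionPolicy {
  @Override
  public List<String> selectVictims(List<String> runningOpportunistic,
      long neededMemoryMb) {
    return runningOpportunistic;
  }
}

// An alternative policy could queue the guaranteed container instead.
class NoPreemptionPolicy implements OpportunisticPreemptionPolicy {
  @Override
  public List<String> selectVictims(List<String> runningOpportunistic,
      long neededMemoryMb) {
    return Collections.emptyList();
  }
}
{code}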






[jira] [Commented] (YARN-5197) RM leaks containers if running container disappears from node update

2016-06-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340540#comment-15340540
 ] 

Hadoop QA commented on YARN-5197:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
54s {color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} branch-2.7 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} branch-2.7 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 43s 
{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} branch-2.7 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 11s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in branch-2.7 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} branch-2.7 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} branch-2.7 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 20s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 3 new + 209 unchanged - 0 fixed = 212 total (was 209) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 2138 line(s) that end in whitespace. Use 
git apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 42s 
{color} | {color:red} The patch 76 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 38s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 49s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 120m 20s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | 

[jira] [Commented] (YARN-5267) RM REST API doc for app lists "Application Type" instead of "applicationType"

2016-06-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340520#comment-15340520
 ] 

Hadoop QA commented on YARN-5267:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 8m 37s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12811999/YARN-5267.001.patch |
| JIRA Issue | YARN-5267 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 226d15b3c321 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8c1f81d |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12084/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> RM REST API doc for app lists "Application Type" instead of "applicationType" 
> --
>
> Key: YARN-5267
> URL: https://issues.apache.org/jira/browse/YARN-5267
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api, documentation
>Affects Versions: 2.6.4
>Reporter: Grant Sohn
>Priority: Trivial
>  Labels: documentation
> Attachments: YARN-5267.001.patch
>
>
> From the docs:
> {noformat}
> Note that depending on security settings a user might not be able to see all 
> the fields.
> Item  Data Type   Description
> idstring  The application id
> user  string  The user who started the application
> name  string  The application name
> Application Type  string  The application type
> 
> {noformat}






[jira] [Updated] (YARN-5267) RM REST API doc for app lists "Application Type" instead of "applicationType"

2016-06-20 Thread Grant Sohn (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Sohn updated YARN-5267:
-
Attachment: YARN-5267.001.patch

Fixes wrong name and 2 minor spelling bugs.

> RM REST API doc for app lists "Application Type" instead of "applicationType" 
> --
>
> Key: YARN-5267
> URL: https://issues.apache.org/jira/browse/YARN-5267
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api, documentation
>Affects Versions: 2.6.4
>Reporter: Grant Sohn
>Priority: Trivial
>  Labels: documentation
> Attachments: YARN-5267.001.patch
>
>
> From the docs:
> {noformat}
> Note that depending on security settings a user might not be able to see all 
> the fields.
> Item  Data Type   Description
> idstring  The application id
> user  string  The user who started the application
> name  string  The application name
> Application Type  string  The application type
> 
> {noformat}






[jira] [Commented] (YARN-5200) Improve yarn logs to get Container List

2016-06-20 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340443#comment-15340443
 ] 

Vinod Kumar Vavilapalli commented on YARN-5200:
---

[~xgong], can you paste the final command lines and example outputs on a shell?

Also, you made your usual typo "applicaiton" :)

> Improve yarn logs to get Container List
> ---
>
> Key: YARN-5200
> URL: https://issues.apache.org/jira/browse/YARN-5200
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5200.1.patch, YARN-5200.2.patch, YARN-5200.3.patch, 
> YARN-5200.4.patch
>
>







[jira] [Commented] (YARN-5274) Use smartctl to determine health of disks

2016-06-20 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340415#comment-15340415
 ] 

Varun Vasudev commented on YARN-5274:
-

# Making it a sub-task makes sense; converted.
# I wouldn't combine this with YARN-1072 - a lot of folks don't use Ambari and 
have their own deployment and cluster-management solutions.
# I did not know about badblocks - I think, as with smartctl, as long as the 
code can handle the utility not being present, I'm fine with adding support for 
it (a rough sketch of that fallback is below).
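A minimal sketch of the graceful-fallback idea mentioned in item 3; the exact 
smartctl invocation a real health checker would use is an assumption here.

{code}
import java.io.IOException;
import java.util.concurrent.TimeUnit;

// Illustrative disk probe: if smartctl is missing or times out, the disk is
// treated as healthy so the existing disk checks remain the fallback.
final class SmartctlProbe {
  static boolean looksHealthy(String device) {
    try {
      // "-H" asks for the overall health assessment.
      Process p = new ProcessBuilder("smartctl", "-H", device).start();
      if (!p.waitFor(10, TimeUnit.SECONDS)) {
        p.destroyForcibly();
        return true;               // timed out: fall back to existing checks
      }
      return p.exitValue() == 0;   // 0 is assumed to mean the check passed
    } catch (IOException e) {
      return true;                 // smartctl not installed: don't fail the disk
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      return true;
    }
  }
}
{code}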


> Use smartctl to determine health of disks
> -
>
> Key: YARN-5274
> URL: https://issues.apache.org/jira/browse/YARN-5274
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>
> It would be nice to add support for smartctl(on machines where it is 
> available) to determine disk health for the YARN local and log dirs(if 
> smartctl is applicable). The current disk checking mechanism misses out on 
> issues like bad sectors, etc.






[jira] [Updated] (YARN-5274) Use smartctl to determine health of disks

2016-06-20 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated YARN-5274:

Issue Type: Sub-task  (was: Improvement)
Parent: YARN-5078

> Use smartctl to determine health of disks
> -
>
> Key: YARN-5274
> URL: https://issues.apache.org/jira/browse/YARN-5274
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>
> It would be nice to add support for smartctl(on machines where it is 
> available) to determine disk health for the YARN local and log dirs(if 
> smartctl is applicable). The current disk checking mechanism misses out on 
> issues like bad sectors, etc.






[jira] [Commented] (YARN-5274) Use smartctl to determine health of disks

2016-06-20 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340376#comment-15340376
 ] 

Ray Chiang commented on YARN-5274:
--

One question:

# [~vvasudev], do you think this should be organized as a subtask of YARN-5078?

Two comments:

# I was thinking about making this an example or option within YARN-1072.
# There's also a {{badblocks}} command available under Linux.  I'm not sure how 
common or easily available it is, but it could also be handy, depending on what 
data we gather/store.


> Use smartctl to determine health of disks
> -
>
> Key: YARN-5274
> URL: https://issues.apache.org/jira/browse/YARN-5274
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Varun Vasudev
>
> It would be nice to add support for smartctl(on machines where it is 
> available) to determine disk health for the YARN local and log dirs(if 
> smartctl is applicable). The current disk checking mechanism misses out on 
> issues like bad sectors, etc.






[jira] [Commented] (YARN-5270) Solve miscellaneous issues caused by YARN-4844

2016-06-20 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340336#comment-15340336
 ] 

Siddharth Seth commented on YARN-5270:
--

Not sure why the NotImplementedYet exceptions are required. Is this to handle 
cases where some projects may have implemented Resource?
Anyway, if the exception has to stay, the message should be clearer to avoid 
confusion: indicate that the method is implemented in the concrete 
implementation.

> Solve miscellaneous issues caused by YARN-4844
> --
>
> Key: YARN-5270
> URL: https://issues.apache.org/jira/browse/YARN-5270
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-5270-branch-2.001.patch, 
> YARN-5270-branch-2.8.001.patch
>
>
> Such as javac warnings reported by YARN-5077 and type converting issues in 
> Resources class.






[jira] [Updated] (YARN-5197) RM leaks containers if running container disappears from node update

2016-06-20 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-5197:
-
Attachment: YARN-5197-branch-2.7.003.patch
YARN-5197-branch-2.8.003.patch

Thanks for the review and commit, Rohith!  Here are patches for branch-2.8 and 
branch-2.7.  I believe the 2.7 patch will work on 2.6 as well.


> RM leaks containers if running container disappears from node update
> 
>
> Key: YARN-5197
> URL: https://issues.apache.org/jira/browse/YARN-5197
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2, 2.6.4
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Attachments: YARN-5197-branch-2.7.003.patch, 
> YARN-5197-branch-2.8.003.patch, YARN-5197.001.patch, YARN-5197.002.patch, 
> YARN-5197.003.patch
>
>
> Once a node reports a container running in a status update, the corresponding 
> RMNodeImpl will track the container in its launchedContainers map.  If the 
> node somehow misses sending the completed container status to the RM and the 
> container simply disappears from subsequent heartbeats, the container will 
> leak in launchedContainers forever and the container completion event will 
> not be sent to the scheduler.






[jira] [Updated] (YARN-5270) Solve miscellaneous issues caused by YARN-4844

2016-06-20 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5270:
-
Attachment: YARN-5270-branch-2.8.001.patch

> Solve miscellaneous issues caused by YARN-4844
> --
>
> Key: YARN-5270
> URL: https://issues.apache.org/jira/browse/YARN-5270
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-5270-branch-2.001.patch, 
> YARN-5270-branch-2.8.001.patch
>
>
> Such as javac warnings reported by YARN-5077 and type converting issues in 
> Resources class.






[jira] [Commented] (YARN-5082) Limit ContainerId increase in fair scheduler if the num of node app reserved reached the limit

2016-06-20 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340215#comment-15340215
 ] 

Arun Suresh commented on YARN-5082:
---

[~vinodkv],
Even though the symptom is the same, the causes of the container ID leak in the 
FS and CS look different. In the FS case, the original issue (which this JIRA 
fixes) was that the containerId leaked because a container created with the 
intent of reservation ended up discarded due to a threshold check. From the 
description and comments on YARN-5074, the cause of the leak in the CS looks 
different.
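For illustration, a minimal sketch of the ordering fix described for the FS 
case, with hypothetical names: the reservation-threshold check happens before a 
container ID is drawn from the sequence, so a rejected reservation no longer 
consumes an ID.

{code}
import java.util.concurrent.atomic.AtomicLong;

// Illustrative only: check reservability first, then allocate the id.
class ReservationIdExample {
  private final AtomicLong containerIdSequence = new AtomicLong();

  // Returns the new container id, or -1 if the node already hit its limit.
  long maybeReserve(int reservedOnNode, int maxReservationsPerNode) {
    if (reservedOnNode >= maxReservationsPerNode) {
      return -1;                                  // rejected before an id is used
    }
    return containerIdSequence.incrementAndGet(); // id drawn only for real reservations
  }
}
{code}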

> Limit ContainerId increase in fair scheduler if the num of  node app reserved 
> reached the limit
> ---
>
> Key: YARN-5082
> URL: https://issues.apache.org/jira/browse/YARN-5082
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: sandflee
>Assignee: sandflee
> Fix For: 2.9.0
>
> Attachments: YARN-5082.01.patch, YARN-5082.02.patch, 
> YARN-5082.03.patch, YARN-5082.04.patch, YARN-5082.addendum.patch
>
>
> see many logs like 
> {quote}
> 16/05/14 01:07:58 DEBUG fair.FSAppAttempt: Not creating reservation as 
> container container_1463159225729_0002_01_03 is not reservable
> 16/05/14 01:07:58 DEBUG fair.FSAppAttempt: Not creating reservation as 
> container container_1463159225729_0002_01_04 is not reservable
> 16/05/14 01:07:58 DEBUG fair.FSAppAttempt: Not creating reservation as 
> container container_1463159225729_0002_01_05 is not reservable
> 16/05/14 01:07:58 DEBUG fair.FSAppAttempt: Not creating reservation as 
> container container_1463159225729_0002_01_06 is not reservable
> 16/05/14 01:07:58 DEBUG fair.FSAppAttempt: Not creating reservation as 
> container container_1463159225729_0002_01_07 is not reservable
> {quote}






[jira] [Commented] (YARN-5082) Limit ContainerId increase in fair scheduler if the num of node app reserved reached the limit

2016-06-20 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340184#comment-15340184
 ] 

Vinod Kumar Vavilapalli commented on YARN-5082:
---

bq. CS has similar issue. (Linked)
Pitching in late. Haven't looked at the patch / commit. Can this not be fixed 
in a unified way between the schedulers?

> Limit ContainerId increase in fair scheduler if the num of  node app reserved 
> reached the limit
> ---
>
> Key: YARN-5082
> URL: https://issues.apache.org/jira/browse/YARN-5082
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: sandflee
>Assignee: sandflee
> Fix For: 2.9.0
>
> Attachments: YARN-5082.01.patch, YARN-5082.02.patch, 
> YARN-5082.03.patch, YARN-5082.04.patch, YARN-5082.addendum.patch
>
>
> see many logs like 
> {quote}
> 16/05/14 01:07:58 DEBUG fair.FSAppAttempt: Not creating reservation as 
> container container_1463159225729_0002_01_03 is not reservable
> 16/05/14 01:07:58 DEBUG fair.FSAppAttempt: Not creating reservation as 
> container container_1463159225729_0002_01_04 is not reservable
> 16/05/14 01:07:58 DEBUG fair.FSAppAttempt: Not creating reservation as 
> container container_1463159225729_0002_01_05 is not reservable
> 16/05/14 01:07:58 DEBUG fair.FSAppAttempt: Not creating reservation as 
> container container_1463159225729_0002_01_06 is not reservable
> 16/05/14 01:07:58 DEBUG fair.FSAppAttempt: Not creating reservation as 
> container container_1463159225729_0002_01_07 is not reservable
> {quote}






[jira] [Created] (YARN-5274) Use smartctl to determine health of disks

2016-06-20 Thread Varun Vasudev (JIRA)
Varun Vasudev created YARN-5274:
---

 Summary: Use smartctl to determine health of disks
 Key: YARN-5274
 URL: https://issues.apache.org/jira/browse/YARN-5274
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Reporter: Varun Vasudev


It would be nice to add support for smartctl(on machines where it is available) 
to determine disk health for the YARN local and log dirs(if smartctl is 
applicable). The current disk checking mechanism misses out on issues like bad 
sectors, etc.






[jira] [Updated] (YARN-5270) Solve miscellaneous issues caused by YARN-4844

2016-06-20 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5270:
-
Attachment: YARN-5270-branch-2.001.patch

Thanks [~kasha],

[~sseth] mentioned offline that YARN-4844 breaks binary compatibility as well: per the 
Java compatibility documentation, changing the types in a method signature means code 
compiled against the old signature can no longer find the method at runtime. That means 
a YARN application compiled against Hadoop 2.7 will fail when run on a Hadoop 2.8 
deployment.

What I have done in this patch:
- Added {{long getMemorySize}} and kept {{int getMemory}}
- Removed {{long getVirtualCoresSize}} and kept {{int getVirtualCores}}
- Other changes, such as fixing wrong type conversions in the Resources class.
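
For illustration only, a minimal sketch of the compatibility pattern described above 
(not the actual Hadoop source; the delegation and cast are assumptions): keep the old 
int accessor so binaries compiled against Hadoop 2.7 still resolve it, and add a long 
accessor for new code.
{code}
public abstract class Resource {

  /** Old accessor, kept so existing binaries still link; may truncate very large values. */
  @Deprecated
  public int getMemory() {
    return (int) getMemorySize();
  }

  /** New accessor returning the memory size as a long. */
  public abstract long getMemorySize();

  /** Virtual cores remain an int, so no long variant is required. */
  public abstract int getVirtualCores();
}
{code}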

> Solve miscellaneous issues caused by YARN-4844
> --
>
> Key: YARN-5270
> URL: https://issues.apache.org/jira/browse/YARN-5270
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-5270-branch-2.001.patch
>
>
> Miscellaneous issues such as the javac warnings reported by YARN-5077 and type 
> conversion issues in the Resources class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5214) Pending on synchronized method DirectoryCollection#checkDirs can hang NM's NodeStatusUpdater

2016-06-20 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340165#comment-15340165
 ] 

Vinod Kumar Vavilapalli commented on YARN-5214:
---

Tx for working on this [~djp].

The patch looks good overall and is very close, but a couple of comments follow; I 
think we can do better in some areas:
 - {{dirsChangeListeners}} doesn't change except on service start and stop, so there 
is no need to grab the global / read / write lock; we can simply make it a thread-safe 
collection (see the sketch after this list).
 - In {{createNonExistentDirs()}}, you don't need to make a manual copy of 
localDirs; the iterator() method already does that for you.
 - Can you avoid the Java 8 syntax (diamond operator, etc.), so that this patch 
can be backported to the older releases?
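
A hedged, illustration-only sketch of the first point. Apart from the 
{{dirsChangeListeners}} name taken from the comment, the class and method names are 
assumptions; the diamond operator is deliberately avoided per the last point above:
{code}
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;

class DirectoryCollectionSketch {

  interface DirsChangeListener {
    void onDirsChanged();
  }

  // Registration happens only around service start/stop, so a copy-on-write set
  // gives lock-free, thread-safe iteration on the common notification path.
  private final Set<DirsChangeListener> dirsChangeListeners =
      new CopyOnWriteArraySet<DirsChangeListener>();

  void registerDirsChangeListener(DirsChangeListener listener) {
    dirsChangeListeners.add(listener);
  }

  void notifyDirsChanged() {
    for (DirsChangeListener listener : dirsChangeListeners) {
      listener.onDirsChanged();
    }
  }
}
{code}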

> Pending on synchronized method DirectoryCollection#checkDirs can hang NM's 
> NodeStatusUpdater
> 
>
> Key: YARN-5214
> URL: https://issues.apache.org/jira/browse/YARN-5214
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Critical
> Attachments: YARN-5214.patch
>
>
> In one cluster, we noticed that the NM's heartbeat to the RM suddenly stopped; after a 
> while the node was marked LOST by the RM. From the log, the NM daemon was still running, 
> but a jstack dump showed that the NM's NodeStatusUpdater thread was blocked:
> 1. The Node Status Updater thread is blocked on 0x8065eae8:
> {noformat}
> "Node Status Updater" #191 prio=5 os_prio=0 tid=0x7f0354194000 nid=0x26fa 
> waiting for monitor entry [0x7f035945a000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.getFailedDirs(DirectoryCollection.java:170)
> - waiting to lock <0x8065eae8> (a 
> org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.getDisksHealthReport(LocalDirsHandlerService.java:287)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeHealthCheckerService.getHealthReport(NodeHealthCheckerService.java:58)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.getNodeStatus(NodeStatusUpdaterImpl.java:389)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.access$300(NodeStatusUpdaterImpl.java:83)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl$1.run(NodeStatusUpdaterImpl.java:643)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> 2. The actual holder of this lock is DiskHealthMonitor:
> {noformat}
> "DiskHealthMonitor-Timer" #132 daemon prio=5 os_prio=0 tid=0x7f0397393000 
> nid=0x26bd runnable [0x7f035e511000]
>java.lang.Thread.State: RUNNABLE
> at java.io.UnixFileSystem.createDirectory(Native Method)
> at java.io.File.mkdir(File.java:1316)
> at 
> org.apache.hadoop.util.DiskChecker.mkdirsWithExistsCheck(DiskChecker.java:67)
> at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:104)
> at 
> org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.verifyDirUsingMkdir(DirectoryCollection.java:340)
> at 
> org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.testDirs(DirectoryCollection.java:312)
> at 
> org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.checkDirs(DirectoryCollection.java:231)
> - locked <0x8065eae8> (a 
> org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.checkDirs(LocalDirsHandlerService.java:389)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.access$400(LocalDirsHandlerService.java:50)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService$MonitoringTimerTask.run(LocalDirsHandlerService.java:122)
> at java.util.TimerThread.mainLoop(Timer.java:555)
> at java.util.TimerThread.run(Timer.java:505)
> {noformat}
> This disk operation can take much longer than expected, especially under high IO 
> throughput, so we should use fine-grained locking for the related 
> operations here. 
> The same issue was raised and fixed on HDFS in HDFS-7489, and we should probably 
> apply a similar fix here.
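
As a hedged illustration of the fine-grained-locking idea (this is not the YARN-5214 
patch; all names and the publish-a-snapshot structure are assumptions): run the slow 
disk probing outside any lock and take only a brief write lock to publish the result, 
so heartbeat readers never wait behind disk IO.
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class DirectoryHealthSketch {
  private final ReadWriteLock lock = new ReentrantReadWriteLock();
  private final List<String> localDirs;
  private List<String> failedDirs = new ArrayList<String>();

  DirectoryHealthSketch(List<String> localDirs) {
    this.localDirs = new ArrayList<String>(localDirs);
  }

  /** Called by the heartbeat path; only a brief read lock is taken. */
  List<String> getFailedDirs() {
    lock.readLock().lock();
    try {
      return new ArrayList<String>(failedDirs);
    } finally {
      lock.readLock().unlock();
    }
  }

  /** Called by the disk health monitor timer. */
  void checkDirs() {
    List<String> newlyFailed = new ArrayList<String>();
    for (String dir : localDirs) {        // slow probing, no lock held
      if (!probeDir(dir)) {
        newlyFailed.add(dir);
      }
    }
    lock.writeLock().lock();              // short critical section to publish
    try {
      failedDirs = newlyFailed;
    } finally {
      lock.writeLock().unlock();
    }
  }

  private boolean probeDir(String dir) {
    return new java.io.File(dir).canWrite();  // stand-in for the real DiskChecker logic
  }
}
{code}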



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-4097) Create POC timeline web UI with new YARN web UI framework

2016-06-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340145#comment-15340145
 ] 

Varun Saxena edited comment on YARN-4097 at 6/20/16 6:42 PM:
-

bq. Can we sync up this work with YARN-3368's progress ?
Yeah, it seems there will be some refactoring. I just wanted to point this out because 
we may want to give a UI demo (ATS-specific) too at the upcoming ATS talk. We can fix 
this after the refactoring work going on in YARN-3368.


was (Author: varun_saxena):
bq. Can we sync up this work with YARN-3368's progress ?
Yeah, it seems there will be some refactoring. I just wanted to point this out because 
we may want to give a UI demo (ATS-specific) too at the upcoming ATS talk.

> Create POC timeline web UI with new YARN web UI framework
> -
>
> Key: YARN-4097
> URL: https://issues.apache.org/jira/browse/YARN-4097
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: Screen Shot 2016-02-24 at 15.57.38.png, Screen Shot 
> 2016-02-24 at 15.57.53.png, Screen Shot 2016-02-24 at 15.58.08.png, Screen 
> Shot 2016-02-24 at 15.58.26.png, YARN-4097-bugfix.patch
>
>
> As planned, we need to try out the new YARN web UI framework and implement 
> timeline v2 web UI on top of it. This JIRA proposes to build the basic active 
> flow and application lists of the timeline data. We can add more content 
> after we get used to this framework.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4097) Create POC timeline web UI with new YARN web UI framework

2016-06-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340145#comment-15340145
 ] 

Varun Saxena commented on YARN-4097:


bq. Can we sync up this work with YARN-3368's progress ?
Yeah, it seems there will be some refactoring. I just wanted to point this out because 
we may want to give a UI demo (ATS-specific) too at the upcoming ATS talk.

> Create POC timeline web UI with new YARN web UI framework
> -
>
> Key: YARN-4097
> URL: https://issues.apache.org/jira/browse/YARN-4097
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: Screen Shot 2016-02-24 at 15.57.38.png, Screen Shot 
> 2016-02-24 at 15.57.53.png, Screen Shot 2016-02-24 at 15.58.08.png, Screen 
> Shot 2016-02-24 at 15.58.26.png, YARN-4097-bugfix.patch
>
>
> As planned, we need to try out the new YARN web UI framework and implement 
> timeline v2 web UI on top of it. This JIRA proposes to build the basic active 
> flow and application lists of the timeline data. We can add more content 
> after we get used to this framework.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4097) Create POC timeline web UI with new YARN web UI framework

2016-06-20 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340139#comment-15340139
 ] 

Li Lu commented on YARN-4097:
-

OK, got your point. Yes, that was not included. Can we sync up this work with 
YARN-3368's progress? It seems they're also doing some refactoring on 
application.hbs in the UI branch. 

> Create POC timeline web UI with new YARN web UI framework
> -
>
> Key: YARN-4097
> URL: https://issues.apache.org/jira/browse/YARN-4097
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: Screen Shot 2016-02-24 at 15.57.38.png, Screen Shot 
> 2016-02-24 at 15.57.53.png, Screen Shot 2016-02-24 at 15.58.08.png, Screen 
> Shot 2016-02-24 at 15.58.26.png, YARN-4097-bugfix.patch
>
>
> As planned, we need to try out the new YARN web UI framework and implement 
> timeline v2 web UI on top of it. This JIRA proposes to build the basic active 
> flow and application lists of the timeline data. We can add more content 
> after we get used to this framework.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-4097) Create POC timeline web UI with new YARN web UI framework

2016-06-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340130#comment-15340130
 ] 

Varun Saxena edited comment on YARN-4097 at 6/20/16 6:36 PM:
-

The patch applies. I meant that we have a navigation bar on top of the UI page 
which has links to Cluster Overview, Applications, Nodes, etc. We do not have a 
similar link for Flows there.
It was there in an earlier patch you had shared offline. It seems to have been 
added in the application controller in your patch. It needs to be added in 
{{templates/application.hbs}} instead.

It was there in the controller earlier, but it seems to have changed since then.


was (Author: varun_saxena):
The patch applies. I meant that we have a navigation bar on top of the UI page 
which has links to Cluster Overview, Applications, Nodes, etc. We do not have a 
similar link for Flows there.
It was there in an earlier patch you had shared offline, but it has been 
missed in this one. It needs to be added in {{templates/application.hbs}}.

> Create POC timeline web UI with new YARN web UI framework
> -
>
> Key: YARN-4097
> URL: https://issues.apache.org/jira/browse/YARN-4097
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: Screen Shot 2016-02-24 at 15.57.38.png, Screen Shot 
> 2016-02-24 at 15.57.53.png, Screen Shot 2016-02-24 at 15.58.08.png, Screen 
> Shot 2016-02-24 at 15.58.26.png, YARN-4097-bugfix.patch
>
>
> As planned, we need to try out the new YARN web UI framework and implement 
> timeline v2 web UI on top of it. This JIRA proposes to build the basic active 
> flow and application lists of the timeline data. We can add more content 
> after we get used to this framework.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5196) Command to refresh cache without having to restart the cluster

2016-06-20 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340133#comment-15340133
 ] 

Joep Rottinghuis commented on YARN-5196:


We may want to rename this to
"Add refreshUserToGroupsMappings and refreshSuperUserGroupsConfiguration command 
options to the scmadmin command."

Indeed, the scmadmin command lacks the following options (which rmadmin does 
have):
{code}
-refreshUserToGroupsMappings            Refresh user-to-groups mappings.
-refreshSuperUserGroupsConfiguration    Refresh superuser proxy groups mappings.
{code}

That said, the SCM can be restarted "without having to restart the cluster", 
meaning that the RM does not need to be restarted for this. An SCM restart should 
be safe.

> Command to refresh cache without having to restart the cluster
> --
>
> Key: YARN-5196
> URL: https://issues.apache.org/jira/browse/YARN-5196
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Prasad Wagle
>Priority: Minor
>
> After changing hadoop.proxyuser.x.groups in core-site.xml, we ran:
> dfsadmin -refreshSuperUserGroupsConfiguration 
> rmadmin -refreshSuperUserGroupsConfiguration
> However we are getting a warning:
>  WARN [2016-06-02 17:54:50,914] ({pool-10-thread-1} 
> SharedCacheClient.java[use]:137) - SCM might be down. The exception is User: 
> x is not allowed to impersonate y
> Will be good to have a command to refresh the cache without having to restart 
> the cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4097) Create POC timeline web UI with new YARN web UI framework

2016-06-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340130#comment-15340130
 ] 

Varun Saxena commented on YARN-4097:


The patch applies. I meant that we have a navigation bar on top of the UI page 
which has links to Cluster Overview, Applications, Nodes, etc. We do not have a 
similar link for Flows there.
It was there in an earlier patch you had shared offline, but it has been 
missed in this one. It needs to be added in {{templates/application.hbs}}.

> Create POC timeline web UI with new YARN web UI framework
> -
>
> Key: YARN-4097
> URL: https://issues.apache.org/jira/browse/YARN-4097
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: Screen Shot 2016-02-24 at 15.57.38.png, Screen Shot 
> 2016-02-24 at 15.57.53.png, Screen Shot 2016-02-24 at 15.58.08.png, Screen 
> Shot 2016-02-24 at 15.58.26.png, YARN-4097-bugfix.patch
>
>
> As planned, we need to try out the new YARN web UI framework and implement 
> timeline v2 web UI on top of it. This JIRA proposes to build the basic active 
> flow and application lists of the timeline data. We can add more content 
> after we get used to this framework.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2928) YARN Timeline Service: Next generation

2016-06-20 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340115#comment-15340115
 ] 

Joep Rottinghuis commented on YARN-2928:


email thread on yarn-dev "[DISCUSS] merging YARN-2928 (Timeline Service v.2) to 
trunk": http://markmail.org/thread/bnpwpjhkbs6wsn7z

> YARN Timeline Service: Next generation
> --
>
> Key: YARN-2928
> URL: https://issues.apache.org/jira/browse/YARN-2928
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Critical
> Attachments: ATSv2.rev1.pdf, ATSv2.rev2.pdf, 
> ATSv2BackendHBaseSchemaproposal.pdf, Data model proposal v1.pdf, The YARN 
> Timeline Service v.2 Documentation.pdf, Timeline Service Next Gen - Planning 
> - ppt.pptx, TimelineServiceStoragePerformanceTestSummaryYARN-2928.pdf, 
> timeline_service_v2_next_milestones.pdf
>
>
> We have the application timeline server implemented in yarn per YARN-1530 and 
> YARN-321. Although it is a great feature, we have recognized several critical 
> issues and features that need to be addressed.
> This JIRA proposes the design and implementation changes to address those. 
> This is phase 1 of this effort.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4097) Create POC timeline web UI with new YARN web UI framework

2016-06-20 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340110#comment-15340110
 ] 

Li Lu commented on YARN-4097:
-

Hi [~varun_saxena], maybe the patch does not quite apply to the latest 
YARN-3368 branch? I can post a new one if needed... 

> Create POC timeline web UI with new YARN web UI framework
> -
>
> Key: YARN-4097
> URL: https://issues.apache.org/jira/browse/YARN-4097
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: Screen Shot 2016-02-24 at 15.57.38.png, Screen Shot 
> 2016-02-24 at 15.57.53.png, Screen Shot 2016-02-24 at 15.58.08.png, Screen 
> Shot 2016-02-24 at 15.58.26.png, YARN-4097-bugfix.patch
>
>
> As planned, we need to try out the new YARN web UI framework and implement 
> timeline v2 web UI on top of it. This JIRA proposes to build the basic active 
> flow and application lists of the timeline data. We can add more content 
> after we get used to this framework.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-4097) Create POC timeline web UI with new YARN web UI framework

2016-06-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340010#comment-15340010
 ] 

Varun Saxena edited comment on YARN-4097 at 6/20/16 5:46 PM:
-

[~gtCarrera9], in the patch above, there is no link to flow activities in the 
top level tab.
I have to access yarn-flow-activity URL(to access flows) directly via browser 
address bar. So that can be fixed

By the way, I was trying out sunburst diagrams to display flow run 
aggregations. Will be doing that on top of this patch.


was (Author: varun_saxena):
[~gtCarrera9], in the patch above, there is no link to flow activities in the 
top level tab.
I have to access yarn-flow-activity URL(to access flows) directly via browser 
address bar.

By the way, I was trying out sunburst diagrams to display flow run 
aggregations. Will be doing that on top of this patch.

> Create POC timeline web UI with new YARN web UI framework
> -
>
> Key: YARN-4097
> URL: https://issues.apache.org/jira/browse/YARN-4097
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: Screen Shot 2016-02-24 at 15.57.38.png, Screen Shot 
> 2016-02-24 at 15.57.53.png, Screen Shot 2016-02-24 at 15.58.08.png, Screen 
> Shot 2016-02-24 at 15.58.26.png, YARN-4097-bugfix.patch
>
>
> As planned, we need to try out the new YARN web UI framework and implement 
> timeline v2 web UI on top of it. This JIRA proposes to build the basic active 
> flow and application lists of the timeline data. We can add more content 
> after we get used to this framework.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-4097) Create POC timeline web UI with new YARN web UI framework

2016-06-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340010#comment-15340010
 ] 

Varun Saxena edited comment on YARN-4097 at 6/20/16 5:43 PM:
-

[~gtCarrera9], in the patch above, there is no link to flow activities in the 
top level tab.
I have to access yarn-flow-activity URL(to access flows) directly via browser 
address bar.

By the way, I was trying out sunburst diagrams to display flow run 
aggregations. Will be doing that on top of this patch.


was (Author: varun_saxena):
[~gtCarrera9], in the patch above, there is no link to flow activities in the 
top level tab.
I have to explicitly access yarn-flow-activity endpoint to access flows.

By the way, I was trying out sunburst diagrams to display flow run 
aggregations. Will be doing that on top of this patch.

> Create POC timeline web UI with new YARN web UI framework
> -
>
> Key: YARN-4097
> URL: https://issues.apache.org/jira/browse/YARN-4097
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: Screen Shot 2016-02-24 at 15.57.38.png, Screen Shot 
> 2016-02-24 at 15.57.53.png, Screen Shot 2016-02-24 at 15.58.08.png, Screen 
> Shot 2016-02-24 at 15.58.26.png, YARN-4097-bugfix.patch
>
>
> As planned, we need to try out the new YARN web UI framework and implement 
> timeline v2 web UI on top of it. This JIRA proposes to build the basic active 
> flow and application lists of the timeline data. We can add more content 
> after we get used to this framework.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4097) Create POC timeline web UI with new YARN web UI framework

2016-06-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340010#comment-15340010
 ] 

Varun Saxena commented on YARN-4097:


[~gtCarrera9], in the patch above, there is no link to flow activities in the 
top level tab.
I have to explicitly access yarn-flow-activity endpoint to access flows.

By the way, I was trying out sunburst diagrams to display flow run 
aggregations. Will be doing that on top of this patch.

> Create POC timeline web UI with new YARN web UI framework
> -
>
> Key: YARN-4097
> URL: https://issues.apache.org/jira/browse/YARN-4097
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: Screen Shot 2016-02-24 at 15.57.38.png, Screen Shot 
> 2016-02-24 at 15.57.53.png, Screen Shot 2016-02-24 at 15.58.08.png, Screen 
> Shot 2016-02-24 at 15.58.26.png, YARN-4097-bugfix.patch
>
>
> As planned, we need to try out the new YARN web UI framework and implement 
> timeline v2 web UI on top of it. This JIRA proposes to build the basic active 
> flow and application lists of the timeline data. We can add more content 
> after we get used to this framework.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5260) Review / Recommendations for hbase writer code

2016-06-20 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15339965#comment-15339965
 ] 

Joep Rottinghuis commented on YARN-5260:


InternalScanner is now tagged with 
LimitedPrivate(HBaseInterfaceAudience.COPROC) as of HBase 1.3.0 and later, per 
HBASE-16048.

> Review / Recommendations for hbase writer code
> --
>
> Key: YARN-5260
> URL: https://issues.apache.org/jira/browse/YARN-5260
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>
> [~ted_yu] is graciously reviewing the hbase writer related code and has some 
> recommendations. (more to come as review progresses). I will keep track of 
> those in this jira and perhaps spin off other jira(s) depending on the scope 
> of changes. 
> For FlowRunCoprocessor.java:
>  
> -  private HRegion region;
> Try to declare it as Region, the interface. This way, you only call methods 
> that are stable across future releases.
> -  private long getCellTimestamp(long timestamp, List<Tag> tags) {
> tags is not used; remove the parameter.
> For FlowScanner:
> - private final InternalScanner flowRunScanner;
> Currently InternalScanner is Private. If you must use it, try surfacing your 
> case to hbase so that it can be marked:
> @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC)
> @InterfaceStability.Evolving
> w.r.t. regionScanner :
> {code} 
> if (internalScanner instanceof RegionScanner) {
>   this.regionScanner = (RegionScanner) internalScanner;
> }
> {code}
> I see IllegalStateException being thrown in some methods when regionScanner 
> is null. Better bail out early in the ctor.
> {code}
>   public static AggregationOperation getAggregationOperationFromTagsList(
>       List<Tag> tags) {
>     for (AggregationOperation aggOp : AggregationOperation.values()) {
>       for (Tag tag : tags) {
>         if (tag.getType() == aggOp.getTagType()) {
>           return aggOp;
> {code}
> The above nested loop can be improved (a lot):
> values() returns an array. If you pre-generate a Set 
> (https://docs.oracle.com/javase/7/docs/api/java/util/EnumSet.html) containing 
> all the values, the outer loop can be omitted.
> You iterate through tags and see if tag.getType() is in the Set.
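
A hedged, self-contained sketch of that suggestion. The enum below is a stand-in with 
made-up tag-type values, not the real YARN AggregationOperation or HBase Tag types; the 
point is simply to build the tag-type lookup once so each cell needs a single map probe 
instead of the nested loop.
{code}
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

final class AggregationOperationLookup {

  enum AggregationOperation {
    SUM((byte) 1), MAX((byte) 2);
    private final byte tagType;
    AggregationOperation(byte tagType) { this.tagType = tagType; }
    byte getTagType() { return tagType; }
  }

  // Built once: tag type -> operation.
  private static final Map<Byte, AggregationOperation> TAG_TYPE_TO_OP =
      new HashMap<Byte, AggregationOperation>();
  static {
    for (AggregationOperation op : AggregationOperation.values()) {
      TAG_TYPE_TO_OP.put(op.getTagType(), op);
    }
  }

  static AggregationOperation fromTagTypes(List<Byte> tagTypes) {
    for (byte type : tagTypes) {
      AggregationOperation op = TAG_TYPE_TO_OP.get(type);
      if (op != null) {
        return op;
      }
    }
    return null;
  }

  public static void main(String[] args) {
    System.out.println(fromTagTypes(Arrays.asList((byte) 2, (byte) 5))); // prints MAX
  }
}
{code}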



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4958) The file localization process should allow for wildcards to reduce the application footprint in the state store

2016-06-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15339962#comment-15339962
 ] 

Hudson commented on YARN-4958:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #9986 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9986/])
YARN-4958. The file localization process should allow for wildcards to (sjlee: 
rev 5107a967fa2558deba11c33a326d4d2e5748f452)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java


> The file localization process should allow for wildcards to reduce the 
> application footprint in the state store
> ---
>
> Key: YARN-4958
> URL: https://issues.apache.org/jira/browse/YARN-4958
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Fix For: 2.9.0
>
> Attachments: YARN-4958.001.patch, YARN-4958.002.patch, 
> YARN-4958.003.patch, YARN-4958.004.patch
>
>
> When using the -libjars option to add classes to the classpath, every library 
> so added is explicitly listed in the {{ContainerLaunchContext}}'s local 
> resources even though they're all uploaded to the same directory in HDFS.  
> When using tools like Crunch without an uber JAR or when trying to take 
> advantage of the shared cache, the number of libraries can be quite large.  
> We've seen many cases where we had to turn down the max number of 
> applications to prevent ZK from running out of heap because of the size of 
> the state store entries.
> Rather than listing all files independently, this JIRA proposes to have the 
> NM allow wildcards in the resource localization paths.  Specifically, we 
> propose to allow a path to have a final component (name) set to "*", which is 
> interpreted by the NM as "download the full directory and link to every file 
> in it from the job's working directory."  This behavior is the same as the 
> current behavior when using -libjars, but avoids explicitly listing every 
> file.
> This JIRA does not attempt to provide more general purpose wildcards, such as 
> "\*.jar" or "file\*", as having multiple entries for a single directory 
> presents numerous logistical issues.
> This JIRA also does not attempt to integrate with the shared cache.  That 
> work will be left to a future JIRA.  Specifically, this JIRA only applies 
> when a full directory is uploaded.  Currently the shared cache does not 
> handle directory uploads.
> This JIRA proposes to allow for wildcards both in the internal processing of 
> the -libjars switch and in paths added through the {{Job}} and 
> {{DistributedCache}} classes.
> The proposed approach is to treat a path, "dir/\*", as "dir" for purposes of 
> all file verification and localization.  In the final step, the NM will query 
> the localized directory to get a list of the files in "dir" such that each 
> can be linked from the job's working directory.  Since $PWD/\* is always 
> included on the classpath, all JAR files in "dir" will be in the classpath.
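
As a hedged, self-contained illustration of the final expansion step described above 
(the names, paths, and symlinking detail are illustrative, not the actual 
ContainerExecutor code): after "dir" is localized, every file in it is linked from the 
container's working directory so that $PWD/* puts all the JARs on the classpath.
{code}
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

final class WildcardLinkSketch {

  static void linkAllFiles(Path localizedDir, Path containerWorkDir) throws IOException {
    try (DirectoryStream<Path> entries = Files.newDirectoryStream(localizedDir)) {
      for (Path entry : entries) {
        if (Files.isRegularFile(entry)) {
          // e.g. $PWD/foo.jar -> <localized dir>/foo.jar
          Files.createSymbolicLink(containerWorkDir.resolve(entry.getFileName()), entry);
        }
      }
    }
  }

  public static void main(String[] args) throws IOException {
    linkAllFiles(Paths.get(args[0]), Paths.get(args[1]));
  }
}
{code}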



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4958) The file localization process should allow for wildcards to reduce the application footprint in the state store

2016-06-20 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15339917#comment-15339917
 ] 

Daniel Templeton commented on YARN-4958:


I think it's safe for 2.9.

> The file localization process should allow for wildcards to reduce the 
> application footprint in the state store
> ---
>
> Key: YARN-4958
> URL: https://issues.apache.org/jira/browse/YARN-4958
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Fix For: 2.9.0
>
> Attachments: YARN-4958.001.patch, YARN-4958.002.patch, 
> YARN-4958.003.patch, YARN-4958.004.patch
>
>
> When using the -libjars option to add classes to the classpath, every library 
> so added is explicitly listed in the {{ContainerLaunchContext}}'s local 
> resources even though they're all uploaded to the same directory in HDFS.  
> When using tools like Crunch without an uber JAR or when trying to take 
> advantage of the shared cache, the number of libraries can be quite large.  
> We've seen many cases where we had to turn down the max number of 
> applications to prevent ZK from running out of heap because of the size of 
> the state store entries.
> Rather than listing all files independently, this JIRA proposes to have the 
> NM allow wildcards in the resource localization paths.  Specifically, we 
> propose to allow a path to have a final component (name) set to "*", which is 
> interpreted by the NM as "download the full directory and link to every file 
> in it from the job's working directory."  This behavior is the same as the 
> current behavior when using -libjars, but avoids explicitly listing every 
> file.
> This JIRA does not attempt to provide more general purpose wildcards, such as 
> "\*.jar" or "file\*", as having multiple entries for a single directory 
> presents numerous logistical issues.
> This JIRA also does not attempt to integrate with the shared cache.  That 
> work will be left to a future JIRA.  Specifically, this JIRA only applies 
> when a full directory is uploaded.  Currently the shared cache does not 
> handle directory uploads.
> This JIRA proposes to allow for wildcards both in the internal processing of 
> the -libjars switch and in paths added through the {{Job}} and 
> {{DistributedCache}} classes.
> The proposed approach is to treat a path, "dir/\*", as "dir" for purposes of 
> all file verification and localization.  In the final step, the NM will query 
> the localized directory to get a list of the files in "dir" such that each 
> can be linked from the job's working directory.  Since $PWD/\* is always 
> included on the classpath, all JAR files in "dir" will be in the classpath.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4958) The file localization process should allow for wildcards to reduce the application footprint in the state store

2016-06-20 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15339913#comment-15339913
 ] 

Daniel Templeton commented on YARN-4958:


I filed HADOOP-13296 to deal with the {{Path}} changes.

> The file localization process should allow for wildcards to reduce the 
> application footprint in the state store
> ---
>
> Key: YARN-4958
> URL: https://issues.apache.org/jira/browse/YARN-4958
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Fix For: 2.9.0
>
> Attachments: YARN-4958.001.patch, YARN-4958.002.patch, 
> YARN-4958.003.patch, YARN-4958.004.patch
>
>
> When using the -libjars option to add classes to the classpath, every library 
> so added is explicitly listed in the {{ContainerLaunchContext}}'s local 
> resources even though they're all uploaded to the same directory in HDFS.  
> When using tools like Crunch without an uber JAR or when trying to take 
> advantage of the shared cache, the number of libraries can be quite large.  
> We've seen many cases where we had to turn down the max number of 
> applications to prevent ZK from running out of heap because of the size of 
> the state store entries.
> Rather than listing all files independently, this JIRA proposes to have the 
> NM allow wildcards in the resource localization paths.  Specifically, we 
> propose to allow a path to have a final component (name) set to "*", which is 
> interpreted by the NM as "download the full directory and link to every file 
> in it from the job's working directory."  This behavior is the same as the 
> current behavior when using -libjars, but avoids explicitly listing every 
> file.
> This JIRA does not attempt to provide more general purpose wildcards, such as 
> "\*.jar" or "file\*", as having multiple entries for a single directory 
> presents numerous logistical issues.
> This JIRA also does not attempt to integrate with the shared cache.  That 
> work will be left to a future JIRA.  Specifically, this JIRA only applies 
> when a full directory is uploaded.  Currently the shared cache does not 
> handle directory uploads.
> This JIRA proposes to allow for wildcards both in the internal processing of 
> the -libjars switch and in paths added through the {{Job}} and 
> {{DistributedCache}} classes.
> The proposed approach is to treat a path, "dir/\*", as "dir" for purposes of 
> all file verification and localization.  In the final step, the NM will query 
> the localized directory to get a list of the files in "dir" such that each 
> can be linked from the job's working directory.  Since $PWD/\* is always 
> included on the classpath, all JAR files in "dir" will be in the classpath.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4958) The file localization process should allow for wildcards to reduce the application footprint in the state store

2016-06-20 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15339890#comment-15339890
 ] 

Sangjin Lee commented on YARN-4958:
---

Thanks for the update [~templedf]! The latest patch looks good to me. I'll 
commit it shortly and look at MAPREDUCE-6719 after that. Should this go into 
trunk and 2.9.0?

> The file localization process should allow for wildcards to reduce the 
> application footprint in the state store
> ---
>
> Key: YARN-4958
> URL: https://issues.apache.org/jira/browse/YARN-4958
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-4958.001.patch, YARN-4958.002.patch, 
> YARN-4958.003.patch, YARN-4958.004.patch
>
>
> When using the -libjars option to add classes to the classpath, every library 
> so added is explicitly listed in the {{ContainerLaunchContext}}'s local 
> resources even though they're all uploaded to the same directory in HDFS.  
> When using tools like Crunch without an uber JAR or when trying to take 
> advantage of the shared cache, the number of libraries can be quite large.  
> We've seen many cases where we had to turn down the max number of 
> applications to prevent ZK from running out of heap because of the size of 
> the state store entries.
> Rather than listing all files independently, this JIRA proposes to have the 
> NM allow wildcards in the resource localization paths.  Specifically, we 
> propose to allow a path to have a final component (name) set to "*", which is 
> interpreted by the NM as "download the full directory and link to every file 
> in it from the job's working directory."  This behavior is the same as the 
> current behavior when using -libjars, but avoids explicitly listing every 
> file.
> This JIRA does not attempt to provide more general purpose wildcards, such as 
> "\*.jar" or "file\*", as having multiple entries for a single directory 
> presents numerous logistical issues.
> This JIRA also does not attempt to integrate with the shared cache.  That 
> work will be left to a future JIRA.  Specifically, this JIRA only applies 
> when a full directory is uploaded.  Currently the shared cache does not 
> handle directory uploads.
> This JIRA proposes to allow for wildcards both in the internal processing of 
> the -libjars switch and in paths added through the {{Job}} and 
> {{DistributedCache}} classes.
> The proposed approach is to treat a path, "dir/\*", as "dir" for purposes of 
> all file verification and localization.  In the final step, the NM will query 
> the localized directory to get a list of the files in "dir" such that each 
> can be linked from the job's working directory.  Since $PWD/\* is always 
> included on the classpath, all JAR files in "dir" will be in the classpath.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5265) Make HBase configuration for the timeline service configurable

2016-06-20 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15339680#comment-15339680
 ] 

Joep Rottinghuis commented on YARN-5265:


The Findbugs warning appears unrelated to this patch:
org.apache.hadoop.yarn.event.EventDispatcher$EventProcessor.run() invokes 
System.exit(...).




> Make HBase configuration for the timeline service configurable
> --
>
> Key: YARN-5265
> URL: https://issues.apache.org/jira/browse/YARN-5265
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Joep Rottinghuis
> Attachments: ATS v2 cluster deployment v1.png, 
> YARN-5265-YARN-2928.01.patch, YARN-5265-YARN-2928.02.patch, 
> YARN-5265-YARN-2928.03.patch
>
>
> Currently we create a "default" HBase configuration; this works as long as the 
> user places the appropriate configuration on the classpath.
> This works fine for a standalone Hadoop cluster.
> However, if a user wants to monitor an HBase cluster and has a separate ATS 
> HBase cluster, then it can become tricky to create the right classpath for 
> the nodemanagers and still let tasks have their separate configs.
> It will be much easier to add a YARN configuration that lets cluster admins 
> configure which HBase cluster ATS metrics are written to.
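
A hedged sketch of the idea; the property name 
"yarn.timeline-service.hbase.configuration.file" is an illustrative assumption, not 
necessarily the name this JIRA introduces:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;

final class TimelineHBaseConfSketch {

  static Configuration getTimelineServiceHBaseConf(Configuration yarnConf) {
    Configuration hbaseConf = HBaseConfiguration.create(yarnConf);
    // If the admin points YARN at a dedicated hbase-site.xml, layer it on top so the
    // writer talks to the ATS HBase cluster rather than whatever is on the classpath.
    String hbaseConfFile = yarnConf.get("yarn.timeline-service.hbase.configuration.file");
    if (hbaseConfFile != null) {
      hbaseConf.addResource(new Path(hbaseConfFile));
    }
    return hbaseConf;
  }
}
{code}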



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4958) The file localization process should allow for wildcards to reduce the application footprint in the state store

2016-06-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15339669#comment-15339669
 ] 

Hadoop QA commented on YARN-4958:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
48s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 52s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 44s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12811828/YARN-4958.004.patch |
| JIRA Issue | YARN-4958 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f7999c69fd98 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / fc6b50c |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/12080/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/12080/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12080/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 

[jira] [Commented] (YARN-4516) [YARN-3368] Use em-table to better render tables

2016-06-20 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15339640#comment-15339640
 ] 

Sunil G commented on YARN-4516:
---

Hi [~gtCarrera9],
Is this task intended only for the Timeline UI?
For the YARN UI, we need this change for the app and nodes pages. I have already made 
this change for the NodeLabel page. If you have not started, I could help here for the 
YARN UI.

> [YARN-3368] Use em-table to better render tables
> 
>
> Key: YARN-4516
> URL: https://issues.apache.org/jira/browse/YARN-4516
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Li Lu
>
> Currently we're using DataTables, it isn't integrated to Ember.js very well. 
> Instead we can use em-table (see https://github.com/sreenaths/em-table/wiki, 
> which is created for Tez UI). It supports features such as selectable 
> columns, pagination, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5161) [YARN-3368] Add Apache Hadoop logo to UI home page

2016-06-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15339592#comment-15339592
 ] 

Hadoop QA commented on YARN-5161:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 2m 39s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 3m 12s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:6d3a5f5 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12811827/YARN-5161-YARN-3368.05.patch
 |
| JIRA Issue | YARN-5161 |
| Optional Tests |  asflicense  |
| uname | Linux d07019f6502d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3368 / b775df6 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12079/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Add Apache Hadoop logo to UI home page
> --
>
> Key: YARN-5161
> URL: https://issues.apache.org/jira/browse/YARN-5161
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp
>Reporter: Sunil G
>Assignee: Kai Sasaki
> Attachments: Screen Shot 2016-05-31 at 21.22.30.png, Screen Shot 
> 2016-06-11 at 12.33.39.png, Screen Shot 2016-06-20 at 23.15.05.png, 
> YARN-5161-YARN-3368.03.patch, YARN-5161-YARN-3368.04.patch, 
> YARN-5161-YARN-3368.05.patch, YARN-5161.01.patch, YARN-5161.02.patch, 
> apache_logo.png, hadoop_logo.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4958) The file localization process should allow for wildcards to reduce the application footprint in the state store

2016-06-20 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-4958:
---
Attachment: YARN-4958.004.patch

OK, I moved the MR parts to MAPREDUCE-6719.  I'll also file a separate JIRA to 
clean up the {{Path}} javadoc.

> The file localization process should allow for wildcards to reduce the 
> application footprint in the state store
> ---
>
> Key: YARN-4958
> URL: https://issues.apache.org/jira/browse/YARN-4958
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-4958.001.patch, YARN-4958.002.patch, 
> YARN-4958.003.patch, YARN-4958.004.patch
>
>
> When using the -libjars option to add classes to the classpath, every library 
> so added is explicitly listed in the {{ContainerLaunchContext}}'s local 
> resources even though they're all uploaded to the same directory in HDFS.  
> When using tools like Crunch without an uber JAR or when trying to take 
> advantage of the shared cache, the number of libraries can be quite large.  
> We've seen many cases where we had to turn down the max number of 
> applications to prevent ZK from running out of heap because of the size of 
> the state store entries.
> Rather than listing all files independently, this JIRA proposes to have the 
> NM allow wildcards in the resource localization paths.  Specifically, we 
> propose to allow a path to have a final component (name) set to "*", which is 
> interpreted by the NM as "download the full directory and link to every file 
> in it from the job's working directory."  This behavior is the same as the 
> current behavior when using -libjars, but avoids explicitly listing every 
> file.
> This JIRA does not attempt to provide more general purpose wildcards, such as 
> "\*.jar" or "file\*", as having multiple entries for a single directory 
> presents numerous logistical issues.
> This JIRA also does not attempt to integrate with the shared cache.  That 
> work will be left to a future JIRA.  Specifically, this JIRA only applies 
> when a full directory is uploaded.  Currently the shared cache does not 
> handle directory uploads.
> This JIRA proposes to allow for wildcards both in the internal processing of 
> the -libjars switch and in paths added through the {{Job}} and 
> {{DistributedCache}} classes.
> The proposed approach is to treat a path, "dir/\*", as "dir" for purposes of 
> all file verification and localization.  In the final step, the NM will query 
> the localized directory to get a list of the files in "dir" such that each 
> can be linked from the job's working directory.  Since $PWD/\* is always 
> included on the classpath, all JAR files in "dir" will be in the classpath.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5161) [YARN-3368] Add Apache Hadoop logo to UI home page

2016-06-20 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15339566#comment-15339566
 ] 

Kai Sasaki commented on YARN-5161:
--

Thanks [~Sreenath] and [~sunilg] for reviewing.
I updated it to use a darker color and attached a screenshot as well.

> [YARN-3368] Add Apache Hadoop logo to UI home page
> --
>
> Key: YARN-5161
> URL: https://issues.apache.org/jira/browse/YARN-5161
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp
>Reporter: Sunil G
>Assignee: Kai Sasaki
> Attachments: Screen Shot 2016-05-31 at 21.22.30.png, Screen Shot 
> 2016-06-11 at 12.33.39.png, Screen Shot 2016-06-20 at 23.15.05.png, 
> YARN-5161-YARN-3368.03.patch, YARN-5161-YARN-3368.04.patch, 
> YARN-5161-YARN-3368.05.patch, YARN-5161.01.patch, YARN-5161.02.patch, 
> apache_logo.png, hadoop_logo.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5161) [YARN-3368] Add Apache Hadoop logo to UI home page

2016-06-20 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated YARN-5161:
-
Attachment: YARN-5161-YARN-3368.05.patch

> [YARN-3368] Add Apache Hadoop logo to UI home page
> --
>
> Key: YARN-5161
> URL: https://issues.apache.org/jira/browse/YARN-5161
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp
>Reporter: Sunil G
>Assignee: Kai Sasaki
> Attachments: Screen Shot 2016-05-31 at 21.22.30.png, Screen Shot 
> 2016-06-11 at 12.33.39.png, Screen Shot 2016-06-20 at 23.15.05.png, 
> YARN-5161-YARN-3368.03.patch, YARN-5161-YARN-3368.04.patch, 
> YARN-5161-YARN-3368.05.patch, YARN-5161.01.patch, YARN-5161.02.patch, 
> apache_logo.png, hadoop_logo.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5161) [YARN-3368] Add Apache Hadoop logo to UI home page

2016-06-20 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated YARN-5161:
-
Attachment: Screen Shot 2016-06-20 at 23.15.05.png

> [YARN-3368] Add Apache Hadoop logo to UI home page
> --
>
> Key: YARN-5161
> URL: https://issues.apache.org/jira/browse/YARN-5161
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp
>Reporter: Sunil G
>Assignee: Kai Sasaki
> Attachments: Screen Shot 2016-05-31 at 21.22.30.png, Screen Shot 
> 2016-06-11 at 12.33.39.png, Screen Shot 2016-06-20 at 23.15.05.png, 
> YARN-5161-YARN-3368.03.patch, YARN-5161-YARN-3368.04.patch, 
> YARN-5161.01.patch, YARN-5161.02.patch, apache_logo.png, hadoop_logo.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4518) [YARN-3368] Support rendering statistic-by-node-label for queues/apps page

2016-06-20 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-4518:
--
Attachment: YARN-4518-YARN-3368.1.patch

Adding an initial version of the patch.

> [YARN-3368] Support rendering statistic-by-node-label for queues/apps page
> --
>
> Key: YARN-4518
> URL: https://issues.apache.org/jira/browse/YARN-4518
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Sunil G
> Attachments: YARN-4518-YARN-3368.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5273) [YARN-3368] Introduce footer in YARN UI

2016-06-20 Thread Sreenath Somarajapuram (JIRA)
Sreenath Somarajapuram created YARN-5273:


 Summary: [YARN-3368] Introduce footer in YARN UI
 Key: YARN-5273
 URL: https://issues.apache.org/jira/browse/YARN-5273
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Sreenath Somarajapuram
Assignee: Sreenath Somarajapuram


It would be good to have a footer in the UI that can display various basic 
details about the UI:
- To start with, display the Apache License version message with a link to the 
License doc.
- The footer must always stick to the bottom of the page.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org