[jira] [Commented] (YARN-3730) scheduler reserves more resources than required

2015-05-30 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14566130#comment-14566130
 ] 

Naganarasimha G R commented on YARN-3730:
-

Hi [~gu chi], 
In which version did you find this problem? If it's below 2.6.0, please test 
with the latest release, as there have been some improvements w.r.t. 
reservations in YARN-1769. If it's 2.6.0 or above, please share some RM logs 
with debug enabled so that we can do further analysis.

 scheduler reserves more resources than required
 -

 Key: YARN-3730
 URL: https://issues.apache.org/jira/browse/YARN-3730
 Project: Hadoop YARN
  Issue Type: Bug
  Components: scheduler
Reporter: gu-chi

 Using the capacity scheduler in an environment with 3 NMs, each with 9 vcores, 
 I ran a Spark job with 4 executors of 5 cores each. As expected, only 1 
 executor could not start and should have been reserved, but in fact more 
 containers than that were reserved. Because of this, I cannot run other, 
 smaller tasks. Looking at the capacity scheduler, the 'needContainers' method 
 in LeafQueue.java computes a 'starvation' value; this causes more containers 
 to be reserved than required. Any ideas or suggestions?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1462) AHS API and other AHS changes to handle tags for completed MR jobs

2015-05-30 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14566095#comment-14566095
 ] 

Zhijie Shen commented on YARN-1462:
---

If there are no more concerns, I'll commit the patch today.

 AHS API and other AHS changes to handle tags for completed MR jobs
 --

 Key: YARN-1462
 URL: https://issues.apache.org/jira/browse/YARN-1462
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.2.0
Reporter: Karthik Kambatla
Assignee: Xuan Gong
 Attachments: YARN-1462-branch-2.7-1.2.patch, 
 YARN-1462-branch-2.7-1.patch, YARN-1462.1.patch, YARN-1462.2.patch, 
 YARN-1462.3.patch


 AHS related work for tags. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3748) Cleanup Findbugs volatile warnings

2015-05-30 Thread Gabor Liptak (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Liptak updated YARN-3748:
---
Attachment: YARN-3748.2.patch

 Cleanup Findbugs volatile warnings
 --

 Key: YARN-3748
 URL: https://issues.apache.org/jira/browse/YARN-3748
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Gabor Liptak
Priority: Minor
 Attachments: YARN-3748.1.patch, YARN-3748.2.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3045) [Event producers] Implement NM writing container lifecycle events to ATS

2015-05-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14566132#comment-14566132
 ] 

Hadoop QA commented on YARN-3045:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  15m 32s | Findbugs (version ) appears to 
be broken on YARN-2928. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 6 new or modified test files. |
| {color:green}+1{color} | javac |   7m 42s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 41s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 26s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  2s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 40s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   1m 59s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | yarn tests |   7m  2s | Tests failed in 
hadoop-yarn-applications-distributedshell. |
| {color:red}-1{color} | yarn tests |   5m 59s | Tests failed in 
hadoop-yarn-server-nodemanager. |
| | |  51m  8s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-applications-distributedshell |
| Failed unit tests | 
hadoop.yarn.applications.distributedshell.TestDistributedShellWithNodeLabels |
|   | hadoop.yarn.applications.distributedshell.TestDistributedShell |
|   | 
hadoop.yarn.server.nodemanager.containermanager.TestContainerManagerRecovery |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12736354/YARN-3045-YARN-2928.003.patch
 |
| Optional Tests | javac unit findbugs checkstyle javadoc |
| git revision | YARN-2928 / a9738ceb |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-YARN-Build/8140/artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-applications-distributedshell.html
 |
| hadoop-yarn-applications-distributedshell test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/8140/artifact/patchprocess/testrun_hadoop-yarn-applications-distributedshell.txt
 |
| hadoop-yarn-server-nodemanager test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/8140/artifact/patchprocess/testrun_hadoop-yarn-server-nodemanager.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/8140/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf908.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/8140/console |


This message was automatically generated.

 [Event producers] Implement NM writing container lifecycle events to ATS
 

 Key: YARN-3045
 URL: https://issues.apache.org/jira/browse/YARN-3045
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Sangjin Lee
Assignee: Naganarasimha G R
  Labels: BB2015-05-TBR
 Attachments: YARN-3045-YARN-2928.002.patch, 
 YARN-3045-YARN-2928.003.patch, YARN-3045.20150420-1.patch


 Per design in YARN-2928, implement NM writing container lifecycle events and 
 container system metrics to ATS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3706) Generalize native HBase writer for additional tables

2015-05-30 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-3706:
---
Attachment: YARN-3726-YARN-2928.003.patch

Moved common classes into a common package to separate them from the actual 
tables and column definitions.
Cleaned up some of the methods. Pushed common code into the column class.

Left to do:
- Fix the remainder of the unit test that is currently commented out
- Create a similar readResult for the column prefix (a rough sketch follows 
below)
- Create a ColumnFactory to move the EntityColumn valueOf methods into
- Clean up unneeded methods from TimelineWriterUtils
- Add additional unit tests for methods newly added to TimelineWriterUtils
- Create a reader method (some class) that reads an entire entity back
- Add a unit test that reads an entire entity back and then compares all values 
to the written entity. That's pretty much the ultimate test and will ensure we 
have clean reader APIs.
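
To illustrate the readResult idea from the list above, here is a minimal, 
hypothetical sketch against the stock HBase 1.x client API; the helper name and 
its relationship to the YARN-2928 column abstractions are assumptions, not the 
actual patch.
{code}
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical read helper for a column prefix: collects all qualifier
// suffix -> raw value pairs under the given prefix from a scanned Result.
public final class ColumnPrefixReader {
  public static Map<String, byte[]> readResults(Result result, byte[] family,
      String prefix) {
    Map<String, byte[]> values = new TreeMap<>();
    NavigableMap<byte[], byte[]> familyMap = result.getFamilyMap(family);
    if (familyMap == null) {
      return values;
    }
    for (Map.Entry<byte[], byte[]> e : familyMap.entrySet()) {
      String qualifier = Bytes.toString(e.getKey());
      if (qualifier.startsWith(prefix)) {
        values.put(qualifier.substring(prefix.length()), e.getValue());
      }
    }
    return values;
  }
}
{code}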


 Generalize native HBase writer for additional tables
 

 Key: YARN-3706
 URL: https://issues.apache.org/jira/browse/YARN-3706
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Joep Rottinghuis
Assignee: Joep Rottinghuis
Priority: Minor
 Attachments: YARN-3706-YARN-2928.001.patch, 
 YARN-3726-YARN-2928.002.patch, YARN-3726-YARN-2928.003.patch


 When reviewing YARN-3411 we noticed that we could change the class hierarchy 
 a little in order to accommodate additional tables easily.
 In order to get ready for benchmark testing we left the original layout in 
 place, as performance would not be impacted by the code hierarchy.
 Here is a separate jira to address the hierarchy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3534) Collect memory/cpu usage on the node

2015-05-30 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-3534:
---
Description: 
YARN should be aware of the resource utilization of the nodes when scheduling 
containers. To support this, this task will implement the collection of 
memory/cpu usage on the node.


  was:YARN should be aware of the resource utilization of the nodes when 
scheduling containers. For this, this task will implement the 
NodeResourceMonitor and send this information to the Resource Manager in the 
heartbeat.


 Collect memory/cpu usage on the node
 

 Key: YARN-3534
 URL: https://issues.apache.org/jira/browse/YARN-3534
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager, resourcemanager
Affects Versions: 2.7.0
Reporter: Inigo Goiri
Assignee: Inigo Goiri
 Attachments: YARN-3534-1.patch, YARN-3534-2.patch, YARN-3534-3.patch, 
 YARN-3534-3.patch, YARN-3534-4.patch, YARN-3534-5.patch, YARN-3534-6.patch, 
 YARN-3534-7.patch, YARN-3534-8.patch, YARN-3534-9.patch

   Original Estimate: 336h
  Remaining Estimate: 336h

 YARN should be aware of the resource utilization of the nodes when scheduling 
 containers. To support this, this task will implement the collection of 
 memory/cpu usage on the node.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3045) [Event producers] Implement NM writing container lifecycle events to ATS

2015-05-30 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-3045:

Attachment: YARN-3045-YARN-2928.003.patch

Hi [~zjshen], [~djp] and [~sjlee0],
Based on the discussions in YARN-3044, the conclusions are:
YARN-3044: the RM will send all other lifecycle events except container events, 
and will send those only when configured to.
YARN-3045: the NM will always send the container lifecycle events.
YARN-3616: the RM publishes container lifecycle events for scenarios where 
containers are created and finished before reaching the NM.
Based on this, I have continued this jira and updated some test cases. Please 
review the latest patch.
Also, as per [~vinodkv]'s earlier 
[comment|https://issues.apache.org/jira/browse/YARN-3045?focusedCommentId=14520929&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14520929],
 I would like to move the container resource metrics publishing into the 
NMTimelinePublisher class itself (it is currently in ContainersMonitorImpl). 
Thoughts?

 [Event producers] Implement NM writing container lifecycle events to ATS
 

 Key: YARN-3045
 URL: https://issues.apache.org/jira/browse/YARN-3045
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Sangjin Lee
Assignee: Naganarasimha G R
  Labels: BB2015-05-TBR
 Attachments: YARN-3045-YARN-2928.002.patch, 
 YARN-3045-YARN-2928.003.patch, YARN-3045.20150420-1.patch


 Per design in YARN-2928, implement NM writing container lifecycle events and 
 container system metrics to ATS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3748) Cleanup Findbugs volatile warnings

2015-05-30 Thread Gabor Liptak (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Liptak updated YARN-3748:
---
Attachment: YARN-3748.3.patch

 Cleanup Findbugs volatile warnings
 --

 Key: YARN-3748
 URL: https://issues.apache.org/jira/browse/YARN-3748
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Gabor Liptak
Priority: Minor
 Attachments: YARN-3748.1.patch, YARN-3748.2.patch, YARN-3748.3.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3748) Cleanup Findbugs volatile warnings

2015-05-30 Thread Gabor Liptak (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14566115#comment-14566115
 ] 

Gabor Liptak commented on YARN-3748:


[~busbey] Updated patch to make variables final.

 Cleanup Findbugs volatile warnings
 --

 Key: YARN-3748
 URL: https://issues.apache.org/jira/browse/YARN-3748
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Gabor Liptak
Priority: Minor
 Attachments: YARN-3748.1.patch, YARN-3748.2.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3467) Expose allocatedMB, allocatedVCores, and runningContainers metrics on running Applications in RM Web UI

2015-05-30 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14566182#comment-14566182
 ] 

Karthik Kambatla commented on YARN-3467:


The test failure looks unrelated, and I am comfortable with not adding a test 
here.

+1, checking this in.

 Expose allocatedMB, allocatedVCores, and runningContainers metrics on running 
 Applications in RM Web UI
 ---

 Key: YARN-3467
 URL: https://issues.apache.org/jira/browse/YARN-3467
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: webapp, yarn
Affects Versions: 2.5.0
Reporter: Anthony Rojas
Assignee: Anubhav Dhoot
Priority: Minor
 Attachments: ApplicationAttemptPage.png, Screen Shot 2015-05-26 at 
 5.46.54 PM.png, YARN-3467.001.patch, YARN-3467.002.patch, yarn-3467-1.patch


 The YARN REST API can report on the following properties:
 *allocatedMB*: The sum of memory in MB allocated to the application's running 
 containers
 *allocatedVCores*: The sum of virtual cores allocated to the application's 
 running containers
 *runningContainers*: The number of containers currently running for the 
 application
 Currently, the RM Web UI does not report on these items (at least I couldn't 
 find any entries within the Web UI).
 It would be useful for YARN Application and Resource troubleshooting to have 
 these properties and their corresponding values exposed on the RM WebUI.
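
For reference, these three fields are already present in the RM REST API's 
per-application JSON; a minimal sketch of fetching them follows (host, port, 
and application id are example values):
{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal sketch: fetch an application's JSON from the RM REST API, which
// includes allocatedMB, allocatedVCores, and runningContainers for running
// apps. Host, port, and application id are example values.
public class AppMetricsFetcher {
  public static void main(String[] args) throws Exception {
    URL url = new URL(
        "http://rm-host:8088/ws/v1/cluster/apps/application_1433000000000_0001");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestProperty("Accept", "application/json");
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream()))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line);
      }
    }
  }
}
{code}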



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3467) Expose allocatedMB, allocatedVCores, and runningContainers metrics on running Applications in RM Web UI

2015-05-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14566188#comment-14566188
 ] 

Hudson commented on YARN-3467:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7932 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7932/])
YARN-3467. Expose allocatedMB, allocatedVCores, and runningContainers metrics 
on running Applications in RM Web UI. (Anubhav Dhoot via kasha) (kasha: rev 
a8acdd65b3f0e8633050a1100136fd5e02ebdcfa)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppsBlock.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/dao/AppInfo.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/FairSchedulerAppsBlock.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java


 Expose allocatedMB, allocatedVCores, and runningContainers metrics on running 
 Applications in RM Web UI
 ---

 Key: YARN-3467
 URL: https://issues.apache.org/jira/browse/YARN-3467
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: webapp, yarn
Affects Versions: 2.5.0
Reporter: Anthony Rojas
Assignee: Anubhav Dhoot
Priority: Minor
 Fix For: 2.8.0

 Attachments: ApplicationAttemptPage.png, Screen Shot 2015-05-26 at 
 5.46.54 PM.png, YARN-3467.001.patch, YARN-3467.002.patch, yarn-3467-1.patch


 The YARN REST API can report on the following properties:
 *allocatedMB*: The sum of memory in MB allocated to the application's running 
 containers
 *allocatedVCores*: The sum of virtual cores allocated to the application's 
 running containers
 *runningContainers*: The number of containers currently running for the 
 application
 Currently, the RM Web UI does not report on these items (at least I couldn't 
 find any entries within the Web UI).
 It would be useful for YARN Application and Resource troubleshooting to have 
 these properties and their corresponding values exposed on the RM WebUI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3748) Cleanup Findbugs volatile warnings

2015-05-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14566151#comment-14566151
 ] 

Hadoop QA commented on YARN-3748:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m 59s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 34s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 40s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   0m 46s | The applied patch generated  1 
new checkstyle issues (total was 228, now 228). |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 26s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | yarn tests |  60m 28s | Tests failed in 
hadoop-yarn-server-resourcemanager. |
| | |  98m 23s | |
\\
\\
|| Reason || Tests ||
| Timed out tests | 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12736353/YARN-3748.2.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / eb6bf91 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-YARN-Build/8139/artifact/patchprocess/diffcheckstylehadoop-yarn-server-resourcemanager.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/8139/artifact/patchprocess/whitespace.txt
 |
| hadoop-yarn-server-resourcemanager test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/8139/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/8139/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/8139/console |


This message was automatically generated.

 Cleanup Findbugs volatile warnings
 --

 Key: YARN-3748
 URL: https://issues.apache.org/jira/browse/YARN-3748
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Gabor Liptak
Priority: Minor
 Attachments: YARN-3748.1.patch, YARN-3748.2.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3749) We should make a copy of configuration when init MiniYARNCluster with multiple RMs

2015-05-30 Thread Chun Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14566326#comment-14566326
 ] 

Chun Chen commented on YARN-3749:
-

Uploaded a patch to fix it.

 We should make a copy of configuration when init MiniYARNCluster with 
 multiple RMs
 --

 Key: YARN-3749
 URL: https://issues.apache.org/jira/browse/YARN-3749
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Chun Chen
Assignee: Chun Chen
 Attachments: YARN-3749.patch


 When I was trying to write a test case for YARN-2674, I found the DS client 
 trying to connect to both rm1 and rm2 with the same address 0.0.0.0:18032 
 when the RM failed over. But I had initially set 
 yarn.resourcemanager.address.rm1=0.0.0.0:18032 and 
 yarn.resourcemanager.address.rm2=0.0.0.0:28032. After digging, I found that 
 it is in ClientRMService that the value of yarn.resourcemanager.address.rm2 
 gets changed to 0.0.0.0:18032. See the following code in ClientRMService:
 {code}
 clientBindAddress = conf.updateConnectAddr(YarnConfiguration.RM_BIND_HOST,
                                            YarnConfiguration.RM_ADDRESS,
                                            YarnConfiguration.DEFAULT_RM_ADDRESS,
                                            server.getListenerAddress());
 {code}
 Since we use the same Configuration instance for rm1 and rm2, and init both 
 RMs before we start them, we change yarn.resourcemanager.ha.id to rm2 during 
 the init of rm2, so yarn.resourcemanager.ha.id is already rm2 when rm1 
 starts.
 So I think it is safer to make a copy of the configuration when initializing 
 each RM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2674) Distributed shell AM may re-launch containers if RM work preserving restart happens

2015-05-30 Thread Chun Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14566327#comment-14566327
 ] 

Chun Chen commented on YARN-2674:
-

Thanks for the comments, [~vinodkv]. Will upload a new patch with a test case 
once YARN-3749 is fixed.

 Distributed shell AM may re-launch containers if RM work preserving restart 
 happens
 ---

 Key: YARN-2674
 URL: https://issues.apache.org/jira/browse/YARN-2674
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Chun Chen
Assignee: Chun Chen
 Attachments: YARN-2674.1.patch, YARN-2674.2.patch


 Currently, if an RM work-preserving restart happens while distributed shell 
 is running, the distributed shell AM may re-launch all the containers, 
 including new, running, and completed ones. We must make sure it won't 
 re-launch the running/completed containers.
 We need to remove allocated containers from 
 AMRMClientImpl#remoteRequestsTable once the AM receives them from the RM. 
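
A hedged sketch of that AM-side fix, using the public AMRMClient API; the 
request lookup is application-specific and the helper names below are 
hypothetical:
{code}
// Sketch: once containers are allocated, remove the matching outstanding
// request so the client will not re-ask for it after an RM restart.
// findMatchingRequest and launchContainer are hypothetical app-side helpers.
List<Container> allocated = allocateResponse.getAllocatedContainers();
for (Container container : allocated) {
  ContainerRequest matched = findMatchingRequest(container);
  if (matched != null) {
    amRMClient.removeContainerRequest(matched);
  }
  launchContainer(container);
}
{code}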



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2900) Application (Attempt and Container) Not Found in AHS results in Internal Server Error (500)

2015-05-30 Thread Mit Desai (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14566294#comment-14566294
 ] 

Mit Desai commented on YARN-2900:
-

Looks good to me. Thanks [~zjshen] for looking into it.

 Application (Attempt and Container) Not Found in AHS results in Internal 
 Server Error (500)
 ---

 Key: YARN-2900
 URL: https://issues.apache.org/jira/browse/YARN-2900
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Jonathan Eagles
Assignee: Mit Desai
 Attachments: YARN-2900-b2-2.patch, YARN-2900-b2.patch, 
 YARN-2900.20150529.patch, YARN-2900.patch, YARN-2900.patch, YARN-2900.patch, 
 YARN-2900.patch, YARN-2900.patch, YARN-2900.patch, YARN-2900.patch, 
 YARN-2900.patch


 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryManagerImpl.convertToApplicationReport(ApplicationHistoryManagerImpl.java:128)
   at 
 org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryManagerImpl.getApplication(ApplicationHistoryManagerImpl.java:118)
   at 
 org.apache.hadoop.yarn.server.webapp.WebServices$2.run(WebServices.java:222)
   at 
 org.apache.hadoop.yarn.server.webapp.WebServices$2.run(WebServices.java:219)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1679)
   at 
 org.apache.hadoop.yarn.server.webapp.WebServices.getApp(WebServices.java:218)
   ... 59 more
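
The NPE above comes from converting a report for an application that the AHS 
does not know about. The general shape of the fix is to detect the missing 
entity and surface a 404; a hedged sketch (not the committed patch):
{code}
// Sketch of the general shape of the fix: translate a missing application
// into a 404 (NotFoundException) instead of letting an NPE surface as a 500.
ApplicationReport app = history.getApplication(appId);
if (app == null) {
  throw new org.apache.hadoop.yarn.webapp.NotFoundException(
      "app with id: " + appId + " not found");
}
{code}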



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1462) AHS API and other AHS changes to handle tags for completed MR jobs

2015-05-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14566334#comment-14566334
 ] 

Hudson commented on YARN-1462:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7933 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7933/])
YARN-1462. Made RM write application tags to timeline server and exposed them 
to users via generic history web UI and REST API. Contributed by Xuan Gong. 
(zjshen: rev 4a9ec1a8243e2394ff7221b1c20dfaa80e9f5111)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/metrics/SystemMetricsPublisher.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestApplicatonReport.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/metrics/ApplicationMetricsConstants.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/NotRunningJob.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestYarnClient.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/metrics/ApplicationCreatedEvent.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/metrics/TestSystemMetricsPublisher.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestClientServiceDelegate.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryManagerOnTimelineStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebApp.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/MockAsm.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAHSClient.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServer.md
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/ProtocolHATestBase.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ApplicationReport.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestYARNRunner.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java


 AHS API and other AHS changes to handle tags for completed MR jobs
 --

 Key: YARN-1462
 URL: https://issues.apache.org/jira/browse/YARN-1462
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.2.0
Reporter: Karthik Kambatla
Assignee: Xuan Gong
 Fix For: 2.7.1

 Attachments: YARN-1462-branch-2.7-1.2.patch, 
 YARN-1462-branch-2.7-1.patch, YARN-1462.1.patch, YARN-1462.2.patch, 
 YARN-1462.3.patch


 AHS related work for tags. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3748) Cleanup Findbugs volatile warnings

2015-05-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14566229#comment-14566229
 ] 

Hadoop QA commented on YARN-3748:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  18m 24s | Findbugs (version ) appears to 
be broken on trunk. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   9m 47s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  12m  0s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 41s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 35s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   2m  0s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 41s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 53s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | yarn tests |  64m 13s | Tests failed in 
hadoop-yarn-server-resourcemanager. |
| | | 110m 17s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12736372/YARN-3748.3.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / a8acdd6 |
| hadoop-yarn-server-resourcemanager test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/8141/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/8141/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf908.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/8141/console |


This message was automatically generated.

 Cleanup Findbugs volatile warnings
 --

 Key: YARN-3748
 URL: https://issues.apache.org/jira/browse/YARN-3748
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Gabor Liptak
Priority: Minor
 Attachments: YARN-3748.1.patch, YARN-3748.2.patch, YARN-3748.3.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3534) Collect memory/cpu usage on the node

2015-05-30 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14566283#comment-14566283
 ] 

Karthik Kambatla commented on YARN-3534:


Thanks for working on this, Inigo.

A few comments:
# Given these stats are to be sent to the RM in the heartbeat, should we 
capture the aggregate node resource usage in {{ResourceUtilization}}? 
# Instead of adding a separate {{DEFAULT_NM_NODE_MON_INTERVAL_MS}}, we should 
probably just re-use the default for the container-monitor?
# Also, should we add another config, 
{{yarn.nodemanager.usage-monitor.interval-ms}}, that both the container-monitor 
and the node-monitor inherit unless specified otherwise? If that seems 
reasonable, we should deprecate the default value for the container-monitor 
interval.
# For the monitoring thread (a sketch follows below),
## set the thread name?
## make it a daemon thread?
## on {{monitoringThread.join()}}, specify a timeout as well. 
## in the corresponding catch-block, at least log that the wait for the 
monitoring thread was interrupted.
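
A minimal sketch of points 4.1-4.4, with illustrative names (not taken from 
the patch):
{code}
// Illustrative sketch: a named daemon monitoring thread, a bounded join on
// stop, and a log message if the wait is interrupted. Names are examples.
private Thread monitoringThread;

protected void serviceStart() throws Exception {
  monitoringThread = new Thread(new MonitoringRunnable(), "Node Resource Monitor");
  monitoringThread.setDaemon(true);
  monitoringThread.start();
  super.serviceStart();
}

protected void serviceStop() throws Exception {
  if (monitoringThread != null) {
    monitoringThread.interrupt();
    try {
      monitoringThread.join(10 * 1000); // don't wait indefinitely
    } catch (InterruptedException e) {
      LOG.warn("Interrupted while waiting for the monitoring thread to stop", e);
    }
  }
  super.serviceStop();
}
{code}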

 Collect memory/cpu usage on the node
 

 Key: YARN-3534
 URL: https://issues.apache.org/jira/browse/YARN-3534
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager, resourcemanager
Affects Versions: 2.7.0
Reporter: Inigo Goiri
Assignee: Inigo Goiri
 Attachments: YARN-3534-1.patch, YARN-3534-2.patch, YARN-3534-3.patch, 
 YARN-3534-3.patch, YARN-3534-4.patch, YARN-3534-5.patch, YARN-3534-6.patch, 
 YARN-3534-7.patch, YARN-3534-8.patch, YARN-3534-9.patch

   Original Estimate: 336h
  Remaining Estimate: 336h

 YARN should be aware of the resource utilization of the nodes when scheduling 
 containers. To support this, this task will implement the collection of 
 memory/cpu usage on the node.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3749) We should make a copy of configuration when init MiniYARNCluster with multiple RMs

2015-05-30 Thread Chun Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chun Chen updated YARN-3749:

Summary: We should make a copy of configuration when init MiniYARNCluster 
with multiple RMs  (was: We should make a copy of config MiniYARNCluster )

 We should make a copy of configuration when init MiniYARNCluster with 
 multiple RMs
 --

 Key: YARN-3749
 URL: https://issues.apache.org/jira/browse/YARN-3749
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Chun Chen





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-3749) We should make a copy of config MiniYARNCluster

2015-05-30 Thread Chun Chen (JIRA)
Chun Chen created YARN-3749:
---

 Summary: We should make a copy of config MiniYARNCluster 
 Key: YARN-3749
 URL: https://issues.apache.org/jira/browse/YARN-3749
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Chun Chen






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3749) We should make a copy of configuration when init MiniYARNCluster with multiple RMs

2015-05-30 Thread Chun Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chun Chen updated YARN-3749:

Description: 
When I was trying to write a test case for YARN-2674, I found the DS client 
trying to connect to both rm1 and rm2 with the same address 0.0.0.0:18032 when 
the RM failed over. But I had initially set 
yarn.resourcemanager.address.rm1=0.0.0.0:18032 and 
yarn.resourcemanager.address.rm2=0.0.0.0:28032. After digging, I found that it 
is in ClientRMService that the value of yarn.resourcemanager.address.rm2 gets 
changed to 0.0.0.0:18032. See the following code in ClientRMService:
{code}
clientBindAddress = conf.updateConnectAddr(YarnConfiguration.RM_BIND_HOST,
                                           YarnConfiguration.RM_ADDRESS,
                                           YarnConfiguration.DEFAULT_RM_ADDRESS,
                                           server.getListenerAddress());
{code}

Since we use the same Configuration instance for rm1 and rm2, and init both RMs 
before we start them, we change yarn.resourcemanager.ha.id to rm2 during the 
init of rm2, so yarn.resourcemanager.ha.id is already rm2 when rm1 starts.
So I think it is safer to make a copy of the configuration when initializing 
each RM.
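
A minimal sketch of what the proposed MiniYARNCluster change could look like 
(the loop is paraphrased, not the actual patch):
{code}
// Sketch: give each RM its own Configuration copy so that the init of rm2
// cannot mutate (e.g. yarn.resourcemanager.ha.id in) the conf rm1 starts with.
for (int i = 0; i < resourceManagers.length; i++) {
  Configuration rmConf = new YarnConfiguration(conf); // per-RM copy
  resourceManagers[i].init(rmConf);
}
{code}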

 We should make a copy of configuration when init MiniYARNCluster with 
 multiple RMs
 --

 Key: YARN-3749
 URL: https://issues.apache.org/jira/browse/YARN-3749
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Chun Chen

 When I was trying to write a test case for YARN-2674, I found the DS client 
 trying to connect to both rm1 and rm2 with the same address 0.0.0.0:18032 
 when the RM failed over. But I had initially set 
 yarn.resourcemanager.address.rm1=0.0.0.0:18032 and 
 yarn.resourcemanager.address.rm2=0.0.0.0:28032. After digging, I found that 
 it is in ClientRMService that the value of yarn.resourcemanager.address.rm2 
 gets changed to 0.0.0.0:18032. See the following code in ClientRMService:
 {code}
 clientBindAddress = conf.updateConnectAddr(YarnConfiguration.RM_BIND_HOST,
                                            YarnConfiguration.RM_ADDRESS,
                                            YarnConfiguration.DEFAULT_RM_ADDRESS,
                                            server.getListenerAddress());
 {code}
 Since we use the same Configuration instance for rm1 and rm2, and init both 
 RMs before we start them, we change yarn.resourcemanager.ha.id to rm2 during 
 the init of rm2, so yarn.resourcemanager.ha.id is already rm2 when rm1 
 starts.
 So I think it is safer to make a copy of the configuration when initializing 
 each RM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (YARN-3749) We should make a copy of configuration when init MiniYARNCluster with multiple RMs

2015-05-30 Thread Chun Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chun Chen reassigned YARN-3749:
---

Assignee: Chun Chen

 We should make a copy of configuration when init MiniYARNCluster with 
 multiple RMs
 --

 Key: YARN-3749
 URL: https://issues.apache.org/jira/browse/YARN-3749
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Chun Chen
Assignee: Chun Chen

 When I was trying to write a test case for YARN-2674, I found the DS client 
 trying to connect to both rm1 and rm2 with the same address 0.0.0.0:18032 
 when the RM failed over. But I had initially set 
 yarn.resourcemanager.address.rm1=0.0.0.0:18032 and 
 yarn.resourcemanager.address.rm2=0.0.0.0:28032. After digging, I found that 
 it is in ClientRMService that the value of yarn.resourcemanager.address.rm2 
 gets changed to 0.0.0.0:18032. See the following code in ClientRMService:
 {code}
 clientBindAddress = conf.updateConnectAddr(YarnConfiguration.RM_BIND_HOST,
                                            YarnConfiguration.RM_ADDRESS,
                                            YarnConfiguration.DEFAULT_RM_ADDRESS,
                                            server.getListenerAddress());
 {code}
 Since we use the same Configuration instance for rm1 and rm2, and init both 
 RMs before we start them, we change yarn.resourcemanager.ha.id to rm2 during 
 the init of rm2, so yarn.resourcemanager.ha.id is already rm2 when rm1 
 starts.
 So I think it is safer to make a copy of the configuration when initializing 
 each RM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2716) Refactor ZKRMStateStore retry code with Apache Curator

2015-05-30 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-2716:
---
Attachment: yarn-2716-super-prelim.patch

Had some time today to look at this again. Here is a super-preliminary patch, 
in case I fail to find more time to wrap this up. 

Pending items (a sketch of the basic Curator client setup follows below):
# Use Curator transactions to handle multi.
# Fix the test failures in TestZKRMStateStore. I tried catching NoNode and 
NodeExists in delete and create, but that led to protobuf issues.
# Rewrite TestZKRMStateStoreZKClientConnections.
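
For reference, a minimal sketch of the Curator client setup such a refactor 
builds on (connection string, paths, and retry values are example values):
{code}
// Sketch: a Curator client with a retry policy, replacing hand-rolled
// ZooKeeper retry loops. Uses org.apache.curator.framework.* and
// org.apache.curator.retry.ExponentialBackoffRetry; values are examples.
CuratorFramework client = CuratorFrameworkFactory.builder()
    .connectString("zk1:2181,zk2:2181,zk3:2181")
    .retryPolicy(new ExponentialBackoffRetry(1000, 3))
    .build();
client.start();
byte[] data = client.getData().forPath("/rmstore/ZKRMStateRoot");
{code}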

 Refactor ZKRMStateStore retry code with Apache Curator
 --

 Key: YARN-2716
 URL: https://issues.apache.org/jira/browse/YARN-2716
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jian He
Assignee: Karthik Kambatla
 Attachments: yarn-2716-super-prelim.patch


 Per suggestion by [~kasha] in YARN-2131,  it's nice to use curator to 
 simplify the retry logic in ZKRMStateStore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3749) We should make a copy of configuration when init MiniYARNCluster with multiple RMs

2015-05-30 Thread Chun Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chun Chen updated YARN-3749:

Attachment: YARN-3749.patch

 We should make a copy of configuration when init MiniYARNCluster with 
 multiple RMs
 --

 Key: YARN-3749
 URL: https://issues.apache.org/jira/browse/YARN-3749
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Chun Chen
Assignee: Chun Chen
 Attachments: YARN-3749.patch


 When I was trying to write a test case for YARN-2674, I found the DS client 
 trying to connect to both rm1 and rm2 with the same address 0.0.0.0:18032 
 when the RM failed over. But I had initially set 
 yarn.resourcemanager.address.rm1=0.0.0.0:18032 and 
 yarn.resourcemanager.address.rm2=0.0.0.0:28032. After digging, I found that 
 it is in ClientRMService that the value of yarn.resourcemanager.address.rm2 
 gets changed to 0.0.0.0:18032. See the following code in ClientRMService:
 {code}
 clientBindAddress = conf.updateConnectAddr(YarnConfiguration.RM_BIND_HOST,
                                            YarnConfiguration.RM_ADDRESS,
                                            YarnConfiguration.DEFAULT_RM_ADDRESS,
                                            server.getListenerAddress());
 {code}
 Since we use the same Configuration instance for rm1 and rm2, and init both 
 RMs before we start them, we change yarn.resourcemanager.ha.id to rm2 during 
 the init of rm2, so yarn.resourcemanager.ha.id is already rm2 when rm1 
 starts.
 So I think it is safer to make a copy of the configuration when initializing 
 each RM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2900) Application (Attempt and Container) Not Found in AHS results in Internal Server Error (500)

2015-05-30 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14566354#comment-14566354
 ] 

Xuan Gong commented on YARN-2900:
-

[~zjshen] I have committed the patch to trunk/branch-2, but it looks like it 
does not apply to branch-2.7. Could you work up a patch for branch-2.7?

 Application (Attempt and Container) Not Found in AHS results in Internal 
 Server Error (500)
 ---

 Key: YARN-2900
 URL: https://issues.apache.org/jira/browse/YARN-2900
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Jonathan Eagles
Assignee: Mit Desai
 Attachments: YARN-2900-b2-2.patch, YARN-2900-b2.patch, 
 YARN-2900.20150529.patch, YARN-2900.patch, YARN-2900.patch, YARN-2900.patch, 
 YARN-2900.patch, YARN-2900.patch, YARN-2900.patch, YARN-2900.patch, 
 YARN-2900.patch


 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryManagerImpl.convertToApplicationReport(ApplicationHistoryManagerImpl.java:128)
   at 
 org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryManagerImpl.getApplication(ApplicationHistoryManagerImpl.java:118)
   at 
 org.apache.hadoop.yarn.server.webapp.WebServices$2.run(WebServices.java:222)
   at 
 org.apache.hadoop.yarn.server.webapp.WebServices$2.run(WebServices.java:219)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1679)
   at 
 org.apache.hadoop.yarn.server.webapp.WebServices.getApp(WebServices.java:218)
   ... 59 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3467) Expose allocatedMB, allocatedVCores, and runningContainers metrics on running Applications in RM Web UI

2015-05-30 Thread Anubhav Dhoot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14566363#comment-14566363
 ] 

Anubhav Dhoot commented on YARN-3467:
-

Thanks [~kasha] for review and commit!

 Expose allocatedMB, allocatedVCores, and runningContainers metrics on running 
 Applications in RM Web UI
 ---

 Key: YARN-3467
 URL: https://issues.apache.org/jira/browse/YARN-3467
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: webapp, yarn
Affects Versions: 2.5.0
Reporter: Anthony Rojas
Assignee: Anubhav Dhoot
Priority: Minor
 Fix For: 2.8.0

 Attachments: ApplicationAttemptPage.png, Screen Shot 2015-05-26 at 
 5.46.54 PM.png, YARN-3467.001.patch, YARN-3467.002.patch, yarn-3467-1.patch


 The YARN REST API can report on the following properties:
 *allocatedMB*: The sum of memory in MB allocated to the application's running 
 containers
 *allocatedVCores*: The sum of virtual cores allocated to the application's 
 running containers
 *runningContainers*: The number of containers currently running for the 
 application
 Currently, the RM Web UI does not report on these items (at least I couldn't 
 find any entries within the Web UI).
 It would be useful for YARN Application and Resource troubleshooting to have 
 these properties and their corresponding values exposed on the RM WebUI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-3750) yarn.log.server.url is not documented in yarn-default.xml

2015-05-30 Thread Dmitry Sivachenko (JIRA)
Dmitry Sivachenko created YARN-3750:
---

 Summary: yarn.log.server.url is not documented in yarn-default.xml
 Key: YARN-3750
 URL: https://issues.apache.org/jira/browse/YARN-3750
 Project: Hadoop YARN
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.0
Reporter: Dmitry Sivachenko
Priority: Minor


From 
http://mail-archives.apache.org/mod_mbox/hadoop-user/201505.mbox/%3cd18c9931.52700%25xg...@hortonworks.com%3e
I learned about the yarn.log.server.url setting.

But it is not mentioned in the yarn-default.xml file.

I propose adding this variable there with a short description.
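
For example, an entry along these lines could be added; the description wording 
and the example value (pointing at the MapReduce JobHistory Server's log 
endpoint) are suggestions, not existing documentation:
{code}
<!-- Suggested yarn-default.xml entry; wording and example value are proposals. -->
<property>
  <description>URL of the log server where aggregated container logs can be
  viewed, e.g. the MapReduce JobHistory Server's log endpoint.</description>
  <name>yarn.log.server.url</name>
  <value>http://jhs-host:19888/jobhistory/logs</value>
</property>
{code}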



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2900) Application (Attempt and Container) Not Found in AHS results in Internal Server Error (500)

2015-05-30 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14566351#comment-14566351
 ] 

Xuan Gong commented on YARN-2900:
-

+1. Checking this in.

 Application (Attempt and Container) Not Found in AHS results in Internal 
 Server Error (500)
 ---

 Key: YARN-2900
 URL: https://issues.apache.org/jira/browse/YARN-2900
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Jonathan Eagles
Assignee: Mit Desai
 Attachments: YARN-2900-b2-2.patch, YARN-2900-b2.patch, 
 YARN-2900.20150529.patch, YARN-2900.patch, YARN-2900.patch, YARN-2900.patch, 
 YARN-2900.patch, YARN-2900.patch, YARN-2900.patch, YARN-2900.patch, 
 YARN-2900.patch


 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryManagerImpl.convertToApplicationReport(ApplicationHistoryManagerImpl.java:128)
   at 
 org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryManagerImpl.getApplication(ApplicationHistoryManagerImpl.java:118)
   at 
 org.apache.hadoop.yarn.server.webapp.WebServices$2.run(WebServices.java:222)
   at 
 org.apache.hadoop.yarn.server.webapp.WebServices$2.run(WebServices.java:219)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1679)
   at 
 org.apache.hadoop.yarn.server.webapp.WebServices.getApp(WebServices.java:218)
   ... 59 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2900) Application (Attempt and Container) Not Found in AHS results in Internal Server Error (500)

2015-05-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14566355#comment-14566355
 ] 

Hudson commented on YARN-2900:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7934 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7934/])
YARN-2900. Application (Attempt and Container) Not Found in AHS results (xgong: 
rev 06f8e9cabaf3c05cd7d16215cff47265ea773f39)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebServices.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/webapp/TimelineWebServices.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryClientService.java


 Application (Attempt and Container) Not Found in AHS results in Internal 
 Server Error (500)
 ---

 Key: YARN-2900
 URL: https://issues.apache.org/jira/browse/YARN-2900
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Jonathan Eagles
Assignee: Mit Desai
 Attachments: YARN-2900-b2-2.patch, YARN-2900-b2.patch, 
 YARN-2900.20150529.patch, YARN-2900.patch, YARN-2900.patch, YARN-2900.patch, 
 YARN-2900.patch, YARN-2900.patch, YARN-2900.patch, YARN-2900.patch, 
 YARN-2900.patch


 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryManagerImpl.convertToApplicationReport(ApplicationHistoryManagerImpl.java:128)
   at 
 org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryManagerImpl.getApplication(ApplicationHistoryManagerImpl.java:118)
   at 
 org.apache.hadoop.yarn.server.webapp.WebServices$2.run(WebServices.java:222)
   at 
 org.apache.hadoop.yarn.server.webapp.WebServices$2.run(WebServices.java:219)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1679)
   at 
 org.apache.hadoop.yarn.server.webapp.WebServices.getApp(WebServices.java:218)
   ... 59 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3508) Preemption processing occurring on the main RM dispatcher

2015-05-30 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-3508:
---
Attachment: YARN-3508.02.patch

 Preemption processing occurring on the main RM dispatcher
 

 Key: YARN-3508
 URL: https://issues.apache.org/jira/browse/YARN-3508
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, scheduler
Affects Versions: 2.6.0
Reporter: Jason Lowe
Assignee: Varun Saxena
 Attachments: YARN-3508.01.patch, YARN-3508.02.patch


 We recently saw the RM for a large cluster lag far behind on the 
 AsyncDispatcher event queue.  The AsyncDispatcher thread was consistently 
 blocked on the highly-contended CapacityScheduler lock trying to dispatch 
 preemption-related events for RMContainerPreemptEventDispatcher.  Preemption 
 processing should occur on the scheduler event dispatcher thread or a 
 separate thread to avoid delaying the processing of other events in the 
 primary dispatcher queue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3547) FairScheduler: Apps that have no resource demand should not participate in scheduling

2015-05-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565934#comment-14565934
 ] 

Hudson commented on YARN-3547:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #943 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/943/])
YARN-3547. FairScheduler: Apps that have no resource demand should not 
participate scheduling. (Xianyin Xin via kasha) (kasha: rev 
3ae2a625018bc8cf431aa19da5bf8fe4ef8c1ad4)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSLeafQueue.java
* hadoop-yarn-project/CHANGES.txt


 FairScheduler: Apps that have no resource demand should not participate in 
 scheduling
 --

 Key: YARN-3547
 URL: https://issues.apache.org/jira/browse/YARN-3547
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: fairscheduler
Reporter: Xianyin Xin
Assignee: Xianyin Xin
 Fix For: 2.8.0

 Attachments: YARN-3547.001.patch, YARN-3547.002.patch, 
 YARN-3547.003.patch, YARN-3547.004.patch, YARN-3547.005.patch, 
 YARN-3547.006.patch


 At present, all of the 'running' apps participate in the scheduling process; 
 however, on a production cluster most of them may have no resource demand, 
 since an app's status is 'running' rather than 'waiting for resources' for 
 most of its lifetime. It is not wise to sort all the 'running' apps and try 
 to fulfill them, especially on a large-scale cluster with a heavy scheduling 
 load. 
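
A hedged sketch of the idea (the actual FSLeafQueue change may differ): filter 
out apps with no unmet demand before sorting and assigning:
{code}
// Illustrative sketch: consider only apps whose demand exceeds their current
// usage when sorting/assigning. The actual patch's structure may differ.
List<FSAppAttempt> demanding = new ArrayList<>();
for (FSAppAttempt app : runnableApps) {
  if (Resources.greaterThan(RESOURCE_CALCULATOR, clusterResource,
      app.getDemand(), app.getResourceUsage())) {
    demanding.add(app); // still has unmet demand
  }
}
Collections.sort(demanding, comparator);
for (FSAppAttempt app : demanding) {
  // try to assign a container ...
}
{code}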



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3713) Remove duplicate function call storeContainerDiagnostics in ContainerDiagnosticsUpdateTransition

2015-05-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565936#comment-14565936
 ] 

Hudson commented on YARN-3713:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #943 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/943/])
YARN-3713. Remove duplicate function call storeContainerDiagnostics in 
ContainerDiagnosticsUpdateTransition (zxu via rkanter) (rkanter: rev 
6aec13cb338b0fe62ca915f78aa729c9b0b86fba)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
* hadoop-yarn-project/CHANGES.txt


 Remove duplicate function call storeContainerDiagnostics in 
 ContainerDiagnosticsUpdateTransition
 

 Key: YARN-3713
 URL: https://issues.apache.org/jira/browse/YARN-3713
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Affects Versions: 2.7.0
Reporter: zhihai xu
Assignee: zhihai xu
Priority: Minor
  Labels: cleanup
 Fix For: 2.8.0

 Attachments: YARN-3713.000.patch


 Remove the duplicate function call {{storeContainerDiagnostics}} in 
 ContainerDiagnosticsUpdateTransition. {{storeContainerDiagnostics}} is 
 already called in ContainerImpl#addDiagnostics: 
 {code}
   private void addDiagnostics(String... diags) {
     for (String s : diags) {
       this.diagnostics.append(s);
     }
     try {
       stateStore.storeContainerDiagnostics(containerId, diagnostics);
     } catch (IOException e) {
       LOG.warn("Unable to update diagnostics in state store for "
           + containerId, e);
     }
   }
 {code} 
 So we don't need to call {{storeContainerDiagnostics}} in 
 ContainerDiagnosticsUpdateTransition#transition:
 {code}
   container.addDiagnostics(updateEvent.getDiagnosticsUpdate(), "\n");
   try {
     container.stateStore.storeContainerDiagnostics(container.containerId,
         container.diagnostics);
   } catch (IOException e) {
     LOG.warn("Unable to update state store diagnostics for "
         + container.containerId, e);
   }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3740) Fixed the typo in the configuration name: APPLICATION_HISTORY_PREFIX_MAX_APPS

2015-05-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14565937#comment-14565937
 ] 

Hudson commented on YARN-3740:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #943 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/943/])
YARN-3740. Fixed the typo in the configuration name: 
APPLICATION_HISTORY_PREFIX_MAX_APPS. Contributed by Xuan Gong. (zjshen: rev 
eb6bf91eeacf97afb4cefe590f75ba94f3187d2b)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryClientService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java


 Fixed the typo with the configuration name: 
 APPLICATION_HISTORY_PREFIX_MAX_APPS
 ---

 Key: YARN-3740
 URL: https://issues.apache.org/jira/browse/YARN-3740
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager, webapp, yarn
Reporter: Xuan Gong
Assignee: Xuan Gong
 Fix For: 2.8.0

 Attachments: YARN-3740.1.patch


 YARN-3700 introduced a new configuration named 
 APPLICATION_HISTORY_PREFIX_MAX_APPS, which needs to be changed to 
 APPLICATION_HISTORY_MAX_APPS. 
 This is not an incompatible change, since YARN-3700 is only in 2.8.
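 The fix is a straight rename of the constant in YarnConfiguration, presumably 
 along these lines (the property suffix shown here is an assumption, not 
 quoted from the patch):
 {code}
   // Before (name introduced by YARN-3700, with the stray PREFIX):
   public static final String APPLICATION_HISTORY_PREFIX_MAX_APPS =
       APPLICATION_HISTORY_PREFIX + "max-applications";

   // After (this JIRA):
   public static final String APPLICATION_HISTORY_MAX_APPS =
       APPLICATION_HISTORY_PREFIX + "max-applications";
 {code}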



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3467) Expose allocatedMB, allocatedVCores, and runningContainers metrics on running Applications in RM Web UI

2015-05-30 Thread Anubhav Dhoot (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anubhav Dhoot updated YARN-3467:

Attachment: YARN-3467.002.patch

Fixed checkstyle issue

 Expose allocatedMB, allocatedVCores, and runningContainers metrics on running 
 Applications in RM Web UI
 ---

 Key: YARN-3467
 URL: https://issues.apache.org/jira/browse/YARN-3467
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: webapp, yarn
Affects Versions: 2.5.0
Reporter: Anthony Rojas
Assignee: Anubhav Dhoot
Priority: Minor
 Attachments: ApplicationAttemptPage.png, Screen Shot 2015-05-26 at 
 5.46.54 PM.png, YARN-3467.001.patch, YARN-3467.002.patch, yarn-3467-1.patch


 The YARN REST API can report on the following properties:
 *allocatedMB*: The sum of memory in MB allocated to the application's running 
 containers
 *allocatedVCores*: The sum of virtual cores allocated to the application's 
 running containers
 *runningContainers*: The number of containers currently running for the 
 application
 Currently, the RM Web UI does not report these items (at least I couldn't 
 find any entries within the Web UI).
 It would be useful for YARN application and resource troubleshooting to have 
 these properties and their corresponding values exposed in the RM Web UI.
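 For reference, the values are already available from the RM REST API; a 
 minimal sketch of pulling them from there (the host, port, and the RUNNING 
 filter are placeholders):
 {code}
 import java.io.BufferedReader;
 import java.io.InputStreamReader;
 import java.net.HttpURLConnection;
 import java.net.URL;

 public class RmAppMetrics {
   public static void main(String[] args) throws Exception {
     // Each app entry in the JSON response carries "allocatedMB",
     // "allocatedVCores" and "runningContainers" -- the fields this JIRA
     // proposes to surface in the Web UI as well.
     URL url = new URL("http://rm-host:8088/ws/v1/cluster/apps?states=RUNNING");
     HttpURLConnection conn = (HttpURLConnection) url.openConnection();
     conn.setRequestProperty("Accept", "application/json");
     try (BufferedReader in = new BufferedReader(
         new InputStreamReader(conn.getInputStream()))) {
       String line;
       while ((line = in.readLine()) != null) {
         System.out.println(line);
       }
     } finally {
       conn.disconnect();
     }
   }
 }
 {code}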



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3508) Preemption processing occurring on the main RM dispatcher

2015-05-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14565929#comment-14565929
 ] 

Hadoop QA commented on YARN-3508:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  16m 14s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 35s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 35s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   0m 50s | The applied patch generated  1 
new checkstyle issues (total was 53, now 53). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 27s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | yarn tests |  50m 38s | Tests passed in 
hadoop-yarn-server-resourcemanager. |
| | |  89m  2s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12736323/YARN-3508.02.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / eb6bf91 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-YARN-Build/8137/artifact/patchprocess/diffcheckstylehadoop-yarn-server-resourcemanager.txt
 |
| hadoop-yarn-server-resourcemanager test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/8137/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/8137/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/8137/console |


This message was automatically generated.

 Preemption processing occurring on the main RM dispatcher
 

 Key: YARN-3508
 URL: https://issues.apache.org/jira/browse/YARN-3508
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, scheduler
Affects Versions: 2.6.0
Reporter: Jason Lowe
Assignee: Varun Saxena
 Attachments: YARN-3508.01.patch, YARN-3508.02.patch


 We recently saw the RM for a large cluster lag far behind on the 
 AsyncDispatcher event queue.  The AsyncDispatcher thread was consistently 
 blocked on the highly contended CapacityScheduler lock while trying to 
 dispatch preemption-related events for RMContainerPreemptEventDispatcher.  
 Preemption processing should occur on the scheduler event dispatcher thread 
 or on a separate thread to avoid delaying the processing of other events in 
 the primary dispatcher queue.
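 One way to move this work off the main dispatcher is to register the 
 preemption event type on its own {{AsyncDispatcher}}, so that only a 
 dedicated thread ever waits on the scheduler lock. A sketch under that 
 assumption (not the committed patch; variable names are illustrative):
 {code}
     // Give preemption its own dispatcher thread instead of the shared
     // RM dispatcher.
     AsyncDispatcher preemptionDispatcher = new AsyncDispatcher();
     preemptionDispatcher.register(ContainerPreemptEventType.class,
         new RMContainerPreemptEventDispatcher(scheduler));
     preemptionDispatcher.init(conf);
     preemptionDispatcher.start();
     // The preemption monitor enqueues here; blocking on the CapacityScheduler
     // lock now delays only this thread, not the main RM event loop.
     preemptionDispatcher.getEventHandler().handle(
         new ContainerPreemptEvent(appAttemptId, container,
             ContainerPreemptEventType.PREEMPT_CONTAINER));
 {code}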



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3547) FairScheduler: Apps that have no resource demand should not participate scheduling

2015-05-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14565959#comment-14565959
 ] 

Hudson commented on YARN-3547:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #213 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/213/])
YARN-3547. FairScheduler: Apps that have no resource demand should not 
participate scheduling. (Xianyin Xin via kasha) (kasha: rev 
3ae2a625018bc8cf431aa19da5bf8fe4ef8c1ad4)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSLeafQueue.java


 FairScheduler: Apps that have no resource demand should not participate 
 scheduling
 --

 Key: YARN-3547
 URL: https://issues.apache.org/jira/browse/YARN-3547
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: fairscheduler
Reporter: Xianyin Xin
Assignee: Xianyin Xin
 Fix For: 2.8.0

 Attachments: YARN-3547.001.patch, YARN-3547.002.patch, 
 YARN-3547.003.patch, YARN-3547.004.patch, YARN-3547.005.patch, 
 YARN-3547.006.patch


 At present, all of the 'running' apps participate in the scheduling process. 
 On a production cluster, however, most of them may have no resource demand, 
 since an app's status is 'running' rather than 'waiting for resources' for 
 most of its lifetime. Sorting all of the 'running' apps and trying to fulfill 
 them is wasteful, especially on a large-scale cluster with a heavy scheduling 
 load. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3713) Remove duplicate function call storeContainerDiagnostics in ContainerDiagnosticsUpdateTransition

2015-05-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14565961#comment-14565961
 ] 

Hudson commented on YARN-3713:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #213 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/213/])
YARN-3713. Remove duplicate function call storeContainerDiagnostics in 
ContainerDiagnosticsUpdateTransition (zxu via rkanter) (rkanter: rev 
6aec13cb338b0fe62ca915f78aa729c9b0b86fba)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java


 Remove duplicate function call storeContainerDiagnostics in 
 ContainerDiagnosticsUpdateTransition
 

 Key: YARN-3713
 URL: https://issues.apache.org/jira/browse/YARN-3713
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Affects Versions: 2.7.0
Reporter: zhihai xu
Assignee: zhihai xu
Priority: Minor
  Labels: cleanup
 Fix For: 2.8.0

 Attachments: YARN-3713.000.patch


 Remove the duplicate call to {{storeContainerDiagnostics}} in 
 ContainerDiagnosticsUpdateTransition: {{storeContainerDiagnostics}} is 
 already called in ContainerImpl#addDiagnostics. 
 {code}
   private void addDiagnostics(String... diags) {
     for (String s : diags) {
       this.diagnostics.append(s);
     }
     try {
       stateStore.storeContainerDiagnostics(containerId, diagnostics);
     } catch (IOException e) {
       LOG.warn("Unable to update diagnostics in state store for "
           + containerId, e);
     }
   }
 {code} 
 So we don't need to call {{storeContainerDiagnostics}} in 
 ContainerDiagnosticsUpdateTransition#transition.
 {code}
     container.addDiagnostics(updateEvent.getDiagnosticsUpdate(), "\n");
     try {
       container.stateStore.storeContainerDiagnostics(container.containerId,
           container.diagnostics);
     } catch (IOException e) {
       LOG.warn("Unable to update state store diagnostics for "
           + container.containerId, e);
     }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3740) Fixed the typo with the configuration name: APPLICATION_HISTORY_PREFIX_MAX_APPS

2015-05-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14565962#comment-14565962
 ] 

Hudson commented on YARN-3740:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #213 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/213/])
YARN-3740. Fixed the typo in the configuration name: 
APPLICATION_HISTORY_PREFIX_MAX_APPS. Contributed by Xuan Gong. (zjshen: rev 
eb6bf91eeacf97afb4cefe590f75ba94f3187d2b)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryClientService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java


 Fixed the typo with the configuration name: 
 APPLICATION_HISTORY_PREFIX_MAX_APPS
 ---

 Key: YARN-3740
 URL: https://issues.apache.org/jira/browse/YARN-3740
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager, webapp, yarn
Reporter: Xuan Gong
Assignee: Xuan Gong
 Fix For: 2.8.0

 Attachments: YARN-3740.1.patch


 YARN-3700 introduced a new configuration named 
 APPLICATION_HISTORY_PREFIX_MAX_APPS, which needs to be changed to 
 APPLICATION_HISTORY_MAX_APPS. 
 This is not an incompatible change, since YARN-3700 is only in 2.8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3630) YARN should suggest a heartbeat interval for applications

2015-05-30 Thread Xianyin Xin (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xianyin Xin updated YARN-3630:
--
Attachment: YARN-3630.003.patch

Uploaded a patch that adds a configurable hard upper limit on the heartbeat 
interval and, following [~vvasudev]'s suggestion, randomizes the calculated 
node heartbeat delays to avoid clustered pings.
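A toy sketch of such a policy -- a configured hard upper bound plus random 
jitter so that clients do not heartbeat in lockstep (all constants and the 
load signal are hypothetical):
{code}
import java.util.concurrent.ThreadLocalRandom;

public class HeartbeatIntervalSketch {
  static final long MIN_INTERVAL_MS = 100;    // hypothetical floor
  static final long MAX_INTERVAL_MS = 10_000; // hypothetical hard upper limit

  /** load is a 0.0-1.0 signal of how busy the RM is (hypothetical). */
  static long suggestIntervalMs(double load) {
    long base = (long) (MIN_INTERVAL_MS
        + load * (MAX_INTERVAL_MS - MIN_INTERVAL_MS));
    // +/-20% jitter spreads the heartbeats out and avoids clustered pings.
    double jitter = ThreadLocalRandom.current().nextDouble(0.8, 1.2);
    long suggested = (long) (base * jitter);
    return Math.max(MIN_INTERVAL_MS, Math.min(MAX_INTERVAL_MS, suggested));
  }

  public static void main(String[] args) {
    System.out.println("suggested interval: " + suggestIntervalMs(0.5) + " ms");
  }
}
{code}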

 YARN should suggest a heartbeat interval for applications
 -

 Key: YARN-3630
 URL: https://issues.apache.org/jira/browse/YARN-3630
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager, scheduler
Affects Versions: 2.7.0
Reporter: Zoltán Zvara
Assignee: Xianyin Xin
Priority: Minor
 Attachments: Notes_for_adaptive_heartbeat_policy.pdf, 
 YARN-3630.001.patch.patch, YARN-3630.002.patch, YARN-3630.003.patch


 It seems that applications - for example, Spark - currently do not adapt 
 their heartbeat intervals to the RM. The RM should be able to suggest a 
 desired heartbeat interval to applications.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3547) FairScheduler: Apps that have no resource demand should not participate scheduling

2015-05-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14566007#comment-14566007
 ] 

Hudson commented on YARN-3547:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2141 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2141/])
YARN-3547. FairScheduler: Apps that have no resource demand should not 
participate scheduling. (Xianyin Xin via kasha) (kasha: rev 
3ae2a625018bc8cf431aa19da5bf8fe4ef8c1ad4)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSLeafQueue.java


 FairScheduler: Apps that have no resource demand should not participate 
 scheduling
 --

 Key: YARN-3547
 URL: https://issues.apache.org/jira/browse/YARN-3547
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: fairscheduler
Reporter: Xianyin Xin
Assignee: Xianyin Xin
 Fix For: 2.8.0

 Attachments: YARN-3547.001.patch, YARN-3547.002.patch, 
 YARN-3547.003.patch, YARN-3547.004.patch, YARN-3547.005.patch, 
 YARN-3547.006.patch


 At present, all of the 'running' apps participate in the scheduling process. 
 On a production cluster, however, most of them may have no resource demand, 
 since an app's status is 'running' rather than 'waiting for resources' for 
 most of its lifetime. Sorting all of the 'running' apps and trying to fulfill 
 them is wasteful, especially on a large-scale cluster with a heavy scheduling 
 load. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3713) Remove duplicate function call storeContainerDiagnostics in ContainerDiagnosticsUpdateTransition

2015-05-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14566009#comment-14566009
 ] 

Hudson commented on YARN-3713:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2141 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2141/])
YARN-3713. Remove duplicate function call storeContainerDiagnostics in 
ContainerDiagnosticsUpdateTransition (zxu via rkanter) (rkanter: rev 
6aec13cb338b0fe62ca915f78aa729c9b0b86fba)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java


 Remove duplicate function call storeContainerDiagnostics in 
 ContainerDiagnosticsUpdateTransition
 

 Key: YARN-3713
 URL: https://issues.apache.org/jira/browse/YARN-3713
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Affects Versions: 2.7.0
Reporter: zhihai xu
Assignee: zhihai xu
Priority: Minor
  Labels: cleanup
 Fix For: 2.8.0

 Attachments: YARN-3713.000.patch


 Remove the duplicate call to {{storeContainerDiagnostics}} in 
 ContainerDiagnosticsUpdateTransition: {{storeContainerDiagnostics}} is 
 already called in ContainerImpl#addDiagnostics. 
 {code}
   private void addDiagnostics(String... diags) {
     for (String s : diags) {
       this.diagnostics.append(s);
     }
     try {
       stateStore.storeContainerDiagnostics(containerId, diagnostics);
     } catch (IOException e) {
       LOG.warn("Unable to update diagnostics in state store for "
           + containerId, e);
     }
   }
 {code} 
 So we don't need to call {{storeContainerDiagnostics}} in 
 ContainerDiagnosticsUpdateTransition#transition.
 {code}
     container.addDiagnostics(updateEvent.getDiagnosticsUpdate(), "\n");
     try {
       container.stateStore.storeContainerDiagnostics(container.containerId,
           container.diagnostics);
     } catch (IOException e) {
       LOG.warn("Unable to update state store diagnostics for "
           + container.containerId, e);
     }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3740) Fixed the typo with the configuration name: APPLICATION_HISTORY_PREFIX_MAX_APPS

2015-05-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14566010#comment-14566010
 ] 

Hudson commented on YARN-3740:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2141 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2141/])
YARN-3740. Fixed the typo in the configuration name: 
APPLICATION_HISTORY_PREFIX_MAX_APPS. Contributed by Xuan Gong. (zjshen: rev 
eb6bf91eeacf97afb4cefe590f75ba94f3187d2b)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryClientService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java


 Fixed the typo with the configuration name: 
 APPLICATION_HISTORY_PREFIX_MAX_APPS
 ---

 Key: YARN-3740
 URL: https://issues.apache.org/jira/browse/YARN-3740
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager, webapp, yarn
Reporter: Xuan Gong
Assignee: Xuan Gong
 Fix For: 2.8.0

 Attachments: YARN-3740.1.patch


 YARN-3700 introduced a new configuration named 
 APPLICATION_HISTORY_PREFIX_MAX_APPS, which needs to be changed to 
 APPLICATION_HISTORY_MAX_APPS. 
 This is not an incompatible change, since YARN-3700 is only in 2.8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3713) Remove duplicate function call storeContainerDiagnostics in ContainerDiagnosticsUpdateTransition

2015-05-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14566018#comment-14566018
 ] 

Hudson commented on YARN-3713:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #202 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/202/])
YARN-3713. Remove duplicate function call storeContainerDiagnostics in 
ContainerDiagnosticsUpdateTransition (zxu via rkanter) (rkanter: rev 
6aec13cb338b0fe62ca915f78aa729c9b0b86fba)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java


 Remove duplicate function call storeContainerDiagnostics in 
 ContainerDiagnosticsUpdateTransition
 

 Key: YARN-3713
 URL: https://issues.apache.org/jira/browse/YARN-3713
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Affects Versions: 2.7.0
Reporter: zhihai xu
Assignee: zhihai xu
Priority: Minor
  Labels: cleanup
 Fix For: 2.8.0

 Attachments: YARN-3713.000.patch


 Remove the duplicate call to {{storeContainerDiagnostics}} in 
 ContainerDiagnosticsUpdateTransition: {{storeContainerDiagnostics}} is 
 already called in ContainerImpl#addDiagnostics. 
 {code}
   private void addDiagnostics(String... diags) {
     for (String s : diags) {
       this.diagnostics.append(s);
     }
     try {
       stateStore.storeContainerDiagnostics(containerId, diagnostics);
     } catch (IOException e) {
       LOG.warn("Unable to update diagnostics in state store for "
           + containerId, e);
     }
   }
 {code} 
 So we don't need to call {{storeContainerDiagnostics}} in 
 ContainerDiagnosticsUpdateTransition#transition.
 {code}
     container.addDiagnostics(updateEvent.getDiagnosticsUpdate(), "\n");
     try {
       container.stateStore.storeContainerDiagnostics(container.containerId,
           container.diagnostics);
     } catch (IOException e) {
       LOG.warn("Unable to update state store diagnostics for "
           + container.containerId, e);
     }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3740) Fixed the typo with the configuration name: APPLICATION_HISTORY_PREFIX_MAX_APPS

2015-05-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14566019#comment-14566019
 ] 

Hudson commented on YARN-3740:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #202 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/202/])
YARN-3740. Fixed the typo in the configuration name: 
APPLICATION_HISTORY_PREFIX_MAX_APPS. Contributed by Xuan Gong. (zjshen: rev 
eb6bf91eeacf97afb4cefe590f75ba94f3187d2b)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryClientService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* hadoop-yarn-project/CHANGES.txt


 Fixed the typo with the configuration name: 
 APPLICATION_HISTORY_PREFIX_MAX_APPS
 ---

 Key: YARN-3740
 URL: https://issues.apache.org/jira/browse/YARN-3740
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager, webapp, yarn
Reporter: Xuan Gong
Assignee: Xuan Gong
 Fix For: 2.8.0

 Attachments: YARN-3740.1.patch


 YARN-3700 introduced a new configuration named 
 APPLICATION_HISTORY_PREFIX_MAX_APPS, which needs to be changed to 
 APPLICATION_HISTORY_MAX_APPS. 
 This is not an incompatible change, since YARN-3700 is only in 2.8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3547) FairScheduler: Apps that have no resource demand should not participate scheduling

2015-05-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14566016#comment-14566016
 ] 

Hudson commented on YARN-3547:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #202 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/202/])
YARN-3547. FairScheduler: Apps that have no resource demand should not 
participate scheduling. (Xianyin Xin via kasha) (kasha: rev 
3ae2a625018bc8cf431aa19da5bf8fe4ef8c1ad4)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSLeafQueue.java


 FairScheduler: Apps that have no resource demand should not participate 
 scheduling
 --

 Key: YARN-3547
 URL: https://issues.apache.org/jira/browse/YARN-3547
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: fairscheduler
Reporter: Xianyin Xin
Assignee: Xianyin Xin
 Fix For: 2.8.0

 Attachments: YARN-3547.001.patch, YARN-3547.002.patch, 
 YARN-3547.003.patch, YARN-3547.004.patch, YARN-3547.005.patch, 
 YARN-3547.006.patch


 At present, all of the 'running' apps participate in the scheduling process. 
 On a production cluster, however, most of them may have no resource demand, 
 since an app's status is 'running' rather than 'waiting for resources' for 
 most of its lifetime. Sorting all of the 'running' apps and trying to fulfill 
 them is wasteful, especially on a large-scale cluster with a heavy scheduling 
 load. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3467) Expose allocatedMB, allocatedVCores, and runningContainers metrics on running Applications in RM Web UI

2015-05-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14566015#comment-14566015
 ] 

Hadoop QA commented on YARN-3467:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 31s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 34s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 40s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 11s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 28s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | yarn tests |   0m 24s | Tests passed in 
hadoop-yarn-server-common. |
| {color:red}-1{color} | yarn tests |  60m 31s | Tests failed in 
hadoop-yarn-server-resourcemanager. |
| | | 101m 57s | |
\\
\\
|| Reason || Tests ||
| Timed out tests | 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12736338/YARN-3467.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / eb6bf91 |
| hadoop-yarn-server-common test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/8138/artifact/patchprocess/testrun_hadoop-yarn-server-common.txt
 |
| hadoop-yarn-server-resourcemanager test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/8138/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/8138/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/8138/console |


This message was automatically generated.

 Expose allocatedMB, allocatedVCores, and runningContainers metrics on running 
 Applications in RM Web UI
 ---

 Key: YARN-3467
 URL: https://issues.apache.org/jira/browse/YARN-3467
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: webapp, yarn
Affects Versions: 2.5.0
Reporter: Anthony Rojas
Assignee: Anubhav Dhoot
Priority: Minor
 Attachments: ApplicationAttemptPage.png, Screen Shot 2015-05-26 at 
 5.46.54 PM.png, YARN-3467.001.patch, YARN-3467.002.patch, yarn-3467-1.patch


 The YARN REST API can report on the following properties:
 *allocatedMB*: The sum of memory in MB allocated to the application's running 
 containers
 *allocatedVCores*: The sum of virtual cores allocated to the application's 
 running containers
 *runningContainers*: The number of containers currently running for the 
 application
 Currently, the RM Web UI does not report these items (at least I couldn't 
 find any entries within the Web UI).
 It would be useful for YARN application and resource troubleshooting to have 
 these properties and their corresponding values exposed in the RM Web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3740) Fixed the typo with the configuration name: APPLICATION_HISTORY_PREFIX_MAX_APPS

2015-05-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14566046#comment-14566046
 ] 

Hudson commented on YARN-3740:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #211 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/211/])
YARN-3740. Fixed the typo in the configuration name: 
APPLICATION_HISTORY_PREFIX_MAX_APPS. Contributed by Xuan Gong. (zjshen: rev 
eb6bf91eeacf97afb4cefe590f75ba94f3187d2b)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryClientService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java


 Fixed the typo with the configuration name: 
 APPLICATION_HISTORY_PREFIX_MAX_APPS
 ---

 Key: YARN-3740
 URL: https://issues.apache.org/jira/browse/YARN-3740
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager, webapp, yarn
Reporter: Xuan Gong
Assignee: Xuan Gong
 Fix For: 2.8.0

 Attachments: YARN-3740.1.patch


 YARN-3700 introduced a new configuration named 
 APPLICATION_HISTORY_PREFIX_MAX_APPS, which needs to be changed to 
 APPLICATION_HISTORY_MAX_APPS. 
 This is not an incompatible change, since YARN-3700 is only in 2.8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3713) Remove duplicate function call storeContainerDiagnostics in ContainerDiagnosticsUpdateTransition

2015-05-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14566045#comment-14566045
 ] 

Hudson commented on YARN-3713:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #211 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/211/])
YARN-3713. Remove duplicate function call storeContainerDiagnostics in 
ContainerDiagnosticsUpdateTransition (zxu via rkanter) (rkanter: rev 
6aec13cb338b0fe62ca915f78aa729c9b0b86fba)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
* hadoop-yarn-project/CHANGES.txt


 Remove duplicate function call storeContainerDiagnostics in 
 ContainerDiagnosticsUpdateTransition
 

 Key: YARN-3713
 URL: https://issues.apache.org/jira/browse/YARN-3713
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Affects Versions: 2.7.0
Reporter: zhihai xu
Assignee: zhihai xu
Priority: Minor
  Labels: cleanup
 Fix For: 2.8.0

 Attachments: YARN-3713.000.patch


 Remove the duplicate call to {{storeContainerDiagnostics}} in 
 ContainerDiagnosticsUpdateTransition: {{storeContainerDiagnostics}} is 
 already called in ContainerImpl#addDiagnostics. 
 {code}
   private void addDiagnostics(String... diags) {
     for (String s : diags) {
       this.diagnostics.append(s);
     }
     try {
       stateStore.storeContainerDiagnostics(containerId, diagnostics);
     } catch (IOException e) {
       LOG.warn("Unable to update diagnostics in state store for "
           + containerId, e);
     }
   }
 {code} 
 So we don't need to call {{storeContainerDiagnostics}} in 
 ContainerDiagnosticsUpdateTransition#transition.
 {code}
     container.addDiagnostics(updateEvent.getDiagnosticsUpdate(), "\n");
     try {
       container.stateStore.storeContainerDiagnostics(container.containerId,
           container.diagnostics);
     } catch (IOException e) {
       LOG.warn("Unable to update state store diagnostics for "
           + container.containerId, e);
     }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3547) FairScheduler: Apps that have no resource demand should not participate scheduling

2015-05-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14566043#comment-14566043
 ] 

Hudson commented on YARN-3547:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #211 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/211/])
YARN-3547. FairScheduler: Apps that have no resource demand should not 
participate scheduling. (Xianyin Xin via kasha) (kasha: rev 
3ae2a625018bc8cf431aa19da5bf8fe4ef8c1ad4)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSLeafQueue.java
* hadoop-yarn-project/CHANGES.txt


 FairScheduler: Apps that have no resource demand should not participate 
 scheduling
 --

 Key: YARN-3547
 URL: https://issues.apache.org/jira/browse/YARN-3547
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: fairscheduler
Reporter: Xianyin Xin
Assignee: Xianyin Xin
 Fix For: 2.8.0

 Attachments: YARN-3547.001.patch, YARN-3547.002.patch, 
 YARN-3547.003.patch, YARN-3547.004.patch, YARN-3547.005.patch, 
 YARN-3547.006.patch


 At present, all of the 'running' apps participate in the scheduling process. 
 On a production cluster, however, most of them may have no resource demand, 
 since an app's status is 'running' rather than 'waiting for resources' for 
 most of its lifetime. Sorting all of the 'running' apps and trying to fulfill 
 them is wasteful, especially on a large-scale cluster with a heavy scheduling 
 load. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3547) FairScheduler: Apps that have no resource demand should not participate scheduling

2015-05-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14566054#comment-14566054
 ] 

Hudson commented on YARN-3547:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2159 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2159/])
YARN-3547. FairScheduler: Apps that have no resource demand should not 
participate scheduling. (Xianyin Xin via kasha) (kasha: rev 
3ae2a625018bc8cf431aa19da5bf8fe4ef8c1ad4)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSLeafQueue.java


 FairScheduler: Apps that have no resource demand should not participate 
 scheduling
 --

 Key: YARN-3547
 URL: https://issues.apache.org/jira/browse/YARN-3547
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: fairscheduler
Reporter: Xianyin Xin
Assignee: Xianyin Xin
 Fix For: 2.8.0

 Attachments: YARN-3547.001.patch, YARN-3547.002.patch, 
 YARN-3547.003.patch, YARN-3547.004.patch, YARN-3547.005.patch, 
 YARN-3547.006.patch


 At present, all of the 'running' apps participate in the scheduling process. 
 On a production cluster, however, most of them may have no resource demand, 
 since an app's status is 'running' rather than 'waiting for resources' for 
 most of its lifetime. Sorting all of the 'running' apps and trying to fulfill 
 them is wasteful, especially on a large-scale cluster with a heavy scheduling 
 load. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3713) Remove duplicate function call storeContainerDiagnostics in ContainerDiagnosticsUpdateTransition

2015-05-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14566056#comment-14566056
 ] 

Hudson commented on YARN-3713:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2159 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2159/])
YARN-3713. Remove duplicate function call storeContainerDiagnostics in 
ContainerDiagnosticsUpdateTransition (zxu via rkanter) (rkanter: rev 
6aec13cb338b0fe62ca915f78aa729c9b0b86fba)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
* hadoop-yarn-project/CHANGES.txt


 Remove duplicate function call storeContainerDiagnostics in 
 ContainerDiagnosticsUpdateTransition
 

 Key: YARN-3713
 URL: https://issues.apache.org/jira/browse/YARN-3713
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Affects Versions: 2.7.0
Reporter: zhihai xu
Assignee: zhihai xu
Priority: Minor
  Labels: cleanup
 Fix For: 2.8.0

 Attachments: YARN-3713.000.patch


 Remove the duplicate call to {{storeContainerDiagnostics}} in 
 ContainerDiagnosticsUpdateTransition: {{storeContainerDiagnostics}} is 
 already called in ContainerImpl#addDiagnostics. 
 {code}
   private void addDiagnostics(String... diags) {
     for (String s : diags) {
       this.diagnostics.append(s);
     }
     try {
       stateStore.storeContainerDiagnostics(containerId, diagnostics);
     } catch (IOException e) {
       LOG.warn("Unable to update diagnostics in state store for "
           + containerId, e);
     }
   }
 {code} 
 So we don't need to call {{storeContainerDiagnostics}} in 
 ContainerDiagnosticsUpdateTransition#transition.
 {code}
     container.addDiagnostics(updateEvent.getDiagnosticsUpdate(), "\n");
     try {
       container.stateStore.storeContainerDiagnostics(container.containerId,
           container.diagnostics);
     } catch (IOException e) {
       LOG.warn("Unable to update state store diagnostics for "
           + container.containerId, e);
     }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3740) Fixed the typo with the configuration name: APPLICATION_HISTORY_PREFIX_MAX_APPS

2015-05-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14566057#comment-14566057
 ] 

Hudson commented on YARN-3740:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2159 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2159/])
YARN-3740. Fixed the typo in the configuration name: 
APPLICATION_HISTORY_PREFIX_MAX_APPS. Contributed by Xuan Gong. (zjshen: rev 
eb6bf91eeacf97afb4cefe590f75ba94f3187d2b)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryClientService.java
* hadoop-yarn-project/CHANGES.txt


 Fixed the typo with the configuration name: 
 APPLICATION_HISTORY_PREFIX_MAX_APPS
 ---

 Key: YARN-3740
 URL: https://issues.apache.org/jira/browse/YARN-3740
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager, webapp, yarn
Reporter: Xuan Gong
Assignee: Xuan Gong
 Fix For: 2.8.0

 Attachments: YARN-3740.1.patch


 YARN-3700 introduced a new configuration named 
 APPLICATION_HISTORY_PREFIX_MAX_APPS, which needs to be changed to 
 APPLICATION_HISTORY_MAX_APPS. 
 This is not an incompatible change, since YARN-3700 is only in 2.8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)