[jira] [Updated] (YARN-311) Dynamic node resource configuration: core scheduler changes

2013-09-03 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-311:


Attachment: YARN-311-v5.patch

 Dynamic node resource configuration: core scheduler changes
 ---

 Key: YARN-311
 URL: https://issues.apache.org/jira/browse/YARN-311
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager, scheduler
Reporter: Junping Du
Assignee: Junping Du
 Attachments: YARN-311-v1.patch, YARN-311-v2.patch, YARN-311-v3.patch, 
 YARN-311-v4.patch, YARN-311-v4.patch, YARN-311-v5.patch


 As the first step, we go for resource changes on the RM side and expose admin 
 APIs (admin protocol, CLI, REST and JMX). This JIRA contains only the 
 scheduler changes.
 For design details, please refer to the proposal and discussions in the parent 
 JIRA: YARN-291.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-291) Dynamic resource configuration

2013-09-03 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756576#comment-13756576
 ] 

Junping Du commented on YARN-291:
-

Hi, I've started working on the sub-JIRAs and will address your comments there. 
Thanks for the review and comments!

 Dynamic resource configuration
 --

 Key: YARN-291
 URL: https://issues.apache.org/jira/browse/YARN-291
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: nodemanager, scheduler
Reporter: Junping Du
Assignee: Junping Du
  Labels: features
 Attachments: Elastic Resources for YARN-v0.2.pdf, 
 YARN-291-AddClientRMProtocolToSetNodeResource-03.patch, 
 YARN-291-all-v1.patch, YARN-291-CoreAndAdmin.patch, 
 YARN-291-core-HeartBeatAndScheduler-01.patch, 
 YARN-291-JMXInterfaceOnNM-02.patch, 
 YARN-291-OnlyUpdateWhenResourceChange-01-fix.patch, 
 YARN-291-YARNClientCommandline-04.patch


 The current Hadoop YARN resource management logic assumes per-node resources 
 are static during the lifetime of the NM process. Allowing run-time 
 configuration of per-node resources will give us finer-grained resource 
 elasticity. This allows Hadoop workloads to coexist with other workloads on 
 the same hardware efficiently, whether or not the environment is virtualized. 
 More background and design details can be found in the attached proposal.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1120) Make ApplicationConstants.Environment.USER definition OS neutral

2013-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756580#comment-13756580
 ] 

Hudson commented on YARN-1120:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1511 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1511/])
YARN-1120. Made ApplicationConstants.Environment.USER definition OS neutral as 
the corresponding value is now set correctly end-to-end. Contributed by Chuan 
Liu. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1519330)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationConstants.java


 Make ApplicationConstants.Environment.USER definition OS neutral
 

 Key: YARN-1120
 URL: https://issues.apache.org/jira/browse/YARN-1120
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 2.1.1-beta

 Attachments: YARN-1120.patch


 In YARN-557, we added some code to make 
 {{ApplicationConstants.Environment.USER}} have an OS-specific definition in 
 order to fix the unit test TestUnmanagedAMLauncher. In YARN-571, the relevant 
 test code was corrected. In YARN-602, we explicitly set the environment 
 variables for the child containers. With these changes, I think we can revert 
 the YARN-557 change and make {{ApplicationConstants.Environment.USER}} OS 
 neutral. The main benefit is that we can use the same method over the Enum 
 constants. This should also fix the 
 TestContainerLaunch#testContainerEnvVariables failure on Windows. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-649) Make container logs available over HTTP in plain text

2013-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756581#comment-13756581
 ] 

Hudson commented on YARN-649:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1511 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1511/])
YARN-649. Added a new NM web-service to serve container logs in plain text over 
HTTP. Contributed by Sandy Ryza. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1519326)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/Context.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/ContainerLogsPage.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/ContainerLogsUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/NMWebServices.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestEventFlow.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeStatusUpdater.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/BaseContainerManagerTest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/TestApplication.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestContainerLogsPage.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServer.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServices.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServicesApps.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServicesContainers.java


 Make container logs available over HTTP in plain text
 -

 Key: YARN-649
 URL: https://issues.apache.org/jira/browse/YARN-649
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Fix For: 2.3.0

 Attachments: YARN-649-2.patch, YARN-649-3.patch, YARN-649-4.patch, 
 YARN-649-5.patch, YARN-649-6.patch, YARN-649-7.patch, YARN-649.patch, 
 YARN-752-1.patch


 It would be good to make container logs available over the REST API for 
 MAPREDUCE-4362, and so that they can be accessed programmatically in general.
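
As a rough illustration of the programmatic access this enables, here is a 
minimal sketch (the endpoint path, host/port, and container id below are 
assumptions based on the NMWebServices changes in the commit above, not a 
documented API):

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class ContainerLogFetch {
  public static void main(String[] args) throws Exception {
    // Hypothetical NM host/port and container id, for illustration only.
    URL url = new URL("http://nm-host:8042/ws/v1/node/containerlogs/"
        + "container_1357147147433_0002_01_000001/stderr");
    BufferedReader in =
        new BufferedReader(new InputStreamReader(url.openStream()));
    try {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line); // plain log text, no HTML wrapping
      }
    } finally {
      in.close();
    }
  }
}
{code}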

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1077) TestContainerLaunch fails on Windows

2013-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756582#comment-13756582
 ] 

Hudson commented on YARN-1077:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1511 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1511/])
YARN-1077. Fixed TestContainerLaunch test failure on Windows. Contributed by 
Chuan Liu. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1519333)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainersLauncher.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainerLaunch.java


 TestContainerLaunch fails on Windows
 

 Key: YARN-1077
 URL: https://issues.apache.org/jira/browse/YARN-1077
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0, 2.3.0
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 2.1.1-beta

 Attachments: YARN-1077.2.patch, YARN-1077.3.patch, YARN-1077.4.patch, 
 YARN-1077.5.patch, YARN-1077.patch


 Several cases in this unit test fail on Windows. (Error log appended at the 
 end.)
 testInvalidEnvSyntaxDiagnostics fails because of the difference between cmd 
 and bash script error handling. If some command fails in a cmd script, cmd 
 will continue executing the rest of the script's commands; error handling 
 needs to be carried out explicitly in the script file. The error code of the 
 last command is returned as the error code of the whole script. In this test, 
 an error happens in the middle of the cmd script and the test expects an 
 exception and a non-zero error code, but in the cmd script the intermediate 
 errors are ignored, the last command call succeeds, and there is no exception.
 testContainerLaunchStdoutAndStderrDiagnostics fails due to wrong cmd commands 
 used by the test.
 testContainerEnvVariables and testDelayedKill fail due to a regression from 
 YARN-906.
 {noformat}
 ---
 Test set: 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch
 ---
 Tests run: 7, Failures: 4, Errors: 0, Skipped: 0, Time elapsed: 11.526 sec 
  <<< FAILURE!
 testInvalidEnvSyntaxDiagnostics(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)
   Time elapsed: 0.583 sec  <<< FAILURE!
 junit.framework.AssertionFailedError: Should catch exception
   at junit.framework.Assert.fail(Assert.java:50)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testInvalidEnvSyntaxDiagnostics(TestContainerLaunch.java:269)
 ...
 testContainerLaunchStdoutAndStderrDiagnostics(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)
   Time elapsed: 0.561 sec  <<< FAILURE!
 junit.framework.AssertionFailedError: Should catch exception
   at junit.framework.Assert.fail(Assert.java:50)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testContainerLaunchStdoutAndStderrDiagnostics(TestContainerLaunch.java:314)
 ...
 testContainerEnvVariables(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)
   Time elapsed: 4.136 sec  <<< FAILURE!
 junit.framework.AssertionFailedError: expected:<137> but was:<143>
   at junit.framework.Assert.fail(Assert.java:50)
   at junit.framework.Assert.failNotEquals(Assert.java:287)
   at junit.framework.Assert.assertEquals(Assert.java:67)
   at junit.framework.Assert.assertEquals(Assert.java:199)
   at junit.framework.Assert.assertEquals(Assert.java:205)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testContainerEnvVariables(TestContainerLaunch.java:500)
 ...
 testDelayedKill(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)
   Time elapsed: 2.744 sec  <<< FAILURE!
 junit.framework.AssertionFailedError: expected:<137> but was:<143>
   at junit.framework.Assert.fail(Assert.java:50)
   at junit.framework.Assert.failNotEquals(Assert.java:287)
   at junit.framework.Assert.assertEquals(Assert.java:67)
   at 

[jira] [Commented] (YARN-311) Dynamic node resource configuration: core scheduler changes

2013-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756599#comment-13756599
 ] 

Hadoop QA commented on YARN-311:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12601148/YARN-311-v5.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1819//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1819//console

This message is automatically generated.

 Dynamic node resource configuration: core scheduler changes
 ---

 Key: YARN-311
 URL: https://issues.apache.org/jira/browse/YARN-311
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager, scheduler
Reporter: Junping Du
Assignee: Junping Du
 Attachments: YARN-311-v1.patch, YARN-311-v2.patch, YARN-311-v3.patch, 
 YARN-311-v4.patch, YARN-311-v4.patch, YARN-311-v5.patch


 As the first step, we go for resource changes on the RM side and expose admin 
 APIs (admin protocol, CLI, REST and JMX). This JIRA contains only the 
 scheduler changes.
 For design details, please refer to the proposal and discussions in the parent 
 JIRA: YARN-291.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (YARN-305) Too many 'Node offered to app:...' messages in RM

2013-09-03 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe reassigned YARN-305:
---

Assignee: Lohit Vijayarenu

 Too many 'Node offered to app:...' messages in RM
 --

 Key: YARN-305
 URL: https://issues.apache.org/jira/browse/YARN-305
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Lohit Vijayarenu
Assignee: Lohit Vijayarenu
Priority: Minor
 Attachments: YARN-305.1.patch


 Running the fair scheduler shows that the RM log has lots of messages like the 
 one below.
 {noformat}
 INFO 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AppSchedulable: 
 Node offered to app: application_1357147147433_0002 reserved: false
 {noformat}
 They don't seem to tell much, and the same line is dumped many times in the RM 
 log. It would be good to improve the message with node information, or move it 
 to another logging level with enough debug information
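
One possible shape for the improvement (a sketch only; the logger and the 
surrounding variable names are assumptions, not the attached patch):

{code}
// Sketch: demote the message to DEBUG and include the node, so it only
// shows up when debugging and carries enough context to be useful.
if (LOG.isDebugEnabled()) {
  LOG.debug("Node " + node.getNodeName() + " offered to app "
      + app.getApplicationId() + ", reserved: " + reserved);
}
{code}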

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-311) Dynamic node resource configuration: core scheduler changes

2013-09-03 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756614#comment-13756614
 ] 

Alejandro Abdelnur commented on YARN-311:
-

* the patch has a few spurious changes (formatting only)
* RMNodeImpl#setTotalCapability() is modifying the existing resource instead 
of assigning the received one? Is that intentional? If so, the locking is kind 
of pointless, as holders of a RMNodeImpl reference will see an 
inconsistent/non-locked value.
* the SchedulerNode#updateAvailableResource() method name does not make clear 
that a delta correction is happening (which is clear from the formal parameter 
name). Either we should make the method name more obvious or we should set the 
new value. It is kind of confusing that for RMNodeImpl the patch uses the full 
new capacity while for SchedulerNode the patch uses the delta change; could we 
use full or delta values in both?

Also, in this patch it seems the change will be triggered from a NM heartbeat. 
Under which situation would a NM make such a change without being restarted? It 
seems to me the change should come from an admin API to the RM; this would be 
set as a correction in the RMNodeImpl, and the RMNodeImpl would use the 
corrected value as the total instead of the total reported by the NM. Am I 
missing something?

 Dynamic node resource configuration: core scheduler changes
 ---

 Key: YARN-311
 URL: https://issues.apache.org/jira/browse/YARN-311
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager, scheduler
Reporter: Junping Du
Assignee: Junping Du
 Attachments: YARN-311-v1.patch, YARN-311-v2.patch, YARN-311-v3.patch, 
 YARN-311-v4.patch, YARN-311-v4.patch, YARN-311-v5.patch


 As the first step, we go for resource changes on the RM side and expose admin 
 APIs (admin protocol, CLI, REST and JMX). This JIRA contains only the 
 scheduler changes.
 For design details, please refer to the proposal and discussions in the parent 
 JIRA: YARN-291.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-311) Dynamic node resource configuration: core scheduler changes

2013-09-03 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756634#comment-13756634
 ] 

Junping Du commented on YARN-311:
-

Hi [~tucu00], thanks for the comments! 
bq. RMNodeImpl#setTotalCapability() is modifying the existing resource instead 
of assigning the received one? Is that intentional? If so, the locking is kind 
of pointless, as holders of a RMNodeImpl reference will see an 
inconsistent/non-locked value.
You are right. We should set the resource directly rather than the current way 
(my previous intention was to check for null and get rid of an NPE).
bq. the SchedulerNode#updateAvailableResource() method name does not make clear 
that a delta correction is happening (which is clear from the formal parameter 
name). Either we should make the method name more obvious or we should set the 
new value. It is kind of confusing that for RMNodeImpl the patch uses the full 
new capacity while for SchedulerNode the patch uses the delta change; could we 
use full or delta values in both?
It is because RMNodeImpl tracks totalCapability while SchedulerNode tracks 
availableResource and usedResource. So we set the new resource directly in 
RMNodeImpl, and apply the delta to SchedulerNode's availableResource to make 
sure availableResource + usedResource = newResource. Does that make sense?
bq. Also, in this patch it seems the change will be triggered from a NM 
heartbeat. Under which situation would a NM make such a change without being 
restarted? It seems to me the change should come from an admin API to the RM; 
this would be set as a correction in the RMNodeImpl, and the RMNodeImpl would 
use the corrected value as the total instead of the total reported by the NM. 
Am I missing something?
Your earlier thinking is correct. The current flow is still: the resource 
update goes through the admin API to the RM and takes effect on RMNodeImpl; 
when the NM heartbeats (just a status update, not a register), the change is 
picked up there and passed to the SchedulerNode before scheduling. 
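
To make the discussion above concrete, here is a minimal sketch of the two 
update paths (the class and method names follow this thread, and Resources is 
the YARN resource utility class; this is a sketch, not the attached patch):

{code}
// Sketch: an admin update stores the full new total on RMNodeImpl, while
// only the delta flows to SchedulerNode, preserving the invariant
// availableResource + usedResource == totalResource.

// RMNodeImpl side (under its write lock): assign the new total directly.
rmNode.setTotalCapability(newTotal);

// On the next NM heartbeat (status update), before scheduling:
Resource delta = Resources.subtract(newTotal, oldTotal);
schedulerNode.applyDeltaOnAvailableResource(delta);

// SchedulerNode side: usedResource is untouched; only the headroom moves.
public void applyDeltaOnAvailableResource(Resource deltaResource) {
  Resources.addTo(availableResource, deltaResource);
}
{code}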

 Dynamic node resource configuration: core scheduler changes
 ---

 Key: YARN-311
 URL: https://issues.apache.org/jira/browse/YARN-311
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager, scheduler
Reporter: Junping Du
Assignee: Junping Du
 Attachments: YARN-311-v1.patch, YARN-311-v2.patch, YARN-311-v3.patch, 
 YARN-311-v4.patch, YARN-311-v4.patch, YARN-311-v5.patch


 As the first step, we go for resource changes on the RM side and expose admin 
 APIs (admin protocol, CLI, REST and JMX). This JIRA contains only the 
 scheduler changes.
 For design details, please refer to the proposal and discussions in the parent 
 JIRA: YARN-291.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-649) Make container logs available over HTTP in plain text

2013-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756636#comment-13756636
 ] 

Hudson commented on YARN-649:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1538 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1538/])
YARN-649. Added a new NM web-service to serve container logs in plain text over 
HTTP. Contributed by Sandy Ryza. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1519326)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/Context.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/ContainerLogsPage.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/ContainerLogsUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/NMWebServices.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestEventFlow.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeStatusUpdater.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/BaseContainerManagerTest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/TestApplication.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestContainerLogsPage.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServer.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServices.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServicesApps.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServicesContainers.java


 Make container logs available over HTTP in plain text
 -

 Key: YARN-649
 URL: https://issues.apache.org/jira/browse/YARN-649
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Fix For: 2.3.0

 Attachments: YARN-649-2.patch, YARN-649-3.patch, YARN-649-4.patch, 
 YARN-649-5.patch, YARN-649-6.patch, YARN-649-7.patch, YARN-649.patch, 
 YARN-752-1.patch


 It would be good to make container logs available over the REST API for 
 MAPREDUCE-4362, and so that they can be accessed programmatically in general.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1120) Make ApplicationConstants.Environment.USER definition OS neutral

2013-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756635#comment-13756635
 ] 

Hudson commented on YARN-1120:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1538 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1538/])
YARN-1120. Made ApplicationConstants.Environment.USER definition OS neutral as 
the corresponding value is now set correctly end-to-end. Contributed by Chuan 
Liu. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1519330)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationConstants.java


 Make ApplicationConstants.Environment.USER definition OS neutral
 

 Key: YARN-1120
 URL: https://issues.apache.org/jira/browse/YARN-1120
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 2.1.1-beta

 Attachments: YARN-1120.patch


 In YARN-557, we added some code to make 
 {{ApplicationConstants.Environment.USER}} have an OS-specific definition in 
 order to fix the unit test TestUnmanagedAMLauncher. In YARN-571, the relevant 
 test code was corrected. In YARN-602, we explicitly set the environment 
 variables for the child containers. With these changes, I think we can revert 
 the YARN-557 change and make {{ApplicationConstants.Environment.USER}} OS 
 neutral. The main benefit is that we can use the same method over the Enum 
 constants. This should also fix the 
 TestContainerLaunch#testContainerEnvVariables failure on Windows. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1077) TestContainerLaunch fails on Windows

2013-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756637#comment-13756637
 ] 

Hudson commented on YARN-1077:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1538 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1538/])
YARN-1077. Fixed TestContainerLaunch test failure on Windows. Contributed by 
Chuan Liu. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1519333)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainersLauncher.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainerLaunch.java


 TestContainerLaunch fails on Windows
 

 Key: YARN-1077
 URL: https://issues.apache.org/jira/browse/YARN-1077
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0, 2.3.0
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 2.1.1-beta

 Attachments: YARN-1077.2.patch, YARN-1077.3.patch, YARN-1077.4.patch, 
 YARN-1077.5.patch, YARN-1077.patch


 Several cases in this unit test fail on Windows. (Error log appended at the 
 end.)
 testInvalidEnvSyntaxDiagnostics fails because of the difference between cmd 
 and bash script error handling. If some command fails in a cmd script, cmd 
 will continue executing the rest of the script's commands; error handling 
 needs to be carried out explicitly in the script file. The error code of the 
 last command is returned as the error code of the whole script. In this test, 
 an error happens in the middle of the cmd script and the test expects an 
 exception and a non-zero error code, but in the cmd script the intermediate 
 errors are ignored, the last command call succeeds, and there is no exception.
 testContainerLaunchStdoutAndStderrDiagnostics fails due to wrong cmd commands 
 used by the test.
 testContainerEnvVariables and testDelayedKill fail due to a regression from 
 YARN-906.
 {noformat}
 ---
 Test set: 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch
 ---
 Tests run: 7, Failures: 4, Errors: 0, Skipped: 0, Time elapsed: 11.526 sec 
  <<< FAILURE!
 testInvalidEnvSyntaxDiagnostics(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)
   Time elapsed: 0.583 sec  <<< FAILURE!
 junit.framework.AssertionFailedError: Should catch exception
   at junit.framework.Assert.fail(Assert.java:50)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testInvalidEnvSyntaxDiagnostics(TestContainerLaunch.java:269)
 ...
 testContainerLaunchStdoutAndStderrDiagnostics(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)
   Time elapsed: 0.561 sec  <<< FAILURE!
 junit.framework.AssertionFailedError: Should catch exception
   at junit.framework.Assert.fail(Assert.java:50)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testContainerLaunchStdoutAndStderrDiagnostics(TestContainerLaunch.java:314)
 ...
 testContainerEnvVariables(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)
   Time elapsed: 4.136 sec  <<< FAILURE!
 junit.framework.AssertionFailedError: expected:<137> but was:<143>
   at junit.framework.Assert.fail(Assert.java:50)
   at junit.framework.Assert.failNotEquals(Assert.java:287)
   at junit.framework.Assert.assertEquals(Assert.java:67)
   at junit.framework.Assert.assertEquals(Assert.java:199)
   at junit.framework.Assert.assertEquals(Assert.java:205)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testContainerEnvVariables(TestContainerLaunch.java:500)
 ...
 testDelayedKill(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)
   Time elapsed: 2.744 sec  <<< FAILURE!
 junit.framework.AssertionFailedError: expected:<137> but was:<143>
   at junit.framework.Assert.fail(Assert.java:50)
   at junit.framework.Assert.failNotEquals(Assert.java:287)
   at junit.framework.Assert.assertEquals(Assert.java:67)
   at 

[jira] [Updated] (YARN-1027) Implement RMHAServiceProtocol

2013-09-03 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-1027:
---

Attachment: test-yarn-1027.patch

 Implement RMHAServiceProtocol
 -

 Key: YARN-1027
 URL: https://issues.apache.org/jira/browse/YARN-1027
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha
Assignee: Karthik Kambatla
 Attachments: test-yarn-1027.patch, yarn-1027-1.patch, 
 yarn-1027-2.patch, yarn-1027-in-rm-poc.patch


 Implement existing HAServiceProtocol from Hadoop common. This protocol is the 
 single point of interaction between the RM and HA clients/services.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1027) Implement RMHAServiceProtocol

2013-09-03 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756655#comment-13756655
 ] 

Karthik Kambatla commented on YARN-1027:


Just uploaded a patch (yarn-1027-2.patch) that builds on top of the YARN-1098 
patch, and depends on it.

Patch outline:
# Implement RMHAProtocolService (a rough sketch of its shape is below).
# When HA is enabled, make this HA-service one of the services managed by the 
RM. The RM no longer manages the activeStateServices directly; these are to be 
managed by the HA-service.
# Tests to check HA enable/disable and transitions when enabled.
# Included another patch (test-yarn-1027.patch) that I used to force 
transitionToActive() after the RM starts in Standby mode. Post 
transitionToActive, the RM behaves normally; I was able to run jobs.
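
For readers following the outline, a rough sketch of the shape such a service 
could take (HAServiceProtocol, HAServiceStatus and their nested types are the 
existing Hadoop common API; everything else here is illustrative, not the 
attached patch):

{code}
import java.io.IOException;

import org.apache.hadoop.ha.HAServiceProtocol;
import org.apache.hadoop.ha.HAServiceStatus;

public class RMHAProtocolService implements HAServiceProtocol {

  private HAServiceState state = HAServiceState.INITIALIZING;

  @Override
  public synchronized void monitorHealth() throws IOException {
    // A real implementation would probe the active services here.
  }

  @Override
  public synchronized void transitionToActive(StateChangeRequestInfo req)
      throws IOException {
    if (state == HAServiceState.ACTIVE) {
      return; // already active, nothing to do
    }
    // startActiveServices();  // hypothetical: scheduler, state store, ...
    state = HAServiceState.ACTIVE;
  }

  @Override
  public synchronized void transitionToStandby(StateChangeRequestInfo req)
      throws IOException {
    if (state == HAServiceState.STANDBY) {
      return;
    }
    // stopActiveServices();  // hypothetical
    state = HAServiceState.STANDBY;
  }

  @Override
  public synchronized HAServiceStatus getServiceStatus() throws IOException {
    return new HAServiceStatus(state);
  }
}
{code}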

 Implement RMHAServiceProtocol
 -

 Key: YARN-1027
 URL: https://issues.apache.org/jira/browse/YARN-1027
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha
Assignee: Karthik Kambatla
 Attachments: test-yarn-1027.patch, yarn-1027-1.patch, 
 yarn-1027-2.patch, yarn-1027-in-rm-poc.patch


 Implement existing HAServiceProtocol from Hadoop common. This protocol is the 
 single point of interaction between the RM and HA clients/services.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-311) Dynamic node resource configuration: core scheduler changes

2013-09-03 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-311:


Attachment: YARN-311-v6.patch

Update in v6 patch according to Alejandro's feedback:
- Fix the write/read lock issue on setTotalCapability (shouldn't it be 
setTotalCapacity?)
- Rename updateAvailableResource() to applyDeltaOnAvailableResource() in 
SchedulerNode, which seems more explicit in meaning.

 Dynamic node resource configuration: core scheduler changes
 ---

 Key: YARN-311
 URL: https://issues.apache.org/jira/browse/YARN-311
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager, scheduler
Reporter: Junping Du
Assignee: Junping Du
 Attachments: YARN-311-v1.patch, YARN-311-v2.patch, YARN-311-v3.patch, 
 YARN-311-v4.patch, YARN-311-v4.patch, YARN-311-v5.patch, YARN-311-v6.patch


 As the first step, we go for resource changes on the RM side and expose admin 
 APIs (admin protocol, CLI, REST and JMX). This JIRA contains only the 
 scheduler changes.
 For design details, please refer to the proposal and discussions in the parent 
 JIRA: YARN-291.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1027) Implement RMHAServiceProtocol

2013-09-03 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-1027:
---

Attachment: yarn-1027-2.patch

 Implement RMHAServiceProtocol
 -

 Key: YARN-1027
 URL: https://issues.apache.org/jira/browse/YARN-1027
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha
Assignee: Karthik Kambatla
 Attachments: test-yarn-1027.patch, yarn-1027-1.patch, 
 yarn-1027-2.patch, yarn-1027-in-rm-poc.patch


 Implement existing HAServiceProtocol from Hadoop common. This protocol is the 
 single point of interaction between the RM and HA clients/services.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-311) Dynamic node resource configuration: core scheduler changes

2013-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756674#comment-13756674
 ] 

Hadoop QA commented on YARN-311:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12601167/YARN-311-v6.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1820//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1820//console

This message is automatically generated.

 Dynamic node resource configuration: core scheduler changes
 ---

 Key: YARN-311
 URL: https://issues.apache.org/jira/browse/YARN-311
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager, scheduler
Reporter: Junping Du
Assignee: Junping Du
 Attachments: YARN-311-v1.patch, YARN-311-v2.patch, YARN-311-v3.patch, 
 YARN-311-v4.patch, YARN-311-v4.patch, YARN-311-v5.patch, YARN-311-v6.patch


 As the first step, we go for resource changes on the RM side and expose admin 
 APIs (admin protocol, CLI, REST and JMX). This JIRA contains only the 
 scheduler changes.
 For design details, please refer to the proposal and discussions in the parent 
 JIRA: YARN-291.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1100) Giving multiple commands to ContainerLaunchContext doesn't work as expected

2013-09-03 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756789#comment-13756789
 ] 

Bikas Saha commented on YARN-1100:
--

Let's take a moment to figure out what the intention of the field is. It might 
just be that the name commands was a bad choice. The NM might be trying to 
launch just one process, with the commands array being arguments to that 
process; that's what the code looks like it's doing, rather than the semantics 
being that the NM will launch multiple different commands.
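
A minimal, self-contained illustration of the two possible semantics (this 
mirrors the whitespace-joining behavior described in the report below; it is 
not the actual ContainerLaunch code):

{code}
import java.util.Arrays;
import java.util.List;

public class CommandJoinDemo {
  public static void main(String[] args) {
    List<String> commands = Arrays.asList("echo yolo", "date");

    // Whitespace join: one shell command line, where "date" ends up as an
    // argument to echo. This is the surprising behavior in the report.
    System.out.println(
        "exec /bin/bash -c \"" + String.join(" ", commands) + "\"");
    // prints: exec /bin/bash -c "echo yolo date"

    // Semicolon join: two sequential commands, the behavior the reporter
    // expected.
    System.out.println(
        "exec /bin/bash -c \"" + String.join("; ", commands) + "\"");
    // prints: exec /bin/bash -c "echo yolo; date"
  }
}
{code}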

 Giving multiple commands to ContainerLaunchContext doesn't work as expected
 ---

 Key: YARN-1100
 URL: https://issues.apache.org/jira/browse/YARN-1100
 Project: Hadoop YARN
  Issue Type: Bug
  Components: api, nodemanager
Affects Versions: 2.1.0-beta
Reporter: Sandy Ryza

 A ContainerLaunchContext accepts a list of commands (as strings) to be 
 executed to launch the container.  I would expect that giving a list with the 
 two commands "echo yolo" and "date" would print something like
 {code}
 yolo
 Mon Aug 26 14:40:23 PDT 2013
 {code}
 Instead it prints
 {code}
 yolo date
 {code}
 This is because the commands get executed with:
 {code}
 exec /bin/bash -c "echo yolo date"
 {code}
 To get the expected behavior I have to include semicolons at the end of each 
 command. At the very least, this should be documented, but I think better 
 would be for the NM to insert the semicolons.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-649) Make container logs available over HTTP in plain text

2013-09-03 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756793#comment-13756793
 ] 

Sandy Ryza commented on YARN-649:
-

Thanks Vinod!

 Make container logs available over HTTP in plain text
 -

 Key: YARN-649
 URL: https://issues.apache.org/jira/browse/YARN-649
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Fix For: 2.3.0

 Attachments: YARN-649-2.patch, YARN-649-3.patch, YARN-649-4.patch, 
 YARN-649-5.patch, YARN-649-6.patch, YARN-649-7.patch, YARN-649.patch, 
 YARN-752-1.patch


 It would be good to make container logs available over the REST API for 
 MAPREDUCE-4362, and so that they can be accessed programmatically in general.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1100) Giving multiple commands to ContainerLaunchContext doesn't work as expected

2013-09-03 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756795#comment-13756795
 ] 

Sandy Ryza commented on YARN-1100:
--

In the MR and distributed shell code, the command list does not seem to be used 
for holding arguments to the same process.  All arguments are concatenated and 
placed in a single entry in the commands list.

 Giving multiple commands to ContainerLaunchContext doesn't work as expected
 ---

 Key: YARN-1100
 URL: https://issues.apache.org/jira/browse/YARN-1100
 Project: Hadoop YARN
  Issue Type: Bug
  Components: api, nodemanager
Affects Versions: 2.1.0-beta
Reporter: Sandy Ryza

 A ContainerLaunchContext accepts a list of commands (as strings) to be 
 executed to launch the container.  I would expect that giving a list with the 
 two commands "echo yolo" and "date" would print something like
 {code}
 yolo
 Mon Aug 26 14:40:23 PDT 2013
 {code}
 Instead it prints
 {code}
 yolo date
 {code}
 This is because the commands get executed with:
 {code}
 exec /bin/bash -c "echo yolo date"
 {code}
 To get the expected behavior I have to include semicolons at the end of each 
 command. At the very least, this should be documented, but I think better 
 would be for the NM to insert the semicolons.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-609) Fix synchronization issues in APIs which take in lists

2013-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756849#comment-13756849
 ] 

Hadoop QA commented on YARN-609:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12601192/YARN-609.3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 29 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1821//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/1821//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-yarn-common.html
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1821//console

This message is automatically generated.

 Fix synchronization issues in APIs which take in lists
 --

 Key: YARN-609
 URL: https://issues.apache.org/jira/browse/YARN-609
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Vinod Kumar Vavilapalli
Assignee: Xuan Gong
 Attachments: YARN-609.1.patch, YARN-609.2.patch, YARN-609.3.patch


 Some of the APIs take in lists and the setter-APIs don't always do proper 
 synchronization. We need to fix these.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-609) Fix synchronization issues in APIs which take in lists

2013-09-03 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-609:
---

Attachment: YARN-609.3.patch

 Fix synchronization issues in APIs which take in lists
 --

 Key: YARN-609
 URL: https://issues.apache.org/jira/browse/YARN-609
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Vinod Kumar Vavilapalli
Assignee: Xuan Gong
 Attachments: YARN-609.1.patch, YARN-609.2.patch, YARN-609.3.patch


 Some of the APIs take in lists and the setter-APIs don't always do proper 
 synchronization. We need to fix these.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-609) Fix synchronization issues in APIs which take in lists

2013-09-03 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756819#comment-13756819
 ] 

Xuan Gong commented on YARN-609:


Did synchronized set and get on the APIs which take in lists.
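
For reference, the general pattern looks like the sketch below (an 
illustrative record class, not the actual record types touched by the patch): 
synchronize both accessors and copy defensively, so a caller mutating its own 
list cannot race with readers of the record.

{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ListBackedRecord {
  private List<String> items = Collections.emptyList();

  public synchronized void setItems(List<String> newItems) {
    // Copy rather than keep the caller's list, so later external
    // mutation cannot be observed by readers of this record.
    this.items = (newItems == null)
        ? Collections.<String>emptyList()
        : new ArrayList<String>(newItems);
  }

  public synchronized List<String> getItems() {
    // Hand out an unmodifiable view of the current snapshot.
    return Collections.unmodifiableList(items);
  }
}
{code}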

 Fix synchronization issues in APIs which take in lists
 --

 Key: YARN-609
 URL: https://issues.apache.org/jira/browse/YARN-609
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Vinod Kumar Vavilapalli
Assignee: Xuan Gong
 Attachments: YARN-609.1.patch, YARN-609.2.patch, YARN-609.3.patch


 Some of the APIs take in lists and the setter-APIs don't always do proper 
 synchronization. We need to fix these.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-609) Fix synchronization issues in APIs which take in lists

2013-09-03 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-609:
---

Attachment: YARN-609.4.patch

Fix the -1 on findbugs.

 Fix synchronization issues in APIs which take in lists
 --

 Key: YARN-609
 URL: https://issues.apache.org/jira/browse/YARN-609
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Vinod Kumar Vavilapalli
Assignee: Xuan Gong
 Attachments: YARN-609.1.patch, YARN-609.2.patch, YARN-609.3.patch, 
 YARN-609.4.patch


 Some of the APIs take in lists and the setter-APIs don't always do proper 
 synchronization. We need to fix these.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1124) By default yarn application -list should display all the applications in a state other than FINISHED / FAILED

2013-09-03 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756987#comment-13756987
 ] 

Vinod Kumar Vavilapalli commented on YARN-1124:
---

bq. I don't see why we have to list submitted/accepted/running application.
Users are interested in seeing all their outstanding applications by default.

The patch looks good to me, +1. Checking this in.

 By default yarn application -list should display all the applications in a 
 state other than FINISHED / FAILED
 -

 Key: YARN-1124
 URL: https://issues.apache.org/jira/browse/YARN-1124
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Xuan Gong
Priority: Blocker
 Fix For: 2.1.1-beta

 Attachments: YARN-1124.1.patch


 Today we just list applications in the RUNNING state by default for yarn 
 application -list. Instead we should show all the applications that are 
 either submitted/accepted/running.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1106) The RM should point the tracking url to the RM app page if it's empty

2013-09-03 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated YARN-1106:


Assignee: Thomas Graves

 The RM should point the tracking url to the RM app page if it's empty
 

 Key: YARN-1106
 URL: https://issues.apache.org/jira/browse/YARN-1106
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 3.0.0, 2.1.0-beta, 0.23.9
Reporter: Thomas Graves
Assignee: Thomas Graves

 It would be nice if the ResourceManager set the tracking URL to the RM app 
 page if the application master doesn't pass one or passes the empty string.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1065) NM should provide AuxillaryService data to the container

2013-09-03 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13757052#comment-13757052
 ] 

Bikas Saha commented on YARN-1065:
--

Shouldn't it be transparent to the user and done all the time instead of 
depending on the user to use the prefix?
{code}
+public class AuxiliaryServiceHelper {
+
+  public static String NM_AUX_SERVICE = "NM_AUX_SERVICE_";
...
+  AuxiliaryServiceHelper.setServiceDataIntoEnv(
+  AuxiliaryServiceHelper.NM_AUX_SERVICE + meta.getKey(),
+  meta.getValue(), environment);
{code}

There is another place in the code where we can use the new method call: where 
the startContainersResponse is being created.
{code}
+  public Map<String, ByteBuffer> getAuxServiceMetaData() {
+    return this.auxiliaryServices.getMetaData();
{code}

Can we use a non specific name here?
{code}
+    serviceData.put("TezService",
+        ByteBuffer.wrap("TezServiceMetaData".getBytes()));
+    return serviceData;
{code}

Shouldn't the tests make sure that the basic functionality is working by 
checking that getServiceDataFromEnv() obtains the stuff that has been set via 
setServiceDataIntoEnv()? I don't see anything that verifies this.
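
A sketch of what such a round-trip check could look like (the Base64-over-env 
encoding and the helper signatures below are stand-in assumptions, not the 
actual patch):

{code}
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;

import org.apache.commons.codec.binary.Base64;

public class AuxServiceEnvRoundTrip {
  static final String NM_AUX_SERVICE = "NM_AUX_SERVICE_";

  // Stand-in for AuxiliaryServiceHelper.setServiceDataIntoEnv().
  static void setServiceDataIntoEnv(String name, ByteBuffer data,
      Map<String, String> env) {
    byte[] bytes = new byte[data.remaining()];
    data.duplicate().get(bytes); // duplicate: do not disturb the position
    env.put(NM_AUX_SERVICE + name, Base64.encodeBase64String(bytes));
  }

  // Stand-in for AuxiliaryServiceHelper.getServiceDataFromEnv().
  static ByteBuffer getServiceDataFromEnv(String name,
      Map<String, String> env) {
    return ByteBuffer.wrap(Base64.decodeBase64(env.get(NM_AUX_SERVICE + name)));
  }

  public static void main(String[] args) {
    Map<String, String> env = new HashMap<String, String>();
    ByteBuffer in = ByteBuffer.wrap("shuffle-meta".getBytes());
    setServiceDataIntoEnv("mapreduce_shuffle", in, env);
    ByteBuffer out = getServiceDataFromEnv("mapreduce_shuffle", env);
    System.out.println(in.equals(out)); // true: the round trip preserves data
  }
}
{code}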


 NM should provide AuxillaryService data to the container
 

 Key: YARN-1065
 URL: https://issues.apache.org/jira/browse/YARN-1065
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Bikas Saha
Assignee: Xuan Gong
 Attachments: YARN-1065.1.patch, YARN-1065.2.patch, YARN-1065.3.patch, 
 YARN-1065.4.patch, YARN-1065.5.patch


 Start container returns auxiliary service data to the AM but does not provide 
 the same information to the task itself. It could add that information to the 
 container env with key=service_name and value=service_data. This allows the 
 container to start using the service without having to depend on the AM to 
 send the info to it indirectly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Moved] (YARN-1139) [Umbrella] Convert all RM components to Services

2013-09-03 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla moved HADOOP-9931 to YARN-1139:


 Target Version/s: 2.3.0  (was: 2.3.0)
Affects Version/s: (was: 2.1.0-beta)
   2.1.0-beta
  Key: YARN-1139  (was: HADOOP-9931)
  Project: Hadoop YARN  (was: Hadoop Common)

 [Umbrella] Convert all RM components to Services
 

 Key: YARN-1139
 URL: https://issues.apache.org/jira/browse/YARN-1139
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: Karthik Kambatla
  Labels: rm

 Some of the RM components - the state store, scheduler, etc. - are not 
 services. Converting them to services goes well with the "Always On" and 
 "Active" service separation proposed in YARN-1098.
 Given that some of them already have start(), stop() methods, it should not 
 be too hard to convert them to services.
 That would also be a cleaner way of addressing YARN-1125.
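
As a rough illustration of the mechanical part, a component that only exposes 
bare start()/stop() methods can be wrapped in the existing service lifecycle; 
a sketch (SomeRMComponent is a made-up stand-in, while AbstractService is the 
real org.apache.hadoop.service base class):

{code}
import org.apache.hadoop.service.AbstractService;

public class SomeRMComponentService extends AbstractService {

  // stand-in for a store/scheduler that only has start()/stop() today
  public interface SomeRMComponent {
    void start();
    void stop();
  }

  private final SomeRMComponent component;

  public SomeRMComponentService(SomeRMComponent component) {
    super(SomeRMComponentService.class.getName());
    this.component = component;
  }

  @Override
  protected void serviceStart() throws Exception {
    component.start();  // reuse the existing lifecycle method
    super.serviceStart();
  }

  @Override
  protected void serviceStop() throws Exception {
    component.stop();
    super.serviceStop();
  }
}
{code}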

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1139) [Umbrella] Convert all RM components to Services

2013-09-03 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-1139:
---

Component/s: resourcemanager

 [Umbrella] Convert all RM components to Services
 

 Key: YARN-1139
 URL: https://issues.apache.org/jira/browse/YARN-1139
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 2.1.0-beta
Reporter: Karthik Kambatla
  Labels: rm

 Some of the RM components - the state store, scheduler, etc. - are not 
 services. Converting them to services goes well with the "Always On" and 
 "Active" service separation proposed in YARN-1098.
 Given that some of them already have start(), stop() methods, it should not 
 be too hard to convert them to services.
 That would also be a cleaner way of addressing YARN-1125.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1139) [Umbrella] Convert all RM components to Services

2013-09-03 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-1139:
---

Labels:   (was: rm)

 [Umbrella] Convert all RM components to Services
 

 Key: YARN-1139
 URL: https://issues.apache.org/jira/browse/YARN-1139
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 2.1.0-beta
Reporter: Karthik Kambatla

 Some of the RM components - the state store, scheduler, etc. - are not 
 services. Converting them to services goes well with the "Always On" and 
 "Active" service separation proposed in YARN-1098.
 Given that some of them already have start(), stop() methods, it should not 
 be too hard to convert them to services.
 That would also be a cleaner way of addressing YARN-1125.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1106) The RM should point the tracking url to the RM app page if it's empty

2013-09-03 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13757148#comment-13757148
 ] 

Vinod Kumar Vavilapalli commented on YARN-1106:
---

I just created YARN-1140. If we do that, is this still needed?

 The RM should point the tracking url to the RM app page if it's empty
 

 Key: YARN-1106
 URL: https://issues.apache.org/jira/browse/YARN-1106
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 3.0.0, 2.1.0-beta, 0.23.9
Reporter: Thomas Graves
Assignee: Thomas Graves

 It would be nice if the ResourceManager set the tracking URL to the RM app 
 page if the application master doesn't pass one or passes the empty string.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-609) Fix synchronization issues in APIs which take in lists

2013-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13757126#comment-13757126
 ] 

Hadoop QA commented on YARN-609:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12601245/YARN-609.5.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1823//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1823//console

This message is automatically generated.

 Fix synchronization issues in APIs which take in lists
 --

 Key: YARN-609
 URL: https://issues.apache.org/jira/browse/YARN-609
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Vinod Kumar Vavilapalli
Assignee: Xuan Gong
 Attachments: YARN-609.1.patch, YARN-609.2.patch, YARN-609.3.patch, 
 YARN-609.4.patch, YARN-609.5.patch


 Some of the APIs take in lists and the setter-APIs don't always do proper 
 synchronization. We need to fix these.
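
For context, the usual fix pattern here is synchronization plus a defensive 
copy on both the setter and the getter; a minimal sketch (SomeRecord is 
illustrative, not one of the actual record classes):

{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SomeRecord {
  private List<String> items = null;

  public synchronized void setItems(List<String> newItems) {
    if (newItems == null) {
      this.items = null;
      return;
    }
    // copy, so a caller mutating its own list cannot race with readers
    this.items = new ArrayList<String>(newItems);
  }

  public synchronized List<String> getItems() {
    // hand out a read-only view instead of the internal list
    return items == null ? null : Collections.unmodifiableList(items);
  }
}
{code}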

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-609) Fix synchronization issues in APIs which take in lists

2013-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13757030#comment-13757030
 ] 

Hadoop QA commented on YARN-609:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12601224/YARN-609.4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 12 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1822//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/1822//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-yarn-common.html
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1822//console

This message is automatically generated.

 Fix synchronization issues in APIs which take in lists
 --

 Key: YARN-609
 URL: https://issues.apache.org/jira/browse/YARN-609
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Vinod Kumar Vavilapalli
Assignee: Xuan Gong
 Attachments: YARN-609.1.patch, YARN-609.2.patch, YARN-609.3.patch, 
 YARN-609.4.patch


 Some of the APIs take in lists and the setter-APIs don't always do proper 
 synchronization. We need to fix these.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-292) ResourceManager throws ArrayIndexOutOfBoundsException while handling CONTAINER_ALLOCATED for application attempt

2013-09-03 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13757208#comment-13757208
 ] 

Vinod Kumar Vavilapalli commented on YARN-292:
--

bq. I will try to post my test result after applying this patch when i have 
time. No idea about the test case part.
Nemon, we are unable to come up with a scenario in which this happens. The 
next time you run into this, can you please capture the RM logs and upload 
them here? Tx.
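
For anyone hitting this in the meantime: judging from the trace, the 
transition ends up calling get(0) on an empty allocated-container list. The 
kind of guard being discussed would look roughly like the sketch below (names 
only approximate the RMAppAttemptImpl code; this is not a tested fix):

{code}
// inside the CONTAINER_ALLOCATED transition (sketch):
List<Container> amContainers = amContainerAllocation.getContainers();
if (amContainers.isEmpty()) {
  // nothing was actually handed back yet - do not index into the empty
  // list; stay in the current state and wait for a real allocation
  return RMAppAttemptState.SCHEDULED;
}
Container amContainer = amContainers.get(0);
{code}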

 ResourceManager throws ArrayIndexOutOfBoundsException while handling 
 CONTAINER_ALLOCATED for application attempt
 

 Key: YARN-292
 URL: https://issues.apache.org/jira/browse/YARN-292
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Affects Versions: 2.0.1-alpha
Reporter: Devaraj K
Assignee: Zhijie Shen
 Attachments: YARN-292.1.patch, YARN-292.2.patch, YARN-292.3.patch


 {code:xml}
 2012-12-26 08:41:15,030 ERROR 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.FifoScheduler: 
 Calling allocate on removed or non existant application 
 appattempt_1356385141279_49525_01
 2012-12-26 08:41:15,031 ERROR 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
 handling event type CONTAINER_ALLOCATED for applicationAttempt 
 application_1356385141279_49525
 java.lang.ArrayIndexOutOfBoundsException: 0
   at java.util.Arrays$ArrayList.get(Arrays.java:3381)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$AMContainerAllocatedTransition.transition(RMAppAttemptImpl.java:655)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$AMContainerAllocatedTransition.transition(RMAppAttemptImpl.java:644)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:357)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:298)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:490)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:80)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:433)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:414)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:126)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:75)
   at java.lang.Thread.run(Thread.java:662)
  {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1065) NM should provide AuxiliaryService data to the container

2013-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13757228#comment-13757228
 ] 

Hadoop QA commented on YARN-1065:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12601261/YARN-1065.6.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1824//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/1824//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-yarn-common.html
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1824//console

This message is automatically generated.

 NM should provide AuxiliaryService data to the container
 

 Key: YARN-1065
 URL: https://issues.apache.org/jira/browse/YARN-1065
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Bikas Saha
Assignee: Xuan Gong
 Attachments: YARN-1065.1.patch, YARN-1065.2.patch, YARN-1065.3.patch, 
 YARN-1065.4.patch, YARN-1065.5.patch, YARN-1065.6.patch


 Start container returns auxiliary service data to the AM but does not provide 
 the same information to the task itself. It could add that information to the 
 container env with key=service_name and value=service_data. This allows the 
 container to start using the service without having to depend on the AM to 
 send the info to it indirectly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1065) NM should provide AuxiliaryService data to the container

2013-09-03 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13757234#comment-13757234
 ] 

Xuan Gong commented on YARN-1065:
-

Change AuxiliaryServiceHelper.NM_AUX_SERVICE to final

 NM should provide AuxiliaryService data to the container
 

 Key: YARN-1065
 URL: https://issues.apache.org/jira/browse/YARN-1065
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Bikas Saha
Assignee: Xuan Gong
 Attachments: YARN-1065.1.patch, YARN-1065.2.patch, YARN-1065.3.patch, 
 YARN-1065.4.patch, YARN-1065.5.patch, YARN-1065.6.patch, YARN-1065.7.patch


 Start container returns auxiliary service data to the AM but does not provide 
 the same information to the task itself. It could add that information to the 
 container env with key=service_name and value=service_data. This allows the 
 container to start using the service without having to depend on the AM to 
 send the info to it indirectly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1065) NM should provide AuxiliaryService data to the container

2013-09-03 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-1065:


Attachment: YARN-1065.7.patch

 NM should provide AuxiliaryService data to the container
 

 Key: YARN-1065
 URL: https://issues.apache.org/jira/browse/YARN-1065
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Bikas Saha
Assignee: Xuan Gong
 Attachments: YARN-1065.1.patch, YARN-1065.2.patch, YARN-1065.3.patch, 
 YARN-1065.4.patch, YARN-1065.5.patch, YARN-1065.6.patch, YARN-1065.7.patch


 Start container returns auxiliary service data to the AM but does not provide 
 the same information to the task itself. It could add that information to the 
 container env with key=service_name and value=service_data. This allows the 
 container to start using the service without having to depend on the AM to 
 send the info to it indirectly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-311) Dynamic node resource configuration: core scheduler changes

2013-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13757256#comment-13757256
 ] 

Hadoop QA commented on YARN-311:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12601265/YARN-311-v6.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1825//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1825//console

This message is automatically generated.

 Dynamic node resource configuration: core scheduler changes
 ---

 Key: YARN-311
 URL: https://issues.apache.org/jira/browse/YARN-311
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager, scheduler
Reporter: Junping Du
Assignee: Junping Du
 Attachments: YARN-311-v1.patch, YARN-311-v2.patch, YARN-311-v3.patch, 
 YARN-311-v4.patch, YARN-311-v4.patch, YARN-311-v5.patch, YARN-311-v6.1.patch, 
 YARN-311-v6.patch


 As the first step, we go for resource change on the RM side and expose admin 
 APIs (admin protocol, CLI, REST and JMX API). This JIRA will only contain 
 changes in the scheduler.
 For design details, please refer to the proposal and discussions in the 
 parent JIRA: YARN-291.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1065) NM should provide AuxiliaryService data to the container

2013-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13757273#comment-13757273
 ] 

Hadoop QA commented on YARN-1065:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12601269/YARN-1065.7.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1826//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1826//console

This message is automatically generated.

 NM should provide AuxiliaryService data to the container
 

 Key: YARN-1065
 URL: https://issues.apache.org/jira/browse/YARN-1065
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Bikas Saha
Assignee: Xuan Gong
 Attachments: YARN-1065.1.patch, YARN-1065.2.patch, YARN-1065.3.patch, 
 YARN-1065.4.patch, YARN-1065.5.patch, YARN-1065.6.patch, YARN-1065.7.patch


 Start container returns auxiliary service data to the AM but does not provide 
 the same information to the task itself. It could add that information to the 
 container env with key=service_name and value=service_data. This allows the 
 container to start using the service without having to depend on the AM to 
 send the info to it indirectly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-540) Race condition causing RM to potentially relaunch already unregistered AMs on RM restart

2013-09-03 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-540:
-

Priority: Major  (was: Blocker)

Synced up with [~jianhe] and [~bikassaha] offline, and we all agree that the 
correct solution is an RM restart that preserves work. If the RM preserves 
work, then it will not blindly start new ApplicationAttempts and will accept 
connections from old AMs, so we are good.

Reducing the priority for now; in the meantime MR can pick up a work-around 
via MAPREDUCE-5471.

 Race condition causing RM to potentially relaunch already unregistered AMs on 
 RM restart
 

 Key: YARN-540
 URL: https://issues.apache.org/jira/browse/YARN-540
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-540.1.patch, YARN-540.2.patch, YARN-540.patch, 
 YARN-540.patch


 When a job succeeds and successfully calls finishApplicationMaster, and the 
 RM then shuts down and restarts, the dispatcher is stopped before it can 
 process the REMOVE_APP event. The next time the RM comes back, it will reload 
 the existing state files even though the job has succeeded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-1141) Updating resource requests should be decoupled from updating blacklist

2013-09-03 Thread Zhijie Shen (JIRA)
Zhijie Shen created YARN-1141:
-

 Summary: Updating resource requests should be decoupled from 
updating blacklist
 Key: YARN-1141
 URL: https://issues.apache.org/jira/browse/YARN-1141
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Zhijie Shen
Assignee: Zhijie Shen


Currently, in CapacityScheduler and FifoScheduler, the blacklist is updated 
together with the resource requests, and only when the incoming resource 
requests are not empty. Therefore, when the incoming resource requests are 
empty, the blacklist will not be updated even if the blacklist additions and 
removals are not empty.
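
In other words, the scheduler's allocate() path should apply blacklist changes 
unconditionally; a sketch of the decoupled shape (method names loosely follow 
the scheduler code and are not verbatim):

{code}
// sketch: update requests only when there are any, but always apply
// blacklist additions/removals, even for an empty ask
if (!ask.isEmpty()) {
  application.updateResourceRequests(ask);
}
application.updateBlacklist(blacklistAdditions, blacklistRemovals);
{code}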

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1141) Updating resource requests should be decoupled from updating blacklist

2013-09-03 Thread Zhijie Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijie Shen updated YARN-1141:
--

Attachment: YARN-1141.1.patch

Created a patch to decouple blacklist updates from the updateResourceRequests 
method.

 Updating resource requests should be decoupled from updating blacklist
 --

 Key: YARN-1141
 URL: https://issues.apache.org/jira/browse/YARN-1141
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Zhijie Shen
Assignee: Zhijie Shen
 Attachments: YARN-1141.1.patch


 Currently, in CapacityScheduler and FifoScheduler, the blacklist is updated 
 together with the resource requests, and only when the incoming resource 
 requests are not empty. Therefore, when the incoming resource requests are 
 empty, the blacklist will not be updated even if the blacklist additions and 
 removals are not empty.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1001) YARN should provide per application-type and state statistics

2013-09-03 Thread Srimanth Gunturi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13757378#comment-13757378
 ] 

Srimanth Gunturi commented on YARN-1001:


If the work is done, can this patch be committed for the next release? Ambari 
has functionality that depends on this.

 YARN should provide per application-type and state statistics
 -

 Key: YARN-1001
 URL: https://issues.apache.org/jira/browse/YARN-1001
 Project: Hadoop YARN
  Issue Type: Task
  Components: api
Affects Versions: 2.1.0-beta
Reporter: Srimanth Gunturi
Assignee: Zhijie Shen
 Attachments: YARN-1001.1.patch


 In Ambari we plan to show, for MR2, the number of applications finished, 
 running, waiting, etc. It would be efficient if YARN could provide aggregated 
 counts per application type and state.
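
For the record, one plausible shape for such an aggregated call (the endpoint 
and field names below are hypothetical; the final API may well differ):

{code}
GET http://<rm http address:port>/ws/v1/cluster/appstatistics?states=running,accepted&applicationTypes=mapreduce

{"appStatInfo":{"statItem":[
  {"state":"running","type":"mapreduce","count":"5"},
  {"state":"accepted","type":"mapreduce","count":"2"}]}}
{code}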

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1141) Updating resource requests should be decoupled from updating blacklist

2013-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13757382#comment-13757382
 ] 

Hadoop QA commented on YARN-1141:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12601303/YARN-1141.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1827//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1827//console

This message is automatically generated.

 Updating resource requests should be decoupled from updating blacklist
 --

 Key: YARN-1141
 URL: https://issues.apache.org/jira/browse/YARN-1141
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Zhijie Shen
Assignee: Zhijie Shen
 Attachments: YARN-1141.1.patch


 Currently, in CapacityScheduler and FifoScheduler, the blacklist is updated 
 together with the resource requests, and only when the incoming resource 
 requests are not empty. Therefore, when the incoming resource requests are 
 empty, the blacklist will not be updated even if the blacklist additions and 
 removals are not empty.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-696) Enable multiple states to be specified in Resource Manager apps REST call

2013-09-03 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated YARN-696:


Target Version/s: 3.0.0, 2.1.1-beta  (was: 3.0.0, 2.1.0-beta)

 Enable multiple states to be specified in Resource Manager apps REST call
 

 Key: YARN-696
 URL: https://issues.apache.org/jira/browse/YARN-696
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 2.0.4-alpha
Reporter: Trevor Lorimer
Assignee: Trevor Lorimer
 Attachments: YARN-696.diff, YARN-696.diff, YARN-696.diff, 
 YARN-696.diff


 Within the YARN Resource Manager REST API, the GET call which returns all 
 Applications can be filtered by a single State query parameter 
 (http://<rm http address:port>/ws/v1/cluster/apps). 
 There are 8 possible states (New, Submitted, Accepted, Running, Finishing, 
 Finished, Failed, Killed). If no state parameter is specified, all states are 
 returned; however, if a sub-set of states is required, then multiple REST 
 calls are needed (a max. of 7).
 The proposal is to be able to specify multiple states in a single REST call.
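
A single call with a comma-separated list is the obvious shape, e.g. (the 
exact parameter name is up to the patch; "states" here is illustrative):

{code}
GET http://<rm http address:port>/ws/v1/cluster/apps?states=running,accepted,submitted
{code}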

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-696) Enable multiple states to be specified in Resource Manager apps REST call

2013-09-03 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13757471#comment-13757471
 ] 

Konstantin Boudnik commented on YARN-696:
-

Thanks for working on this, Trevor. I just realized that this JIRA has been 
back and forth for three and a half months: incredible patience on your part!

The new patch looks quite good to me, and all comments seem to have been 
addressed. I have also changed the targeted version to 2.1.1-beta. Vinod, do 
you think it can go in now?

 Enable multiple states to be specified in Resource Manager apps REST call
 

 Key: YARN-696
 URL: https://issues.apache.org/jira/browse/YARN-696
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 2.0.4-alpha
Reporter: Trevor Lorimer
Assignee: Trevor Lorimer
 Attachments: YARN-696.diff, YARN-696.diff, YARN-696.diff, 
 YARN-696.diff


 Within the YARN Resource Manager REST API, the GET call which returns all 
 Applications can be filtered by a single State query parameter 
 (http://<rm http address:port>/ws/v1/cluster/apps). 
 There are 8 possible states (New, Submitted, Accepted, Running, Finishing, 
 Finished, Failed, Killed). If no state parameter is specified, all states are 
 returned; however, if a sub-set of states is required, then multiple REST 
 calls are needed (a max. of 7).
 The proposal is to be able to specify multiple states in a single REST call.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-696) Enable multiple states to be specified in Resource Manager apps REST call

2013-09-03 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13757484#comment-13757484
 ] 

Zhijie Shen commented on YARN-696:
--

Hi Trevor, the patch is almost good except:

1. 
{code}
+  if (checkAppStates) {
+    boolean match = false;
+    for (String appState : appStates) {
+      if (appState.equals(rmapp.getState().toString().toLowerCase())) {
+        match = true;
+      }
+    }
+    if (!match) {
+      continue;
+    }
+  }
{code}
Is it more concise to change to:
{code}
+  if (checkAppStates
+      && !appStates.contains(rmapp.getState().toString().toLowerCase())) {
+    continue;
+  }
{code}

2. Some lines have more than 80 chars. Would you please break them?

Sorry for being so harsh, but I think these will make the patch perfect. Thanks!

 Enable multiple states to be specified in Resource Manager apps REST call
 

 Key: YARN-696
 URL: https://issues.apache.org/jira/browse/YARN-696
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 2.0.4-alpha
Reporter: Trevor Lorimer
Assignee: Trevor Lorimer
 Attachments: YARN-696.diff, YARN-696.diff, YARN-696.diff, 
 YARN-696.diff


 Within the YARN Resource Manager REST API, the GET call which returns all 
 Applications can be filtered by a single State query parameter 
 (http://<rm http address:port>/ws/v1/cluster/apps). 
 There are 8 possible states (New, Submitted, Accepted, Running, Finishing, 
 Finished, Failed, Killed). If no state parameter is specified, all states are 
 returned; however, if a sub-set of states is required, then multiple REST 
 calls are needed (a max. of 7).
 The proposal is to be able to specify multiple states in a single REST call.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira