[jira] [Commented] (YARN-771) AMRMClient support for resource blacklisting

2013-08-30 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754441#comment-13754441
 ] 

Junping Du commented on YARN-771:
-

bq. an API should be clear by itself.
That's a good point; clarity is more important here. Will update the patch 
accordingly.
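
For illustration, a minimal sketch of how an AM might use the client-side blacklist once this lands; the method name updateBlacklist(additions, removals) follows the patches under review here and may still change, and the node name is made up:
{code}
import java.util.Arrays;
import java.util.Collections;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;

public class BlacklistUsageSketch {
  // Assumes an already started and registered AMRMClient.
  public static void blacklistNode(AMRMClient<ContainerRequest> client,
      String badNode) {
    // Ask the RM to stop handing us containers on badNode; nothing is removed
    // from the blacklist in this sketch.
    client.updateBlacklist(Arrays.asList(badNode),
        Collections.<String>emptyList());
    // The next allocate() heartbeat carries the blacklist to the RM.
  }
}
{code}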

 AMRMClient  support for resource blacklisting
 -

 Key: YARN-771
 URL: https://issues.apache.org/jira/browse/YARN-771
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha
Assignee: Junping Du
 Attachments: YARN-771-v1.0.patch, YARN-771-v2.patch, YARN-771-v3.patch


 After YARN-750, AMRMClient should support blacklisting via the new YARN APIs

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-771) AMRMClient support for resource blacklisting

2013-08-30 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-771:


Attachment: YARN-771-v4.patch

 AMRMClient  support for resource blacklisting
 -

 Key: YARN-771
 URL: https://issues.apache.org/jira/browse/YARN-771
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha
Assignee: Junping Du
 Attachments: YARN-771-v1.0.patch, YARN-771-v2.patch, 
 YARN-771-v3.patch, YARN-771-v4.patch


 After YARN-750, AMRMClient should support blacklisting via the new YARN APIs

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-879) Fix tests w.r.t o.a.h.y.server.resourcemanager.Application

2013-08-30 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-879:


Attachment: YARN-879-v3.patch

Synced up the patch with the latest trunk branch. Can someone review this? Thanks!

 Fix tests w.r.t o.a.h.y.server.resourcemanager.Application
 --

 Key: YARN-879
 URL: https://issues.apache.org/jira/browse/YARN-879
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Junping Du
Assignee: Junping Du
 Attachments: YARN-879.patch, YARN-879-v2.patch, YARN-879-v3.patch


 getResources() should return the list of containers allocated by the RM. 
 However, it currently returns null directly. Worse, if LOG.debug is enabled, 
 this will definitely cause an NPE.
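
For context, a rough sketch of the intended behaviour; the class, field, and logger names below are assumptions for illustration, not the actual patch:
{code}
import java.util.ArrayList;
import java.util.List;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.yarn.api.records.Container;

class ApplicationSketch {
  private static final Log LOG = LogFactory.getLog(ApplicationSketch.class);
  private final List<Container> allocated = new ArrayList<Container>();

  public synchronized List<Container> getResources() {
    if (LOG.isDebugEnabled()) {
      LOG.debug("getResources: " + allocated.size() + " allocated containers");
    }
    // Hand back what the RM allocated so far instead of returning null.
    List<Container> result = new ArrayList<Container>(allocated);
    allocated.clear();
    return result;
  }
}
{code}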

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-771) AMRMClient support for resource blacklisting

2013-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754494#comment-13754494
 ] 

Hadoop QA commented on YARN-771:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12600749/YARN-771-v4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1805//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1805//console

This message is automatically generated.

 AMRMClient  support for resource blacklisting
 -

 Key: YARN-771
 URL: https://issues.apache.org/jira/browse/YARN-771
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha
Assignee: Junping Du
 Attachments: YARN-771-v1.0.patch, YARN-771-v2.patch, 
 YARN-771-v3.patch, YARN-771-v4.patch


 After YARN-750, AMRMClient should support blacklisting via the new YARN APIs

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-879) Fix tests w.r.t o.a.h.y.server.resourcemanager.Application

2013-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754505#comment-13754505
 ] 

Hadoop QA commented on YARN-879:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12600751/YARN-879-v3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1806//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1806//console

This message is automatically generated.

 Fix tests w.r.t o.a.h.y.server.resourcemanager.Application
 --

 Key: YARN-879
 URL: https://issues.apache.org/jira/browse/YARN-879
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Junping Du
Assignee: Junping Du
 Attachments: YARN-879.patch, YARN-879-v2.patch, YARN-879-v3.patch


 getResources() should return the list of containers allocated by the RM. 
 However, it currently returns null directly. Worse, if LOG.debug is enabled, 
 this will definitely cause an NPE.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-707) Add user info in the YARN ClientToken

2013-08-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754580#comment-13754580
 ] 

Hudson commented on YARN-707:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #317 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/317/])
YARN-707. Added user information also in the YARN ClientToken so that AMs can 
implement authorization based on incoming users. Contributed by Jason Lowe. 
(vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518868)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/ClientToAMTokenIdentifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/ClientToAMTokenSecretManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMApp.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttempt.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/ClientToAMTokenSecretManagerInRM.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/MockAsm.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/MockRMApp.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/TestRMAppTransitions.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestClientToAMTokens.java


 Add user info in the YARN ClientToken
 -

 Key: YARN-707
 URL: https://issues.apache.org/jira/browse/YARN-707
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Jason Lowe
Priority: Blocker
 Fix For: 3.0.0, 2.1.1-beta

 Attachments: YARN-707-20130822.txt, YARN-707-20130827.txt, 
 YARN-707-20130828-2.txt, YARN-707-20130828.txt, YARN-707-20130829.txt


 If user info is present in the client token then it can be used to do limited 
 authz in the AM.
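
For illustration, limited authorization in the AM could then look roughly like the sketch below once the caller's identity comes from the client token; the ClientAuthzSketch class and appSubmitter field are placeholders, not the actual implementation:
{code}
import java.io.IOException;
import org.apache.hadoop.security.AccessControlException;
import org.apache.hadoop.security.UserGroupInformation;

class ClientAuthzSketch {
  private final String appSubmitter;   // placeholder: user who submitted the app

  ClientAuthzSketch(String appSubmitter) {
    this.appSubmitter = appSubmitter;
  }

  // Reject calls from users other than the submitter.
  void checkAccess() throws IOException {
    String caller = UserGroupInformation.getCurrentUser().getShortUserName();
    if (!caller.equals(appSubmitter)) {
      throw new AccessControlException("User " + caller
          + " is not allowed to access this application");
    }
  }
}
{code}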

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1080) Improve help message for $ yarn logs

2013-08-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754586#comment-13754586
 ] 

Hudson commented on YARN-1080:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #317 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/317/])
YARN-1080. Improved help message for yarn logs command. Contributed by Xuan 
Gong. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518731)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/LogDumper.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestLogDumper.java


 Improve help message for $ yarn logs
 

 Key: YARN-1080
 URL: https://issues.apache.org/jira/browse/YARN-1080
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client
Reporter: Tassapol Athiapinya
Assignee: Xuan Gong
 Fix For: 2.1.1-beta

 Attachments: YARN-1080.1.patch, YARN-1080.2.patch


 There are two parts I am proposing in this jira. They can be fixed together in 
 one patch.
 1. Standardize the help message for the required parameter of $ yarn logs
 The YARN CLI has a logs command ($ yarn logs). The command always requires the 
 -applicationId <arg> parameter. However, the help message of the command 
 does not make this clear: it lists -applicationId as an optional parameter. If I 
 don't set it, the YARN CLI will complain that it is missing. It is better to use 
 the standard required-parameter notation used by other Linux commands in the 
 help message, so that any user familiar with such commands can more easily see 
 that this parameter is required.
 {code:title=current help message}
 -bash-4.1$ yarn logs
 usage: general options are:
  -applicationId <arg>   ApplicationId (required)
  -appOwner <arg>        AppOwner (assumed to be current user if not
                         specified)
  -containerId <arg>     ContainerId (must be specified if node address is
                         specified)
  -nodeAddress <arg>     NodeAddress in the format nodename:port (must be
                         specified if container id is specified)
 {code}
 {code:title=proposed help message}
 -bash-4.1$ yarn logs
 usage: yarn logs -applicationId <application ID> [OPTIONS]
 general options are:
  -appOwner <arg>        AppOwner (assumed to be current user if not
                         specified)
  -containerId <arg>     ContainerId (must be specified if node address is
                         specified)
  -nodeAddress <arg>     NodeAddress in the format nodename:port (must be
                         specified if container id is specified)
 {code}
 2. Add a description to the help output. As far as I know, a user cannot get 
 logs for a running job. Since I spent some time trying to get logs of running 
 applications, it would be nice to state this in the command description.
 {code:title=proposed help}
 Retrieve logs for completed/killed YARN application
 usage: general options are...
 {code}
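
For what it's worth, if LogDumper keeps building its options with Apache Commons CLI (which the current usage output suggests), the proposal could be approximated roughly as below; LogsHelpSketch and the option subset shown are illustrative only:
{code}
import org.apache.commons.cli.HelpFormatter;
import org.apache.commons.cli.Option;
import org.apache.commons.cli.Options;

public class LogsHelpSketch {
  public static void printHelp() {
    Options opts = new Options();
    Option appId = new Option("applicationId", true, "ApplicationId (required)");
    appId.setRequired(true);   // parsing now fails fast if it is missing
    opts.addOption(appId);
    opts.addOption("appOwner", true,
        "AppOwner (assumed to be current user if not specified)");
    // The explicit syntax line is what makes the required parameter obvious.
    new HelpFormatter().printHelp(
        "yarn logs -applicationId <application ID> [OPTIONS]", opts);
  }
}
{code}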

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1080) Improve help message for $ yarn logs

2013-08-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754671#comment-13754671
 ] 

Hudson commented on YARN-1080:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1507 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1507/])
YARN-1080. Improved help message for yarn logs command. Contributed by Xuan 
Gong. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518731)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/LogDumper.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestLogDumper.java


 Improve help message for $ yarn logs
 

 Key: YARN-1080
 URL: https://issues.apache.org/jira/browse/YARN-1080
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client
Reporter: Tassapol Athiapinya
Assignee: Xuan Gong
 Fix For: 2.1.1-beta

 Attachments: YARN-1080.1.patch, YARN-1080.2.patch


 There are two parts I am proposing in this jira. They can be fixed together in 
 one patch.
 1. Standardize the help message for the required parameter of $ yarn logs
 The YARN CLI has a logs command ($ yarn logs). The command always requires the 
 -applicationId <arg> parameter. However, the help message of the command 
 does not make this clear: it lists -applicationId as an optional parameter. If I 
 don't set it, the YARN CLI will complain that it is missing. It is better to use 
 the standard required-parameter notation used by other Linux commands in the 
 help message, so that any user familiar with such commands can more easily see 
 that this parameter is required.
 {code:title=current help message}
 -bash-4.1$ yarn logs
 usage: general options are:
  -applicationId <arg>   ApplicationId (required)
  -appOwner <arg>        AppOwner (assumed to be current user if not
                         specified)
  -containerId <arg>     ContainerId (must be specified if node address is
                         specified)
  -nodeAddress <arg>     NodeAddress in the format nodename:port (must be
                         specified if container id is specified)
 {code}
 {code:title=proposed help message}
 -bash-4.1$ yarn logs
 usage: yarn logs -applicationId <application ID> [OPTIONS]
 general options are:
  -appOwner <arg>        AppOwner (assumed to be current user if not
                         specified)
  -containerId <arg>     ContainerId (must be specified if node address is
                         specified)
  -nodeAddress <arg>     NodeAddress in the format nodename:port (must be
                         specified if container id is specified)
 {code}
 2. Add a description to the help output. As far as I know, a user cannot get 
 logs for a running job. Since I spent some time trying to get logs of running 
 applications, it would be nice to state this in the command description.
 {code:title=proposed help}
 Retrieve logs for completed/killed YARN application
 usage: general options are...
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-707) Add user info in the YARN ClientToken

2013-08-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754741#comment-13754741
 ] 

Hudson commented on YARN-707:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1534 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1534/])
YARN-707. Added user information also in the YARN ClientToken so that AMs can 
implement authorization based on incoming users. Contributed by Jason Lowe. 
(vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518868)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/ClientToAMTokenIdentifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/ClientToAMTokenSecretManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMApp.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttempt.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/ClientToAMTokenSecretManagerInRM.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/MockAsm.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/MockRMApp.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/TestRMAppTransitions.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestClientToAMTokens.java


 Add user info in the YARN ClientToken
 -

 Key: YARN-707
 URL: https://issues.apache.org/jira/browse/YARN-707
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Jason Lowe
Priority: Blocker
 Fix For: 3.0.0, 2.1.1-beta

 Attachments: YARN-707-20130822.txt, YARN-707-20130827.txt, 
 YARN-707-20130828-2.txt, YARN-707-20130828.txt, YARN-707-20130829.txt


 If user info is present in the client token then it can be used to do limited 
 authz in the AM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-896) Roll up for long lived YARN

2013-08-30 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754787#comment-13754787
 ] 

Robert Joseph Evans commented on YARN-896:
--

I agree that providing a good way to handle stdout and stderr is important. I'm 
not sure I want the NM to be doing this for us, but that is an 
implementation detail that we can talk about on the follow-up JIRA. Chris, 
feel free to file a JIRA for rolling of stdout and stderr and we can look into 
what it will take to support that properly.

 Roll up for long lived YARN
 ---

 Key: YARN-896
 URL: https://issues.apache.org/jira/browse/YARN-896
 Project: Hadoop YARN
  Issue Type: New Feature
Reporter: Robert Joseph Evans

 YARN is intended to be general purpose, but it is missing some features to be 
 able to truly support long lived applications and long lived containers.
 This ticket is intended to
  # discuss what is needed to support long lived processes
  # track the resulting JIRA.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1065) NM should provide AuxillaryService data to the container

2013-08-30 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13755048#comment-13755048
 ] 

Xuan Gong commented on YARN-1065:
-

[~bikassaha]
Please ignore YARN-1065.1.patch; please take a look at YARN-1065.2.patch instead.

 NM should provide AuxillaryService data to the container
 

 Key: YARN-1065
 URL: https://issues.apache.org/jira/browse/YARN-1065
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Bikas Saha
Assignee: Xuan Gong
 Attachments: YARN-1065.1.patch, YARN-1065.2.patch


 Start container returns auxiliary service data to the AM but does not provide 
 the same information to the task itself. It could add that information to the 
 container env with key=service_name and value=service_data. This allows the 
 container to start using the service without having to depend on the AM to 
 send the info to it indirectly.
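
To make the idea concrete, a container-side sketch of reading such data back, assuming the NM exported it under the service name and Base64-encoded it (both details are assumptions, not an agreed design):
{code}
import java.nio.ByteBuffer;
import org.apache.commons.codec.binary.Base64;

public class AuxServiceDataSketch {
  // Returns the decoded service data, or null if the variable is not set.
  public static ByteBuffer readServiceData(String serviceName) {
    String encoded = System.getenv(serviceName);   // e.g. "mapreduce_shuffle"
    if (encoded == null) {
      return null;
    }
    return ByteBuffer.wrap(Base64.decodeBase64(encoded));
  }
}
{code}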

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-1127) reservation exchange and excess reservation is not working for capacity scheduler

2013-08-30 Thread Omkar Vinit Joshi (JIRA)
Omkar Vinit Joshi created YARN-1127:
---

 Summary: reservation exchange and excess reservation is not 
working for capacity scheduler
 Key: YARN-1127
 URL: https://issues.apache.org/jira/browse/YARN-1127
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.1.1-beta
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
Priority: Blocker


I have 2 node managers:
* one with 1024 MB memory (nm1)
* a second with 2048 MB memory (nm2)
I am submitting a simple MapReduce application with 1 mapper and 1 reducer of 
1024 MB each. The steps to reproduce this are:
* stop nm2 (the 2048 MB one). I do this to make sure that this node's 
heartbeat doesn't reach the RM first.
* now submit the application. As soon as the RM receives the first node's (nm1) 
heartbeat, it will try to reserve memory for the AM container (2048 MB). However, 
nm1 has only 1024 MB of memory.
* now start nm2 with 2048 MB memory.

It hangs forever... Ideally this covers two potential issues.

* Say 2048 MB is reserved on nm1 but nm2 comes back with 2048 MB of available 
memory. In this case, if the original request was made without any locality, then 
the scheduler should unreserve the memory on nm1 and allocate the requested 
2048 MB container on nm2. 
* We support a notion where, say, we have 5 nodes and 4 AMs, all node managers 
have 8 GB each, and each AM is 2 GB. Each AM is requesting 8 GB. Now, to 
avoid deadlock, the AM will make an extra reservation. By doing this we would 
never hit the deadlock situation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1065) NM should provide AuxillaryService data to the container

2013-08-30 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-1065:


Attachment: YARN-1065.2.patch

 NM should provide AuxillaryService data to the container
 

 Key: YARN-1065
 URL: https://issues.apache.org/jira/browse/YARN-1065
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Bikas Saha
Assignee: Xuan Gong
 Attachments: YARN-1065.1.patch, YARN-1065.2.patch


 Start container returns auxiliary service data to the AM but does not provide 
 the same information to the task itself. It could add that information to the 
 container env with key=service_name and value=service_data. This allows the 
 container to start using the service without having to depend on the AM to 
 send the info to it indirectly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-957) Capacity Scheduler tries to reserve the memory more than what node manager reports.

2013-08-30 Thread Omkar Vinit Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13755089#comment-13755089
 ] 

Omkar Vinit Joshi commented on YARN-957:


Uploading a patch which only fixes the excess memory reservation issue.

 Capacity Scheduler tries to reserve the memory more than what node manager 
 reports.
 ---

 Key: YARN-957
 URL: https://issues.apache.org/jira/browse/YARN-957
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
Priority: Blocker
 Attachments: YARN-957-20130730.1.patch, YARN-957-20130730.2.patch, 
 YARN-957-20130730.3.patch, YARN-957-20130731.1.patch


 I have 2 node managers:
 * one with 1024 MB memory (nm1)
 * a second with 2048 MB memory (nm2)
 I am submitting a simple MapReduce application with 1 mapper and 1 reducer of 
 1024 MB each. The steps to reproduce this are:
 * stop nm2 (the 2048 MB one). I do this to make sure that this node's 
 heartbeat doesn't reach the RM first.
 * now submit the application. As soon as the RM receives the first node's (nm1) 
 heartbeat, it will try to reserve memory for the AM container (2048 MB). However, 
 nm1 has only 1024 MB of memory.
 * now start nm2 with 2048 MB memory.
 It hangs forever... Ideally this covers two potential issues.
 * It should not try to reserve memory on a node manager which is never going 
 to be able to give the requested memory; i.e. the current maximum capability of 
 the node manager is 1024 MB but 2048 MB is reserved on it. But it still does that.
 * Say 2048 MB is reserved on nm1 but nm2 comes back with 2048 MB of available 
 memory. In this case, if the original request was made without any locality, 
 then the scheduler should unreserve the memory on nm1 and allocate the requested 
 2048 MB container on nm2.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-957) Capacity Scheduler tries to reserve the memory more than what node manager reports.

2013-08-30 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-957:
---

Attachment: YARN-957-20130830.1.patch

 Capacity Scheduler tries to reserve the memory more than what node manager 
 reports.
 ---

 Key: YARN-957
 URL: https://issues.apache.org/jira/browse/YARN-957
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
Priority: Blocker
 Attachments: YARN-957-20130730.1.patch, YARN-957-20130730.2.patch, 
 YARN-957-20130730.3.patch, YARN-957-20130731.1.patch, 
 YARN-957-20130830.1.patch


 I have 2 node managers:
 * one with 1024 MB memory (nm1)
 * a second with 2048 MB memory (nm2)
 I am submitting a simple MapReduce application with 1 mapper and 1 reducer of 
 1024 MB each. The steps to reproduce this are:
 * stop nm2 (the 2048 MB one). I do this to make sure that this node's 
 heartbeat doesn't reach the RM first.
 * now submit the application. As soon as the RM receives the first node's (nm1) 
 heartbeat, it will try to reserve memory for the AM container (2048 MB). However, 
 nm1 has only 1024 MB of memory.
 * now start nm2 with 2048 MB memory.
 It hangs forever... Ideally this covers two potential issues.
 * It should not try to reserve memory on a node manager which is never going 
 to be able to give the requested memory; i.e. the current maximum capability of 
 the node manager is 1024 MB but 2048 MB is reserved on it. But it still does that.
 * Say 2048 MB is reserved on nm1 but nm2 comes back with 2048 MB of available 
 memory. In this case, if the original request was made without any locality, 
 then the scheduler should unreserve the memory on nm1 and allocate the requested 
 2048 MB container on nm2.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1127) reservation exchange and excess reservation is not working for capacity scheduler

2013-08-30 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13755090#comment-13755090
 ] 

Bikas Saha commented on YARN-1127:
--

Isn't this similar to a jira you opened already? The issue being that the 
scheduler puts a reservation on a node whose total capacity is smaller than the 
reservation resource size. In this case, nm1 has capacity=1024 but the 
scheduler is putting a reservation of 2048 on it, and that can never be 
satisfied. So it does not make sense to make that reservation at all.

 reservation exchange and excess reservation is not working for capacity 
 scheduler
 -

 Key: YARN-1127
 URL: https://issues.apache.org/jira/browse/YARN-1127
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.1.1-beta
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
Priority: Blocker

 I have 2 node managers:
 * one with 1024 MB memory (nm1)
 * a second with 2048 MB memory (nm2)
 I am submitting a simple MapReduce application with 1 mapper and 1 reducer of 
 1024 MB each. The steps to reproduce this are:
 * stop nm2 (the 2048 MB one). I do this to make sure that this node's 
 heartbeat doesn't reach the RM first.
 * now submit the application. As soon as the RM receives the first node's (nm1) 
 heartbeat, it will try to reserve memory for the AM container (2048 MB). However, 
 nm1 has only 1024 MB of memory.
 * now start nm2 with 2048 MB memory.
 It hangs forever... Ideally this covers two potential issues.
 * Say 2048 MB is reserved on nm1 but nm2 comes back with 2048 MB of available 
 memory. In this case, if the original request was made without any locality, 
 then the scheduler should unreserve the memory on nm1 and allocate the requested 
 2048 MB container on nm2. 
 * We support a notion where, say, we have 5 nodes and 4 AMs, all node managers 
 have 8 GB each, and each AM is 2 GB. Each AM is requesting 8 GB. Now, to 
 avoid deadlock, the AM will make an extra reservation. By doing this we would 
 never hit the deadlock situation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1127) reservation exchange and excess reservation is not working for capacity scheduler

2013-08-30 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13755095#comment-13755095
 ] 

Bikas Saha commented on YARN-1127:
--

How is this different from YARN-957?

 reservation exchange and excess reservation is not working for capacity 
 scheduler
 -

 Key: YARN-1127
 URL: https://issues.apache.org/jira/browse/YARN-1127
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.1.1-beta
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
Priority: Blocker

 I have 2 node managers:
 * one with 1024 MB memory (nm1)
 * a second with 2048 MB memory (nm2)
 I am submitting a simple MapReduce application with 1 mapper and 1 reducer of 
 1024 MB each. The steps to reproduce this are:
 * stop nm2 (the 2048 MB one). I do this to make sure that this node's 
 heartbeat doesn't reach the RM first.
 * now submit the application. As soon as the RM receives the first node's (nm1) 
 heartbeat, it will try to reserve memory for the AM container (2048 MB). However, 
 nm1 has only 1024 MB of memory.
 * now start nm2 with 2048 MB memory.
 It hangs forever... Ideally this covers two potential issues.
 * Say 2048 MB is reserved on nm1 but nm2 comes back with 2048 MB of available 
 memory. In this case, if the original request was made without any locality, 
 then the scheduler should unreserve the memory on nm1 and allocate the requested 
 2048 MB container on nm2. 
 * We support a notion where, say, we have 5 nodes and 4 AMs, all node managers 
 have 8 GB each, and each AM is 2 GB. Each AM is requesting 8 GB. Now, to 
 avoid deadlock, the AM will make an extra reservation. By doing this we would 
 never hit the deadlock situation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-696) Enable multiple states to to be specified in Resource Manager apps REST call

2013-08-30 Thread Trevor Lorimer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Lorimer updated YARN-696:


Attachment: YARN-696.diff

Okay, I'll take your word for it; uploading the latest diff.

The unit tests use two different applications with states of ACCEPTED and KILLED.

 Enable multiple states to to be specified in Resource Manager apps REST call
 

 Key: YARN-696
 URL: https://issues.apache.org/jira/browse/YARN-696
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 2.0.4-alpha
Reporter: Trevor Lorimer
Assignee: Trevor Lorimer
 Attachments: YARN-696.diff, YARN-696.diff


 Within the YARN Resource Manager REST API, the GET call which returns all 
 Applications can be filtered by a single State query parameter 
 (http://<rm http address:port>/ws/v1/cluster/apps). 
 There are 8 possible states (New, Submitted, Accepted, Running, Finishing, 
 Finished, Failed, Killed). If no state parameter is specified, all states are 
 returned; however, if a sub-set of states is required then multiple REST calls 
 are required (a maximum of 7).
 The proposal is to be able to specify multiple states in a single REST call.
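
As a usage sketch of the proposal: the parameter name ("states") and the comma-separated form are assumptions about how the multi-state filter might be spelled, and the RM address is a placeholder:
{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class MultiStateQuerySketch {
  public static void main(String[] args) throws Exception {
    // One call returning both ACCEPTED and KILLED applications.
    URL url = new URL(
        "http://rmhost:8088/ws/v1/cluster/apps?states=ACCEPTED,KILLED");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestProperty("Accept", "application/json");
    BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), "UTF-8"));
    String line;
    while ((line = in.readLine()) != null) {
      System.out.println(line);   // JSON list of matching apps
    }
    in.close();
  }
}
{code}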

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (YARN-1128) FifoPolicy.computeShares throws NPE on empty list of Schedulables

2013-08-30 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla reassigned YARN-1128:
--

Assignee: Karthik Kambatla

 FifoPolicy.computeShares throws NPE on empty list of Schedulables
 -

 Key: YARN-1128
 URL: https://issues.apache.org/jira/browse/YARN-1128
 Project: Hadoop YARN
  Issue Type: Bug
  Components: scheduler
Affects Versions: 2.1.0-beta
Reporter: Sandy Ryza
Assignee: Karthik Kambatla

 FifoPolicy gives all of a queue's share to the earliest-scheduled application.
 {code}
 Schedulable earliest = null;
 for (Schedulable schedulable : schedulables) {
   if (earliest == null ||
       schedulable.getStartTime() < earliest.getStartTime()) {
     earliest = schedulable;
   }
 }
 earliest.setFairShare(Resources.clone(totalResources));
 {code}
 If the queue has no schedulables in it, earliest will be left null, leading 
 to an NPE on the last line.
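
A minimal sketch of one possible guard (not necessarily the fix that will be committed): skip the assignment when the queue has no schedulables.
{code}
Schedulable earliest = null;
for (Schedulable schedulable : schedulables) {
  if (earliest == null ||
      schedulable.getStartTime() < earliest.getStartTime()) {
    earliest = schedulable;
  }
}
if (earliest != null) {   // the queue may currently have no schedulables at all
  earliest.setFairShare(Resources.clone(totalResources));
}
{code}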

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1127) reservation exchange and excess reservation is not working for capacity scheduler

2013-08-30 Thread Omkar Vinit Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13755103#comment-13755103
 ] 

Omkar Vinit Joshi commented on YARN-1127:
-

No. As per Arun, I am separating out the issues which are causing this failure.
* YARN-957 :- fix the case where a container gets reserved on a node manager 
whose memory it exceeds.
* this jira :- ideally the reservation should have switched from one node 
manager to the other once another node manager had sufficient memory. However, 
that did not happen. This must have occurred either because excess reservation 
did not work or because the reservation exchange did not occur. We need to find 
the root cause and fix this.

 reservation exchange and excess reservation is not working for capacity 
 scheduler
 -

 Key: YARN-1127
 URL: https://issues.apache.org/jira/browse/YARN-1127
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.1.1-beta
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
Priority: Blocker

 I have 2 node managers.
 * one with 1024 MB memory.(nm1)
 * second with 2048 MB memory.(nm2)
 I am submitting simple map reduce application with 1 mapper and one reducer 
 with 1024mb each. The steps to reproduce this are
 * stop nm2 with 2048MB memory.( This I am doing to make sure that this node's 
 heartbeat doesn't reach RM first).
 * now submit application. As soon as it receives first node's (nm1) heartbeat 
 it will try to reserve memory for AM-container (2048MB). However it has only 
 1024MB of memory.
 * now start nm2 with 2048 MB memory.
 It hangs forever... Ideally this has two potential issues.
 * Say 2048MB is reserved on nm1 but nm2 comes back with 2048MB available 
 memory. In this case if the original request was made without any locality 
 then scheduler should unreserve memory on nm1 and allocate requested 2048MB 
 container on nm2. 
 * We support a notion where if say we have 5 nodes with 4 AM and all node 
 managers have 8GB each and AM 2 GB each. Each AM is requesting 8GB each. Now 
 to avoid deadlock AM will make an extra reservation. By doing this we would 
 never hit the deadlock situation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-957) Capacity Scheduler tries to reserve the memory more than what node manager reports.

2013-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13755115#comment-13755115
 ] 

Hadoop QA commented on YARN-957:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12600849/YARN-957-20130830.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1810//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1810//console

This message is automatically generated.

 Capacity Scheduler tries to reserve the memory more than what node manager 
 reports.
 ---

 Key: YARN-957
 URL: https://issues.apache.org/jira/browse/YARN-957
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
Priority: Blocker
 Attachments: YARN-957-20130730.1.patch, YARN-957-20130730.2.patch, 
 YARN-957-20130730.3.patch, YARN-957-20130731.1.patch, 
 YARN-957-20130830.1.patch


 I have 2 node managers:
 * one with 1024 MB memory (nm1)
 * a second with 2048 MB memory (nm2)
 I am submitting a simple MapReduce application with 1 mapper and 1 reducer of 
 1024 MB each. The steps to reproduce this are:
 * stop nm2 (the 2048 MB one). I do this to make sure that this node's 
 heartbeat doesn't reach the RM first.
 * now submit the application. As soon as the RM receives the first node's (nm1) 
 heartbeat, it will try to reserve memory for the AM container (2048 MB). However, 
 nm1 has only 1024 MB of memory.
 * now start nm2 with 2048 MB memory.
 It hangs forever... Ideally this covers two potential issues.
 * It should not try to reserve memory on a node manager which is never going 
 to be able to give the requested memory; i.e. the current maximum capability of 
 the node manager is 1024 MB but 2048 MB is reserved on it. But it still does that.
 * Say 2048 MB is reserved on nm1 but nm2 comes back with 2048 MB of available 
 memory. In this case, if the original request was made without any locality, 
 then the scheduler should unreserve the memory on nm1 and allocate the requested 
 2048 MB container on nm2.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1129) Job hungs when any node is blacklisted after RMrestart

2013-08-30 Thread Zhijie Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijie Shen updated YARN-1129:
--

Assignee: Zhijie Shen

 Job hungs when any node is blacklisted after RMrestart
 --

 Key: YARN-1129
 URL: https://issues.apache.org/jira/browse/YARN-1129
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: yeshavora
Assignee: Zhijie Shen

 When the RM restarted, one NM went bad during the restart (bad disk). That NM 
 got blacklisted by the AM, but the RM keeps giving out containers on the same 
 node even though the AM doesn't want them there.
 Need to change the AM to specifically blacklist the node in its requests to the RM.
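
At the protocol level, the per-allocate blacklist added by YARN-750 can be attached roughly as sketched below (the node address is a placeholder and the surrounding AllocateRequest setup is omitted); YARN-771 is adding the corresponding AMRMClient support:
{code}
import java.util.Arrays;
import java.util.Collections;
import org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest;
import org.apache.hadoop.yarn.api.records.ResourceBlacklistRequest;
import org.apache.hadoop.yarn.util.Records;

public class AmBlacklistSketch {
  public static AllocateRequest withBlacklist(String badNodeAddress) {
    ResourceBlacklistRequest blacklist = ResourceBlacklistRequest.newInstance(
        Arrays.asList(badNodeAddress),       // nodes to add to the blacklist
        Collections.<String>emptyList());    // nothing to remove
    AllocateRequest request = Records.newRecord(AllocateRequest.class);
    request.setResourceBlacklistRequest(blacklist);
    return request;
  }
}
{code}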

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-896) Roll up for long lived YARN

2013-08-30 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754812#comment-13754812
 ] 

Jason Lowe commented on YARN-896:
-

bq. Chris, feel free to file a JIRA for rolling of stdout and stderr and we can 
look into what it will take to support that properly.

[~ste...@apache.org] recently filed YARN-1104 as a subtask of this JIRA which 
covers the NM rolling stdout/stderr.  We can transmute that JIRA into whatever 
ends up rolling the logs if it's not the NM.

 Roll up for long lived YARN
 ---

 Key: YARN-896
 URL: https://issues.apache.org/jira/browse/YARN-896
 Project: Hadoop YARN
  Issue Type: New Feature
Reporter: Robert Joseph Evans

 YARN is intended to be general purpose, but it is missing some features to be 
 able to truly support long lived applications and long lived containers.
 This ticket is intended to
  # discuss what is needed to support long lived processes
  # track the resulting JIRA.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-1129) Job hungs when any node is blacklisted after RMrestart

2013-08-30 Thread yeshavora (JIRA)
yeshavora created YARN-1129:
---

 Summary: Job hungs when any node is blacklisted after RMrestart
 Key: YARN-1129
 URL: https://issues.apache.org/jira/browse/YARN-1129
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: yeshavora


When the RM restarted, one NM went bad during the restart (bad disk). That NM got 
blacklisted by the AM, but the RM keeps giving out containers on the same node 
even though the AM doesn't want them there.

Need to change the AM to specifically blacklist the node in its requests to the RM.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1065) NM should provide AuxillaryService data to the container

2013-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13755063#comment-13755063
 ] 

Hadoop QA commented on YARN-1065:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12600840/YARN-1065.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1809//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1809//console

This message is automatically generated.

 NM should provide AuxillaryService data to the container
 

 Key: YARN-1065
 URL: https://issues.apache.org/jira/browse/YARN-1065
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Bikas Saha
Assignee: Xuan Gong
 Attachments: YARN-1065.1.patch, YARN-1065.2.patch


 Start container returns auxiliary service data to the AM but does not provide 
 the same information to the task itself. It could add that information to the 
 container env with key=service_name and value=service_data. This allows the 
 container to start using the service without having to depend on the AM to 
 send the info to it indirectly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1127) reservation exchange and excess reservation is not working for capacity scheduler

2013-08-30 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13755110#comment-13755110
 ] 

Bikas Saha commented on YARN-1127:
--

Then please clarify this in the description or a comment; otherwise it looked 
like an exact duplicate. So the purpose of this jira is to fix the following 
situation:
1) NM1 has 2048 capacity in total but only 512 is free. A reservation of 1024 
is placed on it.
2) NM2 now reports 1024 free space. At this point, the above reservation should 
be removed from NM1 and the container should be assigned to NM2.
Step 2 is not happening, and this jira intends to fix it.
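
A toy illustration of that intended check, using plain numbers rather than the scheduler's real data structures (purely illustrative):
{code}
public class ReservationSwitchSketch {
  public static void main(String[] args) {
    int requestMb = 1024;
    int nm1FreeMb = 512;                 // reservation currently parked on NM1
    int nm2FreeMb = 1024;                // NM2 heartbeat reports enough free space
    boolean requestIsNodeLocal = false;  // request was for any node ("*")

    if (!requestIsNodeLocal && nm2FreeMb >= requestMb) {
      // Step 2 above: drop the reservation on NM1 and allocate on NM2
      // instead of waiting indefinitely.
      System.out.println("unreserve on NM1, allocate " + requestMb + "MB on NM2");
    } else {
      System.out.println("keep waiting on NM1 (free=" + nm1FreeMb + "MB)");
    }
  }
}
{code}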

 reservation exchange and excess reservation is not working for capacity 
 scheduler
 -

 Key: YARN-1127
 URL: https://issues.apache.org/jira/browse/YARN-1127
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.1.1-beta
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
Priority: Blocker

 I have 2 node managers:
 * one with 1024 MB memory (nm1)
 * a second with 2048 MB memory (nm2)
 I am submitting a simple MapReduce application with 1 mapper and 1 reducer of 
 1024 MB each. The steps to reproduce this are:
 * stop nm2 (the 2048 MB one). I do this to make sure that this node's 
 heartbeat doesn't reach the RM first.
 * now submit the application. As soon as the RM receives the first node's (nm1) 
 heartbeat, it will try to reserve memory for the AM container (2048 MB). However, 
 nm1 has only 1024 MB of memory.
 * now start nm2 with 2048 MB memory.
 It hangs forever... Ideally this covers two potential issues.
 * Say 2048 MB is reserved on nm1 but nm2 comes back with 2048 MB of available 
 memory. In this case, if the original request was made without any locality, 
 then the scheduler should unreserve the memory on nm1 and allocate the requested 
 2048 MB container on nm2. 
 * We support a notion where, say, we have 5 nodes and 4 AMs, all node managers 
 have 8 GB each, and each AM is 2 GB. Each AM is requesting 8 GB. Now, to 
 avoid deadlock, the AM will make an extra reservation. By doing this we would 
 never hit the deadlock situation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1098) Separate out RM services into Always On and Active

2013-08-30 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-1098:
---

Attachment: yarn-1098-1.patch

Uploading a patch along the lines of my previous commit.

The only change is that AdminService is also in the Active services. We should 
move it to Always On once we make it HA-aware; currently, some of the 
functions don't apply to Standby RM.

Also, the patch depends on HADOOP-9918. 
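
To make the split concrete, here is a rough sketch of the structure under discussion; class names and the service lists in the comments are placeholders, not the actual patch:
{code}
import org.apache.hadoop.service.CompositeService;

public class RmHaSketch extends CompositeService {
  private CompositeService activeServices;   // stateful, Active-only services

  public RmHaSketch() {
    super("RmHaSketch");
    // Always On services would be added here with addService(...) so they
    // run in both Active and Standby states.
  }

  public synchronized void startActiveServices() throws Exception {
    activeServices = new CompositeService("RMActiveServices");
    // Stateful services (scheduler, ApplicationMasterService, ...) go here.
    activeServices.init(getConfig());
    activeServices.start();
  }

  public synchronized void stopActiveServices() throws Exception {
    if (activeServices != null) {
      activeServices.stop();
      activeServices = null;
    }
  }
}
{code}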

 Separate out RM services into Always On and Active
 --

 Key: YARN-1098
 URL: https://issues.apache.org/jira/browse/YARN-1098
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Affects Versions: 2.1.0-beta
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
  Labels: ha
 Attachments: yarn-1098-1.patch, yarn-1098-approach.patch, 
 yarn-1098-approach.patch


 From discussion on YARN-1027, it makes sense to separate out services that 
 are stateful and stateless. The stateless services can run perennially 
 irrespective of whether the RM is in Active/Standby state, while the stateful 
 services need to be started on transitionToActive() and completely shut down 
 on transitionToStandby().
 The external-facing stateless services should respond to the client/AM/NM 
 requests depending on whether the RM is Active/Standby.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1065) NM should provide AuxillaryService data to the container

2013-08-30 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-1065:


Attachment: YARN-1065.3.patch

 NM should provide AuxillaryService data to the container
 

 Key: YARN-1065
 URL: https://issues.apache.org/jira/browse/YARN-1065
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Bikas Saha
Assignee: Xuan Gong
 Attachments: YARN-1065.1.patch, YARN-1065.2.patch, YARN-1065.3.patch


 Start container returns auxiliary service data to the AM but does not provide 
 the same information to the task itself. It could add that information to the 
 container env with key=service_name and value=service_data. This allows the 
 container to start using the service without having to depend on the AM to 
 send the info to it indirectly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-1130) Improve the log flushing for tasks when mapred.userlog.limit.kb is set

2013-08-30 Thread Paul Han (JIRA)
Paul Han created YARN-1130:
--

 Summary: Improve the log flushing for tasks when 
mapred.userlog.limit.kb is set
 Key: YARN-1130
 URL: https://issues.apache.org/jira/browse/YARN-1130
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.0.5-alpha
Reporter: Paul Han


When userlog limit is set with something like this:
{code}
<property>
  <name>mapred.userlog.limit.kb</name>
  <value>2048</value>
  <description>The maximum size of user-logs of each task in KB.
  0 disables the cap.</description>
</property>
{code}
the log entries will be truncated randomly for the jobs.

The log size is left between 1.2MB and 1.6MB.

Since the log is already limited, avoiding log truncation is crucial for the user.

The other issue is that the current implementation 
(org.apache.hadoop.yarn.ContainerLogAppender) will not write to the log until 
the container shuts down and the log manager closes all appenders. Viewing the 
log during task execution is therefore not supported.

Will propose a patch to add a flush mechanism and also flush the log when the 
task is done.
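For illustration, a rough sketch of the kind of flush mechanism being proposed, built on log4j 1.2's WriterAppender plumbing; the class name and the event-count flush interval are assumptions, not the actual patch:

{code}
// Sketch only: flush the underlying writer every few events so the log file
// is readable while the task is still running, and flush again on close.
import org.apache.log4j.FileAppender;
import org.apache.log4j.spi.LoggingEvent;

public class FlushingFileAppender extends FileAppender {
  private int flushInterval = 100;   // hypothetical knob: events between flushes
  private int eventsSinceFlush = 0;

  @Override
  public void append(LoggingEvent event) {
    super.append(event);             // format and write the event as usual
    if (++eventsSinceFlush >= flushInterval) {
      if (qw != null) {
        qw.flush();                  // push buffered output to the log file
      }
      eventsSinceFlush = 0;
    }
  }

  @Override
  public synchronized void close() {
    if (qw != null) {
      qw.flush();                    // make sure the tail is on disk at shutdown
    }
    super.close();
  }

  public void setFlushInterval(int flushInterval) {
    this.flushInterval = flushInterval;
  }
}
{code}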


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1130) Improve the log flushing for tasks when mapred.userlog.limit.kb is set

2013-08-30 Thread Paul Han (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Han updated YARN-1130:
---

Description: 
When userlog limit is set with something like this:
{code}
<property>
  <name>mapred.userlog.limit.kb</name>
  <value>2048</value>
  <description>The maximum size of user-logs of each task in KB.
  0 disables the cap.</description>
</property>
{code}
the log entries will be truncated randomly for the jobs.

The log size is left between 1.2MB and 1.6MB.

Since the log is already limited, avoiding log truncation is crucial for the user.

The other issue with the current implementation 
(org.apache.hadoop.yarn.ContainerLogAppender) is that log entries will not be 
flushed to the file until the container shuts down and the log manager closes 
all appenders. Viewing the log during task execution is therefore not supported.

Will propose a patch to add a flush mechanism and also flush the log when the 
task is done.


  was:
When userlog limit is set with something like this:
{code}
<property>
  <name>mapred.userlog.limit.kb</name>
  <value>2048</value>
  <description>The maximum size of user-logs of each task in KB.
  0 disables the cap.</description>
</property>
{code}
the log entry will be truncated randomly for the jobs.

The log size is left between 1.2MB to 1.6MB.

Since the log is already limited, avoid the log truncation is crucial for user.

The other issue with the current 
impl(org.apache.hadoop.yarn.ContainerLogAppender) will not write to log until 
the container shutdown and logmanager close all appenders. If user likes to see 
the log during task execution, it doesn't support it.

Will propose a patch to add a flush mechanism and also flush the log when task 
is done.  



 Improve the log flushing for tasks when mapred.userlog.limit.kb is set
 --

 Key: YARN-1130
 URL: https://issues.apache.org/jira/browse/YARN-1130
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.0.5-alpha
Reporter: Paul Han

 When userlog limit is set with something like this:
 {code}
 <property>
   <name>mapred.userlog.limit.kb</name>
   <value>2048</value>
   <description>The maximum size of user-logs of each task in KB.
   0 disables the cap.</description>
 </property>
 {code}
 the log entries will be truncated randomly for the jobs.
 The log size is left between 1.2MB and 1.6MB.
 Since the log is already limited, avoiding log truncation is crucial for the 
 user.
 The other issue with the current implementation 
 (org.apache.hadoop.yarn.ContainerLogAppender) is that log entries will not be 
 flushed to the file until the container shuts down and the log manager closes 
 all appenders. Viewing the log during task execution is therefore not 
 supported.
 Will propose a patch to add a flush mechanism and also flush the log when the 
 task is done.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-1131) $ yarn logs should return a message log aggregation is during progress if YARN application is running

2013-08-30 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-1131:
-

 Summary: $ yarn logs should return a message log aggregation is 
during progress if YARN application is running
 Key: YARN-1131
 URL: https://issues.apache.org/jira/browse/YARN-1131
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client
Reporter: Tassapol Athiapinya
Priority: Minor
 Fix For: 2.1.1-beta


In the case when log aggregation is enabled, if a user submits a MapReduce job 
and runs $ yarn logs -applicationId <app ID> while the YARN application is 
running, the command will print no message and return the user back to the 
shell. It would be nice to tell the user that log aggregation is in progress.

{code}
-bash-4.1$ /usr/bin/yarn logs -applicationId application_1377900193583_0002
-bash-4.1$
{code}

At the same time, if an invalid application ID is given, the YARN CLI should say 
that the application ID is incorrect rather than throwing NoSuchElementException.
{code}
$ /usr/bin/yarn logs -applicationId application_0
Exception in thread "main" java.util.NoSuchElementException
at com.google.common.base.AbstractIterator.next(AbstractIterator.java:75)
at 
org.apache.hadoop.yarn.util.ConverterUtils.toApplicationId(ConverterUtils.java:124)
at 
org.apache.hadoop.yarn.util.ConverterUtils.toApplicationId(ConverterUtils.java:119)
at org.apache.hadoop.yarn.logaggregation.LogDumper.run(LogDumper.java:110)
at org.apache.hadoop.yarn.logaggregation.LogDumper.main(LogDumper.java:255)

{code}
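For illustration, a small sketch of the kind of guard the CLI could add before parsing the argument; the surrounding helper is hypothetical, while ConverterUtils.toApplicationId is the parser that currently throws:

{code}
// Sketch only: turn a malformed application ID into a readable error message
// instead of letting NoSuchElementException escape to the user.
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.util.ConverterUtils;

public class LogCliArgCheck {
  static ApplicationId parseOrExplain(String arg) {
    try {
      return ConverterUtils.toApplicationId(arg);
    } catch (Exception e) {  // e.g. NoSuchElementException, NumberFormatException
      System.err.println("Invalid ApplicationId: " + arg
          + " (expected something like application_1377900193583_0002)");
      return null;
    }
  }
}
{code}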


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-1132) QueueMetrics.java has wrong comments

2013-08-30 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created YARN-1132:
---

 Summary: QueueMetrics.java has wrong comments
 Key: YARN-1132
 URL: https://issues.apache.org/jira/browse/YARN-1132
 Project: Hadoop YARN
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.1.0-beta
Reporter: Akira AJISAKA
Priority: Minor


I found o.a.h.yarn.server.resourcemanager.scheduler.QueueMetrics.java has wrong 
comments

{code}
  @Metric("# of reserved memory in MB") MutableGaugeInt reservedMB;
  @Metric("# of active users") MutableGaugeInt activeApplications;
{code}

they should be fixed as follows:

{code}
  @Metric("Reserved memory in MB") MutableGaugeInt reservedMB;
  @Metric("# of active applications") MutableGaugeInt activeApplications;
{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1090) Job does not get into Pending State

2013-08-30 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13755227#comment-13755227
 ] 

Jian He commented on YARN-1090:
---

bq. My understanding was that an application is Pending if no AM has yet been 
allocated to it
Yes, that's correct. What I meant here is about the UI of the Application 
Queues usage, inside which 'Num Pending Applications' doesn't mean that no AM 
has been allocated to it, and I propose to rename it to 'Num non-schedulable 
applications'.

 Job does not get into Pending State
 ---

 Key: YARN-1090
 URL: https://issues.apache.org/jira/browse/YARN-1090
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: yeshavora
Assignee: Jian He
 Attachments: YARN-1090.patch


 When there is no resource available to run a job, next job should go in 
 pending state. RM UI should show next job as pending app and the counter for 
 the pending app should be incremented.
 But currently, the next job stays in ACCEPTED state, no AM has been assigned 
 to this job, and the Pending App count is not incremented. 
 Running 'job status nextjob' shows job state=PREP. 
 $ mapred job -status job_1377122233385_0002
 13/08/21 21:59:23 INFO client.RMProxy: Connecting to ResourceManager at 
 host1/ip1
 Job: job_1377122233385_0002
 Job File: /ABC/.staging/job_1377122233385_0002/job.xml
 Job Tracking URL : http://host1:port1/application_1377122233385_0002/
 Uber job : false
 Number of maps: 0
 Number of reduces: 0
 map() completion: 0.0
 reduce() completion: 0.0
 Job state: PREP
 retired: false
 reason for failure:

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1065) NM should provide AuxillaryService data to the container

2013-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13755228#comment-13755228
 ] 

Hadoop QA commented on YARN-1065:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12600867/YARN-1065.3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1811//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/1811//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-nodemanager.html
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1811//console

This message is automatically generated.

 NM should provide AuxillaryService data to the container
 

 Key: YARN-1065
 URL: https://issues.apache.org/jira/browse/YARN-1065
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Bikas Saha
Assignee: Xuan Gong
 Attachments: YARN-1065.1.patch, YARN-1065.2.patch, YARN-1065.3.patch


 Start container returns auxillary service data to the AM but does not provide 
 the same information to the task itself. It could add that information to the 
 container env with key=service_name and value=service_data. This allows the 
 container to start using the service without having to depend on the AM to 
 send the info to it indirectly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-888) clean up POM dependencies

2013-08-30 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13755236#comment-13755236
 ] 

Timothy St. Clair commented on YARN-888:


Our current list of JIRAs can be found here: 
https://fedoraproject.org/wiki/Changes/Hadoop#Upstream_patch_tracking

 clean up POM dependencies
 -

 Key: YARN-888
 URL: https://issues.apache.org/jira/browse/YARN-888
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Alejandro Abdelnur
Assignee: Roman Shaposhnik

 Intermediate 'pom' modules define dependencies inherited by leaf modules.
 This is causing issues in intellij IDE.
 We should normalize the leaf modules like in common, hdfs and tools, where all 
 dependencies are defined in each leaf module and the intermediate 'pom' 
 modules do not define any dependencies.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-696) Enable multiple states to be specified in Resource Manager apps REST call

2013-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13755031#comment-13755031
 ] 

Hadoop QA commented on YARN-696:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12600827/YARN-696.diff
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1807//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1807//console

This message is automatically generated.

 Enable multiple states to be specified in Resource Manager apps REST call
 

 Key: YARN-696
 URL: https://issues.apache.org/jira/browse/YARN-696
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 2.0.4-alpha
Reporter: Trevor Lorimer
Assignee: Trevor Lorimer
 Attachments: YARN-696.diff, YARN-696.diff


 Within the YARN Resource Manager REST API the GET call which returns all 
 Applications can be filtered by a single State query parameter 
 (http://<rm http address:port>/ws/v1/cluster/apps). 
 There are 8 possible states (New, Submitted, Accepted, Running, Finishing, 
 Finished, Failed, Killed). If no state parameter is specified, all states are 
 returned; however, if a sub-set of states is required then multiple REST calls 
 are required (max. of 7).
 The proposal is to be able to specify multiple states in a single REST call.
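For illustration, a sketch of what such a call could look like from a client, assuming the filter lands as a comma-separated query parameter named states; the host, port and parameter name are assumptions, not the committed API:

{code}
// Sketch only: ask the RM REST API for applications in several states at once.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class MultiStateAppsQuery {
  public static void main(String[] args) throws Exception {
    URL url = new URL(
        "http://rmhost:8088/ws/v1/cluster/apps?states=ACCEPTED,RUNNING");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestProperty("Accept", "application/json");
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream()))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line);   // JSON list filtered to the requested states
      }
    } finally {
      conn.disconnect();
    }
  }
}
{code}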

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-1133) AMRMClient should be able to blacklist nodes, and specify them in AllocateRequest

2013-08-30 Thread Zhijie Shen (JIRA)
Zhijie Shen created YARN-1133:
-

 Summary: AMRMClient should be able to blacklist nodes, and specify 
them in AllocateRequest
 Key: YARN-1133
 URL: https://issues.apache.org/jira/browse/YARN-1133
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Zhijie Shen
Assignee: Zhijie Shen


After YARN-750, the YARN scheduler is able to blacklist nodes when scheduling. 
AMRMClient should enable this feature for its clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1117) Improve help message for $ yarn applications and $yarn node

2013-08-30 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13755253#comment-13755253
 ] 

Vinod Kumar Vavilapalli commented on YARN-1117:
---

Looks good, +1. Checking this in.

 Improve help message for $ yarn applications and $yarn node
 ---

 Key: YARN-1117
 URL: https://issues.apache.org/jira/browse/YARN-1117
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client
Reporter: Tassapol Athiapinya
Assignee: Xuan Gong
 Attachments: YARN-1117.1.patch, YARN-1117.2.patch, YARN-1117.3.patch


 There is standardization of the help message in YARN-1080. It would be nice to 
 have similar changes for $ yarn applications and $ yarn node.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1117) Improve help message for $ yarn applications and $yarn node

2013-08-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13755265#comment-13755265
 ] 

Hudson commented on YARN-1117:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4355 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4355/])
YARN-1117. Improved help messages for yarn application and yarn node 
commands. Contributed by Xuan Gong. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1519117)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/NodeCLI.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestYarnClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ConverterUtils.java


 Improve help message for $ yarn applications and $yarn node
 ---

 Key: YARN-1117
 URL: https://issues.apache.org/jira/browse/YARN-1117
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client
Reporter: Tassapol Athiapinya
Assignee: Xuan Gong
 Fix For: 2.1.1-beta

 Attachments: YARN-1117.1.patch, YARN-1117.2.patch, YARN-1117.3.patch


 There is standardization of the help message in YARN-1080. It would be nice to 
 have similar changes for $ yarn applications and $ yarn node.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (YARN-1133) AMRMClient should be able to blacklist nodes, and specify them in AllocateRequest

2013-08-30 Thread Zhijie Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijie Shen resolved YARN-1133.
---

Resolution: Duplicate

 AMRMClient should be able to blacklist nodes, and specify them in 
 AllocateRequest
 -

 Key: YARN-1133
 URL: https://issues.apache.org/jira/browse/YARN-1133
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Zhijie Shen
Assignee: Zhijie Shen

 After YARN-750, the YARN scheduler is able to blacklist nodes when scheduling. 
 AMRMClient should enable this feature for its clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1116) Populate AMRMTokens back to AMRMTokenSecretManager after RM restarts

2013-08-30 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13755278#comment-13755278
 ] 

Jian He commented on YARN-1116:
---

Looks like simply populating the password inside AMRMToken back to 
AMRMTokenSecretManager after RM restarts is enough.
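For illustration, a hedged sketch of what that repopulation could look like during recovery; the TokenStore/SecretManagerFacade stand-ins and the addPersistedPassword method name are purely hypothetical here, not the actual patch:

{code}
// Sketch only: walk the AMRMTokens recovered from the state store and hand
// each one back to the secret manager so its password is known after restart.
// The interfaces and the addPersistedPassword name are illustrative stand-ins.
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.yarn.security.AMRMTokenIdentifier;

public class AmRmTokenRecovery {
  interface TokenStore {                 // stand-in for the RMStateStore data
    Iterable<Token<AMRMTokenIdentifier>> savedAmRmTokens();
  }

  interface SecretManagerFacade {        // stand-in for AMRMTokenSecretManager
    void addPersistedPassword(Token<AMRMTokenIdentifier> token);
  }

  static void recover(TokenStore store, SecretManagerFacade secretManager) {
    for (Token<AMRMTokenIdentifier> token : store.savedAmRmTokens()) {
      secretManager.addPersistedPassword(token);  // repopulate the password
    }
  }
}
{code}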

 Populate AMRMTokens back to AMRMTokenSecretManager after RM restarts
 

 Key: YARN-1116
 URL: https://issues.apache.org/jira/browse/YARN-1116
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Jian He
Assignee: Jian He

 The AMRMTokens are now only saved in RMStateStore and not populated back to 
 AMRMTokenSecretManager after RM restarts. This is more needed now since 
 AMRMToken also becomes used in non-secure env.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1065) NM should provide AuxillaryService data to the container

2013-08-30 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-1065:


Attachment: YARN-1065.4.patch

Fix the -1 on findbugs.
Modify the test case to test whether the environment variable is forwarded using 
sanitizeEnv.

 NM should provide AuxillaryService data to the container
 

 Key: YARN-1065
 URL: https://issues.apache.org/jira/browse/YARN-1065
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Bikas Saha
Assignee: Xuan Gong
 Attachments: YARN-1065.1.patch, YARN-1065.2.patch, YARN-1065.3.patch, 
 YARN-1065.4.patch


 Start container returns auxillary service data to the AM but does not provide 
 the same information to the task itself. It could add that information to the 
 container env with key=service_name and value=service_data. This allows the 
 container to start using the service without having to depend on the AM to 
 send the info to it indirectly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1065) NM should provide AuxillaryService data to the container

2013-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13755303#comment-13755303
 ] 

Hadoop QA commented on YARN-1065:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12600896/YARN-1065.4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1812//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1812//console

This message is automatically generated.

 NM should provide AuxillaryService data to the container
 

 Key: YARN-1065
 URL: https://issues.apache.org/jira/browse/YARN-1065
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Bikas Saha
Assignee: Xuan Gong
 Attachments: YARN-1065.1.patch, YARN-1065.2.patch, YARN-1065.3.patch, 
 YARN-1065.4.patch


 Start container returns auxillary service data to the AM but does not provide 
 the same information to the task itself. It could add that information to the 
 container env with key=service_name and value=service_data. This allows the 
 container to start using the service without having to depend on the AM to 
 send the info to it indirectly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-771) AMRMClient support for resource blacklisting

2013-08-30 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13755313#comment-13755313
 ] 

Junping Du commented on YARN-771:
-

Thanks [~bikassaha] for review!

 AMRMClient  support for resource blacklisting
 -

 Key: YARN-771
 URL: https://issues.apache.org/jira/browse/YARN-771
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha
Assignee: Junping Du
 Fix For: 2.1.1-beta

 Attachments: YARN-771-v1.0.patch, YARN-771-v2.patch, 
 YARN-771-v3.patch, YARN-771-v4.patch


 After YARN-750 AMRMClient should support blacklisting via the new YARN API's
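For context, a hedged usage sketch of what the client-side API could look like once this lands; the updateBlacklist signature follows the patch under review here and may still change before commit:

{code}
// Sketch only: an AM stops asking for containers on a flaky node and later
// removes it from the blacklist once the node recovers.
import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;

public class BlacklistExample {
  public static void main(String[] args) throws Exception {
    AMRMClient<ContainerRequest> amrmClient = AMRMClient.createAMRMClient();
    amrmClient.init(new Configuration());
    amrmClient.start();

    // Blacklist a node (additions), with no removals.
    amrmClient.updateBlacklist(Arrays.asList("badnode.example.com"), null);
    // ... allocate() heartbeats happen here ...
    // Later, take the node off the blacklist (removals).
    amrmClient.updateBlacklist(null, Arrays.asList("badnode.example.com"));

    amrmClient.stop();
  }
}
{code}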

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1116) Populate AMRMTokens back to AMRMTokenSecretManager after RM restarts

2013-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13755329#comment-13755329
 ] 

Hadoop QA commented on YARN-1116:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12600898/YARN-1116.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1813//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1813//console

This message is automatically generated.

 Populate AMRMTokens back to AMRMTokenSecretManager after RM restarts
 

 Key: YARN-1116
 URL: https://issues.apache.org/jira/browse/YARN-1116
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-1116.patch


 The AMRMTokens are now only saved in RMStateStore and not populated back to 
 AMRMTokenSecretManager after RM restarts. This is more needed now since 
 AMRMToken also becomes used in non-secure env.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1090) Job does not get into Pending State

2013-08-30 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13755368#comment-13755368
 ] 

Sandy Ryza commented on YARN-1090:
--

Ah, ok, makes total sense

 Job does not get into Pending State
 ---

 Key: YARN-1090
 URL: https://issues.apache.org/jira/browse/YARN-1090
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: yeshavora
Assignee: Jian He
 Attachments: YARN-1090.patch


 When there is no resource available to run a job, next job should go in 
 pending state. RM UI should show next job as pending app and the counter for 
 the pending app should be incremented.
 But currently, the next job stays in ACCEPTED state, no AM has been assigned 
 to this job, and the Pending App count is not incremented. 
 Running 'job status nextjob' shows job state=PREP. 
 $ mapred job -status job_1377122233385_0002
 13/08/21 21:59:23 INFO client.RMProxy: Connecting to ResourceManager at 
 host1/ip1
 Job: job_1377122233385_0002
 Job File: /ABC/.staging/job_1377122233385_0002/job.xml
 Job Tracking URL : http://host1:port1/application_1377122233385_0002/
 Uber job : false
 Number of maps: 0
 Number of reduces: 0
 map() completion: 0.0
 reduce() completion: 0.0
 Job state: PREP
 retired: false
 reason for failure:

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1065) NM should provide AuxillaryService data to the container

2013-08-30 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13755382#comment-13755382
 ] 

Xuan Gong commented on YARN-1065:
-

Thanks for the comments.
Expose the ContainerManagerImpl to containerLauncher and containerLaunch so that 
we can use ContainerManagerImpl.auxiliaryServices.getMetaData() to get all the 
service data.

 NM should provide AuxillaryService data to the container
 

 Key: YARN-1065
 URL: https://issues.apache.org/jira/browse/YARN-1065
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Bikas Saha
Assignee: Xuan Gong
 Attachments: YARN-1065.1.patch, YARN-1065.2.patch, YARN-1065.3.patch, 
 YARN-1065.4.patch, YARN-1065.5.patch


 Start container returns auxillary service data to the AM but does not provide 
 the same information to the task itself. It could add that information to the 
 container env with key=service_name and value=service_data. This allows the 
 container to start using the service without having to depend on the AM to 
 send the info to it indirectly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1065) NM should provide AuxillaryService data to the container

2013-08-30 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-1065:


Attachment: YARN-1065.5.patch

 NM should provide AuxillaryService data to the container
 

 Key: YARN-1065
 URL: https://issues.apache.org/jira/browse/YARN-1065
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Bikas Saha
Assignee: Xuan Gong
 Attachments: YARN-1065.1.patch, YARN-1065.2.patch, YARN-1065.3.patch, 
 YARN-1065.4.patch, YARN-1065.5.patch


 Start container returns auxillary service data to the AM but does not provide 
 the same information to the task itself. It could add that information to the 
 container env with key=service_name and value=service_data. This allows the 
 container to start using the service without having to depend on the AM to 
 send the info to it indirectly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1065) NM should provide AuxillaryService data to the container

2013-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13755412#comment-13755412
 ] 

Hadoop QA commented on YARN-1065:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12600917/YARN-1065.5.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1814//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1814//console

This message is automatically generated.

 NM should provide AuxillaryService data to the container
 

 Key: YARN-1065
 URL: https://issues.apache.org/jira/browse/YARN-1065
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Bikas Saha
Assignee: Xuan Gong
 Attachments: YARN-1065.1.patch, YARN-1065.2.patch, YARN-1065.3.patch, 
 YARN-1065.4.patch, YARN-1065.5.patch


 Start container returns auxillary service data to the AM but does not provide 
 the same information to the task itself. It could add that information to the 
 container env with key=service_name and value=service_data. This allows the 
 container to start using the service without having to depend on the AM to 
 send the info to it indirectly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira