[jira] [Updated] (YARN-727) ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter

2013-06-14 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-727:
---

Attachment: YARN-727.1.patch

Added applicationType as a parameter to ClientRMProtocol.getAllApplications().


 ClientRMProtocol.getAllApplications should accept ApplicationType as a 
 parameter
 

 Key: YARN-727
 URL: https://issues.apache.org/jira/browse/YARN-727
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Siddharth Seth
Assignee: Xuan Gong
 Attachments: YARN-727.1.patch


 Now that an ApplicationType is registered on ApplicationSubmission, 
 getAllApplications should be able to use this string to query for a specific 
 application type.
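
The filtering described above can be sketched as follows. This is an illustrative stand-in, not the actual ClientRMProtocol API: AppReport and getApplications are hypothetical names, and the real YARN record classes are elided.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of filtering application reports by type; not the
// actual ClientRMProtocol API, whose record classes are stand-ins here.
class AppTypeFilter {
    static class AppReport {
        final String id;
        final String applicationType;
        AppReport(String id, String applicationType) {
            this.id = id;
            this.applicationType = applicationType;
        }
    }

    // A null or empty type behaves like the existing unfiltered call.
    static List<AppReport> getApplications(List<AppReport> all, String type) {
        List<AppReport> out = new ArrayList<>();
        for (AppReport r : all) {
            if (type == null || type.isEmpty() || type.equals(r.applicationType)) {
                out.add(r);
            }
        }
        return out;
    }
}
```

Treating an absent type as "no filter" keeps the existing getAllApplications behavior intact for callers that do not pass a type.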

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-610) ClientToken should not be set in the environment

2013-06-14 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-610:
---

Attachment: YARN-610-20130614.patch

 ClientToken should not be set in the environment
 

 Key: YARN-610
 URL: https://issues.apache.org/jira/browse/YARN-610
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Siddharth Seth
Assignee: Omkar Vinit Joshi
 Attachments: YARN-610-20130614.patch


 Similar to YARN-579, this can be set via ContainerTokens



[jira] [Commented] (YARN-693) Sending NMToken to AM on allocate call

2013-06-14 Thread Omkar Vinit Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684041#comment-13684041
 ] 

Omkar Vinit Joshi commented on YARN-693:


bq. NMToken should be in org.apache.hadoop.yarn.api.records.

bq. Similarly NMTokenPBIMpl in org.apache.hadoop.yarn.api.records.impl.pb.
Fixed.


bq. AMRMClient signature can just have a ConcurrentMap instead of 
ConcurrentHashMap.

Fixed.

bq. I suppose that the test to verify that the previously given out tokens 
before roll-over will work after roll-over is in YARN-694, right?
Yes; more scenarios will be tested in YARN-694.

bq. TestAMRMClient: Also validate NMTokens = nodeCount ?
Fixed.

bq. Static factory for NMToken?
bq. And then use the above in NMTokenSecretManagerInRM.
Fixed. Added the static method NMTokenIdentifier.newNMToken.

bq. The MR changes can be moved to YARN-694 or its MR companion.
I have commented out the code; will uncomment it after YARN-694. :)

bq. The correct place for calling register and unregister with 
NMTokenSecretManagerInRM is RMAppAttemptImpl.
I think ApplicationMasterService is a better fit, as it is easier to follow; 
keeping it in ApplicationMasterService.
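

The static-factory point above can be sketched as follows. NMTokenSketch is a plain holder standing in for the real NMToken record, and newInstance is an illustrative name, not the actual YARN method.

```java
// Sketch of the static-factory pattern suggested in the review; this plain
// holder class is a stand-in for the real NMToken record.
class NMTokenSketch {
    final String nodeAddress;
    final byte[] tokenBytes;

    private NMTokenSketch(String nodeAddress, byte[] tokenBytes) {
        this.nodeAddress = nodeAddress;
        this.tokenBytes = tokenBytes;
    }

    // A single static factory keeps construction (and any validation) in one
    // place, so callers such as a secret manager never invoke `new` directly.
    static NMTokenSketch newInstance(String nodeAddress, byte[] tokenBytes) {
        if (nodeAddress == null) {
            throw new IllegalArgumentException("nodeAddress is required");
        }
        return new NMTokenSketch(nodeAddress, tokenBytes);
    }
}
```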



 Sending NMToken to AM on allocate call
 --

 Key: YARN-693
 URL: https://issues.apache.org/jira/browse/YARN-693
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Attachments: YARN-693-20130610.patch, YARN-693-20130613.patch


 This is part of YARN-613.
 As per the updated design, the AM will receive an NMToken per NM in the 
 following scenarios:
 * AM is receiving its first container on the underlying NM.
 * AM is receiving a container on the underlying NM after either the NM or the 
 RM rebooted.
 ** After an RM reboot, as the RM doesn't remember (persist) the information 
 about keys issued per AM per NM, it will reissue tokens when the AM gets a new 
 container on the underlying NM. However, on the NM side the NM will still 
 retain the older token until it receives a new one, to support long-running 
 jobs (in a work-preserving environment).
 ** After an NM reboot, the RM will delete the token information corresponding 
 to that NM for all AMs.
 * AM is receiving a container on the underlying NM after the NMToken master 
 key is rolled over on the RM side.
 In all these cases, if the AM receives a new NMToken it is supposed to store 
 it for future NM communication until it receives a newer one.
 AMRMClient should expose these NMTokens to the client. 
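
The AM-side bookkeeping described above (keep only the newest token per NM) can be sketched with a concurrent map. The class and method names here are illustrative, not the AMRMClient API.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch: the AM stores the most recently received NMToken per NM address,
// replacing any older token, and looks it up before contacting that NM.
class NMTokenCacheSketch {
    private final ConcurrentMap<String, String> tokenByNode =
        new ConcurrentHashMap<>();

    // Called for every NMToken carried back on an allocate() response.
    void store(String nodeAddress, String token) {
        tokenByNode.put(nodeAddress, token); // newer token wins
    }

    String tokenFor(String nodeAddress) {
        return tokenByNode.get(nodeAddress);
    }
}
```

Replacing on every store is what makes the roll-over scenarios above work: the AM never needs to know why a token was reissued, only that the latest one is authoritative.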



[jira] [Commented] (YARN-727) ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter

2013-06-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684046#comment-13684046
 ] 

Hadoop QA commented on YARN-727:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12587944/YARN-727.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 8 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 3 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  org.apache.hadoop.mapreduce.v2.TestUberAM
  org.apache.hadoop.mapreduce.TestMRJobClient
  org.apache.hadoop.mapreduce.v2.TestMRJobs

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1256//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1256//console

This message is automatically generated.

 ClientRMProtocol.getAllApplications should accept ApplicationType as a 
 parameter
 

 Key: YARN-727
 URL: https://issues.apache.org/jira/browse/YARN-727
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Siddharth Seth
Assignee: Xuan Gong
 Attachments: YARN-727.1.patch





[jira] [Updated] (YARN-693) Sending NMToken to AM on allocate call

2013-06-14 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-693:
---

Attachment: YARN-693-20130614.1.patch

 Sending NMToken to AM on allocate call
 --

 Key: YARN-693
 URL: https://issues.apache.org/jira/browse/YARN-693
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Attachments: YARN-693-20130610.patch, YARN-693-20130613.patch, 
 YARN-693-20130614.1.patch





[jira] [Commented] (YARN-693) Sending NMToken to AM on allocate call

2013-06-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684053#comment-13684053
 ] 

Hadoop QA commented on YARN-693:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12587950/YARN-693-20130614.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  org.apache.hadoop.yarn.client.TestNMClient
  org.apache.hadoop.yarn.client.TestAMRMClient

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1257//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/1257//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-yarn-api.html
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1257//console

This message is automatically generated.

 Sending NMToken to AM on allocate call
 --

 Key: YARN-693
 URL: https://issues.apache.org/jira/browse/YARN-693
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Attachments: YARN-693-20130610.patch, YARN-693-20130613.patch, 
 YARN-693-20130614.1.patch





[jira] [Updated] (YARN-787) Remove resource min from Yarn client API

2013-06-14 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated YARN-787:


Attachment: YARN-787.patch

Updated patch which does not depend on MAPREDUCE-5311 being sorted out.

It removes MIN from the Yarn client API and wires the slot-millis calculation 
to the MIN from the configuration.

The assumption here is that the configuration of the client will be the same as 
the configuration of the cluster, which typically is the case.

As discussed in MAPREDUCE-5311, we should get rid of the slot-millis 
calculation after introducing memory-millis and vcores-millis. We can 
definitely do that for the release following 2.1.0-beta.
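

A minimal sketch of the wiring described above, assuming the client reads the scheduler minimum from configuration. A plain map stands in for Hadoop's Configuration object; the property name mirrors yarn.scheduler.minimum-allocation-mb, but the class and method names are illustrative, not the actual MR code.

```java
import java.util.Map;

// Sketch: compute slot-millis from the configured minimum allocation rather
// than a minimum fetched through the client API.
class SlotMillisSketch {
    static final String MIN_ALLOC_MB = "yarn.scheduler.minimum-allocation-mb";

    static long slotMillis(Map<String, String> conf, long containerMb,
                           long elapsedMillis) {
        long minMb = Long.parseLong(conf.getOrDefault(MIN_ALLOC_MB, "1024"));
        // A container counts as ceil(containerMb / minMb) "slots".
        long slots = (containerMb + minMb - 1) / minMb;
        return slots * elapsedMillis;
    }
}
```

For example, a 2048 MB container running for 1000 ms against a 1024 MB minimum would account for 2000 slot-millis, matching what the API-supplied minimum would have produced when client and cluster configuration agree.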



 Remove resource min from Yarn client API
 

 Key: YARN-787
 URL: https://issues.apache.org/jira/browse/YARN-787
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: YARN-787.patch, YARN-787.patch


 Per discussions in YARN-689 and YARN-769 we should remove minimum from the 
 API as this is a scheduler internal thing.



[jira] [Commented] (YARN-821) Rename FinishApplicationMasterRequest.setFinishApplicationStatus to setFinalApplicationStatus to be consistent with getter

2013-06-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684078#comment-13684078
 ] 

Hudson commented on YARN-821:
-

Integrated in Hadoop-trunk-Commit #3932 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3932/])
YARN-821. Renamed setFinishApplicationStatus to setFinalApplicationStatus 
in FinishApplicationMasterRequest for consistency. Contributed by Jian He. 
(Revision 1493315)

 Result = SUCCESS
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493315
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMCommunicator.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/FinishApplicationMasterRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/FinishApplicationMasterRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/AMRMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMApp.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockAM.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestApplicationTokens.java


 Rename FinishApplicationMasterRequest.setFinishApplicationStatus to 
 setFinalApplicationStatus to be consistent with getter
 --

 Key: YARN-821
 URL: https://issues.apache.org/jira/browse/YARN-821
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He
 Fix For: 2.1.0-beta

 Attachments: YARN-821.1.patch, YARN-821.patch






[jira] [Commented] (YARN-787) Remove resource min from Yarn client API

2013-06-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684081#comment-13684081
 ] 

Hadoop QA commented on YARN-787:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12587951/YARN-787.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1258//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1258//console

This message is automatically generated.

 Remove resource min from Yarn client API
 

 Key: YARN-787
 URL: https://issues.apache.org/jira/browse/YARN-787
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: YARN-787.patch, YARN-787.patch





[jira] [Assigned] (YARN-779) AMRMClient should clean up dangling unsatisfied request

2013-06-14 Thread Maysam Yabandeh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maysam Yabandeh reassigned YARN-779:


Assignee: Maysam Yabandeh

 AMRMClient should clean up dangling unsatisfied request
 ---

 Key: YARN-779
 URL: https://issues.apache.org/jira/browse/YARN-779
 Project: Hadoop YARN
  Issue Type: Bug
  Components: client
Affects Versions: 2.0.4-alpha
Reporter: Alejandro Abdelnur
Assignee: Maysam Yabandeh
Priority: Critical

 If a ContainerRequest for 10 containers on node1 or node2 is placed via 
 AMRMClient (assuming a single rack), the resulting ResourceRequests will 
 be:
 {code}
 location - containers
 -
 node1- 10
 node2- 10
 rack - 10
 ANY  - 10
 {code}
 Assuming 5 containers are allocated in node1 and 5 containers are allocated 
 in node2, the following ResourceRequests will be outstanding on the RM.
 {code}
 location - containers
 -
 node1- 5
 node2- 5
 {code}
 If the AMRMClient does a new ContainerRequest allocation, this time for 5 
 containers on node3, the resulting outstanding ResourceRequests on the RM 
 will be:
 {code}
 location - containers
 -
 node1- 5
 node2- 5
 node3- 5
 rack - 5
 ANY  - 5
 {code}
 At this point, the scheduler may assign 5 containers to node1, and it will 
 never assign the 5 containers node3 asked for.
 AMRMClient should keep track of the outstanding allocation count per 
 ContainerRequest and, when it reaches zero, update the RACK/ANY entries, 
 decrementing the dangling requests. 
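
The bookkeeping proposed above can be sketched as follows. The table models the node/rack/ANY ResourceRequest counts, and clearDangling is the cleanup step this issue asks for; all names are illustrative, not the actual AMRMClient internals.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: track outstanding containers per location, and when a request is
// fully satisfied, remove its leftover node-level entries so they cannot
// capture allocations intended for later requests.
class OutstandingRequestsSketch {
    final Map<String, Integer> table = new HashMap<>();

    // Register a ContainerRequest for n containers on the given nodes.
    void add(String[] nodes, String rack, int n) {
        for (String node : nodes) bump(node, n);
        bump(rack, n);
        bump("ANY", n);
    }

    // Called when one container of this request is allocated on `node`.
    void allocated(String node, String rack) {
        bump(node, -1);
        bump(rack, -1);
        bump("ANY", -1);
    }

    // The proposed cleanup: once a request's outstanding count hits zero,
    // drop its dangling node-level entries.
    void clearDangling(String[] nodes) {
        for (String node : nodes) table.remove(node);
    }

    private void bump(String loc, int delta) {
        int v = table.getOrDefault(loc, 0) + delta;
        if (v <= 0) table.remove(loc); else table.put(loc, v);
    }
}
```

Replaying the scenario from the description: after 10 containers for the node1/node2 request are allocated (5 on each), the rack and ANY entries drain to zero while node1 and node2 each retain a dangling count of 5, which is exactly what clearDangling removes.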


