[jira] [Commented] (YARN-117) Enhance YARN service model

2013-06-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13672899#comment-13672899
 ] 

Hadoop QA commented on YARN-117:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12585781/YARN-117-015.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 26 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 4 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy:

  org.apache.hadoop.yarn.client.TestNMClientAsync

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1076//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/1076//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-yarn-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/1076//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-yarn-common.html
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1076//console

This message is automatically generated.

 Enhance YARN service model
 --

 Key: YARN-117
 URL: https://issues.apache.org/jira/browse/YARN-117
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.0.4-alpha
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: YARN-117-007.patch, YARN-117-008.patch, 
 YARN-117-009.patch, YARN-117-010.patch, YARN-117-011.patch, 
 YARN-117-012.patch, YARN-117-013.patch, YARN-117-014.patch, 
 YARN-117-015.patch, YARN-117-2.patch, YARN-117-3.patch, YARN-117.4.patch, 
 YARN-117.5.patch, YARN-117.6.patch, YARN-117.patch


 Having played with the YARN service model, there are some issues
 that I've identified based on past work and initial use.
 This JIRA issue is an overall one to cover the issues, with solutions pushed 
 out to separate JIRAs.
 h2. State model prevents the stopped state from being entered if the service 
 could not be started successfully.
 In the current lifecycle you cannot stop a service unless it was successfully 
 started, but
 * {{init()}} may acquire resources that need to be explicitly released
 * if the {{start()}} operation fails partway through, the {{stop()}} 
 operation may be needed to release resources.
 *Fix:* make {{stop()}} a valid state transition from all states and require 
 the implementations to be able to stop safely without requiring all fields to 
 be non-null.
 Before anyone points out that the {{stop()}} operations assume that all 
 fields are valid and will NPE if called before {{start()}}: MAPREDUCE-3431 
 shows that this problem arises today, and MAPREDUCE-3502 is a fix for it. That 
 fix is independent of the rest of the issues in this doc, but it will aid in 
 making {{stop()}} execute from all states other than stopped.
 MAPREDUCE-3502 is too big a patch and needs to be broken down for easier 
 review and uptake; this can be done with issues linked to this one.
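 The null-tolerant {{stop()}} proposed above can be sketched as follows. This is an illustrative minimal service, not the actual Hadoop {{AbstractService}} code; the class, state names, and the {{resource}} field are hypothetical.

```java
// Minimal sketch of a service whose stop() is valid from every state,
// tolerating fields left null by a failed or never-invoked start().
// Illustrative only -- not the real org.apache.hadoop.service code.
public class SketchService {
    public enum State { NOTINITED, INITED, STARTED, STOPPED }

    private State state = State.NOTINITED;
    private AutoCloseable resource;   // hypothetical resource; may stay null

    public synchronized void init() {
        state = State.INITED;
    }

    public synchronized void start() {
        // If this failed partway through, resource might be null or not;
        // stop() below must cope with either case.
        resource = () -> { };
        state = State.STARTED;
    }

    /** Valid from every state; idempotent; safe on null fields. */
    public synchronized void stop() {
        if (state == State.STOPPED) {
            return;                   // ignore duplicate stop requests
        }
        if (resource != null) {       // release only what was acquired
            try {
                resource.close();
            } catch (Exception ignored) {
                // best-effort cleanup during shutdown
            }
        }
        state = State.STOPPED;
    }

    public synchronized State getState() {
        return state;
    }
}
```

 Note that {{stop()}} checks each field before touching it, so calling it straight after construction, after {{init()}}, or twice in a row is harmless.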
 h2. AbstractService doesn't prevent duplicate state change requests.
 The {{ensureState()}} checks that verify whether or not a state transition is 
 allowed from the current state are performed in the base {{AbstractService}} 
 class, yet subclasses tend to call this *after* their own {{init()}}, 
 {{start()}} and {{stop()}} operations. This means that these operations can be 
 performed out of order, and even if the outcome of the call is an exception, 
 all actions performed by the subclasses will 

[jira] [Commented] (YARN-749) Rename ResourceRequest (get,set)HostName to (get,set)ResourceName

2013-06-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13672997#comment-13672997
 ] 

Hudson commented on YARN-749:
-

Integrated in Hadoop-Yarn-trunk #229 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/229/])
YARN-749. Rename ResourceRequest.(get,set)HostName to 
ResourceRequest.(get,set)ResourceName. Contributed by Arun C. Murthy. (Revision 
1488806)

 Result = SUCCESS
acmurthy : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1488806
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerRequestor.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MRAppBenchmark.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRMContainerAllocator.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourceRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/BuilderUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockAM.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/TestFifoScheduler.java


 Rename ResourceRequest (get,set)HostName to (get,set)ResourceName
 -

 Key: YARN-749
 URL: https://issues.apache.org/jira/browse/YARN-749
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.0.4-alpha
Reporter: Arun C Murthy
Assignee: Arun C Murthy
  Labels: api-change
 Fix For: 2.1.0-beta

 Attachments: YARN-749.patch, YARN-749.patch


 We should rename ResourceRequest (get,set)HostName to (get,set)ResourceName 
 since the name can be host, rack or *.
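 The three forms a resource name can take map naturally onto a small classifier. The following is an illustrative sketch only, not the actual {{ResourceRequest}} API; the leading-"/" rack convention shown is the common Hadoop topology convention and is an assumption here.

```java
// Illustrative classifier for the three forms a ResourceRequest resource
// name can take: a specific host, a rack, or "*" (any location).
// Sketch only -- not the actual YARN API.
public class ResourceNameKind {
    public enum Kind { ANY, RACK, HOST }

    public static Kind classify(String resourceName) {
        if ("*".equals(resourceName)) {
            return Kind.ANY;          // matches any node in the cluster
        }
        if (resourceName.startsWith("/")) {
            return Kind.RACK;         // rack names are path-like, e.g. /rack1
        }
        return Kind.HOST;             // otherwise a plain hostname
    }
}
```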

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-730) NMClientAsync needs to remove completed container

2013-06-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13672994#comment-13672994
 ] 

Hudson commented on YARN-730:
-

Integrated in Hadoop-Yarn-trunk #229 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/229/])
YARN-730. Fix NMClientAsync to remove completed containers. Contributed by 
Zhijie Shen. (Revision 1488840)

 Result = SUCCESS
acmurthy : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1488840
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/NMClientAsync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/TestNMClientAsync.java


 NMClientAsync needs to remove completed container
 -

 Key: YARN-730
 URL: https://issues.apache.org/jira/browse/YARN-730
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Zhijie Shen
Assignee: Zhijie Shen
 Attachments: YARN-730.1.patch


 NMClientAsync needs to remove completed container

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-720) container-log4j.properties should not refer to mapreduce properties

2013-06-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13672995#comment-13672995
 ] 

Hudson commented on YARN-720:
-

Integrated in Hadoop-Yarn-trunk #229 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/229/])
YARN-720 and MAPREDUCE-5291. container-log4j.properties should not refer to 
mapreduce properties. Update MRApp to use YARN properties for log setup. 
Contributed by Zhijie Shen. (Revision 1488829)

 Result = SUCCESS
sseth : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1488829
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestMapReduceChildJVM.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRApps.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TaskLog.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestTaskLog.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestStreamingTaskLog.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/resources/container-log4j.properties


 container-log4j.properties should not refer to mapreduce properties
 ---

 Key: YARN-720
 URL: https://issues.apache.org/jira/browse/YARN-720
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Siddharth Seth
Assignee: Zhijie Shen
 Fix For: 2.1.0-beta

 Attachments: YARN-720.1.patch


 This refers to yarn.app.mapreduce.container.log.dir and 
 yarn.app.mapreduce.container.log.filesize. These should either be moved into 
 the MR codebase, or the parameters should be renamed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-749) Rename ResourceRequest (get,set)HostName to (get,set)ResourceName

2013-06-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673087#comment-13673087
 ] 

Hudson commented on YARN-749:
-

Integrated in Hadoop-Hdfs-trunk #1419 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1419/])
YARN-749. Rename ResourceRequest.(get,set)HostName to 
ResourceRequest.(get,set)ResourceName. Contributed by Arun C. Murthy. (Revision 
1488806)

 Result = FAILURE
acmurthy : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1488806
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerRequestor.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MRAppBenchmark.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRMContainerAllocator.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourceRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/BuilderUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockAM.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/TestFifoScheduler.java


 Rename ResourceRequest (get,set)HostName to (get,set)ResourceName
 -

 Key: YARN-749
 URL: https://issues.apache.org/jira/browse/YARN-749
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.0.4-alpha
Reporter: Arun C Murthy
Assignee: Arun C Murthy
  Labels: api-change
 Fix For: 2.1.0-beta

 Attachments: YARN-749.patch, YARN-749.patch


 We should rename ResourceRequest (get,set)HostName to (get,set)ResourceName 
 since the name can be host, rack or *.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-730) NMClientAsync needs to remove completed container

2013-06-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673084#comment-13673084
 ] 

Hudson commented on YARN-730:
-

Integrated in Hadoop-Hdfs-trunk #1419 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1419/])
YARN-730. Fix NMClientAsync to remove completed containers. Contributed by 
Zhijie Shen. (Revision 1488840)

 Result = FAILURE
acmurthy : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1488840
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/NMClientAsync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/TestNMClientAsync.java


 NMClientAsync needs to remove completed container
 -

 Key: YARN-730
 URL: https://issues.apache.org/jira/browse/YARN-730
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Zhijie Shen
Assignee: Zhijie Shen
 Attachments: YARN-730.1.patch


 NMClientAsync needs to remove completed container

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-720) container-log4j.properties should not refer to mapreduce properties

2013-06-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673085#comment-13673085
 ] 

Hudson commented on YARN-720:
-

Integrated in Hadoop-Hdfs-trunk #1419 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1419/])
YARN-720 and MAPREDUCE-5291. container-log4j.properties should not refer to 
mapreduce properties. Update MRApp to use YARN properties for log setup. 
Contributed by Zhijie Shen. (Revision 1488829)

 Result = FAILURE
sseth : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1488829
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestMapReduceChildJVM.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRApps.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TaskLog.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestTaskLog.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestStreamingTaskLog.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/resources/container-log4j.properties


 container-log4j.properties should not refer to mapreduce properties
 ---

 Key: YARN-720
 URL: https://issues.apache.org/jira/browse/YARN-720
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Siddharth Seth
Assignee: Zhijie Shen
 Fix For: 2.1.0-beta

 Attachments: YARN-720.1.patch


 This refers to yarn.app.mapreduce.container.log.dir and 
 yarn.app.mapreduce.container.log.filesize. These should either be moved into 
 the MR codebase, or the parameters should be renamed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-730) NMClientAsync needs to remove completed container

2013-06-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673133#comment-13673133
 ] 

Hudson commented on YARN-730:
-

Integrated in Hadoop-Mapreduce-trunk #1445 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1445/])
YARN-730. Fix NMClientAsync to remove completed containers. Contributed by 
Zhijie Shen. (Revision 1488840)

 Result = SUCCESS
acmurthy : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1488840
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/NMClientAsync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/TestNMClientAsync.java


 NMClientAsync needs to remove completed container
 -

 Key: YARN-730
 URL: https://issues.apache.org/jira/browse/YARN-730
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Zhijie Shen
Assignee: Zhijie Shen
 Attachments: YARN-730.1.patch


 NMClientAsync needs to remove completed container

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-720) container-log4j.properties should not refer to mapreduce properties

2013-06-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673134#comment-13673134
 ] 

Hudson commented on YARN-720:
-

Integrated in Hadoop-Mapreduce-trunk #1445 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1445/])
YARN-720 and MAPREDUCE-5291. container-log4j.properties should not refer to 
mapreduce properties. Update MRApp to use YARN properties for log setup. 
Contributed by Zhijie Shen. (Revision 1488829)

 Result = SUCCESS
sseth : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1488829
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestMapReduceChildJVM.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRApps.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TaskLog.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestTaskLog.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestStreamingTaskLog.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/resources/container-log4j.properties


 container-log4j.properties should not refer to mapreduce properties
 ---

 Key: YARN-720
 URL: https://issues.apache.org/jira/browse/YARN-720
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Siddharth Seth
Assignee: Zhijie Shen
 Fix For: 2.1.0-beta

 Attachments: YARN-720.1.patch


 This refers to yarn.app.mapreduce.container.log.dir and 
 yarn.app.mapreduce.container.log.filesize. These should either be moved into 
 the MR codebase, or the parameters should be renamed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-750) Allow for black-listing resources in CS

2013-06-03 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated YARN-750:
---

Attachment: YARN-750.patch

Straightforward patch (needs YARN-398 to go in first).

Per the discussion in YARN-392/YARN-398 led by [~bikassaha], I've added a 
BlacklistRequest to the AMRMProtocol#allocate call. The BlacklistRequest has two 
lists: additions to and removals from the blacklist.

I've also enhanced the javadocs to document both the whitelist 
(YARN-392/YARN-398) and blacklist features.
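The additions/removals protocol described above can be modelled with a simple set that each allocate call mutates. The sketch below is hypothetical (the class and method names are not the actual AMRMProtocol types) and assumes additions are applied before removals within one call.

```java
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;

// Sketch of how a scheduler might fold per-allocate blacklist deltas
// (additions and removals) into its current blacklist. Hypothetical
// names; the real request type lives in the YARN AMRMProtocol.
public class BlacklistTracker {
    private final Set<String> blacklist = new HashSet<>();

    /** Apply one allocate call's deltas: additions first, then removals. */
    public void update(Collection<String> additions, Collection<String> removals) {
        if (additions != null) {
            blacklist.addAll(additions);
        }
        if (removals != null) {
            blacklist.removeAll(removals);
        }
    }

    public boolean isBlacklisted(String resourceName) {
        return blacklist.contains(resourceName);
    }
}
```

The delta form keeps each allocate call small: the AM only ships what changed since the last heartbeat, not the full blacklist.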

 Allow for black-listing resources in CS
 ---

 Key: YARN-750
 URL: https://issues.apache.org/jira/browse/YARN-750
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun C Murthy
Assignee: Arun C Murthy
 Attachments: YARN-750.patch


 YARN-392 and YARN-398 enhance scheduler api to allow for white-lists of 
 resources.
 This jira is a companion to allow for black-listing (in CS).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-749) Rename ResourceRequest (get,set)HostName to (get,set)ResourceName

2013-06-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673136#comment-13673136
 ] 

Hudson commented on YARN-749:
-

Integrated in Hadoop-Mapreduce-trunk #1445 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1445/])
YARN-749. Rename ResourceRequest.(get,set)HostName to 
ResourceRequest.(get,set)ResourceName. Contributed by Arun C. Murthy. (Revision 
1488806)

 Result = SUCCESS
acmurthy : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1488806
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerRequestor.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MRAppBenchmark.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRMContainerAllocator.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourceRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/BuilderUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockAM.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/TestFifoScheduler.java


 Rename ResourceRequest (get,set)HostName to (get,set)ResourceName
 -

 Key: YARN-749
 URL: https://issues.apache.org/jira/browse/YARN-749
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.0.4-alpha
Reporter: Arun C Murthy
Assignee: Arun C Murthy
  Labels: api-change
 Fix For: 2.1.0-beta

 Attachments: YARN-749.patch, YARN-749.patch


 We should rename ResourceRequest (get,set)HostName to (get,set)ResourceName 
 since the name can be host, rack or *.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-326) Add multi-resource scheduling to the fair scheduler

2013-06-03 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673279#comment-13673279
 ] 

Alejandro Abdelnur commented on YARN-326:
-

[~sandyr], mind rebasing again? It seems the YARN-749 commit put this patch a 
bit out of sync.

 Add multi-resource scheduling to the fair scheduler
 ---

 Key: YARN-326
 URL: https://issues.apache.org/jira/browse/YARN-326
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: scheduler
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
  Labels: incompatible
 Attachments: FairSchedulerDRFDesignDoc-1.pdf, 
 FairSchedulerDRFDesignDoc.pdf, YARN-326-1.patch, YARN-326-1.patch, 
 YARN-326-2.patch, YARN-326-3.patch, YARN-326-4.patch, YARN-326-5.patch, 
 YARN-326-6.patch, YARN-326-7.patch, YARN-326.patch, YARN-326.patch


 With YARN-2 in, the capacity scheduler has the ability to schedule based on 
 multiple resources, using dominant resource fairness.  The fair scheduler 
 should be able to do multiple resource scheduling as well, also using 
 dominant resource fairness.
 More details to come on how the corner cases with fair scheduler configs such 
 as min and max resources will be handled.
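 Dominant resource fairness orders applications by their dominant share: the maximum, across resource types, of the fraction of the cluster total that the application uses. A minimal sketch of that computation follows; the cluster totals and usage figures in the usage note are made up for illustration.

```java
// Minimal dominant-resource-fairness comparison: an app's dominant share
// is its largest usage fraction across resource types, and the app with
// the smaller dominant share goes next. Illustrative sketch only.
public class Drf {
    /** max over resource types i of usage[i] / total[i]. */
    public static double dominantShare(double[] usage, double[] total) {
        double max = 0.0;
        for (int i = 0; i < usage.length; i++) {
            max = Math.max(max, usage[i] / total[i]);
        }
        return max;
    }

    /** Negative if app A should be scheduled before app B under DRF. */
    public static int compare(double[] usageA, double[] usageB, double[] total) {
        return Double.compare(dominantShare(usageA, total),
                              dominantShare(usageB, total));
    }
}
```

 For example, on a cluster of 100 CPUs and 1000 GB, an app using 20 CPUs and 100 GB is CPU-dominated with share 0.2, while one using 5 CPUs and 300 GB is memory-dominated with share 0.3, so the first app is next in line.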

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-326) Add multi-resource scheduling to the fair scheduler

2013-06-03 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated YARN-326:


Attachment: YARN-326-8.patch

 Add multi-resource scheduling to the fair scheduler
 ---

 Key: YARN-326
 URL: https://issues.apache.org/jira/browse/YARN-326
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: scheduler
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
  Labels: incompatible
 Attachments: FairSchedulerDRFDesignDoc-1.pdf, 
 FairSchedulerDRFDesignDoc.pdf, YARN-326-1.patch, YARN-326-1.patch, 
 YARN-326-2.patch, YARN-326-3.patch, YARN-326-4.patch, YARN-326-5.patch, 
 YARN-326-6.patch, YARN-326-7.patch, YARN-326-8.patch, YARN-326.patch, 
 YARN-326.patch


 With YARN-2 in, the capacity scheduler has the ability to schedule based on 
 multiple resources, using dominant resource fairness.  The fair scheduler 
 should be able to do multiple resource scheduling as well, also using 
 dominant resource fairness.
 More details to come on how the corner cases with fair scheduler configs such 
 as min and max resources will be handled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-326) Add multi-resource scheduling to the fair scheduler

2013-06-03 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673291#comment-13673291
 ] 

Sandy Ryza commented on YARN-326:
-

Posted a rebased patch

 Add multi-resource scheduling to the fair scheduler
 ---

 Key: YARN-326
 URL: https://issues.apache.org/jira/browse/YARN-326
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: scheduler
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
  Labels: incompatible
 Attachments: FairSchedulerDRFDesignDoc-1.pdf, 
 FairSchedulerDRFDesignDoc.pdf, YARN-326-1.patch, YARN-326-1.patch, 
 YARN-326-2.patch, YARN-326-3.patch, YARN-326-4.patch, YARN-326-5.patch, 
 YARN-326-6.patch, YARN-326-7.patch, YARN-326-8.patch, YARN-326.patch, 
 YARN-326.patch


 With YARN-2 in, the capacity scheduler has the ability to schedule based on 
 multiple resources, using dominant resource fairness.  The fair scheduler 
 should be able to do multiple resource scheduling as well, also using 
 dominant resource fairness.
 More details to come on how the corner cases with fair scheduler configs such 
 as min and max resources will be handled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-326) Add multi-resource scheduling to the fair scheduler

2013-06-03 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673323#comment-13673323
 ] 

Alejandro Abdelnur commented on YARN-326:
-

+1 pending test-patch.

 Add multi-resource scheduling to the fair scheduler
 ---

 Key: YARN-326
 URL: https://issues.apache.org/jira/browse/YARN-326
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: scheduler
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
  Labels: incompatible
 Attachments: FairSchedulerDRFDesignDoc-1.pdf, 
 FairSchedulerDRFDesignDoc.pdf, YARN-326-1.patch, YARN-326-1.patch, 
 YARN-326-2.patch, YARN-326-3.patch, YARN-326-4.patch, YARN-326-5.patch, 
 YARN-326-6.patch, YARN-326-7.patch, YARN-326-8.patch, YARN-326.patch, 
 YARN-326.patch


 With YARN-2 in, the capacity scheduler has the ability to schedule based on 
 multiple resources, using dominant resource fairness.  The fair scheduler 
 should be able to do multiple resource scheduling as well, also using 
 dominant resource fairness.
 More details to come on how the corner cases with fair scheduler configs such 
 as min and max resources will be handled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-326) Add multi-resource scheduling to the fair scheduler

2013-06-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673358#comment-13673358
 ] 

Hudson commented on YARN-326:
-

Integrated in Hadoop-trunk-Commit #3842 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3842/])
YARN-326. Add multi-resource scheduling to the fair scheduler. (sandyr via 
tucu) (Revision 1489070)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1489070
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/resource/ResourceType.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/resource/ResourceWeights.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfigurationException.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AppSchedulable.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSQueue.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSSchedulerNode.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerConfiguration.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/QueueManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/Schedulable.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/SchedulingPolicy.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/DominantResourceFairnessPolicy.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/FairSharePolicy.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FakeSchedulable.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairSchedulerConfiguration.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestSchedulingPolicy.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/TestDominantResourceFairnessPolicy.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/FairScheduler.apt.vm


 Add multi-resource scheduling to the fair scheduler
 ---

 Key: YARN-326
 URL: https://issues.apache.org/jira/browse/YARN-326
 Project: Hadoop YARN
  Issue Type: New Feature
  

[jira] [Updated] (YARN-117) Enhance YARN service model

2013-06-03 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated YARN-117:


Attachment: YARN-117-016.patch

This appears to be Mockito changing things; marking the fields as final stopped 
that. Made two fields non-final and tightened service stop to handle them being 
null.

 Enhance YARN service model
 --

 Key: YARN-117
 URL: https://issues.apache.org/jira/browse/YARN-117
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.0.4-alpha
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: YARN-117-007.patch, YARN-117-008.patch, 
 YARN-117-009.patch, YARN-117-010.patch, YARN-117-011.patch, 
 YARN-117-012.patch, YARN-117-013.patch, YARN-117-014.patch, 
 YARN-117-015.patch, YARN-117-016.patch, YARN-117-2.patch, YARN-117-3.patch, 
 YARN-117.4.patch, YARN-117.5.patch, YARN-117.6.patch, YARN-117.patch


 Having played with the YARN service model, there are some issues
 that I've identified based on past work and initial use.
 This JIRA issue is an overall one to cover the issues, with solutions pushed 
 out to separate JIRAs.
 h2. state model prevents stopped state being entered if you could not 
 successfully start the service.
 In the current lifecycle you cannot stop a service unless it was successfully 
 started, but
 * {{init()}} may acquire resources that need to be explicitly released
 * if the {{start()}} operation fails partway through, the {{stop()}} 
 operation may be needed to release resources.
 *Fix:* make {{stop()}} a valid state transition from all states and require 
 the implementations to be able to stop safely without requiring all fields to 
 be non null.
 Before anyone points out that the {{stop()}} operations assume that all 
 fields are valid; and if called before a {{start()}} they will NPE; 
 MAPREDUCE-3431 shows that this problem arises today, MAPREDUCE-3502 is a fix 
 for this. It is independent of the rest of the issues in this doc but it will 
 aid making {{stop()}} execute from all states other than stopped.
 MAPREDUCE-3502 is too big a patch and needs to be broken down for easier 
 review and take up; this can be done with issues linked to this one.
 h2. AbstractService doesn't prevent duplicate state change requests.
 The {{ensureState()}} checks to verify whether or not a state transition is 
 allowed from the current state are performed in the base {{AbstractService}} 
 class -yet subclasses tend to call this *after* their own {{init()}}, 
 {{start()}} & {{stop()}} operations. This means that these operations can be 
 performed out of order, and even if the outcome of the call is an exception, 
 all actions performed by the subclasses will have taken place. MAPREDUCE-3877 
 demonstrates this.
 This is a tricky one to address. In HADOOP-3128 I used a base class instead 
 of an interface and made the {{init()}}, {{start()}} & {{stop()}} methods 
 {{final}}. These methods would do the checks, and then invoke protected inner 
 methods, {{innerStart()}}, {{innerStop()}}, etc. It should be possible to 
 retrofit the same behaviour to everything that extends {{AbstractService}} 
 -something that must be done before the class is considered stable (because 
 once the lifecycle methods are declared final, all subclasses that are out of 
 the source tree will need fixing by the respective developers).
 h2. AbstractService state change doesn't defend against race conditions.
 There's no concurrency locks on the state transitions. Whatever fix for wrong 
 state calls is added should correct this to prevent re-entrancy, such as 
 {{stop()}} being called from two threads.
 h2. Static methods to choreograph lifecycle operations
 Helper methods to move things through lifecycles. init-start is common, 
 stop-if-service!=null is another. Some static methods can execute these, and 
 even call {{stop()}} if {{init()}} raises an exception. These could go into a 
 class {{ServiceOps}} in the same package. These can be used by those services 
 that wrap other services, and help manage more robust shutdowns.
 h2. state transition failures are something that registered service listeners 
 may wish to be informed of.
 When a state transition fails a {{RuntimeException}} can be thrown -and the 
 service listeners are not informed as the notification point isn't reached. 
 They may wish to know this, especially for management and diagnostics.
 *Fix:* extend {{ServiceStateChangeListener}} with a callback such as 
 {{stateChangeFailed(Service service,Service.State targeted-state, 
 RuntimeException e)}} that is invoked from the (final) state change methods 
 in the {{AbstractService}} class (once they delegate to their inner 
 {{innerStart()}}, {{innerStop()}} methods); make it a no-op in the existing 
 implementations of the interface.
 h2. Service 
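Taken together, the fixes above amount to a template-method lifecycle: final entry points in the base class do the state checks once, delegate to protected inner methods, and accept stop() from every state. A minimal standalone sketch follows; the class and method names are assumptions for illustration, not the committed YARN API:

```java
// Illustrative sketch only: final lifecycle methods own the state checks,
// work is delegated to protected inner methods, and stop() is legal (and
// re-entrant) from every state. Names are hypothetical, not the YARN API.
public class LifecycleSketch {
    enum State { NOTINITED, INITED, STARTED, STOPPED }

    static class SketchService {
        private State state = State.NOTINITED;

        public final synchronized void init() {
            if (state != State.NOTINITED) {
                throw new IllegalStateException("cannot init from " + state);
            }
            state = State.INITED;
            innerInit();
        }

        public final synchronized void start() {
            if (state != State.INITED) {
                throw new IllegalStateException("cannot start from " + state);
            }
            state = State.STARTED;
            innerStart();
        }

        // Valid from every state; a second call is a no-op.
        public final synchronized void stop() {
            if (state == State.STOPPED) {
                return;
            }
            state = State.STOPPED;
            innerStop();
        }

        // Subclasses override these; innerStop() must tolerate fields that
        // were never set because start() failed or never ran.
        protected void innerInit() {}
        protected void innerStart() {}
        protected void innerStop() {}

        public synchronized State getState() { return state; }
    }

    public static void main(String[] args) {
        SketchService s = new SketchService();
        s.stop(); // legal even before init(): no NPE, no invalid transition
        System.out.println(s.getState()); // STOPPED
    }
}
```

The synchronized entry points also address the re-entrancy concern above: two threads calling stop() serialize on the monitor, and the second becomes a no-op.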

[jira] [Commented] (YARN-398) Enhance CS to allow for white-list of resources

2013-06-03 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673384#comment-13673384
 ] 

Bikas Saha commented on YARN-398:
-

+1. Looks good overall.

 Enhance CS to allow for white-list of resources
 ---

 Key: YARN-398
 URL: https://issues.apache.org/jira/browse/YARN-398
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun C Murthy
Assignee: Arun C Murthy
 Attachments: YARN-398.patch, YARN-398.patch


 Allow white-list and black-list of resources in scheduler api.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-702) minicluster classpath construction requires user to set yarn.is.minicluster in the job conf

2013-06-03 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673385#comment-13673385
 ] 

Jonathan Hsieh commented on YARN-702:
-

bq. What may be possible is for the MiniMRCluster to copy these configs into 
the user specified config. That would be very similar to a blanket copy.

This sounds great. This would be a method of some sort that we pass a 
Configuration into and that mutates it? From an encapsulation point of view, I 
don't like the idea of the MR client layer needing to know which settings to 
extract from the minimrcluster's config.

 minicluster classpath construction requires user to set yarn.is.minicluster 
 in the job conf
 ---

 Key: YARN-702
 URL: https://issues.apache.org/jira/browse/YARN-702
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza

 YARN-129 improved classpath construction for miniclusters by, when 
 yarn.is.minicluster is set, adding the current JVM's classpath to the 
 ContainerLaunchContext for the MR AM and tasks.  An issue with this is that 
 it requires the user to set yarn.is.minicluster on the mapreduce side in the 
 job conf, if they are not copying the RM conf into the job conf.
 I think it would be better to bypass the ContainerLaunchContext and instead 
 have the nodemanager check the property, and if it is true, do the classpath 
 additions there.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-117) Enhance YARN service model

2013-06-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673421#comment-13673421
 ] 

Hadoop QA commented on YARN-117:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12585899/YARN-117-016.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 26 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 4 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy:

  org.apache.hadoop.yarn.client.TestNMClientAsync

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1079//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/1079//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-yarn-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/1079//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-yarn-common.html
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1079//console

This message is automatically generated.

 Enhance YARN service model
 --

 Key: YARN-117
 URL: https://issues.apache.org/jira/browse/YARN-117
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.0.4-alpha
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: YARN-117-007.patch, YARN-117-008.patch, 
 YARN-117-009.patch, YARN-117-010.patch, YARN-117-011.patch, 
 YARN-117-012.patch, YARN-117-013.patch, YARN-117-014.patch, 
 YARN-117-015.patch, YARN-117-016.patch, YARN-117-2.patch, YARN-117-3.patch, 
 YARN-117.4.patch, YARN-117.5.patch, YARN-117.6.patch, YARN-117.patch


 Having played with the YARN service model, there are some issues
 that I've identified based on past work and initial use.
 This JIRA issue is an overall one to cover the issues, with solutions pushed 
 out to separate JIRAs.
 h2. state model prevents stopped state being entered if you could not 
 successfully start the service.
 In the current lifecycle you cannot stop a service unless it was successfully 
 started, but
 * {{init()}} may acquire resources that need to be explicitly released
 * if the {{start()}} operation fails partway through, the {{stop()}} 
 operation may be needed to release resources.
 *Fix:* make {{stop()}} a valid state transition from all states and require 
 the implementations to be able to stop safely without requiring all fields to 
 be non null.
 Before anyone points out that the {{stop()}} operations assume that all 
 fields are valid; and if called before a {{start()}} they will NPE; 
 MAPREDUCE-3431 shows that this problem arises today, MAPREDUCE-3502 is a fix 
 for this. It is independent of the rest of the issues in this doc but it will 
 aid making {{stop()}} execute from all states other than stopped.
 MAPREDUCE-3502 is too big a patch and needs to be broken down for easier 
 review and take up; this can be done with issues linked to this one.
 h2. AbstractService doesn't prevent duplicate state change requests.
 The {{ensureState()}} checks to verify whether or not a state transition is 
 allowed from the current state are performed in the base {{AbstractService}} 
 class -yet subclasses tend to call this *after* their own {{init()}}, 
 {{start()}} & {{stop()}} operations. This means that these operations can be 
 performed out of order, and even if the outcome of the call is an exception, 
 all actions performed by 

[jira] [Assigned] (YARN-753) Add individual factory method for user-facing api protocol records

2013-06-03 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He reassigned YARN-753:


Assignee: Jian He

 Add individual factory method for user-facing api protocol records
 --

 Key: YARN-753
 URL: https://issues.apache.org/jira/browse/YARN-753
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jian He
Assignee: Jian He



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-753) Add individual factory method for user-facing api protocol records

2013-06-03 Thread Jian He (JIRA)
Jian He created YARN-753:


 Summary: Add individual factory method for user-facing api 
protocol records
 Key: YARN-753
 URL: https://issues.apache.org/jira/browse/YARN-753
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jian He




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-276) Capacity Scheduler can hang when submit many jobs concurrently

2013-06-03 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673458#comment-13673458
 ] 

Thomas Graves commented on YARN-276:


Nemon, sorry, it appears this got lost in the shuffle and no longer applies. 
Could you update the patch for the current trunk/branch-2?

 Capacity Scheduler can hang when submit many jobs concurrently
 --

 Key: YARN-276
 URL: https://issues.apache.org/jira/browse/YARN-276
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacityscheduler
Affects Versions: 3.0.0, 2.0.1-alpha
Reporter: nemon lou
Assignee: nemon lou
  Labels: incompatible
 Attachments: YARN-276.patch, YARN-276.patch, YARN-276.patch, 
 YARN-276.patch, YARN-276.patch, YARN-276.patch, YARN-276.patch, 
 YARN-276.patch, YARN-276.patch, YARN-276.patch, YARN-276.patch, 
 YARN-276.patch, YARN-276.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 In hadoop 2.0.1, when I submit many jobs concurrently, the Capacity
 Scheduler can hang with most resources taken up by AMs, leaving not enough
 resources for tasks. All applications then hang.
 The cause is that yarn.scheduler.capacity.maximum-am-resource-percent is not
 checked directly. Instead, this property is only used for
 maxActiveApplications, and maxActiveApplications is computed from
 minimumAllocation (not from the resources AMs actually use).
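A sketch of the direct check the reporter describes as missing: compare the resources actually used by AMs against maximum-am-resource-percent of cluster capacity before launching another AM. All names are hypothetical, and the real scheduler tracks Resource objects rather than bare memory integers:

```java
// Hedged sketch (hypothetical names): gate new AM launches on the
// resources AMs actually use, rather than on a maxActiveApplications
// count derived from the minimum allocation.
public class AmLimitSketch {
    static boolean canLaunchAm(int amMemUsed, int amMemNeeded,
                               int clusterMem, double maxAmPercent) {
        // Launch only if total AM memory stays within the configured cap.
        return amMemUsed + amMemNeeded <= clusterMem * maxAmPercent;
    }

    public static void main(String[] args) {
        // A 10% cap on a 10240 MB cluster leaves 1024 MB for AMs.
        System.out.println(canLaunchAm(512, 512, 10240, 0.1));  // true
        System.out.println(canLaunchAm(1024, 512, 10240, 0.1)); // false
    }
}
```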

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-245) Node Manager gives InvalidStateTransitonException for FINISH_APPLICATION at FINISHED

2013-06-03 Thread Mayank Bansal (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673461#comment-13673461
 ] 

Mayank Bansal commented on YARN-245:


Hi Devaraj,

Are you working on this?

I would like to take this up if that's ok with you.

Please let me know.

Could you also let me know the reproducible scenario for this error?

Thanks,
Mayank

 Node Manager gives InvalidStateTransitonException for FINISH_APPLICATION at 
 FINISHED
 

 Key: YARN-245
 URL: https://issues.apache.org/jira/browse/YARN-245
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.0.2-alpha, 2.0.1-alpha
Reporter: Devaraj K
Assignee: Devaraj K

 {code:xml}
 2012-11-25 12:56:11,795 WARN 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application:
  Can't handle this event at current state
 org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
 FINISH_APPLICATION at FINISHED
 at 
 org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:301)
 at 
 org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
 at 
 org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:398)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:58)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher.handle(ContainerManagerImpl.java:520)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher.handle(ContainerManagerImpl.java:512)
 at 
 org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:126)
 at 
 org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:75)
 at java.lang.Thread.run(Thread.java:662)
 2012-11-25 12:56:11,796 INFO 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application:
  Application application_1353818859056_0004 transitioned from FINISHED to null
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-748) Move BuilderUtils from yarn-common to yarn-server-common

2013-06-03 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-748:
-

Attachment: YARN-748.4.patch

Update against latest trunk

 Move BuilderUtils from yarn-common to yarn-server-common
 

 Key: YARN-748
 URL: https://issues.apache.org/jira/browse/YARN-748
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-748.1.patch, YARN-748.2.patch, YARN-748.3.patch, 
 YARN-748.4.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-276) Capacity Scheduler can hang when submit many jobs concurrently

2013-06-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673453#comment-13673453
 ] 

Hadoop QA commented on YARN-276:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/1258/YARN-276.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1081//console

This message is automatically generated.

 Capacity Scheduler can hang when submit many jobs concurrently
 --

 Key: YARN-276
 URL: https://issues.apache.org/jira/browse/YARN-276
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacityscheduler
Affects Versions: 3.0.0, 2.0.1-alpha
Reporter: nemon lou
Assignee: nemon lou
  Labels: incompatible
 Attachments: YARN-276.patch, YARN-276.patch, YARN-276.patch, 
 YARN-276.patch, YARN-276.patch, YARN-276.patch, YARN-276.patch, 
 YARN-276.patch, YARN-276.patch, YARN-276.patch, YARN-276.patch, 
 YARN-276.patch, YARN-276.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 In hadoop 2.0.1, when I submit many jobs concurrently, the Capacity
 Scheduler can hang with most resources taken up by AMs, leaving not enough
 resources for tasks. All applications then hang.
 The cause is that yarn.scheduler.capacity.maximum-am-resource-percent is not
 checked directly. Instead, this property is only used for
 maxActiveApplications, and maxActiveApplications is computed from
 minimumAllocation (not from the resources AMs actually use).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-750) Allow for black-listing resources in CS

2013-06-03 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673496#comment-13673496
 ] 

Bikas Saha commented on YARN-750:
-

Probably don't need the iterator:
{code}
+  private void addBlacklistAdditionsToProto() {
+maybeInitBuilder();
+builder.clearBlacklistAdditions();
+if (blacklistAdditions == null) {
+  return;
+}
+Iterable<String> iterable = new Iterable<String>() {
+  @Override
+  public Iterator<String> iterator() {
+return new Iterator<String>() {
+
+  Iterator<String> iter = blacklistAdditions.iterator();
+
+  @Override
+  public boolean hasNext() {
+return iter.hasNext();
+  }
+
+  @Override
+  public String next() {
+return iter.next();
+  }
+
+  @Override
+  public void remove() {
+throw new UnsupportedOperationException();
+
+  }
+};
+
+  }
+};
{code}
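To illustrate why the adapter is unnecessary with a standalone sketch (not the patch's protobuf code): a List<String> already implements Iterable<String>, so it can be handed directly to any Iterable-accepting API.

```java
import java.util.Arrays;
import java.util.List;

// Standalone sketch: List<String> is already an Iterable<String>, so no
// anonymous wrapper is needed to pass it to an Iterable-taking method.
public class IterableSketch {
    static int count(Iterable<String> names) {
        int n = 0;
        for (String ignored : names) {
            n++;
        }
        return n;
    }

    public static void main(String[] args) {
        List<String> blacklistAdditions = Arrays.asList("host1", "host2");
        // Pass the list itself; no adapter required.
        System.out.println(count(blacklistAdditions)); // 2
    }
}
```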

The null should probably clear the internal list also.
{code}
+  @Override
+  public void setBlacklistAdditions(List<String> resourceNames) {
+if (blacklistAdditions == null) {
+  return;
+}
+initBlacklistAdditions();
+this.blacklistAdditions.clear();
+if (resourceNames != null) {
+  this.blacklistAdditions.addAll(resourceNames);
+}
+  }
{code}

Please check with [~sseth]. build() is probably required for immutable records.
{code}
+  @Override
+  protected void build() {
+proto = builder.build();
+builder = null;
+  }
+}
{code}

Looks like the javadoc has gotten mixed up
{code}
+/**
+ * The exception is thrown when the requested resource is out of the range
+ * of the configured lower and upper resource boundaries.
+ *
+ */
+public class InvalidBlacklistRequestException extends YarnException {
.
 /**
- * The exception is thrown when the requested resource is out of the range
- * of the configured lower and upper resource boundaries.
- *
+ * The exception is thrown when an application tries to blacklist
+ * {@link ResourceRequest#ANY}.
  */
 public class InvalidResourceRequestException extends YarnException {
{code}

This and a couple of others like this should probably not be null.
{code}
 // Update application requests
-application.updateResourceRequests(ask);
+application.updateResourceRequests(ask, null);
{code}

Changes for Fair Scheduler?

 Allow for black-listing resources in CS
 ---

 Key: YARN-750
 URL: https://issues.apache.org/jira/browse/YARN-750
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun C Murthy
Assignee: Arun C Murthy
 Attachments: YARN-750.patch


 YARN-392 and YARN-398 enhance scheduler api to allow for white-lists of 
 resources.
 This jira is a companion to allow for black-listing (in CS).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-689) Add multiplier unit to resourcecapabilities

2013-06-03 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated YARN-689:


Attachment: YARN-689.patch

Rebasing patch to trunk.

Changed the new property name from 'multiplier' to 'increment' as it makes the 
intent clearer.

Set the default value of the resource increment to be the resource minimum. An 
increment different from the minimum will only be in effect if the increment is 
set in the yarn-site.xml file; otherwise the increment always matches the 
minimum (the current behavior).

Adding to my last comment on why this should be in the API: by being in the 
API, a client can ask the RM without relying on having the RM configuration 
locally available. It also provides consistent behavior across Scheduler 
implementations.

 Add multiplier unit to resourcecapabilities
 ---

 Key: YARN-689
 URL: https://issues.apache.org/jira/browse/YARN-689
 Project: Hadoop YARN
  Issue Type: Bug
  Components: api, scheduler
Affects Versions: 2.0.4-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: YARN-689.patch, YARN-689.patch, YARN-689.patch, 
 YARN-689.patch


 Currently we are overloading the minimum resource value as the actual 
 multiplier used by the scheduler.
 Today with a minimum memory set to 1GB, requests for 1.5GB are always 
 translated to an allocation of 2GB.
 We should decouple the minimum allocation from the multiplier.
 The multiplier should also be exposed to the client via the 
 RegisterApplicationMasterResponse



[jira] [Commented] (YARN-750) Allow for black-listing resources in CS

2013-06-03 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673501#comment-13673501
 ] 

Sandy Ryza commented on YARN-750:
-

I can take on the fair scheduler changes

 Allow for black-listing resources in CS
 ---

 Key: YARN-750
 URL: https://issues.apache.org/jira/browse/YARN-750
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun C Murthy
Assignee: Arun C Murthy
 Attachments: YARN-750.patch


 YARN-392 and YARN-398 enhance scheduler api to allow for white-lists of 
 resources.
 This jira is a companion to allow for black-listing (in CS).



[jira] [Updated] (YARN-530) Define Service model strictly, implement AbstractService for robust subclassing, migrate yarn-common services

2013-06-03 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-530:
-

Attachment: YARN-530-016.patch

Locking only for wait and notify in AbstractService.

 Define Service model strictly, implement AbstractService for robust 
 subclassing, migrate yarn-common services
 -

 Key: YARN-530
 URL: https://issues.apache.org/jira/browse/YARN-530
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.0.4-alpha
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: YARN-117changes.pdf, YARN-530-005.patch, 
 YARN-530-008.patch, YARN-530-009.patch, YARN-530-010.patch, 
 YARN-530-011.patch, YARN-530-012.patch, YARN-530-013.patch, 
 YARN-530-014.patch, YARN-530-015.patch, YARN-530-016.patch, YARN-530-2.patch, 
 YARN-530-3.patch, YARN-530.4.patch, YARN-530.patch


 # Extend the YARN {{Service}} interface as discussed in YARN-117
 # Implement the changes in {{AbstractService}} and {{FilterService}}.
 # Migrate all services in yarn-common to the more robust service model, test.



[jira] [Commented] (YARN-750) Allow for black-listing resources in CS

2013-06-03 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673511#comment-13673511
 ] 

Sandy Ryza commented on YARN-750:
-

Filed YARN-754

 Allow for black-listing resources in CS
 ---

 Key: YARN-750
 URL: https://issues.apache.org/jira/browse/YARN-750
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun C Murthy
Assignee: Arun C Murthy
 Attachments: YARN-750.patch


 YARN-392 and YARN-398 enhance scheduler api to allow for white-lists of 
 resources.
 This jira is a companion to allow for black-listing (in CS).



[jira] [Created] (YARN-754) Allow for black-listing resources in FS

2013-06-03 Thread Sandy Ryza (JIRA)
Sandy Ryza created YARN-754:
---

 Summary: Allow for black-listing resources in FS
 Key: YARN-754
 URL: https://issues.apache.org/jira/browse/YARN-754
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: scheduler
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza






[jira] [Commented] (YARN-752) Throw exception if AMRMClient.ContainerRequest is given invalid locations

2013-06-03 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673512#comment-13673512
 ] 

Bikas Saha commented on YARN-752:
-

I was going to open a jira for the second option actually. Given YARN's 
defaults on relaxing locality etc., it makes sense for the client to 
automatically add racks when a node is present without its rack. So users can 
specify a mix of nodes and racks, and the client will fill in the missing racks 
for the nodes if needed. If strict locality is enabled then it will not add the 
racks.
If anything knows the node-to-rack mapping (as understood by the RM, and that's 
what matters), it's the AMRMClient.
It is perfectly valid for users to specify nodes without racks to the AMRMClient.
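The rack-filling step described above can be sketched roughly as follows (`resolveRack` is a stand-in for whatever topology lookup the client would actually use; all names here are hypothetical):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class RackFiller {
    // Stand-in for the cluster's node-to-rack topology lookup (assumption:
    // a real client would consult the same mapping the RM understands).
    public static String resolveRack(String node) {
        return "/default-rack";
    }

    // Add the rack of every requested node that is not already covered,
    // unless strict locality disables the relaxation.
    public static List<String> fillInRacks(List<String> nodes, List<String> racks,
                                           boolean relaxLocality) {
        List<String> result = new ArrayList<String>(racks);
        if (!relaxLocality) {
            return result; // strict locality: do not add racks
        }
        Set<String> seen = new HashSet<String>(racks);
        for (String node : nodes) {
            String rack = resolveRack(node);
            if (seen.add(rack)) {
                result.add(rack);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> nodes = new ArrayList<String>();
        nodes.add("node1");
        System.out.println(fillInRacks(nodes, new ArrayList<String>(), true));
    }
}
```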

 Throw exception if AMRMClient.ContainerRequest is given invalid locations
 -

 Key: YARN-752
 URL: https://issues.apache.org/jira/browse/YARN-752
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: api, applications
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza

 A ContainerRequest that includes node-level requests must also include 
 matching rack-level requests for the racks that those nodes are on.  At the 
 very least, an exception should be thrown if one is constructed with a 
 non-empty set of nodes but an empty set of racks.
 If possible, it would also be nice to validate that the given nodes are on 
 the racks that are given.  Although if that is possible, then it might be 
 even better to just automatically fill in racks for nodes.



[jira] [Commented] (YARN-689) Add multiplier unit to resourcecapabilities

2013-06-03 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673518#comment-13673518
 ] 

Arun C Murthy commented on YARN-689:


Regarding API, we can add a FS specific api. That is something we'll need 
eventually anyway.

 Add multiplier unit to resourcecapabilities
 ---

 Key: YARN-689
 URL: https://issues.apache.org/jira/browse/YARN-689
 Project: Hadoop YARN
  Issue Type: Bug
  Components: api, scheduler
Affects Versions: 2.0.4-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: YARN-689.patch, YARN-689.patch, YARN-689.patch, 
 YARN-689.patch


 Currently we are overloading the minimum resource value as the actual 
 multiplier used by the scheduler.
 Today with a minimum memory set to 1GB, requests for 1.5GB are always 
 translated to an allocation of 2GB.
 We should decouple the minimum allocation from the multiplier.
 The multiplier should also be exposed to the client via the 
 RegisterApplicationMasterResponse



[jira] [Commented] (YARN-689) Add multiplier unit to resourcecapabilities

2013-06-03 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673517#comment-13673517
 ] 

Arun C Murthy commented on YARN-689:


bq. What is the concern with decoupling minimum and increment and exposing it 
in the API?

I'm -1 on changes for CS, this really complicates reservations and could lead 
to too much fragmentation.

For example, if the minimum is 1G and the increment is 128M, then trying to get 
reservations to work for 1.7G vs. 1.9G can be very hard. Worse, this leads to 
lots of nodes with 1G fragments, leading to poor utilization.

As I've said before I'm not going to block this going into FS.

 Add multiplier unit to resourcecapabilities
 ---

 Key: YARN-689
 URL: https://issues.apache.org/jira/browse/YARN-689
 Project: Hadoop YARN
  Issue Type: Bug
  Components: api, scheduler
Affects Versions: 2.0.4-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: YARN-689.patch, YARN-689.patch, YARN-689.patch, 
 YARN-689.patch


 Currently we are overloading the minimum resource value as the actual 
 multiplier used by the scheduler.
 Today with a minimum memory set to 1GB, requests for 1.5GB are always 
 translated to an allocation of 2GB.
 We should decouple the minimum allocation from the multiplier.
 The multiplier should also be exposed to the client via the 
 RegisterApplicationMasterResponse



[jira] [Updated] (YARN-752) In AMRMClient, automatically add corresponding rack requests for requested nodes

2013-06-03 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated YARN-752:


Summary: In AMRMClient, automatically add corresponding rack requests for 
requested nodes  (was: Throw exception if AMRMClient.ContainerRequest is given 
invalid locations)

 In AMRMClient, automatically add corresponding rack requests for requested 
 nodes
 

 Key: YARN-752
 URL: https://issues.apache.org/jira/browse/YARN-752
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: api, applications
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza

 A ContainerRequest that includes node-level requests must also include 
 matching rack-level requests for the racks that those nodes are on.  At the 
 very least, an exception should be thrown if one is constructed with a 
 non-empty set of nodes but an empty set of racks.
 If possible, it would also be nice to validate that the given nodes are on 
 the racks that are given.  Although if that is possible, then it might be 
 even better to just automatically fill in racks for nodes.



[jira] [Commented] (YARN-752) In AMRMClient, automatically add corresponding rack requests for requested nodes

2013-06-03 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673526#comment-13673526
 ] 

Sandy Ryza commented on YARN-752:
-

Cool, I think that option is the best too.  Updated the title and description.

 In AMRMClient, automatically add corresponding rack requests for requested 
 nodes
 

 Key: YARN-752
 URL: https://issues.apache.org/jira/browse/YARN-752
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: api, applications
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza

 A ContainerRequest that includes node-level requests must also include 
 matching rack-level requests for the racks that those nodes are on.  When a 
 node is present without its rack, it makes sense for the client to 
 automatically add the node's rack.



[jira] [Commented] (YARN-689) Add multiplier unit to resourcecapabilities

2013-06-03 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673525#comment-13673525
 ] 

Alejandro Abdelnur commented on YARN-689:
-

[~acmurthy], the fragmentation problem you describe can happen today 
-without this patch- if people set the minimum to 128M; you need to set 
sensible values (with or without this patch). Also, as I've mentioned before, 
the default behavior is {{increment == minimum}}, which is today's behavior.

 Add multiplier unit to resourcecapabilities
 ---

 Key: YARN-689
 URL: https://issues.apache.org/jira/browse/YARN-689
 Project: Hadoop YARN
  Issue Type: Bug
  Components: api, scheduler
Affects Versions: 2.0.4-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: YARN-689.patch, YARN-689.patch, YARN-689.patch, 
 YARN-689.patch


 Currently we are overloading the minimum resource value as the actual 
 multiplier used by the scheduler.
 Today with a minimum memory set to 1GB, requests for 1.5GB are always 
 translated to an allocation of 2GB.
 We should decouple the minimum allocation from the multiplier.
 The multiplier should also be exposed to the client via the 
 RegisterApplicationMasterResponse



[jira] [Updated] (YARN-752) In AMRMClient, automatically add corresponding rack requests for requested nodes

2013-06-03 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated YARN-752:


Description: A ContainerRequest that includes node-level requests must also 
include matching rack-level requests for the racks that those nodes are on.  
When a node is present without its rack, it makes sense for the client to 
automatically add the node's rack.  (was: A ContainerRequest that includes 
node-level requests must also include matching rack-level requests for the 
racks that those nodes are on.  At the very least, an exception should be 
thrown if one is constructed with a non-empty set of nodes but an empty set of 
racks.

If possible, it would also be nice to validate that the given nodes are on the 
racks that are given.  Although if that is possible, then it might be even 
better to just automatically fill in racks for nodes.)

 In AMRMClient, automatically add corresponding rack requests for requested 
 nodes
 

 Key: YARN-752
 URL: https://issues.apache.org/jira/browse/YARN-752
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: api, applications
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza

 A ContainerRequest that includes node-level requests must also include 
 matching rack-level requests for the racks that those nodes are on.  When a 
 node is present without its rack, it makes sense for the client to 
 automatically add the node's rack.



[jira] [Commented] (YARN-689) Add multiplier unit to resourcecapabilities

2013-06-03 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673532#comment-13673532
 ] 

Arun C Murthy commented on YARN-689:


bq. the fragmentation problem you describe it can happen today if people set 
the minimum to 128M

Not true. A minimum container can always be allocated since minimum == 
increment.

 Add multiplier unit to resourcecapabilities
 ---

 Key: YARN-689
 URL: https://issues.apache.org/jira/browse/YARN-689
 Project: Hadoop YARN
  Issue Type: Bug
  Components: api, scheduler
Affects Versions: 2.0.4-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: YARN-689.patch, YARN-689.patch, YARN-689.patch, 
 YARN-689.patch


 Currently we are overloading the minimum resource value as the actual 
 multiplier used by the scheduler.
 Today with a minimum memory set to 1GB, requests for 1.5GB are always 
 translated to an allocation of 2GB.
 We should decouple the minimum allocation from the multiplier.
 The multiplier should also be exposed to the client via the 
 RegisterApplicationMasterResponse



[jira] [Commented] (YARN-530) Define Service model strictly, implement AbstractService for robust subclassing, migrate yarn-common services

2013-06-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673561#comment-13673561
 ] 

Hadoop QA commented on YARN-530:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12585932/YARN-530-016.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1084//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/1084//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-yarn-common.html
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1084//console

This message is automatically generated.

 Define Service model strictly, implement AbstractService for robust 
 subclassing, migrate yarn-common services
 -

 Key: YARN-530
 URL: https://issues.apache.org/jira/browse/YARN-530
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.0.4-alpha
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: YARN-117changes.pdf, YARN-530-005.patch, 
 YARN-530-008.patch, YARN-530-009.patch, YARN-530-010.patch, 
 YARN-530-011.patch, YARN-530-012.patch, YARN-530-013.patch, 
 YARN-530-014.patch, YARN-530-015.patch, YARN-530-016.patch, YARN-530-2.patch, 
 YARN-530-3.patch, YARN-530.4.patch, YARN-530.patch


 # Extend the YARN {{Service}} interface as discussed in YARN-117
 # Implement the changes in {{AbstractService}} and {{FilterService}}.
 # Migrate all services in yarn-common to the more robust service model, test.



[jira] [Commented] (YARN-752) In AMRMClient, automatically add corresponding rack requests for requested nodes

2013-06-03 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673577#comment-13673577
 ] 

Sandy Ryza commented on YARN-752:
-

[~bikassaha], do you have an opinion on whether it makes more sense to add the 
missing racks when a ContainerRequest is constructed or when it's submitted to 
the AMRMClient?

 In AMRMClient, automatically add corresponding rack requests for requested 
 nodes
 

 Key: YARN-752
 URL: https://issues.apache.org/jira/browse/YARN-752
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: api, applications
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza

 A ContainerRequest that includes node-level requests must also include 
 matching rack-level requests for the racks that those nodes are on.  When a 
 node is present without its rack, it makes sense for the client to 
 automatically add the node's rack.



[jira] [Commented] (YARN-689) Add multiplier unit to resourcecapabilities

2013-06-03 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673582#comment-13673582
 ] 

Tom White commented on YARN-689:


It seems that there's a concern about impacting the CS with this change - if 
so, have CapacityScheduler just return minCapability from 
getIncrementResourceCapability().

Also, can you keep the two-arg constructor for ClusterInfo, setting 
incrementCapability to minCapability by default? Then unrelated tests wouldn't 
need to change.
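Tom's suggestion amounts to an override along these lines (the method name `getIncrementResourceCapability()` comes from the comment; everything else here is a hypothetical sketch, not the patch):

```java
public class IncrementSketch {
    // Sketch: a scheduler that does not support a separate increment simply
    // reports its minimum as the increment, preserving today's behavior.
    static class CapacitySchedulerSketch {
        private final int minimumAllocationMb;

        CapacitySchedulerSketch(int minimumAllocationMb) {
            this.minimumAllocationMb = minimumAllocationMb;
        }

        int getMinimumResourceCapability() {
            return minimumAllocationMb;
        }

        int getIncrementResourceCapability() {
            // CS/FIFO: increment hardwired to the minimum
            return getMinimumResourceCapability();
        }
    }

    public static void main(String[] args) {
        CapacitySchedulerSketch cs = new CapacitySchedulerSketch(1024);
        System.out.println(cs.getIncrementResourceCapability()); // 1024
    }
}
```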


 Add multiplier unit to resourcecapabilities
 ---

 Key: YARN-689
 URL: https://issues.apache.org/jira/browse/YARN-689
 Project: Hadoop YARN
  Issue Type: Bug
  Components: api, scheduler
Affects Versions: 2.0.4-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: YARN-689.patch, YARN-689.patch, YARN-689.patch, 
 YARN-689.patch


 Currently we are overloading the minimum resource value as the actual 
 multiplier used by the scheduler.
 Today with a minimum memory set to 1GB, requests for 1.5GB are always 
 translated to an allocation of 2GB.
 We should decouple the minimum allocation from the multiplier.
 The multiplier should also be exposed to the client via the 
 RegisterApplicationMasterResponse



[jira] [Updated] (YARN-750) Allow for black-listing resources in CS

2013-06-03 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated YARN-750:
---

Attachment: YARN-750.patch

Thanks for the reviews, Bikas. Here is a patch which addresses your feedback.

 Allow for black-listing resources in CS
 ---

 Key: YARN-750
 URL: https://issues.apache.org/jira/browse/YARN-750
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun C Murthy
Assignee: Arun C Murthy
 Attachments: YARN-750.patch, YARN-750.patch


 YARN-392 and YARN-398 enhance scheduler api to allow for white-lists of 
 resources.
 This jira is a companion to allow for black-listing (in CS).



[jira] [Commented] (YARN-748) Move BuilderUtils from yarn-common to yarn-server-common

2013-06-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673615#comment-13673615
 ] 

Hadoop QA commented on YARN-748:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12585922/YARN-748.4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 85 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs-plugins
 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1082//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1082//console

This message is automatically generated.

 Move BuilderUtils from yarn-common to yarn-server-common
 

 Key: YARN-748
 URL: https://issues.apache.org/jira/browse/YARN-748
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-748.1.patch, YARN-748.2.patch, YARN-748.3.patch, 
 YARN-748.4.patch






[jira] [Commented] (YARN-750) Allow for black-listing resources in CS

2013-06-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673645#comment-13673645
 ] 

Hadoop QA commented on YARN-750:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12585953/YARN-750.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1085//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1085//console

This message is automatically generated.

 Allow for black-listing resources in CS
 ---

 Key: YARN-750
 URL: https://issues.apache.org/jira/browse/YARN-750
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun C Murthy
Assignee: Arun C Murthy
 Attachments: YARN-750.patch, YARN-750.patch


 YARN-392 and YARN-398 enhance scheduler api to allow for white-lists of 
 resources.
 This jira is a companion to allow for black-listing (in CS).



[jira] [Updated] (YARN-689) Add multiplier unit to resourcecapabilities

2013-06-03 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated YARN-689:


Attachment: YARN-689.patch

[~acmurthy], regarding your last comment: unless I'm missing something, if you 
set the minimum to 128M today, you run into the issue you describe: getting 
reservations to work for 1.7G vs 1.9G can be very hard. Worse, this leads to 
lots of nodes with 1G fragments, leading to poor utilization. No?

Anyway, as having {{increment}} decoupled from {{minimum}} outside of FS seems 
to be the issue, updating with a patch doing what Tom suggested: for CS and 
FIFO, {{increment}} is hardwired to {{minimum}}.

Also added a test case with a minicluster running the FS to verify that the 
increment makes it to the AMRMClient correctly.
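The increment-vs-minimum distinction being debated can be sketched as a standalone rounding routine. This is illustrative only; the class and method names are hypothetical, not the actual scheduler code:

```java
public class ResourceNormalizer {
    // Round a memory request up to the next multiple of the increment,
    // but never allocate below the configured minimum.
    public static int normalizeMemory(int requestedMb, int minimumMb, int incrementMb) {
        int rounded = ((requestedMb + incrementMb - 1) / incrementMb) * incrementMb;
        return Math.max(rounded, minimumMb);
    }

    public static void main(String[] args) {
        // minimum == increment == 1024: a 1536 MB ask becomes 2048 MB
        System.out.println(normalizeMemory(1536, 1024, 1024)); // prints 2048
        // increment (512) decoupled from minimum (1024): the same ask stays 1536 MB
        System.out.println(normalizeMemory(1536, 1024, 512));  // prints 1536
    }
}
```

With minimum hardwired to increment (the CS/FIFO behaviour in the patch), the first case applies; decoupling them only changes behaviour in the FS.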


 Add multiplier unit to resourcecapabilities
 ---

 Key: YARN-689
 URL: https://issues.apache.org/jira/browse/YARN-689
 Project: Hadoop YARN
  Issue Type: Bug
  Components: api, scheduler
Affects Versions: 2.0.4-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: YARN-689.patch, YARN-689.patch, YARN-689.patch, 
 YARN-689.patch, YARN-689.patch


 Currently we are overloading the minimum resource value as the actual 
 multiplier used by the scheduler.
 Today with a minimum memory set to 1GB, requests for 1.5GB are always 
 translated to allocation of 2GB.
 We should decouple the minimum allocation from the multiplier.
 The multiplier should also be exposed to the client via the 
 RegisterApplicationMasterResponse



[jira] [Updated] (YARN-742) Log aggregation causes a lot of redundant setPermission calls

2013-06-03 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-742:


Attachment: YARN-742.patch

Patch to walk back up the app log dir path to check for the existence of a 
directory before blindly proceeding to create it. It bails out early if it 
finds a path that exists.

In addition to the unit test, I manually tested this on a single-node cluster 
verifying the directories are created iff they are missing and permissions are 
set iff necessary due to a too-restrictive umask.
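The walk-back-up idea can be sketched with plain java.nio; the real patch uses the Hadoop FileSystem API, and the class and method names here are hypothetical:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayDeque;
import java.util.Deque;

public class MkdirsIfMissing {
    // Walk up from dir until an existing ancestor is found, then create only
    // the missing tail, so existing directories receive no redundant
    // mkdirs/setPermission calls.
    public static void createMissingDirs(Path dir) throws IOException {
        Deque<Path> toCreate = new ArrayDeque<>();
        Path current = dir.toAbsolutePath();
        while (current != null && !Files.exists(current)) {
            toCreate.push(current);        // remember the missing directory
            current = current.getParent(); // bail out once an ancestor exists
        }
        while (!toCreate.isEmpty()) {
            Files.createDirectory(toCreate.pop());
        }
    }
}
```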

 Log aggregation causes a lot of redundant setPermission calls
 -

 Key: YARN-742
 URL: https://issues.apache.org/jira/browse/YARN-742
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 0.23.7, 2.0.4-alpha
Reporter: Kihwal Lee
Assignee: Jason Lowe
 Attachments: YARN-742.patch


 In one of our clusters, namenode RPC is spending 45% of its time on serving 
 setPermission calls. Further investigation has revealed that most calls are 
 redundantly made on /mapred/logs/user/logs. Also mkdirs calls are made 
 before this.



[jira] [Commented] (YARN-642) Fix up /nodes REST API to have 1 param and be consistent with the Java API

2013-06-03 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673677#comment-13673677
 ] 

Sandy Ryza commented on YARN-642:
-

[~vinodkv], is the latest patch satisfactory to you?  Just verified that it 
still applies cleanly to trunk.

 Fix up /nodes REST API to have 1 param and be consistent with the Java API
 --

 Key: YARN-642
 URL: https://issues.apache.org/jira/browse/YARN-642
 Project: Hadoop YARN
  Issue Type: Bug
  Components: api, resourcemanager
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
  Labels: incompatible
 Attachments: YARN-642-1.patch, YARN-642-2.patch, YARN-642-2.patch, 
 YARN-642.patch


 The code behind the /nodes RM REST API is unnecessarily muddled, logs the 
 same misspelled INFO message repeatedly, and does not return unhealthy nodes, 
 even when asked.



[jira] [Updated] (YARN-748) Move BuilderUtils from yarn-common to yarn-server-common

2013-06-03 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-748:
-

Attachment: YARN-748.5.patch

Changed yarn-client to not use BuilderUtils. 
Changed TestFairSchedulerConfiguration and TestDominantResourceFairnessPolicy 
to use the BuilderUtils in server-common.

 Move BuilderUtils from yarn-common to yarn-server-common
 

 Key: YARN-748
 URL: https://issues.apache.org/jira/browse/YARN-748
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-748.1.patch, YARN-748.2.patch, YARN-748.3.patch, 
 YARN-748.4.patch, YARN-748.5.patch






[jira] [Commented] (YARN-742) Log aggregation causes a lot of redundant setPermission calls

2013-06-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673693#comment-13673693
 ] 

Hadoop QA commented on YARN-742:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12585968/YARN-742.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1086//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1086//console


 Log aggregation causes a lot of redundant setPermission calls
 -

 Key: YARN-742
 URL: https://issues.apache.org/jira/browse/YARN-742
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 0.23.7, 2.0.4-alpha
Reporter: Kihwal Lee
Assignee: Jason Lowe
 Attachments: YARN-742.patch


 In one of our clusters, namenode RPC is spending 45% of its time on serving 
 setPermission calls. Further investigation has revealed that most calls are 
 redundantly made on /mapred/logs/user/logs. Also mkdirs calls are made 
 before this.



[jira] [Commented] (YARN-689) Add multiplier unit to resourcecapabilities

2013-06-03 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673703#comment-13673703
 ] 

Arun C Murthy commented on YARN-689:


bq. Unless I'm missing something, if you set the minimum to 128M today, you run 
into the issue you describe: getting reservations to work for 1.7G vs 1.9G can 
be very hard. Worse, this leads to lots of nodes with 1G fragments, leading to 
poor utilization. No?

Again, if minimum == increment, this does not arise at all. 

bq. It seems that there's a concern about impacting CS with this change

It's more than just that - I thought I made this clear. I don't see this as the 
right thing to do directionally/architecturally for YARN - this change doesn't 
smell right to me. 

This is why I'm against this change in the protocol. I won't block changes to 
FS as long as they don't expose this on the protocol.

Thanks.

 Add multiplier unit to resourcecapabilities
 ---

 Key: YARN-689
 URL: https://issues.apache.org/jira/browse/YARN-689
 Project: Hadoop YARN
  Issue Type: Bug
  Components: api, scheduler
Affects Versions: 2.0.4-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: YARN-689.patch, YARN-689.patch, YARN-689.patch, 
 YARN-689.patch, YARN-689.patch


 Currently we are overloading the minimum resource value as the actual 
 multiplier used by the scheduler.
 Today with a minimum memory set to 1GB, requests for 1.5GB are always 
 translated to allocation of 2GB.
 We should decouple the minimum allocation from the multiplier.
 The multiplier should also be exposed to the client via the 
 RegisterApplicationMasterResponse



[jira] [Updated] (YARN-117) Enhance YARN service model

2013-06-03 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated YARN-117:


Attachment: MAPREDUCE-5298-016.patch

Patch in sync w/ YARN-530-016.patch

 Enhance YARN service model
 --

 Key: YARN-117
 URL: https://issues.apache.org/jira/browse/YARN-117
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.0.4-alpha
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: MAPREDUCE-5298-016.patch, YARN-117-007.patch, 
 YARN-117-008.patch, YARN-117-009.patch, YARN-117-010.patch, 
 YARN-117-011.patch, YARN-117-012.patch, YARN-117-013.patch, 
 YARN-117-014.patch, YARN-117-015.patch, YARN-117-016.patch, YARN-117-2.patch, 
 YARN-117-3.patch, YARN-117.4.patch, YARN-117.5.patch, YARN-117.6.patch, 
 YARN-117.patch


 Having played with the YARN service model, there are some issues
 that I've identified based on past work and initial use.
 This JIRA issue is an overall one to cover the issues, with solutions pushed 
 out to separate JIRAs.
 h2. state model prevents stopped state being entered if you could not 
 successfully start the service.
 In the current lifecycle you cannot stop a service unless it was successfully 
 started, but
 * {{init()}} may acquire resources that need to be explicitly released
 * if the {{start()}} operation fails partway through, the {{stop()}} 
 operation may be needed to release resources.
 *Fix:* make {{stop()}} a valid state transition from all states and require 
 the implementations to be able to stop safely without requiring all fields to 
 be non null.
 Before anyone points out that the {{stop()}} operations assume that all 
 fields are valid, and that if called before a {{start()}} they will NPE: 
 MAPREDUCE-3431 shows that this problem arises today, and MAPREDUCE-3502 is a 
 fix for it. It is independent of the rest of the issues in this doc but it will 
 aid making {{stop()}} execute from all states other than stopped.
 MAPREDUCE-3502 is too big a patch and needs to be broken down for easier 
 review and uptake; this can be done with issues linked to this one.
 h2. AbstractService doesn't prevent duplicate state change requests.
 The {{ensureState()}} checks to verify whether or not a state transition is 
 allowed from the current state are performed in the base {{AbstractService}} 
 class -yet subclasses tend to call this *after* their own {{init()}}, 
 {{start()}} and {{stop()}} operations. This means that these operations can be 
 performed out of order, and even if the outcome of the call is an exception, 
 all actions performed by the subclasses will have taken place. MAPREDUCE-3877 
 demonstrates this.
 This is a tricky one to address. In HADOOP-3128 I used a base class instead 
 of an interface and made the {{init()}}, {{start()}} and {{stop()}} methods 
 {{final}}. These methods would do the checks, and then invoke protected inner 
 methods, {{innerStart()}}, {{innerStop()}}, etc. It should be possible to 
 retrofit the same behaviour to everything that extends {{AbstractService}} 
 -something that must be done before the class is considered stable (because 
 once the lifecycle methods are declared final, all subclasses that are out of 
 the source tree will need fixing by the respective developers).
 h2. AbstractService state change doesn't defend against race conditions.
 There's no concurrency locks on the state transitions. Whatever fix for wrong 
 state calls is added should correct this to prevent re-entrancy, such as 
 {{stop()}} being called from two threads.
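A minimal sketch of the pattern described in the two sections above (the class and {{innerInit()}}/{{innerStart()}}/{{innerStop()}} names are assumed stand-ins, not the actual YARN classes): the base class makes the lifecycle methods final and synchronized, checks the state *before* delegating to a protected inner method, and accepts {{stop()}} from any state:

```java
public abstract class GuardedService {
    public enum State { NOTINITED, INITED, STARTED, STOPPED }

    private State state = State.NOTINITED;

    public final synchronized void init() {
        if (state != State.NOTINITED) {
            throw new IllegalStateException("cannot init from " + state);
        }
        innerInit();              // subclass work runs only after the check
        state = State.INITED;
    }

    public final synchronized void start() {
        if (state != State.INITED) {
            throw new IllegalStateException("cannot start from " + state);
        }
        innerStart();
        state = State.STARTED;
    }

    // stop() is valid from every state and idempotent, so it can release
    // resources even when init()/start() never completed.
    public final synchronized void stop() {
        if (state == State.STOPPED) {
            return;
        }
        innerStop();
        state = State.STOPPED;
    }

    public final synchronized State getState() { return state; }

    // Subclasses override these instead of the final lifecycle methods.
    protected void innerInit() { }
    protected void innerStart() { }
    protected void innerStop() { }
}
```

The synchronized final methods also cover the race-condition concern: two threads calling {{stop()}} serialize on the lock, and the second call is a no-op.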
 h2. Static methods to choreograph lifecycle operations
 Helper methods to move things through the lifecycle: init-then-start is 
 common, stop-if-service-is-not-null is another. Some static methods can 
 execute these, and even call {{stop()}} if {{init()}} raises an exception. 
 These could go into a class {{ServiceOps}} in the same package, to be used by 
 services that wrap other services, helping to manage more robust shutdowns.
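The helpers proposed here might look like the following sketch, written against a minimal stand-in {{Service}} interface (all names hypothetical, not the real YARN API):

```java
public class ServiceOps {
    public interface Service {
        void init();
        void start();
        void stop();
    }

    // init-then-start in one call; if either step fails, stop() is still
    // attempted so partially acquired resources get released.
    public static void deploy(Service service) {
        try {
            service.init();
            service.start();
        } catch (RuntimeException e) {
            stopQuietly(service);
            throw e;
        }
    }

    // The common "stop if the service is not null, ignore failures" helper
    // for wrapper services shutting down their children.
    public static void stopQuietly(Service service) {
        if (service == null) {
            return;
        }
        try {
            service.stop();
        } catch (RuntimeException ignored) {
            // best-effort shutdown: the caller is already on an error path
        }
    }
}
```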
 h2. state transition failures are something that registered service listeners 
 may wish to be informed of.
 When a state transition fails a {{RuntimeException}} can be thrown -and the 
 service listeners are not informed as the notification point isn't reached. 
 They may wish to know this, especially for management and diagnostics.
 *Fix:* extend {{ServiceStateChangeListener}} with a callback such as 
 {{stateChangeFailed(Service service, Service.State targetedState, 
 RuntimeException e)}} that is invoked from the (final) state change methods 
 in the {{AbstractService}} class (once they delegate to their inner 
 {{innerStart()}}, {{innerStop()}} methods); make it a no-op in the existing 
 implementations of the interface.
 h2. Service listener failures not handled
 Is this an error or not? Log-and-ignore may not be what 

[jira] [Commented] (YARN-117) Enhance YARN service model

2013-06-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673720#comment-13673720
 ] 

Hadoop QA commented on YARN-117:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12585974/MAPREDUCE-5298-016.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 12 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1089//console


 Enhance YARN service model
 --

 Key: YARN-117
 URL: https://issues.apache.org/jira/browse/YARN-117
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.0.4-alpha
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: MAPREDUCE-5298-016.patch, YARN-117-007.patch, 
 YARN-117-008.patch, YARN-117-009.patch, YARN-117-010.patch, 
 YARN-117-011.patch, YARN-117-012.patch, YARN-117-013.patch, 
 YARN-117-014.patch, YARN-117-015.patch, YARN-117-016.patch, YARN-117-2.patch, 
 YARN-117-3.patch, YARN-117.4.patch, YARN-117.5.patch, YARN-117.6.patch, 
 YARN-117.patch


 Having played with the YARN service model, there are some issues
 that I've identified based on past work and initial use.
 This JIRA issue is an overall one to cover the issues, with solutions pushed 
 out to separate JIRAs.
 h2. state model prevents stopped state being entered if you could not 
 successfully start the service.
 In the current lifecycle you cannot stop a service unless it was successfully 
 started, but
 * {{init()}} may acquire resources that need to be explicitly released
 * if the {{start()}} operation fails partway through, the {{stop()}} 
 operation may be needed to release resources.
 *Fix:* make {{stop()}} a valid state transition from all states and require 
 the implementations to be able to stop safely without requiring all fields to 
 be non null.
 Before anyone points out that the {{stop()}} operations assume that all 
 fields are valid, and that if called before a {{start()}} they will NPE: 
 MAPREDUCE-3431 shows that this problem arises today, and MAPREDUCE-3502 is a 
 fix for it. It is independent of the rest of the issues in this doc but it will 
 aid making {{stop()}} execute from all states other than stopped.
 MAPREDUCE-3502 is too big a patch and needs to be broken down for easier 
 review and uptake; this can be done with issues linked to this one.
 h2. AbstractService doesn't prevent duplicate state change requests.
 The {{ensureState()}} checks to verify whether or not a state transition is 
 allowed from the current state are performed in the base {{AbstractService}} 
 class -yet subclasses tend to call this *after* their own {{init()}}, 
 {{start()}} and {{stop()}} operations. This means that these operations can be 
 performed out of order, and even if the outcome of the call is an exception, 
 all actions performed by the subclasses will have taken place. MAPREDUCE-3877 
 demonstrates this.
 This is a tricky one to address. In HADOOP-3128 I used a base class instead 
 of an interface and made the {{init()}}, {{start()}} and {{stop()}} methods 
 {{final}}. These methods would do the checks, and then invoke protected inner 
 methods, {{innerStart()}}, {{innerStop()}}, etc. It should be possible to 
 retrofit the same behaviour to everything that extends {{AbstractService}} 
 -something that must be done before the class is considered stable (because 
 once the lifecycle methods are declared final, all subclasses that are out of 
 the source tree will need fixing by the respective developers).
 h2. AbstractService state change doesn't defend against race conditions.
 There's no concurrency locks on the state transitions. Whatever fix for wrong 
 state calls is added should correct this to prevent re-entrancy, such as 
 {{stop()}} being called from two threads.
 h2. Static methods to choreograph lifecycle operations
 Helper methods to move things through the lifecycle: init-then-start is 
 common, stop-if-service-is-not-null is another. Some static methods can 
 execute these, and even call {{stop()}} if {{init()}} raises an exception. 
 These could go into a class {{ServiceOps}} in the same package, to be used by 
 services that wrap other services, helping to manage more robust shutdowns.
 h2. state transition failures are something that registered service listeners 
 may wish to be informed of.
 When a state transition fails a {{RuntimeException}} can be thrown -and the 
 service listeners are not informed as the notification point isn't reached. 
 They may 

[jira] [Commented] (YARN-689) Add multiplier unit to resourcecapabilities

2013-06-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673722#comment-13673722
 ] 

Hadoop QA commented on YARN-689:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12585966/YARN-689.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1087//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1087//console


 Add multiplier unit to resourcecapabilities
 ---

 Key: YARN-689
 URL: https://issues.apache.org/jira/browse/YARN-689
 Project: Hadoop YARN
  Issue Type: Bug
  Components: api, scheduler
Affects Versions: 2.0.4-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: YARN-689.patch, YARN-689.patch, YARN-689.patch, 
 YARN-689.patch, YARN-689.patch


 Currently we are overloading the minimum resource value as the actual 
 multiplier used by the scheduler.
 Today with a minimum memory set to 1GB, requests for 1.5GB are always 
 translated to allocation of 2GB.
 We should decouple the minimum allocation from the multiplier.
 The multiplier should also be exposed to the client via the 
 RegisterApplicationMasterResponse



[jira] [Commented] (YARN-750) Allow for black-listing resources in CS

2013-06-03 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673733#comment-13673733
 ] 

Bikas Saha commented on YARN-750:
-

Don't we need to clear the local cache of the list also? Because in 
mergeLocalToBuilder() we will be merging/setting that onto the builder.
{code}
+  public void setBlacklistRequest(BlacklistRequest blacklistRequest) {
+maybeInitBuilder();
+if (this.blacklistRequest == null) {
+  builder.clearBlacklistRequest();
+}
+this.blacklistRequest = blacklistRequest;
{code}

The first if stmt is probably redundant code? We just need to check for 
non-null resourceNames before calling .addAll(), and not even that much if the 
collections api handles null.
{code}
+  public void setBlacklistAdditions(List<String> resourceNames) {
+if (resourceNames == null) {
+  if (this.blacklistAdditions != null) {
+this.blacklistAdditions.clear();
+  }
+  return;
+}
+initBlacklistAdditions();
+this.blacklistAdditions.clear();
+this.blacklistAdditions.addAll(resourceNames);
{code}

The javadoc mismatches in the exceptions still seem to be there.

Still sending null instead of the blacklist. Did you forget to save the code 
before creating the patch? :P
{code}
-application.updateResourceRequests(ask);
+application.updateResourceRequests(ask, null);
{code}

 Allow for black-listing resources in CS
 ---

 Key: YARN-750
 URL: https://issues.apache.org/jira/browse/YARN-750
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun C Murthy
Assignee: Arun C Murthy
 Attachments: YARN-750.patch, YARN-750.patch


 YARN-392 and YARN-398 enhance scheduler api to allow for white-lists of 
 resources.
 This jira is a companion to allow for black-listing (in CS).



[jira] [Commented] (YARN-748) Move BuilderUtils from yarn-common to yarn-server-common

2013-06-03 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673738#comment-13673738
 ] 

Vinod Kumar Vavilapalli commented on YARN-748:
--

+1. This looks good. Will check it in when MAPREDUCE-5297 is ready.

 Move BuilderUtils from yarn-common to yarn-server-common
 

 Key: YARN-748
 URL: https://issues.apache.org/jira/browse/YARN-748
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-748.1.patch, YARN-748.2.patch, YARN-748.3.patch, 
 YARN-748.4.patch, YARN-748.5.patch






[jira] [Updated] (YARN-748) Move BuilderUtils from yarn-common to yarn-server-common

2013-06-03 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-748:
-

Attachment: YARN-748.includeMR.patch

The new patch covers the MR-side change to not use BuilderUtils at all.

 Move BuilderUtils from yarn-common to yarn-server-common
 

 Key: YARN-748
 URL: https://issues.apache.org/jira/browse/YARN-748
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-748.1.patch, YARN-748.2.patch, YARN-748.3.patch, 
 YARN-748.4.patch, YARN-748.5.patch, YARN-748.includeMR.patch






[jira] [Created] (YARN-755) Rename AllocateResponse.reboot to AllocateResponse.resync

2013-06-03 Thread Bikas Saha (JIRA)
Bikas Saha created YARN-755:
---

 Summary: Rename AllocateResponse.reboot to AllocateResponse.resync
 Key: YARN-755
 URL: https://issues.apache.org/jira/browse/YARN-755
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha
Assignee: Bikas Saha


For work-preserving RM restart the AMs will be resyncing instead of rebooting. 
Rebooting is an action that currently satisfies the resync requirement. 
Changing the name now so that it continues to make sense in the real resync 
case. 



[jira] [Updated] (YARN-748) Move BuilderUtils from yarn-common to yarn-server-common

2013-06-03 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-748:
-

Attachment: (was: YARN-748.includeMR.patch)

 Move BuilderUtils from yarn-common to yarn-server-common
 

 Key: YARN-748
 URL: https://issues.apache.org/jira/browse/YARN-748
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-748.1.patch, YARN-748.2.patch, YARN-748.3.patch, 
 YARN-748.4.patch, YARN-748.5.patch






[jira] [Updated] (YARN-748) Move BuilderUtils from yarn-common to yarn-server-common

2013-06-03 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-748:
-

Attachment: YARN-748.includeMR.patch

 Move BuilderUtils from yarn-common to yarn-server-common
 

 Key: YARN-748
 URL: https://issues.apache.org/jira/browse/YARN-748
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-748.1.patch, YARN-748.2.patch, YARN-748.3.patch, 
 YARN-748.4.patch, YARN-748.5.patch






[jira] [Updated] (YARN-748) Move BuilderUtils from yarn-common to yarn-server-common

2013-06-03 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-748:
-

Attachment: (was: YARN-748.includeMR.patch)

 Move BuilderUtils from yarn-common to yarn-server-common
 

 Key: YARN-748
 URL: https://issues.apache.org/jira/browse/YARN-748
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-748.1.patch, YARN-748.2.patch, YARN-748.3.patch, 
 YARN-748.4.patch, YARN-748.5.patch






[jira] [Updated] (YARN-755) Rename AllocateResponse.reboot to AllocateResponse.resync

2013-06-03 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated YARN-755:


Affects Version/s: 2.1.0-beta

 Rename AllocateResponse.reboot to AllocateResponse.resync
 -

 Key: YARN-755
 URL: https://issues.apache.org/jira/browse/YARN-755
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: YARN-755.1.patch


 For work-preserving RM restart the AMs will be resyncing instead of 
 rebooting. Rebooting is an action that currently satisfies the resync 
 requirement. Changing the name now so that it continues to make sense in the 
 real resync case. 



[jira] [Updated] (YARN-755) Rename AllocateResponse.reboot to AllocateResponse.resync

2013-06-03 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated YARN-755:


Attachment: YARN-755.1.patch

Refactor using Eclipse. Changed the proto field also.

 Rename AllocateResponse.reboot to AllocateResponse.resync
 -

 Key: YARN-755
 URL: https://issues.apache.org/jira/browse/YARN-755
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: YARN-755.1.patch


 For work-preserving RM restart the AMs will be resyncing instead of 
 rebooting. Rebooting is an action that currently satisfies the resync 
 requirement. Changing the name now so that it continues to make sense in the 
 real resync case. 



[jira] [Updated] (YARN-530) Define Service model strictly, implement AbstractService for robust subclassing, migrate yarn-common services

2013-06-03 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-530:
-

Attachment: YARN-530-017.patch

 Define Service model strictly, implement AbstractService for robust 
 subclassing, migrate yarn-common services
 -

 Key: YARN-530
 URL: https://issues.apache.org/jira/browse/YARN-530
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.0.4-alpha
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: YARN-117changes.pdf, YARN-530-005.patch, 
 YARN-530-008.patch, YARN-530-009.patch, YARN-530-010.patch, 
 YARN-530-011.patch, YARN-530-012.patch, YARN-530-013.patch, 
 YARN-530-014.patch, YARN-530-015.patch, YARN-530-016.patch, 
 YARN-530-017.patch, YARN-530-2.patch, YARN-530-3.patch, YARN-530.4.patch, 
 YARN-530.patch


 # Extend the YARN {{Service}} interface as discussed in YARN-117
 # Implement the changes in {{AbstractService}} and {{FilterService}}.
 # Migrate all services in yarn-common to the more robust service model, test.
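The strict lifecycle the jira describes can be sketched as a small state machine. The state names below (NOTINITED, INITED, STARTED, STOPPED) and the transition rules are illustrative assumptions for this sketch, not the committed {{Service}} API:

```java
// Illustrative sketch of a strict service lifecycle like the one this jira
// proposes; the states and transition rules here are assumptions for
// illustration, not the committed YARN Service API.
enum STATE { NOTINITED, INITED, STARTED, STOPPED }

class LifecycleSketch {
    private STATE state = STATE.NOTINITED;

    // Only legal transitions are allowed; anything else is an error.
    void enter(STATE next) {
        boolean ok =
            (state == STATE.NOTINITED && next == STATE.INITED) ||
            (state == STATE.INITED && next == STATE.STARTED) ||
            (next == STATE.STOPPED);   // stop is treated as valid from any state
        if (!ok) {
            throw new IllegalStateException(state + " -> " + next);
        }
        state = next;
    }

    STATE getState() { return state; }
}
```

Making stop valid from any state is one way to get the robust subclassing the jira asks for: a half-initialized service can still be torn down safely.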

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-750) Allow for black-listing resources in CS

2013-06-03 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated YARN-750:
---

Attachment: YARN-750.patch

 Allow for black-listing resources in CS
 ---

 Key: YARN-750
 URL: https://issues.apache.org/jira/browse/YARN-750
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun C Murthy
Assignee: Arun C Murthy
 Attachments: YARN-750.patch, YARN-750.patch, YARN-750.patch


 YARN-392 and YARN-398 enhance scheduler api to allow for white-lists of 
 resources.
 This jira is a companion to allow for black-listing (in CS).
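As a rough illustration of the black-listing idea (this is not the CapacityScheduler implementation; the class and method names are hypothetical): the scheduler keeps a per-application set of banned nodes, updated with additions and removals on each allocate call, and skips those nodes when placing containers.

```java
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;

// Minimal per-application blacklist bookkeeping; names are illustrative only.
class NodeBlacklist {
    private final Set<String> banned = new HashSet<>();

    // Each allocate call may carry blacklist additions and removals.
    void update(Collection<String> additions, Collection<String> removals) {
        banned.addAll(additions);
        banned.removeAll(removals);
    }

    // The scheduler consults this before assigning a container to a node.
    boolean canAssign(String node) {
        return !banned.contains(node);
    }
}
```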

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-756) Move PreemptionContainer/PremptionContract/PreemptionMessage to api.records

2013-06-03 Thread Jian He (JIRA)
Jian He created YARN-756:


 Summary: Move 
PreemptionContainer/PremptionContract/PreemptionMessage to api.records
 Key: YARN-756
 URL: https://issues.apache.org/jira/browse/YARN-756
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jian He
Assignee: Jian He




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-726) Queue, FinishTime fields broken on RM UI

2013-06-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673821#comment-13673821
 ] 

Hudson commented on YARN-726:
-

Integrated in Hadoop-trunk-Commit #3848 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3848/])
YARN-726. Fix queue & finish time fields in web-ui for ResourceManager. 
Contributed by Mayank Bansal. (Revision 1489234)

 Result = SUCCESS
acmurthy : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1489234
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RmView.java


 Queue, FinishTime fields broken on RM UI
 

 Key: YARN-726
 URL: https://issues.apache.org/jira/browse/YARN-726
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Siddharth Seth
Assignee: Mayank Bansal
Priority: Critical
 Fix For: 2.1.0-beta

 Attachments: YARN-726-trunk-1.patch


 The queue shows up as Invalid Date.
 Finish Time shows up as a Long value.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-748) Move BuilderUtils from yarn-common to yarn-server-common

2013-06-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673825#comment-13673825
 ] 

Hadoop QA commented on YARN-748:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12585969/YARN-748.5.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 85 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs-plugins
 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1088//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1088//console

This message is automatically generated.

 Move BuilderUtils from yarn-common to yarn-server-common
 

 Key: YARN-748
 URL: https://issues.apache.org/jira/browse/YARN-748
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-748.1.patch, YARN-748.2.patch, YARN-748.3.patch, 
 YARN-748.4.patch, YARN-748.5.patch, YARN-748.includeMR.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-750) Allow for black-listing resources in CS

2013-06-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673830#comment-13673830
 ] 

Hadoop QA commented on YARN-750:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12585993/YARN-750.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 7 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1092//console

This message is automatically generated.

 Allow for black-listing resources in CS
 ---

 Key: YARN-750
 URL: https://issues.apache.org/jira/browse/YARN-750
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun C Murthy
Assignee: Arun C Murthy
 Attachments: YARN-750.patch, YARN-750.patch, YARN-750.patch


 YARN-392 and YARN-398 enhance scheduler api to allow for white-lists of 
 resources.
 This jira is a companion to allow for black-listing (in CS).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-750) Allow for black-listing resources in CS

2013-06-03 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673836#comment-13673836
 ] 

Bikas Saha commented on YARN-750:
-

The javadoc on the 2 exceptions is still mixed up between them. Both need to 
be fixed.

In testLocalityConstraints, these and others like it should not be null, right?
{code}
+app_0.updateResourceRequests(app_0_requests_0, null, null);
{code}

 Allow for black-listing resources in CS
 ---

 Key: YARN-750
 URL: https://issues.apache.org/jira/browse/YARN-750
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun C Murthy
Assignee: Arun C Murthy
 Attachments: YARN-750.patch, YARN-750.patch, YARN-750.patch


 YARN-392 and YARN-398 enhance scheduler api to allow for white-lists of 
 resources.
 This jira is a companion to allow for black-listing (in CS).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-756) Move PreemptionContainer/PremptionContract/PreemptionMessage to api.records

2013-06-03 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-756:
-

Attachment: YARN-756.1.patch

Simple Eclipse move.

 Move PreemptionContainer/PremptionContract/PreemptionMessage to api.records
 ---

 Key: YARN-756
 URL: https://issues.apache.org/jira/browse/YARN-756
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-756.1.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-755) Rename AllocateResponse.reboot to AllocateResponse.resync

2013-06-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673840#comment-13673840
 ] 

Hadoop QA commented on YARN-755:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12585990/YARN-755.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1090//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1090//console

This message is automatically generated.

 Rename AllocateResponse.reboot to AllocateResponse.resync
 -

 Key: YARN-755
 URL: https://issues.apache.org/jira/browse/YARN-755
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: YARN-755.1.patch


 For work-preserving RM restart, the AMs will be resyncing instead of 
 rebooting. Rebooting is an action that currently satisfies the resync 
 requirement. Changing the name now so that it continues to make sense in the 
 real resync case. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-752) In AMRMClient, automatically add corresponding rack requests for requested nodes

2013-06-03 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated YARN-752:


Attachment: YARN-752.patch

 In AMRMClient, automatically add corresponding rack requests for requested 
 nodes
 

 Key: YARN-752
 URL: https://issues.apache.org/jira/browse/YARN-752
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: api, applications
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-752.patch


 A ContainerRequest that includes node-level requests must also include 
 matching rack-level requests for the racks that those nodes are on.  When a 
 node is present without its rack, it makes sense for the client to 
 automatically add the node's rack.
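The fix-up described above can be sketched with a plain map standing in for the client's rack-resolution mechanism; `withRacks` and the topology map are hypothetical stand-ins for illustration, not the AMRMClient API:

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of the client-side fix-up: for every requested node, make sure the
// node's rack is also in the requested set. The nodeToRack map stands in for
// whatever topology resolver the client actually uses.
class RackFixup {
    static Set<String> withRacks(Set<String> nodes, Set<String> racks,
                                 Map<String, String> nodeToRack) {
        Set<String> result = new HashSet<>(racks);
        for (String node : nodes) {
            String rack = nodeToRack.get(node);
            if (rack != null) {
                result.add(rack); // no-op if the rack was already requested
            }
        }
        return result;
    }
}
```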

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-752) In AMRMClient, automatically add corresponding rack requests for requested nodes

2013-06-03 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673859#comment-13673859
 ] 

Sandy Ryza commented on YARN-752:
-

Attached a patch that resolves racks in the AMRMClient.

 In AMRMClient, automatically add corresponding rack requests for requested 
 nodes
 

 Key: YARN-752
 URL: https://issues.apache.org/jira/browse/YARN-752
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: api, applications
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-752.patch


 A ContainerRequest that includes node-level requests must also include 
 matching rack-level requests for the racks that those nodes are on.  When a 
 node is present without its rack, it makes sense for the client to 
 automatically add the node's rack.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-757) TestRMRestart failing/stuck on trunk

2013-06-03 Thread Bikas Saha (JIRA)
Bikas Saha created YARN-757:
---

 Summary: TestRMRestart failing/stuck on trunk
 Key: YARN-757
 URL: https://issues.apache.org/jira/browse/YARN-757
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Bikas Saha
Assignee: Bikas Saha




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-752) In AMRMClient, automatically add corresponding rack requests for requested nodes

2013-06-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673879#comment-13673879
 ] 

Hadoop QA commented on YARN-752:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12586005/YARN-752.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1094//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1094//console

This message is automatically generated.

 In AMRMClient, automatically add corresponding rack requests for requested 
 nodes
 

 Key: YARN-752
 URL: https://issues.apache.org/jira/browse/YARN-752
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: api, applications
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-752.patch


 A ContainerRequest that includes node-level requests must also include 
 matching rack-level requests for the racks that those nodes are on.  When a 
 node is present without its rack, it makes sense for the client to 
 automatically add the node's rack.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-748) Move BuilderUtils from yarn-common to yarn-server-common

2013-06-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673882#comment-13673882
 ] 

Hudson commented on YARN-748:
-

Integrated in Hadoop-trunk-Commit #3850 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3850/])
YARN-748. Moved BuilderUtils from yarn-common to yarn-server-common for 
eventual retirement. Contributed by Jian He.
MAPREDUCE-5297. Updated MR App since BuilderUtils is no longer public after 
YARN-748. Contributed by Jian He. (Revision 1489257)

 Result = SUCCESS
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1489257
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/local/LocalContainerAllocator.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerRequestor.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEventHandler.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MRApp.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MockJobs.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRAppMaster.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRMContainerAllocator.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRecovery.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestTaskHeartbeatHandler.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestTaskAttempt.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestTaskAttemptContainerRequest.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncherImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/local/TestLocalContainerAllocator.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/security/MRDelegationTokenRenewer.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRBuilderUtils.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapreduce/v2/api/records/TestIds.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs-plugins/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestMapReduceTrackingUriPlugin.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/HistoryClientService.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/TestJobHistoryEvents.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/TestJobHistoryParsing.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHSWebApp.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/NotRunningJob.java
* 

[jira] [Resolved] (YARN-748) Move BuilderUtils from yarn-common to yarn-server-common

2013-06-03 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli resolved YARN-748.
--

   Resolution: Fixed
Fix Version/s: 2.1.0-beta
 Hadoop Flags: Incompatible change, Reviewed

Committed this to trunk and branch-2. Thanks Jian!

Marking this as incompatible as BuilderUtils is no longer a public library.

 Move BuilderUtils from yarn-common to yarn-server-common
 

 Key: YARN-748
 URL: https://issues.apache.org/jira/browse/YARN-748
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jian He
Assignee: Jian He
 Fix For: 2.1.0-beta

 Attachments: YARN-748.1.patch, YARN-748.2.patch, YARN-748.3.patch, 
 YARN-748.4.patch, YARN-748.5.patch, YARN-748.includeMR-branch-2.patch, 
 YARN-748.includeMR.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-757) TestRMRestart failing/stuck on trunk

2013-06-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673893#comment-13673893
 ] 

Hadoop QA commented on YARN-757:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12586008/YARN-757.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1096//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1096//console

This message is automatically generated.

 TestRMRestart failing/stuck on trunk
 

 Key: YARN-757
 URL: https://issues.apache.org/jira/browse/YARN-757
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Bikas Saha
Assignee: Bikas Saha
Priority: Blocker
 Attachments: YARN-757.1.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-757) TestRMRestart failing/stuck on trunk

2013-06-03 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673896#comment-13673896
 ] 

Vinod Kumar Vavilapalli commented on YARN-757:
--

I am going to commit this to unblock builds. Can you file a separate ticket to 
fix any possible bugs in FairScheduler? Tx.

 TestRMRestart failing/stuck on trunk
 

 Key: YARN-757
 URL: https://issues.apache.org/jira/browse/YARN-757
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Bikas Saha
Assignee: Bikas Saha
Priority: Blocker
 Attachments: YARN-757.1.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-750) Allow for black-listing resources in CS

2013-06-03 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673921#comment-13673921
 ] 

Bikas Saha commented on YARN-750:
-

We need a jira for the FIFO scheduler. Thanks, Sandy, for the Fair Scheduler jira.

 Allow for black-listing resources in CS
 ---

 Key: YARN-750
 URL: https://issues.apache.org/jira/browse/YARN-750
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun C Murthy
Assignee: Arun C Murthy
 Attachments: YARN-750.patch, YARN-750.patch, YARN-750.patch, 
 YARN-750.patch


 YARN-392 and YARN-398 enhance scheduler api to allow for white-lists of 
 resources.
 This jira is a companion to allow for black-listing (in CS).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (YARN-185) Add preemption to CS

2013-06-03 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha resolved YARN-185.
-

Resolution: Duplicate

Duplicate; a patch is available on YARN-569.

 Add preemption to CS
 

 Key: YARN-185
 URL: https://issues.apache.org/jira/browse/YARN-185
 Project: Hadoop YARN
  Issue Type: New Feature
Reporter: Arun C Murthy
Assignee: Arun C Murthy

 Umbrella jira to track adding preemption to CS; let's track via sub-tasks.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-755) Rename AllocateResponse.reboot to AllocateResponse.resync

2013-06-03 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673939#comment-13673939
 ] 

Vinod Kumar Vavilapalli commented on YARN-755:
--

The patch looks good. Will wait for Jenkins before committing this.

I wish we had a shut-down command too - there are cases where it would be 
useful. Then, instead of a boolean reboot, we could have an action. I can even 
think of preemption as an action/command, but that is perhaps taking it a 
little too far...

 Rename AllocateResponse.reboot to AllocateResponse.resync
 -

 Key: YARN-755
 URL: https://issues.apache.org/jira/browse/YARN-755
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: YARN-755.1.patch, YARN-755.2.patch


 For work-preserving RM restart, the AMs will be resyncing instead of 
 rebooting. Rebooting is an action that currently satisfies the resync 
 requirement. Changing the name now so that it continues to make sense in the 
 real resync case. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-759) Create Command enum in AllocateResponse

2013-06-03 Thread Bikas Saha (JIRA)
Bikas Saha created YARN-759:
---

 Summary: Create Command enum in AllocateResponse
 Key: YARN-759
 URL: https://issues.apache.org/jira/browse/YARN-759
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha


Use command enums for shutdown/resync instead of booleans.
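The proposal can be sketched as a single enum-valued field replacing the two booleans. The names below (AMCommand, AM_RESYNC, AM_SHUTDOWN, the accessor pair) are illustrative guesses at what a patch might settle on, not the committed API:

```java
// Illustrative sketch only: the enum, value, and field names here are
// assumptions, not the committed YARN AllocateResponse API.
enum AMCommand {
    AM_RESYNC,   // RM restarted: AM should re-register instead of rebooting
    AM_SHUTDOWN  // RM asks the AM to shut down cleanly
}

class AllocateResponseSketch {
    // A single nullable command replaces separate reboot/shutdown booleans;
    // null means "no command, carry on".
    private AMCommand amCommand;

    AMCommand getAMCommand() { return amCommand; }

    void setAMCommand(AMCommand command) { amCommand = command; }
}
```

An enum also leaves room to grow (e.g. a preemption command, as floated on YARN-755) without adding yet another boolean.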

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-755) Rename AllocateResponse.reboot to AllocateResponse.resync

2013-06-03 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13674078#comment-13674078
 ] 

Bikas Saha commented on YARN-755:
-

TestNMClientAsync does not look related to this patch.

 Rename AllocateResponse.reboot to AllocateResponse.resync
 -

 Key: YARN-755
 URL: https://issues.apache.org/jira/browse/YARN-755
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: YARN-755.1.patch, YARN-755.2.patch


 For work-preserving RM restart, the AMs will be resyncing instead of 
 rebooting. Rebooting is an action that currently satisfies the resync 
 requirement. Changing the name now so that it continues to make sense in the 
 real resync case. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-755) Rename AllocateResponse.reboot to AllocateResponse.resync

2013-06-03 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13674088#comment-13674088
 ] 

Vinod Kumar Vavilapalli commented on YARN-755:
--

Yup, I just ran it and it works fine. I'll ask Zhijie, who wrote it, to have a 
look at it.

Committing this.



[jira] [Commented] (YARN-755) Rename AllocateResponse.reboot to AllocateResponse.resync

2013-06-03 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13674093#comment-13674093
 ] 

Vinod Kumar Vavilapalli commented on YARN-755:
--

Filed YARN-761 for the test issue.
