[jira] [Commented] (YARN-733) TestNMClient fails occasionally

2013-06-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13672461#comment-13672461
 ] 

Hudson commented on YARN-733:
-

Integrated in Hadoop-Yarn-trunk #228 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/228/])
YARN-733. Fixed TestNMClient from failing occasionally. Contributed by 
Zhijie Shen. (Revision 1488618)

 Result = SUCCESS
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1488618
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/NMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/TestNMClient.java


 TestNMClient fails occasionally
 ---

 Key: YARN-733
 URL: https://issues.apache.org/jira/browse/YARN-733
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Zhijie Shen
Assignee: Zhijie Shen
 Fix For: 2.1.0-beta

 Attachments: YARN-733.1.patch, YARN-733.2.patch


 The problem happens at:
 {code}
 // getContainerStatus can be called after stopContainer
 try {
   ContainerStatus status = nmClient.getContainerStatus(
       container.getId(), container.getNodeId(),
       container.getContainerToken());
   assertEquals(container.getId(), status.getContainerId());
   assertEquals(ContainerState.RUNNING, status.getState());
   assertTrue("" + i, status.getDiagnostics().contains(
       "Container killed by the ApplicationMaster."));
   assertEquals(-1000, status.getExitStatus());
 } catch (YarnRemoteException e) {
   fail("Exception is not expected");
 }
 {code}
 NMClientImpl#stopContainer returns, but the container hasn't necessarily been 
 stopped yet: ContainerManagerImpl implements stopContainer asynchronously, so 
 the container's status is still in transition. Calling 
 NMClientImpl#getContainerStatus immediately after stopContainer will therefore 
 return either the RUNNING status or the COMPLETE one.
 There is a similar problem with NMClientImpl#startContainer.
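 One way to make the test robust against the asynchronous stop is to poll for 
 the terminal state instead of asserting immediately. The sketch below is 
 illustrative only: it reuses the method names from the snippet above, the 
 10-second budget is an assumption, and it is not necessarily the committed fix.
 {code}
 // Poll getContainerStatus until the container leaves RUNNING or time runs out.
 private ContainerStatus waitForCompletion(NMClient nmClient, Container container)
     throws YarnRemoteException, InterruptedException {
   long deadline = System.currentTimeMillis() + 10000; // assumed 10s budget
   ContainerStatus status;
   do {
     status = nmClient.getContainerStatus(container.getId(),
         container.getNodeId(), container.getContainerToken());
     if (status.getState() == ContainerState.COMPLETE) {
       return status; // the asynchronous stopContainer has taken effect
     }
     Thread.sleep(100); // back off while ContainerManagerImpl processes the stop
   } while (System.currentTimeMillis() < deadline);
   return status; // still RUNNING; the caller decides whether that is a failure
 }
 {code}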

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-733) TestNMClient fails occasionally

2013-06-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13672498#comment-13672498
 ] 

Hudson commented on YARN-733:
-

Integrated in Hadoop-Hdfs-trunk #1418 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1418/])
YARN-733. Fixed TestNMClient from failing occasionally. Contributed by 
Zhijie Shen. (Revision 1488618)

 Result = FAILURE
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1488618
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/NMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/TestNMClient.java


 TestNMClient fails occasionally
 ---

 Key: YARN-733
 URL: https://issues.apache.org/jira/browse/YARN-733
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Zhijie Shen
Assignee: Zhijie Shen
 Fix For: 2.1.0-beta

 Attachments: YARN-733.1.patch, YARN-733.2.patch


 The problem happens at:
 {code}
 // getContainerStatus can be called after stopContainer
 try {
   ContainerStatus status = nmClient.getContainerStatus(
       container.getId(), container.getNodeId(),
       container.getContainerToken());
   assertEquals(container.getId(), status.getContainerId());
   assertEquals(ContainerState.RUNNING, status.getState());
   assertTrue("" + i, status.getDiagnostics().contains(
       "Container killed by the ApplicationMaster."));
   assertEquals(-1000, status.getExitStatus());
 } catch (YarnRemoteException e) {
   fail("Exception is not expected");
 }
 {code}
 NMClientImpl#stopContainer returns, but the container hasn't necessarily been 
 stopped yet: ContainerManagerImpl implements stopContainer asynchronously, so 
 the container's status is still in transition. Calling 
 NMClientImpl#getContainerStatus immediately after stopContainer will therefore 
 return either the RUNNING status or the COMPLETE one.
 There is a similar problem with NMClientImpl#startContainer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-746) rename Service.register() and Service.unregister() to registerServiceListener() & unregisterServiceListener() respectively

2013-06-02 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated YARN-746:


Attachment: YARN-746-001.patch

Patch designed to go in after the big YARN-117 commit; it updates the new 
tests.

 rename Service.register() and Service.unregister() to 
 registerServiceListener() & unregisterServiceListener() respectively
 --

 Key: YARN-746
 URL: https://issues.apache.org/jira/browse/YARN-746
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Steve Loughran
 Attachments: YARN-746-001.patch


 Make it clear what you are registering on a {{Service}} by naming the methods 
 {{registerServiceListener()}} & {{unregisterServiceListener()}} respectively.
 This only affects a couple of production classes; {{Service.register()}} is 
 also used in some of the lifecycle tests of YARN-530. There are no tests of 
 {{Service.unregister()}}, which is something that could be corrected.
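 A minimal sketch of the proposed rename, showing only the two affected methods 
 (the listener type exists in YARN, but the exact signatures here are 
 assumptions, not the committed YARN-746 patch):
 {code}
 public interface Service {
   // was: void register(ServiceStateChangeListener l);
   void registerServiceListener(ServiceStateChangeListener listener);

   // was: void unregister(ServiceStateChangeListener l);
   void unregisterServiceListener(ServiceStateChangeListener listener);
 }
 {code}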

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-530) Define Service model strictly, implement AbstractService for robust subclassing, migrate yarn-common services

2013-06-02 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated YARN-530:


Attachment: YARN-530-014.patch

Pulled the {{ServiceShutdownHook}} code off to YARN-679, which eliminates two of 
the FindBugs warnings. The remaining two are spurious, as they note that an 
AtomicBoolean is being used both as a wait/notify barrier and for the 
thread-safe get/set operations.

 Define Service model strictly, implement AbstractService for robust 
 subclassing, migrate yarn-common services
 -

 Key: YARN-530
 URL: https://issues.apache.org/jira/browse/YARN-530
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.0.4-alpha
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: YARN-117changes.pdf, YARN-530-005.patch, 
 YARN-530-008.patch, YARN-530-009.patch, YARN-530-010.patch, 
 YARN-530-011.patch, YARN-530-012.patch, YARN-530-013.patch, 
 YARN-530-014.patch, YARN-530-2.patch, YARN-530-3.patch, YARN-530.4.patch, 
 YARN-530.patch


 # Extend the YARN {{Service}} interface as discussed in YARN-117
 # Implement the changes in {{AbstractService}} and {{FilterService}}.
 # Migrate all services in yarn-common to the more robust service model, test.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-117) Enhance YARN service model

2013-06-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13672623#comment-13672623
 ] 

Hadoop QA commented on YARN-117:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12585771/YARN-117-014.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1070//console

This message is automatically generated.

 Enhance YARN service model
 --

 Key: YARN-117
 URL: https://issues.apache.org/jira/browse/YARN-117
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.0.4-alpha
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: YARN-117-007.patch, YARN-117-008.patch, 
 YARN-117-009.patch, YARN-117-010.patch, YARN-117-011.patch, 
 YARN-117-012.patch, YARN-117-013.patch, YARN-117-014.patch, YARN-117-2.patch, 
 YARN-117-3.patch, YARN-117.4.patch, YARN-117.5.patch, YARN-117.6.patch, 
 YARN-117.patch


 Having played with the YARN service model, there are some issues
 that I've identified based on past work and initial use.
 This JIRA issue is an overall one to cover the issues, with solutions pushed 
 out to separate JIRAs.
 h2. state model prevents stopped state being entered if you could not 
 successfully start the service.
 In the current lifecycle you cannot stop a service unless it was successfully 
 started, but
 * {{init()}} may acquire resources that need to be explicitly released
 * if the {{start()}} operation fails partway through, the {{stop()}} 
 operation may be needed to release resources.
 *Fix:* make {{stop()}} a valid state transition from all states and require 
 the implementations to be able to stop safely without requiring all fields to 
 be non-null.
 Before anyone points out that the {{stop()}} operations assume that all 
 fields are valid, and that if called before {{start()}} they will NPE: 
 MAPREDUCE-3431 shows that this problem arises today, and MAPREDUCE-3502 is a 
 fix for it. It is independent of the rest of the issues in this doc, but it 
 will aid making {{stop()}} execute from all states other than stopped.
 MAPREDUCE-3502 is too big a patch and needs to be broken down for easier 
 review and take up; this can be done with issues linked to this one.
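 As an illustration of that style of fix (hypothetical class and fields, not 
 the actual YARN-117 patch), a {{stop()}} that is safe to call from any state 
 only releases what was actually acquired:
 {code}
 import java.io.IOException;
 import java.net.ServerSocket;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;

 class ExampleService {
   private ExecutorService workerPool; // acquired in init()
   private ServerSocket server;        // acquired in start()

   void init() { workerPool = Executors.newFixedThreadPool(4); }

   void start() throws IOException { server = new ServerSocket(0); }

   // stop() checks each field, so it cannot NPE even if start() never completed
   void stop() throws IOException {
     if (server != null) { server.close(); server = null; }
     if (workerPool != null) { workerPool.shutdownNow(); workerPool = null; }
   }
 }
 {code}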
 h2. AbstractService doesn't prevent duplicate state change requests.
 The {{ensureState()}} checks to verify whether or not a state transition is 
 allowed from the current state are performed in the base {{AbstractService}} 
 class -yet subclasses tend to call this *after* their own {{init()}}, 
 {{start()}} & {{stop()}} operations. This means that these operations can be 
 performed out of order, and even if the outcome of the call is an exception, 
 all actions performed by the subclasses will have taken place. MAPREDUCE-3877 
 demonstrates this.
 This is a tricky one to address. In HADOOP-3128 I used a base class instead 
 of an interface and made the {{init()}}, {{start()}} & {{stop()}} methods 
 {{final}}. These methods would do the checks, and then invoke protected inner 
 methods, {{innerStart()}}, {{innerStop()}}, etc. It should be possible to 
 retrofit the same behaviour to everything that extends {{AbstractService}} 
 -something that must be done before the class is considered stable (because 
 once the lifecycle methods are declared final, all subclasses that are out of 
 the source tree will need fixing by the respective developers).
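 A sketch of that template pattern (names such as {{innerStart()}} follow the 
 text above, but the state names and signatures are assumptions, not the actual 
 {{AbstractService}} API):
 {code}
 abstract class CheckedService {
   enum State { NOTINITED, INITED, STARTED, STOPPED }
   private State state = State.NOTINITED;

   public final synchronized void start() {
     if (state != State.INITED) {
       throw new IllegalStateException("cannot start from " + state);
     }
     innerStart();          // subclass work runs only after the check passes
     state = State.STARTED;
   }

   public final synchronized void stop() {
     if (state == State.STOPPED) {
       return;              // a duplicate stop() becomes a no-op
     }
     innerStop();
     state = State.STOPPED;
   }

   protected abstract void innerStart();
   protected abstract void innerStop();
 }
 {code}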
 h2. AbstractService state change doesn't defend against race conditions.
 There are no concurrency locks on the state transitions. Whatever fix is added 
 for wrong-state calls should also prevent re-entrancy, such as {{stop()}} 
 being called from two threads.
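 One way to guard the transition against such races (illustrative only, not the 
 committed implementation) is to publish the state in an {{AtomicReference}} so 
 that only the thread which wins the compare-and-set executes the transition body:
 {code}
 import java.util.concurrent.atomic.AtomicReference;

 class AtomicLifecycle {
   enum State { STARTED, STOPPED }
   private final AtomicReference<State> state = new AtomicReference<>(State.STARTED);

   void stop() {
     if (!state.compareAndSet(State.STARTED, State.STOPPED)) {
       return; // another thread already performed (or is performing) the stop
     }
     // ... release resources exactly once ...
   }
 }
 {code}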
 h2. Static methods to choreograph lifecycle operations
 Helper methods to move things through lifecycles: init-start is common, 
 stop-if-service!=null is another. Some static methods can execute these, and 
 even call {{stop()}} if {{init()}} raises an exception. These could go into a 
 class {{ServiceOps}} in the same package; they can be used by services that 
 wrap other services, and help manage more robust shutdowns.
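 A hypothetical {{ServiceOps}} of the kind suggested above (the class does not 
 exist; its method names and the import locations are assumptions):
 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.yarn.service.Service; // assumed package location

 final class ServiceOps {
   private ServiceOps() {}

   static void initAndStart(Service service, Configuration conf) {
     try {
       service.init(conf);
       service.start();
     } catch (RuntimeException e) {
       stopQuietly(service); // roll back whatever init() managed to acquire
       throw e;
     }
   }

   static void stopQuietly(Service service) {
     if (service == null) {
       return;
     }
     try {
       service.stop();
     } catch (RuntimeException ignored) {
       // best-effort shutdown: let the original failure stay the primary error
     }
   }
 }
 {code}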
 h2. state transition failures are something that registered service listeners 
 may wish to be informed of.
 When a state transition fails a {{RuntimeException}} can be thrown -and the 
 service listeners are not informed as the notification point isn't reached. 
 They may wish to know this, especially for management and diagnostics.
 *Fix:* extend {{ServiceStateChangeListener}} with a callback such as 
 {{stateChangeFailed(Service service,Service.State targeted-state, 
 RuntimeException e)}} that is invoked from the (final) state change 

[jira] [Commented] (YARN-530) Define Service model strictly, implement AbstractService for robust subclassing, migrate yarn-common services

2013-06-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13672626#comment-13672626
 ] 

Hadoop QA commented on YARN-530:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12585770/YARN-530-014.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1069//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/1069//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-yarn-common.html
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1069//console

This message is automatically generated.

 Define Service model strictly, implement AbstractService for robust 
 subclassing, migrate yarn-common services
 -

 Key: YARN-530
 URL: https://issues.apache.org/jira/browse/YARN-530
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.0.4-alpha
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: YARN-117changes.pdf, YARN-530-005.patch, 
 YARN-530-008.patch, YARN-530-009.patch, YARN-530-010.patch, 
 YARN-530-011.patch, YARN-530-012.patch, YARN-530-013.patch, 
 YARN-530-014.patch, YARN-530-2.patch, YARN-530-3.patch, YARN-530.4.patch, 
 YARN-530.patch


 # Extend the YARN {{Service}} interface as discussed in YARN-117
 # Implement the changes in {{AbstractService}} and {{FilterService}}.
 # Migrate all services in yarn-common to the more robust service model, test.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-678) Delete FilterService

2013-06-02 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated YARN-678:


Attachment: YARN-678-001.patch

Deletes FilterService. This patch is built on the YARN-746 rename-registration 
patch, so it may only apply there. It's trivial to recreate for any other branch.

 Delete FilterService
 

 Key: YARN-678
 URL: https://issues.apache.org/jira/browse/YARN-678
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api
Affects Versions: 2.0.4-alpha
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor
 Attachments: YARN-678-001.patch


 The {{FilterService}} never gets used -remove it

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (YARN-746) rename Service.register() and Service.unregister() to registerServiceListener() & unregisterServiceListener() respectively

2013-06-02 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned YARN-746:
---

Assignee: Steve Loughran

 rename Service.register() and Service.unregister() to 
 registerServiceListener() & unregisterServiceListener() respectively
 --

 Key: YARN-746
 URL: https://issues.apache.org/jira/browse/YARN-746
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: YARN-746-001.patch


 Make it clear what you are registering on a {{Service}} by naming the methods 
 {{registerServiceListener()}} & {{unregisterServiceListener()}} respectively.
 This only affects a couple of production classes; {{Service.register()}} is 
 also used in some of the lifecycle tests of YARN-530. There are no tests of 
 {{Service.unregister()}}, which is something that could be corrected.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-749) Rename ResourceRequest (get,set)HostName to (get,set)ResourceName

2013-06-02 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13672636#comment-13672636
 ] 

Sandy Ryza commented on YARN-749:
-

+1, have thought this would be a good change for a while.

Will the field ever include anything that's not host, rack, or *?  If not, 
something like setLocation or setLocationName might be more descriptive?

 Rename ResourceRequest (get,set)HostName to (get,set)ResourceName
 -

 Key: YARN-749
 URL: https://issues.apache.org/jira/browse/YARN-749
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun C Murthy
Assignee: Arun C Murthy
  Labels: api-change

 We should rename ResourceRequest (get,set)HostName to (get,set)ResourceName 
 since the name can be host, rack or *.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-749) Rename ResourceRequest (get,set)HostName to (get,set)ResourceName

2013-06-02 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated YARN-749:
---

Attachment: YARN-749.patch

Straight-fwd patch, re-factored with eclipse.

 Rename ResourceRequest (get,set)HostName to (get,set)ResourceName
 -

 Key: YARN-749
 URL: https://issues.apache.org/jira/browse/YARN-749
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun C Murthy
Assignee: Arun C Murthy
  Labels: api-change
 Attachments: YARN-749.patch


 We should rename ResourceRequest (get,set)HostName to (get,set)ResourceName 
 since the name can be host, rack or *.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-749) Rename ResourceRequest (get,set)HostName to (get,set)ResourceName

2013-06-02 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13672649#comment-13672649
 ] 

Arun C Murthy commented on YARN-749:


[~sandyr] 'resource name' seems more likely to survive than 'location name'? We 
are splitting hairs... :)

 Rename ResourceRequest (get,set)HostName to (get,set)ResourceName
 -

 Key: YARN-749
 URL: https://issues.apache.org/jira/browse/YARN-749
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun C Murthy
Assignee: Arun C Murthy
  Labels: api-change
 Attachments: YARN-749.patch


 We should rename ResourceRequest (get,set)HostName to (get,set)ResourceName 
 since the name can be host, rack or *.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-749) Rename ResourceRequest (get,set)HostName to (get,set)ResourceName

2013-06-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13672655#comment-13672655
 ] 

Hadoop QA commented on YARN-749:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12585775/YARN-749.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1071//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1071//console

This message is automatically generated.

 Rename ResourceRequest (get,set)HostName to (get,set)ResourceName
 -

 Key: YARN-749
 URL: https://issues.apache.org/jira/browse/YARN-749
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun C Murthy
Assignee: Arun C Murthy
  Labels: api-change
 Attachments: YARN-749.patch


 We should rename ResourceRequest (get,set)HostName to (get,set)ResourceName 
 since the name can be host, rack or *.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-749) Rename ResourceRequest (get,set)HostName to (get,set)ResourceName

2013-06-02 Thread Hitesh Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13672679#comment-13672679
 ] 

Hitesh Shah commented on YARN-749:
--

Looks good to me. +1.

 Rename ResourceRequest (get,set)HostName to (get,set)ResourceName
 -

 Key: YARN-749
 URL: https://issues.apache.org/jira/browse/YARN-749
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun C Murthy
Assignee: Arun C Murthy
  Labels: api-change
 Attachments: YARN-749.patch


 We should rename ResourceRequest (get,set)HostName to (get,set)ResourceName 
 since the name can be host, rack or *.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-749) Rename ResourceRequest (get,set)HostName to (get,set)ResourceName

2013-06-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13672697#comment-13672697
 ] 

Hadoop QA commented on YARN-749:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12585783/YARN-749.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1072//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1072//console

This message is automatically generated.

 Rename ResourceRequest (get,set)HostName to (get,set)ResourceName
 -

 Key: YARN-749
 URL: https://issues.apache.org/jira/browse/YARN-749
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.0.4-alpha
Reporter: Arun C Murthy
Assignee: Arun C Murthy
  Labels: api-change
 Attachments: YARN-749.patch, YARN-749.patch


 We should rename ResourceRequest (get,set)HostName to (get,set)ResourceName 
 since the name can be host, rack or *.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-749) Rename ResourceRequest (get,set)HostName to (get,set)ResourceName

2013-06-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13672710#comment-13672710
 ] 

Hudson commented on YARN-749:
-

Integrated in Hadoop-trunk-Commit #3833 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3833/])
YARN-749. Rename ResourceRequest.(get,set)HostName to 
ResourceRequest.(get,set)ResourceName. Contributed by Arun C. Murthy. (Revision 
1488806)

 Result = SUCCESS
acmurthy : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1488806
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerRequestor.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MRAppBenchmark.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRMContainerAllocator.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourceRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/BuilderUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockAM.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/TestFifoScheduler.java


 Rename ResourceRequest (get,set)HostName to (get,set)ResourceName
 -

 Key: YARN-749
 URL: https://issues.apache.org/jira/browse/YARN-749
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.0.4-alpha
Reporter: Arun C Murthy
Assignee: Arun C Murthy
  Labels: api-change
 Fix For: 2.1.0-beta

 Attachments: YARN-749.patch, YARN-749.patch


 We should rename ResourceRequest (get,set)HostName to (get,set)ResourceName 
 since the name can be host, rack or *.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-398) Enhance CS to allow for white-list of resources

2013-06-02 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated YARN-398:
---

Summary: Enhance CS to allow for white-list of resources  (was: Allow 
white-list and black-list of resources)

 Enhance CS to allow for white-list of resources
 ---

 Key: YARN-398
 URL: https://issues.apache.org/jira/browse/YARN-398
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun C Murthy
Assignee: Arun C Murthy
 Attachments: YARN-398.patch, YARN-398.patch


 Allow white-list and black-list of resources in scheduler api.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-398) Enhance CS to allow for white-list of resources

2013-06-02 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13672743#comment-13672743
 ] 

Arun C Murthy commented on YARN-398:


I'm changing the scope of this jira to restrict it to white-listing for CS. I'll 
open a separate one for black-listing.

 Enhance CS to allow for white-list of resources
 ---

 Key: YARN-398
 URL: https://issues.apache.org/jira/browse/YARN-398
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun C Murthy
Assignee: Arun C Murthy
 Attachments: YARN-398.patch, YARN-398.patch


 Allow white-list and black-list of resources in scheduler api.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-750) Allow for black-listing resources in CS

2013-06-02 Thread Arun C Murthy (JIRA)
Arun C Murthy created YARN-750:
--

 Summary: Allow for black-listing resources in CS
 Key: YARN-750
 URL: https://issues.apache.org/jira/browse/YARN-750
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun C Murthy
Assignee: Arun C Murthy


YARN-392 and YARN-398 enhance the scheduler api to allow for white-lists of 
resources.

This jira is a companion to allow for black-listing (in CS).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-398) Enhance CS to allow for white-list of resources

2013-06-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13672746#comment-13672746
 ] 

Hadoop QA commented on YARN-398:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12585791/YARN-398.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1074//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1074//console

This message is automatically generated.

 Enhance CS to allow for white-list of resources
 ---

 Key: YARN-398
 URL: https://issues.apache.org/jira/browse/YARN-398
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun C Murthy
Assignee: Arun C Murthy
 Attachments: YARN-398.patch, YARN-398.patch


 Allow white-list and black-list of resources in scheduler api.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Moved] (YARN-751) CLONE - CapacityScheduler incorrectly utilizes extra-resources of queue for high-memory jobs

2013-06-02 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy moved MAPREDUCE-5290 to YARN-751:
---

  Component/s: (was: capacity-sched)
   capacityscheduler
Fix Version/s: (was: 2.1.0-beta)
   2.1.0-beta
Affects Version/s: (was: 2.0.4-alpha)
   2.0.4-alpha
 Release Note:   (was: Fixed wrong CapacityScheduler resource 
allocation for high memory consumption jobs)
  Key: YARN-751  (was: MAPREDUCE-5290)
  Project: Hadoop YARN  (was: Hadoop Map/Reduce)

 CLONE - CapacityScheduler incorrectly utilizes extra-resources of queue for 
 high-memory jobs
 

 Key: YARN-751
 URL: https://issues.apache.org/jira/browse/YARN-751
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacityscheduler
Affects Versions: 2.0.4-alpha
Reporter: Sergey Tryuber
Assignee: Arun C Murthy
 Fix For: 2.1.0-beta


 Imagine we have a queue A with a capacity of 10 slots and 20 as extra-capacity; 
 jobs which use 3 map slots will never consume more than 9 slots, regardless of 
 how many free slots there are on the cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-696) Enable multiple states to be specified in Resource Manager apps REST call

2013-06-02 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13672750#comment-13672750
 ] 

Konstantin Boudnik commented on YARN-696:
-

+1 patch looks good. If there's no objection from more experienced YARN 
developers - I will commit on Monday.

 Enable multiple states to be specified in Resource Manager apps REST call
 

 Key: YARN-696
 URL: https://issues.apache.org/jira/browse/YARN-696
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 2.0.4-alpha
Reporter: Trevor Lorimer
Priority: Trivial
 Attachments: 0001-YARN-696.patch


 Within the YARN Resource Manager REST API, the GET call which returns all 
 Applications can be filtered by a single State query parameter 
 (http://<rm http address:port>/ws/v1/cluster/apps). 
 There are 8 possible states (New, Submitted, Accepted, Running, Finishing, 
 Finished, Failed, Killed). If no state parameter is specified, all states are 
 returned; however, if a sub-set of states is required then multiple REST calls 
 are required (max. of 7).
 The proposal is to be able to specify multiple states in a single REST call.
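 A sketch of the kind of request the proposal would enable: one GET with a 
 comma-separated list of states instead of up to seven separate calls. The 
 {{states}} parameter name and value format are assumed from the proposal, not 
 an existing API.
 {code}
 import java.io.IOException;
 import java.io.InputStream;
 import java.net.HttpURLConnection;
 import java.net.URL;
 import java.util.Scanner;

 class RmAppsQuery {
   static String fetchApps(String rmHttpAddress) throws IOException {
     // e.g. rmHttpAddress = "rmhost:8088"
     URL url = new URL("http://" + rmHttpAddress
         + "/ws/v1/cluster/apps?states=RUNNING,ACCEPTED,SUBMITTED");
     HttpURLConnection conn = (HttpURLConnection) url.openConnection();
     try (InputStream in = conn.getInputStream();
          Scanner scanner = new Scanner(in, "UTF-8").useDelimiter("\\A")) {
       return scanner.hasNext() ? scanner.next() : ""; // JSON list of matching apps
     } finally {
       conn.disconnect();
     }
   }
 }
 {code}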

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-696) Enable multiple states to be specified in Resource Manager apps REST call

2013-06-02 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated YARN-696:


Target Version/s: 3.0.0, 2.1.0-beta

 Enable multiple states to be specified in Resource Manager apps REST call
 

 Key: YARN-696
 URL: https://issues.apache.org/jira/browse/YARN-696
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 2.0.4-alpha
Reporter: Trevor Lorimer
Priority: Trivial
 Attachments: 0001-YARN-696.patch


 Within the YARN Resource Manager REST API, the GET call which returns all 
 Applications can be filtered by a single State query parameter 
 (http://<rm http address:port>/ws/v1/cluster/apps). 
 There are 8 possible states (New, Submitted, Accepted, Running, Finishing, 
 Finished, Failed, Killed). If no state parameter is specified, all states are 
 returned; however, if a sub-set of states is required then multiple REST calls 
 are required (max. of 7).
 The proposal is to be able to specify multiple states in a single REST call.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-720) container-log4j.properties should not refer to mapreduce properties

2013-06-02 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13672761#comment-13672761
 ] 

Siddharth Seth commented on YARN-720:
-

+1. Looks good. Committing this. Thanks Zhijie.

 container-log4j.properties should not refer to mapreduce properties
 ---

 Key: YARN-720
 URL: https://issues.apache.org/jira/browse/YARN-720
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Siddharth Seth
Assignee: Zhijie Shen
 Attachments: YARN-720.1.patch


 This refers to yarn.app.mapreduce.container.log.dir and 
 yarn.app.mapreduce.container.log.filesize. These should either be moved into 
 the MR codebase, or the parameters should be renamed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-720) container-log4j.properties should not refer to mapreduce properties

2013-06-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13672769#comment-13672769
 ] 

Hudson commented on YARN-720:
-

Integrated in Hadoop-trunk-Commit #3834 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3834/])
YARN-720 and MAPREDUCE-5291. container-log4j.properties should not refer to 
mapreduce properties. Update MRApp to use YARN properties for log setup. 
Contributed by Zhijie Shen. (Revision 1488829)

 Result = SUCCESS
sseth : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1488829
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestMapReduceChildJVM.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRApps.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TaskLog.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestTaskLog.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestStreamingTaskLog.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/resources/container-log4j.properties


 container-log4j.properties should not refer to mapreduce properties
 ---

 Key: YARN-720
 URL: https://issues.apache.org/jira/browse/YARN-720
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Siddharth Seth
Assignee: Zhijie Shen
 Fix For: 2.1.0-beta

 Attachments: YARN-720.1.patch


 This refers to yarn.app.mapreduce.container.log.dir and 
 yarn.app.mapreduce.container.log.filesize. These should either be moved into 
 the MR codebase, or the parameters should be renamed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-635) Rename YarnRemoteException to YarnException

2013-06-02 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated YARN-635:


Attachment: YARN-635.2.txt

Rebased patch.

bq. Doubtful on YarnRemoteException renaming. YarnRemoteException helps me 
understand that the exception was generated on the remote server and is useful 
as a piece of debug information.
Wondering if there's a better way to convey this information - maybe via the 
message string. If any of the client libraries start doing more than they do 
right now - they could end up generating the same exception as the remote 
server, in which case a single exception helps. A slightly contrived example - 
validating the resource request sent by the user in the AMRMClient.

 Rename YarnRemoteException to YarnException
 ---

 Key: YARN-635
 URL: https://issues.apache.org/jira/browse/YARN-635
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Xuan Gong
Assignee: Xuan Gong
 Attachments: YARN-635.2.txt, YARN-635.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (YARN-751) CLONE - CapacityScheduler incorrectly utilizes extra-resources of queue for high-memory jobs

2013-06-02 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy resolved YARN-751.


Resolution: Invalid

I verified that this doesn't happen in YARN. The user-limit computation is 
significantly different and hence a non-issue.

 CLONE - CapacityScheduler incorrectly utilizes extra-resources of queue for 
 high-memory jobs
 

 Key: YARN-751
 URL: https://issues.apache.org/jira/browse/YARN-751
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacityscheduler
Affects Versions: 2.0.4-alpha
Reporter: Sergey Tryuber
Assignee: Arun C Murthy
 Fix For: 2.1.0-beta


 Imagine we have a queue A with a capacity of 10 slots and 20 as extra-capacity; 
 jobs which use 3 map slots will never consume more than 9 slots, regardless of 
 how many free slots there are on the cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (YARN-747) Improve CapacityScheduler to support allocation to specific locations

2013-06-02 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy resolved YARN-747.


Resolution: Duplicate

Duplicate of YARN-398 since I moved black-list out of there...

 Improve CapacityScheduler to support allocation to specific locations
 -

 Key: YARN-747
 URL: https://issues.apache.org/jira/browse/YARN-747
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 2.1.0-beta
Reporter: Bikas Saha
Assignee: Zhijie Shen

 YARN-392 added a relaxLocality flag to ResourceRequest that can be used to 
 enable or disable locality relaxation at different network hierarchy levels. 
 Using this, the scheduler can be enhanced to support allocation to specific 
 locations.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-635) Rename YarnRemoteException to YarnException

2013-06-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13672809#comment-13672809
 ] 

Hadoop QA commented on YARN-635:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12585795/YARN-635.2.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 69 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1075//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1075//console

This message is automatically generated.

 Rename YarnRemoteException to YarnException
 ---

 Key: YARN-635
 URL: https://issues.apache.org/jira/browse/YARN-635
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Xuan Gong
Assignee: Xuan Gong
 Attachments: YARN-635.2.txt, YARN-635.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-752) Throw exception if AMRMClient.ContainerRequest is given invalid locations

2013-06-02 Thread Sandy Ryza (JIRA)
Sandy Ryza created YARN-752:
---

 Summary: Throw exception if AMRMClient.ContainerRequest is given 
invalid locations
 Key: YARN-752
 URL: https://issues.apache.org/jira/browse/YARN-752
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: api, applications
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza


A ContainerRequest that includes node-level requests must also include matching 
rack-level requests for the racks that those nodes are on.  At the very least, 
an exception should be thrown if one is constructed with a non-empty set of 
nodes but an empty set of racks.

If possible, it would also be nice to validate that the given nodes are on the 
racks that are given.  Although if that is possible, then it might be even 
better to just automatically fill in racks for nodes.
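A sketch of the minimal check described above (the helper and its signature are 
hypothetical, not the actual AMRMClient code):
{code}
import java.util.List;

class ContainerRequestChecks {
  static void checkLocality(List<String> nodes, List<String> racks) {
    boolean hasNodes = nodes != null && !nodes.isEmpty();
    boolean hasRacks = racks != null && !racks.isEmpty();
    if (hasNodes && !hasRacks) {
      throw new IllegalArgumentException(
          "Node-level requests must be accompanied by matching rack-level requests");
    }
  }
}
{code}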

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira