[jira] [Commented] (YARN-18) Make locality in YARN's container assignment and task scheduling pluggable for other deployment topology

2012-11-18 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-18?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500044#comment-13500044
 ] 

Junping Du commented on YARN-18:


Looks like the pre-commit test 
(https://builds.apache.org/job/PreCommit-YARN-Build/) is not automatically 
triggered. Is there anything I should do besides setting this issue to Patch 
Available?

> Make locality in YARN's container assignment and task scheduling pluggable 
> for other deployment topology
> -
>
> Key: YARN-18
> URL: https://issues.apache.org/jira/browse/YARN-18
> Project: Hadoop YARN
>  Issue Type: New Feature
>Affects Versions: 2.0.3-alpha
>Reporter: Junping Du
>Assignee: Junping Du
>  Labels: features
> Attachments: 
> HADOOP-8474-ContainerAssignmentTaskScheduling-pluggable.patch, 
> MAPREDUCE-4309.patch, MAPREDUCE-4309-v2.patch, MAPREDUCE-4309-v3.patch, 
> MAPREDUCE-4309-v4.patch, MAPREDUCE-4309-v5.patch, MAPREDUCE-4309-v6.patch, 
> MAPREDUCE-4309-v7.patch, YARN-18.patch, YARN-18-v2.patch
>
>
> There are several classes in YARN’s container assignment and task scheduling 
> algorithms that relate to data locality which were updated to give preference 
> to running a container on other localities besides node-local and rack-local 
> (like nodegroup-local). This proposes to make these data structures/algorithms 
> pluggable, e.g. SchedulerNode, RMNodeImpl, etc. The inner class 
> ScheduledRequests was made a package-level class so it would be easier to 
> create a subclass, ScheduledRequestsWithNodeGroup.
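The pluggable-locality idea described above can be sketched as follows. This is an illustrative mock-up, not the actual YARN classes: the class and function names are made up here, and only the subclassing pattern mirrors how ScheduledRequestsWithNodeGroup would extend ScheduledRequests.

```python
# Illustrative sketch of pluggable locality: the scheduler consults a
# policy object for its locality hierarchy instead of hard-coding the
# node-local / rack-local levels.

class LocalityPolicy:
    """Default two-layer topology plus the off-switch fallback."""
    def levels(self):
        return ["node-local", "rack-local", "off-switch"]

class NodeGroupLocalityPolicy(LocalityPolicy):
    """A subclass inserts the extra nodegroup layer, mirroring how a
    ScheduledRequests subclass would add nodegroup awareness."""
    def levels(self):
        return ["node-local", "nodegroup-local", "rack-local", "off-switch"]

def first_assignable(policy, containers_by_level):
    """Return the best (most local) level that has a container available."""
    for level in policy.levels():
        if containers_by_level.get(level):
            return level
    return None
```

A deployment with a nodegroup layer would plug in `NodeGroupLocalityPolicy` and the scheduler logic itself stays unchanged.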

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-19) 4-layer topology (with NodeGroup layer) implementation of Container Assignment and Task Scheduling (for YARN)

2012-11-18 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-19?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-19:
---

Attachment: YARN-19.patch

Updated the patch to the recent code base. Note: this patch should be checked in 
after YARN-18.

> 4-layer topology (with NodeGroup layer) implementation of Container 
> Assignment and Task Scheduling (for YARN)
> -
>
> Key: YARN-19
> URL: https://issues.apache.org/jira/browse/YARN-19
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: 
> HADOOP-8475-ContainerAssignmentTaskScheduling-withNodeGroup.patch, 
> MAPREDUCE-4310.patch, MAPREDUCE-4310-v1.patch, YARN-19.patch
>
>
> There are several classes in YARN’s container assignment and task scheduling 
> algorithms that relate to data locality which were updated to give 
> preference to running a container on the same nodegroup. This section 
> summarizes the changes in the patch that provides a new implementation to 
> support a four-layer hierarchy.
> When the ApplicationMaster makes a resource allocation request to the 
> scheduler of the ResourceManager, it will add the nodegroup to the list of 
> attributes in the ResourceRequest, so the parameters of the resource request 
> gain a nodegroup attribute alongside the existing host and rack names.
> After receiving the ResourceRequest, the RM scheduler will assign containers 
> for requests in the sequence of data-local, nodegroup-local, rack-local and 
> off-switch. Then, the ApplicationMaster schedules tasks on allocated containers 
> in the sequence of data-local, nodegroup-local, rack-local and off-switch.
> In terms of code changes made to YARN task scheduling, we updated the class 
> ContainerRequestEvent so that applications' requests for containers can 
> include a nodegroup. In the RM schedulers, FifoScheduler and CapacityScheduler 
> were updated. For the FifoScheduler, the changes were in the method 
> assignContainers. For the CapacityScheduler, the method 
> assignContainersOnNode in the class LeafQueue was updated. In both cases 
> a new method, assignNodeGroupLocalContainers(), was added between the 
> assignment of data-local and rack-local containers.
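The four-layer assignment sequence described above can be sketched like this. The function below is made up for illustration; the real changes live in FifoScheduler.assignContainers and LeafQueue.assignContainersOnNode in Hadoop's Java code.

```python
# Hedged sketch of the four-layer assignment order; not the actual
# scheduler code, just the ordering the description lays out.

ASSIGNMENT_ORDER = ["data-local", "nodegroup-local", "rack-local", "off-switch"]

def assign_containers(pending):
    """Drain pending requests level by level, most local first.

    `pending` maps a locality level to the number of requested containers.
    Returns the levels actually used, one entry per assigned container.
    """
    assigned = []
    for level in ASSIGNMENT_ORDER:
        for _ in range(pending.get(level, 0)):
            assigned.append(level)
    return assigned
```

The hypothetical assignNodeGroupLocalContainers() step corresponds to the "nodegroup-local" pass sitting between the data-local and rack-local passes.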



[jira] [Updated] (YARN-18) Make locality in YARN's container assignment and task scheduling pluggable for other deployment topology

2012-11-18 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-18?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-18:
---

Attachment: YARN-18-v2.patch

Fixed minor issues from the rebase work in the YARN-18-v2 patch.

> Make locality in YARN's container assignment and task scheduling pluggable 
> for other deployment topology
> -
>
> Key: YARN-18
> URL: https://issues.apache.org/jira/browse/YARN-18
> Project: Hadoop YARN
>  Issue Type: New Feature
>Affects Versions: 2.0.3-alpha
>Reporter: Junping Du
>Assignee: Junping Du
>  Labels: features
> Attachments: 
> HADOOP-8474-ContainerAssignmentTaskScheduling-pluggable.patch, 
> MAPREDUCE-4309.patch, MAPREDUCE-4309-v2.patch, MAPREDUCE-4309-v3.patch, 
> MAPREDUCE-4309-v4.patch, MAPREDUCE-4309-v5.patch, MAPREDUCE-4309-v6.patch, 
> MAPREDUCE-4309-v7.patch, YARN-18.patch, YARN-18-v2.patch
>
>
> There are several classes in YARN’s container assignment and task scheduling 
> algorithms that relate to data locality which were updated to give preference 
> to running a container on other localities besides node-local and rack-local 
> (like nodegroup-local). This proposes to make these data structures/algorithms 
> pluggable, e.g. SchedulerNode, RMNodeImpl, etc. The inner class 
> ScheduledRequests was made a package-level class so it would be easier to 
> create a subclass, ScheduledRequestsWithNodeGroup.



[jira] [Updated] (YARN-18) Make locality in YARN's container assignment and task scheduling pluggable for other deployment topology

2012-11-18 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-18?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-18:
---

Attachment: YARN-18.patch

Rebased to the recent YARN code base.

> Make locality in YARN's container assignment and task scheduling pluggable 
> for other deployment topology
> -
>
> Key: YARN-18
> URL: https://issues.apache.org/jira/browse/YARN-18
> Project: Hadoop YARN
>  Issue Type: New Feature
>Affects Versions: 2.0.3-alpha
>Reporter: Junping Du
>Assignee: Junping Du
>  Labels: features
> Attachments: 
> HADOOP-8474-ContainerAssignmentTaskScheduling-pluggable.patch, 
> MAPREDUCE-4309.patch, MAPREDUCE-4309-v2.patch, MAPREDUCE-4309-v3.patch, 
> MAPREDUCE-4309-v4.patch, MAPREDUCE-4309-v5.patch, MAPREDUCE-4309-v6.patch, 
> MAPREDUCE-4309-v7.patch, YARN-18.patch
>
>
> There are several classes in YARN’s container assignment and task scheduling 
> algorithms that relate to data locality which were updated to give preference 
> to running a container on other localities besides node-local and rack-local 
> (like nodegroup-local). This proposes to make these data structures/algorithms 
> pluggable, e.g. SchedulerNode, RMNodeImpl, etc. The inner class 
> ScheduledRequests was made a package-level class so it would be easier to 
> create a subclass, ScheduledRequestsWithNodeGroup.



[jira] [Updated] (YARN-223) Change processTree interface to work better with native code

2012-11-18 Thread Radim Kolar (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radim Kolar updated YARN-223:
-

Attachment: pstree-update.txt

> Change processTree interface to work better with native code
> 
>
> Key: YARN-223
> URL: https://issues.apache.org/jira/browse/YARN-223
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Radim Kolar
>Priority: Critical
> Attachments: pstree-update.txt
>
>
> The problem is that every update of processTree requires a new object. This 
> is undesirable when working with a processTree implementation in native code.
> Replace ProcessTree.getProcessTree() with updateProcessTree(). No new object 
> allocation is needed, and it simplifies the application code a bit.
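The interface change can be illustrated with a small sketch. The real ProcessTree lives in Hadoop's Java code; this mock-up only shows the allocation difference, and the method bodies here are stand-ins.

```python
# Toy model of the proposed change: update_process_tree() refreshes the
# existing object in place, so a native-code implementation can keep its
# state instead of allocating a fresh tree on every refresh.

class ProcessTree:
    def __init__(self, root_pid):
        self.root_pid = root_pid
        self.pids = []

    def _scan(self):
        # Stand-in for walking the OS process table under root_pid.
        return [self.root_pid]

    # Old style: every refresh returns a brand-new tree object.
    def get_process_tree(self):
        fresh = ProcessTree(self.root_pid)
        fresh.pids = fresh._scan()
        return fresh

    # Proposed style: refresh this object in place; no new allocation.
    def update_process_tree(self):
        self.pids = self._scan()
        return self
```

With the in-place style, the caller holds one object for the lifetime of the monitored process, which is what a native implementation wants.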



[jira] [Created] (YARN-223) Change processTree interface to work better with native code

2012-11-18 Thread Radim Kolar (JIRA)
Radim Kolar created YARN-223:


 Summary: Change processTree interface to work better with native 
code
 Key: YARN-223
 URL: https://issues.apache.org/jira/browse/YARN-223
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Radim Kolar
Priority: Critical


The problem is that every update of processTree requires a new object. This is 
undesirable when working with a processTree implementation in native code.

Replace ProcessTree.getProcessTree() with updateProcessTree(). No new object 
allocation is needed, and it simplifies the application code a bit.



[jira] [Updated] (YARN-211) Allow definition of max-active-applications per queue

2012-11-18 Thread Radim Kolar (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radim Kolar updated YARN-211:
-

Attachment: max-running.txt

Max active applications was renamed to max running applications. This makes its 
meaning clearer, because the documentation uses "maximum active applications" 
for the number of running plus queued applications.
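The distinction can be sketched with a toy admission check. The field names and structure below are hypothetical, not the actual CapacityScheduler configuration or code.

```python
# Toy queue admission check: the renamed setting caps *running*
# applications only, whereas "active" in the documentation's sense
# covers running plus queued applications.

def can_start_application(queue):
    """True if another application may move from queued to running."""
    return len(queue["running"]) < queue["max_running_applications"]

def active_count(queue):
    """'Active' in the documentation's sense: running + queued."""
    return len(queue["running"]) + len(queue["queued"])
```

Calling the cap "max running" avoids the ambiguity: a queue can have many active applications while only a bounded number are actually running.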

> Allow definition of max-active-applications per queue
> -
>
> Key: YARN-211
> URL: https://issues.apache.org/jira/browse/YARN-211
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Radim Kolar
>Assignee: Radim Kolar
> Attachments: capacity-maxactive.txt, max-running.txt
>
>
> In some cases, the automatic max-active limit is not enough, especially if you 
> need fewer active tasks in a given queue.



[jira] [Commented] (YARN-184) Remove unnecessary locking in fair scheduler, and address findbugs excludes.

2012-11-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13499800#comment-13499800
 ] 

Hudson commented on YARN-184:
-

Integrated in Hadoop-Mapreduce-trunk #1261 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1261/])
YARN-184. Remove unnecessary locking in fair scheduler, and address 
findbugs excludes. (sandyr via tucu) (Revision 1410826)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1410826
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/QueueManager.java


> Remove unnecessary locking in fair scheduler, and address findbugs excludes.
> 
>
> Key: YARN-184
> URL: https://issues.apache.org/jira/browse/YARN-184
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.0.2-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
> Fix For: 2.0.3-alpha
>
> Attachments: YARN-184-1.patch, YARN-184-2.patch, YARN-184-3.patch, 
> YARN-184-3.patch, YARN-184.patch
>
>
> In YARN-12, locks were added to all fields of QueueManager to address 
> findbugs.  In addition, findbugs exclusions were added in response to 
> MAPREDUCE-4439, without a deep look at the code.



[jira] [Commented] (YARN-184) Remove unnecessary locking in fair scheduler, and address findbugs excludes.

2012-11-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13499792#comment-13499792
 ] 

Hudson commented on YARN-184:
-

Integrated in Hadoop-Hdfs-trunk #1230 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1230/])
YARN-184. Remove unnecessary locking in fair scheduler, and address 
findbugs excludes. (sandyr via tucu) (Revision 1410826)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1410826
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/QueueManager.java


> Remove unnecessary locking in fair scheduler, and address findbugs excludes.
> 
>
> Key: YARN-184
> URL: https://issues.apache.org/jira/browse/YARN-184
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.0.2-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
> Fix For: 2.0.3-alpha
>
> Attachments: YARN-184-1.patch, YARN-184-2.patch, YARN-184-3.patch, 
> YARN-184-3.patch, YARN-184.patch
>
>
> In YARN-12, locks were added to all fields of QueueManager to address 
> findbugs.  In addition, findbugs exclusions were added in response to 
> MAPREDUCE-4439, without a deep look at the code.
