[ https://issues.apache.org/jira/browse/HADOOP-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12715728#action_12715728 ]

Hadoop QA commented on HADOOP-5170:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12409578/tasklimits-v4.patch
  against trunk revision 781115.

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 4 new or modified tests.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

    +1 findbugs.  The patch does not introduce any new Findbugs warnings.

    +1 Eclipse classpath. The patch retains Eclipse classpath integrity.

    -1 release audit.  The applied patch generated 493 release audit warnings 
(more than the trunk's current 492 warnings).

    +1 core tests.  The patch passed core unit tests.

    -1 contrib tests.  The patch failed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/451/testReport/
Release audit warnings: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/451/artifact/trunk/patchprocess/releaseAuditDiffWarnings.txt
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/451/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/451/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/451/console

This message is automatically generated.

> Set max map/reduce tasks on a per-job basis, either per-node or cluster-wide
> ----------------------------------------------------------------------------
>
>                 Key: HADOOP-5170
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5170
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: mapred
>            Reporter: Jonathan Gray
>            Assignee: Matei Zaharia
>         Attachments: HADOOP-5170-tasklimits-v3-0.18.3.patch, 
> tasklimits-v2.patch, tasklimits-v3-0.19.patch, tasklimits-v3.patch, 
> tasklimits-v4.patch, tasklimits.patch
>
>
> There are a number of use cases for being able to do this.  The focus of this 
> jira should be on finding the simplest implementation that satisfies the most 
> use cases.
> This could be implemented as either a per-node maximum or a cluster-wide 
> maximum.  It seems that for most uses the former is preferable; however, 
> either would fulfill the requirements of this jira.  (A hypothetical 
> configuration sketch follows the list of use cases below.)
> Some of the reasons for allowing this feature (mine and from others on the list):
> - I have some very large CPU-bound jobs.  I am forced to keep the max 
> maps-per-node limit at 2 or 3 (on a 4-core node) so that I do not starve the 
> Datanode and Regionserver.  I have other jobs that are network-latency bound 
> and would like to be able to run high numbers of them concurrently on each 
> node.  Though I can thread some jobs, some use cases are difficult to thread 
> (scanning from hbase), and threading adds significant complexity to the job 
> compared to letting hadoop handle the concurrency.
> - Poor assignment of tasks to nodes creates situations where you have 
> multiple reducers on a single node while other nodes receive none.  A 
> limit of 1 reducer per node for that job would prevent that from happening. 
> (This only works with a per-node limit.)
> - Poor man's MR job virtualization.  Since we can limit a job's resources, 
> this gives much more control in allocating and dividing up the resources of a 
> large cluster.  (This makes most sense with a cluster-wide limit.)
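>
> As a rough, hypothetical sketch of what this could look like from the job's 
> side, the snippet below sets such limits through JobConf.  The property 
> names are assumptions for illustration only; the actual keys are whatever 
> the attached tasklimits patch defines.
>
>     import org.apache.hadoop.mapred.JobConf;
>
>     public class TaskLimitExample {
>       public static void main(String[] args) {
>         JobConf conf = new JobConf(TaskLimitExample.class);
>         conf.setJobName("latency-bound-scan");
>         // Per-node cap (assumed key name): run at most 1 reduce task of
>         // this job on any single TaskTracker.
>         conf.setInt("mapred.max.reduces.per.node", 1);
>         // Cluster-wide cap (assumed key name): run at most 20 map tasks of
>         // this job concurrently across the whole cluster.
>         conf.setInt("mapred.max.running.maps", 20);
>       }
>     }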

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.