[ https://issues.apache.org/jira/browse/MAPREDUCE-3096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13115045#comment-13115045 ]
Aaron T. Myers commented on MAPREDUCE-3096:
-------------------------------------------

Hi Arsen, to be completely clear: do you want to be able to limit the maximum number of concurrent map/reduce tasks *from a single job* that run on a given node, or the maximum number of concurrent map/reduce tasks that run on a given node *across all jobs*?

> Add a good way to control the number of map/reduce tasks per node
> -----------------------------------------------------------------
>
>                 Key: MAPREDUCE-3096
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3096
>             Project: Hadoop Map/Reduce
>          Issue Type: Task
>            Reporter: Arsen Zahray
>             Fix For: 0.20.204.0
>
>
> Currently, controlling the number of map/reduce tasks per node is hell. I've tried many times, and it doesn't work right. I am also not the only person who seems to have this problem. There must be a better way to do it.
> Here is my proposal: add the following methods to Job:
> setNumberOfMappersPerNode(int);
> setNumberOfReducersPerNode(int);
> setMaxMemoryPerMapper(int);
> setMaxMemoryPerReducer(int);
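For context on why this is being requested: in 0.20.x the only per-node task limits are the TaskTracker-wide slot counts (mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum in mapred-site.xml), which are read once at TaskTracker startup and apply across all jobs, so there is no per-job, per-node control. Below is a minimal sketch of how the proposed setters might look as thin wrappers over the job Configuration. To be explicit about what is hypothetical: none of these methods exist on org.apache.hadoop.mapreduce.Job, and the per-node property keys (mapreduce.job.maps.per.node, mapreduce.job.reduces.per.node) are placeholder names that no existing scheduler reads.

{code:java}
// Sketch only: these setters are the API *proposed* in this issue, not
// methods that exist on org.apache.hadoop.mapreduce.Job today.
import org.apache.hadoop.conf.Configuration;

public class ProposedPerNodeLimits {

    // Hypothetical property names; nothing in 0.20.x reads these. Enforcing
    // them would need matching scheduler/TaskTracker support.
    static final String MAPS_PER_NODE    = "mapreduce.job.maps.per.node";
    static final String REDUCES_PER_NODE = "mapreduce.job.reduces.per.node";

    private final Configuration conf;

    public ProposedPerNodeLimits(Configuration conf) {
        this.conf = conf;
    }

    /** Proposed: cap this job's concurrent map tasks on any single node. */
    public void setNumberOfMappersPerNode(int n) {
        conf.setInt(MAPS_PER_NODE, n);
    }

    /** Proposed: cap this job's concurrent reduce tasks on any single node. */
    public void setNumberOfReducersPerNode(int n) {
        conf.setInt(REDUCES_PER_NODE, n);
    }

    /**
     * Proposed: cap per-task memory in MB. The closest existing per-job knob
     * is the child JVM heap, set via mapred.child.java.opts (maps and
     * reduces alike in 0.20.x).
     */
    public void setMaxMemoryPerMapper(int megabytes) {
        conf.set("mapred.child.java.opts", "-Xmx" + megabytes + "m");
    }
}
{code}

Note that the setters alone cannot enforce anything: the per-node caps would have to be honored by the JobTracker's scheduler when it hands tasks to a TaskTracker, which is the part of the design this issue would actually have to specify.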