[ https://issues.apache.org/jira/browse/MAPREDUCE-3096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13115281#comment-13115281 ]
Arun C Murthy commented on MAPREDUCE-3096:
------------------------------------------

Arsen, this is trivial with the CapacityScheduler. Just have each map/reduce task ask for all the slots on the one machine...

> Add a good way to control the number of map/reduce tasks per node
> -----------------------------------------------------------------
>
>                 Key: MAPREDUCE-3096
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3096
>             Project: Hadoop Map/Reduce
>          Issue Type: Task
>            Reporter: Arsen Zahray
>             Fix For: 0.20.204.0
>
>
> Currently, controlling the number of map/reduce tasks per node is a pain.
> I have tried it many times, and it does not work right. I also do not seem
> to be the only person with this problem.
> There must be a better way to do it. Here is my proposal: add the following
> methods to Job:
> setNumberOfMappersPerNode(int);
> setNumberOfReducersPerNode(int);
> setMaxMemoryPerMapper(int);
> setMaxMemoryPerReducer(int);

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
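A minimal sketch of what Arun's suggestion might look like in practice: on the 0.20.x CapacityScheduler with memory-based scheduling enabled, a job requests slots implicitly through its per-task memory demand, so asking for as much memory as a whole node offers makes a single task occupy all of that node's slots. The slot count and sizes below (4 slots of 2048 MB per node) are illustrative assumptions, not values taken from this issue:

```xml
<!-- Client-side job configuration (set on the JobConf or in the job's config file).
     Assumption: each TaskTracker offers 4 map and 4 reduce slots of 2048 MB each,
     so a per-task request of 8192 MB claims an entire node's slots of that type. -->
<property>
  <name>mapred.job.map.memory.mb</name>
  <value>8192</value>   <!-- each map task asks for 4 map slots' worth of memory -->
</property>
<property>
  <name>mapred.job.reduce.memory.mb</name>
  <value>8192</value>   <!-- likewise, each reduce task claims all 4 reduce slots -->
</property>
```

This controls tasks-per-node only indirectly (via memory), which is part of what the reporter is asking to make explicit with methods like setNumberOfMappersPerNode(int).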