[ https://issues.apache.org/jira/browse/HADOOP-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12635445#action_12635445 ]
Owen O'Malley commented on HADOOP-4035:
---------------------------------------

I think it is important that we not starve jobs, so we should not take the next job's task if the current job's task doesn't fit. I've also filed HADOOP-4306 to change the disk space monitoring the same way. In the future, it probably makes sense to add a special case for when a job's task will never fit on a given TaskTracker, and let other low-priority jobs use that tracker. But that should be a different jira.

> Modify the capacity scheduler (HADOOP-3445) to schedule tasks based on memory
> requirements and task trackers free memory
> ------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-4035
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4035
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: contrib/capacity-sched
>    Affects Versions: 0.19.0
>            Reporter: Hemanth Yamijala
>            Assignee: Vinod K V
>            Priority: Blocker
>             Fix For: 0.19.0
>
>         Attachments: 4035.1.patch, HADOOP-4035-20080918.1.txt
>
>
> HADOOP-3759 introduced configuration variables that can be used to specify
> memory requirements for jobs, and also modified the tasktrackers to report
> their free memory. The capacity scheduler in HADOOP-3445 should schedule
> tasks based on these parameters. A task that is scheduled on a TT that uses
> more than the default amount of memory per slot can be viewed as effectively
> using more than one slot, as it would decrease the amount of free memory on
> the TT by more than the default amount while it runs. The scheduler should
> make the used capacity account for this additional usage while enforcing
> limits, etc.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
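The two policies discussed above (charge a memory-hungry task multiple slots, and leave a tracker idle rather than skip ahead to a lower-priority job when the head job's task doesn't fit) can be sketched roughly as follows. This is an illustrative sketch only, not the actual capacity-scheduler code; all names (`slots_occupied`, `assign_task`, `Job`) are hypothetical:

```python
import math
from collections import namedtuple

# Hypothetical job record: the per-task memory requirement in MB.
Job = namedtuple("Job", ["task_mem_mb"])

def slots_occupied(task_mem_mb, default_mem_per_slot_mb):
    # A task needing more than the default per-slot memory is counted
    # as using more than one slot, per the issue description.
    return max(1, math.ceil(task_mem_mb / default_mem_per_slot_mb))

def assign_task(job_queue, tt_free_mem_mb):
    # job_queue is in priority order; return the memory requirement of
    # the task to schedule, or None if the tracker should stay idle.
    for job in job_queue:
        if job.task_mem_mb <= tt_free_mem_mb:
            return job.task_mem_mb
        # Per the comment above: do NOT fall through to the next job's
        # task, otherwise the current (higher-priority) job is starved.
        return None
    return None
```

With a 512 MB default slot, a 1024 MB task would be charged two slots; and a tracker with 1024 MB free offered a queue whose head task needs 2048 MB schedules nothing, even if a later job's task would fit.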