[ https://issues.apache.org/jira/browse/HADOOP-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12644640#action_12644640 ]
Vivek Ratan commented on HADOOP-4035:
-------------------------------------

While I agree that you want to make it easier for various schedulers to share common functionality, keep in mind that different schedulers may choose to behave differently if a TT does not have enough memory for a task. The Capacity Scheduler, for example, chooses to block, i.e., it prefers returning nothing to the TT, so that it doesn't starve a job with high-memory requirements. However, another scheduler might choose to look at the next job, or find some other task to give the TT. For this reason, you don't want to put the memory checks in _shouldRunOnTaskTracker()_: a scheduler may react differently to a TT being blacklisted than to the TT not having enough free memory.

Now, you could argue that we can add a new method to JobInProgress, something like isMemoryAvailable(), that decides whether the TT has enough free memory. You could call it from obtainNewMapTask() or obtainNewReduceTask(), but again, you'd have to modify what these methods return to the schedulers. If obtainNewMapTask(), for example, returns no task, the scheduler needs to know why: it may behave differently depending on whether there was no task to run, the TT was blacklisted, or the TT didn't have enough free memory. This will make things messy.

I think there is a lot of scheduling code in the JobInProgress object that needs to be moved into the schedulers. IMO, JobInProgress, and other objects, should expose their data structures, and maybe simpler methods that decide whether a TT is blacklisted or whether it has enough free memory (methods that return the same response irrespective of the scheduler). The various schedulers should then compose these methods as they see fit: one may check for user quotas before it checks for memory fit (maybe the former is a faster check), while another may do something else. This does imply a fair bit of refactoring, and could be a longer-term effort, but it will also help share common code across schedulers.

Short term, I think it's better that each scheduler decide whether it wants to support memory checks when scheduling and what it does if the TT does not have enough free memory, and implement that individually. To ease this, maybe you can put the logic of deciding whether a TT has enough free memory for a task in a separate method in JobInProgress, but call that from each scheduler, not from another JobInProgress method.
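To make the "messy" part concrete, here is a rough sketch of what the interface gets pushed towards if the memory check lives inside obtainNewMapTask(): a bare null return is ambiguous, so a reason has to travel back with it, and every scheduler then has to interpret those reasons. All class and method names below are made up purely for illustration; this is not the actual JobInProgress/TaskScheduler API.

{code}
// Illustrative only -- not the real Hadoop classes. This mimics what the
// interface is forced into if the job-side method does the memory check
// itself: a bare null is ambiguous, so a reason has to be returned with it.
enum NoTaskReason { NO_TASK_TO_RUN, TRACKER_BLACKLISTED, NOT_ENOUGH_MEMORY }

class TaskOrReason {
  final Runnable task;        // non-null when a task was handed out
  final NoTaskReason reason;  // set when no task was handed out
  TaskOrReason(Runnable task, NoTaskReason reason) {
    this.task = task;
    this.reason = reason;
  }
}

interface HypotheticalJob {
  // "Return a task or null" becomes "return a task or a reason".
  TaskOrReason obtainTaskWithReason(long trackerFreeMemory);
}

class HypotheticalCapacityStyleScheduler {
  // Every scheduler now has to interpret the reason codes.
  Runnable assign(Iterable<HypotheticalJob> jobsByPriority, long trackerFreeMemory) {
    for (HypotheticalJob job : jobsByPriority) {
      TaskOrReason r = job.obtainTaskWithReason(trackerFreeMemory);
      if (r.task != null) return r.task;
      if (r.reason == NoTaskReason.NOT_ENOUGH_MEMORY) return null; // block: give the TT nothing
      // TRACKER_BLACKLISTED or NO_TASK_TO_RUN: fall through and try the next job
    }
    return null;
  }
}
{code}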
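By contrast, here is a rough sketch of the composition idea: the job-side object exposes simple checks that answer the same way no matter which scheduler asks, and each scheduler strings them together and reacts to a memory miss in its own way. Again, every name below is made up for illustration and is not the actual scheduler API.

{code}
// Illustrative only -- simple, scheduler-agnostic checks composed differently
// by two hypothetical schedulers.
import java.util.List;

interface HypotheticalTrackerStatus {
  long freeMemory();            // free memory the TT reported
}

interface HypotheticalJobView {
  boolean isTrackerBlacklisted(HypotheticalTrackerStatus tt);
  boolean fitsInMemory(HypotheticalTrackerStatus tt);   // task requirement <= free memory
  boolean userWithinQuota();
  Runnable obtainTask(HypotheticalTrackerStatus tt);    // null if nothing to run
}

/** Blocks on a memory miss so a high-memory job is not starved (Capacity-Scheduler-style). */
class HypotheticalBlockingScheduler {
  private final List<HypotheticalJobView> jobsByPriority;
  HypotheticalBlockingScheduler(List<HypotheticalJobView> jobs) { this.jobsByPriority = jobs; }

  Runnable assignTask(HypotheticalTrackerStatus tt) {
    for (HypotheticalJobView job : jobsByPriority) {
      if (job.isTrackerBlacklisted(tt)) continue;   // this job won't run here; try the next
      if (!job.fitsInMemory(tt)) return null;       // block: hand the TT nothing
      Runnable t = job.obtainTask(tt);
      if (t != null) return t;
    }
    return null;
  }
}

/** Checks the (cheaper) quota first and just moves on to the next job on a memory miss. */
class HypotheticalSkippingScheduler {
  private final List<HypotheticalJobView> jobsByPriority;
  HypotheticalSkippingScheduler(List<HypotheticalJobView> jobs) { this.jobsByPriority = jobs; }

  Runnable assignTask(HypotheticalTrackerStatus tt) {
    for (HypotheticalJobView job : jobsByPriority) {
      if (!job.userWithinQuota()) continue;         // cheaper check first
      if (job.isTrackerBlacklisted(tt)) continue;
      if (!job.fitsInMemory(tt)) continue;          // skip this job, look at the next one
      Runnable t = job.obtainTask(tt);
      if (t != null) return t;
    }
    return null;
  }
}
{code}

The checks themselves stay identical across schedulers; only the order in which they are composed and the reaction to a failed check differ.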
> Modify the capacity scheduler (HADOOP-3445) to schedule tasks based on memory requirements and task trackers free memory
> -------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-4035
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4035
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: contrib/capacity-sched
>    Affects Versions: 0.19.0
>            Reporter: Hemanth Yamijala
>            Assignee: Vinod K V
>            Priority: Blocker
>             Fix For: 0.20.0
>
>         Attachments: 4035.1.patch, HADOOP-4035-20080918.1.txt, HADOOP-4035-20081006.1.txt, HADOOP-4035-20081006.txt, HADOOP-4035-20081008.txt
>
>
> HADOOP-3759 introduced configuration variables that can be used to specify memory requirements for jobs, and also modified the tasktrackers to report their free memory. The capacity scheduler in HADOOP-3445 should schedule tasks based on these parameters. A task that is scheduled on a TT that uses more than the default amount of memory per slot can be viewed as effectively using more than one slot, as it would decrease the amount of free memory on the TT by more than the default amount while it runs. The scheduler should make the used capacity account for this additional usage while enforcing limits, etc.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.