     [ https://issues.apache.org/jira/browse/HADOOP-5964 ]
Arun C Murthy updated HADOOP-5964:
----------------------------------

    Attachment: HADOOP-5964_4_20090615.patch

After much consideration I decided to revert to the approach where we keep tasks on the JobTracker until sufficient memory is available. The problem with caching them on the TaskTracker is that it caused too many changes to the task status-reporting components, which are currently too hairy to muck with. I'm still testing the current patch. (A rough, illustrative sketch of the JobTracker-side queueing idea is appended after the quoted description below.)

> Fix the 'cluster drain' problem in the Capacity Scheduler wrt High RAM Jobs
> ---------------------------------------------------------------------------
>
>                 Key: HADOOP-5964
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5964
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: contrib/capacity-sched
>    Affects Versions: 0.20.0
>            Reporter: Arun C Murthy
>            Assignee: Arun C Murthy
>             Fix For: 0.21.0
>
>         Attachments: HADOOP-5964_0_20090602.patch, HADOOP-5964_1_20090608.patch, HADOOP-5964_2_20090609.patch, HADOOP-5964_4_20090615.patch
>
>
> When a HighRAMJob turns up at the head of the queue, the current implementation of HighRAMJob support in the Capacity Scheduler has a problem: the scheduler stops assigning tasks to all TaskTrackers in the cluster until the HighRAMJob finds suitable TaskTrackers for all its tasks.
> This causes a severe utilization problem, since effectively no new tasks are allowed to run until the HighRAMJob (at the head of the queue) gets slots.
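To make the queueing idea above concrete, here is a minimal, self-contained sketch. It is not the attached patch, and every class, method, and constant name in it is hypothetical: the JobTracker-side scheduler holds the head-of-queue high-RAM task in its own queue until a heartbeating tracker reports enough free memory, reserves a bounded number of trackers so the task is not starved forever, and lets unreserved trackers keep taking smaller tasks so the rest of the cluster does not drain.

{code:java}
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Queue;
import java.util.Set;

/**
 * Toy sketch of the idea in the comment above: keep a high-RAM task queued
 * on the JobTracker until a tracker with enough free memory heartbeats,
 * instead of blocking assignment cluster-wide or caching the task on a
 * TaskTracker. All names are hypothetical; this is not the HADOOP-5964 patch.
 */
public class HighRamSchedulerSketch {

  /** A queued task and its memory requirement, in MB. */
  static final class PendingTask {
    final String id;
    final int memoryMb;
    PendingTask(String id, int memoryMb) { this.id = id; this.memoryMb = memoryMb; }
    @Override public String toString() { return id; }
  }

  /** Minimal stand-in for a TaskTracker heartbeat status. */
  static final class TrackerStatus {
    final String name;
    final int freeMemoryMb;
    TrackerStatus(String name, int freeMemoryMb) { this.name = name; this.freeMemoryMb = freeMemoryMb; }
  }

  /** Arbitrary cap on how many trackers may idle waiting for the high-RAM task. */
  private static final int MAX_RESERVED = 2;

  private final Queue<PendingTask> queue = new ArrayDeque<>();
  private final Set<String> reserved = new HashSet<>();

  void submit(PendingTask task) { queue.add(task); }

  /** Called on each heartbeat; returns the tasks to launch on this tracker. */
  List<PendingTask> assignTasks(TrackerStatus tracker) {
    List<PendingTask> assigned = new ArrayList<>();
    PendingTask head = queue.peek();
    if (head == null) {
      return assigned;
    }

    if (tracker.freeMemoryMb >= head.memoryMb) {
      // Enough free memory here: launch the (possibly high-RAM) head task
      // and release any trackers that were being held for it.
      assigned.add(queue.poll());
      reserved.clear();
      return assigned;
    }

    if (reserved.size() < MAX_RESERVED) {
      // Not enough memory yet: the head task stays on the JobTracker, and
      // this tracker is reserved so its slots drain and free up for it.
      reserved.add(tracker.name);
    }

    if (!reserved.contains(tracker.name)) {
      // Unreserved trackers keep taking smaller tasks from behind the
      // head, which is what prevents the cluster-wide drain.
      Iterator<PendingTask> it = queue.iterator();
      it.next(); // skip the head; it stays queued
      while (it.hasNext()) {
        PendingTask t = it.next();
        if (tracker.freeMemoryMb >= t.memoryMb) {
          it.remove();
          assigned.add(t);
          break;
        }
      }
    }
    return assigned;
  }

  public static void main(String[] args) {
    HighRamSchedulerSketch scheduler = new HighRamSchedulerSketch();
    scheduler.submit(new PendingTask("highram-1", 4096));
    scheduler.submit(new PendingTask("small-1", 512));

    // Small tracker: the high-RAM task is held, the tracker is reserved.
    System.out.println(scheduler.assignTasks(new TrackerStatus("tt1", 1024))); // []
    // Large tracker: the high-RAM task is finally placed.
    System.out.println(scheduler.assignTasks(new TrackerStatus("tt2", 8192))); // [highram-1]
    // The reservation has been released, so tt1 now gets the small task.
    System.out.println(scheduler.assignTasks(new TrackerStatus("tt1", 1024))); // [small-1]
  }
}
{code}

The bounded reservation set is the key trade-off in this sketch: with no reservations at all, small tasks would starve the high-RAM job indefinitely; with unbounded reservations, the scheduler would recreate the original cluster-wide drain. A real scheduler would also have to expire stale reservations and account for memory already committed on each tracker.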