[ https://issues.apache.org/jira/browse/HADOOP-5964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-5964:
----------------------------------

    Attachment: HADOOP-5964_0_20090602.patch

Very early patch.

I haven't introduced a WAITING_FOR_SLOT task state, since it might be desirable 
for the ExpireLaunchingTasks thread to actually kill high-RAM jobs that have 
waited too long. Thoughts?
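
For illustration, a minimal, self-contained sketch of that timeout idea; this is 
not the attached patch, and the class and method names (WaitingJobExpirer, 
jobStartedWaiting, etc.) are hypothetical stand-ins rather than Hadoop APIs:

// Hypothetical sketch: a periodic check that kills jobs which have waited
// too long for a high-RAM slot. All names here are illustrative.
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WaitingJobExpirer implements Runnable {

    /** Illustrative stand-in for a job waiting on a large-memory slot. */
    static class WaitingJob {
        final String jobId;
        final long waitingSinceMillis;
        WaitingJob(String jobId, long waitingSinceMillis) {
            this.jobId = jobId;
            this.waitingSinceMillis = waitingSinceMillis;
        }
    }

    private final Map<String, WaitingJob> waitingJobs = new ConcurrentHashMap<>();
    private final long maxWaitMillis;

    WaitingJobExpirer(long maxWaitMillis) {
        this.maxWaitMillis = maxWaitMillis;
    }

    /** Called when a job starts waiting for a suitable slot. */
    void jobStartedWaiting(String jobId) {
        waitingJobs.putIfAbsent(jobId, new WaitingJob(jobId, System.currentTimeMillis()));
    }

    /** Called when the job finally gets a slot. */
    void jobGotSlot(String jobId) {
        waitingJobs.remove(jobId);
    }

    @Override
    public void run() {
        long now = System.currentTimeMillis();
        for (Iterator<WaitingJob> it = waitingJobs.values().iterator(); it.hasNext();) {
            WaitingJob job = it.next();
            if (now - job.waitingSinceMillis > maxWaitMillis) {
                // A real scheduler would fail/kill the job here; the demo just logs it.
                System.out.println("Killing job " + job.jobId + ": waited "
                    + (now - job.waitingSinceMillis) + " ms for a high-RAM slot");
                it.remove();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        WaitingJobExpirer expirer = new WaitingJobExpirer(100); // 100 ms timeout for the demo
        expirer.jobStartedWaiting("job_200906020001_0001");
        Thread.sleep(200);
        expirer.run(); // the periodic check would normally run on a timer thread
    }
}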

> Fix the 'cluster drain' problem in the Capacity Scheduler wrt High RAM Jobs
> ---------------------------------------------------------------------------
>
>                 Key: HADOOP-5964
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5964
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: contrib/capacity-sched
>    Affects Versions: 0.20.0
>            Reporter: Arun C Murthy
>             Fix For: 0.21.0
>
>         Attachments: HADOOP-5964_0_20090602.patch
>
>
> When a HighRAMJob turns up at the head of the queue, the current 
> implementation of support for HighRAMJobs in the Capacity Scheduler has a 
> problem: the scheduler stops assigning tasks to all TaskTrackers in the 
> cluster until the HighRAMJob finds suitable TaskTrackers for all its tasks.
> This causes a severe utilization problem, since effectively no new tasks 
> are allowed to run until the HighRAMJob (at the head of the queue) gets 
> slots.
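
To make the drain concrete, here is a minimal, self-contained sketch of the 
behaviour described above; it is not the Capacity Scheduler's actual code, and 
all names (DrainDemo, assignTask, memPerTaskMB) are illustrative assumptions:

// Illustrative sketch: with strict head-of-queue scheduling, a high-RAM job
// that does not fit on the offering TaskTracker blocks every job behind it.
import java.util.Arrays;
import java.util.List;

public class DrainDemo {

    static class Job {
        final String id;
        final int memPerTaskMB;   // memory each task of this job needs
        Job(String id, int memPerTaskMB) { this.id = id; this.memPerTaskMB = memPerTaskMB; }
    }

    /** Head-of-queue scheduling: only the first job in the queue is considered. */
    static Job assignTask(List<Job> queue, int freeMemOnTrackerMB) {
        Job head = queue.get(0);
        if (head.memPerTaskMB <= freeMemOnTrackerMB) {
            return head;   // head job fits: schedule one of its tasks
        }
        return null;       // head job does not fit: nothing at all is scheduled
    }

    public static void main(String[] args) {
        List<Job> queue = Arrays.asList(
            new Job("highram-job", 4096),   // head of queue, needs a big slot
            new Job("normal-job", 1024));   // would fit, but is never considered
        // A TaskTracker heartbeats in with 2 GB free: the high-RAM job cannot be
        // placed, and the slot goes unused while the queue drains the cluster.
        Job assigned = assignTask(queue, 2048);
        System.out.println(assigned == null ? "no task assigned (cluster drains)"
                                            : "assigned " + assigned.id);
    }
}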

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
