[
https://issues.apache.org/jira/browse/HADOOP-5811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12708379#action_12708379
]
Vinod K V commented on HADOOP-5811:
-----------------------------------
- Test I:
Submit 2 normal jobs, one after another, each with the number of map and
reduce tasks equal to the cluster capacity. When the first of them starts
running and its maps occupy the whole cluster, submit a high-memory job with
the number of tasks equal to the cluster capacity. After this job gets
submitted, submit 2 more normal jobs.
Problem found:
When the maps of the 2 normal jobs finished, the maps of the high-memory job
were scheduled instead of the reduces of the second normal job. The reduces of
the second job were blocked for want of memory until the maps of the
high-memory job finished.
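To make the blocking concrete, here is a minimal, self-contained sketch of the
effect; the Tracker class, the memory figures and the two-pass heartbeat are
assumptions for illustration, not the actual CapacityTaskScheduler code.
Because maps and reduces draw from one per-tracker memory pool and the map
pass runs before the reduce pass, the high-memory maps can leave nothing for
the already-pending normal reduces:
{code:java}
import java.util.ArrayDeque;
import java.util.Queue;

/**
 * Hypothetical model of the starvation seen in Test I, not the
 * real scheduler: maps and reduces share one per-tracker memory
 * pool, and the map pass runs first on every heartbeat.
 */
public class ReduceStarvationDemo {

    static class Tracker {
        final int totalMemMB;
        int usedMemMB;
        Tracker(int totalMemMB) { this.totalMemMB = totalMemMB; }
        boolean tryReserve(int mb) {
            if (usedMemMB + mb > totalMemMB) return false;
            usedMemMB += mb;
            return true;
        }
    }

    public static void main(String[] args) {
        Tracker tracker = new Tracker(4096);                 // 4 GB per tracker

        Queue<Integer> highMemMaps = new ArrayDeque<>();
        for (int i = 0; i < 2; i++) highMemMaps.add(2048);   // 2 GB maps

        Queue<Integer> normalReduces = new ArrayDeque<>();
        for (int i = 0; i < 4; i++) normalReduces.add(512);  // 512 MB reduces

        // Heartbeat: the map pass runs first and fills all free memory
        // with the high-memory job's maps ...
        while (!highMemMaps.isEmpty() && tracker.tryReserve(highMemMaps.peek())) {
            System.out.println("scheduled high-mem map (" + highMemMaps.poll() + " MB)");
        }
        // ... so the reduce pass that follows finds no memory left and the
        // pending normal reduces stay blocked, exactly as in Test I.
        while (!normalReduces.isEmpty() && tracker.tryReserve(normalReduces.peek())) {
            System.out.println("scheduled normal reduce (" + normalReduces.poll() + " MB)");
        }
        System.out.println(normalReduces.size() + " normal reduces still blocked for memory");
    }
}
{code}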
- Test II:
Submit a high-memory job with the number of tasks equal to the cluster
capacity. When this job starts running, submit 3 normal jobs.
Problem found:
The maps of the high-memory job were scheduled, and when they finished, the
maps of the other jobs started getting scheduled instead of the reduces of the
high-memory job. Further, even though the other jobs were normal, their maps
were not scheduled completely either, but only in one wave. After that wave,
the reduce scheduler took over and no task from any job was scheduled until
tasks of the normal jobs started finishing, making task scheduling hard to
predict.
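The stall after the one wave of maps looks like head-of-line blocking in the
reduce pass. The sketch below is again only a model under that assumption (the
Job record and the single-job lookahead are hypothetical, not the scheduler's
real code): if the first job's pending reduce does not fit in the tracker's
free memory, the pass returns nothing instead of trying later jobs, so no job
makes progress:
{code:java}
import java.util.List;

/**
 * Hypothetical model of the head-of-line blocking suspected in
 * Test II: the reduce pass looks only at the job at the head of
 * the queue and gives up if its reduce does not fit.
 */
public class HeadOfLineBlockingDemo {

    record Job(String name, int reduceMemMB, int pendingReduces) {}

    /** Returns the job whose reduce was assigned, or null if the pass stalls. */
    static Job reducePass(List<Job> jobsInQueueOrder, int freeMemMB) {
        Job head = jobsInQueueOrder.get(0);   // only the head is considered
        if (head.pendingReduces() > 0 && head.reduceMemMB() <= freeMemMB) {
            return head;
        }
        return null;  // the whole pass stalls; later jobs are never tried
    }

    public static void main(String[] args) {
        // High-memory job first in queue order; three normal jobs behind it.
        List<Job> jobs = List.of(
            new Job("high-mem", 2048, 10),
            new Job("normal-1", 512, 10),
            new Job("normal-2", 512, 10),
            new Job("normal-3", 512, 10));

        int freeMemMB = 1024;  // memory left after one wave of normal maps
        Job assigned = reducePass(jobs, freeMemMB);
        System.out.println(assigned == null
            ? "no reduce scheduled for any job: all starved behind the high-mem reduce"
            : "scheduled a reduce of " + assigned.name());
    }
}
{code}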
> High-memory jobs in CapacityTaskScheduler: Some scenarios make job execution
> unpredictable
> ------------------------------------------------------------------------------------------
>
> Key: HADOOP-5811
> URL: https://issues.apache.org/jira/browse/HADOOP-5811
> Project: Hadoop Core
> Issue Type: Bug
> Components: contrib/capacity-sched
> Reporter: Vinod K V
>
> Karam found some problems with the scheduling of high-memory jobs by
> CapacityTaskScheduler. This issue tracks them.