[
https://issues.apache.org/jira/browse/MAPREDUCE-722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Vinod K V updated MAPREDUCE-722:
--------------------------------
Attachment: MAPREDUCE-722.txt
Hemanth found the cause of this - a faulty conditional in
getTaskFromQueue (CapacityTaskScheduler.java, line 538):
{code}
if (memory requirement match for this job on this TT) {
  Go ahead and give a task
} else {
  if (getPendingTasks(j) != 0 || hasSpeculativeTask(j, taskTrackerStatus)
      || !hasSufficientReservedTaskTrackers(j)) {
    Reserve this TaskTracker.
  }
}
{code}
Even when enough reservations have already been made, reservations continue to
be made until every node in the cluster is reserved for the job, because the
conditions are OR'ed instead of AND'ed.
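For a sense of scale, here is the rough arithmetic with the numbers from the
report below, assuming the scheduler needs ceil(task memory / slot size) slots
per high-RAM task (an assumption of this sketch, not something taken from the
patch):
{code}
// Rough arithmetic only, not scheduler code. Cluster from the report:
// 124 nodes, 248/248 slots, 1500 MB map slots, 2048 MB reduce slots.
int slotsPerMap    = (int) Math.ceil(1800.0 / 1500.0);   // = 2
int slotsPerReduce = (int) Math.ceil(2800.0 / 2048.0);   // = 2
int expectedMapSlots    = 31 * slotsPerMap;              // ~62 of 248
int expectedReduceSlots = 31 * slotsPerReduce;           // ~62 of 248
// With the OR'ed conditional above, reservations instead keep growing
// until all 248/248 slots are reserved for the high-RAM job.
{code}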
I am attaching a patch for this. It changes the conditional to:
{code}
if ((getPendingTasks(j) != 0 && !hasSufficientReservedTaskTrackers(j))
    || hasSpeculativeTask(j, taskTrackerStatus)) {
  Reserve the taskTracker.
}
{code}
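For clarity, a minimal standalone sketch of the corrected check - the helper
methods mirror the names used above and are left abstract here; this is not
the actual CapacityTaskScheduler code:
{code}
// Sketch only: JobInProgress and TaskTrackerStatus are the usual
// org.apache.hadoop.mapred types; the helpers below are stand-ins for the
// scheduler methods of the same names.
abstract class ReservationCheckSketch {
  abstract int getPendingTasks(JobInProgress j);
  abstract boolean hasSpeculativeTask(JobInProgress j, TaskTrackerStatus tts);
  abstract boolean hasSufficientReservedTaskTrackers(JobInProgress j);

  boolean shouldReserve(JobInProgress j, TaskTrackerStatus tts) {
    // Reserve only while the job still has pending tasks AND has not yet
    // accumulated enough reserved TaskTrackers, or when a speculative
    // task of this job could run on this tracker.
    return (getPendingTasks(j) != 0 && !hasSufficientReservedTaskTrackers(j))
        || hasSpeculativeTask(j, tts);
  }
}
{code}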
Added a new test case that fails without the code changes and succeeds with
them. Also fixed two other tests that were buggy and so didn't catch the
problems found in this issue.
This patch is still *incomplete* w.r.t. speculative tasks. We need more thought
here, as a TaskTracker reserved for one speculative task T-1 may not be usable
by another task T-2 of the same job.
> More slots are getting reserved for HiRAM job tasks than required
> -----------------------------------------------------------------
>
> Key: MAPREDUCE-722
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-722
> Project: Hadoop Map/Reduce
> Issue Type: Bug
> Components: contrib/capacity-sched
> Environment: Cluster MR capacity=248/248, map slot size=1500 MB,
> reduce slot size=2048 MB. Total number of nodes=124.
> 4 queues, each with Capacity=25% and User Limit=100%.
> Reporter: Karam Singh
> Assignee: Vinod K V
> Attachments: MAPREDUCE-722.txt
>
>
> Submitted a normal job with maps=124=reduces.
> Then submitted a high-RAM job with maps=31=reduces, map.memory=1800 and
> reduce.memory=2800.
> Then submitted 3 more jobs with maps=124=reduces.
> A total of 248 slots was reserved for both maps and reduces for the high-RAM
> job, which is much higher than required.
> This is observed in Hadoop 0.20.0.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.