[
https://issues.apache.org/jira/browse/HADOOP-3136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12656164#action_12656164
]
Arun C Murthy commented on HADOOP-3136:
---------------------------------------
bq. Why assign an off-rack task if you could also assign on-rack?
The strategy is to assign as many node-local and rack-local tasks per heartbeat
as there are available slots, but no more than one off-switch task per
heartbeat, regardless of the number of free slots. To prevent under-utilization
of the TaskTracker (which might have received only a single off-switch task),
the patch halves the heartbeat interval - which we can afford now, given that
the patch removes the heartbeat sent on _every_ task completion.
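To make the policy concrete, here is a minimal sketch - the types and method
names below (Task, TrackerStatus, Job, obtainLocalTask, obtainOffSwitchTask)
are hypothetical stand-ins for illustration, not the actual JobTracker API:
{code:java}
import java.util.ArrayList;
import java.util.List;

// Simplified stand-ins for the real mapred types; the method names are
// hypothetical and only mirror the policy described above.
interface Task {}

interface TrackerStatus {
    int freeMapSlots();
}

interface Job {
    Task obtainLocalTask(TrackerStatus tracker);     // node- or rack-local
    Task obtainOffSwitchTask(TrackerStatus tracker); // off-rack
}

class HeartbeatAssignSketch {
    List<Task> assignTasks(TrackerStatus tracker, Job job) {
        List<Task> assigned = new ArrayList<Task>();
        int freeSlots = tracker.freeMapSlots();
        boolean offSwitchGiven = false;

        while (freeSlots > 0) {
            Task t = job.obtainLocalTask(tracker); // prefer locality first
            if (t == null) {
                if (offSwitchGiven) {
                    break; // cap: at most one off-switch task per heartbeat
                }
                t = job.obtainOffSwitchTask(tracker);
                if (t == null) {
                    break; // nothing runnable for this tracker at all
                }
                offSwitchGiven = true;
            }
            assigned.add(t);
            freeSlots--;
        }
        return assigned;
    }
}
{code}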
bq. The problem seems to be that processing these heartbeats is too expensive?
This would argue for a more complete JT redesign. Heartbeats would then be very
cheap, since each would just be a state update on a few tasks and a dispatch of
already-queued work.
Agreed, we would need to implement finer-grained locking in the JT to reduce
the cost of heartbeat processing - currently each heartbeat locks up the entire
JT (à la the BKL, the Big Kernel Lock, in the Linux 2.2 kernel... *smile*).
HADOOP-869 tracks this.
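For illustration, a hypothetical sketch of what finer-grained locking could
look like - per-job locks instead of one JT-wide lock, so unrelated heartbeats
don't serialize on each other. None of this is the actual JobTracker code:
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical illustration only: per-job locking instead of one
// JT-wide lock, so heartbeats for different jobs can proceed concurrently.
class FineGrainedJtSketch {
    static class JobState {
        private int runningTasks;
        // The lock scope is a single job, not the whole JobTracker.
        synchronized void taskCompleted() { runningTasks--; }
    }

    private final Map<String, JobState> jobs =
        new ConcurrentHashMap<String, JobState>();

    // The coarse version described above would instead be a
    // synchronized method on the JobTracker itself, serializing
    // every tracker's heartbeat behind one lock.
    void processHeartbeat(String jobId) {
        JobState job = jobs.get(jobId);
        if (job != null) {
            job.taskCompleted(); // only this job's lock is held
        }
    }
}
{code}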
bq. We should probably start another thread on that, where the JT plans and
queues tasks for task trackers in one thread.
+1. I'll open another jira for global scheduling.
> Assign multiple tasks per TaskTracker heartbeat
> -----------------------------------------------
>
> Key: HADOOP-3136
> URL: https://issues.apache.org/jira/browse/HADOOP-3136
> Project: Hadoop Core
> Issue Type: Improvement
> Components: mapred
> Reporter: Devaraj Das
> Assignee: Arun C Murthy
> Fix For: 0.20.0
>
> Attachments: HADOOP-3136_0_20080805.patch,
> HADOOP-3136_1_20080809.patch, HADOOP-3136_2_20080911.patch,
> HADOOP-3136_3_20081211.patch
>
>
> In today's logic of finding a new task, we assign only one task per heartbeat.
> We probably could give the tasktracker multiple tasks, subject to the max
> number of free slots it has - for maps we could assign it data-local tasks.
> We could probably run some logic to decide what to give it if we run out of
> data-local tasks (e.g., tasks from overloaded racks, tasks that have the
> least locality, etc.). In addition to maps, if it has reduce slots free, we
> could give it reduce task(s) as well. Again, for reduces we could probably
> run some logic to give more tasks to nodes that are closer to the nodes
> running most maps (assuming the data generated is proportional to the number
> of maps). For example, if rack1 has 70% of the input splits, and we know that
> most maps are data- or rack-local, we try to schedule ~70% of the reducers
> there.
> Thoughts?
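A minimal sketch of the proportional reducer-placement idea from the
description above, assuming per-rack split counts are known up front; the
class and method names are hypothetical:
{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: give each rack a share of the reducers that
// matches its share of the input splits.
class ReducerPlacementSketch {
    static Map<String, Integer> reducersPerRack(
            Map<String, Integer> splitsPerRack, int numReducers) {
        int totalSplits = 0;
        for (int n : splitsPerRack.values()) {
            totalSplits += n;
        }
        Map<String, Integer> quota = new LinkedHashMap<String, Integer>();
        for (Map.Entry<String, Integer> e : splitsPerRack.entrySet()) {
            int share = (int) Math.round(
                numReducers * (double) e.getValue() / totalSplits);
            quota.put(e.getKey(), share);
        }
        // Rounding can over- or under-allocate by a task or two; a real
        // scheduler would reconcile the quotas against numReducers.
        return quota;
    }

    public static void main(String[] args) {
        Map<String, Integer> splits = new LinkedHashMap<String, Integer>();
        splits.put("rack1", 70); // 70% of splits -> ~70% of reducers
        splits.put("rack2", 30);
        System.out.println(reducersPerRack(splits, 10)); // {rack1=7, rack2=3}
    }
}
{code}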
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.