[
https://issues.apache.org/jira/browse/MAPREDUCE-2038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12903666#action_12903666
]
Hong Tang commented on MAPREDUCE-2038:
--------------------------------------
bq. This is a very interesting direction. We have another use case for HBase
bulk loads, where we know that a given reducer partition is going to end up on
a particular region server (often colocated with a TT). Scheduling the reducer
to run on the same node or rack will ensure a local replica of the HFile when
it comes time to serve it.
I believe this is similar to the second use case I described.
bq. Another interesting use case is for aggregation queries where we can make
use of something like a "rack combiner". We can simply implement a Partitioner
that returns the rack index of the mapper, and then schedule that reduce task
on the same rack. Thus we end up with a result set per rack, and can do a
second small job to recombine those. This is not unlike the multilevel query
execution trees used in Dremel - I imagine Hive and Pig's query planners could
make use of plenty of techniques like this.
Do you mean that for aggregation operations that reduce data volume along the
way, you would take a hierarchical approach? (The full hierarchy would then be:
in-map combining, same-host multi-map combining, in-rack reduction, and
cross-rack reduction.)
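To make the quoted "rack combiner" idea concrete, here is a minimal sketch of
such a Partitioner. It is only an illustration under two assumptions that do
not hold in Hadoop today: that the mapper's rack index is exposed to the task
through a hypothetical job property ("mapreduce.job.mapper.rack.index"), and
that the framework would then schedule the matching reduce task onto that
rack, which is exactly the capability this jira is asking for.
{code:java}
// Hedged sketch only: "mapreduce.job.mapper.rack.index" is a hypothetical
// property, and scheduling the matching reduce onto the same rack is the
// missing piece this jira is about.
import org.apache.hadoop.conf.Configurable;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Partitioner;

public class RackPartitioner<K, V> extends Partitioner<K, V>
    implements Configurable {
  private Configuration conf;
  private int rackIndex;

  @Override
  public void setConf(Configuration conf) {
    this.conf = conf;
    // Hypothetical property holding the rack index of the node running this map.
    this.rackIndex = conf.getInt("mapreduce.job.mapper.rack.index", 0);
  }

  @Override
  public Configuration getConf() {
    return conf;
  }

  @Override
  public int getPartition(K key, V value, int numPartitions) {
    // Send all output produced on this rack to the same reduce partition, so
    // one reducer per rack can pre-aggregate rack-local data; a small second
    // job then recombines the per-rack results.
    return rackIndex % numPartitions;
  }
}
{code}
Since the framework instantiates partitioners via ReflectionUtils, a
partitioner that implements Configurable gets setConf() called with the task
configuration, which is where the hypothetical rack-index property is read.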
> Making reduce tasks locality-aware
> ----------------------------------
>
> Key: MAPREDUCE-2038
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-2038
> Project: Hadoop Map/Reduce
> Issue Type: New Feature
> Reporter: Hong Tang
>
> Currently the Hadoop MapReduce framework does not take data locality into
> consideration when it decides where to launch reduce tasks. There are several
> cases where this can be sub-optimal.
> - The map output data for a particular reduce task are not distributed evenly
> across different racks. This could happen when the job does not have many
> maps, or when there is heavy skew in map output data.
> - A reduce task may need to access a side file (e.g. a Pig fragmented join,
> or an incremental merge of an unsorted smaller dataset with an already sorted
> large dataset). It'd be useful to place reduce tasks based on the location of
> the side files they need to access; a sketch of how those block locations
> could be gathered follows the description.
> This jira is created to solicit ideas on how we can do better.
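As a hedged illustration of the side-file case in the description above
(SideFileHosts and hostsFor are made-up names, and no scheduler hook consumes
this information today): the existing FileSystem/BlockLocation APIs are
already enough to collect the hosts that hold a side file's replicas, which is
the information a locality-aware scheduler would need in order to prefer those
hosts (or their racks) when placing the reduce task.
{code:java}
import java.io.IOException;
import java.util.LinkedHashSet;
import java.util.Set;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SideFileHosts {
  /**
   * Collects the datanode hosts that hold replicas of the given side file.
   * A locality-aware scheduler could use this set as the preferred hosts
   * (or map them to racks) for the reduce task that reads the file.
   */
  public static Set<String> hostsFor(Configuration conf, Path sideFile)
      throws IOException {
    FileSystem fs = sideFile.getFileSystem(conf);
    FileStatus status = fs.getFileStatus(sideFile);
    BlockLocation[] blocks =
        fs.getFileBlockLocations(status, 0, status.getLen());
    Set<String> hosts = new LinkedHashSet<String>();
    for (BlockLocation block : blocks) {
      for (String host : block.getHosts()) {
        hosts.add(host);
      }
    }
    return hosts;
  }
}
{code}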