[
https://issues.apache.org/jira/browse/MAPREDUCE-5691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13857251#comment-13857251
]
Gera Shegalov commented on MAPREDUCE-5691:
------------------------------------------
[~liangly] As a short-term relief, is it possible for you to reduce
mapreduce.reduce.shuffle.parallelcopies for this job?
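A minimal sketch of lowering that property for a single job at submit time (the class name and the value 2 are illustrative; the cluster-wide default is 5):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class LowParallelCopiesJob {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Fewer concurrent fetch threads per reducer means less burst bandwidth
    // during the copy phase of the shuffle.
    conf.setInt("mapreduce.reduce.shuffle.parallelcopies", 2);
    Job job = Job.getInstance(conf, "shuffle-bandwidth-relief");
    // ... set mapper/reducer classes, input/output paths, etc. ...
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
{code}
Jobs submitted through ToolRunner/GenericOptionsParser can achieve the same thing without a code change by passing -Dmapreduce.reduce.shuffle.parallelcopies=2 on the command line.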
> Throttle shuffle's bandwidth utilization
> ----------------------------------------
>
> Key: MAPREDUCE-5691
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5691
> Project: Hadoop Map/Reduce
> Issue Type: Improvement
> Affects Versions: 2.2.0
> Reporter: Liyin Liang
> Attachments: ganglia-slave.jpg
>
>
> In our hadoop cluster, a reducer of a big job can utilize all the bandwidth
> during the shuffle phase. Any task reading data from the machine running
> that reducer then becomes very slow.
> It would be better to move DataTransferThrottler from hadoop-hdfs to
> hadoop-common, and to create a throttler for Shuffle that throttles each Fetcher.
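A minimal sketch of the kind of per-fetcher byte-rate limiter the description proposes (this is not the actual HDFS DataTransferThrottler; class and method names are illustrative):
{code:java}
// Shared rate limiter that a Fetcher would call after copying a chunk of
// map output, keeping all fetchers of a reducer under a configured rate.
public class ShuffleThrottler {
  private final long bytesPerSec;   // configured shuffle bandwidth cap
  private long periodStart;         // start of the current 1-second window
  private long bytesThisPeriod;     // bytes accounted for in the window

  public ShuffleThrottler(long bytesPerSec) {
    this.bytesPerSec = bytesPerSec;
    this.periodStart = System.currentTimeMillis();
  }

  // Called by a Fetcher after reading numBytes from a map output stream.
  public synchronized void throttle(long numBytes) throws InterruptedException {
    bytesThisPeriod += numBytes;
    long now = System.currentTimeMillis();
    long elapsed = now - periodStart;
    if (elapsed >= 1000) {
      // Window expired: start a new one with only this chunk counted.
      periodStart = now;
      bytesThisPeriod = numBytes;
      return;
    }
    if (bytesThisPeriod > bytesPerSec) {
      // Over budget for this window: sleep out the remainder, then reset.
      Thread.sleep(1000 - elapsed);
      periodStart = System.currentTimeMillis();
      bytesThisPeriod = 0;
    }
  }
}
{code}
Each Fetcher would invoke throttle(bytesRead) inside its copy loop; because the instance is shared per reducer, the combined fetch rate stays near bytesPerSec rather than saturating the link.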
--
This message was sent by Atlassian JIRA
(v6.1.5#6160)