[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223433#comment-14223433
 ] 

Jason Lowe commented on MAPREDUCE-6166:
---------------------------------------

I'm not sure we need all the boilerplate of an extra class just to save one 
line of code (two if we count the MergeManager member), and I'm not sure that 
extra class alone will make it clear that MapOutput can be used externally.  
IMHO, if we want to do this kind of refactoring, it can be done in another 
JIRA.

> Reducers do not catch bad map output transfers during shuffle if data 
> shuffled directly to disk
> -----------------------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-6166
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6166
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: mrv2
>    Affects Versions: 2.6.0
>            Reporter: Eric Payne
>            Assignee: Eric Payne
>         Attachments: MAPREDUCE-6166.v1.201411221941.txt
>
>
> In very large map/reduce jobs (50000 maps, 2500 reducers), the intermediate 
> map partition output can become corrupted on disk on the map side. If the 
> corrupted map output is too large to shuffle in memory, the reducer streams 
> it to disk without validating the checksum. In jobs this large, it can take 
> hours before the reducer finally tries to read the corrupted file and fails. 
> Since retries of the failed reduce attempt will also take hours, this delay 
> in discovering the failure is multiplied greatly.
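The fix direction described above (validating the checksum while the map
output is streamed to disk, rather than when it is first read) can be
illustrated with a minimal sketch. This uses plain JDK classes, not Hadoop's
actual IFile/shuffle code, and the class and method names are hypothetical:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.zip.CRC32;
import java.util.zip.CheckedInputStream;

// Illustrative sketch only: wrap the incoming map-output stream in a
// CheckedInputStream so the checksum is computed as the bytes are copied to
// disk, then compare it to the expected value before the reducer accepts the
// file. Corruption is then detected at shuffle time, not hours later.
public class OnDiskShuffleSketch {
    public static long copyWithChecksum(InputStream in, OutputStream out)
            throws IOException {
        CheckedInputStream cin = new CheckedInputStream(in, new CRC32());
        byte[] buf = new byte[4096];
        int n;
        while ((n = cin.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        // Checksum accumulated over every byte that was written out.
        return cin.getChecksum().getValue();
    }

    public static void main(String[] args) throws IOException {
        byte[] mapOutput = "map output bytes".getBytes("UTF-8");
        CRC32 expected = new CRC32();
        expected.update(mapOutput);

        ByteArrayOutputStream diskFile = new ByteArrayOutputStream();
        long actual = copyWithChecksum(
                new ByteArrayInputStream(mapOutput), diskFile);

        if (actual != expected.getValue()) {
            // In the real shuffle this would fail the fetch so the map
            // output can be re-fetched or the map re-run immediately.
            throw new IOException("corrupt map output detected during shuffle");
        }
    }
}
```

The key point is that the checksum costs one pass over bytes the reducer is
already copying, so the validation adds little overhead compared to the hours
lost when a corrupt file is first read at merge time.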



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
