[ https://issues.apache.org/jira/browse/HADOOP-153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12577162#action_12577162 ]

Sameer Paranjpye commented on HADOOP-153:
-----------------------------------------

The *order of magnitude* of the number of exceptions that the framework intends 
to handle affects the design a great deal. What is the scope of the problem we 
intend to solve here? I see two cases:
# the number of exceptions is small compared to the number of tasks in a job. 
In this scenario Enis' strategy makes a lot of sense: we generally assume that 
tasks are fine-grained enough that re-executing a handful of them is not a 
significant burden on job runtime or throughput.
# the number of exceptions is _O(num tasks)_. In this scenario, re-execution 
could cause job runtime to double (or worse), since every task could in 
principle be executed two or more times. If we set out to handle this case, 
then we'll need to keep enough state to enable each task to skip the offending 
record(s).

Perhaps we should attempt to resolve case 1 with task re-execution for now, 
since it represents a useful incremental step towards a more sophisticated 
solution, and it may well prove sufficient. One could in principle argue that 
if the number of exceptions is _O(num tasks)_, the problem is better handled at 
the application level. A sketch of the re-execution approach follows.
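
To make case 1 concrete, here is a minimal sketch of the re-execution approach 
in the spirit of Enis' strategy: a failed attempt reports the offset of the 
record that killed it, and the re-executed attempt skips that offset. Every 
name below (SkippingTaskRunner, RecordSource, reportBadOffset) is a 
hypothetical illustration, not part of the actual Hadoop API.

{code:java}
import java.util.HashSet;
import java.util.Set;

// Hypothetical illustration only; none of these types exist in Hadoop.
public class SkippingTaskRunner {

    // Minimal stand-ins for the input stream and the user's map function.
    interface RecordSource {
        boolean hasNext();
        long currentOffset();   // offset of the record about to be read
        Object next();
    }
    interface Mapper {
        void map(Object record);
    }

    // Offsets that killed a previous attempt, handed to this attempt
    // by the framework when the task is re-executed.
    private final Set<Long> badOffsets;

    public SkippingTaskRunner(Set<Long> previouslyFailedOffsets) {
        this.badOffsets = new HashSet<Long>(previouslyFailedOffsets);
    }

    public void run(RecordSource source, Mapper mapper) {
        while (source.hasNext()) {
            long offset = source.currentOffset();
            Object record = source.next();
            if (badOffsets.contains(offset)) {
                continue;   // skip the record that failed last time
            }
            try {
                mapper.map(record);
            } catch (RuntimeException e) {
                // Report the offending offset so the next attempt can
                // skip it, then let this attempt fail as usual.
                reportBadOffset(offset);
                throw e;
            }
        }
    }

    private void reportBadOffset(long offset) {
        // In a real implementation this would be reported to the JobTracker.
        System.err.println("bad record at offset " + offset);
    }
}
{code}

Note the cost profile this implies: each bad record costs at most one extra 
execution of its task, which is tolerable only while bad records are rare. 
Once they are _O(num tasks)_, the skip state would have to be persisted up 
front rather than rediscovered one failure at a time.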

> skip records that throw exceptions
> ----------------------------------
>
>                 Key: HADOOP-153
>                 URL: https://issues.apache.org/jira/browse/HADOOP-153
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: mapred
>    Affects Versions: 0.2.0
>            Reporter: Doug Cutting
>            Assignee: Devaraj Das
>             Fix For: 0.17.0
>
>
> MapReduce should skip records that throw exceptions.
> If the exception is thrown under RecordReader.next() then RecordReader 
> implementations should automatically skip to the start of a subsequent record.
> Exceptions in map and reduce implementations can simply be logged, unless 
> they happen under RecordWriter.write(). Cancelling partial output could be 
> hard, so such output errors will still result in task failure.
> This behaviour should be optional, but enabled by default. A count of errors 
> per task and job should be maintained and displayed in the web UI. Perhaps 
> if some percentage of records (>50%?) results in exceptions, the task 
> should fail. This would stop misconfigured or buggy jobs early.
> Thoughts?
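
As a rough illustration of the behaviour described in the issue (not the real 
org.apache.hadoop.mapred.RecordReader API; the Reader interface and its 
skipToNextRecord() method are assumptions), a reader wrapper could catch 
exceptions from next(), count them, resynchronize at the next record boundary, 
and fail once a majority of records have errored:

{code:java}
import java.io.IOException;

// Hypothetical sketch; Reader and skipToNextRecord() are assumptions,
// not the real Hadoop RecordReader interface.
public class SkippingReader {

    interface Reader {
        /** Returns the next record, or null at end of input. */
        Object next() throws IOException;
        /** Advances past the current (possibly corrupt) record. */
        void skipToNextRecord() throws IOException;
    }

    private final Reader reader;
    private long records = 0;   // would be surfaced as task/job counters
    private long errors = 0;

    public SkippingReader(Reader reader) {
        this.reader = reader;
    }

    /** Returns the next readable record, skipping ones that throw. */
    public Object next() throws IOException {
        while (true) {
            records++;
            try {
                return reader.next();
            } catch (RuntimeException e) {
                errors++;
                // Fail early once a majority of records have errored:
                // the job is probably misconfigured or the code buggy.
                if (records >= 100 && errors * 2 > records) {
                    throw new IOException(
                        errors + " of " + records + " records failed", e);
                }
                reader.skipToNextRecord();
            }
        }
    }
}
{code}

The `records >= 100` guard is only there so the wrapper does not fail on the 
very first bad record; the real threshold would presumably be configurable.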

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
