[
https://issues.apache.org/jira/browse/HADOOP-153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12571528#action_12571528
]
Devaraj Das commented on HADOOP-153:
------------------------------------
I am starting to look at this issue. I am inclined to handle it by keeping
track only of how RecordReader.next behaves. That would cover all cases -
Java, Streaming, and Pipes - in a generic manner. I am not much in favor of
providing hooks in the applications. Is that a requirement? Are there any
other requirements? Thoughts welcome.
> skip records that throw exceptions
> ----------------------------------
>
> Key: HADOOP-153
> URL: https://issues.apache.org/jira/browse/HADOOP-153
> Project: Hadoop Core
> Issue Type: New Feature
> Components: mapred
> Affects Versions: 0.2.0
> Reporter: Doug Cutting
> Assignee: Owen O'Malley
>
> MapReduce should skip records that throw exceptions.
> If the exception is thrown under RecordReader.next() then RecordReader
> implementations should automatically skip to the start of a subsequent record.
> Exceptions in map and reduce implementations can simply be logged, unless
> they happen under RecordWriter.write(). Cancelling partial output could be
> hard, so such output errors will still result in task failure.
> This behaviour should be optional, but enabled by default. A count of errors
> per task and job should be maintained and displayed in the web ui. Perhaps
> if some percentage of records (>50%?) results in exceptions, the task
> should fail. This would stop misconfigured or buggy jobs early.
> Thoughts?
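The skip-and-count behaviour proposed above could be sketched roughly as follows. This is only an illustrative standalone sketch, not Hadoop's actual RecordReader API: parsing an integer stands in for reading a record, and the class name, threshold guard, and minimum-sample count are all made up for the example.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the proposed behaviour: skip records whose
// processing throws, keep an error count, and fail the whole "task" early
// once more than 50% of the records seen have thrown an exception.
public class SkipBadRecords {
    // Require a minimum sample before applying the >50% failure rule,
    // so one early bad record does not kill the task. (Invented guard.)
    private static final int MIN_RECORDS_BEFORE_FAIL = 10;

    public static List<Integer> run(List<String> records) {
        List<Integer> out = new ArrayList<>();
        int errors = 0;
        int seen = 0;
        for (String r : records) {
            seen++;
            try {
                // Stand-in for RecordReader.next(): may throw on a bad record.
                out.add(Integer.parseInt(r));
            } catch (NumberFormatException e) {
                errors++; // log-and-skip, per the proposal
                if (seen >= MIN_RECORDS_BEFORE_FAIL && errors * 2 > seen) {
                    throw new RuntimeException(
                        "too many bad records: " + errors + " of " + seen);
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> ok = run(List.of("1", "x", "3"));
        System.out.println(ok + " (skipped " + (3 - ok.size()) + ")");
    }
}
```

A real implementation would hang this logic off the task runner around RecordReader.next and report the error counter to the web ui, but the control flow would be the same shape.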
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.